Posts

Paradox and pedagogy 2017-11-05T21:01:15.738Z · score: 19 (6 votes)
Non-market failures: inefficient networks 2017-11-05T04:10:21.376Z · score: 16 (4 votes)
Against naming things, and so on 2017-10-15T23:48:05.600Z · score: 48 (11 votes)
Exposition and guidance by analogy 2017-09-28T03:33:36.672Z · score: 6 (3 votes)
Unfair outcomes from fair tests 2017-09-26T23:48:47.316Z · score: 8 (3 votes)
Thinking on the page 2017-09-26T02:22:38.440Z · score: 13 (5 votes)
A self-experiment in training "noticing confusion" 2014-02-20T01:55:12.742Z · score: 38 (39 votes)
Rationality training: academic and clinical perspectives 2014-02-19T07:09:21.514Z · score: 13 (14 votes)
Book Review: How Learning Works 2014-01-19T20:45:58.339Z · score: 46 (39 votes)
Try more things. 2014-01-12T01:25:56.497Z · score: 45 (50 votes)

Comments

Comment by whales on How Popper killed Particle Physics · 2017-11-09T01:07:36.022Z · score: 7 (2 votes) · LW · GW

Maybe we're talking about different things, but on the page I'm looking at now, where I'm replying to the discussion of the link (https://www.lesserwrong.com/posts/vhAJ4DBXZukE7SNtq/how-popper-killed-particle-physics/), the only link to the actual article is still gjm's. In particular, the title of the blog post is not a link, although I would have expected it to be. To get to the actual article I have to click on the linkpost title in one of the other post listings (Featured/Frontpage/All). This happens for all link posts, in different browsers, on both mobile and desktop.

Comment by whales on Non-market failures: inefficient networks · 2017-11-05T17:39:19.260Z · score: 3 (1 votes) · LW · GW

Content note: This is a collection/expansion of stuff I've previously posted about elsewhere. I've gathered it here because it's semi-related to Eliezer's recent posts. It's not meant to be a response to the "inadequacy" toolbox or a claim to ownership of any particular idea, but only one more perspective people may find useful as they're thinking about these things.

Comment by whales on Continuing the discussion thread from the MTG post · 2017-10-25T19:06:19.600Z · score: 19 (5 votes) · LW · GW

For what it's worth, I was another (the other?) person who downvoted the comment in question early (having upvoted the post, mostly for explaining an unfamiliar interesting thing clearly).

Catching up on all this has been a little odd to me. I'm obviously not a culture lord, but also my vote wasn't about this question of "the bar" except (not that I would naturally frame it this way) perhaps insofar as I read CoolShirtMcPants as doing something similar to what you said you were doing---"here is my considered position on this; I encourage people to try it on and attend to specifically how it might come out as I imply"---and you as creating an impasse instead of recognizing that and trying to draw out more concrete arguments/scenarios/evidence. Or that even if CSMP wasn't intentionally doing that, a "bar" should ask that you treat the comment that way.

On one hand, sure, the situation wasn't quite symmetric. And it was an obvious, generic-seeming objection, surely already considered at least by the author and better-expressed in other comments. But on the other hand, it can still be worth saying for the sake of readers or for starting a more substantive conversation; CSMP at least tried to dig a little deeper. And in this kind of blogging I don't usually see one person's (pseudonymously or otherwise) staking out some position as stronger evidence than another's doing so. Neither should really get you further than deciding it's worth thinking about for yourself. This case wasn't an exception.

(I waffled on saying anything at all here because your referendum, if there is one, appears to have grown beyond this, and all this stuff about status seems to me to be a poor framing. But reading votes is a tricky business, so I can at least provide more information.)

Comment by whales on LDL 2: Nonconvex Optimization · 2017-10-21T05:56:03.762Z · score: 3 (1 votes) · LW · GW

Two more thoughts: the above is probably more common in [what I intuitively think of as] "physical" problems where the parameters have some sort of geometric or causal relationship, which is maybe less meaningful for neural networks?

Also, for optimization more broadly, your constraints will give you a way to wind up with many parameters that can't be changed to decrease your function, without requiring a massive coincidence. (The boundary of the feasible region is lower-dimensional.) Again, I guess not something deep learning has to worry about in full generality.
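A tiny numeric sketch (mine, not from the comment above) of that boundary effect: projected gradient descent on a simple quadratic with box constraints leaves a large fraction of parameters pinned at the bounds, where the gradient need not vanish---no massive coincidence required.

```python
import numpy as np

# Minimize f(x) = sum((x - c)^2) subject to 0 <= x <= 1, by gradient
# steps followed by projection onto the box. Components of the
# unconstrained optimum c that fall outside [0, 1] get pinned at the
# boundary, where the gradient is still nonzero.
rng = np.random.default_rng(0)
n = 1000
c = rng.uniform(-0.5, 1.5, size=n)  # unconstrained optimum, often outside the box

x = np.full(n, 0.5)
for _ in range(2000):
    grad = 2 * (x - c)
    x = np.clip(x - 0.1 * grad, 0.0, 1.0)  # gradient step, then project

on_boundary = (x == 0.0) | (x == 1.0)
print(f"{on_boundary.mean():.0%} of parameters sit on the constraint boundary")
```

With c drawn uniformly from [-0.5, 1.5], about half its components lie outside the feasible box, so roughly half the parameters converge to a bound rather than to a zero-gradient point.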

Comment by whales on LDL 2: Nonconvex Optimization · 2017-10-21T04:00:18.654Z · score: 10 (3 votes) · LW · GW

Hm. Thinking of this in terms of the few relevant projects I've worked on, problems with (nominally) 10,000 parameters definitely had plenty of local minima. In retrospect it's easy to see how. Saddles could be arbitrarily long, where many parameters basically become irrelevant depending on where you're standing, and the only way out is effectively restarting. More generally, the parameters were very far from independent. Besides the saddles, for example, you had rough clusters of parameters where you'd want all or none but not half to be (say) small in most situations. In other words, the problem wasn't "really" 10,000-dimensional; we just didn't know how or where to reduce dimensionality. I wonder how common that is.

Comment by whales on Against naming things, and so on · 2017-10-18T03:10:14.745Z · score: 8 (2 votes) · LW · GW

I think the main thing I want to say [besides my response to Oliver below] is that this post was not framed in my head as starting a conversation in response to your post, but as gesturing in the direction of some under-emphasized considerations as one contribution in a long-running conversation about rationalist jargon. Of course, I ended up opening with and only taking quotes from you, and now it looks the way it does, i.e. targeting your "bid" but somewhat askew. So that was a mistake, for which I apologize.

Also, I know I basically asked for your "actually a defeater" response, but I really was non-rhetorically hoping people would think about what I was leaning upon and accomplishing (or not) by using the Names that I chose throughout that might not align with their prior ideas about what the Names are for.

Comment by whales on Against naming things, and so on · 2017-10-18T02:53:43.844Z · score: 6 (1 votes) · LW · GW

Pretty much agreed. I might go beyond "provisional" to "disposable". I really do take maintaining fluidity and not fooling yourself to be more important/possible than creating common vocabulary or high-level unitary concepts or introspective handles [though I don't introspect verbally, so maybe I would say that]; I really do think the way the community treats words is a good lever for that.

(Of course, this is all very abstract, isn't a full elaboration of what I believe, and certainly has no force of argument. At best, I'm pointing towards a few considerations I could readily abstract out of the sum of my observations, in the hopes that people can recontextualize some of their reading with concerns along these lines.)

I'd also like to see someone try your last suggestion. (If nothing else, I might use it in a fiction project.)

Comment by whales on On the construction of beacons · 2017-10-18T01:34:32.891Z · score: 7 (3 votes) · LW · GW

I appreciate your outspokenness on these things. Writing like yours on EA has made me pause after having been resigned for a long time that these communities weren't (and maybe never were) growing towards my idealizations of them. I don't know how much we want the same things, and anyway I'm perhaps too much of an outsider with other commitments these days to make too much noise, but I'll continue to look forward to your posting.

Taking up your framework, I'm not sure how much of what I see is predatory behavior by sociopaths (though there is that, malicious or otherwise) versus ordinary selection pressure in a loose coalition of different sorts of geeks, some of whom may think they're the same sort. Either way, it seems like I've connected with more like-minded people by dimming my beacon even into obscurantism than otherwise.

Comment by whales on Against naming things, and so on · 2017-10-16T06:27:51.065Z · score: 24 (6 votes) · LW · GW

(I don't consider this rude at all, and will welcome your post-mulling thoughts should you choose to add them. I can also say more about where I'm coming from when I get the chance.)

Comment by whales on [deleted post] 2017-10-16T00:13:11.993Z

Yeah, my autocorrect guessed what he meant easily enough, but I'm convinced. I think I just needed to see someone else say this.

Comment by whales on [deleted post] 2017-10-15T17:58:52.358Z

Woah! That sounds very unusual---it might be valuable for you to talk about all that explicitly rather than write more like this post (which was presumably generated from your internalization of all that study, but which doesn't go out of its way to show it).

(Also, for what it's worth, I thought the title "Theodicy in Humans" was good---good enough for me to generate an approximation of the post before even reading it, although with slightly different context I'd have expected "theodicy" to be a derogatory analogy. And to bikeshed a bit, I might have used "theodicy for humans" [or maybe "of"], as you do in the text; it seems more accurate, and for your purposes it would make sense to use the title verbatim at least once.)

Comment by whales on [deleted post] 2017-10-15T16:59:00.723Z

Also in favor of not only reserving judgment but ideally deferring exposure until one can seriously evaluate things, You Can't Not Believe Everything You Read; and then there's the mere-exposure effect to worry about, especially from prolific authors or in environments with a lot of repetition. (This is again the weird thing where you have apparently opposite biases which show up in similar situations, and it may not be obvious which direction you'll be taken. In this case I'd guess it depends on one's initial disposition and the level of conscious attention the idea is getting. [In particular, "inferential distance" isn't the determinant---with the illusion of transparency, the gap can go unrecognized by either party and lead to unjustified agreement.] Luckily, one is led to similar reading/discussion policies either way.)

Comment by whales on Writing That Provokes Comments · 2017-10-05T19:48:26.053Z · score: 8 (4 votes) · LW · GW

Venue also matters a lot through the social context it brings. Individual WordPress blogs often feel like you're saying "this is where my writing lives; by commenting, you're coming into my house", which can be challenging to take lightly -- especially when you're talking about a neighborhood of individual blogs, few of which get regular comments. Meanwhile, social media is a weird mix of jokes and personal content with discussion-oriented ideas, where there's an uncertain rudeness in potentially burying someone with attention or notifications by Starting Discourse. And in both of these, if it's not controversy or gossip or dilettantism, then posting the most makes you king.

So I was/am hopeful about posting more to LW 2.0 largely for the sake of better defaults around "this is for having a conversation" -- both "formally" in responding directly to or building on the OP, and more "socially" or indirectly by contributing thoughts on the same subject, and in a venue with moderation and karma where things can bubble up without the speculative/social/everyone's-an-expert elements (or sheer consistent quantity).

I find that my writing seems to actively repel comments compared to stuff that gets comparably received by other metrics. I do try to go out of my way to write mostly on the rare occasions I have something unambiguously sensible or useful to contribute; it earns me a high upvote/downvote ratio, but little sense of how people are engaging with what I have to say.

At the same time, maybe this makes me part of the problem of silence on the best writing. I'm also interested in learning to be a better commenter, but I'm not someone who thinks they can or should always have something to say. For my part, I think this mostly indicates that I should comment more with thoughtful questions, but I'm very interested in you or anyone else fleshing out your "being a better commenter" open problem -- I think this is potentially more important for success here than writing the right kinds of posts.

Comment by whales on What are some Meta-Umeshsims? · 2017-10-03T03:40:26.453Z · score: 9 (3 votes) · LW · GW

Also from Scott, Malthusianisms and Anthropicisms.

Comment by whales on Thinking on the page · 2017-09-30T21:59:48.043Z · score: 2 (1 votes) · LW · GW

I appreciate this perspective! My first instinct is to zoom out from stock phrases to entire ideas or arguments while drafting (when everything is working well, sentences or paragraphs get translated atomically like this), then use 'close reading' as an editing tactic. But you're right that zooming in to find the exact word when stuck on the page can also be very focusing (as it were). And there's a lot of room for interplay between the two approaches, as far as there's even a clean separation between self-expression and self-editing in the first place.

Comment by whales on Open thread, May 29 - June 4, 2017 · 2017-06-01T00:35:48.212Z · score: 0 (0 votes) · LW · GW

I've started cleaning up and posting some old drafts on my blog. I've drifted away, but some of them may be of interest to people still here. Most directly up this alley so far would be this post recommending people read Trial By Mathematics.

Comment by whales on 16 types of useful predictions · 2015-04-11T00:47:06.505Z · score: 2 (1 votes) · LW · GW

I like this post. I lean towards skepticism about the usefulness of calibration or even accuracy, but I'm glad to find myself mostly in agreement here.

For lots of practical (to me) situations, a little bit of uncertainty goes a long way in how I actually decide what to do. It doesn't really matter how much uncertainty there is, or how well I can estimate it. It's better for me to just be generally humble and make contingency plans. It's also easy to imagine that being well-calibrated (or knowing that you are) could demolish biases that are actually protective against bad outcomes, if you're not careful. If you are careful, sure, there are possible benefits, but they seem modest.

But making and testing predictions seems more than modestly useful, whether or not you get better (or better calibrated) over time. I find I learn better (testing effect!) and I'm more likely to notice surprising things. And it's an easy way to lampshade certain thoughts/decisions so that I put more effort into them. Basically, this:

Or in other other words: the biggest problem with your predictions right now is that they don't exist.

To be more concrete, a while back I actually ran a self-experiment on quantitative calibration for time-tracking/planning (your point #1). The idea was to get a baseline by making and resolving predictions without any feedback for a few weeks (i.e. I didn't know how well I was doing--I also made predictions in batches so I usually couldn't remember them and thus target my prediction "deadlines"). Then I'd start looking at calibration curves and so on to see if feedback might improve predictions (in general or in particular domains). It turned out after the first stage that I was already well-calibrated enough that I wouldn't be able to measure any interesting changes without an impractical number of predictions, but while it lasted I got a moderate boost in productivity just from knowing I had a clock ticking, plus more effective planning from the way predictions forced me to think about contingencies. (I stopped the experiment because it was tedious, but I upped the frequency of predictions I make habitually.)
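For anyone curious about the bookkeeping, here's a minimal sketch of how the calibration curve gets computed (the data here is made up, not from my actual experiment): bin predictions by stated probability and compare each bin's stated probability to its observed frequency.

```python
from collections import defaultdict

# Hypothetical (stated probability, task finished on time?) pairs.
predictions = [
    (0.9, True), (0.9, True), (0.9, False), (0.8, True), (0.8, True),
    (0.6, True), (0.6, False), (0.5, False), (0.5, True), (0.3, False),
]

# Group outcomes by stated probability.
bins = defaultdict(list)
for p, outcome in predictions:
    bins[round(p, 1)].append(outcome)

# Perfect calibration means observed frequency matches stated probability
# in each bin (up to sampling noise, which is why you need many predictions).
for p in sorted(bins):
    outcomes = bins[p]
    observed = sum(outcomes) / len(outcomes)
    print(f"stated {p:.0%}: observed {observed:.0%} over {len(outcomes)} predictions")
```

The sampling-noise point is exactly why I couldn't measure interesting changes: with only a handful of predictions per bin, the observed frequencies swing too widely to distinguish good calibration from slightly better calibration.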

Comment by whales on Compilation of currently existing project ideas to significantly impact the world · 2015-03-08T19:33:55.310Z · score: 5 (5 votes) · LW · GW

If I can introduce a problem domain that doesn't get a lot of play in these communities but (I think) should:

End-of-life healthcare in the US seems like a huge problem (in terms of cost, honored preferences, and quality of life for many people) that's relatively tractable for its size. The balance probably falls in favor of making things happen rather than researching technical questions, but I'm hoping it still belongs here.

There's a recent IOM report that covers the presently bleak state of affairs and potential ways forward pretty thoroughly. One major problem is that doctors don't know their patients' care preferences, resulting in a bias towards acute care over palliative care, which in turn leads to unpleasant (and expensive) final years. There are a lot of different levers in actual care practices, advance care planning, professional education/development, insurance policies, and public education. I might start with the key findings and recommendations (PDF) and think about where to go from there. There's also Atul Gawande's recent book Being Mortal, which I've yet to read but people seem excited about. Maybe look at what organizations like MyDirectives and Goals of Care are doing.

This domain probably has a relative advantage in belief- or value-alignment for people who think widely available anti-aging is far in the future or undesirable, although I'm tempted to argue that in a world with normalized life extension, the norms surrounding end-of-life care become even more important. The problem might also be unusually salient from some utilitarian perspectives. And while I've never been sure what civilizational inadequacy means, people interested in it might be easier to sell on fixing end-of-life care.

Comment by whales on Open thread, Feb. 23 - Mar. 1, 2015 · 2015-02-27T05:34:13.645Z · score: 2 (2 votes) · LW · GW

You can predict how long tasks/projects will take you (stopwatch and/or calendar time). Even if calibration doesn't generalize, it's potentially useful on its own there. And while you can't quite mass-produce questions/predictions, it's not such a hassle to rack up a lot if you do them in batches. Malcolm Ocean wrote about doing this with a spreadsheet, and I threw together an Android todo-with-predictions app for a similar self experiment.

Comment by whales on Superintelligence 5: Forms of Superintelligence · 2014-10-14T07:23:33.876Z · score: 5 (5 votes) · LW · GW

I measured science and technology output per scientist using four different lists of significant advances, and found that significant advances per scientist declined by 3 to 4 orders of magnitude from 1800 to 2000. During that time, the number of scientific journals has increased by 3 to 4 orders of magnitude, and a reasonable guess is that so did the number of scientists.

I'd be really interested in reading more about this.

Comment by whales on What are your contrarian views? · 2014-09-17T18:27:23.196Z · score: 2 (2 votes) · LW · GW

Yeah, that happened when I edited a different part from my phone. Thanks, fixed.

Comment by whales on What are your contrarian views? · 2014-09-17T09:20:52.948Z · score: 6 (6 votes) · LW · GW

See this tumblr post for an example of Ozy expressing dissatisfaction with Scott's lack of charity in his analysis of SJ (specifically in the "Words, Words, Words" post). My impression is that this is a fairly regular occurrence.

You might be right about him not having updated. If anything it seems that his updates on the earlier superweapons discussion have been reverted. I'm not sure I've seen anything comparably charitable from him on the subject since. I don't follow his thoughts on feminism particularly closely, so I could easily be wrong (and would be glad to find I'm wrong here).

Comment by whales on A self-experiment in training "noticing confusion" · 2014-08-24T19:55:27.393Z · score: 2 (1 votes) · LW · GW

I wrote down a handful as I was doing this, but not all of them. There were a couple about navigation (where rather than say "well, I don't know where I am, I'll just trust the group" I figured out how I was confused about different positions of landmarks). I avoided overbaking my cookies when the recipe had the wrong time written down. Analytics for a site I run pointed to a recent change causing problems for some people, and I saw the (slight) pattern right away but ignored it until it got caught on my confusion hook. It's also a nice hook for asking questions in casual conversations. People are happy to explain why they like author X but not the superficially similar author Y I've heard them complain about before, for example.

Comment by whales on A self-experiment in training "noticing confusion" · 2014-08-23T15:55:45.530Z · score: 1 (1 votes) · LW · GW

Thanks, I'm glad you liked it!

Did someone link this recently? It seems to have gotten a new burst of votes.

Comment by whales on Open thread, 11-17 August 2014 · 2014-08-11T18:20:48.212Z · score: 2 (2 votes) · LW · GW

There are concept inventories in a lot of fields, though they vary in quality and usefulness. The best known is the Force Concept Inventory for first-semester mechanics, which basically aims to test how Aristotelian/Newtonian a student's thinking is. Any physicist can point out a dozen problems with it, but it seems to very roughly measure what it claims to measure.

Russ Roberts (host of the podcast EconTalk) likes to talk about the "economic way of thinking" and has written and gathered links about ten key ideas like incentives, markets, externalities, etc. But he's relatively libertarian, so the ideas he chose and his exposition will probably not provide a very complete picture. Anyway, EconTalk has started asking discussion questions after each podcast, some of which aim to test basic understanding along these lines.

Comment by whales on Experiments 1: Learning trivia · 2014-07-20T19:45:22.335Z · score: 3 (3 votes) · LW · GW

If anyone has already posted any similar posts, then I would really appreciate any links.

Off the top of my head, Swimmer963 wrote about her experiences trying meditation, and I wrote about trying to notice confusion better. Gwern has run more serious self-experiments, and he talks about a bunch of them in the context of value of information here.

Comment by whales on "Dialectics"? · 2014-07-12T20:21:55.732Z · score: 0 (0 votes) · LW · GW

I find this idea (or a close relative) a useful guide for resolving a heuristic explanation or judgment into a detailed, causal explanation or consequentialist judgment. If someone draws me an engine cycle that creates infinite work out of finite heat (Question 5), I can say it violates the laws of thermodynamics. Of course their engine really is impossible. But there's still confusion: our explanations remain in tension because something's left unexplained. To fully resolve this confusion, I have to look in detail at their engine cycle, and find the error that allows the violation.

Principled explanations, especially about human behavior or society, tend to come into tension in a similar way. That tension can similarly point the way to detailed, causal explanations that will dissolve the question. For example, you say that an idea meeting a counter-idea may well fail to generate facts, which is contrary to your understanding of dialectics. It's not very useful to merely state these ideas in opposition to each other, but there's something to be learned by looking at where they conflict and why.

So in this case, where you doubt that this process generates facts, consider how it might or might not reliably do so. One way it could do so is if there were a recipe for turning the conflict into an opportunity for learning, like "look for detailed causal mechanisms where the two big ideas directly conflict." One way it might fail is if people who held each one of the two ideas entrenched themselves as opposed to the other, and everyone continued to simply talk past one another without attempting to understand. Now you've refined your heuristic so you can better judge how well this will work in individual cases, and you can iterate.

I think of the moral version of this as a generalization of the argument from marginal cases against giving moral standing to humans alone (i.e. that there's no value-relevant principle that selects all and only humans). The generalization is to come at this from both sides of a debate, and say that you can expect any principled judgment to fail on marginal cases. The content of your principle is in large part how it treats those marginal cases. From this perspective, you study the marginal cases to improve your understanding of your values, rather than try to use heuristics to decide the marginal cases. (Sometimes this perspective is useful, and sometimes it's not. Hmm, why is that?)

Comment by whales on Open thread, 30 June 2014- 6 July 2014 · 2014-07-05T19:25:30.074Z · score: 0 (0 votes) · LW · GW

Yes, that's a good example, thanks.

Comment by whales on Open thread, 30 June 2014- 6 July 2014 · 2014-07-01T08:22:30.921Z · score: 6 (6 votes) · LW · GW

I've collected some quotes from Beyond Discovery, a series of articles commissioned by the National Academy of Sciences from 1997 to 2003 on paths from basic research to useful technology. My comments there:

The articles (each around 8 pages) are roughly popular-magazine-level accounts of variable quality, but I learned quite a bit from all of them, particularly from the biology and medicine articles. They're very well written, generally with input from the relevant scientists still living (many of them Nobel laureates). In particular I like the broad view of history, the acknowledged scope of the many branches leading to any particular technology, the variety of topics outside the usual suspects, the focus on fairly recent technology, and the emphasis (bordering on propaganda) on the importance and unpredictability of basic research. It seems to me that they filled an important gap in popular science writing in this way.

I'm interested in histories of science that are nonstandard in those and other ways (for example, those with an unusual focus on failures or dead ends), and I'm slowly collecting some additional notes and links at the bottom of that page. Do you have any recommendations? Or other comments?

Comment by whales on Open thread, 23-29 June 2014 · 2014-06-29T23:10:13.350Z · score: 2 (2 votes) · LW · GW

Avpr. V'q abgr gung gur ebyr bs "srzvavfgf" va guvf zlgu-znxvat vf fbzrjung nanybtbhf (gubhtu boivbhfyl abg cresrpgyl fb) gb gur ebyr bs "gur zrqvpny rfgnoyvfuzrag" va cebzhytngvat gur vqrn bs ovplpyr snpr va gur svefg cynpr.

Comment by whales on Separating the roles of theory and direct empirical evidence in belief formation: the examples of minimum wage and anthropogenic global warming · 2014-06-27T23:42:48.952Z · score: 2 (2 votes) · LW · GW

Theory also influences what data you consider in the first place. (Are you looking at your own local weather, global surface temperatures, stratospheric temperatures, ocean temperatures, extreme weather events, Martian climate, polar ice, or the beliefs and behavior of climatologists, and over what time scales and eras?) See also philosophy of science since at least Kuhn on theory-laden observation: http://plato.stanford.edu/entries/science-theory-observation/

Comment by whales on Open thread, 23-29 June 2014 · 2014-06-25T06:37:01.927Z · score: 1 (1 votes) · LW · GW

They address this in footnote 4: they're just deriving that the amplitudes squared should be interpreted as probabilities using quantum mechanics as defined, which includes unitary evolution and all that.

You could try the same thing with a QM variant with different mathematical structure, although you might be interested to know that linear transformations that preserve l^p norm for p other than 2 are boring (generalized permutation matrices). So you wouldn't be able to evolve your orthogonal environmental states into the right combinations of identical environments + coin flips. There also are other reasons why p = 2 is special. Scott Aaronson has written about this (and also linearity and the use of complex numbers) in the context of whether quantum mechanics is an island in theoryspace.
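A quick numeric illustration of the generalized-permutation point (my sketch, not from the paper under discussion): a generic rotation preserves the l^2 norm but not l^1 or l^4, while a signed permutation matrix (a simple instance of a generalized permutation matrix) preserves every l^p norm.

```python
import numpy as np

theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])  # generic rotation
P = np.array([[0.0, -1.0],
              [1.0,  0.0]])                      # signed permutation

def lp(x, p):
    """l^p norm of a vector."""
    return np.sum(np.abs(x) ** p) ** (1 / p)

v = np.random.default_rng(1).normal(size=2)

print(np.isclose(lp(R @ v, 2), lp(v, 2)))  # True: rotations preserve l^2
print(np.isclose(lp(R @ v, 1), lp(v, 1)))  # False for a generic v and theta
print(np.isclose(lp(P @ v, 1), lp(v, 1)))  # True: permuting/flipping entries
print(np.isclose(lp(P @ v, 4), lp(v, 4)))  # True: ...preserves any l^p norm
```

This is just the 2-dimensional shadow of the structural claim: for p other than 2, the only norm-preserving linear maps shuffle and rescale coordinates, which is too rigid to evolve orthogonal environment states into the needed superpositions.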

Going a bit deeper: it seems like all of the work is done by factoring out the environment. That is, they identify unitary transformations of the environment as producing epistemically equivalent states, but why shouldn't non-unitary transformations also be epistemically equivalent, whether or not unitary evolution is what happens in quantum mechanics? They have to leave the environment states orthogonal since that's assumed by decoherence, but why not (say) just multiply one of those environment states by an arbitrary number and derive any probability you want (i.e. why shouldn't the observer be indifferent to the relative measure of environment branches, since the environment is supposed to be independent, and then why not absorb any coefficients you like into the environment part)?

The answer is that you can't think of non-unitary transformations as acting independently on one part of a system, and that this is also part of the way quantum mechanics is specified. Given the mathematics of quantum mechanics, it only makes sense to talk about two parts of a wavefunction as independent under unitary transformations of the individual parts. See Appendix B of their companion paper, and think about what happens if you replace U_B with something non-unitary in equation B.4.

Comment by whales on False Friends and Tone Policing · 2014-06-18T21:48:30.004Z · score: 21 (21 votes) · LW · GW

I'd add that this kind of misunderstanding is frequently mutual; it's generally not the case that one party is sensitive to tone and the other is immune. The version in which someone takes an expression of feeling as an attempt to shame them into silence or otherwise limit allowable discourse is more or less the same failure mode.

Perhaps I say something, unaware that someone with different experiences and perspective might hear it differently, and it makes you mildly uncomfortable (somewhat like your examples). You try to communicate what you're feeling, perhaps intending only to provide me with more detailed information about the kind of reaction I'm provoking and why (some version of the Emotions As Inputs To Rationality approach). There may be good reasons for your reaction: for example, maybe you've heard things like that before from people who caused related harms, and you want to make sure I'm not likely to hurt anyone or normalize harmful behavior in others.

But then I take your expression of feeling as an anti-rational rhetorical move meant to silence me, because that's a thing that some people do using the same language that you used. Then my following plea for dispassionate rationality and a return to the details of the argument gets heard as dismissive/disrespectful and nitpicking, because, well, you know. And so on back and forth.

(It's also, importantly, not always the case that these are mere misunderstandings. Even if I didn't mean something a certain way, you can still be right that it was harmful to say or that it's a sign that I might cause harm. And even if you're not trying to silence me, it could conceivably be the case that by expressing your feelings you weakened our discourse, although I'm not sure I've ever seen that happen.)

Comment by whales on Group Rationality Diary, June 16-30 · 2014-06-17T04:47:09.015Z · score: 1 (1 votes) · LW · GW

I've had a similar experience with wishlists. There are some worthwhile corollaries: rather than follow interesting-looking links as you encounter them, open them in new tabs or add them to a read-later list. Or rather than look up everything you have a passing curiosity about, or switch to whatever task catches your immediate attention, add a note to yourself in your GTD/whatever system. If you're like me, your immediate desire will be satisfied by the knowledge that you'll get to it soon if it's important. And when you get around to reviewing these things, you'll be in a more reflective mode and will notice that many of these things are not in fact worth your time.

There's the same caveat: avoiding these things (like sources of potentially worthless links) in the first place might be a better solution for you (depending on density of chaff, to what extent lists and tab explosions stress you out, how likely you are to responsibly prune these things, whether you'll still capture the important things without universal capture, and so on). Try both, decide for yourself.

Comment by whales on Open thread, 9-15 June 2014 · 2014-06-11T20:39:50.273Z · score: 15 (15 votes) · LW · GW

I have no idea how likely it is, but an alternative explanation is that the vote counts were first converted to percentages to one decimal place, then someone else converted them back to absolute numbers for this announcement.
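A minimal sketch of the round-trip I have in mind, using made-up tallies (the function name and numbers are mine, purely for illustration): rounding a count to a one-decimal percentage and then converting back doesn't always recover the original number.

```python
def roundtrip(count, total):
    """Round a vote count to a one-decimal percentage, then convert back."""
    pct = round(100 * count / total, 1)   # e.g. 50.1 (one decimal place)
    return round(pct / 100 * total)       # reconstructed absolute count

# Hypothetical tallies, chosen only to illustrate the effect.
print(roundtrip(863, 1721))  # 862 -- the rounding step loses information
print(roundtrip(500, 1000))  # 500 -- sometimes the round trip is exact
```

So slightly "off" vote counts in an announcement would be exactly what this kind of double conversion produces.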

Comment by whales on June 2014 Media Thread · 2014-06-01T20:15:42.371Z · score: 1 (1 votes) · LW · GW

Failed theories of superconductivity. My favorite part:

The second idea proposed in 1932 by Bohr and Kronig was that superconductivity would result from the coherent quantum motion of a lattice of electrons. Given Bloch’s stature in the field, theorists like Niels Bohr were eager to discuss their own ideas with him. In fact Bohr, whose theory for superconductivity was already accepted for publication in the July 1932 issue of the journal “Die Naturwissenschaften”, withdrew his article in the proof stage, because of Bloch’s criticism (see Ref.[20]). Kronig was most likely also aware of Bloch’s opinion when he published his ideas[22]. Only months after the first publication he responded to the criticism made by Bohr and Bloch in a second manuscript[23]. It is tempting to speculate that his decision to publish and later defend his theory was influenced by an earlier experience: in 1925 Kronig proposed that the electron carries spin, i.e. possesses an internal angular momentum. Wolfgang Pauli’s response to this idea was that it was interesting but incorrect, which discouraged Kronig from publishing it. The proposal for the electron spin was made shortly thereafter by Samuel Goudsmit and George Uhlenbeck[29]. Kronig might have concluded that it is not always wise to follow the advice of an established and respected expert.

"History of what didn't work" seems like an important genre, for example if you want help avoiding hindsight/survivorship biases. Are there other good examples? It seems a lot of histories of science impose a false sense of direction or inevitability and don't cover many dead ends if any; all I can think of are some biographies that cover a lone genius's missteps on his way to the true theory.

Comment by whales on Positive Queries - How Fetching · 2014-04-30T05:17:27.671Z · score: 2 (2 votes) · LW · GW

"Be careful" is another good example of an instruction that doesn't really help. The default interpretation seems to be "move slowly and with intense concentration," which can lead to tunnel vision or a failure to act decisively. How to better cash it out depends on the task, but it's often an improvement to promote situational awareness by frequently asking what you expect to happen next and how it will go wrong. For example, "drive defensively" rather than "drive carefully."

Comment by whales on Be comfortable with hypocrisy · 2014-04-09T06:18:45.444Z · score: 0 (0 votes) · LW · GW

I'm not sure if I agree with this characterization of the current political climate; in any case, that's not the point I'm interested in. I'm also not interested in moral relativism.

As an aside, then, if anyone is interested in the sort of thing Stephenson is possibly referring to, David Foster Wallace's essay E Unibus Pluram: Television and U.S. Fiction (1993, two years before The Diamond Age) is a classic. In DFW's version, hypocrisy was the monarch of vices for a time, although discourse was not a matter of simply pointing it out (which still required the kind of positive statement untenable to a jaded relativist) so much as satirizing it. But that kind of irony was co-opted, leaving people not only unable to take a positive moral stand but now also ineffectual in the only critique remaining. He suggested a return to sincere, positive values:

And the rebellious irony in the best postmodern fiction wasn't only credible as art; it seemed downright socially useful in its capacity for what counterculture critics call "a critical negation that would make it self-evident to everyone that the world is not as it seems." [...] Irony in sixties art and culture started out the same way youthful rebellion did. It was difficult and painful, and productive—a grim diagnosis of a long-denied disease. The assumptions behind this early postmodern irony, on the other hand, were still frankly idealistic: that etiology and diagnosis pointed toward cure; that revelation of imprisonment yielded freedom.

[...]

Rebels are great at exposing and overthrowing corrupt hypocritical regimes, but seem noticeably less great at the mundane, non-negative tasks of then establishing a superior governing alternative. Victorious rebels, in fact, seem best at using their tough cynical rebel skills to avoid being rebelled against themselves—in other words they just become better tyrants.

And make no mistake: irony tyrannizes us.

[...]

The next real literary "rebels" in this country might well emerge as some weird bunch of "anti-rebels," born oglers who dare to back away from ironic watching, who have the childish gall to endorse single-entendre values. [...] The new rebels might be the ones willing to risk the yawn, the rolled eyes, the cool smile, the nudged ribs, the parody of gifted ironists, the "How banal." Accusations of sentimentality, melodrama. Credulity. Willingness to be suckered by a world of lurkers and starers who fear gaze and ridicule above imprisonment without law. Who knows.

Comment by whales on Open Thread March 31 - April 7 2014 · 2014-04-05T22:23:22.895Z · score: 3 (3 votes) · LW · GW

Is there a consensus on the account of unemployment and inflation F. A. Hayek provides in his Nobel Lecture (1974)? I'm sympathetic to the abstract philosophy-of-science considerations he argues there, but I don't know enough (anything) about economics to say whether he's using that account to substantiate those considerations, or he's using those considerations to obliquely promote a controversial account. Here's an excerpt:

The theory which has been guiding monetary and financial policy during the last thirty years, and which I contend is largely the product of such a mistaken conception of the proper scientific procedure, consists in the assertion that there exists a simple positive correlation between total employment and the size of the aggregate demand for goods and services; it leads to the belief that we can permanently assure full employment by maintaining total money expenditure at an appropriate level. Among the various theories advanced to account for extensive unemployment, this is probably the only one in support of which strong quantitative evidence can be adduced. I nevertheless regard it as fundamentally false, and to act upon it, as we now experience, as very harmful.

[...]

Let me illustrate this by a brief sketch of what I regard as the chief actual cause of extensive unemployment - an account which will also explain why such unemployment cannot be lastingly cured by the inflationary policies recommended by the now fashionable theory. This correct explanation appears to me to be the existence of discrepancies between the distribution of demand among the different goods and services and the allocation of labour and other resources among the production of those outputs...

Comment by whales on Rationality Quotes April 2014 · 2014-04-02T01:49:57.871Z · score: 22 (24 votes) · LW · GW

He said:

When you play bridge with beginners—when you try to help them out—you give them some general rules to go by. Then they follow the rule and something goes wrong. But if you'd had their hand you wouldn't have played the thing you told them to play, because you'd have seen all the reasons the rule did not apply.

from The Last Samurai by Helen DeWitt

Comment by whales on April 2014 Media Thread · 2014-04-01T19:44:09.052Z · score: 1 (1 votes) · LW · GW

The Last Samurai (2000) by Helen DeWitt. (Not related to the film with Tom Cruise.) About genius, rationality, art, and their limits, among other things. From one perspective it's both an argument about and an example of the creation and appreciation of art being valuable and exciting. Highly recommended for LWers.

If you want a better idea of the book, try the summary on Amazon. Save the horrifying Wikipedia book report for after you've read the novel if you want a good laugh.

Comment by whales on Increasing the pool of people with outstanding accomplishments · 2014-03-29T06:12:50.723Z · score: 3 (3 votes) · LW · GW

We're interested in impact in other contexts as well, but we know less about the subject. We're interested in learning more.

I'd hesitate to call your estimate of the social value you'll generate a lower bound, as you do, if you're not sure about the value of the invisible/conventional work you might be persuading people away from. It seems like most of what you're doing and planning should give a boost to any kind of achievement, but I get the sense that much of the Effective Altruist community underestimates the marginal impact of an exceptional person with a strategic mindset and altruistic leanings in a "conventional" career like engineering, management, engineering/management consulting, industrial or basic research, medicine (and likely law and others, though I have less of an idea there). (You don't seem to rely on it, but I especially don't think replaceability is the knockdown argument many people treat it as here.)

Comment by whales on Two arguments for not thinking about ethics (too much) · 2014-03-28T05:13:18.525Z · score: 3 (3 votes) · LW · GW

I think that attempting to come up with a verbal formalization of our underlying logic and then doing what that formalization dictates is akin to "playing baseball with verbal probabilities"...

I wonder if the extent to which one thinks in words is anti-correlated with sharing that intuition.

I'm a mostly non-verbal thinker and strongly in favor of your arguments. On the other hand, I once dismissed the idea of emotional vocabulary, feeling that it was superfluous at best, and more likely caused problems via reductive, cookie-cutter introspection. Why use someone else's fixed terminology for my emotional states, when I have perfectly good nonverbal handles on them? I figured out later that some people have trouble distinguishing between various versions of "feeling bad" (for example), and that linguistic handles can be really helpful for them in understanding and responding to those states. (That also made me more open to supplementing my own introspection with verbal labels.)

I don't think that kind of difference really bears on your arguments here, but I wouldn't be surprised if there were a typical-mind thing going on in the distribution of underlying intuitions.

Comment by whales on Terrorist baby down the well: a look at institutional forces · 2014-03-19T20:40:00.274Z · score: 2 (2 votes) · LW · GW

I made a related argument recently:

A theory that doesn’t account for detailed behavior is an approximation, and even in scientific domains, you can find conflicting approximations. When that happens—and if you’re not doing science, it’s “when,” not “if”—if you want to keep using your approximation, you have to use the details of the situation to explain why your approximation is valid. Your best defense against reductio ad absurdum, against Proving Too Much, is casuistry. Expect things to be complex, expect details to matter. Don’t ascribe intention or agency to abstract concepts and institutions. Look for chains of cause and effect. Look at individual moving parts and the forces acting on them. Make empirical predictions, and look for unintended empirical predictions. Ask what the opposite principle explains, and find the details that make those explanations compatible.

Comment by whales on What are you working on? March / April 2014 · 2014-03-17T05:18:25.549Z · score: 3 (3 votes) · LW · GW

I've been writing more, and posting some of it online since January, hoping to get broader feedback. The four articles I posted here seemed well-received, although they didn't generate much discussion. More have gone up on my personal site, including some self-indulgent fiction. Think Star Maker fanfiction, but only of the pedantic, moralizing parts, not of the wonder-and-terror-inspiring parts.

Most recently, I gathered some thoughts I'd scattered into recent comments and tweets for an essay on "principled" reasoning. It's probably relevant to LW interests, but I'm not cross-posting it here because I'm not sure LW needs more of that kind of meta-discourse. (I've aimed to make top-level posts only based on things I've actually done, for that reason.)

Comment by whales on Reference Frames for Expected Value · 2014-03-16T21:57:05.314Z · score: 0 (0 votes) · LW · GW

Right, it seems kind of strange to declare that you're considering only states of the world in your decisions, but then to treat judgments of right and wrong as a deontological layer on top of that where you consider whether the consequentialist rule was followed correctly. But that does seem to be a mainstream version of consequentialism. As far as I can tell, it mostly leads to convoluted, confused-sounding arguments like the above and the linked talk by Neiladri Sinhababu, but maybe I'm missing something important.

Comment by whales on In favour of terseness · 2014-03-08T22:07:36.737Z · score: 5 (5 votes) · LW · GW

It seems like you have several separate things in mind: readability, information density, arguments masquerading as true causes of beliefs, trustworthiness of "experienced rationalists," and the value of the "main point" vs disclaimers and qualification. Do you have an example post in mind, and specific suggestions for an improved version? I'm not sure if I'm about to respond to you, or just ramble. I understand that it's hard to call someone out without being mean, but these meta discussions seem to go nowhere without specific examples. (That said, the rest of my comment is not any better in this regard. Also, feel free to use any of my posts or comments for target practice.)

I agree that LW falls short on readability. Most people, LW posters included, are not good writers to begin with. Conciseness is one absent virtue among many. On the other hand, extremely information-dense texts can also be unreadable. And conversational, forceful, and polemical styles -- which are easier and more entertaining to read than academic styles heavy with caveats -- tend to accompany lower epistemic standards. It's possible to be highly readable, informative, and rigorously correct, but it's hard. Littering your claims with "I think"s and "probably"s is a poor solution. Even if your main worry is coming off as too authoritative, such ugly filler qualifying language can be replaced with specific qualifications, possibly in a preface or footnote.

Your "murder is wrong" example is a poorly-constructed sentence, sure, for reasons beyond the above. But the details and qualifications are the real content of that statement. I don't think that's because "murder is wrong" is an uncharacteristically content-free claim. For any basic principle or sweeping generalization, there will be cases where it obviously works, cases where it obviously doesn't, and the real information is in where you draw the line. (And that statement applies to itself. I may need to elaborate in a separate post.)

I already see over-reliance on simple/abstract/principled arguments and beliefs as a weakness of LW discourse. This is a shame, because people here should have a huge advantage in terms of consequentialist reasoning and the ability to recognize and discuss tradeoffs without knee-jerk responses.

Regarding your other points, I agree that people often present arguments which do not include the true causes of their beliefs, and that this is bad. I also (relatedly) have confidence in few enough people here that a mere statement of their beliefs would be informative.

Comment by whales on Open Thread: March 4 - 10 · 2014-03-05T04:41:23.148Z · score: 8 (8 votes) · LW · GW

These are interesting questions. I think the keyword you want for "hash collisions" is interference. Here's a more helpful overview from an education perspective: Learning Vocabulary in Lexical Sets: Dangers and Guidelines (2000). It mostly talks about semantic interference, but it mentions some other work on similar-sounding and similar-looking words.

Comment by whales on A self-experiment in training "noticing confusion" · 2014-02-22T04:56:30.160Z · score: 4 (4 votes) · LW · GW

Thanks! Hardly a nitpick, I should really know better. It looks especially bad that my laziness/carelessness led to overstated results. 150 is the correct number of counts, and I agree with your calculation. Embarrassingly, I also screwed up the p-value for the sleep correlation, [EDIT] which I retracted briefly but now have fixed.

Comment by whales on Open Thread for February 3 - 10 · 2014-02-04T08:01:44.636Z · score: 11 (11 votes) · LW · GW

Off the top of my head, some good top-level posts touching on this area: How to understand people better (plus isaacschlueter's particularly good comment) and Alicorn's Luminosity sequence. Searching gives maybe a partial match for How to Be Happy, which cites some studies on training empathy and concludes that little is scientifically known about it--still, I think a top-level post on what is known would be welcome. Swimmer963's post on emotional-regulation research is nice.

Mindfulness is something else that comes up pretty regularly. Meditation trains metacognition and Overcoming suffering are pretty good examples.

CFAR also places more explicit emphasis on emotional awareness, and that sometimes comes up in the group rationality diaries.

I think one reason that these topics are relatively neglected is that people seem to develop social skills and emotional awareness in pretty idiosyncratic ways. Still, LW seems to accept more personal accounts, like this post on a variation on the CBT technique of labeling. So it seems worthwhile to post things along those lines.