Posts

Meetup : LessWrong/HPMoR Harvard 2013-04-06T19:24:44.217Z
Want to help me test my Anki deck creation skills? 2013-01-22T22:12:59.603Z
Draft: Get Lucky 2012-08-07T06:26:13.809Z
Meetup : LessWrong Megameetup 2012-01-30T19:33:02.904Z
East Coast Megameetup II: Request for Talks 2012-01-05T03:23:18.515Z
East Coast Megameetup II: Electric Boogalloo 2011-12-23T23:24:16.633Z
Anti-Akrasia Tactics Discussion 2011-11-30T01:00:02.187Z
[Request] Feedback on my Writing 2011-10-30T03:58:38.870Z
The Protagonist Problem 2011-10-23T03:06:46.507Z
Meetup : UMD Calibration Games 2011-10-12T18:40:25.631Z
Thinking in Bayes: Light 2011-10-10T04:08:11.573Z
Neural Correlates of Conscious Access 2011-10-07T23:12:04.837Z
[Link] Neural Correlates of Confusion? 2011-10-06T23:52:45.445Z
Meetup : UMD Meetup 2011-10-04T04:22:10.903Z
Blindsight and Consciousness 2011-09-22T18:42:01.305Z
Meetup : University of Maryland, Influence 2011-09-20T03:14:05.260Z
Meetup : Northern Virginia: Nonviolent Communication 2011-09-12T17:17:09.717Z
Meetup : University of Maryland 2011-09-12T16:42:14.267Z
[Story] Rejection 2011-09-09T04:39:18.007Z
[Question] What's your Elevator Pitch For Rationality? 2011-09-06T21:43:21.967Z
Meetup : UMD, Social Effectiveness and Calibration Exercises 2011-09-05T19:47:02.012Z
Hacking on LessWrong Just Got Easier 2011-09-04T06:59:39.443Z
Rationality Attractors in Personspace 2011-09-03T21:25:08.952Z
Meetup : DC Meetup: NoVa 2011-09-01T02:33:24.919Z
Meetup : DC Meetup: NoVa, Rationality Games 2011-08-18T19:02:53.351Z
[Link] Study on Group Intelligence 2011-08-15T08:56:40.889Z
Anki Library 2011-08-08T05:00:22.904Z
San Diego Meetup? 2011-08-07T05:10:42.126Z
Other Useful Sites LWers Read 2011-07-11T19:44:24.645Z
Meetup : DC Special Meetup 2011-06-29T11:57:03.188Z
Meetup : East Coast Super Fun Rationalists Sleepover Party Extravaganza 2011-06-28T03:53:20.946Z
Foma: Beliefs that Cause Themselves to be True 2011-06-20T05:13:54.325Z
Less Wrong DC Experimental Society 2011-06-13T03:56:26.744Z
DC Meetup June 12th (New Location) 2011-06-09T04:56:03.761Z
DC Meetup June 5th, 1 PM 2011-06-02T01:33:26.292Z
DC Meetup: May 22nd 2011-05-17T20:13:14.234Z
DC Meetup: Sunday May 15th, 3:30 PM 2011-05-13T20:53:48.623Z
Save the Date: DC Meetup May 15th 2011-05-08T15:22:17.951Z
DC Meetup: Discussion 2011-05-03T02:39:55.204Z
Question: How many people have tried to optimize rationality outreach? 2011-04-30T04:18:27.772Z
DC Meetup: Sunday May 1st, 1 PM 2011-04-27T01:24:59.946Z
DC Meetup: Last Discussion Thread 2011-04-15T04:48:10.113Z
DC Meetup Discussion 2011-04-04T23:27:50.032Z
Just Try It: Quantity Trumps Quality 2011-04-04T01:13:45.765Z
Don't Fear Failure 2011-04-03T22:52:32.694Z
Q: What has Rationality Done for You? 2011-04-02T04:13:34.789Z
Claremont Meetup 2011-03-28T22:07:11.820Z
Limitless, a Nootropics-Centered Movie 2011-03-15T01:58:29.057Z
College Selection Advice 2011-03-09T22:13:54.783Z
Rationality Activism: Open Thread 2011-03-04T02:06:56.838Z

Comments

Comment by atucker on Double Crux — A Strategy for Mutual Understanding · 2016-12-01T14:31:23.620Z · LW · GW

I think that crux is doing a lot of work in that it forces the conversation to be about something more specific than the main topic, and because it makes it harder to move the goal posts partway through the conversation. If you're not talking about a crux then you can write off a consideration as "not really the main thing" after talking about it.

Comment by atucker on On the importance of Less Wrong, or another single conversational locus · 2016-11-27T23:57:03.898Z · LW · GW

What's the minimum set of powers (besides ability to kick a user off the site) that would make being a Moderator non-frustrating? One-off feature requests as part of a "restart LW" focus seem easier than trying to guarantee tech support responsiveness.

Comment by atucker on On the importance of Less Wrong, or another single conversational locus · 2016-11-27T23:50:03.916Z · LW · GW

"Strong LW diaspora writers" is a small enough group that it should be straightforward to ask them what they think about all of this.

Comment by atucker on Meetup : Boston: Self Therapy · 2014-11-16T18:25:30.066Z · LW · GW

Yes. This meetup is at the Citadel.

Comment by atucker on Crossing the History-Lessons Threshold · 2014-10-17T20:02:35.299Z · LW · GW

My impression is that the OP says that history is valuable and deep without needing to go back as far as the big bang -- that there's a lot of insight in connecting the threads of different regional histories in order to gain an understanding of how human society works, without needing to go back even further.

Comment by atucker on Maybe we're not doomed · 2014-08-03T06:22:12.342Z · LW · GW

The second and most already-implemented way is to jump outside the system and change the game to a non-doomed one. If people can't share the commons without defecting, why not portion it up into private property? Or institute government regulations? Or iterate the game to favor tit-for-tat strategies? Each of these changes has costs, but if the wage of the current game is 'doom,' each player has an incentive to change the game.

This is cooperation. The hard part is in jumping out, and getting the other person to change games with you, not in whether or not better games to play exist.

Moloch has discovered reciprocal altruism, since iterated prisoner's dilemmas are a pretty common feature of the environment; but because Moloch creates adaptation-executors rather than utility maximizers, we fail to cooperate across social, spatial, and temporal distance, even if the payoff matrix stays the same.

Even if you have an incentive to switch, you need to notice the incentive before it can get you to change your mind. Since many switches require all the players to cooperate and switch at the same time, it's unlikely that groups will accidentally start playing the better game.

Convincing people that the other game is indeed better is hard when evaluating incentives is difficult. Add too much complexity and it's easy to imagine that you're hiding something. This is hard to get past, since getting past it requires trust, in a context where we may be correct to distrust people -- e.g. if only lawyers know enough law to write contracts, they have an incentive to add loopholes that only lawyers can find, or at least to make contracts complicated enough that only lawyers can understand them, so that you need to keep hiring lawyers to use your contracts. In fact, contracts are generally complicated, full of loopholes, and basically require lawyers to deal with them.

Also, most people don't know about Nash equilibria, economics, game theory, etc., and it would be nice to be able to do things in a world with sub-utopian levels of understanding incentives. Also, trying to explain game theory to people as a substep of getting them to switch to another game runs into the same kind of justified mistrust as the lawyer example -- if they don't know game theory and you're saying that game theory says you're right, and evaluating arguments is costly and noisy, and they don't trust you at the start of the interaction, it's reasonable to distrust you even after the explanation, and not switch games.

Comment by atucker on [meta] Policy for dealing with users suspected/guilty of mass-downvote harassment? · 2014-06-07T18:35:04.818Z · LW · GW

I tend to think of downvoting as a mechanism to signal and filter low-quality content rather than as a mechanism to 'spend karma' on some goal or another. It seems that mass downvoting doesn't really fit the goal of filtering content -- it just lets you know that someone is either trolling LW in general, or just really doesn't like someone in a way that they aren't articulating in a PM or response to a comment/article.

Comment by atucker on Examples of Rationality Techniques adopted by the Masses · 2014-06-07T17:36:57.354Z · LW · GW

That just means that the sanity waterline isn't high enough that casinos have no customers -- it could be the case that there used to be lots of people who went to casinos, and the waterline has been rising, and now there are fewer people who do.

Comment by atucker on Discovering Your Secretly Secret Sensory Experiences · 2014-03-18T15:37:12.903Z · LW · GW

I have the same, though it seems to be stronger when the finger is right in front of my nose. It always stops if the finger touches me.

Comment by atucker on Fascists and Rakes · 2014-01-05T22:53:49.408Z · LW · GW

Hobbes uses a similar argument in Leviathan -- people are inclined towards not starting fights unless threatened, but if people feel threatened they will start fights. But people disagree about what is and isn't threatening, and so (Hobbes argues) there needs to be a fixed set of definitions that all of society uses in order to avoid conflict.

Comment by atucker on A critique of effective altruism · 2013-12-05T17:12:31.497Z · LW · GW

See the point about why it's weird to think that new affluent populations will work more on x-risk if current affluent populations don't do so at a particularly high rate.

Also, it's easier to move specific people to a country than it is to raise the standard of living of entire countries. If you're doing raising-living-standards as an x-risk strategy, are you sure you shouldn't be spending money on locating people interested in x-risk instead?

Comment by atucker on A critique of effective altruism · 2013-12-05T08:06:37.981Z · LW · GW

My guess is that Eli is referring to the fact that the EA community seems to largely donate where GiveWell says to donate, and that a lot of the discourse centers on a system of trying to figure out all the effects of a particular intervention, weigh it against all other factors, and then come up with a plan of what to do. Said plan is incredibly sensitive to your being right about the prioritization, the facts of the situation, etc., in a way that will cause you to predictably fail to do as well as you could -- due to factors like a lack of on-the-ground feedback suggesting other important areas, misunderstanding people's values, errors in reasoning, and a lack of diversity in attempts, such that if one part fails nothing gets accomplished.

I tend to think that global health is relatively non-controversial as a broad goal (nobody wants malaria! like, actually nobody) that doesn't suffer from the "we're figuring out what other people value" problem as much as other things, but I also think that that's almost certainly not the most important thing for people to be dealing with now to the exclusion of all else, and lots of people in the EA community seem to hold similar views.

I also think that GiveWell is much better at handling that type of issue than people in the EA community are, but that the community (at least the Facebook group) is somewhat slow to catch up.

Comment by atucker on Questions and comments about Eliezer's Dec. 2 2013 Oxford speech · 2013-12-05T07:57:49.689Z · LW · GW

It seems that "donate to a guide dog charity" and "buy me a guide dog" are pretty different w/r/t the extent that it's motivated cognition. EAs are still allowed to do expensive things for themselves, or even as for support in doing so.

Comment by atucker on A critique of effective altruism · 2013-12-02T19:44:26.162Z · LW · GW

It seems easier to evaluate "is trying to be relevant" than "has XYZ important long-term consequence". For instance, investing in asteroid detection may not be the most important long-term thing, but it's at least plausibly related to x-risk (and it would be confusing for it to be actively harmful), whereas third-world health has confusing long-term repercussions but is definitely not directly related to x-risk.

Even if third world health is important to x-risk through secondary effects, it still seems that any effect on x-risk it has will necessarily be mediated through some object-level x-risk intervention. It doesn't matter what started the chain of events that leads to decreased asteroid risk, but it has to go through some relatively small family of interventions that deal with it on an object level.

Insofar as current society isn't involved in object-level x-risk interventions, it seems weird to think that bringing third-world living standards closer to our own will lead to more involvement in x-risk intervention without there being some sort of wider-spread availability of object-level x-risk intervention.

(Not that I care particularly much about asteroids, but it's a particularly easy example to think about.)

Comment by atucker on A critique of effective altruism · 2013-12-02T06:22:04.276Z · LW · GW

Social feedback is an incentive, and the bigger the community gets the more social feedback is possible.

Insofar as Utilitarianism is weird, negative social feedback is a major reason to avoid acting on it, and so early EAs must have been very strongly motivated to implement utilitarianism in order to overcome it. As the community gets bigger, it is less weird and there is more positive support, and so it's less of a social feedback hit.

This is partially good, because it makes it easier to "get into" trying to implement utilitarianism, but it's also bad because it means that newer EAs need to care about utilitarianism relatively less.

It seems that saying incentives don't matter as long as you remove social-approval-seeking ignores the question of why the remaining incentives would push people towards actually trying.

It's also unclear what's left of the incentives holding the community together after you remove the social incentives. Yes, talking to each other probably does make it easier to implement utilitarian goals, but at the same time it seems that the accomplishment of utilitarian goals is not in itself a sufficiently powerful incentive, otherwise there wouldn't be effectiveness problems to begin with. If it were, then EAs would just be incentivized to effectively pursue utilitarian goals.

Comment by atucker on Why officers vs. enlisted? · 2013-11-02T14:25:41.969Z · LW · GW

My guess is just that the original reason was that there were societal hierarchies pretty much everywhere in the past, and armies wanted some way for nobles/high-status people to join while being obviously distinguished from the general population, with no possibility of being demoted far enough to end up on the same level as commoners. Armies without the officer/non-officer distinction just didn't get any buy-in from the ruling class, and so they wouldn't exist.

I think there's also a pretty large difference in training -- becoming an officer isn't just about skills in war, but also involves socialization to the officer culture, through the different War Colleges and whatnot.

Comment by atucker on Making Fun of Things is Easy · 2013-09-29T14:35:04.309Z · LW · GW

You would want your noticing that something is bad to indicate, in some way, how the thing could be improved. You want to know what in particular is bad and can be fixed, rather than the less informative "everything". If your classifier triggers on everything, it tells you less on average about any given thing.

Comment by atucker on High School, Human Capital, Signaling and College Admissions · 2013-09-08T23:22:34.654Z · LW · GW

My personal experience (going to Harvard, talking to students and admissions counselors) suggests that at least one of the following is true:

1. Teacher recommendations and the essays that you submit to colleges are also important in admissions, and are the main channel for signaling personal development and the human capital that grades don't particularly capture.

2. There are particular known-to-be-good schools whose students colleges disproportionately admit, and for slightly different reasons than they admit students from other schools.

I basically completely ignored signalling while in high school: I often prioritized taking more interesting non-AP classes over AP classes, and focused on a couple of extracurricular relationships rather than diversifying across many. My grades and standardized test scores also suffered as a result of my investment in my robotics team.

Comment by atucker on Arguments Against Speciesism · 2013-07-29T03:41:50.901Z · LW · GW

All I can say is that I don't understand why intelligence is relevant for whether you care about suffering.

Intelligence is relevant for the extent to which I expect alleviating suffering to have secondary positive effects. Since I expect most of the value of suffering alleviation to come through secondary effects on the far future, I care much more about human suffering than animal suffering.

As far as I can tell, animal suffering and human suffering are comparably important from a utility-function standpoint, but the difference in EV between alleviating human and animal suffering is huge -- the difference in potential impact on the future between a suffering human and a non-suffering human is massive compared to that between a suffering animal and a non-suffering animal.

Basically, it seems like alleviating one human's suffering has more potential to help the far future than alleviating one animal's suffering. A human who might currently be too incapacitated to, say, deal with x-risk might become helpful, while an animal is still not going to be consequential on that front.

So my opinion winds up being something like "We should help the animals, but not now, or even soon, because other issues are more important and more pressing".

Comment by atucker on Can we dodge the mindkiller? · 2013-06-15T10:10:49.272Z · LW · GW

Political instrumental rationality would be about figuring out and taking the political actions that would cause particular goals to happen. Most of this turns out to be telling people compelling things that you know but they don't, and convincing different groups that their interests align (or can align in a particular interest) when it's not obvious that they do.

Political actions are based on appeals to identity, group membership, group bounding, group interests, individual interests, and different political ideas in order to get people to shift allegiances and take action toward a particular goal.

For any given individual, the relative importance of these factors will vary. For questions of identity and affiliation, people weigh those factors based on having meaning reinforced and on memory-related stuff (e.g. clear memories of meaningful experiences count, but so does not-particularly-meaningful stuff that happens every day). For actual action, it comes down to various psychological factors, as well as options simply being available and salient while they have the opportunity to act in a way that reinforces their affiliations, meaning, standing with others in the group, or personal interests.

As a result, political instrumental rationality is going to be incredibly contingent on local circumstances -- who talks to who, who believes what how strongly, who's reliable, who controls what, who wants what, who hears about what, etc.

A more object level example takes place in The Wire, when a pastor is setting up various public service programs in an area where drug dealing is effectively legalized.

The pastor himself is able to appeal to his community on the basis of religious solidarity in order to get money, and so he can fund some stuff. For Christian reasons, he cares about public health and about the fate of the now-unemployed would-be drug runners, who are no longer necessary for drug dealing (since drugs are effectively legal there, the gang members don't bother with the various steps that ensure none of them can be photographed handing someone drugs for money -- normally the dealer takes the money, then the runner, typically a child, goes to the stash to hand the buyer the drugs). Further, he knows people from various community/political events in Baltimore.

So far, so good. He controls some resources (money), has a goal (public health, child development), and knows some people.

One of the first people he talks to is a doctor who has been trying to do STD prevention for a while, but hasn't had the funding or organizational capacity to do much of anything. The pastor points out to him that there are a lot of at-risk people who are now concentrated in a particular location so that the logistics of getting services to people is much simpler. In this case, the pastor simply had information (through his connections) that the doctor didn't, and got the doctor to cooperate by pointing out the opportunity to do something that the doctor had wanted.

He gets the support of the district police chief, who had decided to selectively enforce drug laws, by appealing to the chief's desire to improve the district under his command (the chief was initially trying to shift drug trafficking away from more populated areas, and to decrease violence by decreasing competition over territory), and it more or less worked.

That being said, I have more or less no idea what kinds of large-scale political action ought to be possible/is desirable.

I totally have the intuition though that step one of any plan is to become personally acquainted with people who have some sort of influence over the areas that you're interested in, or to build influence by getting people who have some control over what you're interested in to pay more attention to you. Borderline, if you can't name names, and can't point at groups of people involved in the action, then you can't do anything particularly useful politically.

Comment by atucker on Near-Term Risk: Killer Robots a Threat to Freedom and Democracy · 2013-06-15T08:51:25.963Z · LW · GW

This distinction is just flying/not-flying.

Offense has an advantage over defense in that defense needs to defend against more possible offensive strategies than offense needs to be capable of doing, and offense only needs one undefended plan in order to succeed.

I suspect that not-flying is a pretty big advantage, even relative to offense/defense. At the very least, moving underground (and doing hydroponics or something for food) makes drones no more offensively helpful than missiles. A non-flying system can also devote more energy and matter to whatever it is doing than a flying one can, which allows for more exotic sensing and destructive capabilities.

Comment by atucker on Near-Term Risk: Killer Robots a Threat to Freedom and Democracy · 2013-06-15T01:41:10.707Z · LW · GW

Almost certainly, but the point that stationary counter-drones wouldn't necessarily be in a symmetric situation to counter-counter-drones holds. Just swap in a different attack/defense method.

Comment by atucker on Near-Term Risk: Killer Robots a Threat to Freedom and Democracy · 2013-06-14T23:15:07.024Z · LW · GW

I think that if you used an EMP as a stationary counter-drone you would have an advantage over drones in that most drones need some sort of power/control in order to keep on flying, and so counter-drones would be less portable, but more durable than drones.

Comment by atucker on Useful Concepts Repository · 2013-06-10T17:50:22.321Z · LW · GW

From off site:

Energy and Focus are more scarce than Time (at least for me); Be Specific (somewhat on site, but whatever).

From on the site:

Mind Projection Fallacy, Illusion of Transparency, Trivial Inconveniences, Goals vs. Roles, Goals vs. Urges

Comment by atucker on Research is polygamous! The importance of what you do needn't be proportional to your awesomeness · 2013-05-28T19:15:08.199Z · LW · GW

Fair, but at least some component of this working in practice seems to be a status issue. Once we're talking about awesomeness and importance, and the representativeness of a person's awesomeness and the importance of what they're working on, and how different people evaluate importance and awesomeness, it seems decently likely that status will come into play.

Comment by atucker on Research is polygamous! The importance of what you do needn't be proportional to your awesomeness · 2013-05-28T19:00:39.718Z · LW · GW

Good point, I did summarize a bit fast.

There are two issues at hand: asserting that you're doing something that's high status within your community, and asserting that your community's goals are more important (and higher status) than the goals of the listener's community.

If there's a large inferential distance in justifying your claims of importance, but the importance is clear, then it's difficult to distinguish you from say, cranks and conspiracy theorists.

(The dialogues are fairly unrealistic, but trying to gesture at the pattern)

A within culture issue:

"I do rocket surgery"

"I'm working on hard Brain Science problem X"

"Doesn't Charlie work on X?"

"Yeah."

"Are you working with Charlie on X?"

"No."

"Isn't Charlie really smart though?"

"Yep."

"Are you saying that you're really smart too?"

"No."

"Why bother?"

Between cultures:

"I do Rocket Surgery".

"That's pretty cool. I'm trying to destroy the One Ring".

"Huh?"

"Basically, I'm trying to destroy the power source for the dark forces that threaten everything anyone holds dear".

"Shouldn't Rocket Brain Surgery Science be able to solve that"?

"No. that's a fundamentally flawed approach on this problem -- the One Ring doesn't have a brain, and you carry it around. If you look at --"

"So you're looking for a MacGuffin?"

"No."

Comment by atucker on Research is polygamous! The importance of what you do needn't be proportional to your awesomeness · 2013-05-27T01:23:16.592Z · LW · GW

I entirely agree with this point, but suspect that actually following this advice would make people uncomfortable.

Since different occupations/goals have some amount of status associated with them (nonprofits, skilled trades, professions) many people seem to take statements about what you're working on to be status claims in addition to their denotational content.

As a result, working on something "outside of your league" will often sound to a person like you're claiming more status than they would necessarily give you.

Comment by atucker on Problems with Academia and the Rising Sea · 2013-05-25T00:35:15.022Z · LW · GW

Textbooks replace each other on clarity of explanation as well as adherence to modern standards of notation and concepts.

Maybe just cite the version of an experiment that explains it the best? Replications have a natural advantage because you can write them later when more of the details and relationships are worked out.

Comment by atucker on Meetup : London Special Guests: Jaan Tallinn and Michael Vassar of MetaMed · 2013-05-06T06:59:41.220Z · LW · GW

If I were in London, or even within an hour or two of it, I would try to go to this.

Comment by atucker on Good luck, Mr. Rationalist · 2013-04-30T07:36:12.715Z · LW · GW

"May your plans come to fruition"

I used to say that more when leaving megameetups or going on a trip or something. It has the disadvantage that you can't say it very fast.

I also want a word/phrase that expresses sympathy but isn't "sorry".

Comment by atucker on [Link] Should Psychological Neuroscience Research Be Funded? · 2013-04-19T21:16:38.214Z · LW · GW

Entirely agreed. Even if you more often than not get the same answers from fMRI and surveys, the fMRI externalizes the judgment of whether someone is in an empathizing/emotional/cognitive state with regard to something else.

One might argue that we probably have a decent understanding of how well people's verbal statements line up with different facts, but where this diverges from the neurological reality is interesting enough to be spending money on the chance of finding the discrepancies. If we don't find them, that's also fascinating, and is worth knowing about.

We tend to take for granted that what people say about themselves is accurate, but externalized measurement is also worthwhile for its own sake.

Comment by atucker on Explicit and tacit rationality · 2013-04-10T02:35:55.651Z · LW · GW

I think it would probably be worth going into a bit more about what delineates tacit rationality from tacit knowledge. Rationality seems to me to apply to things that you can reflect about, and so the concept of things that you can reflect about but can't necessarily articulate seems weird.

For instance, at first it wasn't clear to me that working at a startup would give you any rationality-related skills except insofar as it gives you instrumental rationality skills, which could possibly just be explained as better tacit knowledge -- you know a bajillion more things about the actual details necessary to run a business and make things happen.

There are actually a ton of potential non-tacit-knowledge powerups from running a startup, though! Ones that probably even engage reflection!

For instance, a person could learn what it feels like when they're about to be too tired to work for the rest of the day, and learn to stop before then so that they could avoid burnout. This would be a reflective skill (noticing a particular sensation of tiredness), and yet it would be nigh impossible to articulate (can you describe what it feels like to almost be unable to work well enough that I can detect it in myself?).

Comment by atucker on Explicit and tacit rationality · 2013-04-10T02:34:27.103Z · LW · GW

When evaluating the relationship between success and rationality it seems worth keeping in mind survivorship bias.

An interesting case is that Will Smith seems likely to be explicitly rational in a way that other people in entertainment don't talk about -- he'll plan and reflect on various movie-related strategies so that he can get progressively better roles and box office receipts.

For instance, before he started acting in movies, he and his agent thought about what top-grossing movies all had in common, and then he focused on getting roles in those kinds of movies.

http://www.time.com/time/magazine/article/0,9171,1689234,00.html

Comment by atucker on Problems in Education · 2013-04-09T03:11:36.521Z · LW · GW

Marginal effort within the bounds of a consulting agency offering a service "tailored" to each school district.

Comment by atucker on Problems in Education · 2013-04-09T01:44:45.159Z · LW · GW

I think the hard part of refitting the model would probably just be getting access to the data -- beyond that it seems like a statistician or programmer would be able to just tell a computer how to minimize some appropriate cost function.

Something like most of the marginal effort is devoted to gathering the data, which presumably doesn't require that much expertise relative to understanding the model in the first place.
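A minimal sketch of the kind of refitting this describes -- everything here is hypothetical: placeholder data, a simple linear model, and mean squared error standing in for "some appropriate cost function", not the actual educational model under discussion:

```python
# Hypothetical sketch: refit a simple prediction model to one district's
# data by minimizing a cost function. Data, features, and model form are
# placeholders, not the real model discussed above.
import numpy as np
from scipy.optimize import minimize

def predict(params, X):
    # Linear model: predicted outcome = X @ weights + bias.
    weights, bias = params[:-1], params[-1]
    return X @ weights + bias

def cost(params, X, y):
    # Mean squared error between predictions and observed outcomes.
    return np.mean((predict(params, X) - y) ** 2)

# Stand-ins for one district's gathered data: rows are students,
# columns are whatever features the model uses.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = rng.normal(size=100)

result = minimize(cost, x0=np.zeros(X.shape[1] + 1), args=(X, y))
district_params = result.x  # the "new" model, fit to this district
```

The point being that once the data exists, the fitting step really is this mechanical; the expertise lives in choosing the model and gathering the data.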

Comment by atucker on Problems in Education · 2013-04-09T01:38:09.350Z · LW · GW

Maybe slightly vary the parameters to make the model "new"? Like, fit it to data from that district, and it will probably be slightly different from "other" models.

Comment by atucker on Problems in Education · 2013-04-09T01:29:25.917Z · LW · GW

Has anyone published data on the effectiveness of Bayesian prediction models as an educational intervention? It seems like that would be very helpful in terms of being able to convince school districts to give them a shot.

Comment by atucker on Meetup : LessWrong/HPMoR Harvard · 2013-04-07T19:31:37.028Z · LW · GW

We've relocated to Sever 105.

Comment by atucker on Anybody want to join a Math Club? · 2013-04-05T12:38:05.256Z · LW · GW

Same. I'd be interested in trying this for a bit starting after mid-May.

Comment by atucker on Reflection in Probabilistic Logic · 2013-03-23T20:04:44.307Z · LW · GW

It's somewhat tricky to separate "actions which might change my utility function" from "actions". Gandhi might not want the murder pill, but should he eat eggs? They have cholesterol that can be metabolized into testosterone which can influence aggression. Is that a sufficiently small effect?

Comment by atucker on Harry Potter and the Methods of Rationality Bookshelves · 2013-03-22T23:05:07.851Z · LW · GW

A lot of Herodotus' histories have interesting stories about people exhibiting and not exhibiting ancient Greek virtues.

Comment by atucker on Personal Evidence - Superstitions as Rational Beliefs · 2013-03-22T23:02:37.267Z · LW · GW

Though, the other stuff in the post, and his other comments on the thread, really make it seem to me to be related to the house rather than to him, or his friends.

Comment by atucker on [LINK] Transcendence (2014) -- A movie about "technological singularity" · 2013-03-21T17:38:58.734Z · LW · GW

Given that Johnny Depp appears to be on the Singularity side (as the uploaded human), I suspect that they'll be portrayed sympathetically, even if the ending isn't exactly happy.

Comment by atucker on Getting myself to eat vegetables · 2013-03-13T20:16:44.790Z · LW · GW

I think that the nutritional value of the food, or at least the perceived nutritional value of the food, also plays a role in how quickly you start liking it. I've started liking raw beef liver and fish oil after waaaay fewer tries than say, ceviche.

Comment by atucker on Induction; or, the rules and etiquette of reference class tennis · 2013-03-06T16:11:39.140Z · LW · GW

So given some data, to determine the relative probability of two competing hypotheses, we start from the ratio of their prior probabilities, and then multiply by the ratio of their likelihoods. If we restrict to hypotheses which make predictions "within our means"---if we treat the result of a computation as uncertain when we can't actually compute it---then this calculation is tractable for any particular pair of hypotheses.

...

When two people disagree about the relative complexity of two hypotheses, it must be because that hypothesis is simpler in one of their languages than in the other.

This is a fairly minor point, but do you mean to imply that the prior probability of a never-happened-before event normally swamps the updates about its probability that you find out later on? Or that people update on information by reformulating their concepts so that more probable events are expressed with lower complexity?

Either of those would be very interesting, though I think the argument would also stand if you meant neither.
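For reference, the posterior-odds update the quoted passage is describing is the standard one:

\[
\frac{P(H_1 \mid D)}{P(H_2 \mid D)} = \frac{P(H_1)}{P(H_2)} \times \frac{P(D \mid H_1)}{P(D \mid H_2)}
\]

i.e. posterior odds = prior odds times likelihood ratio -- which is why a disagreement that survives shared data has to live in the prior (complexity) term.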

Comment by atucker on MetaMed: Evidence-Based Healthcare · 2013-03-05T17:38:32.934Z · LW · GW

From what I understand, Watson is more about machine learning and question answering, in order to do something like make medical diagnoses based on the literature.

MetaMed tries to evaluate the evidence itself, in order to come up with models for treatment for a patient that are based on good data and an understanding of their personal health.

They both involve reviewing literature, but MetaMed is actually trying to ignore and discard parts of the literature that aren't statistically/logically valid.

Comment by atucker on MetaMed: Evidence-Based Healthcare · 2013-03-05T17:31:03.584Z · LW · GW

Upvoted, but I'm a bit confused as to what we're trying to refer to with "spam".

If by spam we mean advertising, yes. Definitely.

If by spam we mean undesirable messaging that lowers the quality of the site, then I would think that this is very much not spam.

Comment by atucker on Michael Vassar's Edge contribution: summary · 2013-01-22T21:24:14.074Z · LW · GW

It's in some weird-to-link-to Facebook format.

Basically, it's the same as the Edge essay, but you should replace the last paragraph with...

Robert Altemeyer's research shows that for a population of authoritarian submissives, authoritarian dominators are a survival necessity. Since those who learn their school lessons are too submissive to guide their own lives, our society is forced to throw huge wads of money at the rare intelligent authoritarian dominants it can find, from derivative start-up founders to sociopathic Fortune 500 CEOs. However, with their attention placed on esteem, their concrete reasoning underdeveloped and their school curriculum poorly absorbed, such leaders aren’t well positioned to create value. They can create some, by imperfectly imitating established models, but can’t build the abstract models needed to innovate seriously. For such innovations, we depend on the few self-actualizers we still get; people who aren’t starving for esteem. People like Aaron Swartz.

Aaron Swartz is dead now. He died surrounded by friends; the wealthy, the powerful and the ‘smart’. He died desperate and effectively alone. A friend of mine, when she was seventeen, was involuntarily incarcerated in a mental hospital. She hadn’t created Reddit, but she had a blog with some readers- punks, fan girls and street kids. They helped her to escape, and to hide until the chase blew over. Aaron didn’t have friends like that. The wealthy, the powerful, and the ‘smart’ tend not to fight back; they learned their lessons well in school.

Comment by atucker on Michael Vassar's Edge contribution: summary · 2013-01-21T02:41:57.972Z · LW · GW

I more or less agree with your reading of this essay, but it misses an important point that the edited version on Edge leaves out -- in the original version, he compared the friends of Aaron Swartz with the friends of someone Michael knows.

Basically, when she was institutionalized against her will, her low-status, relatively poor friends helped break her out of the mental hospital and hide her until the police chase blew over. In contrast, when placed in a legal battle Aaron Swartz wasn't able to rely on his much smarter, wealthier, and in almost every way better off friends to help him. The well-learned elites couldn't really help protect him because they were too used to submitting.

With this in mind, the point of the essay is much more that we rely on non-social cognition for innovation, but that the culture of submission has destroyed our mechanisms for supporting self-actualizing innovators who come under fire. With this lack of support, our innovators are even worse off than they are just based on worse social skills and being less well understood.

Comment by atucker on Macro, not Micro · 2013-01-07T06:21:04.166Z · LW · GW

I think it's pretty possible to macro-optimize successfully and still lose. All you have to do is know what to do and not how to do it.