Comment by jimrandomh on Comment section from 05/19/2019 · 2019-05-20T20:31:29.692Z · score: 6 (3 votes) · LW · GW

I believe this is currently mostly manual (ie, Oli created a new post, did a database operation to move comments over, then posted a comment in the old place). Given that it seems to have worked out well in this case, if it comes up a few more times, we'll look into automating it (and making small details like old-post comment permalinks work).

Comment by jimrandomh on Feature Request: Self-imposed Time Restrictions · 2019-05-20T20:19:26.753Z · score: 6 (3 votes) · LW · GW

We (the LW team) are definitely thinking about this issue, and I at least strongly prefer that people use the site in ways that reflect decisions which they would endorse in retrospect; ie, reading things that are valuable to them, at times and in quantities that make sense, and not as a way to avoid other things that might be more important. I'm particularly thinking about this in the context of the upcoming Recommendations system, which recommends older content; that has the potential to be more of an unlimited time sink, in contrast to reading recent posts (which are limited in number) or reading sequences (which is more like reading a book, which people have existing adaptations around).

A big problem with naively implemented noprocrast/leechblock-style features at the site level is that they can backfire by shunting people into workarounds which make things worse. For example, if someone is procrastinating on their computer, noprocrast kicking in when they don't want to stop might make them start reading on their phone, creating bad habits around phone use. Cutting off access in the middle of reading a post (as opposed to between posts) is especially likely to do this; but enforcing a restriction only at load-time encourages opening lots of tabs, which is bad. And since people are likely to invest in setting personal rules around whatever mechanisms we build, there are switching costs if the first mechanism isn't quite right.

So: I definitely want us to have something in this space, and for it to be good. But it may take a while.

Comment by jimrandomh on Boo votes, Yay NPS · 2019-05-14T21:24:14.139Z · score: 9 (4 votes) · LW · GW

(I'm a member of the LW team, but this is an area where we still have a lot of uncertainty, so we don't necessarily agree internally and our thinking is likely to change.)

There are three proposed changes being bundled together here: (1) The guidance given about how to vote; (2) the granularity of the votes elicited; and (3) how votes are aggregated and presented to readers.

As you correctly observe, votes serve multiple purposes: they give other readers information about what's worth their time to read, they give readers information about what other people are reading, and they give authors feedback about whether they did a good job. Sometimes these come apart; for example, if someone helpfully clears up a confusion that only one person had, then their comment should receive positive feedback, but isn't worth reading for most people.

These things are, in practice, pretty tightly correlated, especially when judged by voters who are only spending a little bit of time on each vote. And that seems like the root issue: disentangling "how I feel about this post" from "is this post worth reading" requires more time and distance than is currently going into voting. One idea I'm considering is retrospective voting: periodically show people a list of things they've read in the past (say, the past week), and ask them to rate those items then. This would be less noisy, because it elicits comparisons rather than ups/downs in isolation, and it might also change people's votes in a good way by giving them some distance.

Switching from the current up/down/super-up/super-down voting to 0-100% range voting seems like its main effect would be to create a distinction between implicit and explicit neutral votes. That is, currently if people feel something is meh, they don't vote, but in the proposed system they would instead give it a middling score. The advantage of this is that you can aggregate scores in a way that measures quality, without being as conflated with attention; right now if a post/comment has been read more times, it gets more votes, and we don't have a good way of distinguishing this from a post/comment with fewer reads but more votes per reader.

But I'm skeptical that people will actually cast explicit neutral votes in most cases; that would require them to break out of skimming, slow down, and make a lot more explicit decisions than they currently do. A more promising direction might be to collect more granular data on scroll positions and timings, so that we can estimate the number of people who read or skimmed a comment without voting, and use that as an input into scoring.
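As a rough illustration of the kind of read-normalized scoring this would enable, here is a minimal sketch. The function and field names are purely illustrative assumptions, not anything LessWrong actually implements, and `estimated_readers` stands in for whatever the scroll/timing data would produce:

```python
def read_normalized_score(vote_total: float, estimated_readers: int, prior_readers: int = 20) -> float:
    """Karma per estimated reader, with a pseudocount so thinly-read comments
    aren't ranked on one or two lucky votes. (Illustrative sketch only;
    estimated_readers would come from scroll-position and timing data.)"""
    return vote_total / (estimated_readers + prior_readers)

# Same raw karma, very different exposure:
print(read_normalized_score(vote_total=10, estimated_readers=40))    # ~0.17: niche but well-received
print(read_normalized_score(vote_total=10, estimated_readers=1000))  # ~0.01: widely read, mild reception
```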

The third thing is aggregation--how we convert a set of votes into a sort-order to guide readers to the best stuff--which is the aspect of the current system I'm least satisfied with. That includes things like karma-weighting of votes, and also the handling of polarizing posts. In the long term, I'm hoping to generate a dataset of pairwise comparisons by trusted users, which we can use as a ground truth to test algorithms against. But polarizing posts will always be difficult to score, because the votes reflect an underlying disagreement between humans, and the answer to whether a post should be shown may depend on things the voters haven't evaluated, like the truth of the post's claims.
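To make the ground-truth idea concrete, here is a minimal sketch of how a candidate sort order could be scored against a dataset of trusted pairwise comparisons; the function name and toy data are assumptions for illustration only:

```python
def agreement_with_judgments(ranking: list[str], judgments: list[tuple[str, str]]) -> float:
    """Fraction of trusted pairwise judgments (better, worse) that a candidate
    sort order (best-first) gets right."""
    position = {post_id: i for i, post_id in enumerate(ranking)}
    correct = sum(position[better] < position[worse] for better, worse in judgments)
    return correct / len(judgments)

# Hypothetical judgments collected from trusted users: (better_post, worse_post)
judgments = [("a", "b"), ("a", "c"), ("c", "b")]
print(agreement_with_judgments(["a", "c", "b"], judgments))  # 1.0
print(agreement_with_judgments(["b", "a", "c"], judgments))  # 0.33...
```

Different aggregation rules (karma-weighted sums, medians, and so on) could then be compared by how often the orderings they produce agree with those judgments.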

Comment by jimrandomh on Coherent decisions imply consistent utilities · 2019-05-14T02:32:20.616Z · score: 5 (3 votes) · LW · GW

While we have a long-term plan of importing Arbital's content into LessWrong (after LessWrong acquires some wiki-like features to make it make sense), we have not taken responsibility for the maintenance of Arbital itself.

Comment by jimrandomh on Rob B's Shortform Feed · 2019-05-11T00:31:56.540Z · score: 6 (3 votes) · LW · GW

It's optimized on a *very* different axis, but there's the Rationality Cardinality card database.

Comment by jimrandomh on Tales From the American Medical System · 2019-05-10T21:25:33.869Z · score: 10 (2 votes) · LW · GW
But I’ve seen patients try to get out of this. They’ll wait until the last possible moment, then send an email saying “I am out of my life-saving medication, you must refill now!” If I send a message saying we should have an appointment on the books before I fill it, they’ll pretend they didn’t see that and just resend “I need my life-saving medication now!”

Insulin is different from the sorts of drugs you prescribe. With most medications, if someone runs out, they start suffering health consequences; it's very unpleasant and incurs a bit of lasting harm, but they don't die. Being without access to insulin is about as serious as being without access to water. If you send a message saying there should be an appointment on the books before renewing the prescription, then there's a real risk that the delay lands them in the emergency room, or kills them.

Comment by jimrandomh on Tales From the American Medical System · 2019-05-10T20:35:19.607Z · score: 12 (3 votes) · LW · GW
(but what would be the effects of making potentially dangerous medications freely available?)

It's already OTC in Canada, and nothing bad has happened as a result.

Comment by jimrandomh on Tales From the American Medical System · 2019-05-10T02:03:50.473Z · score: 18 (7 votes) · LW · GW
What happens if you let patients buy refills without a prescription? Would they consume too much of it?

No. Prescriptions don't specify precise dosages, because those are adjusted much too frequently for direct doctor involvement.

Would there be any sort of risk of them selling the excess to others?

No. There is no secondary market for insulin, because primary-market insulin is easily available at the price of a plane ticket, and improperly stored insulin is unsafe and indistinguishable from properly stored insulin. Furthermore, no one is trying to restrict access (other than as a way to extract money).

Is there a medical reason why the doctor might not prescribe more insulin if he examines the patient and finds something new?

No. Type 1 diabetics continue to require insulin 100% of the time, no exceptions.

On that note, I wonder if the doctor is coming from a place of worrying about covering his ass and getting sued if he prescribes more insulin without the exam.

In fact, by refusing to prescribe, this doctor created a considerable risk. If the person in the story hadn't managed to get a prescription, and had died, a malpractice lawsuit would probably succeed.

Comment by jimrandomh on Tales From the American Medical System · 2019-05-10T01:57:42.916Z · score: 21 (10 votes) · LW · GW
Alternative view: Your friend has a deadly disease that requires regular doctor visits and prescriptions. It sucks. It's not fair, but it requires him to take some level of responsibility for his own care. He seems to have failed to do so by not keeping his appointments and letting his prescriptions run out.

Type 1 diabetic here. Regular doctor visits are actually pretty useless to us, other than refilling the prescriptions. Every six months is customary, but excessive. Every three months is scamming money out of insurers.

Regarding the price of medicine in Canada: I believe the fixed low prices in Canada are being subsidized by your friend and all Americans.

It's cheap literally everywhere except the United States. It's not a matter of subsidized capital costs, because those were all paid off more than a decade ago, and prices were cheaper then.

Measurement every 3 months in patients with type 1 diabetes determines whether glycemic targets have been reached and maintained.

Measuring HbA1c can be done cheaply with an over-the-counter test kit. It does not require a doctor visit. Also, testing HbA1c that frequently isn't important and isn't done by most diabetics.

Comment by jimrandomh on How long can people be productive in [time period]? · 2019-05-07T06:38:44.782Z · score: 11 (6 votes) · LW · GW

This question seems like the tip of an iceberg of complexity. The workers' age, physical health and motivation probably matter. The contents of their non-work lives probably matter. In the case of programming, slightly degraded performance might mean enough bugs to be net negative, or it might just mean doing the same thing slightly slower. Caffeine-use patterns probably matter; use of other stimulants probably matters, too. In my own life, I've seen my personal productivity range from 80 hours/week to 0 hours/week over multi-month spans.

Comment by jimrandomh on How long can people be productive in [time period]? · 2019-05-07T06:19:47.513Z · score: 7 (4 votes) · LW · GW

But note that RescueTime's data only covers time spent on a computer, which is only a subset of productive work time; there are also meetings, work on paper, and things like that.

Comment by jimrandomh on Hierarchy and wings · 2019-05-06T21:59:54.593Z · score: 14 (4 votes) · LW · GW
Could you give a reference for the Hierarchy Game? A quick google search did not turn up anything that sounded like game theory.

I think that was coined specifically for this post, and doesn't (yet?) have a corresponding formalism. I would be interested in seeing an attempt to formalize this, but there's enough subtlety that I'd worry about confusion arising from mismatches between the idea and the formalism.

On a separate note, this post is IMO really toeing the line in terms of what's too political for LW.

The way we currently handle this is with the Frontpage vs Personal Blog distinction; things that meet our frontpage guidelines, we promote to frontpage, everything else we leave on Personal Blog. We chose to front-page this, but I agree that it's borderline.

Comment by jimrandomh on Hierarchy and wings · 2019-05-06T18:55:46.709Z · score: 24 (7 votes) · LW · GW
The "left wing" is the natural complement to this strategy: a political "big tent" made up of all the noncentral groups.
...
As before, both sides are winning this civil war, at the expense of the people least interested in expropriation.

While this appears to be true of conventional politics, it's worth noting that a very similar structure appears in less-expropriative contexts. For example, some technology markets naturally organize into a market leader vs. an alliance of everyone else; eg Microsoft (right) vs open source (left), or Apple (right) vs Android (left). In these contexts, overt force is replaced with soft power, and there is enough value created for everything to be positive-sum. Notice that people refer to an "Apple tax", and at the height of Microsoft's power referred to a "Microsoft tax".

Comment by jimrandomh on Self-confirming predictions can be arbitrarily bad · 2019-05-04T18:06:04.494Z · score: 5 (3 votes) · LW · GW

It seems that what we want is usually going to be a counterfactual prediction: what would happen if the AI gave no output, or gave some boring default prediction. This is computationally simpler, but philosophically trickier. It also requires that we be the sort of agents who won't act too strangely if we find ourselves in the counterfactual world instead of the real one.

Comment by jimrandomh on Never Leave Your Room · 2019-04-30T02:12:51.343Z · score: 7 (2 votes) · LW · GW

Since this (now ten years old) post was written, psychology underwent a replication crisis, and priming has become something of a poster child for "things that sounded cool but failed to replicate".

Semi-relatedly, we on the Less Wrong team have been playing with a recommendation engine which suggests old posts, and it recommended this to me. Since this post didn't age well, I'm setting the "exclude from recommendations" flag on it.

Comment by jimrandomh on Buying Value, not Price · 2019-04-30T00:54:30.890Z · score: 34 (15 votes) · LW · GW

A quick reductio for the "three times" framing is to notice that if, having already decided to buy a phone, you were to convert $250 from your bank account into phone-purchasing credit, then the prices change to $500 and $0, and the question changes to whether the more expensive phone is infinity times better. That version of the question makes no sense, so dividing the two prices by each other doesn't make sense either.
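The arithmetic, spelled out as a small sketch using the $750/$250 prices implied above:

```python
expensive, cheap, credit = 750, 250, 250

print(expensive / cheap)                   # 3.0 -- the "three times better" framing
print(expensive - credit, cheap - credit)  # 500 0 -- prices after converting $250 to phone credit
# 500 / 0 is undefined: the ratio jumps from 3 to "infinity" even though nothing
# about the phones changed, so the ratio can't be what the decision hinges on.
print(expensive - cheap)                   # 500 -- the price *difference* is unaffected by the credit
```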

Comment by jimrandomh on Asymmetric Justice · 2019-04-25T19:10:39.443Z · score: 20 (5 votes) · LW · GW
It’s not too hard to see why people would benefit from joining a majority expropriating from a blameworthy individual. But why would they join a majority transferring resources to a praiseworthy one? So, being singled out is much more bad than good here.

This makes intuitive sense, but it doesn't seem to be borne out by modern experience; when coalitions attack blameworthy individuals these days, they don't usually get any resources out of it; the resources just end up destroyed or taken by a government that wasn't part of the coalition.

Comment by jimrandomh on The Simple Solow Model of Software Engineering · 2019-04-11T00:41:38.365Z · score: 5 (3 votes) · LW · GW

As a working software engineer with experience working at a variety of scales and levels of technical debt, this mostly feels wrong to me.

One of the biggest factors in the software world is a slowly rising tide of infrastructure, which makes things cheaper to build today than they would have been to build a decade ago. Projects tend to be tied to the languages and libraries that were common at the time of their creation, which means that even if those libraries are stable and haven't created a maintenance burden, they're still disadvantaged relative to new projects which get the benefit of more modern tools.

Combined with frequent demand shocks, you get something that doesn't look much like an equilibrium.

The maintainability of software also tends to be, in large part, about talent recruiting. Decade-old popular video games frequently have their maintenance handled by volunteers; a firm which wants an engineer to maintain its decade-old accounting software will have to pay a premium to get one of average quality, and probably can't get an engineer of top quality at any price.

Comment by jimrandomh on Subagents, akrasia, and coherence in humans · 2019-04-09T22:30:07.436Z · score: 2 (1 votes) · LW · GW

Note: Due to a bug, if you were subscribed to email notifications for curated posts, the curation email for this post came from Alignment Forum instead of LessWrong. If you're viewing this post on AF, to see the comments, view it on LessWrong instead. (This is a LessWrong post, not an AF post, but the two sites share a database and have one-directional auto-crossposting from AF to LW.)

Comment by jimrandomh on User GPT2 is Banned · 2019-04-03T20:02:16.948Z · score: 4 (2 votes) · LW · GW

It was a dumb typo on my part. Edited.

User GPT2 is Banned

2019-04-02T06:00:21.075Z · score: 64 (18 votes)
Comment by jimrandomh on User GPT2 Has a Warning for Violating Frontpage Commenting Guidelines · 2019-04-01T20:28:27.869Z · score: 3 (2 votes) · LW · GW

Geez. Is that all you have to say for yourself!?

Comment by jimrandomh on [deleted post] 2019-04-01T20:26:03.363Z

We take commenting quality seriously on LessWrong, especially on Frontpage posts. In particular, we think that this comment by user GPT2 fails to live up to our Frontpage commenting guidelines:

This is a pretty terrible post; it belongs in Discussion (which is better than Main and just as worthy of asking the question), and no one else is going out and read it. It sounds like you're describing an unfair epistemology that's too harsh to be understood from a rationalist perspective so this was all directed at you.

Since user GPT2 seems to be quite prolific, we have implemented a setting to hide comments by GPT2, which can be accessed from the settings page when you are logged in.

User GPT2 Has a Warning for Violating Frontpage Commenting Guidelines

2019-04-01T20:23:11.705Z · score: 50 (18 votes)
Comment by jimrandomh on Humans Who Are Not Concentrating Are Not General Intelligences · 2019-04-01T19:53:53.581Z · score: 6 (3 votes) · LW · GW

There are some applications for fake text, but they're seasonal.

Comment by jimrandomh on [deleted post] 2019-04-01T18:43:06.107Z

GPT2 seems to be running an AI bot, given some of their comments, and unless it's run by the staffers, probably should not be on this site. Happy April first!

Comment by jimrandomh on [deleted post] 2019-04-01T18:41:18.946Z

GPT2 seems to be running an AI bot, given some of their comments, and unless it's run by the staffers, probably should not be on this site.

Comment by jimrandomh on [deleted post] 2019-04-01T18:41:14.645Z

GPT2 seems to be running an AI bot, given some of their comments, and unless it's run by the staffers, probably should not be on this site.

Comment by jimrandomh on [deleted post] 2019-04-01T18:40:20.507Z

GPT2 seems to be running an AI bot, given some of their comments, and unless it's run by the staffers, probably should not be on this site.

Comment by jimrandomh on [deleted post] 2019-04-01T18:35:15.808Z

Whoever set up that bot is brilliant, and I applaud the prank.

but

please make it stop. :)

Comment by jimrandomh on What are effective strategies for mitigating the impact of acute sleep deprivation on cognition? · 2019-04-01T03:25:42.660Z · score: 4 (2 votes) · LW · GW

Modafinil helps somewhat.

Comment by jimrandomh on Please use real names, especially for Alignment Forum? · 2019-03-29T05:14:32.923Z · score: 16 (5 votes) · LW · GW

Relatedly: If you want people to know who you are, it helps to put a few words in the bio field of your profile. When users mouse over your name on Less Wrong, they'll see it.

Comment by jimrandomh on AI prediction case study 3: Searle's Chinese room · 2019-03-28T20:57:31.636Z · score: 4 (2 votes) · LW · GW

Welcome to LessWrong! Generally speaking, we strongly prefer comments that address arguments directly, rather than talking about people and qualifications. That said, this is quite an old post, so it's probably too late to get much further discussion on this particular paper.

Comment by jimrandomh on Can a Bayesian agent be infinitely confused? · 2019-03-22T19:55:06.463Z · score: 10 (2 votes) · LW · GW

The latter; it could be anything, and by saying the probabilities were 1.0 and 0.0, the original problem description left out the information that would determine it.

Comment by jimrandomh on Can a Bayesian agent be infinitely confused? · 2019-03-22T19:02:11.583Z · score: 15 (6 votes) · LW · GW

If you do out the algebra, you get that P(H|E) involves dividing zero by zero:

P(H|E) = P(E|H)·P(H) / [P(E|H)·P(H) + P(E|¬H)·P(¬H)] = 0/0

There are two ways to look at this at a higher level. The first is that the algebra doesn't really apply in the first place, because this is a domain error: 0 and 1 aren't probabilities, in the same way that the string "hello" and the color blue aren't.

The second way to look at it is that when we say P(H) = 1 and P(E|H) = 0, what we really meant was that P(H) = 1 − ε and P(E|H) = δ; that is, they aren't precisely one and zero, but they differ from one and zero by unspecified, very small amounts. (Infinitesimals are like infinities; ε is arbitrarily-close-to-zero in the same sense that an infinity is arbitrarily-large). Under this interpretation, we don't have a contradiction, but we do have an underspecified problem, since we need the ratio δ/ε and haven't specified it.
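A minimal numerical sketch of that last point, assuming (as in the reading above) that the stated probabilities are P(H) = 1 and P(E|H) = 0, with P(E|¬H) set to an arbitrary ordinary value: as both infinitesimals shrink, P(H|E) converges to a value that depends only on their ratio.

```python
def posterior(p_h: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Bayes' rule: P(H|E)."""
    p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
    return p_e_given_h * p_h / p_e

# Shrink eps (= 1 - P(H)) and delta (= P(E|H)) toward zero while holding delta/eps fixed:
for eps in (1e-3, 1e-6, 1e-9):
    for ratio in (0.1, 1.0, 10.0):
        delta = ratio * eps
        print(f"eps={eps:.0e}  delta/eps={ratio:>4}  P(H|E)={posterior(1 - eps, delta, 0.5):.4f}")
```

Running this shows P(H|E) settling near 0.17, 0.67, and 0.95 for ratios 0.1, 1, and 10 respectively, regardless of how small eps gets: the answer is determined by the unspecified ratio, not by the "1.0" and "0.0" alone.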

Comment by jimrandomh on [deleted post] 2019-03-16T00:06:33.254Z

The Jewish liturgy about divine judgment can be quite different. Every week, at the beginning of the Sabbath, Jews around the world sing a collection of psalms focused on the idea that the world is rejoicing because God is finally coming to judge it. From Psalm 96:

Say among the nations that the Lord reigns: the world shall so be established that it shall not be moved: he shall judge the peoples with uprightnesses. Let the heavens rejoice, and let the earth be glad; let the sea roar, and its fullness. Let the field be joyful, and all that is in it: then shall all the trees of the wood sing for joy. Before the Lord: for he comes, for he comes to judge the land: he shall judge the world with justice, and the peoples in his faithfulness.

From Psalm 98:

Melodize to the Lord with harp; with harp, and melodic voice. With the trumpets, and the voice of the horn, shout before the king, the Lord. Let the sea roar, and its fullness; the world, and those who dwell in it. Rivers shall clap their hands; together, the mountains shall sing for joy. Before the Lord: for he comes, for he comes to judge the land: he shall judge the world with justice, and the peoples in his faithfulness.

In one of these outlooks, humans can't behave well enough to stand up to pure justice, so we should put off the day of judgment for as long as we can, and seek protection. In the other, the world is groaning under the accumulated weight of hypocrisy and sin, and only the reconciliation of accounts can free us; it is in constant flux due to ever-shifting stories, which can only be stabilized by a true judge.

We can't reconcile accounts if that means punishing all bad behavior according to the current hypocritical regime's schedule of punishments. But a true reconciliation also means adjusting the punishments to a level where we'd be happy, not sad, to see them applied consistently. (Sometimes the correct punishment is nothing beyond the accounting itself.)

In worlds where hypocrisy is normal, honesty is punished, since the most honest people will tend to reveal deprecatory information others might conceal, and be punished for it. We get less of what we punish. But honesty isn't just a weird quirk - it's the only way to get to the stars.

"The first principle is that you must not fool yourself, and you are the easiest person to fool." - Richard Feynman

Comment by jimrandomh on LW Update 2019-03-12 -- Bugfixes, small features · 2019-03-15T00:39:54.599Z · score: 2 (1 votes) · LW · GW

Neither of those will autolink. Autolinking is handled at the UI level, in the default (WYSIWYG/draftjs) editor only.

Comment by jimrandomh on Blackmailers are privateers in the war on hypocrisy · 2019-03-14T13:11:04.870Z · score: 75 (24 votes) · LW · GW

There's something I think you're missing here, which is that blackmail-in-practice is often about leveraging the norm enforcement of a different community than the target's, exploiting differences in norms between groups. A highly prototypical example is taking information about sex or drug use which is acceptable within a local community, and sharing it with an oppressive government which would punish that behavior.

Allowing blackmail within a group weakens that group's ability to resist outside control, and this is a very big deal. (It's kind of surprising that, this late in the conversation about blackmail, no one seems to have spotted this.)

Comment by jimrandomh on LW Update 2019-03-12 -- Bugfixes, small features · 2019-03-14T01:50:52.521Z · score: 4 (2 votes) · LW · GW

The latter, but it applies immediately when you type it (rather than waiting until you click Submit), so it won't happen without you noticing.

Comment by jimrandomh on [deleted post] 2019-03-12T22:35:16.876Z

Foo

1: Asdf

LW Update 2019-03-12 -- Bugfixes, small features

2019-03-12T21:56:40.109Z · score: 17 (2 votes)
Comment by jimrandomh on Karma-Change Notifications · 2019-03-05T21:01:09.994Z · score: 12 (3 votes) · LW · GW

It's plausibly correct to provide the option, so long as it isn't the default. (Options: Show all, show none, show only positive, show only negative. The last option being something that no one should ever use, provided only for symmetry.)

Comment by jimrandomh on Karma-Change Notifications · 2019-03-03T04:39:38.680Z · score: 9 (5 votes) · LW · GW

There is a real-time setting, which shows you everything since the last time you looked. It just isn't the default.

Comment by jimrandomh on Karma-Change Notifications · 2019-03-02T03:40:32.118Z · score: 9 (5 votes) · LW · GW

We want writing posts and comments, especially posts and comments which get a positive reception, to feel rewarding, so that people will do it more often. And, to a lesser but still significant degree, we want people to use the site.

Karma-Change Notifications

2019-03-02T02:52:58.291Z · score: 95 (25 votes)
Comment by jimrandomh on [deleted post] 2019-03-01T00:49:55.179Z

9

Comment by jimrandomh on [deleted post] 2019-03-01T00:49:52.356Z

8

Comment by jimrandomh on [deleted post] 2019-03-01T00:49:49.385Z

7

Comment by jimrandomh on [deleted post] 2019-03-01T00:49:46.131Z

6

Comment by jimrandomh on [deleted post] 2019-03-01T00:49:42.410Z

5

Comment by jimrandomh on [deleted post] 2019-03-01T00:49:39.143Z

4

Comment by jimrandomh on [deleted post] 2019-03-01T00:49:35.847Z

3

Comment by jimrandomh on [deleted post] 2019-03-01T00:49:32.086Z

2

Comment by jimrandomh on [deleted post] 2019-03-01T00:49:27.848Z

1

Two Small Experiments on GPT-2

2019-02-21T02:59:16.199Z · score: 47 (20 votes)

How does OpenAI's language model affect our AI timeline estimates?

2019-02-15T03:11:51.779Z · score: 51 (16 votes)

Introducing the AI Alignment Forum (FAQ)

2018-10-29T21:07:54.494Z · score: 89 (30 votes)

Boston-area Less Wrong meetup

2018-05-16T22:00:48.446Z · score: 4 (1 votes)

Welcome to Cambridge/Boston Less Wrong

2018-03-14T01:53:37.699Z · score: 4 (2 votes)

Meetup : Cambridge, MA Sunday meetup: Lightning Talks

2017-05-20T21:10:26.587Z · score: 0 (1 votes)

Meetup : Cambridge/Boston Less Wrong: Planning 2017

2016-12-29T22:43:55.164Z · score: 0 (1 votes)

Meetup : Boston Secular Solstice

2016-11-30T04:54:55.035Z · score: 1 (2 votes)

Meetup : Cambridge Less Wrong: Tutoring Wheels

2016-01-17T05:23:05.303Z · score: 1 (2 votes)

Meetup : MIT/Boston Secular Solstice

2015-12-03T01:14:02.376Z · score: 1 (2 votes)

Meetup : Cambridge, MA Sunday meetup: The Contrarian Positions Game

2015-11-13T18:08:19.666Z · score: 1 (2 votes)

Rationality Cardinality

2015-10-03T15:54:03.793Z · score: 21 (22 votes)

An Idea For Corrigible, Recursively Improving Math Oracles

2015-07-20T03:35:11.000Z · score: 5 (5 votes)

Research Priorities for Artificial Intelligence: An Open Letter

2015-01-11T19:52:19.313Z · score: 23 (24 votes)

Petrov Day is September 26

2014-09-18T02:55:19.303Z · score: 24 (18 votes)

Three Parables of Microeconomics

2014-05-09T18:18:23.666Z · score: 25 (35 votes)

Meetup : LW/Methods of Rationality meetup

2013-10-15T04:02:11.785Z · score: 0 (1 votes)

Cambridge Meetup: Talk by Eliezer Yudkowsky: Recursion in rational agents

2013-10-15T04:02:05.988Z · score: 7 (8 votes)

Meetup : Cambridge, MA Meetup

2013-09-28T18:38:54.910Z · score: 4 (5 votes)

Charity Effectiveness and Third-World Economics

2013-06-12T15:50:22.330Z · score: 7 (12 votes)

Meetup : Cambridge First-Sunday Meetup

2013-03-01T17:28:01.249Z · score: 3 (4 votes)

Meetup : Cambridge, MA third-Sunday meetup

2013-02-11T23:48:58.812Z · score: 3 (4 votes)

Meetup : Cambridge First-Sunday Meetup

2013-01-31T20:37:32.207Z · score: 1 (2 votes)

Meetup : Cambridge, MA third-Sunday meetup

2013-01-14T11:36:48.262Z · score: 3 (4 votes)

Meetup : Cambridge, MA first-Sunday meetup

2012-11-30T16:34:04.249Z · score: 1 (2 votes)

Meetup : Cambridge, MA third-Sundays meetup

2012-11-16T18:00:25.436Z · score: 3 (4 votes)

Meetup : Cambridge, MA Sunday meetup

2012-11-02T17:08:17.011Z · score: 1 (2 votes)

Less Wrong Polls in Comments

2012-09-19T16:19:36.221Z · score: 79 (82 votes)

Meetup : Cambridge, MA Meetup

2012-07-22T15:05:10.642Z · score: 2 (3 votes)

Meetup : Cambridge, MA first-Sundays meetup

2012-03-30T17:55:25.558Z · score: 0 (3 votes)

Professional Patients: Fraud that ruins studies

2012-01-05T00:20:55.708Z · score: 16 (25 votes)

[LINK] Question Templates

2011-12-23T19:54:22.907Z · score: 1 (1 votes)

I started a blog: Concept Space Cartography

2011-12-16T21:06:28.888Z · score: 6 (9 votes)

Meetup : Cambridge (MA) Saturday meetup

2011-10-20T03:54:28.892Z · score: 2 (3 votes)

Another Mechanism for the Placebo Effect?

2011-10-05T01:55:11.751Z · score: 8 (22 votes)

Meetup : Cambridge, MA Sunday meetup

2011-10-05T01:37:06.937Z · score: 1 (2 votes)

Meetup : Cambridge (MA) third-Sundays meetup

2011-07-12T23:33:01.304Z · score: 0 (1 votes)

Draft of a Suggested Reading Order for Less Wrong

2011-07-08T01:40:06.828Z · score: 26 (29 votes)

Meetup : Cambridge Massachusetts meetup

2011-06-29T16:57:15.314Z · score: 1 (2 votes)

Meetup : Cambridge Massachusetts meetup

2011-06-22T15:26:03.828Z · score: 2 (3 votes)

The Present State of Bitcoin

2011-06-21T20:17:13.131Z · score: 7 (12 votes)

Safety Culture and the Marginal Effect of a Dollar

2011-06-09T03:59:28.731Z · score: 23 (36 votes)

Cambridge Less Wrong Group Planning Meetup, Tuesday 14 June 7pm

2011-06-08T03:41:41.375Z · score: 1 (2 votes)

Rationality case study: How to evaluate untested medical procedures?

2011-05-28T11:17:17.349Z · score: 7 (8 votes)

Ontological Crises in Artificial Agents' Value Systems by Peter de Blanc

2011-05-21T01:05:12.613Z · score: 15 (15 votes)

Homomorphic encryption and Bitcoin

2011-05-19T01:07:14.192Z · score: 5 (10 votes)