Comments

Comment by ScottMessick on Superintelligence Reading Group - Section 1: Past Developments and Present Capabilities · 2014-09-17T23:55:14.293Z · LW · GW

I was disappointed to see my new favorite "pure" game Arimaa missing from Bostrom's list. Arimaa was designed to be intuitive for humans but difficult for computers, making it a good test case. Indeed, I find it to be very fun, and computers do not seem to be able to play it very well. In particular, computers are nowhere close to beating top humans despite the fact that there has arguably been even more effort to make good computer players than good human players.

Arimaa's branching factor dwarfs that of Go (which in turn beats every other commonly known example). Since a super-high branching factor is also a characteristic feature of general AI test problems, I think it remains plausible that simple, precisely defined games like Arimaa are good test cases for AI, as long as the branching factor keeps the game out of reach of brute force search.
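
As a rough illustration of why the branching factor matters for brute-force search (the figures below are commonly cited approximate averages, not exact values):

```python
# Commonly cited approximate average branching factors (rough figures).
branching = {"chess": 35, "go": 250, "arimaa": 17000}

depth = 4  # a shallow four-ply full-width lookahead
for game, b in branching.items():
    # Leaf positions a naive full-width search must examine at this depth.
    print(f"{game}: {b}^{depth} = {b ** depth:,} positions")
```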

Comment by ScottMessick on The dangers of zero and one · 2013-11-25T05:03:52.403Z · LW · GW

The Pentium FDIV bug was actually discovered by someone writing code to compute prime numbers.

Comment by ScottMessick on Harry Potter and the Methods of Rationality Bookshelves · 2013-03-19T01:11:50.969Z · LW · GW

Suggestions for Slytherin: Sun Tzu's Art of War and some Nietzsche, maybe The Will to Power?

Suggestion for Ravenclaw: An Enquiry Concerning Human Understanding, David Hume.

Comment by ScottMessick on Constructive mathemathics and its dual · 2013-03-03T23:11:45.086Z · LW · GW

The post seems to confuse the law of non-contradiction with the principle of explosion. To understand this point, it helps to know about minimal logic, which is like intuitionistic logic but even weaker, as it treats falsity (⊥) the same way as any other primitive predicate. Minimal logic rejects the principle of explosion as well as the law of the excluded middle (LEM, which the main post called TND).

The law of non-contradiction (LNC) is just ¬(P ∧ ¬P). (In the main post this is called ECQ, which I believe is erroneous; ECQ should refer to the principle of explosion (especially the second form).) The principle of explosion is either ⊥ → Q or (P ∧ ¬P) → Q. These two forms are equivalent in minimal logic (due to the law of non-contradiction). As mentioned above, minimal logic has the law of non-contradiction but not the principle of explosion, so this shows that they're not equivalent in every circumstance. Rejecting the principle of explosion (especially the second form) is the defining feature of paraconsistent logics (a class into which many logics fall). Some of these still validate the law of non-contradiction. Anti-intuitionistic logic does not, because LNC is dual to LEM, which is invalid intuitionistically.
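
For reference, a compact summary of the principles involved (standard formulations; the notes on which logics validate which simply restate the claims above):

```latex
\begin{align*}
\text{LNC (non-contradiction):} &\quad \neg(P \wedge \neg P) \\
\text{Explosion, form 1:}       &\quad \bot \to Q \\
\text{Explosion, form 2:}       &\quad (P \wedge \neg P) \to Q \\
\text{LEM (excluded middle):}   &\quad P \vee \neg P
\end{align*}
% Minimal logic: LNC holds; explosion and LEM fail.
% Intuitionistic logic: LNC and explosion hold; LEM fails.
% Classical logic: all three hold.
% Paraconsistent logics: explosion is rejected by definition; LNC may or may not
% hold (it fails in the anti-intuitionistic case, being dual to LEM).
```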

Ok, so I ended up taking a lot of time researching that nitpick so I could say it correctly. Anyway, I'm curious to see where this is going.

Comment by ScottMessick on Farewell Aaron Swartz (1986-2013) · 2013-01-24T21:31:10.973Z · LW · GW

Super-upvoted.

Comment by ScottMessick on Proofs, Implications, and Models · 2012-11-01T05:38:23.855Z · LW · GW

I'm not going to say they haven't been exposed to it, but I think quite few mathematicians have ever developed a basic appreciation and working understanding of the distinction between syntactic and semantic proofs.

Model theory is, very rarely, successfully applied to solve a well-known problem outside logic, but you would have to sample many random mathematicians before you could find one that could tell you exactly how, even if you restricted to only asking mathematical logicians.

I'd like to add that in the overwhelming majority of academic research in mathematical logic, the syntax-semantics distinction is not at all important, and syntax is suppressed as much as possible as an inconvenient thing to deal with. This is true even in model theory. Now, one often needs to discuss formulas and theories, but a syntactic proof need never be considered. First-order logic is dominant, and the completeness theorem (together with soundness) shows that syntactic implication is equivalent to semantic implication.
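
For concreteness, the equivalence being appealed to is the standard soundness-plus-completeness statement for first-order logic:

```latex
\[
  T \vdash \varphi \quad\Longleftrightarrow\quad T \models \varphi
\]
% Left to right is soundness; right to left is Gödel's completeness theorem.
% Since syntactic and semantic consequence coincide, the distinction can usually
% be suppressed in practice, as described above.
```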

If I had to summarize what modern research in mathematical logic is like, I'd say that it's about increasingly elaborate notions of complexity (of problems or theorems or something else), and proving that certain things have certain degrees of complexity, or that the degrees of complexity themselves are laid out in a certain way.

There are however a healthy number of logicians in computer science academia who care a lot more about syntax, including proofs. These could be called mathematical logicians, but the two cultures are quite different.

(I am a math PhD student specializing in logic.)

Comment by ScottMessick on 2012 Less Wrong Census Survey: Call For Critiques/Questions · 2012-10-23T03:38:20.423Z · LW · GW

The explanation for the "number of partners" question is problematic right now. It reads "0 for single, 1 for monogamous relationship, >1 for polyamorous relationship", which makes it sound like you must be monogamous if you happen to have 1 partner. I am polyamorous, have one partner, and am looking for more.

In fact, I started wondering if it really meant "ideal number of partners", in which case I'd be tempted to put the name of a large cardinal.

Comment by ScottMessick on 2012 Less Wrong Census Survey: Call For Critiques/Questions · 2012-10-23T03:29:24.591Z · LW · GW

I continue to be surprised (I believe I commented on this last year) that under "Academic fields" pure mathematics is not listed on its own; it is also not clear to me that pure mathematics is a hard science; relatedly, are non-computer science engineering folk expected to write in answers?

I second this: please include pure mathematics. I imagine there are a fair few of us, and there's no agreed upon way to categorize it. I remember being annoyed about this last year. (I'm pretty sure I marked "hard sciences".)

Comment by ScottMessick on High School Lecture - Report · 2012-09-24T02:42:23.590Z · LW · GW

I wonder how it would be if you asked "When should we say a statement is true?" instead of "What is truth?", and whether your classmates would think them the same (or at least closely related) questions.

Comment by ScottMessick on [deleted post] 2012-08-23T23:22:54.422Z

I think this hypothesis is worth bearing in mind. However, it doesn't explain advancedatheist's observation that wealthy cryonicists are eager to put a lot of money in revival trusts (whose odds of success are dubious, even if cryonics works) rather than donate to improve cryonics research or the financial viability of cryonics organizations.

Comment by ScottMessick on [Link] Reddit, help me find some peace I'm dying young · 2012-08-20T16:34:09.807Z · LW · GW

I was mainly worried that she would suffer information-theoretic death (or substantial degradation) before she could be cryopreserved.

Comment by ScottMessick on [Link] Reddit, help me find some peace I'm dying young · 2012-08-19T02:45:16.658Z · LW · GW

What about the brain damage her tumor is causing?

This seems important and I'm a little surprised no one's asked. How will her brain damage impact her chances of revival? (From the blog linked in the reddit post, it sounds like she is already experiencing symptoms.) Obviously she is quite mentally competent right now, but what about when she is declared legally dead? I am far from an expert and simply would like to hear some authoritative commentary on this. I am interested in donating, but only if there's a reasonable chance brain damage won't make it pointless.

Comment by ScottMessick on Solving the two envelopes problem · 2012-08-09T03:18:02.692Z · LW · GW

This is a really good exposition of the two envelopes problem. I recall reading a lot about that when I first heard it, and didn't feel that anything I read satisfactorily resolved it, which this does. I particularly liked the more precise recasting of the problem at the beginning.

(It sounds like some credit is also due to VincentYu.)

Comment by ScottMessick on [Link] Why prison doesn't work and what to do about it · 2012-08-03T05:11:21.259Z · LW · GW

I haven't read the article, but I want to point out that prisons are enormously costly. So there is still much to gain potentially even if the new system is only equally effective at deterrence and rehabilitation.

The fact that prisons are inhumane is another issue, of course.

Comment by ScottMessick on What Is Signaling, Really? · 2012-07-12T18:54:23.286Z · LW · GW

I had long ago (but after being heavily influenced by Overcoming Bias) thought that signaling could be seen simply as a corollary to Bayes' theorem. That is, when one says something, one knows that its effect on a listener will depend on the listener's rational updating on the fact that one said it. If one wants the listener to behave as if X is true, one should say something that the listener would only expect in case X is true.

Thinking in this way, one quickly arrives at conclusions like "oh, so hard-to-fake signals are stronger" and "if everyone starts sending the same signal in the same way, that makes it a lot weaker", which test quite well against observations of the real world.
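
In odds form (my own gloss; nothing here beyond Bayes' theorem):

```latex
\[
  \underbrace{\frac{P(X \mid \text{said } s)}{P(\neg X \mid \text{said } s)}}_{\text{posterior odds}}
  \;=\;
  \underbrace{\frac{P(\text{said } s \mid X)}{P(\text{said } s \mid \neg X)}}_{\text{likelihood ratio}}
  \times
  \underbrace{\frac{P(X)}{P(\neg X)}}_{\text{prior odds}}
\]
% A hard-to-fake signal is one with P(said s | not X) small, so the likelihood
% ratio is large and the listener updates strongly. If everyone sends s regardless
% of X, the ratio tends to 1 and the signal carries almost no information.
```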

Powerful corollary: we should expect signaling, along with these basic properties, to be prominent in any group of intelligent minds. For example, math departments and alien civilizations. (Non-example: solitary AI foom.)

Comment by ScottMessick on Reply to Holden on The Singularity Institute · 2012-07-11T18:25:08.379Z · LW · GW

I'm really glad you pointed out that SI's strategy is not predicated on hard take-off. I don't recall if this has been discussed elsewhere, but that's something that always bothered me, since I think hard take-off is relatively unlikely. (Admittedly, soft take-off still considerably diminishes my estimate of the expected impact of SI and of donating to it.)

Comment by ScottMessick on Interlude for Behavioral Economics · 2012-07-06T20:34:16.511Z · LW · GW

But this elegant simplicity was, like so many other things, ruined by the Machiguenga Indians of eastern Peru.

Wait, is this a joke, or have the Machiguenga really provided counterexamples to lots of social science hypotheses?

Comment by ScottMessick on Rationality Quotes July 2012 · 2012-07-04T17:50:09.932Z · LW · GW

These phrases are mainly used in near mode, or when trying to induce near mode. The phenomenon described in the quote is a feature (or bug) of far mode.

Comment by ScottMessick on Suggest alternate names for the "Singularity Institute" · 2012-06-19T18:04:14.136Z · LW · GW

I have direct experience of someone highly intelligent, a prestigious academic type, dismissing SI out of hand because of its name. I would support changing the name.

Almost all the suggestions so far attempt to work the idea of safety or friendliness into the name. I think this might be a mistake, because for people who haven't thought about it much, this invokes images of Hollywood. Instead, I propose having the name imply that SI does some kind of advanced, technical research involving AI and is prestigious, perhaps affiliated with a university (think IAS).

Center for Advanced AI Research (CAAIR)

Comment by ScottMessick on Neil deGrasse Tyson on Cryonics · 2012-06-19T00:56:22.188Z · LW · GW

Summary: Expanding on what maia wrote, I find it plausible that many people could produce good technical arguments against cryonics but don't simply because they're not writing about cryonics at all.

I was defending maia's point that there are many people who are uninterested in cryonics and don't think it will work. This class probably includes lots of people who have relevant expertise as well. So while there are a lot of people who develop strong anti-cryonics sentiments (and say so), I suspect they're only a minority of the people who don't think cryonics will work. So the fact that the bulk of anti-cryonics writings lack a tenable technical argument is only weak evidence that no one can produce one right now. It's just that the people who can produce them aren't interested enough to bother writing about cryonics at all.

I wholeheartedly agree that we should encourage people who may have them to write up strong technical arguments why cryonics won't work.

Comment by ScottMessick on Neil deGrasse Tyson on Cryonics · 2012-06-17T02:49:00.478Z · LW · GW

I think you may be missing a silent majority of people who passively judge cryonics as unlikely to work, and do not develop strong feelings or opinions about it besides that, because they have no reason to. I think this category, together with "too expensive to think about right now", forms the bulk of intelligent friends with whom I've discussed cryonics.

Comment by ScottMessick on I've had it with those dark rumours about our culture rigorously suppressing opinions · 2012-01-26T22:48:10.564Z · LW · GW

Wow, when I read "should not be treated differently from those issues", I assumed the intention was likely to be "child acting, indoctrination, etc., should be considered abuse and not tolerated by society", a position I would tentatively support (tentatively due to lack of expertise).

Incidentally, I found many of the other claims to be at least plausible and discussion-worthy, if not probably true (and certainly not things that people should be afraid to say).

Comment by ScottMessick on The Singularity Institute's Arrogance Problem · 2012-01-21T23:47:14.667Z · LW · GW

Yes, and isn't it interesting to note that Robin Hanson sought his own higher degrees for the express purpose of giving his smart contrarian ideas (and way of thinking) more credibility?

Comment by ScottMessick on The Singularity Institute's Arrogance Problem · 2012-01-21T23:28:04.042Z · LW · GW

One issue is that the same writing sends different signals to different people. I remember thinking about free will early in life (my parents thought they'd tease me with the age-old philosophical question) and, a little later in life, thinking that I had basically solved it--that people were simply thinking about it the wrong way. People around me often didn't accept my solution, but I was never convinced that they even understood it (not due to stupidity, but failure to adjust their perspective in the right way), so my confidence remained high.

Later I noticed that my solution is a standard kind of "compatibilist" position, which is given equal attention by philosophers as many other positions and sub-positions, fiercely yet politely discussed without the slightest suggestion that it is a solution, or even more valid than other positions except as the one a particular author happens to prefer.

Later I noticed that my solution was also independently reached and exposited by Eliezer Yudkowsky (on Overcoming Bias before LW was created, if I remember correctly). The solution was clearly presented as such--a solution--and one which is easy to find with the right shift in perspective--that is, an answer to a wrong question. I immediately significantly updated the likelihood of the same author having further useful intellectual contributions, to my taste at least, and found the honesty thoroughly refreshing.

Comment by ScottMessick on Intuition and Mathematics · 2012-01-02T05:03:34.859Z · LW · GW

What is "intuition" but any set of heuristic approaches to generating conjectures, proofs, etc., and judging their correctness, which isn't a naive search algorithm through formulas/proofs in some formal logical language? At a low level, all mathematics, including even the judgment of whether a given proof is correct (or "rigorous"), is done by intuition (at least, when it is done by humans). I think in everyday usage we reserve "intuition" for relatively high level heuristics, guesses, hunches, and so on, which we can't easily break down in terms of simpler thought processes, and this is the sort of "intuition" that Terence Tao is discussing in those quotes. But we should recognize that even regarding the very basics of what it means to accept that a proof is correct, we are using the same kinds of thought processes, scaled down.

Few mathematicians want to bother with actual formal logical proofs, whether producing them or reading them.
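
To illustrate what is being avoided, here is what a completely formal proof object looks like for even a tiny statement (a toy example in Lean; real theorems blow up far more than this):

```lean
-- The law of non-contradiction, as a fully explicit formal proof term.
-- ¬(P ∧ ¬P) unfolds to (P ∧ ¬P) → False; the term below inhabits that type.
theorem lnc (P : Prop) : ¬(P ∧ ¬P) :=
  fun h => h.2 h.1
```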

(And there's an even subtler issue, that logicians don't have any one really convincing formal foundation to offer, and Gödel's theorem makes it hard to know which ones are even consistent--if ZFC turned out to be inconsistent, would that mean that most of our math is wrong? Probably not, but since people often cite ZFC as being the formal logical basis for their work, what grounds do we have for this prediction?)

Comment by ScottMessick on No one knows what Peano arithmetic doesn't know · 2011-12-18T03:51:45.468Z · LW · GW

Imagine you have an oracle that can determine if an arbitrary statement is provable in Peano arithmetic. Then you can try using it as a halting oracle: for an arbitrary Turing machine T, ask "can PA prove that there's an integer N such that T makes N steps and then halts?". If the oracle says yes, you know that the statement is true for standard integers because they're one of the models of PA, therefore N is a standard integer, therefore T halts. And if the oracle says no, you know that there's no such standard integer N because otherwise the oracle would've found a long and boring proof involving the encoding of N as SSS...S0, therefore T doesn't halt. So your oracle can indeed serve as a halting oracle.

I don't think this works. We can't expect PA to decide whether or not any given Turing machine halts. For example, there is a machine which enumerates the theorems proven by PA and halts if it ever encounters a proof of 0=1. By incompleteness, PA will not prove that this machine halts. (I'm assuming PA is consistent.) This argument works for any stronger consistent theory as well, such as ZFC or even much stronger ones. Note: I basically stole this argument from Scott Aaronson.
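
A sketch of that machine (illustrative only; enumerate_pa_proofs is a hypothetical stand-in for a genuine effective enumeration of PA's proofs):

```python
def consistency_checker(enumerate_pa_proofs):
    """Halts if and only if PA proves 0 = 1, i.e. iff PA is inconsistent.

    enumerate_pa_proofs is assumed to be a generator yielding
    (proof, conclusion) pairs for every valid PA proof in some effective
    order -- a hypothetical stand-in for a real proof enumerator.
    """
    for proof, conclusion in enumerate_pa_proofs():
        if conclusion == "0 = 1":
            return proof  # halt: an inconsistency was found
    # If PA is consistent, the loop above never terminates.
```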

Note that this is different from the question of whether or not the halting problem is reducible to the set of theorems of PA (i.e. whether or not the oracle you've specified is enough to compute whether or not a given TM halts). It's just that this particular approach does not give such an algorithm.

ETA: I was in error, see replies. In the OP, PA doesn't need to prove that a non-halting machine doesn't halt, it only needs to fail to prove that it halts (and it certainly does, if we believe PA is sound).

Comment by ScottMessick on What independence between ZFC and P vs NP would imply · 2011-12-09T00:25:55.061Z · LW · GW

Scott Aaronson (a well-known complexity theorist) has written a survey article about exactly this question.

Comment by ScottMessick on Alzheimer's vs Cryonics · 2011-08-28T18:38:17.453Z · LW · GW

Upvoted for accuracy. My maternal grandmother is the same way, and just the resulting politics in my mother's family over how to deal with her empty shell are unpleasant, let alone the fact that she died so slowly that hardly anyone acknowledged it as it was happening.

Comment by ScottMessick on The basic questions of rationality · 2011-08-23T01:49:35.290Z · LW · GW

I think you mean, "When is it irrational to study rationality explicitly?"

Comment by ScottMessick on The basic questions of rationality · 2011-08-23T01:45:46.604Z · LW · GW

For me there is always the lurking suspicion that my biggest reason for reading LessWrong is that it's a whole lot of fun.

Comment by ScottMessick on For fiction: How could alien minds differ from human minds? · 2011-08-21T16:34:01.389Z · LW · GW

Do you mean what Eliezer calls the Machiavellian intelligence hypothesis? (That is, human intelligence evolved via runaway sexual selection--people who were politically competent were more reproductively successful, and as people got better and better at the game of politics, so too did the game get harder and harder, hence the feedback loop.)

Perhaps a species could evolve intelligence without such a mechanism, if something about their environment is just dramatically more complex in a peculiar way compared with ours, so that intelligence was worthwhile just for non-social purposes. The species' ancestors may have been large predators on top of the food chain, where members are typically solitary and territorial and hunt over a large region of land, with its ecosystem strangely different from ours in some way that I'm not specifying (but you'd need a pretty clever idea about it to make this whole thing work).

These aliens wouldn't be inherently social the way humans are, but they wouldn't be antisocial either--they would have noticed that cooperation allows them to solve even more difficult problems and get even more from their environment. (Still in a pre-technological stage. Remember, something about this environment is such that it provides a nearly smooth, practically unbounded (in difficulty) array of puzzles/problems to solve with increasing rewards.) Eventually, they may build a civilization just to more efficiently organize to obtain these benefits, which will also allow them to advance technologically. (I'm probably drawing too sharp of a distinction between technological advancement and interaction with their strangely complex environment.)

They might lack the following trait that is very central to human nature: affection through familiarity. When we spend a lot of time with a person/thing/activity/idea, we grow fond of it. They might not have this, or they might not have it in such generality (e.g. they might still have it for, say, mates, if they reproduce sexually). They might also be a lot less biased than we are by social considerations, for the obvious reason, but perhaps they have less raw cognitive horsepower (their environment being no substitute for the human pastimes of politics and sex).

Recklessly speculative, obviously, but I gather that's all we can hope to offer to Solvent.

Comment by ScottMessick on Please do not downvote every comment or post someone has ever made as a retaliation tactic. · 2011-08-21T15:56:17.911Z · LW · GW

Ah, now I feel extremely silly. The irony did not occur to me; it was simply a long comment that I agreed with completely, and I wasn't satisfied merely upvoting it because it didn't have any (other) upvotes yet at the time. Plus, doubly ironically, I was on a moral crusade to defend the karma system...

Comment by ScottMessick on Please do not downvote every comment or post someone has ever made as a retaliation tactic. · 2011-08-21T15:24:43.148Z · LW · GW

Right on.

Comment by ScottMessick on Please do not downvote every comment or post someone has ever made as a retaliation tactic. · 2011-08-21T15:23:24.135Z · LW · GW

Imagine a thousand professional philosophers would join lesswrong, or worse, a thousand creationists.

This test seems rather unfair--it's pretty much a given that people who join LessWrong are likely to be already sympathetic to LessWrong's way of thinking. Besides, the only way to avoid a situation where thousands of dissidents joining could wreck the system is to have centralized power, i.e., more traditional moderation, which I think we were hoping to avoid for exactly the types of reasons that are being brought up here (politics, etc.).

The availability of a reputation system also discourages people to actually explain themselves by being able to let off steam or ignore cognitive dissonance by downvoting someone with a single mouse click.

True, but I think you have missed a positive incentive for response that is created by the reputation system in addition to the negative ones--a post/comment with a bad argument or worse creates an opportunity to win karma by writing a clear refutation, and I frequently see such responses being highly upvoted.

The initial population of a community might have been biased about something and the reputation system might provide a positive incentive to keep the bias and a negative incentive for those who disagree.

This is a problem, but based purely on my subjective experience it seems that people are more than willing to upvote posts that try to shatter a conventional LessWrong belief, and do so with good argumentation.

Comment by ScottMessick on Please do not downvote every comment or post someone has ever made as a retaliation tactic. · 2011-08-21T14:52:09.353Z · LW · GW

Why all the karma bashing? Yes, absolutely, people will upvote or downvote for political reasons and be heavily influenced by the name behind the post/comment. All the time. But as far as I can tell, politics is a problem with any evaluation system whatsoever, and karma does remarkably well. In my experience, post and comment scores are strongly correlated with how useful I find them, how much they contribute to my experience of the discussion. And the list of top contributors is full of people who have written posts that I have saved forever, that in many cases irreversibly impacted my thinking. The fact that EY is sometimes deservingly downvoted is a case in point. The abuse described in the original post is unfortunate, but overall the LessWrong system does a difficult job incredibly well.

Comment by ScottMessick on What a practical plan for Friendly AI looks like · 2011-08-20T15:11:07.415Z · LW · GW

I have seen too many discussions of Friendly AI, here and elsewhere (e.g. in comments at Michael Anissimov's blog), detached from any concrete idea of how to do it....

At present, it is discussed in conjunction with a whole cornucopia of science fiction notions such as: immortality, conquering the galaxy, omnipresent wish-fulfilling super-AIs, good and bad Jupiter-brains, mind uploads in heaven and hell, and so on. Similarly, we have all these thought-experiments: guessing games with omniscient aliens, decision problems in a branching multiverse, "torture versus dust specks". Whatever the ultimate relevance of such ideas, it is clearly possible to divorce the notion of Friendly AI from all of them....

SIAI, in discussing the quest for the right goal system, emphasizes the difficulties of this process and the unreliability of human judgment. Their idea of a solution is to use artificial intelligence to neuroscientifically deduce the actual algorithmic structure of human decision-making, and to then employ a presently nonexistent branch of decision theory to construct a goal system embodying ideals implicit in the unknown human cognitive algorithms.

In short, there is a dangerous and almost universal tendency to think about FAI (and AGI generally) primarily in far mode. Yes!

However, I'm less enamored with the rest of your post. The reason is that building AGI is simply an altogether higher-risk activity than traveling to the moon. Using "build a chemical powered rocket" as your starting point for getting to the moon is reasonable in part because the worst that could plausibly happen is that the rocket will blow up and kill a lot of volunteers who knew what they were getting into. In the case of FAI, Eliezer Yudkowsky has taken great pains to show that the slightest, subtlest mistake, one which could easily pass through any number of rounds of committee decision making, coding, and code checking, could lead to existence failure for humanity. He has also taken pains to show that approaches to the problem which entire committees have in the past thought were a really good idea, would also lead to such a disaster. As far as I can tell, the LessWrong consensus agrees with him on the level of risk here, at least implicitly.

There is another approach. My own research pertains to automated theorem proving, and its biggest application, software verification. We would still need to produce a formal account of the invariants we'd want the AGI to preserve, i.e., a formal account of what it means to respect human values. When I say "formal", I mean it: a set of sentences in a suitable formal symbolic logic, carefully chosen to suit the task at hand. Then we would produce a mathematical proof that our code preserves the invariants, or, more likely, we would use techniques for producing the code and the proof at the same time. So we'd more or less have a mathematical proof that the AI is Friendly. I don't know how the SIAI is trying to think about the problem now, exactly, but I don't think Eliezer would be satisfied by anything less certain than this sort of approach.
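
Schematically, the kind of guarantee a verification approach aims for is the standard invariant-preservation pattern (my own rendering, not a claim about how SIAI frames it):

```latex
\[
  I(s_0) \;\wedge\; \big(\forall s.\; I(s) \rightarrow I(\mathrm{step}(s))\big)
  \;\Longrightarrow\; \forall n.\; I\big(\mathrm{step}^{\,n}(s_0)\big)
\]
% Prove the invariant I of the initial state and prove that every transition
% preserves it; induction then gives I for all reachable states. The hard part,
% as noted below, is writing down an I that actually formalizes "respects human
% values."
```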

Not that this outline, at this point, is satisfactory. The formalization of human value is a massive problem, and arguably where most of the trouble lies anyway. I don't think anyone's ever solved anything even close to this. But I'd argue that this outline does clarify matters a bit, because we have a better idea what a solution to this problem would look like. And it makes it clear how dangerous the loose approach recommended here is: virtually all software has bugs, and a non-verified recursively self-improving AI could magnify a bug in its value system until it no better approximates human values than does paperclip-maximizing. Moreover, the formal proof doesn't do anyone a bit of good if the invariants were not designed correctly.

Comment by ScottMessick on [deleted post] 2011-08-20T02:42:46.435Z

I don't know if there's a big reason behind it, but because .com is so entrenched as the "default" TLD, I think it's probably best to be LessWrong.com rather than LessWrong.net or any other choice, simply because "LessWrong.com" is more likely to be correctly remembered by people who hear of it briefly, or correctly guessed by people who heard "Less Wrong" and randomly take a stab at their browser's navigation bar.

I admit this point may be relatively trivial since it's the first google hit for "less wrong" and that's probably how a lot of people look for it who've only heard of it.

Comment by ScottMessick on Rationality and Relationships · 2011-08-18T22:59:42.430Z · LW · GW

Agreed, would like to see this again in the form of a top-level post. This cuts at the heart of one of the most important sets of lies we are told by society and expected to repeat.

Comment by ScottMessick on (US only) Donate $2 to charity (bing rewards) · 2011-08-18T22:44:40.812Z · LW · GW

Personally, I'd need to see good reasons to expect the charity I'm donating to is going to have a significant positive impact before I consider donating, relative to other charities I might be able to find on my own. Inefficiency, corruption, and poor choice of target are major concerns. (One example of the latter issue might be donating to help the US poor when it's possible to just as efficiently help people who are far worse off somewhere else.) Also, the mechanism by which to help may be poorly thought out. (Do the poor really need education, as opposed to other more concrete things? I'm not giving an answer, just saying I'd need to see one before I donated.)

I think many here are already aware of GiveWell, an organization which evaluates charities on many of these criteria, and is nice enough to publish the details of their analysis. GiveWell finds that overwhelming numbers of charities fare very poorly. Helpfully, they also say very clearly what they think the most effective charity to donate to is, how effective they think it is, and why. (Currently VillageReach, last I checked, which works on very basic medical supply infrastructure in Africa.)

EDIT: Should have paid more attention to what you actually said. Obviously if you are already earning these "reward points" then spending them on donations is no additional cost to you. However, the questions about effectiveness stand, and based on analyses I've seen, many charities are so poor that you'd be obviously doing more good spending the same money on yourself. Or using your reward points on some other trivial reward. (Technically, in terms of opportunity cost, spending the reward points is still like spending money, if you can spend them on other things you would spend money on.)

Comment by ScottMessick on Charitable Cryonics · 2011-08-17T19:25:28.541Z · LW · GW

So Alcor runs at a loss and doesn't actually freeze that many people because it can't afford to?

This seems extremely misleading. Unless I'm very much mistaken, Alcor cryopreserves every one of its members upon their legal death to the absolute best of its ability, as indeed they are contractually obligated to do. They even now have an arrangement with Suspended Animation so that an SA team can provide SST (standby, stabilization, and transport) in cases where Alcor cannot easily get there in time. (SA is a for-profit company founded to provide exactly this type of service; they also have a working relationship with Cryonics Institute of a different sort.)

To my understanding, Alcor runs at a "loss" (in quotes because donations are just as much a source of revenue as membership and cryopreservation fees) for similar reasons that any small-but-growing business would: because growth is the best way to ensure long-term stability, and keeping the price of cryopreservation as low as possible given the other constraints promotes growth.

Finally, I think it's worth mentioning that Alcor created the Patient Care Trust fund and gave it legal independence specifically to prevent the funds intended for the care and eventual resuscitation of Alcor's patients from being usurped, regardless of Alcor's future financial situation. Even if Alcor collapses financially, these funds are contractually mandated to be used toward protecting Alcor's patients and maximizing their continued chances of being successfully revived (for example, by transferring them to another cryonics organization).

Comment by ScottMessick on How good an emulation is required? · 2011-08-17T14:53:03.290Z · LW · GW

But I think those are examples of neurons operating normally, not abnormally. Even in the case of mind-influencing drugs, mostly the drugs just affect the brain on its own terms by altering various neurotransmitter levels. On the other hand, a low-level emulation glitch could distort the very rules by which information is processed in the brain.

Comment by ScottMessick on That letter after B is not rendering on Less Wrong? · 2011-08-17T00:27:50.494Z · LW · GW

They aren't showing up in comments on the older posts though (see above links). Perhaps the folks looking at the code now can explain why.

Comment by ScottMessick on That letter after B is not rendering on Less Wrong? · 2011-08-17T00:14:07.308Z · LW · GW

Yes, or here. Wow, this is bizarre.

Comment by ScottMessick on That letter after B is not rendering on Less Wrong? · 2011-08-17T00:07:53.624Z · LW · GW

Yes, same symptoms. With the letters and the blockquotes.

EDIT: Also, it's not consistent for me even on this page. I can see the 'c' (letter after 'b') in "blockquotes" in your post that I replied to, and in a few other comments, including mine, but not in the original post.

Comment by ScottMessick on How good an emulation is required? · 2011-08-16T23:33:35.654Z · LW · GW

Disclaimer: my formal background here consists only of an undergraduate intro to neuroscience course taken to fulfill a distribution requirement.

I'm wondering if this is actually a serious problem. Assuming we are trying to perform a very low-level emulation (say, electro-chemical interactions in and amongst neurons, or lower), I'd guess that one of two things would happen.

0) The emulation isn't good enough, meaning every interaction between neurons has a small but significant error in it. The errors would compound very, very quickly, and the emulated mind's thought process would be easily distinguishable from a human's within minutes if not seconds. In the long term, if the emulation is even stable at all, its behavior would fall very much into the trough of the mental uncanny valley, or else be completely inhuman. (I don't know if anyone has talked about a mental uncanny valley before, but it seems like it would probably exist.)

1) The emulation is good enough, so the local emulation errors are suppressed by negative feedback instead of accumulating. In this case, the emulation would be effectively totally indistinguishable from the original brain-implemented mind, from both the outside and the inside.

My reason for rejecting borderline cases as unlikely is basically that I think an "uncanny valley" effect would occur whenever local errors accumulate into larger and larger discrepancies, and that for a sufficiently high fidelity emulation, errors would be suppressed by negative feedback. (I know this isn't a very concrete argument, but my intuition strongly suggests that the brain already relies on negative feedback to keep thought processes relatively stable.) The true borderline cases would be ones in which the errors accumulate so slowly that it would take a long time before a behavioral discrepancy is noticeable, but once it is noticeable, that would be the end of it, in that no one could take seriously the idea that the emulation is the same person (at least, in the sense of personal identity we're currently used to). But even this might not be possible, if the negative feedback effect is strong.
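
As a toy illustration of the dichotomy (completely made-up dynamics, intended only to show how the two regimes separate):

```python
import random

def simulate(damping, steps=200, noise=1e-6, amplification=1.05):
    """Toy model of the deviation x between emulation and original.
    Each step: the existing deviation is amplified a little (errors feeding on
    errors), a fresh random error is added, and negative feedback (damping)
    pulls the deviation back toward zero. Entirely made-up dynamics."""
    x = 0.0
    for _ in range(steps):
        x = amplification * x + random.gauss(0, noise)  # compounding + new error
        x -= damping * x                                # negative feedback
    return abs(x)

print(f"no feedback:     {simulate(damping=0.0):.3g}")   # grows exponentially
print(f"strong feedback: {simulate(damping=0.2):.3g}")   # stays near the noise floor
```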

I would love to hear from someone who knows better.

Comment by ScottMessick on The Goal of the Bayesian Conspiracy · 2011-08-16T23:02:48.992Z · LW · GW

While beautifully written; it does sound all an idealist's dream. Or at least you have said very little to suggest otherwise.

More downvotes would send you to negative karma if there is such a place, and that's a harsh punishment for someone so eloquent. In sparing you a downvote, I encourage you to figure out what went wrong with this post and learn from it.

I downvoted the OP. A major turn-off for me was the amount of rhetorical flourish. While well-written posts should include some embellishment for clarity and engagement, when there's this much of it, the alarm bells go off...what is this person trying to convince me of by means other than reasoned argument?

See also: the dark arts.

Comment by ScottMessick on [Link] Study on Group Intelligence · 2011-08-15T17:59:07.515Z · LW · GW

Robin Hanson had an old idea about this which I liked: http://hanson.gmu.edu/equatalk.html

It's not going to be a silver bullet, but I think it would work well in contexts where the group of people who are in the conversation and how long it should last are well defined. Situations where an ad hoc committee is expected to meet and produce a solution to a problem, but there is no clear leader, for example. (Or there is a clear leader, but lacking expertise herself, she chooses to make use of this mechanism.)

It'd be nice to see a study on whether "EquaTalk" can produce the high "c" value observed in this study. (Disclosure: I didn't read or even skim the linked paper.)

Comment by ScottMessick on Why are certain trends so precisely exponential? · 2011-08-09T18:09:08.891Z · LW · GW

Fascinating question. I share your curiosity, and I'm not at all convinced by any attempted explanations so far. Further, I note that the trend makes a prediction: an economic crunch will be followed by a swell of corresponding magnitude. So who wants to go invest in the US stock market now?