Posts

Is the AI timeline too short to have children? 2022-12-14T18:32:45.045Z
Was the K-T event a Great Filter? 2022-02-23T20:35:58.898Z

Comments

Comment by Yoreth on A Bias Against Altruism · 2022-07-24T17:21:55.764Z · LW · GW

Curb Your Enthusiasm - I didn't know you could be anonymous and tell people! I would've taken that option!

This is a good chance for me to interrogate my priors, because I share (although not very strongly) the intuitions that you criticize in this post. There's tension between the two intuitions below and my desire not to live in a bland tall-poppy-syndrome dystopia where nobody ever wants to accomplish great things; I don't really know how I'd resolve it.

Intuition 1: Social praise is a superstimulus which titillates the senses and disturbs mental tranquility. When I tell a joke that lands well, or get a lot of upvotes on a post, or someone tells me that something I did years ago affected them in a good way and they still remember it, I feel a big boost to my ego and I'm often tempted to mentally replay those moments over and over. However, too much of this is a distraction from what's really important. If I were a talented stock trader I'd be spending my time doing that rather than lying in bed obsessively refreshing my portfolio valuation; analogously, if I did actually possess the traits for which I received praise, I wouldn't be so preoccupied with others' affirmations.

More generally, we don't want people to get addicted to social status, because then they'll start chasing highs to the point where their motivation diverges from actual altruism. It's better to nip this tendency in the bud.

Intuition 2: Social status is zero-sum, which means that if I spend money to gain status, I am necessarily making it more costly for others to do so. Therefore, telling people about your altruism is a "public bad" which we try to discourage through teasing/shaming. Now, some altruistic acts inherently cannot be done in a status-indifferent way (e.g. working full-time for a charity), but for something like donating money, which can easily be kept private, the reaction against doing it publicly is proportionally harsh.

Comment by Yoreth on Cryptoepistemology · 2022-02-24T22:36:24.310Z · LW · GW

Proof-of-work is a radical and relatively recent idea which does not yet have a direct correspondent in philosophy. Here, cryptographic proofs witness the expenditure of resources like physical energy to commit to particular beliefs. In this way, the true scale of the system which agrees on certain beliefs can be judged, with the largest system being the winner.

I think this relates to the notion that constructing convincing falsehoods is more difficult and costly than discovering truths, because (a) the more elaborate a falsehood is, the more likely it is to contradict itself or observed reality, and (b) false information has no instrumental benefit to the person producing it. Therefore, the amount of "work" that's been put into a claim provides some evidence of its truth, even aside from the credibility of the claimant.

Example: If you knew nothing about geography and were given, on the one hand, Tolkien's maps of Middle-Earth, and on the other, a USGS survey of North America, you'd immediately conclude that the latter is more likely to be real, based solely on the level of detail and the amount of work that must've gone into it. Tolkien could in principle have set out to draw a fantasy map even more detailed than the USGS maps, but the work such a project would require would vastly outweigh any benefit he could hope to get from it.
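To make the "work as evidence" idea concrete, here is a minimal hash-based proof-of-work sketch in Python (my own illustration, not from the original post; the claim string, the difficulty parameter, and the function names are all hypothetical):

```python
import hashlib

def prove_work(claim: str, difficulty_bits: int = 20) -> int:
    """Search for a nonce such that sha256(claim:nonce) has `difficulty_bits` leading zero bits."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{claim}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce  # expensive to find: ~2**difficulty_bits hashes on average
        nonce += 1

def verify_work(claim: str, nonce: int, difficulty_bits: int = 20) -> bool:
    """Cheap to check: a single hash confirms the work was done for this exact claim."""
    digest = hashlib.sha256(f"{claim}:{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))
```

The nonce acts as a receipt: anyone can verify with one hash that roughly a million hashes were spent on this particular claim, which is the sense in which the proof "witnesses the expenditure of resources."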

Comment by Yoreth on Debugging Writer's Block · 2021-10-10T14:34:34.601Z · LW · GW
  1. Reward yourself after each session.

What kinds of rewards do you use for this?

Comment by Yoreth on Anthropic Effects in Estimating Evolution Difficulty · 2021-07-06T04:00:03.495Z · LW · GW

Consider the following charts:

Chart 1

Chart 2

Chart 1 shows the encephalization quotient (EQ) of various lineages over time, while Chart 2 shows the maximum EQ of all known fossils from any given time. (Source 1, Source 2. Admittedly this research is pretty old, so if anyone knows of more recent data, that'd be good to know.)

Both of these charts show a surprising fact: that the intelligence of life on Earth stagnated (or even decreased) throughout the entire Mesozoic Era, and did not start increasing until immediately after the K/T event. From this it appears that life had gotten stuck in a local equilibrium that did not favor intelligence; i.e. the existence of dinosaurs (or other Mesozoic species) made it impossible for any more intelligent creatures to emerge. Thus the K/T event was a Great Filter: we needed a shock severe enough to dislodge this equilibrium, but not so severe as to wipe out all the lineages from which intelligence could evolve.

If this is true, then the existence of ravens and elephants today is not much evidence that evolving intelligence is easy, because they exist for the same reason that humans do.

None of this considers octopuses. It would be interesting to see whether their brain-size history follows curves similar to those of the vertebrates illustrated above (though since they're made of soft tissue, we may never know). If so, that would support the view that evolving intelligence is difficult. On the other hand, it's hard to imagine that the marine ecosystem was affected by the K/T event in the same way the terrestrial one was. Or maybe octopuses are themselves what is suppressing the evolution of greater intelligence among marine invertebrates.

Comment by Yoreth on Reply to Nate Soares on Dolphins · 2021-06-10T12:36:19.499Z · LW · GW

Such a category is called paraphyletic. It can be informationally useful when the excluded subgroup has diverged far from the overarching group, gaining characteristics the others lack and losing characteristics the others share. But the less divergence has taken place, the harder it is to justify a paraphyletic category. The category "reptile" (excluding birds) makes sense today, but it wouldn't have made sense in the Jurassic period. The mammal/cetacean distinction is somewhere in the middle.

Animal/human is different because the evolutionary divergence is so recent that it's difficult to justify the paraphyletic usage on biological grounds. Rather this is more of an ingroup/outgroup distinction, along the lines of βαρβαρος ("anybody who isn't Greek"). If humans learned to communicate with e.g. crows, the shared language probably wouldn't have a compact word for "non-human animal," although it might have one for "non-human non-crow animal."

Comment by Yoreth on Unrefined thoughts on some things rationalism is missing vs religions · 2021-06-08T04:05:44.477Z · LW · GW

I’m also not sure how far non-core and core identity rationalism are mutually exclusive. (Just like a lot of people are vaguely christian without belonging to a church, so maybe a lot of people would be vaguely interested in rationalism without wanting to join their local temple)

Agreed; finding a way for multiple levels of involvement to coexist would be helpful. Anecdotally, when I first tried attending LW meetups in around 2010, I was turned off and did not try again for many years, because the conversation was so advanced I couldn't follow it. But when I did try again, I enjoyed it a lot more because I found that the community had expanded to include a "casual meetup attendee and occasional commenter" tier, which I fitted comfortably into. Now we could imagine adding a 3rd tier, namely "people who come and listen to a speech and then make small talk and go for a picnic afterward" (or whatever).

Could this be considered a "temple"? Maybe, but I'd guess that most prospective members wouldn't think of it that way and would be embarrassed to hear such talk. "Philosophical society" might be closer to the mark. It's fun to imagine a Freemason-like society where people are formally allocated into "tiers" and then promoted to the next inner tier by a secret vote, perhaps involving black and white marbles. But at this point, such a level of ritual would probably be a waste of weirdness points.

If you believe as I do that rationalism makes people better human beings, is morally right and leads to more open, free, just and advanced societies, then creating and spreading it is good pretty much irrespective of social circumstances.

I'm uncertain about this, but there is something I suspect and fear may be true, which is that rationalism (as exemplified by current LW members) is not actually helpful for most people on an individual level (see e.g.). There are some people, like me, who are born in the Uncanny Valley and must study rationalism as part of a lifelong effort to climb up out of it. But for others, I would not want to pull them down into the Valley just so I can have company.

For example, I enjoy going to rationalist meetups and spending hours talking about philosophical esoterica, because it fills an intellectual void that I can't fill elsewhere. But most people wouldn't enjoy this, and it wouldn't be a good use of their time.

That's not to say that rationalism is totally inert in society. The ideas developed by rationalists can percolate into the wider population, even to those who are more passive consumers than active participants.

  • Rationalist content is mostly in english. Most people don’t speak/​read english. Even those that do as a second language don’t consumer primarily english sources

You're probably right, although as a monolingual English speaker I myself wouldn't know. I have heard of efforts to translate some of the Sequences into Russian and Spanish. But for less widely spoken languages, it may be difficult to assemble enough people who both speak the language and are interested in rationalism. In that respect rationalism differs from Christianity: there is no definitive text that you can point to and say "If you read and understand this, then you understand rationality." Rationality must be cultivated through active engagement in dialogue, which requires a critical mass of people.

  • Rationalism is niche and hard to stumble upon. It’s not like christianity or left/​right ideology in the west. Whereas those ideologies are broadcasted at you constantly and you will know about them and roughly what they represent, rationalism is something you only find if you happen to just luck out and stumble on this weird internet trail of breadcrumbs.

This is a challenge I've faced when friends ask me what, exactly, rationalism is all about. I struggle to answer, because there is no single creed that rationalists believe. One could try to put together a soundbite-tier explanation, but doing so would risk distorting the very essence of rationality, which at its core is a process, not a conclusion. At best, we might try to draw up a list of 40 statements and say "Rationalists all agree that at least 30 of these are true, but there is vehement disagreement as to which."

Comment by Yoreth on Unrefined thoughts on some things rationalism is missing vs religions · 2021-06-07T04:07:51.909Z · LW · GW

A few thoughts on this.

First, I probably have a higher appetite for religion-ifying rationalism than others in the community, but I wouldn't want to push my preferences too hard lest it scare people off. This may stem from my personal background as a cradle atheist. Religious people don't want rationality to become rivalrous with their religion, and ex-religionists don't want it to become the very thing they escaped. To the extent that it's good for rationality to become more religion-like, I think it'll happen on its own in the next few decades or centuries without any concerted effort. I'm not in a hurry.

Second, we should avoid treating "religion" as a fixed concept already optimized for a particular social niche, as if to say that if rationality has some attributes of a religion, then it would necessarily gain by taking on the rest as well. Some of the functions that a religion might manage are:

  1. Marriage and family life
  2. Non-familial social ties
  3. The relationship between people and the state
  4. Matters of interpersonal morality
  5. Matters of private morality
  6. Explaining the origin and fate of the universe
  7. Explaining consciousness and death
  8. Ethnic identification
  9. Etc.

Different societies will have different ways of allocating these responsibilities amongst the various institutions/philosophies within it. In Western cultures we use the word "religion" because it's common for most or all of these domains to be handled by the same thing, so we need a word for whatever category of thing that is. But the Western bias is revealed whenever we try to apply the concept to non-Western societies. E.g. a Chinese person may be a Confucianist with respect to (1) (3) and (4), a Taoist for (2) (6) and (8), and a Buddhist for (5) and (7). Which of these is a "religion"? Does it matter?

Even within the West, these boundaries have shifted over time. (3) was forcibly purged from Christianity in the European Wars of Religion, leading ultimately to the 1st Amendment in the US. And (8) is common in the Middle East and Eastern Europe, while mainline Protestantism is indifferent or outright hostile towards it. We can expect that the boundaries will continue to shift in the future, which leads into the third point.

Third, we should ask ourselves (and I'd be curious to hear your answer) what kind of future we're planning for in which the religion-ification of rationalism becomes relevant. I can think of three scenarios:

  • (A) A technological singularity happens within the next few decades.
  • (B) A major civilizational collapse delays the singularity by hundreds or thousands of years.
  • (C) Civilization doesn't collapse, but the singularity is nevertheless delayed by several centuries, due to technological stagnation (or something).

As for (A), I'm not qualified to weigh in on how likely that is; but if it does happen, then this whole question is pretty much irrelevant anyway, because there won't be any humans (as we know them) to practice any religion. The only possible relevance is that it would be bad for people to expend too much effort now in creating a rationalist religion if they could otherwise have been working on AI safety. But that probably doesn't apply to most people.

I don't think (B) is likely, but there's a compelling cultural narrative in its favor that we need to actively counterbalance in our estimates. We all like to imagine an apocalypse where we can wipe the slate clean and remake a "perfect" society. And everyone likes to look back to the Fall of Rome as an easy-to-apply historical template. If you imagine a rationalist religion in that context, you end up with something like "D&D magic + medieval Catholicism," where monks copy manuscripts to preserve knowledge that would otherwise be lost. But, again, I don't think loss of knowledge is a major concern for the future, so efforts to create such an order of monks will probably be wasted.

(C) is where the question becomes most relevant, but since this scenario has no historical precedent, we can't simply look to an existing or past religion, change a few incidentals, and slot it into the future world. Whatever rationality ends up becoming in this world, it won't be what we'd call a "religion" (but perhaps a word for it will be devised eventually).

For example, in the future, scientific knowledge may never again be lost, but people will nevertheless feel adrift in a flood of false information so vast and confusing that they can't figure out what to believe. What sort of institution could remedy this situation? Not monks copying manuscripts, to be sure.

Lastly, some disjointed thoughts on outreach. There's a certain personality type that feels drawn to rationalist ideas, for reasons that are probably innate or at least very difficult to change. You know you're one of these people if your reaction upon finding LessWrong was "All my life people have been talking nonsense, but finally I've found something that makes sense!" Even if you don't agree with most of it.

At some point (perhaps already past), all of those people who can be persuaded will be. This will only comprise a small fraction of the population, but they will cling to the "rationalist community" with a near-religious zeal. (I have friends who absolutely loathe "rationalists" but still participate in the community online because, in their view, literally no one else even tries to make convincing arguments.) This zeal is a valuable quality, but most normal people will not sympathize. The question then becomes: For that majority of people who are not rationalists-by-disposition, is there some way they can benefit by associating with the community?

I think the answer will involve addressing this:

We don’t have rituals. Hence meetups are awkward to organize, often stilted and revolve around the discussion of readings or rationality problems or even just lack any structure at all. Contrast this to a church where you show up every Sunday, listen to a service and then make smalltalk or go to a picnic.

Maybe rationalists should give talks that are open to the public and geared towards a general audience, and encourage listeners to talk about it amongst themselves. That way there'd be less pressure to follow along with extremely esoteric conversations. But you don't have to think of it as a "religion" or a "ritual" - it's just a public lecture, which is a perfectly normal thing for someone of any religious views to attend. Putting it forward as a religion-substitute would probably turn people off.

Comment by Yoreth on [deleted post] 2019-12-14T22:30:01.212Z
Comment by Yoreth on [deleted post] 2019-12-14T18:35:26.806Z
Comment by Yoreth on Meetups as Institutions for Intellectual Progress · 2019-09-21T10:50:00.081Z · LW · GW

1-3 months doesn't seem so bad as a timeline. While it's important not to let the perfect be the enemy of the good (since projects like this can easily turn into a boondoggle where everyone quibbles endlessly about what the end-product should look like), I think it's also worth a little bit of up-front effort to create something that we can improve upon later, rather than getting stuck with a mediocre solution permanently. (I imagine it's difficult to migrate a social network to a new platform once it's already gotten off the ground, the more so the more people have joined.)

Comment by Yoreth on Meetups as Institutions for Intellectual Progress · 2019-09-20T05:50:12.248Z · LW · GW

I would also like to register my opposition to using Facebook. While it might seem convenient in the short term, it makes the community more fragile by adding a centralized failure point that's unaccountable to any of its members. Communicating on LessWrong.com has the virtue of it being owned by the same community that it serves.

Comment by Yoreth on Meetups as Institutions for Intellectual Progress · 2019-09-20T05:49:44.157Z · LW · GW

It seems to me that there's a tension at the heart of defining what the "purpose" of meetups is. On the one hand, the community aspect is one of the most valuable things one can get out of them - I love that I can visit dozens of cities across the US, go to a Less Wrong meetup, and instantly have stuff to talk about. On the other hand, a community cannot exist solely for its own sake. Someone's personal interest in participating will naturally fluctuate over time, and if everyone quits the moment their interest touches zero, then nobody will ever feel like it's worth investing in the community's long-term health.

Personally, I do have a sense that going to meetups matters, in that it helps (however marginally) to raise the sanity waterline in one's local community, and to move important conversations about x-risk and the future of humanity into the mainstream. I myself was motivated to dive into Less Wrong again, after a hiatus of many years, by finding a lively meetup group that was discussing these ideas regularly.

In any case I think that the question of "why meetups matter" is something that we're all collectively trying to figure out over time. I don't claim to know the answer right now.

I do, however, have some concern about creating a "monoculture" among the various sub-groups. It's good that we have a wide variety of intellectual interests, ways-of-running-meetups, etc., because this allows for mistakes to be corrected and innovations to be discovered. If we are all given a directive from on high[1] saying "We are going to mobilize all the resources of the Rationality Community towards goal X, which we will achieve by strategy Y," then it might at first seem like a lot of stuff is getting done. But what if strategy Y is ineffective, or goal X is a bad goal? Then we would have ruined our chance to discover our mistake until it was too late. This is especially important when the goals of the community are so ill-defined, as is the case now.

Of course, in order to reap these benefits of having a diverse community, a prerequisite is that there be any communication at all between groups. So, the suggestion of having meetups write up blog posts for public consumption seems like a good one[2]. But I don't think the groups should be told which topics they must discuss, because they might be interested in something else that nobody else would've thought of. Perhaps it's enough to provide a list of topics that any meetup group can draw from if they can't think of something. And maybe, after one group publishes a writeup, another group might be inspired to discuss the same topic later and submit their own writeup in response.

[1] Or, more realistically, a persuasive message to the effect of "All the cool kids are doing Z and you're going to feel left out if you don't," which can feel like a compulsory directive because of Schelling points, etc.

[2] Caveat: The mood of a conversation is likely to change dramatically if it's known that someone is taking notes that will be posted later, since then one is not speaking merely to those in attendance, but effectively to an indefinitely large audience of all LessWrong readers. So, I would recommend that meetups have a mixture of on- and off-the-record conversations, with a clear signal of which norm is in effect at any given time.

Comment by Yoreth on 2011 Survey Results · 2011-12-06T11:16:28.940Z · LW · GW

What's the relation between religion and morality? I drew up a table to compare the two. It shows the absolute numbers and the percentages normalized in two directions (by religion, and by morality). I also highlighted the cells with the greatest percentage when compared along the direction that was not normalized (for example, 22.89% of agnostics said there's no such thing as morality, a higher percentage than for any other religious group).

Many pairs were highlighted both ways. In other words, these are pairs such that "Xs are more likely to be Ys" and vice-versa.

  • [BLANK]; [BLANK]
  • Atheist and not spiritual; Consequentialist
  • Agnostic; No such thing
  • Deist/Pantheist/etc.; Virtue ethics
  • Committed theist; Deontology

(I didn't do any statistical analysis, so be careful with the low-population groups.)

Comment by Yoreth on Deontological Decision Theory and The Solution to Morality · 2011-01-10T21:42:16.253Z · LW · GW

Would it be correct to say that, insofar as you would hope that the one person would be willing to sacrifice his/her life for the cause of saving the 5*10^6 others, you yourself would pull the switch and then willingly sacrifice yourself to the death penalty (or whatever penalty there is for murder) for the same cause?

Comment by Yoreth on Open Thread, August 2010 · 2010-08-08T17:09:32.367Z · LW · GW

I think I may have artificially induced an Ugh Field in myself.

A little over a week ago it occurred to me that perhaps I was thinking too much about X, and that this was distracting me from more important things. So I resolved to not think about X for the next week.

Of course, I could not stop X from crossing my mind, but as soon as I noticed it, I would sternly think to myself, "No. Shut up. Think about something else."

Now that the week's over, I don't even want to think about X any more. It just feels too weird.

And maybe that's a good thing.

Comment by Yoreth on Open Thread, August 2010 · 2010-08-04T05:31:40.122Z · LW · GW

I suppose, perhaps, an asteroid impact or nuclear holocaust? It's hard for me to imagine a disaster that wipes out 99.999999% of the population but doesn't just finish the job. The scenario is more a prompt to provoke examination of the amount of knowledge our civilization relies on.

(What first got me thinking about this was the idea that if you went up into space, you would find that the Earth was no longer protected by the anthropic principle, and so you would shortly see the LHC produce a black hole that devours the Earth. But you would be hard pressed to restart civilization from a space station, at least at current tech levels.)

Comment by Yoreth on Open Thread, August 2010 · 2010-08-04T04:58:16.749Z · LW · GW

But apparently it still wasn't enough to keep them together...

Comment by Yoreth on Open Thread, August 2010 · 2010-08-02T06:33:00.018Z · LW · GW

Suppose you know from good sources that there is going to be a huge catastrophe in the very near future, which will result in the near-extermination of humanity (but the natural environment will recover more easily). You and a small group of ordinary men and women will have to restart from scratch.

You have a limited time to compile a compendium of knowledge to preserve for the new era. What is the most important knowledge to preserve?

I am humbled by how poorly my own personal knowledge would fare.

Comment by Yoreth on Open Thread: July 2010, Part 2 · 2010-07-11T04:25:06.855Z · LW · GW

Is there any philosophy worth reading?

As far as I can tell, a great deal of "philosophy" (basically the intellectuals' wastebasket taxon) consists of wordplay, apologetics, or outright nonsense. Consequently, for any given philosophical work, my prior strongly favors not reading it because the expected benefit won't outweigh the cost. It takes a great deal of evidence to tip the balance.

For example: I've heard vague rumors that GWF Hegel concludes that the Prussian State (under which, coincidentally, he lived) was the best form of human existence. I've also heard that Descartes "proves" that God exists. Now, whether or not Hegel or Descartes had any valid insights, this is enough to tell me that it's not worth my time to go looking for them.

However, at the same time I'm concerned that this leads me to read things that only reinforce the beliefs I already have. And there's little point in seeking information if it doesn't change your beliefs.

It's a complicated question what purpose philosophy serves, but I wouldn't be posting here if I thought it served none. So my question is: What philosophical works and authors have you found especially valuable, for whatever reason? Perhaps the recommendations of such esteemed individuals as yourselves will carry enough evidentiary weight that I'll actually read the darned things.

Comment by Yoreth on Open Thread: July 2010 · 2010-07-02T07:11:38.755Z · LW · GW

Long ago I read a book that asked the question “Why is there something rather than nothing?” Contemplating this question, I asked “What if there really is nothing?” Eventually I concluded that there really isn’t – reality is just fiction as seen from the inside.

Much later, I learned that this idea had a name: modal realism. After I read some about David Lewis’s views on the subject, it became clear to me that this was obviously, even trivially, correct, but since all the other worlds are causally unconnected, it doesn't matter at all for day-to-day life. Except as a means of dissolving the initial vexing question, it was pointless, I thought, to dwell on this topic any more.

Later on I learned about the Cold War and the nuclear arms race and the fears of nuclear annihilation. Apparently, people thought this was a very real danger, to the point of building bomb shelters in their backyards. And yet somehow we survived, and not a single bomb was dropped. In light of this, I thought, “What a bunch of hype this all is. You doomsayers cried wolf for decades; why should I worry now?”

But all of that happened before I was born.

If modal realism is correct, then for all I know there was* a nuclear holocaust in most world-lines; it’s just that I never existed there at all. Hence I cannot use the fact of my existence as evidence against the plausibility of existential threats, any more than we can observe life on Earth and thereby conclude that life is common throughout the universe.

(*Even setting aside MWI, which of course only strengthens the point.)

Strange how abstract ideas come back to bite you. So, should I worry now?

Comment by Yoreth on Open Thread June 2010, Part 3 · 2010-06-14T08:10:24.694Z · LW · GW

A prima facie case against the likelihood of a major-impact intelligence-explosion singularity:

Firstly, the majoritarian argument. If the coming singularity is such a monumental, civilization-filtering event, why is there virtually no mention of it in the mainstream? If it is so imminent, so important, and furthermore so sensitive to initial conditions that a small group of computer programmers can bring it about, why are there not massive governmental efforts to create seed AI? If nothing else, you might think that someone could exaggerate the threat of the singularity and use it to scare people into giving them government funds. But we don’t even see that happening.

Second, a theoretical issue with self-improving AI: can a mind understand itself? If you watch a simple linear Rube Goldberg machine in action, then you can more or less understand the connection between the low- and the high-level behavior. You see all the components, and your mind contains a representation of those components and of how they interact. You see your hand, and understand how it is made of fingers. But anything more complex than an adder circuit quickly becomes impossible to understand in the same way. Sure, you might in principle be able to isolate a small component and figure out how it works, but your mind simply doesn’t have the capacity to understand the whole thing. Moreover, in order to improve the machine, you need to store a lot of information outside your own mind (in blueprints, simulations, etc.) and rely on others who understand how the other parts work.

You can probably see where this is going. A complete representation of a mind requires at least as much information as the mind itself contains, so a mind cannot hold a full representation of itself along with everything else it knows. Therefore, while the AI can understand in principle that it is made up of transistors etc., its self-representation necessarily has some blank areas. I posit that the AI cannot purposefully improve itself because this would require it to understand, in a deep, level-spanning way, how it itself works. Of course, it could just add complexity and hope that it works, but that's just evolution, not intelligence explosion.

So: do you know any counterarguments or articles that address either of these points?

Comment by Yoreth on Hacking the CEV for Fun and Profit · 2010-06-04T03:11:06.332Z · LW · GW

This seems to be another case where explicit, overt reliance on a proxy drives a wedge between the proxy and the target.

One solution is to do the CEV in secret and only later reveal this to the public. Of course, as a member of said public, I would instinctively regard with suspicion any organization that did this, and suspect that the proffered explanation (some nonsense about a hypothetical "Dr. Evil") was a cover for something sinister.

Comment by Yoreth on Attention Lurkers: Please say hi · 2010-04-17T20:26:35.357Z · LW · GW

Hi!

I've been registered for a few months now, but only rarely have I commented.

Perhaps I'm overly averse to loss of karma? "If you've never been downvoted, you're not commenting enough."

Comment by Yoreth on The mathematical universe: the map that is the territory · 2010-03-26T20:59:37.068Z · LW · GW

Suppose we had a G.O.D. that takes N bits of input, and uses the input as a starting-point for running a simulation. If the input contains more than one simulation-program, then it runs all of them.

Now suppose we had 2^N of these machines, each with a different input. The number of instantiations of any given simulation-program will be higher the shorter the program is (not just because a shorter bit-string is by itself more likely, but also because it can fit multiple times on one machine). Finally, if we are willing to let the number of machines shrink to zero, the same probability distribution will still hold. So a shorter program (i.e. more regular universe) is "more likely" than a longer/irregular one.

(All very speculative of course.)
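A brute-force way to see the counting claim (a toy sketch of my own, assuming we identify a "simulation-program" with a fixed bit-string and count how often it appears as a substring of the N-bit inputs):

```python
from itertools import product

def instantiation_count(program: str, n: int) -> int:
    """Total occurrences of `program` as a substring across all 2^n possible n-bit inputs."""
    total = 0
    for bits in product("01", repeat=n):
        s = "".join(bits)
        # count (possibly overlapping) occurrences of the program in this input
        total += sum(s.startswith(program, i) for i in range(n - len(program) + 1))
    return total

n = 12
for program in ["0110", "01101001"]:
    count = instantiation_count(program, n)
    closed_form = (n - len(program) + 1) * 2 ** (n - len(program))
    print(program, count, closed_form)  # the two numbers agree
```

The count falls off by roughly a factor of two for each extra bit of program length, which is the sense in which a shorter (more regular) program is "more likely" across the ensemble of machines.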

Comment by Yoreth on The things we know that we know ain't so · 2010-01-14T07:27:54.489Z · LW · GW

How so? Could you clarify your reasoning?

My thinking is: Given that a scientist has read (or looked at) a paper, they're more likely to cite it if it's correct and useful than if it's incorrect. (I'm assuming that affirmative citations are more common than "X & Y said Z but they're wrong because..." citations.) If that were all that happened, then the number of citations a paper gets would be strongly correlated with its correctness, and we would expect it to be rare for a bad paper to get a lot of citations. However, if we take into account the fact that citations are also used by other scientists as a reading list, then a paper that has already been cited a lot will be read by a lot of people, of whom some will cite it.

So when a paper is published, there are two forces affecting the number of citations it gets. First, the "badness effect" ("This paper sounds iffy, so I won't cite it") pushes down the number of citations. Second, the "popularity effect" (a lot of people have read the paper, so a lot of people are potential citers) pushes up the number of citations. The magnitude of the popularity effect depends mostly on what happens soon after publication, when readership is small and thus more subject to random variation. Of course, for blatantly erroneous papers the badness effect will still predominate, but in marginal cases (like the aphasia example) the popularity effect can swamp the badness effect. Hence we would expect to see more bad papers getting widely cited, and the more obviously bad a widely cited paper is, the stronger the popularity effect it implies.

I suppose one could create a computer simulation if one were interested; I would predict results similar to Simkin & Roychowdhury's.
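For what it's worth, here is the kind of toy simulation I have in mind (a sketch of my own, with made-up parameters: each new paper cites earlier papers with probability proportional to quality times one plus citations already received):

```python
import random

def simulate(n_papers=2000, cites_per_paper=10, seed=0):
    """Each new paper draws its citations from earlier papers, weighted by
    quality * (1 + citations so far): the first factor is the 'badness effect',
    the second is the 'popularity effect'."""
    rng = random.Random(seed)
    quality = [rng.random() for _ in range(n_papers)]
    citations = [0] * n_papers
    for t in range(1, n_papers):
        weights = [quality[i] * (1 + citations[i]) for i in range(t)]
        for i in rng.choices(range(t), weights=weights, k=min(cites_per_paper, t)):
            citations[i] += 1  # sampling with replacement; fine for a toy model
    return quality, citations

quality, citations = simulate()
top = sorted(range(len(citations)), key=citations.__getitem__, reverse=True)[:20]
print([round(quality[i], 2) for i in top])  # qualities of the 20 most-cited papers
```

My prediction would be that a paper lucky enough to pick up citations early can stay near the top despite middling quality, which is the pattern Simkin & Roychowdhury report.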

Comment by Yoreth on The things we know that we know ain't so · 2010-01-12T18:51:04.877Z · LW · GW

I am reminded of a paper by Simkin and Roychowdhury where they argued, on the basis of an analysis of misprints in scientific paper citations, that most scientists don't actually read the papers they cite, but instead just copy the citations from other papers. From this they show that the fact that some papers are widely cited in the literature can be explained by random chance alone.

Their evidence is not without flaws - the scientists might have just copied the citations for convenience, despite having actually read the papers. Still, we can easily imagine a similar effect arising if the scientists do read the papers they cite, but use the citation lists in other papers to direct their own reading. In that case, a paper that is read and cited once is more likely to be read and cited again, so a small number of papers acquire an unusual prominence independent of their inherent worth.

If we see a significant number of instances where the conclusions of a widely-accepted paper are later debunked by a simple test, then we might begin to suspect that something like this is happening.