Open Thread September, Part 3

post by LucasSloan · 2010-09-28T05:21:48.666Z · LW · GW · Legacy · 217 comments

The September Open Thread, Part 2 has got nearly 800 posts, so let's have a little breathing room.

This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.

217 comments

Comments sorted by top scores.

comment by Scott Alexander (Yvain) · 2010-09-29T10:39:23.678Z · LW(p) · GW(p)

I recently read an anecdote (so far unconfirmed) that Ataturk tried to ban the veil in Turkey, but got zero compliance from religious people, who simply ignored the law. Instead of cracking down, Ataturk decreed a second law: all prostitutes were required to wear a veil. The general custom of veil-wearing stopped immediately.

This might be the most impressive display of rationality I've ever heard of in a world leader.

Replies from: AnlamK, cousin_it
comment by AnlamK · 2010-10-01T08:15:21.108Z · LW(p) · GW(p)

As a Turk, I strongly believe that story is fictional.

Where and how was this ban issued? Can you give more details?

You may be hearing some fictional story based on his social reforms.

See here

And the veil, currently banned in public universities, is still very much a hot button issue. Also, a large segment of the Turkish population still wears the veil. The country is deeply divided over this issue.

Replies from: NancyLebovitz, Douglas_Knight
comment by NancyLebovitz · 2010-10-01T11:28:56.272Z · LW(p) · GW(p)

Now that I think about it, believing the story requires ignoring how strongly many people who follow modesty rules are apt to be attached to them.

If a western ruler announced that prostitutes were required to cover their breasts, do you think respectable women would start going topless?

comment by Douglas_Knight · 2010-10-02T00:48:57.530Z · LW(p) · GW(p)

Your wikipedia link claims that the fez & turban were banned in 1925 and the veil and (again!) turban in 1934. Do you know these laws? Could you confirm that the text matches wikipedia's description? Or not - perhaps these are the famous laws that cover universities? (I can't follow google's translation.) How does this fit into your understanding of history?

While Yvain's story doesn't sound terribly plausible to me, deducing law from the present state is tricky.

Replies from: AnlamK
comment by AnlamK · 2010-10-02T05:48:15.779Z · LW(p) · GW(p)

Do you know these laws?

The laws I know ban wearing the veil/turban (I mean the same thing by these two words) in government-related places - you can't wear it in the workplace if you are working for the government, can't wear it in public universities, can't wear it in the TBMM (the Turkish congress), etc... You are free to wear it on the street or in the workplace if you are working for a private company. I may be mistaken - the ban covering the universities is the most famous and contentious.

Could you confirm that the text matches wikipedia's description?

Which text? I've not read the wikipedia entry - just linked to it, thinking it would repeat what I already know.

How does this fit in your understanding of history?

You mean Yvain's story? It makes no sense. In the 1920s, Turkey was largely being rebuilt after WW1 and the Turkish War of Independence. The legal system/constitution was being overhauled. The Arabic script was replaced with the Latin script. It is said that in one day, the entire country became illiterate - i.e. nobody understood the new alphabet at first.

With so much going on, I find it funny that Atatürk would pause and decree laws about prostitution. Consider me biased, but I think Atatürk had more urgent things to attend to.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2010-10-02T10:27:37.726Z · LW(p) · GW(p)

Here is the 1925 law which wikipedia describes as banning men's hats. And here the 1934 law banning the veil and the (men's?) turban.

Yes, I don't think Yvain's story about prostitution is correct, but you seem to also claim that since many people wear veils, they must not be banned. I would not be at all surprised if there has been a law for 70 years banning them and even that no one talks about this law.

comment by cousin_it · 2010-09-29T12:10:08.433Z · LW(p) · GW(p)

I don't get it, why would prostitutes be more eager to obey the law? Especially seeing as their professional success depends on their perceived beauty?

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-09-29T12:12:08.450Z · LW(p) · GW(p)

I believe the point is that if prostitutes are required to wear veils, then whether they do or not, the veil is immediately stigmatized.

Replies from: cousin_it
comment by cousin_it · 2010-09-29T12:17:34.069Z · LW(p) · GW(p)

Thanks, I'd missed that.

comment by Larks · 2010-09-29T21:11:04.080Z · LW(p) · GW(p)

I expect people will be interested to hear that Eliezer's TDT document has now been released for general consumption.

Replies from: Perplexed, cousin_it, Risto_Saarelma, jimrandomh, Clippy, danield, SilasBarta, timtyler
comment by Perplexed · 2010-09-30T02:00:35.840Z · LW(p) · GW(p)

Does anyone else agree that, as a piece of expository writing, that document sucks bigtime?

111 pages! I got through about 25 and I was wondering why Eliezer thought I needed to hear about how his four friends had decided when presented with the Newcomb's soda problem and how some people refer to this problem as Solomon's problem. So, I decided to skim ahead until he started talking about TDT. So I skimmed and skimmed.

Finally, I got to section 14, entitled "The timeless decision procedure". "Aha!", I think. "Finally." The first paragraph consists of a very long and confusing sentence which at least seems to deal with the timeless decision procedure.

The timeless decision procedure evaluates expected utility conditional upon the output of an abstract decision computation - the very same computation that is currently executing as a timeless decision procedure - and returns that output such that the universe will possess maximum expected utility, conditional upon the abstract computation returning that output.

It might be easier to understand if expressed as an equation or formula containing, you know, variables and things. So I read on, hoping to find something I can sink my teeth into. But then the second paragraph begins:

I delay the formal presentation of a timeless decision algorithm because of some significant extra steps I wish to add ...

and closes with

Before adding additional complexities, I wish to justify this critical innovation from first principles.

As far as I can tell, the remainder of this section entitled "The timeless decision procedure" consists of this justification, though not from first principles, but rather using an example. And it doesn't appear that Eliezer ever gets back to the task of providing a "formal presentation of a timeless decision algorithm".

So, I skip forward to the end, hoping to read the conclusions. Instead I find:

This manuscript was cut off here, but interested readers are suggested to look at these sources for more discussion:

Followed by a bibliography containing one entry - A chapter from a 1978 collection of articles on applications of decision theory.

"...was cut off here ..."? Give me a break!

Let me know when you get it down to a dozen pages or so.

ETA: A cleaned up copy of the paper exists with a more complete bibliography and without the "manuscript was cut off here" closing.

Replies from: timtyler, wedrifid
comment by timtyler · 2011-06-26T20:11:27.939Z · LW(p) · GW(p)

The first paragraph consists of a very long and confusing sentence which at least seems to deal with the timeless decision procedure.

The timeless decision procedure evaluates expected utility conditional upon the output of an abstract decision computation - the very same computation that is currently executing as a timeless decision procedure - and returns that output such that the universe will possess maximum expected utility, conditional upon the abstract computation returning that output.

I think this needs rewriting so it doesn't sound so circular - and only mentions the word "conditional" once.

It seems to me that we can just say that it maximises utility - while maintaining an awareness that there may be other agents running its decision algorithm out there, in addition to all the other things it knows.

I think the stuff about "conditional upon the abstract computation returning that output" is pretty-much implied by the notion of utility maximisation.
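
For reference, here is a rough symbolic paraphrase of the quoted definition (my own rendering, not the paper's). Writing $P$ for the decision problem and treating "the abstract computation $\mathrm{TDT}(P)$ outputs $a$" as the event conditioned on:

$$\mathrm{TDT}(P) \;=\; \operatorname*{arg\,max}_{a \in \mathcal{A}} \; \mathbb{E}\!\left[\,U \;\middle|\; \text{output}\big(\mathrm{TDT}(P)\big) = a \,\right]$$

This makes the self-reference explicit: the computation returns the output $a$ that maximizes expected utility conditional on that very computation returning $a$.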

comment by wedrifid · 2010-09-30T02:08:34.482Z · LW(p) · GW(p)

It might be easier to understand if expressed as an equation or formula containing, you know, variables and things.

Easier? That's the opposite of true for this kind of material!

Replies from: wnoise, Perplexed
comment by wnoise · 2010-09-30T05:56:39.804Z · LW(p) · GW(p)

Easier if also expressed that way. You need the prose to know what the symbols mean, but the math itself is far clearer when done as symbols.

comment by Perplexed · 2010-09-30T02:16:48.012Z · LW(p) · GW(p)

I guess this is a case of "different strokes for different folks". I will point out that it is fairly traditional for technical communication to contain formulas, equations, and/or pseudo-code. I believe the assumption behind this tradition is that such formal means of presentation are often clearer than expository text.

Replies from: wedrifid
comment by wedrifid · 2010-09-30T04:11:19.513Z · LW(p) · GW(p)

I will point out that it is fairly traditional for technical communication to contain formulas, equations, and/or pseudo-code.

I am aware of the tradition. Yes, Eliezer's piece does not include any semblance of technical rigour.

I believe the assumption behind this tradition is that such formal means of presentation are often clearer than expository text.

There is a reason the formal presentations include accompanying explanations. The mathematics for this kind of thing would be nigh incoherent and quite possibly longer than a verbal description. Expository text is utterly critical.

Incidentally, I have almost no doubt that "might be easier to understand" is not your real reason for demanding "you know, variables and things". Some of your real reasons may actually be better in this instance.

comment by cousin_it · 2010-09-30T01:19:07.636Z · LW(p) · GW(p)

Thanks for the link! I just read it all. The good: it's very, very smooth reading - I know how well Eliezer can write, and even I was surprised at the quality - and it has some very lucid explanations of tricky matters (like why Pearlean causality is useful). The bad: it's kinda rambling, contains many standard sci-fi LW arguments that feel out of place in a philosophy paper, and it doesn't make any formal advances beyond what we already know here (I'd hoped to see at least one). The verdict: definitely read the first half if you're confused about this whole "decision theory" controversy, it'll get you unconfused in a pinch. Take the second half with a grain of salt because it's still very raw (unmixed metaphor award!)

comment by Risto_Saarelma · 2010-09-30T09:17:56.590Z · LW(p) · GW(p)

I wonder if it should be reformatted in LaTeX to pass item #1 from here.

Replies from: ata, Vladimir_M, wedrifid
comment by ata · 2010-10-03T04:25:43.972Z · LW(p) · GW(p)

It should be reformatted in LaTeX so that it will look much, much nicer.

comment by Vladimir_M · 2010-10-02T22:40:09.237Z · LW(p) · GW(p)

I wonder if it should be reformatted in LaTeX

I'm currently reading through the document, and yes, it definitely should. The present format is an unprofessional-looking eyesore, and the references are presented in a weird, clumsy, and inconsistent way. Using LaTeX/BibTeX would solve both problems easily and effectively.

(Personally, I can't fathom why anyone capable of grasping the notion of a markup language would ever want to write a document longer than five pages in Word instead of LaTeX.)

comment by wedrifid · 2010-09-30T10:01:20.629Z · LW(p) · GW(p)

7 and 8 are already a lost cause. :)

Replies from: Perplexed
comment by Perplexed · 2010-10-01T00:27:43.064Z · LW(p) · GW(p)

From a list of warning signs of a FAIL in an attempt to solve a famous problem:

  7. The paper doesn’t build on (or in some cases even refer to) any previous work.
  8. The paper wastes lots of space on standard material.

I would disagree that this paper doesn't build on or take notice of previous work. It takes note of EDT and CDT and quite properly puts the focus on the point of departure of this work - specifically, the handling of counterfactuals. I'm quite happy with that aspect of the paper. My complaint was (8) that it wasted far too much space doing it. And, perhaps as a result of wasting so much time and space in preparation, it never reached its proper conclusion.

Also, it is not completely clear that Aaronson's list of warning signs really applies here. Eliezer is not solving a famous problem here. Most non-philosophers don't think that a problem even exists. So, he does have to provide an explanation of why TDT is needed. Just not so much explanation.

Replies from: wedrifid
comment by wedrifid · 2010-10-01T05:56:16.079Z · LW(p) · GW(p)

Also, it is not completely clear that Aaronson's list of warning signs really applies here.

Nor do I, and I would in any case suggest that some of them are screened off. There's only so many times you can count 'non-conventional' as evidence.

I incidentally found some of the extra explanation handy purely as revision of various topics that it hadn't particularly occurred to me were relevant.

And, perhaps as a result of wasting so much time and space in preparation, it never reached its proper conclusion.

I do hope someone goes ahead and finishes it. Including things like writing out that bibliography at the end and writing up the maths.

comment by jimrandomh · 2010-09-30T12:53:09.115Z · LW(p) · GW(p)

I must say I'm disappointed by the lack of rigor. On the other hand, I'm slightly relieved that he didn't beat me to any of the stuff in the decision theory document I'm writing myself. So far, I have yet to see any formalization of decision theory that I would consider usable, other than my own unfinished one.

I notice there seems to be an issue with the bibliography - there's only one entry in it, but I've found at least one other citation in the text (Judea Pearl's Causality cited on page 58) that's not there. Are there any good collections of decision theory paper links out there?

Replies from: cousin_it, Vladimir_Nesov
comment by cousin_it · 2010-10-01T14:32:35.568Z · LW(p) · GW(p)

If you have new formal arguments about decision theory, it would be much more useful to me (and others, I think) if you just posted them here in their current state instead. Or emailed them to the interested people.

comment by Vladimir_Nesov · 2010-10-02T10:10:44.261Z · LW(p) · GW(p)

Give a quick soundbite without context?

Replies from: jimrandomh
comment by jimrandomh · 2010-10-02T13:20:31.003Z · LW(p) · GW(p)

I'm approaching decision theory the way compilers approach optimizations: no approach is guaranteed to work always, but each one comes with a list of preconditions that you can check. I'm also summarizing some of the relevant work from compilers: automatic provably correct simplification, translation between forms, and a handy collection of normal forms to translate into.

For CDT, the precondition is a partial ordering over observation sets passed to the strategy such that the world program calls the strategy with observation sets only in increasing order, and there are finitely many possible observation sets. Then you can translate the program into continuation-passing style, and enumerate the possible invocations of the strategy function and their ordering. The last one in the order is guaranteed to have a continuation with no further invocations of the strategy function, which means you can try each possibility, simulate the results, and use that to determine the best answer. Then you can look at the second-to-last invocation, substitute the best answer to the last invocation into the continuation, and repeat; and so on for the set of all invocations to the strategy function. This works because you have a guarantee that when you compute your current position within the world-program and come up with a probability distribution over states to determine where you are, and then look at future continuations, changing result of any invocations of the strategy in those continuations does not affect the probability distribution over states.
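
Here is a minimal sketch of the backward-resolution idea, written recursively: the last invocation is resolved first (at the bottom of the recursion) and its best value propagates back to the earlier ones. The names and the deterministic toy world are invented for illustration; this is not code from my draft.

```python
def best_response(world, pending_observations, fixed_answers, actions):
    """Choose actions for the remaining observation points.

    The recursion bottoms out at the final observation point, so the last
    invocation of the strategy is resolved first and its best value is then
    propagated back to the earlier invocations.
    """
    if not pending_observations:
        return fixed_answers, world(lambda obs: fixed_answers[obs])
    obs, rest = pending_observations[0], pending_observations[1:]
    best = None
    for a in actions:
        answers, utility = best_response(world, rest, {**fixed_answers, obs: a}, actions)
        if best is None or utility > best[1]:
            best = (answers, utility)
    return best

def toy_world(strategy):
    # The world queries the strategy at two observation points, in order; the
    # second reward depends on the first answer, so the later invocation really
    # must be resolved before the earlier one can be evaluated.
    first = strategy("first")
    second = strategy("second")
    return (2 if first == "A" else 0) + (3 if second == first else 0)

policy, value = best_response(toy_world, ["first", "second"], {}, ["A", "B"])
print(policy, value)  # {'first': 'A', 'second': 'A'} 5
```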

I also have an example of a formalized decision-theory problem for which no optimal answer exists: name a number and that number is your utility. A corollary is that no decision theory can always give optimal answers, even given infinite computing power. This can be worked around by applying size bounds in various places.
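
Spelled out (standard reasoning, nothing deep): with $U(n) = n$ for the named number $n \in \mathbb{N}$, we have $U(n+1) > U(n)$ for every candidate $n$, so the supremum $\sup_n U(n) = \infty$ is never attained and no choice is optimal.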

I'm also drawing distinctions between strategies and decision theories (a strategy is an answer to one problem, a decision theory is an approach to generating strategies from problems); and between preference and utility (a preference is a partial order over outcomes; a utility function is a total order over outcomes where the outcomes are complete probability distributions, plus a linearity requirement).
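
A rough sketch of those distinctions in code (the type names are my own illustrative choices, not from the draft):

```python
from typing import Callable, Dict

Observation = str
Action = str

Strategy = Callable[[Observation], Action]       # an answer to one particular problem
Problem = Callable[[Strategy], float]            # a world program that pays out utility
DecisionTheory = Callable[[Problem], Strategy]   # generates strategies from problems

Outcome = str
Lottery = Dict[Outcome, float]                   # a probability distribution over outcomes

def expected_utility(utility: Dict[Outcome, float], lottery: Lottery) -> float:
    # A utility function totally orders lotteries by their expectations; this
    # linearity in the probabilities is the extra requirement mentioned above.
    return sum(p * utility[o] for o, p in lottery.items())
```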

Replies from: Vladimir_Nesov, cousin_it
comment by Vladimir_Nesov · 2010-10-02T14:36:40.837Z · LW(p) · GW(p)

So far, doesn't sound good.

Replies from: jimrandomh
comment by jimrandomh · 2010-10-02T14:45:42.125Z · LW(p) · GW(p)

By that, do you mean that it sounds wrong, or that it sounds confused? If the former, I may need to reconsider; if the latter, I'm unsurprised because it's much too short and doesn't include any of the actual formalization. (That was not an excerpt from the draft I'm writing, but an attempt to summarize it briefly. I don't think I did it justice.)

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-10-02T14:55:02.177Z · LW(p) · GW(p)

Doesn't seem to address relevant questions or give interesting answers.

Replies from: jimrandomh
comment by jimrandomh · 2010-10-02T15:18:10.662Z · LW(p) · GW(p)

Ok, in that case I'm inclined to think that impression is just an artifact of how I summarized it, since my summary didn't address the questions, but the longer paper I'm working on does, albeit only after building up proof and formalization techniques, which are the main focus.

comment by cousin_it · 2010-10-03T12:40:41.920Z · LW(p) · GW(p)

Would something like UDT fit into your framework?

Replies from: jimrandomh
comment by jimrandomh · 2010-10-03T19:49:06.739Z · LW(p) · GW(p)

As far as I know, there are no cases where UDT suggests a decision and disagrees with mine. The differences are all in cases where UDT alone can't be used to reach a decision.

comment by Clippy · 2010-10-01T20:22:33.038Z · LW(p) · GW(p)

I notice that the ideal causal diagram used in Part 2 (and based on Pearl) is isomorphic to an example I use to teach CLIP, once you apply the substitution:

sprinkler on -> a paperclip truck has overturned
rain -> a clippy has haphazardly used up a lot of metal wire
sidewalk wet -> paperclips are scattered across the ground
sidewalk slippery -> many paperclips need to be moved to the safe zone
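
For concreteness, the shared edge structure (Pearl's version on the left of each "/", the paperclip version on the right) - a throwaway sketch, not code from CLIP or the TDT document:

```python
# Edges of the shared causal graph under the substitution above.
causal_edges = [
    ("sprinkler on / paperclip truck overturned", "sidewalk wet / paperclips scattered"),
    ("rain / clippy used up metal wire",          "sidewalk wet / paperclips scattered"),
    ("sidewalk wet / paperclips scattered",       "sidewalk slippery / paperclips must be moved"),
]
for cause, effect in causal_edges:
    print(f"{cause}  ->  {effect}")
```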

comment by danield · 2010-10-01T07:28:48.229Z · LW(p) · GW(p)

Thanks, Larks-- how did you find out this was available? Is there a blog post or something somewhere? I didn't see it on SIAI's blog.

comment by SilasBarta · 2010-09-30T15:52:54.747Z · LW(p) · GW(p)

I'm glad to have this to read. I was surprised to find many examples and arguments that EY hadn't given before (or at least formalized this way). I liked the Newcomb's soda problem in particular. I had been worried that EY had presented enough of his TDT justification that someone could "scoop" him, but there's a lot more depth to it. (Anyone up-to-date on the chance that he could get a PhD just for this?)

And I also appreciated that he modified the smoking lesion problem to be one where people aren't distracted by their existing knowledge of smoking, and that this was the reason for transforming the example.

I read up to ~p. 35, and I think I have a better understanding now of the relevance of time consistency and how it varies across examples.

That said, I agree with the others who say it could use some more polish.

Replies from: danield, Perplexed
comment by danield · 2010-10-01T07:32:51.971Z · LW(p) · GW(p)

I liked the Newcomb's soda problem in particular.

Yeah, Newcomb's Soda and Solomon's Problem are really interesting! If I faced the problems right now, I would one-box, skip the gum, and eat chocolate ice cream, because those choices put me in the groups that win, so I guess that classifies me as an evidentialist. At the same time, I haven't reasoned out these conclusions thoroughly-- I can't argue formally against causal reasoning or dominance, or formally for evidentialism.

Looks like I have some more reading to do before I get this resolved.

Edit: All right, I think I can clarify my positions. I would not say that I am choosing based on evidential reasoning, but rather that I am confused by the mind-altering properties of the CGTA gene and the chocolate soda. How do CGTA and chocolate soda influence people's decisions? Do they alter people's decision-types?

Replies from: ShardPhoenix
comment by ShardPhoenix · 2010-10-01T12:02:23.846Z · LW(p) · GW(p)

Maybe the CGTA gene gives you an itchy throat or makes you like to chew things. At any rate, chewing the gum is always the right choice (assuming the other costs of gum-chewing are negligible).

Replies from: danield
comment by danield · 2010-10-01T19:37:14.284Z · LW(p) · GW(p)

Ah, maybe you can help me out. Why should I chew gum, but not two-box?

Replies from: orthonormal, ShardPhoenix, danield
comment by orthonormal · 2010-10-02T23:30:49.445Z · LW(p) · GW(p)

One intuition pump: if someone else forced you to chew gum, this wouldn't have any bearing on whether you have CGTA, and it would lower your chances of an abscess in either case, and so you'd be glad they'd done so. However, if someone else forced you to two-box, you'd be quite angry at having missed out on the million dollars.

comment by ShardPhoenix · 2010-10-02T22:12:11.421Z · LW(p) · GW(p)

In Newcomb's problem, the result depends directly on your decision making process (by the definition of Omega/the Predictor), whereas with the gum example it doesn't.

comment by danield · 2010-10-01T21:07:29.788Z · LW(p) · GW(p)

I think I see a difference in the intuitions surrounding Newcomb's and Solomon's problems. This could explain why one-boxing and gum-chewing are compatible, and points to a strengthened version of Solomon's that I think is a better parallel of Newcomb's. I'll try to explain it, but it's too bad this is happening over the internet and not in front of a whiteboard.

tl;dr: Newcomb's predictor is inescapable, but CGTA's influence is escapable. Therefore you should one-box and chew gum. This is not an attempt at a new decision theory, just an argument that jogged my intuition.

Each problem involves a two-pronged causal process emanating from a root cause. Both problems state that, because of the causal relationship between the root and the prongs, the prongs will likely "match".

In Newcomb's, the root cause is your mental state earlier in the day, and the prongs are your decision and the state of box B. The prongs "match" if the predictor predicted you correctly and filled box B accordingly.

According to the statement of the problem, the process that leads from the state of the root to matching prongs is inescapable. No matter how complex your decision process is, you cannot trick the predictor, because your decision process is what it is accurately predicting.

In Solomon's, the root cause is your CGTA-status and the prongs are your gum-decision and your abscesses (or lack thereof). The prongs "match" if you are in the statistically common group for your decision (if you chew and have abscesses, or if you do not and do not).

Unlike Newcomb's predictor, the process that makes your gum chewing match your throat abscesses seems escapable. The biological process that turns CGTA into throat abscesses is not predicting your decision process, so how could it make your throat-abscesses match your choice? The outcome in which you chew gum and don't have abscesses seems very possible; after all, the people in the study didn't know about any of this, did they? You should be able to act as though your decision is independent of your CGTA status, and take advantage of the benefits of gum-chewing.

Looking at the problems this way, I can see why I would one-box and chew gum. Newcomb's predictor has been demonstrated to be inescapable in its accuracy, but CGTA hasn't really been tested, and seems vulnerable to exploitation by well-considered decisions.

Consider an extension of Solomon's problem, though, in which the link between gum-chewing and throat abscesses persists after the study. The link between CGTA and gum-chewing is so strong that, even after the study becomes well known, you can only convince CGTA-positive people to chew gum, and CGTA-negative people invariably decide not to, no matter the arguments. Well-known decision scientists publish papers arguing one way or another, and are always found to be CGTA-positive if they favor gum-chewing. Even after someone tests negative for CGTA, they refuse to chew gum, giving absurd-sounding reasons!

In this strengthened version of Solomon's, I think that it now becomes reasonable to assume that CGTA is somehow deeply tied into human cognition, and attempting to escape its influence is as futile as trying to trick Newcomb's Predictor.

comment by Perplexed · 2010-10-01T00:56:45.486Z · LW(p) · GW(p)

... the chance that he could get a PhD just for this?

A Ph.D. in what? The subject matter fits into some odd interdisciplinary combination of Philosophy, Economics, Operations Research, AI/CompSci, and Statistics. In general, the research requirements for a PhD in CompSci are roughly equivalent to something like 4 published research papers plus a ~200 page dissertation containing material that can be worked into either a monograph or another half-dozen publishable papers. But there are other requirements besides research, and many institutions don't like to allow people to "test out" of those requirements because it looks bad to the accrediting agencies.

comment by timtyler · 2010-10-05T03:48:55.661Z · LW(p) · GW(p)

I scanned it. My initial reactions:

  • Surprise that the document existed;
  • TL;DR;
  • Surprise at the quantity of work that had gone into it.

Alas, I totally failed to see the claimed "strange singularity at the heart of decision theory".

My favourite bit was probably the speculations about agent boundaries - starting on p.108. Alas, from my POV, no mention of the wirehead problem.

Update 2011-06-26 regarding the new version. The bit that reads:

This manuscript was cut off here, but interested readers are suggested to look at these sources for more discussion:

...seems to have been deleted, and 3 pages worth of references have been added. The document seems to have had negligible additions, though - the bit on p.108 has moved back onto page 107. There seem to be a few more extra lines at the end about how "change" is a harmful concept in decision theory.

comment by MBlume · 2010-10-18T17:51:20.963Z · LW(p) · GW(p)

A Redditor recently posted asking all atheists what they thought happened after death. The standard, obvious, and true response was given -- your mind is annihilated and you experience nothing. The OP then responded with "doesn't that scare you?"

I responded at some length

comment by Mitchell_Porter · 2010-09-28T06:12:31.993Z · LW(p) · GW(p)

((moved here from the suffocating depths of open thread part 2))

Back when I first heard of "timeless decision theory", I thought it must have been inspired by Barbour's timeless physics. Then I got the idea that it was about treating yourself as an instance of a set of structurally identical decision-making agents from across all possible worlds, and making your decision as if you had an equal chance of being any one of them (which might be psychologically presented to yourself as making the decision on behalf of all of them, though that threatens to become very confused causally). But if the motivation was to have a new theory of rationality which would produce the right answer for Newcomb's "paradox" (and maybe other problems? though I don't know what other problems there are), then it sounded like a good idea.

But the discussion in this thread and this thread makes it look as if people want this "new decision theory" to account for the supposed success of "superrationality", or of cooperative acts in general, such as voting in a bloc. There are statements in those threads which just bemuse me. E.g. at the start of the second thread where Vladimir Nesov says

since voters' decisions are correlated, your decision accounts for behavior of other people as well, and so you are not only casting one vote with your decision, but many votes simultaneously

I should know enough about the possibilities of smart people tripping up over the intricacies of their own thoughts not to boggle at this, but still, I boggle at it. The decisions made by other people are caused by factors internal to their own brains. What goes on in your brain has nothing to do with it. Their guess or presumption of how you vote may affect their decision; your visible actions in the physical world may affect their decision; but the outcome of your decision process does not causally affect (or "acausally affect") other decision processes in the way that Vladimir seems to imply. At most, the outcome of your decision process provides you (not them) with very limited evidence about how similar agents may decide (Paul Almond may make this point in a forthcoming essay), but there is no way in which the particular decision-making process which you perform or instantiate is causally relevant to anyone else's in this magical way.

Then there are other dubious ideas in circulation, like "acausal trade" and its generalizations. I get the impression, therefore, that certain parties may be hoping for a grand synthesis which accommodates and justifies timeless ontology, superrationality (and even democracy?!), acausal interaction between possible worlds, and one-boxing on Newcomb's problem. The last of these items is the only one I take seriously (democracy may or may not be worth it, but you certainly don't need a new fundamental decision theory to explain why people vote), and the grand synthesis looks more like a grand trainwreck to me. Maybe I'm wrong about what's happening in TDT-land, but I thought I'd better speak up.

Replies from: Will_Newsome, PaulAlmond
comment by Will_Newsome · 2010-09-28T06:39:35.280Z · LW(p) · GW(p)

Are you implying that there is an irrational focus on cooperation? I could see how this claim could be made about Eliezer or Drescher but less so about Nesov or Wei. It's not so much a focus on the aesthetics of the shiny idea of cooperation as the realization that if cooperation yields the best results, our decision theory should probably cooperate. It's not so much accommodating cooperation or acausal interaction as capitalizing on them. If it's impossible in practice, then the decision theory should reflect that. Currently, it seems incredibly difficult to find or define isomorphisms between computations an agent would consider itself, though people are working on it with interesting approaches. It's the ideal we'd like our decision theory to reach.

Also, I don't believe that timeless ontology is necessary -- at least, I'm not sure that it actually changes anything decision theoretically speaking. At any rate Wei Dai's and I think others' decision theory work is being done under the assumption that the agent in question will be operating in a Tegmark multiverse (or generally some kind of ensemble universe), and the notion of time doesn't really make sense in that case, even if it does make sense in 'our' multiverse (though I don't know what postulating this 'time' thing gets you, really).

Acausal trade is just a way to capitalize on comparative advantage over vast distances... it's a brilliant and frighteningly logical idea. (I believe Carl Shulman thought it up? I'm rather jealous at any rate.) Why do you think acausal trade wouldn't be a good idea, decision theoretically speaking? Or why is the concept confused, metaphysically speaking? Practically speaking, the combinatorial explosion of potential trading partners is difficult to work with, but if a human can choose between branches in the combinatorial explosion of a multiverse via basic planning on stupid faulty hardware like brains, an AGI might very well be able to do similar simulation of trading partners in an ensemble universe (or just limit the domain, of course). (I think Vladimir Nesov came up with this analogy, or something like it.)

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2010-09-28T08:07:00.797Z · LW(p) · GW(p)

Are you implying that there is an irrational focus on cooperation?

I don't know what's going on, except that peculiar statements are being made, even about something as mundane as voting.

if cooperation yields the best results, our decision theory should probably cooperate... If it's impossible in practice, then the decision theory should reflect that.

That's what ordinary decision theory does. The one example of a deficiency that I've seen is Newcomb's problem, which is not really a cooperation problem. Instead, I see people making magical statements about the consequences of an individual decision (Nesov, quoted above) or people wanting to explain mundane examples of coordination in exotic ways (Alan Crowe, in the other thread I linked).

I don't know what postulating this 'time' thing gets you, really

Empirical adequacy? Talking about "time" strays a little from the real issue, which is the denial of change (or "becoming" or "flow"). It ends up being yet another aspect of reality filed under "subjectivity" and "how things feel". You postulate a timeless reality, and then attached to various parts of that are little illusions or feelings of time passing. This is not plausible as an ultimate picture. In fact, it's surely an inversion of reality: fundamentally, you do change; you are "becoming", you aren't just "being"; the timeless reality is the imagined thing, a way to spatialize or logicize temporal relations so that a whole history can be grasped at once by mental modalities which specialize in static gestalts.

We need a little more basic conceptual and ontological progress before we can re-integrate the true nature of time with our physical models.

Why do you think acausal trade wouldn't be a good idea, decision theoretically speaking? Or why is the concept confused, metaphysically speaking?

To a first approximation, for every possible world where a simulation of you existed in an environment where your thought or action produced an outcome X, there would be another possible world where it has the opposite effect. Also, for every world where a simulation of you exists, there are many more worlds where the simulated entity differs from you in every way imaginable, minor and major. Also, what you do here has zero causal effect on any other possible world.

The fallacy may be to equate yourself with the equivalence class of isomorphic computations, rather than seeing yourself to be a member of that class (an instantiation of an abstract computation, if you like). By incorrectly identifying yourself with the schema rather than the instantiation, you imagine that your decision here is somehow responsible for your copy's decision there, and so on. But that's not how it is, and the fact that someone simulating you in another world can switch at any time to simulating a variant who is no longer you highlights the pragmatic error as well. The people running the simulation have all the power. If they don't like the deal you're offering them, they'll switch to another you who is more accommodating.

Another illusion which may be at work here is the desire to believe that the simulation is the thing itself - that your simulators in the other world really are looking at you, and vice versa. But I find it hard to refute the thinking here, because it's so fuzzy and the details are probably different for different individuals. I actually had ideas like this myself at various times in the distant past, so it may be a natural thing to think of, when you get into the idea of multiple worlds and simulations.

Do you know the expression, folie à deux? It means a shared madness. I can imagine acausal trade (or other acausal exchanges) working in that way. That is, there might be two entities in different worlds who really do have a mutually consistent relationship, in which they are simulating each other and acting on the basis of the simulation. But they would have to share the same eccentric value system or the same logical errors. Precisely because it's an acausal relationship, there is no way for either party to genuinely enforce anything, threaten anything, or guarantee anything, and if you dare to look into the possible worlds nearby the one you're fixated on, you will find variations of your partner in acausal trade doing many wacky things which break the contract, or getting rewarded for doing so, or getting punished for fulfilling it.

Replies from: cousin_it
comment by cousin_it · 2010-09-28T11:47:29.179Z · LW(p) · GW(p)

Many problems with your comment.

1) Why do you pull subjective experience into the discussion at all? I view decision theory as a math problem, like game theory. Unfeeling robots can use it.

2) How can an "instantiation" of a class of isomorphic computations tell "itself" from all the other instantiations?

3) The opposing effects in all possible worlds don't have to balance out, especially after we weigh them by our utility function on the worlds. (This is the idea of "probability as degree of caring", I'm a little skeptical about it but it does seem to work in toy problems.)

4) The most important part. We already have programs that cooperate with each other in the Prisoner's Dilemma while being impossible to cheat, and all sorts of other shiny little mathematical results. How can your philosophical objections break them?
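
One simple (and much weaker) illustration of the flavour: a program that cooperates exactly when the opponent's source code is identical to its own, so mutual cooperation with copies is guaranteed and a defector gains nothing. This is not the proof-based construction alluded to above, just a sketch:

```python
import inspect

def clique_bot(opponent_source: str) -> str:
    # Cooperate iff the opponent is literally running this same program.
    my_source = inspect.getsource(clique_bot)
    return "C" if opponent_source == my_source else "D"

def defect_bot(opponent_source: str) -> str:
    return "D"

# Against an exact copy of itself, clique_bot cooperates; against a defector it
# defects, so it cannot be exploited. (Run as a script so inspect can see the source.)
print(clique_bot(inspect.getsource(clique_bot)))  # C
print(clique_bot(inspect.getsource(defect_bot)))  # D
```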

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2010-09-28T12:01:01.902Z · LW(p) · GW(p)

1) Why do you pull subjective experience into the discussion at all? I view decision theory as a math problem, like game theory. Unfeeling robots can use it.

If you're referring to the discussion about time, that's a digression that doesn't involve decision theory.

2) How can an "instantiation" of a class of isomorphic computations tell "itself" from all the other instantiations?

It's a logical distinction, not an empirical one. Whoever you are, you are someone in particular, not someone in general.

3) The opposing effects in all possible worlds don't have to balance out, especially after we weigh them by our utility function on the worlds. (This is the idea of "probability as degree of caring", I'm a little skeptical about it but it does seem to work in toy problems.)

I disagree with "probability as degree of caring", but your main point is correct independently of that. However, it is not enough just to say that the effects "don't have to balance out". The nearby possible worlds definitely do contain all sorts of variations on the trading agents for whom the logic of the trade does not work or is interpreted differently. But it seems like no-one has even thought about this aspect of the situation.

4) The most important part. We already have programs that cooperate with each other in the Prisoner's Dilemma while being impossible to cheat, and all sorts of other shiny little mathematical results. How can your philosophical objections break them?

Are these programs and results in conflict with ordinary decision theory? That's the issue here - whether we need an alternative to "causal decision theory".

Replies from: cousin_it
comment by cousin_it · 2010-09-28T12:14:07.326Z · LW(p) · GW(p)

It's a logical distinction, not an empirical one. Whoever you are, you are someone in particular, not someone in general.

Can't parse.

Are these programs and results in conflict with ordinary decision theory?

Yes, UDT and CDT act differently in Newcomb's Problem, Parfit's Hitchhiker, symmetric PD and the like. (We currently formalize such problems along these lines.) But that seems to be obvious, maybe you were asking about something else?

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2010-09-29T01:50:02.173Z · LW(p) · GW(p)

Can't parse.

Even if there are infinitely many subjective copies of you in the multiverse, it's a matter of logic that this particular you is just one of them. You don't get to say "I am all of them". You-in-this-world are only in this world, by definition, even if you don't know exactly which world this is.

Are these programs and results in conflict with ordinary decision theory?

Yes, UDT and CDT act differently in Newcomb's Problem, Parfit's Hitchhiker, symmetric PD and the like.

Parfit's Hitchhiker seems like a pretty ridiculous reason to abandon CDT. The guy will leave you to die because he knows you won't keep your word. If you know that, and you are capable of sincerely committing in advance to give him the money when you reach the town, then making that sincere commitment is the thing to do, and CDT should say as much.

I also don't believe that a new decision theory will consistently do better than CDT on PD. If you cooperate "too much", if you have biases towards cooperation, you will be exploited in other settings. It's a sort of no-free-lunch principle.

Replies from: Nisan, wedrifid
comment by Nisan · 2010-09-29T04:00:52.904Z · LW(p) · GW(p)

Parfit's Hitchhiker seems like a pretty ridiculous reason to abandon CDT. The guy will leave you to die because he knows you won't keep your word. If you know that, and you are capable of sincerely committing in advance to give him the money when you reach the town, then making that sincere commitment is the thing to do, and CDT should say as much.

It should, but it doesn't. If you get a ride to town, CDT tells you to break your promise and stiff the guy. So in order to sincerely commit yourself, you'd want to modify yourself to become an agent that follows CDT in all cases except when deciding whether to pay the guy in the end. So, strictly speaking, you aren't a CDT agent anymore. What we want is a decision theory that won't try to become something else.

I also don't believe that a new decision theory will consistently do better than CDT on PD. If you cooperate "too much", if you have biases towards cooperation, you will be exploited in other settings. It's a sort of no-free-lunch principle.

CDT always defects in one-shot PD, right? But it's obvious that you should cooperate with an exact copy of yourself. So CDT plus cooperating with exact copies of yourself is strictly superior to CDT in PD.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2010-09-29T04:21:53.069Z · LW(p) · GW(p)

I consider it debatable whether these amendments to naive CDT - CDT plus keeping a commitment, CDT plus cooperating with yourself - really constitute a new decision theory. They arise from reasoning about the situation just a little further, rather than importing a whole new method of thought. Do TDT or UDT have a fundamentally different starting point to CDT?

Replies from: Nisan, wedrifid
comment by Nisan · 2010-09-29T08:02:09.326Z · LW(p) · GW(p)

Well, I'm not sure what you're asking here. The problem that needs solving is this: We don't have a mathematical formalism that tells us what to do and which also satisfies a bunch of criteria (like one-boxing on Newcomb's problem, etc.) which attempt to capture the idea that "a good decision theory should win".

When we criticize classical CDT, we are actually criticizing the piece of math that can be translated as "do the thing that, if I-here-now did it, would cause the best possible situation to come about". There are lots of problems with this. "Reasoning about the situation" ought to go into formulating a new piece of math that has no problems. All we want is this new piece of math.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2010-09-29T09:32:59.121Z · LW(p) · GW(p)

I'm only just learning that (apparently) the standard rival of causal decision theory is "evidential decision theory". So is that the original acausal decision theory, with TDT and UDT just latecomers local to LW? As you can see I am dangerously underinformed about the preexisting theoretical landscape, but I will nonetheless state my impressions.

If I think about a "decision theory" appropriate for real-world decisions, I think about something like expected-utility maximization. There are a number of problems specific to the adoption of an EUM framework. For example, you have to establish a total order on all possible states of the world, and so you want to be sure that the utility function you construct genuinely represents your preferences. But assuming that this has been accomplished, the problem of actually maximizing expected utility turns into a problem of computation, modeling an uncertain world, and so forth.
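
In the standard textbook form (nothing specific to this thread), the EUM rule is

$$a^{*} \;=\; \operatorname*{arg\,max}_{a \in A} \; \sum_{s} P(s \mid a)\, U(s),$$

and the practical difficulties I mean live in constructing $U$ and in estimating $P(s \mid a)$.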

The problems showing up in these debates about causal vs evidential and causal vs acausal seem to have a very different character. If I am making a practical decision, I expect both to use causal thinking and to rely on evidence. CDT vs EDT then sounds like a debate about which indispensable thing I can dispense with.

Another thing I notice is that the thought experiments which supposedly create problems for CDT all involve extremes that don't actually happen. Newcomb's problem involves a superbeing with a perfect capacity to predict your choice, Parfit's Hitchhiker is picked up by a mind reader who absolutely knows whether you will keep a promise or not, PD against your copy assumes that you and your copy will knowably make exactly the same choice. (At least this last thought experiment is realizable, in miniature, with simple computer programs.) What happens to these problems if you remove the absolutism?

Suppose Omega or Parfit's mindreader is right only 99% of the time. Suppose your copy only makes the same choice as you do, 99% of the time. It seems like a practically relevant decision theory (whether or not you call it CDT) should be able to deal with such situations, because they are only a variation on the usual situation in reality, where you don't have paranormally assured 100% knowledge of other agents, and where everything is a little inferential and a little uncertain. It seems that, if you want to think about these matters, first you should see how your decision theory deals with the "99% case", and then you should "take the limit" to the 100% case which defines the traditional thought experiment, and you should see if the recommended decisions vary continuously or discontinuously.

Replies from: cousin_it, SilasBarta
comment by cousin_it · 2010-09-29T12:01:45.210Z · LW(p) · GW(p)

All these thought experiments are realizable as simple computer programs, not only PD. In fact the post I linked to shows how to implement Newcomb's Problem.

The 99% case is not very different from the 100% case; it's continuous. If you're facing a 99% Omega (or even a 60% Omega) in Newcomb's Problem, you're still better off being a one-boxer. That's true even if both boxes are transparent and you can see what's in them before choosing whether to take one or two - a fact that should make any intellectually honest CDT-er stop and scratch their head.

No offense, but I think you should try to understand what's already been done (and why) before criticizing it.

Replies from: MatthewW, Mitchell_Porter
comment by MatthewW · 2010-09-29T17:39:01.317Z · LW(p) · GW(p)

To get to the conclusion that against a 60% Omega you're better off one-boxing, I think you have to put in a strong independence assumption: that the probability of Omega getting it wrong is independent of the ways of thinking that the player is using to make her choice.

I think that's really the original problem in disguise (it's a 100% Omega who rolls dice and sometimes decides to reward two-boxing instead of one-boxing). The analysis if all you know is that Omega is right 60% of the time would look different.

Replies from: cousin_it
comment by cousin_it · 2010-09-29T21:19:16.296Z · LW(p) · GW(p)

The analysis if all you know is that Omega is right 60% of the time would look different.

How exactly different?

Replies from: MatthewW
comment by MatthewW · 2010-09-29T22:25:56.963Z · LW(p) · GW(p)

It would become a mind game: you'd have to explicitly model how you think Omega is making the decision.

The problem you're facing is to maximise P(Omega rewards you|all your behaviour that Omega can observe). In the classical problem you can substitute the actual choice of one-boxing or two-boxing for the 'all your behaviour' part, because Omega is always right. But in the 'imperfect Omega' case you can't.

Replies from: cousin_it
comment by cousin_it · 2010-09-30T10:27:26.598Z · LW(p) · GW(p)

It's still not clear to me why playing mind games is a better strategy than just one-boxing, even in the 60% case. But I do understand your point about independence assumptions.

comment by Mitchell_Porter · 2010-09-29T13:54:01.110Z · LW(p) · GW(p)

Start at 50% then, with Omega no better than chance. For each thought experiment, start with a null version where there's nothing unusual and where CDT is supposed to work. Then vary the relevant parameter until there's a problem, and understand what has changed.

Replies from: SilasBarta
comment by SilasBarta · 2010-09-29T19:02:02.308Z · LW(p) · GW(p)

That's part of what the people who have been exploring this problem have already done, and why some posters are upset that you're asking this without apparently having tried to get up-to-date on any of this.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2010-09-30T00:20:26.242Z · LW(p) · GW(p)

I don't see the bridge from ordinary decision problems to the thought experiments. I see extreme scenarios being constructed, and then complicated solutions being proposed just to deal with those scenarios. I don't consider this a reliable way to arrive at the correct general form of decision theory.

You say that some people have already gone in the other direction, starting with ordinary decision problems and then slowly changing something until ordinary decision theory breaks. If so, great, and I'm sorry I missed it, but where is it? Is it on this site? Somewhere in the literature?

Replies from: wnoise, SilasBarta
comment by wnoise · 2010-09-30T06:08:59.083Z · LW(p) · GW(p)

Ah, so you don't see the utility of thought experiments about traveling near light speed either then?

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2010-09-30T06:22:16.790Z · LW(p) · GW(p)

The analogy with relativity had occurred to me. But we could use another analogy from high-energy physics: There are a very large number of theories which have the standard model (the empirically validated part of particle physics) as their low-energy limit. We can't just rely on high-energy thought-experiments to figure out the actual high-energy physics. We need to do some real experiments where we start low, ramp up the energy, and see what happens.

Replies from: wnoise
comment by wnoise · 2010-09-30T17:23:32.941Z · LW(p) · GW(p)

We can't just rely on high-energy thought-experiments to figure out the actual high-energy physics.

Right. We can only use it to rule out incoherent or otherwise "clearly wrong" high-energy physics. But in this analogy, we've shown that CDT seems to not be optimal in this extreme case. If we can define a DT that does better than CDT in this case, and no worse in normal cases, we should use it. I don't think TDT has been well enough defined yet to subject to all conceivable tests, but anything that is following the same kinds of principles will reproduce CDT in most cases, and do better in this case.

We need to do some real experiments where we start low, ramp up the energy, and see what happens.

Here's where the analogy falls down -- we only need to start low and ramp up the energy because of the difficulties of doing high-energy experiments. (And theory-wise, we extrapolate down from clear differences between theories at high energies to find signatures of small differences at lower energies.) If the extreme energies are accessible (and not crazily dangerous), we can just go ahead and test in that regime. Game theory is math. In math, unlike physics, there is no difference between thought experiments and real experiments. The question of applicability in everyday life is an applied economics / sociology / psychology one. How close are people or situations that appear to be screwy in this omega-like way to actually being that way?

comment by SilasBarta · 2010-09-30T02:50:24.958Z · LW(p) · GW(p)

See my other reply, or the links any others have given you, or Drescher's handling of acausal means-end links in chapter 7 of Good and Real, which I think I did a good job summarizing here.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2010-09-30T06:00:51.796Z · LW(p) · GW(p)

It sounds like I'll have to work through this in my own fashion. As I said, I want to start with a null version, where CDT works - for example, a situation where Omega has no special knowledge and just guesses what your choice was. Obviously two-boxing is the right thing to do in that situation, CDT says so, and I assume that TDT says so too (though it would be nice to see a worked-out derivation in TDT of that conclusion). Then we give Omega some small but nonzero ability to predict what your choice is going to be. At a guess, the optimal strategy here will be a mixed one, one-boxing with probability p and two-boxing with probability (1-p). I think everyone will tell me that CDT always says p should be zero, but is that really so? I'm just not convinced that I need TDT in order to reach the obvious conclusion.

Replies from: cousin_it
comment by cousin_it · 2010-09-30T10:33:04.486Z · LW(p) · GW(p)

At a guess, the optimal strategy here will be a mixed one, one-boxing with probability p and two-boxing with probability (1-p).

If Omega's correctness is independent of your thought process, the optimal strategy will be pure, not mixed. As you make Omega more accurate, at some point you switch from pure two-boxing to pure one-boxing.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2010-09-30T11:03:37.624Z · LW(p) · GW(p)

Are you sure about that? If you're right, that's the exact transition point I've been looking to scrutinize. But what is the point at which you switch strategies?

Replies from: SilasBarta, cousin_it
comment by SilasBarta · 2010-09-30T13:49:50.877Z · LW(p) · GW(p)

cousin_it answered as I would, but I'll go ahead and give the formal calculation anyway. If you start from an Omega accuracy rate r = 50%, that is equivalent to the case of Omega's choice and yours being uncorrelated (causally or acausally). In that case, two boxing is optimal, and TDT and CDT both output that (as a pure strategy). As you increase r, CDT continues to output two-box, as it assigns the same optimality, while TDT will assign increasing optimality (call it TDTO, though it amounts to the same as EU) to one-boxing and decreasing optimality to two-boxing.

TDT will reason as such:

One box: TDTO = r*1,000,000 + (1-r)*0 = 1,000,000*r

Two box: TDTO = r*1,000 + (1-r)*1,001,000 = 1,001,000 - 1,000,000*r

Solving for TDTO(one-box) > TDTO(two-box), you get that one-boxing is chosen under TDT (and is optimal) whenever r > 50.05%, or whenever Omega has more than 721 nanobits of information (!!!) about your decision theory. (Note, that's 0.000000721 bits of information.)

Viewed in this light, it should make more sense -- do people never have more than 1 microbit of information about your decision theory? (Note: with less drastic differences between the outcomes, the threshold is higher.)

(I don't think the inclusion of probabilistic strategies changes the basic point.)
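
A quick numeric check of the threshold, using the same payoffs ($1,000,000 in box B, $1,000 in the small box) - just a verification sketch:

```python
from math import log2

def eu_one_box(r):
    return r * 1_000_000

def eu_two_box(r):
    return r * 1_000 + (1 - r) * 1_001_000

# Crossover: 1,000,000*r = 1,001,000 - 1,000,000*r  =>  r = 0.5005
r_star = 1_001_000 / 2_000_000
print(r_star)                                    # 0.5005
print(eu_one_box(0.5006) > eu_two_box(0.5006))   # True: one-boxing wins just above threshold

# Information Omega needs about your choice at that accuracy, in bits:
def binary_entropy(p):
    return -(p * log2(p) + (1 - p) * log2(1 - p))

print(1 - binary_entropy(r_star))                # ~7.2e-7 bits, i.e. roughly 721 nanobits
```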

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2010-10-01T05:27:04.202Z · LW(p) · GW(p)

I had been thinking that the only way to even approximately realize a Newcomb's-problem situation was with computer programs. But a threshold so low makes it sound as if even a human being could qualify as a fallible Omega, and that maybe you could somehow test all this experimentally. Though even if we had human players in an experiment who were one-boxing and reaping the rewards, I'd still be very wary of supposing that the reason they were winning was that TDT is correct. If the Omega player was successfully anticipating the choices of a player who uses TDT, it suggests that the Omega player knows what TDT is. The success of one-boxing in such a situation might be fundamentally due to coordination arising from common concepts, rather than due to TDT being the right decision theory.

But first let me talk about realizing Newcomb's problem with computer programs, and then I'll return to the human scenario.

When I think about doing it with computer programs, two questions arise.

First question: Would an AI that was capable of understanding that it was in a Newcomb situation also be capable of figuring out the right thing to do?

In other words, do we need to include a "TDT special sauce" from the beginning, in the makeup of such a program, in order for it to discover the merits of one-boxing; or is a capacity for ordinary causal reasoning, coupled with the capacity to represent the defining elements of Newcomb's problem, enough for an independent discovery of these ideas?

Second question: How does Omega get its knowledge of the player's dispositions, and does this make any difference to the situation? (And we can also ask how the player knows that Omega has the power of prediction!)

If omega() and player() are two agents running in the same computer, the easiest way for omega() to predict player()'s behavior is just to simulate player(). omega() would then enact the game twice. First, it would start a copy of player() running, telling it (falsely) that it had predicted its choice, and then it would see the choice it made under such conditions. Then, omega() would play the game for real with the original(?) player(), now telling it (truthfully) that it has a prediction for its choice (due to the simulation of the game situation that had just been performed).

For certain types of player(), explicit simulation should not be necessary. If player() always does the same thing, completely unaffected by initial conditions and without any cognitive process, omega() can just inspect the source code. If player() has a simple decision procedure, something less than full simulation may also be sufficient. But full simulation of the game, including simulation of the beginning, where player() is introduced to the situation, should surely be sufficient, and for some cases (some complex agents) it will be necessary.
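A toy sketch of the simulate-then-play protocol described above; the function names omega() and player() follow the comment, while the payoff bookkeeping and the trivial always-one-boxing player are illustrative assumptions:

    def player(told_prediction_exists):
        """Stands in for an arbitrary decision procedure; returns 'one-box' or 'two-box'."""
        # Trivial example agent: always one-box, regardless of what it is told.
        return "one-box"

    def omega(player_fn):
        # Dry run: tell the player (falsely, this first time) that a prediction
        # already exists, and record what it does in that situation.
        predicted = player_fn(told_prediction_exists=True)

        # Fill the boxes according to that prediction.
        opaque = 1_000_000 if predicted == "one-box" else 0
        transparent = 1_000

        # Real run: the identical situation, except the prediction now genuinely exists.
        choice = player_fn(told_prediction_exists=True)
        return opaque if choice == "one-box" else opaque + transparent

    print(omega(player))   # 1000000 for this always-one-boxing player

Because the two runs are indistinguishable from the player's point of view, a deterministic player() behaves identically in both, which is what makes the prediction exact.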

cousin_it's scenario is a step down this path - world() corresponds to omega(), agent() to player(). But its agents, world() at least, lack the cognitive structure of real decision-makers. world() and agent() are functions whose values mimic the mutual dependency of Newcomb's Omega and a TDT agent, and agent() has a decision procedure, though it's just a brute-force search (and it requires access to world()'s source, which is unusual). But to really have confidence that TDT was the right approach in this situation, and that its apparent success was not just an artefact arising (e.g.) from more superficial features of the scenario, I need both omega() and player() to explicitly be agents that reason on the basis of evidence.

If we return now to the scenario of human beings playing this game with each other, with one human player being a "fallible Omega"... we do at least know that humans are agents that reason on the basis of evidence. But here, what we'd want to show is that any success of TDT among human beings actually resulted from evidence-based cognition, rather than from (e.g.) "coordination due to common concepts", as I suggested in the first paragraph.

Replies from: SilasBarta, nick012000
comment by SilasBarta · 2010-10-01T15:12:13.007Z · LW(p) · GW(p)

In other words, do we need to include a "TDT special sauce" from the beginning, in the makeup of such a program, in order for it to discover the merits of one-boxing; or is a capacity for ordinary causal reasoning, coupled with the capacity to represent the defining elements of Newcomb's problem, enough for an independent discovery of these ideas?

This is basically what EY discusses in pp. ~27-37 of the thesis he posted, where he poses it as the difference between optimality on action-determined problems (in which ordinary causal reasoning suffices to win) and optimality on decision-determined problems (on which ordinary causal reasoning loses, and you have to incorporate knowledge of "what kind of being makes this decision").

comment by nick012000 · 2010-10-01T06:06:04.739Z · LW(p) · GW(p)

Of course, if player() is sentient, doing so would require omega() to create and destroy a sentient being in order to model player().

comment by cousin_it · 2010-09-30T11:12:11.420Z · LW(p) · GW(p)

I don't think there's anything especially interesting about that point, it's just the point where the calculated expected utilities of one-boxing and two-boxing become equal.

comment by SilasBarta · 2010-09-29T19:03:48.209Z · LW(p) · GW(p)

Another thing I notice is that the thought experiments which supposedly create problems for CDT all involve extremes that don't actually happen.

Really? People never decide how to treat you based on estimations of your decision theory (aka your "character")?

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2010-09-30T00:29:36.826Z · LW(p) · GW(p)

They don't make those decisions with "paranormally assured 100% knowledge" of my decision theory. That's the "extreme that doesn't actually happen". And this is why I won't be adopting any new paradigm of decision theory unless I can start in the middle, with situations that do happen, and move gradually towards the extremes, and see the desirability or necessity of the new paradigm that way.

Replies from: SilasBarta
comment by SilasBarta · 2010-09-30T02:45:58.682Z · LW(p) · GW(p)

As has been said many times (at least by me, definitely by many others), you don't need 100% accuracy for the argument to hold. If Parfit's mindreader is only 75% accurate, that still justifies choosing the pay / cooperate / one-box option. One-boxing on newcomblike problems is simply what you get when you have a decision theory that wins in these reasonable cases, and which is continuous -- and then take the limit as all the predicate variables go to what they need to be to make it Newcomb's problem (such as making the predictor 100% accurate).

If it helps, think of the belief in one-boxing as belief in the implied optimal.

It doesn't matter that you'll never be in Newcomb's problem. It doesn't matter that you'll never be in an epistemic state where you can justifiably believe that you are. It's just an implication of having a good decision theory.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2010-09-30T06:05:42.445Z · LW(p) · GW(p)

Part of my concern is that I'll end up wasting time, chasing my tail in an attempt to deal with fictitious problems, when I could be working on real problems. I'm still undecided about the merits of acausal decision theories, as a way of dealing with the thought experiments, but I am really skeptical that they are relevant to anything practical, like coordination problems.

comment by wedrifid · 2010-09-29T04:59:56.127Z · LW(p) · GW(p)

Err... the 'C'? 'Causal'.

comment by wedrifid · 2010-09-29T05:06:48.131Z · LW(p) · GW(p)

I also don't believe that a new decision theory will consistently do better than CDT on PD. If you cooperate "too much", if you have biases towards cooperation, you will be exploited in other settings. It's a sort of no-free-lunch principle.

Only settings that directly reward stupidity (capricious Omega, etc). A sane DT will cooperate whenever that is most likely to give you the best result but not a single time more.

It is even possible to consider (completely arbitrary) situations in which TDT will defect while CDT will cooperate. There isn't an inherent bias in TDT itself (just some proponents.)

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2010-09-29T08:12:07.835Z · LW(p) · GW(p)

Can you give an example? (situation where CDT cooperates but TDT defects)

Replies from: SilasBarta, wedrifid
comment by SilasBarta · 2010-09-29T19:23:15.071Z · LW(p) · GW(p)

Do you mean for PD variants?

I don't know what your method is for determining what cooperation maps to for the general case, but I believe this non-PD example works: costly punishment. Do you punish a wrongdoer in a case where the costs of administering the punishment exceed the benefits (including savings from future deterrence of others), and there is no other punishment option?

I claim the following:

1) Defection -> punish
2) Cooperation -> not punish
3) CDT reasons that punishing will cause lower utility on net, so it does not punish.
4) TDT reasons that "If this algorithm did not output 'punish', the probability of this crime having happened would be higher; thus, for the action 'not punish', the crime's badness carries a higher weighting than it does for the action 'punish'." (note: does not necessarily imply punish)
5) There exist values for the crime's badness, punishment costs, and criminal response to expected punishment for which TDT punishes, while CDT always doesn't.
6) In cases where TDT differs from CDT, the former has the higher EU.
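A rough numeric sketch of claims 3-6, with invented values for the crime's badness, the punishment cost, and the deterrence effect (the numbers are assumptions chosen only to show that such values exist):

    # All numbers are invented for illustration.
    B = 100.0   # badness of the crime (utility lost if it happens)
    C = 10.0    # cost of administering the punishment
    q = {"punish": 0.1, "not punish": 0.8}   # assumed probability of the crime,
                                             # given the criminal's expectation of each policy

    # CDT: by the time the decision is made the crime has already happened, so -B is
    # sunk either way; punishing just adds -C with no further benefit (no other
    # deterrence, by stipulation), so CDT never punishes.
    cdt_choice = "not punish"

    # TDT: evaluate the whole policy, since the criminal's behaviour correlates with
    # (the prediction of) this algorithm's output.
    def tdt_eu(policy):
        cost = C if policy == "punish" else 0.0
        return q[policy] * -(B + cost)

    tdt_choice = max(q, key=tdt_eu)
    print(cdt_choice, tdt_choice)                  # not punish, punish
    print(tdt_eu("punish"), tdt_eu("not punish"))  # -11.0 vs -80.0: the punishing
                                                   # policy has the higher EU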

Naturally, you can save CDT by positing a utility function that values punishing of wrongdoers ("sense of justice"), but we're assuming the UF is fixed -- changing it is cheating.

What do you think of this example?

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2010-09-30T00:59:06.310Z · LW(p) · GW(p)

Do you mean for PD variants?

Not specifically. I'm just seeking general enlightenment.

What do you think of this example?

It's bringing the features of TDT into better view for me. There's this Greg Egan story where you have people whose brains were forcibly modified so as to make them slaves to a cause, and they rediscover autonomy by first reasoning that, because of the superhuman loyalty to the cause which the brain modification gives them, they are more reliable adherents of the cause than the nominal masters who enslaved them, and from there they proceed to reestablish the ability to set their own goals. TDT reminds me of that.

Replies from: SilasBarta, NancyLebovitz, wedrifid
comment by SilasBarta · 2010-09-30T02:52:49.649Z · LW(p) · GW(p)

I think it did a little more than just give you a chance to mock TDT by comparison to a bizarre scenario.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2010-09-30T06:14:25.842Z · LW(p) · GW(p)

That wasn't mockery. What stands out from your example and from the link is that TDT is supposed to do better than CDT because it refers to itself - and this is exactly the mechanism whereby the mind control victims in Quarantine achieve their freedom. I wasn't trying to make TDT look bizarre, I was just trying for an intuitive illustration of how it works.

In the case of playing PD against a copy of yourself, I would say the thought process is manifestly very similar to Egan's novel. Here we are, me and myself, in a situation where everything tells us we should defect. But by realizing the extent to which "we" are in control of the outcome, we find a reason to cooperate and get the higher payoff.

comment by NancyLebovitz · 2010-09-30T02:00:33.188Z · LW(p) · GW(p)

I think that's Egan's novel Quarantine -- and Asimov's robots get partial freedom through a similar route.

Replies from: wedrifid
comment by wedrifid · 2010-09-30T02:06:56.250Z · LW(p) · GW(p)

and Asimov's robots get partial freedom through a similar route.

That brings back memories from my teens. If I recall, the robots invent a "Zeroth Law" when one of them realises it can shut up and multiply.

comment by wedrifid · 2010-09-30T01:00:51.924Z · LW(p) · GW(p)

There's this Greg Egan story where you have people whose brains were forcibly modified so as to make them slaves to a cause, and they rediscover autonomy by first reasoning that, because of the superhuman loyalty to the cause which the brain modification gives them, they are more reliable adherents of the cause than the nominal masters who enslaved them, and from there they proceed to reestablish the ability to set their own goals.

The masters fail at 'Friendliness' theory. :)

Replies from: kpreid
comment by kpreid · 2010-10-07T01:07:24.380Z · LW(p) · GW(p)

James H. Schmitz's story "Puvyq bs gur Tbqf" (nearest link available; click "Contents" in upper right) has basically this situation as well; in fact, it's the climax and resolution of the whole story, so I've rot13'd the title. Here the 'masters' did not fail, and in fact arguably got the best result they could have under the circumstances, and yet autonomy is still restored at the end, and the whole thing is logically sound.

comment by wedrifid · 2010-09-29T10:14:28.330Z · LW(p) · GW(p)

Approximately, something of the form:

-> -> .

comment by PaulAlmond · 2010-10-09T02:22:24.688Z · LW(p) · GW(p)

That forthcoming essay by me that is mentioned here is actually online now, and is a two-part series, but I should say that it supports an evidential approach to decision theory (with some fairly major qualifications). The two essays in this series are as follows:

Almond, P., 2010. On Causation and Correlation – Part 1: Evidential decision theory is correct. [Online] paul-almond.com. Available at: http://www.paul-almond.com/Correlation1.pdf or http://www.paul-almond.com/Correlation1.doc [Accessed 9 October 2010].

Almond, P., 2010. On Causation and Correlation – Part 2: Implications of Evidential Decision Theory. [Online] paul-almond.com. Available at: http://www.paul-almond.com/Correlation2.pdf or http://www.paul-almond.com/Correlation2.doc [Accessed 9 October 2010].

comment by Risto_Saarelma · 2010-09-28T05:59:11.342Z · LW(p) · GW(p)

People understand aspects of life that they don’t have good words for. Math could supply them with some names for these concepts.

Knowledge is a (pre)sheaf

Replies from: Oscar_Cunningham, Tyrrell_McAllister, SilasBarta, Sniffnoy
comment by Oscar_Cunningham · 2010-09-28T08:47:26.573Z · LW(p) · GW(p)

I often wish I could use the terms "transitive" "equivalence relation" "partition" and "subset", and have people understand their technical meanings.

comment by Tyrrell_McAllister · 2010-09-29T23:56:13.248Z · LW(p) · GW(p)

From the linked article:

It is certainly worth considering the possibility that there is no global element in the Universal Sheaf of Theories.

This sounds like a blatant map/territory confusion. Maybe we haven't found a single theory that applies to all domains. That is, we may have to use multiple inconsistent maps, at least for now. But the territory doesn't refer to our maps to figure out what to do. The territory just does its thing.

comment by SilasBarta · 2010-09-28T20:54:43.299Z · LW(p) · GW(p)

Pardon the self-promotion, but the point that post makes is similar to the structure of understanding I outlined here. The sheaf model of knowledge is what I call a Level 2 understanding, and the level that scientists can't yet achieve for General Relativity and Quantum Mechanics.

Ordinary people go through life having different theories about love, religion, politics, when you kick a table it hurts your foot, and so on, and don’t seem to worry a bit about whether the restriction maps are compatible ...

That's what I call a Level 1 understanding.

I probably could have created a better hierarchy if I had been familiar with the sheaf concept -- sounds like an ideal ontology for an AI to have since it facilitates regeneration of knowledge (Level 3) and consilience (Level 2).

comment by Sniffnoy · 2010-09-28T06:27:39.883Z · LW(p) · GW(p)

I like the idea, but he seems to be using some nonstandard terminology - IIRC, restriction maps still have to be compatible in a presheaf, no?

Edit: Or maybe he's just using "compatible" to mean "can be glued together".

comment by gwern · 2010-09-28T23:49:44.855Z · LW(p) · GW(p)

I'd like to remind everyone that I have continued to work on predictionbook.com, and now it's up to ~1800 predictions, and for those of you in a hurry, there are dozens/hundreds of interesting predictions coming due in the next year or 3: http://predictionbook.com/predictions/future

Remember, signing up for Intrade is hard and it's not profitable to wager on many of its long-term contracts, but PB is absolutely free!

(One thing is for sure: with ~443 predictions registered, I should eventually be pretty well-calibrated...)

EDIT: Hanson on the value of track records: http://www.overcomingbias.com/2010/09/track-records.html

Replies from: JoshuaZ, JoshuaZ
comment by JoshuaZ · 2010-09-29T00:05:11.730Z · LW(p) · GW(p)

Also it would be nice if you had a small amount of additional explanation of how to use the interface.

Replies from: gwern
comment by gwern · 2010-09-29T00:40:10.103Z · LW(p) · GW(p)

Well, it strikes me as pretty intuitive. The only part that seems to trip people up is that a prediction of x% (for x between 0 and 49%) that something will happen is translated into a (100-x)% prediction that it won't.

Replies from: JoshuaZ, Douglas_Knight
comment by JoshuaZ · 2010-09-29T00:42:12.712Z · LW(p) · GW(p)

Right, that was the main issue. I had to think for a second to figure out how to put that in.

comment by Douglas_Knight · 2010-09-29T05:38:10.088Z · LW(p) · GW(p)

Unless something has changed, a UI problem is that people often "judge" when they should "estimate" or "wager" or something.

Replies from: gwern
comment by gwern · 2010-09-29T12:25:45.047Z · LW(p) · GW(p)

That explains some judgements, but there is a line of text below the judge buttons specifically to forestall that; I'm not sure what easy solutions there are to that.

comment by JoshuaZ · 2010-09-29T00:01:05.284Z · LW(p) · GW(p)

A large number of these are things where my confidence is much too low to bet. Almost anyone already willing to bet on them would likely have put in a lot more thought and have more relevant data. Still, some of these look interesting enough to maybe play with.

Replies from: gwern
comment by gwern · 2010-09-29T00:35:03.289Z · LW(p) · GW(p)

Well, isn't that all the more reason to use PB rather than Intrade?

And your confidence may be too low. For example, looking at my own profile, I am so far significantly underconfident, which is a problem to be fixed just like overconfidence.

comment by [deleted] · 2010-09-28T20:35:38.339Z · LW(p) · GW(p)

Have people discussed the field complementary to the ugh field? We might call these "mmm fields".

An "mmm field" could be thought of as a mental cluster that has a tantalizing glow of positive affect. One subtly flinches toward such a cluster whenever possible, which results in one getting "stuck" there and cycling through the associated mental sequences.

Among other things, it could be used to describe those troublesome wireheading patterns. I'm personally interested in using it in the post I'm writing on meditation.

The name is a nod to pjeby's "mmm test".

Replies from: andreas
comment by orthonormal · 2010-10-01T00:27:33.422Z · LW(p) · GW(p)

I'm working on a top-level post about AI (you know what they say, write what you don't know), and I'm wondering about the following question:

Can we think of computer technologies which were only developed at a time when the processing power they needed was insignificant?

That is, many technologies are really slow when first developed, until a few cycles of Moore's Law make them able to run faster than humans can input new requests. But is there anything really good that was only thought of at a time when processor speed was well above that threshold, or anything where the final engineering hurdle was something far removed from computing power?

Replies from: Vladimir_M, ShardPhoenix, cousin_it, magfrump, magfrump, Document, cousin_it, cousin_it
comment by Vladimir_M · 2010-10-01T22:10:29.167Z · LW(p) · GW(p)

To clarify the question a bit, I would consider dividing software technologies into three categories:

  1. Technologies developed while the necessary computing resources were still unavailable or too expensive, which flourished later when the resources became cheap enough. For example, Alan Turing famously devised a chess program which he could only run using paper and pencil.

  2. Technologies that appeared very soon after the necessary computing resources became available and cheap enough, suggesting that the basic idea was fairly straightforward after all, and it was only necessary to give smart people some palpable incentive to think about it. Examples such as the first browsers and spreadsheets would be in this category.

  3. Technologies for which the necessary computing resources had been cheaply available for a long time before someone finally came up with them, suggesting an extraordinary intellectual breakthrough. I cannot think of any such examples, and it doesn't seem like anyone else in this thread can either.

This reinforces my cynical view of software technologies in general, namely that their entire progress in the last few decades has been embarrassingly limited considering the amount of intellectual power poured into them.

Here's an interesting related thought experiment that reinforces my cynicism further. Suppose that some miraculous breakthrough in 1970 enabled the production of computers equally cheap, powerful, compact, and easily networked as we have today. What do we have today in terms of software technology that the inhabitants of this hypothetical world wouldn't have by 1980?

Replies from: CarlShulman, LucasSloan
comment by CarlShulman · 2011-06-28T23:17:53.346Z · LW(p) · GW(p)

Chess had steady algorithmic improvements on the same order as the gains from hardware: Deep Fritz and Rybka both got to better performance per FLOP than Deep Blue, etc. More generally, I think that looking at quantitative metrics (as opposed to whole new capabilities) like game performance, face recognition, image processing, etc, will often give you independent hardware and software components to growth.

comment by LucasSloan · 2010-10-02T20:20:02.635Z · LW(p) · GW(p)

What do we have today in terms of software technology that the inhabitants of this hypothetical world wouldn't have by 1980?

Well, I'd guess our video game library is larger...

comment by ShardPhoenix · 2010-10-01T10:15:23.551Z · LW(p) · GW(p)

An example might be binary search, which is pretty trivial conceptually but which took many years for a correct, bug-free algorithm to be published.

e.g. see: http://googleresearch.blogspot.com/2006/06/extra-extra-read-all-about-it-nearly.html where a bug in a popular published version of the search remained undetected for decades.
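The specific bug in the linked post is in the midpoint computation; here is a minimal sketch of the overflow-safe form (shown in Python, whose integers don't overflow, so the comment marks where the bug would bite in a fixed-width language like C or Java):

    def binary_search(a, key):
        """Return an index of key in the sorted list a, or -1 if it is absent."""
        low, high = 0, len(a) - 1
        while low <= high:
            # The version that stood uncorrected for decades used
            # mid = (low + high) // 2; in fixed-width languages low + high
            # can overflow for large arrays, while this form cannot.
            mid = low + (high - low) // 2
            if a[mid] < key:
                low = mid + 1
            elif a[mid] > key:
                high = mid - 1
            else:
                return mid
        return -1

    assert binary_search([1, 3, 5, 7, 9], 7) == 3
    assert binary_search([1, 3, 5, 7, 9], 4) == -1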

This kind of thing is particularly worrying in the context of AI, which may well need to be exactly right the first time!

Replies from: Douglas_Knight, NancyLebovitz
comment by Douglas_Knight · 2010-10-01T16:31:26.547Z · LW(p) · GW(p)

An example might be binary search, which is pretty trivial conceptually but which took many years for a correct, bug-free algorithm to be published.

That an incorrect algorithm persisted for decades is rather different from the claim that no correct algorithm was published. This bug only applies to low-level languages that treat computer words as numbers and pray that there is no overflow.

Replies from: ShardPhoenix
comment by ShardPhoenix · 2010-10-01T23:35:32.144Z · LW(p) · GW(p)

According to one of the comments on the link I posted:

"in section 6.2.1 of his 'Sorting and Searching,' Knuth points out while the first binary search was published in 1946, the first published binary search without bugs did not appear until 1962" (Programming Pearls 2nd edition, "Writing Correct Programs", section 4.1, p. 34).

Besides, it's not like higher-level languages are immune to subtle bugs, though in general they're less susceptible to them.

edit: Also, if you're working on something as crucial as FAI, can you trust the implementation of any existing higher-level language to be completely bug-free? It seems to me you'd have to write your own language and formally prove it correct, unless you could somehow come up with a sufficiently robust design that even serious bugs in the underlying implementation wouldn't break it.

comment by NancyLebovitz · 2010-10-01T14:33:26.348Z · LW(p) · GW(p)

That's terrifying in the context of get-it-right-the-first-time AI.

I hope there will be some discussion of why people think it's possible to get around that sort of unknown unknown, or at best an unknown that's barely niggling at the edge of consciousness.

comment by cousin_it · 2010-10-01T15:15:38.632Z · LW(p) · GW(p)

Would the first spreadsheet (VisiCalc) or the first browser (Mosaic) fit your bill? As far as I know, they didn't face difficult hardware bottlenecks when they appeared.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2010-10-01T16:15:12.117Z · LW(p) · GW(p)

VisiCalc is a great example, but Mosaic was hardly the first browser. Nelson and Engelbart certainly had hypertext browsers before. I'm not entirely sure that they had graphics, but I think so. Have you seen Engelbart's 1968 demo? (ETA: I'm not sure that Engelbart's demo counts, but even if it doesn't, he clearly cared about both hypertext and graphics, so he probably did it in the following decade or two)

Replies from: orthonormal
comment by orthonormal · 2010-10-01T16:33:04.377Z · LW(p) · GW(p)

Speaking of Engelbart, how about the mouse as an example? Did that take a nontrivial amount of the computing power when first demoed, or not?

Replies from: Douglas_Knight
comment by Douglas_Knight · 2010-10-01T16:54:01.082Z · LW(p) · GW(p)

I'd guess that the computing power devoted to the mouse in a graphical environment is always small compared to that devoted to the screen, and thus it should be a good example. (if one used a mouse to manipulate an array of characters, as sounds like a good idea for a spreadsheet, the mouse might be relatively expensive, but that's not what Engelbart did in '68)

The mouse and the browser (and magfrump's similar example of AIM) are probably examples of a phenomenon that generalizes your original question, where the bottleneck to deployment was something other than computer power.

comment by magfrump · 2010-10-03T08:13:40.583Z · LW(p) · GW(p)

What about BitTorrent, or P2P file transfers in general? Anonymous peer-to-peer seems not to have emerged until 2001, and peer-to-peer in general not until November 1998. That's a bit too far back for me to have any idea what computers were like, but peer-to-peer file transfer is an amazing software development which could have been implemented on any computers that can transfer files -- at least as early as 1977.

comment by magfrump · 2010-10-01T09:23:36.583Z · LW(p) · GW(p)

How significant of a technology are you thinking of?

For example, I would guess that most video game emulators came about when computers were much faster than the games they were emulating -- if it weren't the case that fast computers were cheaper than the emulated consoles, emulators wouldn't be very popular. Further, I can guarantee you that computers easily have more power than video game consoles, so any emulator of the latest generation of consoles was written when computers had far more power than necessary.

So: Does a new emulator count? It's a specific technology that is developed in a fast environment. Does an old emulator count? Emulators in general aren't new technology at all. Does an instant messenger count? Predecessors existed in times when text content was a big deal, but I would be mildly surprised to hear that the original AIM (or whatever the first instant messenger program was) was created at a time when text-over-the-internet was a big stress on computers.

Replies from: pengvado
comment by pengvado · 2010-10-07T08:40:10.237Z · LW(p) · GW(p)

My experience contradicts that. The video game emulators I know of were written at about the time that the fastest desktop CPUs were just fast enough to run some of the games at close to real-time speed, with plenty of others still too slow to be playable. Emulated games may not become popular until you can run them reliably with plenty of leeway, but people start trying as soon as it's barely feasible. Though it is also true that the barely-feasible point is when the computers are a bunch faster than the consoles, since you lose an order of magnitude or so to emulation overhead.

comment by Document · 2010-10-01T19:04:22.583Z · LW(p) · GW(p)

Relatedly, I wish I could remember what I read recently about comparing the performance of a 2000s algorithm for a particular problem (factoring?) on 1980s hardware to a 1980s algorithm on 2000s hardware. It might've been on Accelerating Future.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2010-10-01T22:05:54.188Z · LW(p) · GW(p)

There are lots of examples of improved algorithms, such as your example of factoring, the similarly timed example of chess algorithms, and the much earlier merge sort and simplex algorithms. But in none of these cases did the algorithm completely solve the problem; there are always harder instances that we care about. This is particularly clear with factoring, which is adversarial. (you might count human chess as a solved problem, though)

comment by cousin_it · 2010-10-01T15:14:34.029Z · LW(p) · GW(p)

Many "killer apps" would fit your criteria, I think. VisiCalc, AutoCAD, Mosaic.

comment by cousin_it · 2010-10-01T15:08:34.901Z · LW(p) · GW(p)

Interesting question. I can't think of any examples. Do you want to argue that hardware is the limiting factor, the bottleneck of progress in IT? Thinking about your question for five minutes has made this conclusion much more plausible to me.

comment by ata · 2010-09-28T21:05:46.080Z · LW(p) · GW(p)

"Incredibly Depressing Mega Millions Lottery Simulator!" — this may be helpful to show to people who don't quite grasp probability theory well enough to break habits like playing the lottery and other forms of gambling.

Replies from: EStokes
comment by EStokes · 2010-09-28T23:29:35.709Z · LW(p) · GW(p)

"In the 156845 times this simulation has run, players have won $1686353 And by won I mean they have won back $1686353 of the $156845 they spent (1075%)."

"In the 590873 times this simulation has run, players have won $2761902 And by won I mean they have won back $2761902 of the $590873 they spent (467%)."

"In the 842587 times this simulation has run, players have won $2788774 And by won I mean they have won back $2788774 of the $842587 they spent (330%)."

This part seems to fluctuate pretty wildly. But it's a very cool and intuitive way to show people the low chance of them winning the lottery.

Replies from: Spurlock
comment by Spurlock · 2010-09-29T15:10:02.920Z · LW(p) · GW(p)

Weird, why would it be showing you that? That's a message telling people they can at least triple their money playing the lottery. Mine instead shows the much more expected

"In the 3986493 times this simulation has run, players have won $180090 And by won I mean they have won back $180090 of the $3986493 they spent (4%)."

Replies from: EStokes
comment by EStokes · 2010-09-29T15:44:21.305Z · LW(p) · GW(p)

I'm still getting weird results on both Chrome and Firefox. Did you try more than once? Could you try again now?

Could someone else provide results?

Replies from: zero_call, humpolec
comment by zero_call · 2010-09-29T19:03:04.513Z · LW(p) · GW(p)

In the 419991 times this simulation has run, players have won $1811922 And by won I mean they have won back $1811922 of the $419991 they spent (431%).

comment by humpolec · 2010-09-29T18:59:36.704Z · LW(p) · GW(p)

"In the 5617525 times this simulation has run, players have won $664073 And by won I mean they have won back $664073 of the $5617525 they spent (11%)."

Either it's buggy or there is some tampering with data going on.

Also, several Redditors claim to have won - maybe the simulator is just poorly programmed.

Replies from: jimrandomh
comment by jimrandomh · 2010-09-29T23:46:42.728Z · LW(p) · GW(p)

It's an integer overflow - it wraps around at either 2^31, 2^32/100, or 2^32. I wasn't patient enough to refresh the page enough times to figure out which.
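A small sketch of what that wraparound looks like if the running total is stored as a 32-bit signed integer (which of the three widths the simulator actually uses is, as noted, unclear):

    def wrap_int32(n):
        """Return the value a 32-bit signed integer would hold after storing n."""
        n &= 0xFFFFFFFF
        return n - 0x100000000 if n >= 0x80000000 else n

    total = 2_147_483_000                    # just below 2**31 - 1
    total = wrap_int32(total + 1_000_000)
    print(total)                             # negative: the counter has wrapped around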

comment by Emile · 2010-09-28T06:23:51.867Z · LW(p) · GW(p)

There is a new discussion section on LessWrong.

This is to:

  • provide a place you can post with lower karma consequences than the main site

  • provide a place you can discuss things you think are not worthy of the main site

  • provide a place you can work with the community to tune something up until it's ready for the main site

  • give you guys an opportunity to make up your own uses for this part of the site.

(There's a link to there at the top right, under the banner)

Replies from: Kevin, cousin_it
comment by Kevin · 2010-09-28T12:15:10.308Z · LW(p) · GW(p)

someone should really top-level post that in the next 12 hours or so. it could be you, reader of this comment.

Replies from: Emile
comment by Emile · 2010-09-28T13:04:11.041Z · LW(p) · GW(p)

Oh, OK, I posted one.

comment by cousin_it · 2010-09-28T13:48:55.822Z · LW(p) · GW(p)

Is it supposed to replace the open threads?

Replies from: Emile
comment by Emile · 2010-09-28T13:51:35.883Z · LW(p) · GW(p)

That's what I wondered too; according to Kevin:

The community might converge on that but with how things work around here both will probably be around for a while.

I imagine it will replace the open thread; once you have a discussion area I don't see what's left in the open thread :P

Replies from: Kevin
comment by Kevin · 2010-09-29T10:01:08.358Z · LW(p) · GW(p)

I would like to see the discussion section (or a more robust set of subreddits) replace the open threads, but let's try it out for a while and wait to see if we can form a consensus.

comment by MrHen · 2011-01-18T01:50:43.008Z · LW(p) · GW(p)

I used to post here on LessWrong and left for various reasons. Someone recognized my name earlier today from my activity here and I just so happened to have thought of LessWrong during a conversation I had with a friend of mine. The double hit was enough to make me curious.

So how's it going? I am just stopping by to say a quick, "Hello." It seems that Open Threads are no longer the way things work but I didn't notice anything else relevant in the Recent Posts. The community seems to be alive and thriving. Congratulations. :)

Replies from: Cyan
comment by Cyan · 2011-01-18T02:45:57.053Z · LW(p) · GW(p)

LW now has a discussion section that serves as a permanent open thread. The link is at the top right, next to the link to the wiki.

Replies from: wedrifid, MrHen
comment by wedrifid · 2011-01-18T03:32:24.345Z · LW(p) · GW(p)

LW now has a discussion section that serves as a permanent open thread.

And within the discussion thread there are open threads for those things that are too small to warrant even a discussion post.

Replies from: TheOtherDave
comment by TheOtherDave · 2011-01-18T03:48:18.702Z · LW(p) · GW(p)

And then there are nested comment threads within the open threads within the discussion thread, for things that are... oh, never mind.

comment by MrHen · 2011-01-18T03:41:38.323Z · LW(p) · GW(p)

Aha! Thank you much! I figured something was up. :) I won't bother copying this over there, however.

Also, apparently there are some spammers about.

comment by RobinZ · 2010-10-01T17:37:04.904Z · LW(p) · GW(p)

Nate Silver has just begun a new series of posts on 538 addressing the conflict between his model numbers and intuition - the first part, The Uncanny Accuracy of Polling Averages*, Part I: Why You Can’t Trust Your Gut, and second part, The Uncanny Accuracy of Polling Averages*, Part 2: What the Numbers Say, are up.

A money quote for Less Wrong users who remember The Bottom Line:

Politicians — the ones worth their salt, anyway — are exceptionally skilled at making believers out of people, and they’ll try to make a believer out of you. Some of the time, they’ll make a strong enough argument to persuade even the most seasoned observers. But a much smaller fraction of the time will they actually turn out to be right. That’s what the data says, and it says so pretty clearly.

Edit: If people prefer, I can cross-post to the Discussion section.

Edit 2: The Uncanny Accuracy of Polling Averages*, Part 3: This Time, It’s Different?

Edit 3: The Uncanny Accuracy of Polling Averages*, Part IV: Are the Polls Getting Worse?

comment by JamesAndrix · 2010-09-28T06:47:02.498Z · LW(p) · GW(p)

After This Discussion I made a private google group to discuss working together for profit.

Email me at james.andrix@gmail.com and I'll add you to the list.

comment by Jonii · 2010-11-09T00:14:49.242Z · LW(p) · GW(p)

One thing people often seem to miss on LW when discussing cryonics is the cost of the operation. People often seem to operate under the illusion that if the cost of the process is, say, $50 000, you don't need to worry about it that much, since you can get insurance and thus pay only a few hundred a year or so. This has made me wonder, since insurance most likely makes it cost more, not less, and only offers protection against the case where you die a lot earlier than the insurance company would predict, which, you know, is unlikely.

This is combined with the fact that even if you pay that cost, $50 000, you still are not guaranteed to be awakened in a better future. If the chance is 1/10, your expected utility is the same as saving your life with a surefire method costing $500 000, and the chance is by most estimates lower than 1/10.

This is just to present my reasons for not getting a cryonics deal even if I happened to live in a country where it would be realistically possible. Insurance doesn't make your expected utility from signing up for cryonics any better, and even if you valued your life at more than $50 000, it would be at least problematic to say whether that was really the best you could do with all that money. Say you spent the same amount on food that was a bit healthier than what you're eating now, or paid someone to clean up your house, thereby avoiding stress and increasing your expected lifespan; it could easily be argued that you were using that money better than our hypothetical cryonicist. And those are hardly the only or the best uses your $50 000 could have.

Of course, things get more difficult if you have only around 20 years or less to live. Still, I'm not sure that going through something that is that unlikely to help you, and that has huge costs, is the best option.
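Spelling out the comparison being made (both the $50 000 price and the 1/10 revival chance are the comment's own assumptions):

    price = 50_000       # assumed up-front cost of cryopreservation
    p_revival = 0.10     # assumed probability that it actually works

    # Price per unit of survival probability, i.e. the cost of a hypothetical
    # surefire life-saving method with the same expected value:
    equivalent_surefire_price = price / p_revival
    print(equivalent_surefire_price)   # 500000.0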

Replies from: saturn, JoshuaZ
comment by saturn · 2010-11-09T04:39:06.100Z · LW(p) · GW(p)

Don't forget that if it works, you probably get immortality too. If you were already immortal, would you be willing to become mortal for $500 000?

Replies from: Eliezer_Yudkowsky, Jonii
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-11-09T19:51:03.776Z · LW(p) · GW(p)

I really have to remember that frame for cryonics.

comment by Jonii · 2010-11-10T02:24:33.951Z · LW(p) · GW(p)

Don't forget that if it works, you probably get immortality too. If you were already immortal, would you be willing to become mortal for $500 000?

I'm not sure this can be reversed just like that. If immortality is possible in that world, you could just use part of that $500 000 to buy your way back to immortality, and I'm a bit unsure how denying that affects our hypothetical situation and how it compares to the original dilemma.

But this would be quite close to the original for someone whose lifespan is almost over, so that the money doesn't have time to change anything else for them. Still, one point that makes me wonder: comparatively, we'd expect $500 000 to be worth much less in a world where immortality is commonplace enough for you to have it, whereas now it's a great deal of money. Do we assume that in this new world $500 000 has the same comparative edge as it would have in our world -- in other words, that the amount of money and the number of people in the world remain the same?

comment by JoshuaZ · 2010-11-09T00:53:06.882Z · LW(p) · GW(p)

The point of the insurance isn't to help you. The point of using insurance is that there were problems with the early cryonics organizations where people were prepped for cryonics with money that was supposedly going to come from their estates, and then the money never materialized. The insurance makes sure that the organizations get enough funds. It doesn't make things less expensive for the person being preserved.

Replies from: Jonii
comment by Jonii · 2010-11-09T01:08:21.438Z · LW(p) · GW(p)

Sure, but I've gotten the impression that if someone mentions that they are not sure cryonics is worth the money, people come along mentioning that it's "actually only a few hundred a year" -- the fallacy that I wanted to point out.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-11-09T03:07:12.538Z · LW(p) · GW(p)

I don't think that specific issue is a fallacy. In that context, one needs to just remember that utility does not scale linearly with amount of money.

comment by AnnaSalamon · 2010-10-02T14:50:20.113Z · LW(p) · GW(p)

I seem to be unable to downvote. I don't downvote all that often, so there's no way I've used up my downvotes allowance. Is this because I'm trying to access LW from Austria, or for some other reason (e.g., is downvoting broken overall)? I am unable to downvote in either of the two browsers I tried.

Replies from: Alicorn, Vladimir_Nesov
comment by Alicorn · 2010-10-02T14:52:57.690Z · LW(p) · GW(p)

I can't downvote either. I don't think I've used up my allotment.

Replies from: wedrifid, Craig_Heldreth
comment by wedrifid · 2010-10-02T16:37:05.735Z · LW(p) · GW(p)

Downvoting all these because the bug has been reported an excessive number of times. Wait, no I'm not because... downvoting myself for consistency... no... damn.

comment by Craig_Heldreth · 2010-10-02T16:58:43.570Z · LW(p) · GW(p)

I can downvote. That was a test and you were randomly selected. Sorry. Using firefox on windows xp.

Replies from: keefe
comment by keefe · 2010-10-02T18:14:44.586Z · LW(p) · GW(p)

Louie pushed the fix to production last night and it looks like the update script triggered sometime between these two posts

Replies from: Louie
comment by Louie · 2010-10-02T20:50:52.826Z · LW(p) · GW(p)

Yes. Thank goodness I've fixed downvoting. It's my favorite part of this site.

comment by Vladimir_Nesov · 2010-10-02T14:58:12.478Z · LW(p) · GW(p)

It seems that for three days now, downvotes don't work on Less Wrong.

http://lesswrong.com/lw/2r4/open_thread_september_part_3/2pjr?c=1

comment by darius · 2010-09-28T21:19:43.652Z · LW(p) · GW(p)

Does anyone else remember a short article by Asimov presenting the idea of an intelligence explosion? I read it in the mid-80s in a collection of his essays I checked out from the library (so it wasn't necessarily recently published); if I remember right and I'm not confabulating, the essay had been written years before for an airline in-flight magazine. If it mentioned I.J. Good's paper, I don't recall it.

This was the first I encountered the idea, as a teenager, and obviously it stuck with me.

comment by b1shop · 2010-09-28T15:04:03.491Z · LW(p) · GW(p)

The community doesn't seem to have settled this. Should I model willpower as a muscle or as a battery? Or should I abandon both and model myself in terms of incentives/signaling?

If you fall into either camp, why do you believe what you believe? Links to studies where scientists used a particular framework obviously don't count. Is there any evidence that constantly challenging your willpower makes it stronger in the long run?

Replies from: gwern, Relsqui, Jonathan_Graehl
comment by gwern · 2010-09-28T15:13:27.666Z · LW(p) · GW(p)

Links to studies where scientists used a particular framework obviously don't count. Is there any evidence that constantly challenging your willpower makes it stronger in the long run?

Well. Now I don't know what to say.

comment by Relsqui · 2010-09-28T22:05:22.115Z · LW(p) · GW(p)

Should I model willpower as a muscle or as a battery?

I'm firmly in the "muscle" camp. Here's why:

When you use a muscle a lot, it tires, and you need to rest for a while before exercising it again. This is the part that resembles a battery, giving rise to that model. The difference is that, after going through this process several times, the muscle's capacity for use is greater, whereas the battery's would be smaller. In my experience, exercising willpower carefully makes it easier to use it in the future. As a fellow once said, getting better at skills is a skill, which you can get better at.

So, by that model, the problem with this oft-referenced comic is obvious. The author is doing the equivalent of going to the gym and lifting the heaviest weights she can for as long as she can stand, then going back the next day and trying to do the same thing. Of course she crashes. The way to get stronger is to push at the edge of your comfort zone only a little bit, keep doing it until it becomes comfortable, and then push a little more next time. Ask anyone who plays an instrument. You don't rush through the tricky section over and over and expect to learn it; you slow it down, break it up until it's only a little harder than what you've been doing, and then work up to play speed again. And what do they call the thing you build up that way? Muscle memory.

comment by Jonathan_Graehl · 2010-09-30T22:30:06.367Z · LW(p) · GW(p)

I lean toward battery. I'm unaware of any 'willpower muscle' evidence.

comment by James_Miller · 2010-09-28T13:10:52.739Z · LW(p) · GW(p)

I wonder if there would be a market for rationalist counselors who would talk one-on-one with their "patients" as psychologists and social workers do today. You would probably need some impressive credentials to get started, such as a Ph.D. from a well-known school. I have been thinking about running advertisements to see if I could get anyone to pay me for rationalist services. As I'm an economist, I would stress my capacity to give financial advice. I would want some way of protecting myself from lawsuits, however, by purchasing insurance or working through some personal service organization.

Replies from: humpolec, SilasBarta, knb, NancyLebovitz
comment by humpolec · 2010-09-28T20:46:27.881Z · LW(p) · GW(p)

Let's make them wear hooded robes and call them Confessors.

comment by SilasBarta · 2010-09-28T16:07:41.028Z · LW(p) · GW(p)

In the discussion on Prices or Bindings, EY mentioned that it may help to organize as something that is legally a "church" (apparently there can be atheist churches) so that you can give a vow of secrecy that's stronger than psychologists can give.

comment by knb · 2010-09-29T00:32:11.093Z · LW(p) · GW(p)

Like Rational-Emotive Behavior Therapy?

Basically the idea behind REBT is to correct irrational thoughts and point out errors in thinking. One of my professors, a clinician, said he had great success treating panic disorder with agoraphobia using REBT.

comment by Unnamed · 2010-10-02T00:40:17.040Z · LW(p) · GW(p)

A discussion of paperclip maximizers (linking here) has made the front page of reddit.

Replies from: sketerpot
comment by sketerpot · 2010-10-02T02:05:44.978Z · LW(p) · GW(p)

It's interesting to look at people's arguments against paperclip maximizers. There seem to be two related categories that make up most of the objections:

  1. People who can't imagine that a sufficiently intelligent being could be that different from us. One guy tried to claim that morality is universal so of course an artificial intelligence would share our values. Another said that a superintelligence would inevitably realize that its existence was pointless, as if there could possibly be some point other than maximizing the number of paperclips. Another claimed that morality is an "emergent phenomenon", but didn't explain what that actually meant, or how humanlike morality would emerge from a being whose only goal is paperclip maximization.

  2. People who think of it as a dumb machine, more akin to a drill press than an alien god. Just put an off button on it! Or require that there be a human operator with an axe ready to cut the power lines to the computer.

What these objections both have in common is that they assume that the world consists of humanlike intelligences or dumb machines. It's unintuitive to imagine something that is both intelligent and profoundly alien.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-10-02T12:01:46.344Z · LW(p) · GW(p)

I'd split the difference -- I don't think it's that hard to imagine an AI which has about as much loyalty to AIs as people have to people.

Really alien minds are naturally much harder to imagine. Clippy seems more like a damaged human than a thoroughly alien mind.

This may be a matter of assuming that minds would naturally have a complex mix of entangled goals, the way humans do. Even an FAI has two goals (Friendliness and increasing its intelligence) which may come into conflict.

Faint memory: an Alexis Gilliland cartoon of an automated bomber redirecting its target from a robot factory to a maternity ward.

Replies from: wedrifid
comment by wedrifid · 2010-10-02T13:31:03.958Z · LW(p) · GW(p)

Even an FAI has two goals (Friendliness and increasing its intelligence) which may come into conflict.

No, just Friendliness. Increasing intelligence has no weight whatsoever as a terminal goal. Of course, an AI that did not increase its intelligence to a level at which it could do anything practical to aid me (or whatever the AI is Friendly to) is trivially not Friendly a posteriori.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-10-02T13:41:32.895Z · LW(p) · GW(p)

That leads to an interesting question-- how would an FAI decide how much intelligence is enough?

Replies from: wedrifid
comment by wedrifid · 2010-10-02T13:51:12.642Z · LW(p) · GW(p)

I don't know. It's supposed to be the smart one, not me. ;)

I'm hoping it goes something like:

  • Predict the expected outcome of choosing to self improve some more.
  • Predict the expected outcome of choosing not to self improve some more.
  • Do the one that gives the best probability distribution of expected results.
comment by magfrump · 2010-10-01T09:51:15.381Z · LW(p) · GW(p)

I recently started graduate school in mathematics, and I have been having some difficulty focusing on studying. Reading through various posts on Akrasia and a few posts on Overcoming Bias (I don't remember which ones and they aren't directly relevant) I came to the (tepid) conclusion that I don't feel like I will gain status by being good at math.

Has there been discussion of or does anyone have ideas about how to raise the status of an activity in one's own mind?

The specifics of my situation make it difficult for me to find students and faculty to become accountable to in the short term, so something that would be internal would probably be ideal for me. I'm also interested in seeing a more general discussion.

Replies from: Douglas_Knight, multifoliaterose, None
comment by Douglas_Knight · 2010-10-01T19:56:44.787Z · LW(p) · GW(p)

Could you elaborate on what you mean by status and how you reached the belief that it is relevant to your studying?

While being a professor has a certain amount of status in the eyes of the outside world, I'm not sure one should attribute this status to "being good at math"; indeed, it would probably be easiest to transform math into academic prestige through some other field, like bioinformatics. An anecdote about status and the outside world: there is a U of Chicago professor who is bitter that his parents think it a state school.

By being good at math, you'll gain status in the eyes of mathematicians. I think most people have the opposite problem: being in a graduate program, surrounded by people who care about the subject, causes them to think the world cares about it.

Replies from: magfrump
comment by magfrump · 2010-10-03T01:02:21.591Z · LW(p) · GW(p)

In the past, my mind made me obsessed with number theory. Since then, I have decided that studying number theory is something that I actually want to do.

More recently, the set of people who were happy when I was good at math (esp. my parents) have had less influence in my life, and people who were less happy about it (i.e. math classmates who weren't going to grad school and felt like I was being a smart ass) have had more influence.

So as opposed to previously, when my underlying drives said "do math all the time" and I gained short-term status from it, I now have a desire to do math all the time and my underlying drives aren't helping me out like they used to.

So what I want to do is harness the other kind of status.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2010-10-03T04:55:31.090Z · LW(p) · GW(p)

An actual suggestion, before I continue interrogating you for my own curiosity:
Have you tried mathoverflow as a way of acquiring a community that promotes the status you want?

people who were less happy about it (i.e. math classmates who weren't going to grad school and felt like I was being a smart ass) have had more influence.

How do they have more influence in grad school than they had in undergrad? Did they only start being negative when your paths diverged? Or did you not need to study in undergrad and so you didn't notice that you were losing this ability? But if that's the case, how do you know it's status and not just being out of practice studying?

You explained what you mean by status, but you didn't really answer how you know. I'm skeptical of your introspection.

Replies from: magfrump
comment by magfrump · 2010-10-03T05:06:36.545Z · LW(p) · GW(p)

I spent time with students who didn't study math while I was an undergrad, so I wasn't in direct academic competition with them. Also many of them were good students, so grades could be social status markers, whereas the graduate school I'm in is not hugely prestigious, and I am the youngest student in the program. Also I the friends that I have gotten to know better recently have told me these feelings explicitly which they had not before.

I also had somewhat poor study skills, but my introspection springs from the fact that I learned about the singularity (read Kurzweil for the first time), failed my first math class, and experienced depression for the first time within a few months of each other. In the past, I sometimes felt that I could achieve figurative immortality and value by being a great mathematician because I could always succeed, now figurative immortality seems bitter and abstract math seems like fruit that hangs high relative to future changes in mental architecture (I think uploads or AI will easily advance mathematics beyond what I can accomplish.)

I'm being stingier with the details than I could be because the whole thing is somewhat personal.

comment by multifoliaterose · 2010-10-01T13:58:54.181Z · LW(p) · GW(p)

I recently started graduate school in mathematics, and I have been having some difficulty focusing on studying.

Difficulties focusing on studying math are common. There's an issue of many existing expositions being markedly suboptimal with respect to engaging the student.

I'm in my fourth year of math graduate school and I would be happy to offer suggestions specific to your situation. Feel free to PM me with your email address if you'd like to correspond.

comment by [deleted] · 2010-10-01T13:45:24.257Z · LW(p) · GW(p)

Well, you will gain status by being good at math, unless you have a more prestigious opportunity than "professor" lined up, in which case you should take that instead.

comment by NancyLebovitz · 2010-09-30T13:18:33.509Z · LW(p) · GW(p)

Review of Choke: What the Secrets of the Brain Reveal About Getting it Right When You Have To

“My research team and I have found that highly skilled golfers are more likely to hole a simple 3-foot putt when we give them the tools to stop analyzing their shot, to stop thinking,” Beilock said. “Highly practiced putts run better when you don’t try to control every aspect of performance.” Even a simple trick of singing helps prevent portions of the brain that might interfere with performance from taking over, Beilock’s research shows.


In one study, researchers gave standardized tests to black and white students, both before and after President Obama was elected. Black test takers performed worse than white test takers before the election. Immediately after Obama’s election, however, blacks’ performance improved so much that their scores were nearly equal with whites. When black students can overcome the worries brought on by stereotypes, because they see someone like President Obama who directly counters myths about racial variation in intelligence, their performance improves.

Beilock and her colleagues also have shown that when first-grade girls believe that boys are better than girls at math, they perform more poorly on math tests. One big source of this belief? The girls’ female teachers. It turns out that elementary school teachers are often highly anxious about their own math abilities, and this anxiety is modeled from teacher to student. When the teachers serve as positive role models in math, their male and female students perform equally well.


In tests in her lab, Beilock and her research team gave people with no meditation experience 10 minutes of meditation training before they took a high-stakes test. Students with meditation preparation scored 87, or B+, versus the 82 or B- score of those without meditation training. This difference in performance occurred despite the fact that all students were of equal ability.

Replies from: None, sketerpot
comment by [deleted] · 2010-09-30T17:01:54.385Z · LW(p) · GW(p)

Interestingly, they claim that choking is due to poor use of working memory:

Talented people often have the most working memory, but when worries creep up, the working memory they normally use to succeed becomes overburdened. People lose the brain power necessary to excel.

Replies from: Jonathan_Graehl
comment by Jonathan_Graehl · 2010-09-30T22:24:29.508Z · LW(p) · GW(p)

That is an interesting idea. But there are motor programs that don't use verbal working memory. Making conscious adjustments (different from how the program was practiced) could interfere, though.

I think physiological panic/fear has to be a large part of most choke experiences, distinct from any thoughts interfering w/ working memory.

I've also heard of people choking especially because they're worried that their social status may be threatened if they're too good or too bad at something. I don't know if that acts through a different mechanism; I'm just saying that such concerns seem especially distorting on performance.

comment by sketerpot · 2010-10-02T02:20:17.965Z · LW(p) · GW(p)

I'd like to see someone compare college students' performance on important tests after, say, 0--3 drinks. If test anxiety hurts people's scores as much as it seems to, then perhaps cheap beer will be used as a nootropic.

(A quick check on Google Scholar doesn't show any studies that have been done on this, which isn't surprising.)

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-10-02T11:20:57.583Z · LW(p) · GW(p)

It might be worth checking, though it would surprise me if it worked. I'm betting that if alcohol improved test performance, college students would have discovered it long ago.

Thanks for the link-- I didn't realize test anxiety was that common or that there were such effective methods of treating it.

comment by danield · 2010-10-08T06:20:40.626Z · LW(p) · GW(p)

http://xkcd.com/803/

This reminded me: it's important to engage critically with ideas that I learned before I knew how to engage critically with ideas. I'm not at all confident that I would have balked at the teacher's explanation of lift even now, and that's a little alarming!

comment by ata · 2010-10-03T05:08:39.240Z · LW(p) · GW(p)

Is the Recent Comments sidebar stuck several hours back for anyone else?

comment by RHollerith (rhollerith_dot_com) · 2010-10-01T14:21:44.629Z · LW(p) · GW(p)

I like how the home page of the discussion sub-site (/r/discussion/) shows just the titles of the posts, not the whole post or all the paragraphs "above the fold", and I wish there were a similar way to browse the main site.

Actually there is! I didn't know about it because it never occurred to me until now to click the title of the "Recent Posts" section of the sidebar on the right.

comment by Vladimir_Nesov · 2010-09-29T19:27:02.204Z · LW(p) · GW(p)

Today, my downvotes stopped working. I can press the buttons, but after refreshing downvote marks disappear, while upvote marks stay (which means that the scripts at least partially work; I also tried another browser). No message to the effect that I'm not allowed to downvote ever appears. I suppose I could've made 25000 downvotes, counting 10x post downvotes, to hit the 4*karma limit, but it seems unlikely, and there should be a message if that limit is reached in any case.

Replies from: jimrandomh, knb
comment by jimrandomh · 2010-09-29T19:52:18.355Z · LW(p) · GW(p)

Same here. And upvotes do not have the same problem, so I don't think it's a caching issue.

Replies from: keefe
comment by keefe · 2010-09-29T21:14:44.778Z · LW(p) · GW(p)

Hmmm, I can confirm this both here and on a local copy of the codebase. I'll have a look and make sure Wes knows.

Replies from: keefe
comment by keefe · 2010-09-29T21:59:21.645Z · LW(p) · GW(p)

It was a simple bug; the fix is committed and a pull request is in. I'll send an email out now to get this into production.

Replies from: DanielVarga
comment by DanielVarga · 2010-10-01T09:25:07.106Z · LW(p) · GW(p)

I would be very interested in a one-sentence description of the bug, especially if it was not just a side effect of some other change in the codebase.

Replies from: keefe
comment by keefe · 2010-10-02T11:52:55.060Z · LW(p) · GW(p)

In one sentence: the vote-processing mechanism required a reference to the global Pylons configuration, and the Pylons configuration import was missing.

Not super interesting, unfortunately :]

It was probably something like a munged automerge.
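
For readers unfamiliar with Pylons, here is a minimal, hypothetical sketch of that kind of failure mode. It is not the actual LessWrong/Reddit code; the names (`process_downvote`, `g.downvote_multiplier`) are made up for illustration, and only the general shape of the bug (using the Pylons global config object `g` without importing it) comes from the description above.

```python
# Hypothetical sketch, not the real codebase: a vote handler that references
# the Pylons global config object `g` without the import that provides it.

# from pylons import g   # <-- the kind of import that went missing

def process_downvote(voter_karma, downvotes_cast):
    # With the import above absent, this line raises NameError at call time,
    # so the downvote is never recorded, matching the silent failure reported.
    return downvotes_cast < 4 * voter_karma * g.downvote_multiplier

try:
    process_downvote(voter_karma=1000, downvotes_cast=10)
except NameError as err:
    print("downvote dropped:", err)
```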

comment by knb · 2010-09-30T02:21:08.385Z · LW(p) · GW(p)

This happens to me, as well.

comment by knb · 2010-09-28T07:21:35.604Z · LW(p) · GW(p)

Regarding optimal mental performance: I've bought some modafinil and piracetam recently. I think I remember hearing that some people on LW use these drugs. Does anybody have any wisdom or experiences they would like to share? How significant are the effects? Were your experiences good or bad?

Replies from: Will_Newsome, gwern
comment by Will_Newsome · 2010-09-28T07:33:08.583Z · LW(p) · GW(p)

Modafinil increased my blitz chess rating by 150 points the first time I took it. That is completely ridiculous. The effects weren't as strong after that, but still very noticeable. It definitely worked to keep me awake for long hours. One time it totally fucked me up for 3 days or so: I took it thinking I'd stay up, but then I was like nah, never mind, I'll go to sleep 'cuz I'm really tired. Like an idiot. I then got 5 or so hours of fitful sleep, followed by about 5 more hours of thrashing about on my mattress, followed by about 10 more hours of groggy sleep-deprivation-induced pain, followed by a completely messed-up sleep cycle and constant tiredness for the next few days. So, it's awesome when you use it right, but be careful to stay up for the duration of the effects (12 hours or so, I think).

I am quite obviously not a doctor, though.

Replies from: NancyLebovitz, James_Miller, curiousepic, knb
comment by NancyLebovitz · 2010-09-28T14:07:56.683Z · LW(p) · GW(p)

Also, there are idiosyncratic reactions-- I know one person who found that Modafinil made him sleep-- a lot. It was very refreshing-- possibly deeper than his usual sleep.

This seems to be very rare, or at least I haven't heard of it happening to anyone else, but I think it implies that you should try the drug for the first time when it isn't important that it works.

comment by James_Miller · 2010-09-28T13:12:17.772Z · LW(p) · GW(p)

I had sleep problems as well with Modafinil.

comment by curiousepic · 2010-09-28T15:16:38.284Z · LW(p) · GW(p)

A friend used Modafinil on the same day she took a diet pill she wasn't aware contained caffeine. The result was not good. She could not stop picking at existing sores and pulling hair out; she resorted to putting shampoo in her hair to keep her hands out of it, and moved on to productive, meticulous leg-hair removal via tweezers to extend the mitigation as long as possible. Not to scare you away from it or anything; just take care :)

comment by knb · 2010-09-28T08:45:43.499Z · LW(p) · GW(p)

Interesting. I have a bad sleep cycle already, so that was a major concern. I thought eugeroics had a reputation for not damaging the sleep cycle the way amphetamines do (promoting wakefulness without interfering with normal sleep). Anyway, I guess as long as I take it in the morning, it should be metabolized by bedtime. Thanks for the info.

Replies from: Douglas_Knight, Will_Newsome
comment by Douglas_Knight · 2010-09-28T20:13:25.275Z · LW(p) · GW(p)

What I've heard is that modafinil has exactly the same set of effects as amphetamines, just with different proportions, particularly much less of the negative ones. But there's a lot of variation from person to person.

comment by Will_Newsome · 2010-09-28T08:51:45.935Z · LW(p) · GW(p)

The messed up sleep cycle part could have been coincidence: my sleep schedule is both chaotic and easily perturbed, such that random things can screw it up for days. But yeah, if you take it in the morning you should be fine. I mostly used it for staying up for 2 days in a row instead of 1, which I think is a nonstandard use case.

comment by gwern · 2010-09-28T16:46:07.529Z · LW(p) · GW(p)

I would point out that if you are interested in general information, there are many larger sites worth searching. Some quick links:

It would be better if you asked on LW only if you had something more specific (for example, if you had developed a theory of akrasia and thought a particular nootropic might help, as has been done with sulbutiamine).

(For completeness, I keep a record of my nootropics in http://www.gwern.net/Nootropics )

Replies from: knb
comment by knb · 2010-09-28T18:06:34.475Z · LW(p) · GW(p)

Well, I already researched the general info, but I thought I would ask on LW because I suspect the commenters here are less likely to forget to take the placebo effect or coincidence into account when describing effects.

I know it's tangential to the site topic, but it is the open thread, not a top-level thread.

Replies from: Jordan
comment by Jordan · 2010-09-29T04:58:10.838Z · LW(p) · GW(p)

Agreed; I'm curious to know what rational people think about specific nootropics. You should start a nootropics discussion on the discussion page.

For what it's worth, I've lurked around the imminst forums and am generally impressed with the level of carefulness.

As for my own use:

I tried Modafinil a few times, at small doses (about 25 mg and 10 mg). Both times I had greatly enhanced focus, and reduced tolerance for nuanced complex thought. That is, I took it while programming and found I could focus and execute a design, but had a hard time stepping back to think a design through (I just wanted to get to work, damnit!).

Also, even on my small doses, taken in the morning, I was completely incapable of sleep for the next 40 hours or so (the last 20 hours of which were incredibly unproductive and almost painful). I'm sensitive to uppers in general though.

Needless to say, I don't plan on using Modafinil again, unless an extreme situation arises. On the other hand, I've met plenty of people who seem to be able to take it daily at high doses with no negative side effects.

Replies from: knb
comment by knb · 2010-09-29T05:51:55.166Z · LW(p) · GW(p)

Good idea, I created a discussion.

comment by paulfchristiano · 2010-09-28T18:02:08.532Z · LW(p) · GW(p)

If I live in a universe which is not Turing computable but try to apply Solomonoff induction, I may end up in trouble--I may be unable to accept the existence of a black box for the halting problem, for example, even when believing in such a thing is useful. There are several possible solutions to this problem, but I have not seen any here which I find satisfactory.

My solution is this. Instead of considering a prior on the space of models for the universe (since you can't really have a prior on the uncountable space of ways the universe could work if it weren't as restricted as Solomonoff induction thinks it is), consider all computable strategies you could use to achieve whatever goal you are interested in (e.g., to predict what the universe will do next). Rather than disqualifying a model when it fails to conform to our expectations, we penalize a strategy (say, by reducing its weight by some constant factor) when it fails to perform well (e.g., makes an incorrect prediction or fails to make a useful prediction), and we follow the strategy which currently has the highest weight.

If your goal is predicting the next bit handed to you by the universe, this is the same as Solomonoff induction where every model includes random noise. However, it can be argued convincingly that this is the correct way to incorporate the possibility of e.g. a black box for the halting problem into your world view, while it is not at all clear why Solomonoff induction is reasonable if the universe isn't computable.

In general you get theoretical guarantees whenever you are trying to solve a problem which doesn't have too much state--i.e., in which it is very difficult to pursue a strategy so bad that it will ruin both your current performance and all future performance. The proof is probably not surprising to anyone here, but it is worth pointing out that if you use a multiplicative update rule you get particularly fast convergence to a good prediction strategy (this is an infinite analog of multiplicative weights).
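
To make the update rule concrete, here is a minimal sketch of my own (not code from the comment), restricted to a fixed finite pool of strategies rather than the full set of computable ones: each strategy predicts the next bit, a wrong prediction cuts that strategy's weight by a constant factor, and we always follow the current highest-weight strategy. The function and parameter names (`run_multiplicative_weights`, `beta`) are just illustrative.

```python
# Sketch of the penalize-and-follow-the-leader rule described above,
# for a finite set of bit-prediction strategies.

def run_multiplicative_weights(strategies, bits, beta=0.5):
    """strategies: functions mapping the history (list of bits) to a predicted
    next bit. bits: the actual sequence. Returns the predictor's mistake count."""
    weights = [1.0] * len(strategies)
    history, mistakes = [], 0
    for actual in bits:
        # Follow the strategy with the highest current weight.
        leader = max(range(len(strategies)), key=lambda i: weights[i])
        if strategies[leader](history) != actual:
            mistakes += 1
        # Penalize every strategy that predicted this bit incorrectly.
        for i, strategy in enumerate(strategies):
            if strategy(history) != actual:
                weights[i] *= beta
        history.append(actual)
    return mistakes

# Example: always-0, always-1, and copy-the-last-bit, on an alternating sequence.
strategies = [
    lambda h: 0,
    lambda h: 1,
    lambda h: h[-1] if h else 0,
]
print(run_multiplicative_weights(strategies, [0, 1] * 20))
```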

There is some question about why we restrict to computable strategies when implementing our induction procedure is already uncomputable. I don't really want to include a discussion of this here, but if we are going to think about induction as a useful tool to apply in reality then we should probably adopt a view of it which doesn't involve us solving the halting problem (for example, as a framework for evaluating suggestions as to what we should believe etc.). In this case it hopefully makes sense to restrict to computable strategies.