Posts

Is it rational to be religious? Simulations are required for answer. 2010-08-11T15:20:04.058Z
Possibilities for converting useless fun into utility in Online Gaming 2010-04-27T22:21:56.251Z

Comments

Comment by Aleksei_Riikonen on On saving the world · 2014-01-31T14:37:51.645Z · LW · GW

Regardless, I don't wish to take over (a part of) this comment thread by discussing this thing in detail.

If further comments from me on the matter are in demand, contacting me through some other means is a better option.

Comment by Aleksei_Riikonen on On saving the world · 2014-01-31T14:21:42.211Z · LW · GW

I believe this comment thread is not the proper place to discuss the details of my proposal.

(Also I believe the page linked earlier answers those specific questions.)

Comment by Aleksei_Riikonen on On saving the world · 2014-01-31T13:11:52.243Z · LW · GW

As a Brit, you already have a king/queen in your country.

Details are important as well as examples, and I'm not in the business of simply bringing back empowered kings. In the system I discussed, the role is mostly about being a cool figurehead, not so terribly different from what you have now (though the king would be elected from among the re-invented Aristocracy in a meritocratic way, and would therefore be better at the role than what you have now -- and it is of course true that the discussed system would bring back the nobility in a genuinely empowered way).

Comment by Aleksei_Riikonen on On saving the world · 2014-01-31T10:11:30.536Z · LW · GW

Thank you. I can relate to much of what you said, which isn't terribly rare here.

And the most enjoyable of the feelings evoked in me (as has happened on several occasions already) is seeing a young one who is better and more promising than me.

(Though my enjoyment at being superseded is dangerous in the sense that it may be associated with laziness, so you are very welcome to not enjoy yours -- or to enjoy it, however you wish.)

The actual reason why I started to comment at all, however, is that it's amusing to note how I'm in a sense in the reverse of your situation. I found MIRI/SIAI over 10 years ago (and am almost that much older than you), started contributing to them then, but have recently refocused, of all things, on political system-building. Just what you left behind!

It would be complex and unnecessary to discuss here why I have done so, but let it be said that it's not because I no longer consider MIRI's work to be of paramount importance. I do, but there are circumstances that I feel influence me towards a differing personal focus, among them certainly a degree of laziness that I've come to joyfully accept in myself, after being more demanding earlier. Also, not putting all eggs in one basket.

Anyway, studying what's currently termed neoreaction (and associated things) is what I'm doing. For the heck of it, here are some early comments I recently wrote regarding that stuff.

(Also let it be noted that the difficult part in fixing politics is devising the ways in which one can actually get the bright ideas implemented. The class of bright political ideas that would be a clear improvement is surprisingly large and easy to pick examples from, but the class of ideas that one is likely to be able to implement is much more constrained.)

Comment by Aleksei_Riikonen on What is moral foundation theory good for? · 2012-08-13T13:48:58.261Z · LW · GW

In response to your final questions:

Liberals (myself included) tend to very much like the idea of using regulation to transfer some wealth from the strongest players to the weakest in society. We like to try to set up the rules of the game so that nobody is economically very poor, and so that things in general are fair and equitable.

In the case of sex and relationships, the argument could also be made for regulation that would transfer "sexual wealth" and "relationship wealth" from the strongest players to those who are not so well off. In fact, it seems to me that very many traditional conservative societies have tried to do just that, by strongly promoting values such as having only one sexual partner (along with marriage) during one's life. Rock stars and other sorts of alpha males who take many hot girls for themselves would be strongly disapproved of in typical traditional conservative societies. The underlying reason may be that traditional monogamy produces a sexually more equal society, and that this has been one contributing factor in why societies with such values have been so successful throughout much of human history.

Most liberals, however, would be unwilling to engage in a rational discussion and cost-benefit analysis of whether conservative sexual morals (or some modified version thereof) would in fact create a more equal and strong society. Liberals are ok with the strongest players amassing as much sexual wealth as they can, at the expense of the weaker competitors, which strongly contrasts with their ideas about regulating economic activity and limitless acquisition of monetary wealth.

Comment by Aleksei_Riikonen on [Applications Closed] The Singularity Institute is hiring remote LaTeX editors · 2012-07-29T13:01:46.756Z · LW · GW

I have offered in the past to volunteer some time to this sort of thing, but I get that coordinating volunteers is harder than hiring people.

Sounds like something that happened during earlier years, when the SI people one ran into when volunteering were different from those one runs into currently.

Comment by Aleksei_Riikonen on Article about LW: Faith, Hope, and Singularity: Entering the Matrix with New York’s Futurist Set · 2012-07-26T22:57:44.760Z · LW · GW

Actually, I feel that I have sufficient experience of being reported on (including in an unpleasant way), and it is precisely that which (along with my independent knowledge of many of the people being reported on here) gave me the confidence to think that I would manage to separate out from the distortions the information that described reality.

That said, there is a bit of fail with regard to how well I managed to communicate what precisely impacted me. Much of it is necessarily subtle, since it had to be picked up through the distortion field, and I do allow for the possibility that I misread, but I continue to think that I'm much better at correcting for the distortion field than most people.

One thing I didn't realize, however, is that you folks apparently didn't think the gal might be a reporter. That's of course a fail in itself, but certainly a lesser fail than behaving similarly in the presence of a person one does suspect of being a reporter.

Comment by Aleksei_Riikonen on Article about LW: Faith, Hope, and Singularity: Entering the Matrix with New York’s Futurist Set · 2012-07-26T22:29:16.207Z · LW · GW

As someone who had read Eliezer's OkCupid profile some while ago, I was actually gonna reply to this with something like "well, scientism goes maybe a bit too far, but he does actually have a point"

...but then I just went and reread the OkCupid profile, and no, actually it's wonderfully funny and I have no worries similar to scientism's, unlike earlier when the profile didn't explicitly mention sadism.

Obviously Eliezer is a very unusual and "weird" person, but the openness about it that we observe here is a winning move, unlike the case where one might sense that he's hiding something. Dishonesty and secrecy are what the evil phyg leaders would go for, whereas Eliezer's openness invites scrutiny and allows him to emerge from it without the scrutinizers having found incriminating evidence.

Also, where are you seeing evangelical polyamory? I'm very much not polyamorous myself, and haven't ever felt that anyone around here would be pushing polyamory to me.

Comment by Aleksei_Riikonen on Article about LW: Faith, Hope, and Singularity: Entering the Matrix with New York’s Futurist Set · 2012-07-26T17:51:49.342Z · LW · GW

That position is "antisingularity" only in the Kurzweilian sense of the word. I wouldn't be surprised if e.g. essentially everyone at the Singularity Institute were "antisingularity" in this sense.

Comment by Aleksei_Riikonen on Article about LW: Faith, Hope, and Singularity: Entering the Matrix with New York’s Futurist Set · 2012-07-26T12:52:08.640Z · LW · GW

The starting point for my attitude was people doing things like intervening in front of a reporter to stop discussion of a topic that looks scandalous, or talking about Singularity/AI topics in a way that doesn't communicate much wisdom at all.

Being silly with regard to physical intimacy and in general having a wild party is all well and good by itself, if you're into that sort of thing, but I react negatively when that silliness seems to spill over into affecting the way serious things are handled.

(I'll partly excuse being light on the constructiveness by having seen some copy-pastes that seem to indicate that what I'm concerned about is already being tackled in a constructive way on the NYC mailing list. The folks over there are much better positioned to do the constructive things that should be done, and I wasn't into trying to duplicate their efforts.)

Comment by Aleksei_Riikonen on Article about LW: Faith, Hope, and Singularity: Entering the Matrix with New York’s Futurist Set · 2012-07-25T22:32:39.785Z · LW · GW

Though it's possible the reporter has twisted your words more than I suspect, I'll say:

Wow, some of the people involved really suck at thinking (or caring to think) about how they make the scene look. I think I'm able to correct pretty well for the discrepancy between what's reported and the reality behind it, but even after the correction, this window into what the scene has become has further lowered my interest in flying over there to the States to hang out with you. It seems I might end up banging my head against the wall in frustration at all the silliness that's required for this sort of reporting to get its source material.

(Though I do also think that it's inevitable that once the scene has grown to be large and successful enough, typical members will be sufficiently ordinary human beings that I'd find their company very frustrating. Sorry, I'm a dick that way, and in a sense my negative reaction is only a sign of success, though I didn't expect quite this level of success to be reached yet.)

(By the previous I do not, however, mean to imply that things were saner 10 years ago (I certainly had significant shortcomings of my own); back when nobody had figured much of anything out yet or written Sequences about stuff, the expected level of insanity was partly higher for such reasons.)

Comment by Aleksei_Riikonen on CFAR website launched · 2012-07-06T11:49:30.142Z · LW · GW

Quite a good website, though I expect that when one first glances at it, it looks suspicious how much talk there is of "perfect reasoning", "knowing exactly how to weigh the relevant evidence" etc.

Gives the impression that you think your methods produce perfection. One might have to delve surprisingly deep into the website before one realizes that that's not actually among the claims made.

Comment by Aleksei_Riikonen on New Singularity.org · 2012-06-22T12:46:01.441Z · LW · GW

Works.

Comment by Aleksei_Riikonen on New Singularity.org · 2012-06-20T23:30:44.556Z · LW · GW

I don't understand.

The same thing as I described in my previous comment as the situation for http://singularity.org/about/ (except that the destination page is different).

Comment by Aleksei_Riikonen on New Singularity.org · 2012-06-20T23:21:17.044Z · LW · GW

My guess is that it's currently broken for some browsers but not others.

I'm using Firefox (Windows), and currently ALL the links on http://singularity.org/about/ take me to http://singularity.org/visiting-fellows/

Was working fine earlier, though.

(And actually, the six links in the Donate-WhatWeDo-etc bar are exceptions in that they do work on that page. But all the others take me to the Visiting Fellows page, including the links to the blog, to Facebook, to Less Wrong...)

Comment by Aleksei_Riikonen on New Singularity.org · 2012-06-20T02:12:36.795Z · LW · GW

To me, when I first saw them, they definitely looked like clip art, except that in some cases the SI logo had been edited in.

I wish I could upvote what Raemon said several times: "the whole reason clip-art is bad is not because it actually is clip-art, but because it looks like something you got off the shelf."

Comment by Aleksei_Riikonen on New Singularity.org · 2012-06-19T01:01:18.351Z · LW · GW

Btw, am I hallucinating, or did you already change the colors slightly?

Anyway, I'd like to say that I currently like how the colors look. (Though that doesn't have much to do with the points I was critical of.)

Comment by Aleksei_Riikonen on New Singularity.org · 2012-06-18T17:59:27.794Z · LW · GW

Yes, I have to say that the unprofessional vibe given off feels absolutely horrible to me. I'm surprised that the designers of the site appear to be the same as before, since the previous style and vibe felt very good to me, and this feels so much like the opposite.

The current crop of clip-art really needs to go, I'd say. Nothing looks as hasty and unprofessional as stereotypical clip-art. With your clip-art choices, you especially shouldn't communicate that you're a very formal, ordinary and uncreative men-in-suits organisation, since you're really not (and if you were, who would think you competent or even sincere in undertaking such an unusual mission? Stereotypical and ordinary men-in-suits are the antithesis of creativity, exceptionality and thinking-something-that-isn't-a-politically-correct-cliche).

The current site design could perhaps be made to rock if all the clip-art were swapped out for a new theme that was creative and original (and you) and didn't really look like clip-art. Some associated changes to the color scheme and fonts might be required, but perhaps not a complete redesign.

Comment by Aleksei_Riikonen on Help! Name suggestions needed for Rationality-Inst! · 2012-01-28T03:57:59.247Z · LW · GW

I like this. Or more generally, I like having "Advanced Sanity" in the name.

Comment by Aleksei_Riikonen on The Singularity Institute's Arrogance Problem · 2012-01-20T13:04:24.815Z · LW · GW

Curse me for presenting myself as someone having interesting secret knowledge. Now I get several PMs asking for details.

In short, this "incident" was about one or two SIAI folks making a couple of obvious errors of judgment, and in the case of the error that sparked the whole thing, getting heatedly defensive about it for a moment. Other SIAI folks however recognized the obvious mistakes as such, so the issue was resolved, even though unprofessional conduct was observed for a moment.

The actual mistakes were rather minor, nothing dramatic. The surprising thing was that heated defensiveness took place on the way to those mistakes getting corrected.

(And since Eliezer is the SIAI guy most often accused of arrogance, I'll additionally state here that that was not the case. Eliezer was very professional in the email exchange in question.)

Comment by Aleksei_Riikonen on POSITION: Design and Write Rationality Curriculum · 2012-01-20T10:17:38.781Z · LW · GW

No, not unusual. I had the same reaction, and assumed it's probably partly a deliberate joke to have such a placeholder name (or alternatively the Scientology connotation actually didn't occur to folks at SIAI).

I btw commented on this a couple of days ago in a comment on the SIAI blog, and note now that comments there seem to take a rather long time to be moderated for spam, as apparently no comments have appeared for many months. (Ok, sorry for the joke. More likely you've forgotten about the blog comments or something, rather than it really being the spam moderation that commenters are told might take some time when they leave a comment.)

Comment by Aleksei_Riikonen on The Singularity Institute's Arrogance Problem · 2012-01-19T12:51:56.478Z · LW · GW

So, I have a few questions:

  1. What are the most egregious examples of SI's arrogance?

Since you explicitly ask a question phrased thus, I feel obligated to mention that last April I witnessed a certain email incident that I thought was, in some ways, extremely bad.

I do believe that lessons have been learned since then, though. Probably there's no need to bring the matter up again, and I only mention it since according to my ethics it's the required thing to do when asked such an explicit question as above.

(Some readers may wonder why I'm not providing details here. That's because, after some thought, I for my part decided against making the incident public, since I expect it might subsequently get misrepresented to look worse than is fair. (There might be value in showing records of the incident to new SIAI employees as an example of how not to do things, though.))

Comment by Aleksei_Riikonen on Leveling Up in Rationality: A Personal Journey · 2012-01-17T16:28:11.591Z · LW · GW

Hmm, I for one don't share the negative reactions that several other commenters seem to feel now. I felt very glad upon reading this "leveling up" post.

I was especially thinking that this is a very cool first LW article for people to bump into (and I therefore shared it on some social networks). In this vein, I very much like the criticized-by-some feature that every other word is a link to a previous article. It's useful for those new people who might be inspired to check this stuff out in more detail.

Comment by Aleksei_Riikonen on Ritual Report: NYC Less Wrong Solstice Celebration · 2011-12-20T15:55:12.988Z · LW · GW

Yeah, I just thought I'd improve on your riff a bit, and add the part that pokes fun at me :)

Comment by Aleksei_Riikonen on Ritual Report: NYC Less Wrong Solstice Celebration · 2011-12-20T15:43:40.731Z · LW · GW

Damn, my plan is backfiring. I will be remembered as an arrogant schmuck who was slightly funny in an unintended way.

Serves me right.

Comment by Aleksei_Riikonen on Ritual Report: NYC Less Wrong Solstice Celebration · 2011-12-20T15:30:57.605Z · LW · GW

Yeah, I had similar thoughts actually. But I did end up thinking that this was good enough to link to in a somewhat off-handed manner.

Though of course, mostly I just wanted to get myself on the public record calling this a great success in the making at such an early stage, so that I look good when future generations look back a few thousand years from now :D

Comment by Aleksei_Riikonen on Ritual Report: NYC Less Wrong Solstice Celebration · 2011-12-20T14:43:28.733Z · LW · GW

Also, I felt the need to post a link to this post on some social networks and describe it thus:

"And so it begins. The NYC folks have taken a significant step in bringing the LW community to a whole new level of real-world Awesomeness and Win. Expect great things to grow out of such developments."

Comment by Aleksei_Riikonen on Ritual Report: NYC Less Wrong Solstice Celebration · 2011-12-20T13:24:46.912Z · LW · GW

Me too. Awesome. Thanks.

Comment by Aleksei_Riikonen on The curse of identity · 2011-11-18T23:18:27.814Z · LW · GW

Typical mind fallacy, perhaps?

Generalizing from one example, rather. Mostly I was going by what I've heard from an acquaintance who worked as a stripper.

Comment by Aleksei_Riikonen on The curse of identity · 2011-11-18T22:36:39.863Z · LW · GW

Who considers strippers to be high status?

(Certainly not the actual audience. They just see meat to eat with their eyes, not a person. Even prostitutes are probably respected a lot more on average than strippers, since it's more common that people at least talk to prostitutes, and become more aware that there's a person there.)

Comment by Aleksei_Riikonen on The curse of identity · 2011-11-17T11:29:12.600Z · LW · GW

so you decide to only do the least prestigeful work available, in order to prove that you are the kind of person who doesn't care about the prestige of the task!

Another variant is to minimize how much you directly inform your comrades of the work you're doing. You tend to get more prestige when people find out about your work in accidental-seeming ways instead of through you telling them. Also, you then have aces up your sleeve with which you can play the martyr ("Well, I have been doing such and such and you didn't even know about it!").

Comment by Aleksei_Riikonen on Q&A with new Executive Director of Singularity Institute · 2011-11-10T02:55:56.671Z · LW · GW

I'm glad to hear I'm not the only fan of Eliezer who isn't reading HPMOR.

In general, like you I also don't tend to get any fiction read these days (unlike earlier). For years I haven't made progress on several books I've started, even though I enjoy reading them and consider them very smart in a semi-useful way. It's rather weird really, since at the same time I do watch some fictional movies and TV series with great enthusiasm, even repeatedly. (And I do read a considerable amount of non-fiction.)

And I follow the news. A lot. The number one fun thing for me, it seems.

Comment by Aleksei_Riikonen on Pascal's wager re-examined · 2011-10-05T16:50:58.521Z · LW · GW

Everyone voting down shminux, please also note that they did say:

You clearly want Christianity to have a chance in hell

it is pointless to argue about it with you, since you have already written your bottom line and will not budge

I'll downvote for those. While I don't claim Goetz' treatment of the topic was perfect, I don't see evidence of it having been motivated by anything other than an honest, curious interest in the topic. Claims that he clearly wants Christianity to have a chance, or that he wouldn't be able to change his mind on the topic, seem to me just as uncalled for as claims that he is a Christian.

Comment by Aleksei_Riikonen on Rationality Lessons Learned from Irrational Adventures in Romance · 2011-10-04T02:11:17.559Z · LW · GW

Wow! A 20 page essay on "why I'm breaking up with you"? That's just... brutal!

And obviously the title should have been:

"In Which I Explain How Natural Selection Has Built Me To Be Attracted To Certain Features That You Lack"

:D

Comment by Aleksei_Riikonen on Rationality Lessons Learned from Irrational Adventures in Romance · 2011-10-04T01:57:18.049Z · LW · GW

So I broke up with Alice over a long conversation that included an hour-long primer on evolutionary psychology in which I explained how natural selection had built me to be attracted to certain features that she lacked.

LOL

(Just couldn't resist posting my reaction, even though there's already an essentially identical comment.)

It seems that this was made a lot more amusing by the fact that you apparently have great social skills these days.

(And makes me all the more glad I've never broken up with anyone, even though this requirement made it kinda hard to get into a relationship in the first place.)

Comment by Aleksei_Riikonen on Polyhacking · 2011-08-27T09:07:42.876Z · LW · GW

Since moving back to the Bay Area I've been out with four other people too, one of whom he's also seeing; I've been in my primary's presence while he kissed one girl, and when he asked another for her phone number; I've gossiped with a secondary about other persons of romantic interest and accepted his offer to hint to a guy I like that this is the case; I hit on someone at a party right in front of my primary. I haven't suffered a hiccup of drama or a twinge of jealousy to speak of and all evidence (including verbal confirmation) indicates that I've been managing my primary's feelings satisfactorily too. Does this sort of thing appeal to you?

No.

But I do expect that if humans become immortal superbeings, then given enough time, most people currently in fairytale monogamous relationships will switch to poly. (Though when people are immortal superbeings, I also expect it to become common that they'll spend a very long time if necessary searching for an instance of fairytale monogamy to be their first relationship.)

I guess my philosophy is that fairytale monogamy is optimal for the young (say under 200 years or so), while poly and other non-traditional arrangements are the choice of the adult.

Comment by Aleksei_Riikonen on Please do not downvote every comment or post someone has ever made as a retaliation tactic. · 2011-08-26T09:27:43.064Z · LW · GW

I think you're probably right if we count more stuff as "high-quality thinking" than I was meaning to do. But if we're rather strict about what counts as high-quality, I think I'm right.

(Also I'll emphasize that I wasn't talking about insecurity in general, but being insecure to such an extent that one refrains from posting high-quality stuff to an anonymity-enabling website because of a fear of getting downvoted.)

Comment by Aleksei_Riikonen on Please do not downvote every comment or post someone has ever made as a retaliation tactic. · 2011-08-24T09:17:42.228Z · LW · GW

Another problem is that a reputation system might drive away people with valuable insights about certain agreed upon topics.

Relax, I doubt anyone with the ability to produce high-quality thinking is so insecure that (s)he'd be scared of getting a few downvotes on a website. (Myself, I once got an article submission voted to oblivion, but it just felt good in a feeling-of-superiority kind of way, since I thought the LW community was the party being more wrong there -- though I think that finding myself to be more wrong than I think I was would have felt good too.)

In general, I find it weird how some people manage to take the karma system so seriously. I thought it was acknowledged all along by the community that it's a very crude thing with only very limited usefulness (though still worth having).

Comment by Aleksei_Riikonen on People neglect small probability events · 2011-07-03T06:50:55.054Z · LW · GW

I enjoyed reading this comment rather a lot, since it allowed me to find myself in the not-too-common circumstance of noticing that I disagree with Eliezer to a significant (for me) degree.

Insofar as I'm able to put a number on my estimate of existential risks from AI, I also think that they're not under 5%. But I'm not really in the habit of getting into debates on this matter with anyone. The case that I make for myself (or others) for supporting SIAI is rather of the following kind:

  1. If there are any noticeable existential risks, it's extremely important to spend resources on addressing them.

  2. When looking at the various existential risks, most are somewhat simple to understand (at least after one has expended some effort on them), and are either already receiving a somewhat satisfactory amount of attention, or are likely to receive such attention before too long. (This doesn't necessarily mean that they are of small probability, but rather that what can be done already seems like it's mostly gonna get done.)

  3. AI risks stand out as a special case that seems really difficult to understand. There's an exceptionally high degree of uncertainty in the estimates I'm able to make of their probability; in fact I find it very difficult to make any satisfactorily rigorous estimates at all. Such lack of understanding is a potentially very dangerous thing. I want to support more research into this.

The key point in my attitude that I would emphasize is the interest in existential risks in general. I wouldn't try to seriously talk about AI risks to anyone who couldn't first be stimulated to find within themselves such a more general serious interest. And then, if people have that general interest, they're interested in going over the various existential risks there are, and it seems to me that sufficiently smart ones realize that the AI risks are a more difficult topic than the others (at least after reading e.g. SIAI stuff; things might seem deceptively simple before one has a minimum threshold level of understanding).

So, my disagreement is that I indeed would, to a degree, avoid debates over probability. Once a general interest in existential risks is present, I would argue not about probabilities but about the difficulty of the AI topic, and about how such a lack of understanding is a very dangerous thing.

(I'm not really expressing a view on whether my approach is better or worse, though. Haven't reflected on the matter sufficiently to form a real opinion on that, though for the time being I do continue to cling to my view instead of what Eliezer advocated.)

Comment by Aleksei_Riikonen on Suffering as attention-allocational conflict · 2011-05-19T08:08:32.813Z · LW · GW

Also, how similar is the present Patri-hysteria in Finland to the Beatles-hysteria in the 60's?

One difference is that I'm aware that the former happened, but am not aware that the latter has.

(edit: by "former" and "latter" I mean the chronological order of events, not the order in which they were mentioned in the quoted comment :)

Comment by Aleksei_Riikonen on Group of Latter Day Roleplayers · 2011-05-16T01:00:28.213Z · LW · GW

I expected this to be about how many of the Mormons one runs into these days don't really seem to be serious about their religious beliefs, but are instead just going along for the social benefits of belonging to a community that in some ways seems to work. (I.e. how many Mormons actually seem to be pretty similar to typical secularized, somewhat-sensible-and-moderate Christians, despite stereotypes of them being more serious regarding their religion.)

Comment by Aleksei_Riikonen on Verifying Rationality via RationalPoker.com · 2011-04-01T01:32:02.682Z · LW · GW

Sounds like a strange comment to me. The threads I'm reading tend to stay very well on topic, to treat that topic well, and to be about as good as can be expected when the topic is poker instead of something intellectually more refined.

Comment by Aleksei_Riikonen on Verifying Rationality via RationalPoker.com · 2011-03-28T21:16:15.397Z · LW · GW

The first long list is about programs that are allowed, btw.

But I guess I should have been more specific that I was talking in the context of "bot-like" programs, and what I said is completely accurate in that context.

Of course additionally e.g. programs that show your cards to your friends are forbidden. And utilizing large centralized databases is forbidden. But programs that do such things are not "bot-like".

EDIT: Ok, actually I am wrong. They go further in banning "bot-like" programs than I described here. I knew that the multi-purpose "poker analytics suites" that people like me widely use are allowed, and that real bots are forbidden, but I was mistaken about where exactly they draw the line between these types of programs, since I hadn't really looked into it.

Comment by Aleksei_Riikonen on Verifying Rationality via RationalPoker.com · 2011-03-28T20:43:01.734Z · LW · GW

(EDIT: I am mistaken in what I state in this comment. See comments below for correction.)

Essentially the deciding factor whether an assistant program is allowed is whether the program does the mouse-clicking for you.

Quoting from the Terms of Service of the biggest poker site:

5.6. AUTOMATIC PLAYERS (BOTS). The use of artificial intelligence including, without limitation, "robots" is strictly forbidden in connection with the Service. All actions taken in relation to the Service by a User must be executed personally by players through the user interface accessible by use of the Software.

Comment by Aleksei_Riikonen on Verifying Rationality via RationalPoker.com · 2011-03-28T17:11:59.934Z · LW · GW

So, what is your analysis of what they would do to a small number of their customers who violate their rules by using real-time machine assistance?

That's not a violation of the rules. Assistant programs such as those discussed above are used by essentially all serious players, and that's fine by the sites.

People who, if they were allowed to get away with it, would destroy the online poker business.

Actually, the online poker business is growing every year.

Comment by Aleksei_Riikonen on Verifying Rationality via RationalPoker.com · 2011-03-28T16:59:59.740Z · LW · GW

It is, but there's a lot of competition in that market, with major established players (i.e. very popular, very good poker sites that put a lot of money into R&D and everything else required to stay on top).

Comment by Aleksei_Riikonen on Verifying Rationality via RationalPoker.com · 2011-03-28T16:57:19.308Z · LW · GW

It seems like, if these are legal, it would be rational to use them, unless they introduce new biases that are counterproductive, or you wish to develop your probability and card counting skills...

Of course, and indeed essentially all serious online poker players do use poker analytics suites. (They are legal. The point where assistant programs become disallowed by the sites is, essentially, when they start doing the mouse-clicking for you.)

EDIT: It turns out that the above description of exactly where the line between allowed and forbidden is drawn is mistaken. See this FAQ for a comprehensive presentation of how e.g. the biggest poker site lays down the rules on this matter (details may vary a bit between sites).

Comment by Aleksei_Riikonen on Verifying Rationality via RationalPoker.com · 2011-03-28T00:24:56.939Z · LW · GW

I'm dubious of the idea that I should be training mentally in an area that a computer program can already trounce all humans in. Playing optimal poker is computationally solvable and not demanding, except for figuring out the biases of the human players.

You obviously have very little knowledge of the topic you're presenting yourself as an expert on.

Do professional poker players make most of their money playing against rubes, or against other professionals?

As is a logical necessity, most make their money off of non-professionals. The very best probably make more off of other professionals, though there are some very bad players even at the highest stakes (e.g. billionaires who just like poker).

Is any of the money counted as their earnings prize money contributed by sponsors, or advertising sponsorships?

There isn't good public data on sponsorship deals. When one hears how much a particular professional makes, sponsorships are seldom included.

What are the arguments that a human can outplay a computer at poker?

The fact that you don't see computers beating the best humans (except in some somewhat marginal forms of poker, and even there it's debatable), and in most forms of poker, not even the semi-good players.

This isn't a matter of "argument", but a matter of observing the facts.

How is poker more useful than practicing multiplying large numbers together?

You don't usually get money for simple multiplication.

Comment by Aleksei_Riikonen on Verifying Rationality via RationalPoker.com · 2011-03-28T00:14:13.188Z · LW · GW

They're not very good at analyzing the context, but sure, there are lots of assistant programs. Especially recommended are so-called HUDs (they superimpose real-time updated statistics on your opponents on the screen), which these days tend to be included in more general poker analytics software suites such as Hold'Em Manager.

In general, if you people have questions regarding how to play poker, it's better to ask on a poker forum than here. (TwoPlusTwo is still the biggest and best, I think.)
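(Tangentially, for readers unfamiliar with what a HUD statistic even is: below is a minimal, purely illustrative sketch of how one classic stat, VPIP ("voluntarily put money in pot"), might be computed from parsed hand histories. All names and data shapes here are hypothetical; real suites such as Hold'Em Manager parse site-specific hand-history files and track far more stats than this.)

```python
# Illustrative sketch only -- hypothetical data shapes, not any real suite's
# internals. VPIP is the fraction of hands in which a player voluntarily put
# money into the pot preflop (calling or raising; posting blinds doesn't count).

from dataclasses import dataclass

@dataclass
class PreflopRecord:
    # One player's summarized preflop participation in one hand
    # (one record per player per hand).
    player: str
    put_money_in_voluntarily: bool

@dataclass
class PlayerStats:
    hands: int = 0
    vpip_hands: int = 0

    @property
    def vpip(self) -> float:
        # Fraction of observed hands where the player voluntarily entered the pot.
        return self.vpip_hands / self.hands if self.hands else 0.0

def update_stats(stats: dict[str, PlayerStats], hand: list[PreflopRecord]) -> None:
    """Fold one parsed hand's records into the running per-player stats."""
    for record in hand:
        s = stats.setdefault(record.player, PlayerStats())
        s.hands += 1
        s.vpip_hands += record.put_money_in_voluntarily  # bool counts as 0/1

# Usage: after each hand, feed in the parsed records; a HUD would then
# overlay the resulting percentage next to each opponent's seat.
stats: dict[str, PlayerStats] = {}
update_stats(stats, [PreflopRecord("Villain1", True), PreflopRecord("Villain2", False)])
print(f"Villain1 VPIP: {stats['Villain1'].vpip:.0%}")
```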

Comment by Aleksei_Riikonen on Verifying Rationality via RationalPoker.com · 2011-03-28T00:06:02.979Z · LW · GW

this means that he's paying the opportunity cost of hundreds of dollars an hour for the pleasure of wasting time here with me and my friends instead of going gambling. That just makes no sense at all.

Playing high-level poker is mentally very taxing and tiring. Even if one can do it for 2-3 hours per day, trying to spend too much time on it will often result in losses.