Open thread, Nov. 28 - Dec. 04, 2016

post by MrMind · 2016-11-28T07:40:42.202Z · LW · GW · Legacy · 94 comments

If it's worth saying, but not worth its own post, then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should start on Monday, and end on Sunday.

4. Unflag the two options "Notify me of new top level comments on this article" and "

94 comments

Comments sorted by top scores.

comment by PonchoPal · 2016-11-29T19:55:02.595Z · LW(p) · GW(p)

Hey guys,

I'm a lurker, but I'm a regular member of the Denver LW meetup crew, trying to get our scheduled meetups on the main map. There's a Karma limit for that sort of post, and the mod I talked to sent me here to ask you for help. Would you please give me internet points to make this possible? You'd make all of my transhumanist and EA dreams come true. You know, except the main ones.

comment by ProofOfLogic · 2016-11-30T07:32:49.916Z · LW(p) · GW(p)

Could I get a couple of upvotes so that I could post links? I'd like to put some of the LW-relevant content from weird.solar here now that link posts are a thing.

Replies from: Dagon, MrMind
comment by Dagon · 2016-11-30T16:57:30.507Z · LW(p) · GW(p)

Upvoted, but I have to say I dislike link posts in discussion. The vast majority are wrong for LW - either oversimplified or just repeating what's already in the wiki or Sequences. Some are on-topic and well-written, but even then the community interaction is broken when there are comments both on the hosting site and on LW.

If you have something LW-appropriate to say, make a post and include the link.

note: since links are now a thing, I'm likely in the minority.

Replies from: ProofOfLogic
comment by ProofOfLogic · 2016-11-30T19:30:38.227Z · LW(p) · GW(p)

Yeah, I think the links thing is pretty important. Getting bloggers in the rationalist diaspora to move back to blogging on LW is something of an uphill battle, whereas them or others linking to their stuff is a downhill one.

comment by MrMind · 2016-11-30T07:49:28.801Z · LW(p) · GW(p)

I have upvoted you. Make us proud.

comment by scarcegreengrass · 2016-12-01T03:39:04.842Z · LW(p) · GW(p)

Downvoting is temporarily disabled! I'm very excited about this change because in the last few weeks I've seen some good conversations deleted by someone exploiting a sockpuppet glitch. Besides, I have always preferred commenting to downvoting.

Replies from: niceguyanon
comment by niceguyanon · 2016-12-01T14:14:00.669Z · LW(p) · GW(p)

Agree. There has been an influx of posts with almost no comments to go with them, which looks sad. SSC's most recent post is really interesting but has almost no comments, perhaps because it has already been discussed on SSC; maybe this will help.

Replies from: Lumifer
comment by Lumifer · 2016-12-01T16:02:48.624Z · LW(p) · GW(p)

SSC got a little burned out by the election madness.

comment by Vaniver · 2016-11-30T17:52:44.144Z · LW(p) · GW(p)

Check out the Double Crux post in Main!

Double Crux is one of the recent CFAR methods that seems like it could spread easily and isn't too deeply reliant on other things that CFAR teaches. (Basically, it's about what leads to conversations where people can actually change their minds, and a recipe for doing so.)

comment by MrMind · 2016-11-28T14:33:15.930Z · LW(p) · GW(p)

There's a new post in Main! I missed it completely, because on login I head straight to Discussion... if you are like me, just be aware.

Replies from: ChristianKl
comment by ChristianKl · 2016-11-28T16:19:17.824Z · LW(p) · GW(p)

Given that we recently got the change to have link posts in Discussion, it seems to me the ideal solution would be to show Main posts in Discussion as well.

comment by RainbowSpacedancer · 2016-11-30T12:56:16.142Z · LW(p) · GW(p)

The reason I visit LW is it satisfies a need for community. I'm glad to see the recent efforts at revitalisation, as a large part of the value for me generated by a single conversational locus is the social support it provides. This site has been inactive for a long time - and yet to my puzzlement I still found myself checking it regularly, despite not learning anything. I discovered that it's because I just wanted to keep in touch with what's going on in rationalist circles, and hang out a bit. I see myself as an aspiring rationalist, and that's a hard thing to be alone. Spending time with people that share your goals, values, concerns and language is rejuvenating, and that's what my mind was seeking when I had the impulse to visit here. I'm aware of the dangers of tribalism and identity and yet I find myself with a brain built for life in a tribe. It's lonely out there. So thanks for coming back everyone, I'll do what I can to see this community flourish.

comment by Vaniver · 2016-12-02T19:01:15.083Z · LW(p) · GW(p)

Moved Fact Posts to Main and promoted it; make sure you don't miss it.

comment by [deleted] · 2016-12-01T14:57:06.050Z · LW(p) · GW(p)

I had to translate an article about testing the shelf life of a fourth-generation viral diagnosticum, and it seemed rather fishy to me (but then, I'm no chemist). The authors used the "accelerated aging" method: they heated the diagnosticum to various temperatures for various periods of time, and then tested the "functional parameters". The rationale is that a 10 C increase in temperature roughly doubles the reaction rate. They used the results to project the shelf life at 4 C.
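
For reference, here is a minimal sketch of the kind of projection they appear to be making, assuming the simple Q10 = 2 rule; the function name, temperatures and durations below are placeholders I made up, not the paper's actual values:

```python
# Accelerated-aging projection under the Q10 rule: the degradation rate is
# assumed to double for every 10 C increase in temperature.
def acceleration_factor(t_test_c, t_storage_c, q10=2.0):
    """Factor by which aging speeds up at the elevated test temperature."""
    return q10 ** ((t_test_c - t_storage_c) / 10.0)

# Hypothetical example: 6 weeks at 37 C standing in for storage at 4 C.
af = acceleration_factor(37, 4)   # ~9.8x faster aging
print(af, 6 * af)                 # ~59 weeks of projected shelf life at 4 C
```

(Q10 = 2 is the usual shorthand for typical Arrhenius behaviour near room temperature; the whole projection rests on the kit actually degrading through a process that follows it.)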

As far as I can tell, they did not check any test kits that had actually reached that age while being stored at 4 C.

Can anybody say anything about the applicability of the Arrhenius equation in this case? I have very little idea of what goes into viral diagnosticums and how they decompose. It just seemed wrong.

Replies from: MrMind
comment by MrMind · 2016-12-01T16:40:11.987Z · LW(p) · GW(p)

I know very little about it, but there doesn't seem to be anything wrong. If anything, the Arrhenius equation over-estimates reaction rate constants, because it presupposes no barrier to the acquisition of the activation energy.
Do you have any reason to believe that this diagnostic equipment has an internal source of energy other than temperature?
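
For reference, the Arrhenius equation and the acceleration factor it implies between a storage temperature T_1 and an elevated test temperature T_2 (both in kelvin):

```latex
k = A \, e^{-E_a/(R T)}
\qquad\Longrightarrow\qquad
\frac{k(T_2)}{k(T_1)} = \exp\!\left[\frac{E_a}{R}\left(\frac{1}{T_1} - \frac{1}{T_2}\right)\right]
```

The "10 C doubles the rate" rule of thumb corresponds to an activation energy of roughly 50 kJ/mol near room temperature; whether a whole diagnostic kit degrades like a single reaction with one well-defined E_a is exactly the assumption being questioned above.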

comment by DataPacRat · 2016-12-01T01:02:25.472Z · LW(p) · GW(p)

If you could pick one music track that, if turned into a music video, could most exemplify the emotions resulting from LW-style rationality, what would that song be?

Replies from: g_pepper, polymathwannabe, knb
comment by g_pepper · 2016-12-01T03:02:59.090Z · LW(p) · GW(p)

Aside from the music video constraint, I would say Schubert's piano sonata in B flat major. Overall it is an optimistic, contemplative piece (capturing the attitudes of many LWers) but the bass note trill occurring early in the piece and repeating several times throughout serves as a reminder of existential risks such as unfriendly AGI. The piece was used (to great effect) at a climactic scene of the recent movie Ex Machina.

Replies from: knb
comment by knb · 2016-12-03T20:25:48.356Z · LW(p) · GW(p)

I immediately thought of this.

comment by polymathwannabe · 2016-12-07T18:51:32.383Z · LW(p) · GW(p)

Steve Reich's Violin Phase.

comment by knb · 2016-12-03T20:26:43.933Z · LW(p) · GW(p)

I immediately thought of this.

comment by [deleted] · 2016-11-29T15:48:57.634Z · LW(p) · GW(p)

The bottom left corner of Questionable Content number 3362 (http://questionablecontent.net/view.php?comic=3362). That is all.

comment by MrMind · 2016-11-29T08:18:08.699Z · LW(p) · GW(p)

Everyone's afraid that robots will steal manual labor. But the components for robots stealing entrepreneurs' jobs are already floating around: DAOs, machine learning for copywriting, profit maximization.

Replies from: ChristianKl, Lumifer
comment by ChristianKl · 2016-12-03T12:08:47.864Z · LW(p) · GW(p)

A DAO doesn't replace any of the work of understanding what customers are willing to buy and making product development decisions.

In most cases, for legal products, a central server costs a lot less for the same computational power. The added complexity that a DAO provides makes no sense for most startups.

Replies from: MrMind
comment by MrMind · 2016-12-05T08:26:24.667Z · LW(p) · GW(p)

A DAO doesn't replace any of the work of understanding what customers are willing to buy and making product development decisions.

But an AI possibly can. The DAO would just be a way to redistribute the ownership of the whole "AI + robot factory" complex. It's not the AI that is distributed on the blockchain, it's the company's modus operandi.

Replies from: ChristianKl
comment by ChristianKl · 2016-12-05T12:41:06.197Z · LW(p) · GW(p)

If there's AGI, it might make the decisions that entrepreneurs make. Currently that's not on the horizon.

Replies from: MrMind
comment by MrMind · 2016-12-05T13:19:14.147Z · LW(p) · GW(p)

I believe that coordinating a series of specialized AIs could do the job, but until I write down a model I cannot go beyond a mere statement / hope.

comment by Lumifer · 2016-11-29T15:40:41.452Z · LW(p) · GW(p)

DAOs

Don't you mean the crater where the DAO used to be..?

Replies from: scarcegreengrass
comment by scarcegreengrass · 2016-11-29T17:05:00.653Z · LW(p) · GW(p)

Extremely confusingly, there is an organization named 'The DAO' which is a DAO (a Decentralized Autonomous Organization).

Replies from: Lumifer
comment by Lumifer · 2016-11-29T17:48:47.997Z · LW(p) · GW(p)

Keeping in mind that dao and tao are different ways of spelling the same thing, let me quote from Tao Te Ching:

The Tao that can be spoken is not the eternal Tao
The name that can be named is not the eternal name

Replies from: MrMind, Viliam, scarcegreengrass
comment by MrMind · 2016-11-30T08:28:16.718Z · LW(p) · GW(p)

'The Dao' that can be spoken is not the eternal DAO

Exactly!

comment by Viliam · 2016-11-29T21:18:16.616Z · LW(p) · GW(p)

Does this mean that Tao has infinite Kolmogorov complexity?

Replies from: Lumifer
comment by Lumifer · 2016-11-29T21:20:14.622Z · LW(p) · GW(p)

I don't know -- you might want to ask Gödel, Escher, and Bach about it :-)

comment by scarcegreengrass · 2016-11-29T18:07:34.125Z · LW(p) · GW(p)

Yes! This is delightful.

comment by scarcegreengrass · 2016-11-29T18:55:25.788Z · LW(p) · GW(p)

MIRI publishes a lot of research on 'neat' systems like first order logic reasoners, and not on 'scruffy' systems like neural networks. I heard Eliezer Yudkowsky allude to the idea that this is for convenience or budgetary reasons, and that they will do more research on neural networks (etc) in the future.

Does anyone have any more information about what MIRI thinks and intends to research about 'scruffy' AI systems?

Replies from: ProofOfLogic
comment by ProofOfLogic · 2016-11-30T03:54:22.043Z · LW(p) · GW(p)

Basically, this:

https://intelligence.org/2016/07/27/alignment-machine-learning/

It's now MIRI's official 2nd agenda, with the previous agenda going under the name "agent foundations".

Replies from: scarcegreengrass
comment by scarcegreengrass · 2016-11-30T20:35:41.654Z · LW(p) · GW(p)

Okay, thanks.

comment by DataPacRat · 2016-12-03T08:45:47.123Z · LW(p) · GW(p)

I'm having an un-rational moment, and despite knowing that, it's still affecting my behaviour.

Earlier today, my newsfeed included the datum discussed here, of Trump having a phone call with the President of Taiwan; and the item discussed here, about Trump talking about 'shutting down' the Internet. And later, while listening to my music playlist of the Merry Wives of Windsor, one of the tunes that popped up was "Green Fields of France", one version of which can be heard here. And I started wondering whether I was prepared for politics to go in an even more negative direction than I'd thought it might back during the American elections, faster than I thought it might.

Specifically, I have the question stuck in my head: "Have I made the appropriate level of preparations, in case of significant military conflict within the year?". There are a variety of possibilities, from America's Congress passing laws that I find abhorrent, to China engaging in cyberwar against North American network infrastructure, to a minor US/Canadian dispute blowing up to the point Trump convinces some portion of the US military across the border to ensure the continued flow of "vital resources", to worse.

Put another way - I've just finished figuring out what I would want to have done this month if, some time next year, many websites I find valuable become permanently deleted and unrecoverable (in spite of the Internet Archive's efforts). (Part of the answer: the program wget and an archival Blu-Ray burner.)

The thing is, from inside my own head, I can't tell whether my thoughts have been doing this particular set of planning because I'm currently in the middle of one of my bouts of depression, or if it's actually a perfectly reasonable response to modern life and current events. So I'm looking for some external auditing, here where the sanity waterline is reasonably high:

How crazy do I sound to you?

Replies from: entirelyuseless, sen, MrMind, DataPacRat
comment by entirelyuseless · 2016-12-03T16:27:25.194Z · LW(p) · GW(p)

You sound paranoid. Even if there is significant military conflict, it won't affect you personally.

Replies from: DataPacRat
comment by DataPacRat · 2016-12-04T00:05:41.619Z · LW(p) · GW(p)

Maybe there's a bit of terminology confusion here; if a military conflict /doesn't/ affect me personally, it seems unlikely to be a 'significant' one. (Some historical ways a military conflict could affect me personally: Victory Gardens, the Order of the White Feather, the Fenian Raids, even less oversight and accountability for civilian police whose actions would otherwise end up in the subreddit "Bad Cop, No Donut".)

I'm thinking of scenarios such as 'It turns out China put secret backdoors into all sorts of hardware chips, and in a fit of self-righteous pique (which they think will play well to their red-state base), the war-monger side of the American Congress doesn't see any downsides to making a demand that everyone in the world shut down their supposedly Chinese-controlled hardware under threat that if they don't, they'll send the American military to shut it down'. As far as I can tell, several versions of just this one particular scenario don't obviously break the sociological law of every political actor having to act in what they perceive to be their own self-interest.

However, I no longer trust my sense of calibration for the odds of large-scale politics, given that I was willing to go along with the predicted odds of 88% for Hillary winning the election, and didn't update nearly as much as I should have by the time of the election itself. And said lack of calibration puts a sharp limit on how rationally I can act as I decide how much effort to put into preparing for the more unpleasant scenarios.

comment by sen · 2016-12-03T09:09:15.728Z · LW(p) · GW(p)

You sound insane desu.

Stop browsing reddit for a while. Any board where attention is explicitly rewarded, whether in the form of (You)s or upboats, will almost by definition tend towards encouraging high volatility of beliefs and emotions. It sounds like you've been riding that wave a bit too long.

Also, learn to recognize fear mongering.

Replies from: DataPacRat
comment by DataPacRat · 2016-12-03T09:45:00.667Z · LW(p) · GW(p)

I want to provide arguments offering further justification for increasing my priority for making personal offline backup copies of various online resources (such as "it's something I've been vaguely wanting to do for some time anyway, I've just never had any particular impetus to get more of the job done than my current mirrors"), but, from inside my head, it's hard to tell whether these are actual reasons or mere rationalizations.

Do facts such as that I've had this username for 15 years, have said "it's not just a nom-de-net, it's a way of life", and already have offline copies of Wikipedia, Project Gutenberg, and several other multi-gigabyte text references, provide a reasonable amount of evidence that my possibly-irrational desired behaviour is merely a continuation of my existing trends, rather than being a step too far?

Replies from: sen
comment by sen · 2016-12-03T10:18:37.370Z · LW(p) · GW(p)

I said you sound insane because of your paranoia, not because of what you wanted to do as a result of that paranoia. Whether or not you would be creating backups in other circumstances is irrelevant, except as an indicator of how paranoid you are. I don't think such an indicator is necessary because your first two paragraphs already demonstrate what I see as an extreme level of paranoia, and so to me it's irrelevant whether you already have backups of various sites. It's perfectly reasonable for you to create backups given your beliefs. Those beliefs though I consider insane. The solution then is not to stop creating backups, as that would accomplish nothing. The solution is to stop browsing sites that are specifically designed to make you insane.

Replies from: DataPacRat
comment by DataPacRat · 2016-12-03T10:48:08.475Z · LW(p) · GW(p)

paranoia

Ah, but is it really paranoia if "they really are out to get you"? :)

I've previously demonstrated that I'm willing to make long-shot gambles on 5% odds, given that that's roughly my estimate of cryonics working and I've signed up for it. So let's try working with that number.

Out of all the possible scenarios of a Trump presidency, if you leave out 95% of the most positive options, how unpleasant is the best-remaining one? Put another way, is there at least a 5% chance of American or international politics descending to the point where my current apparent paranoia seems reasonable? And don't forget, as you calibrate your answer, that according to FiveThirtyEight, on October 17, Hillary had been predicted to have over an 88% chance of winning, implying that many people, likely including myself, have been massively mis-calibrated about how likely unpleasant political events are.

Replies from: sen
comment by sen · 2016-12-03T11:23:05.808Z · LW(p) · GW(p)

You don't place bets based solely on probabilities. You place bets based on probabilities, odds, timescales, investments, and alternative options. Specifically, you place bets to optimize for growth of principal with respect to time. What you're doing is not placing a bet. If you were placing a bet and wanted feedback, it would have been appropriate to provide a lot more information, such as what you expect to gain from your bet, what you expect to lose in the negative case, what you're hoping to optimize, what your expected costs are, and what alternatives you're considering spending your time or money on. It's not appropriate for you to provide any of that information because what you're doing is not placing a bet.

What you're doing is panicking and looking for an echo to tell you that your beliefs are sensible, that the world really is crashing, and that what you're doing is justified. Your beliefs are not sensible, and the world isn't really crashing. I don't know if what you're doing is justified, as that would require a lot more information, but honestly I think that's irrelevant.

Replies from: DataPacRat
comment by DataPacRat · 2016-12-03T11:43:17.403Z · LW(p) · GW(p)

(A quick FYI, I'm about to try for a good night's sleep, then compare how I was feeling when I first posted in this thread with however I feel when I wake.)

comment by MrMind · 2016-12-05T08:17:16.600Z · LW(p) · GW(p)

Only very mildly.
The point is that your priority should be to get out of depression: in the case of a military conflict, how helpful will that be? This is much more important for your long-term survival than a bunch of archived reddit threads.

Replies from: DataPacRat
comment by DataPacRat · 2016-12-05T10:05:31.726Z · LW(p) · GW(p)

get out of depression

If you have any clue for a method on how a person can reliably accomplish that - especially if it's one that I haven't tried yet - please share. With the whole world.

I trust that you won't mind if I don't plan on holding my breath.

Replies from: MrMind
comment by MrMind · 2016-12-05T15:48:17.063Z · LW(p) · GW(p)

I was talking about the meta-level, and your meta-level question was "Have I made the appropriate preparations?", to which I answered: no, the biggest improvement is if you prioritize depression treatment over anything else.

That said, on the object level, if you have that goal, then you would try anything sensible-sounding and any combination of anything sensible until something works.
But I cannot tell you what is sensible because I'm not an expert on depression.

Replies from: DataPacRat
comment by DataPacRat · 2016-12-05T21:35:57.869Z · LW(p) · GW(p)

if you have that goal, then you would try anything sensible-sounding and any combination of anything sensible until something works.

I have had that goal for some time. I have tried the sensible-sounding things, in various combinations. They didn't work. So I've been shifting my focus from "trying to keep depressive bouts from happening" to "managing my life on the assumption I'm going to keep getting depressive bouts". I've hit enough such management tricks that even with my bout last week interrupting, I'm about 60,000 words into writing a novel, including 1600 words yesterday; I could be doing better, sure, but I could be doing a lot /worse/, too.

comment by DataPacRat · 2016-12-03T20:39:59.738Z · LW(p) · GW(p)

As a point of interest: as of when I woke up, the votes were: LessWrong, two votes for paranoid; /r/rational, two votes for not particularly crazy.

Emotionally, I'm not feeling the particular "I'm going to hate myself in January 2018 if I haven't mailed copies of my archival Blu-Ray discs to certain members of my extended family stretching halfway across the continent by then, and the Net gets taken down" urgency that I did when I posted, but it still seems like a good idea to nudge my plans in the direction of being able to handle that particular scenario with minimal losses of what I find valuable.

comment by Ixiel · 2016-12-02T14:03:38.113Z · LW(p) · GW(p)

Oops, solved

comment by MrMind · 2016-12-02T10:12:32.539Z · LW(p) · GW(p)

I think the unofficial, undercover ban on basilisks should be removed. Acausal trade is an important topic and should be openly discussed.

[pollid:1171]

Replies from: Unnamed, Dagon
comment by Unnamed · 2016-12-03T03:06:50.527Z · LW(p) · GW(p)

As of October 2015, "The Roko's basilisk ban isn't in effect anymore."

comment by Dagon · 2016-12-03T02:00:42.600Z · LW(p) · GW(p)

I've seen it referenced enough times recently that I suspect the ban is no longer enforced anyway. Our wiki has a complete writeup which includes the original post.

comment by MrMind · 2016-12-02T10:11:21.205Z · LW(p) · GW(p)

I think the unofficial, undercover ban on basilisks should be removed. Acausal trade is an important topic and should be openly discussed.

[pollid:1170]

Replies from: MrMind
comment by MrMind · 2016-12-02T10:13:21.936Z · LW(p) · GW(p)

Sorry, I messed up the number of options. The examples in the help are wrong, though.

comment by MrMind · 2016-12-01T14:35:44.868Z · LW(p) · GW(p)

Let's say that I have a belief running like this: "a DAO that controls the manufacturing output of robots to produce a UBI would be the solution to the robots-stealing-jobs problem".
What would be the best move for me to influence someone into believing / trying this?
Take a degree in economics? Join some kind of foundation? Shout from the top of a cardboard box in front of the Coliseum?
What else?

Replies from: Lumifer, ChristianKl, None
comment by Lumifer · 2016-12-01T15:58:52.507Z · LW(p) · GW(p)

You need a VC (venture capitalist) because you're effectively proposing a startup and the most important things you need are money and talent.

In parallel you can write a paper (to peer-review standards) which would explain why you think it will work, and/or a set of lay posts explaining the same in accessible language.

The first question that comes to my mind is why you need a DAO here, and what the difference is from a state nationalising "robots to produce a UBI". Or an everyone-gets-a-share corporation, if you're particularly distrustful of the state.

Replies from: MrMind
comment by MrMind · 2016-12-01T16:12:06.513Z · LW(p) · GW(p)

Well, a DAO is a kind of formalized, cryptographically secure everyone-gets-a-share corporation, so it's not much different.
But you actually gave me very good ideas, thanks.

Replies from: ChristianKl
comment by ChristianKl · 2016-12-03T00:39:22.875Z · LW(p) · GW(p)

Well, a DAO is a kind of formalized, cryptographically secure everyone-gets-a-share corporation, so it's not much different.

Not really, given the DAO that we have seen.

There's no cryptographically secure mechanism to give everybody a share, under the common definition of what cryptographically secure means. Sockpuppeting can't be solved with cryptography.

Replies from: MrMind
comment by MrMind · 2016-12-05T08:22:58.542Z · LW(p) · GW(p)

Very well, "secure" in cryptography just means "robust". I was not implying that it is impossible to hack, and since the DAO we are speaking about doesn't even exist, it doesn't make sense to discuss its pros and cons.

Replies from: ChristianKl
comment by ChristianKl · 2016-12-05T12:40:44.181Z · LW(p) · GW(p)

Very well, "secure" in cryptography just means "robust".

No. Secure in cryptography generally means the ability to make security guarantees. Trust in identity is still required.

There's also no reason why a DAO that distributes money to random people won't be outcompeted by one that reinvests all resources in furthering its own goals.

comment by ChristianKl · 2016-12-03T00:36:34.266Z · LW(p) · GW(p)

The first step would be to get more specific about what you actually believe. List the individual beliefs that you have and write down your credence for the key assumptions.

comment by [deleted] · 2016-12-01T14:59:36.948Z · LW(p) · GW(p)

If you already have relevant research skills (e.g., as a statistician), why not just hunt yourself an economist and co-work?

Replies from: MrMind
comment by MrMind · 2016-12-01T16:14:49.277Z · LW(p) · GW(p)

That might indeed be something I'm willing to try, thanks!

comment by MrMind · 2016-12-01T14:31:05.384Z · LW(p) · GW(p)

Richard Wong, head of engineering at Coursera, declared in an interview on lifehacker.com:

I used to be a PC-only person, back during my days at Microsoft, but now I’m pretty much Apple only. It has some of the best development tools for engineers.

It beats me, though. I knew that PCs are good for gaming and development, but which are the conclusively superior development tools for engineers? I'm confused.

Replies from: Vaniver, Lumifer
comment by Vaniver · 2016-12-01T16:58:19.776Z · LW(p) · GW(p)

which are the conclusively superior development tools for engineers?

Most of the cutting-edge projects that I know have simple installation instructions on Linux/Mac, along the lines of "oh, you sudo apt-get install project and then it manages all the dependencies for you and just works," whereas getting them to work in Windows is something of an "alright, you need to get X, Y, and Z, and you also want W but it doesn't really work in Windows, so here's a hack to sort of work around that, which might stop working whenever they upgrade."

I suspect this is the impression he's pointing at.

Replies from: Lumifer
comment by Lumifer · 2016-12-01T17:31:49.593Z · LW(p) · GW(p)

Yep.

Writing code on Windows works reasonably well if you stay within a pre-built environment (e.g. Visual Studio, Eclipse, RStudio, etc.). Stray out of it and you'll be forced to painfully kludge together some bastardised version of Unix inside Windows.

Macs are Unix machines at their core (with a pretty GUI on top) so from the writing code point of view there isn't a great difference between a Mac and, say, an Ubuntu machine.

By the way, Richard Wong's favourite text editor -- Sublime Text -- is available on all three platforms.

comment by Lumifer · 2016-12-01T16:01:37.099Z · LW(p) · GW(p)

I'm not sure what you are asking. IMHO for development a Linux/Unix environment is superior, but in specific cases it all depends on what kind of application software is available.

Replies from: MrMind
comment by MrMind · 2016-12-01T16:17:46.184Z · LW(p) · GW(p)

Yeah, I wasn't very clear. Can you think of any development tools that are definitely superior on / work only on Apple?

Replies from: Lumifer
comment by Lumifer · 2016-12-01T16:30:31.741Z · LW(p) · GW(p)

By "development" do you mean "writing code"?

You mentioned engineering which could imply, say, CAD/CAE software, and at this level you're basically comparing applications; the OSes (Win/Mac/*nix) matter only to the extent that the program you need will run on them.

Replies from: Vaniver
comment by Vaniver · 2016-12-01T16:59:29.138Z · LW(p) · GW(p)

You mentioned engineering which could imply, say, CAD/CAE software

Head of engineering at Coursera means software engineering.

comment by erratio · 2016-11-30T04:49:07.186Z · LW(p) · GW(p)

Would I be able to tap the LW academic network to get a copy of this paper?

Extreme gratitude in advance.

Replies from: MrMind
comment by MrMind · 2016-11-30T07:51:07.754Z · LW(p) · GW(p)

Does sci-hub.io work for you? I'm behind a firewall at the moment.

comment by ingive · 2016-11-28T08:19:48.213Z · LW(p) · GW(p)

(Explosions in the Sky music.) It's very important: as a rationalist, your job is to understand the machine that you are. Knowing what you are, how you choose your actions, and seeing through the conditioning and the extreme obstacles that are limiting your growth and the growth of humanity is very important. So study neuroscience!

A reminder that rationality is a slave to our emotions, and how in line our emotions are with rationality dictates how rational our actions are; for example, from one moment to the next you can become vegan. The disconnect between emotions and rationality might have to do with the DMN - the Default Mode Network. When you are depressed, for example, you have higher activation in the DMN and lower activation in the more frontal parts of the brain / general intelligence (g) / working memory. To lower the activation of the DMN you can meditate, use CBT, take SSRIs or psychedelics, or, better, activate the more advanced parts of your brain. The DMN is silenced when you are focused.

The speculation/observation:

By attaching a concept or an idea to your reward center, or emotional core, you can easily overcome the flaws that you've been socially conditioned to believe, although we aren't certain of this yet. (Cathexis, Freud - regarding becoming emotionally invested in ideas.)

You can figure out what concept or idea you are emotionally invested in right now; for many, it's probably comfort or security that is driving their actions, and thus they use rationality simply as a tool rather than as the end. As a means it was very useful when we transitioned into the scientific era.

Instead of being emotionally invested in an idea which is limiting your growth and that of the species, you can become emotionally invested in rationality, under which you will transform; but you have to present rationality as something which your emotional core will resonate with, and the idea which you are emotionally invested in right now has to be rejected.

After that you will see reality for what it is - you will not only have mastered the Way, you will have become the Way.

Replies from: RowanE
comment by RowanE · 2016-11-28T08:54:20.066Z · LW(p) · GW(p)

The usual rule is to identify as an "aspiring rationalist"; identifying rationality as what you are can lead to believing you're less prone to bias than you really are, while identifying it as what you aspire to reminds you to maintain constant vigilance.

Replies from: ingive
comment by ingive · 2016-11-28T10:11:26.763Z · LW(p) · GW(p)

That is mostly true; you've discovered the fallacy of most humans: they identify not for rationality's sake but, in most cases, for their own comfort. Because they are not honest, they wear the facade of rationality to rationalize their behavior, even though they are not rational at all and don't care.

Stupidity can be categorized as writing useless posts on Facebook (Yudkowsky), not being vegan, smoking, etc.

Connecting with the Way emotionally will allow you to scrutinize and redevelop your belief system. It's an observation our species has made but not reviewed.

Replies from: Viliam, NatashaRostova
comment by Viliam · 2016-11-28T23:44:53.339Z · LW(p) · GW(p)

Okay, I finished reading the book, and then I also looked at the wiki. So...

A few years ago I suspected that the biggest danger for the rationalist movement could be its own success. I mean, as long as no one gives a fuck about rationality, the few nerds are able to meet somewhere at the corner of the internet, debate their hobby, and try to improve themselves if they so desire. But if somehow the word "rationality" becomes popular, all crackpots and scammers will notice it, and will start producing their own versions -- and if they won't care about the actual rationality, they will have more degrees of freedom, so they will probably produce more attractive versions. Well, Gleb Tsipursky is already halfway there, and this Athene guy seems to be fully there... except that instead of "rationality", his applause light is "logic". Same difference.

Instead of nitpicking a hundred small details, I'll try to get right into what I perceive as the fundamental difference between LW and "logic nation":

According to LW, rationality is hard. It's hard, because our monkey brains were never designed by evolution to be rational in the first place. Just to use tools and win tribal politics. That's what we are good at. The path to rationality is full of a thousand biases, and often requires going against your own instinct. This is why most people fail. This is why most smart people fail. This is why even most of the smartest ones fail. Humans are predictably irrational, their brains have systematic biases, even smart people believe stupid things for predictable reasons. Korzybski called it "map and territory", other people call it "magical thinking", here at LW we talk about "mysterious answers to mysterious questions" -- this all points in approximately the same direction, that human brains have a predictable tendency to just believe some stupid shit, because from inside it seems perfectly real, actually even better than the real thing. And smarter people just do it in more sophisticated ways. So you have to really work hard, study hard, and even then you have a tiny chance to be fully sane; but without current research and hard work, your chances are zero for all practical purposes.

"Logic nation" has exactly the opposite approach. There is this "one weird trick", when you spend a few hours or weeks doing a mental exercise that will associate your positive emotions with "logic", and... voilà... you have achieved a quantum leap, and from now on all you have to do is to keep this emotional state, and everything will be alright. Your faith in logic will save you. And the first thing you have to do, of course, is to call your friends and tell them about this wonderful new thing, so they also get the chance to "click". As long as you keep worshiping "logic", everything will be okay. Mother Logic loves you, Mother Logic cares for you, Mother Logic will protect you, Mother Logic created this universe for you... and when you fully understand your true nature, you will see that actually Mother Logic is you. (Using my own words here, but this is exactly how what I have seen so far seems to me.)

Well, to me this smells like exactly the kind of predictable irrationality humans habitually do. Take something your group accepts as high-status and start worshiping it. Imagine that all your problems will magically disappear if you just keep believing hard. Dissolve yourself in some nebulous concept. How is this different from what the average New Age hippie believes? Oh yes, your goddess is called Logic, not Gaia. I rest my case.

I know that the topic of AI is too removed from our everyday lives, and most people's opinion on this topic will absolutely have no consequence on anything, but even look there: Athene just waves his hand and says it will be all magically okay, because an AI smarter than us will of course automatically invent morality. (Another piece of human predictable irrationality, called "anthropomorphisation". Yeah, the AI will be just another human, just like the god of rain is just another human. What else than a human could there be?)

Speaking of instrumental rationality, the book you linked provides a lot of good practical advice. I was impressed. I admit I didn't expect to see this level of sanity outside LessWrong. Some parts of the book could be converted into 5 or 10 really good posts on LW. I mean it as a compliment. But ultimately, that seems to be all there is, and the rest is just huge hype about it. (Recently LW is kind of dying, so to get an idea about what really high-quality content looks like, see e.g. articles written by lukeprog.) But speaking about epistemic rationality, the "logic nation" is far below the LW level. It's all just hand-waving. And salesmanship.

Also, I dislike how Athene provides scientific citations for very specific claims, but when he describes a whole concept, he doesn't bother hinting that the concept was already invented by someone else. For example, on the wiki there is his bastardized version of Tegmark Multiverse + Solomonoff Induction, but it is written as something he just made up, using "logic". You see, science is only useful for providing footnotes for his book. Science supports Athene, not the other way round.

Eliezer, for all his character flaws, may perhaps describe himself as the smartest being in the universe (I am exaggerating here (but not too much)), but then he still tells you about Kahneman and Solomonoff and Jaynes and others, and would encourage you to go and read their books.

Etc. The summary is that Athene provides a decent checklist of instrumental rationality in his book, but everything else is just hype. And his target audience is the people who believe in "one weird trick".

Try reading the Sequences and maybe you will see what I was trying to describe here. That is a book that often moves people to a higher level of clarity of thinking, where the things that seemed awesome previously just become "oh, now I see how this is just another instance of this cognitive error". I believe what Athene is doing is built on such errors; but you need to recognize them as errors first. Again, I am not saying he is completely wrong; and he has useful things to provide. (I haven't listened to his podcasts yet, if they expand on the material from the book, that could be valuable. Although I strongly prefer written texts.) It's just, there is so much hype about something that was already done better. So obviously people on this website are not going to be very impressed. But it may be incredibly impressive to someone not familiar with the rationalist community.

Replies from: ingive
comment by ingive · 2016-11-29T02:22:07.491Z · LW(p) · GW(p)

Okay, I finished reading the book, and then I also looked at the wiki. So...

If you are aware of mathematics, what do you think about this part: https://logicnation.org/wiki/A_simple_click#Did_God_create_logic.3F Is it falsifiable? There was an interesting talk about how something can arise out of nothing and how it relates to the present moment, which one can't ever grasp, but I will have to condense it for you guys later.

A few years ago I suspected that the biggest danger for the rationalist movement could be its own success. I mean, as long as no one gives a fuck about rationality, the few nerds are able to meet somewhere at the corner of the internet, debate their hobby, and try to improve themselves if they so desire. But if somehow the word "rationality" becomes popular, all crackpots and scammers will notice it, and will start producing their own versions -- and if they won't care about the actual rationality, they will have more degrees of freedom, so they will probably produce more attractive versions. Well, Gleb Tsipursky is already halfway there, and this Athene guy seems to be fully there... except that instead of "rationality", his applause light is "logic". Same difference. Instead of nitpicking a hundred small details, I'll try to get right into what I perceive as the fundamental difference between LW and "logic nation":

I agree.

According to LW, rationality is hard.

It's hard, because our monkey brains were never designed by evolution to be rational in the first place. Just to use tools and win tribal politics. That's what we are good at.

That's false; if it weren't for evolution, you wouldn't have the ability to be rational in the first place.

The path to rationality is full of a thousand biases, and often requires going against your own instinct. This is why most people fail. This is why most smart people fail. This is why even most of the smartest ones fail. Humans are predictably irrational, their brains have systematic biases, even smart people believe stupid things for predictable reasons. Korzybski called it "map and territory", other people call it "magical thinking", here at LW we talk about "mysterious answers to mysterious questions" -- this all points in approximately the same direction, that human brains have a predictable tendency to just believe some stupid shit, because from inside it seems perfectly real, actually even better than the real thing. And smarter people just do it in more sophisticated ways. So you have to really work hard, study hard, and even then you have a tiny chance to be fully sane; but without current research and hard work, your chances are zero for all practical purposes.

Again, I agree.

"Logic nation" has exactly the opposite approach. There is this "one weird trick", when you spend a few hours or weeks doing a mental exercise that will associate your positive emotions with "logic", and... voilà... you have achieved a quantum leap, and from now on all you have to do is to keep this emotional state, and everything will be alright. Your faith in logic will save you.

Yes, the one weird trick has been observed to 'work' in the different cases although we don't know more than that right now. But someone with an understanding of neuroscience and the psychology-physiology connection, who can search the academic literature, would be able to connect the dots, I think.

'Logic nation' has nothing to do with this type of rationality, though; it was a mistake to deliberately say it had, or to use the word at all.

And the first thing you have to do, of course, is to call your friends and tell them about this wonderful new thing, so they also get the chance to "click".

No, you don't want to do that; it's unlikely people will care or understand, or they will immediately cry cult. It's also, with a large likelihood, an inefficient use of time. There are people who want to click, so it's probably better to push resources there. If Yudkowsky clicked, he would probably not call up someone; instead he would write an article 'to rule them all' and all of you would finally get it.

But this is speculation, or me giving information which can be taken into account after you click. Because you think differently, you will probably take some time to restructure your beliefs as the "first thing".

As long as you keep worshiping "logic", everything will be okay. Mother Logic loves you, Mother Logic cares for you, Mother Logic will protect you, Mother Logic created this universe for you... and when you fully understand your true nature, you will see that actually Mother Logic is you. (Using my own words here, but this is exactly how what I have seen so far seems to me.)

Not completely accurate; it only applies if you cannot fix something within an adequate amount of time. If you, for example, scratched your leg, you can accept the pain, and thus the suffering goes away instantly. Doing logical things and figuring out God (Spinoza) by science could be seen as prayer, maybe neuroscience or something else too. But it's speculation after all, since it differs for everyone based on everything: their current knowledge (see the example of Yudkowsky) and so on.

Well, to me this smells like exactly the kind of predictable irrationality humans habitually do. Take something your group accepts as high-status and start worshiping it. Imagine that all your problems will magically disappear if you just keep believing hard. Dissolve yourself in some nebulous concept. How is this different from what the average New Age hippie believes? Oh yes, your goddess is called Logic, not Gaia. I rest my case.

Sure, rationality-as-LW-and-all-the-literature-puts-it. You're better off asking: how is this different from what I believe? Instead, your god might be 'comfort'. 'Identity' might be prevalent in rationality communities. When you realize this, doing the 4 steps, emotionally, you're on the path to mastering the Way.

I know that the topic of AI is too removed from our everyday lives, and most people's opinion on this topic will absolutely have no consequence on anything, but even look there: Athene just waves his hand and says it will be all magically okay, because an AI smarter than us will of course automatically invent morality. (Another piece of human predictable irrationality, called "anthropomorphisation". Yeah, the AI will be just another human, just like the god of rain is just another human. What else than a human could there be?)

Sure, I agree, it requires more understanding of the topic and he is lacking quite a bit. Has someone made the argument: what if humans trying to intervene with AGI end up being the cause of the species' destruction? Hard-coding values might be contradictory, for example. It might value 'logic' automatically.

Speaking of instrumental rationality, the book you linked provides a lot of good practical advice. I was impressed. I admit I didn't expect to see this level of sanity outside LessWrong. Some parts of the book could be converted into 5 or 10 really good posts on LW. I mean it as a compliment. But ultimately, that seems to be all there is, and the rest is just huge hype about it. (Recently LW is kind of dying, so to get an idea about what really high-quality content looks. ....................he still tells you about Kahneman and Solomonoff and Jaynes and others, and would encourage you to go and read their books.

That's good. I wonder how much he has read on LW or rationality. It might be the case that he didn't bastardize Tegmark + Solomonoff, but just made it all up himself. But he knows about rationality.org, EA & LW.

Etc. The summary is that Athene provides a decent checklist of instrumental rationality in his book, but everything else is just hype. And his target audience is the people who believe in "one weird trick".

You don't have to believe it; you can observe it, write it down, read the testimonies, and think about what is going on with your current data. If there were studies and peer review, that would change the predictions, but there can still be one now, if you are willing to try it.

Try reading the Sequences and maybe you will see what I was trying to describe here. That is a book that often moves people to a higher level of clarity of thinking, where the things that seemed awesome previously just become "oh, now I see ................ texts.) It's just, there is so much hype about something that was already done better. So obviously people on this website are not going to be very impressed. But it may be incredibly impressive to someone not familiar with the rationalist community.

Sure, in the same way I don't see you as completely wrong: you have useful things to provide, and so do all the books on rationality / the Sequences, etc. I agree with most of what you're saying, but it seems as if you don't really understand what the click is about.

a) emotions (categorize as a value) -> uses rationality as a tool to sustain the value

b) I don't know what to write here. You'll have to see for yourself.

Replies from: Viliam
comment by Viliam · 2016-11-29T10:06:11.322Z · LW(p) · GW(p)

If you are aware of mathematics, what do you think about this part: https://logicnation.org/wiki/A_simple_click#Did_God_create_logic.3F Is it falsifiable?

This is exactly the part I called "his bastardized version of Tegmark Multiverse + Solomonoff Induction" in my previous comment. He introduces a few complicated concepts, without going into details; it's all just "this could", "this would", "emerges from this".

To be falsifiable, there needs to be a specific argument made in the first place. Preferably written explicitly, not just hinted at.

For example: "Additionally it might also help us better understand the quantum weirdness such as entanglement and superposition." -- Okay, might. How exactly? Uhm, who cares, right? It's important that I said "quantum", "entanglement" and "superposition". It shows I am smart. Not like I said anything specific about quantum physics, other than that it might be connected with ones and zeroes in some unspecified way. Yeah, maybe.

Statements like "If logic is your core value you automatically try to understand everything logically" are deeply in motte-and-bailey territory. Yeah, people who value logic are probably more likely to try using it. On the other hand, human brains are quite good at valuing one thing and automatically doing another.

When I try going through individual statements in the text, too many of them contain some kind of weasel word. Statements that something "can be this", "could be this", "emerges from this", or "is one of reasons" are hard to disprove. Statements saying "I have been wondering about this", "I will define this", "this makes me look at the world differently" can be true descriptions of the author's mental state; I have no way to verify it; but that's irrelevant for the topic itself. -- There are too many statements like this in the text. Probably not a coincidence. I really don't want to play this verbal game, because this is an exercise in rhetoric, not rationality.

Yes, the one weird trick has been observed to 'work' in the different cases although we don't know more than that right now.

I suspect it already has a name in psychology, and that it does much less than Athene claims. In psychotherapy, people have "breakthrough insights" every week, and it feels like their life has changed completely. But this is just a short-term emotional effect, and the miraculous changes usually don't happen.

I wonder how much he has read on LW or rationality. It might be the case that he didn't bastardize Tegmark + Solomonoff, but just made it all up himself. But he knows about rationality.org, EA & LW.

So, he knows about LW and stuff, but he doesn't bother to make a reference, and instead he tells it like he made up everything himself. Nice.

Well, that probably explains my feeling of "some parts are pure manipulation, but some parts feel really LessWrong-ish". The LessWrong-ish parts are probably just... taken from Less Wrong.

Replies from: ingive
comment by ingive · 2016-11-29T10:38:55.069Z · LW(p) · GW(p)

This is exactly the part I called "his bastardized version of Tegmark Multiverse + Solomonoff Induction" in my previous comment. He introduces a few complicated concepts, without going into details; it's all just "this could", "this would", "emerges from this". To be falsifiable, there needs to be a specific argument made in the first place. Preferably written explicitly, not just hinted at.

Falsifiable mathematically, a theory of everything which includes the theory itself, I mean. But sure, it allows someone who is going to write a paper anyway to pick up the torch.

For example: "Additionally it might also help us better understand the quantum weirdness such as entanglement and superposition." -- Okay, might. How exactly? Uhm, who cares, right? It's important that I said "quantum", "entanglement" and "superposition". It shows I am smart. Not like I said anything specific about quantum physics, other than that it might be connected with ones and zeroes in some unspecified way. Yeah, maybe. When I try going through individual statements in the text, too many of them contain some kind of weasel word. Statements that something "can be this", "could be this", "emerges from this", or "is one of reasons" are hard to disprove. Statements saying "I have been wondering about this", "I will define this", "this makes me look at the world differently" can be true descriptions of the author's mental state; I have no way to verify it; but that's irrelevant for the topic itself. -- There are too many statements like this in the text. Probably not a coincidence. I really don't want to play this verbal game, because this is an exercise in rhetoric, not rationality.

They are just ramblings, and no real inquiry has been made to investigate these 'bathtub theories', as Elon Musk puts it. But it is an easy way to explain certain things. Too much weight shouldn't be put on it, but it would be interesting to see papers, so someone who is going to publish - do this.

Statements like "If logic is your core value you automatically try to understand everything logically" are deeply in motte-and-bailey territory. Yeah, people who value logic are probably more likely to try using it.

On the other hand, human brains are quite good at valuing one thing and automatically doing another.

That's not correct, as doing another thing still arises out of what you value emotionally, according to this theory.

I suspect it already has a name in psychology, and that it does much less than Athene claims. In psychotherapy, people have "breakthrough insights" every week, and it feels like their life has changed completely. But this is just a short-term emotional effect, and the miraculous changes usually don't happen.

It does: religious experience, enlightenment (wikipedia.org/wiki/Enlightenment_(spiritual)), mystical experience, nondualism.

We don't know if it's permanent; so far the data only covers around 1 month to 1 month and 1 week, with the exception of Athene himself (the creator of the experiment). But enlightenment, religious experiences, etc. last for a while.

So, he knows about LW and stuff, but he doesn't bother to make a reference, and instead he tells it like he made up everything himself. Nice. Well, that probably explains my feeling of "some parts are pure manipulation, but some parts feel really LessWrong-ish". The LessWrong-ish parts are probably just... taken from Less Wrong.

Well, he probably hasn't read anything. He did apply for an LW meet-up but was rejected, as he had to stay for the full number of days. Before this clicking religion thing, they did reach out regarding their group on here I think, and at the EA forums and elsewhere. Staying there is free. Regarding rationality.org and so forth I think he mentioned they're all just intellectually masturbating.

By the way, what do you think about the website: https://www.asimpleclick.org/# ?

Replies from: Viliam
comment by Viliam · 2016-11-29T12:47:04.052Z · LW(p) · GW(p)

This is Athene:

I tried to understand the world by seeing everything as information instead since it then becomes a lot easier to find a logical answer to how we came to existence and why the logical patterns around us emerge. There are two scenario's that sound more logical for the average person, one is that there has always been nothing and the other that there has always been infinite chaos. Keep in mind, this is simplified because always makes us think about time and time came only to existence with the big bang. The issue people have though is how something could emerge from nothing without the intervention of a creator. On the other hand, if we assume there was always infinite chaos and we can find a falsifiable explanation to how our consistent reality could emerge from it we would have a much easier time to set our inner conflict at ease.

To get back to how I approach everything as information, let's represent this infinite chaos as 1's and 0's. How could our reality emerge from this and how would logic be able to bring about all this beauty and consistency. There is already mathematical models of how chaos brings about order but in this specific case we can also derive certain mathematical conclusions from infinity. For example 0 would appear around half the time and 1 as well. Same, if you take the combination 01 it would appear 25% of the time while the combination 10, 11 and 00 would do so to. What you already can see is that the longer the binary number is the less frequent it appears within infinity.

To understand the next step you need some basic understanding about the concept of compression algorithm. To illustrate, if you have a fully black background in paint and save it as a .bmp it will be a much larger file then when you save it as a .jpg. The reason for this is because the .jpg uses a compression algorithm that allows you to show the same black picture on the screen but requires a lot smaller binary number. If this black picture would be our consciousness instead and it would emerge from infinite chaos, it would naturally be the one that is most compressed since it is what is most likely to happen. This is one explanation for how everything around us seems to follow specific patterns as these are merely the compression algorithms that are brought about due to the probabilities within infinite chaos.

If this line of thinking would be true it would also have other consequences. The number 1 and a billion 0's for example would be smaller then a shorter binary number that would contain more information. This approach would also bring about a different kind of math that isn't based on Euclidean or non-Euclidean geometry. Additionally it might also help us better understand the quantum weirdness such as entanglement and superposition.
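(The compression point itself is easy to check, for what it's worth. Here is a minimal sketch, assuming Python and zlib as a stand-in for the .bmp/.jpg comparison above; a perfectly uniform "image" compresses down to almost nothing, while random noise barely compresses at all.)

```python
# Minimal sketch (an illustration, not from the original post): compare how well
# a compressor handles perfectly regular data versus random data.
import os
import zlib

black = bytes(1_000_000)           # one million identical zero bytes ("black picture")
noise = os.urandom(1_000_000)      # one million random bytes ("chaos" sample)

print(len(zlib.compress(black)))   # around a kilobyte
print(len(zlib.compress(noise)))   # still roughly a million bytes
```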

This is a Less Wrong article on a similar topic: An Intuitive Explanation of Solomonoff Induction
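(The 2^-length weighting the quote gestures at is also the core of the Solomonoff prior. A minimal sketch, again assuming Python, checking that a fixed pattern of length n appears in a uniformly random bit stream with frequency of roughly 2^-n:)

```python
# Minimal sketch (an illustration, not from the article): in a uniformly random
# bit string, a fixed pattern of length n matches at a given position with
# probability 2^-n, so longer patterns are rarer.
import random

random.seed(0)
stream = "".join(random.choice("01") for _ in range(200_000))

for pattern in ["0", "01", "0110", "01101001"]:
    n = len(pattern)
    windows = len(stream) - n + 1
    # Count overlapping occurrences by sliding a window over the stream.
    hits = sum(1 for i in range(windows) if stream[i:i + n] == pattern)
    print(f"{pattern!r}: observed {hits / windows:.5f}, predicted {2 ** -n:.5f}")
```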

I hope you understand why I am not impressed with Athene's version.

Well, he probably hasn't read anything. He did apply for an LW meet-up but was rejected, as he had to stay for the full number of days. Before this clicking religion thing, they did reach out regarding their group on here, I think, and at the EA forums and elsewhere. Staying there is free. Regarding rationality.org and so forth I think he mentioned they're all just intellectually masturbating.

Having to stay somewhere for a few days doesn't sound to me like a regular LW meetup. I guess it was either a CFAR workshop, or an event like this.

(Uhm, this is probably not the case, but asking anyway to make sure -- "they did reach out regarding their group on here I think" does not refer to this, right? Because that's the only recent attempt to reach out here that I remember.)

Regarding rationality.org and so forth I think he mentioned they're all just intellectually masturbating.

Heh, sometimes I have a similar impression. On the other hand, some things take time. A few years ago, superintelligent AI was a completely fringe topic... now it's popular in media, and random people share articles about it on Facebook. So either the founders of LW caused this trend, or at least were smart enough to predict it. That requires some work. MIRI and CFAR have funding, which is also not simple to achieve. They sometimes publish scientific articles. If I remember correctly, they were also involved in creating the effective altruist movement. (Luke Muehlhauser, the former executive director of MIRI, now works for GiveWell.) There is probably more, but I think this already qualifies as more than "intellectual masturbation".

Athene has an impressive personal track record. I admit that part. But the whole thing about "clicking" is a separate claim. (Steve Jobs was an impressive person; that doesn't prove his beliefs in reincarnation are correct.)

By the way, what do you think about the website: https://www.asimpleclick.org/ ?

Any specific part of it? I have already spent hours researching this topic. I have even read the Reddit forum where people describe how they "clicked" (most posts seem the same, and so do all replies, it's a bit creepy). Am I supposed to listen to the guided meditation, or watch yet another advertising video, or...?

Replies from: niceguyanon, ingive
comment by niceguyanon · 2016-11-29T13:37:25.945Z · LW(p) · GW(p)

I have already spent hours researching this topic.

I applaud your effort and hope the hours you spent mean hours saved for others.

Replies from: Viliam
comment by Viliam · 2016-11-29T14:17:18.038Z · LW(p) · GW(p)

Thanks! I feel weird about this whole thing. Similar to how I feel weird about Gleb.

I don't want to make a full conclusion for others (that feels like too much responsibility), but at least I can point them directly towards the important parts, so they don't have to google and watch promotional videos.

Here is the good part -- a PDF booklet with some useful advice on instrumental rationality. It would make a good LW article, if some parts were removed.

magnet:?xt=urn:btih:e3ade7cdccc4aba33789686b9b9d765d7f14ae7b&dn=Real+Answers&tr=udp%3A%2F%2Ftracker.leechers-paradise.org%3A6969&tr=udp%3A%2F%2Fzer0day.ch%3A1337&tr=udp%3A%2F%2Fopen.demonii.com%3A1337&tr=udp%3A%2F%2Ftracker.coppersurfer.tk%3A6969&tr=udp%3A%2F%2Fexodus.desync.com%3A6969

Here are the bad parts -- the wiki (just read the main page) and the reddit forum (click on a few random articles; they all feel the same, and the responses all feel the same).

The rest is just marketing, hyping the contents of the wiki and of the book over and over again.

This is my conclusion after ~10 hours of looking at various materials; maybe there is something more that I missed, and I didn't listen to the podcasts. This all seems to be a one-man show: a guy whose main strength is making popular YouTube videos, and who was a successful poker player in the past.

comment by ingive · 2016-11-29T14:24:52.282Z · LW(p) · GW(p)

This is a Less Wrong article on a similar topic: An Intuitive Explanation of Solomonoff Induction. I hope you understand why I am not impressed with Athene's version.

I understand, that article looks interesting.

Having to stay somewhere for a few days doesn't sound to me like a regular LW meetup. I guess it was either a CFAR workshop, or an event like this.

I think it was an event like you linked.

(Uhm, this is probably not the case, but asking anyway to make sure -- "they did reach out regarding their group on here I think" does not refer to this, right? Because that's the only recent attempt to reach out here that I remember.)

The message at the bottom does look a lot like Athene's writing, but I don't really understand what the first one is about. They have mentioned they do sports betting. Athene doesn't care about lying, as long as it's for the greater good, so I speculate they have some way to make money at betting, for example, and thus thought about reaching out to some high-IQ people to off-load work or have them join. But this is only speculation. I think they reached out in general around December, explaining their "charity" organization and how they offer free food, housing, etc. But maybe not.

Heh, sometimes I have a similar impression. On the other hand, some things take time. A few years ago, superintelligent AI was a completely fringe topic... now it's popular in media, and random people share articles about it on Facebook. So either the founders of LW caused this trend, or at least were smart enough to predict it. That requires some work. MIRI and CFAR have funding, which is also not simple to achieve. They sometimes publish scientific articles. If I remember correctly, they were also involved in creating the effective altruist movement. (Luke Muehlhauser, the former executive director of MIRI, now works for GiveWell.) There is probably more, but I think this already qualifies as more than "intellectual masturbation". Athene has an impressive personal track record. I admit that part. But the whole thing about "clicking" is a separate claim. (Steve Jobs was an impressive person; that doesn't prove his beliefs in reincarnation are correct.)

That makes sense. By the way, gamingforgood has a 10X multiplier on donations to their newborn survival programs; how likely is it that this is more efficient than GiveWell's top charities?

Any specific part of it? I have already spent hours researching this topic. I have even read the Reddit forum where people describe how they "clicked" (most posts seem the same, and so do all replies, it's a bit creepy). Am I supposed to listen to the guided meditation, or watch yet another advertising video, or...?

I was thinking more about the design, but sure, the guided meditations might be good if you are doing a self-experiment with the 4 steps and so forth; they're nothing special, just what you'd expect, I suppose.

Replies from: Viliam
comment by Viliam · 2016-11-29T15:52:14.205Z · LW(p) · GW(p)

The message at the bottom does look a lot like Athene's writing

Yeah, the "i was a great poker player" part... I missed that previously.

OK, now I am quite confused. Here are things said by "hans_jonsson" (Athene?) in that thread:

i didnt wish to post my message very publicly cause it embarrassing and awkwardly like i was bragging when i wished to be honest, and hopefully show that im competent.

So, to put things together... a guy who according to Wikipedia is a popular YouTube celebrity, raised millions for charity, and recently started the "Logic Nation"... was contacting individual LW members through private messages, instead of posting in an open thread, because posting in an open thread would be awkward bragging... and the best way to show that he was competent, was to pseudonymously post something that closely resembles popular scams.

My brain has a problem processing so much logic.

(Could LessWrong really be so frightening for outsiders? So much that even starting your own cult feels less awkward and more humble than posting in an LW open thread...)

I admit I am out of my depth here. There seems to be evidence flying in both directions, and I am confused. (Still leaning towards "scam", though.)

Replies from: ingive
comment by ingive · 2016-11-29T16:47:10.151Z · LW(p) · GW(p)

Other than that, the poor grammar and spelling are typical of Athene, as is the lack of paragraphs; the further you scroll in his Reddit profile, the worse his grammar becomes: https://www.reddit.com/user/Chiren :)

He also wrote this in the thread: "and i may very well have some mental issues in regards to quite a few things"

I don't take things too seriously. When you type or talk, you can push buttons and see how the community reacts in order to forward your agenda. The best option after making a mistake might not be to say "I am X and I should've posted in the open thread"; instead, for example, you say it's embarrassing and awkward, and eventually that you "may very well have mental issues".

My brain has a problem processing so much logic.

Well, yes, maybe.

(Could LessWrong really be so frightening for outsiders? So much that even starting your own cult feels less awkward and more humble than posting in an LW open thread...) I admit I am out of my depth here. There seems to be evidence flying in both directions, and I am confused. (Still leaning towards "scam", though.)

Observing what was done objectively, it seems it was as it was said: to keep it private? I don't really see how it leans towards scam; you might simply have been wrong all along. The first message I don't fully understand.

Replies from: Viliam
comment by Viliam · 2016-11-29T21:44:53.343Z · LW(p) · GW(p)

Oh, this is pure gold! :D

Two months ago, on Athene's Reddit forum:

Athene: "This stuff sounds so much like a scam if i see anything like this again you are permabanned. If you want to help put concrete info and don't make it sound so dodgy or have to contact you or whatever."

Some rando: "In what way does it sound like a scam? I'm not selling anything and I'm not asking them to sign up anywhere. Just to simply PM me so we can chat about it. I didn't disclose details because I didn't want to start people on a wild goose chase trying to do it when they aren't capable. I thought if I could help a few people who have the right mindset become more financially stable, they'd be able to make better use of what you teach."

(then the post was removed, presumably by Athene)

For context, this is Athene on LW, nine months ago:

Other rando: "Act publicly, especially when it includes asking members to participate in financial transactions. It is your insisting to work behind the courtains that seems fishy to me."

Athene: "why should i ask publicly when asking personal questions about personal decisions? im insisting to work behind the curtains? when did i insist, and why should i ask publicly? ... why would i change a message that i wrote as perfectly as i could? ... my priorities are to as fast as possible get someone intelligent with the right priorities educated as well as donate current money the most effecient way possible."

Karma is a bitch.

Replies from: ingive
comment by ingive · 2016-11-30T10:46:29.375Z · LW(p) · GW(p)

I remember that, pretty funny. Maybe he learned from LW and sub- and consciously understood it, and responded the same way 7 months later. :) Now, whether it is him who posted here is simply speculation, but I think it's a 60-70% probability.

comment by NatashaRostova · 2016-11-28T20:08:32.605Z · LW(p) · GW(p)

You can't just classify things as stupid because you think they are stupid, and what you think is stupid is true because you think you're rational.

The idea that 'not being vegan' or 'smoking' are stupid is silly.

Replies from: Dagon, ingive
comment by Dagon · 2016-11-30T17:15:47.515Z · LW(p) · GW(p)

I agree that "stupid" is a bad label for the clustering which includes those kinds of behaviors, but I don't agree if you're saying that smoking and meat-eating are usually instrumentally rational choices for common human desires.

Stupid implies either incorrect logic or lack of consideration of an action. For many humans, these behaviors are neither one. They're some combination of weakness (knowing the better choice, but failing to override the monkey brain) and value differences (preferring current/near experienced pleasures over later/distant pain).

note: I eat lots of meat. I also play lots of video games and read lots of fiction, none of which is purely rationally motivated. I don't smoke or vape, but that's also not rationally motivated - I just find it disgusting.

Replies from: NatashaRostova
comment by NatashaRostova · 2016-11-30T17:47:04.155Z · LW(p) · GW(p)

It doesn't really matter what either of us think. If someone eats too much meat, and wishes they could stop, but can't, then for a certain function we can claim it's irrational in their achievement of that goal. If I eat a fair amount of meat because I work out, because it helps me get my weight-lifting goals, it's rational for my objective.

What's your objective? Well, my main point is really just that we can't abstract these sorts of things, they are empirical. "Is X irrational (implied: for all people under all conditions)?" is about as meaningful as "Does this chair really exist?"

Replies from: ingive
comment by ingive · 2016-12-01T10:35:04.952Z · LW(p) · GW(p)

It doesn't really matter what either of us think. If someone eats too much meat, and wishes they could stop, but can't, then for a certain function we can claim it's irrational in their achievement of that goal.

Exactly; it's better to look at the evidence and objective reality, and see what's more likely to be efficient. With the latter statement you presume that your achievement of a goal is accurate.

If I eat a fair amount of meat because I work out, because it helps me get my weight-lifting goals, it's rational for my objective.

I hope that this example is simply that, an example; eating meat is not necessary for a positive nitrogen balance, muscle hypertrophy, or strength. It might have a slight advantage, but at that point you'd have to assume you're already doing everything else efficiently and your genetics are on par. Very unlikely.

What's your objective? Well, my main point is really just that we can't abstract these sorts of things, they are empirical. "Is X irrational (implied: for all people under all conditions)?" is about as meaningful as "Does this chair really exist?"

You realize you are biased and not in line with the objective reality of things, where your desires can be replaced and come from a certain place for a reason.

comment by ingive · 2016-11-29T01:19:28.933Z · LW(p) · GW(p)

If you define stupidity as a set of rules we use that ensure a problem takes longer than chance to solve, or is never solved, and that are nevertheless pursued with alacrity and enthusiasm, then it's stupid.

I don't know what the problem to be solved boils down to in the context of this definition; maybe evolving as a super-organism, although that is not an end. General problem-solving seems more applicable.