Open thread, Jan. 16 - Jan. 22, 2017

post by MrMind · 2017-01-16T07:52:56.197Z · LW · GW · Legacy · 133 comments

If it's worth saying, but not worth its own post, then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should start on Monday, and end on Sunday.

4. Unflag the two options "Notify me of new top level comments on this article" and "

133 comments

Comments sorted by top scores.

comment by Brillyant · 2017-01-16T16:44:13.260Z · LW(p) · GW(p)

My "RECENT ON RATIONALITY BLOGS" section on the right sidebar is blank.

If this isn't just me, and remains this way for long, I predict LW traffic will drop markedly as I primarily use LW habitually as a way to access SSC, and I'd bet my experience is not unique in this way.

Replies from: The_Jaded_One, Vaniver
comment by The_Jaded_One · 2017-01-16T16:49:03.418Z · LW(p) · GW(p)

Maybe you're just not rational enough to be shown that content? I see like 10 posts there.

MIRI has invented a proprietary algorithm that uses the third derivative of your mouse cursor position and click speed to predict your calibration curve, IQ and whether you would one-box on Newcomb's problem with a correlation of 95%. LW mods have recently combined those into an overall rationality quotient which the site uses to decide what level of secret rationality knowledge you are permitted to see.

Maybe you should do some debiasing, practice being well-calibrated, read the sequences and try again later?

EDIT: Some people seem to be missing that this is intended as humor ............

Replies from: Manfred, Brillyant
comment by Manfred · 2017-01-16T18:09:25.899Z · LW(p) · GW(p)

It's a shame downvoting is temporarily disabled.

Replies from: The_Jaded_One
comment by The_Jaded_One · 2017-01-20T01:36:19.369Z · LW(p) · GW(p)

Why does everyone want to downvote everything, ever!? Seriously, lighten up!!!

Replies from: Elo
comment by Elo · 2017-01-20T03:31:36.623Z · LW(p) · GW(p)

No, some things would benefit from being voted down out of existence.

Replies from: The_Jaded_One
comment by The_Jaded_One · 2017-01-20T07:35:29.859Z · LW(p) · GW(p)

Yes, I totally agree. In the last few weeks, I have seen some totally legit targets for being on -10 and not visible unless you click on them, such as the 'click' posts, repetitive spam about that other website, probably the weird guy who just got banned from the open thread too.

However, I have also seen people advocate using mass downvoting on an OK-but-not-great article on cults that they just disagree with, and now someone wants to downvote to oblivion a joke in the open thread. Why? Is humor banned?

There is a legitimate middle ground between toxicity and brilliance.

Replies from: Elo
comment by Elo · 2017-01-20T08:09:08.805Z · LW(p) · GW(p)

There is a legitimate middle ground between toxicity and brilliance.

Agreed.

I think humour is a mixed bag. Sometimes good and sometimes bad. In my ideal situation there would be a place for humour to happen where people can choose to go, or choose not to go. Humour should exist but mixing it in with everything else is not always great.

comment by Brillyant · 2017-01-16T17:15:57.732Z · LW(p) · GW(p)

I see like 10 posts there.

Perhaps you are looking at the "RECENT POSTS" section rather than the section I mentioned?

Maybe you should do some debiasing, practice being well-calibrated, read the sequences and try again later?

I'll work on this.

Maybe you could work on reading?

Replies from: The_Jaded_One
comment by The_Jaded_One · 2017-01-16T17:23:49.440Z · LW(p) · GW(p)

No it's definitely "RECENT ON RATIONALITY BLOGS" section ;)

comment by Vaniver · 2017-01-16T20:23:34.775Z · LW(p) · GW(p)

My "RECENT ON RATIONALITY BLOGS" section on the right sidebar is blank.

If this isn't just me, and remains this way for long, I predict LW traffic will drop markedly as I primarily use LW habitually as a way to access SSC, and I'd bet my experience is not unique in this way.

It looks that way to me as well, and I don't think that should be the case. I'll investigate what's up.

Replies from: Vaniver
comment by Vaniver · 2017-01-16T20:37:24.963Z · LW(p) · GW(p)

On an initial pass, the code hasn't been updated in a month, so I doubt that's a cause. If you look at the list of feedbox urls here, two of them seem like they're not working (the Givewell one and the CFAR one).

It's not clear to me yet how Google's feed object made here works; it looks like we feed it a url, then try to load it in a way that handles errors. But if it checks the URL ahead of the load, that might error out in a way that breaks the feedbox.

(The page also has an Uncaught Error: Module: 'feeds' not found! which I'm not sure how to interpret yet, but makes me more suspicious of that region.)
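A minimal sketch of how one might probe each feedbox URL's health (the URLs below are placeholders, not the actual feedbox list):

    # Sketch: probe each feed URL and report its HTTP status.
    # FEED_URLS are placeholders standing in for the real feedbox list.
    import urllib.request
    import urllib.error

    FEED_URLS = [
        "https://slatestarcodex.com/feed/",      # example entry
        "https://example.org/feeds/broken.xml",  # placeholder for a dead feed
    ]

    def check_feed(url, timeout=10):
        """Return (url, status_or_error) for a single feed."""
        req = urllib.request.Request(url, headers={"User-Agent": "feed-checker"})
        try:
            with urllib.request.urlopen(req, timeout=timeout) as resp:
                return url, resp.status
        except urllib.error.HTTPError as e:
            return url, "HTTP %d" % e.code
        except (urllib.error.URLError, OSError) as e:
            return url, "error: %s" % e

    if __name__ == "__main__":
        for url, status in map(check_feed, FEED_URLS):
            print(status, url)

A dead or redirected feed shows up as a non-200 status or an error, which at least separates "feed source is broken" from "feedbox code is broken".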

Replies from: morganism, Vaniver
comment by morganism · 2017-01-16T20:55:34.574Z · LW(p) · GW(p)

Both NoScript and Disconnect blockers block those. I still have to whitelist VigLink every time I come here, and I can't see lots of features and editing handles if I haven't gone to Reddit and whitelisted it before visiting here....

comment by Vaniver · 2017-01-16T20:53:59.482Z · LW(p) · GW(p)

So, we use google.feeds.Feed(url) to manage this. If you go to the docs page for that, you find:

This API is officially deprecated and will stop working after December 15th, 2016. See our deprecation policy in our Terms of Service for details.
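Since the loader itself is going away, the fix presumably has to bypass google.feeds entirely and fetch/parse each feed directly. A rough sketch of that direction, in Python rather than the site's JavaScript, handling plain RSS 2.0 only (Atom would need namespace handling):

    # Sketch: fetch an RSS 2.0 feed and extract item titles and links,
    # as a stand-in for the deprecated google.feeds.Feed loader.
    import urllib.request
    import xml.etree.ElementTree as ET

    def fetch_rss_items(url, limit=5, timeout=10):
        """Return up to `limit` (title, link) pairs from a plain RSS 2.0 feed."""
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            root = ET.fromstring(resp.read())
        items = []
        for item in root.findall(".//item")[:limit]:
            title = item.findtext("title", default="(no title)")
            link = item.findtext("link", default="")
            items.append((title, link))
        return items

    # Example usage with a placeholder URL:
    # for title, link in fetch_rss_items("https://example.org/feed.xml"):
    #     print(title, "->", link)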

comment by Vaniver · 2017-01-17T18:47:38.333Z · LW(p) · GW(p)

Flinter has been banned after a private warning. I'm deleting the comment thread that led to the ban because it's an inordinate number of comments cluttering up a welcome thread.

Users are reminded that responding to extremely low-quality users creates more extremely low-quality comments, and extended attempts to elicit positive communication almost never work. Give up by your third comment at the latest, and probably by your second.

Replies from: Viliam, Vaniver
comment by Viliam · 2017-01-18T09:52:25.119Z · LW(p) · GW(p)

From Flinter's comment:

The mod insulted me, and Nash.

While I respect your decision as a moderator to ban Flinter, insulting Nash is a horrible thing to do and you should be ashamed of yourself!

/ just kidding

Also, someone needs to quickly make a screenshot of the deleted comment threads, and post them as new LW controversy on RationalWiki, so that people all around the world are properly warned that LW is pseudoscientific and disrespects Nash!

/ still kidding, but if someone really does it, I want to have a public record that I had this idea first

Replies from: drethelin
comment by drethelin · 2017-01-20T01:06:45.218Z · LW(p) · GW(p)

This is why we need downvotes.

comment by Vaniver · 2017-01-17T19:00:49.174Z · LW(p) · GW(p)

As the Churchill quote goes:

A fanatic is one who can't change his mind and won't change the subject.

Less Wrong is not, and will not be, a home for fanatics.

Replies from: TiffanyAching
comment by TiffanyAching · 2017-01-17T19:06:53.555Z · LW(p) · GW(p)

Fair enough. Kindest thing to do really. I think people have a hard time walking away even when the argument is almost certainly going to be fruitless.

comment by Lumifer · 2017-01-17T03:29:57.720Z · LW(p) · GW(p)

For general information -- since Flinter is playing games to get people to follow the steps he suggests, it might be useful to read some of his other writings on the 'net to cut to the chase. He is known as Juice/rextar4444 on Twitter and Medium and as JokerPravis on Steemit.

Replies from: tut
comment by tut · 2017-01-18T12:23:10.645Z · LW(p) · GW(p)

Since we no longer have downvotes, might it be a good idea for the mods to start banning cult spammers like ingive and flinter?

comment by JamesFaville (elephantiskon) · 2017-01-16T21:17:17.123Z · LW(p) · GW(p)

At what age do you all think people have the greatest moral status? I'm tempted to say that young children (maybe aged 2-10 or so) are more important than adolescents, adults, or infants, but don't have any particularly strong arguments for why that might be the case.

Replies from: knb, btrettel, Elo, ChristianKl
comment by knb · 2017-01-17T01:46:11.019Z · LW(p) · GW(p)

I don't think children actually have greater moral status, but harming children or allowing children to be harmed carries more evidence of depraved/dangerous mental state because it goes against the ethic of care we are supposed to naturally feel toward children.

comment by btrettel · 2017-01-17T01:55:13.210Z · LW(p) · GW(p)

If you think in terms of QALYs, that could be one reason to prefer interventions targeted at children. Your average child has more life to live than your average adult, so if you permanently improve their quality of life from 0.8 QALYs per year to 0.95 QALYs per year, that would result in a larger QALY change than the same intervention on the adult.

This argument has numerous flaws. One which comes to mind immediately is that many interventions are not so long-lasting, so adults and children would presumably gain about the same. It is also tied to particular forms of utilitarianism one might not subscribe to.
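A toy version of the arithmetic, with made-up remaining-life-expectancy numbers, just to show where the difference comes from:

    # Toy QALY comparison: the same permanent quality-of-life improvement
    # (0.8 -> 0.95 per year) applied to a child vs. an adult.
    # Remaining-life-year figures are made-up round numbers.
    def qaly_gain(quality_before, quality_after, remaining_years):
        return (quality_after - quality_before) * remaining_years

    child_gain = qaly_gain(0.8, 0.95, remaining_years=70)  # 0.15 * 70 = 10.5 QALYs
    adult_gain = qaly_gain(0.8, 0.95, remaining_years=40)  # 0.15 * 40 = 6.0 QALYs
    print(child_gain, adult_gain)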

comment by Elo · 2017-01-18T00:54:47.418Z · LW(p) · GW(p)

This may be an odd position, counter to the usual one.

I think that adults are more morally valuable because they have proven their ability to not be murderous, etc. (or, possibly, proven that they're not going to be the next Gandhi). Children could go either way.

Replies from: TiffanyAching
comment by TiffanyAching · 2017-01-18T01:30:56.073Z · LW(p) · GW(p)

Could you explain this a little more? I don't quite see your reasoning. Leaving aside the fact that "morally valuable" seems too vague to me to be meaningfully measured anyway, adults aren't immutably fixed at a "moral level" at any given age. Andrei "Rostov Ripper" Chikatilo didn't take up murdering people until he was in his forties. At twenty, he hadn't proven anything.

Bob at twenty years old hasn't murdered anybody, though Bob at forty might. Now you can say that we have more data about Bob at twenty than we do about Bob at ten, and therefore are able to make more accurate predictions based on his track record, but by that logic Bob is at his most morally valuable when he's gasping his last on a hospital bed at 83, because we can be almost certain at that point that he's not going to do anything apart from shuffle off the mortal coil.

And if "more or less likely to commit harmful acts in future" is our metric of moral value, then children who are abused, for example, are less morally valuable than children who aren't, because they're more likely to commit crimes. That's not intended to put any words in your mouth by the way, I'm just saying that when I try to follow your reasoning it leads me to weird places. I'd be interested to see you explain your position in more detail.

Replies from: Viliam
comment by Viliam · 2017-01-18T09:47:13.389Z · LW(p) · GW(p)

children who are abused, for example, are less morally valuable than children who aren't, because they're more likely to commit crimes

That reminds me of a scene in Psycho-Pass where...

...va gur svefg rcvfbqr, n ivpgvz bs n ivbyrag pevzr vf nyzbfg rkrphgrq ol gur cbyvpr sbepr bs n qlfgbcvna fbpvrgl, onfrq ba fgngvfgvpny ernfbavat gung genhzngvmrq crbcyr ner zber yvxryl gb orpbzr cflpubybtvpnyyl hafgnoyr, naq cflpubybtvpnyyl hafgnoyr crbcyr ner zber yvxryl gb orpbzr pevzvanyf va gur shgher.

(rot 13)

Replies from: TiffanyAching
comment by TiffanyAching · 2017-01-18T19:05:55.475Z · LW(p) · GW(p)

Yes, that's the sort of idea I was getting at - though not anything so extreme.

Of course I don't really think Elo was saying that at all anyway, I'm not trying to strawman. I'd just like to see the idea clarified a bit.

(We use substitution ciphers as spoiler tags? Fancy!)

Replies from: Elo
comment by Elo · 2017-01-19T20:52:13.559Z · LW(p) · GW(p)

I am not keen on a dystopian thought police. We have at the moment a lot more care given to children than to adults - for example, children's hospitals vs. adults' hospitals.

The idea is not drawn out to further conclusions as you have done, but I had to ask why we do the thing where we care about children's hospitals more than adults' hospitals, and generally decided that I don't like the way it is.

I believe the common behaviour of liking children more comes out of some measure of "they are cute", and is similar to why we like baby animals more than fully grown ones: simply because they have a babyness to them. If that is the case then it's a relatively unfounded belief and a bias that I would rather not carry.

Adults are (probably) productive members of society; we can place moral worth on that life as it stands in the relatively concrete present, not on the potential that you might be measuring when a child grows up. Anyone could wake up tomorrow and try to change the world, or wake up tomorrow and try to lie around on the beach. What causes people to change suddenly? Not part of this puzzle. I am confident that the snapshot gives a reasonably informative view of someone's worth. They are working hard in EA? That's the moral worth they present when they reveal with their actions what they care about.

What about old people? I don't know... Have not thought that far ahead. Was dealing with the cute-baby bias first. I suppose they are losing worth to society as they get less productive. And at the same time they have proven themselves worthy of being held/protected/cared for (or maybe they didn't).

Replies from: TiffanyAching
comment by TiffanyAching · 2017-01-19T21:26:49.525Z · LW(p) · GW(p)

The urge to protect and prioritize children is partly biological/evolutionary - they have to be "cute" otherwise who'd put up with all the screaming and poop long enough to raise them to adulthood? The urge to protect and nurture them is a survival-of-the-species thing. Baby animals are cute because they resemble human babies - disproportionately big heads, big eyes, mewling noises, helplessness.

But from a moral perspective I'd argue that there is a greater moral duty to protect and care for children because they can neither fend nor advocate for themselves effectively. They're largely at the mercy of their carers and society in general. An adult may bear some degree of responsibility for his poverty, for example, if he has made bad choices or squandered resources. His infant bears none of the responsibility for the poverty but suffers from it nonetheless and can do nothing to alleviate it. This is unjust.

There's also the self-interest motive. The children we raise and nurture now will be the adults running the world when we are more or less helpless and dependent ourselves in old age.

And there's the future-of-humanity as it extends past your own lifetime too, if you value that.

But of course these are all points about moral duty rather than moral value. I'm fuzzier on what moral value means in this context. For example the difference in moral value between the young person who is doing good right now and the old person who has done lots of good over their life, but isn't doing any right now because that life is nearly over and they can't. Does ability vs. desire to do good factor into this? The child can't do much and the end-of-life old person can't do much, though they may both have a strong desire to do good. Only the adult in between can match the ability to the will.

Replies from: Elo
comment by Elo · 2017-01-20T02:35:19.294Z · LW(p) · GW(p)

Yes. I agree with most of what you have said.

I'd argue that there is a greater moral duty to protect and care for children because they can neither fend nor advocate for themselves effectively.

I would advocate a "do no harm" attitude, rather than a "provide added benefit" one just because they are children. I wouldn't advocate neglecting children, but I wouldn't put them ahead of adults.

As for what we should do: I don't have answers to these questions. I suspect it comes down to how each person weighs the factors in their own head, and consequently how they want the world to be balanced.

Just like some people care about animal suffering and others do not. (I like kids, definitely, but moral value is currently subjectively determined)

comment by ChristianKl · 2017-01-17T06:58:13.422Z · LW(p) · GW(p)

It depends very much on the context. In many instances where we want to save lives, QALYs are a good metric. In other cases, like deciding who should be able to sit down in a bus, the metric is worthless.

comment by morganism · 2017-01-16T21:02:24.945Z · LW(p) · GW(p)

Is there a simple coding trick to allow this blockchain micropayment scheme into Reddit-based sites?

https://steemit.com/facebook2steemit/@titusfrost/in-simple-english-for-my-facebook-friends-how-and-why-to-join-steemit

This seems like an interesting way to get folks to write deeper and more thoughtful articles, by motivating them with some solid reward. And if something does go viral, it can allow some monetization without resorting to ad-based sites....

BTW, there was a link to a simple markdown guide on GitHub in there

https://guides.github.com/features/mastering-markdown/

comment by Thomas · 2017-01-16T08:03:46.873Z · LW(p) · GW(p)

Another math problem:

https://protokol2020.wordpress.com/2017/01/11/and-yet-another-geometry-problem/

Replies from: Luke_A_Somers, gjm, Luke_A_Somers
comment by Luke_A_Somers · 2017-02-06T14:43:53.529Z · LW(p) · GW(p)

OK, I had dropped this for a while, but here are my thoughts. I haven't scrubbed everything that could be seen through rot13 because it became excessively unreadable

For Part 1: gur enqvhf bs gur pragre fcurer vf gur qvfgnapr orgjrra bar bs gur qvnzrgre-1/2 fcurerf naq gur pragre.

Gur qvfgnapr sebz gur pragre bs gur fvqr-fcurer gb gur pragre bs gur birenyy phor vf fdeg(A)/4. Fhogenpg bss n dhnegre sbe gur enqvhf bs gur fcurer, naq jr unir gur enqvhf bs gur pragre fcurer: (fdeg(A)-1)/4. Guvf jvyy xvff gur bhgfvqr bs gur fvqr-1 ulcrephor jura gung'f rdhny gb n unys, juvpu unccraf ng avar qvzrafvbaf. Zber guna gung naq vg jvyy rkgraq bhgfvqr.

Part 2: I admit that I didn't have the volume of high-dimensional spheres memorized, but it's up on Wikipedia, and from there it's just a matter of graphing and seeing where the curve crosses 1, taking into account the radius formula derived above. I haven't done it, but will eventually.

Part 3 looks harder and I'll look at it later.

Replies from: Thomas
comment by Thomas · 2017-02-06T15:16:11.163Z · LW(p) · GW(p)

Part 1 is good.

comment by gjm · 2017-01-16T13:48:48.903Z · LW(p) · GW(p)

dhrfgvba bar

Qvfgnapr sebz prager bs phor gb prager bs "pbeare" fcurer rdhnyf fdeg(a) gvzrf qvfgnapr ba bar nkvf = fdeg(a) bire sbhe. Enqvhf bs "pbeare" fcurer rdhnyf bar bire sbhe. Gurersber enqvhf bs prageny fcurer = (fdeg(a) zvahf bar) bire sbhe. Bs pbhefr guvf trgf nf ynetr nf lbh cyrnfr sbe ynetr a. Vg rdhnyf bar unys, sbe n qvnzrgre bs bar, jura (fdeg(a) zvahf bar) bire sbhe rdhnyf bar unys <=> fdeg(a) zvahf bar rdhnyf gjb <=> fdeg(a) rdhnyf guerr <=> a rdhnyf avar.

dhrfgvba gjb

Guvf arire unccraf. Hfvat Fgveyvat'f sbezhyn jr svaq gung gur nflzcgbgvpf ner abg snibhenoyr, naq vg'f rnfl gb pbzchgr gur svefg ubjrire-znal inyhrf ahzrevpnyyl. V unira'g gebhoyrq gb znxr na npghny cebbs ol hfvat rkcyvpvg obhaqf rireljurer, ohg vg jbhyq or cnvashy engure guna qvssvphyg.

dhrfgvba guerr

Abar. Lbh pnaabg rira svg n ulcrefcurer bs qvnzrgre gjb orgjrra gjb ulcrecynarf ng qvfgnapr bar, naq gur ulcrephor vf gur vagrefrpgvba bs bar uhaqerq fcnprf bs guvf fbeg.

Replies from: Thomas
comment by Thomas · 2017-01-16T13:59:41.164Z · LW(p) · GW(p)

One: Correct

Two: Incorrect

Three: Correct

Replies from: gjm
comment by gjm · 2017-01-16T15:05:23.783Z · LW(p) · GW(p)

Oooh, I dropped a factor of 2 in the second one and didn't notice because it takes longer than you'd expect before the numbers start increasing. Revised answer:

dhrfgvba gjb

Vs lbh qb gur nflzcgbgvpf pbeerpgyl engure guna jebatyl, gur ibyhzr tbrf hc yvxr (cv gvzrf r bire rvtug) gb gur cbjre a/2 qvivqrq ol gur fdhner ebbg bs a. Gur "zvahf bar" va gur sbezhyn sbe gur enqvhf zrnaf gung gur nflzcgbgvp tebjgu gnxrf ybatre gb znavsrfg guna lbh zvtug rkcrpg. Gur nafjre gb gur dhrfgvba gheaf bhg gb or bar gubhfnaq gjb uhaqerq naq fvk, naq V qb abg oryvrir gurer vf nal srnfvoyr jnl gb trg vg bgure guna npghny pnyphyngvba.

Replies from: Thomas
comment by Thomas · 2017-01-16T15:19:43.473Z · LW(p) · GW(p)

Correct.

I gave some Haskell code as a comment over there on my blog, under the posted problem.

1206 dimensions is the smallest number. One can experiment with other values.
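Not the Haskell from the blog, but a Python sketch of the same computation (using the center-sphere radius from the rot13'd solutions above and comparing the n-ball volume against the unit cube's volume of 1) should reproduce it:

    # Sketch: smallest dimension n where the central sphere's volume exceeds
    # the unit hypercube's volume of 1. Radius from the solutions above:
    # r = (sqrt(n) - 1) / 4, and
    # log V_n(r) = (n/2)*log(pi) - lgamma(n/2 + 1) + n*log(r).
    import math

    def log_ball_volume(n, r):
        return (n / 2) * math.log(math.pi) - math.lgamma(n / 2 + 1) + n * math.log(r)

    n = 2
    while True:
        r = (math.sqrt(n) - 1) / 4
        if log_ball_volume(n, r) > 0:  # i.e. volume > 1
            print(n)  # per the thread, this should print 1206
            break
        n += 1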

comment by Luke_A_Somers · 2017-01-16T13:22:40.713Z · LW(p) · GW(p)

On the face of it, the premise seems wrong. For any finite number of dimensions, there will be a finite number of objects in the cube, which means you aren't getting any infinity shenanigans - it's just high-dimensional geometry. And in no non-shenanigans case will the hypervolume of a thing be greater than a thing it is entirely inside of.

Replies from: Thomas
comment by Thomas · 2017-01-16T14:33:25.079Z · LW(p) · GW(p)

And in no non-shenanigans case will the hypervolume of a thing be greater than a thing it is entirely inside of.

Are you sure it's entirely inside?

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2017-01-16T15:45:27.340Z · LW(p) · GW(p)

OK, that's an angle (pun intended) I didn't catch upon first consideration.

Replies from: gjm
comment by gjm · 2017-01-16T17:37:55.326Z · LW(p) · GW(p)

High-dimensional cubes are really thin and spiky.

Replies from: Thomas
comment by Thomas · 2017-01-17T09:45:16.471Z · LW(p) · GW(p)

They are counterintuitive. A lot is counterintuitive in higher dimensions - especially something I may write about in the future.

This 1206 business is even Googleable - which I learned only after I had calculated the actual number 1206.

https://sbseminar.wordpress.com/2007/07/21/spheres-in-higher-dimensions/

comment by Flinter · 2017-01-16T17:44:34.768Z · LW(p) · GW(p)

I wanted to make a discussion post about this but apparently I need 2 karma points and this forum is too ignorant to give them out. I'll post here and I guess probably be done with this place since it's not even possible for me to attempt to engage in meaningful discussion. I'd also like to make the conjecture that this place cannot be based on rationality with the rule sets that are in place for joining - and I don't understand why that isn't obvious.

Anyways, here is what would have been my article for discussion:

"I am not perfectly sure how this site has worked (although I skimmed the "tutorials") and I am notorious for not understanding systems as easily and quickly as the general public might. At the same time I suspect a place like this is for me, for what I can offer but also for what I can receive (ie I intend on (fully) traversing the various canons).

I also value compression and time in this sense, and so I think I can propose a subject that might serve as an "ideal introduction" (I have an accurate meaning for this phrase I won't introduce atm).

I've read a lot of posts/blogs/papers that are arguments founded on certain difficulties, where the observation and admission of this difficulty leads the author and the reader (and perhaps the originator of the problem/solution outlines) to defer to some form of a (relative to what will follow) long-winded solution.

I would like to suggest, as a blanket observation and proposal, that most of these difficult problems described, especially on a site like this, are easily solvable with the introduction of an objective and ultra-stable metric for valuation.

I think maybe at first this will seem like an empty proposal. I think then, and also, some will see it as devilry (which I doubt anyone here thinks exists). And I think I will be accused of many of the fallacies and pitfalls that have already been previously warned about in the canons.

That latter point I think might suggest that I might learn well and fast from this post as interested and helpful people can point me to specific articles and I WILL read them with sincere intent to understand them (so far they are very well written in the sense that I feel I understand them because they are simple enough) and I will ask questions.

But I also think ultimately it will be shown that my proposal and my understanding of it doesn't really fall to any of these traps, and as I learn the canonical arguments I will be able to show how my proposal properly addresses them."

Replies from: MrMind, Flinter
comment by MrMind · 2017-01-17T08:25:49.920Z · LW(p) · GW(p)

I wanted to make a discussion post about this but apparently I need 2 karma points and this forum is too ignorant to give them out

People have come here and said: "Hey, I've something interesting to say regarding X, and I need a small amount of karma to post it. Can I have some?" and have been given plenty.
A little reflection and a moderate amount of politeness can go a long way.

Replies from: Flinter
comment by Flinter · 2017-01-17T08:28:09.868Z · LW(p) · GW(p)

Yup but that ruins my first post cause I wanted it to be something specific. So what you are effectively saying is I have to make a sh!t post first, and I think that is irrational. I came here to bring value not be filtered from doing so.

Cheers!

Replies from: MrMind
comment by MrMind · 2017-01-17T08:47:32.443Z · LW(p) · GW(p)

It makes sense from the inside of the community.
The probability of someone posting something of value as their first post is much lower than that of someone posting spam on the front page. So a very low bar to post on the front page is the best compromise between "discourage spammers" and "don't discourage posters who have something valuable to say".

Replies from: Flinter
comment by Flinter · 2017-01-17T08:53:31.733Z · LW(p) · GW(p)

If it filters out Nash's argument, Ideal Money, then it makes no sense and is completely irrational for it.

Think about what you are saying; it's ridiculous.

Are you also unwilling to discuss the content, and simply are stuck on my posting methods, writing, and character?

Replies from: MrMind
comment by MrMind · 2017-01-17T09:45:46.231Z · LW(p) · GW(p)

If it filters out Nash's argument, Ideal Money, then it makes no sense and is completely irrational for it.

Well, since it's an automated process, it filters anything, be it spam, Nash's argument or the words of Omega itself. As I said, it's a compromise. The best we could come up with, so far. If you have a better solution, spell it out.

Are you also unwilling to discuss the content, and simply are stuck on my posting methods, writing, and character?

No, mine was just a suggestion for a way that would allow you to lubricate the social friction I think you're experiencing here. On the other hand, I am reading your posts carefully and will reply when I'm done thinking about them.

Replies from: Flinter
comment by Flinter · 2017-01-17T10:07:49.552Z · LW(p) · GW(p)

Well, since it's an automated process, it filters anything, be it spam, Nash's argument or the words of Omega itself. As I said, it's a compromise. The best we could come up with, so far. If you have a better solution, spell it out.

You are defending irrationality. It filters out the one thing it needs to not filter out. A better solution would be to eliminate it.

No, mine was just a suggestion for a way that would allow you to lubricate the social friction I think you're experiencing here. On the other hand, I am reading your posts carefully and will reply when I'm done thinking about them.

Sigh, I guess we never will address Ideal Money will we. I've already spent all day with like 10 posters, that refuse to do anything but attack my character. Not surprising since the subject was insta-mod'd anyways.

Well, as a last hail mary, I just want to say I think you are dumb for purposefully trolling me like this and refusing to address Nash's proposal. Its John Nash, and he spent his life on this proposal, ya'll won't even read it.

There is no intelligence here, just pompous robots avoiding real truth.

Do you know who Nash is? It took 40 years the first time to acknowledge what he did with his equilibrium work. Its been 20 in regard to Ideal Money...

Replies from: MrMind
comment by MrMind · 2017-01-17T10:36:54.273Z · LW(p) · GW(p)

You are defending irrationality. It filters out the one thing it needs to not filter out. A better solution would be to eliminate it.

I wonder what my failure in communicating my idea is in this case. Let me rephrase my argument in favor of filtering and see if I can get my point across: if we eliminated the filter, the site would be inundated with spam and fake-account posts. By having a filter we block all this, and people willing to pass a small threshold will not be prevented from posting their contributions.

Sigh, I guess we never will address Ideal Money will we

In due time, I will.

I've already spent all day with like 10 posters, that refuse to do anything but attack my character.

That is unfortunate, but you must be prepared to have these discussions over the long run. There are people who come here only once a week or only once every three months. A day can be enough to filter out the most visceral reactions, but here discussions can span days, weeks or years.

Its John Nash, and he spent his life on this proposal, ya'll won't even read it.

I am reading it right now, and exactly because it's Nash I'm reading as carefully as I can.

But what won't fly here is insulting people. Frustration at not being able to communicate your idea is something that we have all felt; after all, communicating clearly is hard. But if you let yourself fall below a certain standard of respect, you will be moderated and possibly even banned. That would allow you to communicate your idea even less.

Replies from: Flinter
comment by Flinter · 2017-01-17T16:40:57.586Z · LW(p) · GW(p)

I wonder what my failure in communicating my idea is in this case. Let me rephrase my argument in favor of filtering and see if I can get my point across: if we eliminated the filter, the site would be inundated with spam and fake-account posts. By having a filter we block all this, and people willing to pass a small threshold will not be prevented from posting their contributions.

Let me communicate to you what I am saying. I bring the most important writing ever known to mankind. Who is the mod that moderated Nash? Where is the intelligence in that? Let's not call that intelligence and try and defend it. Let's call it an error.

In due time, I will.

Cheers! :)

That is unfortunate, but you must be prepared to have these discussions over the long run. There are people who come here only once a week or only once every three months. A day can be enough to filter out the most visceral reactions, but here discussions can span days, weeks or years.

Do you think I am not prepared? I have been at this for about 4 years, I think. I have written hundreds, maybe thousands, of related articles and been on many, many forums and sites discussing it and "arguing" with many, many people.

I am reading it right now, and exactly because it's Nash I'm reading as carefully as I can.

Ah, sincerity!!!!!!!

But what won't fly here is insulting people. Frustration at not being able to communicate your idea is something that we have all felt; after all, communicating clearly is hard. But if you let yourself fall below a certain standard of respect, you will be moderated and possibly even banned. That would allow you to communicate your idea even less.

I have been insulted by nearly every poster that has responded. The mod insulted me, and Nash. I have never been insulted so quickly or so much on any other site.

But if you let yourself fall below a certain standard of respect, you will be moderated and possibly even banned. That would allow you to communicate your idea even less.

Yup, ban the messenger and ignore the message. Why would these people remain ignorant of Nash? How did Nash go 20 years without anyone giving his lectures serious thought?

comment by Flinter · 2017-01-16T18:15:50.625Z · LW(p) · GW(p)

I don't think I should have done what I did to get my first two karma points. I suspect it degrades the quality of the site at a rate in which rationality can't inflate it. But I'll save my reasoning and the discussion of it ftm. I am now able to post my discussion on its own it seems, so I did it.

2x cheers.

Replies from: niceguyanon
comment by niceguyanon · 2017-01-16T18:39:52.294Z · LW(p) · GW(p)

I suspect it degrades the quality of the site...

Your first paragraph venting your frustration at the 2 karma rule was unnecessary, but cool you realized that.

I think this post is fine as an Open Thread comment or as an introduction post. I don't see why it needs to be its own discussion. Plus it seems like you are making an article stating that you will make an article. I don't think you need to do that. Just come right out and say what you have to say.

Replies from: Flinter
comment by Flinter · 2017-01-16T19:20:56.155Z · LW(p) · GW(p)

No you don't understand. I have something valuable to bring but I needed to make my INTRO post an independent one and I was stripped of that possibility by the process.

Replies from: gjm
comment by gjm · 2017-01-16T20:12:51.728Z · LW(p) · GW(p)

You weren't "stripped of that possibility". LW has small barriers to entry here and there; you are expected to participate in other ways and demonstrate your bona fides and non-stupidity before posting articles. Do you think that is unreasonable? Would it be better if all the world's spammers could come along and post LW articles about their sex-enhancing drugs and their exam-cheating services and so on?

Replies from: Flinter
comment by Flinter · 2017-01-16T20:22:12.921Z · LW(p) · GW(p)

Yes, I think it's not reasonable, because it acted counter to the intended use that you are suggesting it was implemented for.

Replies from: gjm
comment by gjm · 2017-01-16T21:24:26.506Z · LW(p) · GW(p)

How?

Replies from: Flinter
comment by Flinter · 2017-01-16T21:29:06.460Z · LW(p) · GW(p)

Because I cannot do what was required to make a proper post, which was to not have to make "shit posts" before I make my initial post (which needed to be independent). So the filter, which is trying to foster rational thinking, ends up filtering out the seeds of it.

Replies from: gjm
comment by gjm · 2017-01-17T00:25:04.375Z · LW(p) · GW(p)

No one's requiring you to make "shit posts".

You have not explained why your post had to be "independent". Perhaps there are reasons -- maybe good ones -- why you wanted your first appearance here to be its posting, but I don't see any reason why it's better for LW for that to be so.

In any case, "X has a cost" is not a good argument against X; there can be benefits that outweigh the costs. I hope you will not be offended, but I personally am quite happy for you to be slightly inconvenienced if the alternative is having LW deluged with posts from spammers.

comment by JacobLW (JacobLiechty) · 2017-01-22T06:01:46.727Z · LW(p) · GW(p)

Was reminded to say hello here!

I'm Jacob Liechty, with a new account after using a less active pseudonym for a while. I've been somewhat active around the rationality community and know a bunch of people therein and throughout. Rationalism and its writings had a pretty deep impact on my life about 5 years ago, and I haven't been able to shake it since.

I currently make video games for a living, but will be keeping my finger on the pulse to determine when to move into more general tech startups, some sort of full-time philanthropy, maybe start an EA nonprofit or metacharity, or who knows. I'm one of the creators of a game called Astroneer, which has been doing very well, which opens up a lot of opportunities but also gives me the responsibility of managing it well for the purposes of giving.

comment by Viliam · 2017-01-19T10:12:24.258Z · LW(p) · GW(p)

Good news: People are becoming more aware that AI is a thing, even mainstream media mention it sometimes.

Bad news: People think that a spellchecker is an example of AI.

¯\_(ツ)_/¯

Replies from: ingive
comment by ingive · 2017-01-19T15:07:54.186Z · LW(p) · GW(p)

I think then you should ask what you can do about it (or do the most effective action).

Replies from: chaosmage
comment by chaosmage · 2017-01-21T23:26:10.035Z · LW(p) · GW(p)

You could give this answer to literally anything.

comment by ingive · 2017-01-18T19:54:45.144Z · LW(p) · GW(p)

a

comment by morganism · 2017-01-17T19:26:20.138Z · LW(p) · GW(p)

I heard Britain just passed a Robotic Rights Act, but only in passing, and can't find anything on it in search, except the original paper by the U.K. Office of Science and Innovation's Horizon Scanning Centre.

"However, it warned that robots could sue for their rights if these were denied to them.

Should they prove successful, the paper said, "states will be obligated to provide full social benefits to them including income support, housing and possibly robo health care to fix the machines over time.""

not to mention slavery, international transportation of sex workers, overtime, right to quit, etc.

http://robots.law.miami.edu/wp-content/uploads/2012/04/Darling_Extending-Legal-Rights-to-Social-Robots-v2.pdf

Anyone writing this up?

comment by moridinamael · 2017-01-16T21:38:45.261Z · LW(p) · GW(p)

Some of us sometimes make predictions with probabilities attached; does anybody here actually try to keep up a legit belief web and do Bayesian updating as the results of predictions come to pass?

If so, how do you do it?

Replies from: ChristianKl
comment by ChristianKl · 2017-01-17T06:55:36.652Z · LW(p) · GW(p)

Some of us sometimes make predictions with probabilities attached; does anybody here actually try to keep up a legit belief web and do Bayesian updating as the results of predictions come to pass?

No, and having a self-consistent belief net might decrease the quality of the beliefs a lot. Having multiple distinct perspectives on an issue was suggested by Tetlock to be very useful.

Replies from: moridinamael
comment by moridinamael · 2017-01-17T14:54:52.950Z · LW(p) · GW(p)

A Bayesian network is explicitly intended to accommodate conflicting perspectives and update the weights of two or more hypotheses based on the result of an observation. There's absolutely no contradiction between "holding multiple distinct perspectives" and "mapping belief dependencies and using Bayesian updating".
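A minimal sketch of the kind of update meant here, with two competing hypotheses held at once and made-up numbers:

    # Two competing hypotheses ("perspectives") kept simultaneously; an
    # observation shifts their weights via Bayes' rule. Numbers are made up.
    priors = {"H1": 0.6, "H2": 0.4}
    likelihoods = {"H1": 0.2, "H2": 0.7}  # P(observation | hypothesis)

    def update(priors, likelihoods):
        unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
        total = sum(unnormalized.values())
        return {h: p / total for h, p in unnormalized.items()}

    posteriors = update(priors, likelihoods)
    print(posteriors)  # roughly {'H1': 0.3, 'H2': 0.7}: H2 gains weight, H1 is still held

Neither perspective gets discarded; the observation just redistributes the weight between them.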

comment by ingive · 2017-01-16T12:04:42.971Z · LW(p) · GW(p)

How would we go about changing human behavior to be more aligned with reality? I was thinking it is undoubtedly the most effective thing to do. Ensure world domination of rationalist, effective altruist and utilitarian ideas. There are two parts to this; I simply mention R, EA and U because they resonate very well with the types of users here, and alignment with reality I explain next. How I expect alignment with reality to work is accepting facts fully, in thinking and emotionally; this includes the uncertainty of facts (because of facts like an interpretation of QM).

One example is that consciousness (qualia, experience) is a tool, not a goal. This is fact: consciousness arose or dissociated (Monistic Idealism) as an evolutionary process. If you deny this, you're denying evolution and are in a death spiral of experience. If you start accepting facts emotionally, rather than fighting emotionally with reality, you merge and paradoxically get what you wanted emotionally. That is an example of aligning with reality. But if you are aware of the paradox you might seek the goal of experience, so be aware.

This is truly the essence of epistemic rationality and it's hard work. Most of us want to deny that experience is not our goal, but that's why we don't care about anything except endless intellectual entertainment. How do we change human behavior to be more aligned with reality? I'm unsure. I'm thinking about locating specific centers of our brains and reducing certain activities which undoubtedly make us less aligned with reality and increase the activations of others.

I think it's important to figure out what drives human behavior to not be aligned with reality and what makes us more aligned. When presented with scientific evidence, why do we not change our behavior? That's the question, and how do we change it?

When we know how to become the most hardcore altruist, then obviously, everyone should as well.

As far as I can tell, P (read sequences) < P (figure this out)

Replies from: Thomas, MrMind, moridinamael, ingive, Luke_A_Somers
comment by Thomas · 2017-01-16T12:09:46.768Z · LW(p) · GW(p)

Ensure world domination of rationalist

A.K.A. Soviet Union and dependent states.

Replies from: Viliam, ingive
comment by Viliam · 2017-01-16T17:21:39.189Z · LW(p) · GW(p)

Ensure world domination of rationalist

A.K.A. Soviet Union and dependent states.

What makes you believe that the ruling class of the Soviet Union was rational? It was a country where Lysenkoism was the official science, and where Kolmogorov was allowed to do math despite being gay only because he made a contribution to military technology.

Replies from: Thomas
comment by Thomas · 2017-01-16T19:00:24.944Z · LW(p) · GW(p)

It was NOT rational. It was declared rational. As in: "we are not going to pursue profit, but we will instead drop the prices, as socialism is going to be a much more rational system".

And many, many more such slogans. Several might even be true.

The social democrats of today still want to implement some of those "rationalizations". The problem is, the world doesn't operate on such rationales.

And this Effective Altruism looks similar to me. If one wants "to do good" for others, he should invest his money wisely. He should employ people, he should establish new businesses with those less fortunate people.

Giving something for nothing is not a very good idea! But using your powers for others to give something for nothing ... is a bad idea. In the name of self-perceived rationality - it's even worse.

Replies from: ingive
comment by ingive · 2017-01-16T20:02:31.069Z · LW(p) · GW(p)

I wrote to align with reality, thus accept facts fully, which includes the uncertainty of facts. There is no alignment with reality in any of what you've said in comparison to mine, so strawman at best.

And this Effective Altruism looks similar to me. If one wants "to do good" for others, he should invest his money wisely. He should employ people, he should establish new businesses with those less fortunate people.

You're implying that "doing good" effectively couldn't consist of investing, employing, or establishing businesses. It's independent of the method as long as it is effective in the context of effective altruistic actions. It makes no difference as long as it's the most effective option with positive expected value.

comment by ingive · 2017-01-16T13:29:02.218Z · LW(p) · GW(p)

Why do you think that?

Replies from: Thomas
comment by Thomas · 2017-01-16T14:12:16.888Z · LW(p) · GW(p)

It was the same rationale. "We know what's the best for everybody else, so we will take the power!"

Besides the fact that those revolutionaries were wrong at the beginning, they purged each other throughout the process, so that the most cunning one was selected. Which was even more wrong, than those early revolutionaries were. Or maybe Stalin was more right than Trotsky, who knows, but it didn't matter very much. Even Lenin was wrong.

But even if Lenin was right, Andropov would still be corrupted.

Replies from: ingive
comment by ingive · 2017-01-16T14:57:54.594Z · LW(p) · GW(p)

I didn't really mean that. It was just setting an emotional stage for the rest of the comment. What do you think of the rest?

Replies from: ZankerH
comment by ZankerH · 2017-01-16T17:11:26.333Z · LW(p) · GW(p)

Having actually lived under a regime that purported to "change human behaviour to be more in line with reality", my prior for such an attempt being made in good faith to begin with is accordingly low.

Attempts to change society invariably result in selection pressures for effectiveness outmatching those for honesty and benevolence. In a couple of generations, the only people left in charge are the kind of people you definitely wouldn't want in charge, unless you're the kind of person nobody wants in charge in the first place.

I'm thinking about locating specific centers of our brains and reducing certain activities which undoubtedly make us less aligned with reality and increase the activations of others.

This is the kind of thinking that, given a few years of unchecked power and primate group competition, leads to mass programs of rearranging people's brain centres with 15th century technology.

Why don't you spend some time instead thinking about how your forced rationality programme is going to avoid the pitfall all others so far fell into, megalomania and genocide? And why are you so sure your beliefs are the final and correct ones to force on everyone through brain manipulation? If we had the technology to enforce beliefs a few centuries ago, would you consider it a moral good to freeze the progress of human thought at that point? Because that's essentially what you're proposing from the point of view of all potential futures where you fail.

Replies from: ingive
comment by ingive · 2017-01-16T19:31:32.570Z · LW(p) · GW(p)

Attempts to change society invariably result in selection pressures for effectiveness outmatching those for honesty and benevolence. In a couple of generations, the only people left in charge are the kind of people you definitely wouldn't want in charge, unless you're the kind of person nobody wants in charge in the first place.

You're excluding being aligned with objective reality (accepting facts, etc.) from said effectiveness. Otherwise, it's useless.

This is the kind of thinking that, given a few years of unchecked power and primate group competition, leads to mass programs of rearranging people's brain centres with 15th century technology.

I'm unsure why you're presuming rearranging people's brains isn't done constantly independent of our volition. This simply starts questioning how we can do it, with our current knowledge.

Why don't you spend some time instead thinking about how your forced rationality programme is going to avoid the pitfall all others so far fell into, megalomania and genocide?

Why would it lead to megalomania and genocide, when that's not aligned with reality? An understanding of neuroscience and evolutionary biology, presuming you were aligned with reality enough to figure it out and accept facts, would be enough, while still understanding that we can be wrong until we know more.

And why are you so sure your beliefs are the final and correct ones to force on everyone through brain manipulation?

As I said "this includes uncertainty of facts (because of facts like an interpretation of QM)." which makes us embrace uncertainty, that reality is probabilistic with this interpretation. It's not absolute.

If we had the technology to enforce beliefs a few centuries ago, would you consider it a moral good to freeze the progress of human thought at that point

Because that's essentially what you're proposing from the point of view of all potential futures where you fail.

I'm not.

comment by MrMind · 2017-01-17T09:11:27.024Z · LW(p) · GW(p)

I think that the problem you state is unsolvable. The human brain evolved to solve social problems related to survival, not to be a perfect Bayesian reasoner (Bayesian models have a tendency to explode in computational complexity as the number of parameters increases). Unless you want to design a brain anew, I see no way to modify ourselves to become perfect epistemic rationalists besides a lot of effort. That might be a shortcoming of my imagination, though.
There's also the case that we shouldn't be perfect rationalists: possibly the cost of adding a further decimal to a probability is much higher than the utility gained because of it, but of course we couldn't know in advance. Also, sometimes our brain prefers to fool itself so that it is better motivated toward something / happier, although Eliezer argued at length against this attitude.
So yeah, the landscape of the problem is thorny.

As far as I can tell, P (read sequences) < P (figure this out)

You really meant U(read sequences) < U(figure this out)

Replies from: ingive
comment by ingive · 2017-01-17T12:14:38.434Z · LW(p) · GW(p)

I see that the problem in your reasoning is that you've already presumed what it entails; what you have missed out on is understanding ourselves. Science and reasoning already tell us that we share neural activity and are a social species, thus each of us could be considered to be a cell in a brain. It's not so much that every cell decides to push the limits of its rationality, but rather the whole collective, as long as the expected value is positive. But to do that the first cells have to be U(figure this out).

It's not either perfect or non-perfect, that's absolute thinking. Rather by inductive reasoning or QM probabilistic thinking, "when should I stop refining this, instead share this?" after enough modification and understanding of neuroscience and evolutionary biology for the important facts in what we are.

Based on not thinking in absolute perfection, it's not a question of if, but rather what do we do? Because your reasoning cannot be already flawed before thinking about this problem. We already know that we can change behavior and conditioning, look around the world how people join religious groups, but how do we capitalize on this brain mechanism to increase productivity, rationality, and so on?

Before I said, "stop refining it then share it", that's all it takes and the entire world will have changed. Regarding that, our brain can fool itself, yeah, I don't see why there can't be objective measurement outside of subjective opinion and that it'll surely be thought of in the investigation process.

comment by moridinamael · 2017-01-17T00:08:13.499Z · LW(p) · GW(p)

Could you unpack "aligning with reality" a bit? Is it meaningfully different from just having a scientific mindset?

Replies from: ingive
comment by ingive · 2017-01-17T01:23:09.334Z · LW(p) · GW(p)

A scientific mindset has a lower probability of being positive expected value because there is more than one value when it comes to making decisions, sometimes in conflict with each other. This can lead to cognitive dissonance in daily life. It's because science is a tool, the best one we got. Aligning with reality has a higher probability as it's an emotional heuristic, with only one value necessary.

Aligning with reality means submitting yourself emotionally, similar to how a religious person submits to God, but in this case to our true creator: logic, defined here as "the consistent patterns which bring about reality". Then you accept facts fully. You understand how everything is probabilities, as per one interpretation of quantum mechanics, and that experience is a tool rather than a goal. Using inductive reasoning and deciding actions as per positive expected value allows you to accept facts and be aligned with reality.

It's hard if you keep thinking binary, whether it be absolutes or not, 1's or 0's. Because to be able to accept facts is to be able to accept one might be wrong; everything is probabilities, infinite possibilities. Practically, if you know exercising every day is positive expected value, for example, then as you align yourself with reality in every moment, you realize even if you injure yourself accidentally today, you won't give up on reality. Because you made the most efficient action as per your knowledge and you already accounted for the probability of accidentally injuring yourself.

So as you keep feeling, you also upgrade it with the probabilities to keep your emotions aligned with reality, and are better able to handle situations as I mentioned above - or maybe something more specific, like if someone breaks your trust. You already took it into consideration, so you won't completely lose trust and emotions for reality.

When you accept and align yourself with reality, then the facts which underlie it, with our current understandings and as long as the likelihood is high, you keep aligning yourself. Experience truly is a feedback loop which results in whatever you feed it.

Regarding what aligning with reality entails: When you're constantly aligning yourself to reality, as long as you deem the probability high you'll be able to emotionally resonate with insights gained. For example, neuroscience will tell you, that you and your environment are not separate from each other, it's all a part of your neural activity. So helping another is helping you. If that doesn't resonate enough, for example, evolutionary biology that we're all descendants from stardust might. Or that there is a probability that you don't exist (as per QM) although very small. So what happens? Your identity and self vanishes, as it's no longer aligned with reality, you accept facts, emotionally. Then you keep the momentum by doing logical actions as per positive expected value after you learn everything what truly is you, and so on.

It's about what Einstein and Carl Sagan believed in: Spinoza's God. However, Einstein couldn't accept QM because he was already thinking in absolutes, and was unaware of how the brain works - which we do know now; for example, we know we're all inherently in denial, and how memory storage works, etc. If he had known that, he might have had a different view.

I can't really fix up this text right now, but I hope it can somehow help you to understand what it means to align with reality. It's really important to accept that experience is a tool, not a goal, based on insights from evolutionary biology for example. Then there is reality. Who is aligning, if there is only reality?

Replies from: moridinamael
comment by moridinamael · 2017-01-17T15:07:45.599Z · LW(p) · GW(p)

I think there is an irreconcilable tension between your statement that one should completely emotionally submit to and align with facts, and that one should use a Bayesian epistemology to manage beliefs.

There are many things in life and in science that I'm very certain about, but by the laws of probability I can never be 100% certain. There are many more things that I am less than certain about, and hold a cloud of possible explanations, the most likely of which may only be 20% probable in my estimation. I should only "submit" to any particular belief in accordance with my assessment of its likelihood, and can never justify submitting to some belief 100%. Indeed, doing so would be a form of irrational fundamentalism.

For example, neuroscience will tell you, that you and your environment are not separate from each other, it's all a part of your neural activity. So helping another is helping you. If that doesn't resonate enough, for example, evolutionary biology that we're all descendants from stardust might. Or that there is a probability that you don't exist (as per QM) although very small. So what happens? Your identity and self vanishes, as it's no longer aligned with reality, you accept facts, emotionally.

I feel it might help you to know that none of this is actually factual. These are your interpretations of really vague and difficult-to-pin-down philosophical ideas, ideas about which very smart and well-read people can and do disagree.

For example, the idea that you and your environment are not separate from each other may be true in some narrow technical sense but it is also very much false in probably more relevant senses. The same could be said for the idea that helping another is helping yourself. That's not true if the other I'm helping is trying to murder me -- and if I can refute the generality with one example that I came up with in half a second of thought, it's not a very useful generality.

I suspect that you haven't read through all of Eliezer's blog posts. His writings cover all the things you're talking about, but do it in a way that is grounded in much sturdier foundations than you appear to be using. It also seems that you are very much in love with this idea of Logic as being the One Final Solution to Everything, and that is always a huge danger sign in human thinking. Just thinking probabilistically, the odds that the true Final Solution to Everything has been discovered and that you are in possession of it are very low. Hence the need to keep a distribution of likelihoods over beliefs rather than putting all your weight down 100% on some perspective that appeals to you aesthetically.

Replies from: ingive
comment by ingive · 2017-01-17T15:57:50.538Z · LW(p) · GW(p)

I should only "submit" to any particular belief in accordance with my assessment of its likelihood, and can never justify submitting to some belief 100%. Indeed, doing so would be a form of irrational fundamentalism.

Not necessarily, because the submitting is a means rather than the goal, and you will never be certain. It's important to recognize empirically how your emotions work contrary to a Bayesian epistemology, and how using their mechanisms paradoxically leads to something which is more aligned with reality. It's not done with Bayesian epistemology; it is done with emotions, which do not speak in our language and are possibly hard-wired to be that way. So we become aware of them and mix in inductive reasoning.

For example, the idea that you and your environment are not separate from each other may be true in some narrow technical sense but it is also very much false in probably more relevant senses.

"true in some narrow technical sense" yet "false in probably more relevant senses" this is called cognitive dissonance, empirically it can even be this way by some basic reasoning, both emotionally and factually, which is what I am talking about, and which needs to be investigated. You're proving my point :)

That's not true if the other I'm helping is trying to murder me -- and if I can refute the generality with one example that I came up with in half a second of thought, it's not a very useful generality.

That's simply semantics. The problem is attaching emotionally to a sense of "I", which is not aligned with reality, independent of action. You may speak of this practical body, hands, "I", for communication, but it all arises in your neural activity without a center, and it's ever changing. Empirically, that arises in the subjective reference frame, which is taken as a premise for this conversation.

I suspect that you haven't read through all of Eliezer's blog posts. His writings cover all the things you're talking about, but do so in a way that is grounded in much sturdier foundations than you appear to be using.

Yes. I'm unsure whether his writings cover what I am talking about, as is evident from what you've said so far. Not that I blame you; I just want us to meta-observe ourselves so we can be more aligned.

It also seems that you are very much in love with this idea of Logic as being the One Final Solution to Everything, and that is always a huge danger sign in human thinking. Just thinking probabilistically, the odds that the true Final Solution to Everything has been discovered and that you are in possession of it are very low. Hence the need to keep a distribution of likelihoods over beliefs rather than putting all your weight down 100% on some perspective that appeals to you aesthetically.

I'm unsure what counts as a danger sign in human thinking. If you change perspective, the likelihood that something is worse than what we have is low. You only need a limited emotional connection to science and rationality to realize this, and to see how bad thinking now spreads epidemically; but coming from someone like us, it's more likely to be good thinking? Investigating this is very likely to have positive expected value, because you, I and others inherently possess qualities which are not aligned with reality. I want to reassure you of something, however.

Alignment with reality is the most probable way to reach equilibrium, as it's aligned with the utility function. When you're in a death spiral and not aligned (yet think you are aligned), aligning with reality might seem like not aligning ("very much false in probably more relevant senses"), but it's the opposite: it would seem to go against the utility function and lead to an experience opposite to before. That's the case, but if you are honest with your emotions, the baseline experience has a hard time seeing beyond itself. That's why it matters to understand that experience is a tool, not a goal; although it gives what would be considered "satisfaction of that goal", that only happens by accepting facts, and it can't happen in the death spiral.

I'm unsure if this is possible to communicate with words; it's quite a limitation of language, and it seems that regardless of what I say to you, you cannot see beyond it. That's why I want to start a discussion of how we should become more aligned with reality and where to start from, whether it be neuroscience studies or whatever.

Replies from: moridinamael
comment by moridinamael · 2017-01-17T16:19:19.224Z · LW(p) · GW(p)

It's important to recognize empirically how your emotions work contrary to a Bayesian epistemology, and how using their mechanisms paradoxically leads to something which is more aligned with reality. It's not done with Bayesian epistemology; it is done with emotions, which do not speak in our language and are possibly hard-wired to be that way. So we become aware of them and mix in inductive reasoning.

Science does not actually know how emotions work to the degree of accuracy you are implying. Your statement that using emotional commitment rather than Bayesian epistemology leads to better alignment with reality is a hypothesis that you believe, not a fact that has been proven. If you become a very successful person by following the prescription you advocate, that would be evidence in favor of your hypothesis, but even that would not be very strong evidence by itself.

"true in some narrow technical sense" yet "false in probably more relevant senses" this is called cognitive dissonance, empirically it can even be this way by some basic reasoning, both emotionally and factually, which is what I am talking about, and which needs to be investigated. You're proving my point :)

I am not sure what you're saying here. "Cognitive dissonance" is not the same thing as observing that a phenomenon can be framed in two different mutually contradictory ways. I do not have an experience of dissonance when I say, "From one point of view we're inseparable from the universe, from a different point of view we can be considered independent agents." These are merely different interpretative paradigms and neither is right nor wrong.

Yes. I'm unsure whether his writings cover what I am talking about, as is evident from what you've said so far. Not that I blame you; I just want us to meta-observe ourselves so we can be more aligned.

I am trying to say nicely that Eliezer's writings comprehensively invalidate what you're saying. The reason you're getting pushback from Less Wrong is that we collectively see the mistakes that you're making because we have a shared bag of epistemic tools that are superior to yours, not because you have access to powerful knowledge and insights that we don't have. You would really benefit in a lot of ways from reading the essays I linked before you continue proselytizing on Less Wrong. We would love to have you as a member of the community, but in order to really join the community you will need to be willing to criticize yourself and your own ideas with detachment and rigor.

I'm unsure what counts as a danger sign in human thinking. If you change perspective, the likelihood that something is worse than what we have is low. You only need a limited emotional connection to science and rationality to realize this, and to see how bad thinking now spreads epidemically; but coming from someone like us, it's more likely to be good thinking? Investigating this is very likely to have positive expected value, because you, I and others inherently possess qualities which are not aligned with reality. I want to reassure you of something, however.

I'm not arguing that changing perspective from default modes of human cognition is bad. I'm arguing that your particular brand of improved thinking is not particularly compelling, and is very far from being proven superior to what I'm already doing as a committed rationalist.

Alignment with reality is the most probable way to reach equilibrium, as it's aligned with the utility function. When you're in a death spiral and not aligned (yet think you are aligned), aligning with reality might seem like not aligning ("very much false in probably more relevant senses"), but it's the opposite: it would seem to go against the utility function and lead to an experience opposite to before. That's the case, but if you are honest with your emotions, the baseline experience has a hard time seeing beyond itself. That's why it matters to understand that experience is a tool, not a goal; although it gives what would be considered "satisfaction of that goal", that only happens by accepting facts, and it can't happen in the death spiral.

I would actually suggest that you stop using the phrase "aligning with reality" because it does not seem to convey the meaning you want it to convey. I think you should replace every instance of that phrase with the concrete substance of what you actually mean. You may find that it means essentially nothing and is just a verbal/cognitive placeholder that you're using to prop up unclear thinking. For example, in the above paragraph, "Alignment with reality is the most probable way to reach equilibrium, as it's aligned with the utility function" could be rewritten as "Performing the actions most likely to yield highest utility is most probable to be aligned with the utility function", which is a tautology, not an insight.

Replies from: ingive
comment by ingive · 2017-01-17T17:01:32.408Z · LW(p) · GW(p)

Science does not actually know how emotions work to the degree of accuracy you are implying. Your statement that using emotional commitment rather than Bayesian epistemology leads to better alignment with reality is a hypothesis that you believe, not a fact that has been proven. If you become a very successful person by following the prescription you advocate, that would be evidence in favor of your hypothesis, but even that would not be very strong evidence by itself.

I don't know; that's why I wanted to raise an investigation into it. But empirically you can validate or invalidate the hypothesis through emotional awareness, which is what I said at the start of the message you quoted, yet you somehow make it seem as though I imply science when I say empirically.

First sentence: "It's important to recognize empirically"

I do not have an experience of dissonance when I say,

You might have had one, but no longer. That's how cognitive dissonance works.

"From one point of view we're inseparable from the universe, from a different point of view we can be considered independent agents." These are merely different interpretative paradigms and neither are right or wrong.

"Independent agents" is an empirical observation which I have already taken as a premise, as a matter of communication. Emotionally, you don't have to be an independent agent of the universe if you choose not to. It's a question of whether one alignment is more aligned with reality based on factual evidence or on what you feel (have been conditioned to feel). Right or wrong is a question of absolutes. More aligned over time is not.

you will need to be willing to criticize yourself and your own ideas with detachment and rigor.

I'm unsure what I have written that has not tried to communicate this message; in case you don't understand, that's exactly what I am trying to tell you. I am offering to raise a discussion to figure out how to do it. Aligning with reality implies detachment from things which are not aligned. If you wonder whether attachment to it is possible: yes, as a means, but you'll soon get over it through empirical and scientific evidence.

I'm not arguing that changing perspective from default modes of human cognition is bad. I'm arguing that your particular brand of improved thinking is not particularly compelling, and is very far from being proven superior to what I'm already doing as a committed rationalist.

I'm not sure; that's why I want to raise a discussion or a study group to investigate this idea.

"Performing the actions most likely to yield highest utility is most probable to be aligned with the utility function",

Simply being aligned with reality gives you equilibrium as that's what you were designed to do. Using Occam's razor here simplifies your programming.

The bottom line is being able to accept facts emotionally (such as the neural-activity point earlier) rather than relying on empirical observations shaped by social conditioning. I'm unsure that you've in any way disproved the point I just made.

That's the point I want to make: we should want to investigate this further, and how we can align ourselves with the facts emotionally (empirically). But how do we do it?

Simply by saying it like this: "true in some narrow technical sense" then "false in probably more relevant senses". So your empirical observation is probably "true", rather than the scientific evidence, or facts (which you call narrow and technical)? No, it's not probably true, and there is a disconnect between your emotional attachment to what's less probable and to what's more probable. You don't even see it as a problem because it's your lens, yet you do your best to admit it in a way where it doesn't seem too obvious, by using words like "narrow". That's exactly what I invite you to discuss further: why do you believe things to be false when the scientific evidence says otherwise ("true in some narrow technical sense")? I presume you're also using "true" and "false" in a linguistic way; there's no such thing.

That's exactly why I deem it important: because if you did, you'd say "yeah, the scientific evidence says so" instead of "no, my senses tell me it's false", or both (which makes no sense; worth investigating!). What if, by learning of the scientific evidence, you adopt the "truth" so that your senses tell you what is "true"? That's what you would do.

Replies from: moridinamael
comment by moridinamael · 2017-01-17T17:31:22.258Z · LW(p) · GW(p)

Simply by saying it like this: "true in some narrow technical sense" then "false in probably more relevant senses". So your empirical observation is probably "true", rather than the scientific evidence, or facts (which you call narrow and technical)? No, it's not probably true, and there is a disconnect between your emotional attachment to what's less probable and to what's more probable. You don't even see it as a problem because it's your lens, yet you do your best to admit it in a way where it doesn't seem too obvious, by using words like "narrow". That's exactly what I invite you to discuss further: why do you believe things to be false when the scientific evidence says otherwise ("true in some narrow technical sense")? I presume you're also using "true" and "false" in a linguistic way; there's no such thing.

There is a narrow technical sense in which my actions are dependent on the gravitational pull of some particular atom in a random star in a distant galaxy. That atom is having a physical effect on me. This is true and indisputable.

In a more relevant sense, that atom is not having any effect on me that I should bother with considering. If a magical genie intervened and screened off the gravitational field of that atom, it would change none of my choices in any way that could be observed.

What am I supposedly believing that is false, that is contradicted by science? What specific scientific findings are you implying that I have got wrong?

...

Let me back way up.

You are saying a lot of really uncontroversial things that nobody here particularly cares to argue about, like "Occam's razor is good" and "we are not causally separate from the universe at large" and "living life as a human requires a constant balancing and negotiation between the emotional/sensing/feeling and rational/deliberative/calculating parts of the human mind". These ideas are all old hat around here. They go all the way back to Eliezer's original essays, and he got those ideas from much older sources.

Then you're jumping forward and making quasi-religious statements about "aligning with reality" and "emotionally submitting" and talking about how your "sense of self disappears". All that stuff is your own unsupported extrapolations. This is the reason you're having trouble communicating here.

Replies from: ingive
comment by ingive · 2017-01-17T18:16:40.955Z · LW(p) · GW(p)

What am I supposedly believing that is false, that is contradicted by science? What specific scientific findings are you implying that I have got wrong?

This is what you said:

"For example, the idea that you and your environment are not separate from each other may be true in some narrow technical sense but it is also very much false in probably more relevant senses."

You believe that you and your environment are separate based on your "relevant" senses. Scientific evidence is irrelevant to some of your senses; it is merely technical. If all of your senses were in resonance, including the emotional ones, then there would be no sense in which scientific evidence is irrelevant in this context.

So your environment and you are not separate. This is a scientific fact, because it's all part of your neural activity. Now, I am not denying consciousness, qualia or empirical evidence; I'm already taking them as a premise. But you are emotionally attached to the idea that you and your environment are separate; that's why you're unable to accept the scientific evidence. However, if you had a scientific mindset, facts would make you accept it. It's not the way you think right now ("It's true in a technical sense, but not for the relevant senses"), where one part of you accepts it but the other, your emotions, does not.

This is exactly what I mean by aligning with reality: you're aligning and letting the evidence in rather than rejecting it from preconditioned beliefs. I think you're starting to understand, and that you will be stronger because of it, even if it might seem a little scary at the start. Of course, we have to investigate it.

There is a narrow technical sense in which my actions are dependent on the gravitational pull of some particular atom in a random star in a distant galaxy. That atom is having a physical effect on me. This is true and indisputable. In a more relevant sense, that atom is not having any effect on me that I should bother with considering. If a magical genie intervened and screened off the gravitational field of that atom, it would change none of my choices in any way that could be observed.

You don't bother considering it because it's an analogy in which the hypothetical scenario leads to that conclusion. Do the same with the statements in context: repeat them. Does it have any effect on you to feel that you're not separate from your environment ("helping others is helping you"), and so on? But of course you have to write it down in the same manner, only now not for an analogy.

Then you're jumping forward and making quasi-religious statements about "aligning with reality" and "emotionally submitting" and talking about how your "sense of self disappears". All that stuff is your own unsupported extrapolations. This is the reason you're having trouble communicating here.

Aligning with reality is an emotional heuristic which follows Occam's razor. Emotional submitting is something you already do. This is an example of what happens if you emotionally submit to a heuristic which constantly aligns you to reality and acts as a guide to your decisions. Then, if there is evidence, as I wrote at the start of the post, you submit yourself to the extent that it's no longer true only in "a technical sense".

Replies from: moridinamael
comment by moridinamael · 2017-01-17T18:28:14.380Z · LW(p) · GW(p)

But you are emotionally attached to the idea that you and your environment are separate; that's why you're unable to accept the scientific evidence.

No, I'm not.

This is just not a very interesting or useful line of thinking. I (and most people on this forum) already try to live as rationalists, and where your proposal implies any deviation from that framework, your deviations are inferior to simply doing what we are already doing. Furthermore, you consistently rely on buzzwords of your own invention ("aligning with reality", "emotionally submitting") which greatly inhibit your attempts at clarifying what you're trying to say. Perhaps if you read the essays as I suggest, you could provide substantive criticisms/improvements that did not rely on your own idiosyncratic terminology.

Replies from: ingive
comment by ingive · 2017-01-17T18:52:14.122Z · LW(p) · GW(p)

You say you're not, yet you're contradicting your previous statement, in which scientific facts are irrelevant to your other senses [emotions]. You completely omitted responding to that. Please explain. Is it a blind spot?

This is just not a very interesting or useful line of thinking.

I'm unsure why accepting facts, to the extent that falsehoods from other senses are overwritten, is uninteresting or not useful.

I (and most people on this forum) already try to live as rationalists, and where your proposal implies any deviation from that framework, your deviations are inferior to simply doing what we are already doing.

It's obviously not inferior or superior, as I've already explained a flaw in your reasoning, which you're either too deep in an affective death spiral to notice, or completely omitting because you have some vague sense that you are right. You could've welcomed me rather than proving what I've been saying all along. :)

Furthermore, you consistently rely on buzzwords of your own invention ("aligning with reality", "emotionally submitting") which greatly inhibit your attempts at clarifying what you're trying to say.

It's very explanatory. If you go against what you are and your purpose, then you are not aligned with reality. If you go along with what you are and your purpose, then you are aligned with reality. Accepting facts in all senses, including emotionally. From everything I've written so far, your pattern-recognition machine should be able to connect the dots as to what these 'buzzwords' mean. If I say X means this, this and that, multiple times, then you should have a vague sense of what I mean by it?

Perhaps if you read the essays as I suggest, you could provide substantive criticisms/improvements that did not rely on your own idiosyncratic terminology.

I wasn't using 'my terminology' when I explained your contradiction, and that this contradiction is the problem, was I?

"For example, the idea that you and your environment are not separate from each other may be true in some narrow technical sense but it is also very much false in probably more relevant senses."

.

You believe that you and your environment are separate based on your "relevant" senses. Scientific evidence is irrelevant to some of your senses; it is merely technical. If all of your senses were in resonance, including the emotional ones, then there would be no sense in which scientific evidence is irrelevant in this context.

That's the improvement we have to make.

Replies from: moridinamael
comment by moridinamael · 2017-01-17T19:10:57.031Z · LW(p) · GW(p)

You say you're not, yet you're contradicting your previous statement, in which scientific facts are irrelevant to your other senses [emotions].

Where did I say scientific facts are irrelevant to my emotions?

It's obviously not inferior or superior, as I've already explained a flaw in your reasoning, which you're either too deep in an affective death spiral to notice, or completely omitting because you have some vague sense that you are right.

Please remind me or re-highlight where this flaw/contradiction happened. I did not notice you pointing it out before and cannot ascertain what you're referring to.

From everything I've written so far, your pattern-recognition machine should be able to connect the dots as to what these 'buzzwords' mean. If I say X means this, this and that, multiple times, then you should have a vague sense of what I mean by it?

I have an idea of what you're trying to say, but I suspect that you don't. Your thinking is not clear. By using different words, you will force yourself to interrogate your own understanding of what you're putting forth.

You believe that you and your environment are separate based on your "relevant" senses. Scientific evidence is irrelevant to some of your senses; it is merely technical. If all of your senses were in resonance, including the emotional ones, then there would be no sense in which scientific evidence is irrelevant in this context.

Is this what you're talking about where you say I'm making an error in reasoning? If so it seems like you just misunderstood me. The gravitational pull of a distant atom is causally present but practically irrelevant to any conceivable choice that I make. This is not a statement that I feel is particularly controversial. It is obviously true.

Replies from: ingive
comment by ingive · 2017-01-17T19:31:30.345Z · LW(p) · GW(p)

"For example, the idea that you and your environment are not separate from each other may be true in some narrow technical sense"

In a technical sense.

"but it is also very much false in probably more relevant senses."

The relevant sense here is your emotions.

Technically you understand that self and environment are one and the same, but you don't emotionally resonate with that idea [you don't emotionally resonate with facts].

Otherwise, what do you mean by:

"For example, the idea that you and your environment are not separate from each other may be true in some narrow technical sense" It's true...?

"but it is also very much false in probably more relevant senses." But it's false... for a relevant sense?

What is the 'relevant sense'? (not emotions?)

Is it more or less probable that 'you and your environment' are separate, and based on what evidence?

I have an idea of what you're trying to say, but I suspect that you don't. Your thinking is not clear. By using different words, you will force yourself to interrogate your own understanding of what you're putting forth.

Emotionally accepting or submitting to something is an empirical fact. There are no different words, but if there are, you're free to put them forward.

The gravitational pull of a distant atom is causally present but practically irrelevant to any conceivable choice that I make. This is not a statement that I feel is particularly controversial. It is obviously true.

You keep using analogies rather than the example you gave earlier. Why? I already understand what you mean, but the actual example is not irrelevant to your decisions.

So what you actually meant was:

"You and your environment are not separated. This is obviously true"?

Can you confirm? Please spot the dissonance and be honest.

Replies from: moridinamael
comment by moridinamael · 2017-01-17T19:59:27.022Z · LW(p) · GW(p)

Thanks, this is clarifying.

You're reading way too much into word choices and projecting onto me a mentality that I don't hold.

"You and your environment are not separated. This is obviously true"?

Can you confirm? Please spot the dissonance and be honest.

Indeed, that was what I said. It is still true.

The gravitational pull of a distant atom is causally present but practically irrelevant to any conceivable choice that I make.

This is also true. Whether that particular atom is there or is magically whisked away, it's not going to change where I decide to eat lunch today. The activity of that atom is not relevant to my decision making process.

That's it. What part of this is supposed to be in error?

Replies from: ingive
comment by ingive · 2017-01-17T20:54:45.419Z · LW(p) · GW(p)

Indeed, this is true in the sense that it's most likely that this is the case based on the available evidence.

I'm glad that you're aligned with reality on this particular point; there are not many who are. But I wonder: why do you claim that helping others is not helping yourself, excluding the practicality of semantics? It seemed as though you were very new to the concept of non-emotional attachment to identity/I, because you argued my semantics.

But you claimed earlier that none of this is actually factual; would you like to elaborate on that? That these are my interpretations of vague and difficult-to-pin-down philosophical ideas.

The reason I push this is that you contradict yourself, and you very much seemed to have an opinion on this specific matter.

I feel it might help you to know that none of this is actually factual. These are your interpretations of really vague and difficult-to-pin-down philosophical ideas, ideas about which very smart and well-read people can and do disagree. For example, the idea that you and your environment are not separate from each other may be true in some narrow technical sense but it is also very much false in probably more relevant senses. The same could be said for the idea that helping another is helping yourself. That's not true if the other I'm helping is trying to murder me -- and if I can refute the generality with one example that I came up with in half a second of thought, it's not a very useful generality.

So... "none of this is actually factual", it's philosophical ideas, but later on you agree that "you and your environment are not separated. This is obviously true" by saying "Indeed, that was what I said. It is still true." Which you did but it was "...in some narrow technical sense..." and "...but it is also very much false ... relevant ..." now it's "It's true" "factual"? Is it also a "philosophical idea" and a part of the ideas that "none of this is actually factual"?

Your statements in order:

  • not actually factual.
  • really vague philosophical ideas
  • may be true in some narrow technical sense
  • but it is also very much false in probably more relevant senses
  • indeed, that was what I said
  • it is still true

It's fine to be wrong and correct yourself :)

The activity of that atom is not relevant to my decision making process. That's it. What part of this is supposed to be in error?

Yeah, it isn't, but the example you gave of you and your environment is relevant to your decision-making process, as evidenced by your claim (outside of practicality and semantics) that "helping others is not helping yourself", for example. So using an analogy which is not relevant to your decision-making process, in contrast to your example where it is, is incorrect. That's why I say: use the example you used before, instead of making an analogy that I don't disagree with.

Replies from: moridinamael
comment by moridinamael · 2017-01-17T21:31:51.388Z · LW(p) · GW(p)

It seemed as though you were very new to the concept of non-emotional attachment to identity/I, because you argued my semantics.

Not really, I've been practicing various forms of Buddhist meditation for several years and have pretty low attachment to my identity. This is substantially different from saying with any kind of certainty that helping other people is identical to helping myself. Other people want things contrary to what I want. I am not helping myself if I help them. Having low attachment to my identity is not the same thing as being okay with people hurting or killing me.

The rest of your post, which I'm not going to quote, is just mixing up lots of different things. I'm not sure if you're not aware of it or if you are aware of it and you're trying to obfuscate this discussion, but I will give you the benefit of the doubt.

I will untangle the mess. You said:

For example, neuroscience will tell you that you and your environment are not separate from each other; it's all part of your neural activity. So helping another is helping you. If that doesn't resonate enough, the evolutionary-biology fact that we're all descendants of stardust might. Or the fact that there is a probability, although very small, that you don't exist (as per QM). So what happens? Your identity and self vanish, as they're no longer aligned with reality; you accept facts, emotionally.

Then I said,

I feel it might help you to know that none of this is actually factual. These are your interpretations of really vague and difficult-to-pin-down philosophical ideas, ideas about which very smart and well-read people can and do disagree. For example, the idea that you and your environment are not separate from each other may be true in some narrow technical sense but it is also very much false in probably more relevant senses. The same could be said for the idea that helping another is helping yourself. That's not true if the other I'm helping is trying to murder me -- and if I can refute the generality with one example that I came up with in half a second of thought, it's not a very useful generality.

Since I have now grasped the source of your confusion with my word choice, I will reengage. You specifically say:

For example, neuroscience will tell you that you and your environment are not separate from each other; it's all part of your neural activity. So helping another is helping you.

This is a pure non sequitur. The fact that human brains run on physics in no way implies that helping another is helping yourself. Again, if a person wants to kill me, I'm not helping myself if I hand him a gun. If you model human agents the way Dustin Hoffman's character does in I Heart Huckabees, you're going to end up repeatedly confused and stymied by reality.

So what happens? Your identity and self vanish, as they're no longer aligned with reality; you accept facts, emotionally.

This is also just not factual. You're making an outlandish and totally unsupported claim when you say that "emotionally accepting reality" causes the annihilation of the self. The only known things that can make the identity and self vanish are

  • high dose psychotropic compounds
  • extremely long and intense meditation of particular forms that do not look much like what you're talking about

and even these are only true for certain circumscribed senses of the word "self".

So let's review:

I don't object to the naturalistic philosophy that you seem to enjoy. That's all cool and good. We're all about naturalistic science around here. The problem is statements like

So helping another is helping you.

and

Your identity and self vanish, as they're no longer aligned with reality.

These are pseudo-religious woo, not supported by science anywhere. I have given you very simple examples of scenarios where they are flatly false, which immediately proves that they are not the powerful general truths you seem to think they are.

Replies from: ingive
comment by ingive · 2017-01-17T21:58:16.212Z · LW(p) · GW(p)

This is substantially different from saying with any kind of certainty that helping other people is identical to helping myself.

No, it's not.

Other people want things contrary to what I want.

What does that have to do with helping yourself, and thus other people?

Having low attachment to my identity is not the same thing as being okay with people hurting or killing me.

Yeah, but 'me' is used practically.

The fact that human brains run on physics in no way implies that helping another is helping yourself.

I said your neural activity includes you and your environment, and that there is no differentiation. So there is no differentiation between helping another and helping yourself.

Again, if a person wants to kill me, I'm not helping myself if I hand him a gun. If you model human agents the way Dustin Hoffman's character does in I Heart Huckabees, you're going to end up repeatedly confused and stymied by reality.

That's the practical 'myself', used to talk about this body, its requirements and so on. You are helping yourself by not giving him a gun, because you are not differentiated from your environment. You are presuming that you are helping yourself by giving the gun because you think that there is another. No, there is only yourself. You help yourself by not giving him the gun, because your practical 'myself' is included in 'yourself'.

This is also just not factual. You're making an outlandish and totally unsupported claim when you say that "emotionally accepting reality" causes the annihilation of the self. The only known things that can make the identity and self vanish are high dose psychotropic compounds, or extremely long and intense meditation of particular forms that do not look much like what you're talking about, and even these are only true for certain circumscribed senses of the word "self".

I don't deny that it is not that factual, as there is limited objective evidence.

These are pseudo-religious woo, not supported by science anywhere. I have given you very simple examples of scenarios where they are flatly false, which immediately proves that they are not the powerful general truths you seem to think they are.

I disagree with 'helping another is helping you' being pseudo-religious woo, but that's because we're talking about semantics. We have to decide what 'me' or my 'self' or 'I' is. I use neural activity as the definition of this. You seem to use some type of philosophical reasoning in which you presume I use the same definition.

So we should investigate whether your self and identity can die from that, and whether other facts which we don't embrace emotionally lead to a similar process in their own area. That's the entire point of my original post.

Replies from: moridinamael
comment by moridinamael · 2017-01-17T22:05:01.157Z · LW(p) · GW(p)

It doesn't look like there's anywhere to go from here. It looks like you are acknowledging that where your positions are strong, they are not novel, and where they are novel, they are not strong. If you enjoy drawing the boundaries of your self in unusual places or emotionally associating your identity with certain ideas, go for it. Just don't expect anybody else to find those ideas compelling without evidence.

Replies from: ingive
comment by ingive · 2017-01-17T22:13:57.497Z · LW(p) · GW(p)

I agree.

These are the steps I took to reach identity death: link to steps. I also meditated on the 48 min hypnosis track youtube, if you are interested in where I got my ideas from and if you want to try it yourself. It's of course up to you, but you have a strong identity and ego issues, and I think it will help "you" (and me).

Replies from: moridinamael
comment by moridinamael · 2017-01-17T22:23:26.331Z · LW(p) · GW(p)

You've had people complete these steps and report that the "What will happen after you make the click" section actually happens?

Replies from: ingive
comment by ingive · 2017-01-17T22:33:33.892Z · LW(p) · GW(p)

Yeah, it's also called 'Enlightenment' in theological traditions. You can read the testimonies here. MrMind has, for example, read them, but he's waiting a bit longer to contact these people on Reddit to see if it sticks around. I think the audio can work really well with a good pair of headphones and playing it as FLAC.

comment by ingive · 2017-01-16T20:23:45.550Z · LW(p) · GW(p)

How disappointing. No one on LW appears to want to discuss this. Except for a few who undoubtedly misunderstood this post and started raving about some irrelevant topics. At least let me know why you don't want to.

1) How would we go about changing human behavior to be more aligned with reality?

Aligned with reality = Accepting facts fully (probably leads to EA ideas, science, etc)

2) When presented with scientific evidence, why do we not change our behavior? That's the question and how do we change it?

Replies from: username2, plethora
comment by username2 · 2017-01-18T10:33:54.921Z · LW(p) · GW(p)

1) How would we go about changing human behavior to be more aligned with reality?

Replace all humans with machines.

2) When presented with scientific evidence, why do we not change our behavior? That's the question and how do we change it?

That's basically related to the entire topic of this site. People probably aren't engaging with this question because it's too tiresome to summarize all the information that is available from that little search bar in the upper right corner.

Replies from: ingive
comment by ingive · 2017-01-18T11:28:53.608Z · LW(p) · GW(p)

Replace all humans with machines.

Changing human behavior is probably more efficient than building machines, as a way to align more with reality. It's a question of whether a means is a goal for you. If not, you would base your operations on the most effective action, probably changing behavior (because you could change the behavior of one person to equal the impact of your robot-building, and probably more). I don't think replacing all humans with machines is a smart idea anyway. Merging biology with technology would be a smarter approach in my view, as I deem life to be conscious and machines not to be. Of course, I might be wrong, but sometimes you might not have an answer and still give yourself the benefit of the doubt; for example, if you believed that every action is inherently selfish, you would still do actions which were not. By giving yourself the benefit of the doubt, if you figured out later on (which we did) that it is not the case, then that was a good choice. This includes consciousness: since we can't prove the external world, it would be wise to keep humans around or utilize the biological hardware. If we had machines which replaced all humans, then those would be not very smart machines if they didn't at least keep some humans around, uncontacted, in a jungle or so. Which would undoubtedly mean unfriendly AI, like a paperclip maximizer.

I just want to tell you that you have to recognize what you're saying and how it looks; even though you only wrote five words, you could just as well be supporting a paperclip maximizer.

That's basically related to the entire topic of this site. People probably aren't engaging with this question because it's too tiresome to summarize all the information that is available from that little search bar in the upper right corner.

What should I search for to find an answer to my question? Flaws of human behavior that can be overcome (can they?), like biases and fallacies, are relevant, but quite specific. However, I guess that's very worthwhile to go through to improve functionality. Anything else would be stupid.

Replies from: niceguyanon
comment by niceguyanon · 2017-01-18T17:20:13.030Z · LW(p) · GW(p)

Here is why I think people are not engaging with you. But don't take this as a criticism of your ideas or questions.

  • You have been strongly associated with a certain movement, and people might not want to engage you in conversation even on different topics, because they are afraid your true intention is to lead the conversation back to ideas that they didn't want to talk with you about in the first place.

  • I think username2 was making a non-serious cheeky comment which went over your head and you responded with a wall of text touching on several ideas. People sometimes just want small exchanges and they have no confidence in you to keep exchanges short.

  • Agreeing with the sentiment that people probably aren't engaging with this question because it's too tiresome to summarize all the information that is available, and what is available is probably incomplete as well. By asking such a broad question rather than a narrower, specific, or applied question, you won't get many responses.

Replies from: username2, ingive
comment by username2 · 2017-01-18T19:31:12.055Z · LW(p) · GW(p)

I was being cheeky, yes, but also serious. What do you call a perfect rationalist? A sociopath[1]. A fair amount of rationality training is basically reprogramming oneself to be mechanical in one's response to evidence and follow scripts for better decision making. And what kind of world would we live in if every single person was perfectly sociopathic in their behaviour? For this reason in part, I think the idea of making the entire world perfectly rationalist is a potentially dangerous proposition and one should at least consider how far along that trajectory we would want to take it.

But the response I gave to ingive was five words because, for all the other reasons you gave, I did not feel it would be a productive use of my time to engage further with him.

[1] ETA: Before I get nitpicked to death, I mean the symptoms often associated with high-functioning sociopathy, not the clinical definition which I'm aware is actually different from what most people associate with the term.

Replies from: ingive
comment by ingive · 2017-01-18T20:10:24.492Z · LW(p) · GW(p)

No, you don't. A perfect rationalist is not a sociopath, because a perfect rationalist understands what they are and, through scientific inquiry, can constantly update and align themselves with reality. If every single person were a perfect rationalist, the world would be a utopia, in the sense that extreme poverty would instantly be eliminated. You're assuming that a perfect rationalist cannot see through the illusion of self and identity and update their beliefs by understanding neuroscience and evolutionary biology. Quite the opposite: they would be seen as philanthropic, altruistic and selfless.

The reason you think so is the Straw Vulcan, your own attachment to your self and identity, and your own projections onto the world. I have talked about your behavior previously in one of my posts. Do you agree? I also gave you suggestions on how to improve, by meditating, for example. http://lesswrong.com/lw/5h9/meditation_insight_and_rationality_part_1_of_3/

In another example, since you and many in society seem to have a fetish for sociopaths: yes, you'll be a sociopath, but not for yourself, for the world. By recognizing that your neural activity includes your environment and that they are not separate, that all of us evolved from stardust, and by practicing meditation, for example, or using psychotropic substances, your "identity"/"I"/"self" becomes more aligned, and thus so does what your actions are directed at. That's called Effective Altruism. (Emotions aside, selflessness speaks louder in actions!)

Edit: You changed your post after I replied to you.

[1] ETA: Before I get nitpicked to death, I mean the symptoms often associated with high-functioning sociopathy, not the clinical definition which I'm aware is actually different from what most people associate with the term.

It still applies. Doesn't matter.

Replies from: niceguyanon
comment by niceguyanon · 2017-01-18T20:33:41.034Z · LW(p) · GW(p)

If I remember correctly, username2 is a shared account, so the person you are talking to now might not be the one you previously conversed with. Just thought you should know, because I don't want you to mistake the account for a static person.

Replies from: ingive
comment by ingive · 2017-01-18T20:46:12.349Z · LW(p) · GW(p)

It's unlikely that it's not the same person; alternatively, people on average use shared accounts to try to share their suffering (by that I mean a specific attitude) in a negative way. It would be interesting to compare shared accounts with other accounts using, for example, IBM Watson Personality Insights, in a large-scale analysis.

I would just ban them from the site. I'd rather see a troll spend time creating new accounts and people noticing the sign-up dates. Relevant: Internet Trolls Are Narcissists, Psychopaths, and Sadists

By the way, I was not consciously aware of which user it was when I wrote my text or the analysis of the user's agenda. But afterwards I remembered: "oh, it's that user again".

Replies from: username2
comment by username2 · 2017-01-18T20:58:08.491Z · LW(p) · GW(p)

The username2 account exists for a reason. Anonymous speech does have a role in any free debate, and it is virtuous to protect the ability to speak anonymously.

Replies from: ingive
comment by ingive · 2017-01-18T21:26:47.645Z · LW(p) · GW(p)

I agree. Now I'd like the password for username2.

You have been strongly associated with a certain movement, and people might not want to engage you in conversation even on different topics, because they are afraid your true intention is to lead the conversation back to ideas that they didn't want to talk with you about in the first place.

-niceguyanon

Replies from: username2
comment by username2 · 2017-01-18T22:52:49.866Z · LW(p) · GW(p)

The password is a Schelling point, the most likely candidate for an account named 'username'. Consider it a rite of passage to guess... (and don't post it when you discover it).

comment by ingive · 2017-01-18T17:40:25.531Z · LW(p) · GW(p)

You have been strongly associated with a certain movement, and people might not want to engage you in conversation even on different topics,

You forgot to say that you think that. But for username2's point, you made sure to say that you think it.

because they are afraid your true intention is to lead the conversation back to ideas that they didn't want to talk with you about in the first place.

That's unfortunate, if it is the case. If ideas which are outside their echo chamber create such fear, then what I say might be of use in the first place, if we all come together and figure things out :)

I think username2 was making a non-serious cheeky comment which went over your head and you responded with a wall of text touching on several ideas. People sometimes just want small exchanges and they have no confidence in you to keep exchanges short.

It was, but it speaks to his underlying ideas and character to even be in the position to do that. I don't mind it; I enjoy typing walls of text. What would you want me to respond, if at all?

Agreeing with the sentiment that people probably aren't engaging with this question because it's too tiresome to summarize all the information that is available, and what is available is probably incomplete as well. By asking such a broad question rather than a narrower, specific, or applied question, you won't get many responses.

Yeah, I think so too, but I do think there is a technological barrier in how this forum was setup for the type of problem-solving I am advising for. If we truly want to be Less Wrong, it's fine with how it is now, but there can definitely be improvements in an effort for the entire species rather than a small subset of it, 2k people.

Replies from: niceguyanon
comment by niceguyanon · 2017-01-18T19:22:16.285Z · LW(p) · GW(p)

It was, but it speaks to his underlying ideas and character to even be in the position to do that.

What do you mean by this? Assuming it's a joke, why does it speak to his character and underlying ideas? Why would it? It wasn't meant for you to take seriously.

What would you want me to respond, if at all?

Probably not at all.

Replies from: ingive
comment by ingive · 2017-01-18T20:35:23.658Z · LW(p) · GW(p)

What do you mean by this? Assuming it's a joke, why does it speak to his character and underlying ideas? Why would it? It wasn't meant for you to take seriously.

Because a few words tell a large story when someone also decided it was worth their time to write them. I explained in my post, for example, what type of viewpoints it implies and why it's stupid (in the sense of inefficient and not aligned with reality).

Probably not at all.

I will update my probabilities then as I gain more feedback.

comment by plethora · 2017-01-18T14:09:28.091Z · LW(p) · GW(p)

Accepting facts fully (probably leads to EA ideas,

It's more likely to lead to Islam; that's at least on the right side of the is-ought gap.

Replies from: ingive
comment by ingive · 2017-01-18T14:37:02.222Z · LW(p) · GW(p)

that's at least on the right side of the is-ought gap.

I'm having a hard time understanding what you mean.

Accepting facts fully is EA/Utilitarian ideas. There is no 'ought' to be. 'leads' was the incorrect word choice.

Replies from: plethora
comment by plethora · 2017-01-19T07:47:52.933Z · LW(p) · GW(p)

No. Accepting facts fully does not lead to utilitarian ideas. This has been a solved problem since Hume, FFS.

Replies from: ingive
comment by ingive · 2017-01-19T14:55:57.333Z · LW(p) · GW(p)

You're welcome to explain why this isn't the case. I'm thinking mostly about neuroscience and evolutionary biology. They tell us everything.

Replies from: moridinamael
comment by moridinamael · 2017-01-19T15:08:20.239Z · LW(p) · GW(p)

Is-ought divide. If you have solved this problem, mainstream philosophy wants to know.

Replies from: ingive
comment by ingive · 2017-01-19T15:51:36.588Z · LW(p) · GW(p)

If someone wins the Nobel Prize, you heard it here first.

The is-ought problem implies that the universe is deterministic, which is incorrect; it's an infinite range of possibilities or probabilities which are consistent but can never be certain. Hume's beliefs about is-ought came from his own understanding of his emotions and the emotions of those around him. He correctly presumed that they are what drive us and that logic and rationality could not be (thus no ought can come from what is), and he thought the universe is deterministic (without knowledge of the brain and QM). The insight he was not aware of is that even though his emotions are the driving factor, he can emotionally be with rationality, logic, and facts, so there is no ought separate from what is. 'What is' implies facts, rationality, logic and so on: EA/utilitarian ideas. The question about free will is an emotional one; if you are aware that your subjective reference frame, awareness, was a part of it, then you can let go of that.

Replies from: moridinamael, plethora
comment by moridinamael · 2017-01-19T16:21:35.055Z · LW(p) · GW(p)
  1. The universe is deterministic.

  2. You seem to be misunderstanding is-ought. The point is that you cannot conclude what ought to be, or what you ought to do, from what is. You can conclude what you ought to do in order to achieve some specific goal, but you cannot infer "evolutionary biology, therefore effective altruism". You are inserting your own predisposition into that chain and pretending it is a logical consequence.

Replies from: ingive
comment by ingive · 2017-01-19T17:11:39.230Z · LW(p) · GW(p)
  1. With that interpretation, not Copenhagen. I'm unsure, because inherently, can we really be certain of absolutes, given our lack of understanding of the human brain? I think that how memory storage and the brain work shows us that we can't be certain of our own knowledge.

  2. If you are right that the universe is deterministic, then what ought to be is what is. But if you ought to do the opposite of what 'is' tells us, what are you doing then? You are not allowed to have a goal which is not aligned with what is, because that goes against what you are. I do agree with you now, however; I think this is semantics. I think it was a heuristic. But then I'll say "What is, is what you ought to be".

Replies from: moridinamael
comment by moridinamael · 2017-01-19T17:32:33.720Z · LW(p) · GW(p)

If reasonable people can disagree regarding Copenhagen vs. Many Worlds, then reasonable people can disagree on whether the universe is deterministic. In which case, since your whole philosophy seems to depend on the universe not being deterministic, you should scream "oops!" and look for where you went wrong, not try to come up with some way to quickly patch over the problem without thinking about it too hard.

Also: How could 'is' ever tell you what to do?

An innocent is murdered. That 'is'. So it's okay?

You learn that an innocent is going to be murdered. That 'is', so what force compels you to intervene?

The universe is full of suffering. That 'is'. So you ought to spread and cause suffering? If not, what is your basis for saying so?

Replies from: ingive
comment by ingive · 2017-01-19T18:27:41.156Z · LW(p) · GW(p)

In which case, since your whole philosophy seems to depend on the universe not being deterministic, you should scream "oops!" and look for where you went wrong, not try to come up with some way to quickly patch over the problem without thinking about it too hard.

I'm glad that it's clarified; indeed, it relies on the universe not being deterministic. However, I do think that under a belief in a deterministic universe its agents have an easier time going against their utility, so my philosophy might boil down more to one's emotions, which is probably what got Hume to philosophize about this in the first place. He apparently talked a lot about the emotion/rationality duality and probably contradicted himself on 'is-ought' in his own statements.

You learn that an innocent is going to be murdered. That 'is', so what force compels you to intervene?

'Is' tells me what I should write in response to your hypothetical scenario, to align you more with reality, rather than continuing the intellectual masturbation which philosophers are notorious for: all talk, no action.

The universe is full of suffering. That 'is'. So you ought to spread and cause suffering? If not, what is your basis for saying so?

We are naturally aligned toward decreasing suffering. I don't know exactly how; so go with what is in every moment, where the low-hanging fruit has to be picked first, in poverty reduction for example. Long-term, probably awareness in humans like you and I; the next item on the list might be existential risk reduction, which seems to have high expected value.

Replies from: moridinamael
comment by moridinamael · 2017-01-19T18:40:33.435Z · LW(p) · GW(p)

'Is' tells me what I should write in response to your hypothetical scenario to align you more with reality, rather than continuing the intellectual masturbation that philosophers are notorious for: all talk, no action.

Not sure what this means. If "Just align with reality!" is your guiding ethical principle, and it doesn't return answers to ethical questions, it is useless.

We are naturally aligned toward the decrease of suffering,

Naw, we're naturally aligned to decrease our own suffering. Our natural impulses and ethical intuitions are frequently mutually contradictory and a philosophy of just going with whatever feels right in the moment is (a) not going to be self-consistent and (b) pretty much what people already do, and definitely doesn't require "clicking".

Sufficiently wealthy and secure 21st-century Westerners sometimes conclude that they should try to alleviate the suffering of others, for a complex variety of reasons. This also doesn't require "clicking".

By the way, you seem to have surrendered on several key points along the way without acknowledging or perhaps realizing it. I think it might be time for you to consider whether your position is worth arguing for at all.

Replies from: ingive
comment by ingive · 2017-01-25T03:27:35.819Z · LW(p) · GW(p)

Not sure what this means. If "Just align with reality!" is your guiding ethical principle, and it doesn't return answers to ethical questions, it is useless.

It does return answers to ethical questions. In fact, I think it will for all of them.

Naw, we're naturally aligned to decrease our own suffering. Our natural impulses and ethical intuitions are frequently mutually contradictory and a philosophy of just going with whatever feels right in the moment is (a) not going to be self-consistent and (b) pretty much what people already do, and definitely doesn't require "clicking".

What if your own suffering is gone and only others' suffering remains, based on intellectual assumptions?

Sufficiently wealthy and secure 21st-century Westerners sometimes conclude that they should try to alleviate the suffering of others, for a complex variety of reasons. This also doesn't require "clicking".

What if that was the goal, and being a wealthy and secure 21st-century Westerner was the means, as with everything else?

By the way, you seem to have surrendered on several key points along the way without acknowledging or perhaps realizing it. I think it might be time for you to consider whether your position is worth arguing for at all.

I didn't surrender; I tried to wake you up. I can easily refute all of your arguments by advising you to gain knowledge of certain things and accept it fully.

Replies from: moridinamael
comment by moridinamael · 2017-01-25T04:57:40.368Z · LW(p) · GW(p)

ingive, I made it an experiment over these last few days to interact with you much more than I would normally be inclined to. I had previously noticed my own tendency to disengage with people online when I suspected that my interactions with them would not lead anywhere useful. I thought there was a possibility that my default tendency was to disengage prematurely, and that I might be missing out on opportunities to learn, or to test myself in various other ways.

What I have learned is that my initial instinct not to engage with you was correct, and that my initial impression of you as essentially a member of a cult was accurate. I had thought there was a chance that I was missing something, or, failing that, a chance that I could actually break through to you by simply pointing out the errors in your thought processes. I thought maybe I could spare you some confusion and pain in your life. I think that neither of those outcomes has come to pass. All I've learned is that I should trust my instincts and remain reserved and cautious in my online persona.

Replies from: ingive
comment by ingive · 2017-01-28T06:20:59.274Z · LW(p) · GW(p)

That's interesting. You haven't simply pointed out the errors in my thought processes. I have yet to see you simply point them out, rather than arguing from assumptions that I can refute with basic reasoning. It's cute that you, for example, assume I don't have an answer to your hypothetical scenarios just because I point out that they're a waste of time. Hypotheticals are intellectual entertainment. But it might have been a better choice to answer your questions from the mindset I was speculating about.

I just watched The Master, which was an aesthetically pleasing movie. It gives some taste of cult/new-age thinking, and I can see myself doing the same type of thinking about other things. I've talked with people with different perspectives and watched such content as well, and I've come to the conclusion that this is human nature. Thinking back over my life, then and now, unfortunately: if you think you're incapable of such thinking, or that you're not part of such a thing right now, you probably are. But that is very confrontational, and I wouldn't be surprised if you, or someone else, denied it without hesitation. I can only tell you this in the hope that you don't reinforce the belief that you probably are not.

I'm going to open my mind now; you're free to reprogram my brain. Tell me, Master, and break through to me. Seriously, I am open-minded.

comment by plethora · 2017-01-23T18:31:52.808Z · LW(p) · GW(p)

The is-ought problem implies that the universe is deterministic

What?

Replies from: ingive
comment by ingive · 2017-01-25T03:41:41.821Z · LW(p) · GW(p)

What?

Because Hume thought the universe 'is', without taking into consideration that it ought to be different because of the probabilistic nature (on one interpretation) of it all.

comment by Luke_A_Somers · 2017-01-16T15:44:26.364Z · LW(p) · GW(p)

P(read sequences) < P(figure this out)

What?