Posts

A rational unfalsifyable believe 2016-07-25T02:15:07.807Z

Comments

Comment by Arielgenesis on Open thread, Jan. 09 - Jan. 15, 2017 · 2017-01-16T05:50:17.340Z · LW · GW

A second language might still be necessary for the cognitive-development effect.

Comment by Arielgenesis on Open thread, Jan. 09 - Jan. 15, 2017 · 2017-01-16T05:49:07.891Z · LW · GW

And very scary, too.

Comment by Arielgenesis on Open thread, Jan. 09 - Jan. 15, 2017 · 2017-01-16T05:48:33.251Z · LW · GW

Given the current status quo, it is impossible. However, I can imagine the political world developing into an atmosphere where Esperanto might be made the lingua franca. Imagine that American and British power continues to decline, and Russia, China, Germany, and maybe India become more influential, leading to a new status quo, a stalemate. Given a sufficiently long stalemate, lasting decades, Esperanto might once again become a politically viable option.

Comment by Arielgenesis on Open thread, Jan. 09 - Jan. 15, 2017 · 2017-01-13T10:39:46.635Z · LW · GW

Are people here interested in having a universal language, and do you have strong opinions on Esperanto?

Comment by Arielgenesis on Open thread, Sep. 26 - Oct. 02, 2016 · 2016-10-01T03:01:57.347Z · LW · GW

I just thought of this 'cute' question and I am not sure how to answer it.

The sample space of an empirical statement is {True, False}. Given an empirical statement, one would assign a prior probability 0 < p < 1 to TRUE and 1 - p to FALSE. One would not assign p = 1 or p = 0 because that would not allow belief updating.

For example: Santa Claus is real.

I suppose most people on LW will assign a very small p to that statement, but not zero. Now my question is: what is the prior probability value for the following statement:

Prior probability cannot be set to 1.
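
(As an aside, here is a minimal sketch of why p = 1 blocks updating, using Bayes' rule with made-up likelihoods:)

```python
# Bayes' rule: posterior = p * L_true / (p * L_true + (1 - p) * L_false).
# If the prior p is exactly 1 (or 0), no evidence can ever move it.

def update(prior, likelihood_true, likelihood_false):
    """Posterior probability of TRUE after observing some evidence."""
    evidence = prior * likelihood_true + (1 - prior) * likelihood_false
    return prior * likelihood_true / evidence

# Evidence that is 20x more likely if the statement is FALSE:
print(update(0.5, 0.01, 0.2))  # ~0.048 -- a moderate prior updates downward
print(update(1.0, 0.01, 0.2))  # 1.0    -- a prior of 1 never moves
```

Whatever the likelihoods are, the posterior stays at 1 once the prior is 1, which is why dogmatic priors are forbidden.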

Comment by Arielgenesis on Open thread, Jul. 25 - Jul. 31, 2016 · 2016-09-29T04:19:51.128Z · LW · GW

Thank you. This reply actually answers the first part of my question.

The 'working' presuppositions include:

  • Induction
  • Occam's razor

I will quote the most important part from Fundamental Doubts:

So, in the end, I think we must allow the use of brains to think about thinking; and the use of evolved brains to think about evolution; and the use of inductive brains to think about induction; and the use of brains with an Occam prior to think about whether the universe appears to be simple; for these things we really cannot unwind entirely, even when we have reason to distrust them. Strange loops through the meta level, I think, are not the same as circular logic.

And this has a lot of similarities with my previous conclusion (with significant differences regarding circular logic and meta-level loops):

a non-contradicting collection of self-referential statements that covers epistemology and axiology

Comment by Arielgenesis on A rational unfalsifyable believe · 2016-07-29T03:25:50.176Z · LW · GW

I will have to copy-paste my answer to your other comment:

Yes, I could. I chose not to. It is a balance between suspension of disbelief and narrative simplicity. Moreover, I am not sure how much credence I should put in recent cosmological theories not being updated in the future, which would make my narrative setup obsolete. I also do not want to burden my reader with needing familiarity with cosmological theories.

Am I not allowed to use such a narrative technique to simplify my story and deliver my point? Yes, I know it is out of touch with the human condition, but I was hoping it would not strain my audience's suspension of disbelief.

Comment by Arielgenesis on A rational unfalsifyable believe · 2016-07-29T03:19:33.336Z · LW · GW

genuine marital relationship

"If Adam is guilty, then the relationship was not genuine." Am I on the right track? or did I misunderstood your question?

Comment by Arielgenesis on A rational unfalsifyable believe · 2016-07-29T03:17:11.889Z · LW · GW

Why are you a theist?

This is very poorly formulated, but there are two foundations to my logic. The first is that I am leaning towards presuppositionalism (https://en.wikipedia.org/wiki/Presuppositional_apologetics). The only way to build a 'map', first of all, is to take a list of presuppositions for granted. I am also interested in that (see my post on http://lesswrong.com/lw/nsm/open_thread_jul_25_jul_31_2016/). The idea is that one school could have a non-contradicting collection of self-referential statements that covers epistemology and axiology, while another school has another, distinct collection. And due to the expense of computation and the lack of information, both maps are equally good at predicting what should and should not happen ("and also what is actually happening and why", which is what scientists, not rationalists, care about).

The other part, which is the basis of this post, is personal experience. All of my personal life experience, up until this point, has "arrived at a posterior where P(God exists) >> P(God does not exist)", in exactly the same way Eve arrived at hers in the OP.

Now, I do realize that this is very crude and not at all solid, not even presentable. But since you asked, there you go.

Comment by Arielgenesis on Open thread, Jul. 25 - Jul. 31, 2016 · 2016-07-28T06:21:01.203Z · LW · GW

We needn't presume that we are not in a simulation, we can evaluate the evidence for it.

How do we not fall into the rabbit hole of finding evidence that we are not in a simulation?

Comment by Arielgenesis on A rational unfalsifyable believe · 2016-07-28T06:14:27.910Z · LW · GW

why does she want to be correct (beyond "I like being right")?

I think that's it: "I like knowing that the person I love is innocent" (which implies that Adam is not lying to her) and "I like being in a healthy, fulfilling, and genuine marital relationship."

Comment by Arielgenesis on A rational unfalsifyable believe · 2016-07-28T06:08:09.723Z · LW · GW

I see... I have been using 'unfalsifiability' and 'lack of evidence' as synonyms. The title should have read: a rational belief without evidence.

Thank you.

Comment by Arielgenesis on A rational unfalsifyable believe · 2016-07-28T06:04:58.688Z · LW · GW

God is a messy concept. As a theist, I lean more towards Calvinistic Christianity. Defining God is very problematic because, by definition, it is something which, in its fullness, is beyond human comprehension.

Could you clarify?

Since ancient times, there have been many arguments for and against God (and the many versions of it). Lately, the arguments against God have developed to a very sophisticated extent, while theists lag very far behind, and there doesn't seem to be any interest in catching up.

Comment by Arielgenesis on A rational unfalsifyable believe · 2016-07-28T05:56:43.917Z · LW · GW

Well... that's part of the story. I'm sure there is a term for it, but I don't know what it is. Something that the story gives and you accept as fact.

Comment by Arielgenesis on A rational unfalsifyable believe · 2016-07-28T04:45:23.046Z · LW · GW

you can make a more sciency argument with recent cosmological theories

Yes, I could. I chose not to. It is a balance between suspension of disbelief and narrative simplicity. Moreover, I am not sure how much credence I should put in recent cosmological theories not being updated in the future, which would make my narrative setup obsolete. I also do not want to burden my reader with needing familiarity with cosmological theories.

Comment by Arielgenesis on Open thread, Jul. 25 - Jul. 31, 2016 · 2016-07-28T04:02:51.485Z · LW · GW

This, and your links to Löb's theorem, is one of the most fear-inducing pieces of writing that I have ever read. Now I want to know if I have understood this properly. I find that the best way to do so is to first explain what I understand to myself, and then to other people. My explanation is below:

I supposed that rationalists would have some simple, intuitive, and obvious presumptions as a foundation (e.g., most of the time, my sensory organs reflect the world accurately). But apparently, rationality puts its foundation on a very specific set of statements, the most powerful, wild, and dangerous of them all: self-referential statements:

  • Rationalists presume Occam's razor because it proves itself.
  • Rationalists presume induction because it proves itself.
  • etc.

And a collection of these self-referential statements (if you collect the right elements) would reinforce one another. Upon this collection, the whole field of rationality is built.

To the best of my understanding, this train of thought is nearly identical to the Presuppositionalism school of Reformed Christian Apologetics.

The Reformed/Presbyterian understanding of the Judeo-Christian God (from here on simply referred to as God) is that God is a self-referential entity, owing to their interpretation of the famous Tetragrammaton. They believe that God is true for many reasons, but chief among them is that God attests himself to be the truth.

Now, I am not making any statement about rationality or presuppositionalism, but it seems to me that there is a logical veil that we cannot get to the bottom of, and it is called self-reference.

The best that we can do is to assemble a non-contradicting collection of self-referential statements that covers epistemology and axiology, and at that point, everyone is rational.

Comment by Arielgenesis on Open thread, Jul. 25 - Jul. 31, 2016 · 2016-07-27T04:14:00.443Z · LW · GW

What are rationalist presumptions?

I am new to rationality and Bayesian ways of thinking. I am reading the Sequences, but I have a few questions along the way. These questions are from the first article (http://lesswrong.com/lw/31/what_do_we_mean_by_rationality/).

Epistemic rationality

I suppose we do presume things, like that we are not dreaming / under a global and permanent illusion by a demon / a brain in a vat / in a Truman Show / in a Matrix. And that, sufficiently frequently, you mean what I think you meant. I am wondering if there is a list of things that rationalists presume and take for granted without further proof. Is there anything that is self-evident?

Instrumental rationality

Sometimes a value can derive from another value (e.g., I do not value monarchy because I hold the value that all men are created equal). But either we have circular values or we take some values to be self-evident ("We hold these truths to be self-evident, that all men are created equal"). I think circular values make no sense. So my question is: what are the values that most rationalists agree to be intrinsically valuable, or self-evident, or that could be presumed to be valuable in and of themselves?

Comment by Arielgenesis on A rational unfalsifyable believe · 2016-07-27T03:24:57.176Z · LW · GW

Thank you for the reply.

My personal answer to the three questions is three yeses. But I am not confident in my own reasoning; that's why I'm here, looking for confirmation. So, thank you for the confirmation.

If we let Eve say "I still think he didn't do it because of his character, and I will keep believing this until I see evidence to the contrary - and if such evidence doesn't exist, I will keep believing this forever" - then yes, Eve is rational

That is exactly what I meant her to say. I just thought I could simplify it, but apparently I lost important points along the way.

Yes, it can be extended to belief in God. Provided we restrict "God" to a REALLY TINY thing.

I am a theist, but I am appalled by the lack of rational apologetics, the abundance of poor ones, and the lack of interest in developing good ones. So here I am, taking baby steps.

Comment by Arielgenesis on A rational unfalsifyable believe · 2016-07-27T03:10:34.053Z · LW · GW

unfalsifiability and lack of evidence, even an extreme one, are orthogonal concern.

That is a very novel concept for me. I understand what you are trying to say, but I am struggling to see if it is true.

Can you give me a few examples where something is "physically unfalsifiable" but "logically falsifiable" and the distinction is of great import?

Comment by Arielgenesis on A rational unfalsifyable believe · 2016-07-27T03:07:05.251Z · LW · GW

human-granularity

I don't understand what it means, even after a Google search, so please enlighten me.

For epistemic rationality

I think so. I think she has exhausted all the possible avenues to reach the truth, so she is epistemically rational. Do you agree?

For instrumental rationality

Now, this is confusing to me as well. Let us forget about the extension for the moment and focus solely on the narrative as presented in the OP. I am not familiar with how value and rationality go together, but I think there is nothing wrong if her value is "Adam's innocence" and it is inherently valuable, an end in itself. Am I making any mistake in my train of thought?

Comment by Arielgenesis on A rational unfalsifyable believe · 2016-07-27T02:36:04.272Z · LW · GW

Would Russell's teapot qualify

Yes, exactly! The issue with that is its irrelevance. It is of no great import to anyone (except the teapot church, which I think is a bad satire of religion; the amount of suspension of disbelief the narrative requires is beyond me). On the other hand, Adam's innocence is relevant, meaningful, and important to Eve (I hope this is obvious from the narrative).

Moreover, since people are assumed to be innocent until proven guilty in the eyes of many legal systems, the burden-of-proof argument from Russell's teapot is not applicable here.

In this twist on Russell's teapot, I think it is rational for Eve to maintain her belief, and that her belief is relevant and the burden of proof is not upon her. And by extension, this argument could be used by theists. But I know that my reasoning is not impeccable, so here I am at Less Wrong.

Comment by Arielgenesis on A rational unfalsifyable believe · 2016-07-27T02:21:52.952Z · LW · GW

is evidence. Not irrefutable evidence

Yes, that's exactly what I had in mind.

The idea of the story is that there is no evidence.

What I meant was that there is no possibility of new evidence.

I also think that Eve is rational. But I'm not sure if I am correct. Thank you for the confirmation.

Comment by Arielgenesis on A rational unfalsifyable believe · 2016-07-25T17:41:39.742Z · LW · GW

not unfalsifiable, it's simply unfalsfied

I am trying to construct a situation where a belief is (1) unfalsified, (2) unfalsifiable, and (3) lacking evidence. How should I change the story so that all three conditions are fulfilled? And in that case, would Eve then be irrational?

Comment by Arielgenesis on A rational unfalsifyable believe · 2016-07-25T17:29:30.539Z · LW · GW

The idea of the story is that there is no evidence, because I think that, in real life, there are sometimes important and relevant things with no evidence. In this case, Adam's innocence is important and relevant to Eve (for emotional and social reasons, I presume), but there is no evidence, and there never will be. Given that, saying "if there is evidence, then the belief could be falsified" is a kind of cheating, because producing new evidence is no longer possible.

Comment by Arielgenesis on A rational unfalsifyable believe · 2016-07-25T17:22:56.704Z · LW · GW

Thank you, that was a very nice extension to the story. I should have included a scenario to make her belief relevant. I agree with you that assigning 100% probability is irrational in her case. But if she is not rationally literate enough to express herself in a fuzzy, non-binary way, I think she would maintain rationality by saying "Ceteris paribus, I prefer not to be locked in the same room with Cain, because I believe he is a murderer, because I believe Adam was innocent" (ignoring the ad hominem).

I was under the impression that the gold standard for rationality is falsifiability. However, I now understand that Eve is rational despite unfalsifiability, because she remains Bayesian.

Comment by Arielgenesis on A rational unfalsifyable believe · 2016-07-25T04:01:22.093Z · LW · GW

What if we were to take one step back, and Adam didn't die? Eve claims that her belief pays rent because it could be falsified if Adam's character changed. In this scenario, I suppose you would agree that Eve is still rational.

Now, I cannot formulate my arguments properly at the moment, but I think it is weird that Adam's death makes Eve's belief irrational, as per:

So I do not believe a spaceship blips out of existence when it crosses the cosmological horizon of our expanding universe, even though the spaceship's existence has no further experimental consequences for me.

http://lesswrong.com/lw/ss/no_logical_positivist_i/

Comment by Arielgenesis on A rational unfalsifyable believe · 2016-07-25T03:57:41.509Z · LW · GW

Thank you for the link. I just read the article. It is exactly what I had in mind, but my mind works better with narratives.

What I am wondering is whether a theist could use this as a foundation for their arguments and remain rational.

Comment by Arielgenesis on Open thread, Jul. 18 - Jul. 24, 2016 · 2016-07-25T02:14:15.192Z · LW · GW

Thank you, that is very helpful. I wish it were said in the FAQ, though I could have missed it. I would have upvoted you if I could.

Comment by Arielgenesis on Open thread, Jul. 18 - Jul. 24, 2016 · 2016-07-24T19:05:16.957Z · LW · GW

Hi, I have a silly question: how do I vote? It seems like it should be obvious, but I cannot see any upvote or downvote button anywhere on this page. I have tried:

  1. Looking at the top of the comment. Next to the OP/TS are the date, then the time, then the points. At the far right is the 'minimize' button.
  2. Looking at the bottom of the comment. I see Parent, Edit, Permalink, and get-notification.
  3. Checking the FAQ, which says "you can vote submissions and comments up or down just like you can on Reddit", but I cannot find the vote button anywhere near comments or posts.

Comment by Arielgenesis on Open thread, Jul. 18 - Jul. 24, 2016 · 2016-07-24T18:52:04.105Z · LW · GW

Post-high education LWers, do you think the place you studied at had a significant effect on your future prospects?

I went to Melbourne University and did an exchange program at UCSD, so I have a comparison. I think the distribution of teaching quality is sufficiently narrow that it should not be a major factor.

There are careers, like politics, where personal connections gathered during university years are very important.

Depending on the job and your part of the world, personal connections might be a very important factor in career success. It is more likely that you would gain more, and better, personal connections at a better university.

Comment by Arielgenesis on Open thread, Jul. 18 - Jul. 24, 2016 · 2016-07-24T18:33:41.978Z · LW · GW

I bought a $1400 mattress in my quest for sleep, over the Internet hence much cheaper than the mattress I tried in the store, but non-returnable. When the new mattress didn’t seem to work too well once I actually tried sleeping nights on it, this was making me reluctant to spend even more money trying another mattress. I reminded myself that the $1400 was a sunk cost rather than a future consequence, and didn’t change the importance and scope of future better sleep at stake (occurring once per day and a large effect size each day).

from http://rationality.org/checklist/

Is it rational for someone to choose NOT to buy another mattress, not because of the sunk cost, but in order to "punish" oneself (carrot-and-stick style) into changing one's behavior and never buying non-returnable, expensive things again (or into being more careful when buying expensive things)?

Comment by Arielgenesis on 16 types of useful predictions · 2016-07-24T17:51:28.037Z · LW · GW

A dollar feels more important than it actually is, so people treat the bets seriously even though they are not very serious.

Although the dollar carries some weight, I think there is also another reason why people take bets more seriously. People adjust their beliefs according to what other people believe and how confident those people are. Therefore, when you propose a bet, even for only a dollar, you are showing a high confidence level, and this decreases their confidence level. As a result, System 2 kicks in and they will be "forced" to evaluate honestly.

Comment by Arielgenesis on 16 types of useful predictions · 2016-07-24T17:45:37.622Z · LW · GW

To the best of my knowledge, the human brain is a simulation machine. It unconsciously makes predictions about what sensory input it should expect. This includes higher-level input, like language and even concepts. This is the basic mechanism underlying surprise and similar emotions. Moreover, it only simulates the things it cares about and filters out the rest.

Given this, I would think that most of your predictions are obsolete, because we are already making them unconsciously. Examples:

  1. You predict you will finish the task one week early, but you end up finishing one day early. You are not surprised. But if you ended up finishing one day late, then you would be surprised. When people are surprised by the same trigger often enough, most normal people, I presume, will update their beliefs. I know this is related to the planning fallacy, but I think my argument still holds water.

  2. You post something on Facebook. You didn't make any conscious prediction about the audience's reaction. You get one million likes. I bet you will be surprised, scratching your head about why and how you could get such a reaction.

Otherwise, I still see some value in what you are doing, not because of prediction per se, but because it effectively mitigates bias. For example: "Predict how well you understand someone's position by trying to paraphrase it back to him." This addresses the illusion of transparency. But I think there is not much more value in making predictions than in simply making a habit of paraphrasing more often, without making predictions.

Making conscious predictions, on top of the unconscious ones, is cognitively costly. I do think it might improve one's calibration and accuracy beyond the improvement made by the surprise mechanism alone. However, the question is: is the calibration and accuracy improvement worth the extra cognitive cost?

Comment by Arielgenesis on [CORE] Concepts for Understanding the World · 2016-07-24T17:15:01.450Z · LW · GW

I'm not sure how you define 'concept'. Based on my understanding, I think you might be missing these:

Feedback (https://en.wikipedia.org/wiki/Feedback): the output of a process is routed back to influence its own cause; negative feedback dampens the cause, while positive feedback reinforces it.

Feedforward (https://en.wikipedia.org/wiki/Feed_forward_(control)): a system reacts to a measured disturbance before the disturbance affects its output, instead of waiting for an error to appear and then correcting it (a toy sketch of both is below, after the list).

Self-fulfilling prophecy (https://en.wikipedia.org/wiki/Self-fulfilling_prophecy): a prediction that comes true because it was made, often because agents react to the prediction itself, sometimes even while trying to prevent it.

Emergence (https://en.wikipedia.org/wiki/Emergence): a collection of simpler elements whose interactions generate a more complex pattern when the elements are connected to each other.
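
(A minimal toy sketch of feedback versus feedforward, with made-up numbers, assuming a heater trying to hold a room at a target temperature:)

```python
# Toy room-heating model contrasting the two control styles.
# All numbers are illustrative, not physical.

TARGET = 20.0      # desired room temperature (deg C)
HEAT_LOSS = 1.5    # heat lost to the outside each step (a measurable disturbance)

def feedback_step(temp):
    """Negative feedback: measure the error AFTER it appears, then correct it."""
    error = TARGET - temp
    heating = 0.5 * error        # proportional correction
    return temp + heating - HEAT_LOSS

def feedforward_step(temp):
    """Feedforward: measure the disturbance itself and cancel it pre-emptively."""
    heating = HEAT_LOSS          # counteract the loss before any error shows up
    return temp + heating - HEAT_LOSS

temp_fb = temp_ff = 18.0
for _ in range(20):
    temp_fb = feedback_step(temp_fb)
    temp_ff = feedforward_step(temp_ff)

print(round(temp_fb, 2))  # ~17.0: corrects toward the target, but settles with a
                          # steady-state offset (a known limit of pure proportional control)
print(round(temp_ff, 2))  # 18.0: the disturbance is cancelled exactly, but the
                          # initial error is never corrected (no feedback)
```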

Comment by Arielgenesis on Welcome to Less Wrong! (8th thread, July 2015) · 2016-07-24T15:50:51.836Z · LW · GW

We'd love to know who you are, what you're doing: I was a high school teacher. Now I'm back at school for an Honours degree and, hopefully, a PhD in science (computational modelling) in Australia. I'm Chinese-Indonesian (my grammar and spelling are a mess), and I'm a theist (leaning toward Reformed Christianity).

what you value: Whatever is valuable.

how you came to identify as an aspiring rationalist or how you found us: My friend, who is now a sister in the Franciscan order of the Roman Catholic Church, recommended Harry Potter and the Methods of Rationality to me.

I think the theist community needs better, more rational arguments for its beliefs. I think the easiest way is to test them against rational people. I hope this is the right place.

I am interested in making rationality more accessible to the general public.

I am also interested in developing an ideal, universal curriculum, and I think rationality should be an integral part of it.