There is something to be said for improving the quality of life as well as saving lives. In scientific and discovery fields such as pure math, contributions could improve the quality of life exponentially.
Does anyone have any unbiased statistics on gender in the workforce, career choice, education, or any other relevant area?
This chart has been extremely helpful to me in school and is full of weird approximations like the two above.
I also would like to point out that Anthony didn't disagree with me when I said it and accepted that assumption. When I can, I'm going to use the arguments that Qiaochu_Yuan made and go back to talk with him to see if he will update as well.
May I ask why the downvotes, if I promise not to rebut and suck up time?
I don't have them any longer. An easy way to do it is to have a friend pick out videos for you (or have someone post links to videos here and have people PM them for the answer). Or, while on YouTube, look for names that you've heard before but don't quite remember clearly, which is not really reliable, but it's better than nothing.
I don't think it works on all inconsistencies, though, just large ones. There is a large mass difference between a box with nothing in it and a box with something in it. This doesn't necessarily work for, let's say, a box with a cat in it and a box with a dead cat in it.
I've been reading the Sequences, but I've realized that less of it has sunk in than I would have hoped. What is the best way to make the lessons sink in?
I found that going to the gym for about half an hour a day improved my posture. Whether this is from increased muscle that helps with posture or simply from increased self-esteem, I do not know, but it definitely helped.
Also, this xkcd comic seems very on topic.
I think I understand now, thank you.
You're right, Einstein was not relevant to your original question. I brought him up because I did not understand the question until
I think you've lost track of why we were talking about Einstein. In the original post, you listed two reasons to believe non-falsifiable things. I asked you to give an example of the first one. Maybe it wasn't sufficiently clear that I was asking for an example which wasn't falsifiable, in which case I apologize, but I was (after all, that's why it came up in the first place). Relativity is falsifiable. A heuristic that beautiful things tend to be true is also falsifiable.
Thanks for leading me to the conclusion that truth does not trump awesomeness, and yes, I now agree with this.
I also have in mind things like finding out whether you were adopted
Good point
How does attacking Eliezer here add to the argument?
What is this referring to?
I think you've lost track of why we were talking about Einstein. In the original post, you listed two reasons to believe non-falsifiable things. I asked you to give an example of the first one. Maybe it wasn't sufficiently clear that I was asking for an example which wasn't falsifiable, in which case I apologize, but I was (after all, that's why it came up in the first place). Relativity is falsifiable. A heuristic that beautiful things tend to be true is also falsifiable.
quote break
Whether or not you do, most people do not optimize for truth. Do you think this is a good thing or a bad thing, and in either case, why?
Perhaps it would be easier for me to replace the word happiness with awesomeness, in which case I could see the argument that optimizing for awesomeness would let me seek out ways to make the world more awesome and would allow the specific circumstances of what I consider awesome to govern which truths to seek out. In this way I can understand optimizing for awesomeness.
I think it is a good thing that most people do not optimize for truth, because if they did I don't think the resulting world would be awesome. It would be a world where many people were less happy, even though it would also probably be a world with more scientific advances.
I suppose that if anyone were to optimize for truth, it would be a minority who wanted to advance science further to make the general population happier, even while the scientists themselves were not always happy. Even in this case I could understand the argument that they were optimizing for awesomeness, not truth, because they thought the resulting world would be more awesome.
I'm not stating a viewpoint on whether or not I agree with your premise; I don't think this is the best group ever, but I have not been here long enough to know if others do.
I would, however, like to point out that
Yet, in spite of these priors, the group you consider yourself member of is somehow the true best group ever? Really? Where's hard evidence for this? I'm tempted to point to Eliezer outright making things up on costs of cryonics multiple times, and ignoring corrections from me and others, in case halo effect prevents you from seeing that he's not really extraordinarily less wrong.
is full of ad hominem attacks that, to me, distract from your argument.
(I'm teasing you to some extent. What I regard to be the answers to many of the questions I'm asking can be found in the Sequences.)
I know the answers to most of these questions can be found in the Sequences, because I read them. However, the Sequences include quite a bit of information, and it is clear that not all of it, or probably even most, made it into the way I think. Your asking me these questions is extremely helpful in filling in those gaps, and I appreciate it.
Why do you believe that? Even given that you believe this is currently true, do you think this is something you should change about yourself, and if not, why?
I believe that because I do not have the mental discipline required both to know a belief is false and still to gain happiness from that belief. It is possible that I could change this about myself, but I don't see myself ever learning the discipline required to lie to myself (if doublethink is actually possible). It's also possible to go the other way and say that something injured my brain and brought my intelligence to a level where I could no longer see why I should think one way instead of another, or could no longer see the truth vs. happiness decision, which would let me pick happiness without lying to myself.
I think that most of reason two is based off of that heuristic, which allows you to gain evidence for the claim even though it remains unfalsifiable, and only weak evidence at that.
I have read them, but it was a long time ago and I was not practicing using the knowledge at the time, so it may not have sunk in as it was supposed to. I will go back now and reread them, thank you.
Thinking about truth vs. happiness, I believe that if given a decision between truth and happiness, it is already too late for me to fully accept happiness. In short, thinking about the decision made the decision. On top of this, I am too curious to avoid thinking about certain topics on which I would be faced with this (non-)decision, so I will always embrace truth over happiness.
What I will now have to think on is: given a friend who aspires to be more rational, yet is not a scientist or somebody similar, if I find a thought pattern that is giving them false but enjoyable results, should I intervene?
As to Einstein, I was not saying that his belief was unfalsifiable, but my thought process, without my conscious knowledge, probably took Einstein's theory as evidence for p(truth|beauty) being higher. If so, I realize that this is only weak evidence.
Why do you want to be a physicist?
I learned of quantum mechanics when I was younger and grew curious about it because it was mysterious. Now quantum mechanics is not mysterious, but the way the world works is, and I am still deeply curious about it.
In what sense was relativity non-falsifiable at the time that Einstein described it?
It was falsifiable, but I was thinking that it was still extraordinarily beautiful.
Also, the quote:
In 1919, Sir Arthur Eddington led expeditions to Brazil and to the island of Principe, aiming to observe solar eclipses and thereby test an experimental prediction of Einstein's novel theory of General Relativity. A journalist asked Einstein what he would do if Eddington's observations failed to match his theory. Einstein famously replied: "Then I would feel sorry for the good Lord. The theory is correct."
This post is somewhat confused. I would recommend that you finish reading the Sequences before making a future post.
I agree that I am putting a post here prematurely, but I thought the criticism of some of my ideas would be worth it so I could fix things before they were ingrained. So thanks for the criticism.
I responded with "truth trumps happiness." Why?
Break of quotes
I often find myself torn between epistemic rationality as a terminal value and its alternative. My thought is that learning to treat truth as the highest goal would be more useful to my career in physics and would be better for the world than if I steered by my closer, less important values.
^from the comment below
I believe that putting truth first will help me as a physicist.
Can you give some examples of beliefs with this property?
Most of the beautiful theories I know of at this point are found in mathematics and not physics (this is due to my math education being much greater than my physics education, even though my intended career is in physics), and I don't think they qualify as proper examples in this circumstance. The best I could come up with in five minutes by the clock is Einstein's theory of relativity, which before experimental predictions were obtained was held to be beautiful and correct.
Why call it a belief instead of an idea, then?
I wanted to call it a belief instead of an idea because when I think of examples such as timeless physics, I believe it's actually how the world works, and it seems much more meaningful than an idea that does not color my perspective on the world. This, however, may simply be me over-defining the word "idea."
And why the emphasis on originality?
You're right, it doesn't necessarily have to be original. I was thinking along the lines that it is much harder to think of an original theory, and since this is a goal of mine I had it in mind while writing this.
Good point. I often find myself torn between epistemic rationality as a terminal value and its alternative. My thought is that learning to treat truth as the highest goal would be more useful to my career in physics and would be better for the world than if I steered by my closer, less important values.
In fact, it's rather worse :) The negation of an unfalsifiable belief is also unfalsifiable - you unfalsifiably believe that Carl's garage does not have an immaterial dragon in it. Even if you make an observation, e.g. you throw a ball to measure the gravitational acceleration, you have an unfalsifiable belief that you have not just hallucinated the whole thing.
As a general principle, it would seem that the negation of an unfalsifiable belief is better than the unfalsifiable belief itself, meaning that the negation holds in a much larger number of possible worlds than the belief does.
For example: there are many more possible ways that Carl does not have an immaterial dragon in his garage than possible ways that he does.
I think a good way to think about this, meaning which unfalsifiable belief to hold, is the evidence that brought it out of the original hypothesis space. In this way timeless physics has a higher probability ratio (p(timeless physics)/p(not timeless physics)) than an immaterial dragon.
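To make the ratio talk concrete, here is a minimal sketch in the odds form of Bayes' theorem; the numbers are illustrative assumptions, not anything from the discussion above:

$$\frac{P(H \mid E)}{P(\neg H \mid E)} = \frac{P(H)}{P(\neg H)} \cdot \frac{P(E \mid H)}{P(E \mid \neg H)}$$

For example, if a hypothesis like timeless physics started at prior odds of 1:9 and some theoretical consideration E were three times as likely if it is true, the posterior odds would become (1/9) × 3 = 1/3, i.e. p rises from 0.1 to 0.25 even though the hypothesis stays unfalsifiable; the immaterial dragon receives no comparable update.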
However, it is a warning flag to me when someone brings up that
you have an unfalsifiable belief that you have not just hallucinated the whole thing.
because of the negligible probability of that belief; giving it power in an argument would both be an example of scope insensitivity and prevent any useful work from being done.
Nevertheless, it reminded me that I should be thinking in terms of probability for unfalsifiable beliefs rather than simply the fact that they're unfalsifiable. Maybe I should revise "conjectures" to mean unfalsifiable beliefs that are within a certain probability margin, say p = 0.2 to p = 0.8. I would still separate them from higher beliefs, because simply labeling them with a probability is still not intuitive enough for me not to confuse them with scope insensitivity.
You have a very good point and have shown me something that I knew better than to do and will have to keep a closer eye on from now on.
That being said, beauty is not enough to be accepted into any realm of science, but thinking about beautiful concepts such as timeless physics could increase the probability of thinking up an original, testable theory that is true.
In particular, I'm thinking of how the notion of absolute time slowed down the discovery of relativity, while if someone had contemplated the beautiful notion of relative time, relativity could have been found much faster.
I think a main reason why I try to correct friends' thought patterns is practice. With friends I get a certain amount of wiggle room: if I accidentally say something that insults them, turns them off of rationality, or would cause a form of social friction, they would be inclined to tell me before it got between us. I can learn what I did wrong, and I don't have to keep bothering the same friend to the point of it actually hampering our friendship.
Lessons learned from this can be used to correct someone's thought patterns when it is much more imperative for you to do so, as in cases where
your ability to accomplish your goals directly depends on their rationality.
and it allows you to teach these people, with whom having social conflict would be very difficult, since they are typically people you have to cooperate with a lot.
I'm reading through all of the Sequences (slowly; it takes a while to truly understand, and I started in 2012), and by coincidence I happen to be at the beginning of metaethics currently. Until I finish I won't argue any further on this subject, due to being confused. Thanks for the help.
Hello and welcome to LessWrong. Your goal of understanding time as the 4th dimension stuck out to me in that it reminded me of a post that I found beautiful and insightful while contemplating the same thing. Timeless physics has a certain beauty to it that resonates with me much better than 4th-dimensional time, and it sounds like something you would appreciate.
I do not ask it because I wanted to stop the discussion by asking a hard question. I ask it because I aspire to do research in physics and will someday need an answer to it. As such, I have been very curious about different arguments on this question. By no means did I mean, by asking this question, that there are things that should not be researched; I meant only to ask how to go about identifying them.
My knowledge of statistics at the time was very much lacking (that being said, I still only have about a semester's worth of stats), so I was not able to do any type of statistical analysis that would be rigorous in any way. I did, however, keep track of my predictions, and went from around 60% on the first day (slightly better than guessing, probably because of the books I mentioned reading) to around 80% about a week later of practicing every day. I no longer have the exact data, though, only approximate percentages of how I did.
I remember also that it was difficult tracking down the cases in which the truth was known, and this was very time consuming; this is the predominant reason that I only practiced like this for a week.
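For anyone wanting to replicate the exercise, here is a minimal sketch of the record-keeping; the clip names and entries are made up for illustration, not my actual data from that week:

```python
# Minimal sketch: track lie-detection predictions against the later-revealed truth.
# All entries are hypothetical placeholders, not real data.
records = [
    # (clip identifier, predicted lying?, actually lying?)
    ("case_001", True,  True),
    ("case_002", False, True),
    ("case_003", True,  False),
    ("case_004", False, False),
]

correct = sum(1 for _, predicted, actual in records if predicted == actual)
accuracy = correct / len(records)
print(f"Correct on {correct}/{len(records)} clips ({accuracy:.0%})")
```

Even a log this simple is enough to notice whether the percentage moves over a week of practice.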
The majority of scientific discoveries (I'm tempted to say all, but I'm 90% certain that there exists at least one counterexample) have very good consequences as well as bad ones. I think the good and the bad usually go hand in hand.
To take the obvious example, nuclear research led to the creation of nuclear weapons but also to the creation of nuclear energy.
At what point could you label research into a scientific field as having too many negative consequences to pursue?
I was not here for the Roko post, and I only have a general idea of what it's about. That being said, I experienced a bout of depression when applying rationality to the second law of thermodynamics.
Two things helped me. 1) I realized that while dealing with a future that is either very unlikely or inconceivably far away, it is hard to properly diminish the emotional impact by as much as is rationally required. Knowing that the emotions felt completely outweigh their cause, you can hopefully realize that acting in the present on those beliefs is irrational, and that ignoring those beliefs would actually help you be more rational. Also realize that giving an improbable future more weight than it deserves is in itself irrational. With this I realized that by trying to be rational I was being irrational, and found that it was easier to resolve this paradox than simply to get over the emotional weight it took to think about the future rationally to begin with.
2) I meditated on the following quote:
People can stand what is true, for they are already enduring it.
-Gendlin
Nothing has changed after you read a post on this website besides what is in your brain. Becoming more rational should never make you lose; after all, Rationality is Systematized Winning. So if you find that a belief you have is making you lose, it is clearly an irrational one or is being thought of in an irrational way.
Hope this helps
I do not believe it would be a good way to practice, because even with actors acting the way they are supposed to (consistent body language and facial expressions), let's say, conservatively, 90% of the time, you are left with 10% wrong data. This 10% wouldn't be that bad except for the fact that it is actors trying to act correctly (meaning you would learn to interpret what a fabricated emotion looks like as a real emotion). This could be detrimental to many uses of being able to read body language, such as telling when other people are lying.
My preferred method has been to watch court cases on YouTube where it has come out afterward whether the person was guilty or innocent. I watch these videos before I know what the truth is, make a prediction, and then read what the truth is. In this way I am able to get situations where the person is feeling real emotions and is likely to hide what they're feeling with fake emotions.
After practicing like this for about a week I found that I could more easily discern whether people were telling the truth or lying, and it was easier to see what emotions they truly felt.
This may not be extremely applicable to the real world, because emotions felt in courtrooms are particularly intense, but I found that it gets my mind used to looking for emotion, which has helped in the real world.
I should also note that I have read many books by Paul Ekman and have used some of his training programs.
If learning to read faces is important to you, I largely recommend SETT and METT, whereas if it's simply a curiosity you're unwilling to spend much money on, I recommend checking out "Emotions Revealed" at your local library.
The first thing that came to mind is that it would only be possible to do this for the original post, because it would be nearly impossible to calculate how many of the readers read each comment. Further, if it were implemented, it would have to count one reader per username, or more specifically one reader per person who can vote. That way if, let's say, I were to read an article but come back multiple times to read different comments, it would not skew the ratio.
As a side note to this, we could also implement a ratio per username that would show (posts read)/(posts voted on), so we would be able to see which users participate in voting at all. This, however, is nowhere near as useful to those who post as the original ratio, and it could have many possible downsides that I'm not going to take the time to think about because it will probably not be considered, but it is a fun idea.
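As a rough illustration of the per-username deduplication I have in mind for the original per-post ratio (the event log and field names are entirely hypothetical; I have no idea how the site actually stores anything), a sketch:

```python
# Hypothetical sketch: a per-post (unique voters)/(unique readers) ratio,
# counting each username at most once no matter how many times they revisit.
from collections import defaultdict

# Each event is (post_id, username, action); the data below is made up.
events = [
    ("post_1", "alice", "read"), ("post_1", "alice", "read"),  # repeat visit, counted once
    ("post_1", "bob",   "read"), ("post_1", "bob",   "vote"),
    ("post_1", "carol", "read"),
]

readers = defaultdict(set)
voters = defaultdict(set)
for post_id, user, action in events:
    if action == "read":
        readers[post_id].add(user)
    elif action == "vote":
        voters[post_id].add(user)

for post_id, users in readers.items():
    ratio = len(voters[post_id]) / len(users)
    print(f"{post_id}: {len(voters[post_id])} voters / {len(users)} readers = {ratio:.2f}")
```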
No rational argument will have a rational effect on a man who does not want to adopt a rational attitude.
Karl Popper
This being said, one should not hesitate to downvote a short message if it does not add at all to the discussion, simply to keep the flow of useful comments without superfluous interruption that would hamper what could otherwise be a constructive argument.
It seems that the prisoner's dilemma mentioned here differs from the typical (at least from my perspective) prisoner's dilemma, in that the reward when both defect is equal to, instead of greater than, the reward for the one who cooperates in the defect/cooperate case. This leads to the outcome that whenever one player (P1) is known to defect, the other (P2) no longer stands to gain anything. Unless the game is repeated, in which case punishments make sense, P2 has no game-theoretic incentive to pick one move over the other outside of made deals such as the ultimatum; the difference between the two moves is only the money P1 walks away with.

So instead of a prisoner's dilemma it turns into P2 having two moves: cooperate (P1 gets money) or defect (P1 gets no money). From here it would seem that even though P1 did something to P2's disadvantage, P2 gains nothing from causing P1 the harm of defecting, and it seems to me that a moral argument could easily be made that P2 must cooperate. This doesn't work for the traditional prisoner's dilemma, because once P1 defects, P2 stands to gain more from defecting than from cooperating.
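A minimal sketch of the two payoff structures, with made-up numbers chosen only to show the difference (they are not taken from the post being discussed):

```python
# Payoffs are (P1 payoff, P2 payoff) indexed by (P1 move, P2 move).
# The numbers are illustrative placeholders, not from the original post.
standard_pd = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),   # mutual defection beats being the lone cooperator
}

variant = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 0),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (0, 0),   # mutual defection only equals the lone cooperator's payoff
}

def p2_best_responses(payoffs, p1_move):
    """Return the move(s) maximizing P2's own payoff, given P1's move."""
    options = {p2: payoffs[(p1_move, p2)][1] for p2 in ("cooperate", "defect")}
    best = max(options.values())
    return [move for move, value in options.items() if value == best]

print(p2_best_responses(standard_pd, "defect"))  # ['defect']: P2 still strictly prefers defecting
print(p2_best_responses(variant, "defect"))      # ['cooperate', 'defect']: P2 is indifferent
```

In the variant, once P1 defects, cooperating costs P2 nothing, which is what leaves room for the moral argument above; in the standard dilemma P2 still strictly prefers to defect.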
I think what Creutzer is trying to say is that in ordinary discourse, meaning everyday problems in which you are not always able to give a thought the time it deserves, when you don't even have five minutes by the clock to think about the problem rationally, it is better to rely on the heuristic "assume people are smart and some unknown context is causing problems" than on the heuristic "people who make mistakes are dumb." That said, heuristics are only good most of the time and may lead you to errors such as
It's epistemically incorrect to adopt a belief "for the purpose of action"
In this case it is still technically an error, but you are merely attempting to be "less wrong" about a case where you don't have time to be correct; assuming the heuristic until you encounter contrary evidence (or have the time to think of better answers) follows closely the point of this website.
I'm going to reply to the quote as if it means "Truth doesn't have a moral valence" and rebut that truth should be held more sacred than morals, rather than simply outside of them. For example, if there are two cases, and case 1 leads to a morally "better" (in quotes because the word better is really a black box) outcome than case 2, but case 1 leads to hiding the truth (including hiding it from yourself), then I would have to think very specifically about it. In short, I abide by the rule "That which can be destroyed by the truth should be," but am wary that this breaks down practically in many situations. So when presented with a scenario where I would be tempted to break this principle for the "greater good" or the "morally better case," I would think long and hard about whether it is a rationalization or whether I did not expend the mental effort to come up with a better third alternative.
From lessons I learned in HPMOR: before making an important decision, ask yourself "What do you think you know, and why do you think you know it?" I have found that this not only shows you which of your knowledge is sound enough to make decisions on, but also shows which pieces of knowledge you're emotionally attached to and would therefore lead to a biased conclusion.
My thoughts on its implications are along these lines: even if cryonics works, or the human race finds some other way of indefinitely increasing the length of the human life span, the second law of thermodynamics would eventually force this prolonged life to be unsustainable. That, combined with adjusting my probability estimates of an afterlife, made me face the unthinkable fact that there will be a day on which I cease to exist regardless of what I do, and I am helpless to stop it. While I was getting over the shock of this I would have sleepless nights, which turned into days when I was too tired to be coherent, which turned into missed classes, which turned into missed grades. In summation, I allowed a truth which would not come to pass for an unthinkable amount of time to change how I acted in the present in a way it did not warrant (being depressed or happy, or any action now, would not change that future).
I'm Shai Horowitz. I'm currently a dual physics and mathematics major at Rutgers University. I first learned of the concepts of "Bayesianism" and "rationality" through HPMOR, and from there I took it upon myself to read the Overcoming Bias posts, which has been an extremely long endeavor that I have almost, but not yet, finished. Through conversation with others in my dorm at Rutgers I have realized just how much this learning has done to my thought process, and it allowed me to hone in on my own thoughts that I could see were still biased and go about fixing them. Through this same reasoning it became apparent to me that it would be largely beneficial to become an active part of the LessWrong community, to sharpen my own skills as a rationalist while helping others along the way.

I embrace rationality for the very specific reason that I wish to be a physicist and realize that in trying to do so I could (as Eliezer puts it) "shoot off my own foot" while doing things that conventional science allows. In the process of learning this I did stall out for months at a time and even became depressed for a while as I was stabbing my weakest points with the metaphorical knife. I do look back and laugh now at the fact that a college student was making incredibly bad decisions to get over the pain of fully embracing the second law of thermodynamics and its implications, which to me seems to be a sign of my progress moving forward. I don't think that I will soon have to face a fact as daunting as that one, and with the knowledge that I know how to accept even that law, I will now be able to accept other truths much more easily.

That being said, even though hard science is my primary purpose for learning rationality, I am a bit of a self-proclaimed polymath and have spent recent times learning more about psychology and cognition than simply the cognitive biases I need to be wary of. I just finished the book "Influence: Science and Practice," which I've heard Eliezer mention multiple times, and very recently, as in this week, my interests have turned to pushing standard ethical theories to their limits, so as to truly understand how to make the world a better place and to unravel the black box that is itself the word "better."

I conclude with this: I would love to talk with anyone, experienced or new to rationality, about pretty much any topic, and would very much like it if someone would message me. Furthermore, if anyone reading this goes to Rutgers University or is around the area, a meetup over coffee or something similar would make my day.
I am new to LessWrong and am coincidentally a student at RVCC. I unfortunately have class until 3:15 but will stop by for the end of the meetup.