Comments

Comment by lesswronguser123 (fallcheetah7373) on Does the "ancient wisdom" argument have any validity? If a particular teaching or tradition is old, to what extent does this make it more trustworthy? · 2024-11-14T13:01:15.857Z · LW · GW

Honestly, the majority of the points presented here are not new and have already been addressed in

https://www.lesswrong.com/rationality 

or https://www.readthesequence.com/ 

I got into this conversation because I thought I would find something new here. As an egoist, I am voluntarily leaving this conversation in disagreement, because I have other things to do in life. Thank you for your time.

Comment by lesswronguser123 (fallcheetah7373) on Does the "ancient wisdom" argument have any validity? If a particular teaching or tradition is old, to what extent does this make it more trustworthy? · 2024-11-13T14:34:25.641Z · LW · GW

Another issue with teaching it academically is that academic thought, like I already said, frames things in a mathematical and thus non-human way. And treating people like objects to be manipulated for certain goals (a common consequence of this way of thinking) is not only bad taste, it makes the game of life less enjoyable.
 

 

Yes, intuitions can be wrong; welcome to reality. Besides, I think schools are bad at teaching things.

 

If you want something to be part of you, then you simply need to come up with it yourself; it will be your own knowledge. Learning other people's knowledge, however, feels to me like consuming something foreign.

Yes, the trick for that is to delete the piece of knowledge you learnt and ask the question: how could I have come up with this myself?

 

Of course, my defense of ancient wisdom so far has simply been to translate it into an academic language in which it makes sense. "Be like water" is street-smarts, and "adaptability is a core component of growth/improvement/fitness" is the book-smarts. But the "street-smarts" version is easier to teach, and now that I think about it, that's what the bible was for.

That just sounds to me like "we need wisdom because people cannot think". Yes, I would agree, considering that when you open Reddit, Twitter, or any other platform, you can find many biases being upvoted. I would agree that a memetic immune system is required for a person unaware of the various background literature needed to bootstrap rationality. I am not advocating for teaching anything; I have no plans to be an activist, nor the will to change society. But consider this: if you know enough rationality, you can easily get past all that.

Sure, a person should be aware when they're drifting from the crowd and not become a contrarian, since reversed stupidity is not intelligence; and even if you only dissent when you have overwhelming reason for it, you're going to have enough problems in your life.

 

I would agree on the latter part regarding good/evil. Unlike other rationalists, this is why I don't have the will to change society. The internet has killed my societal moral compass for good/evil, or, if you prefer to put it that way, made me more egoistic. "Good" just carries a positive System 1 connotation for me; I am just emoting it, but I mostly focus on my life. Or, to be brutally honest about it: I don't care about society as long as my interests are being fulfilled.

The actual truth value of beliefs has no psychological effects (proof: otherwise we could use beliefs to measure the state of reality).

Agreed; the map is not the territory, and it feels the same to be wrong as it feels to be right.

 

It's more likely for somebody to become rich making music if their goal is simply to make music and enjoy themselves, than if their goal is to become rich making music.

Yes, if someone isn't passionate about such endeavours, they may not have the will to sustain them. But if a person is totally apathetic to monetary concerns, they're not going to make it either. So a person may argue, on a meta level, that it's more optimal to be passionate about a field, or to choose a field you're passionate about and want to do better in, in order to overcome akrasia; and there might be some selection bias at play, where a person who's good at something is likely to have a positive feedback loop with the subject.

 

But the "Something to protect" link you sent seems to argue for this as well?

Yes, exactly: truth is in the highest service to other goals, if my phrasing of "highest instrumental value" wasn't clear. But you don't deliberately believe false things; that's what rationality is all about. Truth is nice to have, but usefulness is everything.

Believing false things purposefully is impossible either way; you wouldn't be anticipating them with high probability. That's not how rationalist belief works. When you believe something, that's how reality is to you; you look at the world through your beliefs.

How many great people's autobiographies and life stories have you read?

Not many, but it would be unrepresentative to generalise from that. 

 

But it's ultimately a projection; a worldview does not reveal the world, but rather the person with the worldview.

Ethically yes, epistemically no. Reality doesn't care; this is what society gets wrong. If I am disagreeing with your climate denial or climate catastrophism, I am not proposing what needs to be done; there is a divide between morals and epistemics.

 

"I define rationality as what's correct, so rationality can never be wrong, because that would mean you weren't being rational"

Yes, finally you get my point. We label those things "rationality": the things which work. The virtue of empiricism. Rationality is about having cognitive algorithms with systematically higher returns on whatever it is you want.

 

maps of the territory are inherently limited (and I can prove this)

I would disagree; physics is more accurate than intuitive world models. The act of guessing a hypothesis is reverse-engineering experience, so to speak: you get a causal model which is connected to you in the form of anticipations (this link is part of a sequence, so there's a chance it assumes a lot of background info).

 

When you experience something your brain forms various models of it, and you look at the world through your beliefs. 

 

You're optimizing for "Optimization power over reality / a more reliable map", while I'm optimizing for "Biological health, psychological well-being and enjoyment of existence".
And they do not seem to have as much in common as rationalists believe

That's a misrepresentation of my position: I said truth is my highest instrumental value, not my highest terminal value. Besides, a good portion of hardcore rationalists tend to have something to protect, a humanistic cause which they devote themselves to; that tends to be aligned with their terminal values, however they may see fit. Others may focus solely on their own interests, like health, life, and wellbeing.

To reiterate: you only seek truth as much as it allows you to get what you want, but you don't believe in falsities. That's it.

But if rationality in the end worships reality and nature, that's quite interesting, because that puts it in the same boat as Taoism and myself. Some people even put Nature=God.  

Rationality doesn't necessarily have nature as a terminal value. Rationality is a tool: the set of cognitive algorithms which work for whatever you want, with truth being the highest instrumental value, as you might have read in the Something to Protect article.

 

Rationalists tend to have heavy respect for cognitive algorithms which allow us to systematically get what we desire. They're disturbed if there's a violation in the process which gets us there.

 

Finally, if my goal is being a good programmer, then a million factors will matter, including my mood, how much I sleep, how much I enjoy programming, and so on. But somebody who naively optimizes for programming skills might practice at the cost of mood, sleep, and enjoyment, and thus ultimately end up with a mediocre result. So in this case, a heuristic like "Take care of your health and try to enjoy your life" might not lose out to a rat-race mentality in performance. Meta-level knowledge might help here, but I still don't think it's enough. And the tendency to dismiss things which seem unlikely, illogical, or silly is not as great a heuristic as one would think, perhaps because any beliefs which manage to stay alive despite being silly have something special about them.

None of that is incompatible with rationality; rather, rationality will help you get there. Heuristics like "take care of your health and try to enjoy life" seem more like vague plans to fulfill your complex set of values, which one may keep discovering more about. Values are complex, and there are various posts you can find here which may help you model yourself better and reach reflective equilibrium, which is the best you can do either way, both epistemically and morally (the former of which is much more easily reached by focusing on getting better w.r.t. your values than by focusing solely on truth, as highlighted by the post, since truth is only instrumental).

 

Edit: added some more links, fixed some typos.

Comment by lesswronguser123 (fallcheetah7373) on Does the "ancient wisdom" argument have any validity? If a particular teaching or tradition is old, to what extent does this make it more trustworthy? · 2024-11-12T14:02:35.084Z · LW · GW

I even think it's a danger to be more "book smart" than "street smart" about social things.

Honestly, I don't know enough about people to tell if that's really the case. For me, book smarts become street smarts when I make them truly a part of me.

That's how I live, anyway. For me, when you formalise street smarts, they become book smarts to other people, and the latter are likely to yield better predictions, aside from the places where you lack compute, as in the case of society, where most people don't use their brains outside of social/consensus reality. So maybe you're actually onto something here, along the lines of "don't tell them the truth because they cannot handle it", lol.

 

Astrology is wrong and unscientific, but I can see why it would originate. It's a kind of pattern recognition gone awry.

Well, since I wanted to dismantle the Chesterton's fence, I did reach conclusions similar to yours regarding why it came to be and why they (the ancients) fell for it; the correlation-causation one is the general-purpose explanation. One major reason was agriculture, where astrology was likely to work well due to the common cause of seasons and relative star movement. So it can also be thought of as faulty generalisation.

 

If you had used astrology yourself, it might have ended better, as you'd be likely to interpret what you wanted to be true, and your belief that your goal in life was fated to come true would help against the periodic doubt that people face in life.

That's false: I wouldn't have socially demotivated my mom, through my apathy, from wasting too much money on astrology; if I had been enthusiastic about it, that would have fueled her desire instead. Astrology is like the false hope of a lottery, a waste of emotional energy.

I would also have been likely to fall for the other delusions surrounding astrology instead of spending that time learning about things, for example going on a pilgrimage for a few weeks before exams, etc.

Besides, astrology predicts everything on the list of usual human behavior, and so more or less ends up predicting nothing.

 

Lastly, "systematic optimality" seems to suffer from something like Goodhart's law. When you optimize for one variable, you may harm 100 other variables slightly without realizing it (paperclip optimizers seem like the mathematical limit of this idea). Holistic perspectives tend to go wrong less often.

Well, more or less, "rational" is w.r.t. cognitive algorithms; you tend to have one variable: achieving goals. And cognitive algorithms which are better at reaching certain goals are more rational w.r.t. that goal.

There is a distinction made between truth-oriented epistemic rationality and day-to-day, goal-oriented instrumental rationality, but to me they're pretty similar, in that for epistemic rationality the goal is truth.

I think the distinction was made because there's a significant amount of epistemics in rationality.

If your goal is optimising 100 variables, then go with it. For a rationalist, truth tends to be their highest instrumental value; that's the main difference, imo, between a rationalist and, say, a post-rationalist or a pre-rationalist. They can have other terminal values above that, like life, liberty, and the pursuit of happiness, etc.

In case you're not aware of the difference between terminal and instrumental values.

 

I'm personally glad that people who chase money or fame above all end up feeling empty, for you might as well just replace humanity with robots if you care so little for experiencing what life has to offer.

I think it again depends on value being a 2-place function. Some people may find fulfillment in that; I have met some people who're like that. I think quite a bit of the literature on the topic is a bit biased in favour of common morality.

 

But why did Nikola Tesla's intelligence not prevent him from dying poor and lonely? Why was Einstein so awkward? Why do so many intelligent people not enjoy life very much?

I think you would need to provide evidence for such claims; my prior is set against them, given the low amount of evidence I have encountered, and I cannot update just because some cultural wisdom says so, because cultural wisdom is often wrong.

 

For reference, I used to think rationally, I hated the world, I hated people, I couldn't make friends, I couldn't understand myself.

 

Then you weren't thinking rationally. To quote:


 If you say, “It’s (epistemically) rational for me to believe X, but the truth is Y,” then you are probably using the word “rational” to mean something other than what I have in mind. (E.g., “rationality” should be consistent under reflection—“rationally” looking at the evidence, and “rationally” considering how your mind processes the evidence, shouldn’t lead to two different conclusions.)

Similarly, if you find yourself saying, “The (instrumentally) rational thing for me to do is X, but the right thing for me to do is Y,” then you are almost certainly using some other meaning for the word “rational” or the word “right.” I use the term “rationality” normatively, to pick out desirable patterns of thought.

~ What Do We Mean By Rationality?

Also check Firewalling the Optimal from the Rational and Feeling Rational.

 

You may even learn something about rationality from the experience, if you are already far enough grown in your Art to say, "I must have had the wrong conception of rationality," and not, "Look at how rationality gave me the wrong answer!"

~ Something to Protect

Also check No One Can Exempt You From Rationality's Laws.

 

 

And I disagree with a few of the moral rules because they decrease my performance in life by making me help society. Finally, my value system is what I like, not what is mathematically optimal for some metric which people think could help society experience less negative emotions (I don't even think this is true or desirable)

Well, then you can be mathematically optimal for the other metric. The laws of decision theory don't stop working if you change your utility function, unless you want to get money-pumped lol, in which case your preferences are circular. Yes, you might argue that we're not knowledgeable enough to figure out what our values will be in various subject areas; there's a reason we have an entire field of AI alignment due to various such issues, and there are various problems with inferring our desires and the limits of introspection.
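To make the money-pump point concrete, here is a minimal sketch (my own toy construction, not from the thread) of an agent with circular preferences A > B > C > A paying a small fee for each "upgrade" and cycling back to where it started, strictly poorer:

```python
# Toy money pump: the agent prefers A over B, B over C, and C over A,
# and will pay a small fee to trade up at every step of the cycle.
swap = {"B": "A", "C": "B", "A": "C"}  # circular preferences

def money_pump(item, wealth, fee, steps):
    for _ in range(steps):
        # Each trade looks like an improvement locally, but the cycle
        # guarantees the agent ends up holding the same item with less money.
        item, wealth = swap[item], wealth - fee
    return item, wealth

print(money_pump("B", wealth=10.0, fee=1.0, steps=6))  # ('B', 4.0)
```

A consistent utility function rules this out, since you can't have u(A) > u(B) > u(C) > u(A).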

Comment by lesswronguser123 (fallcheetah7373) on Does the "ancient wisdom" argument have any validity? If a particular teaching or tradition is old, to what extent does this make it more trustworthy? · 2024-11-12T08:49:33.724Z · LW · GW

There's an entire field of psychology, yes, but most men are still confused by women saying "it's fine" when they are clearly annoyed. Another thing is women dressing up because they want attention from specific men. Dressing up in a sexy manner is not a free ticket for any man to harass them, but socially inept men will say "they were asking for it" because the whole concept of selection and standards doesn't occur to them in that context. And have you read Niccolò Machiavelli's "The Prince"? It predates psychology, but it is psychology, and it's no worse than modern books on office politics and such, as far as I can tell. Some things just aren't improving over time.

 

I think the majority of people aren't aware of psychology and the various fields under it. Ethics and decision theory give a lot of clarity into such decisions when you analyse the payoff matrix. I haven't read The Prince, but I have read excerpts from it in the self-improvement-related diaspora. I am not denying the value which such literature gives us; I just think we should move on by learning from it and building on top of it in light of newer methods.

Besides, I am more of a moral anti-realist, so lol. I don't think there are universally compelling arguments for these ethical things, but people with enough common psychological and cultural ground can cooperate.

 

Modern hard sciences like mathematics are too inhuman (autistic people are worse at socializing because they're more logical and objective). 

Well, it depends on your definition of inhuman; my_inhuman ≠ your_inhuman, since value is a two-place function. My peers, when I was in high school, found at least one of the hard sciences fun. Like them, I find the hard sciences pretty cool to learn about for fulfilling my other goals.

And modern soft sciences are frankly pathetic 

Agreed, some fields under psychology are pathetic. But fields like cognitive biases etc. are not.

 

and that it hasn't failed you much in the social parts?

Well, astrology has clearly failed me; my mom often had these luddite-adjacent ideas about what I am meant to do in life, because her entire source of ethics was astrology. Astrology in career advice is like rolling a die and assigning all the well-known professions to a number, rather than going by actual life satisfaction or value fulfillment.

"The rules of the brain are different than those of math, if you treat the brain like it's supposed to be rational, you will always find it to be malfunctioning for reasons that you don't understand"

I would strongly disagree on the front of intelligence. Becoming more rational, as in acquiring cognitive algorithms which tend to lead to systematic optimality, in this case truth-seeking/achieving goals, is indeed possible and is pretty much a part of growth.

I would weakly disagree on the front of Internal Family Systems (with the internal double crux special case being extremely useful) and other introspective reductionist methods, where you break down your emotional responses and processes into parts, understand what you like/dislike, and make various attempts to bridge the two. On this front there is a plethora of competing theories, owing to the easy problem of consciousness and the difficulty of understanding experience functionally.

And as for my brain not working as I want it to: when I model other parts of this brain, I find it emotionally engaged in things which aren't optimal for some of my goals, and it isn't contradictory with rationality to acknowledge or deal with these feelings.

I was praising Goggins because he's more the type who is willing to fight himself, and in more than half of the introspective models, doing that without acknowledgement borders on self-harm. I find his strategy to be intuitively much better, lol.

 

Where I would agree is that if you don't understand something, then your theory is probably wrong. There are no confusing facts, only models which are confused by facts.

 

Too many geniuses have failed at living good lives for me to believe that intelligence is enough. 

I think growth is important; I like to think of intelligence as compute power, and of growth and learning as changing algorithms. Besides, there are a good number of correlations with IQ you might want to look into. I think this area is very contentious (I got a System 1 response to check the social norms, due to past bans, lol), but we're on LessWrong, so you can continue.

 

This might be why I have the courage to criticize science on LW in the first place.

You're welcome; maybe you should read the Sequence Highlights to get introduced to LW's POV and understand other people's positions here.

Comment by lesswronguser123 (fallcheetah7373) on Does the "ancient wisdom" argument have any validity? If a particular teaching or tradition is old, to what extent does this make it more trustworthy? · 2024-11-12T01:16:41.188Z · LW · GW

Science hasn't increased our social skills nor our understanding of ourselves, modern wisdom and life advice is not better than it was 2000 years ago. 

 

Hard disagree; there's an entire field of psychology, decision theory, and ethics using reflective equilibrium in light of science.

 

Ancient wisdom can fail, but it's quite trivial for me to find examples in which common sense can go terribly wrong. It's hard to fool-proof anything, be it technology or wisdom.

Well, some things go wrong more often than others; wisdom goes wrong a lot of the time. It isn't immune to memetic selection; there is not much of a mechanism to prevent you from falling for false memes. Technology, after one point, goes wrong wayyy less. A biology textbook is much more likely to be accurate, and better on medical advice, than an ayurvedic textbook.

 

The whole "Be like water" thing is just flexibility/adaptability
 

Yes, it's a metaphor for adaptiveness, but I don't understand where it may apply other than being a nice way to say "be more adaptive". It's like a logical model, like maths, but for adaptiveness: you import the idea of water-like adaptiveness into situations.

 

As for that which is not connected to reality much (wisdom which doesn't seem to apply to reality), it's mostly just the axioms of human cognition/nature.

You know what might be an axiom of human cognition? Bayes' rule and the other axioms of statistics. I have found that I can bypass a lot of wisdom by using these axioms where others are stuck without a proper model in real life due to ancient wisdom. E.g., I stopped taking ayurvedic medication which contained carcinogens; when people spend hours agonising over certain principles in ethics or decision theory, I know the laws that prevent such confusion; etc.
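As a concrete illustration of the kind of bookkeeping Bayes' rule buys you (a minimal sketch; the numbers below are made up for illustration, not from any study): a remedy that "worked for me" is weak evidence if recovery is common anyway.

```python
def posterior(prior, p_evidence_if_true, p_evidence_if_false):
    """Bayes' rule: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
    joint_true = p_evidence_if_true * prior
    joint_false = p_evidence_if_false * (1 - prior)
    return joint_true / (joint_true + joint_false)

# Prior that the remedy works: 10%. Recovery happens 90% of the time if it
# works, but 80% of the time anyway (illnesses mostly pass on their own).
print(posterior(prior=0.10, p_evidence_if_true=0.90, p_evidence_if_false=0.80))
# ~0.11 -- one recovery story barely moves the needle
```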

 

If you're in a good mood then the external world will seem better too. A related quote is "As you think, so you shall become", which is oddly similar to the idea of cognitive behavioural therapy.

Honestly, I agree with this part; I think this is the biggest weakness of rationalism. I think the failure to overcome akrasia in a general-purpose way is a failure of rationality. I find it hard to believe that there could be a person like David Goggins who is also a rationalist. The obsession with accuracy doesn't play well with the romanticism of motivation and self-overcoming; it's a battle you have to fight and figure out daily, and under the constraints of reality it becomes difficult.
 

Comment by lesswronguser123 (fallcheetah7373) on Does the "ancient wisdom" argument have any validity? If a particular teaching or tradition is old, to what extent does this make it more trustworthy? · 2024-11-11T12:03:15.635Z · LW · GW

I don't think I can actually deliberately believe a falsity; it's probably going to end up as belief in belief rather than self-deception.

Besides, having false, ungrounded beliefs is likely not to be utility-maximising in the long run; it's a short-term-pleasure kind of thing.

Beliefs inform our actions, and having false beliefs will lead to bad actions.

I would agree with the Chesterton's fence argument, but once you understand that the reasons for the said belief lie in its psychological nature rather than its truthfulness, holding onto it is just rationalisation.

Ancient wisdom is more of an "it works until it doesn't" kind of wisdom: you have heuristics which reach certain benign conclusions, but then fail miserably when they do fail.

Besides, someone once thought up such wisdom, and it's been downhill ever since, with people forgetting it and reinventing it. Science, on the other hand, progresses with each generation.

But when you do have a veridical, gears-level model, on the other hand, you can be damn sure the thing will work.

Comment by lesswronguser123 (fallcheetah7373) on how to truly feel my beliefs? · 2024-11-11T11:47:04.000Z · LW · GW

Check the page on aliefs

Comment by lesswronguser123 (fallcheetah7373) on UFO Betting: Put Up or Shut Up · 2024-11-11T01:48:30.195Z · LW · GW

Do too-hard-to-win bets make you wary of something unpredictably going right?

Comment by lesswronguser123 (fallcheetah7373) on O O's Shortform · 2024-11-10T06:36:32.037Z · LW · GW

Well, that tweet can easily be interpreted as overconfidence in their own side; I don't know whether Vance would go on to be more of a rationalist and analyse his own side evenly.

Comment by lesswronguser123 (fallcheetah7373) on quila's Shortform · 2024-11-10T04:05:35.044Z · LW · GW

I think the post was a deliberate attempt to overcome that psychology. The issue is that you can get stuck in these loops of "trying to try" and convincing yourself that you did enough; this is tricky because it's very easy to rationalise this part for the sake of comfort.

Setting up to win v/s trying to set up to win.

The latter is much easier to do than the former, and the former still implies a chance of failure, but you actually try to do your best, rather than trying to try to do your best.

I think this sounds convoluted; maybe there is a much easier cognitive algorithm to overcome this tendency.

Comment by lesswronguser123 (fallcheetah7373) on quila's Shortform · 2024-11-10T03:06:22.429Z · LW · GW

Trying to do good.

 

"No!  Try not!  Do, or do not.  There is no try."
       —Yoda

Trying to try

Comment by lesswronguser123 (fallcheetah7373) on Shortform · 2024-11-10T02:35:55.431Z · LW · GW

I thought we had a bunch of treaties which prevented that from happening? 

Comment by lesswronguser123 (fallcheetah7373) on Twelve Virtues of Rationality · 2024-11-03T13:12:34.452Z · LW · GW

I think it's a hyperbole; one can still progress, but in one sense of the word it is true. Check The Proper Use of Humility and The Sin of Underconfidence.

Comment by lesswronguser123 (fallcheetah7373) on The "Intuitions" Behind "Utilitarianism" · 2024-11-03T11:26:02.749Z · LW · GW

I don't say that morality should always be simple.  I've already said that the meaning of music is more than happiness alone, more than just a pleasure center lighting up.  I would rather see music composed by people than by nonsentient machine learning algorithms, so that someone should have the joy of composition; I care about the journey, as well as the destination.  And I am ready to hear if you tell me that the value of music is deeper, and involves more complications, than I realize - that the valuation of this one event is more complex than I know. 

I wonder if he lived up to that standard, given we have genAI like Suno and Udio now.

Comment by lesswronguser123 (fallcheetah7373) on JargonBot Beta Test · 2024-11-03T03:42:31.487Z · LW · GW

I recommend having this question in the next LessWrong survey.

 

Something along the lines of "How often do you use LLMs, and what is your use case?"

Comment by lesswronguser123 (fallcheetah7373) on Cipolla's Shortform · 2024-10-21T17:23:43.595Z · LW · GW

Is this selection bias? I have known people who are overconfident and got nowhere.

I don't think it's independent of smartness; a smart and conscientious person is likely to do better.

Comment by lesswronguser123 (fallcheetah7373) on is there a big dictionary somewhere with all your jargon and acronyms and whatnot? · 2024-10-19T12:27:52.659Z · LW · GW

https://www.lesswrong.com/tag/r-a-z-glossary

 

I found this by accident, and luckily I remembered glancing over your question.

Comment by lesswronguser123 (fallcheetah7373) on Open Thread Fall 2024 · 2024-10-17T13:58:07.245Z · LW · GW

It would be an interesting meta post if someone did an analysis of each of those traction peaks due to various news or other articles.

Comment by lesswronguser123 (fallcheetah7373) on Causal Diagrams and Causal Models · 2024-10-13T09:24:15.467Z · LW · GW

Accessibility error: half the images on this page appear not to load.

Comment by lesswronguser123 (fallcheetah7373) on Where to find reliable reviews of AI products? · 2024-10-12T18:23:13.852Z · LW · GW

Have you tried https://alternativeto.net ? It may not be AI-specific, but it was pretty useful for me for finding lesser-known AI tools with a particular set of features.

Comment by lesswronguser123 (fallcheetah7373) on Skill: The Map is Not the Territory · 2024-10-12T15:35:19.899Z · LW · GW

Error: the "mainstream status" at the bottom of the post links back to the post itself instead of to the comments.

Comment by lesswronguser123 (fallcheetah7373) on MakoYass's Shortform · 2024-10-11T16:44:29.850Z · LW · GW

I prefer "System 1: fast thinking or quick judgement"

vs.

"System 2: slow thinking".

I guess it depends on where you live, who you interact with, and what background they have, because "fast vs slow" covers the inferential distance fastest for me: it avoids the spirituality/intuition woo-woo landmine, avoids the part where you highlight a trivial thing in their vocabulary called "reason", etc.

Comment by lesswronguser123 (fallcheetah7373) on Notes on Optimism, Hope, and Trust · 2024-10-11T15:11:48.604Z · LW · GW

William James (see below) noted, for example, that while science declares allegiance to dispassionate evaluation of facts, the history of science shows that it has often been the passionate pursuit of hopes that has propelled it forward: scientists who believed in a hypothesis before there was sufficient evidence for it, and whose hopes that such evidence could be found motivated their researches.

 

Einstein's Arrogance seems like a better explanation of the phenomenon to me.

Comment by lesswronguser123 (fallcheetah7373) on Shortform · 2024-09-30T10:29:38.973Z · LW · GW

I remember this point that Yampolskiy made on a podcast, arguing for the impossibility of AGI alignment: that as a young field, AI safety had underwhelming low-hanging fruits. I wonder if all of the major low-hanging ones have been plucked.

Comment by lesswronguser123 (fallcheetah7373) on GeneSmith's Shortform · 2024-09-08T09:16:49.397Z · LW · GW

I thought it was kind of known that a few of the billionaires were rationalist-adjacent in a lot of ways, given that effective altruism caught on with billionaire donors. Also, in the emails released by OpenAI (https://openai.com/index/openai-elon-musk/), there is a link to SlateStarCodex forwarded to Elon Musk in 2016, and Elon attended Eliezer's conference, iirc. There are quite a few places in the adjacent circles where you could find them, which already hint at this possibility, like basedbeffjezos's followers being billionaires, etc. I was kind of predicting that some of them would read popular things on here as well, since they probably have overlapping peer groups.

Comment by lesswronguser123 (fallcheetah7373) on [New Feature] Your Subscribed Feed · 2024-08-10T11:39:05.065Z · LW · GW

A few feature suggestions (I am not sure if these are feasible):

1) Folders OR sort-by-tag for bookmarks.

2) When I close the hamburger menu on the frontpage, I don't see a need for the posts to not be centred. It's unusual; it might make more sense if there was a way to double-stack them side by side, like Mastodon.

3) An RSS feature for subscribed feeds? I don't like using email, because too many subscriptions cause spam.

(Unrelated: can I get de-ratelimited lol, or will I have to write quality posts for that to happen?)

Comment by lesswronguser123 (fallcheetah7373) on What is an agent in reductionist materialism? · 2024-08-05T17:03:28.307Z · LW · GW

I usually think of this in terms of Dennett's concept of the intentional stance, according to which there is no fact of the matter of whether something is an agent or not. But there is a fact of the matter of whether we can usefully predict its behavior by modeling it as if it was an agent with some set of beliefs and goals.

 

That sounds awfully like asserting agency to be a mind projection fallacy.

Comment by lesswronguser123 (fallcheetah7373) on "... than average" is (almost) meaningless · 2024-07-25T14:45:18.927Z · LW · GW

Sorry for the late reply, I was looking through my past notifs. I would recommend that you taboo the words and replace the symbols with the substance; I would also recommend treating language as instrumental, since words don't have inherent meaning; that's how an algorithm feels from inside.

Comment by fallcheetah7373 on [deleted post] 2024-07-18T14:30:12.026Z

Is this the copy of the video which has been listed as removed? @Raemon

 

Comment by lesswronguser123 (fallcheetah7373) on Daniel Kokotajlo's Shortform · 2024-07-11T03:21:02.607Z · LW · GW

Comment by lesswronguser123 (fallcheetah7373) on No Safe Defense, Not Even Science · 2024-07-06T10:42:13.987Z · LW · GW

It is surely the case for me. I was raised a Hindu nationalist; I ended up trusting various sides of the political spectrum, from far right to far left, along with a porn addiction, and later ended up trusting science and technology without thinking for myself. Then I fell into epistemic helplessness, and working 16 hr/day as a denial of the situation led to me getting sleep paralysis. Later, my father died due to his faulty beliefs in naturopathy and alternative medicine; honestly, due to his contrarian bias, he didn't go to a modern-medicine doctor. I was 16 back then (last year). All of which eventually led me here, initially very skeptical of anything but my default common-sense intuition, until I realised the cognitive biases I had fallen for, etc., and so on.

Comment by lesswronguser123 (fallcheetah7373) on Againstness · 2024-07-02T09:15:35.239Z · LW · GW

Most useful post; I was intuitively aware of these states, so thanks for providing the underlying physiological underpinning. I am aware enough to actually feel a sense of tension in my head in SNS-dominated states, and I noticed that I was biased during those states; my predictions seem to align well with the literature.

Comment by lesswronguser123 (fallcheetah7373) on Open Thread Summer 2024 · 2024-06-28T12:32:58.635Z · LW · GW

Why does lesswrong.com have the bookmark feature without a way to sort bookmarks, as in using tags or maybe even subfolders? Unless I am missing something, I think it might be better if I just resort to the browser's bookmark feature.

Comment by lesswronguser123 (fallcheetah7373) on "... than average" is (almost) meaningless · 2024-06-21T05:22:32.332Z · LW · GW

I think what they mean is the intuitive notion of typicality rather than the statistical concept of average.

 

98 seems approximately 100,

but 100 doesn't seem approximately 98, due to how this heuristic works.

That is, typicality is a System 1 heuristic over a similarity cluster, and it's asymmetric.
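A minimal sketch of that asymmetry (a toy model of my own construction, not from the post): if "a seems approximately b" is scored against how salient an anchor b is, round numbers like 100 pull 98 toward them more than the reverse.

```python
def salience(n):
    """Toy anchor strength: round numbers are stronger cognitive anchors."""
    score = 1.0
    if n % 10 == 0:
        score += 1.0
    if n % 100 == 0:
        score += 1.0
    return score

def seems_approximately(a, b):
    """Directional judgement: comparing a *to* the anchor b."""
    return salience(b) / (1.0 + abs(a - b))

print(seems_approximately(98, 100))  # 1.0
print(seems_approximately(100, 98))  # ~0.33: the relation is asymmetric
```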

Here is the post on typicality, from the A Human's Guide to Words sequence.

 

To interpret what you meant when you said "my hair has grown above average": you have an extension which you refer to with the words "average hair", and you find yourself on the outer ends of this extensional cluster in hairspace. Ideally you would craft an intension for this extension: instead of "average hair, as in the mathematical concept, sum of terms / number of terms", something like "that's the amount of hair growth I tend to experience usually". Now, this statement may or may not be accurate, based on how much data you have provided to your inner sim. Or, if by "average hair" you mean "the societal stereotype of average hair growth", then that would be subject to cultural factors, like what shows you watch, etc.

(Also, if you reply, I won't be able to respond: I have been rate-limited to one post per 2 days for a year on LessWrong.)

Comment by lesswronguser123 (fallcheetah7373) on Turbocharging · 2024-06-16T18:13:06.550Z · LW · GW

The student employing version one of the learning strategy will gain proficiency at watching information appear on a board, copying that information into a notebook, and coming up with post-hoc confirmations or justifications for particular problem-solving strategies that have already provided an answer.

 

Ouch, I wasn't prepared for direct attacks, but thank you very much for explaining this :). I now know why some of my experienced self's later strategies, like "if I was at this step, how would I figure this out from scratch?" and "what will the teacher teach today, based on previous knowledge?", worked better, or felt more engaging from my POV (I love maths, and it was normal for me to try to find ways to engage more).

 

 

But this tells me I should apply the Rationality: A-Z techniques to learning more often... given how this is just anticipation control, fake causality, replacing the symbol with the referent, and positive bias.

Comment by lesswronguser123 (fallcheetah7373) on Thinking harder doesn’t work · 2024-06-08T10:14:01.924Z · LW · GW

Leaning into the obvious is also the whole point of every midwit meme.

Midwit

I would argue this is not a very good example; "do the obvious thing" just implies that you have a higher prior on a plan or a belief, and you are choosing to believe it without looking for further evidence.

It's epistemically arrogant to assume that your prior will always be correct.

Although, if you are experienced in a field, it probably took your mind a lot of epistemic work to isolate a hypothesis/idea/plan in the total space of them, while doing the inefficient Bayesian processing in the background.

 

The root issue is that reality has a surprising amount of detail. All models are wrong. The map is not the territory.

We look at the territory via our beliefs; I think intuition is just a model by another name. A true map corresponds to the territory. I think the surprising amount of detail comes from our brain's inability to comprehend the raw truth: the levels of reality lie on the map, and these levels can often leave out minor details, since we cannot compute them all. Our higher-level maps are just approximations of the fundamental reality.

 

The emissary’s narrow, analytical view of the world and desire to have everything fully under control, cut it into pieces and arrange it in ways it can fully grasp, is inadequate for dealing with the complexities of reality. 

I think there are a lot of sequences on this topic: on how our intuitions about categorisation, which evolved to deal with the complexities of the world, aren't adequate and can often need the help of reductionism.

There is a reason why mathematicians talk about the 3Bs: bus, bad, bed. This is where we have our best ideas.

Eureka moments don’t happen when you try to force it.

That is diffuse vs focused thinking; you cannot really distill and tell which eurekas are real eurekas and which are fake ones without doing the focused part after the hypothesis-generator part of the brain does its thing.

This is once again a fact our left brain likes to ignore as the chemicals in our body are not something fully under its control and this potentially diminishes its importance.

Uhh, I mean, I just don't understand why this post first criticises the left brain for valuing truth and then comes back at it for not valuing truth...

 

Also (unless there has been further research making a comeback), the premise of the post, the left/right-brain personality dichotomy, is inaccurate.

Comment by lesswronguser123 (fallcheetah7373) on Toward A Bayesian Theory Of Willpower · 2024-06-05T04:39:54.507Z · LW · GW

This theory seems to explain all observations, but I am not able to figure out what it doesn't explain in day-to-day life.

Also, for the last picture, the key lies in looking straight at the grid and not the noise; then you can see the straight lines, although it takes a bit of practice to reduce your perception to that.

Comment by lesswronguser123 (fallcheetah7373) on The Magnitude of His Own Folly · 2024-06-01T11:51:37.950Z · LW · GW

Obviously this isn't true in the literal sense that if you ask them, "Are you indestructible?" they will reply "Yes, go ahead and try shooting me." 

 

Oh well, I guess meta-sarcasm about guns is a scarce finding in your culture, because I remember non-zero times when I have said this, as recently as months ago. (Also, I emotionally consider myself mortal, if that means I expect to die just like the 90% of other humans who have ever lived, and like my father.)

Comment by lesswronguser123 (fallcheetah7373) on A Technical Explanation of Technical Explanation · 2024-05-29T09:36:57.951Z · LW · GW

Bayesian probability theory is the sole piece of math I know that is accessible at the high school level

They teach it here without the glaring implications, because those don't come up in exams. Also, I was extremely confused by the counterintuitive nature of probability until I stumbled upon this place and realised my intuitions were wrong.

Comment by lesswronguser123 (fallcheetah7373) on Advice for Activists from the History of Environmentalism · 2024-05-18T16:46:04.362Z · LW · GW

instead semi-sensible policies would get considered somewhere in the bureaucracy of the states?

Whilst normally having radical groups is useful for shifting the Overton window or abusing anchoring effects, in this case study of environmentalism I think it backfired, from what I can understand, given that polling data showed the public in the sample country already cared about the environment.

Comment by lesswronguser123 (fallcheetah7373) on Mental Masturbation and the Intellectual Comfort Zone · 2024-05-08T10:01:54.399Z · LW · GW

I think the hidden motives are basically rationalisation. I have found myself singlethinking those motives in the past; nowadays I just bring those reasons to centre stage and try to actually find out whether they align with my commitments, instead of motivated stopping. Sometimes I corner my motivated reasoning (the bottom line) so badly (since it's not that hard to just do expected-consequences reasoning properly for day-to-day stuff) that instead of my brain trying to come up with better reasoning, it just makes the idea of the impulsive action more salient: some urge along the lines of "think less and intuit/act more".

 

Also, I have personally used this concept of "intellectual masturbation" to divert discussions away from potential philosophical bombs to more relevant topics; it's much better to reduce the philosophical jargon in day-to-day conversations, lol.

Comment by lesswronguser123 (fallcheetah7373) on 37 Ways That Words Can Be Wrong · 2024-04-29T09:38:59.157Z · LW · GW

37 spotted! Fun fact: 37 is one of the subconsciously more typical 2-digit numbers our mind stores for the similarity cluster "random number". I found a good video and website on this topic.

Comment by lesswronguser123 (fallcheetah7373) on The losing identity of Twitter · 2024-04-23T06:15:18.897Z · LW · GW

Ever since they killed (or made it harder to host) Nitter, RSS, guest accounts, etc., Twitter has been out of my life, for the better. I find the Twitter UX in terms of performance, chronological posts, and subscriptions to be sub-optimal. If I do create an account, my "home" feed has too much ingroup-vs-outgroup content (even within tech-enthusiast circles, thanks to the AI safety vs e/acc debate, etc.); verified users are over-represented by design, but that buries the good posts from non-verified users. And Elon trying way too hard to prevent AI web scrapers is ruining my workflow.

Comment by lesswronguser123 (fallcheetah7373) on What are your thoughts on rational wiki · 2024-04-15T18:49:33.832Z · LW · GW

The gray fallacy strikes again; the point is to be less wrong!

Comment by lesswronguser123 (fallcheetah7373) on The Cluster Structure of Thingspace · 2024-04-15T16:58:59.854Z · LW · GW

Most of this just seems to be nitpicking the lack of specificity of implicit assumptions which were self-evident (to me); the criticism regarding "blue" pretty much depends on whether the HTML blue also needs an interpreter (e.g., a human brain) to extract the information.

The lack of formality seems (to me, as a new user) to be a repeated criticism of the sequences, but I thought that was also a self-evident assumption (maybe I'm just falling prey to expecting short inferential distances). I think Eliezer mentioned this 16 years ago here:

"This blog is directed at a wider audience at least half the time, according to its policy. I'm not sure how else you think this post should have been written." 

 

I personally find the sequences to be a useful aggregator of the various ideas I seem to find intriguing at the moment...

Comment by lesswronguser123 (fallcheetah7373) on Evolutionary Psychology · 2024-04-12T18:16:14.787Z · LW · GW

I found a related article on this topic, prior to this one, which seems to expand on the same thing.

https://journals.sagepub.com/doi/10.1177/1745691610393528

Comment by lesswronguser123 (fallcheetah7373) on Evolutionary Psychology · 2024-04-12T16:33:54.278Z · LW · GW

Like "IRC chat"

I don't think that aged well :)

Comment by lesswronguser123 (fallcheetah7373) on simeon_c's Shortform · 2024-04-11T02:55:20.595Z · LW · GW

It would also be quite terrible for safety if AGI was developed during a global war, which seems uncomfortably likely (~10% imo).

This may be likely; iirc, during wars countries tend to spend more on research, and they could potentially just race to AGI, like what happened with the space race. Which could make hard takeoff even more likely.

Comment by lesswronguser123 (fallcheetah7373) on The Power of Intelligence · 2024-04-10T17:53:15.661Z · LW · GW

I laughed multiple times while reading this one. I was severely underestimating the general concept of intelligence. It almost felt like someone had intentionally targeted my past self's misconceptions, lol.

Comment by lesswronguser123 (fallcheetah7373) on The Halo Effect · 2024-04-07T08:16:53.593Z · LW · GW

The description is much better evidence, but the attractiveness remains somewhat important.

 

I would like to disagree on this one: if all of the major accomplishments of the candidate are written in that description, then attractiveness doesn't really matter as evidence for intelligence; it's already been taken into account by that description.