Posts

Comments

Comment by Jasnah Kholin (Jasnah_Kholin) on (The) Lightcone is nothing without its people: LW + Lighthaven's big fundraiser · 2024-12-08T16:56:31.036Z · LW · GW

I donated $100. It's not a lot, but I don't live in the USA and don't expect to get any benefit from the physical infrastructure, and I'm not convinced this is effective from an EA point of view. so this is only for the benefit of sporadically reading the site.

Comment by Jasnah Kholin (Jasnah_Kholin) on Epistemic Slipperiness · 2024-07-21T15:58:35.179Z · LW · GW

in my own frame, Yudkowsky's post is a central example of Denying Reality. Duncan's Fabricated Options are another example of Denying Reality. when reality is too hard to deal with, people are... denying reality. they refuse to accept it.

the only protection I know is to leave a line of retreat - and it's easier if you do it as an algorithm, even when you honestly believe it's not needed.

not all your examples are Denying Reality by my categorization. others have a different kind of Unthinkable thing. and sometimes they mix together - the Confused Young Idealist may be actually confused. there are two kinds of Unthinkables: the one where, if someone points it out to you, you say "wow, I would never have thought of that myself!" and then understand; and the one where the reaction is angry denial (and of course it's not actually two - there is a lot of space on the spectrum between them).


not very helpful, but... I'm struggling with how to talk to people who do that. I tried various strategies, and came back to telling it as it is. it actually gets me better results than trying to sneak around it. not that I got good results, but... I think it reveals useless conversations faster, AND lets good potential conversations actually occur.

Comment by Jasnah Kholin (Jasnah_Kholin) on Networks of Trust vs Markets · 2024-05-02T15:14:14.445Z · LW · GW

Are you sure the math holds up? there are a bunch of posts about how to spend money to buy time, and if I need to choose between wasting 50 HOURS on investigation and just buying the more expensive product, it's pretty obvious to me that the second option is better. maybe not in this example, though I see it as a false dichotomy - I tend to go with "ask in the specialized good-looking Facebook group" as a way to choose when the stakes are high.

In recent years I have internalized more and more that I was raised by poorer people than I am now, that my heuristics just don't count all the time I waste comparing products or seeking trusted professionals, and that it would have been better for me to just buy the expensive phone, instead of asking people for recommendations and specs.

also, and this is important - the interpersonal dynamics of trust networks can be so much more costly than mere money. I preferred to work and pay for my degree myself rather than ask my parents for help. I see in real time how one of my friends, who depends on reputation for her work, constantly censors herself and frets over whether she should censor herself.

basically, I would have given my past self the opposite advice, and what I want is an algorithm - how do you know whether you want more trust networks or more markets?

or, actually, I want a BETTER MAP. Facebook recommendations are not exactly a trust network, but not a market either. I don't think this distinction cuts reality at the joints. there is a lot to explore here - although I'm not the one who should do the exploring. it will not be useful for me, as I try to move in the direction of wasting less time and spending more money on things.

Comment by Jasnah Kholin (Jasnah_Kholin) on Public beliefs vs. Private beliefs · 2023-09-12T08:59:46.748Z · LW · GW

it sometimes happens in conversations that people talk past each other, don't notice that they both use the word X to mean two different things, and behave as if they agree on what X is but disagree on where to draw the boundary.

from my point of view, you said some things that make it clear you mean a very different thing than I do by "illegible". a proof of a theorem can't be illegible TO SOMEONE. illegibility is a property of the explanation, not of the explanation plus the person. I have encountered papers and posts that were above my knowledge in math and computer science. I didn't understand them despite them being legible.


you also have a different approach to concepts in general. I don't hold a concept because it makes it easier for people to debug. I try to find concepts that reflect the territory most precisely. that is the point of concepts TO ME.

I'm not sure it's worth going all the way back, and I have no intention of going over your post and adding "to you" in all the places where it should be added, to make it clearer that goals are something people have, not a property of the territory. but if you want to do half of that work, we can continue this discussion.

Comment by Jasnah Kholin (Jasnah_Kholin) on Slack for your belief system · 2023-08-17T12:30:45.654Z · LW · GW

this is one of the posts where I wish for three examples of the thingy described. because I see two options:
1. this is a weakman of the position I hold, in which I seek ways to draw a map that corresponds to the territory, have my estimations of what works and what doesn't, and disagree with someone about that. and that someone, instead of providing evidence that his method produces good predictions or insights, just says I should have more slack.

all your description of why to believe things sounds anti-Bayesian. it's not a boolean believe-disbelieve. update yourself incrementally! if something provides zero evidence I will not update; if the evidence is dubious, I will update only a little. and then the question is how much credence you assign to what evidence, and what methods you use to find evidence.
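the incremental-update point can be made concrete with odds-form Bayes. a minimal sketch (the likelihood ratios here are made-up illustrative numbers, not from any real case):

```python
def update_odds(prior_odds, likelihood_ratio):
    """Odds-form Bayes: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * likelihood_ratio

def odds_to_prob(odds):
    """Convert odds back to a probability."""
    return odds / (1 + odds)

# start at 1:1 odds (probability 0.5)
odds = 1.0
odds = update_odds(odds, 1.0)   # zero evidence (ratio 1): no update at all
odds = update_odds(odds, 1.2)   # dubious evidence: update only a little
odds = update_odds(odds, 4.0)   # strong evidence: update a lot

print(round(odds_to_prob(odds), 3))  # 0.828
```

each piece of evidence moves the belief by its own weight; nothing here ever flips a belief from "disbelieve" to "believe" in one boolean step.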

2. it's a different-worlds situation, where the post writer encountered a problem I didn't.

and I have no way to judge that without at least one, and preferably more, actual examples of the interaction - ideally linked to, not described by, the author.

Comment by Jasnah Kholin (Jasnah_Kholin) on Addendum to applicable advice · 2023-08-09T13:22:25.310Z · LW · GW

list of implicit assumptions in the post that I disagree with:

 

  • that there is a significant number of people who see advice and whose cached thought is "that can't work for me".
  • that this cached thought is a bad thing.
  • that you should try to apply every piece of advice you encounter to yourself.
  • that it's hard.
  • that the fact that it's hard is evidence that it's a good and worthy thing to do.
  • that "being a kind of person" is a good category to think in, or a good framing to have.

 

I also have a lot of problems with the example - which is an example of advice that most people try to follow but shouldn't; they should instead judge their probability of success by looking at the research, not by thinking "you can be any kind of person" - a statement whose truth value is obviously false.

Comment by Jasnah Kholin (Jasnah_Kholin) on Limits to Legibility · 2023-08-07T08:33:22.268Z · LW · GW

this is not how the third conversation should go, in my opinion. instead, you should inquire of your Inner Simulator, and then say that you expect learning GTD will make them more anxious, or will work for two weeks and then stop so the initial time investment will not pay off, or that in the past you encountered people who tried it and it made them crush down parts of themselves, or that you expect it will work too well and lead to burnout.

it is possible to compare illegible intuitions - by checking what different predictions they produce, and by comparing possible differences in how they sorted their training data.

in my experience, different illegible intuitions come from people seeing different parts of the whole picture, and it's valuable to try to understand that better. also: making predictions, describing the differences between the world where you're right and the world where you're wrong, and having at least two different hypotheses are all ways to make illegible intuitions better.

Comment by Jasnah Kholin (Jasnah_Kholin) on Your Utility Function is Your Utility Function · 2023-07-25T06:35:49.772Z · LW · GW

One of the things that I searched for in EA and didn't find, but think should exist: an algorithm, or algorithms, to decide how much to donate, as a personal-negotiation thing.

There is Scott Alexander's post about 10% as a Schelling point and a way to placate anxiety, and there is the Giving What You Can calculation. but neither has anything to do with personal values.

I want an algorithm that is about introspection - about not smashing your altruistic and utilitarian parts, but not your other parts either; about finding what number is the right number for me, by my own Utility Function.

and I just... didn't find those discussions.
 

in dath ilan, where people are expected to be able to name a price for more or less everything, and have done extensive training to have the same answer to the questions "how much would you pay to get this extra?" and "how much additional payment would you forgo to get this extra?" and "how much would you pay to avoid losing this?" and "how much additional payment would you demand if you were losing this?", there are answers.
 

What is the EA analog? how much am I willing to pay if my parents will never learn about it? If I could press a button and get a 1% tax increase that would go to top GiveWell charities, but without all the second-order effects except the money, what number would I choose? What if negative numbers were allowed? what about the creation of a city with rules of its own, that takes taxes for EA causes - how much would I accept then?
 

where are the "how to figure out how much money you want to donate in a Lawful way?" exercises?
 

Or maybe it's because far too many people prefer to have their thinking, logical part win the internal battle against the other, more egotistical ones?
 

Where are all the posts about "how to find out what you really care about in a Lawful way"? The closest I have come is Internal Double Crux and the Multi-agent Model of the Soul and all its versions. But where are my numbers?
 

Comment by Jasnah Kholin (Jasnah_Kholin) on A few more ants and grasshoppers · 2023-07-20T09:21:55.899Z · LW · GW

so, I'm at the same time happy there is an answer, but I can't be happy with the answer itself. which is to say, I tried to go and find the points I agree with, and found one point of disagreement after another. but I also believe this post deserves a more serious answer, so I will try to write at least part of my objections.

I do believe that x-risk, and societies destroying themselves as they become more clever than wise, is a real problem. but I disagree with the framing that the ants are the ones to blame. it's running from the problem. if grasshoppers grow too, even if more slowly, they too may bring atomic winter.

and you just... assume it away, in the way of the worst Utopian writing, where societies have features that present-day people hate and find bad, but somehow everyone is happy and no one has any problem with it and everything is okay. it just... feels cheap to me.

and if you assume no growth at all, then... what about all the people who value growth? there are a lot of us in the world. if it's actually a "steady-state existence" - not sustainable growth but everything staying the same - it's really really really bad by my utility function, and the one good thing I can say about it is that that state doesn't look stable to me. there have always been innovators and progressors. you can't have your stable society without some Dystopian repression of them.

but you can have dath ilan. this was my main problem with the original parable: it was very black-and-white. dath ilan didn't come to the ants and ask for food; instead, it offered it. but it is definitely not a stable state. and to my intuition, it looks both possible and desirable.

and it also doesn't assume that the ants throw decision theory out the window. the original parables explicitly mentioned it. I find the representation of ants who forego cooperation in the prisoner's dilemma strawmannish.

but besides all that, there is another, meta-point. there was prediction after prediction about peak oil and its results, and they all proved wrong. so did other predictions from that strand of socialism. from my point of view, the algorithm generating these predictions is untrustworthy. I don't think LessWrong is the right place for all those discussions.

and I don't plan to write my own dath-ilani reply to the parables.

but I don't think some perspectives are missing. I think they were judged false and ignored afterwards. and the way the original parables felt fair to the ants, while these don't, is evidence that this is a good rule to follow.

it's not a bubble; it's the trust in the ability to have a fair discussion, or the absence of that trust. because a discussion in which my opinions are assumed to be the result of a bubble and not honest disagreement... I don't have words to describe the sense of ugliness, of wrongness, that this creates. it's the same thing that made the original post feel honest and fair, and this one underhanded and strawmannish.

(nothing written here is very certain or a precise representation of my opinions, but I already took way too much time to write it, and I think it's better to write it than not)

Comment by Jasnah Kholin (Jasnah_Kholin) on Save the kid, ruin the suit; Acceptable utility exchange rates; Distributed utility calculations; Civic duties matter · 2023-07-17T12:48:35.770Z · LW · GW

this would be much closer to the Pareto frontier than our current social organization! unfortunately, this is NOT how society works. if you operate like that, you will lose almost all your resources.

but it's more complicated than that - why not gate this on cooperation? why should I give 1 dollar for 2 of someone else's dollars, when they will not do the same for me?

and this is why this whole scheme doesn't work. every such plan needs to account for defectors, and it doesn't look like you address that anywhere.

on the issue of politics - most people who get involved in politics make things worse. before declaring that it's people's duty to do something, it's important to verify that it is a net-positive thing to do. if I look at the people involved in politics and decide that less politics would have been better for society, then my duty is to NOT get involved in politics - or at least, not beyond the level of involvement I believe is right.

but... I really don't see how all this politics is even connected to the first half of the post, about the right ratio of my utility to another person's utility?

 

Comment by Jasnah Kholin (Jasnah_Kholin) on Both or Nothing · 2023-07-17T08:08:07.228Z · LW · GW

regarding the first paragraph - Eliezer is not criticizing the Drowning Child story in our world, but in dath ilan. dath ilan, which is utilitarian in such questions, where more or less everyone is utilitarian when children's lives are at stake. we don't live in dath ilan. in our world, it's often the altruistic parts that hammer down the selfish parts, or the warm-fuzzies parts that hammer down the utilitarian ones as heartless and cruel.

EA sometimes does the opposite - there are a lot of stories of burnout.

and in the large scheme of things, what I want is a way to find what actions in the world would represent my values to the fullest - but this is a problem when I can't learn from dath ilan, which has a lot of things fungible that are not fungible on Earth.

 

Comment by Jasnah Kholin (Jasnah_Kholin) on Lies Told To Children · 2023-05-27T08:02:02.938Z · LW · GW

So I read a lot of dath-ilani glowfics in the previous weeks, and yet, I didn't guess right. I didn't stop to put numbers on it, so I can only do so in retrospect (and I'm still not sure it's actually a good idea to put numbers on all things). it was 0.9 that the story is about a kid losing trust in adults because they were told a lie, and 0.3 that after that, it would turn out they should trust adults and this distrust is bad (like teens who think all drugs are not dangerous because adults exaggerate the harm of the less-harmful ones). in that situation, I was basically 50-50 on whether the Aesop is about the importance of bounded trust for the reader, who should see themselves as the kid, or that lying to children is bad, and the reader should not do that.

I did realize it's dath ilan and an experiment some time after. and now I'm even more curious what dath ilan would do with people like me, who see lying as Evil. not try to change them - the utility function is not up for grabs.

I'm pretty sure typical-minding makes my attempts to do that sub-optimal. I just find it hard to imagine a society where most people are actually OK with that state of affairs. but my attempts at imagining trust being broken and things going bad feel unrealistic, un-dath-ilani to my sense of how-dath-ilan-is. for example, this: https://pastebin.com/raw/fjpS2ZDP doesn't strike me as realistic. I expect dath ilan can use the fact that Keepers Are Trustworthy, for example, to swear to a child that they will never ever pull such experiments on them, and the child would believe that. I expect dath ilan checks at a younger age how children react to that sort of thing, as is standard in dath-ilani education, and stops if they see it's bad for some kid.

and yet... the utility function is not up for grabs. and for some reason, this "fact" about dath ilan is somehow worse than, for example, the places where dath ilan allows people a lack of reflection so they can remain themselves and not go full Keeper. I disagree there and find it wrong, but it strikes me as a difference in prioritization, whereas here it looks like our utility functions are opposite in this small section.

I see lies and deceptions as Evil, even if sometimes they can be traded off, and a society with more slack would use that slack to lie much less, almost never. dath ilan LIKES its clever experiments and the lies children should figure out for themselves. and I would have expected Keepers to be the type of people who HATE such lies with the fury of a thousand suns. so in the end, I remain confused, and feel like dath ilan is somewhat a world-where-people-like-me-don't-exist. granted, most Utopias just ignore uncomfortable complications, but dath ilan is much better than most. and I can't really believe dath ilan heredity-optimized itself to not have people who hate lying and being lied to.

so in the end, I'm just confused.

Comment by Jasnah Kholin (Jasnah_Kholin) on Public beliefs vs. Private beliefs · 2023-05-10T09:16:46.196Z · LW · GW

this also describes math. like, the more complicated math that has some prerequisites, which a person who didn't take the courses in college or some analog will not understand.

math, by my understanding of "legibility", is VERY legible. same with programming, physics, and a whole bunch of explicitly lawful but complicated things.

what is your understanding of that sort of thing?

 

Comment by Jasnah Kholin (Jasnah_Kholin) on Investigating Fabrication · 2023-05-09T19:34:56.490Z · LW · GW

I already have them on my reading list, but after this post I plan to epub them and read them soon.

Comment by Jasnah Kholin (Jasnah_Kholin) on Investigating Fabrication · 2023-05-09T13:11:32.293Z · LW · GW

this is an extremely good post. it exemplifies and illustrates the sort of mental moves I believe are needed for rational thinking, of the "know thyself" variety. those things are even harder than normal to communicate, and I find this post manages to do that, manages to give me useful information, and gives me an example of how such introspection can happen. I'm really impressed!

Comment by Jasnah Kholin (Jasnah_Kholin) on Moderation notes re: recent Said/Duncan threads · 2023-04-23T06:32:33.632Z · LW · GW

somewhere (I can't find it now) someone else wrote that if he does that, Said can always say it's not exactly what he meant.

In this case, I find the comment itself not very insulting - the insult is in the general absence of Goodwill between Said and Duncan, and in the refusal to do interpretive labor. so any comment of the form "my model of you was <model> and now I'm just confused" could have worked.

my model of Duncan avoided posting it on LW because of the general problems with LW, but I wouldn't have been surprised if it was a specific problem. I have no idea what Said's model of Duncan was. but I will try, with the caveat that the Said's-model-of-Duncan suggested here is almost certainly not true:

I thought that you avoided putting it on LW because there would be strong and wrong pushback here against the concept of imaginary injury. it seemed coherent with the crux of the post. now that I have learned the truth, I'm simply confused. in my model, what you want to avoid is exactly the imaginary injury described in the post, and I can't form a coherent model of you.

I suspect Said would say I don't pass his Ideological Turing Test on that, or continue to say it's not exact. I submit that if I cannot, the hard part is not writing non-insultingly, but passing his Ideological Turing Test.

Comment by Jasnah Kholin (Jasnah_Kholin) on Moderation notes re: recent Said/Duncan threads · 2023-04-20T07:01:34.626Z · LW · GW

I think we have very different models of things, so I will try to clarify mine. my best bubble-site example is not in English, so I will give another one - the Emotional Labor thread on MetaFilter, and MetaFilter as a whole. just look at the sheer LENGTH of this page!

https://www.metafilter.com/151267/Wheres-My-Cut-On-Unpaid-Emotional-Labor

there are far more than 3 comments per person there.

from my point of view, this rule creates a hard ceiling that forbids the best discussions from happening. because the best discussions are creative back-and-forth. my best discussions with friends are: one shares a model, one asks questions, or shares a different model, or shares experience, the other reacts, etc., for way more than three comments. more like 30 comments. it's a dialog. and there are lots of unproductive examples of that on LW. and it's quite possible (as in, I assign it a probability of 0.9) that in first-order effects this rule will cut out unproductive discussions and be positive.

but I find rules that prevent the best things from happening bad in some way I can't explain clearly. something like: I'm here to try to go higher. if that's impossible, then why bother?

I also think it's a VERY restrictive rule. I wrote more than three comments here, and you are the first one to answer me. like, I'm right now taking part in a counter-example to "would find it surprising if you needed more than 3 comments per day to share examples, personal experiences, intuitions and relations."

I shared my opinions on very different and unrelated parts of this conversation here. this is my sixth comment. and I feel I reacted very low-heat. the idea that I should avoid or conserve those comments to stay within three makes me want to avoid commenting on LW altogether. the message I get from this rule is like... like I'm assumed guilty of a thing I literally never do, and so have very restrictive rules placed on me, and it's very unfriendly in a way that I find hard to describe.

like, 90% of the activity this rule would restrict is legitimate, good comments. that's an awful false-positive ratio. even before you count the you-are-bad-and-unwelcome effect, which I feel from it and you, apparently, don't.

 

Comment by Jasnah Kholin (Jasnah_Kholin) on Explaining Capitalism Harder · 2023-04-19T11:01:41.409Z · LW · GW

I don't think it would go more productively. explaining harder is not my default mode. my default mode is closer to your suggestions, and so I can tell from experience it's NOT productive. what happens next is Fabricated Options, and a refusal to react rationally to evidence.

like, I can remember ONE time when I got a sensible reaction. and there is a locally-infamous situation where a socialist politician agreed that their proposal had been tried and the results were bad, only to push the same proposal afterwards.

the standard failure mode I have with socialists in discussion is ignoring or outright denying bad consequences. the good version is accepting the price and preferring this version anyway, or having a wildly different frame and so a different model and predictions. and most political discussions are bad - capitalists tend to reply in slogans and ignore evidence of socialist policies working, too.

maybe it's a different-worlds or inverting-every-advice situation. because when I read the title, I was sure the post would be about explaining capitalism harder, since in my experience that is the helpful thing people need to do more, while your proposed alternative strategy is the current, inefficient one.

Comment by Jasnah Kholin (Jasnah_Kholin) on Moderation notes re: recent Said/Duncan threads · 2023-04-19T07:54:45.605Z · LW · GW

I find the fact that you see comments as criticism, and not as expanding and continuing the building, indicative of what I see as problematic. good comments should, most of the time, not be criticism, but part of the building.

the dynamic that is good in my eyes is one where comments make the post better not by criticizing it, but by sharing examples, personal experiences, intuitions, and how those relate to the post.

counting all comments as prune instead of babble disincentivizes babble-comments. is this what you want?

Comment by Jasnah Kholin (Jasnah_Kholin) on Moderation notes re: recent Said/Duncan threads · 2023-04-18T18:04:56.302Z · LW · GW

(3) I didn't watch the movie, nor do I plan to, but I read the plot summary on Wikipedia. and I see it as a caution against escalation. the people there consistently believe that you should revenge a 1-point offense with a 4-point punishment. and this creates an escalation cycle.

while I think most of Duncan's writing is good, the place where I think he consistently creates bad situations is in disproportional escalation of conflict, and the inability to just let things be.


once upon a time, if I saw someone do a 1-point bad thing and someone else react with a 3-point bad thing, I would have thought the first one was 90% of the problem. with time, I find robustness more and more important, and now I see the second one as more problematic. as such, I disagree with your description of the movie.

the plot is: one person does something bad, another refuses to punish him, and a lot of people escalate things and so, by my standards, do bad things. A LOT of bad things. to call it a chain reaction is to not assign the people doing the disproportionally-escalating bad things agency over their bad choices. that's strange to me, as I see this agency very clearly.

Comment by Jasnah Kholin (Jasnah_Kholin) on Moderation notes re: recent Said/Duncan threads · 2023-04-18T16:04:59.026Z · LW · GW

"I do generally wish Duncan did more of this and less trying to set-the-record straight in ways that escalate in IMO very costly ways"

strongly agree.

Comment by Jasnah Kholin (Jasnah_Kholin) on Moderation notes re: recent Said/Duncan threads · 2023-04-18T15:46:38.006Z · LW · GW

I actually DO believe you can't write this in a non-insulting way. I find it the result of not prioritizing developing and practicing those skills in general.

while I do judge you for this, I judge you for it once, on the meta-level, instead of judging each instance separately, as I find this behavior orderly and predictable.

 

Comment by Jasnah Kholin (Jasnah_Kholin) on Moderation notes re: recent Said/Duncan threads · 2023-04-18T14:10:28.284Z · LW · GW

So this is the fourth time I am trying to write a comment. This comment is far from ideal, but I feel like I did the best that my current skill in writing in English and understanding such situations allows.

 

1. I find 90% of the practical problems to be Drama - as in, long, repetitive, useless arguments. if this were Facebook and Duncan blocked Said, and then proceeded to block anyone who was too far out of line by Duncan-standards, it would have solved 90% of the Duncan-related problems. if he had given up already on making LW his kind of garden, it would have solved another 9%.

 

2. In my ideal Garden, Said would have been banned long ago. but it is my belief (and I have like five posts waiting to be written to explain my framework and evidence on that, if I ever actually write them) that LW will never be anything even close to my or Duncan's Garden (there is 80%-90% similarity in our definitions of garden, by my model of Duncan).
 

In this LessWrong, he may remain and not be blocked. It would also be good if more people ignored his comments that predictably start a useless argument. aka - if I write something about introspection, I expect Said's comment to be useless. I also expect most third-and-later comments in a thread to be useless.

 

In a better LW, those net-negative comments would be ignored, downvoted, and maybe deleted by mods, while the good ones were upvoted and got reactions.

 

3. Duncan, I will be sad if you leave LW. I really enjoy and learn from your posts. I also believe LW will never be your Garden. I would like you to just give up already on changing LW, but still remain here and write. I wish you could just... care less about comments, assume that 90% of what is important on LW is posts, not comments. Ignore most comments; answer only those that you deem good and written in Goodwill. LessWrong is not YOUR version of the Garden, and never will be. but it has good sides, and you (hopefully) can choose to enjoy the good parts and ignore the bad ones. whereas now it looks to me like you are optimizing toward finding things you object to and engaging with them, in the hope of changing LW to better fit your standards.

 

Comment by Jasnah Kholin (Jasnah_Kholin) on Coordination as a Scarce Resource · 2023-03-27T11:39:21.121Z · LW · GW

"This puts a new spin on the increasing tendency of employees to change employers and even careers. Rather than a sign of disloyalty or fickleness, it’s just the natural result of an economy efficiently incentivizing and engaging in valuable information exchange"

this is a very interesting idea! sadly, I have no idea how to check it.

Comment by Jasnah Kholin (Jasnah_Kholin) on Staring into the abyss as a core life skill · 2023-03-05T09:58:59.834Z · LW · GW

Interesting! reading this post made me realize I have somewhat the opposite opinion. the people I respect are often the people who are good at untangling big-scary-questions, so the questions stop being like that. It's very much Bucket Errors - "If I think about X I will have to do uncomfortable thing Y." so the mental move that helped me was to untangle.

for example, when I thought about the possibility of breaking up I was practically panicking. it was a very irrational emotion, disentangled from the territory - the break up itself was swift and easy, and I'm pretty sure I should have done it sooner, though I still have no idea when.

but the mental move that let me think about it was to tell myself that I DON'T HAVE TO BREAK UP. now, it wasn't exactly like that. I told myself we could stay together for a year. and then it was extremely clear I wanted to plan for this break up. and then within something like one week, breaking up became the only possible option.

in the same way, I didn't break up by having an uncomfortable conversation. I just... didn't. it's harder to describe, but there are people with whom I can have emotionally vulnerable and deep conversations and people with whom I can't. and the right move is not to have those conversations with the people it's hard to have them with, but to build connections with those with whom it's not hard to have them.

for this move to work it has to be honest. for example, I'm staying at my job despite the real possibility that I could earn more elsewhere, because it's comfortable and changing jobs is very emotionally costly to me. I did tell myself a year ago that if they didn't give me the promotion they promised I would leave (and I believe this is why I actually got it), but I'm still here. and I'm not sure your framing would see that as the right choice, despite the fact that I did stare into the abyss and precommitted to search for a different job if I didn't get the promotion.

There are two things here - acknowledging something, and changing it - and you sort of conflate them. For example, there are ultra-Orthodox people here (Haredim) living somewhat cult-like lives, and there were forums (and I assume there are now Facebook groups) for Haredim-against-their-will: people who stared into the abyss, decided religion is a lie, and then decided it wasn't worth losing all their family, friends, and workplace, and that it's better to pretend.

Seeing something and acting on it are two different things. Your framing leans too much toward forcing yourself to do something as the only option, when I see forcing yourself as a form of self-harm (as in Forcing yourself to keep your identity small is self-harm), and prefer approaches that don't involve forcing yourself - approaches I don't see in your map, but do see in the territory.

Also, I notice I wrote a lot about where I disagree, which is misleading. I VERY MUCH agree that doing the hard thing is an important life skill. I just prefer to un-abyss the abyss before staring at it.
 

Comment by Jasnah Kholin (Jasnah_Kholin) on Frame Control · 2023-03-03T19:15:57.330Z · LW · GW

Regarding the third point, my interpretation of this part was very different: "I don’t have this for any other human flaw - people with terrible communication skills, traumatized people who lash out, anxious, needy people who will try to soak the life out of you, furious dox-prone people on the internet - I believe there’s an empathic route forward. Not so with frame control."

I read it as: "I'm not very vulnerable to those other types of wrongness, which all have roughly the same absolute value in some linear space, but I am vulnerable to frame control, and I believe the nuclear option is justified and people should feel OK using it."

I, personally, am not especially vulnerable to frame control. My reaction to the examples is of the form "there is a lot to unpack here, but let's just burn the whole suitcase." They struck me as manipulative and done in Badwill. As such, they set off an alarm in my mind, and in such cases that alarm neutralizes 90% of the harm.

My theory about this whole cluster of hard-to-pinpoint manipulations is that understanding is power. I've read a lot, and now I tend to recognize such things. So I'm not especially vulnerable, and I don't have the burn-it-with-fire reaction; I have more of an "ugh, this person, it's impossible to talk to them" reaction. I find dox-prone, needy, lashing-out people much more problematic to deal with.

I have zero personal knowledge of the writer, but the feeling I get from the post is that she would agree with me. She would tell me it's OK if I can be around a frame controller without being harmed, and it's OK if I can't be around a needy person. I will avoid the needy one, and she the frame-controller. I'm less sure she would agree with me that different people tolerate different vectors of badness differently, and that allowing one kind forces everyone vulnerable to it to either be harmed or avoid the place.

But the general feeling I got is not "the writer is good at spotting this and we should burn it with fire" but rather "you should listen to the part of you telling you that SOMETHING IS WRONG, and it's legitimate to take it seriously and act on it." That promotes a culture which acknowledges this as legitimate and allows such a person to avoid others, without guilt-tripping them, surprising them with the frame-controller's presence, or doing the other unfriendly things people sometimes do.

That is, I didn't see burn-frame-controllers-with-fire promoted as a community strategy, but as a personal strategy - one that may currently encounter active resistance from the community, and shouldn't.

Comment by Jasnah Kholin (Jasnah_Kholin) on Ruling Out Everything Else · 2023-02-28T17:03:26.318Z · LW · GW

What do you refer to as Dark Arts here? Do you consider slurs a Dark Art? The word "bullshit"?

 

Comment by Jasnah Kholin (Jasnah_Kholin) on Respect Chesterton-Schelling Fences · 2023-02-28T15:16:05.145Z · LW · GW

There is one main problem with this argument: the people who want to cross the Fence aren't safe in their current position.

For example, high-commitment communities are the "safe" social default - a very old one, surviving from before we were human. But, as Ozy wrote, "One of the most depressing facts about high-commitment communities is that they almost all cover up child sexual abuse."

This is the safety of the Fence. This "safety" sucks.


The sister who went no-contact with her rapist father is the black sheep of the family. She is the radical, the revolutionary. Her whole family thinks she is a bad daughter who should not deny her father his granddaughter. Her sister, who sends her little boy unsupervised to his grandfather even after he started wetting himself again - she is the conservative, the one who respects the status quo.

I want to be the black-sheep sister. I can't see the other option as anything but an abomination.

***

A different argument: what is the fence? Because if you ask me, cheating in an unhappy marriage IS the fence, the conservative view. The unconservative view is that you can just divorce - very new, and definitely not how things worked through most of history, while constant cheating, sometimes with "a self-respecting woman has a husband and a lover" as a folk-wisdom idiom, was the norm in some times and places.

So how can you be respectful of the fence when you don't know which side is the conservative one?

(It's like what Duncan said, but from a different angle.)
 

Comment by Jasnah Kholin (Jasnah_Kholin) on Ruling Out Everything Else · 2023-02-28T12:35:40.905Z · LW · GW

"From another perspective, if this were obvious, more people would have discovered it, and if it were easy, more people would do it, and if more people knew and acted in accordance with the below, the world would look very different."


So: I know another person who did the same thing, and I tried it myself for a while, and I think this is an interesting question I want to try to answer.

This other person? Her name is Basmat, and it sort of worked for her. She saw that she was read as a contrarian and received with hostility, with people attributing to her things she didn't say. So she decided to write very long posts that explained her worldview and spelled out what she definitely didn't mean - she was ruling out everything else. And she became a highly respected figure in that virtual community. And... people still misunderstood her. But she had much more legitimacy in dismissing them as illegitimate trolls who need not be respected or addressed.


See, a lot of her opinions were outside the Overton window, and even in an internet community dedicated to such opinions, there was a moderating faction that saw people like her as radical, dogmatic, bad, and dangerous. The sheer length changed the dynamic - mostly by being a costly, and therefore trustworthy, signal that she is not dogmatic, that she can be reasoned with. That's one of my explanations, anyway.

But random people still misunderstood her, in exactly the ways she had ruled out! It was the members of the community, who knew her, who stopped doing that. Random guests - no.

Why? My theory is that there are things language is designed to make hard to express. The landscape makes it easy to misunderstand certain opinions, or to misrepresent them in Badwill, so that they sound much worse than they are.

And this matches my experience, which is that most people don't want to communicate in Goodwill. They don't try to understand what I'm pointing at; they round my position to the most strawmannish one they reasonably can, and then attack it.

I can explain at length what I mean, and it only makes things worse, because I've given them more words to misrepresent.

What I learned is to be less charitable, to filter those people out ruthlessly, because engaging is a waste of time. If I break the engagement into little pieces with opportunities for feedback, asking whether I was understood and whether they disagree - if I make Badwill strategies hard - they simply refuse to engage.

And if I clarify and explain and rule out everything else in Goodwill, they just find new and original ways to distort what I said.

I haven't read the whole post yet, but I know my motivation well enough to write this comment now rather than postpone it and never write it. What I want to say is: in my experience, this strategy works ONLY in a Closed Garden. In an Open Garden, with too many people acting in Badwill, it's a losing strategy.


(I also planned to write about length - 80-90% of people will simply refuse to engage with a long enough text or explanation - but I've exhausted my writing-in-English energy for now. It's a much more important factor than the dynamic I described, but I want to filter such people out, so I mostly ignore it. In the real world, though, You Have Four Words, and in my experience most people will simply refuse to listen to or read you.)

Edit, after reading the whole post:

I was pleasantly surprised by the post. We have very similar models of the dynamics of conversations here. I have little to add besides: I agree!

That's what makes the second part so bewildering - we have totally opposite reactions. But maybe it can be resolved by putting numbers on it?

If I want to communicate an idea that is very close to a politically charged one, 90% of people will be unable to hear it no matter how I say it. 1% will hear it no matter what. And another 9% will listen - if it's not in public, if they already know me, if they're in the right emotional space for it.

Also, 30-60% of people will pretend to listen in good faith only to make bad-faith attacks and gotchas.

Which is to say: I did the experiment, and my conclusion was that I need to filter more. That I want to find and avoid the bad-faith actors, the sooner the better. That in almost all cases I will not be able to have a meaningful conversation.


And, like, it works, sorta! If I'm feeling extremely Slytherin and Strategic and decide my goal is to convince people, or make them listen to my actual opinion, I sorta can. And they will sorta-listen and sorta-accept. But with people who can't do the decoupling thing or just trust me, I will not have the sort of discussion I find value in. I will not be able to have a Goodwill discussion. I will have a Badwill discussion where I carefully avoid all the traps and, as a prize, get the you-are-not-like-the-other-X badge. It's a totally unsatisfying, uninteresting experience.

What I learned from my experience is that this work is practically never worth it, and is often actively counterproductive, because it makes sorting out the Badwill actors harder.

Now I prefer that people who are going to round me to the closest strawman demonstrate it sooner, so I can avoid them fast and search for the 1%.

Because those numbers? I pulled them right from my ass, but they are wildly different in different places, and they depend on local norms (which is why I hate the way Facebook killed the Hebrew-language forums - it destroyed the Closed Gardens, the Open Garden sucks a lot, and very few new Closed Gardens are being created). They can be more like 60-40 in certain places. Some places have already filtered for people who think long posts are good, that nuance is good. And some places have filtered for lower resolution, where You Have Four Words and every discussion ends with every opinion rounded to one of the three opinions present, because there is simply no room for better resolution.

It's not worth trying to reason with such people. It's better to find better places.


All this is very good when people are trying to understand you in Goodwill; it's totally worth it then. But it doesn't move people from Badwill to Goodwill, from Mindkilled to not. It can make dialogue with mindkilled people sorta not-awful, if you pour in a lot of time and energy - much more than I can in English right now. But it's not worth it.

Do you think it's worth it? Are you thinking of situations, like the one with $ORGANIZATION, where you have to have this dialogue? I feel like we have different kinds of dialogue in mind, and we definitely have very different experiences. I'm not even sure we disagree on anything - and yet, we have very similar descriptions and very different prescriptions...

****
It was very validating to read Varieties Of Argumentative Experience. Because most discussions suck; it's just the way things are.

I can accept that you can accidentally make a discussion suck, but you can't accidentally move it higher up the discussion pyramid.


****

About this example - I downvoted the first and third, and upvoted the second. My map says the person who wrote it assigns high probability to $ORGANIZATION being a bad actor, as part of a complicated worldview about how humans work, and that the comment didn't make him update this probability at all, or maybe by epsilon.

He simply has a different model. He actually thinks $ORGANIZATION is a bad actor, and it's good that he can share that model. Do you wish for a LessWrong where you can't share such a model? Do you find the model obviously wrong? I can't believe you want people who think someone is a bad actor to pretend they don't think so - but that's a failure mode I've seen and strongly dislike.

The second comment is highly valuable, and the ability to see and think "Bullshit" the way its author did is a highly valuable skill I'm learning now. I hadn't thought of that. I want a constantly running background process like that commenter - a Shoulder Advisor, as I believe you would describe it.

Comment by Jasnah Kholin (Jasnah_Kholin) on Politics is the Fun-Killer · 2023-02-26T18:03:25.676Z · LW · GW

"And how much is it actually mind-killing in the first place?"


A lot. As in: dumber than a 7-year-old kid.

I remember the time I said to a smart woman with a PhD that good intentions lead to hell, and she claimed I'd said I was in hell because of her. That was a ridiculous failure of reading comprehension. After that, I started to notice such instances.

My country is in a major political battle now, and I wrote to a woman on Facebook whom I'd talked with in the past, and she sounded less human than ChatGPT. Someone else, whom I know in real life, behaved very stupidly and uncharitably because I didn't support some argument-soldier, despite the fact that I actually agreed with his position.

The mind-killing effect is STRONG.

Comment by Jasnah Kholin (Jasnah_Kholin) on Sazen · 2023-02-26T10:20:43.412Z · LW · GW

I want to write a not-short post explaining my own map of the sazen-adjacent part of ConceptSpace, so I'll postpone my longer response until I write it. My current map is that you throw a bunch of very different things into this one concept - things I separate into different concepts that should be treated differently. When I unpack folk wisdom, I feel I now understand it BETTER, but my core understanding stays the same. If someone told me Duncan is a writer and teacher (and not a Second Foundation Rationalist, which is how I think of you), I would suspect an unfriendly attempt at deception - or, more likely, a stupid joke playing exactly on the fact that this description is the sort folk wisdom describes as "half true - whole lie."

Folk wisdom, in my experience, is much more like the lossy-compression picture than the sazen one - when I gained understanding, I felt the folk-wisdom pointer pointed exactly in the right direction, and what was missing was the emotional understanding. The picture representing it would be a black-and-white version of the same picture. (Though I don't call it lossy compression, nor do I find that concept useful.) It's different from a sazen, which is like a picture containing a few distinct features that let you recognize it only if you already know what someone is talking about.

But I don't want to start this discussion now - better that I write my own post first.

Comment by Jasnah Kholin (Jasnah_Kholin) on Sazen · 2023-02-24T19:06:34.340Z · LW · GW

I was sazened by the word "sazen" when I saw Duncan use it on Facebook, and thought I understood it. In my defense, I now believe the word does not carve reality at its joints, and that folk wisdom and what-sazen-should-mean are two different, distinct things.

Comment by Jasnah Kholin (Jasnah_Kholin) on Elements of Rationalist Discourse · 2023-02-15T07:53:03.500Z · LW · GW

So I thought about your comment, and I understand why we think about this in different ways.

In my model of the world, there is an important concept: Goodwill. There are arrows pointing toward it, things that create goodwill - niceness, being on the same side politically, personal relationships, all sorts of things. There are also things that destroy goodwill, or even push it into negative numbers.

There are also arrows coming out of this Goodwill node in my causal graph - things like System 1 understanding what was actually said, tending to react kindly, being able to pass an ITT. Some of these you can get other ways - people can be polite to people they hate, especially on the internet. But there are things I have only ever seen as a result of Goodwill, and correct System 1 interpretation is one of them. Maybe it's possible without Goodwill, but I've never seen it. And the politeness you get without Goodwill is shallow; people's System 1 notices, in body language and even in writing.

Now, you can dial back on needless insults and condescension; those are adversarial moves that can be consciously chosen or avoided, if with effort. But from my point of view, once there is so little Goodwill left, the chance for a good discussion is already lost - the options are only bad and very bad. Avoiding very bad is important! But my aim in such situations is to leave the discussion when the Goodwill gets close to zero, and to have a mental alarm screaming at me if I'm ever in negative numbers, or feel the other person has negative Goodwill toward me.

So, basically, in my model of the world there is ONE node, Goodwill; these are not separate things. You write: "even if there's no risk that yelling at people (or whatever) will directly cause you to straw-man them." But in my model, such a situation is impossible! Yelling at people WILL cause you to strawman them.

In my model of the world, this fact is not public knowledge, and it's an important part of what I want to communicate when I talk about Goodwill.

Thanks for the conversation! This is the clearest I've ever described my concept of Goodwill, and it was useful to formulate it in words.
 

Comment by Jasnah Kholin (Jasnah_Kholin) on Elements of Rationalist Discourse · 2023-02-12T14:46:05.465Z · LW · GW

Maybe I should (and probably won't) write my own post about Goodwill. Instead, I'll say in a comment what Goodwill is about, by my definition.

Goodwill, the way I see it, is on the emotional level basically respect and cooperation. When someone makes an argument, do you try to see what area of ConceptSpace they are gesturing toward, and ask clarifying questions to understand - or do you round it to the nearest stupid position, not even seeing the actual argument being made? When they say something incoherent, do you even try to parse it, instead of proving it wrong?

The standard definition of goodwill does not capture the ways in which a failure of Goodwill is a failure of rationality - a failure to see what someone is trying to say, to understand their position and their framing.

Civility is good for its own sake. But almost everyone who decides to be uncivil ends up strawmanning their opponents, and ends up with a more wrong map of the world. What looks like forgiveness from the outside should, for a rationalist, look from the inside like remembering that we expect short inferential distances, that politics wrecks your ability to do math, and that your beliefs filter your perceptions depending on your side in the argument.


I gained my understanding of these phenomena mostly from the Rational Blogosphere, and saw it as part of rationality. There is an important difference between a person executing the algorithm "be civil and forgiving" and a person executing the algorithm "remember biases and inferential distances, and try to overcome them" - the latter implemented by understanding the importance of cooperating even after perceived defection in a noisy prisoner's dilemma, by assuming that communication is hard and miscommunication frequent, and so on.

Comment by Jasnah Kholin (Jasnah_Kholin) on LessWrong Has Agree/Disagree Voting On All New Comment Threads · 2023-02-10T18:58:12.309Z · LW · GW

I didn't intend to comment, but then I read a comment about fighting negativity bias and decided the commenter was right, so I'm doing it too: this new feature is really good. I encountered it in the wild and found it intuitive (except for which side is which, but when I get it wrong the colors clarify it and I fix it immediately) - basically a very good and useful feature. In my model of the world, 70%+ of users like this feature and don't say so, and the result is the comment section below.

I also find it much better than Duncan's suggestion below, for reasons related to Propagating Beliefs Into Aesthetics: LessWrong's aesthetic is very clearly against attention-grabbing things that are Out To Get You, and against signaling undue overconfidence (Overconfidence Is Deceit), and Duncan's suggestion undermines this.

Comment by Jasnah Kholin (Jasnah_Kholin) on Here's Why I'm Hesitant To Respond In More Depth · 2023-02-07T14:14:28.236Z · LW · GW

Interesting! Now I'm thinking about what my own version of this post would look like, and what the differences would tell me about myself. I think that if different people wrote their own versions (I count Duncan's rules of discussion as his version, despite the different format), it would give interesting information about how people differ, and how to pass their ITTs. I may try writing my own version as an exercise in "know thyself."

Comment by Jasnah_Kholin on [deleted post] 2023-02-07T13:15:08.416Z

There are posts with good titles, where I understand the concept from the title and expect the post to elaborate. I found this post in the links in Pain is Not The Unit of Effort, and I thought I knew what I would see. I was wrong.

I expected a post with examples of places where pain actually pays off. Pain is orthogonal to success, which means sometimes you will need pain to succeed, and I expected a list of such examples. Only part 2 was an example of that. Parts 1 and 3 were examples of things that are not pain, and part 4 just left me bewildered. Part 5 sounds like a counterargument to me, and the Antidotes sound like counterarguments too. They look to me like examples of dysfunction - exactly the attitude the original post argues against.

I will address part 4 specifically, as it's the part I find most strange and confusing.

Ye Xiu's strategy sounds clearly inferior to me, like signaling that you will always cooperate in the Prisoner's Dilemma - you basically incentivize people to defect against you. Why is that a good thing?

"That wasn't a real disaster." sounds like No True Scotsman. Moreover, by defining disaster as "you simply die," you render the word useless. Categories exist to point at clusters of things; "everything" and "the empty set" are both useless categories. Why would you take a useful word and render it useless? <very bad things worth guarding against> sounds like a good category to me.

If you recover in less than a minute from your startup failing, you don't sound surprised enough given the disparity between your map and the territory. Emotions serve purposes - like making you try hard to avoid an outcome. Not "hard" as in throwing willpower at it, but "hard" as in dedicating pre-planning and perception and all your ability to think. If you don't know your startup will fail, you should be surprised; if you do know, you should do something to prevent it. Also, Chesterton's Fence: human emotions exist for a reason, and I deeply distrust ideologies that glorify emotionlessness. It's like throwing away a really useful tool optimized by the blind goddess of evolution. Are you sure you can do better? Really sure?

You say "I'm still in the game," and I think about the time I understood the problem with the social script of "don't give up." Sometimes it's bad to stay. WHY do you think it's good to stay? Why do you judge the one leaving the startup world as bad and your remaining as good? What are the criteria for that judgment?

Part of my problem with glorify-pain culture is that it's anti-reflection: it's all "go forward at full force" and never "let's stop, evaluate the options, and see which is best."

You give examples, but not reasons it was worth it, or that it was even a good thing. I have the feeling there is something unsaid this post is trying to reflect, but I can't imagine what person would be persuaded by a post like this, what kind of algorithm could create it. A total ITT failure on my part.

Maybe I should write the post I expected to find here. The problem? I don't have enough real-world examples.

 

Comment by Jasnah Kholin (Jasnah_Kholin) on When Having Friends is More Alluring than Being Right (by Ferrett Steinmetz) · 2023-02-04T13:24:22.999Z · LW · GW

It's very interesting to read this, because I had exactly the opposite reaction:
What if I got irrefutable proof that [my belief X] contradicts the evidence? I would NOT lose all my friends who believe X. What's wrong with them, that their friendships would depend on believing X?

My beliefs are idiosyncratic enough that I've never met a person I don't disagree with about something substantial. And yet, I have friends. Maybe it's because I haven't invested much effort in creating groups around beliefs?

Now I wonder how much I typical-mind other people on this question, because I expect that most people would not lose all their friends over it. Especially not "real" friends.

I feel I'm still failing the ITT here somehow, but I can't grasp exactly where.

Comment by Jasnah Kholin (Jasnah_Kholin) on Basics of Rationalist Discourse · 2023-01-31T18:18:10.147Z · LW · GW

So: I have read in Rational Spaces for almost a decade and almost never commented. When I did comment, it was in places I consider Second Foundation. Your effort here is basically the only reason I even tried commenting, because I had basically accepted that LessWrong comments are too adversarial for safe and worthwhile discussion.

In my experience - and the Internet provides a lot of places with different discussion norms - collaboration is the main predictor of useful and insightful discussion. I really like the Rational Spaces where there is real collaboration on truth-seeking. I find a lot of interesting ideas in blogs whose comments are adversarial and combative rather than collaborative, and I sometimes found interesting comments there, but I almost never found interesting discussion. I did, however, find plenty of potentially-insightful discussions where the absence of goodwill, trust, collaboration, and charity ruined a perfectly good exchange. Sometimes people deliberately pretended not to understand what was said and attacked a strawman instead. Sometimes (especially around politics) people genuinely failed to understand and were unable to hear anything but the strawman version of an argument. Often people were too busy trying to win the argument to listen to what the other side was actually trying to convey - looking for the weak part of an argument to attack instead of trying to understand the vague concept in thingspace the person was gesturing toward.

The winning-an-argument mode almost never produces new insights, while sharing experiences and exploring together, without trying to prove anything, is the fertile ground of discussion.

All the rules on this list are rules I agree with, and more than half would facilitate this type of environment. Other things you've written make me believe you find this kind of collaborative spirit important. But this is my way of seeing the world, in which the concept of Goodwill is really important, and more than half of these rules look like practical implementations of that concept. I'm not sure this is the way you think about these things, or whether we see the same elements of the territory and map them differently.

If I were writing these rules, I would have started with: "don't be irrationally, needlessly adversarial in order to wrongly fulfill your emotional needs - for example: [rules 2, 3, 5, 6, 7, 8, 9, 10]."

But there is enough difference that I suspect there's another concept - near my Goodwill concept but distinct from it - around which these rules cluster, and which I don't entirely grasp.

Can you help me understand whether such a concept exists, and if so, point me to some posts that might help me understand it?

 

Comment by Jasnah Kholin (Jasnah_Kholin) on My Model Of EA Burnout · 2023-01-31T13:36:07.216Z · LW · GW

This is a very interesting comment, about a book I just added to my reading list. Would you consider posting it as a separate post? I have some thoughts about masking and Authenticity - the price of it, and the price of too much of it - and I believe it's a discussion worth having, but not here.

(I believe some people would indeed benefit a lot from not working as new parents, but for others it would be a big hit to their self-worth, since they define themselves by their work - better done only after some introspection and after building a foundation of self-value disconnected from work.)

Comment by Jasnah Kholin (Jasnah_Kholin) on My Model Of EA Burnout · 2023-01-31T13:17:48.238Z · LW · GW

The way I see it, "something is wrong with the people EA attracts" and "something is wrong with EA" are complementary hypotheses. Dysfunctional workplaces tend to filter for people who accept those dysfunctions.

Comment by Jasnah Kholin (Jasnah_Kholin) on My Model Of EA Burnout · 2023-01-31T13:14:59.438Z · LW · GW

I know very little about other sorts of charity work, but I've heard social workers complain about burnout a lot.

I tend to assume that encountering harsh reality is hard, and that doing unappreciated work that lacks resources is hard.

It might be interesting to see what the baseline burnout level is in various fields - to look at both the variation and how similar or dissimilar EA is to other charities. It could help us understand how big a part different elements play in burnout: true value alignment, Heroic Responsibility, encountering discouraging reality, and other things (like simply too many working hours).

Comment by Jasnah Kholin (Jasnah_Kholin) on Picture Frames, Window Frames and Frameworks · 2022-10-23T20:19:43.075Z · LW · GW

I find it interesting, and it's something I especially want one of my friends to read. I also really liked the ACTUAL EXAMPLES; that was helpful. I will not use (at least, I'm not planning to use) the picture-window-framework metaphor myself.

So... maybe in the future, don't write long posts that take a lot of time just because two people pressured you to? You now have n=1 that it isn't worth it.

Comment by Jasnah Kholin (Jasnah_Kholin) on Yes Requires the Possibility of No · 2022-10-02T18:12:32.438Z · LW · GW

It was strange to read this. It was interesting - explaining a point I already knew in a succinct and effective way, and it connects nicely with the extensive discussion of consent and boundaries: Boundaries: Your Yes Means Nothing if You Can’t Say No.

And then, while reading the comments and still internalizing the post, I got it - I actually reinvented this concept myself! It would have been so nice not to have had to... I wrote my own post about it, in Hebrew. Its name translates to Admit that sometimes the answer is "yes", and it starts with a story about a woman who claimed to believe in personally optimizing your diet through self-experiments, but then found a reason to invalidate every result that contradicted her own beliefs about the optimal diet. It took me years to notice the pattern.

and then, this comment about budgeting and negotiating with yourself that empathized how important it is to allow the answer to be "yes":

"I’m seeing a lot of people recommend stopping before making small or impulse purchases and asking yourself if you really, really want the thing. That’s not bad advice, but it only works if the answer is allowed to be ‘yes.’ If you start by assuming that you can’t possibly want the thing in your heart of hearts, or that there’s something wrong with you if you do, it’s just another kind of self-shaming. "

It's kind of like 5, but from the point of view of a different paradigm.

And of course, If we can’t lie to others, we will lie to ourselves.

It's all related to the same concept, but I find the different angles useful.

Comment by Jasnah Kholin (Jasnah_Kholin) on Yes Requires the Possibility of No · 2022-10-02T17:42:19.418Z · LW · GW

Is it? I find it a very Christian way of thinking, and this thought pattern seems obviously wrong to me. It's incorporated into Western culture, but I live in a non-Christian place. You can believe in Heaven for all! Some new-age people believe that. You can believe in Heaven for all except the especially blameworthy - this is how I understand Mormonism.

Thanks for the insight! Now I can recognize one pretty toxic thought pattern as Christian influence, and understand it better!

Comment by Jasnah Kholin (Jasnah_Kholin) on The 5-Second Level · 2022-09-12T16:16:06.653Z · LW · GW

This post was almost useless for me - I learned from it much less than from any post in the Sequences. What I did learn: what over-generalization looks like; that someone thinks other people learn rationality skills in a way I have never seen anyone learn from, with a totally different language and way of thinking about it; and that translating is important.

The way I see it: people look at the world through different lenses. My rationality skills are the lenses that are instinctive to me and belong to the rationality-skills subset.

I learned them mostly by seeing examples and creating a category for them.

Not only did all those exercises not work for me, I also have much less idea what Yudkowsky was trying to teach here, while from the Sequences I did manage to learn some things.

Maybe the core rationality skill is the ability to bridge the gap between theory and practice? I consider "go one meta level higher" the most important one. It creates an important feedback loop.

Also, in most situations I consider going a level higher - giving the category and not the example - a good idea.

I actually learned that examples are a really good thing and that this is the natural way humans learn. I think that's part of what the post was trying to say, but I'm not sure. This is one of the least understandable posts of Yudkowsky's I have ever read.

Comment by Jasnah Kholin (Jasnah_Kholin) on Motive Ambiguity · 2021-11-21T08:37:19.116Z · LW · GW

What is cool about this post is its self-demonstrating nature. Like the maze, it gives an explanation that yields a less precise map of the world, with less predictive power than a more standard model. And it gives a more pessimistic and cynical explanation. You trade off your precision and predictive power to be cynical and pessimistic!

And now I can formalize what I didn't like about this branch of rationality: it looks like cynicism is their bottom line. They are already convinced, in the depths of their hearts, that the most pessimistic theory is true, and now they are ready to bite the bitter bullet.

But from the outside, I see no supporting evidence. This is not how people behave. The predictions created by such theories are wrong. It's such a strange thing to write on the bottom line! Being unpleasant doesn't make something true, any more than being pleasant makes it true.

And as the Wizard's First Rule says, people believe things they want to believe or things they are afraid to believe...