EniScien's Shortform
post by EniScien · 2022-02-06T06:29:52.071Z · LW · GW · 126 comments
Comments sorted by top scores.
comment by EniScien · 2023-05-08T02:14:50.087Z · LW(p) · GW(p)
In HPMOR, Draco Malfoy thinks that either Harry Potter was lucky enough to come up with a bunch of great ideas in a short period of time, or, for some unimaginable reason, he had already spent a bunch of time thinking about how to do it. The real answer to this false dilemma is that Harry simply read a book as a kid whose author had come up with all of this for the book's needs.
In How to Seem (and Be) Deep, Eliezer Yudkowsky says that the Japanese often portray Christians as bearers of incredible wisdom, while the West does the same with the "eastern sage" archetype. And the real answer is that the two cultures have vastly different, yet meaningful, sets of ideas, so when one person meets another who immediately throws at him three meaningful and highly plausible thoughts he has never even heard of, and then does so again and again, he concludes that the other is a genius.
I've also seen a number of books and fanfics whose authors seemed like incredible writing talents and whose characters seemed like geniuses, fountaining brilliant ideas. And each time it turned out that they really just came from a cultural background that was unfamiliar to me. I generalized this to the point that when you meet someone who spouts a string of brilliant ideas in a row, you should conclude that it's almost certainly not that he's a genius, but that he's familiar with a meme you're unfamiliar with.
And hmmm, just now it occurred to me that this probably also explains that Aura of Power around characters who are familiar with a certain medium, and around people who are familiar with a certain profession (https://www.lesswrong.com/posts/zbSsSwEfdEuaqCRmz/eniscien-s-shortform?commentId=dMxfcMMteKqM33zSa [LW(p) · GW(p)]). That's probably the point, and it means the feeling is not false: it really does elevate you above mere mortals, because you have a whole pile of meaningful thoughts that mere mortals simply do not have.
Someone familiar with more memeplexes, when he ponders, stands on a bigger pile of cached thoughts: not just on the shoulders of giants, but on the shoulders of a human pyramid of giants, so he can see much further than someone who looks out only from his own height, however big or small it is.
Replies from: quetzal_rainbow, EniScien, EniScien
↑ comment by quetzal_rainbow · 2025-02-16T20:20:01.226Z · LW(p) · GW(p)
EY wrote in planecrash about how the greatest fictional conflicts between characters with different levels of intelligence happen between different cultures/species, not individuals of the same culture.
Replies from: EniScien
↑ comment by EniScien · 2025-02-16T22:03:14.778Z · LW(p) · GW(p)
Yeah. I reread it today and thought it could be replaced by a link to that plus the phrase "so if you see that somebody is very smart and spits out brilliant new takes on things, it may be a matter of cumulative/crystallized intelligence, not only fluid intelligence".
(But when I tried to find the part about TPOT using keywords like "Carissa Ri-Dul TPOT Oppara", I found out... the search gives me nothing. So today I had no hope of finding the fragment in a reasonable time.)
↑ comment by EniScien · 2025-02-16T22:27:13.656Z · LW(p) · GW(p)
Object-level comments, two years later
Now, though, I think it should be derivable a priori. As a toy model: SUPPOSE there are 1B people, the average person has 1 idea/day, 1 in 1K has 100 ideas/day, 1 in 1B has 1M ideas/day, and you have 86,400 s/day and need about 86 s/idea for comprehension (so you can absorb roughly 1,000 ideas/day). THEN even the 1-in-1K person originates only about a tenth of the ideas he holds, so the prior on meeting somebody who has created more ideas than he has read is lower than 1/1K. AND the average people alone create about a million times more ideas than one person can read, so at best you will know about 0.0001% of the ideas your world has.
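A minimal sketch of that arithmetic (Python; all numbers are the toy assumptions stated above, not real data):

```python
SECONDS_PER_DAY = 86_400
SECONDS_PER_IDEA = 86.4                               # time to comprehend one idea
READ_CAPACITY = SECONDS_PER_DAY / SECONDS_PER_IDEA    # ~1,000 ideas/day

population = 1_000_000_000
avg_rate = 1             # ideas/day for the average person
rare_rate = 100          # ideas/day for the 1-in-1K person

# Even the 1-in-1K person originates only a small fraction of the ideas they hold.
own_share = rare_rate / (rare_rate + READ_CAPACITY)
print(f"1-in-1K person: ~{own_share:.0%} of the ideas they hold are their own")

# Average people alone produce far more ideas per day than one reader can consume.
total_ideas_per_day = population * avg_rate
print(f"Share of the world's daily ideas one reader can cover: "
      f"{READ_CAPACITY / total_ideas_per_day:.4%}")
```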
↑ comment by EniScien · 2025-02-16T17:43:05.669Z · LW(p) · GW(p)
Wow. Looking at it again two years later, as an exception to what is usually an absolute rule for me, it doesn't look absolutely terrible; it looks almost good. I'm not sure about the reasons. Maybe I managed to write it for an external reader instead of just spitting out a thought. I doubt it's because I "acquired a decent level of rationality".
comment by EniScien · 2025-02-16T16:46:55.859Z · LW(p) · GW(p)
This is a sort of Welcome (Back) post. An analysis of my mistakes. Or a confession.
I probably solved a problem that, iirc, I've had my whole life: being VERY upset by losing at anything: school grades, losing games (even to an AI), etc. By posting on LW I could get downvotes. And I did. But I kept trying to post; I had ideas I wanted to publish.
And to my complete surprise, reinforcement turned out to work in such a way that if you overcome it at the final step of a thought-action sequence, it simply strikes your earlier steps instead. E.g., I became less able to think about posting. And I didn't notice that until it was too late.
And I didn't even ask people not to downvote me, because obviously it would look like "can you vote me higher than other users, unrelated to post quality, because of these purely trust-based reasons about how much I suffer!". And wouldn't that be wrong and selfish, when I so much liked using downvotes myself?
(Of course, as I now understand after thinking about it, I could and should have just written out this reasoning fully; maybe someone else had a solution that was obvious to them.)
And so, for the previous 1.5 years my brain in fact avoided thinking about anything that could lead to posting on LW, like having mathy thoughts or opening the site.
The problem could have been solved if I had just managed not to be upset by downvotes. And yet, even though I acquired good enough general emotional control to fully turn off almost any emotion just a few months after I stopped posting, being upset by losses was very resistant to that. And I could not introspect why. And now it looks like I have finally solved this lifelong problem.
What happened, iirc, is that when I was a little kid I heard a lot of talk like "boys should not be afraid of pain, should not cry about it". But pain was simply the worst thing in my utility function, and if somebody advises you to do the exact reverse of your utility function, then it's a good idea not to follow their advice but to do the exact reverse of it. So I decided to frantically avoid ever doing anything remotely painful, remotely bad-feeling, AT ALL.
And hence I could not use my emotional control to make losing less painful, because that could raise the probability of the painful-at-all thing happening.
Then I noticed that my intuitive expectations about "will I be upset by losing" had changed, and I immediately tried to get as much profit from this luck as I could. I applied emotional control several times to be sure I would not lose the opportunity.
And the problem... was that I wasn't sure about my intuition. I needed some way to test it without additional negative reinforcement of important things, and preferably without any consequences at all. And there was such a way!
To explain: I wasn't playing many games, partly because easy levels were too easy and on hard levels I could lose. But some time ago I got curious to see how my Scrabble skills had changed, now that I have a much larger vocabulary than I did as a kid. I played against level 1 of the AI. It was incredibly easy. So I tried just one level higher. And lost completely. Unlike me as a kid, there were no tears, screaming, growling, waving hands, or throwing things. Not even enraged thoughts. But it didn't matter much; I just felt my wish to ever play high levels of Scrabble vanish to zero. And I couldn't do anything about it.
And now I checked: how much of that would I feel when losing? And... I immediately understood that I had VASTLY overcorrected. Because I felt nothing at all, not even slightly.
So I hope I will now be able to post my ideas without such enormous psychological obstacles. Because in the last 1.5 years I wasn't trying to write my ideas down or even think about doing so, I now have many more ideas than before. More than it is possible to write up; I just don't know which ideas LW would consider new and useful, relative to my own judgment. I am going to try writing a mix of: detailed write-ups of my own picks for best, detailed write-ups of what I predict LW would rate best, short namings of the first 20 best, and probably something I simply enjoy writing, for positive reinforcement. And I'll update my beliefs based on the reactions added to parts of my posts.
(A lot of things in this post seem very off, both at the level of general narrative and at the level of grammar. Maybe I've lost whatever skill I had at writing anything except maximally short notes to myself. Or... maybe I've just become too much of a quibbler; 1.5 years ago I wouldn't have been able to write a whole post in a non-native language and only afterwards notice how much offness there is. I will try to fix it tomorrow with fresh eyes, but I'm posting already now to avoid getting stuck perfecting things, especially since this is the quick-takes format.)
Replies from: cubefox, EniScien, EniScien
↑ comment by EniScien · 2025-02-18T13:37:29.099Z · LW(p) · GW(p)
Update: I have certainly lost some writing skills... Or, more precisely, I can't use my old writing skills and successfully think new thoughts at the same time; they are too distant in mental space. That makes things harder, and I am not sure what to do about it.
One very important problem here is that my old writing skills are too tightly woven in as a habit. Does anyone know a solution for such problems?
↑ comment by EniScien · 2025-02-17T00:32:12.518Z · LW(p) · GW(p)
Somehow I manage to be dismissive/condescending even toward myself. Like "oh, that old me just didn't think about writing his reasoning up for LW". When actually it systematically turns out that I, or someone else, did think of it, even tried, but. The. Thought. Had. Failed. To. Work.
I actually HAD written a post with a significant part of my reasoning: https://www.lesswrong.com/posts/zbSsSwEfdEuaqCRmz/eniscien-s-shortform?commentId=mv3YpuL6tDTtTF7Dv [LW(p) · GW(p)]
comment by EniScien · 2023-04-21T22:28:46.685Z · LW(p) · GW(p)
I noticed here that Eliezer Yudkowsky says in his essays (I don't remember exactly which ones; it would be nice to add names and links in the comments) that the map has many "levels", while the territory has only one. However, this terminology is itself misleading, because these are not really "levels", they are "scales". And from this point of view it is quite obvious that scale is purely a property of the map: the territory doesn't just have one scale, the smallest one; you can't even say it has all the scales in one; it simply does not have a scale. Scale is a degree of approximation, the way distance is for photographs. Different photographs can be taken from different distances, but the subject is not the closest photograph, nor all of them put together; it is simply NOT a photograph, and for it there is no scale, distance, or degree of approximation. All those categories describe the relationship between the subject and a photograph at the moment of shooting, but the subject never photographed itself, so there were no shooting distances. Talking about levels makes it feel like there could very well be many levels and they just happen not to exist, whereas when talking about scale it is obvious that the territory is not a map: it has no scale, just as it has no cross marking your current location and no icons for points of interest. And scale fits perfectly into the map-and-territory analogy.
Replies from: ChristianKl
↑ comment by ChristianKl · 2023-04-22T19:40:46.393Z · LW(p) · GW(p)
"Map isn't the territory" comes out of Science and Sanity from Alfred Korzybski. Korzybski speaks about levels of abstraction.
In the photography case, there's the subject, then there's light going from the subject to the camera (which depends on the lighting conditions), then the camera sensor translates that light into raw data. That raw data then might be translated into a png file in some color space. These days, the user might then add an AI based filter to change the image. Finally, that file then gets displayed on a given screen to the user.
All those are different levels of abstraction. The fact that you might take your photo from different distances and thus have a different scale is a separate one.
Replies from: EniScien
↑ comment by EniScien · 2023-04-22T20:15:22.108Z · LW(p) · GW(p)
But does Yudkowsky mention the word "abstraction"? Because if not, then it is not clear why "levels". And if it is mentioned, then, as with scale, I don't really understand why people would even think that different levels of abstraction exist in the territory.
Edited: I've searched Reductionism 101 and Physicalism 201 and didn't find a mention of "abstraction", so I keep my opinion that using just the word "level" doesn't create the right picture in the head.
Replies from: lahwran, ChristianKl
↑ comment by the gears to ascension (lahwran) · 2023-04-23T00:27:15.031Z · LW(p) · GW(p)
for one major way scale is in the territory, search for "more is different".
↑ comment by ChristianKl · 2023-04-22T22:42:22.387Z · LW(p) · GW(p)
The main issue is that people often make mistakes that come out of treating maps like they have one level.
Yudkowsky doesn't go much into the details of levels, but I don't think "scale" gives a better intuition. It doesn't help with noticing abstraction. "Level" might not help you fully, but "scale" doesn't either.
comment by EniScien · 2022-03-03T14:26:07.880Z · LW(p) · GW(p)
Despite the fact that this is only an "outward attribute of a smart character", and not something rational, I calculated that if you study for 15 minutes a day (one 5-minute lesson in the morning, at lunch, and in the evening), you can learn a language in about 5 years, which means about 12 languages in a lifetime, something usually perceived as so incredibly big that only a polyglot genius could do it. Yes, given the development of AI, languages are hardly necessary anymore, but it seems that constantly learning something new develops the brain and postpones Alzheimer's, and it is a good way to practice consistency. Also, the feeling of being able to understand many different languages, find relationships between them, and so on is simply pleasant. It's like asking why every mage in Potter doesn't reach Professor Quirrell's level in old age if it's just a matter of training: for exactly the same reason that not every person who has a smartphone now knows at least three languages. People don't value it and make excuses like it not being that important, although it's definitely better to know three languages than one. Studying for five minutes is not difficult at all, and learning a language is almost always more rewarding than whatever the person would otherwise be doing. And yes, I myself have been studying languages for 295 days without a single missed day; the main thing was to break the process into several stages that back each other up, while also making each individual lesson less difficult. And I managed to raise my German from complete ignorance to approximate understanding.
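A quick check of the time-budget arithmetic (a minimal sketch; the 5-years-per-language figure and the ~60-year study span are the assumptions stated above, not established facts):

```python
minutes_per_day = 15          # three 5-minute lessons
years_per_language = 5        # the comment's assumption
study_years_in_a_lifetime = 60

hours_per_language = minutes_per_day * 365 * years_per_language / 60
languages_per_lifetime = study_years_in_a_lifetime / years_per_language

print(f"~{hours_per_language:.0f} hours of study per language")   # ~456 hours
print(f"~{languages_per_lifetime:.0f} languages in a lifetime")   # ~12
```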
Replies from: EniScien, Pattern, EniScien
↑ comment by EniScien · 2023-04-22T16:41:50.242Z · LW(p) · GW(p)
I must say, I made the mistake of thinking that forming a habit was enough to get results. At the time I was interested, so I did not notice that interest was required here. But now I realize that only some part of crystallized intelligence, not fluid intelligence, can be made into a habit. Learning anything, including languages, is not the activation of already-learned neural connections but the creation of new ones; this requires straining the intellect, and it cannot be done purely out of habit. Some kind of emotional motivation is needed, for example interest. So now I don't study any one particular language, but switch between English, German, Greek, Latin, and Italian as my interest fades or grows, in order to constantly keep it at a high level for at least one language.
↑ comment by Pattern · 2022-03-04T21:18:29.070Z · LW(p) · GW(p)
and not something rational,
'Spending time well' is optimal.
Replies from: EniScien
↑ comment by EniScien · 2022-03-05T15:24:21.209Z · LW(p) · GW(p)
Maybe. I just thought it would be better if LessWrong didn't turn into a standard site with "ten tips on how to improve your life". On the other hand, explicitly posing the question gives at least two answers: send the ready-made advice to other sites, and here give non-standard/new tips, talking about things no one has noticed yet.
Replies from: Pattern
↑ comment by Pattern · 2022-03-07T18:42:18.316Z · LW(p) · GW(p)
just turn into a standard site with “ten tips on how to improve your life”,
The obvious upgrade is having 'life improvement' be more experimental. People try things out and see how well they work, etc. (I'm not saying there has to be 'peer review' but 'peer experimentation' does seem like a good idea.)
Another approach is grouping things together, and having tools so that those things can be filtered out. (This won't be perfect because short form 'posts' don't have tags.) Also issues around content sticking around, versus floating down the river of time.
↑ comment by EniScien · 2025-02-16T23:43:02.262Z · LW(p) · GW(p)
Self review 2
There is a difference between "better than the default option" and "the best of all moderately easily searchable options". And I doubt I did the second kind of search at all, rather than just doing the first and congratulating myself. I definitely wasn't asking "which language is most useful/pleasant to learn: English, German, Japanese, Greek, Latin, Esperanto, etc." or "which exercises are the most useful/pleasant: languages, mnemonics, speed reading, mental arithmetic, etc."
Edit: actually, now I think it would have been better to learn speed reading instead of German, then learn English at 5x speed and for reading/writing purposes only, and even after that learn some conlang rather than German. And to do it not in "an app which is better than school" but in "the app which is the best of all the apps I found".
comment by EniScien · 2022-05-19T12:18:00.385Z · LW(p) · GW(p)
Some time ago, I noticed that the concepts of fairness and fair competition were breaking down in my head, just as the concept of free will once broke down. All three are not only wrong, they are meaningless: if you go into enough detail, you cannot explain how they are supposed to work even in principle. There is only determinism and chance, only upbringing and genetics; there is simply no place for free will. And from this it follows that there is also no place for fair punishment and fair competition, because your actions and achievements are either the result of heredity or the result of the environment and society. The concept of punishment turns out to be fundamentally wrong, meaningless: you can't give a person what he deserves in some metaphysical sense. Maybe it's my upbringing, or maybe people in general tend to think of moral systems as objectively existing. But in fact you can only influence a person with positive and negative measures to achieve the desired behavior, including socially useful behavior. As was noted in one of the Sequences, moral correctness is only relative to someone's beliefs; it is not a property of an act but of your act of evaluating it. And that seems to be the only mention of such questions in the LessWrong Sequences. For some reason there is a sequence about free will, but not about fair punishment and fair competition. Perhaps there are materials on some third-party sites? Because I was completely unprepared for my ideas of justice falling apart in my hands.
Replies from: Dagon, EniScien
↑ comment by Dagon · 2022-05-19T19:20:49.291Z · LW(p) · GW(p)
Be careful with thinking a phenomenon is meaningless or nonexistent just because it's an abstraction over an insanely complex underlying reality. Even if you're unsure of the mechanism, and/or can't calculate how it works in detail, you're probably best off with a decision theory that includes some amount of volition and intent. And moral systems IMO don't have an objective real cause, but they can still carry a whole lot of power as coordination points and shared expectations for groups of humans.
Replies from: EniScien
↑ comment by EniScien · 2023-04-22T16:11:31.093Z · LW(p) · GW(p)
It seems that you didn’t understand that my main problem is that every time in my thoughts I rest on the fact that within the framework of a better understanding of the world, it becomes clear that the justifications why competitions are good do not make any sense. It is as if you have lived well all your life because you were afraid of hell, and now the previous justifications why it is worth living well do not make sense, because you have learned that hell does not exist, now it is not clear what is the point of behaving well and whether in general, is it worth it to continue to honor his father, not to steal other people's things and other people's slaves, to stone women for betraying her husband and burn witches? Maybe these rules make sense, but I don't know if they have it, and if so, which one. I mean, I wondered what role do, say, sports olympiads play, but my only answer was that they force athletes to overexert their bodies or dope, and also spend a lot of money on such extremely expensive entertainment in exchange for releasing better films or let's say the scientific race "who will be the first to invent a cure for aging." Well, I've been recently thinking about how to organize the state according to the principle of "look at the economic incentives" and I seem to finally understand what sense competition can sometimes have. Those incentives. Competitions are one of the types of competition, so they allow not only to give an incentive to someone to go towards a certain goal, they can create an arms race situation when you need to not only achieve the goal, but do it faster/better than other participants. However, the key word is "goal", and in sports olympiads the goal clearly does not make sense, like beauty contests, in science olympiads with a better goal, they allow you to determine the smartest participants for further use of their intellect, which does not make sense in sports olympiads, because machines have long been faster, stronger and more resilient than humans, and right before our eyes they are becoming more agile, however, at local sports competitions with a better goal, they allow people to be stimulated to move more.
↑ comment by EniScien · 2025-02-16T23:16:27.167Z · LW(p) · GW(p)
Self review
(I've already written something here about the meaningfulness of competition, but it's hard to parse, so here is a rewrite.)
Try less to follow society's usual justifications for why a thing "makes sense", and wonder more about "which multiple forces conjure the thing into reality".
E.g.:
- different stimuli make people perform different behaviours
- you need vengeance to be proportional to harm, so that people will not, e.g., try to hide lesser crimes like stealing by committing bigger crimes like killing (those who knew about the lesser crimes)
- competing with somebody at an achievable level makes people put effort into optimising the thing being competed over, in order to seize its prizes
comment by EniScien · 2022-05-25T13:40:47.893Z · LW(p) · GW(p)
Some time ago I was surprised to find that narrow professional skills can significantly change your thinking, and not just give you new abilities within a specialization. They change you, rather than just allowing you to cast a new spell.
Something like not scientific but professional rationality, though definitely more than just the ability to make origami. (Specifically, this happened to me with the principles of good code, including in object-oriented programming.) I had never heard of this before, including here on LessWrong.
But it makes me think that the virtue of learning is more than being able to look at cryonics without prejudice. It seems that the virtue of learning itself can change your thinking (in a positive way?). The ability to see where reality is tightly woven?
Perhaps I should have noticed earlier that, for example, characters who specialize in politics (Malfoy), tropes (Hiro) and other things look clearly cooler than if they had no specialization and were simple townsfolk. It seems that even narrow skills, and not a special leveling of wisdom, give you experience points.
Although perhaps the point was that I did not think that this is really applicable in real life. And I'm still not sure if it really improves you, and not just creates a fake aura of coolness.
Replies from: EniScien
↑ comment by EniScien · 2025-02-17T00:04:15.732Z · LW(p) · GW(p)
Self review 1
I was wrong before the observation in the OP, but in "narrow professional skills can't give you Rationality" the OP's update wasn't about "narrow can't"; it was about "professional skills are narrow". They aren't. Our world just can't systematically identify the premises on which a professional skill works, optimise it into an abstract, non-narrow principle, and then teach it to everybody who frequently meets those premises.
comment by EniScien · 2025-02-18T11:10:57.630Z · LW(p) · GW(p)
I think I finally have a clear understanding of Newcomb-like problems. I am afraid I am again reproducing some EY post which I read and forgot, but just writing it down will be faster than searching.
I think the reason people get stuck on these problems is the wrong intuition that your decisions "change the future". This is obviously wrong if you think about it: it's not as if in the Past there was one Future, and now, in the Present and Future, there is a different Future.
It's wrong in the same way as thinking that if you run a computer program, it will "change its future outcome" in the process of calculation. The next step causes future steps, but those steps were themselves caused by previous steps of execution. Your decisions cause, determine, the future, but they were themselves caused by the past, and there was always just one stream of causes.
In the same way, if you decide in the future to "one-box" and from that decision expect that Omega put in the 1M, it doesn't mean that your decision "changed the past", any more than it changed the future. There is just a previous state of the world (including you) which simultaneously causes your real choice and the choice inside Omega's prediction.
So probably this makes my thinking more EDT than TDT?
From my view, it's not about being a "kind of agent"; that's just some strange crutch. It's more about thinking that there was some full system state, some code, which fully caused your current state, and that this code is executed in some other place, and because the system is fully deterministic, the results of execution will always match.
And then, if you know that you've done something, you can immediately predict from that evidence that the other copy has done the same, because if one copy of a deterministic algorithm calculated some expression, you can generally predict that calculating this expression always returns the same answer X, so everyone who calculates it will get the same answer X. And if you know you calculated it and got Y, you know that X=Y.
So in the expected future where you pick one box, you can expect that your future self will immediately update toward the Omega simulation in the past also having picked that, which caused Omega to put in the 1M. And vice versa: in the expected future where you pick two boxes, you can update about the simulation's choice and the lack of the 1M.
No time travel. Causal links do indeed connect your decision and the past choice of box contents. But they go from the past: your decision not only causes things, it WAS itself caused. And the same causes produce both your decision and the box contents.
I think it would be more obvious if the problem started with Omega, visibly to you, beginning to calculate its prediction, secretly making the setup, and THEN giving you the choice. Not just "everything has already happened, choose".
There is a problem with such thinking, though: when you tell people that they are fully determined, they start to think that their future is determined not through their ongoing thinking process but independently of it.
I don't know what to say to that except: people who think they can determine the future evidently get better results than those who think everything is doomed, so you definitely can determine the future by trying to do so, even if you don't know how to reconcile that with being determined yourself.
Or maybe: if you now know about being determined and start to think "how is my making the future decision in the usual way determined by the past", that's a mistake; it adds more levels of calculations inside calculations, so you have less power to spend on making a good decision. And of course don't think "how is my current future determined by my past, including this very thought"; that's an infinite loop.
Though in the first place I think about it like this: if you have examples of previous games and one-boxers get better results, then of the two predicted futures there is one with higher expected utility, and it's the one where you decide to pick one box; so the decision to pick one box has higher expected utility, so do it.
Or, to elaborate: in our examples, choosing one box is evidence of getting the 1M. But if you choose one box, you will not GET this evidence, you will CREATE it. And usually, if you have some evidential measure and start to optimise it, it ceases to be evidential.
But if we have examples of people who optimised the measure and still got the results, then it will still be valid evidence even if you optimise the measure. So the predicted future which starts from picking one box still has better expected utility and is the one you should choose, by starting with picking one box.
And I prefer to think about the whole situation as if our world is a simulation: Omega copies the computer state at one point, runs it forward until it can see your decision, and then executes the copied state a second time, but now fills the boxes conditional on your choice from the previous run. And in sim 3 it uses your choice from sim 2. And so on, a billion times.
For me it's then obvious that you should choose one box, because running the same simulation state cannot give different results at the moment where the simulated you makes your choice.
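A minimal sketch of that re-run framing (Python; the payoff amounts and the two toy agents are illustrative assumptions, not anyone's canonical decision theory). Because the agent is a deterministic function of its input, the prediction run and the real run cannot differ:

```python
def one_boxer(observed_setup):
    return "one-box"

def two_boxer(observed_setup):
    return "two-box"

def omega_game(agent):
    # Omega "simulates" the agent by running the same deterministic function
    # on the same input, then fills the boxes conditional on that prediction.
    prediction = agent("two boxes in front of you")
    big_box = 1_000_000 if prediction == "one-box" else 0
    small_box = 1_000

    # The real run is the same state executed again, so the choice must match.
    choice = agent("two boxes in front of you")
    assert choice == prediction
    return big_box if choice == "one-box" else big_box + small_box

print(omega_game(one_boxer))  # 1000000
print(omega_game(two_boxer))  # 1000
```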
(And when I was trying to think about choices that change the past, and about determinism, free will, and decisions, my brain felt like it was trying to fold itself into a pretzel. And I conclude that it's better never to actually act on logic which makes your brain turn into a pretzel, even if it is "perfectly physically grounded". If you have some "perfectly physically grounded logic", you need to unfold it until it becomes obvious, intuitive, and fully visible, because otherwise you'll just end up making mistakes.)
Replies from: Vladimir_Nesov
↑ comment by Vladimir_Nesov · 2025-02-18T15:39:04.359Z · LW(p) · GW(p)
A lot of free will confusions are sidestepped by framing decisions so that the agent thinks of itself as "I am an algorithm" rather than "I am a physical object" [LW · GW]. This works well for bounded individual decisions (rather than for long stretches of activity in the world), and the things that happen in the physical world can then be thought of as instantiations of the algorithm and its resulting decision, which the algorithm controls from its abstract headquarters that are outside of physical worlds and physical time.
For example, this way you don't control the past or the future, because the abstract algorithm is not located at some specific time, and all instances of it at various times within the physical world are related to the abstract algorithm in a similar way. For coordination of multiple possible worlds, an abstract algorithm is not anchored to a specific world, and so there is no additional conceptual strangeness of controlling one possible world from another, because in this framing you instead control both from the same algorithm that is not intrinsically part of either of them. There are also thought experiments where existence of an instance of the decision maker in some world depends on their own decision (so that for some possible decisions, the instance never existed in the first place), and extracting the decision making into an algorithm that's unbothered by nonexistence of its instances in real worlds makes this more straightforward.
Replies from: sharmake-farah, EniScien
↑ comment by Noosphere89 (sharmake-farah) · 2025-02-18T16:32:07.082Z · LW(p) · GW(p)
Somewhat similarly, one of the most useful shifts in thinking about identity, in order to apply it usefully to more cases, is to take the view that "my identity is an algorithm/function" is the more fundamental primitive/general case, and to view "my identity is a physical object" as a useful special case, since the physicalist view of identity cannot hold in certain regimes.
The shift from a physical view to an algorithmic view of an identity answers/dissolves/sidesteps a lot of confusing questions about what happens to identity.
(It's also possible that identity in a sense is basically a fiction, but that's another question entirely)
↑ comment by EniScien · 2025-02-19T21:14:57.383Z · LW(p) · GW(p)
I am suspicious of, and don't like, using weird sidesteps instead of simply not being confused while looking at the question from the position of "how will it actually look in the world/situation" (though they can be faster, yeah).
I mean, causes are real; the future was caused by you, maybe even say controlled by you; and it feels less like controlling something if somebody predicts my wishes and fulfills them before I can even think about fulfilling them.
But these are probably just trade-offs of trying to explain these things to people in plain English.
When I first thought that picking actions by conditional expected utility was obviously correct, I was very confused about the whole decision-theory situation. So the link was very useful, thanks.
comment by EniScien · 2025-02-17T14:26:04.579Z · LW(p) · GW(p)
(It's vague, but I'll try it broadly right NOW. And then elaborate if necessary.)
I.
I've just noticed comments by Raemon, Gwern, and Vladimir_Nesov on my old post, and it struck me that maybe I was wrong and the LW community is much tinier than I thought. It would explain a lot: flaws in the site design, the lack of the galactic ideas that I momentarily spit out internally on seeing things, the overall tiny success of the rationality and alignment missions.
Probably there are just not enough people for all of that? And I was wrong to estimate by EY's 140K Twitter followers and the number of readers of HPMoR, and SHOULD have been estimating by the number of votes and reactions. Probably there are only a few hundred people who have an account on the site and at least vote, let alone comment, or furthermore post, on a regular basis.
Like, in that case I should be modeling LW much more like a hunter-gatherer tribe than an inexploitable-market civilization.
(oops, I lost the "try it broadly" framing)
So:
- How big is the LW community?
- How many are there of the (how to say it... high-level rationalists?), like Yudkowsky, Salamon, Muehlhauser... Gwern? I don't actually know who they are.
- I don't want to invoke enormous hubris here, but it feels like, depending on the previous two questions, I might hugely underestimate my own ability to contribute something significant. And my already-formed best ideas might be much less likely to already be widely known stuff.
II.
How strong is the consensus, on LW specifically (not EA, SSC, whatever), about the AI scenario?
Replies from: Viliam, EniScien
↑ comment by Viliam · 2025-02-18T01:10:50.416Z · LW(p) · GW(p)
Probably there are just not enough people for all of that?
Social media will make you overestimate a lot. When I share a LW post on Facebook, it gets 10 likes. When I invite those people to a local LW meetup, no one comes. Clicks are cheap; even people who don't like rationality are happy to click if the article seems interesting.
Replies from: EniScien
↑ comment by EniScien · 2025-02-18T08:27:16.945Z · LW(p) · GW(p)
(I will treat this as getting some evidence, so, thanks.)
I mean, I was just thinking along the lines of "yeah, not all of the people on Twitter will come to LW", but I was on LW even though I wasn't on Twitter. And there are also Facebook, Reddit, SSC and other personal blogs, people who prefer meetups to being online, and people in other countries.
And also, so many people read HPMoR. And personally, if I look back on my life story, I don't see any branching moments where I was even a little inclined to change the direction I was going in after reading HPMoR; the path could have been different, but the direction is the same.
Maybe I was just wrong that I am not unusual. Like, I was probably the best pupil in my class, but that just gives a sort of lower bound. What comes after that? Maybe I should take an IQ test or the SAT. But that's long, hard, and has side effects.
The closest thing I've done was to take vocabulary tests. And I actually found just one (1) site that wasn't terrible: https://www.myvocab.info/ Unlike all the others, it uses adaptive testing, so it takes just around 20 questions and 2 minutes, and the author says "Bayes" in the description, which are, on the surface, good signs.
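As an aside, a toy illustration of why an adaptive test needs so few questions (a sketch assuming words can be ranked by difficulty and that a person knows roughly all words below some threshold rank; this is a plain binary search, not how myvocab.info actually works):

```python
def adaptive_vocab_estimate(knows_word, max_rank=100_000, questions=20):
    lo, hi = 0, max_rank           # bounds on the person's "threshold rank"
    for _ in range(questions):
        probe = (lo + hi) // 2     # ask about a word of middling difficulty
        if knows_word(probe):
            lo = probe             # knows it: threshold is at least this high
        else:
            hi = probe             # doesn't: threshold is below this
    return (lo + hi) // 2

# Simulated test-taker who knows the 35,000 easiest words.
print(adaptive_vocab_estimate(lambda rank: rank < 35_000))  # ~35,000
```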
And I got the 99.9th percentile for my age (which is just knowing twice as many words as average, not that much). And I can't remember specifically doing anything that could have optimized for a vocab test.
I don't know, maybe almost everyone on LW would get the 99.9th percentile for their age or something? (Though of course I took the test in my native Russian, and for English it would be different.)
Replies from: Viliam
↑ comment by Viliam · 2025-02-18T10:26:49.393Z · LW(p) · GW(p)
Maybe I was just wrong that I am not unusual.
Unusual is not a binary, it is a "more or less" thing, but yes, you may be much more unusual in some regards than you expected.
Actually, looking at your bio, it may be a cultural taboo for you to admit that you are exceptional. I grew up in communist Czechoslovakia, and the thought "I may be different from most other people, in a good way" went completely against all my conditioning. That's not what you are supposed to ever say, unless you want to get in deep trouble.
It's not just about intelligence, although high intelligence may be a prerequisite. Most people, even the intelligent ones, simply don't give a fuck about many things that we value at LW, such as having a correct model of the world as opposed to merely winning verbal battles, or preferring to know the truth even if it makes you sad as opposed to just enjoying happy thoughts no matter how unrelated to reality they are. Most people just don't click with this, because... I guess they don't see a reason why. Why do things, if they don't make you happy? (Yeah, in theory, looking reality in its face could save your life or something, but in practice, it's not like rationalists are famous in the outside world for actually winning at life, so maybe this all is just our version of wishful thinking.)
So, yeah, actual rationalists are very rare. I couldn't find ten of them in my country. (And I am not familiar with the Bay Area community, but sometimes I suspect that many people are there simply for the vibes. Some people enjoy hanging out with rationalists, even if they don't share the fundamentals. It's just another social activity for them.)
Then there is also the fact that people are busy. Not everyone who has the potential to become a rationalist also has time to spend on LW website. Such people usually have a lot of work they could do instead.
I might hugely underestimate my own ability to contribute something significant
Maybe, I guess you won't know until you try.
Replies from: EniScien
↑ comment by EniScien · 2025-02-18T13:32:13.498Z · LW(p) · GW(p)
I think I wrote my bio in a somewhat biased way. But yeah, even in modern Russia it's not a very popular frame that you are unique and exceptional. Our teacher told us not to "run faster than the whole train". Modesty is a virtue, but admitting how excellent you are is not one at all.
And in part it was things like my parents trying to make me less of a perfectionist about grades by saying that "grades don't really matter"; I then continued being a perfectionist, but now simply thought that grades are not a measure of intelligence at all.
Or that I was programming at age 9 because it was interesting, and then told myself that my interests are no reason to be proud: if others were interested, they obviously could do it too.
Or that I saw that prideful people in fiction just did stupid, short-sighted, or evil things, like beating people because someone said something wrong, and from that I concluded that the whole emotion of pride is something awful.
And there was also young me's failure to put myself into others' shoes. Like, obviously most kids were not invited onto the stage at every end of the school year, but for me it was just something that had happened since the beginning of time.
And I was very distrustful of people who said things like "oh, you know, you are not like the other kids, you are smarter than your age", etc.; of course they were just trying to flatter me.
A lot of such things. I actually ended up at almost full-blown Modest Epistemology on my own.
And that's an interesting observation. I somehow missed THAT as part of my rationalist leanings, but yeah, I remember reading about Pollyanna (it was before HPMoR) and being outraged by her attitude of "how glad I am about the crutches as a gift, since I don't need them", because...
Doesn't she see how bad everything around her is? She can't see reality. Blind. Lobotomizedly happy, unable to internally react to her environment. To me it felt like something worse than death.
And not surprisingly, I was also much unhappier my whole life than other people.
(That closely resembles EY's attitude. But I no longer believe that emotions are a part of world modeling, of what should reflect traits of reality; they are rather an instrumental good for neural reinforcement via the first and second derivatives of the current situation, for staying in the right mental poses for some activity, and so on.)
And I admit I have much more free time than most people. But I am not sure I understood: people with potential didn't have enough time to read LW to become rationalists? Or rationalists don't have time for LW? Isn't posting your thoughts on the net usually a very cost-effective action, where thousands of people can read a post written once?
Edited: also, I've seen A LOT OF recommendations of HPMoR. And I probably overestimated how many people read it after one. Because I went and read it literally after hearing about it for the first time, in literally the form of a pair of fanfic comments: "- Wow, this fic is excellent! - No, it's average; excellent is HP and the Methods of Rationality."
Replies from: Viliam
↑ comment by Viliam · 2025-02-18T14:03:26.074Z · LW(p) · GW(p)
From inside, almost everything I can do is "easy". Otherwise I wouldn't be able to do it, right? The trick is noticing that many things that are "easy" for me are actually quite difficult for other people. And even there, who do you compare yourself to? If you are constantly surrounded by experts, the things that are considered "easy" by your entire bubble can still be a mystery for 99.9% of population.
people with potential didn't have enough time to read LW to become rationalists? Or rationalists don't have time for LW?
Both of that. There are probably some people out there, who would be a great fit for LW, but instead they are busy doing research or making money or changing the world somehow. Also, some people who have formerly spent a lot of time on LW are now doing something more efficient with their time.
Isn't posting your thoughts on the net usually a very cost-effective action, where thousands of people can read a post written once?
Yeah, but 99.9% of those people won't remember what you wrote the next day, so the actual impact can be very small. Also, instead of LW you could post on your own blog, or maybe write a book; those are also ways to reach many people. Some of those may be more efficient.
It's good that we have the LW books [? · GW] for busy people; a selection of the best articles instead of having to read all of that.
And I admit I have much more free time than most people.
That is a great starting position (much better than having no free time -- then it is very difficult to think about your life or try to improve it, if there is no time to do that). But if you use that free time to figure out what you want to do and actually start doing it, then... probably in a year, you will have less free time. Not necessarily in a bad way; you can be busy doing things that you totally love. But you won't have so much time to read LW anymore.
This is a paradox of self-improvement groups (and LW kinda belongs to that category). If you actually keep improving, at some moment you will probably leave the group, because some other action will be more useful for you than staying at the group. That's the entire point -- the group is supposed to make you more efficient at the life outside the group. If it fails to achieve that, then the group is not really good at its stated goal.
Replies from: EniScien
↑ comment by EniScien · 2025-02-20T17:02:49.092Z · LW(p) · GW(p)
(There were some errors; the message hadn't been sent until now.)
Becoming stronger feels like things becoming lighter. But a lot of the things I'm trying to do are not "just easy"? Also, I thought about it more as finding things which you found too hard and gave up on. And I am not sure how to simply pinpoint easy things; it's like noticing the details of how you are moving.
(Though I probably should expect that for a lot of people it's actually hard to read/remember, even once, on the first try, new long words like "cefoperazone" and "ceftazidime", and that this wasn't just a trope.)
99.9%??? Are you serious? I thought I had a bad memory, but I don't think I forget more than 70% of what I read BY THE NEXT DAY. Like, I can remember which tweets and shortforms I've read in the last few days just by trying to remember what I read, not because I found myself in a relevant situation.
And that is for a random post which I didn't consider important to me, like "what to do about AI, for a layman?". I remember much better the posts which I did consider important, like the LW one about effectively explaining things in the manner of "it's like Airbnb, but for boats".
I think about it as "how much time I could spare if I found something really important"; it's not like I currently go around saying "oh, again I have no idea what to do with my time". It's just that I am not compelled to work, don't have kids, etc.
I don't think about it quite that way. Isn't the point the sharing of ideas? You have ideas, some are more important, sharing them is easier than inventing them, and different people find different ideas, so you benefit from sharing with each other. If it worked like that, I'd expect more like 30K people showing up with great ideas on LessWrong, once a week each.
↑ comment by EniScien · 2025-02-18T08:34:44.027Z · LW(p) · GW(p)
If LW is tiny, I think I understand much better why no one wrote AGI Ruin before EY. I had thought about that, but just for a few seconds. I also thought that LW is full of smart people, so if something hasn't been done yet and isn't mentioned as a "needed task", then probably it's not that good an idea, or there are much more important things to spend time on.
comment by EniScien · 2023-12-26T12:46:59.729Z · LW(p) · GW(p)
There are no common Russian words for upvote/downvote, so I just said like/dislike. And that was really a mistake: these are two really different types of positive/negative marks, agree/disagree is a third type, and there may be any number of other types. But because I named it like/dislike, I thought of it as expressing how much you like something, as a kind of payoff to the author, rather than as just adjusting the sorting, like "do I want to see more posts like this higher in the suggestions".
And actually it looks to me like a more general tendency in my behaviour: avoiding finding subtle differences between things and, especially, between terms. Probably I'd seen people trying to find differences between colloquial terms which are not strictly defined and then arguing from that difference; I was annoyed by that, and that annoyance pushed me to avoid finding subtle differences between terms. Or maybe it is because we were taught that synonyms are words with the same meaning, instead of near meanings (or "equal or near meanings"), and weren't shown that there are differences in connotations. Or maybe the first was because of the second. Or maybe it was because I used programming languages too much, instead of natural languages, when I was only 8. Anyway, I probably now need to start developing a 24/7 automatically working habit of searching for and noticing subtle differences.
Replies from: Dagon
↑ comment by Dagon · 2023-12-26T17:24:28.472Z · LW(p) · GW(p)
Word use, especially short phrases with a LOT of contextual content, is fascinating. I often think the ambiguity is strategic, a sort of motte-and-bailey to smuggle in implications without actually saying them.
"like" vs "upvote" is a great example. The ambiguity is whether you like that the post/comment was made, vs whether you like the thing that the post/comment references. Either word could be ambiguous in that way, but "upvote" is a little clearer that you think the post is "good enough to win (something)", vs "like" is just a personal opinion about your interests.
comment by EniScien · 2023-05-03T18:29:28.906Z · LW(p) · GW(p)
I've read, including on LessWrong (https://www.lesswrong.com/posts/34Tu4SCK5r5Asdrn3/unteachable-excellence [LW · GW]), that listening to those who failed is often more useful than listening to those who succeeded, but I somehow missed whether there was an explanation anywhere as to why. The fact is that there are 1000 ways to be wrong and only 1 way to do something right, so a story about success should be 1000 times longer than a story about failure, because for the latter it is enough to make one fatal mistake, while for the former you have to avoid all thousand.
However, in practice, stories of failure and stories of success are likely to be about the same length, since people will take note of about the same number of factors. In the end, you will still have to read 1,000 stories each, whether success or failure, except that success happens 1,000 times less often and the stories about it will be just as short.
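A small back-of-the-envelope check of that asymmetry (Python; the pitfall count and the per-pitfall mistake probability are arbitrary assumptions, chosen so that full success is roughly a 1-in-1,000 event):

```python
N_PITFALLS = 1000
P_MISTAKE = 0.007                      # chance of making any one fatal mistake

p_success = (1 - P_MISTAKE) ** N_PITFALLS
expected_mistakes_per_failure = N_PITFALLS * P_MISTAKE / (1 - p_success)

print(f"probability a venture avoids all {N_PITFALLS} pitfalls: {p_success:.2%}")
# A failure story only has to name the mistakes actually made, while a success
# story implicitly claims that all 1000 pitfalls were avoided.
print(f"expected mistakes named in one failure story: "
      f"{expected_mistakes_per_failure:.1f} out of {N_PITFALLS}")
```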
Replies from: Raemon, Dagon
↑ comment by Raemon · 2023-05-04T01:36:54.133Z · LW(p) · GW(p)
fwiw I don't think I've heard this particular heuristic from LessWrong. Do you have a link for a place this seemed implied?
I think there's a particular warning flag about "selection effects from successes" (i.e. sometimes a person who succeeded just did so through luck). So, like, watch out for that. But I don't remember hearing a generalized thing about learning more from failure than from success.
Replies from: EniScien
↑ comment by EniScien · 2023-05-08T04:59:33.813Z · LW(p) · GW(p)
I added a link to the comment: https://www.lesswrong.com/posts/34Tu4SCK5r5Asdrn3/unteachable-excellence [LW · GW]
↑ comment by Dagon · 2023-05-04T01:06:54.589Z · LW(p) · GW(p)
In truth, listen to everybody. But recognize that different stories have different filters and distortions. Neither success nor failure storytellers actually understand the complexity of why things worked or didn’t - they will each have a biased and limited view.
comment by EniScien · 2023-04-24T00:27:56.831Z · LW(p) · GW(p)
One of my most significant problems is that I do not like to read, although it is generally believed that all "smart" people must adore it. Accordingly, to overcome my dislike of reading, a book has to be very interesting to me, and such books are rare and difficult for me to find. (I was thinking it would be nice to have some kind of recommendation service based on your previous ratings, which exists for movies, but I don't know of one for books.)
And accordingly, another problem follows from this. I didn't read a pile of science fiction and fantasy books as a kid; I didn't read Gödel, Escher, Bach, Step into the Future, Influence: Science and Practice, Science and Sanity, Probability Theory: The Logic of Science, the Feynman Lectures on Physics, and so on. And I feel like I'm missing out on much of the stuff that is obvious to "people of our cluster" because of this.
That is, the Sequences were written to point out non-obvious things that hadn't already been said by someone else and become known to everyone. And I don't know if there's a list somewhere of the most significant books that predate the Sequences, so that a person from outside this cluster would at least know which direction to look; otherwise it's hard to identify the things that are obvious background facts to some people but entirely unknown to others.
In a broader sense, you could say that I was making the mistake of looking too little at other people's experiences. I didn't realize that no one person is capable of inventing all the wisdom of a tribe on their own. And this manifested itself everywhere, in every field, in programming, in music, in writing books, in creating languages, in everything.
Probably the one is related to the other: I didn't read a bunch of books, so I didn't see how much higher than my own intelligence other people's knowledge could be. So intuitively, without realizing it, I proceeded as if they were equal, as if I were competing with a single other person, without even considering the obvious fact that there are simply many more other people, let alone what can be achieved by investing more time, accumulating the experience of many generations, and using collaborative work, practical experiments, and computer calculations.
Actually, I didn't quite put it right, though. Yes, I don't adore reading, but, say, I don't have any problem reading blog posts (e.g. here on LessWrong) or even just Wikipedia pages; there I rather have the opposite problem: opening one article and following the hypertext links that interest me, I can sit down and read for hours.
So it's probably more accurate to say that I have a "clip thinking" problem, however... I have no problem watching hours of lectures, reading 4 hour podcast transcripts, or listening to hours of audiobooks.
So it's probably "reading books" that's the problem. Perhaps I associate them with school, with the deadly boring reading of history textbooks, when my brain literally stops taking things in, or with literature classes, including the assigned summer reading, out of all of which, over all the years of school, I can remember exactly one work that seemed interesting to me. I'm not sure what the real issue is.
Replies from: r
↑ comment by RomanHauksson (r) · 2023-04-24T04:26:33.269Z · LW(p) · GW(p)
LibraryThing has a great book recommendation feature.
Replies from: EniScien
↑ comment by EniScien · 2025-02-16T23:25:12.244Z · LW(p) · GW(p)
I tried it. Unfortunately, I was rating books on all the parameters of being a good book, instead of assessing just the pleasure of reading, which was the only thing actually important to assess, because recommendations for other purposes I can get in other ways. And it ended up that now I don't know how to clear my profile.
Replies from: r
↑ comment by RomanHauksson (r) · 2025-02-19T22:19:57.053Z · LW(p) · GW(p)
I was trying to optimise/recommend by it not only parameter of personal tastes which it is actually only hard to get by other means, but a whole bunch of parameters.
Can you rephrase this? Having a hard time parsing this sentence.
Replies from: EniScien
comment by EniScien · 2022-05-29T17:31:13.943Z · LW(p) · GW(p)
I am extremely dissatisfied with the phrase "he lives in another world" as a way of conveying that someone does not agree with you, because we all live in the same world.
But a good option is "he sees the world as being different (perhaps falsely)", exactly so, because "he sees the world differently / in a different way" has connotations of it being just an opinion to which everyone is entitled, and "he sees a different world" again creates the feeling that there are other worlds, in which some people may not live but at which they look exclusively.
The same goes for the glasses of perception. Trying to tell people to "take off your wrong glasses" is useless because, first, if they just take off their glasses they won't see anything: people are born completely nearsighted, as can be seen in babies, and besides, human eyes have a blind spot and other defects which need to be corrected with glasses; and second, they see their own glasses as not creating any distortions of the normal/ordinary picture of the world, while they see our glasses as distorting reality.
The glasses metaphor is already known here and there, has no wrong connotations, and is quite intuitive. In addition, everyone now knows about visual illusions, so it will be clear to everyone that we can literally SEE the world in different ways, seeing things differently from how they really are.
This metaphor may look obvious, but for a long time it did not occur to me in the correct formulation, it is desirable to develop it and make it commonly used. "Point of view" is a fixed expression, but it does not imply that the point of view can be wrong, either literally or figuratively.
(However, the reception with the worlds will also be effective: "they believe that they live in another world.") Edit: though it probably misses the point of "Thor exists, it's not just like I believe in Thor"
comment by EniScien · 2022-05-21T14:41:07.278Z · LW(p) · GW(p)
Perhaps this has already been noticed somewhere on LessWrong, but judging by how much space is not occupied by life and how many useless particles there are in physics, it seems that our universe is just one of the random variants in which intelligence appears somewhere so that the universe can be observed at all. And how different that is from a universe specially designed for life: even one planet would be more than enough — 100 meters of the Earth's crust would do. Which is how primitive people actually imagined it, before science appeared and religion started laying claim to irrefutability. It becomes ridiculously obvious once you see it. P.S. This comment and the main post look relevant, but I had never seen them before: https://www.lesswrong.com/posts/EKu66pFKDHFYPaZ6q/the-hero-with-a-thousand-chances?commentId=LYq9pGpMmKBDP3YR6 [LW(p) · GW(p)]
Replies from: gwerncomment by EniScien · 2025-02-17T12:11:34.805Z · LW(p) · GW(p)
I noticed some problems with the LW interface; a quick list.
(Epistemic status: following my intuition; by saying "you" instead of "I" I mean that I expect these intuitions to be shared by a lot of other users.)
The highest-priority items for me are the ones where usage is still very inconvenient, not just hard for new users to understand. These are: 1 – reactions, 2 – markdown OR colours (I don't know, maybe Obsidian solves markdown).
On android
- I have no Intercom on Android (the setting is checked); on PC I do have it
- Reactions
- the inline reaction button shows up outside the page bounds, so I need to scroll
- ordinary reactions, conversely, show their menu outside the screen bounds, so I can't even scroll
- inline-reaction phrases aren't underlined anymore, so it's hard to find them, and if something shows on the right, it's out of the page bounds
- the user profile button shows a pop-over where I expect a redirect; that confuses me every time
Generally
- Colours
- Where are all the COLOURS? What's wrong with them? Why decolourise the site so heavily? I expected precisely the opposite: add colours, you know, like in an IDE, to make page elements easier to recognise.
- At first I thought I had checked some anti-colour option in settings, but no
- Settings
- There is this HUGE space next to the "submit" button, but instead of a "rules" button (which would show a pop-over), there is non-selectable text about what you should enter directly into the URL line
- Notification-specific settings are not grouped into a collapsible list, AND the "remove negative karma" and "batch notifications" options come ONLY AFTER them (so I simply missed them earlier)
- You have this whole list of collapsible option lists, and AFTER all of it a small, unremarkable button labelled "submit", as if it opened a "suggest new option types to the site coders" form. Instead of a floating, bright "save" button and a warning if you leave the page without saving.
- Actually, that's how LW itself works for comments. It even has autosaves! What's the problem with doing it for settings?
- The "reveal check for facilitation" option is totally obscure to me in what it does, and there is no pop-over tooltip with a more detailed explanation
- The "hide people's names" option is totally useless on the quick-takes list page, because it shows names like "X's Shortform"
- It could also be useful to be able to hide karma until hover... and probably agreement votes and reactions too. Just show posts and comments in a sort order that depends on their karma etc. and collapse things expected to be useless. Though even if concrete reactions were hidden, it would be good to see that they exist at all, and via inlines.
- Markdown
- The quick-takes interface definitely shows NO markdown or visual editor. And again, why?
- Also, why isn't a markdown/visual switch the default for the editor?
- And if the markdown/docs conversion is lossy, why is it a pick-one pop-over instead of a "convert to []" button?
- Markdown doesn't just show itself when you select a word, in an intuitive way (like it's done in Obsidian, for example); it "compiles", then refuses to be removed and... very inconvenient
- Maybe I should just write in Obsidian and paste.
- Sorting
- Also I understood why I and some other people didn't know that LW has filtering options for posts. The different modes look like different sorting modes, not like different tabs, so you don't expect to lose any hint of filtering after switching to "Recommended" (which subconsciously reads like the sorting mode LW's authors recommend you pick, not like recommended posts).
- It would be more intuitive if the modes were tabs, not pick buttons (e.g., overlapping each other? showing the content as being inside the button?). Or it could be a pick pop-over, but I like that design less, and I don't think it would be more legible without showing-it-as-content.
- Also it might be better if the Enriched option had a settings gear on it, to show that it HAS settings even when it isn't selected.
- Notifications: when I subscribe to someone's shortform, the notification says that somebody replied to my comment instead.
That sort of sounds like I'm complaining that the site can't read my thoughts (or vice versa), but a lot of software that was otherwise much more poorly designed could, iirc, do all this — it was intuitive/conventional in all such respects OR had guides and warnings. Idk, maybe it's just a question of more fail-and-fix iterations with a larger number of users.
And also, when I visited the site from a PC, I got the visual editor pane (the markdown editor setting had not been changed yet). And I still have NO idea why after that it also suddenly appeared on the phone, where it had never appeared before.
comment by EniScien · 2025-02-17T00:47:27.996Z · LW(p) · GW(p)
Oh. It looks like I just understood what EY's point was with "Keltham wrote the first version of [text] and then rewrote it". I just did this by accident, and it seems much more effective and quick to make no fixes at all on the first pass — just write the thoughts out for yourself — and then write a version with all the obvious fixes, rather than constantly erasing and making local fixes. It's just important not to start doing second-order fixes on the "for publication" version.
(Context: earlier he wrote that he had a problem with constant fixing/rewriting. I guess this may be a depiction of the fix he found.)
comment by EniScien · 2023-05-01T18:39:00.151Z · LW(p) · GW(p)
I haven't encountered this technique anywhere else, so I started using it based on how associations work in the brain:
If I can't remember a word, instead of just continuing to tell myself "think, think, think," I start going through the letters alphabetically and make an effort over each one: "what words start with this letter — is the word I need by any chance among them?" And that almost always helps.
comment by EniScien · 2025-02-20T17:01:12.593Z · LW(p) · GW(p)
My most useful ideas to write up in detail:
- Thinking is the most powerful human ability, the one that differentiates us from animals
- You can develop any aspect of your brain/thinking/neurons by training
- You don't need to be restricted to existing training methods; you can invent your own
- You can develop your thinking and then use it to invent better ways to develop thinking
- Introspection is really important, because it's really hard to develop thinking blindly
- Ordinary processes of learning are INCREDIBLY ineffective; you can do better
- e.g. Anki, pomodoro-timed learning, remembering images rather than words
- You can train things in imagination
- You can remember things better by imagining the situation in which you'll need the thing you want to remember
- Optimisation of internal dialogue [LW(p) · GW(p)] and visual thinking
- Moving your attention away from mental things, rather than letting them fade, for better remembering
TBC
comment by EniScien · 2025-02-20T16:49:15.051Z · LW(p) · GW(p)
My whole life I hated reading. Now I have a better understanding of why, and of what to do about it.
I read slowly because I pronounce the words; I pronounce the words because I want the emotions that come from intonation; I need that because I don't imagine much. And as a bonus, I am pronouncing things, not just perceiving them audibly, so reading for me is an effort compared with listening.
Unfortunately, pronouncing is also a very entrenched habit; it's really hard not to pronounce things. I will try to use rhythms for speed reading.
Replies from: ChristianKl↑ comment by ChristianKl · 2025-02-20T21:21:08.164Z · LW(p) · GW(p)
The claim that pronouncing things is a bad reading habit is frequently made, but I have never seen good evidence for it. Why do you believe it?
Replies from: EniScien, EniScien↑ comment by EniScien · 2025-02-20T22:03:19.679Z · LW(p) · GW(p)
But it depends: at what speed do you read? If it's 800-1000 wpm (4000-5000 letters/min), then I may be wrong.
Replies from: ChristianKl↑ comment by ChristianKl · 2025-02-20T22:25:11.484Z · LW(p) · GW(p)
The key problem here is your epistemics. My reading speed doesn't really matter for this discussion. You are dealing with a topic that has an existing discourse, and instead of familiarizing yourself with that discourse, you are reasoning from anecdotal data.
Scott H Young for example writes:
Here the evidence is clear: subvocalization is necessary to read well. Even expert speed readers do it, they just do it a bit faster than untrained people do. We can check this because that inner voice sends faint communication signals to the vocal cords, as a residue of your internal monolog, and those signals can be measured objectively.
It might be that Scott is wrong here, but I don't think the kind of observations you use to support your belief that subvocalization is bad are strong enough to doubt Scott on this.
Replies from: EniScien↑ comment by EniScien · 2025-02-20T23:10:33.787Z · LW(p) · GW(p)
Even expert speed readers do it, they just do it a bit faster than untrained people do. We can check this because that inner voice sends faint communication signals to the vocal cords, as a residue of your internal monolog, and those signals can be measured objectively.
I've heard about that, and it looked like evidence that you can only untrain the part that is introspectively visible, not that subvocalization is somehow important. Again, what about deaf-mute people — what do they subvocalize? And "a bit faster"? 5000 phonemes/min is ~100/s, which looks more like one phoneme per neuron activation. I doubt you can properly understand speech at 1000 wpm.
But in general, when I started, I failed to find the existing discourse and decided it would be quicker to just check. And then it simply looked too clear that I can, at the very least, think purely visually, and much, much faster than I can speak.
I'll check the link though. (Its existence explains why more people aren't checking this.)
PS Edit: okay, I've read it and didn't find anything new in the article. I will try to read the link on "evidence".
And also, just to check: would you also say that it's impossible for an ordinary human to read text and speak at the same time?
PPS Edit: and no, the second article also didn't have any evidence. But still, thanks — I've found some speed-reading techniques I'd never heard of before, only thought up by myself, so probably my ideas aren't new even when it's not something like math. People converge on such topics and end up having almost fully overlapping ideas, and I'm not an exception.
And the reason such ideas aren't widely used probably isn't that nobody discovered them, but that people are sceptical. Like I was. 1000 wpm? 20000 wpm? Looks like a fake for the credulous.
But now I'm less sceptical even about such results, because I was wrongly sceptical about things like training imagination, attention, intelligence, memory and willpower. And was clearly wrong. Btw, what would you say about those things?
Also I was sceptical about thinking multiple thoughts in parallel, but that was mostly because of Feynman's and EY's claims, and now I just doubt them more, after I understood it's easily possible.
↑ comment by EniScien · 2025-02-20T21:44:55.987Z · LW(p) · GW(p)
Some a priori reasoning, like: pronouncing goes as a sequence, one token at a time, but the brain runs at only ~200 Hz serially; the brain is better at working in parallel with all those ~80B neurons; and besides, words have meaning as wholes, so most of the step-by-step letters don't even carry meaning on their own.
And then evidence from experience on top of this a priori reasoning: when I succeed in not pronouncing, I can see and understand three words at a moment. I also wrote a whole shortpost [LW(p) · GW(p)] here about my experience with trying to replace ordinary speech with visual thinking (you need the "visual thinking" section).
comment by EniScien · 2025-02-20T15:53:41.950Z · LW(p) · GW(p)
EniScien's Bio thread
Replies from: EniScien, EniScien, EniScien↑ comment by EniScien · 2025-02-20T16:44:47.845Z · LW(p) · GW(p)
My knowledge state
Read books: HPMoR x7, the Sequences x3, planecrash x2, "Surely You're Joking, Mr. Feynman!" x3, GEB 1/4, Thinking, Fast and Slow 1/3.
Didn't read: all the other rationalist books. And almost no other books at all; probably the only one I can remember is The Better Angels of Our Nature.
Have read some Wikipedia pages.
Have 99.9%, 110K words in Russian on https://myvocab.info
Watched a lot of popular-science YouTube videos and lectures, mostly the natural sciences, almost none of the most useful topics like evolutionary/behavioral/cognitive psychology, neurobiology, information theory, probability, game theory, DT, or economics and programming, or at least physics, math and human biology. Almost fully illiterate in history: don't know anything beyond the school course (and even my teacher said the schoolbooks we had to use were very poor ones — probably that's a Russian specificity). Have at least decent basic biology knowledge; I don't feel like our bodies are squishy things that regenerate thanks to a vital force. Have broad, but unrigorous and shallow, knowledge. Can't derive the wave function of the hydrogen atom.
Have read really little fiction: mostly fanfiction, mostly fantasy, almost no sci-fi.
Have done some thinking in intuitive math on topics like the connection between 1+i, the Pythagorean theorem, and the curvature of our space.
↑ comment by EniScien · 2025-02-20T15:58:05.698Z · LW(p) · GW(p)
TLDR
From Russia; hated reading; was grateful to science; liked clever things, cheats, computers; have coded since age 9; was going to become a programmer; read HPMoR at 12 but didn't try to solve any of the riddles; didn't read the Sequences until 16; until 2023 didn't understand that rationality isn't about Truth, it's about Skills. Now trying to generate maximally useful thoughts and post some of them on LW.
↑ comment by EniScien · 2025-02-20T15:54:02.747Z · LW(p) · GW(p)
Current Detailed
I am from Russia. I liked to watch Discovery and other science channels from the age of ~7. I never liked to read, only listened sometimes, and when I went to school I realized that the books had lied: we didn't have nicknames, bullies, D-graders or straight-A kids (I got some As, but Bs too, and no one got only As), and there was no correlation at all between being a good student, wearing glasses, liking reading and playing chess.
When I was 9 I learned some programming and decided that I would work as a programmer when I grew up, because programmers have really good salaries, can work from any point of the world with internet, and don't need to show a diploma, just their skills — so I wouldn't need to go to university, so I wouldn't need to go through the last two years of school, I would just need to practice my skills for the next ~7 years. Also it was work with computers, and I really liked computers.
When I was 12 I found a comment: "it's not an excellent fic, it's average; excellent is HP and the Methods of Rationality". My favourite fictional universe plus rationality?? And such a description... It was worth reading even though it didn't have an audiobook yet. And the author was so considerate that he said where you should actually drop it if you still weren't interested: it said to wait until the 10th chapter, where it starts to be really cool.
But for me it was really cool from the first sentence. I was so absorbed in reading that one time I was almost late for school (which I had never come close to doing before). When I finished reading I decided that I would reread this precious treasure for the rest of my life, so it was better to reread it only once every few years so it wouldn't become boring, which would be awful, because I had never seen any book even close to it before.
At the end it said "it's only a shadow of the Sequences". I read "A Fable of Science and Politics", and it was not even close to that good. So I decided I would check them all eventually — but not right away. I read them only when I was 16 and really regretted that they were "only a shadow" instead of "how to know everything HJPEV knows and even more". I read all the sequences that had been translated into Russian (a lot, even of R:A-Z, hadn't been). They changed my worldview. But I didn't know English, and for a long time the idea of reading through a translator didn't occur to me.
Until 2023 I didn't understand that rationality isn't about Truth, it's about Skills. I was trying to know all the truths in order to act from them. I wasn't trying to act from the skill of executing strategies that get maximum expected utility according to correct models of my preferences and the world around me. THAT was the moment everything actually changed, not just my worldview. In a short time I started searching for creative ideas to improve my everyday life, learned how to fully switch off unwanted emotions (and, just as a consequence, removed 2-3 fears that had been with me my whole life), learned how to control my thoughts, improved my imagination, lifted my mood, started to invent mental techniques.
Unfortunately I still had a lot of entrenched psychological blocks, which resisted being thought about so they could be fixed. One of them was about making detailed plans for the short term and the near future. I fixed that problem in the end. But I still have a lot of issues with motivation.
Now I have a lot of ideas to post on LW, which are probably significantly useful.
comment by EniScien · 2025-02-20T15:22:04.825Z · LW(p) · GW(p)
Optimising internal dialogue
Never repeating thoughts technique
In planecrash EY shows the technique of "never repeating the same thoughts". And it can look useful: what if you were repeating each thought 10 times? You could get 10x more thoughts that way — sort of like having 10 times more time for thinking, which looks really useful.
But when I actually started to practice it, I noticed that I actually spend a lot of time replacing the last word with a better synonym, a lot of time rephrasing the end of a sentence, or the whole sentence, or the whole paragraph — a lot of time trying to think up better words to say.
And the same thoughts... I wasn't repeating them just 10 times in total. I repeated a lot of them hundreds of times per DAY.
A lot of the time, when I wanted to start a new thought, I spent most of it (~4/5) explaining the circumstances, then the premises, arguments, etc. — as if I were doing it in speech. Because my internal dialogue felt like speech; it wasn't really different just because it happened in my imagination rather than in the real world.
So if somebody says that Yudkowsky thinks 1000 times faster, I can easily believe it, at least for myself on the scale of days. And I can't even compare on the scale of weeks.
But never repeating the same thoughts is only one technique for optimising your internal dialogue.
I used a lot of phrases that were only meaningful in speech, not in thought. And these were probably 2/3 of my internal dialogue.
I was trying not to use abbreviations, slang, etc., and of course not to refer to half the ideas with a "because of that" when the referent was something in my visual imagination, or some piece of knowledge I'm sure of. I was trying to make my internal speech sound proper in the grammatical and rhetorical sense, instead of maximally compressed.
I was trying to speak in only one language that I know, Russian or English: not to mix Russian and English sentences, let alone mix words inside a sentence from Russian, English, German, Italian, Esperanto, Japanese, Latin, Greek and some other languages I know a little.
I was not replacing long words and phrases in my inner speech by imagining how they are written. Nor doing the same for math formulas and code. Or geometrical blueprints. Or diagrams. Or full visual images, or even collages of them.
I was trying to speak legibly, and not at the maximum possible speed, but at the speed at which people in real dialogues don't start complaining that I speak too fast. Which by itself could make my internal dialogue 2-3 times slower.
I was trying to make my internal speech more precise by sending a little tension to my speech muscles, instead of doing it fully in imagination. And... my muscles are actually much slower than my imagination; that by itself was making my speech 3-4 times slower.
Visual thinking
But even if I replaced all my internal sentences with saying "that" and just mentally referring to the idea, that still would not be the best result. As I noticed, visual images simply come to my mind faster than I can say "that" — about 2 times faster.
And speed is only one parameter on which visual beats audio. Our world is used to treating video and audio as the same category of information source, used to saying that you can be a visual or an auditory learner.
But consider the difference in size between video and audio files: 1 min of audio is ~1 MB, 1 min of video ~150 MB — a 150x (!) difference.
Consider a page of text: it's something like 400 words, 2000 letters plus 400 spaces. How much time do you need to take in a page with your eyes? Something like a quarter of a second, maybe. And how much time do you need to listen to it as audio? At the maximum speed of human speech it's about 12 syllables, ~4 words/s; at x2, 8 words/s — so audio is something like 1/50 to 1/100 of visual, and at a normal speaking speed more like 1/200 to 1/400.
There are speed-reading techniques, and even Wikipedia says they can give you 5x the speed of ordinary reading (~200 wpm, about the speed of fast speech).
But that is still only ~17 wps (1000 wpm). Human reaction time is 0.1-0.25 s, and in that time your eyes supposedly can recognise only 2-5 words?? When you look at a picture you are not restricted to recognising 2-5 things at a time; you can see it as a whole.
Though there is the problem of visual quality: how many words can you see sharply enough to recognise? I used a flashlight to leave an afterimage in my eye and then kept it on one word in the center; it looks like I can see sharply enough a square of 11x11 or 13x13, i.e. 121-169 characters with one eye, 242-338 with both, which could be 50-70 words.
And that's only for speed reading; in your mind there are no such restrictions. 5x faster thinking, by analogy with 5x faster reading via the visual track instead of the audio one, is only a lower bound on what's possible.
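A minimal sketch of that arithmetic, with the assumptions written out explicitly (the page size, glance time and speech rates below are my rough guesses, not measurements):

```python
# Rough throughput comparison: listening to a page vs. taking it in at a glance.
WORDS_PER_PAGE = 400      # assumed typical page
GLANCE_TIME_S = 0.25      # assumed time to take in a page as an image
SPEECH_RATES_WPS = {"fast speech (~4 w/s)": 4, "2x audio (~8 w/s)": 8}

for label, wps in SPEECH_RATES_WPS.items():
    listen_time_s = WORDS_PER_PAGE / wps
    ratio = listen_time_s / GLANCE_TIME_S
    print(f"{label}: {listen_time_s:.0f} s per page, "
          f"{ratio:.0f}x slower than a {GLANCE_TIME_S} s glance")
```

With these guesses it reproduces the 1/200-1/400 end of the range above; if a page actually takes a full second to take in, you get the 1/50-1/100 figures instead.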
And then you don't need to restrict yourself to the usual rules of speech. You can use lots of long terms and phrases like "utility function", because length no longer limits your speed.
Now there is a question: will such speech work at all? Shouldn't speech be audio? I don't think so, because of deaf and mute people, who can spend their whole lives using sign language, "speaking" with their hands and "hearing" with their eyes.
And I suspect that the limits on visual thinking should be very high.
Because it works in the brain's more natural information-processing mode: not serial but parallel. Your brain runs at most ~200 cycles/s serially, but it has ~80B neurons.
I was afraid that sentences have a dependence of the next word on the previous one, so you can't produce them in parallel. But... it's only in English and German that word order is fixed. In my native Russian you can put the words in almost any order and they still make almost the same sentence. So I think not.
And imagining the words of a sentence visually instead of audibly is only the beginning. After that you can do so much more. You can use emojis in your speech and even replace words with pictures (again: use code and math, schemes, diagrams). You can keep threads of sentences in multiple languages at the same time.
You can see the whole sequence of your thoughts, not just the current thought. You can group your thoughts into contexts and recall those as a unit. You can colour your words, as with synesthesia or in an IDE. You can add comments, and reactions, and marks for yourself.
You can use your eyes independently to have two tracks of flat images, like on a computer screen, or use both to get 3D! Use distance and texture as additional channels of data, like colour.
Practice
I was somewhat incredulous toward all of these thoughts. But then I tried it. And it all actually works. Some time ago I noticed that I am 2-3 times faster when I use my still-beginner visual thinking. And I can now be even faster at learning, because, unlike with acting, I can learn a few things simultaneously.
Also, visual thinking actually helps a lot with navigating my thoughts and not repeating the same thing many times, because I can just see the continuation of the thought sequence.
Problems:
Visual thinking in my case refuses to "just work"; every day I need to start it with deliberate practice before it continues working automatically.
And it's prone to gradually fall out of my mind when I use audio instead: when thinking in audio, or talking (much less when listening), or even writing, because I have a habit of pronouncing the things I write.
Another problem is that some days I can start using visual thinking almost right after getting up, and on other days my imagination just works worse and I struggle to start visual thinking all day.
Also, I used to express emotions by mentally saying sentences with intonation, and I haven't found a good alternative to that in my visual thinking.
When I started I made a lot of mistakes in the process of training.
For example, I thought that if I made my thoughts less loud and less intonated, visual thinking would come by itself. It didn't. A better idea was just to break my internal sentences in half and wait until I could notice their meaning without pronouncing them.
Something like when I try to speak a language I don't know well: I know what I need, but forbid myself to speak my native language, and then, without words, try to find the words in the other language.
Another mistake was trying to restrict visual thinking: to saying only things I would say audibly, or to imagining visual thoughts only in subjective space, or to making them look like ordinary text, instead of just visualising words in any convenient arrangement.
Intuitions about "how it would be more convenient to do this mental action" are generally very worth listening to, because they usually signal which mental actions will train better and faster.
Another mistake was trying to develop visual thinking as a separate mode, instead of shifting each of my mental actions toward it just a little.
(That's just the short list of thoughts that come to mind immediately, not even close to the full list of ideas. And I would be interested to know where similar ideas already exist. Speed reading and Feynman's modelling method come to mind, but that's all I know of.)
comment by EniScien · 2025-02-18T20:53:57.558Z · LW(p) · GW(p)
I've just found out about Inflation Adjusted Karma Sorting and started to wish it were implemented into the standard karma-viewing system.
Replies from: habryka4↑ comment by habryka (habryka4) · 2025-02-18T20:58:12.627Z · LW(p) · GW(p)
What does that mean? It doesn't affect any recent content, and it's one of the most prominent options if you are looking through all historical posts.
Replies from: EniScien↑ comment by EniScien · 2025-02-19T21:39:20.074Z · LW(p) · GW(p)
I am not sure what is unclear. But I many times noticed that my brain is very confused seeing eg EY's recent post about Lies Told to Kids with 360 karma and comparing it some post of sequences that got "pitiful" 100+. And it looked like an example of inflation, that recent much less cool post gets a few times more karma. And I don't know how calibrate my brain, and using some software solution looks easier.
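A minimal sketch of the kind of software fix I mean (the field names and the choice of the yearly median as a deflator are just assumptions for illustration, not how LW actually computes its inflation-adjusted sorting):

```python
from statistics import median
from collections import defaultdict

def inflation_adjusted_karma(posts):
    """posts: iterable of dicts with hypothetical 'year' and 'karma' keys.
    Returns the posts sorted by karma divided by that year's median karma."""
    by_year = defaultdict(list)
    for p in posts:
        by_year[p["year"]].append(p["karma"])
    deflator = {year: max(median(ks), 1) for year, ks in by_year.items()}
    return sorted(posts,
                  key=lambda p: p["karma"] / deflator[p["year"]],
                  reverse=True)

# Toy example: an old 100-karma post can outrank a newer 360-karma one.
posts = [{"title": "old sequences post", "year": 2008, "karma": 100},
         {"title": "recent post", "year": 2024, "karma": 360},
         {"title": "median 2008 post", "year": 2008, "karma": 30},
         {"title": "median 2024 post", "year": 2024, "karma": 150}]
for p in inflation_adjusted_karma(posts):
    print(p["title"])
```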
comment by EniScien · 2025-02-18T09:35:56.999Z · LW(p) · GW(p)
One of the problems is that if my contribution may be more valuable, then it may also be more dangerous. I was trying to create techniques to generally increase intelligence, and it looks like I have at least some success. But they are not rationality techniques, not an asymmetric weapon. (Ok, some may be, at least a little, if I think about it.)
I think "maybe using them can help with AI alignment", but I don't want using them to instead help destroy our world faster.
And probably nothing would happen... I just don't want to be careless and post a Roko's Basilisk out of "ah, what could happen; I can't actually cause big damage; thinking I could is just hubris".
In the end I have trouble writing lists of my best ideas, because most of the things on them raise this question.
comment by EniScien · 2025-02-18T08:46:38.774Z · LW(p) · GW(p)
Firefox added tab groups! Finally! I used to joke that this would be a sign of the end of the world, but it doesn't look so funny now. Unfortunately, as with vertical tabs, it's on PC, where it's fairly useless, since there are tree-tab add-ons which are just better.
Does somebody know how to get Touch-To-Search and fast switching via a bottom tab bar like in Chrome, but with extensions like in Firefox, and with sync (so not Kiwi)?
The situation with this is so desperate that I'm starting to want to build a VR-glasses Linux computer like some enthusiasts do. Or at least root one of my phones, install Linux and use it as a computer. Or find and buy some PDA (КПК). Or buy a tiny laptop. Or just... something.
Does somebody know solutions to such problems?
comment by EniScien · 2024-02-14T07:58:00.581Z · LW(p) · GW(p)
I saw that a lot of people were confused by "what does Yudkowsky mean by this difference between deep causes and surface analogies?". I didn't have this problem; I immediately had an interpretation of what he means.
I thought it was the difference between deep and surface with respect to the black-box metaphor: the difference between looking for correlations between similar inputs and outputs, versus building a structure of hidden nodes, checking its predictions, rewarding the correct ones, and penalizing by the complexity of the internal structure.
The difference between taking a single step from inputs to outputs and having a model; between looking only at visible things and thinking about invisible ones; between looking only at experimental results and building theories from them.
Just like the difference between deep neural networks and networks with no hidden layers: the former are much more powerful.
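A minimal sketch of that last analogy — not anything from Yudkowsky's posts, just an illustration: with no hidden layer a network is a single linear separator, and even a brute-force search over many such separators never fits XOR, while one hidden feature makes it trivial.

```python
import itertools

# XOR: no single linear threshold over (x1, x2) reproduces it.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def fits_linearly(points):
    """Brute-force search over a grid of weights and bias for a perfect separator."""
    grid = [i / 2 for i in range(-8, 9)]           # candidate values in [-4, 4]
    for w1, w2, b in itertools.product(grid, repeat=3):
        if all((w1 * x1 + w2 * x2 + b > 0) == bool(y) for (x1, x2), y in points):
            return True
    return False

print("XOR fits with no hidden layer:", fits_linearly(data))   # False

# Add one "hidden" feature h = x1 AND x2; a linear rule now works:
# predict 1 iff x1 + x2 - 2*h > 0.
hidden_ok = all((x1 + x2 - 2 * (x1 & x2) > 0) == bool(y) for (x1, x2), y in data)
print("XOR with one hidden AND unit:", hidden_ok)               # True
```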
I am really unsure that this is right, because if it were so, why didn't he just say that? But I write it here just in case.
comment by EniScien · 2023-06-03T15:33:41.396Z · LW(p) · GW(p)
Does the LessWrong site use a password strength check like the one Yudkowsky talks about? (I don't remember one.) And if not, why not? It doesn't seem particularly difficult to hook it up to a dictionary or something. Or is it not considered worth implementing because there's Google sign-in?
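I don't know what LW actually does; a minimal sketch of the kind of dictionary-plus-entropy check I have in mind (the wordlist and the 40-bit threshold are placeholders, not anything from the site):

```python
import math

# Stand-in dictionary; a real check would load a large common-passwords list.
COMMON = {"password", "123456", "qwerty", "letmein", "dragon"}

def estimate_bits(pw: str) -> float:
    """Crude entropy estimate: length * log2(size of the character set used)."""
    charset = 0
    if any(c.islower() for c in pw): charset += 26
    if any(c.isupper() for c in pw): charset += 26
    if any(c.isdigit() for c in pw): charset += 10
    if any(not c.isalnum() for c in pw): charset += 33
    return len(pw) * math.log2(charset) if charset else 0.0

def check_password(pw: str, min_bits: float = 40) -> str:
    if pw.lower() in COMMON:
        return "rejected: common password"
    bits = estimate_bits(pw)
    return "ok" if bits >= min_bits else f"too weak (~{bits:.0f} bits)"

print(check_password("dragon"))          # rejected: common password
print(check_password("correct horse"))   # ok — length beats character variety
```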
comment by EniScien · 2023-06-03T15:03:58.737Z · LW(p) · GW(p)
It occurred to me that on LessWrong there doesn't seem to be a division, in voting, between posts you want to promote as relevant right now and posts you think will be useful over the years. If there were such a rating... or such a reaction, then you could get a list not of posts by karma, which includes ones that were only needed at some particular moment, but a list of those that people find useful beyond their moment.
That is, a short-term post might be well written and genuinely needed for discussion at the time, rather than just reporting news, so there would be no reason to downvote it, but it would be immediately obvious that it is not something that should be kept forever. In some ways, introducing such a system would make things easier with Best Of. I also remember that when choosing which of the sequences to include in the book, there were a number of ratings on scales other than karma. This could also be added as reactions, so that such scores could be left independently.
comment by EniScien · 2023-05-26T16:17:07.140Z · LW(p) · GW(p)
On the one hand, I really like that on LessWrong, unlike other platforms, everything unproductive gets voted down. But on the other hand, when you try to publish something yourself, it looks like a hell of a black box, handing out positive and negative reinforcement for no discernible reason.
This completely chaotic reward system seems to be bad for my inclination to post anything at all on LessWrong. Just in the last few weeks that I've been using Evernote, it has counted 400 notes, and by a quick count I have about 1500 notes sitting in Google Keep. Meanwhile, on LessWrong I have published only about 70 over the past year — 6-20 times less — even though by Evernote's estimate ~97% of these notes belong to the "thoughts" category and not to something like shopping lists.
I tried literally following the one piece of advice given to me here and treating any score less than ±5 as noise, but that didn't remove the effect. I don't even know; maybe, if the ratings of the best posts here don't match my own ranking of my best posts, I should post a couple of really terrible posts to check whether they get rated extremely badly or not?
Replies from: EniSciencomment by EniScien · 2023-05-08T06:21:33.452Z · LW(p) · GW(p)
Yudkowsky says that public morality should be derived from personal morality, and that personal morality is primary. But I don't think this is the right way to put it. In my view, morality is the social relationships that game theory talks about: how not to play negative-sum games, how to achieve the maximum sum for all participants.
And morality is independent of values — or rather, each value system has its own morality, or, even more precisely, morality can work even across different value systems. Morality is primarily about questions of justice; sometimes all sorts of superfluous things like god-worship get dragged into this human sentiment, so morality and justice may not be exactly equivalent.
And game theory answers the question of how to achieve justice. Also, justice may matter to you directly, as one of your values, and then you won't defect even in a one-shot prisoner's dilemma with no penalty. Or it may not matter to you, and then you will defect whenever you don't expect to be punished for it.
In other words, morality is universal across value systems, but it cannot be independent of them. It makes no sense to forbid hurting someone who has absolutely nothing against being hurt.
In other words, I mean that adherence to morality just feels different from the inside than conformity to your values: the former feels like an obligation and the latter like a desire; in one case you say "should" and in the other "want".
I've read "Sorting Pebbles into Different Piles" several times and never understood what it was about until it was explained to me. Certainly the sorters aren't arguing about morality, but that's because they're not arguing about game theory, they're arguing about fun theory... Or more accurately not really, they are pure consequentialists after all, they don't care about fun or their lives, only piles into external reality, so it's theory of value, but not theory of fun, but theory of prime.
But in any case, I think people might well argue with them about morality. If people can sell primes to sorters and they can sell hedons to people, would it be moral to betray in a prisoner's dilemma and get 2 primes by giving -3 hedons. And most likely they will come to the conclusion that no, that would be wrong, even if it is just ("prime").
That you shouldn't kill people, even if you can get yourself the primeons you so desire, and they shouldn't destroy the right piles, even if they get pleasure from looking at the blowing pebbles.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2023-05-08T09:38:18.586Z · LW(p) · GW(p)
There is convergently useful knowledge, and parameters of preference that could be anything, in a new mind. You don't need to align the former. There are no compelling arguments about the latter.
comment by EniScien · 2023-05-01T15:19:30.670Z · LW(p) · GW(p)
Yudkowsky says in one of his posts that since probabilities 0 and 1 correspond to -∞ and +∞ (in log-odds), you can't just add up all the hypotheses to get one. However, I don't see why this should necessarily follow. After all, to select one hypothesis from the hypothesis space, we must get the number of bits of evidence corresponding to the program complexity of that hypothesis.
And accordingly, we don't need an infinite amount of evidence to choose; as many bits as there are in the longest hypothesis is sufficient, since any longer hypotheses will compete with shorter ones not for correctness but for precision.
Yes, in the end you can never reach a probability of 1, because you have meta-level uncertainty, but that is exactly what meta-level probability is, and it should be written as a separate factor, because otherwise adding an infinite number of uncertain meta-levels would give every one of your hypotheses probability 0.
And the probability P(H) without considering meta-levels should never be 0 or 1, but the probability P(H|O) could well be, since the entire meta-level is put into P(O), and therefore P(H|O) has a known, finite program complexity. That is, something like:
A="first bit is zero", B="first bit is one"
C="A or B is true", O="other a priori"
P(O)=~0.99
P(A|O)=1/2, P(B|O)=1/2
P(C|O)=P(A|O)+P(B|O)=1/2+1/2=(1+1/2)=1
P(C)=P(C|O)*P(O)=1*0.99=0.99
And if we talk about the second bit, there will be two more hypotheses orthogonal to the first two, and two more for the third bit; and if we talk about the first three bits together, there is already a choice among 8 combinations of the first six hypotheses, and it is no longer correct to ask which of the 6 hypotheses is true, because there are 6 of them, 8 combinations, and not one but at least 3 of them must be true simultaneously.
And accordingly, for the 8 combined hypotheses we can also add up the probabilities as 8 times 1/8 and end up with 1. Or we can write it as 1/2 + 1/4 + 1/8 + 1/8 = 1, but of course this only works if we can count the number of bits in the hypotheses, decompose them into those bits, and account for the intersections, so as not to count the same bits twice.
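A minimal sketch checking this arithmetic by brute force over 3-bit worlds (the uniform distribution over bit strings here stands in for "conditional on O"):

```python
from itertools import product

worlds = list(product("01", repeat=3))            # 8 equally likely 3-bit worlds

def p(hypothesis):
    """Probability of a hypothesis under the uniform prior over the worlds."""
    return sum(1 for w in worlds if hypothesis(w)) / len(worlds)

# Eight maximally specific hypotheses: 8 * 1/8 = 1
full = sum(p(lambda w, target=t: w == target) for t in worlds)

# Coarser partition with no double-counted bits: 1/2 + 1/4 + 1/8 + 1/8 = 1
coarse = (p(lambda w: w[0] == "0")
          + p(lambda w: w[0] == "1" and w[1] == "0")
          + p(lambda w: w[0] == "1" and w[1] == "1" and w[2] == "0")
          + p(lambda w: w == ("1", "1", "1")))

print(full, coarse)  # 1.0 1.0
```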
comment by EniScien · 2023-04-27T10:09:26.945Z · LW(p) · GW(p)
Human language works primarily through recognition in context. This works with individual words, but it can also work with whole phrases. The same word can be completely determined by its morphemes, with only the morphemes having to be learned from context; but a word can also be a single morpheme. And of course you should also account for words borrowed from other languages, which in the original language can be broken into morphemes, while in yours they are known only from context. The same thing works not only with whole words but also with phrases, where the individual words are separated by spaces in writing and pauses in speech: there, too, you can often determine the meaning of a phrase from the meanings of its words, but often not — a combination of words frequently has a meaning different from its parts, a meaning that can be recognized only from the context in which the phrase is used. Likewise, a single word can have a meaning different from its morphemes: for example, the words "computer", "calculator" and "counter" should be synonyms based on their morphemes, but in context they are used to refer to three specific, different devices. And there can be an unlimited number of such levels of contextual meaning; they are also often used to add additional layers of meaning. If you look at everything only as a sequence of morpheme meanings, you get one meaning; if you take the contextually learned meanings of whole words, you get another; if you look at phrases, a third; and so on.

I started programming at the age of 9, and even then I noticed that programming is not rocket science — there is nothing complicated; writing a program is no harder than speaking; writing in a programming language is just like writing in a foreign language. Later I also heard about a study where MRI showed that programming uses the same areas of the brain as talking. And this finally cemented the wrong thought in me. The fact is that ordinary languages and programming languages, despite all being languages, are very different. Programming languages are "ascending" languages: they start with descriptions of completely elementary structures, then refer to them, then to sets of references, and so on. In any case it is a perfectly precise process: in programming, if you have described some elementary structure, that description is perfectly exact, without any vagueness, and if you have described a high-level concept out of references, it is again perfectly exact — you have just referred to a number of perfectly exact low-level descriptions.
Here you create your own worlds with your own laws of physics, and like nature, they do not tolerate exceptions (although there is also a huge difference between the laws of physics and the rules of a game): if you say "true", it is absolute truth; if you say "everything", it is absolutely everything there is; if you say "always", it is absolutely always, and any situation where this is not so has to be an explicit exception, "always, except a and b". Human language is completely unlike that: it works like leaky categories (reference needed). If you say "I will never tell a lie", then although you do not describe any exceptions, you still mean a bunch of them, like "except if I myself do not know the truth", "unless I get drugged", "unless someone changes my brain", and probably also "unless lying saves my life or someone else's".
comment by EniScien · 2023-04-22T20:19:57.318Z · LW(p) · GW(p)
I recently read Robin Hanson saying that small groups holding counter-intuitive beliefs tend to arrive at them from very specific arguments — some even invent their own terminology — and that outsiders who reject those beliefs often don't even bother to learn the terminology or review the specific arguments, because they reject the beliefs on purely general grounds. And this is what I had noticed myself, though only from inside such a group: I was outraged and annoyed by the fact that the arguments of those who reject FOOM, or the danger of AI/AGI in general, are usually so general that they have nothing to do with the topic at all.

On the other hand, Robin Hanson gives an example like conspiracy theories, and this is fair, and it is something I had not thought about: when I come across a conspiracy theory, I really don't even try to spend time studying it; I immediately reject it on general grounds and move on. This could well be called applying an outside view and a reference class. But the point is not that I use this as an argument in a debate (arguing with a conspiracy theorist is a priori useless, because to believe in one you must not understand the scientific method); I use it to avoid wasting time on the more precise but longer inside look. If I did take the time to consider a conspiracy theory, I would not dismiss the arguments with "your idea belongs to the reference class of conspiracy theories, and your inside view is just your vision under cognitive distortions". I would apply an inside view, understand the specific arguments, and reject them on the basis of scientific evidence; I would use the outside view — the reference class of conspiracy theories — only as not-very-weighty prior information.

So the problem is not that I have an impenetrable barrier of reference classes, but that this narrow group has to get out of the reference class of ordinary conspiracy theorists — and, perhaps necessarily, into the reference class of those who are well acquainted with the scientific method — because otherwise I will decide they are just one of the slightly-less-common non-standard conspiracy-theory groups.
And, I'm not entirely sure it's worth putting in the same post, but it also discussed cases of a sharp change in values upon reaching full power. Maybe not in the economy, but in politics this happens regularly: at first the politician is all kindness, does only good, and then we finally give him all the power, make him an absolute dictator, and for some reason he suddenly stops behaving so positively, organizes repressions, starts wars.
And more about Hanson's sketch of the explosion of "betterness", which ends with the question "Really?". Personally, I answer: well... yes. Right. Only the human brain works about 20M times slower than a computer processor, so it would take about that much longer. If for a computer I would be ready to believe a period from 1 month down to 1 millisecond, then for a person it would be from ~1.5M years down to ~6 hours — and the second is conditional on the ability to self-modify instantly, including any new idea instantly updating all your previous conclusions. Plus, in the intermediate scenarios, you still have to account for sleep, food, willpower, the need to invent a cure for aging and to eliminate other causes of death. In addition, you would have to hide your theory of superiority from other people so that they don't decide to stop you before you seize power.

But in general, yes, I think this is possible for people; there's just a whole bunch of problems in the way, like lack of will, the impossibility of self-modification, and the terrible evolutionary spaghetti code in the brain. But above all, a lifespan that is too short: if people lived a million years, or better a billion, so that there was margin, they could achieve all this despite all the difficulties except death. However, individuals live too little and think too little, so the explosion of betterness is only possible for them collectively; it has been going on for 300 years, across 3 to 15 generations, and involves 8 billion people at its peak.

More specifically, I would consider this "betterness explosion" a broader concept than the "intelligence explosion", in the sense that it does not talk specifically about intelligence but about optimization processes in general — and at the same time narrower, because it seems to assume specific, positive human values, whereas a nuclear explosion could also be called an optimization explosion, but by no means one aligned with human values.
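A quick check of that scaling (the 20M serial-speed gap is my assumed round number, not a measured one):

```python
SPEEDUP = 20_000_000                     # assumed serial-speed gap: brain vs. CPU
HOURS_PER_YEAR = 365 * 24

month_hours = 30 * 24 * SPEEDUP          # one computer-month, in human hours
ms_hours = 0.001 / 3600 * SPEEDUP        # one computer-millisecond, in human hours

print(f"1 computer-month -> {month_hours / HOURS_PER_YEAR:,.0f} human years")
print(f"1 computer-ms    -> {ms_hours:.1f} human hours")
```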
comment by EniScien · 2023-04-22T17:26:25.866Z · LW(p) · GW(p)
I have never heard this stated before, either here or elsewhere, but I notice myself that even the most unreasonable thing, like religion, usually has its grain of truth — or rather, many grains of truth. That is, knowledge, even in human society, almost never has purely negative utility: a person who knows a certain cultural cluster will almost always have an advantage over a purely ignorant one. With one important nuance: the more knowledgeable one may turn out worse when both are taught correct rational methods — a blank mind will accept them without problems and go forward, while the one occupied by another meme will resist. And this is all understandable: the combined optimization pressure of intelligence and memetic evolution is unlikely to leave purely harmful memes alive in the end. Human thinking plus natural selection usually correlates with somewhat useful or sane traits in memes. Although, of course, they can very often lead into traps and holes in the fitness landscape, create inadequate equilibria, and so on.

This can be seen, for example, in sayings (not to be confused with omens, which are almost always just a consequence of apophenia plus confirmation bias). Something like "geniuses think alike" is easily paraphrased into "there are a thousand ways to be wrong and only one way to do it right". Usually, if a saying exists, it correctly reflects some aspect of reality. The problem is that this kind of observational data collection, without building causal models, gives no way to understand how conflicting sayings relate to each other and which are true in which situations. Because about those same geniuses there is also the opposite observation, that they think outside the box. And without analysis you cannot clearly say that, to obtain the most information, geniuses should causally or logically coordinate to think about different topics; or that non-standard thinking is just "not like usual, but meaningful", so it can be caused either by being on higher levels of the conceptual pyramid or simply by carrying a different memeplex — which does not cancel the previous point about coordinating to think in unusual directions of search. However, with all sayings there is also the effect that only those who already understand them understand them, because they are just pointers — but I will write more about that another time.
comment by EniScien · 2023-04-22T13:48:07.283Z · LW(p) · GW(p)
An interesting consequence of combining the logical space of hypotheses, Bayes' theorem, and priors taken from Kolmogorov complexity is that any hypothesis of a given complexity has at least two opposite child hypotheses, obtained by adding one more bit to the parent hypothesis in one of its two possible states.
And, conversely, you can remove one bit from a hypothesis, making it ambiguous as to which child hypothesis fits your world, but then you need fewer bits of evidence to accept it a priori.
And accordingly, there is one hypothesis that you have to accept with no evidence at all: a program of zero length, an empty string — or rather an indeterminate string, because it contains all of its child hypotheses inside itself; it simply does not say which of them is true.
And this absolutely a priori hypothesis is that you exist within at least one world, the world of the hypothesis space itself; it is simply not specified in which particular part of this world you are.
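A minimal sketch of the toy prior this picture implies, with P(h) = 2^(−len(h)): each added bit halves the prior, a parent's prior equals the sum of its two children's, and the empty hypothesis gets probability 1.

```python
def prior(h: str) -> float:
    """Toy complexity prior over bit-string hypotheses: 2 ** -len(h)."""
    return 2 ** -len(h)

parent = "1011"
children = [parent + "0", parent + "1"]

print(prior(""))                                        # 1.0 — the empty hypothesis
print(prior(parent), sum(prior(c) for c in children))   # 0.0625 0.0625
```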
And this makes us look differently at the Tegmark multiverse. Because, say, if we know that we are on Earth and not on Mars, that does not mean Mars does not exist; we just aren't there. If we are in a certain Everett branch, that does not mean the other Everett branches don't exist; we just aren't in them. And continuing this series: if we are inside one program of physical laws, that does not mean that other programs of universes with other laws don't exist; it's just that we ourselves are not there.
If we take this pattern to its logical conclusion, then the fact that we are inside one sequence of bits does not mean that others do not exist. In other words, on the absolutely a priori hypothesis, all possible sequences of bits exist — absolutely all of them, with no restrictions on computability, on infinity, or even on consistency. Absolutely all of mathematics simply exists.
In general, "all of mathematics exists" adds up to a hypothesis of zero length and zero complexity, because it does not limit our expectations in any way. That is, it is not a request to accept some new hypothesis; it is, on the contrary, a request to remove unjustified restrictions. In the broadest sense there are no limits on computability or infinity, and if you claim otherwise, you are proposing a hypothesis that narrows our expectations, and then the burden of proof lies on you.
However, this can also be called a rethinking of the concept "exists": people say it intuitively but cannot give it any definition. If you refine the concept into something about controlling your expectations, then you can ask questions like how likely you are to see contradictions, or infinity, or something else of the kind, and answer them concretely.
For example, one could say that you will never see a contradictory territory, because that is the mind-projection fallacy: "contradictory" is a property of a map that has no territory it could be describing.
Seeing contradictions in the territory is like seeing a "you are here" icon or a scale indicator on it, or like asking what the error of the territory relative to itself is — the answer is "none", because the territory is not a map, not a model created from an object; it is the object itself. It cannot be in error about itself, and it cannot contradict itself.
All this could be called something like Tegmark V: mathematical realism, Platonism without limits. And personally, these arguments convince me quite well that all of mathematics exists and only mathematics exists — or, to paraphrase, that everything which exists is mathematics or some specific part of it.
Edit: somehow I forgot to spell out this obvious point in terms of the hypothesis space and the narrowing of expectations.
Usually any narrowing is by one bit, i.e. by half, so you cannot reach zero in any finite number of steps; each time you remove one of the two values of a bit in a direction perpendicular to all the previous ones. But you can also do otherwise: you can remove neither of the two values, or, after removing the first, remove the second as well — cut not one half, a·(1 − 1/2) = a/2, but both halves, a·(1 − 1/2 − 1/2) = 0. With such a removal, parallel rather than perpendicular to the previous ones, you exclude both possible options and narrow the space to zero along some direction, and as a result the whole hypothesis automatically narrows your entire hypothesis space to zero, so that it now corresponds not to half of the possible territories but to none of them: by excluding both alternatives, it excludes all options.
This looks like a much better, much more technical definition of contradiction than trying to derive it from etymology. On this view there is no special thing called "contradiction": every map is simply supposed to narrow your expectations, leaving one option for each choice; an indeterminate map leaves both and is therefore useless, while a contradictory one leaves neither, excluding both options, and is therefore even more useless.
If the contradiction is not in the map itself but in the combined system of map and territory, there is no such problem; it only means that the map is wrong for this territory but may be right for another. It happens that we already have an a priori exclusion of one of the options, while the data received from the territory exclude the second; if we draw up a second map from those data, it will not supplement and refine ours but, together with it, exclude both possible alternatives. So the maps are incompatible: they cannot be combined without excluding both sides of reality, without narrowing the space to zero.
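A minimal sketch of this definition in terms of sets of possible worlds (3-bit worlds as a toy example): a normal map narrows by one bit, an indeterminate one narrows nothing, and a contradictory one — or two maps excluding opposite values of the same bit — narrows to the empty set.

```python
from itertools import product

worlds = set(product((0, 1), repeat=3))          # all 3-bit territories

def narrow(ws, bit, value):
    """Keep only the worlds in which bit `bit` has value `value`."""
    return {w for w in ws if w[bit] == value}

normal        = narrow(worlds, 0, 0)                         # halves the space
indeterminate = narrow(worlds, 0, 0) | narrow(worlds, 0, 1)  # excludes nothing
contradictory = narrow(worlds, 0, 0) & narrow(worlds, 0, 1)  # excludes everything

print(len(worlds), len(normal), len(indeterminate), len(contradictory))  # 8 4 8 0
```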
comment by EniScien · 2023-04-22T12:36:52.286Z · LW(p) · GW(p)
I don't remember if I already wrote about this, but I was thinking about the space of hypotheses in terms of first- and second-order logic, about where recursive reasoning hits bottom, and so on, and I came to the conclusion that if you actually find some mathematical formulation of Popper's falsifiability criterion, it must be deeper than Bayes' theorem. In other words, Bayes' theorem shows not that positive cognition also exists, just weaker, but that negative cognition has indirect effects that can be mistaken for weak positive cognition.

To put it concretely: Bayes' theorem is usually depicted as a square, divided in one dimension by one line and in the other by two. After you've done the calculation, you cut off the probabilities that didn't happen and then normalize back to a nice square. But if you treat this square as a space of hypotheses from which you cut out falsified clusters, you see that no probability ever increases; some just fall much more than others, so in relative terms the less-fallen ones look larger, and after normalization the difference is lost entirely.

The main advantage of this view is that there is no crisis of confidence in it: in principle you cannot confirm anything, you can only refute it more or less. Bits of information become refutation or contradiction scores: you don't confirm the bit's current value, you refute the opposite one, because it would mean a contradiction. These numbers are probabilities less than one, so under multiplication they can only fall, but you look at which hypotheses have fallen the least. For example, religion has already been pierced by so many spears that for you to end up at such a low probability, everything currently above it would have to be dragged even lower; whereas for quantum mechanics it doesn't matter that it has flaws — it still remains our best hypothesis. In other words, this lets you avoid self-destruction even if you are a contradictory agent; you just keep looking for a less contradictory model. And it also works in comparisons between agents: no one can raise someone's rating, you can only find new contradictions, including in yourself, and the one in whom others have found the fewest contradictions wins.

In a broader sense, I believe that contradiction is a more fundamental category than truth and falsehood. A falsehood is something contradictory only in combination with some external system, so it can win in a system it does not contradict. And there are also things that are contradictory in themselves, and for them no ideal external world can be found in which the number of contradictions reaches zero. In other words, things contradictory in themselves are worse than things that contradict only something specific, but there is no fundamental difference: neither false nor even contradictory systems destroy the mechanism of your cognition. In addition, besides false, true, and contradictory, there is a fourth category, the indeterminate; these score the fewest contradiction points, but they are not particularly useful, because the essence of Bayesian cognition is distinguishing alternative worlds from one another, and if a fact is true in all possible worlds, it does not help you discern which world you are in.
However, this does not mean such facts are completely useless, because it is precisely facts of this kind, true in all possible worlds, that all of mathematics and logic consists of, as distinguished from contradictory facts, which are true in no possible world. In other words, the point is again that nothing can be proven: mathematics is not a pool of statements proven true for all worlds, it is rather a pool of statements that have not yet been shown to be false in all of them.
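(A toy numerical sketch of the elimination picture above, in Python; the hypotheses and numbers are invented for illustration. Every unnormalized score can only fall, because every likelihood is at most 1; only after renormalization does the least-fallen hypothesis look as if it "went up".)

```python
# Toy "Bayes as pure elimination": likelihoods are <= 1, so nothing ever rises.
priors = {"A": 0.5, "B": 0.3, "C": 0.2}
likelihood = {"A": 0.9, "B": 0.3, "C": 0.01}   # P(observed evidence | hypothesis)

unnormalized = {h: priors[h] * likelihood[h] for h in priors}
total = sum(unnormalized.values())
posterior = {h: unnormalized[h] / total for h in priors}

print(unnormalized)  # {'A': 0.45, 'B': 0.09, 'C': 0.002} -- every score fell
print(posterior)     # A ends up around 0.83 -- after renormalization it *looks* like it rose
```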
comment by EniScien · 2023-04-22T11:32:33.227Z · LW(p) · GW(p)
Some time ago I saw an article here on the topic of what the words "deep causality" and "surface analogy" actually mean. For me personally it was intuitively obvious at the time what the difference was; it was also obvious to me that the article was not really about deep analogies but only about concentrating probability mass, which of course is a very important skill for a rationalist, actually a key one, but it's just not what I mean by deep causes. Despite that, I couldn't put my intuition into words back then. I wasn't even sure whether an analogy could be "truly deep" or only "deeper" than some other one.
Since then I have understood it better and worked out for myself the notions of upward and downward understanding: either computing large-scale consequences from simple base-level laws, or trying to infer the basic laws while seeing only the large-scale consequences. In the first case you can speak confidently about the deepest level; in the second you can't, since in theory it can always turn out that you didn't know the true laws and there is a level deeper still.
An obvious analogy for surface versus deep is a black box, where "surface" is a purely statistical analysis of patterns in the inputs and outputs, and "deep" is an attempt to build a model of the inside of the box. The difference is that when you are doing statistics you are always ready to say there is some possibility of a different outcome, and that won't refute your model; whereas when you build a model of the box's internal structure, your hypothesis says that certain combinations of inputs and outputs are strictly impossible, and if such a combination shows up, your model is wrong. Deep causal models are more falsifiable and give sharper answers.
You could also say the difference is like that between a neural network with only two layers, inputs and outputs with connections between them, and a network with some number of hidden internal layers. Deep causal models require you not merely to say that there is some correlation between the states of the inputs and the outputs; they require you to build a chain of causes between them, to lay a specific path from one to the other. And in the real world you can often check the individual steps of that path, not just the ends. It can also be compared to a logical rather than probabilistic construction: you can "ring out" the circuit and say definitely which outputs go with which inputs. And, as in a deduction, if you deny the conclusion you have to point at the specific premises you are rejecting; you cannot reject the conclusion without breaking some part of the model's structure. Something like that. Probably later I will formulate it even better and add to it.
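(A minimal sketch of the black-box contrast, with an invented toy box whose hidden mechanism is XOR. The "surface" model is just a frequency table that can absorb anything; the "deep" model commits to an internal mechanism and is refuted outright by a single forbidden observation.)

```python
import itertools

def black_box(a, b):          # the hidden mechanism we are trying to model
    return a ^ b

observations = [(a, b, black_box(a, b)) for a, b in itertools.product([0, 1], repeat=2)]

# Surface model: tally which outputs followed which inputs; nothing is ever ruled out,
# a surprising observation only shifts the frequencies.
surface = {}
for a, b, out in observations:
    surface.setdefault((a, b), []).append(out)

# Deep model: a concrete guess about the inside of the box.
def deep_model(a, b):
    return a ^ b

# The deep model forbids specific input-output combinations; one observation of,
# say, (1, 1) -> 1 would falsify it, while the surface table would just record it.
for a, b, out in observations:
    assert deep_model(a, b) == out, f"deep model falsified by {(a, b, out)}"
print("deep model has survived every observation so far")
```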
comment by EniScien · 2022-11-01T10:25:56.463Z · LW(p) · GW(p)
It occurred to me that looking at things through first-order logic could be the answer to many questions. For example, the choice between measuring complexity by the number of rules or by the number of objects: the formulas of quantum mechanics do not predict some specific huge combination of particles, they, like all hypotheses, restrict your expectations relative to the space of all hypotheses/objects, so complexity-by-rules and complexity-by-objects give the same answer here.
At the same time, limiting the complexity of objects ought to be the solution to Pascal's mugging (I don't have a link to the original articles, so I don't know whether it has already been solved), the answer to where the leverage penalty comes from. When you postulate a hypothesis you narrow the space of objects: initially there are far more than a googolplex of possible people, and you are singling out one specific googolplex as the axioms of your hypothesis, and for that you need a corresponding number of bits, because in logic an object with identical properties cannot be a different object (and if I'm not mistaken, quantum mechanics says exactly that), so each person in the googolplex must differ in some way, and to specify that you need at least a logarithmic number of bits. And as long as you are human, you cannot even formulate that hypothesis exactly, writing down every axiom that would have to hold for it to be true, let alone gather enough bits of evidence to establish that they actually hold.
Also, a hypothesis about any given number of people is the sum of the hypotheses "there are at least n−1 people" and "there is one more person", so raising its probability by a factor of a billion would be literally equivalent to believing at least the part of the hypothesis on which there are a billion people who will be affected by the master of the Matrix. This can also be put as: every very unlikely hypothesis is the sum of two simpler, less unlikely hypotheses, and so on down until you have enough memory to consider them; in other words, you must start with the more likely hypotheses, test them, and only then add new axioms, new bits of complexity, to them. Or as a version of the leverage penalty, only applied not to the probability of occupying such a significant node, but to the choice from the space of hypotheses, where the googolplex-of-people hypothesis has a googolplex of analogues with smaller numbers.
That is, by the lights of first-order logic we assign unreasonably high priors to regular hypotheses, effectively infinite ones, whereas really you have to choose between two options for every bit, so the longer the string of particular bit values, the less probable it should be. Of course we have evidence that things behave regularly, but not all evidence points that way, let alone an infinite amount of it, since we haven't even checked all 10^80 atoms in our Hubble volume, so our prior in favor of regular hypotheses will not be strong enough to outweigh even a googolplex.
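(A toy arithmetic sketch of the "log bits" point, with a googol rather than a googolplex of people so the numbers fit in a float; the 2^-bits complexity prior is the standard description-length assumption, everything else is invented for illustration.)

```python
import math

N = 10 ** 100                      # a googol of distinct people posited by the hypothesis
bits_to_index_one = math.log2(N)   # ~332.2 bits just to single out one of them
print(bits_to_index_one)

# Under a 2**-bits description-length prior, every extra bit needed to tell the
# postulated people apart halves the prior, so the penalty keeps pace with the
# number of distinct things the hypothesis insists on -- the leverage-penalty shape.
print(f"prior multiplied by roughly 2**-{bits_to_index_one:.0f}")
```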
comment by EniScien · 2022-06-06T13:46:33.717Z · LW(p) · GW(p)
The more I learn about the brain, the more the sense of its integrity dissipates. Apparently, even though we look at our mind from the inside, it behaves just like the body seen from the outside: as it stops being mysterious, it also stops seeming like something whole, a single fluid of thinking. Ignorance creates not only a sense of randomness (chaos), but also a sense of wholeness.
comment by EniScien · 2022-06-03T16:48:56.108Z · LW(p) · GW(p)
I remember that as a kid, if I heard two apparently conflicting opinions from two sources I liked, I would take great pains to suppress the cognitive dissonance, stretch an owl over the globe (force the two to fit together), and convince myself that both were right. It seems I was even proud of the work I did. So I can understand why some believers try to combine the theory of evolution with the idea of divine creation: for them there are simply two sources associated with positive emotions, and they don't want the unpleasant feeling of admitting that, at best, one of them is wrong.
comment by EniScien · 2022-05-31T11:10:59.138Z · LW(p) · GW(p)
Somewhere in the comments it was written that rationality is not like building a house, where the main thing is to lay a good foundation; no, it's more like trying to patch a hole in a sinking ship. In my opinion, this belongs in a collection of golden quotes.
Replies from: EniScien↑ comment by EniScien · 2023-04-22T14:22:02.131Z · LW(p) · GW(p)
This phrase could also be called something like a projection of the concept of inadequate equilibria in civilization onto a single person or decision process. In other words, the attempt at better self-modification runs into self-reference: you are trying to check each of your elements with each of your elements. An external object you can test without prejudice, but you cannot test yourself that way, because the prejudices can hide themselves.
comment by EniScien · 2022-05-30T15:40:50.791Z · LW(p) · GW(p)
In fact, I'm both worried and pleased that on LessWrong I can influence the ratings of so many posts so strongly; that is, I can make a six-point difference between two messages. On the one hand this means that not many people read the site and the scores will not be well balanced: a single person with ordinary karma gets too much influence. On the other hand, it is psychologically good for the voters themselves, since you can see that your votes matter; there is no feeling of "I don't influence anything" or "a bunch of other smart people will decide better".
comment by EniScien · 2022-05-22T09:36:34.493Z · LW(p) · GW(p)
(I can't find where it was; if I find it, I'll move this there.) Someone suggested, in light of the problems with AI, cloning Yudkowsky, but the problem is that apparently we don't have the eighteen or so years it takes for a human brain to form, so even if every other problem were solved, it is simply too slow. And with any means of accelerating the brain's development, the problems are already obvious.
comment by EniScien · 2022-05-20T13:03:05.917Z · LW(p) · GW(p)
It seems that in one of the Sequences Yudkowsky says that Newtonian mechanics is false. But in my opinion, saying that Newton's mechanics is false is like saying that Einstein's theory of relativity is false: we know it does not work in the quantum realm, so sooner or later it will be replaced by another theory, and by that standard you could call it false in advance. I think this is simply the wrong question; either we should say what percentage of it is false, somehow without confusing that with the probability that it is false, or we should continue the metaphor of the map and the territory. Maps are usually not false, they are inaccurate. Some maps fail to outperform white noise in their predictions, but Newton's map is not like that: his laws worked well until the problems with Mercury's orbit were discovered and the theory of relativity replaced them. Newton's map is less like the territory, less accurate, than Einstein's map. Say Newton's map contained a blurry gray spot in the shape of a circle, and one could assume it was just a gray circle; Einstein's map then showed us, at higher resolution, that inside that circle there is a complex pattern of black and white in equal alternation, and no gray at all.
comment by EniScien · 2022-05-17T18:21:22.648Z · LW(p) · GW(p)
It occurred to me (I haven't finished reading Inadequate Equilibria, so I don't know if the comparison is made there) that the inexploitability of markets is similar to that mutual entropy thanks to which you could put your finger in boiling water and not get burned, if you knew the movements of the atoms.
Replies from: Dagon↑ comment by Dagon · 2022-05-17T18:35:51.574Z · LW(p) · GW(p)
I'm not sure I get the analogy. And in fact, I don't think that KNOWING the movements of atoms is sufficient to allow you to avoid heat transfer to your finger. You'd need it to be true that there exists a place and duration sufficient to dip your finger that would not burn you. I don't think this is true for any existing or likely-to-exist case.
If you can CONTROL the movements of atoms, you certainly can avoid burning. This is common and boring - either a barrier/insulator or just cooling the water first works well.
↑ comment by EniScien · 2023-04-22T16:26:46.236Z · LW(p) · GW(p)
I expressed myself inaccurately. Firstly, of course, mere knowledge will not make the water cold for you; you would also need to move your finger extremely precisely and quickly to dodge the hotter molecules. I just considered that detail insignificant, since you physically cannot fit information about 0.25 kg of water molecules into a 1.5 kg brain in the first place. Secondly, to put it with my current best understanding: these systems are similar in that people usually say the glass of water simply has high entropy, so you cannot help getting burned, in the same way that they say the market is simply efficient, simply unpredictable. But these are all human-centric, relative categories (I don't remember whether on LessWrong they are supposed to be called "magical" or "unnatural"). One way or another, the point is that you cannot talk about the entropy of the order book or the predictability of the market as object.property; it is more like subject.function(object), and treating it otherwise would be supporting the mind projection fallacy. There is no entropy of an object by itself, only the mutual entropy of two objects, and it doesn't matter whether you are talking about heat dissipation or about predicting information. (It then occurred to me that in both cases there is the same difference in vision between an informed and an uninformed subject: what is impenetrable chaos for one is transparent order for the other.)
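(A small sketch of "predictability is subject.function(object), not object.property": the same sequence scored by two observers with different models, using Shannon surprisal. The alternating sequence and the 0.99/0.01 model are invented for illustration.)

```python
import math

sequence = "0101010101010101"

def total_surprisal(seq, predict):
    # total surprise in bits: -sum of log2 P(next symbol | history), under the observer's model
    return sum(-math.log2(predict(seq[:i], seq[i])) for i in range(len(seq)))

uninformed = lambda history, nxt: 0.5   # models the source as a fair coin
informed = lambda history, nxt: 0.99 if (not history or nxt != history[-1]) else 0.01  # knows it alternates

print(total_surprisal(sequence, uninformed))  # 16 bits: impenetrable "chaos" for this observer
print(total_surprisal(sequence, informed))    # ~0.2 bits: nearly transparent order
```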
comment by EniScien · 2023-04-29T12:22:20.088Z · LW(p) · GW(p)
At one time I was very interested in the question of what "time" is and what "entropy" is. The thing is, I watched popular science videos on YouTube, and nowhere was there a real answer; at best there was some kind of circular argument. Nor was there a real explanation of what entropy is anywhere, only the vague statement that it is a "measure of disorder in the system".
Meanwhile, the idea kept turning in my head that it has something to do with the fact that in space there are more directions outward than inward. I also kept thinking that it must be connected with the principle of least action, for which I likewise never met an explanation of this kind: namely, that the straight path is one path, while there are at least four detours around it, and those are only the nearest ones, each further step away giving twice as many again. So even if we imagine that there were no "law" of least action, we would still observe it, because for a particle the probability of being one more step away from the central path would be two times smaller, since there are twice as many paths there, and for a wave it would not even be probability but simply its distribution.
All these thoughts were inspired by a video of balls falling down a pyramid of pegs: at each step they have paths on both sides leading inward and only one side path leading outward, and they end up forming a normal distribution. To put it another way, although the number of end points is the same, the paths in the center converge on one another while the paths at the edges do not: the two paths nearest the center merge into a single cluster of central paths, and the two paths at the edges do not.
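(The peg-board picture counted directly: a ball that goes right k times out of n ends up in bin k, and there are C(n, k) distinct paths into that bin. A quick sketch; math.comb needs Python 3.8+.)

```python
from math import comb

n = 10  # rows of pegs
paths_per_bin = [comb(n, k) for k in range(n + 1)]
print(paths_per_bin)       # [1, 10, 45, 120, 210, 252, 210, 120, 45, 10, 1]
print(sum(paths_per_bin))  # 1024 = 2**10 paths in total, most of them crowded near the center
```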
And from this we can guess that the average expected space will have a shape close to a square, a cube, a tesseract, or some other figure with equal sides: although there is only one such figure and many other variants, those other variants do not add up with each other, while the variants close to a cube add up into a cube.
This also explains for me why Everett's chaotic branches do not add up to a chaotic world. There are more chaotic branches, but they are spread around the edge rather than gathered at the center; the least chaotic branches are fewer, but they converge on a world close to order, while the most chaotic branches differ from one another even more than they differ from order.
Somewhere here on LessWrong I saw a question about why, if we live in a Tegmark universe, we don't see constant violations of the laws of physics, like "clothes turning into crocodiles". But... we do observe them. It's just that "clothes" and "crocodile" are overly meaningful, human-scale variants; in reality there are far more possibilities. One mole of matter contains ~10^23 particles, and even if we only consider each particle's presence or absence, that is 2^(10^23) variants; our system is too big for us to notice these artifacts. If, however, we go down to individual particles...
...that is exactly what we see: constant random fluctuations. Quantum ones. This can be counted as a successful prediction of Tegmark's, though in fact only a retrospective one.
comment by EniScien · 2023-04-22T17:49:51.782Z · LW(p) · GW(p)
I thought about the phenomenon of monuments to great people being taken down because those people were slave owners, and I formulated why I don't like this trend. The point is that the past will look extremely terrible from the viewpoint of the future no matter what: virtually anyone in the past did at least one thing that is absolutely terrible by today's standards. If the trend continues, it is very likely that in the future you too will be considered a monster, for reasons that look strange from today's point of view: that you did not cryopreserve chimpanzees, ate the meat of dead animals, buried your mother's brain in the ground to rot and be eaten by worms, did not sign up to donate your organs if you died, did not donate to malaria nets, agreed to drive a car, did not have biological children, were smarter than average, or got bad grades. And these are just things already being discussed today; it may well be something completely unexpected, something you could not even suspect was bad. One can give other examples of the horror of the past besides slavery: Bayes, Newton and a great many other scientists were religious, and since religion is still widespread today, the whole previous list can equally be applied to the scientists of the past, if you agree with its specific points. Given all this, it seems obvious that achievements should be valued independently of today's moral assessment.
comment by EniScien · 2023-04-22T13:06:35.722Z · LW(p) · GW(p)
I thought for a long time about what "contradictions" even are and how they can fail to exist in any world when here they are, you can write them down on paper. In the end I concluded that this is exactly the case where it is especially important to look at the difference between the map and the territory. An inconsistent map is a map that corresponds to no territory. Normally you see a territory and then draw a map of it. But the space of maps, the space of descriptions, is much larger than the space of territories, and you can perfectly well not draw a map from any territory at all, but simply generate a random map and ask which territory it corresponds to. In most cases the answer is "none", for the reason just given: there are maps drawn from territories, which correspond to those territories, and there are random maps, which are not obliged to correspond to any territory at all. In other words, an inconsistent map exists, but it was not drawn from any territory, and the territory it would have been drawn from, had it been drawn from one, does not exist.
You can picture it as a map made from a globe: if you unfold it and flatten it, you get a series of gores with empty space between them, which on an ideal map should be not white but transparent. If instead you first draw the flat map, you can fill those transparent areas with some pattern, and then if you try to fold the map back into a globe you will not get any real globe, because what you have is not a sphere, a Riemannian space, but a plane, a Euclidean space of higher dimension, which relates to the sphere the way the space of maps relates to the space of territories; the attempt to project it onto a sphere will not come out even, it will go into folds, like an attempt to project Lobachevsky's space into ours.
For programmers this can be put as the difference between a map and an array: in a map-like description you can write down several different values for the same key, and then it cannot be collapsed into an array. So inconsistent models can be described as artificially constructed imitations of projections of an object that cannot be unambiguously deprojected back into any object. And to avoid confusion, one should say not that contradictions are something that cannot exist, but that a contradiction is a state of the map that corresponds to no state of the territory. You could also say this is a typical case of the mind projection fallacy: you project a certain state of the map (which really exists and is not itself contradictory) onto the territory, fail to obtain any such territory, and then say that you cannot obtain it because that would be a contradictory territory.
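(A small sketch of the map/array remark: a "territory" always reads off as a function, one value per key, while a freely generated "map" can list two incompatible values for the same key and then refuses to collapse back. The keys and values here are invented for illustration.)

```python
# A randomly generated "map": nothing stops a description from contradicting itself.
random_map = [("sky_is_blue", True), ("water_is_wet", True), ("sky_is_blue", False)]

def collapse_to_territory(pairs):
    """Try to read the description as a function key -> single value."""
    territory = {}
    for key, value in pairs:
        if key in territory and territory[key] != value:
            raise ValueError(f"contradictory map: {key!r} is given two different values")
        territory[key] = value
    return territory

try:
    collapse_to_territory(random_map)
except ValueError as e:
    print(e)   # this description cannot be "deprojected" into any territory
```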
comment by EniScien · 2022-10-18T16:58:11.049Z · LW(p) · GW(p)
It seems to me that the standard conjunction-fallacy question about "the probability of an attack by the USSR on Poland as a result of conflict X" versus "the probability of an attack by the USSR on Poland" is a poor fit for the experiment, since it carries the implicit connotation that, because the first version names reason X and the second names no Y or Z, in the second case we are being asked to assess an attack happening for no reason; and if you then show the person their own answers, hindsight kicks in, as in that experiment with the swapped photographs. A better question, to avoid planting such a subconscious false premise, would be "the probability of an attack by the USSR on Poland as a result of conflict X" versus "the probability of an attack by the USSR on someone as a result of some conflict between them". Though I'm not sure this would change the overall result of the experiment, if only because even with an explicit "some" instead of an implicit "no reason", something so multivariate and vague will still not feel like a plausible story: vague = not detailed = implausible = unlikely = low probability.
comment by EniScien · 2022-06-01T07:34:10.880Z · LW(p) · GW(p)
My feeling is that the planning fallacy stems from the illusion of control. When a person plans, it seems to them on a subconscious level that things will go the way they draw them up, so they tend to draw up the plan as if nothing will go wrong. You don't want to write a plan that says "at this point I will make a mistake", do you? Who would deliberately plan to commit an error when they could plan that no error will be made? It's like writing a book: if you do not plan for the character to make a mistake, he will never make one. Except that in books skillful writers strive to follow Murphy's law, because a story about a flawless character is boring, a Mary Sue, a piano in the bushes (a contrived convenience), authorial fiat and so on. In reality a person wants to avoid mistakes, so they plan that everything will work out. It worked in the book, after all... and in the head, in the imagination, everything also went off without errors whenever errors weren't added deliberately, so why should reality be any different? This seems to be the same reason people who know a box is 70% red and 30% blue try to bet red 70% of the time, as if the whole point of randomness weren't that you can't plan for it. It must also be related to the inability to respect unknown unknowns, to take them into account when you don't intuitively feel that there is something you don't know. Perhaps just teaching the difference between map and territory down to the intuitive level, without addressing the planning fallacy specifically, would already improve the situation.
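(A quick check of the 70/30 betting remark, assuming independent draws: "probability matching", betting red 70% of the time, is strictly worse than just always betting red.)

```python
p_red = 0.7

always_red = p_red                                                # 0.70 expected hit rate
probability_matching = p_red * p_red + (1 - p_red) * (1 - p_red)  # 0.58 expected hit rate

print(always_red, probability_matching)
```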
comment by EniScien · 2022-05-28T13:51:42.113Z · LW(p) · GW(p)
Based on Yudkowsky's post about the aura of formidability and the Level Above Mine, it would probably be good to make some kind of rating with examples of books (like that list of the best textbooks, compiled from the comments of people who had read at least three on the subject), so that you can assess what level you yourself are at. At the same time it would show how objective such a measure is. It seemed to me that a community like LessWrong exists for purposes like this, among others.
comment by EniScien · 2022-05-24T10:50:11.760Z · LW(p) · GW(p)
If you think about it, there is nothing wrong with every person knowing everything that civilization currently knows; on the contrary, this is an overdue return to normality. There was a time when a single scientist could be aware of all the achievements of science; now two physicists may fail to understand each other because they come from different subfields. No one in the world knows how things stand; no one sees even a rough approximation of the whole picture. One can imagine the horrified post of a person who met someone who doesn't even fully know the history of their own planet or the laws of physics.
comment by EniScien · 2022-05-23T13:58:43.506Z · LW(p) · GW(p)
Surely someone has already pointed this out, but I haven't seen it: it seems that humanism follows from science, because the idea of progress shows that everyone can win, there is enough for everyone, life is not a zero-sum game in which, unless you harm someone, you yourself live worse. And the absence of discrimination probably comes from greater consistency in your reasoning: you see that hating a particular group is a completely arbitrary thing, unrelated to anything real, and that you could just as well have hated any other group. You might say you become aware that you cannot call yourself special merely because you are you, since everyone else can think exactly the same, and you have no special reason on your side.
comment by EniScien · 2022-05-22T09:29:15.827Z · LW(p) · GW(p)
It occurred to me that people can root for the protagonist of a book even if he is a villain, because the political instinct to rationalize the rightness of your own tribe's actions gets activated: you root for the main character as you would for your own group.
comment by EniScien · 2024-01-11T11:48:15.723Z · LW(p) · GW(p)
I noticed that some of the names here have really bad connotations (though I'm not saying I know which ones don't, or that any name is free of them).
"LessWrong" reads as "be wrong less often", and one obvious way to be wrong less often is to avoid difficult things; "be less wrong" is not a way to reach any difficult goal (even if different people have different goals).
"Rationality: from A to Z" is even worse: it reads as "a complete professional guide to rationality", when what it actually is is incomplete basic notes on a small piece of rationality, weakly understood by one autodidact.
Replies from: Dagon
comment by EniScien · 2023-05-01T16:26:40.590Z · LW(p) · GW(p)
I've noticed that in everyday life, when you're testing some set of habits to see whether they work for you, it's better to keep even a habit that doesn't seem to be working, because that makes it easier to figure out what is actually doing what: otherwise, if something changes later, you won't be sure whether it was habit one, habit two or habit three.
This reminds me of how I used to put together mod packs: it might seem like a good idea to add all the desired mods at once, but then if some mod turns out to be missing or superfluous, you can't figure out which one. So they should be added and removed one at a time.
With habits it's the same, only harder, because they take effect gradually and much more slowly, and on top of that there are factors outside your control. I used to assume it made no sense to waste time and effort following useless habits.
But since this too is experimentation, it was worth bearing in mind that any change would complicate the analysis of what worked: it's better to keep everything until you find a stable working combination, and then remove habits one at a time as well, in case certain habits only worked in combination.
comment by EniScien · 2023-04-29T12:28:11.408Z · LW(p) · GW(p)
For some reason, nowhere that I heard about programming until recently did anyone explain that object-oriented programming is essentially a reverse projection of the human brain. Everywhere I'd heard about programming before, the message was at best that procedural programming is ugh and object-oriented is cool; no one explained that procedural programming is much closer to the language reality thinks in, and that inventing "objects" is just a crutch for the imperfect monkey brain.
All this came to mind when I noticed that people tend to think of drugs as if a drug were an object with the property of "curing", like a potion in a video game or a literal "potion" from antiquity. People are still extremely prone to think about viruses, bacteria, antibiotics, vaccines and the like this way: not imagining a specific mechanism by which the thing is supposed to work, but just assuming that the drug as a whole will "cure". The same of course goes for poisons, which people unfamiliar with biology think of as having an inherent property to "poison", or, one level deeper, acids as having an inherent property to "dissolve".
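(A deliberately cartoonish sketch of the two styles of thinking; the classes, numbers and the cell-wall detail are all invented for illustration, not real pharmacology.)

```python
class Patient:
    def __init__(self):
        self.healthy = False

class Bacterium:
    def __init__(self, has_cell_wall, population):
        self.has_cell_wall = has_cell_wall
        self.population = population

# "Potion thinking": the drug is an object with an intrinsic property "cures".
def drink_potion(patient):
    patient.healthy = True          # the property just fires; no mechanism anywhere

# Mechanism thinking: a chain of concrete steps, each of which can fail or be checked.
def take_antibiotic(patient, bug):
    if not bug.has_cell_wall:       # this (toy) antibiotic targets cell-wall synthesis
        return                      # wrong mechanism for this pathogen, nothing happens
    bug.population *= 0.001         # reproduction is disrupted, step by step
    if bug.population < 1:
        patient.healthy = True

p, bug = Patient(), Bacterium(has_cell_wall=False, population=1e6)
take_antibiotic(p, bug)
print(p.healthy)                    # False: the "curing" was never a property of the drug itself
```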
And going back to the question of reverse projection of the human mind, it becomes obvious that human language did not come out of nowhere; it is a product of the human brain, so language must also be a projection of its structure, and therefore to work with it properly you need the same kind of reverse projection, "objects" in a broader sense, and, concretely, things like convolutional neural networks.
comment by EniScien · 2023-04-22T20:31:11.436Z · LW(p) · GW(p)
I used to have conflicting thoughts about death. Does death nullify the value of life, because you yourself are destroyed? Or would immortality, on the contrary, because you will eventually achieve everything anyway? The first is false because a person has values other than the state of their own brain. The second has nothing to do with real life, because by "immortality" we really mean "not dying of old age" rather than "living an infinity of years"; you only have, say, a trillion years ahead of you, and that is a very limited time, as is any other finite time. So one should not be an absolutist about it: the value of a life is the period lived times some coefficient of average goodness, and the disvalue of a death is the product of the distribution of expected value and the distribution of expected probabilities. I also wondered: if there are no stupid values, can't a person simply choose death as a value? And... no. The word "terminal" is what's missing. Instrumental values are a different kind of information altogether and can easily be stupid, and a person is clearly not born with a terminal desire to die the way one is born with a desire for sweetness and for the absence of pain. So no, death destroys too much expected positive utility to rationally prefer it to anything. It is another matter that the decision-making mechanisms in the human brain are far from rational, above all time discounting at a fixed rather than probability-based rate, which makes you drastically overvalue what is happening now compared to the future.
comment by EniScien · 2023-04-22T19:15:53.302Z · LW(p) · GW(p)
Again, I'm not sure whether I already wrote this, but when it comes to quantum mechanics, the Born probabilities and why it's a square, the thought that keeps turning in my head is that if you take the square and then invert the operation, one square branches back into two possible answers, positive and negative; in other words, the operation erases exactly one bit of information, the sign bit. If instead you took the number to the first power, you could match each point of the past directly to a point of the future, and then there would be no computing the wave function of the future from the entire function of the past; you would have exactly one future for each past, and it would not be a question of probabilities at all, only of logical inference.
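(The sign-bit remark made concrete for real amplitudes, as a tiny sketch; for complex amplitudes, squaring the modulus forgets a continuous phase rather than a single bit.)

```python
for amplitude in (+0.6, -0.6):
    print(amplitude, amplitude ** 2)   # both map to the same square: the sign bit is gone

p = 0.36
roots = (p ** 0.5, -(p ** 0.5))        # inverting the square gives back two branches
print(roots)                           # the positive and the negative answer
```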
comment by EniScien · 2022-05-27T15:09:19.973Z · LW(p) · GW(p)
This is not the first time it seems to me that Yudkowsky is hinting for us to work something out rather than writing it directly. On the other hand, he seems to write as if he doesn't really know the answer himself. Specifically, in this case he says about qualia that we should ask the question "what makes us think we have qualia?" At first I took that the wrong way, as meaning that qualia are just an illusion. But then I ran a thought experiment about a neural network. Suppose we created a neural network that could answer questions about the picture it recognizes from its cameras: we ask it what it sees in the right corner, it answers, and so on. All its answers would be as if it really SEES the whole picture as qualia, just as people do. As if a retina, or its counterpart, is already enough to have qualia; the only difference is that not every bearer of a retina can give us a detailed report on its contents.
On this basis it seems that qualia are not some property of the universe like a "consciousness field"; they are an inherently logical/mathematical phenomenon, their absence is not possible in any alternate world, it is just confusion/nonsense. Another thought experiment hints at the same thing. Imagine a universe without qualia. Wait, how? There is no observer there, no point of view; no one can ever see what's going on in that universe. And that just seems... wrong. The existence of a universe in which nothing can be observed, even if it is populated by philosophical zombies, just doesn't seem to make sense. In other words, qualia are had, strictly speaking, even by retinal cells; animals have them, neural networks have them, and probably, in a very simplified form, even bacteria and sensors. The ability to speak only lets us "cash them out", to inform other people of their existence, though in fact they were always there, everywhere.
Next comes the question of self-consciousness. It doesn't seem to be some conceptual (epiphenomenal) thing that makes you valuable; it just seems to be the thing that is missing when we dream. I can even imagine people with no self-awareness: the difference would be that they are incapable of self-reflection. Because of this they might even be more efficient workers, not getting distracted, not procrastinating, not hesitating uselessly and so on. They would be able to answer questions about their condition, they could be made to go one level deeper, but they would not do it on their own. Because of this they would be stupider: new thoughts would visit them only in response to external stimuli, they would invent fewer new ideas in a lifetime and get less out of experience, unless someone outside forced them to keep thinking until they say the task seems to be done. Their difference seems to be that they lack the wandering, chaotic signals and closed loops of neurons in the brain that repeatedly run something like backpropagation of error. They might also lack a task stack, be unable to remember something at the right moment or simply to recall it at will; they would always need an alarm clock. Perhaps an inability to hold more than one object in short-term memory. And hence the only reason their emotions would be less valuable is that they are shorter for the same stimuli: no self-reflection loop, just a single pulse of negative reinforcement. There would be no shame lasting a week, just a single unpleasant sensation and a momentary change in behavior, probably to a smaller degree than in repeatedly reflecting beings.
The problem here, as with pain, is that we cannot stop our unpleasant emotions even when we have already learned the lesson from them. Perhaps I'm wrong, or have found some other aspect, given it a description, and it isn't self-consciousness. But what appeals to me about this explanation is that it seems to remove any mystery from the concept of consciousness and make it just another of the understandable properties of the human brain. Another aspect of consciousness may be the ability to separate one's self from the outside world, to distinguish one's own actions from external events. And again, that seems to be one of the functions switched off in sleep: in a dream you don't distinguish between your body moving somewhere, a change in the external environment, and a change in your own thoughts. Lacking it would also hurt your learning and make you like that robot which just paints over the blue pixels instead of trying to take the blue glass off its visor.
The point is that this again strips self-consciousness of its status as an icon. It does not magically draw the line between valued minds and unvalued ones. The difference between a conscious being and an unconscious one would be not like the difference between a person and an iron man, but like the difference between a sober person and a drunk one. We value the latter less because they are less controllable and more dangerous, less intelligent and therefore worse at keeping agreements. It is not a question of sentience versus non-sentience, of value versus no value, but of a Lawful versus a Chaotic worldview (hmm, does Yudkowsky write about this somewhere in mad investor chaos? I haven't finished reading it). It's not that we value them less because of consciousness as such; it's that we can rely on them less, and we prioritize actors we can rely on.
At the same time, judging from human experience, it seems that while we cannot build anything without qualia, we can build something without happiness and unhappiness, without boredom and pain. People sometimes simply don't experience the emotion, don't agonize, and can still learn from their mistakes. This is what we might call conscious thinking: plan-building, plain remembering, not reinforcement mechanisms; cases where we can change our actions simply because we made a decision. But I don't have any worked-out ideas about that yet, except that emotion is basically just a property of neural signals running in a loop.
Also, I don't think there is anything mysterious about the redness of red; it seems to be just the result of starting to reason about the intermediate layers of neurons the way we reason about the input layers, somewhat like the unsolvable question "but is Pluto a planet after all?" So I am a materialist, and accordingly I expect that qualia are not some mental substance but fully physical circuits of neurons. I also predict that with sufficient technology it will be possible to connect one person's neurons to another person's in the right way, and then that person will see someone else's redness and be able to check whether it is the same or different. This is a falsifiable scientific hypothesis: we can test whether they are the same, we can test the hypothesis that the same circuits of neurons, connected to different brains, give the same sensations. This is not just a belief in materialism plus Occam's razor; it is a testable question.
Now, if after trying different neural circuits people say the redness is the same, and philosophers keep insisting that perhaps they see different rednesses even though they report the same one, that would be a multiplication of entities. Finally, the sense in which "Pluto as a planet" or the redness of red is ineffable is that these are internal layers of neurons, not inputs. For the inputs we can point to an external object to change their state, compare, and synchronize; for the internal layers we cannot, just as you cannot explain the inexpressibility of color to a colorblind person or the inexpressibility of vision to a blind one. But these are not fundamental obstacles, only, as I wrote before, a consequence of the fact that people do not have mechanisms for serializing objects and for consciously, reflectively reprogramming their own source code / neural circuits. If we did, you could simply look into your brain, find out exactly how your neurons are connected, turn the territory of your neurons into a map of them, describe that map in language, and have another person turn it back into territory by getting their brain to form new neurons according to the description. After that you could, without any "non-defeatability", let a color-blind person see color whenever they wanted, even if their eyes have no cones and they cannot detect color by sight. You could likewise describe to a blind person what neurons you feel your visual cortex has, so that they would start forming the same combination in their own brain, and then tell them which neurons to activate to feel what you feel, even if they cannot tell on their own what the picture really is. (I have a concern that this might be a dangerous topic to discuss, but it seems to be discussed elsewhere on LessWrong without problems, including by Yudkowsky himself when he dispels the mystery/black box around various abilities of the brain, so I just won't write specifically about ways this could be abused.) (This turned out longer than I expected; I don't know whether I should turn it into a full top-level post.)
comment by EniScien · 2022-05-26T13:32:00.696Z · LW(p) · GW(p)
It's hard to articulate why I so dislike views that vary depending on which family you were born into (religion, nationalism, patriotism, etc.). It feels like all of these at once: priors of one, the fallacy of privileging an arbitrary hypothesis, a lack of stability of your views under self-change as in AI alignment, floating beliefs, things that are not truly part of you, unrecoverable knowledge instead of a knowledge generator. And it seems these points really are connected, which is why Yudkowsky wrote about all of them when trying to teach how to build safe AI. As Yudkowsky put it in one of the Sequences, roughly: "I'm sorry, someone the same as me, just grown up in a different environment, and not a monster at all, and now we are forced to be enemies." I don't like the idea that I would be at war with myself from a neighboring universe simply because I was born into a Catholic family and he into a Protestant one. Even before LessWrong I disliked beliefs so unstable that, had you been born in another country, you could never have reliably come to them; seriously, this supposedly important belief of yours rests on so flimsy a fact as where you were born? There is no foundation of justification under it, as there is under science; it's just a random thing. Again: unrecoverable knowledge, a floating conviction, something you cannot convey to anyone through arguments. Though I don't know why the same wouldn't apply to, say, tastes. It probably doesn't even apply, because I'm not saying my tastes matter; they're just a random fact about me. I won't try to prove to anyone that my tastes are better than everyone else's, because they aren't: that's not truth, just preference. And it's not that I would greatly resist changing them, it's just that no other set would be better; it's like my reluctance to lose my memory, because it is a part of me.
Replies from: Dagon, EniScien↑ comment by Dagon · 2022-05-26T14:30:59.828Z · LW(p) · GW(p)
Do you dislike the meta-view that an individual cares about their family more than they care about distant strangers? The specific family varies based on accident of birth, but the general assumption is close to universal.
How many of the views you dislike are of that form, and why are they different?
Replies from: EniScien↑ comment by EniScien · 2022-05-26T17:21:52.858Z · LW(p) · GW(p)
I didn't quite understand what you mean. Family isn't entirely what that post was about; it is usually treated somewhat more logically, and the post was more about beliefs than about duty. I am ready to repay a debt to my family, or even to the state, but only for what they actually did well, and partly in proportion to what it cost them, not simply for the fact of my birth. "Honor your father" clearly doesn't deserve a place in the twenty commandments, because I was lucky, but someone else's father might beat them. Likewise, your friends are not only useful tools; you can also be grateful to them for what they did in the past. But no unjustified, unconditional love. Somewhere around here, I think by Scott Alexander, there was a story where a person wakes up in a virtual-reality capsule in a world without families, having failed an exam for excessive conformity. It largely reflects my views. It might make sense to prefer one's home country/family/gender/race/species, all else being equal, but obviously not when the other option gives MORE (if expressed in numbers, the threshold is certainly not 0.01%, but something more like 3%).
Replies from: EniScien↑ comment by EniScien · 2023-04-22T15:45:44.722Z · LW(p) · GW(p)
More specifically, what I mean is that I find it extremely pointless to make something a moral value, a duty, rather than a preference value like taste, if that attitude varies by region of birth. Which can probably be expressed as something like: "I think it is a mistake to put anything on the list of moral duties other than the direct conclusions of game theory." Or you could say I believe that interpersonal relationships should not be regulated by someone's personal preferences, only by ways of finding a game strategy that achieves the maximum total. Which is perhaps just a longer and more pompous way of saying "do not impose your preferences on others", or "the moral good for another person should be determined by preference utilitarianism".
↑ comment by EniScien · 2025-02-17T01:47:30.242Z · LW(p) · GW(p)
Self review 1
Now I suspect that I was quite wrongly projecting my own feelings onto that reasoning.
Maybe I am just less collectivist, more individualist, than most people.
Or maybe the point is that people don't actually like their country of birth the most (the way, say, I like my species of birth the most), but are rather On The Side Of Blue Country vs Red, and say obviously wrong things, everyone going "MY country is the best". And, if I recall correctly, even as a child, when I heard things like "mom, you're the best in the world", I would mentally go "oh really, by which qualities? And did you try all the moms in the world before settling on the current one, or only most of them?" (Really, what is people's problem with making compliments without lies? "Mom, I love you very much" is shorter.)
And probably also because they then go to war over these differences in ice-cream-flavor preferences.
comment by EniScien · 2022-06-02T13:54:42.713Z · LW(p) · GW(p)
It occurred to me that the essence of the process of computation is optimization, the removal of unnecessary information, since "transforming the form" is not the answer. Not only when you reduce a fraction, but also when you find the result of an equation: paradoxically, the original equation carries more information, because it lets you compute all the other results as well. Does this mean that computation is just "entropy"? That would be "the answer to all the questions of the universe", which is why it looks wrong, too much of an idée fixe. But it would explain why in the world of mathematics these things are the same, while in our world they are not: it's all about the presence of entropy and the passage of time.
Replies from: lahwran↑ comment by the gears to ascension (lahwran) · 2022-06-07T10:20:07.609Z · LW(p) · GW(p)
ooh this was starting to make sense at the beginning and then didn't - I was getting excited at the first line though. seems like if I had to guess, you're working on integrating concepts - try rephrasing into more formal terminology perhaps? I feel like if this is anything like how I think, you may have made reasonable jumps but not showed enough of your mental work for me to follow. calculation process of what? what do you refer to with "shape transformation"? what is it not the answer to? what fraction? result of what equation? etc etc.
Replies from: EniScien, EniScien↑ comment by EniScien · 2023-04-29T12:26:40.904Z · LW(p) · GW(p)
Or, if I haven't written it down anywhere else: it occurred to me that since we live inside a Tegmark mathematical universe, the universe is just a giant formula and the step-by-step process of solving it, where each next part after an equals sign is the next moment in time, and the value of the expression itself, what is preserved between the equals signs, is energy. A superposition is the subtraction inside the parentheses, which at each step attaches another factor to both parts of the difference, and the different Everett branches are those same two halves of each difference, just with the parentheses opened.
Well, now it's not that I think this is wrong; rather the opposite, it's too obvious, and therefore useless.
Besides, it can also be put in another form, at the intersection of computer science and quantum mechanics: the universe, or rather all the universes of mathematics, is a bit string that diverges into Everett branches. At the first digit you have 2 branches, for 0 and 1; at the second you already have 4, at the third 8, and so on. Each branch is an ordinary particular bit string; with every branching the amount of information in that string grows, that is, entropy grows, and this direction of entropy growth within each individual branch is time.
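(The branching-bit-string picture enumerated directly, as a toy sketch: at depth n there are 2^n branches, and writing down any one particular branch takes n bits, which is the "entropy grows along each branch" part.)

```python
from itertools import product

for n in range(1, 4):
    branches = ["".join(bits) for bits in product("01", repeat=n)]
    print(n, len(branches), branches)
# depth 1: 2 branches, depth 2: 4, depth 3: 8, ...
# each single branch needs n bits to specify, while the ensemble of
# all strings of all lengths needs no bits at all to single anything out.
```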
The law of conservation of energy, or more broadly the law of conservation common to all of mathematics, is that at every step, for each 0 there is also a 1 and vice versa: every time you split into the two bit options, the opposite option is there as well, so the total entropy of all mathematics is in a certain sense zero. Seen from the inside it is infinite, but to write the whole thing down you don't need a formula of any length; zero length is enough.
So from the inside mathematics is infinite, but from the outside it adds up to zero. That is more or less the answer to the question "why is there something rather than nothing?", and the answer is that "something" is a piece of "everything", and "everything" is what nothing looks like from the inside.
I came up with this myself, but later I also saw someone else's formulation of it: for every number on the mathematical plane there is an inverse number, so even though mathematics contains infinite information, it all sums to zero in the end, hence the law of conservation of energy.
As far as I know, it is well known among physicists, as opposed to laypeople, that energy is a conventional quantity: the energy of a part of a system can exceed the energy of the whole system, since energy can also be negative, as negative as you like, so what we take to be zero is only a convenient reference point.
↑ comment by EniScien · 2023-04-22T14:16:01.569Z · LW(p) · GW(p)
I seem to have a better understanding of timeless physics since then. To state the regularity I had in mind more clearly: when we look at a book from outside, we can open it at any point of the book's internal time, or at all of them at once, and there is no answer to the question "what day is it now in Middle-earth?", because our timeline simply isn't connected to that one. And when we look at any mathematical object, its timeline, like the book's, is likewise not connected to ours, which is why such objects look timeless. That is, there isn't one timeline for the entire universe; there are many timelines, and we are inside our own, but not inside the timeline of some object like a book. You could also say that where one usually says things like "we see the passage of time when entropy grows" and "entropy is what grows with time", timeless physics reduces time to entropy: you link into a timeline those fragments of mathematical description between which there is the least mutual entropy. This model of time also implies that besides the standard linear timeline there should be all the non-standard ones, like the various kinds of Everett branches, past, future and parallel, various kinds of time loops, rings and spirals, and so on. And this can be called computation, because computation leads to a change of form, another piece of information, and between it and the original there will always be some mutual entropy. In short, it seems I still haven't managed it: although I myself understand what I meant back then, I can see I expressed it extremely unclearly. To answer with a question: what does "working on integrating concepts" mean? I don't understand what that expression refers to.
comment by EniScien · 2023-05-26T15:20:11.953Z · LW(p) · GW(p)
I must say, I wonder why I haven't seen speed reading and visual thinking mentioned here as among the most important tips for practical rationality. A visual image is 2+1 dimensional while an auditory image is 0+1 dimensional, and auditory images rely on sequential thinking, which people are very bad at, whereas visual thinking is parallel. According to Wikipedia, switching from voicing the words to visual reading should speed you up about five times, and in the same way visual thinking should be about five times faster than verbal thinking; if you can read and think through five times more thoughts in a lifetime, that is an incredible difference in productivity.
The same applies to using visual imagination instead of inner speech; there you can use pictures as well. (I don't know, maybe all of this was in Korzybski's books and my problem is that I haven't read them, though I certainly should have.)