EniScien's Shortform

post by EniScien · 2022-02-06T06:29:52.071Z · LW · GW · 78 comments

Comments sorted by top scores.

comment by EniScien · 2023-05-08T02:14:50.087Z · LW(p) · GW(p)

In HPMOR, Draco Malfoy thinks that either Harry Potter was lucky enough to come up with a bunch of great ideas in a short period of time, or he has, for some unimaginable reason, already spent a lot of time thinking about how to do it. The real answer to this false dilemma is that Harry simply read a book as a kid whose author had come up with all of these ideas for the book's needs.

In How to Seem (and Be) Deep, Eliezer Yudkowsky says that the Japanese often portray Christians as bearers of incredible wisdom, while in the West the same is true of the "eastern sage" archetype. The real answer is that the two cultures have vastly different, yet meaningful, sets of ideas, so when one person meets another who immediately throws three meaningful and highly plausible thoughts at him that he has never even heard of, and then does so again and again, he concludes that the other is a genius.

I've also seen a number of books and fanfics whose authors seemed like incredible writing talents and whose characters seemed like geniuses, fountaining brilliant ideas. And then each time it turned out that they really just drew on a cultural background that was unfamiliar to me. I generalized this to the point that when you meet someone who spouts a string of brilliant ideas, you should conclude that it's almost certainly not that he's a genius, but that he's familiar with a memeplex you're unfamiliar with.

And hmm, only just now did I think of it, but this probably also explains that Aura of Power around characters who are familiar with a certain medium, and around people who are familiar with a certain profession (https://www.lesswrong.com/posts/zbSsSwEfdEuaqCRmz/eniscien-s-shortform?commentId=dMxfcMMteKqM33zSa [LW(p) · GW(p)]). That is probably the point, and it means the feeling is not false: such a person really is elevated above mere mortals, because he has a whole pile of meaningful thoughts that mere mortals simply do not have.

When someone familiar with more memeplexes ponders, he stands on a bigger pile of cached thoughts, not just on the shoulders of giants but on the shoulders of a whole human pyramid of giants, so he can see much further than someone who looks out only from his own height, however big or small.

comment by EniScien · 2023-04-21T22:28:46.685Z · LW(p) · GW(p)

I noticed here that Eliezer Yudkowsky says in his essays (I don't remember exactly which ones; it would be nice to add names and links in the comments) that the map has many "levels", while the territory has only one. However, this terminology is itself misleading, because these are not really "levels", they are "scales". And from this point of view it is quite obvious that scale is purely a property of the map: the territory doesn't have just one scale, the smallest one, and you can't even say that it has all the scales in one; it simply has no scale.

Scale is a degree of approximation, like distance is for photographs. Different photographs can be taken from different distances, but the object is not the closest photograph, nor all of them put together; it is simply NOT a photograph, and it has no scale, distance or degree of approximation. All those categories refer to the relationship between the subject and the photograph at the moment of shooting, but the subject never photographed itself, so there were no shooting distances.

Talking about levels makes it feel like there could very well be many levels that just happen not to exist, whereas when talking about scale it's obvious that the territory is not a map: there is no scale, just as there is no cross marking your current location or icons for points of interest. And scale fits perfectly into the analogy of the map and the territory.

Replies from: ChristianKl
comment by ChristianKl · 2023-04-22T19:40:46.393Z · LW(p) · GW(p)

"Map isn't the territory" comes out of Science and Sanity from Alfred Korzybski. Korzybski speaks about levels of abstraction.

In the photography case, there's the subject, then there's light going from the subject to the camera (which depends on the lighting conditions), then the camera sensor translates that light into raw data. That raw data then might be translated into a png file in some color space. These days, the user might then add an AI based filter to change the image. Finally, that file then gets displayed on a given screen to the user.

All those are different levels of abstraction. The fact that you might take your photo from different distances and thus at a different scale is a separate matter.

Replies from: EniScien
comment by EniScien · 2023-04-22T20:15:22.108Z · LW(p) · GW(p)

But does Yudkowsky mention the word "abstraction"? Because if not, then it is not clear why "levels". And even if he does, then, as with scale, I don't really understand why people would even think that different levels of abstraction exist in the territory.

Edited: I've searched Reductionism 101 and Physicalism 201 and didn't find any mention of "abstraction", so I maintain my opinion that the word "level" alone doesn't create the right picture in the head.

Replies from: lahwran, ChristianKl
comment by the gears to ascension (lahwran) · 2023-04-23T00:27:15.031Z · LW(p) · GW(p)

for one major way scale is in the territory, search for "more is different".

comment by ChristianKl · 2023-04-22T22:42:22.387Z · LW(p) · GW(p)

The main issue is that people often make mistakes that come from treating maps as if they have only one level.

Yudkowsky doesn't go much into the details of levels, but I don't think "scale" gives a better intuition. It doesn't help with noticing abstraction. "Level" might not help you fully, but "scale" doesn't either.

comment by EniScien · 2022-05-19T12:18:00.385Z · LW(p) · GW(p)

Some time ago I noticed that the concepts of fairness and fair competition were breaking down in my head, just as the concept of free will once broke down. All three are not merely wrong, they are meaningless: if you go into enough detail, you cannot explain how they are supposed to work even in principle. There is only determinism and chance, only upbringing and genetics; there is simply no place for free will. And from this it follows that there is also no place for fair punishment or fair competition, because your actions and achievements are either the result of heredity or the result of the environment and society. The concept of punishment turns out to be fundamentally wrong, meaningless: you can't give a person what he deserves in some metaphysical sense. Maybe it's my upbringing, or maybe people in general tend to think of moral systems as objectively existing. But in fact you can only influence a person with positive and negative measures to achieve the desired behavior, including socially useful behavior. As was noted in one of the sequences, moral correctness is only relative to someone's beliefs; it is not a property of an act but of your act of evaluating it. And that seems to be the only mention of such questions in the LessWrong sequences. For some reason there is a sequence about free will, but not about fair punishment and fair competition. Perhaps there are materials on some third-party sites? Because I was completely unprepared for my ideas of justice falling apart in my hands.

Replies from: Dagon
comment by Dagon · 2022-05-19T19:20:49.291Z · LW(p) · GW(p)

Be careful with thinking a phenomenon is meaningless or nonexistent just because it's an abstraction over an insanely complex underlying reality.  Even if you're unsure of the mechanism, and/or can't calculate how it works in detail, you're probably best off with a decision theory that includes some amount of volition and intent.  And moral systems IMO don't have an objective real cause, but they can still carry a whole lot of power as coordination points and shared expectations for groups of humans.

Replies from: EniScien
comment by EniScien · 2023-04-22T16:11:31.093Z · LW(p) · GW(p)

It seems you didn't understand that my main problem is that, every time I think about it, I run into the fact that, within a better understanding of the world, the justifications for why competitions are good no longer make any sense. It is as if you had lived well all your life because you were afraid of hell, and now the old justifications for living well make no sense, because you have learned that hell does not exist; now it is not clear what the point of behaving well is, and whether it is worth continuing to honor your father, not to steal other people's things and other people's slaves, to stone women for betraying their husbands, and to burn witches. Maybe these rules make sense, but I don't know whether they do, and if so, which one.

I mean, I wondered what role, say, sports olympiads play, but my only answer was that they force athletes to overstrain their bodies or take doping, and spend a lot of money on extremely expensive entertainment that could instead go toward making better films or, say, a scientific race of "who will be the first to invent a cure for aging".

Well, I've recently been thinking about how to organize a state according to the principle of "look at the economic incentives", and I think I finally understand what sense competition can sometimes have: those incentives. Contests are one type of competition, so they not only give someone an incentive to move toward a certain goal, they can also create an arms-race situation where you need not only to achieve the goal but to do it faster or better than the other participants. However, the key word is "goal". In sports olympiads the goal clearly makes no sense, as with beauty contests; in science olympiads, with a better goal, they let you identify the smartest participants for further use of their intellect, which makes no sense in sports olympiads, because machines have long been faster, stronger and more resilient than humans, and are becoming more agile right before our eyes; though local sports competitions, with the better goal of stimulating people to move more, do make sense.

comment by EniScien · 2022-03-03T14:26:07.880Z · LW(p) · GW(p)

Despite the fact that this is only an "outward attribute of a smart character" and not something rational, I calculated that if you study for 15 minutes a day (one 5-minute lesson in the morning, at lunch and in the evening), you can learn a language in 5 years, which is 12 languages in a lifetime, something usually perceived as incredibly big, as something only a polyglot genius can do. Yes, given the development of AI, languages are far from necessary, but it seems that constantly learning something new develops the brain and postpones Alzheimer's, and it is a good way to practice consistency. Also, the feeling of being able to understand many different languages, find relationships between them and so on is simply pleasant. It's like asking why not every mage in the Potterverse reaches the level of Professor Quirrell in old age if it's just a matter of training: for exactly the same reason that not every person who has a smartphone now knows at least three languages. People don't value it and make excuses that it's not that important, although it's clearly better to know three languages than one. Studying for five minutes is not difficult at all, and learning a language is almost always more rewarding than what the person would otherwise be doing. And yes, I myself have been studying languages for 295 days without a single missed day; the main thing was to break the process into several stages which, as it were, insure each other, while making each particular lesson less difficult. And I managed to raise my German from complete ignorance to approximate understanding.
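
(A rough back-of-the-envelope check of the arithmetic above, as a minimal sketch; the 15 minutes/day and 5 years per language figures are the comment's own, and the 60-year study span is an added assumption:)

```python
# Rough arithmetic behind "15 minutes a day -> a language in 5 years -> 12 in a lifetime".
minutes_per_day = 15
years_per_language = 5

hours_per_language = minutes_per_day * 365 * years_per_language / 60
print(hours_per_language)                   # ~456 hours of total study per language

active_years = 60                           # assumed span of adult study years
print(active_years // years_per_language)   # 12 languages
```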

Replies from: Pattern, EniScien
comment by Pattern · 2022-03-04T21:18:29.070Z · LW(p) · GW(p)
and not something rational,

'Spending time well' is optimal.

Replies from: EniScien
comment by EniScien · 2022-03-05T15:24:21.209Z · LW(p) · GW(p)

Maybe. I just thought about how LessWrong can avoid simply turning into a standard site of "ten tips to improve your life"; on the other hand, posing the question explicitly gives at least two answers: send ready-made advice to other sites, and here give non-standard / new tips and talk about things no one has noticed yet.

Replies from: Pattern
comment by Pattern · 2022-03-07T18:42:18.316Z · LW(p) · GW(p)
just turn into a standard site with “ten tips on how to improve your life”,

The obvious upgrade is having 'life improvement' be more experimental. People try things out and see how well they work, etc. (I'm not saying there has to be 'peer review' but 'peer experimentation' does seem like a good idea.)

Another approach is grouping things together, and having tools so that those things can be filtered out. (This won't be perfect because short form 'posts' don't have tags.) Also issues around content sticking around, versus floating down the river of time.

comment by EniScien · 2023-04-22T16:41:50.242Z · LW(p) · GW(p)

I must say, I made the mistake of thinking that it was enough to form a habit to get a result. At the time I was interested, so I didn't notice that interest was required here. But now I realize that only crystallized intelligence, not fluid intelligence, can be turned into a habit. Learning anything, including languages, is not the activation of already learned neural connections but the creation of new ones; this requires straining the intellect and cannot be done purely out of habit. Some kind of emotional motivation is needed, for example interest. So now I don't study any one particular language, but switch between English, German, Greek, Latin and Italian as my interest fades and grows, in order to keep it constantly high for at least one language.

comment by EniScien · 2022-05-25T13:40:47.893Z · LW(p) · GW(p)

Some time ago I was surprised to find that narrow professional skills can significantly change your thinking, and not just give you new abilities in your specialty. They change you, not just allow you to cast a new spell. Something like not scientific but professional rationality, and definitely more than just the ability to fold origami. (Specifically, this happened to me with the principles of good code, including object-oriented programming.) I had never heard of this before, including here on LessWrong. But it makes me think that the virtue of scholarship is more than being able to look at cryonics without prejudice. It seems that the virtue of scholarship can itself change your thinking (in a positive way?). The ability to see where reality is tightly woven? Perhaps I should have noticed earlier that, for example, characters who specialize in politics (Malfoy), tropes (Hiro) and other things look clearly cooler than if they had no specialization and were ordinary townsfolk. It seems that even narrow skills, and not some special leveling of wisdom, give you experience points. Although perhaps the point was that I did not think this was really applicable in real life. And I'm still not sure whether it really improves you, or just creates a fake aura of coolness.

comment by EniScien · 2023-12-26T12:46:59.729Z · LW(p) · GW(p)

There are no common words for upvote/downvote in Russian, so I just said like/dislike. And that was really a mistake: these are two genuinely different types of positive/negative marks, agree/disagree is a third type, and there may be any number of other types. But since I called it like/dislike, I thought of it as expressing how much I liked something, as feedback to the author, rather than just adjusting the sorting, as in "do I want to see more posts like this higher in the suggestions".

Actually, this looks to me like part of a more general tendency in my behaviour to avoid finding subtle differences between things and, especially, between terms. Probably I had seen people trying to find differences between colloquial terms that are not strictly defined and then argue from those differences; I was annoyed by that, and the annoyance pushed me to avoid finding subtle differences between terms. Or maybe it is because we were taught that synonyms are words with the same meaning, rather than with close meanings (or "equal or close meanings"), and were not shown that there are differences in connotation. Or maybe the first was because of the second. Or maybe it was because I used programming languages too much instead of natural languages when I was only 8. In any case, I probably now need to develop a habit, working automatically 24/7, of searching for and noticing subtle differences.

Replies from: Dagon
comment by Dagon · 2023-12-26T17:24:28.472Z · LW(p) · GW(p)

Word use, especially short phrases with a LOT of contextual content, is fascinating.  I often think the ambiguity is strategic, a sort of motte-and-bailey to smuggle in implications without actually saying them.

"like" vs "upvote" is a great example.  The ambiguity is whether you like that the post/comment was made, vs whether you like the thing that the post/comment references.  Either word could be ambiguous in that way, but "upvote" is a little clearer that you think the post is "good enough to win (something)", vs "like" is just a personal opinion about your interests.

comment by EniScien · 2023-05-03T18:29:28.906Z · LW(p) · GW(p)

I've read, including on LessWrong (https://www.lesswrong.com/posts/34Tu4SCK5r5Asdrn3/unteachable-excellence [LW · GW]), that listening to those who failed is often more useful than listening to those who succeeded, but I seem to have missed whether there was an explanation anywhere of why. The thing is, there are 1000 ways to be wrong and only 1 way to do something right, so a story about success should be 1000 times longer than a story about failure, because for the latter it is enough to describe one fatal mistake, while for the former you have to explain how you avoided the whole thousand.

In practice, however, stories of failure and stories of success are likely to be about the same length, since people will take note of roughly the same number of factors. In the end you will still have to read 1,000 stories either way, whether of success or of failure, except that success happens 1,000 times less often and the stories about it will be just as short.

Replies from: Raemon, Dagon
comment by Raemon · 2023-05-04T01:36:54.133Z · LW(p) · GW(p)

fwiw I don't think I've heard this particular heuristic from LessWrong. Do you have a link for a place this seemed implied?

I think there's a particular warning flag about "selection effects from successes" (i.e. sometimes a person who succeeded just did so through luck). So, like, watch out for that. But I don't remember hearing a generalized claim about learning more from failure than from success.

comment by Dagon · 2023-05-04T01:06:54.589Z · LW(p) · GW(p)

In truth, listen to everybody. But recognize that different stories have different filters and distortions. Neither success nor failure storytellers actually understand the complexity of why things worked or didn’t - they will each have a biased and limited view.

comment by EniScien · 2022-05-31T11:10:59.138Z · LW(p) · GW(p)

Somewhere in the comments it was written that rationality is not like building a house, where the main thing is to lay a good foundation; no, it's more like trying to patch a hole in a sinking ship. In my opinion, this should be included in the collection of golden quotes.

Replies from: EniScien
comment by EniScien · 2023-04-22T14:22:02.131Z · LW(p) · GW(p)

This phrase can also be seen as a projection of the concept of civilizational inadequate equilibria onto a particular person or decision-making process. In other words, any attempt at better self-modification runs into self-reference, trying to test each of its elements with each of its elements; you can test an external object without prejudice, but you cannot test yourself that way, because prejudices can hide themselves.

comment by EniScien · 2022-05-29T17:31:13.943Z · LW(p) · GW(p)

I am extremely dissatisfied with the phrase "he lives in another world" as a way of saying that someone disagrees with you, because we all live in the same world. A better option is "he sees the world as different (perhaps falsely)": exactly so, because "he sees the world differently / in a different way" has connotations of being just an opinion to which everyone is entitled, and "he sees a different world" again creates the feeling that there are other worlds in which some people may not live but at which they look exclusively.

The same goes for glasses of perception. Telling people to "take off your wrong glasses" is useless because, first, if they just take off their glasses they won't see anything: people are born thoroughly nearsighted, as you can see from babies, and besides, human eyes have a blind spot and other defects that need to be corrected with glasses; secondly, they see that their glasses do not distort the normal / ordinary picture of the world at all, while ours do distort reality. The glasses metaphor is already known here and there, has no wrong connotations, and is quite intuitive. In addition, everyone now knows about visual illusions, so it will be clear to everyone that we can literally SEE the world in different ways, see things differently than they really are. This metaphor may look obvious, but for a long time it did not occur to me in the correct formulation; it would be worth developing it and making it common.

"Point of view" is a fixed expression, but it does not imply that a point of view can be wrong, either literally or figuratively. Still, the device with worlds can also be effective: "they believe that they live in another world."

comment by EniScien · 2022-05-21T14:41:07.278Z · LW(p) · GW(p)

Perhaps this has already been noticed somewhere on LessWrong, but judging by how much space is not occupied by life and how many useless particles there are in physics, it seems that our universe is just one of the random variants in which intelligence appears somewhere so that the universe can be observed at all. And how different this is from a universe specially designed for life: more than one planet would not even be needed, just 100 meters of the Earth's crust would be enough. Which is how primitive people actually imagined it, until science appeared and religion began to lay claim to irrefutability. It becomes ridiculously obvious once you understand it. P.S. This comment and its main post look relevant, but I had never seen them before: https://www.lesswrong.com/posts/EKu66pFKDHFYPaZ6q/the-hero-with-a-thousand-chances?commentId=LYq9pGpMmKBDP3YR6 [LW(p) · GW(p)]

comment by EniScien · 2023-04-24T00:27:56.831Z · LW(p) · GW(p)

One of my most significant problems is that I do not like to read, although it is generally believed that all "smart" people must adore it. Accordingly, to overcome my dislike of reading, a book has to be very interesting to me, and such books are rare and hard for me to find (I was thinking it would be nice to have some kind of recommendation service based on your previous ratings, like the ones that exist for movies, but I don't know of one for books).

Another problem follows from this. I didn't read a pile of science fiction and fantasy books as a kid; I didn't read Gödel, Escher, Bach, Step into the Future, Influence: Science and Practice, Science and Sanity, Probability Theory: The Logic of Science, the Feynman Lectures on Physics, and so on. And I feel like I'm missing much of what is obvious to "people of our cluster" because of this.

That is, the sequences were written to point out non-obvious things that hadn't already been said by someone else and weren't known to everyone. And I don't know whether there is a list somewhere of the most significant books that predate the sequences, so that a person from outside this cluster would at least know which way to look; otherwise it's hard to identify the things that are obvious background facts to some people but entirely unknown to others.


In a broader sense, you could say that I was making the mistake of looking too little at other people's experiences. I didn't realize that no one person is capable of inventing all the wisdom of a tribe on their own. And this manifested itself everywhere, in every field, in programming, in music, in writing books, in creating languages, in everything.

Probably one is related to the other: I didn't read a bunch of books, so I didn't see how much higher than my own intelligence other people's accumulated knowledge could be. So intuitively, without realizing it, I proceeded as if they were equal, as if I were competing with a single other person, without even considering the obvious fact that there are simply many more other people, let alone what can be achieved by investing more time, summing up the experience of many generations, and using collaborative work, practical experiments and computer calculations.


Actually, I didn't put it quite accurately, though. Yes, I don't adore reading, but, say, I have no problem reading blog posts (e.g. here on LessWrong) or even just Wikipedia pages; there I rather have the opposite problem: opening one article and following the hypertext links that interest me, I can sit and read for hours.

So it's probably more accurate to say that I have a "clip thinking" problem, except... I have no problem watching hours of lectures, reading 4-hour podcast transcripts, or listening to hours of audiobooks.

So it's probably specifically "reading books" that is the problem. Perhaps I associate them with school: with the numbingly boring reading of history textbooks, when my brain literally stops taking anything in, or with literature classes, including the assigned summer reading, of which, in all my years of school, I can remember exactly one work that seemed interesting to me. I'm not sure what exactly the problem is.

Replies from: r
comment by RomanHauksson (r) · 2023-04-24T04:26:33.269Z · LW(p) · GW(p)

LibraryThing has a great book recommendation feature.

comment by EniScien · 2023-04-22T17:49:51.782Z · LW(p) · GW(p)

I was thinking about the phenomenon of tearing down monuments to great people because they were slave owners, and I formulated why I do not like this trend. The point is that the past will look extremely terrible from the point of view of the future in any case. That is, virtually anyone in the past has done at least one absolutely terrible thing by today's standards. If the trend continues, it is very likely that in the future you will be considered a monster for reasons that seem strange from today's point of view: you did not cryopreserve chimpanzees, you ate the meat of dead animals, you buried your mother's brain in the ground to rot and be eaten by worms, you didn't sign up to donate your organs for transplant in case you died, you didn't donate to malaria nets, you agreed to drive a car, you didn't have biological children, you were smarter than average, or you got bad grades. And these are just things that are already discussed today; in fact it may well be something completely unexpected, something you could not even suspect was bad. One can give other examples of the horrors of the past besides slavery: Bayes, Newton and a bunch of other scientists were religious, and since religion is still very common today, the entire previous list can already be applied to the scientists of the past, if you agree with the specific points. Based on this, it is obvious that achievements should be valued independently of today's moral assessment.

comment by EniScien · 2024-02-14T07:58:00.581Z · LW(p) · GW(p)

I have seen that a lot of people are confused by the question "what does Yudkowsky mean by this difference between deep causes and surface analogies?". I didn't have this problem; I had an interpretation of what he means right away.

I took it as the difference between deep and surface with respect to the black-box metaphor: the difference between searching for correlations between similar inputs and outputs, versus building a structure of hidden nodes, checking its predictions, rewarding the correct ones, and penalizing by the complexity of the internal structure.

The difference between making a single step from inputs to outputs and having a model; between looking only at visible things and thinking about invisible ones; between looking only at experimental results and building theories from them.

Just as with the difference between deep neural networks and neural networks with no hidden layers, the former are much more powerful.

I am really unsure that this is right, because if it were so, why didn't he just say that? But I am writing it here just in case.
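
(As a minimal illustration of that last analogy, my own sketch rather than anything from the original post: the classic XOR example, where no zero-hidden-layer network fits the data, but a tiny hidden layer with hand-picked weights does.)

```python
import numpy as np

# XOR: the standard example of the gap between "no hidden layer" and "one hidden layer".
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])

# With no hidden layer the network is just a linear threshold on the inputs;
# a brute-force sweep over weights never reproduces XOR (it isn't linearly separable).
def linear_predict(w, b):
    return (X @ w + b > 0).astype(int)

grid = np.linspace(-2, 2, 9)
solvable = any(
    np.array_equal(linear_predict(np.array([w1, w2]), b), y)
    for w1 in grid for w2 in grid for b in grid
)
print("solvable with no hidden layer:", solvable)  # False

# One hidden layer with two units (an OR detector and an AND detector) represents XOR exactly.
def mlp_predict(x):
    h = (x @ np.array([[1, 1], [1, 1]]) + np.array([-0.5, -1.5]) > 0).astype(int)
    return int(h @ np.array([1, -1]) > 0.5)   # "OR and not AND"

print("hidden-layer outputs:", [mlp_predict(x) for x in X])  # [0, 1, 1, 0]
```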

comment by EniScien · 2023-06-03T15:33:41.396Z · LW(p) · GW(p)

Does the LessWrong site use a password strength check like the one Yudkowsky talks about (I don't remember which post that was)? And if not, why not? It doesn't seem particularly difficult to hook this up to a dictionary or something. Or is it not considered worth implementing because there is sign-in via Google?
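
(For concreteness, a minimal sketch of the kind of dictionary check meant here; the word list and thresholds are made-up placeholders, not anything LessWrong actually uses:)

```python
# Naive password check: reject very short passwords and ones built around a common dictionary word.
COMMON_WORDS = {"password", "qwerty", "dragon", "letmein", "monkey"}  # placeholder list

def password_ok(pw: str, min_length: int = 10) -> bool:
    if len(pw) < min_length:
        return False
    lowered = pw.lower()
    return not any(word in lowered for word in COMMON_WORDS)

print(password_ok("dragon123"))                      # False: too short, contains a dictionary word
print(password_ok("correct horse battery staple"))   # True
```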

comment by EniScien · 2023-06-03T15:06:45.876Z · LW(p) · GW(p)

Hmm. Judging by a quick look, it feels like I'm the only one who has enabled reactions on their shortform. I wonder why?

comment by EniScien · 2023-06-03T15:03:58.737Z · LW(p) · GW(p)

It occurred to me that LessWrong doesn't seem to divide posts, in its scoring, into those you want to promote as relevant right now and those you think will be useful over the years. If there were such a score... or such a reaction, then you could get a list not of posts by karma, which would include ones that were needed only at some particular moment, but a list of those that people find useful beyond their moment in time.

That is, a short-term post might be well-written and genuinely needed for discussion at the time, rather than just reporting news, so there would be no reason to lower its karma, but it would be immediately obvious that it is not something to keep forever. In some ways, introducing such a system would make things easier for Best Of. I also remember that when choosing which of the sequences to include in the book, there were a number of ratings on scales other than karma. These could also be added as reactions, so that such scores could be given independently.

comment by EniScien · 2023-05-26T16:17:07.140Z · LW(p) · GW(p)

On the one hand, I really like that on LessWrong, unlike other platforms, everything unproductive is voted down. But on the other hand, when you try to publish something yourself, it looks like a hell of a black box that hands out positive and negative reinforcement for no discernible reason.

This completely chaotic reward system seems to be bad for my tendency to post anything at all on LessWrong. Just in the last few weeks that I've been using Evernote, it has counted 400 notes, and by a quick count I have about 1500 notes lying in Google Keep, while on LessWrong I have published only about 70 things over the past year, that is, 6-20 times fewer, even though by Evernote's estimate ~97% of these notes belong to the "thoughts" category and not to something like shopping lists.

I tried literally following the one piece of advice given to me here and treating any score under ±5 as noise, but that didn't remove the effect. I don't even know; since the ratings of the best posts here don't match my own ranking of my best posts, maybe I should publish a couple of really terrible posts to check whether they actually get rated extremely badly or not?

Replies from: EniScien
comment by EniScien · 2023-05-31T10:50:09.141Z · LW(p) · GW(p)

Ah, I saw a post saying that reactions have been added. I had just been thinking that this would be very helpful and might solve my problem. I've enabled them for my shortform. I hope people will no longer just downvote without saying why via reactions.

comment by EniScien · 2023-05-08T06:21:33.452Z · LW(p) · GW(p)

Yudkowsky says that public morality should be derived from personal morality, and that personal morality is primary. But I don't think this is the right way to put it. In my view, morality is about the social relationships that game theory talks about: how not to play negative-sum games, how to achieve the maximum sum for all participants.

And morality is independent of values. Or rather, each value system has its own morality; or, even more accurately, morality can work even when the participants have different value systems. Morality is primarily about questions of justice; sometimes all sorts of extraneous things like god-worship get dragged under this human sentiment, so morality and justice may not be exactly equivalent.

And game theory answers questions about how to achieve justice. Also, justice may matter to you directly, as one of your values, and then you won't defect even in a one-shot prisoner's dilemma with no penalty. Or it may not matter to you, and then you will defect whenever you don't expect to be punished for it.

In other words, morality is universal across value systems, but it cannot be independent of them. It makes no sense to forbid hurting someone who has absolutely nothing against being hurt.

What I mean is that following morality simply feels different from inside than conforming to your values: the former feels like an obligation and the latter like a desire; in one case you say "should" and in the other "want".

I've read "Sorting Pebbles into Different Piles" several times and never understood what it was about until it was explained to me. Certainly the sorters aren't arguing about morality, but that's because they're not arguing about game theory, they're arguing about fun theory... Or more accurately not really, they are pure consequentialists after all, they don't care about fun or their lives, only piles into external reality, so it's theory of value, but not theory of fun, but theory of prime.

But in any case, I think people might well argue with them about morality. If people can sell primes to sorters and they can sell hedons to people, would it be moral to betray in a prisoner's dilemma and get 2 primes by giving -3 hedons. And most likely they will come to the conclusion that no, that would be wrong, even if it is just ("prime").

That you shouldn't kill people, even if you can get yourself the primeons you so desire, and they shouldn't destroy the right piles, even if they get pleasure from looking at the blowing pebbles.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2023-05-08T09:38:18.586Z · LW(p) · GW(p)

There is convergently useful knowledge, and parameters of preference that could be anything, in a new mind. You don't need to align the former. There are no compelling arguments about the latter.

comment by EniScien · 2023-05-01T18:39:00.151Z · LW(p) · GW(p)

I haven't encountered this technique anywhere else, so I started using it based on how associations work in the brain:

If I can't remember a word, instead of just telling myself "think, think, think", I go through the letters of the alphabet and make an effort on each one: "what words start with this letter; is the word I'm looking for among them?" And that almost always helps.

comment by EniScien · 2023-05-01T16:26:40.590Z · LW(p) · GW(p)

I've noticed that in everyday life, when you're testing some set of habits to see whether they work for you, it's better to keep even a habit that doesn't seem to be working, to make it easier to determine what is actually working; otherwise, if things do improve later, you won't be able to tell whether it was habit one, habit two or habit three.

This reminds me of how I used to put together mod packs: it might seem like a good idea to add all the desired mods at once, but then if some mod turns out to be missing or superfluous, you won't be able to figure out which one. So they should be added and removed only one at a time.

It's the same with habits, only harder, because they act gradually and much more slowly, and there are also factors beyond your control. And I used to assume that it makes no sense to waste time and effort following useless habits.

However, since this is also experimentation, it was worth bearing in mind that any change would complicate the analysis of what worked; it is better to keep the habits until you find a stable working combination, and then remove them one at a time as well, in case certain habits only worked together.

comment by EniScien · 2023-05-01T15:19:30.670Z · LW(p) · GW(p)

Yudkowsky says in one of his posts that since probabilities of 0 and 1 correspond to -∞ and +∞ in log-odds, you can't just add up the probabilities of all the hypotheses and get 1. However, I don't see why this should necessarily follow. After all, to select one hypothesis from the hypothesis space, we must obtain a number of bits of evidence corresponding to the program complexity of that hypothesis.

Accordingly, we don't need an infinite amount of evidence to choose: as many bits as there are in the longest hypothesis is sufficient, since any longer hypotheses compete with shorter ones not for correctness but for precision.

Yes, in the end you can never reach a probability of 1 because you have meta-level uncertainty, but that is exactly what meta-level probability is, and it should be written as a separate factor, because otherwise multiplying in an infinite number of uncertain meta levels gives each of your hypotheses a probability of 0.

And the probability P(H) without considering meta levels should never be 0 or 1, but the probability P(H|O) could well be, since the entire meta level is put into P(O), and therefore P(H|O) has a known finite program complexity. That is, something like:

A="first bit is zero", B="first bit is one"

C="A or B is true", O="other a priori"

P(O)=~0.99

P(A|O)=1/2, P(B|O)=1/2

P(C|O)=P(A|O)+P(B|O)=1/2+1/2=(1+1/2)=1

P(C)=P(C|O)*P(O)=1*0.99=0.99

And if we talk about the second bit, there will be two more hypotheses orthogonal to the first two, on the third bit two more hypotheses, and if we talk about the first three bits, there will already be a choice of 8 multiplications of the first six hypotheses, and there will no longer be correct to ask which of 6 hypotheses is true, because there are 6, in options 8, and must be true simultaneously not one hypothesis, but at least 3.

And accordingly, for 8 hypotheses, we can also add up the probabilities as 8 times 1/8 and end up with 1. Or we can write it as 1/2+1/4+1/8+1/8=1, but of course this is only possible if we can count the number of bits in the hypotheses, decompose them into these bits and determine the intersections, so as not to count the same bits repeatedly.
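
(A minimal sketch of the bookkeeping above, using the same toy numbers; the 0.99 prior for O is the comment's own illustrative figure:)

```python
from itertools import product

p_O = 0.99            # probability of the background a priori assumptions O

# Hypotheses about the first bit, conditional on O.
p_A_given_O = 0.5     # "first bit is zero"
p_B_given_O = 0.5     # "first bit is one"

# C = "A or B": exhaustive and mutually exclusive given O, so the conditional mass is 1.
p_C_given_O = p_A_given_O + p_B_given_O
print(p_C_given_O, p_C_given_O * p_O)     # 1.0 0.99

# For the first three bits there are 2**3 = 8 joint hypotheses of 1/8 each,
# and conditional on O they again sum to exactly 1.
joint = {bits: (1 / 2) ** 3 for bits in product("01", repeat=3)}
print(len(joint), sum(joint.values()))    # 8 1.0
```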

comment by EniScien · 2023-04-27T10:09:26.945Z · LW(p) · GW(p)

Human language works primarily through recognition in context. This works with individual words, but it can also work with whole phrases. The same word may be fully determined by its morphemes, with only the morphemes having to be known from context; but a word can also be a single morpheme. Here you should also take into account words borrowed from other languages, which in the original language could be broken into morphemes, but in yours are known only from context. The same thing works not only with whole words but with phrases, where individual words are separated by spaces in writing and by pauses in speech: there, too, you can determine the meaning of a phrase from the meanings of its words, but often this is not the case; often a certain combination of words has a meaning different from the sum of its parts, a meaning that can only be learned from the context in which the phrase is used. A single word can likewise have a meaning different from its morphemes: for example, the words computer, calculator and counter should be synonyms going by their morphemes, but in context they refer to three specific, different devices. And there can be an unlimited number of such levels of contextual meaning; they are often used to add extra layers of meaning: if you look at everything only as a sequence of morpheme meanings, you get one meaning; if you take the contextually learned meanings of whole words, you get another; if you look at phrases, a third; and so on.

I started programming at the age of 9, and even then I noticed that programming is not rocket science; there is nothing complicated about it. Writing a program is no harder than speaking; writing in a programming language is just like writing in a foreign language. Later I also heard about a study in which MRI showed that programming uses the same areas of the brain as talking. And this finally cemented the wrong idea in me. The fact is that ordinary languages and programming languages, even though they are all languages, are very different. Programming languages are "ascending" languages: they start with descriptions of completely elementary structures, then refer to them, then to sets of references, and so on. In any case this is a perfectly precise process: in programming, if you have described some elementary structure, that description is perfectly precise, without any vagueness, and if you have described a high-level construct out of references, it is again perfectly precise; you have just referred to a number of perfectly precise low-level descriptions.

You create your own worlds here, with your own laws of physics, and just as nature tolerates no exceptions (though there is also a huge difference between the laws of physics and the rules of a game), if you say "true", it is absolutely true; if you say "everything", it is absolutely everything there is; if you say "always", it is absolutely always; any situation where this is not so is a situation with an explicit qualifier, "always except for a and b". Human language is completely unlike that; it works through leaky categories (reference needed). If you say "I will never tell a lie", then although you do not spell out any exceptions, you still mean a bunch of them: "except if I myself do not know the truth", "unless I am drugged", "unless someone alters my brain", and probably also "unless lying saves my life or someone else's".

comment by EniScien · 2023-04-22T20:31:11.436Z · LW(p) · GW(p)

I used to have conflicting thoughts about death. Does death nullify the value of life, because you yourself are destroyed? Or maybe, on the contrary, immortality does, because you will eventually achieve everything? The first is false, because a person has values other than the state of his own brain. The second has nothing to do with real life, because by immortality we really mean "not dying of old age" rather than "living an infinite number of years", so you only have a trillion years ahead of you, and that is a very limited time. As is any other finite time. Thus one should not be an absolutist: the value of life equals the period lived times the coefficient of average goodness, and the anti-value of death equals the product of the distribution of expected value and the distribution of expected probabilities. I also wondered: if there are no stupid values, can't a person simply choose death as a value? And... no. The word "terminal" is missing. Instrumental values are an entirely different type of information and can easily be silly. And a person is clearly not born with a terminal desire to die, the way one is born with a desire for sweetness and for the absence of pain. So no, death destroys too much expected positive utility to rationally prefer it to anything. Another matter is that the decision-making mechanisms in the human brain are far from rational, above all time discounting at a fixed rather than probabilistic rate, which makes you extremely overvalue what is happening now compared to the future.

comment by EniScien · 2023-04-22T20:19:57.318Z · LW(p) · GW(p)

I recently read Robin Hanson saying that small groups holding counter-intuitive beliefs tend to arrive at them from very specific arguments, some even invent their own terminology, and outsiders who reject those beliefs often don't even bother to learn the terminology or review the specific arguments, because they reject the beliefs on purely general grounds. And this is something I had noticed myself, though only from inside such a group: I was outraged and annoyed that the arguments of those who reject FOOM, or the danger of AI/AGI in general, are usually so general that they have nothing to do with the topic at all. On the other hand, Robin Hanson gives the example of conspiracy theories, and that is true, and it is something I had not thought about, because really, when I come across a conspiracy theory, I don't even try to spend time studying it; I just immediately reject it on general grounds and move on. This may well be called applying an outside view and a reference class; the point, however, is not that I use this as an argument in a debate (arguing with a conspiracy theorist is a priori useless, because to believe in one you need to not understand the scientific method), but that I use it to avoid wasting time on a more precise but longer look from the inside. If I were to take the time to consider a conspiracy theory, I would not dismiss its arguments on the grounds that "your idea belongs to the reference class of conspiracy theories and your view from the inside is just your vision under cognitive biases". I would apply an inside view, go through the specific arguments, and reject them on the basis of scientific evidence, using the outside view (the reference class of conspiracy theories) only as not very weighty prior information. So the problem is not that I have an impenetrable barrier of reference classes, but that this narrow group has to get itself out of the reference class of ordinary conspiracy theorists, and, almost necessarily, into the reference class of those who are well acquainted with the scientific method, because without that I will decide that they are just one of the slightly less numerous non-standard conspiracy-theory groups.

Also (I'm not entirely sure it's worth putting this in the same post), he also talked about cases of values changing sharply upon reaching full power. Maybe not in the economy, but in politics this happens regularly: at first the politician is all kindness, he does only good, and then we finally give him all the power, make him an absolute dictator, and for some reason he suddenly stops behaving so positively, organizes repressions, starts wars.

And more about Hanson's sketch of the "betterness explosion". At the end he asks, "Really?" And personally I answer: well... yes. Right. Except that the human brain works about 20M times slower than a computer processor, so it would take about that much more time. If for a computer I would be ready to believe in a period from 1 month down to 1 millisecond, then for a person it would be from 1.5M years down to 6 hours, the latter assuming the ability of instantaneous self-modification, including having every new idea instantly update all your previous conclusions. Plus, in the intermediate scenarios you still need to account for sleep, food, willpower, the need to invent a cure for old age, and the elimination of other causes of death. In addition, you would have to hide your theory of superiority from other people so that they don't decide to stop you before you seize power. But in general, yes, I think this is possible for people; there is just a whole pile of problems in the way, like lack of will, lack of the ability to self-modify, and the terrible evolutionary spaghetti code in the brain, and above all a lifespan that is too short. If people lived a million years, or better a billion, with a margin, they could achieve all of this despite every difficulty except death. However, individuals live too little and think too little, so the betterness explosion is possible for them only collectively; it has been going on for 300 years, spans 3 to 15 generations, and involves 8 billion people at its peak. More specifically, I would consider this "betterness explosion" a broader concept than the "intelligence explosion", in the sense that it is not specifically about intelligence but about optimization processes in general, yet at the same time narrower, because it seems to assume specific values, positive human ones, whereas a nuclear explosion could also be called an optimization explosion, but by no means one aligned with human values.
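
(A quick sanity check of the ×20M scaling figures above, my own arithmetic:)

```python
# If a process takes a computer 1 month or 1 millisecond, a brain running
# ~20 million times slower would need roughly the following.
SLOWDOWN = 20_000_000

month_in_seconds = 30 * 24 * 3600
millisecond_in_seconds = 0.001

print(month_in_seconds * SLOWDOWN / (365 * 24 * 3600))   # ~1.6 million years
print(millisecond_in_seconds * SLOWDOWN / 3600)          # ~5.6 hours
```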

comment by EniScien · 2023-04-22T17:26:25.866Z · LW(p) · GW(p)

I have never heard this stated before, either here or elsewhere, but I notice myself that usually even the most unreasonable thing, like religion, has its grain of truth. Or rather, many grains of truth. That is, knowledge in human society almost never has purely negative utility: a person who knows a certain cultural cluster will almost always have an advantage over a purely ignorant one. There is an important nuance, though: the more knowledgeable one may turn out worse when both are taught correct rational methods, since a fresh mind will accept them without problems and move forward, while one occupied by another meme will resist. And all this is understandable: the combined optimization pressure of intelligence and memetic evolution is unlikely to leave purely harmful memes alive in the end. Human thinking plus natural selection usually correlates with some useful or sane traits in memes. Although, of course, they can very often lead into traps, holes in the fitness landscape, create inadequate equilibria, and so on. This can be seen, for example, in sayings (not to be confused with omens, which are almost always just a product of apophenia plus confirmation bias): something like "geniuses think alike" is easily paraphrased into "there are a thousand ways to be wrong and only one way to get it right". Usually, if a saying exists, it correctly reflects some aspect of reality. The problem is that such observational data collection without building causal models gives no way to understand how conflicting sayings relate to each other and which are true in which situations. Because about those same geniuses there is also the opposite observation, that they think outside the box. And without analysis you cannot clearly say that, in order to obtain the greatest amount of information, geniuses should coordinate, causally or logically, to think on different topics. Or that non-standard thinking just means "not like usual, yet meaningful", so it can be caused either by being at higher levels of the conceptual pyramid or simply by a different memeplex, which does not cancel the previous point about coordinating to think in unusual directions of search. However, there is also an effect with all sayings that only those who already understand them understand them, because they are really just pointers; but I will write more about that another time.

comment by EniScien · 2023-04-22T13:48:07.283Z · LW(p) · GW(p)

An interesting consequence of combining the logical space of hypotheses, Bayes' theorem, and taking priors from Kolmogorov complexity is that any hypothesis of a certain level of complexity will have at least two opposite child hypotheses, which are obtained by adding one more bit of complexity to the parent hypothesis in one of two possible states.

And, conversely, you can remove one bit from the hypothesis, make it ambiguous with respect to which child hypothesis will fit your world, but then you will need fewer bits of evidence to accept it a priori.

And accordingly, there will be one hypothesis that you will have to accept without proof at all, a program of zero length, an empty string, or rather an indefinite string, because it contains all its child hypotheses inside itself, it simply does not say which one of them is true.

And this absolutely a priori hypothesis will be that you exist within at least one world, the world of the hypothesis space itself; it simply does not specify in which particular part of this world you are.

And this makes us look differently at the Tegmark multiverse. Because, let's say, if we know that we are on Earth, and not on Mars, then this does not mean that Mars does not exist. We just aren't there. If we are in a certain branch of Everett, then this does not mean that other branches of Everett do not exist, we just ourselves are not there. And continuing this series, if we are within one program of the physical laws of the universe, then this does not mean that other programs of universes with other laws do not exist, it's just that we ourselves are not there.

If we bring this pattern to its logical conclusion, then the fact that we are inside one sequence of bits does not mean that others do not exist. In other words, by the absolutely a priori hypothesis, all possible sequences of bits exist, absolutely all of them, with no restrictions on computability, on infinity, or even on consistency. Absolutely all of mathematics simply exists.

In general, "all of mathematics" amounts to a hypothesis of zero length and complexity, because it does not limit our expectations in any way. That is, this is not a request to accept some new hypothesis; it is, on the contrary, a request to remove unjustified restrictions. In the broadest sense there are no limits on computability or infinity, and if you claim otherwise, you are proposing a hypothesis that narrows our expectations, and then the burden of proof lies on you.

However, this can also be called a rethinking of the concept of "exists": people use it intuitively, but they cannot give it any definition. If you refine the concept into something about controlling your expectations, then questions like how likely you are to see contradictions, infinity, or something else of that sort can be answered specifically.

For example, one could say that you will never see an inconsistent territory, because that is a case of the mind projection fallacy: "contradictory" is a characteristic of a map that has no territory it actually describes.

Seeing contradictions in a territory is like seeing a "you are here" icon or a scale indicator in it, or asking what the error of the territory is relative to itself; the answer is "none", because the territory is not a map, not a model created from an object, it is the object itself. It cannot be in error about itself, and it cannot contradict itself.

All this could be called something like Tegmark V, mathematical realism and Platonism without limits. And personally, these arguments convince me quite well that all of mathematics exists and that only mathematics exists, or, to paraphrase, everything that exists is mathematics or some specific part of it.

Edited: in some strange way I forgot to clarify, to indicate this obvious point in terms of the space of hypotheses and the narrowing of expectations.

Usually, any narrowing is done by one bit, that is, twice, so you will not reach zero in any finite number of steps, this is because you remove one of the two bits each time in a perpendicular to all previous direction, however, you can also do otherwise, you can not delete any of the two bits, or, after deleting the first bit, delete the second one, delete both, cut not one half, but * (1-1/2 = 1/2) = a * 1/2, and both halves, a*(1-1/2-1/2=0)=a*0, with such a removal that is not perpendicular to the previous ones, but parallel, we remove both possible options, we narrow the space to zero in some direction, and as a result, the whole hypothesis becomes automatically narrowing our entire space of hypotheses to zero, so that now it corresponds not to half the number of territories, but to zero, excluding both alternatives, it excludes all options.

This looks like a much better, much more technical definition of a contradiction than trying to proceed from its etymology, and thus it is clear that there is no “contradiction” exactly, just any card is obliged to narrow your expectations, leaving only one option for each choice, indefinite leaves both, therefore useless, but the contradictory leaves neither, excludes both options, therefore even more useless.

If the contradiction is not within the map itself but in the combined system of map and territory, there is no such problem: it only means that the map is incorrect for this territory, though it may be correct for another. It happens that we already have an a priori exclusion of one of the options, while the data received from the territory exclude the second one. If we draw up a second map based on those data, it will not supplement and refine ours but will, together with it, exclude both possible alternatives. The maps are therefore incompatible: they cannot be combined without excluding both sides of reality, narrowing the space to zero.

comment by EniScien · 2023-04-22T13:06:35.722Z · LW(p) · GW(p)

I thought for a long time about what "contradictions" mean at all and how they can fail to exist in any world if here they are, written down on paper. In the end, I came to the conclusion that this is exactly the case where it is especially important to look at the difference between the map and the territory. An inconsistent map is a map that does not correspond to any territory. Usually you see a territory and then you make a map of it. However, the space of maps, the space of descriptions, is much larger than the space of territories. And you may well not draw a map of a certain territory, but simply generate a random map and ask: which territory does it correspond to? In most cases the answer is "none", for the reason already described. There are maps compiled from territories, and they correspond to those territories; and there are random maps, which are not required to correspond to any territory at all. In other words, an inconsistent map exists, but it was not drawn from any territory; the territory it would have been drawn from, had it been drawn from one, does not exist.

This can be pictured with a map made from a globe. If you unroll the globe and flatten it onto a plane, you get a series of gores with empty space between them; on an ideal map that space should not be white but transparent. However, if you draw the map first, you can fill those transparent areas with some pattern, and if you then try to roll the map back into a globe, you will not get any real globe: it is not a sphere, a Riemannian space, but a plane, a Euclidean space that is larger in the relevant sense, just as the space of maps is larger than the space of territories, and the attempt to project it onto a sphere will not come out evenly, it will go into folds, like an attempt to project Lobachevsky's space into ours.

For programmers this can even be put as the difference between a map and an array: if the map records several different values for one key, it can no longer be collapsed into an array. Thus inconsistent models can be described as artificially constructed things that look like projections of an object but cannot be unambiguously de-projected back into any object. And to avoid confusion, one should say not that contradictions are something that cannot exist, but that a contradiction is a state of the map that corresponds to no state of the territory. You could also call this a typical case of the mind projection fallacy: you project a certain state of the map (which really exists and is not itself contradictory) onto the territory, fail to obtain any such territory, and then say that you cannot obtain it because it would be a contradictory territory.
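
A minimal sketch of the map-vs-array point at the end; the records and the helper function are my own illustration, not anything from the original comment. A bag of key-value records that assigns two different values to the same key cannot be folded back into an ordinary dict, just as an inconsistent description cannot be folded back into any territory:

```python
# A "description" here is just a list of (key, value) records; a "territory"
# would be an ordinary dict: exactly one value per key.
records_consistent   = [("x", 1), ("y", 2)]
records_inconsistent = [("x", 1), ("x", 2)]  # same key, two incompatible values

def collapse(records):
    """Try to fold a description back into a single territory (a dict)."""
    territory = {}
    for key, value in records:
        if key in territory and territory[key] != value:
            return None  # no territory corresponds to this description
        territory[key] = value
    return territory

print(collapse(records_consistent))    # {'x': 1, 'y': 2}
print(collapse(records_inconsistent))  # None
```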

comment by EniScien · 2023-04-22T12:36:52.286Z · LW(p) · GW(p)

I don't remember whether I already wrote about this, but I was thinking about the space of hypotheses in terms of first- and second-order logic, about where recursive justification hits bottom, and so on, and I came to the conclusion that if you actually find some mathematical formulation of Popper's falsifiability criterion, it must be deeper than Bayes' theorem. In other words, Bayes' theorem shows not that positive knowledge also exists and is simply weaker, but that negative knowledge has indirect effects that can be mistaken for weak positive knowledge.

To put it concretely: Bayes' theorem is usually depicted as a square, divided in one dimension by one line and in the other by two. After you have done the calculation, you cut off the probability mass that did not happen and then normalize back to a nice square. But if you treat this square as a space of hypotheses from which you cut out falsified clusters, you will see that no probability ever increases; some simply fall much more than others, so in relative terms the ones that fell less look larger, and after normalization the difference is hidden entirely.

The main advantage of this view is that it contains no crisis of confidence: in principle you cannot confirm anything, you can only refute it more or less. Bits of information become refutation or contradiction scores: you do not confirm a bit's current value, you refute the opposite value, because that value would mean a contradiction. These numbers are probabilities less than one, so under multiplication they can only fall, but you look at the distribution over the ones that fell the least. Religion, for example, has already been pierced by so many spears that for you to end up assigning it so low a probability, every probability that is currently higher would have to be driven even lower. Whereas for quantum mechanics it does not matter whether it has flaws; it still remains our best hypothesis. In other words, this lets you avoid self-destruction even if you are a contradictory agent: you just keep looking for a less contradictory model. It also works in comparisons between agents: no one can raise anyone's rating, you can only find new contradictions, including in yourself, and the winner is the one in whom others have found the fewest contradictions.

In a broader sense, I believe contradiction is a more fundamental category than truth and falsehood. A falsehood is something that is contradictory only in combination with some external system, so it can win in a system it does not contradict. And there are things that are contradictory in themselves, for which no ideal external world can be found in which their number of contradictions drops to zero. In other words, things contradictory in themselves are worse than things that contradict only something specific, but there is no difference in kind; neither false nor even contradictory systems destroy your machinery of knowledge. Besides false, true and contradictory there is, of course, a fourth category, the indefinite. Indefinite statements score the fewest contradiction points, but they are not particularly useful, because the essence of Bayesian knowledge is to distinguish alternative worlds from one another, and if a fact is true in all possible worlds, it does not let you discern which world you are in.
However, this does not mean they are completely useless, because it is precisely such facts that all of mathematics and logic consist of: facts true in all possible worlds, as distinguished from contradictory facts, which are true in no possible world. That is, again, the point is that nothing can be proved: mathematics is not a pool of statements proven true for all worlds, it is rather a pool of statements that have not yet been shown to be wrong for all worlds.
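
A sketch of the "updating only ever cuts" picture; the three toy hypotheses and their likelihood numbers are my own illustration. Unnormalized scores are only ever multiplied by numbers at most 1, so nothing ever rises; normalization is what makes the least-cut hypothesis look as if it "gained":

```python
# Unnormalized beliefs over a few toy hypotheses.
beliefs = {"religion": 1.0, "newton": 1.0, "quantum": 1.0}

def observe(beliefs, likelihoods):
    """One round of evidence: every likelihood is <= 1, so every score can only fall."""
    return {h: p * likelihoods[h] for h, p in beliefs.items()}

def normalize(beliefs):
    total = sum(beliefs.values())
    return {h: p / total for h, p in beliefs.items()}

# Each observation "pierces" some hypotheses more than others.
for likelihoods in [
    {"religion": 0.01, "newton": 0.9, "quantum": 0.95},
    {"religion": 0.01, "newton": 0.3, "quantum": 0.9},
]:
    beliefs = observe(beliefs, likelihoods)
    print(beliefs)            # raw scores: everything only ever decreases

print(normalize(beliefs))     # the least-refuted hypothesis dominates after renormalizing
```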

comment by EniScien · 2023-04-22T11:32:33.227Z · LW(p) · GW(p)

Some time ago I saw an article here on what the words "deep causality" and "surface analogy" actually mean. For me at the time the difference was intuitively obvious; in particular, it was obvious to me that the article was not about deep analogies but only about concentrating the probability mass, which is of course a very important skill for a rationalist, a key one even, but not what I mean by deep causes. Still, I could not put that intuition into words then. I was not even sure whether an analogy could be "truly deep" or only "deeper" than some other one.

Since then I have worked out for myself the notions of upward and downward understanding: calculating large-scale consequences from simple base-level laws, versus trying to infer the basic laws while seeing only the large-scale consequences. In the first case you can speak confidently about the deepest level; in the second you cannot, since it can always turn out that you did not know the true laws and there is a level deeper still.

An obvious analogy for surface versus deep is a black box. The surface approach is a purely statistical analysis of patterns in inputs and outputs; the deep approach is an attempt to build a model of the inside of the box. The difference is that when you talk about statistics, you are always ready to say there is some chance of a different outcome, and that will not refute your model; whereas when you model the internal structure of the box, your hypothesis says that certain combinations of inputs and outputs are strictly impossible, and if one appears, your model is wrong. Deep causal models are more falsifiable and give crisper answers.

You could also say the difference is like that between a neural network with only two layers, input and output, with connections between them, and a network with some number of hidden, invisible, internal layers. Deep causal models require not merely saying that there is some correlation between the states of the inputs and outputs; they require building a chain of causes between inputs and outputs, laying a specific path between them. And in the real world you can often check the individual steps of that path, not only the inputs and outputs. It can also be compared to a logical versus a probabilistic construction: you can "ring out" this circuit and say definitely which outputs correspond to which inputs. And as in deductive inference, if you deny the conclusion, you have to point to the specific premises you are rejecting; you cannot reject the conclusion without breaking the structure of the model somewhere. Something like that. Probably later I will formulate it even better and add to it.

comment by EniScien · 2022-11-01T10:25:56.463Z · LW(p) · GW(p)

It occurred to me that looking at things through first-order logic could be the answer to many questions. For example, the choice between measuring complexity by the number of rules or by the number of objects: the formulas of quantum mechanics do not predict some specific huge combination of particles; like all hypotheses, they limit your expectations relative to the space of all hypotheses/objects, so complexity-by-rules and complexity-by-objects give the same answer.

At the same time, limiting the complexity of objects should be the solution to Pascal's mugging (I can't link the original articles or say whether this has already been solved); it is the answer to where the leverage penalty comes from. When you postulate a hypothesis, you narrow the space of objects. Initially there is far more than a googolplex of people available, but you specify one particular googolplex as the axioms of your hypothesis, and for that you need a corresponding number of bits, because in logic an object with identical properties cannot be two different objects (and, if I am not mistaken, quantum mechanics says exactly that). So each person in the googolplex must differ in some way, and to specify/describe that you need at least a logarithm's worth of bits. And as long as you are human, you cannot even formulate that hypothesis exactly, cannot define every axiom that your hypothesis sets to 1, let alone obtain enough bits of evidence to establish that they really are 1.

Also, the hypothesis about any number of people is the sum of the hypotheses "there are at least n-1 people" and "there is one more person", so raising its probability by a factor of a billion is literally equivalent to believing at least the part of the hypothesis in which there are a billion people who will be affected by the Matrix Lord. This can also be expressed by saying that every very unlikely hypothesis is the sum of two less complex and less unlikely hypotheses, and so on until you have enough memory to consider them; in other words, you have to start with the more likely hypotheses, test them, and only then add new axioms, new bits of complexity. Or as a version of the leverage penalty, only not for the probability of occupying such a significant node, but for choosing from the space of hypotheses, where for the hypothesis about a googolplex of people there are a googolplex of analogues for smaller numbers.

That is, by the standards of first-order logic our programs assign unreasonably high priors to regular hypotheses, in effect infinite ones, even though in fact you have to choose between two options for each bit, so the longer a particular string of bit values, the less likely it should be. Of course we have evidence that things behave regularly, but not all evidence points that way, and certainly not an infinite amount of it, since we have not even checked all 10^80 atoms in our Hubble sphere; so our prior in favor of regular hypotheses will not be strong enough to outweigh even a googolplex.
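
A back-of-the-envelope version of the "at least a logarithm's worth of bits" point; the arithmetic is mine, the numbers are the standard ones from the Pascal's mugging discussion. Merely indexing which of a googolplex of pairwise-distinct people you mean already costs about 3.3*10^100 bits, vastly more than any evidence a human could collect:

```python
import math

googol = 10 ** 100
# A googolplex is 10**googol, too large to hold as an integer comfortably,
# but its log2 is easy: log2(10**googol) = googol * log2(10).
bits_to_index_googolplex = googol * math.log2(10)

print(f"{bits_to_index_googolplex:.3e} bits")   # ~3.322e+100 bits just to pick out one person

# For comparison, wildly overcounting: observing every one of the ~10^80 atoms
# in our Hubble sphere at one bit apiece gives only ~1e80 bits of evidence.
print(bits_to_index_googolplex / 1e80)          # ~3.3e20 times more bits than that
```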

comment by EniScien · 2022-06-06T13:46:33.717Z · LW(p) · GW(p)

The more I learn about the brain, the more the sense of its unity dissipates. Apparently, even though we look at our mind from the inside, it, just like the body seen from the outside, simultaneously stops being mysterious and stops seeming like something whole, a single fluid stream of thinking. Ignorance creates not only a sense of chaos, but also a sense of wholeness.

comment by EniScien · 2022-06-03T16:48:56.108Z · LW(p) · GW(p)

I remember that as a kid, if I heard two apparently conflicting opinions from two sources I liked, I would take great pains to suppress the cognitive dissonance, force the two to fit together, and convince myself that both were right. It seems I was even proud of the work this took. So I can understand why some believers try to combine the theory of evolution with the idea of divine creation. For them there are simply two sources associated with positive emotions, and they do not want to suffer the unpleasant feeling of admitting that, at best, one of them is wrong.

comment by EniScien · 2022-06-01T07:34:10.880Z · LW(p) · GW(p)

My feeling is that the planning fallacy stems from the illusion of control. When a person plans, it seems to them on a subconscious level that however they draw up the plan, so it will be, and so they are not inclined to draw up a plan in which things go the worst way. You don't want to write a plan that says "at this point I will make a mistake", do you? After all, who would specifically plan to commit an error if they could plan for no error to be made? It is like writing a book: there, if you do not plan for the character to make a mistake, he will never make one. Except that in books skillful writers strive to follow Murphy's law, because a story about a flawless character is not interesting, it's a Mary Sue, a piano in the bushes, authorial fiat, and so on. But in reality a person wants to avoid mistakes, so they plan as if everything will work out. It worked in the book, after all... And in the head, in the imagination, when planning, everything also worked out without errors whenever errors were not added deliberately, so why should reality be any different? This seems to be the same reason why people who know a box is 70% red and 30% blue try to bet on red 70% of the time, as if the whole point of randomness were not that you cannot plan for it. It must also be related to the inability to respect unknown unknowns, to take them into account when you do not intuitively feel that there is something you do not know. Perhaps simply teaching the difference between the map and the territory down to the intuitive level, without targeting the planning fallacy specifically, would by itself improve the situation.
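
A quick check of the 70/30 betting point (the simulation is mine): always betting red wins about 70% of the time, while matching the frequencies wins only about 0.7*0.7 + 0.3*0.3 = 58%.

```python
import random

random.seed(0)
N = 100_000
draws = ["red" if random.random() < 0.7 else "blue" for _ in range(N)]

# Strategy 1: always bet on the more likely color.
always_red = sum(d == "red" for d in draws) / N

# Strategy 2: "probability matching", betting red 70% of the time.
matching = sum(
    d == ("red" if random.random() < 0.7 else "blue") for d in draws
) / N

print(f"always bet red:     {always_red:.3f}")  # ~0.700
print(f"probability match:  {matching:.3f}")    # ~0.580
```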

comment by EniScien · 2022-05-30T15:40:50.791Z · LW(p) · GW(p)

In fact, I am both worried and pleased that on LessWrong I can influence the ratings of so many posts so strongly. That is, I can create a 6-point difference between two comments. This means that not many people read the site, so it does not get balanced out: individual voters with ordinary karma get too much influence. But on the other hand, it is psychologically good for the voters themselves, since you can see that your votes matter; there is no feeling of "I don't influence anything" or "a bunch of other smart people will decide better".

comment by EniScien · 2022-05-26T13:32:00.696Z · LW(p) · GW(p)

It is hard to articulate why I so dislike views that change depending on which family you were born into (religion, nationalism, patriotism, etc.). It is like priors of one, the fallacy of privileging an arbitrary hypothesis, a lack of stability of your views under self-modification as in AI alignment, floating beliefs, knowledge that is not truly part of you, unrecoverable knowledge instead of a knowledge generator. And it seems all these points are connected, which is why Yudkowsky wrote about all of them while trying to teach the creation of safe AI. Just as Yudkowsky said in one of the sequences, roughly, "I'm sorry, you are the same person as me, just raised in a different environment, and not a monster at all, and yet we are now forced to be enemies", I do not like the idea that I would be at enmity with myself from a neighboring universe simply because I was born into a Catholic family and he into a Protestant one. Even before LessWrong I disliked beliefs so unstable that, had you been born in another country, you could never have reliably come to them; seriously, this supposedly all-important belief of yours rests on so flimsy a fact as where you were born? You have no foundation of justification under it, the way you have under science; it is just a random thing. Again: unrecoverable knowledge, a floating conviction, something you cannot transmit to anyone through arguments. Though I am not sure why this would not also apply to, say, tastes. It probably does not apply. I am not saying my tastes matter; they are just a random fact about me. I will not try to prove to anyone that my tastes are better than everyone else's, because that is not so; it is not truth, it is just preference. And I cannot say I would greatly resist changing them, it is just that no other set would be better. It is like my reluctance to lose my memory, because it is a part of me.

Replies from: Dagon
comment by Dagon · 2022-05-26T14:30:59.828Z · LW(p) · GW(p)

Do you dislike the meta-view that an individual cares about their family more than they care about distant strangers?  The specific family varies based on accident of birth, but the general assumption is close to universal.

How many of the views you dislike are of that form, and why are they different?

Replies from: EniScien
comment by EniScien · 2022-05-26T17:21:52.858Z · LW(p) · GW(p)

I didn't quite understand what you mean. Family is not entirely relevant to the topic of that post; it is usually treated somewhat more logically. And the post was more about beliefs than about obligations. I am ready to repay a debt to my family, or even to the state, but only for what they actually did well, and partly in proportion to what it cost them, not merely for the fact of my birth. "Honor your father" clearly does not deserve a place in the 20 commandments, because I was lucky, but someone else's father might have beaten them. Your friends are not just useful tools either; you can also be grateful for what they have done in the past. But no unjustified unconditional love. Somewhere around here, I think by Scott Alexander, there was a story in which a person wakes up in a virtual-reality capsule in a world without families, having failed an exam for excessive conformity. It largely reflects my views. It might make sense to prefer one's own country/family/gender/race/species, all other things being equal, but obviously not when the other option gives MORE (if expressed in numbers, then certainly not at 0.01% more, but at, say, 3%).

Replies from: EniScien
comment by EniScien · 2023-04-22T15:45:44.722Z · LW(p) · GW(p)

More specifically, what I mean is that I find it quite pointless to make something a moral value, like a duty, rather than a preference value, like a taste, if that attitude varies with the region of one's birth. This can probably be expressed as something like "I think it is a mistake to put anything other than the direct conclusions of game theory on the list of binding moral values." Or, to put it another way, I believe that interpersonal relations should not be regulated by anyone's personal preferences, only by ways of finding a game strategy that achieves the maximum total. Which is perhaps just a longer and more pompous way of saying "do not impose your preferences on others", or "the moral good for another person should be determined by preference utilitarianism."

comment by EniScien · 2022-05-23T13:58:43.506Z · LW(p) · GW(p)

Surely someone has already pointed this out, but I have not seen it said. It seems that humanism follows from science, because the idea of progress shows that everyone can win; there is enough for everyone, life is not a zero-sum game in which, unless you harm someone, you yourself live worse. And the absence of discrimination probably comes from greater consistency in your reasoning: you see that hating a certain group is a completely arbitrary thing that has nothing behind it, and that you could just as well have hated any other group. You could say you become aware that you cannot call yourself special just because you are you, since everyone else may think the same; you have no special reason.

comment by EniScien · 2022-05-22T09:36:34.493Z · LW(p) · GW(p)

(I can't find where this was; if I find it, I'll move this comment there.) Someone suggested, in light of the problems with AI, cloning Yudkowsky, but the problem is that we apparently do not have the 18 years it takes for a human brain to mature, so even with every other problem solved, it is simply too slow. And with any means of accelerating brain development, the problem is obvious.

comment by EniScien · 2022-05-22T09:29:15.827Z · LW(p) · GW(p)

It occurred to me that people can root for the protagonist of a book even when he is a villain because the political instincts for rationalizing the correctness of your tribe's actions kick in. You root for the main character as for your own group.

comment by EniScien · 2022-05-20T13:03:05.917Z · LW(p) · GW(p)

It seems that in one of the sequences Yudkowsky says that Newtonian mechanics is false. But in my opinion, saying that Newtonian mechanics is false is like saying that Einstein's theory of relativity is false: we know it does not work in the quantum realm, so sooner or later it will be replaced by another theory, and on those grounds you could call it false in advance. I think this is simply the wrong question; either we should indicate what percentage of it is false, somehow without confusing that with the probability that it is false, or we should continue the metaphor of the map and the territory. Maps are usually not false, they are inaccurate. Some map may not outperform white noise in its predictions, but Newton's map is not like that: his laws worked well until the problems with Mercury's orbit were discovered and the theory was replaced by relativity. Newton's map is less like the territory, less accurate, than Einstein's map. Say Newton's map contained a blurry gray spot in the shape of a circle, and one could assume it was just a gray circle; Einstein's map then showed us, at higher resolution, that inside that circle there is a complex pattern of evenly alternating black and white, with no gray.

comment by EniScien · 2022-05-17T18:21:22.648Z · LW(p) · GW(p)

It occurred to me (I haven't finished reading Inadequate Equilibria, so I don't know whether the comparison is made there) that the inexploitability of markets is similar to that mutual entropy thanks to which you could dip your finger into boiling water and not get burned, if you knew the movements of the atoms.

Replies from: Dagon
comment by Dagon · 2022-05-17T18:35:51.574Z · LW(p) · GW(p)

I'm not sure I get the analogy.  And in fact, I don't think that KNOWING the movements of atoms is sufficient to allow you to avoid heat transfer to your finger.  You'd need it to be true that there exists a place and duration sufficient to dip your finger that would not burn you.  I don't think this is true for any existing or likely-to-exist case.

If you can CONTROL the movements of atoms, you certainly can avoid burning.  This is common and boring - either a barrier/insulator or just cooling the water first works well.

Replies from: EniScien
comment by EniScien · 2023-04-22T16:26:46.236Z · LW(p) · GW(p)

I expressed myself inaccurately. First, of course, mere knowledge will not make the water cold for you; you would also have to move your finger extremely precisely and quickly to avoid the hotter molecules. I treated that as beside the point, since your 1.5 kg brain physically cannot hold information about 0.25 kg worth of water molecules in the first place. Second, to state it with my current best understanding: these systems are similar in that it is commonly assumed that the glass of water simply has high entropy, so you cannot help getting burned, just as it is commonly assumed that the market is simply efficient, simply unpredictable. But in general these are all human-centric, relative categories (I don't remember whether LessWrong would call them "magical" or "unnatural"). One way or another, the point is that you cannot speak of the entropy of the order book or the predictability of the market as object.property; it is really subject.function(object), and treating it otherwise would be endorsing the mind projection fallacy. There is no entropy of an object by itself, only the mutual entropy of two objects, and it does not matter whether you are talking about heat dissipation or about predicting information. (It then occurred to me that in both cases there is the same difference in vision between an informed and an uninformed subject: what is impenetrable chaos to one is transparent order to the other.)

comment by EniScien · 2024-01-11T11:48:15.723Z · LW(p) · GW(p)

I noticed that some names here have really bad connotations (though I am not saying I know which ones don't, or even that any of them doesn't).

"LessWrong" looks like "be wrong more rare" and one of obvious ways to it is to avoid difficult things, "be less wrong" is not a way to reach any difficult goal. (Even if different people have different goals)

"Rationality: from A to Z" even worse, it looks like "complete professional guide about rationality" instead of "incomplete basic notes about a small piece of rationality weakly understood by one autodidact" which it actually is.

Replies from: Dagon
comment by Dagon · 2024-01-11T17:10:42.925Z · LW(p) · GW(p)

Ehn.  Not sure what you expect, or where you think does it better.  
 

I would recommend that you reframe "X has really bad connotations" to "I have these specific associations with X, which I think are negative".

comment by EniScien · 2023-04-29T12:28:11.408Z · LW(p) · GW(p)

For some reason, nowhere I heard about programming until recently was it explained that object-oriented programming is essentially a reverse projection of the human brain. At best, everything I heard said something like: procedural programming is ugh and object-oriented is cool. It did not explain that procedural programming is much closer to the language reality thinks in, and that inventing "objects" is just a crutch for the imperfect monkey brain.

All this came to mind when I noticed that people tend to think of drugs as if a drug were an object with the property "cures", like a potion in a video game or a literal "potion" from antiquity. People are still extremely prone to think about viruses, bacteria, antibiotics, vaccines and the like in this way: not imagining a specific mechanism by which the thing is supposed to work, but simply assuming the drug as a whole will "cure". The same of course goes for poisons, which people unfamiliar with biology think of as having an inherent property to "poison", or, one level deeper, acids as having an inherent property to "dissolve".

And if you go back to the question of reverse projection of the human mind, it becomes obvious that human language did not come out of nowhere; it is a product of the human brain, so language must also be a projection of its structure, and therefore to work with it properly you need the reverse projection, objects in this broader sense, which is specifically where convolutional neural networks come in.

comment by EniScien · 2023-04-29T12:22:20.088Z · LW(p) · GW(p)

I was once very interested in the question of what "time" is and what "entropy" is. The thing is, I watched popular science videos on YouTube, and nowhere was there a proper answer; at best there was some kind of circular argument. Nor was there a proper explanation of what entropy is anywhere, only the vague statement that it is a "measure of disorder in the system".

In my head, however, the idea kept circling that it has something to do with there being more directions outward than inward in space. I also kept thinking it must be connected with the principle of least action, for which I had likewise never seen an explanation of this kind: namely, that the straight path is a single path, while there are at least 4 detours around it, and those are only the nearest ones, each further step away having half as many again. Accordingly, if we imagine that there were no "law" of least action, we would still observe it, because for a particle the probability of being each next step away from the central path would be 2 times smaller, because there are twice as many paths; and for a wave it would not even be a probability but simply its distribution.

All these thoughts were inspired by a video of balls falling down a pyramid of pegs: at each step a ball has paths on both sides leading inward and only one side path leading outward, and the balls end up forming a normal distribution. Put another way, although the number of endpoints is the same, the paths in the center converge onto one another while the paths at the edges do not: the two paths through the center merge into a single cluster of central paths, and the two paths at the edges do not.
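
A sketch of the peg-pyramid picture (the path-counting is mine): each ball makes n left/right choices, the many paths through the middle get lumped together, the edge paths stay unique, and the counts come out binomial, i.e. approximately normal.

```python
from math import comb

n = 10  # number of peg rows, i.e. left/right choices per ball
# Number of distinct paths ending k steps to the right: "n choose k".
paths = [comb(n, k) for k in range(n + 1)]

print(paths)              # [1, 10, 45, 120, 210, 252, 210, 120, 45, 10, 1]
print(sum(paths), 2**n)   # 1024 paths in total, one per bit string of choices

# The central bin collects 252 of the 1024 equally likely paths,
# while each extreme corner bin is reachable by exactly one path.
```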

From this we can guess that the average expected space will have a shape close to a square, a cube, a tesseract, or some other figure with equal sides: although there is only one such figure and many other variants, those other variants do not add up with one another, whereas the variants close to a cube add up to a cube.

This also explains for me why Everett's chaotic branches do not create a chaotic world. There are more chaotic branches, but they form the rim of the circle rather than its center; the least chaotic branches are fewer, but they converge toward a world close to order, while the most chaotic branches differ from each other even more than they differ from order.

Somewhere here on LessWrong I saw a question about why, if we live in a Tegmark universe, we do not see constant violations of the laws of physics, like "clothes turning into crocodiles". But... we do observe them. Only "clothes" and "crocodile" are overly meaningful, human-scale variants; in fact there are far more variants than that. One mole of matter contains ~10^23 particles, and even if we only consider the variants of each particle's presence or absence, that is 2^(10^23); our system is too big for us to notice these artifacts. If, however, we go down to individual particles...

...that is exactly what we see: constant random fluctuations. Quantum ones. This can be counted as a successful prediction of Tegmark's, although in fact only a retrospective one.

comment by EniScien · 2023-04-22T19:15:53.302Z · LW(p) · GW(p)

Again, I am not sure whether I already wrote this, but when it comes to quantum mechanics, the Born probabilities and the question "why a square?", what keeps turning in my head is this: if you square and then take the root back as an equation, one square branches into two possible answers, positive and negative; in other words, this operation erases exactly one bit of information, the sign bit. Whereas if you raised the number to the first power, you could directly match each point of the past to each point of the future, and then there would be no computing the wave function of the future only from the entire wave function of the past; you would have exactly one future for each past, and it would not be a question of probabilities at all, only of logical inference.
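
The "squaring erases exactly the sign bit" point made concrete (the numbers are just an illustration of mine):

```python
amplitudes = [0.6, -0.6]

# Squaring maps both amplitudes to the same probability: the sign bit is erased,
# so the map from amplitudes to probabilities is two-to-one, not one-to-one.
print([a ** 2 for a in amplitudes])     # [0.36, 0.36]

# Taking the "square root back" therefore branches into two possible answers:
prob = 0.36
print([prob ** 0.5, -(prob ** 0.5)])    # [0.6, -0.6] -- one lost bit, two preimages
```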

comment by EniScien · 2022-10-18T16:58:11.049Z · LW(p) · GW(p)

It seems to me that the standard conjunction-fallacy question about "the probability of a USSR attack on Poland as a result of conflict X" versus "the probability of a USSR attack on Poland" is a flawed instrument for this experiment, since it carries an implicit connotation: because a reason X is named in the first case and no Y or Z is indicated in the second, the second case reads as assessing an attack for no reason at all; and if you then show the person their own answers, hindsight kicks in, as in that experiment with the swapped photographs of chosen faces. A better pairing, to avoid creating such unconscious false premises, would be "the probability of a USSR attack on Poland as a result of conflict X" and "the probability of a USSR attack on someone as a result of some conflict between them". Though I am not sure this would change the results of the experiment overall: even with an explicit "some" instead of an implicit "no reason", something so multivariate and vague will still not look like a plausible plot, and as a result vague = undetailed = implausible = unlikely = low probability.

comment by EniScien · 2022-06-02T13:54:42.713Z · LW(p) · GW(p)

It occurred to me that the essence of the process of computation is optimization, the removal of unnecessary information, because "transformation of form" is not the answer. Not only when you reduce a fraction, but also when you find the result of an equation: paradoxically, the equation carries more information than the result, because it lets you compute all the other results as well. Does this mean that computation is just "entropy"? That would be "the answer to all the questions of the universe", but it looks wrong, too much of an idée fixe. It would, however, explain why in the world of mathematics these things are the same while in our world they are not: it is all about the presence of entropy and the passage of time.

Replies from: lahwran
comment by the gears to ascension (lahwran) · 2022-06-07T10:20:07.609Z · LW(p) · GW(p)

ooh this was starting to make sense at the beginning and then didn't - I was getting excited at the first line though. seems like if I had to guess, you're working on integrating concepts - try rephrasing into more formal terminology perhaps? I feel like if this is anything like how I think, you may have made reasonable jumps but not showed enough of your mental work for me to follow. calculation process of what? what do you refer to with "shape transformation"? what is it not the answer to? what fraction? result of what equation? etc etc.

Replies from: EniScien, EniScien
comment by EniScien · 2023-04-29T12:26:40.904Z · LW(p) · GW(p)

Or, if I have not written this down anywhere else: it occurred to me that since we live inside the Tegmark mathematical universe, the universe is just a giant formula together with the process of solving it step by step; each successive part after an equals sign is the next moment in time, and the value of the expression itself, what is preserved across the equals signs, is energy. Superposition is the subtraction inside the parentheses, which with each step attaches another multiplier to both parts of the difference, and the different Everett branches are those same two halves of each difference, only with the parentheses expanded.

Well, by now it is not that I think this is wrong; rather the opposite, it is too obvious, and therefore useless.

Besides, it can also be presented in another form, at the intersection of computer science and quantum mechanics: the universe, or rather all the universes of mathematics, is a bit string that keeps diverging into Everett branches. At the first digit you have 2 branches, for 0 and 1; at the second you already have 4; at the third, 8; and so on. Each branch is an ordinary particular bit string; with each branching the amount of information in the string grows, that is, entropy grows, and this direction of entropy growth within each individual branch is time.
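
A toy enumeration of the branching-bit-string picture (my own illustration): after n binary choices there are 2^n branches, and picking out any single branch costs exactly n bits, which is the sense in which entropy grows by one bit per step.

```python
from itertools import product
from math import log2

for n in range(4):
    branches = ["".join(bits) for bits in product("01", repeat=n)]
    print(n, len(branches), branches)
    # n digits -> 2**n branches; specifying one branch costs log2(2**n) = n bits
    assert log2(len(branches)) == n
```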

The law of conservation of energy, or more broadly the law of conservation common to all of mathematics, is that at each step, for every 0 there is a 1 and vice versa: each time you split into the two values of a bit, each option has its opposite, so the total entropy of all of mathematics is in a sense zero. Seen from inside it is infinite, but to write it all down you do not need a formula of any length; zero length suffices.

So from the inside mathematics is infinite, but from the outside it adds up to zero. That is a sort of answer to the question "why is there something rather than nothing?": "something" refers to a piece of "everything", and "everything" is what nothing looks like from the inside.

I came up with this myself, but later I also saw someone else's formulation of it: for every number on the mathematical plane there is an inverse number, so even though mathematics contains infinite information, it sums to zero in the end, hence the law of conservation of energy.

As far as I know, it is widely known among physicists, as opposed to laypeople, that energy is a conventional quantity, and the energy of a part of a system can even be greater than the energy of the whole system, since energy can also be negative, as negative as you like, so what we take to be zero is only a convenient reference point.

comment by EniScien · 2023-04-22T14:16:01.569Z · LW(p) · GW(p)

I seem to understand timeless physics better since then. To state the regularity I had in mind more clearly: ... a point in the book's time, or all of them at once, for there is no answer to the question "what day is it now in Middle-earth?", because our timeline simply has nothing to do with that one. And when we look at any mathematical object, its timeline, like the book's timeline, is likewise not connected to ours, which is why such objects look timeless. That is, the timeline is not one and the same for the whole universe; there are many timelines, and we are inside our own, but not inside the timeline of some object like a book or anything else. You could also say that if one usually says something like "we see the passage of time when entropy grows" and "entropy is that which grows with time", then timeless physics reduces time to entropy: you link into a timeline those fragments of mathematical descriptions between which there is the least mutual entropy. This model of time also says that besides the standard linear timeline there should be all the non-standard time shapes: various kinds of Everett branches, past, future and parallel, various kinds of time loops, rings and spirals, and so on. And this can be called computation, because computation leads to a change of form, to another piece of information, and between it and the original there will always be some mutual entropy. In short, it seems this did not come out well: although I myself understand what I meant back then, I can see I expressed it extremely unclearly. A question in response: what does "working on integrating concepts" mean? I do not understand what is meant by that expression.

comment by EniScien · 2022-05-28T13:51:42.113Z · LW(p) · GW(p)

Based on Yudkowsky's posts about the aura of formidability and the level above one's own, it would probably be good to make some kind of rating with example books (like that list of the best textbooks compiled from the comments of people who have read at least 3 of them), so that you could assess what level you yourself are at. At the same time it would let us see how objective this measure is. It seemed to me that a community like LessWrong is needed for purposes like this, among others.

comment by EniScien · 2022-05-27T15:09:19.973Z · LW(p) · GW(p)

This is not the first time it seems to me that Yudkowsky is hinting for us to work something out instead of writing it directly. On the other hand, he writes as if he does not really know the answer himself. In this particular case he says about qualia that we should ask the question "what makes us think we have qualia?" At first I took that the wrong way, as meaning that qualia are just an illusion. But then I ran a thought experiment about a neural network. Suppose we created a neural network that could likewise answer questions about the picture it recognizes from its cameras. We ask it what it sees in the right corner, and it answers; and so on. All its answers would be as if it really SEES the whole picture as qualia, just as people do. As if a retina, or its counterpart, were already enough to have qualia, the only difference being that not every bearer of a retina can answer us in detail about its contents.

On this basis it seems that qualia are not some "consciousness field" property of the universe; they are an inherent logical/mathematical phenomenon whose absence is not possible in any alternative world, it is just confusion/nonsense. Another thought experiment hints at the same thing. Imagine a universe without qualia. Wait, how? There is no observer there, no point of view; no one can ever see what is going on in that universe. And that just seems... wrong. It is as if the existence of a universe in which nothing can be observed, even if it is populated by philosophical zombies, simply makes no sense. In other words, qualia are, strictly speaking, already there in retinal cells; animals have them, neural networks have them, and probably, in a very simplified form, even bacteria and sensors. The ability to speak only allows us to "cash them out", to inform other people of their existence, though in fact they were always there, everywhere.

Next comes the question of self-consciousness. It does not seem to be some conceptual (epiphenomenal) thing that makes you valuable. It just seems to be the thing that is missing when we dream. I can even imagine people without self-awareness: the difference would be that they are incapable of self-reflection. Because of this they might even be more efficient workers, not distracted, not given to procrastination and useless hesitation and so on. They would be able to answer questions about their condition, they could be made to go one level deeper, but they would not do it on their own. Because of this they would be less intelligent: new thoughts would visit them only on the basis of external stimuli, they would invent fewer new ideas in a lifetime and make worse use of experience unless someone outside forced them to keep thinking until they said the task seems finished. Their difference, it seems, would be that they lack the wandering, chaotic signals and closed loops of neurons in the brain that repeatedly pass backpropagated error. They might also lack a task stack, might be unable to remember something at the right moment or simply to recall it; they would necessarily need an alarm clock. A lack of ability to hold more than one object in short-term memory? And hence the only reason their emotions would be less valuable is that they are shorter for the same stimuli: no self-reflection loop, just a single run of the pulse of negative reinforcement. There would be no shame lasting a week, just a single unpleasant sensation and a momentary change in behavior, probably to a lesser degree than in repeatedly reflecting beings.
The problem here, as with pain, is that we cannot stop our unpleasant emotions even when we have already learned the lesson from them. Perhaps I am wrong, or have found some other aspect, given it a description, and it is not self-consciousness. But what appeals to me about this explanation is that it seems to remove any mystery from the concept of consciousness and make it just another of the understandable properties of the human brain. Another aspect of consciousness may be the ability to separate one's self from the outside world, to distinguish one's own actions from external ones. And that again seems to be one of the functions switched off in dreams: there you do not distinguish between your body moving somewhere, a change in the external environment, and a change in your own thoughts. It would then also hurt your learning, making you like that robot which just paints over the blue pixels and does not try to take the blue glass off its visor. The point is that this again strips self-consciousness of its status as an icon: it does not magically draw the line between valued minds and unvalued ones. The difference between a conscious being and an unconscious one would be not like the difference between a person and an iron robot, but like the difference between a sober person and a drunk one. We value the latter less because they are less controllable and more dangerous, less intelligent and therefore worse at keeping agreements. It is not a question of sentience versus non-sentience, of value versus non-value, but of a Lawful versus a Chaotic worldview (hmm, does Yudkowsky write about this somewhere in Mad Investor Chaos? I have not finished reading it). It is not that we value them less because of bare consciousness; it is that we can rely on them less, and we prioritize actors we can rely on. At the same time, judging from human experience, it seems that while we cannot build anything without qualia, we can build something without happiness and unhappiness, without boredom and pain. People sometimes simply do not experience the emotion, do not agonize, and can still learn from their mistakes. This is what we might call conscious thinking, plan-building, plain remembering rather than reinforcement mechanisms: when we can change our actions simply because we made a decision. But so far I have no ideas about that, except that emotion is basically just a property of neural signals running in a loop. Also, I do not think there is anything mysterious about the redness of red; it seems to be just the result of your starting to reason about the intermediate layers of neurons the same way you reason about the input layers, rather like the unsolvable question "but is Pluto a planet after all?" And so I am a materialist, and accordingly I expect that qualia are not some mental substance but entirely physical chains of neurons. I also predict that with enough technology it will be possible to connect one person's neurons to another person's in the right way, and then that person will see someone else's redness and be able to check whether it is the same or different. This is a falsifiable scientific hypothesis: we can test whether they are the same, test the hypothesis that the same chains of neurons, connected to different brains, give the same sensations. This is not just a belief in materialism and an application of Occam's razor; it is a testable question.
Here is the test: if, after trying different neural circuits, people say the redness is the same, and philosophers keep insisting that perhaps they see different rednesses even though they say it is the same, then that would be a multiplication of entities. Finally, the sense in which "Pluto is a planet" or the redness of red is ineffable is that these are internal layers of neurons, not inputs. In the latter case we can point to an external object to change their state, compare and synchronize; in the former we cannot, just as you cannot explain the inexpressibility of color to a colorblind person, or the inexpressibility of vision to a blind one. But these are not fundamental obstacles; as I have written before, they are only a consequence of the fact that people have no mechanisms for object serialization or for conscious, reflective reprogramming of their own source code / neural circuits. If there were such mechanisms, you could simply look into your brain, find out exactly how your neurons are connected, turn the territory of your neurons into a map of them, describe that map in language, and have another person turn it back into territory, getting their brain to form new neurons according to the description. After that you could, without any "inexpressibility", let a colorblind person see color whenever they want, even if their eyes have no cones and they cannot detect color by sight. You could likewise describe to a blind person which neurons you feel your visual cortex has, so that they start forming the same combination in their own brain, and then tell them which neurons to activate in order to feel what you feel, even if they cannot determine for themselves what the picture actually is. (I have a concern that this might be a dangerous topic to discuss, but it seems to be discussed elsewhere on LessWrong without problems, including by Yudkowsky himself when he dispels the mystery / black box around various brain abilities. So I will simply not write specifically about the ways it could be abused.) (This turned out longer than I expected; I don't know whether I should turn it into a full top-level post.)

comment by EniScien · 2022-05-24T10:50:11.760Z · LW(p) · GW(p)

If you think about it, there is nothing wrong with every person knowing everything that civilization currently knows; on the contrary, this return to normality is long overdue. There used to be simply "a scientist", someone aware of all the achievements of science. Now two physicists will not understand each other because they are from different subfields. No one in the world knows how things stand; no one sees the whole picture even approximately. One can imagine the horrified post of someone who met a person who does not even fully know the history of their own planet or the laws of physics.

comment by EniScien · 2023-05-26T15:20:11.953Z · LW(p) · GW(p)

I must say, I wonder why I have not seen speed reading and visual thinking mentioned here as among the most important pieces of practical rationality advice. A visual image is 2+1-dimensional, while an auditory image is 0+1-dimensional; moreover, auditory images use sequential thinking, at which people are very bad, while visual thinking is parallel. And according to Wikipedia, the transition from voiced to visual reading should speed you up 5 (!) times, and in the same way visual thinking should be about 5 times faster than verbal thinking; if you can read and think 5 times more thoughts in a lifetime, that is simply an incredible difference in productivity.

Well, the same applies to using visual imagination instead of inner speech; there too you can use pictures. (I don't know, maybe all of this was in Korzybski's books and my problem is simply that I did not read them, though I certainly should have.)

comment by EniScien · 2022-10-30T12:21:53.283Z · LW(p) · GW(p)Replies from: EniScien, EniScien, EniScien
comment by EniScien · 2022-10-30T12:32:17.678Z · LW(p) · GW(p)
comment by EniScien · 2022-10-30T12:30:56.421Z · LW(p) · GW(p)
comment by EniScien · 2022-10-30T12:29:06.380Z · LW(p) · GW(p)
comment by EniScien · 2022-02-06T06:29:52.510Z · LW(p) · GW(p)
comment by EniScien · 2022-06-05T17:23:05.669Z · LW(p) · GW(p)