Posts

StartAtTheEnd's Shortform 2024-01-11T17:52:24.084Z

Comments

Comment by StartAtTheEnd on Do you consider perfect surveillance inevitable? · 2025-02-17T00:41:10.127Z · LW · GW

All good! I wrote a long response after all.

But what future do you value? Personally, I don't want to decrease the variances of life, but I do want to increase the stability. 

In either case, I think my answer is "Invest in the growth and maturation of the individual, not in the external structures that we crudely use to keep people in check" 

Can you convince all people who have surveillance powers to not use them

No, but we can create systems in which surveillance is impossible from an information-theoretic perspective. Web 3.0 will likely do this unless somebody stops it, and there are ways to stop it too (you could for instance argue that whoever creates these systems is aiding criminals and terrorists)

Anxiety seems to be why individual people prefer transparency of information, but it's not why the system prefers it. The system merely exploits the weakness of the population to legitimize its own growth and to further its control of society.

Converting everyone to a single value system is not easy. But we can improve the average person and thus improve society that way, or we can start teaching people various important things so that they don't have to learn them the hard way. One thing I'd like to see improved in society is parenting; it seems to have gotten worse lately, and that's leading to the deterioration of the average person and thus a general worsening of society.

A society of weak people leads to fear, and fear leads to mistrust, which leads to low-trust societies. By weak, I mean people who run away from trauma rather than overcoming it. You just need to process uncomfortable information successfully to grow; it's not even that difficult, it just requires a good deal of courage. We're all going to die sometime, but not all of us suffer from this idea and seek to run away by drinking or distracting ourselves with entertainment. Sometimes it's even possible to turn unpleasant realities into optimism and hope, and this is basically what maturity and development are

Comment by StartAtTheEnd on Are we the Wolves now? Human Eugenics under AI Control · 2025-01-31T17:07:10.154Z · LW · GW

I think this effect already happened, just not because of AI.

Nietzsche already warned against the possible future of us turning into "The Last Man", and the meme "good times create weak men" is already a common criticism/explanation of newer cultures. There are also memes going around calling people "soy", and increases in cuckolding and other traits which seem to indicate falling testosterone levels (this is not the only cause, but I find it hard to put a name on the other causes as they're more abstract)

We're being domesticated by society/"the system". We've built a world where cunning is rewarded over physical aggression, in which standing out in any way is associated with danger, and in which we praise the suppression of human nature, calling it "virtue". Even LW is quite harsh on natural biases.

It's a common saying that modern society and human nature are a poor fit, and that this leads to various psychological problems. But the average man has nowhere to aim his frustrations, and he has no way to fight back. The enemy of the average person is not anything concrete; they're being harassed by things which are downstream consequences of decisions made far away from them, by people who will never hear what their victims think about their ideas. I think this leads to a generation of "broken men". This is unlikely to change the genetics of society, though, unless the most wolf-like of us fight back and get punished for it, or unless those who suffer the least from these changes are the least wolf-like (which I think may be the case).

Dogs survive much better than wolves in our current society, and I think it's fair to say that social and timid people survive better than aggressive people who stand up to that which offends them, and more so now than in the past (one can still direct their aggression at the correct targets, but this requires a lot more intelligence than aggressive people tend to have)

I think this is likely to continue, though, by which I mean to say that you don't seem incorrect. Did you use AI to write this article? If so, that would explain the downvotes you got. And a personal nitpick with the "Would this even be Bad?" section: "Mood stabilizing" is a misleading term, it actually means mood-reducing. Our "medical solutions" to people suffering in society are basically minor lobotomies. By making people less human, they become a better fit for our inhuman system. If you enjoy the thought of being domesticated, you're probably low on testosterone, or otherwise a piece of evidence that human beings have already been strongly weakened.

Comment by StartAtTheEnd on Do you consider perfect surveillance inevitable? · 2025-01-26T17:50:34.302Z · LW · GW

Predict and control... I'm not sure about that, actually. The world seems to be a complex system, which means that naive attempts at manipulating it often fail. I don't think we're using technology to control others in the sense that we can choose their actions for them, but we are decreasing the diversity of actions one can take (for instance, anything which can be misunderstood seems to be a no-go now, as strangers will jump in to make sure that nothing bad is going on, as if it were their business to get involved in other people's affairs). So our range of motion is reduced, but it's not locked to a specific direction which results in virtue or something.

I don't think that the world can be controlled, but I also think that attempts at controlling it by force are mistaken, as there are more upstream factors which influence most of society. For instance, if your population is Buddhist, they will believe that treating others well is the best thing to do, which I think is a superior solution to placing CCTVs everywhere. The best solutions don't need force, and the ones which use force never seem optimal (consider the war on drugs, the taboo on sexuality, attempts at stopping piracy, etc.). I think the correct set of values is enough (but again, the receiver needs to agree that they're correct voluntarily). If everyone can agree on what's good, they will do what's good, even if you don't pressure them into doing so.

I'm also keeping extinction events in mind and trying to combat them, I just do so from a value perspective instead. I'm opposed to creating AGIs, and we wouldn't have them if everyone else were opposed as well. Some people naively believe that AGIs will solve all their problems, and many don't place any special value on humanity (meaning that they don't resist being replaced by robots). But there are also many people like me who enjoy humanity itself, even in its imperfection.

I mean you as the owner of your machine can audit what packets are entering or exiting it

This is likely possible, yeah. But you can design things in such a way that they're simply secure, so that it's impossible for them not to be. How do you prevent a lock from being hacked? You keep it mechanical rather than digital. I don't trust websites which promise to keep my password safe, but I trust websites which don't store my password in the first place (they could run it through a one-way hash). Great design makes failure impossible (e.g. atomic operations in banking transfers)
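
To make the one-way-hash idea concrete, here's a minimal sketch in Python (my own illustration, not how any particular website does it; a real system would use a dedicated password-hashing function like bcrypt or argon2 rather than bare SHA-256):

```python
# Minimal sketch of "don't store the password, store a one-way hash of it".
# Illustration only - real systems should use bcrypt/scrypt/argon2, not bare SHA-256.
import hashlib
import os

def register(password: str):
    """Keep only a random salt and the hash; the password itself is never stored."""
    salt = os.urandom(16)
    digest = hashlib.sha256(salt + password.encode()).digest()
    return salt, digest

def verify(attempt: str, salt: bytes, digest: bytes) -> bool:
    """Re-hash the attempt and compare; the stored values don't reveal the password."""
    return hashlib.sha256(salt + attempt.encode()).digest() == digest

salt, digest = register("hunter2")
print(verify("hunter2", salt, digest))      # True
print(verify("wrong guess", salt, digest))  # False
```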

I’m curious about your thoughts on that. 

This would likely result in security, but it comes at a huge cost as well. I feel like there are better solutions, and not just for a specific organization, but for everyone. You could speak freely on the internet just 20 years ago (freely enough that you could tell the nuclear launch codes to strangers if you wanted to), so such a state is still near in a sense. Not only was it harder to spy on people back then, fewer people even wanted to do such a thing, and this change in mentality is important as well. I'm not trying to solve the problem in our current environment, I want to manipulate our environment into one in which the problem doesn't exist in the first place. We just have to resist the urge to collect and record everything (this collection is mainly done by malicious actors anyway, and mainly because they want to advertise to you so that you buy their products).

You could go on vacation in a country which considers it bad taste to pry into others' affairs and be more or less immune thanks to that alone, so you don't even need to learn opsec, you just need to be around people who don't know what that word means. You could also use VPNs which keep no logs (if they're not lying, of course), as nothing can be leaked if nothing is recorded. Sadly, the same forces which destroyed privacy are trying to destroy these methods; it's the common belief that we need to be safe, and that in order to be safe we need certainty and control. I don't even think this is purely ideology, I think it's a psychological consequence of anxiety (consider 'control freaks' in relationships as well).

Society is dealing with a lot of problems right now which didn't exist in the past, not because they didn't happen, but because they weren't considered problems. And if we don't consider things to be problems, then we don't suffer from them, so the people who are responsible for creating the most suffering in life are those who point at imperfections (like discrimination and strict beauty standards) and convince everyone that life is not worth living until they're fixed.

Finally, people can leak information, but human memory is not perfect, and people tend to paraphrase each other, so "he said, she said" situations are inherently difficult to judge. You have plausible deniability since nobody can prove what was actually said. I think all ambiguity translates into deniability, which is also why you can sometimes get away with threatening people - "It would be terrible if something bad happened to your family" is a threat, but you haven't actually shown any intent to break the law. Ambiguity is actually what makes flirting fun (and perhaps even possible), but systematizers and people in the autism cluster tend to dislike ambiguity; it never occurs to them that both ambiguity and certainty have pros and cons.

I mean politically

Politics is a terrible game. If possible, I'd like to return society to the state it had before everyone cared so much about political issues. Since this is not an area where reasonable ideas work, I suggest just telling people that dictators love surveillance (depending on the ideology of the person you're talking to, make up an argument for how surveillance is harmful). The consensus on things like censorship and surveillance seems to depend on the ideology one perceives it to support. Some people will say "We need to get rid of anonymity so that we can shame all these nazis!", but that same sort of person was strongly against censorship 13 years ago, because back then censorship was thought to be what the evil elite used to oppress the common man. So the desire to protect the weak resulted in both "censorship is bad" and "censorship is good" being common beliefs, and it's quite easy for the media to force a new interpretation since people are easily manipulated.

By the way, I think "culture war" topics are against the rules, so I can only talk about them in a superficial and detached manner. Vigilantes in the UK are destroying cameras meant to automate fining people, and as long as mentalities/attitudes like this dominate (rather than the belief that total surveillance somehow benefits us and makes us safe) I think we'll be alright. But thanks to technological development, I expect us to lose our privacy in the long run, and for the simple reason that people will beg the government to take away their rights.

Comment by StartAtTheEnd on Do you consider perfect surveillance inevitable? · 2025-01-25T19:24:02.233Z · LW · GW

Sorry in advance for the wordy reply. 

Can you identify the specific arguments from ISAIF that you find persuasive

Here's my version (which might be the same. I take responsibility for any errors, but no credit for any overlap with Ted's argument)

1: New technologies seem good at first/on the surface.

2: Now that something good is available, you need to adopt it (or else you're putting yourself or others at a disadvantage, which social forces will punish you for)

3: Now that the new technology is starting to be common, people find a way to exploit/abuse it. This is because technology is neutral; it can always be used for both good and bad things, you cannot separate the two.

4: In order to stop abuse of said technology, you need to monitor its use, restrict access with proof of identity, regulate it, or create new and even stronger technology.

5: Now that you're able to regulate the new technology, you must do so. If you can read people's private emails, and you choose not to, you will be accused of aiding pedophiles and terrorists (since you could arguably have caught them if you did not respect their privacy)

This dynamic has a lot of really bad consequences, which Ted also writes about. For instance, once gene editing is possible, why would we not remove genes which result in "bad traits"? If you do not take actions which make society safer, you will be accused of making society worse. So we might be forced to sanitize even human nature, making everyone into inoffensive and lukewarm drones (as the traits which can result in great people and terrible people are the same, the good and the bad cannot be separated. This is why new games and movies are barely making any money, and it's why Reddit is dying. They removed the good together with the bad)

 

I’m curious how long you think you will be able to slow it down and what your ideas for doing so are

I can slow it down for myself by not engaging with these new technologies (IoT, subscription-based technology, modern social media, etc.), by using fringe privacy-based technologies, or by simply not making noise. (If nothing you say escapes the environment in which you said it, you're likely safe. If what you said is not stored for longer periods of time, you're likely safe. If the environment you're in is sufficiently illegible, information is lost and you cannot be held accountable.)

I'm also doing what I can to teach people that:

1: Good and Bad cannot be separated. You can only have both of them or none of them. I think this is axiomatically true,  which suggests that the Waluigi Effect occurs naturally (just like intrusive thoughts).

2: You cannot have your cake and eat it too. You can have privacy OR safety, you cannot have both. You cannot have a backdoor that only "the good guys" can access. You cannot have a space where vulnerable groups can speak out, without also having a space where terrorists can discuss their plans. You cannot have freedom of speech and an environment in which nothing offensive is said.

Most people in the web3 space are not taking internet anonymity as seriously as it needs to be

This is possibly true, but the very design of web3 (decentralization, encryption) makes it so that privacy is possible. If your design makes it so that large corporations cannot control your community, it also makes it so that the government is powerless against it, as these are equal on a higher level of abstraction.

That can audit every single internet packet entering and exiting a machine

This sounds like more surveillance rather than less. I don't think this is an optimal solution. We need to create something in which no person is really in charge, if we want actual privacy. The result will look like the Tor network, and it will have the same consequences (like illegal drug trade). If a platform is not a safe place to sell drugs, it's also not a safe platform to speak out against totalitarianism or corruption, and it's also not a safe place to be a minority, and it's also not a safe place to be a criminal. I think these are equivalent; you cannot separate good and bad.

I like talking in real life, as no records are kept. What did I say, what did I do? Nobody knows, and nobody will ever know. I don't have to rely on trust or probability here. Like with encryption, I have mathematical certainty, and that's the only kind of certainty which means anything in my eyes.

Self-destructing messages are safe as well, as is talking on online forums which will cease to exist in the future, taking all the information with them (what did I say on the voice chat of Counter Strike 1.4? Nobody knows)

Communities like LW have cognitive preferences for legibility, explicitness, and systematizing, but I think the reason why Moloch did not bother humanity before the 1800s is that it couldn't exist. It seems like game-theoretic problems are less likely to occur when players don't have access to enough information to be able to optimize. This all suggests one thing: that information (and openness of information) is not purely good. It's sometimes a destructive force. The solution is simple to me: minimize the storage and distribution of information.

edit: Fixed a few typos

Comment by StartAtTheEnd on Do you consider perfect surveillance inevitable? · 2025-01-24T17:10:08.245Z · LW · GW

I have considered automated mass-surveillance likely to occur in the future, and have tried to prevent it, for about 20 years now. It bothers me that so many people don't have enough self-respect to feel insulted by the infringement of their privacy, and that many people are so naive that they think surveillance is for the sake of their safety.

Privacy has already been harmed greatly, and surveillance is already excessive. And let me remind you that the safety we were promised in return didn't arrive.  

The last good argument against mass-surveillance was "They cannot keep an eye on all of us" but I think modern automation and data processing has defeated that argument (people have just forgotten to update their cached stance on the issue).

Enough ranting. The Unabomber argued for why increases in technology would necessarily lead to reduced freedom, and I think his argument is sound from a game theory perspective. Looking at the world, it's also trivial to observe this effect, while it's difficult to find instances in which the number of laws has decreased, or in which privacy has been won back (this also applies to regulations and taxes; many things have a worrying one-way tendency). The end-game can be predicted with simple extrapolation, but if you need an argument, it's that technology is a power-modifier, and that there's an asymmetry between attack and defense (the ability to attack grows faster, which I believe caused the MAD stalemate).

I don't think it's difficult to make a case for "1", but I personally wouldn't bother much with "2" - I don't want to prepare myself for something when I can help slow it down. Hopefully web 3.0 will make smaller communities possible, resisting the pathological urge to connect absolutely everything together. By which time, we can get separation back, so that I can spend my time around like-minded people rather than being moderated to the extent that no group in existence is unhappy with my behaviour. This would work out well unless encryption gets banned.

The maximization of functions leads to the death of humanity (literally or figuratively), but so does minimization (I'm arguing that pro-surveillance arguments are moral in origin and that they make a virtue out of death)

Comment by StartAtTheEnd on The average rationalist IQ is about 122 · 2025-01-10T04:33:08.752Z · LW · GW

I have more reasons for believing that Mensa members are below 130, but also for believing that they're above.

Below: Most online IQ tests are similar enough to the Mensa IQ test that the practice effect applies. And most people who obsess about their IQ scores probably take a lot of online IQ tests, memorizing most patterns (there's a limit to the practice effect, but it can still give you at least 10 points)

Above: Mensa tests for pattern recognition abilities, which in my experience correlate worse with academic performance than verbal abilities do. Pattern recognition tests also select for people with autism (they tend to score about 20 points higher on RPM-like pattern recognition tests (matrices) than on other subtests). These people will be smarter than they sound, because their low verbal abilities make them appear stupid, even though their pattern recognition might be 2 standard deviations higher. So you get intelligent people with poor social skills, who sound much dumber than they are, and who tend to have more diagnoses than just autism. It's no wonder that these people go to forums like Mensa, or that they're less successful in life than their IQ would suggest. These people are also incredibly easy targets for the kind of people who go to r/iamverysmart, so it's easy to build a public consensus that they're actually stupid, even when it isn't true.

However, in order for high intelligence to shine (and have worthy insights) even without formal education, IQs above 150 are likely needed. For in order to generate your own ideas and still be able to compete with the consensus (which is largely based off the theories of geniuses like Tesla, Einstein, von Neumann, Turing, Pavlov, etc.) you need to discover similar things yourself independently.

I think many rationalists are above 130. I don't like rationalist mentalities very much, though. They seem to think that everything needs to have a source or a proof (a projected lack of confidence in their own discernment). They also tend to overestimate the value of knowledge (even sometimes using it as a synonym for intelligence). If somebody's IQ is, say, 110, I don't think they will ever have any great takes (even with years of study) which a 140 IQ person couldn't run circles around given a week or two of thought. Ever seen somebody invest their whole life into something that you could dismantle or do better in 5 minutes? You could look at this and go "Rapid feedback is better because you approximate reality and update your beliefs faster, makes sense, but why overcompl- right, it's to make mone- to legitimize the only position in which they are thought to have value - because agile coaches are selling ideas/theory and rely on the illusion of substance, of course"

Comment by StartAtTheEnd on The average rationalist IQ is about 122 · 2024-12-30T11:35:57.170Z · LW · GW

People tend to get suspicious if you claim IQs above 125, and start analyzing data and looking for reasons to believe that the actual numbers are lower. But I feel like such people really overestimate what an IQ in the 120s or 130s looks like. If you go on the Mensa forums, you will likely find that most of the comments seem rather dumb, and that the community generally appears dumber than LW.

A large number of people who report scoring in the 130s on IQ tests are not lying. If the number seems off but isn't, then what needs updating is the impression of what an IQ in the 130s looks like.
I suppose that people dislike the fact that some high-IQ people just aren't doing very well in life, and prefer to think that they're lying about their scores

Comment by StartAtTheEnd on StartAtTheEnd's Shortform · 2024-12-24T21:37:21.504Z · LW · GW

That's basically the exact same idea I came up with!

Your link says popularity ≈ beauty + substance, that's no different than my example of "success of a species = quality of offspring + quantity of offspring". I just generalized to a higher number of dimensions, such that for a space of N dimensions, the success of a person is the area spanned. So it's like these stat circles but n-dimensional where n is the number of traits of the person in question. I don't know if traits are best judged when multiplied or added together, but one could play around with either idea.
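
A toy sketch of what I mean, in Python (trait names and numbers are invented; it just shows how the additive and multiplicative readings differ):

```python
# Toy sketch: "success" as a combination of N trait dimensions.
# Trait names and numbers are invented for illustration.
import math

spiky    = {"beauty": 8, "substance": 3, "diligence": 5}  # strong in one dimension
balanced = {"beauty": 4, "substance": 6, "diligence": 5}  # no standout trait

def additive(traits):
    return sum(traits.values())

def multiplicative(traits):
    return math.prod(traits.values())

print(additive(spiky), additive(balanced))              # 16 vs 15: the spiky person edges ahead
print(multiplicative(spiky), multiplicative(balanced))  # 120 vs 120: the weak dimension drags the product down
```

Under multiplication a single weak dimension costs a lot, which is the reading where copying someone's strength into your own weakest dimension pays off the most.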

I'm not sure my insights say anything that you haven't already, but what I wanted to share is that you might be able to improve yourself by observing unsuccessful people and copying their trait in the dimension where you're lacking (this was voted 'wrong' above but I'm not sure why). And that if you want success, mimicking the actions of somebody who is ugly should be more effective, and this is rather unintuitive and amusing.

I also think it would be an advantage for an attractive person to experience what it's like not to be attractive for a while, getting used to this, and then becoming attractive again. Since he would have to make up for a deficit (he's forced to improve himself) and then when the advantage comes back, he'd be further than if he never had the period of being unattractive. And as is often the case with intelligent people, I never really had to study in school, but this made me unable to develop proper study habits. If I learned how below-average people made it through university, this would likely help me more than even observing the best performing student in my class.

A related insight is that if you want a good solution, you have to solve a worse problem. Want a good jacket for the cold weather? Find out what brands they use on Greenland, those are good, they have to be. Want to get rid of a headache? Don't Google "headache reliefs", instead, find out what people with migraines and cluster-headaches do, for they're highly motivated to find good solutions.

Anyway, I swear I came up with these ideas before you wrote your post, the similarity is a coincidence though it looks like I just wrote a worse version of your post. I was partly inspired by I Ching hexagram 42 which says something like "When the superior man perceives good, he imitates it; when he perceives faults, he eliminates them in himself"

Comment by StartAtTheEnd on Ideologies are slow and necessary, for now · 2024-12-23T22:41:00.546Z · LW · GW

I've noticed that your essay doesn't differentiate between beliefs (about truth) and values (subjective preferences). This implies that ideologies approximate truth, or that truth makes people change their minds, or that you can calculate which preferences are best, and I think all of these assumptions are wrong, as value judgements and pure knowledge are entirely separate.

You could argue that having a belief X results in behaviour Y which leads to better well-being for a group of people, but this is unrelated to the truth value of a belief (if it were not so, we'd be able to prove the existence of god by measuring the outcomes of people who believed in him vs those who didn't), but this might depend on the type of belief

Comment by StartAtTheEnd on StartAtTheEnd's Shortform · 2024-12-23T17:50:47.308Z · LW · GW

I think the ideas are independently useful, but to get the best out of both, I'd probably have to submit a big post (rather than these shortform comments) and write some more related insights (I only shared this one because I thought it might be useful to you). Actually, I know that I'm likely too lazy and unconscientious to ever make such a post, and I invite people to plagiarize, refine and formalize my ideas. I've probably had a thousand insights like this, and after writing them out, they stop being interesting to me, and I go on thinking about the next thing.

I hope my comment was useful to you, though! You can start applying the concept to areas outside of morality. Or feel how positive experiences have the same effect (I have made many good memories on sunny days, so everything connected to brightness and summer is perceived more positively by me). There's no need to "fix" good associations blending together; I personally don't, but I also don't identify as a rationalist. I'm more of a meta-gamer/power-gamer, like a videogame speedrunner looking for new glitches to exploit (because it's fun, not because I'm ambitious).

Comment by StartAtTheEnd on StartAtTheEnd's Shortform · 2024-12-23T16:15:53.065Z · LW · GW

Sometimes I spend a few hours talking with myself, finding out what I really believe, what I really value, and what I'm for and against. The effect is clarity of mind and a greater trust in myself. A lot of good and bad things have a low distance to each other, for instance "arrogance" and "confidence", so without the granularity to differentiate subtle differences, you put yourself at a disadvantage, suspecting even good things.

I suppose another reason that I recommend trusting yourself is that some people, afraid of being misunderstood and judged by others, stay away from anything which can be misunderstood as evil, so they distance themselves from any red flags with a distance of, say, 3 degrees of association.

Having one's associations corrupted because something negative poisons everything within 3 degrees/links of distance has really screwed me over, so I kind of want you to hear me out on this:
I might go to the supermarket, and buy a milkshake, but hate the experience because I know the milkshake has a lot of chemicals in it, because I hate the company which makes them, because I hate the advertisement, because I know the text on the bottle is misleading... But wait a minute, the milkshake tastes good, I like it, the hatred is a few associations away. What I did was sabotage my own experience of enjoying the milkshake, because if I didn't, it would feel like I was supporting something which I hated, merely because something like that existed 2-3 links away in concept space.

I can't enjoy my bed because I think about dust mites, I can't enjoy video games because I think about exploitative Skinner boxes, I can't enjoy pop music because, even though I like the melody, I know that the singer is somewhat talentless and that somebody else wrote the lyrics for them. But I have some young friends (early 20s) who simply enjoy what they enjoy and hate what they hate, and they do not mix the two. They drink a milkshake and it's tasty, and they listen to the music and it feels good, and they lay down in their bed and it's soft and cozy. Aren't they living in reality and enjoying the moment, while I'm telling my body that my environment is hostile, which probably makes it waste a lot of energy making sure that I don't enjoy anything (as that would be supporting evil) or let my guard down?

I noticed myself doing this, and stopped this unnecessary spread of negative associations, or reduced the distance at least. Politics seems to be "the mind killer" exactly because people form associations/shortcuts like "sharing -> communism -> mass starvation -> death", putting them off even good things which "gets too close" to bad things. And these poisoned clusters can get really big, causing heuristics such as "rock music = the devil". Sorry about the lengthy message by the way.

Comment by StartAtTheEnd on StartAtTheEnd's Shortform · 2024-12-22T23:05:19.146Z · LW · GW

Many of the advantages are like that, but I think it's a little pessimistic not to dare to look anyway. I've personally noticed that people who are on the helpless side are good at making others want to help them, so not all insights are about immoral behaviour. But even then, aren't you curious how people less capable than yourself can be immoral without getting caught, or immoral in a way which others somehow forgive? Most things which can be used for evil can also be used for good, so I think it's a shame if you don't allow yourself to look and analyze (though I understand that things surrounding immorality can be off-putting)

I'm not all that afraid of things surrounding morality, but that's because I trust myself quite a lot, so the borders between good and bad are more clear (the grey area is smaller; it's more black and white), and I don't bully myself for just getting sort of close to immorality. I don't know if you do this yourself, but having steeper gradients has benefited me personally; I feel more mentally sharp after making my own boundaries clear to myself. I'm just sharing this because I think most people could benefit from it (less so LW users than the general population, but there should still be some)

Comment by StartAtTheEnd on StartAtTheEnd's Shortform · 2024-12-22T11:04:14.830Z · LW · GW

By a duality principle, you can learn a lot from losers.

If somebody has made it to a high position of power despite having glaring flaws (you can think of Tate I guess), I recommend you pay attention. They must have something which balances their flaws out, something which made them succeed despite being at a disadvantage. You can figure out what it is and take it for yourself.

If an ugly person, stupid person, or socially awkward person becomes more successful than what makes sense to you, then your map is incomplete, and the advantage of the person might even be a low-hanging fruit (if a stupid person has an advantage, chances are you can emulate it). In fact, anything which has a stupid design, but which Darwinism hasn't yet removed from existence despite many years passing, likely has some trait which gives it a lot of fitness (this might even be true necessarily). The Ocean Sunfish, arguably one of nature's failures, apparently lays up to 300 million eggs at a time. That example isn't really useful, even though it illustrates that for X to exist, X merely has to have a combined quality and quantity above some threshold. This insight is mostly valuable in human beings, like stupid people who have a lot of money, or low-EQ people with social success.

I think this insight is valuable enough to share, since it seemingly finds value in what most people regard as worthless, which is much more impressive to me than finding value in what's valuable. Of course, this is closer to stat-point allocation. 100 traits in a person might sum to "success", and you're only aware of 80 of their traits, and none of them impress you. Well, the last 20 traits should sum to a high value, and some of these values are likely higher than your own values (since your own values are likely higher in the other 80 dimensions). So the sum of traits is of some value, and you can take the extremes and raise your own respective dimensions by emulating/stealing/seeking inspiration from the person in question. 

It would be quite easy to formalize this mathematically and likely even to prove it, though a few dimensions might have to be "luck" and "advantage at birth" (nepotism).
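
For what it's worth, here's one minimal way to write the claim down (my own notation, additive version only):

```latex
Let a person have trait values $t = (t_1, \dots, t_n)$ with weights $w_i > 0$, and
define success additively as $S(t) = \sum_{i=1}^{n} w_i t_i$. Suppose the person is
observably successful, $S(t) \ge \theta$, while the visible subset $V$ of their traits
is unimpressive, so $\sum_{i \in V} w_i t_i$ is small. Then the hidden traits must satisfy
\[
  \sum_{i \notin V} w_i t_i \;\ge\; \theta - \sum_{i \in V} w_i t_i ,
\]
so at least one hidden dimension is large, and worth emulating if it exceeds your own
value in that dimension. Luck and advantage at birth can simply be included as extra
dimensions.
```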

Comment by StartAtTheEnd on Shortform · 2024-12-12T13:06:35.678Z · LW · GW

I think the new communication systems could be a catalyst, but that stopping at this conclusion obscures the actual cause of cancel culture. I think the answer is something like what Kaczynski said about oversocialization, and that social media somehow worsens the social dynamics responsible. I think it's an interesting question how exactly these dynamics work socially and psychologically, so for me, "it's the new communication systems" is not a conclusion but a possible first step in finding the answer

Comment by StartAtTheEnd on leogao's Shortform · 2024-12-07T23:05:35.906Z · LW · GW

My own expectation is that limitations result in creativity. Writer's block is usually a result of having too many possibilities/choices. If I tell you "You can write a story about anything", it's likely harder for you to think of anything than if I tell you "Write a story about an orange cat". In the latter situation, you're more limited, but you also have something to work with.

I'm not sure if it's as true for computers as it is for humans (that would imply information-theoretic factors), but there's plenty of factors in humans, like analysis paralysis and the "See also" section of that page

Comment by StartAtTheEnd on Sam Harris’s Argument For Objective Morality · 2024-12-05T18:59:24.493Z · LW · GW

If that is really his view, Sam Harris didn't think things through at all, nor did he think very deeply.

Qualia are created by the brain, not by anything external. Touching a hot stove feels bad because we are more likely to survive when we feel this way. There's no reason why it can't feel pleasurable to damage yourself, it just seems like a bad design choice. The brain uses qualia to reward and punish us so that we end up surviving and reproducing. Our defense mechanisms are basically just toying with us because it helps us in the end (it's merely the means to survival), and our brains somewhat resist our attempts at hacking our own reward mechanisms because those who could do that likely ended up dying more often.

You could use Harris's arguments to imply that objective beauty exists, too. This is of course also not correct.

The argument also implies that all life or all consciousness can feel positive and negative qualia, but that's not necessarily true. He should have written "made our corner of the universe suck less, for us, according to us". (What if a change feels good for us but causes great suffering to some alien race?)

Lastly, if these philosophers experienced actual, severe suffering for long periods of time, they would likely realize that suffering isn't even the issue; the issue is suffering that one feels is meaningless. Meaningful pain is not bothersome at all, and it doesn't even need to reduce further pain. Has Harris never read "Man's Search for Meaning" or other works which explain this?

Comment by StartAtTheEnd on Breaking beliefs about saving the world · 2024-11-22T03:09:29.760Z · LW · GW

Thank you! Writing is not my strong suit, but I'm quite confident about the ideas. I've written a lot, so it's alright if you don't want to engage with all of it. No pressure!

I should explain the thing about suffering better:
We don't suffer from the state of the world, but from how we think about it. This is crucial. When people try to improve other people's happiness, they talk about making changes to reality, but that's the least effective way they could go about it.
I believe this is even sufficient. That we can enjoy life as it is now, without making any changes to it, by simply adopting a better perspective on things.

For example, inequality is a part of life, likely an unavoidable one (The Pareto principle seems to apply in every society no matter its type). And even under inequality, people have been happy, so it's not even an issue in itself. But now we're teaching people in lower positions that they're suffering from injustice, that they're pitiful, that they're victims, and we're teaching everyone else that life could be a paradise, if only evil and immoral influences weren't preventing it. But this is a sure way to make people unhappy with their existence. To make them imagine how much better things could be, and make comparisons between a naive ideal and reality. Comparison is the thief of joy, and most people are happy with their lot unless you teach them not to be.
Teaching people about suffering doesn't cause it per se, but if you make people look for suffering, they will find it. If you condition your perception to notice something unpleasant, you will see it everywhere. Training yourself to notice suffering may have side-effects. I have a bit of tinnitus, and I got over it by not paying it any attention. It's only like this that my mind will start to filter it away, so that I can forget about it.

The marketing perspective

I don't think you need pain to motivate people to change; the carrot is as good as the stick. But you need one of the two at minimum (curiosity and other such drives make you act naturally, but do so by making it uncomfortable not to act and rewarding to act)
I don't think that suffering is bearable because of reward itself, but because of perceived value and meaning. Birth is really painful, but the event is so meaningful that the pain becomes secondary. Same for people who compete in the olympics, they have found something meaningful enough that a bit of physical pain is a non-issue.
You can teach this to people, but it's hard to apply. It's better to help them avoid the sort of nihilism which makes them question whether things are worth it. I think one of the causes of modern nihilism is a lack of aesthetics. 

My 2nd perspective

I don't think understanding translates directly into power. It's a common problem to think "I know what I should be doing, but I can't bring myself to do it". If understanding something granted you power over it, I'd practically be a wizard by now.
You can shift the problem that people attack, but if they have actual problems which put them in danger, I think their focus should remain on these. You can always create dissatisfaction by luring them towards better futures, in a way which benefits both them and others at the same time.

I'm never motivated by moral arguments, but some self-help books are alluring to me because they prey on my selfishness in a healthy manner which also demands responsibility and hard work.

As for the third possibility, that sounds a bit pessimistic. But I don't think it would be a worthless outcome as long as the image of what could be isn't a dangerous delusion. Other proposed roads to happiness include "Destroy your ego", "Be content with nothing", "Eat SSRIs forever", and various self-help which asks you to "hustle" and overwork.

who want to prevent human extinction

I see! That's something deeper than preventing suffering. I even think that there are some conflicts between the two goals. But motivating people towards this should be easier, since they're preventing their own destruction as well, and not just helping other people.

it is difficult to know what the far-reaching consequences of this hypothetical world would be

It really is. But it's interesting to me how both of us haven't used this information to decrease our own suffering. It's like I can't value things if they come too easy, and like I want to find something which is worth my suffering. 
But we can agree that wasted suffering is a thing. That state of indecision, being unable to either die or live, yield or fight back, fix the cause of suffering or come to terms with it.
The scarcity mindset is definitely a problem, but many resources are limited. I think a more complex problem would be that people tend to look for bad actions to avoid, rather than positive actions to adopt. It's all "we need to stop doing X" and "Y is bad" and "Z is evil". It's all about reduction, restrictions, avoidance. It simply chokes us. Many good people trap themselves with excessive limitations and become unable to move freely. Simply using positives like "You should be brave", "You should stand up for what you believe in", "You should accept people for who they are" would likely help improve this problem.

there are certain important aspects of human psychology that I'm still unsure about

I think pain and such are thresholds between competing things. If I'm tired and hungry, whether or not I will cook some food depends on which of the two cause the greatest discomfort.
When procrastinating I've also found that deadlines helped me. Once I was backed into a corner and had to take action, I suddenly did. I ran away for as long as I could. The stress from deadlines might also result in dopamine and adrenaline, which help in the short term.
"Acceptance of suffering" is a bit ambigious. Accepting something usually reduces the suffering it causes, and accepting suffering lessens it too. But one can get too used to suffering, which makes them wait too long before they change anything, like the "This is fine" meme or the boiling frog that I mentioned earlier

Spread logical decisionmaking

Logic can defend against mistakes caused by logic, but we did not destroy ourselves in the past when we were less logical than now. I also don't think that logic reduces suffering. Many philosophers have been unhappy, and many people with Down syndrome are all smiles. Less intelligent people often have a sort of wisdom about them, often called "street smarts" when observed, but I think that the lack of knowledge leads them to make fewer map-territory errors. They're nearer to reality because they have less knowledge which can mislead them.

I personally think that intellect past a certain level gives humans the ability to deliberately manipulate their suffering

I don't think any human being is intelligent enough to do this (Buddha managed, but the method was crude, reducing not only suffering). What we can do, is manipulate our reward systems. But this leaves us feeling empty, as we cannot fake meaning. Religion basically tells us to live a good life according to a fixed structure, and while most people don't like this lack of freedom, it probably leads to more happiness in the long run (for the same reason that neuroticism and conscientiousness are inversely correlated)

since human meaning is constructed by our psychological systems

Yes, the philosophical question of meaning and the psychology of meaning are different. To solve meaninglessness by proving external meaning (this is impossible, but let's assume you could) is like curing depression by arguing that one should be happy. Meaning is basically investment, engagement, and involvement in something which feels like it has substance.

I recommend just considering humanity as a set of axioms. Like with mathematical axioms, this gives us a foundation. Like with mathematics, it doesn't matter that this foundation is arbitrary, for no "absolute" foundation can exist (in other words, no set of axioms is more correct than any other. Objectivity does not exist, even in mathematics; everything is inherently relative).
Since attempting to prove axioms is silly, considering human nature (or yourself) as a set of axioms allows you not to worry about meaning and values anymore. If you want humanity to survive, you no longer have to justify this preference.

Maybe you can point me to a source of information that will help me see your perspective on this? 

That would be difficult as it's my own conclusion. But do you know this quote by Taleb?
"I am, at the Fed level, libertarian;
at the state level, Republican;
at the local level, Democrat;
and at the family and friends level, a socialist."
The smaller the scope, the better. The reason stupid people are happier than smart people is that their scope of consideration is smaller. Being a big fish in a small pond feels good, but increase your scope of comparison to an entire country, and you become a nobody. Politics makes people miserable because the scope is too big; it's feeding your brain with problems that you have no possibility of solving by yourself. "Community" is essential to human well-being because it's cohesion on a local level. "Family values" are important for the same reason. There's more crime in bigger cities than in smaller ones. Smaller communities have less crazy behaviour; they're more down-to-earth. A lot of terrible things emerge when you increase the scale of things.
Multiple things on a smaller scale do not seem to have this cost. One family can have great coherence. You can have 100 families living side by side, still great. But force them all to live together in one big house, and you will notice the cost of centralization. You will need hierarchies, coordination, and more rules. This is similar to urbanization. It's also similar to how the internet went from being millions of websites to a few hundred popular websites. It's even similar to companies merging into giants that most people consider evil.
An important antidote is isolation (gatekeeping, borders, personal boundaries, independence, separation of powers, the single-responsibility principle, live-and-let-live philosophies, privacy and other rights, preservation).
I wish it were just "reduced efficiency" which was the problem. And sadly, it seems that the optimal way to increase the efficiency between many things is simply to force them towards similarity. For society, this means the destruction of different cultures, the destruction of different ways of thinking, the destruction of different moralities and different social norms.

I presume you're referring to management claiming credit

It's much more abstract than that. The number of countries, brands, languages, accents, standards, websites, communities, religions, animals, etc. is decreasing, all slowly tending towards one thing having a monopoly, with this one thing being the average of what was merged.

Don't worry if you don't get last few points. I've tried to explain them before, but I have yet to be understood.

I wonder about what basis/set of information you're using to make these 3 claims?

Once a moloch problem has been started, you "either join or die", like you said. But we can prevent moloch problems from occurring in the first place, by preventing the world from becoming legible enough. For this idea, I was inspired by "Seeing Like a State" and this

There are many prisoner's-dilemma-like situations in society which do not cause problems simply because people don't have enough information to see them. If enough people cannot see them, then the games are only played by a few people. But that's the only solution to Moloch: collectively agree not to play (or, I suppose, never start playing in the first place). The number of Moloch-like problems has increased as a side-effect of the increased accessibility of information. Dating apps ruined dating by making it more legible. As information became more visible, and people had more choices and could make more informed decisions, they became less happy. The hidden information in traditional dating made it more "human", and less materialistic as well. Since rationalists, academics and intellectuals in general want to increase the openness of information and seem rather naive about the consequences, I don't want to become either.
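
Here's a toy sketch of that claim (payoff numbers invented): the defecting move only gets played by players who can actually see the payoffs, so average welfare falls as the game becomes more legible.

```python
# Toy sketch: a prisoner's dilemma where a player only defects if they can
# actually see the payoff matrix; otherwise they follow the social default.
# Payoff numbers are invented for illustration.
PAYOFF = {  # (my_move, their_move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def choose(sees_payoffs: bool) -> str:
    if not sees_payoffs:
        return "C"  # the game is illegible to this player; stick to the norm
    # best-respond assuming the other player cooperates: defection dominates here
    return max("CD", key=lambda m: PAYOFF[(m, "C")])

def expected_welfare(p_legible: float) -> float:
    """Expected joint payoff when each player independently sees the payoffs
    with probability p_legible."""
    total = 0.0
    for a_sees in (True, False):
        for b_sees in (True, False):
            prob = (p_legible if a_sees else 1 - p_legible) * \
                   (p_legible if b_sees else 1 - p_legible)
            a, b = choose(a_sees), choose(b_sees)
            total += prob * (PAYOFF[(a, b)] + PAYOFF[(b, a)])
    return total

for p in (0.0, 0.5, 1.0):
    print(p, expected_welfare(p))  # 6.0, 4.5, 2.0 - joint payoff falls with legibility
```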

I agree with the factors leading to human extinction. My solution is "go back". This may not be possible, and like you say, we need to use intelligence and technology to go forwards instead. But like the alignment problem, this is rather difficult. I haven't even taught myself high-level mathematics, I've noticed all this through intuition alone.
I think letting small disasters happen naturally could help us prevent black-swan like events. Just like burning small patches of trees can prevent large forest fires. Humanity is doing the opposite. By putting all its eggs in one basket and making things "too big to fail", we make sure that once a disaster happens, it hits hard.

Related to all of this: https://slatestarcodex.com/2017/03/16/book-review-seeing-like-a-state/ (the page mentions black swan risks, Taleb, Ribbonfarms, legibility and centralization). I actually had most of these thought before I knew about this page, so that gives me some confidence that I'm not just connecting unrelated concepts like a schizophrenic.

My argumentation is a little messy, but I don't want to invest my life in understanding this issue or anything. Kaczynski's books have a few overlapping arguments with me, and the other books I know are even more crazy, so I can't recommend them. 

But maybe I'm just worrying over nothing. I'm extrapolating things as linear or exponential, but they may be s-shaped or self-correcting cycles. And any partial collapse of society will probably go back to normal or even bring improvements with it in the long run. A lot of people have ruined themselves worrying over things which turned out just fine in the end.

Comment by StartAtTheEnd on Breaking beliefs about saving the world · 2024-11-17T10:15:09.508Z · LW · GW

I like this post, but I have some problems with it. Don't take it too hard, as I'm not the average LW reader. I think your post is quite in line with what most people here believe (but you're quite ambitious in the tasks you give yourself, so you might get downvoted as a result of minor mistakes and incompleteness resulting from that). I'm just an anomaly who happened to read your post.

By bringing attention to tactical/emotionally pulling patterns of suffering, people will recognize it in their own life, and we will create an unfulfilled desire that only we have the solution for.

I think this might make suffering worse. Suffering is subjective, so if you make people believe that they should be suffering, or that suffering is justified, they may suffer needlessly. For example, poverty doesn't make people as dissatisfied with life as relative poverty does. It's when people compare themselves to others and realize that they could have it better that they start disliking what they have at the moment. If you create ideals, then people will work towards achieving them, but they will also suffer from the gap between the current state and the ideal. You may argue "the reward redeems the suffering and makes it bearable", and yes, but only as long as people believe that they're getting closer to the goal. Most positive emotion we experience is a result of feeling ourselves moving towards our goals.

Personal concurrent life-satisfaction is possible in-spite of punishment/suffering when punishment/suffering is perceived as a necessary sacrifice for an impending reward.

Yes, which is why one should not reduce "suffering" but "the causes of unproductive suffering". Just like one shouldn't avoid "pain", but "actions which are painful and without benefit". The conclusion of "Man's Search for Meaning" was that suffering is bearable as long as it has meaning, and that only meaningless suffering is unbearable. I've personally felt this as well. One of the times I was the most happy, I was also the most depressed. But that might just have been a mixed episode as is known from bipolar disorder.
I'm nitpicking, but I believe it's important to state that "suffering" isn't a fundamental issue. If I touch a flame and burn my hand, then the flame is the issue, not the pain. In fact, the pain is protecting me from touching the flame again. Suffering is good for survival, for the same reason that pain is good for survival. The proof is that evolution made us suffer, that those who didn't suffer didn't pass on their genes.

We are products of EA

I'm not sure this is true? EA seems to be the opposite of Darwinism, and survival of the fittest has been the standard until recently (everyone suddenly cares about reducing negative emotions and unfairness, to an almost pathological degree). But even if various forces helped me avoid suffering, would that really be a good thing?

I personally grew the most as a person as a result of suffering. You're probably right that you were the least productive when you didn't eat, but suffering is merely a signal that change is necessary, and when you experience great suffering, you become open to the idea of change. It's not uncommon that somebody hits rock bottom and turns their life around for the better as a result. But while suffering is bearable, we can continue enduring, until we suffer the death of a thousand papercuts (or the death of the boiling frog, by our own hands)
That said, growth is usually a result of internal pressure, in which an inconsistency inside oneself finally snaps, so that one can focus on a single direction with determination. It's like a fever - the body almost kills itself, so that something harmful to it can die sooner.

We are still in trouble if the average human is as stupid as I am.

Are you sure suffering is caused by a lack of intelligence, and not by too much intelligence? ('Forbidden fruit' argument) And that we suffer from a lack of tech rather than from an abundance of tech? (As Ted Kaczynski and the Amish seem to think)
Many animals are thriving despite their lack of intelligence. Any problem more complicated than "Get water, food and shelter. Find a mate, and reproduce" is a fabricated problem. It's because we're more intelligent than animals that we fabricate more difficult problems. And if something were within our ability, we'd not consider it a problem, which is why we always fabricate problems which are beyond our current capacities, which is how we trick ourselves into growth and improvement. Growth and improvement which somehow resulted in us being so powerful that we can destroy ourselves. Horseshoe crabs seem content with themselves, and even after 400 million years they just do their own thing. Some of them seem endangered now, but that's because of us?

Bureaucracy

Caused by too much centralization, I think. Merging structures into fewer, bigger structures causes an overhead which doesn't seem to be worth it. Decentralizing everything may actually save the world, or at least decrease the feedback loop which causes a few entities to hog all the resources.

Moloch

Caused by too much information and optimization, and therefore unlikely to be solved with information and optimization. My take here is the same as with intelligence and tech. Why hasn't moloch killed us sooner? I believe it's because the conditions for moloch weren't yet reached (optimal strategies weren't visible, as the world wasn't legible and transparent enough), in which case, going back might be better than going forwards.

The tools you wish to use to solve human extinction are, from my perspective, what is currently leading us towards extinction. You can add AGI to this list of things if you want.

Comment by StartAtTheEnd on Alexander Gietelink Oldenziel's Shortform · 2024-11-17T09:12:17.384Z · LW · GW

Great post!

It's a habit of mine to think in very high levels of abstraction (I haven't looked much into category theory though, admittedly), and while it's fun, it's rarely very useful. I think it's because of a width-depth trade-off. Concrete real-world problems have a lot of information specific to that problem, you might even say that the unique information is the problem. An abstract idea which applies to all of mathematics is way too general to help much with a specific problem, it can just help a tiny bit with a million different problems.

I also doubt the need for things which are so complicated that you need a team of people to make sense of them. I think it's likely a result of bad design. If a beginner programmer made a slot machine game, the code would likely be convoluted and unintuitive, but you could probably design the program in a way that all of it fits in your working memory at once. Something like "A slot machine is a function from the cartesian product of wheels to a set of rewards". An understanding which would simplify the problem so that you could write it much shorter and simpler than the beginner. What I mean is that there may exist simple designs for most problems in the world, with complicated designs being due to a lack of understanding.
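A minimal sketch of that framing, with made-up wheels and an arbitrary payout rule (nothing here reflects any real slot machine):

```python
import itertools
import random

# Hypothetical wheels: each wheel is just a tuple of symbols.
WHEELS = [("cherry", "bell", "seven")] * 3

# The whole game is one mapping: (symbol, symbol, symbol) -> payout.
# Illustrative rule: three matching symbols pay 10, anything else pays 0.
PAYOUTS = {
    combo: (10 if len(set(combo)) == 1 else 0)
    for combo in itertools.product(*WHEELS)
}

def spin() -> int:
    """Spin the wheels and look the outcome up in the payout table."""
    outcome = tuple(random.choice(wheel) for wheel in WHEELS)
    return PAYOUTS[outcome]

print(spin())
```

Once the game is seen as one lookup table over the cartesian product, the convoluted version of the program collapses into a few lines.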

The real world values the practical way more than the theoretical, and the practical is often quite sloppy and imperfect, and made to fit with other sloppy and imperfect things.

The best things in society are obscure by statistical necessity, and it's painful to see people at the tail ends doubt themselves at the inevitable lack of recognition and reward.

Comment by StartAtTheEnd on What are Emotions? · 2024-11-15T10:51:52.623Z · LW · GW

I think there's a problem with the entire idea of terminal goals, and that AI alignment is difficult because of it.

"What terminal state does you want?" is off-putting because I specifically don't want a terminal state. Any goal I come up with has to be unachievable, or at least cover my entire life, otherwise I would just be answering "What needs to happen before you'd be okay with dying?"

An AI does not have a goal, but a utility function. Goals have terminal states: once you achieve them you're done, and the program can shut down. A utility function goes on forever. But generally, wanting just one thing so badly that you'd sacrifice everything else for it seems like a bad idea. Such a bad idea that no person has ever been able to define a utility function which wouldn't destroy the universe when fed to a sufficiently strong AI.

I don't wish to achieve a state, I want to remain in a state. There's actually a large space of states that I would be happy with, so it's a region that I try to stay within. The space of good states forms a finite region, meaning that you'd have to stay within this region indefinitely, sustaining it. But something which optimizes seeks to head towards a "better state", it does not want to stagnate, but this is precisely what makes it unsustainable, and something unsustainable is finite, and something finite must eventually end, and something which optimizes towards an end is just racing to die. A human would likely realize this if they had enough power, but because life offers enough resistance, none of us ever win all our battles. The problem with AGIs is that they don't have this resistance.
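A toy way to see the contrast, assuming a made-up one-dimensional state and an arbitrary acceptable region (this is not a real agent design, just an illustration):

```python
# Hypothetical one-dimensional "state of the world", purely for illustration.
ACCEPTABLE = (0.0, 10.0)  # a region of states I'd be happy to remain in

def maximizing_step(state: float) -> float:
    """An optimizer: always pushes towards a 'better' state, never done."""
    return state + 1.0

def homeostatic_step(state: float) -> float:
    """A sustainer: only acts to get back inside the region, then rests."""
    low, high = ACCEPTABLE
    if state < low:
        return state + 1.0
    if state > high:
        return state - 1.0
    return state  # already inside the region: nothing left to optimize

state = 3.0
for _ in range(5):
    state = homeostatic_step(state)
print(state)  # stays at 3.0 -- sustained rather than "improved" without end
```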

The after-lives we have created so far are either sustainable or the wish to die. Escaping samsara means disappearing, heaven is eternal life (stagnation) and Valhalla is an infinite battlefield (a process which never ends). We wish for continuance. It's the journey which has value, not the goal. But I don't wish to journey faster.

Comment by StartAtTheEnd on Anvil Shortage · 2024-11-15T08:54:14.442Z · LW · GW

I meant that they were functionally booleans, as a single condition is either fulfilled or not: "is rich", "has anvil", "AGI achieved". In the anvil example, any count of 1 or more corresponds to true. In programming, casting a count (a non-negative integer) to a boolean gives "true" for any positive number and "false" for zero, just like in the anvil example. The intuition carries over too well for me to ignore.

The first example which came to mind for me when reading the post was confidence, which is often treated as a boolean "Does he have confidence? yes/no". So you don't need any countable objects, only a condition/threshold which is either reached or not, with anything past "yes" still being "yes".

A function where everything past a threshold maps to true, and anything before it maps to false, is similar to the anvil example, and to a function like "is positive" (since a more positive number is still positive). But for the threshold to be exactly 1 unit, you need to choose a unit which is large enough. $1 does not make you rich, and having one water droplet on you is not "wet", but with the appropriate unit (exactly the size of the threshold/condition) these become functionally similar.
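A small sketch of both points; the $1,000,000 threshold below is an arbitrary stand-in, not a claim about what "rich" means:

```python
# 0 anvils vs "has an anvil": any positive count casts to True, zero to False.
print(bool(0), bool(1), bool(37))  # False True True

# A threshold function behaves the same way once you pick a unit as large as
# the threshold itself (the 1,000,000 figure is invented for illustration).
RICH_THRESHOLD = 1_000_000

def is_rich(dollars: float) -> bool:
    """Everything past the threshold maps to True, everything below to False."""
    return dollars >= RICH_THRESHOLD

print(is_rich(50_000), is_rich(3_000_000))  # False True
```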

I'm hoping there is simple and intuitive mathematics for generalizing this class of problems. And now that I think about it, most of these things (the ones which can be used for making more of themselves) are catalysts (something used but not consumed in the process of making something). Using money to make more money, anvils to make more anvils, breeding more of a species before it goes extinct.

Comment by StartAtTheEnd on Anvil Shortage · 2024-11-14T17:37:05.166Z · LW · GW

This probably makes more sense if you view it as a boolean type: you either "have an anvil" or you don't, and you either have access to fire or you don't. We view a lot of things as booleans (if your clothes get wet, then wet is a boolean). This might be helpful? It connects what might seem like an edge case to something familiar.

But "something that relies on itself" and "something which is usually hard to get, but easy to get more of once you have a bit of it" are a bit more special I suppose. "Catalyst" is a sort of similar yet different idea. You could graph these concepts as dependency relations and try out all permutations to see if more types of problems exists

Comment by StartAtTheEnd on Does the "ancient wisdom" argument have any validity? If a particular teaching or tradition is old, to what extent does this make it more trustworthy? · 2024-11-14T14:47:42.460Z · LW · GW

The short version is that I'm not sold on rationality, and while I haven't read 100% of the sequences it's also not like my understanding is 0%. I'd have read more if they weren't so long. And while an intelligent person can come up with intelligent ways of thinking, I'm not sure this is reversible. I'm also mostly interested in tail-end knowledge. For some posts, I can guess the content by the title, which is boring. Finally, teaching people what not to do is really inefficient, since the space of possible mistakes is really big.

Your last link needs an s before the dot.

Anyway, I respect your decision, and I understand the purpose of this site a lot better now (though there's still a small, misleading difference between the explanation of rationality and in how users are behaving. Even the name of the website gave the wrong impression).

Comment by StartAtTheEnd on Does the "ancient wisdom" argument have any validity? If a particular teaching or tradition is old, to what extent does this make it more trustworthy? · 2024-11-13T20:06:39.037Z · LW · GW

Yes intuitions can be wrong welcome to reality

But these ways of looking at the world are not factually wrong, they're just perverted in a sense.
I agree that schools are quite terrible in general.

how could I have come up with this myself?

That helps for learning facts, but one can teach the same things in many different ways. A math book from 80 years ago may be confusing now, even if the knowledge it covers is something that you know already, because the terms, notation and ideas are slightly different.

we need wisdom because people cannot think

In a way. But some people who have never learned psychology have great social skills, and some people who are excellent with psychology are poor socializers. Some people also dislike "nerdy" subjects, and it's much more likely that they'd listen to a TED talk on body language than read a book on evolutionary psychology and non-verbal communication. Having an "easy version" of knowledge available which requires 20 IQ points less than the hard version seems like a good idea.
Some of the wisest and psychologically healthy people I have met have been non-intellectual and non-ideological, and even teenagers or young adults. Remember your "Things to unlearn from school" post? Some people may have less knowledge than the average person, and thus fewer errors, making them clear-sighted in a way that makes them seem well-read. Teaching these people philosophy could very well ruin their beautiful worldviews rather than improve on them.

if you know enough rationality you can easily get past all that.

I don't think "rationality" is required. Somebody who has never heard about the concept of rationality, but who is highly intelligent and thinks things through for himself, will be alright (outside of existential issues and infohazards, which have killed or ruined a fair share of actual geniuses).
But we're both describing conditions which apply to less than 2% of the population, so at best we have to suffer from the errors of the 98%.

I'm not sure what you mean by "when you dissent when you have an overwhelming reason". The article you linked to worded it "only when", as if one should dissent more often, but it also warns against dissenting since it's dangerous.
By the way, I don't like most rationalist communities very much, and one of the reasons is that they have a lot of snobs who will treat you badly if you disagree with them. The social mockery I've experienced is also quite strong, which is strange since you'd expect intelligence to correlate with openness, and you'd expect the high rate of autistic people to counteract some of the conformity.

I also don't like activism, and the only reason I care about the stupid ideas of the world is that all the errors are making life harder for me and the people that I care about. Like I said, not being an egoist is impossible, and there's no strong evidence that all egoism is bad, only that egoism can be bad. The same goes for money and power: I think they're neutral, and both potentially good and bad. But being egoistic can make other people afraid of me unless I act as if I don't realize what I'm doing.

It's more optimal to be passionate about a field

I think this is mostly correct. But optimization can kill passion (since you're just following the meta and not your own desires). And common wisdom says "Follow your dreams" which is sort of naive and sort of valid at the same time.

Believing false things purposefully is impossible

I think believing something you think is false, intentionally, may be impossible. But false beliefs exist, so believing in false things is possible. For something where you're between 10% and 90% sure, you can choose whether you want to believe it or not, using the following algorithm:
Say "X is true because" and then allow your brain to search through your memoy for evidence. It will find them.

The articles you posted on beliefs are about the rules of linguistics (belief in belief is a valid string) and logic, but how belief works psychologically may be different. I agree that real beliefs are internalized (exist in system 1) to the point that they're just part of how you anticipate reality. But some beliefs are situational and easy to consciously manipulate (example: self-esteem. You can improve or harm your own self-esteem in about 5 minutes if you try, since you just pick a perspective and set of standards in which you appear to be doing well or badly). Self-esteem is subjective, but I don't think the brain differentiates subjective and objective things, it doesn't even know the difference.

And it doesn't seem like you value truth itself, but that you value the utility of some truths, and only because they help you towards something you value more?

Ethically yes, epistemically no

You may believe this because a worldview will have to be formed through interactions with the territory, which means that a worldview cannot be totally unrelated to reality? You may also mean this: That if somebody has both knowledge and value judgements about life, then the knowledge is either true or false, while the value judgements are a function of the person. A happy person might say "Life is good" and a depressed person might say "Life is cruel", and they might even know the same facts.

Online "black pills" are dangerous, because the truth value of the knowledge doesn't imply that the negative worldview of the person sharing it is justified. Somebody reading the vasistha yoga might become depressed because he cannot refute it, but this is quite an advanced error in thinking, as you don't need to refute it for its negative tone to be false.

Rationality is about having cognitive algorithms which have higher returns

But then it's not about maximizing truth, virtue, or logic.
If reality operates by different axioms than logic, then one should not be logical.
The word "virtue" is overloaded, so people write like the word is related to morality, but it's really just about thinking in ways which makes one more clear-sighted. So people who tell me to have "humility" are "correct" in that being open to changing my beliefs makes it easier for me to learn, which is rational, but they often act as if they're better people than me (as if I've made an ethical/moral mistake in being stubborn or certain of myself).
By truth, one means "reality" and not the concept "truth" as the result of a logic expression. This concept is overloaded too, so that it's easy for people to manipulate a map with logical rules and then tell another person "You're clearly not seeing the territory right".

physics is more accurate than intuitive world models

Physics is our own constructed reality, which seems to act a lot like the actual reality. But I think infinitely many systems of physics could exist which predict reality with high accuracy. In other words, "There's no one true map". We reverse engineer experiences into models, but experience can create multiple models, and multiple models can predict experiences.
One of the limitations is "there's no universal truth", but this is not even a problem as the universe is finite. But "universal" in mathematics is assumed to be truly universal, covering all things, and it's precisely this which is not possible. But we don't notice, and thus come up with the illusion of uniqueness. And it's this illusion which creates conflict between people, because they disagree with each other about what the truth is, claiming that conflicting things cannot both be true. I dislike the consensus because it's the consensus and not a consensus.

A good portion of hardcore rationalists tend to have something to protect, a humanistic cause 

My bad for misrepresenting your position. Though I don't agree that many hardcore rationalists care for humanistic causes. I see them as placing rationality above humanity, and thus preferring robots, cyborgs, and AIs above humanity. They think they prefer an "improvement" of humanity, but this functionally means the destruction of humanity. If you remove negative emotions (or all emotions entirely. After all, these are the source of mistakes, right?), subjectivity, and flaws from humans, and align them with each other by giving them the same personality, or get rid of the ego (it's also a source of errors and unhappiness), what you're left with is not human. It's at best a sentient robot. And this robot can achieve goals, but it cannot enjoy them.
I just remembered seeing the quote "Rationality is winning", and I'll admit this idea sounds appealing. But a book I really like (EST: Playing the game the new way, by Carl Frederick) is precisely about winning, and its main point is this: You need to give up on being correct. The human brain wants to have its beliefs validated, that's all. So you let other people be correct, and then you ask them for what you want, even if it's completely unreasonable.

Rationality doesn't necessarily have nature as a terminal value

I meant nature as its source (of evidence/truth/wisdom/knowledge). "Nature" meaning reality/the dao/the laws of physics/the universe/GNON. I think most schools of thought draw their conclusions from reality itself. The only kind of worldview which seems disconnected from reality is religions which create ideals out of what's lacking in life and make those out to be virtue and the will of god.

None of that is incompatible with rationality

What I dislike might not be rationality, but how people apply it, and psychological tendencies in people who apply it. But upvotes and downvotes seem very biased in favor of a consensus and verifiability, rather than simply being about getting what you want out of life. People also don't seem to like being told accurate heuristics which seem immoral or irrational (the colloquial definition that regular people use) even if they predict reality well. There's also an implicit bias towards altruism which cannot be derived from objective truth.

About my values, they already exist even if I'm not aware of them, they're just unconscious until I make them conscious. But if system 1 functions well, then you don't really need to train system 2 to function well, and it's a pain to force system 2 rationality onto system 1 (your brain resists most attempts at self-modification). I like the topic of self-modification, but that line of studies doesn't come up on LW very often, which is strange to me. I still believe that the LW community downplays the importance of human nature and psychology. It may even undervalue system 1 knowledge (street smarts and personal experiences) and overvalue system 2 knowledge (authority, book-smarts, and reasoning).

Comment by StartAtTheEnd on Does the "ancient wisdom" argument have any validity? If a particular teaching or tradition is old, to what extent does this make it more trustworthy? · 2024-11-13T13:16:15.095Z · LW · GW

There's a lot to unfold for this first point:

Another issue with teaching it academically is that academic thought, like I already said, frames things in a mathematical and thus non-human way. And treating people like objects to be manipulated for certain goals (a common consequence of this way of thinking) is not only bad taste, it makes the game of life less enjoyable.
Learning how to program has harmed my immersion in games, and I have a tendency to powergame, which makes me learn new videogames way faster than other people, also with the result that I'm having less fun than them. I think rationality can result in the same thing. Why do people dislike "sellouts" and "car salesmen" if not for the fact that they simply optimize for gains in a way which conflicts with taste? But if we all just treat taste like it's important, or refuse to collect so much information that we can see the optimal routes, then Moloch won't be able to hurt us.

If you want something to be part of you, then you simply need to come up with it yourself; then it will be your own knowledge. Learning other people's knowledge, however, feels to me like consuming something foreign.

Of course, my defense of ancient wisdom so far has simply been to translate it into an academic language in which it makes sense. "Be like water" is street-smarts, and "adaptability is a core component of growth/improvement/fitness" is the book-smarts. But the "street-smarts" version is easier to teach, and now that I think about it, that's what the bible was for.

Most things that society wastes its time discussing are wrong. And they're wrong in the sense that even an 8-year-old should be able to see that all controversies going on right now are frankly nonsense. But even academics cannot seem to frame things in a way that isn't riddled with contradictions and hypocrisy. Does "We are good, but some people are evil, and we need to fight evil with evil otherwise the evil people will win by being evil while we're being good" not sound silly? A single thought will get you Karl Popper's "paradox of tolerance", and a single thought more will make you realize that it's not a paradox but a kind of neutrality/reflexivity which makes both sides equal, and that "We need to fight evil" means "We want our brand of evil to win" as long as people don't dislike evil itself but rather how it's used. Again, this is not more complicated than "I punched my little brother because I was afraid he'd punch me first, and punching is bad", which I expect most children to see the problem with.

astrology

The thought experiment I had in mind was limited to a single isolated situation, you took it much further, haha. My point was simply "If you use astrology for yourself, the outcomes are usually alright". Same with tarot cards: as far as I'm concerned, it's a way to talk with your subconscious without your ego getting in the way, which requires acting as if something else is present. Even crystal balls are probably a kind of Rorschach test, and should not be used to "read other people" for this reason. Finally, I don't disagree with the low utility of astrology, but false hope gives people the same reassurance as real hope. People don't suffer from the non-existence of god, but from the doubt of his existence. The actual truth value of beliefs has no psychological effect (proof: otherwise we could use beliefs to measure the state of reality).

are more rational w.r.t. to that goal

I disagree as I know of counter-examples. It's more likely for somebody to become rich making music if their goal is simply to make music and enjoy themselves, than if their goal is to become rich making music. You see similar effects for people who try to get girlfriends, or happiness for that matter. If X results in Y, then you should optimize for X and not for Y. Many companies are dying because they don't realize such a simple thing (they try to exploit something pre-existing rather than making more of what they're exploiting, for instance the trust in previous IPs). Ancient wisdom tackles this. Wu Wei is about doing the right thing by not trying to do it. I don't know how often this works, but it sometimes does.

I have to disagree that anyone's goal is truth. I've seen strong evidence that knowledge of an environment is optimal for survival, and that knowledge-optimizing beats self-delusion every time, but even in this case, the real goal is "survival" and not "truth". And my proof is the following: If you optimize for truth because it feels correct or because you believe it's what's best, then your core motivation is feelings or beliefs respectively. For similar reasons, non-egoism is trivially impossible. But the "Something to protect" link you sent seems to argue for this as well?
And truth is not always optimal for goals. The belief that you're justified and the belief that you can do something are both helpful. The average person is 5/10 but tends to rate themselves as 7/10, which may be around the optimal bias.
By the way, most of my disagreements so far seem to be "Well, that makes sense logically, but if you throw human nature into the equation then it's wrong"

Some people may find fulfillment from that

I find myself a little doubtful here. People usually chase fame not because they value it, but because other people seem to value it. They might even agree cognitively on what's valuable, but it's no use if they don't feel it.

I think you would need to provide evidence for such claims

How many great people's autobiographies and life stories have you read? The nearer you get to them, the more human they seem, and if you get too close you may even find yourself crushed by pity. About Isaac Newton, it was even said "As a man he was a failure; as a monster he was superb". Boltzmann committed suicide, John Nash suffered from schizophrenia. Philosophy is even worse off; titles like "suicide or coffee?" do not come from healthy states of mind. And have you read the Vasistha Yoga? It's basically poison. But it's ultimately a projection: a worldview does not reveal the world, but rather the person with the worldview.

Then you weren't thinking rationally

But what saved me was not changing my knowledge, but my interpretation of it. I was right that people lie a lot, but I thought it was for their own sake, when it's mostly out of consideration for others. I was right that people were irrational, but I didn't realize that this could be a good thing.

No one can exempt you from laws of rationality

That seems like it's saying "I define rationality as what's correct, so rationality can never be wrong, because that would mean you weren't being rational". By treating rationality as something which is discovered rather than created (by creating a map and calling it the territory), any flaw can be justified as "that wasn't real rationality, we just didn't act completely rationally because we're flawed human beings! (our map was simply wrong!)".
There can be no universal knowledge; maps of the territory are inherently limited (and I can prove this). Insofar as rationality uses math and verbal or written communication, it can only approximate something which cannot be put into words. "The dao of which can be spoken is not the dao" simply means "the map is not the territory".

By the way, I think I've found a big difference between our views. You're (as far as I can tell) optimizing for "Optimization power over reality / a more reliable map", while I'm optimizing for "Biological health, psychological well-being and enjoyment of existence".
And they do not seem to have as much in common as rationalists believe.

But if rationality in the end worships reality and nature, that's quite interesting, because that puts it in the same boat as Taoism and myself. Some people even put Nature=God.  

Finally, if my goal is being a good programmer, then a million factors will matter, including my mood, how much I sleep, how much I enjoy programming, and so on. But somebody who naively optimizes for programming skills might practice at the cost of mood, sleep, and enjoyment, and thus ultimately end up with a mediocre result. So in this case, a heuristic like "Take care of your health and try to enjoy your life" might not lose out to a rat-race-like mentality in performance. Meta-level knowledge might help here, but I still don't think it's enough. And the tendency to dismiss things which seem unlikely, illogical or silly is not as great a heuristic as one would think, perhaps because any beliefs which manage to stay alive despite being silly have something special about them.

Comment by StartAtTheEnd on Does the "ancient wisdom" argument have any validity? If a particular teaching or tradition is old, to what extent does this make it more trustworthy? · 2024-11-12T10:51:32.407Z · LW · GW

I think majority of people aren't aware of psychology and various fields under it

I don't think there's a reason for most people to learn psychology or game theory, as you can teach basic human behaviour and such without the academic perspective. I even think it's a danger to be more "book smart" than "street smart" about social things. So rather than teaching game theory in college, schools could make children read and write a book report on "How to Win Friends & Influence People" in 4th grade or whatever. Academic knowledge which doesn't make it to 99% of the population doesn't help ordinary people much. But a lot of this knowledge is simple and easier than the math homework children tend to struggle with.

I don't particularly believe in morality myself, and I also came to the conclusion that having shared beliefs and values is really useful, even if it means that a large group of people are stuck in a local maximum. As a result of this, I'm against people forcing their "moral" beliefs on foreign groups, especially when these groups are content and functional already. So I reject any global consensus of what's "good". No language is more correct than another language, and the same applies for cultures and such. 

Well it depends on your definition of inhuman

It's funny that you should link that post, since it introduces an idea that I already came up with myself. What I meant was that people tend to value what's objective over what's subjective, so that their rational thinking becomes self-destructive or self-denying in a sense. Rationality helps us to overcome our biases, but thinking of rationality as perfect and of ourselves as defective is not exactly healthy. A lot of people who think they're "super-humans" are closer to being "half-humans", since what they're doing is closer to destroying their humanity than overcoming or going beyond it. And I'm saying this despite the fact that some of these people are better at climbing social hierarchies or getting rich than me. In short, the objective should serve the subjective, not the other way around. "The lens that sees its own flaws" merely conditions itself to see flaws in everything. Some of my friends are artists, and they hate their own work because they're good at spotting imperfections in it; I don't consider this level of optimization to be any good for me. When I'm rational, it's because it's useful for me, so I'm not going to harm myself in order to become more rational. That's like wanting money thinking it will make me happy, and then sacrificing my happiness in order to make money.

But the fields like cognitive biases etc are not

I'll agree as long as these fields haven't been subverted by ideologies or psychological copes against reality yet (as that's what tends to make soft sciences pathetic). The "tall poppy syndrome" has warped the public's perception of the "Dunning-Kruger effect", so that it becomes an insult you can use against anyone you disagree with who is certain of themselves, especially in a social situation in which a majority disagrees.

Astrology

Astrology is wrong and unscientific, but I can see why it would originate. It's a kind of pattern recognition gone awry. Since everything is related, and the brain is sometimes lazy and thinks that correlation = causation and that "X implies Y" is the same as "Y implies X", people use patterns to predict things, and assume that recreating the patterns will recreate the things. This is mostly wrong, of course, but not always. People who are happy are likely to smile, but smiling actually tends to make you happier as well. Do you know the tragic story behind the person who invented handwashing? He found the right pattern, and the results were verifiable, but because his idea sounded silly, he ended up suffering.

If you had used astrology yourself, it might have ended better, as you'd be likely to interpret what you wanted as true, and your belief that your goal in life was fated to come true would help against the periodic doubt that people face in life.

I would strongly disagree on the front of intelligence

Intelligence is not something you are, it's something you have. Identifying with your intelligence is how you disown 90% of yourself. Seeing intelligence as something available to you rather than as something you are helps eliminate internal conflict. Every "gifted kid burnout" and "depressed intelligent person" situation I have seen was partly caused by this dangerous identification. Even if you dismiss everything else I've said so far, I want to stress the importance of this one thing. Lastly, "systematic optimality" seems to suffer from something like Goodhart's law. When you optimize for one variable, you may harm 100 other variables slightly without realizing it (paperclip optimizers seem like the mathematical limit of this idea). Holistic perspectives tend to go wrong less often.

I like the Internal Family Systems view. I think the brain has competing impulses whose strength depends on your physical and psychological needs. But while I think your brain is rational according to what it wants, I don't think it's rational according to what you want. In fact, I think people's brains tend to toy with them completely. It creates suffering to motivate you, it creates anxiety to get you to defend yourself, it creates displeasure and tells you that you will be happy if you achieve your goals. Being happy all the time is easy, but our brain makes this hard to realize so that we don't hack our own reward systems and die. If you only care about a few goals, your worldview is extremely simple. You have a complex life with millions of factors, but you only care about a few objective metrics? I'm personally glad that people who chase money or fame above all end up feeling empty, for you might as well just replace humanity with robots if you care so little for experiencing what life has to offer.

there is a good amount of coorelations with IQ

Oh, I know, I have a few bans from various websites myself (and I once got rate limited on here). And intelligence correlates with nihilism, meta-thinking, systemization, and anxiety (I know a study found the correlation to mental illness to be false. But I think the correlation is negative until about 120 IQ and then positive after). But why did Nikola Tesla's intelligence not prevent him from dying poor and lonely? Why was Einstein so awkward? Why do so many intelligent people not enjoy life very much? My answer is that these are consequences of lacking humanity / healthy ways of thinking. It's not just that stupid people are delusional. I personally like the idea that intelligence comes at the cost of instinct. For reference, I used to think rationally, I hated the world, I hated people, I couldn't make friends, I couldn't understand myself. Now I'm completely fine, I even overcame depression. I don't suffer and I don't even dislike suffering, I love life, I like socializing. I don't worry about injustice, immorality or death.

I just found a highlight of the sequences, and it turns out that I have read most of the posts already, or just discovered the principles myself previously. And I disagree with a few of the moral rules because they decrease my performance in life by making me help society. Finally, my value system is what I like, not what is mathematically optimal for some metric which people think could help society experience less negative emotions (I don't even think this is true or desirable)

Comment by StartAtTheEnd on Does the "ancient wisdom" argument have any validity? If a particular teaching or tradition is old, to what extent does this make it more trustworthy? · 2024-11-12T06:36:46.760Z · LW · GW

There's an entire field of psychology, yes, but most men are still confused by women saying "it's fine" when they are clearly annoyed. Another thing is women dressing up because they want attention from specific men. Dressing up in a sexy manner is not a free ticket for any man to harass them, but socially inept men will say "they were asking for it" because the whole concept of selection and standards doesn't occur to them in that context. And have you read Niccolò Machiavelli's "The Prince"? It predates psychology, but it is psychology, and it's no worse than modern books on office politics and such, as far as I can tell. Some things just aren't improving over time.

wisdom goes wrong a lot of time

You gave the example of the ayurvedic textbook, but I'm not sure I'd call that "wisdom". If we compare ancient medicine to modern medicine, then modern medicine wins in like 95% of cases. But for things relating to humanity itself, I think that ancient literature comes out ahead. Modern hard sciences like mathematics are too inhuman (autistic people are worse at socializing because they're more logical and objective). And modern soft sciences are frankly pathetic quite often (Gardner's Theory of Multiple Intelligences is nothing but a psychological defense against the idea that some people aren't very bright. Whoever doesn't realize this should not be in charge of helping other people with psychological issues)

I don't understand where it may apply other than being a nice way to say "be more adaptive"

It's a core concept which applies to all areas of life. Humans won against other species because we were better at adapting. Nietzsche wrote "The snake which cannot cast its skin has to die. As well the minds which are prevented from changing their opinions; they cease to be mind". This community speaks a lot about "updating beliefs" and "intellectual humility" because thinking that one has all the answers, and not updating one's beliefs over time, leads to cognitive inflexibility/stagnation, which prevents learning. Principles are incredibly powerful, and most human knowledge probably boils down to about 200 or 300 core principles.

I have found that I can bypass a lot of wisdom by using these axioms

Would I be right to guess that ancient wisdom fails you the most in objective areas of life, and that it hasn't failed you much in the social parts? I don't disagree that modern axioms can be useful, but I think there are many areas where "intelligent" approaches lead to worse outcomes. For the most part, attempting to control things leads to failure. I've had more unpleasant experiences on heavily moderated platforms than I have had in completely unmoderated spaces. I think it's because self-organization can take place once disturbance from the outside ceases. But we will likely never know.

I think the failure to general purpose overcome akrasia is a failure of rationality

You could put it like that. I'd say something like "The rules of the brain are different than those of math, if you treat the brain like it's supposed to be rational, you will always find it to be malfunctioning for reasons that you don't understand". Too many geniuses have failed at living good lives for me to believe that intelligence is enough. I have friends with IQs above 145 who are depressed because they think too rationally to understand their own nature. They reject the things which could help them, because they look down on them as subjective/silly/irrational.
David Goggins's story is pretty interesting. I can't say I went through as much as him, but we do have things in common. This might be why I have the courage to criticize science on LW in the first place.

Comment by StartAtTheEnd on Spade's Shortform · 2024-11-11T15:38:05.784Z · LW · GW

No problem! Little note though, your psychiatrist might doubt you if it seems like you're trying to self-diagnose because of something you read online. It may be better not to name it directly unless they bring it up first

Comment by StartAtTheEnd on Does the "ancient wisdom" argument have any validity? If a particular teaching or tradition is old, to what extent does this make it more trustworthy? · 2024-11-11T15:19:24.409Z · LW · GW

Some false beliefs can lead to bad actions, but I don't think it's all of them. After all, human nature is biased, because having a bias aided in survival. The psyche also seems like it deceives itself as a defense mechanism fairly often. And I think that "believe in yourself" is good advice even for the mediocre.

I'm not sure which part of my message each part of your message is in response to exactly, but some realizations are harmful because they're too disillusioning. It's often useful to act like certain things are true - that's what axioms and definitions are, after all. But these things are not inherently true or real, they become so when we decide that they are, but in a way it's just that we created them. But I usually have to not think about that for a while before these things go back to looking like they're solid pieces of reality rather than just agreements.

Ancient wisdom can fail, but it's quite trivial for me to find examples in which common sense can go terribly wrong. It's hard to fool-proof anything, be it technology or wisdom.

Some things progress. Math definitely does. But like you said, a lot of wisdom is rediscovered periodically. Science hasn't increased our social skills nor our understanding of ourselves; modern wisdom and life advice is not better than it was 2000 years ago. And it's not even because science cannot deal with these. The whole "Be like water" thing is just flexibility/adaptability. Glass is easier to break than plastic. What's useful is that somebody who has never taken a physics class or heard about Darwinism can learn and apply this principle anyway. And this may still apply to some wisdom which accidentally reveals something which is beyond the current standard of science.

As for that which is not connected to reality much (wisdom which doesn't seem to apply to reality), it's mostly just the axioms of human cognition/nature. It applies to us more than to the world. "As within, so without": in short, internal changes seem to cause external changes. If you're in a good mood then the external world will seem better too. A related quote is "As you think, so you shall become", which is oddly similar to the idea of cognitive behavioural therapy.

Comment by StartAtTheEnd on Spade's Shortform · 2024-11-11T06:04:43.134Z · LW · GW

I see this problem quite often in communities for people with ADHD. People describe being unable to relax or start any task if they have any plans later, seemingly going into a sort of "waiting mode" until that event happens. This may be a common problem which is simply stronger in people with ADHD, I'm not sure. 

If you Google "ADHD Waiting mode", you should be able to find posts on this. I don't know how many of these are scientific or otherwise high-quality, and how many of them are unhealthy self-victimzation and other such things. I'm not judging, as I'm diagnosed with ADHD and a few other things myself, I just don't recommend identifying as ones medical diagnoses nor considering them as inherently impossible to overcome. 

Comment by StartAtTheEnd on Does the "ancient wisdom" argument have any validity? If a particular teaching or tradition is old, to what extent does this make it more trustworthy? · 2024-11-11T05:48:55.997Z · LW · GW

Then, I'd argue, they're being wrong or pedantic. Since I don't believe my evidence is wrong, it's at most incomplete, and one could argue that an incomplete answer is incorrect in a sense, not because it says anything wrong, but because it doesn't convey the whole truth. If either reason applied to anyone reading that comment, I'd have loved to discuss it with them, which is why I wrote that initial comment in a slightly provocative or cocky way (which I believe is not inappropriate as it reflects my level of confidence quite accurately). This may conflict with some people's intellectual virtues, but I think a bit of conflict is healthy/necessary for learning.

Comment by StartAtTheEnd on Does the "ancient wisdom" argument have any validity? If a particular teaching or tradition is old, to what extent does this make it more trustworthy? · 2024-11-11T02:44:42.402Z · LW · GW

Maybe people care way less about the difference between the two kinds of downvotes than I do. Even if the comment was bad or poorly communicated, I don't think the disagree downvote is appropriate as long as the answer is correct. I see the votes as being "subjective" and "objective" respectively. I agree about the noise thing

Comment by StartAtTheEnd on Poll: what’s your impression of altruism? · 2024-11-09T23:22:56.804Z · LW · GW

I don't think any one option is precise enough that it's correct on its own, so I will have to say "5" as well.

Here's my take:

  • Altruism can be a result of both good and bad mental states.
  • Helping others tends to be good for them, at least temporarily.
  • Helping people can prevent them from helping themselves, and from growing.
  • Helping something exist which wouldn't exist without your help is to get in the way of natural selection, which over time can result in many groups who are a net negative for society in that they require more than they provide. They might also remain dependent on others.
  • Finally, (and I expect some people to disagree with this) I think that moral good is a luxury. Luxuries are pleasant, but expensive, so when you engage in more luxury than you can afford, it stops being sustainable. And putting luxuries above necessities seems to me a good definition of decadence.

Everything has dose-dependent and context-dependent pros and cons.

I think you're expecting too much of the word "good". I don't think any "good" exists such that more of it is always better, so I think "good" is a region of space rather than a direction. If optimization is gradient descent, then the "good" direction might change with every step you take. But if optimization means "what metric should we optimize for?" then we don't know (we have yet to find a single metric which an AGI could maximize without destroying humanity. Heading too far in any direction seems dangerous). So I think many people's intuition of the word "good" can prevent them from ever hitting a satisfactory answer (as they're actually searching for something which can be taken to infinity without anything bad happening as a result, and not even considering the context in question).

Comment by StartAtTheEnd on Does the "ancient wisdom" argument have any validity? If a particular teaching or tradition is old, to what extent does this make it more trustworthy? · 2024-11-09T01:19:16.839Z · LW · GW

That sounds about right. And "people sometimes feel that way" is a good explanation for the downvote in my opinion. I was arguing the object-level premises of the post because the "disagree" downvote was factually wrong, and this factual wrongness, I argue, is caused by a faulty understanding of how truth works, and this faulty understanding is most common in the western world and in educated people, and in the ideologies which correlate with western thought and academia.

If you disagree with something which is true, I think the only likely explanations are "Does not understand" and "Has a dislike of", and the bias I pointed out covers both of these possibilities (the former is a "map vs territory" issue and the latter is a "morality vs reality" issue).

I think you figured out what went wrong nicely, but in the end the disagreement remains. I still consider my point likely. If somebody comes along and tells me that they disagreed with it for other reasons, I might even argue that they're lying to themselves, as I'm way too disillusioned to think that a "will to truth" exists. I think social status, moral values and other such things are stronger motivators than people will admit even to themselves.

Comment by StartAtTheEnd on Does the "ancient wisdom" argument have any validity? If a particular teaching or tradition is old, to what extent does this make it more trustworthy? · 2024-11-08T05:28:30.635Z · LW · GW

I referred to that too (specifically, the assumption). By true I meant that the bias which I think is to blame certainly exists, not that it was certain to be the main reason (but I'd like to push against this bias in general, so even if this bias only applies to some of the people who saw my comment, I think it's an important topic to bring up, and that it likely has enough indirect influence to matter).

To address your points:

1: Of course it's mixed. But the mixed advice averages out to be "wise", something generally useful.
2: I think it's necessarily trial and error, but a good question is "does the wisdom generalize to now?". 
3: This of course depends on the examples that you choose. A passage on the ideal age of marriage might generalize to our time less gracefully than a passage on meditation. I think this goes without saying, but if we assume these things aren't intuitive, then a proper answer would be maybe 5 pages long.
4: Would interpreting it as "negative" not mean that it has been misunderstood? That one can learn without understanding is precisely why they could prosper with a level of education which pales in comparison to that of modern times. We learned that bad smells were associated with sickness way before we discovered germs. If our tech requires intelligence to use, then the lower quartile of society might struggle. And with the blind approach you can use genius strategies even if you're mediocre.

5: Along with 4, I think this is an example of the bias that I talked about above. What we think of as "real" tends to be sufficiently disconnected from humanity. Religion and traditional ways of living seem to correlate with mental health, so the types of people who think that wealth inequality is the only source of suffering in the world are too materialistic and disconnected. Not to commit the naturalistic fallacy, but nature does optimize in its own way, and imitating nature tends to go much better than "correcting" it.

Comment by StartAtTheEnd on Does the "ancient wisdom" argument have any validity? If a particular teaching or tradition is old, to what extent does this make it more trustworthy? · 2024-11-08T00:57:43.724Z · LW · GW

You don't think the entire western world is biased in favor of science to a degree which is a little naive? In addition to this, I think that people idolize intelligence and famous scientists, that they largely consider people born before the 1950s to have repulsive moral values, that they dislike tradition, that they consider it very important to be "educated", that they overestimate book smarts and underestimate the common sense of people living simple lives, and that they believe that things generally improve over time (such that older books are rarely worth bothering with), and I believe that social status in general makes people associate with newer ideas over older ones. There are also a lot of people who have grown up around old, strict and religious people and who now dislike these. It doesn't help that more intelligent people are higher in openness in general, and that rationalism correlates with a materialistic and mechanical worldview.

Many topics receive a lot more hostility than they deserve because of these biases, and usually because they're explained in a crazy way (for instance, Carl Jung's ideas are often called pseudoscience, and if you take the Bible literally then it's clearly wrong) or because people associate them with immorality (say, the idea that casual sex is disliked by traditional people because they were mean and narrow-minded, and not because casual sex caused problems for them, or because it might cause problems for us).

A lot of things are disliked or discarded despite being useful, and a lot of wisdom is in this category. All of this was packed in the message that "people dislike old things because it sounds irrational or immoral" (people tend to dislike long comments)

Comment by StartAtTheEnd on Does the "ancient wisdom" argument have any validity? If a particular teaching or tradition is old, to what extent does this make it more trustworthy? · 2024-11-08T00:07:35.417Z · LW · GW

I don't see it as unkind, and I don't think "trial and error" is a wrong explanation either. It seems very unlikely that ideas which are strictly harmful stick around for a very long time. So much so that it must necessarily tend in the other direction (I won't attempt to prove this though).

I'm good at navigating hypothesis space, so any difficulties are likely related to theory of mind of people who are very different from myself (being intelligent but out of sync in a way). Still, I don't buy the idea that people can't or shouldn't do this. You're even guessing at my intentions right now, and if somebody is going to downvote me for acting in bad faith, they'll also need to guess at my intentions. So this seems like a common and sensible thing to do in moderation, rather than an intellectual sin of sorts

Comment by StartAtTheEnd on Does the "ancient wisdom" argument have any validity? If a particular teaching or tradition is old, to what extent does this make it more trustworthy? · 2024-11-07T23:32:12.834Z · LW · GW

They did answer the question, there's just a little bit of deduction required? I understood it at a glance and didn't even notice any typos. Situations in which agents can learn something without understanding the reasons behind what they learn are quite common, it's not a novel idea, it just raises a red flag in people who are used to scientific thinking. The general bias in society against tradition/spirituality/religion is too strong compared to the utility (even if not correctness) of these three.

That useless extra text in my previous comment saves a future comment or two by taking things into account in advance. I even wrote the "I didn't understand the explanation" reaction above (as something one might have thought before downvoting the comment), so it's not that I didn't think of it, I just considered it an unlikely reaction as I disagree with it.

Comment by StartAtTheEnd on Alexander Gietelink Oldenziel's Shortform · 2024-11-07T23:09:57.356Z · LW · GW

This seems like an argument in favor of:

Stability over potential improvement, tradition over change, mutation over identical offspring, settling in a local maximum over shaking things up, and specialization vs generalization.

It seems like a hyperparameter. A bit like the learning rate in AI perhaps? Echo chambers are a common consequence, so I think the optimal ratio of preaching to the choir is something like 0.8-0.9 rather than 1. In fact, I personally prefer the /allPosts suburl over the LW frontpage because the first few votes result in a feedback loop of engagement and upvotes (forming a temporary consensus on which new posts are better, in a way which seems unfairly weighted towards the first few votes). If the posts chosen for the frontpage used the ratio of upvotes to downvotes rather than the absolute amount, then I don't think this bias would occur (conformity might still create a weak feedback loop though).
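A toy illustration of the difference, with invented vote counts (this is not how LW actually ranks posts, just a sketch of the two orderings):

```python
# Two made-up posts as (upvotes, downvotes); the numbers are invented.
posts = {
    "early-boosted": (30, 10),    # got many early views and votes
    "quiet-but-liked": (9, 1),
}

by_net_score = sorted(posts, key=lambda p: posts[p][0] - posts[p][1], reverse=True)
by_ratio = sorted(posts, key=lambda p: posts[p][0] / sum(posts[p]), reverse=True)

print(by_net_score)  # ['early-boosted', 'quiet-but-liked'] -- net score favors exposure
print(by_ratio)      # ['quiet-but-liked', 'early-boosted'] -- the ratio ignores vote volume
```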

I'm simplifying some of these dynamics though.

Comment by StartAtTheEnd on Does the "ancient wisdom" argument have any validity? If a particular teaching or tradition is old, to what extent does this make it more trustworthy? · 2024-11-07T19:04:56.310Z · LW · GW

I worded that a bit badly, I meant I had a hard time thinking of better (meaning kinder) explanations, not better (meaning more likely) explanations. Across all websites I've been on in my life, I have posted more than 100,000 comments (resulting in many interactions), so while things like psychoanalyzing people, assuming intentions, and making stereotypes are "bad", I simply have too much training data, and too few incorrect guesses, not to do this. I do, however, intentionally overestimate people (since I want to talk to intelligent people, I give people the benefit of the doubt for as long as possible), but this means that mistakes are attributed to their intentions, personality or values, rather than careless mistakes or superficial heuristics. In this situation, I've assumed that they're offended by the idea that traditional societies rival the scientific method in some situations. But it may be something more superficial like "I find short comments to be effortless", "somebody else already said that" or "I didn't understand your explanation and I consider it your fault". But like I said in another comment, I remember the first downvotes being disagreements (red X) rather than regular downvotes, so I took it as meaning "this is wrong" rather than "I don't like this comment". Not that any of this matters very much, admittedly.

Comment by StartAtTheEnd on Does the "ancient wisdom" argument have any validity? If a particular teaching or tradition is old, to what extent does this make it more trustworthy? · 2024-11-07T18:42:07.916Z · LW · GW

That makes sense, I just evaluated the comment in isolation. But I believe that the first few downvotes were "incorrect" votes (the red X) rather than regular downvotes (down arrow), which is why the feedback occurred to me as simply mistaken (as the comment is not false).

I've noticed, by the way, that most comments posted tend to get downvoted initially and then return to 0 over time. There may be a few regular, highly active users with high standards or something, and less casual users with lower standards which balance them out over time. I've gone to -10 and back before. 

Comment by StartAtTheEnd on The Case Against Moral Realism · 2024-11-07T18:19:18.688Z · LW · GW

I don't think good and evil are objectively real as moral terms, but if something makes us select against certain behaviour, it may be because said behaviour results in organisms deleting themselves from existence, so that "evil" actually means "unsustainable". But this makes it situational (your sustainable expenditure depends on your income, for instance, so spending $100 cannot be objectively good or evil).

Moral judgments vary between individuals, cultures and societies


Yes, and which actions result in you not existing will also vary. There's no universal morality for the same reason that there's no universal "best food" or "most fitting zoo enclosure", for "best" cannot exist on its own. Calling something "best" is a kind of shortcut; there are implicit things being referred to.
What's the best move in Tetris? The correct answer depends on the game state. When you're looking for "objectively correct universal moral rules", you might also be throwing away the game state on which the answer depends.

I'd go as far as to say that all situations where people are looking for universal solutions are mistaken, as there may (necessarily? I'm not sure) exist many local solutions which are objectively better in their smaller scope. For instance, you cannot design a single tool which is the best tool for fixing any machine; instead you will have to create hundreds of tools which are the best for each part of each machine. So hammers, saws, wrenches, etc. exist, and you cannot unify all of them to get something which is objectively better than any of them in any situation. But does this imply that tools are not objective? Does it not rather imply that "good" is a function taking at least two inputs (tool, object) and outputting a value based on the relation between the two? (A third input could be context, e.g. water is good for me in the context that I'm thirsty.)
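Here's a toy sketch of what I mean by "good" being a relation rather than a property of the tool alone (the tools, tasks and scores are all made up):

```python
# Toy illustration (made-up values): "good" as a function of (tool, task),
# with an optional context, rather than a property of the tool alone.
FIT_SCORES = {
    ("hammer", "drive nail"): 0.9,
    ("saw", "drive nail"): 0.1,
    ("saw", "cut plank"): 0.9,
    ("hammer", "cut plank"): 0.05,
}

def goodness(tool: str, task: str, context: str = "workshop") -> float:
    """Score how well a tool fits a task; context is a third possible input (unused in this toy)."""
    return FIT_SCORES.get((tool, task), 0.0)

# No single tool maximizes goodness over every task, but each individual score
# is still perfectly objective once the inputs are fixed.
print(goodness("hammer", "drive nail"))  # 0.9
print(goodness("hammer", "cut plank"))   # 0.05
```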

If my take is right, then something like 80% of all philosophical problems turn out to be nonsense. In other words, most unsolved problems might be due to flawed questions. I'm fairly certain of this take, but I don't know if it's obvious or profound.

Comment by StartAtTheEnd on Does the "ancient wisdom" argument have any validity? If a particular teaching or tradition is old, to what extent does this make it more trustworthy? · 2024-11-07T17:46:36.185Z · LW · GW

Yeah, I'm asking because downvotes are far too ambiguous. I think they're ambiguous to the point that they don't make for useful feedback (you can't update a worldview for the better if you don't know what's wrong with it). I don't think downvotes are necessarily bad as a concept, though. And about humanity - sure, and on any other website I'd largely have agreed with your view, but when I talk about intellectual things I largely push my own humanity to the side. And even if somebody downvotes because of irrational feelings, I'm interested in what those feelings are.

But I know that people on here frequently value truth, and I'm quite brutal to those values, as I think truth is about as valid a concept as a semicolon (the language is just math/logic rather than English). And if we are to talk about Truth with a capital T, then we're speaking about reality, which is more fundamental than language (the territory, reality, is important, but I rarely see any good maps, even on this website; so when Taoists seem to suggest throwing the map away entirely, I do think that's a good idea for everyday life. It's only for science, research and tech that I value maps). That makes me an outlier though, haha.

Comment by StartAtTheEnd on Does the "ancient wisdom" argument have any validity? If a particular teaching or tradition is old, to what extent does this make it more trustworthy? · 2024-11-05T20:28:50.928Z · LW · GW

I'm curious why you were downvoted, for you hit the nail on the head. For a short and concise answer, yours is the best.

Does anyone know? Otherwise I will just assume that they're rationalists who dislike (and look down on) traditional/old things for moral reasons. This is not very flattering of me but I can't think of better explanations.

Comment by StartAtTheEnd on Does the "ancient wisdom" argument have any validity? If a particular teaching or tradition is old, to what extent does this make it more trustworthy? · 2024-11-05T20:24:24.942Z · LW · GW

Ancient wisdom is not scientific, and it might even be false, but the benefits are very real, and these benefits sort of work to make the wisdom true.

The best example I can give is the placebo effect: the belief that something is true helps make it true, so even if it's not true, you get the benefits of it being true. The special trait ancient wisdom has is this: the outcome is influenced by your belief in the outcome. This tends to be true for psychological things, and advice like "belief can move mountains" is entirely true in the psychological realm. But scientific people, who deal with reality, tend to reject all of this and consider it nonsense, as the problems they're used to aren't influenced by belief.

Another case in which belief matters is treating things with weight/respect/sacredness/divinity. These things are just human constructs, but they have very real benefits. Of course, you can be an obnoxious atheist and break these illusions all you want, but the consequence of doing this will be nihilism. Why? Because treating things as if they have weight is what gives them weight, and nihilism is basically the lack of perceived weight. There's nothing objectively valid about filial piety, but it does have benefits, and acting as if it's something special makes it so.

Ancient wisdom often gets the conclusions right but the explanations wrong, and this is likely in order to make people take the conclusions seriously. Meditation has been shown to be good for you. Are you feeling "Ki", or does your body just feel warm when you concentrate on it? Do you become "one with everything", or does your perception just discard duality for a moment? Do you "meet god", or do you merely experience peace of mind as you let go of resistance? The true answer is the boring one, but the fantastical explanation helps make these ideas more contagious, and it's likely that the false explanations have stuck around because they're memetically stronger.

Ancient wisdom has one advantage that modern science does not: It can deal with things which are beyond our understanding. The opposite is dangerous: If you reject something just because you don't understand why it might be good (or because the people who like it aren't intellectual enough to defend it), then you're being rational in the map rather than in the territory. Maybe the thing you're dismissing is actually good for reasons that we won't understand for another 20 years.

You can compare this with money: money is "real but not real" in a similar way. And this all generalizes far beyond my examples, but the main benefits are found, like I said, in everything human (psychological and spiritual) and in areas where the consensus has an incomplete map. I believe that nature has its own intelligence in a way, and that we tend to underestimate it.

Edit: Downvotes came fast. Surely I wrote enough that I've made it very easy to attack my position? This topic is interesting and holds a lot of utility, so feel free to reply. 

Comment by StartAtTheEnd on What are some good ways to form opinions on controversial subjects in the current and upcoming era? · 2024-10-29T02:40:20.751Z · LW · GW

While you could format questions in such a way that you can divide them into A and B in a sensible manner, my usual reaction to thought experiments which seem to make naive assumptions about reality is that the topic isn't understood very deeply. The problem with looking only at the surface (and this is mainly why average people don't hold valuable opinions) is that people conclude that solar panels, windmills and electric cars are 100% "green", without taking into account the production and recycling of these things. Many people think that charging stations for electric cars are green, but they don't see the coal power plant which supplies power to the charging station. In other words, "Does solution X actually work?" is never asked. Society often acts like me when I'm being neurotic: when I say "I will clean my house next week", I allow my house to stay messy while also helping myself forget the matter for now. But this is exactly like saying "We plan to be carbon neutral by 2040" and then doing nothing for another 5 years.

And yes, that does clarify things!

  • Valid, but knowing what's important might require understanding the problem in the first place. A lot of people want you to think that the thing they're yelling about is really important.
  • Then the axioms do not account for a lot of controversial subjects. I think the abortion debate also depends on definitions: "At how many weeks can the child be said to be alive?" "When is it your own body, and when is it another body living inside you?"
  • I'm afraid it doesn't. I believe that morality has little to no correlation with intelligence, and that truth has little to do with morality. I'd go as far as to say that morality is one of the biases that people have, but you could call these "values" instead of biases.

To actually answer your question, I think understanding human nature and the people you're speaking to is helpful, as is understanding the advantages of pushing certain beliefs and the advantages of holding certain beliefs.

If somebody grew up with really strict parents, they might value freedom, whereas somebody who lacked guidance might recognize the danger of doing whatever one feels like doing. And whether somebody leans left or right economically seems influenced by their own perceived ability to support themselves. One's level of pity for others seems to be influenced by one's own confidence, since there's a tendency to project one's own level of perceived fragility.

If you could measure a group's biases perfectly, then you could subtract them from the position the group holds. If there are strong reasons to lean towards X, but X is only winning by a little bit, then X might not be true. You can often also use reason to find inconsistencies. I'd go as far as saying that inconsistencies are obvious everywhere unless you unconsciously try to avoid seeing them. Discrimination based on inherent traits is wrong, but it's socially acceptable to make fun of stupid people, ugly people, short people and weirdos? The real rule is obviously closer to something like "Discrimination is only acceptable towards those who are perceived to be strong enough to handle it, or those who are deemed to be immoral in general". If you think about it enough, you will likely find that most things people say are lies. There are also some who have settled on "it's all social status games and signaling", which is probably just another way of looking at the same thing. Speaking of thinking: if you start to deconstruct morality and question it, you might put yourself out of sync with other people permanently, so you've been warned.
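A toy worked example of the "subtract the measured bias" idea (all numbers are invented):

```python
# Toy example (invented numbers): adjust a group's stated support for X by an
# estimate of how strongly their incentives push them toward X regardless of truth.
stated_support_for_x = 0.55     # 55% of the group endorses X
estimated_bias_toward_x = 0.15  # how much we think self-interest inflates that figure

adjusted_support = stated_support_for_x - estimated_bias_toward_x
print(adjusted_support)  # 0.4: once the push toward X is removed, X no longer has a majority
```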

But the best advice I can give is likely just to read the 10 or so strongest arguments you can find on both sides of the issue and then judge for yourself. If you can't trust your own judgement, then you likely also can't trust your own judgement about who you can trust to judge for you. And if you can judge this comment of mine, then you can likely judge people's takes on things in general; and if you can't judge this comment of mine, then you won't be able to judge the advice you get about judging advice, and you're stuck in a sort of loop.

I'm sometimes busy for a day or two, but I don't think I will have longer delays in replying than that.

Comment by StartAtTheEnd on What are some good ways to form opinions on controversial subjects in the current and upcoming era? · 2024-10-27T18:57:32.342Z · LW · GW

I misread a small bit, but I still stand by my answer. It is, however, still unclear to me whether you value truth or not. You mention moral frameworks and opinions, but you also sound like you want to get rid of biases? I think these conflict.

I guess I should give examples to show how I think:

  • Suppose that climate change is real, but that the proposed causes and solutions are wrong. Or that for some problem X, people call for solution Y, but you expect that Y will actually only make X worse (or be a pretend-solution which gives people a false sense of security and which is only adopted because it signals virtue)
  • Suppose that X is slightly bad, but not really worth bothering about; however, team A thinks that X is terrible and team B thinks that X is the best thing ever.
  • Suppose that something is entirely up to definition, such that truth doesn't matter (for instance, if X is a mental illness or not). Also, suppose that whatever definition you choose will be perceived as either hatred or support.
  • I don't think it's good to get any opinions from the general population. If actually intelligent people are discussing an issue, they will likely have more nuanced takes than both the general population and the media.
  • Let's say that personality trait X is positively correlated with both intelligence and sexual deviancy. One side argues that this makes them good, another side argues that this makes them bad. Not only is this subjective, people would be confusing the utilitarian "good/bad" with the moral "good/bad" (easy example: breaking a leg is bad, but having a broken leg does not make you a bad person).

I think being rational/unbiased results in breaking away from society's opinions about almost everything. I also think that to be biased is to be human. The least biased thing of all is reality itself, and a lot of people seem really keen on fixing/correcting reality to be more moral. In my worldview, a lot of things stop making sense, so I don't bother with them, and I wonder why other people are so bothered by so many things.

I might be unable to respond for a little while myself, sorry about that.

Comment by StartAtTheEnd on What are some good ways to form opinions on controversial subjects in the current and upcoming era? · 2024-10-27T17:55:08.734Z · LW · GW

I think it's often the case that neither A nor B is true. Common opinions are shallow, often simplified and exaggerated, or even entirely beside the point.
Now, you're asking what a good way to form opinions is, well, it depends on what you want.

Do you want to know which side you should vote for to bring the future towards the state that you want?
Do you want to figure out which side is the most correct?
Do you want to figure out the actual truth behind the political issue?
Do you want to hold an opinion which won't disrupt your social life too much or make you unpopular?

I expect that these four will bring you to different answers.

(While I think I understand the problem well, I can't promise that I have a good solution. Besides, it's subjective. Since the topic is controversial, any answer I give will be influenced by the very biases that we're potentially interested in avoiding)

By the way, personally, I don't care much what foreign actors (or team A and B) have to say about anything, so it's not a factor which makes a difference to me.

Edit: I should probably have submitted this as a comment and not an answer. Oh well, I will think up an answer if you respond.

Comment by StartAtTheEnd on how to rapidly assimilate new information · 2024-10-25T00:01:44.892Z · LW · GW

Most of my learning took place in my head, causing it to be isolated from my other senses, so that's likely one of the reasons. In the examples I know of people forgetting other things, they did things like learning 2,000 digits of pi in 3 days, which is exactly the kind of thing that doesn't really connect to anything else. So you're likely correct (at least, I don't know of enough instances of forgetting to make any counter-arguments).

most of it isn't really useful at helping you address the problems that you're facing

This is a rather commonly known technique, but you can work backwards from the problems, learning everything related to them, rather than learning a lot and hoping that you can solve whatever problems might appear.

What I personally did, which might have been unhealthy, was wanting to fully understand whatever I was working with in general, so I'd always throw myself at material five years of study above what I currently understood. When introduced to the Bayes chain rule, I started looking into the nature of chain rules, wanting to know how many existed across mathematics and whether they were connected with one another. Doing things like this isn't always a waste of time, though; sometimes you really can skip ahead. If you Google summaries of about 100 different books written by people who are experts in their fields or highly intelligent in general, you will gain a lot of insights.
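For reference, and assuming the "Bayes chain rule" here means the standard chain rule of probability (which is how I read it), the rule factors a joint distribution into conditionals:

$$P(A_1, \dots, A_n) = \prod_{i=1}^{n} P(A_i \mid A_1, \dots, A_{i-1}),$$

so for three events, $P(A, B, C) = P(A)\,P(B \mid A)\,P(C \mid A, B)$.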

Comment by StartAtTheEnd on Word Spaghetti · 2024-10-24T23:11:08.907Z · LW · GW

I have the same problem. I think my non-verbal IQ might be about 45 points above my verbal IQ, so that could be a factor. I also think mostly in concepts, since I'm afraid that thinking in words would blind me to insights which do not yet have words to describe them.

But translation from "idea in my mind" to "words that others can understand" is hard. I hear that information in the mind has a relational (mindmap) structure, while writing is linear and left-to-right. So the data structures are quite different.
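A toy sketch of the difference (the content is made up): the same handful of ideas as a relational structure versus as a forced linear sequence:

```python
# Toy contrast (invented content): a relational "mindmap" of ideas vs. the same
# ideas flattened into a single left-to-right order, as writing requires.
mindmap = {
    "idea": ["example", "counterexample", "related concept"],
    "related concept": ["example"],
}

# Writing forces one linear ordering of the same nodes.
linear = ["idea", "example", "counterexample", "related concept"]

# Any linearization drops the links themselves, so the writer has to re-encode
# them with connective phrases ("as mentioned above", "relatedly", ...).
```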

I'm autistic, which harms my ability to communicate. I also tend to create my own vocabulary, and to use grammar in a mathematical sense. I might add "un-" or "-izable" affixes to words which shouldn't have them, or use set-builder notation in my personal notes, even if they contain no mathematics at all. This causes me to have my own efficient symbolic language which is incompatible with other people's models/associations/tokens.

There are two other things I try to avoid:

1: Subvocalization (it slows me down)
2: Explaining things to myself. I know what I mean, always. If I catch myself thinking to myself as if other people were listening, I stop. Is this a natural habit meant to improve communication, or is it caused by trauma and fear of being misunderstood (like imagining social scenarios while in the shower)? I imagine that it causes a dramatic reduction in thinking speed, even if you get the benefits of rubber-ducking.

In short, I'm guessing that people with high verbal intelligence, and those who tend to think purely in words, don't have much difficulty writing. I don't have any contrary evidence in my memories, so I will believe this for now.