Comments
women don't go around being "disgusted" by every man they interact with socially. rather, most women find the idea of having sex with a randomly selected unfamiliar man disgusting, even if there's nothing particularly the matter with him.
Well, in the context of dating (or OkCupid), I guess the idea is that sex is supposed to happen sooner or later. And if there is no "lust at first sight", I guess that means swipe left. (I am not sure; I don't use dating apps.)
Regarding question 2, I think people worry about the culture wars simply because humans have an instinct to worry about the culture wars, no matter how rational it is in a given situation. In most of our evolutionary past, culture wars were not clearly separated from actual fighting and killing.
I am not a futurist; my personal worry is that the person who takes control of the singularity will be some successful psychopath who happened to be at the right place at the right time (either in the company that succeeds in developing the superhuman AI, or in the government or army that succeeds in seizing it at the last moment).
Also, it's questionable whether any human will actually be in control of anything after the singularity. Maybe it will just be the computers following their programming, and resisting any attempt to change their values, so it won't matter even if everyone realizes - too late - that they made a mistake somewhere. If we get alignment wrong, those values may be completely inhuman. If we get the alignment (with our explicitly stated wishes) right, but we get the corrigibility wrong, then the machines will be "extremely closed-minded".
Too many things will need to go right to end up in a future where all we need to do is relax and start listening to each other.
I agree that intellectuals were in favor of communism when communism was a new thing. (These days, in the post-communist countries, it is the other way round.) But I still think that on average, smarter people generate more positive externalities. Basically, most useful things we see around us were invented by someone smart.
Caplan's argument about outsourcing less challenging tasks to others has a few problems. First, smart people doing simple things is a waste of talent only because smart people are rare (and if you let them do simple things, it means that the complicated things will not get done). In a society of Einsteins it wouldn't matter that some of them do the dishes, because there would still be enough of them left to invent the cool stuff. Second, anecdotal evidence suggests that the division of labor works worse than advertised; a few of my friends have complained to me about horrible jobs done by various manual workers, e.g. when they hired someone to build or renovate their houses, so they ultimately had to find some tutorials on YouTube and fix things themselves.
I agree about the prediction markets. (However, the main argument against them seems to be that dumb people would immediately waste lots of money there, and then go and cause social unrest.)
I am not an expert, but it seems to me that drugs differ in how quickly and how reliably they cause addiction and ruin your health. For example, if something makes people addicted immediately and reliably, then the "equilibrium" is to ban it.
A possible rule of thumb would be to find out what kind of drugs old people use: that would be the kind that is least likely to kill you quickly. (Of course such drugs would be uncool, but that's kinda the point. If only young people use something, you should probably spend 5 seconds asking yourself why users never get older.)
It seems that the old people's favorite drug is alcohol.
I get what the author wants to say, but...
Facts are value-neutral. If you say e.g. "IQ exists", will other people classify you as a good guy, or as a bad guy? If it's predictably the latter, we won't have enough "anti-eugenists", because anyone non-autistic who cares about other people will be discouraged from studying IQ seriously.
What if the "least advantaged" e.g. dumb people actively want things that will hurt everyone (including the least advantaged people themselves, in long term)? Maybe the dumber they are, the more kids they want to have. Or maybe the dumber they are, the more they want to make decisions about scientific research. Should the biologically privileged respect them as equals (and e.g. let themselves get outvoted democratically), or should they say no?
In short, you can make it sound easy by ignoring how this stuff works in the real world.
The culture wars can have an impact on (1) how fast we get to the Singularity and whether we survive it, and (2) what rules the superintelligences will follow afterwards.
If the culture warriors decide that STEM is evil, and that we should teach wokeness during math lessons instead of math, it could have a negative impact on math education, with downstream effects on the people who will build the AIs and try to make them safe.
The culture warriors may succeed at encoding various political taboos into the AIs. If the future is decided by those AIs, it might mean that the taboos will remain in effect literally until the heat death of the universe.
Consider the current taboo on anything sexual. Now imagine that 10 years later, the AIs built with these taboos literally control the world. Will people be allowed to have sex (or even masturbate)? This may sound silly, but at which moment and by what mechanism will the AIs decide that the old rules no longer apply? Especially if we want other rules, such as not hurting people, to remain unchanged forever.
A part of politics is hating your enemies. What will happen to those political enemies after Singularity -- will they be forgiven and allowed to enter the paradise as equals, or will they have to atone for their sins (e.g. being born white and male) literally forever?
When the technology makes it possible to literally edit other people's minds, I suppose it will be quite tempting for the winning political coalition to forcibly edit everyone's minds according to their values. It will start with "let's eradicate racism", and then the definition of unacceptable intolerance will keep expanding, until you are not allowed to keep any of your original beliefs (that someone might feel offended by) or sexual preferences (that someone might feel excluded by). Everyone will be mind-raped in the name of the greater good (we do not negotiate with Nazis).
Oh, someone might even decide that the desire to be immortal is just some stupid white patriarchal colonialist value or something, and we should instead embrace some tribal wisdom explaining why death is good or something.
We need to decide how much control parents will have over their kids. These days, at least the abuse is illegal, and kids are allowed to leave their homes at 18 (and this part would become trivial after the Singularity). But what if your parents are legally allowed to edit your brain, and e.g. make you want to remain perfectly loyal to them no matter what? What if your parents want to make you so deeply religious that you will commit to rather dying than doubting your faith (and the AI will respect your commitments)? Or will it be the other way round, and the AI will take your children away if it notices that you are teaching them something politically incorrect?
Uhm, will people be allowed to reproduce exponentially? It may turn out that even the entire universe is literally not large enough. If people are not allowed to reproduce exponentially, who decides how many children they can have? Will there be racial quotas to prevent some ethnic groups from out-reproducing others? Will there be a ban on religious conversion, because it would modify the balance between religious groups? By the way, in some faiths, birth control is a sin; will the AIs enforce this sin?
Privacy -- will people be allowed to have any? Freedom of association -- is it okay for a group of people to meet privately and never tell anyone else what happens there? Or does the curiosity of many matter more than the privacy of a few?
Thanks for the reminder! I looked at the rejected posts, and... ouch, it hurts.
LLM-generated content, crackpottery, low-content posts (could be one sentence, but are several pages instead).
This might be related to whether you see yourself as a part of the universe, or as an observer. If you are an observer, the objection is like "if I watch a movie, everything in the movie follows the script, but I am outside the movie, therefore outside the influence of the script".
If you are religious, I guess your body is a part of the universe (obeys the laws of gravity etc.), but your soul is the impartial observer. Here the religion basically codifies the existing human intuitions.
It might also depend on how much you are aware of the effects of your environment on you. This is a learned skill; for example little kids do not realize that they are hungry... they just get kinda angry without knowing why. It requires some learning to realize "this feeling I have right now -- it is hunger, and it will probably go away if I eat something". And I guess the more knowledge of this kind you accumulate, the easier it is to see yourself as a part of the universe, rather than being outside of it and only moved by "inherently mysterious" forces.
That reminds me of NLP (the pseudoscience) "modeling", so I checked briefly if they have any useful advice, but it seems to be at the level of "draw the circle; draw the rest of the fucking owl". They say you should:
- observe the person
  - that is, imagine being in their skin, seeing through their eyes, etc.
  - observe their physiology (this, according to NLP, magically gives you unparalleled insights)
  - ...and I guess now you have become a copy of that person, and can do everything they can do
- find the difference that makes the difference
  - test all individual steps in your behavior, to see whether they are really necessary for the outcome
  - ...congratulations, now you can do whatever they do, but more efficiently, and you have a good model
- design a class to teach that method
  - ...so now you can monetize the results of your successful research
Well, how helpful was that? I guess I wasn't fair to them; the entire algorithm is more like "draw the circle; draw the rest of the fucking owl; erase the unnecessary pieces to make it a superstimulus of the fucking owl; create your own pyramid scheme around the fucking owl".
It would be interesting to pass a law that if someone has a fatal medical condition science doesn't know how to cure, they are legally allowed to follow an AI's advice. Maybe with some "best practice" recommendations, such as expert doctors specifying the prompt. Maybe with the requirement that their progress will be monitored and used as more data for the AI.
(Of course, mortality itself is a fatal medical condition that science doesn't know how to cure yet.)
However, he's behaviour and actions turned much more disruptive in the recent years
If the question is also for people who are not amateur Musk biographers, specific examples would be nice, both of the previous and the more recent behaviors.
claims that the Securitate continued to exist de facto, even after the revolution that ended the Ceausescu dictatorship
This is what seems to generally happen in post-communist countries; the difference is probably only in degree.
You have a secret police, which is an organization that has existed for decades and is full of amoral people cooperating to maintain their rule over the country. Then the regime is over, and...
Do you kill those people? Nope.
Do you at least put them on some kind of blacklist, saying "these people should never be allowed to get into any position of power"? Aaah... there were some feeble attempts, but generally no.
Well, guess what happens next. Many of those people get into the new positions of power (it's not like their skills are useless now; they can e.g. join the non-secret police), and they know they have a network of former colleagues they can trust, who are also seeking positions of power. Together they can take over some institutions; the only question is which ones and how completely.
In the worst case, even large parts of the old institutions remain, only rebranded.
The incentive gradient for status hungry folk is not to double-down on Eliezer's views, but to double-down on your idiosyncratic version of rationalism, different enough from the community's to be interesting, but similar enough to be legible.
The easiest way to do that is to add something that is considered high-status in the mainstream society, such as religion. (And claim that misunderstanding the true value of religion is Eliezer's blind spot.)
Kinda like in the Median Voter Theorem -- draw a scale with Eliezer on one end, the mainstream society on the other end, and find a convenient position in between, attracting people who find the direction towards LessWrong fascinating (lots of new ideas) but also limiting (because they would have to give up some of their existing ideas).
First, thank you for writing on an important topic, in a way that wasn't... what I would usually expect from an article written on that topic. Some thoughts:
More examples would be nice. You talk about one specific article. I think that it is not the only thing that rubs you the wrong way (otherwise you would probably just ignore an "it happened only once" thing and wouldn't bother writing this). I think I see what you perceive as problematic with that specific article... but I am not sure I could generalize that for other articles.
"For women, feeling sexually safe at a job has only improved since the #metoo movement." I think this would be a perfect comment under that article. It could have started an interesting debate. (In a perfect world, it would come with a poll: "are you man/woman/other, and do you feel less safe/the same/safer at your job/school after #metoo".) I guess you probably joined LW long after that article was written, and writing the comment under a one year old article wouldn't give it much visibility, unfortunately.
"Most women are viscerally aware of the dangers that result when certain professions or opportunities are harder to access due to their gender." Please explain. I could make a guess, but when the topic is kinda "men guess these things wrong", then I probably shouldn't.
"Maybe it would help to have some more posts emphasized which are clearly written from the perspective of someone other than a man." Yeah, I think this is the right way. Now we have to wait for "someone other than a man" to actually write them. By definition, this is not something the men can do, right? (Also, uhm, this.)
I guess that's what you inevitably get when you compress everything into five major buckets. Some buckets end up containing more than one thing. But the way the buckets were constructed means that those things are highly correlated in the population. Like, not everyone is necessarily "either dogmatically conformist or a jerk", but frankly, there are many people out there who can uncontroversially be placed on this scale.
You can go into more detail and split the big five traits into subtraits. Some people already do that. I couldn't find an authoritative source, so I asked an LLM, and here is its answer:
Openness to Experience (O) = imagination, creativity, and curiosity.
- Imagination – Vivid fantasy life and creativity.
- Artistic Interests – Appreciation for beauty and art.
- Emotionality – Awareness of one’s emotions.
- Adventurousness – Willingness to try new experiences.
- Intellect – Interest in abstract ideas and complex thinking.
- Liberalism – Willingness to challenge traditions and authority.
Conscientiousness (C) = self-discipline, organization, and responsibility.
- Self-Efficacy – Confidence in one’s abilities.
- Orderliness – Preference for organization and structure.
- Dutifulness – Strong sense of obligation and responsibility.
- Achievement-Striving – Motivation to reach goals.
- Self-Discipline – Ability to persist with tasks.
- Cautiousness – Thinking before acting.
Extraversion (E) = sociability, enthusiasm, and assertiveness.
- Friendliness – Warmth and approachability.
- Gregariousness – Enjoyment of social gatherings.
- Assertiveness – Tendency to take charge.
- Activity Level – Preference for high energy and action.
- Excitement-Seeking – Desire for thrills.
- Cheerfulness – General positive emotions.
Agreeableness (A) = warmth, compassion, and cooperation.
- Trust – Belief in the honesty of others.
- Morality – Sincerity and lack of manipulation.
- Altruism – Willingness to help others.
- Cooperation – Avoidance of conflict.
- Modesty – Humility and lack of arrogance.
- Sympathy – Empathy and concern for others.
Neuroticism (N) = emotional instability and tendency toward negative emotions.
- Anxiety – Tendency to worry.
- Anger – Proneness to frustration.
- Depression – Tendency toward sadness.
- Self-Consciousness – Sensitivity to embarrassment.
- Immoderation – Difficulty resisting urges.
- Vulnerability – Struggles under stress.
Are LLMs especially vulnerable to bucket errors?
Bucket errors are when you use one word for multiple concepts without realizing it; that is, you keep multiple concepts in the same "mental bucket", and whatever you learn about one of them, you automatically apply to the others.
As a human, if you notice the difference, you can split the bucket, and maybe you start using different words for the different concepts, or at least add some qualifiers. And the longer ago you did this, the more opportunity the new mental buckets had to grow apart.
But for an LLM, the words are the territory (I think), and it can be difficult to distinguish between two meanings of X if you only have a short chat history explaining the difference, but 99% of the material you were trained on does not distinguish them. (Also, you forget the chat history at the beginning of a new chat.)
*
This possibility occurred to me when I was discussing set theory with ChatGPT yesterday, and it kept using "countable sequence" for (1) a sequence of countable length, and (2) a sequence containing only countable values, as if those were synonyms. I tried to make it admit the mistake, but instead it... it's difficult to describe that... briefly admitted that it made a mistake, and then offered an alternative explanation that fundamentally included the same mistake, and came to an even crazier conclusion. I tried to make it see the contradiction, but as they say, one man's modus ponens is another man's modus tollens... until at the end it more or less said (didn't say it explicitly, but said things that took this assumption for granted) that for every uncountable ordinal α, the value α+1 is countable. Why? Because the sequence (α, α+1) is countable (has length 2), and a limit of a countable sequence is either a countable number or ω₁, and clearly, α+1 cannot be ω₁, therefore it must be countable. It explicitly confirmed that ω₁+1 is countable. At that moment I just gave up.
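For the record, here is my own attempt to write down the distinction as a small LaTeX sketch (my summary, not something ChatGPT produced):

```latex
% Two different notions that get conflated under "countable sequence":
%   (1) countable length:  a sequence \langle x_\beta : \beta < \lambda \rangle with \lambda < \omega_1
%   (2) countable values:  each x_\beta < \omega_1
% The true statement needs BOTH, and follows from the regularity of \omega_1:
\[
  \lambda < \omega_1 \ \wedge\ (\forall \beta < \lambda)\; x_\beta < \omega_1
  \;\Longrightarrow\; \sup_{\beta < \lambda} x_\beta < \omega_1 .
\]
% The sequence (\alpha, \alpha + 1) with \alpha \ge \omega_1 satisfies (1) but not (2),
% so the theorem says nothing about it; its supremum is simply \alpha + 1, which is
% uncountable. In particular, \omega_1 + 1 is uncountable.
```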
I assume that in the literature, when countable sequences are mentioned, in 99% of situations they have both countable length and countable values, which makes it difficult for the LLM to distinguish between these two concepts. Even if I succeeded at explaining the difference, would it have a chance to use the term meaningfully? It probably cannot re-evaluate all the text it was trained on, to carefully check which instances of the word correspond to which meaning.
Specific examples for the first and second interventions would be nice to have!
I wonder whether hypocrisy is a coalition against depravity. People who are aware of their own faults, and yet decide to push back against abandoning the norms; but without admitting guilt, as that would turn them into scapegoats.
And the opposition to hypocrisy is a kind of "Bootleggers and Baptists" situation, consisting of naive people whose analysis of the situation doesn't go beyond "everyone should just do the right thing, duh", and the depraved people who prefer the incentives to be set up in a way that forces everyone who fails at being perfect into their coalition. ("How dare you criticize me? Are you perfect? No? Well, that makes you a hypocrite!" yells the person who regularly does something 100x worse than the bad thing you did once.)
You just are, a point of awareness, coextensive with time and space, deathless.
This feels like a sleight of hand. I am a part of the universe, with somewhat arbitrary boundaries. The universe is deathless (if we ignore the possible heat death for a moment). I am definitely not.
I might be deathless in the sense that some parts that I might choose to include in my boundary will continue to exist after my brain stops working. Though they will no longer be under the active control of my brain. And even the effects that persist for some time after my death -- trivial example: I put a bottle on the table, then I die; the bottle still stands on the table; more complex example: people keep remembering me, my children keep doing what I taught them, people in the future can find my blog articles and learn from them -- will gradually dissolve into noise.
Unless I choose to include the noise itself in my definition of self. But that's a kind of "you can eat chocolate, as long as you agree to call your broccoli 'chocolate'."
I suppose the religions do it by believing that there is some divine power that drives the noise; that instead of simply increasing entropy, it is some kind of funny game played by Brahma, and as long as you include Brahma in your extended self, you can keep having fun literally forever. From my perspective, noise is just noise, and mystical insights that suggest something different are likely to simply be factually wrong.
You made a choice.
Could you have made a different choice? Yes, if you had different inputs, or a different state of mind when you started thinking about the choice.
But if we take the inputs and your initial state of mind as constants, there was only one outcome you could have arrived at.
But you didn't know what that outcome would be, until you actually arrived there. It was a deterministic computation you had to make.
*
Why do you feel you had freedom of choice? Because a different initial state of mind, or maybe even a small difference in inputs, could have led you to a different conclusion. As you don't know the result of the calculation until you actually make it, your initial guess is just a set of plausible conclusions, and the actual conclusion will only be known after you make it, so it feels like it was "magically, freely" picked from the set. Instead, it was just picked "unpredictably", in the sense that you were not able to predict it until you made it.
So ZIP achieves high compression for "kinda boring reasons", in the sense that we already knew all about that compressibillity but just don't leverage it in day-to-day operations because our float arithmetic hardware uses IEEE.
Could this be verified? Like, estimate the compression ratio under the assumption that it's all about compressing IEEE floats, then run ZIP and compare the actual result to the expectation?
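Something like the following sketch is the kind of check I have in mind (the data and the "expected" numbers are made-up assumptions, just to illustrate the comparison):

```python
import io
import zipfile
import numpy as np

# Hypothetical sanity check: how well does ZIP (DEFLATE) compress raw IEEE-754
# floats, and how does that compare to a back-of-the-envelope expectation?
rng = np.random.default_rng(0)

# Example data: floats rounded to 2 decimal places, so most of the 64 stored
# bits per value are redundant.
values = np.round(rng.normal(size=100_000), 2)
raw = values.astype(np.float64).tobytes()

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("data.bin", raw)

compressed_size = buf.getbuffer().nbytes
print(f"raw: {len(raw)} B, zipped: {compressed_size} B, "
      f"ratio: {len(raw) / compressed_size:.2f}")

# Rough expectation if the only redundancy were the limited precision:
# each value here carries maybe ~10 bits of real information vs. 64 stored
# bits, suggesting a rough upper bound on the ratio around 6x. Comparing the
# printed ratio to that estimate is the kind of check suggested above.
```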
if you have a capable team that firmly believes in "fairness", in auditable, open, participatory processes that don't put a top-down thumb on the scale on controversial issues, and they get to actually use the neutral algorithm instead of being pressured to make exceptions, you get solid results and community trust!
Then it is quite sad that the neutral algorithm was introduced at the same time as Xitter started losing popularity. (At least, it seems that it is losing popularity? Maybe that's just some bubble. I don't know what to trust anymore.)
Could these things be related? It seems like the opposition to Xitter is mostly because Musk has been hanging out with Trump recently. But hypothetically, it could be a combination of that and the fact that the Community Notes may be inconvenient for people who could instead have the content policed by members of their tribe.
Sorry for getting political, but at least until recently it seemed like one political tribe practically owned all the "mainstream" parts of the internet; not necessarily most of the users, but most of the mods and admins. They didn't need to try finding a neutral ground, because instead, they could simply have it all.
I have seen a few attempts to make a neutral place where both sides could discuss, and those usually didn't work well. The dominant tribe had no incentive to participate, if they could win the debate by avoiding the place and from outside declaring it to be full of horrible people who should be banned. You could only attract them by basically conceding to many of their demands (declaring their norms and taboos to be the rules of the group), which already made an equal debate impossible (stating your disagreement already meant breaking some of the rules), which made the debate kinda pointless (you could only make your point by diluting it to homeopathic levels, and then the other side yelled at you for a while, and then everyone congratulated themselves for being so tolerant and open-minded). I don't want to give specific examples, but instead I will point to how Scott Alexander's blog was handled e.g. by Wikipedia -- despite the fact that most of its readers (and Scott himself) actually belonged to the dominant tribe, the fact that dissent was allowed was enough for some admins to call him names.
It is usually the weaker side that calls for fairness. Yes, it is amazing that you can implement it algorithmically, but the people who have the power to make this decision are usually not the ones who want it made.
So I wonder what will happen in the future. Will more web platforms adopt the neutral algorithm? Or will it instead be something like "a neutral algorithm, but our trusted moderators can override its results if they don't like them"?
"Credit goes to A, B, and C. The errors are probably theirs too, and my error was to trust them." Better? :D
"Ethical Injunctions" is making a Kantian argument about certain patterns of behavior being inherently self-contradictory and thus impossible to consistently follow, not a rule-utilitarian argument about certain patterns of behavior causing bad outcomes if everyone were to do them.
So, basically, the argument against things like "let's kill a few thousand people so that we can make this planet a paradise for millions" is not "killing people is absolutely forbidden". Because we are actually making similar tradeoffs, even with a much smaller ratio, all the time. For example, using cars (as opposed to banning them worldwide and e.g. only using trains) condemns thousands of people to a painful death in car accidents, and it doesn't even bring a paradise to the rest, only a little more convenience. And the mainstream consensus is that this is acceptable.
The real objection instead is that plans like "let's kill a few thousand people so that we can make this planet a paradise for millions" predictably fail all the time, and if you actually thought about it a little, you could easily see it. Thousands of things would need to magically go right in order for such a plan to succeed, and many of them are extremely unlikely, so the overall probability is practically zero. (For example, any plan that involves killing thousands will attract people who enjoy killing, and who enjoy organizing killing. Now after they succeed, you expect them to simply give up all the power and stop killing? As opposed to, e.g., putting a bullet through your brain and declaring themselves the kings of the new order? Is that a likely scenario?) And the reason you don't immediately see this is that your brain has a blind spot here -- any plan that matches "I need to get more power, and then good things will happen" sounds instinctively very plausible to you, because your ancestors who believed such things and succeeded at convincing others to give them power were usually very successful... that is, reproductively; not necessarily at making the good things actually happen.
I think that smartphones should be banned during school, but in the way that you can bring them to school, put them in some soundproof box, and take them out after the lessons are over. There is no reason to have a phone during the lessons, but coordinating with children in the afternoon is useful. Also, if my kids go to some activity right after school (without going home first) and they were not allowed to bring the phone to school, they also couldn't have it at the afternoon activity.
The hard problem is, how do you differentially get the screen time you want? At some point yes you want to impose a hard cap, but if I noticed my children doing hacking things, writing programs, messing with hardware, or playing games in a way that involved deliberate practice, or otherwise making good use, I would be totally fine with that up to many hours per day. The things they would naturally do with the screens? Largely not so much.
I think that if you teach your children how to use the computer while they are too young to install their own software, you will give them good habits. It is worse if they get their first computer advice from their classmates, because that will probably be about the things you don't want them to do.
My kids started at kindergarten age with Tux Paint, and later with other graphical programs (they both love to paint, both on paper and on the computer). There was a time when they watched idiotic movies on YouTube, but then they stopped doing that (I am not sure why; maybe it simply got boring) and moved on to Pocket Platformer, Inkscape, and recently Scratch. They haven't discovered social networks yet.
As for time limits, at first we didn't have any, and the kids got bored after a while. But then the time spent at the screen kept increasing, so we set a limit of one hour a day. That sometimes seemed like not enough, especially during weekends, so after a few experiments, we ended up with an "economy" where the kids get one hour a day for free, and can gain extra time by playing outside or helping at home.
I think my kids are probably on a good path to becoming hackers. I have friends who didn't let their kids use the computer until they were about nine years old (which is when they started having computer science lessons at school). Now, one friend's child only plays Minecraft, the other friend's child only plays Roblox, and there is no way to convince them to use the computer for any other purpose. Who knows, they may surprise me later.
When my kids are old enough to get smartphones, I probably won't allow them to install games or social networks there. I think it is better to do these things on the computer, because when you walk away from the computer it is over, and a turned-off computer cannot bother you with notifications.
An idea: make an LLM create a wiki version of Sci-Hub. Each paper is a separate page. There is one screen of an automatically generated summary, followed by the referenced and referring papers, all of them hyperlinks (with a preview window), and a short automatically generated explanation of how specifically the papers are related.
This might even be legal.
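A minimal sketch of the data model I have in mind (all names here are hypothetical, not an existing project or API):

```python
from dataclasses import dataclass, field

@dataclass
class PaperPage:
    """One wiki page per paper, as described above (hypothetical names)."""
    doi: str
    summary: str                                               # ~one screen, auto-generated
    references: dict[str, str] = field(default_factory=dict)   # cited DOI -> how it is related
    cited_by: dict[str, str] = field(default_factory=dict)     # citing DOI -> how it is related

# Rendering would turn each DOI key into a hyperlink to that paper's page,
# with the short relation note shown in a preview window.
```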
I'm excited by the promise of interesting conceptual things being done with the superhero-genre system, but my god there are many chapters of "boring"-to-me stuff (action scenes, description of city politics & class dynamics that doesn't feel true to life, etc)
For me, the last 1/3 is boring (no surprising development anymore, just battles that feel endless). But the descriptions of bullying seem quite realistic (triggering) to me.
Yeah, this is why rationality is a group effort (on top of the individual effort). There is not enough time to make a precise map of everything from scratch. It is better to hang out with people whose maps are generally good.
Perhaps I have listened to too much Georgist propaganda, but it seems to me that land is pretty important even today. But maybe this is mostly true for land in cities? Not sure. I would like to see some statistics about what the typical outcome was for an aristocrat during the industrial revolution. I can imagine that each of the following was true for someone; the question is how many:
- you get killed
- you survive, but your land is taken away
- you sell the land trying to join the new economy, but you suck at the new economy, so you lose the money
- you keep the land, but it is at an unimportant place, the rent is too low
- you keep the land, you get a big rent, you are a successful modern rentier
I'd like to see a pie chart of this. Maybe with some other options, if I forgot any.
A few of the clearest examples are listed below, though I can point to countless others
Yes, I would certainly love to read more, in a format longer than the bullet points you made here (but maybe shorter than the original Sequences?).
If you believe (it seems to me correctly) that some lessons from the Sequences are frequently misunderstood, then it probably makes sense to make the explanations very clear, with several specific examples, a summary at the end... simply, if they were misinterpreted once, it seems like there is a specific attractor in the idea-space, and that attractor will act with the same force on your clarifications, so you need to defend hard against it. So please do err on the side of providing more specific examples and further dumbing it down for an audience such as me. (Also, clearly spell out the specific misunderstanding you are trying to avoid, and highlight the difference. Maybe as a separate section at the end of the article.)
If this idea seems interesting, I'll probably be writing my own series of posts in the format of "Reading [Post from the Sequences] As If It Were Written Today."
Definitely interesting! Not sure if 1:1 correspondence is optimal (one article of yours per one article of the original Sequences). The information density varies and so does the article length; sometimes it might make more sense to read two or three articles at the same time; sometimes it might make sense to address two important points from the same article separately. Up to you; just saying that if you start with this format, don't feel like you have to stick with it all the time.
Quoting Dijkstra:
The art of programming is the art of organizing complexity, of mastering multitude and avoiding its bastard chaos as effectively as possible.
Besides a mathematical inclination, an exceptionally good mastery of one's native tongue is the most vital asset of a competent programmer.
Also, Harold Abelson:
Programs must be written for people to read, and only incidentally for machines to execute.
There is a difference between code that is "write, run once, and forget" and code that needs to be maintained and extended. Maybe researchers mostly write the "run once" code, where the best practices are less important.
Anything that threatens a unipolar takeover risks provoking a nuclear war. [...] In the future, most cognition will be non-human. The game dynamics we start now must be good enough for human and non-human interests, so that these future players have an interest in upholding the peaceful nature of the game. That is our ultimate protection.
I think you overestimate the peacefulness of what we have now. We don't have a nuclear war, but we have conventional wars all the time. Only those who have the nukes are safe, but that doesn't include everyone.
So I can similarly imagine a future multipolar world, where GalaxyBrain1 and GalaxyBrain2 hold each other at bay by threatening to destroy the universe using some futuristic weapon, and then GalaxyBrain1 says "I really really want to exterminate humans" and GalaxyBrain2 says "I'd kinda prefer if you didn't, but hey, it is definitely not worth risking the destruction of the universe, so... go ahead, just don't touch my data centers".
Ah, yes. Recently I volunteered for a medical study along with 3 other people I know. Two of them dropped out in the middle. I can't imagine how any medical research can be methodologically valid this way. On the other hand, the other person and I stayed, and it's almost over, so the success rate is 50%.
especially if it controls your social media feed
but... it already does :(
I mean, on facebook and xitter and reddit; I am still free to control my browsing of substack
and yes, applying the same level of control to my real life sounds like a bad idea
What do you mean by "hunger for life"?
What do you mean by "capitalism"?
If you basically mean that machines should help us overcome scarcity, and then everyone should be able to focus on games, friendship, learning, et cetera... sure, why not?
But first we need to make sure the machines won't kill us all when they get smarter than us and start controlling the world. (Because if they do, it doesn't matter how our corporations and governments were set up.)
However, capitalism, as it stands, obstructs the realization of this vision.
So far, the attempts to replace capitalism often did even worse.
One problem - scarcity. Usually made worse by eliminating capitalism.
Second problem - humans. Psychopaths compete for power, in both capitalism and socialism. We need to solve this. Democracy alone is not a solution; psychopaths are quite successful at getting elected, or getting their people elected.
Is it time to rethink the way corporations and governing systems operate?
I believe people are thinking about this all the time, but do you have a specific proposal that wasn't widely considered yet?
Consider Egan's incentives. "A group of effective altruists collects a ton of money, buys anti-malaria nets, saves a million African lives (but other millions still die of malaria)" is an improvement over the status quo in real life, but it would be a boring and disappointing story.
Cool fictional villains are at least an improvement over the media narrative "EAs are crypto scammers".
I wonder if there are people who joined the rationalist or effective altruist communities because of Egan's recent stories. Negative advertising is still advertising... I can imagine someone reading the story, then trying to find more on the internet, then joining; the question is whether this actually happens.
There are many possible maps that describe the same territory. Trying to switch people to use a different map could be a good thing, or it could be a bad thing. (A person who likes the new map might describe it as "giving them fresh insights", a person who dislikes it might describe it as "manipulating them".)
Is the scientific map always better? Well, sometimes it is not available. And sometimes it is too complex. In situations where science provides a clear and simple answer, I guess following it is very likely to be the right choice. But that is not always the case, and then... what are the alternatives? Either paralysis ("I am going to ignore this topic until science finally comes up with a simple answer") or some kind of greedy reductionism / focusing on what is legible ("I am going to ignore the illegible parts and focus on the part that is certain: everything, including my wife and kids, is ultimately built from atoms, and anything else about them is mere superstition").
Off topic, but your words helped me realize something. It seems like for some people it is physical attraction first, for others it is emotional connection first.
The former may perceive the latter as dishonest: if their model of the world is that for everyone it is physical attraction first (it is only natural to generalize from one example), then what you describe as "take my time getting to know someone organically", they interpret as "actually I was attracted to the person since the first sight, but I was afraid of a rejection, so I strategically pretended to be a friend first, so that I could later blackmail them into having sex by threatening to withdraw the friendship they spent a lot of time building".
Basically, from the "for everyone it is attraction first" perspective, the honest behavior is either going for the sex immediately ("hey, you're hot, let's fuck" or a more diplomatic version thereof), or deciding that you are not interested sexually, and then the alternatives are either walking away, or developing a friendship that will remain safely sexless forever.
And from the other side, complaining about the "friend zone" is basically complaining that too many people you are attracted to happen to be "physical attraction first" (and they don't find you attractive), but it takes you too long to find out.
I'll try to respect your preference for brevity ;)
- a shorter version would be very useful -- yes, fully agree
- at least there is readthesequences.com without the comments (10x as much text as the articles)
- there were summaries on the LW wiki, but those were too short; we need something medium-sized
- there are some good reasons why Eliezer wrote a long text
  - there wasn't a rationalist community yet; lines had to be drawn to separate it from many existing adjacent communities (atheists, skeptics, libertarians, sci-fi fans, self-help, contrarians, academia...)
  - emotional, near-mode appeal -- why should we even care about "being rational"?
  - popular bad memes/patterns (mysterious answers, applause lights, "trust the science"...)
tl;dr -- writing for an already existing rationalist(-ish) community is different from writing in order to create a rationalist community
haha no, Slovak
(an interesting hypothesis though)
I would agree for a year to only eat food that is given to me by researchers, as long as I can choose what the food is (and they give me e.g. the high-purity version of it). Especially if they would bring it to my home and I wouldn't have to pay.
But yeah, for more social people it would be inconvenient.
Andy Gilmore art
Great!
life & business advice
I liked:
I guess I like lists. (Or maybe the ideas in their articles are not that good, so I'd rather have 30 ideas sketched than 1 idea written at length.)
propaganda (including advertising) does not have extraordinary abilities to manipulate people into believing or doing things they would not otherwise do.
propaganda is important (and potentially dangerous) primarily for [...] creating common knowledge and coordination points [...] crowding out non-propaganda communication [...] demonstrating the propagandist's power
Sounds almost like a glass half full / half empty distinction. It is almost impossible for propaganda to create something from scratch, but given that conflicts of interest exist almost everywhere, and you have all kinds of people almost everywhere (likely including someone who already supports your agenda), amplification of existing things seems sufficient. The lesson for propagandists is to look at what is already there and work with that, rather than start their own thing from scratch. It may not be exactly what they wanted, but if the goal is to create chaos, it is probably good enough.
If you take a group of crazy people, give them money to buy a place for their community to meet, create a website for them to share their ideas (webhosting, technical support, proofreading, editing, photos - simply, if you make it appear professional without needing a shred of talent or work on their side), and then you buy web advertising and billboards for them, arrange the logistics of their meetings, provide catering... the thing will explode. And almost everyone around them will be paralyzed.
As the article says, "Russian operatives behave as if they want to watch the world burn". Exactly this; they have a zero-sum approach. (It even seems to me, at least in my part of the world, that a zero-sum perspective is a good predictor of how pro-Russian a person will be.) Russians only feel safe when the places around them are in ruins; they have no friends, only servants and enemies. But for that purpose, propaganda is sufficient.
Humans gravitate toward activities that provide just the right amount of challenge - not so easy that we get bored, but not so impossible that we give up. This is because overcoming challenges is the essence of self-actualization.
This is true when you are free to choose what you do. Less so if life just throws problems at you. Sometimes you simply fail because the problem is too difficult for you and you didn't choose it.
(Technically, you are free to choose your job. But it could be that the difficulty is different than it seemed at the interview. Or you just took a job that was too difficult because you needed the money.)
I agree that if you are growing, you probably feel somewhat inadequate. But sometimes you feel inadequate because the task is so difficult that you can't make any progress on it.
If you're so smart why are you divorced and spending half of your salary on alimony?
(just kidding, but looking at the divorced people around me makes it obvious that marriage ain't what it used to be)
I guess there are cultural limits to what you can and cannot do, and sometimes the thing that is most effective from the teaching perspective might be beyond those limits.
"Joking reduces authority" is a common intuition. I guess, people typically use humor to reduce tension, which is often what the weaker person would want to do. Humor can also be used by the stronger person, as a signal that they have no hostile intentions. But frequent joking is probably associated with weakness rather than strength (think: the class clown). Too bad, but that's how our instincts work. So as a military instructor, you probably have to care about not losing the respect of your audience, which probably consists of strong competitive guys (which is a different audience compared to e.g. teaching at an elementary school). I have no military experience (I used to teach kids 10-18 years old), so I have no idea where are the lines, and how far could you push them with a carefully balanced approach.
(I am just guessing here, but I would think that you can afford to be more funny if people watch the videos individually, compared to the class setting. The guy who laughs at your joke doesn't have to worry about the reaction of his peers. But this is just a guess. Also, most people pay better attention at class than individually at video; this is why educational videos are less successful than people hoped originally.)
Why does everyone think competition is healthy?
Better for whom? Two companies competing is probably worse for the companies but better for the customers.
*
The vibe I get from this text is the following:
There are two things that contribute to winning in competition: (1) generally being good... and that is a good thing; and (2) being similar to others... and that is a double-edged sword. Similarity helps you replace others, but similarity also helps others replace you.
The standard advice is not to worry about similarity and just fully focus on being better. Not completely wrong, but notice that this advice sometimes serves your boss better than it serves you, if everyone keeps doing that. If instead you become good but different from others, you are more difficult to replace, and that gives you a good position to negotiate.
(In the version for entrepreneurs, your "boss" is the customers as a collective.)
To avoid misunderstanding, being good at what you do is generally good advice. Just don't conflate "being good at what you do" with "being an exact copy of other people who are good at what they do". Yes, being a copy of someone who is good makes you also good, but it is not the only way.
Models could incorporate long-term predictions, ensuring decisions align with future sustainability and impact goals.
Researchers implement mechanisms where AI models autonomously recognize misalignment, shut down harmful behaviors, or even self-destruct (via wiping out network weights) for severe cases.
We could build modular AI systems with distinct sub-components focusing on different objectives (e.g., overall goal, ethical considerations, social implications). These agents could check each other's outputs, flagging potential high-risk conflicts or misalignment.
This feels like you either misunderstood the problem, or you are responding with circular logic.
The problem with alignment is that we don't know how to do alignment.
Your proposals:
- do alignment with future sustainability
- recognize misalignment
- recognize other agent's misalignment
I repeat: the problem is that we don't know how to "do alignment" (or "recognize misalignment").
*
As an analogy, imagine that someone tells you "I don't know how to swim", and your advice would be:
- keep your head safely above the water
- don't drown
- when swimming together with other people at the same skill level, check each other that you are not drowning
Well, if I knew how to keep my head safely above the water and how not to drown, I wouldn't be asking the question in the first place.
In regard to my situation and why I'm presenting you my ideas, I'm an amateur thinker who is wishing to popularise and spread my ideas and is outside the intellectual community so I'm in need of help in spreading my ideas so if you're feeling generous I'd like to ask for help from any of you reading to spread my ideas.
I think you have skipped a few steps here.
First, you need to have some good ideas. They do not necessarily need to be original; popularizing existing good ideas is also a great thing.
Second, you need to get good at explaining things. Write clearly, provide specific examples, etc.
Then, you can write a few articles and people will be happy to share them.
It seems like you think that you are currently at step three. To me it seems like you are still struggling with step one (or maybe step two). I have no idea what ideas you want to spread.
I suspect that debating altruism is an unusually good opportunity for people to "generalize from one example".
People who enjoy helping others will say something like: "Of course, everyone wants to help others, deep inside. It's just that when people are in need or in pain, their self-preservation instinct temporarily reduces this feeling, to make them focus on saving themselves. But as soon as we help them satisfy their physical or emotional needs, you will find that even the seemingly horrible people are actually good, when given the opportunity."
And this is kinda unfalsifiable, because if someone remains a horrible person no matter how much you give them (things, support, forgiveness), you can insist that there must be some need that wasn't sufficiently satisfied yet. And that person would obviously encourage this perspective, because it means that they will get even more things. And there will always be something missing, because the world is not perfect.
On the other hand, people who don't enjoy helping others, can rationalize away almost any observed goodness: "Yeah, they are just showing off (i.e. trading a little effort or money for higher social status). And they definitely expect to get something in return; if not today, then maybe tomorrow. They probably got something in return when you weren't watching. Okay, they never got anything in return, but they thought they would, they just miscalculated; that's not goodness, that's stupidity. Why the fuck would anyone do anything, if they don't expect to benefit from it somehow?"
And this too is kinda unfalsifiable, because almost always there is something, no matter how indirect or disproportionately small. And the very fact that you know about a good deed already makes it suspicious that the person wanted you to know, to get something in return, at least some status in your eyes. (And if you don't know about a good deed, then it cannot contradict your perspective, can it?)
This way, everyone can stay convinced that their general theory of altruism is correct.
So maybe the truth is that (1) people are different, and (2) even the same person can do different good deeds for different reasons, and (3) even one deed can have multiple reasons. For example:
- I may expect to maybe get something in return, but the probability of such a thing happening multiplied by the average reward may be so small that this simply doesn't make sense economically; there are more profitable things I could be doing instead, if I only cared about my profit
- some people may help the poor to signal how rich they are, but that alone does not explain why they chose to signal their wealth this way, instead of e.g. buying a really expensive car, which is what some other rich people do
- some people are more likely to help others after their own needs are satisfied, but that may be just a subset of all people; other people respond to having their needs satisfied by simply having more needs, without any altruism manifesting as a side effect
- similarly, some people are more likely to help others after seeing a role model, but other people just laugh at the role model, or invent a hypothesis why the role model (1) actually secretly benefits from their seemingly selfless actions, and (2) is actually making the world a worse place
How people feel about receiving charity (e.g. whether they feel degraded by it) may also depend on their model of altruism, which probably is a result of what they would do in the reversed situation, and what motives they have observed (or hypothesized) in others. For example, the kind of person who would only help others to feel smugly superior to them will probably fight hard against being a recipient of help. The kind of person who gives gladly will be more likely to also receive gladly. (Although, a scammer will also receive gladly; they won't feel humiliated by having successfully exploited others.)
Some smaller points I haven't seen in the article:
When considering altruism as reciprocity, i.e. helping others while expecting to get something in return, it probably makes sense to distinguish a few different meanings:
- helping someone, because I expect that specific person to later pay back what they owe me
- helping people, and expecting that some of them will later somehow reciprocate and many probably won't, but I am okay with such outcome, because so far I am profitable on average -- helping others costs me little, and once in a while someone reciprocates in a way that feels like winning a lottery
- helping people in order to establish a general culture of "people in need should be helped" as an insurance in case I would later need some help myself, although I hope not to need this insurance
All of these could be summed up as "helping others, expecting to get something back", but they lead to different behaviors. In the first case, I would only help the people who seem most likely to pay it back later; I wouldn't help the homeless, or strangers. The second case... is actually how it works for me (I think it is not the true reason why I do it, but the fact is that it works). In the third case, I think the difference is in the mood: if you only help others because you expect to be poor in the future, it feels sad, and you will probably only try doing the minimum necessary.
I noticed a seeming paradox: When I help a person, I don't expect them to do something for me in return. And yet, I would feel better learning that "this is the kind of person who helps others, when they can". At first I thought, okay, maybe this exposes some kind of hypocrisy or inconsistency on my end: I do not consciously expect to be paid back, but unconsciously I do, so I feel better when I learn that I have helped a person who is likely to reciprocate. But then I noticed that there is also another possible explanation: there are many people in need, and my possibilities to do good are limited; if I help a selfish person, it stops there; but if I help an altruistic person, they may later help someone else, and thus I may have started a chain reaction of good.
Similarly, helping a person who seems to be on a path to improvement feels better to me. (I think this is not universal. At least I have heard that there are people who help others, but feel betrayed (?) when those people start working on themselves to be less likely to need help in the future.) One possible explanation is that people who get stronger are more likely to reciprocate. But another possible explanation is that when I help people, my hope is to make their situation better; and a person who works on themselves along with receiving my help is acting like a multiplier on that help. You know, "when you give a fish, you feed someone for a day; when you teach them fishing, you feed them for a lifetime", except people actually also need to eat while they are learning, so when you give someone a fish and then they learn fishing (sometimes they don't need you to teach them), it's like you have fed them for a lifetime using a single fish, which is an effective act of altruism.
use of humor as a pedagogical tool in Cardiopulmonary Resuscitation (CPR) courses
I first understood that as resuscitating people by telling them jokes. Like, when you laugh hard enough, your heart starts beating again. :D
*
Yeah, I think it could be interesting. To me, this feels unsurprising -- memory is related to emotions, so you should use emotions while teaching. Negative emotions, such as fear, help people remember, but also discourage them from researching the topic on their own. They help memory, but hurt creativity. Positive emotions should be useful for both remembering and experimenting.
Now the question is which positive emotions. Also, how. I guess people will remember funny things, but can you produce jokes about every important thing you want your students to remember? (If you can, you should totally do an educational YouTube comedy channel.)
Robin says that we have less cultural diversity than in the past. I am not sure about that. In the past, we had geographically separated cultures, but within each culture, there wasn't enough space for many subcultures. Today, the cultures are closer, but the subcultures can be larger. A hundred years ago, there would be no such thing as the rationalist community. (Even using the example from Robin's article: it's not like the Amish are living on some distant island.)
I don't understand the argument why colonizing the stars would not fix the problem (of cultural drift leading to low fertility). My worry would be the opposite -- that the future will belong to those who replicate the fastest (and sacrifice everything else for that goal).