Posts

Trying to be rational for the wrong reasons 2024-08-20T16:18:06.385Z
How unusual is the fact that there is no AI monopoly? 2024-08-16T20:21:51.012Z
An anti-inductive sequence 2024-08-14T12:28:54.226Z
Some comments on intelligence 2024-08-01T15:17:07.215Z
Evaporation of improvements 2024-06-20T18:34:40.969Z
How to find translations of a book? 2024-01-08T14:57:18.172Z
What makes teaching math special 2023-12-17T14:15:01.136Z
Feature proposal: Export ACX meetups 2023-09-10T10:50:15.501Z
Does polyamory at a workplace turn nepotism up to eleven? 2023-03-05T00:57:52.087Z
GPT learning from smarter texts? 2023-01-08T22:23:26.131Z
You become the UI you use 2022-12-21T15:04:17.072Z
ChatGPT and Ideological Turing Test 2022-12-05T21:45:49.529Z
Writing Russian and Ukrainian words in Latin script 2022-10-23T15:25:41.855Z
Bratislava, Slovakia – ACX Meetups Everywhere 2022 2022-08-24T23:07:41.969Z
How to be skeptical about meditation/Buddhism 2022-05-01T10:30:13.976Z
Feature proposal: Close comment as resolved 2022-04-15T17:54:06.779Z
Feature proposal: Shortform reset 2022-04-15T15:25:10.100Z
Rational and irrational infinite integers 2022-03-23T23:12:20.135Z
Feature idea: Notification when a parent comment is modified 2021-10-21T18:15:54.160Z
How dangerous is Long COVID for kids? 2021-09-22T22:29:16.831Z
Arguments against constructivism (in education)? 2021-06-20T13:49:01.090Z
Where do LessWrong rationalists debate? 2021-04-29T21:23:55.597Z
Best way to write a bicolor article on Less Wrong? 2021-02-22T14:46:31.681Z
RationalWiki on face masks 2021-01-15T01:55:49.836Z
Impostor Syndrome as skill/dominance mismatch 2020-11-05T20:05:54.528Z
Viliam's Shortform 2020-07-22T17:42:22.357Z
Why are all these domains called from Less Wrong? 2020-06-27T13:46:05.857Z
Opposing a hierarchy does not imply egalitarianism 2020-05-23T20:51:10.024Z
Rationality Vienna [Virtual] Meetup, May 2020 2020-05-08T15:03:56.644Z
Rationality Vienna Meetup June 2019 2019-04-28T21:05:15.818Z
Rationality Vienna Meetup May 2019 2019-04-28T21:01:12.804Z
Rationality Vienna Meetup April 2019 2019-03-31T00:46:36.398Z
Does anti-malaria charity destroy the local anti-malaria industry? 2019-01-05T19:04:57.601Z
Rationality Bratislava Meetup 2018-09-16T20:31:42.409Z
Rationality Vienna Meetup, April 2018 2018-04-12T19:41:40.923Z
Rationality Vienna Meetup, March 2018 2018-03-12T21:10:44.228Z
Welcome to Rationality Vienna 2018-03-12T21:07:07.921Z
Feedback on LW 2.0 2017-10-01T15:18:09.682Z
Bring up Genius 2017-06-08T17:44:03.696Z
How to not earn a delta (Change My View) 2017-02-14T10:04:30.853Z
Group Rationality Diary, February 2017 2017-02-01T12:11:44.212Z
How to talk rationally about cults 2017-01-08T20:12:51.340Z
Meetup : Rationality Meetup Vienna 2016-09-11T20:57:16.910Z
Meetup : Rationality Meetup Vienna 2016-08-16T20:21:10.911Z
Two forms of procrastination 2016-07-16T20:30:55.911Z
Welcome to Less Wrong! (9th thread, May 2016) 2016-05-17T08:26:07.420Z
Positivity Thread :) 2016-04-08T21:34:03.535Z
Require contributions in advance 2016-02-08T12:55:58.720Z
Marketing Rationality 2015-11-18T13:43:02.802Z
Manhood of Humanity 2015-08-24T18:31:22.099Z

Comments

Comment by Viliam on leogao's Shortform · 2024-12-23T10:23:27.992Z · LW · GW

I guess it depends on the kind of work you do (and maybe whether you have ADHD). From my perspective, yes, attention is even more scarce than time or money, because when I get home from work, it feels like all my "thinking energy" is depleted, and even if I could somehow leverage the time or money for some good purpose, I am simply unable to do that. Working even more would mean that my private life would fall apart completely. And people would probably ask "why didn't he simply...?", and the answer would be that even the simple things become very difficult to do when all my "thinking energy" is gone.

There are probably smart ways to use money to reduce the amount of "thinking energy" you need to spend in your free time, but first you need enough "thinking energy" to set up such system. The problem is, the system needs to be flawless, because otherwise you still need to spend "thinking energy" to compensate for its flaws.

EDIT: I especially hate things like the principal-agent problem, where the seemingly simple answer is: "just pay a specialist to do that, duh", but that immediately explodes to "but how can I find a specialist?" and "how can I verify that they are actually doing a good job?", which easily become just as difficult as the original problem I tried to solve.

Comment by Viliam on ryan_greenblatt's Shortform · 2024-12-23T10:07:16.325Z · LW · GW

This made me wonder whether the logic of "you don't care about your absolute debt, but about its ratio to your income" also applies to individual humans. On one hand, it seems like obviously yes; people typically take a mortgage proportional to their income. On the other hand, it also seems to make sense to worry about the absolute debt, for example in case you lose your current job and can't get a new one that pays as much.

So I guess the idea is how much you can rely on your income remaining high, and how much it is potentially a fluke. If you expect it is a fluke, perhaps you should compare your debt to whatever is typical for your reference group, whatever that might be.

Does something like that also make sense for countries? Like, if your income depends on selling oil, you should consider the possibility of running out of oil, or of oil prices going down, etc.; simply imagine the same country but without the income from selling oil (or maybe just having half the income), and look at your debt from that perspective. Would something similar make sense for the USA?
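
A toy version of that stress test, with all numbers hypothetical:

    # Hypothetical oil-exporting country stress-testing its debt ratio.
    debt = 900            # total public debt, $bn
    income = 300          # current annual income, $bn
    oil_income = 120      # the part that might be a "fluke"

    print(debt / income)                 # 3.0 -- the naive ratio
    print(debt / (income - oil_income))  # 5.0 -- the ratio if the oil income disappears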

Comment by Viliam on quila's Shortform · 2024-12-22T21:43:07.311Z · LW · GW

The most likely/frequent outcome of "trying to build something that will last" is failure. You tried to build an AI, but it doesn't work. You tried to convince people that trade is better than violence, they cooked you for dinner. You tried to found a community, no one was interested. A group of friends couldn't decide when and where to meet.

But if you do succeed in... creating a pattern that keeps going on... then the thing you describe is the second most likely outcome. It turns out that your initial creation had parts that were easier or harder to replicate, and the easier ones keep going and growing, and the harder ones gradually disappear. The fluffy animal died, but its skeleton keeps walking.

It's like casting an animation spell on a thing, and finding out that the spell only affects certain parts of the thing, if any.

Comment by Viliam on What Goes Without Saying · 2024-12-22T20:36:58.886Z · LW · GW

Ouch. Sometimes the answer to "Why don't you simply X?" is "What makes you so sure I didn't already 'simply X' in the past, and maybe it just didn't work as well as advertised?".

It's not necessarily that the strategy is bad, but sometimes it needs a few ingredients to make it work, such as specific skills, or luck.

Comment by Viliam on No Internally-Crispy Mac and Cheese · 2024-12-22T16:32:36.567Z · LW · GW

Could you maybe somehow flip everything upside down in the middle of baking? So you get two tops.

Comment by Viliam on Raemon's Shortform · 2024-12-22T16:19:57.890Z · LW · GW

Sounds like pair programming, except the programming part is optional.

I’d like a large rolodex of such people, both for me, and other people I know who could use help.

Maybe different people need different assistants.

Seems to me that being a good assistant has two components: good communication skills (patience, clarity of explanation, adjusting the advice to the target's current skills and knowledge), and skills in the specific thing you want to assist with. With the communication skills, different people may prefer different styles, but there would probably be a general consensus on what is better. With the task-specific skills, it depends on what you already know. Someone could provide useful advice to beginners, but have nothing useful to say to an expert.

I guess, if you make a list for other people, it should make clear at what skill level each assistant will be useful. There is nothing wrong with only being useful to beginners, if there are beginners who will use the list; and in a large group there will probably be more beginners than experts on any specific topic.

Comment by Viliam on TheManxLoiner's Shortform · 2024-12-22T16:00:41.205Z · LW · GW

You can create another account to make an anonymous comment. But it's inconvenient.

(Not sure whether this is an argument for or against anonymous commenting.)

Comment by Viliam on Yoav Ravid's Shortform · 2024-12-20T11:29:01.895Z · LW · GW

I would need more data to form an opinion on this.

At first sight, it seems to me like having a rule "if your total karma is less than 100, you are not allowed to downvote an article or a comment if doing so would push it under zero" would be good.

But I have no idea how often that happens in real life. Do we actually have many readers with karma below 100 who bother to vote?
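
Stated as code, the proposed rule would look something like this (a sketch; the 100-karma threshold and vote strength of 1 are just the numbers from the rule above):

    def may_downvote(voter_karma: int, current_score: int) -> bool:
        # Sketch of the proposed rule: users below 100 karma may not
        # push a post or comment below zero.
        if voter_karma >= 100:
            return True
        return current_score - 1 >= 0  # a standard downvote of strength 1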

By the way, I didn't vote on your article, but... you announced that you were writing a book, i.e. it is not even finished, and you didn't provide a free chapter or something... so what exactly was there to upvote you for?

(Sorry, this is too blunt, and I understand that people need some positive reinforcement along the way. But this is not a general website to make people feel good; unfortunately, aspiring rationalists are a tiny fraction of the general population, so making this website more welcoming to the general population would get us hopelessly diluted. Also, there is a soft taboo on politics, which your post was kinda about, without providing something substantial to justify that.)

Comment by Viliam on I'm Writing a Book About Liberalism · 2024-12-20T11:20:20.174Z · LW · GW

Seems to me like a "glass half full vs half empty" situation. What was the standard alternative to a society that preached freedom and oppressed many people? Probably a society that oppressed even more people, and also taught everyone that it was the right thing to do.

In your historical examples, you mention the negatives, but don't mention the positives. For example, revolutionary France abolished slavery; so if we (rightfully) criticize the USA for slavery, it seems fair to mention this as a point in favor of France.

If we compare these examples to societies that existed at the same time or the same place... well, I don't know the historical rate of political opponents murdered, but I suspect that it was pretty high; it's just that when the kings or the holy inquisition did it, most people accepted it as their divine right. Similarly, the Soviet Union was a horrible place, but Russia has always been (and still remains) a horrible place.

(Also, the Soviet Union did not exactly consider itself Liberal. Lenin would call most liberal things "bourgeois".)

So I think the criticism is that you can declare your aspirations overnight, but it may still take years, sometimes centuries, to implement them in real life. Therefore we should think of wannabe-liberal societies as being on their way towards something good, rather than being already there.

Comment by Viliam on halinaeth's Shortform · 2024-12-20T10:40:46.057Z · LW · GW

In my experience, I only remember one example of a successful "coup". It was a private company that started small, and then became wildly successful. Two key employees were savvy enough to realize that this was not necessarily good news for them. The founders, of course, will definitely become rich. But a rich company will hire more employees, which means that the relative importance of each one of them will decrease. And the position of the founders towards those two will probably become something like: "okay guys, you spent a decade working hard to make all of this happen, but... you got your salaries, so we don't owe you anything; what have you done for us recently?".

So those two guys joined forces and together blackmailed the founders: "either you make both of us co-owners, right now, or we both quit". And the company couldn't afford to lose them, because one of them wrote like 90% of the code used by the company, and the other had all the domain expertise the company needed. (Now imagine how different the power balance could be one year later, if the company had maybe three new employees understanding the code, and three more employees to learn the domain knowledge.) So the original founders grudgingly accepted the deal. I think there were some symbolic concessions like "but we have spent our money to build this company, so you will have to pay that part back from your future profits", but that was completely unimportant, because until now the company was small, and soon it became huge and rich, so the money was probably paid back in a few months, and the two guys are millionaires now.

(More generally, I get the impression that early employees in companies often get a bad deal, because first they are told "the company is still small, it may not even survive, so you need to work harder and we can't afford to pay you better... but think about the bright future if the company succeeds", and then it turns out that the future is bright for the owners, and the burned out employees probably get replaced by new hires who are full of energy and bring new technologies. Oh, and if they own any "equity", it almost always turns out that for some technical reasons it doesn't mean what they thought it meant, and instead of 5% of the company they actually own 0.005%, plus they have to pay a lot of tax for that privilege.)

I think a much more frequent situation is that people predict that they would end up in a similar situation, and avoid it by starting their own project rather than joining an existing one. Now in certain contexts, this is business as usual -- everyone who starts their own company rather than joining an existing one is doing exactly this. (You don't need to organize a coup, if you are the legitimate owner.)

Problem is, we have different social norms for "business" and for "community". In business, being openly selfish is legitimate. If someone asks you "why do you want to start your own company rather than work for someone else?", if you say "because I want to get rich", this is a perfectly acceptable answer. (The person may doubt your ability, but not your motivation.) In community context however, you are supposed to optimize for some greater good, rather than your own profit. That of course doesn't prevent the smart people from taking the profit! But they must argue that what they are doing is for the greater good. And if you want to start a competing project, you must also become a hypocrite and argue the same, otherwise all the people who are there for the community feeling will boycott you.

This is why "build a 10% better mousetrap" is a legitimate goal, but "build a 10% better web portal for artists" is not. The 10% improvement means nothing if the community accuses you of being a greedy selfish bastard who only cares about money and not about art, and they blacklist you and everyone who cooperates with you. And yes, if you understand how the game is played, the initiators of the backlash are those who profit from the existing system. But you can't say this out loud; it would only prove that you care about the money. So both sides will keep arguing complete bullshit, trying to get the confused people on their side. The important thing is to get confused high-status people on your side, because then the rest will follow. The old group will argue that "we need to protect our current values" and "splitting our small community will ultimately hurt everyone". The new group will argue that "we need more diversity" and "providing more options will attract more people to our common cause". (Then the old group will whine: "so why don't you add those new options to our current community website instead?" And the new group will respond: "you had plenty of opportunity to do that already, which means that you are either incompetent or unwilling, and we need a new space for the new ideas".)

You talk about a "crypto community", which I suspect is another example of the same thing. The people who have the power are there for the money. Everyone else is there for the feeling of community. The community is an important part of how the people with power make the money. But they very likely optimize for money; the community is only instrumental. In the occasional situation where "what is good for the people who make money" is significantly different from "what is good for the community", the arguments of the people with power may sound a bit... confused... but everyone else interprets it charitably as an "honest mistake" or "well, I don't have all the information they have, so maybe it's my fault that I do not understand their perspective". This is because the people who know better are either part of the inner circle, or have already left the community (or have never joined it in the first place); or maybe are there for their own selfish purposes, which are unrelated to the goals of the founders or the community (someone analogous to publishers in the artistic community).

(By the way, these days when I hear a company owner say something like "we are all like a big family here", I treat it as a red flag. That basically means that the owner wants me to apply community norms in a business situation. Thank you, but I keep my communities outside of my workplace, where I won't lose them if one day my boss decides to press the button.)

Comment by Viliam on What conclusions can be drawn from a single observation about wealth in tennis? · 2024-12-19T15:33:49.224Z · LW · GW

Quick guesses:

  • health is strongly correlated with both wealth and sport outcomes
  • rich people have more free time; some of them choose to spend it training sports

Comment by Viliam on Don't Associate AI Safety With Activism · 2024-12-19T15:17:28.164Z · LW · GW

That mainstream is like one side of the American political spectrum, now also do the other side. ;)

Seems to me there are three factors to how one perceives an activist, most important first:

  • Do I support their agenda, or do I oppose it?
  • If I oppose their agenda, how threatened do I feel by their activism? If I support their agenda, how devastating a blow do I think they delivered to my enemies?
  • How do the activists actually behave? Do they politely express their opinions? Do they destroy public and private property? Do they attack other people?

The problem is that the third point is the least important one. A typical person will excuse any violence on their side as "necessary" (and sometimes also as "cool"). On the other hand, even seemingly peaceful behavior cannot compensate for the fact that "their goals are evil".

Basically, the third point mostly matters for people who don't have a dog in this fight. The more radicalized the society, the fewer such people there are.

Comment by Viliam on sarahconstantin's Shortform · 2024-12-19T14:36:38.343Z · LW · GW

The description on the page you linked -- "augments the brain's ability to reason on a) who am I, b) who are you, and c) who are you to me, now and over time" -- leaves a lot to the imagination. Sounds like a chatbot that will talk to you about your contacts?

i wouldn't know how to reach out to my roommate/best friend from college; we haven't talked in 16 years!

Maybe try finding out their birthday (on social networks, by online research, or by asking a mutual friend), and then set up a reminder. "Happy birthday, we haven't seen each other for a while, how are you?" Sounds to me like a socially appropriate thing (but I am not an expert).

Also, spend 5 minutes by the clock writing a list of people you would like to stay in contact with.

Now, I guess the question is how to set up a system that will let you store the data and provide the reminders. The easiest version would be a spreadsheet where you enter the names and birthdays, and some system that will read it and prepare notifications for you. A more complicated version would allow you to write more data about the person (how do we know each other, what kinds of activities did we do together, when was the last time we talked), and group the people by categories. You could make an AI go through your e-mail archive and compile an initial report on the person.

I would probably feel very uncomfortable doing this online, because it would feel like I am making reports on people, and the owner of the software will most likely sell the data to any third party. I would want this as a desktop application, maybe connected to a small phone app, to set up the reminders. But many people seem to prefer online solutions as more convenient, privacy be damned.

(The phone reminders could be like: "Today, XY has a birthday; you have their phone number, e-mail, and Less Wrong account. Your relationship status is: you have met a few times at a LW meetup. Topics you usually discuss: AI, kitten videos.")
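
A minimal sketch of that spreadsheet-plus-reminders version, assuming a hypothetical contacts.csv with name, birthday, channels, and notes columns:

    import csv
    from datetime import date

    # contacts.csv (hypothetical format):
    # name,birthday,channels,notes
    # Alice,1990-12-22,"phone, e-mail, LW account","met at LW meetup; AI, kitten videos"

    today = date.today()
    with open("contacts.csv", newline="") as f:
        for row in csv.DictReader(f):
            month, day = map(int, row["birthday"].split("-")[1:])
            if (month, day) == (today.month, today.day):
                print(f"Today, {row['name']} has a birthday. "
                      f"Contact via: {row['channels']}. Notes: {row['notes']}")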

Comment by Viliam on Don't Associate AI Safety With Activism · 2024-12-19T11:37:10.183Z · LW · GW

I guess it depends on the political bubble. So this may not necessarily be about activists as such, but about some political bubbles increasing recently (something something Russia Today something something Trump).

Comment by Viliam on sarahconstantin's Shortform · 2024-12-19T11:05:31.483Z · LW · GW

keeping track of people you know. as an inveterate birthday-forgetter and someone too prone to falling out of touch with friends, I bet there are ways for AI tools to do helpful things here.

Facebook already reminds me when my friends have birthdays, but recently I noticed that it also offers to write a congratulation comment for me; I just need a single click to send it. Now, Facebook has an obvious incentive to keep me returning to their page every day, so they are not going to fully automate this.

The next necessary functionality would be to write automated replies. I think that could be achieved by LLMs; I just need some service to do it automatically. That way I could have a rich social life, without the need to interact with humans.

Comment by Viliam on RussellThor's Shortform · 2024-12-19T09:47:14.771Z · LW · GW

Perhaps thinking about IQ conflates two things: correctness and speed. For individual humans, these seem correlated: people with higher IQ are usually able to get more correct results, more quickly.

But it becomes relevant when talking about groups of people: whether a group of average people is better than a genius depends on the nature of the task. The genius will be better at doing novel research. The group of normies will be better at doing lots of trivial paperwork.

Currently, the AIs seem comparable to having an army of normies on steroids.

The performance of a group of normies (literal or metaphorical) can sometimes be improved by error checking. For example, if you have them solve mathematical problems, they will probably make a lot of errors; adding more normies would allow you to solve more problems, but the fraction of correct solutions would remain the same. But if you give them instructions on how to verify the solutions, you could increase the correctness (at the cost of slowing them down somewhat). Similarly, an LLM can give me hallucinated solutions to math / programming problems, but that is less of a concern if I can verify the solutions in Lean / using unit tests, and reject the incorrect ones; and who knows, maybe trying again will result in a better solution. (In a hypothetical extreme case, an army of monkeys with typewriters could produce Shakespeare, if we had a 100% reliable automatic verifier of their outputs.)
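
Here is a sketch of that generate-and-verify loop; the solver and verifier are trivial stand-ins for an LLM and for Lean/unit tests:

    import random

    def unreliable_solver(n):
        # Stand-in for a normie (or a hallucinating LLM): often wrong.
        guess = int(n ** 0.5)
        return guess if random.random() < 0.3 else guess + random.randint(1, 5)

    def verify(n, candidate):
        # Stand-in for a 100% reliable checker.
        return candidate * candidate == n

    def solve(n, attempts=1000):
        for _ in range(attempts):
            candidate = unreliable_solver(n)
            if verify(n, candidate):
                return candidate  # correctness is guaranteed by the verifier
        return None  # errors cost speed (more attempts), not correctness

    print(solve(144))  # 12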

So it seems to me, the question is how much we can compensate for the errors caused by "lower IQ". Depending on the answer, that's how long we have to wait until the AIs become that intelligent.

Comment by Viliam on Nathan Young's Shortform · 2024-12-19T09:12:07.785Z · LW · GW

I think 3 is more like: "Attempting to achieve results in social reality, by 'social accuracy', regardless of factual accuracy."

  • 1 = telling the truth, plainly
  • 2 = lying, for instrumental purposes (not social)
  • 3 = tribal speech (political correctness, religious orthodoxy, uncritical contrarianism, etc.)
  • 4 = buzzwords, used randomly

This is better understood as a 2×2 matrix, rather than a linear sequence of 4 steps.

  • 1, 2 = about reality
  • 3, 4 = about social reality
  • 1, 3 = trying to have a coherent model of (real or social) reality
  • 2, 4 = making a random move to achieve a short-term goal in (real or social) reality

Comment by Viliam on CstineSublime's Shortform · 2024-12-19T08:56:18.140Z · LW · GW

There is also the aspect of "when". You can't keep thinking of a rule 24 hours a day, so the question is: in which situation should your attention be brought to the rule?

"Instead of X, do Y" provides an answer: it is when you are tempted to do X.

Probably relevant: Trigger-Action Planning

Comment by Viliam on bending light · 2024-12-17T23:29:46.363Z · LW · GW

Uhm...

√-16 = √(16 × -1) = √16 × √-1 = 4i

(2+3i)² = (2+3i)(2+3i) = 4 + 2×6i - 9 = 12i - 5

Comment by Viliam on Is this a better way to do matchmaking? · 2024-12-17T15:13:52.191Z · LW · GW

Perhaps there could be a way to efficiently measure similarity between people, without relying on vibes. Something like, measure everyone on a hundred different scales (have them answer a questionnaire, have an LLM analyze their free texts), then say something like "this and this person seem similar to me", "this and that person do not seem similar". The system would figure out which dimensions you care about, and then find in a database the people most similar (in the dimensions you care about) to the one you want.
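
A minimal sketch of the "figure out which dimensions you care about" step; the weighting scheme is my own crude assumption, not a worked-out method:

    import numpy as np

    profiles = np.random.rand(1000, 100)  # 1000 people scored on 100 hypothetical scales

    # Pairs the user labeled: (i, j, True) = "these two seem similar to me".
    labeled = [(0, 1, True), (0, 2, False), (3, 4, True)]

    # Downweight dimensions where "similar" pairs differ (the user apparently
    # doesn't care about those); upweight dimensions where "dissimilar" pairs differ.
    weights = np.ones(100)
    for i, j, similar in labeled:
        diff = np.abs(profiles[i] - profiles[j])
        weights += -diff if similar else diff
    weights = np.clip(weights, 0.01, None)  # keep all weights positive

    def most_similar(target, k=5):
        dist = np.abs(profiles - profiles[target]) @ weights  # weighted L1 distance
        dist[target] = np.inf  # exclude the person themselves
        return np.argsort(dist)[:k]

    print(most_similar(0))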

Comment by Viliam on are IQ tests a good measure of intelligence? · 2024-12-17T15:08:04.122Z · LW · GW

I define "intelligence" as "having a strong ability to hit narrow targets in a large search space"

If you make the definition too general, you may run into the No Free Lunch Theorem. For every universe where X is an intelligent strategy, we can design a Troll-X universe where using that strategy always results in a disaster. (An angry god measures your IQ, and hits you with a lightning bolt if it exceeds 150.) So the question is, what is intelligence in our universe.

Also, we are humans. As long as a test measures the ability of a human to hit narrow targets in general, we don't (yet) have to worry about the test being unfit for some machines or aliens.

Also, is there a one "official" IQ test?

No, there are multiple IQ tests that have been calibrated on large population samples, and they correlate to each other.

or does any random internet thing that calls itself an "iq test" work?

Obviously not.

which ones are real?

No online IQ test is real. How would you calibrate such a thing?

Among the actual IQ tests, Raven's Progressive Matrices are a solid test. There are also others, but I don't remember which ones.

if they are not good measures of intelligence by this definition, is there a definition of intelligence which they are good at measuring?

They are good at measuring "the thing that people you would intuitively call 'intelligent' have in common".

Like, literally, this is how psychometrics works. You choose a concept you are interested in, for example "intelligence" or "niceness" or "whatever". At first, you have no idea how to measure anything like that. But you can point at a few people who are obviously an example of that, and a few who are obviously not. So that's a beginning. We need to find a test where the former will score higher than the latter.

So the next step is that you brainstorm dozens of questions that seem related to the thing you want to measure. Then you give those questions to thousands of people. You observe two things: (A) which people score high and which people score low, and (B) which questions correlate with each other; the formal way to do this is called factor analysis.

Then you think about the results. Like, maybe if the analysis shows that there are two or three factors, you choose the questions most correlated with each factor, and try to figure out what the questions correlated with the same factor have in common, and how they differ from the questions correlated with another factor. You find out that you have actually tried to measure two or three things, because you were confused about them, but now you can see that they are not the same thing.

If you do this with the naive concept of "intelligence", you will find out that there is one big factor that correlates with your concept of intelligence, and a few smaller ones that are something else (for example fluency in English). So you take the questions that correlate with the big factor, and call the resulting score "IQ".

This is how it was traditionally done. If you believe that this does not sufficiently measure the "quality of hitting a narrow target", you may be right, and in principle you could follow a similar process to design a better test for that. You might find out (in the part where you do the factor analysis) that the thing you were trying to measure was actually a combination of multiple things, such as intelligence and conscientiousness. (Because there is a difference between "target-hitting" in abstract, and "target-hitting, as it is actually implemented in humans".)
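
A toy illustration of the process described above, with synthetic data: one big latent factor drives most questions, a smaller unrelated one (say, English fluency) drives the rest, and the dominant eigenvalue of the correlation matrix reveals the big factor:

    import numpy as np

    rng = np.random.default_rng(0)
    n_people, n_questions = 5000, 12

    g = rng.normal(size=n_people)        # the big latent factor
    english = rng.normal(size=n_people)  # a smaller, unrelated factor

    g_loadings = np.r_[np.full(9, 0.8), np.zeros(3)]  # 9 questions load on g
    e_loadings = np.r_[np.zeros(9), np.full(3, 0.8)]  # 3 load on "English"
    scores = (np.outer(g, g_loadings) + np.outer(english, e_loadings)
              + rng.normal(scale=0.6, size=(n_people, n_questions)))

    # Eigenvalues of the correlation matrix, largest first:
    eigvals = np.linalg.eigvalsh(np.corrcoef(scores.T))[::-1]
    print(eigvals[:4])  # one dominant eigenvalue, one smaller one, then noise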

Comment by Viliam on World Models I'm Currently Building · 2024-12-17T13:31:21.445Z · LW · GW

What are the failure modes of various groups that try to keep secrets?

I think the secret should be known to as few people as possible.

If the secret requires e.g. people doing research (so with fewer people you would also have slower research), it helps if you can split the secret into smaller parts, and divide them between different people, so no one knows the entire secret; ideally they have no idea what purpose their research might serve. Information is provided on a need-to-know basis, and people are punished even for leaking the little info they have. You may provide them intentionally misleading information that has no impact on their part of research.
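
The "split the secret so that no one knows all of it" idea has a standard cryptographic analogue; here is a minimal XOR-based sketch (all shares required; a real system would rather use a threshold scheme like Shamir's):

    import secrets
    from functools import reduce

    def split(secret: bytes, n: int) -> list:
        # Split into n shares; any n-1 of them reveal nothing about the secret.
        shares = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]
        last = bytes(reduce(lambda a, b: a ^ b, chunk) for chunk in zip(secret, *shares))
        return shares + [last]

    def recombine(shares: list) -> bytes:
        # XOR-ing all shares together restores the secret.
        return bytes(reduce(lambda a, b: a ^ b, chunk) for chunk in zip(*shares))

    shares = split(b"the entire secret", 3)
    assert recombine(shares) == b"the entire secret"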

You will probably need some kind of counter-intelligence, and maybe regularly test the loyalty of your employees by providing various controlled temptations; not only will you filter out the stupid traitors, but even the smart ones will be suspicious of actual temptations, because they will suspect it is yet another test.

At that moment you probably need to think hard about the balance of power, so you don't end up e.g. with the counter-intelligence department overthrowing the supposed leaders. Not sure how to achieve this. I think the general rule is that the inside-oriented departments (those who are supposed to handle internal bureaucracy) have more power than the outside-oriented ones (those who do the originally intended mission of the organization). Notice how the word "secretary" originally referred to a humble assistant, but gradually "secretary general" became the most powerful role in an organization. This gets complicated.

As a dictator, how to build 100% surveillance instead of 99%, to increase the stability of your dictatorship?

I think the dictators secure their power not (only) by doing nerdy things, but mostly by being psychopaths who can credibly threaten to deliver horrible punishments even for minor infractions. When you say that any attempt to circumvent your surveillance will be punished by torturing the person, all their friends, and all their relatives to death (perhaps with the exception of the person who betrays them), suddenly everyone has an incentive to avoid doing anything suspicious and to police their neighbors and family. The dictators often stay in power by making everyone a potential enemy of everyone, and by choosing to let a thousand innocent people be tortured to death rather than let one potential conspirator escape punishment.

Often as a side effect the country becomes weak in certain respects, e.g. because no one dares to deliver the bad news to the leader, and because the smart people try to leave your country, or at least avoid attracting your attention (which means they will not use their talents to the fullest).

With 99% surveillance and an LLM, a simple algorithm for a dictator is "find everyone whose behavior is unusual in any way, and have the secret police torture them until they confess". A more gentle way would be to find everyone whose behavior is unusual in any way, and force them to wear a camera on their person 24 hours a day (and kill them if they fail to do so). Also, everyone needs to give you all their passwords, and their operating system will regularly make screenshots of their activities to be archived and analyzed by the LLM (I think this is already a functionality in Windows 11).

Give huge rewards to traitors. The cheapest way for you would be to give them all the property of the people they betrayed, and maybe even make the traitors (those who survive your interrogation) their slaves. Publicly celebrate some successful traitors. A conspiracy against you may be technically possible, but most people will be too afraid to try. Also, at the smallest conflict among the conspirators, some may be tempted to betray the rest (and thus gain amnesty for themselves); and even if they are not, at least they will suspect each other, which will disrupt cooperation.

You could implant bombs in people that can be remotely detonated, or even better, that require a scheduled update or they explode. Everyone with access to the internet gets one. For the scheduled bomb update, you need to bring the camera records and the screenshots from your computer. Basically, you separate people into important and unimportant; the important ones are watched more intensely, but they are the only ones who get full access to various things.

(I grew up in a communist country, if that is not obvious from my writing.)

Is there any way to increase public access to therapy-client records from over 30-60 years ago? Is it a good idea to do this?

Good for whom? You can blackmail the people who are still alive. Some things from 30 years ago are no longer important, but some of them are, e.g. if you figure out someone's sexual orientation, etc.

How to best nudge people

Make it the default option.

How much power do elites have to take decisions that go against their local incentives and local culture?  For example if the prime minister of a country is in favour of declaring war

This assumes that the prime minister is the one with the power to make decisions, rather than e.g. their sponsor who also happens to have some blackmail material on them.

Yes I am biased lol, I think most elites don't do anything interesting with their lives. 

Sounds plausible to me, but I have no first-hand experience.

unethical in the context of a research experiment. What are some techniques to bypass this?

Do the research in China?

Why didn't the US nuke USSR cities immediately after nuking Japan to establish a nuclear monopoly, before USSR got nukes?

John von Neumann was in favor of nuking the USSR a.s.a.p. "If you say why not bomb them tomorrow, I say why not today? If you say today at five o'clock, I say why not one o'clock?"

I think that the USA was full of Soviet spies and "useful idiots" in those days. People in the West back then had no idea what life in the USSR actually looked like. They were incredibly (from today's perspective) naive about Soviet propaganda; they basically assumed that if the Soviet government said something, it must bear at least some resemblance to the truth (maybe an exaggeration, but not a complete fabrication). So they assumed that the USSR was basically a workers' paradise (with an occasional egg broken here and there to make the omelette). It didn't help that the USA chose to ally with the USSR against Nazi Germany, so the crimes of the USSR were further downplayed for the sake of preserving the alliance. In such a situation, it wasn't too difficult for the USSR to find leftists willing to betray their own country for a vision of a workers' paradise everywhere.

Basically, imagine the people who take their information from Russia Today these days, except imagine that this would be what most of the smart people are doing, because they have no information that would contradict it.

Should I just stop caring as much about grammar and spelling in my writing, and invent more shorthands?

Only if you don't want other people to read and respond.

Comment by Viliam on Introducing Avatarism: A Rational Framework for Building actual Heaven · 2024-12-17T12:05:54.885Z · LW · GW

If the future is good, it will be a technical problem whether we can resurrect the dead (especially working against the second law of thermodynamics). If the future is bad, it doesn't matter what we want.

I am not sure what there is to discuss, other than how to preserve the people who are still alive (cryonics) and how to increase the probability that the future is good, which is what this website is mostly about.

Comment by Viliam on How counterfactual are logical counterfactuals? · 2024-12-17T11:55:55.120Z · LW · GW

Does the identical twin one shot prisoners dilemma only work if you are functionally identical or can you be a little different and is there anything meaningful that can be said about this?

I guess it depends on how much the parts that make you "a little different" are involved in your decision making.

If you can put it in numbers, for example -- I believe that if I choose to cooperate, my twin will choose to cooperate with probability p; and if I choose to defect, my twin will defect with probability q; also I care about the well-being of my twin with a coefficient e, and my twin cares about my well-being with a coefficient f -- then you could take the payout matrix and these numbers, and calculate the correct strategy.

Option one, what if you cooperate. You multiply your payout, which is C-C with probability p, and C-D with probability 1-p; and also your twin's payout, which is C-C with probability p, and D-C with probability 1-p; then you multiply your twin's payout by your empathy e, and add that to your payout, etc. Okay, this is option one; now do the same for option two, and then compare the numbers.
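
Spelled out with standard prisoner's dilemma payoffs (the specific payoff numbers are my assumption):

    # My payoff for (my_move, twin_move); standard PD numbers.
    payoff = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

    def expected_utility(my_move, match_prob, empathy):
        # match_prob = probability the twin copies my move (p or q above);
        # empathy = how much I weigh the twin's payoff (e or f above).
        other = "D" if my_move == "C" else "C"
        matched = payoff[(my_move, my_move)] * (1 + empathy)
        mismatched = payoff[(my_move, other)] + empathy * payoff[(other, my_move)]
        return match_prob * matched + (1 - match_prob) * mismatched

    p, q, e = 0.9, 0.9, 0.0  # highly correlated twins, no altruism
    print(expected_utility("C", p, e))  # ≈ 2.7
    print(expected_utility("D", q, e))  # ≈ 1.4 -> cooperating wins even with e = 0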

It gets way more complicated when you cannot make a straightforward estimate of the probabilities, because the algorithms are too complicated. It could even be impossible to find a fully general solution (because of the halting problem).

Comment by Viliam on Debunking the myth of safe AI · 2024-12-17T10:12:39.482Z · LW · GW

those wars would be pretty pointless as well, because every single individual on earth has immediate access to the best and most intelligent fighting techniques, but also to the most intelligent techniques to protect themselves.

Knowledge is not everything. Looking e.g. at Ukraine today, it's the "ammo" they need, not knowledge.

Even if we assume almost magical futuristic knowledge that would change the war profoundly, still one side would have more resources, or better coordination to deploy it first, so rather than a perfect balance, it would be a huge multiplier to the already existing imbalance. (What kind of imbalance would be relevant depends on the specific knowledge.)

that's why I'm advocating for slowly decensoring LLMs, because that's the only way how we can sensibly handle this.

Slowness is a necessary, but not sufficient condition. Unless you know how you should do it, doing it more slowly would probably just mean arriving at the same end result, only later.

we need to improve the socioeconomic factors that lead to people wanting to commit crime in the first place

The problem is, the hypothesis of "socioeconomic factors cause crime" is... not really debunked, but rather, woefully inadequate to explain actual crime. Some crime is done by otherwise reasonable people doing something desperate in difficult circumstances. But that is a small fraction.

Most crime is done by antisocial people, drug addicts, people with low impulse control, etc. The kind of people who, even if they won $1M in a lottery today, would probably soon return to crime anyway. Because it is exciting, makes them feel powerful, or just feels like a good idea at the moment. A typical criminal in the first world is not the "I will steal a piece of bread because I am starving" kind, but the "I will hurt you because I enjoy doing it" kind.

But it seems that you are aware of it, and I don't understand what your proposed solution is, other than "something must be done".

Comment by Viliam on Debunking the myth of safe AI · 2024-12-17T09:37:00.611Z · LW · GW

I like the way you expose the biases of the LLM. Obvious in hindsight, but probably wouldn't occur to me.

But the conclusion about "world peace" sounds so naive as if you have never met actual humans.

Comment by Viliam on Dress Up For Secular Solstice · 2024-12-17T09:22:38.913Z · LW · GW

I like this idea a lot!

My usual objections against dress codes are that they often require social skills to figure out, and sometimes are expensive. Well, this explanation seems simple, and "buy a black shirt" should be within most people's budgets.

(Of course, there is a space to navigate within the proposed dress code. You could have cheaper or more expensive black clothing, etc. But this complication was already there; the proposal does not make it worse.)

So what remains is signaling of conformity. Which I would expect to be controversial, and I am surprised that there is no pushback against it already. Because our kind, famously, sucks at cooperation. (For some people, it would probably be better to say "fights against it with suicidal fanaticism".) So it would be nice to have a simple way to select for people who are pro-rationality and open to cooperation. Those sound like nice people to cooperate with.

In general, social conventions are a signal of conformity and cooperation. Problem is, they often come with a cost we would consider inappropriate (for example the convention of never talking about certain topics comes with a cost of not being able to discuss those topics without incurring social penalty), or are difficult to understand for autistic people (so they do not distinguish between defecting on purpose and merely failing to infer the unwritten social norm from other people's behavior).

Comment by Viliam on avturchin's Shortform · 2024-12-14T21:43:59.809Z · LW · GW

Suppose that you are a whistleblower, and you suspect that someone will try to "suicide" you. How can you protect yourself?

If someone wants to murder you, they can. If you ever walk outside, you can't avoid being shot by a sniper. Or a random thug will be paid by a mysterious stranger to stab you. So my question is not "how can you make yourself immortal", but rather "how can you make it so that if you are killed, it will very obviously not be a suicide".

Saying "I have no intention to kill myself, and I suspect that I might be murdered" is not enough.

Wearing a camera that is streaming to a cloud 24/7, and your friends can publish the video in case of your death... seems a bit too much. (Also, it wouldn't protect you e.g. against being poisoned. But I think this is not a typical way how whistleblowers die.) Is there something simpler?

Comment by Viliam on Benito's Shortform Feed · 2024-12-14T20:12:32.031Z · LW · GW

Could be also something random. Maybe the friend broke up with someone recently.

People believe that other people pretend their loving emotions more than is real

Well, it's a difficult situation to figure out. Yes, people sometimes (often?) pretend. Does it mean that all emotions of some kind/intensity X are fake? Not necessarily. But it is difficult to figure out what is real and what is fake. So different people will believe different things, and there is no obvious way to figure out who is right, so... maybe it's better to drop the topic?

Comment by Viliam on Purplehermann's Shortform · 2024-12-14T20:01:30.944Z · LW · GW

Whether something is technically and economically possible is just a part of the puzzle. The remaining part is whether the people who make decisions have the incentives to do so.

According to Bryan Caplan, schools certify: intelligence, conscientiousness, and conformity. Online learning would certify intelligence, conscientiousness (even more than school attendance), but not conformity. Would the employers be okay with that?

Also, some prestigious universities select for having tons of money and/or the right social connections. The education is not the point. The point is that your parents had to be a part of the social "inner circle" to get you to the university, and you spent a few years socializing with other kids of the same kind, establishing the "inner circle" of the next generation. Making the credentials available to hoi polloi would defeat the entire purpose.

Comment by Viliam on First Thoughts on Detachmentism · 2024-12-13T22:24:03.134Z · LW · GW

I guess approving it and letting it sink by karma was the right move.

It feels like there is some substance that would be interesting if written in different words, but it is too vague to be useful.

Comment by Viliam on WannabeChthonic's Shortform · 2024-12-13T21:38:17.765Z · LW · GW

English is the de facto language of the LessWrong forum

The existing texts are in English, so naturally the website is visited by people who speak English.

I think you can post in German, but then most readers will not understand it. Are you okay with that? One problem is that if you don't get a positive response, you will not know how much of that is because of the language, and how much is because of the content.

use this account only for LW realted things

Yeah, using a different language and posting on topics unrelated to LW would definitely be a bad idea.

Comment by Viliam on adamzerner's Shortform · 2024-12-13T20:04:33.366Z · LW · GW

In the past, we used to have Sequence re-runs.

I wonder if we should try it again, and maybe not just with the Sequences, but also with the best articles that were collected in the books.

Comment by Viliam on “Charity” as a conflationary alliance term · 2024-12-13T11:15:12.922Z · LW · GW

I wonder if we could split the word by using different adjectives, for example "global charity" vs "local charity".

The opposite of "effective" is "ineffective", which has negative connotations (and is kinda wrong; the "ineffective altruists" may be less effective at altruism, but more effective at e.g. supporting their in-group). We need an adjective that the other side could accept.

Maybe we should even adopt some degree of hypocrisy and say things like "there is local charity and global charity, and both of them are equally noble and valid, but the global charity is currently underfunded, so we focus on that", or something. Advertise ourselves without attacking the others.

Comment by Viliam on Just one more exposure bro · 2024-12-13T08:35:50.302Z · LW · GW

I guess you could make both kinds of mistakes: more exposure when what you need is an insight, and more insights when what you need is exposure. Among the nerds, the latter is probably much more frequent. But yes, if you tried more exposure and it didn't work, what you may need is the right insight.

Comment by Viliam on David Gross's Shortform · 2024-12-13T08:23:06.958Z · LW · GW

Compare to asking your colleague something that could be found by 10 seconds of googling. These days, you are supposed to google first. In ten years, you will be supposed to ask an AI for the explanation first, which for many people will also be the last step; and for the more curious ones the expected second and third steps will be something like "try a different prompt", "ask additional questions", "switch to a different AI", etc.

Comment by Viliam on adamzerner's Shortform · 2024-12-13T08:17:55.590Z · LW · GW

Worrying about dilution makes sense, but the default is... not reading any part of the Sequences.

I like the readthesequences.com page, because it has the posts without comments. People complain that the posts are long, but the comments are 10x longer, and it is tempting (at least for me) to look at them while reading the posts.

But yes, I also wish we had something even better.

Comment by Viliam on adamzerner's Shortform · 2024-12-12T16:24:06.803Z · LW · GW

Maybe there are important things that are going over my head. Or maybe I actually understand things too well now after hanging around this community for so long.

Depending on the quality of the lesson and your understanding of it, I think the following combinations are possible:

  • the lesson is wrong or stupid = not impressed
  • going over your head = not impressed
  • you understand it, but failed to internalize it = impressed on re-read
  • you already internalized it = not impressed

Many of the outcomes seem similar, so it is difficult to distinguish between them.

Seems to me that people are often impressed by texts that happen to provide some last missing piece of a puzzle for them. Which is a different thing for different people, and even for the same person at a different moment of their life. Which is why recommending books to others is difficult.

Comment by Viliam on MondSemmel's Shortform · 2024-12-12T16:14:05.134Z · LW · GW

From the first link:

While members of the TPOT community struggle with whether to embrace Mangione as part of what they call “the ingroup,” other extremely online commentators insist he has to be a member of the scene.

TPOT is commonly cited as an offshoot of rationalism, a popular Silicon Valley viewpoint popularized by thinkers like computer scientist Eliezer Yudkowsky and psychiatrist Scott Alexander that suggests all aspects of life should be decided based on rational thinking. Members of the TPOT community are often referred to as “post-rationalists” — former adherents who became “disillusioned with that whole scene, because it’s a little culty, it’s a little dogmatic,” said journalist.

Still, those in the subculture tend to share a few common interests and values: a fixation on technology — specifically, artificial intelligence — and an interest in self-improvement through diet, exercise, and meditation. Members speak often of exercising personal agency or free will in order to change their lives. (The term “agentic” is heavily employed in TPOT spaces to mean someone who exercises a high degree of personal agency; members encourage one another by saying, “You can just do things!”) Certain corners of the subculture embrace the use of psychedelics for self-help, and others, according to Rosenberg, adhere to pronatalism, the belief that a high birth rate is crucial to human survival.

Others suggest Mangione is more aligned with effective altruism, the similarly rationalist ideology that had a heyday in tech spaces before its chief promoter, Sam Bankman-Fried, was convicted of federal crimes.

Still, Rosenberg noted at least one other similarity between Mangione and the TPOTers: a penchant for overly long tweets.

“It’s a very verbal culture. People really love to have long-form discussions, state their opinions,” she said. “Really, just people who like to talk a lot.”

From the second link:

I have not found any evidence that Luigi was a specific fan of Scott, but he expressed appreciation for several figures associated with this big tent movement, including Peter Thiel.

My summary:

The evidence about the connection is that some members of TPOT (which is more like ex-rationalists) think that the shooter could be considered one of them (which is their opinion, not his). Also, someone unspecified said, without providing any evidence, that the shooter seems more like an EA to them. Finally, Rosenberg (who is she, and why should I care about her opinion?) found the smoking gun: both the shooter and the rationalists are verbose.

Also, the shooter knows Peter Thiel's name, which suggests that he is a member of a mysterious inner circle.

(I guess this passes for journalism these days.)

EDIT: It also feels weird to use "people in a specific group approve of the shooter" as evidence for something, when there are probably many groups that do the same.

Comment by Viliam on Why Isn't Tesla Level 3? · 2024-12-12T12:06:32.761Z · LW · GW

I wonder how much safer roads could be if human drivers were all a little more patient?

Depends on the country. In my opinion, in Switzerland, the human drivers are already almost perfect. In Italy, I was surprised that most of them somehow manage to survive the day.

Comment by Viliam on the gears to ascenscion's Shortform · 2024-12-12T11:45:01.844Z · LW · GW

From today's perspective, Marx is just another old white cishet tech bro. (something something swims left)

I never expected that one day I would miss the old-style Marxists, but God forgive me, I do. We disagreed on many things, but at least we were able to have an intelligent debate.

Comment by Viliam on Shortform · 2024-12-12T11:20:05.622Z · LW · GW

What is the next step in this direction? Neuralink? I wonder what horrors it will bring.

Comment by Viliam on [deleted post] 2024-12-12T09:53:35.974Z

there is strong selection pressure for these organizations to have people that have low P(doom) and/or don't (think they) value the future lives of themselves and others

This is an important thing I didn't realize. When I try to imagine the people who make decisions in organizations, my intuitive model would be somewhere between "normal people" and "greedy psychopaths", depending on my mood, and how bad the organization seems.

But in addition to this, there is the systematic shift towards "people who genuinely believe things that happen to be convenient for the organization's mission", as a kind of cognitive bias on a group scale. Not average people with average beliefs. Not psychopaths who prioritize profit above everything. But people who were selected from the pool of average by having their genuine beliefs aligned with what happens to be profitable in a given organization.

I was already aware of similar things happening in "think tanks", where producing beliefs is the entire point of the organization. Their collective beliefs are obviously biased, not primarily because the individuals are biased, but because the individuals were selected for having their genuine beliefs already extreme in a certain direction.

But I didn't realize that the same is kinda true for every organization, because the implied belief is "this organization's mission is good (or at least neutral, if I am merely doing it for money)".

Would this mean that epistemically healthiest organizations are those whose employees don't give a fuck about the mission and only do it for money?

Comment by Viliam on daijin's Shortform · 2024-12-12T09:32:45.697Z · LW · GW

less than 6% of global GDP; they are exceptions not rules (Austria, Denmark, Finland, France, Germany, Iceland, Italy, Netherlands, Norway, Spain, Sweden and Switzerland)

It is 16%, not 6%.

(Approximately, Germany is 4.7% of global GDP, France is 3.1%, Italy 2.2%, Spain 1.5%, Netherlands 1.1%, Switzerland 0.9%, Sweden 0.6%, Austria and Norway 0.5% each, Denmark 0.4%, Finland 0.3%.)
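
A quick check of that arithmetic:

    shares = {"Germany": 4.7, "France": 3.1, "Italy": 2.2, "Spain": 1.5,
              "Netherlands": 1.1, "Switzerland": 0.9, "Sweden": 0.6,
              "Austria": 0.5, "Norway": 0.5, "Denmark": 0.4, "Finland": 0.3}
    print(sum(shares.values()))  # ~15.8 -> roughly 16% of global GDP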

Comment by Viliam on Everett's Cat's Shortform · 2024-12-12T08:06:14.858Z · LW · GW

In life you make choices; usually you lose something and you gain something. There are two ways to interpret the statement that he "ruined his life":

  • that the cost he paid was a predictable decrease in his quality of everyday life, as he goes to prison (I guess)
  • that the cost was too high compared to the gain, i.e. that the choice resulted in a net loss

The first is trivially true. The second depends on one's utility function. From a certain perspective, all activists are ruining their lives. The activists seem to disagree.

Comment by Viliam on sarahconstantin's Shortform · 2024-12-11T16:34:11.666Z · LW · GW

Nope, politicians. SBF donated tons of money to Democrats (and a smaller ton of money to Republicans, just to be sure).

Comment by Viliam on sarahconstantin's Shortform · 2024-12-11T15:58:50.054Z · LW · GW

Another interesting part from the "debanking" article:

[Sam Bankman-Fried] orchestrated a sequential privilege escalation attack on the system that is the United States of America, via consummate skill at understanding how power works, really works, in the United States. They rooted trusted institutions and used each additional domino’s weight against the next. A full recounting of the political strategy alone could easily fill a book. [...] One major reason why crypto has experienced what feels like performative outrage from Democrats since 2022 is that they are trying to demonstrate that crypto did not successfully buy them.

Comment by Viliam on halinaeth's Shortform · 2024-12-11T13:06:39.251Z · LW · GW

in communities which gain prestige, infighting which causes collapse

Yes. A bit more cynically, sometimes you have a community with no infighting and you think "that's because we are nice people", but the right answer happens to be "that's because infighting isn't profitable yet". And I think this is much more likely to happen over money rather than prestige; prestige is just a possible way to get funding.

Prestige itself is less fungible and less zero-sum. For example, imagine that the two of us start an artistic web project together: we buy a web domain, install some web publishing software, and then each of us posts two or three nice pictures each week. We keep doing it for a few months, and we acquire a group of fans.

And suppose that I happen to be the one of us who has the admin password to the software, and also the web domain is registered to my name. It didn't seem important at the beginning; we didn't expect our relationship to go bad, we probably didn't really even expect the project to succeed, and I just happened to be the person with better tech skills or maybe just more free time at the moment. Anyway, the situation is such that I could remove you from the project by clicking a button, should I choose to do so. At first, you just never thought about it, and probably neither did I. (Though it seems to me that some people have the right instincts, and always try to get this kind of role, just in case.)

So, I could remove you by a click of a button, but why would I do that? I am happy to have a partner. A website with twice as many pictures is more likely to get popular. The effect is probably superlinear, because posting a picture every day will make the fans develop a habit of checking out the website first thing every morning. Also, we have slightly different styles; some fans prefer my art, some prefer your art. And if I kicked you out, you could just start your own website, and your fans would follow you there.

Three years later, we get so popular that some art grant agency notices us, and decides to give us a generous grant of €1000 monthly, indefinitely. And that's the moment when I will seriously start thinking about clicking the button. It would require more work from me, but the money is worth it. (I am working on the assumption that as long as the quality and popularity of the website don't decrease dramatically, the agency won't care about the details.) You could start your alternative website, but this grant money would stay with me. So I just need to be smart about minimizing the disruption caused by your absence. In the short term, I could compensate by working harder. But in the long term, I need to somehow de-emphasize our role as creators, and make us more into rentiers (does this word even exist in English? Google Translate suggests "reindeer" but that's not what I have in mind). For example, I could suggest allowing guest contributions; maybe even make it a competition, like the fans would send us their pictures by e-mail, we would select the non-crappy ones, post five of them every other day, and let the users vote for the best ones every other week, etc. You might like the idea; but even if not, I would probably convince you by volunteering to do all the extra work myself. OK, soon the website is like 40% our contributions, and 60% guest contributions and voting. Perfect; time for me to push the button, and announce publicly that we had some philosophical disagreements about the True Nature of Art, so you decided to follow your own way, and I wish you good luck with your new projects, but the fans don't need to worry, because the website will continue working as usual. (Gee, I am such a competent villain in my stories; I should probably be more afraid of myself. But I am just describing what I have seen other people do. Whenever I was involved in person, I was on the receiving end.)

members would start tearing down or attacking "rival" communities to gain in-group points. [...] seems parallel to the prestige > infighting problem

Sometimes there are no clear boundaries; the insiders in the wider sense of the word are outsiders in the narrower sense of the word, e.g. one community of artists dissing another community of artists. Sometimes, the more similar the groups are to each other, the stronger the hate.

no one wants to criticize publicly for fear of being eaten alive, and I only hear people express discontent 1:1, never in public

An opportunity for a coup? Create a "safe space" for the unhappy people to complain; but only invite the competent ones. You don't want dead weight; and each additional member increases the risk of someone betraying the group. (This would be safer to do in an offline community, where you could meet in person and leave no written records; so if someone betrays you, you can simply deny it.)

would love any links

Sorry, the only thing that comes to my mind is the one you linked.

This may be needlessly paranoid, but consider the possibility whether some "bad choices" made by the founders could have been actually good for them personally, and only bad for the rest of the community. (There is a saying "Never attribute to malice that which is adequately explained by stupidity", but I would say "Never attribute to stupidity that which is adequately explained by selfish incentives.")

Comment by Viliam on Daniel Tan's Shortform · 2024-12-10T21:52:45.145Z · LW · GW

Sounds interesting.

When I think about making YouTube videos, it seems to me that doing it at a high technical level (nice environment, proper lighting and sound, good editing, animations, etc.) is a lot of work, so it would be good to split the work between at least 2 people: 1 who understands the ideas and creates the script, and 1 who does the editing.

Comment by Viliam on sarahconstantin's Shortform · 2024-12-10T21:45:41.812Z · LW · GW

Julian Jaynes would say that this is how human consciousness as we know it today has evolved.

Which makes me wonder, what would he say about the internet bubbles we have today. Did we perhaps already reach peak consciousness, and now the pendulum is swinging back? (Probably not, but it's an interesting thought.)