Posts

Trying to be rational for the wrong reasons 2024-08-20T16:18:06.385Z
How unusual is the fact that there is no AI monopoly? 2024-08-16T20:21:51.012Z
An anti-inductive sequence 2024-08-14T12:28:54.226Z
Some comments on intelligence 2024-08-01T15:17:07.215Z
Evaporation of improvements 2024-06-20T18:34:40.969Z
How to find translations of a book? 2024-01-08T14:57:18.172Z
What makes teaching math special 2023-12-17T14:15:01.136Z
Feature proposal: Export ACX meetups 2023-09-10T10:50:15.501Z
Does polyamory at a workplace turn nepotism up to eleven? 2023-03-05T00:57:52.087Z
GPT learning from smarter texts? 2023-01-08T22:23:26.131Z
You become the UI you use 2022-12-21T15:04:17.072Z
ChatGPT and Ideological Turing Test 2022-12-05T21:45:49.529Z
Writing Russian and Ukrainian words in Latin script 2022-10-23T15:25:41.855Z
Bratislava, Slovakia – ACX Meetups Everywhere 2022 2022-08-24T23:07:41.969Z
How to be skeptical about meditation/Buddhism 2022-05-01T10:30:13.976Z
Feature proposal: Close comment as resolved 2022-04-15T17:54:06.779Z
Feature proposal: Shortform reset 2022-04-15T15:25:10.100Z
Rational and irrational infinite integers 2022-03-23T23:12:20.135Z
Feature idea: Notification when a parent comment is modified 2021-10-21T18:15:54.160Z
How dangerous is Long COVID for kids? 2021-09-22T22:29:16.831Z
Arguments against constructivism (in education)? 2021-06-20T13:49:01.090Z
Where do LessWrong rationalists debate? 2021-04-29T21:23:55.597Z
Best way to write a bicolor article on Less Wrong? 2021-02-22T14:46:31.681Z
RationalWiki on face masks 2021-01-15T01:55:49.836Z
Impostor Syndrome as skill/dominance mismatch 2020-11-05T20:05:54.528Z
Viliam's Shortform 2020-07-22T17:42:22.357Z
Why are all these domains called from Less Wrong? 2020-06-27T13:46:05.857Z
Opposing a hierarchy does not imply egalitarianism 2020-05-23T20:51:10.024Z
Rationality Vienna [Virtual] Meetup, May 2020 2020-05-08T15:03:56.644Z
Rationality Vienna Meetup June 2019 2019-04-28T21:05:15.818Z
Rationality Vienna Meetup May 2019 2019-04-28T21:01:12.804Z
Rationality Vienna Meetup April 2019 2019-03-31T00:46:36.398Z
Does anti-malaria charity destroy the local anti-malaria industry? 2019-01-05T19:04:57.601Z
Rationality Bratislava Meetup 2018-09-16T20:31:42.409Z
Rationality Vienna Meetup, April 2018 2018-04-12T19:41:40.923Z
Rationality Vienna Meetup, March 2018 2018-03-12T21:10:44.228Z
Welcome to Rationality Vienna 2018-03-12T21:07:07.921Z
Feedback on LW 2.0 2017-10-01T15:18:09.682Z
Bring up Genius 2017-06-08T17:44:03.696Z
How to not earn a delta (Change My View) 2017-02-14T10:04:30.853Z
Group Rationality Diary, February 2017 2017-02-01T12:11:44.212Z
How to talk rationally about cults 2017-01-08T20:12:51.340Z
Meetup : Rationality Meetup Vienna 2016-09-11T20:57:16.910Z
Meetup : Rationality Meetup Vienna 2016-08-16T20:21:10.911Z
Two forms of procrastination 2016-07-16T20:30:55.911Z
Welcome to Less Wrong! (9th thread, May 2016) 2016-05-17T08:26:07.420Z
Positivity Thread :) 2016-04-08T21:34:03.535Z
Require contributions in advance 2016-02-08T12:55:58.720Z
Marketing Rationality 2015-11-18T13:43:02.802Z
Manhood of Humanity 2015-08-24T18:31:22.099Z

Comments

Comment by Viliam on Proposal to increase fertility: University parent clubs · 2024-11-20T14:31:43.097Z · LW · GW

I agree. The best advertisement for having kids is to see other people having kids. Not only because people instinctively copy others, but also because you can ask the parents the things you are curious about, or you can try to babysit their kids to get an idea of what it would be like to have your own kids.

Also, the more places are parent-friendly, the less costly it is to become a parent. If your friends mostly socialize in loud places with lots of alcohol, starting a family will make you socially isolated, because you would not want to bring your kids to places like that. If instead your friends meet at a park, you can keep your social life and bring your kids along with you.

If many people meet at the same place, it can make sense to have a room specifically for kids, at least with some paper and crayons, so that the kids can play there and leave their parents alone for a moment. Also, one big box where people can bring toys they no longer need at home.

Comment by Viliam on Neutrality · 2024-11-20T12:47:51.942Z · LW · GW

yet we still don't have anything close to a unified theory of human mating, relationships, and child-rearing that's better.

We even seem to have a collective taboo against developing such a theory, or even against making relatively obvious observations.

Comment by Viliam on Making a conservative case for alignment · 2024-11-20T12:26:36.806Z · LW · GW

I approve of the militant atheism, because there are just too many religious people out there, so without drawing a clear line we would have an Eternal September of people joining Less Wrong just to say "but have you considered that an AI can never have a soul?" or something similar.

And if being religious is strongly correlated with some political tribe, I guess it can't be avoided.

But I think that going further than that is unnecessary and harmful.

Actually, we should probably show some resistance to the stupid ideas of other political tribes, just to make our independence clear. Otherwise, people would hesitate to call out bullshit when it comes from those who seem associated with us. (Quick test: Can you say three things the average Democrat believes that are wrong and stupid? What reaction would you expect if you posted your answer on LW?)

Specifically on trans issues:

I am generally in favor of niceness and civilization, therefore:

  • If someone calls themselves "he" or "she", I will use that pronoun without thinking twice about it.
  • I disapprove of doxing in general, which extends to all speculations about someone's biological sex.

But I also value rationality and free speech, therefore:

  • I insist on keeping an "I don't know, really" attitude to trans issues. I don't know, really. The fact that you are yelling at me does not make your arguments any more logically convincing.
  • No, I am not literally murdering you by disagreeing with you. Let's tone down the hysteria.
  • There are people who feel strongly that they are Napoleon. If you want to convince me, you need to make a stronger case than that.
  • I specifically disagree on the point that if someone changes their gender, it retroactively changes their entire past. If someone presented as male for 50 years, then changed to female, it makes sense to use "he" to refer to their first 50 years, especially if this is the pronoun everyone used at that time. Also, I will refer to them using the name they actually used at that time. (If I talk about Ancient Rome, I don't call it the Italian Republic either.) Anything else feels like magical thinking to me. I won't correct you if you do that, but please do not correct me, or I will be super annoyed.

Comment by Viliam on Ayn Rand’s model of “living money”; and an upside of burnout · 2024-11-19T22:03:19.029Z · LW · GW

Just some quick guesses:

If you have problems with willpower, maybe you should make your predictions explicit whenever you try to use it. I mean, as a rationalist, you are already trying to be better calibrated, so you could leverage the same mechanism into supporting your willpower. If you predict a 90% chance of success for some action, and you know that your predictions are usually right, in theory you should feel little resistance. And if you predict a 10% chance of success, maybe you shouldn't be doing it? And it helps you to be honest with yourself.

(This has a serious problem, though. Sometimes the things with 10% chance of success are worth doing, if the cost is small and the potential gain large enough. Maybe in such cases you should reframe it somehow. Either bet on large numbers "if I keep doing X every day, I will succeed within a month", or bet on some different outcome "if I start a new company, there is a 10% chance of financial success, and a 90% chance that it will make a cool story to impress my friends".)
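
(As a made-up worked example of that caveat, assuming a 10% success chance, a $100 cost, and a $10,000 gain — the numbers are mine, not from the original comment:)

```latex
\text{EV} = p \cdot G - c = 0.10 \times \$10{,}000 - \$100 = \$900 > 0
\qquad
P(\text{success within 30 days}) = 1 - (1 - 0.10)^{30} \approx 0.96
```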

This also suggests that it is futile to use willpower in situations where you have little autonomy. If you try hard, and then an external influence ruins all your plans, and this was all entirely predictable, you just burned your internal credibility.

(Again, sometimes you need at least to keep the appearance of trying hard, even if you have little control over the outcome. For example, you have a job where the boss overrides all your decisions and thereby ruins the projects, but you still need the money and can't afford to get fired. It could help to reframe, to make the bet about the part that is under your control. Such as "if I try, I can make this code work, and I will feel good about being competent", even if later I am told to throw the code away because the requirements have changed again.)

This also reminds me about "goals vs systems". If you think about a goal you want to achieve, then every day (except for maybe the last one) is the day when you are not there yet; i.e. almost every day is a failure. Instead, if you think about a system you want to follow, then every day you have followed the system successfully is a success. Which suggests that willpower will work better if you aim it at following a system, and stop thinking about the goal. (You need to think about the goal when you set up the system, but then you should stop thinking about it and only focus on the system.)

The strategy of "success spiral" could be interpreted as a way to get your credibility back. Make many small attempts, achieve many small successes, then attempt gradually larger things. (The financial analogy is that when you are poor, you need to do business that does not require large upfront investments, and gradually accumulate capital for larger projects.)

Comment by Viliam on Ayn Rand’s model of “living money”; and an upside of burnout · 2024-11-19T21:43:17.663Z · LW · GW

Perhaps the "decisions" that happen in the brain are often accompanied by some change in hormones (I am thinking about Peterson saying how lobsters get depressed after they lose a fight), so we can't just willpower them away. Instead we need to find some hack that reverts the hormonal signal.

Sometimes just taking a break helps, if the change in hormones is temporary and gets restored to the usual level. Or we can do something pleasant to recharge (eat, talk to friends). Or we can try working with the unconscious, using some visualization or power poses or whatever.

Comment by Viliam on Ayn Rand’s model of “living money”; and an upside of burnout · 2024-11-19T21:26:04.651Z · LW · GW

There is an ACX article on "trapped priors", which in the Ayn Rand analogy would be... uhm, dunno.

The idea is that a subagent can make a self-fulfilling prophecy like "if you do X, you will feel really bad". You use some willpower to make yourself do X, but the subagent keeps screaming at you "now you will feel bad! bad!! bad!!!" and the screaming ultimately makes you feel bad. Then the subagent says "I told you so" and collects the money.

The business analogy could be betting on company internal prediction market, where some employees figure out that they can bet on their own work ending up bad, and then sabotage it and collect the money. And you can't fire them, because HR does not allow you to fire your "best" employees (where "best" is operationalized as "making excellent predictions on the internal prediction market").

Comment by Viliam on Ayn Rand’s model of “living money”; and an upside of burnout · 2024-11-19T21:11:14.793Z · LW · GW

Parts of the human mind are not little humans. They are allowed to be irrational. It can't be rational subagents all the way down. Rationality itself is probably implemented as subagents saying "let's observe the world and try to make a correct model" winning a reputational war against subagents proposing things like "let's just think happy thoughts".

But I can imagine how some subagents could have less trust in "good intentions that didn't bring actual good outcomes" than others. For example, if you live in an environment where it is normal to make dramatic promises and then fail to act on them. I read some books long ago claiming that children of alcoholic parents are often like that. They just stop listening to promises and excuses, because they have already heard too many of them, and they have learned that nothing ever happens. I can imagine that they turn this habitual mistrust against themselves, too. "I tried something, and it was a good idea, but due to bad luck it failed" sounds too much like the parent explaining how they had the insight that they need to stop drinking, but due to some external factor they had to drink yet another bottle today. In short, if your environment fails you a lot, you can respond by becoming unrealistically harsh on yourself.

Another possible explanation is that different people's attention is focused on different places. Some people pay more attention to the promises, some pay more attention to the material results, some pay more attention to their feelings. This itself can be a consequence of previous experience with paying attention to different things.

Comment by Viliam on Alexander Gietelink Oldenziel's Shortform · 2024-11-19T20:49:32.873Z · LW · GW

Fair point. (I am not convinced by the argument that if the AIs are trained on human texts and feedback, they are likely to end up with values similar to humans, but that would be a long debate.)

Comment by Viliam on sarahconstantin's Shortform · 2024-11-19T16:09:43.528Z · LW · GW

i want to read his nonfiction

It would have been nice to read A Journal of the Plague Year during covid.

Comment by Viliam on Shortform · 2024-11-19T14:42:02.812Z · LW · GW

Once your conspiracy gets large enough, chances are some member will be able to take care of the legal issues if they arise, by whatever means necessary.

(It's like starting a company: the critical part is growing to the point where you can afford ramen and a good lawyer. You want to get there as fast as possible. Afterwards you can relax and keep growing slowly, if you wish.)

Comment by Viliam on Alexander Gietelink Oldenziel's Shortform · 2024-11-19T13:06:59.863Z · LW · GW

Imagine that a magically powerful AI decides to set up a new political system for humans and create a "Constitution of Earth" that will be perfectly enforced by local smaller AIs, while the greatest one travels away to explore other galaxies.

The AI decides that the most fair way to create the constitution is randomly. It will choose a length, for example 10000 words of English text. Then it will generate all possible combinations of 10000 English words. (It is magical, so let's not worry about how much compute that would actually take.) Out of the generated combinations, it will remove the ones that don't make any sense (an overwhelming majority of them) and the ones that could not be meaningfully interpreted as "a constitution" of a country (this is kinda subjective, but the AI does not mind reading them all, evaluating each of them patiently using the same criteria, and accepting only the ones that pass a certain threshold). Out of the remaining ones, the AI will choose the "Constitution of Earth" randomly, using a fair quantum randomness generator.

Shortly before the result is announced, how optimistic would you feel about your future life, as a citizen of Earth?

Comment by Viliam on Neutrality · 2024-11-19T12:51:16.390Z · LW · GW

Saying the (hopefully) obvious, just to avoid potential misunderstanding: There is absolutely nothing wrong with writing something for a smaller group of people ("people working in this space"), but naturally such articles get less karma, because the number of people interested in the topic is smaller.

Karma is not a precise tool to measure the quality of content. If there were more than a handful of votes, the direction (positive or negative) usually means something, but the magnitude is more about how many people felt that the article was written for them (therefore the highest karma goes to well-written articles aimed at a general audience).

My suggestion is to mostly ignore these things. Positive karma is good, but bigger karma is not necessarily better.

Comment by Viliam on Making a conservative case for alignment · 2024-11-19T12:26:24.339Z · LW · GW

I apologize. I spent some time digging for ancient evidence... and then decided against publishing it.

Short version is that someone said something that was kinda inappropriate back then, and would probably get an instant ban these days, with most people applauding.

Comment by Viliam on Making a conservative case for alignment · 2024-11-18T16:47:35.680Z · LW · GW

Going by today's standards, we should have banned Gwern in 2012.

And I think that would have been a mistake.

I wonder how many other mistakes we made. The problem is, we won't get good feedback on this.

Comment by Viliam on What are Emotions? · 2024-11-18T14:17:01.783Z · LW · GW

Emotions are about reality, but emotions are also a part of reality, so we also have emotions about emotions. I can feel happy about some good thing happening in the outside world. And, separately, I can feel happy about being happy.

In the thought experiments about wireheading, people often say that they don't just want to experience (possibly fake) happy thoughts about X; they also want X to actually happen.

But let's imagine the converse: what if someone proposed a surgery that would make you unable to ever feel happy about X, even if you knew that X actually happened in the world? People would probably refuse that, too. Intuitively, we want to feel the good emotions that we "deserve", plus there is also the factor of motivation. Okay, so let's imagine a surgery that removes your ability to feel happy about X, but solves the problem of motivation by e.g. giving you an urge to do X. People would probably refuse that, too.

So I think we actually want both the emotions and the things the emotions are about.

Comment by Viliam on What are some positive developments in AI safety in 2024? · 2024-11-18T13:55:03.245Z · LW · GW

Welp, this was a short list.

Comment by Viliam on Neutrality · 2024-11-18T13:51:39.969Z · LW · GW

Speaking only for myself, I can agree with the abstract approach (therefore: upvote), but I am not familiar with any of the existing projects mentioned in the article (therefore: no vote, because I have no idea how useful the projects actually are, and thus how useful the list of them is).

Comment by Viliam on Neutrality · 2024-11-18T10:27:21.891Z · LW · GW

Library in the sense of "we collect texts written by other people" is: The Best Textbooks on Every Subject

I would like to see this one improved; specifically to have a dedicated UI where people can add books, vote on books, and review them. Maybe something like "people who liked X also liked Y".

Also, not just textbooks, but also good popular science books, etc.

Comment by Viliam on D0TheMath's Shortform · 2024-11-18T09:30:18.220Z · LW · GW

if you ask mathematicians whether ZFC + not Consistent(ZFC) is consistent, they will say "no, of course not!"

I suspect that many people's intuitive interpretation of "consistent" is ω-consistent, especially if they are not aware of the distinction.
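
(For reference, the standard sketch of why such a theory is consistent but not ω-consistent, assuming ZFC itself is consistent — my addition, not part of the original comment:)

```latex
\begin{align*}
&\mathrm{ZFC} \nvdash \mathrm{Con}(\mathrm{ZFC})
  && \text{(G\"odel's second incompleteness theorem)}\\
&\Rightarrow\ T := \mathrm{ZFC} + \neg\mathrm{Con}(\mathrm{ZFC}) \text{ is consistent}\\
&T \vdash \exists n\ \mathrm{Prf}_{\mathrm{ZFC}}(n, \ulcorner 0{=}1 \urcorner)
  && \text{(``some $n$ codes a proof of a contradiction'')}\\
&T \vdash \neg\mathrm{Prf}_{\mathrm{ZFC}}(\overline{k}, \ulcorner 0{=}1 \urcorner)
  \ \text{for each numeral } \overline{k}
  && \text{(but no concrete $n$ does)}\\
&\Rightarrow\ T \text{ is not } \omega\text{-consistent}
\end{align*}
```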

Comment by Viliam on Lalit Shankar Chowdhury's Shortform · 2024-11-18T09:20:09.943Z · LW · GW

I find it difficult to make distinct categories, but there seem to be two dimensions along which to classify relationships:

  1. How intense the relationship is / how much we "click" emotionally and intellectually.
  2. Whether the relationship is expected to survive a change of the current context.

(Even this is not a clear distinction, because "my relatives" is kinda contextual, but the context is there forever.)

Mapping to your system: close friends = high intensity, context-independent; friendly acquaintances = high intensity, contextual; acquaintances = low intensity, contextual.

One quadrant seems to be missing, but maybe that makes sense: if the relationship is low intensity, why would people bother to keep it outside of the context where it originated?

Comment by Viliam on sarahconstantin's Shortform · 2024-11-15T20:57:16.408Z · LW · GW

Seems to me that Obama had a level of charisma that Hillary did not. (Neither does Biden or Harris.) Bill Clinton had charisma, too. (So did Bernie.)

Also, imagine that you had a button that would make everyone magically forget about race and gender for a moment. I think that the people who voted for Obama would still feel the same, but the people who voted for Hillary would need to think hard about why, and probably their only rationalization would be "so that Trump does not win".

I am not an American, so my perception of American elections is probably extremely unrepresentative, but it felt like Obama was about "hope" and "change", while Hillary was about "vote for Her, because she is a woman, so she deserves to be the president".

I'm still really not sure there isn't a gender effect!

I guess there are people (both men and women) who in principle wouldn't vote for a woman leader. But there are also people who would be happy to give a woman a chance. Not sure which group is larger.

But the wannabe woman leader should not make her campaign about her being a woman. That feels like admitting that she has no other interesting qualities. She needs to project the aura of a competent person who just happens to be female.

In my country, I have voted for a woman candidate twice (1, 2), but they never felt like "DEI hires". One didn't have any woke agenda; the other was pro some woke topics, but she never made them about herself. (It was like "this is what I will support if you elect me", not "this is what I am".)

Comment by Viliam on [deleted post] 2024-11-15T14:24:06.552Z

I suspect that to solve this puzzle, we would need more precise data. For example, the thing about martyrdom. Naively, it makes it sound like the early Christians were quite suicidal, which is amazing in itself, and also makes you wonder how they survived as a group.

But let's try to use numbers. What fraction of early Christians was actually willing to die for their faith? I have no idea, so just for the sake of a thought experiment, I propose a number... 1%. (No idea whether it is correct.)

Suddenly the fact that a religion which promises you an awesome afterlife can make 1% of its members die voluntarily does not feel so surprising. There are all kinds of crazy and otherwise vulnerable people out there. With enough peer pressure, you could probably start a cult where 1% of your members commit some kind of suicide even today. Only, the moment you actually did it, the media would describe you as a crazy murderous cult, and you would probably end up in jail. It would be difficult to keep recruiting members. I suppose Rome could have been different; for example, perhaps it didn't care so much about the suicides of slaves. Also, "suicide by a (Roman) cop" is a non-central form of suicide; it does not make your group look like villains. And if you are actively gaining new members, losing 1% does not make much of a difference.

Also, I wonder how hard the Romans actually tried to eliminate Christians. I imagine that if someone had tried the way Hitler tried to get rid of the Jews, it would have been game over for Christianity. But if the level of persecution is more like "once in a while, we will take a high-status member, try to make them deny Jesus, and kill them if they refuse", that won't stop a group that meanwhile recruits a hundred new members. Also, this was ancient Rome; life was probably cheap, you could get killed for many different things, or die of many different diseases, so perhaps the chance of being killed for your religion didn't increase the overall risk significantly if you were an average member.

Comment by Viliam on Heresies in the Shadow of the Sequences · 2024-11-15T13:16:04.081Z · LW · GW

Stop using LLMs to write. It burns the commons by allowing you to share takes on topics you don't care enough to write about yourself, while also introducing insidious (and perhaps eventually malign) errors.

Yeah, someone just started doing this in ACX comments, and it's annoying.

When I read texts written by humans, there is some relation between the human and the text. If I trust the human, I will trust the text. If the text is wrong, I will stop trusting the human. In short, I hold humans accountable for their texts.

But if you just copy-paste whatever the LLM has vomited out, I don't know... did you at least do some sanity check, in other words, are you staking your personal reputation on these words? Or if I spend my time finding an error, will you just shrug and say "not my fault, we all know that LLMs hallucinate sometimes"? In other words, will feedback improve your writing in the future? If not... then the only reason to give feedback is to warn other humans who happen to read that text.

The same thing applies when someone uses an LLM to generate code. Yes, it is often a far more efficient way to write the code. But did you review the code? Or are you just copying it blindly? We already had a smaller version of this problem with people blindly copying code from Stack Exchange. An LLM is like Stack Exchange on steroids, both the good and the bad parts.

there do exist fairly coherent moral projects such as religions

I am not sure how coherent they are. For example, I was reading on ACX about Christianity, and... it has the message of loving your neighbor and turning the other cheek... but also the recommendation not to cast pearls before swine... and I am not sure whether it makes clear when exactly you are supposed to treat your neighbors with love and when as swine.

It also doesn't provide an answer to whom you should give your coat if two people are trying to steal your shirt, etc.

Plus, there were historical situations when Christians didn't turn the other cheek (Crusades, Inquisition, etc.), and maybe without those situations Christianity would not exist today.

What I am saying is that there is a human judgment involved (which sometimes results in breaking the rules), and maybe the projects are not going to work without that.

Comment by Viliam on Why would ASI share any resources with us? · 2024-11-15T12:52:56.233Z · LW · GW

In this scenario, why would ASI not do either one of the following things: 1) Exploit humans in pursuit of its own goals, while giving us the barest minimum to survive (effectively making us slaves) or 2) Take over the resources of the entire solar system for itself and leave us starving without any resources?

The ASI will do what it is programmed to do. If it means helping humans, it will help humans. If there is a bug in the program, it will... do something that is difficult to predict (and that sounds scary, because most random things are not good).

Make us slaves? We probably wouldn't be useful slaves, compared to alternatives, such as robots, or human bodies with brains replaced by computers.

Taking over the resources probably means killing us in the process, if those resources include e.g. water or oxygen of Earth.

Comment by Viliam on The Humanitarian Economy · 2024-11-14T16:02:53.221Z · LW · GW

If we ignored housing, then "free market + some taxation and giving the money to the poor" kinda sounds like the best of both worlds. Unfortunately, increasing rents can eat the extra money given to the poor. (See also: Georgism.)

Maybe if we could get UBI high enough that people could survive on that alone, it would no longer be necessary to live in cities (close to good jobs) and people could avoid paying too high rent. Or maybe not, because ultimately all land belongs to someone? Not sure.

Comment by Viliam on Jan_Kulveit's Shortform · 2024-11-14T15:40:47.445Z · LW · GW

It is difficult to prove things, but I strongly suspect that in Slovakia, Ján Čarnogurský is a Russian asset.

In my opinion, the only remaining question is when exactly he was recruited, how long a game was played on us. I have suspected him for a long time, but most people probably would have called me crazy for that; however, recently he became openly pro-Russian, to the great surprise of many of his former supporters. So the question is whether I was right and this was a long con, or whether he had a change of mind recently and my previous suspicions were merely a coincidence (homogeneity of the outgroup, etc.).

If this indeed was a long con (maybe, maybe not), then he had a perfect cover story. During communism, he was a lawyer and provided legal support for the anti-Communist opposition. Two years before the fall of communism, he was fired and unemployed. Three months before the fall of communism, he was put in prison. Also, he was strongly religious (perceived as a religious fanatic by some). Remember that Slovakia is a predominantly Catholic country.

After the fall of communism he quickly rose to power. He basically represented the opposition to communism and the comeback of religious freedom. In the 1990s the political scene of Slovakia was basically two camps: those nostalgic for communism, led by Vladimír Mečiar, and those who opposed communism and wanted to join the West, led by Ján Čarnogurský. So we are talking here about the strongest, or second-strongest, politician in the country.

I remember some weird opinions of his from that era. For example, he talked a lot about how Slovakia should be "a bridge between Russia and the West", and that we should build a broad-gauge railway across Slovakia (i.e. from the Ukrainian border to the capital city, which is on the western end). If anyone else had said that, people would probably have suspected them of something, but Čarnogurský's anti-communist credentials were just too perfect, so he stayed above suspicion. (From my perspective, perhaps a little paranoid, that sounded a bit like preparing the ground for an easy invasion. I mean, one day a huge train could arrive from Russia right in our capital city, and if it turned out that the train was full of well-armed soldiers, the invasion could be over before most people even noticed that it began. Note: I have no military expertise, so maybe what I am saying here doesn't make sense.)

Then in 1998 he was unexpectedly replaced as a leader by Mikuláš Dzurinda, in a weird turn of events that was basically a non-violent coup based on a technicality. (The opposition to Mečiar was always fragmented into multiple political parties, so they always ran as a coalition. Mečiar changed the constitution to make elections much more difficult for coalitions than for individual parties. The opposition parties were like "no problem, we will make a faux political party as a temporary facade for our coalition, win the election, revert the law, disband the temporary party, and return to life as usual", and they put Dzurinda, a relatively unknown young guy, in charge of the new party. However, after the election, when they asked him to disband the new party, he was like "LOL, I am the leader of the party that won the election, you guys better shut up", and governed the country.) Those were the best years for Slovakia, politically; we quickly joined the EU and NATO. (Afterwards, Mečiar was replaced in the role of nostalgic post-communist alpha male leader by Robert Fico, who has won almost every election since then, and the opposition remains fragmented.)

Thus Ján Čarnogurský lost most of his political power. No longer the natural (Schelling-point) leader of the opposition; too widely perceived as a religious fanatic to lead anyone but the devout. So he quit politics, founded a private Paneuropean University (together with two Russian entrepreneurs), and later became openly pro-Russian. Among other things, he supports the Russian invasion of Ukraine, organizes protests for "peace" (read: capitulation of Ukraine), and opposes the EU sanctions against Russia. He is the chairman of the Slovak-Russian Society. Recently he received an Order of Honour in Russia.

Comment by Viliam on The Humanitarian Economy · 2024-11-14T10:03:09.918Z · LW · GW

Sometimes it feels like society is a big computer program, and it doesn't matter whether you have the general idea right; as long as there is a syntax error in line 1013, the program is not going to work. (Running a company seems to be the same thing, on a much smaller scale.) Some errors can be fixed by adding a missing semicolon. Sometimes merely fixing an error in one place introduces a related error in a different place, so many places need to be changed in sync.

On top of that, it is a living system. People try to find new exploits all the time. Plus there is a cultural momentum, so that things that work okay in one country will completely fail in a different one; or the things that worked okay a few decades ago no longer work now. The simple model is that people follow the incentives, but in addition to the formal incentives, you have informal ones, such as the opinion of your neighbors. (Sometimes the fear of being rejected by your neighbors is stronger than the fear of legal consequences. And depending on your neighbors, sometimes they push you towards obeying the law, and sometimes they push you towards breaking it.) Now consider that half of the population has IQ 100 or less, some people are psychopaths or drug addicts, so even in the hypothetically optimal system, you will still get people who hurt themselves or others for no good reason, just because the idea occurred to them at the moment.

Also, unlike the situation with programming, there is no clear distinction between the programmer and the system that is programmed. Your attempts to change the system, even for the better, will be actively rejected by those who profit from the way the things currently are, plus everyone who falls for their propaganda. Also, all idealists who have a different vision. Even if you are a dictator, your situation is actually not much better (from the perspective of social engineering), because now you have to keep your army and foreign allies happy, and prevent the population from rebelling against you, which may dramatically limit your options.

...in summary, sometimes it feels to me like magic that things work at all, considering the number of reasons why they should not. I guess it's because there are also millions of people who try to improve things, mostly locally, and they push back against the forces of entropy. But they are often uncoordinated individuals; and also, as individuals, sometimes they die, or burn out, or start a family and no longer have time for their previous activities; and in such cases, sometimes there is a replacement for them, and sometimes there is not and then the local good things fall apart again.

The reason I am writing this is that I don't want to discourage you, but really the devil is in the details.

One typical problem when trying to design a society is: "who will guard the guards themselves?" Like, if you propose an "army of inspectors" to check the businesses, the obvious next question is who will check this army of inspectors. If you don't have a good answer, sooner or later the inspectors will naturally start doing things for their own benefit, rather than to make the system work as intended. Two typical ways are taking bribes, and trying to make their own work as easy as possible. Taking bribes may motivate them to lobby for making the regulations as strict as possible; seemingly for the benefit of the customers (it will be easy to get popular support for such a proposal), but in fact to create more opportunities to take bribes. (From their perspective, the perfect outcome is when the regulation is so difficult that it is virtually impossible to comply with, or at least so difficult that it would be impossible to make a profit while complying with it, so everyone needs to pay a bribe to get approved.) Optimizing for less work means that whenever a business owner proposes a small change, the answer is an automatic no; no one has an incentive to actually think about the proposal. To address this, you would need a second army of meta-inspectors who would check the inspectors, but then the problem might reappear at another level.

And this is not just empty speculation; you can see it in many places. (For example, you need police to reduce crime, but now the USA has a problem with criminal policemen protected by the police unions.) I grew up in socialist Czechoslovakia, which in theory was supposed to be a paradise for workers and peasants, governed by wise and benevolent people in the Party. (We typically called it "the Party", because there was only one.) In theory, it was a perfect opportunity to make everything work great. In practice, that didn't happen. Not only was the entire economy mismanaged (the proverbial shortages of toilet paper), but practically all aspects of life were dysfunctional somehow.

The housing situation... well, you put your name on a waiting list, waited for a decade or more, and then you were assigned a place to live (you couldn't choose the part of the city; you were happy if you were allowed to stay in the same city, because sometimes even that wasn't guaranteed). During that decade or two, you had to stay with your parents, or on your friend's couch; I don't think there was an option to rent. (Technically, you could stay in a hotel all the time, but most people didn't have enough money for that.) If you were lucky, there was a job opportunity offering temporary free housing for its employees. So even if money technically wasn't the problem, housing still was.

Food... was cheap (heavily subsidized) and available, but only in the basic forms. If you walk into a supermarket today, imagine that you had to choose a subset of maybe 15% of the stuff that is there, and that would be all that is ever available, in the entire country (except for a few super expensive luxury shops). Forget about things like "yogurt with fruit flavor" or "low-fat yogurt". Be happy to buy the yogurt if they have one in the shop; there is only one kind, so it's easy to choose. One kind of bread, two kinds of milk, etc. All restaurants in the country cook the same set of meals, based on the government-approved book of recipes, and the inspectors check that they never deviate from a recipe, even if the customers would really prefer something different. But, yeah, unlike in the Soviet Union, at least nobody was starving.

Before you object to the comparison with socialism, my point is that this (as far as I know) didn't happen on purpose. The ruling party might have had its ideological objections to the way markets work, but they had no reason to prevent the workers from getting housing soon or eating tasty meals. Actually, considering that most workers mostly care about their houses and food and beer, improving the housing and meals would have increased the stability of the regime. And yet. The lesson is that things can easily go wrong even with good intentions, if you regulate a bit too much.

Comment by Viliam on The Humanitarian Economy · 2024-11-13T19:28:01.338Z · LW · GW

I am not an expert, but the standard answer to why anything is expensive is that there is less of it than people want. With housing, the usual reasons are various restrictions: you are not allowed to build new houses anywhere you want; and sometimes, without good political connections, you simply won't get permission to build anywhere in the city. The reason for that is that people who already own houses (that is, those who vote for the local government) want their prices to go up.

This gets further complicated by the fact that the value of living in a city depends on the opportunities that are there (such as jobs, shops, etc.). So when you only build relatively few new houses, the costs may actually go up -- partially because costs keep going up almost all the time, and partially because the bigger city is now even more attractive for people who want to get there. Someone should do an experiment and build so many houses that it literally doubles the capacity of the city -- I would expect the prices to go down. But such a thing is unlikely to happen, because the people in the city would vote against it.

The problem with "corps of inspector-accountants validating businesses" is... well, I guess you would have to experience being their target to understand it. Basically, when there is an army of bureaucrats giving you certificates, that kinda makes you their servant, in the sense that if you do anything slightly differently than they want you to, no certificate for you! Among other things, it means zero innovation, because doing something differently from the current "best practice" means not getting certified. Or it may mean that getting a literal 100% score on their criteria is impossible, so everyone needs to pay bribes to get certified. Yeah, in theory it is not supposed to work like that. But in practice, it often does.

Comment by Viliam on The Humanitarian Economy · 2024-11-13T12:11:17.705Z · LW · GW

it hasn’t been the case that there is food or housing production shortfalls due to resourcing for a long time. Said another way, market dynamics conspire to keep food and housing scarce, because it serves the people steering those market dynamics.

Are you sure about housing?

Perhaps you could support your argument with some numbers, such as "how many people want to live in city X" and "how many houses are there in city X". It seems like you are suggesting that there are plenty of free houses everywhere, but landlords simply refuse to rent them, for reasons not going beyond "either pay me lots of money or fuck you". If that is true, you would serve your cause better by documenting this specific thing.

This undoes the market forces keeping people beholden to unscrupulous landlords and corporate overlords, through providing universal basic livelihood support.

How specifically would this work? A greedy landlord requires a ton of gold. The generous government provides a free ton of gold for every citizen. But now the greedy landlord requires two tons of gold. No problem, the generous government starts providing two tons of gold for every citizen. But now the greedy landlord requires three tons of gold. When will the market forces finally get defeated?

a debit card tied to a transaction processing system that automatically filters and limits transactions to only essentials, and only within sane limits.

So basically food stamps, with some fancy tech on top of that. Let's ignore the tech for a moment. What happens to the food stamp after someone uses it to pay for food?

Imagine that you are the owner of a grocery store, and at the end of the day you are left with $10000 worth of food stamps in your hand. Now you need to pay your employees, and also pay your electricity bills. It is definitely illegal to use the food stamps for the latter; and your employees probably wouldn't be happy to be paid in food stamps, because food stamps allow them to buy food, but not to pay their other expenses.

One possible solution is that if you are a shop owner (but not if you are just a random citizen), the government will replace your food stamps with real money. First, it's obvious what happens next on the black market. Second, you have introduced a lot of friction and regulation into the system, because now we need rules about who exactly qualifies as a "food shop owner" and what exactly qualifies as "food" for the purpose of our food stamps. Only bread and butter? What about cheese? What about really expensive cheese? Chocolate? Alcohol?


You seem to have spent a lot of time thinking about how your proposed solution would make the world a better place, but too little time checking whether it would actually work.

Also, you seem to assume that robots will do all the work, which is like... maybe, one day; and maybe the day is closer than most people imagine... but we are not there yet. At this moment, most jobs can't be automated.

Comment by Viliam on sarahconstantin's Shortform · 2024-11-13T09:46:00.386Z · LW · GW

even a "neutral" college class (let's say a standard algorithms & data structures CS class) is non-neutral relative to certain beliefs

Things that many people consider controversial: evolution, sex education, history. But even for mathematical lessons, you will often find a crackpot who considers a given topic controversial. (-1)×(-1) = 1? 0.999... = 1?

some people object to the structure of universities and their classes to begin with

In general, unschooling.

In my opinion, the important functionality of schools is: (1) separating reliable sources of knowledge from bullshit, (2) designing a learning path from "I know nothing" to "I am an expert" where each step only requires the knowledge of previous steps, (3) classmates and teachers to discuss the topic with.

Without these things, learning is difficult. If an autodidact stumbles on some pseudoscience in a library, even if they later figure out that it was bullshit, it is a huge waste of time. Picking up random books on a topic and finding out that I don't understand the things they expect me to already know is disappointing. Finding people interested in the same topic can be difficult.

But everything else about education is incidental. No need to walk into the same building. No need to only have classmates of exactly the same age. The learning path doesn't have to be linear; it could be a directed graph. Generally, there is no need to learn a specific topic at a specific age, although it makes sense to learn the topics that are prerequisites to a lot of knowledge as soon as possible. Grading is incidental; you need some feedback, but IMHO it would be better to split the knowledge into many small pieces, and grade each piece as "you get it" or "you don't".
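
(A minimal sketch of that prerequisite structure — the topics and dependencies here are made up for illustration:)

```python
from graphlib import TopologicalSorter

# Hypothetical prerequisite graph: topic -> the topics it requires.
prerequisites = {
    "counting": set(),
    "addition": {"counting"},
    "multiplication": {"addition"},
    "fractions": {"multiplication"},
    "algebra": {"multiplication"},
    "probability": {"fractions", "algebra"},
}

# Any topological order is a valid learning path: each topic appears
# only after all of its prerequisites, and the order is not unique.
path = list(TopologicalSorter(prerequisites).static_order())
print(path)
# e.g. ['counting', 'addition', 'multiplication', 'fractions', 'algebra', 'probability']
```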

...and the conclusion of my thesis is that a good educational system would focus on the essentials, and be liberal about everything else. However, there are people who object to the very things I consider essential. An educational system that would seem incredibly free to me would still seem oppressive to them.

neutrality is a type of tactic for establishing cooperation between different entities.

That means you can have a system neutral towards selected entities (the ones you want in the coalition), but not others. For example, you can have religious tolerance towards an explicit list of churches.

This can lead to a meta-game where some members of the coalition try to kick out someone, because they are no longer necessary. And some members strategically keeping someone in, not necessarily because they love them, but because "if they are kicked out today, tomorrow it could be me; better avoid this slippery slope".

Examples: Various cults in the USA that are obviously destructive but enjoy a lot of legal protection. Leftists establishing an exception for "Nazis", and then expanding the definition to make it apply to anyone they don't like. Similarly, the right calling everything they don't like "communism". And everyone on the internet calling everything "religion".

"we will take no sides between these things; how they succeed or fail is up to you"

Or the opposite of that: "the world is biased against X, therefore we move towards true neutrality by supporting X".

is it robust to being intentionally subverted?

So, situations like: the organization is nominally politically neutral, but a human at an important position has political preferences... so far this is normal and maybe unavoidable, but what if there are multiple humans like that, all having the same political preference? If they start acting in a biased way, is it possible for other members to point it out... without getting accused in turn of "bringing politics" into the organization?

As soon as somebody asks "why is this the way things are?" unexamined normality vanishes.

They can easily create a subreddit r/anti-some-specific-way-things-are and now the opposition to the idea is forever a thing.

a way to reconstruct some of the best things about our "unexamined normality" and place them on a firmer foundation so they won't disappear as soon as someone asks "why?"

Basically, we need a "FAQ for normality". The old situation was that people who were interested in a topic knew why things are a certain way, and others didn't care. If you joined the group of people who are interested, sooner or later someone explained it to you in person.

But today, someone can make a popular YouTube video containing some false explanation, and overnight you have tons of people who are suddenly interested in the topic and believe a falsehood... and the people who know how things are just don't have the capacity to explain that to someone who lacks the fundamentals, believes a lot of nonsense, has strong opinions, and is typically very hostile to someone trying to correct them. So they just give up. But now we have the falsehood established as an "alternative truth", and the old process of teaching the newcomers no longer works.

The solution for "I don't have a capacity to communicate to so many ignorant and often hostile people" is to make an article or a YouTube video with an explanation, and just keep posting the link. Some people will pay attention, some people won't, but it no longer takes a lot of your time, and it protects you from the emotional impact.

There are things for which we don't have a good article to link, or the article is not known to many. We could fix that. In theory, school was supposed to be this kind of FAQ, but that doesn't work in a dynamic society where new things happen after you are out of school.

a lot of it is literally decided by software affordances. what the app lets you do is what there is.

Yeah, I often feel that having some kind of functionality would improve things, but the functionality is simply not there.

To some degree this is caused by companies having a monopoly on the ecosystem they create. For example, if I need some functionality for e-mail, I can make an open-source e-mail client that has it. (I think historically spam filters started like this.) If I need some functionality for Facebook... there is nothing I can do about it, other than leave Facebook, but there is a problem with coordinating that.

Sometimes this is on purpose. Facebook doesn't want me to be able to block the ads and spam, because they profit from it.

but having a substantive framework at all clearly isn't incompatible with thinking independently, recognizing that people are flawed, or being open to changing your mind.

Yeah, if we share a platform, we may start examining some of its assumptions, and maybe at some moment we will collectively update. But if everyone assumes something else, it's the Eternal September of civilization.

If we can't agree on what is addition, we can never proceed to discuss multiplication. And we will never build math.

I think the right boundary to draw is around "power users" -- people who participate in that network heavily rather than occasionally.

Sometimes this is reflected by the medium. For example, many people post comments on blogs, but only a small fraction of them write blogs. By writing a blog you join the "power users", and the beauty of it is that it is free for everyone, and yet most people keep themselves out voluntarily.

(A problem coming soon: many fake "power users" powered by LLMs.)

I have many values differences with, say, the author of the Epic of Gilgamesh, but I still want to read it.

There is a difference between reading for curiosity and reading to get reliable information. I may be curious about e.g. Aristotle's opinion on atoms, but I am not going to use it to study chemistry.

In some way, I treat some people's opinions as information about the world, and other people's opinions as information about them. Both are interesting, but in a different way. It is interesting to know my neighbor's opinion on astrology, but I am not using this information to update on astrology; I only use it to update on my neighbor.

So I guess I have two different lines: whether I care about someone as a person, and whether I trust someone as a source of knowledge. I listen to both, but I process the information differently.

this points towards protocols.

Thinking about the user experience, I think it would be best if the protocol already came with three default implementations: as a website, as a desktop application, and as a smartphone app.

A website doesn't require me to install anything; I just create an account and start using it. The downside is that the website has an owner, who can kick me out of the website. Also, I cannot verify the code. A malicious owner could probably take my password (unless we figure out some way to avoid this that isn't too inconvenient). Multiple websites could talk to each other in a way that is as transparent for the user as possible.

A smartphone app, because that's what most people use most of the day, especially when they are outside.

A desktop app, because that provides most options for the (technical) power user. For example, it would be nice to keep an offline archive of everything I want, delete anything I no longer want, export and import data.

Comment by Viliam on How to Live Well: My Philosophy of Life · 2024-11-12T15:16:02.368Z · LW · GW

You have already posted this here, two months ago. And it seems to be the only way you interact with this website.

From my perspective, the things you wrote are potentially an interesting opening to a debate, but not all of that at the same time. That's just too much text.

I think the document is actually not bad, but in my opinion it suffers from "it only sounds convincing to those who already agree". Once the reader disagrees with something, there is almost no argument offered, besides the fact that you said so, plus a recommended book to read.

One specific thing I noticed is that all your arguments are made from a selfish perspective. For example, the only reason to help other people is that doing so can make me feel better. Again, this is the type of thing where if you already agree, you agree, but if you don't already agree, it leaves you unimpressed.

Comment by Viliam on How I Learned That You Should Push Children Into Ponds · 2024-11-12T09:40:09.719Z · LW · GW

I now decide upon my stock portfolio by throwing a dart at a dart board.

If you only put S&P 500 companies on the dart board, your investment strategy will in the long term resemble investing in passive index funds, which I was told is the smartest way to invest money.
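
(A quick simulation sketch of that claim — the number of companies, return distribution, and horizon are made up for illustration, not real market data:)

```python
import random

random.seed(0)
N_COMPANIES = 500   # stand-in for the S&P 500
N_YEARS = 30

# One made-up yearly return path per company.
returns = [[random.gauss(0.08, 0.20) for _ in range(N_YEARS)]
           for _ in range(N_COMPANIES)]

def compound(path):
    """Compound a sequence of yearly returns into a final growth multiplier."""
    total = 1.0
    for r in path:
        total *= 1 + r
    return total

# "Index fund": equal-weighted growth across all companies.
index_growth = sum(compound(p) for p in returns) / N_COMPANIES

def dart_investor():
    """Each year, hold a single company picked by a dart throw."""
    total = 1.0
    for year in range(N_YEARS):
        pick = random.randrange(N_COMPANIES)
        total *= 1 + returns[pick][year]
    return total

# Averaged over many dart throwers, the result approximates the index.
darts = [dart_investor() for _ in range(10_000)]
print(f"index growth:        {index_growth:.2f}x")
print(f"average dart growth: {sum(darts) / len(darts):.2f}x")
```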

People are leaving a lot of money on the table for status reasons. Deciding the portfolio by throwing a dart yourself is considered low-status, paying fees to credentialed people to do the same thing is considered high-status.

tl;dr - investing in the stock market is not about investing in the stock market

Comment by Viliam on papetoast's Shortforms · 2024-11-12T09:24:14.413Z · LW · GW

Seems like the problem is that in real life people are not perfectly rational, and also they have an instinct to reciprocate when they receive a gift (at least by saying "thank you" and not throwing the gift away).

In a world where Bob is perfectly rational and Tim has zero expectations about his gift, the situation is simple. Previously, Bob's choices were "spend $300 on good headphone", "spend $100 on bad headphone and $200 on something else", and "spend $300 on something else". Tim's action replaced the last two options with a superior alternative "use Tim's headphone and spend $300 on something else". Bob's options were not made worse.

But real people are not utility maximizers. We instinctively try to choose a locally better option, and how we feel about it depends on what we perceive as the baseline. Given the choice between 10 utilons and 3 utilons, we choose 10 and feel like we just "gained 7 utilons". Given the choice between 10 utilons and 9 utilons, we choose 10 again, but this time we feel like we just "gained 1 utilon". Given the choice between 10 utilons and 10 utilons of a different flavor, we might feel annoyed about having to choose.

Also, if Tim expects Bob to reciprocate in a certain way, the new options are not strictly better, because "spend $300 on good headphone" got replaced by "spend $300 on good headphone, but owe Tim a favor for giving me the $100 headphone I didn't use".

Comment by Viliam on Spade's Shortform · 2024-11-12T09:00:04.478Z · LW · GW

At work, what works for me is making notes. For each task, I start a new page in note-taking software and put there everything related to the task: a link to the Jira ticket, a short description, people to contact about analysis and testing, links to relevant resources, etc. Sometimes I also write an outline like "first I will do this, then this". Then I start working on the task, adding more information as it emerges: things that people told me, things I found in the source code, links to the commits and pull requests I made, etc.

The reason is that interruptions are frequent (both planned and unplanned) and it seems like I can't do much about that, but the thing I can do is make it easier to recover after the interruption. This way I can make use of a short block of time, by reading about the planned next step in my notes, doing it, and adding a note about the result.

Unfortunately, the same strategy does not work for me in my private life. I am not sure why, but I have a few suspicions. In private life I have to play both the role of the manager (decide what to do) and the individual contributor (actually do it); my current version of the system works okay for the latter but not for the former. The difficult part is to make myself continue working on the interrupted project, when there are so many alternatives.

Without interruptions, this is automatic. It is difficult for me to start working on something, but once I do, I can easily get obsessed, and could continue working on the same thing for days. That is how I accomplished some things when I was single and childless; I knew that the right time for projects was weekends, especially the ones that had a holiday on Friday or Monday. I could work on something for 4 days in a row, only taking breaks for food and sleep. But now that I have kids, I simply don't get that amount of uninterrupted time, ever.

Interruptions at work are not just difficult for me, but also very unpleasant. It feels like getting hurt in some mental way; having my autonomy violated. Forcing myself to start doing something when I am not in the mood, it hurts. Finally getting in the mood as I am doing it, and then being forced to stop, it hurts again. To have an interruption looming ahead of me means to expect to get hurt soon... that is, if I actually start working on the project. The unpleasant feelings accumulate and result in an aversion to the task they are associated with. The more I get interrupted trying to work on a certain task, the more I hate the task. At work, I usually don't have a choice, and I have to finish the task anyway. In private, this makes me abandon projects, or procrastinate on them a lot.

Not all interruptions have the same effect. Taking a break to eat, sleep, exercise, or take a walk is okay. Those are simple activities, so I can continue thinking about the project in the background. The bad kind of interruption is when I need to think hard about something different, when I need to solve a different problem.

Comment by Viliam on What is malevolence? On the nature, measurement, and distribution of dark traits · 2024-11-11T15:48:00.195Z · LW · GW

do self-awareness of one's own malevolence factors help one to limit the malevolence factors?

Probably the effect would be nonlinear, like the evil people would just laugh, the average might get depressed and give up, and the mostly-good would strive to achieve perfection (or conclude that they are already good enough compared to others, and relax their efforts?).

Comment by Viliam on quila's Shortform · 2024-11-11T15:30:34.601Z · LW · GW

Thank you for the article!

The long version: https://qst.darkfactor.org/?site=pFBYndBUExaK041MEY5TmJCa3RiaWNsKzhiT2V3Y01iL0t5cC80RVE3dEdMNjZHczNocU1BaHA1czZIT1dyd2pzSg

Comment by Viliam on AI #89: Trump Card · 2024-11-11T14:29:26.242Z · LW · GW

Everyone, definitely click the "Claude being funny" link.

favorite human interaction is when they ask me to proofread something and i point out a typo and they do 'are you SURE?' like i haven't analyzed every grammatical rule in existence. yes karen, there should be a comma there. i don't make the rules i just have them burned into my architecture

Comment by Viliam on Going Beyond "immaturity" · 2024-11-11T09:05:57.500Z · LW · GW

The day only has 24 hours. The time you spend fixing bugs in Linux is time you could spend on something else. If you have no better projects, then sure, go ahead.

When you have other important things to do, then you need to set priorities, focus on what is essential, and choose the path of least resistance for everything else.

Comment by Viliam on Alexander Gietelink Oldenziel's Shortform · 2024-11-11T08:58:04.641Z · LW · GW

It smells a bit of 4d-chess.

To me it just seems like understanding the competitive nature of prediction markets.

In our bubble, prediction markets are celebrated as a way to find truth collectively, in a way that disincentivizes bullshit. And that's what they are... from outside.

But it's not how it works from the perspective of the person who wants to make money on the market! You don't want to cooperate on finding the truth; you actually wish for everyone else to be as wrong as possible, because that's when you make the most money. Finding the truth is what the mechanism does as a whole; it's not what the individual participants want to do. (Similarly, economic competition reduces the prices of goods, but each individual producer wishes they could sell things as expensively as possible.) Telling the truth means leaving money on the table. As a rational money-maximizer, you wish that other people believe that you are an idiot! That will encourage them to bet against you more, as opposed to updating towards your position; and that's how you make more money.

This goes strongly against our social instincts. People want to be respected as smart. That's because in social situations, your status matters. But prediction markets are the opposite of that: status doesn't matter at all, only being right matters. It makes sense to sacrifice your status in order to make more money. Would you rather be rich, or famous as a superforecaster?

This could be a reason why money-based prediction markets will systematically differ from prestige-based prediction markets. In money-based markets, charisma is a dump stat. In prestige-based ones, that's kinda the entire point.

Comment by Viliam on [Intuitive self-models] 8. Rooting Out Free Will Intuitions · 2024-11-11T08:27:38.099Z · LW · GW

OK, that makes sense. I forgot about that part, and probably underestimated its meaning when I read it.

Seems to me that a large part of all this is about how modeling how other people model me is... on one hand, necessary for social interactions with other people... on the other hand, it seems to create some illusions that prevent us from understanding what's really going on.

Comment by Viliam on [Intuitive self-models] 8. Rooting Out Free Will Intuitions · 2024-11-10T22:10:25.729Z · LW · GW

Note the difference between saying (A) “the idea of going to the zoo is positive-valence, a.k.a. motivating”, versus (B) “I want to go to the zoo”. (A) is allowed, but (B) is forbidden in my framework, since (B) involves the homunculus.

This sounds like the opposite of the psychological advice to make "I statements".

I guess the idea of going to the zoo can have a positive valence in one brain, and a negative valence in another, so as long as we include other people in the picture, it makes sense to specify which brain's valence we are talking about. And "I" is a shortcut for "this is how things are in the brain of the person that is thinking these thoughts".

Comment by Viliam on Does the "ancient wisdom" argument have any validity? If a particular teaching or tradition is old, to what extent does this make it more trustworthy? · 2024-11-10T16:55:39.310Z · LW · GW

Don't overthink it. Two downvotes (or maybe one strong downvote) just means that there were one or two people who didn't like the answer, and the rest either didn't notice it or didn't care enough to vote.

I understand that it sucks, but in general, if few people vote on a thing, there is a lot of noise.

Comment by Viliam on Does the "ancient wisdom" argument have any validity? If a particular teaching or tradition is old, to what extent does this make it more trustworthy? · 2024-11-10T16:49:45.935Z · LW · GW

Claims about "if you keep doing this thing, after a lot of hard work you will achieve these amazing results" seem memetically useful regardless of their truth value. It gives people motivation to join the group and work harder; and whenever someone complains about working hard but not getting the advertised results, you can dismiss them as doing it wrong, or not working hard enough.

Also, consider the status incentives. Claiming to achieve the results after a lot of hard work is high-status; admitting to not achieving the results is low-status; and the claims are externally unverifiable anyway.

I believe monks have a taboo against talking about their attainments

I suspect this rule appeared as a consequence of many monks following the status incentives too obviously. Letting them continue doing so would be good for them but bad for the group, so the groups that made the taboo were more successful.

(Cynically speaking, the actual rule seems to be: Low-status people are not allowed to talk about their attainments. If you are high-status, others will make assumptions about your attainments, and you can just smile mysteriously and speak some generic wise words, or otherwise confirm it in a plausibly deniable way.)

Comment by Viliam on Does the "ancient wisdom" argument have any validity? If a particular teaching or tradition is old, to what extent does this make it more trustworthy? · 2024-11-10T16:33:05.585Z · LW · GW

This is a great point! Generally, whenever someone says "let's do this traditional thing", you might want to check whether the thing actually is traditional... before getting distracted by the endless debates about whether "traditional things" are better than "modern things" (often too unspecific to be useful).

Adding my own too-unspecific-to-be-useful statement: I suspect that most things advertised as traditional are in fact not. Or that the tradition claiming to be millennia old actually started like a hundred years ago, so it is kinda traditional, just not in the way the proponents claim.

Comment by Viliam on Viliam's Shortform · 2024-11-10T16:16:00.254Z · LW · GW

If you dismiss ideas coming from outside academia as non-scientific, you have a point. Those ideas were not properly tested, peer-reviewed, etc.

But if you dismiss those ideas as not worth scientists' attention, you are making a much stronger statement. You are effectively making a positive claim that the probability of those ideas being correct is smaller than 5%. You may be right, or you may be wrong, but it would be nice to provide some hint about why you think so. Are you just dissing the author, or do we have actual historical experience that, among ideas coming from a certain reference group, fewer than 1 in 20 turn out to be correct?

Why 5%? Let's do the math. Suppose that we have a set of hypotheses of which about 5% are true. We test them, using a p=0.05 threshold for publishing. That means, out of 10,000 hypotheses, about 500 are true, and let's assume that all of them get published; and about 9,500 are false, and about 475 of them get published. This would result in approximately 50% of published findings failing to replicate... which seems to be business as usual in certain academic journals?
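The same arithmetic in a few lines of Python (assuming, as above, that every true hypothesis gets detected and published, i.e. 100% statistical power, which is a simplification):

    n = 10_000      # hypotheses tested
    prior = 0.05    # fraction that are actually true
    alpha = 0.05    # false hypotheses passing the p < 0.05 threshold
    true_published = n * prior                  # 500
    false_published = n * (1 - prior) * alpha   # 475
    failure_rate = false_published / (true_published + false_published)
    print(f"{failure_rate:.0%} of published findings fail to replicate")  # ~49%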

So if it is perfectly okay for scientists to explore ideas that have about a 5% chance of being correct, then by saying that a certain idea should not be explored scientifically, you seem to suggest that its probability is much smaller.

Note that this is different from expecting an idea to turn out to be correct. If an idea has a 10% chance of being correct, it means that I expect it to be wrong, and yet it makes sense to explore the idea seriously.

(This is a condensed version of an argument I made in a week-old thread, so I wanted to give it a little more visibility. On top of that, I suspect that status concerns can make a great difference in scientists' incentives. Exploring ideas originating in academia that have a 5% chance of being right is... business as usual. Exploring ideas originating outside of academia that have a 5% chance of being right will make you look incompetent if they turn out to be wrong, which indeed is the likely outcome. No one ever got fired for writing a thesis on IBM, so to speak.)

Comment by Viliam on The Median Researcher Problem · 2024-11-10T13:23:10.826Z · LW · GW

I agree, but there are two different perspectives:

  • whether the outsider wants to be taken seriously by academia
  • whether the people in academia want to collect knowledge efficiently

From the first perspective, of course, if you want to be taken seriously, you need to play by their rules. And if you don't, then... those are your revealed preferences, I guess.

It is the second perspective I was concerned about. I agree that the outsiders are often wrong. But, consider the tweet you linked:

If you never published your research but somehow developed it into a product, you might die rich. But you'll still be a bit bitter and largely forgotten.

It seems to me that from the perspective of a researcher, taking the ideas of outsiders who have already developed successful products based on them, and examining them scientifically (and maybe rejecting them afterwards), should be a low-hanging fruit.

I am not suggesting to treat the ideas of the outsiders as scientific. I am suggesting to treat them as "hypotheses worth examining".

Refusing to even look at a hypothesis because it is not scientifically proven yet, that's putting the cart before the horse. Hypotheses are considered first, scientifically proved later; not the other way round. All scientific theories were non-scientific hypotheses first, at the moment they were conceived.

Choosing the right hypothesis to examine is an art. Not a science yet; that is what it becomes after we examine it. In theory, any (falsifiable) hypothesis could be examined scientifically, and afterwards confirmed or rejected. In practice, testing completely random hypotheses would be a waste of time; they are 99.9999% likely to be wrong, and if you don't find at least one that is right, your scientific career is over. (You won't become famous by e.g. testing a million random objects and scientifically confirming that none of them defies gravity. Well, you probably would become famous actually, but in the bad way.)

From the Bayesian perspective, what you need to do is test hypotheses that have a non-negligible prior probability of being correct. From the perspective of the truth-seeker, that's because both the success and the (more likely) failure contribute non-negligibly to our understanding of the world. From the perspective of a scientific career-seeker, because finding the correct one is the thing that is rewarded. The incentives are almost aligned here.

I think that the opinions of smart outsiders have maybe a 10% probability of being right, which makes them hypotheses worth examining scientifically. (The exact number would depend on what kind of smart outsiders we are talking about here.) Even if 10% right is still 90% wrong, why do I claim that 10% is a good deal? Because when you look at the published results (the actual "Science according to the checklist") that passed the p=0.05 threshold... and later half of them failed to replicate... the math says that their prior probability was less than 10%.

(Technically, with prior probability 10%, and 95% chance of a wrong hypothesis being rejected, out of 1000 original hypotheses, 100 would be correct and published, 900 would be incorrect and 45 of them published. Which means, out of 145 published scientific findings, only about a third would fail to replicate.)
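The same back-of-the-envelope calculation as before, now with the 10% prior (still assuming 100% power):

    prior, alpha = 0.10, 0.05
    fail = (1 - prior) * alpha / (prior + (1 - prior) * alpha)
    print(f"{fail:.0%} of published findings fail to replicate")  # ~31%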

So we have a kind of motte-and-bailey situation here. The motte is that opinions of smart outsiders, no matter how popular, no matter how commercially successful, should not be treated as scientific. The bailey is that serious researchers should not even consider them seriously as hypotheses; in other words, that their prior probability is significantly lower than 10% (because hypotheses with a prior probability around 10% are actually examined by serious researchers all the time).

And what I suggest here is that maybe the actual problem is not that the hypotheses of smart and successful outsiders are too unlikely, but rather that exploring hypotheses with 10% prior probability is a career-advancing move if those hypotheses originate within academia, but a career-destroying move if they originate outside of it. With the former, you get a 10% chance of successfully publishing a true result (plus a 5% chance of successfully publishing a false result), and 85% chance of being seen as a good scientist who just wasn't very successful so far. With the latter, you get a 90% chance of being seen as a crackpot.

Returning to Yann LeCun's tweet... if you invent some smart ideas outside of academia, and you build a successful product out of them, but academia refuses to even look at them because the ideas are now coded as "non-scientific" and anyone who treats them seriously would lose their academic status... and therefore we will never have those ideas scientifically confirmed or rejected... that's not just a loss for you, but also for science.

Comment by Viliam on The Median Researcher Problem · 2024-11-10T11:50:26.110Z · LW · GW

The reason I trust research in physics in general is that it doesn't end with publishing a paper. It often ends with building machines that depend on that research being right.

We don't just "trust the science" that light is a wave; we use microwave ovens at home. We don't just "trust the science" that relativity is right; we use the relativistic equations to adjust GPS measurements. Therefore it would be quite surprising to find out that any of these underlying theories is wrong. (I mean, it could be wrong, but it would have to be wrong in the right way that still keeps the GPS and the microwave ovens working. That limits the possibilities of what the alternative theory could be.)

Therefore, in a world where we all do power poses all the time, and if you forget to do them, you will predictably fail the exam...

...well, actually that could just be a placebo effect. (Something like seeing a black cat on your way to exam, freaking out about it, and failing to pay full attention to the exam.) Damn!

Comment by Viliam on Viliam's Shortform · 2024-11-08T15:14:28.324Z · LW · GW

Fuck Google, seriously. About once a week it asks me whether I want to "backup my photos in the cloud", and I keep clicking no, because fuck you why would I want to upload my private photos on your company servers.

But apparently I accidentally once clicked yes (maybe), because suddenly Google sends me a notification about how it created a beautiful animation of my recent photos in the cloud, offering me the option to download them. I don't want to download my private photos from the fucking Google cloud, I never wanted them to be there in the first place! I want to click the delete button, but it's not there: it's either download the animation from the cloud, or close the dialog.

Of course, turning off the functionality is at least 10x more difficult than turning it on, so I get ready to spend this evening finding the advice online and configuring my phone to stop uploading my private photos to Google servers, and preferably to delete all the photos that are already there despite my wishes. Does the "delete" option even exist anymore, or is there just "move to recycle bin (where it stays for as long as we want it to stay there)"? Today I will find out.

Again, fuck Google. I hope the company burns down. I wonder what other things I have already accidentally "consented" to. Google's idea of consent is totally rapist. And I only found this out by accident. In the future, I expect to accidentally find this or some other "optional" feature turned on again.

EDIT:

Finally figured out how to delete the animation in the cloud. First, disable all cloud backup options (about a dozen of them). Then, download the animation from the cloud. Then, click to delete the downloaded animation... the app warns you that this would delete both the local and the cloud version; click ok; mission accomplished.

Comment by Viliam on A brief history of the automated corporation · 2024-11-08T14:49:39.882Z · LW · GW

Why do most humans in 2041 still need to work 40 hours a week? The answer is complicated, but to keep this comment simple, let's focus on a few factors that even a hypothetical reader from 2024 would understand.

In most countries, government regulation requires humans in the loop. These might seem like bullshit jobs, but that doesn't make the competition for them any less fierce. An average person cannot get a good job without good credentials (required for regulatory reasons), and good credentials are expensive; it often takes a lifetime to pay back the school debt. It doesn't matter whether the things taught at school are useful in any practical sense (the few remaining human teachers mostly agree that they are not), but they are required by law. The official reasoning is that general education keeps us human (note: this is simplified to the level of a strawman, but I am trying to keep it simple for the hypothetical 2024 reader unfamiliar with the culture wars of 2041).

With the exception of a few things such as rent, most things today are significantly cheaper than they used to be in 2024. On the other hand, there are new expenses, many of them related to AI. Some aspects of life got complicated, for example contracts of all kinds. To put it bluntly, you need the latest AI to safely navigate the legal minefield created by the latest AI. Trying to save money by using a cheaper version of AI that is several weeks obsolete is generally considered a very bad idea, and will probably cost you more in the long run, because you have no idea what you sign (and you should generally assume that the form was optimized to extract as much value from you as legally possible, otherwise the company would be leaving money on the table). You either spend a large part of your income on AI services... or you risk joining the underclass at the first accident; there is not much of a middle way. If you can't afford the "business version" of the latest AI, you can get one that is supported by advertising -- the less you pay for it, the more you should expect the AI agent to optimize for the goals of the advertisers rather than your personal goals. (Oh, "advertisement" today no longer means trying to influence the humans. Humans are mostly irrelevant. It means influencing the AI agents that make most of the everyday decisions. As a simple example, you can pay the AI agents to buy your products rather than your competitor's products, even if they are somewhat more expensive or worse, and to defend this choice to human users using individually optimized arguments.)

There is increasingly addictive... well, basically everything. I am afraid that a far-mode description will fail to convey how strong the effect is when experienced in near mode, but basically: The salesmen of old used only a few dozen simple techniques (such as smiling at you, looking in your eyes, repeating your name, trying to anchor you to a higher price and then giving you a discount, creating a false sense of urgency, etc.), which were only statistically effective and often failed or backfired. The modern ones come to you with a full AI-powered analysis of your personality (yes, there are regulations against this, but they are trivially circumvented), and they have probably already spent the previous few months trying to influence you in all known ways (bots pretending to be humans contacting you on social networks and nudging you in the desired direction, advertising in your AI agent if you use the cheaper version, subliminal advertising on the streets flashing when the screen detects you looking at it, etc.), which makes it almost impossible to resist; in many cases the humans believe that the interaction was actually their own idea, and quite often they fall in love with the salesperson.

Some people suggest that this is a problem humanity should focus on solving, but the respected economists (and more importantly, their AI advisors) mostly shrug and say: "revealed preferences".

Comment by Viliam on AI timelines don't account for base rate of tech progress · 2024-11-08T13:32:17.507Z · LW · GW

I definitely agree that specific examples would make the argument much stronger. At least, it would allow me to understand what kind of "false alarms" we are talking about here: is it mere tech hypes (such as cold fusion), or specifically humanity-destroying events (such as nuclear war)?

I think we haven't had that many things that threatened to destroy humanity. Maybe it's just my ignorance speaking, but nuclear war is the only example that comes to my mind. (Global warming, although possibly a great disaster, is not by itself an extinction threat to the whole of humanity.) And mere tech hypes that didn't threaten to destroy humanity don't seem like a relevant category for the AI danger.

Perhaps more importantly, with things like LK99 or cold fusion, the only source of hype was "people enthusiastically writing papers". With AI, the situation is more like "anyone can use (for free, if it's only a few times a day) a technology that would have been considered sci-fi five years ago". Like, the controversy is about how far and how fast it will get, but there is no doubt that it is already here... and even if somehow magically the state of AI never improved beyond where it is today, we would still have a few more years of social impact ahead, as more people learn to use it and find new ways to use it.

EDIT: By "sci-fi" I mean, imagine creating a robotic head that uses speech recognition and synthesis to communicate with humans, uploading the latest LLM into it, and sending it by a time machine five or ten years into the past. Or rather, sending thousands of such robotic heads. People would be totally scared (not just because of the time travel). And finding out that the robotic heads often hallucinate would only calm them down a little.