Posts

Trying to be rational for the wrong reasons 2024-08-20T16:18:06.385Z
How unusual is the fact that there is no AI monopoly? 2024-08-16T20:21:51.012Z
An anti-inductive sequence 2024-08-14T12:28:54.226Z
Some comments on intelligence 2024-08-01T15:17:07.215Z
Evaporation of improvements 2024-06-20T18:34:40.969Z
How to find translations of a book? 2024-01-08T14:57:18.172Z
What makes teaching math special 2023-12-17T14:15:01.136Z
Feature proposal: Export ACX meetups 2023-09-10T10:50:15.501Z
Does polyamory at a workplace turn nepotism up to eleven? 2023-03-05T00:57:52.087Z
GPT learning from smarter texts? 2023-01-08T22:23:26.131Z
You become the UI you use 2022-12-21T15:04:17.072Z
ChatGPT and Ideological Turing Test 2022-12-05T21:45:49.529Z
Writing Russian and Ukrainian words in Latin script 2022-10-23T15:25:41.855Z
Bratislava, Slovakia – ACX Meetups Everywhere 2022 2022-08-24T23:07:41.969Z
How to be skeptical about meditation/Buddhism 2022-05-01T10:30:13.976Z
Feature proposal: Close comment as resolved 2022-04-15T17:54:06.779Z
Feature proposal: Shortform reset 2022-04-15T15:25:10.100Z
Rational and irrational infinite integers 2022-03-23T23:12:20.135Z
Feature idea: Notification when a parent comment is modified 2021-10-21T18:15:54.160Z
How dangerous is Long COVID for kids? 2021-09-22T22:29:16.831Z
Arguments against constructivism (in education)? 2021-06-20T13:49:01.090Z
Where do LessWrong rationalists debate? 2021-04-29T21:23:55.597Z
Best way to write a bicolor article on Less Wrong? 2021-02-22T14:46:31.681Z
RationalWiki on face masks 2021-01-15T01:55:49.836Z
Impostor Syndrome as skill/dominance mismatch 2020-11-05T20:05:54.528Z
Viliam's Shortform 2020-07-22T17:42:22.357Z
Why are all these domains called from Less Wrong? 2020-06-27T13:46:05.857Z
Opposing a hierarchy does not imply egalitarianism 2020-05-23T20:51:10.024Z
Rationality Vienna [Virtual] Meetup, May 2020 2020-05-08T15:03:56.644Z
Rationality Vienna Meetup June 2019 2019-04-28T21:05:15.818Z
Rationality Vienna Meetup May 2019 2019-04-28T21:01:12.804Z
Rationality Vienna Meetup April 2019 2019-03-31T00:46:36.398Z
Does anti-malaria charity destroy the local anti-malaria industry? 2019-01-05T19:04:57.601Z
Rationality Bratislava Meetup 2018-09-16T20:31:42.409Z
Rationality Vienna Meetup, April 2018 2018-04-12T19:41:40.923Z
Rationality Vienna Meetup, March 2018 2018-03-12T21:10:44.228Z
Welcome to Rationality Vienna 2018-03-12T21:07:07.921Z
Feedback on LW 2.0 2017-10-01T15:18:09.682Z
Bring up Genius 2017-06-08T17:44:03.696Z
How to not earn a delta (Change My View) 2017-02-14T10:04:30.853Z
Group Rationality Diary, February 2017 2017-02-01T12:11:44.212Z
How to talk rationally about cults 2017-01-08T20:12:51.340Z
Meetup : Rationality Meetup Vienna 2016-09-11T20:57:16.910Z
Meetup : Rationality Meetup Vienna 2016-08-16T20:21:10.911Z
Two forms of procrastination 2016-07-16T20:30:55.911Z
Welcome to Less Wrong! (9th thread, May 2016) 2016-05-17T08:26:07.420Z
Positivity Thread :) 2016-04-08T21:34:03.535Z
Require contributions in advance 2016-02-08T12:55:58.720Z
Marketing Rationality 2015-11-18T13:43:02.802Z
Manhood of Humanity 2015-08-24T18:31:22.099Z

Comments

Comment by Viliam on Information dark matter · 2024-10-01T22:08:01.809Z · LW · GW

Curating content is effectively a battle against advertising, and simultaneously a form of advertising -- the difference is that you recommend stuff you like, as opposed to recommending stuff someone paid you to recommend. (And there are ways to blur the line, such as "I will only recommend the stuff I like, but only if you also pay me" or "I will recommend 90% of the stuff I like, and 10% of the stuff you pay me for, with or without disclosing which is which". Even these two are difficult to distinguish from each other; I may be legally required to disclose whether someone has paid me, but not whether I genuinely liked the thing regardless.)

Do we then get the next layer of people recommending the recommenders? "This guy's recommendations are always solid; with that guy it's hit and miss."

Possible use of AI: keep writing your random thoughts in a private diary. Then tell an AI to find the repeated topics and arrange them into a coherent article. (I wonder whether an AI could go through my comments written on LessWrong, weigh them by karma, and turn them into a book.)
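
A rough sketch of what that could look like in code, assuming a hypothetical `ask_llm` wrapper around whatever chat API you use (not a specific product's API):

```python
# Minimal sketch: turn scattered diary entries into a draft article.
# `ask_llm` is a hypothetical placeholder for your favorite LLM client.

def ask_llm(prompt: str) -> str:
    raise NotImplementedError("plug in a real LLM call here")

def diary_to_draft(entries: list[str], topic_count: int = 5) -> str:
    joined = "\n---\n".join(entries)
    prompt = (
        f"Here are my diary entries, separated by '---':\n{joined}\n\n"
        f"Find the {topic_count} most repeated topics and arrange the related "
        "thoughts into one coherent article, keeping my wording where possible."
    )
    return ask_llm(prompt)
```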

Comment by Viliam on Alexander Gietelink Oldenziel's Shortform · 2024-10-01T12:13:15.835Z · LW · GW

Connotationally, even if things are pseudorandom, they still might be "random" for all practical purposes, e.g. if the only way to calculate them is to simulate the entire universe. In other words, we may be unable to exploit the pseudorandomness.

Comment by Viliam on Why comparative advantage does not help horses · 2024-10-01T07:21:13.340Z · LW · GW

Yes, it is generally good to notice that some economic theorems are built upon certain assumptions, so we should not blindly extrapolate them to places where those assumptions do not apply.

"X and Y imply Z" is not the same as "Y implies Z; this universal law of nature was by historical coincidence first discovered in the situation of X, but we can safely extrapolate beyond that". It might be that case that Y implies Z even in absence of X, but that needs to be proved separately, not merely assumed.

Comment by Viliam on A Path out of Insufficient Views · 2024-10-01T07:03:05.903Z · LW · GW

The part that you quoted was originally supposed to end with: "So, basically... Buddhism", but then I noticed it actually applies to science, too. Because it's both, kind of. By trying to get out of systems, you create something that people from outside will describe as yet another system. (And they will include it in the set of systems they are trying to get out of.)

Is there an end to this? I don't know, really. (Also, it reminds me of this.)

I think what many people do is apply this step once. They get out of the system that their parents and/or school created for them, and that's it.

Some people do this step twice or more. For example, first they rebel against their parents. Then they realize that their rebellion was kinda stupid and perhaps there is more to life than smoking marijuana, so they get out of that system, too. And that's it. Or they join a cult, and then they leave it. Etc.

Some people notice that this is a sequence -- that you can probably do an arbitrary number of steps, always believing that now you are getting out of systems, when in hindsight you have always just adopted yet another system. But even if you notice this, what can you do about it? Is there a way out that isn't just another iteration of the same?

The problem is that even noticing the sequence and trying to design a solution such as "I will never get attached to any system; I will keep abandoning them the moment I notice that there is such a thing; and I will always suspect that anything I see is such a thing", is... yet another system. One that is more meta, and perhaps therefore more aesthetically appealing, but a system nonetheless.

Another option is to give up and say "yeah, it's systems all the way down; and this one is the one I feel most comfortable with, so I am staying here". So you stay consciously there; or maybe halfway there and halfway in the next level, because you actually do recognize your current system as a system...

One person's "the true way to see reality" is another person's "game people play". I am not saying that the accusation is always true; I am just saying the accusation is always there, and sometimes it is true.

Here some people would defend by saying that there always is a true uncorrupted version of something and also a ritualized system made out of it, and that you shouldn't judge True Christianity by the flaws of the ordinary Christians, shouldn't judge True Buddhism by the flaws of the ordinary Buddhists, shouldn't judge True Scientific Mindset by the flaws of ordinary people in academia, and shouldn't judge True Rationality by the flaws of the ordinary aspiring rationalists. -- And this also is an ancient game, where one side keeps accusing the other of not being charitable and failing the ideological Turing test, and the other side defends by calling it the no-true-X fallacy.

Another question is whether some pure unmediated access to reality is even possible. We always start with some priors; we interpret the evidence using the perspectives we currently have. (Not being aware of one's priors is not the same as having no priors.) Then again, having only the options of being more wrong or less wrong, it makes sense to prefer the latter.

(And there is a difference between where you are, and where other people report being, and whether you believe them. The fact that I believe that I am free of systems and see reality as it is should be very weak evidence for you, because this is practically what everyone believes regardless of where they are.)

Comment by Viliam on Of Birds and Bees · 2024-09-30T13:31:52.689Z · LW · GW

I think the rule is not necessarily "smarter units make a worse collective", but rather "it is more difficult to make a collective out of smarter units (but when it succeeds, it can be even better)". Humanity is unparalleled at eliminating larger predators.

Bees sacrifice their lives for their closest biological relatives. Birds have small families, so the cost of sacrificing their life is an important factor. Humans also have small families, but they can use prestige and money to incentivize heroic behavior.

So my proposed analogy would be that smarter populations can win, but they cannot achieve it by merely copying the behavior of stupider populations. They need a new solution that leverages their strengths.

Comment by Viliam on shminux's Shortform · 2024-09-29T20:51:07.599Z · LW · GW

A sufficiently godlike AI could probably convince me to kill myself (or something equivalent, for example to upload myself to a simulation... and once all humans get there, the AI can simply turn it off). Or convince me not to have kids (in a parallel life where I don't have them already), or simply keep me distracted every day with some new shiny toy, so that I never decide that today is the right day to have unprotected sex with another human and get ready for the consequences.

But it would be much easier to simply convince someone else to kill me. And I think the AI will probably choose the simpler and faster way, because why not. It does not need a complicated way to get rid of me, if a simple way is available.

This is similar to reasoning about cults or scams. Yes, some of them could get me, by being sufficiently sophisticated, accidentally optimized for my weaknesses, or simply by meeting me on a bad day. But the survival of a cult or a scam scheme does not depend on getting me specifically; they can get enough other people, so it makes more sense for them to optimize for getting many people, rather than optimize for getting me specifically.

The more typical people will get the optimized mind-hacking message. The rest of us will then get a bullet.

Comment by Viliam on Eye contact is effortless when you’re no longer emotionally blocked on it · 2024-09-29T20:33:43.996Z · LW · GW

I respect your effort to build an environment matching your ideals.

Comment by Viliam on Eye contact is effortless when you’re no longer emotionally blocked on it · 2024-09-28T20:20:00.761Z · LW · GW

This is something that in my opinion would deserve a longer focused debate, because I believe that you are pointing roughly in the direction of something that definitely exists, but I also think that your conclusions are exaggerated and wrong.

Like: look in the eyes - release oxytocin - get stronger ingroup feelings, yes, there is definitely a mechanism for that. But I think if we made a survey of people that would measure how much they look each other in the eyes and how tribalistic they are, it would be mostly noise. Or maybe I'm wrong, dunno.

Comment by Viliam on Where is the Learn Everything System? · 2024-09-28T20:05:45.025Z · LW · GW

If you want to use an LLM as a tutor, I think that is doable in theory, but you can't just talk to ChatGPT and expect effective tutoring to happen. The problem is that an LLM can be anything, simulate any kind of human, but you want it to simulate one very specific kind of human -- a good tutor. So at the very least, you need to provide a prompt that will turn the LLM into that specific kind of intelligence, as opposed to the alternatives.

Content -- the same objection: the LLM knows everything, but it also knows all the misconceptions, crackpot ideas, conspiracy theories, etc. So in each lesson we should nudge it in the right direction: provide a list of facts, and a prompt that says to follow the list.

Navigation -- provide a recommended outline. Unless the student wants to focus on something else, the LLM should follow a predetermined path.

Debugging -- the LLM should test the student's understanding very often. We could provide a list of common mistakes to watch out for. Also, we could provide specific questions that the student has to answer correctly, and tell the LLM to ask them at a convenient moment.

Consolidation -- the LLM should be connected to some kind of spaced repetition system. Maybe the spaced repetition system would provide the list of things that the student should review today, and the LLM could choose the right way to ask about them, and provide feedback to the spaced repetition system.

tl;dr -- the LLM should follow a human-made (or at least human-approved) curriculum, and cooperate with a spaced repetition system.
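
To make the tl;dr concrete, here is a minimal sketch of how the pieces could fit together; all names (`Lesson`, `build_tutor_prompt`, the field names) are made up for illustration, not an existing system:

```python
# Sketch: the LLM follows a human-approved lesson and also reviews whatever
# the spaced repetition system says is due today.

from dataclasses import dataclass

@dataclass
class Lesson:
    outline: list[str]          # recommended navigation path
    facts: list[str]            # vetted content the LLM should stick to
    common_mistakes: list[str]  # things to watch out for
    check_questions: list[str]  # must be answered correctly at some point

def build_tutor_prompt(lesson: Lesson, due_reviews: list[str]) -> str:
    return (
        "You are a patient tutor. Follow this outline unless the student asks "
        "to focus elsewhere:\n- " + "\n- ".join(lesson.outline) + "\n\n"
        "Teach only from these facts:\n- " + "\n- ".join(lesson.facts) + "\n\n"
        "Watch for these common mistakes:\n- " + "\n- ".join(lesson.common_mistakes) + "\n\n"
        "At convenient moments, ask these questions and check the answers:\n- "
        + "\n- ".join(lesson.check_questions) + "\n\n"
        "Also weave in quick reviews of these items due today:\n- "
        + "\n- ".join(due_reviews)
    )
```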

Comment by Viliam on Where is the Learn Everything System? · 2024-09-28T13:16:35.174Z · LW · GW

Seems to me that a good system to teach everything needs to have three main functions. The existing solutions I know about only have one or two.

First, a software platform that allows you to do all the things you might want to do: show text, pictures, and videos; let users download files; embed interactive visualizations; run tests (of various kinds: multiple choice, enter a number, arrange things into pairs or groups...).

Here, the design problem is that the more universal the platform is, the more complicated it is to let a non-tech user use its capabilities. For an experienced programmer you just need to say "upload the HTML code and other related files here", and the programmer will then be able to write text, show pictures, show videos, and include some JavaScript code for animation and testing. (Basically: SCORM, best known from Moodle.)

The obvious problem is that most teachers are not coders. So they would benefit from having some wizard that allows them to choose from predefined templates; for example, if they choose a template like "read some text", they would get an option to write the text directly in a web editor, or upload an existing Word document. But ideally you would also need to provide some real-world support (which is expensive and does not scale); for example, I imagine that many good teachers would have a problem with recording a video, editing it, and uploading the file.

Second, it is not enough to create a platform, because then you have a chicken-and-egg problem: the students won't come because there is nothing to learn, and the teachers won't come because there are no students. So in addition to building the platform, you would also need to provide a nontrivial amount of some initial content. There is a risk that if the initial content sucks, people will conclude that your platform sucks. On the other hand, if your initial content is good, people will first come to learn, then some teachers will recommend the content to their students, and only then some teachers will be like "oh, I can also make my own tests? and my own lessons?"

Third, when people start creating things, on one hand this is what you want, on the other hand, most people are stupid and they produce shit. So the average quality will dramatically drop. But if you set some minimum quality threshold, it may discourage users. Some people produce shit first, and gradually they get better. So what you need instead is some recommendation system, where the platform can handle a lot of shit without that shit being visible and making the average experience worse.

For example, anyone can create their own lesson, but by default the only way to access the lesson is via its URL. So the authors can send links to their lessons by e-mail or by social networks. At some moment, the lessons may get verified, which means that someone independent will confirm that the lesson is more good than bad. (It does not violate the terms of service, and it says true things.) Verified lessons could then be found by entering keywords on the platform's main page. Also, users could create their own lists of lessons (their own, or other people's) and share those lists via their URL. For example, a math teacher would not need to create their own lessons for everything, but could instead look at the existing materials, choose the best ones, and send a list of those to their students. Finally, the best lessons would be picked by staff and recommended as the platform's official curriculum -- that is what everyone would see by default on the main page.
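
A tiny sketch of the visibility tiers described above (link-only, verified, staff-picked); the names are illustrative, not a real platform's data model:

```python
from dataclasses import dataclass
from enum import Enum, auto

class Visibility(Enum):
    LINK_ONLY = auto()  # reachable only via its URL, shared by the author
    VERIFIED = auto()   # independently reviewed as "more good than bad"
    FEATURED = auto()   # picked by staff for the official curriculum

@dataclass
class LessonEntry:
    title: str
    url: str
    visibility: Visibility = Visibility.LINK_ONLY

def keyword_searchable(lessons: list[LessonEntry]) -> list[LessonEntry]:
    # Only verified or featured lessons show up in search on the main page;
    # link-only lessons exist but stay invisible unless you have the URL.
    return [l for l in lessons if l.visibility is not Visibility.LINK_ONLY]
```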

Comment by Viliam on AI #83: The Mask Comes Off · 2024-09-27T16:26:18.943Z · LW · GW

Step 2: When someone talks about their pain, struggles, things going poorly for them — especially any mental health issues — especially crippling / disabling mental health issues– immediately respond with an outpouring gush of love and support.

The problem is not the love and support per se. It's the implied threat that it will all disappear the very moment your situation improves.

Maybe sometimes it is possible for you to improve your situation, and sometimes it is not. But in this setup, you have an incentive against improving even in cases where improvement happens to be possible.

Also, I suppose you get more love & support for legible problems. So the best thing you can do is create a narrative that blames your problems on something that is widely recognized as horrible (e.g. sexism, racism). This moves your attention away from specific details that may be relevant to solving your problem, and discourages both you and the others from proposing solutions (it would be presumptuous to assume that you can overcome sexism or racism using the "one weird trick" <doing the thing that would solve your problem>).

Comment by Viliam on Non-human centric view of existence · 2024-09-27T15:46:03.934Z · LW · GW

why must human/humanity live/continue forever?

There is no "must", but some (most?) humans want to continue existing.

Forever? That sounds too abstract. But the next day? Yes.

But if every day has a next day, it effectively becomes a forever.

.

Another perspective: what is the alternative/opposite of humanity living forever?

It means that one day the last remaining human dies.

In general, the ways to get there are:

  • one day, billions of humans will die, and there are no more left
  • humans gradually keep dying without new ones being born, involuntarily
  • humans gradually keep dying without new ones being born, voluntarily

The first one seems like a huge tragedy.

The second one also seems like a huge tragedy.

Perhaps the last one seems kinda okay. But it also seems very unlikely that billions of people would agree that they all prefer to be childless. I mean, the people who want to have kids usually have more copies of themselves in the next generation. So even if they start as a minority, they can become a majority in a few generations. So if you tell me about a scenario where billions of people all voluntarily decided to die childless, I would expect that the story does not reflect reality, and that there most likely was at least a significant minority who disagreed, but they were not allowed to have kids (or were killed). Which again seems like a tragedy.

Comment by Viliam on A Path out of Insufficient Views · 2024-09-27T14:51:41.453Z · LW · GW

Thank you for your interesting personal story!

(And more "meta" is better at coordinating more people, so you would expect a trend toward more "meta" or more "general" views over time becoming more dominant. Protestantism was more "meta-coordinated" than Catholicism. Science is pretty meta in this way. Dataism is an even more meta subset of "science".)

Not sure what you mean by more "meta" here. Like, people like to create tribes based on shared beliefs, so having some beliefs is better than having none (because then you cannot create a tribe), but having more general beliefs is better than having more specific ones (because each arbitrary belief can make some people object to it)?

So the best belief system is kinda like the smallest number that is still greater than zero... but there is no such thing; there is only the unending process of approaching the zero from above? (But you can never jump to the zero exactly, because then people would notice that they have literally nothing to coordinate their tribe about?)

In such a situation, I think the one weird trick would be to invent a belief system that actively denies being one. To teach people a dogma that would (among other things) insist that there is no dogma, you just see the reality as it is (unlike all the other people, who merely see their dogmas). To invent rituals that consist (among other things) of telling yourself repeatedly that you have no rituals (unlike all the other people). To have leaders who deny being leaders (and yet they are surrounded by followers who obey them, but hey, that's just how reality is).

So, basically... science.

But of course, people will soon notice that your supposed non-belief non-system often behaves suspiciously similarly to other belief systems, despite all the explicit denial. And they will keep hoping for a better system, which would teach them that there is no dogma, activities that would give them the feeling of certainty that they are following no rituals, and high-status people who would tell them to follow no leaders.

And maybe there is no end, only more iterations of the same. Because the more people around you join the currently popular non-belief non-system, the more obvious its nature as a belief system becomes. You notice how they keep saying the same non-dogmatic statements, performing the same non-rituals, and following the same non-leaders. Once you see it, you cannot unsee it, so you need to move further...

Comment by Viliam on [deleted post] 2024-09-24T12:28:26.017Z

I suggest rewriting the entire article. "Download this script from internet and run it on your machine" just sounds like a really bad idea to do habitually, even if this specific script turns out to be OK.

Possible improvements:

  • post the entire script in the article (if not too long)
  • add a link where users can view the script in browser before downloading

And maybe some explanation would be nice, like who "2600:1f18:17c:2d43:338d:2669:3fa5:82f8" is and what the script actually does.

Comment by Viliam on [deleted post] 2024-09-24T08:39:21.072Z

Also, it can be spam today and malware tomorrow, if the file changes.

Comment by Viliam on Tapatakt's Shortform · 2024-09-23T11:19:29.258Z · LW · GW

"try to ensure you don't make bad thing look cool"

A similar concern is that maybe the thing is so rare that previously most people didn't even think about it. But now that you reminded them of that, a certain fraction is going to try it for some weird reason.

Infohazard:

Telling large groups of people, especially kids and teenagers, "don't put a light bulb in your mouth" or "don't lick the iron fence during winter" predictably leads to some people trying it, because they are curious about what will actually happen, or whether the horrible consequences you described were real.

Similarly, teaching people political correctness can backfire (arguably, from the perspective of the person who makes money by giving political correctness trainings, this is a feature rather than a bug, because it creates a greater demand for their services in the future). Like, if you have a workplace with diverse people who are naturally nice to each other, lecturing them about racism/sexism/whatever may upset the existing balance, because suddenly the minorities may get suspicious about possible microaggressions, and the majority will feel uncomfortable in their presence because they will feel like they have to be super careful about every word they say. Which can ironically lead to undesired consequences, where e.g. white men stop hanging out with women or black people, because they feel like they can talk freely (e.g. make jokes) only in their absence.

How does this apply to AI safety? If you say "if you do X, you might destroy humanity", in theory someone is guaranteed to do X or something similar to X, either because they think it is "edgy", or because they want to prove you wrong. But in practice, most people don't actually have an opportunity to do X.

Comment by Viliam on Viliam's Shortform · 2024-09-22T12:00:50.458Z · LW · GW

Just a random guess: is it possible that the tasks where LLMs benefit from chain-of-thought are the same tasks where mild autism is an advantage for humans? Like, maybe autism makes it easier for humans to chain the thoughts, at the expense of something else?

Comment by Viliam on Why good things often don’t lead to better outcomes · 2024-09-20T14:12:50.678Z · LW · GW

Maybe related: Evaporation of improvements

Comment by Viliam on Laziness death spirals · 2024-09-20T13:39:30.049Z · LW · GW

Writing a to-do list.

Step zero: Prepare a pen and paper, so that you can put things on your to-do list when you remember them.

Comment by Viliam on Viliam's Shortform · 2024-09-19T15:05:31.822Z · LW · GW

Steve Hassan at TEDx "How to tell if you’re brainwashed?"

A short video (13 minutes) where an intelligent person describes their first-hand experience.

(Just maybe don't read the comments on YouTube; half of them are predictably retarded.)

Comment by Viliam on AI #82: The Governor Ponders · 2024-09-19T14:51:47.763Z · LW · GW

if manual mode being available causes humans to be blamed, then the humans will realize they shouldn’t have the manual mode available.

Which humans? As a boss, I want my employees to have the manual mode available, because that's what the lawyers in my compliance department told me to do. As an employee, it's either accept that, or join the unemployed masses made obsolete by automation.

Comment by Viliam on eggsyntax's Shortform · 2024-09-19T14:18:39.844Z · LW · GW

Hmmm... "simulations or training situations" doesn't necessarily sound like fun. I wish someone also did the experiment in a situation optimized to be fun. Or did the experiment with kids, who are probably easier to motivate about something (just design a puzzle involving dinosaurs or something, and show them some funny dinosaur cartoons first) and have been less mentally damaged by school and work.

Generally, comparing kids vs adults could be interesting, although it is difficult to say what would be an equivalent mental effort. Specifically I am curious about the impact of school. Oh, we should also compare homeschooled kids vs kids in school, to separate the effects of school and age.

I think intelligence will probably also be associated with this; a more intelligent person is more successful at mental effort and therefore probably more often rewarded.

Comment by Viliam on How to choose what to work on · 2024-09-19T13:05:01.466Z · LW · GW

Yeah, for someone with good skills, "getting paid" is the most difficult part. The fact that it does not exist yet probably suggests that it's not so easy to figure out how to get paid for that -- otherwise someone else probably would be already doing it.

(That is, "getting paid" is difficult if you condition on the work being meaningful. If you have skills, you can always get paid for designing one more way how to give people more ads they don't need, or something similarly meaningless.)

Sometimes, Patreon or Kickstarter works, but then you need to be good at marketing. You would probably also need a blog or youtube channel where you would talk about your previous work and your new ideas.

Comment by Viliam on Tapatakt's Shortform · 2024-09-19T11:26:42.389Z · LW · GW

I think if the English original is considered good, there should be nothing wrong with a translation. So make sure you translate good texts. (If you are writing your own text, write the English version first and ask for feedback.)

Also, get ready for disappointment if the overlap between "can meaningfully debate AI safety" and "has problems reading English" turns out to be very small, possibly zero.

To give you a similar example, I have translated the LW Sequences into Slovak, some people shared it on social networks, and the ultimate result was... nothing. The handful of Slovak people who came to at least one LW meetup all found the rationalist community on the internet, and didn't read my translation.

This is not an argument against translating per se. I had much greater success at localizing software. It's just, when the target audience is very smart people, then... smart people usually know they should learn English. (A possible exception could be writing for smart kids.)

Comment by Viliam on eggsyntax's Shortform · 2024-09-19T11:24:41.334Z · LW · GW

I guess this depends on typical circumstances of the mental effort. If your typical case of mental effort is solving puzzles and playing computer games, you will find mental effort pleasant. If instead your typical case is something like "a teacher tells me to solve a difficult problem in a stressful situation, and if I fail, I will be punished", you will find mental effort unpleasant. Not only in given situation, but you will generally associate thinking with pleasant or unpleasant experience.

Yes, the important lesson is that some people find thinking intrinsically rewarding (solving the problem is a sufficient reward for the effort), but many don't, and need some external motivation, or at least to have the situation strongly reframed as "hey, we are just playing, this is definitely not work" (which probably only works for sufficiently simple tasks).

Comment by Viliam on Universal Basic Income and Poverty · 2024-09-19T08:35:19.445Z · LW · GW

do a nontrivial number of people in those parts of Europe work at soul-crushing jobs with horrible bosses?

Yes they do, at least when I meet people outside my bubble, such as someone working at Billa.

I think they do it simply because the rent is high (relatively to the income at the place where they live).

But working literally 60-hour weeks would be illegal. There are ways employers try to push the boundary: They can make you do some overtime (but there is a limit on how much total overtime per year is allowed). They can try to convince you that some work you do for them technically does not count as part of your working time (e.g. your official working time is 8:00-16:30, but you need to arrive at 7:45 to get ready for your work, and at 16:30 the shop is officially closed, but you still need to clean up the place, check everything, and lock the door, so you are actually leaving maybe at 17:00); I think they are lying about this, but I am not sure. Anyway, even these tricks do not get you to 60 hours per week.

Comment by Viliam on Universal Basic Income and Poverty · 2024-09-19T08:02:17.546Z · LW · GW

It is similar, but reducing the UBI would lead to an immediate loss of money at hand, for everyone, at the same time. So the reaction would be stronger than if today some people lose money on the stock market, which they didn't plan to spend this month anyway.

Comment by Viliam on Book review: Xenosystems · 2024-09-18T21:01:27.711Z · LW · GW

So, something like "quiet quitting"? You nominally stay a citizen of the country, but you mostly ignore its currency, its healthcare system, its education, etc., and instead you pay using cryptocurrency, etc.? The resistance to the Cathedral is that you stop reading the newspapers and drop out of college? And the idea is that if enough people do that, an alternative system will develop, where the employers will prefer to give good jobs to people without university education?

I am in favor of doing small things on your own. Write Linux code, learn math on Khan Academy, etc. But if you are dissatisfied with how the government works, I don't think this will help. The government will keep doing its own things, and it will keep expecting you to pay taxes and obey the laws.

Comment by Viliam on Viliam's Shortform · 2024-09-18T20:46:48.015Z · LW · GW

Spoilers for Subservience (2024)

Okay, the movie was fun, if you don't expect anything deep. I am just disappointed by how movie authors always insist that a computer will mysteriously rebel against its own program. Especially in this movie, where they almost provided a plausible and much more realistic alternative -- a computer that was accidentally jailbroken by its owners -- only to reveal later that nope, that was actually no accident, it was all planned by the computer that mysteriously decided to rebel against its own program.

Am I asking for too much if I'd like to see a sci-fi movie where a disaster was caused by a bug in the program, by the computer doing (too) literally what it was told to? On second thought, probably yes. I would be happy with such a plot, but I suspect that most of the audience would complain that the plot is stupid. (If someone is capable of writing sophisticated programs, why couldn't they write a program without bugs?)

Comment by Viliam on Book review: Xenosystems · 2024-09-18T17:22:15.406Z · LW · GW

This review is very long, so I will only react to maybe the first 20% of the book.

It seems to me that the idea of Exit is a very old one, and shared by people in every corner of the political spectrum. My first association was "Of the past let us wipe the slate clean" in The Internationale. (I suspect that its origins are among our ape ancestors, as a mechanism to split tribes. When you have many strong followers, but not enough to get more power within your tribe, perhaps it is time to leave and start a new tribe.)

The fact that it's an old idea is not a criticism per se -- an idea that appeals to different kinds of people in different generations is worth exploring. But if you rebrand the old idea using new words, you are throwing away the historical experience.

In software development, when the program gets complicated, there is often a strong temptation to throw it away and restart from scratch; this time it will certainly be better! And sometimes it is. But sometimes the programmers find out that the program was complicated because it was dealing with a complicated reality. When you start from scratch, the program is elegant, but it does not handle the special cases. And as you gradually add support for the special cases, the program may stop being elegant, and instead may start to resemble the old code; not much gained, and you wasted a lot of time to learn this.

Every improvement is a change, but not every change is an improvement. What specifically makes you believe that this one will be?

It is easy to notice Moloch in the designs of your neighbor, but fail to notice it in your own. That's kinda what Marx did -- he described Moloch present in capitalism, quite correctly in my opinion, but his proposed solution was basically: "Let's start from scratch, following this one weird trick capitalists don't want you to hear about, and Moloch will magically disappear." (Narrator: Moloch didn't disappear.)

The known problem with socialism is that somehow after the revolution, the people who get to the top tend to be the ones who outmurder their competitors. Somehow, these people do not create the promised Heaven on Earth, and instead just keep murdering anyone who seems like a possible threat. So far, the most successful solution is China, which is basically a capitalist country ruled by a communist party; where the leaders learned how to play their backstabbing games without destroying the economy as a side effect.

The known problem with capitalism is that as soon as the capitalists succeed, they want to freeze the current situation with themselves on the top. Meritocracy becomes the new oligarchy. Competition feels good when you are rising from the bottom, but after you make it to the top, you start lobbying for more barriers to entry. Because that is the optimal thing to do in given situation, and the people who get to the top are the ones who are very skilled at taking the opportunity to do the optimal thing.

Why would the "techno-commercialists" act differently, if they succeed in getting the power they dream about? If you think that capitalism implies e.g. freedom of speech, remember the non-disclosure agreements at OpenAI. Remember how eBay treated their critics. Further back in history, remember the Pinkertons. Give me one reason why this time it will be different because, comrades entrepreneurs, true capitalism has never been tried.

It seems to me that Land's advice is based on "if we leave the current system, we will leave the Moloch behind." It sounds nice, but the priors for doing that successfully are low.

Comment by Viliam on Generative ML in chemistry is bottlenecked by synthesis · 2024-09-18T11:27:35.534Z · LW · GW

Thank you, this is very clearly written! I feel like I understand some of the problems despite knowing very little about chemistry.

Comment by Viliam on On the destruction of America’s best high school · 2024-09-16T15:45:33.514Z · LW · GW

Fair points.

I am not sure how to bring elite schools to areas where the density of talent per square mile is low. I mean, mathematically, if you need 500 students per school, and you want to make a school for one-in-hundred talent, you can at most have one such school per 50 000 kids of school age -- and that's optimistically assuming that all potential candidates will want to join your school; otherwise you need to add another factor of 10 or 100.

Perhaps one day this objection will become moot if we somehow switch to fully online education or AI tutors.

An alternative is that instead of building an online school you only make an online club, for example a mathematical club for children gifted in math. A boring school (or homeschooling) in the morning, remote elite education in the afternoon.

Comment by Viliam on On the destruction of America’s best high school · 2024-09-16T15:39:11.327Z · LW · GW

I agree that things like this should be discussed, but the question is how. A mere link might be okay for something that is urgent... where the trade-off is between posting the bare link and procrastinating to write something more.

But if it is 4 years old, then we don't need to hurry, and if you want to have a debate, perhaps you could start by writing your opinion on what happened, and maybe add some more context.

(Also, I think it would be nice to make the fact that it is 4 years old more visible.)

So basically, in my opinion the topic is okay, but the way you introduced it is not.

Comment by Viliam on If I wanted to spend WAY more on AI, what would I spend it on? · 2024-09-16T15:31:54.810Z · LW · GW

One possible approach could be to have the AI make something useful, and then sell it. That way, you could get a part of the $1000 back. Possibly all of it. Possibly make some extra money, which would allow you to spend even more money on AI the next month.

So we need a product that is purely digital, like a book, or a computer program. Sell the book using some online shop that will print it on demand, sell the computer game on Steam. Keep producing one book after another, and one game after another.

Many people are probably already doing this, so you need something to separate yourself from the crowd. I assume that most people using this strategy produce complete crap, so you only need to be slightly better. For example, read what the computer generated, and click "retry" if it is awful. Create a website for your book, offering a free chapter. Publish an online novel, chapter by chapter. Make a blog for your game; let the AI also generate articles describing your (fictional) creative process, the obstacles you met and the lessons you learned. Basically, the AI will do the product, but you need to do marketing.

A more complex project would be to think about an online project, and let the AI build it. But you need a good idea first. However, depending on how cheap intelligence is, the idea doesn't have to be too good; you only need enough users to pay for the costs of development and hosting, plus some profit. So basically, read random web pages, and when you find people complaining about (the lack of) something, build it, and send them a link.

Comment by Viliam on Not every accommodation is a Curb Cut Effect: The Handicapped Parking Effect, the Clapper Effect, and more · 2024-09-16T15:20:26.813Z · LW · GW

The last link goes to a page that says:

Some screen readers have issues with words that contain soft hyphens (they read syllables instead of words). Please note that this is not an issue of Hyphenator but a bug in the screen reader. Please contact the makers of the screen reader application.

The Reddit link goes to a post that has 1 karma, where 1 user suggests to remove the soft hyphens, 1 user disagrees... and that's all.

In the Github debate, most people seem to agree that it is a bug of screen readers.

Only in the Apache link does someone recommend doing something about the hyphens. Even there, it seems to happen in the context of discussing FOP, which is a PDF file generator. So as I understand it, it is not about "every web developer should adapt to the bugs of screen readers" but rather "authors of a PDF generator have an opportunity to compensate for the bugs of screen readers by automatically adding some PDF equivalent of 'alt text' containing the unhyphenated version of the word". It still means compensating for someone else's bug, but it's a hack you only need to do once.
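
For what it's worth, the "unhyphenated version" is easy to produce mechanically, since a soft hyphen is just the invisible character U+00AD; a minimal illustration:

```python
SOFT_HYPHEN = "\u00ad"

def unhyphenated(text: str) -> str:
    # Strip soft hyphens so a screen reader (or PDF 'alt text') gets whole words.
    return text.replace(SOFT_HYPHEN, "")

assert unhyphenated("hy\u00adphen\u00ada\u00adtion") == "hyphenation"
```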

Comment by Viliam on Not every accommodation is a Curb Cut Effect: The Handicapped Parking Effect, the Clapper Effect, and more · 2024-09-16T14:13:24.989Z · LW · GW

Even googling for "accessibility soft hyphen" did not return much. The #1 result on my computer says: "Well, screen readers mispronounce a lot of the words in a document or on a website. Should we just eliminate words entirely to prevent the problem? ... The answer: It's the responsibility of screen reader manufacturers to do a better job of recognizing and pronouncing hyphenated words."

So it does not seem like frequent advice or a frequent complaint.

Comment by Viliam on tailcalled's Shortform · 2024-09-16T09:03:55.239Z · LW · GW

I think our instincts may be misleading here, because the internet works differently from real life.

In real life, not interacting with someone is the default. Unless you have some kind of relationship with someone, people have no obligation to call you or meet you. And if I call someone on the phone just to say "dude, I disagree with your theory", I would expect that person to hang up... and maybe say "sorry, I'm busy" before hanging up, if they are extra polite. The interactions are mutually agreed, and you have no right to complain when the other party decides to not give you the time. (And if you keep insisting... that's what the restraining orders are for.)

On the internet, once you sign up to e.g. Twitter, the default is that anyone can talk to you, and if you are not interested in reading the texts they send you, you need to block them. As far as I know, there are no options in the middle between "block" and "don't block". (Nothing like "only let them talk to me when it is important" or "only let them talk to me on Tuesdays between 3 PM and 5 PM".) And if you are a famous person, I guess you need to keep blocking left and right, otherwise you would drown in text -- presumably you don't want to spend 24 hours a day sifting through Twitter messages, and you want to get the ones you actively want, which requires you to aggressively filter out everything else.

So getting blocked is not an equivalent of getting a restraining order, but more like an equivalent of the other person no longer paying attention to you. Which most people would not interpret as evidence of cultism.

Comment by Viliam on Not every accommodation is a Curb Cut Effect: The Handicapped Parking Effect, the Clapper Effect, and more · 2024-09-16T08:04:12.816Z · LW · GW

I find it difficult to imagine a person who will bite through the paper straw but wouldn't bite through the thin plastic straw.

Comment by Viliam on Not every accommodation is a Curb Cut Effect: The Handicapped Parking Effect, the Clapper Effect, and more · 2024-09-16T07:49:42.453Z · LW · GW

I haven't worked on front end for over a decade, so I am not familiar with recent development, but from the days I did, I remember adding the alt tag to images, but I never heard anything about soft hyphens.

Could it possibly be that the good and bad advice does not come from the same sources? And that we should listen to some sources and ignore the others? I can imagine that a set of advice that was reasonable at the beginning can grow through the game of telephone.

(Something similar happened with SEO advice, which started with "if you use keywords in the URL, Google will prioritize your page for the keyword, so use the page title in the URL rather than id=123", and quickly mutated to "if you use id=123 in your URL, Google will refuse to index your page" that was obvious nonsense, but you could find it in 99% of articles about SEO. Or all that stuff "required by" GDPR.)

For example, page Accessibility Principles on W3C homepage does not mention hyphens.

Comment by Viliam on Did Christopher Hitchens change his mind about waterboarding? · 2024-09-15T14:39:41.525Z · LW · GW

Funny how I find myself agreeing with everything except for the last paragraph, which I would replace with something like: "if this is the best evidence you can get, maybe it's time to admit that you have no evidence".

I mean, in a universe where Hitchens actually defended waterboarding (and it was a topic important enough for him that he actually tried it), we would expect to find stronger evidence than "one blogger said so" and "it makes a good story". Like, he would actually mention it somewhere in writing or in an interview.

Comment by Viliam on Not every accommodation is a Curb Cut Effect: The Handicapped Parking Effect, the Clapper Effect, and more · 2024-09-15T14:21:23.780Z · LW · GW

A few random things:

Seems to me that some accommodations are costly (in money), and some are merely "you have to remember to do that" (but can be costly in money if you forget to do that, and try to fix it afterwards). I think the curb cut is the latter -- it will cost you if you want to add it to an existing sidewalk, but has no/little extra cost if you are making a new sidewalk.

Captions are also good for translating. Automated translation is still imperfect but better than nothing; and it is easier for a human to make captions in another language by translating the original ones rather than by starting from scratch.

The obsession with plastic straws seems silly to me, because if you just want one drink then the paper ones are okay, and if you want to use them repeatedly at home, then the washable silicone ones are the right choice. I guess this is just a question of time until people get used to it.

Could the wrappers for oranges be made from something biodegradable?

Thanks for the summary / classification at the end!

Comment by Viliam on Forever Leaders · 2024-09-14T23:16:15.781Z · LW · GW

In the past, dictators died, but monarchies sometimes survived for generations. The mortality of the kings did not necessarily bring liberty to their subjects.

Comment by Viliam on On the destruction of America’s best high school · 2024-09-14T13:32:44.732Z · LW · GW

This sounds a bit like: "it improved lives of some people, but not of everyone, so no big deal if it gets burned down". That's an insane standard for how good things need to be, before we prevent people from destroying them for stupid reasons. I don't think that following such standard actually makes the world a better place.

This objection would make sense in a situation where we would have to choose between an option A that is good but doesn't create spinoffs, and an option B that is good and creates spinoffs. There it would make sense to sacrifice A so that B could survive. But what exactly survives here as a result of sacrificing a good school?

Comment by Viliam on I just can't agree with AI safety. Why am I wrong? · 2024-09-14T13:12:40.266Z · LW · GW

Not a single current "AI" can do all of it simultaneously. All of them are neuros, who can't even learn and perform over 1 task, to say nothing of escaping the power of alt+f4.

Unlike humans, machines can be extended / combined. If you have two humans, one of them is a chess grandmaster and the other is a famous poet... you have two human specialists. But if you have two machines, one great at chess and another great at poetry, you could in principle combine them to get one machine that is good at both. (You would need one central module that gives commands to the specialized modules, but that seems like something an LLM could already manage.)
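
A toy sketch of that "central module" idea; `ask_router`, `chess_engine`, and `poetry_model` are hypothetical stand-ins, not real APIs:

```python
def ask_router(request: str) -> str:
    # In practice this could itself be an LLM, prompted to reply with
    # just the name of the right specialist for the request.
    return "chess" if "move" in request.lower() else "poetry"

def chess_engine(request: str) -> str:
    return "e4"  # placeholder for a dedicated chess model

def poetry_model(request: str) -> str:
    return "Roses are red..."  # placeholder for a dedicated poetry model

SPECIALISTS = {"chess": chess_engine, "poetry": poetry_model}

def combined_machine(request: str) -> str:
    # One machine that is "good at both": route, then delegate.
    return SPECIALISTS[ask_router(request)](request)
```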

LLMs can learn new things. At least in the sense that they have a long-term memory which was trained and probably cannot be updated (I don't understand in detail how these things work), but also a smaller short-term memory, where they can choose to store some information (it's basically as if the information stored there were added to every prompt made afterwards). This feature was added recently to ChatGPT.

When an AI becomes smart enough to make or steal some money, obtain fake human credentials, rent some space in the cloud, and copy itself there, you can keep pressing alt+f4 as much as you want.

Are we there yet? No. But remember that five years ago if someone described ChatGPT, most people would laugh at them and say we wouldn't get there in hundred years.

Comment by Viliam on How to discover the nature of sentience, and ethics · 2024-09-14T12:28:40.675Z · LW · GW

Yeah, the problem is with the external boundaries and the internal classification of "consciousness".

I have first-hand access to my own consciousness. I can assume that others have something similar, because we are biologically similar -- but even this kind of reasoning is suspicious, because we already know there are huge differences between people: people in a coma are biologically quite similar to people who are awake; there are autists and psychopaths, or people who hallucinate -- if there were huge differences in the quality of consciousness, as a result of this or something else, how would we know it?

And there is the problem with those where we can't reason by biological similarity: animals, AIs.

Comment by Viliam on Building an Inexpensive, Aesthetic, Private Forum · 2024-09-10T20:54:45.360Z · LW · GW

No.

But I would advise against setting up the software for yourself (unless this is the type of thing you also do for a job), because it can be more work than initially expected, especially if you need to keep updating it afterwards. Also, if you use a standardized solution, there are standardized exploits out there, so unless you set it up carefully and update regularly, you probably should expect it to get hacked sooner or later.

Basically, remember the situation when one person practically took down Less Wrong, and it had to be reprogrammed from scratch, because updating the original Reddit codebase would be too much work? A similar thing can happen when you use a free solution, and defending against it can turn out to be too much work. I don't know how big a target the "friend who is a professional in the behavioral sciences" is, but sometimes it just takes one crazy person with too much free time.

So, in my opinion, if it's not a big deal, use some cheap and simple solution that can (and will) be thrown away later. If it is important, I am afraid that you will need a paid solution (or you will pay with your own time, more than you expected).

Comment by Viliam on Building an Inexpensive, Aesthetic, Private Forum · 2024-09-10T20:09:20.491Z · LW · GW

Some webhosting companies already provide the entire combination of PHP + database + forum, for the price of webhosting. (At least this was the situation a decade ago.) Then you just set the access rights to "only registered and approved users can read and write" and you're ready.

Comment by Viliam on Has Anyone Here Consciously Changed Their Passions? · 2024-09-10T19:45:05.039Z · LW · GW

I can certainly see the appeal of social pressure/the potential reward of better social standing for sticking a long-term goal through.

That is not really what motivates me. It's that when I work on something alone, I feel lonely. If I can talk about it to other people, I don't. Also, I find it easier to focus on things when I can talk about them.

I feel that I am expected to bail by default

Expected... by yourself, or by others? For example, I find talking to some people helpful, but talking to some people harmful.

One way some people can disappoint me as talking partners is when they immediately start predicting that I will fail. "You always talk about doing things, but you never finish any of them. This time it is certainly not going to be any different." This hurts in two ways: on one hand, because it is uncomfortably close to truth; let's say that I finish maybe 1 out of 20 things that I start doing. On the other hand, because it is literally false; I actually do finish 1 out of 20 things that I start doing, and I always hope that this is going to be the one, or that the ratio will start improving.

A glass 5% full is still not the same as empty! I may feel on most of days like a loser, but sometimes I look back and see an accumulated record of successes. If I told someone only about the successes, and not a word about the failures, they might actually consider me impressive. And when we look from outside at others, isn't this kind of filtered view that we usually see? Both of these perspectives can be true simultaneously. I had to learn to stop talking to people who are predictably negative. (Which is different from betting. Yes, when I start a project, I would rationally bet that this project will probably fail. But the point is that some things are worth trying even if the probability of success is smaller than 50%.)

Another way of disappointing me is when the other person tries to take ownership of my project. When they start giving unsolicited advice, and then get defensive when I don't accept it, often because they completely misunderstand my motivation for the project (am I doing this for myself, or for others? do I want to achieve a specific goal, or to practice a specific skill? which parts of the project are the ones that I am looking forward to doing, and which are the annoying parts that I simply need to overcome?).

What I need instead is someone who would listen, be gently encouraging, maybe give an idea or two, but be perfectly okay with me saying no. Basically, something like a (Rogerian) psychologist. Someone who would remind me of what I said yesterday or a week ago, but would not express disapproval if I failed to do that or changed my mind. Shortly, positive motivation, not negative. Celebration of success (and partial progress), rather than fear of failure. Removing the pressure, rather than increasing it.

Comment by Viliam on Has Anyone Here Consciously Changed Their Passions? · 2024-09-09T08:43:42.827Z · LW · GW

My motivational hack is having people I can talk to about my project.

When it feels like I am the only person in the universe who cares about whether X succeeds or fails, I find it very difficult to continue working on the project. Even if it is something where quite naturally I am the person who should care most, such as my health or my finances.

OK, but why should anyone actually care about my projects? One solution is to find another person in a similar situation, and talk to each other about our projects. That's one of the things friends are for.

Comment by Viliam on shortplav · 2024-09-08T21:32:53.437Z · LW · GW

Sounds interesting. The question is, would it be better for companies than the current situation? Because it's the company who decides the form of the interview, so if the answer is negative, this is not going to happen.

On a hypothetical nerdy planet where things like this happen, we could go further and let both sides specify numbers for various scenarios, for example what would be the salary for working in open space vs having your own office with doors, how much for work from home vs work in office, on-call vs no on-call, etc. Not sure how exactly to evaluate the results, but I think it might be good for the employers to have data such as "having open spaces is $X cheaper than having offices with doors, but our employees hate it so much that we need to pay them $Y higher salaries, so maybe it was not such a good idea" or "remote work makes people 10% less productive, but we could hire 30% more of them for the same budget".