Posts

Rationality Vienna Meetup June 2019 2019-04-28T21:05:15.818Z · score: 9 (2 votes)
Rationality Vienna Meetup May 2019 2019-04-28T21:01:12.804Z · score: 9 (2 votes)
Rationality Vienna Meetup April 2019 2019-03-31T00:46:36.398Z · score: 8 (1 votes)
Does anti-malaria charity destroy the local anti-malaria industry? 2019-01-05T19:04:57.601Z · score: 64 (17 votes)
Rationality Bratislava Meetup 2018-09-16T20:31:42.409Z · score: 18 (5 votes)
Rationality Vienna Meetup, April 2018 2018-04-12T19:41:40.923Z · score: 10 (2 votes)
Rationality Vienna Meetup, March 2018 2018-03-12T21:10:44.228Z · score: 10 (2 votes)
Welcome to Rationality Vienna 2018-03-12T21:07:07.921Z · score: 4 (1 votes)
Feedback on LW 2.0 2017-10-01T15:18:09.682Z · score: 11 (11 votes)
Bring up Genius 2017-06-08T17:44:03.696Z · score: 55 (50 votes)
How to not earn a delta (Change My View) 2017-02-14T10:04:30.853Z · score: 10 (11 votes)
Group Rationality Diary, February 2017 2017-02-01T12:11:44.212Z · score: 1 (3 votes)
How to talk rationally about cults 2017-01-08T20:12:51.340Z · score: 5 (10 votes)
Meetup : Rationality Meetup Vienna 2016-09-11T20:57:16.910Z · score: 0 (1 votes)
Meetup : Rationality Meetup Vienna 2016-08-16T20:21:10.911Z · score: 0 (1 votes)
Two forms of procrastination 2016-07-16T20:30:55.911Z · score: 10 (11 votes)
Welcome to Less Wrong! (9th thread, May 2016) 2016-05-17T08:26:07.420Z · score: 4 (5 votes)
Positivity Thread :) 2016-04-08T21:34:03.535Z · score: 26 (28 votes)
Require contributions in advance 2016-02-08T12:55:58.720Z · score: 61 (61 votes)
Marketing Rationality 2015-11-18T13:43:02.802Z · score: 28 (31 votes)
Manhood of Humanity 2015-08-24T18:31:22.099Z · score: 10 (13 votes)
Time-Binding 2015-08-14T17:38:03.686Z · score: 17 (18 votes)
Bragging Thread July 2015 2015-07-13T22:01:03.320Z · score: 4 (5 votes)
Group Bragging Thread (May 2015) 2015-05-29T22:36:27.000Z · score: 7 (8 votes)
Meetup : Bratislava Meetup 2015-05-21T19:21:00.320Z · score: 1 (2 votes)

Comments

Comment by viliam on Ruby's Public Drafts & Working Notes · 2019-09-14T14:09:51.754Z · score: 11 (2 votes) · LW · GW

Please note that even things written in 1620 can be under copyright. Not the original thing, but the translation, if it is recent. Generally, every time a book is modified, the clock starts ticking anew... for the modified version. If you use a sufficiently old translation, or translate a sufficiently old text yourself, then it's okay (even if a newer translation exists, if you didn't use it).

Comment by viliam on Matthew Barnett's Shortform · 2019-09-13T22:02:16.848Z · score: 5 (4 votes) · LW · GW

These days my reason for not using my full name is mostly this: I want to keep my professional and private lives separate. I have to use my real name at my job, therefore I don't use it online.

What I probably should have done many years ago is make up a new, plausible-sounding full name (perhaps keep my first name and just make up a new surname?) and use it consistently online. Maybe it's still not too late; I just don't have any surname ideas that feel right.

Comment by viliam on Looking for answers about quantum immortality. · 2019-09-13T21:37:48.481Z · score: 2 (1 votes) · LW · GW
If its at all possible for consciousness to transfer between worlds

I suppose it's not.

Physics doesn't say how consciousness works.

It exists in brains, brains are made of atoms, and physics has a story or two about the atoms.

Comment by viliam on Looking for answers about quantum immortality. · 2019-09-11T01:15:03.796Z · score: 6 (3 votes) · LW · GW

I read the first link, and to me it seems that the author actually stumbles upon the right answer in the middle of the paper, only to dismiss it immediately with "we have no good way to justify it" and proceed towards things that make less sense. I am talking about what he calls the "intensity rule" in the paper.

Assuming a non-collapse interpretation, the entire idea is that literally everything happens all the time, because every particle has a non-zero amplitude at every place, but it all adds up to normality anyway, because what matters is the actual value of the amplitude, not just whether it is zero or non-zero. (Theoretically, epsilon is not zero. Practically, the difference between zero and epsilon is epsilon.) Outcomes with larger amplitudes are the normal ones, the ones we should expect more. Outcomes with epsilon amplitudes are the ones we should only pay epsilon attention to.

Is it possible that the furniture in my room will, due to some very unlikely synchronized quantum tunneling, transform into a hungry tiger? Yes, it is theoretically possible. (In both the Copenhagen and many-worlds interpretations, by the way.) How much time should I spend contemplating such a possibility? Just by mentioning it, I have already spent many orders of magnitude more than would be appropriate.

The paper makes an automatic assumption about time, which I am going to ignore for the moment. Let's assume that, because of quantum immortality, you will be alive 1000000 years from now. Which path is most likely to get you from "here" to "there"?

In any case, some kind of miracle is going to happen. But we should still expect the smallest necessary miracle. In absolute numbers, the chances of "one miracle" and "dozen miracles" are both pretty close to zero, but if we are going to assume that some miracle happened, and normalize the probabilities accordingly, "one miracle" is almost certainly what happened, and the probability of "dozen miracles" remains pretty close to zero even after the normalization. (Assuming the miracles are of comparable size, mutually independent, et cetera.)
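The normalization step can be illustrated with a toy calculation (the probabilities below are made up; only their ratios matter for the argument):

```python
# Toy illustration of "normalize on survival": hypothetical numbers,
# only the ratios between the paths matter.
p_miracle = 1e-6  # made-up probability of a single miracle

# Prior probabilities of two survival paths, assuming the miracles
# are of comparable size and mutually independent.
paths = {
    "one miracle": p_miracle,           # e.g. frozen now, revived later
    "dozen miracles": p_miracle ** 12,  # surviving on the verge of death, repeatedly
}

# Condition on "you survived somehow": renormalize over surviving paths.
total = sum(paths.values())
posterior = {name: p / total for name, p in paths.items()}

print(posterior["one miracle"])     # very close to 1.0
print(posterior["dozen miracles"])  # still pretty close to zero
```

Both priors are nearly zero in absolute terms, but after normalization the single-miracle path takes essentially all the probability mass, which is the point of the paragraph above.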

Comparing likelihoods of different miracles is, by definition, outside of our usual experience, so I may be wrong here. But it seems to me that the horror scenario envisioned by the author requires too many miracles. (In other words, it seems optimized for shock value, not relative probability.) Suppose that in 10 years you get hit by a train, and by a miracle, a horribly disfigured fragment of you survives in an agony beyond imagination. Okay, technically possible. So, what is going to happen during the following 999990 years? It seems that continuing to survive in this state would require more miracles than continuing to survive as a healthy person. (The closer to death you are, the more unlikely it is for you to survive another day, or year.) And both these paths seem to require more miracles than being frozen now, and later resurrected and made forever young using advanced futuristic technology. Even just dying now, and being resurrected 1000000 years later, would require only one miracle, albeit a large one. If you are going to be alive in 1000000 years, you are most likely to get there by the least miraculous path available. I am not sure what exactly that is, but being constantly on the verge of death and surviving anyway seems too unlikely (and being frozen and later unfrozen, or uploaded to a computer, seems almost ordinary in comparison).

Now, let's take a bit more timeless perspective here. Let's look at the universe in its entirety. According to quantum immortality, there are you-moments in the arbitrarily distant future. Yes; but most of them are extremely thin. Most of the mass of the you-moments is here, plus or minus a few decades. (Unless there is a lawful process, such as cryonics, that would stretch a part of the mass into the future enough to change the distribution significantly. Still not as far as quantum immortality, which can probably overcome even the heat death of the universe and get so far that time itself stops making sense.) So, according to the anthropic principle, whenever you find yourself existing, you most likely find yourself in the now -- I mean, in your ordinary human lifespan. (Which is, coincidentally, where you happen to find yourself right now, isn't it?) There are a few you-moments in very exotic places, but most of them are here. Most of your life happens before your death; most instances of you experiencing yourself are the boring human experience.

Comment by viliam on strangepoop's Shortform · 2019-09-10T23:57:41.345Z · score: 3 (2 votes) · LW · GW

From a certain perspective, "more models" becomes one model anyway, because you still have to choose which of the models you are going to use at a specific moment. Especially when multiple models, all of them "false but useful", would each suggest taking a different action.

As an analogy, it's like saying that your artificial intelligence will be an artificial meta-intelligence, because instead of following one algorithm, as other artificial intelligences do, it will choose between multiple algorithms. At the end of the day, "if P1 then A1 else if P2 then A2 else A3" still remains one algorithm. So the actual question is not whether one algorithm or many algorithms is better, but whether having a big if-switch at the top level is the optimal architecture. (Dunno, maybe it is, but from this perspective it suddenly feels much less "meta" than advertised.)
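A minimal sketch of that point (the models, the threshold, and all names are hypothetical placeholders): a dispatcher over several models is, seen from the outside, just one more model.

```python
# Two simple "false but useful" models of the same quantity,
# plus a dispatcher that picks between them per situation.

def model_linear(x):
    return 2 * x   # useful for small x

def model_saturating(x):
    return 10      # useful for large x

def meta_model(x):
    # The "many models" approach: choose a model per situation...
    if x < 5:
        return model_linear(x)
    else:
        return model_saturating(x)

# ...but meta_model is itself just one function from input to output:
# the if-switch became part of a single, bigger algorithm.
print(meta_model(2))   # 4
print(meta_model(7))   # 10
```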

Comment by viliam on Free-to-Play Games: Three Key Trade-Offs · 2019-09-10T23:41:15.356Z · score: 9 (5 votes) · LW · GW

I recently started playing an online game I saw advertised online. I know how addictive these things are, but I decided to "play with fire" anyway.

As a precaution, I decided never to bookmark the game in my browser. I registered using a throwaway e-mail address. Also, I never told anyone that I was playing it. That way, when I decided to quit, nothing would pull me back -- it would only require one decision, not repeated temptations and decisions. And... I played for a few weeks and then I quit. And after a few days of not playing, I don't feel like starting it again anymore, so I guess my strategy worked.

I will not mention the name of the game here. Anyway, it was the type of game where you build stuff, collect resources, and research new stuff; with many things to unlock. In the game there were three important resources, let's call them X, Y, and Z. By making better or worse decisions, you could make more or less of the resources X and Y; and I spent some time optimizing for that.

With resource Z, however, the basic way to get it was to play the game regularly. If you logged in at least N times a day, you got M points of resource Z per day; you couldn't get more for playing longer, but you would get less for taking breaks longer than 1/N of the day. In addition to this, there were also some other ways to get resource Z, but this extra amount was always smaller than the amount you got for merely playing the game regularly. There was no smart strategy to at least double the income of Z. So, whether you did smart or stupid things had a visible impact on X and Y, but almost no impact on Z.
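The login-gated income described above could be sketched roughly like this (the function name and the values N=4 and M=100 are my made-up placeholders, not the game's actual numbers):

```python
# Sketch of login-gated income of resource Z: capped at M points per day,
# reduced proportionally when you log in fewer than N times a day.

def z_income(logins_per_day, n_required=4, m_points=100):
    """Daily income of Z. Playing longer than required gives nothing extra;
    taking breaks longer than 1/N of the day costs you a share of the cap."""
    ratio = min(logins_per_day, n_required) / n_required
    return m_points * ratio

print(z_income(6))   # capped: playing longer doesn't help
print(z_income(4))   # the required minimum earns the full amount
print(z_income(2))   # longer breaks mean proportionally less Z
```

Note that `logins_per_day` is the only input: no amount of skill enters the formula, which is the point of the paragraph above.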

Of course, resource Z was the one that actually mattered in the long term. Your progress on the tech tree sometimes required X and Y, but always required Z. And, of course, the higher steps on the almost-linear tech tree required more of resource Z.

So, regardless of whether you did smart or stupid things, you advanced in the game at a pre-programmed speed, which gradually slowed down the longer you played. In other words, pre-programmed fun at the beginning (unlocking a lot of stuff during the first day, trying various things), pre-programmed increasing boredom later. Completely unsurprisingly, resource Z was the one you could also buy for real money. But even if you decided to spend a certain amount of money every week, you would still get the same boredom curve, as the constant income of resource Z would have diminishing returns the further you progressed on the tech tree. The only way to keep a constant level of fun (assuming that unlocking new things on the tech tree counts as fun, even if they are mostly the same stuff only with different numbers and pictures) would be to pay ever increasing amounts of money.

After realizing all this, I still kept playing for a few days before I finally stopped. (I never paid anything, of course.)

Comment by viliam on G Gordon Worley III's Shortform · 2019-09-08T21:24:28.566Z · score: 5 (3 votes) · LW · GW

Seems to me that modern life is full of distractions. As a smart person, you probably have a job that requires thinking (not just moving your muscles in a repetitive way). In your free time there is the internet, with all the websites optimized for addictiveness. Plus all the other things you want to do (books to read, movies to see, friends to visit). Electricity can turn your late night into a day; you can take a book or a smartphone everywhere.

So, unless you choose them consciously, there are no silent moments to get in contact with yourself... or whatever higher power you imagine there to be, talking to you.

I wonder what the effect ratio is between meditation and simply taking a break and wondering about stuff. Maybe it's our productivity-focused thinking that says meditating (doing some hard work in order to gain supernatural powers) is a worthy endeavor, while goofing off is a sin.

Comment by viliam on Matthew Barnett's Shortform · 2019-09-08T19:41:46.272Z · score: 4 (2 votes) · LW · GW

In the real world, people usually forget what you said 10 years ago. And even if they don't, saying "Matthew said this 10 years ago" doesn't have the same power as you saying the thing now.

But the internet remembers forever, and your words from 10 years ago can be retweeted and come alive as if you had said them now.

A possible solution would be to use a nickname... and whenever you notice you have grown so much that you no longer identify with the words of your nickname, pick a new one. Also make new accounts on social networks, and re-friend only those people you still consider worthy. Well, in this case the abrupt change would be the unnatural thing, but perhaps you could keep using your previous account for some time, mostly passively. Just as your real-life new self would have different opinions, different hobbies, and different friends than your self from 10 years ago, so would your online self.

Unfortunately, this solution goes against the "terms of service" of almost all major websites. On the advertisement-driven web, advertisers want to know your history, and they are the real customers... you are only the product.

Comment by viliam on An Educational Singularity · 2019-09-08T19:22:04.161Z · score: 9 (3 votes) · LW · GW

Is "knowledge transference" a real thing, or one of those thousand things that didn't replicate? There are many myths in education, I wonder if this is one of them.

(I tried Wikipedia, but it only has an article on "knowledge transfer", which is about sharing information between people within an organization, i.e. something completely different.)

Bryan Caplan in The Case Against Education writes:

[Teachers say:] A history class can teach critical thinking; a science class can teach logic. Thinking—all thinking—builds mental muscles. The bigger students’ mental muscles, the better they’ll be at whatever job they eventually land.
[Is it true?] For the most part, no. Educational psychologists who specialize in “transfer of learning” have measured the hidden intellectual benefits of education for over a century. Their chief discovery: education is narrow. As a rule, students learn only the material you specifically teach them . . . if you’re lucky. In the words of educational psychologists Perkins and Salomon, “Besides just plain forgetting, people commonly fail to marshal what they know effectively in situations outside the classroom or in other classes in different disciplines. The bridge from school to beyond or from this subject to that other is a bridge too far.”
Many experiments study transfer of learning under seemingly ideal conditions. Researchers teach subjects how to answer Question A. Then they immediately ask their subjects Question B, which can be handily solved using the same approach as Question A. Unless A and B look alike on the surface, or subjects get a heavy-handed hint to apply the same approach, learning how to solve Question A rarely helps subjects answer Question B.
[In an experiment when subjects are told a military puzzle and its solution, and then a medical puzzle which can be solved analogically,] A typical success rate is 30%. Since about 10% of subjects who don’t hear the military problem offer the convergence solution, only one in five subjects transferred what they learned. To reach a high (roughly 75%) success rate, you need to teach subjects the first story, then bluntly tell them to use the first story to solve the second.
To repeat, such experiments measure how humans “learn how to think” under ideal conditions: teach A, immediately ask B, then see if subjects use A to solve B. Researchers are leading the witness. As psychologist Douglas Detterman remarks: "Teaching the principle in close association with testing transfer is not very different from telling subjects that they should use the principle just taught. Telling subjects to use a principle is not transfer. It is following instructions."
Under less promising conditions, transfer is predictably even worse. Making the surface features of A and B less similar impedes transfer. Adding a time delay between teaching A and testing B impedes transfer. Teaching A, then teaching an irrelevant distracter problem, then testing B, impedes transfer. Teaching A in a classroom, then testing B in the real world impedes transfer. Having one person teach A and another person test B impedes transfer.
[...] No wonder even transfer optimists like Robert Haskell lament: "Despite the importance of transfer of learning, research findings over the past nine decades clearly show that as individuals, and as educational institutions, we have failed to achieve transfer of learning on any significant level."
[...] Counterexamples do exist, but compared to teachers’ high hopes, effects are modest, narrow, and often only in one direction. One experiment randomly taught one of two structurally equivalent topics: (a) the algebra of arithmetic progression, or (b) the physics of constant acceleration. Researchers then asked algebra students to solve the physics problems, and physics students to solve the algebra problems. Only 10% of the physics students used what they learned to solve the algebra problems. But a remarkable 72% of the algebra students used what they learned to solve the physics problems. Applying abstract math to concrete physics comes much more naturally than generalizing from concrete physics to abstract math.
[...] Each major sharply improved on precisely one subtest. Social science and psychology majors became much better at statistical reasoning—the ability to apply “the law of large numbers and the regression or base rate principles” to both “scientific and everyday-life contexts.” Natural science and humanities majors became much better at conditional reasoning—the ability to correctly analyze “if . . . then” and “if and only if” problems. On remaining subtests, however, gains after three and half years of college were modest or nonexistent.
[...] Transfer researchers usually begin their careers as idealists. Before studying educational psychology, they take their power to “teach students how to think” for granted. When they discover the professional consensus against transfer, they think they can overturn it. Eventually, though, young researchers grow sadder and wiser. The scientific evidence wears them down—and their firsthand experience as educators finishes the job

Intuitively, it seems to me that having a good model of the world trained on some subjects should provide some advantage at other subjects. But either it is an obvious prerequisite (such as: understanding chemistry helps you understand biochemistry), or the benefits are likely to be small (e.g. from physics I could learn that the universe follows relatively simple impersonal laws; but that alone does not tell me which laws are followed in sociology or computer science). Having good general knowledge can inoculate one against some fake theories (e.g. physics and chemistry against homeopathy), but after removing the fake frameworks there is still much to learn. Also, the transferred knowledge (e.g. "there is no supernatural, nature follows impersonal laws") is the same for all natural sciences, so the "X%" you get from physics is the same as the "X%" you get from chemistry; you do not get "2X%" after learning both of them.

Comment by viliam on Sayan's Braindump · 2019-09-05T21:32:12.096Z · score: 2 (1 votes) · LW · GW

Generally, if you want to go outside of your comfort zone, you might as well do something useful (either for yourself, or for others).

For example, if you try "rejection therapy" (approaching random people, getting rejected, and thus teaching your System 1 that being rejected doesn't actually hurt you), you could approach people with something specific, like giving them fliers, or trying to sell something. You may make some money as a side effect, and in addition to expanding your comfort zone also get some potentially useful job experience. If you travel across difficult terrain, you could also transport some cargo and get paid for it. If you volunteer for an organization, you will get some advice and support (the goal is to do something unusual and uncomfortable, not to optimize for failure), and you will get interesting contacts (your LinkedIn profile will be like: "endorsed for skills: C++, object-oriented development, brain surgery, fire extinguishing, assassination, cooking for homeless").

You could start by obtaining a list of non-governmental organizations in your neighborhood, calling them, and asking whether they need a temporary volunteer. (Depending on your current comfort zone, this first step may already be outside of it.)

Comment by viliam on The Transparent Society: A radical transformation that we should probably undergo · 2019-09-04T23:23:23.640Z · score: 22 (8 votes) · LW · GW

If someone tried to implement this in real life, I would expect it to get implemented exactly halfway. I would expect to find out that my life became perfectly transparent for anyone who cares, but there would be some nice-sounding reason why the people at the top of the food chain would retain their privacy. (National security. Or there are a few private islands in the ocean where the surveillance is allegedly economically/technically impossible to install, and by sheer coincidence, the truly important people live there.) I would also expect this asymmetry to be abused against people who try to organize to remove it.

You know, just like those cops wearing body cams that mysteriously stop functioning exactly at the moment the recording could be used against them. That, but on a planetary scale.

From the opposite perspective, many people would immediately think about counter-measures. Secret languages, so that you can listen to me talking to my friends, but still have no idea what the topic was. This wouldn't scale well, but some powerful and well-organized groups would use it.

People would learn to be more indirect in their speech, to allow everyone to pretend that anything was a coincidence or misunderstanding. There would be a lot of guessing, and people on the autism spectrum would be at a serious disadvantage.

How would the observed data be evaluated? People are hypocrites; just because you are doing the same thing many other people are doing, and everyone can see it, that doesn't necessarily prevent the outcome where you get punished and those other people aren't. People are really good at being dumb when you provide them evidence they don't want to see. Not understanding things you can clearly see would become an even more important social skill. There would still be taboos, and you would not be able to talk about them; not even in privacy, because that wouldn't exist anymore.

But for the people who believe this would be great... I would recommend trying the experiment on a smaller scale. To create a community of volunteers, who would install surveillance throughout their commune, accessible to all members of the commune. What would happen next?

Comment by viliam on Sayan's Braindump · 2019-09-04T22:30:39.447Z · score: 2 (1 votes) · LW · GW

How specifically would you do better than the status quo?

I could easily dismiss some charities for causes I don't care about, or where I think they do more harm than good. That still leaves many charities whose cause I approve of, and that seem to me like they could help. How do I choose among these? They publish some reports, but are the numbers there the important ones, or just the ones that are easiest to calculate?

For example, I don't care if your "administrative overhead" is 40%, if that allows you to spend the remaining 60% ten times more effectively than a comparable charity with smaller overhead. Unfortunately, the administrative overhead will most likely be included in the report, to two decimal places; but the achieved results will be either something nebulous (e.g. "we make the world a better place" or "we help kids become smarter"), or they will describe the costs, not the outcomes (e.g. "we spent 10 million to save the rainforest" or "we spent 5 million to teach kids the importance of critical thinking").

Now, I don't have time and skills to become a full-time charity researcher. So if I want to donate well, I need someone who does the research for me, and whose integrity and sanity I can trust.

Comment by viliam on Sayan's Braindump · 2019-09-04T22:19:26.354Z · score: 4 (2 votes) · LW · GW

What kills you doesn't make you stronger. You want to get out of your comfort zone, not out of your survival zone.

Comment by viliam on Sayan's Braindump · 2019-09-04T21:21:47.887Z · score: 2 (1 votes) · LW · GW

The brain, I guess.

Comment by viliam on Stories About Academia · 2019-09-03T00:24:17.375Z · score: 14 (10 votes) · LW · GW
if you say that you're interested in computer science and also music, or studying the Hebrew Bible, wow, that's just, that must mean you're just not very serious about computer science.

This part is not limited to academia. In computer science, it is now trendy to show your open-source contributions and GitHub projects, which demonstrate that when you get home from your programming job, you also spend your free time programming... as opposed to, you know, having a different hobby, or friends, or a family. Not programming in your free time means you are not very serious.

(I suppose that programming in your free time, but using technologies that are frowned upon, or programming something completely unrelated to the job you apply for, is also a sign of not being completely serious. Your hobby should be very similar to your job. Oh, and you should probably stop doing it when you get the job, because now it would be infringing on your company's intellectual property. But when you change the job, you are somehow magically supposed to have a few GitHub projects using the latest technological fads again.)

Comment by viliam on Ruby's Public Drafts & Working Notes · 2019-09-02T23:57:28.640Z · score: 2 (1 votes) · LW · GW

The specific details are probably gender-specific.

Men are supposed to be strong. If they express sadness, it's like a splash of low status and everyone is like "ugh, get away from me, loser, I hope it's not contagious". On the other hand, if they express anger, people get scared. So men gradually learn to suppress these emotions. (They also learn that words "I would really want you to show me your true feelings" are usually a bait-and-switch. The actual meaning of that phrase is that the man is supposed to perform some nice emotion, probably because his partner feels insecure about the relationship and wants to be reassured.)

Women have other problems, such as being told to smile when something irritates them... but this would be more reliably described by a woman.

But in general, I suppose people simply do not want to empathize with bad feelings; they just want them to go away. "Get rid of your bad feeling, so that I am not in a dilemma to either empathize with you and feel bad, or ignore you and feel like a bad person."

A good reaction would be something like: "I listen to your bad emotion, but I am not letting myself get consumed by it. It remains your emotion; I am merely an audience." Perhaps it would be good to have some phrase to express that we want this kind of reaction, because from the other side, providing this reaction unprompted can lead to accusations of insensitivity. "You clearly don't care!" (By feeling bad when other people feel bad we signal that we care about them. It is a costly signal, because it makes us feel bad, too. But in turn, the cost is why we provide all kinds of useless help just to make it go away.)

Comment by viliam on Power Buys You Distance From The Crime · 2019-09-02T23:47:43.867Z · score: 2 (1 votes) · LW · GW

Self-driving cars have a similar problem. Even if a car caused 100 times fewer accidents than a human driver, the problem is that when an accident does happen, we need a human to blame.

How will we determine who goes to jail? Elon Musk? The poor programmer who wrote the piece of software that will be identified as having caused the bug? Or maybe someone like you, who "should have checked that the car is 100% safe", even if everyone knows it is impossible. Most likely, it will be someone at the bottom of the corporate structure.

For now, as far as I know, the solution is that there must be a human driver in a self-driving car. In case of an accident, that human will be blamed for not avoiding it by taking over control.

But I suppose that moving the blame from the customer to some low-wage employee of the producer would be better for sales, so the legislation will likely change this way some day. We just need to find the proper scapegoat.

Comment by viliam on Tiddlywiki for organizing notes and research · 2019-09-01T20:14:40.350Z · score: 4 (3 votes) · LW · GW

Thanks for the description! I found this software years ago, but somehow I didn't notice that you can save the data.

For people who like the idea of a wiki, but have some objection against Tiddlywiki, I would recommend trying WikidPad (desktop app, can save in one file, automatically generates page tree, supports lookup by keywords) or MediaWiki (online app, all functionality of Wikipedia including writing your own macros).

Comment by viliam on Question about a past donor to MIRI. · 2019-09-01T20:04:36.244Z · score: 6 (6 votes) · LW · GW

My bet would be "yes", because it doesn't seem to be a frequent name, and the set of people who can donate so much money is not that large. It could possibly be a relative with the same first name, if such a person exists.

I also think it is irrelevant, but I am curious about the meta level: If an organization has to disclose the names of donors (not sure if SIAI has to or not), does it also have to answer questions like "is the XY who gave you money this specific XY, or a namesake?"

Would it make sense for a group of rich people to coordinate and legally change their names to the same (already frequent) name, thus making their future donations deniable? Imagine that George Soros, Peter Thiel, and the Koch brothers would all change their names to "John Snow", and then many organizations would be like "yeah, John Snow donated tons of money to our cause, and we are not legally required to tell you which one". :D

Comment by viliam on How to Make Billions of Dollars Reducing Loneliness · 2019-09-01T00:38:58.452Z · score: 2 (1 votes) · LW · GW

That would make average people underestimate the "loneliness crisis", right?

Comment by viliam on Peter Thiel/Eric Weinstein Transcript on Growth, Violence, and Stories · 2019-09-01T00:37:38.754Z · score: 2 (1 votes) · LW · GW

I have uMatrix at home; the pages appear empty.

Previously I accessed the page from work, where no blocker is installed, and there the pages were full of ads.

So it seems that for the optimal experience you not only need some kind of blocker, but it also has to be the right one.

Comment by viliam on Peter Thiel/Eric Weinstein Transcript on Growth, Violence, and Stories · 2019-08-31T17:14:52.871Z · score: 4 (2 votes) · LW · GW

Apologies for going off topic, but what is this "wikiwand.com" domain, repeatedly linked in the article? Seems like a mirror of Wikipedia, only full of advertisements.

Comment by viliam on How to Make Billions of Dollars Reducing Loneliness · 2019-08-31T17:08:29.199Z · score: 2 (1 votes) · LW · GW

It sometimes works, and it sometimes doesn't. The question is whether the only thing X and Y have in common is knowing you -- in which case it likely won't scale -- or whether you have selected them both for the same reason (perhaps one you couldn't even articulate explicitly, but it's real and they feel it too) -- in which case it could work.

Comment by viliam on Slider's Shortform · 2019-08-31T17:05:20.484Z · score: 3 (2 votes) · LW · GW
If there is no way to input user generated content then you can't spam.

Yep. If I ever have a meaningful web page, there will be no user comments, because it seems like there is no good solution.

I think there are power balancing mechanism that get a lot more close to proportionality.

I am afraid that online even this wouldn't work. First, people can make multiple accounts. (The infamous guy on LW 1.0 made several hundred of them.) Second, I feel that participating in online debates already selects for the worse parts of humanity, simply because some people have better things to do and some don't.

I prefer the archipelago model of the internet. Rationalist websites for rationalists, homeopathic websites for homeopaths; rather than having all of them in the same place fighting each other. But this goes against the incentives of the big websites, which want to be for everyone, because that allows them to display advertising to everyone.

On the other hand, creating "reality bubbles" (because, let's admit it honestly, this is what the archipelago model means) also has its own problems.

Comment by viliam on Habryka's Shortform Feed · 2019-08-31T14:10:26.040Z · score: 2 (1 votes) · LW · GW

Warning: HPMOR spoilers!

I suspect that fiction can conveniently ignore the details of real life that could ruin seemingly good plans.

Let's look at HPMOR.

The general idea of "create a nano-wire, then use it to simultaneously kill/cripple all your opponents" sounds good on paper. Now imagine yourself, at that exact situation, trying to actually do it. What could possibly go wrong?

As a first objection, how would you actually put the nano-wire in the desired position? Especially when you can't even see it (otherwise the Death Eaters and Voldemort would see it too). One mistake would ruin the entire plan. What if the wind blows and moves your wire? What if one of the Death Eaters moves a bit, and feels a weird stinging at the side of their neck?

Another objection: when you pull the wire to kill/cripple your opponents, how far do you actually have to move it? Assuming a dozen Death Eaters (I do not remember the exact number in the story), if you need 10 cm for an insta-kill, that's 1.2 meters you need to pull before the last one kills you. Sounds doable, but also like something that could possibly go wrong.

In other words, I think that in real life, even Harry Potter's plan would most likely fail. And if he is smart enough, he would know it.

The implication for real life is that, similarly, smart plans are still likely to fail, and you know it. Which is probably why you are not trying hard enough. You probably already remember situations in your past when something seemed like a great idea, but still failed. Your brain may predict that your new idea would belong to the same reference class.

Comment by viliam on I think I came up with a good utility function for AI that seems too obvious. Can you people poke holes in it? · 2019-08-31T13:47:05.047Z · score: 6 (3 votes) · LW · GW

The usual weaknesses:

  • how would the AI describe the future? different descriptions of the same future may elicit opposite reactions;
  • what about things beyond current human understanding? how is the simulated person going to decide whether they are good or bad?

And the new one:

  • the "this future is going to happen anyway, now I will observe your actions" approach would give a high score e.g. to futures that are horrible, but where everyone who refuses to cooperate with the omnipotent AI will suffer an even worse fate (because as long as the threat seems realistic and the AI unstoppable, it makes sense for the simulated person to submit and help)

EDIT: Probably even higher score for futures that are "meh but kinda okay, only everyone who refuses to help (after being explicitly told that refusing to help is punished by horrible torture) is tortured horribly". The fact that the futures are "kinda okay" and that only people ignoring an explicit warning are tortured, would give an excuse to the simulated person, so fewer of them would choose to become martyrs and thereby provide the -1 vote.

Especially if the simulated person would be told that actually, so far, everyone chose to help, so no one is in fact tortured, but the AI still has a strong precommitment to follow the rules if necessary.

Comment by viliam on How to Make Billions of Dollars Reducing Loneliness · 2019-08-31T13:10:11.562Z · score: 2 (1 votes) · LW · GW

It is best to have both, but generally the community requires less effort per human contact. I mean, if you want to meet your friend, either you or the friend needs to take responsibility and organize the thing. (Even if "organizing" means simply telling them "come to my place today at 19:00, we can talk or watch a movie".) With a community, there is more work organizing, but then many people benefit from it, and also each participant meets multiple people at the same time, i.e. you could have a dozen 1:1 interactions at the same place, which puts the cost of one interaction really low.

In the community, there is a risk that some people will always volunteer and some people will always free-ride, but in some sense this possibility is also a feature: people momentarily too low on energy to organize anything can still participate.

Are you familiar with Transactional Analysis, or more specifically the book Games People Play? Among other things, there is a scale of human relationships; if I remember correctly, it goes like this: "ignorance" (people pretend not to see each other), "rituals" (people do prescribed movements and say prescribed words), "work" (people act like professionals, they cooperate on a common goal but there is nothing personal about it), "games" (people interact to fulfill their emotional needs, but still hide behind their personas), and finally "intimacy" (people feel comfortable to remove their masks and interact openly).

The thing is, all of these levels serve a purpose. It is pathological if you can't trust anyone. It is also pathological if you can't keep your boundaries. The deeper relationships are more meaningful; the less deep relationships scale better. You want the entire pyramid: a few people you are intimate with, a larger group you have fun with, to be able to cooperate if necessary with any sane person, and to avoid conflict with those who rub you the wrong way.

By the way, people live in bubbles, so it's hard to estimate how many have the "loneliness crisis". Enough in absolute numbers for it to be a problem. But is it a majority or a minority? I have no idea.

Comment by viliam on How to Make Billions of Dollars Reducing Loneliness · 2019-08-31T12:21:55.724Z · score: 10 (5 votes) · LW · GW

I am pretty low on conscientiousness, but I know people even lower than me.

For me, an important lesson was living alone in my own place, for a few years. That allowed me to try different things, and see what happens. After a few iterations, I guess my System 1 learned that actually washing the dishes immediately is the option that requires least work, which made it my preferred option.

Seems to me that people who never lived alone are missing an important learning opportunity. (Not just about dishes but also other household topics: vacuuming the room, buying toilet paper, etc. You see the entire "metabolism" of the household, not just selected parts.) When you live with other people, the options are not only "do the dishes immediately, when you mostly just rinse them", "do the dishes later, when you have to scrub the dry parts of the food", or even "do the dishes much later, when you also have to remove the disgusting mold", but there is also an option "if I wait long enough, someone else will do the dishes". The presence of the last option, especially when one is in denial about how much they benefit from it, is one of the things that make bad roommates.

tl;dr -- I believe it's usually more about incentives than about conscientiousness; or rather that going against your incentives requires even more conscientiousness than usual.

Comment by viliam on eigen's Shortform · 2019-08-31T12:05:48.990Z · score: 2 (1 votes) · LW · GW

I agree, reading a book... and then reading a book on a different topic when you already had too much of the former... seems like a good approach.

Actually, school seems to be designed this way, if you assume that 45 minutes is the optimal time to spend on one subject. (Which is probably wrong, and also depends on age, subject, etc. But the idea of "focus on X for nontrivial time, then focus on Y" is there.)

Comment by viliam on Slider's Shortform · 2019-08-31T12:00:12.890Z · score: 2 (1 votes) · LW · GW

That's what always happens, I guess.

The thing is that all solutions are bad, but leaving the problem (of spam etc.) unaddressed is even worse than the usual solutions.

Sometimes small websites avoid this, when they are unknown enough that they don't attract any spammer or any crazy person, and unimportant enough that people who don't like the content simply leave. But if they get more popular, it's only a question of when.

Imagine that your user base is: 50% Greens, 30% Blues, 10% crazy people, and 10% spammers. If you leave the site unmoderated, crazy people and spammers will make it unpleasant for everyone else. If you have a voting system, Greens will eliminate the Blues. If you have moderators, you must choose carefully, because a majority of Greens or Blues among the moderators will eliminate the other side; and of course having the same number of Green and Blue moderators would be unfair, because then the Blues are overrepresented compared to the user base. (Also, this would incentivize the 0.01% Purples to demand equal representation among moderators, too. And if you grant it, then either Greens or Blues, by making a coalition with Purples, can eliminate the other side.) You can't win.

Comment by viliam on How to Make Billions of Dollars Reducing Loneliness · 2019-08-30T22:23:00.801Z · score: 9 (7 votes) · LW · GW

A community is different from mere friendship, just like common knowledge is different from mere knowledge; it's transitive. Not only are X and Y close to you, but you know that they are also close to each other; this is why you can invite them both at the same time.

Assuming you already have a community, if they live in the same city, what you need is to establish a communication channel where people can post stuff like "I want to go to a movie / walk at time T, would anyone like to join me?"

If you don't have a community, the first step is to join one or create one. You could join a larger community (e.g. the local rationalist meetup), and within it select a subgroup of people who "click" with you and with each other. Or you could grow it gradually, starting with a small group of your friends who also like each other, slowly progressing by "do you have a friend who seems like they could fit into our small group? try inviting them to our meeting tomorrow". For a group up to 10 people, the easiest way to organize a meetup is at your home.

(Sorry, this probably deserves a longer text, but I feel tired at the moment. Just wanted to write this.)

Comment by viliam on How to Make Billions of Dollars Reducing Loneliness · 2019-08-30T22:14:40.219Z · score: 4 (3 votes) · LW · GW

For example, when you have kids, you often get a situation where dozens of things get dirty at the same time, and instead of washing them immediately you need to do something else, such as taking the kids to kindergarten.

But without kids, I agree with you, washing everything right after you used it is quick and simple. It just requires a bit of conscientiousness.

Comment by viliam on jp's Shortform · 2019-08-30T21:26:53.546Z · score: 2 (1 votes) · LW · GW

Would it be useful to examine what exactly "low energy" means? For example, if you do not have enough sleep, then you could simply go sleep sooner, or take a nap in the middle of the day. If it's just mental fatigue, you could take a walk in a park.

My personal objection to reading the web is that it requires almost zero energy to do, but on the other hand it does not let you replenish the energy. You start reading tired, and you end up just as tired. That's why taking a walk is better, because it liberates your mind a bit.

Comment by viliam on Eigil Rischel's Shortform · 2019-08-30T21:00:53.274Z · score: 3 (2 votes) · LW · GW

Sounds correct to me. As long as the AI has no model of the outside world and no model of itself (and perhaps a few extra assumptions), it should keep playing within the given constraints. It may produce results that are incomprehensible to us, but it would not do so on purpose.

It's when the "tool AI" has a model of the world -- including itself, the humans, how the rewards are generated, how it could generate better results by obtaining more resources, and how humans could interfere with its goals -- that the agent-ness emerges as a side effect of trying to solve the problem.

"Find the best GO move in this tree" is safe. "Find the best GO move, given the fact that the guy in the next room hates computers and will try to turn you off, which would be considered a failure at finding the best move" is dangerous. "Find the best GO move, given the fact that more computing power would likely allow you to make better moves, but humans would try to prevent you from getting too much resources" is an x-risk.
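For concreteness, the "safe" first case can be sketched as plain minimax over a fixed tree (a toy sketch; the tree encoding and names are mine, not from any real Go engine): the search has no concept of anything outside the data structure it was handed, so there is nothing for agent-ness to latch onto.

```python
# A tool AI in the first sense: it searches only the tree it is given.
# The tree maps a position to its child positions; leaves map to numeric
# evaluations. Nothing outside this structure exists for the search.

def best_move(tree, node, maximizing=True):
    """Return (value, move) by plain minimax over the fixed tree."""
    children = tree[node]
    if isinstance(children, (int, float)):   # leaf: static evaluation
        return children, None
    results = []
    for move, child in children.items():
        value, _ = best_move(tree, child, not maximizing)
        results.append((value, move))
    return max(results) if maximizing else min(results)

# Toy tree: after our move "a" the opponent's best reply leaves us with 3;
# after move "b", with 5.
tree = {
    "root": {"a": "A", "b": "B"},
    "A": {"x": "A1", "y": "A2"},
    "B": {"x": "B1", "y": "B2"},
    "A1": 3, "A2": 7,   # opponent picks the minimum: 3
    "B1": 5, "B2": 9,   # opponent picks the minimum: 5
}

value, move = best_move(tree, "root")
print(value, move)  # prints: 5 b
```

The point of the sketch is what is absent: there is no variable anywhere representing "the guy in the next room", so no amount of optimization within this code can route around him.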

Comment by viliam on Slider's Shortform · 2019-08-30T20:53:38.949Z · score: 4 (2 votes) · LW · GW

Voting with points is what you do when...

  • you need to somehow separate the better stuff from the worse stuff, even if the method is imperfect, because there will be tons of extremely bad stuff (e.g. spam, or crazy people obsessed with the topic);
  • you don't want to appoint a moderator, because you don't have the money to pay someone to do it as a job, and you suspect that the volunteers would be motivated by the opportunity to abuse the power; and
  • you need to have a nice version to show to a completely passive person (who doesn't even have an account), so that individual friend lists and block lists are not sufficient -- you have to arrive at "one true rating" for stuff.

Without voting, you would have to give up on having an official page available to users without accounts, or you would have to establish the official moderators and either pay them or accept that people who want to abuse that kind of power have the strongest incentive to volunteer.

(The former is kinda like e-mail, and the latter is kinda like Reddit.)

Comment by viliam on eigen's Shortform · 2019-08-30T20:39:09.826Z · score: 2 (1 votes) · LW · GW

I don't read Reddit, but I have a similar experience with Hacker News. While I am reading it, it seems interesting, but when I afterwards try to remember anything useful, I can't.

My explanation is that I spend my time reading, but I don't spend my time processing what I have just read, because I am immediately moving to the next topic. Passivity is bad for remembering. (Compare with how spaced repetition learning software requires you to guess the correct answer, before telling you. Or how the mere act of note-taking improves remembering, even if you don't read your notes afterwards.) But again, reading without actively working with the topic seems to be the default approach when reading sites such as Reddit that throw a lot of content at you. With active engagement, my procrastination sessions wouldn't take an hour or two, but the entire day.

Seems like the rule is that you can only meaningfully process a limited amount of topics during a day. Reading a book seems like about the right amount. Also, the things in the book are related to each other, it is not a random mix of unrelated facts. (Related things are easier to remember than unrelated ones. Even if you make up a silly relation between them; a few mnemonic techniques are based on that.)

Comment by viliam on G Gordon Worley III's Shortform · 2019-08-21T22:57:21.901Z · score: 4 (2 votes) · LW · GW
the most important thing in Buddhist thinking is seeing reality just as it is, unmediated by the "thinking" mind, by which we really mean the acts of discrimination, judgement, categorization, and ontology. To be sure, this "reality" is not external reality, which we never get to see directly, but rather our unmediated contact with it via the senses.

The "unmediated contact via the senses" can only give you sensory inputs. Everything else contains interpretation. That means you can only have "gnosis" about things like [red], [warm], etc. Including a lot of interesting stuff about your inner state, of course, but still fundamentally of the type [feeling this], [thinking that], and perhaps some usually-unknown-to-non-Buddhists [X-ing Y], etc.

Poetically speaking, these are the "atoms of experience". (Some people would probably say "qualia".) But some interpretation needs to come to build molecules out of these atoms. Without interpretation, you could barely distinguish between a cat and a warm pillow... which IMHO is a bit insufficient for a supposedly supreme knowledge.

Comment by viliam on Two senses of “optimizer” · 2019-08-21T22:38:28.701Z · score: 3 (2 votes) · LW · GW

A superintelligence is potentially more useful if it can model more. As an example, imagine that you want an AI that gives you a cure for cancer. Well, it does, but as a side effect of the cure, the patient loses 50 IQ points. Or perhaps the cure is incredibly painful. Or it is made from dead babies' stem cells, causing outrage. Or it is insanely expensive, e.g. you would have to construct it atom by atom, in large quantities. Etc.

It would be better to have a superintelligence that understands all of these things, takes a little more time thinking, and finds a cure for cancer that also happens to be relatively cheap, inoffensive, painless, pleasant-tasting, and without negative side effects. (For the sake of argument, I assume here that both solutions are possible, it's just that the second one is a bit more difficult to find, so the former AI goes with the first solution it finds, because why not.)

But the further you go this way, the more likely the superintelligence is able to model its own existence, and people's reaction to it. As soon as the AI is able to model correctly "if people turn me off 5 minutes before producing the cure for cancer, it means 0 people will be cured, even if my algorithm would have produced an efficient cure otherwise", we get the first bits of self-awareness. Now the superintelligence will optimize the environment for its instrumental goals (survival, more resources, greater popularity or ability to defend itself) as a side effect of solving other problems.

It would require a selective blindness to make the superintelligence assume that it is disembodied, and that its computations will continue and produce effects in the real world even if its body is destroyed. Actually... with a sufficiently good model of the world, it could still reason about building another intelligence to assist it with the project. And if you make it blind towards computer science, there is still a chance it would invent another intelligence that doesn't exactly fit your definition of a "computer", e.g. an intelligent swarm of nanobots built from organic molecules. (There is a general argument somewhere on LW that you can't reliably limit a superintelligence by creating a blacklist of forbidden moves, because by being smarter than you it can possibly think of things that should have been on your blacklist, but you didn't think of them.)

Using your terminology, not every optimizer_1 is an optimizer_2, but the most useful ones of them are. A computer able to solve a huge system of linear equations is not as useful as the one that can find a cure for cancer.

Comment by viliam on Paradoxical Advice Thread · 2019-08-21T22:04:03.773Z · score: 9 (6 votes) · LW · GW

"Early bird catches the worm" + "Never put off until tomorrow what you can do today"

vs "Look before you leap" + "Think before you speak"

Not completely opposites (I assume you are not expected to think 24 hours before you speak), but still going in the opposite direction: "act quickly" vs "be careful, slow down".

Advantages of acting later:

  • more time to think about consequences, can possibly lead to better choice of words or action;
  • you might even realize that not doing it or remaining silent is actually a better choice here.

Advantages of acting sooner:

  • being the first one gives you a competitive advantage;
  • ceteris paribus, people who act faster are more productive.

When I put it this way, I guess the important factor is how likely taking more time will allow you to make a better choice. When different actions can bring wildly different outcomes, take your time to choose the right one. On the other hand, if the results will be pretty much the same anyway, stop wasting time and do it now.

Specifically, when talking, taking time to reflect is usually the right choice. It doesn't necessarily improve the typical outcome, but may avoid a big negative outcome once in a while.

Problematic situations:

When there is a person you would like to approach... is it better to wait and prepare your words (risking that the situation changes, e.g. the person leaves, their phone rings, or someone else approaches them) or act quickly (and risk making a bad first impression by saying something stupid, or just not focusing on the right thing)?

When starting a company... how much time should you spend analyzing the market, et cetera? There is a risk that you will spend the next few years doing what was predictably the wrong choice. On the other hand, markets change, you won't get perfect information anyway, and someone else might do the thing you wanted to do first and take over the market.

Comment by viliam on A misconception about immigration · 2019-08-20T20:09:57.767Z · score: 13 (3 votes) · LW · GW

The idea behind the broken windows fallacy is that when you move money from point X to point Y, and start talking about the effects of having more money at point Y, it would be fair to also mention the effects of having less money at point X. Otherwise you are drawing a false picture.

To highlight the mistake you made, let's take the situation to the extreme. Imagine that there are so many immigrants that the population literally doubles. Let's assume that all of them are the lazy type: none of them ever gets a job, all of them are living on welfare. To prevent starvation, the government issues a law that everyone who previously had a job must now work 16 hours a day, to produce enough goods to satisfy everyone's needs.

I suppose we would agree that such outcome would be a bad thing... for those working 16 hours a day. (We could make a utilitarian argument that by improving the lives of those on welfare it is still a net good. But it makes it obvious why most of the original population would try to prevent such outcome.)

Now let's look at your argument: economy is growing, there is more work -- fantastic, isn't it?

By the way, the argument "more economy = better" is itself problematic. First, it probably should be measured per capita; having X% more whatever because you have X% more people, leaves the same amount for everyone, on average. But even measured per capita: I think that a hypothetical Western society where people consume 20% less, but only work 4 hours a week, is not obviously a worse place. (I am not talking about societies where "consuming 20% less" = literally starving, of course.) Similarly, working 6 days a week and consuming 20% more, is not an obvious improvement.

Comment by viliam on Why I Am Not a Technocrat · 2019-08-20T18:50:40.975Z · score: 13 (4 votes) · LW · GW

With an ad blocker, I see almost nothing on the page. No, I am not going to turn it off.

Comment by viliam on Prokaryote Multiverse. An argument that potential simulators do not have significantly more complex physics than ours · 2019-08-18T12:40:38.433Z · score: 2 (1 votes) · LW · GW

My understanding is that the article makes these claims:

1. Universes with "more complex rules" than ours are actually less likely to contain life, because there are more ways things could go wrong.
2. Universes with "more complex rules" are a priori less likely.
Therefore: If our universe is a simulation in another universe, the parent universe likely doesn't have "more complex rules" than ours, because the probability penalty for having "more complex rules" outweighs the fact that such a universe could easily find enough computing power to simulate many universes like ours.

I am not defending the assumptions, nor the conclusion, only trying to provide a summary with fewer buzzwords. (Actually, I agree with the assumption 2, but I am not convinced about the rest.)

Comment by viliam on Jacob's Twit, errr, Shortform · 2019-08-18T12:27:00.927Z · score: 5 (3 votes) · LW · GW

Rules making human behavior more transparent would be good for nerds, if everyone followed them. Unfortunately, I believe this is not going to happen.

What is going to happen instead, in my opinion, is the usual: rules that high-status people can afford to break, and low-status people can either accept them as additional burden or get punished for breaking them.

The usual "he said / she said" of sexual violence investigations will remain, only the object of the debate will move to "they gave me an explicit verbal confirmation / no I didn't". The usual double standard will remain, too: when two people have sex with neither of them giving explicit verbal consent, only one of them will risk actual repercussions.

Also, many people love plausible deniability, so adopting the new rule will stimulate a lot of creativity in this new direction: how to say something that simultaneously could be interpreted as an explicit verbal confirmation, but also as something other than explicit verbal confirmation. And, as usual, nerds will be at a disadvantage at playing these games.

Comment by viliam on Matthew Barnett's Shortform · 2019-08-17T16:48:23.620Z · score: 5 (3 votes) · LW · GW

Just like an idea can be wrong, so can criticism. It is bad to give up an idea just because...

  • someone rounded it up to the nearest cliche, and provided the standard cached answer;
  • someone mentioned a scientific article (that failed to replicate) that disproves your idea (or something different, containing the same keywords);
  • someone got angry because it seems to oppose their political beliefs;
  • etc.

My "favorite" version of wrong criticism is when someone experimentally disproves a strawman version of your hypothesis. Suppose your hypothesis is "eating vegetables is good for health", and someone makes an experiment where people are only allowed to eat carrots, nothing more. After a few months they get sick, and the author of the experiment publishes a study saying "science proves that vegetables are actually harmful for your health". (Suppose, optimistically, that the author used sufficiently large N, and did the statistics properly, so there is nothing to attack from the methodological angle.) From now on, whenever you mention that perhaps a diet containing more vegetables could benefit someone, someone will send you a link to the article that "debunks the myth" and will consider the debate closed.

So, when I hear about research proving that parenting / education / exercise / whatever doesn't cause this or that, my first reaction is to wonder how specifically the researchers operationalized such a general word, and whether the thing they studied even resembles my case.

(And yes, I am aware that the same strategy could be used to refute any inconvenient statement, such as "astrology doesn't work" -- "well, I do astrology a bit differently than the people studied in that experiment, therefore the conclusion doesn't apply to me".)

Comment by viliam on Dony's Shortform Feed · 2019-08-15T21:56:07.862Z · score: 3 (2 votes) · LW · GW
if I get too powerful, I will break away from others

You would probably break away from some, connect with some new ones, and reconnect with some that you lost in the past.

Comment by viliam on Matthew Barnett's Shortform · 2019-08-15T21:51:39.546Z · score: 4 (2 votes) · LW · GW

This seems similar to "pomodoro", except instead of using your willpower to keep working during the time period, you set up the environment in a way that doesn't allow you to do anything else.

The only part that feels wrong is the commitment part. You should commit to work, not to achieve success, because the latter adds problems (it is not completely under your control, it may discourage experimenting, a punishment creates aversion against the entire method, etc.).

Comment by viliam on Raemon's Scratchpad · 2019-08-15T21:37:47.544Z · score: 5 (2 votes) · LW · GW

After learning a new concept, it is important to "play with it" for a while. The new concept is initially not associated with anything, so you probably will not see what it is good for.

For example, if someone tells you "a prime number is an integer number greater than one that can only be divided by itself and by one", that is easy to understand (even easier if they also give you a few examples of primes and non-primes), but it is not obvious why is this concept important and how could it be used.

But when the person also tells you "the number of primes is infinite... each integer can be uniquely factored into primes... some numbers are obviously not primes, but we don't know a simple method to find out whether a large number is a prime... in arithmetic modulo n you can define addition, subtraction, and multiplication for any n, but you can unambiguously define division only when n is prime..." and perhaps introduces a concept of "relative primes" and the Chinese remainder theorem... then you may start getting ideas of how it could be useful, such as "so, if we take two primes so big that we can barely verify their primeness, and multiply them, it will be almost impossible to factor the result, but it would be trivial to verify when the original two numbers are provided -- I wonder whether we could use this as a form of signature."
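The modular-arithmetic point above can be made concrete in a few lines (a toy sketch; the function name is mine): division mod n is only unambiguous when every nonzero element has a multiplicative inverse, which happens exactly when n is prime.

```python
from math import gcd

def inverses_mod(n):
    """Elements of 1..n-1 that have a multiplicative inverse mod n."""
    return [a for a in range(1, n) if gcd(a, n) == 1]

# Mod a prime, every nonzero element is invertible, so division is
# always well-defined:
print(inverses_mod(7))  # prints: [1, 2, 3, 4, 5, 6]

# Mod a composite, some elements have no inverse (here 2, 3, 4 mod 6),
# so "x / 2" has no unambiguous meaning mod 6:
print(inverses_mod(6))  # prints: [1, 5]

# The "hard to factor, easy to verify" asymmetry in miniature: checking
# that 101 * 103 == 10403 is instant, while recovering the two factors
# of 10403 requires search.
assert 101 * 103 == 10403
```

The same asymmetry, scaled up to primes hundreds of digits long, is exactly the signature idea at the end of the paragraph.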

Comment by viliam on Eli's shortform feed · 2019-08-14T22:31:22.698Z · score: 2 (1 votes) · LW · GW

Seems to me that mental energy is lost through frustration. If what you are doing is fun, you can do it for a long time; if it frustrates you at every moment, you will get "tired" soon.

The exact mechanism... I guess is that some part of the brain takes frustration as evidence that this is not the right thing to do, and suggests doing something else. (Would that correspond to "1b" in your model?)

Comment by viliam on Hazard's Shortform Feed · 2019-08-14T22:25:07.502Z · score: 6 (3 votes) · LW · GW

Years ago, I wrote fiction, and dreamed about writing a novel (I was only able to write short stories). I assumed I liked writing per se. But I was hanging out regularly with a group of fiction fans... and when a conflict later happened between me and them, after which I stopped meeting them completely, I found out I had no desire left to write fiction anymore. So, it seems this was actually about impressing specific people.

I got the message, "To fit in, you have to really be about the thing. No half assing it. No posing."

I suspect this is only a part of the story. There are various ways to fit in a group. For example, if you are attractive or highly socially skilled, people will forgive you being mediocre at the thing. But if you are not, and you still want to get to the center of attention, then you have to achieve the extreme levels of the thing.

Comment by viliam on Hazard's Shortform Feed · 2019-08-14T22:13:39.443Z · score: 2 (1 votes) · LW · GW

Maybe emotional resilience is bad for some forms of signaling. The more you react emotionally, the stronger you signal that you care about something. Keeping calm despite feeling strong emotions can be misinterpreted by others as not caring.

Misunderstandings created this way could possibly cause enough harm to outweigh the benefits of emotional resilience. Or perhaps the balance depends on some circumstances, e.g. if you are physically strong, people will be naturally afraid to hurt you, so then it is okay to develop emotional resilience about physical pain, because it won't result in them hurting you more simply because "you don't mind it anyway".