Comment by viliam on The Relationship Between the Village and the Mission · 2019-05-16T23:43:26.149Z · score: 5 (3 votes) · LW · GW
"if you propose a new thing, especially a new confusing thing, there's a good chance you'll get a disproportionate amount of vocal opposition compared to support" ... if I interpreted wrong please correct me

Yes, this is how I meant it, but in the context of Less Wrong, especially when the new thing is about rationalists having some emotional experience and becoming closer to each other. Even if it is an obviously voluntary activity that no one is pressured to join. Unusual and confusing suggestions that would involve studying math or playing poker would not get that intensity of reaction.

(The surprising part is why singing songs together or living in the Dragon Army house is perceived as more dangerous than polyamory. But maybe that is because the idea of polyamory came first, so the people who strongly objected to it were already gone when the other ideas came along.)

Comment by viliam on The Relationship Between the Village and the Mission · 2019-05-16T23:32:02.657Z · score: 7 (4 votes) · LW · GW
Your comment is, I’m afraid, full of the most egregious strawmen

Looking at the discussion you linked... I admit I cannot find the horrible examples my mind keeps telling me I have seen. So, maybe I was wrong. Or maybe it was a different article, dunno. A few negative comments were deleted; but those were all written by the same person, so in either case they do not represent a mass reaction. The remaining comment closest to what I wanted to say is this one...

The whole point of rituals like this in religion is to switch off thinking and get people going with the flow. The epistemic danger should be pretty obvious. Ritual = irrational. [1]

...but even that one is not too bad.

It is only “a perfectly normal thing” because everyone who didn’t think it was perfectly normal, has left! ... It is a simple case of evaporative cooling!

This is a good point. Whatever the community does, if it causes the opposing people to leave, it will in hindsight be seen as the obviously right thing to do (because those who disagreed have already left), even if in a parallel Everett branch doing the opposite thing is seen as the obviously right thing.

I still feel weird about people who would leave a community just because a few of its members sang a song together. Also, people keep leaving for all kinds of reasons. I am pretty sure some have left because of a lack of emotional connection, such as, uhm, doing things together.

Meta:

Okay, at this moment I feel quite confused about this comment I just wrote. Like, from certain perspectives it seems like you are right, and I am simply refusing to say "oops". At the very least, I failed to find a sufficiently horrible anti-Solstice comment.

Yet, somehow, it is you who is saying that there were people who left the rationality movement because of the Solstice ritual, which is exactly the kind of hysterical reaction I tried to point at. (I can't imagine myself leaving a movement just because a few of its members decided to meet and sing a song together.)

Comment by viliam on The Relationship Between the Village and the Mission · 2019-05-15T22:16:31.734Z · score: 18 (6 votes) · LW · GW

I had in mind the proposals to organize (1) Solstice celebration and (2) Dragon Army, on Less Wrong.

From my perspective, both cases were "hey, I have an idea for a weird but potentially awesome activity, here is an outline, contact me if you are interested", and in both cases the debate was mostly about why this was a horrible thing to do, because only cultists would organize a weird activity in real life.

The Dragon Army pushed the Overton window so far that it is now difficult to remember what exactly was so horrifying about the Solstice celebration. But back then, the mere idea of singing together was quite triggering for a few people: singing is an irrational activity, it manipulates your emotions, it increases group cohesion which rubs contrarians the wrong way, it's what religious people do, yadda yadda yadda, therefore meeting with a group of friends and singing a song together means abandoning your rationality forever.

Now the Solstice celebration is a perfectly normal thing, and no one freaks out about it anymore. And I suppose that if there were a second and third attempt to do something like the Dragon Army, people would get used to that, too. But the reactions to the first attempts felt quite discouraging.

Comment by viliam on The Relationship Between the Village and the Mission · 2019-05-15T20:36:42.713Z · score: 9 (5 votes) · LW · GW
people whose coalitional membership is constituted by their shared adherence to “rational,” scientific propositions have a problem when—as is generally the case—new information arises which requires belief revision.

My first reaction was that perhaps the community should be centered around updating on evidence rather than any specific science.

But of course, that can fail, too. For example, people can signal their virtue by updating on tinier and tinier pieces of evidence. Like, when a probability increases from 0.000001 to 0.0000011, people start yelling about how this changes everything, and if you say "huh, for me that is almost no change at all", you become the unworthy one who refuses to update in the face of evidence.

(The people updating on the tiny evidence most likely won't even be technically correct, because purposefully looking for microscopic pieces of evidence will naturally introduce selection bias and double counting.)
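For what it's worth, the arithmetic supports the "almost no change at all" reading: measured in log-odds, an update from 0.000001 to 0.0000011 is worth well under a fifth of a bit of evidence. A quick sketch (the `log_odds` helper is my own, not anything from the discussion):

```python
import math

def log_odds(p):
    """Log-odds of a probability, measured in bits."""
    return math.log2(p / (1 - p))

p_before, p_after = 0.000001, 0.0000011
shift = log_odds(p_after) - log_odds(p_before)
print(round(shift, 3))  # about 0.14 bits of evidence -- barely anything
```

For probabilities this small, the shift is essentially log2(1.1), i.e. the same tiny likelihood ratio no matter how loudly anyone yells about it.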

Comment by viliam on The Relationship Between the Village and the Mission · 2019-05-14T23:22:33.040Z · score: 15 (6 votes) · LW · GW

Could some of this be connected to the "geek social fallacies"? Specifically: some people seem to be community material; some people seem corrosive to any community; most are probably somewhere on the spectrum. If you try to make a community that includes the corrosive people, it will quickly and inevitably fall apart. However, some communities have "inclusion" as their applause light, so navigating this successfully requires some degree of hypocrisy and tacit coordination.

I suppose that even the religious communities that try to save everyone's soul are ultimately exclusive. This happens in two ways:

First, "doing some actual work" filters out lazy people, or people who prefer talking about things to actually doing them. There are people who could endlessly talk about helping the poor; but if you ask for volunteers to cook soup for the homeless, when the time comes to actually cook it, these talkers will not be there. Good!

Second, some people take more than they give, but you can balance this by making "taking" low status and "giving" high status, and then having the high-status people meet separately. So you spend one afternoon cooking the soup and giving it to the homeless; but then you spend another afternoon or two with your fellow cooks in a place where the homeless people are not invited.

So, on one level you have people who love everyone so much that they even spend their free time cooking soup for the homeless. But on another level, you have a clever algorithm that filters out a kind of elite -- people who are altruistic and willing to work -- and has them network with each other, in the absence of the less worthy ones. No one mentions this explicitly, because debating it openly would probably ruin the effect: people uninterested in cooking soup for the homeless would start participating anyway, once they realized the benefits of networking with the altruistic and hard-working ones.

I suspect that the atheist community meetup will be full of annoying and disagreeable people who would have filtered themselves out of the "religious people cooking soup for the homeless" meetup. They don't all have to be annoying and disagreeable, of course, but even a few of them can ruin the atmosphere.

Coordinating online probably also makes things worse. When you announce an activity, people who dislike the activity will give vocal feedback, and you suddenly find yourself in a debate with them, which is a complete waste of your time. As opposed to announcing the time and place on a flyer, so that people who are interested will come, and the people who are not will stay at home.

In my personal experience, I found the highest-quality people in various volunteer groups. It doesn't matter what kind: they could be campaigning for human rights, organizing a summer camp for kids, preparing educational reform materials, or mowing a meadow to save endangered plant species. Some of these activities have specific filters on profession or political alignment, but each of them at the same time filters for... I am not sure I can describe it correctly, but it is a good filter.

Comment by viliam on The Relationship Between the Village and the Mission · 2019-05-14T22:44:54.974Z · score: 4 (2 votes) · LW · GW
Getting a sense of who is already working on what. ...

I would love to read an overview of things that are being done, in the rationalist community. By reading Less Wrong regularly, I am exposed to many random things, but I may have large blind spots. I would like to see the curated big picture.

In addition to the big picture (a list of meetups or podcasts or research groups), it would also be nice to have a database of helpful people (who organize the meetups, or bring cookies), but the latter should probably not be public. I have heard stories of people who come to the rationalist community with the goal of extracting free work from naive people (under vague and non-committal promises of improving the world or contributing to charity). So, if someone loves to bake cookies and bring them to meetups, it would be nice to give their contact to local meetup organizers, but not to make it completely public so that random parasites can spam them. Maybe a trivial inconvenience of "show me the specific work you have already done, before I give you the list of contacts" would be enough.

Comment by viliam on The Relationship Between the Village and the Mission · 2019-05-14T20:59:12.330Z · score: 7 (3 votes) · LW · GW

I would like to have a community that strives to be rational also "outside the lab". The words "professional bayesianism" feel like bayesianism within the lab. (I haven't read the book, so perhaps I am misinterpreting the author's intent.)

Google seems to invest huge amounts of effort into making sure they have a good internal community.

That's nice, but ultimately, if there is a tension between "what is better for you" and "what is better for Google", Google will probably choose the latter. What could possibly be good for you but bad for Google? Thinking for less than one minute I'd say: becoming financially independent, so you no longer have to work; building your own startup; finding a spouse, having kids, and refusing to work overtime...

Yeah, this is a fully general argument against any society, but it seems to me that a Village, simply by not being profit-oriented, would have greater freedom to optimize for the benefit of its members. For a business company, every employee is a cost. In a village, well-behaving citizens pay their own bills and provide some value to each other; whether that value is greater or smaller, it is still positive or zero.

"Church" is something that can continues to succeed even in a large town or city where people come and go more easily (although I'm not confident this is a stable arrangement – once you have large cities, atomic individualism and the gradual erosion of Church might be inevitable)

An important part of being in the Church is being physically present at its religious activities, e.g. every Sunday morning. So even if you happen to be surrounded mostly by non-believers in your city, at least once a week you become physically surrounded by believers. (A temporary Village.) Physical proximity creates the kind of emotions that the internet cannot substitute for.

Church is a "eukaryotic" organization: it has a boundary on the outside (believers vs non-believers), but also inside (clergy vs lay members). This slows down value shift: you can accept many believers while only worrying about the value alignment of the clergy. Potential heretical opinions of the lay members are just their personal opinions, not the official teaching; if necessary, the clergy will make this clear in a coordinated way. Having a stronger filter on the inner boundary allows you to have a weaker filter on the outer boundary, because there is no democracy in the outer circle.

Translated to the language of the article: Mission can have multiple Villages, but Village can only have one Mission. As an example, if meditation becomes popular among some rationalists, and they start going to Buddhist retreats and hanging out with Buddhists, and then they bring their nerdy Buddhist friends to rationality meetups... it should be clear that the rationalist community is in absolutely no risk of becoming a religious community, because the mysterious bullshit of Buddhism will be rejected (at least by the inner circle) just like the mysterious bullshit of any other religion. Similarly when people try to conquer the rationalist community for their political faction; but I believe we are doing quite well here.

You listen to sermons that establish common knowledge of what your people do-and-don't-do.

The important thing here is that the sermons come from the top. They do not represent the latest fashionable contrarian opinion. The Church provides many things for its members, but freedom to give sermons is not one of them.

(To avoid misunderstanding: I am not praising dictatorship for dictatorship's sake here. Rather, it is my experience from various projects that there is a type of person who comes to introduce controversy, but doesn't contribute to the core mission. These people will cause drama, and provide nothing useful in return. If they win, they will only keep pushing further; if they lose, they will ragequit and maybe spend some time slandering you. It is nice to have a mechanism that stops them at the door. This is even more important in a group that attracts so many contrarians, where "hey, you call yourselves 'rationalists', but you irrationally refuse my opinion before you have spent a thousand hours debating it thoroughly?!" is a powerful argument. The sermons are a tool of coordination, and coordination is hard.)

Comment by viliam on The Relationship Between the Village and the Mission · 2019-05-14T20:40:07.409Z · score: 7 (3 votes) · LW · GW

If Mission requires a lot of work (or isn't paid well, so you need an extra job to pay your bills), people will have to reduce their involvement when they have kids. And most people are going to have kids at some point in their lives.

On the other hand, Village without kids... should more properly be called Hotel or Campus.

Thus, Village helps Mission by keeping currently inactive people close, so even if you cannot use their work at the moment, you can still use some of their expertise. Also, the involvement doesn't have to be "all or nothing"; people with school-age kids can be involved part-time.

Mission without Village will keep losing tacit knowledge, and will probably have to exert stronger pressure to keep and recruit members. (Which can become a positive feedback loop, if members start leaving because of the increased pressure, and the pressure increases as a reaction to the threat of losing members.)

Comment by viliam on Hierarchy and wings · 2019-05-09T14:57:09.921Z · score: 11 (3 votes) · LW · GW

I am happy you posted it here. It sounds reasonable, and from my perspective it doesn't feel mindkilling.

I have already assumed that state power is about taking resources (or defending yourself from having more taken from you; but if you have the power to achieve that, most people won't stop there). Saying that the right and the left are two different strategies for creating coalitions to achieve this goal sounds quite impartial to me. Maybe I have mentally edited out something offensive, dunno. But I like the definitions of "the Schelling point of power" and "the natural opposition to the former" (this is how I abbreviate it for myself). Definitely more useful than "the guys who would hypothetically sit on the left/right side of the 18th-century French parliament" or "it's just completely random coalitions".

Two things I would like to add:

1) This model seems to work for e.g. the USA, but the situation in e.g. post-communist Eastern Europe is the other way round. The Schelling point of power is "let's bring back communism", its natural leaders being the former apparatchiks and secret service officers (many of them, or their sons, still active in the current army and police). Yet this is considered "left-wing". And "right-wing" is the hodgepodge of free-marketers, religious fundamentalists, and everyone whose vision of the future does not include the return of communism.

More abstractly, the Schelling point of power depends on the recent historical events in a given country. If the previous military power self-identified as "left-wing", the naming gets reversed.

2) It seems possible to go at least one level deeper in this analysis. You have the "natural Schelling point", and "its natural opposition" defined as people most likely to be oppressed by the former. But even the opposition oppresses someone -- there are "minorities within minorities" -- and thus we can sometimes get a second-order opposition, which may ally itself with the enemy of their enemy, despite not belonging there "naturally". Generally, "enemy of my enemy" strategy can create weird coalitions.

To give a real-life example from American politics, the left-wing coalition includes feminists, gays, and ethnic minorities. But what if you are an ethnic minority member who criticizes how a given minority treats its own women or gays? You will get labeled as "right-wing". Even if you identify as left-wing, and your opinions and arguments are traditionally left-wing, picking the wrong target gets you thrown out of the coalition.

Comment by viliam on Reference request: human as "backup" · 2019-04-29T18:57:13.482Z · score: 4 (2 votes) · LW · GW

In The Matrix, the role of humans was quite similar to the role of mitochondria. (Except that it does not make sense.)

I imagine that at the beginning, humans could be useful to the young AIs which would excel at some skills but fail at others. (One important role would be providing a human "face" in interaction with humans who don't like AIs.) However, that usefulness would only be temporary.

A eukaryotic cell cannot find a short-term replacement for mitochondria, and in evolution the long term does not happen without the short term. An intelligent designer -- such as a self-improving AI -- could, however, spend the time and resources to research a more efficient replacement for the functions the humans provide, if it made sense in the long term.

On the other hand, if the AI is under so much pressure that it cannot afford to do research, it probably also cannot afford to provide luxuries to its humans. So the humans will become an equivalent of cage-bred chickens.

Comment by Viliam on [deleted post] 2019-04-29T00:01:19.446Z

Thank you for writing this; it is an inspiration to many thoughts!

Seems to me that according to the "Copenhagen interpretation of ethics", knowing yourself makes you less moral; or makes your life more difficult if you want to remain moral.

If you don't understand your brain's [player's] Machiavellian moves, you cannot be blamed for them, as long as your [character's] intentions are pure. You simply do whatever feels right to you at the moment, and then you reap the rewards of the unconscious strategy given to you by evolution. You execute the shrewd moves with perfect innocence, and the outcome feels like good luck, or even good karma for... some random thing.

("My success is a result of my positive thinking and hard work. It is completely unrelated to the fact that I stabbed my former friends in the back when they outlived their purpose, and always kissed the asses of powerful people. No; I have simply found out that some people whom I considered friends in the past actually suck, and instead I decided to spend my time with genuinely awesome people whom I admire. And now I observe, full of gratitude, that the Universe has rewarded my constant striving for virtuous life.")

On the other hand, suppose you read a lot about evolutionary psychology, and get good at understanding your brain's motives. Your brain prompts you with a Machiavellian move, and despite feeling the genuine desire to act that way, you also clearly see it for what it is. ("My friend's behavior has felt really annoying recently; sometimes I am so irritated I wish we would just stop seeing each other. On a different level, I am also aware that he is no longer a useful ally to me. I have surpassed him in education, wealth, and social status; he can no longer offer me anything of use, other than sharing a few childhood memories. The time I spend with him these days would be much better spent networking with people in my current professional and social circles. A funny thing I notice is that exactly the same behavior of his seemed really cool while we were in high school, when he was a popular kid, and I was just an unpopular nerd who by sheer luck became his friend.")

The problem is, now that you [the character] see the true meaning of your brain's [player's] moves, you become complicit if you decide to follow through. It still feels like the desirable thing to do; you just no longer have the privilege of denying its strategic value. So you do it anyway, but now you feel dirty. (Or you don't do it, but now you feel like a sucker, because you are aware that most people in your situation probably would have done it, and would have benefited from doing so.) When you follow your brain's path and reach success, you know exactly what to attribute it to, and it probably doesn't make you proud. It is tempting to simply pretend that some things didn't happen for the reasons they did.

Comment by viliam on The Forces of Blandness and the Disagreeable Majority · 2019-04-28T21:39:02.242Z · score: 10 (6 votes) · LW · GW

I wonder how much of this is a consequence of the fact that in the offline world, rich people usually associate with rich people, and poor people associate with poor people (and when a poor person associates with a rich person, e.g. in the role of a servant, the poor person must behave in a way the rich person finds proper)... but in the online world, we all use the same Facebook, Twitter, Reddit, etc.

So now rich people have the culture shock of meeting the unwashed masses, who don't give a fuck about their sensibilities, and will even laugh in their faces, protected by (perceived) online anonymity.

It should be possible to create separate gardens for the elites. Like, make a clone of a famous website, but require e.g. a $1000 yearly membership fee, and you get rid of the plebs. There already are projects like that. But as far as I know, they fail. On the internet people provide value to each other, so a website for the 0.1% would have far fewer interesting stories, fewer cat videos, etc. It would be less offensive, but mostly because it would be dead.

It is probably also hard to find the exact line; I suppose the elites would prefer to avoid dealing with people too far below them, but would welcome the presence of people slightly below them -- they are not that different culturally, and because of how the pyramid is shaped, there are lots of them, which means lots of useful content.

So instead, the rich people are trying to kick the plebs out of the online places they like, using politeness and other things correlated with social class as an excuse.

Rationality Vienna Meetup June 2019

2019-04-28T21:05:15.818Z · score: 9 (2 votes)

Rationality Vienna Meetup May 2019

2019-04-28T21:01:12.804Z · score: 9 (2 votes)
Comment by viliam on 10 Good Things about Antifragile: a positivist book review · 2019-04-28T20:50:32.298Z · score: 4 (2 votes) · LW · GW

I guess we mostly agree here.

The current system of restaurants could suffer greatly if (1) some company started providing cheap delivery of high-quality food by drones, or (2) some epidemic made it dangerous for people to eat in public. Well, neither of these would wipe out the whole system, but that's just what I thought of in a few seconds; worse things could probably happen. Also, luck would play a great role: e.g. if we first had food delivery by drones, and a few months later the epidemic, with the proper timing the combined impact could be much greater than either of them individually. A machine that could automatically cook an (almost) arbitrary recipe at home (plus convenient delivery of the raw materials) -- at least the recipes usually found in restaurants -- could also change a lot.

Yes, having many parallel solutions that work slightly differently makes things more robust. This is a lesson I would love to see implemented in the school system: have hundreds of different types of schools, each providing education in a different way.

Comment by viliam on 10 Good Things about Antifragile: a positivist book review · 2019-04-27T21:44:39.750Z · score: 4 (2 votes) · LW · GW
There are bad events which cannot in principle be predicted.

However, Taleb can already predict which systems will benefit from those events. /s

This is my general problem with Taleb: it feels like his books keep telling you repeatedly that no one can actually predict or understand something, only to suggest that Taleb has some kind of knowledge beyond knowledge that allows him to predict the unpredictable and explain the incomprehensible. Sorry, I don't buy this. If no one can predict stuff, then Taleb can't either; if Taleb can predict a thing or two about stuff, then possibly so can someone else.

Of course, the "motte" is that institutions that are inflexible, and whose success is based on too many dubious assumptions, will break when something important changes; and such changes happen once in a while.

But beyond this, I think it is more likely to be a trade-off: a bet on things remaining the same, versus a bet on things changing quickly enough that we can actually benefit from being prepared for the change. A huge empire may gradually fall apart as a result of its own complexity and bureaucracy; but in the meanwhile, it will destroy hundreds of communities that weren't large enough and coordinated enough to resist the attack of the huge army of a centralized state. Other hundreds of communities will avoid the attention of the empire and survive. It is not obvious that being a member of a randomly selected community is better than being a citizen of the centralized state. Even a reliable prophecy that one day -- at an unspecified moment between today and 500 years later -- the empire will fall apart will not make the choice easier. Or maybe one day Microsoft Windows will be completely replaced by thousands of competing flavors of Linux; I just don't believe that Bill Gates should lose sleep over that. One day Java will be the new Cobol, and all Python and Ruby developers will have a good laugh about it (that is, until Python and Ruby become new Cobols, too), but in the meanwhile, my Java skills are paying my bills. Etc.

So, one problem is that unless the changes come soon enough, your anti-fragility features are going to be just dead weight. (If they provide some benefit in the meanwhile, it means you could have designed them for the purpose of that benefit, even without worrying about anti-fragility.) Another problem is that a genuinely unpredictable bad event can wipe out your anti-fragile solution, too. (Maybe the "anti-fragile" features you designed actually make it more susceptible to the event, not less. That's what genuine unpredictability means.)

tl;dr -- robust systems are usually more desirable than fragile ones, but "anti-fragility" is a pipe dream

Comment by viliam on When is rationality useful? · 2019-04-27T20:18:45.535Z · score: 3 (2 votes) · LW · GW
I feel like people who want to do X (in the sense of the word "want" where it's an actual desire, no Elephant-in-the-brain bullshit) do X, so they don't have time to set timers to think about how to do X.

Yeah. When someone does not do X, they probably have a psychological problem, most likely involving lying to themselves. Setting up the timer won't make the problem go away. (The rebelling part of the brain will find a way to undermine the progress.) See a therapist instead, or change your peer group.

The proper moment to go meta is when you are already doing X, already achieving some outcomes, and your question is how to make the already existing process more efficient. Then, 5 minutes of thinking can make you realize e.g. that some parts of the process can be outsourced or done differently or skipped completely. Which can translate to immediate gains.

In other words, you should not go meta to skip doing your ABC, but rather to progress from ABC to D.

If instead you believe that by enough armchair thinking you can skip directly to Z, you are using "rationality" as a substitute for prayer. Also, as another excuse for why you are not moving your ass.

Comment by viliam on When is rationality useful? · 2019-04-27T20:03:58.495Z · score: 4 (2 votes) · LW · GW

I guess we are talking about two different things, both of them useful. One is excellence in a given field, where success could be described as "you got a Nobel Prize, a bunch of stuff is named after you, and kids learn your name in high school". The other is keeping all aspects of your life in good shape, where success could be described as "you lived until age 100, fit and mostly healthy, with a ton of money, surrounded by a harem of girlfriends". In other words, it can refer to being in the top 0.0001% at one thing, or in the top 1-10% at many things that matter personally.

One can be successful at both (I am thinking about Richard Feynman now), but it is also possible to excel at something while your life sucks otherwise, or to live a great life that leaves no impact on history.

My advice was specifically meant for the latter (the general goodness of personal life). I agree that achieving extraordinary results at one thing requires spending extraordinary amounts of time and attention on it. And you probably need to put emphasis on different rationality techniques; I assume that everyday life would benefit greatly from "spend 5 minutes actually thinking about it" (especially when it is a thing you habitually avoid thinking about), while scientists may benefit relatively more from recognizing "teachers' passwords" and "mysterious answers".

How much could a leading mathematician gain by being more meta, for example?

If you are leading, then what you are already doing works fine, and you don't need my advice. But in general, according to some rumors, category theory is the part of mathematics where you go more meta than usual. I am not going to pretend to have any actual knowledge in this area, though.

In physics, I believe it is sometimes fruitful (or at least it was, a few decades ago) to think about "the nature of the physical law". Like, instead of just trying to find a law that would explain the experimental results, looking at the already known laws, asking what they have in common, and using these parts as building blocks of the area you research. I am not an expert here, either.

In computer science, a simple example of going meta is "design patterns"; a more complex example would be thinking about programming languages and their desirable traits (as opposed to simply being an "X developer"), in extreme cases creating your own framework or programming language. Lisp or TeX would be among the high-status examples here, but even jQuery in its era revolutionized writing JavaScript code. You may want to be the kind of developer who looks at JavaScript and invents jQuery, or looks at book publishing and invents TeX.

Comment by viliam on When is rationality useful? · 2019-04-26T19:40:35.714Z · score: 10 (5 votes) · LW · GW
But I don't think there's a good reason to expect rationalists to do better unprompted—to have more unprompted imagination, creativity, to generate strategies—or to notice things better: their blind spots, additional dimensions in the solution space.

I wonder if it would help to build a habit around this. Something like dedicating 15 minutes every day to a rationalist ritual, which would contain tasks like "spend 5 minutes listing your current problems, 5 minutes choosing the most important one, and 5 minutes actually thinking about that problem".

Another task could be "here is a list of important topics in human life { health, wealth, relationships... }; spend 5 minutes writing a short improvement idea for each of them, choose one topic, and spend 5 minutes expanding the idea into a specific plan". Or perhaps "make a list of your strengths, now think how you could apply them to your current problems" or "make a list of your weaknesses, now think how you could fix them at least a little" or... Seven tasks for seven days of the week. Or maybe six tasks, and one day should be spent reviewing the week and planning the next one.

The idea is to have a system that has a chance to give you the prompt to actually think about something.
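As a minimal sketch, the weekly schedule above could be a small Python script that prints the day's prompt (the last three prompts are my own placeholder examples, since only some of the seven tasks are spelled out above):

```python
import datetime

# Hypothetical weekly schedule: six prompting tasks plus a review day.
PROMPTS = [
    "Spend 5 min listing your current problems, 5 min choosing the "
    "most important one, 5 min actually thinking about it.",
    "For each topic in { health, wealth, relationships }, spend 5 min "
    "writing a short improvement idea; expand one into a specific plan.",
    "Make a list of your strengths; how could you apply them to your "
    "current problems?",
    "Make a list of your weaknesses; how could you fix them at least "
    "a little?",
    "Pick one recurring annoyance; spend 5 min designing it away.",
    "Write down one thing you habitually avoid thinking about; think "
    "about it for 5 min.",
    "Review the past week and plan the next one.",
]

def todays_prompt(today=None):
    """Return the prompt for the given date (default: today)."""
    today = today or datetime.date.today()
    return PROMPTS[today.weekday()]  # Monday = 0 ... Sunday = 6

if __name__ == "__main__":
    print(todays_prompt())
```

Run from a login script or a daily cron job, even something this dumb gives you the prompt, which is the whole point.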

Comment by viliam on How to make plans? · 2019-04-23T21:07:23.267Z · score: 14 (3 votes) · LW · GW

My guess for the most common planning mistakes:

1) Not having an actual plan, only a goal. Essentially, just saying "I want to be X", and then waiting for it to somehow magically happen. As opposed to researching how people actually get from "here" to "there", what kind of tasks they do, which skills they need, and actually practicing those skills. In other words, not taking the first step, but instead waiting for the "right moment", which somehow never arrives; or if it does, it will find you unprepared.

2) Expecting the whole thing to happen in one big step, as opposed to setting up your activities and habits so that they keep drawing you in the desired direction.

For example, if you want to get fit, a typical failure is to buy an annual ticket to a gym... and then never actually go there. (Unlike the previous example, you have actually taken the first step. But then you wait for the second step to happen magically.) A more successful plan would be to simply start doing push-ups every morning; and perhaps think about how to reward yourself for doing so.

Or, if your goal is to become a writer, a typical failure is to start writing your big novel... only to end up a few years later with hundreds of pages of horribly written text, which obviously doesn't have a future, but the sunk costs are breaking your heart. (Now the problem is that you have skipped a few necessary steps.) A more successful plan would involve reading other people's texts and writing exercises, at specified time every week. (Similarly for computer programming.)

Comment by viliam on On the Nature of Programming Languages · 2019-04-22T12:40:28.360Z · score: 6 (4 votes) · LW · GW

I never designed an actual programming language, but I imagine these would be some of the things to consider when doing so:

1. How much functionality do I want to (a) hardcode in the programming language itself, (b) provide as a "standard library", or (c) leave for the programmer to implement?

If the programming language provides something, some users will be happy that they can use it immediately, and other users will be unhappy because they would prefer to do it differently. If I wait until the "free market" delivers a good solution, there is a chance that someone much smarter than me will develop something better than I ever could, and it won't even cost me a minute of my time. There is also a chance that this doesn't happen (why would the supergenius decide to use my new language?) and users will keep complaining about my language missing important functionality. Also, there is a risk that the market will provide a dozen different solutions in parallel, each great at some aspect and frustrating at another.

Sometimes having more options is better. Sometimes it means you spend 5 years learning framework X, which then goes out of fashion, and you have to learn framework Y, which is not even significantly better, only different.

It seems like a good solution would be to provide the language, and the set of officially recommended libraries, so that users have a solution ready, but they are free to invent a better alternative. However, some things are difficult to do this way. For example, the type system: either your core libraries have one, or they don't.

2. Who is the target audience: noobs or hackers?

Before giving a high-status answer, please consider that there are several orders of magnitude more noobs than hackers; and that most companies prefer to hire noobs (or perhaps someone in the middle) because they are cheaper and easier to replace. Therefore, a noob-oriented language may become popular among developers, used in jobs, taught at universities, and develop an ecosystem of thousands of libraries and frameworks... while a hacker-oriented language may be the preferred toy or an object of worship of a few dozen people, but will be generally unknown, and as a consequence it will be almost impossible to find a library you need, or get an answer on Stack Exchange.

Hackers prefer elegance and abstraction; programming languages that feel like mathematics. Noobs prefer whatever their simple minds perceive as "simple", which is usually some horrible irregular hack; tons of syntactic sugar for completely trivial things (the only things the noob cares about), optional syntax that introduces ambiguity into parsing but hey it saves you a keystroke now and then (mostly-optional semicolons, end of line as an end of statement except when not), etc.

Hacker-oriented languages do not prevent you from shooting yourself in the foot, because they assume that you either are not going to, or that you are doing it for a good reason such as an improvised foot surgery. Noob-oriented languages often come with lots of training wheels (such as declaring your classes and variables "private", because just asking your colleagues nicely to avoid using undocumented features would have zero effect), and then sometimes with power tools designed to remove those training wheels (like when you find out that there actually may be a legitimate reason to access the "private" variables, e.g. for the purpose of externalization).
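As a concrete illustration of a training wheel and its matching power tool, consider Python's name mangling (the `Config` class here is just a hypothetical example): the "private" attribute is hidden from naive outside access, but still reachable by, say, serialization code that legitimately needs every field.

```python
class Config:
    def __init__(self):
        self.__secret = 42          # name-mangled to _Config__secret

cfg = Config()

# The training wheel: the obvious access fails.
try:
    cfg.__secret
except AttributeError:
    pass  # the "private" attribute is hidden from naive outside code

# The power tool: the attribute is still reachable under its mangled
# name, e.g. by externalization code that must see every field.
assert cfg._Config__secret == 42
assert vars(cfg) == {"_Config__secret": 42}
```

Note that Python only nudges rather than forbids; languages with stricter `private` typically provide the escape hatch through reflection instead.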

Unfortunately, this distinction cannot be communicated openly, because when you say "this is only meant for hackers to use", every other noob will raise their hands and say "yep, that means me". You won't have companies admit that their business model is to hire cheap and replaceable noobs, because most of their energy will be wasted through mismanagement and lack of analysis anyway. But when designing a language, you need to consider all the usual horrible things the average developer is going to do with it... and either add a training wheel, or decide that you don't care.

3. It may depend on the type of project. But I fear that in 9 out of 10 cases where someone uses this argument, it is actually a matter of premature optimization.

Comment by viliam on Slack Club · 2019-04-19T22:19:03.339Z · score: 4 (2 votes) · LW · GW

I think I get what you mean.

Maybe this is somehow related to the "openness to experience" (and/or autism). If you are willing to interact with weird people, you can learn many interesting things most people will never hear about. But you are also more likely to get hurt in a weird way, which is probably the reason most people stay away from weird people.

And as a consequence, you develop some defenses, such as allowing interaction only to some specific degree, and no further. Instead of filtering for safe people, you filter for safe circumstances. Which protects you, but also costs you possible gains, because in reality, some people are more trustworthy than others, and trustworthiness correlates negatively with some types of weirdness.

Like, instead of "I would probably be okay inviting X and Y to my home, but I have a bad feeling about inviting Z to my home", you are likely to have a rule "meeting people in a cafeteria is okay, inviting them home is taboo". Similarly, "explaining concepts to someone is okay, investing money together is not".

So on one hand you are willing to tell a complete stranger in a cafeteria the story of your religious deconversion and your opinion on Boltzmann brains (which would be shocking to average people); but you will probably never spend a vacation together with the people who are closest to you in intellect and values (which average people do all the time).

Comment by viliam on Slack Club · 2019-04-17T21:52:19.866Z · score: 9 (5 votes) · LW · GW

Seems to me that we have members at both extremes. Some of them drop all caution the moment someone else calls themselves a rationalist. Some of them freak out when someone suggests that rationalists should do something together, because that already feels too cultish to them.

My personal experience is mostly with the Vienna community, which may be unusual, because I haven't seen either extreme there. (Maybe I just didn't pay enough attention.) I learn about the extremes on the internet.

I wonder what the distribution would be in the Bay Area. Specifically, on one axis I would like to see people divided from "extremely trusting" to "extremely mistrusting", and on the other axis, how deeply those people are involved with the rationalist community. That is, whether the extreme people are in the center of the community, or somewhere on the fringe.

Comment by viliam on Slack Club · 2019-04-16T22:18:21.609Z · score: 41 (12 votes) · LW · GW
My suspicion is that people see that Eliezer gained a lot of prestige via his writing ... and I suspect people make the (reasonable) assumption that if they do something similar maybe they will gain prestige from their writing targeted to other rationalists.

I'd like to emphasize the idea "people try to copy Eliezer", separately from the "naming new concepts" part.

It was my experience from Mensa that highly intelligent people are often too busy participating in pissing contests, instead of actually winning at life by engaging in lower-status behaviors such as cooperation or hard work. And, Gods forgive me, I believed we (the rationalist community) were better than that. But perhaps we are just doing it in a less obvious way.

Trying to "copy Eliezer" is a waste of resources. We already have Eliezer. His online articles can be read by any number of people; at least this aspect of Eliezer scales easily. So if you are tempted to copy him anyway, you should consider the hypothesis that what you are actually trying to copy is his local status. You have found a community where "being Eliezer" is high-status, and you are unconsciously pushed towards increasing your status. (The only thing you cannot copy is his position as a founder. To achieve this, you would have to rebrand the movement, and position yourself in the new center. Welcome, post-rationalists, et al.)

Instead, the right thing to do is:

  • cooperate with Eliezer, especially if your skills complement his. (The question is how good Eliezer himself is at this kind of cooperation. I am on the opposite side of the planet, so I have no idea.) Simply said, anything Eliezer needs to get done, but doesn't have a comparative advantage at, if you do it for him, you free his hands and head to do the things he actually excels at. Yes, this can mean doing low-status things. Again, the question is whether you are optimizing for your status, or for something else.
  • try alternative approaches, where the rationalist community seems to have blind spots. Such as Dragon Army, which really challenged the local crab mentality. My great wish is to see other people build their own experiments on top of this one: to read Duncan's retrospective, to make their own idea of "we want to copy this, we don't want to copy that, and we want to introduce these new ideas", and then go ahead and actually do it. And post their own retrospective, etc. So that finally we may find a working model of a rationalist community that actually wins at life, as a community. (And of course, anyone who tries this has to expect strong negative reactions.)

I strongly suspect that the internet itself (the fact that rationalists often coordinate as an online community) is a negative pressure. The internet is inherently biased in favor of insight porn. Insights get "likes" and "shares", verbal arguments receive fast rewards. Actions in the real world usually take a lot of time, and thus don't make a good online conversation. (Imagine that every few months you acquire one boring habit that makes you more productive, and as a cumulative result of ten such years you achieve your dreams. Impressive, isn't it? Now imagine a blog that every few months publishes a short article about the new boring habit. Such a blog would be a complete failure.) I would expect rationalists living close to each other, and thus mostly interacting offline, to be much more successful.

Comment by viliam on Agency and Sphexishness: A Second Glance · 2019-04-16T20:54:57.326Z · score: 4 (2 votes) · LW · GW

Perhaps there is an optimal balance between habits and deliberation.

Too much on the side of habits, and you just keep doing the same behavior over and over again. Not necessarily a bad thing; sometimes you get lucky and the strategy you started with is actually a good one, and can bring you success in life. But you need the luck.

Too much on the side of deliberation, and your clever ideas get undermined by lack of "automated operations" that would keep you moving forward. The result is procrastination; well known among the readers of this website.

And the optimal balance probably depends on your current situation in life. After you achieve some success, you have more choices, and now deliberation probably becomes more useful. But again, there is such a thing as too much meta-deliberation; obsessing over "exactly how much time should I spend thinking and how much time should I spend working" generates neither useful work nor useful directions for work.

I guess, the more meta, the less time you should give it, unless you already have evidence that the previous level of meta was useful to you. (When you notice that spending some time thinking increases the productivity of the time when you are working, that is the right moment to think about how much time you actually want to spend planning.) Also, meta decisions take time to bear fruit at the object level, so when you make plans, you should spend the following days executing the plans instead of adjusting them; otherwise you decide without feedback.

Comment by viliam on Why is multi worlds not a good explanation for abiogenesis · 2019-04-13T15:40:14.146Z · score: 7 (4 votes) · LW · GW
nearly anything can be a consequence of infinitely many worlds

This feels like complaining that if you flip a coin million times, all outcomes are possible.

Comment by viliam on Why is multi worlds not a good explanation for abiogenesis · 2019-04-12T21:57:21.521Z · score: 22 (11 votes) · LW · GW

In many worlds, everything happens, but not everything happens with equal "probability". Less miraculous paths towards life are more likely than more miraculous paths towards life. Thus, even if life sees itself with probability 100%, it most likely sees itself having evolved in the least miraculous way.

So, in the end, we are in the same situation as we were before considering many worlds: looking for the most likely way life could have evolved, because that is most likely our history.

(In other words, many worlds do introduce miracles, but they still favor the solutions that didn't use them.)

Comment by viliam on Is reality warping theoretically possible ? · 2019-04-11T17:23:41.169Z · score: 3 (2 votes) · LW · GW

Logic? Almost certainly no. I have no idea what kind of activity could even in theory lead to a change in logic.

Physics? Depends on what you mean by "laws". I don't really understand these things, but I think there is a hypothesis that some physical constants were established near the beginning of the universe. So perhaps if we could create similar conditions again, and somehow make the constants different...

But it doesn't seem technically possible, because we live inside the universe, and we would have to collect a lot of its energy together again. It's not like we can find, inside the universe, a source of energy as big as the universe itself was at the beginning, when all that energy was concentrated in a small place.

Now a different question is whether we could discover new laws of physics. Then perhaps some of these new laws could help us create unimaginable amounts of energy, and maybe even create a new universe, with different laws. I think it is quite likely that we already know too much, and the new discoveries we can make would not give us that kind of magical power.

Comment by viliam on What are the advantages and disadvantages of knowing your own IQ? · 2019-04-07T23:23:36.165Z · score: 6 (3 votes) · LW · GW

If an IQ test told me that I have an IQ of 90... and if I had a good reason to believe in the test's reliability... I would be extremely confused.

An IQ test giving me a result of 140 or higher really wouldn't do anything useful, but that's because I already know. (Telling you information you already know is useless. That doesn't mean the information was useless in the first place; just that another repetition of it is.)

For me, knowing of my high IQ provided a partial explanation of things I perceived in my life. (It was not the full story, of course; one needs to also take into account high neuroticism, and probably some undiagnosed autism. But again, knowing about these concepts further improved my self-understanding.) Such as "why are most people so uninterested in things that I find fascinating?" or "why is learning at school so easy for me, except for subjects that are mostly memorization?". Or "where could I find people similar to me?".

There were alternative explanations, such as "the topics I am interested in are inherently weird" or "learning is easy for me because I spend a lot of time reading stuff", which seemed to almost fit, but not completely. (For example, "because I read stuff" is yet another thing that needs to be explained: why do I like reading about things when most kids don't?)

Comment by viliam on How would a better replacement for tissues look like? · 2019-04-07T23:00:08.354Z · score: 4 (2 votes) · LW · GW

I can only give you a generalization from one example. I wouldn't buy that, because when my kids are sick, they stay at home, or only go out shortly; so this doesn't solve a problem I perceive myself as having.

Also, maybe a better solution already exists, I just didn't do proper research.

I actually imagine that the hand-powered cleaners could be improved, to make them useful. Either increase the part you compress, or add one-way valves so that you could quickly press it repeatedly. Though at some point they would become inconveniently big. But I assume the cordless vacuum is also kinda big.

So it's a question of how big and inconvenient a device you are willing to buy to solve a problem that happens rarely (that you need to take the sick kids outside for a long walk).

Comment by viliam on How would a better replacement for tissues look like? · 2019-04-06T20:33:55.040Z · score: 2 (1 votes) · LW · GW

When kids are sick, I usually don't spend a long time with them outside, so this is not a problem for me.

It's usually the adults who often need to be mobile even when sick. Personally, I am okay with the tissues.

Comment by viliam on Rules and Skills · 2019-04-05T21:38:36.817Z · score: 4 (3 votes) · LW · GW

The pictures are so confusing! Without them, the article would be half as long, and easier to read.

It took me some time to make a guess about what you were trying to say. Here is my guess:

We get skills by doing things, and getting natural feedback on them. The learning is more precise if we learn sub-skills separately; this is how good coaches do it. Well-mastered skills become unconscious. This kind of knowledge is often difficult to describe verbally. Wittgenstein is awkward. The fact that we have skills we can't verbally explain is an argument for the existence of reality (I guess if everything were "maps" and "social consensus", we would be able to explain everything verbally). This is a historically important philosophical insight.

Somehow I am neither impressed, nor do I feel that reading and analyzing this article was a meaningful way of spending time. (I tried to be nice and give you feedback beyond "philosophy is unwelcome at LW". My alternative hypothesis is "texts that are difficult to read, and contain little information, which itself is kinda dubious, are unwelcome at LW".)

My objections:

  • the fact that "I can't explain something verbally" is a fact about me, not about the thing being indescribable in principle; maybe it actually is maps and social rules all the way down, and one of the rules is that you are forbidden to describe some parts verbally. (No, this is not my actual opinion, just a counter-argument.)
  • Wittgenstein's argument about infinite regress of rules for applying rules, and the actual mechanism how humans learn tacit skills, may seem similar but happen on different levels. (It's like using unpredictability of quantum physics to explain why you can't predict a coin flip.)
Comment by viliam on What is being? · 2019-04-05T21:03:01.126Z · score: 2 (1 votes) · LW · GW

This.

If you want to tell me something, please translate it to simple language. Assuming that you are an expert on the topic, you are a hundred times more qualified to do the translation than I am. And without the translation (either from you, or from trying to make my own), all I would do is memorize the phrases without understanding their meaning. Which would be a bad thing.

Comment by viliam on How would a better replacement for tissues look like? · 2019-04-05T20:45:35.170Z · score: 4 (2 votes) · LW · GW

It's the same with babies. We bought some; we threw them away. (The cleaners, not the babies.)

The vacuum ones are loud and will scare the baby the first time, but they work like magic. Also, they are surprisingly easy to clean. But of course, impractical to take outside.

Comment by viliam on How do people become ambitious? · 2019-04-05T11:46:07.923Z · score: 4 (3 votes) · LW · GW

Looking at the (unfinished) answer, it seems to me like most of the things listed there are useful for getting from the "emotionally committed to a big goal, but no specific plans and completed steps yet" stage to the "actually achieved something awesome" stage.

I mean, conscientiousness feels like an obvious answer to actually doing things; IQ is useful for choosing the right way; extraversion implies networking and cooperation; parents' status implies expectations for yourself; and neuroticism can slow you down or stop you. So I am curious whether these all will be found true.

(However, just in case they might turn out to be false, let this be the record that I have predicted that, too: obviously, extraverted people will be more easily distracted from their goals, and conscientious people may be too busy doing what they were told instead of dreaming about achieving more. On the other hand, neurotics will never stop because they will never feel secure enough with what they have already achieved. Too high an IQ makes one incompatible with the rest of society, thus less likely to find cooperators, and more likely to achieve things the society doesn't care about. High-status people are likely to provide a better environment for their children, who in turn feel less pressure to change things.)

Anyway, I'd like to know more about how to get from "living a life of meh" to "being emotionally committed to a big goal". I wonder if there will be a research-supported answer to this. (Maybe the relevant things are harder to quantify?) I have a few pet theories, but I could also argue against any of them. (For example, having experienced different levels of wealth or social status in the past could make one feel that these things are changeable. Or it could make them feel that these things are beyond their control. Fictional evidence, such as science fiction or Kiyosaki might ignite one's ambition... or daydreaming.)

Comment by viliam on How would a better replacement for tissues look like? · 2019-04-05T10:48:31.433Z · score: 4 (2 votes) · LW · GW

For babies, there are baby nose vacuum cleaners (aspirators). They don't have the problem you describe. But they are impractical to take with you. The hand-powered ones don't really work well (speaking from personal experience, as a parent), and the vacuum-cleaner-powered ones would require the vacuum cleaner.

A completely different approach would be having many public places where one can wash their hands. Perhaps with free disposable tissues. (Possible counter-argument: ecological footprint.)

Comment by viliam on What are the advantages and disadvantages of knowing your own IQ? · 2019-04-03T21:58:54.820Z · score: 9 (7 votes) · LW · GW

I agree with both Dagon and TheWakalix. Generally speaking, more knowledge is better; with the exception of misleading incomplete knowledge, which can be overcome by more and better knowledge. And "knowing thyself" is especially important. Therefore I would definitely want to know my IQ, and also know exactly what it means and what it doesn't mean.

Comment by viliam on Degrees of Freedom · 2019-04-03T21:43:54.146Z · score: 23 (8 votes) · LW · GW

Yes. Even if what I actually want is "freedom to do the optimal thing", it is strategically better to fight for "freedom to do the arbitrary thing". The latter allows me to do the former. But if we only have the freedom to do the optimal thing, and the people with power disagree with me about what is optimal, I get neither.

Comment by viliam on Announcing the Center for Applied Postrationality · 2019-04-02T21:27:27.567Z · score: 15 (9 votes) · LW · GW

Note: There are Kegan levels beyond 6, but they cannot be described by human words. You will know you are there when you get there, except there will be no "you" anymore.

If you don't understand what this means, that is a statement about you, not about applied postrationality.

Comment by viliam on I found a wild explanation for two big anomalies in metaphysics then became very doubtful of it · 2019-04-02T21:17:50.269Z · score: 6 (2 votes) · LW · GW
Say there's a superintelligent line of cells in a Conway's Game of Life system (or, a line of cells whose state we can control). A small portion of the grid is configured in an unknown, random state. Can a physics gen support a way of containing the entropic part of the system that works most of the time?

I am not an expert, but I believe the fundamental difference between the physics of our universe and the rules of Game of Life is that laws in our universe are time-reversible. That makes the concept of "phase space" meaningful, and that is... somehow... related to the increasing entropy.

Game of Life is irreversible; you can go from two different configurations to the same next configuration; and there are configurations that do not have any possible previous configuration.
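A tiny sketch makes the irreversibility concrete: under the standard rules, a lone live cell and an empty grid both step to the empty grid, so the step function cannot be inverted.

```python
from collections import Counter

def step(live):
    """One Game of Life step; `live` is a set of (x, y) live cells."""
    # Count, for every cell, how many live neighbours it has.
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Alive next step: exactly 3 neighbours, or 2 if already alive.
    return {c for c, n in neighbour_counts.items()
            if n == 3 or (n == 2 and c in live)}

# Two different configurations with the same successor:
assert step({(0, 0)}) == set()   # a lone cell dies of underpopulation
assert step(set()) == set()      # an empty grid stays empty
```

Since `step` maps two distinct states to one, no rule can recover the previous configuration from the current one, which is exactly the contrast with time-reversible physical laws.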

Comment by viliam on The Case for The EA Hotel · 2019-04-01T23:33:47.766Z · score: 5 (2 votes) · LW · GW

By the way, if you happen to run out of space in the hotel, consider buying a village in Spain.

Okay, you will not have an English speaking country, but you can still have an English speaking village... and perhaps you just need a few Spanish-speaking people to interface with the country.

Rationality Vienna Meetup April 2019

2019-03-31T00:46:36.398Z · score: 8 (1 votes)
Comment by viliam on Open Thread March 2019 · 2019-03-27T22:49:40.667Z · score: 10 (5 votes) · LW · GW

When I see people write things like "with unconditional basic income, people would not need to work, but without work their lives would lose meaning", I wonder whether they considered the following:

  • There are meaningful things besides work, such as spending time with your friends, or with your family.
  • Work doesn't have to be "full-time or nothing". Working only for one or two days a week, or working a full week but only once in a while, would probably not provide enough money to make a living, but in some contexts it can still be useful work and provide a sense of meaning and identity.
  • Sometimes it is difficult to convert useful work to money. For example, helping poor people overcome poverty seems like a useful thing, but the poor people, almost by definition, would have a problem paying for such service. The obvious objection here is that there would be no poverty with UBI, so instead let's talk about various dysfunctions that are currently correlated with poverty.
  • I don't have an obvious example here, but imagine a kind of work, where it is very difficult to evaluate its impact: it could be very useful, or it could accomplish nothing; and if you offer to pay people for it, you will attract masses of scammers. A person with an independent income could volunteer to do it for free.

But I think the greatest problem is jumping from "most work is not needed" directly to "post-scarcity society with superhuman AI". We might spend a few decades between these two; and in between, most people would have a problem securing an income by doing economically useful work, but there will still be many useful things to do that simply wouldn't pay sufficiently.

Comment by viliam on Do you like bullet points? · 2019-03-27T21:24:29.080Z · score: 7 (3 votes) · LW · GW

Also, it's not just whether you use bullet points, but also what is in them. For example, in this article, there are sentences. You could remove the bullets, and the text would remain almost the same.

Now imagine instead reducing the article to the following bullet-structure:

Bullet points:

  • advantages
    • understanding of structure
    • easier prototyping
    • brevity
  • disadvantages
    • lack of clarity
    • lack of fluency
    • missing numbers
    • hard to read

Five minutes after having read this, would you actually remember anything? I most likely wouldn't even remember that I had read the "article".

Comment by viliam on What I've Learned From My Parents' Arranged Marriage · 2019-03-27T21:15:52.892Z · score: 10 (6 votes) · LW · GW

I find it funny that without data, I could easily argue either way.

  • Arranged marriages cannot work, because other people including your family members don't understand you and your priorities perfectly. They will likely look for a person they would want to live with, not a person you would want to live with; in the best case they will look for a person who fits their idea of you, which is still not the same as you.
  • Western marriages cannot work, because we have this meme of "true love" as something completely beyond our control, so when any problem comes, instead of "I should pay more attention to my relationship" people are more likely to go "this means it is not true love; I quickly need to divorce / break up, and go seeking my real true love". You may verbally oppose this meme, but it is in most of the stories you read and movies you watched, so it likely drives your expectations anyway. Also, it takes two to tango, so even if you succeed in overcoming the cultural programming, unless your partner does the same thing, your relationship will fail anyway.
  • But there are probably also people in India who take relationships passively, something like "if our parents arranged this, it must be okay; no need for me to do the extra work".
  • Actually, feeling too much personal responsibility for your relationship may also be bad. It may mean that you enter or stay in a relationship with the wrong person despite obvious red flags, because you feel it is your job to make it work anyway.
Comment by viliam on [Method] The light side of motivation: positive feedback-loop · 2019-03-27T20:31:44.328Z · score: 2 (1 votes) · LW · GW
Persist through exhaustion; this is a sign of working hard, not failure.

I strongly suspect it is things like this that make the greatest difference.

Where "things like this" refers to... how one almost automatically translates perceptions ("I feel tired") to judgments (either "I am a failure" or "I am working hard"). Things we actually do a lot in our heads, but we either don't talk about them, or just mention them as a side note; because it somehow feels more appropriate to focus on explaining techniques used outside of our heads (pomodoros) or theories (hyperbolic discounting).

Similar example: in a debate about exercising, a friend told me something like: "when you feel exhausted towards the end, that is the feeling of becoming stronger" (meaning: those are the moments in exercise that contribute most to the later increase of strength). Now when I am exercising, feeling tired at the end makes me feel happy, and gives me the motivation to do a few extra repetitions.

On an intellectual level, either reaction could be defended; logically speaking, feeling exhausted could mean that you worked hard, but it could also mean that you took on a task that exceeds your current capabilities. Neither emotional reaction is 100% guaranteed to reflect reality. (So perhaps the "rational" reaction would be... no reaction at all.) However, people who habitually feel "good work!" are likely to be more productive than people who habitually feel "oh no, I failed again." (And people who believe they feel nothing are probably just lying to themselves.)

Comment by viliam on Open Thread March 2019 · 2019-03-27T20:16:22.825Z · score: 2 (1 votes) · LW · GW

It would probably work better when the speech is slow, so you have more time to notice which currently pronounced word corresponds to which highlighted word / word part / set of words.

Also, the subtitles would have to be a very literal translation, which I suspect is usually not the case. (At least, if I were making subtitles, I would sacrifice exactness in favor of brevity, because people need to be able to read the text in real time, and shorter is better.)

Comment by viliam on Ask LW: Have you read Yudkowsky's AI to Zombie book? · 2019-03-24T20:26:21.589Z · score: 6 (3 votes) · LW · GW

I would like to know, among active Russian rationalists, how many of them speak English fluently; and among those who have read the Sequences, how many read the original vs. the translation. (My guess would be "above 75%" for both.)

Comment by viliam on The Politics of Age (the Young vs. the Old) · 2019-03-24T19:36:03.096Z · score: 5 (2 votes) · LW · GW

No new insights, just trying to say the obvious: Yes, age 18 is arbitrary; there is no law of physics saying that changing it to 17 or 19 would mean the end of the world.

Kids live in a different world. Most of them have never had a job, never had to take serious responsibility. In the average school they are brought up to believe and obey. It seems likely their votes would partially go towards the parent or teacher who shapes their opinions, and partially towards the currently fashionable form of teenage rebellion. (And yes, most of this could also be said about people above 18, e.g. college students or state employees, or adults who take their opinions from the media. The argument is simply that no one is perfect, but for an average high-school student these pressures are stronger than for an average adult.)

The argument against "people over 60 are the same case as people under 18" is that if you experience strong injustice when you are under 18, you can do nothing about it now, but you can remember it and express your opinion later. People over 60 would not get the same chance.

With a high percentage of young males often being blamed for social unrest and wars, is the changing shape of the age pyramid going to result in even more political stability? And how would giving teenagers the vote affect that?

There are young people, middle-aged people, and old people. A change of "fewer votes for the young, more for the middle-aged" could perhaps bring stability, but it doesn't stop there. It would soon become "fewer votes for the young and middle-aged, more votes for the old", and then I would expect a politics of high taxes (we need the money for the old people who don't have savings; why not take it from those who have less political power) and neglecting long-term development in favor of short-term fixes.

Possibly a backlash afterwards, if the middle-aged people realize they do all the work and have almost no power. In the extreme, we either go the "no vote above 60" way; or, if the old people coordinate first and make that impossible, we get a preference for less democratic forms of government.

We should also expect more of the usual: governments trying to change the demographic curve by importing young people from developing countries, thus adding racial and cultural aspects to the already existing intergenerational conflict.

Progress in medicine (or perhaps a change in lifestyle) could possibly change a lot; it matters whether "old person" means a lonely and sick person, or a relatively healthy one with hobbies and social connections.

Comment by viliam on Ask LW: Have you read Yudkowsky's AI to Zombie book? · 2019-03-23T21:49:19.746Z · score: 4 (2 votes) · LW · GW

Thank you! Found the book in a minute, looked at the first few pages, and indeed they were a pleasure to read.

Comment by viliam on Ask LW: Have you read Yudkowsky's AI to Zombie book? · 2019-03-23T21:38:14.647Z · score: 17 (8 votes) · LW · GW

Unfortunately, my translation didn't have any visible impact. My friends warned me that "it will be useless, because the kind of people who would be serious about rationality already speak English and read English texts online". I didn't listen to them, because I thought that even if this is true for most people, there are exceptions (such as people who are bad at languages, or very young people) that make this work meaningful. But now... I have to admit they were probably right. As far as I know, there are about five people in all of Slovakia interested in rationality, and they have already read the Sequences in English.

The translation is freely downloadable from my website, and I don't believe there is a market for selling it. I didn't do it for money (I already have a nice income as a software developer), but in the hope of raising the local sanity waterline.

Google Translate cannot translate sentences into Slovak well. But I used it for translating individual words -- much faster than looking them up in a paper dictionary. I think I am pretty good at (passive) English, and translating is my hobby. In the past, I have localized a few open-source games and translated a few interesting articles. With LW, I also started by translating a few articles... and then at some moment I realized I already had about 20% of the book translated, so suddenly translating the entire thing felt doable. It took me more than a year. I am usually very low on conscientiousness, so this was one of the two most serious projects in my life (the other one being the localization of Battle for Wesnoth). Too bad this turned out to be so useless.

So, I'd say... unless you enjoy translating as an activity, don't do it. If you decide to do it anyway, translate a few articles first, put them online, share them on Facebook, and see the reaction. Unfortunately, even among people who will give you positive feedback, most of them enjoy "insight porn", not rationality per se.

Comment by viliam on Ask LW: Have you read Yudkowsky's AI to Zombie book? · 2019-03-18T22:00:27.421Z · score: 13 (4 votes) · LW · GW

I read the website before the book existed. Actually, I argued that it should be turned into a book, because books in general have higher status than websites. Then I read the book, and translated it into Slovak.

My opinion on reading the comments is... they are interesting, but the added value per minute spent is significantly lower than reading the book. (Some of the comments are awesome, but most are not, and there are a lot of them.) Thus, if you have anything useful to do, reading the comments after you have read the book is probably a waste of time. (Perhaps, if you have specific questions or objections to specific chapters, you should read only the comments on those chapters.)

Your time would probably be better spent reading high-karma articles which are not part of the book (is there a way to see the highest-karma articles? if not, look here), and... you know, going outside and actually doing things.

Comment by viliam on The Impossibility of the Intelligence Explosion · 2019-03-18T21:44:03.530Z · score: 16 (3 votes) · LW · GW
there is no intelligent algorithm that outperforms random search on most problems; this profound result is called the No Free Lunch Theorem.

I am not familiar with the context of this theorem, but I believe that this is a grave misinterpretation. From a brief reading, my impression is that the theorem says something like "you cannot find useful patterns in random data; and if you take all possible data, most of them are (Kolmogorov) random".

This is true, but it is relevant only for situations where any data is equally likely. Our physical universe seems not to be that kind of place. (It is true that in a completely randomly behaving universe, intelligence would not be possible, because any action or belief would have the same chance of being right or wrong.)

When I think about superintelligent machines, I imagine ones that would outperform humans in this universe. The fact that they would be equally helpless in a universe of pure randomness doesn't seem relevant to me. Saying that an AI is not "truly intelligent" unless it can handle the impossible task of skillfully navigating completely random universes... that's trying to win a debate by using silly criteria.
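To make the point above concrete, here is a toy sketch (my own illustration, not from the linked article; all names and parameters are made up). On a smooth, "physics-like" objective, a simple hill climber reliably finds the peak, while on an objective that is just a table of independent noise — the "completely random universe" — its local moves give it no usable pattern to exploit:

```python
import random

N = 20          # search space: the integers 0..N-1
BUDGET = 20     # number of evaluations each searcher is allowed

def structured(x):
    """A 'physics-like' objective: smooth, with a single peak at x = 13."""
    return -(x - 13) ** 2

def random_search(f, budget, seed=None):
    """Evaluate `budget` uniformly random points; return the best value seen."""
    rng = random.Random(seed)
    return max(f(rng.randrange(N)) for _ in range(budget))

def hill_climb(f, budget, seed=None):
    """Greedy local search: repeatedly move to the better neighbor."""
    rng = random.Random(seed)
    x = rng.randrange(N)
    for _ in range(budget):
        step = max((max(x - 1, 0), min(x + 1, N - 1)), key=f)
        if f(step) > f(x):
            x = step
    return f(x)

def noise_table(seed):
    """A 'random universe': the value at each point is independent noise."""
    rng = random.Random(seed)
    table = [rng.random() for _ in range(N)]
    return lambda x: table[x]

# On the structured objective, the hill climber always reaches the peak
# (value 0), because BUDGET covers the maximum distance to it:
assert hill_climb(structured, BUDGET, seed=1) == 0

# On pure noise, local structure carries no information, so the hill
# climber's advantage disappears (it gets stuck on arbitrary local optima):
rs = sum(random_search(noise_table(s), BUDGET, seed=s) for s in range(500)) / 500
hc = sum(hill_climb(noise_table(s), BUDGET, seed=s) for s in range(500)) / 500
print(f"random search avg: {rs:.3f}, hill climb avg: {hc:.3f}")
```

This is only a sketch of the intuition, not the theorem itself: the actual No Free Lunch result averages over all possible objective functions and compares non-revisiting algorithms. The sketch just shows why "no algorithm beats random search on average over all functions" says little about a universe whose objectives are mostly smooth.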

Does anti-malaria charity destroy the local anti-malaria industry?

2019-01-05T19:04:57.601Z · score: 64 (17 votes)

Rationality Bratislava Meetup

2018-09-16T20:31:42.409Z · score: 18 (5 votes)

Rationality Vienna Meetup, April 2018

2018-04-12T19:41:40.923Z · score: 10 (2 votes)

Rationality Vienna Meetup, March 2018

2018-03-12T21:10:44.228Z · score: 10 (2 votes)

Welcome to Rationality Vienna

2018-03-12T21:07:07.921Z · score: 4 (1 votes)

Feedback on LW 2.0

2017-10-01T15:18:09.682Z · score: 11 (11 votes)

Bring up Genius

2017-06-08T17:44:03.696Z · score: 54 (49 votes)

How to not earn a delta (Change My View)

2017-02-14T10:04:30.853Z · score: 10 (11 votes)

Group Rationality Diary, February 2017

2017-02-01T12:11:44.212Z · score: 1 (3 votes)

How to talk rationally about cults

2017-01-08T20:12:51.340Z · score: 5 (10 votes)

Meetup : Rationality Meetup Vienna

2016-09-11T20:57:16.910Z · score: 0 (1 votes)

Meetup : Rationality Meetup Vienna

2016-08-16T20:21:10.911Z · score: 0 (1 votes)

Two forms of procrastination

2016-07-16T20:30:55.911Z · score: 10 (11 votes)

Welcome to Less Wrong! (9th thread, May 2016)

2016-05-17T08:26:07.420Z · score: 4 (5 votes)

Positivity Thread :)

2016-04-08T21:34:03.535Z · score: 26 (28 votes)

Require contributions in advance

2016-02-08T12:55:58.720Z · score: 61 (61 votes)

Marketing Rationality

2015-11-18T13:43:02.802Z · score: 28 (31 votes)

Manhood of Humanity

2015-08-24T18:31:22.099Z · score: 10 (13 votes)

Time-Binding

2015-08-14T17:38:03.686Z · score: 17 (18 votes)

Bragging Thread July 2015

2015-07-13T22:01:03.320Z · score: 4 (5 votes)

Group Bragging Thread (May 2015)

2015-05-29T22:36:27.000Z · score: 7 (8 votes)

Meetup : Bratislava Meetup

2015-05-21T19:21:00.320Z · score: 1 (2 votes)