MikkW's Shortform

post by MikkW (mikkel-wilson) · 2020-08-10T20:39:29.510Z · LW · GW · 256 comments


Comments sorted by top scores.

comment by MikkW (mikkel-wilson) · 2020-09-30T18:26:44.962Z · LW(p) · GW(p)

I was going for a walk yesterday night, and when I looked up at the sky, I saw something I had never seen before: a bright orange dot, like a star, but I had never seen a star that bright and so orange before. "No... that can't be"- but it was: I was looking at Mars, that other world I had heard so much about, thought so much about.

I never realized until yesterday that I had never before seen Mars with my own two eyes - one of the closest worlds that humans could, with minimal difficulty, one day make into a new home.

It struck me then, in a way I had never felt before, just how far away Mars is. I knew it in an abstract sense, but seeing this little dot in the distance - a dot I knew to be an object larger even than the Moon, yet seeming so small in comparison - made me realize, in my gut, just how far away this other world is, just as when I stand on top of a mountain and see small buildings on the ground far below, I realize that those small buildings are actually skyscrapers far away.

And yet, as far away as Mars was that night, it was so bright, so apparent, precisely because it was closer to us than it usually is - most of the time, this world is even farther from us than it was then.

Replies from: mikkel-wilson
comment by MikkW (mikkel-wilson) · 2021-02-15T05:58:51.658Z · LW(p) · GW(p)

Correction: here I say that I had never seen Mars before, but that's almost certainly not correct. Mars is usually a tiny dot, nearly indistinguishable from the other stars in the sky (it is slightly more reddish / orange), so what I was seeing was a fairly unusual sight

comment by MikkW (mikkel-wilson) · 2021-03-24T00:48:46.815Z · LW(p) · GW(p)

In short, I am selling my attention by selling the right to put cards in my Anki deck, starting at the low price of $1 per card.

I will create and add a card (any card you desire, with the caveats that I can veto any card that seems problematic, and that each card is capped to roughly the amount of information my usual cards contain) to my Anki deck for $1. After the first ten cards (across all people), the price will rise to $2 per card, and will double every 5 cards from then on. I commit to studying the added card(s) like I would any other card in my decks (I will give each a starting interval of 10 days, shorter than the 20-day interval I usually use, unless I judge that an even shorter interval makes sense; I study Anki every day, have been clearing my deck at least once every 10 days for the past 5 months, and intend to continue to do so). Since I will be creating the cards myself (unless you know of a high-quality deck that contains cards with the information you desire), an idea for a card is enough even if you don't know how to execute it.
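For concreteness, here is a minimal sketch of the price schedule as I read it (assuming the doubling applies per block of 5 cards after the first ten):

```python
def price_of_card(n: int) -> int:
    """Price in USD of the n-th card claimed (1-indexed), per the schedule above."""
    if n <= 10:
        return 1
    # After the first 10 cards, the price starts at $2 and doubles every 5 cards.
    return 2 ** (1 + (n - 11) // 5)

# With 17 cards already claimed, the next (18th) card would cost:
print(price_of_card(18))  # -> 4, matching the "$4 per card" price quoted below
```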

Both question-and-answer and straight text are acceptable forms for cards. Acceptable forms of payment include cash, Venmo, BTC, ETH, Good Dollar, and Doge, at the corresponding exchange rates.

This offer will expire in 60 days. If you are interested in taking me up on this afterwards, feel free to message me.

There is now a top-level post discussing this offer [LW · GW]

Price as of 00:53 UTC 24 Mar '21: $4 per card

17 cards claimed so far

Replies from: MathieuRoy, mikkel-wilson, Chris_Leong, MathieuRoy, wunan
comment by Mati_Roy (MathieuRoy) · 2021-03-24T02:47:02.296Z · LW(p) · GW(p)

That's genius! Can I (or you) create a LessWrong thread inviting others to do the same?

Replies from: mikkel-wilson, mikkel-wilson
comment by MikkW (mikkel-wilson) · 2021-03-24T02:50:51.766Z · LW(p) · GW(p)

Thanks! I will create a top level post explaining my motivations and inviting others to join.

comment by MikkW (mikkel-wilson) · 2021-03-24T04:41:21.689Z · LW(p) · GW(p)

Proof that I have added cards to my deck (the top 3 cards; the other claimed cards are currently being held in reserve. The search filter -"is:new" shows only cards that have been given a review date and interval)

comment by Chris_Leong · 2021-03-24T04:00:30.836Z · LW(p) · GW(p)

Interesting offer. If you were someone who regularly commented on decision theory discussions, I would be interested, in order to spread my ideas. But since you aren't, I'll pass.

Replies from: mikkel-wilson
comment by MikkW (mikkel-wilson) · 2021-03-24T04:25:11.781Z · LW(p) · GW(p)

When I write up the top-level post, I'll mention that you offered this for people who comment on DT discussions, unless you'd prefer I don't

Replies from: Chris_Leong
comment by Chris_Leong · 2021-03-24T05:18:21.475Z · LW(p) · GW(p)

That's fine! (And much appreciated!)

comment by Mati_Roy (MathieuRoy) · 2021-03-24T02:41:15.076Z · LW(p) · GW(p)

Can I claim cards before choosing their content?

Replies from: mikkel-wilson, MathieuRoy
comment by MikkW (mikkel-wilson) · 2021-03-24T02:48:02.870Z · LW(p) · GW(p)

Yes, that is allowed, though I reserve the right to veto any cards that I judge as problematic

comment by Mati_Roy (MathieuRoy) · 2021-03-24T02:44:24.296Z · LW(p) · GW(p)

if so, I want to claim 7 cards

Replies from: mikkel-wilson
comment by MikkW (mikkel-wilson) · 2021-03-24T02:56:58.164Z · LW(p) · GW(p)

Messaged

comment by wunan · 2021-03-24T02:11:52.488Z · LW(p) · GW(p)

I'm curious what cards people have paid to put in your deck so far. Can you share, if the buyers don't mind?

Replies from: mikkel-wilson
comment by MikkW (mikkel-wilson) · 2021-03-24T02:50:33.031Z · LW(p) · GW(p)

I currently have three cards entered, and the other seven are being held in reserve by the buyer (and have already been paid for). They are: "Jeff's Friendly Snek", "Book: The Mathematical Theory of Communication by Claude Shannon", and "Maximize Cooperative Information Transfer for {{Learning New Optimization}}", where the {{brackets}} indicate cloze deletions. These were all sponsored by jackinthenet, who described his intention as wanting to use me as a vector for propagating memes and maximizing cooperative information transfer (which prompted the third card).

comment by MikkW (mikkel-wilson) · 2020-11-19T00:16:25.974Z · LW(p) · GW(p)

Religion isn't about believing false things. Religion is about building bonds between humans, by means including (but not limited to) costly signalling. It happens that a ubiquitous form of costly signalling used by many prominent modern religions is the belief tax (insisting that the ingroup profess a particular, easily disproven belief as a reliable signal of loyalty), but this is not necessary for a religion to successfully build trust and loyalty between members. In particular, costly signalling must be negative-value for an individual (before the second-order benefits from the group dynamic), but need not be negative-value for the group, or for humanity. Indeed, the best costly sacrifices can be positive-value for the group or for humanity, while negative-value for the performing individual. (There are some who may argue that positive-value sacrifices have less signalling value than negative-value sacrifices, but I find their logic dubious, and my own observations of religion suggest positive-value sacrifice is abundant in organized religion, albeit intermixed with neutral- and negative-value sacrifice.)

The rationalist community is averse to religion because it so often goes hand in hand with belief taxes, which are counter to the rationalist ethos, and would threaten to destroy much that rationalists value. But religion is not about belief taxes. While I believe sacrifices are an important part of the functioning of religion, a religion should avoid asking its members to make sacrifices that destroy what the collective values, and instead encourage costly sacrifices that help contribute to the things we collectively value.

Replies from: Pattern
comment by Pattern · 2020-11-20T18:41:08.691Z · LW(p) · GW(p)
In particular, costly signalling must be negative-value for an individual

That's one way to do things, but I don't think it's necessary. A group which requires (for continued membership) members to exercise, for instance, imposes a cost, but arguably one that should not be (necessarily*) negative-value for the individuals.

*Exercise isn't supposed to destroy your body.

Replies from: mikkel-wilson
comment by MikkW (mikkel-wilson) · 2020-11-20T20:42:31.453Z · LW(p) · GW(p)

If it's not negative value, it's not costly signalling. Groups may very well expect members to do positive-value things, and they do - Mormons are expected to follow strict health guidelines, to the extent that Mormons can recognize other Mormons based on the health of their skin; Jews partake in the Sabbath, which has personal mental benefits. But even though these may seem to be costly sacrifices at first glance, they cannot be considered to be costly signals, since they provide positive value

Replies from: Pattern
comment by Pattern · 2020-11-24T04:15:55.125Z · LW(p) · GW(p)

If a group has standards which provide value, then while meeting them isn't a 'costly signal', it still sorts out people who aren't willing to invest effort.*

Just because your organization wants to be strong and get things done, doesn't mean it has to spread like cancer*/cocaine**.


And something that provides 'positive value' is still a cost. Living under a flat 40% income tax by one government has the same effect as living under 40 governments which each have a flat 1% income tax. You don't have to go straight to 'members of this group must smoke'. (In a different time and place, 'members of this group must not smoke' might have been regarded as an enormous cost, and worked as such!)


*bigger isn't necessarily better if you're sacrificing quality for quantity

**This might mean that strong and healthy people avoid your group.

comment by MikkW (mikkel-wilson) · 2020-10-30T05:20:58.049Z · LW(p) · GW(p)

If you know someone is rational, honest, and well-read, then you can learn a good bit from the simple fact that they disagree with you.

If you aren't sure someone is rational and honest, their disagreement tells you little.

If you know someone considers you to be rational and honest, the fact that they still disagree with you after hearing what you have to say, tells you something.

But if you don't know that they consider you to be rational and honest, their disagreement tells you nothing.

It's valuable to strive for common knowledge of your and your partners' rationality and honesty, to make the most of your disagreements.

Replies from: Dagon
comment by Dagon · 2020-11-02T21:06:15.875Z · LW(p) · GW(p)

If you know someone is rational, honest, and well-read, then you probably don't know them all that well. If someone considers you to be rational, honest, and well-read, that indicates they are not.

comment by MikkW (mikkel-wilson) · 2020-08-11T05:05:53.393Z · LW(p) · GW(p)

Does Newspeak actually decrease intellectual capacity? (No)

In his book 1984, George Orwell describes a totalitarian society that, among other initiatives to suppress the population, implements "Newspeak", a heavily simplified version of the English language, designed with the stated intent of limiting the citizens' capacity to think for themselves (thereby ensuring stability for the reigning regime).

In short, the ethos of Newspeak can be summarized as: "Minimize vocabulary to minimize range of thought and expression". There are two distinct but closely related ideas here, both implied by the book, that are worth separating.

The first (which I think is to some extent reasonable) is that by removing certain words from the language - words which serve as effective handles for pro-democracy, pro-free-speech, and pro-market concepts - the regime makes it harder to communicate and verbally think about such ideas. (I think that, absent the other techniques Orwell's Oceania uses to suppress independent thought, such subjects could still be meaningfully communicated and pondered, just less easily than with a rich vocabulary at hand.)

The second idea, which I worry is an incorrect takeaway people may get from 1984, is that by shrinking the vocabulary people are encouraged to use (absent any particular bias towards removing handles for subversive ideas), one will reduce the intellectual capacity of people using that variant of the language.

A slight tangent whose relevance will become clear: if you listen to a native Mandarin speaker, then compare the sound of their speech to that of a native Hawaiian speaker, there are many apparent differences between the two languages. Mandarin has a rich phonological inventory containing 19 consonants, 5 vowels, and, quite famously, 4 different tones (pitch patterns) applied to each syllable, for a total of roughly 5400 possible syllables, including diphthongs and longer vowel sequences. Compare this to Hawaiian, which has 8 consonants, 5 vowels, and no tones. Including diphthongs, there are about 200 possible Hawaiian syllables.

One might naïvely expect Mandarin speakers to communicate information more quickly than Hawaiian speakers, at a rate of 12.4 bits / syllable vs. 7.6 bits / syllable - but this neglects the speed at which syllables are spoken. Hawaiian speakers speak much faster than Mandarin speakers, and once this difference in cadence is accounted for, Hawaiian and Mandarin are much closer in speed of communication than their phonologies would suggest.
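(A quick way to see where those per-syllable numbers come from, treating each syllable as equally likely - an upper bound rather than the true entropy of speech:)

```python
import math

# Upper bound on information per syllable, assuming every possible syllable
# is equally likely (real speech has lower entropy than this).
for language, syllable_count in [("Mandarin", 5400), ("Hawaiian", 200)]:
    print(f"{language}: {math.log2(syllable_count):.1f} bits/syllable")
# Mandarin: 12.4 bits/syllable
# Hawaiian: 7.6 bits/syllable
```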

Back to 1984. If we cut the dictionary down to only 1/20th of its current size (while steering clear of the Thought Police and any bias in which words get removed), what should we expect to happen? One may naïvely think that, just as banning the words "democracy", "freedom", and "justice" would inhibit people's ability to think about Enlightenment values, banning most of the words should inhibit our ability to think about most of the things.

But that is not what I would expect to happen. One should expect compound words to take the place of deprecated words, speaking speeds to increase, and, to accommodate the faster cadence, tricky sequences of sounds to be elided (blurred / simplified), allowing complex ideas to ultimately be communicated at a pace that rivals standard English. As a bonus, such a language would be massively easier for non-Anglophones to learn.

If I had more time, I'd write about why I think we nonetheless find the concept of Simplified English somewhat aversive: speaking a simplified version of a language becomes an antisignal for intelligence and social status, so we come to look down on people who use simplified language, while celebrating those who flex their mental capacity by using rare vocabulary.

Since I'm tired and would rather sleep than write more, I'll end with a rhetorical question: would you rather be in a community that excels at signaling, or a community that actually gets stuff done?

Replies from: Viliam
comment by Viliam · 2020-08-17T11:19:20.605Z · LW(p) · GW(p)

Yes, the important thing is the concepts, not their technical implementation in the language.

Like, in Esperanto, you can construct "building for" + "the people who are" + "the opposite of" + "health" = hospital. And the advantage is that people who never heard that specific word can still guess its meaning quite reliably.

we nonetheless find the concept of Simplified English to be somewhat aversive

I think the main disadvantage is that it would exist in parallel, as a lower-status version of standard English. Which means that less effort would be put into "fixing bugs" or "implementing features", because for people capable of doing so, it would be more profitable to simply switch to standard English instead.

(Like those software projects that have a free Community version and a paid Professional version, and if you complain about a bug in the free version that is known for years, you are told to deal with it or buy the paid version. In a parallel universe where only the free version exists, the bug would have been fixed there.)

would you rather be in a community that excels at signaling, or a community that actually gets stuff done?

How would you get stuff done if people won't join you because you suck at signaling? :( Sometimes you need many people to join you. Sometimes you only need a few specialists, but you still need a large base group to choose from.

Replies from: mikkel-wilson
comment by MikkW (mikkel-wilson) · 2020-08-20T18:06:45.390Z · LW(p) · GW(p)

As an aside, I think it's worth pointing out that Esperanto's use of the prefix mal- to indicate the opposite of something (akin to Newspeak's un-) is problematic: two words with exactly opposite meanings will sound very similar, so in a noisy environment the meaning of a sentence can change drastically from a few lost bits of information; it also slows down communication unnecessarily.

In my notes, I once had the idea of a "phonetic inverse": according to simple, well-defined rules, each word could be transformed into an opposite word, which sounds as different as possible from the original word and has the opposite meaning. That rule was intended for an engineered language akin to Sona, so the rules would need to be reworked a bit to get something comparably good for English, but I prefer such a system to Esperanto's inversion rules.
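A purely illustrative toy sketch of what such a rule could look like (the phoneme pairing here is hypothetical and made up for this example, not the rule from my notes): each phoneme is swapped with a maximally dissimilar partner, so a word and its inverse end up sounding as different as the rule allows.

```python
# Toy "phonetic inverse": pair each phoneme with a very different-sounding one
# (stops with open vowels, sibilants with rounded vowels, etc.). This pairing is
# hypothetical, chosen only to illustrate the idea.
INVERSE = {
    "p": "a", "a": "p",
    "k": "i", "i": "k",
    "t": "o", "o": "t",
    "s": "u", "u": "s",
    "m": "e", "e": "m",
}

def phonetic_inverse(word: str) -> str:
    return "".join(INVERSE.get(ch, ch) for ch in word)

print(phonetic_inverse("sake"))  # -> "upim": every sound replaced by its distant partner
```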

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2020-10-03T15:33:30.570Z · LW(p) · GW(p)

The other problem is that "opposite" is ill-defined: it requires the other person to know which dimension you're inverting along, as well as what you consider neutral/zero on that dimension.

Replies from: mikkel-wilson
comment by MikkW (mikkel-wilson) · 2020-10-11T21:48:57.527Z · LW(p) · GW(p)

While this would be an inconvenience during the onboarding process for a new mode of communication, I actually don't think it's that big of a deal for people who are already used to the dialect (which would probably account for the majority of communication) and have a mutual understanding of what is meant by inverse(X), even when X could in principle have more than one inverse.

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2020-10-11T22:58:07.149Z · LW(p) · GW(p)

That makes the concept much less useful though. Might as well just have two different words that are unrelated. The point of having the inverse idea is to be able to guess words right?

Replies from: mikkel-wilson
comment by MikkW (mikkel-wilson) · 2020-10-12T01:02:05.149Z · LW(p) · GW(p)

I'd say the main benefit it provides is making learning easier - instead of learning "foo" means 'good' and "bar" means 'bad', one only needs to learn "foo" = good, and inverse("foo") = bad, which halves the total number of tokens needed to learn a lexicon. One still needs to learn the association between concepts and their canonical inverses, but that information is more easily compressible

comment by MikkW (mikkel-wilson) · 2021-01-20T05:01:20.430Z · LW(p) · GW(p)

"From AI to Zombies" is a terrible title... when I recommend The Sequences to people, I always feel uncomfortable telling them the name, since the name makes it sound like cookey bull****- in a way that doesn't really indicate what it's about

Replies from: elityre, Yoav Ravid, Eh_Yo_Lexa
comment by Eli Tyre (elityre) · 2021-01-22T11:20:39.374Z · LW(p) · GW(p)

I agree. 

I'm also bothered by the fact that it is leading up to AI alignment and the discussion of Zombies is in the middle!

Please change?

comment by Yoav Ravid · 2021-01-20T07:09:18.440Z · LW(p) · GW(p)

I usually just call it "from A to Z"

comment by Willa (Eh_Yo_Lexa) · 2021-01-20T05:50:03.027Z · LW(p) · GW(p)

I think "From AI to Zombies" is supposed to imply "From A to Z", "Everything Under the Sun", etc., but I don't entirely disagree with what you said. Explaining either "Rationality: From AI to Zombies" or "The Sequences" to someone always takes more effort than feels necessary.

The title also reminds me of quantum zombies or p-zombies every time I read it... are my eyes glazed over yet?

Counterpoint: "The Sequences" sounds a lot more cult-y or religious-text-y.
"whispers: I say, you over there, yes you, are you familiar with The Sequences, the ones handed down from the rightful caliph [LW · GW], Yudkowsky himself? We Rationalists and LessWrongians spend most of our time checking whether we have all actually read them, you should read them, have you read them, have you read them twice, have you read them thrice and committed all their lessons to heart?" (dear internet, this is satire. thank you, mumbles in the distance)

Suggestion: if there were a very short eli5 post or about page that a genuine 5 year old or 8th grader could read, understand, and get the sense of why The Sequences would actually be valuable to read, this would be a handy resource to share.

comment by MikkW (mikkel-wilson) · 2020-10-04T16:04:49.640Z · LW(p) · GW(p)

I'm quite scared by some of the responses I'm seeing to this year's Petrov Day. Yes, it is symbolic. Yes, it is a fun thing we do. But it's not "purely symbolic", it's not "just a game". Taking seriously things that are meant to be serious is important, even if you can't see why they're serious.

As I've said elsewhere, the truly valuable thing a rogue agent destroys by failing to live up to expectations on Petrov Day isn't just whatever has been put at stake for the day's celebrations, but the very valuable chance to build a type of trust that can only be built by playing games with non-trivial outcomes at stake.

Maybe the essence of what this celebration is intended to achieve could be communicated better in the future, but to my eyes it was fairly obvious what was going on, and I'm seeing a lot of comments by people (whose other contributions to LW I respect) who seem to have completely missed what I thought was obviously the spirit of this exercise.

comment by MikkW (mikkel-wilson) · 2020-09-26T08:10:43.500Z · LW(p) · GW(p)

I'm quite baffled by the lack of response to my recent question asking which AI-researching companies are good to invest in (as in, would have good impact, not necessarily be most profitable). It indicates that either A) most LW'ers aren't investing in stocks (which is a stupid thing not to be doing), or B) they are investing in stocks, but aren't trying to think carefully about what impact their actions have on the world and on their own future happiness (which indicates a massive failure of rationality).

Even putting this aside, the fact that nobody jumped at the chance to potentially shift a non-trivial (for certain definitions of trivial) amount of funding away from bad organizations and towards good ones (even though I'm investing primarily as a personal financial strategy) seems very worrying to me. While it is (as ChristianKl pointed out) debatable that the amount of funding I can provide as a single person will make a big difference to a big company, it's bad decision theory to model my actions as being correlated only with myself; and besides, if the funding were redirected, it probably would have gone somewhere without the enormous supply of funds Alphabet has, and could very well have made an important difference, pushing the margins away from failure and towards success.

There's a good chance I may change my mind about this in the future, but currently my response to this information is a substantial update away from believing that the LW crowd is actually any good at using rationality instrumentally.

Replies from: habryka4, John_Maxwell_IV, Viliam, ChristianKl
comment by habryka (habryka4) · 2020-09-26T08:25:39.374Z · LW(p) · GW(p)

(For what it's worth, the post made it not at all clear to me that we were talking about a nontrivial amount of funding. I read it as just you thinking a bit through your personal finance allocation. The topic of divesting and impact investing has been analyzed a bunch on LessWrong and the EA Forum, and my current position is mostly that these kinds of differences in investment don't really make much of a difference in total funding allocation, so it doesn't seem worth optimizing much, besides just optimizing for returns and then taking those returns and optimizing those fully for philanthropic impact.)

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2020-09-28T17:53:42.414Z · LW(p) · GW(p)

This seems to be the common rationalist position, but it does seem to be at odds with:

  1. The common rationalist position to vote on UDT grounds.
  2. The common rationalist position to eschew contextualizing because it ruins the commons.

I don't see much difference between voting because you want others to also vote the same way, and choosing stocks because you want others to choose stocks the same way.

I also think it's pretty orthogonal to talk about telling the truth for long term gains in culture, and only giving money to companies with your values for long term gains in culture.

Replies from: MakoYass
comment by MakoYass · 2020-11-24T23:56:14.326Z · LW(p) · GW(p)

eschew contextualizing because it ruins the commons

I don't understand. What do you mean by contextualizing?

comment by John_Maxwell (John_Maxwell_IV) · 2020-10-05T12:51:29.235Z · LW(p) · GW(p)

For what it's worth, I get frustrated by people not responding to my posts/comments on LW all the time. This post [LW · GW] was my attempt at a constructive response to that frustration. I think if LW was a bit livelier I might replace all my social media use with it. I tried to do my part to make it lively by reading and leaving comments a lot for a while, but eventually gave up.

comment by Viliam · 2020-09-26T12:34:09.102Z · LW(p) · GW(p)

either A) most LW'ers aren't investing in stocks

Does LW 2.0 still have the functionality to make polls in comments? (I don't remember seeing any recently.) This seems like the question that could be easily answered by a poll.

Replies from: jimrandomh
comment by jimrandomh · 2020-09-27T00:06:37.937Z · LW(p) · GW(p)

It doesn't; this feature didn't survive the switchover from old-LW to LW2.0.

comment by ChristianKl · 2020-09-28T16:09:54.756Z · LW(p) · GW(p)

While it is (as ChristianKl pointed out) debatable that the amount of funding I can provide as a single person will make a big difference to a big company

My point wasn't about the size of the company but about whether or not the company already has large piles of cash that it doesn't know how to invest.

There are companies that want to invest more capital than they have available and thus have room for funding, and there are companies where that isn't the case.

There's a hilarious interview with Peter Thiel and Eric Schmidt where Thiel charges Google with not spending the 50 billion dollars it has sitting in the bank that it doesn't know what to do with, and Eric Schmidt says "What you discover running these companies is that there are limits that are not cash..."

That interview happened back in 2012, but since then Alphabet's cash reserve has more than doubled despite some stock buybacks.

Companies like Tesla or Amazon seem to be willing to invest additional capital to which they have access in a way that Alphabet and Microsoft simply don't. 

A) most LW'ers aren't investing in stocks (which is a stupid thing not to be doing)

My general model would be that most LW'ers think the instrumentally rational thing to do is to invest the money in a low-fee index fund.

Replies from: mikkel-wilson
comment by MikkW (mikkel-wilson) · 2020-09-29T17:35:30.483Z · LW(p) · GW(p)

Wow, that video makes me really hate Peter Thiel (I don't necessarily disagree with any of the points he makes, but that communication style is really uncool)

Replies from: ChristianKl, Benito
comment by ChristianKl · 2020-09-30T14:38:47.352Z · LW(p) · GW(p)

In most contexts I would also dislike this communication style. In this case I feel that it was necessary to get a straight answer from Eric Schmidt, who would rather avoid the topic.

comment by Ben Pace (Benito) · 2020-09-29T18:15:33.826Z · LW(p) · GW(p)

On the contrary, I aspire to the clarity and honesty of Thiel's style. Schmidt seems somewhat unable to speak directly. Of the two of them, Thiel was able to say specifics about how the companies were doing excellently and how they were failing, and Schmidt could say neither.

Replies from: mikkel-wilson
comment by MikkW (mikkel-wilson) · 2020-10-02T03:56:00.873Z · LW(p) · GW(p)

Thank you for this reply, it motivated me to think more deeply about the nature of my reaction to Thiel's statements, and about my thoughts on the conversation between Thiel and Schmidt. I would share my thoughts here, but writing takes time and energy, and I'm not currently in a position to do so.

comment by MikkW (mikkel-wilson) · 2021-06-17T17:24:04.239Z · LW(p) · GW(p)

Asking people to "taboo [X word]" is bad form, unless you already know that the other person is sufficiently (i.e. very) steeped in LW culture to know what our specific corner of internet culture means by "taboo" [? · GW].

Without context, such a request to taboo a word sounds like you are asking the other person to never use that word, to cleanse it from their vocabulary, to go through the rest of their life with that word permanently off-limits. That's a very high, and quite rude, ask to make of someone. While that's of course not what we mean by "taboo", I have seen requests to taboo made where it's not clear that the other person knows what we mean by taboo, which means it's quite likely the receiving party interpreted the request as being much ruder than was meant.

Instead of saying "Taboo [X word]", say "Could you please say what you just said without using [X word]?" - it conveys the same request, without creating the potential to be misunderstood as making a rude and overreaching request.

Replies from: Viliam, Pattern
comment by Viliam · 2021-06-18T14:27:33.761Z · LW(p) · GW(p)

I see you tabooed "taboo".

Indeed, this is the right approach to LW lingo... only, sometimes it expands the words into long [? · GW] descriptions.

comment by Pattern · 2021-06-18T21:00:00.176Z · LW(p) · GW(p)

Step 1: Play the game taboo.

Step 2: Request something like "Can we play a mini-round of taboo with *this word* for 5 minutes?"

*[Word X]*


Alternatively, 'Could you rephrase that?'/'I looked up what _ means in the dictionary, but I'm still not getting something...'

comment by MikkW (mikkel-wilson) · 2020-08-31T00:47:42.090Z · LW(p) · GW(p)

During today's LW event, I chatted with Ruby and Raemon (separately) about the comparison between human-made photovoltaic systems (i.e. solar panels) and plant-produced chlorophyll. I mentioned that in many ways chlorophyll is inferior to solar panels - consumer-grade solar panels operate in the 10% to 20% efficiency range (i.e. for every 100 joules of light energy, 10 - 20 joules are converted into usable energy), while chlorophyll is around 9% efficient, and modern cutting-edge solar panels can reach nearly 50% efficiency. Furthermore, every fall the leaves turn red and fall to the ground, only for new leaves - that is, plant-based solar panels - to be grown again in the spring. One sees green plants where there very well could be solar panels capturing light, and naïvely we would expect solar panels to do a better job, but we plant plants instead, and let them gather energy for us.

One of them (I think Ruby) didn't seem convinced that it was fair to compare solar panels with chlorophyll - is it really an apples-to-apples comparison? I think it is. It is true that plants do a lot of work beyond simply capturing light, and that electricity goes to different uses than what plants do, but ultimately both plant-based farms and photovoltaic cells capture energy from the sunlight reaching the earth and convert it into human-usable energy. One could imagine genetically engineered plants doing much of what we use electricity for these days, or industrial processes being hooked up to solar panels that do the things plants do, and in this way we can meaningfully compare how much energy plants allow us to direct toward human-desired goals with how much energy photovoltaic cells can redirect to human-desired uses.

Replies from: Raemon, MakoYass
comment by Raemon · 2020-08-31T01:03:19.125Z · LW(p) · GW(p)

Huh, somehow while chatting with you I got the impression that it was the opposite (chlorophyll more effective than solar panels). Might have just misheard.

Replies from: mikkel-wilson
comment by MikkW (mikkel-wilson) · 2020-08-31T01:06:28.331Z · LW(p) · GW(p)

The big advantage chlorophyll has is that it is much cheaper than photovoltaics, which is why I was saying (in our conversation) we should take inspiration from plants

Replies from: Raemon
comment by Raemon · 2020-08-31T01:19:58.122Z · LW(p) · GW(p)

Gotcha. What's the metric that it's cheaper on?

Replies from: mingyuan
comment by mingyuan · 2020-08-31T01:52:58.707Z · LW(p) · GW(p)

Well, money, for one?

comment by MakoYass · 2020-08-31T02:04:50.127Z · LW(p) · GW(p)

It would be interesting to see the efficiency of solar + direct air capture compared to plants. If it wins I will have another thing to yell at hippies (before yelling about there not being enough land area even for solar)

Replies from: mikkel-wilson
comment by MikkW (mikkel-wilson) · 2020-08-31T02:50:52.880Z · LW(p) · GW(p)

There's plenty of land area for solar. I did a rough calculation once, and my estimate was that it would take roughly twice the land area of the Benelux to build a solar farm that produces as much energy per annum as the entirety of humanity uses (the sun outputs an insane amount of power, and if one steps back to think about it, almost every single joule of energy we've ever used came indirectly from the sun - often through quite inefficient routes). I didn't take into account day/night cycles or losses due to transmission, but if we assume a 4x loss due to nighttime (probably a pessimistic estimate) and a 5x loss due to transmission (again, being pessimistic), it still comes out to substantially less than the land we have available to us (about 1/3 the size of the Sahara desert).
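(A rough reconstruction of that scaling; the Benelux and Sahara areas are my own ballpark figures, not exact values:)

```python
# Back-of-envelope check of the scaling described above. The area figures are
# rough assumptions (~75,000 km^2 for the Benelux, ~9.2 million km^2 for the
# Sahara), not precise values.
benelux_km2 = 75_000
sahara_km2 = 9_200_000

baseline = 2 * benelux_km2        # stated estimate: ~2x Benelux covers annual world energy use
pessimistic = baseline * 4 * 5    # 4x for day/night cycles, 5x for transmission losses

print(f"{pessimistic:,} km^2")                          # 3,000,000 km^2
print(f"{pessimistic / sahara_km2:.2f} of the Sahara")  # ~0.33, i.e. about 1/3
```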

comment by MikkW (mikkel-wilson) · 2021-06-17T19:18:15.830Z · LW(p) · GW(p)

I may have discovered an interesting tool against lethargy and depression [1]: this morning, in place of my usual caffeine pill, I made myself a cup of hot chocolate (using pure cacao powder / baking chocolate from the supermarket), which made me much more energetic than usual - a sharp contrast to the past 4 days, which have been marked by lethargy and intense sadness. Let me explain:

Last night, I was reflecting on the fact that one of the main components of chocolate is theobromine, which is very similar in structure to caffeine (theobromine is the reason why chocolate is poisonous to dogs & cats, for reasons similar to how caffeine evolved to kill insects that feed on plants), and is known to be the reason why eating chocolate makes people happy. Since I have problems with caffeine, but rely on it to have energy, I figured it would be worthwhile to try using chocolate instead as a morning pick-me-up. I used baking chocolate instead of Nesquik or a hot chocolate packet because I'm avoiding sugar these days, and I figured having as pure chocolate as possible would be ideal for my experiment.

I was greeted with pleasant confirmation when I became very alert almost immediately after starting to drink the chocolate, despite having been just as lethargic as on the previous days right up until then. It's always suggestive when you form a hypothesis based on facts and logic, then test it, and exactly what you expected to happen, happens. But of course, I can't be too confident until I repeat this experiment on future days, which I will happily be doing after today's success.

 

[1]: There are alternative hypotheses for why today was so different from the previous days: I attended martial arts class, then did some photography outside yesterday evening, which meant I got intense exercise, was around people I know and appreciate, and was doing stuff with intentionality, all of which could have contributed to my good mood today. There's also the possibility of regression to the mean, but I'm dubious of this since today was substantially above average for me. I also had a (sugar-free) Monster later in the morning, but that was long after I had noticed being unusually alert, and now I have a headache that I can clearly blame on the Monster (Caffeine almost always gives me a headache) [1a].

[1a]: I drink energy drinks because I like the taste, not for utilitarian reasons. I observe that caffeine tends to make whatever contains it become deeply associated with enjoyment and craving, completely separate from the alertness-producing effects of the chemical. A similar thing happened with Vanilla Café Soylent [1b], which I absolutely hated the first time I tried it, but which, a few weeks later, I had deep cravings for and could not do without.

[1b]: Sidenote, the brand Soylent has completely gone to trash, and I would not recommend anyone buy it these days. Buy Huel or Plenny instead.

Replies from: gilch, Dagon
comment by gilch · 2021-06-18T06:46:27.412Z · LW(p) · GW(p)

I think I want to try this. What was your hot cocoa recipe? Did you just mix it with hot water? Milk? Cream? Salt? No sugar, I gather. How much? Does it taste any better than coffee? I want to get a sense of the dose required.

Replies from: mikkel-wilson
comment by MikkW (mikkel-wilson) · 2021-06-19T22:51:28.899Z · LW(p) · GW(p)

Just saw this. I used approximately 5 tablespoons of unsweetened cocoa powder, mixed with warm water. No sweetener, no sugar, or anything else. It's bitter, but I do prefer the taste over coffee.

Replies from: gilch
comment by gilch · 2021-06-20T18:07:05.589Z · LW(p) · GW(p)

I just tried it. I did not enjoy the taste, although it does smell chocolatey. I felt like I had to choke down the second half. If it's going to be bitter, I'd rather it were stronger. Maybe I didn't stir it enough. I think I'll use milk next time. I did find this: https://criobru.com/ - apparently people do brew cacao like coffee. They say the "cacao" is similar to cocoa (same plant), but less processed.

Replies from: gilch, gilch
comment by gilch · 2021-06-23T06:17:39.466Z · LW(p) · GW(p)

Milk does take the edge off, even with no added sweeteners. I had no trouble swallowing the whole thing this way.

comment by gilch · 2021-06-20T19:27:58.433Z · LW(p) · GW(p)

I found this abstract suggesting that theobromine doesn't affect mood or vigilance at reasonable doses. But this one suggests that chocolate does.

Subjectively, I feel that my cup of cocoa today might have reduced my usual lethargy and improved my mood a little bit, but not as dramatically as I'd hoped for. I can't be certain this isn't just the placebo effect.

Replies from: mikkel-wilson
comment by MikkW (mikkel-wilson) · 2021-06-21T17:50:13.567Z · LW(p) · GW(p)

The first linked study tests 100, 200, and 400 mg of theobromine. A rough heuristic based on the toxic doses of the two chemicals suggests that 750 mg, maybe a little more (based on subjective experience), is equivalent to 100 mg of caffeine or a cup of coffee (this is roughly the dose I've been using each day), so I wouldn't expect a particularly strong effect for the first two conditions. The 400 mg condition does surprise me; but the sample size of the study is small (n = 24 subjects * 1 trial per condition), so the fact that it failed to find statistical significance shouldn't be too big of an update.
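(To make that concrete, converting the study's doses with the heuristic ratio above; the 750 mg : 100 mg figure is my rough subjective estimate, not a measured equivalence:)

```python
# Convert the study's theobromine doses into a rough caffeine-equivalent using
# the heuristic ratio above (~750 mg theobromine ~ 100 mg caffeine). The ratio
# is a rough personal estimate, not an established pharmacological value.
ratio = 100 / 750  # mg caffeine-equivalent per mg theobromine
for dose_mg in (100, 200, 400):
    print(f"{dose_mg} mg theobromine ~ {dose_mg * ratio:.0f} mg caffeine-equivalent")
# 100 -> ~13 mg, 200 -> ~27 mg, 400 -> ~53 mg: all below a typical cup of coffee
```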

Replies from: gilch
comment by gilch · 2021-06-21T19:50:25.308Z · LW(p) · GW(p)

I also noticed that it suppressed my appetite. Again, that's only from trying it once, but it might be useful for weight loss. I'm not sure if that's due to the theobromine, or just due to the fact that cocoa is nutritionally dense.

comment by Dagon · 2021-06-17T21:27:12.209Z · LW(p) · GW(p)

Can you clarify your Soylent anti-recommendation? I don't use it as an actual primary nutrition source, more as an easy snack for a missed meal, once or twice a week. I haven't noticed any taste difference recently - my last case was purchased around March, and I pretty much only drink the Chai flavor.

Replies from: mikkel-wilson
comment by MikkW (mikkel-wilson) · 2021-06-17T21:48:23.533Z · LW(p) · GW(p)

A] Meal replacements require a large amount of trust in the entity that produces them, since if there are any problems with the nutrition, that will have big impacts on your health. This is less of an issue in your case, where it's not a big part of your nutrition, but in my case, where I ideally use meal replacements as a large portion of my diet, trust is important.

B] A few years ago, Rob Rhinehart, the founder and former executive of Soylent, parted ways with the company due to his vision conflicting with the investors' desires (which is never a good sign). I was happy to trust Soylent during the Rhinehart era, since I knew that he relied on his creation for his own sustenance, and seemed generally aligned. During that era, Soylent was very effective at signaling that they really cared about the world and about people's nutrition. All the material that sent those signals no longer exists, and the implicit signals (e.g. the shape of and branding on the bottles, the new products they are developing [the biggest innovation during the Rhinehart era was caffeinated Soylent; now the main innovations are Bridge and Stacked, products with poor nutritional balance targeted at a naïve general audience, a far cry from the very idea of Complete Food], and the copy on their website) all indicate that the company's main priority is now maximizing profit, without much consideration for the (perceived) nutritional value of the product. In terms of the product itself, the thing is probably still fine (though I haven't actually looked at the ingredients in the recent new nutritional balance), but in terms of incentives, the management's intentions aren't any better than, say, McDonald's or Jack In The Box's.

Since A] meal replacements require high trust and B] Soylent is no longer trustworthy: I cannot recommend anyone use Soylent more than a few times a week, but am happy to recommend Huel, Saturo, Sated, and Plenny, which all seem to still be committed to Complete Food.

(As far as flavor goes, I know I got one box with the old flavor after the recent flavor change - supply lines often take time to clear out - so it's possible you got a box of the old flavor. I don't actually mind the new flavor, personally.)

Replies from: Dagon, Zolmeister
comment by Dagon · 2021-06-18T15:01:39.548Z · LW(p) · GW(p)

Thanks for the detail and info!

comment by Zolmeister · 2021-06-18T02:18:58.656Z · LW(p) · GW(p)

I recommend Ample (lifelong subscriber). It has high quality ingredients (no soy protein), fantastic macro ratios (5/30/65 - Ample K), and an exceptional founder.

comment by MikkW (mikkel-wilson) · 2021-06-10T21:23:01.459Z · LW(p) · GW(p)

In Zvi's most recent Covid-19 post [LW · GW], he puts the probability of a variant escaping mRNA vaccines and causing trouble in the US at no more than 10%. I'm not sure I'm so optimistic.

One thing that gives reason to be optimistic is that we have yet to see any variant with substantial resistance to the vaccines, which might lead one to think that resistance just isn't likely to come up. On the other hand, the virus has had more than a year for more virulent strains to crop up while people were actively sheltering in place, and variants first came on the radar (at least for the population at large) around 9 months after the start of worldwide lockdowns, and a year after the virus was first noticed. In contrast, the vaccines have only been rolling out for half a year, and have only been in large-scale contact with the virus for maybe half that time - let's say a quarter of a year. It's maybe not so surprising that a resistant variant hasn't appeared yet.

Right now, there's a fairly large surface area between non-resistant strains of Covid and vaccinated humans. Many vaccinated humans will be exposed to virus particles, which will for the most part be easily defended against by the immune system. However, if it's possible for the virus to change in any way that reduces the immune response it faces, we will see this happen, and particularly in areas where roughly half the people are vaccinated and half are not, such a variant will have at least a slight advantage over other variants, and will start to spread faster than non-resistant variants. Again, it has taken a while for other variants to crop up, so the fact that we haven't seen this happen yet carries little information.

The faster we are able to get vaccines in most arms in all countries, the less likely this is to happen. If most humans worldwide are vaccinated 6 months from now, there likely won't be much opportunity for a resistant variant to become prominent. But I don't expect vaccines to roll out so effectively; I'll be pleasantly surprised if they are.

There's also the question of whether the US will be able to respond quickly and effectively enough if such a variant arises. I'm very pessimistic about this, and if you're not, you either haven't been paying attention, or are overestimating the difference in effectiveness between the current administration and the previous one, or are more optimistic about our ability to learn from our mistakes (on an institutional level) than I am.

All in all, saying there is no more than a 10% chance that a resistant variant will arise, with the US government not responding quickly enough, seems far too optimistic to me. I'm currently around 55% that such a variant will arise, and that it will cause at least 75,000 deaths OR will prompt a lockdown of at least 30 days in at least 33 US states [edited to add: within the next 7 years].

comment by MikkW (mikkel-wilson) · 2021-02-15T19:53:57.311Z · LW(p) · GW(p)

One thing that is frustrating me right now is that I don't have a good way of outputting ideas while walking. One thing I've tried is talking into voice memos, but it feels awkward to be talking out loud to myself in public, and it's a hassle to transcribe what I said when I'm done. One idea I don't think I've ever seen implemented is a hand-held keyboard that I could use while walking, operate mostly by touch without looking at it, and that could perhaps provide audio feedback through my headphones.

Replies from: AllAmericanBreakfast
comment by AllAmericanBreakfast · 2021-02-16T22:34:33.935Z · LW(p) · GW(p)

If you have bluetooth earbuds, you would just look to most other people like you're having a conversation with somebody on the phone. I don't know if that would alleviate the awkwardness, but I thought it was worth mentioning. I have forgotten that other people can't tell when I'm talking to myself when I have earbuds in.

comment by MikkW (mikkel-wilson) · 2020-08-10T20:39:29.867Z · LW(p) · GW(p)

Epistemic status: intended as a (half-baked) serious proposal

I've been thinking about ways to signal truth value in speech. In our modern society, we have no way to readily tell when a person is being 100% honest - we have to trust that a communicator is being honest, or otherwise verify for ourselves whether what they are saying is true. And if I want to tell a joke, speak ironically, or communicate things which aren't-literally-the-truth-but-point-to-the-truth, my listeners need to deduce this for themselves from the context in which I say something not-literally-true. This means that common knowledge of honesty almost never exists, which significantly slows down [LW(p) · GW(p)] positive effects from Aumann's Agreement Theorem [? · GW].

In language, we speak with different registers - different ways of speaking, depending on the context of the speech. The way a salesman speaks to a potential customer will be distinct from the way he speaks to his pals over a beer; he speaks in different registers in these different situations. But registers can also be used to communicate information about the intentions of the speaker - when a speaker is being ironic, he will inflect his voice in a particular way, to signal to his listeners that he shouldn't be taken 100% literally.

There are two points that come to my mind here: One: establishing a register of communication that is reserved for speaking literally true statements, and Two: expanding the ability to use registers to communicate not-literally-true intent, particularly in text.

On the first point: a large part of the reason why people speaking in a natural register cannot always be assumed to be saying something literally true is that there is no external incentive not to lie. Well, sometimes there are incentives not to lie, but oftentimes these incentives are weak, and especially in a society built upon free speech, it is hard to enforce a norm against lying in natural-register speech on a large scale. Now my mind imagines a protected register of speech, perhaps copyrighted by some organization (and which includes unique manners of speech distinctive enough to be eligible for copyright), with that organization vowing to take action against anybody who speaks not-literally-true statements (i.e., statements which communicate a world model that does not reliably reflect the actual state of the world) in that register; anybody would be free (according to a legally enforceable license) to speak whatever literally-true statements they want in that register, but may not speak non-truths in that register, at pain of legal action.

If such a register were created, and reliably enforced, it would help create a society where people could readily trust strangers saying things they are not otherwise inclined to believe, given that the statement is spoken in the protected register. I think such a society would look different from current society, and would have benefits compared to it. I also think a less strict version of this could be implemented by a single platform (perhaps LessWrong?), replacing legal action with the threat of being suspended for speaking not-literal-truths in a protected register, and I suspect that it would have a non-zero positive effect. This also has the benefit of probably being cheaper, and of standing on less murky legal ground with respect to speech.

I don't currently have time to get into details on the second point, but I will highlight a few things: Poe's law states that even the most extreme parody can be readily mistaken for a serious position. Whereas spoken language can clearly be inflected to indicate ironic intent, or humor, or perhaps even not-literally-true-but-pointing-to-the-truth, the carriers of this inflection are not replicated in written language - therefore written language, which the internet is largely based upon, lacks the richness of registers that allows a clear distinction between extreme-but-serious positions and humor. There are attempts to inflect writing in such a way as to provide this richness, but as far as I know, there is no widely understood standard that actually accomplishes this. This is worth exploring in the future. Finally, I think it is worthwhile to spend time reflecting on intentionally creating more registers that are explicitly intended to communicate varying levels of seriousness and intent.

Replies from: Dagon
comment by Dagon · 2020-08-11T14:05:54.237Z · LW(p) · GW(p)
most extreme parody can be readily mistaken for a serious position

I may be doing just that by replying seriously. If this was intended as a "modest proposal", good on you, but you probably should have included some penalty for being caught, like surgery to remove the truth-register.

Humans have been practicing lying for about a million years. We're _VERY_ good at difficult-to-legislate communication and misleading speech that's not unambiguously a lie.

Until you can get to a simple (simple enough for cheap enforcement) detection of lies, an outside enforcement is probably not feasible. And if you CAN detect it, the enforcement isn't necessary. If people really wanted to punish lying, this regime would be unnecessary - just directly punish lying based on context/medium, not caring about tone of voice.

Replies from: mikkel-wilson
comment by MikkW (mikkel-wilson) · 2020-08-11T16:41:29.972Z · LW(p) · GW(p)

I assure you this is meant seriously.

Until you can get to a simple (simple enough for cheap enforcement) detection of lies, an outside enforcement is probably not feasible.

There's plenty of blatant lying out there in the real world, which would be easily detectable by a person with access to reliable sources and their head screwed on straight. One important facet of my model of this proposal, which isn't explicitly mentioned in this shortform, is that validating statements is relatively cheap, but expensive enough that for every single person to validate every single sentence they hear is infeasible. A central arbiter of truth that enforces honesty allows one person doing the heavy lifting to save a million people from each having to do the same task individually.

If people wanted to punish lying this regime would be unnecessary - just directly punish lying based on context/medium, not caring about tone of voice.

The point of having a protected register (in the general, not platform-specific case) is that it would be enforceable even when the audience and platform are happy to accept lies - since the identifiable features of the register would be protected as intellectual property, the organization that owned the IP could take action against violations of that intellectual property, even when there would otherwise be no legal basis for penalizing the dishonesty itself.

Replies from: Dagon
comment by Dagon · 2020-08-12T19:00:55.719Z · LW(p) · GW(p)
The point of having a protected register (in the general, not platform-specific case), is that it would be enforceable even when the audience and platform are happy to accept lies

Oh, I'd taken that as a fanciful example, which didn't need to be taken literally for the main point, which I thought was detecting and prosecuting lies. I don't think that part of your proposal works - "intellectual property" isn't an actual law or single concept, it's an umbrella for trademark, copyright, patent, and a few other regimes. None of which apply to such a broad category of communication as register or accent.

You probably _CAN_ trademark a phrase or word, perhaps "This statement is endorsed by TruthDetector(TM)". It has the advantage that it applies in written or spoken media, has no accessibility issues, works for tonal languages, etc. And then prosecute uses that you don't actually endorse.

Endorsing only true statements is left as an exercise, which I suspect is non-trivial on its own.

Replies from: mikkel-wilson
comment by MikkW (mikkel-wilson) · 2020-08-13T00:42:03.208Z · LW(p) · GW(p)

I suspect there's a difference between what I see in my head when I say "protected register" and the image you receive when you hear it. Hopefully I'll be able to write down a more specific proposal in the future, and provide a legal analysis of whether what I envision would actually be enforceable. I'm not a lawyer, but it seems that what I'm thinking of (i.e., the model in my head) shouldn't be dismissed out of hand (although I think you are correct to dismiss what you envision that I intended).

comment by MikkW (mikkel-wilson) · 2021-07-28T01:51:14.841Z · LW(p) · GW(p)

Update on my tinkering with using high doses of chocolate as a psychoactive drug:

(Nb: at times I say "caffeine" in this post, in contrast to chocolate, even though chocolate contains caffeine; by this I mean coffee, energy drinks, caffeinated soda, and caffeine pills collectively, all of which were up until recently frequently used by me; recently I haven't been using any sources of caffeine other than chocolate, and even then try to avoid using it on a daily basis)

I still find that consuming high doses of chocolate (usually 3-6 tablespoons of dark cocoa powder, or a corresponding dose of dark chocolate chips / chunks) has a stimulating effect that I find more pleasant than caffeine, and makes me effective at certain things in a way that caffeine doesn't.

I am pretty sure that I was too confident in my hypothesis about why specifically chocolate has this effect. One obvious thing that I overlooked in my previous posts is that chocolate contains caffeine, and this likely explains a large amount of its stimulant effects. It is definitely true that theobromine has a very similar structure to caffeine, but it's unclear to me that it has any substantial stimulant effect. Gilch linked me to a study that he said suggests it doesn't, but after reading the abstract, I found that it only justifies a weak update against thinking that theobromine specifically has stimulant effects.

I'm confident that there are chemicals in chocolate other than caffeine that are responsible for me finding benefit in consuming it, but I have no idea what those chemicals are.

Originally I was going to do an experiment, randomly assigning days to either consume a large dose of chocolate or not, but after the first couple of days I decided against doing so, so I don't have any personal experimentation to back up my observations. Just observationally, though, there's a very big difference in my attitude and energy on days when I do or don't consume chocolate.

When I talked to Herschel about his experience using chocolate, he noted that building up tolerance is a problem with any use of chemicals to affect the mind, which is obviously correct. So I ended up deciding that I won't use chocolate every day; instead I will use it on days when I have a specific reason to, and will make sure that there are days when I don't use it, even if I find myself always wanting to. My thought here is that if my brain is forced to operate at some basic level on a regular basis without the chemical, then when I do use the chemical, I will be able to achieve my usual operation plus a little more, which will ensure that I can always derive some benefit from it. I think this approach should make sense for many chemicals where building up tolerance is a concern.

Gilch said he didn't notice any effect when he tried it. I don't know how much he used, but since I specified an amount in response to one of his questions, I presume he probably used an amount similar to what I would use. I don't know if he used it in addition to caffeine, or as a replacement. If it was a replacement, that would explain why he didn't notice any additional stimulation over and above his usual stimulation, but it would still leave me wondering why he didn't notice any other effects. One possibility is that the effects are a little bit subtle - not too subtle, since its effects tend to be pretty obvious for me (in contrast to usual caffeine) when I'm on chocolate, but subtle enough that a different person might not be as attuned to them, for whatever reason. (Part of why I say this is that I find chocolate helps me be more sociable, and this is one of its most obvious effects in contrast to caffeine for me; I care a lot about my ability to be sociable, so it's hard for this effect to slip my notice, but someone who cares less about how they interact with other people may overlook it. There are other effects too, but those do tend to be somewhat subtle, though still noticeable.)

As far as delivery goes, I have innovated slightly on my original method. I now often use dark chocolate chips / chunks in addition to drinking the chocolate; I find that pouring out a handful, just enough to fit in my mouth, will have a non-trivial effect. Since I found that drinking the chocolate straight would irritate my stomach and cause my stool to have a weird consistency, I have started using milk. My recipe is now to take a tall glass, fill it 1/3rd with water, add some (but not necessarily all) of the desired dose of cocoa powder into the glass, microwave it for 20 seconds, stir the liquid, add a little more water and the rest of the cocoa powder, microwave it for 20 more seconds, stir it until there are no chunks, then fill up the rest of the glass with milk. There are probably changes that could be made to the recipe, but I find this at least gets a consistently good outcome. With the milk, my stomach doesn't get irritated, and my stool is less different, though still slightly different, from how it would otherwise be.

On the subject of it making me sociable, I don't think it's a coincidence that most of the days that my friends receive texts from me, I have had chocolate on those days. I also seem to write more on days when I have had chocolate. I find chocolate helps me feel that I know what I need to say, and I rarely find myself second-guessing my words when I'm on chocolate, whereas I often have a hard time finding words in the first place without chocolate, and feel less confident about what I say without it. I've written a lot on this post alone, and have also messaged a friend today, and have also written a long-ish analysis on a somewhat controversial topic on another website today. Based on the context I say that in, I'm sure you can guess whether I've had chocolate today.

comment by MikkW (mikkel-wilson) · 2021-07-02T21:20:33.117Z · LW(p) · GW(p)

URLs (Universal Resource Locators) are universal over space, but they are not universal over time, and this is a problem

Replies from: Dagon
comment by Dagon · 2021-07-02T21:54:14.687Z · LW(p) · GW(p)

According to https://datatracker.ietf.org/doc/html/rfc1738 , they're not intended to be universal, they're actually Uniform Resource Locators.  Expecting them to be immutable or unique can lead to much pain.

comment by MikkW (mikkel-wilson) · 2021-05-19T14:54:55.693Z · LW(p) · GW(p)

Cryptocurrencies in general are good and the future of money, but Bitcoin in particular deserves to crash all the way down to $0

Replies from: Viliam, Dagon, mikkel-wilson
comment by Viliam · 2021-05-19T18:30:56.778Z · LW(p) · GW(p)

In a universe that cares about "deserve", diamonds would crash to $0 first. Bitcoin at least doesn't run on slave labor.

Replies from: mikkel-wilson
comment by MikkW (mikkel-wilson) · 2021-05-19T19:00:57.681Z · LW(p) · GW(p)

Hmmm... I guess this is a good illustration of why "deserve" isn't a good way to put what I meant.

Bitcoin isn't actually any good at what it's meant to do- it's really a failure as a currency. It has been a rewarding store of value for a while, but I expect it will be displaced as a store of value by currencies that are more easily moved from account to account. Transaction fees are often too high, and will likely increase, and it is slow to process transactions (the slow speed isn't a hindrance to its quality as a store of value, but it does reduce its economic desirability; transaction fees are very much a problem for a store of value)

I expect in the long run, economic forces will drive BTC to nearly $0 without any regard to what it morally "deserves".

comment by Dagon · 2021-05-19T16:43:50.236Z · LW(p) · GW(p)

While I don't disagree, it's interesting to consider what it means for a currency to deserve something.  I'd phrase it as "people who don't hold very much bitcoin deserve to spend less of our worldwide energy and GPU output on crypto mining".

Replies from: mikkel-wilson, Rana Dexsin
comment by MikkW (mikkel-wilson) · 2021-05-19T18:14:37.984Z · LW(p) · GW(p)

That does not accurately summarize my own personal feelings on this. I do suspect it's correct that BTC miners are using too much of the world's resources (a problem that could be fixed, though I'd be surprised if Bitcoin developers chose to fix it), but more generally I feel that people who do hold on to BTC deserve to lose their investment if they don't sell soon (to be clear, I am against the government having anything to do with that, but I will be happy with the market if / when the market decides BTC is worthless)

comment by Rana Dexsin · 2021-05-19T17:05:55.436Z · LW(p) · GW(p)

Language clarification: is "deserve to spend less of…" used in the sense of "deserve that less of … is spent [not necessarily by them]" here?

Replies from: Dagon
comment by Dagon · 2021-05-19T19:01:56.106Z · LW(p) · GW(p)

Actually, I should have used a word different from "deserve".  There's no such thing - I should have said something along the lines of "I'd prefer that...".

comment by MikkW (mikkel-wilson) · 2021-05-19T14:57:10.416Z · LW(p) · GW(p)

This is something that has been in the back of my mind for a while; I sold almost all of my BTC half a year ago and invested that money in other assets.

comment by MikkW (mikkel-wilson) · 2021-04-09T21:47:26.290Z · LW(p) · GW(p)

Last month, I wrote a post here titled "Even Inflationary Currencies Should Have Fixed Total Supply", which wasn't well-received. One problem was that the point I argued for wasn't exactly the same as what the title stated: I supported both currencies with a fixed total supply and currencies that instead choose to scale supply in proportion to the amount of value in the currency's ecosystem. Many people were confused and put off by the disparity between the title and my actual thesis; indeed, one of the most common critiques in the comments was a reiteration of a point I had already made in the original post.

Zvi helpfully pointed out another effect of nominal inflation, one I wasn't previously aware of, which is part of the reason inflation is implemented the way it is: nominal inflation induces people to accept worsening prices that they would otherwise psychologically resist. While I feel that intentionally invoking this effect flirts with the boundary of dishonesty, I do recognize its power and practical benefits.

All that said, I do stand by the core of my original thesis: nominal inflation is a source of much confusion for normal people, and makes the information provided by price signals less legible over long spans of time, which is problematic. Even if the day-to-day currency continues to nominally inflate as things are now, it would be stupid not to coordinate around a standard stable unit of value (like [Year XXXX] Dollars, except without having to explicitly name a specific year as the basis of reference; and maybe don't call it dollars, to make it clear that the unit isn't fluidly under the control of some organization).

comment by MikkW (mikkel-wilson) · 2021-04-08T03:21:55.945Z · LW(p) · GW(p)

I learned to type in Dvorak nearly a decade ago, and any time I have typed on a device that supports it, I have used it since then. I don't know if it actually is any better than QWERTY, but I do notice that I enjoy the way it feels to type in Dvorak; the rhythm and shape of the dance my fingers make is noticeably different from when I type on QWERTY.

Even if Dvorak itself turns out not to be better in some way (e.g. speed, avoiding injury, facilitation of mental processes) than QWERTY, it is incredibly unlikely that there does not exist some configuration of keys that is provably superior to QWERTY.

Also, hot take: Colemak is the coward's Dvorak.

comment by MikkW (mikkel-wilson) · 2021-03-23T17:41:54.108Z · LW(p) · GW(p)

We're living in a very important time, being on the cusp of both the space revolution and the AI revolution truly taking off. Either one alone would put the 2020s on equal historical footing with the original development of life or the Cambrian explosion, and both together will make for a very historic moment.

Replies from: Viliam
comment by Viliam · 2021-03-24T17:08:42.115Z · LW(p) · GW(p)

If we succeed in colonizing another planet, preferably outside our solar system, then yeah. Otherwise, it could be the historical equivalent of... the first fish that climbed out of the ocean, realized it couldn't breathe, and died.

Replies from: mikkel-wilson
comment by MikkW (mikkel-wilson) · 2021-03-24T17:29:41.303Z · LW(p) · GW(p)

I'm quite confident that we will successfully colonize space, unless something very catastrophic happens

Replies from: Viliam
comment by Viliam · 2021-03-25T17:53:11.977Z · LW(p) · GW(p)

I hope you are right, but here are the things that make me pessimistic:

Seeing the solar system at the right scale makes me realize how the universe is a vast desert of almost-nothing, and how insane the distances between the not-nothings are.

Mars sounds like a big deal, but it is smaller than Earth. The total surface of Mars is about the same as the land area of Earth, so successfully colonizing Mars would merely double the space inhabitable by humans. That is, unless we colonize the surface of Earth's oceans first, in which case it would only increase the total inhabitable space by 30%.

And colonizing Mars doesn't mean that we then have space-colonization technology mastered, because compared to other planets, Mars is easy mode. Venus and Mercury would double the inhabitable space again... and then we have gas planets and insanely cold ice planets... and then we need to get out of the solar system, where distances are measured in light-years, which probably means centuries or millennia for us... at which point, if we have the technology to survive in space for a few generations, we might give up living on planets entirely, and just mine them for resources.

From that perspective, colonizing Mars seems like a dead end. We need to survive in space, for generations. Which will probably be much easier if we get rid of our biology.

Yeah, it could be possible, but probably much more complicated than most of science fiction assumes.

Replies from: mikkel-wilson
comment by MikkW (mikkel-wilson) · 2021-03-25T19:44:02.206Z · LW(p) · GW(p)

Thanks for the thoughts.

The main resource needed for life is light (which is abundant throughout the solar system), not land or gravity, so the sparseness of planets isn't actually a big deal.

It's also worth remembering the Moon; it's slightly harder than Mars and even smaller; but the Moon will play an important role in the Earth-Moon system, similar to what the Americas have been to the Old World in the past 400 years.

Interstellar travel is a field where we don't yet have good proof of capabilities, but if we can figure out how to safely travel at significant fractions of c, it shouldn't take anything more than a few decades to reach the nearest stars, quite possibly even less time than that; and even if we end up failing to expand beyond the Solar System, I'd say that's more than enough to justify calling the events coming in the next few decades a revolution on par with the Cambrian explosion and the development of life.

comment by MikkW (mikkel-wilson) · 2021-02-05T06:52:33.209Z · LW(p) · GW(p)

I currently expect a large AI boom, representing a 10x growth in world GDP, to happen within the next 5 years with 80% probability, within the next 10 years with ~93% probability, and within the next 3 years with 50% probability.

I'd be happy to doublecrux with anyone whose timelines are slower

comment by MikkW (mikkel-wilson) · 2021-01-31T21:02:55.623Z · LW(p) · GW(p)

I wish the keycaps on some of the keys on my keyboard were textured - I can touch-type well enough for the alphabetic keys, but when using other keys, I often get slightly confused as to which keys are under my fingers unless I use my eyes to see what key it is. If there were textures (perhaps braille symbols) that indicated which key I was feeling, I expect that would be useful.

Replies from: clone of saturn, Raemon
comment by clone of saturn · 2021-02-02T04:10:15.833Z · LW(p) · GW(p)

This seems like it would be pretty easy to DIY with small drops of superglue.

comment by Raemon · 2021-01-31T21:06:19.008Z · LW(p) · GW(p)

There probably exist braille keyboards you could try?

Replies from: abramdemski
comment by abramdemski · 2021-02-02T17:56:54.961Z · LW(p) · GW(p)

I tried this once -- I got Braille stickers designed to put on a keyboard -- but I didn't like it. Still, it would be pretty cool to learn braille this way.

Replies from: mikkel-wilson
comment by MikkW (mikkel-wilson) · 2021-02-02T21:43:50.660Z · LW(p) · GW(p)

This is useful data. What didn't you like about it?

Replies from: abramdemski
comment by abramdemski · 2021-02-03T16:50:10.009Z · LW(p) · GW(p)

The lumpy feel was aversive.

comment by MikkW (mikkel-wilson) · 2020-11-04T20:22:57.256Z · LW(p) · GW(p)

Scott Garrabrant presents Cartesian Frames as being a very mathematical idea. When I asked him about the prominence of mathematics in his sequence, he said “It’s fundamentally math; I mean, you could translate it out of math, but ultimately it comes from math”. But I have a different experience when I think about Cartesian Frames- first and foremost, my mental conception of CF is as a common sense idea, that only incidentally happens to be expressible in mathematical terms (edit: when I say "common sense" here, I don't mean that it's a well known idea - it's not, and Scott is doing good by sharing his ideas - but the idea feels similar to other ideas in the "common sense" category). I think both perspectives are valuable, but the interesting thing I want to note here is the difference in perspective that the two of us have. I hope to explore this difference in framing more later.

Replies from: Pattern
comment by Pattern · 2020-11-05T06:38:49.813Z · LW(p) · GW(p)

What's the common sense idea?

comment by MikkW (mikkel-wilson) · 2020-10-29T22:23:14.310Z · LW(p) · GW(p)

Aumann Agreement != Free Agreement

Oftentimes, I hear people talk about Aumann's Agreement Theorem as if it means that two rational, honest agents cannot be aware of disagreeing with each other on a subject, without immediately coming to agree with each other. However, this is overstating the power of Aumann Agreement. Even putting aside the unrealistic assumption of Bayesian updating, which is computationally intractable in the real world [LW · GW], as well as the (not strictly required, but valuable) non-trivial [LW(p) · GW(p)] presumption that the rationality and honesty of the agents is common knowledge [? · GW], the reasoning that Aumann provides is not instantaneous:

To illustrate Aumann's reasoning, let's say Alice and Bob are rational, honest agents capable of Bayesian updating, and have common knowledge of each other's rationality.

Alice says to Bob: "Hey, did you know pineapple pizza was invented in Canada?"

Bob: "What? No. Pineapple pizza was invented in Hawaii."

Alice: "I'm 90% confident that it was invented in Canada"

Bob is himself 90% confident of the opposite, that it has its origins in Hawaii (it's called Hawaiian Pizza, after all!), but since he knows that Alice is rational and honest, he must act on this information, and thereby becomes less confident in what he previously believed - but not by much.

Bob: "I'm 90% confident of the opposite. But now that I hear that you're 90% confident yourself, I will update to 87% confidence that it's from Hawaii"

Alice notices that Bob hasn't updated very far based on her disagreement, which now provides some information to her that she may be wrong. But she read from a source she trusts that pineapple pizza was first concocted in Canada, so she doesn't budge much:

"Bob, even after seeing how little you updated, I'm still 89% sure that pineapple pizza has its origins in Canada"

Bob is taken aback, that even after he updated so little, Alice herself has barely budged. Bob must now presume that Alice has some information he doesn't have, so updates substantially, but not all the way to where Alice is:

B: "Alright, after seeing that you're still so confident, I'm now only 50% confident that pineapple pizza is from Hawaii"

Alice and Bob go back and forth in this manner for quite a while, sharing their new beliefs, and then pondering on the implications of their partner's previous updates, or lack of updating. After some time, eventually Alice and Bob come to agreement, and both determine that there's an 85% chance pineapple pizza was developed in Canada. Even though it would have been faster if they had just stated outright why they believed what they did (look, Alice and Bob enjoy the Aumann Game! Don't judge them.), simply by playing this back-and-forth ping-ponging of communicating confidence updates, they managed to arrive at the optimal beliefs they would arrive at if they both, together, had access to all the information they each individually had.

What I want to highlight with this post is this: Even being perfect Bayesian agents, Alice and Bob didn't immediately come to the correct beliefs instantly by sharing that they had disagreeing beliefs; they had to take time and effort to share back and forth before they finally reached Aumann Agreement. Aumann agreement does not imply free agreement
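
To make the back-and-forth above concrete, here's a minimal sketch (in Python) of the kind of protocol that underlies it, in the style of Geanakoplos and Polemarchakis' "We Can't Disagree Forever": two agents with a common prior over a small finite set of worlds take turns announcing their posteriors, and each announcement rules out every world in which the speaker would have said something different. The state space, partitions, and event below are toy choices made up for illustration; they are not meant to reproduce the probabilities in the pizza dialogue.

```python
# A toy finite-world sketch of iterated posterior announcements leading to agreement.
# All numbers here are invented for illustration.

from fractions import Fraction

STATES = set(range(1, 10))                   # nine equally likely possible worlds
ALICE = [{1, 2, 3}, {4, 5, 6}, {7, 8, 9}]    # what Alice can privately distinguish
BOB   = [{1, 2, 3, 4}, {5, 6, 7, 8}, {9}]    # what Bob can privately distinguish
EVENT = {3, 4}                               # the proposition they argue about
TRUE_STATE = 1                               # the world they actually live in

def cell(partition, state):
    """The cell of `partition` containing `state` (the agent's raw private evidence)."""
    return next(c for c in partition if state in c)

def posterior(info):
    """P(EVENT | info) under the uniform common prior."""
    return Fraction(len(EVENT & info), len(info))

def aumann_dialogue():
    public = set(STATES)          # states consistent with everything said so far
    speakers = [("Alice", ALICE), ("Bob", BOB)]
    last = {}
    for round_no in range(1, 20):
        name, partition = speakers[(round_no - 1) % 2]
        # The speaker conditions on private evidence plus the public record...
        announced = posterior(cell(partition, TRUE_STATE) & public)
        print(f"round {round_no}: {name} announces {announced}")
        # ...and the announcement rules out every state in which the speaker
        # would have announced something different.
        public = {s for s in public
                  if posterior(cell(partition, s) & public) == announced}
        last[name] = announced
        if len(last) == 2 and last["Alice"] == last["Bob"]:
            print(f"agreement reached at {announced}")
            return

aumann_dialogue()
```

Run as written, it takes four announcements before the two posteriors coincide, which mirrors the point of the post: the agents do agree in the end, but only after several rounds of updating on each other's updates.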

Replies from: mark-xu
comment by Mark Xu (mark-xu) · 2020-10-29T22:26:17.301Z · LW(p) · GW(p)

https://arxiv.org/abs/cs/0406061 is a result showing that Aumann's Agreement is computationally efficient under some assumptions, which might be of interest.

Replies from: Benito
comment by Ben Pace (Benito) · 2020-10-29T22:28:09.685Z · LW(p) · GW(p)

I don't really buy that paper, IIRC it says that you only need to exchange a polynomial number of messages, but that each message takes exponential time to produce, which doesn't sound very efficient.

Replies from: mikkel-wilson
comment by MikkW (mikkel-wilson) · 2020-10-30T00:03:29.600Z · LW(p) · GW(p)

From the abstract: The time used by the procedure to achieve agreement within epsilon is on the order of O(e^(epsilon ^ -6))... In other words, yeah, the procedure is not cheap

comment by MikkW (mikkel-wilson) · 2020-10-26T22:24:35.824Z · LW(p) · GW(p)

There's a good number of ideas that I want to share here on LW in the linguistics / communication cluster. The question always comes to mind: "But what does communication have to do with rationality?"- to which I answer, rationality is the attempt to win, in part by believing true things which help one accomplish winning. If humans had infinite computational resources and infinite free time in which to do experiments, there would be nothing stopping us from arriving at the truth by ourselves. But in reality, we can't arrive at all the logical consequences of everything we know by ourselves, nor can we learn every facet of nature's dynamics alone. So humans who aspire to be rational must communicate- and the faster one can exchange information with other humans aspiring to the truth, the more rational one can be. Therefore, it is important for an aspiring rationalist to think deeply about how best to exchange information with their peers.

I'm not without precedent in applying linguistics and communication to the project of rationality- one of my favorite of Yudkowsky's Sequences is "A Human's Guide to Words" [? · GW].

I hope to explain this at some point in time, but for now I'll let it speak for itself

comment by MikkW (mikkel-wilson) · 2020-10-09T17:27:16.874Z · LW(p) · GW(p)

All the food you have on your table,

Your potatoes, corn, and lox,

To grow them yourself you would be able;

But if all were minded such,

Then who would have saved you from the pox?

comment by MikkW (mikkel-wilson) · 2020-10-02T04:50:53.721Z · LW(p) · GW(p)

If I were a middle school teacher, I would implement this system to make nerdy kids more popular (and maybe make aspiring popular kids work harder in class): every week, I would select a handful of students who I felt had done good work that week (according to my subjective taste), and they could write down the names of 3 or 4 other students in the class (but not themselves) who would earn a modest amount of extra credit. Ideally, I would name the selected students at the start of the week, and only take their nominations at the end of the week, so that other students have plenty of time to attempt to curry favour with them. (Although perhaps keeping the selected students unknown until they make their nominations would encourage students to anticipate who I would select each week, which may make for more salient long-term effects.)

This way, I can hijack the vicious social mechanisms that are prevalent in middle school, and use them to promote an intellectual culture

Replies from: Viliam, supposedlyfun
comment by Viliam · 2020-10-02T20:46:34.854Z · LW(p) · GW(p)

I read somewhere that intelligent people are a positive externality for their neighbors. Their activity improves the country on average, and they only capture a part of the value they add.

If you could clone a thousand Einsteins (each one talented not necessarily in physics, but in something different), they could improve your country so much that your life would be awesome, despite the fact that you couldn't compete with them for the thousand best jobs in the country. From the opposite perspective, if you appeared in Idiocracy, perhaps you could become a king, but you would have no internet, no medicine, probably not even good food, or plumbing. From the moment you would actually need something to work, life would suck.

But this effect is artificially removed in schools. Smart classmates are competitors (and grading on the curve takes it to the extreme), and cooperation is frowned upon. The school system is an environment that incentivizes hostility against smart people.

You suggest an artificial mechanism that would incentivize being friendly with the nerds. I like it! But maybe a similar effect could be achieved by simply removing the barriers to cooperation. Abolish all traces of grading on a curve; make grades dependent on impartial exams by a computer, so that one year everyone may succeed and another year everyone may fail. (Also, make something near-mode depend on the grades. Like, every time you pass an exam, you get a chocolate. Twenty exams allow you to use a gym one afternoon each week. Etc.) And perhaps, students will start asking their smarter classmates to tutor them; which will in turn increase the status of the tutors. Maybe. Worth trying, in my opinion.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2020-10-02T21:16:59.145Z · LW(p) · GW(p)

I saw an anecdote from a parent with two children somewhere, saying that when going outside, they used to reward the child who would get dressed first. This caused competition and bad feelings between the kids. Then they switched to rewarding both based on how quickly they got to the point where both were dressed. Since the children now had a common goal, they started helping each other.

I wonder if one could apply something like that to a classroom, to make the smart kids be perceived as an asset by the rest of the class.

And perhaps, students will start asking their smarter classmates to tutor them; which will in turn increase the status of the tutors. Maybe.

Datapoint: Finnish schools mostly don't grade on a curve, and some kids did ask me for help in high school, help that I was happy to provide. For the most part it felt like nobody really cared about whether you were smart or not, it was just another personal attribute like the color of your hair.

comment by supposedlyfun · 2020-10-03T14:57:14.421Z · LW(p) · GW(p)

A cute senior in my high school Physics class asked me to tutor her after school because she was having a hard time.  I can't overstate the ways in which this improved me as a young-geek-person, and I think she got better at doing physics, too.  Your proposal would tend to create more opportunities like that, I think, for cross-learning among students who are primarily book-intelligent and those who may be more social-intelligent.

comment by MikkW (mikkel-wilson) · 2020-09-09T19:33:46.755Z · LW(p) · GW(p)

Viliam's shortform posts have got me thinking about income taxes versus wealth taxes, and more generally about how taxes should be collected. In general I prefer wealth taxes over income taxes, although I suspect there may very well be better forms of taxation than either of those two. But considering wealth taxes specifically, I think their main problem is that over the long term they take away control of resources from people who have proven in the past that they know how to use resources effectively. While this can still allow for useful short-term and medium-term allocations of resources, it discourages very-long-horizon investing, as exemplified by Elon Musk's projects (SpaceX, Tesla, Neuralink, and The Boring Company): projects that are good investments primarily because Musk understands that in the very long term they will pay off, both in personal financial returns and in general global welfare. While Tesla is very close to becoming profitable (they could turn a profit this year if they wanted to), and SpaceX isn't too far off either, he founded these companies without any eye for medium-term profits. He founded them understanding the very long game, which is profitable in the absence of year-over-year wealth taxes, but could potentially be unprofitable if year-over-year wealth taxes were introduced.

The proposal that came to mind for alleviating the negative impact wealth taxes would have in this way is to allow entrepreneurs to continue to control the money they pay in wealth taxes, but with that money held in trust for the greater public, not for the personal use of the entrepreneur.

To clarify my point, I think it's worth noting that there are two similar concepts that get conflated into the single word "ownership": the 1st meaning of "own" (personal ownership) is that a person has full rights to decide how resources are used, and can use or waste those resources for their own personal pleasure however they wish; the 2nd meaning of "own" (entrusted to) is that a person has the right to decide how resources are used and managed, but ultimately they make decisions regarding those resources for the good of a greater public, or another trustor (entrusting entity), not for themselves.

When resources are owned by (i.e., entrusted to) somebody, they have the right to allocate those resources however they think is best, and aside from the most egregious examples of the resources being used for the personal gain or pleasure of the trustee, nobody can or should question the judgement of the trustee.

Back to wealth taxes: in my proposal, an entrepreneur would still be expected to "pay" a certain percentage of their wealth each year to the greater public, but instead of the money going directly to the government, the resources would continue to be "owned" by the entrepreneur. Rather than being personally owned for the entrepreneur's gain and pleasure, though, they would be entrusted to the entrepreneur in the name of the public, and the entrepreneur would be allowed to continue to use the resources to support any enterprises they expect to be a worthwhile investment. When the enterprise finally turns a profit, the percentage of revenues corresponding to the part that is entrusted in the name of the public would then be collected as taxes.

The main benefit of this proposal (assuming wealth taxes are already implemented) is that, while it cannot make profitable any venture that would be rendered unprofitable by a wealth tax, it can maintain the feasibility of ventures that are profitable in the long run, but which are made unfeasible in the short and medium terms by a wealth tax, due to the cost of taxes being more than medium term gains.
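
To make the mechanics concrete, here is a toy numerical sketch of how the entrusted share might accumulate under this proposal. The 2% tax rate, the growth rate, and the final payout are all numbers I made up purely for illustration; they are not part of the proposal itself.

```python
# Toy sketch of the "entrusted wealth tax" idea: the annual tax converts part of
# the private stake into a public trust, but the entrepreneur keeps managing the
# whole thing, and tax is only actually collected when there is a payout.
# All parameter values below are hypothetical.

WEALTH_TAX = 0.02     # hypothetical annual wealth tax rate
GROWTH = 0.20         # hypothetical annual growth of the venture's value

value = 100.0         # initial stake, all personally owned
public_fraction = 0.0 # share of the stake entrusted to the public so far

for year in range(1, 11):
    value *= 1 + GROWTH
    # Each year, the tax re-labels a slice of the *private* share as publicly entrusted.
    private_fraction = 1 - public_fraction
    public_fraction += private_fraction * WEALTH_TAX
    print(f"year {year}: value {value:.1f}, entrusted to public {public_fraction:.1%}")

payout = 50.0  # say the venture finally pays out this much
print(f"tax actually collected on the payout: {payout * public_fraction:.1f}")
```

The entrepreneur keeps managing 100% of the stake throughout; only the accounting of whom the eventual payout belongs to changes year by year.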

Replies from: Viliam, Dagon
comment by Viliam · 2020-09-11T19:40:05.184Z · LW(p) · GW(p)
two similar concepts that get conflated into the single word "ownership"

Sounds like "owner" vs "manager".

So, if I understand it correctly, you are allowed to create a company that is owned by state but managed by you, and you can redirect your tax money there. (I assume that if you are too busy to run two companies, it would also be okay to put your subordinate in charge of the state-owned company.)

I am not an expert, but it reminds me of how some billionaires set up foundations to avoid paying taxes. If you make the state-owned company do whatever the foundation would do, it could be almost the same thing.

The question is, why would anyone care whether the state-owned company actually generates a profit, if they are not allowed to keep it? This could mean different things for different entrepreneurs...

a) If you have altruistic goals, you could use your own company to generate profit, and the state-owned company to do those altruistic things that don't generate profit. A lot of good things would happen as a result, which is nice, but the part of "generating profit for the public" would not be there.

b) If the previous option sounds good, consider the possibility that the "altruistic goal" done by the state-owned company would be something like converting people to the entrepreneur's religion, or lobbying for political changes you oppose.

c) For people without altruistic or even controversially-altruistic goals, the obvious option is to mismanage the state-owned company and extract as much money as possible. For example, you could make the state-owned company hire your relatives and friends, give them generous salaries, and generate no profit. Or you could make the state-owned company buy overpriced services from your company. If this would be illegal, then... you could do the nearest thing that is technically legal. For example, if your goal is to retire early, then the state-owned company could simply hire you and then literally do nothing. Or you would pretend to do something, except that nothing substantial would ever happen.

Replies from: mikkel-wilson
comment by MikkW (mikkel-wilson) · 2020-09-12T01:58:10.078Z · LW(p) · GW(p)

The intention is that there would not be two separate companies, but one company, split between a part that is owned fully by the entrepreneur and a part that is merely managed by the entrepreneur on the public's behalf- so the entrepreneur would still be motivated to make the company do as well as possible, thereby generating revenue for the public at large

comment by Dagon · 2020-09-09T22:14:18.484Z · LW(p) · GW(p)
over the long term they take away control of resources from people who have proven in the past that they know how to use resources effectively

Umm, that's the very point of taxes - taking resources from non-government entities because the government thinks they can use those resources better. We take them from people who have resources, because that's where the resources are.

comment by MikkW (mikkel-wilson) · 2021-06-22T00:03:58.094Z · LW(p) · GW(p)

The changing climate may be scary, but it's also a pretty awesome existence proof of our terraforming capabilities

comment by MikkW (mikkel-wilson) · 2021-04-11T04:32:14.149Z · LW(p) · GW(p)

Random thought: if you have a big enough compost pile, would it spontaneously break into flames due to the heat generated by the bioprocesses that occur therein? If so, at what size would it burst into flames? Surely it could happen before it reached the size of the sun, even ignoring gravitational effects.

(Just pondering out loud, not really asking unless someone really wants to answer)

Replies from: zac-hatfield-dodds
comment by Zac Hatfield Dodds (zac-hatfield-dodds) · 2021-04-11T05:09:36.935Z · LW(p) · GW(p)

For a value of "break into flames" that matches damp and poorly-oxygenated fuel, yep! This case in Australia is illustrative; you tend to get a lot of nasty smoke rather than a nice campfire vibe.

You'd have to mismanage a household-scale compost pile very badly before it spontaneously combusts, but it's a known and common failure mode for commercial-scale operations above a few tons. Specific details about when depend a great deal on the composition of the pile; with nitrate filmstock it was possible with as little as a few grams.

comment by MikkW (mikkel-wilson) · 2021-02-24T07:00:40.538Z · LW(p) · GW(p)

I'm tinkering around in NetLogo with a model I made representing the dynamics of selfishness and altruism. In my model, there are two types of agents ("turtles" in NL parlance), red selfish turtles and blue altruistic turtles. The turtles wander around the world, and occasionally participate in a prisoner's-dilemma-like game with nearby turtles. When a turtle cooperates, their partner receives a reward, at the cost of losing some percentage of that reward themselves. When a turtle defects, they keep all their resources, and their partner gains none. The turtles steadily lose resources over time, and if they reach 0 resources, they die.

(I tried to include an image of this, but I can't seem to upload images right now)

From these rules in this setup, it follows that since the only way to receive resources is if someone cooperates with you, and you are steadily losing resources, a population of only defectors will quickly die out under these rules.

I tried tinkering with different factors to see what happens in different circumstances, specifically with an eye for situations where altruism may have a long-term benefit over selfishness (I find this particularly interesting to look for, since in many situations selfishness beats altruism in nature, but instances of altruism to strangers do nonetheless happen in nature). Of course, the lower the penalty for altruism, the more rewarding altruism becomes. The speed of the turtles also matters - when turtles move very slowly, there can develop small pockets of altruists that avoid selfish turtles for a while simply based on geography, but as turtles speed up, location stops mattering, and most turtles will spend some time around most other turtles - which can be a boon for selfish turtles looking for altruists to feed off of.

However, the variable that seemed to give the biggest advantage to the altruists was how many resources a turtle can store before it gets full and no longer seeks additional resources. In my earlier runs, turtles could store large surpluses of resources, and selfish turtles could survive for quite a while after the altruists had all died, simply off of their savings. However, when I lowered the maximum level of savings, the absence of altruists would lead to selfish turtles dying much sooner, leaving completely empty parts of the map, which the altruistic turtles could then escape to and claim for themselves. Simply by dividing the maximum savings by 3, I went from a situation where all turtles would eventually die (due to the altruists going extinct), to a scenario where selfishness eventually dies out, leaving behind a population of 100% altruists- but not before wild fluctuations where the population can drop as low as 15 turtles, then the map fills up entirely, then the population drops back down to 15, repeating for a while.
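
For anyone who wants to poke at the dynamics without NetLogo, here's a rough Python sketch of the rules as I understand them from the description above. All the parameter values (reward size, metabolism, interaction radius, and so on) are guesses of mine rather than the numbers from the actual model, and I've left out reproduction and the red/blue visuals, so treat it as a starting point rather than a faithful port; MAX_SAVINGS is the "stomach size" knob discussed above.

```python
# Rough, simplified sketch of the selfish/altruist turtle world; parameters are guesses.
import random

WORLD = 30.0          # side length of a square, wrap-around world
N_TURTLES = 120
P_ALTRUIST = 0.5
SPEED = 1.0           # how far a turtle wanders each tick
RADIUS = 2.0          # interaction distance
REWARD = 3.0          # what a cooperator hands to its partner
COST_FRACTION = 0.3   # fraction of the reward the cooperator pays itself
METABOLISM = 0.25     # resources burned per tick
MAX_SAVINGS = 10.0    # "stomach size"; try 30.0 to see selfishness linger longer
START_RESOURCES = 5.0

class Turtle:
    def __init__(self):
        self.x, self.y = random.uniform(0, WORLD), random.uniform(0, WORLD)
        self.altruist = random.random() < P_ALTRUIST
        self.resources = START_RESOURCES

    def wander(self):
        self.x = (self.x + random.uniform(-SPEED, SPEED)) % WORLD
        self.y = (self.y + random.uniform(-SPEED, SPEED)) % WORLD

    def near(self, other):
        dx = min(abs(self.x - other.x), WORLD - abs(self.x - other.x))
        dy = min(abs(self.y - other.y), WORLD - abs(self.y - other.y))
        return dx * dx + dy * dy <= RADIUS * RADIUS

def step(turtles):
    for t in turtles:
        t.wander()
    for t in turtles:
        partners = [o for o in turtles if o is not t and t.near(o)]
        if partners and t.altruist:
            # Cooperating: the partner gains the reward (up to the cap),
            # and the altruist pays a fraction of it.
            partner = random.choice(partners)
            partner.resources = min(MAX_SAVINGS, partner.resources + REWARD)
            t.resources -= REWARD * COST_FRACTION
        # Defectors simply do nothing: they keep their resources, partners get nothing.
    for t in turtles:
        t.resources -= METABOLISM
    return [t for t in turtles if t.resources > 0]

turtles = [Turtle() for _ in range(N_TURTLES)]
for tick in range(500):
    turtles = step(turtles)
    if tick % 50 == 0:
        alt = sum(t.altruist for t in turtles)
        print(f"tick {tick}: {len(turtles)} turtles, {alt} altruists")
```

Lowering MAX_SAVINGS here should reproduce the effect described above at least qualitatively, though the exact thresholds will differ from the NetLogo runs.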

Replies from: Viliam
comment by Viliam · 2021-02-24T17:17:18.517Z · LW(p) · GW(p)

when turtles move very slowly, there can develop small pockets of altruists that avoid selfish turtles for a while simply based on geography, but as turtles speed up, location stops mattering, and most turtles will spend some time around most other turtles - which can be a boon for selfish turtles looking for altruists to feed off of.

Some people who grew up in a village and later moved to a big city probably feel like this.

People who live in a city have a way to deal with this: interact with members of your subculture(s), not with strangers. In absence of geographical distances, we can create social ones.

However, the variable that seemed to give the biggest advantage to the altruists was how much resources a turtle can store before it gets full, and no longer seeks additional resources.

Isn't this the same thing from a different perspective? I mean, the important thing seems to be how far you can travel on a full stomach. That can be increased by either moving faster or having a greater stomach.

Replies from: mikkel-wilson
comment by MikkW (mikkel-wilson) · 2021-02-24T18:57:54.433Z · LW(p) · GW(p)

In absence of geographical distances, we can create social ones.

I like this thought

Isn't this the same thing from a different perspective? I mean, the important thing seems to be how far you can travel on a full stomach. That can be increased by either moving faster or having a greater stomach.

I agree that a bigger stomach allows for a bigger range, but that is not the only effect it has - a bigger stomach also allows for survival long after there are literally no providers left, which means there can be areas that are rich in selfish characters, and if any stray altruists do wander by, they will further feed this group. With a smaller stomach, these areas instead become barren, providing a breeding ground for altruists that can then lead to a resurgence of altruists, temporarily spared from the selfish ones.

comment by MikkW (mikkel-wilson) · 2020-12-08T21:05:47.886Z · LW(p) · GW(p)

I step out of the airlock, and I look around. In the distance, I see the sharp cliff extending around the crater, a curtain setting the scene, the Moon the stage. I look up at the giant blue marble in the sky, white clouds streaked across the oceans, brown landmasses like spots on the surface. The vibrant spectacle of the earth contrasts against the dead barren terrain that lies ahead. I look behind at the glass dome, the city I call home. 

Within those arched crystal walls is a new world, a new life for those who dared to dream beyond the heavy shackles that tied them to a verdant rock. New songs, new gardens, new joys, new heartbreaks, reaching, for the first time, to the skies, to the stars, to the wide open empty sea.

A voluminous frontier, filled with opportunity, filled with starlight, filled with the warmth and strength of the sun. We are one step further from the tyrannical grip of gravity, stretching our wings, just now preparing to take off, to soar and harness the fullness of the prosperity that gave us form

Replies from: mikkel-wilson
comment by MikkW (mikkel-wilson) · 2020-12-08T21:08:36.549Z · LW(p) · GW(p)

NB: I'm currently going through my old blog, which I'm planning on deactivating soon. I may repost some relevant posts from there over here, either to shortform or as a main post, as appropriate. This piece is one of the posts from there which touches on rationality-adjacent themes. You may see other posts from me in the coming days that also originate from there.

comment by MikkW (mikkel-wilson) · 2020-10-24T02:06:00.401Z · LW(p) · GW(p)

To ⌞modern eyes living in a democracy with a well-functioning free market⌟, absolute monarchy and feudalism [1] (as were common for quite a while in history) seem quite stupid and suboptimal (there are some who may disagree, but I believe most will endorse this statement). From the perspective of an ideal society, our current society will appear quite similar to how feudalism seems to us - stupid and suboptimal - in large part because we have inadequate tools to handle externalities (both positive and negative). We have a robust free market which can efficiently achieve outcomes that are optimal in the absence of externalities, and a representative government that is capable of regulating and taxing transactions with negative externalities, as well as subsidizing transactions with positive externalities. However, these capabilities are often under- and over-utilized, and the representative government is not usually incentivized to deal with externalities that affect a small minority of the population represented - plus, when the government does use its mandate to regulate and subsidize, it is very often controversial, even in cases where the economic case for intervention is straightforward.

If the free market and representative government are the signs that separate us from feudalism, what separates the ideal society from us? If I had to guess, public goods markets (PGMs) such as quadratic funding are a big player - PGMs are designed to subsidize projects that have large positive externalities, and I suspect that the mechanism can be easily extended to discourage actions with negative externalities (although I worry that cancel-culture dynamics may cause problems with this)

 

[1] 'Feudalism' is understood in this context not just as a political structure, but also as an alternative to a free market

(NB: My use of the corner brackets ⌞ ⌟ is to indicate intended parsing to prevent potential misreadings)
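
Since quadratic funding is doing a lot of the work in this comment, here is a minimal sketch of the standard matching rule (the Buterin-Hitzig-Weyl formula) just to make the mechanism concrete; the contribution numbers are invented for illustration.

```python
# Minimal sketch of the quadratic funding matching rule: a project's total funding
# is the square of the sum of the square roots of its individual contributions.
from math import sqrt

def quadratic_funding(contributions):
    """Total funding a project receives under the standard QF formula."""
    return sum(sqrt(c) for c in contributions) ** 2

broad  = [1.0] * 100      # 100 people giving $1 each
narrow = [100.0]          # one person giving $100

for name, cs in [("broad", broad), ("narrow", narrow)]:
    total = quadratic_funding(cs)
    print(f"{name}: raised ${sum(cs):.0f}, funded ${total:.0f}, "
          f"matched ${total - sum(cs):.0f} from the subsidy pool")
```

The broadly supported project ends up heavily subsidized while the single large donor gets no match at all, which is exactly the property that makes the mechanism favour goods whose benefits are widely distributed.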

Replies from: Viliam
comment by Viliam · 2020-10-25T17:50:36.048Z · LW(p) · GW(p)

If the free market and representative government are the signs that separate us from feudalism, what separates the ideal society from us?

The things that separate us from the ideal society will probably seem obvious from hindsight -- assuming we get there. But in order to know that, large-scale experiments will be necessary, and people will oppose them, often for quite good reasons (a large-scale experiment gone wrong could mean millions of lives destroyed), and sometimes for bad reasons, too.

Frequently proposed ideas include: different voting systems, universal basic income, land tax, open borders...

comment by MikkW (mikkel-wilson) · 2021-06-18T03:54:56.613Z · LW(p) · GW(p)

The Roman Kingdom and Roman Empire both fell because of ineffective leaders. The Roman Republic fell because of extremely competent, but autocratic, leaders.

Replies from: Pattern
comment by Pattern · 2021-06-18T20:58:55.426Z · LW(p) · GW(p)

Who brought it down, or who were too essential and when they died it collapsed?

Replies from: mikkel-wilson
comment by MikkW (mikkel-wilson) · 2021-06-19T00:36:39.674Z · LW(p) · GW(p)

The Kingdom was overthrown; the last kings were not particularly well-loved by the people, and when the king's son Sextus Tarquinius raped Lucretia, the wife of an important general, the people deposed King Tarquin and established the Republic, in particular creating the rule that any man who tried to make himself king could be killed on the spot without repercussions.

The Roman Republic gave way to the Empire not all at once, but over the course of several different periods of leadership (since the consuls, the main leaders of the Republic, were elected for 1 year terms that couldn't be immediately repeated, there's a long list of leaders for any era). Julius Caesar did not start the end of the Republic, but he put the final nails in the coffin, having led an army in insurrection against the government, and becoming a king in all but name by the end of his life. The assassination of Caesar led to a series of civil wars, which ended with his grand-nephew and adopted heir Augustus becoming Emperor of Rome. Needless to say, Julius Caesar and Augustus were both very competent men, as were many of the men who rivaled them for power, and all involved (with the exception of Augustus, who inherited his influence from Caesar) owed their influence to having been elected by the people of Rome.

As for the fall of the Empire, really the history of the fall of the Empire is just the history of the Empire, period. Sure, there were good Emperors who ruled well and competently, and the fullest extent of the reach of the Empire came after the Republic had already been overthrown, but for every good Emperor, there's another bad Emperor to mirror him, one who treats his populace in the cruelest ways imaginable and blunders away influence and soft power. Already as soon as the first Emperor Augustus died, we get Tiberius, who wasn't exactly great, then Caligula, whose name has justly become synonymous with overflowing sadism and needless excess.

Rome grew to become the great power that it was during the Republic, and the story of the Empire is the story of that great power slowly crumbling and declining under the rule of cruel and incompetent leaders, punctuated by the occasional enlightened ruler who would slow that decline for another 20 or 30 years.

comment by MikkW (mikkel-wilson) · 2021-06-04T00:41:28.631Z · LW(p) · GW(p)

Rule without proportional representation is rule without representation

Taxation without proportional representation is taxation without representation.

Replies from: MakoYass
comment by MakoYass · 2021-06-07T06:12:51.131Z · LW(p) · GW(p)

Public funding seems especially easy to make truly democratic (proportionate to the needs of the voters, without majoritarian dynamics, additive), so it's weird to me that it took cryptocurrencies for it to start to happen.

comment by MikkW (mikkel-wilson) · 2021-05-30T19:14:45.008Z · LW(p) · GW(p)

It's funny, "toxic" is one of the most toxic words these days

Replies from: MakoYass
comment by MakoYass · 2021-06-07T06:37:30.990Z · LW(p) · GW(p)

I wish I could figure out what factor divides people into these two language groups. For one group, there is toxic masculinity and there is non-toxic (or just ordinary) masculinity. For the other, uttering "toxic masculinity" directly means "all masculinity is toxic". I do not know how they came apart.

Replies from: mikkel-wilson
comment by MikkW (mikkel-wilson) · 2021-06-07T15:34:28.931Z · LW(p) · GW(p)

To be clear, my original post referred to more than just "toxic masculinity".

On that particular subject, the divergence in meaning is that some people identified a motte-and-bailey where people would say "toxic masculinity", defend the term by saying it's referring to a particular subset of masculinity that is problematic, but would then go on to use the phrase to refer to parts of masculinity which are not clearly problematic.

That isn't a linguistic divergence, but some people recognizing a subtext that the original group would deny their words contain

Replies from: MakoYass
comment by MakoYass · 2021-06-08T01:33:22.156Z · LW(p) · GW(p)

but would then go on to use the phrase to refer to parts of masculinity which are not clearly problematic

I think this is usually a disagreement about which parts of masculinity are problematic. Their position might be really ignorant and hateful, but I think it's sincere.

comment by MikkW (mikkel-wilson) · 2021-05-27T20:25:18.586Z · LW(p) · GW(p)

Dony Christie and I have been having a back-and-forth about the phrase "public goods market" (often shortened to PGM). Originally, I coined the phrase as a way to refer to Quadratic Funding, a mechanism that is quite important, but whose most common name is not very resonant and is prone to alienate non-technically minded folks- whereas "public goods market" carries a clearer meaning even to an average person; while "a public good" and "a market" both have technical meanings that are leveraged by the phrase, it also evokes "the public good" (i.e. "the common good") and a marketplace, concepts that typical people are familiar with.

In the essay where I originally introduced the phrase [EA · GW], I acknowledged that "public goods market" could potentially refer to a larger set of mechanisms than just Quadratic Funding - any mechanism that creates a market that incentivizes the creation of public, not just private, goods can be described as a "public goods market". But I've also gotten into the habit of using the phrase and its acronym synonymously with QF. However, Dony took me to task on this, since he argues that assurance contracts (i.e. kickstarters) and dominant assurance contracts (ACs and DACs, respectively) are also public goods markets.

It’s certainly clear that kickstarters and DACs create public goods, where the scope of the “public” is the people who participate in (and clearly benefit from) the contract, which can be a quite large group of people, much larger than the stereotypical transaction, which only involves two people, a buyer and a seller [1].

So the question of whether kickstarters are public goods markets comes down to whether or not kickstarters are “markets”. Wikipedia introduces markets as “a composition of systems, institutions, procedures, social relations or infrastructures whereby parties engage in exchange”. Based on this definition, while a single kickstarter is not a market (just as the act of me buying a soda is not a market), a website such as Kickstarter or IndieGoGo, or the ecosystem within which they exist, is indeed a market - the systems and procedures are those of the kickstarter mechanism, the institution is the website, which provides the infrastructure, and the parties who participate in the kickstarter are engaging in an exchange, performing an action or creating a product, in return for others performing an action or providing monetary compensation.

So really, I need to break the habit of using “public goods market” to mean quadratic funding, and find a more specific and resonant phrase to refer to QF specifically.

[1] One potential objection to viewing kickstarters as a mechanism that creates public goods is that the mechanism still does not consider or handle negative externalities (i.e., bad things that will happen to people who are not participating in the contract), which QF is able to handle via negative votes (although in practice, I have seen negative votes be excluded from the mechanism when implemented, which I take issue with)

comment by MikkW (mikkel-wilson) · 2021-05-27T20:04:04.993Z · LW(p) · GW(p)

It really irks me when people swap "i.e." and "e.g." - i.e. stands for id est - "that is", and indicates that exactly the items listed, and no others, are meant by the  phrase that is being clarified, while e.g. stands for exempli gratia - "for the sake of example", and indicates that the listed items are only a small number of examples of a larger set, and that many items have been omitted.

When I read, my brain always tries to apply the corresponding meaning when I come across i.e. and e.g., and it breaks my brain when the wrong symbol was used, which I find very annoying.

comment by MikkW (mikkel-wilson) · 2021-05-25T15:23:25.399Z · LW(p) · GW(p)

Something I disagree with: Writing advice often implores one to write in a "strong" way, that one should sound authoritative, that one should not sound uncertain.

While I agree that this can create a stronger reaction in the audience, it is a close sibling to dishonesty, and communication is best facilitated when one feels comfortable acknowledging the boundaries of their ability to know.

But perhaps I'm wrong- when one writes, one is not writing for an ideal Bayesian reasoner under the assumption of perfect honesty, since ideal Bayesian reasoners are not physically possible, and one cannot reliably prove that one is being honest, but one is rather writing for humans, and perhaps the most efficient way of transferring information given these constraints is to exaggerate one's confidence.

Or perhaps the important bit is that humans care greatly about common knowledge- Charlie doesn't care so much about the information that Charlie can derive from what David says, but rather how the people around Charlie will respond to what David says, and Charlie can more easily predict others' reactions when David avoids indicating uncertainty, thereby making Charlie more comfortable taking a similarly strong position.

Replies from: bfinn
comment by bfinn · 2021-05-27T13:41:16.912Z · LW(p) · GW(p)

As it happens, I came across this issue of strength (& its reverse, qualification) for the very first time this morning, in Paul Graham's essay How To Write Usefully. Here are his thoughts on the matter, FYI:
http://www.paulgraham.com/useful.html

comment by MikkW (mikkel-wilson) · 2021-03-08T17:56:34.130Z · LW(p) · GW(p)

American presidential elections should come in two phases: first, asking if the incumbent should continue in office, and then (if the majority says no), a few months later, deciding who should replace them. This would be a big improvement over how we do things now. Let's make it the 34th amendment.

Replies from: Dagon, gerald-monroe, mikkel-wilson
comment by Dagon · 2021-03-08T19:48:20.763Z · LW(p) · GW(p)

Most voters' answers to the first question (should we retain the incumbent) depend heavily on the second (who gets the spot).  What's the benefit of separating these?  Why not reverse it (vote on the best replacement excluding the incumbent, then a runoff between that winner and the incumbent), or combine them (as we do today, but with instant-runoff or other changes that are unstated but necessary for your proposal)?

comment by Gerald Monroe (gerald-monroe) · 2021-03-09T09:14:54.766Z · LW(p) · GW(p)

This and many other improvements will never happen.  The founders locked the codebase by requiring 2/3, 2/3, and 75% (of the states).  Therefore it is simply not possible to make any meaningful improvements, because really changing something requires someone to lose, or to perceive that they are losing - even when they are winning in absolute terms but their relative status is shrinking (for example, an economic change that grew the economy and reduced wealth inequality).

I see 2 future routes where these bugs get fixed:

   a.  Eventually, the United States may fall.  It may take decades of slow decay, but eventually another power without certain flaws may be able to take over one way or another.  The European Union is an example of this - the EU has trumped many incorrect member-country laws and policies with its own, hopefully superior versions.

b.  The problem we have right now is that each of us doesn't know the truth, and is being manipulated to act against our own self-interest.  Maybe AI could solve this problem and give us all a shared, correct, and common worldview again.  For most Americans alive, "which government policy maximizes my well-being" is a factual question with a shared answer.

I am not talking specific politics; just that if you have policy A and policy B, most Americans alive will receive more benefit from one of the two policies than the other, and it will be the same policy.  In addition, while we cannot know the future, all available evidence can be combined to determine the expected values of [A, B] against most people's utility heuristics, and for most people it will recommend the same one of [A or B].

But if the right answer is A, currently endless ads may try to scam people in voting for B, and sometimes B wins.

comment by MikkW (mikkel-wilson) · 2021-03-08T17:57:47.161Z · LW(p) · GW(p)

(Obviously, this would only apply to elections at the end of an incumbent's first term. Elections where the incumbent is already outgoing wouldn't look any different)

comment by MikkW (mikkel-wilson) · 2021-03-03T00:34:34.149Z · LW(p) · GW(p)

Myers-Briggs is often criticized, but my understanding is that each of the four categories tracked corresponds to a variable that actually does vary from person to person - it's just that the traits are distributed on a unimodal bell curve, instead of being binarily distributed (each trait is continuous, instead of being either-or). But just like how height is a real thing that matters and is continuous, the Myers-Briggs categories are real things that matter; just as there are short people and tall people, there are extroverts and introverts, and there are thinkers and feelers.

But there’s still something missing: I often describe people as not just tall or short; there are also many average-height people. If I describe someone as short, you know that they’re decently short, and if I describe someone as tall, you know they’re decently tall, and most people are just “medium height”. Myers-Briggs, as it is currently commonly expressed, fails to communicate about this prevalent middle ground. There are extroverts and introverts, but there are also many people who are right in the middle. There are thinkers and feelers, but there are also people who do a little bit of both. There are sensers and intuitors, but many people just walk the middle ground.

I think it’s valuable to be able to express when a person is around the median for a particular trait: I’m close to middle between introversion and extroversion (though I’m clearly on the introvert side), my thinking style very much has qualities of both sensing and intuition, whereas I’m much more clearly on one side or the other in terms of being a thinker and a “perceiver”. I don’t yet have a good notation for this, but you could call me an ~~TP or an I~TP, where the tilde (~) indicates a median trait.

———

Just like there are kinda-tall people (6’ 1”) and very tall people (7’ 0”), there are also people who are only slightly extroverted vs. very extroverted; there are the extreme tails, and the defined slopes. It’s valuable to be able to communicate whether a person is very much a certain way, or only slightly that way. Perhaps I’m not ~~TP, but rather [I]~TP or [I]~[T]P, where the brackets ([ ]) indicate mild levels of that trait, as opposed to a more noticeable and pronounced personality. There may be a better way to notate this, but I do feel it helps communicate about people’s personalities in a slightly more detailed way.
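As a minimal sketch, here is how the proposed notation could be generated mechanically from continuous trait scores. The score scale and the cutoffs are illustrative assumptions of mine, not part of the proposal:

```python
def trait_symbol(score: float, low_letter: str, high_letter: str) -> str:
    """Map a continuous trait score in [-1, 1] to the proposed notation.

    The cutoffs are arbitrary illustrative choices, not part of the proposal itself:
      |score| < 0.15 -> "~"    (median trait)
      |score| < 0.40 -> "[X]"  (mild trait)
      otherwise      -> "X"    (pronounced trait)
    """
    letter = low_letter if score < 0 else high_letter
    if abs(score) < 0.15:
        return "~"
    if abs(score) < 0.40:
        return f"[{letter}]"
    return letter

# Slightly introverted, median on sensing/intuition, clearly a thinker, clearly a perceiver:
scores = [(0.3, "E", "I"), (0.05, "S", "N"), (-0.7, "T", "F"), (0.8, "J", "P")]
print("".join(trait_symbol(s, lo, hi) for s, lo, hi in scores))  # prints "[I]~TP"
```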

Replies from: gworley
comment by G Gordon Worley III (gworley) · 2021-03-03T02:16:06.617Z · LW(p) · GW(p)

This is fair, but I think the more common objection to MB is that its dimensions are too correlated and thus measuring the same thing. The Big-5/OCEAN model is explicitly designed to not have this problem.

Replies from: mikkel-wilson
comment by MikkW (mikkel-wilson) · 2021-03-03T04:48:53.352Z · LW(p) · GW(p)

I don't think "the same thing" is exactly right, since they are not perfectly correlated, but that is an objection

comment by MikkW (mikkel-wilson) · 2021-01-13T08:39:55.575Z · LW(p) · GW(p)

It seems to me that months ago, we should have been founding small villages or towns that enforce contact tracing and required quarantines, both for contacts of people who are known to have been exposed, and for people coming in from outside the bubble. I don't think this is possible in all states, but I'd be surprised if there was no state where this is possible.

Replies from: Dagon
comment by Dagon · 2021-01-13T17:12:23.280Z · LW(p) · GW(p)

I think it'd be much simpler to find the regions/towns doing this, and move there.  Even if there's no easy way to get there or convince them to let you in, it's likely STILL more feasible than setting up your own.  

If you do decide to do it yourself, why is a village or town the best unit?  It's not going to be self-sufficient regardless of what you do, so why is a town/village better than an apartment building or floor (or shared- or non-shared house)?

In any case, if this was actually a good idea months ago, it probably still is.  Like planting a tree, the best time to do it is 20 years ago, and the second-best time is now.  

Replies from: mikkel-wilson
comment by MikkW (mikkel-wilson) · 2021-01-13T17:36:32.881Z · LW(p) · GW(p)

Are there any areas in the states doing this? I would go to NZ or South Korea, but getting there is a hassle compared to going somewhere in the states. Regarding size, it's not about self-sufficiency, but rather being able to interact in a normal way with other people around me without worrying about the virus, so the more people involved the better

Replies from: Dagon
comment by Dagon · 2021-01-13T20:51:35.635Z · LW(p) · GW(p)

getting there is a hassle

That was my point. Doesn't the hassle of CREATING a town seem incomparably larger than the hassle of getting to one of these places?

Replies from: mikkel-wilson
comment by MikkW (mikkel-wilson) · 2021-01-13T22:55:41.265Z · LW(p) · GW(p)

On an individual basis, I definitely agree. Acting alone, it would be easier for me to personally move to NZ or SK than to found a new city. However, from a collective perspective (and if the LW community isn't able to coordinate collective action, then it has failed), if a group of 50 - 1000 people all wanted to live in a place with sane precautions, and were willing to put in effort, creating a new town in the states would scale better (moving countries has effort scaling linearly with the magnitude of population flux, while founding a town scales less than linearly)

Replies from: TurnTrout, Dagon
comment by TurnTrout · 2021-01-13T23:00:04.701Z · LW(p) · GW(p)

while founding a town scales less than linearly

I think you're omitting constant factors from your analysis; founding a town is so, so much work. How would you even run utilities out to the town before the pandemic ended?

Replies from: mikkel-wilson
comment by MikkW (mikkel-wilson) · 2021-01-13T23:32:20.106Z · LW(p) · GW(p)

I acknowledge that I don't know how the effort needed to found a livable settlement compares to the effort needed to move people from the US to a Covid-good country. If I knew how many person-hours each of these would take, it would be easier for me to know whether or not my idea doesn't make sense.

Replies from: Raemon
comment by Raemon · 2021-01-14T00:22:49.176Z · LW(p) · GW(p)

FYI, folk at MIRI seem to be actively looking into this, but, it is indeed pretty expensive and not an obviously good idea.

comment by Dagon · 2021-01-13T23:38:16.154Z · LW(p) · GW(p)

if the LW community isn't able to coordinate collective action, then it has failed

Oh, we're talking about different things.  I don't know much about any "LW community", I just use LW for sharing information, models, and opinions with a bunch of individuals.  Even if you call that a "community", as some do, it doesn't coordinate any significant collective action.  I guess it's failed?

Replies from: mikkel-wilson
comment by MikkW (mikkel-wilson) · 2021-01-14T00:27:39.922Z · LW(p) · GW(p)

Sorry, I don't think I succeeded at speaking with clarity there. The way you use LW is perfectly fine and good.

My view of LW is that it's a site dedicated to rationality, both epistemic and instrumental. Instrumental rationality is, as Eliezer likes to call it, "the art of winning". The art of winning often calls for collective action to achieve the best outcomes, so if collective action never comes about, then that would indicate a failure of instrumental rationality, and thereby a failure of the purpose of LW.

LW hasn't failed. While I have observed some failures of the collective userbase to properly engage in collective action to the fullest extent, I find it does often succeed in creating collective action, often thanks to the deliberate efforts of the LW team.

Replies from: Dagon
comment by Dagon · 2021-01-14T00:35:49.137Z · LW(p) · GW(p)

Fair enough, and I was a bit snarky in my response.  I still have to wonder, if it's not worth the hassle for a representative individual to move somewhere safer, why we'd expect it's worth a greater hassle (both individually and the coordination cost) to create a new town.  Is this the case where rabbits are negative value so stags are the only option (reference: https://www.lesswrong.com/posts/zp5AEENssb8ZDnoZR/the-schelling-choice-is-rabbit-not-stag)? [LW · GW]  I'd love to see some cost/benefit estimates to show that it's even close to reasonable, compared to just isolating as much as possible individually.

comment by MikkW (mikkel-wilson) · 2021-01-08T21:46:20.172Z · LW(p) · GW(p)

Life needs energy to survive, and life needs energy to reproduce. This isn't just true of biological life made of cells and proteins, but also of more vaguely life-like things - cities need energy to survive, nations need energy to survive and reproduce, even memes rely on the energy used by the brains they live in to survive and spread.

Energy can take different forms - as glucose, starches, and lipids, as light, as the difference in potential energy between four hydrogen atoms and the helium atom they could (under high temperatures and pressures) become, as the gravitational potential of water held behind a dam or of a heavy object waiting to fall, or as the gradient of heat that exists between a warm plume of water and the surrounding cold ocean, just to name a few forms. But anything that wants claim to the title of being alive, must find energy.

If a lifeform cannot find energy, it will cease to create new copies of itself. Those things which are abundant in our world are things that successfully found a source of energy with which to be created (cars and chairs might be raised as an exception, but they too were indeed created with energy, and either a prototypical idea, or the image of another car or chair in someone's mind, needed to find energy in order to create that object).

The studies of biology and economics are not so far separated as they might seem - at the core of both fields is the question: "Can this phenomenon (organization, person, firm) find enough energy to survive and inspire more things like it?". This question also drives the history of the world. If the answer is no, that phenomenon will die, and you will not notice it. Or, you might notice the death throes of a failed phenomenon, but only because something else, which did find energy, enabled that failed phenomenon to happen. Look around you. All the flowers you see, the squirrels, the humans, the buildings, the soda cans, the roadways, the grass, the birds. All of these phenomena somehow found energy with which to be created. If they hadn't, you wouldn't be looking at them; they would never have existed.

The ultimate form of life is the life that best gathers energy. The Cambrian explosion happened because first plants discovered they could turn light into usable food, and then animals discovered they could use a toxic waste by-product of that photosynthesis - oxygen - as a (partial) source of energy. Look around you. Where is there free energy lying around, unused? How could that energy be captured? Remember, the nation that can harness that energy will be the nation that influences the world. The man who takes hold of that energy can become the wealthiest man in the world.

comment by MikkW (mikkel-wilson) · 2020-12-17T18:23:45.965Z · LW(p) · GW(p)

Thinking about rationalist-adjacent poetry. I plan on making a post about this once I have a decent collection to seed discussion, then invite others to share what they have.

  • Tennyson's poems Ulysses and Locksley Hall both touch on rationalist-adjacent themes, among other themes, so I'd want to share excerpts from those
  • Piet Hein has some 'gruks' that would be worth including (although I am primarily familiar with them in the original Danish - I know there exist English translations of most of them, but I'll have to choose carefully, and the translations don't always capture the exact feeling of the original)
  • I have shared two works of my own here on my shortform that I'd want to include
  • Shakespeare's "When I do count the clock that tells the time" is a love poem, but it invokes transhumanist feelings in me
Replies from: mingyuan, ChristianKl
comment by mingyuan · 2020-12-18T06:58:38.494Z · LW(p) · GW(p)

Hey, "When I do count the clock" is my favorite sonnet too! "And death once dead, there's no more dying then" <3

I also recommend "Almighty by degrees" by Luke Murphy (only available on Kindle I think) – I bought it because of an SSC Classified Thread, and ended up using a poem from it in my Solstice last year. There's also a poetry tab on my masterlist of Solstice materials. Damn I love poetry.

comment by ChristianKl · 2020-12-17T19:00:02.248Z · LW(p) · GW(p)

Daniel's secular sermons are good. 

Replies from: mikkel-wilson
comment by MikkW (mikkel-wilson) · 2020-12-17T19:22:51.905Z · LW(p) · GW(p)

Thanks for the link 👍

comment by MikkW (mikkel-wilson) · 2020-12-17T02:42:13.918Z · LW(p) · GW(p)

Ideal Chess

Chess is fairly well known, but there's also an entire world of chess variants, games that take the core ideas of chess and change either a few details or completely reimagine the game, either to improve the game, or just change the flavour of the game. There's even an entire website dedicated to documenting different variants of chess.

Today I want to tell you about some classic chess variants: Crazyhouse chess, Grand chess, and Shogi (Japanese chess), and posit a combination of the first two that I suspect may become my favorite chess when I have a chance to try it.

I.

Shogi is the version of chess that is native to Japan, and it is wildly different from western chess - both western chess and shogi have evolved continuously from the original chaturanga as the game spread out from India. The core difference between shogi and the familiar western chess is that once a piece has been captured, the capturing player may later place the piece back on the board as his own piece. But if that were the only difference, it would make for a very crazy game: the pieces in western chess are so powerful, and the king so weak, that the game would be filled with precarious situations requiring the players to always have their guard up for an unexpected piece drop, and checkmate would never be more than a few moves away unless both players were paying close attention.

In fact, this version is precisely crazyhouse chess, and this property is both what makes crazyhouse chess so beloved and fun, and what stands in the way of its being taken as seriously as orthodox chess. There are two ways that this barrier could be overcome - either the king can be buffed, giving him more mobility to better dodge the insanity that the drops create, or the pieces can be nerfed, making them much less powerful, and in particular giving them less influence at long range. Shogi chooses the route of nerfing the pieces, replacing the long-ranged and very influential pieces used in orthodox chess with a set of pieces that have much more limited mobility, such as the lance, which moves like a rook but can only move straight forward (thereby limiting it to a single file), the keima (knight), which moves like a knight but only to the two forward-most squares, or the gold and silver generals, who can only move in a subset of the directions that a king can move in. Since each piece isn't much stronger than a king, it is much easier for the king to dodge the threats produced by each piece, and a king can only be checkmated when the pieces act in coordination to create a trap for the king. (This is the basis of tsume-shogi, checkmate puzzles for shogi. They are fun to solve, and I recommend trying them out to get a feel for how different checkmates in shogi are from orthodox chess checkmates.)

I think shogi and crazyhouse solve a problem that I have with modern orthodox chess: the game ends in draws far too often, and the endgame is just too sparse for my taste. You can get good puzzles out of orthodox endgames, but I find the endgames of shogi and crazyhouse to be much more fun and much more exciting.

I a.  (An aside)

While I'm on the topic of shogi and crazyhouse, shogi pieces look quite different from the pieces used in orthodox chess: they are flat, wedge-shaped tiles marked with the piece's name, rather than sculpted figures.

I quite like the look of these pieces, and it provides a solution to a practical problem that arises from the piece drop mechanic: With orthodox chess pieces, one would need two sets of chess pieces, a double-sized army for each player, since each player may have up to twice the regular amount of each type of piece after they capture the enemy’s pieces. With these flat, wedge shaped pieces, though, a player can just make the piece face in the opposite direction towards their opponent, and a single set of pieces is enough to play the game. While I think this solution works, and these pieces are quite iconic for shogi, it just doesn’t feel right to play crazyhouse chess with pieces like this: crazyhouse chess is orthodox chess at its core, and it feels right to play crazyhouse with orthodox chess pieces. My ideal solution would be pieces that are as tall as orthodox chess pieces, and have a similar design language, but which are anti-symmetric: the pieces would have flat tops and bottoms, and can be flipped upside down to change the colour of the piece, since one end would be white, and the other end would be black. I imagine the two colours would meet in the middle, with a diagonal slant so that it would show one colour primarily to one player, and the other colour to the other.

II

It's been an observation made more than once that there's a certain feeling of completeness to the orthodox chess pieces: the rook and bishop each move straight in certain directions, either perpendicular / parallel to the line of battle, or diagonally to it. If you were to draw a 5x5 square around each piece, the knight can move to precisely the squares that a rook and bishop can't go to. And the queen can be viewed as the sum of a rook and a bishop. It all feels very interconnected, and almost perfectly complete and platonic. Almost perfectly, because there are two sums that we don't have in orthodox chess: the combination of rook + knight, and bishop + knight. These pieces, called the marshal and the cardinal, are quite fun pieces to play with, and I would not argue that chess is a better game for omitting them. As such, there have been proposals to add these pieces to the game, the most well-known of which are Capablanca chess and grand chess, proposed by World Chess Champion J. R. Capablanca and the game designer Christian Freeling, respectively. The main difference between the two is that Capablanca chess is played on a board 10 wide by 8 tall, while grand chess is played on a 10x10 board, with an empty rank behind each player's pieces, aside from the rooks, which are placed in the very back corners (what about castling? Simple: you can't castle in grand chess).

The additional width of the board in Capablanca and grand chess is used to allow one each of the marshal and cardinal to be placed in each player's army. Aside from the additional pieces and larger board, grand chess plays just like regular chess, but I think it deserves to be considered seriously as an alternative to the traditional rules for chess.

III.

While an introduction to chess variants would make a good topic for a post on this website, that's not what I'm writing right now. While these three games would certainly be present in such an article, the selection would be far too limited, and far too conservative - there are some really crazy, wacky, fun, and brilliant ideas in the world of chess variants which I won't be touching on today. I'm writing today because I want to talk about what I think may be the best contender as a replacement for orthodox chess: a cross between grand chess and crazyhouse, with a slight modification to better handle the drop mechanic of crazyhouse. It's clear that Capablanca chess and grand chess were intended from the very start as rivals to the standard ruleset, and I mentioned previously that shogi solves a problem that I have with orthodox chess: orthodox chess ends in too many draws, and I find orthodox endgames to be less exciting than crazyhouse and shogi endgames. My ideal game of chess would look more like crazyhouse than orthodox chess, since drops just make chess more fun. As I mentioned before, while crazyhouse is a fun game, it's just too intense and unpredictable to present a serious challenge to orthodox chess (at least, that is what I suspected as I was thinking about this post). There are two ways this can be addressed: the first is to do as shogi did, and make almost all the pieces as weak as the king, so the king can more easily survive against the enemy pieces; but doing this makes the game a different game from orthodox chess - it's no longer just a variant of orthodox chess, it's a completely different flavour. A flavour that I happen to love, but not the flavour of orthodox chess. I wanted a game that would preserve the heart of orthodox chess, while giving it the dynamic aspect allowed by drops, but more balanced and sane than crazyhouse chess.

So let's explore the second way to balance crazyhouse chess: instead of nerfing the pieces, let's make the king more formidable, more nimble, and able to more easily survive the intensity of drop chess. I haven't playtested this yet, but it seems appropriate to give the king the 4 backwards moves of the knight: This will give mobility to the king, without giving it too much mobility, and limiting the king to the backwards moves will ensure that it remains a defensive piece, and doesn't gain a new life as an aggressive part of the attacking force. Playtesting may prove this to be too weak (I don't anticipate that it will make it too strong): If this is the case, a different profile of movement may make sense for the king, but in any case, it is clear that increasing the mobility of the king will allow for a balanced form of drop chess.

Ideal Chess

So my ideal chess would differ from orthodox in the following ways:

  • The game is played on a 10x10 board, instead of the traditional 8x8 board (I feel that a wider board will make for a more fun, and deeper, game of chess)
  • The game will feature the marshal (rook + knight) and cardinal (bishop + knight) of grand chess, and will have the pieces arranged in the same way as grand chess (this also implies no castling)
  • When a piece is captured, it may be dropped back in to the game by the capturing player (working exactly as in crazyhouse chess or shogi)
  • The king may, in addition to its usual move, move using one of the 4 backwards moves of the knight. Pieces may be captured using this backwards move.

Ideally, the game would be played using the tall, bichromatic, antisymmetric pieces I propose in section I a of this post.
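To make concrete how small the rule change actually is, here is a minimal sketch of move generation for the buffed king on the 10x10 board. The coordinate convention is my own illustrative choice, not taken from any existing engine:

```python
# Squares are (file, rank), with rank increasing toward the opponent for White,
# so "backwards" for White means decreasing rank (and the reverse for Black).
ORTHODOX_KING_STEPS = [(df, dr) for df in (-1, 0, 1) for dr in (-1, 0, 1) if (df, dr) != (0, 0)]
BACKWARD_KNIGHT_STEPS = [(-1, -2), (1, -2), (-2, -1), (2, -1)]  # the 4 rearward knight jumps

def king_destinations(square, board_size=10, is_white=True):
    """All squares the buffed king could step to (ignoring checks and occupancy)."""
    file, rank = square
    sign = 1 if is_white else -1  # flip the rearward jumps for Black
    steps = ORTHODOX_KING_STEPS + [(df, sign * dr) for df, dr in BACKWARD_KNIGHT_STEPS]
    return [(file + df, rank + dr) for df, dr in steps
            if 0 <= file + df < board_size and 0 <= rank + dr < board_size]

# A white king in the middle of an empty 10x10 board now has 8 + 4 = 12 destinations:
print(len(king_destinations((4, 4))))  # 12
```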

Replies from: Raemon
comment by Raemon · 2020-12-17T04:04:17.409Z · LW(p) · GW(p)

This was neat, would appreciate it as a top-level post (albeit probably a personal blog one), although it also does seem fine as shortform.

Replies from: mikkel-wilson, mikkel-wilson
comment by MikkW (mikkel-wilson) · 2020-12-18T00:59:53.728Z · LW(p) · GW(p)

I have now made this into a top-level post [LW · GW]

comment by MikkW (mikkel-wilson) · 2020-12-17T16:22:11.294Z · LW(p) · GW(p)

I'm curious to hear more about why you are recommending putting it as a top level personal post- is it length, format, quality, a combination of these, or something else?


I notice that I have some reluctance to post "personal blog" items on the top level- even though I know that the affordance is there, I instinctively only want to post things that I feel belong as frontpage items as top-level posts. I also notice that I feel a little weird when I see other people's personal posts as top-level posts here. I'm certainly not arguing that I have any problem with the way things are now, or arguing that this shouldn't be a top-level post, I'm just putting my subconscious feelings into words.

As for how this post ended up in shortform, I originally started typing it into the shortform box, and I didn't realize it would be this long until after I had already written a good chunk of it, and I just never decided to change it to a top-level post

Replies from: ChristianKl
comment by ChristianKl · 2020-12-17T18:55:18.262Z · LW(p) · GW(p)

I think that if something might want to be shared via a link, putting it into a top-level post is valuable.

comment by MikkW (mikkel-wilson) · 2020-11-30T06:16:23.377Z · LW(p) · GW(p)

There are two ways to consider the constitutional foundation of the modern United States: A) as the Constitution itself and its amendments, interpreted according to what the authors meant when it was written, or B) as the de facto modern interpretation and application of constitutional jurisprudence and precedent, which is often considered to be at odds with the original intent of the authors of the Constitution and its amendments, but nonetheless has become widely accepted practice.

Consider: which of these is the conservative approach, and which is the liberal approach? By liberal and conservative, I don't mean left-wing or right-wing, but am using them in the sense that conservatives conserve what exists, while liberals are liberal in considering different ways things might be (the original meaning of these terms)

The first option, A, which only looks at the written document itself, might often be described as a conservative approach, while B, which throws out the original intent and substitutes a new spirit to it, may be viewed as liberal. But I contend that it is actually the inverse: the conservative view of the US's constitutional foundation is to conserve the existing precedent in how its government functions, which dates back broadly to the 1930's, with some of the modern understanding of the constitutional foundation even going back as far as the Civil War, and has thus been the law of the land for nearly a century and a half.

Meanwhile, the approach of throwing out modern interpretation and precedent in favor of the original intent and meaning of the Constitution is quite a liberal approach, swapping a system that has been shaped and strengthened by cultural evolution for a prototype which is untouched by cultural evolution, and should (by default) be regarded with the same level of suspicion that liberal (i.e. paradigm shifting) proposals should (by default) be regarded with.

comment by MikkW (mikkel-wilson) · 2020-10-12T03:19:29.323Z · LW(p) · GW(p)

I've been considering the possibility of the occurrence of organized political violence in the wake of this year's election. I have been noticing people questioning the legitimacy of the process by which the election will be conducted, with the implied inference that the outcome will be rigged, and therefore without legitimacy. It is also my understanding that there exist organized militias in the US, separate from the armed forces, which are trained to conduct warfare, ostensibly for defense reasons, which I have reason to believe have a nontrivial probability of attempting to take control in their local areas in the case of an election result that they find unfavourable.

Metaculus currently gives 3% probability of a civil war occurring in the wake of this election. While there are many scenarios which would not lead to the description of civil war, this probability seems far too low to me.

Replies from: Dagon
comment by Dagon · 2020-10-12T15:36:07.937Z · LW(p) · GW(p)

3% seems too high for me, depending on definition.  I'd put it at around 1% of significant violent outbreaks (1000+ deaths due to violence), and less than 0.2% (below which point my intuitions break down) of civil war (50k+ deaths).  If you include chance of a coup (significant deviance from current civil procedures with very limited violence), it might hit 3%.  

Metaculus is using a very weak definition - at least two of four listed agencies (Agence France-Presse (AFP), Associated Press (AP), Reuters and EFE) describe the US as being in civil war.  There are a lot of ways this can happen without truly widespread violence.

I think you're misinformed about militias - there are clubs and underground organizations that call themselves that - they exist and they're worrisome.  But they're not widespread nor organized, and 'trained to conduct warfare' is vastly overstating it.  There IS some risk (IMO) in big urban police forces - they are organized and trained for control of important areas, and over the years have become too militarized.  I think it's most likely that they're mostly well-enough integrated into their communities that they won't go much further than they did in the protests this summer, but if the gloves really come off, that'll be a key determinant.

comment by MikkW (mikkel-wilson) · 2020-09-02T17:25:27.513Z · LW(p) · GW(p)

The phrase "heat death of the universe" refers to two different, mutually exclusive possibilities:

  1. The universe gets so hot that it's practically impossible for any organism to maintain enough organization to sustain itself and create copies of itself, or:
  2. The universe gets so cold that everything freezes to death, and no organism can make the work happen that is needed to create more copies of itself

Originally, the heat death hypothesis referred to #1: we thought that the universe would get extremely hot. After all, heat death is a natural consequence of the second law of thermodynamics, which states that entropy can only increase, never decrease, and ceteris paribus (all else equal), when entropy increases, temperature also increases.

But ceteris is never actually paribus, and in this case, physicists found out that the universe is constantly getting bigger, things are always getting further apart. When volume is increasing, things can get colder even as entropy increases, and physicists now expect that, given our current understanding of how the universe works, possibility #2 is more likely, the universe will eventually freeze to death.
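As a rough, hedged illustration (treating the universe's contents as an ideal photon gas, which is only a toy model): the entropy of blackbody radiation in a volume $V$ at temperature $T$ scales as

$$ S \propto V T^{3} \quad\Longrightarrow\quad T \propto \left(\frac{S}{V}\right)^{1/3}, $$

so if $V$ grows without bound while $S$ stays constant or grows more slowly than $V$, the temperature still falls toward zero - expansion lets the universe cool even though entropy never decreases.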

But our current understanding is only ever the best guess we can make of what the laws of the universe actually are, not the actual laws themselves. We currently expect the universe will freeze, but we could very well find evidence in the future that the universe will burn instead. Maybe (quite unlikely) things will just happen to balance out, so that the increase in temperature due to entropy equals the decrease in temperature due to the expansion of the universe.

Perhaps we will discover a loophole in a set of laws that would otherwise suggest a heat death of one kind or the other, but where a sufficiently intelligent process [LW · GW] can influence the evolution of temperature so as to counteract the otherwise prevailing temperature trend - in the vein of (I'd like to note that I do not intend to imply that any of these are likely to happen) creating a large enough amount of entropy to create a permanent warm zone in a universe that is otherwise doomed to freeze (this would probably require a violation of the conservation of energy that we currently have no reason to believe exists), or using an as-yet undiscovered mechanism to accelerate the expansion of the universe that can create a long-lasting cool zone in a universe that is otherwise doomed to burn.

Replies from: Dagon
comment by Dagon · 2020-09-02T18:20:53.202Z · LW(p) · GW(p)

Hrm. I thought it referred to the distribution of energy, not temperature. "Heat death of the universe" is when entropy can increase no more, and there are no differentials across space by which to define anything at conscious scale. No activity is possible when everything is uniform.

At least, that's my simplistic summary - https://en.wikipedia.org/wiki/Heat_death_of_the_universe gives a lot more details, including the fact that my summary was probably not all that good even in the 19th century.

comment by MikkW (mikkel-wilson) · 2020-08-31T02:44:50.251Z · LW(p) · GW(p)

The way we measure the most populous / most dense cities is weird, and hinges on arbitrary factors (take, for example, Chongqing, the "most populous city", which is mostly rural land, in a "city" the size of Austria)

I think a good metric that captures the population / density of a city is the number of people that can be reached with half an hour's or an hour's worth of transportation (1/2 hour down and 1/2 hour back is one hour both ways, a very common commute time, though a radius of 1 hour each way still contributes to the connections available) - this does have the effect of counting a larger area for areas with better transportation, but I think that's a good feature of such a metric.

This metric would remove any arbitrary influences caused by arbitrary boundaries, which is needed for good, meaningful comparisons. I would very much like to see a list organized by this metric.

(Edited: misremembered commute times. See Anthropological Invariants in Travel Behaviour)
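A minimal sketch of how such a metric could be computed, assuming you already have a door-to-door travel-time function and a population figure for each place (both are placeholders here, not real data sources):

```python
def reachable_population(origin, populations, travel_time_minutes, budget_minutes=30):
    """Sum the population of every place reachable from `origin` within the time budget.

    `populations` maps a place id to its population; `travel_time_minutes(a, b)` is
    assumed to return door-to-door transit time by whatever modes you choose to model.
    """
    return sum(pop for place, pop in populations.items()
               if travel_time_minutes(origin, place) <= budget_minutes)

# Toy example with made-up numbers:
toy_populations = {"downtown": 50_000, "suburb": 20_000, "exurb": 5_000}
toy_times = {("downtown", "downtown"): 0, ("downtown", "suburb"): 25, ("downtown", "exurb"): 55}
print(reachable_population("downtown", toy_populations,
                           lambda a, b: toy_times[(a, b)]))  # 70000
```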

Replies from: Dagon, Dagon
comment by Dagon · 2020-09-11T22:44:01.377Z · LW(p) · GW(p)

related map of the US, with clustering of actual commutes: https://www.atlasobscura.com/articles/here-are-the-real-boundaries-of-american-metropolises-decided-by-an-algorithm . Note this uses longer commutes than I'd ever consider.

(edit: removed stray period at end of URL)

Replies from: mikkel-wilson
comment by MikkW (mikkel-wilson) · 2020-09-12T01:18:27.021Z · LW(p) · GW(p)

Huh, I'm seeing a 404 when I click the link

comment by Dagon · 2020-08-31T19:34:04.787Z · LW(p) · GW(p)

What is often used today is "metropolitan area". This is less arbitrary than city boundaries, but not as rigorous as your "typical 1 hour from given point" - it boils down to "people pay extra to live somewhat near that conceptual location". I think the base ranking metric is not very useful, as well. Why do you care about "most populous" or "densest (population over area)", regardless of definition of location?

Replies from: mikkel-wilson
comment by MikkW (mikkel-wilson) · 2020-09-01T20:08:44.959Z · LW(p) · GW(p)
Why do you care about "most populous" or "densest (population over area)", regardless of definition of location?

1) Population density has an important impact on the milieu and opportunities that exist in a given location, but we can only make meaningful comparisons when metrics are standardized. 2) I've heard it said that in medieval times, many lords would collect a "bushel" of taxes from the peasants, where the bushel was measured in a large basket, but then when paying a "bushel" of taxes to their king, the bushel would be measured with a much smaller basket, thereby allowing the lord to keep a larger amount of grain for himself. When we don't have consistent standards for metrics, similar failure modes can arise in (subtler) ways - hence why I find reliance on arbitrary definitions of location to leave a bad taste

comment by MikkW (mikkel-wilson) · 2020-08-14T22:40:39.227Z · LW(p) · GW(p)

A: Reading about r/K reproductive strategies in humans, and slow/fast life histories.

B: It's been a belief of mine, one I have yet to fully gather evidence on or build a compelling case for either way, that areas with people in poverty lead to increased crime, including in neighboring areas, which would imply that to increase public safety, we should support people in poverty to help them live a comfortable life.

Synthesis:

In niches with high background risk, having many children, who each attempt to reproduce as quickly as possible, is a dominant strategy. In niches where life expectancy is long, strategies which invest heavily in a few children, and reproduce at later ages are dominant.

Fast life histories incentivize cheating and criminal behaviour; slow life histories incentivize cooperating and investing in a good reputation. Some effects mediating this may be genetic / cultural, but I suspect that there's a lot of flexibility in each individual - if one grows up in an environment where background risk is high, one is likely to be more reckless; if one grows up in an environment with long life expectancy, the same person will likely be more cooperative and law-abiding

Replies from: Viliam
comment by Viliam · 2020-08-18T13:17:06.877Z · LW(p) · GW(p)

So what you're saying is that by helping people, we might also improve their lives as a side effect? Awesome! :P

More seriously, on individual level, I agree; whatever fraction of one's behavior is determined by their environment, by improving the environment we likely make the person's behavior that much better.

But on a group level, the environment mostly consists of the individuals, which makes this strategy much more complicated. And which creates the concentrated dysfunction in the bad places. Suppose you want to take people out of the crime-heavy places: do you also move the criminals? Or only the selected nice people who have a hope of adapting to the new place? Because if you do the latter, you have increased the density of criminals at the old place. And if you do the former, their new neighbors are going to hate you.

I don't know what is best; just saying that there seems to be a trade-off. If you leave the best people in the bad places, you waste their potential. But if you help the best people leave the bad places, there will be no one left with the desire and skills to improve those places a little.

On the national scale, this is called "brain drain", and has some good and some bad effects; the good effects mostly consist of emigrants sending money home (reducing local poverty), and sometimes returning home and improving the local culture. I worry that on a smaller scale the good effects would be smaller: unlike a person moving to another part of the world with different culture and different language, an "emigrant" to the opposite side of the city would not feel a strong desire to return to their original place.

Replies from: mikkel-wilson
comment by MikkW (mikkel-wilson) · 2020-08-18T15:12:58.855Z · LW(p) · GW(p)

I wasn't mainly thinking of helping people move from one environment to another when I wrote this, but generally improving the environments where people already are (by means of e.g. UBI). I share many of your concerns about moving people between environments, although I suspect that done properly, doing so could be more beneficial than harmful

comment by MikkW (mikkel-wilson) · 2021-07-15T05:46:36.279Z · LW(p) · GW(p)

The most amazing thing about America is that our founders figured out a way to cheaply hold a revolutionary war every 4 years

Replies from: mikkel-wilson
comment by MikkW (mikkel-wilson) · 2021-07-15T05:48:01.708Z · LW(p) · GW(p)

(Here, "cheaply" is most importantly measured in terms of human lives not sacrificed)

comment by MikkW (mikkel-wilson) · 2021-06-30T18:17:25.502Z · LW(p) · GW(p)

There's a good chance GitHub Copilot (powered by OpenAI Codex, a GPT-3-like AI by the same team) will be remembered by our robot inheritors as the beginning of the end of humanity

Replies from: Pattern
comment by Pattern · 2021-07-07T20:43:22.674Z · LW(p) · GW(p)

Because you see it as a form of 'code that writes code'? (I mean, I'm curious about such things affecting themselves in general. I wonder if Copilot could actually contribute to its source code somehow.)

Replies from: mikkel-wilson
comment by MikkW (mikkel-wilson) · 2021-07-08T05:38:18.609Z · LW(p) · GW(p)

The fact that it is code that writes code is indeed part of this. I'm a bit reluctant to say too much publicly, since I don't want to risk giving the wrong person any ideas.

comment by MikkW (mikkel-wilson) · 2021-07-14T05:05:51.351Z · LW(p) · GW(p)

Islam and Christianity are often viewed as two distinct, separate offshoots from Judaism, but a perspective where Islam is itself a descendant of Christianity is a useful lens. The founders of Islam were well aware of Christianity when Islam was founded, and while they reject the Christian Bible (both New and Old Testaments) and its teachings as likely inauthentic, it seems that there are many properties of Islam (for example, the fervor with which it presents itself to the outside world) that it receives from Christianity and that are not present in Judaism. (Islam also recognizes Jesus (or Isa) as a prophet, but that is secondary to the point I am making)

comment by MikkW (mikkel-wilson) · 2021-07-13T21:34:18.261Z · LW(p) · GW(p)

I cannot understand why anyone, at this point in history, would spend more than ~10% of their investment money on any assets that they would expect to take more than 3 years to double in value.
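For reference, a three-year doubling time corresponds to an annualized return of roughly 26%:

$$ (1 + r)^{3} = 2 \;\Longrightarrow\; r = 2^{1/3} - 1 \approx 0.26 $$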

Replies from: zac-hatfield-dodds, Euglossine
comment by Zac Hatfield Dodds (zac-hatfield-dodds) · 2021-07-13T23:23:33.113Z · LW(p) · GW(p)

What specifically do you think has a 26% expected ARR, while also being low-risk or diversified enough to hold 90% of your investable wealth? That's a much more aggressive allocation to e.g. the entire crypto-and-adjacent ecosystem than I'm comfortable with.

Replies from: mikkel-wilson
comment by MikkW (mikkel-wilson) · 2021-07-14T04:06:01.992Z · LW(p) · GW(p)

Definitely not crypto, at least not sustainably / predictably. Things along the lines of stock in Tesla, Microsoft, or Google, however, have been performing at such a pace, and I expect in most timelines they will continue that pace in the near future, enough for the expected value to be strongly positive.

(Edit to add:) These stocks, and similar stocks, are in a position to take advantage of the value generated by trends that are currently starting or underway, trends that will produce substantial real-world value (setting aside singularity-scale catastrophes), unlike most (almost all) cryptocurrencies.

Replies from: zac-hatfield-dodds
comment by Zac Hatfield Dodds (zac-hatfield-dodds) · 2021-07-14T06:35:44.782Z · LW(p) · GW(p)

Ah, that position makes a lot of sense. Here's why I'm still in boring market-cap-indices rather than high-growth tech companies:

  • I think public equity markets are weakly inexploitable - i.e. I expect tech stocks to outperform but not that the expected value is much larger than a diversified index
  • Incumbents often fail to capture the value of new trends, especially in tech. The sector can strongly outperform without current companies doing particularly well.
  • Boring considerations about investment size, transaction fees, value of my time to stay on top of active trading, etc.
  • Diversification. Mostly that as a CS PhD my future income is already pretty closely related to tech performance; with a dash of the standard arguments for passive indices.

And then I take my asymmetric bets elsewhere, e.g. starting HypoFuzz (business plan).

Replies from: mikkel-wilson
comment by MikkW (mikkel-wilson) · 2021-07-14T16:01:48.015Z · LW(p) · GW(p)

Incumbents often fail to capture the value of new trends, especially in tech. The sector can strongly outperform without current companies doing particularly well.

Strong agree on this. But while they may capture only a fraction of the growth, I do expect that they will capture enough to grow substantially (this is, in part, helped by the overall growth I expect to be massive). But there is always a chance that even that doesn't happen. I do wish I had a better sense of what current upstarts are well-poised to make an even larger profit in the near future.

Diversification. Mostly that as a CS PhD my future income is already pretty closely related to tech performance; with a dash of the standard arguments for passive indices.

This is certainly valid for the extreme size of my proposed allocation, but I suspect that you stand to profit even more from investing in high-growth stocks than you receive directly from your work; also, not all of the growth I expect is directly related to CS / AI, namely covering the Earth with solar panels, then putting solar panels / solar-powered computers into space (also something something nuclear). The latter case is my main justification for owning Tesla stock, and I'd say it isn't overly strongly correlated with CS trends (although not completely uncorrelated either).

comment by Euglossine · 2021-07-13T22:05:55.326Z · LW(p) · GW(p)

Historically, haven't assets that claim to take less than 3 years to double in value had a high probability of losing value instead? What are examples of the assets of this sort in the past few decades and at the present time?

Replies from: mikkel-wilson
comment by MikkW (mikkel-wilson) · 2021-07-14T04:18:44.737Z · LW(p) · GW(p)

Of course, it's easy for someone to claim that something will double in value quickly, and sometimes things even do double in value quickly despite being built on hot air (textbook example: the Dutch tulip mania). The important trick is to use first-principles, real-world reasoning to identify the few opportunities that actually can generate real value that quickly. The world would be very different if such companies didn't exist. I invested in Tesla, Microsoft, and Google using such real-world, first-principles reasoning, and these companies have grown at such a clip during the time I have been invested in them (which has been ~1-3 years depending on the specific asset)

comment by MikkW (mikkel-wilson) · 2021-07-01T20:54:15.187Z · LW(p) · GW(p)

In my post "No, Newspeak Won't Make You Stupid" [LW · GW], I explored the thesis that 'cadence of information is constant', that even if someone uses words which communicate more information, they will have to slow down their speech to compensate, thereby preventing them from communicating a larger amount of information using a rich vocabulary. I then present an alternative hypothesis for why we use rich vocabularies anyways.

One important crux of the thesis, is that the human mind is only able to encode and decode a certain amount of information per unit time, and that this limit is close to the speed we already naturally achieve. But on first reflection, this seems clearly incorrect - in the post, I calculate the cadence of information of speech to be a little more than 60 bits per second, which while a good amount, is a tiny fraction of the amount of processing the human brain actually does every second [citation needed] - surely we could speed up the rate we speak / listen, and still be able to keep track of what's going on, right?

Well, in some sense, we probably could. But there are two reasons why I think this might not be perfectly right - for one, the parts of our brain that handle speech are only one among many parts of our brain, with most of our information processing capacity not being designed to process speech; and there's also the matter of what Simler and Hanson call "The Elephant in the Brain", the 'press agent' part of our brain that cleans up our output to make us look like much better people than we actually are.

It's not clear to me how much the fact that only a fraction of our information capacity is suited for handling speech affects this, but I do suspect that the "Elephant in the Brain" does have a substantial effect slowing down how fast we speak, making us talk slower so as to avoid revealing thoughts that may reveal intentions we don't want revealed.

For this reason, both making people speak faster than they are comfortable with, and getting them drunk (effectively dilating time for them), are good ways to suss out whether someone's intentions are pure - the second of these is captured by the phrase "In vino veritas".

Replies from: gilch
comment by gilch · 2021-07-01T22:06:34.242Z · LW(p) · GW(p)

I regularly listen to lectures on YouTube at 2x speed and my comprehension is fine. I can even go a bit faster with a browser plugin. This took practice. I gradually increased the speed over time. I think I can probably read text even faster than that.

There are limitations. If the speaker has a thick accent, I can't comprehend it as fast. If the concepts are difficult, then even though I understand the words, I often still have to pause the video while I think it through.

I have heard of blind people who use a narration interface on their smartphones and work up to 7x speed narration. If there are no surprises, humans can process language much faster than they can speak it.

Replies from: mikkel-wilson
comment by MikkW (mikkel-wilson) · 2021-07-02T01:11:18.063Z · LW(p) · GW(p)

Yeah, it definitely does seem to be possible to listen faster than we usually speak; at the same time, in public speaking classes, one is encouraged to speak slowly, so as to maximize the understanding of the audience. As you mention, the difficulty of concepts can require slowing down. While you can easily pause or rewind a video when you hit a part that you find tricky, you can't do that in a presentation. Furthermore, what one person finds easy could be hard for someone else, but then the two people could be in the opposite positions a few sentences later.

Perhaps for most ideas there is some fraction of the audience that needs extra time to process them, and which ideas are hard varies from person to person, so in order to allow everybody to digest the parts they find tricky, a public speaker has to speak much slower than the audience can actually understand, so that no one gets left too far behind while they're deep in thought.

comment by MikkW (mikkel-wilson) · 2021-06-28T22:11:00.766Z · LW(p) · GW(p)

I notice that I am confused about why assurance contracts are not more widely used. Kickstarter is the go-to example of a successful implementation of ACs, but Kickstarter only targets a narrow slice of the things that could potentially be funded by ACs, and people often don't feel comfortable using other platforms that could support other forms of ACs. The fact that Kickstarter has managed to produce a brand that is so conducive to ACs suggests that people recognize the usefulness of ACs, and are willing to partake in them, but there's low-hanging fruit in extending ACs beyond just what Kickstarter considers 'art'.
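For readers unfamiliar with the mechanism, here is a minimal sketch of the core logic of an assurance contract; the names and the all-or-nothing settlement rule are illustrative, not any particular platform's API:

```python
class AssuranceContract:
    """Minimal model of an assurance contract: pledges are only collected if the
    funding goal is met; otherwise every backer keeps their money."""

    def __init__(self, goal: float):
        self.goal = goal
        self.pledges: dict[str, float] = {}

    def pledge(self, backer: str, amount: float) -> None:
        self.pledges[backer] = self.pledges.get(backer, 0.0) + amount

    def settle(self) -> dict[str, float]:
        """Amounts actually charged to each backer when the contract closes."""
        if sum(self.pledges.values()) >= self.goal:
            return dict(self.pledges)                    # goal met: everyone pays their pledge
        return {backer: 0.0 for backer in self.pledges}  # goal missed: nobody pays anything

contract = AssuranceContract(goal=1000)
contract.pledge("alice", 600)
contract.pledge("bob", 300)
print(contract.settle())  # goal missed -> {'alice': 0.0, 'bob': 0.0}
```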

Replies from: Viliam
comment by Viliam · 2021-06-29T00:56:09.504Z · LW(p) · GW(p)

Just a guess: transaction costs caused by scams and failed projects?

Imagine that someone promises on Kickstarter to create a project, receives lots of money, and then delivers a thing which obviously is not what was originally promised, or is nowhere near the promised quality, but it still is something, and the author insists that the result should count, and most backers -- but not all of them -- disagree. How are you going to resolve this mess? Someone is going to lose their money and be angry at you.

Maybe with non-art these problems are even worse than with art. Maybe with art, you can legally argue that art is subjective, you put a bet on the author, you got something, and if you are not satisfied that's just your opinion, Kickstarter doesn't care whether in your opinion the product is insufficiently artistic. But with non-art, the backers could argue that the product violates some objective criteria, and the Kickstarter would have to take sides and get involved in a possible lawsuit?

As an extreme example, imagine a Kickstarter-backed COVID vaccine, which according to one study is mildly helpful (like, reducing your chance of getting infected by 10%), according to another study is completely useless, and no one wants to spend their money on a third study. Is this a legitimate product, or should backers get their money back?

Replies from: mikkel-wilson
comment by MikkW (mikkel-wilson) · 2021-06-29T02:53:39.784Z · LW(p) · GW(p)

Yeah, I think scams and the possibility of a sub-standard final product are both factors that make people hesitant to participate in non-Kickstarter ACs, and they have also caused problems with Kickstarters in the past. I guess Kickstarter manages to set the bar high enough that people feel fairly willing to trust a Kickstarter project, while people aren't as willing to trust other websites, since they perceive a higher risk of scams / a substandard final product. My impression is that it's not that hard to make a Kickstarter, but requiring, for example, a video pitching the idea makes it less likely that people will submit low-effort projects.

comment by MikkW (mikkel-wilson) · 2021-06-19T20:57:52.702Z · LW(p) · GW(p)

The lethal dose of caffeine in adult humans is approximately 10 grams, while the lethal dose of theobromine (the main psychoactive chemical in chocolate, nearly identical structurally to caffeine, with similar effects) in humans is roughly 75 grams (the lethal dose is much lower in most animals, which is why you should never give chocolate to your pets). This motivates a rough heuristic: 7.5 mg of theobromine is roughly equivalent to 1 mg of caffeine, and 750 mg of theobromine is equivalent to one cup of coffee.

Therefore, to replace a cup of coffee with cocoa or chocolate, about 6 spoonfuls of unsweetened cocoa powder should do the job; 11 cups of hot chocolate (that's a lot) or 2 bars of dark chocolate should also work.
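The arithmetic behind those serving counts, as a rough sketch; the per-serving theobromine figures are ballpark assumptions of mine, chosen to reproduce the numbers above, not measured values:

```python
CAFFEINE_PER_CUP_MG = 100        # rough figure for one cup of coffee
THEOBROMINE_PER_CAFFEINE = 7.5   # from the lethal-dose ratio above (75 g / 10 g)
target_mg = CAFFEINE_PER_CUP_MG * THEOBROMINE_PER_CAFFEINE  # 750 mg theobromine

# Ballpark theobromine per serving (assumptions, not measurements):
cocoa_spoon_mg = 125         # one spoonful of unsweetened cocoa powder
hot_chocolate_cup_mg = 70    # one cup of hot chocolate
dark_chocolate_bar_mg = 400  # one bar of dark chocolate

print(round(target_mg / cocoa_spoon_mg))         # ~6 spoonfuls
print(round(target_mg / hot_chocolate_cup_mg))   # ~11 cups
print(round(target_mg / dark_chocolate_bar_mg))  # ~2 bars
```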

comment by MikkW (mikkel-wilson) · 2021-06-19T20:05:56.223Z · LW(p) · GW(p)

I've long been aware of the concept of a "standard drink", a unit for measuring how much alcohol a person has had, regardless of what they are drinking, so one "drink" of wine contains less liquid than one "drink" of beer, but more than one "drink" of vodka. When I started experimenting with chemicals other than ethanol, I intuitively wanted to extend this notion to those other chemicals. For example, in my mind, I roughly equate 10 mg of tetrahydrocannabinol with one drink of ethanol. While the effects of these two chemicals are quite different, and work in different ways, they both have relaxing and depressant (not "depressing") effects, so there is some meaningful comparison - if I want to incapacitate myself to a certain extent, I can use the concept of a "depressant unit" to calculate a dose of either THC or ethanol, or similarly with diphenhydramine (ZzzQuil) or NyQuil.

Clearly, in most cases, I would not want to compare the strength of an alcoholic beverage with the strength of a caffeinated beverage. But I would want to be able to use a "stimulating unit" to compare, say, amounts of caffeine to amounts of theobromine (cocoa) or to other stimulating chemicals (for example, Adderall).

Another unit that would be worth using would be an "entheogenic unit", which would allow one to compare doses of LSD, Psilocybin, THC (regarding its quasi-psychedelic, not depressant qualities), and so on, in terms of their ability to change the way one thinks.
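A minimal sketch of what such bookkeeping could look like, using only the equivalences mentioned above; the ethanol figure is the standard US definition of one drink (~14 g of pure ethanol), and everything else would have to be filled in by the user:

```python
# "Depressant units" per amount of substance, using only the equivalences given above.
DEPRESSANT_UNITS_PER_AMOUNT = {
    "ethanol_g": 1 / 14,  # one US standard drink is ~14 g of pure ethanol
    "thc_mg": 1 / 10,     # ~10 mg of THC treated as one depressant unit
}

def depressant_units(substance: str, amount: float) -> float:
    return DEPRESSANT_UNITS_PER_AMOUNT[substance] * amount

print(depressant_units("ethanol_g", 28))  # two drinks' worth of ethanol -> 2.0
print(depressant_units("thc_mg", 5))      # -> 0.5
```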

comment by MikkW (mikkel-wilson) · 2021-06-19T19:04:20.123Z · LW(p) · GW(p)

Question: Is it possible to incorporate Caffeine into DNA? Caffeine is structurally similar to Adenine, one of the four DNA nucleobases (and the A in ATCG). But looking at the structure, the hexagonal ring (which is the part of the DNA that bonds A to T and C to G) doesn't look very promising - there are two oxygen atoms that can bond, but they are a bit too far apart, and there are no hydrogens, and since DNA is held together by hydrogen bonds, the hydrogen will have to be provided by whatever it is paired to. Theobromine looks more promising, since a CH3 group is replaced by an H (otherwise it is identical to Caffeine), which provides a continuous run of bondable groups, and the H can be used for hydrogen bonding.

Probably for either Theobromine or Caffeine, they would have to be paired with another base that is not one of the usual ATCG bases, which is specially chosen to complement the shape of the molecules.

comment by MikkW (mikkel-wilson) · 2021-06-13T17:40:27.478Z · LW(p) · GW(p)

I'm probably missing something, but Bayes' Theorem seems quite overrated in this corner of the internet. (I've read all of the Sequences + the Arbital Guide)

Replies from: irarseil
comment by irarseil · 2021-06-13T20:35:53.988Z · LW(p) · GW(p)

You have an idea of how likely something is to happen, or an estimate of a figure, or a model of something in the real world (e.g. Peter is a guy who loves cats). You then get new information about this something (e.g. you see Peter viciously killing a cute kitten). You'd most likely update, with both epistemic consequences (you'd probably stop believing Peter is the cat-loving guy you thought he was) and instrumental or practical consequences (you wouldn't ask him to look after your cats while you are away on holiday). The way I see it, Bayes' Theorem tells you how much you should update your beliefs to take into account all the evidence you have, so that you are right as much of the time as possible given your limited information. Obviously, as they say about information systems in general, "garbage in, garbage out", which means you should worry about getting reliable information on the things you care most about: even with the best possible update algorithm, if the information you get is biased, your beliefs and actions will not be right. I don't know whether your criticism of the importance attached to Bayes' Theorem is because you feel other aspects are neglected, or what exactly your complaint is. Could you please elaborate a bit?
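
A minimal numerical sketch of the update described above, in Python; the prior and likelihoods are made-up numbers purely for illustration.

```python
# Minimal Bayes'-theorem sketch of the "Peter and the kitten" update.
# All probabilities below are made-up illustrative numbers.

p_cat_lover = 0.95                 # prior: Peter loves cats
p_kill_given_cat_lover = 0.001     # a cat lover viciously kills a kitten
p_kill_given_not_cat_lover = 0.05  # someone who doesn't love cats does so

# P(cat lover | observed killing) via Bayes' theorem
numerator = p_kill_given_cat_lover * p_cat_lover
evidence = numerator + p_kill_given_not_cat_lover * (1 - p_cat_lover)
posterior = numerator / evidence

print(f"{posterior:.3f}")  # ~0.28: still possible, but you'd no longer trust him with your cats
```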

comment by MikkW (mikkel-wilson) · 2021-06-04T23:43:43.360Z · LW(p) · GW(p)

Currently I'm making a "logobet", a writing system that aims to be to logographies what alphabets are to syllabaries [1]. Primarily, I want to use emoji for the symbols [2], but some important concepts don't have good emoji to express them. In these cases, I'm using kanji from either Japanese or Chinese to express the concept. One thing I notice is that the visual styles of emoji and kanji are quite different from each other. I wouldn't say it looks bad, exactly, but it is jarring. The emoji are also too bold, colourful, and detailed to really work as text (rather than as accompaniment to text, as they are usually used today), though the colour is actually helpful in distinguishing symbols.

Ideally, I would want a font to be made for the logobet that would render emoji and kanji (at least the ones used in the logobet) in a similar manner, with simple contours and subdued (but present) colours. This would require changing both kanji, to have a fuller, more colourful form, and emoji, to be less detailed and have less bold colours.

But this will be downstream of actually implementing and publishing the first logobet.

 

[1] By breaking concepts down into component parts, the way an alphabet breaks syllables down into component parts, a logobet can be learned more easily, using on the order of hundreds of symbols rather than tens of thousands. The benefit of a concept-based, rather than phonetic, system is that it can be read and written by people from any background without having to learn each other's languages [1a].

[1a] We see this already in China, where populations that speak different Sinitic languages can all communicate with each other through the written Chinese script (which may be the only actively used language that is primarily written, not spoken). The main reason I think this has not spread beyond East Asia is that kanji are too hard to learn, requiring months of effort, whereas most writing systems can be learned in hours.

[2] Emoji tend to be easily recognizable without prior knowledge of the script, while kanji tend to only be meaningful to someone who is informed of their meaning, which is why I prefer using emoji.

comment by MikkW (mikkel-wilson) · 2021-05-19T05:14:19.034Z · LW(p) · GW(p)

It is my view that Covid and then the common cold must be eradicated.

It is hardly an original thing to say, but I will say it.

comment by MikkW (mikkel-wilson) · 2021-05-17T02:42:01.330Z · LW(p) · GW(p)

It doesn't seem that there's a good name for the COVID variant that's currently causing havoc in India, and will likely cause havoc elsewhere in the world (including, quite possibly, in parts of the US). Of course, there's the technical term, Lineage B.1.617, but that's a mouthful, and in casual speech it's not easily distinguished from the many other variants.

In casual speech it's often named after the country where it first appeared, but it's generally considered bad form to refer to diseases by their location of origin, for reasons I'm inclined to agree with.

Wikipedia also mentions that some people have called this variant the "double mutation", but there will be many variants that feature multiple mutations, so it's not a very specific name to give the variant.

I'm inclined to just call it "(COVID) Six-seventeen" (from B.1.617), but I'm not sure whether that naming will resonate with other people.

comment by MikkW (mikkel-wilson) · 2021-05-14T04:09:32.810Z · LW(p) · GW(p)

Supposedly people who know how to program and have a decent work ethic are a hot commodity. I may happen to know someone this describes who is not currently employed (i.e., me).

Replies from: zac-hatfield-dodds
comment by Zac Hatfield Dodds (zac-hatfield-dodds) · 2021-05-14T22:02:08.082Z · LW(p) · GW(p)

The problem is that employers can't take your word for it, because there are many people who claim the same but are lying or honestly mistaken.

Do you have, or can you create, a portfolio of things you've done? Open-source contributions are good for this because there's usually a review process and it's all publicly visible.

comment by MikkW (mikkel-wilson) · 2021-04-25T04:15:40.216Z · LW(p) · GW(p)

On Relegation in Association Football

Recently 12 European football teams announced their intention to form a "Super League", which was poorly received by the football community at large. While I'm still learning about the details of the story, it seems that the mechanic of relegation is a central piece of the tension between the Super League clubs and the football community.

The structure of European football stands in contrast to, for example, the structure of American (Usonian) major sports, where the roster of teams is fixed and never changes from year to year; in football, by contrast, the bottom two teams each year are relegated to a lower league, and the highest-performing teams from the lower league move up to take their place.

The mechanic of relegation is celebrated because it ensures that the teams in the highest echelons of the sport are always those most able to bring the best football to the table, and any team, even a small, scrappy, local one, can work its way to the highest levels if it is good enough (see for example AFC Wimbledon, which was founded after the previous Wimbledon club relocated, leading the local population to found a new team that started from the very bottom and subsequently rose rapidly through the leagues).

But for all its benefits, relegation also introduces a large degree of uncertainty into the finances of the teams, since any team can be relegated to a lower-tier, and less profitable, league, which makes it hard for teams to plan adequately for the long term. Addressing this instability was an important factor in the conception of the Super League, which proposed to operate without relegation, similar to American sports.

I found myself wondering, is there any way to capture the benefits of relegation, ensuring the very best teams are always in the same league, while allowing teams to have a less uncertain image of their long-term finances?

Clearly, to achieve this, no team can ever have too secure a position, lest it grow complacent and take up space that a sharper team could use; but teams can still be given more security than they have right now. Instead of the highest league having only a single tier, it could have multiple tiers (which still play against each other as if they were one unit), and teams in the highest tiers would have to be relegated multiple times before being removed from the highest league. In this setup, the best teams would have to perform poorly for multiple consecutive seasons before they could be removed from the highest league, but would still need to play sharply to maintain their position. This arrangement allows more stability for the teams that perform best, while also providing the meritocratic structure that football fans find lacking in the proposed Super League.
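
Here is a toy sketch of the rule just described, in Python. The number of internal tiers, the "recover one tier after a good season" rule, and the season logic are all assumptions made for illustration, not part of the proposal above.

```python
# Toy sketch of a multi-tier top league: a team must fall through every internal
# tier, one relegation per bad season, before it actually drops out of the top
# league. Tier count and recovery rule are assumptions for illustration.

TIERS_IN_TOP_LEAGUE = 3  # assumed; a team enters at the highest internal tier

def simulate(team_tier: int, season_results: list[bool]) -> str:
    """season_results[i] is True if the team finished in the relegation zone that season."""
    for in_drop_zone in season_results:
        if in_drop_zone:
            team_tier -= 1           # relegated one tier within the top league
            if team_tier == 0:
                return "relegated out of the top league"
        else:
            team_tier = min(team_tier + 1, TIERS_IN_TOP_LEAGUE)  # recover a tier
    return f"still in the top league (tier {team_tier})"

print(simulate(3, [True, True, False, True]))  # bad seasons, but one recovery keeps it in
print(simulate(3, [True, True, True]))         # three straight bad seasons: out
```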

One criticism I anticipate of this proposal is that, by providing financial stability to the very highest-performing teams, it would ossify a hierarchy in which certain teams can more easily acquire and spend resources, outperform the less stably positioned teams in the league, and thereby more easily defend their stable position, creating a positive feedback loop that denies other teams a fair chance at dethroning the top teams.

One way to address this criticism is to attach a fee to the higher tiers of the league; if a team wishes not to pay the fee, it may decline, and the stable position will instead be offered to the next highest-performing team. The fee would be distributed to the rest of the teams in the league, ensuring that they have a more competitive chance of rivaling even the entrenched teams.

comment by MikkW (mikkel-wilson) · 2021-04-14T04:06:24.139Z · LW(p) · GW(p)

In response to my earlier post about Myers-Briggs [LW(p) · GW(p)] (where I suggested a more detailed notation for more nuanced communication about personality types), it was pointed out that there is some correlation between the four traits being measured, and this makes the system communicate less information on average than it otherwise would (the traditional notation would communicate 4 bits; my version would communicate ~9.2 bits if there were no correlation).

I do object to the characterization that it all measures "the same thing", since none of the traits perfectly predicts the others, and all 16 of the traditional configurations describe real people (though some are more common than others); but I do think it makes sense to try to disentangle things. If the I/E scale is correlated with the J/P scale, we can subtract some J points for more introverted people and add J points for extroverts, so that an introvert needs a higher raw "judging" score than an extrovert does in order to be labeled "J", with the goal that 50% of extroverts come out J and 50% P, with a similar 50-50 split for introverts.

By adjusting for these correlations across all pairs of traits, we can more finely detect and communicate the underlying dispositions behind "Judgement" and "Perception" that aren't just a result of a person being more extroverted (a disposition that rewards those best able to use their intuition) or introverted (which often leads to pursuits that require careful thinking, distanced from our whims).
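
A rough sketch of the decorrelation idea in Python: regress the J/P score on the I/E score and keep only the residual, so the adjusted J/P score is uncorrelated with extraversion. The data, correlation strength, and variable names are invented for illustration.

```python
# Rough sketch of the adjustment described above: remove the part of the J/P
# score that is predictable from the I/E score, so the adjusted J/P trait is
# uncorrelated with extraversion. The data here is random, for illustration only.

import numpy as np

rng = np.random.default_rng(0)
extraversion = rng.normal(size=1000)                   # I/E score (higher = more E)
judging = 0.4 * extraversion + rng.normal(size=1000)   # raw J/P score, correlated with E

# Fit the linear relationship and keep only the residual.
slope, intercept = np.polyfit(extraversion, judging, 1)
adjusted_judging = judging - (slope * extraversion + intercept)

print(np.corrcoef(extraversion, judging)[0, 1])           # noticeably positive
print(np.corrcoef(extraversion, adjusted_judging)[0, 1])  # ~0: correlation removed

# Label someone "J" if their adjusted score is above the median, so roughly
# half of introverts and half of extroverts end up "J", as proposed above.
is_J = adjusted_judging > np.median(adjusted_judging)
```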

comment by MikkW (mikkel-wilson) · 2021-03-25T04:01:12.924Z · LW(p) · GW(p)

Even logarithms

Achieve exponential heights

Long before you reach infinity

Replies from: mikkel-wilson
comment by MikkW (mikkel-wilson) · 2021-03-25T04:02:53.524Z · LW(p) · GW(p)

(not a poem

only

a formatted sentence)

comment by MikkW (mikkel-wilson) · 2021-03-17T09:22:25.032Z · LW(p) · GW(p)

Three types of energy:

  1. Potent energy
  2. Valued energy
  3. Entropic Energy

(2 + 3), as well as 3 on its own, is non-decreasing over time, and generally increases, while (1 + 2), as well as 1 on its own, is non-increasing, and generally decreases.

You want to maximize Valued Energy, and minimize Potent and Entropic Energy.

Replies from: mikkel-wilson
comment by MikkW (mikkel-wilson) · 2021-03-17T09:23:56.294Z · LW(p) · GW(p)

Note that Valued Energy varies from agent to agent

comment by MikkW (mikkel-wilson) · 2021-03-03T05:46:36.551Z · LW(p) · GW(p)

Prediction: 80% chance that Starship SN10 lands in one piece tomorrow / whenever its first flight is

Replies from: mikkel-wilson
comment by MikkW (mikkel-wilson) · 2021-03-04T04:13:04.519Z · LW(p) · GW(p)

This happened, so the prediction (made at 80% confidence) resolved correctly, although about 10 minutes after landing, the rocket blew up.

comment by MikkW (mikkel-wilson) · 2021-02-24T07:15:27.124Z · LW(p) · GW(p)

I have often heard it pronounced (including by Eliezer [LW · GW]) that group selection is not a thing, that evolution never selects for "the good of the species" - and it is true, in the sense that if evolution is given the chance to throw the species under the bus for a slight gain to the individual, it will never hesitate to do so.

But there is a sense in which a group can be selected for. Assume feature A is always bad for whichever species has it, and there are two species occupying overlapping niches: one group with feature B, which makes feature A unprofitable for the individual, and one group with feature C, which makes feature A profitable for the individual. Assume features B and C are sufficiently complex that they remain constant within each group (there are many such biological traits - human eyes, for example, tend to be much more similar to each other than to dog eyes, despite the variance within each), while feature A can be mutated on or off at the individual level. In this case, we should expect the group in which the group-level disease is also unprofitable to the individual to outperform the group in which it is profitable to individuals, since feature A will be common in one group (which will suffer) and rare in the other (which will prosper). This is a way in which group selection can have meaningful effects while evolution still acts on individuals.
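
Here is a toy simulation of that argument in Python. All of the numbers (mutation frequency, selection strengths, group cost) are invented purely to illustrate the dynamic, not to model any real species.

```python
# Toy simulation of the argument above: feature A always hurts the group, but
# group C makes A individually profitable while group B makes it individually
# costly. A spreads within C and not within B, so group B outcompetes group C.
# All numbers are invented for illustration.

def run_group(a_individually_profitable: bool, generations: int = 50) -> float:
    freq_a = 0.01                       # feature A starts rare (mutation)
    group_size = 1.0                    # relative group size / success
    for _ in range(generations):
        # Within-group selection on A:
        if a_individually_profitable:
            freq_a = min(1.0, freq_a * 1.3)     # A spreads among individuals
        else:
            freq_a = max(0.001, freq_a * 0.7)   # A is selected away
        # Group-level cost: the more A, the worse the group does.
        group_size *= (1.0 - 0.5 * freq_a)
    return group_size

print("group B (A unprofitable for individuals):", round(run_group(False), 3))
print("group C (A profitable for individuals):  ", round(run_group(True), 3))
# Group B ends up far larger: selection acted on individuals, yet the group
# whose incentives suppress A is the one that prospers.
```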

Replies from: Viliam
comment by Viliam · 2021-02-24T17:38:36.227Z · LW(p) · GW(p)

Eliezer doesn't say that it is impossible, only "pretty unlikely". That is, under usual circumstances, when you do the math, the benefits of being a member of a tribe that benefits from group selection, although greater than zero, are much smaller than the individual benefits of defecting against the rest of the group.

This is the norm, in nature. This is what happens by default. The rare situations where this is not true, require special explanation. For example, ants or bees can collectively gather resources... but that is only possible because most of them are infertile children of the queen, so they cannot spread their genes better by defecting against the queen.

In your example, are the "groups" different species? In other words, is this about how bees would outperform bumblebees? In that case, the answer seems to be that feature B itself is almost a miracle -- something that turns a profitable behavior into an unprofitable one, without itself being selected against by evolution... how would you do that?

(So how did bees evolve, if for their pre-bee ancestors, a worker being infertile was probably an evolutionary disadvantage? I have no idea. But the fact that there are only about three known examples in nature where this happened -- ants, bees, naked mole-rats -- suggests it was something pretty unlikely.)

Then you have humans, who are smart enough to recognize and collectively punish some activities that harm the group. If they keep doing so for generations, they can somewhat breed themselves toward harming the group less. But this is a very slow and uncertain process, because criminals are also smart enough to hide their actions, the enforcement has many loopholes (crimes are punished less if you are high-status, or if you do the thing to enemies), different societies have different norms, social order breaks down e.g. during wars, etc. So we get something like slightly fewer murders after a few centuries of civilization.

Replies from: mikkel-wilson
comment by MikkW (mikkel-wilson) · 2021-03-23T19:06:13.105Z · LW(p) · GW(p)

For example, ants or bees can collectively gather resources... but that is only possible because most of them are infertile children of the queen, so they cannot spread their genes better by defecting against the queen.

It's worth noting that the infertility of worker bees is itself (seemingly) a form of genetic sacrifice, so it doesn't really explain why cooperation evolved among bees. The explanation I'm familiar with is that male bees (this is also true of ants, but not mole-rats) have only one set of genes, instead of the usual pair, which means their daughters always inherit the same genetic material from the father. This means that if two bees share both the same father and mother (which isn't actually always the case in evolutionarily modern beehives; more thoughts on this later), those bees will have 75% consanguinity (loosely speaking, share 75% of their genes), whereas a mother bee has only 50% consanguinity with her own daughters (the same as between human siblings, or between human parents and offspring). Infertility can therefore actually be a very effective strategy, and not at all altruistic, since a bee more effectively propagates her own genes by helping raise her younger (full) sisters than by raising children of her own.
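
To make the relatedness arithmetic explicit, here is a small worked version in Python; the only figures used are the 75%, 50%, and 25% values from the paragraph above.

```python
# Worked relatedness calculation for haplodiploid full sisters (as above).
# A female gets half her genome from her mother and half from her father.

# Father's contribution: he is haploid, so full sisters share 100% of the
# paternal half of their genome.
shared_via_father = 0.5 * 1.0
# Mother's contribution: she is diploid, so sisters share 50% of the maternal half.
shared_via_mother = 0.5 * 0.5

full_sister_relatedness = shared_via_father + shared_via_mother   # 0.75
mother_offspring_relatedness = 0.5                                # as in diploid species
half_sister_relatedness = 0.5 * 0.0 + 0.5 * 0.5                   # different fathers: 0.25

print(full_sister_relatedness, mother_offspring_relatedness, half_sister_relatedness)
# 0.75 > 0.5: a worker spreads her genes better by raising full sisters than daughters.
# 0.25 < 0.5: for half-sisters that advantage disappears, as noted below.
```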

But it's worth noting that many haplodiploid species are not eusocial (for example wasps), and modern beehives often contain bees that have different fathers. Bees have the same consanguinity with half-siblings as humans have with their half-siblings (25%), and in principle, a bee should be able to better propagate her genes by having children of her own than by helping her half-siblings, yet we see bees helping raise their half-siblings all the time. While I wasn't around to watch the evolution of bees myself, here's one plausible story of how this situation could have come about:

In the original beehives, a mother bee would have several children with one father. Since bees are more closely related to their full siblings than to their own offspring, most of the female bees would spend more time helping raise their siblings than having children themselves. At this point in the process, if a bee tried to raise a family with many different fathers, the household wouldn't cohere very well, and would be outperformed by households (hiveholds?) with only a single father. We should expect that if we examined bee families at this early stage, we would see a noticeable bias toward families with one father (who likely played no role other than providing genetic information), and under this regime, something recognizable as the eusocial beehives we know today would have resulted, just with one father instead of many, making every bee in the hive a full sibling of its hivemates.

However, while having a single father is conducive to eusociality, it actually poses a problem for the hive. To illustrate: you may be aware that the bananas we eat today aren't the same bananas that were eaten 80 years ago. Since we plant bananas in such a way that one banana is genetically identical to (has 100% consanguinity with) every other banana of the same cultivar, they are particularly vulnerable to infection and parasitism; the Gros Michel cultivar that used to be popular was entirely wiped out by a fungus, F. oxysporum f. sp. cubense, referred to as Panama disease. The lack of genetic variety ensured that the moment a strain of F. oxysporum could infect one Gros Michel, it could run wild through every single Gros Michel in existence. Similarly, in a beehive containing tens of thousands of nearly identical bees, if a parasite or germ can infect one bee, it will spread like wildfire and destroy the hive.

This creates a conundrum for the hive: if it takes genetic material from only one father, it can easily be wiped out by disease; if it takes genetic material from multiple fathers, any disease will likely affect only a fraction of the population, but there will be less incentive for the bees to take care of the hive. One semi-stable setup under these conditions may be for a hive to contain 3-6 different "clans" of bees with the same father (so a few different fathers for the whole hive). We would expect strong cooperation within each clan, but weak cooperation between clans (similar to how half-siblings usually interact). This would still provide much of the benefit of a homogeneous beehive, while also ensuring that the hive can survive a given disease.

However, diseases would still cause problems as hive size increases, and from the perspective of the hive (as well as the individual, in some circumstances), the decreased level of cooperation isn't ideal. We should expect that if bees are able to detect when a bee is defecting against the hive (whether by trying to reproduce, by trying to collect and reserve food for its own clan instead of the entire hive, or in any other way), it would be in a bee's best interest to punish its half-siblings for defection, to help ensure that they contribute to the entire hive's effort (we observe this happening in modern hives). Hives with more robust traditions of detecting and punishing cheating would outperform hives without them, by maintaining higher levels of cooperation even as the number of "clans" in the hive increases, thereby increasing resistance to disease, and thereby increasing the maximum size of the hive, while still operating as a single organism.

Note that the entire evolutionary story above is made up, and I don't claim to have any scientific or historic evidence backing it up, though it is roughly in line with my knowledge of evolutionary principles.

... Maybe I should just make this into a post. I didn't realize I had so much to say on this topic.

comment by MikkW (mikkel-wilson) · 2021-02-15T16:46:14.904Z · LW(p) · GW(p)

Reading through Atlas Shrugged, I get the sense that if becoming a billionaire in gold (measured in USD) isn't somewhere in your top ten life goals, Ayn Rand wants nothing to do with you.

I will modify that slightly for my own principle- if you don't want to one day have $1 billion worth of UBI Coin [LW · GW], then I don't want to be your friend, based on grounds that I expect can be justified using Functional Decision Theory (related to the fact that the expected value of being a random person in a society that uses mainly DDs is better than the expected value of being a random person in a USD society).

I think I can extend that principle a little further: if you're not the kind of person who also shares a preference for your friends being the type who want to be UBI Coin billionaires, I might still be friends with you if you're cool enough, but you can bet that I'll be judging you for that.

Replies from: mikkel-wilson
comment by MikkW (mikkel-wilson) · 2021-02-15T16:49:19.993Z · LW(p) · GW(p)

(Ideally, I would want to earn those $1 billion worth of UBI Coin via a Public Goods Market, that almost goes without saying)

comment by MikkW (mikkel-wilson) · 2021-02-12T21:21:19.377Z · LW(p) · GW(p)

A drug that arguably should be legal: a combined dysphoric / time-release euphoric, which initially causes incredibly unpleasant sensations in the mind, then, after a few hours, releases chemicals that create incredibly pleasant sensations. Since humans discount time fairly aggressively, it seems possible to me to balance this so that it creates stronger and longer-lasting positive experiences, while still not being addictive, due to the immediate negative sensations associated with it.

The unpleasant initial effects can include characteristics of the pill itself (being quite spicy and bitter, with an unpleasant texture), as well as chemical effects: a large dose of diphenhydramine, or a similar chemical that wears off faster, would make the product much less desirable to use, meaning the eventual high could be even more intense and sustained without forming habits.
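
A small illustration of the discounting claim, in Python: with steep enough discounting, the immediate unpleasantness can outweigh a larger delayed reward at decision time, even though the undiscounted total is clearly positive. The discount function, the value of k, and the magnitudes are all invented for illustration.

```python
# Illustration of the discounting claim above: at decision time, a steeply
# discounted delayed reward can look smaller than the immediate unpleasantness,
# even when the undiscounted total is clearly positive. Numbers are invented.

def hyperbolic_discount(value: float, delay_hours: float, k: float = 0.5) -> float:
    return value / (1 + k * delay_hours)

immediate_unpleasantness = -10   # felt now (delay 0)
delayed_pleasure = +40           # felt 4 hours later

undiscounted_total = immediate_unpleasantness + delayed_pleasure           # +30
decision_time_value = (hyperbolic_discount(immediate_unpleasantness, 0)
                       + hyperbolic_discount(delayed_pleasure, 4))         # -10 + 13.3 = +3.3

print(undiscounted_total, round(decision_time_value, 1))
# With a slightly larger k (steeper discounting), the decision-time value goes
# negative: the experience is net positive but not compulsively sought out.
```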

comment by MikkW (mikkel-wilson) · 2021-01-18T04:12:25.595Z · LW(p) · GW(p)

What happens if we assume that a comfortable life and reproduction are inviolable privileges, and imagine a world where these are (by the magic of positing) guaranteed never to be violated for any human? The number of humans would then increase exponentially, without end, until eventually the energy and resources available in the universe, within mankind's reach, are less than the resources needed to provide a comfortable life to every person. Therefore, there can exist no world where both reproduction and a comfortable life are guaranteed for all individuals, unless we happen to live in a world with infinite energy (negentropy) and resources.
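
A rough calculation behind this, in Python; the growth rate is an arbitrary assumption, and 10^80 atoms in the observable universe is a commonly cited rough figure used here as a deliberately generous resource bound.

```python
# Rough calculation behind the paragraph above: any fixed per-capita resource
# requirement is eventually overwhelmed by exponential population growth.

import math

population = 8e9            # people today, roughly
growth_rate = 0.01          # assume a modest 1% growth per year
atoms_in_universe = 1e80    # generous upper bound on available "resources"

# Years until the population alone exceeds the number of atoms in the universe:
years = math.log(atoms_in_universe / population) / math.log(1 + growth_rate)
print(round(years))  # ~16,000 years -- a blink on cosmological timescales
```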

---

The explanation might not be perfect, and the important implications that I believe follow may not be clear from this, but this is a principle that I often find myself meditating upon.

(This was originally written as a response to the daily challenge for day 12 of Hammertime [? · GW])

comment by MikkW (mikkel-wilson) · 2021-01-12T23:33:12.143Z · LW(p) · GW(p)

With vaccines on the horizon, it seems likely that we are nearing the end of lockdowns and the pandemic, but there is talk of worry that it's possible a mutant strain might resist the vaccine, which could put off the end of the pandemic for a while longer.

It seems to me that numerous nations have had a much better response to the pandemic than any state in the US, and have been able to maintain a much better quality of life during the pandemic than the states, including New Zealand, Japan, and South Korea. For someone with the flexibility, moving to one of these countries would have seemed like a smart move back when it seemed there was still a long time left in the pandemic, and would still seem like a good idea to anyone who feels the pandemic will not be over soon enough.

While every US state has, as a whole, failed to rein in the virus, I suspect it may be possible and worthwhile to establish a town or village in some state - perhaps not CA or NY, or whichever state you would most want to live in, but in some state - where everybody consents to measures similar to those taken in nations that have gotten a grasp on the virus, and to take advantage of the relative freedom from the virus to live a better life. If taken up by a collective, this may be a cheaper and (in some ways) more convenient alternative to moving to a country on the other side of the world.

comment by MikkW (mikkel-wilson) · 2021-01-10T03:24:28.963Z · LW(p) · GW(p)

In "Emedded Agency", Scott and Abram write:

In theory, I don't understand how to do optimization at all - other than methods that look like finding a bunch of stuff that I don't understand, and seeing if it accomplishes my goal. But this is exactly the kind of thing that's most prone to spinning up adversarial subsystems.

One form of optimization that comes to mind that is importantly different is to carefully consider a prototypical system, think about how the parts interact, identify how the system can be improved, and create a new prototype that one can expect to be better. While practical application of this type of optimization will still often involve producing and testing multiple prototypes, it differs from back-propagation or stochastic hill-climbing because the new system is better than the prototype it is based on for reasons the optimizing agent actually understands.

Replies from: mikkel-wilson
comment by MikkW (mikkel-wilson) · 2021-01-10T03:29:37.079Z · LW(p) · GW(p)

I think capitalism straddles the line between these two modes: an inventor or well-functioning firm will optimize by making modifications they actually understand, but the way the market optimizes products is how Scott and Abram describe it: you get a lot of stuff that you don't attempt to understand deeply, and choose whichever one looks best. While I am generally a fan of capitalism, there are examples of "adversarial subsystems" that have been spun up as a result of markets - the slave trade and urban pollution (e.g. smog) come to mind.

comment by MikkW (mikkel-wilson) · 2020-12-24T21:14:26.641Z · LW(p) · GW(p)

I recently wrote about combining Grand Chess with Drop Chess [LW · GW], to make what I felt could become my favorite version of chess. Today, I just read this article, which argues that the queen's unique status as a 'power piece' in Orthodox Chess - a piece that is stronger than any other piece on the board - is part of what makes Orthodox so iconic in the west, and that other major chesslikes similarly have a unique power piece (or pair of power pieces). According to this theory, Grand Chess's trifecta of power pieces may give it less staying power than Orthodox Chess. I'm not convinced, since Shogi has 2 power pieces, which is only 1 less than Grand Chess, and twice as many as Orthodox, but it is food for thought.

My first reaction was to add an Amazon (bishop + rook + knight in one piece) as a power piece, but it's not clear to me that there's an elegant way of adding it (although an 11x11 board might just be the obvious solution). It has also already been pointed out that my 'Ideal Chess' contains a large amount of piece power, and the ability to create a sufficiently buffed King has already been called into question even before an Amazon is added, so I'm somewhat dubious of that naïve approach.

comment by MikkW (mikkel-wilson) · 2020-12-08T18:27:09.515Z · LW(p) · GW(p)

Recently I was looking at the list of richest people, and for the most part it makes sense to me, but one thing confuses me: why is Bernard Arnault so rich? It seems to me that one can't get that rich simply off of fashion - you can get rich, but you can't become the third richest person in the world off of fashion. It's possible that I'm wrong, but I strongly suspect that there's some part of the story that I haven't heard yet- I suspect that one of his ventures is creating value in a way that goes beyond mere fashion, and I am curious to figure that out.

Replies from: mr-hire, mikkel-wilson
comment by Matt Goldenberg (mr-hire) · 2020-12-08T18:30:19.867Z · LW(p) · GW(p)

Most of his wealth comes from his stake in LVMH, a luxury real estate group.

 

Edit: Actually LVMH is involved in several luxury verticals, not just real estate.

Replies from: mikkel-wilson
comment by MikkW (mikkel-wilson) · 2020-12-08T20:51:04.841Z · LW(p) · GW(p)

But that doesn't answer my question. What is LVMH doing that makes them so valuable? Wikipedia says they "specialize in luxury goods", but that takes us right back to what I say in my original post. What value is LVMH creating, beyond just "luxury"? Again, I may be wrong, but it just doesn't seem possible to become the third richest person by selling "luxury" - whether real estate, champagne, clothes, or jewelry.

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2020-12-09T01:07:03.515Z · LW(p) · GW(p)

Expensive real estate actually seems like a great way to become one of the richest people.  Maybe we just have different priors.  

 

Edit: apparently, the real estate isn't where they make their money though...

Replies from: mikkel-wilson
comment by MikkW (mikkel-wilson) · 2020-12-09T03:21:23.152Z · LW(p) · GW(p)

I agree that real estate can make a person rich. But the path I see for that is only tangentially connected to luxury

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2020-12-09T14:41:01.381Z · LW(p) · GW(p)

For most sectors, I think there are tiers. Apple sells fewer devices at a slightly higher price point than e.g. Microsoft or Google. Selling at the highest tier, which only a few can afford, at the highest price point (which is itself a selling point of the product), makes intuitive sense as a path to being one of the richest, and real estate, as an asset class, makes intuitive sense to apply this strategy to.

comment by MikkW (mikkel-wilson) · 2020-12-08T21:03:04.972Z · LW(p) · GW(p)

An infographic I found shows that LVMH's revenues are driven by the following segments:

  1. "Fashion and leather goods": 38% of LVMH's revenues
  2. "Selective retailing": 28%
  3. "Perfumes and cosmetics": 13%
  4. "Wines and spirits": 10%

Between them, these account for ~90% of LVMH's revenue, with watches and jewelry making up most of the remaining 10%. So perhaps I should be asking: what are LVMH's fashion and retail segments doing to make them so valuable?

I will also note that this is the percentage of revenues, not profits. I might want to find out the proportion each of these segments contributes to profits (to ensure I don't accidentally chase a high-revenue, low-profit wild goose), and I could probably find that out by looking at LVMH's shareholder report.

comment by MikkW (mikkel-wilson) · 2020-10-30T00:49:29.108Z · LW(p) · GW(p)

It's a shame that in practice Aumann Agreement is expensive, but we should try to encourage Aumann-like updating whenever possible.

While, as I pointed out in my previous shortform, [LW(p) · GW(p)] Aumann Agreement is neither cheap nor free, it's powerful that simply by repeatedly communicating their differing beliefs to each other, two people can (in theory) arrive together at the same beliefs they would have held if each had access to all the information the other has, even without being aware of the specific information the other person has.

While it's not strictly necessary, Aumann's proof of the Agreement Theorem assumes that A) both agents are honest and rational, and, importantly, B) both agents are aware that the other is honest and rational (and furthermore, that the other agent knows that they know this, and so on). In other words, the rationality and honesty of each agent is presumed to be common knowledge between the two agents.

In real life, I often have conversations with people (even sometimes on LW) who I'm not sure are honest or rational, and who I'm not sure consider me to be honest and rational. Lack of common knowledge of honesty is a deal-breaker, and lack of common knowledge of rationality, while not a deal-breaker, slows the (already cumbersome) Aumann process down quite a bit.

So, I invite you to ask: How can we build common knowledge of our rationality and honesty? I've already posted one shortform [LW(p) · GW(p)] on this subject, but there's more to be said.

Replies from: steven0461
comment by steven0461 · 2020-11-01T18:50:57.988Z · LW(p) · GW(p)

I don't think there's any shortcut. We'll have to first become rational and honest, and then demonstrate that we're rational and honest by talking about many different uncertainties and disagreements in a rational and honest manner.

Replies from: mikkel-wilson
comment by MikkW (mikkel-wilson) · 2020-11-01T20:13:04.113Z · LW(p) · GW(p)

I don't think there's any shortcut.

Not sure I agree with you here. Well, I do agree that the only practical way I can think of to demonstrate honesty is to actually be honest and gain a reputation for honesty. However, I do think there are ways to augment that process. Right now, I can observe people being honest when I engage with their ideas, verify their statements myself, and update for the future that they seem honest; however, this is something I generally have to do for myself, and if someone else comes along and engages with the same person, they have to verify the statements all over again for themselves. Multiply this across hundreds or thousands of people, and you're wasting a lot of time.

I can also only build trust based on content I have engaged with: even if a person has a large backlog of honest communication, if I don't engage with that backlog, I will end up trusting that person less than they deserve. If there are people who I already know I can trust, it's possible to use their assignment of trust to extend trust to people I otherwise couldn't. There are ways to streamline that.

Regarding rationality: since rationality is not a single trait or skill, but rather many traits and skills, there is no single way to reliably signal the entirety of rationality; however, each individual trait and skill can be signaled in a way that facilitates the building of trust. As one example, if there existed a test that required the ability to robustly engage with the ideas communicated in Yudkowsky's Sequences, and I noticed that somebody had passed this test, I would be willing to update on that person's statements more than if I didn't know they could pass it. (I anticipate that people reading this will object that tests generally aren't reliable signals, and that people often forget what they are tested on. To the first objection, I have many thoughts on robust testing that I have yet to share, and haven't seen written elsewhere to my knowledge, and my thoughts on this subject are too long to write in this margin. Regarding forgetting, spaced repetition is the obvious answer.)

comment by MikkW (mikkel-wilson) · 2020-10-15T21:01:44.635Z · LW(p) · GW(p)

Riemannian geometry belongs on the list of fundamental concepts that are taught and known far less than they should be in any competent society

comment by MikkW (mikkel-wilson) · 2021-05-10T18:43:20.180Z · LW(p) · GW(p)

Any libertarian who doesn't have a plan to implement a Universal Basic Income in one form or another ultimately subscribes to an inherently contradictory philosophy. Liberty can only be realized when a person is not forced against their will to work in order to live.

Replies from: Gurkenglas, Measure, Dagon
comment by Gurkenglas · 2021-05-10T18:46:50.110Z · LW(p) · GW(p)

So a forager animal with no predators isn't free because it has to look for food?

Replies from: mikkel-wilson
comment by MikkW (mikkel-wilson) · 2021-05-10T19:37:07.854Z · LW(p) · GW(p)

I'm not sure currently what my thoughts on that situation are. Freedom and liberty are somewhat non-natural concepts: I have a good framework for discussing them meaningfully with regard to humans, but the further an entity gets from being similar to a person, the harder it is for me to think concretely about what freedom is for it. I do suspect that in some sense "liberty" is a concept whose specific relevance and salience to humans is unique in contrast to most other forms of life, including many closely related animal species - the human desire to have control over one's own destiny is a social emotion that likely developed to help us maximize our success in the context of the human social environment, which is quite unique even in comparison to other great ape social structures.

None of this is meant, of course, to imply that animals and apes can't or don't value freedom, either extrinsically or intrinsically, just that the human case is unique, and I don't at this moment have a good framework for extrapolating my post to non-human lifeforms.

Replies from: ChristianKl
comment by ChristianKl · 2021-05-11T09:48:17.167Z · LW(p) · GW(p)

Why do you believe that your concept of freedom is automatically the same as the concept of freedom that other libertarians use?

comment by Measure · 2021-05-10T18:46:27.749Z · LW(p) · GW(p)

Even in a "state of nature" you need to work in order to live — even if that's just gathering food from the environment.

Replies from: mikkel-wilson
comment by MikkW (mikkel-wilson) · 2021-05-10T19:37:26.572Z · LW(p) · GW(p)

State of nature ≠ liberty

Replies from: Measure
comment by Measure · 2021-05-10T22:05:54.049Z · LW(p) · GW(p)

Agreed, but my understanding is that most "libertarians" aren't trying to top-down maximize liberty or anything like that, but are starting from a state of nature and extrapolating some kind of social contract from there.

comment by Dagon · 2021-05-10T23:12:02.759Z · LW(p) · GW(p)

I think on the level you're evaluating, ALL political philosophies (perhaps excluding solipsistic or nihilistic ones) have some inherent contradictions. If you take them as "preferences for slight to major changes from the status quo" rather than "a complete description of a perfect end-state", they get a lot more reasonable.

It's quite consistent to decry some of the more egregious current impositions on liberty without demanding the additional impositions on some that would further free some others.