Mati_Roy's Shortform

post by Mati_Roy (MathieuRoy) · 2020-03-17T08:21:41.287Z · score: 4 (1 votes) · LW · GW · 64 comments


Comments sorted by top scores.

comment by Mati_Roy (MathieuRoy) · 2020-03-30T08:58:13.996Z · score: 5 (3 votes) · LW(p) · GW(p)

EtA: moved to a question: [LW · GW]

Why do we have offices?

They seem expensive, and not useful for jobs that can apparently be done remotely.


  • Social presence of other people working
  • Accountability
  • High bandwidth communication
  • Meta communication (knowing who's available to talk to)
  • Status quo bias

status: to integrate

comment by Dagon · 2020-03-30T15:30:39.075Z · score: 7 (4 votes) · LW(p) · GW(p)
  • Employee focus (having punctuated behaviors separating work from personal time)
  • Tax advantages for employers to own workspaces and fixtures rather than employees
  • Not clear that "can be done remotely" is the right metric. We won't know if "can be done as effectively (or more effectively) remotely" is true for some time.
comment by Mati_Roy (MathieuRoy) · 2020-03-31T01:02:57.711Z · score: 1 (1 votes) · LW(p) · GW(p)

thanks for your comment!

I just realized I should have used the question feature instead; here it is: [LW · GW]

comment by mr-hire · 2020-03-30T16:49:25.793Z · score: 5 (3 votes) · LW(p) · GW(p)

Increased sense of relatedness seems a big one missed here.

comment by Mati_Roy (MathieuRoy) · 2020-03-31T01:03:13.120Z · score: 1 (1 votes) · LW(p) · GW(p)

thanks for your comment!

I just realized I should have used the question feature instead; here it is: [LW · GW]

comment by Viliam · 2020-03-30T20:14:48.266Z · score: 3 (2 votes) · LW(p) · GW(p)

High status feels better when you are near your subordinates (when you can watch them, randomly disrupt them, etc.). High-status people make the decision whether remote work is allowed or not.

comment by Mati_Roy (MathieuRoy) · 2020-03-31T01:04:07.536Z · score: 1 (1 votes) · LW(p) · GW(p)

thanks for your comment!

I just realized I should have used the question feature instead; here it is: [LW · GW]

comment by Mati_Roy (MathieuRoy) · 2020-03-17T08:21:41.482Z · score: 5 (3 votes) · LW(p) · GW(p)

epistemic status: a thought I just had

  • the concept of 'moral trade' makes sense to me
  • but I don't think there's such a thing as 'epistemic trade'
  • although maybe agents can "trade" (in that same way) epistemologies and priors, but I don't think they can "trade" evidence

EtA: for those that are not familiar with the concept of moral trade, check out:

comment by Dagon · 2020-03-18T00:14:30.027Z · score: 4 (3 votes) · LW(p) · GW(p)

It's worth being clear what you mean by "trade" in these cases. Does "moral trade" mean "compromising one part of your moral beliefs in order to support another part"? or "negotiate with immoral agents to maximize overall moral value" or just "recognize that morals are preferences and all trade is moral trade"?

I think I agree that "trade" is the wrong metaphor for models and priors. There is sharing, and two-way sharing is often called "exchange", but that's misleading. For resources, "trade" implies loss of something and gain of something else, where the utility of the things to each party differ in a way that both are better off. For private epistemology (not in the public sphere where what you say may differ from what you believe), there's nothing you give up or trade away for new updates.

comment by Pattern · 2020-03-19T23:21:13.380Z · score: 3 (2 votes) · LW(p) · GW(p)

(I aimed for non-"political" examples, which ended up sounding ridiculous.)

Suppose you believed that the color blue is evil, and want there to be fewer blue things in the world.

Suppose I believed the same as you, except for me the color is red.

Perhaps we could agree on a moral trade - we will both be against the colors blue and red! Or perhaps something less extreme - you won't make things red unless they'd look really good red, and I won't make things blue unless they'd look really good blue. Or we trade in some other manner - if we were neighbors and our houses were blue and red we might paint them different colors (neither red nor blue), or trade them.

comment by Dagon · 2020-03-20T04:15:31.410Z · score: 2 (1 votes) · LW(p) · GW(p)

Hmm, those examples seem to be just "trade". Agreeing to do something dispreferred, in exchange for someone else doing something they disprefer and you prefer, when all things under consideration are permitted and optional under the relevant moral strictures.

> I aimed for non-"political" examples, which ended up sounding ridiculous.

I wonder if that implies that politics is one of the main areas where the concept applies.

comment by Mati_Roy (MathieuRoy) · 2020-03-18T15:29:07.937Z · score: 1 (1 votes) · LW(p) · GW(p)

meta: thanks for your comment; no expectation for you to read this comment; it doesn't even really respond to your comments, just some thoughts that came after reading it; see the last paragraph for an answer to your question | quality: didn't spend much time formatting my thoughts

I use "moral trade" for trades driven by non-egoist preferences. Egoist trade is the trivial case that's most prevalent: we trade resources because I care more about myself and you care more about yourself, and we each want something personally out of the trade.

Two people who differ only in that one adopts Bentham's utilitarianism and the other adopts Mill's might want to trade. One values the existence of a human more than the existence of a pig. So one might trade their diet (becoming vegan) for a donation to poverty alleviation.

Two people could have the same values, but one thinks there's a religious afterlife and the other doesn't, because they processed evidence differently. Someone could propose the following trade: the atheist will pray all their life (the reward massively outweighs the cost from the theist's perspective), and in exchange, the theist will sign up for cryonics (the reward massively outweighs the cost from the atheist's perspective). Hmm, actually, writing out this example, trading now seems to make sense to me. Assuming both people are pure utilitarians (and there's no opportunity cost), they would both, in expectation under their own model of the world, gain a much larger reward than the cost. I guess this could also be called moral trade, but here the difference in expected value comes from different models of the world instead of different values.

So you never actually trade epistemologies or priors (as in: I reprogram my mind if you reprogram yours so that we model the world more similarly), but you can trade acting as if. (Well, there are also cases where you would actually trade them, but only because it's morally beneficial to both parties.) It sounds trivial now, but yeah, epistemologies and priors are not necessarily intrinsically motivating. I'm not sure what I had in mind exactly yesterday.

Ah, I think I meant: assume I have Model 1 and you have Model 2. Model 1 evaluates Model 2 as 50% wrong and vice versa, and each model evaluates itself as 95% right. Assume there's a third model that is 94% right according to both. If you average, that seems better. But it obviously doesn't mean it's optimal from either agent's perspective to accept this modification to their model.

comment by Mati_Roy (MathieuRoy) · 2020-09-15T04:08:52.073Z · score: 4 (3 votes) · LW(p) · GW(p)

Epistemic status: thinking out loud

The term "weirdness points" puts a different framing on the topic.

I'm thinking maybe I/we should also do this for "recommendation points".

The amount I'm willing to bet depends both on how important it seems to me and how likely I think the other person will appreciate it.

The way I currently spend my recommendation points is pretty fat-tailed: I see someone's attention as scarce, so I want to keep it for things I think are really important, and the importance I assign to information is itself fat-tailed. I'll sometimes say something like "I think you'll like this, but I don't want to bet reputation on it".

But maybe I'm too averse. Maybe if I make small bets (in terms of recommendation points), it'll increase my budget for future recommendations. Or maybe I should separate the probability that the person will appreciate the info from how much they might appreciate it. And maybe people will update on my stated probabilities whether or not I say I want to bet reputation.

comment by Mati_Roy (MathieuRoy) · 2020-04-27T05:59:25.371Z · score: 4 (3 votes) · LW(p) · GW(p)

current intuitions for personal longevity interventions, in order of priority (cryo first for older people): sleep well, lifelogging, researching mind-readers, investing to buy therapies in the future, staying home, signing up for cryo, paying a cook / maintaining a low weight, hiring Wei Dai to research modal immortality, paying a trainer, preserving stem cells, moving near a cryo facility, having someone watch you to detect when you die, funding plastination research

EtA: maybe lucid dreaming to remember your dreams; some drugs (bacopa?) to improve memory retention

also not really important in the long run, but sleeping less to experience more

comment by William Walker (william-walker) · 2020-04-28T22:38:44.714Z · score: 4 (3 votes) · LW(p) · GW(p)

NAD+ boosting (NR now, keep an eye on NRH for future).

CoQ10, NAC, keep D levels up in winter.

Telomerase activation (Centella asiatica, astragalus, eventually synthetics if Sierra Sciences gets its TRAP screen funded again or if the Chinese get tired of waiting on US technology...)

NR, C, D, Zinc for SARS-CoV-2 right now, if you're not already.

Become billionaire, move out of FDA zone, have some AAV-vector gene modifications... maybe some extra p53 copies, like the Pachyderms? Fund more work on Bowhead Whale comparative genetics. Fund a company to commercially freeze and store transplant organs, to perfect a freezing protocol (I've seen Alcor's...)

Main thing we need is a country where it's legal and economically possible to develop and sell anti-agathic technology... even a billionaire can't substitute for the whole market.

comment by Mati_Roy (MathieuRoy) · 2020-09-20T05:54:38.868Z · score: 3 (2 votes) · LW(p) · GW(p)

Topic: AI adoption dynamic


GPT-3:

  • fixed cost: 4.6M USD
  • variable cost: 790 requests/USD (source)

Human:

  • fixed cost: 0-500k USD (depending on whether you start from birth and on the task they need to be trained for)
  • variable cost: 10-1000 USD/day (depending on whether you count their maintenance cost or the rate they charge)

So an AI currently seems more expensive to train, but less expensive to use (as might be obvious to most of you).

Of course, trained humans are better than GPT-3. And this comparison has other limitations. But I still find it interesting.

> According to one estimate, training GPT-3 would cost at least $4.6 million. And to be clear, training deep learning models is not a clean, one-shot process. There's a lot of trial and error and hyperparameter tuning that would probably increase the cost several-fold. (source)
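For what it's worth, the fixed-vs-variable framing above can be turned into a toy break-even calculation. The 4.6M training cost and 790 requests/USD come from the figures quoted in this comment; the human-side midpoints and the requests-per-day figure are assumptions I'm making up for illustration:

```python
# Toy break-even sketch using the (very uncertain) figures from the comment:
# GPT-3-style model ~$4.6M to train, ~790 requests per USD to run;
# a human ~$0-500k to train, ~$10-1000/day to employ.

AI_FIXED = 4_600_000        # USD, one-time training cost (quoted estimate)
AI_PER_REQUEST = 1 / 790    # USD per request (quoted estimate)

HUMAN_FIXED = 250_000       # USD, midpoint of the 0-500k range (assumption)
HUMAN_PER_DAY = 300         # USD/day, within the 10-1000 range (assumption)
REQUESTS_PER_DAY = 100      # requests a human handles daily (assumption)

HUMAN_PER_REQUEST = HUMAN_PER_DAY / REQUESTS_PER_DAY

def total_cost(fixed, per_request, n):
    """Total cost of serving n requests given a fixed up-front cost."""
    return fixed + per_request * n

# Break-even: AI_FIXED + a*n = HUMAN_FIXED + h*n  =>  n = (AI_FIXED - HUMAN_FIXED) / (h - a)
n_break_even = (AI_FIXED - HUMAN_FIXED) / (HUMAN_PER_REQUEST - AI_PER_REQUEST)
print(f"AI becomes cheaper overall after ~{n_break_even:,.0f} requests")
```

Under these made-up numbers the higher training cost is amortized after roughly 1.5 million requests, which is the "expensive to train, cheap to use" dynamic in one line.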


comment by gwern · 2020-09-20T17:35:14.399Z · score: 7 (4 votes) · LW(p) · GW(p)

> There's a lot of trial and error and hyperparameter tuning that would probably increase the cost several-fold.

All of which was done on much smaller models and GPT-3 just scaled up existing settings/equations - they did their homework. That was the whole point of the scaling papers, to tell you how to train the largest cost-effective model without having to brute force it! I think OA may well have done a single run and people are substantially inflating the cost because they aren't paying any attention to the background research or how the GPT-3 paper pointedly omits any discussion of hyperparameter tuning and implies only one run (eg the dataset contamination issue).

comment by Mati_Roy (MathieuRoy) · 2020-09-21T11:32:13.811Z · score: 1 (1 votes) · LW(p) · GW(p)

Good to know, thanks! 

comment by Mati_Roy (MathieuRoy) · 2020-09-03T21:36:00.818Z · score: 3 (2 votes) · LW(p) · GW(p)

generalizing from what a friend proposed to me: don't aim at being motivated to do [desirable habit], aim at being addicted to (/obsessed with) doing [desirable habit] (i.e. having difficulty not doing it). I like this framing; relying on always being motivated feels harder to me

(I like that advice, but it probably doesn't work for everyone)

comment by Mati_Roy (MathieuRoy) · 2020-08-06T16:08:39.347Z · score: 3 (2 votes) · LW(p) · GW(p)

I can pretty much only think of good reasons for having generally pro-entrapment laws. Not any kind of trap, but some kinds of traps seem robustly good. Ex.: I'd set traps for situations that are likely to happen in real life, and that show unambiguous criminal intent.

It seems like a cheap and effective way to deter crime and identify people at risk of criminal behavior.

I've only thought about this for a bit though, so maybe I'm missing something.

x-post with Facebook:

comment by Dagon · 2020-08-06T17:15:51.298Z · score: 7 (4 votes) · LW(p) · GW(p)

I think the setup you describe (unambiguously show criminal intent in likely situations) is _already_ allowed in most jurisdictions. "entrapment" implies setting up the situation in such a way that it encourages the criminal behavior, rather than just revealing it.

comment by G Gordon Worley III (gworley) · 2020-08-07T00:56:06.299Z · score: 4 (2 votes) · LW(p) · GW(p)

IANAL, but this sounds right to me. It's fine if, say, the police hide out at a shop that is tempting and easy to rob and encourage the owner not to make their shop less tempting or easy to rob so that it can function as a "honeypot" that lets them nab people in the act of committing crimes. On the other hand, although the courts often decide that it's not entrapment, undercover cops soliciting prostitutes or illegal drugs are much closer to being entrapment, because then the police are actively creating the demand for crime to supply.

Depending on how you feel about it, I'd say this suggests the main flaw in your idea, which is that it will be abused on the margin to catch people who otherwise would not have committed crimes, even if you try to circumscribe it such that the traps you can create are far from causing more marginal crime, because incentives will push for expansion of this power. At least, that would be the case in the US, because it already is.

comment by Viliam · 2020-08-07T22:01:09.174Z · score: 6 (3 votes) · LW(p) · GW(p)

Aren't there already too many people in prisons? Do we also need to put people there who normally wouldn't have committed any crime?

I guess this depends a lot on your model of crime. If your model is something like "some people are inherently criminal, but most are inherently non-criminal; the latter would never commit a crime, and the former will use every opportunity that seems profitable to them", then the honeypot strategy makes sense. You find and eliminate the inherently criminal people before they get an opportunity to actually hurt someone.

My model is that most people could be navigated into committing a crime, if someone spent the energy to understand them and create the proper temptation. Especially when we consider the vast range of things that count as crimes: it doesn't have to be murder, but something like smoking weed, or even things you had no idea could be illegal; then I'd say the potential success rate is 99%. But even if we limit ourselves to the motte of crime, say theft and fraud, I'd still say more than 50% of people could be tempted if someone spent enough resources on it. Of course some people are easier to nudge than others, but we are all on the spectrum.

Emotionally speaking, "entrapment" feels to me like "it is too dangerous to fight the real criminals, let's get some innocent but stupid person into trouble instead, and it will look the same in our crime-fighting statistics".

comment by Mati_Roy (MathieuRoy) · 2020-08-13T17:44:35.277Z · score: 1 (1 votes) · LW(p) · GW(p)

uh, I didn't say anything about prisons. there are reasons to identify people at high risk of committing crimes.

and no, it's not about catching people who wouldn't have committed crimes; it's about catching people who would have committed crimes without being caught (but maybe I misused the word 'entrapment', and that's not what it means)

> Emotionally speaking, "entrapment" feels to me like "it is too dangerous to fight the real criminals, let's get some innocent but stupid person into trouble instead, and it will look the same in our crime-fighting statistics".

Well, that's (obviously?) not what I mean.

I elaborated more on the Facebook post linked above.

comment by Viliam · 2020-08-15T16:19:06.943Z · score: 2 (1 votes) · LW(p) · GW(p)

> Well, that's (obviously?) not what I mean.

I agree, but that seems to be how the idea is actually used in real life. By people other than you. By people who get paid when they catch criminals... which creates an incentive for them to increase easy-to-solve criminality rather than reduce it, as long as they find plausibly deniable methods to do it.

In theory, if you could create "traps" in a way that does not increase temptation (because increased temptation = increased crime), for example adding a dozen unlocked trap bikes to a street already containing a hundred unlocked bikes... yeah, there is probably no downside to that.

In practice, if you allow this, and if you start rewarding people for catching thieves using the traps, they will get creative. Because a trap that follows the spirit of the law does not maximize the reward.

comment by Mati_Roy (MathieuRoy) · 2020-07-25T23:29:13.654Z · score: 3 (2 votes) · LW(p) · GW(p)

Philosophical zombies are creatures that are exactly like us, down to the atomic level, except they aren't conscious.

Complete philosophical zombies go further. They too are exactly like us, down to the atomic level, and aren't conscious. But they are also purple spheres (except we see them as if they weren't), they want to maximize paperclips (although they act and think as if they didn't), and they are very intelligent (except they act and think as if they weren't).

I'm just saying this because I find it funny ^^. I think consciousness is harder (for us) to reduce than shapes, preferences, and intelligence.

comment by Chris_Leong · 2020-07-26T06:35:27.565Z · score: 5 (3 votes) · LW(p) · GW(p)

It's actually not hard to find examples of people who are intelligent, but act and think as though they aren't =P

comment by Mati_Roy (MathieuRoy) · 2020-07-05T15:15:06.916Z · score: 3 (2 votes) · LW(p) · GW(p)

topic: lifelogging as life extension

which formats should we preserve our files in?

I think it should be:

- open source and popular (to increase chances it's still accessible in the future)

- resistant to data degradation (thanks to Matthew Barnett for bringing this to my attention)
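On the degradation point: whatever format you pick, storing a checksum manifest alongside the archive at least lets future-you detect bit rot (actually repairing it needs redundancy, e.g. extra copies or parity data; this only detects damage). A minimal sketch in Python; the function names are my own:

```python
# Detect (not repair) data degradation: record a SHA-256 digest for every
# file in an archive, and later check which files no longer match.
import hashlib
import pathlib

def manifest(root):
    """Map each file path under root to its SHA-256 digest."""
    out = {}
    for p in sorted(pathlib.Path(root).rglob("*")):
        if p.is_file():
            out[str(p)] = hashlib.sha256(p.read_bytes()).hexdigest()
    return out

def damaged(root, saved):
    """Return the paths whose current digest no longer matches the saved manifest."""
    current = manifest(root)
    return {p for p, digest in saved.items() if current.get(p) != digest}
```

Run `manifest()` on Backup Day, store the result with the archive, and run `damaged()` against it whenever you want to audit the copies.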


comment by DonyChristie · 2020-07-06T02:04:29.860Z · score: 1 (1 votes) · LW(p) · GW(p)

Mati, would you be interested in having a friendly and open (anti-)debate on here (as a new post) about the value of open information, both for life extension purposes and else (such as Facebook group moderation)? I really support the idea of lifelogging for various purposes such as life extension but have a strong disagreement with the general stance of universal access to information as more-or-less always being a public good.

comment by Mati_Roy (MathieuRoy) · 2020-07-06T02:18:07.243Z · score: 1 (1 votes) · LW(p) · GW(p)

Meta: This isn't really related to the above comment, so it might be better to start a new comment in my shortform next time.

Object: I don't want to argue about open information in general for now. I might be open to discussing something more specific and directly actionable, especially if we haven't done so yet and you think it's important.

It doesn't currently seem that relevant to me to make one's information public for life-extension purposes, given you can just back up the information privately, in case you were implying or thinking that.

I also don't advocate (and haven't in the past) for universal access to information, in case you were implying or thinking that.

comment by Mati_Roy (MathieuRoy) · 2020-07-05T15:18:24.199Z · score: 1 (1 votes) · LW(p) · GW(p)

note to self, to read:

comment by Mati_Roy (MathieuRoy) · 2020-07-04T12:07:16.703Z · score: 3 (2 votes) · LW(p) · GW(p)

topic: lifelogging as life extension

epistemic status: idea

Backup Day. Day where you commit all your data to blu-rays in a secure location.

When could that be?

Perihelion is at the beginning of the year. But maybe it would be better to have it on a day that commemorates some event relevant to us.


comment by avturchin · 2020-07-04T12:16:02.871Z · score: 3 (2 votes) · LW(p) · GW(p)

I am now copying a 4 TB HDD and it is taking 50 hours. Blu-rays are more time-consuming, as one needs to change the disks, and it would take around 80 disks of 50 GB each to record the same hard drive. So it could take more than a day of work.
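The arithmetic above checks out; here's a quick sketch (the ~40 minutes per-disc burn time is my assumption, and real burn speeds vary a lot):

```python
# Sanity check: backing up a 4 TB drive onto 50 GB blu-ray discs.
TB = 1000                     # GB per TB (decimal, as drive makers count)
drive_gb = 4 * TB
disc_gb = 50

discs = -(-drive_gb // disc_gb)   # ceiling division
burn_minutes_per_disc = 40        # assumption: ~40 min per 50 GB burn
total_hours = discs * burn_minutes_per_disc / 60

print(f"{discs} discs, ~{total_hours:.0f} hours of burning (plus disc swapping)")
```

So roughly 80 discs and 50+ hours of burn time even before counting the manual disc changes, which supports the "Backup Week" suggestion below.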

comment by Mati_Roy (MathieuRoy) · 2020-07-05T01:20:22.453Z · score: 3 (2 votes) · LW(p) · GW(p)

good point! maybe we should have a 'Backup Week'!:)

comment by Mati_Roy (MathieuRoy) · 2020-05-07T07:16:37.302Z · score: 3 (2 votes) · LW(p) · GW(p)

I feel like I have slack. I don't need to work much to be able to eat; if I don't work for a day, nothing viscerally bad happens in my immediate surroundings. This allows me to think longer-term and take on altruistic projects. But on the other hand, I feel like every movement counts; that there's no looseness in the system. Every lost move is costly. A recurrent thought I've had in the past weeks is: there's no slack in the system.

comment by Dagon · 2020-05-07T15:20:58.350Z · score: 4 (2 votes) · LW(p) · GW(p)

Every move DOES count, and "nothing viscerally bad" doesn't mean it wasn't a lost opportunity for improvement _on some dimensions_. The problem is that well-being is highly dimensional, and we only have visibility into a few aspects, and measurements of only a subset of those. It could easily be a huge win for your future capabilities to not work-for-pay that day.

The Slack that can be described is not true Slack. Slack is in the mind. Slack is freedom FROM freedom. Slack is the knowledge that every action or inaction changes the course of the future, _AND_ that you control almost none of it. You don't (and probably CAN'T) actually understand all the ways you're affecting your future experiences. Simply give yourself room to try stuff and experience them, without a lot of stress about "causality" or "optimization". But don't fuck it up. True slack is the ability to obsess over plans and executions in order to pick the future you prefer, WHILE YOU SIMULTANEOUSLY know you're wrong about causality and about your own preferences.

[ note and warning: my conception of slack has been formed over decades of Subgenius popehood (though I usually consider myself more Discordian), and may diverge significantly from other uses. ]

comment by Mati_Roy (MathieuRoy) · 2020-04-14T14:16:05.973Z · score: 3 (2 votes) · LW(p) · GW(p)

Today is Schelling Day. You know where and when to meet for the hangout!

comment by Mati_Roy (MathieuRoy) · 2020-09-24T11:02:50.496Z · score: 2 (2 votes) · LW(p) · GW(p)

tattoo idea: I won't die in this body

in Toki Pona: ale pini mi li ala insa e sijelo ni

direct translation: life's end (that is) mine (will) not (be) inside body this

EtA: actually I got the Toki Pona wrong; see:

comment by Mati_Roy (MathieuRoy) · 2020-09-20T11:11:32.171Z · score: 2 (2 votes) · LW(p) · GW(p)

When you're sufficiently curious, everything feels like a rabbit hole.

Challenge me by saying a very banal statement ^_^


comment by mr-hire · 2020-09-20T17:16:48.239Z · score: 2 (1 votes) · LW(p) · GW(p)

I'm tired because I didn't sleep well.

comment by Mati_Roy (MathieuRoy) · 2020-09-20T11:20:33.479Z · score: 2 (2 votes) · LW(p) · GW(p)

Sort of smashing both of those sayings together:

> “If you wish to make an apple pie from scratch, you must first invent the universe.” -Carl Sagan

> "Any sufficiently analyzed magic is indistinguishable from science!"-spin of Clarke's third law

to get:

Sufficiently understanding an apple pie is indistinguishable from understanding the world.

comment by Mark Xu (mark-xu) · 2020-09-21T05:51:08.073Z · score: 1 (3 votes) · LW(p) · GW(p)

reminded me of Uriel explaining Kabbalah:


comment by Mati_Roy (MathieuRoy) · 2020-09-21T11:41:27.529Z · score: 1 (1 votes) · LW(p) · GW(p)


comment by Mati_Roy (MathieuRoy) · 2020-07-12T16:04:12.456Z · score: 2 (2 votes) · LW(p) · GW(p)

People say we can't bet about the apocalypse. But what about taking on debt? The person who thinks the probability of apocalypse is higher would accept a higher interest rate on their debt, since by the time repayment comes due there might be no one to whom the money is worth anything, or the money itself might not be worth much.

I guess there are also reasons to want more money during a global catastrophe, and reasons to not want to keep money for great futures, so that wouldn't actually work.
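The pricing logic in the first paragraph can be made concrete: a risk-neutral lender who believes the loan is only repaid in non-doom worlds would demand a rate satisfying (1 − p_doom)(1 + r) = 1 + r_safe. A toy sketch (the function name and the 2% risk-free rate are my choices, not anything standard):

```python
# Fair interest rate on a loan that is only repaid if the apocalypse
# doesn't happen: (1 - p_doom) * (1 + r) = 1 + r_safe.
def fair_rate(p_doom, r_safe=0.02):
    """Annual rate a risk-neutral lender needs, given doom probability p_doom."""
    return (1 + r_safe) / (1 - p_doom) - 1

for p in (0.0, 0.1, 0.5):
    print(f"p_doom={p:.0%} -> fair annual rate {fair_rate(p):.1%}")
```

The borrower who assigns a higher doom probability than the lender thinks this rate is a bargain, so the rate each side accepts reveals their beliefs, which is exactly the bet-like mechanism being proposed.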

comment by Mati_Roy (MathieuRoy) · 2020-05-01T05:17:00.265Z · score: 2 (2 votes) · LW(p) · GW(p)

meta: LessWrong could have people predict whether they will upvote a post based just on the title

comment by Zachary Robertson (zachary-robertson) · 2020-05-05T03:15:50.238Z · score: 1 (1 votes) · LW(p) · GW(p)

Tangential, but I'd venture to guess there's significant correlation between title-choice (word-vec) and upvotes on this site. I wonder if there'd be a significant difference here as compared with say Reddit?

comment by Mark Xu (mark-xu) · 2020-05-04T04:00:05.294Z · score: 1 (1 votes) · LW(p) · GW(p)

I think this breaks because it results in people upvoting based on the title. I recall some study about how people did things they predicted they would do with higher than ~60% chance almost 95% of the time (numbers made up; I think I remember the direction/order of magnitude of the effect size roughly correctly; don't know if it survived the replication crisis)

comment by Mati_Roy (MathieuRoy) · 2020-05-04T04:18:57.972Z · score: 1 (1 votes) · LW(p) · GW(p)

That's potentially a good point! But it doesn't say how the causality works. Maybe the prediction affects the outcome or maybe they're just bad at predicting / modelling themselves.

comment by Mati_Roy (MathieuRoy) · 2020-04-24T17:44:00.788Z · score: 2 (2 votes) · LW(p) · GW(p)

There's a post, I think by Robin Hanson on Overcoming Bias, that says people care about what their peers think of them, but that we can hack our brains into doing awesome things by making this reference group the elite of the future. I can't find this post. Do you have a link?

comment by Mati_Roy (MathieuRoy) · 2020-03-30T09:00:24.560Z · score: 2 (2 votes) · LW(p) · GW(p)

Personal Wiki

might be useful for people to have a personal wiki where they take notes, instead of everyone taking notes in private Google Docs

status: to do / to integrate

comment by Mati_Roy (MathieuRoy) · 2020-09-27T18:50:36.228Z · score: 1 (1 votes) · LW(p) · GW(p)

I remember someone in the LessWrong community (I think Eliezer Yudkowsky, but maybe Robin Hanson or someone else, or maybe only rationalist-adjacent; maybe in an article or a podcast) saying that people who believe in "UFOs" (or people who believe in unproven conspiracy theories) would stop being so enthusiastic about them if they became actually known to be true, with good evidence. Does anyone know what I'm referring to?

comment by Ruby · 2020-09-27T19:12:06.485Z · score: 4 (2 votes) · LW(p) · GW(p)

Eliezer talks about how dragons wouldn't be exciting if they were real, I recall.

I'm not sure that's correct.

comment by Mati_Roy (MathieuRoy) · 2020-09-27T19:31:12.932Z · score: 1 (1 votes) · LW(p) · GW(p)

ah, someone found it:

It's from "If You Demand Magic, Magic Won't Help" [? · GW], where he says at one point:

> The worst catastrophe you could visit upon the New Age community would be for their rituals to start working reliably, and for UFOs to actually appear in the skies. What would be the point of believing in aliens, if they were just there, and everyone else could see them too? In a world where psychic powers were merely real, New Agers wouldn't believe in psychic powers, any more than anyone cares enough about gravity to believe in it.

comment by Mati_Roy (MathieuRoy) · 2020-09-24T11:47:38.380Z · score: 1 (1 votes) · LW(p) · GW(p)

sometimes I see people say "(doesn't) believe in science" when in fact they should say "(doesn't) believe in scientists"

or, more precisely, "relative credence in the institutions trying to do science"


comment by Mati_Roy (MathieuRoy) · 2020-09-24T09:51:57.174Z · score: 1 (1 votes) · LW(p) · GW(p)

hummm, I think I prefer the expression 'skinsuit' to 'meatbag'. feels more accurate, but am not sure. what do you think?


comment by Mati_Roy (MathieuRoy) · 2020-09-24T09:36:03.276Z · score: 1 (1 votes) · LW(p) · GW(p)

I just realized my System 1 was probably anticipating our ascension to the stars to start in something like 75-500 years.

But actually, colonizing the stars could be millions of subjective years away if we go through an em phase. On the other hand, we could also have finished spreading across the cosmos in only a few subjective decades, if I get cryopreserved and the aestivation hypothesis is true.

comment by Mati_Roy (MathieuRoy) · 2020-09-18T05:57:09.050Z · score: 1 (1 votes) · LW(p) · GW(p)

I created a Facebook group to discuss moral philosophies that value life in and of itself:

comment by Mati_Roy (MathieuRoy) · 2020-09-18T05:35:14.279Z · score: 1 (1 votes) · LW(p) · GW(p)

How to calculate subjective years of life?

For non-human animal brains, I would compare them to the baseline of individuals in their own species.

For transhumans that had their mind expanded, I don't think there's an obvious way to get an equivalence. What would be a subjective year for a Jupiter brain?

Maybe it could be in terms of information processed, but in that case, a Jupiter brain would be living A LOT of subjective time per objective time.

Ultimately, given I don't have "intrinsic" diminishing returns on additional experience, the natural definition for me would be the amount of 'thinking' that is as valuable. So a subjective year for my future Jupiter brain would be the duration for which I find that experience as valuable as a subjective year now.

Maybe that could even account for the diminishing value of experience at a specific mind size, because events would start looking more and more similar?? But it otherwise wouldn't work for people who have "intrinsic" diminishing returns on additional experience. It would notably not work for people for whom marginal experiences start becoming undesirable at some point.
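The definition in the last two paragraphs can be formalized as a toy integral: if v(t) is the value-density of experience at objective time t, measured relative to one current subjective year per objective year, then subjective years lived over [0, T] is ∫v dt. A sketch where everything, including the example v, is made up for illustration:

```python
# Subjective years as the integral of a value-density function v(t),
# normalized so that v = 1 means "as valuable as a subjective year now
# per objective year". Crude rectangle-rule integration.
def subjective_years(v, T, dt=0.01):
    """Approximate the integral of v over [0, T]."""
    t, total = 0.0, 0.0
    while t < T:
        total += v(t) * dt
        t += dt
    return total

# e.g. a mind whose experience doubles in value-density after an upgrade at t=10
v = lambda t: 1.0 if t < 10 else 2.0
print(subjective_years(v, 20))  # ~30 subjective years in 20 objective years
```

For a Jupiter brain, v(t) would be enormous, which recovers the "A LOT of subjective time per objective time" point above; a mind with intrinsic diminishing returns is exactly one for which no such additively separable v exists.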

comment by avturchin · 2020-09-18T12:59:27.818Z · score: 4 (2 votes) · LW(p) · GW(p)

Interestingly, an hour in childhood is subjectively equal to between a day and a week in adulthood, according to a recent poll I made. As a result, the middle of a human life in terms of subjective experience is somewhere in the teenage years.

Also, experiences of an adult are more dull and similar to each other.

Tim Urban tweeted recently: "Was just talking to my 94-year-old grandmother and I was saying something about how it would be cool if I could be 94 one day, a really long time from now. And she cut me off and said “it’s tomorrow.” The "years go faster as you age" phenomenon is my least favorite phenomenon."

comment by Mati_Roy (MathieuRoy) · 2020-09-02T02:43:39.266Z · score: 1 (1 votes) · LW(p) · GW(p)

The original Turing test has a human evaluator.

Other evaluators that I think would be interesting include: the AI passing the test, a superintelligent AI, and an omniscient maximally-intelligent entity (except without the answer to the test).

Thought while reading this thread.

comment by Mati_Roy (MathieuRoy) · 2020-05-09T06:47:49.630Z · score: 1 (1 votes) · LW(p) · GW(p)

Blocking one ear canal

Category: Weird life optimization

One of my ear canals has a different shape. When I was young, my mother would tell me that this one was harder to clean, and that ze couldn't see my eardrum. This ear accumulates wax more easily. A few months ago, I decided to let it block.

An obvious possible cognitive bias here is the just-world bias: if something bad happens often enough, I'll start to think it's good.

But here are benefits this has for me:

  1. When sleeping, I can put my good ear on the pillow, and this now isolates me from sound pretty well. And it isn't uncomfortable, unlike the alternatives (i.e. earmuffs, a second pillow, my arm).

  2. I'm only using one ear, so if I become hard of hearing when older (a common problem, I think), I can remove the wax from my backup ear.

This even makes me wonder whether having one of your ear canals differently shaped is something that's actually been selected for.

Other baffling / really surprising observation: my other ear is also blocked on most mornings, but just touching it a bit unblocks it.

comment by Mati_Roy (MathieuRoy) · 2020-05-09T07:25:33.682Z · score: 3 (2 votes) · LW(p) · GW(p)

x-posting someone's comment from my wall:

  1. Earplugs?
  2. Age-related hearing loss isn't caused only by exposure to noise.
comment by Mati_Roy (MathieuRoy) · 2020-09-15T05:33:35.956Z · score: 1 (1 votes) · LW(p) · GW(p)

topic: AI timelines

epistemic status: stated more confidently than I am, but seems like a good consideration to add to my portfolio of plausible models of AI development

when I asked my inner sim "when will we have an AI better than a human at any task", it returned 21% before 2100 (52% that we won't), which is a low probability compared to AI researchers and longtermist forecasters.

but then I asked my inner sim "when will we have an AI better than a human at any game". the timeline for this seemed much shorter.

but a game is just a task that has been operationalized.

so what my inner sim was saying is not that human-level-capable AI is far away, but that human-level-capable AND aligned AI is far away. I was imagining AIs wouldn't clean up my place anytime soon not because it's hard to do (well, not for an AI XD), but because it's hard to specify what we mean by "don't cause any harm in the process".

in other words, I think alignment is likely to be the bottleneck

the main problem won't be creating an AI that can solve a problem; it will be operationalizing the problem in a way that properly captures what we care about. it won't be about winning games, but about creating them.

I should have known; I was well familiar with the orthogonality thesis for a long time

also see David's comment about alignment vs capabilities:

the Turing Test might be hard to pass because even if you're as smart as a human, if you don't already know what humans want, it seems like it could be hard for a human-level AI to learn it (as well as a human does). (side note: learning what humans want =/= wanting what humans want; that's a classic confusion.) so maybe a better test for human-level intelligence would be: an AI beating a human at any game (where a game is a well-operationalized task, and doesn't include figuring out what humans want)

discussed with Matthew Barnett, David Krueger


comment by Mati_Roy (MathieuRoy) · 2020-08-27T06:41:34.680Z · score: 1 (1 votes) · LW(p) · GW(p)

If your animal companion kills a human unlawfully, instead of zir being euthanized, there should be an option for you to pay to put zir in jail.