Posts

How much harder is it to revive a neuro-only cryonics patient? 2021-01-12T23:24:45.963Z
Signaling importance 2020-12-08T09:14:36.148Z
Predictions made by Mati Roy in early 2020 2020-11-21T03:24:56.020Z
What fraction of Dan Ariely's Irrational Game hasn't replicated? 2020-11-09T20:25:27.445Z
What features would you like a prediction platform to have? 2020-10-13T00:48:03.024Z
Reviews of the book 'The Alignment Problem' 2020-10-11T07:41:14.841Z
Reviews of TV show NeXt (about AI safety) 2020-10-11T04:31:48.363Z
Buying micro-biostasis 2020-10-07T10:55:19.006Z
What reacts would you like to be able to give on posts? (emoticons, cognicons, and more) 2020-10-04T18:31:06.596Z
What are examples of Rationalist fable-like stories? 2020-09-28T16:52:13.500Z
What are good ice breaker questions for meeting people in this community? 2020-09-28T15:07:16.798Z
What hard science fiction stories also got the social sciences right? 2020-09-27T20:37:44.256Z
Surviving Petrov Day 2020-09-26T16:40:03.169Z
Has anyone written stories happening in Hanson's em world? 2020-09-21T14:37:11.150Z
For what X would you be indifferent between living X days, but forgetting your day at the end of everyday, or living 10 days? (terminally) 2020-09-18T04:05:59.078Z
How do you celebrate your birthday? 2020-09-17T10:00:50.609Z
What are examples of simpler universes that have been described in order to explain a concept from our more complex universe? 2020-09-17T01:31:10.367Z
What are examples of 'scientific' studies that contradict what you believe about yourself? 2020-08-03T06:11:19.683Z
When a status symbol loses its plausible deniability, how much power does it lose? 2020-07-07T00:48:21.558Z
The Echo Fallacy 2020-07-05T23:00:39.476Z
[Crowdfunding] LessWrong podcast 2020-07-03T20:59:53.590Z
The Book of HPMOR Fanfics 2020-07-03T13:32:17.536Z
Is taking bacopa good for life extension? 2020-05-23T08:54:27.480Z
What aspects of the world emotionally bothers you on an immediate personal level on a daily basis? 2020-05-22T06:27:55.357Z
[link] Biostasis / Cryopreservation Survey 2020 2020-05-16T07:20:58.879Z
What was your reasoning for deciding whether to raise children? 2020-05-15T03:53:23.776Z
What work of fiction explore increased transparency in the world? 2020-05-13T21:15:01.640Z
What are articles on "lifelogging as life extension"? 2020-05-13T20:35:10.676Z
Is AI safety research less parallelizable than AI research? 2020-05-10T20:43:59.476Z
What are examples of perennial discoveries? 2020-05-09T06:17:18.171Z
What fraction of your lifetime (0-80 years old) egoist budget would you (want to) spend on a pill that made you live for as long as you wanted (perfect invincibility), as healthily as you wanted if you knew it would become available to you once you're 80 years old (and that you would otherwise irreversibly die)? 2020-05-08T08:50:40.815Z
What would you do differently if you were less concerned with looking weird? 2020-05-07T23:29:16.618Z
Why do you (not) use a pseudonym on LessWrong? 2020-05-07T19:34:35.446Z
[link] How many humans will have their brain preserved? Forecasts and trends 2020-05-07T06:11:00.176Z
How much money would you pay to get access to video footage of your surroundings for a year of your choice (in the past)? 2020-05-05T05:50:44.628Z
[Link] Rationalist Games -- Facebook Group 2020-05-04T22:13:41.748Z
What information on (or relevant to) modal immortality do you recommend? 2020-05-04T11:30:47.869Z
What physical trigger-action plans did you implement? 2020-04-27T20:10:08.826Z
What social trigger-action plans did you implement? 2020-04-27T20:08:01.147Z
What cognitive trigger-action plans did you implement? 2020-04-27T19:58:21.808Z
Why do you have a personal rationalist blog instead of using LessWrong? Why don't you cross-post all your relevant blog posts? 2020-04-24T17:54:44.643Z
What are good defense mechanisms against dangerous bullet biting? 2020-04-21T20:51:53.983Z
What are the externalities of predictions on wars? 2020-04-20T17:37:09.510Z
What are habits that a lot of people have and don't tend to have ever questioned? 2020-04-20T05:26:27.584Z
Why do we have offices? 2020-03-31T01:01:21.668Z
What are examples of Rationalist posters or Rationalist poster ideas? 2020-03-22T07:08:46.646Z
Would a tax on AI products be useful? 2020-03-21T04:00:15.139Z
What would be the consequences of commoditizing AI? 2020-03-21T03:56:15.029Z
What's the present value of the Future? 2020-03-21T03:35:29.745Z
Mati_Roy's Shortform 2020-03-17T08:21:41.287Z

Comments

Comment by mathieuroy on Straight-edge Warning Against Physical Intimacy · 2021-01-16T04:54:56.842Z · LW · GW

I think I largely agree with you, but just a thought: Mayyybe drugs can help us explore / learn about parts of ourselves that are usually "kept in check" (in some ways) by other parts.

Comment by mathieuroy on Straight-edge Warning Against Physical Intimacy · 2021-01-16T04:49:30.687Z · LW · GW

Does your drugged self not want to get sober to stay their true self (from their reference point, IIUC)?

Comment by mathieuroy on How much harder is it to revive a neuro-only cryonics patient? · 2021-01-13T18:46:35.846Z · LW · GW

I'm assuming you mean recreating a brain with the same information content (otherwise it's trivially true AFAICT; just make a baby ^^)

yeah, that seems plausible to me

in a way, that's what mind uploading is (although in that case your mind is decoupled from the hardware)

Comment by mathieuroy on Cryonics signup guide #1: Overview · 2021-01-09T15:17:33.131Z · LW · GW

Time cost

Feel free to contact me for help signing up. I've already helped multiple people. contact@matiroy.com

Comment by mathieuroy on Cryonics signup guide #1: Overview · 2021-01-09T15:16:47.312Z · LW · GW

But I'm not in the US!

For Québec: https://cryoquebec.org/

Comment by mathieuroy on Cryonics signup guide #1: Overview · 2021-01-09T15:11:24.229Z · LW · GW

What I chose

Whole-body, with a note that Alcor could choose what seemed best at the moment of my death (e.g., if they only have the equipment for neuro cryoprotection, then neuro seems better).

A 20-year term life insurance policy for 350k CAD, because I'm highly confident in my capacity to save enough money to be able to pay cash in 20 years, and I have other safety nets. Otherwise I would recommend whole life insurance.

(I think universal life insurance is a bad deal – it's better to buy your investments and insurance separately to avoid the extra premium. Life insurance agents will likely tell you otherwise; they make more money on universal life policies.)

Comment by mathieuroy on Straight-edge Warning Against Physical Intimacy · 2021-01-06T19:21:41.960Z · LW · GW

FYI, your comment was posted 3 times, probably because of a LessWrong bug that makes it seem as if your comment wasn't posted when you click 'Submit'.

Comment by mathieuroy on Straight-edge Warning Against Physical Intimacy · 2021-01-06T19:17:39.530Z · LW · GW

  1. was a mistake

Turning off comments serves as a coordination mechanism to discuss the topic in one place.

Comment by mathieuroy on Straight-edge Warning Against Physical Intimacy · 2021-01-06T18:58:17.028Z · LW · GW

Hmm, basically it's time-consuming, especially if/when it develops into an addiction, plus I'm less focused when horny.

Comment by mathieuroy on Mati_Roy's Shortform · 2021-01-06T15:29:33.280Z · LW · GW

I'm thinking of organizing a one-hour livestreamed Q&A about how to sign up for cryonics on January 12th (Bedford Day). Would anyone be interested in asking me questions?

x-post: https://www.facebook.com/mati.roy.09/posts/10159154233029579

Comment by mathieuroy on Mati_Roy's Shortform · 2021-01-01T10:04:12.742Z · LW · GW

  1. No name that I'm aware of. Brainstorming ideas: map merging, compartmentalisation merging, uncompartmentalising.

Comment by mathieuroy on Mati_Roy's Shortform · 2021-01-01T10:02:59.594Z · LW · GW

We sometimes encode the territory on context-dependent maps. To take a classic example:

  • when thinking about daily experience, stars and the Sun are stored as different things
  • when thinking in terms of astrophysics, they are part of the same category

This makes it so that when you ask a question like "What is the closest star [to us]?", in my experience people are likely to say Alpha Centauri, and not the Sun. Merging those 2 maps feels enlightening in some ways; it creates new connections / a new perspective. "Our Sun is just a star; stars are just like the Sun," leading to system-1 insights like "Wow, so much energy to harvest out there!"

Questions:

  1. Is there a name for such merging? If not, should there be? Any suggestions?
  2. Do you have other (interesting?) examples of map merging?

x-post: https://www.facebook.com/mati.roy.09/posts/10159142062619579

Comment by mathieuroy on Mati_Roy's Shortform · 2020-12-18T14:21:06.672Z · LW · GW

Litany of Tarski for instrumental rationality 😊

If it's useful to know whether the box contains a diamond,

I desire to figure out whether the box contains a diamond;

If it's not useful to know whether the box contains a diamond,

I desire to not spend time figuring out whether the box contains a diamond;

Let me not become attached to curiosities I may not need.

Comment by mathieuroy on capybaralet's Shortform · 2020-12-12T20:45:42.509Z · LW · GW

Working a lot is an instrumental goal. If you start tracking your time and optimizing that metric, you might end up working more than is optimal. That seems like a triumph of instrumental goals that isn't a coordination failure, so I wouldn't attribute this failure mode to Moloch. Thoughts?

Comment by mathieuroy on What are examples of Rationalist fable-like stories? · 2020-12-10T19:11:22.908Z · LW · GW

Awesome! :)

Comment by mathieuroy on Mati_Roy's Shortform · 2020-12-10T19:10:26.954Z · LW · GW

Oh damn, thanks! There was an error message when I was trying to post it, which gave me the impression it wasn't working, which is why I posted it 4 times in total ^^

Comment by mathieuroy on Signaling importance · 2020-12-09T19:12:22.084Z · LW · GW

Either I'm doing it wrong, or you can't.

I tried things from: https://lifelongprogrammer.blogspot.com/2019/01/how-to-style-markdown-with-css.html

Comment by mathieuroy on Signaling importance · 2020-12-09T19:11:40.843Z · LW · GW

<p class="red">red text</p>
<style> .red {color: red} </style>

::::: {#special .red}
Here is a paragraph.

And another.
:::::

Comment by mathieuroy on Moloch Hasn’t Won · 2020-12-09T19:08:03.974Z · LW · GW

Comment by mathieuroy on Mati_Roy's Shortform · 2020-12-09T08:43:25.772Z · LW · GW

In my mind, "the expert problem" means the problem of recognizing experts without being one yourself, but I don't know where this idea comes from, as a Google search doesn't turn it up. What name is used to refer to that problem (in the literature)?

x-post: https://www.facebook.com/mati.roy.09/posts/10159081618379579

Comment by mathieuroy on LessWrong FAQ · 2020-12-07T14:46:22.985Z · LW · GW

Font color isn't supported, right?

Comment by mathieuroy on Mati_Roy's Shortform · 2020-12-01T13:26:59.441Z · LW · GW

Suggestion of something to try at a LessWrong online meetup:

A video chat with a time budget for each participant: each time a participant unmutes themselves, their time budget starts decreasing.

Note: on Jitsi, you can see how many minutes someone has talked (h/t Nicolas Lacombe).
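
As a rough illustration of the bookkeeping this needs (a minimal sketch with hypothetical names, not tied to Jitsi or any real video-chat API): keep a remaining budget per participant and subtract talking time whenever they mute again.

# Hypothetical per-participant time-budget tracker; nothing here calls a real video-chat API.
import time

class TimeBudget:
    def __init__(self, participants, budget_seconds):
        self.remaining = {name: budget_seconds for name in participants}
        self.unmuted_since = {}  # name -> timestamp, present only while unmuted

    def unmute(self, name):
        self.unmuted_since[name] = time.monotonic()

    def mute(self, name):
        started = self.unmuted_since.pop(name, None)
        if started is not None:
            self.remaining[name] -= time.monotonic() - started

    def seconds_left(self, name):
        left = self.remaining[name]
        if name in self.unmuted_since:  # currently talking: count the time so far
            left -= time.monotonic() - self.unmuted_since[name]
        return max(left, 0)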

x-post: https://www.facebook.com/mati.roy.09/posts/10159062919234579

Comment by mathieuroy on AGI Predictions · 2020-11-23T22:19:43.460Z · LW · GW

So the following, for example, don't count as "existential risk caused by AGI", right?

  • many AIs
    • an economy run by advanced AIs amplifying negative externalities, such as pollution, leading to our demise
    • an em world with minds evolving to the point of no longer being valuable ("a Disneyland without children")
    • a war between transcending uploads
  • narrow AI
    • a narrow AI killing all humans (e.g., by designing grey goo, a virus, etc.)
    • a narrow AI eroding trust in society until it breaks apart
  • an AGI as the intermediary cause, but not the ultimate cause
    • a simulation shutdown because our AI didn't have a decision theory for acausal cooperation
    • an AI convincing a human to destroy the world

Comment by mathieuroy on Straight-edge Warning Against Physical Intimacy · 2020-11-23T21:54:04.087Z · LW · GW

Strong like! For me, this is an important consideration for preserving my identity, staying productive / mentally sharp, and staying as independent as I want :)

Comment by mathieuroy on Predictions made by Mati Roy in early 2020 · 2020-11-23T21:16:57.759Z · LW · GW

oh, thanks!

Comment by mathieuroy on Predictions made by Mati Roy in early 2020 · 2020-11-22T03:48:25.814Z · LW · GW

The prediction is (emphasis added):

At least one other CRISPR baby will be born by January 2030.

Does the article you linked mention a second one? (I doubt it, because I looked into this after that article was published, and even wrote a wiki page on it.)

Comment by mathieuroy on Predictions made by Mati Roy in early 2020 · 2020-11-22T03:46:23.060Z · LW · GW

That all makes sense to me :)

Comment by mathieuroy on Predictions made by Mati Roy in early 2020 · 2020-11-21T03:25:02.585Z · LW · GW

https://elicit.org/binary/questions/0AWln74RqZ?binaryQuestions.sortBy=popularity&limit=20&offset=0&predictors=community

Comment by mathieuroy on Mati_Roy's Shortform · 2020-11-19T22:02:13.803Z · LW · GW

There's the epistemic discount rate (e.g., the probability of simulation shutdown per year) and the value discount rate (e.g., you do the funner things first, so life is less valuable per year as you get older).

Asking "What value discount rate should be applied?" is a category error. "Should" statements are about actions taken in service of values, not about the values themselves.

As for "What epistemic discount rate should be applied?", it depends on things like the probability of death/extinction per year.

Comment by mathieuroy on Mati_Roy's Shortform · 2020-11-17T05:36:52.981Z · LW · GW

Suggestion for retroactive prizes: pay the prize to the most undervalued post on the topic, whenever it was written, assuming the writer is still alive or cryopreserved (given money is probably not worth much to most dead people). "Undervalued" meaning the amount the post is worth minus the amount the writer has received.

Comment by mathieuroy on Announcing the Forecasting Innovation Prize · 2020-11-17T05:34:13.444Z · LW · GW

I'm curious though, do you have thoughts on what a proposal would look like?

Suggestion: paying the most undervalued post on the topic, whenever it was written, assuming the writer is still alive or cryopreserved. "Undervalued" meaning the amount the post is worth minus the amount the writer has received.

Comment by mathieuroy on What features would you like a prediction platform to have? · 2020-11-03T12:03:38.523Z · LW · GW

2) a) probability mass distribution over time and some other value

I would like to easily be able to predict on a question such as "What will be the price of neuropreservation at Alcor in 2030?" but for many years at the same time.

I was thinking about what would be a good way to do that, and here's a thought.

Instead of plotting probability mass over price for a specific year, we could plot price over years for a specific probability.

So to take the same example, the question could become: "For what price is there a 50% chance that Alcor will charge more than that over the coming century?" You could repeat the same question for the 10% and 90% marks.

Or you could just assume a specific distribution, like a normal distribution. Then you have just 2 curves to make:

  • What's the mean of Alcor's expected prices over the coming century?
  • What's the standard deviation of Alcor's expected prices over the coming century?

That way, you can easily get a probability mass distribution over price, over time.
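
To make the two-curve idea concrete, here is a minimal sketch (with made-up numbers and no real prediction-platform API): given a mean curve and a standard-deviation curve over the years, you can recover a full normal distribution over price for any year, e.g. the probability that the price exceeds some threshold.

# Minimal sketch: two curves (mean and standard deviation per year, made-up values)
# define a normal distribution over price for each year.
import math

def normal_cdf(x, mean, std):
    """P(X <= x) for a normal distribution with the given mean and std."""
    return 0.5 * (1 + math.erf((x - mean) / (std * math.sqrt(2))))

# Hypothetical forecaster inputs: one (mean, std) pair per year, in USD.
forecast = {
    2030: (90_000, 15_000),
    2040: (100_000, 25_000),
    2050: (110_000, 40_000),
}

threshold = 120_000
for year, (mean, std) in forecast.items():
    p_above = 1 - normal_cdf(threshold, mean, std)
    print(f"{year}: P(price > {threshold:,}) = {p_above:.0%}")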

x-post: https://www.metaculus.com/questions/935/discussion-topic-what-features-should-metaculus-add/#comment-44545

Comment by mathieuroy on Taking Ideas Seriously is Hard · 2020-11-01T10:43:42.038Z · LW · GW

Nitpick:

If you’re taking compounding seriously, you’d learn the skills with the greatest return first.

I don't see how that follows. Whether you multiply your initial value by 1.3 before 1.1, or the other way around, the end result is the same.
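
A trivial sketch with made-up returns, just to show that the order doesn't change the outcome:

# Multiplication commutes, so the order in which returns compound doesn't matter.
initial = 100
print(round(initial * 1.3 * 1.1, 2))  # 30%-return skill first -> 143.0
print(round(initial * 1.1 * 1.3, 2))  # 10%-return skill first -> 143.0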

Edit: ah, maybe you meant to learn the skill which unlocks the most opportunities for further learning.

Comment by mathieuroy on What are examples of Rationalist fable-like stories? · 2020-10-30T20:50:31.433Z · LW · GW

Is Success the Enemy of Freedom? (Full)

(I haven't read it yet, but there's a parable, and it's highly upvoted.)

Comment by mathieuroy on Mati_Roy's Shortform · 2020-10-29T14:27:46.748Z · LW · GW

ok yeah, that's fair! (although even controlling for that, I think the analogy still points at something interesting)

only sees the parts that someone happened to capture, which are indexed/promoted enough to come to our attention

yeah, I like to see "people just living a normal day"; I sometimes look for that, but even that is likely biased

Comment by mathieuroy on Mati_Roy's Shortform · 2020-10-28T10:50:46.662Z · LW · GW

Imagine having a physical window that allowed you to look directly into the past (but people in the past wouldn't see you / the window). That would be amazing, right? Well, that's what videos are. With the window it feels like it's happening now, whereas with videos it feels like it's happening in the past, but it's the same.

x-post: https://www.facebook.com/mati.roy.09/posts/10158977624499579

Comment by mathieuroy on The Book of HPMOR Fanfics · 2020-10-20T07:21:56.592Z · LW · GW

A new one, as of 2020-10-16: HPRick and MoRty

What if Harry and Quirrell in "Harry Potter and the Methods of Rationality" had the personalities of Rick and Morty from "Rick & Morty"?

Comment by mathieuroy on What are some beautiful, rationalist artworks? · 2020-10-20T07:17:44.943Z · LW · GW

I didn't say extinction risk.

Existential risk – One where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.

source: https://www.nickbostrom.com/existential/risks.html

Comment by mathieuroy on What colour were the shadows? · 2020-10-19T10:49:16.382Z · LW · GW

Whoops, not sure how I missed that; I'll retract my comment.

Comment by mathieuroy on What are some beautiful, rationalist artworks? · 2020-10-19T10:43:43.152Z · LW · GW

Representing existential risk: a lost opportunity to grab the Reachable Universe (edit: /expand through the cosmos). (At least, that's my interpretation.)

Comment by mathieuroy on What are some beautiful, rationalist artworks? · 2020-10-19T10:42:43.852Z · LW · GW

Comment by mathieuroy on What are some beautiful, rationalist artworks? · 2020-10-19T10:34:05.029Z · LW · GW

The true patronus was discovered (possibly rediscovered) by Harry Potter, when he finally understood that Dementors represent Death incarnate. His empathetic desire to protect all of humanity from the pain of that loss allowed him to not just drive away the fear of Death, but to conquer Death itself. This caused his Patronus to evolve into its true form; his Patronus took on the shape of an androgynous human. In this form, the Patronus gains additional abilities, including the ability to destroy Dementors and block the "unblockable" curse, Avada Kedavra.

(source: https://hpmor.fandom.com/wiki/Expecto_Patronum)

Comment by mathieuroy on What are some beautiful, rationalist artworks? · 2020-10-19T10:31:56.509Z · LW · GW

Comment by mathieuroy on What colour were the shadows? · 2020-10-19T10:18:28.190Z · LW · GW

Was the Sun exactly above you? ^_^

Comment by mathieuroy on What are examples of Rationalist fable-like stories? · 2020-10-17T05:26:00.555Z · LW · GW

maybe The Egg

Comment by mathieuroy on What are examples of Rationalist fable-like stories? · 2020-10-17T05:17:53.557Z · LW · GW

maybe this could be transformed into a fable: https://www.smbc-comics.com/comic/2013-06-02

Comment by mathieuroy on If a "Kickstarter for Inadequate Equlibria" was built, do you have a concrete inadequate equilibrium to fix? · 2020-10-16T09:56:54.793Z · LW · GW

yes, but it's a rather small consideration