Comment by mathieuroy on If a "Kickstarter for Inadequate Equlibria" was built, do you have a concrete inadequate equilibrium to fix? · 2019-04-15T21:55:21.463Z · score: 1 (1 votes) · LW · GW

See section “Games and Exercises” of “How to Run a Successful LessWrong Meetup Group” for some ideas: https://wiki.lesswrong.com/mediawiki/images/c/ca/How_to_Run_a_Successful_Less_Wrong_Meetup_Group.pdf

Comment by mathieuroy on If a "Kickstarter for Inadequate Equlibria" was built, do you have a concrete inadequate equilibrium to fix? · 2019-04-06T18:21:45.474Z · score: 12 (3 votes) · LW · GW

Because it's near the best cryonics facility in the world: https://alcor.org, and the quality of cryopreservation for people living in Phoenix is much higher on average than for remote cases (it reduces the delay before starting the procedure, avoids problems at borders, shortens the delay before the sub-zero cool-down, and Alcor has good relationships with nearby hospitals, better infrastructure, and more legal precedent supporting cryonics).

This summer I went to Phoenix for about a month to see what it was like. I organized the first local effective altruism event: https://www.facebook.com/groups/EffectiveAltruismPhoenix/. I reached out to the LessWrong group: https://www.facebook.com/groups/317266081721112/ and the SlateStarCodex group: https://groups.google.com/forum/#!forum/slate-star-codex-phoenix . There are 4 people in the Brain Debugging Discussion Facebook group who list Phoenix as their location on their Facebook profile: https://www.facebook.com/groups/144017955332/local_members/ , 1 in the Effective Altruism Facebook group: https://www.facebook.com/groups/437177563005273/local_members/ , 0 on EA Hub: https://eahub.org/profiles , and 7 in the Global Transhumanist Association: https://www.facebook.com/groups/2229380549/local_members/ . IIRC, I had reached out to (some of) them as well (and probably more). I had also invited people from the cryonics community. IIRC, 2-3 rationalists and 3 cryonicists showed up to the event, and maybe around 5 more were interested but couldn't make it. IIRC, there had been a few SSC events in the previous 2 years, with maybe a total of something like 12 people showing up. I've also met with about 20 cryonics old-timers.

Other approaches I see towards solving this problem:

  • do movement building once I'm in Phoenix, or support other people who are interested in doing that
  • try to connect more with rationalists (or rationalist-adjacent people) who are already in Phoenix
  • instead of finding 75 interesting (to me) people, find only a dozen, but start a strong intentional community
  • significantly improve the cryonics response quality in other cities (current contenders: Salem, Berkeley, Los Angeles)

If you (or anyone you know) are interested in or can help with any of those, that would be great/appreciated!

How many rationalists / EAs / interesting people do you know in Phoenix? Do you like living in Phoenix?

I would like to connect with more LessWrongers in Phoenix. If you want, you can add me on Facebook: https://www.facebook.com/mati.roy.09 and/or send me an email at contact@matiroy.com and/or chat in public on https://www.reddit.com/r/Mati_Roy/ .

Comment by mathieuroy on LessWrong 2.0 Feature Roadmap & Feature Suggestions · 2019-04-06T00:17:00.533Z · score: 3 (2 votes) · LW · GW

Allow users to edit their username (context: I now go by Mati_Roy instead of MathieuRoy, but I don't want to create another account and lose my history).

Comment by mathieuroy on How could "Kickstarter for Inadequate Equilibria" be used for evil or turn out to be net-negative? · 2019-04-05T23:54:17.599Z · score: 4 (2 votes) · LW · GW

I added this thread here: https://causeprioritization.org/Coordination

Comment by mathieuroy on If a "Kickstarter for Inadequate Equlibria" was built, do you have a concrete inadequate equilibrium to fix? · 2019-04-05T23:50:04.680Z · score: 23 (1 votes) · LW · GW

Meta note: I'll likely edit this answer when I think of more answers.

General note: If you're interested in any of the proposals below (except the first one), please let me know, either here or at contact@matiroy.com .

Bootstrapping a commitment platform

I would make at least 5 commitments if a commitment platform were created (or rather, the creator might want to commit to improving a bare-bones platform if at least 200 people commit to making a total of at least 1,000 commitments).
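For concreteness, here is a minimal sketch of the threshold ("assurance contract") logic such a platform could use; the thresholds are just the illustrative numbers from this proposal, and nothing here describes an actual implementation:

```python
# Minimal sketch of an assurance-contract trigger: the creator's pledge
# activates only once enough people have collectively made enough commitments.

MIN_PEOPLE = 200        # illustrative threshold from this proposal
MIN_COMMITMENTS = 1000  # illustrative threshold from this proposal

def pledge_triggers(commitments_by_person: dict) -> bool:
    """commitments_by_person maps each person to how many commitments they made."""
    people = sum(1 for n in commitments_by_person.values() if n > 0)
    total = sum(commitments_by_person.values())
    return people >= MIN_PEOPLE and total >= MIN_COMMITMENTS

# Example: 200 people making 5 commitments each activates the pledge.
print(pledge_triggers({f"person{i}": 5 for i in range(200)}))  # True
```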

Improving the Cause Prioritization Wiki (CPW)

Migrate the CPW to the MediaWiki platform and improve its structure if enough people commit to making a total of 2,000+ edits.

Side note: I've added this thread here: https://causeprioritization.org/Coordination

Moving to Phoenix

If 75 EAs / rationalists / life extensionists committed to move to Phoenix this year, I’d move to Phoenix this year.

Financing cryonics research

If 500 other people committed 10,000 USD to cryonics research, I would give 10,000 USD to cryonics research.

Doing a cryonics related PhD

I would do a PhD in some field relevant to cryonics if some people committed to funding a fraction of my salary to do cryonics research over 10 years. That is, they would give, say, 20% of my salary (or 10k USD / year) to whatever cryonics lab hires me.

Training a local cryonics team

I would arrange to have a local (to Montreal) standby cryonics team if at least 500,000 CAD were committed (exact amount TBD). (Although I guess I could just use Kickstarter for that, or do it entirely ad hoc?)

Organizing Rationalist Olympiads

If 12+ people committed to attending Rationalist Olympiads (in Montreal), I would organize them.

What are the advantages and disadvantages of knowing your own IQ?

2019-04-03T18:31:49.269Z · score: 18 (7 votes)
Comment by mathieuroy on How to Understand and Mitigate Risk · 2019-03-31T17:07:26.568Z · score: 6 (2 votes) · LW · GW

How would you classify existential risks within this framework? (or would you?)

Here's my attempt. Any corrections or additions would be appreciated.

Transparent risks: asteroids (we roughly know the frequency?)
Opaque risks: geomagnetic storms (we don't know how resistant the electric grid is, although we have an idea of their frequency), natural physics disasters (such as vacuum decay), being killed by an extraterrestrial civilization (could also fit black swans or adversarial environments, depending on its nature)
Knightian risks:
- Black swans: ASI, nanotech, bioengineered pandemics, simulation shutdown (assuming it's because of something we did)
- Dynamic environment: “dysgenic” pressures (maybe also adversarial), natural pandemics (the world is getting more connected, medicine more robust, etc., which makes it difficult to tell how the risks of natural pandemics are changing), nuclear holocaust (the game-theoretic equilibrium changes as we get nuclear weapons that are faster and more precise, better detectors, etc.)
- Adversarial environments: resource depletion or ecological destruction, misguided world government or another static social equilibrium that stops technological progress, repressive totalitarian global regime, take-over by a transcending upload (?), our potential or even our core values eroded by evolutionary development (e.g., a Hansonian em world)

Other (?): technological arrest ("The sheer technological difficulties in making the transition to the posthuman world might turn out to be so great that we never get there." from https://nickbostrom.com/existential/risks.html )

Comment by mathieuroy on X-risks are a tragedies of the commons · 2019-03-30T23:09:33.323Z · score: 14 (4 votes) · LW · GW

From the original article by Nick Bostrom: "Reductions in existential risks are global public goods [13] and may therefore be undersupplied by the market [14]. Existential risks are a menace for everybody and may require acting on the international plane. Respect for national sovereignty is not a legitimate excuse for failing to take countermeasures against a major existential risk." See: https://nickbostrom.com/existential/risks.html

Comment by mathieuroy on X-risks are a tragedies of the commons · 2019-03-23T01:19:00.609Z · score: 1 (1 votes) · LW · GW

related comment: https://forum.effectivealtruism.org/posts/F7hZ8co3L82nTdX4f/do-eas-underestimate-opportunities-to-create-many-small#XGSQX45NkAN9qSB9A

Comment by mathieuroy on Newcomb's Problem and Regret of Rationality · 2019-02-21T01:26:51.959Z · score: 1 (1 votes) · LW · GW
it's certainly interesting from the perspective of the Doomsday Argument if advanced civilizations have a thermodynamic incentive to wait until nearly the end of the universe before using their hoarded negentropy

Related: That is not dead which can eternal lie: the aestivation hypothesis for resolving Fermi's paradox (https://arxiv.org/pdf/1705.03394.pdf)

Comment by mathieuroy on Preventing s-risks via indexical uncertainty, acausal trade and domination in the multiverse · 2018-12-19T01:39:30.097Z · score: 2 (2 votes) · LW · GW

Assuming this is all true, and that Benevolent ASIs have the advantage, it's worth noting that, in finite universes, this still requires the Benevolent ASIs to trade off computations spent increasing people's lifespans against computations spent increasing the fraction of suffering-free observer-moments.

Comment by mathieuroy on Current AI Safety Roles for Software Engineers · 2018-11-28T20:56:44.395Z · score: 3 (2 votes) · LW · GW
EA safety community

Lapsus? ^_^

[Montreal] Towards High-Assurance Advanced AI Systems by Richard Mallah

2018-11-24T06:24:51.428Z · score: 3 (1 votes)
Comment by mathieuroy on Survey: Help Us Research Coordination Problems In The Rationalist/EA Community · 2018-11-14T22:05:24.584Z · score: 3 (2 votes) · LW · GW

Have you published the results?

Montreal Slate Star Codex Meetup

2018-08-01T02:22:56.853Z · score: 6 (3 votes)
Comment by mathieuroy on Everything I ever needed to know, I learned from World of Warcraft: Incentives and rewards · 2018-05-15T16:50:04.658Z · score: 8 (2 votes) · LW · GW

Might be of interest to some readers:

<<

Spliddit's goods calculator fairly divides jewelry, artworks, electronics, toys, furniture, financial assets, or even an entire estate between two or more people. You begin by providing a list of items that you wish to divide and a list of recipients. We then send the recipients links where they specify how much they believe each item is worth. Our algorithm uses these evaluations to propose a fair division of the items among the recipients.

>>

See: http://www.spliddit.org/apps/goods
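Spliddit's own algorithm is more sophisticated than anything shown here; purely as a toy illustration of the general idea (recipients report valuations and an allocation is chosen to balance them), the following sketch brute-forces the allocation maximizing the product of utilities (Nash welfare). All names and numbers are made up, and this is not Spliddit's actual method.

```python
from itertools import product
from math import prod

def divide(items: dict, recipients: list) -> dict:
    """Brute-force the allocation of items that maximizes the product of
    the recipients' utilities (Nash welfare). items maps each item name
    to a dict of {recipient: reported value}. Fine for small inputs only."""
    names = list(items)
    best, best_score = None, -1.0
    for assignment in product(recipients, repeat=len(names)):
        utils = {r: 0.0 for r in recipients}
        for item, owner in zip(names, assignment):
            utils[owner] += items[item][owner]
        score = prod(utils.values())
        if score > best_score:
            best_score, best = score, assignment
    bundles = {r: [] for r in recipients}
    for item, owner in zip(names, best):
        bundles[owner].append(item)
    return bundles

# Hypothetical example: two recipients report how much each item is worth to them.
valuations = {
    "piano": {"alice": 700, "bob": 200},
    "couch": {"alice": 200, "bob": 500},
    "lamp":  {"alice": 100, "bob": 300},
}
print(divide(valuations, ["alice", "bob"]))
# {'alice': ['piano'], 'bob': ['couch', 'lamp']}
```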

Moral Uncertainty

2018-04-05T18:59:22.564Z · score: 2 (2 votes)

Schelling Day

2018-04-05T18:56:58.309Z · score: 4 (2 votes)

Fun Theory - Group Discussion

2018-04-05T18:54:51.069Z · score: 2 (2 votes)

Welcome to Altruisme Efficace Montréal - Effective Altruism Montreal [Edit With Your Details]

2018-04-05T18:52:06.463Z · score: 1 (1 votes)

Welcome to Montréal LessWrong [Edit With Your Details]

2018-04-05T18:50:13.793Z · score: 1 (1 votes)
Comment by mathieuroy on Let's split the cake, lengthwise, upwise and slantwise · 2018-01-27T09:48:24.824Z · score: 0 (0 votes) · LW · GW

the link is broken

Comment by mathieuroy on How to Not Lose an Argument · 2017-09-19T21:24:04.873Z · score: 0 (0 votes) · LW · GW

FYI: I use this Chrome extension to gender-neutralize what I read: https://chrome.google.com/webstore/detail/the-ungender/blfboedipjpaphkkdoddffpnfjknfeda?hl=en

Comment by MathieuRoy on [deleted post] 2017-07-27T07:53:21.802Z

Just saw your comment. Thanks for letting me know.

Meetup : Rationality Potluck

2017-05-25T18:28:04.132Z · score: 0 (1 votes)

Wikipedia book based on betterhumans' article on cognitive biases

2016-10-14T01:03:58.421Z · score: 1 (2 votes)
Comment by mathieuroy on Low Hanging fruit for buying a better life · 2015-01-17T05:33:04.336Z · score: 0 (0 votes) · LW · GW
  • jumpsuits/one-pieces (I find them really comfortable)
  • if you don't have a lot of dishes (e.g., you live alone), something like this to avoid putting your hands in hot water, with soap in the handle to be more efficient
  • a second pillow to put between or below your legs when you sleep
Comment by mathieuroy on Open thread, Dec. 8 - Dec. 15, 2014 · 2014-12-11T05:43:34.315Z · score: 2 (2 votes) · LW · GW

David Pizer started a petition to promote more anti-aging research.

"In 40 to 100 years, if the world governments spent money on research for aging reversal instead of for research on building weapons that can kill large numbers of people, world scientists could develop a protocol to reverse aging and at that time people could live as long as they wanted to in youthful, strong, healthy bodies."

To sign the petition, go here

Comment by mathieuroy on Polling Thread - Personality Special · 2014-11-06T23:13:48.604Z · score: 0 (0 votes) · LW · GW

Telling people in advance what results you expect changes the results, for many reasons (e.g., the Pygmalion effect, the Golem effect, stereotype threat, etc.).

Comment by mathieuroy on 2014 Less Wrong Census/Survey · 2014-10-27T06:52:53.817Z · score: 31 (31 votes) · LW · GW

Done it. The whole thing! (edit: except the last question)

Comment by mathieuroy on 2014 Less Wrong Census/Survey - Call For Critiques/Questions · 2014-10-16T02:49:40.870Z · score: 0 (0 votes) · LW · GW

Good point, I edited the post to make that clear.

Comment by mathieuroy on 2014 Less Wrong Census/Survey - Call For Critiques/Questions · 2014-10-16T02:36:19.610Z · score: 0 (0 votes) · LW · GW

Is it because a lot of people think that continuing to live as a clone or a simulation is just as good as continuing to live as the original? If so, then I don't mind rephrasing what I mean by death. The important point is that I don't mean the death of the body, but rather the death of the mind.

Comment by mathieuroy on 2014 Less Wrong Census/Survey - Call For Critiques/Questions · 2014-10-14T23:37:58.965Z · score: 1 (1 votes) · LW · GW

Ok thanks for your answers!

Comment by mathieuroy on 2014 Less Wrong Census/Survey - Call For Critiques/Questions · 2014-10-14T21:51:20.754Z · score: 0 (0 votes) · LW · GW

Thank you for your answer.

Do nihilists think they have no goals (a.k.a. terminal values), or that they don't have goals about fulfilling others' goals, or is it something else?

Utilitarianism is used as "the normative ethical theory that one ought to maximize the utility of the world".

OK, so would it be right to say this: utilitarianism is giving equal weight to everyone's utility function (including yours) in your "meta" utility function, while egoism means you don't consider others' utility functions in your utility function?

And then there is everything in between (meaning giving more weight to your own utility function than to others' utility functions in your "meta" utility function).

Comment by mathieuroy on 2014 Less Wrong Census/Survey - Call For Critiques/Questions · 2014-10-14T19:15:42.185Z · score: -1 (1 votes) · LW · GW

I would really like to see these questions in the survey:

For the questions:

  • Give the time from your birth to your death in subjective years (so years during which you are cryonically preserved don't count)
  • Give the estimate where you are 50% sure the true value is less than your answer and 50% sure it is more (i.e., your median estimate)

The questions are:

  • 1) How long do you think you will live?
  • 2) If your only way to die were by really wanting to die, when do you think you would die...
  • a) if you could control your aging process but the world were otherwise unchanged?
  • b) if an AGI optimizing your utility function were created?

And maybe also this one:

  • c) if your utility function were maximized (regardless of the actual physical laws of our universe)?

EDIT 1: I've removed this specification: where death = permanently not conscious; if you create a clone or a simulation that is not a direct upload, it doesn't count as 'still living'.
EDIT 2: I've added that one could control one's aging process in 2a).

Comment by mathieuroy on 2014 Less Wrong Census/Survey - Call For Critiques/Questions · 2014-10-14T03:14:37.006Z · score: 0 (2 votes) · LW · GW

"Moral nihilism is the meta-ethical view that nothing is intrinsically moral or immoral." (http://en.wikipedia.org/wiki/Moral_nihilism) Utility functions (aka morality) are (is) in the mind, not in Nature. That would probably be the answer of most LWers. Otherwise, you'll have to tell me what you mean by morality.

Is utilitarianism used as "maximizing happiness" or "maximizing utility"? If it's "maximizing utility", well, isn't that everyone's position? What differs is simply what counts as "utility".

Comment by MathieuRoy on [deleted post] 2014-09-08T02:13:32.223Z

Maybe taking online courses could be a solution: you would need to be less often on the road and you could (IMO) get a better and cheaper education.

Comment by MathieuRoy on [deleted post] 2014-09-08T01:48:48.150Z

Maybe taking online courses could be a solution: you would need to be less often on the road and you could (IMO) get a better and cheaper education.

Comment by mathieuroy on [link] Reality Show 'Utopia' · 2014-09-07T16:38:11.212Z · score: 2 (2 votes) · LW · GW

That makes sense since the producers are openly saying they are looking for people that fit stereotypes.

Comment by mathieuroy on [link] Reality Show 'Utopia' · 2014-09-07T03:28:28.590Z · score: 3 (3 votes) · LW · GW

Thank you for your answer. I will try not to post things that don't strongly relate to rationality in the future.

For the record, my reasoning was the following: there have been 5,000 candidates so far. If one rationalist applies, let's say they have a probability of 1/5,000 of being chosen. If they are chosen, they will be seen by probably millions of viewers. So on average, every application would reach something like 1,000 viewers (e.g., 1/5,000 × 5,000,000 viewers = 1,000).

Comment by mathieuroy on [link] Reality Show 'Utopia' · 2014-09-07T03:16:48.355Z · score: 2 (2 votes) · LW · GW

Could the -1 be explained so I can adjust my future articles? Is it because you think it's not a good idea, because I don't explain my idea well enough, or simply because such topics shouldn't be posted on LW (or something else)?

Also, should someone delete an article as soon as it gets, say, -5 karma, since that means it's probably not a good one, and therefore other people shouldn't pay attention to it?

[link] Reality Show 'Utopia'

2014-09-06T20:39:06.366Z · score: -8 (15 votes)

Find a study partner - May 2014 Thread

2014-05-06T05:37:07.030Z · score: 3 (6 votes)
Comment by mathieuroy on Questions to ask theist philosophers? I will soon be speaking with several · 2014-04-28T03:00:44.760Z · score: 0 (0 votes) · LW · GW

Nice questions. Could you please explain to me how "Matthew 26:39" is related to "Jesus' willingness to die for our sins"?

Comment by mathieuroy on Questions to ask theist philosophers? I will soon be speaking with several · 2014-04-26T14:22:20.415Z · score: 1 (1 votes) · LW · GW

PRAYERS

How confident are you that prayers can work, even when controlling for the placebo effect?

How often do prayers work for you?

How much time should one pray versus work? (for example, on an exam)

Comment by mathieuroy on Questions to ask theist philosophers? I will soon be speaking with several · 2014-04-26T14:01:38.229Z · score: 1 (1 votes) · LW · GW

I improved the structure of the questions:

GOD

How confident are you that God exists?

Would evidence for [---] be evidence against God? (How would evidence for [---] affect your confidence that God exists?) How confident are you that [---] is true?

  1. the many-worlds interpretation
  2. the existence of aliens
  3. evolution of humans from non-living matter
  4. bad things happening

GOING TO PARADISE

How confident are you that you will go to paradise? How scared are you of going to Hell?

How confident are you that [---] have souls (and/or would go to paradise or hell)?

  1. advanced artificial intelligence of some sort
  2. extremely severely mentally handicapped humans
  3. a cryopreserved human

For what minimum X are you at least 90% sure that someone will go to paradise if...

  1. they are "otherwise" a very good person and X% confident that God exists,
  2. they are (100-X)% chimp and X% human

If we live in a multiverse, how confident are you that all "souls" from all universes go to (the same) paradise/hell? (If the same, it means you would encounter other versions of yourself.)

What set of memories do you think we bring with us to paradise? (The ones we have just before we die? All the ones we made during our life? Etc.)

BEING IN PARADISE

How intelligent do you think intelligent humans and extremely severely mentally handicapped humans become in paradise?

How confident are you that there are fermions and bosons in paradise/hell? (If low: does that mean there is no concept of temperature, light, sound, or pressure? Can time be measured?)

How confident are you that we cannot feel pain in paradise?

After 3^^^3 years in paradise, how much do you think we can remember?

How confident are you that one can commit suicide in paradise if one wants to?

If you could live for any amount of time in paradise, how would you want to live?

ADVANTAGES

What are the advantages of believing in God (besides going to paradise)? What are the disadvantages?

In the "counter-factual" world were God doesn't exist, would you want to believe in it?

ON EARTH

If you could live for any amount of time, how long would you want to live on Earth? (If more than 1 day, you could ask them why they wouldn't rather be in paradise.)

What do you consider to be "playing God"? In what cases do you think it is good to play God? How well do you think we can (possibly/physically) play God?

How confident are you that we are in a simulation?

How sad are you when someone near you dies? Why?

What would be different in the world (on Earth) if there was no God?

Comment by mathieuroy on Questions to ask theist philosophers? I will soon be speaking with several · 2014-04-26T13:33:25.167Z · score: 2 (4 votes) · LW · GW

It would be nice if you could write an article on their answers.

EDIT: Read my other comment rather than this one, because I've improved the structure of the text there.

Here are some questions:

How confident are you that God exists? (Any time they say 100%, you could ask them whether they would bet all human "souls" against one carrot that God exists, to check that they really mean 100%.)

If someone who is "otherwise" a very good person is X% confident that God exists, for what minimum X are you at least 75% sure that they will go to paradise?

How confident are you that the many-worlds interpretation of quantum physics is correct? If more than 0%, you could ask: if it is true, how confident are you that all "souls" from all universes go to (the same) paradise/hell? (If the same, it means you would encounter other versions of yourself.)

Would evidence for the many-worlds interpretation be evidence against God? Would evidence for alien life be evidence against God?

How confident are you that humans evolved from non-living matter?

How confident are you that chimps have souls? If low, you could ask: if we bio-engineer an animal that is (100-X)% chimp and X% human, for what X are you 90% confident that the animal would have a soul and go to paradise or hell?

How confident are you that extremely severely mentally handicapped people have souls? If high, how intelligent do you think they become in paradise?

How confident are you that we can create a conscious being in silico? If an AI were created tomorrow, how would that affect your confidence that God exists?

How confident are you that there are fermions and bosons in paradise/hell? (If low: does that mean there is no concept of temperature, light, sound, or pressure? Can time be measured?)

In the "counter-factual" world were God doesn't exist, would you want to believe in it?

What are the advantages of believing in God (besides going to paradise)? What are the disadvantages?

If you could live for any amount of time, how would you want to live? If you could live for any amount of time, how long would you want to live on Earth? If more than 1 day, you could ask them why they wouldn't rather be in paradise. How confident are you that you will go to paradise? How scared are you of going to Hell?

How confident are you that we cannot feel pain in paradise?

After 3^^^3 years in paradise, how much do you think we can remember?

How confident are you that one can commit suicide in paradise if one wants to?

What do you consider to be "playing God"? In what cases do you think it is good to play God? How well do you think we can (possibly/physically) play God?

How confident are you that we are in a simulation?

What set of memories do you think we bring with us to paradise? (The ones we have just before we die? All the ones we made during our life? Etc.)

How confident are you that cryopreserved humans keep their souls?

How sad are you when someone near you dies? Why?

What would be different in the world if there was no God?

Find a study partner - April 2014 thread

2014-03-31T19:24:12.184Z · score: 1 (2 votes)

[link] Cybathlon (an Olympics for bionic athletes)

2014-03-27T16:44:24.420Z · score: 2 (5 votes)
Comment by mathieuroy on Evolutions Are Stupid (But Work Anyway) · 2014-03-19T18:14:15.594Z · score: 1 (1 votes) · LW · GW

Thank you.

Comment by mathieuroy on What are you working on? March / April 2014 · 2014-03-19T15:21:16.210Z · score: 1 (1 votes) · LW · GW

I'm a semifinalist for the MA365 mission (a one-year simulation of Mars exploration). We now need to make a video answering some questions such as "Why Mars? Why do you think we should go there?" (see my answer below). The Mars Society will choose 18 finalists, who will be split into 3 teams. In August 2014, they (we?) will go to Devon Island for two weeks, and the Society will pick the best team. This team of 6 will then do a 365-day simulation of Mars exploration starting in July 2015. You can contribute to the project by giving money (in exchange for gifts) through the Indiegogo campaign: this will help us buy better equipment.

My answer to "Why Mars? Why do you think we should go there?", or as asked in the post, "Why this project and not others?" (note: I would like to hear your answers too):

I think space exploration in general has a lot of benefits: it improves our understanding of the history of our solar system, and it pushes us to develop new technologies that often become useful on Earth. I think the next step in space exploration is to send humans to Mars, probably the easiest and most interesting planet to explore in our solar system because it has some similarities with Earth. I also think that sending humans to Mars will inspire more people to study science and technology, which will in turn improve human well-being and life expectancy. Finally, I think that if we succeed in colonizing Mars, it will be a good fallback position in case an existential catastrophe happens on Earth.

I think the first manned mission to Mars will probably happen in the next decade, so we need to prepare ourselves. For example, Inspiration Mars (www.inspirationmars.org) wants to send a couple to fly within 100 miles of Mars in 2018; Mars One (www.mars-one.com) wants to start sending humans to Mars in 2024 and leave them there indefinitely. The American, Russian, and European space agencies are also planning manned missions to Mars relatively soon. So the research done by The Mars Society, such as this mission, will be really useful to all of these organizations.

Comment by mathieuroy on Evolutions Are Stupid (But Work Anyway) · 2014-03-13T08:52:46.877Z · score: 0 (0 votes) · LW · GW

A gene conveying a 3% fitness advantage, spreading through a population of 100,000, would require an average of 768 generations to reach universality in the gene pool.

Generations to fixation = 2 ln(N) / s = 2 ln(100,000) / 1.03 = 22.36 ≠ 768

I'm confused.
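If I'm reading the formula right, the discrepancy may just come from plugging in the fitness ratio (1.03) instead of the selection coefficient (s = 0.03); a quick check, assuming 2 ln(N) / s is the intended diffusion approximation:

```python
from math import log

N = 100_000  # population size
s = 0.03     # selection coefficient for a 3% fitness advantage

# Diffusion approximation for generations to fixation of a beneficial allele.
print(round(2 * log(N) / s))        # 768, matching the post
print(round(2 * log(N) / 1.03, 2))  # 22.36, the confusing value above
```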

Comment by mathieuroy on Find a study partner - March 2014 thread · 2014-03-07T21:16:23.707Z · score: 0 (0 votes) · LW · GW

If you're interested, I've made this document with all the best resources (IMO) that can be found about ASL.

Concerning an ASL/English dictionary, I know these two: Handspeak and ASL Jinkle.

Find a study partner - March 2014 thread

2014-03-02T06:00:10.483Z · score: 2 (3 votes)
Comment by mathieuroy on March 2014 Media Thread · 2014-03-02T01:44:51.918Z · score: 3 (3 votes) · LW · GW

I am making a YouTube playlist of transhumanist songs (with a particular quote from each song). Since there aren't a lot of these, I also include songs that are only somewhat transhumanist (frankly, I'm shocked at the ratio of transhumanist songs to love songs). So do you have suggestions for songs that are somewhat related to transhumanism (and/or rationality) (not necessarily in English), please?

For example, here are the ones that I have put in the playlist so far:

Turn It Around by Tim McMorris

Have you ever looked outside and didn’t like what you see

Or am I the only one who sees the things we could be

If we made more effort, then I think you’d agree

That we could make the world a better place, a place that is free

Another one is Hiro by Soprano: a song about someone saying what he would do if he could travel back in time (it's in French but with English subtitles; it's inspired by the TV show Heroes, which I also recommend).

Tellement de choses que j’aurais voulu changer ou voulu vivre (So many things that I would change or live)

Tellement de choses que j’aurais voulu effacer ou revivre (So many things that I would erase or live again)

The classic Imagine by John Lennon

Imagine there's no countries

It isn't hard to do

Nothing to kill or die for

And no religion too

Imagine all the people

Living life in peace…

The Future Soon by Jonathan Coulton

Well it's gonna be the future soon

And I won't always be this way

One that I saw recommended on LW: The Singularity by Dr. Steel (it's my favorite!)

Nanotechnology transcending biology

This is how the race is won

Another that I saw on LW: Singularity by The Lisps

You'd keep all the memories and feelings that you ever want,

And now you can commence your life as an uploaded extropian.

Singularity by Steve Aoki & Angger Dimas ft. My Name is Kay

We’re gonna live, we’ll never die

I am the very model of a singularitarian

I am a Transhuman, Immortalist, Extropian

I am the very model of a Singularitarian

Another World by Doug Bard

Sensing a freedom you've never known,

no limitation, only you can decide

Transhuman by Neurotech

The mutation is in our nature

Transhuman by Amaranthe

My adrenaline feeds my desire

To become an immortal machine

E.T. by Katy Perry ft. Kanye West

You're from a whole other world

A different dimension

You open my eyes

And I'm ready to go

Lead me into the light

Space Girl by Charmax

She told me never venture out among the asteroids, yet I did.

Comment by mathieuroy on Open Thread for February 3 - 10 · 2014-02-11T02:14:40.048Z · score: 0 (0 votes) · LW · GW

Thank you for the link.

Comment by mathieuroy on Open Thread for February 3 - 10 · 2014-02-10T04:58:14.431Z · score: 2 (2 votes) · LW · GW

What transhumanist and/or rationalist podcasts/audiobooks do you prefer, besides HPMOR, which I just finished and really liked!!

Comment by mathieuroy on The Truly Iterated Prisoner's Dilemma · 2014-02-05T12:07:27.428Z · score: 1 (1 votes) · LW · GW

Do you mean "I cooperate with the Paperclipper if AND ONLY IF I think it will one-box on Newcomb's Problem with myself as Omega AND I think it thinks I'm Omega AND I think it thinks I think it thinks I'm Omega, etc." ? This seems to require an infinite amount of knowledge, no?

Edit: and you said "We have never interacted with the paperclip maximizer before", so do you think it would one-box?

Comment by mathieuroy on Open Thread for February 3 - 10 · 2014-02-04T05:20:16.022Z · score: 1 (5 votes) · LW · GW

Would a (hypothetically) pure altruist have children (in our current situation)?

Comment by mathieuroy on Closet survey #1 · 2014-02-03T12:34:19.487Z · score: 0 (0 votes) · LW · GW

I would definitely pre-commit to immortality.

Comment by mathieuroy on Decision Auctions aka "How to fairly assign chores, or decide who gets the last cookie" · 2014-02-03T11:23:49.540Z · score: 1 (1 votes) · LW · GW

So D "wins" the bid, and B pays him $15 to go get the kids from their grandma's.

Shouldn't it be more like $15 + ($100 - $15)/2 = $57.50, so both win (about) the same amount of utility? Otherwise, the one who was ready to pay $100 saved ("won") $85 and the other won nothing (they were indifferent between doing it for $15 and not doing it). See the sketch below.
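As a minimal sketch of the split-the-difference pricing this suggests (the names and numbers are just the ones from the example above, not anything from the post):

```python
def split_the_difference(high_bid: float, low_bid: float) -> float:
    """Price the chore midway between the two bids, so the high bidder
    pays less than their bid and the low bidder earns more than theirs."""
    return low_bid + (high_bid - low_bid) / 2

price = split_the_difference(high_bid=100, low_bid=15)
print(price)                    # 57.5
print(100 - price, price - 15)  # both sides gain 42.5 of surplus
```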

Nice post, by the way. Such techniques seem useful if you trust that the other person will make a bid that really represents the amount they're ready to pay.

Comment by mathieuroy on Archimedes's Chronophone · 2014-02-02T07:20:31.513Z · score: 0 (0 votes) · LW · GW

If doubting is/was accepted in our current society, and we wanted Archimedes to doubt his beliefs, would we have to doubt the value of doubting, or be certain about the value of doubting?

It's a joke. As Eliezer said "to get nonobvious output, you need nonobvious input", so obviously, we'd just have to find something nonobvious. :-)

I wonder if we will ever come up with something that is as nonobvious to us right now as Bayesian thinking was to Archimedes.

Comment by mathieuroy on Archimedes's Chronophone · 2014-02-02T06:40:00.076Z · score: 5 (5 votes) · LW · GW

Nice :-) But in fact, the chronophone would transmit a problem just as hard for Archimedes as FAI is for us. So he'd probably solve the problem in the same amount of time as us (so it won't help us). I wonder what that problem would be. Is going to the Moon as hard for Archimedes as building an FAI is for us?

Comment by mathieuroy on Beautiful Math · 2014-01-28T05:19:47.457Z · score: 1 (1 votes) · LW · GW

Eliezer sometimes asks something that I would now like to ask him: what would the world look like if mathematics didn't precede mathematicians? And if it did?

Comment by mathieuroy on 2013 Survey Results · 2014-01-26T21:13:25.180Z · score: 0 (0 votes) · LW · GW

Ok right. I agree.

Find a study partner

2014-01-24T02:27:38.851Z · score: 22 (22 votes)

Cryonics Presentation [help request]

2013-11-09T20:51:34.599Z · score: 2 (5 votes)