I think I largely agree with you, but just a thought: Mayyybe drugs can help us explore / learn about parts of ourselves that are usually "kept in check" (in some ways) by other parts.
Does your drugged self not want to get sober, to stay their true (from their reference point, IIUC) self?
am assuming you're implying recreating a brain with the same information content (otherwise it's trivially true AFAICT; just make a baby^^)
yeah, that seems plausible to me
in a way, that's what mind uploading is (although in that case your mind is decoupled from the hardware)
Time cost
Feel free to contact me for help signing up. I've already helped multiple people. contact@matiroy.com
But I'm not in the US!
For Québec: https://cryoquebec.org/
What I chose
Whole-body with a note that Alcor could choose what seemed best at the moment of my death (ex.: if they only have the equipment for neuro cryoprotection, then neuro seems better).
20-year term life insurance for 350k CAD, because I have high confidence in my capacity to save enough money to be able to pay cash in 20 years, and have other safety nets. Otherwise would recommend whole life insurance.
(I think universal life insurance is bad – better to buy your investments and insurance separately to avoid extra premiums. Life insurance agents will likely tell you otherwise; they make more money on universal life insurance.)
FYI, your comment was posted 3 times, probably because of a LessWrong bug that makes it seem as if your comment wasn't posted when you click on 'submit'
- was a mistake
turning off comments serves as a coordination mechanism to discuss the topic at the same place
hummm, basically time-consuming, especially if/when it develops into an addiction + am less focused when horny
Am thinking of organizing a one hour livestreamed Q&A about how to sign up for cryonics on January 12th (Bedford's day). Would anyone be interested in asking me questions?
x-post: https://www.facebook.com/mati.roy.09/posts/10159154233029579
- No name that I'm aware of. Brainstorming ideas: map merging, compartmentalisation merging, uncompartmentalising
We sometimes encode the territory on context-dependent maps. To take a classic example:
- when thinking about daily experience, stars and the Sun are stored as different things
- when thinking in terms of astrophysics, they are part of the same category

This makes it so that when you ask a question like "What is the closest star [to us]?", in my experience people are likely to say Alpha Centauri, and not the Sun.

Merging those 2 maps feels enlightening in some ways; creates new connections / a new perspective. "Our Sun is just a star; stars are just like the Sun." leading to system-1 insights like "Wow, so much energy to harvest out there!"

Questions:
- Is there a name for such merging? If no, should there be? Any suggestions?
- Do you have other (interesting?) examples of map merging?

x-post: https://www.facebook.com/mati.roy.09/posts/10159142062619579
Litany of Tarski for instrumental rationality 😊
If it's useful to know whether the box contains a diamond,
I desire to figure out whether the box contains a diamond;
If it's not useful to know whether the box contains a diamond,
I desire to not spend time figuring out whether the box contains a diamond;
Let me not become attached to curiosities I may not need.
Working a lot is an instrumental goal. If you start tracking your time, and optimizing that metric, you might end up working more than optimal. That seems like a triumph of instrumental goals that isn't a coordination failure. I wouldn't assign this failure to Moloch. Thoughts?
Awesome! :)
oh damn, thanks! there was an error message when I was trying to post it which had given me the impression it wasn't working, hence why I posted it 4 times total ^^
either I'm doing it wrong or you can't
tried things from: https://lifelongprogrammer.blogspot.com/2019/01/how-to-style-markdown-with-css.html
<p class="red">red text</p>
<style> .red {color: red} </style>

::::: {#special .red}
Here is a paragraph.

And another.
:::::
In my mind, "the expert problem" means the problem of being able to recognize experts without being one, but I don't know where this idea comes from as the results from a Google search don't mention this. What name is used to refer to that problem (in the literature)?
x-post: https://www.facebook.com/mati.roy.09/posts/10159081618379579
Font color isn't supported, right?
suggestion of something to try at a LessWrong online Meetup:
video chat with a time-budget for each participant. each time a participant unmutes themselves, their time-budget starts decreasing.
note: on jitsi you can see how many minutes someone talked (h/t Nicolas Lacombe)
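Here's a minimal sketch of how the budget tracking could work (hypothetical Python, not tied to any particular video-chat API):

```python
import time

class TimeBudget:
    """Tracks one participant's remaining speaking time."""

    def __init__(self, budget_seconds: float):
        self.remaining = budget_seconds
        self.unmuted_since = None  # None while the participant is muted

    def unmute(self) -> None:
        if self.unmuted_since is None:
            self.unmuted_since = time.monotonic()

    def mute(self) -> None:
        if self.unmuted_since is not None:
            self.remaining -= time.monotonic() - self.unmuted_since
            self.unmuted_since = None

    def is_exhausted(self) -> bool:
        # Also count the current unmuted stretch, if any.
        current = 0.0
        if self.unmuted_since is not None:
            current = time.monotonic() - self.unmuted_since
        return self.remaining - current <= 0

# e.g. give each participant a 5-minute budget
budgets = {name: TimeBudget(5 * 60) for name in ("alice", "bob")}
```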
x-post: https://www.facebook.com/mati.roy.09/posts/10159062919234579
So the following, for example, don't count as "existential risk caused by AGI", right?
- many AIs
  - an economy run by advanced AIs amplifying negative externalities, such as pollution, leading to our demise
  - an em world with minds evolving to the point of being non-valuable anymore ("a Disneyland without children")
  - a war by transcending uploads
- narrow AI
  - a narrow AI killing all humans (ex.: by designing grey goo, a virus, etc.)
  - a narrow AI eroding trust in society until it breaks apart
- intermediary cause by an AGI, but not ultimate cause
  - a simulation shutdown because our AI didn't have a decision theory for acausal cooperation
  - an AI convincing a human to destroy the world
Strong like! For me, this is an important consideration for preserving my identity, staying productive / mentally sharp, and staying as independent as I want :)
oh, thanks!
The prediction is (emphasis added)
At least one other CRISPR baby will be born by January 2030.
Does the article you linked mention a second one? (I doubt it, because I looked into it after that article was published, and even wrote a wiki page on it.)
That all makes sense to me:)
There's the epistemic discount rate (ex.: probability of simulation shut down per year) and the value discount (ex.: you do the funner things first, so life is less valuable per year as you become older).
Asking "What value discount rate should be applied" is a category error. "should" statements are about actions done towards values, not about values themselves.
As for "What epistemic discount rate should be applied", it depends on things like "probability of death/extinction per year".
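As a toy illustration of the epistemic part (the rate below is made up): if there's a constant annual probability that the value never gets realized, the discount factor compounds per year.

```python
# Toy illustration (hypothetical rate): epistemic discounting from a
# constant annual probability that the value is never realized
# (death, extinction, simulation shutdown, ...).
annual_risk = 0.001  # assume a 0.1% chance per year of losing everything

def epistemic_discount(years: float) -> float:
    # Probability that the value still gets realized after `years` years.
    return (1 - annual_risk) ** years

print(epistemic_discount(50))  # ~0.951: a payoff 50 years out keeps ~95% of its value
```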
Suggestion for retroactive prizes: Award the prize to the most undervalued post on the topic, whenever it was written, assuming the writer is still alive or cryopreserved (given money is probably not worth much to most dead people). "Undervalued" meaning the amount the post is worth minus the amount the writer received.
I'm curious though, do you have thoughts on what a proposal would look like?
Suggestion: Award the prize to the most undervalued post on the topic, whenever it was written, assuming the writer is still alive or cryopreserved. "Undervalued" meaning the amount the post is worth minus the amount the writer received.
2) a) probability mass distribution over time and some other value
I would like to easily be able to predict on a question such as "What will be the price of neuropreservation at Alcor in 2030?" but for many years at the same time.
I was thinking about what would be a good way to do that, and here's a thought.
Instead of plotting probability mass over price for a specific year, we could plot price over years for a specific probability.
So to take the same example, the question could become: "For what price is there a 50% chance that Alcor will charge more than that over the coming century?" You could repeat the same question for the 10% and 90% marks.
Or you could just assume a specific distribution, like a normal distribution. And then you have just 2 curves to make:
- What's the mean of Alcor's expected prices over the coming century?
- What's the standard deviation of Alcor's expected prices over the coming century?
That way, you can easily get a probability mass distribution over price, over time.
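Here's a minimal sketch of that idea (the curves and numbers are made up, just to show the mechanics): given a mean curve and a standard-deviation curve over the years, you can recover a distribution over price for any year and read off the 10% / 50% / 90% marks.

```python
from scipy import stats

# Hypothetical example curves: mean and standard deviation of Alcor's
# neuropreservation price (in dollars) as functions of the year.
def mean_price(year: int) -> float:
    return 80_000 + 1_000 * (year - 2020)

def std_price(year: int) -> float:
    return 5_000 + 500 * (year - 2020)

def price_distribution(year: int):
    # Assumes each year's price is normally distributed.
    return stats.norm(loc=mean_price(year), scale=std_price(year))

# e.g. the 10% / 50% / 90% price marks for 2030:
dist = price_distribution(2030)
print([round(dist.ppf(q)) for q in (0.1, 0.5, 0.9)])
```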
nitpick
If you’re taking compounding seriously, you’d learn the skills with the greatest return first.
I don't see how that follows. Whether you multiply your initial value by 1.3 before 1.1, or the other way around, the end result is the same.
Edit: ah, maybe you meant to learn the skill which unlocks the most opportunity for more learning
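A quick check of the commutativity point (returns made up):

```python
initial = 100
print(initial * 1.3 * 1.1)  # 143.0: the 30% skill learned first
print(initial * 1.1 * 1.3)  # 143.0: same end result in either order
```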
Is Success the Enemy of Freedom? (Full)
(I haven't read yet, but there's a parable, and it's highly upvoted)
ok yeah, that's fair! (although even controlling for that, I think the analogy still points at something interesting)
only sees the parts that someone happened to capture, which are indexed/promoted enough to come to our attention
yeah, I like to see "people just living a normal day"; I sometimes look for that, but even that is likely biased
imagine having a physical window that allowed you to look directly into the past (but people in the past wouldn't see you / the window). that would be amazing, right? well, that's what videos are. with the window it feels like it's happening now, whereas with videos it feels like it's happening in the past, but it's the same
x-post: https://www.facebook.com/mati.roy.09/posts/10158977624499579
A new one, as of 2020-10-16: HPRick and MoRty
What if Harry and Quirrell in "Harry Potter and the Methods of Rationality" had the personalities of Rick and Morty from "Rick & Morty"?
I didn't say extinction risk.
Existential risk – One where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.
Woops, not sure how I missed that; I'll retract my comment
Representing existential risks: a lost opportunity to grab the Reachable Universe (edit: / to expand through the cosmos). (At least, that's my interpretation.)
The true patronus was discovered (possibly rediscovered) by Harry Potter, when he finally understood that Dementors represent Death incarnate. His empathetic desire to protect all of humanity from the pain of that loss allowed him to not just drive away the fear of Death, but to conquer Death itself. This caused his Patronus to evolve into its true form; his Patronus took on the shape of an androgynous human. In this form, the Patronus gains additional abilities, including the ability to destroy Dementors and block the "unblockable" curse, Avada Kedavra.
Was the Sun exactly above you? ^_^
maybe The Egg
maybe this could be transformed into a fable: https://www.smbc-comics.com/comic/2013-06-02
yes, but it's a rather small consideration