Posts

Thoughts on Death 2014-02-14T20:27:27.578Z

Comments

Comment by BlackNoise on Travel Through Time to Increase Your Effectiveness · 2015-09-07T18:22:20.184Z · LW · GW

Thank you for these marvelous hacks; a few of them have been sitting half-formed at the back of my head for a long time now.

I really like the Second Chances mentality, this line especially:

There are those who tell you to live each day as if it might be your last. I prefer to live each day as if I'm doing it over.

It seems like a way to visualize (and weaponize) a consequentialist viewpoint that's also agreeable to your selves under reflection.

The Split Selves especially crystallized some of the "cooperate with alt-time self-versions" mentality I'm trying to stay aware of.

I do have to say "use with caution": most of these are hard to execute or maintain consistently, and the inevitable failures can end in a feeling of contract breach, lowered self-trust, a "fuck this shit" attitude, and so on.

As such it's important to, um, let go of failure? I mean maybe analyze what went wrong, but definitely skip the punishment and just go to "lesson learned, sins forgiven, let's do our best next time!". At least that seems healthier than guilt/duty as motivation.

Comment by BlackNoise on Thoughts on Death · 2014-02-15T18:55:35.958Z · LW · GW

Meant more in the context of 'Nothing could have been done' vs 'Something could have but wasn't'. Though yes, it may read as more condescending than intended.

While humans in general have indeed been thinking about death for ages, I doubt many of the less religious ones hold strong beliefs about what exactly it entails. Not to mention that those who genuinely believe in an afterlife ought not to be as sad/hurt as those who don't.

All this ultimately doesn't diminish the pain of loss people feel, hence the whole 'death is bad' thing. Also, don't mistake superficially similar things for being similar on a deeper level.

Comment by BlackNoise on Open Thread, November 23-30, 2013 · 2013-11-25T14:23:18.515Z · LW · GW

Not sure where exactly to ask but here goes:

This is sparked by the recent thread(s) on the Brain Preservation Foundation and by my Grandfather starting to undergo radiation+chemo for some form of cancer. While timing isn't critical yet, I'm tentatively trying to convince my Mother (who has an active hand in her Father's treatment) to consider preservation as an option.

What I'm looking for is financial and logistical information on how one goes about arranging this from a non-US country, so if anyone can point me at it I'd much appreciate it.

Comment by BlackNoise on Yet More "Stupid" Questions · 2013-11-24T09:15:22.233Z · LW · GW

Not sure where else to ask but here goes:

This is sparked by the recent thread(s) on the Brain Preservation Foundation and by my Grandfather starting to undergo radiation+chemo for some form of cancer. While timing isn't critical yet, I'm tentatively trying to convince my Mother (who has an active hand in her Father's treatment) to consider preservation as an option.

What I'm looking for is financial and logistical information on how one goes about arranging this from a non-US country, so if anyone can point me at it I'd much appreciate it.

Comment by BlackNoise on Harry Potter and the Methods of Rationality discussion thread, part 19, chapter 88-89 · 2013-06-30T20:15:02.974Z · LW · GW

I've used it myself. All it takes is power and a certain mood.

Harry may have had the mood, but there's doubt about the Power. There have also been multiple foreshadowings of how broken low-level spells are, and a recent mention that he can't stop himself from noticing them. Hence "censors off".

Comment by BlackNoise on Asteroids and spaceships are kinetic bombs and how to prevent catastrophe · 2013-02-26T12:57:50.494Z · LW · GW

I was mainly thinking about Project Thor, which roughly means that going at Mach 10 (~3 km/s) is like being made of TNT, energy-wise. Now, a Space Shuttle orbiter weighs around 100 tons (the ISS masses closer to 400), and I'd imagine asteroid-mining-level space tech could manage at least 10 km/s, if not 30, which brings spaceships into the kiloton-TNT range - far from a hydrogen bomb, but packing the punch of a smallish fission nuke. So while it probably won't be easy to wipe out big cities, immense damage is guaranteed.
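For concreteness, here's a back-of-the-envelope check (a minimal Python sketch; the ~100-ton mass and the three speeds are the assumptions from above, and TNT is taken as ~4.184 MJ/kg):

    # Kinetic energy in TNT equivalents: KE = 1/2 * m * v^2.
    # At ~2.9 km/s a projectile already carries its own mass in TNT.
    TNT_J_PER_KG = 4.184e6
    MASS_KG = 100_000  # the ~100-ton spacecraft assumed above

    for v_kms in (3, 10, 30):
        v = v_kms * 1e3                # speed in m/s
        e_per_kg = 0.5 * v**2          # joules per kg of projectile
        yield_kt = MASS_KG * e_per_kg / (TNT_J_PER_KG * 1e6)  # kilotons of TNT
        print(f"{v_kms:>2} km/s: {e_per_kg / TNT_J_PER_KG:5.1f}x TNT per kg, "
              f"~{yield_kt:.2f} kt total")

At 10 km/s that comes out to about 1.2 kt, and at 30 km/s about 11 kt - squarely in the "smallish fission nuke" range claimed above.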

What I can't estimate properly, due to insufficient knowledge, is the atmosphere's ability to stop or limit such threats. For all I know, spaceships or rocks coming in at too steep an angle might blow up very high up, while anything on a gradual entry might be significantly slowed - although as a rule, bigger things should care less about the atmosphere.

Edit: CellBioGuy's comment points out that spaceships aren't (and probably won't be) built to withstand reentry at dangerous velocities, making at least spaceship-jacks less of a threat.

Comment by BlackNoise on Asteroids and spaceships are kinetic bombs and how to prevent catastrophe · 2013-02-26T03:02:45.837Z · LW · GW

I think the idea is that active defensive measures (as opposed to just watching with telescopes) are a lot more difficult to set up, and there's little motivation, considering that space activities aren't military-oriented. Although I suppose that if we were far enough along in space exploration to have asteroid mining, there'd also be some contingency plans for extinction-grade bricks on a collision course - plans that could probably be adapted to include 'hostile local' handling.

Regarding the physics: do not underestimate heavy things flying very fast, especially if they're good at staying in one piece - a ship or asteroid may well destroy a city when dropped at 10-30 km/s, and attackers will aim.

Comment by BlackNoise on Visual Mental Imagery Training · 2013-02-20T11:10:10.778Z · LW · GW

Have you tried your hand at drawing?

It is not quite the same skill, but being able to notice/See things as they are (closer to raw visual input), rather than letting your brain auto-label stuff, may help you retain images better. I also think it'd be interesting if you were to take a written scene from a book and try to draw it.

By the way, there is supposedly a fast way (~20h) to go from kindergarten level to recognizably realistic drawing skills using some neat tricks; there was even a series of articles about it here on LW. (The other 10k hours go into the final touches of skill, but to the untrained eye the difference isn't as jarring as the gap between no training and some training, at least in simple scenes.)

Comment by BlackNoise on Open Thread, January 1-15, 2013 · 2013-02-05T23:49:52.714Z · LW · GW

Here's an anthropic question/exercise inspired by this fanfic (end of the 2nd chapter specifically). I don't have the time to properly think about it, but it seems like an interesting test for current anthropic reasoning theories under esoteric/unusual conditions. The premise is as follows:

There exists a temporal beacon, acting as an anchor in time. Agents may send their memories back to the anchored time, but as time goes on they may also die or otherwise be prevented from sending memories back. Every new iteration, the agent-copy at the time immediately after the beacon's creation gets blasted with memories from 'past' iterations: either only from the immediately preceding one (which recursively includes all previous iterations further back in subjective time), or from every past iteration at once, with or without a convenient way to differentiate between overlapping memories (another malleable aspect of the premise), or, for real head-screws, from all iterations that lived.

The interesting question is how an agent should estimate its probability of dying in the current iteration, based on the information it was blasted with immediately post-anchor.

A very simple toy model would be something like this: all agent copies send back memories after T years if they haven't died, and the probability of dying (or being unable to send memories back) each iteration is p. Given an agent that finds itself with memories from N iterations, what should it estimate as its probability of dying in this iteration?
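A minimal Monte Carlo sketch of that toy model (in Python; "dying" is taken to mean failing to send memories back, each iteration independently failing with probability p - my reading of the setup, not anything canonical):

    import random

    def death_rate_by_memories(p, chains=200_000, cap=50):
        # For each count of inherited memory-sets N, track how often the
        # iteration that woke up with N memories fails to send back.
        deaths, totals = {}, {}
        for _ in range(chains):
            n = 0  # memory-sets inherited by the current iteration
            while n < cap:
                totals[n] = totals.get(n, 0) + 1
                if random.random() < p:   # this iteration dies
                    deaths[n] = deaths.get(n, 0) + 1
                    break
                n += 1                    # next iteration inherits one more
        return {n: round(deaths.get(n, 0) / totals[n], 3)
                for n in sorted(totals) if totals[n] >= 1_000}

    print(death_rate_by_memories(p=0.3))  # ~0.3 for every N

If p is known, the simulated death rate comes out ≈ p regardless of N; the anthropic subtleties only start to bite once p itself has to be inferred from the memories.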

There should probably be more unsafe-time-travel-based questions to test anthropic decision making, and maybe also to shape intuition regarding many-worlds/multiverse views.

Comment by BlackNoise on February 2013 Media Thread · 2013-02-04T08:42:46.199Z · LW · GW

I've been reading a lot of fanfiction recently, starting with HPMoR, then going recursively through Eliezer's 'favorites' list, eventually branching out to various TVTropes recommendation lists. It's in the latter that I found Destiny is a Hazy Thing, a Naruto AU fanfic with major Lovecraftian themes and (currently, at least) minor crossover/shout-out elements to Evangelion. The author page has a rather good description. Personally, I like this story because it combines a lot of elements I seem to enjoy in fiction, mainly a 'large world' feel and Anehgb univat Lbt Fbgubgu nf n sngure svther (minor spoiler).

I'm currently reading another story by Calanor, Harry Potter and the Puppet of Time

Draco receives the memories of his future self and undertakes an effort to free Harry from Dumbledore's plan without turning him into another Dark Lord and make a better life for himself in the process. Naturally, things stray from their planned path very fast.

Seems to be worth reading as well.

If someone is interested, I can probably spare the time to go over the fics I've read and recommend what I liked, but right now I'll limit myself to Calanor, his works being sufficiently obscure as to not be easily noticeable otherwise (and maybe to other obscure but good authors I find).

Comment by BlackNoise on Random LW-parodying Statement Generator · 2012-09-11T22:22:14.133Z · LW · GW

I'm an aspiring god

Comment by BlackNoise on [LINK] Using procedural memory to thwart "rubber-hose cryptanalysis" · 2012-07-20T16:16:38.133Z · LW · GW

Reminded me of this comic

Comment by BlackNoise on Real World Solutions to Prisoners' Dilemmas · 2012-07-03T03:45:15.651Z · LW · GW

though not quite as good as me cooperating against everyone else's defection.

Shouldn't it be the other way around? (you defecting while everyone else cooperates)

ETA: Liking this sequence so far; it feels like I'm getting the concepts better now.

Comment by BlackNoise on AI risk: the five minute pitch · 2012-05-09T13:21:15.970Z · LW · GW

I thought utility maximizers were allowed to make the inference "Asteroid impact -> reduced resources -> low utility -> action to prevent that from happening" - kinda part of the reason why AI is so dangerous: "Humans may interfere -> humans in power is low utility -> action to prevent that from happening".

They ignore anything but what they're maximizing in the sense that they don't follow the Spirit of the code but rather its Letter, all the way to the potentially brutal (for Humans) conclusions.

Comment by BlackNoise on Open Thread, May 1-15, 2012 · 2012-05-01T16:27:47.145Z · LW · GW

If only for the fact that other people who share my values, or play the same game and therefore play by the same rules, will desire the object even more.

How about if there were two worlds: one where they care about whether a spacetime trajectory does or does not go through a destroy-rebuild cycle, and one where they spend that effort on other things they value. In that case, which world would you rather live in?

The Champagne example helps; I can understand placing value on the effort of attainment, but I'd like another clarification:

Suppose you have two rocks: rock 1 is brought from Mars via spaceship, and rock 2 is the same as rock 1, except that after receiving it you teleport it one meter to the right. Would you value rock 2 less than rock 1? If yes, why would you care about that but not about yourself undergoing the same?

Comment by BlackNoise on Open Thread, May 1-15, 2012 · 2012-05-01T15:00:19.162Z · LW · GW

You would care if certain objects are destructively teleported but not care if the same happens to you (and presumably other humans)

Is this a preference you would want to want? I mean, given the ability to self-modify, would you rather keep putting (negative) value on concepts like "copy of" even when there's no practical physical difference? Note that this doesn't mean no longer caring about causal history. (You care about your own causal history in the form of memories and such.)

Also, can you trace where this preference is coming from?

Comment by BlackNoise on Visual maps of the historical arguments in the topic, "Can computers think?" · 2012-04-18T04:53:21.302Z · LW · GW

The "Thermostats can have beliefs" seems like a really good example of how beliefs should affect actions.

(For those looking, map 3 lowest area)

Comment by BlackNoise on Harry Potter and the Methods of Rationality discussion thread, part 16, chapter 85 · 2012-04-18T04:29:54.579Z · LW · GW

He didn't actually have to have read it, merely to have come across that particular quote.

Comment by BlackNoise on Meetup : Tel Aviv, Israel · 2012-04-16T11:53:02.339Z · LW · GW

Is there a non-car way to get there from Haifa on time? (5 min searching says earliest bus arrives at 19:38 at the new central bus station)

Comment by BlackNoise on Cryonics without freezers: resurrection possibilities in a Big World · 2012-04-05T08:39:44.727Z · LW · GW

It should be mentioned that when considering things like Cryonics in the Big World, you can't just treat all the other "you" instances as making independent decisions; they'll be thinking similarly enough to you that whatever conclusion you reach is what most "you" instances will end up doing (unless you randomize, and assuming 'most' even means anything).

Seriously, I'd expect people to at least mention the superrational view when dealing with clones of themselves in decide-or-die coordination games.

Comment by BlackNoise on Harry Potter and the Methods of Rationality discussion thread, part 13, chapter 81 · 2012-03-28T22:24:15.641Z · LW · GW

I meant it as Bayesian evidence. (updating P(Arbitrage works) down on Bester regretting means updating up on him not Regretting)

Plus, this is stronger evidence for us than for Harry, due to Conservation of Detail and the recent disclaimer by EY that there are no red herrings and that simple solutions != bad solutions (in fact, the opposite is usually true).

ETA: Also, Bester probably thought about it for more than a few seconds, at least the first time he saw it in Harry's mind - remember that he didn't just see those ideas/secrets, he's also seen key moments of Harry's previous conversations.

Comment by BlackNoise on Harry Potter and the Methods of Rationality discussion thread, part 13, chapter 81 · 2012-03-28T20:13:23.352Z · LW · GW

Some counter-evidence against getting gold being difficult: in chapter 27, Mister Bester (the Legilimens who trained Harry) said:

Though I do wish I could remember that trick with the gold and silver.

Implying that it was at least somewhat practical as a means for getting rich quickly.

Comment by BlackNoise on Harry Potter and the Methods of Rationality discussion thread, part 13, chapter 81 · 2012-03-28T18:29:37.021Z · LW · GW

Just trade on forex and use time turner to go back and choose the deal.

You sir, are a genius.

Comment by BlackNoise on Harry Potter and the Methods of Rationality discussion thread, part 11 · 2012-03-28T18:02:36.698Z · LW · GW

Congratulations on correctly guessing (most of) the solution.

Comment by BlackNoise on Harry Potter and the Methods of Rationality discussion thread, part 11 · 2012-03-25T14:12:48.562Z · LW · GW

Actually, if you can loop yourself more than six times in any small stretch of wall-time, then you can get more than 30 subjective hours in one 24-hour wall-time day.

But it's implied you can't actually do that, which is why I think there can be no more than 6 copies at any given time. Plus, if it were possible, you could basically use any one day as a stopping point, Groundhog Day-style, in which you could (for example) brute-force read the entire Hogwarts library.

At any rate, the general limiting principle is that information cannot travel more than 6 hours backwards. I think this means that if you draw a graph of a person using Time-Turners, representing her as an arrow (going right for forward time, and left in 1-hour jumps for Time-Turner use), then you can't have more than 6 hours of left-arrow in any given 24-hour wall-time section; see the sketch below.
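A toy formalization of that graph rule (my own framing, with the 6-hour and 24-hour figures as parameters): list the wall-clock hours at which each 1-hour backward jump happens, then check every 24-hour window. The worst window always starts at some jump, so only those need checking.

    def legal_turner_schedule(jump_times, window=24, max_back=6):
        # jump_times: wall-clock hours at which a 1-hour backward jump occurs.
        # Legal iff no `window`-hour span contains more than `max_back` jumps.
        for start in jump_times:
            in_window = sum(1 for t in jump_times if start <= t < start + window)
            if in_window > max_back:
                return False
        return True

    print(legal_turner_schedule([9, 10, 11, 12, 13, 14]))      # True: 6 hours back
    print(legal_turner_schedule([9, 10, 11, 12, 13, 14, 15]))  # False: 7 in one window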

Comment by BlackNoise on Harry Potter and the Methods of Rationality discussion thread, part 11 · 2012-03-25T02:13:32.993Z · LW · GW

Didn't Harry ask Dumbledore if it's possible to get more than 30 hours in a day using multiple Time-Turners, and get a negative answer?

Comment by BlackNoise on Harry Potter and the Methods of Rationality discussion thread, part 11 · 2012-03-25T02:02:33.860Z · LW · GW

I think he meant the note that came with the Cloak, which said not to trust Dumbledore since he'd take the Cloak from Harry. Which he didn't, and then he said:

But you and I are both gamepieces of the same color, I think. The boy who finally defeated Voldemort, and the old man who held him off long enough for you to save the day. I will not hold your caution against you, Harry, we must all do our best to be wise. I will only ask that you think twice and ponder three times again, the next time someone tells you to distrust me.

And considering that he wrote the note, and set up the mistrust in the first place...

Hence, Magnificent Bastard.

Comment by BlackNoise on Harry Potter and the Methods of Rationality discussion thread, part 11 · 2012-03-25T01:48:18.800Z · LW · GW

I don't think you can have more than 6 versions of yourself present at any given time, since with any more than that, information is traveling more than 6 hours back (at least from the perspectives of the earliest and latest self-clones).

But still, 6 x Dumbledore+Fawkes is quite the army.

Edit: Also,

Many resstrictionss. Locked to your usse only, cannot be sstolen. Cannot transsport other humanss.

You don't actually need to go through Animagus+pouch to transport more than one person on an unrestricted Time-Turner. (Canon also agrees on this, if I recall correctly.)

Comment by BlackNoise on Harry Potter and the Methods of Rationality discussion thread, part 11 · 2012-03-24T18:17:48.140Z · LW · GW

I think my problem is with this "Judge Retrospectively" thing. Here's what I think:

Decisions are what's to be judged, not outcomes. And decisions should be judged relative to the information you had at the time of making them.

In the lottery example, assuming you didn't know what number would win, the decision to buy a ticket is Bad regardless of whether you won or not.

What I got from this:

you will have been retrospectively wrong not to have bought

Is that you think that if you had a (presumably random) number in mind but did not buy a ticket, and that number ended up winning, then your decision not to buy the ticket was Wrong and you should Regret it.

My problem is that this doesn't make sense: we agree that playing the lottery is Bad (negative-sum game and all that), and we don't seem to regret not having played with the specific number that happened to win. Which is good, since (to me at least) Regretting decisions made with the full knowledge you had at the time of decision seems Wrong.

If this is not what you meant and I'm just bashing a Straw Man, please tell me.

Comment by BlackNoise on Harry Potter and the Methods of Rationality discussion thread, part 11 · 2012-03-24T15:58:49.203Z · LW · GW

you will have been retrospectively wrong not to have bought

Not really; before you know the outcome, saying "my numbers will be 5, 11, 17, 33, 36, and 42" is privileging the hypothesis (unless you had other information which allowed you to select that specific combination).

And even if those numbers, by pure chance, were correct, there is still a reason it was a bad decision (in the 'maximizing expected utility' sense) to buy a ticket. Which is what I meant when I said that you can't have expected to win.

Comment by BlackNoise on Harry Potter and the Methods of Rationality discussion thread, part 11 · 2012-03-23T21:57:36.445Z · LW · GW

I agree with the principle, but the lottery is a really poor example of it, since it implies ignorance.

Comment by BlackNoise on Harry Potter and the Methods of Rationality discussion thread, part 11 · 2012-03-23T17:46:51.283Z · LW · GW

Maybe what matters isn't the proximity of the caster but of the Patronus itself. Though Harry might still not be able to send his 2.0 on a search-and-destroy mission while staying at Hogwarts.

His wand stayed in his hand, and a slight, sustainable flow from him replaced the slight losses from his Patronus.

There seems to be a need for constant energy transfer into the Patronus, and I doubt this transfusion line can go through hyperspace.

Comment by BlackNoise on Harry Potter and the Methods of Rationality discussion thread, part 11 · 2012-03-23T17:23:29.292Z · LW · GW

Actually, it's a bad decision with respect to the information you had when you made it; unlike one-boxing instead of two-boxing, you can't have expected to win the lottery.

Comment by BlackNoise on Harry Potter and the Methods of Rationality discussion thread, part 11 · 2012-03-23T16:36:37.343Z · LW · GW

Sorry if this was mentioned before, but I just noticed something (not related to the latest cliffhanger):

It's implied that some people break into Azkaban to give some prisoners normal sleep/Patronus time, but why go to all the trouble when you can just tell a Patronus to go there for a few hours by itself? And we already know that a Patronus can travel into Azkaban (McGonagall's, in the TSPE arc).

So, plothole?

Comment by BlackNoise on Harry Potter and the Methods of Rationality discussion thread, part 11 · 2012-03-23T15:54:04.334Z · LW · GW

Technically, the numbers don't have to work out - Lucius is the one at whose request the trial is being held. If his debt can make him withdraw the charges or clear Hermione's debt, that alone should suffice.

Still, while this is a clever idea, it doesn't sound very "Taboo Trade-off" or "Think of the Wizengamot as individuals instead of wallpaper".

Comment by BlackNoise on Faustian bargains and discounting · 2012-01-29T16:47:57.379Z · LW · GW

Why the hell would you want to doom the vast majority of future-you's to an eternity of torture?

Comment by BlackNoise on The hundred-room problem · 2012-01-22T02:29:09.483Z · LW · GW

This problem has a neat symmetry in that the players are copies of you; all copies finding themselves in blue rooms will assign p(tails | blue) = x, and conversely all copies finding themselves in red rooms will assign p(tails | red) = 1 - x. This way (outside view), the problem reduces to finding the x that gives the best summed log-score for all observers. Running the numbers gives x = 99/100 (a quick check is below), but this problem has way too much symmetry to attribute that to any particular explanation. Basically, this is faul_sname's reasoning with more math.
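A quick numerical check (assuming the standard setup, where the coin determines whether 99 or 1 of the 100 rooms are blue; by the symmetry above, both coin outcomes contribute the same total, so the summed log-score is 99·log(x) + log(1−x)):

    import numpy as np

    # S(x) = 99*log(x) + log(1-x); setting S'(x) = 99/x - 1/(1-x) = 0
    # gives x = 99/100 analytically. Numerically:
    x = np.linspace(1e-4, 1 - 1e-4, 1_000_000)
    score = 99 * np.log(x) + np.log(1 - x)
    print(x[np.argmax(score)])  # ~0.99, i.e. x = 99/100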

Comment by BlackNoise on Spaced Repetition Database for the Mysterious Answers to Mysterious Questions Sequence · 2011-11-15T17:18:48.044Z · LW · GW

I've thought about an alternative (or complementary) answer to the question "Sequences + Spaced Repetition = ?": instead of this approach (which is basically distilling the posts into flashcards as per Incremental Reading), how about a deck with one linked post per card, plus further cards for particularly interesting/important points. When a card comes up for review, you read the linked post and schedule it for re-reading in the future. This can also be scaled to books, though that may be pushing it. (A rough sketch of such a deck is below.)
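A rough sketch of how cheap such a deck would be to generate (using genanki, a Python library for building Anki decks - my choice of tool, and the two posts listed are just illustrative placeholders):

    import genanki

    # A "re-read" card: the front is a post title plus its link; reviewing
    # the card means re-reading the post and grading how familiar it felt.
    reread_model = genanki.Model(
        1537468800, 'Sequence re-read',  # arbitrary fixed model ID
        fields=[{'name': 'Post'}, {'name': 'Link'}],
        templates=[{
            'name': 'Card 1',
            'qfmt': 'Re-read: {{Post}}<br><a href="{{Link}}">{{Link}}</a>',
            'afmt': '{{FrontSide}}<hr id="answer">Grade by how familiar it felt.',
        }])

    deck = genanki.Deck(1537468801, 'Mysterious Answers (re-read)')
    for title, url in [
        ('Making Beliefs Pay Rent', 'https://www.lesswrong.com/...'),
        ('Belief in Belief', 'https://www.lesswrong.com/...'),
    ]:
        deck.add_note(genanki.Note(model=reread_model, fields=[title, url]))

    genanki.Package(deck).write_to_file('reread_deck.apkg')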

The pro is that it's easier to implement, while still being better than just one read-through of the sequences. One big con is that there's no active recall - no questions to answer - so it probably won't become as much a part of you.

The point about this being a complementary solution is that you can slowly convert links-to-posts into proper Q&A flashcards, which basically makes this an intermediate stage between "read once" and "make an Anki deck for a whole sequence at once".

Now, I haven't actually implemented this yet (I only found out about spaced repetition a few weeks ago), and I'd like a second opinion from someone who's used SRS for a while on whether this idea will work or not.

Comment by BlackNoise on Stanislav Petrov Day · 2011-09-30T11:38:29.286Z · LW · GW

Anyone else get hit with a sense of sheer terror as they figured out the connection between this story and the anthropic principle?

Comment by BlackNoise on Welcome to Less Wrong! (2010-2011) · 2011-08-28T16:33:37.154Z · LW · GW

Thanks for crushing my last line of retreat; no more excuses to keep me from (finally) reading the sequences.

As for books, funny how archive panic activates even when you expect it and have precommitted to overcoming it.

Capital letters. Please use them.

Will try.

Comment by BlackNoise on Welcome to Less Wrong! (2010-2011) · 2011-08-25T12:01:02.465Z · LW · GW

hello lesswrong!

I'm a 20 y.o. student two years in studying EE & physics, though I self-identify more as a scientist than an engineer.

currently I'm juggling about 3 'big' goals - general education (in progress), lucid dreaming (more of a side project; might as well use those sleep-hours for something more fun than being unconscious), and rationality (which is why im here).

I found this site (and the concept and usefulness of rationality) via some of Eliezer's writing as i was scouring the Internet in my eternal quest for vanquishing boredom. that was some time ago (1-2 years i think), back then it seemed like yet another interesting thing so i read a bit and then schedule restrictions had me put this on hold 'for better times'.

fast forward to a few months ago; as part of my increasing interest in self-awareness i simply realized that if i won't work on what interests me 'now' then i never will, so i picked up the projects that interested me and started them.

since then i've read the first sequence and quite a few other articles that caught my eye. as you can guess from my listing 'rationality' as one of my major goals, the ideas i encountered have made quite an impact.

now if past experience is any indication, i doubt i'll become an active member of this society. still, i'll read the sequences and will probably continue lurking around as long as i have Internet access.

that's about it for who i am and why im here, now i have a few questions of the practical sort:

besides the sequences, is there any generally accepted recommended reading in the field of rationality, heuristics & biases and cognitive psychology? (and maybe something at beginner-level about AI, transhumanism and cryonics) i already have a small list of books and i want to make sure that im covering all the basics, so suggestions are welcome.

and now for the big one; the target - i want to read and comprehend all the sequences. the problem - a few-months familiarity with tvtropes completely destroyed my ability to wiki-walk without a supercritical tab explosion. further details - reading the second sequence (words one) is moving at a pace of 10 posts/two weeks while reading for a few hours/day, and im currently at >90 tabs open, burn-out seems imminent without a change of strategy. the question - does anyone have a systematic way to read through all of the sequences (and interesting comments), which is optimized for comprehension, low risk of burning out and time efficiency? (i have some idea for this but its still an early draft and doesn't 'feel' efficient)