Trauma, Meditation, and a Cool Scar 2019-08-06T16:17:39.912Z · score: 80 (31 votes)
Kissing Scars 2019-05-09T16:00:59.596Z · score: 43 (18 votes)
Towards a Quieter Life 2019-04-07T18:28:15.225Z · score: 16 (10 votes)
Modelling Model Comparisons 2019-04-04T17:26:45.565Z · score: 12 (3 votes)
Formalizing Ideal Generalization 2018-10-29T19:46:59.355Z · score: 3 (3 votes)
Saving the world in 80 days: Epilogue 2018-07-28T17:04:25.998Z · score: 56 (23 votes)
Today a Tragedy 2018-06-13T01:58:05.056Z · score: 51 (22 votes)
Trajectory 2018-06-02T18:29:06.023Z · score: 18 (5 votes)
Gaining Approval: Insights From "How To Prove It" 2018-05-13T18:34:54.891Z · score: 21 (8 votes)
Saving the world in 80 days: Prologue 2018-05-09T21:16:03.875Z · score: 34 (10 votes)
Mental TAPs 2018-02-08T17:26:36.774Z · score: 29 (8 votes)


Comment by elriggs on Two senses of “optimizer” · 2019-08-22T21:52:25.916Z · score: 2 (2 votes) · LW · GW

Thanks Pattern! I do see now how it could be used as an optimizer, but it still seems like it’s not intrinsically an optimizer (like how linear algebra isn’t an optimizer but it could be used as an optimizer).

I don’t think anyone actually claimed they were all intrinsically optimizers, but I was still confused by it.

Comment by elriggs on Two senses of “optimizer” · 2019-08-22T21:46:16.003Z · score: 2 (2 votes) · LW · GW

Thanks for the explanation and links! I agree that linear program solvers are intrinsically optimizing. SAT solvers are not intrinsically so, but they can be used to optimize (such as in Pattern's example in his comment).

Moving on, I think it’s hard to define “environment” in a way that isn’t arbitrary, but it’s easier to think about “everything this optimizer affects”.

For every optimizer, I could (theoretically) list everything that it affects at that moment, but that list changes based on what's around the optimizer to be affected. I could rig a setup such that, as soon as I run the optimizer code, it causes a power surge and blows a fuse. Then every optimizer has changed the environment while optimizing.

But you may argue that an opt2 would purposely choose an action that would affect the environment (especially if choosing that action maximizes reward) while an opt1 wouldn’t even have that action available.

But while available actions may be constrained by the type of optimizer, and you could try to make a distinction between different available actions, the effects of those limited actions change with the programming language, the hardware, etc.

I’m still confused on this, and you still may have made a good distinction between different optimizers.

Comment by elriggs on Two senses of “optimizer” · 2019-08-21T21:58:03.758Z · score: 4 (6 votes) · LW · GW

I’m confused about a couple of your examples. In my mind, “optimizing” relates to a number going up or down. More happiness, more money (more problems), more healthy, etc.

Gradient descent makes the cost function go down. RL makes a reward go up. I understand those two examples because there’s an implied goal.

But how is a SAT solver an optimizer? There's no implied goal as far as I can tell.

Same for a linear solver. I could solve linear regression with a linear solver, and that has an implied loss function. But not linear solvers in general.
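To make my confusion concrete, here's a toy sketch (my own illustration, nothing from the post) of how a pure yes/no SAT solver could still be *used* as an optimizer, by binary-searching a bound on an objective. The brute-force `sat_decide` and the `sum(a) >= k` side condition are stand-ins; a real setup would use an actual solver and a CNF cardinality encoding:

```python
from itertools import product

def sat_decide(clauses, n, extra=lambda a: True):
    """Brute-force decision procedure: return an assignment of n boolean
    variables satisfying every clause (and the extra side condition), or None.
    Clauses use DIMACS-style signed integers: literal k means variable k is
    True, -k means it is False."""
    for assign in product([False, True], repeat=n):
        if extra(assign) and all(
            any(assign[abs(lit) - 1] == (lit > 0) for lit in clause)
            for clause in clauses
        ):
            return assign
    return None

def maximize_true_vars(clauses, n):
    """Use the yes/no solver as an optimizer: binary-search the largest k
    such that some satisfying assignment has at least k variables True."""
    lo, hi, best = 0, n, None
    while lo <= hi:
        mid = (lo + hi) // 2
        model = sat_decide(clauses, n, extra=lambda a: sum(a) >= mid)
        if model is not None:
            best, lo = model, mid + 1
        else:
            hi = mid - 1
    return best

# (x1 OR x2) AND (NOT x1 OR x3)
clauses = [[1, 2], [-1, 3]]
print(maximize_true_vars(clauses, 3))  # (True, True, True)
```

So the decision procedure itself never sees a number going up or down; the goal lives entirely in the wrapper that calls it repeatedly.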

Comment by elriggs on Trauma, Meditation, and a Cool Scar · 2019-08-10T00:32:10.070Z · score: 12 (4 votes) · LW · GW
He was unamused.

Ah man, sorry your joke bombed.

If someone else I know gets in an accident, I'll tell them they might experience panic attacks and how to work through them safely. That might be the most helpful thing.

Comment by elriggs on Trauma, Meditation, and a Cool Scar · 2019-08-09T16:44:58.486Z · score: 1 (1 votes) · LW · GW

Thanks! Actually, I did dress as a pirate last Halloween, though I put the patch on my good eye because I'm hilarious.

Comment by elriggs on Trauma, Meditation, and a Cool Scar · 2019-08-09T16:42:30.798Z · score: 3 (2 votes) · LW · GW

That's interesting. I was definitely very surprised; I had no expectation that something like this could happen. Though, when I went into shock, I said "good thing I have another good eye" on the way to the hospital, so I was never in denial about how bad the damage was.

I was not prepared for the panic attacks, nor did I even think of a plan of action to work through them until a couple months afterwards (as mentioned in the post), which was Elo's idea. It wasn't obvious to me that I would have panic attacks or that there was a way to get past them sooner.

You said you knew it might happen, but had you heard about the "reliving the trauma" method from others? Was that something you just figured out at the hospital in Germany?

Comment by elriggs on Trauma, Meditation, and a Cool Scar · 2019-08-08T15:39:00.580Z · score: 1 (1 votes) · LW · GW

Hahaha, I don't know how many people have mentioned the "Girls will think it's cool" idea.

Also, I'm sure you've dealt with meeting someone new and knowing that they've noticed your scar and want to ask about it. It's definitely a conversation starter, and it's been great to try out different ways of telling the story.

Comment by elriggs on Trauma, Meditation, and a Cool Scar · 2019-08-08T15:32:14.238Z · score: 3 (2 votes) · LW · GW

Empathy is right.

I read your post before my accident (and again just now), and it's interesting how much better I understand the trauma.

There's the stereotypical "You don't understand true pain", but I've swung in the opposite direction. Whether it's a story as severe as yours, a splinter, or social discomfort, I feel it and I'm so sorry. Suffering is suffering and it all sucks.

How you dealt with it was interesting. I hated every time my mind would think about the accident because it was scary; I could have died if I had reacted differently. You could have died too, but you purposely relived it. If I could go back, I would try intentionally reliving it on my own terms.

Comment by elriggs on Today a Tragedy · 2019-06-12T19:12:02.893Z · score: 9 (2 votes) · LW · GW

Hey Will,

Today's the day. I was at work when I got the call, and I literally just went into the bathroom, sat on the shower step, and tried to process losing you.

You took care of a lot of your friends when they were being idiots. Do you remember when I had my wisdom teeth pulled, took a whole hydrocodone, and then we went to the speech and debate banquet? I was loopy, could barely stay awake, and I told that one girl "I'm not hitting on you or anything, but you're very beautiful".

You had to carry me out to my mom's car, haha

I hope one day you'll be able to read all these. I just want to give you a giant hug and tell you how stupid you were and then give you another hug.



Comment by elriggs on Today a Tragedy · 2019-04-12T02:47:05.683Z · score: 19 (5 votes) · LW · GW

Remember in high school when you wanted to rent a bounce house for your birthday? I went around to everyone trying to raise money for the cause, but I could only get $20 (all from myself, haha).

$20 isn't enough to rent a bounce house, but you found some employees testing bounce houses, paid them $20 to bounce on one for 20 minutes and left like a king.

I know, I know, I'm a day late, but still,

happy birthday man

Comment by elriggs on Modelling Model Comparisons · 2019-04-04T19:19:34.356Z · score: 1 (1 votes) · LW · GW

I almost agree with your Type 2 = "system"; replace [at a different level of abstraction] with [in a different model]. Going from quarks to atoms to chairs would be different levels of abstraction, yes, but I'm trying to point at an even broader comparison, with "system" being just a subset.

For example, I could describe the object "apple" using physics, chemistry, language, and photography. Comparing atoms with words with pixels wouldn't be just varying abstractions, at least in my understanding of the word abstractions.

I've read your article, and thoroughly enjoyed the topic you were addressing directly. I believe you linked it for the similarity between "multiple objects in relation form a new object at a different level of abstraction" and progressing through each Kegan stage, is that correct?

Comment by elriggs on New GreaterWrong feature: anti-kibitzer (hides post/comment author names and karma values) · 2018-10-20T20:29:07.041Z · score: 3 (2 votes) · LW · GW

Definitely just missed it the first time around, my apologies.

Comment by elriggs on New GreaterWrong feature: anti-kibitzer (hides post/comment author names and karma values) · 2018-10-20T20:25:27.221Z · score: 6 (4 votes) · LW · GW

This fixes the issue entirely, thanks!

Comment by elriggs on New GreaterWrong feature: anti-kibitzer (hides post/comment author names and karma values) · 2018-10-20T02:11:08.994Z · score: 20 (8 votes) · LW · GW

I really like this feature, and will probably switch to reading on greaterwrong because of it. The only implementation choice I disagree with is the confirmation notification if you want to toggle it back on.

A reason to keep the confirmation would be to create a trivial inconvenience for people who are tempted to be biased by the Who and the karma value.

A reason to remove the confirmation is that I sometimes want to verify who is replying to whom. Not the specific person's name: just, if B comments on A's comment, and then someone comments on B, I'd like to toggle quickly to see whether that someone is A or someone else.

Comment by elriggs on Towards a New Impact Measure · 2018-09-19T03:04:33.819Z · score: 3 (2 votes) · LW · GW

From a basic understanding of Hutter's paper that you linked, agents will not self-modify if doing so affects their utility function (because that plan produces less utility by the original function's lights).

Re-reading your post:

This isn't a penalty "in addition" to what the agent "really wants"; u′A (and in a moment, the slightly-improved u′′A) is what evaluates outcomes.

This clearly states that the penalty is part of the utility function that the agent will "really want".

Comment by elriggs on Towards a New Impact Measure · 2018-09-19T02:48:10.632Z · score: 4 (3 votes) · LW · GW

I think the confusing part is "Impact is change to our ability to achieve goals."

This makes me think that "allowing itself to be put into a box" is high impact, since that's a drastic change to its ability to achieve its goals. This also applies to instrumental convergence ("seizing control"), since that's also a drastic change to its attainable utility. This understanding would imply a high penalty for instrumental convergence AND shut-off (we want the first one, but not the second).

"Impact is with respect to the status quo, to if it does nothing" fixes that; however, changing your succinct definition of impact to "Impact is change to our ability to achieve goals relative to doing nothing" would make it less fluent (and less comprehensible!)

Comment by elriggs on Towards a New Impact Measure · 2018-09-19T02:26:21.022Z · score: 1 (1 votes) · LW · GW

Would an AUP agent ever want to self-modify to get rid of its penalty and just keep its utility function?

I'm a little confused on my question: I'm not sure if this is included in the wireheading objection, the embedded agency flaw, or some third alternative.

Comment by elriggs on Saving the world in 80 days: Epilogue · 2018-07-29T14:35:40.131Z · score: 1 (1 votes) · LW · GW

I understand now! haha

My study time was a non-interrupted 1-3 hr block. This made it easier to get in the zone and have relevant details in my working memory.

Going for longer than that (4-6 hr), I predict I'd need to take a walk outside and just think of nothing to let my subconscious do its thing. I haven't done that more than once or twice this summer, so I'm not sure what would be normal for me.

TurnTrout has a lot more experience doing that than me, and he'd be a great resource for these types of questions.

We actually have a Discord server with several people studying MIRI-related materials, if that's something that interests you.

Comment by elriggs on Saving the world in 80 days: Epilogue · 2018-07-29T02:36:35.859Z · score: 2 (2 votes) · LW · GW

Thanks for the clarification!

I understand your first equation 3*30= 90 hr (3 hrs/day* 30 days = 90 hrs to complete a book). The second one is a bit confusing. 3*eff*90 = ~30 hours. (3 hrs/day * 1/9 efficiency * 90 ? = ~30 hours) Was the second 90 supposed to be 30 to make efficiency 30%?

I definitely think efficiency plays a huge role, and I'd say (Hours/day *days * efficiency = total actual hours for a 100% efficient person) would be my answer.
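In code, the formula I have in mind is just this (the numbers below are purely illustrative, not a claim about either of our actual paces):

```python
def effective_hours(hours_per_day, days, efficiency):
    """Total actual hours for a 100%-efficient person: clock time
    spent studying, scaled by an efficiency factor in [0, 1]."""
    return hours_per_day * days * efficiency

# e.g. 3 hrs/day for 30 days at 1/3 efficiency ~= 30 effective hours
print(effective_hours(3, 30, 1/3))
```

That framing separates the two levers cleanly: quantity (hours/day and days) and quality (efficiency).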

(hopefully) Related: an overall good frame to tackle this is focusing on increasing quantity and quality of time spent.

Increasing quantity:

  • Taking time from other activities and put it in this.
  • Gradually building up to longer and longer times. (6-7 hours on a good day is fantastic, and I applaud you! So what about on average? 4-5 hours?)
  • Noticing why you stop. Does your mind want to quit? Are you physically tired or have tired eyes?

Increasing quality:

  • Making sure you have a good textbook.
  • Simply reading faster w/o losing comprehension (Imagining hypothetical "Gun to the head, read as fast as you can w/o losing comprehension for 5 minutes, you will be quizzed" might be a good exercise to play around with)
  • Not wasting time between activities (like going from one exercise to the next) but also!:
  • Knowing how to make use of your subconscious by letting your mind wander (I don't have a gears-level model of this one yet, but I have intuitions that it's important)

May I ask why you would like to compare study paces?

Comment by elriggs on Saving the world in 80 days: Epilogue · 2018-07-29T00:51:33.997Z · score: 1 (1 votes) · LW · GW

I'll look into that course before I start, thanks for the recommendation!

Comment by elriggs on Saving the world in 80 days: Epilogue · 2018-07-29T00:49:43.130Z · score: 1 (1 votes) · LW · GW

Edited, thanks!

Comment by elriggs on Saving the world in 80 days: Epilogue · 2018-07-28T21:39:01.620Z · score: 1 (1 votes) · LW · GW

I appreciate you explicitly stating that it was a lot to deal with. I was actually a bit embarrassed because I didn't get as much done as I expected, so, again, thanks Raemon.

Comment by elriggs on Saving the world in 80 days: Epilogue · 2018-07-28T21:30:30.028Z · score: 13 (5 votes) · LW · GW

Thanks renato!

Regarding your first set of questions:

Reading: originally 3 hr. This changed to 0-3 hours depending on when I woke up in the mornings, which meant going to bed around 9-10, which meant making a habit of trying to fall asleep when I get in bed. I did try to read in the evenings as well, but my eyes would glaze over after working for the day.

Tensor flow: originally 2 hr. I dropped this after the first week due to work taking up 2.2x as much time as I initially predicted. I also felt like I wasn't actually learning anything while going through Google's tutorials, and TurnTrout convinced me to just learn the pre-reqs and theory of ML first.

AI-Safety reading: originally 1 hr. This became 1-3 hours depending on how interested I was. I dropped this after I moved 40 days ago, due to habit changes and then simply forgetting! I really enjoyed it, and it was a low-spoons activity for me.

Meditate: originally 1 hr. This became 0-1 hours depending on the day. I experimented with doing it at different times, but during my lunch break is probably the best for me at the moment.

Weekends are a different animal. If I had a free one & wasn't experiencing emotional problems, I would tear into a book, meditate a lot, read AI papers, and just get a lot of reading done on LW and SlateStarCodex, which was great! I really wish I didn't have to work so that could be every day.

So to answer your question, I changed what I was doing after giving it a solid try and adjusting from there if I needed to. After doing that for a couple of months, I have a much better idea of how to do these types of things now.

Regarding "How much efficiency during reading sessions?"

I interpret that to mean "how many pages per day"/"How many chapters per week"/"How many books per month". If that's correct, then I would say I could (right now) learn a subject/book a month. Like I could read Linear Algebra Done Right in less than a month and Tao's Analysis I & II in less than two months, while doing all of the exercises.

If I didn't have to work, I predict I could halve that time and finish one in less than 2 weeks.

If that wasn't what you asked, please let me know!

Comment by elriggs on Machine Learning Analogy for Meditation (illustrated) · 2018-06-29T02:06:03.312Z · score: 11 (6 votes) · LW · GW

In contrast, I really liked it written out (which makes picture integration natural), and I was surprised to find others having problems reading it. My vision was 20/50 the last time I checked, if that's relevant.

Comment by elriggs on Today a Tragedy · 2018-06-17T03:24:02.815Z · score: 3 (1 votes) · LW · GW

Thank you Jerry, I appreciate it

Comment by elriggs on Today a Tragedy · 2018-06-17T03:23:37.615Z · score: 3 (1 votes) · LW · GW

Thank you Raemon

Comment by elriggs on Today a Tragedy · 2018-06-17T03:23:03.879Z · score: 6 (3 votes) · LW · GW

Thanks Vanessa

We will not give up even on those who seem to be already gone, until it's a certainty they can't be brought back.

I had never even considered this as a possibility beyond cryogenics, but I've mulled over yours and Mitchell's comments for a few days now, and that hypothesis is on my radar now, so thank you. Your quote above captures how I want to respond in light of that.

Comment by elriggs on Today a Tragedy · 2018-06-17T03:13:11.287Z · score: 5 (2 votes) · LW · GW

I very much appreciate your comment, Mitchell. Yours and Vanessa's comments both stated a possible option that I hadn't considered before: bringing back those we've lost. It's a low probability in my mind, but it's at least in my hypothesis space now, so thanks.

The funeral was yesterday and I finally did get to be around people who were also mourning for him. It honestly really did help.

I'm also greatly encouraged and look forward to hearing how your own sprint went. Best of luck until then as well.

Comment by elriggs on Saving the world in 80 days: Prologue · 2018-05-10T21:24:10.485Z · score: 3 (1 votes) · LW · GW

Wow! Glad good things are already coming out of this!

Thanks for sharing your experiences and the warning with it (this is the type of post I'd like to promote!), though I predict I'll do well in this program due to what TurnTrout said in the other comment: I enjoy a lot of what I'm doing! * actually considers each item *... yep! This is honestly what I'd rather be doing than a lot of things, so I feel like Nate Soares in that regard (in his post I linked).

Regarding my why/motivation/someone to protect, I'm going to leave that for a separate post. I wanted this one to be a short & to the point intro. My why post will be much more poetic and wouldn't fit here, and to separate it more cleanly, I'm referring to a terminal goal here.

Though I would love to clarify my instrumental goals to achieve that terminal goal! Those are those 3 bullet points "better self-model, feedback, & self-improvement".

Better self-model: I would like to ~maximize my usefulness, which would require working hard for several years (so closest to "productivity/biological limits"). Getting the most bang for my buck during those years involves finding a sustainable sprint/jog, so I'm making predictions and testing those predictions to get a more accurate self-model.

Self-improvement: I feel lacking in math and technical knowledge of open-problems in AI safety (as well as how progress has been made so far).

Comment by elriggs on Saving the world in 80 days: Prologue · 2018-05-09T23:23:36.397Z · score: 3 (1 votes) · LW · GW

Agreed on the "not downvoting any more than it is right now (-2)". Though I would still like to dissuade any comments not directly related to the content of the post!

Comment by elriggs on Saving the world in 80 days: Prologue · 2018-05-09T23:17:50.469Z · score: 3 (1 votes) · LW · GW

I initially interpreted Mitchell's as mocking as well, but on a second...third read I interpreted it as:

A reference to a common text book theme "An exercise for the reader" combined with the title of this post. Meant as a funny-joke-but-would-also-be-really-cool-if-someone-actually-did-it. This is just speculation though!! (Is this 100% correct Mitchell?)

I greatly appreciate you standing up for me though!!

If my speculation is correct, then I think the reason both you and I originally interpreted it as mockery would be the "those who are a little more advanced" part (meant as hyperbole) and the "*actually*" part.

Comment by elriggs on AI Summer Fellows Program · 2018-05-07T22:00:15.803Z · score: 3 (1 votes) · LW · GW

I'm in a Discord server with a lot of people who have applied, and as far as I can tell, none of us have received an answer yet! One of the MIRI team told us that they're trying to decide on who, but logistics with multiple decision makers is making it take a bit!

Comment by elriggs on Internalizing Internal Double Crux · 2018-05-01T16:37:54.710Z · score: 9 (3 votes) · LW · GW

Thanks for mentioning Duncan's I want to be healthy, and I deserve rest. That one resonated with me, so I immediately did it with Hardcore Comet King and I'm a human too who deserves comfort. Situation: taking a cold shower to be more focused when meditating.


Comfort: *inner scream* cold showers suck!! I don't like it at all.

Comet King: Yes it sucks, but it's only a temporary discomfort until AI takeoff, and then you can have all the comfort you could want.

Comfort: ...

Me: Comfort could you summarize what was said?

Comfort: Cold showers suck, but once AI takeoff happens, I'll have a lot of comfort.

[Then I remembered death]

Comfort: *inner scream*

Comet King: If you'll allow these smaller discomforts, we'll have a greater chance at avoiding the greater discomforts.


And then I took a cold shower.

(I don't feel like I fully captured the conversation, and I feel it had some more dialogue)

I'm not too sure about how to mesh this idea (IDC / fusion) & meditation, specifically noticing intentions. Like I can notice "aversion of taking a cold shower" and focus on it until it fades and goes away, OR I can do IDC/fusion where those aversions/thoughts won't show up in the first place.

I would say the second one is better, but I'm a novice in both of those so I might be mis-representing them. There might also be different relationships between those ideas that I've completely missed.

Comment by elriggs on Timothy Chu Origins Chapter 1 · 2018-04-19T01:50:20.473Z · score: 3 (1 votes) · LW · GW

This reminds me of the book The Princess Bride (the abridged version, of course) which I adored. Specifically the idea of successively one-up-ing.

Looking forward to the rest!

Comment by elriggs on Inconvenience Is Qualitatively Bad · 2018-02-27T03:12:05.956Z · score: 16 (4 votes) · LW · GW

I really enjoyed your list of rejected impositions. It seems you're optimizing for time, space, and cognitive capacity in order to do [thing] better.

I read the article you linked, and Terence Tao handles 10 short emails at a time, 5 pieces of paperwork at a time, 2 classes at a time, all errands while he's in town done, etc. (#'s are arbitrary).

When coding games, I would jump through several hoops at a time dealing with Apple Developer to post an app. Now with a different startup, I'll do all the emails/documentation at one time.

What specific "low-intensity" tasks do you struggle to do all at once? (It seems you already do this when buying in bulk.)

Comment by elriggs on Tune Your Cognitive Strategies · 2018-02-16T16:11:40.661Z · score: 8 (2 votes) · LW · GW

The very top of the post lists several bullet points of "the good" that would happen to you if you had this skill. Is that what you were asking for? Or were you asking for a personal life example: "I used to do [thing], but I gained this skill and now I do [better thing]"? If the latter, then he has a stories tab for his emotional processing post, and I assume he'll eventually have a stories tab for this post as soon as someone sends him a personal story.

Comment by elriggs on Hammertime Intermission and Open Thread · 2018-02-08T04:47:36.486Z · score: 20 (6 votes) · LW · GW
  1. I've read through your series so far, and I don't believe your writing quality has dropped. Eliezer's inadequacy sequence went from 200 to 50 karma from beginning to end, and you'll see the same drop in views in YouTube multi-part videos. I believe it's just barrier-to-entry with each additional post in a sequence, because you have to read the other ones first. Posting individual posts and then compiling them as a sequence sounds like a good solution. Have you done a Yoda timer on this yet? lol
  2. I would like to see the dark side technique, which has been stated at Ziz' blog here and has a basis in Nate Soares' guilt series. Probably related to goal factoring and internal double crux just by the sound of those topics. If I was to summarize it, it'd be "Never do anything unless you know how it benefits you"
  3. What are your experiences of the "rationalist uncanny valley"? I would assume the sunk cost fallacy fallacy you mentioned, but is there anything else? For me personally, it would be "expending too much social capital for truth's sake" and the above dark side technique. Both of these came from taking those ideas (Truth and Dark Side) seriously, actually trying them in real life, and overdoing it in wrong ways. I did learn from those experiences and am better for it, so trying, failing, learning, repeating was overall beneficial. I assume that's what you would call the uncanny valley?

If so, to improve it would be to improve that feed-back cycle. Anything that increases trying, minimizes failing, and provides better feedback is a possible research avenue. From your own series (and a couple extra):

Increasing trying: Yoda Timers, TAPs, Aversion Factorying, Design, Dark Side

Minimizing failing/error: general biases, Actually Trying, Murphyjitsu

Better Feedback: Bug Hunt, CoZe, Time calibration, Focusing

Comment by elriggs on introducing: target stress · 2018-01-16T04:22:23.152Z · score: -8 (3 votes) · LW · GW

Warning: demon thread ahead

Comment by elriggs on Updates from Boston · 2017-12-09T05:58:48.706Z · score: 2 (2 votes) · LW · GW

For me, when a tangent conversation starts to die out, I literally say "So... what do you think about [previous topic]?". The other person will usually laugh, probably because they didn't even realize that they went off on a tangent.