Comments

Comment by StefanDeYoung on How To: A Workshop (or anything) · 2022-06-12T18:57:13.365Z · LW · GW

Really appreciate the level of detail provided. My usual problems with "How To" type content are either "this is too specific, so I can't see how to generalise" or "this is overly broad, and I'm not able to generate my own specific examples." This post was very specific, so it avoided the latter failure, and long enough that I got plenty of content from which to generalise.

Thank you!

Comment by StefanDeYoung on One-Year Anniversary Retrospective - Los Angeles · 2018-04-04T14:15:36.580Z · LW · GW

Congratulations on running a year of meetups! That's not easy.

In the past I've had difficulty pinning down an appropriate meeting schedule. Was there discussion in your group over the meetup frequency? When you rebooted the group, was it explicitly as a weekly group? How well did the first few members know each other before the decision to meet weekly was made?

Comment by StefanDeYoung on Taking the Hammertime Final Exam · 2018-03-22T17:52:18.854Z · LW · GW

The "No do-overs" section reminded me of a recent conversation. A friend was giving me a lift home from a rationality meetup, we got off the highway, and I told him to turn right. We should have turned left. Once we realised my mistake, I apologized. His response was something along the lines of "We've just been talking for the last three hours. Why do you believe I'd be averse to spending another five minutes with you?"

The feeling I had wasn't really that spending more time talking would be bad. Rather, I knew that he was meeting someone else after dropping me off, and I didn't want to make him late. I dislike being late, and I projected that feeling onto him.

The "no do-overs" feeling also often comes up when you forget people's names.

Also, I want to congratulate you for writing the exam. :)

Comment by StefanDeYoung on Hammertime Final Exam · 2018-03-22T16:07:48.912Z · LW · GW

"You could call it heroic responsibility, maybe," Harry Potter said. "Not like the usual sort. It means that whatever happens, no matter what, it's always your fault. Even if you tell Professor McGonagall, she's not responsible for what happens, you are. Following the school rules isn't an excuse, someone else being in charge isn't an excuse, even trying your best isn't an excuse. There just aren't any excuses, you've got to get the job done no matter what." -HPMOR Chapter 75

Reality doesn't grade on a curve.

Comment by StefanDeYoung on Rationality Feed: Last Month's Best Posts · 2018-03-21T15:13:33.994Z · LW · GW

One of the main reasons to have a community blog like Less Wrong is to create common knowledge. I see this kind of summary/highlight post as doing a similar kind of work to the canonization that Raemon wrote about in his Peer Review post.

Thank you.

Comment by StefanDeYoung on Inference & Empiricism · 2018-03-20T16:13:48.401Z · LW · GW

The E and I in "high-E, low-I" are empiricism and inference?

Comment by StefanDeYoung on A Motorcycle (and Calibration?) Accident · 2018-03-19T14:51:23.951Z · LW · GW

This is very well written, especially the anecdote at the start. Thank you for sharing.

Comment by StefanDeYoung on Friendship · 2018-03-01T20:31:12.779Z · LW · GW

In Subduing Moloch, Teja suggests intentionally creating a channel for rationalists to have one-on-one conversations with each other. As a result, he and I have already had a video chat, and we've joined the LessWrong Slack in order to determine if that might be an appropriate venue to build this project.

I intend to book a conversation with you, and I will also consider creating a similar Calendly system for people to book time with me.

Comment by StefanDeYoung on Subduing Moloch · 2018-02-19T15:30:49.503Z · LW · GW

In her recent post about working remotely, Julia Evans mentions donut.ai as a Slack plugin that randomly pairs members of a Slack channel for discussions.

Comment by StefanDeYoung on Clarifying the Postmodernism Debate With Skeptical Modernism · 2018-02-16T18:12:11.639Z · LW · GW

Do you see Skeptical Modernism as a new movement in philosophy, or can you point to a previous body of work on this subject?

Comment by StefanDeYoung on Subduing Moloch · 2018-02-15T16:10:58.383Z · LW · GW

I agree that an hour a day is a large time commitment. I couldn't agree to spend an hour of my time on this project. I would prefer a smaller time increment by default. For example, calls could be multiples of 15 minutes, with participants able to schedule themselves for multiple increments if desired. I'm sensitive to your point that choices are bad, but people's schedules will vary so widely that being able to choose whether you want to talk for 1, 2, 3, or 4 intervals during any given week would allow this to reach a much wider group.

To your point that we should have a concrete set of suggestions for what to do on the call, agendas are essential.

Comment by StefanDeYoung on Subduing Moloch · 2018-02-15T15:48:59.274Z · LW · GW

I disagree that participants would already have to be superhuman, or even particularly strong rationalists. We can all get stronger together through mutual support even though none of us may already be "big-R Rationalists."

In his post about explore/exploit tradeoffs, Sebastian Marshall remembers how Carlos Micelli scheduled a Skype call every day to improve his network and his English. I haven't looked into how many of the people Micelli called were C-suite executives or research chairs or other similarly high-status individuals. My guess is that he could have had good results speaking with interesting and smart people on any topic.

For myself, I remember a meetup that I attended in November last year. I was feeling drained by a day job that is not necessarily aligned with my purpose. The event itself was a meeting to brainstorm changes to the education system in Canada, which is also not necessarily aligned with my purpose. However, the charge and energy I got simply from speaking to smart people about interesting things was, and I want to stress this, amazing. For weeks afterwards, the feeling that I got from attending that meeting was all that I wanted to talk about.

If I could get that feeling every day...

Comment by StefanDeYoung on A Proper Scoring Rule for Confidence Intervals · 2018-02-15T15:17:39.379Z · LW · GW

Thanks for this reply. The technique of asking what each term of your equation represents is one I have not practiced in some time.

This answer very much helped me to understand the model.

Comment by StefanDeYoung on A Proper Scoring Rule for Confidence Intervals · 2018-02-15T15:13:11.307Z · LW · GW

You're welcome. Something that I'm trying to improve about how I engage with LessWrong is writing out either a summary of the article (without re-referring to the article) or an explicit example of the concept in the article. My hope is that this will help me to actually grok what we're discussing.

Comment by StefanDeYoung on A Proper Scoring Rule for Confidence Intervals · 2018-02-13T15:47:53.002Z · LW · GW

I need help figuring out how to use this scoring rule. Please consider the following application.

How much does it cost to mail a letter under 30g in Canada?

I remember buying 45c stamps when I was a child, so the current price is likely to be larger than that. It's been over a decade or so, and assuming a 2% rise in cost per year, we should be at roughly 60c per stamp. However, we also had big budget cuts to our postal service that even I learned about despite not reading the news. Let's say that Canada Post increased their prices by 25% to accommodate some shortfall. My estimate is that stamps cost 75c.
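For concreteness, here is the same back-of-the-envelope arithmetic in a few lines of Python; the 14-year figure is my rough stand-in for "over a decade or so", so treat it as an assumption rather than something pinned down above.

```python
# Back-of-the-envelope stamp price estimate (sketch; the year count is a guess).
childhood_price = 45      # cents; the stamps I remember buying as a child
years = 14                # rough reading of "over a decade or so"
inflation = 1.02          # assumed 2% price rise per year
budget_cut_bump = 1.25    # assumed 25% increase after the budget cuts

estimate = childhood_price * inflation ** years * budget_cut_bump
print(round(estimate))    # ~74 cents, consistent with my 75c guess
```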

What should be my confidence interval? Would I be surprised if a stamp cost a dollar? Not really, but it feels like an upper bound. Would I be surprised if a stamp cost less than 50c? Yes. 60c? Yes. 70c? Hmmm.... Assume that I'm well calibrated, so I'm reporting 90% confidence for an interval of stamps costing 70c to 100c.

Answer: Stamps in booklets cost 85c each, individual stamps are 100c each. Because I would always buy stamps in booklets, I will use the 85c figure.

S is the size of my confidence interval: 100c − 70c = 30c. D is the distance between the true value and the interval, which is 0 in this case because the true value lies inside the interval.

I'm not really sure what to do with this number, so let's move to the next paragraph of the post.

The true value is 85c and the interval is [70c, 100c]. Because the true value is contained in the interval, there is no distance penalty and the score depends only on the interval's size.

How does this incentivise honest reporting of confidence intervals?

Let's say that, when I intuited my confidence interval above, I was perturbed that it wasn't symmetric about my estimate of 75c, so I set it to a symmetric interval for aesthetic reasons. In this case, my score would be worse than my previous score by a factor of 2.

Let's say that, when I remembered the price of stamps in my childhood, I was way off and remembered 14c stamps. Then I would believe that stamps should cost around 22c now. (Here I have the feeling of "nothing costs less than a quarter!", so I would probably reject this estimate.) That would likely anchor me, so that I would set a high confidence on the price falling within a narrow interval around 22c.

Am I trying to maximize this score?

I looked up the answer, and the lowest cost standard delivery is for letters under 30g.
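Writing the rule out as code helped me see the moving parts. This is a minimal sketch that assumes the standard interval score: the interval's size S plus a penalty of 2/α times the distance D by which the true value falls outside it, with α = 0.1 for a 90% interval and lower scores being better. The post's exact constants may differ.

```python
def interval_score(lower, upper, true_value, confidence=0.90):
    """Score a reported confidence interval; a lower score is better.

    Sketch of the standard interval score: the interval's size S plus a
    penalty of (2 / alpha) times the distance D by which the true value
    falls outside the interval. The post's exact rule may differ.
    """
    alpha = 1 - confidence
    S = upper - lower                                   # size of the interval
    D = max(lower - true_value, true_value - upper, 0)  # 0 when the true value is inside
    return S + (2 / alpha) * D

# My reported 90% interval of 70c to 100c, with the true booklet price of 85c:
print(interval_score(70, 100, 85))  # 30.0 -- true value inside, so the score is just S
# A narrower interval that misses the true value is penalised heavily:
print(interval_score(70, 80, 85))   # ~110 -- S = 10 plus roughly 20 * D = 20 * 5
```

Under this form of the rule, I would be trying to minimize the score rather than maximize it, but that conclusion only holds if the post's rule really does match this sketch.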

Comment by StefanDeYoung on Monthly Meta: Referring is Underrated · 2018-02-08T17:11:53.581Z · LW · GW

Another reason to become better at referring is to grow your network. I have in mind referring people to specific coaching. If we're referring aspiring rationalists to people outside the community, those people will be incentivised to engage with us.

Comment by StefanDeYoung on Examples of Mitigating Assumption Risk · 2017-11-30T15:50:23.485Z · LW · GW

Can you provide additional details regarding eating Mealsquare instead of Soylent?

Comment by StefanDeYoung on Fire drill proposal · 2017-11-22T19:13:08.612Z · LW · GW

Can you explain how the not-turning-on-the-phone drill would increase preparedness for the advent of AGI? Is it that it is a demonstration of humanity's ability to coordinate on a massive scale?

Comment by StefanDeYoung on [deleted post] 2017-11-21T15:20:13.689Z

I like this because it follows the approach of Taking the Obvious Advice, and because of its focus on operationalising rationality rather than seeking insight porn.

As a short-term solution, would a Google Sheet work? I believe that you could then use a Google Form to populate the sheet. Here's your example data in a spreadsheet.

I will return to this thread on December 11, 2017 to see if anyone else has subscribed to this project. I'm unable to commit any time prior to that date.

Comment by StefanDeYoung on Instrumental Rationality 3: Interlude I · 2017-11-09T19:25:12.289Z · LW · GW

I have also seen other users post about using Anki cards to remember insights from LW. However, I've had difficulty with formulating good flashcards related to this material.

Right now, I have a card for the Litany of Tarski. On one side is the litany ("If the sky is blue..."); on the other side, "Litany of Tarski." When I see the card, I try to recite the litany in that form, but I also consider the underlying idea that there is a territory to be mapped, and that the map is supposed to reflect the territory. I might also create a new litany with some object other than the blue sky.

Is this the kind of card that you create? Can you give an example of how you use a card to remind yourself of insights rather than definitions?

Comment by StefanDeYoung on Global Catastrophe Prevention Plan Comprehensive Working Outline (wip) · 2017-10-18T18:33:34.314Z · LW · GW

Your plan currently only addresses x-risk from AGI. However, there are several other problems that should be considered if your goal is to prevent global catastrophe. I have recently been reading 80,000 Hours, and they have the following list of causes that may need to be included in your plan: https://80000hours.org/articles/cause-selection/

In general, I think that it's difficult to survey a wide topic like AI Alignment or Existential Risk and, with granularity, write out a to-do list for solving it. I believe that people who work more intimately with each x-risk would be better suited to develop the on-the-ground action plan.

It is likely that a variety of x-risks would be helped by pursuing similar goals, in which case high-level coordinated action plans developed by groups focused on each x-risk would be useful to the community. If possible, try to attend events such as EA conferences where groups focusing on each of the possible global catastrophes will be present, and you can try to capture their shared action plans.

Comment by StefanDeYoung on For independent researchers? · 2017-10-18T17:47:03.374Z · LW · GW

In any field, you will be influenced to follow mainstream approaches. I don't see any way to avoid that; you'll need to keep abreast of arXiv papers, updates to programming libraries, and whatever wisdom the community can accumulate. I'd say that you should embrace the community, as I've always found it much more difficult to go it alone for reasons of inspiration, motivation, and the desire for social approval.

If you're concerned that you will miss critical insights while following someone else's approach, set appointments with yourself to check in on how you're working. Take an hour every month or two, or every quarter, to think through how you've been approaching your work and how you should change.

Comment by StefanDeYoung on Instrumental Rationality 1: Starting Advice · 2017-10-06T15:32:23.962Z · LW · GW

Thank you!

Comment by StefanDeYoung on Instrumental Rationality 1: Starting Advice · 2017-10-05T19:18:59.943Z · LW · GW

I hadn't read these sequences as part of LW 1.0, so thank you very much for bringing them back into the spotlight. Do they contain a listing of habits that have been useful to those aspiring to implement instrumental rationality? Is there a compendium of what obvious advice is on offer in various domains?

Comment by StefanDeYoung on [deleted post] 2017-09-27T16:38:10.400Z

Thanks. That answers my question; seeing VOI capitalised and immediately acronymed made me think that it might be a Named Concept.

When you're thinking about whether to keep pulling on the thread of inquiry, do you actually write down any pseudomath, or do you decide by feeling? Sometimes I think through some pseudomath, but I wonder whether it might be worth recording that information, or if thinking on paper would produce better results than thinking "out loud."

Comment by StefanDeYoung on [deleted post] 2017-09-26T19:53:21.892Z

Agreed. In the article, Conor says

I claim that my brain frequently produces narrative satisfaction long before the story's really over, causing me to think I've understood when I really haven't.

When you use the phrase Value of Information, are you drawing from any particular definition or framework? Are you using the straightforward concept of placing value on having the information?

Comment by StefanDeYoung on [deleted post] 2017-09-26T14:54:28.892Z

Thanks for the tip. I am sensitive to the limits of my own willpower.

A strategy that was working for me was keeping my daily tasks/to-do lists and my journal in the same book. That way, I needed to check into my book in order to do my work, and would be able to intersperse journaling in between lists as the urge arose.

Comment by StefanDeYoung on [deleted post] 2017-09-26T14:38:12.859Z

At what point do we judge that our map of this particular part of the territory is sufficiently accurate, and accept the level of explanation that we've reached?

If we're going to keep pulling on the thread of "why are the dominoes on the floor" past "Zacchary did it" then we need to know why we're asking. Are we trying to prevent future messes? Are we concerned about exactly how our antique set of dominoes was treated? Are we trying to figure out who should clean this mess?

If we're only trying to figure out who should clean the mess, then "Zacchary did it" is sufficient, and we can stop looking deeper.

Comment by StefanDeYoung on [deleted post] 2017-09-25T16:13:21.906Z

I remember at the start of each year of high-school having the experience of realising just how stupid and ignorant I had been the previous year. And each year, I was surprised to have the same experience. This revealed to me, I think, that I'm more episodic than diachronic in that I dissociate from my past selves.

I appreciate the advice here to have a more diachronic meta-personality. To implement this, I intend to double down on keeping a journal. I've struggled with this habit before, but upon rereading journal entries from a year ago, I have gained insights into how to improve my life in the present.

Comment by StefanDeYoung on Meetup : First(New?) Waterloo Meetup · 2011-11-15T01:00:30.546Z · LW · GW

I will attend. Looking forward to meeting you all!

Comment by StefanDeYoung on Welcome to Less Wrong! (2010-2011) · 2011-08-30T02:41:10.332Z · LW · GW

I'm having trouble with formatting. Here is what I was trying to write, less my attempts to include links:

Greetings, LessWrong.

I'm a 21 y/o Physics undergrad at the University of Waterloo. I'm currently finishing a co-op work term at the Grand River Regional Cancer Centre. I'm also trying to build a satellite: www.WatSat.ca.

My girlfriend recommended that I read HPMoR - which I find delightful - but I thought LessWrong a strange pen name. I followed the links back here and spent a month or so skimming the site. I'm happy to find a place on the internet where people are happy to provide constructive criticism in support of self-optimization. I'm also particularly intrigued by this Bayesian Conspiracy you guys have going.

I tend to lurk on sites like this rather than actually joining the community. However, I discovered a call for a meetup in Waterloo (http://lesswrong.com/r/discussion/lw/790/are_there_any_lesswrongers_in_the_waterloo/), and I couldn't help myself.

Comment by StefanDeYoung on Welcome to Less Wrong! (2010-2011) · 2011-08-30T02:39:15.845Z · LW · GW

Greetings, LessWrong.

I'm a 21 y/o Physics undergrad at the University of Waterloo. I'm currently finishing a co-op work term at the Grand River Regional Cancer Centre. I'm also trying to build a satellite.

My girlfriend recommended that I read HPMoR - which I find delightful - but I thought LessWrong a strange pen name. I followed the links back here and spent a month or so skimming the site. I'm happy to find a place on the internet where people are happy to provide constructive criticism in support of self-optimization. I'm also particularly intrigued by this Bayesian Conspiracy you guys have going.

I tend to lurk on sites like this rather than actually joining the community. However, I discovered a call for a meetup in Waterloo, and I couldn't help myself.

Comment by StefanDeYoung on Are there any Lesswrongers in the Waterloo, Ontario area? · 2011-08-29T22:57:07.838Z · LW · GW

I am a student at UW. If you build it, I will come.

I've been skimming LW for about a month now, and just registered to respond to this post. It seems you've lowered my cost of entry. I thank you.