Comments

Comment by t3tsubo (calvin-ho) on Rationality Feed: Last Month's Best Posts · 2018-03-21T14:16:15.698Z

Thanks, and I hope you keep doing these!

Comment by t3tsubo (calvin-ho) on Expertise Exchange · 2018-03-15T18:57:33.681Z

If anyone is a multidisciplinary expert in some or most of the following - sociology, psychology, law, business, political science, and economics - I'd love to reach out. I'm thinking of pursuing a post-grad degree after a few years of practice, where my thesis would try to combine/map out the relationships between those fields. Having someone to bounce ideas off of would be great.

If anyone wants to ask me about Law and Economics, (mostly Canadian) legal theory and/or (Canadian) constitutional law, or behavioural economics, feel free.

Caveat: Don't consider anything I say legal advice; everything is in an academic context.

Comment by t3tsubo (calvin-ho) on A Developmental Framework for Rationality · 2018-03-14T15:30:35.646Z

Really interesting. To provide a personal data point/take: my journey started with a mix of C and D through personal introspection. Reading HPMOR by happenstance one day led me to LessWrong, which got me started on A at the same time as I was taking a behavioral economics course.

That let me re-approach C and D with a framework more in line with what the rationality community uses (everything was ad hoc before), and I'm currently at the point of trying to implement the automaticity in B.

Comment by t3tsubo (calvin-ho) on Active vs Passive Distraction · 2018-02-16T14:52:37.775Z

Aren't most distractions ones that you can put as much or as little thought into as you want? To use your example, playing the guitar can be passive as well - just playing songs you already know, or jamming along randomly/improvising (assuming you have sufficient skill). Conversely, a lot of people completely shut off their brains while watching TV, or are so engrossed in the drama that they do not ruminate on whatever distress they are experiencing. I would guess that the distinction between distraction 'kinds' as you described it is tied not to the activity but to the mindset of the individual.

Anecdotally, I find this especially applies to exercise/sports. When I run, I can be either deep in thought or completely engrossed in my muscle movements - it really depends on my mood.

Comment by t3tsubo (calvin-ho) on Pseudo-Rationality · 2018-02-07T19:19:19.872Z

I think you miss the point entirely by judging some of these actions as wrong based on your own set of values rather than the goals and values of the person doing them. For example:

  • Being overly skeptical to demonstrate how skeptical you are

The person doing this values the social signalling of skepticism more than efficiency in this particular matter.

  • Always fighting for the truth, even when you’re burning more social capital than the argument is worth

The person doing this values truth more than social capital, or values the argument more than the lost social capital.

  • Optimising for charitability in discussions to the point where you are consistently being exploited

The person doing this values the outcome of being seen as consistently charitable more than the cost of being exploited on occasion.

  • Refusing to do any social signalling or ever bow to social norms to signal that you're above them

The person doing this is not doing it for signalling purposes, but because the effort to comply with social norms or signalling would take away from the spoons they have to do other things that are more important to them.

It seems to me that the "pseudo-rational" trap one should actually avoid is applying one's own goals/values/utility functions to other people by default.

Comment by t3tsubo (calvin-ho) on Factorio, Accelerando, Empathizing with Empires and Moderate Takeoffs · 2018-02-06T16:31:15.130Z

Thanks for the post - it was a viewpoint I hadn't closely considered (that a friendly but "technically unsafe" AI would bring about the singularity, and that its lack of safety would not be addressed in time because of its benefits), and it is worth thinking about more.

Comment by t3tsubo (calvin-ho) on Beta-Beta Testing: Frontpage Rework [Update - further tweak] · 2018-02-01T21:58:29.710Z

leastwrong?

Comment by t3tsubo (calvin-ho) on Parable of the Faraway Manager · 2018-02-01T14:44:07.195Z

Sounds like a stupid manager who takes unnecessary personal risks (cooking the books) for no personal gain.

Comment by t3tsubo (calvin-ho) on Arbital postmortem · 2018-01-31T15:51:00.419Z

For those of us who are new to the community, could you add a writeup at the start of the post explaining what Arbital is? I got to Chapter 2 in the article and still had only a vague idea of what it was (is?), and the website itself doesn't even have an 'about' page or any explanation other than "Arbital is a hybrid blogging and wiki platform."

Comment by t3tsubo (calvin-ho) on Seeking an Outside View on Israeli Military Service · 2018-01-31T14:17:45.877Z

If you broaden the definition of military to include cyberwarfare in the vein of propaganda/fake news/hacking, then I think it is still a tremendously important activity. Military participation in on-the-ground activities - not as much. I'm personally of the view that actual ground activities now play a supporting role to cyber activities, rather than the other way around. (Rather than try to explain my views on that here, I'll just link the book I got that view from: War in 140 Characters: How Social Media Is Reshaping Conflict in the Twenty-First Century.)

Comment by t3tsubo (calvin-ho) on "Taking AI Risk Seriously" (thoughts by Critch) · 2018-01-31T14:07:47.990Z

Why do you think people wouldn't shut down an AI when they see it developing the capability to resist being shut down, regardless of how useful it is currently being?

Comment by t3tsubo (calvin-ho) on "Taking AI Risk Seriously" (thoughts by Critch) · 2018-01-30T21:29:33.676Z

Just pointing out that I'm still waiting on a response to my comment asking a similar question to the one here. I read the Sequences and Superintelligence, but I still don't see how an AI would proliferate and advance faster than our ability to kill it - a year to get from baby-level to Einstein-level intelligence is plenty long enough to react.

Comment by t3tsubo (calvin-ho) on Teaching Ladders · 2018-01-26T15:11:20.413Z

I think this works well for mental activities like math or Go, but it's much less effective for physical skills. Say you're trying to learn tennis as an absolute beginner. If you try to learn it from someone only slightly better than you, you will inevitably learn bad/incorrect/inefficient habits when you should have just learned from the 'master' to start.

Comment by t3tsubo (calvin-ho) on I Vouch For MIRI · 2017-12-19T16:11:26.572Z

Can you expand on this point:

"If we do build an AGI, its actions will determine what is done with the universe.

If the first such AGI we build turns out to be an unfriendly AI that is optimizing for something other than humans and human values, all value in the universe will be destroyed. We are made of atoms that could be used for something else."

Especially in the sense of why the first unfriendly AI we build would immediately be uncontainable and surpass current human civilization's ability to destroy it?

Comment by t3tsubo (calvin-ho) on Hogwarts House Primaries · 2017-11-21T14:15:51.508Z

This was interesting, and I could see how I would fit myself into these categories. However, I question whether this is a mutually exclusive/collectively exhaustive account of all personality motivations. While it might work well for people who are rational and who strive to be consistent in their actions, I know plenty of people who swap the principles they seem to act on depending on the situation. To use the Hogwarts examples, they would switch from one house to the next depending on their mood.

And I can think of at least one type of motivation which none of the houses seems to cover - pure interest in the work itself - i.e. the hermit savant who doesn't care for any meta/epistemological system (Ravenclaw), doesn't have any kind of moral or personal convictions (Gryffindor), and doesn't care about others (Hufflepuff) or themselves (Slytherin). They simply care about the work or thing that they are fixated on.