Posts

New Petrov Game Brainstorm 2019-10-03T19:48:14.009Z · score: 19 (6 votes)
What tools exist to compute all possible programs? 2019-09-09T16:50:57.162Z · score: 18 (4 votes)
Mapping of enneagram to MTG personality types 2019-07-29T15:20:09.115Z · score: 6 (3 votes)
Watch Elon Musk’s Neuralink presentation 2019-07-19T21:48:11.162Z · score: 10 (3 votes)
Black hole narratives 2019-07-07T04:07:11.835Z · score: 24 (9 votes)
Crypto quant trading: Naive Bayes 2019-05-07T19:29:40.507Z · score: 30 (7 votes)
Swarm AI (tool) 2019-05-01T23:39:51.553Z · score: 18 (4 votes)
Crypto quant trading: Intro 2019-04-17T20:52:53.279Z · score: 60 (23 votes)
[Link] OpenAI LP 2019-03-12T23:22:59.861Z · score: 15 (5 votes)
Link: That Time a Guy Tried to Build a Utopia for Mice and it all Went to Hell 2019-01-23T06:27:05.219Z · score: 15 (6 votes)
What's up with Arbital? 2017-03-29T17:22:21.751Z · score: 24 (27 votes)
Toy problem: increase production or use production? 2014-07-05T20:58:48.962Z · score: 4 (5 votes)
Quantum Decisions 2014-05-12T21:49:11.133Z · score: 1 (6 votes)
Personal examples of semantic stopsigns 2013-12-06T02:12:01.708Z · score: 44 (49 votes)
Maximizing Your Donations via a Job 2013-05-05T23:19:05.116Z · score: 115 (117 votes)
Low hanging fruit: analyzing your nutrition 2012-05-05T05:20:14.372Z · score: 7 (8 votes)
Robot Programmed To Love Goes Too Far (link) 2012-04-28T01:21:45.465Z · score: -5 (12 votes)
I'm starting a game company and looking for a co-founder. 2012-03-18T00:07:01.670Z · score: 16 (23 votes)
Water Fluoridation 2012-02-17T04:33:00.064Z · score: 1 (9 votes)
What happens when your beliefs fully propagate 2012-02-14T07:53:25.005Z · score: 22 (50 votes)
Rationality and Video Games 2011-09-18T19:26:01.716Z · score: 6 (11 votes)
Credit card that donates to SIAI. 2011-07-22T18:30:35.207Z · score: 5 (8 votes)
Futurama does an episode on nano-technology. 2011-06-27T02:44:14.496Z · score: 3 (6 votes)
Considering all scenarios when using Bayes' theorem. 2011-06-20T18:11:34.810Z · score: 9 (10 votes)
Discussion for Eliezer Yudkowsky's paper: Timeless Decision Theory 2011-01-06T00:28:29.202Z · score: 10 (11 votes)
Life-tracking application for android 2010-12-11T01:48:11.676Z · score: 20 (21 votes)

Comments

Comment by alexei on Implementing an Idea-Management System · 2019-10-19T05:03:14.913Z · score: 4 (3 votes) · LW · GW

https://www.lesswrong.com/posts/NfdHG6oHBJ8Qxc26s/the-zettelkasten-method-1

Comment by alexei on ML is an inefficient market · 2019-10-15T22:08:50.680Z · score: 2 (1 votes) · LW · GW

Which tools are you talking about?

Comment by alexei on Reflections on Premium Poker Tools: Part 1 - My journey · 2019-10-09T04:07:37.978Z · score: 5 (5 votes) · LW · GW

Yeah, I’m not sure how big the market is for poker software. There aren’t that many people playing poker, and the vast majority of them play casually with no software. So even if you capture the entire market, it might still be only 1,000 people or so.

As for people who say they are interested but then flake, Sebastian Marshall has the perfect word for them: jokers. Just ignore them immediately and move on. If someone wants to do business, you’ll feel it.

I did a web startup for a few years too, and everything took me longer than expected as well, including authentication. It’s just a fact of life, but hopefully we can plan better now.

Comment by alexei on New Petrov Game Brainstorm · 2019-10-05T06:59:23.907Z · score: 2 (1 votes) · LW · GW

I guess that seems to me to be within the spirit of the game.

Comment by alexei on New Petrov Game Brainstorm · 2019-10-05T00:30:27.347Z · score: 2 (1 votes) · LW · GW

What information would you share that would give you an advantage? (Keep in mind that players are anonymous. Though I guess anyone could in theory de-anonymize themselves.)

Comment by alexei on Meeting the Dragon in Your Garage. · 2019-10-03T01:29:36.438Z · score: 3 (2 votes) · LW · GW

No easy answers to these questions. Welcome to LessWrong where we try to figure it out. I’d recommend reading the Sequences if you haven’t already.

Comment by alexei on Honoring Petrov Day on LessWrong, in 2019 · 2019-10-02T20:45:27.158Z · score: 2 (0 votes) · LW · GW

I’m confused why you got downvoted so much over a joke.... sorry.

Comment by alexei on Meeting the Dragon in Your Garage. · 2019-09-30T20:07:29.348Z · score: 6 (4 votes) · LW · GW

The first two examples are for finding more of a type of thing we already know to exist (astronomical objects, elementary particles). The third example is less obviously so. So, your priors are different.

That aside, I suppose there is no difference. The only thing that I’d consider is the opportunity cost.

Comment by alexei on Idols of the Mind Pt. 2 (Novum Organum Book 1: 53-68) · 2019-09-27T21:56:12.766Z · score: 4 (2 votes) · LW · GW

I've read all of these so far, but maybe you could post them a little less frequently?

Comment by alexei on Rationality and Levels of Intervention · 2019-09-26T00:24:53.019Z · score: 8 (5 votes) · LW · GW

There are a few factors that I imagine influence the optimal strategy:

  • How much time do you have? If there's not a lot of time, more direct intervention methods (lower levels) seem to work better. If you have a lot of time, then it's probably okay to let people meander more as long as they eventually reach the low-entropy state. (Low entropy = behaving well consistently.)
  • How sticky is the low-entropy state? If the child notices that things go much better for them when they're behaving well, they'll probably stick with that behavior. But if the rewards are random, they might be well behaved for a while and then switch.
  • How much do you value the individuals? I.e. what's your utility for one well-behaved kid vs. one misbehaving one? I think in the rationalist community there's a tendency to value a few very well-behaved kids as much better than a lot of somewhat well-behaved kids. In that case, individual attention does seem more warranted / effective.
  • Your overall resources and expertise. If you had it all, why not work at all of the levels at once? There's obviously something good to be said for each level. But if you're not experienced in one of them, you have to weigh the cost of getting better + making mistakes against ignoring that level + focusing on the others. And if your resources are limited but your expertise is even, you probably want to spread the resources around and 80/20 each level.
  • Expertise brings up a further point: do you even know what "behaving well" is? To the extent you're not sure, you should probably focus on reducing your own uncertainty about that first. (Level 0)

At the end of the day, you either need to build robust gears-level models that will help you make these decisions, or have enough kids in your study that you can collect the data and analyze it statistically.

Comment by alexei on How can I reframe my study motivation? · 2019-09-25T04:14:38.778Z · score: 5 (2 votes) · LW · GW

I'd ask yourself a few things: Is there a way to read those things that seems more fun? Maybe skip the "boring" parts? Maybe read out of order? Maybe there's a chapter / section that caught your eye? Maybe there's a part that connects with something else you're interested in / something useful?

I'd also recommend not reading what you think you _should_ read, especially during your free time. Read what interests you. And if that's nothing, then do something else personally productive with that time.

Comment by alexei on The Zettelkasten Method · 2019-09-24T13:40:28.931Z · score: 16 (5 votes) · LW · GW

1. Thank you for sharing about this method! I don't think I've ever heard about it.

2. I'm super excited to try it! There's something about it that just immediately made sense / called out to me, specifically the fact that these are physical cards. I'm guessing that's similar to why you like this method as well.

3. I ordered the supplies. By the end of October I promise I will write up a post / comment with how this method went for me.

Comment by alexei on Timer Toxicities · 2019-09-22T13:19:09.804Z · score: 2 (1 votes) · LW · GW

Now that I think about it, the first game I played with a serious timer mechanic was Tamagotchi. But I think that mechanic worked in its favor / was consistent with the game’s central point: taking care of a “live” creature that couldn’t just be paused. Another part that made it work was that it was a separate physical object rather than a thing on your computer (or phone; not that there were smartphones back then).

Comment by alexei on Focus · 2019-09-16T02:37:31.418Z · score: 4 (2 votes) · LW · GW

I like your analysis of the situation, and it honestly doesn’t seem like a problem to me. Find more things you’re excited about, if you really have a lot of free time. One thing you can do when you have random free 1-3 minutes is meditate or journal.

Comment by alexei on Is competition good? · 2019-09-11T19:59:30.271Z · score: 5 (3 votes) · LW · GW

So an incorruptible person is one who has all of their needs met but doesn't depend on anything for it. They can always make the moral choice, because no choices are incompatible with the foundation of their well-being.

This actually goes to the very heart of Buddhism (and probably a few other religions). Well done. ;)

Comment by alexei on Russian x-risks newsletter, summer 2019 · 2019-09-07T20:22:50.146Z · score: 2 (1 votes) · LW · GW

Thanks for the updates!

Comment by alexei on Seven habits towards highly effective minds · 2019-09-06T03:14:21.637Z · score: 2 (1 votes) · LW · GW

Great list! I was happy to be reminded of all these points for myself too.

Comment by alexei on LessWrong Updates - September 2019 · 2019-09-06T03:11:42.845Z · score: 4 (2 votes) · LW · GW

Looks like it’s now fixed: the iOS menu shows up before the pop-up.

Comment by alexei on LessWrong Updates - September 2019 · 2019-09-04T23:38:40.755Z · score: 2 (1 votes) · LW · GW

Actually, I wasn’t opted in. But I’ve now changed my settings to opt in.

Comment by alexei on LessWrong Updates - September 2019 · 2019-09-04T20:00:10.784Z · score: 6 (3 votes) · LW · GW

On mobile, I often tap and hold a link to bring up the iOS menu so I can open the link in a new tab. Now when I do this, the LW pop-up appears and doesn’t go away until I find a safe spot to click away from it.

Comment by alexei on The Missing Math of Map-Making · 2019-08-29T01:40:20.157Z · score: 7 (4 votes) · LW · GW

I’d say the map is accurate because an agent can use it to navigate.

Comment by alexei on Actually updating · 2019-08-24T19:53:04.161Z · score: 2 (1 votes) · LW · GW

https://www.lesswrong.com/posts/5JDkW4MYXit2CquLs/your-strength-as-a-rationalist

Comment by alexei on A Primer on Matrix Calculus, Part 1: Basic review · 2019-08-13T20:24:34.739Z · score: 9 (4 votes) · LW · GW

Awesome! I’m with you so far. :)

Comment by alexei on Diana Fleischman and Geoffrey Miller - Audience Q&A · 2019-08-11T18:11:47.418Z · score: 5 (7 votes) · LW · GW
"I have a fantasy about finding a shitty loser that no one likes and have him dominate me. "

I think that's called topping from the bottom: you want to tell someone to dominate you. The bonus over just being dominated is that you often get to specify how you want it, rather than them doing it the way they want it.

In that context, I think the "loser" part makes sense too. If you pick a "winner", you might get someone who can actually dominate you. But with a "loser", you're more likely to be in control across various dimensions (economically, socially, mentally). So you can still participate in the game of being dominated, while having the safety of only being dominated physically.

"I used to date an Effective Altruist who made a lot less money than I did. At the beginning of an evening, I would give him cash so that he could pay for everything through the course of the night."

This was said by another person, but could be an example of a similar behavior. Though without more context, it's hard to tell.

Comment by alexei on Mapping of enneagram to MTG personality types · 2019-08-09T20:21:52.004Z · score: 4 (2 votes) · LW · GW

That sounds great! I don't have time to help with the development, but the math is pretty easy: https://acritch.com/credence-game/
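
For reference, here's a minimal sketch of the kind of scoring math involved, assuming a log-base-2 rule where 50% credence scores zero (that exact rule is my assumption; the linked page has the real one):

```python
import math

def log_score(credence_in_truth: float) -> float:
    """Points for one question under a hypothetical log2 rule:
    50% credence in the true answer scores 0, 100% scores +100,
    and confident wrong answers go sharply negative."""
    return 100 * math.log2(2 * credence_in_truth)

print(round(log_score(0.80), 1))  # said 80% and was right: ~67.8
print(round(log_score(0.20), 1))  # said 80% and was wrong: ~-132.2
```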

Definitely post it here on LW when you have a beta version.

Comment by alexei on Why Gradients Vanish and Explode · 2019-08-09T04:39:07.499Z · score: 7 (4 votes) · LW · GW

Yay for learning matrix calculus! I’m eager to read and learn. Personally, I did very well in the class where we learned it, but I’d say I didn’t get it at a deep / useful level.

Comment by alexei on Trauma, Meditation, and a Cool Scar · 2019-08-07T06:42:17.542Z · score: 5 (3 votes) · LW · GW

Thank you for sharing such a personal story. I’m happy you had help along the way and that you chose to accept it and become better, rather than angry and resentful.

Comment by alexei on Open Thread July 2019 · 2019-08-02T23:42:49.001Z · score: 4 (2 votes) · LW · GW

Sounds great. Welcome!

Comment by alexei on Gathering thoughts on Distillation · 2019-08-01T10:20:04.046Z · score: 7 (4 votes) · LW · GW

In the past, I have found distillation posts extremely helpful. I wonder if part of that is because when someone attempts one now, they're doing it because they're correctly judging that: 1) it's important, and 2) they'll do it well. As it becomes more common, one or both of these will probably stop being true as often.

I don't write that frequently, but I think I would do 1-2 distillations per year if it were more socially encouraged / rewarded.

Comment by alexei on Forum participation as a research strategy · 2019-07-30T19:55:42.842Z · score: 11 (5 votes) · LW · GW

This is one of those "so obvious when someone says it" things that are not at all obvious until someone says them. Well done!

Comment by alexei on What is our evidence that Bayesian Rationality makes people's lives significantly better? · 2019-07-30T03:56:00.654Z · score: 2 (1 votes) · LW · GW

Your question was: “What evidence can I show to a non-Rationalist that our particular movement...”

I’m saying that for non-rationalists, that’s one of the better ways to do it. They don’t need the kind of data you seem to require. But if you talk about your life in a friendly, open way, that will get you far.

Additionally, “example of your own life” is data. And some people know how to process that pretty remarkably.

Comment by alexei on What is our evidence that Bayesian Rationality makes people's lives significantly better? · 2019-07-30T01:52:01.900Z · score: 1 (3 votes) · LW · GW

The questions are being asked (at least on my part) because I believe the best way to “convince” someone is to show them with the example of your own life.

Comment by alexei on Mapping of enneagram to MTG personality types · 2019-07-30T01:49:26.931Z · score: 2 (1 votes) · LW · GW

I actually don’t know! I kind of learned it from people / a bit from the official site.

Comment by alexei on Mapping of enneagram to MTG personality types · 2019-07-29T18:46:38.291Z · score: 2 (1 votes) · LW · GW

When I first ran across the Enneagram and read about it myself, it also felt like all the types kind of fit. But then a friend was describing the true meaning of the Peacemaker to a group of us, and it... felt like he was reading my essence. Then I went home, took the test, and sure enough that was my type. So I think the descriptions out there are not that great, and it's better to talk to someone with a strong understanding of the system who can help you figure out your type.

Comment by alexei on Mapping of enneagram to MTG personality types · 2019-07-29T18:42:18.503Z · score: 2 (1 votes) · LW · GW

My (admittedly limited) model of you does have Red, but the Red is also super well integrated. So it might just feel natural? (E.g. "rebellion" is Red, but it's more common in Red that's not very developed.) I dunno. Thanks anyway for the data point. :)

Comment by alexei on What woo to read? · 2019-07-29T18:38:58.832Z · score: 2 (1 votes) · LW · GW

Second "Dance with the Gods".

Comment by alexei on What woo to read? · 2019-07-29T18:38:23.594Z · score: 5 (2 votes) · LW · GW

Aww, thanks! I'll delete my answer, since yours is actually more in depth.

Comment by alexei on What woo to read? · 2019-07-29T15:40:06.983Z · score: 8 (4 votes) · LW · GW

I wrote this a while ago to explain some of it: https://www.lesswrong.com/posts/nbw2NfjiBmuGx7mim/ms-blue-meet-mr-green

Comment by alexei on What is our evidence that Bayesian Rationality makes people's lives significantly better? · 2019-07-29T05:28:44.279Z · score: 5 (3 votes) · LW · GW

Right, but how do you know? Are there specific stories of how you were going to make a decision X but then you used a rationality tool Y and it saved the day?

Comment by alexei on Shortform Beta Launch · 2019-07-28T07:15:19.190Z · score: 3 (2 votes) · LW · GW

Not sure how to find that section from mobile.

Comment by alexei on Normalising utility as willingness to pay · 2019-07-18T21:19:05.203Z · score: 10 (5 votes) · LW · GW

On a purely fun note, sometimes I imagine our universe running on such "willingness to pay" for each quantum event. At each point in time various entities observing this universe bid on each quantum event, and the next point in time is computed from the bid winners.
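
Here's a toy sketch of how that could work, with made-up bidders, bids, and outcomes, purely for illustration:

```python
import random

def resolve_event(bids):
    """bids: list of (bidder, amount, preferred_outcome).
    The highest bid wins; ties are broken at random."""
    top = max(amount for _, amount, _ in bids)
    winners = [b for b in bids if b[1] == top]
    bidder, _, outcome = random.choice(winners)
    return bidder, outcome

# Made-up example: two entities bid on the outcome of one quantum event.
bids = [("entity_a", 3.0, "spin_up"), ("entity_b", 5.0, "spin_down")]
print(resolve_event(bids))  # ('entity_b', 'spin_down')
```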

Comment by alexei on Prereq: Cognitive Fusion · 2019-07-18T03:47:09.618Z · score: 9 (4 votes) · LW · GW

Oh! I realized I was describing the same concept here: https://www.lesswrong.com/posts/diFBzESLKvKoBetrr/black-hole-narratives

Comment by alexei on Very Short Introduction to Bayesian Model Comparison · 2019-07-18T01:26:44.901Z · score: 18 (5 votes) · LW · GW

I realize it might feel a bit silly/pointless to you to write such "simple" posts, especially on a website like LW. But this is actually the ideal level for me, and I find this very helpful. Thank you!

Comment by alexei on Commentary On "The Abolition of Man" · 2019-07-18T01:11:05.562Z · score: 4 (2 votes) · LW · GW

Strong upvote for: 1) reading C.S. Lewis in the first place (since I think he is largely outside of the rationalist canon), 2) steel-manning his opinions, 3) connecting his opinions to the rationalist diaspora, 4) understanding Lewis' point at a pretty deep level.

Comment by alexei on Commentary On "The Abolition of Man" · 2019-07-18T01:08:45.153Z · score: 4 (2 votes) · LW · GW

To draw from this post, Jordan Peterson, and a few other things I've read, I think their message is something like:

"We, as a society, are losing something very valuable. There is the Way (Tao) of living that used to be passed from generation to generation. This Way is in part reflected in our religion, traditions, and virtues. Over time there was an erosion, especially on the religious side. This led to the society that abandoned religion, traditions, and virtues. We should try to get back to the Way."

I mostly agree. I think the best route is to find a new way "back", rather than trying to undo the steps that led us here. Trying to teach religion, tradition, or virtues directly largely misses the Way. (Similarly to how teaching only the first eleven virtues of rationality misses the last and most important one.) At this point we have come far enough as a society that we should be able to find new, more direct, and more epistemically honest ways of teaching the Tao.

Comment by alexei on Intellectual Dark Matter · 2019-07-16T20:57:44.402Z · score: 9 (5 votes) · LW · GW

Random spot check: I ran the paragraphs about the lawyers by my lawyer friend and she approved.

Comment by alexei on Schism Begets Schism · 2019-07-10T14:51:30.854Z · score: 15 (6 votes) · LW · GW

Another consideration is that people splitting off become explorers. If they don’t realize that, they’re very likely to fail or not to go very far. And I’d say overall explorers are very valuable. But if everyone is one, then that doesn’t work.

Comment by alexei on Schism Begets Schism · 2019-07-10T14:50:43.513Z · score: 5 (3 votes) · LW · GW

Basically agree, but for every group there’s probably some size / some level of disagreement at which it would be better off splitting. So maybe I’d rephrase this as: people / groups have a modern bias toward splitting too early.

Comment by alexei on Black hole narratives · 2019-07-09T02:25:50.170Z · score: 2 (1 votes) · LW · GW

I basically agree. Hence this post, which I hope says more than just "get out of the car" and provides some additional tools.

There's a sort of egging-on that I think happens when people keep repeating "get out of the car." And I think it works. It's something like... trying to get the person frustrated enough to look outside the space of solutions they've already considered. Or, maybe more accurately, to reexamine more of their assumptions than they have so far. I think it works for people who already want to "get out of the car". But I also agree that it has a bit of a holier-than-thou tone. Some people respond to that; some find it off-putting. I tried to do both, but who knows how well that turned out.

Comment by alexei on Black hole narratives · 2019-07-09T02:14:38.510Z · score: 3 (2 votes) · LW · GW

I sympathize. And I don't know what else to say. If someone is trying to get you to agree that "2+2=4", you know it'll end with you buying into the whole algebra thing and then math and then who knows what else. Every piece of information you take in sets you up to learn / update on other information in a different way.

My most charitable interpretation (and please correct me if I'm wrong) of your comment is something like: "Hmm, this sounds interesting, but, man oh man, I'm not sure where this guy is trying to lead me. What will happen to my mind / to my behavior / to me if I start doing this? Or if I just start thinking about this?"

And if my interpretation is correct, then the only thing I can say is: look at the people who are pointing this way. And see if you want to be more or less like them. See if they have happy / successful lives. Which parts of them do you like and want to emulate? Which parts seem off? I don't think we know each other, but you can probably find other people in your life who would make a decent substitute.