Posts

Mapping of enneagram to MTG personality types 2019-07-29T15:20:09.115Z · score: 6 (3 votes)
Watch Elon Musk’s Neuralink presentation 2019-07-19T21:48:11.162Z · score: 10 (3 votes)
Black hole narratives 2019-07-07T04:07:11.835Z · score: 24 (9 votes)
Crypto quant trading: Naive Bayes 2019-05-07T19:29:40.507Z · score: 30 (7 votes)
Swarm AI (tool) 2019-05-01T23:39:51.553Z · score: 18 (4 votes)
Crypto quant trading: Intro 2019-04-17T20:52:53.279Z · score: 60 (23 votes)
[Link] OpenAI LP 2019-03-12T23:22:59.861Z · score: 15 (5 votes)
Link: That Time a Guy Tried to Build a Utopia for Mice and it all Went to Hell 2019-01-23T06:27:05.219Z · score: 15 (6 votes)
What's up with Arbital? 2017-03-29T17:22:21.751Z · score: 24 (27 votes)
Toy problem: increase production or use production? 2014-07-05T20:58:48.962Z · score: 4 (5 votes)
Quantum Decisions 2014-05-12T21:49:11.133Z · score: 1 (6 votes)
Personal examples of semantic stopsigns 2013-12-06T02:12:01.708Z · score: 44 (49 votes)
Maximizing Your Donations via a Job 2013-05-05T23:19:05.116Z · score: 115 (117 votes)
Low hanging fruit: analyzing your nutrition 2012-05-05T05:20:14.372Z · score: 7 (8 votes)
Robot Programmed To Love Goes Too Far (link) 2012-04-28T01:21:45.465Z · score: -5 (12 votes)
I'm starting a game company and looking for a co-founder. 2012-03-18T00:07:01.670Z · score: 16 (23 votes)
Water Fluoridation 2012-02-17T04:33:00.064Z · score: 1 (9 votes)
What happens when your beliefs fully propagate 2012-02-14T07:53:25.005Z · score: 22 (50 votes)
Rationality and Video Games 2011-09-18T19:26:01.716Z · score: 6 (11 votes)
Credit card that donates to SIAI. 2011-07-22T18:30:35.207Z · score: 5 (8 votes)
Futurama does an episode on nano-technology. 2011-06-27T02:44:14.496Z · score: 3 (6 votes)
Considering all scenarios when using Bayes' theorem. 2011-06-20T18:11:34.810Z · score: 9 (10 votes)
Discussion for Eliezer Yudkowsky's paper: Timeless Decision Theory 2011-01-06T00:28:29.202Z · score: 10 (11 votes)
Life-tracking application for android 2010-12-11T01:48:11.676Z · score: 20 (21 votes)

Comments

Comment by alexei on A Primer on Matrix Calculus, Part 1: Basic review · 2019-08-13T20:24:34.739Z · score: 9 (4 votes) · LW · GW

Awesome! I’m with you so far. :)

Comment by alexei on Diana Fleischman and Geoffrey Miller - Audience Q&A · 2019-08-11T18:11:47.418Z · score: 5 (7 votes) · LW · GW
"I have a fantasy about finding a shitty loser that no one likes and have him dominate me. "

I think that's called topping from the bottom. You want to tell someone to dominate you. The bonus over just being dominated is that you often get to specify how you want it, rather than them doing it the way they want it.

In that context, I think the "loser" part makes sense too. If you pick a "winner", you might get someone who can actually dominate you. But with a "loser", you're more likely to be in control across various dimensions (economically, socially, mentally). So you can still participate in the game of being dominated, while having the safety of only being dominated physically.

"I used to date an Effective Altruist who made a lot less money than I did. At the beginning of an evening, I would give him cash so that he could pay for everything through the course of the night."

This was said by another person, but could be an example of a similar behavior. Though without more context, it's hard to tell.

Comment by alexei on Mapping of enneagram to MTG personality types · 2019-08-09T20:21:52.004Z · score: 4 (2 votes) · LW · GW

That sounds great! I don't have time to help with the development, but the math is pretty easy: https://acritch.com/credence-game/

Definitely post it here on LW when you have a beta version.

Comment by alexei on Why Gradients Vanish and Explode · 2019-08-09T04:39:07.499Z · score: 7 (4 votes) · LW · GW

Yay for learning matrix calculus! I’m eager to read and learn. Personally, I did very well in the class where we learned it, but I’d say I didn’t get it at a deep / useful level.

Comment by alexei on Trauma, Meditation, and a Cool Scar · 2019-08-07T06:42:17.542Z · score: 5 (3 votes) · LW · GW

Thank you for sharing such a personal story. I’m happy you had help along the way and that you chose to accept it and become better, rather than angry and resentful.

Comment by alexei on Open Thread July 2019 · 2019-08-02T23:42:49.001Z · score: 4 (2 votes) · LW · GW

Sounds great. Welcome!

Comment by alexei on Gathering thoughts on Distillation · 2019-08-01T10:20:04.046Z · score: 7 (4 votes) · LW · GW

In the past, I have found distillation posts extremely helpful. I wonder if part of that is because when someone currently attempts it, they're doing it because they're correctly judging that: 1) it's important, and 2) they'll do it well. Probably as it becomes more common, one or both of these will stop being true as often.

I don't write that frequently, but I think I would do 1-2 distillations per year if it were more socially encouraged / rewarded.

Comment by alexei on Forum participation as a research strategy · 2019-07-30T19:55:42.842Z · score: 10 (4 votes) · LW · GW

This is one of those "so obvious when someone says it" things that are not at all obvious until someone says them. Well done!

Comment by alexei on What is our evidence that Bayesian Rationality makes people's lives significantly better? · 2019-07-30T03:56:00.654Z · score: 2 (1 votes) · LW · GW

Your question was: “What evidence can I show to a non-Rationalist that our particular movement...”

I’m saying that for non-rationalists, that’s one of the better ways to do it. They don’t need the kind of data you seem to require. But if you talk about your life in a friendly, open way, that will get you far.

Additionally, “example of your own life” is data. And some people know how to process that remarkably well.

Comment by alexei on What is our evidence that Bayesian Rationality makes people's lives significantly better? · 2019-07-30T01:52:01.900Z · score: 1 (3 votes) · LW · GW

The questions are being asked (at least on my part) because I believe the best way to “convince” someone is to show them with the example of your own life.

Comment by alexei on Mapping of enneagram to MTG personality types · 2019-07-30T01:49:26.931Z · score: 2 (1 votes) · LW · GW

I actually don’t know! I kind of learned it from people / a bit from the official site.

Comment by alexei on Mapping of enneagram to MTG personality types · 2019-07-29T18:46:38.291Z · score: 2 (1 votes) · LW · GW

When I first ran across enneagram and read about it myself, it also felt like all the types kind of fit. But then a friend was describing the true meaning of Peacemaker to a group of us and it... felt like he was reading my essence. Then I went home, took the test, and sure enough that was my type. So I think the descriptions out there are not that great; it's better to talk to someone with a strong understanding of the system who can help you figure out your type.

Comment by alexei on Mapping of enneagram to MTG personality types · 2019-07-29T18:42:18.503Z · score: 2 (1 votes) · LW · GW

My (admittedly limited) model of you does have Red, but the Red is also super well integrated. So it might just feel natural? (E.g. "rebellion" is red, but it's more common in red that's not very developed.) I dunno, thanks anyway for the data point. :)

Comment by alexei on What woo to read? · 2019-07-29T18:38:58.832Z · score: 2 (1 votes) · LW · GW

Second "Dance with the Gods".

Comment by alexei on What woo to read? · 2019-07-29T18:38:23.594Z · score: 5 (2 votes) · LW · GW

Aww, thanks! I'll delete my answer, since yours is actually more in depth.

Comment by alexei on What woo to read? · 2019-07-29T15:40:06.983Z · score: 8 (4 votes) · LW · GW

I wrote this a while ago to explain some of it: https://www.lesswrong.com/posts/nbw2NfjiBmuGx7mim/ms-blue-meet-mr-green

Comment by alexei on What is our evidence that Bayesian Rationality makes people's lives significantly better? · 2019-07-29T05:28:44.279Z · score: 4 (2 votes) · LW · GW

Right, but how do you know? Are there specific stories of how you were going to make a decision X but then you used a rationality tool Y and it saved the day?

Comment by alexei on Shortform Beta Launch · 2019-07-28T07:15:19.190Z · score: 3 (2 votes) · LW · GW

Not sure how to find that section from mobile.

Comment by alexei on Normalising utility as willingness to pay · 2019-07-18T21:19:05.203Z · score: 10 (5 votes) · LW · GW

On a purely fun note, sometimes I imagine our universe running on such "willingness to pay" for each quantum event. At each point in time various entities observing this universe bid on each quantum event, and the next point in time is computed from the bid winners.

Comment by alexei on Prereq: Cognitive Fusion · 2019-07-18T03:47:09.618Z · score: 9 (4 votes) · LW · GW

Oh! I realized I was describing the same concept here: https://www.lesswrong.com/posts/diFBzESLKvKoBetrr/black-hole-narratives

Comment by alexei on Very Short Introduction to Bayesian Model Comparison · 2019-07-18T01:26:44.901Z · score: 17 (4 votes) · LW · GW

I realize it might feel a bit silly/pointless to you to write such "simple" posts, especially on a website like LW. But this is actually the ideal level for me, and I find this very helpful. Thank you!

Comment by alexei on Commentary On "The Abolition of Man" · 2019-07-18T01:11:05.562Z · score: 4 (2 votes) · LW · GW

Strong upvote for: 1) reading C.S. Lewis in the first place (since I think he is largely outside of the rationalist canon), 2) steel-manning his opinions, 3) connecting his opinions to the rationalist diaspora, 4) understanding Lewis' point at a pretty deep level.

Comment by alexei on Commentary On "The Abolition of Man" · 2019-07-18T01:08:45.153Z · score: 4 (2 votes) · LW · GW

To draw from this post, Jordan Peterson, and a few other things I've read, I think their message is something like:

"We, as a society, are losing something very valuable. There is the Way (Tao) of living that used to be passed from generation to generation. This Way is in part reflected in our religion, traditions, and virtues. Over time there was an erosion, especially on the religious side. This led to the society that abandoned religion, traditions, and virtues. We should try to get back to the Way."

I mostly agree. I think the best route is to find a new way "back", rather than try to undo the steps that led us here. Trying to teach religion, tradition, or virtues directly is largely missing the Way. (Similarly to how teaching only the first 11 virtues of rationality is missing the last and most important one.) At this point we have come so far as a society that we should be able to find new, more direct, and more epistemically honest ways of teaching Tao.

Comment by alexei on Intellectual Dark Matter · 2019-07-16T20:57:44.402Z · score: 9 (5 votes) · LW · GW

Random spot check: I ran the paragraphs about the lawyers by my lawyer friend and she approved.

Comment by alexei on Schism Begets Schism · 2019-07-10T14:51:30.854Z · score: 15 (6 votes) · LW · GW

Another consideration is that people splitting off become explorers. If they don’t realize that, they’re very likely to fail or not go very far. And I’d say overall explorers are very valuable. But if everyone is an explorer, that doesn’t work.

Comment by alexei on Schism Begets Schism · 2019-07-10T14:50:43.513Z · score: 5 (3 votes) · LW · GW

Basically agree, but for every group there’s probably some size / some level of disagreement where it would be better off splitting. So maybe I’d rephrase this as: people / groups have a modern bias toward splitting too early.

Comment by alexei on Black hole narratives · 2019-07-09T02:25:50.170Z · score: 2 (1 votes) · LW · GW

I basically agree. Hence this post, which I hope says more than just "get out of the car," but provides additional tools.

There's a sort of egging on that I think is going on when people keep repeating "get out of the car." And I think it works. It's something like... trying to get the person frustrated enough to look outside the space of solutions they've looked at already. Or, maybe more accurately, to reexamine more of their assumptions than they have so far. I think it works for people who already want to "get out of the car". But I also agree that it has a bit of a holier-than-thou tone. Some people respond to that, some find it off-putting. I tried to do both, but who knows how well that turned out.

Comment by alexei on Black hole narratives · 2019-07-09T02:14:38.510Z · score: 3 (2 votes) · LW · GW

I sympathize. And I don't know what else to say. If someone is trying to get you to agree that "2+2=4", you know it'll end with you buying into the whole algebra thing and then math and then who knows what else. Every piece of information you take in sets you up to learn / update on other information in a different way.

My most charitable interpretation (and please correct me if I'm wrong) of your comment is something like: "Hmm, this sounds interesting, but, man oh man, I'm not sure where this guy is trying to lead me. What will happen to my mind / to my behavior / to me if I start doing this? Or if I just start thinking about this?"

And if my interpretation is correct, then the only thing I can say is: look at the people who are pointing this way. And see if you want to be more or less like them. See if they have happy / successful lives. Which parts of them do you like and want to emulate? Which parts seem off? I don't think we know each other, but you can probably find other people in your life who would make a decent substitute.

Comment by alexei on Black hole narratives · 2019-07-08T17:16:19.677Z · score: 2 (1 votes) · LW · GW

In my post I mean neither of those definitions. I’m talking about the thing your mind does while recounting a particular story.

Comment by alexei on Let's Read: an essay on AI Theology · 2019-07-06T02:27:19.738Z · score: 6 (4 votes) · LW · GW

This was a pretty enjoyable read. Meandering, but in a nice relaxing way, without being overly dogmatic. More like musing.

Comment by alexei on Open Thread July 2019 · 2019-07-06T02:12:15.855Z · score: 9 (5 votes) · LW · GW

Hypothesis: there are fewer comments per user on LW 2.0 than on the old LW, because the user base is more educated as to where they have a valuable opinion vs where they don’t.

Comment by alexei on GreaterWrong Arbital Viewer · 2019-06-29T00:04:24.703Z · score: 9 (4 votes) · LW · GW

Color me impressed!

Comment by alexei on Problems with Counterfactual Oracles · 2019-06-11T18:36:40.599Z · score: 2 (1 votes) · LW · GW

This makes me wonder if you could get a safe and extremely useful oracle if you only allow it to output a few bits (e.g. buy/sell a specific stock).

Comment by alexei on FB/Discord Style Reacts · 2019-06-02T01:19:54.756Z · score: 4 (2 votes) · LW · GW

“Muddled thinking but very interesting direction”

Comment by alexei on Newcomb's Problem: A Solution · 2019-05-27T04:39:52.084Z · score: 2 (1 votes) · LW · GW

Seems fine as a practical solution. But it’s still nice to do the math to figure out the formula, just like we have a formula for gravity.

Comment by alexei on Offer of collaboration and/or mentorship · 2019-05-26T21:09:01.705Z · score: 8 (4 votes) · LW · GW

Thanks for the update! I’m glad to hear there was some traction. Certainly more than I expected.

Comment by alexei on Highlights from "Integral Spirituality" · 2019-05-25T01:35:47.344Z · score: 7 (3 votes) · LW · GW

Ok, I’m gonna read this book. Just reread this summary; so good! Thanks again for writing it up! As they say, sorry I only have one super upvote to give you.

Comment by alexei on Offer of collaboration and/or mentorship · 2019-05-17T18:15:31.981Z · score: 15 (7 votes) · LW · GW

In two weeks or so, can you please post an update about how many people reached out? This might partially demonstrate how much actual demand there is for something like this. Also, thank you!

Comment by alexei on Physical linguistics · 2019-05-13T23:43:26.654Z · score: 2 (1 votes) · LW · GW

I really like the analogous questions translation. That was illuminating. Thanks for writing this up.

Comment by alexei on Tales From the American Medical System · 2019-05-10T04:13:32.720Z · score: 3 (2 votes) · LW · GW

There’s a good number of people for whom utility is almost linear in the amount of money they have.

Comment by alexei on Crypto quant trading: Naive Bayes · 2019-05-09T18:18:42.056Z · score: 2 (1 votes) · LW · GW

Good question. I don’t know.

Comment by alexei on Crypto quant trading: Naive Bayes · 2019-05-08T19:25:58.902Z · score: 4 (2 votes) · LW · GW

In this case the latency is not a big issue because you're trading on day bars. So if it takes you a few minutes to get into the position, that seems fine. (But that's something you'd want to measure and track.)
In these strategies you'd be holding a position every bar (long or short). So at the end of the day, once the day bar closes, you'd compute your signal for the next day and then enter that position. If you're going to do stop-losses, that's something you'd want to backtest before implementing.

Overall, you'll want to start trading some amount of capital (maybe 0, which is called paper trading) with any new strategy and track its performance relative to your backtest results + live results. A discrepancy with backtest results might suggest overfitting (most likely) or changing market conditions. A discrepancy with live results might be a result of order latency, slippage, or other factors you haven't accounted for.
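As a rough illustration (not from the original post; the toy price series and the compute_signal helper here are made up), a day-bar loop where you decide the next position at each bar close and then paper-trade it might look something like this:

```python
import pandas as pd

# Toy daily closes; in practice these would come from your data feed.
prices = pd.Series([100.0, 101.5, 100.8, 102.3, 103.0, 101.9, 104.2])

def compute_signal(history: pd.Series) -> int:
    # Hypothetical signal: long (+1) if the last bar closed up, short (-1) otherwise.
    return 1 if history.iloc[-1] > history.iloc[-2] else -1

# At each bar close, decide the position to hold through the next bar.
positions = [compute_signal(prices.iloc[:t]) for t in range(2, len(prices))]

# "Paper trading": compute the returns those positions would have earned,
# so you can compare them against your backtest results later.
realized = prices.pct_change().iloc[2:].reset_index(drop=True)
paper_returns = realized * pd.Series(positions)
print(paper_returns.sum())
```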

Comment by alexei on Crypto quant trading: Naive Bayes · 2019-05-08T19:18:46.254Z · score: 3 (2 votes) · LW · GW

Yes to everything Satvik said, plus: it helps if you've tested the algorithm across multiple different market conditions. E.g. in this case we've looked at 2017, 2018, and 2019, each with a pretty different market regime. (For other assets you might have 10+ years of data, which makes it easier to be confident in your findings since there are more crashes + weird market regimes + underlying assumptions changing.)

But you're also getting at an important point I was hinting at in my homework question:

We're predicting up bars, but what we ultimately want is returns. What assumptions are we making? What should we consider instead?

Basically, it's possible that we predict the sign of the bar with 99% accuracy but still lose money. This would happen if, every time we get the prediction right, the price movement is relatively small, but every time we get it wrong, the price moves a lot and we lose money.
Stop losses can help. Another way to mitigate this is to run a lot of uncorrelated strategies. Then even if market conditions become particularly adversarial for one of your algorithms, you won't lose too much money because the other algorithms will continue to perform well: https://www.youtube.com/watch?v=Nu4lHaSh7D4
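To make the accuracy-vs-returns point concrete (the numbers here are purely illustrative, not from any backtest):

```python
import numpy as np

# 990 correct direction calls that each earn +0.1%,
# 10 wrong calls that each lose 15%.
returns_when_right = np.full(990, 0.001)
returns_when_wrong = np.full(10, -0.15)
returns = np.concatenate([returns_when_right, returns_when_wrong])

print(990 / 1000)      # 0.99 sign accuracy
print(returns.sum())   # 0.99 - 1.5 = -0.51: a net loss despite 99% accuracy
```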

Comment by alexei on A Possible Decision Theory for Many Worlds Living · 2019-05-05T01:14:20.206Z · score: 3 (2 votes) · LW · GW

If you would prefer the universe to be in ... If I were to make Evan's argument, that's the point I'd try to make.

My own intuition supporting Evan's line of argument comes from the investing world: it's much better to run a lot of uncorrelated positive-EV strategies than a few really good ones, since the former reduces your volatility and drawdown, even at the expense of EV measured in USD.
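A quick numerical sketch of that intuition (made-up return parameters, just illustrating the roughly 1/sqrt(N) volatility reduction you get from uncorrelated strategies):

```python
import numpy as np

rng = np.random.default_rng(42)
mu, sigma, n_periods = 0.001, 0.02, 10_000

# One strategy vs. an equal-weight portfolio of ten uncorrelated strategies
# with the same per-period mean and volatility.
one_strategy = rng.normal(mu, sigma, size=n_periods)
ten_strategies = rng.normal(mu, sigma, size=(n_periods, 10)).mean(axis=1)

print(one_strategy.std())    # ~0.02
print(ten_strategies.std())  # ~0.02 / sqrt(10), i.e. about 0.0063
```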

Comment by alexei on A Possible Decision Theory for Many Worlds Living · 2019-05-05T01:12:28.866Z · score: 3 (2 votes) · LW · GW

I'm actually very glad you wrote this up, because I have had a similar thought for a while now. And my intuition is roughly similar to yours. I wouldn't use terms like "decision theory," though, since around here that has very specific mathematical connotations. And while I do think my intuition on this topic is probably incorrect, it's not yet completely clear to me how.

Comment by alexei on Crypto quant trading: Intro · 2019-05-04T16:52:46.742Z · score: 4 (2 votes) · LW · GW

I’ll try to post it this weekend. :)

Comment by alexei on Habryka's Shortform Feed · 2019-04-27T19:38:41.011Z · score: 4 (2 votes) · LW · GW

Yeah, I had the same occurrence + feeling recently when I wrote the quant trading post. It felt like: "Wait, who would downvote this post...??" It's probably more likely that someone just retracted an upvote.

Comment by alexei on Crypto quant trading: Intro · 2019-04-26T03:20:58.109Z · score: 4 (2 votes) · LW · GW

price_change=price/old_price, so no. pct_change is what you're thinking.
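To spell out the distinction in pandas terms (pct_change is the standard pandas method; price_change below just follows the ratio definition above):

```python
import pandas as pd

prices = pd.Series([100.0, 102.0, 99.0])

price_change = prices / prices.shift(1)  # price / old_price                -> 1.02, ~0.971
pct_change = prices.pct_change()         # (price - old_price) / old_price  -> 0.02, ~-0.029

print(price_change.tolist())
print(pct_change.tolist())
```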

Basically, support is some line (horizontal or angled) that predicts where a falling price will stop and revert to going up. It's acting as a kind of "support". Resistance is the opposite. I'll probably end up writing a post on this sometime.

Comment by alexei on Counterfactuals about Social Media · 2019-04-22T19:41:49.861Z · score: 3 (3 votes) · LW · GW

I’ve been off Facebook since December 2018. I currently have no desire to go back to it. I don’t feel like I’m missing anything either. After being off it for two months or so, I went and checked all my notifications. There wasn’t a single post where I felt “wow, I can’t believe I almost missed this.” (I still use Messenger and FB events.)

Comment by alexei on Crypto quant trading: Intro · 2019-04-21T07:39:10.947Z · score: 2 (1 votes) · LW · GW

Can't comment on market making; that's not something we do.

I agree that linear models usually perform much better.