Posts

Ms. Blue, meet Mr. Green 2018-03-01T13:43:39.821Z
Crypto autopsy reply 2018-02-06T10:32:31.291Z
Arbital postmortem 2018-01-30T13:48:31.399Z
Meditation retreat highlights 2017-06-27T22:58:14.567Z

Comments

Comment by alexei (alexei.andreev) on The Unreasonable Effectiveness of Deep Learning · 2018-10-03T15:25:12.137Z · LW · GW

This is really good and I found it very useful for what I'm currently working on.

One note: it felt a bit disconnected. And I didn't get the impression that RL is "unreasonably effective."

Comment by alexei (alexei.andreev) on A possible solution to the Fermi Paradox · 2018-05-07T07:41:51.466Z · LW · GW

Yeah, makes sense. Also note that if Many Worlds is true and quantum immortality exists, you will never (from your own point of view) die.

Comment by alexei (alexei.andreev) on Ms. Blue, meet Mr. Green · 2018-03-01T21:41:49.522Z · LW · GW

Yeah, splitting it into two parts would have been better.

What exactly do you mean by "things that try very hard to break down the map/territory distinction"?

Comment by alexei (alexei.andreev) on Ms. Blue, meet Mr. Green · 2018-03-01T21:23:42.488Z · LW · GW

Good points. I agree that there are many ways to slice these practices. This goes along with habryka's point that I should have split this post into two parts, which I agree with as well.

Comment by alexei (alexei.andreev) on Ms. Blue, meet Mr. Green · 2018-03-01T19:48:46.419Z · LW · GW

I'd predict that most people teach mindfulness horribly wrong. I'd also predict that the way it's usually taught doesn't resonate with most people, and they end up not doing the thing. (This was true for me the first few times I encountered it. I also know people who've meditated for years and aren't much further along than when they started, because they're still not doing the thing.) I'd also predict that they didn't do it for long enough. (Conservatively, I'd say you need six months to see some results, but it depends on how many minutes a day you meditate.) And, yes, it's hard to measure internal clarity.

One experiment that might pick up on it, though: when my brother was in college, he participated in an experiment run by a PhD student. They'd flash letters on a screen very rapidly, changing about every 20 ms (I don't remember the exact number, but it was too fast to keep up with consciously), and you were supposed to count how many As and Bs appeared. Their hypothesis was that when you saw one of those letters, your mind would become occupied with counting it and your vision would temporarily shut off, so you'd miss another A or B that appeared right after. I think they did end up finding that effect. But what's interesting is that my brother scored 3 standard deviations above the mean. (At that time I think he had been meditating for at least a year.) I'd predict that other people who practice insight meditation would also perform well at this.
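For concreteness, here's a minimal sketch of that kind of rapid-letter task, under my own assumptions about the timing (the ~20 ms frames from above, a made-up ~200 ms "attention offline" window after each noticed target); it's an illustration of the hypothesis, not the actual study's code:

```python
import random

FRAME_MS = 20        # assumed frame duration; the comment says ~20 ms
BLINK_FRAMES = 10    # assumed "attention offline" window after a target (~200 ms)
TARGETS = {"A", "B"}

def run_trial(n_frames=200, seed=None):
    """Simulate one rapid-letter trial and one hypothetical participant."""
    rng = random.Random(seed)
    stream = [rng.choice("ABCDEFGHIJKLMNOPQRSTUVWXYZ") for _ in range(n_frames)]
    true_count = sum(letter in TARGETS for letter in stream)

    # Hypothetical participant: after noticing a target, attention is busy
    # for BLINK_FRAMES frames, so targets inside that window are missed.
    reported, blink_left = 0, 0
    for letter in stream:
        if blink_left > 0:
            blink_left -= 1
            continue
        if letter in TARGETS:
            reported += 1
            blink_left = BLINK_FRAMES
    return true_count, reported

print(run_trial(seed=0))  # (targets actually shown, targets the participant counted)
```

The gap between the true count and the reported count is the effect they were looking for; the prediction above amounts to experienced meditators showing a smaller gap.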

Comment by alexei (alexei.andreev) on Arbital postmortem · 2018-02-08T09:50:47.603Z · LW · GW

I sympathize. It's a giant and weird project, the likes of which the world has not seen in a while. If I wrote down how to implement just what we built so far, in enough detail that someone could read it and unambiguously translate it into the current product, I think the document would be around 200 pages. And what we implemented was maybe ~15% of the full vision Eliezer described in his document.

By the way, we followed Eliezer's vision directly for only 1.5 years. Then we took matters into our own hands and the design went in a different direction.

Turns out it's hard to get the broad details right too. It's basically hard on every level.

If it's not built to Eliezer's specification, then it doesn't have Eliezer's "magic touch". I think if you asked Eliezer, he would tell you that the feature you built (or the whole product) won't work as well, or at all.

Comment by alexei (alexei.andreev) on Arbital postmortem · 2018-02-07T02:23:53.548Z · LW · GW

No.

Comment by alexei (alexei.andreev) on Arbital postmortem · 2018-02-07T02:23:10.517Z · LW · GW

What habryka said. Basically, you're totally underestimating the complexity of the project and how granular and specific things get if you want to build them in a way Eliezer would approve of.

Comment by alexei (alexei.andreev) on Arbital postmortem · 2018-02-07T00:26:27.726Z · LW · GW

It's too easy for people to just recommend their best lawyer friends. I suppose if you really trust your friends not to recommend their lawyer friends just because of the relationship (a big if!) then you could take their advice.

Comment by alexei (alexei.andreev) on Arbital postmortem · 2018-02-07T00:21:35.992Z · LW · GW

Oh no, totally the same feelings. You get it. :)

However, since then I've gotten over that "should universe" and gone back to the "is universe", where this is just how people are. I won't be making that mistake twice. Sounds like we learned the same lesson. :)

Comment by alexei (alexei.andreev) on Arbital postmortem · 2018-02-07T00:18:41.130Z · LW · GW

Yes, it's possible we weren't ideal. On the other hand, sometimes you have to play the hand you're dealt. I don't think disagreement by itself is necessarily bad.

Comment by alexei (alexei.andreev) on Arbital postmortem · 2018-02-07T00:15:28.431Z · LW · GW

Advisor.

Comment by alexei (alexei.andreev) on Arbital postmortem · 2018-02-07T00:13:45.448Z · LW · GW

Yes.

Comment by alexei (alexei.andreev) on Arbital postmortem · 2018-02-07T00:13:34.726Z · LW · GW

Yes about the prestige. That was the realization I had in 2017.

Be careful: I think Arbital as an idea has evolved to be extremely sticky and "obviously good". The antidote is to find a problem that actually exists and people want solved (and ideally will pay for). And only then take parts of Arbital that might provide a solution.

Comment by alexei (alexei.andreev) on Arbital postmortem · 2018-02-06T23:59:45.277Z · LW · GW

Yes, there is a good case to be made that Eliezer's vision hasn't been fully tried. But I also think it's impossible to try it, because the exact design is locked inside Eliezer's head, so he would be the bottleneck. (That is, unless you found someone who thought like him and could be on the team full time. We tried to find a person like that, but couldn't.)

I maintain that someone doing their own project in this area would be a better bet. And they can take features / inspiration / overall direction from Arbital.

Comment by alexei (alexei.andreev) on Arbital postmortem · 2018-02-06T23:56:10.770Z · LW · GW

We did some work on the community design. That's what Eric Bruylant did part time (Slack channel, writing guidelines, etc.). He had worked with the Wikipedia community (and other forums) in the past, so he certainly had the right experience. But overall I agree with your sentiment.

Comment by alexei (alexei.andreev) on Arbital postmortem · 2018-02-06T23:51:05.303Z · LW · GW

Thanks for the comment. Yeah, we definitely planned to make a lot of money. But I think the steps from what we were building to where we would be making money were too indirect / too far.

Comment by alexei (alexei.andreev) on Arbital postmortem · 2018-02-06T23:25:08.521Z · LW · GW

Sounds correct to me. Also, I think it's much easier to start or contribute to a page on Wikipedia. Arbital's pages were trying to be educational and readable, which, I think, is a higher bar.

Comment by alexei (alexei.andreev) on Crypto autopsy reply · 2018-02-06T22:48:40.947Z · LW · GW

To clarify, I'm not arguing that Tezos is shiny right now. It was shiny during the ICO, and it's possible I wouldn't buy it now.

But OCaml was certainly one of the big factors that leapt out at me. It signalled that they were serious about writing code that could be proven correct. (This is in contrast to Ethereum's Solidity.) Basically: 1) I looked at what they were promising to do, and it seemed to have enough additional good things beyond Ethereum; 2) nobody I saw was disputing their tech, so I didn't have to understand it in detail, just enough to verify that it seemed innovative; and 3) they had a credible team (this was before the conflict). To me that was sufficient at the time, given that much, much shittier ICOs were making money hand over fist.

Comment by alexei (alexei.andreev) on Crypto autopsy reply · 2018-02-06T16:05:07.736Z · LW · GW

The cost is actually not that high. I spent maybe 5-10 hours researching Bitcoin (about 5 hours before I invested; 10 hours total). There aren't many things one can invest 5-10 hours into and instantly make money. In fact, none of your examples come even close. Those are huge fields you need to sink a ton of hours into before you can start reaping rewards. And none of them have direct monetary rewards like crypto does.

> I googled Tezos for a while and still do not have any idea why this has a strong "signal".

Ok, then maybe I'm overestimating how easy it is to look at crypto coins and have a rough guess at how good they are. It's also possible I'm overestimating my own skill at it; it's not like I have that many data points yet. (Not knowing you very well, I won't make hypotheses about you.)

> I believe you're advocating investing some money in Tezos since smart people think it's cool and skip the investigation step.

Absolutely not. I'm advocating reading the thinking and research that smart people did and following their pointers, but then also checking for sanity. Sanity checks are usually pretty easy to do, but if you can't do them, then this strategy just won't work.

Comment by alexei (alexei.andreev) on Crypto autopsy reply · 2018-02-06T15:37:56.100Z · LW · GW

It was around $4 when I wrote the FB post literally two days ago. :D

Comment by alexei (alexei.andreev) on Arbital postmortem · 2018-01-31T13:06:12.307Z · LW · GW

If I heard correctly that the AF forum is moving to LW 2.0, you'll have to solve the math blogging problem. ;) And with the current features you're already 50% there. (Assuming they're working well, which right now doesn't quite seem to be the case.)

Comment by alexei (alexei.andreev) on Arbital postmortem · 2018-01-31T13:02:36.257Z · LW · GW

Thanks for the response. After reading it, it's now even clearer to what extent collaborative explanations are just not a thing that can easily work.

Comment by alexei (alexei.andreev) on A LessWrong Crypto Autopsy · 2018-01-31T10:05:25.162Z · LW · GW

FWIW, I ran into the same issue with Arbital, and very quickly decided to change it to $$. Otherwise, any time you're writing a post about money, it's super inconvenient.
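To illustrate the annoyance (a toy example, not Arbital's actual parser): with single-dollar math delimiters, ordinary dollar amounts in a post about money pair up into bogus "math" spans, while $$...$$ only triggers on deliberate formulas.

```python
import re

post = "We raised $300k, spent $50k, and kept $10k in reserve."

# Single-dollar delimiters: the first two dollar signs pair up into a bogus "math" span.
print(re.findall(r"\$(.+?)\$", post))      # ['300k, spent ']

# Double-dollar delimiters: nothing in this sentence is mistaken for math.
print(re.findall(r"\$\$(.+?)\$\$", post))  # []
```

Real markdown-plus-math pipelines are more careful than a bare regex, but the underlying tradeoff is the same.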

Comment by alexei (alexei.andreev) on Arbital postmortem · 2018-01-30T23:59:10.579Z · LW · GW

> Of course it needs a good admin supporting it

Yup, that's a nonstarter for most casual bloggers.

Comment by alexei (alexei.andreev) on Arbital postmortem · 2018-01-30T23:56:58.474Z · LW · GW

Huh!! 2015, no less. I'll check them out.

Comment by alexei (alexei.andreev) on Arbital postmortem · 2018-01-30T23:52:26.880Z · LW · GW

Fun fact: originally Eliezer called the project Zanaduu (a play on Xanadu-doomed).

I'll bet that parts of Arbital will show up across various products (and I've already seen some), but I would be very, very surprised if we get something with the entire package in the next 5 years.

Comment by alexei (alexei.andreev) on Arbital postmortem · 2018-01-30T23:49:08.445Z · LW · GW
> ... but if they did, they were not assertive enough in applying it.
> ... experience / domain knowledge are somewhat underrated in the community compared to generic rationality skills

Yes to both.

And yes, I'd love to see LW 2.0 execute my plan and become a social network. (They already did the first few steps; just instead of math, they did rationality.)

Comment by alexei (alexei.andreev) on Arbital postmortem · 2018-01-30T23:46:00.266Z · LW · GW

The two features I miss most are greenlinks (hover over a link to see a summary) and claims (vote with probability / agreement).

But I think this question should be answered by the LW community's needs.

Well, trying to build a system that will dynamically link pages together to form a sequence based on requisites would be hard. But I think basically all other features are very modular.
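As a rough illustration of the requisite idea (hypothetical page names and a plain topological sort, not Arbital's actual implementation): each page lists the pages a reader should understand first, and any topological order of that graph is a valid reading sequence. The genuinely hard part, which this sketch ignores, is picking and ordering that subgraph dynamically for each reader.

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Hypothetical requisite graph: page -> pages the reader should know first.
requisites = {
    "probability_basics": set(),
    "bayes_rule": {"probability_basics"},
    "log_odds": {"bayes_rule"},
    "bayes_for_doctors": {"bayes_rule"},
}

# Any topological order of the requisite graph is a valid reading sequence.
sequence = list(TopologicalSorter(requisites).static_order())
print(sequence)  # e.g. ['probability_basics', 'bayes_rule', 'log_odds', 'bayes_for_doctors']
```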

Comment by alexei (alexei.andreev) on Arbital postmortem · 2018-01-30T17:52:26.179Z · LW · GW

Yes, there are components one can put together to make it all work well. But there is nothing as simple and as good-looking as Medium.

Comment by alexei (alexei.andreev) on Arbital postmortem · 2018-01-30T15:32:48.550Z · LW · GW

Which version of the product are you talking about, specifically?

Also, part of the reasoning was that if we had a functioning product, we could try many things with it. (In practice, we only got to try a few.)

Comment by alexei (alexei.andreev) on Announcement: AI alignment prize winners and next round · 2018-01-28T23:56:08.852Z · LW · GW

I'm curious whether these papers / blog posts would have been written at some point anyway, or whether they happened because of the call to action. And to what extent was the prize money a motivator?

Comment by alexei (alexei.andreev) on A Day in Utopia · 2017-11-24T20:35:32.175Z · LW · GW

Sounds beautiful, thank you for sharing!

Comment by alexei.andreev on [deleted post] 2017-11-10T00:38:41.529Z

Link please.

Comment by alexei.andreev on [deleted post] 2017-10-27T18:43:23.987Z

> You're about to flip one now.

Now *that's* how you end a post & a sequence! Well done.

Comment by alexei.andreev on [deleted post] 2017-10-22T04:52:33.204Z

This is kind of like Eliezer's 12th virtue of rationality (the void) taking a human shape.

Comment by alexei (alexei.andreev) on Placing Yourself as an Instance of a Class · 2017-07-26T03:34:26.833Z · LW · GW

Playing poker at higher levels actually requires one to practice this skill a lot.

Comment by alexei (alexei.andreev) on Epistemic Spot Check: A Guide To Better Movement (Todd Hargrove) · 2017-07-02T05:35:20.255Z · LW · GW

Thanks. Sorry about the lost comments. :(

Comment by alexei (alexei.andreev) on Epistemic Spot Check: A Guide To Better Movement (Todd Hargrove) · 2017-07-01T20:32:37.523Z · LW · GW

Nice, thanks for writing this up.

> The prediction is that improving the quality of processing via the principles explained in the book can reduce pain and increase your physical capabilities.

Is there a summary of the principles somewhere?

Comment by alexei (alexei.andreev) on [Classifieds] What are you doing to make the world a better place and how can we help? · 2017-06-27T02:38:48.076Z · LW · GW

Arbital 2.0

Blogging / social media platform.

Initially: 1) make math blogging much nicer, and 2) help people connect over similar interests. Eventually: change the shape of social media and the information flowing through it.

If you know math bloggers, I'd appreciate a referral so I could tell them about the platform and see what their needs are.

Comment by alexei (alexei.andreev) on Pair Debug to Understand, not Fix · 2017-06-27T02:29:51.493Z · LW · GW

Also, asking "how do you feel about that?" helps, although it might come off as a bit psychoanalytical if asked repeatedly and directly.