Comment by alexei on A Strange Situation · 2019-02-18T21:51:54.052Z · score: 4 (3 votes) · LW · GW

Why do you think you should be reading / learning more vs going and doing / making something?

Comment by alexei on Limiting an AGI's Context Temporally · 2019-02-17T21:51:32.203Z · score: 12 (4 votes) · LW · GW

Seems like it would throw a brick at you, because it wanted to throw a brick, not caring that in 2 seconds it’ll hit your face. (You can probably come up with a better example with a slightly longer timeframe.)

Comment by alexei on The RAIN Framework for Informational Effectiveness · 2019-02-13T23:48:56.217Z · score: 2 (1 votes) · LW · GW

RAIN is easiest and most memorable.

Comment by alexei on When should we expect the education bubble to pop? How can we short it? · 2019-02-12T20:12:50.495Z · score: 5 (3 votes) · LW · GW

I learned recently that some states used to offer an equivalent of "forever stamps" for education. Meaning you pay $X at any time, and that locks in your payment for a state university in the future. Obviously, they discontinued it, since the costs rose and they lost money. But if you wanted to short *education cost*, you'd basically want to sell these guarantees yourself.
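
A minimal sketch of the payoff, with made-up numbers and ignoring the time value of money (selling the guarantee means collecting today's price and owing whatever tuition costs at redemption):

```python
# Hypothetical illustration: selling a tuition guarantee is effectively a
# short position on education costs. All numbers are made up.

price_charged_today = 12_000    # what you sell the guarantee for now
tuition_at_redemption = 9_000   # actual tuition when redeemed (unknown in advance)

profit = price_charged_today - tuition_at_redemption
print(profit)  # 3000: positive only if tuition ends up below the price you charged
```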

Comment by alexei on Probability space has 2 metrics · 2019-02-10T07:47:23.006Z · score: 4 (4 votes) · LW · GW

I don’t think I’ve read this view before, or if I have, I’ve forgotten it. Thanks for writing this up!

Comment by alexei on The Case for a Bigger Audience · 2019-02-10T07:39:59.366Z · score: 13 (7 votes) · LW · GW

I’ve been reading basically every post for the past few months. I don’t usually leave comments though, unless it’s to support and thank the author. (Thanks for writing this! Funny enough I also noticed recently how few comments there are, and it seemed worth bringing up.) I guess I feel like I just don’t have much to add to most posts.

Comment by alexei on The Question Of Perception · 2019-01-30T00:48:13.535Z · score: 2 (1 votes) · LW · GW

“view them as a set of open-ended concepts that have yet to reach a satisfactory conclusion” is a good description of how I feel about this post.

Comment by alexei on Río Grande: judgment calls · 2019-01-27T04:24:43.674Z · score: 5 (3 votes) · LW · GW

I call it “making an executive decision.” And I used that term before getting into startups.

Link: That Time a Guy Tried to Build a Utopia for Mice and it all Went to Hell

2019-01-23T06:27:05.219Z · score: 15 (6 votes)
Comment by alexei on Some Thoughts on My Psychiatry Practice · 2019-01-17T06:00:35.763Z · score: 15 (9 votes) · LW · GW

Thanks for writing your thoughts out loud. Curious to follow your progress.

Comment by alexei on Optimizing for Stories (vs Optimizing Reality) · 2019-01-07T22:23:01.879Z · score: 4 (2 votes) · LW · GW

“Of course, truly effective tea plus a well-conveyed story about its great properties will generate more sales than effective tea or a good story alone.”

Sometimes not, I think. It's almost like a measure of the efficiency / effectiveness of the given market. If the market is really good at recognizing reality, then you don't need to tell a story. (Basic software libraries are like that: do they compute the right thing? If yes, then it's good.) If the market is not good at recognizing reality, then creating stories is often way cheaper than doing the real thing. (And stories also transfer better across domains.)

Comment by alexei on The E-Coli Test for AI Alignment · 2018-12-17T15:45:54.273Z · score: 2 (1 votes) · LW · GW

Is there anything in the world that we know of that does alignment for something else? Can we say that humans are doing "coherent extrapolated volition" for evolution? Keeping in mind that under this view, evolution itself would evolve and change into something more complex and maybe better.

Comment by alexei on The E-Coli Test for AI Alignment · 2018-12-17T15:41:48.376Z · score: 2 (1 votes) · LW · GW

I think there's a bias, when we consider optimizing for X's values, toward considering only X without its environment. But the environment gave rise to X, and much of X doesn't make sense without it. So I think to some extent we would also need to find ways to do alignment on the environment itself. And that means, to some extent, helping evolution.

Comment by alexei on The E-Coli Test for AI Alignment · 2018-12-17T15:36:45.546Z · score: 4 (3 votes) · LW · GW

First off, great thought experiment! I like it, and it was a nice way to view the problem.

The most obvious answer is: “Wow, we sure don’t know how to help. Let’s design a smarter intelligence that’ll know how to help better.”

At that point I think we're running the risk of passing the buck forever. (Unless we can prove that the process terminates.) So we should probably do at least something. Instead of trying to optimize, I'd focus on doing things that are most obvious. Like helping it not to die. And making sure it has food.

Comment by alexei on Argue Politics* With Your Best Friends · 2018-12-16T01:30:21.457Z · score: 9 (5 votes) · LW · GW

Just wanted to say I’ve been reading all your recent posts. And I really like all of them, the ideas you lay out, and how you talk about them. You really make LW worth coming to almost every day. Thank you! FWIW, if you continue down this path, I could see you having your own SSC-sized community.

Comment by alexei on Review: Slay the Spire · 2018-12-10T06:03:59.540Z · score: 4 (2 votes) · LW · GW

I love that game too. Thanks for the write-up, it's been fun to read your take on it. I haven't played it recently, so didn't know they added the ending. I'll come back and play it again.

Comment by alexei on What's up with Arbital? · 2017-08-10T19:19:56.537Z · score: 3 (1 votes) · LW · GW

This will take a long time to load, but it's comprehensive: https://arbital.com/explore/math/

Comment by alexei on What's up with Arbital? · 2017-04-13T20:58:03.116Z · score: 3 (1 votes) · LW · GW

I'm not sure when you tried. It works right now.

Comment by alexei on What's up with Arbital? · 2017-04-13T20:56:48.530Z · score: 3 (1 votes) · LW · GW

Hmm, I'm skeptical a barter system would work. I don't think I've seen a successful implementation of it anywhere, though I do hear about people trying.

Yes, we've considered paying people, but that's not scalable. (A good 3-5 page explanation might take 10 hours to write.)

Comment by alexei on What's up with Arbital? · 2017-04-10T23:09:43.374Z · score: 3 (1 votes) · LW · GW

It's not open sourced.

The pages might take a while to load (up to 30 seconds).

Comment by alexei on What's up with Arbital? · 2017-04-03T20:20:28.772Z · score: 3 (1 votes) · LW · GW

“I can imagine a neural-activation-like effect coming out of that, where frequently co-active posts naturally rise to the top of each other's links and become threads or topics.”

Not sure what you mean by this.

Comment by alexei on What's up with Arbital? · 2017-04-01T17:06:32.277Z · score: 5 (4 votes) · LW · GW

My guess for Wikipedia's success is that they were one of the first, and there was more of a sense of an online community back then. Also, it's easier to create Wikipedia content than, say, a good explanation. StackOverflow succeeded because asking and answering questions is pretty easy, you get instant feedback, and they got community management right. (They solved exactly one problem well!) The founders were also really well known, so it was easy for them to seed the platform.

I can't open-source the platform as long as I'm doing the for-profit venture, since the platforms are too similar. However, if at some point I have to stop, then I'll be happy to open source everything at that point.

Comment by alexei on What's up with Arbital? · 2017-04-01T17:01:40.983Z · score: 3 (1 votes) · LW · GW

I'm not into persuading people. :) If you want to write, go for it. I still think Arbital is a really good platform for writing up math explanations.

Comment by alexei on What's up with Arbital? · 2017-03-30T16:48:15.776Z · score: 3 (1 votes) · LW · GW

Not someone with sufficient authority, just the blog owner. That seems fair though. You can create your own blog, and then you would be in charge of which comments to approve.

Comment by alexei on What's up with Arbital? · 2017-03-30T16:20:00.003Z · score: 3 (1 votes) · LW · GW

Yes, but that's not "invite-only".

Comment by alexei on What's up with Arbital? · 2017-03-30T15:29:46.778Z · score: 4 (2 votes) · LW · GW

How is it invite only? Are you talking about the comment section?

Originally the plan was to do exactly that if we couldn't figure out how to build a "joyful maze": just throw open the doors and see what people do with it. Unfortunately there is still a significant amount of work left to do that well, and right now I'm more optimistic about the new platform than I am about scavenging the current version.

Comment by alexei on What's up with Arbital? · 2017-03-30T15:20:44.189Z · score: 1 (5 votes) · LW · GW

Here are like 5 pages explaining all the visions: https://arbital.com/p/more_about_arbital/

Basically what we tried is: "let's figure out how people are supposed to have truth-seeking conversations, build a platform that facilitates that, and then grow it." Step 1 is very hard. Step 3 is made harder because your platform only attracts truth-seeking people.

New approach: "build a platform that facilitates communication, grow it, then shape the ongoing discussion to be more truth-seeking." Step 1 is still hard, but not made harder. Step 3 sounds a lot more doable.

Comment by alexei on What's up with Arbital? · 2017-03-30T15:15:55.467Z · score: 6 (6 votes) · LW · GW

Don't use the outside view. Use your brain. If Arbital was confusing, but you didn't look closer at it, then you didn't use your brain. If MIRI seems confusing and you don't look closer at it, then you aren't using your brain. The whole concept of "smart people" whom you can trust with anything is just wrong. There are only niche experts.

My two cents: Arbital had very little to do with MIRI, aside from Arbital being Eliezer's idea. But this was definitely out of his realm of expertise. MIRI/AI stuff is not.

Comment by alexei on What's up with Arbital? · 2017-03-30T15:09:05.739Z · score: -2 (2 votes) · LW · GW

It's a complicated answer and also somewhat outside of the topic I want to discuss here (which is Arbital 1.0). For part of the answer see: http://lesswrong.com/lw/otq/whats_up_with_arbital/dqbc

Comment by alexei on What's up with Arbital? · 2017-03-30T15:07:04.730Z · score: 3 (1 votes) · LW · GW
  1. Currently it's not clear to anyone what Arbital is, what it can do, who it's for, etc. It needs to solve a real problem and present itself as solving that clear problem.
  2. The tech we used is now somewhat obsolete. The codebase has accumulated a lot of unnecessary features. Also, Google's Material UI turned out to be too heavyweight and not as pleasant to design with as I thought initially. (These are all arguments for remaking the platform.)
  3. The blogging platform will be "as open and as inviting of contribution as possible."

Comment by alexei on What's up with Arbital? · 2017-03-30T14:55:42.162Z · score: 4 (2 votes) · LW · GW

Yes. There is already a pretty large demand for blogging platforms, and Arbital 2.0 will have features which will make it a much better option for some users. I'll also be personally reaching out to a lot of bloggers to interview them about their experience / wishes. I'll also be testing the key value propositions with a graphic that's being created right now.

But also sometimes you just have to go ahead and build the thing to really test it.

Comment by alexei on What's up with Arbital? · 2017-03-30T14:49:36.188Z · score: 7 (6 votes) · LW · GW

Yes, many students would benefit from a math explanation platform. But it was hard for us to find writers, and we weren't getting as much traction with them as we wanted. We reached out to some forums and to many individuals. That version of Arbital was also promoted by Eliezer on FB. When we switched away from math, it wasn't because we thought it was hopeless. We had a lot of ideas left to try out. But when it's not going well, you have to call it quits at some point, and so we did. There was also the consideration that if we built a platform for (math) explanations, it would be hard to eventually transition to a platform that solved debates (which always seemed like the more important part).

I think if someone wanted to give it a shot with another explanation platform and had a good strategy for getting writers, I'd feel pretty optimistic about their chance of success.

Comment by alexei on What's up with Arbital? · 2017-03-30T03:42:43.274Z · score: 4 (2 votes) · LW · GW

See this comment: http://lesswrong.com/lw/otq/whats_up_with_arbital/dq9h

I think we likely made a mistake with respect to openness, but it's not obvious when/how. Probably the biggest problem is that we couldn't settle on what we wanted the users to do once they were on the platform.

Comment by alexei on What's up with Arbital? · 2017-03-30T03:35:58.904Z · score: -2 (2 votes) · LW · GW

Noted, but I disagree.

Comment by alexei on What's up with Arbital? · 2017-03-30T01:29:27.480Z · score: 4 (2 votes) · LW · GW

See my reply to gjm: http://lesswrong.com/lw/otq/whats_up_with_arbital/dqa0?context=3

Comment by alexei on What's up with Arbital? · 2017-03-30T01:28:16.778Z · score: 1 (5 votes) · LW · GW

That's step 1. Steps 2 and after involve slowly converging towards the original Arbital vision. I just don't think you can get there without mass adoption.

Comment by alexei on What's up with Arbital? · 2017-03-29T23:16:33.642Z · score: 3 (3 votes) · LW · GW

It's a blogging platform, it's done by me with some support from Eliezer, and I'm doing it because it will help with x-risk. This is essentially identical to what we had in 2015.

Comment by alexei on What's up with Arbital? · 2017-03-29T23:04:24.062Z · score: 5 (3 votes) · LW · GW

We don't have Arbital history anywhere, although I guess there is the blog, which captures a fraction of it.

"The Stacks Project, but for everything" was a decent description for part of our first approach. However, it's relatively easy for people to answer questions: doesn't take much time and you get instant credit. It's much harder to get people to write wiki pages / explanations: it takes a long time and you don't get that much credit.

Comment by alexei on What's up with Arbital? · 2017-03-29T23:00:03.150Z · score: 7 (5 votes) · LW · GW

That's one approach we ruled out pretty much from the start, because that kind of structure is hard to read and laborious to create and maintain. However, that mechanic on the blog level makes sense, and that's basically how debates work right now in the wild.

Our main approach was creating "claims". Blogs would reuse claims and the discussion around each claim. I'd say that part was actually moderately successful.
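
A rough sketch of what that claims mechanic amounts to (illustrative names only, not Arbital's actual schema):

```python
from dataclasses import dataclass, field

# Illustrative sketch, not Arbital's real data model: a claim is a standalone
# statement with one shared discussion, which any number of blog posts can
# embed by reference.

@dataclass
class Comment:
    author: str
    text: str

@dataclass
class Claim:
    claim_id: str
    statement: str
    comments: list[Comment] = field(default_factory=list)  # one discussion, shared by all posts

@dataclass
class BlogPost:
    title: str
    body: str
    claim_ids: list[str] = field(default_factory=list)  # claims embedded by reference

claims: dict[str, Claim] = {}
c = Claim("c1", "Online debates converge faster when claims are made explicit.")
claims[c.claim_id] = c

# Two different posts reuse the same claim, and therefore the same discussion.
post_a = BlogPost("Post A", "...", claim_ids=["c1"])
post_b = BlogPost("Post B", "...", claim_ids=["c1"])
claims["c1"].comments.append(Comment("reader", "I doubt this."))
```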

One idea we played around with but didn't get to implement was allowing comments to easily leverage double-crux structure.

Comment by alexei on What's up with Arbital? · 2017-03-29T20:52:38.299Z · score: 5 (3 votes) · LW · GW

I'm thinking of it as a completely new, unrelated platform. Whether or not it will live at arbital.com and be called Arbital is not yet decided, though currently I'm leaning towards yes. (But if, for example, some people are using the old Arbital, then it would probably be easier for me to put the new platform under a different domain & name.)

Comment by alexei on What's up with Arbital? · 2017-03-29T20:49:19.279Z · score: 5 (3 votes) · LW · GW

“How about writing a "top 10 posts on Arbital" post for LW discussion? That way it's easier for people to see discussions to which they might want to contribute.”

I think that's better done by people who are actively writing and want to invite commenters.

“That's a valid strategy, but if that's what you did, why do you think your experience proves that it's hard to get people to contribute to the discussion?”

Because we reached out to people who were pretty excited about the platform and who were already spending a lot of their time blogging / doing discussions. I imagine if we reached out to people who are less excited / who write less, we would have gotten even less of a response.

Comment by alexei on What's up with Arbital? · 2017-03-29T20:44:01.013Z · score: 3 (1 votes) · LW · GW

Correct. We (somewhat prematurely) worried about trolls, so by default people can only propose comments. And it would be up to Toon to approve them. (If there is sufficient demand, I can add a feature to let users have open commenting. But in general adding features to old Arbital is not high on my priority list.)

Comment by alexei on What's up with Arbital? · 2017-03-29T20:25:07.111Z · score: 4 (4 votes) · LW · GW

Why?

Comment by alexei on What's up with Arbital? · 2017-03-29T20:23:51.801Z · score: 7 (5 votes) · LW · GW

That's a very good point. When we were doing math explanations, we did reach out to a lot of people (just not via LW). When we were doing debates, we reached out to a few people, because we didn't quite know what shape we wanted the debate to take. So we didn't need that many people. (It would be a bit silly to move a community from one platform to another that's basically the same.)

So, yes, there were multiple times where we thought that we should invite more people / throw open the doors. Some of those times we postponed it because we weren't ready; one of the other times we probably should have done it.

You can think of this post as an invitation to use the platform.

Comment by alexei on What's up with Arbital? · 2017-03-29T19:52:59.080Z · score: 8 (6 votes) · LW · GW

Eliezer said he wanted all of those features. (And he is using basically all of them.) But also we worked on it for 2 years, so a lot of features accumulated as we were trying different approaches.

Comment by alexei on What's up with Arbital? · 2017-03-29T18:41:37.396Z · score: 8 (6 votes) · LW · GW

See MIRI's recent post under "Going Forward": https://intelligence.org/2017/03/28/2016-in-review/

  1. AGI alignment overviews: “Eliezer Yudkowsky and I will be splitting our time between working on these problems and doing expository writing. Eliezer is writing about alignment theory, while I’ll be writing about MIRI strategy and forecasting questions.”

Basically, explaining why people still don't get AI safety is a very important task, and Eliezer is particularly well suited for it.

Comment by alexei on What's up with Arbital? · 2017-03-29T18:37:49.760Z · score: 6 (4 votes) · LW · GW

Hmm, I had that feeling too, but wasn't sure what else to add. I'm happy to answer specific/vague questions.

Comment by alexei on What's up with Arbital? · 2017-03-29T18:37:07.843Z · score: 4 (2 votes) · LW · GW

See my reply to michaelkeenan: http://lesswrong.com/r/discussion/lw/otq/whats_up_with_arbital/dq91

Comment by alexei on What's up with Arbital? · 2017-03-29T18:36:44.021Z · score: 14 (13 votes) · LW · GW

Here is my personal take on why it's complicated:

When you ask someone if they would like a debate platform and describe all the features and content it'll have, they go: "Hell yeah I'd love that!" And it took me a while to realize that what they are imagining is someone else writing all the content and doing all the heavy lifting. Then they would come along, read some of it, and maybe leave a comment or two. And basically everyone is like that: they want it, but they are not willing to put in the work. And I don't blame them, because I'm not willing to put in the work (of writing) either. There are just a handful of people who are.

So the problem is definitely not on the technical side. It's a problem with the community / society in general. Except I'm hesitant to even call it a "problem," because that feels like calling gravity a "problem." This is just the way humans are. They want to do things they want to do.

What's up with Arbital?

2017-03-29T17:22:21.751Z · score: 24 (27 votes)
Comment by alexei on "Flinching away from truth” is often about *protecting* the epistemology · 2016-12-20T04:48:14.965Z · score: 3 (3 votes) · LW · GW

I basically just try to do the "obvious" thing: when I notice I'm averse to taking in "accurate" information, I ask myself what would be bad about taking in that information.

Interestingly enough this is a common step in Connection Theory charting.

Comment by alexei on Circles of discussion · 2016-12-16T23:20:46.335Z · score: 1 (1 votes) · LW · GW

This is a really, really good comment. Check out https://arbital.com and let me know what you think. :)

Toy problem: increase production or use production?

2014-07-05T20:58:48.962Z · score: 4 (5 votes)

Quantum Decisions

2014-05-12T21:49:11.133Z · score: 1 (6 votes)

Personal examples of semantic stopsigns

2013-12-06T02:12:01.708Z · score: 44 (49 votes)

Maximizing Your Donations via a Job

2013-05-05T23:19:05.116Z · score: 114 (116 votes)

Low hanging fruit: analyzing your nutrition

2012-05-05T05:20:14.372Z · score: 7 (8 votes)

Robot Programmed To Love Goes Too Far (link)

2012-04-28T01:21:45.465Z · score: -5 (12 votes)

I'm starting a game company and looking for a co-founder.

2012-03-18T00:07:01.670Z · score: 16 (23 votes)

Water Fluoridation

2012-02-17T04:33:00.064Z · score: 1 (9 votes)

What happens when your beliefs fully propagate

2012-02-14T07:53:25.005Z · score: 22 (50 votes)

Rationality and Video Games

2011-09-18T19:26:01.716Z · score: 6 (11 votes)

Credit card that donates to SIAI.

2011-07-22T18:30:35.207Z · score: 5 (8 votes)

Futurama does an episode on nano-technology.

2011-06-27T02:44:14.496Z · score: 3 (6 votes)

Considering all scenarios when using Bayes' theorem.

2011-06-20T18:11:34.810Z · score: 9 (10 votes)

Discussion for Eliezer Yudkowsky's paper: Timeless Decision Theory

2011-01-06T00:28:29.202Z · score: 10 (11 votes)

Life-tracking application for android

2010-12-11T01:48:11.676Z · score: 20 (21 votes)