Posts

MIRI 2024 Mission and Strategy Update 2024-01-05T00:20:54.169Z
MIRI’s 2019 Fundraiser 2019-12-03T01:16:48.652Z
MIRI’s 2018 Fundraiser 2018-11-27T05:30:11.931Z
MIRI's 2017 Fundraiser 2017-12-07T21:47:35.235Z
MIRI's 2017 Fundraiser 2017-12-01T13:45:32.975Z
SI is coming to Oxford, looking for hosts, trying to keep costs down 2012-11-08T15:40:52.935Z
SI is looking to hire someone to finish a Decision Theory FAQ 2012-09-02T04:48:08.553Z
SI/CFAR Are Looking for Contract Web Developers 2012-08-06T22:47:43.332Z
[Applications Closed] The Singularity Institute is hiring remote LaTeX editors 2012-07-29T00:27:26.381Z
Article about LW: Faith, Hope, and Singularity: Entering the Matrix with New York’s Futurist Set 2012-07-25T19:28:25.901Z

Comments

Comment by Malo (malo) on Claude 3.5 Sonnet · 2024-06-20T20:10:16.973Z · LW · GW

Agree. I think Google DeepMind might actually be the most forthcoming about this kind of thing, e.g., see their Evaluating Frontier Models for Dangerous Capabilities report.

Comment by Malo (malo) on LessWrong's (first) album: I Have Been A Good Bing · 2024-04-01T22:00:25.142Z · LW · GW

Apple Music?

Comment by Malo (malo) on MIRI 2024 Mission and Strategy Update · 2024-02-14T22:36:57.169Z · LW · GW

I’d certainly be interested in hearing about them, though it currently seems pretty unlikely to me that it would make sense for MIRI to pivot to working on such things directly, as opposed to encouraging others to do so (to the extent they agree with Nate/EY's view here).

Comment by Malo (malo) on MIRI 2024 Mission and Strategy Update · 2024-02-14T22:24:44.454Z · LW · GW

I think this is a great comment, and FWIW I agree with, or am at least sympathetic to, most of it.

Comment by Malo (malo) on More on the Apple Vision Pro · 2024-02-14T01:17:34.482Z · LW · GW

If you are on an airplane or a train, and you can suddenly work or watch on a real theater screen, that would be a big game. Travel enough and it is well worth paying for that, or it could even enable more travel.

Ben Thompson agrees in a followup (paywalled):

Vision Pro on an Airplane

I tweeted about this, but I think it’s worth including in the Update as a follow-up to last week’s review of the Vision Pro: I used the Vision Pro on an airplane over the weekend, sitting in economy, and it was absolutely incredible. I called it “life-changing” on Twitter, and I don’t think I was being hyperbolic, at least for this specific scenario:

  • The movie watching experience was utterly immersive. When you go into the Apple TV+ or Disney+ theaters, with noise-canceling turned on, you really are transported to a different place entirely.
  • The Mac projection experience was an even bigger deal: my 16″ MacBook Pro is basically unusable in economy, and a 14″ requires being all scrunched up with bad posture to see anything. In this case, though, I could have the lid actually folded towards me (if, say, the person in front of me reclined), while still having a big 4K screen to work on. The Wifi on this flight was particularly good, so I had a basketball game streaming to the side while I worked on the Mac; it was really extraordinary.
  • I mentioned the privacy of using a headset in my review, and that really came through clearly in this use case. It was really freeing to basically be “spread out” as far as my computing and entertainment went and to feel good about the fact I wasn’t bothering anyone else and that no one could see my screen.

Comment by Malo (malo) on More on the Apple Vision Pro · 2024-02-14T00:54:07.452Z · LW · GW

There is no sign that anyone plans to actually offer MLB or other games in this mode.

NBA seems somewhat bullish.

Comment by Malo (malo) on We're Not Ready: thoughts on "pausing" and responsible scaling policies · 2023-10-28T02:07:40.690Z · LW · GW

That may be right but then the claim is wrong. The true claim would be "RSPs seem like a robustly good compromise with people who are more optimistic than me".

IDK man, this seems like nitpicking to me ¯\_(ツ)_/¯. Though I do agree that, on my read, it’s technically more accurate.

My sense here is that Holden is speaking from a place where he considers himself to be among the folks (like you and me) who put significant probability on AI posing a catastrophic/existential risk in the next few years, and “people who have different views from mine” is referring to folks who aren’t in that set.

(Of course, I don’t actually know what Holden meant. This is just what seemed like the natural interpretation to me.) 

And then the claim becomes not really relevant?

Why?

Comment by Malo (malo) on We're Not Ready: thoughts on "pausing" and responsible scaling policies · 2023-10-27T21:44:20.069Z · LW · GW

Responsible scaling policies (RSPs) seem like a robustly good compromise with people who have different views from mine

2. It seems like it's empirically wrong based on the strong pushback RSPs received so that at least you shouldn't call it "robustly", unless you mean a kind of modified version that would accommodate the most important parts of the pushback. 

FWIW, my read here was that “people who have different views from mine” was in reference to these sets of people:

  • Some people think that the kinds of risks I’m worried about are far off, farfetched or ridiculous.
  • Some people think such risks might be real and soon, but that we’ll make enough progress on security, alignment, etc. to handle the risks - and indeed, that further scaling is an important enabler of this progress (e.g., a lot of alignment research will work better with more advanced systems).
  • Some people think the risks are real and soon, but might be relatively small, and that it’s therefore more important to focus on things like the U.S. staying ahead of other countries on AI progress.
Comment by Malo (malo) on Announcing MIRI’s new CEO and leadership team · 2023-10-20T16:13:04.302Z · LW · GW

Our reserves increased substantially in 2021 due to a couple of large crypto donations

At the moment we've got ~$20M.

Comment by Malo (malo) on Announcing MIRI’s new CEO and leadership team · 2023-10-17T01:04:50.702Z · LW · GW

FWIW, I approached Gretta about starting to help out with comms related stuff at MIRI, i.e., it wasn't Eliezer's idea.

Comment by Malo (malo) on Announcing MIRI’s new CEO and leadership team · 2023-10-17T00:40:08.658Z · LW · GW

Interesting, I don't think I knew about this post until I clicked on the link in your comment.

Comment by Malo (malo) on Announcing MIRI’s new CEO and leadership team · 2023-10-13T23:09:06.231Z · LW · GW

Quickly chiming in to add that I can imagine there might be some research we could do that could be more instrumentally useful to comms/policy objectives. Unclear whether it makes sense for us to do anything like that, but it's something I'm tracking.

Comment by Malo (malo) on Announcing MIRI’s new CEO and leadership team · 2023-10-13T21:00:21.885Z · LW · GW
  1. Given Nate's comment: "This change is in large part an enshrinement of the status quo. Malo’s been doing a fine job running MIRI day-to-day for many many years (including feats like acquiring a rural residence for all staff who wanted to avoid cities during COVID, and getting that venue running smoothly). In recent years, morale has been low and I, at least, haven’t seen many hopeful paths before us." (Bold emphases are mine). Do you see the first bold sentence as being in conflict with the second, at all? If morale is low, why do you see that as an indicator that the status quo should remain in place?

A few things seem relevant here when it comes to morale:

  • I think, on average, folks at MIRI are pretty pessimistic about humanity's chances of avoiding AI x-risk, and overall I think the situation has felt increasingly dire over the past few years to most of us.
  • Nate and Eliezer lost hope in the research directions they were most optimistic about, and haven’t found any new angles of attack in the research space that they have much hope in.
  • Nate and Eliezer very much wear their despair on their sleeves, so to speak, and I think it’s been rough for an org like MIRI to have that much sleeve-despair coming from both its chief executive and founder.

During my time as COO over the last ~7 years, I’ve increasingly taken on more and more of the responsibility traditionally associated at most orgs with the senior leadership position. So when Nate says “This change is in large part an enshrinement of the status quo. Malo’s been doing a fine job running MIRI day-to-day for many many years […]” (emphasis mine), this is what he’s pointing at. However, he was definitely still the one in charge, and therefore had a significant impact on the org’s internal culture, narrative, etc. 

While he has many strengths, I think I’m stronger in (and better suited to) some management and people leadership stuff. As such, I’m hopeful that in the senior leadership position (where I’ll be much more directly responsible for steering our culture etc.), I’ll be able to “rally the troops” so to speak in a way that Nate didn’t have as much success with, especially in these dire times.

Comment by Malo (malo) on Announcing MIRI’s new CEO and leadership team · 2023-10-12T23:15:27.098Z · LW · GW
  • Does MIRI also plan to get involved in policy discussions (e.g. communicating directly with policymakers, and/or advocating for specific policies)?

We are limited in our ability to directly influence policy by our 501(c)3 status; that said, we do have some latitude there and we are exercising it within the limits of the law. See for example this tweet by Eliezer.

To expand on this a bit, I and a couple others at MIRI have been spending some time syncing up and strategizing with other people and orgs who are more directly focused on policy work themselves. We've also spent some time chatting with folks in government that we already know and have good relationships with. I expect we'll continue to do a decent amount of this going forward. 

It's much less clear to me that it makes sense for us to end up directly engaging in policy discussions with policymakers as an important focus of ours (compared to focusing on broad public comms), given that this is pretty far outside of our area of expertise. It's definitely something I'm interested in exploring though, and chatting about with folks who have expertise in the space.

Comment by Malo (malo) on Announcing MIRI’s new CEO and leadership team · 2023-10-12T22:48:13.177Z · LW · GW
  • Does MIRI need any help? (Or perhaps more precisely "Does MIRI need any help from the right kind of person with the right kind of skills, and if so, what would that person or those skills look like?")

Yes, I expect to be hiring in the comms department relatively soon but have not actually posted any job listings yet. I will post to LessWrong about it when I do.

That said, I'd be excited for folks who think they might have useful background or skills to contribute, and who would be excited to work at MIRI, to reach out and let us know they exist, or to pitch us on why they might be a good addition to the team.

Comment by Malo (malo) on Announcing MIRI’s new CEO and leadership team · 2023-10-12T22:15:28.226Z · LW · GW

MIRI used to be focused on safety research, but now it's mostly trying to stop the march towards superintelligence, by presenting the case for the extreme danger of the current trajectory.

Yeah, given the current state of the game board, we think that work in the comms/policy space seems more impactful to us on the margin, so we'll be focusing on that as our top priority and seeing how things develop. That won't be our only focus, though; we'll definitely continue to host/fund research.

Comment by Malo (malo) on Announcing MIRI’s new CEO and leadership team · 2023-10-11T05:07:00.482Z · LW · GW

Sometimes quick org updates about team changes can be a little dry. ¯\_(ツ)_/¯

I expect you’ll find the next post more interesting :)

(Edit: fixed typo)

Comment by Malo (malo) on Announcing MIRI’s new CEO and leadership team · 2023-10-11T01:00:34.834Z · LW · GW

My read was that his comment was in response to this part at the end of the post:

There’s a lot more we hope to say about our new (and still evolving) strategy, and about our general thinking on the world’s (generally very dire) situation. But I don’t want those announcements to further delay sharing the above updates, so I’ve factored our 2023 strategy updates into multiple posts, beginning with this one.

Comment by Malo (malo) on Inadequacy and Modesty · 2018-12-16T06:38:22.339Z · LW · GW

First two of the six volumes are out: https://www.lesswrong.com/posts/NjFgqv8bzjhXFaELP/new-edition-of-rationality-from-ai-to-zombies

Comment by Malo (malo) on MIRI’s 2018 Fundraiser · 2018-11-29T02:17:02.867Z · LW · GW

Update: Added an announcement of our newest hire, Edward Kmett, as well as a list of links to relatively recent work we've been doing in Agent Foundations, and updated the post to reflect the fact that Giving Tuesday is over (though our matching opportunity continues)!

Comment by Malo (malo) on LessWrong.com URL transfer complete, data import will run for the next few hours · 2018-03-23T17:52:26.965Z · LW · GW

It's my understanding, though Oliver can of course correct me if I'm wrong, that the canonical domain will be lesswrong.com, and all lesserwrong.com/* links will redirect to lesswrong.com/*, to ensure that any links on the web to lesserwrong.com continue to work.
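
To make that concrete, here's a minimal sketch of the kind of path-preserving redirect described above, assuming a Node/Express-style server — purely an illustration, not how the LessWrong team necessarily implemented it:

```typescript
import express from "express";

const app = express();

// Send any request for the old domain to the same path (and query string)
// on the canonical domain, using a permanent (301) redirect.
app.use((req, res, next) => {
  if (req.hostname === "lesserwrong.com" || req.hostname === "www.lesserwrong.com") {
    res.redirect(301, `https://lesswrong.com${req.originalUrl}`);
    return;
  }
  next();
});

app.listen(3000);
```

The 301 status is what keeps old links working: browsers and search engines treat it as "moved permanently," so existing lesserwrong.com links continue to resolve at the new domain.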

Comment by Malo (malo) on Leaving beta: Voting on moving to LessWrong.com · 2018-03-19T22:21:21.589Z · LW · GW

No.

Comment by Malo (malo) on MIRI's 2017 Fundraiser · 2017-12-15T20:22:06.492Z · LW · GW

Update 2:

Professional poker players Martin Crowley, Tom Crowley, and Dan Smith, in partnership with Raising for Effective Giving, have just announced a $1 million Matching Challenge and included MIRI among the 10 organizations they are supporting!

Also, we’ve hit our first fundraising target ($625,000)!

See here for more details.

Comment by Malo (malo) on MIRI's 2017 Fundraiser · 2017-12-15T20:17:26.609Z · LW · GW

Update 2: Professional poker players Martin Crowley, Tom Crowley, and Dan Smith, in partnership with Raising for Effective Giving, have just announced a $1 million Matching Challenge and included MIRI among the 10 organizations they are supporting!

Also, we’ve hit our first fundraising target ($625,000)!

See here for more details.

Comment by Malo (malo) on MIRI's 2017 Fundraiser · 2017-12-08T06:49:25.563Z · LW · GW

Awesome! Thanks so much :)

Comment by Malo (malo) on MIRI's 2017 Fundraiser · 2017-12-07T18:35:05.383Z · LW · GW

We just passed the 1/4 mark towards our first target! Fun fact: of the ~$200k raised so far in the fundraiser, ~65% has come from cryptocurrency donations.

Comment by Malo (malo) on MIRI's 2017 Fundraiser · 2017-12-07T16:05:39.113Z · LW · GW

We just passed the 1/4 mark towards our first target! Fun fact: of the ~$200k raised so far in the fundraiser, ~65% has come from cryptocurrency donations.

Comment by Malo (malo) on Living in an Inadequate World · 2017-11-13T18:38:36.212Z · LW · GW

You might also want to check out Ketolent.

Comment by Malo (malo) on Inadequacy and Modesty · 2017-11-09T04:34:21.084Z · LW · GW

It's just a really big project. It's almost an order of magnitude longer than In Eq, and it was written in a way that makes it much more challenging to turn into a paper book. E.g., links are pretty important when reading the Sequences. Said another way, the task of getting a physical book up for sale on Amazon is pretty trivial; the process of transforming the actual content of the Sequences into something that works in book form is significantly harder. In Eq doesn't have this issue.

The scale of the task, combined with other competing priorities at MIRI, is the reason it's not out yet.

Comment by Malo (malo) on Inadequacy and Modesty · 2017-11-06T08:54:37.578Z · LW · GW

Yes.

Comment by Malo (malo) on 10/28/17 Development Update: New editor framework (markdown support!), speed improvements, style improvements · 2017-10-29T21:24:27.246Z · LW · GW

I think the new font looks pretty good. I do think, though, that for a body font the x-height is pretty small, which makes it less readable.

Comment by Malo (malo) on LessWrong 2.0 Feature Roadmap & Feature Suggestions · 2017-09-03T18:33:57.786Z · LW · GW

I personally think the grey lines on the side do a pretty good job, but I also think the boxes on LW 1 are doing something that makes things clearer. I do think the LW 1 comment boxes look a little junky, though, and I'm very much enjoying the clean look of LW 2.0 overall. Not sure what a good compromise would be. Maybe all top-level comments could be a little more distinguishable in some way?

Comment by Malo (malo) on LessWrong 2.0 Feature Roadmap & Feature Suggestions · 2017-09-03T18:21:55.797Z · LW · GW

I agree there is something nice about being able to see who upvoted or downvoted a comment or post, but I don't think I'd want this to be the default. I expect I'd feel uncomfortable voting on some stuff if I knew that my vote would be public. Maybe after voting, an option could appear that said something like “Make vote public”. Then you could have something pop up on hover (or with a tap on tablets/phones) that showed something like “Malo and 3 other people upvoted this post”. Though that would probably get unwieldy if lots of people made their votes public. I think there's a good idea in there though, just not sure about implementation specifics.

Comment by Malo (malo) on LessWrong 2.0 Feature Roadmap & Feature Suggestions · 2017-09-03T18:15:16.039Z · LW · GW

Yeah upvotes can mean a lot of different things like endorse, agree, or high quality comment (even though I disagree). This comment thread on another post discussed some potential extensions to upvoting that might help with this.

Comment by Malo (malo) on LessWrong 2.0 Feature Roadmap & Feature Suggestions · 2017-09-03T17:58:32.270Z · LW · GW

I don't think this is working for me. I just made a bunch of comments last night, and got a couple of replies since then. When I visited the site today, I only noticed people had added comments when I saw them in the recent comments section.

How's this supposed to work?

Comment by Malo (malo) on LessWrong 2.0 Feature Roadmap & Feature Suggestions · 2017-09-03T17:55:26.682Z · LW · GW

Re: #1: I too am a big fan of Practical Typography :) That's a pretty good point; I actually don't think we disagree much. I think I may just slightly prefer whiter backgrounds with slightly grey text. But only slightly.

Re: #2: I largely agree with this, though I might lean more on the side of giving the user fewer configuration options. Like, if you give everyone an option for everything, then the options get real cluttered. But I don't have strong feelings about adding this preference in general.

Re: #3: Totally.

Comment by Malo (malo) on LessWrong 2.0 Feature Roadmap & Feature Suggestions · 2017-09-03T04:13:58.242Z · LW · GW

This is an epic comment with lots of great ideas and observations.

A few comments/opinions:

  1. I don't think the text should be proper black as in #000000. I find that slightly off-black makes for a better reading experience, and I think this is pretty standard practice, though I may be mistaken.

  2. I think it's a feature that upvotes and downvotes appear above and below. I may want to see the count at the top before reading, but then see it again at the bottom so I can vote once I've read the post.

  3. Agree that hamburgers aren't great, but hover-based UIs aren't tablet-friendly, so I'm not keen on that solution.

  4. Strongly agree with the comment editor not being visually distinct enough. Multiple times when writing this comment I've scrolled up to reference your comment, and then found it a little annoying to find where I was typing my reply.

Comment by Malo (malo) on LessWrong 2.0 Feature Roadmap & Feature Suggestions · 2017-09-03T03:46:49.980Z · LW · GW

Nice!

Two thoughts:

  1. What about adding a small link icon next to the time that serves as the link to the comment? Having the time be the link is pretty hard to discover. Facebook does it this way, and it took me a pretty long time to consistently remember; rediscovering it was really annoying.

  2. I think the idea of displaying the linked comment at the top of the page is cool, but I also find it a little confusing (like I instinctively think “where’s the rest of the discussion” for a quick sec). I also almost always click the “Show comment in full context” link. Given this, it seems to me that bringing the user directly to the comment in context might be best. Maybe the comment could be highlighted in some way so that it was easy to see which comment was linked to.

Comment by Malo (malo) on LessWrong 2.0 Feature Roadmap & Feature Suggestions · 2017-09-03T03:32:32.673Z · LW · GW

Harvard Law Review also has a pretty classy way of doing footnotes (example post).

Comment by Malo (malo) on LessWrong 2.0 Feature Roadmap & Feature Suggestions · 2017-09-03T03:31:05.921Z · LW · GW

Yeah that would be really great. Medium does this kind of well. Chris Olah's blog also has this feature (example post), but it’s implemented in a pretty hacky way using Disqus.

It would be cool if you could highlight some text in a post, and there was an easy way to create a comment that quoted that part of the text. Maybe you could even show some sort of visual highlight on that text in the post if the discussion is high quality (measured by some combination of Karma and length?).
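
As a rough sketch of how the selection-to-quote piece could work on the front end (the element IDs and the button wiring here are hypothetical, just for illustration — not anything LW actually has):

```typescript
// Hypothetical sketch: capture the reader's text selection and prefill a
// markdown blockquote in a comment box. Element IDs are made up for
// illustration purposes.
function quoteSelectionIntoComment(): void {
  const selection = window.getSelection();
  if (!selection || selection.isCollapsed) return;

  // Turn each selected line into a markdown blockquote line.
  const quoted = selection
    .toString()
    .split("\n")
    .map((line) => `> ${line}`)
    .join("\n");

  const commentBox = document.getElementById("comment-editor") as HTMLTextAreaElement | null;
  if (commentBox) {
    commentBox.value = `${quoted}\n\n`;
    commentBox.focus();
  }
}

// e.g., wire it to a "Quote" button that appears near the selection:
// document.getElementById("quote-button")?.addEventListener("click", quoteSelectionIntoComment);
```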

Comment by Malo (malo) on Welcome to Lesswrong 2.0 · 2017-06-20T01:13:12.311Z · LW · GW

Yeah, it's pretty unreasonable to expect typography to be dialed in for the closed beta :)

Some quick thoughts/opinions I have for the post text:

  • I'd consider making the body text a serif font. I find it's a better reading experience.

  • Body text is too grey. It definitely shouldn't be black, but maybe darker at something like: #2F2230.

  • I'd differentiate headings a little more, maybe with a different font, or real small caps. Also, if I was being really opinionated, I'd only support 3 heading levels and make them smaller. I think people are overdoing it these days with really big headings in post/article text on the web. It certainly differentiates them from the text, but there are classier ways of doing that.

  • The current line-height is pretty big at 1.846; I'd change it to something closer to 1.6. Maybe even as low as 1.4.

  • Most sites have their font size too small, so I'm really happy to see you didn't do that, but I think the current body font is too big at 20px. I'd do 18px at most, and no smaller than 16px. Doing this might make the lines a little on the long end though.

  • I'd also implement hanging bullets. This is where bullet text is flush with the body text, and the bullets sit in the margin a little. It's very easy to do with CSS (see the sketch just below this list). For bonus points you could do hanging punctuation for quotes, but that's much harder.
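
Pulling the numbers above together, here's a minimal sketch expressed as a CSS-in-JS style object. The specific serif font and the selector structure are placeholder assumptions on my part; only the numeric values come from the suggestions above.

```typescript
// Sketch of the suggested body-text values. Font stack and structure are
// placeholder assumptions; the numbers mirror the bullet points above.
const postBodyStyles = {
  body: {
    fontFamily: "Georgia, serif", // serif body text (specific face is arbitrary here)
    fontSize: "18px",             // 16-18px instead of the current 20px
    lineHeight: 1.6,              // down from 1.846; could go as low as 1.4
    color: "#2F2230",             // darker than the current grey, but not pure black
  },
  // Hanging bullets: markers render outside the content box, in the margin,
  // so bullet text stays flush with the body text.
  list: {
    listStylePosition: "outside" as const,
    paddingLeft: 0,
    marginLeft: 0,
  },
};

export default postBodyStyles;
```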

Comment by Malo (malo) on Welcome to Lesswrong 2.0 · 2017-06-20T00:22:41.528Z · LW · GW

I'd generally recommend reading Practical Typography and Professional Web Typography. I expect knowing that stuff well would be valuable, since LW is primarily a website where people read lots of text.

Comment by Malo (malo) on MIRI's 2015 Summer Fundraiser! · 2015-08-30T20:59:23.023Z · LW · GW

Someone just snagged the last $1,001 match. Thanks to all those who donated $1,001 to secure the matching, and to DeevGrape for providing it!

Comment by Malo (malo) on MIRI's 2015 Summer Fundraiser! · 2015-08-29T19:15:01.295Z · LW · GW

Another $1,001 donation has come in.

One last $1,001 match remaining.

Comment by Malo (malo) on MIRI's 2015 Summer Fundraiser! · 2015-08-29T02:37:30.419Z · LW · GW

3 of the 5 $1,001 matches have already been claimed. As additional $1,001 donations come in I'll post updates here.

Comment by Malo (malo) on New forum for MIRI research: Intelligent Agent Foundations Forum · 2015-03-19T16:17:18.629Z · LW · GW

The server was down, but it is back up again now.

Comment by Malo (malo) on New forum for MIRI research: Intelligent Agent Foundations Forum · 2015-03-19T16:15:57.912Z · LW · GW

It does: http://agentfoundations.org/rss

The link to it is the last thing in the right sidebar. It says RSS in green.

Comment by Malo (malo) on Rationality: From AI to Zombies · 2015-03-15T23:59:36.453Z · LW · GW

Definitely beneficial; there is no cost worth considering when it comes to the next marginal person getting the book through our site, even if their selection is $0. So don't worry about directing them there.

Comment by Malo (malo) on Rationality: From AI to Zombies · 2015-03-15T23:56:30.202Z · LW · GW

No fees, but it does take some extra staff time (additional bookkeeping/accounting work is involved), so there is some cost to it. If we got more BTC donations, it would reduce the time cost per donation due to the effects of batching, but as it stands now, they are usually processed (record added to our donor database and accounting software) on an individual basis.

One thing that takes a significant amount of time is when someone mis-pays a Coinbase invoice (sends a different amount of BTC than they indicated on the Coinbase form on our site). Coinbase treats these payments in a different way that ends up requiring more time to process on our end.

All that being said, we like having the BTC donation option, and it always makes me happy to see one come in. So if making contributions via BTC is your preference, I'm all for it :)

Comment by Malo (malo) on Rationality: From AI to Zombies · 2015-03-15T16:19:47.099Z · LW · GW

See my comment here about this.