[Link] John Carmack working on AGI 2019-11-14T00:08:37.250Z · score: 16 (7 votes)
bgold's Shortform 2019-10-17T22:18:11.822Z · score: 4 (2 votes)
Running Effective Structured Forecasting Sessions 2019-09-06T21:30:25.829Z · score: 21 (5 votes)
How to write good AI forecasting questions + Question Database (Forecasting infrastructure, part 3) 2019-09-03T14:50:59.288Z · score: 30 (13 votes)
AI Forecasting Resolution Council (Forecasting infrastructure, part 2) 2019-08-29T17:35:26.962Z · score: 30 (12 votes)
AI Forecasting Dictionary (Forecasting infrastructure, part 1) 2019-08-08T16:10:51.516Z · score: 41 (22 votes)
Do bond yield curve inversions really indicate there is likely to be a recession? 2019-07-10T01:23:36.250Z · score: 22 (8 votes)
What is the best online community for questions about AI capabilities? 2019-05-31T15:38:11.678Z · score: 4 (2 votes)
What's the best approach to curating a newsfeed to maximize useful contrasting POV? 2019-04-26T17:29:30.806Z · score: 28 (5 votes)
How do S-Risk scenarios impact the decision to get cryonics? 2019-04-21T15:59:50.342Z · score: 12 (5 votes)


Comment by bgold on AlphaStar: Impressive for RL progress, not for AGI progress · 2019-11-13T18:12:28.958Z · score: 5 (2 votes) · LW · GW

I watched all of the Grandmaster-level games. When playing against Grandmasters, AlphaStar's average win rate across all three races was 55.25%:

  • Protoss Win Rate: 78.57%
  • Terran Win Rate: 33.33%
  • Zerg Win Rate: 53.85%

Detailed match-by-match scoring

While I don't think that it is truly "superhuman", it is definitely competitive against top players.
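For reference, the 55.25% figure is just the unweighted (macro) average of the three per-race win rates quoted above — a quick sketch of the arithmetic:

```python
# Per-race win rates from the Grandmaster games, as percentages.
win_rates = {"Protoss": 78.57, "Terran": 33.33, "Zerg": 53.85}

# Unweighted macro-average across the three races.
macro_average = sum(win_rates.values()) / len(win_rates)
print(round(macro_average, 2))  # 55.25
```

Note this is an average of rates, not of games — if AlphaStar played different numbers of games per race, a game-weighted average would differ.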

Comment by bgold on bgold's Shortform · 2019-10-23T19:22:34.889Z · score: 1 (1 votes) · LW · GW

I remember seeing other claims/analyses of this, but I don't remember where.

Comment by bgold on bgold's Shortform · 2019-10-21T20:17:07.533Z · score: 1 (3 votes) · LW · GW

Is the clearest "win" of a LW meme the rise of the term "virtue signaling"? On the one hand I'm impressed w/ how dominant it has become in the discourse, on the other... maybe our comparative advantage is creating really sharp symmetric weapons...

Comment by bgold on bgold's Shortform · 2019-10-17T22:18:11.972Z · score: 8 (5 votes) · LW · GW

I have a cold, which reminded me that I want fashionable face masks to catch on so that I can wear them all the time in cold-and-flu season without accruing weirdness points.

Comment by bgold on Daniel Kokotajlo's Shortform · 2019-10-14T19:18:57.220Z · score: 8 (3 votes) · LW · GW

I'm interested, and I'd suggest using for this

Comment by bgold on Hazard's Shortform Feed · 2019-10-14T17:51:56.607Z · score: 3 (2 votes) · LW · GW

I'd like to see someone in this community write an extension / refinement of it to further {need-good-color-name}pill people into the LW meme that the "higher mind" is not fundamentally better than the "animal mind".

Comment by bgold on Daniel Kokotajlo's Shortform · 2019-10-14T17:47:21.312Z · score: 2 (2 votes) · LW · GW

I'd agree w/ the point that giving subordinates plans and the freedom to execute them as best they can tends to work out better, but that seems strongly dependent on other context, in particular the field they're working in (ex. software engineering vs. civil engineering vs. military engineering), cultural norms (ex. is this a place where agile engineering norms have taken hold?), and reward distributions (ex. does experimenting by individuals hold the potential for big rewards, or are all rewards likely to be distributed normally, such that we don't expect to find outliers?).

My general model is that in certain fields humans look more tool-shaped and in others more agent-shaped. For example, an Uber driver executing instructions from the central command-and-control algorithm doesn't require as much planning or world-modeling behavior. One way this could apply to AI is that sub-agents of an agent AI would be tools.

Comment by bgold on Honoring Petrov Day on LessWrong, in 2019 · 2019-09-26T16:33:04.737Z · score: 13 (8 votes) · LW · GW

so shiny. It's like, it's begging to be pressed.

Comment by bgold on Conversation on forecasting with Vaniver and Ozzie Gooen · 2019-08-11T16:52:29.913Z · score: 5 (3 votes) · LW · GW

From a 2 min brainstorm of "info products" I'd expect to be action guiding:

  • Metrics and dashboards reflecting the current state of the organization.
  • Vision statements ("what do we as an organization do and thus what things should we consider as part of our strategy")
  • Trusted advisors
  • Market forces (e.g. prices of goods)

One concrete example is from when I worked in a business intelligence role. What executives wanted were extremely trustworthy, reliable data sources to track business performance over time. In a software environment (e.g. all the analytics companies constantly posting to Hacker News) that's trivial, but in a non-software environment it's very hard. It was very action-guiding to be able to see whether your last initiative worked, because if it did you could put a lot more money into it and scale it up.

Comment by bgold on Conversation on forecasting with Vaniver and Ozzie Gooen · 2019-08-09T21:09:29.327Z · score: 12 (6 votes) · LW · GW

It seems true that there are a lot of ways to utilize forecasts. In general, forecasting tends to have an implicit and unstated connection to the decision-making process - I think that has to do w/ the nature of operationalization ("a forecast needs to be on a very specific thing") and because much of the popular literature on forecasting has come from business literature (e.g. How to Measure Anything).

That being said, I think action-guidingness is still the correct bar to meet for evaluating the effect it has on the EA community. I would bite the bullet and say blogs should also be held to this standard, as should research literature. An important question for an EA blog - say, LW :) - is what positive decisions it's creating (yes, there are many other good things about having a central hub, but if the quality of intellectual content is part of it, that should be trackable).

If, in aggregate, many forecasts can produce the same type of guidance as many good blog posts, or better, that would be really positive.

Comment by bgold on Quotes from Moral Mazes · 2019-05-30T20:26:15.707Z · score: 3 (2 votes) · LW · GW

This is great, I also had struggled reading Moral Mazes and I appreciate the selected quotes.

For a more readable, modern treatment of the subject I strongly recommend Power: Why Some People Have It - And Others Don't. The author draws heavily from Moral Mazes as well as other case studies.

Comment by bgold on What is a reasonable outside view for the fate of social movements? · 2019-01-04T00:58:55.922Z · score: 12 (6 votes) · LW · GW

Off the cuff:

  • Temperance movement in the United States
  • Much of the radical left movement from the 60s to the 70s (ex. Students for a Democratic Society -> Weatherman)
  • Georgism
  • The Shakers

Another useful line of inquiry might be factoring out what success for a social movement looks like, finding social movements that "succeeded", and seeing what happened to the social movements they were competing against.

Comment by bgold on Oops on Commodity Prices · 2018-06-12T21:27:04.172Z · score: 17 (3 votes) · LW · GW

+1 for noting the mistake, and for noting the importance of being bold, asking questions, and sharing models even when you're uncertain.

Your use of the Epistemic Status tag - which I think /u/gwern pioneered? - seems good for balancing the value of sharing models against polluting the "idea space" with potentially misleading/untrue things.