Comments

Comment by dmav on How to (hopefully ethically) make money off of AGI · 2024-12-24T06:25:03.549Z · LW · GW

Unfortunately, comparing raw returns isn't a great way of evaluating the portfolio against the S&P 500. You should really be comparing their Sharpe ratios (or just their annualized t-stats). If you have, for example, 5% annualized returns on $x in excess of the risk-free rate, you can double your PnL to 10% just by borrowing another $x and investing it (assuming you can borrow at a competitive rate). Why not do that? Well, you'll also have more variance in your portfolio. What you probably really care about is risk-adjusted returns.

The most common way to evaluate this is to compare (mean daily return) / (stdev of daily returns), where you'd typically first subtract the risk-free rate (or the rate you can borrow at) from the returns.

(Eyeballing it, SPY was probably like 1.5-2x as good as the other portfolio by this metric.)
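Here's a minimal sketch of that computation, assuming you have a series of daily returns for each portfolio; the example numbers are made up for illustration:

```python
import numpy as np

def annualized_sharpe(daily_returns, daily_risk_free=0.0, trading_days=252):
    """Annualized Sharpe ratio: mean daily excess return over its stdev,
    scaled by sqrt(trading days per year)."""
    excess = np.asarray(daily_returns) - daily_risk_free
    return excess.mean() / excess.std(ddof=1) * np.sqrt(trading_days)

# Hypothetical daily returns for a portfolio and for SPY over the same dates.
portfolio = np.array([0.004, -0.002, 0.007, 0.001, -0.003])
spy       = np.array([0.003,  0.001, 0.002, 0.000,  0.004])
print(annualized_sharpe(portfolio), annualized_sharpe(spy))
```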

Happy to explain more if this is confusing or you're curious and have other questions.

Edit: I see your other post/comment now that has a Sharpe ratio and a portfolio that looked like it outperformed this one; maybe this isn't new or interesting, then, but I'll leave it up in case someone else finds it useful.

Comment by dmav on Congressional Insider Trading · 2024-09-02T14:20:17.729Z · LW · GW

This wouldn't really solve much of the problem though, since ETFs are still pretty expressive. For example, when they have a sense for whether an important clean-energy bill will pass or fail, they could buy/sell a clean-energy-tracking ETF.

Some ETFs have a pretty high weight in Nvidia, so it would still be pretty easy to trade it indirectly, albeit a bit less efficiently.

And honestly, even the S&P 500 will still move a lot based on various policy outcomes.

Comment by dmav on Mechanistic Interpretability Quickstart Guide · 2023-02-22T03:44:12.422Z · LW · GW

Just so you know, this is still missing on your personal site.
Also, the image here doesn't exist in the post on your personal site.
Thanks for writing all these wonderful resources, Neel!

Comment by dmav on EigenKarma: trust at scale · 2023-02-10T18:17:42.107Z · LW · GW

You probably also want to do some kind of normalization here based on how many total posts the user has upvoted. (So you can't just, e.g., upvote everything.) (You probably actually care about something a little different from the average accuracy of their upvotes-as-predictions, though...)
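A minimal sketch of the kind of normalization I mean, assuming the trust graph stores raw upvote counts per (voter, author) pair; all names here are made up for illustration:

```python
from collections import defaultdict

def normalized_trust_weights(upvotes):
    """upvotes: dict mapping voter -> {author: raw upvote count}.
    Each voter's outgoing weights are divided by their total upvotes,
    so upvoting everything doesn't buy extra influence."""
    weights = defaultdict(dict)
    for voter, given in upvotes.items():
        total = sum(given.values())
        for author, count in given.items():
            weights[voter][author] = count / total if total else 0.0
    return weights

# A voter who upvotes everything spreads the same total weight thinner.
print(normalized_trust_weights({"alice": {"bob": 3, "carol": 1},
                                "dave": {"bob": 50, "carol": 50}}))
```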

Comment by dmav on Why square errors? · 2022-11-27T03:21:37.417Z · LW · GW

Here's an accessible blog post that does a good job discussing this topic: https://ericneyman.wordpress.com/2019/09/17/least-squares-regression-isnt-arbitrary/
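The post goes into much more depth, but the standard one-line version of the point (not necessarily the post's main argument) is that minimizing squared error picks out the mean:

```latex
\frac{d}{dc}\sum_{i=1}^{n}(x_i - c)^2 = -2\sum_{i=1}^{n}(x_i - c) = 0
\;\Longrightarrow\;
c = \frac{1}{n}\sum_{i=1}^{n} x_i .
```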

Comment by dmav on Meta AI announces Cicero: Human-Level Diplomacy play (with dialogue) · 2022-11-24T15:04:51.556Z · LW · GW

I think this is true of the original version of AlphaStar, but they have since trained a new version on camera inputs and with stronger limits on APM (22 actions per 5 seconds). (Maybe you'd still want some kind of noise applied to the inputs, but I think the current setup is much closer to human-like playing conditions.) See: https://www.deepmind.com/blog/alphastar-grandmaster-level-in-starcraft-ii-using-multi-agent-reinforcement-learning

Comment by dmav on How Risky Is Trick-or-Treating? · 2022-10-27T19:04:32.038Z · LW · GW

In other words, we should be telling children 'be careful of roads/cars' (including on Halloween), not 'be careful of Halloween.'

I agree with the post, but I will point out that you really do need to emphasize the utility per micromort here. If you keep your utility constant, it's the total risk that matters. It's just like if you were going to go on a long car ride tomorrow (on safer-than-usual roads, but not safer enough to outweigh the extra driving) and someone points out that you're much more likely to die than usual. Sure, you can reply 'ah yes, but my chance of dying per mile is lower than usual!', but that's not the right reference point if your utility isn't a function of how much you drive.
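To make that concrete with made-up numbers: suppose the per-mile fatality rate tomorrow is half the usual rate, but the trip is ten times longer than your usual drive.

```python
usual_rate = 1e-8                 # hypothetical deaths per mile on a normal day
usual_miles = 10

tomorrow_rate = usual_rate / 2    # safer per mile...
tomorrow_miles = 100              # ...but a much longer trip

usual_risk = usual_rate * usual_miles
tomorrow_risk = tomorrow_rate * tomorrow_miles
print(tomorrow_risk / usual_risk)  # 5.0: total risk, not per-mile risk, is what matters
```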

All that said, the total number of deaths is only ~double on Halloween? That feels insane; roads must be SO much safer than usual.

Comment by dmav on AI Safety field-building projects I'd like to see · 2022-09-12T11:54:17.211Z · LW · GW

As you kind of say, there are already (at least decently smart/competent) people trying to do (almost) all of these things. For many of these projects, joining current efforts is probably a better allocation than starting your own, and most of the value to be added is if you're in the 99.5th+ percentile (?) for the skills needed. (Sometimes there just aren't enough people working on a problem, or there's a place to add value if you're willing to do annoying work other people don't want to do, but both of these are rarer in the current funding regime.)

Something I'd add to this list (or at least to the bottom of it?), which I've heard a couple of people mention would be useful, is a nonprofit (regranting-like?) org whose primary goal is to hire international independent researchers in the Berkeley area and provide them with visas.

Comment by dmav on [deleted post] 2022-08-30T05:17:01.449Z

Note that your prediction isn't interesting. Each year, conditional on doomsday not having happened, it would be pretty weird for the date(s) not to have moved forward.
Do you instead mean that you expect the date to move forward by more than a year each year, or something like that?

Comment by dmav on My Plan to Build Aligned Superintelligence · 2022-08-21T22:51:12.771Z · LW · GW

Here are some objections I have to your post:
How are you going to specify the amount of optimization pressure the AI exerts on answering a question/solving a problem? Are you hoping to start out training a weaker AI that you later augment?
  • If so, I'd be concerned about any distributional shifts in its optimization process during that transition.
  • If not, it's not clear to me how you have the AI 'be safe' through this training process.

At the point where you, the human, are labeling data to train the AI to identify concepts with measurements/features, you now have a loss function that depends on human feedback, and which, once again, you can't specify in terms of the concepts you want the AI to identify. It seems like the AI is pretty incentivized to be deceptive here (or really at any point in the process).
I.e., if it's superintelligent and you accidentally gave it the loss function 'maximize paperclips', but it models humans as potentially not realizing they gave it this loss function, then I think it would act indistinguishably from an AI with the loss function you intended (at least during the stage of training you outline).

And even if, at first, it does do things that look like what a paperclip maximizer would try to do, instead of whatever you actually want it to do (label things appropriately) - say, it tries to get a human user to upload it to the internet, but your safeguards are strong enough to prevent things like that - then I think that as you train away actions like this, you're not just training it to have a better utility function or whatever; you're training it to be more effectively deceptive.

Comment by dmav on What's the problem with having an AI align itself? · 2022-04-06T13:51:13.002Z · LW · GW

I think the you/Adele miscommunication mostly comes down to under-specification of what features you want your test-AGI to have.

  • If you throttle its ability to optimize for its goals, see EY and Adele's arguments.

  • If you don't throttle in this way, you run into goal-specification/constraint-specification issues, instrumental convergence concerns and everything that goes along with it.

I think most people here will strongly feel that a (computationally) powerful AGI with any incentives is scary, and that any test versions should use at most a much less powerful one.

Sorry if I've misunderstood you at all. If you specify the nature/goals/constraints etc. of your test-AI more precisely, maybe I or someone else can give you more specific failure modes.