Posts

How might cryptocurrencies affect AGI timelines? 2021-02-28T19:16:15.326Z
Self-Similarity Experiment 2020-08-15T13:19:56.916Z

Comments

Comment by Telofy on How might cryptocurrencies affect AGI timelines? · 2021-04-10T08:31:14.350Z · LW · GW

Oh sorry, I didn’t see your comment.

  1. There is the existence proof of gold, silver, platinum, etc. But people have mostly convinced me already that 2 OOM are not going to be enough to have much of an effect.

  2. I think you understand my argument correctly. :-) You argued that AI safety will not differentially speed up compared to AI capabilities because it will slow down at the same rate. Maybe not literally the same rate but the same rate in expectation. But that still seems unlikely to me since I can think of reasons why it would slow down less but can’t think of reasons why it would slow down more.

Comment by Telofy on How might cryptocurrencies affect AGI timelines? · 2021-04-10T08:22:33.363Z · LW · GW

There were a few good points distributed over many answers and comments, so I’ll collate them here. I’ll ignore answers to other questions, such as “Is it likely that cryptocurrencies will get big?” I’ve paraphrased all of these.

  1. Question

    1. Bitcoin is not actually deflationary by design, but its popularity is increasing faster than its supply. That will stall at some point, after which its value will stop growing. That’s probably about 100 years out, so maybe not relevant to AI, but it means a cyberpunk scenario is unlikely to be stable that way unless a more deflationary asset becomes dominant. (H/t Olomana) (See the toy sketch after this list.)
    2. A 1–2 OOM increase in market cap may not be enough for crypto to have much impact on society. It would still just be one more biggish asset class. There is not that much value in currencies overall, so even a cryptocurrency as big as all currencies combined would not have these vast societal effects. (H/t romeostevensit)
  2. Question

    1. Proof of work generates demand for GPUs and ASICs, allowing them to be produced at greater scale (and maybe smoothing out demand volatility). That hardware is also useful for AI, so proof of work at least may accelerate AI. (H/t Gerald Monroe) Bitcoin is poised to remain dominant, and it may be too conservative to switch away from proof of work. (That leaves the avenue of promoting non-PoW blockchains so that tokens on those blockchains can displace Bitcoin to some extent.)
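
As a minimal toy sketch of the deflation point above (my own illustration with made-up numbers, not something from the thread): if we assume the price of a coin with a Bitcoin-style supply schedule is roughly proportional to adoption divided by circulating supply, the price rises while adoption outpaces supply and stalls once adoption saturates.

```python
import math

def supply(year: float) -> float:
    # Bitcoin-style issuance: asymptotically approaches a 21M-coin cap,
    # with remaining issuance halving roughly every four years.
    return 21e6 * (1 - 0.5 ** (year / 4))

def adoption(year: float) -> float:
    # Logistic adoption curve: rapid growth now, saturating at ~1B users
    # around year 50. All parameters are arbitrary illustrations.
    return 1e9 / (1 + math.exp(-0.1 * (year - 50)))

for year in (10, 30, 50, 70, 100):
    price_proxy = adoption(year) / supply(year)  # arbitrary units
    print(f"year {year:3d}: price proxy ≈ {price_proxy:8.1f}")
```

In this toy model the price proxy climbs steeply while adoption outpaces issuance and then flattens as adoption saturates – the self-correcting dynamic that point 1.1 describes.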

Did I forget any?

Comment by Telofy on How might cryptocurrencies affect AGI timelines? · 2021-03-03T17:34:55.000Z · LW · GW

Preventing 51% attacks. Maybe other attacks too. And for the environmentally minded, there’s also the goal of decreasing power consumption. I heard a region of China has banned new mining facilities because of their energy consumption. If that continues, 51% attacks may become easier again unless more blockchains switch away from proof of work.

Comment by Telofy on How might cryptocurrencies affect AGI timelines? · 2021-03-02T14:10:37.499Z · LW · GW

Away from proof of work. :-)

Comment by Telofy on How might cryptocurrencies affect AGI timelines? · 2021-03-01T14:04:36.320Z · LW · GW

Thanks! Yeah, I don’t imagine that a currency that deflates will be used for everyday payments like salaries. Stablecoins or fiat seem more appropriate for that. But that doesn’t undercut the worry that a currency that deflates may be preferred over reinvestment and hence stall innovation – the scenario I sketch above. Or does it?

Comment by Telofy on How might cryptocurrencies affect AGI timelines? · 2021-03-01T13:53:24.354Z · LW · GW

Good to know!

Comment by Telofy on How might cryptocurrencies affect AGI timelines? · 2021-03-01T13:52:47.507Z · LW · GW

Yeah, but that’s just a network effect: once a critical mass of them starts doing it, it becomes normal, and they’re more at risk of being called stupid for not investing enough.

Comment by Telofy on How might cryptocurrencies affect AGI timelines? · 2021-03-01T11:34:41.155Z · LW · GW

Good point. A broad switch away from proof of work (as seems to be happening) may change that dynamic.

Comment by Telofy on How might cryptocurrencies affect AGI timelines? · 2021-03-01T08:35:26.102Z · LW · GW

Thanks! If I read your reply correctly, those are reasons why the extreme growth scenario that I’m worried about is unlikely and why, therefore, my conditional question is unimportant. They’re also similar to the reasons why I didn’t invest in crypto until 2016: I took it for an obvious speculative bubble about to pop, one with no commensurate underlying value.

I suppose my default assumption should still be that most of the current crypto price and market capitalization is purely speculative. But the more boom-and-bust cycles the market survives without collapsing entirely, the more I suspect that I might be wrong about that in ways I don’t anticipate, or that it may not be important in the end.

So I’m still interested in what might happen if the above scenario comes true and lasts for long enough to make a difference for safety research.

Comment by Telofy on How might cryptocurrencies affect AGI timelines? · 2021-03-01T08:21:37.507Z · LW · GW

  1. Thanks! I can’t argue for the importance of market capitalization, but I don’t think transactions are a good proxy either. For something to actually be used as a currency, its value would have to be somewhat predictable, so I think that role will fall to cryptocurrencies like the stablecoins we already have. The deflationary cryptocurrencies are more likely to be used as “stores of value”: just bought and held by most owners, and traded only on markets where the underlying coin doesn’t actually change hands (like most centralized crypto exchanges today).
  2. Hmhmm, yeah, maybe… I suppose the default assumption should be that sectors behave the same unless there are differences between them. But I can think of some differences. Crypto finance may be highly automated, so there may be few jobs there. The energy and hardware sectors may grow, but perhaps not to the point where they can absorb all the labor. So people will fall into one of two categories: (1) desperately looking for employment or (2) not in need of employment. The assumption is that AGI development will slow because companies are not incentivized to invest in it – neither compute nor staff. But, just as an observation, AI safety people will enter this period with capital, and they’re not motivated primarily by profit but by saving the world. So they’ll be motivated and able to pay the people in group 1 and retain the people in group 2, who may not even ask for a salary. That could still lead to a differential speed-up.

Comment by Telofy on How might cryptocurrencies affect AGI timelines? · 2021-03-01T07:57:33.756Z · LW · GW

Fascinating, thanks! I found this article. Is that roughly what you’re referring to? It sounds like the author would agree that it is deflationary so long as the user base grows faster than the supply. In that case, my scenario above should self-correct eventually, unless a more deflationary coin catches on.

Comment by Telofy on How might cryptocurrencies affect AGI timelines? · 2021-02-28T19:43:25.852Z · LW · GW

Very good! I’ve incorporated the question, but I’d like to keep the post focused on AGI timelines. :-)

Comment by Telofy on What trade should we make if we're all getting the new COVID strain? · 2021-01-25T19:40:00.636Z · LW · GW

Thx!

Comment by Telofy on What trade should we make if we're all getting the new COVID strain? · 2020-12-28T22:51:40.465Z · LW · GW

Thank you! I’d like to keep it simple, so I’m considering some volatility ETFs, but they seem to come in the form of short-term and mid-term futures ETFs. Which ones would you recommend for this purpose?

Comment by Telofy on Honoring Petrov Day on LessWrong, in 2020 · 2020-09-26T12:23:08.556Z · LW · GW

Aw, consoling hugs!

Comment by Telofy on Effective Altruism from XYZ perspective · 2015-07-21T12:23:40.856Z · LW · GW

Rather than delay my reply until I’ve read everything you’ve linked, I’ll post a work-in-progress reply now.

Thanks for all the data! I hope I’ll have time to look into Open Borders some more in August.

Error theorists would say that the blog post “Effective Altruists are Cute but Wrong” is cute but wrong, but more generally the idea of using PageRank for morality is beautifully elegant (but beautifully elegant things have often turned out imperfect in practice in my experience). I still have to read the rest of the blog post though.

Comment by Telofy on Effective Altruism from XYZ perspective · 2015-07-14T11:46:38.947Z · LW · GW

Thanks! I hadn’t seen the formulae for the expected value of perfect information before. I haven’t taken the time to think them through yet, but maybe they’ll come in handy at some point.
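
For other readers who hadn’t seen them either, the standard formulation (in my own notation; the formula itself is the usual decision-theoretic one) compares deciding after the uncertainty is resolved with deciding under current beliefs:

$$\mathrm{EVPI} = \mathbb{E}_{\theta}\!\left[\max_{a} u(a, \theta)\right] - \max_{a} \mathbb{E}_{\theta}\!\left[u(a, \theta)\right]$$

Here $a$ ranges over the available actions, $\theta$ over the uncertain states, and $u(a, \theta)$ is the utility of taking action $a$ in state $\theta$. EVPI is always non-negative, since choosing after seeing $\theta$ can never do worse in expectation than committing beforehand.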

Comment by Telofy on Effective Altruism from XYZ perspective · 2015-07-14T11:44:33.579Z · LW · GW

I didn’t respond to your critiques that went in a more political direction because there was already discussion of those aspects there that I wouldn’t have been able to add anything to. There is concern, in the movement in general and in individual EA organizations, that because EAs are so predominantly computer scientists and philosophers, there is a great risk of incurring known and unknown unknowns. In the first category, more economists, for example, would be helpful; in the second, it will be important to bring people from a wide variety of demographics into the movement without compromising its core values. As a computer scientist, I’m pretty median again.

then there are lots of EA missed opportunities lying around waiting for someone to pick them up

Indeed. I’m not sure if the median EA is concerned about this problem yet, but I wouldn’t be surprised if they are. Many EA organizations are certainly very alert to the problem.

Followed to its logical conclusion, this outlook would result in a lot more concern about the West.

This concern manifests in movement-building (GWWC et al.) and capacity-building (80,000 Hours, CEA, et al.). There is also a concern, which I share but which may not yet be the median EA’s, that we should focus more on movement-wide capacity-building, networking, and some sort of quality-over-quantity approach to allow the movement to be better and more widely informed. (By “quantity” I don’t mean to denigrate anyone; I just mean more people like myself, who already feel welcomed in the movement because everyone speaks their dialect and whose peers are easily convinced too.)

Throughout the time that I’ve been part of the movement, the general sentiment – either in the movement as a whole or within my bubble of it – has shifted in some ways. One trend I’ve perceived is that in the earlier days there was more concern over trying vs. really trying, while now concern over putting one’s activism on a long-term sustainable basis has become more important. Again, this may be just my filter bubble. This is encouraging, as it shows that everyone is very well capable of updating, but it also indicates that, as of one or two years ago, we still had a bunch to learn even concerning rather core issues. In a few more years, I’ll probably be more confident that some core questions are not so much in flux anymore that new EAs can overlook or disregard them and thereby dilute what EA currently stands for or shift it in a direction I couldn’t identify with anymore.

Again, I’m not ignoring your points on political topics, I just don’t feel sufficiently well-informed to comment. I’ve been meaning to read David Roodman’s literature review on open borders–related concerns, since I greatly enjoyed some of his other work, but I haven’t yet. David Roodman now works for the Open Philanthropy Project.

Well, there is a question about what EA is. Is EA about being effectively altruistic within your existing value system? Or is it also about improving your value system to more effectively embody your terminal values? Is it about questioning even your terminal values to make sure they are effective and altruistic?

I’ve always perceived EA as whatever stands at the end of any such process – or maybe not the end but some critical threshold where a person realizes that they agree with the core tenets: that they value others’ well-being, and that greater well-being, or the well-being of more beings, weighs more heavily than lesser well-being or the well-being of fewer. If they reach such a threshold, I see all three processes as relevant.

Regardless of whether you are an antirealist, not all value systems are created equal.

Of course.

Their knowledge of history, politics, and object-level social science is low. … I'm doing the same thing: encouraging EAs to reflect on their value systems, and attain a broader geopolitical and historical context to evaluate their interventions.

Yes, thanks! That’s why I was most interested in your comment in this thread – that, and because all the other comments that piqued my interest in similar ways already had comprehensive replies below them when I found the thread.

This needs to be turned into a concrete strategy, and I’m sure CEA is already on that: identifying exactly what sorts of expertise are in short supply in the movement and networking among the people who possess that expertise. I’ve made some minimal-effort attempts to pitch EA to economists, but inviting such people to speak at events like EA Global is surely a much more effective way of drawing them and their insights into the movement. That’s not limited to economists, of course.

Do you have ideas for people or professions the movement would benefit from and strategies for drawing them in and making them feel welcome?

I just don't think a lot of EAs have thought their value systems through very thoroughly

Given how many philosophers there are in the movement, this would surprise me. Is it possible that it’s more the result of the ubiquitous disagreement between philosophers?

How do we know we aren't also deluded by present-day politics?

I’ve wondered about that in the context of moral progress. Sometimes the idea of moral progress is attacked on the grounds that proponents base their claims for moral progress on how history has developed into the direction of our current status quo, which is rather pointless since by that logic any historical trend toward the status quo would then become “moral progress.” However, by my moral standards the status quo is far from perfect.

Analogously I see that the political views EAs are led to hold are so heterogeneous that some have even thought about coining new terms for this political stance (such as “newtilitarianism”), luckily only in jest. (I’m not objecting to the pun but I’m wary of labels like that.) That these political views are at least somewhat uncommon in their combination suggests to me that we’re not falling into that trap, or at least making an uncommonly good effort of avoiding it. Since the trap is pretty much the default starting point for many of us, it’s likely we still have many legs trapped in it despite this “uncommonly good effort.” The metaphor is already getting awkward, so I’ll just add that some sort of contrarian hypercorrection would of course constitute just another trap. (As it happens, there’s another discussion of the importance of diversity in the context of Open Phil in that Vox article.)

Comment by Telofy on Effective Altruism from XYZ perspective · 2015-07-12T09:08:02.736Z · LW · GW

As someone said in another comment, there are the core tenets of EA, and there is your median EA. Since you only seem to have quibbles with the latter, I’ll address some of those, but I don’t feel that accepting or rejecting them is particularly important for being an EA in the context of the current form of the movement. We love discussing and challenging our views. Then again, I happen to agree with many median EA views.

which values people based on their contributions, not just their needs

VoiceOfRa put very concisely what I think is a median EA view here, but the comment is so deeply nested that I’m afraid it might get buried: “Even if he values human lives terminally, a utilitarian should assign unequal instrumental value to different human lives and make decision based on the combination of both.”

I don’t think EAs do a very good job of distinguishing their moral intuitions from good philosophical arguments

I think this has been mentioned in the comments but not very directly. The median EA view may be not to bother with philosophy at all, because the branches that still call themselves philosophy haven’t managed to come to a consensus on central issues over centuries, so there is little hope for an individual EA to achieve that.

However when I talk to EAs who do have a background in philosophy, I find that a lot of them are metaethical antirealists. Lukas Gloor, who also posted in this thread, has recently convinced me that antirealism, though admittedly unintuitive to me, is the more parsimonious view and thus the view under which I operate now. Under antirealism moral intuitions, or some core ones anyway, are all we have, so that there can be no philosophical arguments (and thus no good or bad ones) for them.

Even if this is not a median EA view, I would argue that most EAs act in accordance with it just out of concern for the cost-effectiveness of their movement-building work. It is not cost-effective to try to convince everyone of the most unintuitive inferences from one’s own moral system. However, among the things that are important to the individual EA, there are likely many that are very uncontroversial in most of society, and focusing on those views in one’s “evangelical” EA work is much more cost-effective.

Betting on a particular moral philosophy with a percentage of your income shows an immense amount of confidence, and extraordinary claims require extraordinary evidence.

From my moral vantage point, the alternative (I’ll consider a different counterfactual in a moment) – that I keep the money and spend it on myself, where its marginal positive impact on my happiness is easily two or three orders of magnitude lower, and my uncertainty over what will make me happy only slightly lower than with some top charities – would be the much more extraordinary claim.
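
To make the “two or three orders of magnitude” concrete, here’s a back-of-the-envelope version with illustrative income figures of my own (not data about any particular charity’s beneficiaries): under roughly logarithmic utility of consumption, $u(c) = \ln c$, the marginal utility of a dollar is $u'(c) = 1/c$, so the ratio of marginal utilities scales inversely with consumption levels:

$$\frac{u'(c_{\text{recipient}})}{u'(c_{\text{me}})} = \frac{c_{\text{me}}}{c_{\text{recipient}}} \approx \frac{\$30{,}000/\text{yr}}{\$300/\text{yr}} = 100$$

That’s two orders of magnitude, with a third plausible for the very poorest recipients or for utility that diminishes faster than logarithmically.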

You could break that up and note that in the end I’m not deciding to just “donate effectively” but to donate to a very specific intervention and charity, for example Animal Equality, which makes my decision much more shaky again. But I’d also have to make similarly specific, and probably only slightly less shaky, decisions when trying to spend the money on my own happiness.

However, the alternative might also be:

keeping your money in your piggy bank until more obvious opportunities emerge

That’s something the median EA has probably considered a good deal. Even at GiveWell, there was a time in 2013 when some of the staff pondered whether it would be better to hold off on their personal donations and donate a year later, once they had discovered better giving opportunities.

However, several of your arguments seem to stem from uncertainty in the sense of “there is substantial uncertainty, so we should hold off doing X until the uncertainty is reduced.” Trading off these elements in an expected-value framework and choosing the right counterfactuals is probably again a rather personal decision when it comes to investing one’s donation budget, but over time I’ve become less risk-averse and more ready to act under some uncertainty, which has hopefully brought me closer to maximizing the expected utility of my actions. Plus, I don’t expect any significant decreases in uncertainty about the best giving opportunities that I could wait for. There will hopefully be more opportunities with similar or only slightly greater levels of uncertainty, though.