Posts

Libertarianism, Neoliberalism and Medicare for All? 2020-10-14T21:06:08.811Z · score: 5 (6 votes)
AI Boxing for Hardware-bound agents (aka the China alignment problem) 2020-05-08T15:50:12.915Z · score: 11 (6 votes)
Why is the mail so much better than the DMV? 2019-12-29T18:55:10.998Z · score: 30 (16 votes)
Many Turing Machines 2019-12-10T17:36:43.771Z · score: -5 (4 votes)

Comments

Comment by logan-zoellner on The EMH Aten't Dead · 2020-08-17T12:20:49.527Z · score: 1 (1 votes) · LW · GW
I'm still curious if you would be willing to bet against a fund run exclusively by founders of the S&P500

Those do underperform the S&P 500.

Oh yeah, I definitely agree that mutual funds are terrible. Pretty sure they're optimizing for management fees, though, not to actually outperform the market.


I'm still curious if you would be willing to bet against a fund run exclusively by founders vs the S&P 500. Saying the management fee for such a fund would be ridiculously high seems like a reasonable objection though.

For that matter, would you be willing to bet against SpaceX vs the S&P 500?

Comment by logan-zoellner on The EMH Aten't Dead · 2020-08-14T08:21:33.423Z · score: 1 (1 votes) · LW · GW
No, you would only assume that if you bill the capacity of that founder to work at zero. Successful founders have skill at managing companies that is distinct from having access to private information.

Care to elucidate the difference between "skilled at managing companies" and "skilled at investing"? Do you really claim that if I restricted the same set of people to buying/selling publicly tradable assets they would underperform the S&P 500?

Comment by logan-zoellner on Why is the mail so much better than the DMV? · 2020-08-13T14:19:17.114Z · score: 1 (1 votes) · LW · GW

Well, it looks like the correct answer was "the post office has avoided politicization"

😒

Comment by logan-zoellner on The EMH Aten't Dead · 2020-08-13T14:14:32.648Z · score: 1 (1 votes) · LW · GW
Plenty of Venture Capitalists underperform the market. Saying that every one of them beats the market is not based on real data.

I didn't say every Venture Capitalist beats the market. Venture Capital in particular seems like a hobby for people who are already rich. I said every founder of a $1B startup beat the market.

I propose the following bet: take any founder of a $1B startup that you please, strip them of all of their wealth, and give them $1M cash. What percent of them do you think would see their net worth grow faster than the S&P 500 over the next 10 years? If the EMH is true, the answer should be 50%. Would you really be willing to bet that 50% of them will underperform the market?

Comment by logan-zoellner on The EMH Aten't Dead · 2020-08-13T14:10:06.111Z · score: 1 (1 votes) · LW · GW
Private information should be very hard to come by, it is not something that can be learned in a few minutes from an internet search.

I think we have different definitions of private information.

I have private information if I disagree with the substantial majority of people, even if everything I know is in principle freely available. The market is trading on the consensus expectation of the future. If that consensus is wrong and I know so, I have private information.

Specifically, when Tesla was trading at $600 or so, it was publicly known that they were building cars in a way that no other company could, but the public consensus was not that they were therefore the most valuable car company in the world.

Similarly, SpaceX is currently valued at $44B according to the public consensus. But I would be willing to bet a substantial sum of money that they are worth 5-10x that and people just haven't fully grasped the implications of Starlink and Starship.

When you think about private information this way, in order to have private information all you have to do is:

1) Disagree with the general consensus

2) Be right

Incidentally, those are precisely the skills that rationality is training you for. Most people aren't optimizing for the truth, they're optimizing for fitting in with their peers.


To me it doesn't look trivial nor easy at all: there are orders of magnitude more intelligent people than rich intelligent people.

Very few intelligent people are optimizing for "make as much money as possible". A trivial example: almost anyone working in academia could get a massive pay raise by switching to private industry. In addition, people can be very intelligent without being rational, so even if they claim to be optimizing for wealth they might not be doing a very good job of it. There are hordes of very intelligent people who are goldbugs or young earth creationists or global warming deniers. Why should we expect these people to behave rationally when it comes to their financial self-interest when they so blatantly fail to do so in other domains?

I'm not even sure I buy the idea that there are more intelligent people than rich people. The 90th percentile for wealth in the USA is north of $1M. Going by the "MENSA" definition of highly intelligent, only 2% of people qualify. That means there are roughly 5x as many millionaires as geniuses.

Comment by logan-zoellner on The EMH Aten't Dead · 2020-05-16T15:47:30.736Z · score: 1 (1 votes) · LW · GW

I think you're understating the amount of private information available to anyone with a reasonable level of intelligence. If you have a decent level of curiosity, chances are that you know some things that the rest of the world hasn't "caught onto" yet. For example, most fans of Tesla probably realized that EVs are going to kill ICEs and that Tesla is at least 4 years ahead of anyone else in terms of building EVs long before the sudden rise in Tesla stock in Jan 2020. Similarly, people who nerd out about epidemics predicted the scale of COVID-19 before the general public.

The extreme example of this is Venture Capital. People who are a bit "weird" and follow their hunches routinely start companies worth millions or billions of dollars. Every single one of them "beat the market" by tapping private information.

None of this invalidates the EMH (which as you pointed out is unfalsifiable). The key is figuring out how to take your personal unique insights and translate them into meaningful investments (with reasonable amounts of leverage and appropriate stop-losses). Of course, the easier it is to trade something, the more likely someone has "already had that idea", so predicting the S&P500 is harder than predicting an individual stock. But starting your own company is a power move so difficult that it's virtually unbeatable.

Comment by logan-zoellner on AI Boxing for Hardware-bound agents (aka the China alignment problem) · 2020-05-10T00:11:35.610Z · score: 1 (1 votes) · LW · GW
You are still being stupid, because you are ignoring effective tools and making the problem needlessly harder for yourself.

I think this is precisely where we disagree. I believe that we do not have effective tools for writing utility functions and we do have effective tools for designing at least one Nash Equilibrium that preserves human value, namely:

1) All entities have the right to hold and express their own values freely

2) All entities have the right to engage in positive-sum trades with other entities

3) Violence is anathema.

Some more about why I think humans are bad at writing utility functions:

I am extremely skeptical about anything of the form "We will define a utility function that encodes human values." Machine learning is really good at misinterpreting utility functions written by humans. I think this problem will only get worse with a super-intelligent AI.

I am more optimistic about goals of the form "Learn to ask what humans want". But I still think these will fail eventually. There are lots of questions even ardent utilitarians would have difficulty answering. For example, "Torture 1 person or give 3^^^3 people a slight headache?".

I'm not saying all efforts to design friendly AIs are pointless, or that we should willingly release paperclip maximizers on the world. Rather, I believe we boost our chances of preserving human existence and values by encouraging a multi-polar world with lots of competing (but non-violent) AIs. The competing plan of "don't create AI until we have designed the perfect utility function and hope that our AI is the dominant one" seems like it has a much higher risk of failure, especially in a world where other people will also be developing AI.

Importantly, we have the technology to deploy "build a world where people are mostly free and non-violent" today, and I don't think we have the technology to "design a utility function that is robust against misinterpretation by a recursively improving AI".


One additional aside

Suppose the AI has developed the tech to upload a human mind into a virtual paradise, and is deciding whether to do it or not.

I must confess the goals of this post are more modest than this. The Nash equilibrium I described is one that preserves human existence and values as they are; it does nothing in the domain of creating a virtual paradise where humans will enjoy infinite pleasure (and in fact actively avoids forcing this on people).

I suspect some people will try to build AIs that grant them infinite pleasure, and I do not begrudge them this (so long as they do so in a way that respects the rights of others to choose freely). Humans will fall into many camps: those who just want to be left alone, those who wish to pursue knowledge, those who wish to enjoy paradise. I want to build a world where all of those groups can co-exist without wiping out one another or being wiped out by a malevolent AI.

Comment by logan-zoellner on What does a positive outcome without alignment look like? · 2020-05-09T14:46:54.554Z · score: 1 (1 votes) · LW · GW
You clearly have some sort of grudge against or dislike of China. In the face of a pandemic, they want basically what we want: to stop it spreading and someone else to blame it on. Chinese people are not inherently evil.

I certainly don't think the Chinese are inherently evil. Rather, I think that, from the view of an American in the 1990s, a world dominated by a totalitarian China which engages in routine genocide and bans freedom of expression would be a "negative outcome to the rise of China".

This is a description of Nash equilibria in human society. Their stability depends on humans having human values and capabilities.

Yes. Exactly. We should be trying to find a Nash equilibrium in which humans are still alive (and ideally relatively free to pursue their values) after the singularity. I suspect such a Nash equilibrium involves multiple AIs competing with strong norms against violence and focus on positive-sum trades.

But I don't see why any of the Nash equilibria between superintelligences will be friendly to humans.

This is precisely what we need to engineer! Unless your claim is that there is no Nash equilibrium in which humanity survives, which seems like a fairly hopeless standpoint to assume. If you are correct, we all die. If you are wrong, we abandon our only hope of survival.

Why would one AI start shooting because the other AI did an action that benefited both equally?

Consider deep seabed mining. I would estimate the percentage of humans who seriously care about (or are even aware of the existence of) the sponges living at the bottom of the deep ocean at <1%. Moreover, there are substantial positive economic gains that could potentially be split among multiple nations from mining deep-sea nodules. Nonetheless, every attempt to legalize deep-sea mining has run into a hopeless tangle of legal restrictions because most countries view blocking their rivals as more useful than actually mining the deep sea.

If you have several AI's and one of them cares about humans, it might bargain for human survival with the others. But that implies some human managed to do some amount of alignment.

I would hope that some AIs have an interest in preserving humans for the same reason some humans care about protecting life on the deep seabed, but I don't think this is a necessary condition for ensuring humanity's survival in a post-singularity world. We should be trying to establish a Nash equilibrium in which even insignificant actors have their values and existence preserved.

My point is, I'm not sure that aligned AI (in the narrow technical sense of coherently extrapolated values) is even a well-defined term. Nor do I think it is an outcome to the singularity we can easily engineer, since it requires us to both engineer such an AI and to make sure that it is the dominant AI in the post-singularity world.

Comment by logan-zoellner on AI Boxing for Hardware-bound agents (aka the China alignment problem) · 2020-05-09T14:09:15.674Z · score: 1 (1 votes) · LW · GW
A lot of the approaches to the "China alignment problem" rely on modifying the game theoretic position, given a fixed utility function. Ie having weapons and threatening to use them. This only works against an opponent to which your weapons pose a real threat. If, 20 years after the start of Moof, the AI's can defend against all human weapons with ease, and can make any material goods using less raw materials and energy than the humans use, then the AI's lack a strong reason to keep us around.

If the AIs are a monolithic entity whose values are universally opposed to those of humans then, yes, we are doomed. But I don't think this has to be the case. If the post-singularity world consists of an ecosystem of AIs whose mutually competing interests cause them to balance one another and engage in positive-sum games, then humanity is preserved not because the AIs fear us, but because that is the "norm of behavior" for agents in their society.

Yes, it is scary to imagine a future where humans are no longer at the helm, but I think it is possible to build a future where our values are tolerated and allowed to continue to exist.

By contrast, I am not optimistic about attempts to "extrapolate" human values to an AI capable of acts like turning the entire world into paperclips. Humans are greedy, superstitious and naive. Hopefully our AI descendants will be our better angels and build a world better than any that we can imagine.

Comment by logan-zoellner on AI Boxing for Hardware-bound agents (aka the China alignment problem) · 2020-05-09T13:56:29.408Z · score: 1 (1 votes) · LW · GW

I really like this response! We are thinking about some of the same math.

Some minor quibbles, and again I think "years" not "weeks" is an appropriate time-frame for "first Human AI -> AI surpasses all humans"

Therefore, in a hardware limited situation, your AI will have been training for about 2 years. So if your AI takes 20 subjective years to train, it is running at 10x human speed. If the AI development process involved trying 100 variations and then picking the one that works best, then your AI can run at 1000x human speed.

A three-year-old child does not take 20 subjective years to train. Even a 20-year-old adult human does not take 20 subjective years to train. We spend an awful lot of time sleeping, watching TV, etc. I doubt literally every second of that is mandatory for reaching the intelligence of an average adult human being.

At the moment, current supercomputers seem to have around enough compute to simulate every synapse in a human brain with floating point arithmetic, in real time. (Based on 10^14 synapses at 100 Hz, 10^17 flops) I doubt using accurate serial floating point operations to simulate noisy analogue neurons, as arranged by evolution, is anywhere near optimal.

I think just the opposite. A synapse is not a FLOP. My estimate is closer to 10^19. Moreover, most of the top slots in the TOP500 list are vanity projects by governments or are used for stuff like simulating nuclear explosions.

Although, to be fair, once this curve collides with Moore's law, that 2nd objection will no longer be true.
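For what it's worth, here is a minimal back-of-envelope sketch of where a number like 10^19 could come from; the 1,000-FLOPs-per-synaptic-event factor is an illustrative assumption on my part, not a measured quantity:

```python
# Back-of-envelope for the 10^19 figure above. The per-event cost is an
# illustrative assumption (a synapse integrates analogue inputs, adapts its
# weight, etc.), not a measured number.
synapses = 1e14          # synapses in a human brain (common order-of-magnitude figure)
firing_rate_hz = 100     # average firing rate assumed in the quoted estimate
flops_per_event = 1e3    # assumed cost of faithfully simulating one synaptic event

print(f"{synapses * firing_rate_hz * flops_per_event:.0e} FLOP/s")  # -> 1e+19
```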

Comment by logan-zoellner on AI Boxing for Hardware-bound agents (aka the China alignment problem) · 2020-05-09T13:37:37.017Z · score: 1 (3 votes) · LW · GW
Free trade can also have a toxic side. It could make sidelining human dignity in terms of economic efficiency the expected default.

Yes!

This means we need to seriously address problems like secular stagnation, climate change, and economic inequality.

The problem should remain essentially the same if we reframe the China problem as the US problem.

Saying there is no difference between the US and China is uncharitable.

Also, I specifically named it the China problem in reference to this:

Suppose, living in the USA in the early 1990's, you were aware that there was a nation called China with the potential to be vastly more economically powerful than the USA and whose ideals were vastly different from your own.

Namely, the same strategies the USA would use to contain a rising China are the ones I would expect humanity to use to contain a rising AI.

If we really wanted to call it the "America" problem, the context would be:

Suppose in the year 1784, you were a leader in the British Empire. It was clear to you that at some point in the next century the USA would become the most powerful superpower in existence. How would you guarantee that the USA did not become a threat to your existence and values?

By that measure, I think the British succeeded, since most Brits I know are not worried that the USA is going to take them over or destroy their National Healthcare System.

FWIW, I also think the USA is mostly succeeding. China has been the world's largest economy for almost a decade and yet they are still members of the UN, WTO, etc. They haven't committed any horrible acts such as invading Taiwan or censoring most of the internet outside of China, and the rest of the world seems to have reluctantly chosen the USA over China (especially in light of COVID-19). The mere fact that most people view the USA as vastly more powerful than China is a testament to just how good a job the USA has done in shaping the world order into one that suits its own values.

Comment by logan-zoellner on AI Boxing for Hardware-bound agents (aka the China alignment problem) · 2020-05-09T07:31:16.085Z · score: 1 (1 votes) · LW · GW
My major concern with AI boxing is the possibility that the AI might just convince people to let it out

Agree. My point was that boxing a human-level AI is in principle easy (especially if that AI exists on a special-purpose device of which there is only one in the world), but in practice someone somewhere is going to unbox AI before it is even developed.

The biggest threat from AI comes from AI-owned AI with a hostile worldview -- no matter whether how the AI gets created. If we can't answer the question "how do we make sure AIs do the things we want them to do when we can't tell them all the things they shouldn't do?"
Beyond that, I'm not really worried about economic dominance in the context of AI. Given a slow takeoff scenario, the economy will be booming like crazy wherever AI has been exercised to its technological capacities even before AGI emerges.

I think there's a connection between these two things, but probably I haven't made it terribly clear. The reason I talked about economic interactions, is because they're the best framework we currently have for describing positive-sum interactions between entities with vastly different levels of power.

I am certain that my bank knows much more about finance than I do. Likewise, my insurance company knows much more about insurance than I do. And my ISP probably knows more about networking than I do (although sometimes I wonder). If any of these entities wanted to totally screw me over at any point, they probably could. The reason I am able to successfully interact with them is not that they fear my retaliation or share my worldview, but that they exist in a wider economy in which maintaining their reputation is valuable because it allows them to engage in positive-sum trades in the future.

Note that the degree to which this is true varies widely across time and space. People who are socially outcast in countries with poor rule of law cannot trust the bank. I propose that we ought to have less faith in our ability to control AI or its worldview and place more effort into making sure that potential AIs exist in a sociopolitical environment where it is to their benefit not to destroy us.

The reason I called this post the "China alignment problem" is that the techniques we might use to interact with China (a potentially economically powerful agent with an alien or even hostile worldview) are the same ones I think we should be using to align our interactions with AI. Our chances of changing China's (or an AI's) worldview to match our own are fairly slim, but our ability to ensure their "peaceful rise" is much greater.

I believe the best framework to do this is to establish a pluralistic society in which no single actor dominates, and where positive-sum trades are the default as enforced by collective action against those who threaten or abuse others.


Still, we were able to handle nuclear weapons, so we should probably be able to handle this too.

Small nitpick, but "we were able to handle nuclear weapons" is a bit iffy. Looking up a list of near-misses during the Cold War is terrifying. Much less thinking about countries like Iran or North Korea going through a succession crisis.

Comment by logan-zoellner on AI Boxing for Hardware-bound agents (aka the China alignment problem) · 2020-05-09T02:13:08.314Z · score: 1 (1 votes) · LW · GW
In other words, you got the easy and useless part ("will it happen?") right, and the difficult and important part ("when will it happen?") wrong.

"Will it happen?" isn't vacuous or easy, generally speaking. I can think of lots of questions where I have no idea what the answer is, despite a "trend of ever increasing strength". For example:

Will Chess be solved?

Will faster than light travel be solved?

Will P=NP be solved?

Will the hard problem of consciousness be solved?

Will a Dyson sphere be constructed around Sol?

Will anthropogenic climate change cause Earth's temperature to rise by 4C?

Will Earth's population surpass 100 billion people?

Will the African Rhinoceros go extinct?

I feel obligated to point out that "predictions" of this caliber are the best you'll ever be able to do if you insist on throwing out any information more specific and granular than "historically, these metrics seem to move consistently upward/downward".

I've made specific statements about my beliefs for when Human-Level AI will be developed. If you disagree with these predictions, please state your own.

Comment by logan-zoellner on AI Boxing for Hardware-bound agents (aka the China alignment problem) · 2020-05-09T01:34:03.178Z · score: 1 (1 votes) · LW · GW
It modified its own algorithms to take better use of processor cache, bringing its speed from 500x human to 1000x human. It is making several publishable new results in AI research a day.

I think we disagree on what Moof looks like. I envision the first human-level AI as also running at human-like speeds on a $10 million+ platform and then accelerating according to Moore's law. This still results in pretty dramatic upheaval but over the course of years, not weeks. I also expect humans will be using some pretty powerful sub-human AIs, so it's not like the AI gets a free boost just for being in software.

Again, the reason is that I think the algorithms will be known well in advance and it will be a race among most of the major players to build hardware fast enough to emulate human-level intelligence. The more the first human-level AI results from a software innovation rather than a Manhattan-project-style hardware effort, the more likely we are to see Foom. If the first human-level AI runs on commodity hardware, or runs 500x faster than any human, we have already seen Foom.
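As a rough illustration of why I say years rather than weeks, here is a minimal sketch; the two-year doubling time and the assumption of no algorithmic speedups are both simplifications for the sake of the arithmetic:

```python
import math

# Sketch of the "years, not weeks" claim, assuming a strictly hardware-bound AI:
# it starts at 1x human speed and only gets faster as hardware improves.
# Both numbers below are assumptions for illustration.
doubling_time_years = 2.0   # assumed Moore's-law-style doubling period
target_speedup = 500        # the 500x figure from the quoted scenario

print(f"{math.log2(target_speedup) * doubling_time_years:.1f} years")  # ~17.9 years
```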

Comment by logan-zoellner on AI Boxing for Hardware-bound agents (aka the China alignment problem) · 2020-05-09T01:22:35.647Z · score: 3 (2 votes) · LW · GW
Given that you emphasize hardware-bound agents: have you seen AI and Compute? A reasonably large fraction of the AI alignment community takes it quite seriously.

This trend is going to run into Moore's law as an upper ceiling very soon (within a year, the line will require a year of the world's most powerful computer). What do you predict will happen then?
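To make the ceiling argument concrete, here is a hedged sketch of the arithmetic; the "one month of the world's top machine today" starting point is purely an illustrative assumption:

```python
import math

# Rough illustration of the collision between the "AI and Compute" trend and
# hardware growth. Every number here is an assumption for illustration only.
demand_doubling_months = 3.4     # training-compute doubling time reported by the trend
hardware_doubling_months = 24.0  # assumed Moore's-law-style doubling time

# Suppose the largest training run today already needs ~1 month of the world's
# top machine (assumption); how long until the trend asks for a full year of it?
gap_doublings = math.log2(12 / 1.0)
relative_rate = 1 / demand_doubling_months - 1 / hardware_doubling_months
print(f"~{gap_doublings / relative_rate:.0f} months")  # ~14 months under these assumptions
```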


"what are you doing if not encoding human values"

Interested in the answer to this, and how much it looks like/disagrees with my proposal: building free trade, respect for individual autonomy, and censorship resistance into the core infrastructure and social institutions our world runs on.

Comment by logan-zoellner on AI Boxing for Hardware-bound agents (aka the China alignment problem) · 2020-05-08T22:32:54.218Z · score: 1 (1 votes) · LW · GW
In any case, this does not seem to stop the Chinese people from feeling happier than the US people.

Lots of happy people in China.

And yes, I expect that in 2050 it will be possible to monitor the behavior of each person in countries 24/7. I can’t say that it makes me happy, but I think that the vast majority will put up with this. I don't believe in a liberal democratic utopia, but the end of the world seems unlikely to me.

Call me a crazy optimist, but I think we can aim higher than: Yes, you will be monitored 24/7, but at least humanity won't literally go extinct.

Comment by logan-zoellner on AI Boxing for Hardware-bound agents (aka the China alignment problem) · 2020-05-08T21:49:50.244Z · score: 1 (1 votes) · LW · GW
Why are some so often convinced that the victory of China in the AGI race will lead to the end of humanity?

I don't think a Chinese world order will result in the end of humanity, but I do think it will make stuff like this much more common. I am interested in creating a future I would actually want to live in.

The most prominent experts give a 50% chance of AI in 2099

How much would you be willing to bet that AI will not exist in 2060, and at what odds?

but I think that the probability of an existential disaster in this world will become less.

Are you arguing that a victory for Chinese totalitarianism makes Human extinction less likely than a liberal world order?

Comment by logan-zoellner on Why is the mail so much better than the DMV? · 2019-12-31T14:58:05.189Z · score: 1 (1 votes) · LW · GW

I think "private monopolies are worse than government ones" is probably true in my experience as well. Although some of this is the subjective experience of having to pay money to be treated badly.


I think this makes me believe more strongly in competition as the main reason why the USPS is comparatively well-run.


Edit:

I would still expect private monopolies to be run more cost-efficiently than government ones. Although I'm not sure about cases like utilities where their profits are directly tied to their costs by government regulations.

Comment by logan-zoellner on Many Turing Machines · 2019-12-15T21:48:44.311Z · score: 1 (1 votes) · LW · GW
For it to be a formal claim would require us knowing more physics than we do such that we would know the true metaphysics of the universe.

You are correct that I used Church-Turing as a shortcut to demonstrate my claim that MWH is computable. However, I am not aware of anyone seriously claiming quantum physics is non-computable. People simulate quantum physics on computers all the time, although it is slow.


I'm inclined to view your description as a strawman of MWI

I don't think it's quite a strawman, since the point is that MTM is literally equivalent to MWH. In math, saying "A is isomorphic to B, but B is easier to reason about" is something that is done all the time.


but it's also not an argument against MWI, only against MWI mattering to your purposes.

Yes.


Comment by logan-zoellner on Many Turing Machines · 2019-12-15T21:42:45.243Z · score: 1 (1 votes) · LW · GW

I like the Mathematical Universe Hypothesis for simplicity and internal consistency, but it seems like we're assuming a lot. And it's not as simple as all that, either. Where do we draw the line? Only computable functions? The whole Turing hierarchy? Non-standard Turing Machines? If we draw the line at "anything logically conceivable", I would worry that things like "a demon that can jump between different branches of the multiverse" ought to be popping into our reality all the time.

If we want our theory to be predictive, we should probably cut it off at "anything computable exists", but if predictability was our goal, why not go all the way back to "anything observable exists"?

Comment by logan-zoellner on Many Turing Machines · 2019-12-15T21:27:31.148Z · score: 1 (1 votes) · LW · GW
The MTM model is completely non interacting

The MTM model is literally computing the same thing as the MWH. Specifically, suppose that for a given human brain I compute the sequence of events observed by that brain. Granted, this requires solving both the easy problem of consciousness and finding a grand unified theory. But I don't think anyone here is seriously suggesting those are inherently non-computable functions.

I suppose a reasonable objection is that the shortest program is MWH, since I don't have to determine when an observation happens. But if I ask for the fastest program in terms of time and memory efficiency instead, MWH is a clear loser.