Posts

Against Against Boredom 2021-05-16T18:19:59.909Z
TAI? 2021-03-30T12:41:29.790Z
(Pseudo) Mathematical Realism Bad? 2020-11-22T18:21:30.831Z
Libertarianism, Neoliberalism and Medicare for All? 2020-10-14T21:06:08.811Z
AI Boxing for Hardware-bound agents (aka the China alignment problem) 2020-05-08T15:50:12.915Z
Why is the mail so much better than the DMV? 2019-12-29T18:55:10.998Z
Many Turing Machines 2019-12-10T17:36:43.771Z

Comments

Comment by Logan Zoellner (logan-zoellner) on Can someone help me understand the arrow of time? · 2021-06-16T22:07:46.741Z · LW · GW

Solving the hard problem might be necessary for explaining why people have a quale of passing-time, but is not sufficient -- you don't have to have that particular quale.

Yes.

There are no good theories of time as an illusion, either. Not least because you have to solve the hard problem as part of them.

I would rate timeless MWI (along with the additional assertion that anything isomorphic to a mind is conscious) as a "good" theory in the following sense:  it is internally consistent and adequately describes the perceptions of conscious individuals at any given moment in time.  That is to say, at any given moment in time, there is no logical argument or evidence I am aware of which strongly contradicts this theory.  Its primary weakness is (as I mentioned) that it seems to do a poor job explaining why humans experience time as a series of moments (and not, say, as a single moment only or as a unified whole across all possible world lines).

No, collapse theories don't have to be dualistic.

Agreed, but I think you will find that in practice most advocates of collapse theory are dualists.

If anything is inherently subjective, or beyond the scope of science, then strong physicalism is false.

then strong physicalism is false.

Comment by Logan Zoellner (logan-zoellner) on Can someone help me understand the arrow of time? · 2021-06-16T22:04:04.167Z · LW · GW

I was definitely thinking more of Zen, but "claims have been exaggerated for rhetorical effect" is also a fair characterization of what I said.

Comment by Logan Zoellner (logan-zoellner) on Can someone help me understand the arrow of time? · 2021-06-16T15:05:46.572Z · LW · GW

I think you might be confusing two things: the arrow of time, and the hard problem of consciousness.

 

The arrow of time refers to the fact that there is a difference between the past and the future.  This is straightforward, and has to do with the fact that entropy increases over time (the 2nd law of thermodynamics).  This also explains lots of things, like why your brain has an easier time remembering the past than predicting the future.  In theory, if we lived in a universe where entropy was at a maximum (and all laws of physics were reversible), there would be no obvious difference between the past and the future.

 

The hard problem of consciousness asks: why do humans perceive time as a series of moments (or more fundamentally, why do humans perceive anything at all)?  Because perception is inherently subjective, this question is beyond the scope of scientific inquiry.  This accounts for the hardness of the hard problem of consciousness.  Of course the fact that perception is subjective and hence fundamentally beyond the reach of science doesn't stop philosophers from speculating about it.  

Around here at Less Wrong, the theories you are most likely to come across are "time is an illusion" or "perception is what an algorithm feels like from the inside".  Of course, both of these theories might be unsatisfying to you, since like all human beings you undoubtedly experience time as a series of moments, not as a timeless whole.

Unfortunately, the competing theories (e.g. "time is created when conscious beings cause quantum waveform collapse") are all pretty bad, since they depend on the existence of some non-physical "consciousness substance".  These theories are collectively called dualism and were espoused by such greats as Rene Descartes, but are now generally out of vogue.  The main problem with these theories is that the "consciousness substance" would itself be subject to metaphysical laws, and then we would again be stuck asking "but what is it about those laws that causes consciousness to arise?", leaving us in some sort of endless regress.  Talking about non-physical substances that can never be measured is also considered a big no-no under Occam's Razor.

There is a way out of this particular trap, often taken by Buddhist and Existentialist philosophers.  Namely, any time you ask them to explain consciousness, they shake their head and grumble "Existence is existence! It cannot be explained! It can only be experienced!"  While this neatly avoids the argument (by refusing to engage in it), it can certainly be frustrating if you want to understand what consciousness is.

Comment by Logan Zoellner (logan-zoellner) on Controlling Intelligent Agents The Only Way We Know How: Ideal Bureaucratic Structure (IBS) · 2021-05-26T22:26:56.539Z · LW · GW

My sense is that these are not fundamental components of a rationally applied bureaucratic structure, but rather of the limited information and communication capabilities of the agents that hold the positions within the bureaucratic structure. My sense is that AIs could overcome these challenges given some flexibility in structure based on some weighted voting mechanism by the AIs.

 

I think this is the essential question  that needs to be answered: Is the stratification of bureaucracies a result of the fixed limit on human cognitive capacity, or is it an inherent limitation of bureaucracy?

One way to answer such a question might be to look at the asymptotics of the situation.  Suppose that the number of "rules" governing an organization is proportional to the size of the organization.  The question would then be: does the complexity of the coordination problem also increase only linearly?  If so, it is reasonable to suppose that humans (with a finite capacity) would face a coordination problem but AI would not.

Suppose instead that the complexity of the coordination problem increases with the square of organization size.  In this case, as the size of an organization grows, AI might find the coordination harder and harder, but still tractable.  

Finally, what if the AI must consider every possible combination of rules in order to resolve the coordination problem?  In this case, the complexity of "fixing" a stratified bureaucracy is exponential in the size of the bureaucracy, and beyond a certain (slowly rising) threshold the coordination problem is intractable.
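To make those three regimes concrete, here is a small illustrative sketch (a toy framing of my own, not anything from the original post) of how many rule interactions an agent would have to check under each assumption:

```python
# Toy comparison of how much "checking" is needed to untangle a bureaucracy of
# n rules under three assumed scaling laws (the scaling laws are assumptions
# made purely for illustration).
def interactions(n_rules: int) -> dict:
    return {
        "linear (rules independent)": n_rules,
        "quadratic (pairwise conflicts)": n_rules * (n_rules - 1) // 2,
        "exponential (every subset of rules)": 2 ** n_rules,
    }

for n in (10, 100, 500):
    counts = interactions(n)
    print(n, {name: f"{count:.2e}" for name, count in counts.items()})
```

In the first two regimes the problem stays tractable for a sufficiently capable agent; in the third, even a few hundred rules is already hopeless, which is the intractability scenario above.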

My sense is that AIs could overcome these challenges given some flexibility in structure based on some weighted voting mechanism by the AIs.

If weighted voting is indeed a solution to the problem of bureaucratic stratification, we would expect this to be true of both human and AI organizations.  In this case, great effort should be put into discovering such structures because they would be of use in the present and not only in our AI dominated future.

It's not clear to me that if the bureaucratic norms and training could be updated for better rules and regulation to be imposed upon it why it would need to be overthrown.

Suppose the coordination problem is indeed intractable.  That is to say, once a bureaucracy has become sufficiently complex, it is impossible to reduce the complexity of the system without unpredictable and undesirable side-effects.  In this case, the optimal solution may be the one chosen by capitalism (and revolutionaries): periodically replace the bureaucracy once it is no longer near the efficiency frontier.

I would suggest that market competition and bureaucratic structure are along a continuum of structures for effectively and efficiently processing information.

There is undoubtedly a continuum of solutions between "survival of the fittest" capitalistic competition and "rules abiding" bureaucratic management.  The discovery of new "points" on this continuum (for example bureaucracy with capitalist characteristics) is something that deserves in-depth study.  

To take one example, the Bezos Mandate aims to structure communication between teams at Amazon more like a marketplace and less like a bureaucracy.  Google's 20% time is another example of purposely reducing management overhead in order to foster innovation.

It would be awesome if one could "fine tune" the level of competitiveness and thereby choose any point on this continuum.  If this were possible, one might even be able to use control theory to dynamically change the trade-off over time in order to maximize utility.

Comment by Logan Zoellner (logan-zoellner) on Controlling Intelligent Agents The Only Way We Know How: Ideal Bureaucratic Structure (IBS) · 2021-05-24T13:48:02.811Z · LW · GW

While I do think the rise of bureaucracies is inevitable, it's important to remember that there is a tradeoff between bureaucracy and innovation.

I'm not sure that the statement

A mature bureaucracy is an almost indestructible social structure.

is false so much as sub-optimal.  The easiest place to see this is in businesses.  Businesses follow a fairly predictable cycle:

  1. A new business is created that has access to some new innovative idea or technology that allows it to overcome entrenched rivals.
  2. As the business grows, it develops a bureaucracy that allows it to more efficiently "mine" its advantage and fend off competitors.  This increased business efficiency comes at a tradeoff with innovation.  The company becomes better at the things it does and worse at the things it doesn't.
  3. Eventually the world changes and what was once a business advantage is now a disadvantage.  However the business is now too entrenched in its own bureaucracy to change.
  4. New innovative rivals appear and destroy what remains of the business.

Or, to quote Paul Graham:

Companies never become less bureaucratic, but they do get killed by startups that haven't become bureaucratic yet, which amounts to the same thing.

I suspect that a similar cycle plays out in the realm of public governance as well, albeit on a much larger time scale.  Consider the Chinese concept of the Mandate of Heaven.  As governments age, they gradually become less responsive to the needs of the people until they are ultimately overthrown.  Indeed, one of the primary advantages of multi-party democracy is that the ruling party can be periodically overthrown without burning down the entire country first.

 

The basic energy behind this process is rule 6 of your bureaucratic process:

The duties of the position are based on general learnable rules and regulation, which are more or less firm and more or less comprehensive

Because bureaucracies follow a fixed set of rules (and because rules are much more likely to be added than repealed), the action of the bureaucracy becomes more stratified over time.  This stratification leads to paralysis because no individual agent is capable of change, even if they know what change is needed and want to implement it.  Creating a bureaucracy is creating a giant coordination problem that can only be solved by replacing the bureaucracy.

 

What does any of this mean for AI?

Will we use bureaucracies to govern AI?  Yes, of course we will.  I am doing some work with GPT-3, and OpenAI has already developed a set of rules governing its use, and a series of procedures for determining if those rules are being followed.

Can we imagine a single "perfect bureaucracy" that will govern all of AI on behalf of humans?  No.  Just like businesses and governments need to periodically die in order to allow innovation, so will the bureaucracies that govern AI.  Indeed one sub-optimal singularity would be if a single bureaucracy of AIs became so powerful that it could never be overthrown.  This would hopefully leave humans much better off than they are today, but permanently locked in at whatever level of development the bureaucracy had reached prior to ossification.

Is there some post-bureaucracy governance model that can give us the predictability/controllability of bureaucracy without the tradeoff of lost innovation?  If you consider a marketplace with capitalistic competition a "structure", then sure.  If AI is somehow able to solve the coordination problem that leads to ossification of bureaucracy (perhaps this is a result of the limits on humans' cognitive abilities), then maybe?  I feel like the tradeoff between rigid predictable rules and innovation is more fundamental than just the coordination problem, but I could be wrong.

Comment by Logan Zoellner (logan-zoellner) on What is the biggest crypto news of the past year? · 2021-05-22T20:45:42.542Z · LW · GW

The biggest crypto news of the past year was when Vitalik announced that Ethereum was moving its scaling strategy from focusing primarily on sharding to focusing primarily on L2 speedups.

This switch substantially decreases the complexity of the Ethereum 2.0 launch, allowing developers to set a target date of December instead of "one to two years from now".

The Ethereum 2.0 launch will reduce Ethereum's energy usage by 99%, and the roll out of Ethereum L2s will largely solve problems of high fees and congestion on the Ethereum blockchain.  Together, these two changes will all but guarantee Ethereum's status as the "final settlement layer" for transactions of value anywhere on Earth.

Comment by Logan Zoellner (logan-zoellner) on Utopic Nightmares · 2021-05-16T23:50:31.486Z · LW · GW

What reasons are those?  I can understand the idea that there are things worse than death, but I don't see what part of this makes it qualify.

 

Can you imagine why taking a drug that made you feel happy forever but cut you off from reality might be perceived as worse than death?

Comment by Logan Zoellner (logan-zoellner) on Against Against Boredom · 2021-05-16T21:03:06.050Z · LW · GW

Boredom is why we value even temporary negatives, because of that subsequent boost back up, which indicates optimizing for 

Yep, that is precisely the point of simulated annealing.  Accepting temporarily worse states lets you escape local maxima.
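For readers unfamiliar with the technique, here is a minimal simulated-annealing sketch (my own illustrative code, with an assumed 1/t cooling schedule and a made-up objective) showing how occasionally accepting a worse state is what lets the search escape local maxima:

```python
import math
import random

def anneal(f, x0, steps=10_000, temp0=1.0):
    x, best = x0, x0
    for t in range(1, steps + 1):
        temp = temp0 / t                      # cooling schedule (assumed 1/t)
        candidate = x + random.gauss(0, 0.5)  # random nearby proposal
        delta = f(candidate) - f(x)
        # Always accept improvements; accept regressions with probability
        # exp(delta / temp), so "temporary negatives" are allowed, mostly early on.
        if delta > 0 or random.random() < math.exp(delta / temp):
            x = candidate
        if f(x) > f(best):
            best = x
    return best

# A bumpy objective with many local maxima.
f = lambda x: -0.1 * x ** 2 + math.sin(5 * x)
print(anneal(f, x0=3.0))
```

If the acceptance rule were replaced with "only take improvements", the same search would get stuck on whichever bump it happened to start near.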

On a side note, I really appreciate that you took the time to write a post in response.  That was my first post on LW, and the engagement is very encouraging.

It was an interesting post and made me think about some things I hadn't in a while, thanks for writing it!

Comment by Logan Zoellner (logan-zoellner) on Utopic Nightmares · 2021-05-16T20:46:45.768Z · LW · GW

Aren't these single problems that deal with infinities rather than each being an infinite sequence of problems?  Would that kind of infinity bring about any sense of excitement or novelty more than discovering say, the nth digit of pi?

 

The n-th digit of pi is computable, meaning there exists a deterministic algorithm that runs in finite time and always gives you the right answer.  The n-th busy beaver number is not, meaning that discovering it will require new advances in mathematics.  I'm not claiming that you personally will find that problem interesting (although mathematicians certainly do).  I'm claiming that whatever field you do find interesting probably has similar classes of problems, offering a literally inexhaustible supply of interesting problems.

 

Out of curiosity, if we did run out of new exciting truths to discover and there was a way to feel the exact same thrill and novelty directly that you would have in those situations, would you take it?

No.  I would consider such a technology abhorrent for the same reason I consider taking a drug that would make me feel infinitely happy forever abhorrent.  I would literally prefer death to such a state.  If such a mindset seems unfathomable to you, consider reading the death of Socrates since he expresses the idea that there are things worse than death much more eloquently than I can.

Comment by Logan Zoellner (logan-zoellner) on Utopic Nightmares · 2021-05-16T16:45:50.578Z · LW · GW

Gödel's incompleteness implies that the general question "is statement X true?" (for arbitrary X) can never be answered by any fixed set of axioms.  Hence, finding new axioms and using them to prove new sets of statements is an endless problem.  Similar infinite problems exist in computability ("Does program X halt?"), computational complexity ("What is the Kolmogorov complexity of string X?"), and topology ("Are two structures which have properties X, Y, Z... in common homeomorphic?").

Why is a steady state utopia equal to us dying out?  I can see why that would be somewhat true given the preference we give now to the state of excitement at discovery and novelty, but why objectively?

I should clarify, this is a value judgement.  I personally consider existing in a steady state (or a finitely repeating set of states) morally equivalent to death, since creativity is one of my "terminal" values.

If we reach the point where we can safely add and edit our own emotions, I don't think removing one emotion that we deem counterproductive would be seen as negative.

Again, this is a value judgement.  I would consider modifying my mind so that I no longer cared about learning new things morally repugnant.

 

It's probably worth noting that my moral opinions seem to be in disagreement with many of the people around here: I place much less weight on avoidance of suffering and experiencing physical bliss, and much more on novelty of experience, helping others, and seeking truth, than the general feeling I get from people who want to maximize qualia or who don't consider orgasmium morally repugnant.

Comment by Logan Zoellner (logan-zoellner) on Utopic Nightmares · 2021-05-15T12:14:54.828Z · LW · GW

I don't really think endless boredom is as much of a risk as others seem to think.  Certainly not enough to be worth lobotomizing the entire human race in order to achieve some faux state of "eternal bliss".  Consider, for example, that Gödel's incompleteness implies there are a literally infinite number of math problems to be solved.  Math not your thing?  Why would we imagine there are only a finite number of advancements that can be made in dance, music, poetry, etc?  Are these fields less rich than mathematics somehow?

In my mind the only actual "utopia" is one of infinite endless growth and adventure.  Either we continue to grow forever, discovering new and exciting things, or we die out.  Any kind of "steady state utopia" is just an extended version of the latter.

Comment by Logan Zoellner (logan-zoellner) on Technocratic Plimsoll Line · 2021-05-15T11:28:41.135Z · LW · GW

It would be really neat if there were some sort of measure of where this line exists at various corporations, so we could figure out if there's any kind of correlation between the height of the line and the success of the organization.  If we think of the line as "skin in the game" vs "collecting a paycheck", the line is probably much lower at an organization like Google than at e.g. Walmart.  For senior programmers, a large portion of their compensation is stock options rather than salary.

Comment by Logan Zoellner (logan-zoellner) on Three reasons to expect long AI timelines · 2021-05-01T05:04:57.476Z · LW · GW

We still have many of those reactors built in the 1970s.  They are linked in the lazard data above as 'paid for' reactors.  They are $29 a megawatt-hour.  Solar hits as low as $31 a megawatt-hour, and natural gas $28 in the same 'paid for case'.

 

Your claim here is that under optimal regulatory policy we could not possibly do better today than with 1970's technology?

 

My other point was that if other nations could do it - not all of them have the same regulatory scheme.  If other nations could build reactors at a fraction of the price they would benefit.  And China has a strong incentive if this were true - they have a major pollution problem with coal.  But, "the industry has not broken ground on a new plant in China since late 2016".  

From the article you linked:

The 2011 meltdown at Japan’s Fukushima Daiichi plant shocked Chinese officials and made a strong impression on many Chinese citizens. A government survey in August 2017 found that only 40% of the public supported nuclear power development.

It seems perfectly reasonable to believe China too can suffer from regulatory failure due to public misconception.  In fact, given its state-driven economy, wouldn't we expect market forces to be even less effective at finding low-cost solutions than in Western countries?  Malinvestment seems to be a hallmark of the current Chinese system.

Comment by Logan Zoellner (logan-zoellner) on Three reasons to expect long AI timelines · 2021-05-01T04:35:39.052Z · LW · GW

In 2020 the average number of days that Americans teleworked more than doubled, from 2.4 to 5.8 per month.  If we assume that 100% of that work could be done by AGI and that all of those working days were replaced in a single year, that would be a 29% boost to productivity, just barely above the 25%/year growth definition of TAI.
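For what it's worth, the back-of-the-envelope arithmetic behind that 29% figure looks like this (the 20-working-days-per-month denominator is my assumption):

```python
# Rough check of the ~29% figure above (working days per month is assumed).
telework_days_per_month = 5.8
working_days_per_month = 20

share_of_work_replaced = telework_days_per_month / working_days_per_month
print(f"{share_of_work_replaced:.0%}")  # ~29%
```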

It is unlikely that 100% of such work can be automated (for example at-home learning makes up a large fraction of telework).  And much of what can be automated will be automated long before we reach AGI (travel agents, real estate, ...).

I'm not sure how putting AGI on existing robots makes them automatically more useful?  Neither my Roomba nor car manufacturing robots (to pick two extremes) can be greatly improved by additional intelligence.  Undoubtedly self-driving cars would be much easier (perhaps trivial) to implement given AGI, but self-driving cars are almost certainly a less-than-AGI-hard task.  Did you have some particular examples in mind of existing robots that need/benefit from AGI specifically?

Comment by Logan Zoellner (logan-zoellner) on Three reasons to expect long AI timelines · 2021-04-24T11:55:28.012Z · LW · GW

I don't think technological deployment is likely to take that long for AI's. With a physical device like a car or fridge, it takes time for people to set up the factories, and manufacture the devices. AI can be sent across the internet in moments.

 

Most economically important uses of AGI (self-driving cars, replacing fast-food workers) require physical infrastructure.  There are some areas (e.g. high-frequency stock trading and phone voice assistants) that do not, but those are largely automated already, so there won't be a sudden boost when they "cross the threshold" of AGI.

Comment by Logan Zoellner (logan-zoellner) on Three reasons to expect long AI timelines · 2021-04-24T11:51:26.145Z · LW · GW

Regulatory agencies and people don't have a choice but to adopt.  That is, it's not a voluntary act.  They either do it or they go broke/cease to matter.  This is something I will break out just more generally: if a country has a (robust, general) AI agent that can drive cars, they can immediately save paying several million people.   This means that any nation that 'slows down' adoption via regulation becomes uncompetitive on the global scale, and any individual firm that 'slows down' adoption goes broke because it's competitors can sell services below marginal cost.  

 

This argument seems to prove too much.  If regulators absolutely cannot regulate something because they will get wiped out by competitors, why does overregulation exist in any domain?  Taking nuclear power as an example, it is almost certainly true that nuclear could be 10x cheaper than existing power sources with appropriate regulation, yet no country has done this.

The whole point is that regulators DO NOT respond to economic incentives, because the incentives apply to those being regulated, not the regulators themselves.

Comment by Logan Zoellner (logan-zoellner) on What will GPT-4 be incapable of? · 2021-04-06T20:44:09.209Z · LW · GW

Play Go better than AlphaGo Zero.  AlphaGo Zero was trained using millions of games.  Even if GPT-4 is trained on all of the internet, there simply isn't enough training data for it to have comparable effectiveness.

Comment by Logan Zoellner (logan-zoellner) on TAI? · 2021-03-30T23:46:15.271Z · LW · GW

Yeah, I definitely think we're very early in the transition.  I would still say it's extremely likely (>90%) even given no new "breakthroughs".  

The real-life commercial uses of AI+robotics are still pretty limited at this point.  Off the top of my head I can only think of Roomba, Tesla, Kiva and those security robots in malls.

Anecdotally, from the people I talk to, deep learning + any application in science seems to yield immediate low-hanging fruit (one recent example being protein folding).  I think the limiting factor right now is that the number of deep learning + robotics experts is extremely small.  It's also the case that a robot has to be very cheap to compete with an employee making minimum wage (even in developed countries).  If there were 10000x as many deep learning experts and everyone in the world was earning $30/hour, I think we would see robots taking over many more jobs than we do presently.

I also think it's likely that better AI + more compute will dramatically accelerate this transition.  Maybe there will be some threshold at which this transition will become more obviously inevitable than it is today.  

 

Perhaps"when will TAI be developed?" is something that can only be answered retrospectively.  By way of analogy, it now seems obvious to us that the invention of the steam engine (1698) and flying shuttle (1733) marked the beginning of a major change in how humans worked, but it wasn't until the 1800's that those changes began to appear in the labor market.

Comment by Logan Zoellner (logan-zoellner) on (Pseudo) Mathematical Realism Bad? · 2020-11-24T02:21:12.851Z · LW · GW

No, thanks for the recommendation!

Comment by Logan Zoellner (logan-zoellner) on (Pseudo) Mathematical Realism Bad? · 2020-11-24T02:20:55.903Z · LW · GW

Initially the beings are pure minds existing in an empty universe, so there's no risk of dying or killing yourself, but plenty of driving yourself mad.  If they want a body, they have to imagine it into existence like anything else.  They reproduce by imagining other beings into existence.  I'm not really sure where the first one came from or how it learned anything, but at this point they have a thriving society and a culture for training new minds how to exist in harmony with the others.  One of the chief concerns of the beings is maintaining the norms of this culture, with the worst possible punishment being ostracism for people who don't play by the rules.

Objects imagined into existence follow the laws of physics you imagine along with them, so you could have ice that melts or a perpetual motion machine if you want that instead.

It's also possible to create "planes" with more restrictive rules (sort of like spinning up a VM in a computer).

Comment by Logan Zoellner (logan-zoellner) on (Pseudo) Mathematical Realism Bad? · 2020-11-22T22:09:32.300Z · LW · GW

Changing the title to "pseudo mathematical realism bad?"

Comment by Logan Zoellner (logan-zoellner) on Libertarianism, Neoliberalism and Medicare for All? · 2020-11-22T17:22:59.029Z · LW · GW

There's a lot to get into here, maybe I will start a separate post about "ideal tax policy".  

 

I think the "ideal" reference case for non-distortionary tax policy is one with zero taxes in which all public services are provided for by a magic genie somehow.

Comment by Logan Zoellner (logan-zoellner) on Libertarianism, Neoliberalism and Medicare for All? · 2020-11-22T17:20:26.709Z · LW · GW

Yep.  This is definitely not how it's done in the "real world".

In the "seat belts" example, this would involve replacing a law mandating seat-belts what a (presumably high) tax on selling vehicles without seatbelts set to equal the economic/social benefits of seat belts.

I think as a matter of pragmatism, there are cases where an outright ban is more/less reasonable than trying to determine the appropriate tax.  For example, I don't think anyone thinks that the "social  cost" of dumping nuclear waste into a river is something we actually want to contemplate.

Comment by Logan Zoellner (logan-zoellner) on Libertarianism, Neoliberalism and Medicare for All? · 2020-11-18T06:34:48.186Z · LW · GW

I think there is a good argument for a general principle in most cases of not subsidizing bad actors for stopping causing harm.

A carbon tax refunded in the form of a UBI is economically equivalent to a "low carbon subsidy" in which each citizen is paid for the amount of carbon they consume below the defined threshold.  In one case we are "penalizing bad behavior"; in the other we are "subsidizing people for avoiding bad behavior".

I agree that for the sake of optics we should "tax bad things" and "subsidize good ones" but from an economic point of view this is irrelevant.
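A toy numeric example (all numbers invented for illustration) makes the equivalence concrete:

```python
# A $50/ton carbon tax rebated equally as a UBI vs. a $50/ton subsidy for every
# ton emitted below the per-capita average (numbers and threshold assumed).
tax_per_ton = 50
emissions = {"alice": 2, "bob": 10}                 # tons of carbon per person
average = sum(emissions.values()) / len(emissions)  # per-capita average

for person, tons in emissions.items():
    rebate = tax_per_ton * average                      # equal per-capita UBI rebate
    net_under_tax = rebate - tax_per_ton * tons         # tax-and-rebate framing
    net_under_subsidy = tax_per_ton * (average - tons)  # subsidy-below-average framing
    print(person, net_under_tax, net_under_subsidy)     # identical by construction
```

The net transfer to each person is the same under both framings; only the label changes.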

 

But "avoiding introducing economic distortions" is, to me, not an obvious goal for a tax policy

Given two tax systems which produce the same amount of "income" for the government, we should prefer the one which leads to higher welfare overall.  This is why, for example, raising all of a government's income from tariffs is bad, because it uneconomically disadvantages imports leading to a less efficient economy overall.

A poll tax distorts in favor of the rich, a flat tax or VAT doesn't account for the declining marginal value of money to individuals (regressive in utility if not dollars)

I was suggesting a flat tax on income or consumption (VAT).  These should be identical over a lifetime.  A poll tax would be bad for obvious reasons.

I think that we should solve for "regressiveness" by doing a UBI, not by messing with the tax code.

WRT "declining marginal utility of money", I think this is over hyped.  Rich people don't consume dramatically more than those in the middle class.  To the extent that they spend their "excess" income on charity or investment, the marginal utility of those dollars is possibly higher than giving the same money to a already well-off middle class family.

I would much rather have a UBI in place than a minimum wage, if that choice were in front of me.

I think we agree here.

Comment by Logan Zoellner (logan-zoellner) on Propinquity Cities So Far · 2020-11-18T06:07:02.447Z · LW · GW

I picked an extreme example of over regulation as a caricature, not to prove the general case.  But needless to say California has also rejected well-reasoned proposals with an ability to make a real impact.

Comment by Logan Zoellner (logan-zoellner) on Propinquity Cities So Far · 2020-11-17T02:02:21.891Z · LW · GW

Don't basically all cities control density pretty tightly? I know that a lot of density restriction is just nimbies defending housing scarcity, but it can't all be that, can it?

 

I think this is where the whole post goes off the rails.

In the real world there are massive economic inefficiencies created by government restrictions on density.  Suggesting that we can fix these with a more complex government system is like suggesting we can solve the "wolves eat sheep" problem with bigger wolves.

Comment by Logan Zoellner (logan-zoellner) on Libertarianism, Neoliberalism and Medicare for All? · 2020-11-17T01:41:49.919Z · LW · GW

In that sense, I'm not sure an unfunded mandate is any different than a tax increase on a specific activity with the goal of reducing or offsetting that activity.

 

I do agree there is an important truth here.  

A "punitive tax" and a "funded mandate" are exactly identical from a Pareto-optimum point of view.  In one case the costs show up as higher prices, in the other as higher taxes, but the net effect should be the same.  But sometimes I think we should have a funded mandate (Medicare for all) and sometimes we should have a tax (carbon tax).  

Why?

I think it partly boils down to political expediency.  Most people agree that health-care is good and so it should be subsidized.  Most people think global warming is bad, hence it should be taxed.  

I also think we should choose whichever one is simpler.

Imagine a counterfactual world where we taxed "not having health insurance" and subsidized "negative carbon emitting activities".  Because everyone (presumably) needs the same type of insurance, we are creating additional work in which each individual is required to seek out and buy a product that is ultimately supposed to be identical for everyone.

Conversely, in order to subsidize negative carbon emissions, the government would be required to determine the carbon footprint of every individual and subsidize individuals whose footprint fell below some threshold.  This would be massively more complex than simply taxing carbon "at the source".  In fact, the easiest way to implement such a subsidy would be to implement a carbon tax and then give every individual a "carbon subsidy" that they could use to pay for the tax.  This is precisely what a carbon tax + UBI rebate does anyway.

Comment by Logan Zoellner (logan-zoellner) on Libertarianism, Neoliberalism and Medicare for All? · 2020-11-17T01:24:51.599Z · LW · GW

First, sometimes a mandate is effectively a way to counteract already-existing externalities without needing constant legal battles that make it too inefficient for civil lawsuits to fix.

 

How is a mandate better at avoiding "constant legal battles" than, e.g., a tax?

 

In that sense, I'm not sure an unfunded mandate is any different than a tax increase on a specific activity with the goal of reducing or offsetting that activity.

I think this intuition is correct.  I'm just advocating that the government should explicitly acknowledge the cost as a tax (e.g. a carbon tax) or explicitly subsidize it (e.g. Medicare for all).  "Hidden" taxes (in the form of unfunded mandates) seem like the worst possible method from an economic point of view, because regulators can easily lie about their costs.

I also think costs tend to be higher overall in the "hidden tax" situation.  For example, mandating that people buy private insurance creates additional paperwork.  Similarly, imagine what a "regulatory regime" for carbon would look like.  The government would have to determine what industries were allowed to emit carbon and at what levels.  This would create a massive bureaucracy as well as an incentive to play favorites (for example by allowing cement producers to emit more than natural gas producers or vice versa).  

Aren't "taxes" in general an unfunded mandate, backed only by threat of force? 

MMT or not, the government needs to raise revenue in order to pay its debts.  MMT is just an argument about the appropriate level of taxation.  I'm not aware of any MMTers who think the ideal level is 0, since that would inevitably lead to hyperinflation.  Ideally taxes should be as broad-based as possible to avoid introducing economic distortions.  That means a VAT or a flat tax on income.

 

Suppose tomorrow the US government passes a law that abolishes the federal minimum wage, but imposes a tax on all businesses calculated to be exactly the difference between the wages paid to any employee making less than $7.25/hr and what their wage would be at $7.25/hr.

This only works if each employee generates more than $7.25/hr in value for the business.  Otherwise, it drives businesses in low-productivity areas into bankruptcy and forces those workers into unemployment.  This is precisely the argument against a minimum wage and in favor of a UBI.
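Spelling the arithmetic out with made-up numbers (the $5/hr market wage is my assumption):

```python
# Under the quoted scheme the tax tops every low wage up to the minimum, so the
# total labor cost is $7.25/hr regardless of the posted wage.
market_wage = 5.00                 # $/hr the business currently pays (assumed)
minimum_wage = 7.25                # $/hr statutory floor
tax = minimum_wage - market_wage   # per-hour tax under the proposed scheme

total_labor_cost = market_wage + tax
print(f"employee must generate > ${total_labor_cost:.2f}/hr of value to keep the job")
```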

I think your intuitions are good but don't survive contact with the average level of intelligence, sanity, and knowledge in political practice.

Alas, this is not wrong.

Comment by Logan Zoellner (logan-zoellner) on The EMH Aten't Dead · 2020-08-17T12:20:49.527Z · LW · GW
I'm still curious if you would be willing to bet against a fund run exclusively by founders of the S&P500.

Those do underperform the S&P 500.

Oh yeah, I definitely agree that mutual funds are terrible. Pretty sure they're optimizing for management fees, though, not to actually outperform the market.


I'm still curious if you would be willing to bet against a fund run exclusively by founders vs the S&P 500. Saying the management fee for such a fund would be ridiculously high seems like a reasonable objection though.

For that matter, would you be willing to bet against SpaceX vs the S&P 500?

Comment by Logan Zoellner (logan-zoellner) on The EMH Aten't Dead · 2020-08-14T08:21:33.423Z · LW · GW
No, you would only assume that if you bill the capacity of that founder to work at zero. Successful founders have skill at managing companies that is distinct from having access to private information.

Care to elucidate the difference between "skilled at managing companies" and "skilled at investing"? Do you really claim that if I restricted the same set of people to buying/selling publicly tradable assets they would underperform the S&P 500?

Comment by Logan Zoellner (logan-zoellner) on Why is the mail so much better than the DMV? · 2020-08-13T14:19:17.114Z · LW · GW

Well, it looks like the correct answer was "the post office has avoided politicization"

😒

Comment by Logan Zoellner (logan-zoellner) on The EMH Aten't Dead · 2020-08-13T14:14:32.648Z · LW · GW
Plenty of Venture Capitalists underperform the market. Saying that every one of them beats the market is not based on real data.

I didn't say every Venture Capitalist beats the market. Venture Capital in particular seems like a hobby for people who are already rich. I said every founder of a $1B startup beat the market.

I propose the following bet: take any founder of a $1B startup that you please, strip them of all of their wealth, give them $1M cash. What percent of them do you think would see their net worth grow by more than the S&P 500 over the next 10 years? If the EMH is true, the answer should be 50%. Would you really be willing to bet that 50% of them will underperform the market?

Comment by Logan Zoellner (logan-zoellner) on The EMH Aten't Dead · 2020-08-13T14:10:06.111Z · LW · GW
Private information should be very hard to come by, it is not something that can be learned in a few minutes from an internet search.

I think we have different definitions of private information.

I have private information if I disagree with the substantial majority of people, even if everything I know is in principle freely available. The market is trading on the consensus expectation of the future. If that consensus is wrong and I know so, I have private information.

Specifically, when Tesla was trading at $600 or so, it was publicly available that they were building cars in a way that no other company could, but the public consensus was not that they were therefore the most valuable car company in the world.

Similarly, SpaceX is currently valued at $44B according to the public consensus. But I would be willing to bet a substantial sum of money that they are worth 5-10x that and people just haven't fully grasped the implications of Starlink and Starship.

When you think about private information this way, in order to have private information all you have to do is:

1) Disagree with the general consensus

2) Be right

Incidentally, those are precisely the skills that rationality is training you for. Most people aren't optimizing for the truth, they're optimizing for fitting in with their peers.


To me it doesn't look trivial nor easy at all: there are orders of magnitude more intelligent people than rich intelligent people.

Very few intelligent people are optimizing for "make as much money as possible". As a trivial example, almost anyone working in academia could get a massive pay raise by switching to private industry. In addition, people can be very intelligent without being rational, so even if they claim to be optimizing for wealth they might not be doing a very good job of it. There are hordes of very intelligent people who are goldbugs or young earth creationists or global warming deniers. Why should we expect these people to behave rationally when it comes to financial self-interest when they so blatantly fail to do so in other domains?

I'm not even sure I buy the idea that there are more intelligent people than rich people. The 90th percentile for wealth in the USA is north of $1M. Going by the "Mensa" definition of highly intelligent, only 2% of people qualify. That means there are 5x as many millionaires as geniuses.

Comment by Logan Zoellner (logan-zoellner) on The EMH Aten't Dead · 2020-05-16T15:47:30.736Z · LW · GW

I think you're understating the amount of private information available to anyone with a reasonable level of intelligence. If you have a decent level of curiosity, chances are that you know some things that the rest of the world hasn't "caught on to" yet. For example, most fans of Tesla probably realized that EVs are going to kill ICEs and that Tesla is at least 4 years ahead of anyone else in terms of building EVs, long before the sudden rise in Tesla stock in Jan 2020. Similarly, people who nerd out about epidemics predicted the scale of COVID-19 before the general public.

The extreme example of this is Venture Capital. People who are a bit "weird" and follow their hunches routinely start companies worth millions or billions of dollars. Every single one of them "beat the market" by tapping private information.

None of this invalidates the EMH (which as you pointed out is unfalsifiable). The key is figuring out how to take your personal unique insights and translate them into meaningful investments (with reasonable amounts of leverage and appropriate stop-losses). Of course, the easier it is to trade something, the more likely someone has "already had that idea", so predicting the S&P500 is harder than predicting an individual stock. But starting your own company is a power move so difficult that it's virtually unbeatable.

Comment by Logan Zoellner (logan-zoellner) on AI Boxing for Hardware-bound agents (aka the China alignment problem) · 2020-05-10T00:11:35.610Z · LW · GW
You are still being stupid, because you are ignoring effective tools and making the problem needlessly harder for yourself.

I think this is precisely where we disagree. I believe that we do not have effective tools for writing utility functions and we do have effective tools for designing at least one Nash Equilibrium that preserves human value, namely:

1) All entities have the right to hold and express their own values freely

2) All entities have the right to engage in positive-sum trades with other entities

3) Violence is anathema.

Some more about why I think humans are bad at writing utility functions:

I am extremely skeptical about anything of the form: "We will define a utility function that encodes human values." Machine learning is really good at misinterpreting utility functions written by humans. I think this problem will only get worse with a superintelligent AI.

I am more optimistic about goals of the form "Learn to ask what humans want". But I still think these will fail eventually. There are lots of questions even ardent utilitarians would have difficulty answering. For example, "Torture 1 person or give 3^^^3 people a slight headache?".

I'm not saying all efforts to design friendly AIs are pointless, or that we should willingly release paperclip maximizers on the world. Rather, I believe we boost our chances of preserving human existence and values by encouraging a multi-polar world with lots of competing (but non-violent) AIs. The competing plan of "don't create AI until we have designed the perfect utility function and hope that our AI is the dominant one" seems like it has a much higher risk of failure, especially in a world where other people will also be developing AI.

Importantly, we have the technology to deploy "build a world where people are mostly free and non-violent" today, and I don't think we have the technology to "design a utility function that is robust against misinterpretation by a recursively improving AI".


One additional aside

Suppose the AI has developed the tech to upload a human mind into a virtual paradise, and is deciding whether to do it or not.

I must confess the goals of this post are more modest than this. The Nash equilibrium I described is one that preserves human existence and values as they are; it does nothing in the domain of creating a virtual paradise where humans will enjoy infinite pleasure (and in fact actively avoids forcing this on people).

I suspect some people will try to build AIs that grant them infinite pleasure, and I do not begrudge them this (so long as they do so in a way that respects the rights of others to choose freely). Humans will fall into many camps: those who just want to be left alone, those who wish to pursue knowledge, those who wish to enjoy paradise. I want to build a world where all of those groups can co-exist without wiping out one another or being wiped out by a malevolent AI.

Comment by Logan Zoellner (logan-zoellner) on What does a positive outcome without alignment look like? · 2020-05-09T14:46:54.554Z · LW · GW
You clearly have some sort of grudge against or dislike of China. In the face of a pandemic, they want basically what we want, to stop it spreading and someone else to blame it on. Chinese people are not inherently evil.

I certainly don't think the Chinese are inherently evil. Rather, I think that, from the view of an American in the 1990s, a world dominated by a totalitarian China which engages in routine genocide and bans freedom of expression would be a "negative outcome to the rise of China".

This is a description of a Nash equilibria in human society. Their stability depends on humans having human values and capabilities.

Yes. Exactly. We should be trying to find a Nash equilibrium in which humans are still alive (and ideally relatively free to pursue their values) after the singularity. I suspect such a Nash equilibrium involves multiple AIs competing with strong norms against violence and focus on positive-sum trades.

But I don't see why any of the Nash equilibria between superintelligences will be friendly to humans.

This is precisely what we need to engineer! Unless your claim is that there is no Nash equilibrium in which humanity survives, which seems like a fairly hopeless standpoint to assume. If you are correct, we all die no matter what. If you are wrong and we act as if you were correct, we abandon our only hope of survival.

Why would one AI start shooting because the other AI did an action that benefited both equally?

Consider deep seabed mining. I would estimate the percent of humans who seriously care about (or are even aware of the existence of) the sponges living at the bottom of the deep ocean at <1%. Moreover, there are substantial positive economic gains that could potentially be split among multiple nations from mining deep sea nodules. Nonetheless, every attempt to legalize deep sea mining has run into a hopeless tangle of legal restrictions, because most countries view blocking their rivals as more useful than actually mining the deep sea.

If you have several AI's and one of them cares about humans, it might bargain for human survival with the others. But that implies some human managed to do some amount of alignment.

I would hope that some AIs have an interest in preserving humans for the same reason some humans care about protecting life on the deep seabed, but I don't think this is a necessary condition for ensuring humanity's survival in a post-singularity world. We should be trying to establish a Nash equilibrium in which even insignificant actors have their values and existence preserved.

My point is, I'm not sure that aligned AI (in the narrow technical sense of coherently extrapolated values) is even a well-defined term. Nor do I think it is an outcome to the singularity we can easily engineer, since it requires us to both engineer such an AI and to make sure that it is the dominant AI in the post-singularity world.

Comment by Logan Zoellner (logan-zoellner) on AI Boxing for Hardware-bound agents (aka the China alignment problem) · 2020-05-09T14:09:15.674Z · LW · GW
A lot of the approaches to the "China alignment problem" rely on modifying the game theoretic position, given a fixed utility function. Ie having weapons and threatening to use them. This only works against an opponent to which your weapons pose a real threat. If, 20 years after the start of Moof, the AI's can defend against all human weapons with ease, and can make any material goods using less raw materials and energy than the humans use, then the AI's lack a strong reason to keep us around.

If the AIs are a monolithic entity whose values are universally opposed to those of humans then, yes, we are doomed. But I don't think this has to be the case. If the post-singularity world consists of an ecosystem of AIs whose mutually competing interests cause them to balance one another and engage in positive-sum games, then humanity is preserved not because the AI fears us, but because that is the "norm of behavior" for agents in their society.

Yes, it is scary to imagine a future where humans are no longer at the helm, but I think it is possible to build a future where our values are tolerated and allowed to continue to exist.

By contrast, I am not optimistic about attempts to "extrapolate" human values to an AI capable of acts like turning the entire world into paperclips. Humans are greedy, superstitious and naive. Hopefully our AI descendants will be our better angels and build a world better than any that we can imagine.

Comment by Logan Zoellner (logan-zoellner) on AI Boxing for Hardware-bound agents (aka the China alignment problem) · 2020-05-09T13:56:29.408Z · LW · GW

I really like this response! We are thinking about some of the same math.

Some minor quibbles, and again I think "years" not "weeks" is an appropriate time-frame for "first Human AI -> AI surpasses all humans":

Therefore, in a hardware limited situation, your AI will have been training for about 2 years. So if your AI takes 20 subjective years to train, it is running at 10x human speed. If the AI development process involved trying 100 variations and then picking the one that works best, then your AI can run at 1000x human speed.

A three-year-old child does not take 20 subjective years to train. Even a 20-year-old adult human does not take 20 subjective years to train. We spend an awful lot of time sleeping, watching TV, etc. I doubt literally every second of that is mandatory for reaching the intelligence of an average adult human being.

At the moment, current supercomputers seem to have around enough compute to simulate every synapse in a human brain with floating point arithmetic, in real time. (Based on 10^14 synapses at 100 Hz, 10^17 flops) I doubt using accurate serial floating point operations to simulate noisy analogue neurons, as arranged by evolution is anywhere near optimal.

I think just the opposite. A synapse is not a FLOP. My estimate is closer to 10^19. Moreover, most of the top slots in the TOP500 list are vanity projects by governments or used for stuff like simulating nuclear explosions.
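To show how the two estimates relate, here is a rough breakdown (the per-synapse overhead factors below are my own assumptions, used only to illustrate how 10^17 and 10^19 can both fall out of the same synapse count):

```python
# Back-of-the-envelope brain-compute estimates (overhead factors assumed).
synapses = 1e14          # synapses in a human brain
firing_rate_hz = 100     # average firing rate

one_flop_per_event = synapses * firing_rate_hz   # ~1e16 flops if a synapse is one op
quoted_estimate = one_flop_per_event * 10        # ~1e17 (a synapse costs ~10 flops)
higher_estimate = one_flop_per_event * 1000      # ~1e19 (a synapse costs ~1000 flops)

print(f"{one_flop_per_event:.0e}  {quoted_estimate:.0e}  {higher_estimate:.0e}")
```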

Although, to be fair, once this curve collides with Moore's law, that 2nd objection will no longer be true.

Comment by Logan Zoellner (logan-zoellner) on AI Boxing for Hardware-bound agents (aka the China alignment problem) · 2020-05-09T13:37:37.017Z · LW · GW
Free trade can also have a toxic side. It could make sidelining human dignity in terms of economic efficiency the expected default.

Yes!

This means we need to seriously address problems like secular stagnation, climate change, and economic inequality.

The problem should remain essentially the same if we reframe the China problem as the US problem.

Saying there is no difference between the US and China is uncharitable.

Also, I specifically, named it the China problem in reference to this:

Suppose, living in the USA in the early 1990's, you were aware that there was a nation called China with the potential to be vastly more economically powerful than the USA and whose ideals were vastly different from your own.

Namely, the same strategies the USA would use to contain a rising China are the ones I would expect humanity to use to contain a rising AI.

If we really wanted to call it the "America" problem, the context would be:

Suppose in the year 1784, you were a leader in the British Empire. It was clear to you that at some point in the next century the USA would become the most powerful superpower in existence. How would you guarantee that the USA did not become a threat to your existence and values?

By that measure, I think the British succeeded, since most Brits I know are not worried that the USA is going to take them over or destroy their National Health Service.

FWIW, I also think the USA is mostly succeeding. China has been the world's largest economy for almost a decade, and yet they are still members of the UN, WTO, etc. They haven't committed any horrible acts such as invading Taiwan or censoring most of the internet outside of China, and the rest of the world seems to have reluctantly chosen the USA over China (especially in light of COVID-19). The mere fact that most people view the USA as vastly more powerful than the Chinese is a testament to just how good of a job the USA has done in shaping the world order to one that suits their own values.

Comment by Logan Zoellner (logan-zoellner) on AI Boxing for Hardware-bound agents (aka the China alignment problem) · 2020-05-09T07:31:16.085Z · LW · GW
My major concern with AI boxing is the possibility that the AI might just convince people to let it out

Agree. My point was boxing a human-level AI is in principle easy (especially if that AI exists on a special purpose device of which there is only one in the world), but in practice someone somewhere is going to unbox AI before it is even developed.

The biggest threat from AI comes from AI-owned AI with a hostile worldview -- no matter whether how the AI gets created. If we can't answer the question "how do we make sure AIs do the things we want them to do when we can't tell them all the things they shouldn't do?"
Beyond that, I'm not really worried about economic dominance in the context of AI. Given a slow takeoff scenario, the economy will be booming like crazy wherever AI has been exercised to its technological capacities even before AGI emerges.

I think there's a connection between these two things, but probably I haven't made it terribly clear. The reason I talked about economic interactions is because they're the best framework we currently have for describing positive-sum interactions between entities with vastly different levels of power.

I am certain that my bank knows much more about finance than I do. Likewise, my insurance company knows much more about insurance than I do. And my ISP probably knows more about networking than I do (although sometimes I wonder). If any of these entities wanted to totally screw me over at any point, they probably could. The reason I am able to successfully interact with them is not because they fear my retaliation or share my worldviews. But it is because they exist in a wider economy in which maintaining their reputation is valuable because it allows them to engage in positive-sum trades in the future.

Note that the degree to which this is true varies widely across time and space. People who are socially outcast in countries with poor rule of law cannot trust the bank. I propose that we ought to have less faith in our ability to control AI or its worldview and place more effort into making sure that potential AIs exist in a sociopolitical environment where it is to their benefit not to destroy us.

The reason I called this post the "China alignment problem" is that the techniques we might use to interact with China (a potentially economically powerful agent with an alien or even hostile worldview) are the same ones I think we should be using to align our interactions with AI. Our chances of changing China's (or an AI's) worldview to match our own are fairly slim, but our ability to ensure their "peaceful rise" is much greater.

I believe the best framework to do this is to establish a pluralistic society in which no single actor dominates, and where positive-sum trades are the default as enforced by collective action against those who threaten or abuse others.


Still, we were able to handle nuclear weapons so we should probably be able to handle this too.

Small nitpick, but "we were able to handle nuclear weapons" is a bit iffy. Looking up a list of near-misses during the Cold War is terrifying. Much less thinking about countries like Iran or North Korea going through a succession crisis.

Comment by Logan Zoellner (logan-zoellner) on AI Boxing for Hardware-bound agents (aka the China alignment problem) · 2020-05-09T02:13:08.314Z · LW · GW
In other words, you got the easy and useless part ("will it happen?") right, and the difficult and important part ("when will it happen?") wrong.

"Will it happen?" isn't vacuous or easy, generally speaking. I can think of lots of questions where I have no idea what the answer is, despite a "trend of ever increasing strength". For example:

Will Chess be solved?

Will faster than light travel be solved?

Will P=NP be solved?

Will the hard problem of consciousness be solved?

Will a Dyson sphere be constructed around Sol?

Will anthropogenic climate change cause Earth's temperature to rise by 4C?

Will Earth's population surpass 100 billion people?

Will the African Rhinoceros go extinct?

I feel obligated to point out that "predictions" of this caliber are the best you'll ever be able to do if you insist on throwing out any information more specific and granular than "historically, these metrics seem to move consistently upward/downward".

I've made specific statements about my beliefs for when Human-Level AI will be developed. If you disagree with these predictions, please state your own.

Comment by Logan Zoellner (logan-zoellner) on AI Boxing for Hardware-bound agents (aka the China alignment problem) · 2020-05-09T01:34:03.178Z · LW · GW
It modified its own algorithms to make better use of processor cache, bringing its speed from 500x human to 1000x human. It is making several publishable new results in AI research a day.

I think we disagree on what Moof looks like. I envision the first human-level AI also running at human-like speeds on a $10-million-plus platform and then accelerating according to Moore's law. This still results in pretty dramatic upheaval, but over the course of years, not weeks. I also expect humans will be using some pretty powerful sub-human AIs, so it's not as though the AI gets a free boost just for being in software.
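As a toy sketch of that trajectory (the $10M starting cost and the two-year doubling time are assumed round numbers for illustration, not claims from the comment):

```python
# Toy "Moof" trajectory: the first human-level AI runs at ~1x human speed on an
# expensive platform, then hardware price-performance doubles every couple of years.
platform_cost = 10_000_000   # dollars for the first human-level system (assumed)
doubling_years = 2.0         # assumed hardware price-performance doubling time

for year in range(0, 11, 2):
    speed = 2 ** (year / doubling_years)   # speed multiple on the same $10M budget
    cost_for_1x = platform_cost / speed    # cost to match the original 1x system
    print(f"year {year:2d}: ~{speed:3.0f}x human on $10M, or ~1x human for ${cost_for_1x / 1e6:.2f}M")
```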

Again, the reason is that I think the algorithms will be known well in advance, and it will be a race between most of the major players to build hardware fast enough to emulate human-level intelligence. The more the first human-level AI results from a software innovation rather than a Manhattan-Project-style hardware effort, the more likely we are to see Foom. If the first human-level AI runs on commodity hardware, or runs 500x faster than any human, we have already seen Foom.

Comment by Logan Zoellner (logan-zoellner) on AI Boxing for Hardware-bound agents (aka the China alignment problem) · 2020-05-09T01:22:35.647Z · LW · GW
Given that you emphasize hardware-bound agents: have you seen AI and Compute? A reasonably large fraction of the AI alignment community takes it quite seriously.

This trend is going to run into Moore's law as an upper ceiling very soon: within about a year, the trend line will require a full year of compute on the world's most powerful computer. What do you predict will happen then?
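As a back-of-the-envelope version of that claim (the reference run and the supercomputer figure below are rough assumptions; only the 3.4-month doubling time comes from the "AI and Compute" post):

```python
# Rough extrapolation of the "AI and Compute" trend toward the ceiling of
# "one year of the world's most powerful computer".
import math

doubling_months = 3.4    # doubling time reported by "AI and Compute"
reference_run = 1.9e3    # petaFLOP/s-days of the largest training run at the reference date (assumed)
top_machine = 400.0      # petaFLOP/s, stand-in for the world's fastest supercomputer (assumed)

ceiling = top_machine * 365                     # petaFLOP/s-days in one year of that machine
doublings = math.log2(ceiling / reference_run)  # doublings left before hitting the ceiling
print(f"{doublings:.1f} doublings ≈ {doublings * doubling_months:.0f} months after the reference run")
```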


"what are you doing if not encoding human values"

Interested in the answer to this, and in how much it looks like (or disagrees with) my proposal: building free trade, respect for individual autonomy, and censorship resistance into the core infrastructure and social institutions our world runs on.

Comment by Logan Zoellner (logan-zoellner) on AI Boxing for Hardware-bound agents (aka the China alignment problem) · 2020-05-08T22:32:54.218Z · LW · GW
In any case, this does not seem to stop the Chinese people from feeling happier than the US people.

Lots of happy people in China.

And yes, I expect that in 2050 it will be possible to monitor the behavior of each person in countries 24/7. I can’t say that it makes me happy, but I think that the vast majority will put up with this. I don't believe in a liberal democratic utopia, but the end of the world seems unlikely to me.

Call me a crazy optimist, but I think we can aim higher than "yes, you will be monitored 24/7, but at least humanity won't literally go extinct."

Comment by Logan Zoellner (logan-zoellner) on AI Boxing for Hardware-bound agents (aka the China alignment problem) · 2020-05-08T21:49:50.244Z · LW · GW
Why are some so often convinced that the victory of China in the AGI race will lead to the end of humanity?

I don't think a Chinese world order will result in the end of humanity, but I do think it will make stuff like this much more common. I am interested in creating a future I would actually want to live in.

The most prominent experts give a 50% chance of AI in 2099

How much would you be willing to bet that AI will not exist in 2060, and at what odds?

but I think that the probability of an existential disaster in this world will become less.

Are you arguing that a victory for Chinese totalitarianism makes Human extinction less likely than a liberal world order?

Comment by Logan Zoellner (logan-zoellner) on Why is the mail so much better than the DMV? · 2019-12-31T14:58:05.189Z · LW · GW

I think "private monopolies are worse than government ones" is probably true in my experience as well. Although some of this is the subjective experience of having to pay money to be treated badly.


I think this makes me believe more strongly in competition as the main reason why the USPS is comparatively well-run.


Edit:

I would still expect private monopolies to be run more cost-efficiently than government ones, although I'm not sure about cases like utilities, where profits are directly tied to costs by government regulation.

Comment by Logan Zoellner (logan-zoellner) on Many Turing Machines · 2019-12-15T21:48:44.311Z · LW · GW
For it to be a formal claim would require us knowing more physics than we do such that we would know the true metaphysics of the universe.

You are correct that I used Church-Turing as a shortcut to demonstrate my claim that the MWH is computable. However, I am not aware of anyone seriously claiming quantum physics is non-computable. People simulate quantum physics on computers all the time, although it is slow.


I'm inclined to view your description as a strawman of MWI

I don't think it's quite a strawman, since the point is that MTM is literally equivalent to the MWH. In math, saying "A is isomorphic to B, but B is easier to reason about" is something that is done all the time.


but it's also not an argument against MWI, only against MWI mattering to your purposes.

Yes.


Comment by Logan Zoellner (logan-zoellner) on Many Turing Machines · 2019-12-15T21:42:45.243Z · LW · GW

I like the Mathematical Universe Hypothesis for simplicity and internal consistency, but it seems like we're assuming a lot. And it's not as simple as all that, either. Where do we draw the line? Only computable functions? The whole Turing hierarchy? Non-standard Turing Machines? If we draw the line at "anything logically conceivable", I would worry that things like "a demon that can jump between different branches of the multiverse" ought to be popping into our reality all the time.

If we want our theory to be predictive, we should probably cut it off at "anything computable exists", but if predictability was our goal, why not go all the way back to "anything observable exists"?

Comment by Logan Zoellner (logan-zoellner) on Many Turing Machines · 2019-12-15T21:27:31.148Z · LW · GW
The MTM model is completely non-interacting

The MTM model is literally computing the same thing as the MWH. Specifically, suppose that for a given human brain I compute the sequence of events observed by that brain. Granted, this requires solving both the easy problem of consciousness and finding the grand unified theory, but I don't think anyone here is seriously suggesting those are inherently non-computable functions.

I suppose a reasonable objection is that the shortest program is the MWH, since then I don't have to determine when an observation happens. But if I instead ask for the fastest program in terms of time and memory efficiency, the MWH is a clear loser.
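One way to see the time/memory point, using a naive state-vector representation as a stand-in for "simulate every world" (my own illustration, not something from the original exchange):

```python
# Naive state-vector simulation of the full quantum state: n two-level systems
# require 2**n complex amplitudes, so memory explodes long before you get
# anywhere near a brain-sized system.
BYTES_PER_AMPLITUDE = 16  # one complex number at double precision

for n in (10, 30, 50, 300):
    amplitudes = 2 ** n                               # size of the full state vector
    gib = amplitudes * BYTES_PER_AMPLITUDE / 2 ** 30  # memory needed, in GiB
    print(f"n = {n:3d}: 2**{n} amplitudes ≈ {gib:.3g} GiB")
```

A program that only tracks what a single observer sees can discard almost all of those amplitudes, which is the sense in which the MWH loses on efficiency even though it wins on program length.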