Posts

Google’s Ethical AI team and AI Safety 2021-02-20T09:42:20.260Z
AGI Alignment Should Solve Corporate Alignment 2020-12-27T02:23:58.872Z
LDL 9: When Transfer Learning Fails 2017-12-12T00:22:48.316Z
LDL 8: A Silly Realization 2017-12-06T02:20:44.216Z
LDL 7: I wish I had a map 2017-11-30T02:03:57.713Z
LDL 6: Deep learning isn't about neural networks 2017-10-27T17:15:20.115Z
LDL 5: What to do first 2017-10-26T18:11:18.412Z
LDL 4: Big data is a pain in the ass 2017-10-25T20:59:41.007Z
LDL 3: Deep Learning Pedagogy is Hard 2017-10-24T18:15:27.233Z
LDL 2: Nonconvex Optimization 2017-10-20T18:20:54.915Z
Learning Deep Learning: Joining data science research as a mathematician 2017-10-19T19:14:01.823Z
Uncertainty in Deep Learning 2017-09-28T18:53:51.498Z
Why do people ____? 2012-05-04T04:20:36.854Z
My Elevator Pitch for FAI 2012-02-23T22:41:40.801Z
[LINK] Matrix-Style Learning 2011-12-13T00:41:52.281Z
[link] Women in Computer Science, Where to Find More Info? 2011-09-23T21:11:51.628Z
Computer Programs Rig Elections 2011-08-23T02:03:07.890Z
Best Textbook List Expansion 2011-08-08T11:17:33.462Z
Traveling to Europe 2011-05-18T22:48:30.933Z
Rationality Exercise: My Little Pony 2011-05-13T02:13:39.781Z
[POLL] Slutwalk 2011-05-08T07:00:38.842Z
What Else Would I Do To Make a Living? 2011-03-02T20:09:47.330Z
Deep Structure Determinism 2010-10-10T18:54:15.161Z

Comments

Comment by magfrump on How much chess engine progress is about adapting to bigger computers? · 2021-07-11T00:27:38.303Z · LW · GW

70% compute, 30% algo (give or take 10 percentage points) over the last 25 years. Without serious experiments, have a look at the Stockfish evolution at constant compute. That's a gain of +700 ELO points over ~8 years (on the high side, historically). For comparison, you gain ~70 ELO per double compute. Over 8 years one has on average gained ~400x compute, yielding +375 ELO. That's 700:375 ELO for compute:algo

Isn't that 70:30 algo:compute?
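Spelling out the arithmetic, here's a minimal sketch that just takes the quoted figures as given (700 ELO from engine improvements at fixed compute, 375 ELO attributed to compute):

```python
# Attribution ratio using the quoted figures, taken as given rather than re-derived.
algo_gain = 700      # ELO from Stockfish improvements at constant compute over ~8 years
compute_gain = 375   # ELO attributed to ~400x more compute over the same period
total = algo_gain + compute_gain

print(f"algo share:    {algo_gain / total:.0%}")     # ~65%
print(f"compute share: {compute_gain / total:.0%}")  # ~35%
```

Which rounds to roughly 70:30 algo:compute, not compute:algo.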

Comment by magfrump on Covid 4/9: Another Vaccine Passport Objection · 2021-04-09T16:45:00.706Z · LW · GW

I'm curious about what the state of evidence around long covid is now, and especially how protective vaccines are against it. I imagine there still isn't much data about it yet though.

Comment by magfrump on Covid 3/25: Own Goals · 2021-03-26T05:07:53.705Z · LW · GW

A friend of mine on Facebook notes that the instances of blood clots in Germany were concerning because there it's mostly young health care workers who are getting vaccinated, a population where it's both easier to distinguish small numbers of blood clots from chance and more concerning to see extreme side effects.

The rate is still low enough that pausing vaccination is (obviously) a dangerous move, but dismissing the case that blood clots may be caused by the vaccine isn't a fair assessment of the evidence, and that may matter in a couple of years, when the supply of non-AZ vaccines is no longer the limiting factor for the world.

Comment by magfrump on Another RadVac Testing Update · 2021-03-24T07:42:56.839Z · LW · GW

Do you have any thoughts on what you'd do differently to be more personally confident doing this again?

Comment by magfrump on Strong Evidence is Common · 2021-03-16T02:10:57.074Z · LW · GW

Maybe, but the US number lines up with 1% of the population, which lines up with the top-1% figure; if people outside the US are ~50x as likely to be top-1% at various hobbies, that's a bold statement that needs justification, not an obvious rule of thumb!

Or it could be across all time, which lines up with ~100 billion humans in history.

Comment by magfrump on Strong Evidence is Common · 2021-03-15T06:24:57.529Z · LW · GW

I think "a billion people in the world" is wrong here--it should only be about 75 million by pure multiplication.
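A minimal sketch of the multiplication in question; the population figures are round numbers I'm assuming, not taken from the post:

```python
# Rough "top 1%" head counts under different reference populations (assumed round numbers).
world_population = 7.8e9   # people alive today
humans_ever = 100e9        # rough estimate of all humans who have ever lived

print(f"top 1% of people alive today: ~{0.01 * world_population / 1e6:.0f} million")  # ~78 million
print(f"top 1% of all humans ever:    ~{0.01 * humans_ever / 1e9:.0f} billion")       # ~1 billion
```

The first line is the "about 75 million by pure multiplication" figure; the second matches the "across all time" reading mentioned in my other comment.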

Comment by magfrump on The case for aligning narrowly superhuman models · 2021-03-15T06:14:11.200Z · LW · GW

I see, I definitely didn't read that closely enough.

Comment by magfrump on The case for aligning narrowly superhuman models · 2021-03-14T21:41:20.671Z · LW · GW

Looks like the initial question was here and a result around it was posted here. At a glance I don't see the comments with counterexamples, and I do see a post with a formal result, which seems like a direct contradiction to what you're saying, though I'll look in more detail.

Coming back to the scaling question, I think I agree that multiplicative scaling over the whole model size is obviously wrong. To be more precise, if there's something like a Q-learning inner optimizer for two tasks, then you need the cross product of the state spaces, so the size of the Q-space could scale close-to-multiplicatively. But the model that condenses the full state space into the Q-space scales additively, and in general I'd expect the model part to be much bigger--like the Q-space has 100 dimensions and the model has 1 billion parameters, so adding a second model of 1 billion parameters and increasing the Q-space to 10k dimensions is mostly additive in practice, even if it's also multiplicative in a technical sense.
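To make that concrete, a toy count using the same round numbers (illustrative figures only, and crudely treating Q-space dimensions as if they were parameters):

```python
# Toy comparison: additive encoder growth vs. a multiplicative Q-space cross product.
encoder_params = 1_000_000_000   # "the model has 1 billion parameters" (per task)
q_dims_per_task = 100            # "the Q-space has 100 dimensions"

one_task = encoder_params + q_dims_per_task
two_tasks = 2 * encoder_params + q_dims_per_task ** 2   # encoders add, Q-spaces cross-multiply

print(two_tasks / one_task)   # ~2.0: total size is dominated by the additive encoder term
```

Even though the Q-space grew 100x, the overall object only roughly doubled.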

I'm going to update my probability that "GPT-3 can solve X, Y implies GPT-3 can solve X+Y," and take a closer look at the comments on the linked posts. This also makes me think that it might make sense to try to find simpler problems, even already-mostly-solved problems like Chess or algebra, and try to use this process to solve them with GPT-2, to build up the architecture and search for possible safety issues in the process.

Comment by magfrump on The case for aligning narrowly superhuman models · 2021-03-14T03:54:23.561Z · LW · GW

I'm replying on my phone right now because I can't stop thinking about it. I will try to remember to follow up when I can type more easily.

I think the vague shape of what I think I disagree about is how dense GPT-3's sets of implicit knowledge are.

I do think we agree that GPT-5000 will be broadly superhuman, even if it just has a grab bag of models in this way, for approximately the reasons you give.

I'm thinking about "intelligent behavior" as something like the set of real numbers, and "human behavior" as covering something like rational numbers, so we can get very close to most real numbers but it takes some effort to fill in the decimal expansion. Then I'm thinking of GPT-N as being something like integers+1/N. As N increases, this becomes close enough to the rational numbers to approximate real numbers, and can be very good at approximating some real numbers, but can't give you incomputable numbers (unaligned outcomes) and usually won't give you duplicitous behavior (numbers that look very simple at first approximation but actually aren't, like .2500000000000004, which seems to be 1/4 but secretly isn't). I'm not sure where that intuition comes from but I do think I endorse it with moderate confidence.

Basically I think for minimal circuit reasons that if "useful narrowly" emerges in GPT-N, then "useful in that same domain but capable of intentionally doing a treacherous turn" emerges later. My intuition is that this won't be until GPT-(N+3) or more, so if you are able to get past unintentional turns like "the next commenter gives bad advice" traps, this alignment work is very safe, and important to do as fast as possible (because attempting it later is dangerous!)

In a world where GPT-(N+1) can do a treacherous turn, this is very dangerous, because you might accidentally forget to check if GPT-(N-1) can do it, and get the treacherous turn.

My guess is that you would agree that "minimal circuit that gives good advice" is smaller than "circuit that gives good advice but will later betray you", and therefore there exist two model sizes where one is dangerous and one is safe but useful. I know I saw posts on this a while back, so there may be relevant math about what that gap might be, or it might be unproven but with some heuristics of what the best result probably is.

My intuition is that combining narrow models is multiplicative, so that adding a social manipulation model will always add an order of magnitude of complexity. My guess is that you don't share this intuition. You may think of model combination as additive, in which case any model bigger than a model that can betray you is very dangerous, or you might think the minimal circuit for betrayal is not very large, or you might think that GPT-2-nice would be able to give good advice in many ways so GPT-3 is already big enough to contain good advice plus betrayal in many ways.

In particular if combining models is multiplicative in complexity, a model could easily learn two different skills at the same time, while being many orders of magnitude away from being able to use those skills together.

Comment by magfrump on The case for aligning narrowly superhuman models · 2021-03-13T07:03:32.142Z · LW · GW

I think this is obscuring (my perception of) the disagreement a little bit.

I think what I'm saying is, GPT-3 probably doesn't have any general truth+noise models. But I would expect it to copy a truth+noise model from people, when the underlying model is simple.

I then expect GPT-3 to "secretly" have something like an interesting diagnostic model, and probably a few other narrowly superhuman skills.

But I would expect it to not have any kind of significant planning capacity, because that planning capacity is not simple.

In particular my expectation is that coherently putting knowledge from different domains together in generally useful ways is MUCH, MUCH harder than being highly superhuman in narrow domains. Therefore I expect Ajeya's approach to be both effective, because "narrowly superhuman" can exist, and reasonably safe, because the gap between "narrowly superhuman" or even "narrowly superhuman in many ways" and "broadly superhuman" is large so GPT-3 being broadly superhuman is unlikely.

Phrased differently, I am rejecting your idea of smartness-spectrum. My intuition is that levels of GPT-N competence will scale the way computers have always scaled at AI tasks--becoming usefully superhuman at a few very quickly, while taking much much longer to exhibit the kinds of intelligence that are worrying, like modeling human behavior for manipulation.

Comment by magfrump on The case for aligning narrowly superhuman models · 2021-03-12T03:48:36.192Z · LW · GW

This seems like it's using the wrong ontology to me.

Like, in my mind, there are things like medical diagnostics or predictions of pharmaceutical reactions, which are much easier cognitive tasks than general conversation, but which humans are specialized away from.

For example, imagine the severity of side effects from a specific medication can be computed by figuring out 15 variables about the person and putting them into a neural network with 5000 parameters, and the output is somewhere in a six-dimensional space, and this model is part of a general model of human reactions to chemicals.
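To picture how small that hypothetical model is: a 15-input, 6-output network with roughly 5,000 parameters is tiny by modern standards. A sketch in PyTorch, where the hidden-layer widths are my own arbitrary choices to land near 5,000 parameters:

```python
import torch.nn as nn

# Toy version of the hypothetical: 15 patient variables in, a 6-dimensional
# side-effect severity vector out, with roughly 5,000 parameters total.
model = nn.Sequential(
    nn.Linear(15, 64),
    nn.ReLU(),
    nn.Linear(64, 56),
    nn.ReLU(),
    nn.Linear(56, 6),
)

print(sum(p.numel() for p in model.parameters()))  # 5006
```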

Then GPT-3 would be in a great position to use people's reddit posts talking about medication side effects to find this network. I doubt that medical science in our current world could figure that out meaningfully. It would be strongly superhuman in this important medical task, but nowhere near superhuman in any other conversational task.

My intuition is that most professional occupations are dominated by problems like this, that are complex enough that we as humans can only capture them as intuitions, but simple enough that the "right" computational solution would be profoundly superhuman in that narrow domain, without being broadly superhuman in any autonomous sense.

Maybe a different reading of your comment is something like, there are so many of these things that if a human had access to superhuman abilities across all these individual narrow domains, that human could use it to create a decisive strategic advantage for themself, which does seem possibly very concerning.

Comment by magfrump on The case for aligning narrowly superhuman models · 2021-03-07T23:29:07.915Z · LW · GW

This post matches and specifies some intuitions I've had for a while about empirical research and I'm very happy it has been expanded.

Comment by magfrump on Google’s Ethical AI team and AI Safety · 2021-02-21T17:48:53.980Z · LW · GW

Upvoting this comment because it helped me understand why nobody seems to be engaging with what I see as the central point of my post.

Comment by magfrump on Google’s Ethical AI team and AI Safety · 2021-02-21T17:47:58.111Z · LW · GW

After reading some of this reddit thread I think I have a better picture of how people are reacting to these events. I will probably edit this post or write a follow-up.

My high level takeaway is:

  • people are afraid to engage in speech that will be interpreted as political, so are saying nothing.
  • nobody is actually making statements about my model of alignment deployment, possibly nobody is even thinking about it.

In the edit, or possibly in a separate followup post, I will try to present the model further removed from the specific events and actors involved, which I am only interested in as inputs to the implementation model anyway.

Comment by magfrump on Google’s Ethical AI team and AI Safety · 2021-02-20T20:17:21.461Z · LW · GW

I appreciate the thread as context for a different perspective, but it seems to me that it loses track of verifiable facts partway through (around here), though I don't mean to say it's wrong after that.

I think in terms of implementation of frameworks around AI, it still seems very meaningful to me how influence and responsibility are handled. I don't think that a federal agency specifically would do a good job handling an alignment plan, but I also don't think Yann LeCun setting things up on his own without a dedicated team could handle it.

Comment by magfrump on Google’s Ethical AI team and AI Safety · 2021-02-20T20:08:22.684Z · LW · GW

I would want to see a strong justification before deciding not to discuss something that is directly relevant to the purpose of the site.

Comment by magfrump on Google’s Ethical AI team and AI Safety · 2021-02-20T20:01:41.886Z · LW · GW

Noted that a statement has been made. I don't find it convincing, and even if I did, I don't think it changes the effect of the argument.

In particular, even if it was the case that both dismissals were completely justified, I think the chain of logic still holds.

Comment by magfrump on Google’s Ethical AI team and AI Safety · 2021-02-20T19:57:56.034Z · LW · GW

I think this makes sense, but I disagree with it as a factual assessment.

In particular I think "will make mistakes" is actually an example of some combination of inner and outer alignment problems that are exactly the focus of LW-style alignment.

I also tend to think that the failure to make this connection is perhaps the biggest single problem in both ethical AI and AI alignment spaces, and I continue to be confused about why no one else seems to take this perspective.

Comment by magfrump on Apply to Effective Altruism Funds now · 2021-02-14T05:34:32.191Z · LW · GW

I am currently writing fiction that features protagonists that are EAs.

This seems at least related to the infrastructure fund goal of presenting EA principles and exposing more people to them.

I think receiving a grant would make me more likely to aggressively pursue options to professionally edit, publish, and publicize the work. That feels kind of selfish and makes me self-conscious, but also wouldn't require a very large grant. It's hard for me to unwrap my feelings about this vs. the actual public good, so I'm asking here first.

Does this sound like a good use of a grant?

Comment by magfrump on Making Vaccine · 2021-02-12T06:01:05.814Z · LW · GW

Any preliminary results on side effects so far?

Comment by magfrump on Making Vaccine · 2021-02-07T04:51:46.612Z · LW · GW

How were you able to find someone who would give you an antibody test?

I made some effort to get an antibody test a few weeks ago but multiple sources refused to order or run one, even after I had an appointment that I showed up for in person.

Comment by magfrump on D&D.Sci II: The Sorceror's Personal Shopper · 2021-01-18T21:03:25.910Z · LW · GW

Welp, I spent five minutes plus trying to switch to the markdown editor to fix my spoilers and failed. Giving up now.

Comment by magfrump on Covid 1/7: The Fire of a Thousand Suns · 2021-01-10T21:34:58.312Z · LW · GW

I would expect the prior to be ending up with something similar to the flu vaccine, which we try to get everyone to take approximately yearly, and where the bigger safety concern is about people not taking it.

Comment by magfrump on Change My View: Incumbent religions still get too much leeway · 2021-01-08T18:05:38.933Z · LW · GW

I find both directions plausible. I do agree that I don't see any existing institutions ready to take its place, but looking at secular solstice, for example, I definitely expect that better institutions are possible.

There might be a "sufficiency stagnation" with mechanics similar to crowding out: since people have a "good enough" option they don't try to build better things, and centralized leadership causes institutional conservatism.

I would bet this is supported by worse outcomes for more centralized churches, like Unitarians vs. megachurches or Orthodox Catholics, but that's a weakly held belief.

Comment by magfrump on Change My View: Incumbent religions still get too much leeway · 2021-01-08T02:11:03.242Z · LW · GW

I think I find this plausible. An alternative to MichaelBowbly's take is that religion may crowd out other community organization efforts which could plausibly be better.

I'm thinking of unions, boys and girls clubs, community centers, active citizenship groups, meetup groups, and other types of groups that have never yet existed.

It could be that in practice introducing people to religious practices shows them examples of ways to organize their communities, but it could also be that religious community efforts are artificially propped up by government subsidies via being tax exempt.

The normative implication in this case, which I think is probably a good idea in general, is that you should focus on building intimate (not professionalized and distant) community groups to connect with people and exchange services.

Comment by magfrump on Fourth Wave Covid Toy Modeling · 2021-01-07T17:07:57.702Z · LW · GW

A toy model that makes some sense to me is that the two population distinction is (close to) literally true; that there's a subset of like 20% of people who have reduced their risk by 95%+, and models should really be considering only the other 80% of the population, which is much more homogeneous.

Then because you started with effectively 20% population immunity, that means R0 is actually substantially higher, and each additional piece of immunity is less significant because of that.

I haven't actually computed anything with this model so I don't know whether it is actually explanatory.

Comment by magfrump on Fourth Wave Covid Toy Modeling · 2021-01-07T03:59:57.769Z · LW · GW

I did some calculations of basic herd immunity thresholds based on fractal risk (without an infection model) a few months back, and the difference between splitting the population into high-exposure vs. low-exposure groups captures more than half the change from the limit of infinite splits. The threshold stopped changing almost entirely after three splits, which was only 6 subpopulations.
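I can't reproduce the original numbers, but here's a minimal sketch of that kind of calculation, assuming proportionate mixing where a group's contact rate and its infection risk both scale with its activity level; the activity values and R0 are arbitrary choices for illustration:

```python
import numpy as np

def herd_immunity_threshold(activities, weights, r0=3.0):
    """Disease-induced herd immunity threshold with heterogeneous exposure.

    Assumes proportionate mixing: infection risk scales with activity, so
    immunity accumulates fastest in the high-exposure groups. The susceptible
    fraction of group j after cumulative infection pressure phi is exp(-a_j * phi).
    """
    a = np.asarray(activities, float)
    w = np.asarray(weights, float) / np.sum(weights)
    norm = np.sum(w * a**2)  # normalization so R_eff = r0 when everyone is susceptible

    def r_eff(phi):
        return r0 * np.sum(w * a**2 * np.exp(-a * phi)) / norm

    lo, hi = 0.0, 1.0
    while r_eff(hi) > 1.0:   # bracket the crossing point
        hi *= 2.0
    for _ in range(60):      # bisect for the phi where R_eff hits 1
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if r_eff(mid) > 1.0 else (lo, mid)
    phi = 0.5 * (lo + hi)
    return 1.0 - np.sum(w * np.exp(-a * phi))  # fraction infected when R_eff = 1

# Homogeneous population, one high/low-exposure split, then splitting each group again.
print(herd_immunity_threshold([1.0], [1.0]))                        # 1 - 1/R0, ~0.67
print(herd_immunity_threshold([0.5, 1.5], [0.5, 0.5]))              # ~0.52
print(herd_immunity_threshold([0.25, 0.75, 1.25, 1.75], [0.25]*4))  # lower again, but a much smaller step
```

The pattern is the same as described above: the first split does most of the work, and further splits move the threshold less and less.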

With as many other variables as exist here I'm not confident that effect would persist, but my default guess is that adding fractal effects to the model will less than double the change from the homogeneous case, and possibly change very little at all, since the herd immunity threshold, and therefore the level of spread reduction, will be changed even less (especially with control systems).

That may end up being pretty significant in terms of actual number of deaths and infections at the end, but I would be very surprised if it changes whether or not there are peaks.

Comment by magfrump on Feature request: personal notes about other users · 2021-01-03T00:20:41.299Z · LW · GW

I'd like to use this feature, especially to keep track of people when I meet a user in the walled garden or IRL and need consistency to remember which user they are. This is a common feature in video games and without it I would have no idea who most of my friends in League of Legends are.

I wouldn't be that worried about privacy for the notes, since I'd expect few of them to contain sensitive information, though they might contain some awkward information.

Comment by magfrump on Anti-Aging: State of the Art · 2021-01-02T21:10:15.763Z · LW · GW

Yeah I think my main disagreements are 4 and 5.

Given stories I've heard about cryonics orgs, I'd put 10-50% on 5. Given my impression of neuroscience, I'd put 4 at 25-75%.

Given that I'm more pessimistic in general, I'd put an additional 2x penalty on my skepticism of their other guesses.

That puts me around 0.01%-20% spread, or one in ten thousand lower bound, which is better than I expected. If I was convinced that a cryo org was actually a responsible business that would be enough for me to try to make it happen.

Comment by magfrump on Anti-Aging: State of the Art · 2021-01-02T11:15:24.589Z · LW · GW

Even 0.2% seems quite optimistic to me. Without going into detail, anything from 3-8 seems like it could be 10% or lower and 12-14 seem nearly impossible to estimate. I wouldn't be surprised to find my personal estimate below one in a million.

Comment by magfrump on Make more land · 2021-01-02T10:19:18.054Z · LW · GW

I was trying to do a back-of-the-envelope calculation of total cost of work and total value created (where I'm using cost of rent as a (bad) proxy for (capturable) value created).

I definitely wouldn't assume that the government or any single agent would be doing the project, just that the overall amount of capturable value must be worth the investment costs; different parties could then pay portions of those costs in exchange for portions of, or rights to, that value. But I doubt that adding in the different parties involved would make my estimates more accurate.

Do you have a source for cost of similar projects? My estimates are definitely very bad for many reasons.

Comment by magfrump on Alignment Research Field Guide · 2020-12-31T07:10:47.123Z · LW · GW

I want to have this post in a physical book so that I can easily reference it.

It might actually work better as a standalone pamphlet, though. 

Comment by magfrump on Mistakes with Conservation of Expected Evidence · 2020-12-31T00:32:51.614Z · LW · GW

I like that this responds to a conflict between two of Eliezer's posts that are far apart in time. That seems like a strong indicator that it's actually building on something.

Either "just say the truth", or "just say whatever you feel you're expected to say" are both likely better strategies.

I find this believable but not obvious. For example, if the pressure on you is that you'll be executed for saying the truth, saying nothing is probably better than saying the truth. If the pressure on you is remembering being bullied on tumblr, and you're being asked if you disagree with the common wisdom at a LW meetup, saying nothing is better than saying what you feel expected to say.

I find it pretty plausible that those are rare circumstances where the triggering uncertainty state doesn't arise, but then there are some bounds on when the advice applies that haven't been discussed at all.

a little cherry-picking is OK

I think the claim being made here is that in most cases, it isn't practical to review all existing evidence, and if you attempt to draw out a representative sub-sample of existing evidence, it will necessarily line up with your opinion.

In cases where you can have an extended discussion you can mention contradicting evidence and at least mention that it is not persuasive, and possibly why. But in short conversations there might only be time for one substantial reference. I think that's distinct from what I would call "cherry-picking." (it does seem like it would create some weird dynamics where your estimate of the explainer's bias rises as you depart from uncertainty, but I think that's extrapolating too far for a review)

I think the comment with examples is helpful here.

I wonder about the impact of including something like this, especially with social examples, in a curated text that is at least partly intended for reading outside the community.

Comment by magfrump on The Forces of Blandness and the Disagreeable Majority · 2020-12-30T02:48:37.909Z · LW · GW

The factual point that moderate liberals are more censorious is easy to lose track of, and I saw confusion about it today that sent me back to this article.

I appreciate that this post starts from a study, and outlines not just the headline from the study but the sample size. I might appreciate more details on the numbers, such as how big the error bars are, especially for the subgroup stats.

Historical context links are good, and I confirm that they state what they claim to state.

Renee DiResta is no longer at New Knowledge, though her previous work there is still up on her site. I really like the exploration of her background. It might be nice to see something similar about Justin Murphy as well.

Swearing is negatively correlated with agreeableness

citation for this is in the link on the previous sentence; I might adjust the link so it's clear what it covers.

It’s often corporate caution that drives speech codes that restrict political controversy, obscenity, and muckraking/whistleblowing. It’s not just racist or far-right opinions that get silenced; media and social-media corporations worry about offending prudes, homophobes, Muslim extremists, the Chinese government, the US military, etc, etc.

This paragraph seems clearly true to me, but I'd prefer to see citations, especially since it's related to politics.

every guy with a printing press could publish a “newspaper” full of opinions and scurrilous insults

citation for this would be nice, or just a link to an example. Here's a discussion with sources.

I really like Zvi's comment tying this back to a more detailed model of Asymmetric Justice.

 

I really like this post overall; especially in the context of Asymmetric Justice it feels like something that's simple and obvious to me after reading it, while being invisible to me beforehand.

Comment by magfrump on Give it a google · 2020-12-30T02:25:02.412Z · LW · GW

it's not a simple enough question for easy answers. 

It's also plausible to me that it requires enough intersections (owns a house; rents the house out on AirBnB; in a single metro area; measures success in a reasonable way; writes about it on the internet) that the pool gets small enough that there are no results.

Looking for general advice (how to succeed as an AirBNB host) might give a model that's easy to fill in, like "you will succeed if the location is X appealing and there are f(X) listings or fewer."

That still seems like a pretty easy answer to me, but it could only be found with slightly better Google Fu.

I think that leads to a need for heuristics on how hard to try rephrasing things or when to give up quickly rather than getting sucked down a two day wiki walk rabbit hole.

Comment by magfrump on AGI Alignment Should Solve Corporate Alignment · 2020-12-30T00:30:45.981Z · LW · GW

I think you're misunderstanding my analogy.

I'm not trying to claim that if you can solve the (much harder and more general problem of) AGI alignment, then it should be able to solve the (simpler specific case of) corporate incentives.

It's true that many AGI architectures have no clear analogy to corporations, and if you are using something like a satisficer model with no black-box subagents, this isn't going to be a useful lens.

But many practical AI schemas have black-box submodules, and some formulations, like mesa-optimization or supervised amplification-distillation, explicitly highlight problems with black-box subagents.

I claim that an employee that destroys documentation so that they become irreplaceable to a company is a misaligned mesa-optimizer. Then I further claim that this suggests:

  • Company structures contain existing research on misaligned subagents. It's probably worth doing a literature review to see if some of those structures have insights that can be translated.
  • Given a schema for aligning sub-agents of an AGI, either the schema should also work on aligning employees at a company or there should be a clear reason it breaks down
    • if the analogy applies, one could test the alignment schema by actually running such a company, which is a natural experiment that isn't safely accessible for AI projects. This doesn't prove that the schema is safe, but I would expect aspects of the problem to be easier to understand via natural experiment than via doing math on a whiteboard.

Comment by magfrump on Where are the post-COVID complainers? · 2020-12-29T02:39:43.496Z · LW · GW

As remizidae points out, most of these restrictions are not effectively enforced by governments; they are enforced by individuals and social groups. In California, certainly, the restaurants and bars thing is enforced mostly by the government, but that's mostly a "governments can't act with nuance" problem.

But for things like gatherings of friends, I think this question still applies. The government cannot effectively enforce limits on that, but your group of friends certainly can.

And I think in that context, this question remains. That is, I think groups of friends in California should start plans for how to handle social norms under partial immunity. 

I have personally suggested this to friends a couple of times, and I've been met with a lack of enthusiasm. I think part of that is that the question is so politically tribal that taking any action that isn't MAXIMALLY SERIOUS is a betrayal of the tribe, even if it has no practical value.

Also, making any such plans public, versus just keeping a google doc of who among your friends has been vaccinated, creates a lot of social awkwardness, so I'd expect that in practice people will come up with their own personal, secret, and highly error-prone ways of handling it.

Comment by magfrump on Make more land · 2020-12-29T02:30:22.640Z · LW · GW

Thanks! Updated.

Comment by magfrump on AGI Alignment Should Solve Corporate Alignment · 2020-12-29T02:24:09.571Z · LW · GW

I think this misunderstands my purpose a little bit.

My point isn't that we should try to solve the problem of how to run a business smoothly. My point is that if you have a plan to create alignment in AI of some kind, it is probably valuable to ask how that plan would work if you applied it to a corporation.

Creating a CPU that doesn't lie about addition is easy, but most ML algorithms will make mistakes outside of their training distribution, and thinking of ML subcomponents as human employees is an intuition pump for how, or whether, your alignment plan interacts with those mistakes.

Comment by magfrump on Make more land · 2020-12-27T23:32:45.996Z · LW · GW

I like this post and would like to see it curated, conditional on the idea actually being good. There are a few places where I'd want more details about the world before knowing if this was true.

  • Who owns this land? I'm guessing this is part of the Guadalupe Watershed, though I'm not sure how I'd confirm that.

This watershed is owned and managed by the Santa Clara Valley Water District.

  • What legal limits are there on use of the land? Wikipedia notes:

The bay was designated a Ramsar Wetland of International Importance on February 2, 2012.

I don't know what that means, but it might be important.

  • How much does it cost to fill in land like this?

It looks like for pool removal there's a cost of between $20-$130 per cubic foot (edit: per cubic yard, thanks johnswentworth). Making the bad simplifying assumption of 6ft of depth and 50 square miles, that's 8.3 billion ft^3 (edit: 310 million cubic yards). Since the state of CA is very bad at cutting costs, let's use the high end cost estimate, which is about 1/8 of $1000, so that makes the cost estimate $1 trillion (edit: $300 billion).

With a trillion dollar price tag, this stops looking worthwhile pretty fast.

Spitballing about price estimates:

  • People have filled in things like this in the past, which suggests lower costs
  • Human effort may be much more expensive than it was previously
  • pool filling prices might include massive fixed costs and regulatory costs that wouldn't scale with volume
  • The state could auction the land to a private company that might do a better job negotiating costs

If fixed costs are 90% of pool-filling prices and would be negligible at this volume, and if we further use the lower bound of the per-volume cost, then we reduce the cost by 60x, to about $5 billion. Let's call that an 80% confidence interval, where the low end is clearly worth it and the high end clearly not.

  • How much does it cost to build a bunch of housing there?

First Google result says $65k-86k per unit, though economies of scale might bring that down. Then the suggested 2 million units would cost ~$130-170 billion; potentially significantly more or less.

  • How much value does the housing create?

The cheapest rents I could see with a casual search was something around $900/bedroom/month in Fremont.

Rounding up to $11k/year, it would take 6-8 years to recoup construction costs, not counting maintenance.

At the low end of land-filling costs, $16 billion, filling adds less than one year to the recoup timeline. At the original high end of around $1 trillion, it would take about 50 years to recoup the costs; at $300 billion, the timeline roughly triples to ~20 years.
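Pulling the recoup arithmetic together, a quick sketch using the round numbers above (construction taken at the midpoint of the estimate; everything here is back-of-the-envelope):

```python
# Recoup-timeline arithmetic from the rough figures in this comment.
units = 2_000_000
rent_per_unit_per_year = 11_000                 # ~$900/bedroom/month, rounded up
annual_value = units * rent_per_unit_per_year   # ~$22B/year of capturable rent

construction = 150e9                            # midpoint of the $130-170B construction estimate

for fill_cost in (16e9, 300e9, 1e12):           # low-end, revised, and original high-end fill estimates
    years = (construction + fill_cost) / annual_value
    print(f"fill ${fill_cost / 1e9:,.0f}B -> ~{years:.0f} years to recoup")
# fill $16B    -> ~8 years
# fill $300B   -> ~20 years
# fill $1,000B -> ~52 years
```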

 

Reaching the end of this, I think I'm uncertain about how economical the idea is. This is mostly because of large error bars around my cost calculations.

An investment that pays off in value created 50 years down the line is probably worth it for society, but very unlikely to happen given the investment environment today.

My ending impression is I want this post curated, because I want city managers and real estate investors to run these numbers (ideally being nerd-sniped by my terrible naïve calculations) and make the decision for themselves.

Comment by magfrump on The LessWrong 2019 Review · 2020-12-27T19:48:50.024Z · LW · GW

The review process has been a nice way for me to feel good about getting more involved in rereading articles and posting comments.

Comment by magfrump on Open & Welcome Thread - December 2020 · 2020-12-27T19:45:11.103Z · LW · GW

I'm less concerned about ticker symbols and more concerned about things like shorting an industry and what defines that industry, or buying treasury bonds on the right timescale based on what my guesses on volatility are.

Comment by magfrump on Open & Welcome Thread - December 2020 · 2020-12-27T17:25:37.031Z · LW · GW

An analogy that I use for measured IQ is that it relates to intelligence similarly to how placement in a badminton tournament relates to physical fitness.

I think this has been pretty effective for me. I think the overall analogy between intelligence and physical fitness has been developed well in places so I don't care to rehash it, but I'm not sure if I've seen the framing of IQ as a very specific (and not very prestigious, maybe somewhat pretentious) sport, which I think encapsulates the metaphor well.

Comment by magfrump on Book Review: The Secret Of Our Success · 2020-12-27T17:12:24.325Z · LW · GW

(not very serious)

This seems more like a mark against the post than in its favor, to me.

Comment by magfrump on Introduction to Cartesian Frames · 2020-12-27T08:31:50.748Z · LW · GW

I feel like this analogy should make it possible to compress the definition of some agents; for example, I would expect the agent that consists of the intersection of two agents to be representable as some combination of the two rows representing those agents. It's not clear to me how to do that, in particular because the elements of the matrix are "outcomes", which don't have any arithmetic structure.

Comment by magfrump on Sequence introduction: non-agent and multiagent models of mind · 2020-12-27T08:03:32.103Z · LW · GW

in retrospect it looks like understanding cognitive biases doesn’t actually make you substantially more effective

 

I'm not convinced that this is true, or that it's an important critique of the original sequences.

 

Looking at the definition of agent, I'm curious how this matches with Cartesian Frames.

Given that we want to learn to think about humans in a new way, we should look for ways to map the new way of thinking into a native mode of thought

I was very happy to read this pingback, but it's purely anecdotal. There are better sources for this idea.

 

Overall as a sequence index, it's not clear to me whether this post makes sense for inclusion. I can imagine a few possibilities

  1. Most of the rest of the sequence will be included in the curation, and
    1. This post lays out motivations and definitions that aren't repeated in the sequence, or
    2. The rest of the sequence will function fine without this post
  2. Only the sequence index will be included
    1. The motivation and definitions provide the most value
    2. The post summaries contain most of the value of the sequence
  3. A small subset of posts in the sequence will be included
    1. The subset is standalone ideas
      1. The index serves as motivation, connecting tissue, and a reference point
      2. The index isn't included and the posts stand alone
    2. The subset excludes tangents and only includes a central theme
      1. the index motivates and ties the included posts together
      2. the index isn't necessary as the posts have their own flow

Overall I don't know what to expect from the context of curating this post. Would be interested to hear more from people who have spent more time with the sequence.

Comment by magfrump on Instant stone (just add water!) · 2020-12-27T07:39:05.481Z · LW · GW

I think this comment is convincing to me that the post should NOT be curated.

I upvoted primarily for H1 because I enjoyed reading it, and partly for H2.

I think reading more gears-level descriptions of things from day to day life is helpful for keeping an accurate reductionist picture of reality. In particular, I want to reinforce in myself the idea that mundane inventions (1) have a long history with many steps (2) solve specific problems, and (3) are part of an ongoing process that contains problems yet to be solved.

That makes this post nice for me to read day to day, but it makes it definitively NOT a post that I care about revisiting or that I think expands the type of thinking that the curation is trying to build.

Comment by magfrump on Open & Welcome Thread - December 2020 · 2020-12-27T05:42:55.413Z · LW · GW

Are there simple walkthrough references for beginning investing?

I have failed to take sensible actions a few times with my savings because I'm stuck on stages like, how do I transfer money between my bank account and investment account, or how can I be confident that the investment I'm making is the investment I think I'm making. These are things that could probably be resolved in ten minutes with someone experienced.

Maybe I should just consult a financial advisor? In which case my question becomes how do I identify a good one/a reasonable fee/which subreddit should I look at for this first five minutes kind of advice?

Comment by magfrump on Open & Welcome Thread - December 2020 · 2020-12-27T05:33:45.022Z · LW · GW

Has anybody rat-adjacent written about Timnit Gebru leaving Google?

I'm curious (1) whether and how much people thought her work was related to long-term alignment, (2) what indirect factors were involved in her termination, such as whether her paper would make GOOG drop, and (3) what the decision making, and especially Jeff Dean's public response, says about people's faith in Google RMI and the DeepMind ethics board.

Comment by magfrump on AGI Alignment Should Solve Corporate Alignment · 2020-12-27T02:33:34.276Z · LW · GW

After writing this I came up with the following summary that might be much cleaner:

In the intelligence amplification/distillation framework, it seems like "hire a team of people" is a possible amplification step, in which case running a company is a special case of the framework. Therefore I'd expect to see useful analogies from advice for running a company that would apply directly to amplification safety measurements.

I also meant to mention the analogy between UX research and the Inception network architecture having middle layers that directly connect to the output, but I forgot.