Posts

Safe AIs through engineering principles 2018-01-20T17:31:58.326Z

Comments

Comment by Gerald Monroe (gerald-monroe) on Why has nuclear power been a flop? · 2021-04-20T17:49:51.210Z · LW · GW

Agree with everything but the last bit. It is possible to find fragments of the core itself still in the area within a kilometer or so of the reactor. These tiny fragments are high-level nuclear waste.

https://youtu.be/ejZyDvtX85Y

Comment by Gerald Monroe (gerald-monroe) on Why has nuclear power been a flop? · 2021-04-18T06:28:40.590Z · LW · GW

Fair enough.  Unfortunately you can walk around with a Geiger counter and perceive the dangers of nuclear in the 2 disaster areas.  You can't perceive the coal pollution in most areas except when it gets bad enough.

Comment by Gerald Monroe (gerald-monroe) on Why has nuclear power been a flop? · 2021-04-18T06:27:33.999Z · LW · GW

What is your definition of contaminate? If Devanney is correct that low doses of radiation are acceptable - and I believe he is - then much land which is described as ‘contaminated’ is in fact perfectly liveable. (Also see the people who illegally live in the Chernobyl exclusion zone). For a reasonable definition of ’contaminate’ then, it follows that a nuclear accident contaminates much smaller areas of land and is less expensive.

One issue is that it is not possible to rigorously prove it's livable, because the parameter you are trying to measure - extra cancers and subtle damage - won't show up for 20-30 years.  Over such a long timescale it is difficult to even tease out causation.  Your data will be incomplete, your subjects won't all have lived long enough for any radiation damage to matter, some of them smoke, etc.  But for the sake of argument I will grant the conclusion that radiation is harmless below a threshold.

I agree with you that the NRC's decision making is not rational, in that it is not factoring in the consequences of a decision to the host society.  It's factoring in the consequences of the decision to the NRC.  This is true for most regulatory agencies; at best they are captured by not wanting to do anything that endangers their own reputation.

Anyways, even if all of the above is true, the innovation incentive I mentioned above still isn't there.  Nuclear is also a small market, in that many advancements do not make economic sense because few reactors are being built, and this would remain true (up to a point) even if more were being built.

Solar and batteries have enormous market scale, and thus many improvements make economic sense.

Comment by Gerald Monroe (gerald-monroe) on Why has nuclear power been a flop? · 2021-04-17T03:28:28.490Z · LW · GW

Things you have neglected:

      1.  Accidents contaminating large areas of land.  These are events that occur infrequently and can negate the lifetime profits from many reactors (for example, Fukushima's price tag of ~$187 billion).

      2.  The very nature of what it means to innovate on or cost-reduce a product.  In any other industry, when you try to make something cheaper, you change the design to remove parts, or cheapen a part that is better than it needs to be.  Even if you accept that the NRC is over-zealous, the risk of #1 is a strong incentive not to do either.

For other competing sources of energy, the worst-case scenario is acceptable.  If you notice, grid-scale battery installations are outdoors and separated by a gap between each metal cabinet.  This is so that a lithium fire will be limited to a single battery cabinet.  That's an acceptable failure.  Ditto the worst case for other forms of power generation.  "Contaminating a nearby city and making it permanently unusable if things go badly enough" is not an acceptable scenario.

Anyways, what this means is that solar/wind/batteries are going to keep getting cheaper.  And they also have the potential to decarbonize the planet as well.  And you can keep innovating and reducing cost wherever possible because the worst case scenario when a solar panel/battery/wind turbine fails is a warranty claim or small fire.

Comment by Gerald Monroe (gerald-monroe) on Are there opportunities for small investors unavailable to big ones? · 2021-04-15T23:40:54.046Z · LW · GW

Right.  It's arguably not morally worse than various HFT and dark-pool and other fintech moneymaking tricks, though.  All of these involve buying a mispriced commodity (even by a fraction of a cent) and reselling it for its true market value.  And the buying/selling opportunities are unavailable to most people, in the same way you can't retail-scalp effectively unless you are using a computer program to do it.

My point is the 'less morally repugnant but still as profitable' hurdle isn't an easy one to clear, since it's not that morally repugnant.

Comment by Gerald Monroe (gerald-monroe) on Are there opportunities for small investors unavailable to big ones? · 2021-04-15T23:04:52.711Z · LW · GW

I know of an investment that fits all of these criteria: retail scalping.  Because you can usually return the items you bought if you fail to sell them for above your cost, the risk of a loss is low.  It's small scale - there might be '10' of the desired commodity showing up in an online store in a day, and your bots could snag just 1.  The ROI can trivially be 20-50% in a week as you 'flip' the item for a markup.

It is generally considered to be despicable behavior but is also currently legal.  

Downside risks: sometimes retail scalpers have been ejected from online marketplaces for hoarding essential goods.  For example, early in the pandemic there were some who hoarded hand sanitizer and were then banned from selling it online.

I don't practice it myself so I don't know all of the risks, but it seems to fit your request.  

Comment by Gerald Monroe (gerald-monroe) on The consequentialist case for social conservatism, or “Against Cultural Superstimuli” · 2021-04-15T22:39:22.258Z · LW · GW

Sure. My point is the OP is not just saying these traditions are traditional but that we should follow them because they are proven to work by the fact of our existence.

And I am just saying this is suboptimal. Even if I can't make up a new tradition - say a new holiday for my bi roommate and me and our children together and her girlfriend to all celebrate - I should at least steal working ideas from the best.

In slightly clearer terms:

What should I do in my life?

Rational answer: Output = max(utility_heuristic(alternative actions)) → "watch more catgirl porn"

Conservative answer: Output = Query("what did my parents do?") → "watch more Fox News"

Optimized answer: Output = Query("what did the most successful parents do?") → "invite parents to live in the house to provide child-raising help and find me a wife"
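As a tongue-in-cheek Python sketch (utility_heuristic and query are hypothetical stand-ins, not real functions):

```python
# A half-serious rendering of the three policies above. utility_heuristic
# and query are hypothetical stand-ins.

def rational_answer(alternative_actions, utility_heuristic):
    # Pick whatever my (superstimulus-hackable) heuristic scores highest.
    return max(alternative_actions, key=utility_heuristic)

def conservative_answer(query):
    # Copy the previous generation, whatever it did.
    return query("what did my parents do?")

def optimized_answer(query):
    # Copy the best-performing tradition, regardless of whose it is.
    return query("what did the most successful parents do?")
```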

Comment by Gerald Monroe (gerald-monroe) on Covid 4/15: Are We Seriously Doing This Again · 2021-04-15T21:17:11.192Z · LW · GW

Technically speaking that isn't true but practically speaking it is. (Just like technically speaking you could write a letter of complaint to Stalin)

Congress could find their behavior so egregious they pass a law authorizing you to sue.

Comment by Gerald Monroe (gerald-monroe) on The consequentialist case for social conservatism, or “Against Cultural Superstimuli” · 2021-04-15T20:14:54.171Z · LW · GW

The point is that now you're descending into nonsense.  If we cannot use rational thought to decide what to do, but instead have to trust some old irrational idea, which idea is the correct one?  Oh, 'someone' said that television rots our brains.  Ok are all the rest of their ideas good?  You are likely to find the answer is no.

Entire cultures have deep respect for their elders and are highly conservative in that whatever advice their elders give is treated as a good idea.  This works except when it turns out that the 'elders' have 10 different incompatible bits of advice, or things that simply don't work at all.

Comment by Gerald Monroe (gerald-monroe) on The consequentialist case for social conservatism, or “Against Cultural Superstimuli” · 2021-04-15T20:12:51.885Z · LW · GW

Focusing on the main point.  I am saying that if evolution has found sets of ideas that work, and you genuinely want your life to use the ideas that work the best (so you have many children), it appears you should adopt the ideas that work the best.

Which are not USA conservative values; they are Chinese and Asian values.  Everything else you are saying is simply that 'the ways that worked in the past are best'.  Which they are - for the purpose of having as much reproductive success as possible.  That is the only 'constraint' applied to them.

Comment by Gerald Monroe (gerald-monroe) on The consequentialist case for social conservatism, or “Against Cultural Superstimuli” · 2021-04-15T00:01:20.124Z · LW · GW

So what you are saying is, the Conservatives have a bunch of 'settings' for every aspect of our lives.  They 'worked in the past' and 'worked well enough to make it'.  Even when a particular setting doesn't make any rational sense, we should just 'have faith our ancestors knew what they were doing'.

Also, conservatives in many cases want the government to force us, through coercion and outright violence, to obey laws written from Conservative social 'values'.  The obvious example is marriage, where this is a one-size-fits-all legal contract: you either agree to the terms or you are not married.  There is no room for modernization or amendments, just "the arbitrary way inherited from our ancestors is the way or the highway".  (Even a pre-nup doesn't amend the marriage, it just exempts pre-marital assets.)

Your argument that "it worked well enough to get us here" is moderately compelling.  I can point out that other cultures, especially Asia, sometimes do things differently.  Therefore the "different settings" are also valid.  In fact in terms of success, due to higher population numbers, the Asian way appears to be 'more correct'.  If you really wanted to 'do what is best for future children', it seems we need to adopt some mixture of Chinese and Indian cultures, because apparently in objective terms they work the best.  Guess you better invite your parents to live with you.  Hope they can find you a wife.

My other thought is I have had arguments sometimes with my father, who doesn't understand why I am not interested in car tinkering or car culture.  To me, a car is a machine to reach a destination, and I should buy the one with the lowest total operating costs.  

He sees car culture as a conservative value.  Except, uh, it isn't one that has stood the test of time, it was "made up" somewhere in the 1920s by auto manufacturers.  

Similarly, conservatives trumpet things like celibacy before marriage as a value that has "stood the test of time", ignoring the fact that people used to marry far, far younger...

Anyways, back to the main subject.  If catgirl porn is your thing, well, you can watch Fox News or Storage Wars or Cops or catgirl porn in the evenings.  I'm not seeing a compelling argument how the first 3 are "better" for your life and well being if you really really like catgirls.

Sure, you might now feel unsatisfied with any sexual partners who are not catgirls.  But then again, Fox News is designed to make you feel dissatisfied with anything a Democrat is trying to do, feeling a sense of imminent doom, where the President is about to just cut loose with executive orders and let the entire population of Latin America through the border all at once in one day.  And defund the police in every city.  (this is what conservatives seem to really believe).

Storage Wars makes you feel dissatisfied that you are not running your own business scavenging millions in value.  Cops makes you feel unsafe and a Conservative might check that their firearm is loaded and aimed at the door after an episode.

Just not seeing a difference.

Comment by Gerald Monroe (gerald-monroe) on [AN #146]: Plausible stories of how we might fail to avert an existential catastrophe · 2021-04-14T20:34:13.697Z · LW · GW

I think I see Hinton's point. Unless cryo works, or a few other long shots, every human on earth is scheduled to die within a short period of time.

Our descendants will be intelligent and have some information we gave them from genes to whatever records we left and ideas we taught them.

This is true regardless of whether the descendants are silicon or flesh. So who cares which.

Either way the survival environment will require high intelligence, so no scenarios - save a small number of true annihilation scenarios - are distinguishable as good or bad outcomes.

Comment by Gerald Monroe (gerald-monroe) on A Brief Review of Current and Near-Future Methods of Genetic Engineering · 2021-04-13T22:09:29.891Z · LW · GW

Embryo selection is a weak form of genetic engineering, though - literally just restricting certain rolls of a die.

This is not how you get someone with a 1000 IQ; it's how you make 130 IQ more common.
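A minimal Monte Carlo sketch of this point - selection shifts the mean of the distribution, it doesn't create outliers.  All numbers here are illustrative assumptions, not real polygenic-score parameters:

```python
# Illustrative only: treats each embryo's IQ as an independent draw.
# Real embryo selection is weaker still (siblings share most variance,
# and polygenic scores capture only part of the rest).
import random

def best_of(n_embryos, mean=100, sd=15):
    # Each embryo is one roll of the die; selection keeps the best roll.
    return max(random.gauss(mean, sd) for _ in range(n_embryos))

trials = 10_000
avg = sum(best_of(10) for _ in range(trials)) / trials
print(f"mean IQ of best-of-10: {avg:.0f}")  # well above 100, nowhere near 1000
```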

Comment by Gerald Monroe (gerald-monroe) on A Brief Review of Current and Near-Future Methods of Genetic Engineering · 2021-04-13T22:08:40.109Z · LW · GW

Can't do it without enough power to overthrow a western government. Only thing that could even theoretically do that would be a TAI fighting on your side...

Comment by Gerald Monroe (gerald-monroe) on A Brief Review of Current and Near-Future Methods of Genetic Engineering · 2021-04-13T17:28:20.956Z · LW · GW

Oh. The reason you shouldn't go into genetics as a career is you will not be permitted to do anything on humans until after we have TAI. Your career will just be wasted. You should work on AI unless you are already in a PhD program.

There are countless legal and structural barriers in the way.

Comment by Gerald Monroe (gerald-monroe) on A Brief Review of Current and Near-Future Methods of Genetic Engineering · 2021-04-12T21:28:58.132Z · LW · GW

My support for the last paragraph is that many of the things we credit "exceptionally smart" people with doing, like solving equations, can be automated. Or exploring function spaces for a better solution. Or, well, any problem that has a checkable answer - which are the very things IQ tests measure.

An IQ test doesn't ask you to imagine a better aircraft that is both creative and meets design specs. It's always problems for which a clear answer exists.

Anyways, in my personal experience I have met a lot of "brittle" people. They have no inner visualization of how a machine actually works and just get stuck the moment they hit a problem that wasn't in a training exercise at school. Basic ideas just don't occur to them.

But yeah if you put me up against them on rigidly defined problems taught in a book I might be slightly slower.

Note that I personally test at around the 80-97th percentile depending on the test (MCAT was 97th). This tells me that whatever intelligence I have lucked into is substantially above average but not the best.

I am saying an army of people only as good as me - top quintile - can and will create TAI decades before genetic engineering will matter.

Comment by Gerald Monroe (gerald-monroe) on A Brief Review of Current and Near-Future Methods of Genetic Engineering · 2021-04-11T22:52:26.022Z · LW · GW

There's a hole in the assumptions in your last paragraph.  Implicitly you are saying that you believe TAI will benefit from, or require, the actions of a few 'super-genius' human beings to become possible.

There are some flaws in your statements to unpack:

      a.  The existence of human 'super geniuses'.  Nature can only do so much to improve our intelligence, being stuck with living cells as computational circuits in a finite brain volume, with finite energy supply.  It isn't clear how meaningful the intelligence differences really are in terms of utility on actual tasks.

     b.  The kind of tasks that intelligence testing can measure being relevant to the task of designing a TAI.  Thing is, the road to get there isn't going to involve a whole lot of someone solving math problems in their head as they pound a keyboard through the night, writing reams of custom code.  A whole lot of it will be careful, methodical organization of your problem into clear layers and carefully checked assumptions to prevent math leaks.  (A math leak would be where a heuristic being optimized for is slightly incorrect, leading the system to build a suboptimal solution.  I think of it as 'leaking' the delta between the incorrect approximation and the correct one.)  A lot of the "keyboard pounding" can be automated by building early bootstrap agents that find for us a near-optimal algorithm for a given piece of the AI problem.  Moreover, most code should be reused so we don't have humans re-solving the same problems over and over.

     c.  A lot of the pieces needed to get there from here are probably organizational.  You need thousands of people, some way to standardize everyone's efforts, and APIs and frameworks and other mechanisms to gain benefit from all these separate workers.  A single person is not going to meaningfully solve this problem by themselves.  You'll very likely need an immense framework of support software, and some method of iteratively improving it over time without significant regression (the failure mode of most large software projects).

 If a-c have a 90% chance of being correct, then the actual probability would be 0.1 * 0.25, or 2.5%, and probably not worth the hassle.  Note that there is a cost - the medical procedures to create genetically modified embryos carry risks of screwing something up, giving you humans who are doomed to die in some horrific way.

Just as a general policy: anything current flesh-and-blood humans are having trouble with, that smarter humans would have less trouble with, current humans can probably write a piece of software to do better than the efforts of any human.  With today's techniques.

Comment by Gerald Monroe (gerald-monroe) on Specializing in Problems We Don't Understand · 2021-04-11T07:19:52.696Z · LW · GW

So intentional problems would be markets, where noise is being injected and any clear pattern is being drained dry by automated systems, preventing you from converging on a model. Or public/private-key encryption, where you aren't supposed to be able to solve it? (But possibly you can.)

Comment by Gerald Monroe (gerald-monroe) on Specializing in Problems We Don't Understand · 2021-04-10T23:52:48.162Z · LW · GW

building fusion power plants, treating and preventing cancer, high-temperature superconductors, programmable contracts, genetic engineering, fluctuations in the value of money, biological and artificial neural networks.

vs

building bridges and skyscrapers, treating and preventing infections, satellites and GPS, cars and ships, oil wells and gas pipelines and power plants, cell networks and databases and websites.

 

Note that there is a way to split these sets into "problems where we can easily perform experiments, both real and simulated" and "problems where experimentation is extremely expensive and sometimes unethical".

Perhaps the element making these problems less tractable is that we cannot easily obtain a lot of good-quality information about the problem itself.

Fusion: you need giga-dollars to actually tinker with plasmas at the scale where you would get net power.

Cancer: you can easily find a way to kill cancer in a lab or a lab rat, but there are no functioning mockups of human bodies (yet) to try your approach on.  There are also government barriers that create shortages of workers and slow down any trial of new ideas.

HTSC: the physical models predict these poorly, and it is not certain a solution exists under STP.

Programmable contracts: easy to write but difficult to prove impervious to assault.

Genetic engineering: easy to do at small scales, difficult to do on complex creatures like humans due to the same barriers as cancer treatment.

Money fluctuations: there are hostile and irrational agents blocking you from learning clean information about how it works, so your model will be confused by the noise they are injecting [in real economies].

Biological NNs have the information barrier; artificial NNs seem tractable, they are just new.


How is this relevant? Well, to me it sounds like even if we invent a high-end AGI, it'll still be throttled on solving these problems until the right robotics/mockups are made for the AGI to get the information it needs to solve them.

The AGI will not be able to formulate a solution merely by reading human writings and journals on these subjects; we will need to authorize it to build thousands of robotic research systems, where it then generates its own experiments to fill in the gaps in our knowledge and learn enough to solve them.

Comment by Gerald Monroe (gerald-monroe) on Solving the whole AGI control problem, version 0.0001 · 2021-04-09T17:25:07.813Z · LW · GW

I think you are missing something critical.

What do we need AGI for that mere 2021 narrow agents can't do?

The top item we need is a system that can keep us biologically and mentally alive as long as possible.

Such an AGI is constrained by time and will constantly be in situations where all choices cause some harm to a person.

Comment by Gerald Monroe (gerald-monroe) on Solving the whole AGI control problem, version 0.0001 · 2021-04-09T07:13:29.547Z · LW · GW

One comment: for a realtime control system, the trolley problem isn't even an ethical dilemma.

At design time, you made your system choose minimum[expected harm done(possible options)].

In the real world, harm done is never zero.  For a system calculating the risks of each path taken, every possible path has a non-zero amount of possible harm.

And every timestep [30-1000 times a second, generally] the system must output a decision.  "Leaving the lever alone" is also a decision, and there is no reason to privilege it over "flipping it".

So a properly engineered system will, the instant it is able to observe the facts of the trolley problem (and maybe several frames later for filtering reasons), switch to the path with a single person tied to the tracks.

It has no sense of empathy or guilt and for the programmers looking at the decision later, well, it worked as intended.

Stopping the system when this happens has the consequence of killing everyone on the other track and is incorrect behavior and a bug you need to fix.
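A minimal sketch of this control-loop framing, with hypothetical action names and harm estimates - every timestep, the system emits whichever action currently minimizes expected harm:

```python
# A sketch of the per-timestep decision rule described above. The action
# names and harm numbers are hypothetical stand-ins for a learned or
# engineered risk model.

def expected_harm(action, world_state):
    # In a real system this would be a risk estimate computed from
    # perception outputs; here it is a stub lookup.
    return world_state["harm_estimates"][action]

def control_step(world_state, actions=("leave_lever", "flip_lever")):
    # "Leaving the lever alone" is just another action; nothing
    # privileges it over "flipping it".
    return min(actions, key=lambda a: expected_harm(a, world_state))

# The trolley problem, as seen by the controller on one timestep:
state = {"harm_estimates": {"leave_lever": 5.0, "flip_lever": 1.0}}
print(control_step(state))  # -> flip_lever
```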

Comment by Gerald Monroe (gerald-monroe) on Air Quality and Cognition · 2021-04-09T06:10:44.295Z · LW · GW

Do you see a single study listed where the experiment design was to put the subject in a room full of visible pollutant particles and have them take an exam?  I don't.  

I'm kind of disappointed in the robustness of human bodies assuming the above general trends are true, but it is what it is.  

Get yourself an air purifier, then, one with measurably good performance : https://www.nytimes.com/wirecutter/reviews/best-air-purifier/

Evidence appears to be clearly in favor of doing it.  

Comment by Gerald Monroe (gerald-monroe) on Is there any plausible mechanisms for why taking an mRNA vaccine might be undesirable for a young healthy adult? · 2021-04-08T08:30:36.518Z · LW · GW

The converse of that is that 225 million doses have been given and the serious negative effect rate is extremely low.  It's improbable that merely another doubling of time and doses will reveal any new information.  

If there is some new way this method causes the human body to fail it won't be found for years.  

Conversely, there's still the risk of Covid, and isolation has holes.  The biggest one being you might get sick and have to seek medical treatment, and hospital-acquired infections are estimated to happen 1.7 million times a year.  And while being young your odds are good, there are illness 'stacks' where Covid would kill you (some respiratory or autoimmune illness at the same time as Covid, etc.).

Comment by Gerald Monroe (gerald-monroe) on Another (outer) alignment failure story · 2021-04-08T08:04:43.131Z · LW · GW

I like this story.  Here's what I think is incorrect:

      I don't think, from the perspective of humans monitoring a single ML system running a concrete, quantifiable process - industry or mining or machine design - that it will be unexplainable.  Just like today, tech stacks are already enormously complex, but at each layer someone does know how they work, and we know what they do at the layers that matter.  Ever more complex designs for, say, a mining robot might start to resemble some mix of living creatures and artwork out of a fractal, but we'll still have reports that measure how much performance the design gives per cost.

   And systems that "lie to us" are a risk but not an inevitability, in that careful engineering, auditing systems whose goal is finding True Discrepancies, etc., might become a thing.

  Here's the part that's correct:

      I was personally a little late to the smartphone party.  So it felt like overnight everyone had QR codes plastered everywhere and was playing on their phone in bed.  Most products' adoption is a lot slower, for reasons of cost (especially up-front cost) and the time it takes to manufacture whatever the new idea is.

     Self-replicating robots that, in vast swarms, can make any product whose build process is sufficiently defined would change all that.  New cities could be built in a matter of months by enormous swarms of robotics installing prefabricated components from elsewhere.  Newer designs of cars, clothes, furniture - far fewer limits.

    ML systems that can find a predicted-optimal design and send it for physical prototyping, so its design parameters can be checked, are another way to get rid of some of the bottlenecks behind a new technology.  Another is that the 'early access' version might still have problems, but the financial model will probably be rental, not purchase.

    This sounds worse, but the upside is that rental takes away the barrier to adoption.  You don't need to come up with $XXX for the latest gadget, just make the first payment and you have it.  The manufacturer doesn't need to force you into a contract either, because their cost to recycle the gadget if you don't want it is low.

Anyways the combination of all these factors would create a world of, well, future shock.  But it's not "the machines" doing this to humans, it would be a horde of separate groups of mainly humans doing this to each other.  It's also quite possible this kind of technology will for some areas negate some of the advantages of large corporations, in that many types of products will be creatable without needing the support of a large institution.  

Comment by Gerald Monroe (gerald-monroe) on Which counterfactuals should an AI follow? · 2021-04-07T18:18:37.041Z · LW · GW

Why not define a subagent and deliver to that subagent a list of "whitelisted" observations? These would be all nodes the judge allowed, plus your "life experiences and observations" set, excluding anything from during the trial or any personal experience with the case.

As an AI you can actually do this and solve the problem as instructed. Humans cannot.

As a human, well. Yes a major problem is that your very perception of other portions of the proceedings is going to be affected by this observation you have been told to ignore. You may now "perceive" many little things that convince you RM is a gang member.

The only way to solve this problem as a human is to be explicit. "Reasonable doubt" means constructing a series of nodes, each with probability above some threshold (maybe 10 percent? the law doesn't say), that result in the defendant being innocent.

There only needs to exist one causal chain that explains all the evidence ('there was noise in the sample' is fine if you can't explain a few low-magnitude observations). It doesn't need to be the most probable explanation.

So a fair jury would write these nodes down somewhere. For example, if an eyewitness says they saw the defendant do it, the node has to be (p of lying or mistaken). If the probability is so small as to be "unreasonable", you are done: no reasonable doubt exists and you can issue a verdict.

This kind of explicit reasoning isn't told to jurors, the average person will not be able to do it, "unreasonable" isn't defined, and arguably the above standard fails to actually give "justice". But as far as I can tell this is a formal way to represent what the courts expect.
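A rough sketch of that formal representation, with a made-up threshold and probabilities - acquit if any single innocent-consistent causal chain clears the bar:

```python
# A sketch of the explicit-juror procedure. Each chain is a list of node
# probabilities (e.g. p that a witness is lying or mistaken); the 10%
# threshold is a made-up placeholder, since the law doesn't define one.
import math

def chain_probability(node_probs):
    # Probability every node of this causal chain holds, treating the
    # nodes as independent for simplicity.
    return math.prod(node_probs)

def reasonable_doubt(chains, threshold=0.10):
    # One sufficiently probable innocent-consistent chain is enough;
    # it need not be the most probable explanation of the evidence.
    return any(chain_probability(c) >= threshold for c in chains)

# e.g. one chain: witness mistaken (p=0.3) and forensic noise (p=0.5)
print(reasonable_doubt([[0.3, 0.5]]))  # 0.15 >= 0.10 -> True: acquit
```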

Comment by Gerald Monroe (gerald-monroe) on What do you think would be the best investment policy for a cryonics trust? · 2021-04-07T00:36:13.797Z · LW · GW

Referring to one theory of the universe: it's nonsense to say that it is slowly degrading to nothing and that nothing is the ground state, because if this were the true law of nature, nothing at all would exist. So one theory is that all possible consistent universes exist, which means that if it is possible to survive cryo, you will experience it. Similar to how our existence as humans is seeing one of the universes where we are possible; we aren't seeing the others.

It might be a wrong theory; it's just a way to try to make sense of what doesn't really make any sense.

Comment by Gerald Monroe (gerald-monroe) on Tales from Prediction Markets · 2021-04-05T22:17:59.244Z · LW · GW

Sure. I was pointing out that providing a market to anonymously sell illegal information skirts the rules. Whether or not this is a net good is subjective. Similar to having bets on whether so-and-so VIP is still alive on a particular date: it means "notanassasin007" can buy into that bet the day before and assassinate the VIP.

Actually, isn't liquidity a problem? Let's say someone is really, really certain a specific VIP will die. (Cue a movie scene of someone assembling a sniper rifle with their laptop open to Polymarket.) But if the killer tries to bet a million dollars, their max gain is the funds invested on the other side of the betting book.

Comment by Gerald Monroe (gerald-monroe) on Mathisco's Shortform · 2021-04-05T17:39:08.253Z · LW · GW

I am saying that below a certain level of abstraction it becomes a solved problem, in that you have precisely defined what correctness is and have fully represented your system. And you can trivially check any output and validate it against a model.

The reason software fails constantly is that we don't have a good, computer-checkable definition of what correctness means. Software unit tests help but are not nearly as reliable as tests for silicon correctness. Moreover, software just ends up being absurdly more complex than hardware, and AI systems are worse.

Part of it is "unique complexity". A big hardware system is millions of copies of the same repeating element, and locality matters - an element cannot affect another one far away unless a wire connects them. A big software system is millions of copies of often duplicated, nested, and invisibly coupled code.
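A toy illustration of why the hardware side is checkable: the spec is exact, so an implementation can be validated bit-for-bit against the model over its entire input space.

```python
# Toy example: a ripple-carry 8-bit adder built from per-bit logic,
# validated exhaustively against the model (Python's own addition,
# truncated to 8 bits).

def adder_impl(a, b):
    result, carry = 0, 0
    for i in range(8):
        x, y = (a >> i) & 1, (b >> i) & 1
        s = x ^ y ^ carry                      # sum bit
        carry = (x & y) | (carry & (x ^ y))    # carry out
        result |= s << i
    return result

assert all(adder_impl(a, b) == (a + b) & 0xFF
           for a in range(256) for b in range(256))
print("adder matches the model on all 65,536 input pairs")
```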

Comment by Gerald Monroe (gerald-monroe) on Mathisco's Shortform · 2021-04-05T00:46:23.086Z · LW · GW

Note that for hardware, the problem is that you need a minimum instruction set in order to make a computer work.  So long as you at least implement the minimum instruction set, and for all supported instructions perform them (which are all in similar classes of functions) bit-for-bit correctly, you're done.  It's ever more difficult to make a faster computer at a similar manufacturing cost and power consumption, because the physics keep getting harder.

But it is in some ways a "solved problem".  Whether a given instance of a computer is 'better' is a measurable parameter: the hardware people try things, and the systems and software engineers adopt the next chip if it meets their definition of 'better'.

So yeah if we want to see new forms of applications that aren't in the same class as what we have already seen - that's a software and math problem.  

Comment by Gerald Monroe (gerald-monroe) on What do you think would be the best investment policy for a cryonics trust? · 2021-04-04T22:40:13.188Z · LW · GW

That last bit is interesting.  Because, yes, under some hypotheses an instance of you will see revival if it happens in any possible world.  Without a 'god hypothesis' - where, even if it's not an old man in a bathrobe, something set the simulation engine to this exact universe's laws - you have to realize that in order for something to exist, everything must exist, eternally.  Similarly, the 'you' reading this now is one who didn't die in the other universes before this moment.

Comment by Gerald Monroe (gerald-monroe) on "AI and Compute" trend isn't predictive of what is happening · 2021-04-04T22:11:32.180Z · LW · GW

My bigger point is that there is finite R&D money, and massive single models need to have a purpose.  Going even bigger needs to accomplish something you need for revenue, or for an experiment where larger size is meaningful.

Comment by Gerald Monroe (gerald-monroe) on What are all these children doing in my ponds? · 2021-04-04T22:02:06.624Z · LW · GW

#1, 2, 3: you are probably correct in absolute terms - the world has changed.

Still, #1: you can use the funds over $100k to shift the probability more for your kids; even if the gains are very marginal, our firmware says to do it.

#2: You're right, but our firmware is programmed to ignore this possibility.

#3: You're right, but our firmware says that our own child is worth a lot more than other people's.  As long as ours doesn't drown in pools, screw everyone else.  


One thing your thought experiment points out is the difference between what humans claim to care about ("I don't want any children to starve or drown in pools") and what they actually care about.  Because obviously everyone walking by with headphones on is doing it because they really don't give a shit; they just said they did to fit in.


Another thing I might note: you gave children drowning in pools as the analogy, but the real boogeyman, aging, is going to affect everyone walking by oblivious.  They would all personally benefit if they collectively worked together sufficiently on methods to slow and reverse it.

 

Maybe ignore the below ramble


Part of the issue here is that the current roadblocks in biomedical progress - the recent mRNA vaccines are an example of what is possible when the funds and roadblocks are temporarily removed - have made my prior for "any further progress at all" default to none.

I am still kind of imagining a world where I am 95 and there is still no treatment for Parkinson's, an ineffective treatment for dementias (so if I see the world at 95 it's due to luck), and barely any better cancer treatments.  Subtract 60 years from the present day - 1961 - and this is basically true.  It's an observation of historical fact.

This is why AI (and exponentially more intelligence) is our best personal hope.  To get through this red tape and these roadblocks you would basically need to be superhuman (because an AI can fill out forms for free, design experiments for the maximum knowledge gain within ethical constraints, perform and analyze the results of thousands of experiments in parallel, read every science journal article published to set up priors, and so on).

To control biology it's what we need.  

Comment by Gerald Monroe (gerald-monroe) on Tales from Prediction Markets · 2021-04-04T21:51:11.965Z · LW · GW

So essentially you're saying you can benefit from insider trading information indirectly without legal culpability.  Fair enough.

Comment by Gerald Monroe (gerald-monroe) on "AI and Compute" trend isn't predictive of what is happening · 2021-04-04T04:34:49.667Z · LW · GW

Assuming that prices haven't improved, what money has someone made to pay for the first $5-12 million tab?
For AI to take off, it has to pay for itself.  It very likely will, but this requires deployment of highly profitable, working applications.

Comment by Gerald Monroe (gerald-monroe) on Tales from Prediction Markets · 2021-04-04T04:32:25.966Z · LW · GW

Cool.  These are all pure zero-sum bets, right?  Where the EV for the 'average' bettor is $0?

Comment by Gerald Monroe (gerald-monroe) on What are all these children doing in my ponds? · 2021-04-04T04:28:31.569Z · LW · GW

Right.  Another aspect is you have to compete with the other people who walk right by.  Saving one child won't affect your competitiveness much.  It's why we give spare change to beggars or, even if middle class, make small donations to Africa funds.  But ultimately you are in a red queen race with all those other passersby, so even if you personally really, really, really care about children drowning, if you don't put most of your effort into keeping up in the race, you won't have children of your own.

And then, fast-forward enough time, and now there are nothing but these oblivious passersby, wearing earmuffs to block out the sound of children drowning - they are the descendants of the first set, since the ones who saved a lot of kids were less successful in reproduction.

One viable strategy that would work: get a camera crew to follow you around as you save children, and guilt lots of people into giving you money to do it.  You then use your new status as a bigshot motivational speaker, etc., to get more mating opportunities.  And you write an autobiography and 'raise awareness' about all these darn children drowning everywhere, and even after you are gone, some of your effort stays, and society is slightly more likely to prevent children drowning.

Comment by Gerald Monroe (gerald-monroe) on 2012 Robin Hanson comment on “Intelligence Explosion: Evidence and Import” · 2021-04-02T20:55:29.283Z · LW · GW

It's true we shouldn't mistreat sentient AGI systems any more than we should mistreat humans; but we're in the position of having to decide what kind of AGI systems to build, with finite resources.

That's not how R&D works.  With the early versions of something, you need the freedom to experiment, and early versions of an idea need to be both simple and well instrumented.

One reason your first AI might be a 'paperclip maximizer' is simply that it's less code to fight with.  Certainly, judging from OpenAI's papers, that's basically what all their systems are.  (They don't have the ability or capacity to allocate additional resources, which seems to be the key step that makes a paperclip maximizer dangerous.)

Comment by Gerald Monroe (gerald-monroe) on Hardware is already ready for the singularity. Algorithm knowledge is the only barrier. · 2021-04-01T01:47:01.960Z · LW · GW

He's specifically talking about building a computer that is, algorithm-wise, no more efficient than a brain, and saying we have enough compute to do this.

He is incorrect because he is not factoring in the compute architecture. The reason you should consider my judgement is that I am a working computer engineer and have personally designed systems (for smaller-scale tasks; I am not the architect for the self-driving team I work for now).

Of course more efficient algorithms exist but by definition they take time and effort to find. And we do not know how much more efficient a system we can build and still have sentience.

Comment by Gerald Monroe (gerald-monroe) on Many methods of causal inference try to identify a "safe" subset of variation · 2021-03-31T18:48:16.394Z · LW · GW

I wasn't saying a flowchart isn't helpful. I was saying that if you want to find an algorithm to solve the problem - obtaining information about causal relationships at the lowest cost - you need to do it numerically.

This problem is very solvable, as you are simply seeking the algorithm with the best score on a heuristic for accuracy and cost. Where "solvable" means "matches or exceeds the state of the art".

Comment by Gerald Monroe (gerald-monroe) on Julia Galef and Matt Yglesias on bioethics and "ethics expertise" · 2021-03-31T00:32:32.619Z · LW · GW

I am saying a person who chooses an action that kills thousands of people and claims it to be ethical is probably not ethical.

(FDA delaying vaccine approvals)

Comment by Gerald Monroe (gerald-monroe) on TAI? · 2021-03-31T00:21:34.776Z · LW · GW

Sure. And the Kiva and Roomba examples: at a low level both machines could work using pure non-deep-learning software. 2D SLAM is a 'classic' technique at this point, and nothing in the way Kiva robots move in x-y grids requires deep learning to work.

Robots that for example do soft complex object picking are using DL, and are an example of a machine that actually needs it to work. Ditto any autonomous car.

Yeah Tesla is using DL for the distance estimation. Dunno about the mall robots.

Comment by Gerald Monroe (gerald-monroe) on How many micromorts do you get per UV-index-hour? · 2021-03-30T23:43:55.303Z · LW · GW

You're talking about exposure to ionizing radiation.  This means there is a chance each UV photon that hits exactly the right spot will cause permanent DNA changes that eventually lead to cancer.  So the right answer, when asking how many invisible bullets you want to be shot with until one is fatal, is "as few as practical".

You can get vitamin D from a tablet.

Now, yes, a lack of sun may cause depression and you die from that, or other malfunctions, and most humans don't die from skin cancer.  

So I don't see it in terms of 'micromorts'; I see it in terms of 'the outdoor activity had better be really fun' and "I'm going to protect myself as much as practical".

Hanging out on a beach with potential mates?  Worth the risk.  Mowing or weeding your lawn?  I'm gonna wait til dusk or use a lot of protective gear.

Comment by Gerald Monroe (gerald-monroe) on Hardware is already ready for the singularity. Algorithm knowledge is the only barrier. · 2021-03-30T23:10:06.067Z · LW · GW

Here's why you are wrong:

           a.  Just recognizing objects with something like human levels of accuracy takes hundreds of tera-operations per second!  We call it "TOPS", and just keeping up with a few full-resolution cameras, with networks that are not as robust as humans, costs about 300-400 TOPS - or 'all she's got' from a current-gen accelerator board.  This is like an inferior version of a human visual cortex, with our main issues being lack of generality and all sorts of terrible edge cases [that can misidentify objects, leading to a crash].

          Hundreds of TOPS in a single chip wasn't available until a few years ago, when several companies [Tesla, Nvidia, Waymo] developed NN accelerators.

          b.  You don't understand the computer-architecture consequences when we say a brain has 2.5 petabytes.  This is not the same as having 2.5 petabytes of data where only a tiny fraction is being accessed at any time.  At any given moment, any of the neurons in the brain might get hit with a spike train and have to give outputs, taking into account the connection weights and connectivity graph.  The graph (what connects to what) and the strength and type of each connection are all information, and this is where the 2.5-petabyte estimate comes from: the number of connections (86 billion times 1000) and how many bits of information you estimate each connection holds.

(2.5 petabytes) / (1000 * 86 billion) = 29 bytes - apparently that is all Scientific American thinks a synapse holds.  Say 8 bits of resolution for the weight, some "in progress" state variables (there are variables, changed over time, that are used to update the weights for learning), and enough bytes to uniquely specify that synapse's relative position in a graph with 86 trillion entries.

Anyways, your computer must be able to compute on all 2.5 petabytes at all times.  Every timestep, even neurons that are not going to fire are checking whether they should, and they must access the synaptic weights to do this and update in-progress state variables.  The brain isn't synchronous, but 1000 timesteps per synapse per second of realtime is a rough estimate of what you need.

This is architecturally sort of like having all 2.5 petabytes in RAM, at minimum, with a very, very fast bus connecting you to the chip - but if you really get serious, you need massive caches, or you need to just build the circuitry that evaluates the neural network directly on top of the storage medium that holds the weights.  (Several startups are doing this.)

Let's make a more concrete estimate.   We have 2.5 petabytes of values we need to access 1000 times a second.

Therefore, our bandwidth requirement is 2.5 petabytes * 1000 = 2.5 exabytes/second.  Each Nvidia A100 has 2 terabytes/second of memory bandwidth.  Therefore, for 1 brain-equivalent we need 1,250,000 A100 AI accelerators, all released May 14, 2020.  The world's largest supercomputer uses 158,976 nodes, so this would be 10x larger.

Also the amount of traffic between A100 nodes probably exceeds available interconnects but I'm not sure about that assertion.  

Each A100 is $199,000 MSRP.  But probably there is a chip shortage so you would need to wait a few years for your order to fill.

So you need $248.75 billion in A100 accelerators to get to "1 brain worth" of capacity.  And current AI algorithms are much less efficient than humans, so...
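The arithmetic above in one place - the inputs (2.5 PB, 1000 Hz, 2 TB/s per A100, the $199,000 MSRP) are this comment's estimates, not authoritative figures:

```python
# Reproducing the back-of-envelope numbers above, using this comment's
# own input estimates.
synapses = 86e9 * 1000                 # ~86 trillion connections
bytes_per_synapse = 2.5e15 / synapses
print(f"{bytes_per_synapse:.0f} bytes per synapse")   # ~29

bandwidth_needed = 2.5e15 * 1000       # all weights touched at 1 kHz, B/s
a100s = bandwidth_needed / 2e12        # 2 TB/s memory bandwidth per A100
print(f"{a100s:,.0f} A100s needed")                   # 1,250,000

cost = a100s * 199_000                 # MSRP per A100, per this comment
print(f"${cost / 1e9:.2f} billion")                   # $248.75 billion
```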

Please note, I fully believe TAI is possible, but with numbers like these it starts to become apparent why we don't have it yet.  This also neatly explains the "AI winter": it happened because, with the computers of that era, meaningful progress wasn't possible.

Also note the ~$200k price tag is Nvidia's MSRP, which factors in their need to pay back all the money they spent on R&D.  They 'only' spent a few billion, and maybe you could cut a deal.  Each A100 likely only costs $1000 or less in real component costs.

Comment by Gerald Monroe (gerald-monroe) on Many methods of causal inference try to identify a "safe" subset of variation · 2021-03-30T21:05:04.277Z · LW · GW

You're seeking an algorithm F that, given observations O, gives you a correct causal model M.  Each observation has a cost, and each manipulation of a variable (experimental case) has a large cost.  You are seeking the algorithm with a good ratio of effectiveness to cost.

So I have an idea.  You have a large number of cases : places where there is correlation, where there is causation, where confounding factors are large, where they are small.

To me this sounds like you can find a better model by generating a 'benchmark' of a large number of randomized situations, generating the observations (with error) that you would see, and finding out which algorithms discover the true relationship best.  Might be easier than theorizing with flowcharts.
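A minimal sketch of the benchmark idea, with a deliberately naive candidate algorithm standing in for F - everything here is a toy, not a real causal-discovery benchmark:

```python
# Toy benchmark: generate random linear causal models, sample noisy
# observations, then score a candidate algorithm against the known
# ground truth.
import numpy as np

rng = np.random.default_rng(0)

def random_ground_truth(n_vars=5, p_edge=0.3):
    # Upper-triangular adjacency -> guaranteed acyclic.
    adj = np.triu(rng.random((n_vars, n_vars)) < p_edge, k=1)
    weights = adj * rng.normal(0, 1, (n_vars, n_vars))
    return adj, weights

def sample_observations(weights, n=1000, noise=0.5):
    n_vars = weights.shape[0]
    data = np.zeros((n, n_vars))
    for j in range(n_vars):  # column order is a topological order
        data[:, j] = data @ weights[:, j] + rng.normal(0, noise, n)
    return data

def naive_algorithm(data, threshold=0.3):
    # Candidate F: claim an edge wherever correlation is strong.
    corr = np.corrcoef(data.T)
    return np.triu(np.abs(corr) > threshold, k=1)

adj, weights = random_ground_truth()
guess = naive_algorithm(sample_observations(weights))
print(f"edge accuracy vs ground truth: {(guess == adj).mean():.2f}")
```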

Comment by Gerald Monroe (gerald-monroe) on TAI? · 2021-03-30T20:57:49.462Z · LW · GW

Most labor (including almost all physical labor) has been replaced by robots.  The jobs that remain consist of research and application of AI and robotics.

This conclusion is still 'doubted'.  I generally agree with you that this is possible, but there is a huge gap between where we are now and actually reliable, real-time, economical-to-deploy robotics.  As far as I know, actual robotics using deep learning for commercial tasks is extremely rare.  I have not heard of any; I've just seen OpenAI's and Google's demos.

It's sort of the progression from "have demoed a train that could run in a tunnel" to "have dug a tunnel" to "have a working subway line" to "the whole city is interconnected".

In real-life examples, the gaps were many decades:

https://en.wikipedia.org/wiki/Beach_Pneumatic_Transit [1869]

https://en.wikipedia.org/wiki/Tremont_Street_subway [1903]

https://en.wikipedia.org/wiki/IND_Sixth_Avenue_Line [1940] : approximately the completion date of the NYC system

Comment by Gerald Monroe (gerald-monroe) on Julia Galef and Matt Yglesias on bioethics and "ethics expertise" · 2021-03-30T20:35:18.393Z · LW · GW

"what do they claim to know and how do they know it"

No amount of credentials or formal experience makes an expert not-wrong if they do not have high-quality evidence - shown to you - from which they derive their conclusions, and an algorithm, formally shown to be correct, that they demonstrably use.

Or, in the challenge trials: the ethicist claims to value human life. A challenge trial only risks the lives of a few people, when even if they die it would have saved hundreds of thousands.

In this case the "basic math" is multiplication and quantities, showing the "experts" don't know anything. As you might notice, ethicists do not have high-quality information as input from which to generate their conclusions. Without that information you cannot expect more than expensive bullshitting.

"Ethics" today is practiced by reading ancient texts and more modern arguments, many of which have cousins with religion. But ethics is not philosophy. It is actually a math problem. Ultimately, there are things you claim to value ("terminal values"). There are actions you can consider doing. Some actions have an expected value that with a greater score on the things you care about, and some actions have a lesser expected value.

Any action other than the one with the highest expected value (factoring in variance) is UNETHICAL.
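In literal form, with made-up numbers (using the challenge-trial example from above):

```python
# The "ethics as a math problem" claim, literally: given terminal values
# and actions with uncertain outcomes, pick the action with the highest
# expected value. (Variance adjustment omitted for brevity; all numbers
# are made up.)

def expected_value(outcomes):
    # outcomes: list of (probability, value scored on terminal values)
    return sum(p * v for p, v in outcomes)

actions = {
    # e.g. a challenge trial: tiny chance of harming volunteers,
    # large chance of saving many lives sooner.
    "run_challenge_trial": [(0.99, 200_000), (0.01, -100)],
    "wait_for_standard_trial": [(1.0, 50_000)],
}

best = max(actions, key=lambda a: expected_value(actions[a]))
print(best)  # -> run_challenge_trial
```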

Yes, professional ethicists today are probably mostly all liars and charlatans, no more qualified than a water dowser. I think EY worked his way down to this conclusion in a sequence, but this is the simple answer.

One general rule of thumb, if you didn't read the above: if an expert claims to know what they are doing, look at the evidence they are using. I don't know the anatomy of the human body well enough to gainsay an orthopedic surgeon, but I'm going to trust the one who actually looks at a CT scan over one who palpates my broken limb and reads from some 50-year-old book. Doesn't matter if the second one went to the most credible medical school and has 50 years' experience.

Comment by Gerald Monroe (gerald-monroe) on Eli's shortform feed · 2021-03-30T20:32:16.139Z · LW · GW

OK, but at the price point you are talking about, you are not going to have a good time.

Analogy: would you "experiment with having a computer" by grabbing a Packard Bell from the 1990s and putting an Ethernet card in it so it can connect to the internet from Windows 95?

Do you need the minivan form factor? A vehicle in decent condition (6-10 years old, under 100k miles, from a reputable brand) is cheapest in the small-car form factor.

Comment by Gerald Monroe (gerald-monroe) on Julia Galef and Matt Yglesias on bioethics and "ethics expertise" · 2021-03-30T05:53:12.431Z · LW · GW

Note there is a whole class of basic problems where it is possible to verify or falsify a solution quickly, but not come up with a solution yourself.

Part of this 'trust experts' meme is that most human beings on earth are not sufficiently trained to venture an opinion that should be taken seriously on complex subjects like ethics.

But it is trivially easy for anyone with even basic math to check an ethicist's decision and see when it is lethally bad [for example, declaring challenge trials unethical, or the mask debacle].

The issue with this is that when regular citizens check the official advice and find it is incoherent and wrong, then because they are not credentialed experts they are disallowed from having their criticism even looked at for correctness.  Note that doctors frequently make medical errors, and when their patients check the decision and find it to be obviously wrong, they have difficulty getting their criticism acknowledged as valid.

Comment by Gerald Monroe (gerald-monroe) on Raj Thimmiah's Shortform · 2021-03-30T05:47:16.252Z · LW · GW

Well, in real terms: say the nation has N people, working M hours on average each.

Either you employ more people (economic crises cause employment to shrink, and the inverse happens in booms), making M bigger and economic output larger, or you get more done per hour, making productivity higher.

Any sort of machine you invent that helps people get more done per hour increases real GDP, if the machine is adopted and used in many places.

For example, if you had invented the tractor, the consequences would temporarily have been mass unemployment, but later the people that would have been working in the fields can be doing other things.  

You might not agree that the things they are doing add value - for example, they might be wearing cosplay costumes in Times Square and charging tourists to take pictures with them.  However, any good or service that other humans are willing to voluntarily pay for adds to GDP; if it were not providing value in excess of the cost, the buyer wouldn't pay.
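The accounting identity behind this, with made-up numbers:

```python
# Real output = N people x M average hours x output per hour.
# All numbers are illustrative, not real statistics.
N = 100e6               # people in the nation
M = 1_500               # average hours worked per person per year
output_per_hour = 60.0  # productivity, dollars of output per hour

gdp = N * M * output_per_hour
print(f"real GDP: ${gdp / 1e12:.1f} trillion")
# A tractor-like invention raises output_per_hour; a boom raises M.
```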

Comment by Gerald Monroe (gerald-monroe) on What are the biggest current impacts of AI? · 2021-03-30T05:39:12.064Z · LW · GW

Ok, so please note I do work in the field.  This doesn't mean I know everything, and I could be wrong, but I have some knowledge, much of which is under NDA. 

There are many levels of similarity.

From the platform level - the platform is the NN accelerator chips, all the support electronics, the RTOS, the drivers, the interfaces, and a host of other software tools - there is zero difference between AI systems at all.  The platform's role is to take an NN graph, usually defined as a *.onnx file, and run that graph with deterministic timing, using inputs from the many sensors for which there have to be device drivers.

So that's one part of the platforming - everyone deploying any kind of autonomy system will need to purchase platforms to run it on (and there will be only a few good enough for real-time tasks where safety is a factor).
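As a minimal sketch of that platform contract, using the generic onnxruntime package (a real automotive platform would swap in its vendor runtime, drivers, and RTOS scheduling; "model.onnx" and the camera-frame shape here are hypothetical):

```python
# The platform contract in miniature: load an *.onnx graph and run it
# against sensor inputs, timestep after timestep.
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("model.onnx")   # the deployed NN graph
input_name = sess.get_inputs()[0].name

def process_frame(frame: np.ndarray):
    # In a real-time system this call must complete within the timestep
    # budget, every timestep, with deterministic timing.
    return sess.run(None, {input_name: frame})

frame = np.zeros((1, 3, 224, 224), dtype=np.float32)  # fake camera frame
detections = process_frame(frame)
```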

From the network architecture level, again, there are many similarities.  In addition, networks that solve problems in the same class can often share the same architecture.  For example, 2 networks that just identify images from a dataset can be very similar in architecture even if the datasets have totally different members.  

There are technical reasons why you want to use an existing, 'known to work' architecture, a main one being that novel architectures will take a lot more work to run in real time on your target accelerator platform.  

For different tasks that involve physical manipulations of objects in the real world , I expect there will be many similarities even if robots are doing different tasks.

Just a few: perception networks need to be similar, segmentation networks need to be similar, and likewise the networks that predict how real-world objects will move, that predict damage, that predict what humans may do, that predict where an optimal path might be found, and so on and so forth.

I expect there will be far more similarities than differences.  

In addition, even when the network weights are totally different, using the same software, network, and platform architecture means that you can share code and merely have to repeat training on a different dataset.  Example: GPT-3 trained on a different language.

 


Hmm, just because the abstract form of your algorithm is the same as everyone else's, this doesn't mean you can reuse the same algorithm... In some sense, it's trivial that abstract form of all algorithms is the same: [inputs] -> [outputs]. But this doesn't mean the same algorithm can be reused to solve all the problems.

 

This is incorrect.  You're also not thinking abstractly enough - you're thinking of what we see today, where AI systems are not platformed and are just a mess of Python code defining some experimental algorithm (e.g. OpenAI's examples).  This isn't production-grade or reusable, and it has to be, or it will not be economical to use.