Posts

Robert Cousineau's Shortform 2025-01-21T03:26:25.962Z
Wagering on Will And Worth (Pascals Wager for Free Will and Value) 2024-11-27T00:43:08.522Z
Predict the future! Earn fake internet points! Get a (free) gambling addiction! 2023-12-12T04:39:50.162Z

Comments

Comment by Robert Cousineau (robert-cousineau) on My model of what is going on with LLMs · 2025-02-13T06:11:04.789Z · LW · GW

I found this to be a valuable post!

I disagree with your conclusion, though. The thoughts that come to mind as to why are:

  1. You seem overly anchored on chain-of-thought (CoT) as the only scaffolding system in the near-to-mid future (2-5 years). While I'm uncertain what specific architectures will emerge, the space of possible augmentations (memory systems, tool use, multi-agent interactions, etc.) seems vastly larger than current CoT implementations.
     
  2. Your crux that "LLMs have never done anything important" feels only mildly compelling.  Anecdotally, many people do feel LLMs significantly improve their ability to do important and productive work, both work that requires creativity/cross-field information integration and work that does not.
    Further, I am not aware of any large-scale ($10 million+) instances of people trying something like a better version of "Ask an LLM to list out, in context, fields it feels would be ripe for information integration leading to a breakthrough, and then do further reasoning on what those breakthroughs are/actually perform them."
    Something like that seems like it would be an MVP of "actually try to get an LLM to come up with something significantly economically valuable."  I expect this type of experiment doesn't exist because major AI labs feel it would be choosing to exploit while there are still many gains to be had from exploring further architectural and scaffolding-esque improvements.
     
  3. Where you say "Certainly LLMs should be useful tools for coding, but perhaps not in a qualitatively different way than the internet is a useful tool for coding, and the internet didn't rapidly set off a singularity in coding speed.", I find this untrue on both counts. On the impact of the internet: while it did not cause a short takeoff, it dramatically increased the number of new programmers and the effective transfer of information between them, and I expect that without it computers would have <20% of their current economic impact. On the current and expected future impact of LLMs: LLMs simply are widely used by smart, capable programmers, and I trust them to evaluate whether they are noticeably better than StackOverflow/the rest of the internet.

Comment by Robert Cousineau (robert-cousineau) on When you downvote, explain why · 2025-02-07T07:05:57.043Z · LW · GW

I am strong-downvoting in this case because, when I put a noticeable amount of effort into responding to your prior post "are there 2 types of alignment?", you gave an unsubstantive followup to my answer to your question, and no followup at all to the 5 other people who commented in response to your post.

When I attempted to communicate with you clearly and helpfully in response to one of your low-effort questions, I saw little value come of it. Why should others listen when you tell them to do what I did?

Comment by Robert Cousineau (robert-cousineau) on Are we trying to figure out if AI is conscious? · 2025-01-27T06:04:12.266Z · LW · GW

I quite enjoyed reading this - I'm surprised I hadn't read something like it before, and I'm quite happy you did the work and posted it here.


Do you have plans of using the dataset you built here to work on “figuring out if AI is conscious”?

Comment by Robert Cousineau (robert-cousineau) on Quotes from the Stargate press conference · 2025-01-27T04:13:56.918Z · LW · GW

Agreed - that's what I was trying to say with the link under "80b number is the same number Microsoft has been saying repeatedly."

Comment by Robert Cousineau (robert-cousineau) on are there 2 types of alignment? · 2025-01-23T02:08:07.838Z · LW · GW

That would be described well by the CEV link above.  

Comment by Robert Cousineau (robert-cousineau) on are there 2 types of alignment? · 2025-01-23T00:31:23.894Z · LW · GW

I think having a single word like "Alignment" mean multiple things is quite useful, similar to how I think having a single word like "Dog" mean many things is also useful. 

Comment by Robert Cousineau (robert-cousineau) on are there 2 types of alignment? · 2025-01-23T00:24:15.852Z · LW · GW

I'm having trouble remembering many times when people here have used "AI Alignment" in a way that would be best described as "making an AI that builds utopia and stuff".  Maybe Coherent Extrapolated Volition would be close.

My general understanding is that when people here talk about AI Alignment, they are talking about something closer to what you call "making an AI that does what we mean when we say 'minimize rate of cancer' (that is, actually curing cancer in a reasonable and non-solar-system-disassembling way)".  

On a somewhat related point, I'd say that "making an AI that does what we mean when we say 'minimize rate of cancer' (that is, actually curing cancer in a reasonable and non-solar-system-disassembling way)" is entirely encapsulated under "making an AI that builds utopia and stuff", as it is very, very unlikely that an AI builds a utopia while misunderstanding our intended goal that badly.

You would likely enjoy reading through this (short) post: Clarifying inner alignment terminology, and I expect it would help you get a better understanding of what people mean when they are discussing AI Alignment.  

Another resource you might enjoy would be reading through the Tag and Subtags around AI: https://www.lesswrong.com/tag/ai 

PS: In the future, I'd probably make posts like this in the Open Thread.  

 

Comment by Robert Cousineau (robert-cousineau) on Thane Ruthenis's Shortform · 2025-01-22T18:45:30.646Z · LW · GW

Here is what I posted on "Quotes from the Stargate Press Conference":

On Stargate as a whole:

This is a restatement, with a somewhat different org structure, of the prior OpenAI/Microsoft data center investment/partnership announced early last year (admittedly for $100b).

Elon Musk states they do not have anywhere near the $500 billion pledged actually secured:

I take this as somewhat credible, given that the partners involved have just barely $125 billion available to invest like this on a short timeline.

  • Microsoft has around $78 billion cash on hand, at a market cap of around $3.2 trillion.
  • Softbank has $32 billion cash on hand, with a total market cap of $87 billion.
  • Oracle has around $12 billion cash on hand, with a market cap of around $500 billion.
  • OpenAI has raised a total of $18 billion, at a valuation of $160 billion.

Further, OpenAI and Microsoft seem to be distancing themselves somewhat - initially this was just an OpenAI/Microsoft project, and now it involves two other partners, and Microsoft just put out a release saying "This new agreement also includes changes to the exclusivity on new capacity, moving to a model where Microsoft has a right of first refusal (ROFR)."

Overall, I think the new Stargate numbers published may (call it 40%) be true, but I also think there is a decent chance this is new-administration, Trump-esque propaganda/bluster (call it 45%) and little change from the prior expected path of datacenter investment (which I do believe amounts to unintentional AINotKillEveryoneism in the near future).

Edit: Satya Nadella was just asked how funding looks for Stargate, and said "Microsoft is good for investing 80b".  This 80b number is the same number Microsoft has been saying repeatedly.

Comment by Robert Cousineau (robert-cousineau) on Quotes from the Stargate press conference · 2025-01-22T16:14:55.949Z · LW · GW

On Stargate as a whole:

This is a restatement, with a somewhat different org structure, of the prior OpenAI/Microsoft data center investment/partnership announced early last year (admittedly for $100b).

Elon Musk states they do not have anywhere near the $500 billion pledged actually secured:

I take this as somewhat credible, given that the partners involved have just barely $125 billion available to invest like this on a short timeline.

  • Microsoft has around $78 billion cash on hand, at a market cap of around $3.2 trillion.
  • Softbank has $32 billion cash on hand, with a total market cap of $87 billion.
  • Oracle has around $12 billion cash on hand, with a market cap of around $500 billion.
  • OpenAI has raised a total of $18 billion, at a valuation of $160 billion.

Further, OpenAI and Microsoft seem to be distancing themselves somewhat - initially this was just an OpenAI/Microsoft project, and now it involves two other partners, and Microsoft just put out a release saying "This new agreement also includes changes to the exclusivity on new capacity, moving to a model where Microsoft has a right of first refusal (ROFR)."

Overall, I think the new Stargate numbers published may (call it 40%) be true, but I also think there is a decent chance this is new-administration, Trump-esque propaganda/bluster (call it 45%) and little change from the prior expected path of datacenter investment (which I do believe amounts to unintentional AINotKillEveryoneism in the near future).

Edit: Satya Nadella was just asked how funding looks for Stargate, and said "Microsoft is good for investing 80b".  This 80b number is the same number Microsoft has been saying repeatedly.

Comment by Robert Cousineau (robert-cousineau) on Robert Cousineau's Shortform · 2025-01-21T03:26:26.092Z · LW · GW

As best I can tell, the US AI Safety Institute is likely to be shuttered in the near future.  I bet accordingly on this market.

Trump rescinded Executive Order 14110, which established the U.S. AI Safety Institute (AISI).

There was some congressional work going on (H.R. 9497 and S. 4769) that would have formalized the AISI outside of the executive order, but per my best understanding of the machinations of our government, it has not been enacted.

Here's to hoping I'm wrong!  Also, maybe next time I'll place a smaller bet...

Edit: I sold my shares at a modest profit.  It seems the AISI is less directly linked to 14110 than I expected.  Further, if it were actually ending, having no news on it yet seems unlikely.

Comment by Robert Cousineau (robert-cousineau) on Thane Ruthenis's Shortform · 2025-01-20T18:25:33.731Z · LW · GW

I personally put a relatively high probability on this being a galaxy-brained media psyop by OpenAI/Sam Altman.

Eliezer makes a very good point that confusion around people claiming AI advances/whistleblowing significantly benefits OpenAI, and Sam Altman has a history of making galaxy-brained political plays (attempting to get Helen fired, and then winning; testifying to Congress that it is good he has oversight via the board and should not be in full control of OpenAI, and then replacing the board with underlings; etc.).

Sam is very smart and politically capable.  This feels in character. 

Comment by Robert Cousineau (robert-cousineau) on Monthly Roundup #26: January 2025 · 2025-01-20T16:43:46.528Z · LW · GW

I often find the insinuations people make with this graph to be misleading.  The increase in time spent at home is in very large part due to the rise in remote work, which I would say is a public good (and at least for me, leads to much easier high-quality socialization, as I can make my schedule work for my friends).  Additionally, time spent not at home includes people commuting, with all of the negative internalities (risk of crash, wasted time, etc.) and negative externalities (emissions, greater traffic load, etc.) that driving includes.

That it is trending down post-COVID seems like a negative, not a positive.

Comment by robert-cousineau on [deleted post] 2025-01-20T07:07:35.674Z

I mostly agree with the body of this post, and think your calls to action make sense. 

On your title and final note: Butlerian Jihad feels out of place.  It's catchy, but it seems like you are recommending AI-concerned people more or less do what AI-concerned people already do.  I feel like we should save our ability to use words that are a call to arms for a time when that is what we are doing.

Comment by Robert Cousineau (robert-cousineau) on (The) Lightcone is nothing without its people: LW + Lighthaven's big fundraiser · 2025-01-09T23:40:02.708Z · LW · GW

I've donated 5k.  LessWrong (and the people it brings together) deserve credit for the majority of my intellectual growth over the last 6 years.  I cannot think of a higher signal-to-noise place to learn, nor can I think of a more enjoyable and growth-inducing community than the community which has grown around it.

Thank you to both those who directly work on it and those who contribute to it!

 

Lighthaven's wonder is self-evident.

Comment by Robert Cousineau (robert-cousineau) on Decorated pedestrian tunnels · 2024-11-25T02:20:44.428Z · LW · GW

I'm honestly really skeptical of the cost-effectiveness of pedestrian tunnels as a form of transportation.  Asking Claude for estimates on tunnel construction costs gets me the following:

A 1-mile pedestrian tunnel would likely cost $15M-$30M for basic construction ($3,000-$6,000 per foot based on utility tunnel costs), plus 30% for ventilation, lighting, and safety systems ($4.5M-$9M), and ongoing maintenance of ~$500K/year.

To put this in perspective: Converting Portland's 400 miles of bike lanes to tunnels would cost $7.8B-$15.6B upfront (1.1-2.3× Portland's entire annual budget) plus $200M/year in maintenance. For that same $15.6B, you could:

  • Build ~780 miles of protected surface bike lanes ($2M/mile)
  • Fund Portland's bike infrastructure maintenance for 31 years
  • Give every Portland resident an e-bike and still have $14B left over

Even for a modest 5-mile grid serving 10,000 daily users (optimistic for suburbs), that's $10K-$20K per user in construction costs alone.

Alternative: A comprehensive street-level mural program might cost $100K-$200K per mile, achieving similar visual variety at ~1% of the tunnel cost.
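For anyone who wants to check or adjust these numbers, here is a minimal sketch of the arithmetic; every input is one of Claude's assumptions from the estimate above (per-mile construction cost, the +30% systems overhead, 400 miles of bike lanes, the 5-mile/10,000-user grid), not measured data:

```python
# Minimal sanity check of the tunnel-cost arithmetic quoted above.
# All inputs are assumptions from Claude's estimate, not measured data.

per_mile_base_low, per_mile_base_high = 15e6, 30e6  # $3,000-$6,000/ft, rounded to per-mile
systems_multiplier = 1.30                           # +30% for ventilation, lighting, safety

per_mile_low = per_mile_base_low * systems_multiplier    # ~$19.5M per mile
per_mile_high = per_mile_base_high * systems_multiplier  # ~$39M per mile

network_miles = 400                                 # Portland's bike-lane mileage (assumption)
upfront_low = per_mile_low * network_miles          # ~$7.8B
upfront_high = per_mile_high * network_miles        # ~$15.6B
maintenance_per_year = 500_000 * network_miles      # ~$200M/year

grid_miles, daily_users = 5, 10_000                 # "modest 5-mile grid" scenario
per_user_low = per_mile_low * grid_miles / daily_users   # ~$10K per user
per_user_high = per_mile_high * grid_miles / daily_users # ~$20K per user

print(f"Per mile: ${per_mile_low / 1e6:.1f}M-${per_mile_high / 1e6:.1f}M")
print(f"{network_miles}-mile network: ${upfront_low / 1e9:.1f}B-${upfront_high / 1e9:.1f}B upfront, "
      f"${maintenance_per_year / 1e6:.0f}M/year maintenance")
print(f"{grid_miles}-mile grid: ${per_user_low / 1e3:.0f}K-${per_user_high / 1e3:.0f}K per user")
```

The exact per-foot cost can move the totals around, but even the low end leaves tunnels roughly an order of magnitude more expensive per mile than the $2M/mile figure for protected surface bike lanes.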

Comment by Robert Cousineau (robert-cousineau) on Are You More Real If You're Really Forgetful? · 2024-11-25T02:00:31.467Z · LW · GW

I'll preface this with: what I'm saying is low confidence - I'm not very educated on the topics in question (reality fluid, consciousness, quantum mechanics, etc).  

Nevertheless, I don't see how the prison example is applicable.  In the prison scenario there's an external truth (which prisoner was picked) that exists independent of memory/consciousness. The memory wipe just makes the prisoner uncertain about this external truth.

But this post is talking about a scenario where your memories/consciousness are the only thing that determines which universes count as 'you'. 

There is no external truth about which universe you're really in - your consciousness itself defines (encompasses?) which universes contain you. So, when your memories become more coarse, you're not just becoming uncertain about which universe you're in - you're changing which universes count as containing you, since your consciousness is the only arbiter of this.

Comment by Robert Cousineau (robert-cousineau) on Monthly Roundup #24: November 2024 · 2024-11-18T19:09:44.254Z · LW · GW

A cool way to measure dishonesty: How many people claim to have completed an impossible five minute task.

This has since been community noted, fairly so from my understanding.

This graph is not about how many people reported completing a task in 5 minutes when that was not true; it shows how many people completed the whole task even though it took them more than 5 minutes (which was all the time they were getting paid for).
 

Comment by Robert Cousineau (robert-cousineau) on Gwern: Why So Few Matt Levines? · 2024-11-04T15:20:42.812Z · LW · GW

Derek Lowe, I believe, comes the closest to a Matt Levine for pharma (and chem): https://www.science.org/blogs/pipeline

 

He has a really fun-to-read series titled "Things I Won't Work With" where he talks a bunch about dangerous chemicals: https://www.science.org/topic/blog-category/things-i-wont-work-with

Comment by Robert Cousineau (robert-cousineau) on Lonely Dissent · 2024-10-22T21:00:41.219Z · LW · GW

http://www.overcoming-bias.com/2007/06/against_free_th.html.

This link should be: https://www.overcomingbias.com/p/against_free_thhtml (removing the hyphen will allow a successful redirect).

Comment by Robert Cousineau (robert-cousineau) on The case for a negative alignment tax · 2024-09-19T00:41:07.503Z · LW · GW

In the limit (what might be considered the ‘best imaginable case’), we might imagine researchers discovering an alignment technique that (A) was guaranteed to eliminate x-risk and (B) improve capabilities so clearly that they become competitively necessary for anyone attempting to build AGI. 

I feel like throughout this post, you are ignoring that agents, "in the limit", are (likely) provably taxed by having to be aligned to goals other than their own.  An agent with utility function "A" is definitely going to be less capable at achieving "A" if it is also aligned to utility function "B".  I respect that current LLMs are not best described as having a singular consistent goal function; however, "in the limit" that is what they will be best described as.
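To make that concrete, here is a minimal sketch of the argument (my framing, not anything from the post): for any positive weight λ placed on a second objective, the optimizer of the combined objective can score no higher on A than an unconstrained A-maximizer does.

$$\max_x U_A(x) \;\ge\; U_A\!\left(\arg\max_x \big[\, U_A(x) + \lambda\, U_B(x) \,\big]\right), \qquad \lambda > 0$$

Equality holds only when the two objectives happen to share a maximizer; the gap between the two sides is exactly the alignment tax.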

Comment by Robert Cousineau (robert-cousineau) on What would stop you from paying for an LLM? · 2024-05-21T23:45:32.339Z · LW · GW

I stopped paying for chatGPT earlier this week, while thinking about the departure of Jan and Daniel.

Whereas before they left I was able to say to myself "well, there are smarter people than me, with worldviews similar to mine, who have far more information about OpenAI than me, and they think it is not a horrible place, so 20 bucks a month is probably fine", I am no longer able to do that.

They have explicitly sounded the best alarm they reasonably know how to currently. I should listen!

Comment by Robert Cousineau (robert-cousineau) on Will 2024 be very hot? Should we be worried? · 2023-12-29T20:38:28.971Z · LW · GW

Market odds are currently at 54% that 2024 is hotter than 2023: https://manifold.markets/SteveRabin/will-the-average-global-temperature?r=Um9iZXJ0Q291c2luZWF1

I have some substantial limit orders at ±8% if anyone strongly disagrees.

Comment by Robert Cousineau (robert-cousineau) on In defence of Helen Toner, Adam D'Angelo, and Tasha McCauley (OpenAI post) · 2023-12-05T21:57:52.017Z · LW · GW

I like the writeup, but recommend actually posting it directly to LessWrong. The writeup is of a much higher quality than your summary, and would be well suited to inline comments/the other features of the site.

Comment by Robert Cousineau (robert-cousineau) on We Should Talk About This More. Epistemic World Collapse as Imminent Safety Risk of Generative AI. · 2023-11-16T20:01:05.833Z · LW · GW

I have a lot of trouble justifying to myself reading through more than the first five paragraphs.  Below is my commentary on what I've read.

I doubt that the short-term impacts on our epistemology of sub-human-level AI, be it generating prose, photographs, or films, are negative enough to justify weighting them as highly as the x-risk that is likely to emerge upon creating human-level AI.

We have been living in an adversarial information space for as long as we have had human civilization.  Some of the most impressive changes to our epistemology were made prior to photographs (empiricism, rationalism, etc.), some of them were made after (they are cool!).  It will require modifying how we judge the accuracy of a given claim when we can no longer trust photos/videos in low-stakes situations (we've not been able to trust them in high-stakes situations for as long as they have existed; see special effects, filmmaking, conspiracies about any number of recorded claims, etc.), but that is just a normal fact of human societal evolution.

If you want to convince (me, at least) that this is a threat that is potentially society ending, I would love an argument/hook that addresses my above claims.

Comment by Robert Cousineau (robert-cousineau) on Bostrom Goes Unheard · 2023-11-14T00:40:41.158Z · LW · GW

This seems like highly relevant (even if odd/disconcerting) information.  I'm not sure if it should necessarily get its own post (is this as important as the UK AI Summit or the Executive Order?), but it should certainly get a top-level item in your next roundup at least.

Comment by Robert Cousineau (robert-cousineau) on It's OK to eat shrimp: EAs Make Invalid Inferences About Fish Qualia and Moral Patienthood · 2023-11-14T00:36:07.500Z · LW · GW

I think unless you take a very linguistics-heavy understanding of the emergence of qualia, you are over-weighting the argument that being able to communicate with an agent is highly related to how likely it is to have consciousness.

___________________________________________________________________________________________

You say:

In short, there are some neural circuits in our brains that run qualia. These circuits have inputs and outputs: signals get into our brains, get processed, and then, in some form, get inputted into these circuits. These circuits also have outputs: we can talk about our experience, and the way we talk about it corresponds to how we actually feel.

And: 

It is valid to infer that, likely, qualia has been beneficial in human evolution, or it is a side effect of something that has been beneficial in human evolution.

I think both of the above statements are very likely true.  From that, it is hard to say that a chimpanzee is likely to lack those same circuits.  Neither our mental circuits nor our ancestral environments are that different.  Similarly, it is hard to say "OK, this is what a lemur is missing, as compared to a chimpanzee".

I agree that as you go down the list of potentially conscious entities (e.g. Humans -> Chimpanzees -> Lemurs -> Rats -> Bees -> Worms -> Bacteria -> Viruses -> Balloons) it gets less likely that each has qualia, but I am very hesitant to put anything like an order-of-magnitude jump at each level.

Comment by Robert Cousineau (robert-cousineau) on How to (hopefully ethically) make money off of AGI · 2023-11-13T23:46:19.518Z · LW · GW

Did you hear back here?

Comment by Robert Cousineau (robert-cousineau) on AI Alignment [progress] this Week (10/29/2023) · 2023-10-31T17:01:01.049Z · LW · GW

Retracted - my apologies. I was debating whether I should add a source when I commented, and clearly I should have.

Comment by Robert Cousineau (robert-cousineau) on AI Alignment [progress] this Week (10/29/2023) · 2023-10-31T03:34:21.273Z · LW · GW

What does this mean: "More research is better, in my opinion. But why so small? AI Alignment is at least a $1T problem."

OpenAI may have a valuation of $80 billion in a couple of days, and they are below that currently.

I haven't read the article yet, but that is a decent percentage of their current valuation. 

Comment by Robert Cousineau (robert-cousineau) on Evolution provides no evidence for the sharp left turn · 2023-04-12T20:53:55.015Z · LW · GW

This post prompted me to create the following Manifold Market on the likelihood of a Sharp Left Turn occurring (as described by the Tag definition, Nate Soares, and Victoria Krakovna et al.), prior to 2050: https://manifold.markets/RobertCousineau/will-a-sharp-left-turn-occur-as-des?r=Um9iZXJ0Q291c2luZWF1

Comment by Robert Cousineau (robert-cousineau) on Why Portland · 2022-07-10T19:44:57.645Z · LW · GW

Take note: it is only 2 hours away if you are driving in the middle of the night on a weeknight. Otherwise, it is 3-4 hours depending on how bad traffic is (there will almost always be some along I-5).

Comment by Robert Cousineau (robert-cousineau) on Debating Whether AI is Conscious Is A Distraction from Real Problems · 2022-06-22T01:53:36.776Z · LW · GW

From my beginner's understanding, the two objects you are comparing are not mutually exclusive.

There is currently work being done on inner alignment and outer alignment, where inner alignment is more focused on making sure that an AI doesn't coincidentally optimize humanity out of existence due to [us not teaching it a clear enough version of/it misinterpreting] our goals, and outer alignment is more focused on making sure we have goals aligned to human values that we should teach it.

Different big names focus on different parts/subparts of the above (with crossover as well).

Comment by Robert Cousineau (robert-cousineau) on [$20K in Prizes] AI Safety Arguments Competition · 2022-05-27T14:41:13.492Z · LW · GW

Nice work there!

Comment by Robert Cousineau (robert-cousineau) on [$20K in Prizes] AI Safety Arguments Competition · 2022-05-26T23:51:53.877Z · LW · GW

"When we create an Artificial General Intelligence, we will be giving it the power to fundamentally transform human society, and the choices that we make now will affect how good or bad those transformations will be.  In the same way that humanity was transformed when chemist and physicists discovered how to make nuclear weapons, the ideas developed now around AI alignment will be directly relevant to shaping our future."

Comment by Robert Cousineau (robert-cousineau) on [$20K in Prizes] AI Safety Arguments Competition · 2022-05-26T23:46:53.677Z · LW · GW

"Once we make an Artificial General Intelligence, it's going to try and achieve its goals however it can, including convincing everyone around it that it should achieve them.  If we don't make sure that it's goals are aligned with humanity's, we won't be able to stop it."

Comment by Robert Cousineau (robert-cousineau) on [$20K in Prizes] AI Safety Arguments Competition · 2022-05-26T23:22:01.564Z · LW · GW

"There are 7 billion people on this planet.  Each one of them has different life experiences, different desires, different aspirations, and different values.  The kinds of things that would cause two of us to act, could cause a third person to be compelled to do the opposite.  An Artificial General Intelligence will have no choice but to act from the goals we give it.  When we give it goals that 1/3 of the planet disagrees with, what will happen next?" (Policy maker)

Comment by Robert Cousineau (robert-cousineau) on [$20K in Prizes] AI Safety Arguments Competition · 2022-05-26T23:12:35.249Z · LW · GW

"With all the advanced tools we have, and with all our access to scientific information, we still don't know the consequences of many of our actions.  We still can't predict weather more than a few days out, we can't predict what will happen when we say something to another person, we can't predict how to get our kids to do what we say. When we create a whole new type of intelligence, why should we be able to predict what happens then?"

Comment by Robert Cousineau (robert-cousineau) on [$20K in Prizes] AI Safety Arguments Competition · 2022-05-26T23:06:50.735Z · LW · GW

"There are lots of ways to be smarter than people.  Having a better memory, being able to learn faster, having more thorough reasoning, deeper insight, more creativity, etc.  
When we make something smarter than ourselves, it won't have trouble doing what we ask.  But we will have to make sure we ask it to do the right thing, and I've yet to hear a goal everyone can agree on." (Policy maker)

Comment by Robert Cousineau (robert-cousineau) on [$20K in Prizes] AI Safety Arguments Competition · 2022-05-26T22:46:32.128Z · LW · GW

"When we start trying to think about how to best make the world a better place, rarely can anyone agree on what is the right way.   Imagine what would happen to if the first person to make an Artificial General intelligence got to say what the right way is, and then get instructed how to implement it like they had every researcher and commentator alive spending all their day figuring out the best way to implement it.  Without thinking about if it was the right thing to do." (Policymaker)

Comment by Robert Cousineau (robert-cousineau) on [$20K in Prizes] AI Safety Arguments Competition · 2022-05-26T22:34:52.056Z · LW · GW

"How well would your life, or the life of your child go, if you took your only purpose in life to be the first thing your child asked you to do for them?  When we make an Artificial Intelligence smarter than us, we will be giving it as well defined of a goal as we know how.  Unfortunately, the same as a 5 year old doesn't know what would happen if their parent dropped everything to do what they request, we won't be able to think through the consequences of what we request." (Policymaker)

Comment by Robert Cousineau (robert-cousineau) on [$20K in Prizes] AI Safety Arguments Competition · 2022-05-26T22:28:46.832Z · LW · GW

"Through the past 4 billion years of life on earth, the evolutionary process has emerged to have one goal: create more life.  In the process, it made us intelligent.  In the past 50 years, as humanity gotten exponentially more economically capable we've seen human birth rates fall dramatically.  Why should we expect that when we create something smarter than us, it will retain our goals any better than we have retained evolution's?" (Policymaker)