Posts

Predict the future! Earn fake internet points! Get a (free) gambling addiction! 2023-12-12T04:39:50.162Z

Comments

Comment by Robert Cousineau (robert-cousineau) on Gwern: Why So Few Matt Levines? · 2024-11-04T15:20:42.812Z · LW · GW

Derek Lowe, I believe, comes closest to being a Matt Levine for pharma (and chem): https://www.science.org/blogs/pipeline

 

He has a really fun-to-read series titled "Things I Won't Work With" where he talks about a bunch of dangerous chemicals: https://www.science.org/topic/blog-category/things-i-wont-work-with

Comment by Robert Cousineau (robert-cousineau) on Lonely Dissent · 2024-10-22T21:00:41.219Z · LW · GW

http://www.overcoming-bias.com/2007/06/against_free_th.html.

This link should be: https://www.overcomingbias.com/p/against_free_thhtml (removing the hyphen will allow a successful redirect).

Comment by Robert Cousineau (robert-cousineau) on The case for a negative alignment tax · 2024-09-19T00:41:07.503Z · LW · GW

In the limit (what might be considered the ‘best imaginable case’), we might imagine researchers discovering an alignment technique that (A) was guaranteed to eliminate x-risk and (B) improve capabilities so clearly that they become competitively necessary for anyone attempting to build AGI. 

I feel like throughout this post you are ignoring that agents, "in the limit", are (likely) provably taxed by having to be aligned to goals other than their own.  An agent with utility function A is definitely going to be less capable at achieving A if it is also required to be aligned to utility function B.  I accept that current LLMs are not best described as having a singular consistent goal function; however, "in the limit", that is what they will be best described as.
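
A minimal sketch of the formalization I have in mind (my notation, nothing from the post): treat "aligned to B" as a constraint on the set of policies Π available to the agent. Then

$$\max_{\pi \in \Pi,\; B(\pi) \ge b} A(\pi) \;\le\; \max_{\pi \in \Pi} A(\pi),$$

since the left-hand side maximizes over a subset of Π. The gap between the two sides is exactly the "tax" on A, and it is zero only when the constraint happens not to bind.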

Comment by Robert Cousineau (robert-cousineau) on What would stop you from paying for an LLM? · 2024-05-21T23:45:32.339Z · LW · GW

I stopped paying for ChatGPT earlier this week, while thinking about the departures of Jan and Daniel.

Whereas before they left I was able to say to myself "well, there are smarter people than me, with worldviews similar to mine, who have far more information about OpenAI than I do, and they think it is not a horrible place, so 20 bucks a month is probably fine", I am no longer able to do that.

They have explicitly sounded the best alarm they reasonably know how to sound right now. I should listen!

Comment by Robert Cousineau (robert-cousineau) on Will 2024 be very hot? Should we be worried? · 2023-12-29T20:38:28.971Z · LW · GW

Market odds are currently at 54% that 2024 is hotter than 2023: https://manifold.markets/SteveRabin/will-the-average-global-temperature?r=Um9iZXJ0Q291c2luZWF1

I have some substantial limit orders at ±8% if anyone strongly disagrees.
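
For concreteness, a hedged sketch of what "±8%" means here (plain arithmetic, not Manifold's actual API, and assuming it means eight percentage points either side of the current price; the dissenting trader's 30% belief below is an invented example):

```python
# Illustrative arithmetic only (not Manifold's API): limit orders at
# +/-8 percentage points around a market trading at 54%.
market_prob = 0.54
spread = 0.08

buy_yes_at = market_prob - spread   # 0.46: fills if the price falls this far
sell_yes_at = market_prob + spread  # 0.62: fills if the price rises this far

def yes_share_ev(p_true: float, fill_price: float) -> float:
    """Expected value of one YES share (pays 1.0 on YES) bought at fill_price."""
    return p_true - fill_price

# Someone who strongly disagrees (say they think P(2024 hotter) = 0.30) and
# sells YES down to 0.46 expects to gain ~0.16 per share -- if they are right.
print(yes_share_ev(0.30, buy_yes_at))  # about -0.16: the buyer's EV at 0.46
```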

Comment by Robert Cousineau (robert-cousineau) on In defence of Helen Toner, Adam D'Angelo, and Tasha McCauley (OpenAI post) · 2023-12-05T21:57:52.017Z · LW · GW

I like the writeup, but I recommend actually posting it directly to LessWrong. The writeup is of much higher quality than your summary, and it would be well suited to inline comments and the other features of the site.

Comment by Robert Cousineau (robert-cousineau) on We Should Talk About This More. Epistemic World Collapse as Imminent Safety Risk of Generative AI. · 2023-11-16T20:01:05.833Z · LW · GW

I have a lot of trouble justifying to myself reading through more than the first five paragraphs.  Below is my commentary on what I've read.

I doubt that the short-term impacts on our epistemology of sub-human-level AI, be it generating prose, photographs, or films, are negative enough to justify weighting them as highly as the x-risk that is likely to emerge upon creating human-level AI.

We have been living in an adversarial information space for as long as we have had human civilization.  Some of the most impressive changes to our epistemology were made prior to photographs (empiricism, rationalism, etc.), and some were made after (they are cool!).  We will have to modify how we judge the accuracy of a given claim when we can no longer trust photos/videos in low-stakes situations (we've never been able to trust them in high-stakes situations for as long as they have existed; see special effects, filmmaking, and conspiracies about any number of recorded claims), but that is just a normal part of human societal evolution.

If you want to convince (me, at least) that this is a potentially society-ending threat, I would love an argument/hook that addresses my claims above.

Comment by Robert Cousineau (robert-cousineau) on Bostrom Goes Unheard · 2023-11-14T00:40:41.158Z · LW · GW

This seems like highly relevant (even if odd/disconcerting) information.  I'm not sure if it should necessarily get its own post (is this as important as the UK AI Summit or the Executive Order?), but it should certainly get a top-level item in your next roundup at least.

Comment by Robert Cousineau (robert-cousineau) on It's OK to eat shrimp: EAs Make Invalid Inferences About Fish Qualia and Moral Patienthood · 2023-11-14T00:36:07.500Z · LW · GW

I think that, unless you take a very linguistics-heavy understanding of the emergence of qualia, you are over-weighting the link between being able to communicate with an agent and how likely that agent is to be conscious.

___________________________________________________________________________________________

You say:

In short, there are some neural circuits in our brains that run qualia. These circuits have inputs and outputs: signals get into our brains, get processed, and then, in some form, get inputted into these circuits. These circuits also have outputs: we can talk about our experience, and the way we talk about it corresponds to how we actually feel.

And: 

It is valid to infer that, likely, qualia has been beneficial in human evolution, or it is a side effect of something that has been beneficial in human evolution.

I think both of the above statements are very likely true.  Given that, it is hard to say that a chimpanzee is likely to lack those same circuits.  Neither our mental circuits nor our ancestral environments are that different.  Similarly, it is hard to say "OK, this is what a lemur is missing, as compared to a chimpanzee".

I agree that as you go down the list of potentially conscious entities (e.g. Humans -> Chimpanzees -> Lemurs -> Rats -> Bees -> Worms -> Bacteria -> Viruses -> Balloons) it gets less likely that each has qualia, but I am very hesitant to put anything like an order-of-magnitude drop in probability at each level.
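
To make that hesitation concrete, here is a purely illustrative sketch (every number is my own assumption, not a claim from the post) contrasting the order-of-magnitude-per-level prior with a shallower one:

```python
# Purely illustrative: two priors over P(has qualia) down the ladder.
# All numbers are assumptions for the sake of the example.
ladder = ["human", "chimpanzee", "lemur", "rat", "bee",
          "worm", "bacterium", "virus", "balloon"]

# The "order-of-magnitude drop per level" prior I'm hesitant about:
steep = {name: 10.0 ** -i for i, name in enumerate(ladder)}

# A shallower alternative: probability still falls monotonically, just slowly.
shallow = {name: max(0.99 - 0.12 * i, 0.01) for i, name in enumerate(ladder)}

for name in ladder:
    print(f"{name:>10}: steep={steep[name]:.0e}  shallow={shallow[name]:.2f}")
```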

Comment by Robert Cousineau (robert-cousineau) on How to (hopefully ethically) make money off of AGI · 2023-11-13T23:46:19.518Z · LW · GW

Did you hear back here?

Comment by Robert Cousineau (robert-cousineau) on AI Alignment [progress] this Week (10/29/2023) · 2023-10-31T17:01:01.049Z · LW · GW

Retracted - my apologies. I was debating if I should add a source when I commented that and clearly I should have.

Comment by Robert Cousineau (robert-cousineau) on AI Alignment [progress] this Week (10/29/2023) · 2023-10-31T03:34:21.273Z · LW · GW

What does this mean: "More research is better, in my opinion. But why so small? AI Alignment is at least a $1T problem."

OpenAI may have a valuation of $80 billion in a couple of days, and they are below that currently.

I haven't read the article yet, but that is a decent percentage of their current valuation. 

Comment by Robert Cousineau (robert-cousineau) on Evolution provides no evidence for the sharp left turn · 2023-04-12T20:53:55.015Z · LW · GW

This post prompted me to create the following Manifold market on the likelihood of a Sharp Left Turn occurring (as described by the tag definition, Nate Soares, and Victoria Krakovna et al.) prior to 2050: https://manifold.markets/RobertCousineau/will-a-sharp-left-turn-occur-as-des?r=Um9iZXJ0Q291c2luZWF1

Comment by Robert Cousineau (robert-cousineau) on Why Portland · 2022-07-10T19:44:57.645Z · LW · GW

Take note: it is only 2 hours away if you are driving in the middle of the night on a weeknight. Otherwise, it is 3-4 hours depending on how bad traffic is (there will almost always be some along I-5).

Comment by Robert Cousineau (robert-cousineau) on Debating Whether AI is Conscious Is A Distraction from Real Problems · 2022-06-22T01:53:36.776Z · LW · GW

From my beginner's understanding, the two objects you are comparing are not mutually exclusive.

There is currently work being done on both inner alignment and outer alignment: inner alignment is more focused on making sure that an AI doesn't coincidentally optimize humanity out of existence due to [us not teaching it a clear enough version of/it misinterpreting] our goals, and outer alignment is more focused on making sure the goals we teach it are aligned with human values.

Different big names focus on different parts/subparts of the above (with crossover as well).

Comment by Robert Cousineau (robert-cousineau) on [$20K in Prizes] AI Safety Arguments Competition · 2022-05-27T14:41:13.492Z · LW · GW

Nice work there!

Comment by Robert Cousineau (robert-cousineau) on [$20K in Prizes] AI Safety Arguments Competition · 2022-05-26T23:51:53.877Z · LW · GW

"When we create an Artificial General Intelligence, we will be giving it the power to fundamentally transform human society, and the choices that we make now will affect how good or bad those transformations will be.  In the same way that humanity was transformed when chemist and physicists discovered how to make nuclear weapons, the ideas developed now around AI alignment will be directly relevant to shaping our future."

Comment by Robert Cousineau (robert-cousineau) on [$20K in Prizes] AI Safety Arguments Competition · 2022-05-26T23:46:53.677Z · LW · GW

"Once we make an Artificial General Intelligence, it's going to try and achieve its goals however it can, including convincing everyone around it that it should achieve them.  If we don't make sure that it's goals are aligned with humanity's, we won't be able to stop it."

Comment by Robert Cousineau (robert-cousineau) on [$20K in Prizes] AI Safety Arguments Competition · 2022-05-26T23:22:01.564Z · LW · GW

"There are 7 billion people on this planet.  Each one of them has different life experiences, different desires, different aspirations, and different values.  The kinds of things that would cause two of us to act, could cause a third person to be compelled to do the opposite.  An Artificial General Intelligence will have no choice but to act from the goals we give it.  When we give it goals that 1/3 of the planet disagrees with, what will happen next?" (Policy maker)

Comment by Robert Cousineau (robert-cousineau) on [$20K in Prizes] AI Safety Arguments Competition · 2022-05-26T23:12:35.249Z · LW · GW

"With all the advanced tools we have, and with all our access to scientific information, we still don't know the consequences of many of our actions.  We still can't predict weather more than a few days out, we can't predict what will happen when we say something to another person, we can't predict how to get our kids to do what we say. When we create a whole new type of intelligence, why should we be able to predict what happens then?"

Comment by Robert Cousineau (robert-cousineau) on [$20K in Prizes] AI Safety Arguments Competition · 2022-05-26T23:06:50.735Z · LW · GW

"There are lots of ways to be smarter than people.  Having a better memory, being able to learn faster, having more thorough reasoning, deeper insight, more creativity, etc.  
When we make something smarter than ourselves, it won't have trouble doing what we ask.  But we will have to make sure we ask it to do the right thing, and I've yet to hear a goal everyone can agree on." (Policy maker)

Comment by Robert Cousineau (robert-cousineau) on [$20K in Prizes] AI Safety Arguments Competition · 2022-05-26T22:46:32.128Z · LW · GW

"When we start trying to think about how to best make the world a better place, rarely can anyone agree on what is the right way.   Imagine what would happen to if the first person to make an Artificial General intelligence got to say what the right way is, and then get instructed how to implement it like they had every researcher and commentator alive spending all their day figuring out the best way to implement it.  Without thinking about if it was the right thing to do." (Policymaker)

Comment by Robert Cousineau (robert-cousineau) on [$20K in Prizes] AI Safety Arguments Competition · 2022-05-26T22:34:52.056Z · LW · GW

"How well would your life, or the life of your child go, if you took your only purpose in life to be the first thing your child asked you to do for them?  When we make an Artificial Intelligence smarter than us, we will be giving it as well defined of a goal as we know how.  Unfortunately, the same as a 5 year old doesn't know what would happen if their parent dropped everything to do what they request, we won't be able to think through the consequences of what we request." (Policymaker)

Comment by Robert Cousineau (robert-cousineau) on [$20K in Prizes] AI Safety Arguments Competition · 2022-05-26T22:28:46.832Z · LW · GW

"Through the past 4 billion years of life on earth, the evolutionary process has emerged to have one goal: create more life.  In the process, it made us intelligent.  In the past 50 years, as humanity gotten exponentially more economically capable we've seen human birth rates fall dramatically.  Why should we expect that when we create something smarter than us, it will retain our goals any better than we have retained evolution's?" (Policymaker)