Posts

How to express this system for ethically aligned AGI as a Mathematical formula? 2023-04-19T20:13:43.881Z
Consciousness is irrelevant - instead solve alignment by asking this question 2023-03-04T22:06:46.424Z
Should AI focus on problem-solving or strategic planning? Why not both? 2022-11-05T19:17:56.783Z
How to store human values on a computer 2022-11-05T19:17:56.595Z

Comments

Comment by Oliver Siegel (oliver-siegel) on Consciousness is irrelevant - instead solve alignment by asking this question · 2023-04-30T16:25:32.693Z · LW · GW

Correct! That's my point with the main post. I don't see anyone discussing conscience; I mostly hear people contemplating consciousness or computability.

As for how to actually do this, I've posted a few ideas on this site; they should be listed on my profile.

Comment by Oliver Siegel (oliver-siegel) on Consciousness is irrelevant - instead solve alignment by asking this question · 2023-04-21T16:48:45.239Z · LW · GW

Makes perfect sense!

Isn't that exactly why we should develop an artificial conscience: to prevent an AI from lying or having a shadow side?

A built-in conscience would let the AI know that lying is not something it should do. Using a conscience in the AI's algorithm would also make the AI combat its own potential shadow. It would have knowledge of right and wrong, good and bad, and even a superhuman ability to orient itself toward what is good and right, rather than being "seduced" by the dark side.

Comment by Oliver Siegel (oliver-siegel) on Consciousness is irrelevant - instead solve alignment by asking this question · 2023-04-20T00:37:47.333Z · LW · GW

Thank you for your comment!

In your opinion, what's the biggest challenge in feeding a DNN with human values, and then adjusting its biases in a way that doesn't degrade them?

We've taught AI how to speak, and it appears that OpenAI has taught their AI how to produce as little offensive content as possible. So it seems feasible, doesn't it?

Comment by Oliver Siegel (oliver-siegel) on How to store human values on a computer · 2022-11-21T07:59:48.859Z · LW · GW

Fair point! But how do you know that this ungrounded mysticism doesn't apply to the current debate about the potential capabilities of AI systems?

Why would an AI suddenly be able to figure out how to break the laws of physics and be superintelligent about ending intelligent life, yet somehow be incapable of comprehending the human laws of ethics and morality, and of valuing life as we know it?

What makes the laws of physics easier to understand and easier to circumvent than the human laws of ethics and morality? (Also, navigating the human laws of ethics and morality would surely be required for ending all life, unless software suddenly has the same energy density as enriched plutonium or something like that, and one wrong bit flip causes an explosive chain reaction.)

What makes critical thinking and "how to store human values on a computer" so much more difficult to understand, and "accidentally ending all intelligent life" so easy, by comparison?

It seems to me that "an ASI on a mission to destroy humanity" is the same kind of thing as the "luminiferous aether".

We've taught AI English, and we've taught it to draw pictures and create art. Both are pretty "fuzzy" things.

How hard can it be to train an AI on a dataset covering 90% of known human values, plus 90% of known problems and solutions with respect to those values, so that a neural net has an above-average-human grasp of the idea that "ending all intelligent life" computes as "that's a problem, and it's immoral"?
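To make that concrete, here's a deliberately toy sketch, with a hypothetical four-example dataset and an off-the-shelf scikit-learn classifier (a real values dataset would obviously be vastly larger and messier than a few labeled strings):

```python
# Toy sketch: fit a text classifier on a tiny, made-up dataset of
# (scenario, moral judgment) pairs, then score an unseen scenario.
# A real "human values" dataset would be vastly larger and richer.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

scenarios = [
    ("help a lost child find their parents", "good"),
    ("share food with someone who is starving", "good"),
    ("end all intelligent life", "bad"),
    ("deceive people for personal gain", "bad"),
]
texts, labels = zip(*scenarios)

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)

# This phrase overlaps heavily with the "end all intelligent life"
# training example, so the classifier should flag it as "bad".
print(clf.predict(["end intelligent life on earth"]))
```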

Beyond that, alignment is unsolvable anyway for AGI systems that perform above human intelligence. You can't predict the future with software, because there could always be a program that calls the future-predicting software and negates its output, the same diagonalization that underlies the Halting Problem. Can't do anything about that.
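The construction I mean is the standard diagonalization, sketched below; the `predict` oracle here is hypothetical, and the point is exactly that no such oracle can exist:

```python
# Sketch of the diagonalization behind the Halting Problem argument:
# any hypothetical oracle `predict` that claims to foresee a
# program's output can be defeated by a program that negates it.

def contrarian(predict):
    """Ask the oracle what we will return, then do the opposite."""
    forecast = predict(contrarian)  # oracle claims: "A" or "B"
    return "B" if forecast == "A" else "A"

def naive_oracle(program):
    return "A"  # any fixed (or computed) guess fails the same way

# Whatever the oracle answers, contrarian returns the other thing,
# so every candidate oracle is wrong about contrarian:
assert contrarian(naive_oracle) != naive_oracle(contrarian)
```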

Comment by Oliver Siegel (oliver-siegel) on How to store human values on a computer · 2022-11-18T22:26:37.053Z · LW · GW

Absolutely, I'm here for the feedback! No solution should go without criticism, regardless of what authority posted the idea or how much experience the author has. :)

Comment by Oliver Siegel (oliver-siegel) on How to store human values on a computer · 2022-11-18T22:24:32.196Z · LW · GW

Interesting article! It reminds me of Monica Anderson's blog: https://experimental-epistemology.ai/

She embraces the mysticism and proposes that holistic, non-reductionist, model-free systems are undeniably effective.

> "The biggest problem, as I see it, is that you haven't come to a thorough understanding of what you mean" 

That's another topic Monica writes a lot about: understanding.

What does it mean to understand something? And what is the meaning of meaning?

Yes, these sound like metaphysical, mystical ideas, and they might be fundamentally unsolvable (see the hard problem of consciousness, or the explanatory gap).

But we already see that it must be possible for systems to exist in this universe that are aligned within a group: many groups of humans use their brains to figure out how to coexist peacefully.

So unless humans possess a mystical ingredient, it must be possible to recreate this sort of understanding in machines.

But we don't currently know how to teach this to machines, in part because we lack a good dataset, and in part because we don't understand it ourselves.

Do you think alignment is fundamentally an engineering problem, or a problem of the humanities and philosophy?

Comment by Oliver Siegel (oliver-siegel) on Informal semantics and Orders · 2022-11-18T22:13:11.584Z · LW · GW

Love this! 

We're working on something related / similar:
https://forum.effectivealtruism.org/posts/FnviTNXcjG2zaYXQY/how-to-store-human-values-on-a-computer 

This was cross-posted on LessWrong as well, where it received a lot of criticism:
https://www.lesswrong.com/posts/rt2Avf63ADbzq9SuC/how-to-store-human-values-on-a-computer 

Comment by Oliver Siegel (oliver-siegel) on How to store human values on a computer · 2022-11-07T08:05:42.725Z · LW · GW

Yeah, I agree!

But if it were easy, everyone would do it... ;p

Based on your knowledge, what do you think might be the biggest hurdles to making it possible, using a system similar to the one I described above?

Comment by Oliver Siegel (oliver-siegel) on How to store human values on a computer · 2022-11-07T08:02:35.044Z · LW · GW

Thank you for the resource!

I'm planning to continue publishing more details about this concept. I believe it will address many of the things mentioned in the post you linked.

Instead of posting it all at once, I'm posting it in smaller chunks that all connect.

I have something coming up about preventing instrumental convergence with formalized critical thinking, as well as a general problem-solving algorithm. It'll hopefully make sense once it's all there!

Comment by Oliver Siegel (oliver-siegel) on How to store human values on a computer · 2022-11-06T09:07:20.634Z · LW · GW

Thanks for sharing! Yes, it seems that the computational complexity could indeed explode at some point.

But then again, an average human brain is capable of storing common-sense values and ethics, so unless there's a magic ingredient in the human brain, it's probably not impossible to rebuild that on a computer.

Then, with an artificial brain that has all the benefits of never fatiguing and such, we may come close to a somewhat useful Genie that can at least advise on the best course of action given all the possible pitfalls.

Even if it were just, say, 25% better than the best human, all humans could get access to this Genie on their smartphones. How cool would that be?

But I'll have to dig deeper into The Sequences; they seem very comprehensive.

I found Monica Anderson's blog quite inspiring as well. She writes about model-free, holistic systems. https://experimental-epistemology.ai/

Comment by Oliver Siegel (oliver-siegel) on How to store human values on a computer · 2022-11-06T03:02:10.162Z · LW · GW

Thank you! Could I get a link to "The Sequences"? I can't find them here: https://www.lesswrong.com/tags/all

Comment by Oliver Siegel (oliver-siegel) on Should AI focus on problem-solving or strategic planning? Why not both? · 2022-11-04T19:39:20.426Z · LW · GW

Hey there, just FYI I added some more content! 

Comment by Oliver Siegel (oliver-siegel) on Should AI focus on problem-solving or strategic planning? Why not both? · 2022-11-03T21:43:11.332Z · LW · GW

Thank you! I've been researching this for quite some time. 

But I also don't want to overload anyone by going too deep into the subject right away and making it too jargony.