Call for submissions: “(In)human Values and Artificial Agency”, ALIFE 2023

post by the gears to ascension (lahwran) · 2023-01-30T17:37:48.882Z · LW · GW · 4 comments

This is a link post for https://humanvaluesandartificialagency.com/

Contents

  EXAMPLES OF A-LIFE RELATED TOPICS
  EXAMPLES OF AI SAFETY RELATED TOPICS
4 comments

key points:

ALIFE 2023 (the 2023 conference on Artificial Life) will feature a Special Session on “(In)human Values and Artificial Agency”. This session focuses on issues at the intersection of AI Safety and Artificial Life. We invite the submission of research papers or extended abstracts that deal with related topics.

We particularly encourage submissions from researchers in the AI Safety community, who might not otherwise have considered submitting to ALIFE 2023.

...

EXAMPLES OF A-LIFE RELATED TOPICS

Here are a few examples of topics that engage with A-Life concerns:

  • Abstracted simulation models of complex emergent phenomena
  • Concepts such as embodiment, the extended mind, enactivism, sensorimotor contingency theory, or autopoiesis
  • Collective behaviour and emergent behaviour
  • Fundamental theories of agency or theories of cognition
  • Teleological and goal-directed behaviour of artificial agents
  • Specific instances of adaptive phenomena in biological, social or robotic systems
  • Thermodynamic and statistical-mechanical analyses
  • Evolutionary, ecological or cybernetic perspectives

EXAMPLES OF AI SAFETY RELATED TOPICS

Here are a few examples of topics that engage with AI Safety concerns:

  • Assessment of distinctive risks, failure modes or threat models for artificial adaptive systems
  • Fundamental theories of agency, theories of cognition or theories of optimization
  • Embedded Agency – formalizations of agent-environment interactions that account for embeddedness, detecting agents, and representing agents’ goals
  • Selection theorems – how selection pressures and training environments determine agent properties
  • Multi-agent cooperation; inferring or learning human values and aggregating preferences
  • Techniques for aligning AI models with human preferences, such as Reinforcement Learning from Human Feedback (RLHF)
  • Goal Misgeneralisation – how agents’ goals generalise to new environments
  • Mechanistic interpretability of learned or evolved agents (“digital neuroscience”)
  • Improving fairness and reducing harm from machine learning models deployed in the real world
  • Loss of human agency from increasing automation

4 comments


comment by Charlie Steiner · 2023-01-30T20:53:05.796Z · LW(p) · GW(p)

Do you have any insider knowledge about this conference? Do you know if it tends to be interesting, have interesting people, be in touch with reality, etc?

comment by rorygreig (rorygreig100) · 2023-01-31T19:41:46.329Z · LW(p) · GW(p)

Hey, one of the co-organisers of this special session here (I was planning to make a post about this on LW myself but OP beat me to it!).

Clearly I am biased, but I would highly recommend the ALIFE conference (even outside the context of this special session). I published a paper there myself at ALIFE 2021 and really enjoyed the experience.

It has a diverse, open-minded and enthusiastic set of attendees from a wide range of academic disciplines, and the topics are varied and interesting. Regarding being in touch with reality, this is harder to comment on, but the conference typically includes a lot of practical and empirical research (for example computer simulations) as well as more theoretical and philosophical work.

We are arranging this special session because we think that Artificial Life as a field, and in particular attendees of this conference, may have a lot to contribute to AI safety, so we are excited about the potential overlap between these areas.

Please feel free to reach out to me directly if you have any questions.

comment by the gears to ascension (lahwran) · 2023-01-30T23:52:26.623Z · LW(p) · GW(p)

no insider knowledge; I have some outsider knowledge that makes me optimistic that there will be interesting things to be learned from people there. alife research has been focusing on things like https://chakazul.github.io/lenia.html, and now particle lenia, https://google-research.github.io/self-organising-systems/particle-lenia/, and it connects well with other research on related topics that I've had a hobby of making indexes of, eg this recent post [LW · GW].
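for a flavour of what the Lenia family of models does, here's a minimal sketch of a Lenia-style continuous cellular automaton step: convolve the state with a ring-shaped kernel, pass the result through a bell-shaped growth function, and integrate with a small time step. (the kernel shape and growth parameters below are illustrative choices, not the canonical Lenia values.)

```python
import numpy as np
from scipy.signal import fftconvolve

def ring_kernel(radius=13):
    """Smooth ring-shaped kernel, normalised to sum to 1."""
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    r = np.sqrt(x**2 + y**2) / radius  # radial distance, 1.0 at the ring edge
    mask = (r > 0) & (r < 1)
    K = np.zeros_like(r)
    # Smooth bump that peaks at r = 0.5 and vanishes at r = 0 and r = 1.
    K[mask] = np.exp(4.0 - 1.0 / (r[mask] * (1.0 - r[mask])))
    return K / K.sum()

def growth(u, mu=0.15, sigma=0.015):
    """Bell-shaped growth mapping neighbourhood activation to [-1, 1]."""
    return 2.0 * np.exp(-((u - mu) ** 2) / (2.0 * sigma**2)) - 1.0

def step(A, K, dt=0.1):
    """One update: convolve, apply growth, integrate, clip to [0, 1]."""
    U = fftconvolve(A, K, mode="same")
    return np.clip(A + dt * growth(U), 0.0, 1.0)

rng = np.random.default_rng(0)
A = rng.random((64, 64))  # world state: densities in [0, 1]
K = ring_kernel()
for _ in range(10):
    A = step(A, K)
```

self-organising "creatures" in Lenia emerge from exactly this loop run for many steps from structured initial conditions; particle lenia replaces the grid with particles interacting through similar kernel/growth fields.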

here are the proceedings of the 2022 conference: https://direct.mit.edu/isal/isal/volume/33 - some papers that seem worth a skim, from those proceedings (I really wish lesswrong had hover-to-preview, use a chrome tab group or something to open the links quickly then flip through them fast):

but as a person who's always focused on approximations and is kind of beginner at actually doing the math I like to read about, I can only offer a pointer to the conference.

comment by rorygreig (rorygreig100) · 2023-02-25T11:07:19.782Z · LW(p) · GW(p)

Update: The submissions deadline for this Special Session has been extended to 13th March.