You don't need to be a genius to be in AI safety research
post by Claire Short (claire-short) · 2023-05-06T02:32:02.164Z · LW · GW
The aim of this article is to share my experiences within the AI safety community with those working in research and operations in the field, with the goal of creating a more inclusive, supportive, and positive environment. I hope it prompts those in more privileged positions to think about these considerations and potentially implement new practices in their organizations or work, creating a more welcoming space for people not traditionally represented here. I've also had some incredibly supportive and fulfilling experiences within this field, which are worth celebrating. However, in order to create the most inclusive space we can, I think the following points are important to address.
Using ‘genius’ is not a reliable metric for researchers
When people claim to be searching for ‘genius’ researchers, or claim that you need to be a genius to contribute in a meaningful way to AI safety, it’s unclear which specific metrics they are using for evaluation. Several factors make "genius" ambiguous and subjective: people's individual perspectives shaped by their upbringing, the complexity of intelligence, our still-growing understanding of cognitive processes, the context-dependence of genius, and the absence of clear criteria for determining it. Together, these make it challenging to establish a universally accepted definition of genius.
Regardless of the criteria used to determine genius, it is critical to understand that this quality isn’t the only one necessary to contribute in a meaningful way.
Recruiting researchers requires careful evaluation of multiple factors such as skill, experience, ability to work with others, and ingenuity. However, certain organizations, program managers, and coaches in the AI safety field pay too much attention to the benchmark of ‘genius’ when recruiting or encouraging researchers. This type of thinking is harmful and unnecessary: it leads to a lack of diversity, narrow focus, elitism, imposter syndrome, and wasted resources in pursuit of an elusive trait.
By allowing this idea to continue, we run the risk of recruiting for a narrow set of skills or attributes, limiting innovation and diversity of thought while also potentially slowing research progress. Instead, it's important to actively seek out talented individuals with alternative educational paths, rather than relying solely on "typical" backgrounds. Likewise, embracing a variety of research directions offers more opportunities for meaningful input. For instance, initiatives like PIBBSS, which encourages individuals beyond the traditional ML domain to engage in alignment work, highlight the importance of diversifying perspectives.
Most incredible discoveries are the result of incremental advances, even if these aren’t in the public eye. Research progress comes from the continuous, gradual accumulation of knowledge through collaborative endeavors. While exceptional individuals like Einstein or Newton are very rare, there are thousands of researchers making significant, interesting, and valuable contributions to their fields without necessarily being geniuses. You don’t even have to make a correct discovery to be a contributor: John Nicholson, Anton van den Broek, Richard Abegg, Charles Bury, John Main Smith, Edmund Stoner, and Charles Janet are all lesser-known contributors to the field of atomic structure, yet each of them published one or two ideas (some of them incorrect) that allowed other researchers, such as Niels Bohr, to push the field forward toward breakthroughs.
By the way, you don’t need to be a genius to be an independent researcher. Here’s a roadmap [LW · GW] with actionable steps you can take to get there.
Who gets to decide if you’re a genius?
There are many ways in which genius can present itself, some of which may be unfamiliar to the person making the judgment. If you are judging based on a single conversation, for instance, multiple things could be at play that make someone appear not to fit the role of genius: nerves, social anxiety, differences in communication style, or unconscious bias in the judge. The person making the call is likely not a genius themselves, and is using unclear measures of what they think constitutes one. The notion is also often biased toward individuals from privileged backgrounds, such as those who attended a prestigious school or participated in certain fellowships.
Gender biases can also play an unfair role in how we perceive ‘genius’, with men being linked to the term at a far greater rate than women throughout history. In a global perceptions study measuring stereotypes, men were more likely than women to be seen as “brilliant” as a result of implicit bias. This stereotype has multiple causes, including the historical undervaluing and overlooking of women in fields where they could have contributed greatly but were either not allowed to participate or were not given the same resources. Gender stereotypes surrounding brilliance appear to emerge early in childhood, and in order to break out of this way of thinking we need to be aware of these stereotypes and consciously work to unlearn them.
What actually makes a good researcher?
Collaboration: AI safety research is an interdisciplinary field that involves experts from a range of disciplines such as computer science, math, philosophy, linguistics, psychology, and ethics. While a strong technical background is an asset, the ability to communicate with other researchers, and with audiences of varying technical backgrounds, is equally important. [1] [2]
Honesty and ethics: It’s important that your research be reproducible and your methods transparent. Honesty in reporting results and limitations helps the broader research community build upon and validate your findings.
Persistence: Research can be a long and challenging process that requires a significant amount of time and effort. You may run into hurdles with idea generation, debugging, funding and resources, theory-related dead ends, and more. Persistence pays off.
Openness to feedback: Researchers are responsible for gathering feedback, especially in the early stages of their career, and for publishing work in public spaces if it’s safe to do so. Opening yourself up to critique and comments can feel vulnerable, but it allows for growth and creates opportunities to collaborate by getting your name out and attached to your research.
Creativity and curiosity: Since this is a new field, many types of studies are still in their infancy, which requires researchers to come up with new ideas perhaps more readily than in other fields. Creativity allows researchers to approach a problem from different angles and see what others are missing, sometimes from unexpected places.
Range of Strengths: We need skilled software engineers to carry out important experimental work that may not require theorizing. We also need skilled research engineers that have a foundational understanding of theory and machine learning. We need theorists that have a background in math, but may not necessarily know how to code. There’s no need to be an expert in all fields to make valuable contributions to AI safety research.
Openness to being wrong (and patience): Many people are quick to jump to solutions without fully understanding the problem, or feel locked into a hypothesis or solution because they’ve already put in a lot of time pursuing that path. It’s important to give up on an idea when the evidence points that way, let go of ego, and take your time when pursuing solutions [LW · GW].
Perseverance after failure: Failure can sting, but it’s important not to take it as a personal defect. Many people who excelled in school may never have experienced an academic failure, and may have trouble accepting one in a research setting. This is common during the shift from university (where problems can generally be solved with some effort) to a research environment (where problems may have no clear solution). It’s important to realize this is part of life when you are trailblazing new paths. Being able to bounce back quickly is a superpower.
Creating a more inclusive environment
We have a long way to go in terms of inclusivity and diversity in the AI safety community, but there are ways that we can become more welcoming to a diverse range of people (and to ourselves):
Patience: accept that there’s a learning curve, and that research takes time. It’s okay (and very normal) to not be very productive in your first few months of research; this isn’t indicative of your ability to be a great researcher. Be kind to yourself.
Adversarial communication: even casual conversations in the community can sometimes feel like you are being asked to constantly explain yourself, or as though your conversation partner's goal is to contradict you and prove you wrong (even if that isn’t the intention). This can be off-putting to some, but it can be mitigated by adopting a friendlier conversational style when first speaking with someone and taking a moment to understand their communication style before defaulting to debate.
Rest: Allow yourself to rest regardless of your timeline, take care of yourself, and recognize that you don’t deserve to feel guilty for not being productive every day. You can walk, work out, stretch away from your desk, or do a 10-minute meditation. This time may even lead you to a breakthrough.
Leadership: Push for diverse leadership teams at AI alignment organizations, fellowships, and research teams as the norm. Advocate for a greater diversity of perspectives and encourage the ideas of those not traditionally represented.
Conclusion
‘Genius’ shouldn’t be used to assess potential researchers, because the term is vague and poorly understood, and focusing on it leads to overlooking researchers with diverse skills, experiences, and perspectives. Instead, I urge those in AI safety research to adopt an inclusive, well-rounded view of what a productive researcher looks like, one that considers unique strengths, areas of knowledge, and collaboration potential. In order to incorporate a range of skills and perspectives, we need to encourage different types of people to become involved at multiple levels.
The concept of genius is complex and multifaceted, and shifts with cultural and historical context. Let’s recognize and appreciate diverse forms of intelligence and creativity, rather than narrow our viewpoints at the expense of many.
Rather than recruiting for genius, organizations should evaluate promising researchers on their potential, as reflected in a combination of factors such as curiosity, creativity, persistence, skill, collaboration, motivation, and willingness to learn. Recruiting based on ‘genius’ is harmful for both the applicant and the organization because it is an unfair, potentially biased, and unclear way of measuring how well someone fits into AI safety research.
1 comment
comment by Cookiecarver · 2023-05-06T05:54:57.113Z · LW(p) · GW(p)
Do you have to have roughly the same kind of worldview as the top AI alignment researchers? Do you have to be a secular atheist and reductive naturalist/physicalist holding a computational theory of mind?