[UPDATE: deadline extended to July 24!] New wind in rationality’s sails: Applications for Epistea Residency 2023 are now open
post by Jana Meixnerová (Epistea), Irena Kotíková · 2023-07-11T11:02:28.705Z · LW · GW · 7 comments
UPDATE 2: Do you have any questions about the Epistea Residency? Join us for a live Q&A call with the organizers on Thursday, July 20 at 9 am CET (Wednesday midnight PDT) at https://us06web.zoom.us/j/85371078204
UPDATE: We have clarified the sections on funding and on applying as an individual, and have extended the application deadline (now July 24).
Motivation
In recent years, research and development of new methods in rationality and epistemics has slowed significantly and become increasingly neglected. However, we believe clear thinking and rationality are no less important today than they were before. Even though the attention of the LessWrong and applied rationality communities has recently focused mostly on direct interventions into AI risk, in our view many potential problems with the transition to powerful AI systems come down to the ability of humans to think clearly, and in particular, to think clearly in groups. Based on this view of the relative importance and neglectedness of rationality, we at Epistea decided to organize a residency program [EA · GW] aiming to improve the situation and bring new people into the areas of rationality, epistemics, and civilisational sanity.
Epistea Residency
The Epistea Residency program will take place from September 18 to November 25, 2023 as part of Prague Fall Season 2023 at Fixed Point in Prague, Czech Republic. One way to think about the fellowship is as a program somewhat similar to SERI-MATS or PIBBSS, but focused on advancing epistemics and applied rationality rather than AI alignment directly (this does not preclude projects which advance both rationality and alignment). For the 10-week in-person residency, we will invite between 20 and 30 participants, forming teams of 3-5 members each. We expect all team members to dedicate at minimum 0.5 FTE (20 hours per week) to the residency, which leaves room to at least partially maintain their regular work commitments if necessary. By the end of the residency, each team will be expected to deliver a project output or reach a major project milestone.
Current tentative mentors for the program are: Anna Salamon, Jan Kulveit, Nora Ammann, Eli Tyre, Gavin Leech, Raymond Douglas, and Damon Sasi.
Format
Teams will have the option to propose their own project within the areas of rationality, epistemics, and civilisational sanity. Directions we are particularly excited about include:
- Applied rationality theory. For example, rethinking applied rationality from scratch based on geometric rationality and active inference;
- Group rationality. For example, developing group coordination mechanisms that avoid coordination on false "coordination beliefs", advancing previous Epistea work [LW · GW];
- Applied rationality practice. For example, developing new, easy ways to transmit habits or techniques, or a new workshop format;
- Use of AI to improve human rationality and coordination, e.g. developing an "automated Double-Crux aid" or an automated group deliberation aid;
- Interventions to raise the sanity waterline in the general public, e.g. developing easily spreadable and useful models for understanding the AI transition, developing metaphors and simple arguments to counter biases influencing the AI risk debate, or developing other scalable interventions into civilisational sanity;
- Interventions against memetic hijacking, e.g. development of tools to help people avoid becoming ideological bots.
We are open to different forms of work, with the soft boundary that residency projects should aim to produce concrete outputs (posts, papers, courses, tools, workshops, new techniques). The work can take different forms, including:
- Research;
- Distillation, communication, and publishing (writing and publishing a series of explanatory posts, video production, writing a textbook or course materials, etc.);
- Program development (events, workshops, etc.);
- Software development.
What we offer
Based on feedback from participants in last year's pilot of the program, this time around we will support the Epistea Residency teams with:
- Venue and operations (fully equipped co-working office space for each team, meeting rooms, office supplies, access to and maintenance of the house facilities, daily lunch catering, drinks, and snacks);
- Administrative support (assistance with booking flights and/or accommodation and settling in Prague, operations for any projects or events organized within the scope of Prague Fall Season);
- Targeted mentoring by experts in the field of each team’s project. Current tentative mentors are Anna Salamon, Jan Kulveit, Nora Ammann, Eli Tyre, Gavin Leech, and Raymond Douglas. We expect to announce more soon;
- Opportunity to independently take advantage of the complete Prague Fall Season programming;
- Possible funding for travel expenses, accommodation, and salaries (to be determined depending on funding availability).
We have a preliminary commitment from CFAR to fund part of this program (teams would apply to CFAR separately to get funding for their project), and we are awaiting more funding decisions. If funding is a crux for your participation in the program, please apply anyway; by the time of the selection process (late July), we should have more information about the scope of the funding.
Apply now!
The first round of written applications for the Epistea Residency is now open here.
We welcome applications from both teams and independent individuals. The program is designed for teams by default, but if you don't have anyone to team up with, you can still apply; based on your application, we will suggest other applicants to join you in a team. In exceptional cases, you can participate as an individual if this serves your project best; we will evaluate such requests on a case-by-case basis.
We are accepting applications on a rolling basis, with a final deadline of July 17 (UPDATE: extended to July 24!). Successful applicants will be interviewed in a second application round. All applicants can expect to hear back from us by the beginning of August. If you would like to join us at Fixed Point during Prague Fall Season in another capacity, please fill out this form or stay tuned for the short-term visitor applications for Prague Fall Season 2023.
Support us
We are currently in the process of securing funding for this program (including travel reimbursements, financial aid for housing, staff salaries, a small stipend for residents, etc.). If you think this is a good fit for your organization, or if you know someone who would be interested in supporting this project, please let us know at info@praguefallseason.com.
7 comments
comment by Chris_Leong · 2023-07-01T08:08:28.222Z · LW(p) · GW(p)
Exciting to see this program. There's definitely been less progress on rationality recently, and I'm keen to see what comes out of it.
comment by Ulisse Mini (ulisse-mini) · 2023-07-13T05:02:05.738Z · LW(p) · GW(p)
Excited to see what comes out of this. I do want to raise attention to this failure mode [LW · GW] covered in the Sequences, however. I'd love for those who do the program to try to bind their results to reality in some way, ideally having a concrete result of how they're substantively stronger afterwards, and how this replicated with other participants who did the training.
↑ comment by Elizabeth (pktechgirl) · 2023-07-19T20:16:09.382Z · LW(p) · GW(p)
This failure mode is definitely real. OTOH, demands for immediate, legible results can kill off many valuable options. There are major improvements in my life that are measurable (e.g. ability to take moral stands when people are yelling at me, ability to think on my feet while anxious) but can't be attributed to any one action[1]. If you took away everything that couldn't objectively justify itself in a few months, I'd be much worse off, even though probably a good chunk of what was cut was valueless.
[1] or can be attributed to specific actions, but only partially. The sum of improvements with traceable causes is far less than the total improvement.
↑ comment by Jan_Kulveit · 2023-07-13T10:01:12.010Z · LW(p) · GW(p)
Broadly agree the failure mode is important; also, I'm fairly confident basically all the listed mentors understand this problem of rationality education / "how to improve yourself" schools / etc., and I'd hope they can help participants avoid it.
I would subtly push back against optimizing for something like being measurably stronger on a timescale like 2 months. In my experience actually functional things in this space typically work by increasing the growth rate of [something hard to measure], so instead of e.g. 15% p.a. you get 80% p.a.
↑ comment by Alex Vermillion (tomcatfish) · 2023-07-19T19:02:59.747Z · LW(p) · GW(p)
For example then, how would someone know this is a useful thing based on other signals? It's totally valid to suggest using something else, but is there one? If not, you're going to have a selection effect against people for whom that matters.
comment by JanGoergens (jantrooper2) · 2023-07-06T13:03:51.438Z · LW(p) · GW(p)
I am very interested in this project. Unfortunately, I will not be able to participate full-time, but I would like to join / visit for one week in October.
I am eager to contribute in any capacity I can during my visit, be it brainstorming, contributing ideas, or assisting a team with their project.