Posts

ACX Montreal Meetup August 17th 2024 2024-08-16T23:36:19.940Z
Introducing AlignmentSearch: An AI Alignment-Informed Conversational Agent 2023-04-01T16:39:09.643Z

Comments

Comment by TheBayesian on [Completed] The 2024 Petrov Day Scenario · 2024-09-26T18:35:26.347Z · LW · GW

'Tis I. Didn't intend bad incentives; the stakes on that market are imo pretty tiny. But I N/Aed it, as I don't want anyone suspecting it had affected the final outcome.

Comment by TheBayesian on Sam Altman fired from OpenAI · 2023-11-19T04:24:58.822Z · LW · GW

Note: those are two different markets. Nathan's market is this one, and Sophia Wisdom's market (currently the largest one by far) is this one.

Comment by TheBayesian on Speed running everyone through the bad alignment bingo. $5k bounty for a LW conversational agent · 2023-04-02T22:15:16.850Z · LW · GW

I posted our submission in your Twitter DMs and as a standalone post on LW the other day, but thought it wise to send it here as well: https://alignmentsearch.up.railway.app/

As other comments suggest, we plan to get in touch with the team behind Stampy and possibly integrate some of our project's functionality into their conversational agent.

Comment by TheBayesian on Introducing AlignmentSearch: An AI Alignment-Informed Conversational Agent · 2023-04-02T21:46:00.420Z · LW · GW

There already exist a bunch of projects that do something similar. The technique is known as Retrieval-Augmented Generation (RAG), as described in this paper from May 2020. Tools like LangChain and OpenAI's tutorials have been used to build similar projects quickly, and the ingredients (cheap OpenAI embeddings, splitting the dataset into ~200-token chunks, and ChatGPT) have all existed and been used together for many months. A few projects I've seen that do something akin to what we do include HippocraticAI, Trevor Hubbard, and ChatLangChain. This could and will be applied more widely, e.g. people adding Q&A abilities to their library's documentation, to blogs, etc., but a key limitation is that, since it relies on LLMs, it is pricier, slower, and less reliable at inference time than conventional search, absent tricks that work around these limitations.
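
For concreteness, here is a minimal sketch of that pipeline, assuming the openai Python client (>= 1.0) and numpy. The model names, the ~800-character chunking heuristic (a rough proxy for ~200 tokens), the top-k value, and the prompt wording are all illustrative assumptions, not a description of AlignmentSearch's actual implementation:

```python
import numpy as np
from openai import OpenAI  # assumes openai >= 1.0 and OPENAI_API_KEY is set

client = OpenAI()

def embed(texts: list[str]) -> np.ndarray:
    """Embed a batch of texts with a cheap OpenAI embedding model."""
    resp = client.embeddings.create(model="text-embedding-ada-002", input=texts)
    return np.array([d.embedding for d in resp.data])

# 1. Split the corpus into ~200-token chunks (~800 characters as a rough proxy).
documents = ["...long alignment text..."]  # placeholder corpus
chunks = [doc[i:i + 800] for doc in documents for i in range(0, len(doc), 800)]
chunk_vecs = embed(chunks)

def answer(question: str, k: int = 4) -> str:
    # 2. Embed the question and retrieve the k most similar chunks
    #    by cosine similarity.
    q = embed([question])[0]
    sims = (chunk_vecs @ q) / (np.linalg.norm(chunk_vecs, axis=1) * np.linalg.norm(q))
    context = "\n---\n".join(chunks[i] for i in np.argsort(sims)[-k:])
    # 3. Ask a chat model to answer using only the retrieved context.
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "Answer using only this context:\n" + context},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content
```

The last step is where the cost and latency concentrate: every question incurs an embedding call plus a chat completion over a few thousand tokens of retrieved context, which is the price/speed/reliability tradeoff mentioned above.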