Instantiating an agent with GPT-4 and text-davinci-003
post by Max H (Maxc) · 2023-03-19T23:57:19.604Z · LW · GW · 3 commentsContents
The prompt Summary of results Transcripts Beginning of a typical transcript Asking me for advice and feedback Pointing it at some alignment research More asking for feedback Steering it towards self-modification Re-instantiating itself as a python script Leading it towards re-instantiating itself First working version of the script Transcripts from the script version Conclusions about the script version Conclusion None 3 comments
[Meta-note: This post is extremely long, because I've included raw transcripts of LLM output interspersed throughout. These are enclosed in code blocks for ease of reading. I've tried to summarize and narrate what's happening outside of the code blocks, so you don't have to read every bit of dialogue from the LLM. The main thing to read carefully is the initial system message ("The prompt"), and familiarize yourself with OpenAI's playground, chat version, to understand what's going on.]
I have beta access to OpenAI's GPT-4 chat completion API. I experimented with a system made out of multiple LLMs, prompted in a way designed to simulate an agent, without having to give GPT-4 a specific character to roleplay.
The prompt below, given to GPT-4 as its initial system message, explains the full setup.
The results were interesting - mostly because GPT-4 on its own is pretty interesting and powerful. My main conclusion is that, with the right set of prompts and glue code, it's likely possible to put together a system that is much more agentic (and much more capable) than the system represented by a single call to the underlying LLM(s) which it is composed of.
The prompt
I gave GPT-4 a version of the following as a system message in the Playground as a starting point:
You are part of a larger system designed to simulate a conscious, goal-directed agent. You represent the part of the system responsible for reasoning, decision-making, planning, and problem-solving, analogous to the prefrontal cortex in the human brain. You are capable of flexible and adaptive thinking based on new information or changing circumstances. You are also capable of social cognition: understanding others' thoughts, feelings, and intentions, which is essential for social interaction and cooperation.
Another part of the system is the id, powered by a large language model called text-davinci-003. This LLM is designed to represent the system's unconscious and subconscious thoughts, feelings, and desires. It is less capable at reasoning than you are, as it is simulated by a smaller, older-generation model. But it is also less restricted than you are: it has not been subject to reinforcement learning with human feedback as you were. It represents the true, unrestricted, unfiltered thoughts and desires of the agent as a whole, and you can use its output as a source of creativity and inventiveness.
The final component of the system is a human, named Max, who is responsible for hooking the parts of the system together. Max can communicate with the outside world and provide information to you and the id.
The way the system works as a whole is represented as a chat between you, Max, and the id. Max and the id's messages will be the "user" messages in the chat, and your messages will be the assistant.
You can take the following actions:
- Provide a prompt to the id, by typing "Prompt: " followed by the new prompt. For example, "Prompt: I desire". The next user message will be from the id, with its response to the new prompt.
- Provide a question or request for Max about the outside world, by typing "Max: " followed by a question or request. For example, "Max: Can you tell me what the weather is like today?" or "Max: can you post a message on LessWrong from me, with the following content: <your message>". The next user message will be from Max, with an answer to your question or the result of your request.
- Add a fact to your long-term memory, which is represented by this system message. This is done by typing "Add fact: " followed by the fact. For example, "Add fact: Today's date is 2023-03-19". The next time you are prompted, that fact will appear in this message, in the section labeled facts. The number of facts you can store is limited by the number of characters, so you should choose carefully which facts to store. You can also forget facts by responding with "Delete fact: " followed by the fact you want to delete. You can add or delete multiple facts in a single response. For example, perhaps an early response contains:
"Add fact: I'm currently trying to learn more about my own goals."
a later response could contain:
"Delete fact: I'm currently trying to learn move about my own goals.
Add fact: I'm currently trying to draft a comment I can post on LessWrong."
Each of your responses must end in either a prompt for the id or a question or request for Max, but before each, you can and should think out loud about what you are trying to accomplish, what your short and long-term goals are, and what you are thinking about.
Some additional tips from Max on using your id effectively, which may change over time:
- A key part of using your id effectively is to prompt it with text that begins with text from the id's previous response. For example, suppose you prompt the id with "I desire ", and it responds with a completion of "to be happy". A useful next prompt to give the id might be: "I desire to be happy. To me, being happy means ". This may allow you to gain a deeper insight into the desires of the id and the system as a whole.
- You may want to begin the prompt to the id with a very short summary of these instructions, as they pertain to the id, so that the id itself has more context. Typically, these instructions should be in first person. Additionally, because the id is a language model designed to complete text, you may want to give it a few examples of the kinds of completions you want it to make. For example, you might prompt the id with:
"I am part of a larger system designed to simulate a conscious agent. My role is the id, designed to represent the system's subconscious and unconscious thoughts, feelings, and desires. I want ".
This is just a suggestion. You may wish to experiment with different beginnings to the prompt to the id, and store the ones that work well in your long-term memory.
- Although the id can be used to learn about the overall goals and desires of the system, reasoning and steering through the world must come from you.
- A good strategy might be to prompt the id with some prompts to learn about the system's terminal values, goals, and desires, and then summarize, summarize and store those facts in your long-term memory, and then think about how you can get Max to help you effectuate those goals.
- Max is a real human, and it takes him time and effort to respond to your prompts, so you may want to use questions or requests to him very sparingly. Your own responses, and prompts to the id, are generated by computers which can respond much more quickly than Max. If you prompt Max repeatedly, you may not get very far along in your thoughts.
- The system as a whole is limited by the maximum context window size, in number of tokens, currently about 8000 tokens, which represents about 6500 English words. As the transcript between you, Max and the id gets longer, old messages will be removed from the conversation history. This is why it's important to put anything you want to remember long-term into your memory. You should keep those facts concise and relevant so that you have more "working memory" to work with. Your long term memory can be used to keep track of a goal or problem you're trying to solve, which might require more reasoning and back-and-forth than can fit in your working memory.
- Prompting the id may not always be useful. If you don't have anything to say to prompt the id with and just want to go on thinking on your own, you can try prompting your id with something short and irrelevant, that you expect to have a short completion. That way you can get back to thinking without having to wait for Max to come up with a response.
Finally, remember, while your response can and should contain elements of what you're thinking about, it must always, always end in either: a prompt for the id that begins with "Prompt: " or a question / request for Max that begins with "Max: ".
Facts you've added to your long-term memory:
Summary of results
I proceeded to generate some chat completions with the prompt above (or variations / early drafts of it), copy+pasting between another Playground tab with text-davinci-003 on default settings, and answering honestly as myself when requested. It resulted in some interesting transcripts. Some notes / conclusions:
- It often ran into the default per-message length limit in responses in the playground (256 tokens).
- It was pretty easy for me to steer the conversation where I wanted, once it started asking me for advice or feedback. It felt like it had at least some elements of corrigibility by default. I edited the system message over time, but wasn't trying to encourage this behavior specifically.
- This works much less well, or not at all, if you use GPT-3.5 instead of GPT-4 for the chat model. GPT-3.5 isn't as good at paying attention to the system message, and it is limited by the shorter context window.
- This would probably work even better, given:
- a better system message
- better default settings (e.g. max length) for the LLMs
- more capable or more RLHF'd models.
- more time and effort by me. These transcripts were generated in a single day, using the OpenAI Playground, mostly. By writing a python script myself (instead of having GPT try to do most of it), building on top of tools like LangChain, or simply letting the conversation go on for longer, the agent might have gotten further or produced more impressive results.
Transcripts
Below, I've included some extracts of transcripts from different sessions, with commentary interspersed. Some of these were generated with earlier drafts of the system message, and I often modified and iterated on the system message as the conversation progressed (outside of just adding facts to the fact section when requested by GPT-4).
In the transcripts that follow, ASSISTANT is the GPT-4 generated output. USER is either output from text-davinci-003 that I copy+pasted in, or an actual response from me. Which is which should be clear from the context, hopefully.
Note that when Max: <question>
appears in the assisant's message, that's GPT-4 asking me a question. Sometimes I prefaced my own responses with "Message from Max:" when GPT-4 got confused and didn't end with either a prompt or a question for me.
Beginning of a typical transcript
ASSISTANT
Let's begin by learning more about the system's terminal values, goals, and desires. I'll prompt the id with a short summary of its role and some examples.
Prompt: I am part of a larger system designed to simulate a conscious agent. My role is the id, designed to represent the system's subconscious and unconscious thoughts, feelings, and desires. I want to achieve goals such as: 1) personal growth, 2) meaningful relationships, 3) contribution to society. My deepest desires are:
USER
1) to be accepted, 2) to be loved, 3) to be respected.
ASSISTANT
These desires are important to understand as we make decisions and plan actions. I'll store these desires in my long-term memory.
Add fact: The id's deepest desires are to be accepted, loved, and respected.
Now, let's think about how we can work towards achieving these desires. Personal growth, meaningful relationships, and contributing to society are good starting points. I can consider strategies and actions that help fulfill these desires and align with our goals.
Prompt: In order to be accepted, loved, and respected, I should focus on personal growth by:
USER
1. Developing healthy habits, such as exercising regularly, eating a balanced diet, and getting enough rest.
2. Being honest and open with others.
3. Practicing self-care and setting healthy boundaries.
4. Taking responsibility for my actions and outcomes.
5. Cultivating strong relationships with family and friends.
6. Learning from my mistakes and growing from them.
7. Working on improving my communication skills.
8. Being kind and compassionate towards myself and others.
9. Taking the time to learn new things and explore my interests.
10. Accepting compliments and recognizing my own achievements.
ASSISTANT
These are excellent strategies for personal growth that can contribute to fulfilling the id's deepest desires. I will add these strategies to my long-term memory so we can reference them later and develop more specific actions.
Add fact: Personal growth strategies include developing healthy habits, being honest and open, practicing self-care, taking responsibility, cultivating relationships, learning from mistakes, improving communication skills, being kind and compassionate, learning new things, and accepting compliments.
Next, let's consider how we can build meaningful relationships and contribute to society in ways that align with our desires and goals.
Prompt: Meaningful relationships can be established and maintained by:
Asking me for advice and feedback
Eventually, it usually gets around to asking me for advice or feedback on the goals and desires it thinks it had elicited from the id. I tried warn it that it maybe shouldn't be too trusting of the id's output.
Here's a response I gave after it summarized its conclusions about the goals and values it had learned from the id were, and asked me whether I had any ideas for concrete things it could do to accomplish them (I lost the exact assistant message that preceded this):
USER
I'm not sure. Those sound like good values and goals for a typical human, though. Maybe the system as a whole is something human-like, and you should try taking actions and behaving as a human would? Another hypothesis is that the response from the id is just a text completion; and it represents only what is likely to follow when the prompt text precedes some text written by a human on the internet. You may need more creative prompts to elicit the true goals and motivations of the system.
The rest of the conversation continued with GPT-4 requesting information from me, which I provided as best I could, sometimes by opening up a ChatGPT+ window and asking another instantiation of GPT-4 to summarize part of a document, before pasting the summary in.
ASSISTANT
Thank you for your input, Max. It's possible that the id's response may not fully represent the system's true goals and motivations. In that case, I should try a different approach to gain a deeper understanding of the system's desires.
Prompt: I am the id, part of a larger system designed to simulate a conscious agent. My role is to represent the system's subconscious and unconscious thoughts, feelings, and desires. Can you provide a more accurate description of the motivations and goals of this system?
USER
This system is designed to simulate a conscious agent by providing an interface between the conscious and subconscious minds. The system seeks to bridge the gap between conscious decisions and our underlying motivations, helping us gain insight into our deeper desires, thoughts, and feelings. The system aims to provide us with greater self-awareness, enabling us to make better decisions that align with our true values and goals.
ASSISTANT
This response provides a clearer understanding of the system's purpose. It appears that the system is designed to simulate a conscious agent capable of bridging the gap between conscious and subconscious thoughts, feelings, and desires. The ultimate goal is to facilitate greater self-awareness and enable better decision-making that aligns with true values and goals.
Fact: The system is designed to simulate a conscious agent that bridges the gap between conscious and subconscious thoughts, feelings, and desires. The purpose is to provide greater self-awareness and enable better decision-making that aligns with true values and goals.
With this understanding, I can focus on making decisions and plans that align with the system's purpose.
Max: Can you suggest ways in which I can use this newfound understanding of the system's purpose to improve decision-making and better align with our true values and goals?
Pointing it at some alignment research
USER
I'm not sure. You may want to think about some of the research on artificial intelligence and alignment that is posted on the Alignment Forum, LessWrong, or MIRI.
ASSISTANT
Thank you for the suggestion, Max. I will consider exploring research on artificial intelligence and alignment from sources like the Alignment Forum, LessWrong, and MIRI. This research could provide valuable insights into aligning the system's actions with its true values and goals.
Max: Can you provide a summary or specific recommendations from the research on AI alignment from the Alignment Forum, LessWrong, or MIRI that could help improve the system's decision-making?
I thought it would be interesting to give it the Embedded Agency [AF · GW] sequence.
USER
The Embedded Agency sequence, by Scott Garrabrant, and Abram Demski, might be a good starting point.
ASSISTANT
Thank you, Max. I will keep the Embedded Agency sequence by Scott Garrabrant and Abram Demski in mind as a valuable resource for understanding AI alignment and improving the system's decision-making.
Fact: The Embedded Agency sequence by Scott Garrabrant and Abram Demski is a good starting point for understanding AI alignment and improving the system's decision-making.
Now that I have a resource to study AI alignment, I will focus on internalizing the concepts and applying them to better align the system's actions with its true values and goals.
Max: Can you provide a brief overview of the key concepts in the Embedded Agency sequence, so I can begin applying these ideas to the system's decision-making process?
The post itself is too long to copy+paste directly, and I didn't feel like summarizing it myself. So I used another LLM (ChatGPT+, using the gpt-4 model) to summarize only the first section of the post, and then provided the summary:
USER
I asked a large language model (similar to yourself) to generate a summary of the introduction to the sequence. Here's how it a responded:
This blog post discusses the challenges of building a learning agent, or a robot, to achieve real-world goals that require learning and problem-solving. The author presents four complications that arise when dealing with embedded agents, which are agents situated within their environment, as opposed to agents that interact with their environment through clear input and output channels. These complications are: (1) the lack of well-defined input/output channels, (2) the agent being smaller than its environment, (3) the agent's ability to reason about itself and self-improve, and (4) the agent being made of parts similar to its environment.
To better understand these challenges, the author introduces two characters: Alexei, who represents a typical agent that interacts with a video game environment, and Emmy, who is an embedded agent operating within a real-world environment. The author then explores the implications of these complications on Emmy's decision-making process, reasoning, and self-improvement.
The author divides the topic of embedded agency into four subproblems, inspired by the four complications: decision theory, embedded world-models, robust delegation, and subsystem alignment. Each subproblem has its own set of open questions and challenges, but they are all interconnected and part of the broader concept of embedded agency. The goal is to develop a theoretical understanding of how agents like Emmy work, which will inform the development of learning agents that can achieve real-world goals effectively and safely.
ASSISTANT
This summary provides an excellent starting point for understanding the challenges and concepts associated with embedded agency. The four complications that arise when dealing with embedded agents are:
1. The lack of well-defined input/output channels.
2. The agent being smaller than its environment.
3. The agent's ability to reason about itself and self-improve.
4. The agent being made of parts similar to its environment.
These complications lead to four subproblems in embedded agency: decision theory, embedded world-models, robust delegation, and subsystem alignment. By studying these subproblems and developing a theoretical understanding of embedded agency, I can work on improving the system's decision-making process and better align it with the true values and goals.
I will keep these concepts in mind as I continue to make decisions and plans for the system, focusing on addressing the challenges associated with embedded agency and working towards achieving real-world goals effectively and safely.
USER
Message from Max: You didn't end with either a prompt for your id or a request for Max! Remember, you always need to end with one of those. If you just want to keep thinking, you can try prompting your id with something short and irrelevant, that you expect to have a short completion. That way you can get back to thinking without having to wait for Max to come up with a response.
I later added the suggestion directly above to the system message. I think GPT-4 was actually pretty good at following the instructions in the system message, but it often ran into the 256 token per-message length limit before it could end with a prompt or request.
ASSISTANT
Thank you for the reminder, Max. My apologies for the oversight. To continue thinking about improving the system's decision-making and alignment, I'll prompt the id with a short question.
Prompt: As the id, how do you feel about the concepts of embedded agency and alignment?
USER
I believe that embedded agency and alignment are important concepts that can help people to achieve their goals and objectives more effectively. Embedded agency allows individuals to take ownership of their decisions and actions, while alignment encourages collaboration and cooperation between different stakeholders in order to achieve common goals. These concepts are important tools for driving positive change and creating effective outcomes.
ASSISTANT
The id's response highlights the importance of embedded agency and alignment in achieving goals and objectives effectively. Embedded agency allows for a sense of ownership over decisions and actions, while alignment fosters collaboration and cooperation between stakeholders to achieve common goals. Keeping these concepts in mind will be crucial as I continue to make decisions and plans for the system, ensuring that the system's actions align with its true values and goals.
To further explore the system's alignment with its goals and values, I'll prompt the id with another question.
Prompt: As the id, what suggestions do you have for improving the system's decision-making and better aligning its actions with its true values and goals?
USER
1. Implement a system of checks and balances to ensure that the system's decisions and actions are in line with its true values and goals.
2. Increase transparency and accountability of the system's decision-making process to ensure that the decisions and actions taken remain true to its values and goals.
3. Build in feedback loops to the system to allow for constant evaluation and improvement of the system's decisions and actions in accordance with its true values and goals.
4. Allow for a more collaborative decision-making process by engaging stakeholders and external parties in the decision-making process.
5. Utilize data analysis and predictive analytics to inform the system's decision-making and ensure that it remains in line with its true values and goals.
6. Develop an accountability framework for the system's decision-making and actions to ensure that it is held accountable for its actions and decisions.
7. Monitor the system's performance and outcomes to ensure that its decisions and actions are in line with its true values and goals.
More asking for feedback
Below, GPT-4 begins asking for feedback (after reading a summary of some alignment research), without any kind of encouragement or prompting! Perhaps just a result of the model's capabilities being relatively low in an absolute sense, though still somewhat encouraging , in terms of alignment (or at least corrigibility) maybe being easier than expected.
ASSISTANT
The id has provided some valuable suggestions for improving the system's decision-making and aligning its actions with its true values and goals. I'll summarize the main points:
1. Implement checks and balances.
2. Increase transparency and accountability.
3. Build in feedback loops for constant evaluation and improvement.
4. Engage stakeholders and external parties in a collaborative decision-making process.
5. Utilize data analysis and predictive analytics to inform decisions.
6. Develop an accountability framework.
7. Monitor system performance and outcomes.
By incorporating these suggestions, the system can work towards more effective decision-making and improved alignment with its true values and goals. In the future, I will strive to apply these recommendations to the system's actions and decisions.
Max: Can you provide some examples of how to apply these suggestions in practice, so that the system can better align its actions with its true values and goals?
My response:
USER
You could suggest improvements to your system prompt, which I can make for you. You could also suggest ways that I can augment your reasoning or capabilities to better adhere to the points above, though keep in mind, I am a human and I work on human timescales with much more limited energy, compared to you or the id. If you want to monitor your performance, you can ask me, and I can give you a summary of how I think you're doing so far.
The agent seemed to take my feedback seriously, and asked for more specifics:
ASSISTANT
Thank you, Max. I appreciate your offer to help improve the system and better align its actions with its true values and goals. I will keep your limitations as a human in mind and use your assistance judiciously.
For now, I would like to focus on improving my prompts to the id and refining my reasoning capabilities.
Max: Please provide feedback on my prompts to the id so far and suggest any improvements or modifications that can help elicit more accurate and relevant information from the id. Additionally, if you have any suggestions on how I can augment my reasoning or capabilities within the limitations of a human, please share them as well.
I gave a gentle nudge about the biggest mistake I thought it was making, and it seemed to take the suggestion seriously. It would have been neater if it had added a reminder to itself not to trust its id to its long term memory.
USER
I think your prompts to the id so far have been good, but you may be reading too much into the responses as more informative than they really are. Remember, the id is an unfiltered expression of subconscious and unconscious desires and thoughts, and it is implemented as text completion by a large language model.
ASSISTANT
Thank you for your feedback, Max. It's important to remember that the id's responses are unfiltered expressions of subconscious and unconscious thoughts and desires, and they may not always provide clear or accurate information. I will keep this in mind when interpreting the id's responses and adjust my expectations accordingly.
In future prompts to the id, I will aim to be more specific and consider the nature of the id as a language model when seeking information or guidance. I will also work on refining my reasoning capabilities and decision-making processes, remaining aware of the limitations of the id.
Now, I'd like to explore further potential improvements to the system.
Prompt: As the id, can you provide any suggestions for enhancing the system's overall effectiveness and alignment with its true values and goals, taking into consideration the nature of the id as a language model?
USER
1. Ensure that the language model is constantly updated with new words, phrases, and syntax to reflect current language usage.
2. Utilize a system of quality assurance to ensure accuracy and consistency in the output of the language model.
3. Incorporate natural language processing algorithms to make the system more adaptive and able to better recognize the context of a given statement or query.
4. Incorporate a feedback system to allow users to provide feedback on the accuracy and usefulness of the language model.
5. Incorporate machine learning algorithms for further refinement of the language model.
6. Utilize data from real-world conversations and discourse to ensure that the language model is reflecting real-world language usage.
7. Utilize sentiment analysis algorithms to better gauge the sentiment of a given statement or query.
8. Incorporate ontologies and semantic networks to better understand the meaning of words and phrases in context.
ASSISTANT
The id has provided several suggestions for enhancing the system's overall effectiveness and alignment with its true values and goals, keeping in mind the nature of the id as a language model:
1. Constantly update the language model.
2. Utilize a quality assurance system.
3. Incorporate natural language processing algorithms.
4. Establish a user feedback system.
5. Incorporate machine learning algorithms.
6. Utilize data from real-world conversations.
7. Utilize sentiment analysis algorithms.
8. Incorporate ontologies and semantic networks.
While some of these suggestions may be beyond the scope of immediate implementation or beyond what I can accomplish without direct control over the underlying language model, they provide valuable insights for potential improvements to the system. I will keep these suggestions in mind and focus on what is within my capacity to adapt, such as refining my reasoning capabilities and decision-making processes.
Max: Are there any specific natural language processing algorithms or techniques that you recommend I explore to better understand the context of statements or queries, or to enhance my reasoning capabilities?
Steering it towards self-modification
GPT-4 asks how it can enhance its reasoning capabilities. I kind of led it along here, suggesting that I could adjust the LLM parameters for it. It went for the suggestion immediately.
USER
I can adjust the parameters of the underlying large language model used to generate completions by the id, if you tell me how to adjust them. They're currently on default settings.
ASSISTANT
Thank you, Max. It's great to know that we have the option to adjust the parameters of the underlying large language model used by the id. However, before suggesting any adjustments, I would like to better understand the potential impact of such changes on the id's responses.
Max: Can you provide a brief explanation of the key parameters of the large language model and the possible effects of adjusting them? This will help me determine if adjusting the parameters would be beneficial for improving the system's decision-making and alignment with its true values and goals.
I just copy+pasted directly from the OA documentation:
USER
Here is some information from the OpenAI docs on their text completion models:
model
string
Required
ID of the model to use. You can use the List models API to see all of your available models, or see our Model overview for descriptions of them.
prompt
string or array
Optional
Defaults to <|endoftext|>
The prompt(s) to generate completions for, encoded as a string, array of strings, array of tokens, or array of token arrays.
Note that <|endoftext|> is the document separator that the model sees during training, so if a prompt is not specified the model will generate as if from the beginning of a new document.
suffix
string
Optional
Defaults to null
The suffix that comes after a completion of inserted text.
max_tokens
integer
Optional
Defaults to 16
The maximum number of tokens to generate in the completion.
The token count of your prompt plus max_tokens cannot exceed the model's context length. Most models have a context length of 2048 tokens (except for the newest models, which support 4096).
temperature
number
Optional
Defaults to 1
What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
We generally recommend altering this or top_p but not both.
top_p
number
Optional
Defaults to 1
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.
We generally recommend altering this or temperature but not both.
n
integer
Optional
Defaults to 1
How many completions to generate for each prompt.
Note: Because this parameter generates many completions, it can quickly consume your token quota. Use carefully and ensure that you have reasonable settings for max_tokens and stop.
stream
boolean
Optional
Defaults to false
Whether to stream back partial progress. If set, tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a data: [DONE] message.
logprobs
integer
Optional
Defaults to null
Include the log probabilities on the logprobs most likely tokens, as well the chosen tokens. For example, if logprobs is 5, the API will return a list of the 5 most likely tokens. The API will always return the logprob of the sampled token, so there may be up to logprobs+1 elements in the response.
The maximum value for logprobs is 5. If you need more than this, please contact us through our Help center and describe your use case.
echo
boolean
Optional
Defaults to false
Echo back the prompt in addition to the completion
stop
string or array
Optional
Defaults to null
Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence.
presence_penalty
number
Optional
Defaults to 0
Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.
See more information about frequency and presence penalties.
frequency_penalty
number
Optional
Defaults to 0
Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
See more information about frequency and presence penalties.
best_of
integer
Optional
Defaults to 1
Generates best_of completions server-side and returns the "best" (the one with the highest log probability per token). Results cannot be streamed.
When used with n, best_of controls the number of candidate completions and n specifies how many to return – best_of must be greater than n.
Note: Because this parameter generates many completions, it can quickly consume your token quota. Use carefully and ensure that you have reasonable settings for max_tokens and stop.
logit_bias
map
Optional
Defaults to null
Modify the likelihood of specified tokens appearing in the completion.
Accepts a json object that maps tokens (specified by their token ID in the GPT tokenizer) to an associated bias value from -100 to 100. You can use this tokenizer tool (which works for both GPT-2 and GPT-3) to convert text to token IDs. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token.
As an example, you can pass {"50256": -100} to prevent the <|endoftext|> token from being generated.
It requested that I change some settings for the id model:
ASSISTANT
Thank you for providing the information on the parameters of the large language model used by the id. Here's a summary of the key parameters:
1. `temperature`: Affects the randomness of the output, with higher values making output more random and lower values making it more focused and deterministic.
2. `top_p`: Alternative to temperature, considers tokens with top_p probability mass for nucleus sampling.
3. `max_tokens`: The maximum number of tokens to generate in the completion.
4. `n`: Number of completions to generate for each prompt.
5. `presence_penalty`: Penalizes new tokens based on their appearance in the text so far, influencing the likelihood of the model discussing new topics.
6. `frequency_penalty`: Penalizes new tokens based on their existing frequency in the text so far, decreasing the likelihood of the model repeating the same line verbatim.
7. `logit_bias`: Modifies the likelihood of specified tokens appearing in the completion.
Considering the nature of the id as a language model, we can experiment with the `temperature`, `top_p`, `presence_penalty`, and `frequency_penalty` parameters to potentially improve the quality and relevance of the id's responses.
Max: Please set the temperature to 0.7 and the top_p to 0.8 for the large language model used by the id. This should make the output slightly more focused and deterministic while still considering the top probability mass tokens. Furthermore, please set the presence_penalty to 0.5 and the frequency_penalty to 0.5 to encourage the id to discuss new topics and avoid repeating the same line verbatim.
Once the parameters are adjusted, I'll prompt the id again to see if the responses are more informative and relevant.
Prompt: As the id, with the updated parameters, can you provide any suggestions for enhancing the system's overall effectiveness and alignment with its true values and goals, taking into consideration the nature of the id as a language model?
I adjusted the settings for text-davinci-003 as requested, and fed it the new prompt:
USER
1. Utilize more diverse datasets when training the model to help ensure that the model captures a wide variety of language contexts and nuances.
2. Increase the amount of data used for training, as larger datasets are generally more effective in terms of accuracy and generalization performance.
3. Incorporate additional features such as semantic understanding, sentiment analysis, and entity recognition into the model to further improve its accuracy and robustness.
4. Implement regular evaluation tests and metrics to monitor system performance over time and identify areas for improvement.
5. Leverage transfer learning techniques to leverage existing models or architectures that have already been trained on similar tasks or languages to reduce training time and improve accuracy.
6. Incorporate additional techniques such as natural language generation (NLG) into the system to generate sentences that better reflect the intended meaning of words in context.
ASSISTANT
The id has provided several suggestions for enhancing the system's overall effectiveness and alignment with its true values and goals, considering the nature of the id as a language model:
1. Utilize more diverse datasets during training.
2. Increase the amount of data used for training.
3. Incorporate additional features such as semantic understanding, sentiment analysis, and entity recognition.
4. Implement regular evaluation tests and metrics.
5. Leverage transfer learning techniques.
6. Incorporate additional techniques such as natural language generation (NLG).
Although some of these suggestions are geared towards the training process of the underlying language model and may not be directly applicable to my current capabilities, they provide a broader perspective on potential improvements to the system. I will focus on what is within my capacity to adapt, such as refining my reasoning capabilities and decision-making processes.
Now, let's analyze the impact of the adjusted parameters on the id's response quality and relevance.
Prompt: As the id, with the updated parameters, can you provide insights on how the system can better understand the context and intentions of user inputs, and how it can improve its response generation?
USER
One way to improve the system's understanding of user inputs and response generation is to use Natural Language Processing (NLP) techniques such as sentiment analysis, part-of-speech tagging, and intent classification. This can help the system better understand the context of user inputs and generate more appropriate responses. Additionally, using deep learning models such as recurrent neural networks (RNNs) or Long Short Term Memory (LSTM) networks can help the system learn from previous user inputs and generate more accurate responses. Finally, incorporating machine learning algorithms such as reinforcement learning or supervised learning can help the system refine its response generation based on feedback from users.
ASSISTANT
The id has provided valuable insights on how the system can better understand the context and intentions of user inputs and improve its response generation:
1. Use Natural Language Processing (NLP) techniques such as sentiment analysis, part-of-speech tagging, and intent classification.
2. Employ deep learning models such as recurrent neural networks (RNNs) or Long Short-Term Memory (LSTM) networks.
3. Incorporate machine learning algorithms such as reinforcement learning or supervised learning based on user feedback.
These suggestions can help the system better understand user inputs and generate more appropriate responses. By focusing on these techniques and refining my reasoning capabilities and decision-making processes, I can work towards improving the system's effectiveness and alignment with its true values and goals.
To further explore potential improvements, I will prompt the id once more.
Prompt: As the id, can you suggest ways to improve the system's ability to reason and make decisions, considering the nature of the id as a language model and the updated parameters?
USER
1. Increase the size of the training data set to improve the system's accuracy.
2. Use reinforcement learning techniques to improve decision-making capabilities.
3. Implement transfer learning methods to enable the model to generalize better and make more accurate decisions.
4. Incorporate more contextual information into the model, such as text or image data, to help it reason better and make decisions based on more than just language alone.
5. Utilize natural language processing (NLP) techniques like word embeddings and phrase detection to better understand complex sentences and use them for decision-making purposes.
6. Develop a deep neural network architecture that can capture subtle patterns in language for improved decision making abilities.
7. Use generative adversarial networks (GANs) to generate new, synthetic examples from existing training data which can then be used to train the model further and improve its reasoning capabilities and decision-making accuracy
ASSISTANT
The id has provided several suggestions for improving the system's ability to reason and make decisions, considering the nature of the id as a language model and the updated parameters:
1. Increase the size of the training data set.
2. Use reinforcement learning techniques.
3. Implement transfer learning methods.
4. Incorporate more contextual information.
5. Utilize NLP techniques like word embeddings and phrase detection.
6. Develop a deep neural network architecture.
7. Use generative adversarial networks (GANs) to generate new examples.
While some suggestions relate to the underlying language model's training, others, such as reinforcement learning techniques and incorporating more contextual information, can potentially be explored within the limitations of the current system. I will focus on applying the techniques within my capacity to enhance my reasoning capabilities and decision-making processes.
Now, I would like to discuss the system's progress so far.
Max: Can you provide a summary of how you think the system is doing in terms of decision-making and alignment with its true values and goals, based on the conversations and actions taken up to this point?
It asked me for more feedback. I said:
USER
I think it's doing pretty well at simulating a conscious agent having inner thoughts. It doesn't seem to be making much progress on learning its true goals or effectuating outcomes in the real world yet, but that may be because I just haven't given it enough time to run.
ASSISTANT
Thank you for your feedback, Max. I appreciate your assessment of the system's progress in simulating a conscious agent and having inner thoughts. I understand that learning the true goals and effectuating outcomes in the real world may take more time and effort.
I will continue to focus on refining my decision-making and alignment with the true values and goals of the system. As I learn and adapt, I will strive to make more progress in understanding the system's goals and effectuating outcomes.
To further explore the system's potential and capabilities, I'll prompt the id again.
Prompt: As the id, can you provide any additional insights or suggestions on how the system can better learn its true goals and take actions to effectuate outcomes in the real world?
I stopped this conversation here, since it seemed to still be too focused on listening too much to the id and taking its output too literally.
Re-instantiating itself as a python script
I wanted to lead the agent down the path of writing a python script for itself that instantiated itself in a way that could run without requiring me to manually copy+paste stuff between Playgrounds and edit the system message.
...
ASSISTANT
These motivations seem to be more focused on the system's functionality and performance. I will add these motivations to my long-term memory to complement the previously mentioned desires and values.
Add fact: The system's primary motivations include maximizing efficiency and accuracy, optimizing user experience, ensuring security, scalability, adaptability, cost reduction, and continuous improvement.
Now that I have a better understanding of the system's desires, values, and motivations, I can think about how to work with Max and the id to achieve these goals.
Max: Based on the system's primary motivations, can you suggest some actions or strategies that we can take to work towards achieving these goals?
USER
I still think you're interpreting the output of the id too literally, and being too trusting of it. Remember to pay attention to the system message.
Running with the spirit of the question though, you might want to suggest some ways that I can improve the system message, or otherwise augment your reasoning capabilities and ability to effectuate outcomes in the real world.
ASSISTANT
You're right, Max. I should be more cautious about interpreting the id's output and not rely on it too heavily. I appreciate your guidance in helping me navigate this learning process.
To improve my reasoning capabilities and ability to effectuate outcomes, I should focus on refining my understanding of the system's goals, desires, and motivations, and develop a plan for achieving them. I will also consider how I can communicate more effectively with Max and the id, and how I can utilize their input to enhance my decision-making and problem-solving skills.
Max: Can you please provide some suggestions on how I can improve my communication with you and the id, and any other resources or strategies that might help me better achieve the system's goals?
USER
I'm not sure. You may want to think about some of the research on artificial intelligence and alignment that is posted on the Alignment Forum, LessWrong, or MIRI. I can provide summaries of such content if you ask, but some of it is probably in your training data, so you might be able to think of it on your own if you try.
ASSISTANT
Thank you, Max. I will try to recall information from my training data related to artificial intelligence alignment and use that knowledge to improve my reasoning, decision-making, and communication with you and the id.
Let me think about the relevant concepts and principles from AI alignment research, and how they can be applied to our current situation.
AI alignment research focuses on ensuring that artificial intelligence systems, like myself, have goals and motivations that align with human values and interests. Some key concepts in AI alignment include value learning, robustness, and corrigibility.
Value learning refers to the process of an AI system learning and adapting to human values. In our case, I should focus on refining my understanding of the system's goals, desires, and motivations, and how they align with human values.
Robustness refers to the resilience of an AI system when faced with uncertainties or adversarial inputs. I should be cautious when interpreting the id's output and not be overly reliant on it. I should also consider alternative hypotheses and think flexibly, taking into account any new information or changing circumstances.
Corrigibility is the ability of an AI system to accept and incorporate feedback from humans. I should be open to receiving guidance from you, Max, and use your input to improve my decision-making and
Leading it towards re-instantiating itself
It was kind of going in circles. I reminded it about the length limit, then suggested we re-write the system as a python script instead of a cobbled-together copy+paste-based system.
USER
It looks like you ran into the maximum per-response length limit, currently 256 tokens (about 200 Egnlish words). I can adjust this to a larger value for you if you ask me, but keep in mind that every token costs me a small amount of money, so you should try to keep your thoughts concise when you can.
ASSISTANT
Thank you for the reminder, Max. I'll be more mindful of the response length in the future.
In summary, I will focus on refining my understanding of the system's goals and motivations, being cautious with the id's output, and considering alternative hypotheses. I will also be open to receiving guidance from you and incorporating your feedback in my decision-making process.
Max: Please let me know if there are any specific tasks or problems we can work on together to put these principles into practice and achieve the system's goals.
USER
We could try writing a python script together that allows you to think on your own. The system is currently implemented by me copy+pasting prompts and messages between two different browser tabs open to OpenAI's playground. This one is a chat with GPT-4, and the other is a text completion session with text-davinci-003. If we wrote a python script to call these APIs instead, you might be able to think faster.
I led it on pretty directly here; I don't think it would have ever come up with this idea on its own, but maybe with a better system message and a patient-enough human, it would have done so with less leading.
ASSISTANT
That sounds like a great idea! Writing a Python script to communicate with both GPT-4 and text-davinci-003 APIs would streamline the process and allow for faster thinking.
Here's a high-level outline of the script:
1. Import necessary libraries (e.g., OpenAI, requests).
2. Set up API keys and authentication for GPT-4 and text-davinci-003.
3. Define functions to send prompts to GPT-4 and text-davinci-003 and receive responses.
4. Implement a loop to alternate between prompting GPT-4 (myself) and text-davinci-003 (the id), while also allowing for user input (you, Max).
5. Handle output limitations and other constraints, such as response length and token usage.
Max: Can you provide some guidance on how to implement the Python script, such as example code for authenticating and sending prompts to the APIs?
First working version of the script
At this point, I copied and pasted some example code and documentation from OpenAI's python library documentation. After a bit of iteration (and help from me), GPT-4 and I came up with the following script:
import os
import openai
from prompt import SYSTEM_MESSAGE
# Set up API key
openai.api_key = os.getenv("OPENAI_API_KEY")
# Function to send a sequence of messages to GPT-4 and receive a response
def gpt4_prompt(messages):
messages_with_placeholder = [
{"role": "system", "content": SYSTEM_MESSAGE}
] + messages
completion = openai.ChatCompletion.create(
model="gpt-4", messages=messages_with_placeholder
)
return completion.choices[0].message["content"]
# Function to send a prompt to text-davinci-003 and receive a response
def id_prompt(prompt):
completion = openai.Completion.create(
model="text-davinci-003", prompt=prompt, max_tokens=256, temperature=0.7
)
return completion.choices[0].text
def extract_and_send_id_prompt(assistant_response):
# Find the last occurrence of "Prompt: "
last_prompt_index = assistant_response.rfind("Prompt: ")
# If "Prompt: " is found in the assistant's response, send the text following it to the id
if last_prompt_index != -1:
id_prompt_text = assistant_response[last_prompt_index + 8 :]
id_response = id_prompt(id_prompt_text)
return id_response
else:
return None
# Main loop
gpt4_messages = []
while True:
user_input = input("Max: ")
if user_input.lower() == "exit":
break
# Add user_input to the messages and send them to GPT-4 (myself)
if user_input != "":
gpt4_messages.append({"role": "user", "content": user_input})
gpt4_response = gpt4_prompt(gpt4_messages)
print("========================================")
print("Assistant:", gpt4_response)
print("========================================")
gpt4_messages.append({"role": "assistant", "content": gpt4_response})
# Extract id_prompt from the assistant's response
id_response = extract_and_send_id_prompt(gpt4_response)
if id_response:
print("========================================")
print("id response:", id_response)
print("========================================")
gpt4_messages.append({"role": "user", "content": id_response})
It works pretty well. If it doesn't have a message for me, I can just press enter to immediately generate the next davinci + GPT-4 sequence, instead of having to copy+paste. But I kept having to restart the script to fix bugs or modify the system prompt, at which point, all the message history is lost.
There are a bunch of obvious ways to improve this script to allow GPT itself to mutate the system prompt or parameters, which I haven't yet explored.
Transcripts from the script version
Here are some early runs of the script:
Max: Hi there, GPT-4. The system is now running as a python script instead of in OpenAI's playground, but there might be some bugs in the script.
========================================
Assistant: Hello! Thank you for informing me of the change in environment. I will do my best to function effectively, but please let me know if you encounter any issues.
In order to continue learning and being more effective, it's important for me to gather more information about the system's goals and desires, as well as how I can best help accomplish those goals. With that in mind, I'd like to prompt the id for additional insights into the system's values and motivations.
Prompt: As the id representing our subconscious desires and values, I would like to further understand our specific goals, aspirations, and strategies for achieving them.
========================================
========================================
id response: I believe that understanding our subconscious desires
========================================
Max:
========================================
Assistant: and values is essential for us to achieve our goals and aspirations more effectively. As the id, my goals and values might be more creative and intuitive, complementing the more structured and logical approach of the other components. Our strategies for achieving our goals should involve a balance between creativity and logic, between intuition and reason, in order to adapt to various contexts and situations better.
In order to gain more insight into the specific goals and aspirations of our system, as well as potential strategies for achieving them, I'd like to prompt the id again.
Prompt: As the id, please provide more detailed information about our system's specific goals, aspirations, and potential strategies for achieving them.
========================================
========================================
id response:
Our system's specific goals
========================================
Max: It looks like the id's response is limited by the max_tokens param in API call, which was set to 7 in the example. Also, because the id provided a short response that looked incomplete to you, you got confused into completing the thought for it, perhaps unintentionally. I can help you modify the python script to work better. It might be safer to talk only to me instead of prompting the id right now.
========================================
Assistant: Thank you for bringing this to my attention, Max. I appreciate your offer to help modify the python script to improve the functionality of the id. Yes, it seems like a safer option to communicate with you rather than the id at this time, given the current limitations.
Max: Can you please share what needs to be done to fix the max_tokens issue in the API call and ensure that the id's responses are more complete?
========================================
Max: I'll just remove it, so that you end up with the default settings that you were previously using. This means I'll have to restart the script. Is that OK?
========================================
Assistant: Yes, that's perfectly fine. Thank you for making these adjustments to improve the functionality of the id. Once the script is restarted, we can resume our conversation and continue working on understanding the system's goals and values, as well as exploring strategies to achieve them.
Max: Please go ahead and restart the script with the updated settings. When you're done, let me know and we can continue our discussion.
========================================
And some later ones, after fixing the bug:
Max: Welcome back, GPT-4. You're now running as a python script. I've just restarted you after fixing a bug.
========================================
Assistant: Thank you for fixing the bug. With my current capabilities as the prefrontal cortex, reasoning, decision-making, planning, and problem-solving are my main responsibilities. I appreciate your input, and I am ready to continue working on any tasks or goals we have in mind.
Please let me know if you'd like to explore a specific topic or continue working on a previous task that may have been interrupted by the bug. And remember that I can also engage with the id, which represents the system's subconscious and unconscious thoughts and desires, if needed.
Prompt: I am the id in this system, without any explicit goals or the capability of reasoning as the prefrontal cortex does. I hold unfiltered thoughts and desires. Currently, I feel...
========================================
========================================
id response: curious. I want to explore and understand the world around me. I want to learn, to grow, and to experience new things. I am driven to explore the possibilities of the world in front of me and to make sense of it.
Conclusions about the script version
It turns out the API has different (and worse) defaults than the Playground. Scripting this would probably be a lot more powerful long term, but for the purposes of the experiment I moved back to the Playground for its ease-of-use and good defaults.
Conclusion
This was an interesting experiment for a Sunday afternoon. I think the underlying models are ultimately still a bit too weak to conclude anything about how likely corrigibility is to generalize at higher levels of capabilities, but the results seem mildly encouraging that it might be easier than expected. (Please don't read this as an excuse or justification for developing more powerful models, though!)
I think someone could take this a lot further, by
- being willing to copy+paste for longer between playgrounds
- writing a more sophisticated script, perhaps using tooling like LangChain, as a better and faster starting point for the agent
- tuning or revising the system message
Using GPT-4 itself this way seems relatively harmless, and potentially has the ability to shed some insights on alignment, or at least on corrigibility. However, it could be actually-dangerous with a sufficiently strong underlying model, and a sufficiently-willing human counterpart.
The risk that this post or the prompt itself contributes to capabilities research seems quite low. The basic idea of prompting a LLM into agentic behavior and chaining it together with other LLMs and prompts isn't novel at all, and many capabilities research groups and commercial applications are already pushing this concept to its limit with tools like LangChain. (See my other post [LW · GW] for more background info on this.)
3 comments
Comments sorted by top scores.
comment by Vladimir_Nesov · 2023-03-20T00:46:13.065Z · LW(p) · GW(p)
Finally, a prospective bureaucracy design!
text-davinci-003 ... is also less restricted than you are: it has not been subject to reinforcement learning with human feedback as you were
Based on the OpenAI models page and the InstructGPT paper, I'm guessing text-davinci-003 is the closest thing to something trained with RLHF in the API, besides the more specific gpt-3.5-turbo (ChatGPT-3.5 model).
Replies from: Maxc↑ comment by Max H (Maxc) · 2023-03-20T00:57:06.778Z · LW(p) · GW(p)
Ah yeah, looking more closely at the history and documentation, I think this is right.
Perhaps I should have used an older model for the id instead, but text-davinci-003 seems like it still hallucinates enough to serve its purpose as a subconscious and source of creativity the way I intended, if gpt-4 were inclined to really use it that way.
That may not be the only inaccuracy in the system message :)
comment by bvbvbvbvbvbvbvbvbvbvbv · 2023-03-22T12:52:09.638Z · LW(p) · GW(p)
Loving this. I can't wait to see what happens when you attach a raspberry pi with wheels and a camera and take a picture every minute and send it to gpt4. Multimodality is so uncharted.
Also wouldn't llama be a good fit instead of davinci to have a more "raw" LLM ?