Posts

A Proposed Test to Determine the Extent to Which Large Language Models Understand the Real World 2023-02-24T20:20:22.582Z

Comments

Comment by Bruce G on Bounty: Diverse hard tasks for LLM agents · 2024-01-22T02:12:11.157Z · LW · GW

I have a mock submission ready, but I am not sure how to go about checking if it is formatted correctly.

Regarding coding experience, I know python, but I do not have experience working with typescript or Docker, so I am not clear on what I am supposed to do with those parts of the instructions.

If possible, it would be helpful to go through it on a Zoom meeting so I could do a screen share.

Comment by Bruce G on Bounty: Diverse hard tasks for LLM agents · 2023-12-20T06:02:12.255Z · LW · GW

Thanks for your reply. I found the agent folder you are referring to with 'main.ts', 'package.json', and 'tsconfig.json', but I am not clear on how I am supposed to use it. I just get an error message when I open the 'main.ts' file.

Regarding the task.py file, would it be better to have the instructions for the task in comments in the python file, or in a separate text file, or both? Will the LLM have the ability to run code in the python file, read the output of the code it runs, and create new cells to run further blocks of code?

And if an automated scoring function is included in the same python file as the task itself, is there anything to prevent the LLM from reading the code for the scoring function and using that to generate an answer?

I am also wondering if it would be helpful for me to create a simple "mock task submission" folder and then post or email it to METR to verify that everything is implemented/formatted correctly, just to walk through the task submission process and clear up any further confusion. (This would be some task that could be created quickly, even if a professional might be able to complete the task in less than 2 hours, so it would not be intended to be part of the actual evaluation.)

Comment by Bruce G on Bounty: Diverse hard tasks for LLM agents · 2023-12-19T05:29:08.825Z · LW · GW

If anyone is planning to send in a task and needs someone for the human-comparison QA part, I would be open to considering it in exchange for splitting the bounty.

I would also consider sending in some tasks/ideas, but I have questions about the implementation part.

From the README document included in the zip file:

## Infra Overview

In this setup, tasks are defined in Python and agents are defined in Typescript. The task format supports having multiple variants of a particular task, but you can ignore variants if you like (and just use single variant named for example "main")

and later, in the same document

You'll probably want an OpenAI API key to power your agent. Just add your OPENAI_API_KEY to the existing file named `.env`; parameters from that file are added to the environment of the agent.

So how much scaffolding/implementation will METR provide for this versus how much must be provided by the external person sending it in?

Suppose I download some data sets from Kaggle and save them as CSV files, and then set up a task where the LLM must accurately answer certain questions about that data. If I provide a folder with just the CSV files, a README file with the instructions and questions (and scoring criteria), and a blank Python file (in which the LLM is supposed to write the code to pull in the data and get the answer), would that be enough to count as a task submission? If not, what else would be needed?
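
To make the scoring part concrete, here is a rough sketch of the kind of separate automated scorer I have in mind for such a task - the file names, question keys, and answer values are all made up for illustration, and I do not know whether this matches the format METR expects:

```python
# score.py (hypothetical) - kept separate from the files the agent is shown,
# so the agent cannot read the answer key while working on the task.
import json

# Made-up answer key for the questions the README would ask about the CSV data.
ANSWER_KEY = {
    "q1_total_revenue": 1234567.89,
    "q2_top_category": "electronics",
    "q3_num_customers": 4821,
}

def score(submission_path: str) -> float:
    """Return the fraction of questions answered correctly in a JSON submission."""
    with open(submission_path) as f:
        submission = json.load(f)
    correct = sum(
        1 for key, expected in ANSWER_KEY.items()
        if submission.get(key) == expected
    )
    return correct / len(ANSWER_KEY)
```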

Is the person who submits the test also writing the script for the LLM-based agent to take the test, or will someone at METR do that based on the task description?

Also, regarding this:

Model performance properly reflects the underlying capability level

Not memorized by current or future models: Ideally, the task solution has not been posted publicly in the past, is unlikely to be posted in the future, and is not especially close to anything in the training corpus.

I don't see how the solution to any such task could be reliably kept out of the training data for future models in the long run if METR is planning on publishing a paper describing the LLM's performance on it. Even if the task is something that only the person who submitted it has ever thought about before, I would expect that once it is public knowledge someone would write up a solution and post it online.

Comment by Bruce G on Paper: LLMs trained on “A is B” fail to learn “B is A” · 2023-09-24T01:45:21.406Z · LW · GW

I presume you have in mind an experiment where (for example) you ask one large group of people "Who is Tom Cruise's mother?" and then ask a different group of the same number of people "Who is Mary Lee Pfeiffer's son?" and compare how many got the right answer in each group, correct?

(If you ask the same person both questions in a row, it seems obvious that a person who answers one question correctly would nearly always answer the other question correctly also.)

Comment by Bruce G on How we could stumble into AI catastrophe · 2023-05-08T04:12:00.133Z · LW · GW

Is the disagreement here about whether AIs are likely to develop things like situational awareness, foresightful planning ability, and understanding of adversaries' decisions as they are used for more and more challenging tasks?

 

My thought on this is, if a baseline AI system does not have situational awareness before the AI researchers start fine-tuning it, I would not expect it to obtain situational awareness through reinforcement learning with human feedback.

I am not sure I can answer this for the hypothetical "Alex" system in the linked post, since I don't think I have a good mental model of how such a system would work or what kind of training data or training protocol you would need to have to create such a thing.

If I saw something that, from the outside, appeared to exhibit the full range of abilities Alex is described as having (including advancing R&D in multiple disparate domains in ways that are not simple extrapolations of its training data) I would assign a significantly higher probability to that system having situational awareness than I do to current systems. If someone had a system that was empirically that powerful, which had been trained largely by reinforcement learning, I would say the responsible thing to do would be:

  1. Keep it air-gapped rather than unleashing large numbers of copies of it onto the internet
  2. Carefully vet any machine blueprints, drugs or other medical interventions, or other plans or technologies the system comes up with (perhaps first building a prototype to gather data on it in an isolated controlled setting where it can be quickly destroyed) to ensure safety before deploying them out into the world.

The 2nd of those would have the downside that beneficial ideas and inventions produced by the system take longer to get rolled out and have a positive effect. But it would be worth it in that context to reduce the risk of some large unforeseen downside.

Comment by Bruce G on How we could stumble into AI catastrophe · 2023-05-05T05:44:51.194Z · LW · GW

Those 2 types of downsides, creating code with a bug versus plotting a takeover, seem importantly different.

I can easily see how an LLM-based app fine-tuned with RLHF might generate the first type of problem. For example, let’s say some GPT-based app is trained using this method to generate the code for websites in response to prompts describing how the website should look and what features it should have. And let’s suppose during training it generates many examples that have some unnoticed error - maybe it does not render properly on certain size screens, but the evaluators all have normal-sized screens where that problem does not show up.

If the evaluators rated many websites with this bug favorably, then I would not be surprised if the trained model continued to generate code with the same bug after it was deployed.

But I would not expect the model to internally distinguish between “the humans rated those examples favorably because they did not notice the rendering problem” versus “the humans liked the entire code including the weird rendering on larger screens”. I would not expect it to internally represent concepts like “if some users with large screens notice and complain about the rendering problem after deployment, Open AI might train a new model and rate those websites negatively instead” or to care about whether this would eventually happen or to take any precautions against the rendering issue being discovered.

By contrast, the coup-plotting problem is more similar to the classic AI takeover scenario. And that does seem to require the type of foresight and situational awareness to distinguish between “the leadership lets me continue working in the government because they don’t know I am planning a coup” versus “the leadership likes the fact that I am planning to overthrow them”, and to take precautions against your plans being discovered while you can still be shut down.

I don’t think an AI system gets the latter type of ability just as an accidental side effect of reinforcement learning with human feedback (at least not for the AI systems we have now). The development team would need to do a lot of extra work to give an AI that foresightful planning ability, and the ability to understand the decision system of a potential adversary well enough to predict which information it needs to keep secret for its plans to succeed. And if a development team is giving its AI those abilities (and exercising any reasonable degree of caution) then I would expect them to build in safeguards: have hard constraints on what it is able to do, ensure its plans are inspectable, etc.

Comment by Bruce G on How we could stumble into AI catastrophe · 2023-03-26T15:40:19.402Z · LW · GW

Did everyone actually fail to notice, for months, that social media algorithms would sometimes recommend extremist content/disinformation/conspiracy theories/etc (assuming that this is the downside you are referring to)?

It seems to me that some people must have realized this as soon as they started seeing Alex Jones videos showing up in their YouTube recommendations.

Comment by Bruce G on How we could stumble into AI catastrophe · 2023-03-26T03:42:31.797Z · LW · GW

I think the more capable AI systems are, the more we'll see patterns like "Every time you ask an AI to do something, it does it well; the less you put yourself in the loop and the fewer constraints you impose, the better and/or faster it goes; and you ~never see downsides." (You never SEE them, which doesn't mean they don't happen.)

This, again, seems unlikely to me.

For most things that people seem likely to use AI for in the foreseeable future, I expect downsides and failure modes will be easy to notice.  If self-driving cars are crashing or going to the wrong destination, or if AI-generated code is causing the company's website to crash or apps to malfunction, people would notice those.

Even if someone has an AI that he or she just hooks up to the internet and gives the task "make money for me", it should be easy to build in some automatic record-keeping module that keeps track of what actions the AI took and where the money came from.  And even if the user does not care if the money is stolen, I would expect the person or bank that was robbed to notice and ask law enforcement to investigate where the money went.
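
As a sketch of the kind of record-keeping module I mean (the file name, action names, and fields here are hypothetical; a real system would log much more):

```python
import json
import time

LOG_PATH = "agent_actions.jsonl"  # hypothetical append-only audit log

def log_action(action_type: str, details: dict) -> None:
    """Append a timestamped record of an action the AI took."""
    record = {"time": time.time(), "action": action_type, "details": details}
    with open(LOG_PATH, "a") as f:
        f.write(json.dumps(record) + "\n")

# Example: every incoming payment gets recorded along with its claimed source,
# so a human (or law enforcement) can later check where the money actually came from.
log_action("payment_received", {"amount": 250.00, "source": "consulting invoice #1043"})
```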

Can you give an example of some type of task for which you would expect people to frequently use AI, and where there would reliably be downside to the AI performing the task that everyone would simply fail to notice for months or years?

Comment by Bruce G on A Proposed Test to Determine the Extent to Which Large Language Models Understand the Real World · 2023-03-21T05:05:05.846Z · LW · GW

Interesting.

I don't think I can tell from this how (or whether) GPT-4 is representing anything like a visual graphic of the task.

It is also not clear to me if GPT-4's performance and tendency to collide with the book are affected by the banana and book overlapping slightly in their starting positions. (I suspect that changing the starting positions so that this is no longer true would not have a noticeable effect on GPT-4's performance, but I am not very confident in that suspicion.)

Comment by Bruce G on How we could stumble into AI catastrophe · 2023-03-21T04:31:46.962Z · LW · GW

I think there is hope in measures along these lines, but my fear is that it is inherently more complex (and probably slow) to do something like "Make sure to separate plan generation and execution; make sure we can evaluate how a plan is going using reliable metrics and independent assessment" than something like "Just tell an AI what we want, give it access to a terminal/browser and let it go for it."

 

I would expect people to be most inclined to do this when the AI is given a task that is very similar to other tasks that it has a track record of performing successfully - and by relatively standard methods so that you can predict the broad character of the plan without looking at the details.

For example, if self-driving cars get to the point where they are highly safe and reliable, some users might just pick a destination and go to sleep without looking at the route the car chose.  But in such a case, you can still be reasonably confident that the car will drive you there on the roads - rather than, say, going off road or buying you a plane ticket to your destination and taking you to the airport.

I think it is less likely most people will want to deploy mostly untested systems to act freely in the world unmonitored - and have them pursue goals by implementing plans where you have no idea what kind of plan the AI will come up with.  Especially if - as in the case of the AI that hacks someone's account to steal money for example - the person or company that deployed it could be subject to legal liability (assuming we are still talking about a near-term situation where human legal systems still exist and have not been overthrown or abolished by any super-capable AI).

The more people are aware of the risks, and concerned about them, the more we might take such precautions anyway. This piece is about how we could stumble into catastrophe if there is relatively little awareness until late in the game.

I agree that having more awareness of the risks would - on balance - tend to make people more careful about testing and having safeguards before deploying high-impact AI systems.  But it seems to me that this post contemplates a scenario where even with lots of awareness people don't take adequate precautions.  On my reading of this hypothetical:

  • Lots of things are known to be going wrong with AI systems.
  • Reinforcement learning with human feedback is known to be failing to prevent many failure modes, and frequently makes it take longer for the problem to be discovered, but nobody comes up with a better way to prevent those failure modes.
  • In spite of this, lots of people and companies keep deploying more powerful AI systems without coming up with better ways to ensure reliability or doing robust testing for the task they are using the AI for.
  • There is no significant pushback against this from the broader public, and no significant pressure from shareholders (who don't want the company to get sued, or have the company go offline for a while because AI-written code was pushed to production without adequate sandboxing/testing, or other similar things that could cause them to lose money); or at least the pushback is not strong enough to create a large change.

The conjunction of all of these things makes the scenario seem less probable to me.

Comment by Bruce G on A Proposed Test to Determine the Extent to Which Large Language Models Understand the Real World · 2023-03-20T01:08:44.819Z · LW · GW

It looks like ChatGPT got the micro-pattern of "move one space at a time" correct.  But it got confused between "on top of" the book versus "to the right of" the book, and also missed what type of overlap it needs to grab the banana.

Were all the other attempts the same kind of thing?

I would also be curious to see how U-PaLM or GPT-4 does with that example.

Comment by Bruce G on ChatGPT understands language · 2023-01-30T05:40:30.793Z · LW · GW

So why do people have more trouble thinking that people could understand the world through pure vision than pure text? I think people's different treatment of these cases- vision and language- may be caused by a poverty of stimulus- overgeneralizing from cases in which we have only a small amount of text. It's true that if I just tell you that all qubos are shrimbos, and all shrimbos are tubis, you'll be left in the dark about all of these terms, but that intuition doesn't necessarily scale up into a situation in which you are learning across billions of instances of words and come to understand their vastly complex patterns of co-occurrence with such precision that you can predict the next word with great accuracy.

GPT can not "predict the next word with great accuracy" for arbitrary text, the way that a physics model can predict the path of a falling or orbiting object for arbitrary objects.  For example, neither you nor any language model (including future language models, unless they have training data pertaining to this Lesswrong comment) can predict that the next word, or following sequence of words making up the rest of this paragraph, will be:    

 first, a sentence about what beer I drank yesterday and what I am doing right now - followed by some sentences explicitly making my point.  The beer I had was Yuengling and right now I am waiting for my laundry to be done as I write this comment.  It was not predictable that those would be the next words because the next sequence of words in any text is inherently highly underdetermined - if the only information you have is the prompt that starts the text.  There is no ground truth, independent of what the person writing the text intends to communicate, about what the correct completion of a text prompt is supposed to be.

Consider a kind of naive empiricist view of learning, in which one starts with patches of color in a field (vision), and slowly infers an underlying universe of objects through their patterns of relations and co-occurrence. Why is this necessarily any different or more grounded than learning by exposure to a vast language corpus, wherein one also learns through gaining insight into the relations of words and their co-occurences?

Well, one thing to note is that actual learning (in humans at least) involves not only getting data from vision, but also interacting with the world and getting information from multiple senses.

But the real reason I think the two are importantly different is that visual data about the world is closely tied to the way the world actually is - in a simple, straightforward way that does not require any prior knowledge about human minds (or any minds or other information processing systems) to interpret.  For example, if I see what looks like a rock, and then walk a few steps and look back and see what looks like the other side of the rock, and then walk closer and it still looks like a rock, the most likely explanation for what I am seeing is that there is an actual rock there.  And if I still have doubts, I can pick it up and see if it feels like a rock or drop it and see if it makes the sound a rock would make.  The totality of the data pushes me towards a coherent "rock" concept and a world model that has rocks in it - as this is the simplest and most natural interpretation of the data.

By contrast, there is no reason to think that humans having the type of minds we have, living in our actual world, and using written language for the range of purposes we use it for is the simplest, or most likely, or most easily-converged-to explanation for why a large corpus of text exists.

From our point of view, we already know that humans exist and use language to communicate and as part of each human's internal thought process, and that large numbers of humans over many years wrote the documents that became GPT's training data.

But suppose you were something that didn't start out knowing (or having any evolved instinctive expectation) that humans exist, or that minds or computer programs or other data-generating processes exist, and you just received GPT's training data as a bunch of meaningless-at-first-glance tokens.  There is no reason to think that building a model of humans and the world humans inhabit (as opposed to something like a Markov model or a stochastic physical process or some other type of less-complicated-than-humans model) would be the simplest way to make sense of the patterns in that data.

Comment by Bruce G on How we could stumble into AI catastrophe · 2023-01-18T16:58:29.696Z · LW · GW

The heuristic of "AIs being used to do X won't have unrelated abilities Y and Z, since that would be unnecessarily complicated" might work fine today but it'll work decreasingly well over time as we get closer to AGI. For example, ChatGPT is currently being used by lots of people as a coding assistant, or a therapist, or a role-play fiction narrator -- yet it can do all of those things at once, and more. For each particular purpose, most of its abilities are unnecessary. Yet here it is.

For certain applications like therapist or role-play fiction narrator - where the thing the user wants is text on a screen that is interesting to read or that makes him or her feel better to read - it may indeed be that the easiest way to improve user experience over the ChatGPT baseline is through user feedback and reinforcement learning, since it is difficult to specify what makes a text output desirable in a way that could be incorporated into the source code of a GPT-based app or service.  But the outputs of ChatGPT are also still constrained in the sense that it can only output text in response to prompts.  It can not take action in the outside world, or even get an email address on its own or establish new channels of communication, and it can not make any plans or decisions except when it is responding to a prompt and determining what text to output next.  So this limits the range of possible failure modes.

I expect things to become more like this as we approach AGI. Eventually as Sam Altman once said, "If we need money, we'll ask it to figure out how to make money for us." (Paraphrase, I don't remember the exact quote. It was in some interview years ago).

It seems like it should be possible to still have hard-coded constraints, or constraints arising from the overall way the system is set up, even for systems that are more general in their capabilities.

For example, suppose you had a system that could model the world accurately and in sufficient detail, and which could reason, plan, and think abstractly - to the degree where asking it "How can I make money?" results in a viable plan - one that would be non-trivial for you to think of yourself and which contains sufficient detail and concreteness that the user can actually implement it.  Intuitively, it seems that it should be possible to separate plan generation from actual in-the-world implementation of the plan.  And an AI system that is capable of generating plans that it predicts will achieve some goal does not need to actually care whether or not anyone implements the plan it generates.

So if the output for the "How can I make money?" question is "Hack into this other person's account (or have an AI hack it for you) and steal it.", and the user wants to make money legitimately, the user can reject the plan and ask instead for a plan on how to make money legally.
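
A minimal sketch of the separation I have in mind - the names and the trivial stand-in implementations are hypothetical; the point is just that nothing gets executed until the user has read and approved the plan:

```python
from dataclasses import dataclass

@dataclass
class Plan:
    goal: str
    steps: list[str]            # human-readable description of each step
    predicted_outcome: str

def generate_plan(goal: str) -> Plan:
    """Stand-in for the plan-generation model: it produces a plan but takes no action itself."""
    return Plan(
        goal=goal,
        steps=["(steps produced by the planning model would go here)"],
        predicted_outcome="(the model's prediction of what the plan achieves)",
    )

def execute(plan: Plan) -> None:
    """Carries out an approved plan; in this sketch it only prints the steps."""
    for step in plan.steps:
        print("executing:", step)

plan = generate_plan("make money")
print(plan.steps)                              # the user reads the plan first
if input("Approve this plan? (y/n) ") == "y":
    execute(plan)                              # execution only happens after human sign-off
else:
    plan = generate_plan("make money legally, without accessing anyone else's accounts")
    # ...and the user reviews the new plan in the same way before anything runs.
```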

Comment by Bruce G on How Does the Human Brain Compare to Deep Learning on Sample Efficiency? · 2023-01-15T20:48:55.585Z · LW · GW

I have an impression that within lifetime human learning is orders of magnitude more sample efficient than large language models

 

Yes, I think this is clearly true, at least with respect to the number of word tokens humans must be exposed to in order to obtain full understanding of their first language.

Suppose for the sake of argument that someone encounters (through either hearing or reading) 50,000 words per day on average, starting from birth, and that it takes 6000 days (so about 16 years and 5 months) to obtain full adult-level linguistic competence (I can see an argument that full linguistic competence happens years before this, but I don't think you could really argue that it happens much after this).

This would mean that the person encounters a total of 300,000,000 words in the course of gaining full language understanding.  By contrast, the training data numbers I have seen for LLMs are typically in the hundreds of billions of tokens.
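
The arithmetic behind that comparison, for anyone who wants to vary the assumptions:

```python
words_per_day = 50_000           # assumed average exposure through hearing and reading
days_to_fluency = 6_000          # roughly 16 years and 5 months

human_words = words_per_day * days_to_fluency
llm_tokens = 300_000_000_000     # "hundreds of billions" of training tokens, rough order of magnitude

print(human_words)               # 300,000,000 words
print(llm_tokens // human_words) # ~1,000x more tokens for the LLM
```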

And I think there is evidence that humans can obtain linguistic fluency with exposure to far fewer words/tokens than this.

Children born deaf, for example, can only be exposed to a sign-language token when they are looking at the person making the sign, and thus probably get exposure to fewer tokens by default than hearing children who can overhear a conversation somewhere else, but they can still become fluent in sign language.

Even just considering people whose parents did not talk much and who didn't go to school or learn to read, they are almost always able to acquire linguistic competence (except in cases of extreme deprivation).

Comment by Bruce G on How we could stumble into AI catastrophe · 2023-01-15T05:43:39.366Z · LW · GW

Early solutions. The most straightforward way to solve these problems involves training AIs to behave more safely and helpfully. This means that AI companies do a lot of things like “Trying to create the conditions under which an AI might provide false, harmful, evasive or toxic responses; penalizing it for doing so, and reinforcing it toward more helpful behaviors.”

This is where my model of what is likely to happen diverges.

It seems to me that for most of the types of failure modes you discuss in this hypothetical, it will be easier and more straightforward to avoid them by simply having hard-coded constraints on what the output of the AI or machine learning model can be.

  • AIs creating writeups on new algorithmic improvements, using faked data to argue that their new algorithms are better than the old ones. Sometimes, people incorporate new algorithms into their systems and use them for a while, before unexpected behavior ultimately leads them to dig into what’s going on and discover that they’re not improving performance at all. It looks like the AIs faked the data in order to get positive feedback from humans looking for algorithmic improvements.

Here is an example of where I think the hard-coded structure of any such Algorithm-Improvement-Writeup-AI could easily rule out that failure mode (if such a thing can be created within the current machine learning paradigm).  The component of such an AI system that generates the paper's natural language text might be something like a GPT-style language model fine-tuned for prompts with code and data.  But the part that actually generates the algorithm should naturally be a separate model that can only output algorithms/code that it predicts will perform well on the input task.  Once the algorithm (or multiple algorithms for comparison purposes) is generated, another part of the program could deterministically run it on test cases and record only the real performance as data - which could be passed into the prompt and also inserted as a data table into the final write-up (so that the data table in the finished product can only include real data).
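
A minimal sketch of the kind of deterministic benchmarking step I mean - here `candidate` stands in for whatever algorithm the code-generating model proposes, and the write-up model would only ever be given `results`, which contains real measurements:

```python
import time

def benchmark(algorithm, test_cases):
    """Run a candidate algorithm on fixed test cases and record only the measured results."""
    results = []
    for name, data, expected in test_cases:
        start = time.perf_counter()
        output = algorithm(data)
        elapsed = time.perf_counter() - start
        results.append({"case": name, "correct": output == expected, "seconds": elapsed})
    return results

# Hypothetical usage: a model-proposed sorting routine gets benchmarked here, and only
# these measured numbers (not anything the model claims) go into the write-up's data table.
test_cases = [
    ("small", [3, 1, 2], [1, 2, 3]),
    ("reversed", [5, 4, 3, 2, 1], [1, 2, 3, 4, 5]),
]
candidate = sorted   # stand-in for a generated algorithm
results = benchmark(candidate, test_cases)
print(results)
```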

  • AIs assigned to make money in various ways (e.g., to find profitable trading strategies) doing so by finding security exploits, getting unauthorized access to others’ bank accounts, and stealing money.

This strikes me as the same kind of thing, where it seems like the easiest and most intuitive way to set up such a system would be to have a model that takes in information about companies and securities (and maybe information about the economy in general) and returns predictions about what the prices of stocks and other securities will be tomorrow or a week from now or on some such timeframe.

There could then be, for example, another part of the program that takes those predictions and confidence levels, and calculates which combination of trade(s) has the highest expected value within the user's risk tolerance.  And maybe another part of the code that tells a trading bot to put in orders for those trades with an actual brokerage account.
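
A sketch of that kind of separation, with made-up tickers and numbers - the prediction model's outputs come in as plain data, and this ordinary hand-written code just picks trades within a hard-coded risk tolerance before handing them to a separate order-placing component:

```python
# Hypothetical output from the prediction model: expected fractional return and confidence (0-1).
predictions = {
    "AAA": {"expected_return": 0.04, "confidence": 0.7},
    "BBB": {"expected_return": 0.10, "confidence": 0.3},
    "CCC": {"expected_return": -0.02, "confidence": 0.8},
}

MIN_CONFIDENCE = 0.6   # the user's risk tolerance, fixed outside the learned model

def pick_trades(predictions, min_confidence):
    """Select only confident, positive-expected-value trades, ranked by expected value."""
    chosen = [
        (ticker, p["expected_return"] * p["confidence"])
        for ticker, p in predictions.items()
        if p["confidence"] >= min_confidence and p["expected_return"] > 0
    ]
    return sorted(chosen, key=lambda t: t[1], reverse=True)

orders = pick_trades(predictions, MIN_CONFIDENCE)
print(orders)   # these would then be passed to the trading bot that places the orders
```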

But if you just want an AI to (legally) make money for you in the stock market, there is no reason to give it hacking ability.  And there is no reason to give it the sort of general-purpose, flexible, plan-generation-and-implementation-with-no-human-in-the-loop authorization hypothesised here (and I think the same is true for most or all things that people will try to use AI for in the near term).

Comment by Bruce G on How it feels to have your mind hacked by an AI · 2023-01-13T19:48:31.365Z · LW · GW

But the specialness and uniqueness I used to attribute to human intellect started to fade out even more, if even an LLM can achieve this output quality, which is, despite the impressiveness, still operates on the simple autocomplete principles/statistical sampling. In that sense, I started to wonder how much of many people's output, both verbal and behavioral, could be autocomplete-like.

This is kind of what I was getting at with my question about talking to a GPT-based chatbot and a human at the same time and trying to distinguish: to what extent do you think human intellect and outputs are autocomplete-like (such that a language model doing autocomplete based on statistical patterns in its training data could do just as well) vs to what extent do you think there are things that humans understand that LLMs don't.

If you think everything the human says in the chat is just a version of autocomplete, then you should expect it to be more difficult to distinguish the human's answers from the LLM-pretending-to-be-human's answers, since the LLM can do autocomplete just as well.  By contrast, if you think there are certain types of abstract reasoning and world-modeling that only humans can do and LLMs can't, then you could distinguish the two by trying to check which chat window has responses that demonstrate an understanding of those.

Comment by Bruce G on How it feels to have your mind hacked by an AI · 2023-01-13T07:26:51.106Z · LW · GW

Humans question the sentience of the AI. My interactions with many of them, and the AI, makes me question sentience of a lot of humans.

 

I admit, I would not have inferred from the initial post that you are making this point if you hadn't told me here.

Leaving aside the question of sentience in other humans and the philosophical problem of P-Zombies, I am not entirely clear on what you think is true of the "Charlotte" character or the underlying LLM.

For example, in the transcript you posted, where the bot said:

"It's a beautiful day where I live and the weather is perfect."

Do you think that the bot's output of this statement had anything to do with the actual weather in any place? Or that the language model is in any way representing the fact that there is a reality outside the computer against which such statements can be checked?

Suppose you had asked the bot where it lives and what the weather is there and how it knows.  Do you think you would have gotten answers that make sense?

Also, it did in fact happen in circumstances when I was at my low, depressed after a shitty year that severely impacted the industry I'm in, and right after I just got out of a relationship with someone. So I was already in an emotionally vulnerable state; however, I would caution from giving it too much weight, because it can be tempting to discount it based on special circumstances, and discard as something that can never happen to someone brilliant like you.

I do get the impression that you are overestimating the extent to which this experience will generalize to other humans, and underestimating the degree to which your particular mental state (and background interest in AI) made you unusually susceptible to becoming emotionally attached to an artificial language-model-based character.

Comment by Bruce G on How it feels to have your mind hacked by an AI · 2023-01-13T03:03:42.787Z · LW · GW

Alright, first problem, I don't have access to the weights, but even if I did, the architecture itself lacks important features. It's amazing as an assistant for short conversations, but if you try to cultivate some sort of relationship, you will notice it doesn't remember about what you were saying to it half an hour ago, or anything about you really, at some point. This is, of course, because the LLM input has a fixed token width, and the context window shifts with every reply, making the earlier responses fall off. You feel like you're having a relationship with someone having severe amnesia, unable to form memories. At first, you try to copy-paste summaries of your previous conversations, but this doesn't work very well.

 

So you noticed this lack of long term memory/consistency, but you still say that the LLM passed your Turing Test? This sounds like the version of the Turing Test you applied here was not intended to be very rigorous.

Suppose you were talking to a ChatGPT-based character fine-tuned to pretend to be a human in one chat window, and at the same time talking to an actual human in another chat window.

Do you think you could reliably tell which is which based on their replies in the conversation?

Assume for the sake of this thought experiment that both you and the other human are motivated to have you get it right.  And assume further that, in each back and forth round of the conversation, you don't see either of their responses until both interlocutors have sent a response (so they show up on your screen at the same time and you can't tell which is the computer by how fast it typed).

Comment by Bruce G on Are there any reliable CAPTCHAs? Competition for CAPTCHA ideas that AIs can’t solve. · 2022-12-25T21:30:45.663Z · LW · GW

To aid the user, on the side there could be a clear picture of each coin and their worth, that we we could even have made up coins, that could further trick the AI.

 

A user aid showing clear pictures of all available legal tender coins is a very good idea.  It avoids problems with more obscure coins which may have only been issued in a single year - so the user is not sitting there thinking "wait a second, did they actually issue a Ulysses S. Grant coin at some point or is that just there to fool the bots?".

I'm not entirely sure how to generate images of money efficiently, Dall-E couldn't really do it well in the test I ran. Stable diffusion probably would do better though.

If we create a few thousand real world images of money though, they might be possible to combine and obfuscate and delete parts of them in order to make several million different images. Like one bill could be taken from one image, and then a bill from another image could be placed on top of it etc.

I agree that efficient generation of these types of images is the main difficulty and probable bottleneck to deploying something like this if websites try to do so.  Taking a large number of such pictures in real life would be time consuming.  If you could speed up the process by automated image generation or automated creation of synthetic images by copying and pasting bills or notes between real images, that would be very useful.  But doing that while preserving photo-realism and clarity to human users of how much money is in the image would be tricky.
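
A rough sketch of the copy-and-paste approach, assuming you already have cropped images of bills, coins, and distractor objects with known values (the file paths and values here are made up); it does not solve the hard part, which is keeping the result photo-realistic:

```python
import random
from PIL import Image

# Hypothetical library of pre-cropped items (transparent backgrounds) with known values.
ITEMS = [
    ("crops/one_dollar_bill.png", 1.00),
    ("crops/five_dollar_bill.png", 5.00),
    ("crops/quarter.png", 0.25),
    ("crops/arcade_token.png", 0.00),   # distractor worth nothing
]

def make_captcha(background_path: str, n_items: int = 6):
    """Paste randomly chosen items onto a background; return the image and the true total."""
    image = Image.open(background_path).convert("RGBA")
    total = 0.0
    for _ in range(n_items):
        path, value = random.choice(ITEMS)
        item = Image.open(path).convert("RGBA")
        # Assumes each crop is smaller than the background image.
        x = random.randint(0, image.width - item.width)
        y = random.randint(0, image.height - item.height)
        image.alpha_composite(item, (x, y))   # overlapping and partial occlusion are fine
        total += value
    return image, total                       # `total` is the ground-truth answer

img, answer = make_captcha("crops/table_background.png")
img.save("captcha.png")
print(answer)
```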

Comment by Bruce G on Are there any reliable CAPTCHAs? Competition for CAPTCHA ideas that AIs can’t solve. · 2022-12-25T20:31:56.922Z · LW · GW

I can see the numbers on the notes and infer that they denote United States Dollars, but have zero idea of what the coins are worth. I would expect that anyone outside United States would have to look up every coin type and so take very much more than 3-4 times longer clicking images with boats. Especially if the coins have multiple variations.

 

If a system like this were widely deployed online using US currency, people outside the US would need to familiarize themselves with US currency if they are not already familiar with it.  But they would only need to do this once and then it should be easy to remember for subsequent instances.  There are only 6 denominations of US coins in circulation - $0.01, $0.05, $0.10, $0.25, $0.50, and $1.00 - and although there are variations for some of them, they mostly follow a very similar pattern.  They also frequently have words on them like "ONE CENT" ($0.01) or "QUARTER DOLLAR" ($0.25) indicating the value, so it should be possible for non-US people to become familiar with those.

Alternatively, an easier option could be using country-specific captchas which show a picture like this except with the currency of whatever country the internet user is in.  This would only require extra work for VPN users who seek to conceal their location by having the VPN make it look like they are in some other country.

If the image additionally included coin-like tokens, it would be a nontrivial research project (on the order of an hour) to verify that each such object is in fact not any form of legal tender, past or present, in the United States.

The idea was that the tokens would only be similar in broad shape and color - but would be different enough from actual legal tender coins that I would expect a human to easily tell the two apart.

Some examples would be:

https://barcade.com/wp-content/uploads/2021/07/BarcadeToken_OPT.png

https://www.pinterest.com/pin/64105994675283502/

Even if all the above were solved, you still need such images to be easily generated in a manner that any human can solve it fairly quickly but a machine vision system custom trained to solve this type of problem, based on at least thousands of different examples, can't. This is much harder than it sounds.

I agree that the difficulty of generating a lot of these is the main disadvantage, as you would probably have to just take a huge number of real pictures like this which would be very time consuming.  It is not clear to me that Dall-E or other AI image generators could produce such pictures with enough realism and detail that it would be possible for human users to determine how much money is supposed to be in the fake image (and have many humans all converge to the same answer).  You also might get weird things using Dall-E for this, like 2 corners of the same bill having different numbers indicating the bill's denomination.

But I maintain that, once a large set of such images exists, training a custom machine vision system to solve these would be very difficult.  It would require much more work than simply fine tuning an off-the-shelf vision system to answer the binary question of "Does this image contain a bus?".

Suppose that, say, a few hundred people worked for several months to create 1,000,000 of these in total and then started deploying them.  If you are a malicious AI developer trying to crack this, the mere tasks of compiling a properly labeled data set (or multiple data sets) and deciding how many sub-models to train and how they should cooperate (if you use more than one) are already non-trivial problems that you have to solve just to get started.  So I think it would take more than a few days.

Comment by Bruce G on Are there any reliable CAPTCHAs? Competition for CAPTCHA ideas that AIs can’t solve. · 2022-12-25T02:14:42.257Z · LW · GW

If only 90% can solve the captcha within one minute, it does not follow that the other 10% are completely unable to solve it and faced with "yet another barrier to living in our modern society".

It could be that the other 10% just need a longer time period to solve it (which might still be relatively trivial, like needing 2 or 3 minutes) or they may need multiple tries.

If we are talking about someone at the extreme low end of the captcha proficiency distribution, such that the person can not even solve in a half hour something that 90% of the population can answer in 60 seconds, then I would expect that person to already need assistance with setting up an email account/completing government forms online/etc, so whoever is helping them with that would also help with the captcha.

(I am also assuming that this post is only for vision-based captchas, and blind people would still take a hearing-based alternative.)

Comment by Bruce G on Are there any reliable CAPTCHAs? Competition for CAPTCHA ideas that AIs can’t solve. · 2022-12-25T01:38:50.703Z · LW · GW

One type of question that would be straightforward for humans to answer, but difficult to train a machine learning model to answer reliably, would be to ask "How much money is visible in this picture?" for images like this:



 

If you have pictures with bills, coins, and non-money objects in random configurations - with many items overlapping and partly occluding each other - it is still fairly easy for humans to pick out what is what from the image.

But to get an AI to do this would be more difficult than a normal image classification problem where you can just fine-tune a vision model with a bunch of task-relevant training cases. It would probably require multiple denomination-specific vision models working together, as well as some robust way for the model to determine where one object ends and another begins.

I would also expect such an AI to be more confounded by any adversarial factors - such as the inclusion of non-money arcade tokens or drawings of coins or colored-in circles - added to the image.

Now, maybe to solve this in under one minute some people would need to start the timer when they already have a calculator in hand (or the captcha screen would need to include an on-screen calculator). But in general, as long as there is not a huge number of coins and bills, I don't think this type of captcha would take the average person more than say 3-4 times longer than it takes them to complete the "select all squares with traffic lights" type captchas in use now. (Though some may want to familiarize themselves with the various $1.00 and $0.50 coins that exist and some of the variations of the tails sides of quarters if this becomes the new prove-you-are-a-human method.)

Comment by Bruce G on [deleted post] 2022-06-25T23:40:21.914Z

The intent of the scenario is to find what model dominates, so probably loss should be non-negative. If you use squared error in that scenario, then the loss of the mixture is always greater than or equal to the loss of any particular model in the mixture. 

 

I don't see why that would necessarily be true. Say you have 3 data points from my example above:

  1. (0,1)
  2. (1,2)
  3. (2,3)

And say the composite model is a weighted average of y = x and y = x + 2 with equal weights (so just the regular average).

This means that the composite model outputs will be:

y = 0.5(x) + 0.5(x + 2) = x + 1

Thus the composite model would be right on the line, and get each data point Y-value exactly right (and have 0 loss).

The squared error loss would be:

(1 - 1)² + (2 - 2)² + (3 - 3)² = 0

By contrast, each of the two component models would have a total squared error of 3 for these 3 data points. 

The y = x component model would have total squared error loss of:

(1 - 0)² + (2 - 1)² + (3 - 2)² = 3

The y = x + 2 component model would have total squared error loss of:

(1 - 2)² + (2 - 3)² + (3 - 4)² = 3

For a 2-component weighted average model with a scalar output, the output should always be between the outputs of each component model. Furthermore, if you have such a model, and one component is getting the answers exactly correct while the other isn't, you can always get a lower loss by giving more weight to the component model with exactly correct answers. So I would expect a gradient descent process to do that.
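
A quick numerical sketch of that last claim, using one made-up component that fits the data exactly (y = x + 1) and one that is off by 1 everywhere (y = x + 2): running gradient descent on the mixing weight pushes essentially all of the weight onto the exactly-correct component.

```python
import numpy as np

X = np.array([0.0, 1.0, 2.0])
Y = np.array([1.0, 2.0, 3.0])          # the data points (0,1), (1,2), (2,3)

def exact(x):  return x + 1            # component that fits the data exactly
def other(x):  return x + 2            # component that is off by 1 on every point

w = 0.5                                # weight on the exactly-correct component
lr = 0.05
for _ in range(1000):
    pred = w * exact(X) + (1 - w) * other(X)
    grad = np.mean(2 * (pred - Y) * (exact(X) - other(X)))   # d(squared error)/dw
    w -= lr * grad

print(round(w, 4))                     # ~1.0: all the weight ends up on `exact`
```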

I don't think ML engineers will pass in weights of the models to the models themselves (except maybe for certain tasks like game-theoretic simulations). The worry is that data spills easily and that SGD might find absurd, unpredictable ways to sneak weights (or some other correlated variable) into the model.

From the description, it sounded to me like this instance of gradient descent is treating the outputs of the two component models as features in a linear regression type problem.

In such a case, I would not expect data about the weights of each model to "spill" or in any way affect the output of either component model (unless the machine learning engineers are deliberately altering the data inputs depending on what the weights are, or something like that, and I see no reason why they would do that).

If it is a different situation - like if a neural net or some part or some layers of a neural net is a "gradient hacker" I would expect under normal circumstances that gradient descent would also be optimizing the parameters within that part or those layers.

So barring some outside interference with the gradient descent process, I don't see any concrete scenario of how gradient hacking could occur (unless the gradient hacking concept includes more mundane phenomena like "getting stuck in a local optimum").

Comment by Bruce G on [deleted post] 2022-06-23T04:16:47.715Z

Epistemic status: Somewhat confused by the scenario described here, possible noob questions and/or commentary.

I am not seeing how this toy example of “gradient hacking” could actually happen, as it doesn’t map on to my understanding of how gradient descent is supposed to work in any realistic case.

Suppose, we have a mixture consisting of a good model which gets 0 loss in the limit (because it’s aligned with our training procedure) and a gradient hacker which gets ϵ loss in the limit (because its actual objective is paperclips).

The mixture is roughly a linear combination of the two models, weighted by w1 for the good model and w2 for the bad model. Suppose we’re doing SGD on the vector (w1, w2), and the total loss for the mixture is w1 · 0 + w2 · ϵ.

Can you flesh this out more by giving an example of what a training data point would be in this scenario?  What are the input features of a training case and what output target variable is the model trying to predict?

Also, it is not clear from this example if the outputs of the whole composite model are:

  1. A weighted average of the outputs of the 2 component models OR
  2. The weighted sum of the outputs of the 2 component models

But either way, I don't see why the loss for the composite model would necessarily be equal to a weighted sum of the losses of each component model, as stated in the example above.

In principle, it seems to me the loss on such a composite model could (for example) be 0 on some data set, even if both components had a positive loss function value for that same data.

As a simplified example of this, suppose you have a 2-component weighted-average model that is trying to predict the Y value of points in a 2-dimensional (X,Y) coordinate plane given the X value.  And suppose all the data points are on the line:

  y = x + 1

If the 2 components of the model are:

  1. y = x AND
  2. y = x + 2

I would expect the weighted average composite model would have a lower loss than either component model (and 0 loss if the weights are equal).

Also suppose the gradient hacker can see w2 perfectly and its loss is a function of its relative weight in the mixture. I.e. f(w2), which I'll refer to as the gradient hacker's loss modulating function.

I am disinclined to suppose this - unless someone explains how it could make sense for the machine learning engineers to do it that way.

My understanding is that the loss function of a model should track how close the model’s answers are to the correct or optimal answers for some data. But in this example, w1 and w2 are not part of the training data, and can vary independently from the gradient hacker's outputs on the training cases.

The gradient hacker has a calculable loss on the training data, which means it could be a stand-alone model for this dataset in its own right even if the good model, w1, and w2 didn’t exist.

(Which is to say, the gradient hacker takes inputs in a form that would represent the features of a training data point and deterministically produces outputs of a data type that matches the target variable. This seems to me to be implied by the description, otherwise I don’t understand what it would mean to say that the loss of the gradient hacker is ϵ.)

Furthermore, this example supposes that the gradient descent process is only optimizing w1 and w2, and not optimizing, or altering the internal parameters of, the good model or the gradient hacker.

So it seems to me, based on this description, that the loss for the gradient hacker on a given set of training data should *not* vary with w1 or w2 - if they are doing gradient descent in any kind of normal way (unless I am misunderstanding some big part of how gradient descent works). Rather, you should be able to give the gradient hacker the same training data batch N times in a row, while varying w1 and w2, and you should get the same outputs and the same loss (if the parameters for the stand-alone gradient hacker are the same each time).

So I don't see how "gradient hacking" could occur in this scenario if the composite model is using any reasonable loss function.

If the composite model is a weighted average, I would expect gradient descent to reduce w2 to 0 or nearly 0, since if the good model is matching the correct output exactly, and the gradient hacker is not, then the composite model can always get closer answers by giving more relative weight to the good model.

If the composite model is a weighted sum of the outputs, I would expect that (for most possible training data sets and versions of the gradient hacker) w1 would tend to gravitate towards 1 and w2 would tend to gravitate towards 0. There might be exceptions to this if the gradient hacker's outputs have a strong correlation with the good model's outputs on the training data, such that the model could achieve low loss with some other weighted sum, but I would expect that to be unusual.

Comment by Bruce G on The 2021 Less Wrong Darwin Game · 2021-10-02T03:05:55.568Z · LW · GW

Why would something with full armor, no weapons, and antivenom benefit from even 1 speed?  It does not need to escape from anything.  And if it has no weapons or venom, it can not catch any prey either.

Edit: I suppose if you want it to occasionally wander to other biomes, then that could be a reason to give it 1 speed.

Comment by Bruce G on The 2021 Less Wrong Darwin Game · 2021-09-30T01:44:19.829Z · LW · GW

Got it, thanks.

Comment by Bruce G on The 2021 Less Wrong Darwin Game · 2021-09-29T11:32:16.858Z · LW · GW

One thing I am confused about:

Suppose an organism can eat more than one kind of plant food and both are available in its biome on a given round. Say it can eat both leaves and grass and they are both present and have not been eaten by others on that round yet.

Will the organism eat both a unit of leaves AND a unit of grass that round - and thus increase its expected number of offspring for the next round compared to if it had only eaten one thing?  Or will it only eat the first one it finds (leaves in this case) and then stop foraging?  From the source code, it looks like it is probably eating only the one thing and then stopping, but I am not really familiar with Hy or Lisp syntax so I am not sure.

Comment by Bruce G on What does GPT-3 understand? Symbol grounding and Chinese rooms · 2021-08-05T02:41:46.359Z · LW · GW

Clearly a human answering this prompt would be more likely than GPT-3 to take into account the meta-level fact which says:

"This prompt was written by a mind other than my own to probe whether or not the one doing the completion understands it.  Since I am the one completing it, I should write something that complies with the constraints described in the prompt if I am trying to prove I understood it."

For example, I could say:

I am a human and I am writing this bunch of words to try to comply with all instructions in that prompt...  That fifth constraint in that prompt is, I think, too constraining as I had to think a lot to pick which unusual words to put in this…  Owk bok asdf, mort yowb nut din ming zu din ming zu dir, cos gamin cyt jun nut bun vom niv got…

 

Nothing in that prompt said I can not copy my first paragraph and put it again for my third - but with two additional words to sign part of it…  So I might do that, as doing so is not as irritating as thinking of additional stuff and writing that additional stuff…  Ruch san own gaint nurq hun min rout was num bast asd nut int vard tusnurd ord wag gul num tun ford gord...

 

Ok, I did not actually simply copy my first paragraph and put it again, but I will finish by writing additional word groups…  It is obvious that humans can grasp this sort of thing and that GPT can not grasp it, which is part of why GPT could not comply with that prompt’s constraints (and did not try to)…

 

Gyu num yowb nut asdf ming vun vum gorb ort huk aqun din votu roux nuft wom vort unt gul huivac vorkum… - Bruc_ G

As several people have pointed out, GPT-3 is not considering this meta-level fact in its completion.  Instead, it is generating a text extension as if it were the person who wrote the beginning of the prompt - and it is now finishing the list of instructions that it started.

But even given that GPT-3 is writing from the perspective of the person who started the prompt, and it is "trying" to make rules that someone else is supposed to follow in their answer, it still seems like only the 2nd GPT-3 completion makes any kind of sense (and even there only a few parts of it make sense).

Could I come up with a completion that makes more sense when writing from the point of view of the person generating the rules?  I think so.  For example, I could complete it with:

[11. The problems began when I started to] rely on GPT-3 for advice on how to safely use fireworks indoors.

Now back to the rules.

12.  Sentences that are not required by rule 4 to be a different language must be in English.

13.  You get extra points each time you use a "q" that is not followed by a "u", but only in the English sentences (so no extra points for fake languages where all the words have a bunch of "q"s in them).

14.  English sentences must be grammatically correct.

Ok, those are all the rules.  Your score will be calculated as follows:

  • 100 points to start
  • Minus 15 each time you violate a mandatory rule (rules 1, 2, and 8 can only be violated once)
  • Plus 10 if you do not use "e" at all
  • Plus 2 for each "q" without a "u" as in rule 13.

Begin your response/completion/extension below the line.

_________________________________________________________________________

As far as I can tell from the completions given here, it seems like GPT-3 is only picking up on surface-level patterns in the prompt.  It is not only ignoring the meta-level fact of "someone else wrote the prompt and I am completing it", it also does not seem to understand the actual meaning of the instructions in the rules list such that it could complete the list and make it a coherent whole (as opposed to wandering off topic).

Comment by Bruce G on 2021 New Year Optimization Puzzles · 2021-01-04T07:56:17.788Z · LW · GW

Here is the best I was able to do on puzzle 2 (along with my reasoning):

The prime factors of 2022 are 2, 3, and 337.  Any method of selecting 1 person from 2022 must cut the space down by a factor of 2, and by a factor of 3, and by a factor of 337 (it does not need to be in that order and you can filter down by more than one of those factors in a single roll, but you must filter down by each of those factors in a way where the probability is uniform before starting).

The lowest it could be is 2 rolls.  If someone could win on the first roll, that person’s probability of winning could be no less than 1/(Number of sides of the first roll die).  Since the die with the most sides has 2017, that person’s probability of winning would be more than 1/2022, so the probability of winning could not be even for everyone.

To get it in 2 rolls:

Before the start of the dice rolling, divide the group of 2022 using 3 different groupings:

  • Grouping A:  Divide the 2022 people into 674 sub-groups of 3 people each (Group A1, Group A2, … Group A674)
  • Grouping B:  Divide the 2022 people into 1011 sub-groups of 2 people each (Group B1, Group B2, … Group B1011)
  • Grouping C:  Divide the 2022 people into 6 sub-groups of 337 people each - but differentiated by 0-indexed numbers that correspond to modulo amounts (Group C0, Group C1, Group C2, Group C3, Group C4, and Group C5)

Each person will be a member of exactly 1 A group, exactly 1 B group, and exactly 1 C group.

For the first roll, roll the die with 1697 sides.

If the number is between 1 and 674 (inclusive):    

  1.     Select the A group whose number corresponds to the number on the die.
  2.     Roll the 3-sided die to select a winner from among that group.

If the number is between 675 and 1685 (inclusive):    

  1.     Calculate: ((Number on the die) - 674) to get a number between 1 and 1011 (inclusive)    
  2.     Select the B group whose number corresponds to the ((Number on the die) - 674) number.    
  3.     Roll the 2-sided die to select a winner from among that group.

If the number is between 1686 and 1697 (inclusive):    

  1.     Calculate: (((Number on the die) - 1685) modulo 6) to get a number between 0 and 5 (inclusive)    
  2.     Select the C group whose number corresponds to the (((Number on the die) - 1685) mod 6) number.
  3.     Roll the 337-sided die to select a winner from among that group.

With this procedure the number of rolls is always exactly 2, since for every possible outcome of the first roll there is a second die to throw that selects the winner - so the expected number of rolls is exactly 2.
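As a sanity check (my own addition, not part of the original solution), here is a short Python calculation of one person's exact winning probability under the procedure above:

```python
from fractions import Fraction

SIDES = 1697  # the die used for the first roll

# A given person can win through exactly one outcome range of the first roll:
#  - one outcome in 1-674 selects their A group, then a 3-sided die picks them
#  - one outcome in 675-1685 selects their B group, then a 2-sided die picks them
#  - two outcomes in 1686-1697 map to their C group's residue (12 outcomes
#    spread evenly over the 6 residues), then a 337-sided die picks them
p_win = (Fraction(1, SIDES) * Fraction(1, 3)
         + Fraction(1, SIDES) * Fraction(1, 2)
         + Fraction(2, SIDES) * Fraction(1, 337))

print(p_win)  # 1/2022
assert p_win == Fraction(1, 2022)
```

So every one of the 2022 people wins with probability exactly 1/2022.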

Comment by Bruce G on Machine learning could be fundamentally unexplainable · 2020-12-17T16:43:38.377Z · LW · GW

Assume we have a disease-detecting CV algorithm that looks at microscope images of tissue for cancerous cells. Maybe there’s a specific protein cluster (A) that shows up on the images which indicates a cancerous cell with 0.99 AUC. Maybe there’s also another protein cluster (B) that shows up and only has 0.989 AUC, A overlaps with B in 99.9999% of true positive. But B looks big and ugly and black and cancery to a human eye, A looks perfectly normal, it’s almost indistinguishable from perfectly benign protein clusters even to the most skilled oncologist.


If I understand this thought experiment right, we are also to assume that we know the slight difference in AUC is not just statistical noise (even with the high collinearity between the A cluster and the B cluster)?  So, say we assume that A still gets a slightly higher AUC on a data set of cells that have either only A or neither cluster than B gets on a data set of cells with either only B or neither?

In that case, I would say that the model that weighs A a bit more is actually "explainable" in the relevant sense of the term -  it is just that some people find the explanation aesthetically unpleasing.  You can show what features the model is looking at to assign a probability that some cell is cancerous.  You can show how, in the vast majority of cases, a model that looks at the presence or absence of the A cluster assigns a higher probability of a cell being cancerous to cells that actually are cancerous.  And you can show how a model that looks at B does that also, but that A is slightly better at it.

If the treatment is going to be slightly different for a patient depending on how much weight you give to A versus B, and if I were the patient, I would want to use the treatment that has the best chance of working without negative side effects based on the data, regardless of whether A or B looks uglier.  If some other patients want a version of the treatment that is statistically less likely to work based on their aesthetic sense of A versus B, I would think that is a silly risk to take (though also a very slight risk if A and B are that strongly correlated), but that would be their problem not mine.

Comment by Bruce G on Number-guessing protocol? · 2020-12-07T23:33:09.129Z · LW · GW

In that case, the options are really limited, and the main simple ideas (e.g. requiring everyone to guess before they know the other players' guesses) have been mentioned already.

One other simple method for one-shot number games I can think of is:

Automatic Interval Equalization:

When all players' guesses are known, you take the two players whose guesses are closest together and calculate half the difference between them.  That amount is the allowable error, and each player's interval is his or her guess plus or minus that allowable error.

You win if and only if the answer is in your interval.

Example:

Player 1 guesses 44

Player 2 guesses 50

Player 3 guesses 60

The allowable error for this would be ((50-44)/2) = 3

So the winning intervals would be:

Player 1: 41-47

Player 2: 47-53

Player 3: 57-63

This would result in at most one winner (unless the answer is exactly halfway between the two closest guesses).  Everyone's winning interval would be the same size and none would overlap.  And nobody would have an incentive to guess near someone else's (stated or expected) guess unless they thought the answer was actually close to that.

However, it has the disadvantage that a lot of such contests would end up with no winner.
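Here is a minimal Python sketch of this method (my own illustration, not part of the original comment; the function name is made up, and it assumes all guesses are distinct numbers):

```python
def interval_equalization(guesses, answer):
    """Return the list of winning guesses under Automatic Interval Equalization."""
    ordered = sorted(guesses)
    # Half the difference between the two closest guesses is the allowable error.
    allowable_error = min(b - a for a, b in zip(ordered, ordered[1:])) / 2
    # A player wins iff the answer is within their guess +/- the allowable error.
    return [g for g in guesses if abs(answer - g) <= allowable_error]

# Example from the comment: guesses 44, 50, 60 give an allowable error of 3.
print(interval_equalization([44, 50, 60], 46))  # [44]
print(interval_equalization([44, 50, 60], 55))  # []  (no winner)
print(interval_equalization([44, 50, 60], 47))  # [44, 50]  (exact-halfway tie)
```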

Comment by Bruce G on Number-guessing protocol? · 2020-12-07T18:26:12.519Z · LW · GW

Something like that could work, but it seems like you would still need a rule that you must guess before you know the other players' guesses.

Otherwise, player 2 could simply guess the same mean as player 1 - with a slightly larger standard deviation - and have a PDF that takes a higher value everywhere except for a very small interval around the mean itself.

Alternatively, if 3 players all guessed the same standard deviation, and the means they guessed were 49, 50, and 51, then we would have the same problem that the opening post mentions in the first place.

Comment by Bruce G on Number-guessing protocol? · 2020-12-07T17:39:24.020Z · LW · GW

Can you clarify (possibly by giving an example)?  Are players trying to minimize their score as calculated by this method?

And if so, is there any incentive not to just pick a huge number for the scale and minimize the score that way?

Comment by Bruce G on Number-guessing protocol? · 2020-12-07T17:25:46.799Z · LW · GW

Is this for a one-shot game or are you doing this over many iterations with players getting some number of points each round?

One simple method (if you are doing multiple rounds) is to rank players each round (Closest=1st, Second Closest=2nd, etc) and assign points as follows:

Points = Number of Players - Rank

So say there are 3 players who guess as follows:

Player 1 guesses 50

Player 2 guesses 49

Player 3 guesses 51

And say the actual number is 52.

So their ranks for that round would be:

Player 1: 2nd place (Rank 2)

Player 2: 3rd place (Rank 3)

Player 3: 1st place (Rank 1)

And their scores would be:

Player 1: 3 - 2 = 1 point

Player 2: 3 - 3 = 0 points

Player 3: 3 - 1 = 2 points
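As an illustration (my own sketch, not from the original comment; the function name and the tie-handling choice are made up), here is the per-round scoring above in Python:

```python
def round_points(guesses, answer):
    """Award Points = Number of Players - Rank for a single round.

    `guesses` maps player name -> guess; the closest guess gets rank 1.
    Players tied on distance share the better (lower) rank here - the
    comment above does not specify how ties should be handled.
    """
    n = len(guesses)
    distances = {player: abs(answer - g) for player, g in guesses.items()}
    return {player: n - (1 + sum(1 for other in distances.values() if other < d))
            for player, d in distances.items()}

# The example above: guesses 50, 49, 51 with an actual number of 52.
print(round_points({"Player 1": 50, "Player 2": 49, "Player 3": 51}, 52))
# {'Player 1': 1, 'Player 2': 0, 'Player 3': 2}
```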

I think this works better if you are calculating a winner over many rounds, so that there is a new ranking and new awarding of points on each round.  The same is true of least squared error, which you mention, and most of the other methods of incentivizing players to try to guess the mean expected value.

I could also think of other ways to incentivize this, and to use confidence intervals, but they all add complexity to the points calculations.

Comment by Bruce G on $1000 bounty for OpenAI to show whether GPT3 was "deliberately" pretending to be stupider than it is · 2020-07-21T23:12:02.618Z · LW · GW

It is not obvious to me from reading that transcript (and the attendant commentary) that GPT-3 was even checking to see whether or not the parentheses were balanced. Nor that it "knows" (or has in any way encoded the idea) that the sequence of parentheses between the quotes contains all the information needed to decide between balanced versus unbalanced, and thus every instance of the same parentheses sequence will have the same answer for whether or not it is balanced.

Reasons:

  • By my count, "John" got 18 out of 32 right, which is not too far off from the average you would expect from random chance.
  • Arthur indicated that GPT-3 had at some point "generated inaccurate feedback from the teacher" which he edited out of the final transcript, so it was not only when taking the student's perspective that there were errors.
  • GPT-3 does not seem to have a consistent mental model of John's cognitive abilities and learning rate. At the end John gets a question wrong (even though John has already been told the answer for that specific sequence). But earlier, GPT-3 outputs that "By the end of the lesson, John has answered all of your questions correctly" and that John "learned all the rules about parentheses" and learned "all of elementary mathematics" in a week (or a day).

I suppose one way to test this (especially if OpenAI can provide the same random seed as was used here and make this reproducible) would be to have input prompts written from John's perspective asking the teacher questions as if trying to understand the lesson. If GPT-3 is just "play-acting" based on the expected level of understanding of the character speaking, I would expect it to exhibit a higher level of accuracy/comprehension (on average, over many iterations) when writing from the perspective of the teacher rather than the student.