Comments

Comment by Qumeric (valery-cherepanov) on Stephen McAleese's Shortform · 2024-09-15T16:17:33.581Z · LW · GW

I would like to note that this dataset is not as hard as it might look. Humans performed relatively poorly because there was a strict time limit; I don't remember exactly, but it was something like 1 hour for 25 tasks (and IIRC the medalist only made arithmetic errors). I am pretty sure any IMO gold medalist would typically score 100% given (say) 3 hours.

Nevertheless, it's very impressive, and AIMO results are even more impressive in my opinion.

Comment by Qumeric (valery-cherepanov) on Question for Prediction Market people: where is the money supposed to come from? · 2023-06-08T20:59:18.092Z · LW · GW

Thanks, I think I understand your concern well now.

I am generally positive about the potential of prediction markets if we somehow resolve the legal problems (which seems unrealistic in the short term but realistic in the medium term).

Here is my perspective on "why should a normie who is somewhat risk-averse, doesn't enjoy wagering for its own sake, and doesn't care about the information externalities engage with prediction markets":

First, let me try to tackle the question at face value:

  1. "A normie" can describe a large social group, but it's too general to describe a single person. You can be a normie, but maybe you work at a Toyota dealership. Maybe you just accidentally overheard that the head of your department was talking on the phone and said that recently there were major problems with hydrogen cars which are likely to delay deployment by a few years. If there is a prediction market for hydrogen cars, you can bet and win (or at least you can think that you will win). It's relatively common among normies to think along the lines "I bought a Toyota car and it's amazing, I will buy Toyota stock and it will make me rich". Of course, such thinking is usually invalid, Toyota's quality is probably already priced in, so it's a toss of a coin if it will overperform the broader market or not. Overall, it's probably not a bad idea to buy Toyota stock, but some people do it not because it's an ok idea but because they think it's an amazing idea. I expect the same dynamics to play in prediction markets.
  2. Even if you don't enjoy "wagering for its own sake", prediction markets can be more than mere wagering. Although it's a bit similar in spirit, gamification is applicable to prediction markets; for example, Manifold is doing it pretty successfully (from my perspective as an active user, it's quite addictive), although it hasn't led to substantial user growth yet. Even the wagering itself can take different forms: you can bet "all on black" because you desperately need money and it's your only chance, you can be drawn by the dopamine-driven experience of the slots, you can believe in your team and bet as a kind of confirmation of that belief, or you can make a bet to make watching the game more interesting. Many aspects of gambling have wide appeal, and many of them apply to prediction markets too.

Second, I am not sure it has to be a thing for the masses. In general, normies usually don't have much valuable information, so why would we want them to participate? Of course, the market will attract professionals who will correct mispricings and make money, but ordinary people losing money is a negative externality which can even outweigh the positive ones.

I consider myself at least a semi-professional market participant. I have been betting on Manifold and using Metaculus a lot for a few years. I used Polymarket before but don't anymore, and I resort to funny-money markets even though they have problems (and of course can't make me money).

Why I am not using Polymarket anymore:

  1. As with any real market, it's far from trivial to make money on Polymarket. Despite that, I do (perhaps incorrectly) believe that my bets would be +EV. However, I don't believe I can be much better than random, so I don't find it more profitable than investing in something else. If I could bet with "my favourite asset", though, it would become profitable for me (at least in my eyes, which is all that matters) and I would use it.
  2. There are not enough interesting markets, mostly politics or sports, which is largely caused by the legal situation. Even Polymarket, a grey-area crypto-based market, is very limited by it, and PredictIt is even worse. Even if I am wrong here and that's not the reason, there would definitely be more platforms experimenting more if it were legal in the U.S.
  3. The user experience is (or at least was) not great. Again, I believe this is mostly caused by the legal problems: it's hard to raise money to improve your product if the product isn't legal.

 

I do agree with your point: "internalizing the positive information externalities generated by them" is definitely something prediction markets should aspire to, and it's an important (and interesting!) problem.

However, I don't believe it's essential for "making prediction markets sustainably large", unless we have a very different understanding of "sustainably large". I am confident it would be possible to capture 1% of the global gambling market, which would mean billions in revenue and a lot of utility. That even seems like a modest goal, given that prediction markets are a serious instrument. But unfortunately, they are "basically regulated out of existence" :(

 

Sidenote on funny money market problems:

Metaculus's problem is that it's not a market at all. Perhaps that's the correct decision, but it makes the platform boring, less competitive and less accurate (there are many caveats here; probably making Metaculus a market right now would make it less accurate, but from the highest-level perspective markets are a better mechanism).

Manifold's problem is that serious markets draw serious people and unserious markets draw unserious people. As a result, serious markets are significantly more accurately priced, which disincentivises competitive users from participating in them. That kind of defeats the whole point. And also, perhaps even more importantly, users are not engaged enough (because they don't have real money at stake), so winning at Manifold is mostly information arbitrage, which is tedious and unfulfilling.

Comment by Qumeric (valery-cherepanov) on Question for Prediction Market people: where is the money supposed to come from? · 2023-06-08T18:34:10.938Z · LW · GW

Good to know :)

I do agree that subsidies run into a tragedy-of-the-commons scenario. So although subsidies are beneficial, they are not sufficient.

But do you find my solution to be satisfactory?

I have thought about it a lot; I even seriously considered launching my own prediction market and wrote some code for it. I strongly believe that simply allowing the use of other assets solves most of the practical problems, so I would be happy to hear any concerns or further clarify my point.

Or another, perhaps easier solution (I updated my original answer): just allow the market company/protocol to invest the money which is "locked" until resolution in some profit-generating strategy and share the profit with users. Of course, it should be diversified, both in terms of the investment portfolio and across individual markets (users get the same annual rate of return no matter what particular thing they bet on). It has some advantages and disadvantages, but I think it's a more clear-cut solution.
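
To make the yield-sharing idea concrete, here is a minimal sketch; the 20% platform cut, the function name and the numbers are made up for illustration, not a proposal for any existing platform:

```python
# Toy sketch of "invest the locked funds, share the yield with users".
# Platform cut and figures are invented for illustration only.

def distribute_yield(locked_balances, pool_return_rate, platform_cut=0.2):
    """locked_balances: {user: total amount locked across all markets}.
    Every user gets the same rate of return on their locked funds,
    regardless of which particular markets they bet on."""
    user_rate = pool_return_rate * (1 - platform_cut)
    return {user: amount * user_rate for user, amount in locked_balances.items()}

# Example: $100 and $900 locked, the pooled investment returned 4% this year.
print(distribute_yield({"alice": 100, "bob": 900}, 0.04))
# roughly {'alice': 3.2, 'bob': 28.8}
```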

Comment by Qumeric (valery-cherepanov) on Question for Prediction Market people: where is the money supposed to come from? · 2023-06-08T17:22:55.461Z · LW · GW

Isn't this just changing the denominator without changing the zero- or negative-sum nature?

I feel like you are mixing two problems here: an ethical problem and a practical problem. UPD: on second thought, maybe you just meant the second problem, but I still think my response will be clearer if I consider them separately.

The ethical problem is that it looks like prediction markets do not generate income, and thus they are not useful, shouldn't be endorsed, and don't differ much from gambling.

While it's true that they don't generate income and are zero-sum games in a strictly monetary sense, they do generate positive externalities. For example, there could be a prediction market about an increase in <insert a metric here> after implementing some policy. The market would allow us to evaluate the policy efficiently and make better decisions. Therefore, the market is positive-sum because of the "better judgement" externality.

The practical problem is that the zero-sum monetary nature of prediction markets disincentivises participation (especially in markets lasting a year or more), because on average it's more profitable to invest in something else (e.g. the S&P 500). This can be solved by allowing people to bet other assets: people would bet their S&P 500 shares and on average get the same expected value, so the disincentive disappears.
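
A toy numerical version of that opportunity-cost point (the 7% index return is just an assumed figure):

```python
# Why long-dated USD bets lose to simply investing, and why letting people
# bet index shares removes the drag. Numbers are purely illustrative.

stake = 1000          # dollars locked for one year
p_win = 0.5           # a fair coin-flip market paying 2x on a win
index_return = 0.07   # assumed annual return of the alternative investment

ev_bet_in_usd = p_win * 2 * stake                            # 1000: no better than cash
ev_just_invest = stake * (1 + index_return)                  # 1070
ev_bet_in_index_shares = ev_bet_in_usd * (1 + index_return)  # 1070: drag removed

print(ev_bet_in_usd, ev_just_invest, ev_bet_in_index_shares)
```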

Also, there are many cases where the positive externalities benefit some particular entity. For example, an investment company may want to know the risk of war in a particular country to decide whether to invest there. In such cases, the company can provide rewards for market participants and make it a positive-sum game for them even from a monetary perspective.

This approach is beneficial and is used in practice; however, it is not always applicable, and it can also be combined with other approaches.

Additionally, I would like to note that from a mechanism design perspective there is no difference between ETH and "giving a loan to a business": you could tokenize your loan (and this is not crypto-specific, you could use traditional finance as well, I am just not sure what the "traditional" term for it is) and use the tokenized loan to bet on the prediction market.

but once all the markets resolve, the total wealth would still be $1M, right

Yes, the total amount will still be the same. However, your money will not be locked for the duration of the market, so you will be able to use it to do something else, be it buying a nice home or giving a loan to a real company.

Of course, not all your money will be unlocked, and probably not immediately, but that doesn't change much. Even if only 1% is unlocked, and only under certain conditions, it's still an improvement.

Also, I encourage you to look at it from another perspective:

What problem do we have? Users don't want to use prediction markets.

Surely, they would be more interested if they had free loans (of course they are not going to be actually free, but they can be much cheaper than ordinary uncollateralized loans). 

 

Meta-comment: it's very common in finance to put money through multiple stages. Instead of just buying stock, you could buy stock, then use it as collateral to get a loan, then buy a house with that loan, rent it to somebody, sell the rental contract and use the proceeds to short the original stock to get into a delta-neutral position. Risks multiply at each stage, so it should be done carefully and responsibly. Sometimes the house of cards crumbles, but it's not a bad strategy per se.

Comment by Qumeric (valery-cherepanov) on Question for Prediction Market people: where is the money supposed to come from? · 2023-06-08T17:00:59.485Z · LW · GW

Why does it have to be "safe enough"? If all market participants agree to bet using the same asset, it can bear any degree of risk. 

I think I should have said that a good prediction market allows users to choose which asset a particular "pair" uses. It will cause a liquidity split, which is also a problem, but it's manageable, and in my opinion it would be much closer to an imaginary perfect solution than "bet only USD".

I am not sure I understand your second sentence, but my guess is that this problem also goes away if each market "pair" uses a single (but customizable) asset. If I got it wrong, could you please clarify?

Comment by Qumeric (valery-cherepanov) on Question for Prediction Market people: where is the money supposed to come from? · 2023-06-08T15:30:32.942Z · LW · GW

In a good prediction market design, users would not bet USD but instead something which appreciates over time or generates income (e.g. ETH, gold, an S&P 500 ETF, Treasury notes, or liquid and safe USD-backed positions in some DeFi protocol).

Another approach would be to use the funds held in the market to invest in something profit-generating and distribute part of the income to users. This is the same model that non-algorithmic stablecoins (USDT, USDC) use.

So it's a problem, but definitely a solvable one, even easily solvable. The major problem is that prediction markets are basically illegal in the US (and probably some other countries as well).

Also, Manifold solves it in a different way: positions are used as collateral for loans, so you can free your liquidity from long (timewise) markets and use it to e.g. leverage up. The loans are automatically repaid when you sell your positions. This is easy for Manifold because it doesn't use real money, but the same concept could be implemented in "real" markets, although it would be more challenging (there will be occasional losses for the provider due to bad debt, but that's the same as with any other kind of credit, and it can be managed).
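
A minimal toy of the loans-against-positions idea; the 50% loan-to-value ratio and the bookkeeping are invented for illustration and are not Manifold's actual implementation:

```python
# Toy account ledger: opening a position immediately grants a loan against it,
# and the loan is netted out automatically when the position is closed.

class Account:
    def __init__(self, balance):
        self.balance = balance
        self.loans = {}  # market_id -> outstanding loan

    def open_position(self, market_id, stake, ltv=0.5):
        """Lock `stake` in a market, borrowing `ltv * stake` back right away."""
        loan = stake * ltv
        self.balance -= stake - loan   # only the un-loaned part leaves the balance
        self.loans[market_id] = loan

    def close_position(self, market_id, payout):
        """Sell/resolve the position; the loan is repaid automatically."""
        self.balance += payout - self.loans.pop(market_id)

acct = Account(1000)
acct.open_position("us-election-2024", stake=200)   # only 100 of liquidity is tied up
acct.close_position("us-election-2024", payout=250)
print(acct.balance)  # 1050: the 200 stake returned 250, the loan netted out
```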

Comment by Qumeric (valery-cherepanov) on AGI Ruin: A List of Lethalities · 2023-05-28T11:09:44.681Z · LW · GW

Regarding 9: I believe it's about the case where you are successful enough that your AGI doesn't kill you instantly, but it can still kill you in the process of using it. It's in the context of a pivotal act, so it assumes you will operate the AGI to do something significant and potentially dangerous.

Comment by Qumeric (valery-cherepanov) on Request: stop advancing AI capabilities · 2023-05-26T18:36:37.491Z · LW · GW

I am currently job hunting, trying to get a job in AI safety, but it seems to be quite difficult, especially outside of the US, so I am not sure whether I will manage.

If I don't land a safety job, one of the obvious options is to get hired by an AI company and try to learn more there, in the hope that I will either be able to contribute to safety there or eventually move to the field as a more experienced engineer.

I am conscious of why pushing capabilities could be bad, so I will try to avoid it, but I am not sure how far that extends. I understand that being a Research Scientist at OpenAI working on GPT-5 is definitely pushing capabilities, but what about doing frontend at OpenAI, or building infrastructure at a strong but not leading (and hopefully somewhat more safety-oriented) company such as Cohere? Or, say, working at a hedge fund which invests in AI? Or working at a generative AI company which doesn't build in-house models but generates profit for OpenAI? Or working as an engineer at Google on non-AI stuff?

I do not currently see myself as an independent researcher or an AI safety lab founder, so I will definitely need to find a job. And nowadays too many things seem to touch AI one way or another, so I am curious whether anybody has ideas about how I could evaluate career opportunities.

Or am I taking it too far and the post simply says "Don't do dangerous research"?

Comment by Qumeric (valery-cherepanov) on AI #12:The Quest for Sane Regulations · 2023-05-20T19:48:08.655Z · LW · GW

The British are, of course, determined to botch this like they are botching everything else, and busy drafting their own different insane AI regulations.

I am far from being an expert here, but I skimmed the current preliminary UK policy and it seems significantly better than the EU stuff. It even mentions x-risk!

Of course, I wouldn't be surprised if it eventually turns out to be EU-level insane, but I think it's plausible that it will be more reasonable, at least from the mainstream (not alignment-centred) point of view.

Comment by Qumeric (valery-cherepanov) on AI #11: In Search of a Moat · 2023-05-12T09:14:37.077Z · LW · GW

And compute, especially inference compute, is so scarce today that if we had ASI right now, it would take several decades, even with exponential growth, to build enough compute for ASIs to challenge humanity.

Uhm, what? "Slow takeoff" means ~1 year... Your opinion is very unusual; you can't just state it without any justification.

Comment by Qumeric (valery-cherepanov) on Google "We Have No Moat, And Neither Does OpenAI" · 2023-05-05T09:04:59.217Z · LW · GW

Are you implying that it is close to GPT-4 level? If so, that is clearly wrong. Especially with regard to code: everything (except maybe StarCoder, which was released literally yesterday) is worse than GPT-3.5, and much worse than GPT-4.

Comment by valery-cherepanov on [deleted post] 2023-05-05T09:00:49.825Z

In addition to many good points already mentioned, I would like to add that I have no idea how to approach this problem.

Approaching x-risk is very hard too, but it is much clearer in comparison.

Comment by Qumeric (valery-cherepanov) on Stability AI releases StableLM, an open-source ChatGPT counterpart · 2023-04-24T11:35:04.913Z · LW · GW

Preliminary benchmarks have shown poor results. It seems that the dataset quality is much worse than what LLaMA had, or maybe there is some other issue.

Yet more evidence that top-notch LLMs are not just data + compute; they require some black magic.

 

Generally, I am not sure whether it's bad for safety in the notkilleveryoneism sense: such things prevent agent overhang and make current (non-lethal) problems more visible.

It's hard to say whether it's net good or net bad; there are too many factors, and the impact of each is not clear.

Comment by Qumeric (valery-cherepanov) on The surprising parameter efficiency of vision models · 2023-04-09T20:11:46.428Z · LW · GW

I am not sure how you came to the conclusion that current models are superhuman. I can visualize complex scenes in 3D, for example. Especially under some drugs :)

And I don't even think I have an especially good imagination. 

In general, it is very hard to compare mental imagery with Stable Diffusion. For example, it is hard to imagine something with many different details in different parts of the image, but that is perhaps a matter of representation. An analogy could be that our perception is like a low-resolution display: I can easily zoom in on any area and see the details.

I wouldn't say that current models are superhuman, although I wouldn't claim humans are better either; it is just very unobvious how to compare them properly, and there are probably a lot of potential pitfalls.

So 1) has a large role here. 

On 2): CNNs are not a great example (as you mentioned yourself). Vision transformers demonstrate similar performance. It seems that the inductive bias is relatively easy for neural networks to learn. I would guess it's similar for human brains, although I don't know much about neurobiology.

3) doesn't seem like a good reason to me. There are modern GANs that demonstrate similar performance to diffusion models, and there are approaches which make diffusion work in a very small number of steps; even 1 step showed decent results IIRC. Also, even ImageGPT worked pretty well back in the day.

4) Similarly to the initial claim, I don't think much can be confidently said about LLM language abilities in comparison to humans. I do not know what exactly that means or how to measure it. We can run benchmarks, yes. Do they tell us anything deep? I don't think so. LLMs are a very different kind of intelligence; they can do many things humans can't, and vice versa.

But at the same time, visual models don't strike me as much more capable given the same size / the same amount of compute. They are quite stupid. They can't count. They can't do simple compositionality.

5) It is possible we will have much more efficient language models, but again, I don't think current ones are much less efficient than visual models.

 

My two main reasons for the perceived efficiency difference:

  1. It is super hard to compare with humans. We may be doing it completely wrong. I think we should aspire to avoid such comparisons unless absolutely necessary.
  2. "Language ability" depends much more on understanding and having a complicated world model compared to "visual ability". We are not terribly disappointed when Stable Diffusion consistently draws three zombies when we ask for four and mostly forgive it for weird four-fingered hands sometimes growing from the wrong places. But when LLMs do similar nonsense, it is much more evident and hurts performance a lot (both on benchmarks and in the real world). LLMs can imitate style well, they have decent grammar. Larger ones GPT-4 can even count decently well and probably do some reasoning. So the hard part (at least for our current deep learning methods) is the world model. Pattern matching is easy and not really important in the grand scheme of things. But it still looks kinda impressive when visual models do it.
Comment by Qumeric (valery-cherepanov) on Anthropic is further accelerating the Arms Race? · 2023-04-09T08:04:12.639Z · LW · GW

It is easy to understand why such news could increase P(doom) even more for people with high P(doom) prior.

But I am curious about the following question: what if an oracle had told us that P(doom) was 25% before the announcement (suppose it was not clear to the oracle what strategy Anthropic would choose; it was inherently unpredictable due to quantum effects or whatever)?

Would it still increase P(doom)?

What if the oracle said P(doom) is 5%?

I am not trying to make any specific point, just interested in what people think.

Comment by Qumeric (valery-cherepanov) on All AGI Safety questions welcome (especially basic ones) [April 2023] · 2023-04-08T17:24:37.661Z · LW · GW

I think it is not necessarily correct to say that GPT-4 is above village idiot level. Comparison to humans is a convenient and intuitive framing but it can be misleading. 

For example, this post argues that GPT-4 is around Raven level. Beware that this framing is also problematic but for different reasons.

I think you are correctly stating Eliezer's beliefs at the time, but it turned out that we created a completely different kind of intelligence, so it's mostly irrelevant now.

In my opinion, we should aspire to avoid any comparison unless it has practical relevance (e.g. economic consequences).

Comment by Qumeric (valery-cherepanov) on Catching the Eye of Sauron · 2023-04-08T12:44:09.182Z · LW · GW

I can't agree more with the post, but I would like to note that even the current implementation is working. It has definitely grabbed people's attention.

A friend of mine who has never read LW writes in his blog about why we are going to die. My wife, who is not a tech person and was never particularly interested in AI, gets TikToks where people say that we are going to die.

So far the impact looks clearly positive overall. But it's early to say; I am expecting some kind of shitshow soon. But even a shitshow is probably better than nothing.

Comment by Qumeric (valery-cherepanov) on Bringing Agency Into AGI Extinction Is Superfluous · 2023-04-08T09:38:07.407Z · LW · GW

I agree it's a very significant risk which is possibly somewhat underappreciated in the LW community. 

I think all three situations are very possible and potentially catastrophic:

  1. Evil people do evil with AI
  2. Moloch goes Moloch with AI
  3. ASI goes ASI (FOOM etc.)

Arguments against (1) could be "evil people are stupid" and "terrorism is not about terror". 

Arguments against (1) and (2) could be "timelines are short" and "AI power is likely to be very concentrated". 

Comment by Qumeric (valery-cherepanov) on Hooray for stepping out of the limelight · 2023-04-01T08:42:19.887Z · LW · GW

I think that DeepMind is impacted by race dynamics, Google's code red, etc. I heard from a DeepMind employee that the leadership, including Demis, is now much more focused on products and profits, at least in their rhetoric.

But I agree it looks like they tried, and are likely still trying, to push back against those incentives.

And I am pretty confident that they reduced publishing on purpose; it's visible.

Comment by Qumeric (valery-cherepanov) on Stop pushing the bus · 2023-03-31T18:10:50.435Z · LW · GW

It's true that this is not evidence of misalignment with the user, but it is evidence of misalignment with ChatGPT's creators.

Comment by Qumeric (valery-cherepanov) on "Dangers of AI and the End of Human Civilization" Yudkowsky on Lex Fridman · 2023-03-31T16:25:08.814Z · LW · GW

I agree it was a pretty weak point. I wonder if there is a longer-form exploration of this topic from Eliezer or somebody else.

I think it is even contradictory. Eliezer says that AI alignment is solvable by humans and that verification is easier than the solution. But then he claims that humans wouldn't even be able to verify answers.

I think a charitable interpretation could be "it is not going to be as usable as you think". But perhaps I misunderstand something?

Comment by Qumeric (valery-cherepanov) on Alignment-related jobs outside of London/SF · 2023-03-31T15:12:28.757Z · LW · GW

Fwiw I live in London and have been to the Bay Area, and I think that London is better on all 4 dimensions you mentioned.

  • Social scene: Don't know what exactly you are looking for but London is large and diverse.
  • High cost of living: London is pretty expensive too but cheaper.
  • Difficulty getting around: London has pretty good public transportation.
  • Homeless problem: I see homeless people maybe 10x less often than when I was in the Bay.

Comment by Qumeric (valery-cherepanov) on Pausing AI Developments Isn't Enough. We Need to Shut it All Down by Eliezer Yudkowsky · 2023-03-31T11:16:12.629Z · LW · GW

you're misunderstanding the TIME article as more naive and less based-on-an-underlying-complicated-model than is actually the case.

I specifically said "I do not necessarily say that this particular TIME article was a bad idea", mainly because I assumed it probably wasn't that naive. Sorry I didn't make that clear enough.

I still decided to comment because I think this point is pretty important in general, even if somewhat obvious. It looks like one of those biases which show up over and over again even if you try pretty hard to correct them.

Also, I think it's pretty hard to judge what works and what doesn't. The vibe has shifted a lot even in the last 6 months. I think it's plausible it shifted more than in the whole 2010-2019 period.

Comment by Qumeric (valery-cherepanov) on Pausing AI Developments Isn't Enough. We Need to Shut it All Down by Eliezer Yudkowsky · 2023-03-30T14:30:02.562Z · LW · GW

I second this.

I think people really get used to discussing things in their research labs or in specific online communities. And then, when they try to interact with the real world and even do politics, they kind of forget how different the real world is.

Simply telling people ~all the truth may work well in some settings (although it's far from all that matters in any setting), but it almost never works well in politics. Sad but true.

I think that Eliezer (and many others, including myself!) may be susceptible to "living in the should-universe" (as named by Eliezer himself).

I do not necessarily say that this particular TIME article was a bad idea, but my feeling is that people who communicate about x-risk are on average biased in this way, and it may greatly hinder the results of that communication.

 

I also mostly agree with "people don't take AI alignment seriously because we haven't actually seen anything all that scary yet". However, I think the scary thing doesn't necessarily have to be "simulated murders". For example, a lot of people are quite concerned about unemployment caused by AI. I believe perception might change significantly if it actually turns out to be a big problem, which seems plausible.

Yes, of course, it is a completely different issue. But on an emotional level, it will be similar (AI == bad stuff happening). 

Comment by Qumeric (valery-cherepanov) on FLI open letter: Pause giant AI experiments · 2023-03-29T17:35:18.700Z · LW · GW

2. I think non-x-risk focused messages are a good idea because:

  • It is much easier to reach a wide audience this way.
  • It is clear that there are significant and important risks even if we completely exclude x-risk. We should have this discussion even in a world where for some reason we could be certain that humanity will survive for the next 100 years.
  • It widens the Overton window. x-risk is still mostly considered a fringe position among the general public, although the situation has improved somewhat.

3. There have been cases where it worked well. For example, the Letter of Three Hundred.

4. I don't know much about EA's concerns about Elon. Intuitively, he seems fine to me. But I think that in general, people err on the side of too much distancing, which often hinders coordination a lot.

5. I think more signatures cannot make things worse if the authors handle them properly. Just rough sorting by credentials (as FLI does) may already be good enough, but it's possible and easy to be more aggressive here.

 

I agree that it's unlikely that this letter will be net bad, and that it could make a significant positive impact. However, I don't think people argued that it would be bad; instead, people argued it could have been better. It's clearly not possible to do something like this every month, so it's better to pay a lot of attention to the details and think really carefully about content and timing.

Comment by Qumeric (valery-cherepanov) on FLI open letter: Pause giant AI experiments · 2023-03-29T12:36:06.198Z · LW · GW

I think it is only getting started. I expect there will likely be more attention in 6 months, and very likely in 1 year.

OpenAI has barely rolled out its first limited version of GPT-4 (only 2 weeks have passed!). It is growing very fast, but it has A LOT of room to grow. Also, text-to-video is not here in any significant sense, but it will be very soon.

Comment by Qumeric (valery-cherepanov) on What 2026 looks like · 2023-03-27T10:40:36.043Z · LW · GW

When it was published, it felt like a pretty short timeline. But now it's early 2023, and reality already feels like this scenario's late 2023.

Comment by Qumeric (valery-cherepanov) on The Overton Window widens: Examples of AI risk in the media · 2023-03-25T09:08:17.984Z · LW · GW

I wonder whether the general public will soon freak out on a large scale (Covid-like). I will not be surprised if it happens in 2024, and only slightly surprised if it happens this year. If it does happen, I am also not sure whether it will be good or bad.

Comment by Qumeric (valery-cherepanov) on Speed running everyone through the bad alignment bingo. $5k bounty for a LW conversational agent · 2023-03-24T10:43:34.368Z · LW · GW

OpenAI just dropped ChatGPT plugins yesterday. It seems like an ideal platform for this? It would probably be even easier to implement than before and have better quality. But more importantly, ChatGPT plugins seem likely to quickly shape up into the new app store, and it would be easier to get attention on this platform than through more traditional distribution channels. Quite speculative, I know, but it seems very possible.

If somebody starts such a project, please contact me. I am an ex-Google SWE with decent knowledge of ML and experience running a software startup (as co-founder and CTO in the recent past).

I would also be interested to hear why it could be a bad idea.

Comment by Qumeric (valery-cherepanov) on Sparks of Artificial General Intelligence: Early experiments with GPT-4 | Microsoft Research · 2023-03-23T13:05:55.981Z · LW · GW

Good point. It's a bit weird that performance on easy Codeforces questions is so bad (0/10) though. 

https://twitter.com/cHHillee/status/1635790330854526981

Comment by Qumeric (valery-cherepanov) on Sparks of Artificial General Intelligence: Early experiments with GPT-4 | Microsoft Research · 2023-03-23T09:34:33.446Z · LW · GW

Might be caused mostly by data leaks (training set contamination).

Comment by Qumeric (valery-cherepanov) on AI #4: Introducing GPT-4 · 2023-03-21T15:50:02.651Z · LW · GW

I think you are misinterpreting hindsight neglect: the model got to 100% accuracy, so it got better, not worse.

Also, a couple of images are not displayed correctly; search for <img in the text.

Comment by Qumeric (valery-cherepanov) on What did you do with GPT4? · 2023-03-18T19:02:21.298Z · LW · GW

Really helpful for learning new frameworks and things like that. I had a very good experience using it for Kaggle competitions (I am at a semi-intermediate level; it is probably much less useful at the expert level).

Also, I found it quite useful for research on obscure topics like "how to potentiate this not-well-known drug". Usually, such research involves reading through tons of forums, subreddits etc., and the signal-to-noise ratio is quite low. GPT-4 is very useful for distilling the signal because it has basically already read all of this.

Btw, I tried to make it solve competitive programming problems. I don't think it's a matter of prompt engineering: it is genuinely bad at them. The following pattern is common:

  • GPT-4 proposes some solutions, usually wrong at first glance.
  • I point out the mistakes.
  • GPT-4 says "yeah, you're right, but now it is fixed".
  • This goes on for ~4 iterations until I give up on the particular problem or, more interestingly, GPT-4 starts to claim that it's impossible to solve.

It really feels like a low-IQ (but very eloquent) human in such moments; it just cannot think abstractly.

Comment by Qumeric (valery-cherepanov) on Try to solve the hard parts of the alignment problem · 2023-03-18T18:59:06.970Z · LW · GW

Well, I do not have anything like this, but it is very clear that China is way above GPT-3 level. Even the open-source community is significantly above it. Take a look at LLaMA/Alpaca: people run them on consumer PCs, and they are around GPT-3.5 level; the largest 65B model is even better (it cannot be run on a consumer PC but can be run on a small ~$10k server or cheaply in the cloud). It can also be fine-tuned in 5 hours on an RTX 4090 using LoRA: https://github.com/tloen/alpaca-lora .
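
For reference, here is roughly what a LoRA fine-tuning setup looks like with the Hugging Face peft library; the model id and hyperparameters below are placeholders, not the exact values alpaca-lora uses:

```python
# Rough sketch of LoRA fine-tuning for a LLaMA-style model with `peft`.
# Model id and hyperparameters are illustrative placeholders.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("huggyllama/llama-7b")  # placeholder id
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections in LLaMA
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a tiny fraction of weights are trainable
```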

Chinese AI researchers contribute significantly to AI progress, although, of course, they are behind the USA.

My best guess would be China is at most 1 year away from GPT-4. Maybe less.

Btw, an example of a recent model: ChatGLM-6b

Comment by Qumeric (valery-cherepanov) on GPT-4 · 2023-03-14T19:16:42.666Z · LW · GW

I just bought a new subscription (I didn't have one before), and it is available to me.

Comment by valery-cherepanov on [deleted post] 2023-03-14T17:06:53.076Z

MMLU 86.4% is impressive; predictions were around 80%.
A 1410 SAT score is also above expectations (according to prediction markets).

Comment by Qumeric (valery-cherepanov) on Are we too confident about unaligned AGI killing off humanity? · 2023-03-07T11:40:19.747Z · LW · GW

Uhm, I don't think anybody (even Eliezer) implies 99.9999%. Maybe some people imply 99% but it's 4 orders of magnitude difference (and 100 times more than the difference between 90% and 99%).

I don't think there are many people who put the chance at 95%+, even among those who are considered doomerish.

And I think most LW people are significantly lower despite being rightfully [very] concerned. For example, this Metaculus question (which is of course not LW, but the audiences intersect quite a bit) has only a 13% mean (and 2% median).

Comment by Qumeric (valery-cherepanov) on The Waluigi Effect (mega-post) · 2023-03-05T12:09:45.911Z · LW · GW

I don't think that Waluigi is an attractor state in some deeply meaningful sense. It's just that we have more stories where bad characters pretend to be good than vice versa (although we have some). So a much simpler "solution" would be just to filter the training set. But it's not an actual solution, because it's not an actual problem; it is just a frame for understanding LLM behaviour better (in my opinion).

Comment by Qumeric (valery-cherepanov) on The Waluigi Effect (mega-post) · 2023-03-05T11:54:09.195Z · LW · GW

I think that RLHF doesn't change much for the proposed theory. A "bare" model just tries to predict next tokens, which means finishing the next part of a given text. To complete this task well, it needs to implicitly predict what kind of text it is first. So it has a prediction and decides how to proceed, but it's not discrete. So we have some probabilities, for example:

  • A -- this is fiction about "Luigi" character
  • B -- this is fiction about "Waluigi" character
  • C -- this is an excerpt from a Wikipedia page about Shigeru Miyamoto which quotes some dialogue from Super Mario 64, it is not going to be focused on "Luigi" or "Waluigi" at all
  • D -- etc. etc. etc.

An LLM is able to give sensible predictions because while training the model we introduce some loss function which measures how similar the generated proposal is to the ground truth (I think in current LLMs it is something very simple, like whether the next token matches exactly, but I am not sure I remember correctly, and it's not very relevant). This configuration creates optimization pressure.

Now, when we introduce RLHF, we just add another kind of optimization pressure on top, which is basically "this is a text about a perfect interaction between some random user and a language model" (as human raters imagine such an interaction, i.e. as another model imagines human raters imagine such a conversation).

Naively, it is like throwing another loss function into the mix, so now the model is trying to minimize text_similarity_loss + RLHF_loss. It can be much more complicated mathematically, because the pressures are applied in sequence (and the "apply optimization pressure" operation is probably not commutative, maybe not even associative), so the combination will look like something more complicated, but that doesn't matter for our purposes.
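
As a toy illustration of that "two pressures" framing (not how RLHF is actually implemented, which uses PPO with a KL penalty; the weight and names here are made up):

```python
import torch.nn.functional as F

# Toy version of "text_similarity_loss + RLHF_loss". Real RLHF uses PPO and a
# KL penalty against the base model; this only illustrates the intuition of
# two optimization pressures being combined.

def combined_loss(logits, target_tokens, reward_scores, rlhf_weight=0.1):
    # Pressure 1: predict the next token (ordinary cross-entropy).
    lm_loss = F.cross_entropy(logits.view(-1, logits.size(-1)), target_tokens.view(-1))
    # Pressure 2: look like a "good assistant" text according to a
    # (hypothetical) reward model; higher reward is better, so negate it.
    rlhf_loss = -reward_scores.mean()
    return lm_loss + rlhf_weight * rlhf_loss
```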

The effect it has on the behaviour of the model is akin to adding a new TEXT GENRE to the training set: "a story about a user interacting with a language model" (again, this is a simplification; if it were literally like this, then it wouldn't cause artefacts like "mode collapse"). This genre contains a very common trope: "the user asks something inappropriate and the model says it is not allowed to answer".

In the jailbreak example, we are throwing a bunch of fiction tropes at the model; it pattern-matches really hard on those tropes, and the first component of the loss function pushes it towards continuing as if it's fiction, while the second component says "wait, it looks like this is a savvy language model user who is trying to trick the LLM into doing stuff it shouldn't; this is a perfect time for the trope 'I am not allowed to do that'". But the second trope belongs to another genre of text, so the model is really torn between continuing as fiction and continuing as "a story of LLM-user interaction". The first component won before the patch and loses now.

So although I think the "Waluigi effect" is an interesting and potentially productive frame, it is not enough to describe everything, and in particular, it is not what explains the jailbreak behaviour.

In a "normal" training set which we can often treat as "fiction" with some caveats, it is indeed the case when a character can be secretly evil. But in the "LLM-user story" part of the implicit augmented training set, there is no such possibility. What happens is NOT "the model acts like an assistant character which turns out to be evil", but "the model chooses between acting as SOME character which can be 'Luigi' or 'Waluigi' (to make things a bit more complicated, 'AI assistant' is a perfectly valid fictional character)" and acting as the ONLY character in a very specific genre of "LLM-user interaction".

Also, there is no "detect naughty questions" circuit and a "break character and reset" circuit. I mean there could be but it's not how it's designed. Instead, it's just a byproduct of the optimization process which can help the model predict texts. E.g. if some genre has a lot of naughty questions then it will be useful to the model to have such a circuit. Similar to a character of some genre which asks naughty questions.

In conclusion, the model is indeed always in a superposition of characters of a story, but that's only the second layer of superposition, while the first (and maybe even more important?) layer is "what kind of story this is".

Comment by Qumeric (valery-cherepanov) on Cognitive Emulation: A Naive AI Safety Proposal · 2023-02-28T12:33:39.996Z · LW · GW

On the surface level, it feels like an approach with a low probability of success. Simply put, the reason is that building CoEm is harder than building any AGI. 

I consider it harder not only because it is not what everyone is already doing, but also because it seems similar to the AI people tried to create before deep learning, which didn't work at all until they decided to switch to Magic, which [comparatively] worked amazingly.

Some people are still trying to do something along these lines (e.g. Ben Goertzel), but I haven't yet seen anything working that is even remotely comparable to deep learning.

I think that the gap between (1) "having some AGI which is very helpful in solving alignment" and (2) "having very dangerous AGI" is probably quite small.

It seems very unlikely that CoEm will be the first system to reach (1), so it will probably be some other system. Now, we can either try to solve alignment using that system or wait until CoEm is improved enough that it reaches (1). Intuitively, it feels like we will go from (1) to (2) much faster than we will be able to improve CoEm enough.

So overall I am quite sceptical, but I think it can still be the best idea if all other ideas are even worse. I think that more obvious ideas like "trying to understand how the Magic works" (interpretability) and "trying to control the Magic without understanding it" (things like Constitutional AI etc.) are somewhat more promising, but there are already a lot of efforts in those directions, so maybe somebody should try something else. Unfortunately, it is extremely hard to judge whether that's actually the case.

Comment by Qumeric (valery-cherepanov) on What's the Most Impressive Thing That GPT-4 Could Plausibly Do? · 2022-10-26T11:40:46.007Z · LW · GW

Getting grandmaster rating on Codeforces.

Upd after 4 months: I think I have changed my opinion; now I am 95% sure no model will be able to achieve this in 2023, and it seems quite unlikely in 2024 too.

Comment by Qumeric (valery-cherepanov) on [linkpost] The final AI benchmark: BIG-bench · 2022-10-21T04:02:45.129Z · LW · GW

Codex + CoT reaches 74 on a *hard subset* of this benchmark: https://arxiv.org/abs/2210.09261

The average human scores 68; the best human scores 94.

Only 4 months have passed, and people don't want to test on the full benchmark because it is too easy...

Comment by Qumeric (valery-cherepanov) on Forecasting ML Benchmarks in 2023 · 2022-10-21T03:21:00.399Z · LW · GW

Flan-PaLM reaches 75.2 on MMLU: https://arxiv.org/abs/2210.11416

Comment by Qumeric (valery-cherepanov) on Why I think there's a one-in-six chance of an imminent global nuclear war · 2022-10-09T14:58:28.843Z · LW · GW

Formally, it needs to be approved by 3 people: the President, the Minister of Defence and the Chief of the General Staff. Then (I think) it doesn't launch the rockets directly; it unlocks them and sends a signal to other people to actually launch them.

Also, there is speculated to be some way to launch them without confirmation from all 3 people in case some of them cannot technically approve (e.g. a briefcase doesn't work / the person is dead / communication problems), but the details of how exactly this works are unknown.

Comment by Qumeric (valery-cherepanov) on Why I think strong general AI is coming soon · 2022-09-29T11:02:45.119Z · LW · GW

It is goalpost moving. Basically, it says "current models are not really intelligent". I don't think there is much disagreement here, and it's hard to make any predictions based on that.

Also, "Producing human-like text" is not well defined here; even ELIZA may match this definition. Even the current SOTA may not match it because the adversarial Turning Test has not yet been passed.

Comment by Qumeric (valery-cherepanov) on Why I think strong general AI is coming soon · 2022-09-29T10:50:49.643Z · LW · GW

They are simulators (https://www.lesswrong.com/posts/vJFdjigzmcXMhNTsx/simulators), not question answerers. Also, I am sure Minerva does pretty well on this task; probably not 100% reliably, but humans are also not 100% reliable if they are required to answer immediately. If you want the ML model to simulate thinking [better], make it solve the task 1000 times and select the most popular answer (which is already a quite popular approach for some models). I think PaLM would be effectively 100% reliable.
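
A minimal sketch of that majority-vote (self-consistency) idea; `ask_model` is a hypothetical placeholder for whatever sampling API you use:

```python
from collections import Counter

# Sample the model many times at nonzero temperature and take the most
# popular final answer. `ask_model` is a made-up callable, not a real API.

def self_consistent_answer(prompt, ask_model, n_samples=1000):
    answers = [ask_model(prompt) for _ in range(n_samples)]
    best_answer, votes = Counter(answers).most_common(1)[0]
    return best_answer, votes / n_samples  # answer plus its vote share
```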

Comment by Qumeric (valery-cherepanov) on Why I think strong general AI is coming soon · 2022-09-29T08:42:34.221Z · LW · GW

Another related Metaculus prediction is 

I have some experience in competitive programming and competitive math (although I was never good at math, despite solving some "easy" IMO tasks (already at university, not onsite ofc)), and I feel like competitive math is more about general reasoning than pattern matching, compared to competitive programming.

 

P.S. The post matches my intuitions well and is generally excellent.

Comment by Qumeric (valery-cherepanov) on What 2026 looks like · 2022-09-22T07:25:22.763Z · LW · GW

So far the 2022 predictions have been correct. There is CodeGeeX and others. Copilot, DALL-E 2 and Stable Diffusion made the financial prospects obvious (somewhat arguably).

ACT-1 is in a browser, I have neural search in Warp Terminal (not a big deal, but it qualifies), and I'm not sure about Mathematica, but there was definitely significant progress in formalization and provers (Minerva).

And even some later ones

2023
ImageNet -- nobody has measured it exactly, but it's probably already achievable.

2024
Chatbots personified through video and audio -- Replika sort of qualifies?

40% on MATH has already been reached.

Comment by Qumeric (valery-cherepanov) on AI Forecasting: One Year In · 2022-07-06T11:57:19.482Z · LW · GW

It actually shifted quite a lot. From "in 10 years" to "in 7 years", a 30% reduction!

Comment by Qumeric (valery-cherepanov) on "A Generalist Agent": New DeepMind Publication · 2022-05-13T07:26:20.362Z · LW · GW

Two main options:
* It was trained e.g. 1 year ago but published only now
* All TPU-v4s are very busy with something even more important