Comments

Comment by β-redex (GregK) on shortplav · 2025-04-17T17:31:24.258Z · LW · GW

I would definitely be interested if you found a way to self-review recordings of your social interactions for improvement. The main roadblock I see is that either you tell the other parties that you are recording, which will probably influence their behavior a lot and erase most of the signal you were looking for in the first place, or you don't, which does feel a bit unethical.

Comment by β-redex (GregK) on Jan Betley's Shortform · 2025-03-31T21:50:01.449Z · LW · GW

What does "obscure" mean here? (If you label the above "obscure", I feel like every query I consider "non-trivial" could be labeled obscure.)

I don't think Lean is obscure; it's one of the most popular proof assistants nowadays. The whole Lean codebase should be in the AI's training corpus (in fact, that's why I deliberately made sure to specify an older version, since I happen to know that the olean header changed recently). If you have access to the codebase and you understand the object representation, the solution is not too hard.

Here is the solution I wrote just now:[1]

import sys, struct
assert sys.maxsize > 2**32
f = sys.stdin.buffer.read()
def u(s, o, l): return struct.unpack(s, f[o:o+l])
b = u("Q", 48, 8)[0]
def c(p, i): return u("Q", p-b+8*(i+1), 8)[0]
def n(p):
    if p == 1: return []
    assert u("iHBB", p-b, 8)[3] == 1 # 2 is num, not implemented
    s = c(p, 1) - b
    return n(c(p, 0)) + [f[s+32:s+32+u("Q", s + 8, 8)[0]-1].decode('utf-8')]
a = c(u("Q", 56, 8)[0], 1) - b
for i in range(u("Q", a+8, 8)[0]):
    print('.'.join(n(u("Q", a+24+8*i, 8)[0])))

(It's minified to both fit in the comment better and to make it less useful as future AI training data, hopefully causing this question to stay useful for testing AI skills.)


  1. I wrote this after I wrote the previous comment; my expectation that this should be a not-too-hard problem was not informed by actually attempting it, only by rough knowledge of how Lean represents objects and the fact that they are serialized pretty much as-is. ↩︎

Comment by β-redex (GregK) on Tormenting Gemini 2.5 with the [[[]]][][[]] Puzzle · 2025-03-31T21:41:27.805Z · LW · GW

Yep, that "standard library" part sure seems problematic; I am not sure whether an algorithm for listing primes is shorter than just the above lookup table.

Comment by β-redex (GregK) on Jan Betley's Shortform · 2025-03-31T19:05:23.550Z · LW · GW

Just to give an example, here is the kind of prompt I am thinking of. I am being very specific about what I want; I think there is very little room for misunderstanding about how I expect the program to behave:

Write a Python program that reads a .olean file (Lean v4.13.0), and outputs the names of the constants defined in the file. The program has to be standalone and only use modules from the python standard library, you cannot assume Lean to be available in the environment.

o3-mini gives pure garbage hallucination for me on this one, like it's not even close.

Comment by β-redex (GregK) on Jan Betley's Shortform · 2025-03-31T18:34:26.523Z · LW · GW

If your answer to question A is "a specific thing," and your answer to B is "yes, I'm very clear on what I want," then just explain it thoroughly, and you're likely to get satisfying results. Impressive examples like "rewrite this large complex thing that particular way" fall into this category.

Disagree. It sounds like by "being specific" you mean that you explain how you want the task to be done to the AI, which in my opinion can only be mildly useful.

When I am specific to an AI about what I want, I usually still get buggy results unless the solution is easy. (And asking the AI to debug is only sometimes successful, so if I want to fix the result I have to put in a lot of work carefully understanding the code the AI wrote in order to debug it.)

Comment by β-redex (GregK) on Tormenting Gemini 2.5 with the [[[]]][][[]] Puzzle · 2025-03-30T05:13:21.158Z · LW · GW

I guess the reasoning for why the solution given in the post is more "valid" than this one is "something something Occam's razor" or that it is "more elegant" (whatever "elegant" means), but if someone can make a more precise argument I would be interested to hear it. (In particular, in my mind Occam's razor has something to do with empiricism, while what we are doing here is pure logic, so I am not sure how exactly it applies.)

Comment by β-redex (GregK) on Tormenting Gemini 2.5 with the [[[]]][][[]] Puzzle · 2025-03-30T05:06:13.021Z · LW · GW

Unfortunately no, I don't think any contradictions can be derived from the examples given in the post if we assume that -E and [E] are unary operators and E E is a binary operator. Here are some example assignments for these operators that satisfy (AFAICT) the examples from the post, assuming left associativity for juxtaposition, and that the precedence of - is lower, so that -E E is interpreted as -(E E) in the last example (a small Python sketch encoding these tables follows the lists):

Definitions for [E]:

1 => 2
2 => 4
-1 => 1/2
-2 => 1/4
1/2 => sqrt(2)
1001 => 1001
1004 => 1004

Definitions for E E:

1 2 => 1001
1001 1 => 3
1001 1/2 => 1001
1001 4 => 25/24
1 1 => 1002
1002 1 => 1002
1002 2 => 19
4 2 => 12
4 1 => 1003
1003 2 => 20
1 1/2 => 1004
sqrt(2) 1004 => 72^1/6
1/4 1 => 1005
1005 2 => 5/4
12 1 => 1006
1006 2 => 84

Definitions for -E:

-(1) => -1
-(2) => -2
-(1001) => 1001
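For concreteness, here is a minimal Python sketch (my own encoding, not from the post) that just stores the assignments above as lookup tables; the point is that nothing in the examples forces a nicer interpretation than arbitrary partial functions:

from fractions import Fraction as F

SQRT2 = 2 ** 0.5  # sqrt(2), written as a float purely for illustration

bracket = {1: 2, 2: 4, -1: F(1, 2), -2: F(1, 4), F(1, 2): SQRT2, 1001: 1001, 1004: 1004}
neg = {1: -1, 2: -2, 1001: 1001}
juxt = {
    (1, 2): 1001, (1001, 1): 3, (1001, F(1, 2)): 1001, (1001, 4): F(25, 24),
    (1, 1): 1002, (1002, 1): 1002, (1002, 2): 19, (4, 2): 12, (4, 1): 1003,
    (1003, 2): 20, (1, F(1, 2)): 1004, (SQRT2, 1004): 72 ** (1 / 6),
    (F(1, 4), 1): 1005, (1005, 2): F(5, 4), (12, 1): 1006, (1006, 2): 84,
}

def eval_bracket(x): return bracket[x]    # [E]
def eval_neg(x): return neg[x]            # -E
def eval_juxt(x, y): return juxt[(x, y)]  # E E, left associative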

Comment by β-redex (GregK) on Tormenting Gemini 2.5 with the [[[]]][][[]] Puzzle · 2025-03-29T20:03:24.117Z · LW · GW

I fairly quickly figured out that the grammar is something like E ::= "[]" | "[" E "]" | "-" E | E E, and that eval([E]) = 2^eval(E) (and eval(-E) = -eval(E)), and then went down the rabbit hole of trying to come up with some f such that eval(E1 E2) = f(eval(E1), eval(E2)) for juxtaposition, and thinking about whether it's left or right associative. I was also thinking that maybe it's n-ary rather than binary so that associativity does not matter.

Anyway, I think where I went wrong is that I decided that [E] is a unary operator by itself, and did not reconsider this. So the lesson is... don't just assume the simplest possible AST structure implied by the grammar?

Comment by β-redex (GregK) on Daniel Tan's Shortform · 2025-03-26T17:52:42.928Z · LW · GW

Pull requests. Useful to group a bunch of messy commits into a single high-level purpose and commit that to main. Makes your commit history easier to read.

You can also squash multiple commits without using PRs. In fact, if someone meticulously edited their commit history for a PR to be easy to follow, with the changes in each commit grouped into a single higher-level logical unit of change, squashing their commits can be actively bad: you destroy that structure and make the history less readable by collapsing it into one messy commit.

When I try to get SWEs to create nicer commit histories, I mostly get pushback. Sure, not knowing the tools (mostly git add -p and git rebase -i, tbh) can be a barrier, but showing them nice commit histories does not motivate them to learn the tools used to create them. They don't seem to see the value in a nice commit history.[1]

Which makes me wonder: why do you advocate for putting any effort into the git history for research projects (saying that "It's more important here to apply 'Good SWE practices'"), when 99% of SWEs don't even follow good practices here? (Is looking back at the history maybe more important for research than for SWE, given that you describe research code as being more journal-like?)


  1. Which could maybe be because they also don't know the tools that can extract value from a nice commit history? E.g. using git blame or git bisect is a much more pleasant experience with a nice history. ↩︎

Comment by β-redex (GregK) on johnswentworth's Shortform · 2025-03-14T23:49:25.380Z · LW · GW

Does Overleaf have such AI integration that can get "accidentally" activated, or are you using some other AI plugin?

Either way, this sounds concerning to me, we are so bad at AI boxing that it doesn't even have to break out, we just "accidentally" hand it edit access to random documents. (And especially an AI safety research paper is not something I would want a misaligned AI editing without close oversight.)

Comment by β-redex (GregK) on johnswentworth's Shortform · 2025-03-14T21:54:13.675Z · LW · GW

Could someone explain the joke to me? If I take the above statement literally, some change made it into your document which nobody with access claims to have put there. You must have some sort of revision control, so you should at least know exactly who made that edit and when, which should already narrow it down a lot?

Comment by β-redex (GregK) on So how well is Claude playing Pokémon? · 2025-03-09T18:49:22.466Z · LW · GW

I am not 100% convinced by the comparison, because technically LLMs are only "reading" a bunch of source code; they are never given access to a compiler/interpreter. IMO actually running the code one has written is a very important part of learning, and I think it would be a much more difficult task for a human to learn to code just by reading a bunch of books/code, but never actually trying to write & run their own code.[1]

Also, in the video linked earlier in the thread, the girlfriend playing Terraria is deliberately not given access to the wiki, which I believe makes it an unfair comparison. I expect to see much better human performance if you give people access to manuals & wikis about the game.

Another interesting question is to what extent you can train an AI system on a variety of videogames and then have it take on a new one with no game-specific training. I don't know if anyone has tried that with LLMs yet.

Not sure either, but I agree that this would be an interesting experiment. (Human gamers are often much quicker at picking up new games and are much better at them than someone with no gaming background.)


  1. I would expect the average human to stay very bad at coding, no matter how many books & code examples you give them. I would also expect some smaller class of humans to nevertheless be able to pull that feat off. (E.g. maybe a mathematician well versed in formal logic, who is used to doing complex symbolic manipulation correctly "only on paper", could probably write non-trivial correct programs just by reading about the subject. In fact, a lot of stuff from computer science was worked out well before computers were built, e.g. Ada Lovelace is usually credited with writing the "first computer program", well before the first digital computer existed.) ↩︎

Comment by β-redex (GregK) on So how well is Claude playing Pokémon? · 2025-03-08T09:46:58.193Z · LW · GW

And as a separate note, I'm not sure what the appropriate human reference class for game-playing AIs is, but I challenge the assumption that it should be people who are familiar with games. Rather than, say, people picked at random from anywhere on earth.

If you did that for programming, AIs would already be considered strongly superhuman. Just like we compare AIs' coding knowledge to that of programmers, I think it's perfectly fair to compare their gaming abilities to those of people who play video games.

Comment by β-redex (GregK) on Can a finite physical device be Turing equivalent? · 2025-03-06T23:17:50.841Z · LW · GW

Notably, this was exactly the sort of belief I was trying to show is false

Please point out if there is a specific claim I made in my comment that you believe to be false. I said that "I don't think a TC computer can ever be built in our universe", which you don't seem to argue with? (Assuming that we can only ever get access to a finite number of atoms; if you dispute this I won't argue, since neither of us has a Theory of Everything to say for certain.)

Just to make precise why I was making that claim and what it was trying to argue against, take this quote from the post:

yes, you can consider a finite computer in the real world to be Turing-complete

I dropped the "finite computer" constraint and interpreted the phrase "real world" to mean "it can be built in our universe"; this is how I arrived at the "a TC computer can be built in our universe" statement, which I claimed was false.

Comment by β-redex (GregK) on Can a finite physical device be Turing equivalent? · 2025-03-06T22:48:05.853Z · LW · GW
Comment by β-redex (GregK) on Can a finite physical device be Turing equivalent? · 2025-03-06T21:41:08.709Z · LW · GW

I don't think a TC computer can ever be built in our universe. The observable universe has a finite number of atoms; I have seen numbers around 10^80 thrown around. Even if you can build a RAM where each atom stores 1 bit,[1] this is still very much finite.

I think a much more interesting question is why TC machines are — despite only existing in theory — such useful models for thinking about real-world computers. There is obviously some approximation going on here, where for the vast majority of real-world problems, you can write them in such a way that they work just fine with finite RAM. (E.g. the cases where we would run out of RAM are quite easy to predict, and don't just crop up randomly.)


  1. You can maybe push that number up by e.g. using subatomic particles or using a number of different quantum states of a single particle, but I think with all the tricks you will still end up with some well bounded constant multiplier for how many bits you can store, which does not matter much for this argument. ↩︎

Comment by β-redex (GregK) on Can a finite physical device be Turing equivalent? · 2025-03-06T21:22:44.552Z · LW · GW

Let S = {0,1}^n be the state space of our finite/physical computer, where n is the number of bits of state the computer has. This can include RAM, non-volatile storage, CPU registers, cache, GPU RAM, etc... just add up all the bits.

The stateless parts of the computer can be modeled as a state transition function f : S → S, which is applied at every time step to produce the next state. (And let's suppose that there is some special halting state s_halt ∈ S.)

This is clearly an FSM with 2^n states, and not TC. The halting problem can be trivially solved for it: it is guaranteed to either halt or loop within 2^n steps.

(Proof: It can be seen that if we ever get back to a state where we have already been, we will loop. So let's suppose that after 2^n steps we have had a different state at each time step, and haven't halted. This is impossible, since there are only 2^n - 1 non-halting states, so there cannot exist 2^n distinct ones.)
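Here is a minimal Python sketch of this argument (the function name and the toy transition functions are my own, purely for illustration): simulate the machine for at most 2^n steps and watch for the halt state or a repeated state.

def halts_or_loops(step, initial_state, halt_state, n_bits):
    # By the pigeonhole argument above, within 2**n_bits steps we must either
    # reach the halt state or revisit a state (and therefore loop forever).
    seen = {initial_state}
    state = initial_state
    for _ in range(2 ** n_bits):
        if state == halt_state:
            return "halts"
        state = step(state)
        if state in seen:
            return "loops"
        seen.add(state)
    raise AssertionError("unreachable for a genuine n-bit machine")

# Toy 4-bit machines: one counts up to a reachable halt state, one never halts.
print(halts_or_loops(lambda s: (s + 1) % 16, 0, 10, 4))  # halts
print(halts_or_loops(lambda s: (s + 3) % 16, 0, 99, 4))  # loops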

Comment by β-redex (GregK) on How do you feel about LessWrong these days? [Open feedback thread] · 2023-12-06T03:45:06.364Z · LW · GW

I think the reacts being semantic instead of being random emojis is what makes this so much better.

I wish other platforms experimented with semantic reacts as well, instead of just letting people react with any emoji of their choosing, and making you guess whether e.g. "thumbs up" means agreement, acknowledgement, or endorsement, etc.

Comment by β-redex (GregK) on 2023 Unofficial LessWrong Census/Survey · 2023-12-02T19:33:41.904Z · LW · GW

This was my first time taking this, looking forward to the results!

Comment by β-redex (GregK) on Which LessWrongers are (aspiring) YouTubers? · 2023-10-23T13:38:46.092Z · LW · GW

I know of Robert Miles, and Writer, who does Rational Animations. (In fact Robert Miles' channel is the primary reason I discovered LessWrong :) )

Comment by β-redex (GregK) on Rationalist horror movies · 2023-10-15T08:54:44.090Z · LW · GW

Don't leave me hanging like this, does the movie you are describing exist? (Though I guess your description is a major spoiler, you would need to go in without knowing whether there will be anything supernatural.)

Comment by β-redex (GregK) on Rationalist horror movies · 2023-10-15T08:47:57.997Z · LW · GW

  1. The Thing: classic
  2. Eden Lake
  3. Misery
  4. 10 Cloverfield Lane
  5. Gone Girl: not horror, but I specifically like it because of how agentic the protagonist is

2., 3. and 4. have in common that there is some sort of abusive relationship that develops, and I think this adds another layer of horror. (A person/group of people gain some power over the protagonist(s), and they slowly grow more abusive with this power.)

Comment by β-redex (GregK) on Rationalist horror movies · 2023-10-15T08:12:42.194Z · LW · GW

Somewhat related: does anyone else strongly dislike supernatural elements in horror movies?

It's not that I have anything against a movie exploring the idea of "what if we suddenly discovered that we live in a universe where supernatural thing X exists"; my problem is that the characters just accept this without much evidence at all.

I would love a movie though where they explore the more likely alternate hypotheses first (mental issues, some weird optical/acoustic phenomenon, or just someone playing a super elaborate prank), but then the evidence starts mounting, and eventually they are forced to accept that "supernatural thing X actually exists" is really the most likely hypothesis.

Comment by β-redex (GregK) on Jailbreaking GPT-4's code interpreter · 2023-07-13T21:52:51.228Z · LW · GW

These examples show that, at least in this lower-stakes setting, OpenAI’s current cybersecurity measures on an already-deployed model are insufficient to stop a moderately determined red-teamer.

I... don't actually see any non-trivial vulnerabilities here? Like, this is stuff you can do on any cloud VM you rent?

Cool exploration though, and it's certainly interesting that OpenAI is giving you such a powerful VM for free (well, not exactly free, since you already pay for GPT-4 I guess?), but I have to agree with the assessment of theirs that you quoted, that "it's expected that you can see and modify files on this system".

Comment by β-redex (GregK) on gamers beware: modded Minecraft has new malware · 2023-06-07T21:08:01.534Z · LW · GW

The malware is embedded in multiple mods, some of which were added to highly popular modpacks.

Any info on how this happened? This seems like a fairly serious supply chain attack. I have heard of incidents with individual malicious packages on npm or PyPI, but not one where multiple high profile packages in a software repository were infected in a coordinated manner.

Comment by β-redex (GregK) on Chatbot convinces Belgian to commit suicide · 2023-03-29T20:05:32.217Z · LW · GW

Uhh, this happening for the first time in 2023 is exactly the prediction Gary Marcus made last year: https://www.wired.co.uk/article/artificial-intelligence-language

Not sure whether this instance is a capability or alignment issue though. Is the LLM just too unreliable, as Gary Marcus is saying? Or is it perfectly capable, and just misaligned?

Comment by β-redex (GregK) on We have to Upgrade · 2023-03-23T23:01:30.092Z · LW · GW

I don't see why communicating with an AI through a BCI is necessarily better than through a keyboard+screen. Just because a BCI is more ergonomic and the AI might feel more like "a part of you", it won't magically be better aligned.

In fact the BCI option seems way scarier to me. An AI that can read my thoughts at any time and stimulate random neurons in my brain at will? No, thanks. This scenario just feels like you are handing it the "breaking out of the box" option on a silver platter.

Comment by β-redex (GregK) on How seriously should we take the hypothesis that LW is just wrong on how AI will impact the 21st century? · 2023-02-16T21:40:08.174Z · LW · GW

Why is this being downvoted?

From what I am seeing people here are focusing way too much on having a precisely calibrated P(doom) value.

It seems that even if P(doom) is 1% the doom scenario should be taken very seriously and alignment research pursued to the furthest extent possible.

It seems very unlikely to me that after much careful calibration and research you would come up with a P(doom) value below 1%. So why invest time into refining your estimate?

Comment by β-redex (GregK) on Is it a coincidence that GPT-3 requires roughly the same amount of compute as is necessary to emulate the human brain? · 2023-02-10T17:04:54.260Z · LW · GW

There was a recent post estimating that GPT-3 is equivalent to about 175 bees. There is also a comment there asserting that a human is about 140k bees.

I would be very interested if someone could explain where this huge discrepancy comes from. (One estimate is equating synapses with parameters, while this one is based on FLOPS. But there shouldn't be such a huge difference.)

Comment by β-redex (GregK) on Why Are Bacteria So Simple? · 2023-02-08T23:53:13.405Z · LW · GW

Indeed (as other commenters also pointed out) the ability to sexually reproduce seems to be much more prevalent than I originally thought when writing the above comment. (I thought that eukaryotes only capable of asexual reproduction were relatively common, but it seems that there may only be a very few special cases like that.)

I still disagree with you dismissing the importance of mitochondria though. (I don't think the OP is saying that mitochondria alone are sufficient for larger genomes, but the argument for why they are at least necessary is convincing to me.)

Comment by β-redex (GregK) on English is a Terrible Programming Language—And other reasons AI won't displace programmers · 2023-02-07T21:16:16.279Z · LW · GW

I disagree with English (in principle at least) being inadequate for software specification.

For any commercial software, the specification basically is just "make profit for this company". The rest is implementation detail.

(Obviously this is an absurd example, but it illustrates how you can express abstractions in English that you can't in C++.)

Comment by β-redex (GregK) on English is a Terrible Programming Language—And other reasons AI won't displace programmers · 2023-02-07T21:02:30.667Z · LW · GW

I don't think the comparison of giving an LLM instructions and expecting correct code to be output is fair. You are vastly overestimating the competence of human programmers: when was the last time you wrote perfectly correct code on the very first try?

Giving the LLM the ability to run its code and modify it until it thinks it's right would be a much fairer comparison. And if, as you say, writing unit tests is easy for an LLM, wouldn't that just make this trial-and-error loop trivial? You can just bang the LLM against the problem until the unit tests pass (sketched below).

(And this process obviously won't produce bug-free code, but humans don't do that in the first place either.)
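To illustrate, here is a minimal sketch of the loop I mean; ask_llm and run_unit_tests are hypothetical stand-ins for whatever model API and test runner you use, the point is only the control flow:

def generate_until_tests_pass(task_description, ask_llm, run_unit_tests, max_attempts=10):
    prompt = task_description
    for _ in range(max_attempts):
        code = ask_llm(prompt)
        passed, test_output = run_unit_tests(code)
        if passed:
            return code  # "correct" only by the tests' standard, same as human-written code
        # Feed the failure back and let the model iterate, just like a human would.
        prompt = f"{task_description}\n\nYour previous attempt failed these tests:\n{test_output}\nPlease fix it."
    raise RuntimeError("tests still failing after max_attempts")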

Comment by β-redex (GregK) on Why Are Bacteria So Simple? · 2023-02-06T13:02:49.303Z · LW · GW

Not all eukaryotes employ sexual reproduction. Prokaryotes also have some mechanisms for DNA exchange, so copying errors are not their only avenue for evolution either.

But I do agree that it's probably no coincidence that the most complex life forms are sexually reproducing eukaryotes.

Comment by β-redex (GregK) on Small Talk is Good, Actually · 2023-02-04T12:05:02.742Z · LW · GW

I barely registered the difference between small talk and big talk

I am still confused about what "small talk" is after reading this post.

Sure, talking about the weather is definitely small talk. But if I want to get to know somebody, weather talk can't possibly last for more than 30 seconds. After that, both parties have demonstrated the necessary conversational skills to move on to more interesting topics. And the "getting to know each other" phase is really just a spectrum between surface level stuff and your deepest personal secrets, so I don't really see where you would draw the line between small and deep talk.

One situation I struggle with on the other hand is when I would rather avoid talking to a person at all, and so I want to maintain the shallowest possible level of small talk. (Ideally I could tell them that "sorry, I would rather just not talk to you right now", but that's not really socially accepted.)

Comment by β-redex (GregK) on Exercise is Good, Actually · 2023-02-03T23:11:01.443Z · LW · GW

It was actually this post about nootropics that got me curious about this. Apparently (based on self reported data) weightlifting is just straight up better than most other nootropics?

Anyway, thank you for referencing some opposing evidence on the topic as well, I might try to look into it more at some point.

(Unfortunately, the thing that I actually care about - whether it has cognitive benefits for me - seems hard to test, since you can't blind yourself to whether you exercised.)

Comment by β-redex (GregK) on Nice Clothes are Good, Actually · 2023-02-03T18:08:11.713Z · LW · GW

I think this (and your other post about exercise) are good practical examples of situations where rational thinking makes you worse off (at least for a while).

If you had shown this post to me as a kid, my youth would probably have been better. Unfortunately no one around me was able to make a sufficiently compelling argument for caring about physical appearance. It wasn't until much later that I was able to deduce the arguments for myself. If I had just blindly "tried to fit in with the cool kids and do what is trendy", I would have been better off.

I wonder what similar blind spots I could have right now where the argument in favor of doing something is quite complicated, but most people in society just do it because they blindly copy others, and I am worse off as a result.

Comment by β-redex (GregK) on Exercise is Good, Actually · 2023-02-03T17:27:52.637Z · LW · GW

This alone trumps any other argument mentioned in the post. None of the other arguments seem universal; each can be argued with on an individual basis.

I actually like doing things with my body. I like hiking and kayaking and mountain climbing and dancing.

As some other commenters noted, what if you just don't?

I think it would be valuable if someone made a post just focused on collecting all the evidence for the positive cognitive effects of exercise. If the evidence is indeed strong, no other argument in favor of exercise should really matter.

Comment by β-redex (GregK) on ChatGPT struggles to respond to the real world · 2023-01-12T17:33:22.168Z · LW · GW

FWIW I don't think that matters, in my experience interactions like this arise naturally as well, and humans usually perform similarly to how Friend did here.

In particular, it seems that here ChatGPT completely fails at tracking the competence of its interlocutor in the domain at hand. If you asked a human with no context, at first they might give you the complete recipe just like ChatGPT tried, but any follow-up question would immediately indicate to them that more hand-holding is necessary. (And ChatGPT was asked to "walk me through one step at a time", which should make this blatantly obvious; no human would just repeat the instructions again in answer to that.)

Comment by β-redex (GregK) on Basic building blocks of dependent type theory · 2022-12-15T23:06:15.950Z · LW · GW

Cool! (Nitpick: You should probably mention that you are deviating from the naming in the HoTT book. AFAIK Π and Σ types are usually called Pi and Sigma types respectively, while the words "product" and "sum" (or "coproduct" in the HoTT book) are reserved for A × B and A + B.)
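For reference, here is a minimal Lean 4 sketch of the four notions (my own illustration, using only core Lean names):

def piExample : (n : Nat) → Fin (n + 1) := fun n => (0 : Fin (n + 1))    -- Pi (dependent function) type
def sigmaExample : Sigma (fun n : Nat => Fin (n + 1)) := ⟨3, 2⟩           -- Sigma (dependent pair) type
def productExample : Nat × Bool := (1, true)                               -- (non-dependent) product
def sumExample : Sum Nat Bool := Sum.inl 1                                 -- (non-dependent) sum / coproduct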

I am especially looking forward to discussion on how MLTT relates to alignment research and how it can be used for informal reasoning as Alignment Research Field Guide mentions.

I always get confused when the term "type signature" is used in text unrelated to type theory. Like what do people mean when they say things like "What’s the type signature of an agent?" or "the type of agency is "?

Comment by β-redex (GregK) on Side-channels: input versus output · 2022-12-13T10:55:27.736Z · LW · GW

This argument seems a bit circular: nondeterminism is indeed a necessary condition for exfiltrating outside information, so obviously if you prevent all nondeterminism you prevent exfiltration.

You are also completely right that removing access to obviously nondeterministic APIs would massively reduce the attack surface. (AFAIK most known CPU side-channels require timing information.)

But I am not confident that this kind of attack would be "robustly impossible". All you need is to find some kind of nondeterminism that can be used as a janky timer, and suddenly all Spectre-class vulnerabilities are accessible again.

For instance I am pretty sure that rowhammer depends on the frequency of the writes. If you insert some instruction between the writes to RAM, you can suddenly measure the execution time of said instruction by looking at how many cycles it took to flip a bit with rowhammer. (I am not saying that this particular attack would work, I am just saying that I am not confident you couldn't construct something similar that would.)

I am confident that this direction of exfiltration would be robustly impossible.

If you have some deeper reason for believing this it would probably be worth its own post. I am not saying that it's impossible to construct some clever sandbox environment that ensures determinism even on a buggy CPU with unknown classes of bugs; I am just saying that I don't know of existing solutions.

(Also, in my opinion it would be much easier to just make a non-buggy CPU than to try to prove correctness of something executing on a buggy one. (Though proving your RAM correct seems quite hard, e.g. deriving the absence of rowhammer-like attacks from Maxwell's equations or something.))

Comment by β-redex (GregK) on Side-channels: input versus output · 2022-12-13T00:49:45.119Z · LW · GW

Yes, CPUs leak information: that is the output kind of side-channel, where an attacker can transfer information about the computation into the outside world. That is not the kind I am saying one can rule out with merely diligent pursuit of determinism.

I think you are misunderstanding this part: input side-channels absolutely exist as well. Spectre, for instance:

On most processors, the speculative execution resulting from a branch misprediction may leave observable side effects that may reveal private data to attackers.

Note that the attacker in this case is the computation that is being sandboxed.

Comment by β-redex (GregK) on Side-channels: input versus output · 2022-12-12T22:08:01.118Z · LW · GW

This implies that we could use relatively elementary sandboxing (no clock access, no networking APIs, no randomness, none of these sources of nondeterminism, and that’s about it) to prevent a task-specific AI from learning any particular facts

It's probably very hard to create such a sandbox though; your list is definitely not exhaustive. Modern CPUs leak information like a sieve. (The known ones are mostly patched, of course, but with this track record plenty more unknown vulnerabilities likely exist.)

Maybe if you build the purest lambda calculus interpreter with absolutely no JIT and a deterministic memory allocator, you could prove some security properties even when running on a buggy CPU? This seems like a bit of a stretch though. (And maybe while running it like this on a single thread you can prevent the computation from being able to measure time, but any current practical AI needs massive parallelism to execute. With that, probably all hopes of determinism and of preventing timing information from leaking in go out the window.)
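To gesture at what I mean, here is a minimal Python sketch of such a "pure" evaluator: no clock, no randomness, no I/O during evaluation, so the result is a deterministic function of the input term. (This is only a toy; it says nothing about the host CPU's side-channels.)

def evaluate(term, env=None):
    # term is ('var', name) | ('lam', name, body) | ('app', fn, arg)
    env = env or {}
    kind = term[0]
    if kind == 'var':
        return env[term[1]]
    if kind == 'lam':
        _, name, body = term
        return ('closure', name, body, env)
    if kind == 'app':
        _, fn, arg = term
        _, name, body, closure_env = evaluate(fn, env)
        value = evaluate(arg, env)
        return evaluate(body, {**closure_env, name: value})
    raise ValueError(f"unknown term: {term!r}")

# (\x. x) (\y. y) evaluates to the closure for \y. y
identity = ('lam', 'y', ('var', 'y'))
print(evaluate(('app', ('lam', 'x', ('var', 'x')), identity)))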

Comment by β-redex (GregK) on Formalization as suspension of intuition · 2022-12-12T15:04:18.510Z · LW · GW

Also I just found that you already argued this in an earlier post, so I guess my point is a bit redundant.

Anyway, I like that this article comes with an actual example, we could probably use more examples/case studies for both sides of the argument.

Comment by β-redex (GregK) on Formalization as suspension of intuition · 2022-12-12T09:34:44.115Z · LW · GW

Upon reading the title I actually thought the article would argue the exact opposite, that formalization affects intuition in a negative way. I like non-Euclidean geometry as a particular example where formalization actually helped discovery.

But this is definitely not always true. For instance, if you wanted to intuitively understand why addition of naturals is commutative, maybe to build intuition for recognizing similar properties elsewhere, would this formal proof really help?

plus_comm =
fun n m : nat =>
nat_ind (fun n0 : nat => n0 + m = m + n0)
  (plus_n_O m)
  (fun (y : nat) (H : y + m = m + y) =>
   eq_ind (S (m + y))
     (fun n0 : nat => S (y + m) = n0)
     (f_equal S H)
     (m + S y)
     (plus_n_Sm m y)) n
     : forall n m : nat, n + m = m + n

This is as formal as it gets, a 100% machine-checkable proof with no room for human error.

I think formalization is just a tool that may or may not be helpful depending on your goal, and the real question is how you can tell ahead of time what level of formalization will be helpful.

Comment by β-redex (GregK) on Using GPT-Eliezer against ChatGPT Jailbreaking · 2022-12-06T23:40:27.587Z · LW · GW

Isn't this similar to a Godzilla Strategy? (One AI overseeing the other.)

That variants of this approach are of use to superintelligent AI safety: 40%.

Do you have some more detailed reasoning behind such massive confidence? If yes, it would probably be worth its own post.

This seems like a cute idea that might make current LLM prompt filtering a little less circumventable, but I don't see any arguments for why this would scale to superintelligent AI. Am I missing something?

Comment by β-redex (GregK) on Is school good or bad? · 2022-12-03T22:47:25.542Z · LW · GW

Collaborating with an expert/getting tutoring from an expert might be really good?

Probably. How does one go about finding such experts, who are willing to answer questions/tutor/collaborate?

(I think the usual answer to this is university, but to me this does not seem to be worth the effort. Like I maybe met 1-2 people at uni who would qualify for this? How do you find these people more effectively? And even when you find them, how do you get them to help you? Usually this seems to require luck & significant social capital expenditure.)

Comment by β-redex (GregK) on Is school good or bad? · 2022-12-03T22:27:09.199Z · LW · GW

I unfortunately don't have any answers, just some more related questions:

  • Does anyone have practical advice on this topic? In the short term we are obviously powerless to change the system as a whole. But I couldn't in good conscience send my children to suffer through the same system I was forced to spend a large part of my youth in. Are there any better practically available alternatives?
  • What about socialization? School is quite poor at this, yet unilaterally removing one kid would probably make them even worse off. (Since presumably all other kids their age are still at school.)
  • As an adult, what actually useful methods of learning exist? I learned the vast majority of my useful knowledge through autodidactism, everything else (school, university) is pretty much noise. I would be open to alternatives, but I haven't seen any kind of "teaching" so far that came anywhere close.

Comment by β-redex (GregK) on Is ChatGPT rigth when advising to brush the tongue when brushing teeth? · 2022-12-03T04:55:23.506Z · LW · GW

ability to iterate in a fast matter

This is probably key. If GPT can solve something much faster, that's indeed a win. (With the SPARQL example I guess it would take me 10-20 minutes to look up the required syntax and fields and put them together. GPT cuts that down to a few seconds, which seems quite good.)

My issue is that I haven't found a situation yet where GPT is reliably helpful for me. Maybe someone who has found such situations, and reliably integrated "ask GPT first" as a step into some of their workflows could give their account? I would genuinely be curious about practical ways people found to use these models.

My experience has been quite bad so far unfortunately. For example I tried to throw a problem at it that I was pretty sure didn't have an easy solution, but I just wanted to check that I didn't miss anything obvious. The answer I would expect in this case is "I don't know of any easy solution", but instead I got pages of hallucinated BS. This is worse than if I just hadn't asked GPT at all since now I have to waste my time reading through its long answers just to realize it's complete BS.

Comment by β-redex (GregK) on Is ChatGPT rigth when advising to brush the tongue when brushing teeth? · 2022-12-02T18:46:47.995Z · LW · GW

Yeah I guess many programming problems fall into the "easy to verify" category. (Though definitely not all.)

Comment by β-redex (GregK) on Is ChatGPT rigth when advising to brush the tongue when brushing teeth? · 2022-12-02T15:12:18.603Z · LW · GW

And apparently ChatGPT will shut you right down when attempting to ask for sources:

I'm sorry, but I am unable to provide sources for my claims as I am a large language model trained by OpenAI and do not have the ability to browse the internet. My answers are based on the information I have been trained on, but I cannot provide references or citations for the information I provide.

So... if you have to rigorously fact-check everything the AI tells you, how exactly is it better than just researching things without the AI in the first place? (I guess you need a domain where ChatGPT has adequate knowledge and claims in said domain are easily verifiable?)