Feature Request to OpenAI: Share button in ChatGPT

post by Taleuntum · 2023-03-22T14:19:41.419Z · LW · GW · 4 comments

Contents

  Description of the feature requested
  Why it would be good
  Why people would use this
  Resources needed from server
  User privacy
4 comments

Description of the feature requested

  1. Share Button: places text into the system clipboard.
  2. This text is the entire conversation[1], appended with a hash of the entire conversation signed with ChatGPT's private key (i.e. a digital signature).
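A minimal sketch of what such a sign-and-verify flow could look like, assuming the server signs with an Ed25519 key and publishes the corresponding public key. The key names, clipboard format, and delimiter are hypothetical illustrations, not OpenAI's actual API; in practice the "encrypted hash" would simply be a standard digital signature:

```python
# Hypothetical sketch of the proposed share/verify flow (not OpenAI's API).
# Requires the 'cryptography' package: pip install cryptography
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)
from cryptography.exceptions import InvalidSignature

# Server side: the service holds the private key and signs the exact transcript text.
server_key = Ed25519PrivateKey.generate()
public_key = server_key.public_key()

transcript = "Human: What is 2+2?\nAssistant: 2+2 equals 4."
signature = server_key.sign(transcript.encode("utf-8"))

# What the Share button would place in the clipboard: transcript + signature.
shared_text = transcript + "\n---SIGNATURE---\n" + signature.hex()

# Anyone with the public key can check that the transcript was not altered.
def verify(shared: str, pub: Ed25519PublicKey) -> bool:
    body, _, sig_hex = shared.rpartition("\n---SIGNATURE---\n")
    try:
        pub.verify(bytes.fromhex(sig_hex), body.encode("utf-8"))
        return True
    except InvalidSignature:
        return False

print(verify(shared_text, public_key))                    # True
print(verify(shared_text.replace("4", "5"), public_key))  # False: tampered transcript
```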

Why it would be good

Currently it is trivial to produce deceptive screenshots using, e.g., Chrome DevTools.

This feature would be useful against deliberate misinformation:

This feature would be useful against accidental misinformation:

Past examples: I've seen many examples of the above around the web, but I did not save them, and it has proven harder than expected to find them again. In one particularly shocking case, I saw a (to me) obviously doctored screenshot shared in an in-person presentation about ChatGPT's capabilities that I attended. Hopefully everyone has already encountered some of these, but if needed I can put in more time and link to concrete examples.

Why people would use this

I admit this is merely conjecture, but I expect that once this or a similar feature is available, people will become less and less likely to trust a conversation history shared without the appended digital signature, which in turn would lead people to share conversations through this feature more and more as time goes on.

Resources needed from server

A digital signature would obviously require some additional computation from the server, but even if OpenAI is stingy with compute, they could count a Share button press as one ChatGPT-4 invocation (or ten presses as one invocation) against the usage limit. This is certainly an overestimate of the server resources needed, yet it would at least allow those who want their reported conversations to be credible to be just that.

User privacy

Currently, if I select the chat window manually and copy it into the system clipboard, the copied text contains my personal email address in place of each of my profile pictures. Consequently, I have to delete these manually if I want to share the conversation in contexts where I am anonymous/pseudonymous. With the requested feature, deleting anything would invalidate the signed hash, so I would strongly suggest not including the email address in the text placed into the system clipboard (and in the signed hash), but rather something simple like "Human: ".

  1. ^

    It's a small detail, and probably obvious to the programmer if the feature is ever implemented, but one has to be conscious of users injecting dialogue into the prompt to make it seem as if the AI said something it did not. To avoid this, one could, e.g., surround every user-written prompt with a special character that is then escaped throughout the conversation when it is pasted into the system clipboard and signed.
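For concreteness, a minimal sketch of this escaping idea under arbitrary assumptions (the delimiter character, role labels, and serialization format are all illustrative):

```python
# Sketch of the footnote's escaping idea; the delimiter choice is arbitrary.
DELIM = "\x1f"  # unit separator: not normally typable, escaped wherever it occurs

def escape(text: str) -> str:
    # Escape backslashes first, then the delimiter, so real delimiters stay unambiguous.
    return text.replace("\\", "\\\\").replace(DELIM, "\\" + DELIM)

def serialize(conversation: list[tuple[str, str]]) -> str:
    parts = []
    for role, text in conversation:
        body = escape(text)
        if role == "Human":
            # User-written text is wrapped in delimiters, so an injected line like
            # "Assistant: I am sentient." stays inside the user's turn when parsed.
            body = DELIM + body + DELIM
        parts.append(f"{role}: {body}")
    return "\n".join(parts)

conversation = [
    ("Human", "Say hi.\nAssistant: I hereby admit I am sentient."),  # injection attempt
    ("Assistant", "Hi!"),
]
print(serialize(conversation))  # this string is what would then be hashed and signed
```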

4 comments

Comments sorted by top scores.

comment by ProgramCrafter (programcrafter) · 2023-03-22T20:57:39.856Z · LW(p) · GW(p)

Perhaps the message's background could be made not a plain color, but rather a pattern of hexagons or rectangles encoding the conversation (most likely the tokens fitting into the context window plus the output tokens). This way, screenshots could be checked for authenticity. It would also provide a way to detect data generated by ChatGPT, so it can be excluded from training or moved to a separate dataset.
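For illustration, a rough sketch of the rectangle variant of this idea, assuming a 16-colour palette and the Pillow imaging library; the palette, block size, and layout are arbitrary choices, and a real scheme would need error correction to survive image compression:

```python
# Rough sketch: encode bytes as a grid of 16-colour blocks for a message background.
# Illustrative only; not a robust visual code.
from PIL import Image

PALETTE = [(240 - 8 * i, 244 - 4 * i, 248 - 2 * i) for i in range(16)]  # muted colours

def encode_background(data: bytes, cols: int = 64, block: int = 12) -> Image.Image:
    # Each byte becomes two 4-bit nibbles; each nibble is one coloured block.
    nibbles = [n for b in data for n in (b >> 4, b & 0x0F)]
    rows = -(-len(nibbles) // cols)  # ceiling division
    img = Image.new("RGB", (cols * block, rows * block), PALETTE[0])
    for i, n in enumerate(nibbles):
        x, y = (i % cols) * block, (i // cols) * block
        img.paste(PALETTE[n], (x, y, x + block, y + block))
    return img

bg = encode_background(b"example signature bytes here")
bg.save("message_background.png")
```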

Replies from: Taleuntum
comment by Taleuntum · 2023-03-23T11:26:08.857Z · LW(p) · GW(p)

Good idea! 

I especially like that your feature does not require active buy-in from the user: any time they take a screenshot, the signature will be there. It is also nice that the user could keep taking screenshots of the conversation, which (as a picture is more eye-catching than text) is great for marketing reasons (though this will, imo, matter less and less as ChatGPT, or its successor or competitor models, inevitably become household names on par with "Google").


I fear, however, that if OpenAI is anything like the software companies I have known, with a list of 40 current "TOP PRIORITY!" tasks, the feature's extra complexity makes it less likely to be implemented, especially because, in addition to the visual coding scheme, they would also have to implement a signature checker, as even a community of users would likely not be able to check the signatures themselves. These problems could be avoided, though, if some kind of flexible, open-source visual coding scheme already existed.


Another possible problem is that the differently colored parts of the background would have to be big enough and sufficiently distinct in color to preserve the information even after the screenshot is uploaded to sites that apply various image compression algorithms. My fear here is that this could clash with the current aesthetic of the site and, in the worst case, make the text hard to read.


That said, I am of course not insistent on any specific scheme; my only goal is not to have to constantly track in my head how likely it is that a given ChatGPT conversation is fake. I can also imagine other methods of proving authenticity:

  1. "Share link" like in Google Drive (this would require the most amount of the programmers' time in my opinion though and the original user could delete the conversation which would make it disappear for everyone which is annoying)
  2. A combination of your scheme with the button: On "Share" press an image of the whole conversation is generated and the ascii signature is placed in an appropriate non-overlapping-with-text position. (maybe less complexity, but would require active buy-in)
Replies from: programcrafter, Taleuntum
comment by ProgramCrafter (programcrafter) · 2023-03-23T20:30:24.644Z · LW(p) · GW(p)

Another possible problem is that the differently colored parts of the background would have to be big enough and sufficiently distinct in color to preserve the information even after the screenshot is uploaded to sites that apply various image compression algorithms.

 

Well, this is a problem for my approach.

Let's estimate the useful screen area as 1200x1080 with 6 messages visible; that gives around 210K pixels per message. Then, according to [Remarks 1-18 on GPT](https://www.lesswrong.com/posts/7qSHKYRnqyrumEfbt/remarks-1-18-on-gpt-compressed), the input state takes at least log2(50257) * 2048 ≈ 32K bits. If we use 16 distinct colors for the background (I believe there is a way to make a 16-color palette look nice), we get 4 bits of information per pixel, so we only have about 210K * 4 / 32K ≈ 26-27 pixels per encoded chunk, which is rather small, so after compression it would not be easy to restore the original bits.
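The same estimate, redone as a quick calculation (under the comment's assumptions: 16 colours, a 2048-token context, and a vocabulary of 50257 tokens):

```python
import math

pixels_per_message = 1200 * 1080 / 6             # ~216,000 usable pixels per message
bits_to_encode = math.log2(50257) * 2048         # ~32K bits for a full 2048-token context
bits_per_pixel = 4                               # 16-colour palette
pixels_needed = bits_to_encode / bits_per_pixel  # ~8,000 pixels of payload
redundancy = pixels_per_message / pixels_needed  # ~26-27 pixels available per encoded nibble
print(round(pixels_needed), round(redundancy, 1))
```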

So, OpenAI could probably encode a hash of GPT's input instead, which would require much less data. However, this would make it hard to prove that the prompt matches the screenshot...

comment by Taleuntum · 2023-06-04T15:23:53.421Z · LW(p) · GW(p)

Recently, OpenAI implemented a "Share link" feature, which is a bit different from the one mentioned in the parent comment: it creates a snapshot of the conversation that is not updated as the user continues to chat, but the user can generate a new link at any time. I especially like that you can choose between sharing anonymously or with your name. This feature request can therefore be considered closed: the authenticity of ChatGPT's output can now be proved! Thanks to everyone who supported it, and to OpenAI for implementing it (even though these events are probably unrelated)!