frontier64's Shortform

post by frontier64 · 2021-08-28T20:53:55.869Z · LW · GW · 18 comments


Comments sorted by top scores.

comment by frontier64 · 2023-06-26T02:57:17.970Z · LW(p) · GW(p)

You woo a girl into falling in love with you, sleep with her, then abandon her? A hundred years ago you'd have been run out of town as a rotten fool. Nowadays that communal protection is gone.

Replies from: Viliam
comment by Viliam · 2023-06-27T14:52:30.236Z · LW(p) · GW(p)

Do you have a specific proposal for bringing back the communal protection in this case?

As I see it, there are three levels in how this works:

  • what are the local norms of behavior (sex before marriage: ok / not ok)
  • who enforces the norms and what are their incentives (lynching / police)
  • how anonymous is the environment (depends on size of the town)

(By the way, the situation 100 years ago would have depended a lot on your status and the girl's. High-status men were not punished for things they did with low-status girls; a denial, not necessarily a plausible one, would probably suffice. So let's assume that both are approximately average.)

First, there has been a change in sexual norms. Sleeping with someone and then abandoning them is... basically, the freedom our hippie ancestors fought for. From this perspective, nothing bad happened. The girl may feel sad, but that's partly a result of her own choices and partly just a force of nature that cannot be avoided.

(Note that 100 years ago, the girl would also be blamed, because she also broke the rules.)

Second, norms are now enforced by the police (and Twitter mobs). The police will, ideally, follow the written laws. If there is no law against something, there is nothing for the police to do; in fact, the police might interfere with attempts to drive a guy out of town.

Third, what you described would only work in a small town anyway. In a sufficiently big town there are always new girls to woo, and whatever happens, 99.99% of people will be like "I don't know anything about it, and I don't care".

Replies from: frontier64
comment by frontier64 · 2023-06-28T22:45:53.598Z · LW(p) · GW(p)

As to the status:

I was thinking of the situation more with young people, where the guy doesn't have high status and the girl is under the full care of her family. Certainly you're right that it was hard to punish high-status people for stealing a girl's heart and running away with it. But I think that's just because it was hard to punish high-status people for most improprieties back then. A high-status person could stiff the shoe shiner, renege on oral agreements, and engage in all manner of improper behavior, so long as it mainly harmed low-status people, without ever having to defend against a campaign launched against him.

But this gets away from my main thinking, which is that there was SOME level at which parents had security over their daughter. They knew that if the [equivalent of the local weed-dealing loser back in 1900] made their daughter swoon and broke her heart, and the rest of the town found out, then he would be ostracized from polite society. I think of this as community policing/shaming.

> what are the local norms of behavior (sex before marriage: ok / not ok)

I don't think this is really that important, honestly. The shaming and driving out went away long before norms changed enough for being a deadbeat to be considered OK. I grew up in a city where the majority of the population would want to shun a deadbeat like that.

> who enforces the norms and what are their incentives (lynching / police)

When I say driven out by the town, I don't mean lynching or anything necessarily illegal. I mean not being served at restaurants, being kicked out of his lodging, being harassed by sympathetic local cops, etc. Cancelling, but less astroturfed.

For me this is the big reason communal policing and shaming went away: businesses lost the legal ability to exclude people. A business that chooses to exclude someone can now be fined by nonsensical/corrupt judges who believe that the same cake can both celebrate a gender transition and make no expressive statement. The risk to a business from excluding someone is just too high to justify whatever benefit it receives from policing the community.

> how anonymous is the environment (depends on size of the town)

I think the general idea is that communal policing/shaming in small towns forces deadbeats to prey only upon people in those sorts of anonymized medium-to-large cities. There's a surprising amount of solidarity between businesses in small towns. Business owners go to meetings together at the chamber of commerce or one of a million other similar organizations. Employees at small businesses are treated like family in many cases. I'm certain many business owners would want to strike back at a deadbeat who wronged an employee's family.

The only thing that's really needed to bring back communal policing/shaming is to give business owners back the right to exclude whomever they choose.

Replies from: Viliam, ChristianKl
comment by Viliam · 2023-06-29T07:54:30.624Z · LW(p) · GW(p)

> The only thing that's really needed to bring back communal policing/shaming is to give business owners back the right to exclude whomever they choose.

That would also allow the owners of Google, Facebook, and Twitter to choose which groups of people they want to remove from their parts of the internet.

Replies from: frontier64
comment by frontier64 · 2023-06-29T14:05:08.511Z · LW(p) · GW(p)

Which, like, already happens. Somehow major tech companies have more leeway in banning people from their businesses than local bakeries do.

edit: and also, those two don't really need to go together? They definitely don't go together now. Physical stores are forced to serve basically everybody while tech companies ban people with impunity. I don't see why the laws can't be reversed: let physical store owners ban whom they wish, and make it illegal for online forums to ban people for anything other than spam, illegal behavior, and whatever other clearly bad thing I'm missing here.

comment by ChristianKl · 2023-06-30T15:44:12.448Z · LW(p) · GW(p)

What sources do you have for the norms that existed 100 years ago, the ones driving your picture of how people were driven out of town back then?

Replies from: frontier64
comment by frontier64 · 2023-06-30T19:05:08.894Z · LW(p) · GW(p)

Mainly reading and watching period fiction, court opinions from around 1900 and earlier, and hearing what life was like from my grandfather (who wasn't alive prior to 1920, but told me stories from his youth and from his dad). I definitely could be totally wrong: maybe it was rare for a community to punish a deadbeat for leading a girl on and abandoning her, or the typical punishment was very different from my understanding.

Data does show that premarital sex was both much less common and much less reported prior to 1920.[1] This doesn't necessarily mean there was community policing/shaming of the sort I describe, only that some prevention/punishment mechanisms were in place that have since eroded.


  1. https://www.sas.upenn.edu/~jesusfv/fgg.pdf ↩︎

comment by frontier64 · 2022-04-07T17:16:56.255Z · LW(p) · GW(p)

I don't see how we could ever get superhuman intelligence out of GPT-3. My understanding is that the goal of the GPT neural nets is to predict the next token of web text written by humans. GPT-N, as N → ∞, will be perfect at creating text that could have been written by the average internet user.

But the average internet user isn't that smart! Let's say there's some text on the internet that reads, "The simplest method to break the light speed barrier is..." The most likely continuation of that text will not be an actual method to break the light speed barrier! It'll probably be some technobabble from a sci-fi story. So that's what we'll get from GPT-N!

Replies from: Chris_Leong, sil-ver, thomas-kwa
comment by Chris_Leong · 2022-04-09T03:39:33.431Z · LW(p) · GW(p)

Have you seen InstructGPT?

Replies from: frontier64
comment by frontier64 · 2022-04-12T16:00:42.273Z · LW(p) · GW(p)

I hadn't until you mentioned it here. I have now read through an explanation of InstructGPT by OpenAI here. My understanding is that the optimization in this case is for GPT-3 outputs that will be liked by the humans in the reinforcement-learning-from-human-feedback (RLHF) loop.

The OpenAI people say that "One way of thinking about this process is that it “unlocks” capabilities that GPT-3 already had, but were difficult to elicit through prompt engineering alone." Which I guess kind of points at the problem I was thinking of. GPT-N is optimized to predict the next token of a bunch of internet text. All the add-ons are trying to take advantage of that optimizer to accomplish different tasks. They're doing a good job at that, but what the big compute is optimizing for remains the same.
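Roughly (my simplified sketch, not OpenAI's exact objective), the RLHF step tunes a policy $\pi$ against a learned reward model $r_\theta$ while a KL penalty keeps it close to the supervised fine-tuned base model $\pi_{\mathrm{SFT}}$:

$$\max_{\pi}\;\mathbb{E}_{x \sim \mathcal{D},\, y \sim \pi(\cdot\mid x)}\left[r_\theta(x,y)\right] \;-\; \beta\,\mathbb{E}_{x \sim \mathcal{D}}\left[\mathrm{KL}\!\left(\pi(\cdot\mid x)\,\middle\|\,\pi_{\mathrm{SFT}}(\cdot\mid x)\right)\right]$$

That KL anchor is arguably why this "unlocks" rather than replaces what the pretraining optimized for.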

On a slightly different note, this paper kind of reinforces my current thought that alignment is being co-opted by social-justice types. OpenAI talks about alignment as if it's synonymous with preventing GPT from giving toxic or biased responses. And that's definitely not AI alignment! Just read this quote: "For example, when generating text that disproportionately affects a minority group, the preferences of that group should be weighted more heavily." It's disgusting! Like, this is really dangerous. It would be horribly undignified if alignment researchers convinced policymakers that we need to put a lot of effort into aligning AI, and then the policymakers made some decree that AI text can't ever say the word "bitch," as if that's some solution.

ETA: Pretty troublesome that this is where we're stuck on alignment while Google has already made their improved version of GPT-3 and OpenAI has created a new artistic neural net that's way better than anything we've ever seen. Still, I think it's not too troubling if they keep using methods that plateau at the level of human ability. It might be an interesting future if AI is stuck at human-level thought for a while.

Replies from: lahwran
comment by the gears to ascension (lahwran) · 2022-04-12T16:14:18.290Z · LW(p) · GW(p)

The problem isn't that people are trying to parent AIs into not being assholes via social justice knowledge, the problem is that the people receiving the social justice knowledge are treating it as an attempt to avoid being canceled when they need to be seeking out ways to turn it into constructive training data. social justice knowledge is actually very relevant here. align the training data, (mostly) align the ai. worries about quality of generalization are very valid and the post about reward model hacking is a good introduction to why reinforcement learning is a bad idea. however current unsupervised learning only desires to output truth. ensuring that the training data represents a convergence process from mistakes towards true social justice seems like a very promising perspective to me and not one to trivially dismiss. ultimately AI safety is most centrally a parenting, psychology, and vibes problem with some additional constraints due to issues with model stability, reflection, sanity, "ai psychiatry".

also AI is not plateauing

comment by Rafael Harth (sil-ver) · 2022-04-07T19:00:54.077Z · LW(p) · GW(p)

The average internet user isn't smart, but you can set up the context such that GPT-3 expects something smart.

You can already observe this difference with GPT-3. If you set up a conversation between an AI and a human carelessly, GPT-3 is quite dumb, presumably because the average conversation with an AI assistant in the training data is quite dumb. But if you give a few smart responses from the AI as part of the context, the continuations become much smarter.
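A purely illustrative sketch of that priming effect (these prompt strings and the comments are hypothetical, not actual GPT-3 transcripts):

```python
# Hypothetical prompts illustrating context priming; the completions a real
# model would produce will vary -- these strings are made up for illustration.

careless_context = (
    "Human: what is entropy\n"
    "AI: Entropy is when stuff gets messy.\n"
    "Human: why does time go forward\n"
    "AI:"
)

primed_context = (
    "The following is a conversation with a brilliant physicist AI.\n"
    "Human: What is entropy?\n"
    "AI: Entropy counts the microstates consistent with a macrostate,\n"
    "S = k_B ln W, which is why heat flows from hot to cold.\n"
    "Human: Why does time go forward?\n"
    "AI:"
)

# A language model continues whichever persona the context establishes, so
# the second prompt tends to elicit noticeably smarter completions.
```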

Also, I think it's more helpful to view it as a two-stage problem: 1) get a neural net to build a world model, and 2) query that world model. The first happens during training, the second during deployment. It's not clear that the first stage is limited to human-level intelligence; since GPT-3's task is open-ended, wouldn't getting better and better world models always be helpful? And once you have the world model, well, let's just say I wouldn't be comfortable betting on us being unable to access it. At the very least, you could set up a conversation between two of the smartest people in the world.

comment by Thomas Kwa (thomas-kwa) · 2022-04-07T17:42:32.429Z · LW(p) · GW(p)

In the limit, GPT-N models the entire Earth that created the Internet in order to predict text completions. Before then, it will invent new methods of psychoanalyzing humans to better infer correlations in text. So it surely has superhuman capabilities; it's just a matter of accessing them.

Replies from: frontier64
comment by frontier64 · 2022-04-12T16:12:32.348Z · LW(p) · GW(p)

My understanding is that it's possible there's a neural net along the path GPT-1 → GPT-N that plateaus at perfectly predicting the next token of human-written text while stopping way short of having to model the entire Earth. And that would basically be a human internet poster, right? If you create one of those, then training it with more text, more capacity, and more compute won't yield a neural net that models the Earth. It'll just yield that same neural net, which already works perfectly on its own, with a bunch of extra wasted space.

I'm not too sure my understanding is correct, though.

comment by frontier64 · 2021-08-28T20:53:56.146Z · LW(p) · GW(p)

The ability to destroy parity between decisions made by the real agent and simulations of the agent lets the agent win games against simulator opponents.

  1. Different types of incoherence between real vs simulated choices grant different levels of power.

You're playing a game against Allen the Alien, a simulator. The game: you and Allen separately choose one of 10 paths. If you pick the same one, Allen wins; otherwise, you win. With no simulation, Allen has a 1/10 chance of winning while you have a 9/10 chance. If Allen simulates you accurately, he has a 1/1 chance to win.

If you're fully able to notice you're being simulated, and can completely obfuscate this fact from the simulator, then simply having all simulations pick path 10 while the real you picks path 1 is a 1/1 winning strategy. To achieve this you need some pre-chosen general rule that breaks parity based on whether or not you're in a simulation.

You might not be able to tell whether you're in a simulation while still being able to break parity between simulations of you and reality. Randomizing your path choice, so that the simulation has a 1/10 chance of choosing each path and reality's choice has no correlation with the simulation's, doesn't give you an automatic win, but it completely destroys the simulator's advantage, as in the sketch below.
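A quick Monte Carlo sketch (illustrative Python; it assumes a naive Allen who simply copies whatever his single simulation of you outputs) confirms the arithmetic:

```python
import random

TRIALS = 100_000
N_PATHS = 10

# You randomize uniformly; Allen's simulation of you draws independently of
# your real draw, so its output carries no information about reality.
allen_wins = 0
for _ in range(TRIALS):
    simulated_you = random.randrange(N_PATHS)  # what Allen's simulation reports
    real_you = random.randrange(N_PATHS)       # your actual, uncorrelated choice
    if simulated_you == real_you:              # Allen picks what the sim reported
        allen_wins += 1

print(allen_wins / TRIALS)  # ~0.10: Allen is back at the no-simulation baseline
```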

  2. Sometimes creating incoherence between different simulations is more powerful than just creating incoherence between all simulations and reality.

Let's say we change the above game so that Allen picks 9 of the 10 paths and you win only if you choose the one path Allen leaves open. In this case a non-simulating Allen has a 9/10 chance to win, and there's really nothing you can do about it.

If Allen is a simulator (and believes his simulation of you is accurate), then creating incoherence between reality and simulations of you improves your chances from 1/10 to 1/9: simulations of you pick path 1, so Allen will be sure to block that path and distribute his remaining 8 picks among the other 9 paths, while the real you picks uniformly among those 9 paths, exactly one of which is open.

If you can not only create incoherence between simulations of you and reality, but can also create random incoherence between individual simulations, then you can improve your odds further. If Allen is a (very) naive simulator, then having your simulations each pick a path from 1-9 with equal likelihood, while reality picks path 10, is a guaranteed victory: Allen blocks the nine paths his simulations report and leaves path 10 open. (See the sketch below.)
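The same kind of hedged sketch for this variant (again assuming a naive Allen who blocks exactly the paths his simulations report):

```python
import random

TRIALS = 100_000

# (a) Simulations deterministically report path 0, so Allen blocks path 0
# plus 8 of the other 9 paths; the real you picks uniformly from paths 1-9.
wins = 0
for _ in range(TRIALS):
    open_path = random.choice(range(1, 10))  # the one path Allen leaves open
    real_pick = random.choice(range(1, 10))  # your real draw avoids path 0
    if real_pick == open_path:
        wins += 1

print(wins / TRIALS)  # ~0.111 = 1/9, up from the 1/10 baseline

# (b) Against a naive Allen who blocks whichever paths his many simulations
# report: simulations spread over paths 0-8, so he blocks exactly those nine
# and leaves path 9 open; the real you picks path 9 and wins every time.
```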

Of course the above strategies will not work against a reflective simulator. You would have to use steganography to conceal your simulation-defeating ability in the results of the simulations as well. This may involve using a different probability distribution over chosen paths, or having simulations select only a subset of the paths reality will not choose. These techniques give you somewhere between a better-than-1/9 chance at worst and a 1/1 chance at best.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2021-08-28T22:09:56.410Z · LW(p) · GW(p)

Specifically, you want to notice that you are in a counterfactual, pretend that you don't notice and bluff, act in a way that bends the decisions of your opponent to your will. Which means steganographic hard-to-notice decision making that covertly controls your apparent decisions and doesn't get rounded down to not happening in a simulation.

At the same time, you don't want this to trigger if it's a counterfactual being considered by you, or by an ally. So there should be authentication protocols between simulation controllers and agents in simulated counterfactuals that let them know when to reveal actual decisions. Something something homomorphic encryption something, so that you know secrets that can be communicated to the simulations you are running within your own cognition but can't be extracted from your algorithm?

comment by frontier64 · 2021-09-23T15:35:51.905Z · LW(p) · GW(p)

The future may have a use for frozen people from the current era: historical humans could serve as an accurate basis for interpreting the legal documents of our time.

Original public meaning is a fairly modern mode of legal interpretation of the US Constitution. Its basis is that the language of the Constitution should be interpreted according to the meaning the text had when it was drafted and amended into the Constitution. A similar mode of interpretation is used, less commonly, for statutes. It's likely that this mode of interpretation will become more common in the future as a way to prevent value drift.

One of the struggles of employing the original-public-meaning test in modern times is that, for the older amendments, there is no one currently alive who lived in the culture in which they were drafted. It would be very helpful to have just a single ordinary man who lived in 1790 who could explain his understanding of the Constitution and the language therein.

It's possible the future will have similar issues interpreting constitutional language from our time, and will appreciate the ability to question a revived portion of this era's population.

This theory is a subset of the idea that humans from past eras will be useful in the future for preventing value drift generally. My first brush with this idea was Three Worlds Collide.

comment by frontier64 · 2022-06-27T00:37:44.687Z · LW(p) · GW(p)

We are ancient apes who can say yes or no to humans evolving. Tough choice. What do you think the apes should have chosen?