Comments

Comment by Rich D (rich-d) on Graceful Degradation · 2024-11-07T14:29:36.857Z · LW · GW

The "sin, confession, priest" metaphor is a great anchor - that's very similar to the feeling we try to project when we have these discussions with clients.  And, related to Raemon's question, that's part of how I try to address the issue.

Specifically, we cast the instruction as a simple commandment, leaving the "But if you do" clause out of the particular guidance entirely.  So in this case, the instruction is: "Don't reverse engineer competitor technology.  That includes buying copies of competitor tech or searching for inside information.  This information is risky for us to have (for various legal reasons I can bore you with, but that aren't important right now), and so you DON'T WANT IT."

This guidance, along with any other 'commandments', goes into the general body of "Legal says 'Don't'" instructions, which is more or less what most folks expect from a legal department.  (We work hard not to be one, but a lot of legal teams are considered the "Department of 'No'".)

And then we overlay the number one rule on top of all other guidance, and we finish with this every time: if you think you did something you shouldn't have, tell us right away.  If we don't know, we can't help you.

So, returning to the original metaphor, we try to make ourselves part of the path to absolution (or at least mitigation), so that the rules can be simpler.

On the whole, it's a bit of a reversal of the original "Don't ...  But ..." construction.  We try to flip it to: "We can help you if you screw up, but only if you tell us.  Here's what we hope we don't have to help you with."  This way, if they remember a "Don't", it's all good.  If they forget a "Don't" but remember "in case of doubt, ask legal", we're mostly OK.  (If you sin, confess and get forgiven.  If you don't sin, good for you.)

In more semantic terms, we try to avoid the "But if you do ..." construction, since that part seems to stick with people in a way that crowds out the "Don't".  By making the top rule "let us help you if you break a rule", it's easier to keep the rule-breaking activity from sounding like a premise for positive action.

I wouldn't say we do great with this, but it's part of an overall effort to cast legal as useful advisors (also a priestly role), rather than enforcers. 

Comment by Rich D (rich-d) on Graceful Degradation · 2024-11-07T00:16:47.624Z · LW · GW

Graceful degradation was something that I had originally heard of in a computing context, but that I find has real application in the legal field (which is my field of work).  When giving legal advice, whenever possible, you want to give guidance that will work even if only part of it is followed.  (Because even as an in-house lawyer, I can pretty much count on my clients ignoring or reinterpreting my advice pretty regularly.)  
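
The computing sense of the term, for anyone who hasn't run into it: a system degrades gracefully if the failure of one component still leaves partial service rather than total failure.  A minimal Python sketch of the idea (the function names, currencies, and fallback values are all hypothetical, purely for illustration):

```python
# Graceful degradation in the computing sense: if the preferred
# component fails, fall back to something less capable rather
# than failing outright.  Everything here is an invented example.

def get_exchange_rate(currency: str) -> float:
    try:
        return fetch_live_rate(currency)       # best case: live data
    except ConnectionError:
        try:
            return read_cached_rate(currency)  # degraded: stale data
        except KeyError:
            return 1.0                         # last resort: safe default

def fetch_live_rate(currency: str) -> float:
    raise ConnectionError("no network in this sketch")

def read_cached_rate(currency: str) -> float:
    cache = {"EUR": 0.92, "GBP": 0.79}
    return cache[currency]

print(get_exchange_rate("EUR"))  # network fails, cache answers: 0.92
```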

This is especially important when some action or behavior becomes critical, but only in certain circumstances.  For instance, it's very important to advise my engineering clients NOT to do their own research into our competitors' proprietary technology (because if they do, it leads to higher damages if they are found to infringe patents on that technology, and it can also put the company at risk of misappropriating another company's trade secrets).  On the other hand, if they do learn something proprietary about a competitor, it is critical that they let the legal team know about it, since the consequences of mishandling that information are so high.

So attempting to give gracefully degrading instructions in this space becomes a little self-contradictory if half the instructions are forgotten.  "Don't try to reverse engineer a competitor's code.  But if you do, make sure to tell me about it."  This usually results in clients remembering either "Don't tell the lawyers if you learn competitor information" (resulting in not warning us that they have competitor info) or "Be sure to tell the lawyers about any reverse engineering you do" (resulting in teams going out to specifically research competitor information).

This type of "Don't ..., But if you do ..." situation resists degrading gracefully, but it comes up more often than I'd like.

On an unrelated note, when I first learned this term, I just had an image of a very refined woman at a fancy dinner party, taking a sip of her wine and then turning to her husband and saying, "Darling, I love you, but this is simply the worst affair you've ever dragged me out to."

Comment by Rich D (rich-d) on Outrage Bonding · 2024-08-09T21:51:37.770Z · LW · GW

It's not exactly the same thing, but I've been known to explain my lack of outrage/engagement/joiner-ism when it comes to things like this by saying: "I get why you disagree/why that's awful/whatever, but really, I just can't get that worked up just because somebody's wrong on the internet."

It's a little disingenuous, because the issue isn't really "someone being wrong on the internet", but rather that folks feel there's something wrong in the world, as reflected in a third party's opinion.  But since we all get our news and opinions delivered by the internet these days, this has often (but far from always) worked to shift the topic for me.

To be fair, sometimes it shifts the topic to a meta-discussion about whether "the internet" (or a specific media/social media app) is "the problem", but even that I find to be a more interesting (and less unhealthy) discussion than dancing around the picked-over carcass of some absent opponent's opinion.

The snarkier response might be: "You disagree with someone on the internet?  You should blog about it!"  But that just piles on the negativity.

Obviously, YMMV.

Comment by Rich D (rich-d) on What Other Lines of Work are Safe from AI Automation? · 2024-07-11T19:57:06.046Z · LW · GW

Regarding category 2, and the specific example of "lawyer", I personally think that most of this category will go away fairly quickly.  Full disclosure, I'm a lawyer (mostly intellectual property related work), currently working for a big corporation.  So my impression is anecdotal, but not uninformed.

TL;DR - I think most lawyer-work is going away with AIs, pretty quickly.  Only creating policy and judging seem to be the kinds of things that people would pay other humans to do.  (For a while, anyway.)

I'd characterize legal work as falling primarily into three main categories: 

  1. transactional work (creating contracts, rules, and systems for people to sign up to in order to advance a particular legal goal - protecting parties in a purchase, fairly sharing rights in something parties work on together, creating rules for appropriate hiring practices, etc.);
  2. legal advocacy (representing clients in adversarial proceedings, e.g., in court, or with an administrative agency, or negotiations with another party); and
  3. legal risk-analysis (evaluating a current or proposed factual situation, determining what risks are presented by existing legal regimes (either law or contract), deciding on a course of action, and then handing an appropriate task to the transactional or adversarial folks to carry out).

So in short: paper-pushers; sharks; and judges.

Note that I would consider most political positions that lawyers usually fill to fall into one of these categories.  For instance, legislators (who obviously need not be lawyers, but often are) do transactional legal work.  Courtroom judges are clearly the third group.  Prosecutors/DAs are sharks.

Paper-pushers:

I see AI taking over this category almost immediately.  (It's already happening, IMO.)

A huge amount of this work is preparing appropriate documents to make everyone feel that their position is adequately protected.  LLMs are already superficially good at this, and the fact that there are books out there providing basic template forms for so many transactional legal matters suggests that this is an easily templatized category.

As far as trusting the AI to do the work in place of a human, this is the type of work that most corporations or individuals feel very little emotion over.  I have rarely been praised for producing a really good legal framework document or contract - and the one real exception was when it encapsulated good risk-analysis (judging).

Sharks:

My impression is that this will take longer to be taken over, but not all that long.  (I think we could see it within a few years, even without real AGI coming into existence.)

This work is about aggressively gathering arguments for, and advocating, a specific side pre-identified by the client.  There is no judgment or human value necessarily associated with it, so I don't think the lack of a human presence will feel very significant to someone choosing an advocate.

At the moment, this is (IMHO) the category requiring the most creativity in its approach, but ... given what I see from current LLMs, I think it remains essentially a word / logic game, and I can imagine AI being specifically trained to do this well.

My biggest concern here is hallucination.  I'm curious what others with a real technical sense of how that can be appropriately limited would think about this.

Judges:

I think this is the last bastion of human-lawyering.  It's the work most closely tied to specific human desires, and relinquishing judgment to a machine is where I think people will FEEL it hardest.

Teaching a machine to judge against a specific set of criteria should be easy-ish.  Automated sentencing guidelines are intended to do exactly this, and we already use them in many places.  And an AI should be able to create a general sense of what risks are presented by a given set of facts, I suspect.
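
To illustrate the "judging against a specific set of criteria" part: mechanical, guideline-style scoring is just lookup-and-adjust arithmetic.  A toy sketch (every offense, factor, and number here is invented, not drawn from any real guideline):

```python
# Toy guideline-style scoring: judging against fixed criteria
# reduces to arithmetic.  All offenses, factors, and numbers
# below are invented for illustration.

BASE_SEVERITY = {"trespass": 2, "fraud": 6, "assault": 8}

def guideline_score(offense: str, prior_convictions: int,
                    accepted_responsibility: bool) -> int:
    score = BASE_SEVERITY[offense]
    score += min(prior_convictions, 3)  # cap the criminal-history bump
    if accepted_responsibility:
        score -= 2                      # mitigating factor
    return max(score, 0)

print(guideline_score("fraud", prior_convictions=1,
                      accepted_responsibility=True))  # -> 5
```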

But the real issue in judging is deciding which of those risks presents the most significant exposure and consequence, BASED ON EXPECTED HUMAN RESPONSES.  That's what an in-house counsel at a company spends a lot of time advising on, and it's the basis on which a courtroom judge decides the cases that extend or expand existing law.

And while I think that AI can do that, I also think that most people will see the end result as being very dependent on the subjective view of the judge/counselor as to what is really important and really risky.  And that level of subjectivity is something that may well be too hard to trust to an AI that is not really transparent to the client community (either the company leadership, or the public at large).

So, I don't think it's a real lack of capability here, but that this role hits humans in a soft spot and they will want to retain this under visible human control for longer.  Or at least we will all require more experience and convincing to believe that this type of judging is being done with a human point of view.

Basically, this is already a space where a lot of people feel political pressure has a significant impact on results, and I don't see anyone being comfortable letting a machine of possibly alien / inscrutable political ideology make these judgments / give this advice.

So I think the paper-pushers and sharks are short-lived in the AI world.

Counselors/judges will last longer, I think, since they are roles that specifically reflect human desire as expressed in law.  But even then, most risk-evaluation starts with analysis that I think AIs will be tasked to do, much as interns do today for courtroom judges.  So I don't think we'll need nearly as many.

On a personal note, I hope to be doing more advising (rather than paper-pushing and negotiating) to at least slightly future-proof my current role.

Comment by Rich D (rich-d) on Changes in College Admissions · 2024-04-25T18:44:12.399Z · LW · GW

As a former smart person who decided that actual productive work was undervalued, and that I therefore might as well become a lawyer, this line made me chuckle:

"Normally I would be against dumbing down our testing, but keeping smart people from becoming lawyers is not the worst idea."

Unfortunately, given what's on the LSAT, even removing the logic puzzle section probably doesn't dumb it down that much.  I think it only ends up mattering in the broadest categories.  (That is, while folks' percentiles might change without the Logic Fun section, I suspect that most folks' deciles won't change by more than one, and most won't change at all.)

In my experience, there are enough "Top 10" law schools (there are about 20 by my count) that anyone smart enough to score in the top 10-15% on the LSAT who sends enough applications will get into at least one of those "top" schools.  So even at the limit, maybe someone who previously would have been admitted won't get into Stanford Law with their "new" LSAT score.  But they'd still get into at least one of Harvard, Yale, Cornell, NYU, Columbia, Berkeley, or Georgetown.

So I guess my comment is: this wouldn't keep smart people from becoming lawyers - but it might discourage those who are smart but either (1) aren't all THAT smart, or (2) aren't all that willing to think it through, from becoming lawyers.

But I agree that it's not the worst idea.