2 Comments
Karl P Baker

Interesting commentary. As this applies to law, one thing to consider is how the licensing of attorneys factors into this analysis. Unlike advertising, where there are no barriers to entry, practicing law requires a license. I think AI could certainly enable much more pro se activity, both in litigation and in transactional work. However, practicing attorneys may be constrained to some extent from going too far beyond the taper by ethical rules and also by reputational factors. The rosier scenario is that AI may actually help streamline processes. As an anecdote, I've found the writing produced by legal AI software to often be more concise, clear, and to the point than what you typically receive from other attorneys.

Ryan Baker

The effect of licensing is one of those realities that applies to each individual role. Considering the effect of current rules and potential future rules is very much the type of thinking I'd want to provoke.

Keep in mind that the barrier to entry wouldn't directly place any limit on the amount of AI output created. It's a limit on one type of input, the time of licensed lawyers, but the new inputs from AI aren't constrained there.

The framework I'd use to reason here is to think about the outputs, how those might change, and what the impact of those changes would be. One type of output is contracts. I think you're right that consistency and quality could increase here, especially at the margins where the extra cost of higher quality wasn't matched by its value to either party. Contracts could, however, become more complex, covering situations left unspecified before, though the availability of tools to help understand that complexity would act as a counterbalance. You might also imagine more adversarial drafting, or attempts to bury clauses that advantage the represented side. But since the other side can walk away if such activity is detected, there seem to be limits to how far it can go.

Another output, though, is claims and counterclaims, and the entire process of presenting arguments to a judge, jury, or arbitrator. Without any ethical rules, you would expect an explosion of such behavior. Initial claims would be cheaper to file, so the minimum chance of success needed to justify filing would drop. Filings could be used not just in the hope of winning, but to create a nuisance for an adversary or for the purpose of extortion.

But, as you point out, there are ethical rules around this, and a lawyer engaged in such behavior would be up against not just their direct adversary but the ethical system itself. In addition, the adversaries would have access to AI as well, and might use it to deflect such nuisance filings at lower cost. The areas needing the most care are those where the effects might spill out of this arms race, such as a reduction in confidence in fair and predictable outcomes. If the adversarial use of AI and the ethical rules of the profession collectively prevent that, then we are left with an efficiency question rather than a risk of negative social impact.
