AI is advancing quickly, and if there’s any one consensus about it, it’s that it will have broad impacts on jobs. What that impact will be is more debated, but few expect no impact at all. Some believe jobs will disappear, leaving large amounts of unemployment. Others draw on past periods of technological change, such as the Industrial Revolution or the advent of the internet, and believe that advances ultimately lead to new jobs that didn’t previously exist.
That debate depends on several sub-debates, and I won’t try to answer them all today. Instead, I want to focus on one specific area: the nature and effect of adversarial roles. Adversarial roles are those in which a significant part of the work is zero-sum competition with others filling the same role.
Now, to be clear, adversarial roles are not without value, or at least, wholly valueless ones are uncommon. Most adversarial roles provide important value in the course of pursuing their competition, but most also have a taper effect: the greatest value comes from the first efforts, and greater and greater amounts of effort yield less and less value.
I’ve been using “value” loosely so far, so let me clarify: there is both social value and personal value. What I’ve described so far is social value. A zero-sum competition provides no social value, but yields personal value to the victor.
Law serves as a prime example of an adversarial role. When functioning effectively, it provides significant value to society by communicating expectations, resolving disputes, and fostering trust and predictability.
When we engage in a marketplace, law provides both explicit and implicit value. On the implicit side, our trust in marketplaces is enabled by the laws that stand beneath them. We may not know all of those laws, but we have some experience of their effects. If misrepresentation (fraud) weren’t illegal, there would be more of it, and we’d have to engage with more care. That care would take more effort, so one implicit value is efficiency. An explicit value is when law gives us recourse for a harm, like recovering assets lost to theft. The point here is that we should think about value somewhat more deeply than our personal explicit experience, and also give credit to the value we enjoy without playing an active role.
This implicit/explicit divide is common to many, maybe all, adversarial roles, and is one aspect that makes their relationship with society complex. Many of these roles paradoxically have low reputation but high social status. Partly this derives from the low social-to-personal value ratio at the narrow end of the taper, but it also stems from mixed awareness or acceptance of their implicit value.
It’s usually unclear where the line falls, but there’s good reason to believe these roles can also burn past the taper, to the point where additional activity provides negative social value. Advertising is an example. At the taper’s wide end, advertising spreads useful information, connecting a provider with a consumer and creating a positive-sum transaction. At the narrow end, advertising has abandoned information and is engaged in influence. As it burns past the taper, advertising can become manipulative, setting its flame to our happiness, creating new insecurities that products then promise to resolve rather than resolving preexisting needs.
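To make the taper concrete, here is a toy numeric sketch. The functional forms and numbers are my own illustrative assumptions, not a model of any real industry; the point is only the shape: early effort is socially valuable, later effort is not, yet the personal return to the client persists.

```python
# Toy illustration of the taper (assumed numbers, not real data):
# marginal social value of effort in an adversarial role starts high, tapers,
# and eventually turns negative, while the marginal personal value captured
# by the client stays roughly constant.

def marginal_social_value(effort: int) -> float:
    # Linear taper: +10 for the first unit, negative past effort = 4.
    return 10.0 - 2.5 * effort

def marginal_personal_value(effort: int) -> float:
    # The client keeps winning roughly the same amount per unit of effort.
    return 3.0

for effort in range(7):
    msv = marginal_social_value(effort)
    mpv = marginal_personal_value(effort)
    if msv < 0:
        region = "burned past the taper"
    elif msv < mpv:
        region = "narrow end"
    else:
        region = "wide end"
    print(f"effort={effort}: social={msv:+5.1f}, personal={mpv:+5.1f}  ({region})")
```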
Now, my intent in describing these roles’ value is not to pass moral judgement on the actors. I believe that actors in these roles often lack the agency to control the social-to-personal value ratio. Even as a role burns past the taper, it can be difficult to recognize one’s own part in that, and there are few options other than leaving the role, which would presumably be filled by another, since the demand for the competitive value will remain.
The alternative to adversarial roles is productive roles. I think all roles have some degree of competition; we can only find the pure role in thought experiments, but those thought experiments are enough to help us recognize roles that are primarily productive. For example, a farmer must at some point sell his crop in the market, and in that market he competes with other farmers. That reality aside, we’d think of the toil the farmer puts into producing that crop as productive. We have high confidence someone needs, or at least wants, the corn, potatoes, meat, chocolate, or coffee the farmer grows and harvests.
In case this hasn’t made the distinction clear, I’ll offer some examples of roles I’d consider adversarial or productive, and let you follow the threads from there.
Adversarial: Advertising, Sales, Law, Negotiation, Finance, Politics
Productive: Farming, Science, Manufacturing, Transportation, Medicine, Education
AI and scaling
How does all this relate to AI? It behooves us to ask how much social value AI will produce when applied to adversarial roles. Because there is personal value to the client of an adversarial role, when the cost declines we can expect more demand. Demand in terms of inputs might decline, but in terms of output it will certainly increase.
I say inputs might decline because there is a complex equilibrium between the inputs invested in these roles and the surplus available elsewhere in society, which we can also expect AI to change. But let’s put that aside and focus on the outputs.
On the output side, we should expect clients to demand more. When the cost decreases, output whose marginal personal value didn’t previously justify its price becomes worth buying. For example: more advertising, or more influential advertising; more financial complexity, more attempts to exploit that complexity, and more attempts to exploit others’ lack of financial sophistication.
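As a toy sketch of this demand effect (again, the numbers and functional form are my own illustrative assumptions): a client buys output until the marginal personal value of one more unit falls below its cost, so as AI pushes the unit cost down, the quantity demanded rises.

```python
# Toy sketch of the demand effect (assumed numbers, for shape only):
# a client buys units of adversarial output until the marginal personal value
# of one more unit falls below its cost. AI lowers the cost per unit.

def marginal_personal_value(quantity: int) -> float:
    # Diminishing personal returns to, e.g., additional advertising.
    return 100.0 / (quantity + 1)

def quantity_demanded(cost_per_unit: float) -> int:
    q = 0
    while marginal_personal_value(q) >= cost_per_unit:
        q += 1
    return q

for cost in (20.0, 10.0, 2.0):  # AI driving the unit cost down
    print(f"cost per unit {cost:5.2f} -> units demanded: {quantity_demanded(cost)}")
```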
A key concern is whether the increased outputs from AI-driven adversarial roles will actually generate social value. If AI simply reduces the inputs needed for these roles without pushing them past their 'taper,' then intervention may not be necessary. In this scenario, AI could even increase social value by improving the efficiency of resource allocation.
However, the more serious risk arises when AI drives these roles to produce outputs that have a negative social impact. This is where action becomes not just optimal but essential.
The reality of this is specific to each type of role. The action that is possible lies in market design: better aligning the personal value of outputs with their social value. In other words, incentive alignment. Better alignment is always useful, producing greater efficiency, but we should think of it as critical when we transition from inefficiencies to negative social impacts.
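As an abstract illustration of what such alignment could look like, here is a sketch of one classic market-design lever: a corrective per-unit charge that makes the client bear the social harm of output past the taper. This is my own toy example extending the demand sketch above, not a proposal specific to any of these fields, and all functional forms are assumptions.

```python
# Sketch of one classic alignment lever: a per-unit corrective charge equal
# to the marginal social harm of output past the taper. Assumed numbers.

def marginal_personal_value(q: int) -> float:
    return 100.0 / (q + 1)

def marginal_social_harm(q: int) -> float:
    # Early units are harmless; units past the taper impose rising harm.
    return max(0.0, 0.5 * (q - 10))

def quantity_demanded(cost: float, charge=lambda q: 0.0) -> int:
    # The client buys while a unit is worth more than its cost plus the charge.
    q = 0
    while marginal_personal_value(q) >= cost + charge(q):
        q += 1
    return q

cost = 2.0  # cheap, AI-scaled output
print("quantity without alignment:", quantity_demanded(cost))
print("quantity with corrective charge:", quantity_demanded(cost, marginal_social_harm))
```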
I don’t think I have all the solutions here; in fact, I know I don’t. No one could, as these are all complex fields. I’ve spent time explicitly thinking about alignment solutions for advertising and finance: advertising because I think it has the most obvious examples of negative value, and finance because of my own role, experiences, and other personal relationships.
My point at this time, though, isn’t specific solutions; it’s that this is likely to become more important, and worth taking steps we wouldn’t have taken in the past.
Footnote: When I started writing, I did not intend to use the word alignment here. It’s interesting that it arose organically in the thought process, given that it’s so core to discussions about AI. This is a reminder that these adversarial systems are, to some degree, a form of intelligence: enabled by the actors, but with emergent properties of their own. In the same way that we want to think explicitly about the alignment of AI with our goals, we should be concerned about the alignment of these systems with our goals. However, achieving this alignment is complex, particularly in systems like politics and law, which are themselves the mechanisms for encoding societal preferences and aligning other systems.
Interesting commentary. As this applies to law, one thing to consider is how the licensing of attorneys factors into this analysis. As opposed to advertising, where there are no barriers to entry, a license is required to practice law. I think AI could certainly enable much more pro se activity, both in litigation and in transactional matters. However, practicing attorneys may be constrained to some extent from going too far beyond the taper by ethical rules and also by reputational factors. The rosier scenario is that AI may actually help streamline processes. As an anecdote, I've found the writing produced by legal AI software to often be more concise, clear, and to the point than what you typically receive from other attorneys.