Humanity‑First AI: Subsidiarity, Agency, and Servant Leadership
The motivation for this piece comes from a phrase I have been seeing a lot as an organizing principle: “AI First”. History shows that technology-first approaches:
Promise increased efficiency
Concentrate power
Erode human agency
Subsidiarity is, at its core, a framework for how we interface with one another, and for how different levels of governance interface with each other and with individuals. I am suggesting that subsidiarity provides a “humanity first” organizing principle for how we can be better leaders, including in the responsible and sustainable deployment of AI.
Aristotle is credited as one of the originators of the foundational philosophy that led to the modern conception of subsidiarity, with his idea that society is composed of layers, with the household unit (oikos) as the foundation, building up to the village and further up to the city-state (polis). Each layer has its own proper function and ends, and the higher layers should not usurp the functions of the layers beneath them. Aristotle argued that the healthy functioning of society was in fact dependent on this decentralization. Jumping straight to a modern thinker, Nassim Nicholas Taleb’s conception of antifragility is very much aligned with this idea of maximal decentralization: overly centralized and overly optimized organizations tend toward brittleness, and under the right applied pressure will shatter. Think of a monolithic structure versus one composed of thousands of smaller pieces. A single crack can topple the monolith. While cracks may form in a structure of many bricks, sudden catastrophic failure is far less likely.
The idea of subsidiarity was codified by Pope Pius XI (Pope from 1922-1939) in his encyclical Quadragesimo Anno, “On Reconstruction of the Social Order” (May 15, 1931):
“Just as it is gravely wrong to take from individuals what they can accomplish by their own initiative and industry and give it to the community, so also it is an injustice and at the same time a grave evil and disturbance of right order to assign to a greater and higher association what lesser and subordinate organizations can do. For every social activity ought of its very nature to furnish help to the members of the body social, and never destroy and absorb them. …. Therefore, those in power should be sure that the more perfectly a graduated order is kept among the various associations, in observance of the principle of "subsidiary function," the stronger social authority and effectiveness will be the happier and more prosperous the condition of the State.”
It is a long and interesting read that is as relevant to our present historical context as it was in 1931.
I think that servant leadership is in fact the real-world application of subsidiarity. In a few bullet points:
Not doing for people what they can do for themselves
Enabling people to become more competent, more capable
Intervening when necessary to support people, consistent with the second point above
Respecting people’s dignity and humanity
I want to digress a little on the first idea above, which sits at the intersection of Taleb’s and the Pope’s thought. Taleb has discussed how bail-outs prevent local failure and the necessary error correction that comes with it. Good parents know that they need to let their children try to do things independently and fail (within reason). We cannot “nerf” everything; otherwise our children will do truly irresponsible things as adults, unable to actually assess risk. The connection to, and feedback from, the reality-based consequences of our actions is critical to becoming an effective human being (and leader).
A servant leadership and subsidiarity oriented deployment of AI will follow a few principles:
“Humanity First” as a guiding principle for AI deployment, where the results are an improvement in, not a deterioration of, the human condition
Be ground-up: the people doing the work are trained in the use of AI, and are invited to identify how they can use these tools more productively
People who identify improved workflows and techniques should be rewarded
Reductions in workforce size resulting from increased efficiency should, as much as possible, arise through attrition
Presently, there is an impulse toward the use of AI as a means of enhanced measurement and control.
There are those in positions of high corporate and governmental power who are contemplating AI‑enabled influence and persuasion techniques that depend on the subject’s lack of awareness—techniques that would be experienced as manipulative if disclosed. This crosses an ethical line. A question I ask myself when contemplating how to persuade someone is: “Would they resent what was done once they knew what was done?”
Manipulative persuasion goes directly against the principles of subsidiarity and servant leadership.
A subsidiarity-based framework is a preventative against the identity crisis and exacerbation of social disorder that I alluded to in my previous piece.
From a more applied leadership perspective: when subsidiarity is the guiding framework in a business environment, one highly desirable result is a pipeline of future leaders ready to succeed the current leadership group.
With gratitude for your readership and thoughts,
Nik