Opportunity at the Interface Part 5 – Tools, Our Interface with Reality, Identity and AI
A friend recently put me on to HBR Ideacast #1066 (With Rise of Agents, We Are Entering the World of Identic AI) on the topic of identic AI. The high-profile guest asserted that “we are not our tools” as a throwaway statement. Overall, I found his analysis of AI adoption utopian (if you know me, that is NOT a compliment). While “we are not our tools” is literally true, the statement obscures deeper truths about our relationship with our tools and about how the use of tools shapes our perception of reality and our sense of identity. This is what I will explore today, along with how it relates to AI adoption, and I will speculate on the consequences for leadership.
“We shape our tools, and thereafter our tools shape us.” – Marshall McLuhan
“Men have become the tools of their tools.” – Henry David Thoreau
The use of tools is a special human trait. Only a few other species use tools, and then only in a primitive fashion. The creation and use of ever more powerful tools is what has enabled humanity to become an apex species. When we meet new people, conversations inevitably turn to what we do for a living…what tools we use…what dialect we speak. Make no mistake: based on what we do for a living, there is a whole tribal sub-language that the uninitiated will not understand.
The point is that the tools we use play a big part in how we perceive and interface with each other and with reality itself. They shape our very sense of identity. So, are we our tools? No, but when our tools are mental, they are a part of us.
The pace of tool development (i.e., technological change), viewed over a very long time frame, is one of continual acceleration. Another feature of humanity is niche switching: the ability to adapt to changes and opportunities imposed by nature, sometimes revealed by new tools, or imposed by us on nature via our tools. This is a software-level adaptation that humans make through cognitive change. Are we approaching the limits of that adaptability? What will happen when we reach them?
The first order consequences of the advent of AI enhanced workers are that:
Productivity will go up for everyone.
A large number of current professions will be disrupted, requiring fewer workers.
The tools we use will change, and new jobs and professions will be created.
Fifteen years ago there was a popular and emotionally unintelligent saying directed at unemployed coal miners: “learn to code”. Five years ago it was directed back at unemployed journalists. Some proportion of the people at whom this was directed had the capacity to learn a new skill like coding, and did. A large number did not. People who use this kind of saying reveal their sense of superiority, their condescension, and their lack of appreciation of others’ basic humanity; it promotes division and animosity. The people being advised to “learn to code” had spent years honing a craft; they view the world through that craft, identify with it, and feel attached to it. Yes, they need to confront the reality that a new way to make a living is needed, but leaders must communicate this with empathy, with gratitude for what these workers have contributed and can still contribute, and with support for their ability to retrain in realistic adjacent fields.
I think the second order consequences of AI adoption will include the following:
Large scale identity crisis.
Large scale cognitive impairment.
I’ll explain why I think the second first: there is already evidence coming out of education that the depth of people’s learning is being impaired by extensive use of AI, for instance in the preparation of essays and reports. When human beings prepare a report themselves, they must internally synthesize and organize the information into knowledge and then compose it into a format that is transmissible to other human beings. This exercise in effect causes that knowledge to be written into long-term memory…to be learned deeply. When we cognitively offload this work to AI, we get an immediate productivity boost, but we no longer learn that knowledge deeply and no longer think deeply. How many people today are incapable of navigating without a GPS telling them where to go?
“Cogito, ergo sum” (I think therefore I am) – René Descartes
I have a lot more to say, but I’ll leave it at that. Moving on.
Over the last five years it has been revealed that many very highly placed tech executives forbid their children from using social media. I can only wonder what kind of hygiene people “in the know” will apply to their own use of AI. My prediction: they will use it in a Socratic way, to speed up research and to challenge their own work, but in a way that will not lead to their own cognitive impairment. Will they care if the average user ends up cognitively impaired? My confident answer is NO. In fact, I think the opposite will be true: the user’s cognitive impairment is a feature, not a bug, because it makes people more dependent on the technology.
I am not arguing against the use of AI. I am advising leaders to use it responsibly themselves, and to encourage the people they lead to do so as well, so that human potential is maximized, not diminished. As in most things, I think there is an optimum balance: more of a good thing is not always better, or, perhaps more ominously, the difference between medicine and poison is the dosage.
Now on to the identity crisis, which I think is more straightforward: as AI use unlocks enhanced productivity, fewer people are needed for a given job, and terminations follow (see Block Inc.’s recent 40% workforce reduction [~4,000 people]). A number of people will be unable to find a similar job at similar pay. These people will likely face an identity crisis, and will be vulnerable to exploitation of various kinds: economic, political, and more. The political exploitation will come from opportunists on both the far left and the far right, further exacerbating polarization. Simplistic narratives and solutions will be presented, playing on people’s vulnerability.
I think there is a win-win opportunity for business leaders to grow their businesses while mitigating the impact of AI adoption on this possible mass identity crisis. As productivity grows and the number of employees needed goes down, there is an opportunity for some of your most talented people to learn new skills. If you have been doing things the right way (i.e., your HR processes include career planning), you have people who also have a deep understanding of your business, and you know which of them are capable of retraining, keen to do so, and to what extent. Started early enough, this will help your business adapt to the new world incrementally. If leaders instead take a late-in-the-game approach of “cutting headcount” (I loathe this expression), it will demotivate those left behind. They will be asking themselves, “is all I’m doing training my agentic AI replacement?”
The opportunity is there for leaders to find win-wins, so that we arrive at a real future that is neither dystopian nor utopian but, hopefully, something reasonable as we adapt to the new ways we will interface with reality through powerful new AI tools.
All of what I’ve talked about today has to do with how we interface with reality and with each other. My intention with this piece is to present a perspective I am not seeing in popular discourse and to get you all thinking and talking about the higher order consequences of how we go about changing these interfaces between complex and difficult to predict systems.
So, in closing: how are your tools filtering your reality? How are the tools of the people you lead filtering theirs? What will you do to make sure you do not end up cognitively impaired?
With gratitude for your thoughts and readership,
Nik