It’s impossible lately to avoid reading or hearing about the future implications of Artificial Intelligence (“AI”). Regardless of qualification, everyone seems to have an opinion on where this technology is taking us— running the full spectrum from “Utopian Ideal” to “Robot Apocalypse”.

What a perfect topic for an essay written by an uninformed old-fart with a vanity-project blog site!

First of all, count me as a macro-economic optimist. As I recall from the musty corners of my attic-brain, economic growth[1] is created by the combination of three inputs: capital (i.e. $$$$), labor, and a more difficult-to-define concept known as productivity. Productivity, in this sense, can be thought of as anything that drives efficiency so that more economic output can be created with the same amounts of capital and labor.
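For the curious, the idea the author is recalling is standard textbook growth accounting. A common sketch (this is the generic Cobb–Douglas formulation from introductory macroeconomics, not anything specific to this essay) treats productivity as the "Solow residual," the growth left over after capital and labor are accounted for:

```latex
% Cobb--Douglas production function: output Y produced from
% productivity A, capital K, and labor L (0 < \alpha < 1)
Y = A \, K^{\alpha} L^{1-\alpha}

% Taking logs and differentiating with respect to time yields
% the growth-accounting identity, where g_X denotes the growth rate of X:
g_Y = g_A + \alpha \, g_K + (1-\alpha) \, g_L

% g_A, total factor productivity growth (the "Solow residual"),
% is the portion of output growth not explained by adding
% more capital or more labor.
```

In this framing, "more output with the same capital and labor" is exactly a rise in $g_A$.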

Until recently, economists have been concerned about the lack of productivity growth in the U.S. This is especially troubling in an environment where our elected officials are determined to reduce the amount of available labor by eliminating immigration and to create the kind of unstable geopolitical environment that discourages investments of capital. Assuming you are pro-economic growth, it’s a really good time for productivity to begin increasing again, and that’s exactly what’s been happening. Many economists suspect that this productivity boost is the result of increased use of AI.

The glass-half-empty crowd look at this type of productivity transformation and see jobs being eliminated. They aren’t wrong. We’ve seen similar transformative technologies result in significant social disruption.

This is kind of hard to address without sounding like a heartless elitist. I have no doubt that certain jobs, and some entire industries, will be made obsolete by AI. Historically, that’s exactly what happens following the introduction of a transformative technology. The classic example is buggy-whip manufacturers, who were no longer needed when the automobile was invented. See also telegraph operators, coal miners, manufacturing assembly workers, and anyone who used to work in the production side of print media. I also believe that in a growing economy, more jobs are created to replace those that are made obsolete.

The bad news is that if you still have a job that consists of a highly repetitive task, the output of which is easily validated, the AI bots are probably coming for you. The good news, I would argue, is that you are no longer going to be asked to go to work every day to perform a job that consists of highly repetitive tasks, the output of which is easily validated. If it sounds more interesting to have a job where your value comes from your ability to read and appropriately react to human interpersonal cues, recognize and respond to nuance, and ask insightful questions in response to complicated and incomplete data sets, then the future is bright.

That, of course, is an oversimplification, and may still scare the hell out of people, but I have a feeling it’s directionally correct. Almost all jobs are a combination of (1) a certain amount of repetitive drudgery, often absent of human interaction and easily and objectively measured; and (2) more nuanced work requiring emotional intelligence and judgment undertaken in an environment of uncertainty. The second element is much more valuable to most firms and to society in general. One other stray bit I recall from my economics courses is that the most valued and productive roles in an economy tend to be the most highly compensated. So, a world where the boring, repetitive, and lower-compensated portions of our jobs go away, leaving more time for challenging and highly compensated work, seems like a good thing.

One of the things I hated as a young lawyer starting work in 1993 was having to track the time I spent at work in increments of 6 minutes (one tenth of an hour). This was done so that my time could be appropriately billed out to clients and monetized for my firm. That model, as brutal as it seemed, indelibly ingrained in me the value of how I spent my time. I vividly recall more than one partner calling me into their office to discuss a time-sheet he or she was getting ready to convert into a billable invoice to a client. “Clay, clients don’t pay us a lawyer-rate to collate and staple court filings.” “Clay, clients won’t pay for 1.75 hours of research when it took you .5 hours to find a case in our library that should have taken .25 hours.” And on and on. I eventually left the private practice of law, but never really lost that billing clock in my head, and throughout my career found myself asking, “am I doing something right now that a client/shareholder would find value in?”

So, back to the two components involved in most jobs: the more work you do in the second category, the one requiring emotional intelligence and judgment undertaken in an environment of uncertainty, the more economic value you create with your time and the more highly you’ll be compensated. That, as I understand it, is exactly where AI technology should take us.

How do we prepare people for this world?

That’s exactly the question that we’ve been asking in Trustee meetings at my Little College in the Cornfields[2].

Earlier this month, I listened as a group of faculty discussed work that a committee is currently undertaking to help define how best to equip young people to utilize, but not abuse, AI. Their first step was to establish a Vision Statement[3], which I loved. It read:

We want to help the students at Central College

  • Think with AI;
  • Think without AI; and
  • Think about AI.

This simple statement not only establishes the objective of the committee’s work, but in so doing highlights three essential principles for optimizing cognitive productivity in a post-AI world.

The first bullet of the statement acknowledges that AI is here and cannot be ignored. Knowing when and how to use it is going to be an increasingly important life and job skill. It’s impossible for colleges to bury their collective heads in the sand and fight that gravity. An important part of a college’s role in preparing students for work and for life is to ensure its graduates have the capability to use this tool.

It’s equally dangerous, however, to think of AI as a tool for outsourcing cognitively strenuous work, and that’s acknowledged in the second bullet. When I consider how my iPhone (and more directly the apps loaded on it) has changed the wiring of my brain over the last 15 years, I shudder to think about the attention span of a generation who views AI as a shortcut to avoid the need for cognitive focus. Individuals who create value in the next generation of organizations will understand that their role is to do the difficult thinking and to outsource the easy stuff to AI. There’s enormous value in doing hard things, and college is the perfect place to build the foundational habits necessary to get people comfortable with the discomfort that comes from cognitively challenging work.

Finally, I love the idea that it’s important for college students today and in the future to carve out time to think about AI. For thousands of years, intellectually gifted people have been asking:

  • What does it mean to be a person?
  • What does it mean to be a good person?
  • What does it mean to be a good citizen?
  • What does it mean to be a good family member?

These questions are only becoming more important in a post-AI world where much of the fabric of civil society is rapidly shifting.

I’m looking forward to hearing how the Committee intends to deliver on its vision statement. Frankly, I don’t anticipate that AI will drive huge change in the function and delivery of a liberal arts education. There’s a reason that, despite all the technological advances of the last 1,000 years, the essential liberal arts model hasn’t changed much since the founding of the University of Bologna in 1088. A classic liberal arts education demands that students consume complicated texts and learn both to explain and to question the ideas they present. In so doing, students learn to think critically, enhance their understanding of humanity, and communicate clearly. I can’t imagine a better method for teaching discernment, moral reasoning, and the curiosity necessary to utilize a technology uniquely positioned to amplify our humanity rather than replace it.


[1] Being an old-fart with a musty attic-brain, I generally regard economic growth as a good thing, but I am aware of and respect counter-arguments on this point. Perhaps this is a good pin for a future essay.

[2] More commonly known as Central College. (Hoo-Rah, Hoo-Rah, Central, Central, Rah, Rah!!)

[3] Vision Statements, like a lot of corporate jargon, have been abused and misused into a state of satirical content gold. See also faculty committees. However, too many committees in any organization begin their work without establishing, or for that matter, understanding, what they are trying to accomplish. Thus, the value of a well-crafted Vision Statement.
