by Scott Hartley
December 4, 2017
Every day there is a flurry of media coverage about machine automation and the latest industry to be taken over by artificial intelligence (AI). But how credible and imminent are these threats? And how should we, as forward-thinking leaders in education, consider both the potential impact of these new technologies, and how we can prepare students for them?
The counter-intuitive truth in all of this is that the liberal arts are becoming more, not less, important. This is not to say that technical literacy, even fluency, is unimportant. But the argument that Science, Technology, Engineering, and Math (STEM) alone guarantees relevance in our future economy is overblown; STEM is necessary, but not sufficient.
The liberal arts, which include the natural sciences and mathematics, are not mutually exclusive of STEM. And new tools can be engaged and incorporated through the methods of liberal arts teaching. For example, computer science need not be rote learning; it can be deeply collaborative and communicative. Philosophy can be taught not only by re-reading the old masters, but also by considering their tenets in a modern world. We need to consider both subject matter and the applicability of methodological thinking.
Daniela Rus, director of MIT’s Computer Science and Artificial Intelligence Lab, speaks frequently on the topic of automation. Rather than the all-or-nothing race toward fully autonomous tools, known as “serial automation,” she calls for more of a “guardian angel system,” in which machines work alongside human counterparts. Commercial airplanes are a useful example: we assume they have been largely autonomous for many years. Automation may mean pilots spend less time than they once did leveling the wings, but it does not mean these planes take off, fly, and land unassisted. In fact, pilots are not just chaperones in the sky; they manage complex checklists and inputs to tell the plane exactly what to do, so that it can maneuver itself in specific, isolated ways.
In a world of more intelligent driver-assist on the road, this process has been similarly gradual. We may receive braking, steering, or parking assistance, but these supplement the actions and judgments drivers make in ambiguous situations. At the desktop, in the office, this won’t be any different. The comparative advantage of humans is our ability to make judgments and ethical determinations in ambiguity, to be spontaneous, to collaborate, to communicate. We are exceedingly good at extracting context, being adaptive, and acting with empathy. These are exactly the skills we will use to supplement the data-crunching analysis of machines.
As leaders, we ought to consider how this new technological scaffolding will supplement our abilities. The McKinsey Global Institute published a study in the summer of 2016 that analyzed 800 occupations and the constituent tasks that made up those jobs. They looked at current and projected technological advances and estimated the extent to which tasks could be fully or partially automated. What they found was concerning, but not alarmist.
Five percent of jobs, they surmised, could be fully automated by machine learning and AI. But for 60 percent of the jobs surveyed, 30 percent of the tasks could be performed with higher accuracy, fidelity, or speed by machines. What this means is that our business world looks much more like Rus’s “guardian angel system,” a process of “parallel automation.” Robots aren’t going to suddenly sit next to us, do our jobs better, and make us a better pot of coffee. Rather, they are going to be embedded into the many tasks and processes we perform at work. They’re going to be our “driver-assist” at the desktop.
As we consider the path forward, we ought to be training students for a world where all employees need literacy with our newest technological tools, but where humans are also rewarded for their comparative advantage — namely, being human. Machines will inevitably be better at sensing, monitoring, and synthesizing data, for example, but the vital function of humans will be to ask the questions, frame the hypotheses, and consider bias.
Humans will play the role of either technology translator or human-to-human interface. In the former category, critical thinking, data literacy, and technological literacy will be important. In the latter, empathy, communication, and collaboration will be vital. Social scientists will also play the role of auditors, questioning, probing, and perhaps peer reviewing data sets for bias before they are baked into black box algorithms, hidden in ones and zeros.
As machines take on the routine, highly scripted tasks within our jobs that can be programmed away, becoming our desktop driver-assist and our technological office scaffolding, what will remain are the tasks that cater to our very advantages as people. The tasks reserved for employees will be manifold, but they will require us to be adaptive, creative, collaborative, and comfortable with complexity and ambiguity. It will be a world not focused on looking up answers, but one predicated on asking the right questions.
For this reason, training in the liberal arts, in addition to technology, should flourish. Keen companies will recognize the value of someone with not only strong cognitive skills, but also sharp social skills. More than just an insurance product guaranteeing relevance, or an investment for which the returns are constantly calculated in terms of employability, education is also for consumption, for slowing us down, and for fostering well-balanced citizens. A liberal arts education develops deep-thinking humans. And it is they who will be fit to steward our wide-reaching technology, and help us navigate its complex impact on society.

Scott Hartley is a venture capitalist and startup advisor. He is the author of “The Fuzzy and the Techie: Why the Liberal Arts Will Rule the Digital World” (Houghton Mifflin Harcourt, 2017).