Concerns regarding artificial intelligence, machine learning, and automation have recently received considerable attention. Do these advances pose a threat that will, at best, destroy the jobs created since the industrial revolution or, at worst, leave us all as physically passive creatures reclining on lounge chairs in the world anticipated by “The Matrix”?
At times, we are guilty of believing that all things are new and unique to the present. Fears of the dehumanization of the workers of the world are at least as old as the industrial revolution and Charlie Chaplin’s character in “Modern Times.” They probably predate recorded history with early man wondering if the invention of the wheel would lead to unemployment and the moral decay of subsequent generations. Unquestionably, however, recent advances in artificial intelligence appear all-encompassing and more rapid than previous workplace disruptions. The question is whether they will spawn new opportunities for human creativity and innovation leading to more and better jobs or begin the end of post-industrial society. Will Neo win against Agent Smith?
The question comes down to how critical both individuals and groups are to societal, organizational, and managerial decision making. There are a number of uniquely human capabilities that come to bear in decision making. (1)
Individuals are especially capable of defining problems: figuring out what the questions are and asking them. Similarly, distinguishing between “good” and “bad” is a uniquely human capability. Unfortunately, as Robert Pirsig eloquently illustrated almost 50 years ago in Zen and the Art of Motorcycle Maintenance, this is not an easy process. Humans have also shown a marked ability to detect false positives: computers can sort, but humans make fine-tuned judgments much more effectively. Finally, developing novel combinations not anticipated by previous experience is a uniquely human contribution.
If you put groups of people together, the results are even more impressive. Groups outperform individuals because of the process labeled “social facilitation,” which operates in several ways. People are competitive: put individuals together in a group without any ability to interact, and the mere presence of others will increase individual performance. Individuals also benefit just from watching others perform the same task. The effect is magnified when group members are allowed to cooperate; they divide tasks into component parts, and the resulting division of labor yields even greater gains in the performance of the group.
This is not to say, however, that individuals and groups come to decision making situations without problematic baggage. Individuals can be slow to make decisions and, at times, extremely inefficient. Of course, the limits to the knowledge of any single individual are huge. Groups are likewise not without their problems. They can be guilty of premature closure, settling on a seemingly acceptable solution when a more comprehensive exploration of alternatives would yield a better one. Groups can also be too risky as well as too risk-averse, letting preconceived expectations of appropriate risk distort decision making. And, of course, seemingly rational individuals can be led to horribly poor decisions when caught in the throes of groupthink.
Thus, humans bring both advantages and disadvantages to the decision making process. Much of the focus of the psychology of organizations and the workplace has been on how to take the good with the bad by developing ways of managing individual and group decision making. Possibly, artificial intelligence offers not a way to replace human decision making but rather a way to facilitate it, keeping the good and eliminating the bad.
Two recent articles highlight the implications of this perspective.
Christopher Mims makes a strong case in the Wall Street Journal for artificial intelligence being “pretty stupid” without humans. (2) He humorously notes that Facebook’s visual recognition algorithm cannot distinguish between “your naked body and a nude by Titian,” requiring the input of thousands of content moderators to police postings. Facebook recently announced that it would add 10,000 content moderators to its current cadre of 10,000. Banks, according to Mims, employ teams of non-engineers to make their artificial intelligence systems function effectively. As he puts it, “…bank workers who previously read every email in search of fraud now make better use of their time investigating emails the AI flags as suspicious….”
This leads to an interesting article by Miranda Katz in Wired (3), “Welcome to the Era of the AI Coworker.” Katz follows the development of a number of jobs from translation to story editing that have been dramatically changed by the development of AI algorithms. Artificial intelligence did not eliminate these jobs; the work became more efficient and more sophisticated and opened up new applications that have actually increased employment in the field.
I am reminded of the situation at the start of FedEx. In the early 1980s, I was working as a consultant with FedEx. In retrospect, it is amazing that everyone at FedEx thought the demand for overnight package delivery was highly restricted. I vividly remember a conversation in which the top executives at FedEx wondered whether the demand for overnight packages could possibly exceed 100,000 packages a night. Today, FedEx and UPS average over 17 million packages a night. No one could anticipate the growth in demand because the uses for the service did not yet exist; the uses developed because the service was available. I suspect that this will be the case with artificial intelligence.
How does this relate back to the strengths and weaknesses of individual and group decision making? The key to developing “AI Co-Workers” will be identifying ways to minimize the shortcomings of individuals and groups and to accentuate their effectiveness. If individuals and groups have a problem with premature closure, an “AI Co-Worker” can guide them to develop more alternatives. If group dynamics tend to stifle some individuals’ contributions, an “AI Co-Worker” can give those individuals the opportunity to contribute in a less threatening situation. There is much research, including my own (4), showing that anonymous input systems, when added either to regular classrooms or to online ones, increase the participation of individuals who would normally shy away from raising their hands or volunteering comments. Learning management systems are becoming sophisticated at increasing and equalizing the participation of learners. Interestingly, we have also found that, while the participation of “under participators” goes up, that of “over participators” goes down. Could it be that individuals who dominate conversations can learn to listen? Not surprisingly, the satisfaction of the “under participators” goes up while that of the “over participators” goes down.
Possibly the most dramatic example of the synergy of artificial intelligence and humans comes from a classic application: developing a computer program to play chess. Deep Blue’s success in defeating the world chess champion, Garry Kasparov, made headlines 20 years ago. Similar attention went to the computer program that defeated a champion Go player just last year. A more interesting result, from my perspective, was the success a pair of amateur chess players had in winning a tournament whose competitors included both computer programs and chess masters. (5) These amateurs developed a process that allowed them to combine the power of several chess-playing programs with their own assessments of particular games, beating computers and chess masters alike. Obviously, artificial intelligence and social facilitation are a powerful combination: potentially, a perfect co-worker pairing. It should not be surprising that David Deming of Harvard recently found that jobs demanding sophisticated “soft skills” are growing at a faster pace than those demanding technical skills. (6)
The opportunities to develop artificial intelligence applications in higher education are unlimited. Whether we are looking for ways to improve the effectiveness of enrollment management, learning management, or scholarly research, these applications have the potential to improve both efficiency and effectiveness. It is unlikely, however, that the most effective strategies will involve artificial intelligence working alone. Artificial intelligence as a supplement to human and group decision making, or as a “front end” that allows human processes to become more sophisticated and, might I add, more interesting, is the far more likely outcome.
In the end, Neo defeated Agent Smith. I think…
1 Christopher Mims, “Without Humans, Artificial Intelligence Is Still Pretty Stupid,” Wall Street Journal, November 12, 2017.
2 Mims, November 12, 2017.
3 Miranda Katz, “Welcome to the Era of the AI Coworker,” Wired, November 15, 2017. https://www.wired.com/story/
4 Delone, W.R. and Alexander, E.R., “Technology Supported Anonymous Voting and Large Group Participation: An Experimental Study,” Journal of Information Technology Management, 1991, 25-31.
5 Chris Baraniuk, “The Cyborg Chess Players That Can’t Be Beaten,” BBC Future, December 4, 2015 (http://www.bbc.com/future/story/20151201-the-cyborg-chess-players-that-cant-be-beaten?ocid=ww.social.link.email).
6 David Deming, “The Growing Importance of Social Skills in the Labor Market,” https://scholar.harvard.edu/files/ddeming/files/deming_socialskills_may2017_final.pdf.