Studies examining how artificial intelligence (AI) and automation will shape our lives – and especially our jobs – in the years and decades ahead have been piling up. The consensus is that the impact will be mixed: many jobs will be eliminated, many will be transformed, and many new ones will be created. But agreement on how many will be eliminated, transformed, or created remains elusive.
For example, in 2018, a World Economic Forum report projected that between that year and 2022, “75 million jobs may be displaced by a shift in the division of labor between humans and machines, while 133 million new roles may emerge that are more adapted to the new division of labor between humans, machines, and algorithms.” Not surprisingly, the International Federation of Robotics also concludes that the adoption of robots in sectors like auto manufacturing will result in net employment growth.
Likewise, studies by the McKinsey Global Institute and IZA, a German think tank, find that 14% and 35% of jobs, respectively, are vulnerable to automation. But a now-famous study by Carl Benedikt Frey and Michael Osborne of the University of Oxford estimates that almost half of US employment is at risk.
The differences between these findings reflect differing methodologies. But most share a deterministic view of technology. For the purpose of arriving at some headline-generating range of job displacement, it is assumed that any task or occupation that can be automated eventually will be.
WHO’S IN CHARGE HERE?
The inadequacies of the prevailing approach become clear when it is applied beyond developed Western countries. In Sub-Saharan Africa, for example, 50-80% of farmland is still cultivated manually, despite decades of technological advances in the agriculture sector. One reason is a lack of access: Whereas South Asia and Latin America each have ten tractors per 1,000 hectares, on average, Sub-Saharan Africa has fewer than two.
An equally important factor that receives little attention is the congeries of social forces that typically operate in the background of technological development, ultimately shaping how technologies are designed and deployed. In addition to political, regulatory, and cultural issues are the principal-agent relationships that one finds embedded in any organization, community, or society.
Simply put, the conditions under which decisions to introduce technology are made vary. For example, until recently, farming technology was capital intensive and thus tended to favor larger farms that could afford the high up-front costs. The difficulty of securing spare parts and accessing other post-sales services made these investments too risky for smaller players, and the fact that most African farm workers are women doubtless also played a role in limiting investment in smallholder farming technology.
TECHNICS AND CIVILIZATION
The literature on the social construction of technology shows that technology is designed and deployed under the influence of a wide array of internal and external social forces. Nothing is inevitable. The twentieth-century city planner Robert Moses famously installed low bridges to prevent buses – and thus poor New Yorkers – from reaching Long Island’s beaches. The standardized shape of the modern-day bicycle came about as a result of multiple competing social conceptions of what a bicycle’s primary use actually is.
Notwithstanding the variety of the contexts in which it is used, technology is generally seen as an instrument for enhancing human agency by expanding the range of possible choices. It allows humans to punch above their weight, or to move on to other tasks. But behind these dynamics are always social forces born of deeper struggles between classes, groups, and individuals. As such, any consideration of automation technologies and work is also a study of human agency and its social context.
Historically, technology has been the main vehicle for overcoming the natural constraints on human agency – the physical and mental limits to what we can do. But humans also face social constraints. Most human relations exist within a principal-agent framework, and it is here that specific technological applications can be contested.
In their seminal work on the theory of the firm, economists Michael C. Jensen and William H. Meckling noted that in any principal-agent relationship, one or more persons (the principals) engage another person (the agent) to perform some service on their behalf, thereby delegating some decision-making authority to the agent. Yet, because both parties are presumed to be “utility maximizers,” the principal will always worry that the agent is not acting in the principal’s best interests, and will often take steps to encourage or circumscribe the agent’s actions accordingly.
To illustrate this dynamic, consider the leverage that big data gives to large retailers over their suppliers, or the leverage it gives the big tech companies over their users. Here, the individual user is a principal with a structurally weak position vis-à-vis the agent, whereas the individual smallholder farmer in Africa is an agent in a structurally weak position vis-à-vis the principal.
But not all transactions are driven by financial and monetary value. Particularly in the social, cultural, charitable, health, and educational sectors, non-monetary values can shape transactions between individuals and organizations. Thus, to understand the choices made in a given context, one must understand the relevant value system.
For example, a co-op might choose a semi-automated solution over a fully automated one if doing so will keep its staff employed. In Belgium, Mariasteen, a company that offers metalworking, assembly, woodworking, groundskeeping, and enclave services, uses robots to help – rather than replace – staff members with disabilities. But one could also imagine a charity organization deciding to replace human drivers with drones in order to deliver more food to the needy. In this case, the overarching value is fighting hunger, not job creation.
FOLLOW THE MOTIVES
Taken together, principal-agent dynamics and the value systems underlying them can shed light on technological developments within a given social context. When it comes to automation and jobs, one can conceive of four possible principal-agent configurations.
In the first case, the principal’s interest is in substituting for human agency, perhaps because the cost of labor is too high, the quality is too low, or there simply are no workers to be found. Examples of this configuration in agriculture include the use of fruit-picking robots or drones to spray liquid pesticides, fertilizers, and herbicides. Automation technologies may also play an intermediation role between buyers and sellers, as with ride-hailing and delivery services. In Bangladesh, where only 2% of farmers own a two-axle tractor, most rely on harvesting-services companies. Because the transaction costs between small farmers and these service providers tend to be high, Uber-style platforms have emerged to automate the task of intermediation. In this way, technology allows small farmers to act as principals deciding which tasks to automate.
In a second configuration, the principal’s priority is to preserve human agency in the interest of cost-effectiveness, quality control, flexible resource allocation, client preference, political gain, or some other objective. Here, the obvious example is customer care at hotels and restaurants, or other forms of care work in education, health, and social services, where human empathy, judgment, and discretion command a premium.
Similarly, some Japanese manufacturers have found that automation limits their ability to adapt to changing circumstances, and thus prefer more flexible human agents. And, of course, there are cases where political factors (such as community, union, or local government pressure) will lead a business to decide that the savings from automating human tasks aren’t worth the backlash.
In a third configuration, the principal and the agent are essentially the same, as in the case of creative work, where the agent is also a principal through some form of shared ownership or alignment of interests. Many Italian businesses derive their comparative advantage from a long tradition of craftsmanship. If they were to replace their human workers with robots, they would lose this artisanal brand advantage.
In such cases, technology will be deployed only to the extent that it enhances human agency, such as by enabling human designers to produce even more refined products. That is what happened when traditional Swiss watchmakers adopted automation in the 1980s and 1990s: technology was used only to enhance traditional craftsmanship and expand manufacturing capacity.
In the fourth configuration, both principals and agents have an interest in machines being used to perform dirty, difficult, dangerous, or demeaning job functions. In Japanese and German automobile plants, automation has enabled an internal reshuffling of the workforce toward more desirable occupational roles. Tasks, not workers, are displaced, which is why auto unions historically favored mechanizing monotonous, unsafe tasks. At Ford, robots were introduced to conduct tedious oil-leak inspections. By the same token, it is safe to assume that machines will eventually replace humans altogether in dangerous tasks such as mining.
With this analytic framework, we can begin to see how dependent technological design and deployment are on socioeconomic factors. The extent to which principal and agent interests are aligned often determines the extent of automation. Although the economic (efficiency-maximizing) case for automation is often easy to make in abstract terms, the introduction of non-economic factors quickly confounds the argument.
These non-economic variables help to explain why so many predictions about labor-replacing technology run aground. In cases where principal-agent interests are both non-economic and aligned, automation decisions will be guided by considerations other than cost. But non-economic transactions can be misaligned too, as is often the case with government bureaucracies and providers of public services. In these settings, managers often use technology to limit staff discretion, and staff often use technology to carve out more time for socializing or personal pursuits.
These principal-agent dynamics can be summarized according to interest alignment and type. When principals and agents have aligned economic or non-economic interests, technology will likely be used to assist humans; when those interests are not aligned, technology will likely be used either to replace or to control humans. Crucially, these transactional configurations influence not only how technology is designed and deployed, but also which business models are used to rationalize investment in technology.
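This two-by-two summary can be sketched as a simple lookup. The function below is a hypothetical illustration of the framework, not a formal model from the text: interest alignment and interest type map to a likely use of technology.

```python
# Illustrative sketch of the principal-agent summary: alignment of
# interests and their type (economic vs. non-economic) map to a
# likely role for technology. Hypothetical labels, not a formal model.

def likely_technology_use(interests_aligned: bool, economic: bool) -> str:
    """Return the likely role of technology in a principal-agent relation."""
    if interests_aligned:
        # Aligned interests, economic or not: technology assists humans
        # (e.g., Mariasteen's robots, auto-plant task reshuffling).
        return "assist"
    # Misaligned interests: technology replaces humans (the economic
    # case, e.g., high labor costs) or controls them (the non-economic
    # case, e.g., limiting staff discretion in bureaucracies).
    return "replace" if economic else "control"

# Enumerate all four configurations.
for aligned in (True, False):
    for econ in (True, False):
        print(f"aligned={aligned}, economic={econ} -> "
              f"{likely_technology_use(aligned, econ)}")
```

The point of the sketch is only that alignment, not the economic character of the transaction, is what pushes technology toward assisting rather than replacing or controlling human agents.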
DO ANDROIDS DREAM OF ROBOT WORKERS?
Whether one conceives of the current moment in terms of “Industry 4.0” or the “Fourth Industrial Revolution,” the coming technological changes won’t be driven by a competition between humans and machines, but rather by competing social constituencies aiming to improve their positions along various value chains.
Hence, according to Frey, the Industrial Revolution happened first in Britain partly because political power resided with those who stood to gain from mechanization. In thinking about the twenty-first century, it follows that the extent to which labor-replacing technologies are adopted will depend on the material interests of those with political power.
In this context, the Italian economists Giovanni Dosi and Maria Enrica Virgillito foresee three plausible scenarios for automation today. The first is what they call the Blade Runner scenario, in reference to Ridley Scott’s 1982 science-fiction film (itself based on a 1968 Philip K. Dick novel), where a small “techno-feudal” oligarchy lords over the masses by dint of its exclusive control of essential technologies. In a second scenario, the “techno-feudals” share power with a largely ignorant rent-sharing class – an arrangement that resembles ongoing trends in some emerging markets today. And in the third scenario, all of the technology is owned collectively and deployed in the common interest.
Obviously, one could imagine additional arrangements. But the larger point is that the future of our technological society will depend largely on today’s principal-agent relations. Precisely who qualifies as a principal, and who as an agent, will always be a tricky question. But governments will have to grapple with it if they are to have any chance of shaping these configurations in the interest of the many rather than the few.
Nowhere is this difficulty more evident than in the debate over the legal designation of Uber and Lyft drivers. Are they agents or principals vis-à-vis these platforms? Somewhat forgotten in the debate is a third party without whom neither the companies nor the drivers would have any future at all: the passengers. It is in their hearts and minds that the battle ultimately may be decided.
In any case, governments will need to develop a better understanding of the modern economy’s evolving value chains and the transactions that define them. In an efficient and equitable economy, technology would be used to eliminate rent-seeking, monopoly, and oligarchy, while enhancing human agency and enabling the pursuit of economic and non-economic goals. Recognizing the centrality of these dynamics would represent a crucial step toward understanding where, how, and when automation technology will eliminate, transform, and create jobs in the years ahead.