Volume 71, Issue 4, pp. 1171–1204
INVITED LEAD ARTICLE
Open Access

Automation, Algorithms, and Beyond: Why Work Design Matters More Than Ever in a Digital World

Sharon K. Parker

Curtin University, Australia

Correspondence

Sharon K. Parker, Centre for Transformative Work Design, Future of Work Institute, Curtin University, Perth, Western Australia, 6000. Email: [email protected]

Gudela Grote

ETH Zürich, Switzerland

First published: 30 December 2019

Funding information: Australian Research Council, Grant/Award Number: FL160100033

Abstract

We propose a central role for work design in understanding the effects of digital technologies. We give examples of how new technologies can—depending on various factors—positively and negatively affect job resources (autonomy/control, skill use, job feedback, relational aspects) and job demands (e.g., performance monitoring), with consequences for employee well-being, safety, and performance. We identify four intervention strategies. First, work design choices need to be proactively considered during technology implementation, consistent with the sociotechnical systems principle of joint optimization. Second, human-centred design principles should be explicitly considered in the design and procurement of new technologies. Third, organizationally oriented intervention strategies need to be supported by macro-level policies. Fourth, there is a need to go beyond a focus on upskilling employees to help them adapt to technology change, to also focus on training employees, as well as other stakeholders, in work design and related topics. Finally, we identify directions for moving the field forward, including new research questions (e.g., job autonomy in the context of machine learning; understanding designers’ work design mindsets; investigating how job crafting applies to technology); a reorientation of methods (e.g., interdisciplinary, intervention studies); and steps for achieving practical impact.

INTRODUCTION

The acceleration in technological change, and the associated potential for radical societal change, has received a vast amount of attention in the media. On the one hand, the new technologies bring enormous opportunities for work and society. As described by Walsh and Strano (2018, p. xix), technology can replace “dull, dirty, and dangerous work”, such as drones being used to detect hazards. Technology can also enable better services, as in the example of Zume in the USA, in which pizzas are cooked in ovens on board a truck, timed so that each pizza is ready when the truck reaches its destination. Technology can make services so cheap they are transformative, such as the chatbot lawyer developed by a British student at Stanford University, which has successfully contested over 160,000 parking tickets in London and New York at no charge. Technology can also augment human performance, resulting in remarkable successes, such as the diagnosis of rare diseases or the performance of remote surgery.

Positive examples such as these are matched by much enthusiasm about the economic benefits that arise from embracing digitalization. For instance, digital goods and services are usually far cheaper to reproduce and can be distributed with economies of scale, removing the need for more expensive localized production (DeLong & Froomkin, 2000). Larger-scale redesigns of business models are also occurring as a result of digitalization, such as the possibility of using cross-location teams (Society for Human Resource Management, 2012), the replacement of hierarchies with flexible network structures (Zammuto, Griffith, Majchrzak, Dougherty, & Faraj, 2007), and more permeable boundaries in which people and work move freely within and across organizations (Boudreau, Jesuthasan, & Creelman, 2015, p. 11).

However, the technologies, and the work practices they enable, bring risks for work and workers as well. The most publicized risk is the erosion of the need for human workers at all. In their seminal study, Frey and Osborne (2017), focusing on the effects of AI (artificial intelligence) technology, predicted that 47 per cent of jobs in the USA are at risk of being eliminated through automation. Although these figures have been challenged by several follow-up studies (e.g., Arntz, Gregory, & Zierahn, 2016), there is considerable agreement that new technologies will significantly change the overall workforce structure (Brynjolfsson, Mitchell, & Rock, 2018; Danaher, 2016; Huang & Rust, 2018), with many commentators being especially concerned about the effects of digitalization on the less skilled workforce (Dellot & Wallace-Stephens, 2017).

Furthermore, there are challenges arising from the possibility that human work itself will change in quite radical ways. Amongst other criticisms of the Frey and Osborne (2017) research is that it disregards the fact that it is tasks that are automated, not usually whole jobs, and these tasks exist within a broader role alongside other tasks that will not be automated. For example, Brynjolfsson and colleagues (2018) reported from their analysis that most occupations in most industries have at least some tasks that could be replaced by AI, but there is at present no occupation in which all the tasks could be replaced. Such research suggests that the already existing trend for humans and digitalized machines/robots to work alongside, and depend on, each other will intensify, calling into question how tasks, jobs, work, and technology should be designed as a whole. Hence, rather than solely speculating about which jobs will vanish, research should address the urgent and prevalent question of how tasks might best be shared between humans and machines, and what the consequences of different choices in this respect might be. Technology-enabled changes in work, such as crowd working, also give rise to broader organizational questions about how work should be structured for positive individual and organizational outcomes.

In this article, we argue that insufficient attention is being given to how technology, and technology-enabled changes, actually alter tasks and work designs. We suggest that the existing, overly passive perspective focuses on how humans need to adapt to technology, rather than how work designs and technology might be adapted to better meet human competencies, needs, and values. Proactive efforts to shape work design, alongside human-centred technologies, are also likely to generate performance benefits, with plentiful evidence showing that technocentric change lacking consideration of social and organizational factors is more likely to fail (e.g., Clegg & Shepherd, 2007). We propose that work design theory is ideally positioned to reorient the current debates towards a more proactive stance on what work is desirable and how we can get there in the future.

Our goal in this paper, therefore, is to set out an agenda for better understanding how work design can be affected by new technologies, and in so doing, to suggest how to minimize the risks, and maximize the opportunities, of new technologies through effective design of both the technology and people’s work. In what follows, we briefly recap the key features of contemporary technologies. We then give examples of how work design is affected by technology directly (e.g., as a result of AI and digitalization) and indirectly via technology-enabled changes such as new business models (e.g., the gig economy, crowdsourcing). We then advocate various ways to move forward.

CHARACTERIZING THE CHANGE: IS THIS TIME ANY DIFFERENT?

The current digital era involves pervasive technologies that change not only how people do things but also how work is coordinated and controlled, with defining features such as the low cost per unit of additional output and the removal of time and location as boundaries (Cascio & Montealegre, 2016). The emerging technologies, and the changes they are assumed to foster, have been characterized in various ways. For example, the “fourth industrial revolution” (K. Schwab, 2017) highlights the core role of AI and in particular machine learning, which involves a shift of agency from humans to technology as technology becomes capable of self-directed learning. Ubiquitous computing refers to the way in which computer sensors and other devices are linked to objects, people, the physical environment, information, and other devices, with the aim, as stated by Weiser (1991, p. 94), that technologies “weave themselves into the fabric of everyday life until they are indistinguishable from it”. Wooldridge (2015, p. 29) referred to a “hyperconnected and data saturated” world, which is linked to the notion of “big” data, that is, the automatic collection of vast amounts of digital data as a consequence of technology becoming part of all work-related activities, as well as all other life domains. The availability of big data is core to many applications of AI because these data are the basis for self-learning systems (Nedelkoska & Quintini, 2018). Furthermore, new business models (e.g., the “platform economy”) have emerged that exploit more traditional information and communication technology (ICT) to offer services based on decentralized coordination between supply and demand (e.g., Airbnb, Uber). Altogether, the collective changes, referred to by Brougham and Haar (2018) as Smart Technology, Artificial Intelligence, Robotics, and Algorithms (STARA), are reshaping the information workers have access to (e.g., real-time data), where people work (e.g., co-working spaces), collaboration patterns (e.g., increasing interaction with robots), and, most fundamentally, people’s work designs.

As to what marks the current developments as “different” from the past, most authors point to the combination of big data and AI, which enables machines to substitute for humans in cognitive and higher-skill domains (Autor & Dorn, 2013; Brynjolfsson & McAfee, 2014; Frey & Osborne, 2017). Unlike in the past, complex cognitive tasks are increasingly being automated, with some warning that knowledge work and even management can be replaced by AI because the related tasks involve analytical and rational knowledge processing (Ferràs-Hernández, 2018; Loebbecke & Picot, 2015), as reflected in the notion of algorithmic management (Schildt, 2017). Many commentators expect that AI-based decision-making will increasingly replace human judgement (Brynjolfsson & McAfee, 2014). Because AI-based systems can directly interact with the environment and “learn on their own”, they gain unprecedented abilities, which reshape the interaction between technology and humans in ways that revolutionize our understanding of control and accountability in work organizations and beyond (Boos, Guenter, Grote, & Kinder, 2013), as we discuss shortly.

A CENTRAL ROLE FOR WORK DESIGN

Even with more agentic and automated technical systems, human work remains crucial. As discussed above, it is most likely that tasks will be automated, not whole jobs, such that much work will entail an intense interaction between humans and self-learning autonomous technology. For example, radiology-based medical diagnoses are increasingly automated via machine learning, but it remains (and likely will remain) the case that workers need to order an x-ray, set the machine up to x-ray the relevant body part, talk to the patient and their family, send a bill for the work, and so on. All of these tasks need to be closely coordinated with the radiology machine. Hence, the long-standing principle of joint optimization of the social and technical components of work systems is as important today as it was in the early days of sociotechnical system design (Clegg, 2000; Trist & Bamforth, 1951; Waterson et al., 2015).

It is essential to consider work design issues to come to grips with potential effects of digital technologies and associated changes, and to help steer technological development towards desired futures of work. In essence, we place work design at the heart of understanding and shaping new technologies because there is a sound body of knowledge on the relationship between work design and individual, team, and organizational outcomes (e.g., Morgeson & Humphrey, 2006; Parker, Morgeson & Johns, 2017a). Accordingly, we are able to evaluate the impact of technology in as much as it affects work design. For example, if technology deskills work, it is likely to reduce motivation-related and learning-related outcomes. Of course, effects of digital technologies and related changes on work design are not deterministic but depend on various factors including attributes of the technology itself, organizational attributes, and managerial choices about that technology (e.g., Coovert & Thompson, 2013), and it is crucial to consider these factors. For example, the same technology could have different effects on work design depending on whether, for example, a human-centred approach to technology development and deployment is adopted, the skill levels of current workers, the organizational strategy and design, and so on. Organizations can thus actively make choices to improve the effect of technology on work design, and hence on important outcomes. Moreover, human responses to technology can, over time, shape the way it is used and hence affect work design. For example, if individuals mistrust the technology, this changes how individuals use that technology and hence work outcomes.

In Table 1, we summarize examples of how technology can affect work design, both positively and negatively, according to five broad categories of work characteristics. These include, first, job autonomy and control; a fundamental aspect of work design that affects multiple outcomes (motivation, stress, learning, performance, for example). Second, we focus on what we summarize as skill variety and use, or work characteristics that capture the degree of interest, skill use, and variety in one’s work, with subsequent effects on challenge perceptions and intrinsic motivation. Third, we focus on job feedback and related work characteristics (e.g., role clarity, opportunities for practice) that support learning, skill maintenance, and effective job performance. Fourth, we consider social and relational aspects of work, such as social contact and social support, which are also important for motivation. These first four categories of work characteristics are all “job resources”, or those aspects of jobs that help achieve work goals, cope with job demands, or stimulate growth and learning (Bakker & Demerouti, 2007, p. 312). We also consider the effect of technology on job demands, or “those physical, psychological, social, or organizational aspects of the job that require sustained physical and/or psychological (cognitive and emotional) effort or skills and are therefore associated with certain physiological and/or psychological costs” (Bakker & Demerouti, 2007, p. 312). Collectively, these five categories of work characteristics capture the key aspects of work design from a range of theories, including the dominant Job Characteristics Model that focuses on motivating aspects of work (Hackman & Oldham, 1976) as well as other models (see Parker, Morgeson, & Johns, 2017a for a review).

TABLE 1. Possible Effects of AI, Robots, Algorithms, and Other Contemporary Technologies on Key Aspects of Work Design, and Example Moderators of These Effects

Job autonomy and control (decision-making as part of work processes)
  Example positive effects:
  • Localized decision-making as a result of wider distribution of information
  • Information from big data and machine learning to support decision-making
  • Internet-enabled knowledge for enhanced self-organization
  Example negative effects:
  • Being “out of the loop” as a result of automation, with the possibility of total loss of human control
  • Automated decision-making that replaces human judgement
  • Algorithmic management that reduces influence over decisions

Job autonomy and control (choice over where and when to work)
  Example positive effects:
  • Technology-enabled virtual/remote and other forms of flexible work
  • New technology-enabled business models that allow greater self-direction in work
  Example negative effects:
  • Expectations for constant connectivity that reduce control
  • Algorithmic management that pressures workers about when and how much to work

Skill variety and use
  Example positive effects:
  • Replacement of “dull, dangerous, and dirty” work
  • Replacement of routine cognitive tasks
  Example negative effects:
  • Increased standardization of tasks
  • Automation-caused decline in active use of skills with increased monitoring
  • Technology-enabled “micro-tasks” that lack meaning and interest

Job feedback and related
  Example positive effects:
  • Wearables and other technologies that increase customized feedback
  • Algorithmic management and provision of “objective” feedback
  • Devolution of information to lower levels of the organization via information technologies
  Example negative effects:
  • Automation that reduces feedback and impairs situational awareness
  • Reduced opportunities for learning as a result of automation
  • Algorithmically mediated feedback that is punitive, idiosyncratic, biased, etc.

Social and relational
  Example positive effects:
  • Information communication technologies that support social connections, especially if remote
  • Information communication technologies that enhance coordination and team working
  • Computers as “team mates”
  Example negative effects:
  • Technology-mediated communication that impairs connections and coordination, removes empathy, etc.
  • Excessively abstract data that reduces shared understanding
  • Changes to physical aspects of work that disrupt social connections

Job demands
  Example positive effects:
  • Increased cognitive demands due to automating simpler tasks
  • Reduced physical demands due to automation
  • Reduced work load due to labour-saving aspects of technology
  Example negative effects:
  • Variation in workload due to automation (e.g., from underload to rapid overload)
  • Increased administrative demands
  • Increased surveillance and performance monitoring demands

Potential moderators of effects (applicable to all work characteristics)
  Individual-level
  • Personality, ability, skill, education, technology self-efficacy, age, etc.
  • Under-acceptance of the technology (mistrust, under-use, etc.)
  • Over-acceptance of the technology (complacency, etc.)
  • Time and exposure to technology (e.g., people adapt over time)
  Technology-related characteristics
  • Type of technology
  • Type of technology in interaction with task type
  • Extent of human-factored design of technology
  • Strategic design focus: technology to replace or augment humans (e.g., left-over function allocation model)
  • Performance of the technological systems
  Team/organization-level conditions
  • Work methods and behaviour prior to technology
  • Level of routineness in work tasks
  • Operational uncertainty
  • Choices about work organization/management ideologies
  • Employee participation in technology design and implementation
  • Organizational strategy (e.g., cost versus innovation focus)
  Occupational
  • Education and skill requirements
  • Routineness of work
  Macro-level
  • Laws and regulations relating to privacy, mental health at work, etc.
  • National institutions and regimes, e.g., workers’ councils, unions

Effect of Technology on Job Autonomy and Control

Job autonomy (which we use interchangeably with the term ‘job control’) is one of the most important work characteristics because of its positive effects on multiple outcomes. First, from a stress perspective, autonomy allows individuals to actively manage demands in the environment (Karasek Jr, 1979). Second, following the original job characteristics model by Hackman and Oldham (1976), much research shows that having job autonomy enhances meaning and motivation at work which, in turn, reduces turnover and absenteeism, and fosters behaviours such as job performance, creativity, and proactivity (for a review, see Parker et al., 2017a). Third, from a sociotechnical systems perspective, high autonomy can support efficient decision-making because decisions about managing variances and disturbances in work processes are made locally, at the source of those variances and disturbances, rather than having to be referred to higher levels in the hierarchy (Emery, 1959; Grote, 2009; Wall, Cordery, & Clegg, 2002). For example, Wall, Corbett, Martin, Clegg, and Jackson (1990) showed that, when the implementation of a stand-alone advanced manufacturing technology (AMT) had an “operator control” model in which operators had the autonomy to deal with problems, system performance was better and operators had higher well-being, especially in systems that involve many breakdowns and other such variances, compared to a “specialist control” model in which operators’ roles were limited to monitoring the technology with problems handled by specialists. Local management of uncertainty helps to explain why self-managing teams or autonomous work groups (a team-level version of autonomy) have been found to enhance team performance (Cordery, Morrison, Wright, & Wall, 2010) and improve employee outcomes such as job satisfaction (Wright & Cordery, 1999).

Two broad types of autonomy and control have been identified. The first type concerns the work itself, that is, decision-making over work processes, including one’s influence over general decisions (decision-making autonomy), the opportunity to choose timing of work tasks (timing autonomy), and being able to choose one’s work methods (method autonomy). The second type of autonomy is about having influence over when and where one works, such as the notion of flexible working, sometimes referred to as boundary control. We discuss each in turn.

Decision-Making as Part of Work Processes

Many scholars have argued that technology enables decentralized decision-making, and hence greater job autonomy, in large part because of the wider distribution of information and the potential this offers for localized decision-making (e.g. Grote & Baitsch, 1991; Zuboff, 1988). Technology-enabled work practices, such as agile development teams (Tripp, Riemenschneider, & Thatcher, 2016), can also potentially increase job autonomy and hence employee satisfaction. In the context of manufacturing, some have argued that cyber-physical production systems (in which machines are connected to each other in a factory-wide information network) will require, more than ever, strategic human decision-makers (Waschull et al., 2020). More broadly, the broader distribution of knowledge that the internet enables, alongside the growth of increasingly accessible technologies such as 3D printing, has fostered the emergence of new types of self-organized, grass-roots communities, such as the Maker Movement and Fablab spaces.

On the other hand, new technologies and their associated work practices can undermine or interfere with human autonomy. We consider this issue in relation to automation, where most research has been done. The early focus of technology designers and implementers was to automate as much as possible, giving humans only the “left over” monitoring tasks that could not be performed by the technology. However, as Bainbridge (1983) noted in an article on the “ironies of automation”, this assumes that designers get everything correct (they don’t) and also that, by automating the complex tasks, only the “simple” ones are left. But in fact, “by taking away the easy parts of his task, automation can make the difficult parts of the human operator’s task more difficult”. In essence, humans are turned into “supervisory controllers” (Sheridan, 1987) of systems they are no longer able to fully understand, impeding adequate intervention when these systems fail. In what has been referred to as the “out of the loop” performance problem (Billings, 1991), operators of automated machines, compared to manual operators, become increasingly unable to detect system errors and perform manual tasks in the face of automation failures due to the loss of manual skills and loss of situational awareness of the system. For example, in aviation, the introduction of “autopilot” helped to increase flight safety up to the point when pilots were not sufficiently in the loop anymore, thereafter resulting in accidents and incidents due to “automation surprises” (Sarter, Woods, & Billings, 1997). In these instances, actions performed by the autopilot are unexpected or misunderstood, triggering inappropriate reactions by the pilots, often because the pilots have lost awareness of the particular flight mode the aircraft is in.

Since that time, and given that humans are held accountable for system functioning, “human-centred automation” approaches have been introduced to ensure technological systems are sufficiently transparent and predictable, with adequate means for workers to influence technical processes (e.g., Grote, Ryser, Wäfler, Windischer, & Weik, 2000; Young & Stanton, 2007). For example, there have been considerable efforts in the human factors field to develop principles and methods for designing automated systems that neither overload nor underload humans and that leave them sufficiently able to handle complex emergent situations, thereby remaining “in control” of the overall system. Such approaches are consistent with human control theories (Grote, 2009; J. D. Lee & See, 2004), which maintain that people should only be held accountable for work processes if they can understand, predict, and influence those processes.

However, with increasing levels of automation, and the growing impenetrability of the resulting systems, human control becomes more elusive. This has led some authors to argue for full automation rather than “human-centred automation” for very complex systems (Grote, Weyer, & Stanton, 2014). The idea here is that it might not make sense to maintain humans as accountable for systems that they can no longer understand and control. Further, when machines acquire knowledge and act on that knowledge fully autonomously, they increasingly become as unpredictable as humans (Vallor & Bekey, 2017). This implies that machines have to be granted an independent role in human-technology interaction which surpasses the responsibility of both the human interaction partners and the developers of these machines. This extreme situation has been discussed especially with respect to self-learning robots, where standards have been developed to try to capture ethical questions of fully autonomous technology (Grote et al., 2014). To date, control for human operators has also been called for because they are the ones that are held accountable if things go wrong. However, if control cannot be guaranteed anymore, some argue that accountability should also shift to system designers and organizations operating the technology rather than operators (Boos et al., 2013). For example, Itoh and Inagaki (2014) presented results of an experiment showing that a design option that allowed no opportunity for a human to intervene was safer than two design options giving humans the ultimate control. Wessel, Altendorf, Schreck, Canpolat, and Flemisch (2018), on the other hand, described possible ways to share accountability and control between drivers and automation.

Similar complexities around control arise as a result of algorithmic decision-making, in which machines learn from big data, often gathered from local sensors, to make decisions. For example, as well as the well-known application of algorithms to match passengers to Uber drivers, algorithms match patients to therapists and doctors, match subway workers to maintenance tasks, engage in hiring, evaluate call centre agents’ calls, identify who is likely to quit, and make bail decisions by predicting who is likely to commit crimes. Centralized algorithmic decision-making, rather than local decision-making, is argued to be more efficient and effective because decisions can be optimized for the whole system. Algorithmic decision-making is also often assumed to be more objective. For example, Brynjolfsson and McAfee (2014) argued that algorithmic decision-making can be better than HiPPO—the highest paid person’s opinion—because it removes bias and includes many more data points. However, the assumption that algorithmic decision-making is unbiased and of better quality than human decision-making is contested by other authors. As one example, in an analysis of the use of algorithms in financial decision-making, Bhidé (2010) described how local decisions about credit-worthiness made by lending officers have been replaced by decisions based on statistical models, yet the latter models can be inaccurate in volatile situations (indeed, Bhidé argued, the use of these algorithms contributed to the financial crisis). Bhidé argued that “predictions of human activity based on statistical patterns are dangerous when used as a substitute for careful case-by-case judgment”. Newell and Marabelli (2015) similarly argued that over-controlling decision-making systems might have short-term productivity benefits but, in the long term, undermine freedom, motivation, and innovation.
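To make the kind of centralized allocation described above concrete, the sketch below shows, in deliberately simplified form, how a scoring-based matching routine might assign workers to tasks. The features, weights, and greedy strategy are illustrative assumptions only; they do not describe any specific platform’s system.

```python
# Hypothetical sketch: greedy, score-based allocation of workers to tasks.
# The features and weights are invented for illustration; real systems are
# trained on large behavioural datasets and optimize system-wide objectives.

from dataclasses import dataclass

@dataclass
class Worker:
    name: str
    distance_km: float      # distance to the task location
    rating: float           # aggregated customer/peer rating (1-5)
    idle_minutes: float     # time since last assignment

def score(worker: Worker) -> float:
    """Higher score = more likely to be assigned (illustrative weights only)."""
    return (-0.5 * worker.distance_km) + (1.0 * worker.rating) + (0.05 * worker.idle_minutes)

def assign(task: str, available: list[Worker]) -> Worker:
    """Centralized allocation: the worker has no say in the outcome."""
    best = max(available, key=score)
    print(f"Task '{task}' assigned to {best.name}")
    return best

workers = [
    Worker("A", distance_km=2.0, rating=4.8, idle_minutes=30),
    Worker("B", distance_km=0.5, rating=4.2, idle_minutes=5),
]
assign("maintenance job 17", workers)
```

Even in this toy form, the work design implication is visible: the optimization criteria are set centrally, and the worker’s own judgement about which task to take plays no role, which is precisely the reduction of local decision-making discussed above.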

Algorithmic management, in which algorithms take on management tasks, is also on the rise (Lee, 2018). Increasingly, organizations are delegating tasks such as selection, task allocation, scheduling, and performance ratings to algorithms, with these decisions often being impossible for employees to influence, thereby reducing employee control (Kellogg et al., 2020). As Mohlmann and Zalmanson (2017) observed, whereas in the past technology was mostly used as a decision support tool, with managers or professionals making the final decisions, now decision-making is sometimes entirely carried out through algorithms, such that “algorithmic management leaves no time to discuss or revise decisions arising from special circumstances or not wholly captured by the data” (p. 5). As an example, Moore and Hayes (2018) reported how the use of an iPad to algorithmically track care workers’ activities (e.g., measuring tasks minute by minute, lateness of visits, etc.) reduced nurses’ autonomy, with nurses then having to stay longer in patients’ homes when it was not needed, yet having to cut short other visits when more time was needed. Some workers even returned to patients’ homes after formal working hours to provide additional care that they felt was needed, thereby restoring some control and meaning in their work.

Choices of Where and When to Work

Information and communication technologies (ICT) enable connection across geographic and even temporal boundaries, which can support telework and remote working. In theory, therefore, ICT results in more autonomy over when and where people work (i.e., flexible working patterns), with technology and improved internet access resulting in more organizations allowing workers to work away from the office and in virtual teams. Commentators argue that gig work, such as online piece work and Uber driving, provides workers with a choice about when and where to work, and the chance to achieve a better balance between work and other commitments (T. Johns & Gratton, 2013; Malone, 2004; Sundararajan, 2016).

However, the theoretical benefits of technology for employee autonomy do not always hold. Research shows that, when people work flexibly at home, this can also bring a demand for constant connectivity from coworkers in the office, with expectations of immediate responses and consequent home–work conflict (Hislop & Axtell, 2007). The net effect is a paradox in that workers are not able to achieve the balance that led them to opt for telework in the first place. Leonardi, Treem, and Jackson (2010) similarly described how teleworkers often experience reduced flexibility because of expectations for constant connectivity, leading the workers to respond by introducing personal strategies, such as “unplugging”. Uber driving, too, is promoted as flexible work in which drivers can “be their own boss”, but research identifies many “soft controls” over workers’ decisions, such as algorithmic determination of surge pricing, drivers having to accept ride requests without knowing destination or fare information, drivers being “deactivated” if they cancel unprofitable fares, and messaging and incentives that keep drivers driving at peak times (e.g., “Are you sure you want to go offline? Demand is very high in your area. Make more money, don’t stop now!”; Rosenblat & Stark, 2016, p. 3768). Mohlmann and Zalmanson (2017) go further and describe a power asymmetry between Uber and its drivers, with workers’ desires for autonomy clashing with the control features of algorithmic management, leading workers to resist the system, game the system, or switch to new systems.

Moderators

Altogether, it is clear that new technologies have mixed effects on work decision-making and on employees’ autonomy over when and where they work. One influencing factor is a skill-based schism in who captures the autonomy benefits of technology. Genuine flexibility likely applies more to highly skilled workers, such as professional freelancers who choose not to have a standard employment contract, and workers on secure contracts who can choose where and when they work (Spreitzer, Cameron, & Garrett, 2017). In contrast, for low-skilled/low-wage workers, who often have less bargaining power, flexibility can be more about the “organization’s flexibility”, with employees expected to be on call for unpredictable work hours. Job-insecure online piece workers, for instance, often have such low wages that they need to work extremely long hours, despite apparent “choice” over work hours. For such workers, flexibility often simply means uncertainty.

The observation that the effects of technology depend on skill accords with the skill-biased technical change perspective and the routine-biased technical change perspective, which propose that high-skilled jobs rather than low-skilled jobs (Autor, Levy, & Murnane, 2003), and jobs with non-routine rather than routine tasks (Autor et al., 2003), respectively, fare better with regard to technological change outcomes, in part because these workers do tasks that are not easily replaced by technology, and hence have more bargaining power and can demand better conditions, pay, and work designs (Goos, Manning, & Salomons, 2009; Kalleberg, 2011). In addition, consistent with the sociotechnical systems principle of controlling variances at the source, high-skilled and non-routine work is more effectively performed under autonomous work designs (Bresnahan, Brynjolfsson, & Hitt, 2002; Milgrom & Roberts, 1990), which increases the imperative for designing quality jobs. However, despite the greater bargaining power of high-skill employees with which to secure better work conditions, and despite evidence that high-skilled and non-routine work is more effectively performed under autonomous work designs, even in highly skilled or non-routine work, technology use does not always result in better work designs (Gough, Ballardie, & Brewer, 2014; Leverment, Ackers, & Preston, 1998). This shows there are other factors at play, including institutional regimes, management ideologies, pre-existing work practices, the particular choices made in a local setting, the degree of operational uncertainty, and more (Buchanan, Boddy, Black, MacDonald, & Trushell, 1983; Frenkel, Korczynski, Shire, & Tam, 1999; Parker, Van den Broeck, & Holman, 2017b; Slocum Jr & Sims, 1980; Wall, Clegg, & Kemp, 1987).

The type and configuration of the technology itself also influences its impact on autonomy. Bloom, Garicano, Sadun, and Van Reenen (2014), for example, showed that, at the firm level, information technology is a decentralizing force that is associated with greater worker and local manager autonomy, whereas communication technology such as data intranets is associated with centralization and reduced autonomy. Likewise, in an investigation of online piecework involving tasks with a very short cycle time, Lehdonvirta (2018) concluded that the degree of job autonomy depended on the particular platform and the choices made about the work organization. For example, in Mobileworks, tasks are assigned algorithmically, yet how much and when workers worked was highly flexible, whereas for Mturk, the low pay meant the workers were often working continually, with little flexibility, in order to achieve sufficient pay. As a final example, in a study of the use of algorithms for forecasting, when participants were given some control such that they could modify the algorithms, their sense of empowerment improved their trust in AI, their performance, and their satisfaction with the work (Dietvorst, Simmons, & Massey, 2016).
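The Dietvorst et al. (2016) finding suggests a concrete design lever: letting users adjust an algorithm’s output within bounds. The sketch below illustrates the idea in generic form; the ±10 per cent bound and the simple averaging forecast are assumptions for illustration, not the materials used in that study.

```python
# Hypothetical sketch: a bounded-adjustment wrapper around an algorithmic forecast.
# Allowing limited human modification is one way to give workers a degree of
# control over algorithmic decisions (cf. Dietvorst, Simmons, & Massey, 2016).

def algorithmic_forecast(history: list[float]) -> float:
    """Stand-in for a statistical/ML model: here just a simple average."""
    return sum(history) / len(history)

def adjusted_forecast(history: list[float], user_adjustment: float,
                      max_adjust_fraction: float = 0.10) -> float:
    """Let the user move the forecast, but only within +/- max_adjust_fraction."""
    base = algorithmic_forecast(history)
    cap = base * max_adjust_fraction
    bounded = max(-cap, min(cap, user_adjustment))
    return base + bounded

demand_history = [120.0, 135.0, 128.0]
print(adjusted_forecast(demand_history, user_adjustment=30.0))  # adjustment clipped to +10%
```

The design choice here is the size of the bound: a wider bound gives workers more genuine control, whereas a very narrow bound offers only a sense of influence while the algorithm still dominates the outcome.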

Effect of Technology on Skill Variety and Use

Work design theory and research recognizes that a well-designed job involves doing varied, meaningful tasks that make good use of people’s skills (Hackman & Oldham, 1980; Morgeson & Humphrey, 2006; Morrison, Cordery, Girardi, & Payne, 2005). Work characteristics that capture this focus on interesting work include task variety, skill variety, job complexity, job challenge, task identity (doing a set of tasks that make up a whole), task significance (doing work that feels worthwhile), and problem-solving demands. These work characteristics, whilst having nuanced differences, all predict intrinsic motivation, job satisfaction, and related outcomes (e.g., Humphrey, Nahrgang, & Morgeson, 2007). For simplicity, we refer to them henceforth as “skill variety and use”.

Positive effects on skill variety and use can be expected whenever technology takes over the “dull, dirty, and dangerous” tasks (Walsh & Strano, 2018, p. xix), as this can provide greater opportunity for individuals to engage in skilled and meaningful tasks. Many commentators predict the growth of highly skilled jobs with few algorithmic components, alongside a decline of jobs with lower skill requirements that are based on clear algorithms and can be automated (e.g., Dellot & Wallace-Stephens, 2017). There have been many calls for policies to “upskill” the workforce (albeit also with concerns raised regarding the future availability of jobs for less educated and/or able workers; a point we return to later). There is thus considerable discussion about the upgrading of work in terms of skill use and variety as a result of technology.

At the same time, it has also long been understood that technology, and the work practices it enables, can reduce people’s skill variety and use. For example, lean production, with its emphasis on specialization and “standardized” processes, can reduce production operators’ task variety (e.g. Delbridge, 2005; Parker, 2003), as can ICT such as enterprise resource planning systems (Venkatesh, Brown, & Bala, 2013). Automation has also in some cases meant a move from active use of skills to mostly passive monitoring jobs, such as in process control in chemical or nuclear power plants or in railway operations. The excessive level of vigilance required in such jobs creates problems for motivation and performance. As Bainbridge (1983, p. 776) noted: “it is impossible for even a highly motivated human being to maintain effective visual attention towards a source of information on which very little happens, for more than about half an hour”.

Most research on technology and skill consequences to date has been conducted in the aviation industry. There is well-documented evidence of the degradation of manual flying skills due to a lack of opportunity to practise in highly automated aircraft (e.g., Casner, Geven, Recker, & Schooler, 2014; Haslbeck & Hoermann, 2016; Wiener & Curry, 1980). In response, aviation regulatory authorities such as the UK's Civil Aviation Authority and the European Aviation Safety Agency have recommended preventative steps, such as mandating that pilots manually fly the plane a certain amount to maintain their skills, as well as extensive and recurrent simulator training. Similar fears about the problems of excessive monitoring tasks have been raised in regard to other technologies, including autonomous driving. Reducing the driver to the role of monitor, with the associated problems of maintaining attention, makes it exceptionally difficult for people to take over control of the vehicle if problems arise (Stanton, 2019).

Technology also has enabled changes in business models that potentially affect employees’ skill use, such as breaking down jobs into “microwork” that can then be sourced via ICT platforms (Lehdonvirta & Ernkvist, 2011) and paid on a piece-rate basis (Lehdonvirta, 2018). Kittur et al. (2013) argued that some sorts of crowd work result in “gigs” that echo past poor jobs, not only in terms of extremely low pay and poor work conditions, but also poor work designs because variety, skill use, and meaning of work is reduced (e.g., Amazon’s Mechanical Turk). More broadly, with digitalization, cognitive tasks are increasingly being replaced, which can be expected to lead to even broader skill erosion.

Importantly, as with job autonomy, research has also long shown that the effects of technology on skill use can vary depending on various factors, such as how the technology is implemented and used. In a seminal study by Barley (1986), the introduction of CT scanners led to the empowerment of radiological technologists in one hospital, with a high level of engagement in decision-making, whereas the radiological technologists had lower decision-making influence in another hospital where, instead, radiologists retained the decision-making power. Similarly, in an ethnographic case study of the introduction of a Da Vinci robot in surgery, where the main surgeon operates the robot from a console away from the patient, Sergeeva, Huysman, and Faraj (2015, p. 5) showed how new divisions of labour emerged. Initially, scrub nurses resisted the change “because we only were allowed to make the instrument tables ready … [and] at the end of the surgery we could clean up the whole mess. And we did not really have a role.” As a result of an improvised and almost accidental work redesign, the nurses’ role was then enriched to the point where they took on more skilled roles (that is, manipulating the instruments inside the patient’s body) than even prior to automation, highlighting how the effects of technology on skill are not deterministic and are shaped by work organization choices.

Effect of Technology on Job Feedback and Related Work Characteristics

Here we focus on feedback from the job and related work characteristics. We distinguish these characteristics by their particular importance in fostering mastery of one’s job. Job feedback, one of Hackman and Oldham’s (1976) original work characteristics, promotes “knowledge of results”, which in turn enhances motivation. Irrespective of motivational effects, job feedback is important for ensuring effective performance because it enables and supports learning. For example, Leach, Jackson, and Wall (2001) showed that, in the context of operating complex technology, a previously unsuccessful empowerment initiative was made successful through the introduction of a feedback intervention that enhanced operator knowledge. Related aspects of work design that support mastery, beyond those already discussed in earlier sections, include being clear about one’s role (role clarity) and doing the whole set of tasks within a job (task identity).

Technology can enable job feedback and thereby foster mastery and learning. At a personal level, the feedback arising from the use of wearables and devices can improve individual learning and productivity, such as a call centre agent receiving feedback on their empathic tone when talking to customers. At a team and organizational level, information technology and the widespread use of data also means that information can be devolved much more easily to all employees, with the result that employees have more knowledge for decision-making and can better understand how their tasks fit into the bigger picture.

However, there is also a potential for technological change to reduce feedback, disrupt mastery, and, in the long term, result in skill loss. A concern about impaired mastery through lack of feedback, and the resulting decline in a person’s situation awareness, has long been raised in the field of human factors in regard to work designs involving considerable levels of passive monitoring. Norman (1990, p. 585) observed that: “the problem is that the operations under normal operating conditions are performed appropriately, but there is inadequate feedback and interaction with the humans who must control the overall conduct of the task. When the situations exceed the capabilities of the automatic equipment, then the inadequate feedback leads to difficulties for human controllers.” Improving feedback can help. For instance, in Airbus aircraft, the plane is controlled by sidesticks that do not give tactile feedback, whereas Boeing aircraft still have yokes to steer the plane, which give tactile feedback and, because the yokes of both pilots are connected, also provide visual feedback to each pilot of the other pilot’s actions.

As an example of impaired feedback and learning in the health sector, Beane (2018) showed how robotic technology can reduce the opportunities for trainees to engage in challenging tasks “at the edge of their competence” which, combined with impoverished feedback, impaired learning. Traditionally, in surgery, to learn to deal with complex and dynamic problems, surgeons acquire expertise through informal, legitimized, on-the-job learning, by “being there with old hands”, with the work getting increasingly complex as their skills develop. However, in this study, robotic technologies—designed for efficiency—created a finer-grained division of work. The result was that the attending physicians (the senior, more experienced surgeons) did more of what they were best at, whilst restricting residents’ (juniors’) roles to more routine tasks, thereby radically reducing their time for learning. In addition, even when residents took over the robot console, they were more closely supervised and, with their actions magnified by up to ten times on the screen, often received more intense and critical feedback. The net effect in this case was that trainee surgeons completed their residency with the legal and legitimate authority to perform robotic procedures, but not the knowledge and skill to do so. In a similar vein, Dominiczak and Khansa (2018, p. 369) argued in relation to the use of automation in intensive care that “rather than completely relying on automation, health care professionals should make automation work for them by ensuring that the joint human-agent system is ‘mutually predictable’, ‘mutually directable’ and maintains ‘common ground’”. Adequate feedback is crucial for this process. More generally, Newell and Marabelli (2015) observed that skill preservation strategies recommended in the case of pilots (such as extensive simulation training) are less likely to be applied to other skills (such as driving), which means that, both at work and more generally as a society, we may become overly dependent on technology and no longer able to compensate for malfunctioning technology because of skill loss.

Job feedback is also affected by systems in which workers’ performance is automatically tracked and evaluated. For example, for Uber workers, the following (and more) are automatically tracked: driver whereabouts, work versus idle time, and feedback from passengers (Mohlmann & Zalmanson, 2017). Such feedback is essential for algorithmic systems to work, providing a form of quality control in the absence of human supervisors. However, this algorithmically mediated feedback can also be problematic. For example, the information can be used punitively, such as when drivers are automatically penalized for an apparent lack of compliance. Peer and customer reviews can have a powerful effect on the reputation of digital workers (Yoganarasimhan, 2013), yet at the same time be subjective and idiosyncratic (Orlikowski & Scott, 2015), causing worker distress and frustration.
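As a purely illustrative sketch of how such algorithmically mediated feedback can become punitive by design, consider a rule that automatically flags workers for “deactivation” once a rolling average rating dips below a fixed threshold. The threshold, window size, and field names below are assumptions for illustration, not documented parameters of any platform.

```python
# Hypothetical sketch: automatic quality control based on customer ratings.
# A fixed threshold applied to a rolling average converts subjective,
# idiosyncratic reviews directly into sanctions, with no human review step.

from collections import deque

class DriverRecord:
    def __init__(self, window: int = 20, deactivation_threshold: float = 4.6):
        self.ratings = deque(maxlen=window)   # only the most recent ratings count
        self.threshold = deactivation_threshold

    def add_rating(self, stars: float) -> str:
        self.ratings.append(stars)
        avg = sum(self.ratings) / len(self.ratings)
        if len(self.ratings) == self.ratings.maxlen and avg < self.threshold:
            return "flagged_for_deactivation"   # sanction triggered automatically
        return "ok"

record = DriverRecord()
for stars in [5, 5, 4, 3, 5] * 4:       # mostly good ratings, a few poor ones
    status = record.add_rating(stars)
print(status)
```

The point of the sketch is not the specific numbers but the work design consequence: feedback arrives as a verdict rather than as information the worker can discuss, contextualize, or learn from.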

Again, these examples show that technology can have very different effects on job feedback and associated work characteristics, and hence on workers’ mastery and learning, depending on its design and implementation. Interestingly, one of the early studies of automation (Zuboff, 1988) pointed out that, especially with respect to feedback, there are two strategies: “automate” and “informate”. Whereas “automate” focuses on the automation of operations, with the main aim of machines replacing human effort and skill, “informate” strategies deliberately use the information generated by the automated processes to provide feedback to workers, who are then empowered to make complex decisions. Thus, whereas an automate strategy for technology is likely to result in lower quality jobs, an informate strategy will likely result in higher quality jobs because technology augments human capabilities rather than replacing them.

Effect of Technology on Social and Relational Aspects of Work

Consistent with wider theoretical recognition of human needs for social connection (Deci & Ryan, 2004), work design theory highlights the importance of relational work characteristics, such as social contact, social support, interdependence, and contact with beneficiaries (e.g., Grant & Parker, 2009). Meta-analyses (e.g., Humphrey et al., 2007) show that social aspects affect job satisfaction, commitment, and other affective outcomes, and in-depth studies show the power of social work characteristics for human performance (e.g., Grant, 2008; Parker, Johnson, Collins, & Nguyen, 2013).

As with other types of work characteristics, the effects of digitalization and technology on relational aspects of work are varied. Focusing on information communication technologies (ICT), for instance, communication mediated through technology has fewer temporal or spatial constraints, which means it can support social connections and help to build social networks. Thus, studies document how social media can buffer against loneliness for remote workers or homeworkers (Hislop et al., 2015) and can facilitate the development of shared understanding about coworkers (Neeley & Leonardi, 2018). Technologies can also enhance coordination and enable stronger connections across workers engaging in distributed work (e.g., Kellogg, Orlikowski, & Yates, 2006; Nardi, Kuchinsky, Whittaker, Leichner, & Schwarz, 1995).

But, on the other hand, the opportunities afforded by technology for virtual working can hinder relational experiences. For example, virtual workers can have difficulties in establishing bonds and social contact (e.g., Kiesler & Cummings, 2002), and often experience problems with coordination when having to interact through information technologies (Cramton & Webber, 2005; Cummings & Turner, 2009; Hinds & Mortensen, 2005; Mortensen & Neeley, 2012; O'Leary & Mortensen, 2010). Mohlmann and Zalmanson (2017) observed that in some online platforms, almost all communication is mediated through the platform, such as through email or chatbots, with little opportunity for interaction with either supervisors or peers, and little chance for human support. In the words of an Uber driver, “you email everything” when things go wrong, and “if something goes wrong with your app, you just have to wing it” (p. 8).

More broadly, roles are changed in diverse ways by digital technology. Barley (2015), for example, described how internet-based car sales changed the interactions between sales people on the showroom floor and their customers, resulting in sales people using new “scripts” to sell cars. Social connection and coordination practices are also likely to be affected by the increasing use of large amounts of abstract data, which, as Beane and Orlikowski (2015, p. 1571) argued, can create “significant challenges to effective comprehension and coordination”. A related point is that socially oriented constructs such as trust, which are essential to effective coordination in complex settings, increasingly need to be applied not just to humans but also to technology and data. In other words, when people operate through digital interfaces, they need to be able to trust the objects they work with, including the data (Bailey, Leonardi, & Barley, 2012).

As with other aspects of work design, how technology affects relational aspects of work design is dependent on a range of factors. One important factor is time, and how the effects on work can shift with greater experience with the technology, in combination with agentic human action. Thus, Leonardi et al. (2010) reported that, although teleworkers worried about forming relationships, this worry dissipated with more time in the role as workers realized they were able to engage effectively with others. Indeed, Leonardi et al. found that many workers ended up distancing themselves from the constant connectivity in an effort to create clearer home–work boundaries.

The design and set up of the technology itself also strongly shapes its impact on social aspects. For example, online platforms can be designed in very different ways, with different consequences for relational work design. In Lehdonvirta’s (2018) research (described above), the social aspects associated with each platform were also very different, despite similar online piecework tasks. In Cloudfactory, for example, workers were assigned to teams of five, and each had a team leader. They were explicitly encouraged to share tips with other team members electronically, and teams were also ranked against each other to encourage inter-team competition. In contrast, such a team structure was lacking for Mturk workers, although some workers participated in externally organized online communities and private chat channels. How technology affects work design, and hence outcomes like performance, also depends on its appropriate use, given the tasks. For example, in a study of geographically dispersed teams, Malhotra and Majchrzak (2014) found that for teams with non-routine tasks, using ICT to boost task awareness was important, whereas for teams with tasks involving cross-disciplinary members, ICT that boosted “presence awareness” (e.g., a sense of shared context) was important for performance.

A further influential factor is how people coordinate before the technology is introduced. For example, Beane and Orlikowski (2015) investigated the effect on coordination of a robot (a form of mobile teleconferencing) in which an attending physician could remotely see the intensive care ward to interact with residents (trainees) and nurses, rather than—with the previous systems—the physician relying on calling in via a land-line telephone in a fixed position with a resident present but no nurses. The introduction of the robot affected coordination both positively and negatively, depending on how the work was being carried out before the introduction of the robots. For those residents who engaged in “skimming” (that is, preparing just on the basis of records, with little conversation with either patients or nurses), their input became increasingly irrelevant with the robot because the physician and the nurse talked more directly to each other. But for those residents who fully prepared for rounds through interacting with the nurses, the robot enhanced coordination, allowing for richer interactions with patients, nurses, the resident, and the physician. This example shows that, again, technology has no predetermined effect on relational work design.

Effect of Technology on Job Demands

We discussed earlier that cognitive demands can change as a result of new technologies. For example, sometimes automation results in more stimulating work (when low-skill components of jobs get automated) and sometimes it results in less stimulating work (when workers become “stop gaps” for tasks that are difficult to automate). The latter strategy can, ironically, create more mentally stressful jobs because of the need for sustained vigilance, which research shows is highly fatiguing (e.g., Bainbridge, 1983). Much research in the cognitive domain is conducted from a microergonomic perspective, which aims at specifying design requirements for system interfaces. One example of such research concerns alarms in process control: alarms are meant to reduce cognitive load, but too many alarms increase cognitive load, which can be counteracted by automatic filtering of alarms (Papadopoulos & McDermid, 2001). The difficulties associated with defining appropriate filter algorithms are increasingly being addressed through machine learning (P. Schwab et al., 2018). As our focus is on broader work design, we do not further discuss ergonomic system design research, and refer the reader to Ritter, Baxter, and Churchill (2014).
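As a minimal illustration of the kind of alarm filtering referred to above, the sketch below suppresses repeated alarms from the same source within a short time window. The window length and alarm structure are assumptions for illustration; real filtering schemes, including the machine-learning approaches cited, are considerably more sophisticated.

```python
# Hypothetical sketch: suppress duplicate alarms from the same source within a
# time window, so that operators see one salient alarm rather than a flood.

from dataclasses import dataclass, field

@dataclass
class AlarmFilter:
    window_seconds: float = 60.0
    _last_shown: dict = field(default_factory=dict)   # source -> timestamp of last displayed alarm

    def should_display(self, source: str, timestamp: float) -> bool:
        last = self._last_shown.get(source)
        if last is not None and (timestamp - last) < self.window_seconds:
            return False                  # duplicate within window: suppress
        self._last_shown[source] = timestamp
        return True

f = AlarmFilter()
events = [("pump_3_pressure", 0.0), ("pump_3_pressure", 12.0), ("pump_3_pressure", 75.0)]
for source, t in events:
    print(source, t, "-> display" if f.should_display(source, t) else "-> suppressed")
```

Even this toy filter shows the work design trade-off: suppressing alarms reduces cognitive load, but an over-aggressive filter can hide information operators need to stay “in the loop”.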

Physical demands can also change with technology as heavy manual work is replaced by automation. However, more computer-related work often means prolonged sitting in front of a screen, with associated increases in musculoskeletal disorders. Changes in the physical space can, in turn, alter psychosocial demands. Robots, for example, can reshape the physical space, and hence the ways people work, with unintended implications. In a study of the introduction of robots for dispensing medications, the layout of the pharmacy was changed, which affected the visibility of assistants’ work and thereby reinforced their low status (Barrett, Oborn, Orlikowski, & Yates, 2012). Likewise, Jones (2014) showed how the introduction of a computer-based clinical information system in a hospital critical care unit created visual barriers between patients and nurses, making it harder for nurses to engage in what they referred to as “proper” nursing care.

The introduction of various time-saving electronic systems often, perversely, increases employee workload demands (a situation many academics are all too familiar with). For example, when the human resource element of an enterprise resource planning system was introduced within a multinational manufacturing company, all employees were expected to administer their own travel, accommodation, leave, personal data, overtime claims, and so on. Although the technology was supposed to simplify this work, the tasks were often demanding and incompatible with employees’ professional identities (Shepherd, 2006). As another example, the introduction of aspects of an electronic care records system in the UK’s National Health Service resulted in multiple system failures, including increased staff workload due to “bureaucratic, intrusive and unworkable” processes (Greenhalgh, 2010, p. 5) and the need for constant workarounds. Challenger, Clegg, and Shepherd (2013) attribute the problems of this and similar technological systems to their technocentric, “one-size-fits-all” design and implementation and to neglect of the user perspective, a point we return to shortly.

One of the most important demands affected by the growth of sensors, big data, and algorithmic management technology, implicated above in our discussions of autonomy and feedback, is what has been referred to as surveillance demands or electronic performance monitoring. Indeed, some argue that it is this capacity of technology that threatens to make work even more deskilled than in the time of the industrial revolution: “What is new is the availability and inclusion of a range of unprecedented technologies that can be used to measure, track, analyse and perform work in ways hardly imagined (in the past)… New tracking and monitoring technologies allow management to control work at ever more intensified levels” (Akhtar & Moore, 2016, p. 102). Rusli (2013), for instance, described the “digital secretary” that constantly collects and examines all digital work produced by workers (e.g., emails, calls, etc.), and then, if deemed necessary, sends workers reminders as to what they should be working on. There are many other examples of such invasive technologies being used to control employee performance, with rather obvious consequences for employee morale and job satisfaction, and mixed effects on performance. Indeed, algorithmic management goes hand in hand with close tracking of performance because algorithms depend on gathering detailed data about worker behaviour for their effectiveness.

Drawing on the now rather large literature on electronic performance monitoring, Tomczak, Lanzo, and Aguinis (2018) made several recommendations for mitigating its negative effects, such as using it only for learning and development rather than deterrence, and recognizing that it is less likely to be suitable in more autonomous and complex work. Similarly, Stanko and Beckman (2015) studied how the US Navy sometimes excessively uses “situational controls” (such as pop-up boxes triggered by particular emails) to control people’s attention. The resulting extreme levels of tracking mean that people become disconnected and feel disempowered. The authors also, however, observed some cases of insufficient managerial control, resulting in people being distracted and in security breaches. These authors therefore advocated the “artful” and balanced use of situational controls to manage information communication technology-based work.

CONCLUSIONS AND INTERVENTION STRATEGIES

We draw several key conclusions from our analysis, summarized in Figure 1, and elaborated next.

FIGURE 1. Work design as key to achieving benefits of digital technologies

First, work design is a valuable perspective for understanding the effects of new technology such as AI, robots, and automation. As depicted by the blue shaded path in Figure 1, the positive effect of high quality work design on a range of important outcomes for individuals and organizations is very well established. At the same time, we have shown that technology has the potential to affect key aspects of work design, such as the level of control/autonomy, whether people use their skills, the quality of feedback people receive, social and relational aspects of work, and job demands such as work load and performance monitoring. Importantly, we showed that technology does not affect just single work characteristics; it can affect multiple work characteristics at the same time, as when the introduction of robots in surgery simultaneously changed the opportunity for varied and challenging work, the type and level of feedback, and the level of job control, with consequences for multiple outcomes such as engagement, learning, and job strain. A key implication, therefore, is that the more we can map out how, in what domains, and why technology affects work design, the more insight we will gain into how to optimize technology’s benefits.

A second clear conclusion is that there is no predetermined effect of technology on work design. This non-deterministic impact of technology is already well known from past research investigating technologies such as Advanced Manufacturing Technology or enterprise resource planning systems, but it is also clear from the more contemporary examples given above. Figure 1 depicts how the particular effects of technology on work design depend on: the technology per se (e.g., different types of ICT), various higher-level factors (e.g., the level of skill in the occupation; institutional regimes; management ideologies), individual factors (skills, attitudes, personalities, behaviours), and the inter-relationships among these elements. Altogether, a complex array of forces shapes the impact of technology on work. Although this complexity is challenging to unpack, it shows that technology certainly affects work quality, yet what that effect will be cannot be assumed. In the words of Kranzberg (1986, p. 545), “technology is neither good nor bad; nor is it neutral”.

A third conclusion is that many of the myriad factors that shape the impact of technology on work design reflect “choices” about work roles during technological change. In other words, when technology is introduced, there are different potential work design options, and these should be, yet most often are not, actively considered by implementers. We saw this in the case above in which (albeit almost by accident) new role allocations were given to nurses after existing work designs proved ineffective with robotic surgery, with significant positive consequences of these reconfigured roles, not only for employees’ work meaning but also for surgical effectiveness. These work roles could have been proactively considered up front, consciously designed, and monitored as the technology was implemented. Considering actual work practices and human roles alongside technology when designing systems is the very essence of sociotechnical systems theory developed in the 1950s, a perspective that has had some support via the development of tools and methods that enable the design and implementation of more human-centred forms of technology (e.g., Clegg, 2000; Grote et al., 2000; Waterson et al., 2015). Like the sociotechnical systems approach, and the recent and similar socio-digital systems perspective (Howaldt, Kopp, & Pot, 2012), we call for more proactive perspectives in which work design issues are actively considered alongside individual, technology, and higher-level factors (Parker et al., 2017b), as we depict in Intervention Strategy A (Figure 1; the multi-coloured shading depicts the need to simultaneously consider all of the influencing factors and how they inter-relate).

But applications of sociotechnical system thinking have been criticized for favouring adaptations of work organization to technology, rather than changing technology to suit people and work, thereby giving technology a privileged role as an independent rather than dependent variable (Leonardi, 2012). A fourth conclusion, therefore, is that we need to put even more emphasis on proactively shaping not just the way technology is implemented and the roles around it, but also the design of the technology per se, in order to maximize its positive consequences. To date, with the possible exception of the military and defence sectors (Challenger et al., 2013), most technological design takes human needs and capabilities properly into account only after severe accidents have happened. Current examples include autonomous cars, where, following recent problems, manufacturers such as Tesla have become more cautious in promising fully autonomous driving, and the redesigned Boeing 737, where pilots had no opportunity to override malfunctioning software. We have witnessed similar neglect of social and human issues in the design of ICT systems such as Facebook or Google, most visibly around privacy. In essence, the social, human, legal, and ethical aspects of technology development are not keeping pace, and these considerations should be made up front. Davis, Challenger, Jayewardene, and Clegg (2014) and Grote (2014) posed a similar challenge, arguing that socio-technical thinking should be used in a more predictive capacity than has hitherto been the focus, with expertise being applied to the design of new systems. We depict this need to explicitly consider human-centred design principles in the development, design, and procurement of new technology in Figure 1 (Intervention Strategy B).

Fifth, given our status as organizational psychologists, our analysis focused most on factors that are within the control of organizations. But we also acknowledge the crucial role of both more “macro” forces and more “micro” forces, and their associated implications for intervention. With respect to macro forces, it is no coincidence that technology most often benefits employers over employees, given the relative power of employers in social and economic systems. This situation means that higher-level policies and regulations are needed to help ensure safe, healthy, and meaningful work designs, such as policies around technology, precarious work, monitoring, and the like. Various agencies have put in place principles for robotics, from Asimov in 1942 (such as the principle that a “robot must obey humans except when the human might be harmed”), to the UK EPSRC/AHRC’s 2011 principles for robotics (for example, “humans, not robots, are responsible agents. Robots are tools designed to achieve human goals”), to the Partnership on AI (a non-profit organization supported by Google, IBM, Amazon, Facebook, Microsoft, academics, and non-government organizations, with a broad mission to benefit people and society through AI). Europe’s General Data Protection Regulation, concerned with how data are collected and stored, has important potential implications for algorithmic decision-making, albeit we are yet to fully understand these (Goodman & Flaxman, 2017). More specific to work and organizational psychology, other large policy-oriented initiatives seek to rebalance the focus from technological innovation to workplace innovation, such as the European Workplace Innovation Network (EUWIN), established in 2012 to simultaneously improve productivity and well-being (see Oeij, Rus, & Pot, 2017). These sorts of larger-scale and policy-oriented interventions (Intervention Strategy C in Figure 1) are an important complement to the more organizationally oriented sociotechnical design intervention strategies identified above.

Finally, with respect to individual-level forces, much attention in the media, policy statements, and consulting reports has focused on building the education and skill level of employees, and fostering their adaptivity, so that workers can cope with the new technology and remain employed. Indeed, this focus on skill development and the cultivation of life-long learning appears to be the dominant intervention strategy recommended in almost all analyses about future work (depicted in Figure 1, Intervention Strategy D). As an example, in regard to algorithms, Faraj, Pachidi, and Sayegh (2018, p. 63) argued “What will matter is the capacity of contemporary workers to adapt their ways of knowing and working and embrace novel technologies, with augmentative effects”. We concur with the value of upskilling the workforce, but also observe that this strategy currently overwhelms any discussion as to how the work and technology itself can be changed to adapt to humans (Crawford & Calo, 2016), that is, Intervention Strategies A and B.

To help reorient the discussion towards adapting technology to better suit humans, we need to better educate and train key stakeholders about work design. This includes employees and managers, as well as those involved in new system procurement, design, and implementation. For instance, whilst design thinking training is now rampant in business schools, little attention is given to work design, or even organizational design, in most programmes. Work design is most often a small component of one lecture in a typical MBA organizational behaviour module, and is largely neglected as a topic in executive education. Managers and management consultants receive little education on this topic, and we are quite confident the same is true of information systems graduates, engineers, operations managers, and others. Recent evidence shows that enriched work design does not “come naturally” to managers and other professionals (Parker, Andrei, & Van den Broeck, 2019). To the extent that work design is known about, it is usually as a motivational or stress-reduction approach, with relatively little understanding of the learning and performance benefits of well-designed work. Thus, multiple forms of education about work design are needed: not only the inclusion of work design topics in undergraduate and graduate training programmes for these professionals, but also the creation of user-friendly materials, tools, and cases, such as powerful examples of performance failure when human elements and work design are neglected. There is a need to educate and influence system experts and designers, as well as system “influencers” (Dul et al., 2012) such as government, media, regulators, and general citizens, to proactively build, procure, and support human-centred forms of technology. Grote (2014) has argued that the management of uncertainty can be an entry point for discussions of socio-technical design principles, such as control of variances at their source, because everyone acknowledges the need to handle uncertainties well in any kind of organization or larger system.

Last, but not least, there is value in educating employees themselves about work design. In this article, we have recognized how the impact of technology on work design depends on how technology is used, enacted, and contested by users and other stakeholders (Leonardi & Barley, 2010). In a similar vein, review papers (e.g., Wang, Demerouti, & Bakker, 2016) and meta-analyses (e.g., Rudolph, Katz, Lavigne, & Zacher, 2017) show the positive effects of job crafting, or the self-initiated actions employees take to shape, mould, and change their jobs (Tims & Bakker, 2010; Wrzesniewski & Dutton, 2001). Educating workers about work design and job crafting is therefore an important element of Intervention Strategy D (Figure 1).

MOVING FORWARD AS I/O PSYCHOLOGY RESEARCHERS AND PRACTITIONERS

To help ensure the best possible human outcomes of unprecedented technological opportunities, we propose directions for our field with respect to the research questions we ask, the approaches we use, and our practical focus.

Expanded Research Questions

Work design is a key lens for understanding the effects of technology. Nevertheless, we recommend some developments to this literature. For each of the recommended topics below, example research questions are shown in Table 2.
  • Continued focus on job autonomy. We expect that attention to job autonomy as a crucial work design variable will become even more important because the roles of humans and autonomous, self-learning technology will have to be renegotiated. Adhering to the human-in-the-loop principle, which is core to human-centred automation, will become more challenging due to the rapid decline of transparency and predictability in ever more complex, self-learning technical systems. Because these prerequisites for human control can no longer be guaranteed, not even for the developers of the technology in the case of machine learning based on deep neural networks, job autonomy takes on a new meaning, with technology becoming an increasingly equal partner that is as opaque to humans as humans are to one another (Boos et al., 2013).
  • Renewed attention to other work characteristics, including their interaction. While we would argue that job autonomy most likely will undergo the most profound transformation, all work characteristics should be scrutinized for both likely and (un)desired changes due to technological development (see Table 2 for examples).
  • More attention to antecedents of work design, including how multiple factors interact with technology to shape work design. Much less attention has been given to the antecedents of work design relative to its outcomes, and the literature that exists comes from multiple disciplines. Parker et al. (2017b) recently summarized this literature and identified multi-level factors (e.g., institutional regimes, national culture, organizational design, technology, local leadership, individual job crafting) as operating together in complex ways to shape the design of work. Here, we have discussed how technology interacts with various individual, team, and organizational variables to shape work characteristics, but there is scope to go much further in unpacking this complexity. For example, despite considerable research on job crafting, little attention has been given to how people might craft the impact of new technologies. Interestingly, when the notion of job crafting is applied to technology, there is overlap with the sociomaterial perspective, which proposes that social and technological aspects are not separate but are entangled and emerge together (e.g., Orlikowski, 2007), although these literatures do not speak to each other.
  • Bring back and further develop sociotechnical thinking in our research. Not only in applied psychology, but also in management research more broadly, technology receives little attention (Orlikowski & Scott, 2008). We need to include technology as an important factor in our research designs, not just by documenting technological aspects as part of the context (see, for example, Johns, 2006), but by going further and actively considering human and technological systems hand in hand (Waterson et al., 2015).
TABLE 2. Expanded Research Questions Regarding Work Design and Digital Technologies
Continued focus on job autonomy given new challenges
  • What design principles apply if there is a fundamental redistribution of control and accountability (as, for instance, discussed in the context of autonomous driving, and whether self-driving cars should still have a steering wheel)?
  • How do we build trust in automation as system transparency and human control decreases (Lyons, Clark, Wagner, & Schuelke, 2017; Miller et al., 2016)?
  • What are the effects of algorithmic decision-making for job autonomy and workers’ use of their knowledge and skills, and what are the resulting consequences for motivation?
  • How will uncertainty be taken into account in automatic decision-making algorithms, a concept that has been neglected in most mainstream work design research (for exceptions see, e.g., Cordery et al., 2010; Wall et al., 2002)? Will uncertainty be designed out of the work system, or considered an unavoidable or (in view of innovation and learning) even desirable characteristic of work systems for which workers and technology need to be prepared (Grote, 2009, 2015)?
  • What new forms of technology-enabled control might emerge, such as Stanko and Beckman’s (2015) argument that managers increasingly need to control people’s attention rather than, as per the traditional focus, their physical presence at work?
Renewed attention to other work characteristics, including their interaction
  • How will skills be preserved as automation extends into new domains such as law and medicine? What work characteristics and work designs will help maintain human skills?
  • Will we observe more variation in work load in routine versus non-routine situations as a result of automation, such as in jobs that require long periods of machine vigilance punctuated by quick, challenging responses during breakdowns?
  • How might traditional notions of task interdependence, which refer to relations among workers and their tasks, need to be expanded to consider interdependence amongst technologies (see, for example, Bailey, Leonardi, & Chong, 2010)?
  • What are the effects of configurations or profiles of work characteristics (e.g., Parker et al., 2017a), and are these different from the additive effects of single characteristics?
More attention to antecedents of work design, including how multiple factors interact with technology to shape work design
  • What are the specific work design choices made by individual stakeholders in the system? In other words, considering work design as a behaviour (see Parker et al., 2019), what are the design mindsets and behaviours of different stakeholders involved in the design, procurement, and implementation of new technology (e.g., what are the mindsets and behaviours of engineers, information technologists, operations managers, managers, management consultants, architects, and interior designers)?
  • What factors affect different stakeholders’ work design mindsets and behaviours (e.g., their personal values, job experiences, gender, disciplinary background, culture, etc.)? What interventions can help to change work design mindsets and behaviours?
  • Do countries with on average better work designs also have more human-centred approaches to technological design?
  • What is the role of individual job crafting within the context of technology? For example, how do people craft ways to obtain control over automated systems, or how do they actively bridge gaps between interdependent technologies?
Bring back and further develop sociotechnical thinking in our research
  • How do we design and implement contemporary technologies to accommodate the needs and capabilities of humans and to promote high job quality as well as effective and safe work processes? What methods and tools are best?
  • What design criteria are appropriate when evaluating work systems, especially in relation to (mis)matches between control and accountability?
  • How do we influence stakeholders (designers, venture capitalists, entrepreneurs, engineers, managers) to proactively consider human and organizational requirements in the design and implementation of new technology?
  • How can we learn from the military and defence sector, where sociotechnical systems approaches are more advanced?
  • What are the key dimensions of the different technologies (AI, robots, etc.), and how do these affect work characteristics? (See, for example, Hertel, Stone, Johnson, & Passmore, 2017, who specified key elements of internet-based work so as to more readily articulate its work and psychological consequences, and Kirkman and Mathieu, 2005, who did the same in relation to virtual team work.)
  • Can design thinking ideas be applied to help foster sociotechnical development, and especially attention to work design? To date, most applications of design thinking have focused on the user or customer experience rather than the employee experience, although Gruber, De Leon, George, and Thompson (2015) see potential in this respect. How can this potential be harnessed?
  • How can existing sociotechnical approaches be scaled up for use in early stages of major technological innovation, where vague knowledge of possible users and application domains exists at best?

Reoriented Research Approaches

Addressing some of the above topics and questions (see Table 2) will require adaptation to the type of research that is routinely conducted in our field, as proposed next.
  1. More interdisciplinary research. We recommend working with researchers from other disciplines to tackle some of the big issues. It has been argued that more impactful research tends to be done by interdisciplinary teams, because more complex problems can be tackled by reaching beyond the boundaries of a single (sub)discipline and combining disciplinary approaches in new ways. Work and organizational psychologists arguably need to expand beyond their home base of individual-centred research on work and integrate knowledge from human factors research on human-technology interaction (e.g., Dul et al., 2012); from design thinking, with its focus on user participation and experience (e.g., Gruber, De Leon, George, & Thompson, 2015); from sociology-informed information systems research (e.g., Leonardi & Barley, 2010), which addresses wider social and organizational consequences of technology use; and from labour economics, with its emphasis on firm-level and labour market outcomes (e.g., Autor & Salomons, 2018). Such interdisciplinary research will help to build the foundations for a renewal of the socio-technical systems approach, meeting earlier criticisms of a one-sided adaptation of the social subsystem to the technical subsystem (Leonardi, 2012). Interestingly, strategy researchers (Markus & Loebbecke, 2013) echo this plea for interdisciplinary research in their own discipline, arguing that the consequences of big data and algorithms are so far reaching that cross-disciplinary collaborations are required.
  2. Detailed studies of work in context. We recommend that to fully understand how work design and technology interrelate with each other, as well as how they interact with both higher-level forces (e.g., laws, management choices) and individual-level action, there is value in getting “closer to the work”. In 1983, Perrow counselled the need for more attention to studying the work people do when using digitized control systems, but this call has rarely been heeded (Vallas & Beck, 1996; Zuboff, 1988), which led Barley and Kunda (2001) to write about the necessity to “bring work back in” to organizational studies. These sorts of studies involve close, detailed observations and analyses of work being carried out, which is likely to be especially useful for understanding the complex interactions between work design, technology, individuals, and other factors. In a similar vein, Markus (2017) concluded that the debate concerning algorithmic versus human intelligence “calls for careful investigations of organizations’ evolving support versus replace decisions and the multilevel sociotechnical conditions and stakeholders influencing algorithm design, use, and consequences”.
  3. More intervention research. Especially in human factors research, a multitude of design methods have been developed to support more integral approaches to sociotechnical system design (e.g., Clegg, Ravden, Corbett, & Johnson, 1989; Grote et al., 2000; Waterson, Older Gray, & Clegg, 2002). However, the success of these methods in truly making an impact on how technology is designed and implemented has been mixed at best, with several scholars reflecting on the reasons for the disappointing outcomes (see, for example, Dul et al., 2012). At the same time, there have been broader discussions bemoaning the lack of impact of organizational psychology/behaviour research (e.g., Latham, 2019), with a consequent call for more intervention research. We endorse this call, and argue the need for studies in which work and organizational psychologists work alongside managers, designers, and others to actively redesign work during the implementation of new technologies. Realistically, it will often be impossible to uphold what is assumed to be the “gold standard” of intervention research designs (field experiments). We thus see the value of in-depth case-study style articles (an example is Beane, 2018), in which how people work with technology is closely observed, ideally coupled with longitudinal tracking of change over time. Because of the need to simultaneously consider technical, individual, group, organizational, and societal issues, convergent research will help to build the deep knowledge required to design work that makes considered use of expanding technological capabilities. In this research, it will be necessary to show effects of work and organization design choices on system levels, such as firms or even industries, which is the hallmark of economic research, explaining its tremendous impact on policy-making.
  4. Increase incentives for different types of research. To encourage interdisciplinary research, closer examination of work, and intervention studies, research incentives are important. On the funding side, we see trends that might help, such as the escalating emphasis on research having an impact beyond academia (e.g., Australia has recently introduced “impact” as a criterion in its ranking system). Nevertheless, there is room to go further, such as funding schemes that require collaboration between the technical and social sciences. There is also the publishing side to contend with: current academic promotion and tenure systems can be brutal in their emphasis on “top tier” articles, which makes some of the research we have advocated extremely risky. Other scholars have critiqued journal review and editorial processes (e.g., Latham, 2019), as well as the increasingly precarious nature of academic work, amongst other institutional challenges (Bal et al., 2019). The pressures working against impactful science are too big a topic to explore in any depth here, but we at least hope to see publishing in multidisciplinary journals being positively evaluated in tenure and promotion decisions.

Agenda for Action

Earlier we argued for different types of intervention. With their intimate knowledge of work design and conditions that foster individuals’ and teams’ abilities, motivation, and opportunities to perform effectively and stay well in work organizations, work and organizational psychologists should be at the forefront of shaping the future of work via these interventions. We call for the following:
  1. Strengthen a design focus in work and organizational psychology education: Over the last decades, education in work and organizational psychology has increasingly favoured topics from organizational psychology and organizational behaviour, such as leadership and teamwork, over topics from work psychology, foremost work design and human–technology interaction. This trend needs to be reversed in order to prepare psychologists for an active role in the design of technology and work. In some organizational psychology programmes that we are aware of, for example, work design receives less than half a day of discussion. Moreover, we suspect too much attention is given to understanding different design choices (e.g., effects of particular job characteristics), with insufficient attention given to training psychologists to help organizations make better work design choices. Those doing the design work, as shown by the current trend in “design thinking”, are often management consultants, architects, and the like, professions that bring little of the systems-oriented and organizational thinking possessed by work and organizational psychologists.
  2. Reach out to policy-makers to help shape the wider agenda: We discussed earlier higher-level policies being introduced in regard to new technologies. The success of these policies so far, however, is questionable, given continued wide-scale system failures, privacy violations, and the application of all sorts of untested and questionable technologies in organizations. As noted by Domingos (2015) in the book “The Master Algorithm”, “people worry that computers will get too smart and take over the world, but the real problem is that they’re too stupid and they’ve already taken over the world.” Likewise, Oeij et al. (2017) observed that there is still far more financial investment in technological and business-model innovation than in workplace innovation. There is thus much scope for work and organizational psychologists to join the push for better policy.

CONCLUSION

New technologies can make work designs better and can make them worse, with flow-on effects for employee health, well-being, engagement, and performance. This situation has been well understood for some time, with scholars since the 1980s having identified how technology design and implementation choices, as well as other factors, shape the ultimate impact of technology on work and employees. Nevertheless, it seems that, even in the face of many technological failures, few lessons have been learnt, with technocentric perspectives still dominating. There is little evidence that Clegg’s (2000) plea at the start of this century, that systems should be designed to meet the needs of the organization and its employees rather than simply to keep up to date with new technologies, has been heeded. But what makes this deficiency of even greater concern than hitherto is that the relationship between humans and AI is fundamentally different from the relationships between humans and technology in the past, as both humans and technology now have agency. Humans and technology must now function as an interdependent team of equals. It is this change in technology/human relations that we, and others, see as particularly significant and distinct from previous technological developments. Now, perhaps more than ever before, there is a need for a revitalized focus on the joint, and proactive, consideration of technology, people, and organizations to create work that is both healthy and productive.