A first step toward algorithm-based HR decision-making was the introduction of electronic performance monitoring during the last decades of the twentieth century. Electronic performance monitoring includes, for example, automated tracking of work times as well as internet-, video-, audio-, and GPS-based observation of employees on the job (Stanton 2000). Alder and Ambrose (2005) estimated that this type of control affects between 20 and 40 million U.S. workers. Electronic performance monitoring is traditionally geared toward standardized jobs and explicitly, mostly overtly, targets job-related behavior, task performance, and compliance with company rules (Ball 2010). Yet current algorithm-based HR decision-making tools go far beyond the monitoring activities described in the electronic monitoring literature (Ananny 2016; Dourish 2016; Seaver 2017).
Hence, personal integrity is needed as a built-in compass: employees are required to hold themselves accountable to the standards they have set for themselves based on their individual convictions and values. Such self-regulation, however, presupposes autonomy and self-determination (Weibel 2007). Yet self-determination is jeopardized by algorithm-based HR decision-making tools through three avenues: (1) diminished opportunities for human sense-making, (2) a tendency to rely on technology in situations that call for reflexivity, and (3) a lack of moral imagination.
On the one hand, algorithm-based decision-making can enhance human sense-making because it can help make decisions more rational, more fact-driven, and more reliable. Descriptive algorithms, in particular, increase the amount of available information, usually without stipulating patterns of interpretation.
This trend toward prescriptive analytics puts pressure on individuals not to rely on their specifically human skills, such as critical reasoning, emotions, and intuitions, but instead to place all their trust in the supposedly neutral and superior decisions made by algorithms. It also challenges organizational sense-making processes and routines that, up until now, allowed individuals to maintain personal integrity by interacting with one another as equals and discussing their convictions and deeds in a non-hierarchical manner. The appreciation for such human encounters comes under siege when algorithms are marketed as infallible compared to volatile, emotional, and deficient human beings.
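To make the contrast between descriptive and prescriptive analytics concrete, consider a minimal, purely hypothetical sketch; the data fields, values, and the flagging rule below are invented for illustration and are not drawn from the literature cited here or from any actual HR tool. The descriptive step only summarises performance data and leaves interpretation to humans, whereas the prescriptive step itself recommends a course of action.

    # Hypothetical sketch of descriptive vs. prescriptive HR analytics.
    # All field names, values, and the flagging rule are invented for illustration.
    from statistics import mean

    employees = [
        {"name": "A", "tickets_closed": 42, "absence_days": 3},
        {"name": "B", "tickets_closed": 55, "absence_days": 1},
        {"name": "C", "tickets_closed": 31, "absence_days": 6},
    ]

    # Descriptive step: summarise the data; interpretation is left to humans.
    avg_closed = mean(e["tickets_closed"] for e in employees)
    print(f"Average tickets closed: {avg_closed:.1f}")

    # Prescriptive step: the tool itself suggests an action, e.g. flagging
    # "underperformers" for review - the kind of recommendation that, as argued
    # above, can crowd out human sense-making.
    flagged = [e["name"] for e in employees if e["tickets_closed"] < avg_closed]
    print("Suggested for performance review:", flagged)

Even in this toy example, the prescriptive step embeds a normative judgment (what counts as underperformance) inside a seemingly neutral calculation, which is precisely where human reflexivity is needed.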
Whether we like it or not, human action often leads to human error. Human errors frequently result from issues such as oversight, intrinsic decision biases, conflicting interpretations of information, and opportunistic behavior. All of these issues are seen as shortcomings to which algorithm-based HR decision-making is supposedly not prone. Algorithm-based decisions are often expected to be objective because they remove irrelevant sociocultural constraints from the equation (Parry et al. 2016). Therefore, in line with the worldview described in the last section, the U.S. technology community views human reasoning capacities as inferior to those of ever-improving machines. In fact, this has been a matter of public discussion in recent years, as several private initiatives and publications have addressed concerns over the singularity, i.e., the point at which machines become too smart for humankind to maintain control over its own fate (Bostrom 2014). But even if we strip this line of reasoning of its undertone of dystopian science fiction, it is difficult to deny that the assumption of machines being superior to human reasoning and moral convictions leads to an overly strong belief in rules and in the ability to produce predictable outcomes.
Human errors often trigger learning processes and may thereby enable individuals to find the right, value-consistent answer to complex problems. This learning process is an important part of personal identity and self-regulation, as it both enlarges individuals' action repertoires, giving them more options for expressing their self-determination, and enables personal growth, which is likewise linked to integrity (Ryan and Deci 2000). Hence, errors can trigger organizational learning processes that may actually strengthen integrity in the long run. Also, as already elaborated, machines are by no means bias-free. They may threaten integrity at the organizational level, as legal and moral accountability are difficult to determine in the complex interplay of humans and machines.
This lack of moral imagination is problematic because algorithms make decisions within defined parameters and under restrictions, following reductionist principles (Bhattacharya et al. 2010). They are thus unable to operationalize qualitative criteria and to think outside the box. Ethically challenging scenarios that require creativity, e.g., to resolve dilemmas, lie beyond what analytics tools can solve. This becomes problematic whenever prescriptive analytics software suggests a course of action, implying there is no alternative. In such cases, personal integrity is especially important for interventions that challenge the alleged superiority of the machine in order to perceive and correct errors.
Despite this rather grim outlook, formal control may yet be useful in supporting an integrity-based culture of trust within organizations, but only when certain conditions are met. In this context, Weibel (2007) as well as Weibel and Six (2013) emphasize the vital role of individual autonomy, which is expressed mainly in participatory decision-making processes. Furthermore, they stress the importance of honest, learning-oriented, and constructive feedback mechanisms, as well as a holistic appreciation of work performance. These factors will become increasingly important for maintaining a balance between compliance and integrity, as algorithm-based decision-making tends to overemphasize quantifiable targets and quantitative indicators (Parry et al. 2016). Thus, while algorithm-based decision-making promises to make good on Taylorist ambitions by removing the unpleasant messiness of human experience and conflict within organizations, it may lead to a data-driven, performance-oriented, and overly compliance-focused organizational culture in which there is little room for moral autonomy and integrity. This turns employees into mere bystanders of algorithmic decision-making. In the final section, we will make suggestions on how to lessen these detrimental effects on personal integrity.
Furthermore, moral imagination is particularly conducive to initiating a self-critical, reflexive process in organizations, because it helps anticipate the perspectives and moral concerns of third parties (Werhane 1998). Algorithms can cope with quantifiable phenomena, yet they still struggle to deal with qualitative questions and normative controversies (Bhattacharya et al. 2010). Moral imagination lends itself well to challenging this quantitative logic of algorithms, modifying the scripts of human behavior implied by algorithms (Verbeek 2006), and offering organizational members guidance on how to behave appropriately in specific situations (Vidaver-Cohen 1997). To this end, corporate actors can create spaces for discourse and reflection (Rasche and Esser 2007) that are not subordinate to the quantitative logic of algorithms. This endeavor will not be trivial, as the idea of algorithm-based leadership decision-making without a human-held veto already looms around the corner of the current debate (Parry et al. 2016). Nevertheless, organizations can cultivate and encourage both ethical awareness and moral imagination through leaders' role modeling and through communicative and regulative structures (such as codes of conduct, trainings, and policies).
To protect information and communication technology (ICT) infrastructure and resources against poor cyber hygiene behaviours, organisations commonly require internal users to confirm that they will abide by an ICT Code of Conduct. Before commencing enrolment, university students sign ICT policies; however, individuals can ignore or act contrary to these policies. This study aims to evaluate whether students can apply ICT Codes of Conduct and explores viable approaches for ensuring that students understand how to act ethically and in accordance with such codes.
Compliance with ICT Codes of Conduct by students is under-investigated. This study shows that code-based scenarios can measure understanding and suggests that targeted priming might offer a non-resource-intensive training approach.
To help motivate ICT compliance, students need more education about cybersecurity risks (Snyder, 2004), the content of ICT policies, the importance of intellectual property rights (Kruger, 2003), and the ethical foundations of ICT and cybersecurity policies (Formosa et al., 2021). However, designing and delivering such training requires significant organisational commitment and investment of resources, potentially to restructure curricula and to ensure that students from all disciplines receive relevant training. Yet the current absence of training related to ICT Codes of Conduct in many universities and organisations exposes them to cyber risks.
To understand how well students recognise cybersecurity issues and judge the risks associated with various ICT behaviours, Yan et al. (2018) conducted a scenario-based survey with 462 students from public universities in the north-eastern United States. They found that 12% of the 16 scenarios were judged incorrectly by the students, and that 23% of the students made judgements with less than 50% accuracy. The study suggests that students were the weakest link in the organisation regarding sound cybersecurity judgements. The survey concluded that accounting students require cyber education and knowledge to support good cyber practices (Yan et al., 2018).