Abstract
Humans are, and long have been, the weakest link in the cybersecurity chain (e.g., [1,2,3]). Not all systems are adequately protected, and even for those that are, individuals can still fall prey to cyber-attack attempts (e.g., phishing, malware, ransomware) that occasionally break through, and/or engage in other cyber risky behaviors (e.g., not adequately securing devices) that put even the most secure systems at risk. Such susceptibility can be due to one or a number of factors, including individual differences, environmental factors, maladaptive behaviors, and influence techniques. This is particularly concerning at an organizational level, where the costs of a successful cyber-attack can be colossal (e.g., financial, safety, reputational). Cyber criminals intent on infiltrating organization accounts/networks to inflict damage, steal data, and/or make financial gains will continue to try to exploit these human vulnerabilities unless we are able to act fast and do something about them. Is there any hope for human resistance? We argue that technological solutions rooted in software and hardware alone will not win this battle. The 'human' element of any digital system is as important to its enduring security posture as its technical elements. More research is needed to better understand human cybersecurity vulnerabilities within organizations. This will inform the development of methods (including those rooted in HCI) to discourage cyber risky decisions and behaviors and encourage cyber safe ones: to fight back, showing how humans, with the right support, can be the best line of cybersecurity defense.
In this paper, we assert that to achieve the greatest positive impact from such research efforts, more human-centric cybersecurity research needs to be conducted by expert teams embedded within industrial organizations driving the research forward. This is not an issue that can be addressed through laboratory-based research alone. Industrial organizations need to move towards more holistic – human- and systems-centric – cybersecurity research and solutions that will create safer and more secure employees and organizations, working in harmony to better defend against cyber-attack attempts. One such example is the Airbus Accelerator in Human-Centric Cyber Security (H2CS), which is discussed as a case study within the current paper.
1 Why Are Humans Regarded as the Weakest Link in Cybersecurity?
1.1 Cybersecurity Incidents with Humans as a Cause
Cyber-attack attempts targeted at individuals are proliferating and increasing in sophistication, and many are designed to gain access to accounts and systems within the organizations those individuals work for. In 2018, over 53,000 financially motivated cybersecurity incidents were reported across 65 countries [4]. During the same year, more than 990 million records were exposed due to human error [5], and phishing email rates increased by 250%, representing slightly more than one out of every 200 emails received by users [6].
Login details and passwords were stolen from Sony Pictures in 2014, allowing fraudsters to hack in; a key cause was employees clicking on fake links [7]. The Pentagon network breach in 2015 was in part caused by employees being lured by links within malevolent emails masquerading as genuine communications. Staff failing to apply urgent updates and to scan websites regularly was cited as a key cause of the personal data of 157,000 TalkTalk customers being stolen in 2015. There are many other infamous examples, including breaches at LinkedIn [8], Marriott [9], Equifax [10], and Yahoo [11]. These are just some high-profile cases of human vulnerabilities being exploited by cyber criminals intent on infiltrating organizations to inflict damage, steal data, and/or make financial gains.
The extent of the problem paints a picture where, at first blush, cyber-attack methods targeted at humans seem to have a much higher than acceptable chance of success, with potentially colossal implications for employees and the organizations they work for. This is often despite positive steps taken by organizations, such as cybersecurity training and other awareness-based interventions aimed at informing and, in some cases, educating employees to be more cyber safe. However, these alone do not seem to be the solution to mitigating human susceptibility to scams and other malicious attempts to gain access to organization accounts and systems [12]. We suggest that a better understanding of underlying human vulnerabilities at the individual level is needed so that bespoke (not 'one-size-fits-all') interventions can be developed. It is especially important to develop, test (and, where needed, refine and retest), and implement these within organizational settings, with the employees who stand to benefit from them the most.
1.2 Why Are Humans Seemingly so Vulnerable to Cyber-Attack Attempts?
To understand why humans are, on occasion, vulnerable to cyber-attack attempts at work, we need to consider a range of factors that go beyond risk and risky decision making. These include cognitive factors such as awareness, perception, understanding, and knowledge; environmental (including organizational) factors such as security culture; and factors likely to increase maladaptive behaviors, such as work pressures and stresses (e.g., time constraints, high workload).
Some of these factors can be grouped under perception of security risk. Examples include: level of information and cybersecurity knowledge; psychological ownership of work devices; threat appraisal factors; and experience of a previous cyber breach. The lower a person scores on such dimensions, the higher the risk they present to the cybersecurity integrity of the organization; incorrect or suboptimal perceptions can negatively influence cyber decisions and behavior [12,13,14,15]. Others are security culture and awareness factors, related to attitudes that are formed within and about workplaces [16]. Attitudes are multidimensional and influenced by, for example, the actions and behaviors of others (e.g., 'we all do it so therefore it is okay', 'no-one does anything about it so therefore it must be fine'), ability (e.g., technical expertise or the lack of it, knowledge of social factors), and motivation (e.g., job satisfaction, desire to work within and/or excel in the same company). The more a person adheres to an organizational culture that does not engender cyber-safe actions and behaviors, and the less motivated they are about their job and/or role in the organization, the more likely they are to engage in unsafe cyber behaviors.
Heuristics and biases in decision making are also likely to increase human vulnerability to cyber-attack attempts, and people are far from immune to them at work. Examples include: relying on information that comes to mind easily (e.g., checking email sender details) and missing other potentially important information (e.g., a suspicious hyperlink) (availability heuristic); making decisions based on the way information is framed (e.g., 'the system is 95% safe' – positive wording – versus 'the system is 5% unsafe' – negative wording) (framing effect); continuing to invest in something that is unlikely to succeed in order to avoid failure or blame (sunk-cost effect); and making emotional decisions based upon fear, threat, or panic (e.g., 'we could lose the contract if I don't respond immediately') (affect heuristic). Another bias, very much rooted in human interaction with interfaces and computer-mediated communications, is the tendency to adopt a trusting (truth-default) rather than suspicious stance when interacting with communications (e.g., [17, 18]). This is related to more automatic, heuristic processing in which influence cues within communications (e.g., urgency, compliance with authority, avoidance of loss: see [19]) are less likely to be noticed and processed than more obvious cues to malevolence, such as authenticity cues (e.g., an accurate email address). This is a major parameter of the Suspicion, Cognition, Automaticity Model (SCAM) [18], and a key challenge is to find ways to encourage humans interacting with computers at work to take a less trusting stance and process information at a deeper level, using more cognitively intensive strategies.
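To make this concrete, the short Python sketch below shows how influence cues of the kind catalogued by Williams et al. [19] (urgency, authority, loss) might be surfaced programmatically in an email. The keyword lists and function are entirely hypothetical, illustrative of the cue categories only; this is not a detector developed or validated in the research described here.

```python
# Illustrative only: hypothetical influence-cue keyword lists based on the
# cue categories discussed above (urgency, compliance with authority,
# avoidance of loss). A real tool would need validated cue taxonomies.
import re

INFLUENCE_CUES = {
    "urgency": [r"\burgent\b", r"\bimmediately\b", r"\bwithin 24 hours\b"],
    "authority": [r"\bchief executive\b", r"\bIT department\b", r"\bcompliance\b"],
    "loss": [r"\bsuspended\b", r"\blose\b", r"\bpenalty\b"],
}

def flag_influence_cues(email_body: str) -> dict[str, list[str]]:
    """Return the influence-cue categories (and matched patterns) found."""
    found: dict[str, list[str]] = {}
    for category, patterns in INFLUENCE_CUES.items():
        hits = [p for p in patterns if re.search(p, email_body, re.IGNORECASE)]
        if hits:
            found[category] = hits
    return found

demo = "URGENT: reply immediately or your account will be suspended."
print(flag_influence_cues(demo))  # flags 'urgency' and 'loss' cues
```

Surfacing such cues to the user, rather than leaving them to automatic processing, is one way an interface could prompt the deeper, more suspicious processing that SCAM suggests is protective.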
Other factors can also increase human susceptibility to cyber-attack attempts and/or, in some cases, exacerbate the effects of other vulnerabilities such as reliance on heuristics and biases in decision making. For example, individual differences [19, 20] such as a high propensity to trust, low self-control/high impulsivity, low self-awareness, high risk taking, high self-deception, low expertise, and a high need for affiliation can increase the likelihood of unsafe cyber behaviors. Williams, Beardmore, and Joinson [19] also stress the important role of individual contextual factors such as cognitive overload, financial need, and fatigue, as well as more deep-rooted organizational factors such as hierarchical structures and individualistic and relational cultural values.
Even where vulnerability to many or all of the above factors is low, maladaptive cybersecurity behaviors can occur as a result of things that are largely out of one's control: for example, when working under pressure, under high cognitive load (e.g., performing a complex task and/or needing to switch between tasks), under stress, or in conditions where performance on a demanding task is interrupted [20]. Such workload-related factors can reduce the ability to detect potential cues to malevolence, lead to cyber risky behaviors, and in some cases exacerbate the likelihood of falling prey to other vulnerabilities such as cognitive biases. We know that time pressure can have negative effects on task performance, and it is argued (despite little research evidence to date) that human cybersecurity tasks are not an exception [21]. Williams, Beardmore and Joinson [19] posit that when operating under high workload conditions (e.g., due to high cognitive load, time pressure, and so on), even when suspicion is aroused, people may feel they do not have the resources (e.g., time) to deal with it in a cyber safe manner, potentially disregarding or ignoring possible risks in order to achieve what seems to be the most important goal, such as meeting a deadline.
Taken together, it is of little surprise that many humans can and do fall foul of unsafe cybersecurity behaviours, in workplaces as well as elsewhere (e.g., at home). Many cyber criminals are aware of at least some of these factors, and can and will exploit them to try to gain access to computer systems for malicious purposes. However, we – the defenders – are now more aware than ever of these human vulnerabilities, which is in itself a key step towards tackling the issue; that is, provided information about the vulnerabilities is communicated effectively to as many organizations and their employees as possible, such that socio-technical, and not just technical, cyber hygiene workplace practices can become the norm.
2 Humans as a Line of Defence in Cybersecurity: Human-Centric Cybersecurity Research Within an Industrial Organizational Setting
It is not enough to simply be aware of human cyber vulnerabilities; we also need to better understand them and how they manifest within organizations, and to develop solutions that alleviate their effects amongst employees. In this section, we introduce how we are rising to the challenge within Airbus with a new Accelerator in Human-Centric Cyber Security (H2CS). The core team within the accelerator are psychologists with a wealth of research experience and methods, not only in cyber psychology but also in areas such as human cognition (e.g., perception, attention, memory, decision making), neuroscience, neuroimaging, human-machine interface (HMI) design, human-computer interaction (HCI), artificial intelligence, automation, and human-robot interaction. All are embedded within Airbus to best deliver the outcomes of the accelerator, including a range of research themes and industry-appropriate solutions to tackle and alleviate human cybersecurity vulnerabilities. Example Airbus H2CS research themes (discussed further in this section) include:
- Developing best-in-class tools to measure human cyber strengths, vulnerabilities and behaviors (Sect. 2.1);
- Exploring factors known to cause error-prone and/or risky behaviours in the context of cybersecurity within industrial organizational settings (Sect. 2.2);
- Developing understandable and trustworthy human-centric cybersecurity communications that meet the needs of the wider employee base (Sect. 2.3);
- Utilizing the research findings and other established human factors principles to inform the design of HMIs used within industry-based workplaces, ensure their security from a human perspective, and develop HCI principles for using them (Sect. 2.4).
2.1 Developing Best-in-Class Tools to Measure Human Cyber Strengths, Vulnerabilities and Behaviors Within Industrial Organizational Settings
Most of the human cyber strengths, vulnerabilities and behaviors discussed in Sect. 1.2 are known unknowns: factors that, individually or in combination, could cause cyber risky behaviors amongst employees within organizational settings, but that will be manifested by some individuals (not all), to different degrees, in different ways, and under different circumstances. It is therefore crucial to develop measures to identify vulnerabilities for use within industrial (and related) organizations. These measures need to speak to a range of questions. For example, are any of the human vulnerability factors so strong that they will be prevalent across most individuals and within most organizations? Are some factors an issue but less powerful, such that they are only apparent amongst certain personas and/or require very large samples to detect? Are some of the vulnerabilities different within different departments of the same organization, and if so, why (e.g., security criticality linked to hardware and/or software being used and/or developed, level of technical expertise, work cultures)? Are some of the vulnerabilities more (or less) apparent when individuals work away from their normal workplace environment? These and other questions need to be answered in order to identify human-centric cyber metrics that inform the development of solutions (e.g., interventions) that will be most effective for individuals (bespoke), departments (wider applicability), and organizations (generic) as a whole.
To begin to speak to these questions, the Airbus H2CS team have developed and are testing a range of tools to measure human-centric cyber vulnerabilities (as well as strengths) and risky cyber behaviours amongst people working within industry settings. For example, the Airbus Cyber Strengths and Vulnerabilities tool consists of a battery of established scales measuring the influence of demographic factors (e.g., age, gender), individual differences (e.g., impulsivity, risk taking, decision-making styles), contextual factors (e.g., job role, tools used), as well as aspects of organizational commitment and job satisfaction, protection motivation (e.g., role in cybersecurity), and knowledge, attitudes and behaviours (see [22]). The selection of scales and measures within this tool has also been informed by well-established theories, such as the Theory of Planned Behavior [23] and Protection Motivation Theory [15]. Initial findings suggest that such a comprehensive tool is needed to identify not only factors that strongly predict cyber risky behavior(s), such as security self-efficacy and psychological ownership of devices, but also factors that seem to be weak predictors of such behaviors. Correlations between factors are also being identified and considered while iteratively developing and streamlining the tool. Findings from such tools are being developed into human-cyber vulnerability metrics and personas that we are using to develop interventions – bespoke and generic – to mitigate and alleviate vulnerabilities and therefore reduce the risk of unsafe cybersecurity actions and behaviors.
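As an illustration of this style of analysis, the Python sketch below scores a hypothetical battery and ranks factors by their correlation with a risky-behaviour measure. The scale names, data, and effect sizes are synthetic placeholders; the actual Airbus tool, its items, and its findings are not reproduced here.

```python
# Synthetic demonstration: rank hypothetical scale scores by strength of
# association with a self-reported risky-behaviour outcome.
import numpy as np

rng = np.random.default_rng(seed=1)
n = 200  # hypothetical number of respondents

# Placeholder standardized scale scores (stand-ins for established scales).
scales = {
    "impulsivity": rng.normal(size=n),
    "risk_taking": rng.normal(size=n),
    "security_self_efficacy": rng.normal(size=n),
    "psychological_ownership": rng.normal(size=n),
}

# Synthetic outcome with two deliberately strong predictors built in.
risky_behaviour = (0.5 * scales["impulsivity"]
                   - 0.6 * scales["security_self_efficacy"]
                   + rng.normal(scale=0.8, size=n))

# Simple screen: order factors by absolute correlation with the outcome.
ranked = sorted(scales.items(),
                key=lambda kv: -abs(np.corrcoef(kv[1], risky_behaviour)[0, 1]))
for name, scores in ranked:
    r = np.corrcoef(scores, risky_behaviour)[0, 1]
    print(f"{name:>24}: r = {r:+.2f}")
```

In practice, multivariate models and iterative item reduction would replace this simple correlation screen, but the overall workflow (score, screen for strong and weak predictors, streamline) is the same.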
2.2 Exploring Factors Known to Cause Error-Prone and/or Risky Behaviours in the Context of Cybersecurity Within Industrial Organizational Settings
When trying to map factors that are likely to increase human cybersecurity vulnerabilities, as well as known unknowns there are known knowns. By this, we mean factors that almost universally have a negative effect on task performance, behavior(s) and sometimes well-being, such as working under time pressure [24], with high levels of stress [25], under cognitive resource depletion or high cognitive load [26], and in situations where tasks are disrupted, for example due to interruption [27, 28]. Despite many thousands of research outputs on these topics, there has been a dearth of literature and research on their effects and possible mitigations in the context of cybersecurity. Chowdhury, Adam, and Skinner [21] recently conducted a systematic review examining time pressure effects on cybersecurity behaviour and identified only 21 relevant articles. Of these, few used explicit manipulations of time pressure, and fewer still included cybersecurity workers, focusing instead on student and home computer-user samples, for example. There is much work to be done, and we are tackling this head-on through a number of cybersecurity-themed experimental studies conducted within industrial work settings that are determining the effects (and boundary conditions) associated with these and other known knowns.
It is important to add that many of these factors are difficult to control, manage, or indeed eliminate at an organizational and/or employee level. For example, companies cannot simply operate in a way that ensures staff never (or even rarely) work under time pressure and/or with high cognitive load, and things like interruptions (e.g., emails, drop-in visitors) and other types of distraction (e.g., having to switch between tasks, background sound/speech) are often part of the fabric of many employees' jobs. Thus, solutions need to be researched and developed to (1) better manage such factors (e.g., technical solutions to better schedule when employees engage with computer-based communications such as non-urgent emails) and (2) help mitigate their negative effects before, during, and/or after their occurrence (e.g., interface features that encourage making notes on, or committing to memory, important information before switching to another task).
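A minimal sketch of the first kind of solution, assuming a hypothetical mail-client hook, is shown below: non-urgent messages are held and released in scheduled batches rather than interrupting work as they arrive. The batch times and urgency flag are illustrative assumptions, not a description of any deployed system.

```python
# Hypothetical sketch: hold non-urgent email and release it in batches so
# that routine messages interrupt focused work less often.
from dataclasses import dataclass, field
from datetime import time

BATCH_TIMES = [time(9, 0), time(13, 0), time(16, 30)]  # assumed delivery slots

@dataclass
class BatchingInbox:
    held: list[str] = field(default_factory=list)

    def receive(self, message: str, urgent: bool) -> list[str]:
        """Deliver urgent mail at once; hold everything else for a batch."""
        if urgent:
            return [message]
        self.held.append(message)
        return []

    def release_batch(self) -> list[str]:
        """Called by a scheduler at each slot in BATCH_TIMES."""
        batch, self.held = self.held, []
        return batch

inbox = BatchingInbox()
inbox.receive("Newsletter: canteen menu", urgent=False)  # held for later
print(inbox.receive("Server outage!", urgent=True))      # delivered at once
print(inbox.release_batch())                             # delivered at a slot
```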
2.3 Developing Understandable and Trustworthy Human-Centric Cybersecurity Communications that Meet the Needs of the Wider Employee Base
Asquith and Morgan [29] assert that, as well as a need to better understand the cyber strengths, vulnerabilities and behaviours of humans working with technology (within a 'human-centric cyber space'), of paramount importance in cybersecurity defence is the efficient and effective communication of cybersecurity information (including metrics derived from tools such as those discussed in Sect. 2.1) to the organizations and employees it is developed for. Such communications may well need to be bespoke; for example, they may differentiate between technical and less technical employees and adjust the terminology used accordingly. As much as targeted cybersecurity communications may be effective at individual and team levels, solutions also need to be developed and implemented on a wider scale to instill and/or improve the cybersecurity culture of the organization [30, 31], for example by encouraging employees to engage more with each other about cybersecurity information.
The communications need to be up-to-date and clearly linked to the technologies and systems they represent. Those intended to benefit from the communications (i.e., employees) should be able to easily interpret and understand them to a meaningful level. To protect against a successful attack, those interacting with the communications need to understand the value of the system or data to potential attackers, the vulnerabilities in the attack surface, and the resources available to potential attackers to aid them in a successful breach [32]. This more tailored use of human-centric cybersecurity communications is more likely to support decision making than many existing systems and methods, by developing a resistant defence rather than purely quantifying and displaying risk factors.
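As a toy illustration of such tailoring, the sketch below renders one advisory differently for technical and non-technical audiences. The templates and audience labels are hypothetical, not part of any deployed Airbus system.

```python
# Hypothetical templates: the same underlying advisory is phrased for
# different audiences, adjusting terminology and the requested action.
TEMPLATES = {
    "technical": ("Advisory: {issue}. Apply the patch via the managed "
                  "endpoint agent and verify the package signature."),
    "non_technical": ("Action needed: {issue_plain}. Please restart your "
                      "laptop today so the security update can install."),
}

def render_advisory(audience: str, issue: str, issue_plain: str) -> str:
    """Pick the template matching the audience's technical level."""
    key = "technical" if audience == "technical" else "non_technical"
    return TEMPLATES[key].format(issue=issue, issue_plain=issue_plain)

print(render_advisory("non_technical",
                      issue="OpenSSL remote code execution vulnerability",
                      issue_plain="a security fix for your laptop"))
```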
An approach we are adopting within H2CS to develop effective human-centric cybersecurity communications is one of co-development with the people who are meant to benefit from them. For example, understanding whether presenting risk information improves security behaviour is only possible by measuring its effect or by receiving feedback from staff members themselves. This process will also help to increase feelings of job involvement and commitment [33], which have been shown to improve cybersecurity awareness and behavioral intentions [34, 35].
2.4 Drawing upon Research Findings and HCI Principles to Inform the Design of More Secure HMIs for Use by Employees Within Industry Settings
There will always be some human cyber vulnerabilities that cannot be mitigated by solutions informed by the key known unknowns and known knowns discussed above, and/or by improved methods and content of communication to the wider employee base. For example, some behaviours and habits (e.g., Einstellung effects and ingrained task performance strategies that persist even when an alternative strategy is more beneficial) are so hard to break that hard constraints need to be considered. By hard constraints, we refer to HMI features that prevent people from doing certain things (e.g., replying to an email without verifying the credentials of the sender) in order to reduce risky actions and behaviors, and that encourage people to learn from the constraints and apply their characteristics when performing other tasks. It could be beneficial to add hard constraints to some aspects/features of HMIs – e.g., to features that could be targeted by cyber criminals online, and/or when working with sensitive data on a device that could be targeted.
Hard constraints such as information access costs (e.g., masking information, with a small time and mouse/cursor cost to uncover it) and implementation costs (e.g., a time cost to implement an action, such as when replying to an email from a non-verified sender) can lead to powerful shifts towards cognitively effortful information processing strategies. Such shifts discourage the automatic, surface processing strategies that are known to lead to risky cyber behaviours [18]. These HMI design principles are known to encourage more task-relevant, planned behaviour and more intensive memory-based processing that can, for example, protect against forgetting important information after a task is interrupted [28] and improve problem solving behaviours [36]. The benefits of such methods have been demonstrated multiple times using basic laboratory tasks [28, 36,37,38,39,40]. What we aim to understand within our current studies is whether, and to what extent, such manipulations of HMI hard constraints can encourage more cognitively intensive strategies that lead humans to act and behave even more securely in HCI situations within workplace settings.
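The sketch below illustrates an implementation cost of the kind just described, assuming a hypothetical mail-client hook: a reply to a sender who fails a verification check incurs a short enforced delay and an explicit confirmation. The delay length and the verification check are placeholder assumptions; appropriate values for workplace settings are exactly what studies such as ours would need to establish.

```python
# Hypothetical hard-constraint sketch: add friction (a time cost plus an
# explicit confirmation) before a reply to an unverified sender can be sent.
import time

REPLY_DELAY_SECONDS = 5  # placeholder friction value

def sender_is_verified(address: str, directory: set[str]) -> bool:
    """Stand-in for a real check (e.g., lookup in an internal directory)."""
    return address.lower() in directory

def gated_reply(sender: str, directory: set[str]) -> bool:
    """Permit immediate replies to verified senders; otherwise impose a
    delay and require explicit confirmation, prompting deeper processing."""
    if sender_is_verified(sender, directory):
        return True
    print(f"Sender {sender!r} is not verified. Pausing before reply...")
    time.sleep(REPLY_DELAY_SECONDS)
    answer = input("Type YES to confirm you still want to reply: ")
    return answer.strip() == "YES"
```

The design intent is not to block the action outright but to interrupt the automatic, truth-default response long enough for suspicion and deliberate evaluation to engage.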
3 Conclusions
We do not dispute that humans possess a number of characteristics and limitations that increase vulnerability to cyber-attack methods. Statistics relating to human involvement in successful cybersecurity breaches are staggering and alarming. Within the current paper, we have presented and discussed a number of factors that can likely account for many of the human vulnerabilities that have resulted in such breaches within organizations (with the exception of more malicious insider threat factors, which are beyond the scope of the current paper). These include: suboptimal perceptions of security risks; issues with security awareness and culture within some organizations; overreliance on flawed heuristics and decision-making biases; individual differences in factors such as risk taking, impulsivity, trust, and self-awareness; and maladaptive behaviors due to factors such as time pressure, high cognitive load, high stress, and working under conditions where interruptions and distractions are prevalent.
A number of these issues are receiving some research attention, but there is much work to do. Like us, other researchers acknowledge that humans have the potential to be a solution to, and not just a problem for, a number of cybersecurity challenges [41]; in fact, some have been suggesting this for quite some time [1], although doubts have been raised. In a number of cases, factors are being examined in relative isolation from others, such as work that focuses on a subset of individual differences without considering others that might also explain risky cyber behaviours. Some research is perhaps too focused on human vulnerabilities without consideration of environmental or situational factors, and vice versa. Whilst not a criticism per se, far too much research involves studying population samples (e.g., university students) that are not representative of the settings that possible solutions to human-centric cybersecurity issues are intended to benefit, such as workplace organizations and the employees within them. We need to embrace the idea that humans can be a significant part of the solution to cyber-attack attempts, as advocated by others [1, 41], and drive forward cutting-edge research that tackles the wider range of human cyber vulnerabilities discussed within the current paper.
Within the current paper we have discussed a step-change involving human factors psychology research in the context of cybersecurity, conducted for and within an industrial organization, with potential wide-ranging benefits to other organizations and workplace settings. Airbus have established a first-in-class Accelerator in Human-Centric Cyber Security (H2CS), with a core team of psychologists driving forward, and working with others on, research to examine human cyber vulnerabilities within workplace settings and to develop interventions to alleviate, and in many cases mitigate, these vulnerabilities. The team are working on a number of research projects to develop and test industry-appropriate solutions. These include: best-in-class tools to measure human cyber strengths, vulnerabilities and behaviors that also consider workplace environmental factors; investigations of factors known to cause risky behaviours in the context of cybersecurity within workplace settings; development of understandable and trustworthy human-centric cybersecurity communications that meet the needs of the wider employee base; and implementation and testing of human factors HMI and HCI techniques that discourage sub-optimal and overly trusting information processing strategies and instead encourage more effortful cognitive strategies, leading people to think more deeply about the decisions they make and the actions/behaviors they engage in. We have provided insights into these research projects not only to promote the work of the accelerator, but also to encourage others to get involved and to consider the value of human-centric cybersecurity research embedded within organizations.
References
Sasse, M.A., Brostoff, S., Weirich, D.: Transforming the ‘weakest link’–a human computer interaction approach to usable and effective security. BT Technol. J. 19(3), 122–131 (2001). https://doi.org/10.1023/A:1011902718709
D’Arcy, J., Hovav, A., Galletta, D.: User awareness of security countermeasures and its impact on information systems misuse: a deterrence approach. Inf. Syst. Res. 20(1), 79–98 (2009)
Stanton, J.M., Stam, K.R., Mastrangelo, P., Jolton, J.: Analysis of end user security behaviors. Comput. Secur. 24(2), 124–133 (2005)
Verizon: 2018 Data Breach Investigations Report (2018)
IBM: Cost of a Data Breach Report 2018 (2018)
Microsoft: Security Intelligence Report (2018)
Perera, D.: Sony hackers used fake emails. Politico, 21 April 2015 (2015). https://www.politico.com/story/2015/04/sony-hackers-fake-emails-117200
Schuman, E.: LinkedIn’s disturbing breach notice. Computerworld (2016). https://www.computerworld.com/article/3077478/security/linkedin-s-disturbing-breach-notice.html
Forbes: Marriott breach: Starwood hacker gains access to 500 million customer records (2018). https://www.forbes.com/sites/forrester/2018/11/30/marriot-breach-starwoods-hacker-tier-rewards-millions-of-customer-records/#3f90b0245703
Yurieff, K.: Equifax data breach: what you need to know (2017). http://money.cnn.com/2017/09/08/technology/equifax-hack-qa/index.html
Weise, E.: It’s new and it’s bad: Yahoo discloses 1B account breach (2016). https://www.usatoday.com/story/tech/news/2016/12/14/yahoo-discloses-likely-new-1-billion-account-breach/95443510
Bada, M., Sasse, A.M., Nurse, J.R.: Cyber security awareness campaigns: Why do they fail to change behaviour? arXiv preprint arXiv:1901.02672 (2019)
Pfleeger, S.L., Caputo, D.: Leveraging behavioral science to mitigate cyber security risk. Comput. Secur. 31(4), 597–611 (2012)
Egelman, S., Peer, E.: Scaling the security wall: developing a security behavior intentions scale (sebis). In: Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, pp. 2873–2882. ACM (2015)
McGill, T., Thompson, N.: Old risks, new challenges: exploring differences in security between home computer and mobile device use. Behav. Inf. Technol. 36(11), 1111–1124 (2017)
Scholl, M.C., Fuhrmann, F., Scholl, L.R.: Scientific knowledge of the human side of information security as a basis for sustainable trainings in organizational practices. In: Proceedings of the 51st Hawaii International Conference on System Sciences, pp. 2235–2244 (2018)
Levine, T.R.: Truth default theory: a theory of human deception and deception detection. J. Lang. Soc. Psychol. 33, 378–392 (2014)
Vishwanath, A., Harrison, B., Ng, Y.J.: Suspicion, cognition, and automaticity model of phishing susceptibility. Commun. Res. 45, 1146–1166 (2016)
Williams, E.J., Beardmore, A., Joinson, A.: Individual differences in susceptibility to online influence: a theoretical review. Comput. Hum. Behav. 72, 412–421 (2017)
Williams, E.J., Morgan, P.L., Joinson, A.J.: Press accept to update now: individual differences in susceptibility to malevolent interruptions. Decis. Support Syst. 96, 119–129 (2017)
Chowdhury, N.H., Adam, M.T.P., Skinner, G.: The impact of time pressure on cybersecurity behaviour: a systematic review. Behav. Inf. Technol. 38(12), 1290–1308 (2019)
Bishop, L., Morgan, P.L., Asquith, P.M., Raywood-Burke, G., Wedgbury, A., Jones, K.: Examining human individual differences in cyber security and possible implications for human-machine interface design. In: Moallem, A. (ed.) HCI for Cybersecurity, Privacy and Trust. LNCS, vol. 12210, pp. 1–17. Springer, Cham (2020, in press)
Ajzen, I.: The theory of planned behaviour: reactions and reflections. Psychol. Health 26(9), 1103–1127 (2011)
Kelly, J.R., McGrath, J.E.: Effects of time limits and task types on task performance and interaction of four-person groups. J. Pers. Soc. Psychol. 49(2), 395–407 (1985)
Henderson, R.K., Snyder, H.R., Gupta, T., Banich, M.T.: When does stress help or harm? The effects of stress controllability and subjective stress response on Stroop performance. Front. Psychol. 3, 179 (2012)
Paas, F., Renkl, A., Sweller, J.: Cognitive load theory and instructional design: recent developments. Educ. Psychol. 38(1), 1–4 (2003)
Monk, C., Trafton, J.G., Boehm-Davis, D.A.: The effect of interruption duration and demand on resuming suspended goals. J. Exp. Psychol. Appl. 14, 299–313 (2008)
Morgan, P.L., Patrick, J., Waldron, S., King, S., Patrick, T.: Improving memory after interruption: exploiting soft constraints and manipulating information access cost. J. Exp. Psychol. Appl. 15, 291–306 (2009)
Asquith, P.M., Morgan, P.L.: Representing a human-centric cyberspace. In: 6th International Conference on Human Factors in Cybersecurity, 2020, 11th International Conference on Applied Human Factors and Ergonomics, San Diego, US, pp. 1–7 (2020)
Rathburn, D.: Gathering security metrics and reaping the rewards. SANS Institute, Information Security Reading Room (2009). https://www.sans.org/reading-room/whitepapers/leadership/gathering-security-metrics-reaping-rewards-33234
Herrmann, D.S.: Complete Guide to Security and Privacy Metrics: Measuring Regulatory Compliance, Operational Resilience, and ROI. Auerbach Publications, Boca Raton (2007)
Fleming, M.H., Goldstein, E.: Metrics for measuring the efficacy of critical-infrastructure-centric cybersecurity information sharing efforts. In: Homeland Security Studies and Analysis Institute Report RP: 11-01.02.02-01, pp. 1–57 (2012)
O’Driscoll, M.P., Randall, D.M.: Perceived organisational support, satisfaction with rewards, and employee job involvement and organisational commitment. Appl. Psychol. Int. Rev. 48(2), 197–209 (1999)
Reeves, A., Parsons, K., Calic, D.: Securing mobile devices: evaluating the relationship between risk perception, organisational commitment and information security awareness. In: Proceedings of the 11th International Symposium on Human Aspects of Information Security and Assurance (HAISA), pp. 145–155 (2017)
Herath, T., Rao, H.R.: Protection motivation and deterrence: a framework for security policy compliance in organisations. Eur. J. Inf. Syst. 18, 106–125 (2009)
Morgan, P.L., Patrick, J.: Paying the price works: increasing goal access cost improves problem solving and mitigates the effect of interruption. Q. J. Exp. Psychol. 66(1), 160–178 (2013)
Gray, W.D., Sims, C.R., Fu, W.-T., Schoelles, M.J.: The soft constraints hypothesis: a rational analysis approach to resource allocation for interactive behavior. Psychol. Rev. 113(3), 461–482 (2006)
Morgan, P.L., Patrick, J.: Designing interfaces that encourage a more effortful cognitive strategy. In: Proceedings of the 54th Annual Meeting of the Human Factors and Ergonomics Society, Cognitive Engineering and Decision Making Section, San Francisco, California, USA, pp. 408–412 (2010)
Morgan, P.L., Patrick, J., Patrick, T.: Increasing information access costs to protect against interruption effects during problem solving. In: Proceedings of the 32nd Annual Meeting of the Cognitive Science Society, Portland, Oregon, USA, pp. 949–955 (2010)
Patrick, J., et al.: The influence of training and experience on memory strategy. Memory Cogn. 43(5), 775–787 (2015)
Zimmermann, V., Renaud, K.: Moving from a ‘human-as-problem” to a ‘human-as-solution” cybersecurity mindset. Int. J. Hum Comput Stud. 131, 169–187 (2019)
Acknowledgements
The research and the Airbus Accelerator in Human-Centric Cyber Security (H2CS) are further supported by Endeavr Wales and Cardiff University. The programme supports the first author (Dr Phillip Morgan) as Technical Lead and the second author (Dr Phoebe Asquith) as a Cardiff University Research Associate, funds the fourth author (George Raywood-Burke) as a PhD student, and provides support in kind for the third author (Laura Bishop), who is funded via a PhD studentship from the School of Psychology at Cardiff University.