Cyber Ways Podcast

Dewald Roode Workshop with Dr. Karen Renaud

September 19, 2023 Tom Stafford, Craig Van Slyke Episode 21

Are you ready to shift your perspective on cybersecurity? We've got Dr. Karen Renaud, the general chair of Dewald Roode Workshop (DRW) this year and a renowned figure in information security research, to guide us on this fascinating journey. We'll be dissecting the paradigm-shifting presentations, lively debates and thought-provoking discussions from the workshop, with a special focus on Basie von Solms' revolutionary thoughts on the future of cybersecurity.

Looking to understand why people often disregard security procedures? Or how personality traits can impact the security decisions we make? Our discussion reveals that cautiousness, morality, and self-consciousness can positively affect security decisions, but increasing security knowledge doesn't always correlate with safer decisions. As we navigate through the papers, we'll also investigate how AI-enhanced security systems could alleviate user stress and transform the way we approach security training.

We also tackle an under-discussed issue in the cybersecurity sphere: the misuse of system access and the potential for computer abuse by managers. With their unique position of trust and autonomy, could managers be the new insider threat to watch out for? We'll also delve into the role of habits in cyber hygiene, the promises and perils of AI in the field, and how these insights can be applied in the workplace. Join us for this enlightening discussion -- it's an episode you won't want to miss!


DRW Website: https://drw2023.github.io/
(All papers and the keynote slides are available on the website.)

Papers discussed:

  • Personality Facets and Behavior: Security Decisions under Competing Priorities,  Sanjay Goel, Jingyi Huang, Alan Dennis, Kevin Williams
  • An Examination of How Security-Related Stress, Burnout, and Accountability Design Features Affect Security Operations Decisions,  Mary Grace Kozuch, Adam Hooker, Philip Menard, Tien N Nguyen, Raymond Choo
  • Bosses Behaving Badly: Managers Committing Computer Abuse, Laura Amo
  • Encouraging Peer Reporting of Information Security Wrongdoings: A Normative Ethics Perspective, Reza Mousavi, Adel Yazdanmehr, Jingguo Wang, Fereshteh Ghahramani
  • Impact of Cyber Hygiene Behavior on Target Suitability using Dual Systems Embedded Dual Attitudes Model, Harsh Parekh, Andrew Schwarz
  • The Blend of Human Cognition and AI Automation: What Will ChatGPT Do to the Cybersecurity Landscape?, Hwee-Joo Kam, Chen Zhong, Hong Liu, Allen Johnston




Cyber Ways is brought to you by the Center for Information Assurance, which is housed in the College of Business at Louisiana Tech University. The podcast is made possible through a "Just Business Grant," which is funded by the University's generous donors.

https://business.latech.edu/cyberways/

Tom:

Hello colleagues and welcome back to the Cyber Ways podcast, a production of the Louisiana Tech College of Business Center for Information Assurance. Today we're happy to have one of our favorite guests, good friends and research colleagues, Dr. Karen Renaud of Strathclyde University. She's going to talk about the recent Dewald Roode Workshop, a very special occurrence for us in information security research. She chaired this year's workshop, and her research focus has been on human-centered security, with the overall goal of finding easy, natural and secure ways for people to interact with their digital devices. There are dozens of articles under her byline on this topic, and her work has been cited thousands of times. Welcome back, Dr. Renaud.

Karen:

Thank you so much. It's lovely to be here.

Craig:

Great. I think you are our first three-time guest. I'll have to go back and double-check, but I think you have officially set the record now. As Tom said, Karen is here to talk about the Dewald Roode Workshop, which is one of our premier workshops for information privacy and security research. She was the general chair of the workshop this year. Her program co-chairs, and I apologize if I butcher any of these names, were Jacques Ophoff, Justin Giboney, and Alexandra Durcikova of the University of Oklahoma. I think there were 16 papers presented and 30-ish people attended, something like that.

Craig:

We're going to talk about some of the papers from the conference. Before we do that, Karen, can you just tell us how the workshop went?

Karen:

Yes, we booked out a little hotel on the west coast, just on the north side of the loch. We didn't know how many people we were going to get, but we did get 29 attendees, which was absolutely amazing. We had the hotel to ourselves, with a nice little area where we could socialize in the evenings, and we had a very nice banquet with a saxophonist playing in the background. People were very enthusiastic, everyone contributed, and it was just a very, very pleasant workshop to attend.

Karen:

We arranged two different excursions. The first one was on the day before the workshop, when everyone who signed up went to Stirling Castle. We had a bus take us up to Stirling Castle, we went through the castle and came back. Then, on the day just after the workshop, we went off to the Glengoyne distillery near Glasgow, which was very, very popular with the people who went. It was so wonderful bringing everybody to Scotland, letting everybody see the beautiful scenery and visit all the sites, and a lot of people came early to spend some time. I know that Merrill and Tony did some walking through the hills before they came down to the venue, and some went on to the Lake District afterwards, so it really worked well for touristy things, either before or after. Just quickly to say that we had Basie von Solms there. He went to the trouble of contacting Dewald Roode's children, who had no idea that we had this workshop. They were so happy that their dad's name was still being remembered in this way. That was lovely that he did that.

Karen:

He gave a very interesting presentation. I actually have his slides here, which I'm very happy to share with Craig and Tom in case they want to make them available on their podcast as well. He suggested that in cybersecurity, even though budgets are going up, we're not winning. He has a number of premises. The first premise is that cybersecurity awareness is not helping, which is sort of radical. The second premise is that cybersecurity is all about cybercrime: we use the wrong terminology and we need to change our mindset, according to him. Another premise is that users need to understand cybersecurity as a weapon to fight cybercrime rather than as a list of rules to follow, and that people need to understand that fighting cybercrime is part of their daily job.

Karen:

Then he suggests four different paradigms we could move to in the future. The first is the 1970s paradigm, which not many of your listeners will have lived through, but the idea is everything being centralized, with mainframes and only dumb terminals for staff members. Therefore, there's nothing that the end user has access to and control of for attackers to go after. That's what he calls the 1970s paradigm. The next one is the firefighter approach.

Karen:

He said that in firefighting, people are trained to fight fires in an emergency, but in cybersecurity we've put a lot of our effort into prevention. We don't actually teach staff members how to cope if an exploit does in fact occur. Obviously, organizations have experts to do that, but the average person is not taught what to do. The next approach he talks about is the ownership approach, where we help people to understand and take ownership of their device and the information they have control of. It's not the organization's stuff that they have to be worried about; it becomes something that they're accountable for.

Karen:

His fourth suggested paradigm is the workplace approach, which is based on the health and safety paradigm: everyone in the company plays their part in helping to ensure that people don't get hurt on the job, because this is the way we do things here. It's a sort of creating of norms, of always being health and safety conscious, and he thinks that maybe cybersecurity should go that way. His favorite approach is the firefighting one, but he was very interested to hear what other people had to say, and it was a lively discussion afterwards. It was a very enjoyable conference, with an awesome mix of early-career PhD students and seasoned academics there.

Craig:

Yes, I would imagine that "lively" is an understatement when discussing his views, but that's refreshing.

Karen:

Yes, I think that's why people attend DRW.

Craig:

Detmar is one of the founders.

Karen:

Yes, it was lovely having him there, and Merrill was there as well. He's a very regular attendee and always contributes his insights, so that was good.

Tom:

So one thing I noticed about DRW early in my time at Louisiana Tech, brought into it by Merrill Warkentin, who we were just talking about, is that it's eminently behavioral. It has a favored focus on the behavior of people in the workplace as regards security, and all the papers we have to talk about today are behavioral in some aspect or another, including the first paper, which is a paper on personality. I was pleased to see that it was written by an old friend, Sanjay Goel, out of the Buffalo contingent, I believe. We were hoping you could tell us about some of these high-profile papers that are emblematic of this year's meeting, and that paper, Personality Facets and Behavior: Security Decisions under Competing Priorities, has my full attention.

Karen:

Yes, indeed, it was a very interesting paper. It was presented by Alan Dennis. The idea is that people are not failing to comply with security procedures just because they're bad or because they're lazy, but because they have other competing responsibilities and things that they have to take care of, and that was what he was exploring. And then the idea that personality might feed into this as well, and that people would make trade-offs depending on their personalities. They tested specifically dutifulness, cautiousness, achievement-striving, modesty, morality, assertiveness and self-consciousness. It was a really interesting study, and I was very glad to see it, because in cyber there's always a tendency to want to say people are lazy or ignorant and things like that. What they're saying is that if you're rewarded for doing a particular thing at work but you're not really rewarded for carrying out cyber hygiene behaviors, then maybe you will do the things that you're rewarded for.

Karen:

The finding, very interestingly, was that dutifulness, cautiousness, achievement-striving and self-consciousness positively affected security decisions, so those people would be more likely to do the secure thing. However, the most interesting part for me was that security knowledge did not affect security decisions beyond a baseline level. People had the knowledge, but it was the competing things that they were worried about that made more of an impact on their decision whether to carry out a security behavior or not. The takeaway was that we're not going to train our way out of these kinds of things. We're going to have to think about the competing demands that people are subject to, the emergencies that are part of their jobs, how they have to decide one way or the other, and how we can understand those competing priorities where security may not come out on top.

Tom:

It's almost seeming to me as though there are security people and there are non-security people. So if you want to pick people for a classified workplace, you'd want to identify the security types, because they're the ones most likely to earn a clearance and also to observe the regulations in a highly secure workplace.

Karen:

But if you give them a lot of other things to do, maybe they'll still not do the security things. It also depends on how many duties people have.

Craig:

I think this is a fundamental point and it's a thread that's run through a number of our episodes. Workers view security as overhead. It's friction. It's not something that helps them do their jobs directly. It would be better if they didn't have to deal with it. So I think one of the things kind of along the lines of the keynote you discussed is we've got to find ways to put security in the background, where users don't have to deal with it as much. I don't know exactly how that gets done, but we've had what? How many decades now of trying to get users to be security workers and that hasn't been effective, and I don't think it's ever going to be effective if we go down the same path.

Tom:

I think we make an error in assuming that everybody is able to engage in security activities. I've viewed security research as starting from the presumption that you can get everybody on the same sheet of music, and I'm starting to think maybe not everybody will do that. I mean, you certainly have the evildoers, the bad element, the criminal justice perspective, but Craig's point about cognitive overload is well taken. Not everybody has the bandwidth to do this kind of work. There was another paper that dealt with how people freak out from all this pressure. Security-related stress was in the title: Mary Grace's paper on how burnout actually is an impediment to a security regime in the workplace.

Karen:

Yeah, and this was really a paper where I felt like they should have had AI in the title as well, because they were talking about the human in the loop, harnessing AI-enhanced security systems and trying to make decisions based on the recommendations coming from the system, but also based on their own awareness of the context. The paper was presented by Phil Menard. They were basically proposing a piece of research; they designed the whole study, but they haven't carried it out yet. There were a lot of parts that were really interesting to me. Once again, the human is there, but they're susceptible to burnout over time because they're keeping tabs on all these things that are happening all the time, and they can get burnt out and stressed by that.

Karen:

This comes down to: we can't trust AI agents to get it right all the time, so we have to have a human to say, well, okay, this makes sense or it doesn't. And apparently the fact that these things are conversational helps us to anthropomorphise them, so it's almost like a friend that you can talk to. But on the other hand, these things are biased and they do make mistakes, and so people have a responsibility to stay on top of that as well. So the very interesting part was that we actually start looking at who is accountable when a bad decision is made. Is it the AI engine, but you can't hold an engine accountable, or is it the human who's trying to take all these inputs and make a good decision?

Craig:

It's tough to hold the human accountable if AI is making some of those decisions, especially if AI is actually making the decisions and the humans don't have the autonomy. Responsibility requires autonomy. It also requires understanding. You have to know that your actions are likely to lead to some consequence. And if AI is in the loop, how can humans even know that?

Karen:

Yeah, and they talk about inscrutability, right. These AI engines make a recommendation, but you don't know how they arrived at it. You can ask a human, why do you think that, and they will explain it to you, but the engine just tells you to do X and you don't know why it's saying that. This inscrutability is a real issue in terms of having it as a reliable tool.

Craig:

And this idea of stress: stress is caused when the requirements of something overload your capacity, and so the idea is that AI should reduce stress because it takes some of the load off the workers. But then if you replace that with workers having to monitor the AI, I'm not sure it gets us anywhere.

Karen:

But you know, AI reflects our biases as well, right?

Craig:

Yeah, absolutely.

Karen:

It's been trained on data from humans, and humans are often biased, and so it's not necessarily going to be any more objective than we are.

Craig:

Right.

Karen:

And we all know that it hallucinates. It will give you a completely wrong answer quite confidently; yes, it is very confident in its errors.

Craig:

But you know, we're talking a lot about generative AI, and we don't want to forget that there are other types of AI that can play a role in security, identifying threats before they really get to the point where they can be executed. That's not really generative AI, that's machine learning.

Karen:

Yes, and spotting anomalies that signal an emerging attack. That's a fantastic use, and of course there's no subjectivity involved there.
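
[Editor's note: as a minimal, illustrative sketch of the kind of non-generative, machine-learning anomaly detection Craig and Karen describe here (not taken from any of the workshop papers), the snippet below scores made-up login events with scikit-learn's IsolationForest. The feature names, data and contamination rate are assumptions for illustration only.]

```python
# Illustrative sketch only: flagging unusual login events with an unsupervised
# model, the sort of non-generative ML mentioned in the discussion.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Features per login event: [hour_of_day, failed_attempts, MB_downloaded]
normal_logins = np.column_stack([
    rng.normal(10, 2, 500),    # mostly business hours
    rng.poisson(0.2, 500),     # very few failed attempts
    rng.normal(50, 15, 500),   # typical data volume
])

suspicious_logins = np.array([
    [3.0, 9, 900.0],   # 3 a.m., many failures, large download
    [23.0, 6, 400.0],
])

events = np.vstack([normal_logins, suspicious_logins])

# Fit on the event history and score every event; -1 means "anomalous".
model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(events)

for idx in np.where(labels == -1)[0]:
    print(f"event {idx} flagged for human review: {events[idx]}")
```

[The design point Karen raises still applies: the model only surfaces statistical outliers; a human analyst still has to review the flagged events and decide what they mean.]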

Tom:

Right. There's a paper to be written about all of this, because I'm sitting here thinking that we must have already implemented AI in the malware detection routines that are running on all my machines.

Craig:

This is a little bit of a tangent, but one of the dangers of where we are with AI is there's a rush to roll things out before we carefully consider them, and I just wonder if some of the AI applications around security are going to move us in a bad direction before they move us in a good direction.

Karen:

There was a case here where a charity that advised people with eating disorders didn't want to pay humans to do the advising, so they thought they'd use a conversational AI engine. Within three days it was telling people to do all the things that they shouldn't be doing with their eating disorders. So they quickly had to switch that off and get humans back again, and this is the problem: it's not going to replace humans.

Craig:

No, it's not. Not anytime soon, that's for sure. But we will have bad applications of it before we get to the good applications, I think.

Karen:

Yeah.

Craig:

By the way, before we move on to the next paper, we will have links to all of the information about the conference, including the papers and the keynote slides, in our show notes. Let's move on to the next paper: Bosses Behaving Badly: Managers Committing Computer Abuse, by Laura Amo, or I think that's how you pronounce it. So this was an interesting paper.

Karen:

It was my favorite paper, absolutely my favorite paper, because you read it and you go, oh yeah, of course some bosses are going to behave badly. What she says is that it's been an understudied area. She looked at all the studies she could find on computer abuse intent and behavior spanning 35 years, and only four had actually considered managerial status as a predictor, and even those mostly used it as a control variable. That was a really interesting thing. The argument she makes is that managers are the people who grant staff access to various things, and you know, when somebody leaves the organization they'll immediately take all their access away, but it often doesn't occur to the organization to control the amount of access the manager has.

Karen:

So if you look at Cressey's fraud triangle, it's opportunity, motivation and rationalization, and her argument is that here it's opportunity that is maximized, because managers have access to everything. However, it's not every single manager that's going to behave that way, because you still have to be able to look at yourself in the mirror and you have to be able to rationalize behaving badly, and many people cannot do that; they have too much integrity. She did a little study, with over 400 people if I remember correctly, and she was able to show that managers were in fact significantly more likely than non-managers to engage in computer abuse. That's a really significant finding.

Craig:

When you stop to think about it, that may not be all that surprising. I mean, it's novel, certainly. You know it is not something that's been looked at before, but managers have a lot of autonomy.

Karen:

Yes.

Craig:

Typically, as you move up the managerial ranks and you increase your span of control and that sort of thing, you not only increase your autonomy, but you're more trusted. Yes, so you have less oversight. The other thing is that you also have extreme pressure and often competing responsibilities.

Karen:

And so that's where the first paper comes in now, exactly.

Craig:

Exactly and well in the second paper. Because that creates stress. You want to relieve that stress.

Karen:

Yeah.

Craig:

And one way to do that is to maybe do some things you shouldn't do with respect to computer use.

Karen:

She talks about this as a new kind of insider threat, which I found very interesting, because I've been looking recently at a lot of taxonomies of insider threats and I've never actually seen this manager type reflected in them. I agree with her.

Tom:

It's a fresh new perspective in that regard, and I find that welcome, because there's so much "let's police the employees, let's find better ways to lock them down," when the real question in this paper seems to be: who's watching the watchdogs?

Karen:

Who's watching the watchers, exactly. But here's the interesting part, I feel: I wonder whether the real situation is perhaps worse than her findings suggest, because people have to admit to having an intention to commit abuse, and a lot of people may do it but not admit it.

Craig:

That brings up another really interesting question that I've been pondering a lot, which is exactly what is abuse in the minds of these individuals? If I've got obligations to various stakeholders and I do something that, from a security perspective, maybe I shouldn't do, but it's good from all these other perspectives, is that abuse? And I'm not talking about rationalizing here, I'm talking about actually not viewing it as abuse. You know, say I take the data...

Karen:

I've got to take it...

Craig:

I've got to have data to go to a client meeting and, yes, I'm not supposed to take it, but if we want to land this big contract, or whatever it might be, I need to have this data for the meeting. I don't view that as abuse. Sure, it's a violation of some policy, but organizations have all sorts of stupid rules.

Karen:

That's true.

Craig:

And maybe this is just one more.

Tom:

My own work with William Lee has gone in that direction: the notion of there being subgroups, clans if you will, or tribes of security beliefs within large firms that don't adhere completely to the information security policy.

Karen:

But does that mean our security policies are not adequate?

Craig:

I would say they aren't. Security policies overprivilege security at the expense of the rest of the organization. And as much as we might not want to accept it, regular workers do not keep security top of mind. Maybe in some high-consequence industries, you know, defense and some of those sorts of things, maybe they do. But to go back to that prior paper, they're just trying to get their work done.

Tom:

Exactly. And adhering to the letter of the law on the information security policy sometimes impedes making that meeting you were talking about earlier, Craig, where you need to break policy, grab that data and go. It's more important to make the meeting, satisfy the client, ring the cash register, generate revenue. In some cases, a security officer I talked to about it said, well, you know, you have to formalize the expectations, even if we wink and nod at them every once in a while; otherwise it does become anarchic.

Craig:

That makes sense. I want to take another tangent, if you don't mind, because as I look through these papers and, you know, read other work and look at what's going on out there in practice: we're trained as systems people, and if a system can go wrong, there's a problem with that system, a control problem, some sort of problem in that system. I think, fundamentally, if we think of how we approach information security as a system, which may or may not be appropriate, the system is at fault, yet we keep blaming the users. You mentioned this from the keynote. Every time I look at some criminal justice theory being applied to users, it makes my head explode, because we're treating people that are trying to do their work as criminals. There are criminals out there that are actively trying to do bad things, but that's a tiny, tiny, tiny portion. That's the one percent.

Craig:

Yeah, whatever it is. Now, they can do a lot of damage, so we can't ignore those. But a lot of the problems come from somebody who isn't trying to do anything wrong. They're trying to do what they think is right, but the system allows them to put the organization at risk.

Karen:

Yeah, if by one click, people can endanger the organization, the system is wrong.

Craig:

Right, right. So I think, to go back to Professor von Solms' paradigms, maybe we do need to have a mindset shift.

Karen:

The only thing was that one person asked a question, saying, but all you measured was intention. And my question is, how could you measure actual bad behavior? Because people are never going to admit to that.

Craig:

I wonder about that. I think your point is well taken, so I'm not necessarily disagreeing. But again, what if they don't think it's abuse?

Karen:

Yes, if they don't think it's abuse, yeah, exactly.

Craig:

So that's an interesting question.

Karen:

But going back to the bosses thing, somebody, and I'm not going to say who they were, but very high up, not in my current university but another one, said to me that everything was tied down so tightly on the university machines that he would email documents to his home email to work on, because his home machine was easier to use. And I just thought, security is wrong when that's what it's doing. This man has access to all these documents, he's not doing anything wrong, but he's not supposed to email them to his home Gmail address, right? But he does it because he wants to get his job done. I really had sympathy with him.

Tom:

I'd like to bridge to the next paper, because it seems like it's becoming relevant to consider people doing bad things for good reasons and what happens then, and the sense that the formal watchers catching you is not as effective as we'd like. This fits with something some of our students have been looking at: the notion of people in the workplace seeing something wrong going on and raising a red flag higher up the food chain. It fits with that same guy who was telling me, well, we've got this policy and we know there are going to be winks and nods, but at the same time, if we get a formal report from somebody complaining about bad behavior, we act on it promptly. The whole notion of peer reporting, Karen.

Karen:

Yeah, I'm conflicted about this paper. I grew up in the era of East and West Germany, and I'm aware of how reporting happened in East Germany, where nobody was ever able to relax because they didn't know who was going to report on them. And so I guess, because of knowing that, I really have a problem with always knowing that my colleagues are going to report on me if I make a mistake. That's what worries me about this kind of mechanism.

Karen:

However, I have friends, you know, former students, who are now CISOs, and I spoke to one last evening. He said, oh no, no, no, this would be very helpful, because we don't know if there's a unit in the organization getting up to a bunch of weird stuff that they shouldn't be getting up to, and if we knew, we could set them back on the right path. And I said, yeah, but what about worrying about turning into East Germany? And he said no, he really didn't think that was a consideration I should be worried about. So that's a practitioner, and he probably knows better than I do, but I feel like there are always people who are going to report for their own petty reasons, and the officious types who just like reporting. So it would have to be done in a way where it wasn't about scoring points off other people, and it would have to be really about the security of the organization.

Craig:

I want to jump in if I can, and no disrespect to your former student, but that's such a security-oriented thing to say, and, you know, that's their job, so it's natural that they have that kind of predilection. But think about the cultural damage that this sort of thing can do, the cohesion damage. When we're pushing teamwork and cooperation, this could be a huge step backwards. I mean, it makes sense to study the effects.

Craig:

So I'm not criticizing the authors for doing this paper, but you know, I think there's some pretty clear downsides to this approach.

Tom:

It presumes that everybody's going to report everything, and I think the reality of the situation is more in terms of normative work groups, where everybody knows there are rules, everybody knows we're going to budge on a couple of these rules, but there are steps beyond what we would permit, and then we will report. There's a researcher in management who talks about organizational clans, where people have in-group norms of what's considered to be right and wrong behavior, and they share that. They will report egregious violations, but they will be comfortable with normatively agreed violations at a low level, for the productivity purposes we've talked about before. I've always felt that when somebody's really busting the rules, it's important that you let somebody else know. And in my work with large organizations, typically it was cross-work-group violations that got reported: you'd share passwords within your own group, but there's this guy over there, and if he shared a password, well, that would be wrong, because we know he'd be doing it for the wrong reasons.

Karen:

I did feel that they didn't make a very strong distinction here between reporting on, as Craig says, the 99% of people who are really just trying to do their jobs and make a mistake, and whistleblowing. If we see somebody doing something illegal and unethical, I'm all for reporting that.

Craig:

Well, and we're asking people that we've already acknowledged aren't really that knowledgeable about proper security to report. Do they really have the knowledge to report? The other thing bothers me a little bit. One of the things they looked at was personal responsibility to report, and they claim that's based on virtue ethics. But that's really kind of based on a view of duties, so I'm not so sure it's virtue ethics to begin with. And duty to whom? Who is your primary duty to? To Tom's point, generally, if you have a clan within an organization, your primary felt duty is to the clan. So I'm just not so sure how this would all play out. But it's very interesting research.

Karen:

I'm glad that their paper was included. But you know, I did some work with a health board here in the UK, and there were groups of people, you talk about clans, I love that word. There was a group there that did social work, and they all worked with the same set of patients. The patient would come in and they wouldn't see the same person every time, but one of the group would always treat that patient. They told me that if somebody came in in the morning and they'd forgotten their password, they would give them a password, because they all had access to the same patient data. They weren't giving them access to anything they weren't allowed to have, and if they didn't give that person a password, they wouldn't be able to work that day. And I just thought that what they were doing there made so much sense to me. It wasn't anything they shouldn't have had access to.

Craig:

Well, and think about what the impact would be if one person said, I'm never going to do this, that's against the rules, I'm not going to do it. That's going to have pretty negative implications for the clan and for how they work together. It's a complicated landscape, you know. I don't want to give the impression that we think there are simple answers; there absolutely are not. I'm not sure where that all goes, but it's certainly worth thinking about. Shall we move on to the next paper? We've got two to go, I believe, and this one is about the impact of cyber hygiene behavior. I thought this was a pretty interesting paper; this idea of hygiene is interesting.

Craig:

So, Karen, what can you tell us about this paper?

Karen:

Yeah, so both gentlemen put their finger on something in this paper, which was presented by Harsh Parekh. What he put his finger on is that there's really no agreement on what cyber hygiene means, even though we're always talking about cyber hygiene; it's not defined very well in the literature. For example, he found one definition that says cyber hygiene is the cybersecurity practices that consumers should engage in to protect the safety and integrity of their personal information on their devices from being compromised in a cyber attack, and when I read that, I thought, what about the actual device itself? I thought they put their finger on that. The other thing they put their finger on is what is encompassed in cyber hygiene, because that's another thing our field cannot agree on.

Karen:

Everyone agrees about strong passwords, but for other behaviors, some people will say, of course that's cyber hygiene, and other people won't.

Karen:

What they're focusing on, to my mind, is that some behaviors are completely habitual, and once you're engaged in those behaviors, it becomes very difficult for the person who is, for example, used to using a password like 123456.

Karen:

If you come along and say, oh, you know, you shouldn't be doing that anymore, the habit is far more powerful than your instruction, or even their own desire to use a stronger password. The habit has become the thing that's going to overpower everything else. I thought it was very interesting that they were talking about this interplay between what we habitually do and what we have to think about doing, and the way cyber hygiene is affected by that. And then what they call target suitability: things that we do that make us more likely to fall victim. So the habits come in at one end, where the things that we habitually do with respect to cyber hygiene are resistant to change, and then those habits make us more of a target for cybercriminals. I thought that was a very interesting way of looking at things. They haven't done the study yet; this paper was basically laying out the study they wanted to do.

Tom:

It sounds like hygiene as an analog to washing your hands: keep your computer clean. I had the wrong idea based on the title, because hygiene means something very specific in the industrial-organizational literature, in terms of things the employer provides to the employee to achieve objectives; I usually analogize a hygiene factor to a paycheck. I can see there's a paper to be written on what hygiene is and is not in computer security, because washing your hands is the better metaphor, keeping your computer clean. I like that.

Karen:

Yeah, so the ones they mentioned here are storage and device hygiene, transmission hygiene, social media hygiene, authentication hygiene and email and messaging hygiene. Those are the five they mentioned.

Craig:

That's pretty revealing and I'm sorry if I'm going to sound like a broken record here, but could you go through that list again?

Karen:

So storage and device transmission, social media authentication, email and messaging.

Craig:

You know, that's a lot to keep track of, and that goes back to the idea of stress and some of the other things we've talked about. That's just a lot. So you can almost see people throwing their hands up and saying, you know, enough, I'm just going to live my life.

Karen:

It's very interesting that you should say that, because in this country companies can apply for something called Cyber Essentials. It requires them to do five things, and then they can get accredited and they have Cyber Essentials accreditation. When the people came up with these five things, I asked them, why five, and they said it was manageable.

Karen:

Yeah, because I said, oh, why don't you add this, and why don't you add that? And the message was: yes, those are all valid, but remember, we want people to start with five, and once they've got those five in place, it's easier for us to say, hey, you know, this would also be a good idea. And I thought there was a lot of wisdom in that.

Craig:

Yeah, and as I think I've said before, if you give people too many rules, they end up with no rules.

Karen:

That's right.

Craig:

That's interesting, and I really like this idea of habit, because habits can be a power for good or they can be a power for evil, but either way, habits are extremely powerful.

Karen:

Yes.

Craig:

That's a challenge for cybersecurity: how do we overcome some bad habits? If we can embed good habits, that reduces some of the workload, which reduces the stress, and so on. So it could have a real positive domino effect if we can turn some of those habits around.

Karen:

Yes, but changing a habit is, I think, not as easy as giving people new facts, and they also make that point. You can't just say, hey, you're doing this wrong, do it like this.

Tom:

Well, so maybe the way AI integrates with security going forward is as kind of a background teammate that will tap you on the shoulder and say, hey, you know, you're doing the wrong thing, your habit may be about to cause a breach. And I think we're back to AI's place in security with the last paper: what people think about security, what AI automation does with security, and how we're going to merge those two together.

Craig:

I see ChatGPT is in the mix.

Tom:

It's in Allen Johnston's paper there, and I'm really curious to see what you thought of that.

Karen:

Yeah, Allen Johnston was the one who presented it at the conference.

Karen:

Maybe just as an aside: they collected the data from Reddit, and I found that very interesting, because previously you would see Twitter data being used, but with the recent changes in the cost of Twitter data, I see that people are now turning to Reddit.

Karen:

What they were trying to do was get a sense of how people are using AI in the cybersecurity industry, and they used a grounded theory approach to investigate this and to reveal exactly what there was to glean from the Reddit comments.

Karen:

I just thought it was a really interesting approach to do things in this way, especially at this point when ChatGPT has so suddenly become so popular. They also talk about inscrutability, the fact that you don't really know where this is coming from, which causes a tension between the human and the AI over who understands the thing correctly. The whole inscrutability of it is that you just don't know how it arrived at the advice it's giving you. On the other hand, once again, the fact that it can chat to you the way a fellow human can, or apparently the same way, makes people more open to using it. So they're saying there are collaborations and tensions in human-AI interaction which are just fascinating, and the suggestion for the future is that there is currently a lack of studies in IS on the relationship between AI and cybersecurity, and this is a really nice, ripe area for people to start working on.

Craig:

It's going to be difficult to figure all of this out, but it is a fruitful area of research. And our practitioner listeners, God bless you, you've got a lot to think about in this area.

Karen:

Indeed. I think one area I've heard of that seems to hold promise is explainable AI. I think if people just knew what this thing is doing and how it's arriving at the things it's telling them, that would already help a little bit.

Craig:

Yeah, and I know there's a lot of work going on around explainable AI. I still wonder about the balance of that. If you need to know more about AI, then that kind of hurts the efficiency of the use of AI.

Karen:

Yes, you're right.

Tom:

It's been an interesting session, because we've covered a lot of ground we haven't touched in any single podcast before, particularly AI and human habituation and the intermixture between those. What does the conference chair think is the overarching notion and theme about where security is going in the modern day, based on this year's conference?

Karen:

Five papers out of the 16 were on security policy compliance, so for me that felt like tradition showing. But I feel like the impact of AI on cybersecurity is the thing that really stood out, as the thing that's going to affect all of us going forward, and we need to figure out how best to harness AI. That was my impression; that's what was coming out of the talks. The other thing that really struck me was the young, early-career researchers: their level of commitment, how articulate they were. I really, really enjoyed listening to them talk about their work. That was lovely.

Craig:

Well, Karen, thanks so much for joining us. I don't know about you all, but I'm a little tired; we've covered a lot of ground here today. But it's been fantastic. For you listeners, I want to remind you that we'll have links to all of the papers, the keynote and the conference information in the show notes, which will be available in your podcast app or at cyberwayspodcast.com. So, any last words, anybody?

Tom:

Well, speaking as one of the podcasters here, our goal has always been to reach industry executives with actionable advice based upon the science we're doing. So I highly encourage our interested listeners who find these notions of worth to share them with their friends and to spread the word about the podcast, because there's good stuff here that can be put into place in the workplace right away. It's been Cyber Ways, a production of the Louisiana Tech College of Business Center for Information Assurance, with support from the Just Business Grant, courtesy of Dean Chris Martin. We come to you monthly with new topics of interest to industry, based upon the science we do, and we hope you'll join us again next time. Find our podcast wherever you get your podcasts. For now, goodbye to all, and stay safe.

Craig:

Great, and thanks again, Karen. We appreciate your time.

Cybersecurity Workshop Highlights and Research Discussion
Security Compliance and AI Factors
Misuse of Access and Reporting Concerns
Habits, Hygiene, and AI in Cybersecurity
Podcast for Executives