Cyber Ways Podcast

Cyber Complacency with Dr. Tom Stafford

September 30, 2021 Tom Stafford, Craig Van Slyke

In this episode of Cyber Ways, Craig interviews co-host Dr. Tom Stafford about his 2021 paper, Platform-Dependent Computer Security Complacency: The Unrecognized Insider Threat, which was published in the IEEE Transactions on Engineering Management.


Dr. Stafford is the J.E. Barnes Eminent Scholar in Data Analytics at Louisiana Tech University. He holds doctorates in Marketing from the University of Georgia, and Management Information Systems from the University of Texas at Arlington. In addition to publishing dozens of articles in high-quality journals, he has served as Editor-in-Chief of the Decision Sciences Journal, and is currently co-Editor-in-Chief of The DATA BASE for Advances in Information Systems, which is the oldest continuously-published journal in information systems. Dr. Stafford also co-chaired the 2018 Americas Conference on Information Systems, and the 2019 IFIP 8.11/11.13 Information Security Workshop. He is also co-chairing the 2025 International Conference on Information Systems.


Tom’s paper discusses how many problematic security behaviors are the result of complacency or ignorance, rather than explicit malicious behavior. He also describes the concept of cyber-complacency, which he defines as an unconcerned dependence on technological security protections.


Abstract (direct copy from the paper)

This article reports on a grounded theory investigation of subject response anomalies that were encountered in the course of a neurocognitive laboratory study of computer user cybersecurity behaviors. Subsequent qualitative data collection led to theoretical development in specification of three broad constructs of computer user security complacency. Theoretical insights indicate that states of security complacency can arise in the form of a naïve lack of concern about the likelihood of facing security threats (inherent complacency), from ill-advised dependence upon specific computing platforms and protective workplace technology implementations for protection (platform complacency), as well as the reliance on the guidance and advice from trusted social others in personal and workplace networks (social complacency). Elements of an emergent theory of cybersecurity complacency arising from our interpretive insights are discussed.


Link to the paper:
https://ieeexplore.ieee.org/document/9373614

The Cyber Ways podcast is brought to you by the Center for Information Assurance at Louisiana Tech University's College of Business. Cyber Ways is funded through a Just Business grant, made possible through the generosity of donors to the Louisiana Tech University College of Business.


https://business.latech.edu/cyberways/

Tom:

Folks, this is the Cyber Ways podcast, where we translate our academic knowledge about information security into stuff that you can use as a security professional. We think it's a unique mission, and we think you'll like it. I'm Tom Stafford,

Craig:

Craig Van Slyke. Tom and I are your hosts on your journey to knowledge. Cyber Ways is brought to you by the Louisiana Tech College of Business's Center for Information Assurance. The Center offers undergraduate and graduate certificate programs in cybersecurity and sponsors academic research focused on behavioral aspects of cybersecurity and information privacy. Today on Cyber Ways, we're going to discuss an interesting paper written by my co-host, Dr. Tom Stafford. Dr. Stafford is the J.E. Barnes Eminent Scholar in Data Analytics at Louisiana Tech University. He holds doctorates in Marketing from the University of Georgia and in Management Information Systems from the University of Texas at Arlington. Yes, you heard that right: he's got two PhDs. In addition to publishing dozens of articles in high-quality journals, he has served as Editor-in-Chief of the Decision Sciences Journal and is currently co-Editor-in-Chief of The DATA BASE for Advances in Information Systems, which is the oldest continuously published journal in our field. Dr. Stafford also co-chaired the 2018 Americas Conference on Information Systems and the 2019 IFIP 8.11/11.13 Information Security Workshop. He's also co-chairing the 2025 International Conference on Information Systems. Today we're going to discuss Tom's paper, Platform-Dependent Computer Security Complacency: The Unrecognized Insider Threat, which was published in the IEEE Transactions on Engineering Management. Tom, can you give us the big picture of your paper? What's the paper about?

Tom:

Lackadaisical workers can sink the ship, however well-meaning they may be. My concern, in getting involved in security research, was that everybody seemed to be trying to catch a criminal. And there aren't too many criminals in the workplace. There are a few, and we're very good at detecting them and ejecting them, but the problem always seemed bigger than that. I had only thought about that intellectually, up until the point that I tried to test a couple of theories about how people would respond to threats, found out they weren't responding the way the theory suggested, and had to go find out why. And in the process of finding out why, I reminded myself that cybersecurity is probably more about keeping the good people in your company on their toes than it is about finding and ferreting out the bad people in your company. That's not to say the bad people don't need to be detected and ejected; it just means that cybersecurity is everybody's job, and some people may forget that in the day-to-day routine of work. That's the high-level overview: it was an attempt to explain something that didn't fit with theory, and in the process, to find out something very interesting about routine workplace behavior.

Craig:

That's where the interesting bits are: when what we think doesn't actually turn out to be. If I recall correctly, there's kind of an interesting backstory to how this article came about. Wasn't it part of a larger study?

Tom:

It was supposed to be a neurocognitive study. I had engaged time at the Center for Neuro Research up at the University of Memphis to run a test that biometrically identified computer user responses to security threats, and it was well designed. It followed accepted theory; Protection Motivation Theory is the one that I'm thinking about here: the idea that if you show people a problem, show them a solution, and make sure they know how to implement the solution, they will take care of the problem. I didn't get that response, and it's a very robust theory, so I was troubled by the lack of response. I thought maybe I had a problem in my lab setup, so I started asking my subjects. My subjects were all technology students, by the way; most of them were employed either part time or full time with local Fortune 500 companies, so they were sophisticated technology users, and I expected that their responses would be more in line with typical responses in the workplace. I asked them, 'How come this threat', it was a spyware manipulation, 'didn't seem to bother you at all?' The most alarming answer I got was, 'Well, I'm not really worried about that. I use a Mac, you know, and I'm safe.' I abstracted that into the statement 'I've got a Mac, no worries.' I'm a Macintosh user myself, and I know the Mac is as much a target as any other computer; it just isn't targeted as prevalently, because it isn't used as widely as the PC is. Even so, I felt I had to get to the bottom of that. And so, having learned that the people who were not responding to my manipulation were the ones who were typically Apple computer users, I started engaging in an after-experiment interview with everybody, asking them about their computer use, their perceptions of the safety of their computer platform, and why they did or did not feel secure in the environment they had chosen. That led to this theory of cybersecurity complacency.

Craig:

You know, it's just fascinating. And as a Windows user, I find it slightly amusing that the Mac folks think that way. But one of the things I wondered about when I was reading through your paper is what that means for organizations. I mean, does that complacency carry over into the workplace?

Tom:

I can see you read the paper closely, because I'd forgotten to bring that up in my previous response. I noticed this, we'll call it a lackadaisical response, from two different types of computer users. The Macintosh users were the most obvious; the first people I asked, and most of the people I asked, said, 'Well, I'm not concerned about security threats. I've got a Mac; my Mac keeps me safe.' But some of them said, 'Well, you know, I work for Company X, and Company X is a very good security company, and they've got my back.' PC users who worked in a secure operating environment felt complacent because they were assured, improperly so I think, that their company's security implementation had them covered. The troubling issue with that was, you know, the corollary to people who use a Mac and aren't worried about security is people who work for a Fortune 500 company with a chief information security officer who also feel like they don't have to be careful. And when we talk about careful, we're talking about simple things: not opening the wrong email, not clicking on the wrong link, not sharing your password, keeping antivirus software installed and updated. People in PC environments who worked for companies that had robust internal security implementations tended to be similarly unwatchful, the way I would phrase it, about their security behaviors as the Mac people. There weren't as many of them, but it was a noticeable fraction. And the concern there was that people who engaged in that practice, either on the Mac or on the PC at work, would take similar practices home to their home network setting, where there was a whole lot more to lose: their bank account, or a spambot on their computer running attacks against the friends in their address book, and the things that typically happen with exploits.

Craig:

So if I'm hearing this correctly, it seems like there's almost a regression to the lack-of-effort side. I don't think I expressed that very well, but from what I'm hearing, their behaviors are going to default to whichever requires the least amount of effort. And so, you know, if I'm covered at home, well, then I'm not going to worry about it at work, or if I'm covered at work, I'm not going to worry about it at home, even though those are different environments. I mean, they're increasingly overlapping these days, but they really are different computing environments. Does that make sense?

Tom:

It's actually something I wish I had thought of when I was writing implications: people in the workplace tend to take the path of least resistance, and they assume, or take security for granted, if you will. If they're working in a corporate workplace, they just assume it's been taken care of for them; if they're using their Mac, they assume that Apple is taking care of it for them. It's an effort-minimization approach, I suppose, when in fact you and I both know that anybody, anywhere, using any kind of technology attached to a network has to be vigilant, because the threats are becoming ever more sophisticated, and even highly informed security researchers like us are oftentimes fooled. I ended up falling for a phishing exploit one time recently. I only knew it was an exploit when I found out it wanted me to install Adobe, and I knew that was a problem because I've already got Adobe on every computer I own. They were trying to get me to download malware so that I could read the president of the university's report from a strategic planning committee that I was a member of. Security exploits have gotten scary good. There's no place for complacency from that kind of perspective: as much care as you can take may not be enough, but certainly taking no care is a route to disaster. That's the layman's explanation I would give of what I've learned from this.

Craig:

Well, what's a little bit ironic, maybe ironic is the right word, is that they were able to take advantage of you wanting to do your job. You know, it's not some random thing where they're going to send you 25% of the $17 million from Nigeria. It's 'while you're on this committee.' And I guess they just got lucky that way, but anybody at the university would want to see that report.

Tom:

You know, I did not really delve deeply into phishing as an exploit, and phishing, in my mind, is the most dangerous one we face. I was looking at spyware: perceptions of threats from malware, computer behavior monitoring software, things that would log your keystrokes and try to get your passwords and credit card numbers. I've been interested in that since, I think, 2004. In my work, and when I speak of my work, I mean as a computer user on the university campus, phishing is the biggest threat I face on a daily basis. The one from the president of the university was outstanding in my mind because that's what we call spear phishing: directly targeting a person because of their connection to other important people, so that your exploit message appears to be legitimate. I haven't seen another one like that. We've got a very good security manager on our campus; I forward everything like that to him, and he gets right on it. What I have seen a lot of lately, and I mention this as a public service as well as a talking point, are messages purporting to come from amazon.com saying, 'We have received your computer order, we will be shipping next week, click here for tracking, or click here to modify the order.' The exploiters know good and well that anybody receiving an unexpected notification of a computer order is going to jump right on that, because that's a couple of thousand dollars. I don't respond to those, because they come to my work address, and my work address is not my amazon.com address. They have no way of knowing that, but at the same time, it's a pretty good guess on their part. And I know other people must be opening those messages and checking them out, because who wants to be responsible for an accidental expenditure of 1,500 bucks in the workplace?
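The episode doesn't include any code, but the tell Tom describes, a message claiming to come from Amazon at an address Amazon has never seen, is mechanically checkable. Here is a minimal Python sketch, with a hypothetical allow-list, of the kind of display-name-versus-domain test a mail filter might apply:

```python
from email.utils import parseaddr

# Hypothetical allow-list: domains a brand legitimately mails from.
KNOWN_SENDERS = {"amazon": {"amazon.com", "amazon.co.uk"}}

def looks_spoofed(from_header: str) -> bool:
    """Flag mail whose display name claims a brand its address domain doesn't match."""
    display, address = parseaddr(from_header)
    domain = address.rsplit("@", 1)[-1].lower()
    for brand, domains in KNOWN_SENDERS.items():
        if brand in display.lower() and not any(
                domain == d or domain.endswith("." + d) for d in domains):
            return True  # brand name in the display, wrong sending domain
    return False

print(looks_spoofed("Amazon Orders <track@amaz0n-orders.net>"))  # True
print(looks_spoofed("Amazon <no-reply@amazon.com>"))             # False
```

Real mail filters layer SPF, DKIM, and DMARC checks on top of heuristics like this; the sketch only illustrates the mismatch Tom is pointing at.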

Craig:

Well, and it's a numbers game. You figure they send out a million of those; some percentage of those folks will actually have just purchased a computer from Amazon. And yeah, you have to be really careful. I long for the days when you could tell it was a phishing attack by the poor use of English and really obvious errors, but those days are gone.

Tom:

Well, I think I've learned why. As part of this study, I collected some data that I did not end up using, because it was tangential and interesting but not directly relevant to the emerging theory. I was talking to some people involved in the national defense industry, and their point on what I'll call cyber slacking, if you will, is that we have to be very, very careful about people being alert, because the exploits are no longer being designed by other people. They're being designed by AI applications that are capable of figuring out all the deep details that would lead you to believe in the legitimacy of a phishing exploit. It was a very disturbing point, because the corollary was their other point, which is that at this juncture, if we're being attacked by AI, AI itself is our only defense. And so it's going to be AI bot against AI bot in the future years as we seek to keep our networks secured; in their case, it was network intrusion detection systems. And as we try to ignore things that look like they might be relevant to us because they include personal information, AI can dig up any of that stuff. It's all online, thanks to Facebook, etc. There's very little about us that isn't available to somebody who has the resources to scrape it up in a web search.
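Neither the paper nor the episode names specific defensive tooling, but the "AI bot against AI bot" defense Tom relays typically starts with anomaly detection over network telemetry. A toy sketch of that idea, assuming Python with scikit-learn and entirely fabricated per-connection features:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Fabricated features per connection: [bytes sent, duration (s), failed logins].
normal = rng.normal(loc=[5_000, 30, 0], scale=[1_500, 10, 0.3], size=(500, 3))
probe = np.array([[90_000, 2, 12]])  # a burst resembling credential stuffing

# Fit on routine traffic only; the forest isolates points that look unlike it.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(model.predict(probe))       # [-1] -> flagged as anomalous
print(model.predict(normal[:3]))  # mostly [1] -> treated as routine
```

A production intrusion detection system works from flow logs rather than toy arrays, but the division of labor is the same: the model surfaces the oddities, and humans stay alert enough to triage them.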

Craig:

Well, that's Stafford's rule number one: never click on a link in an email.

Tom:

I almost want to say don't use email to notify people about web resources, but so much of our professional life as researchers revolves around that. I'm looking here at an upcoming webinar that Andrew Burton-Jones is going to sponsor, on a really fascinating issue of one of our main journals about the emerging role of artificial intelligence in management, and it's got five different links to the manuscripts that the people are going to discuss. And if I didn't know... well, actually, I can see that the URL for the source is showing, and I can see that it's a legitimate one. But yeah, I have to click on that to find out what's going on. I don't know when we get to the point of having secure ways of communicating internet resources that aren't susceptible to spoofing and phishing.
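Tom's habit of eyeballing the shown URL against the real target is something software can help with. A small Python sketch, standard library only, of that comparison: does a link's visible text name the host it actually points to?

```python
from urllib.parse import urlparse

def link_mismatch(href: str, display_text: str) -> bool:
    """True when a link's visible text names a host other than its real target."""
    real_host = urlparse(href).hostname or ""
    shown = display_text.strip().lower()
    if "." not in shown:  # plain words like "click here" can't be checked this way
        return False
    shown_host = urlparse(shown if "//" in shown else "//" + shown).hostname or shown
    return not (real_host == shown_host or real_host.endswith("." + shown_host))

print(link_mismatch("https://evil.example.net/login", "ieee.org"))               # True
print(link_mismatch("https://ieeexplore.ieee.org/document/9373614", "ieee.org")) # False
```

Hovering over a link performs the same check manually; the point, per Stafford's rule number one, is that the displayed text proves nothing about the destination.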

Craig:

I think I saw a subtext in your article, but maybe it was just me reading into it. You know, people want to get their work done. Employees are not rewarded for practicing good security behaviors; they may be punished for not practicing good security behaviors, if they get caught and it's bad enough, that sort of thing. But generally, anything extra you have to do to engage in sound security behaviors represents overhead, and it gets in the way of people doing their jobs. And so I think one of the things that security professionals and researchers like us need to do is really focus on how we get sound security behaviors embedded into the work, so that they don't represent extra work on the part of the workers. I mean, we've talked about how inconvenient it is to have multi-factor authentication when you're trying to get logged into a classroom computer.
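For readers curious what the authenticator apps mentioned here actually do: most implement time-based one-time passwords per RFC 6238. A minimal, standard-library Python sketch of the algorithm, using a throwaway demo secret rather than anything real:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32: str, digits: int = 6, period: int = 30) -> str:
    """RFC 6238 time-based one-time password (SHA-1 variant)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period          # 30-second time step
    msg = struct.pack(">Q", counter)              # 8-byte big-endian counter
    digest = hmac.new(key, msg, "sha1").digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation, per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # six digits that change every 30 seconds
```

Because the code is derived from a shared secret plus the clock, the server can verify it without any secret crossing the network at login time, which is what makes the extra step worth the inconvenience Craig describes.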

Tom:

So, worse yet on the multi-factor: I'm discovering lately the difference between my work computer and my home computer. I've got my home computer tied down very, very tight, because that's the one I pay bills on; it has access to my bank account from time to time during the month. And I've secured it to a point of protection where the multi-factor authentication works, but it doesn't work the way it was designed to, which is 'click this button to keep your authentication active for 45 days.' I have to authenticate each time I engage with work resources on my home computer, which is how I would want it. It's a pain to do, but I know that I'm safe when I do it, and I sure don't want to jeopardize my bank account just to save a few keystrokes.

Looking to the paper, there were a couple of distinctions between the types of workers who engage in non-secure behaviors, and we have the clueless at the top of the list: people who simply don't know. That would be the fault of the manager, because the job in cybersecurity management is to make sure people are informed and trained, that they know what the threats are and they know what to do about the threats. Maybe people skipped their training; maybe they didn't pay attention to it. But then again, the manager's job is not only to train but to ascertain that the training took. The next non-secure user would be the one who breaks rules in the name of expediency, which is what you were just talking about. And I've seen this in several of my investigations of security practices in the workplace: we don't get paid for being careful, we get paid for being productive, and sometimes productivity requires things that we would frown on, like the sharing of system credentials, like passwords on Post-it notes. I freely admit I am as likely as anyone to fall prey to the desire to keep my passwords handy. I have on my telephone a 128-bit encrypted password file where I keep all the things that I would normally write on a Post-it note, and used to write on a Post-it note. It's an extra step to open up the password file, because it is also password protected, but that's my one global password, and once I'm in there I can find the passwords to everything I need. It's hopefully not something that could be exploited, and it may be something you would want to think about, dear listener. It's called a wallet; it runs on the iPhone and it runs on Android. I've been using it for a decade, because I have so many passwords in use that I cannot remember them all, and I would otherwise be that guy writing them down on a piece of paper and hiding them under the desk blotter, where the social engineers know to look.

One of the most fascinating speeches on cybersecurity I ever heard was from a KPMG consultant who had come from the CIA or the NSA, one of the main security agencies. After explaining how easy it was to infiltrate the open hotel network, he went on to explain how difficult it is to keep your passwords protected if you don't lock them down or keep them in memory. The Eastern Bloc, and primarily he was referring to people who worked for Vladimir Putin and all the former KGB, he characterized as mafia, because frankly the whole KGB was trained in criminal activity: how to infiltrate, how to steal, how to exploit, and so on. And they have had no jobs since the Soviet Union fell, so they've been tasked with finding ways to exploit us. What they do is hire underpaid janitors in company headquarters to go around and look in the wastebasket, look under your desk blotter, look on the wall, look in the obvious places you would put that Post-it note with your password, collect that information, and pass it back to Putin's security minions. There are people in your workplace who are being engaged by foreign operators, this consultant asserted, and I have no reason to disbelieve him, who are looking for the low-hanging fruit, and the low-hanging fruit is the password not securely stored. Something to think about.
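The "wallet" pattern Tom describes is standard: stretch one master password into an encryption key and use it to lock the whole password file. A minimal sketch of that pattern, assuming Python with the third-party cryptography package (its Fernet recipe encrypts with 128-bit AES, loosely matching the "128-bit encrypted" file Tom mentions); the master password and entries below are placeholders:

```python
import base64
import os

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def key_from_master(master: str, salt: bytes) -> bytes:
    """Stretch the single master password into a Fernet key."""
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                     salt=salt, iterations=600_000)
    return base64.urlsafe_b64encode(kdf.derive(master.encode()))

salt = os.urandom(16)  # random salt, stored alongside the ciphertext
vault = Fernet(key_from_master("correct horse battery staple", salt))

token = vault.encrypt(b"bank: hunter2\nemail: s3cret")   # the wallet file body
print(vault.decrypt(token).decode())  # only the master password unlocks it
```

One strong master password guarding many random, never-reused passwords is exactly the trade Craig endorses next: a single extra step in exchange for retiring both the Post-it note and password reuse.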

Craig:

This is a good example of how technology can actually remove some of the overhead of engaging in sound security behaviors. With a password manager, it's really not an extra step.

Tom:

Much like using our dual-factor authentication: it's just an app on my phone that I have to pull up and use when it's time to use secure stuff. You're right.

Craig:

And it keeps people from trying to reuse passwords. I mean, that's one of the big benefits, right? You know, you don't have to keep it in your own memory; you've got this technology tool that can help you with it. So, you were talking about different kinds of complacent users. One was the clueless.

Tom:

Then there's the expedient rule breaker. The third is the kind we worry most about, because they are the purposeful violators. They are the insider threat that most of the security research rightly focuses on, the criminal justice perspective of finding somebody who is trying to break the rules for a reason: somebody who wants to take down your data center, somebody who wants to log into the HR system and give themselves a raise, whatever the exploit would be. They're intentionally non-secure users, and there's not a lot we can do about that other than find them and eject them from the firm. The other types, the incautious ones who are not aware of their lack of caution, or the ones who are simply lazy and trying to save time and get more work done, can be trained through the proper types of counseling about what needs to be done and the importance of doing it. But the people who are really out to take advantage of the company on purpose are the ones that training can't help. There was one last type of user I characterized, but that was sort of the default in the theoretical platform: the highly secure user. There are a few people in every firm who are more secure than you would even care to know. Let's see if I can think of an example of somebody in our organization. Oh, our local services provider, the guy who runs the IT in our building. He is secure in several different ways that you would never expect. He has responsibilities that span security, so he has to secure the security, if you will; it's very interesting to talk to him about that. He's also the guy who runs all the surveillance cameras that we utilize to make sure people don't walk out of the building with the computers and the screens and all that expensive hardware. It was an interesting study, because I didn't expect to go where I did, and also because if it had gone the way I expected, the neurocognitive manipulation would have been just one more test of an established theory, not all that different or dramatic from the last test of that same established theory. But as it was, I ended up jumping into a new theoretical area out of sheer desperation to figure out why my theory wasn't working, and I kind of enjoyed where I ended up.

Craig:

Yeah, I think you came up with something that was at least as interesting as what you would have otherwise. But I want to make sure that I get a chance to put in one of my ideas that you've heard me talk about: we need to get away from assuming bad intent or laziness. You even mentioned this in the paper. A lot of poor security practice is the result of an employee trying to do what they think is best for the company. You know, it's almost like when you're trying to bring in a bunch of supplies through the back door of a restaurant: instead of locking and unlocking it each time, you leave it propped open. Well, you're doing that because you're trying to be more efficient for the organization. And I think things like sharing credentials, letting somebody else use your terminal... we could go on and on with examples of this. But in my experience, some security professionals treat that as inherently bad behavior, and I think we need a little bit of a mindset change: it's not inherently bad behavior, it's a problem with the way security is implemented in the organization. Anytime security gets in the way of efficiency and effectiveness, there's room for improvement. How is an entirely different question, but I think that's the mindset we want to promote: don't blame the bad user, blame the bad system. That's my take on it, at least.

Tom:

And I suppose that's probably the best way to approach this for our audience, because I'm hoping that we have a lot of managers with security responsibilities clicking into this to learn how to manage security better. I'm a management buff; I took a minor in management, I'm an industrial psychologist, and I belong to the APA. To me, the ability to present caring and informed leadership to the workforce is critical to the success of the company. To treat your employees as though you expect the worst of them is inviting a self-fulfilling prophecy, in a certain sense. I'm not saying expecting everybody to be bad will make everybody be bad, but it will sure dent morale a little bit, and your most productive workers are the ones who are happiest, who feel most trusted and most valued. A regimented cybersecurity approach can take away the personal dignity of the highly motivated worker who wants to do his or her best for the company. The fact that they don't know how, or the fact that their work schedule requires them to occasionally take quick workarounds against security, as you point out, is not their fault. They're trying to do their best for the company; the company may not be doing its best for them. I'm not faulting anybody for this; it's the way work is. Somebody has to be in charge, somebody has to follow orders, and where the two meet is the intersection of efficiency versus motivation. I think the other point in the paper that I thought was useful for people to know is the distinction between people who aren't doing security right because they're lazy, versus people who aren't doing security right because they just missed a point they weren't aware of. Security is tricky stuff. The exploiters are getting better and better, and it's hard to stay up with them, such that there will often be very well motivated and well informed employees who have gotten caught by a new exploit. Remember my anecdote earlier about trying to open the link the president of the university sent me and realizing, holy smokes, that is an exploit, and they caught me. But contrast all of those against the ones who are simply apathetic. The research up to this point, before the notion of cyber complacency was introduced, revolved only around the idea of people not doing their best for security because they don't care. There's a colleague of ours, Scott Boss, who is who I cite when I cite the apathy perspective. Merrill Warkentin, who we've talked to, also uses it occasionally, but he also draws from Scott Boss's work. Apathy is his company killer. If your people aren't happy, well-intentioned workers, then you have failed as a manager, not they as workers. Ninety-nine percent of workers are well intentioned, well equipped, and well motivated to do the job you want them to do. You simply have to structure the work for them, and that's as true in security as anywhere else in the company.

Craig:

To me, that's the real nugget of gold in your paper: if security professionals will challenge the inherent assumption that users are just being lazy or being bad actors, then they can start to use your framework to identify why that might be. Why are they not behaving the way we think they should behave? Is it that they're bad people? Well, that's a different solution than if they just don't know how to recognize a phishing attack. And that's different than if they know, but some of the security behaviors get in the way of getting their work done, and they're remunerated for getting their work done. And so I think your paper, if that's all it did, and it does more than this, but if that's all it did, would be extremely valuable to security practice, because it gives you a tool to do a sort of root cause analysis. Because if you provide more training when the real problem is that the processes get in the way of work, you're solving the wrong problem. So you really have to think it through, and you've provided a tool that can really help people think through: what's the real cause of the problem here?

Tom:

I think you probably nailed it. It's a management problem, except for the one out of 100 people who are psychopathically maladjusted; psychopathy is a term they use in the criminal justice perspective on security. Except for the one out of 100 who are going to do bad no matter what, everybody else is a well-meaning happy camper who simply doesn't have all the tools they need if they're not following the right protocols, and we can't really put that on them, can we? A well-organized workplace is where good work takes place, but it's the manager's job to make the workplace well organized, not the employee's.

Craig:

That's exactly right. And I'll put in a plug for a paper that I did with Michele Maasberg, who's now at the Naval Academy, Selwyn Ellis, who's our department chair, and Nicole Beebe, who's at UT San Antonio, where we looked at the dark triad, which is psychopathy, narcissism, and Machiavellianism, and how that affects, well, we looked at computer sabotage, specifically. But you're right, it's a really small percentage where that's the driver. Well, I think it's a good time to wrap up. Any last points you want to make?

Tom:

When you look at the research that we're reviewing on this podcast, read it like I would have read it before I got my doctorate: go to the conclusions at the back of the paper first. That's where the real actionable managerial information is. The stuff in between is all the professorial stuff about 'I used this method, I used these statistics, here's a theory.' That's useful reading too, but the things that will guide you in your practice of security are largely found in the last three or four pages of the paper. And my paper is no different: it spends about half of its space describing the grounded theory method of interviewing people and making theoretical sense of what they said. That's interesting and useful, particularly if you're in the HR department and you're going to be engaging in workforce research as well, which many HR departments do. But if you're a manager handling security, you probably just want the deep dive into the differences between the oblivious versus the lackadaisical, the deep dive into why people decide that Post-it notes or password sharing are okay, versus the manager's perspective of 'never do that.' Because the fact is, the reality on the ground, I'm finding, is that there are groups who are doing it against company policy with management knowledge. That's another story to be told, a paper I haven't written yet, but it's a wink-and-a-nod situation, because management knows the employees are doing it. They're doing it in an agile development scenario, where the ability to quickly interact is essential and having to log in and log out actually inhibits the work product they need to produce, and so they're permitted to do that in their own closed situation, with full knowledge and the understanding that they don't let anybody else do it. But that's a topic for another day and another paper.

Craig:

Yeah, I think you're right. I'm sure some of our listeners have had experience with someone sharing their credentials over the phone to try to get some report or some email or something like that. You know, it happens.

Tom:

I have to use an anecdote here. Kevin Mitnick is a name everybody should know; he's got a couple of books. He's the guy who perfected the Captain Crunch hack, for example. Does anybody remember Captain Crunch? The whistle that came in the cereal box, and it activated the AT&T long distance switches. Mitnick is a white hat hacker now, but only after he got caught by the FBI in criminal exploits, was jailed, and got some time off for good behavior by helping the FBI figure out how to counter that sort of thing. Mitnick said his best successes when he was exploiting companies did not come from brute force hacking; firewall intrusion and that sort of thing is very difficult to do and resource expensive. His best hacks were getting people to give him their passwords by social engineering: calling up and saying, 'Well, you know, I don't have the password to update the firmware on this telephone. Have you got that?' Just asking. And it's amazing how many people comply. It is actually an aspect of human psychology: when you're asked politely for something, the societal norm is to politely accede to the request. They call it mindlessness, and it can lead to all manner of harms. That's a theory for another day as well, but I just want to leave that closing comment: the uber-hacker of the West says watch out for social engineering. That's where he made his most progress.

Craig:

For the audience: if you're into those sorts of stories, I would highly recommend another podcast called Darknet Diaries, with Jack Rhysider. He's a fantastic storyteller, and he interviews people like penetration testers and social engineers. The stories are fascinating and scary at the same time, so if you're interested in that sort of thing, I highly recommend Darknet Diaries. Well, we're going to wrap up this episode. Thank you, Dr. Stafford; it's been very enlightening. And I do want to say, for those of you who are interested in reading the paper, there will be a link and the full citation in the show notes. This paper is exceptionally well written and exceptionally accessible for an academic article. Really, there's one section that gets into the academic-y stuff, the one that covers the methodology, Section III; the rest of it, I think most practitioners will be able to read quite easily, so I really commend you on that. Next time on Cyber Ways, we're going to talk to Dr. France Bélanger and Dr. Rob Crossler about a very interesting article where they don't just look at one security-related behavior; they look at groups of security-related behaviors, which is a pretty unique perspective in our field. So thank you very much, and we'll see you next time.

Tom:

And it is important to say that the Cyber Ways podcast is funded through the Just Business grant program in the Louisiana Tech College of Business, and we're grateful for that.

Craig:

So join us next time on the Cyber Ways podcast, which is available on all major podcast platforms. We want you to subscribe, or follow, or whatever button your favorite podcast app has. Thank you very much.