Cyber Ways Podcast

Fortifying Financial Data: Decoding Cybersecurity With Jake Lee

January 28, 2024 | Tom Stafford, Craig Van Slyke | Episode 22

Discover the forces shaping your financial data's safety as we sit down with the eminent Jaeung "Jake" Lee, the Clifford Ray King Endowed Professor of Information Systems. In a landscape where cybercriminals lurk at every digital corner, we dissect how a blend of routine activity theory and practical cybersecurity can alter the terrain to our advantage. Together, we plunge into Jake's rigorous study of 461 financial institution employees and unravel the factors that skew risk perception and the likelihood of data breaches.

With Jake's expertise, we peel back the layers of data security, challenging the conventional wisdom that greater transparency equals higher risk. This episode illuminates how the value of information, the effectiveness of guardians, and the strategic reduction of data availability can form a robust shield against unauthorized access. We also navigate the nuanced chess game of social engineering defenses, providing valuable insights and tangible actions that can be applied across industries to shield your organization's most precious assets from the prying eyes of the digital underworld.


Cyber Ways is brought to you by the Center for Information Assurance, which is housed in the College of Business at Louisiana Tech University. The podcast is made possible through a "Just Business Grant," which is funded by the University's generous donors.

https://business.latech.edu/cyberways/

Tom Stafford:

Howdy folks, it's Cyber Ways, a production of the Center for Information Assurance at the Louisiana Tech College of Business, courtesy of Dean Chris Martin's Just Business Grant. We're glad to have you with us today. We have a very special guest, our friend Jake Lee. Am I saying that right, Jake?

Tom Stafford:

Jake Lee, who is a noted security researcher. He's one of, I think of them as the Buffalo Group, the folks who studied under the eminent senior scholar H. Raghav Rao up there at SUNY Buffalo. About half the people who do security research in our world came from that school of thought, and he's here today to talk about his recent publication with some colleagues in that group, "Investigating Perceptions about Risk of Data Breaches in Financial Institutions: A Routine Activity Approach," which appeared in Volume 121 of Computers & Security, if you want to have a look at it. Jake is the Clifford Ray King Endowed Professor of Information Systems in the College of Business at Louisiana Tech, with his doctorate, of course, from SUNY Buffalo.

Tom Stafford:

As I said, his primary research areas are information security, emergency response management, and cloud computing. He's a member of a number of different security-focused groups: the IFIP Working Group 8.11/11.13, that's the Dewald Roode Information Security Research Group. He was also on the program committee for the information security and privacy research workshop of the Korean chapter of the AIS. And, Jake, tell us about your approach. The paper is about data breaches in financial institutions, but I'm assuming it goes much deeper than that in terms of theoretical background.

Jake Lee:

Okay, first of all, thank you very much for having me here. I'm happy to talk about my research today. We surveyed 461 financial institution employees to understand what kinds of factors influence their perception of risk related to potential data breaches in their institutions. So we were interested in how employees assess the likelihood and severity of potential data breaches, which is what we call the risk of sensitive data breaches, because risk is a combination of likelihood and severity. The study uses routine activity theory as a foundation, which is a criminology theory that focuses on environmental factors that can increase the risk of victimization. So that is the brief idea of my paper.
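
A quick way to picture that idea of risk as a combination of likelihood and severity is the sketch below. The multiplicative form and the 1-7 rating scales are illustrative assumptions for this episode page, not the paper's exact measurement model.

def perceived_risk(likelihood: float, severity: float) -> float:
    # Combine perceived likelihood and perceived severity of a breach,
    # e.g. each rated on a 1-7 scale; the product form is an assumption.
    return likelihood * severity

print(perceived_risk(likelihood=5.0, severity=6.0))  # -> 30.0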

Tom Stafford:

So routine activity theory is what captured our attention. Usually I know about these things because I'm really into a lot of different psychologically based theories, but explain what routine activity theory is so the listeners can follow along in the practical discussion.

Jake Lee:

Yes. So as I mentioned, routine activity theory focuses more on environmental factors. A lot of researchers have studied individual factors or organizational factors to minimize security incidents or security breaches. So instead of focusing on those kinds of individual or organizational factors, we tried to focus on environmental factors, because even though people have studied individual and organizational factors, a lot of security breaches are still going on. So we thought maybe focusing on environmental factors would be helpful in minimizing security breaches. Under routine activity theory there are three preconditions: first, a motivated offender must exist; second, the target must be suitable; and third, the absence or presence of guardians heavily impacts the potential for victimization. Those three things form routine activity theory. I'm going to say a little more about target suitability when I explain my research model. So that's the brief idea of routine activity theory.

Craig Van Slyke:

Let me interject with a point here that I think is really critical about the tremendous value of this research. You're looking at environmental factors, right? What's really interesting to me, and I want to see if the two of you agree with me here, is that it's often really hard to do something about the people factors, especially if they're external threat actors. You have no control over them. But you do have control over your environment. So I mean, it's good to understand the motivations and some of the factors about bad actors as individuals, but organizations can do almost nothing about that. But these environmental factors that you're looking at are things that are at least somewhat under the organization's control, and for some of them they have a lot of control. Does that make sense? Yes, for me it makes sense.

Tom Stafford:

I concur. I see, for example, in Jake's model, which I have a graphic of here in front of me, that the suitability of the target can be the intervening factor the company can manipulate to keep that motivated offender away. We can't do much about the offender being motivated to steal data, but we can certainly make our data look unattractive to steal. Am I construing that right?

Jake Lee:

Yes.

Craig Van Slyke:

So tell us more about your model.

Jake Lee:

Actually, I have six hypotheses, or seven if you include the moderation hypothesis.

Craig Van Slyke:

Pause for just a second and explain real quickly that moderation is when one factor affects the influence of another factor on a third: you have A influencing B, and C affects that relationship between A and B.

Tom Stafford:

Often the strength of the effect or maybe even sometimes the direction of the effect, though I haven't seen that as often.
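
For listeners who want to see moderation in concrete terms, here is a minimal sketch of a moderated regression with an interaction term, using simulated data. The variable names and coefficients are made up for illustration; they are not the study's estimates.

import numpy as np

rng = np.random.default_rng(0)
n = 500
A = rng.normal(size=n)   # predictor, e.g. perceived value of the target
C = rng.normal(size=n)   # moderator, e.g. information availability
# Outcome B depends on A, C, and their product (the moderation effect).
B = 0.4 * A + 0.2 * C + 0.3 * (A * C) + rng.normal(scale=0.5, size=n)

# Ordinary least squares with an interaction term A*C.
X = np.column_stack([np.ones(n), A, C, A * C])
b0, b_A, b_C, b_AC = np.linalg.lstsq(X, B, rcond=None)[0]
print(f"effect of A on B when C is low (-1): {b_A + b_AC * -1:.2f}")
print(f"effect of A on B when C is high (+1): {b_A + b_AC * +1:.2f}")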

Jake Lee:

So the first thing we wanted to study was how suitability of the target and presence of guardians impact the risk of sensitive data breaches. That was the first part of my model.

Craig Van Slyke:

So explain to us and to the audience what those two things are, what suitability of the target is and what the presence of guardians is.

Jake Lee:

Let me explain presence of guardians first. In our study we consider presence of guardians as the existence of controls, such as technical security controls like firewalls and intrusion detection systems, or sometimes monitoring features, or the existence of policies, that kind of thing. We consider that as presence of guardians. Which means, let's say, if employees know their computing behavior is monitored, that is considered a presence of guardians.

Craig Van Slyke:

Is it the perceptions of actors about the presence of guardians or is it whether the guardians are actually there? So, in other words, if I have a firewall but nobody knows I have a firewall, does it still have the same effect?

Jake Lee:

It is the perceived presence of guardians. And the other thing was suitability of the target. According to routine activity theory, if a potential target is suitable for a motivated offender, there is a high risk of it being stolen. So that's suitability of the target. And this suitability of the target is formed by four factors: perceived value of the target, perceived inertia of the target, perceived visibility of the target, and perceived accessibility of the target. For a digital asset we consider inertia of the target to be the size of the data, because inertia is really about how easy something is to move.

Craig Van Slyke:

Okay, so a really big file would be high in inertia.

Jake Lee:

Yeah, high in inertia.

Craig Van Slyke:

Yes. Okay, let's see if we can take this to maybe a little bit of a metaphor or analogy level. So, if I'm in a store, the suitability of the target would be how attractive that is as something to steal. So if it's a bottle of liquor, it might be a lot more attractive than if it's a box of band-aids or something like that, right? And that suitability is made up of those four things that you just mentioned. So you know, a small bottle of liquor would have less inertia than a big, giant bottle of liquor, which would make it a more suitable target, right? And I think the rest of them are kind of self-explanatory. All right, I just want to make sure that I had this right in my head.

Jake Lee:

So these four we call VIVA in our paper: Value, Inertia, Visibility, Accessibility. They form suitability of the target. So we studied how those four factors impact the suitability of the target in financial institutions. Those are the main hypotheses I just explained.
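
To make the VIVA idea concrete, here is a toy scoring sketch for a data asset's target suitability. The weights and scales are made up, and only the signs follow the directions discussed in this episode (value, inertia, and accessibility pushing suitability up, visibility pushing it down in their results); the paper estimates these relationships statistically rather than fixing weights.

from dataclasses import dataclass

@dataclass
class VivaPerceptions:
    value: float          # perceived value of the data, e.g. rated 1-7
    inertia: float        # perceived inertia: for digital assets, roughly data size
    visibility: float     # perceived visibility of the data
    accessibility: float  # perceived accessibility of the data

def suitability_score(p: VivaPerceptions) -> float:
    # Illustrative weights only; signs mirror the directions discussed above.
    return 0.3 * p.value + 0.3 * p.inertia + 0.3 * p.accessibility - 0.15 * p.visibility

print(suitability_score(VivaPerceptions(value=6, inertia=5, visibility=2, accessibility=6)))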

Craig Van Slyke:

Okay, great, maybe we can dig into value a little bit, because I found the way that you thought about value to be really interesting. So you had three different kinds of sources of value, right? Can you tell us about those?

Jake Lee:

Actually, it is not a source of the value, it's a type of value.

Craig Van Slyke:

Oh, okay, sorry. Types of value.

Jake Lee:

We consider three types of value: hedonic value, monetary value, and non-monetary value.

Craig Van Slyke:

And hedonic is kind of the pleasure.

Jake Lee:

Yes, such as a sense of thrill from stealing something. By having that data, that person may have a sense of thrill or personal satisfaction, the thrill of the chase, enhancing their personal image, that kind of thing. We consider that hedonic value. Okay. And monetary value is the value someone is willing to pay, or the actual monetary value of that data. And non-monetary value we consider as, when someone has that data, it will be helpful for him or her to have better job performance, or to gain a competitive edge or effectiveness in their job performance, that kind of thing.

Tom Stafford:

Is this like a trade secret sort of thing, stealing data from other companies? I'm trying to get the metaphor for this, because I was not envisioning non-monetary value as I was looking at the model.

Jake Lee:

Let me give you a simple example: if I have the company's sensitive data that not everybody has, maybe it will help me get promoted over other people. It's a perception. Again, it's a perception.

Craig Van Slyke:

Let me throw out maybe another example. So if I'm angling for a promotion and I can find something out that others don't know, I might be able to use that to wrangle a promotion or to enhance my status in the organization or something along those lines. I think we've all known people that just like to know everything that's going on right and that's not really going to help them monetarily, but it does have value to them to just know these things. Does that make sense? I like it Okay.

Tom Stafford:

So, leaving the model aside for a moment, you applied this in a financial services context. Was that a choice of convenience, you had a bank willing to do the study, or is there a more compelling technical problem to think of in terms of operating a bank with security?

Jake Lee:

The main reason we selected the financial industry as the sample for this research is that we wanted to apply routine activity theory where the sample really represents the population, because employees in financial institutions handle sensitive data every day, in their everyday setting. So that was the biggest reason we decided to study financial institutions.

Craig Van Slyke:

And I think it's really an interesting setting because it's not just the company's data that's at risk, it's the customers' and clients' data. I mean, you could see a huge value to being an insider at some financial services firm and being able to perform identity theft or sell information to identity thieves. So it's really a good industry for understanding all of this, because there are a lot of motivations or potential motivations there, and it's high consequence.

Jake Lee:

I was also thinking about studying the healthcare industry, because they also handle a lot of sensitive data, but we decided to go with the financial industry.

Tom Stafford:

I once heard it said by a graduate of ours who was the CIO of a health organization that one electronic medical record on the dark web is worth more than one credit card record. I was struck by that observation. In terms of how your model operated, Jake, not every one of the predictors of suitability panned out. I think value, inertia, and accessibility were the significant ones. Talk to us a little about how you can look at those as a manager and maybe manage them to the benefit of the process. Or is that just the environmental circumstances and you can do nothing about it? You'll have to see to your firewall instead. I guess I'm looking for managerial implications.

Jake Lee:

We hypothesized that value, inertia, visibility, and accessibility have an impact on the suitability of the target, and, as we hypothesized based on the theory, value, inertia, and accessibility have a positive impact on suitability of the target, which means whenever those three factors are high, the data more easily becomes a suitable target.

Craig Van Slyke:

And those were all more or less equally important. Inertia was a little bit more important than the other two, but they were all kind of the same.

Jake Lee:

One interesting finding from these hypotheses was the relationship between visibility and suitability of the target. It showed the opposite direction from what we hypothesized, which means whenever visibility is high, suitability of the target becomes lower. For us this was really interesting, so we thought about why this result happened, or why we got this result. Our thinking was that people might assume that when data is more visible, that data may not be worth accessing, because people assume important data should be hidden.

Tom Stafford:

I was thinking along the lines of well gee, if it's that visible, my attempt to steal it is going to be noted as well.

Craig Van Slyke:

That makes sense too. I think a third factor might be if it's visible, then I might think others are going to be able to get it too, so it might have less scarcity value for me. But this is all kind of speculative, right. We're just trying to figure out this interesting result, but the effect was pretty small there too. So it's about half to a third the size of the effect of value and accessibility.

Jake Lee:

So yeah, but that was one of the most interesting findings we got from this research.

Craig Van Slyke:

So, just to be clear, we are not recommending taking your most important data and plastering it all over your homepage right? We're not saying that at all. We're just saying this was an interesting little result For the record. That's a bad idea. Don't do that.

Tom Stafford:

The interesting result, though, is the impact of information availability on the eventual outcome of the suitability assessment. I can see that the moderator effects you suggested sort of manifested themselves.

Craig Van Slyke:

If you could first remind us what information availability is.

Jake Lee:

Yeah, so information availability is about public sources where anyone can get information about the data, so it can be a website, it can be magazines, public sites like that. Sometimes people post information on public websites or in public magazines. So we were interested to see if that kind of information availability impacts the relationships between VIVA and suitability. We studied that, and our results showed only one moderated relationship, which was the relationship between value and suitability being moderated by information availability.

Craig Van Slyke:

Jake, can you talk us through that relationship, because these moderating relationships get a little bit hard to understand.

Jake Lee:

Okay. So the original result between value and suitability is that when value is high, suitability becomes high. But when people have information about the data, which is information availability, that relationship becomes stronger.

Craig Van Slyke:

Okay, just to make sure that I understand. So, as information availability goes up the relationship between value and suitability gets stronger.
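
One way to read that result is that the effective slope of value on suitability grows as information availability goes up. Here is a tiny sketch with made-up numbers; they are not the paper's estimates.

b_value = 0.30        # baseline effect of value on suitability (illustrative)
b_interaction = 0.15  # amplification per unit of information availability (illustrative)

def value_slope(info_availability: float) -> float:
    # Effective effect of value on suitability at a given information availability level.
    return b_value + b_interaction * info_availability

for level in (0.0, 1.0, 2.0):
    print(f"information availability = {level}: value -> suitability slope = {value_slope(level):.2f}")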

Craig Van Slyke:

So I wonder if we can take this back to kind of what organizations can do and what they cannot do. So if I look at your results, what can I do something about? And we should also mention that the suitability of the target had a pretty decent positive effect on the risk of a sensitive data breach, and presence of guardians had a reasonable negative impact on that risk. So I'm a security practitioner, I'm a security manager. What do I do?

Jake Lee:

The first thing I would recommend for them is having as many security controls around the data as possible. That's the first thing. And second, according to these results, lower the suitability of the target by managing the value, inertia, and visibility of the data, and also the information availability.

Craig Van Slyke:

Are there any ways that an organization can manage value? That seems like it would be the hardest one of these to do anything about from a management standpoint.

Jake Lee:

Let me use this result to explain that. So, as I explained, the moderator, information availability, makes the relationship between value and suitability stronger. So we thought: why is only this relationship, the relationship between value and suitability, moderated by information availability? Let me use an example. Say I want to buy a house.

Jake Lee:

If I go to a realtor's website, information about the house is listed, and the biggest number I can see is the monetary value of the house. There is also information about the size of the house, which for us would be inertia. But the visibility of the house and the accessibility of the house, I don't think there is information posted about those. Also, in order to figure out the inertia of the house, you can actually visit the location of the actual house, but by visiting the actual house I may not be able to get information about the value of that house. Which means I think posting the value on a public source is not recommended. And this value can be hedonic value, non-monetary value, or monetary value. But we need to figure out how we can minimize the chances that people post that kind of information on public sources.

Craig Van Slyke:

Okay, so let me see if I can kind of bring this back to security. So let's just assume for the moment that I really can't do much about perception of value, which I think is probably accurate; you know, the bad actors out there, I can't necessarily manipulate how they view the value of certain types of information. But what I can do is do something about information availability. So if I can drop the information availability, if I can reduce that, then I'm also going to reduce the effect of value on the suitability of the target. So I'm not really changing the value, but I am changing that relationship between value and suitability. Does that make sense?

Jake Lee:

Yes, it does.

Craig Van Slyke:

So if I'm a security practitioner, I'm a manager, you're telling me: reduce information availability, try to increase inertia, I'm not sure what to do about visibility, and try to decrease accessibility. Is that all correct? Yeah, okay. And then let's talk about the presence of the guardians, because I think that's another thing that security managers can manipulate, right? Yes?

Tom Stafford:

I was actually framing a question I had myself. As I'm looking at the model, I don't see a link from presence of guardians to suitability of target, and it's been feeling to me all this time that making your target unattractive is a working strategy, based upon the findings of the study.

Jake Lee:

Yeah, the reason we hypothesized presence of guardians to the risk of sensitive data breaches is because we just followed the theory. That's why we didn't hypothesize a relationship between presence of guardians and suitability of target.

Tom Stafford:

I'm not trying to be difficult. I just have this clear sense that making the target unfriendly, as it were, is an important approach to keeping people from being too interested. That's my takeaway from a lot of what I'm seeing.

Jake Lee:

But our ultimate goal is reducing the risk of sensitive data breaches. So again, if we have as many guardians as possible around the sensitive data, ultimately the risk of a sensitive data breach will be lower, I guess.

Craig Van Slyke:

Well, and we need to keep in mind too that when we do this kind of research, we necessarily put things in neat little boxes when in reality they aren't. So, for example, the presence of the guardians probably has an impact and probably is kind of mixed up with accessibility perceptions as well, right? If there are a lot of controls I have to go through, a lot of barriers I have to go through, it's not going to be as accessible. So, you know, for those of you practitioners in the audience who don't do this kind of research, we have to necessarily oversimplify things in order to try to understand them.

Craig Van Slyke:

But we recognize that in the real world what we do is much messier than the way we lay it out in research. I think there's a clear message here, though, that one of the things you can do is make those barriers, those guardians, very evident. Let me use kind of a silly analogy: you can put an invisible fence up, or you can put up a real fence. Well, here's an even better analogy: you can go out and buy fake surveillance cameras. That's all about the impression, giving the impression that this is not a good target because you've got this guardian present. So I mean, we're not recommending that you have fake controls in place, but it seems to me, based on these results, it would make sense to make a lot of your controls pretty apparent.

Jake Lee:

Yeah, I agree, I definitely agree with it.

Craig Van Slyke:

Tom, you look like you were getting ready to say something here.

Tom Stafford:

I had a point, but my notion is way overly theoretical. I'm seeing a mediator model, with presence of guardians mediating between suitability of target and risk of sensitive data breach, as a potential ongoing expansion of the study. I mean, from the same data panel you could simply put a causal link pointing from suitability to presence of guardians and you've got the mediator specified. I'd be curious to know if that was a significant link, and I apologize to the audience for going full theoretical on that, but the whole time I'm thinking, gee, those resources that make the data hard to get to are the big inhibitor. How does that play out in real life?

Craig Van Slyke:

That's an interesting question. I want to throw out something that I'm thinking about, and I want to see what both of you think about this. As you were talking through all of it, I kept coming back to different kinds of information. How does all of this relate to social engineering? I mean, if there's information about the company, about company events, you know, promotions, birthday parties, dinners, that kind of thing, that can have really high value. We often don't think of that as sensitive information, right? I mean, it's not somebody's social security number, it's not their health record. But for social engineers, that kind of seemingly innocent information has tremendous value, doesn't it?

Tom Stafford:

I'm thinking of Kevin Mitnick's speech when he came to Monroe (he died recently, by the way, did you notice that?). But he basically said all of his great exploits were where he got people to give him information they shouldn't have. And so I'm seeing your information availability variable as kind of an example of somebody being willing to work the network to get access credentials, let's say, for the accessibility variable. I'm just thinking out loud. But you know, Craig's exactly right. Social engineering is our biggest threat, no matter what else is going on. People trying to get the secrets, as it were.

Craig Van Slyke:

And it's a common technique for social engineers to go and look at LinkedIn and Facebook and then call up and say, oh, did you go to Tom's birthday party, or whatever it was, and do those sorts of things to lower the skepticism of the target. I know this may be way out on a tangent, but I could see this whole model being applied to a completely different kind of data around social engineering. And that could be another thing organizations can do something about: educating their employees about the importance of keeping some of these seemingly innocent pieces of information more private, and also about not being so susceptible to social engineering when data is out there and publicly available, you know, in a magazine, in social media, or whatever it might be. So I'm wondering what you think about that.

Jake Lee:

I agree with that. We may be able to apply a similar model in the context of social engineering. I personally think sensitive data is important, but public data is also really important to secure, because some public data contains important information for somebody.

Craig Van Slyke:

Yeah, absolutely. What's important may not be what you think is important.

Tom Stafford:

So I had a question. We were talking earlier about how you chose the financial markets as a test bed, and then you mentioned medical might be a place to go with it, and that got me thinking about how this might work with hospital data systems and that sort of thing. You know, I dabbled in electronic medical records, as some of our students do. What practical implications could you see in terms of securing patient data with the way your model works here?

Jake Lee:

Securing patient data for medical and healthcare employees, right? Because our model can be applied in that kind of context.

Craig Van Slyke:

Let me try to drill down to a particular aspect of this. I'm thinking about personal health records, which makes information more available. It makes it more visible, it makes it more accessible. But I think this is an interesting tension with these kinds of systems, and we could say the same thing about online banking. You know, what makes it vulnerable is what makes it useful in a lot of respects. So if all of that is hidden and only accessible through secure networks, or you have to go into the bank or you have to go into your doctor's office to see it, it's more secure that way, right? Because it's less visible, it's less accessible, there's less information availability, but it also isn't very useful. You see what I'm getting at. There's a tension there.

Jake Lee:

Yes, there is a trade-off; there is that tension.

Tom Stafford:

What's next, Jake? There's so many different implications we talked about here. I'm sure you have several ideas of studies that might seek to develop broader explanations in different contexts. So where do you go next with this research?

Jake Lee:

So in our current study we only studied perceived risk of sensitive data breaches. I would like to study how employees behave when they feel there is a risk of data breaches. So that's another step further. We were thinking of studying extra-role behavior of the employees: whenever they perceive a situation as risky rather than normal, they may do something to make their environment better.

Craig Van Slyke:

And extra-role behavior is just kind of going beyond what the expectations are for your job. Now, that could be really interesting.

Jake Lee:

I personally believe this extra-role behavior may help to reduce the risk they perceive. So that is my future research.

Craig Van Slyke:

That would also tie into security culture and some of those kinds of things.

Jake Lee:

Yes.

Craig Van Slyke:

Yeah, it could be very interesting.

Jake Lee:

I will also think about healthcare-related contexts.

Craig Van Slyke:

All kinds of possibilities. All right, Tom, you want to take us out?

Tom Stafford:

Well, ladies and gentlemen, you've been listening to Cyber Ways. It's an information security podcast, a production of the Louisiana Tech College of Business, courtesy of Dean Chris Martin and the Just Business Grant. It's a production of the Center for Information Assurance as well. We hope you tell your friends about it, listen to it frequently, and let us know what you think. Until next time.

Perceptions of Data Breach in Financial Institutions
Information Availability and Value in Data Security
Guardians and Target Suitability in Data Breach