Interview

“It’s not always about hacking” – Brown-Forman CISO Sailaja Kotra-Turner talks cybersecurity

Brown-Forman is among the major businesses to have suffered a cyber breach. Conor Reynolds discusses security threats, reducing risks and the pros and cons of AI with the spirits giant’s Sailaja Kotra-Turner.

Sailaja Kotra-Turner, chief information security officer at Brown-Forman

Brown-Forman, the owner of Jack Daniel’s whiskey and Herradura Tequila, was founded in 1870 but, like any modern company, it has to spend resources securing its systems and data.

However, the company has not been immune to security incidents. In 2020, the REvil ransomware group breached the distiller’s systems and exfiltrated data, after which Brown-Forman invested further in its IT infrastructure.

Just Drinks sat down with Sailaja Kotra-Turner, Brown-Forman’s chief information security officer and director of global infrastructure, to talk threats and mitigating risks.

Conor Reynolds: What’s a brief definition of your role at Brown-Forman?

Sailaja Kotra-Turner: My role covers security, infrastructure and operations, so there are three wings to it. Operations is combined for both security and infrastructure, but I also have the network, servers, workstations, all of that, as well as security engineering, access management and governance.

Conor Reynolds: What types of cyber threats do you deal with?

Sailaja Kotra-Turner: One area of cybersecurity my team deals with is engineering, which is figuring out where the new threats are and what tools and process improvements are needed; anything that needs to be done in order to stay current with the threat landscape across the board.

For example, we had the CrowdStrike incident recently. We were not directly impacted by it. However, we’re also very cognisant of the fact that anybody could be impacted by a similar issue, because the whole security landscape right now is very interconnected. We use third-party vendor software, whether it’s SAP for finance or Workday for our people management.

We’re still dependent on so many vendors; if any vendor makes a mistake on their end, we will be impacted one way or another. Some of the conversations happening right now within my team are: ‘What can we do to minimise the risk of that happening? How do we mitigate it? And what kind of plans do we need to be prepared with?’ And, more accurately: ‘Okay, we have some plans; are they adequate for this new type of issue that we have not had to face in the past?’

Conor Reynolds: How does the threat landscape change and how quickly are new threats developing?

Sailaja Kotra-Turner: As fast as we’re coming up with ways to protect ourselves, technology improvements work in all directions, unfortunately. Those technology improvements are also enabling more attacks, more difficult attacks, more skilled attacks. Two or three years ago, I would have said you need a certain level of knowledge to be a hacker. Thanks to generative AI, now you don’t need that.

You know, ten years ago I remember the first training about phishing, when we told people: ‘Hey, look at the grammar. That word is spelled wrong.’ We don’t see that anymore. Phishing now is so immaculate, it’s very easy to fall for. So we’re constantly training people on how to recognise the newest phishing, not phishing as it was ten years ago. Now we look more for tone and language. Does it sound like the person you’re talking to?

Conor Reynolds: How big a threat is phishing these days?

Sailaja Kotra-Turner: At this point, 90%-plus of phishing attacks should actually be caught technologically. We use Google (Gmail), but it doesn’t matter whether it’s Google, Microsoft or whatever platform we’re using; at the end of the day, 90% of phishing attacks are caught internally.

They’re caught by technology; the end user will not see them. We actually had a real-life test of that, I think back in February, when Google’s spam filters did not work for a day or two. We saw a huge spike in the number of phishing emails that actually hit people’s desks, and the number of people that clicked on them definitely went up.

So, we’re actually training our end users for phishing specifically on what I call the more difficult ones, the ones that look extremely realistic and have passed the basic filters.

Credit: T. Schneider / Shutterstock

I’ve always believed that phishing exercises are a training mechanism, an awareness mechanism, not a punitive mechanism. A large part of what we’re really trying to do is get people to recognise it but, more importantly, if you do actually click on a phish, please let us know. We have so many layers of protection that a person clicking on a phishing email can be the start of a larger attack, but it doesn’t have to be. The more important part of the phishing training, for me, is the reporting.

Conor Reynolds: You mentioned generative AI earlier. What impact is that having?

Sailaja Kotra-Turner: Phishing has gotten better because generative AI can be used to write that phishing email. You have to be a little tricky about it and know what prompts to use, but at the end of the day it can do it.

Generative AI can write code for you. These systems have some basic protections, so if you say, ‘Help me make an attack against company X’, they will say, ‘I’m not going to do that.’ But if you’re smart enough to know how to use the generative AI system, and there are lots of examples online of how to do that, you definitely can get the code for it.

Social engineering across the board is easier again. Generative AI can use a video clip to generate – and these are all free tools, by the way – a voice or even a video likeness of a person. I don’t know if you heard this story from a few months ago, where there was a guy on a meeting who actually authorised like half a million dollars or something like that, and every other person in the meeting was an AI-generated, I’m going to say, ‘participant’, for lack of a better word, made to look like his co-workers. Now that’s a bit of an extreme example, but those are things that are now doable.

Our training has to keep pace with that. Now, we’re not looking at grammar errors. Now, if you’re looking at a video of a person talking, what do you watch for to make sure it’s real and not AI? It’s a whole different type of training that we’re having to do now.

Conor Reynolds: Do you think we will hit a point where it’s so realistic it’s hard to tell the difference in online calls?

Sailaja Kotra-Turner: Yes, and that’s why we need to strengthen our processes. We cannot rely on just detection, because no matter how good we think we are at detection, at the end of the day, as human beings, we will make mistakes.

It could be that I’m attending a meeting on my phone, which I know a lot of people do, and you cannot catch those little telltale signs, which are so minute that you have to be watching for them.

Conor Reynolds: How can you protect against that type of attack?

Sailaja Kotra-Turner: For security, we rely mostly on what is called layers of protection. [The] old-fashioned name was defence in depth, the military term. What we’re basically talking about is that we have multiple layers, multiple gates, where we try to stop people from moving laterally.

The first gate is the people: you know, make sure you understand whether it’s real or not. That’s all the phishing training and so on. That’s level one.

Some people will click on those emails. Some people will fall for it. It’s not the end of the world, and it should not be, because then we have other layers that will stop it. In the case of phishing, it could be software residing on people’s workstations that will actually detect malicious software and try to stop it there. If it passes that, there’s another layer and another layer.

For some of this, it’s actually processes. For bank information, if people are sending bank information by email, we tell them to pick up the phone and call somebody at a phone number you already know; don’t reply to that email. If you have a different email or whatever the official process is there, use that official process.
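
To make the layered model concrete, the short Python sketch below treats each defence as a gate that a message must pass; any single gate can stop it, so a miss at one layer is not a miss overall. The layer names and rules are illustrative assumptions, not Brown-Forman’s actual controls.

```python
# Minimal sketch of "layers of protection" (defence in depth):
# a message is only acted on if every gate lets it through.

from dataclasses import dataclass

@dataclass
class Email:
    sender: str
    body: str
    has_attachment: bool

def gateway_filter(msg: Email) -> bool:
    # Layer 1: platform-level spam/phishing filtering catches most attacks.
    return "urgent wire transfer" not in msg.body.lower()

def endpoint_check(msg: Email) -> bool:
    # Layer 2: workstation software blocks suspicious attachments.
    return not (msg.has_attachment and msg.sender.endswith(".example"))

def process_control(msg: Email) -> bool:
    # Layer 3: bank-detail changes are never acted on from email alone;
    # they have to be confirmed by phone on a known number.
    return "new bank details" not in msg.body.lower()

LAYERS = [gateway_filter, endpoint_check, process_control]

def allowed(msg: Email) -> bool:
    return all(layer(msg) for layer in LAYERS)

if __name__ == "__main__":
    phish = Email("payments@vendor.example", "Please use our new bank details", True)
    print(allowed(phish))  # False: even if one layer missed it, another stops it
```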

Conor Reynolds: As the points of data collection increase in manufacturing, what are the challenges you face?

Sailaja Kotra-Turner: We collect a lot of data. We collect data for use in running our business, okay, so that’s the data science and data analytics side and the platforms used to build that, so let’s set that aside. That’s one set of data. The other set of data, from a security perspective, is: what are our systems doing? That’s all the log data, and the body of data that’s collected in either scenario is huge.

I’ve talked about the downsides of AI earlier; now let’s talk about the positive. What AI does for us in the security space, or in the data science space, is it looks at all the data and tries to find patterns that are useful. In the security space, a useful pattern generates an alert that says: this one doesn’t look right, take a look at it, it’s anomalous data. We do have SIEMs that will actually do that for us, and that creates dashboards, alerts and a super-concentrated look at it.

Credit: Alexey Stiop / Shutterstock

The data generated, for me, from a security perspective is very useful. But yes, there’s such a huge amount of data that it’s not easy to make sure we have all the data we need and to parse through it, so we rely on a security information and event management (SIEM) system to do that.
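
As a rough illustration of the pattern-finding a SIEM automates, the toy Python sketch below counts failed logins per user in a batch of log events and raises an alert when one user sits well above the average. The field names, events and threshold are invented for the example and are far cruder than what a real SIEM does.

```python
# Toy version of what a SIEM does at scale: collect log events,
# establish a baseline and raise alerts on anomalous activity.

from collections import Counter
from statistics import mean

# Hypothetical authentication log: (username, outcome)
events = [
    ("alice", "fail"), ("alice", "ok"), ("bob", "ok"), ("dave", "ok"),
    ("carol", "fail"), ("carol", "fail"), ("carol", "fail"),
    ("carol", "fail"), ("carol", "fail"),
]

users = {user for user, _ in events}
failures = Counter(user for user, outcome in events if outcome == "fail")
baseline = mean(failures.get(user, 0) for user in users)  # average failures per user

# Crude rule: alert when a user's failures are well above the average.
threshold = max(3, 3 * baseline)
for user, count in failures.items():
    if count >= threshold:
        print(f"ALERT: {user} has {count} failed logins (baseline {baseline:.1f})")
```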

On the data science side, that’s where it gets more interesting as we talk more and more about the proliferation of AI and wanting to use AI tools. Any AI system is only as secure as the underlying data. We can set up all kinds of access controls on the AI tool itself but, if the underlying data is not secure and people can access it without being authorised to do so, the system overall is not going to be secure. A lot of our focus now is on improving data governance and data security, because that will address a large part of the concern with AI security.
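
That point about the underlying data can be sketched in a few lines: if an AI assistant only ever receives documents the requesting user is already authorised to read, the AI layer cannot become a route around existing permissions. The roles, documents and helper names below are hypothetical, purely to illustrate the idea.

```python
# Hypothetical sketch: an AI assistant should only see documents the
# requesting user is already authorised to read, so permissions on the
# underlying data are enforced before anything reaches the model.

from dataclasses import dataclass

@dataclass
class Document:
    doc_id: str
    allowed_roles: set[str]
    text: str

DOCS = [
    Document("pricing-2024", {"finance"}, "Wholesale pricing tables..."),
    Document("brand-faq", {"finance", "marketing"}, "Public brand FAQ..."),
]

def documents_for(user_roles: set[str]) -> list[Document]:
    """Return only the documents the user could access directly."""
    return [d for d in DOCS if d.allowed_roles & user_roles]

def build_prompt(question: str, user_roles: set[str]) -> str:
    context = "\n".join(d.text for d in documents_for(user_roles))
    return f"Context:\n{context}\n\nQuestion: {question}"

if __name__ == "__main__":
    # A marketing user never gets finance-only data into their AI context.
    print(build_prompt("What does our FAQ say?", {"marketing"}))
```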

Conor Reynolds: In 2020, the REvil ransomware gang hit Brown-Forman and a significant amount of data was taken. What lessons did you learn from that incident?

Sailaja Kotra-Turner: From a personal standpoint, for anyone that’s on the security team, a breach is a major growth and learning opportunity.

When we get stress-tested, we learn so much during that process that it’s actually kind of amazing, because those learnings are what make the company better.

It’s a little bit of a joke when I say that the time a company is most secure is about three to six months after a breach, because they take all those learnings, they invest in security and they make things a whole lot better. At about the six-month mark, they have all of that in place and it’s the most secure it will ever be. Then things start to level off again.

Conor Reynolds: Is all the security done in-house or do you work with external partners?

Sailaja Kotra-Turner: We do work with a lot of third-party security firms. That can range from a managed service provider that does our security operations centre work to managed service providers for a lot of the, what I call, run-maintain-operate activities, like doing vulnerability scans and making sure things get patched.

We have internal teams that do the engineering and architecture because that requires knowledge of Brown-Forman, both the business as well as the history of Brown-Forman and what we do. That part is almost completely internal.  

Then, on the other hand, we have external providers for penetration testing and risk assessments; again, that’s to preserve a level of segregation of duties and objectivity. We don’t want our own teams doing those assessments because we get a much more realistic picture from external providers, so it’s kind of an audit function there.

The one thing I’ll say about cybersecurity today is it’s broad. Being in cybersecurity, we cannot just focus on the technical aspects.  

We need to be actually enmeshed in the business because any risk that comes up in the world will have a cyber component to it. It’s not always about hacking. It’s about helping the company deal with any threat that might come up that has a cyber component to it.