
Ep. #42, News Media Security with Kate Whalen of The Guardian

About the episode

Note: The Secure Developer has moved. Head to mydevsecops.io to subscribe and listen to the latest episodes.

In episode 42 of The Secure Developer, Guy speaks with Kate Whalen, a security engineer at The Guardian, to discuss news media security and advocating security across many teams within a large organization.

Kate Whalen is a security engineer at The Guardian in London, England. Along with developing new tools, she has also been closely involved in fostering cross-team collaboration and adoption of tools and security best practices.


Guy Podjarny: Hello, everybody. Thanks for joining us back at The Secure Developer Podcast.

Today we have a great guest, we have Kate Whalen from The Guardian. Thanks for joining us on the show here, Kate.

Kate Whalen: Thank you for having me, Guy.

Guy: Kate, we're going to dig a lot into The Guardian and how you work on security and the like, but before we do that, can you tell us a little bit about yourself?

What is it that you do? Maybe, how did you get into this world of security?

Kate: OK. I'm a security engineer at The Guardian, which means that my day-to-day is writing a lot of code, but also doing a bit of security advocacy as well.

I haven't actually been a security engineer for that long.

I moved from a regular developer role into security engineering because it's an area that I've always been interested in, but my background is actually in pathology and microbiology.

I like to think that I came into security mostly because I'm constantly interested in how systems get compromised, or infections and viruses spread.

Guy: I guess it's not so different. You went from one type of virus to another.

Kate: A lot of the terminology is the same, though. So that's nice and familiar, at least.

It's been an interesting three years, upping my security knowledge.

I actually went to DevSecCon in London about three years ago, and that's where I started learning about security-related things, and just started trying to teach myself as much as I could in my free time.

Eventually I applied for the security engineering role.

Guy: How was that? What was the response in the company to the idea of moving from dev to security? Was that a frequent practice, or were you a pioneer?

Kate: To my knowledge, I'm the first person to do it, but it is actually quite encouraged.

At The Guardian, we encourage people to move around inside the organization.

People have quite long tenures here, which I think is amazing, because we encourage our developers to move back and forth between developer and manager roles, or maybe a tech-lead role, or to pursue careers in other areas of the business that interest them.

That allows them to try out different skill sets, or practice in areas where they haven't had a chance to.

For me, it was a bit of a big move, at least an internal big move, but I'm really enjoying it so far.

Guy: Do you feel it was helpful, now that you're in the security engineering role, that you came from the dev team in the company?

Or is it all the same? How helpful is it that you were a developer before you got into the security engineering role?

Kate: I think it's really helpful, because a lot of companies have an issue whereby different teams or different groups are a bit siloed.

It can definitely be the issue where maybe you have a quality team or application security team, and it's more of a throw-it-over-the-wall attitude, whereby developers and security don't get a chance to talk, or developers and operations don't get a chance to talk.

I know a lot of people around the company, which is nice, because when I have a problem, or I need some support, or I need someone to review a PR, I've got a large team of developers, whose team I used to be part of, that I can still go to and rely on.

It allows us to be a bit more cross-functional.

Guy: Yeah, I love that idea. I think it's a good personal people-type aspect.

You connect, you can relate to the problems, people individually.

It's also sometimes professionally valuable, because you know the systems and you come with a certain amount of knowledge over there, and bring that into the security team and help educate.

Kate: From the point of bringing developer tooling or sharing best practices, there are things that a lot of our developer teams might be using or our internal tooling that in infosec we can really benefit from adopting.

Guy: Very cool. Did you find you brought along a good number of those?

I mean, are there a couple of examples to tout from devtools that you brought along to the journey?

Kate: Definitely. At the moment, my remit for the quarter is a lot more CI/CD, so continuous deployment for some of our infosec tooling.

Being able to rely on the tooling that other members of my company are working on, and being able to request features or help from them has been really helpful.

Guy: Give us a slightly bigger picture of the security org, you're a security engineer, what are the different teams in the security org and how do they relate?

Kate: OK, to be honest, there are only five of us in infosec. We are a small team.

I'm a security engineer within it, but we've all got different roles and we've all got different specializations.

What we want to do is be able to secure an awful lot of The Guardian, and that can be everything from helping employees manage their passwords, to advocating for secure development practices, to building out tooling so that we can be more strategic and less tactical.

Our remits are all quite broad, and we have all got our own specializations, because as well as doing the development side of things, we might also be looking at joiner and leaver processes, or how we do account management or email auditing.

Guy: Yeah, and how is it different to be a security engineer versus maybe the other titles in the team?

Kate: To be a security engineer in the team, for me it means that I'm trying to look at our current workflows and design tooling or solutions that can automate some of it, or at least automate the boring parts.

Also we have an awful lot of alerting and monitoring that isn't really integrated with itself at the moment, or integrated with all the different systems.

I would quite like my inbox not to be quite so inundated with all of these alerts, so I am trying to find better ways to ingest and respond to security alerting and data.

That would be one of my other big remits, so I'm looking at building tooling and then also supporting the developer teams around things such as security reviews.

So they might want to look at how their AWS accounts are configured; a lot of our infrastructure is deployed on AWS, and we're quite a big user of cloud services.

But sometimes you need a second pair of eyes, or you want someone to look through how you've configured all of your security groups or your applications, to make sure that you haven't left any security holes.

That's something I can help with, and other things such as doing security reviews of applications to see if people are adopting secure coding practices.

Guy: So basically, you're a security engineer in two capacities. On one hand, you are an engineer yourself, you write and you build stuff, you put the security tools into the pipelines and the likes.

Then on the other side, you support the engineering team working with them. I guess your engineering skills come into the forefront when you need to review code or you need to understand how an application works.

Is that a fair assessment?

Kate: Yeah, that's an excellent summary.

Guy: OK. So you're in The Guardian, and that's a fairly influential organization. It's got a lot of good news that comes out of it.

Kate: Thank you.

Guy: How do you look at, indeed, all these different kinds of risks that you face? How do you approach the threat model of The Guardian?

Kate: I suppose, through different attack vectors.

We might have a digital threat model, so if we're worried about how we might be attacked through infrastructure or through applications, that's one risk assessment or threat model to do.

Then the other one might be more of physical safety or operational safety and practices, so how do we ensure that people's accounts don't get hacked or that their passwords don't get compromised?

Doing that type of threat modeling, or risk assessment.

Guy: When you look at the population in The Guardian, of course you have the tech as you build the web content and all the different technology pieces.

You also have a fair bit of journalists in the company, as one would expect.

How does that change the security work? Does that have any impact versus the technology stuff?

Kate: Yeah, absolutely. The assessments of threat models we might want to do would depend on the individual, and also sometimes the locations.

If you are traveling abroad you might want to consider which devices you bring with you, or what checks you put in place, or other security practices you might want to adopt before doing that.

I apply that to myself as well, so if I'm traveling to different countries I'll have a different threat model for each one of those countries according to how concerned I feel I should be about me going there.

You might want to do the same thing with your employees, if they are traveling it might be to cover a protest or it might be to cover a sporting event, but sometimes you do have to think about "OK. How do we look after them and their devices in the field?"

Guy: So this is really the level of safety you might attribute to cellular connectivity, even if you're in a more dictatorial regime-type surroundings, or your phone being physically stolen if you're in the midst of a protest?

Is that what you're referring to? That you have to think about those types of threats?

Kate: You'd have to go through all of the worst-case scenarios, and it might not just be an employee, it might be someone that you're meeting with.

So, what are their expectations around their confidentiality and their privacy or anonymity?

They want to have certain boundaries respected, and so how do you ensure that employees or individuals can do their jobs but also look after themselves and whoever they might be in communication with?

Guy: Interesting. I'm tempted to drill into that journalist route, but I think we're at The Secure Developer podcast here, so let's take a look at the other population which is your developers.

You mentioned a bunch of this when you described your role, and how you work with developers to do code reviews and the like, but if you had to look at the learnings over the years that you've been in security engineering:

What do you find was effective working with the developers?

How do you try to engage the team in helping you make the software secure?

Kate: Even before I joined, The Guardian always had a system of "security champions" or "security agents."

We switch between the two names. We try to get developers who are interested in security, or just want to learn more about it, to come to semi-regular meetings where we might try to learn something together, run through one of the small training games that various websites have, or maybe use it as a bit of a knowledge-sharing opportunity.

That's been really good as a way of ensuring that there is somewhere that people can ask for help, or talk about things that might have happened to them.

It's also a really good forum to discuss security incidents or potential security incidents. If someone's noticed something strange going on, they might mention it at that type of meeting, and it drives a bit of discussion; then you might have someone ask a question about a particular vulnerability, and someone else explains it.

So, that's a really good way of doing communication and engagement.

Guy: How does it logistically operate, this group? You mentioned sharing and meeting, how often does this group meet? How many--?

What are the rough ratios of security agents to developers?

Kate: I'm the only one that organizes it, so if I'm being organized, it's once a month.

It might just be like myself from the infosec team there, and then hopefully developers from our different development teams.

A mix of people from different seniority levels and different projects, so hopefully covering most of the areas. That's good, because if we have an announcement to make, or a bit of guidance or advice to give out, we can encourage them to pass it on to their teams.

On top of that, you might also want to do additional sessions.

I'm running a Halloween session so I can share all of the scary stories that have happened over the last year.

Guy: That's a great one. You talked a lot about these people coming along and sharing problems and experiences within the group.

Do they also serve as good advocates, or as extensions of you inside the different teams?

Do you have any learnings about what did and didn't work in trying to achieve that?

Kate: I think something that works quite well is having smaller focus sessions, not necessarily having too many people along, because if you've got lots of people in the room it tends to make some people less likely to speak up or more worried to admit that they don't know something, or to ask questions.

Particularly with new developers, I like to try and have them for a one-on-one or maybe two-on-one intro session to explain how we approach security at The Guardian, the fact that there is a shared responsibility, and that they should never feel bad about asking questions, even if it's just saying, "Is this quite right?"

Then it's a good time to also talk to people about how to look after passwords and how to set a strong password, and why multi-factor authentication is amazing.

Because our developers, like at most places I imagine, have got quite privileged access. They can run pretty much anything on their laptops and they can access an awful lot of systems, so you really want to make sure that they understand how not to get hacked.

Guy: Big fan of two-factor authentication, definitely one of the greatest things since sliced bread.

We need one of those for all the security problems, a couple of those magic bullet solutions.

You talked about doing these one-on-one, two-on-one developer interviews--? Or, not interviews, conversations around security.

What are some common misconceptions you come across?

Kate: I find a lot of people think that they have a password system which solves the "Having a unique password" problem.

So they'll have a staff password, and then they'll iterate on that and do variations on that. They have a unique system that no one else could ever figure out.

So, that's an interesting one to talk to people about and maybe suggest other alternatives.

Other misconceptions about security, I haven't run into too many.

I suppose the main one is that people are worried that they shouldn't get involved in it, because security is something that they don't know much about and that they might do something wrong.

"Isn't there probably an expert or someone else who should be looking after all of that?"

So trying to persuade everyone that actually best efforts are better than no effort, so even if you feel like you don't know that much about security, actually just ask questions.

We're never going to be annoyed if someone sends an email to us asking, "Does this look quite right?" Or "Should I click on this link?"

We'd much rather everyone send that than get phished or open up malware.

Guy: That's a great one and very much an emphasis, I guess, or an aspect of this shared responsibility.

They need to accept that responsibility. Before, you used the term "security nihilism."

Given that description of it, how do you battle it with this type of "Just know that it's OK, we're going to be receptive" message? How has the response to that been?

Do people use that, do they embrace the responsibility?

Kate: Yeah, I hope so. I definitely had a new joiner flag an interesting blip in some of our monitoring to us, which was good to see and then go chat to them.

It was about a week after they'd joined as well, so it was wonderful that they immediately thought, "I should tell infosec about this."

It definitely feels like it's been helpful, having these conversations.

Then also, people know who you are, they know where you sit and what you look like, and you're a bit more approachable, rather than it being something scary to involve security or infosec.

We don't want to only turn up when something bad happens. But with security nihilism, often, whether I'm doing training internally at work or speaking at external meetups, once people start learning a bit about all of the scary things out there, or all of the ways that you can be insecure, it can feel like it's an unsolvable problem.

There's too much to fix and we'll never be secure. Where do you even start?

Trying to combat that feeling means saying: just pick up something. The work is endless, but at least you can make progress toward being in a better place than you were yesterday.

Guy: Indeed. The reality is that if you don't do it, nobody else will. Nobody else can keep up, really.

Developers have to embrace a lot at work.

Also, having spoken to you before, a lot of the approach, starting with your comments on CI/CD, is through tooling.

You bring tools into the mix, you put them into CI/CD. What do you look for in the solutions that you bring in?

How do you assess tools that would work in your context versus ones that you don't think would be a fit?

Kate: Tooling is one half, and then the other one is adoption, because you really need to make sure that if you're getting tooling in or if you're building tooling that it's solving the problem that developers actually have and that they want to use it.

Otherwise it's just going to collect dust, and no one's going to log in or check any of your dashboards.

If I'm building internal tooling, then that's great because my users are normally trapped in the same building with me and they can't get away.

You can do an awful lot of UX and testing and feedback and ask people what they would like to be automated, and show them what you're currently displaying, and then get feedback on that.

That also makes sure that they are aware of all the features of the product, so if you have a short sit down and chat with them and show them through it they will actually say, "I didn't realize that I could get the information from this panel."

So that's really useful, making sure that people actually have a chance to spend some time with the tooling will mean that they can understand how they can get value from it.

When we're picking out tooling, it's really great if our developers have some tooling in mind that they're already using.

Funnily enough, with Snyk, one of the reasons why we adopted it as an organization is because we had about four or five different teams all trying out the free version.

So when we went to the enterprise version, that was interesting to reconcile, because we had five different Guardian Snyk accounts to try and integrate.

But that was ideal, because we had five different teams of developers all decide that they wanted to use Snyk.

Guy: I always appreciate that, and I love the comment on the developer usability bit.

It makes perfect sense the way you do it, but it's still unfortunately not terribly common to do this usability testing for your security tools with your developers, and iterate on those with them.

It seems like a really bright idea and yet not one that is done often enough.

Kate: It's really important for developers, because we tend to have a lower threshold for bad UX, particularly if we're working on applications ourselves.

We've done A/B testing, we know what's a good user flow and what's a bad user flow, so when we're confronted with a very unintuitive system that doesn't seem like it's had any user testing, it feels like it's intentionally frustrating us.

Guy: That's excellent, and it's actually also using the talent and the skills in house of how to do it right.

You're probably tapping a little bit into your skills, having come from the dev side of the fence within the organization, but it also gets people to have skin in the game.

If you've commented and you've given your feedback, and your feedback was implemented, then you're that much more inclined to actually embrace and use the solution, because you feel you had a hand in creating it.

Before I let you get back to that security work, I like to ask every guest that comes on the show: if you have one bit of advice, one tip to give a security team looking to level up their security foo, what would that be?

Kate: Probably to adopt password managers. It's a very quick win, and then get everyone else to adopt password managers.

Ideally for their professional life and personal life, because I think the boundaries between personal and professional lives are blurring an awful lot.

I know that my GitHub password reset goes to my personal email account, so I want to have a similar level of security on both and ideally I want to be using password managers everywhere.

With password managers you can also get an enterprise account and get team shares, so that you don't have developers sharing secrets or API keys via Slack, or other less secure channels.

So, not just good for passwords, good for everything else you don't want to be shared in the clear.

Guy: Not in sticky notes on the board, or shared over Slack. Excellent tip, and I fully well appreciate it.

I'm a big fan and I don't know any of my passwords. They're all in the password manager, as they should be.

Kate: Ideally, yes.

Guy: Kate, this has been a pleasure. Thanks a lot for coming on the show.

Kate: It's been lovely. Thank you so much for inviting me.

Guy: Thanks everybody for tuning in, and I hope you join us for the next one.