September 27, 2017
In episode 33 of The Secure Developer, Guy is joined by Leif Dreizler and Eric Ellett of Segment. They discuss motivating security teams, the importance of investing time in your business relationships, and the long-term rewards of proper security training.
About the Guests
Leif Dreizler is the Senior Application Security Engineer at Segment, and was previously Manager of Program Architecture at Bugcrowd Inc. Eric Ellett is the Security Engineering Senior Manager at Segment, and was previously security lead at Credit Karma.
Show Notes
Transcript
Guy Podjarny: Hello, everyone. Welcome back to The Secure Developer. Today I have a security team here from Segment, which I'm really excited to talk to and hear about developer-focused security practices at Segment.
We have Leif Dreizler and Eric Ellett here, thanks for coming on the show Leif and Eric.
Leif Dreizler: Thanks for having us.
Guy: Before I dig into a variety of topics over here, can I ask you to just give the listeners a little bit of a background about how is it that you rolled and found yourself into security and into this role?
Leif: Sure. It all started, for me, studying computer science in college. I started working at a security consulting company while I was still in school, and worked there for a couple of years after graduating as well.
Then from there I was a sales engineer at Bugcrowd for a couple of years, and then after leaving Bugcrowd I joined Segment on their application security team about a year and a half ago.
Eric Ellett: For me, I was a software developer as a contractor doing DARPA projects over in DC, and then I followed my dreams to try to start an SCN security company here in the Bay Area, which was way too early.
I ended up going to Credit Karma as an application security engineer, and then ended up coming over to Segment after Leif stole me from Credit Karma to work with him over here.
Guy: Very cool. So you both came into AppSec two or three companies ago. Although the specifically AppSec role, it sounds like, Leif, at least for you, came later. You went from being an SE, a sales engineer dealing with security, more on the bug bounty side, to now switching to AppSec itself.
Leif: I would definitely put Bugcrowd in the application security space, and most of the security consulting I was doing was application security assessments. I was doing third-party pen testing, so I've been in some portion of the AppSec space since the beginning of my career.
Guy: It's interesting. You hear about a lot of people that move from the dark side, if you will, from more of the red team and pen testing side into the defender side. It's a version of that, because you had a solution that helps people do that type of pen testing. Even if it's not a classic pen test, it's the Bugcrowd platform, but fundamentally you go from being the attacker to being the defender still.
Leif: Yeah.
Guy: So tell me a little bit about Segment security. Where do you sit in the org, what's the team structure? How does it work?
Eric: Yeah. Segment security as a whole is led by our CISO, Coleen Coolidge. We have several sub-teams underneath that. We have our GRC team. We have our corporate security team, which focuses on security outside of production and AWS and GCP. Then we have CERT, so when security incidents inevitably happen, how do we handle them with grace?
Then we have application security, product security and cloud security. That's under an umbrella of security engineering, which is the team I lead.
Leif's technically on application security, but my team is very flexible in the sense that we work on the problems that make the most sense at a given time. It's not unreasonable for an application security engineer, specifically, to work on cloud security or product security projects, given the demand in a given domain.
Guy: So, it's a security engineering group? That's the title of it? Is that title more inspired, if you will, by the people inside of it, or by the fact that it's working with engineers?
Eric: Yeah, I think we just put a title on there because we didn't want to silo the teams too much; I think it's a little too early for us to do that. Granted, we have five people on our team, and having hyper-specialization per team is just too early.
We also believe pretty heavily that to be good at application security or cloud security, it requires a foundational understanding in the other domains.
It's great to just be under the security engineering umbrella. Again, like I was mentioning earlier, being able to transition and put engineers where it makes the most sense given the problems that we're facing at any given time, or quarter.
Guy: Got it.
Leif: I think part of it also is to capture the idea that we do expect the people on our team to be software engineers at a certain level. Software engineering and security engineering aren't that different.
Obviously, they're specializations, but we're still writing code. We're still deploying stuff. We're still working directly with software engineers.
The security engineering title does a good job both encapsulating that work as well as, as Eric said, creating an umbrella for product security and cloud security and application security all under one group.
Guy: Cool. Does the security engineering group also own and operate these tools? Basically like an engineering group inside, like internal tools or the platform storage? How does that work?
Eric: It really depends on the type of tooling. We're trying to figure out, for example, right now where do things such as the WAF live?
Yes, it makes sense for us to be there for the tuning, to put the rules in and ensure that we're being effective there. But the operational side, I think our SRE team would feel more comfortable owning that. It just really depends, per tool, where things would fall, ultimately based on who's the best person to handle a given aspect of a given tool.
Operationally, for the WAF, it's definitely SRE whereas maybe the tuning and ensuring that the rules make sense is more on the AppSec side.
Guy: Makes sense. Those are the tricky questions in these areas: the split between the expert center and the people dealing with operating the tools day to day.
Eric: Those types of things always vary. Having those candid conversations with the other directors or the other engineering leaders in the org, to figure out who is best suited to manage a given aspect of the tool, is typically what we do.
Guy: Sounds right. That's a good tee-up, indeed, for how your group, be it security engineering or maybe specifically application security, works with the engineering team. It sounds like it's not part of the same org. At a high level, what would you say the ratio is of security engineers to engineers on the regular R&D side?
Leif: I think it's roughly about five of us to around 100 engineers, and then security as a whole, which also includes IT, is about 15.
Guy: That's a pretty good ratio; compared to a lot of companies we talk to, that's a high level of investment in those components. How is the affiliation? Is it still one team, with application security working with the entirety of the engineering organization? Or is there some lower-level partnering, working with one group or another of the engineering org?
Leif: Similar to what Eric said earlier, it really just depends, quarter to quarter, on what the highest-priority projects are for both engineering and security.
We get called in to consult on pretty much every project, but we may not have a super hands-on part in it. Then on other projects it may be a formal partnership where we're both contributing equal amounts of work. An example of that is we added MFA for our customers last quarter. I did almost all of the back-end work, and then an engineer from our enterprise core team did most of the front-end work.
That was a shared responsibility delivering that feature, so that would be a very high level involvement from us, but it really varies. Project to project, quarter to quarter.
Guy: That's awesome. You choose a specific initiative and then you figure out how the collaboration is going to work. And then is that mapped into sprints? Does it come back to software development chunks of work?
Eric: Yeah. We don't do sprints holistically as an organization, each team does their own thing.
We do have quarterly OKRs, so before the quarter starts, typically what happens is I'll reach out to all the engineering leads and give our spiel: "If you have these types of projects, let me know."
If you think of something security-heavy maybe we should do what we call a partnership," which is about 80 percent FTE time from my engineer to help implement or just provide support where possible for the duration of the project.
We have more of the consultation-style partnerships, which are typical in most organizations, where we'll do a design review and a threat model and follow up from that threat model. Maybe a pen test down the road, if necessary. That's typically how we break it down quarter to quarter, and then we do another thing, which is an embed program.
One of the things that Leif talked about, definitely in his AppSec California talk, is that we want to get our engineers working with other teams for at least a quarter.
We've been experimenting with when this should happen in a given security engineer's lifetime at Segment. So we basically decided with one of our recent cloud security engineers that within his first month he was sitting with our tooling team for a quarter, just doing tooling work and understanding how that tooling team operates. Understanding how security processes impact the tooling team. The goal of that quarter is for them to work on a capstone project together, to ship something that will hopefully benefit both tooling and, ideally, security. Then for other engineers, like Leif with the MFA, that was part of an embed as well. With the enterprise core team, we reached out to them ahead of time and said, "We really want to get MFA in the app. We know that you've been responsible for the other parts of auth historically, so let's do an embed and have Leif come sit with your team for a quarter and work with you."
Ideally it's a resource on your end to build this out, and again, it really helps. Leif coined the term "Walk a mile in the developer's code."
Leif: Yeah.
Eric: To really understand what the development process is like, what our processes or our controls look like from the other side, and then really bring that back to the team to figure out how we can improve those processes and get a better understanding and empathy for the developer.
Leif: Yeah. It also really helps to just understand whatever you're trying to protect. If there's ever a question from somebody else on our team, or CERT, or somewhere else within engineering, like "What is our authentication flow?"
I can walk anybody through that whole process because I made so many changes and modifications as we were rewriting an older service as well as adding MFA at the same time.
So having that knowledge within the security team and not having to rely on an external team to answer questions about something that's as important as authentication, I think is really valuable.
Guy: I really enjoy the affiliations to changes that have happened in the DevOps world. It feels like this is very much a part of it, the "walk a mile in the other team's shoes."
But it sounds like it also goes hand-in-hand with your commentary at the beginning around the team being engineers, because you can only do this if the team is indeed capable of acting as an engineer inside that engineering team.
If you bring somebody that doesn't have that sense for what code is and how the software development process works, then you're not as able to do it.
Do you do a little bit of the other way around? Have you considered taking an embed from the development team and having them walk a mile in the security engineers' shoes?
Eric: You're blowing our cover, here. Because one of the things we're trying to do is get people comfortable with the security embed and then flip the script, which is like, "How about you come on the security team," and again, exactly what you said.
Walk a mile in our shoes and maybe get some empathy from a security perspective, and understand what we have to deal with on a cross-functional basis.
Also, just given that they are developer powerhouses, they can probably build some really fancy and pretty tools for us in the meantime as well.
Leif: A lot of this is-- I know a lot of people hate the term DevSecOps, and that's OK, you can hate the term. But I think this is what the goal of DevSecOps should be. It's similar to DevOps, where you have operations people learning how to code, and now all of Segment's infrastructure is code. You have developers running their own services built on top of the building blocks provided to them by the foundational teams at Segment. I think it's just about becoming a more well-rounded engineer. Whether you're a foundation person or a developer or a security person, you need to know at least a little bit about all those other aspects, because it's just part of delivering quality software in 2019.
You need to know enough about the whole stack of your application, and part of that is knowing how to keep it secure.
Guy: Makes perfect sense.
Eric: Yeah. I think it's similar to how developers have just adopted reliability in general. How do we get them to adopt security as part of how they have adopted reliability over the past few years with the services that they've been deploying?
Guy: Let's get philosophical here a little bit. We've talked about the org, about embeds, about exchanging people and getting exposure, and maybe a bit about the skill set. How do you operate when it's not an embed or a security capability built into the product, but rather the security controls around it?
What are some of the principles or the guiding lines that you use when you go to consider a new security control, a new security program and try to get developers to be engaged with it?
Eric: The first thing is "Would this tool be used by the developer?" I think answering that question as quickly as possible is the quickest way to de-risk any control or any vendor that you're going to use.
If it's something that has an awful UI or is awful from a usability perspective, developers aren't going to use it. Ideally, you always want to see if they have an API where you can build on or extend the tooling. Really, the first thing is just getting the developers to be part of that eval process for any vendor that you're looking at or any control that you're going to implement.
Because they're ultimately going to be your users at the end of the day. Just like we don't develop product without user input, we shouldn't be developing security features without input from our users, the developers, during that process.
Guy: That makes sense to me. You're bringing them in, and I love the customer-centricity. In this case, the customer being your developers.
Leif, I heard you refer to this notion-- Or, made some quote around making it easy. "Make it easy for someone to write secure code and you'll get secure code." How does that manifest in the day to day?
Leif: Yeah. There have been a lot of improvements to languages and frameworks over the years, and I really think that's where the industry is getting some of the biggest security lift. It's just making it really easy for people to do the right thing, and making it harder to do the wrong thing than the right thing. I don't think that's a notion that's unique to developers by any standpoint. That's just humans in general: make it as easy as possible for people to do the right thing.
I think a really great example of this is that in React it's really hard to introduce cross-site scripting. In the security training that we give developers, if somebody is working on the front end and there's one thing they remember, it's "Don't use dangerouslySetInnerHTML."
In our security org we very rarely say never do something, but dangerouslySetInnerHTML is one of those things. Even before any of us got here on the security team, just because our developers had made the choice to use React, because it was easy to use and cool, and Facebook built it, and a bunch of other more valid reasons from a development standpoint, we've only had a few instances of cross-site scripting. I think it's less than five in our app, like, ever. Compare that to somewhere else that isn't using React and is having to remember to escape user input in every single place in the app. That is so much harder than just using React and not using dangerouslySetInnerHTML. I think that's a really good example of an instance where it's just been very easy to write good code.
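The default-safe pattern Leif describes, where interpolated values are escaped unless you reach for an explicit, scary-sounding opt-out, can be sketched in plain JavaScript. This is a minimal sketch of the principle, not React itself: `escapeHtml` and the two render functions are illustrative names, not Segment or React code.

```javascript
// Escape the characters HTML cares about so user input renders as inert text.
function escapeHtml(input) {
  return String(input)
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

// Safe by default: interpolated values are always escaped,
// mirroring how JSX treats {expressions}.
function renderComment(userInput) {
  return `<p>${escapeHtml(userInput)}</p>`;
}

// The explicit opt-out, mirroring dangerouslySetInnerHTML: callers must go
// out of their way, and the word "Dangerously" shows up in code review.
function renderCommentDangerouslyUnescaped(trustedHtml) {
  return `<p>${trustedHtml}</p>`;
}

const payload = '<img src=x onerror="alert(1)">';
// The default path neutralizes the payload into plain text.
console.log(renderComment(payload));
```

The point of the naming, as with dangerouslySetInnerHTML, is that the unsafe path is longer to type and announces itself to whoever reviews the code.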
Guy: Yeah, we're all humans and we're lazy. At the end of the day, though, the path of least resistance is the one that's going to prevail, so you might as well make that the secure path.
Eric: We also adopted that ourselves. With dangerouslySetInnerHTML, I love the fact that "dangerously" is in the name.
We use Terraform for most of our infrastructure here at Segment, including S3 buckets, and our public bucket module is named like a "dangerously public S3 bucket."
So when they're using that, they're like, "OK. Maybe I should think about this a little bit more." We try to adopt that where possible, just because I think it does definitely send a signal as well.
Leif: Yeah, it definitely makes somebody think twice: "Is this what I want to do?" Sometimes it is. Sometimes it does make sense to have a bucket be public, when it's just static assets or something like that getting loaded on our public web page.
But anybody who even remotely follows infosec news knows that many times the bucket should not have been public.
Eric: I think any signal you can send to a developer who has probably done the 15th code review that day, they're looking at something and they see something like "Dangerous," it's like, "Maybe this is something I should pay attention to a little bit more," versus "This is just an S3 resource that they're using."
Leif: It also makes it a lot easier for a developer who's more junior, and we don't expect every developer to be a security expert.
We expect them to try and we expect them to ask more senior people in their team or people from our team for help.
But having flags like this, even somebody who this might be their first software engineering job. They can see that something says "Dangerous" and I think that can trigger something for a brand new dev.
Like, "Maybe I don't want to be using something dangerous."
Guy: Indeed. Switching maybe into education: when we talk about making security experts out of these developers, at the end of the day there's only so much anyone can become an expert in.
Developers are heavily overwhelmed today with information. I know you've invested a fair bit in training and educating those developers on the important parts of security, tell us a bit about this.
How do you go about building some of these security perspectives in your dev teams?
Leif: Sure. The most important thing that we think about when creating and delivering training is making the training relevant.
If you're asking for developers' time, you should be using their time wisely.
So when developers start at Segment they go through a two-part training. The first part is thinking like an attacker, and the second part is secure code review.
In the thinking like an attacker training, all of the examples that we talk about are things that we've had submitted to our bug bounty program or things that we've gotten in pen test reports or things that we've found internally.
Every single example that we show them is something that was previously a vulnerability in Segment. And it's a lot easier to get a developer to care about a vulnerability when you can say, "This feature that you're probably familiar with, this was a previous vulnerability. This was the impact. This is what the fix looked like."
Versus talking about a cross-site request forgery example from a bank where you're transferring money and the developer might think "I don't work at a bank. Why do I care about that?"
It just makes it a lot more tangible if you just have all of your examples come from stuff that is similar to where they're working.
Then the follow-up to that is we teach them how to use Burp Suite with the OWASP Juice Shop project, which is our favorite vulnerable web app because it's written in Node and Angular and it's a single-page app. The tech stack isn't exactly the same as Segment's, but it's pretty close. It's definitely close enough that when we show them the architecture diagram, developers understand, "This is pretty close to what Segment looks like."
Then part two, the secure code review training: one of our co-workers, David, built a couple of small, intentionally vulnerable blocks of code that run a Hawaiian shirt store.
Then we ask the developers to review the code and identify vulnerabilities based off of the training that we've given them that day. Again, all of that is React, Node, or Go, which is what we use to build Segment.
So whether they're a back end engineer or front end engineer, there should be some part of that code review that is in a language that they're familiar with based off their time at Segment.
Eric: In the thinking like an attacker training, we also want it to be competitive in a sense.
After we go through the theory and examples of each type of vulnerability, we get them set up with Burp Suite hitting Juice Shop, and we have them go after the specific vulnerabilities that we went over.
On the whiteboard, we have those vulnerabilities, at least the names of them, enumerated. We treat them as flags, and when people capture one, they show it to the person giving the training, who writes their name up on the board.
It really gets people in a competitive mode. We've had people stay a half hour or forty-five minutes after the training was over, still trying to exploit the last flag, because they were just so engaged in the training.
That's definitely a huge positive signal on our end that people are really taking something away from it.
Leif: It's also just a great way to meet new developers.
We try to give this training within somebody's first month or so of them getting to Segment.
For a lot of these developers, this might be their first interaction with us as a security team. It might be their first interaction with the security team ever.
Maybe this is their first job or maybe their previous company didn't have a security team.
We think it's really important that they have a positive experience with us from day one, because a lot of how we're effective as a security team relies on developers letting us know when they need help.
So they're the ones that are letting us know that "I need a design review," or "Maybe I've identified something and I'm not sure if this is a vulnerability or not, but I just wanted to let somebody know."
None of that works, and you can't rely on it, if you don't have a relationship where developers are encouraged to come and talk to the security team.
Guy: It sounds amazing.
I absolutely love the idea of using code that they relate to, and the fact that it's historical vulnerabilities is especially resourceful and innovative. It probably required a good chunk of effort, because you have to sift through those and explain them in a way that is manageable. But on the flip side, you get something that is much more attached and much more relatable to their surroundings. I also love how it's all coming together; a lot of the overarching theme here is good, healthy people interaction. Whether it's the embed, whether it's the skill set, whether it's that good initial relationship with them.
Very much about positive security, which unfortunately is not yet super prevalent in the security industry.
Leif: I do think that is changing. I think that historically that probably hasn't been super common, but I think there's a new wave of security teams that understand this is really the only way to get stuff done.
Especially in an instance where you're at a company with a microservices architecture and every developer is pushing code tens of times per day. You can't review every single pull request, and we're not pair programming with every single developer.
We just have to give them the training and resources and teach them security judgment, which is a term that I stole from a Netflix presentation a couple of years ago at AppSec California that I just absolutely love.
It's just the idea that even if you're not a security expert, you should at least know when something looks off or maybe something doesn't seem like the right way to be doing whatever you're trying to accomplish.
Just let us know. We're always available to come and help out.
Guy: Yeah. That's awesome.
I'm fortunate enough to run this podcast and see people like yourselves, who are at the forefront, talk about it. I very much hope that this is a trend, versus an echo chamber. But I think over time it's a must, as you point out. Software is accelerating too much and getting too complex for anybody from outside to secure it.
On that note, we've talked about education and engagement with the team. When you talk about this notion of positive security, I know you've done some things to celebrate successes. Can you give us a couple of examples of what happens when somebody does a good thing around security? Do they get some stickers? I remember some mention of a crown.
Eric: Yeah, the stickers come out as part of our training. When you complete the training, we have this Hackerman sticker, which is an online meme that we use quite a bit in the training itself, so people can show "I did the training."
Another thing that we've started, which I'm presenting at OWASP next month at Uber, is the leaderboard. It's effectively a gamification platform that we built that celebrates the small wins that people have. When people come to you and say, "I think I noticed this issue," or "I noticed maybe some PII in this log," how do we recognize those small wins? This leaderboard is basically that UI.
I really got enticed by Halo 2 and the notion of matchmaking back when I was in high school or middle school, and how people can be ranked.
Basically, what happens is when you do these small things, we'll recognize you and you gain experience points. Everyone starts off at level 1, and depending on the type of thing you've done, you'll get 15 or 25 experience points. When you get to 100, you go to level 2.
Every Friday it posts all the great things that people do in our security Slack channel, so it's not just the people that were part of that interaction, like the security team and the developer, who see it. Even the VP or the CTO or people higher up can see, "This individual has done all of these great things this past week or this past month." You'll see that recognition happening in the security channel overall.
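The leveling math Eric describes (small awards of 15 or 25 XP, 100 XP per level, everyone starting at level 1) can be sketched as follows. The action names and point values here are illustrative, not Segment's actual leaderboard:

```javascript
// A sketch of the leaderboard's leveling math as described in the episode.
// Hypothetical action names; the P1 values match the numbers Eric gives below.
const AWARDS = {
  reportedIssue: 15,  // e.g. "I noticed maybe some PII in this log"
  helpedSecurity: 25, // a bigger assist
  foundP1: 100,       // proactively finding a P1: an automatic level up
  fixedP1: 50,
};

function createPlayer(name) {
  return { name, xp: 0 };
}

function award(player, action) {
  player.xp += AWARDS[action];
  return player;
}

function level(player) {
  // 0-99 XP is level 1, 100-199 is level 2, and so on.
  return Math.floor(player.xp / 100) + 1;
}

const dev = createPlayer("sample-dev");
award(dev, "reportedIssue");
award(dev, "helpedSecurity");
console.log(level(dev)); // 40 XP: still level 1
award(dev, "foundP1");
console.log(level(dev)); // 140 XP: level 2
```

A real system would persist these totals and post the weekly recap to Slack, as described above.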
Guy: That's awesome. What types of actions do they get points for?
Eric: We have a vulnerability management program here, like most people, and we rate our vulnerabilities. If you find a P1, we'll give you 100 points, because P1 is the worst type of vulnerability. If you fix one, we give you 50, because the fix could simply be because we assigned you a vulnerability to fix. But for people that are out there proactively finding these things, we give you 100, and that's an automatic level up.
We also have a catch-all for going above and beyond for security. This isn't even just for engineers; our salespeople are on this board, typically because they've asked someone who was maybe trying to tailgate to badge in.
Another thing that we've done with this is open it up to other people, so it's not just security giving out these points.
We're not always around to watch people tailgate, so we've had other people that are not security engineers or on the security team submit these points through the Slack command that we have.
We're just really trying to build a culture of people recognizing each other for doing awesome security things.
Guy: That's awesome. I still think stickers are good as well, even if they're just from the training. I think you also showcase that it's important.
But I very much loved the leaderboard and those results.
As we're getting to the tail end of the podcast, can you rattle off some of the tools of choice that you have today in your stack, for people to consider?
Leif: Sure. As listeners of the podcast might guess, we are Snyk customers. The way that we introduce Snyk does do a good job of encapsulating a bunch of the stuff that we've talked about.
When we were evaluating Snyk, as Eric said earlier, any tool that we buy we want developers to be able to use, because we want them to be able to take control of the security of their services.
So when we were doing our evaluation of Snyk, we partnered up with our growth team to integrate it with some of their repositories and get feedback on the tool, accuracy, usability, etc.
Once they said, "This looks pretty good," we added it to a couple of other repositories. Then, as part of our introduction to the rest of the engineering team, at all-hands we had the engineers write down on a piece of paper how many total JavaScript libraries they thought we might be including across all of our repositories.
The person that was the closest, we gave them a crown at our engineering all hands. Snyk is definitely one of the tools that we rely on, that's how we introduced it.
We also use Detectify for our DAST scanning. I think in the DAST market it can be challenging to find a tool that can log into a single-page app like a React application, where the DOM is just doing--
Guy: Crazy things.
Eric: For SAST, we're looking at Semmle right now.
We have been using-- Coinbase created this awesome tool, or concept, called Salus, which is a way to deploy a container to each CI. If you are using CircleCI, you can create a new job that will spin up a separate container, the Salus container. From there you can, from a central location, inject various different linters or other tools that you want to run that will do some static analysis.
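Salus's documented usage is to run its container with the repository mounted inside it, so a CI job just needs a step like the following. This is a minimal sketch assuming the public coinbase/salus image and a CI executor that can run Docker:

```shell
# Run Salus against the current repository, from CI or locally.
# Salus picks scanners (linters, dependency audits, etc.) based on what it
# finds in the repo, and an optional salus.yaml can centralize configuration.
docker run --rm -t -v "$(pwd)":/home/repo coinbase/salus
```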
But now we're looking at something like Semmle as well to help supplement that, and they also have a pretty good developer story.
I love the fact that they have a good query language for their SAST product, and people that aren't just security folks can go and use it to find other types of problems that aren't security-related. So, we're looking at them.
Leif: As I alluded to earlier, we use Bugcrowd for our bug bounty program.
That's a combination of tools and services, but I think running a bug bounty program is pretty important just to show researchers "If you do find something, we're not going to sue you, we're going to pay you. So please be responsible and tell us about anything that you might find."
We're looking at Assetnote, that's another tool that we're evaluating. They're in the asset-discovery space.
Something that will go out and look for internet-facing assets and try a variety of tools and techniques from the bug bounty world.
Some of the co-founders of Assetnote were really successful bug bounty hunters, so it'll scan your external resources and see, "Can we do stuff to make takeovers or things like that?" That's a tool that we're pretty excited to use in the future.
Guy: Very cool. Thanks for sharing.
I think it's very useful to hear the vetted set of tools you're using.
Before I let you go here, I like to ask all of my guests one last question, which is if you have one pet peeve or key advice that you would like to give a team that is looking to level up the security caliber, what would that be?
Leif: I don't really know if it's a pet peeve, and I think it should be relatively obvious from the rest of the podcast, but be friends with people.
People are way more likely to do the things that you need them to do if they like you. So much of security revolves around getting other teams to do work, because they have domain expertise that you don't, and you need their help to improve the security posture of your company.
Do everything you can to build really great relationships inside of your organization.
Eric: Yeah, I think the one thing is definitely to invest in building out quality where you have the most face time, so with your engineers. Training, for example. It's paid us back in spades. For the amount of value we've gotten out of it, yes, we could have gone down the automated route or a video route.
But the amount of time that we have spent making that training awesome has definitely outweighed the amount of time we would have spent dealing with the vulnerabilities or issues that would have come up if we hadn't.
Guy: Cool. This is also a good time to mention that if you want to join this very forward-looking team here, you can check out some of the job openings that the Segment team has on Segment.com/jobs, especially in the San Francisco area.
But it seems like across the US as well. Thanks a lot, Leif and Eric, for coming on the show. This was excellent.
Eric: Yeah. Thanks for having us.
Guy: Thanks for everybody tuning in to the show, and I hope you join us for the next one.