EnterpriseReady
101 MIN

Ep. #27, Continuous Intelligence with Christian Beedgen of Sumo Logic

about the episode

In episode 27 of EnterpriseReady, Grant speaks with Christian Beedgen, CTO & Co-Founder of Sumo Logic. They discuss Christian’s background, security analytics, the challenges faced by first generation on-prem software companies, and the evolving security landscape.

Christian Beedgen is the CTO & Co-Founder of Sumo Logic, a secure, cloud-native, machine data analytics service. He was previously Chief Architect at ArcSight.

transcript

Grant Miller: All right, Christian. Thank you so much for joining.

Christian Beedgen: Thanks for having me.

Grant: Let's just dive right in. Tell us a little bit about your background and then how you made it into enterprise software.

Christian: Right on. I am currently the CTO, and I also founded the company, Sumo Logic. I've been doing that for the last 10 years; we got started in 2010.

Before that I was an early developer on the crew of, and ultimately became chief architect at, an enterprise software company called ArcSight.

That was in the previous decade, started in 2001.

I am originally from Germany, I grew up in Germany and went to school in Germany, went to University in Germany and came over to the US for the first time as an intern in late 1998.

That's all code for saying I'm really old.

Grant: Cool. So the first jobs you had in '98, was that at an enterprise software company or did you start in consumer?

Christian: I don't know how they would classify themselves back then, but it was a small company called Amazon.

Grant: Amazing.

Christian: Yeah, it was a complete coincidence.

I went to a school of applied science in Germany doing a mix or a blended program that had computer science and media and internet in the widest sense.

I started that in 1996. This is roughly around the time when the internet became a thing in Germany, probably maybe a couple of years after it became a thing in the US.

But it was pretty close. I worked construction that summer, took the money and bought a modem.

I got one of those Super Fax 28k modems and dialed in and I've never hung up since, to some degree. I got completely sucked into the network.

I was actually, at the time, still studying social sciences which is also interesting in its own way.

But I had been playing around with computers when I was a kid. I taught myself programming when I was 11-12 years old by going to the library and pulling these dot-matrix-printed books about BASIC and what have you.

Commodore 64, and all that stuff. I did that and was a little bit self-driven, and as I got into high school of course other things became more interesting.

I didn't really go back to that, I was politically active as a student a little bit, that kind of stuff.

Things that you get sucked into when you are still a little bit younger, representation and those types of things.

But that led me to eventually then go and study social sciences, which is politics and sociology. But it never really-- It didn't really stick. I wasn't really that interested in it.

But you need to write a lot of papers when you do that course of studies, so I ended up getting a Mac. Then as soon as I had that Mac, I was back into just hacking around.

Then the internet happened, so the Mac was what I hooked up that Super Fax to, and that's what got me to today. I dialed into some local mailbox, or some pseudo-AOL thing or so in Germany.

I think for a day or two, or some sort of message board or what have you. But then there was Netscape Navigator and the internet, and that's just-- It basically never stopped.

I still live in a web browser on some level, many different web browsers and many different apps today, but that was '96 and I basically decided that I should probably--

Because I was really interested in it, I would read books, I would buy books.

I got hip to Linux and back into programming, and then I ran across this program which was in a small school outside of Berlin.

I fancied myself an artist as well, just like everybody who had a copy of Photoshop.

Apply all these filters, you're an artist. It turned out I wasn't actually that much of an artist, but I was one of the few people who actually knew how to program in this particular college program, so eventually then in 1998 I got myself a programming internship.

It was with a small German company called Telebook, which sold books over the phone and then increasingly so over the internet. That's where I got the internship.

Between getting the confirmation in the summer and then starting it in the fall, these guys got bought by Amazon.

Grant: Interesting.

Christian: To this day, this was the foundation for the German arm of Amazon in Bavaria. I think it's still there.

This internship basically then turned into, "Let's go to Seattle and do something." That's how I ended up at Amazon.

Through no fault of my own, but it actually did happen.

Grant: That's cool. What were you working on at Amazon that summer?

Christian: We did a prototype of a greeting card site. One of those things where you pick a funny picture and write a text, and you send to your parents a virtual greeting card.

Grant: Yeah.

Christian: At the time, there was a very successful site by the name of Blue Mountain Arts, and they might still be around.

I have not really been into the greeting card game for a while, so I'm not entirely up to speed there.

There was a hypothesis that they looked at their uniques and they were basically completely different people than would shop at Amazon.

They said, "There's a company. If we can do something to pull these types of users in that don't seem to come to Amazon at all, we can grab them with this greeting card product and maybe we can also start selling them stuff."

It was a very high level thesis to some degree, also driven by the German guys that had gotten acquired and they needed to figure out something to do.

Anyway, that was our internship. It was pretty random. We were some of the first people writing Java code at Amazon at the time, it was interesting.

We were sitting in a basement there in downtown Seattle. It was pretty wild, the whole thing.

Grant: How big was Amazon when you joined?

Christian: I have no idea. I would have to look it up, but they didn't really even have a single building.

They were already big enough to not fit into whatever initial little office they had in downtown Seattle, so it was all spread out.

First street, Second street, it was all there. We would have to go across town to do meetings and stuff like that, and they had a little data center in a big building in downtown on Second or whatever it was, I forgot.

We got to drop a server off there once, and then there was this big Sun box that had a label on it that read "Mom."

I still find that pretty funny, because that must have been their master database or what have you. I don't know, but different times, right?

They were obviously already a thing, but clearly not to the degree-- I think they were already public at the time.

Grant: OK, yeah.

Christian: But they weren't-- If I'm not mistaken, that happened all super quickly. Their whole timeline was just ridiculously accelerated.

Grant: The timelines for all these dot com companies, that age was just so fast. You think about 18 months to IPO kind of timelines.

Christian: One really cool thing I remember is that, many years before they actually implemented it for all the customers, we had a preview of the Prime functionality.

Because basically when you ordered something, you would go to some internal log in and then you would get the full catalog, so you'd order a book or something and there were guys that would basically next-day drop it off at your desk.

Grant: That's cool.

Christian: That was just outstanding, we were like "Holy shit. That's pretty cool." So we kept ordering stuff.

Grant: That's funny. Did you also have any previews for the stuff like AWS? Like, infrastructure as a service while you were working there? As in, building--?

Christian: Not at all.

Grant: Not at all? OK, so that was still totally not a thing yet.

Christian: No, they were still doing Obidos. Even that original back end, if I'm not completely mistaken.

Again, I was at best a visitor in that world. I don't really-- I believe this Obidos thing that they had initially, it was all written in C and when we showed up writing some Java code there were some eyebrows raised.

But they had already started acquiring companies, and I remember there was one company called Junglee that did some comparison shopping thing, I think, and they got acquired.

They had already some Java back end, so we ended up talking to those guys a little bit and they sent one of their more senior engineers over from that acquisition who had done a lot more with Java than we could have ever imagined.

Because remember, we were very young and we had basically no idea what we were doing.

Grant: Sure.

Christian: Anyway, we built that prototype and of course it got scrapped. I think at some time later they ended up for a little while running a greeting card site, but that's also long gone.

Grant: Wait, they still don't have a greeting card site? That's too bad.

Christian: They had one in-between, yeah.

Grant: Then you went back to school, and then you started a company while you were in school or something? Is that what happened?

Christian: What happened there in Seattle, this was just literally in late '98 early '99. Everybody's head was exploding, it was just this incredible boom.

We had gotten brought in to do this internship by the German founders of the German company that had been acquired, and they were running around in Seattle and they were entrepreneurs at heart.

They were too entrepreneurial to fit into the Amazon system at the time.

They were running around trying to figure out what to do, and they had some ideas, of course they had made a bunch of money so they could finance that also.

There was all these startup ideas floating around and I got sucked into one of them, which was a language translation thing, because we were all German and our English was not that good.

We were trying to build things that make it easier to learn English, and even online translating pages and stuff like that.

That was the high level idea, so one of those German founder types basically went and started that. I was part of that, and that was really only for a couple of months.

It got started in Santa Monica, and long story-- Anyways, it was Santa Monica, and there was an office in Berlin.

That's where I was living at the time, and also going to school.

So they built that up and then people started falling out over there, there was something going on but I didn't even really know what was going on.

There was some business guys that were brought in, and then of course the business guys ended up not liking each other.

Grant: As is common-- Startups going under because of co-founder disputes, right? It happens.

Christian: I honestly never really found out exactly what was going on, but frankly I didn't care either.

Because there was one person there that had come in to run this operation, really like a COO type that came over from the US, actually from Miami, because these guys had connections or whatever. It's all very complicated.

But anyways, this person showed up and we liked her, and then me and my buddy Stefan-- who is actually still working with me.

He's a chief architect at Sumo now. We basically went and started another company with this person from Miami.

Because the idea was that it would be nice to have access to your files from the internet if you're in an internet cafe. I don't know if you remember those, but there used to be times where not everybody was online and you had to go to a place.

Grant: Sure.

Christian: Not everybody had laptops at the time, and obviously no phones and all these things.

Long story short, we had this basic idea of building a file system in the internet. Like a web browser, like a Dropbox thing.

Grant: Sure.

Christian: The great redemption in this-- because we completely fucked that up from an execution perspective.

We had no idea what we were doing, but the great redemption is that I can always now refer to Dropbox and everybody goes "Oh."

Grant: That was Gigaton?

Christian: Yeah, that was Gigaton. It was cool. We were having a lot of fun. So that's what got me permanently--

Grant: But it was targeted at consumers rather than targeting enterprises, is that right?

Christian: I'm not sure it was even targeted at anybody in that sense. We just thought that this needed to be a thing, and we started building it.

But during that time we grew it out and it was 30-50 people or so at some point, sitting there outside of Fort Lauderdale, near Miami.

It was all pretty wild and then it didn't go anywhere, but we ended up going around and talking to some VCs and trying to raise more money.

We had some trips to Boston, some trips to Silicon Valley. That was the first time I was ever in a bunch of these VC offices and met those types of people, and nobody would give us any money.

But technical people, even then, were still-- This was after the bubble burst, now we're talking early 2001.

Technical people as you know are always in short supply.

Talking to all of these VCs ultimately netted me an interview at a company that one of those VCs had just incubated, and that turned out to be ArcSight.

Grant: Interesting. So you had been going around pitching Gigaton to all these different investors, no one really wanted to invest in the company, so you wound it down.

But then one of the things that happened was you got an interview with ArcSight?

Christian: Exactly. They didn't want to give us any money, but they were like "Maybe you can work for one of our companies."

That was awesome because it got me-- I still remember it, one of the founders, the CTO of ArcSight, gave me a call while I was driving on a freeway in an old beat-up Honda Civic in Florida.

He had started asking me coding questions on the phone, so that's pretty funny. Anyway I went out and I met him in Sunnyvale, that was early 2001.

That was by far the smartest set of people I've ever met. I know that sounds like a bit of a cliche, but I was just completely blown away. It was just incredible.

I could see this was my first sense of Silicon Valley. It was a clear step up in terms of just raw intellectual horsepower and intensity.

I was in Seattle and all of that with Amazon, but probably a little bit too far removed from where all the smart people were, because we were working on our little dinky prototype there.

But that interview that day, I remember, just completely blew me away. I ended up getting the offer, and then in April 2001 I moved all my stuff over from Florida.

Grant: This is pretty early in their lifecycle, right?

Christian: They were probably half a year old, or maybe at this point maybe nine months or something like that.

Grant: What was ArcSight?

Christian: It grew up to be what's called security information and event management software, or SIEM.

This is basically software that takes logs from security devices and security software--from all of these different security point products--and centralizes them, so that you have a central point of view and can run analytics, a rules engine, to detect threats on top of all of the point-product output.

That was the big thesis, and they had come across that by interviewing lots of customers. They told them "That's a real problem." So they built a company around that.
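
To make the idea of a central rules engine running on top of normalized point-product logs concrete, here is a minimal sketch in Java. It is purely illustrative; the event shape, field names, and threshold are assumptions for this example, not ArcSight's actual schema or API.

```java
import java.util.*;

// Illustrative only: a normalized security event as a SIEM might see it
// after parsing logs from firewalls, IDS sensors, auth servers, etc.
record SecurityEvent(long timestampMillis, String sourceIp, String type) {}

class BruteForceRule {
    // Flag a source IP that produces >= 5 failed logins within 60 seconds.
    private static final int THRESHOLD = 5;
    private static final long WINDOW_MILLIS = 60_000;

    private final Map<String, Deque<Long>> failuresBySource = new HashMap<>();

    boolean onEvent(SecurityEvent e) {
        if (!"auth_failure".equals(e.type())) return false;
        Deque<Long> window =
                failuresBySource.computeIfAbsent(e.sourceIp(), k -> new ArrayDeque<>());
        window.addLast(e.timestampMillis());
        // Drop failures that fell out of the sliding window.
        while (!window.isEmpty() && e.timestampMillis() - window.peekFirst() > WINDOW_MILLIS) {
            window.pollFirst();
        }
        return window.size() >= THRESHOLD; // alert correlated across point products
    }
}
```

The point is only that once firewall, IDS, and authentication logs are normalized into one stream, a simple stateful rule can correlate across products in a way no single point product could.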

Grant: OK, cool.

Christian: It was basically a meta product that was security analytics, you could say, but also lots of data management underneath because we brought all of these log files together in a database, and then lots of real time analytics on top of that.

It was pretty cool, it was clearly-- When they pitched it to me, it seemed like "This makes sense." Not that I knew a lot at the time, but it just seemed to make sense.

You have various vendors and they see, especially in security there's a lot of "Best of breed" buying going on. There's always lots of innovation as well.

You might have Cisco networking gear, but you might buy a firewall from Checkpoint back in the day, or you'd have an IDS from ISS.

But you wouldn't buy all of these different things from the same vendor; that would not often happen.

Because you would go after a vendor that had the best product for this particular purpose, but then what you ran into is you didn't really have a centralized way of making sense of all the output.

You would set up the firewall, and if you were lucky it would block things, or you would set up the intrusion detection systems.

If you were lucky, it would find the five bytes that look like the beginning of shell code or something.

You would look into the various vendors' management interfaces, but you had to do all of the correlation, I don't know, on paper or in your head or what have you.

Obviously, that's not great. So we brought all the data into one place and gave people a control station for that, and that was cool. It's still cool.

Grant: Did that category exist yet? Like, SIEM, was that a thing?

Christian: No, it wasn't.

Grant: OK.

Christian: The name that stuck was that one, and that's one of those things that Gartner made up, as they like to do, dealing in abbreviations and stuff.

Grant: Sure.

Christian: There was actually-- It was very early in that space, and there was I think maybe a product or two out there.

It was probably a year or so old, and there was something called Net Forensics out there that was going after something similar. I think there was one more, I forgot the name of that. ArcSight ended up--

Grant: They were probably all founded around the same time, I'm guessing?

Christian: That's what always happens. Within a year or two of each other, something like that, if I remember correctly.

ArcSight ended up becoming the category leader, ended up being very successful, being all the way up and to the right in the magic quadrant, and ended up going public in early 2008 right before the crash.

Usually when I show up somewhere, things crash. When I came to the US, I came to the US for Gigaton and it was March 2000, so yay.

Then ArcSight went public a couple months later, so by the time the lockup came off that market was basically fucked.

Grant: You were at ArcSight far longer than-- You didn't just join in a crash. You were there from the beginning.

Christian: I have absolutely no illusions about anything anymore at this point. It is what it is.

Grant: I think it just proves you have staying power despite the ups and downs. You'll stay the course.

Christian: The thing that I've seen, and that's what helps, is that it is cyclical but it always recovers.

So when you want to do your thing, like go to market, or in this case looking for an exit or what have you.

If there's all these timing considerations and you don't know what's going to happen tomorrow, nobody can see these things coming. Then you just have to see it through.

If you set up the thing properly and you have enough money in the bank, and maybe you have been conservative with spending or maybe you raised a little bit ahead to have a buffer, then usually things turn around. The economy does this.

Grant: In our lifetime, it seems to go up and to the right eventually.

Christian: Exactly right. It totally bounces back after a couple of years.

Grant: The long arc of history here.

Christian: I've seen it twice now, who knows?

Grant: You were very involved in the engineering side. Both as the lead engineer, and then eventually as chief architect.

Were there any key lessons you learned at ArcSight? Because this is, realistically this is your first enterprise software company.

So when you first stepped into that role, did you remember what your perspective was and how that changed? How you adapted to building for enterprise customers?

Christian: I wish I could say I've developed a criteria or something, but I really haven't.

What I've observed to some degree, and this still applies to this day, is that it's not easy to get to something that customers actually need.

Like, this Gigaton thing that's all files and internet and so forth. We never established product market fit.

But the ArcSight product market fit was pretty clear, Sumo was also pretty clear.

We never really had to struggle with that, so as soon as you've got it built people will start using it.

All right, there's a whole aspect of "How do you go from nothing to something, around experimenting and product market fit and so forth? What are the lessons that people have learned around--?"

Don't overbuild and don't get stuck, make sure that you keep yourself agile so you can tear something down and build a wholly different product in a couple of weeks.

Just to basically support that idea of finding product market fit, that makes a ton of sense to me and theoretically I totally get it, but personally I have not been in that situation.

My experience basically starts after product market fit. I got lucky there, I don't know how you want to put it, but that's where my experience comes from.

What I've seen is as soon as there's even one customer, and one leads to three and three leads to 15, etc. And you have a sales team.

Of course, it doesn't always work perfectly in the beginning and sales teams are struggling, etc. Of course, always.

But there will be deals, there will be POCs. They will be-- Eventually people will buy-- At that point it's just basically an incredible tornado.

You end up being, your default mode is just being reactive.

So somehow you have to figure out in that onslaught of everyday being completely different, you have to somehow figure out how to not lose complete control over the architecture of the system.

That's very hard. That's always how I've seen it, and then the way that I've always felt that's best managed is by trying to be pragmatic.

If you get too dogmatic about how you implement things-- and I've made that mistake as well-- you go away and say, "We can completely redesign this part of the system because clearly now we know more. Back then we were idiots anyway, so how could we have even done it the way that we did it?"

But once it's out there and once customers are using it, you're not going to ever kill it.

There's a bunch of lessons that I learned there, where you just have to live with it-- Everything you do, basically, is going to stick around for a long time.

I don't know. There's no magic formula there. You just basically have to be pragmatic about it and basically be able to at any given point in time continue to be able to evolve the system.

Grant: Sure. I think that the key lesson there is being, just in enterprise software generally, once customers start using something it's very hard to deprecate and take things away.

You have to constantly evolve your system without complete rebuilds. Is that what you're saying?

Christian: Exactly right. It's not like you can run-- It's not like an internal software for, I don't know, some insurance thing or what have you where you can go in and run a domain driven design process or something, which I would love to do at one point.

Because there, your user base is right there and it's captive, and you can talk to them.

When you build these types of products that I've been involved in, there's a lot of guesswork involved. It's not that we don't talk to users and customers, obviously we do.

But for these types of software, and platforms like we do with Sumo now, the use cases are so broad and wide that you have to have a good amount of intuition there.

The feedback that you're getting is not always very well structured.

You end up having to adjust your thinking a lot and just hope that your basic idea about what this product should be doing and the basic bones of the architecture are going to survive.

At ArcSight, we made a bunch of decisions in the beginning that ended up being real bummers over the years.

Then we reintroduced-- Specifically, the data back end. We're talking logs, so that's semi-structured data.

Sometimes people say unstructured, but technically it's semi-structured.

This is not something that fits into a classic RDBMS, which is the default back end if you have something where you need to store data and also query it.

It doesn't really fit well, and these days it's a little bit different, but back in 2001 there wasn't a lot of alternative database technology out there, so we ended up going with Oracle and we abused Oracle into essentially being an append-only store for a set of documents.
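
As a rough sketch of what "abusing Oracle into an append-only document store" can look like over plain JDBC, assuming a hypothetical two-column events table (this is illustrative only, not ArcSight's actual schema):

```java
import java.sql.*;

// Illustrative only: events are only ever INSERTed, never UPDATEd or DELETEd,
// which is roughly what treating an RDBMS as an append-only store amounts to.
class AppendOnlyEventStore {
    private final Connection conn;

    AppendOnlyEventStore(Connection conn) { this.conn = conn; }

    void append(long timestampMillis, String rawEvent) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO events (event_time, raw_event) VALUES (?, ?)")) {
            ps.setTimestamp(1, new Timestamp(timestampMillis));
            ps.setString(2, rawEvent);          // semi-structured log line stored as-is
            ps.executeUpdate();
        }
    }

    // Queries are time-range scans plus filters over the raw text.
    ResultSet query(long fromMillis, long toMillis) throws SQLException {
        PreparedStatement ps = conn.prepareStatement(
                "SELECT event_time, raw_event FROM events WHERE event_time BETWEEN ? AND ?");
        ps.setTimestamp(1, new Timestamp(fromMillis));
        ps.setTimestamp(2, new Timestamp(toMillis));
        return ps.executeQuery();
    }
}
```

Inserts only, no updates or deletes, and all queries are range scans over time; that pattern is workable at low event rates and, as described next, becomes the bottleneck on a single monolithic database as rates grow.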

That worked out for a couple of years, but then eventually we got to a place where the SIEM was so successful.

As we know, data even back then was already on this exponential growth curve.

That applies to security data and these types of documents 100%. People went from 50 events per second to 500 events per second to 5,000 events per second that they were interested in processing and correlating, and so forth.

Then people started showing up and they wanted 50,000. At some point what we found was that the way that we had originally architected it, it just wouldn't scale out.

It was monolithic with an app server and a database, and I learned a lot about Oracle, including that it's simply not-- That their classic database just does not scale out.

That was a big bummer. That was something that we half knew in 2001, but there was really no other choice, because going and building it in a clustered fashion--

It was a funny discussion, we basically raised our hands back then and we said, "We should really put a clustering--" We called it a "clustered" architecture back then, "--in there as a foundation."

But it would add about half a year to the release timeline of version one. The business was not-- It didn't really want to hear that, which to some degree I also understand.

In hindsight, it amuses me to think that we thought back then that we could get that done in half a year.

Probably the business was right in not allowing us to do that, because it probably would have taken a lot longer.

Grant: That's what I was thinking when you first said it, is that you estimated at six months and I was like, "That would have stretched to how long, actually?"

Christian: Right. In hindsight, you just basically-- You can take a glass half empty view and say, "You just can't win."

Because you do this and then you have a successful product, but then eventually you run out of runway.

Or you take a look at it and say you're more optimistic about it, and you just say, "But we were still around in 2008 when this became a problem, because the product was so freaking successful."

As much as I would like to do the right thing every day, in the beginning and halfway through and what have you, what counts is still actually being there.

The same with Sumo, and having a right to play beats making only the right decisions. Because nobody can.

Grant: I always say "That's future Grant's problem," in terms of decisions I make now that I know will incur some debt later.

But that's what debt is for. The opportunity to incur a little now.

Christian: But part of that is I have also seen where this can really become a problem. I mean, the main product, the Enterprise Security Manager, which is the thing that I worked on-- if there's any ArcSight people listening, we might have to have a debate or something.

But I do think that this initial architectural limitation that we baked in, in order to get it to market and which ultimately made it successful, eventually also opened up the door for others to disrupt.

Grant: Right. But you could have disrupted yourselves, but you didn't.

Christian: We didn't. We tried, actually. That attempt was made. We built an appliance version that had a custom data store that was very fast.

It was very fast, but it didn't really distribute. Because again, that just adds to the complexity of the problem.

But it was custom built and it was pretty fucking fast, so it could go at very high speeds, hundreds of thousands and so forth.

But then we didn't really know how to do the correlation, so then we had to clump it together and you had to buy software and hardware.

Then eventually people came around that had a better, more big-data-style, MapReduce-style architecture.

Splunk comes to mind, for example, they did that really well. They ended up-- post-2010, they started causing a lot of trouble for ArcSight.

Grant: So, you stayed through the IPO. Any quick insights from the IPO around just ways that challenged you as a technologist and an architect?

Christian: IPO generally since I've been through it now once as an employee, it can get pretty emotional.

Especially if you have something where you go public, the lockup comes off and you are basically under the issue price.

People thought they were going to walk out with, I don't know, $100 grand. Now they're looking at $30 grand, or what have you. Apply your scaling factor.

That can become on a tactical basis very hard to manage for individuals, and for the company in terms of getting people to stay motivated and so forth.

Then basically you have to decide, "Am I going to sell and just F-off? Or am I going to have trust that this downturn will turn around?" And so forth.

Then in early 2009, what happened was President Obama started giving a bunch of speeches around the importance of cybersecurity.

Anyways, as the market started bouncing back a little bit there was additional-- We got a bunch of additional media attention through this cyber angle.

Because it's fundamentally a cyber-security play. The markets picked up on that, so thanks, Obama.

In the end, for the folks that were actually able to stick it out for a year or so, the stock price ended up going up and so forth, and then a little bit more when the acquisition happened.

It was a cool experience. I think a bunch of us actually ended up with a couple hundred thousand dollars, so down-payment money basically in the Bay Area.

Grant: Yeah, sure.

Christian: Enough money to run a small country, but over here it was basically a down payment for a small house.

That was cool, and looking back I'm still extremely grateful to the ArcSight leadership team and the guys who hired me back then.

Because it was really a fantastic experience, both professionally and I ended up making a little bit of money. It set me up to do something else and it was fantastic.

Grant: Yeah. Let's just jump into that.

You leave ArcSight, and had you already had the idea for Sumo Logic or what's the founding story? How did it come about?

Christian: It happened in my head.

The story there is that, rewind to 2008 we ended up on a hunch going to Stanford for an open talk by a guy with the name of Werner Vogels, who of course is the Amazon CTO and who was talking on the topic of AWS.

Amazon Web Services, and this was 2008 and I think they had only done that thing for about two years at the time.

I think they had S3 and SQS, and EC2. I think those were the foundational services that they had and I remember being geeky enough to read stuff on Slashdot and then Hacker News and so forth.

We were aware, but I don't think we had really understood what it was that they were doing.

It just sounded like, "This is really weird. It doesn't fit into any category. What are they doing? OK, next article."

But it was somehow, there was a hook in my head and so we went and heard Werner talk and he did a 90 minute thing.

I don't know if you've ever had a chance to hear him keynote, but he's just an incredible speaker. He has this ability of, contextualizing-- In this case, AWS.

Why what they are doing benefits customers. Then, of course, he's a professor at Cornell and he can go and talk about, distributed consensus and all these kinds of things.

But as a CTO, what he really brought across to us as a technical audience wasn't necessarily how EC2 worked or how S3 worked, but why what it did was unique.

Enabling even back then, their early customers to do just outrageous stuff.

Animoto was a thing back then, this thing where you upload a bunch of pictures and it sends you back this ridiculously overproduced video which was a huge thing.

They went from running it on a Gameboy somewhere in somebody's closet, to somehow suddenly they had, I don't know, 10,000 videos uploaded per minute or so.

Then they scaled that out on EC2 basically overnight, and so all these super cool stories about scale.

Then he had lots of other stuff that he was talking about.

Long story short, walking out of that talk, it became very clear what it was that they were doing, why they were doing it, and also why it was just ridiculously awesome from the perspective of enabling developers to do even more.

Basically, what was clear at the time was that they are starting to-- They had already started to turn a data center into an API.

As developers, I'm sure you'll empathize with that, we know how to automate things.

That's-- We write code to make computers do things. That's our superpower.

Now we can automate an entire data center, and that basically means that we can maybe think big about building hosted systems, building SaaS and those types of things, without even having to know the people that know all about the racking and stacking and the networking and the cables and all of that.
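
The "data center as an API" point can be made concrete with a small sketch using the AWS SDK for Java (v1) as it exists today, which obviously did not look like this in 2008; the AMI id and instance type below are placeholders:

```java
import com.amazonaws.services.ec2.AmazonEC2;
import com.amazonaws.services.ec2.AmazonEC2ClientBuilder;
import com.amazonaws.services.ec2.model.RunInstancesRequest;
import com.amazonaws.services.ec2.model.RunInstancesResult;

public class DataCenterAsApi {
    public static void main(String[] args) {
        // Illustrative only: provisioning capacity is a function call,
        // not a purchase order and a trip to a colo facility.
        AmazonEC2 ec2 = AmazonEC2ClientBuilder.defaultClient();

        RunInstancesRequest request = new RunInstancesRequest()
                .withImageId("ami-12345678")   // placeholder AMI id
                .withInstanceType("m5.large")  // placeholder instance type
                .withMinCount(1)
                .withMaxCount(10);             // scale out by changing a number

        RunInstancesResult result = ec2.runInstances(request);
        result.getReservation().getInstances()
              .forEach(i -> System.out.println("Launched " + i.getInstanceId()));
    }
}
```

Capacity becomes an argument to a function call rather than a rack of hardware, which is the shift being described here.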

What that left us with was this idea that, what if we recast what we were doing and maybe we can solve some of the obvious problems that we were seeing with the enterprise product at the time?

By turning it into SaaS-- not just as an abstract idea, but with AWS as a substrate for it, actually realistically achievable.

Because clearly, if there's an API we can make it dance. That was very formative. I remember the parking lot discussions with a bunch of the other ArcSight guys there, including Kumar who then became my co-founder in Sumo.

We immediately went from, "This makes perfect sense," to, "OK. How can we harness this for the stuff that we are doing, and not necessarily from the perspective of immediately going and starting a new company?"

It was just like, "How does this apply to our day to day stuff?"

Grant: Sure.

Christian: I think that was the nucleus of the idea to basically-- The idea is very simple. It's to say this type of product, this type of data consolidation, and then a central analytics platform is incredibly useful.

We've seen that from the success that ArcSight had even just in the security space. Logs obviously are something that as developers, we are very familiar with.

We even had written our own remote log analytics tools to remotely debug failing servers at customer sites, and so forth.

This was definitely going to be a thing that was potentially not just interesting for security, but for other use cases as well. That's the good news.

But all of this type of product was delivered as enterprise software that had problems scaling, and we had just made this experience of customers struggling so much with installing and administering it that they had--

It often felt like they weren't actually-- Or, that there wasn't that much time left to actually use the product.

To us that was very unsatisfying, because we built the product so people can use it not so we can be on the phone to debug broken Oracle data files.

Grant: Right. Because for the ArcSight implementation you were basically delivering, was it one .jar file or .war file? Or, what was the primary bits?

Christian: Basically. It had an installer, but it was basically dumping a choppy Java application. Then there was a 300,000-page manual on how to exactly configure Oracle.

Grant: So your customer had to set up their actual servers, like rack and stack physical servers, and then bring in the Java runtime and then Tomcat, and then set up Oracle databases and set up or configure the Java application.

Then did the actual application scale horizontally, or did the application scale only vertically as well? The monolithic application?

Christian: No, it only scaled vertically.

Grant: OK.

Christian: It didn't even do that so well, frankly. There used to be this company called Sun Microsystems.

They used to build these really large boxes with 12 or 24 or 48 CPUs in them.

We're talking 2003, 2004. Since our stuff was written in Java, people were like "We're just going to throw it on this E-10K or something, or a Sunfire or something of some sort."

So we went over there actually where the Facebook campus now is in Menlo Park.

That used to be a big Sun campus, and that was where their performance lab was, and it was pretty cool because they had an entire lab set up and people that were managing that.

You could come in and test your software to see if it scales out to the big Iron, basically.

We did something very similar with IBM at some point as well.

One of the cool things on that, I forgot it was a Sunfire of some sort, so it had 24 CPUs. Of course, our code was written in Java and it was multi-threaded.

Of course it would scale in all of these things. That's what you think.

Grant: That's what you walked in believing.

Christian: Exactly, and then you're actually running on a machine that has more than two cores or CPUs-- at the time there weren't really cores, those were actually proper CPUs.

On these big boards, and it's very cool. But they had this widget that basically showed you each CPU in a row, and how much-- Basically, the CPU usage.

The dancing meters, basically. The CPU meters.

There were 24 of them, and our application would basically run and it would keep all 24 CPUs busy for about ten milliseconds, and then it would go and basically everything went to zero and one CPU ended up garbage collecting for four seconds or something.

Grant: Amazing.

Christian: Because clearly, if you don't actually test this type of stuff while you're developing it, it's not just going to scale out.

That's the sort of funny side story where even vertical scaling is actually not that simple.
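
A toy Java sketch of the failure mode described above: every worker thread allocates on the shared heap, so a stop-the-world garbage collection pauses all of the "busy" CPUs at once. This is a contrived illustration, not the actual ArcSight code; the effect is easiest to see with an older stop-the-world collector and GC logging enabled.

```java
import java.util.ArrayList;
import java.util.List;

// Toy illustration: N worker threads keep the CPUs busy while generating
// constant allocation pressure on the shared heap. When a stop-the-world
// collection kicks in, all worker threads stall together.
public class GcStallDemo {
    public static void main(String[] args) throws InterruptedException {
        int workers = Runtime.getRuntime().availableProcessors();
        for (int w = 0; w < workers; w++) {
            Thread t = new Thread(() -> {
                List<byte[]> survivors = new ArrayList<>();
                while (true) {
                    survivors.add(new byte[64 * 1024]);   // constant allocation pressure
                    if (survivors.size() > 1_000) {
                        survivors.clear();                // let a large batch die at once
                    }
                }
            });
            t.setDaemon(true);
            t.start();
        }
        Thread.sleep(60_000);  // watch per-CPU utilization and GC logs while this runs
    }
}
```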

Grant: That's really funny.

Christian: Of course, we ended up making a bunch of fixes and so forth. But that was always a struggle.

We had customers at the time on the government side and military and so forth, and they had all this iron lying around.

They were like, "It's going to run. No problem." It's not that easy.

The story there is that what we were trying to do there was obviously useful, but there was also, on a day-to-day basis, a lot of grinding to make customers successful.

I was the core server guy, and I ended up on the phone trying to remotely debug things so many times.

It ended up taking up a lot of time that I didn't actually spend developing.

Then it was more and more people maintaining this thing, and it ended up, to some degree also taking a real bite out of the innovation budget. We saw all of that.

Obviously we then saw the rise of Salesforce, completely killing Siebel, and I think ServiceNow was already starting to become a thing.

Early on, I had some exposure to Remedy. I knew about that, and I knew how that went down.

This idea of basically classic enterprise software being just completely disrupted by software as a service startups. There was already an emerging pattern there.

Grant: Just to reiterate, just how it was just so difficult to set up these applications like ArcSight, there was just so much overhead.

To your point, every customer was doing this on some slightly different environment.

Christian: Exactly.

Grant: So, your team was basically-- Everyone is trying to repeat the same effort to get this thing to run.

You can't even really scale it vertically and there is all these challenges.

Then you've got the actual physical hardware that someone tripped over a cable and that's why it stopped working.

You have all these different things that can go wrong, so then to your point SaaS evolves and takes away all of this operational overhead.

Don't worry, we'll host it. We'll manage it and we'll scale it and we'll run it. That's the point you're at now, right?

Christian: Exactly right. For me, this was just-- I didn't learn that by reading a book, it just came out of doing what we were doing. It was hard-earned wisdom.

At least in our space, it turned into a bit of an unfair advantage. Knowing about that and then formulating the idea that this had to become SaaS.

That's ultimately the founding idea behind Sumo. Not necessarily to do a completely different product, because we thought that basic product orientation was very useful.

Maybe broaden the aperture, or what have you. Then it turned out to not initially be security. We went after application monitoring first.

But we can talk about that more later. The idea was that this type of product had to exist.

There was nothing wrong with that, we just felt that we could do it better.

Very incremental, Sumo didn't start from some business plan competition or three people sitting in a dark room trying desperately to think up a startup idea.

To us, it was just totally obvious that this was clearly going to be the next generation of this type of product.

Moving it into a service, and then from our perspective of course leveraging AWS, because we had no idea about running a data center.

I haven't seen one from the inside in a very long time.

We just used that substrate in order to build what we needed to build and automate it all, and take the humans out of the picture in those places where they just shouldn't be.

Have one environment and have full control, not just over the code but also the deployment environment.

As you already said, versus getting stuck in London for three weeks in a bank, basically with no return ticket until you fix the bug.

One of my guys got literally stuck in London at some point for that long.

At some point, I compiled a custom JVM with additional instrumentation for a customer, just to figure out why there was a particular bug.

It had something to do with object allocation and profiler tools were too slow, so I hacked something in and recompiled the JVM.

They ran a dash-Christian version in production. I never thought that I would be able to recompile a JVM in the first place, but banging on it for a night, it actually ended up working.

We got the additional debug information that we needed.

Grant: Interesting.

Christian: Crazy stuff, right? The point is that this stuff just doesn't scale.

Data completely outscales this. Humans just don't scale exponentially. If the problem, however, scales exponentially then you have to find some other way of solving the problem.

And for us that was: turn it into a service, full control over the code and the deployment environment. Refocus the customer back on using the product.

We got there by believing that we, this particular set of people, my co-founder and I, could actually pull it off even though we had no idea about running services or data centers and all that type of stuff, because we didn't have to, considering that AWS took care of the problem. So, that's how it happened.

Grant: The real platform shift there is "AWS is going to manage all these servers. This whole SaaS wave is coming."

Christian: Exactly.

Grant: You're right, the fundamental idea that "We don't have to do the core value proposition differently, we just need to do the ancillary parts of this which are that we're going to take away all the operational burden from you and give you a very familiar set of core product value from this."

Christian: Exactly. Because we felt that you should just use the product and not worry about how the product is implemented.

Grant: That was 10 years ago, so you started that and initially went after the biggest enterprises or focused on developers or small business?

Where did you see your initial customer segment?

Christian: I think if you start a company and it's VC-backed and you've raised a Series A right away-- before we even had a product built-- I think you always end up maybe falling into this trap of thinking that "This is going to be an enterprise play."

That's always a little dangerous, because everybody wants to be an enterprise company because everybody thinks that's where the largest pot of gold is.

To some degree that's also true, but it turned out that our initial customers were much more mid-market.

Lots of companies that were themselves already digital companies, CDNs and that type of stuff. They themselves were probably already services.

That's really how it started, because in the beginning this whole discussion about "The data goes into the where? The cloud?"

We had to basically figure out how to sell that, so our initial customers turned out to be much more companies that were philosophically aligned with how we were thinking about the world, and then it expanded from there.

Grant: OK, interesting. So you went out and built this first product, thought you'd go close every big enterprise, hired the really expensive enterprise salespeople that probably had the Rolodex, and they failed.

Then you realized that your real customers that were willing to adopt 8-10 years ago were more mid-market folks that were starting anew.

Christian: In a nutshell, yes.

Grant: Is that the whole concept? Like, did you actually hire the high-paid enterprise salespeople, and the first round of those didn't really work out?

Christian: Absolutely. Yeah

Grant: That's as much a timing thing as it probably is a real product thing.

Christian: Ideally somebody should write a book where it says "Don't do that." But I think everybody just has to experience this themselves.

Are you familiar with PTC? Do you know what that is?

Grant: Yeah, of course. Parametric Technology Corporation.

Christian: Exactly. That was our initial crowd.

Grant: Funny, your first salespeople were from there?

Christian: Yes. That was very interesting. The East Coast culture versus us being much more of a West Coast culture. The experience was very interesting.

I like these guys, but they had a very different attitude to things. So we had some really interesting discussions.

I got to know a lot of them on their home turf as well, sales guys and SEs and so forth. There's still a bunch that are here, but the first wave was definitely not easy.

Because it was just not at all clear how to sell this and how to deal with these obvious objections around where the data lives and all that stuff.

We certainly went through a bunch of iterations on that.

Grant: Which approach did you end up taking? The approach of, "OK. Let's collect only the metadata and process that, or try to reduce the amount of sensitive data that you were using?"

Or did you take the approach of, "Let's go after every major certification and get the pen test through fed ramp?" Which angle did you go, or some combination of both?

Christian: The latter. This type of product, you can't get away with just doing the metadata. You really have to have it all.

Grant: Got it.

Christian: That was really the only way to do it. We started with SOC2 and we ended up building around that.

If you haven't done anything like that before, even something like SOC2 is actually pretty hard because there are certain things that you need to do.

Then you also have to get into the groove of how you work with the auditors and how to have those conversations and all of that.

Then we got that certification, HIPAA followed, and then we ended up doing PCI. I think we're at the current PCI DSS 3.2, basically service provider level one certified.

It's just like a bank. We are listed as FedRAMP ready at this point, that is public information. The full FedRAMP thing is something we're doing.

These certifications get harder and harder. I mean, even PCI is already pretty tight. By the time you get to FedRAMP, it's a whole different game.

Grant: Just this summer was when you hit the FedRAMP side. This is something that you've been building up over the course of nine years, right?

Christian: Absolutely.

Grant: It's almost like an arms race.

Enterprises want more and more certifications and you have to keep escalating and escalating the level of data security that you have and that you can demonstrate.

Christian: Absolutely, but at the same time I think it's par for the course.

I think philosophically, you got to have a way to convince yourself that you are not just leaking your customers' data left and right, just from a moral perspective, in my mind.

So how do you do that? We're not doing formal verification. As we know, hardly anybody does that these days, apparently. It's basically a very hard problem.

So the next best thing is "Hey, we have smart engineers who have security background and they're going to design a secure system."

Sure, you should try. But I mean, I'm not going to buy it just because you tell me that, because we all know how hard it is, etc.

A perfectly secure system is just a really hard problem.

So you try your best and you're open about what it is that you're doing; you think through the way encryption is happening, where you need to encrypt, how you do key management, etc.

You try to put that into the best architecture that you can come up with. We did that early on. Obviously it's been evolving since.

Our first cut already had very strict tenant separation and different keys for every tenant and so forth. That put us basically on a good talk track. Then we got it certified.
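
A minimal sketch of strict tenant separation with a different key per tenant, using the standard Java crypto APIs. The scheme shown here (in-memory AES keys, GCM, IV prepended to the ciphertext) is an assumption for illustration, not Sumo Logic's actual implementation; in practice the per-tenant keys would themselves be wrapped by a master key in a KMS or HSM.

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.security.SecureRandom;
import java.util.HashMap;
import java.util.Map;

// Illustrative only: each tenant gets its own data key, so data encrypted for
// one tenant is cryptographically useless for another.
class TenantEncryption {
    private final Map<String, SecretKey> tenantKeys = new HashMap<>();
    private final SecureRandom random = new SecureRandom();

    SecretKey keyFor(String tenantId) throws Exception {
        SecretKey key = tenantKeys.get(tenantId);
        if (key == null) {
            KeyGenerator gen = KeyGenerator.getInstance("AES");
            gen.init(256);
            key = gen.generateKey();
            tenantKeys.put(tenantId, key);
        }
        return key;
    }

    byte[] encrypt(String tenantId, byte[] plaintext) throws Exception {
        byte[] iv = new byte[12];
        random.nextBytes(iv);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, keyFor(tenantId), new GCMParameterSpec(128, iv));
        byte[] ciphertext = cipher.doFinal(plaintext);
        // Prepend the IV so each encrypted record is self-contained.
        byte[] out = new byte[iv.length + ciphertext.length];
        System.arraycopy(iv, 0, out, 0, iv.length);
        System.arraycopy(ciphertext, 0, out, iv.length, ciphertext.length);
        return out;
    }
}
```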

Again, you can be very cynical about this whole certification thing, because in the end it also depends a little bit, on some level, on how you present it. There's no perfect way for an auditor to really test that--

That every single word that you say is definitely implemented that way, without bugs. So there's a little bit of softness in that.

But still, by the time you're done with an audit, these guys are no joke. Even for SOC2, you are starting to update your processes.

Maybe you have a couple of fixes initiated, to how you manage data, encryption, etc.

You keep doing that and it's expensive, both in terms of actual money, but also in terms of time spent and maybe opportunity cost, if you want to call it that, because there's some real engineering that's going on.

I actually don't think it's opportunity cost. But you could argue that you could build a feature instead.

In the end, I think, the certifications and that whole process help you to have maybe the best confidence that you could possibly have that your system is as tight as you can possibly make it, to the best of your abilities.

I think that's the bar that you need to set for yourself, apart from the fact that if we lose somebody's data, the company is going to be in a really bad place.

Also just for me personally, I don't want to be that guy that loses your data. That's just not OK. So that's why we go through all of these things.

Grant: It's interesting, because it's similar to just even testing your code. You're like "Why would I need to test it? I wrote it and it works. I couldn't have screwed it up."

Then you realize that there was a bug and that bug is the thing that led to it.

Doing these security audits and all of the different angles and threat modeling and vectors gives you a context, an ability to find different angles and areas that you didn't really think through where there might be a bug.

To add security in depth, and in order to prevent as many of these issues as possible. Obviously it's hard to have perfect security, that's fairly impossible.

But you can get pretty close by really being focused.

Christian: It's not just, essentially, a business imperative because you have to be able to handle the objections.

It's a moral imperative, on some level, if you want. At least it was for us. It's been a huge investment, but it is pretty differentiating for us today.

It's part of our value prop that we actually have all of these checks and balances in place.

In our world, among our competitive set, we are far ahead of anybody else, if I might say so.

Grant: If we dive into any of those security features specifically, you mentioned key management, did you end up doing enterprise key management like EKM?

Where the customer controls the key and then you take it and cache it for ten minutes? Is that a thing you do, or not?

Christian: When we started that was not possible, so we ended up with something-- Again, this is 10 years back.

Amazon and AWS has services for everything, probably including podcast recording at this point. Just have two Alexas talk to each other.

They didn't really have HSMs, these hardware security modules, available at the time when we started. We have a slightly different scheme there.

The integration with the HSMs is something that we are going to very likely have to do in a fairly short order.

Grant: Still, it doesn't cover everything. You still have a key, you're still processing unencrypted data, so bugs can still happen.

Christian: It's a trust thing. What happened was that we were able to fairly intelligently walk customers through what we're doing when they came over, and then we had the certifications to basically lend it additional oomph.

Early on already we had guys come in-- When you're a startup and you're funded by, in our case, a number of name-brand VCs.

All kinds of interesting people will show up and try to take a look at your company, including delegations from European telcos.

Of course, then you're going to have the meeting and you have a bunch of-- In walked these 8 Swiss guys, very stern with their blue suits.

Then we're sitting there with anime t-shirts and long hair. They're just looking at us through their Porsche design eyeglasses, and they're like "What is this?"

Of course the first thing is, because they are a big service provider, Telco, what have you.

Of course they try to completely nail you on this data management thing, and then we talk them through it and I could see one guy basically just go and poke the other guy with the elbow.

They look at each other and they whisper, "These guys are doing more than we do." Some really interesting observations along those lines.

A lot of people actually have this question, but they do a lot less, actually.

Grant: It's a really interesting piece, it's just so critical.

Do you feel like in the last few years the expectations for customers around your data security and data privacy, even with GDPR stuff, do you feel like that's just continued to escalate?

Christian: Yeah, and for good reason.

Grant: You're even often listed on your customers' GDPR sub-processor list.

I'm guessing that the data that you're ingesting from some of your biggest customers ends up having consumer data in it as well, right?

Christian: Inevitably. We can't categorically exclude it, so there's a chance.

Sometimes it's explicit because they want the data to be there, and then sometimes it just could happen because of collection and sifting through the data before it goes into the next system.

Frankly, whether that's on-prem or in AWS or SaaS, the classification of all this stuff is very hard.

Grant: I'm guessing your customers ask a lot of questions around that, and that was a really hot topic.

Now there's CCPA and other pieces too, is this a thing that you continuously feel pressure from your customers around having answers to?

Christian: Yeah, and rightfully so. There's got to be some way of giving consumers some peace of mind.

That not every single gesture they have and do every day and every single thought that they think on a daily basis is getting exploited for commercial reasons.

Data leaks and this whole election thing, and the Cambridge Analytica thing and the Facebook thing, and all of these things.

Whether these are the right things to bring these topics to the forefront of the discussion or not, we can argue, but the fact is that there's some pretty systematic surveillance that is basically the price you pay for interacting with things on the internet.

There's got to be some amount of governance for this, to basically align what companies do morally.

Also, I think, with where people sit from a values perspective, depending on which society they're coming from. That's getting pretty abstract, but--

Grant: No, I love it. Because I've been thinking about this more and realized that one of the things that's happened with the zeitgeist changing about how people think about data and their data, I think, is that companies have started to realize that every piece of data that they process they don't necessarily own.

Just because I as a consumer give you data about me and you can process it and use it for something that you're going to do for me, that doesn't mean you own that data.

Instead, you're really like a custodian or a steward of that data.

It changes the relationship that the primary receiving company has with that data, where now they think about "OK. Who should I be sharing this with -- with or without your permission?"

There's all these down the line implications that I think are really important and really interesting. We're just starting to see the effects of it.

Christian: Yes, and then on the flip side is the way that regulators might think about this, the right to be forgotten and data deletion, that kind of stuff.

Once you flip that back into the technical realm, it can quickly become an intractable problem.

I think that's where on the flip side then feedback from technical folks has to also be heard, where they say "Being able to delete individual bytes basically out of a large database or aggregations of data is fundamentally, technically, probably almost impossible without having blast radius, without deleting other things along with it, etc."

I think it's more of a moral code than anything else, and then obviously there has to be some technical implementation behind it and there have to be checks and balances in place. What I hope is that regulators are not going to defy the basic principles of computer science and end up asking us for stuff that we basically almost cannot do.

That becomes an exponential cost, the distraction. So I think there's a middle ground for that, but fundamentally I do think this discussion has to happen.

Grant: Because you mentioned append-only earlier in reference to ArcSight, is Sumo also append-only oriented?

Christian: Absolutely. Yes, when you create data we will-- Obviously it's record-based, but the records all come in as part of large blocks.

We don't have record-level traceability in our back end, and it's not like every log message becomes a row.

Every log message is part of a much larger batch of other messages, so the batches are traceable and that's where it starts.

Grant: Oftentimes for security, you need that log of history and you need immutability of this, right?

Christian: Exactly. I was going to go exactly to that one next. To some degree, the fact is that the way we store the data is in blocks, as history, in an immutable fashion.

It's very much a feature of the system that some regulations actually require.
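Roughly what that batching could look like, as a minimal Python sketch of the general idea only -- the names LogBatch and AppendOnlyStore are hypothetical illustrations, not Sumo Logic's actual implementation. Messages are grouped into batches, each batch gets a content-derived ID, and the store is only ever appended to.

import hashlib
import time
from dataclasses import dataclass
from typing import List

@dataclass(frozen=True)
class LogBatch:
    # An immutable, traceable batch of raw log messages. The batch,
    # not the individual message, is the unit that gets an ID.
    batch_id: str
    created_at: float
    messages: tuple

class AppendOnlyStore:
    def __init__(self):
        self._batches: List[LogBatch] = []  # only ever appended to

    def append(self, messages: List[str]) -> LogBatch:
        # Derive the batch ID from its contents, so any later change
        # to the stored bytes would show up as an ID mismatch.
        digest = hashlib.sha256("\n".join(messages).encode()).hexdigest()
        batch = LogBatch(digest, time.time(), tuple(messages))
        self._batches.append(batch)
        return batch

    def batches_in_range(self, start: float, end: float) -> List[LogBatch]:
        return [b for b in self._batches if start <= b.created_at <= end]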

Grant: Right, exactly. Yeah.

Christian: Then you run into this interesting competition between regulations, where one regulation says, "I should be able to delete that record," but it's a direct violation of the other regulation that says "You absolutely cannot delete records."

Grant: Right. It's like, "Know your customer" versus GDPR.

Like, "I did all these financial transactions on your system. But now go delete all my information."

Like, "What? That doesn't seem like a thing that should be able to happen." Is that what you're talking about?

Christian: That's exactly what we're talking about.

Grant: Do you know, is there any precedent in that in terms of how that should be handled or what the legislation is?

Because this is the interesting part to your point, the legislators don't really understand the technology or the implications or necessarily even what else has been regulated.

So, is there any precedent around how that's supposed to be handled today?

Christian: I'm not sure there's a national law or a global precedent.

Our stance on that is that we will mask data on demand if this GDPR thing comes up, but for customers that are, let's say, governed by PCI as well, which is a lot of them, we're not going to delete it. We'll mask it.

Grant: You're basically saying you can go in there and do some type of encryption, or what are you doing to mask? What does that mean, redaction?

Christian: We basically redact it on the read path, if that makes sense. So that for all intents and purposes, people can't see it.

Grant: Interesting.

Christian: But the encrypted bytes are still on disk somewhere.

Unless the customer allows us to basically delete entire time ranges -- then we blow those away, and along with the three messages that you want to delete you're going to delete lots of other messages as well.

That we can do, and we do that also.
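Continuing the sketch above, one hedged way to picture read-path masking plus time-range deletion -- again illustrative assumptions only, including the "[REDACTED]" placeholder and the store interface: queries never return masked values, while hard deletion only happens at time-range granularity.

import re
from typing import Iterable, List, Tuple

class MaskingReader:
    # Redaction happens only on the read path: the stored bytes are
    # untouched, queries just never return the masked values.
    def __init__(self, store):
        self.store = store  # e.g. the AppendOnlyStore sketched earlier
        self._masks: List[re.Pattern] = []
        self._deleted: List[Tuple[float, float]] = []

    def add_mask(self, pattern: str) -> None:
        self._masks.append(re.compile(pattern))

    def delete_time_range(self, start: float, end: float) -> None:
        # Coarse-grained deletion: every message in the range goes,
        # not just the handful that triggered the request.
        self._deleted.append((start, end))

    def read(self, start: float, end: float) -> Iterable[str]:
        for batch in self.store.batches_in_range(start, end):
            if any(s <= batch.created_at <= e for s, e in self._deleted):
                continue
            for msg in batch.messages:
                for pat in self._masks:
                    msg = pat.sub("[REDACTED]", msg)
                yield msg

For example, reader.add_mask(r"\d{3}-\d{2}-\d{4}") would hide SSN-shaped strings at query time while leaving the underlying blocks intact.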

Grant: Do you have some kind of Merkle tree or something in the background that's taking a hash of the data every few minutes in order to prove immutability?

Christian: No, actually. We are not at the level where we need to prove immutability. But if you had to, yes.

You would do these hash chains, or what have you. Blockchains, so maybe we should have become a blockchain startup. I don't know.
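For completeness, a small sketch of the hash-chain idea mentioned here -- just an illustration of the technique, not something the product is described as doing: each block's digest commits to the previous digest, so tampering anywhere invalidates everything after it.

import hashlib
from typing import List

def chain_digest(prev_digest: str, payload: bytes) -> str:
    # Each entry commits to the previous digest as well as its own payload.
    return hashlib.sha256(prev_digest.encode() + payload).hexdigest()

def build_chain(blocks: List[bytes]) -> List[str]:
    digests, prev = [], ""
    for block in blocks:
        prev = chain_digest(prev, block)
        digests.append(prev)
    return digests

def verify_chain(blocks: List[bytes], digests: List[str]) -> bool:
    # Modifying, inserting, or removing an earlier block changes every
    # digest after it, which is what makes tampering detectable.
    return build_chain(blocks) == digests

blocks = [b"batch-1 logs", b"batch-2 logs"]
digests = build_chain(blocks)
assert verify_chain(blocks, digests)
assert not verify_chain([b"batch-1 logs", b"tampered"], digests)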

Grant: No, it doesn't necessarily have to be-- We don't need to publish it to the internet, you use it yourself.

It's still a primitive. I don't want to be accused of trying to say you should blockchain this.

Christian: No, I was just being a bit snarky.

Grant: I know.

Christian: But it's really about the granularity of addressability of data.

So, because of the way that the system is constructed, individual records can be masked but not deleted on their own.

Instead, we can delete blocks of data. In many cases, that's all that's needed.

It's just that you don't get just the three messages, you might blow away a bunch of other stuff as well, which in many cases is an easy tradeoff.

It really depends on the customer.

Grant: Very cool. It's super interesting, so it's a problem that I've thought about.

It really exists no matter where the data lives, even if you deliver your software to the customer.

For them to run it privately, they still need to figure out a way to handle these requests themselves.

Christian: Exactly, right? I think what we have here is really trying to do the best we possibly can with the right mindset, and I think it goes much further than what a lot of companies would be able to do themselves if they were trying to do their own backup management and what have you.

By the time the tapes are in the Iron Mountain truck somebody comes in and says, "You need to delete something."

You cannot tell me that you will go and bring back all the tapes.

Grant: That's so true.

Christian: Delete individual records from those? They're going to pretty much, if at all, burn the tapes. Talk about that for blast radius.

So anyways, there are some really interesting implementation issues. Generally we try to face them head-on and just be clear: "Here's what we can do and here's the tradeoffs."

Then we engage with customers and we have good relationships with those customers.

We have a pretty good team that is both technical as well as knows how to deal with these types of emergencies.

Very often, the customer is also fairly stressed as you can imagine.

I think we've developed some good procedures on how to deal with that, and with our customers we're just open about it.

Grant: The other high-level point is that platform shifts like the cloud and infrastructure as a service create opportunity.

These changes, if you and your business can adapt to them and you can create solutions, give you a new way to bring additional value to your customers.

That's how I always perceive these things.

Christian: I feel it is fundamentally--

What we can add to their operational maturity -- and today, if they also use us for security analytics, to their security posture -- far outweighs in my mind what they can possibly do on their own or even with legacy products.

It becomes a tradeoff, and the customers from day one that came over to Sumo and are using our product, still today they understand that tradeoff. It's just not absolute.

Grant: Nothing is really black and white, everything is shades of gray, and that's I think the important lesson that we always take to things.

Christian: Coming from a technical background, of course, we're almost pre-programmed to think that it's either true or false.

If anything, even more so than the architecture or what have you, the one thing that I've learned is that the world is very gray.

Grant: Exactly. Similarly, less about regulation -- maybe we'll go there -- but you have some interesting perspectives on the social implications of AI.

Is that something you want to talk about?

Christian: I did a couple of talks on the topic. It goes a little bit along with what you called zeitgeist earlier, over the last couple of years.

Obviously I've been doing data now for a very long time. My career basically is about building systems that--

It sounds almost trivial, because I'm just defining computer science, but it's basically data management.

Data -- getting data into one place and then making analytics possible and driving decisions based off data -- is something that I've always done.

That's very much at the core of my career. This is something that is becoming-- It's become a topic for business.

Digital transformation, everything becomes ultimately measurable if you think about how modern business works.

It's very much based on the IT systems that process the data for the business.

I think that's all well and fair. Along the way we've also seen the emergence of machine learning, and then people very quickly jump to AI.

It's obviously been a topic before, and large expectations ended up not playing out in the 80s and all of this type of stuff.

These things are also cyclical, but I think we still are in a new hype cycle around generally the expectation of what smart algorithms can do with data in order to make better decisions.

I think there's nothing wrong with that, as long as you are--

In many ways that's what we are selling at Sumo as well, it's just fundamental functionality that you need to have in order to run your business better to have better analytics, better decision support.

Whether it's about the functioning of your IT systems or your security or the business itself, that's driven by those systems.

But at the same time, with everything that's being hyped, people start developing blind trust in something.

It's clear that the advances in ML in the last, by now, almost 10 years -- deep learning and so forth -- are amazing.

If you can compare before and after when it comes to image recognition, it's just completely-- It's a total step function.

But I think what it's also been leading to is this idea that machines can somehow, or algorithms can somehow make better decisions than humans.

I think in the broadest sense, that's not actually true. I don't actually think that as much as I am a fan of making data driven decisions, I don't think data alone makes decisions.

I think it's still there as support for humans to add their own special sauce. Whether you have bias or not -- just admit it.

You will have bias, but your intuition still matters when it comes to decisions that you make about how to conduct yourself, how to conduct your business and so forth.

Something that was interesting to me, in light of all of this hype, is what happens when this blind faith in data leads to people making decisions about other people.

Then that's a slightly different sphere, where you have all kinds of potentially unintended consequences and side effects on the level of society.

So something that opened my eyes to that quite a bit is this book called Weapons of Math Destruction. Nice pun in there by Cathy O'Neil.

I think that was probably the first thing that came out that was basically trying to take a look at these social implications of blind faith in algorithmic decision making.

Here we're not talking about trying to show somebody better ads. We're talking about trying to figure out whether people will commit crimes again and whether or not to let them out of prison, or whether they are good teachers, or whether they are worthy of social services and keeping their children, and so forth.

There's a couple of books which came out in the last couple of years looking at the societal effects of this basically blind faith in algorithms making better decisions than humans.

I try to wrap my head around that a little bit, but I came to the conclusion that data fundamentally is human.

It's generated by humans and it's ultimately consumed and analyzed by humans, and machines can help but machines alone, I don't think, can be trusted in all cases to make the right decisions.

Machines making decisions about other machines is one thing, but machines making decisions about humans I think there's a lot of potential issues there.

Grant: Interesting. Categorically you'd be pretty much OK with machines making decisions about machines?

Christian: That was me trying to on the fly abstract things like ad placement and so forth.

Machines making decisions about machines making decisions about people, that could become a fallacy.

Grant: Sure. Because it's like bits, meat, atoms everywhere.

But I was just thinking, as we talk about optimizing the performance of a database, using some type of machine learning to determine that -- it feels like there's not a lot of blast radius for that.

Christian: I totally agree. But I think the reality is that when things happen that have an effect on people -- people making decisions about people, basically people outsourcing decisions about people to algorithms -- the stuff is usually not even that sophisticated.

It's usually a bunch of models, and sometimes it's just a bunch of questionnaires. But the idea is that it creates some score, and if it goes below the threshold you're not a good teacher and all of that.

Or you are not going to get released from prison because the chances are that you are going to commit crimes again.

Then you look at the questionnaire and it asks all these questions about "Which neighborhood do you live in and what's your race?"

The idea then becomes that these instruments are trying-- People are building these instruments, algorithms and so forth because they are wary of this messy part of human decision making, and it potentially being biased and not fair and all of that.

So they say, "If I can abstract that out into an algorithm clearly algorithm doesn't have bias. Algorithms make fair decisions. Data in, data out."

But based on everything that I've seen -- and it's laid out in many different examples in the book that I mentioned, and there's a couple of other books that came out since then--

The reality becomes that the problem that people find themselves in if there's-- Think about credit scoring as an even simpler example.

The idea is that you have an algorithm basically make decisions, because it is supposed to not be biased, because you are wary of humans making biased decisions. But what happens, if you look at the way that the instruments and algorithms are designed, is that it's not actually that easy -- it seems almost impossible -- to design an algorithm or an instrument that doesn't perpetuate the bias of its designers. In the end, what comes out is biased anyway.

But now, how do you appeal that? How do you appeal to a machine? The end result is actually worse than before.

That's the line of thinking that got stuck in my head, and I thought it was actually pretty interesting.

Grant: No, it is. Though I think it actually incriminates something else, and this will be because I'm part-- I'll inform the audience, this is definitely because I'm a fairly gray free markets anarchist.

Generally my thought there is it actually is more incriminating of the centralization of decision making than it is incriminating of the AI.

I think the challenge that we're facing there is that we have a centralization of who's in prison, child services, all these other things -- we're outsourcing that to an intelligence that we call some centralized government.

In my opinion, we should be decentralizing as much of that as we can in a way to remove how those decisions get made.

Using AI to make those decisions in an even more centralized fashion, I would agree is a challenge, but probably from a different perspective than you would arrive there.

Christian: We arrive at the same conclusion on some level, and that's good enough for me. It's very interesting because there's progress to be made.

We just also have to continue to negotiate what the right application is of these types of tools.

We talked about this earlier, the world is really gray. It's actually a perfect lead-in to this discussion. People, however, desire it not to be gray, they desire it to be black and white.

If there are tools available that promise to turn the gray into black and white, I think any which way you get there it's both extremely appealing to people, but also extremely dangerous.

Centralization is one thing, blind faith in algorithmic decision making is another thing where people fall into the trap of thinking that there's some magical device that can turn the messiness of the real world into a simple black and white thing.

But the reality is, if you are in a position where you need to make decisions about other people you just can't outsource that into a black and white one bit output system.

That's what I think. I do think that intuition matters, and I don't think that there's a magic device to remove bias. People are inherently biased.

There's nothing you can do about that. Certainly, algorithms are not necessarily the solution, I think.

Grant: One thing that I tend to believe in is leveraging data to help inform decision making, but maybe keeping a human in the loop -- that's a really interesting perspective.

We should be informed of our biases, we should be informed on the things that we might be taking in and hopefully systems and computers can help us make better decisions.

But ultimately, I do believe that a human in the loop is important.

Christian: There is this thing that people throw at me all the time, which is-- You probably have the visual.

"In God we trust, all others must bring data."It's like basically two of my least favorite statements together. I do actually believe that data is important, but I'm 100% with you.

I think it's still the human that makes the decision, and of course you need to look at data but you also need to understand if I have somebody who tells me that they are making decisions based on data, OK.

That's not a bad starting point. But then I also need to understand, "What's your intuition? What's your experience? What's your context?"

Because that data alone doesn't necessarily help me there.

Grant: Realistically, so many people are so bad at statistics, and you can look for whatever truth in the data that you might want to. It's like, "Great."

Christian: Including myself.

Grant: Of course, we all can. One other thing, just back to Sumo for a bit.

When you think about where Sumo is in terms of emerging technologies and if it's ML or if it's--

I noticed that you have a pretty big call to action around Kubernetes, what are some of these trends that you're seeing that could be platform shifts today?

Is Kubernetes one of those main ones?

Christian: I think Kubernetes is interesting because it promises a simplification and an abstraction over these emerging deployment options.

Because suddenly just being able to deploy in AWS isn't good enough anymore, you need to be multi-cloud or you need to be hybrid, or whatever it is.

Now we're back to, "Your software doesn't just have to run on Windows, it also has to run on Mac."

Or "iOS is not good enough, you also need Android." It seems to me sometimes that people feel that Kubernetes is--

I've certainly gone down that road myself, so I'm not just saying other people, but it occurred to me as well that Kubernetes could be an interesting level of abstraction as a portable platform, as a service.

I think that's why people have picked up on it so much, and probably less so because, to some degree, Kubernetes is coming out of the Google Borg thing.

I don't think Borg was-- The stuff that the Google guys were doing wasn't necessarily designed to be portable across deployment systems and so forth.

They have one environment and it just needs to really scale there. They built a lot, discovered a lot of stuff and built it into that, and Kubernetes is the open-source version of that, of course.

Originally I thought it was a way to more easily build really large systems. Which is what it was originally intended for, certainly -- that would seem obvious to me, knowing where it's coming from.

But it feels like the market is picking it up as a way of maybe not even building the ultimate hyperscale system, but building lots of reasonably scalable systems in a modern paradigm with a microservices orientation and so forth.

So you want scalability, and you use Kubernetes as an abstraction layer, to not necessarily have to commit to any given deployment substrate. Does that make sense?

Grant: Yeah. No, I totally agree.

Christian: Then of course, your company obviously I'm preaching to the choir. Because that's my understanding of what Replicated is really leveraging as well.

Grant: Yeah, exactly. It's enabling Kubernetes-- Our whole thesis is that we do believe Kubernetes provides these patterns and primitives for building reliable software.

We will often talk about how, if you remember using containers or any of the Linux cgroups before, everybody wrote their own way to manage that.

There was Mesos, and Uber had something called Peloton and Facebook had Tupperware, and everybody had these things that were basically all inspired by Borg.

Now we see the community moving to this canonical implementation, and it's providing some of these patterns and primitives of "How do you actually build out something in a reliable, scalable fashion?"

I would say that just like there are primitives for security and encryption -- it would be insane to write your own encryption algorithm today -- we think the same thing is happening for reliability, and there are now these reliability primitives.

It would be insane to write your own orchestration and scheduling tools today.

Caveat, Replicated did write our own orchestration five years ago.

So part of that is from first hand experience, as we all make those mistakes.

We generally believe that there is this interesting platform shift, and if everything moves to Kubernetes, then instead of that old world of racking and stacking servers you can start to deliver--

You can bake a lot of operational knowledge -- everything that all of those old customers of ArcSight had to figure out for themselves -- into a single manifest and deliver the images to a customer, and they can run it in their own Kubernetes cluster.

It starts to really solve some of those operational overhead problems of the old world, and then in this new world of data privacy and security requirements it allows a vendor to solve some of those problems without having to process the data.

Because they can just deliver the app to where the data resides.

But I'm guessing your team is coming at it more from the angle that you're seeing more Kubernetes environments that customers have, and so you're trying to help people take control of what's going on in their own Kubernetes cluster and use that as a source for events and things. Is that right?

Christian: Yeah, of course. Fundamentally, we are a log analytics and monitoring or observability platform, if you want. But that's what we do, basically.

In short we help people run their systems in a reliable fashion. That's one aspect of what we do, and the other aspect is make them run in a secure fashion.

So clearly we have seen Kubernetes become a thing that we now also need to be able to monitor, because people are using it and they need to keep that reliable, and here we are.

We built a bunch of models because Kubernetes introduced a couple of new concepts, and so forth.

So then you want to evolve your product as well to present those mental models that go into the design of Kubernetes, per se.

But also the way that this plays out into your own applications, topology-wise and so forth -- present those models and mental models and actually use them to make it easier for people to manage their reliability.

That's the product aspect as far as Kubernetes is concerned for us. Then of course there are lots of security aspects to that as well, container images and so forth.

The divide between operational stuff and security anyways goes away, and in these modern environments even more so.

Then the other aspect is that we are also starting to use Kubernetes ourselves. Of course, if we had started the company a couple of years ago, it would probably already be Kubernetes-native.

But we started it in 2010 when none of the stuff was available.

Our core architecture is still very much focused on EC2 and lots of automation tools that we wrote ourselves, so we have a version of everything that HashiCorp eventually released. It's just very interesting.

Grant: Funny.

Christian: Which was a blessing back then, because we had the tooling. Now it's more of a curse because it's idiosyncratic.

If we hire new people then they might already know Terraform, but they might not necessarily understand our own thing and then we need to teach them.

So again, whatever you do, here we are. We're still alive and kicking and I think that the decision to write a lot of [inaudible] and automation tooling on top of the AWS API in 2010 is part of what got us here, so it's all good.

The other thing that was noteworthy for me as far as Kubernetes is concerned is that-- This goes back to maybe sentiment that I expressed earlier, which is Kubernetes is something that came out of one of the largest hyperscale companies ever.

Then there are the competing internal systems that you referred to. Facebook, etc.

These guys are at a scale that most of us, including Sumo, are not. We run at a pretty large scale, but nothing compared to Google obviously.

Not a lot of applications actually need to scale to that level, and it's quite interesting that the adoption of Kubernetes is also somewhat removed from this absolutely top-five hyperscale aspect of it.

It seems to scale down pretty well as well. It is quite cool.

Then the other thing in light of all of that I noticed is that actually Kubernetes ended up being a topic with a lot of our enterprise customers, customers that are still running a lot of stuff on prem.

Because Sumo itself is a service, but we get a lot of data from on-prem. Hybrid environments are a reality. In Silicon Valley we have a bunch of startups and all that.

Of course, they don't have on-prem. But enterprise customers and even mid-market customers -- depending on who they are, where they are, and where they are on their own maturity curve.

Of course we get lots of cloud workloads, but a lot of the workloads are on prem, so the enterprise customers really have picked up on Kubernetes big time.

Sometimes I feel like even more so than some of those more modern guys, because with the more modern people -- we see Kubernetes there obviously as well, but we then also see things like serverless and just straight-up AWS ECS, etc.

The part that was interesting is how much enterprise IT has, in my observation, chomped onto Kubernetes.

Because -- I don't know what your take is on that, but it feels like it gives them a little bit of warm and fuzzies that there's still something that requires operational management, versus the utopia where everything is serverless and you just write a bunch of functions for some background system that you can't even see or manage or have any physical proximity to.

It's just magically running your software. So I don't know how you feel about that, but I thought it was quite interesting.

Grant: We see it being adopted, and I definitely think there's some aspect of control, to your point.

There's some control that folks want, but I think there's also just -- back to the shades of gray thing.

I think that's a common theme, it's like they're going to have systems that are in data centers or in multiple different places, and having a platform that they can move to that's going to be portable and is going to be-- It helps them prepare for the future.

Then additionally, if you're trying to build new software -- and we think in the digital transformation world everything and everyone has to become somewhat software-native.

If they're going to do that, you have to build software with a great SDLC and a great environment.

Using the tools that grew up inside of Google, competed with a lot of different technologies, and emerged as the best-in-class solution for this is, I think, the right way to do it.

Because to your point, you could write all of your own versions of these things, but then it's just challenging to onboard new engineers into it, to fix it when it goes down, and it becomes less relevant over time.

We just love the idea that the ecosystem together has really come together to create such a robust solution for this.

Christian: It's pretty crazy if you think about what's possible in open source today. It goes from a couple of tools, then to an operating system, and now to a data center operating system, and it's all in open source.

It's pretty amazing actually, and then on the flip side you have these hyperscale clouds where there's a lot of proprietary stuff going on. The world is pretty interesting right now.

Grant: I think it'll be a battle that continues, around customers wanting open source and companies that want to meet that challenge delivering it.

We've evolved to do more open source, I'm sure you're doing more and more open source.

Sounds like you're working with Falco and some of these other tools, and just the advantages that we can get by working together.

Part of it's just recognizing that the market is growing so much that you don't have to worry that much about boxing out competitors, but rather "If we just all try to build really great systems and we do it somewhat in the open, even if we collaborate with folks that are somewhat competitive, if we grow the market enough everybody wins."

Christian: Absolutely, and this is one of the other things that we ended up, to some degree, lucking into.

I think at Sumo the initial thesis was, "Let's basically take this security management and maybe a little bit of application management based on logs and turn that from our experience into SaaS."

But what has happened in the last 10 years is that it's just becoming about data, big data, not necessarily structured data.

It goes very far in some of our customers' use cases, away from using these tools to keep the systems reliable and secure. It goes into understanding the business itself.

Using these flexible, agile analytics tools that we have in a system like Sumo -- so now we are out there in a market that, depending on who you ask, is anywhere between $40 and $70 billion a year. That's absurdly large.

To your point about a rising tide lifting all boats -- in 2009-2010 I could not have even predicted that.

But everything that has to do with making data available to people to make decisions, hopefully they are not taking the data blindly and all of that, as we just said earlier.

It's becoming such an incredible market that I think there is a lot of different companies that can create both good outcomes for customers as well as for themselves and their employees.

Grant: Does that play into the topic you guys referred to as "Continuous intelligence?"

Christian: Yeah, absolutely. That's absolutely the highest level aggregation of what we think is going on, really, in the new business world.

Where it's basically this idea that in order to stay competitive you just have to be able to continuously have data available to make decisions, because it's really a rat race and if the other guy can beat you to that then they're going to get all the money and you will get none.

That's maybe oversimplifying it a little bit, but I think what the continuous intelligence thing has to do with is agile analytics -- no worshipping at the altar of the data warehouse to get a column added half a year later and all of that.

Have all kinds of data -- structured, semi-structured, and what have you.

Unstructured -- in one system, and basically allow access to that in a democratized fashion across the entire organization.

Of course, our strong point is around keeping systems reliable and secure.

But that's a huge thing, and these teams -- developers and DevOps and ops, and then security, obviously.

In whatever form they are sliced up, or whatever corporation or organization you're in, these things keep growing, because IT drives more and more of the business. So once it's all IT driving the business, once it's all computers driving the business, everything happens at light speed. The people who can actually make decisions at light speed either automate it -- and then we go back to this classic optimization problem.

What to show on the page in order to get the click, or just a little bit more abstracted strategic decisions or tactical strategic decisions on a weekly basis.

"What should we focus on? What should we feature on our e-commerce site?"Etc.

I think that's where the real competition is, and with digital transformation -- the idea there being that everything and every product more and more turns into a digital product.

For us, it's happy news. You've been using computers all your life, and I have on some level, so this is all like "Come on. What's new here?"

But the reality is that barely half of that market is even tapped into, so that's where a lot of the opportunity comes from. So our term for that is "Continuous intelligence."

We see ourselves as a provider of continuous intelligence for our customers -- for reliability, for security, and more and more so for business insight.

Grant: Interesting. If we think about the arc of Sumo Logic, maybe it started off more reliability and looking at logs and looking at collecting all these pieces, then it continued to move into security.

Then now you're saying, "It's not just those two areas, but let's help the business make better decisions about what direction to go and what things to do.

Let's expose this into more organizational functions."

Christian: Exactly. Because if you think it through, once I have the logs -- and maybe at times there's additional metrics or what have you, but the logs for sure.

We just really know how to do that very well. Once you have the logs, you have a lot of diagnostic information on the systems, but these days you pretty much have all the transactions in there as well.

Basically, that tells you what the business is doing and then you can use that data to do better.

That's why I continue to be-- I'm actually even more excited about the possibilities of what we can achieve as a company, because I didn't--

I always said it -- 10 years ago I got laughed out of a VC meeting at some point for saying, "Ultimately what we're building here is a new business intelligence."

But they were like, "No. Come on, go away." But I think it actually is playing out that way.

Grant: It makes sense. Because ultimately event data, one of the challenges with analytics is that you end up sampling and doing other things and it's not always that accurate.

If you actually are ingesting like all the actual events for security and for compliance, then the data you have is probably the most pure.

Then being able to apply some additional insight on top of that gives you the most accurate form of the data.

Christian: Exactly, and in a system like ours you can do that by just walking up to it and writing a query.

In the best-case scenario -- and that actually happens a lot -- you don't need to do any additional prep.

Now, if you want to run that query every 15 minutes every day, then it might make sense to maybe do some additional prep step there.

But the initial answer you can get almost immediately is extremely powerful, and we use that ourselves all day and try to understand how people use our product.

Lots of stuff there that helps us optimize the product, the way that we provision underneath and all of that, which helps us save costs and that's back to being competitive.
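As a rough illustration of the "just walk up and write a query" point -- this is not Sumo's query language, just a hypothetical log format with a status= field -- an ad-hoc aggregation can be run over raw, unprepared log lines by extracting the field at query time:

import re
from collections import Counter
from typing import Iterable

def adhoc_error_counts(raw_lines: Iterable[str]) -> Counter:
    # Extract a field at query time instead of maintaining a schema up front.
    status = re.compile(r"status=(\d{3})")
    counts = Counter()
    for line in raw_lines:
        m = status.search(line)
        if m and m.group(1).startswith("5"):
            counts[m.group(1)] += 1
    return counts

sample = [
    "GET /checkout status=500 latency=120ms",
    "GET /home status=200 latency=8ms",
    "POST /pay status=503 latency=950ms",
]
print(adhoc_error_counts(sample))  # Counter({'500': 1, '503': 1})

If the same question needs answering every 15 minutes, that extraction step is the kind of additional prep alluded to above.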

Grant: Is that the direction where you think the future of Sumo is heading over the next few years?

Christian: Customers are literally using us for that already. From a startup perspective, and I still consider us somewhat as a startup, ten years in is a long time.

But we are still-- We're not IBM, who can pretty much just spin up entire new departments to go after something.

We're still in a situation where, frankly -- and that applies to every company, even IBM on some level --

you still have to be very deliberate about where you put the limited resources that you have, so we're very careful in terms of layering on these additional concerns.

But customers are bringing us that, and I think we're starting to look at that from the messaging and positioning perspective, and then ultimately from the product and feature perspective as well.

It's interesting because I think the market for just keeping systems reliable and secure is so large that I think we can have a very successful company for many decades, even if we never actually do anything else but that.

But the reality is, just like when we started and we basically used logs only for operational analytics, customers immediately started using it for security analytics as well.

So almost "In a good way" practice kicking and screaming back into the security realm, and we were just trying to somehow keep a little bit of focus.

Then I'm obviously very happy that we have all of these now adjacent spaces.

But we had to be very careful about when we could invest, and how much, in these various functionalities.

The same is happening now with more business-focused use cases. I'll say that is a good problem to have.

Grant: No, that's great. You think about the company's arc, there's Act 1, Act 2 and Act 3.

You've got to keep reinventing yourself to continue to adjust to where the market is and to where you want to take it.

Christian: Exactly.

Grant: Christian, thank you so much, this was a lot of fun. I really appreciate all your time.

Christian: Likewise. It's good talking to you, too.