In episode two of O11ycast, Rachel and Charity are joined by Christina Noren, Chief Product Officer for CloudBees, to look at the ways both the development life cycle and the role of the developer have evolved.
About the Guest
Christina Noren is Chief Product Officer at CloudBees, the Enterprise Jenkins Company. Noren previously owned pricing and product at Interana, served as VP Product at Splunk, and cofounded Aura Network, Inc.
Charity Majors: Have you watched software engineering teams up close and personal as they went on call, took responsibility for their work, and learned to do operations? What happened? What advice would you give them?
Christina Noren: I wish, honestly, I had more experience of that than I did. It's still a struggle.
I started 25 years ago at a company that was doing audio workstation software and hardware development. The distance between writing the software and seeing it in users' hands was immense: months and months and months.
When I was at Microsoft in the late '90s, I was put temporarily in charge of the release engineering process for MSN online properties. We would have outages because developers who were used to delivering release software would go on vacation the days stuff released. And I got to carry the pager.
Charity: Oh boy.
Christina: I think it's definitely getting better. The youngest developers that I work with are the ones who just expect to take responsibility. We just acquired a company, Codeship, with a really great team. They're definitely born-in-the-cloud kind of folks and they're probably the closest I've seen in that mentality.
Charity: It feels like nowadays, like you say, the younger generation, that makes me feel old, whatever, they expect it. There's no sense that there's supposed to be a divide. Obviously I built this. Obviously I run this.
Christina: Exactly. So I think it is changing. But it's still tiny bits and you still see DevOps layered on top of existing mentalities in larger organizations.
Charity: I feel like the DevOps revolution, I know I've said this a million times, but the first stage was, "Ops people, you must learn to write software." We all internalized that and it took five, six, seven years for it to become just standard and the default.
I feel like we're about two years into the flip side of that, where it's like, "Okay, software engineers, your turn. Build operable services." And you can't do that without being exposed to it, because that feedback loop has to be tight.
Christina: I think what happened first, with agile, was that
software developers started to care about how users ended up using the software and whether users were actually getting the value. And that was a revolution.
I think that is hand-in-hand with caring about it actually working and having quality. So I think there's two different continuums of that left/right movement that's happening.
Charity: We can all care abstractly about what we build being good. But
there's something very visceral about it telling you at 3:00 a.m. that it's not good.
Rachel Chalmers: So, now would be a great time to introduce yourself.
Christina: I'm Christina Noren. I just recently joined CloudBees as Chief Product Officer. CloudBees has mostly been known as the Enterprise Jenkins Company. We really deliver an end-to-end system for software delivery lifecycles, in modern and automated ways. We live and breathe what's happening.
I've been in software, as I said, about 25 years. I have gone through pretty much every form, factor, and generation of software in that time. I spent seven years running product from stealth mode, 12 people in a room, through IPO at Splunk. I ran product for multiple other log search monitoring, log management companies.
Charity: So you're a newbie, is what you're saying.
Christina: I don't know what those are, but yeah. It's funny seeing these worlds come together, that were isolated.
Charity: I love that you have experience personally being on-call, too.
Christina: Yeah, it wasn't a long time but I carried a Motorola pager, okay?
Charity: Oh boy. Two-way?
Christina: Yeah. Exactly.
Charity: No shit.
Christina: And I slept with it.
Charity: Those are great alarms, they would vibrate your entire pillow.
Rachel: Congratulations to CloudBees on getting you. Charity and I were so excited when we saw the press release. What attracted you to them?
Christina: Well, it's many, many factors. It's funny because, coming more from the monitoring-ops side, I've honestly been a little bit afraid of going towards companies that really focused on the developer side. Like, I think I know what I don't know about the development side.
There was a little bit of fear, and an attraction. In the last two years, at Interana, I saw so much of what I did impacted by the adoption of CI/CD and of DevOps automation, including through Jenkins, and I found how much my fate as a head of product was tied to that.
I think the CloudBees folks helped me link those things together and also realize that I was deeper in the development side than I thought I was by osmosis. So it became very interesting.
And then I think, just from a business standpoint, effectively what we're building is an ERP for software development. I got technical early on in my career because I had to build, effectively, my own ERP systems out of FileMaker Pro and 4GL tools and then implement early MRP systems.
But on the business side there weren't integrated systems until much later. Now I think one of the things that is a byproduct of the automation we're seeing is
you can actually manage the entire software development lifecycle the way you would manage your business. And that's coming just as every industry, every company, is becoming a software business.
So it becomes mission critical to the C-suite to see the progress of innovation through the pipeline. And it's all being triggered by the developers now, so it seems like there's a system to be built here that's super important and it's more than just CI/CD. It's a framework, it's something much bigger than that. And I like complex systems problems.
Charity: Yeah, it seems very clear to me that the center of gravity is shifting to software engineers. They're sitting here surrounded by APIs and SDKs and internal and external services, and they're just trying to craft something that works.
They don't really know how to do that. They were taught how to do algorithms and data structures, not how to build systems that are resilient.
Ops people, like me, we're never going away. But increasingly we live on the other side of an API.
And if you're lucky, maybe you know us or we work with you and we can help you run your own services.
Christina: Yeah, but the thing I'm trying to get at, in terms of the attraction, and it's a story we're only just beginning to tell, is that developers are still not going to care about giving business management visibility into what's happening.
They care about measurement on the ground. So there is this higher level of abstraction. The analogy I use is, back in the day when I was running sales ops, I couldn't get a salesperson to update their sales forecast to save their life. It wasn't of interest to them.
With tools like Salesforce.com, there's enough value to the salesperson, who is the "developer" in the sales process. And then the higher-level management, finance, sales management, and the rest of the business get a sales forecast as a byproduct of all the work that's done.
But it's other layers of abstraction that give that value and I don't think we've gone up to that higher level, right? And so I think getting that bird's eye view... If I look at my role at CloudBees, I've got a distributed team of triple-digit people in engineering and product roles in 14 countries on as many projects.
How do I get a bird's eye view of velocity, of capacity, of impact of releases? And of where things are slowing down? How do I get that view? I think the individual development managers have that, but I think we're working towards a world where I can have that and that does matter.
Rachel: That complexity is mirrored on the systems side as well. As you get these very large-scale organizations generating very large amounts of software, you get distributed systems which are rapidly scaling beyond any one person's ability to understand or diagnose. So we need software systems that can deal with these layered, abstracted--
Charity: It's almost like we need some kind of observability for understanding how they... I don't know. Something, something there.
Christina: I think there's many dimensions to it, and I think the monitoring of what's happening post-release is a big piece of it, that bird's-eye view of that software factory process is a big piece of it.
Rachel: So how can CD, and trunk-based development and these new emerging techniques, how can they actually improve the software and the developer experience, the developers' lives?
Christina: I think it's just eliminating waste and seeing immediate impact. If we can really get to a point that, even for classic self-managed, on-prem enterprise software, you can get all the way through to a deployment to end-users having impact and you can immediately roll back releases and there's really just a very small amount of delay between a developer doing something and seeing that impact, I think you change everything.
You just change the entire mentality of what everybody's workday is about. I think it's still a holy grail, and still a ways away for most software except the absolute bleeding edge. But I think it's very possible, by treating things like post-release monitoring as part of this automated pipeline, as the feedback into the system that makes it safe to do this.
Christina: A lot of this is about focus, I think. Because doing the work isn't hard; it's doing the right work that's hard. That was the agile insight, like you said: do something fast, get it out there, see how it does, and then iterate on it.
The scarcest resource we all have is hours in our day. As soon as we know what the right thing is to build, most of the hard work is done.
Knowing in advance is often impossible. So we have to just start by trying something. And that's where these feedback loops come in, I think.
Rachel: There's a cool psychological insight there as well. Because developers are really motivated by shipping code that people want and use and love.
What agile can do is take away that six-to-nine-month delay before people actually field-test the software, of which their feature is one tiny part, and instead make that something you can do in a day and get an answer back from.
Rachel: Christina, you were instrumental in Splunk's success. What, if any, are the limitations of log search in these distributed cloud-based environments that we're talking about?
Christina: I think the biggest limitation is that a lot of the needle-in-a-haystack search of messy log data made sense when sys admins thought of themselves as responsible for a hundred physical servers and the software running on them, and they knew the norms of those services.
Early Splunk versions, like one, two, three, actually had more at-a-glance statistical anomaly detection visually than later versions did.
Christina: And that sort of failed. It evolved away because it became hard to sustain from a scale perspective. I think the limitations of that model are: it's great for finding a particular error, and it's great for finding a surge in a particular error.
It's great for binding knowledge. You can do a lot of suppression. But you're still dealing at sort of the individual, error-fault level and you're very much subject to what happened to be written to a logging statement. I think structured logging changes a lot.
And then the trade-off you make for log search is that aggregation is statistically harder. I don't think those trade-offs are necessary anymore.
Looking at things on an aggregated basis, across multiple dimensions, is a much more powerful technique for a system that is constantly changing.
Charity: It's like the difference between grep, and all of computer science, you know?
Charity: It always drives me crazy when I have to convince a software engineer that they should care about structured data. The ability to do read-time aggregation means you don't have to predict, up front, which question you'll want to ask and pre-aggregate the answer.
Christina: Well, I mean, I'll tell you that Facebook... It's funny, I first heard about Facebook when I was at Splunk because someone at Facebook had downloaded Splunk and there was support escalation or something.
So that's how I first heard that Facebook existed, around 2006 or seven or something. Splunk never got very much adoption at Facebook for core operating use cases because of Scribe and what I call "logging with intent."
The two years before CloudBees I spent at Interana, which was founded by Bobby Johnson, who wrote Scribe at Facebook and ran infrastructure engineering there. It was fun to finally meet him and work with him, because he was responsible for the beginning of the trend.
The Facebook-like situation, where you have massive scale and massive dynamism, made it very obvious to people like him that you had to log in a structured way and then put those events into structured columnar stores like Scuba.
I didn't totally understand the implications of that. It was just an annoyance that these Facebook people didn't care about our lovely log indexing.
Charity: Indexes, again, something where you have to predict in advance, that you want to be able to search this efficiently. I really hate indexes. This is also the level of abstraction at which you're capturing data, right? I do think that we're still in the early days as an industry of figuring out what the good design patterns are around capturing that data.
Christina: I think so, yeah.
Charity: Because it's still being reinvented. Every place is doing it its own way, and I haven't really seen anyone generalize very well. It's like, you need to capture high cardinality, but people are not familiar with the term "high cardinality."
I'll define it as, if you have a table of a hundred billion users, the highest cardinality is always going to be a unique ID. A slightly lower but still high cardinality would be first name, last name. Very low cardinality would be gender. Lowest of all, presumably, human species.
This is not a term that we use every day in our lives, and yet it is the most valuable data, always. Any high-cardinality value that exists in your system will probably, at some point, either alone or in conjunction with other high-cardinality values, be the source of an edge case that bites you.
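The cardinality ladder described here can be sketched in a few lines of Python. The records and field names below are invented for illustration; only the ranking (unique ID highest, species lowest) comes from the conversation.

```python
# Hypothetical user records; the field names are illustrative.
users = [
    {"id": "u-1001", "first_name": "Ada",  "gender": "f", "species": "human"},
    {"id": "u-1002", "first_name": "Alan", "gender": "m", "species": "human"},
    {"id": "u-1003", "first_name": "Ada",  "gender": "f", "species": "human"},
]

def cardinality(records, field):
    """Count the distinct values a field takes across all records."""
    return len({r[field] for r in records})

# Unique IDs sit at the top of the cardinality ladder; species at the bottom.
for field in ("id", "first_name", "gender", "species"):
    print(field, cardinality(users, field))
```

A system that can only group by low-cardinality fields can never isolate the one user, one request, or one host that is actually misbehaving.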
Christina: Part of why we haven't been able to generalize is that we still have this old-data-warehouse mindset of, okay, before we start creating data, what are the use cases going to be?
From my first encounter with these problems, back at Microsoft, where the next project after release engineering was to build a log database, this was 1998,
instinctively I've always seen that this data is not inherently security data or operational data or business data. It is the baseline record of what your systems are doing, and the more facts that you can gather and the more--
Charity: The more context.
Christina: The more flexibility you can provide yourself.
Charity: The more you can tie them together with meaningful relationships.
Christina: Exactly. So the bad way to think about structuring log data upfront is to be restricted in thinking about what use cases you're going to have on it rather than being as rich and descriptive as possible.
So self-describing flexible structure is the way to go when thinking about what is every fact that you possibly can capture about this event in time and what is every dimension, and then getting those.
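A minimal sketch of what "self-describing, flexible structure" can look like in practice: one wide, key-named event per unit of work, capturing every fact and dimension available at the time rather than only the fields a known use case demands. All of the field names here are assumptions made up for illustration.

```python
import json
import time

def emit_event(**fields):
    """Build and emit one self-describing structured event.
    Every key names itself, so downstream systems need no out-of-band schema."""
    event = {"timestamp": time.time(), **fields}
    print(json.dumps(event))  # in production: write to a pipeline or event bus
    return event

emit_event(
    service="checkout",
    request_id="req-8f3a",
    user_id="u-1001",       # high-cardinality dimensions are kept, not dropped
    endpoint="/cart/confirm",
    duration_ms=112,
    db_rows_read=42,
    status=200,
    error=None,
)
```

Because the record is just keys and values, the same event can later feed a security question, an ops question, or a business question without re-instrumenting.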
I think what is good is modern data pipelines and the idea that those data pipelines or data hubs or data buses are able to then publish data to lots of different downstream systems that are optimized for different kinds of analytics. I've moved away from thinking that it's at all possible to really have one persistence layer for this data against which live querying is done for different use cases.
Rachel: One of the things I think I see happening is that
our systems and our constraints have far outstripped the metaphors and stories that we tell about them.
You mentioned sys admins thinking that they own 100 servers. The things that people are responsible for and measured on have changed radically and become much more abstract and virtual and dynamic. And yet we're still struggling to connect that sense of responsibility to whatever it is that we're trying to manage.
Charity: Yeah, I've been thinking a lot about the national electrical grid. This is the kind of system we're supposed to be building, right? That's the right mental model.
It's not a LAMP stack, it's not these graphs that we draw for ourselves. It's systems where you just have to embrace failure.
It's constant, and that's okay because most of them aren't catastrophic. But they're there, and people delude themselves into thinking that they're not failing just because their site's up. There's so many cockroaches living under that rug.
Some problems you can only see when you zoom way down to hyper-local, "The oak tree fell over on Main Street." Some problems you can only see when you zoom way, way out, "This bolt was manufactured in 1982 and it rusts twice as fast," and everything in between. Just like you were saying,
the same system will not encompass all of those use cases for everyone and be performant and store data forever. So we do have to get comfortable with some degree of polyglottery.
Rachel: I think one of the challenges is that people are drawn to computer science because it's one of those STEM-y fields where they think they can arrive at a concrete answer to a question. What we're talking about is systems that are almost becoming organic in their complexity.
Charity: They're ephemeral, they're dynamic, they're in and out of existence.
Christina: If I just bring it back to where we are today versus Splunk, a lot of what we're saying in this conversation is actually very much the conversation that I was part of in 2005, 2006.
I mean, SerendipITy with capital IT was my SIG file for years. And I think we thought a lot about embracing chaos. The fundamental difference was we,
because of the dev and ops divide, and because so much of what we dealt with was commercial vendor products that our customers were deploying, we thought we had no control over the chaos of the data being written.
There was chaos in the messy log data, chaos in the signal coming into the system, and then there was the chaos we talked about endlessly when launching Splunk: the complexity of systems, the dependencies, and embracing the fact of constant failure.
We talked about the need to have all of the data, to be able to see what's really happening in production, versus looking at more monitoring-oriented systems. But the difference now, I think, is that we don't have to have chaos in terms of what data gets reported. If the developers are responsible for production, and the developers have practices and recipes for what they record as observability into their systems, then the chaos is just the natural chaos of the complexity of where the stuff is getting deployed, and race conditions--
Charity: This is the difference between black box and white box, right?
This is that difference.
If you have the ability to plunge your fist into the beating heart of the system and change it, then you can get dramatically different and better results out of the data
than you can if you're just sniffing around the edges, trying to guess what's going on inside, but you don't actually have commit rights.
Rachel: But to be able to take that leap and empower developers that way, you have to take on the possibility that they're going to break everything. It has to be a resilient organization, one that embraces failure, one that rewards experimentation even if it doesn't work out.
That's in contrast to what you were saying before, Christina, about when you're collecting data people wanting to know upfront what the use case for a particular piece of data was. They wanted that kind of certainty about, let's--
Christina: And they still do.
I've had these conversations within the last few months with developers who are embracing this new world but not seeing the implication for the logging side of things. So it's still hard.
This logging with intent is still a cutting-edge thing. And I do think it is a necessary prerequisite to getting the visibility in production that actually makes it safe to do continuous deployment.
Rachel: We're using deterministic logic in a world that's rapidly becoming relativistic.
What do you think the punishment should be for developers who don't structure their logs?
Christina: That's an interesting question. Ideally, in this world, yeah, the punishment is experiencing the consequences of their software not having the impact that it should have, right?
It's funny, I'm tangling with this in a new way, and all of the customers I'm starting to speak with are tangling with it too: "Do we enforce certain things in the software development process through this new tooling, or do we enable it and encourage it?"
I have to come down on the side of enabling and encouraging: "The last week's worth of daily releases seems to have tanked our systems, and we haven't been able to diagnose it or figure out what to roll back. So let's post-mortem it, let's figure it out."
I think more and more, there are teams within software development organizations with names like "developer productivity" that really provide tools and best practices. It's like: okay, you're not bad and wrong, but based on what happened this past week, here are some techniques and practices you may not have adopted, and here are some systems we have inside the organization that you could take advantage of. And if you do, you may not have a week like this one next week.
Charity: I think, first of all, when you start out, you absolutely have to build a culture where you are permissive. If you want to get the best talent, if you want to empower your people,
the best companies are very permissive in letting you choose the right tool for your job. But there comes an inflection point in the life of the company where that falls apart just spectacularly.
This is often a big crisis for these teams; they're just like, "We're losing our culture." But the free-for-all won't work anymore, and everyone can see it.
The most successful thing I see companies do at that point is embrace the idea of a golden path. This is the blessed path. They nominate a couple of senior engineers and say, "You figure out what our defaults are."
These are supported by the organization. "If you follow this golden path, we will support you. We have on-call rotations. If you deviate, you may certainly do that, but you own it. You will get called, you will get woken up, you are on the hook for it."
Christina: I'm seeing similar things. And on the point of attracting talent: what's been interesting, with this new perspective on CloudBees customers, is that so many of them are large organizations in traditional industries that are reinventing themselves digitally and rethinking of themselves as software development organizations.
They're very upfront. I was with someone who just moved from fintech into pharmatech and started a developer productivity role. He's like, "I don't want to control or enforce, I want to encourage. I have to make this a place where developers own their own tools and own their own destinies. And I have to compete with the Facebooks and Googles and so forth for talent."
Charity: It's a real competitive advantage.
Christina: It really is.
Charity: For recruiting, for quality of service, for speed of execution and for people just being happy.
Rachel: I think we all recognize that authoritarian cultures don't produce software that's able to be resilient and agile. We tend to go to the other extreme and encourage permissive organizations.
But it sounds like what you're talking about, Charity, is there's a middle path where people feel guided and supported, suffer natural consequences when things go wrong, don't get punished but are given the tools that they need to contribute to the larger organizational goals.
Charity: Yeah, Richard Thaler and Cass Sunstein wrote that great book, Nudge. It's like that. You've got to make the defaults the right thing to do, as much of the time as possible. And the more thought you put into the defaults...
You know, as an engineer, I don't actually want to sit here and decide what the perfect data store is every single time I want to do a project. I want there to be something default that works most of the time, that I can start with and then later decide, "Oh, this isn't working, maybe I need to do something else."
Christina: I also believe strongly that turnkey things have to be completely transparent and built from component parts. And I think so, but you need to be able--
Charity: No, I'm not even talking turnkey things, I'm just saying--
Christina: No, the default. But the defaults have to be implemented in such a way that they can be tweaked. And the people can see how they're created.
That kind of transparency and openness in architectures is super important. I think we are seeing a generational shift in tools towards that model, which is nice.
Charity: This is what great ops teams are doing these days. They function more like internal consultants. They're the experts in all of the defaults.
A really successful model that I've seen is an engineering team that has to build a new service, okay, they build it. And as soon as it reaches a quality level that the ops team says, yes, we will support it, they get their support. Until then, they carry the pager. And I really like that model.
So, how is software development going to change in the next five to 10 years? And how does observability and data management play a role in that?
Christina: I think that one dimension we haven't talked about so much is that the definition of a software developer's going to change as well. I think we're going to see a seismic shift where people who were business managers of some sort are retrained or moved left into software development.
Charity: To some extent I'm already seeing this, especially on the front-end side.
Christina: Well, I think we already-- Yeah, there was an earlier generation of this 15 years ago, with all those business-process editors generating code. I think we're going to see a whole new generation of that. I think it's all just going to come together.
I think even the role of the software developer and product manager, much as I've been an advocate for the product manager for most of my career, I think that's going to merge up.
I think that the ways that code gets written are going to be more disparate, but the practices of, "Okay, this is code that now is going to trigger a whole pipeline of getting innovation to customers" is going to happen.
Charity: There's a difference between a developer and a software engineer.
Charity: Increasingly, in order to be literate in almost every industry, you're going to need to be able to automate in some way. You're going to need to know how to write code. That's different than the discipline of software engineering.
Christina: You're probably right there. Where I'm going with this is that the changes that make their way all the way to the right, to the end user, are going to come from many more different roles, and I think they're all going to follow the same kind of software-factory path.
I look at things like docs as code, for example. Even documentation, when people are writing it, is following that same path. And so I just think,
almost anything that makes its way to the end-user experience of any product or service is going to come through a software development lifecycle path.
Charity: I think that a lot of the fear that people who are entering the software industry feel is because they can't actually tell if what they did worked or not.
Christina: Right, right.
Charity: We're missing that part of the feedback loop where you can verify. Forever, we have shipped code and just waited to get paged to see if it works or not. And this is insane.
I understand why that's been this way, but this is certainly something that we care about a lot because it doesn't have to be that way. You should be able to move with confidence, right?
This helps with focus, with knowing if what you're doing is the right thing to be spending your time on or not. I don't actually know exactly how these plug together, but I feel like
we'll build better things if we can see what we've done better.
Christina: Yeah, absolutely.
Rachel: Well, when you come down to it, software is distilled best practices.
All of the time you spend on your internal processes actually feeds into how literate and informed your developers and software engineers are.
Developers are people codifying those processes at the micro level. The software engineers, I think, bring in the systems knowledge to integrate it all at a macro level. And so you had better have good processes, or you're not going to generate good software. All of this stuff we're talking about, about enabling people and giving them the tools they need and--
Charity: Garbage in, garbage out.
Christina: The other thing I want to say that's also been on my mind a lot, and as I'm observing different individual examples is, I think a lot of the enthusiasm for automation and codification of the software development and delivery process also ties into a lot of the desire to remove bias from the equation.
I've personally seen, in a lot of software organizations, that the reason women and minorities don't advance as software engineers on the ground is entirely opinionated code reviews and forums like that. So when it becomes literal and objective, you can do whatever you want, you can get it to the user. If it objectively changes the metrics in a positive way, it stays. If it objectively changes the metrics in a negative way...
I'm seeing a lot of this next generation of developers, they're almost telling me that they're passionate about this because they're tired of the nonsense of how much human intervention there is in the whole process and that being where the bias creeps in.
Rachel: Yeah, the mono culture.
Christina: I think it's true that what you said earlier, Rachel, about people want to enter computer science because they feel like it's going to be very objective, there's a scenario where I think it is actually trending that way.
There's a strange interplay between trying to move towards having more diversity and inclusion in the software industry in general, and moving towards more objective ways to measure the impact and fewer barriers to getting the impact of anyone's contribution to the marketplace and to users.
Rachel: I think the difference is the feedback loop.
Objectivity is real objectivity when it's evidence-based, not when it's opinion-based.
Christina: Yeah, exactly.
Rachel: There's been a lot of opinion-based, "This is the objective truth because I say so." When you actually look at the evidence, when you observe, it often tells a very different story.
Charity: Yeah, the blossoming of data. It's become so large that we can no longer ignore the fact that we have to throw most of it away. I think the fact that we've been so focused on aggregates for so long has hidden that from people's view, but aggregation is throwing things away too.
Even more than the aggregation question, I think the model of events over metrics is going to be increasingly important, because events preserve so many relationships between points of data.
It's interesting. I feel like we, as an industry, took a detour 20 years ago down the metrics route because Google released a bunch of whitepapers; we had them, so we built databases on them, and we've kind of lost the muscles for analyzing events.
Christina: I never left events.
Charity: Well you're special, Christina.
Christina: I'm very special.
Events can contain metrics, but the other way is not true.
Charity: Exactly. You can always derive the metrics and aggregates from the event.
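The asymmetry Charity and Christina are describing can be sketched directly: given raw events, counters and percentiles fall out at read time, but the raw events can never be recovered from the rolled-up numbers. The event fields and endpoints below are invented for illustration.

```python
from statistics import median

# Raw events: one record per request (illustrative data).
events = [
    {"endpoint": "/search", "duration_ms": 40,  "status": 200},
    {"endpoint": "/search", "duration_ms": 55,  "status": 200},
    {"endpoint": "/search", "duration_ms": 900, "status": 500},
    {"endpoint": "/login",  "duration_ms": 20,  "status": 200},
]

def metrics_for(endpoint):
    """Derive aggregates from events at read time; no pre-aggregation needed."""
    durations = [e["duration_ms"] for e in events if e["endpoint"] == endpoint]
    errors = sum(1 for e in events
                 if e["endpoint"] == endpoint and e["status"] >= 500)
    return {"count": len(durations),
            "median_ms": median(durations),
            "errors": errors}

print(metrics_for("/search"))  # {'count': 3, 'median_ms': 55, 'errors': 1}
```

Going the other direction, from `{'count': 3, 'median_ms': 55}` back to the 900 ms outlier request, is impossible: the aggregate has already thrown that information away.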
Christina: Exactly. The only thing, the only exception is true time-series data where you're taking a reading of a continuous variable. I think people confuse time-series and event data all the time and it drives me nuts.
Rachel: The question is not, "Are you going to throw this away?" It's, "When are you going to throw it away, and what are you going to spend to keep it?"
Charity: Exactly. We have a much easier time talking to new software engineers who are trying to learn observability than to ops people who have been up to their necks in dashboards and metrics for so long, because those folks don't think in terms of events.
They think about the system, "Is the system healthy?" Which is kind of irrelevant. What matters is, "Could your program get the resources that it needed to complete its journey?"
Christina: I think another way, and this ties a little bit more to my Interana viewpoint of two years, which is, the events are actually the thing that represents interactions with the software, that is what the software is there to do. The software is there to serve requests to do something.
Each record of it doing something is an event with all the details of what was good or bad about it doing something. So that's a little bit of the mentality. That for me dates back to what we were trying to do at MSN in 1998 with the executive dashboard based on events.
Charity: That's how our brains work. Our brains work in events. Our brains work in, "This happened and then this happened and this happened and this is what the world looked like when it happened." It's much easier to debug and understand your software.
Christina: Yeah, and then these days, with the way servers are built, there's the question of the granularity of what constitutes an event. One thing I started doing at Interana a couple of years ago, before this observability framing, was to propose that we talk about observations as separate from events.
Say you have a classic logging case. You've got an event recorded at the application firewall, an event recorded by the server, and an event recorded in the client logs. They're all the same event. So I've started to think of them as three observations of the same actual real-world event.
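That framing, several observations of one real-world event, can be sketched as a join on a shared request ID. The shared ID is an assumption: in practice it has to be propagated through every layer, and the sources and fields below are invented for illustration.

```python
from collections import defaultdict

# Per-layer observations of the same underlying requests (illustrative data).
observations = [
    {"source": "firewall", "request_id": "req-7", "allowed": True},
    {"source": "server",   "request_id": "req-7", "status": 200, "duration_ms": 31},
    {"source": "client",   "request_id": "req-7", "render_ms": 120},
    {"source": "server",   "request_id": "req-8", "status": 500, "duration_ms": 4},
]

def reconstruct(observations):
    """Group per-layer observations back into one logical event per request."""
    events = defaultdict(dict)
    for obs in observations:
        rec = events[obs["request_id"]]
        rec.setdefault("sources", []).append(obs["source"])
        # Merge every fact except the bookkeeping keys.
        rec.update({k: v for k, v in obs.items()
                    if k not in ("source", "request_id")})
    return dict(events)

merged = reconstruct(observations)
print(merged["req-7"]["sources"])  # ['firewall', 'server', 'client']
```

The merged record for `req-7` now carries the firewall's, the server's, and the client's facts about the same real-world event, which is what makes cross-layer questions answerable at all.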
Charity: Tracing is the other interesting thing that's--
Christina: That's new, too.
Charity: It's the only other new thing that's out there.
Rachel: Thank you so much, Christina. It's been a joy having you here.
Charity: Thanks, Christina.
Christina: Thanks, it's been fun.