O11ycast
29 MIN

Ep. #42, Continuous Profiling with Dmitry Filimonov of Pyroscope

about the episode

In episode 42 of o11ycast, Charity and Shelby are joined by Dmitry Filimonov of Pyroscope. They discuss continuous profiling practices, use cases, and tools, as well as insights on how data is profiled at Pyroscope and how profiling resolution is measured.

Dmitry Filimonov is a co-founder of Pyroscope and a former Engineering Team Lead at Sensor Tower.

transcript

Dmitry Filimonov: So one way to explain it would be to say that it's just like profiling, but it runs on your production servers-

Charity Majors: But it's continuous.

Dmitry: It's continuous.

Shelby Spees: So let's dive deeper into it. So what is profiling?

Dmitry: Right. So profiling-- classic profiling would be looking at your program's stack traces over time and seeing where your program spends time while it's running.

So Continuous Profiling would be the same thing, but in a production environment, running 24/7, and you can go back and look at that data.

Charity: So you're basically running code that wraps the current code, right?

Then historically the reason that we haven't run Continuous Profiling everywhere has been because of the performance hit.

Dmitry: Correct. Yeah.

Charity: Has that changed?

Dmitry: Yeah. So when we talk to people they usually are concerned about two things.

One is the performance hit.

And the other thing is that they think that all this data would take up too much space.

And what we found was, well for the performance hit, in the past decade or so there's been a lot of advancements in sampling profiling technology.

And the way those sampling profilers work is they look at stack traces multiple times a second, like 100 times a second or something like that, and that is much better than hooking into every method invocation or something like that.

Charity: Trying to wrap every single call from every single piece of code.

Dmitry: Exactly.

Charity: Like the GDB example. If all else fails you could attach GDB to a running process in prod, but you couldn't do it for long because it would just crash the server pretty soon.

Dmitry: Yeah. Yeah, exactly. It's exactly like that. Yeah.

And so that's how we kind of are solving the CPU issue.
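(To make the sampling idea concrete: a minimal sketch of a signal-based sampling profiler, using only the Python standard library. It interrupts the process about 100 times a second and records the current stack as a folded string, rather than wrapping every method call. This illustrates the general technique, not Pyroscope's implementation, and it only works on Unix-like systems.)

```python
import signal
from collections import Counter

SAMPLE_HZ = 100           # look at the stack ~100 times per second
stack_counts = Counter()  # folded stack string -> number of samples observed

def sample(signum, frame):
    # Walk the interrupted frame to build a folded stack like "<module>;busy_work".
    stack = []
    while frame is not None:
        stack.append(frame.f_code.co_name)
        frame = frame.f_back
    stack_counts[";".join(reversed(stack))] += 1

def start_profiler():
    # ITIMER_PROF fires SIGPROF based on CPU time consumed by the process.
    signal.signal(signal.SIGPROF, sample)
    signal.setitimer(signal.ITIMER_PROF, 1.0 / SAMPLE_HZ, 1.0 / SAMPLE_HZ)

def busy_work():
    total = 0
    for i in range(5_000_000):
        total += i * i
    return total

if __name__ == "__main__":
    start_profiler()
    busy_work()
    for stack, count in stack_counts.most_common(5):
        print(count, stack)
```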

The other part of the equation, and the part that we think is very important, is that if you just collect a bunch of profiles and you just store them somewhere, it's very quickly going to become cost prohibitive for you to do so, because those profiles just take a lot of space.

So what we did was we designed this storage engine to be very well tailored to storing this Continuous Profiling data.

And so that allows us to store data from lots of applications, many, many servers.

Charity: Do you store the raw snapshots, or do you store aggregates of them?

Dmitry: Kind of both. So we profile 10 seconds at a time.

So that's kind of like the resolution, the minimum resolution that we can get right now.

And then we also pre-aggregate it so that you could look at, for example, one month of data or one week of data.
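(A hedged sketch of that pre-aggregation idea: treat each 10-second profile as a map of folded stacks to sample counts, and merge chunks ahead of time so a week or month of data can be served without scanning every raw profile. The data shapes and names here are hypothetical, not Pyroscope's storage format.)

```python
from collections import Counter
from typing import Dict, List

# A 10-second profile chunk: folded stack -> number of samples observed.
Profile = Dict[str, int]

def merge_profiles(chunks: List[Profile]) -> Profile:
    """Merge several fixed-resolution chunks into one coarser profile."""
    merged = Counter()
    for chunk in chunks:
        merged.update(chunk)
    return dict(merged)

# Example: three 10-second chunks pre-aggregated into a 30-second view.
chunks = [
    {"main;handle_request;compress": 420, "main;handle_request;db_query": 80},
    {"main;handle_request;compress": 390, "main;handle_request;db_query": 110},
    {"main;handle_request;compress": 450, "main;handle_request;json_encode": 60},
]
print(merge_profiles(chunks))
```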

Shelby: So now would be a great time for you to introduce yourself.

Dmitry: Right. So my name is Dmitry Filimonov.

I'm one of the maintainers of Pyroscope, and Pyroscope is an open source Continuous Profiling platform.

Charity: Do you have a company built around that, or is it an open source project at this point?

Dmitry: Right. So it started as an open source project, but we quickly realized that we could build a company around it.

And so yes, we are also a company and I guess I would be one of the co-founders. Yeah.

But right now we're definitely mostly focusing on the open source part of it.

We're trying to get more people involved.

Trying to create more integrations with different runtimes and things like that.

Charity: You're basically trying to do what the last generations of open source tracing technologies were doing, right? To get a leg up.

Dmitry: Yeah, exactly. Yeah.

Shelby: Yeah. That's cool. How many people work there at Pyroscope?

Dmitry: So there are two of us.

Ryan and I were the co-founders and we hired a few contractors and we're hoping to convert them into employees sometime soon.

Shelby: The Pyroscope.io webpage, by the way, it's really cool.

I hadn't heard of it, so I pulled it up just as we were sitting down with Dmitry and it's beautiful.

Of course it's rainbow, so that speaks directly to my love language, but it's got this beautiful rainbow flame graph.

It reminds me of a lot of stuff that Brendan Gregg used to do.

Dmitry: Yeah. We definitely were inspired a lot by what Brendan Gregg has done both in terms of flame graphs, but-

Charity: It is so beautiful. I look at it and I'm-- it's also a little deceptive. If you would like to weigh in on the controversy over sampling or not, I feel like sampling is a bit of a bad word to a lot of people and yet it's absolutely necessary.

And from my perspective, anytime you're dealing with something interesting enough to have problems, it's almost a necessity.

You cannot run GDB on every single piece of code out there. You have to look for what matters. So what's your guys' approach to sampling?

Dmitry: I agree. Yeah.

I think you do have to do sampling where it matters, but I think it's very important to do it in the right places.

So we do sampling for this profiling part, but we try to avoid doing it in terms of-- maybe some products I saw, they only profile 10% of machines or something like that, or maybe at the beginning of the minute, but not at the end of the minute.

We try to avoid that kind of sampling, because we think you might lose those signals and that's not great.

But for profiling, what we found from experience doing profiling before, is that this kind of sampling, where you look at stack traces many times per second, is usually fine and gives you enough data to make decisions, to understand how your system works.

Charity: Yeah. It all kind of comes back to the fact that you've got to know your own data.

You've got to know your own system.

You can't just blindly adopt someone else's sampling algorithms or choices or whatever, because you need to know what you're trying to do in order to know how to sample, in order to get the right results.

But it's not really a choice because it is cost prohibitive and it is network saturating.

And a lot of people talk about observability and they're like, "Oh, well, when are you going to include every method call?"

And I'm like, "Well, never."

There's a separation of concerns there.

Observability is very much about understanding where in the system the code that I need to understand or debug is.

And you're dealing with a different order of magnitude of data volume when you jump out into the sub-process layer and start dealing with stuff there, which--

That feels like something where Honeycomb and Pyroscope could be very complementary in that regard, because once you've figured out where in the system and the code you need to care about using Honeycomb, then you might want to jump into something like Pyroscope in order to profile it at a very low level.

Dmitry: Yeah, exactly. I totally agree.

Well, first of all, I also think as these systems scale, storage becomes very important and you have to really understand the data you're working with.

Yeah, and we spent a lot of time developing the storage solutions so that we can scale it cost efficiently.

Charity: Yeah. You can't just shovel that shit into MySQL and cross your fingers. It's not going to work kids.

Dmitry: Exactly yeah.

Charity: Yeah. It is kind of amazing what you can get with a custom built storage system.

I spent my entire career telling people, "Don't write a database." But sometimes you just got to write a database.

Dmitry: Yeah. Yeah. That's very true.

And we actually see some products experimenting in this space as well, and actually there's some commercial solutions too.

And just by looking at their pricing structure, I can already tell that-

Charity: You can tell what choices they've made at storage layer. Absolutely.

Dmitry: Exactly. Yeah.

Charity: Yeah. Absolutely. Observe.

Anyone who's making their pricing decisions based on how many queries you're allowed to issue, has made a wrong turn somewhere, in my opinion.

Dmitry: Yeah. Yeah.

Charity: It's not what you want to be incentivizing people to do.

Shelby: Yeah. So I actually learned about Pyroscope because I met Ryan Perry through the Observability TAG of the CNCF.

That was actually my first intro-- well, not my first, but it was my first introduction to Continuous Profiling as a fundamental observability concept and practice.

Liz had mentioned it in passing before, and I was like, "That sounds like a really low-level Liz thing."

The kind of stuff she geeks out about.

And so now, Continuous Profiling, especially with Pyroscope and similar efforts, is reaching a point where it doesn't have to be cost prohibitive.

And I think Charity has started to touch on this too, is where do you draw the line?

When do you stop tracing and start profiling?

Charity: And do you think that they're going to merge at some point?

Shelby: Yeah. Is there overlap? What does that look like?

Dmitry: Yeah, these are great questions.

I think it's a Venn diagram and I think there's definitely some overlap between what tracing can do for you and what profiling can do for you. But it's not like one circle.

Shelby: Yeah.

Dmitry: Yeah. And I think the nature of the data is also slightly different. In my ideal world, if I was building a large system today, I would probably use both, and they would kind of complement each other for some-

Charity: It's kind of the nature of the tools here: when you've got the right tool for the problem that you have, it's like magic.

You just look at it and you're like, "There it is."

But if you have a problem that's like magic when you have a Continuous Profiling tool, you might not find it when you have tracing, and vice versa.

And the art of knowing in advance, which tool is going to be magic for your problem is half of the problem.

Dmitry: Yeah.

Shelby: Yeah, I guess that's what my question is, what's the level of granularity?

Because I've totally seen people with 4,000-span traces on a single Rails request.

And it's just including all of the Rails metadata, like database queries that it talks to itself about.

And I don't know if all of that is relevant on every request.

Charity: It isn't until it is.

Shelby: It isn't until it is.

And so do you want to spend all your event data on big, long traces or do you cut off your traces and then direct that to a Continuous Profiling tool?

Charity: The whole philosophy of, it isn't until it is, is I think where sampling does its magic.

If you're sampling correctly, if you have a sampling strategy that matches your problem space, then you aren't storing it until it's meaningful.

It isn't meaningful until it is, and so you don't have to pay for it until it's there.

I think it's a holy grail, correct me if I'm wrong, Dmitry.

I feel like all of our customers would like the holy grail, for you to be able to install something once as a library and bubble up all of the-- almost auto-trace based on the kind of stuff that comes out of a continuous profiler, so that you're getting the tracing knowledge at that instrumentation level, without having to man-- right now people have to manually insert spans.

They have to go, "I care about this, insert a span."

Which could be very laborious.

But at the same time, if you auto instrumented everything using Continuous Profiling, it would be too messy.

And so I feel like there's got to be some convergence at some point that we haven't quite yet figured out.

Dmitry: Yeah.

Charity: I see Dmitry smiling, so I really want to hear him say what he's thinking about.

Dmitry: Yeah I have a couple of thoughts.

First of all, yeah, I think what customers would like to see is some sort of magical pill where you install one thing, it collects all the data you want and it ideally even answers the questions, it just tells you how to solve performance issues.

I don't think we're quite there yet. Maybe one day we'll get there, but certainly-

Charity: Big question mark.

Dmitry: Yeah, Big question mark for sure.

Charity: Because they do. They want magic.

And what I have found in my experience is that magic isn't good for you.

Once you've given that up, you're like, "Well, somebody else is going to tell me what to care about."

You've sacrificed a lot of your engineering process.

You've given it over to some magical algorithm and you've stopped understanding how it works, and how it works is always a series of trade-offs. Part of getting to a place where you understand the question and the result is-- first you have to understand the problem.

We can try to make things as magical as we can.

That's the whole philosophy behind BubbleUp, behind our magic instrumentation things.

But if we give people too much magic, then they don't have to think about, "What am I caring about? What am I explicitly caring about, and why does it matter?"

You want people to be instrumenting as they write their code with an eye to their future self.

"What is future me going to want to care about based on what I know now?"

Because you're going to forget what you're caring about while you're writing the code.

Future you needs that gift from present you.

Telling them "This is what you're going to care about because this is the point of the code I'm writing."

Am I making any sense?

Dmitry: Oh yeah. 100%. The way I see it is, our role is to empower people to kind of learn more about their systems and understand them better.

And we provide tools for them to do that. No one else is going to do that for them.

There's no magic yet that can do it. Maybe one day but I think it's far away in the future.

Charity: The people who really want the magic tend to be the CTOs, the CIOs, the people who wish that they could make their people fungible.

Because in their world, I know I've said this a few times on this podcast, but it was shocking to me to realize that in their world people come and go, but vendors last forever.

And the vendors who sell them on, "Just give me $10 million and I'll make it so that your people never have to understand the problems because we'll just make it magical for them."

That's a very comforting line to be fed when you're a C-level who's looking to reduce risk.

Unfortunately, it's just not true.

Dmitry: From my experience that's what I've seen as well.

We're not even trying to go there yet. Maybe one day.

Charity: I think we should be trying to center the engineer to make the engineer more powerful and better at their jobs, but not to remove them from their process of understanding it.

Dmitry: Exactly. Yeah.

Shelby: I guess what I'm trying to picture, and this is always my experience when encountering new developer tooling is, where does this fit in my workflow?

Me, as the engineer debugging something in prod, or I get paged-- When do I use this and what prompts me to use this and how does it fit in with the other tools I'm using?

And so I guess that's kind of where I'm trying to direct my tracing question, because a lot of the exact sort of examples I see, of these Flask app calls and stuff like that, are like, well, you can wrap that in a trace, or maybe your auto instrumentation wraps that in a trace.

And at what point do I not want that in a trace?

And I keep traces at the top level, service request level.

And then at what point do I know-- basically maybe I'm answering my own question, but "Oh, I got an error here. Let me go look at the profiles for that chunk of code that we've stored in Pyroscope."

So is that sort of like the flow?

Dmitry: Yeah. So I would say in my experience, traces are more useful for when your program kind of talks to other systems.

Maybe database calls, or it talks to another service or something like that.

Continuous Profiling shines when your program itself is doing something.

Maybe it's taking up a lot of CPU compressing things or, there's a variety of things that can be-

Charity: It doesn't have to network.

Dmitry: Yeah, exactly. So speaking of the whole thing of, maybe it's too much data, again from my experience, what I found by using profiling tools was that-- and flame graphs in particular, flame graphs are great.

I think they allow you to kind of both get a very high level understanding of where the time is spent in your code, and that will help you make decisions on, "Oh, do we really need this function that takes up 30% of the time? Maybe we need to optimize it."
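(To make the "30% of the time" observation concrete: a small sketch that takes folded-stack samples, the input format flame graph tools commonly consume, and reports what share of total samples each function appears in. The stacks and numbers are made up for illustration.)

```python
from collections import Counter

# Folded stacks with sample counts, as a flame graph tool would consume them.
samples = {
    "main;serve;handle;compress": 300,
    "main;serve;handle;query_db": 500,
    "main;serve;handle;render": 200,
}

total = sum(samples.values())
time_in_function = Counter()
for stack, count in samples.items():
    for fn in set(stack.split(";")):   # count each function once per stack
        time_in_function[fn] += count

for fn, count in time_in_function.most_common():
    print(f"{fn}: {100 * count / total:.0f}% of samples")
```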

Charity: So from my perspective-- and this is going to reveal a hell of a lot of bias in my point of view.

But I feel like most of the time engineers should be spending their time in observability land.

They should be looking at the consequences of their code after they've rolled it out.

Because, especially the farther we go down the microservices road, the more it limits the utility of any individual node's ability to reflect the performance.

It becomes much more of a systemic thing.

But I feel like it's good hygiene to-- Do you have people hooking this up to their CI/CD runs to look for regressions?

Because it feels like a thing for affirmative exploration, like going out to just take a look.

Does anything look weird here? Are there any outliers?

Are there any "Oh, that's one color that's just dominating forever."

It's a good practice to just sometimes go and look for these things before they bite you in the arse.

Dmitry: Yeah, exactly.

And what is even maybe more interesting here is, I feel like in many companies, as the company grows there are more features, more tests, and the CI pipeline just takes more time to run.

Charity: Without giving more benefit.

Dmitry: Exactly yeah. And that slows everything down.

Your whole team's productivity goes down.

And so that's where Pyroscope shines too, where you can install it on your CI pipeline and then see exactly where the time is spent.
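(For example, attaching the Pyroscope Python agent to a test run looks roughly like this. The configure() call is from the pyroscope-io package, but the application name, server address, tags, and CI environment variables here are placeholders, and the exact setup will differ per CI system.)

```python
# Assumes the pyroscope-io package is installed and a Pyroscope server is reachable.
import os
import pyroscope

pyroscope.configure(
    application_name="myapp.ci.tests",                     # placeholder name
    server_address=os.getenv("PYROSCOPE_SERVER",           # e.g. http://pyroscope:4040
                             "http://localhost:4040"),
    tags={"ci_run": os.getenv("CI_RUN_ID", "local")},      # tag profiles per CI run
)

# ...then run the test suite as usual; the agent samples the process
# in the background and ships profiles to the server.
import pytest
pytest.main(["-q", "tests/"])
```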

Charity: Yeah yeah yeah.

We found people doing the same thing, instrumenting their CI pipeline with Honeycomb, just looking where all the time is going.

Dmitry: Yeah.

Charity: That's just super compelling.

And I think this has been our toe in the door for lots of places.

I think this is a big up and coming use case for all kinds of profiling stuff.

But I also feel like, is it useful in staging?

Dmitry: Oh, that's a good question.

Charity: Feel like it's not really.

I feel like there's almost no reason to look for it in staging.

It should be in production, should be something that you just have available in case something looks really funky and you can't figure it out with your normal tooling.

Is it too expensive to just run in the background in production everywhere?

Dmitry: So we're trying to make it as cheap as possible, again both in terms of CPU overhead and the storage part.

So, if I were to kind of answer the question of, should you do it in staging?

Yeah, I don't know. I haven't heard of any of our users doing that yet.

Charity: It's hard to see where the value would be because you need that sort of multi-tenant concurrency, users doing all the crazy shit that users do.

Dmitry: Yeah, exactly. Ideally you want a lot of traffic, you want the actual system and that's how you kind of find issues.

Charity: This is a really good way of finding out problems that, like you said, are inefficiencies in the code that could be found in a single node.

There are lots of inefficiencies though, that aren't like, "Oh, this 400 millisecond request is now taking 4,000 milliseconds."

It's more like, "This 40 millisecond request is now taking 60 milliseconds."

But you multiply that by 20,000 nodes and, that's not the kind of thing that a profiler is going to really help you with.

That's an observability problem.

Dmitry: I actually, maybe I would argue on that-

Charity: Is there an interface that allows you to sit on top and aggregate across all of your nodes or is it per node?

Dmitry: Yeah. Yeah. So that's actually another kind of thing we are trying to innovate on.

We aggregate all the data from all the different nodes and so you can look at it.

It's actually very-- in aggregate it's really nice to look at.

Where it doesn't work so well is when you have some sort of tail latency, like maybe only a few requests take, I don't know, 4,000 milliseconds, but most of them are doing fine.

I think those use cases, Continuous Profiling is not very good for. But in aggregate, if like you said, your request time went from 40 to 60 milliseconds, you will actually see it with Continuous Profiling.

Charity: Yeah. That will really stand out.

Dmitry: Kind of. So we have this timeline feature.

And so in theory you should be able to see a spike on the timeline thing. And we also have a comparison feature.

Charity: I guess what I'm talking about though is, is it additive?

Like across 20,000 nodes, adding another 20 milliseconds can be a lot.

It can generate a lot of extra load when it's entering a backend system or something like that, even though on its own, it isn't a problem.

Dmitry: Yeah. Yeah. So you should be able to see that.

And you can also compare within Pyroscope, like maybe the time before you deployed and the time after you deployed, or maybe the time before incident and after, things like that. Yeah.
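(A sketch of that comparison idea: take a profile aggregated before a deploy and one after, and report which functions gained or lost share of samples. The diff logic and names are illustrative, not Pyroscope's actual comparison feature.)

```python
def share(profile):
    """Each function's fraction of total samples in one aggregated profile."""
    total = sum(profile.values()) or 1
    return {fn: count / total for fn, count in profile.items()}

def diff(before, after):
    """Change in each function's share of samples, before vs. after a deploy."""
    b, a = share(before), share(after)
    return {fn: a.get(fn, 0.0) - b.get(fn, 0.0) for fn in set(b) | set(a)}

before = {"compress": 300, "query_db": 500, "render": 200}
after  = {"compress": 600, "query_db": 500, "render": 200}

for fn, delta in sorted(diff(before, after).items(), key=lambda kv: -abs(kv[1])):
    print(f"{fn}: {delta:+.1%}")
```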

Charity: What competitors are there to Pyroscope out there?

Dmitry: Okay. There's quite a few.

So all the major cloud providers now have similar products.

Both AWS and Google Cloud have those. I don't think Azure has one yet, but-

Charity: Probably just a matter of time.

Dmitry: Yeah. Yeah. Definitely. Datadog has a continuous profiler too.

Charity: What is Pyroscope doing differently?

Dmitry: So I would say our major advantage is this whole storage engine.

It kind of looks like all these other competitors are just taking the profiles, storing them somewhere, and putting a database on top so you could find profiles kind of easier. Which is great, I mean it works. But what we do better is we have this timeline component where you can really zoom in on a specific time period. You don't have to kind of figure out where to look before you start looking into profiles.

Charity: You can zoom into the raw events basically.

Dmitry: Yeah. Yeah. So you can both kind of look at, I don't know, a month's worth of data, but then you can, on the timeline, zoom in.

Charity: What are the trade-offs that you made in the storage engine to enable your users to ask different kinds of questions or do things differently than other continuous profiling software?

Dmitry: We tried to make as few trade-offs as possible.

So that's why we spent a lot of time building this storage engine so that we store symbols separately from the actual profiles.

And we also make the querying kind of fast because we pre-aggregate profiles.

There are some trade-offs, for example, we do limit the maximum number of nodes you have in one profile.

In practice we haven't seen that kind of affecting much, but I could see maybe for some applications where you have really a lot of separate functions and a lot of different sub-routines, maybe there that could become a problem.

Although that's also configurable, so you can always adjust that for your use case.

And it will be a little more expensive.

But other than that we tried to optimize everything as much as possible so that it would work for this specific kind of use case.

And so far we're getting good results.

Charity: Is it a columnar store, or what does it most resemble? OLAP stuff?

Dmitry: We actually wrote a blog post that got a lot of attention on Reddit and other news aggregators.

And you can find it on our GitHub if you're really interested, but to summarize, we built it on top of a key value storage.

So our thinking is that going forward, we will also adapt it to other-- just any kind of key value storage type thing.

But right now we use Badger, a key-value store, for our storage.

And on top of that we built a bunch of systems.

There's a bunch of trees everywhere that kind of link to each other and allow for this use case. Yeah.
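(A toy sketch of the general shape Dmitry describes: profile segments keyed by application and time window in a key-value store, with symbols stored separately so function names aren't duplicated in every segment. A plain dict stands in for Badger here, and the key layout and encoding are hypothetical.)

```python
import json

kv = {}  # stand-in for a key-value store such as Badger

def put_symbols(app, symbols):
    # Store the symbol table once per app instead of once per segment.
    kv[f"symbols/{app}"] = json.dumps(symbols)

def put_segment(app, start_ts, profile_by_symbol_id):
    # One 10-second segment, keyed by app and window start time.
    kv[f"segments/{app}/{start_ts}"] = json.dumps(profile_by_symbol_id)

def get_range(app, start_ts, end_ts):
    # Scan segment keys in a time range, the way a timeline query might.
    prefix = f"segments/{app}/"
    out = []
    for key, value in kv.items():
        if key.startswith(prefix) and start_ts <= int(key.rsplit("/", 1)[1]) < end_ts:
            out.append(json.loads(value))
    return out

put_symbols("myapp", {0: "main", 1: "handle_request", 2: "compress"})
put_segment("myapp", 1700000000, {"0;1;2": 420})
put_segment("myapp", 1700000010, {"0;1;2": 390})
print(get_range("myapp", 1700000000, 1700000020))
```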

Shelby: One thing I guess I'm curious about is how-- because at Honeycomb we sort of had to do a lot of work to help educate people about what observability even is.

Why tracing doesn't have to be expensive and stuff like that.

And so do you find that developers know what just regular profiling even is?

Do you find that people do it and then want to do it in production?

Or I guess who are your users and how are you finding navigating the market that way?

Dmitry: Yeah, this is a very interesting question.

I would say yes and no to that.

I would say there's a small percentage of people who are already familiar with profiling tools and maybe they're already doing some profiling in production.

But the way they usually do it is, they do it on an ad hoc basis.

They log into some machine and they run the profiler, and it's usually a pretty involved process.

People always kind of write their own wrapper scripts to kind of automate it.

So those are kind of the people who understand the problem the best.

And we typically see them responding to it most positively, I guess.

But for the other part of the question, I do think there are still so many companies and so many developers who are not very familiar with profiling tools or observability tools, metrics, tracing, and I think it's our collective responsibility to educate them and to show them that this can actually help them.

Shelby: Do you have any great stories about outages or problems or things that you or your customers have overcome using Continuous Profiling?

Dmitry: Yeah. So one story that we like to tell is, there was this one company and they had this large pool of servers all kind of doing the same things.

And they already had a lot of tooling around it.

They had all sorts of metrics about latency and a lot of other things.

They also had tracing, and when they installed Pyroscope, they realized that this one function was taking up 30% of the time.

And that was a compression function.

And after some debugging, they realized that the compression was turned up to the maximum and they didn't really need it to be at the maximum level.

And so that's an example of where other tools were not capable of finding it and where Pyroscope really shined.

Yeah. Other examples, we found people using it for their test suites and they were able to kind of similarly find areas where they weren't even looking where their programs were spending a lot of time.

Another good example that I like is, this one company used it.

They used it with our eBPF integration that allows you to kind of profile the whole Linux machine, and they installed it on their Mongo cluster.

And after they upgraded their Mongo servers, they noticed that performance degraded, and they couldn't figure out why.

And by using Pyroscope, they were able to find the exact functions where Mongo was spending more time now.

And they were able to find the bug in the Mongo bug tracker and they were able to eventually fix it.

Again, I think without using something like Pyroscope it would be very difficult to even figure out what's going on.

Charity: Oh yeah. Totally.

Shelby: Yeah. So I guess I kind of see it like tracing is good for when things interact and cause slowness or errors-

Charity: That were caught.

Shelby: And profiling is good for when a single thing is causing slowness or errors, and you need to find that and isolate that thing.

Where the aggregate of another 20 milliseconds can take down your database or something.

You might not necessarily see that quickly in Pyroscope, but you'll catch that in your tracing.

But in Pyroscope you'll see that those 20 milliseconds are using 100% of CPU or something like that.

It's almost like, I saw something the other day about, if your first SRE is your CFO or something like that, you want to think about where you're spending your resources.

And profiling sounds like it helps you find where are we spending, not just time, but also just, what work are we doing?

And do we want to be spending our resources on that work?

Charity: Yeah. I think that hopping the network is a very good guideline for people to just-- because it doesn't really help much if you're hopping the network a lot, and it does help if you aren't.

Shelby: Mm-hmm (affirmative).

Dmitry: Yeah. Agreed.

Charity: Cool. Well, it's been very delightful to have you. Thank you for coming by, Dmitry.

Dmitry: Yeah. Thanks for having me.

Charity: Shelby. Did you--

Shelby: Oh yes. So this is actually my last episode on o11ycast, at least as a host. Maybe I'll come back as a guest someday.

Charity: Absolutely. It's been delightful having you.

You've been a wonderful co-host. Thank you so much.