Parveen Khan: I think I have a very interesting story to tell,
like how I got into observability, because like I usually--
When I'm unable to attend some conferences or something,
what I do usually is like,
I try to follow the tweets or hashtags of the conferences.
So it was something like, I was following tweets from one of
the conferences where Abby was doing her workshop
about observability.
Shelby Spees: This is Abby Bangser?
Parveen: Yeah. It's Abby Bangser.
Shelby: Yeah. We've had her on o11ycast before.
Parveen: That's where I got introduced to this term.
Like, until then I never knew there was something called observability.
So after, like, getting to know about that,
that first encounter with the word,
I was like, okay, you know, I'm a tester.
So we're always very curious
when we find something new.
So it was like, what is this observability all about?
And that's where I started to like, you know,
dig more into it, try to find what it is
and it was such a coincidence that I was at a new company.
I had joined a new company at that period of time
and I was working on a completely new product
and, like, within a new team,
and it was such a coincidence
that I was facing those exact situations
where, like, observability was needed,
and that's where I came across this term, trying
to learn about it.
So it was a very nice combination of getting
to know this thing at the right time,
and that's where I started to learn about it
and I think I was on my testing tour
at that point where I was trying
to pair up with different testers
and developers on different topics basically.
Charity Majors: A testing tour?
What is that?
Parveen: It's like pairing up with different testers
by having different sessions like with anyone
across the globe basically.
I can pair up with anyone for an hour or two
where I'm exploring, learning,
or sharing about one particular topic,
picking one topic each with different testers
and developers across the globe.
So I was trying to do that.
That's where I thought, "okay, can I ask Abby,
like if I can pair on trying to learn about observability."
And it started from there, you know,
I paired with Abby and, you know,
she introduced me to this new world of, like,
the whole infrastructure and observability.
She taught me how to get
comfortable learning this new term
and getting to grips with dealing with infrastructure
and, you know--
as soon as I was getting into this,
I came across a lot of tweets from Charity
and that's where, like, everything fell into place,
and I still remember reading the blog post
which you were sharing, like, you know,
the one giving questionable advice.
Like, I remember that blog post when I read it.
It was about how
to introduce observability within the team.
So I read that.
So that's where my journey started into observability
and I'm still trying to learn more about it.
Charity: This feels like a great time for you
to introduce yourself.
Parveen: So yeah,
I am Parveen Khan and I'm a senior tester
at Square Mobile Technology, which is based in London
and I am more, like, you know--
I call myself a quality advocate,
because I've been mostly a solo tester on a team
where I'm trying to drive the quality.
It's like trying to work with every member of the team,
not just the developers,
but also Ops and product managers and product owners,
and trying to figure out the risks from the product
and trying to bring in the quality aspect.
So I think this is what I try to do, being a tester.
Shelby: I think it's really interesting being sort of responsible
and having ownership of a service when you're not
the one writing the code or managing the infrastructure
and so what is that like? When you--
I love the term quality advocate because, you know,
this is important stuff for the business.
Parveen: Yeah. I think it's not about just having the ownership,
but for me, like, you know,
when I look at this observability and having the ownership
of the services, I feel like--
You know, we as testers,
we are so great at being very curious
and asking different questions
and I feel like observability is all about, trying
to find unknown unknowns
and being able to ask those questions.
So I think it is going to be so powerful for testers
if we have the ownership of those, you know,
of those services, and get to work
with different tools.
Charity: Without observability,
how would you find the answers to these questions?
Parveen: After knowing this,
I feel like I was completely blind.
After knowing this, like, I feel like there was nothing
I could have known before, like, you know,
and one thing I would say is, because I am a tester,
like, on my current team
I get access to all the tools. Earlier,
I would never have had that access to the tools.
So I was completely relying on, like, what I could see.
It wasn't about trying to find out any question
or any answer about anything;
it was all about what I could see from the UI.
At most I could go into the network tab.
That was it.
Charity: So you could look at the application's behavior
for anomalies,
but you couldn't track it down below the surface,
is what you're saying.
Parveen: Yeah. If anything goes wrong, it's just like, okay,
I don't know, what's gone wrong.
It has gone wrong.
I don't know what has gone wrong behind it.
Charity: And that's when engineers started building
like sand castles in the sky,
just trying to understand,
because you can look at the code
and you could look at the output
but everything that happens in the middle,
like what the user is doing,
the infrastructure that it's running on,
what the code is actually doing when the user is using it
on the infrastructure, all of that is opaque.
So you kind of have to like trace it through in your mind
and humans are terrible at this.
Like, we are really bad at looking at code
and understanding it, because it's too complicated.
We can't, you know.
Like, a chessboard has what, you know,
hundreds of thousands of possibilities?
Well, code plus infrastructure
plus users has trillions of possibilities.
Like you can't model it in your head.
You can barely remember like
what infrastructure changes have happened and like
what components it might be hitting, you know?
Like, we just have no hope.
Shelby: There's so many moving parts
and I also think about like communication,
even if you find an anomaly just from probing
like the output of a system,
and you have to communicate that over to the developer team,
there's so much that's lost in translation.
I don't know if you've ever had that happen
where you go back and forth and back and forth talking
about a bug or something, versus, like, being able to--
it's like, you see something weird,
you're like, oh,
I wonder what's happening under the surface,
and being able to actually investigate that yourself as a tester,
where you have the domain knowledge to be able to do so.
Like, this is like the chip on my shoulder,
where it's like, people talk about being technical
and stuff, and it's not that.
You don't have to write code to be technical.
We had Heidi Waterhouse a couple episodes ago talking about
what it's like to be a developer advocate
who doesn't write code
and so I think there's this whole swath of opportunity
for people who are technical and don't necessarily get
to manage code changes and infrastructure changes
who can still investigate
and interrogate the system.
Charity: You can still understand systems with the best of them.
Like, writing code is just one way to learn about
how systems actually work.
Parveen: That's so true.
Like you don't know what's going on under the hood.
Like, you know, how many services there are.
Like, when we have this ability to look through the system,
like, for me as a tester, you know--
you know which services are talking
to what and how it's behaving,
like, what is happening,
and especially when you are raising any issues,
it's not just saying that, oh there's something wrong.
It's about trying to figure out all the information
and saying, okay,
this is talking to this and this is what is happening
and trying to give extra information,
like trying to provide more information and more help.
It's not about just raising it and leaving it
but it's about providing that extra bit of information so
that it's more helpful for the developer
or anyone who is looking into it basically.
So this kind of, like, you know, it allows me
to look into that, and it's also about, like,
you know, for me, like,
I think what I'm trying to aim for in my day-to-day,
like when I'm working,
it's more about trying to be proactive rather
than reactive, you know.
when I'm looking at something,
I know something is going wrong because I have
that information, because I can see that.
And then trying to figure out what it is affecting
and how it is affecting things, instead of, like, waiting
for someone like, you know, for a user to come and say,
like something is wrong and that's where we dig into it.
So it's more about trying to be proactive
rather than being reactive.
Charity: Yeah, it reminds me a lot of operations.
For a long time, operations didn't write code,
we were just expected to run it, you know.
And so it was all about figuring out what was happening
in these black boxes
and the only thing that was available
to us then was these low level system metrics,
which we got very good at reading and interpreting,
but it's like Daniel says.
His analogy is, "Yes, you could probably measure
someone's heart rate or something
and if you know what you're looking for,
then you can tell that they broke their arm,
but wouldn't it be easier if you could just look
at their arm and see if it was broken, right?"
There are all of these little system metrics
that will go haywire when something is happening,
and we've often, like, memorized
certain combinations of them.
It's like scar tissue, right?
Like, we've had outages before in the past
and we've noticed a certain constellation
of behaviors in our metrics.
So we remember that, and, like, if it happens again we go,
oh, I know what this is,
but you're not actually looking at the problem.
You're not actually diagnosing it by looking at it directly,
and that's because, like, the domain that most of us live
in now is a much higher-level domain.
We're not looking at little systems.
That's Amazon's job, that's Azure's job, right?
We are looking at code, we're looking at lines of code
and that means we need to speak the language of endpoints,
variables, you know, function names, stuff like that
and that hasn't really been available to us.
You know, I've been thinking a lot lately about how,
for most of my career, you know,
most of the engineers at any company,
were not working on things that move the business forward,
they were working on infrastructure.
They were working on the databases
or racking servers or imaging EC2 instances or something.
And only like, 10% of engineers might be working
on the business problems
and over the course of my career
that 10% has expanded, right?
Because now, there are so many infrastructure companies
that write the components and now mostly you are living
in the land of APIs and plugging things together
but, like, this explains why the tools we have
that are mature are infrastructure tools,
and the thing about observability is,
because it lives in that high-level domain of, you know,
the way you're writing the software
that your business consists of, it's time, right?
It's time for observability.
The only aggregation that observability performs
is around the request path, right?
Everything else is raw.
Around the request path it's aggregated
and that's because that is what corresponds
to the user's behavior.
Parveen: Yeah, and it's funny, like, you know, for me,
like, the way I have come across this
and the way I see more value in it is
because I've seen the problems.
Like, as I said, I joined this new team,
so it was like, every issue we were facing,
we didn't know what was going wrong
because we didn't have the ability to look into the system.
We could not figure it out.
It's like, everything we came across
we'd been marking as, like, blocked
because we didn't know what was happening.
I think we would spend around a week or so
trying to figure out what was happening,
but then there were no answers
because there was nothing we could look into.
That's as much as we could look into it,
because we didn't have that ability.
So that's where I was like,
what do we need?
Like, you know.
That's where I was trying to figure out these questions,
like, what do we need for this?
And that's exactly the way I
came across this: seeing the problem
and then trying to find out, okay,
this is exactly what we need on our product,
this is exactly what will help us.
And that's where I think, even though a lot of people know
they need something, it's like a different term or something.
Like, for example, we know that
we need some kind of visibility into our system, but how?
Like, you know, what do we need for that?
Do we need logging?
If we need observability, there are a lot of changes
which we need to make to our current code,
and we need to redeploy it,
and we don't want to break anything.
That's the biggest challenge.
Shelby: When you're trying to investigate something,
things are already degraded,
and so you don't want to change anything
in order to be able
to investigate it better.
I've totally been in that situation.
I've been in a situation where like,
it's so expensive to send certain kinds of data.
Like if you want more verbose logs,
or if you want metrics on certain, like, you know, variables,
they'll turn it off most of the time
and then when something goes wrong,
they'll turn it back on and pay a little bit extra, instead
of, like, having that data all the time
so that you can go back in time and look, like, oh--
like, being able to investigate an incident
that happened yesterday or a week ago or a month ago,
or trends over time.
And so when you're flipping
your data spew on and off
in order to, like, control your observability
or your monitoring spend,
you lose out on a lot of that, like, capability,
that ability to investigate.
And so there's so many of these teams that are in this,
like this place of like, you know,
you just have these ongoing problems, it's a known issue
and you don't know the impact of it
and you don't really know how to solve it
and so it's just like, this is just a bad part
of the app for months, right?
And that's not production excellence,
that's not what we're going for,
and it just feels like, you know,
people bend over backwards to make things work with StatsD
or make things work with their logging tools
but you end up having to do all this work later on
to connect the dots
and so, just having your data be good to start with,
and then being able to, like, ask those questions.
Charity: And I understand why we used to do this
because hardware was expensive.
You know, storage was expensive.
We didn't have high throughput networks,
but now all that stuff is really cheap.
You know, there's no reason that we can't afford it.
You know, Honeycomb makes it so
that everyone retains 60 days, no matter what
because you need that in order to investigate trends.
And it feels like the pricing models
of companies haven't really caught up to the modern era
because they're still, like, trying to charge by megabyte
or by gigabyte or by seat or something,
when, you know, all of that data is easy and cheap.
What is not cheap is your attention and your time.
Shelby: And it always makes me, like, mad, but in, like--
it's like angry laughter, when companies won't spend
the price of a laptop per year to have better observability
when they'll spend their engineers' hours and hours
and hours over months.
Charity: Yeah, their life force.
Shelby: And it like burns people out, you know, like, especially,
it's always the people who care the most
who burn out the fastest.
Yeah, and so it just makes me angry,
and it's like such a huge cost on the human side
and also on the business side, you know,
like if your employees, if your engineers can't--
and like, and everyone on your team,
like your product managers or your BI people--
Charity: It impacts your customers, you know.
And it's very shortsighted.
It's not really expensive compared to people's time.
Replacing people who have gotten burned out
and quit is far more expensive.
There are good tools out there and yet, you know,
it's easier to get headcount than it is
to just spend on a tool.
I think that this kind of comes back to the, you know,
not understanding our total cost of ownership
over the lifetime of software.
Like it is incredibly cheap to write software compared
to how expensive it is to maintain it over the long run.
Shelby: Yeah, that was something that I was very lucky
to learn just from, you know, big speakers and writers,
like, very early in my career, about how 99% of the life
of a software service is in, like, maintenance mode.
So, like, yeah, it's important
to design things well up front,
to make things easily changed and maintainable,
but most of the cost comes from that long-term maintenance,
and it's not, like, a maintenance thing
where you just, like, put it on ice.
It's active ownership, right?
It's an active-- and I think--
Charity: It's like a garden.
You have to prune it.
You have to, like, make it grow. It's a living thing.
Shelby: And I think that's where, like, a culture
of observability extends beyond the developers,
especially if you design a system that's observable.
If you write your data to actually map
to what matters for your business,
then your product managers can go in
and observe things in production,
your BI people can go in and ask questions.
It's like, why are things bad for this customer?
Well, let me go look it up
and so that starts to get into this like, you know,
if you're designing things well on the code side,
then it can become much more accessible
to the technical people who don't write code.
Charity: The production-adjacent teams need
to understand production just as much
as the people who are writing the code.
Shelby: And then everyone's just like in the same boat, you know,
and we're all working towards the same goal.
And so that's, that's why I, you know,
I get so excited about this stuff,
because I've been so lucky to work
with people who really care,
and that's what it sounds like, Parveen, like,
what you're doing at your company is, you care so much
and you're trying to move the needle for your team.
Just like, look what's possible.
And so like how have you gone about like, trying
to like convey this to your teammates?
Charity: I feel like the biggest thing that we have to fight,
and I'm sorry I cut you off there
but like the biggest thing that we have to fight is like
learned helplessness, is just this idea that this is just
what it's like working with computers.
You're just never going to know what's going on
and it's just going to be shitty
and your production is just going to be like this hairball
of shit that, like, got coughed out one day,
that nobody ever expected to understand.
You know, people get so cynical
and the reason they get cynical is
because they've been told it could be better before
and they've been burned, and they get tired
of believing, right?
And I do think there's a lot of truth in the idea that,
for you to really go through the cognitive exercise
of learning something new,
it has to be an order of magnitude better
than what you had before,
because you know how to use the stuff you've got now, right?
And I feel like the constellation of tools
around deploying better and more safely
and understanding your systems has only recently kind
of crossed that threshold of being an order
of magnitude better.
Until now it's been like 20% better, 50% better.
You know, even if it's, like, twice as good,
is that really worth, you know,
forcing everyone to, like, throw out the known stuff
and then, like, adopt something new and unknown?
It's kind of not, you know.
And I think that if you look
at the DORA report year over year,
what you're starting to see, just in the last two years,
is the tooling has gotten to a place
where it is an order of magnitude better,
and the teams that do start
to adopt it, do see just like compounding benefits
and they can move dramatically faster and more safely
with more confidence than teams that don't use it.
So, sorry, Parveen. Over to you.
Parveen: Yeah, as I said, like,
I see this as similar
to the DevOps movement, like, you know,
it's not about getting the tools and saying
that we are doing DevOps.
So it's something similar for me, like,
as far as I know, like, you know,
I feel like it is something like,
it's not about having some tooling in place
and saying that, yeah, we are doing observability.
For me, again, it is a cultural shift.
It's a cultural change, which needs, like, you know,
to come from leadership basically.
Charity: I think that it can come either top-down or bottoms-up.
We've seen it both ways.
Parveen: Yeah. I think I say that mainly because, for me,
like, me being a tester and trying to bring in the change,
I come at it from the point of view of using observability
and not from the point of view of implementing it.
So I'm not a technical person, or I can't say that,
but I cannot just do it myself
and show it, like, okay, look, this is what it will bring.
So I am trying to come from a point where like, okay, look,
if we try to implement this,
this is what we are going to get into our system
and this is what we can save
and this is what we can learn about a system.
We can't just say that our system is so complex,
that that's the reason why we are not trying to figure out
what's going on under the hood.
It's about trying to build the right thing under the hood so
that we can get those answers, like, you know,
and partly it's because of me being the tester,
trying to get this word out.
So the way I tried was, like, I know a lot of people
on my team, they know about the logging and all that stuff,
but I kind of organized a lunch-and-learn session.
I said, okay, let me try to introduce this term.
Let me get this term out in my team.
So I gave that,
and then I'm trying to use a sample so that I can show, like,
you know, show what value this brings,
like, you know, so that I can bring in the change basically.
So that's the reason why I said, it's not just--
the change can come from anywhere,
but I think it's mainly because of me as a tester
trying to bring this value within the team.
So that's the reason why I said it needs
to be not just from me,
not just from anyone on the team, but--
Charity: I think you're right, like, there is a pecking order
within technical teams
and that's, you know,
it shouldn't necessarily be there, but it is true.
Like, the people who write code are at the top of the heap,
and the people who have been there longer,
who are more senior, are at the top of that heap, right?
And the people who don't write code are lower, you know,
and I think what you said, there's something there, like,
it's not that just anyone can bring something in,
like, you have to have credibility with the team,
and that's only partly under your control.
That's partly under other people's control.
So that is very valid.
But I do think that, like,
I think even there,
if you can band together with a critical mass of people,
you know, like you can kind of overcome some of that.
I think that you can start persuading, you know,
the people on the team who seem more pro-tool,
who are more open to new ideas,
the people on the team who love geeking out
over understanding systems,
because it's not all of them, but some of them--
I think that, like, lunch and learns are awesome,
but I think you also often have more success if you take
people aside privately and just, like,
get them excited about something, you know,
and then they go and champion it.
I do think that it's too early
to give up on bottoms-up as a concept,
because some of the most successful teams we've seen,
have been ones where it is usually the developers,
the developers bring it in,
because we're writing a tool for developers that, you know,
has ancillary benefits for other teams.
But we're writing this tool for the people
who are writing the code, who need
to understand it in production,
like they're our use case.
So I just don't want people to be left with the idea that
if it doesn't come from leadership it's too late, you know,
quit your job, because a lot of teams do have
a lot of great successes bringing it in,
showing that it works, and then pitching it.
Shelby: And I think that's how you get buy-in from leadership,
is as you get enough developers
and you get enough people on the team
to be like, "Oh, like, this is, this is where we should go."
And the hard part is if you're not
in a position to add instrumentation to code,
then it's very hard to show the impact of that
and to get more people
involved with that.
It's exactly what happened on my last team,
where I was lucky enough that one of the developers looked at it
and was like, oh, okay, this is exactly
what I've always wanted,
and just dove in head first,
and just, I mean, he started spending nights
and weekends, like, getting instrumentation working
and stuff, and that's also not always, like, a good path.
I don't want people to be spending nights and weekends
to further their observability journey,
but you know, this person's also just that enthusiastic.
And I think this is where it's important,
where for teams to make space for people to explore,
because that's how you find these, you know,
these impactful changes
and that's how you can test them out
and so what we're trying to do at Honeycomb, and just
in general and in the observability community is make it
easier for people to try things out and make it easier
to get started.
And just see how different it is compared
to the status quo of monitoring and logging
and tracing tools, and the three-pillars-type stuff
where you're sending your data separately
and having to connect the dots later on.
Like, what if you just sent data that's like structured
to start with and just see the difference there
and so having that,
that self-serve option can really make a difference
for a lot of developers who are just like, oh,
this observability thing seems cool,
but I don't want to spend my entire weekend getting--
Charity: We've actually put a lot of engineering effort
into making it pretty turnkey, you know,
so that all you have to do is install a library
or you know, link to an implementation.
Like we gather up all of the information for you.
We provide a framework so that it's basically
like doing a printf if you want
to append more data to it,
and this is why:
because almost nobody adopted it until we did all that stuff
to make it, you know, pretty magical for them.
You know, and partly this is a shame.
What this tells me is that
vanishingly few people have seen the impact
of good instrumentation
over the course of their career.
They don't know what a difference it makes.
They believe the magical stuff that they're being,
you know, sold by, you know, all the vendors, who are just like,
just give us millions of dollars,
we'll do all the work for you,
and then you don't have to know how to instrument your code,
which is a fucking trap
because your vendors do not know your business use case
and observability is all about your business use case.
We can provide helpers, we can provide stuff
that guesses, we can provide stuff
that gives a lot of the defaults.
We can do stuff around the structure of, you know,
the parameters that are passed
in the underlying systems, but only you know
what you're trying to do with your code
and only you know what you should capture
in order to really shine a light on
what you're trying to build for your business.
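As a rough sketch of the printf-style appending Charity describes, assuming Honeycomb's Python beeline; the write key, dataset, handler, and field names below are illustrative placeholders, not from the episode:

```python
import beeline

# One-time setup: the beeline auto-instruments supported frameworks
# and gathers request-level fields for you.
# The writekey and dataset here are placeholders.
beeline.init(
    writekey="YOUR_WRITE_KEY",
    dataset="my-service",
    service_name="my-service",
)

# Hypothetical request handler: appending your own business context
# is the printf-like part -- one call per field, attached to the
# current event/trace span.
def handle_checkout(user_id, cart):
    beeline.add_context_field("user_id", user_id)
    beeline.add_context_field("cart_total", sum(item.price for item in cart))
    # ... business logic continues ...
```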
Shelby: And that's exactly what I was talking about, where, like,
if you're designing your code to be observable,
then it can be much more self-serve--
Like it answers the business questions
that people who don't live
in the code will be asking
and so then they can self-serve, start running those queries
and getting those questions answered
and they don't have to bug your engineers,
you know, four times a day, every time something's weird.
And that requires writing good instrumentation
and writing instrumentation that sends the data
that's meaningful to your business.
Charity: And it kind of has to be done by the person
who is implementing the business logic, you know.
It has to be done at that time.
I don't believe it's possible to come along years
after code was written and reinstrument it
in as good of a way as you could do
while you're writing it.
I think instrumentation should be seen just
like commenting your code.
Right? It is commenting your code.
It's just including a little bit of reality in there,
because it's a comment that your code emits
from deep inside of your infrastructure, right?
Which is like, when you don't have
that original intent in your head,
when you don't know what you're trying to do,
how can you write good comments about it?
You know, you can come along later,
you can guess what they were trying to do.
You can often guess pretty well,
but you can't ever guess perfectly
and sometimes you guess badly.
Shelby: The connection between commenting
and instrumentation reminds me of, like, what I learned
at my first job, when I had to give a whole lot
of technical presentations to non-technical managers,
and the advice that they gave me was like, don't just talk
about like what you implemented, talk about
the so what, like why is this important?
Why do I care?
What's the business impact?
And that's really what we should be thinking about
when we're writing our code.
Like, what's the point of this?
Like, so what? Like, why does this code even exist?
Well, that's what goes in your comments
and that's what goes in your instrumentation.
Parveen: And I think that's a really good point you mentioned,
that, okay, look, it's about giving the ability
to anyone, who can just run the query
and find out what's going on.
Like, you know, it's about, like, for example,
like, we have on our team--
ours is a multitenancy product,
so we have multiple project managers
who always have some questions.
They want to know certain things
when something is not working.
They have such great questions,
like, you know, such great questions,
but then they need some time from an engineer
to answer those questions.
So we are not full-blown,
like, you know, we are not completely there yet,
but we are taking a step-by-step approach
where we have some structured logs in place
that we can query to find out what's going on.
So we are trying to give that ability
to our project managers so that they can, you know,
if they are interested--
Like it's not about forcing them,
but it's about, if they're interested in finding out
what's going on, you know, giving
that power to them as well.
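For illustration, a minimal sketch of the kind of structured logs Parveen describes, which let anyone query by field; the logger name and fields here are made up for the example:

```python
import json
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("orders")

def log_event(message, **fields):
    # Emit one JSON object per event so teammates can filter on
    # fields like tenant_id without parsing free-form log text.
    logger.info(json.dumps({"message": message, **fields}))

# Hypothetical usage inside a request handler:
log_event(
    "payment failed",
    tenant_id="acme-corp",
    order_id=1234,
    error="card_declined",
    duration_ms=87,
)
```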
Charity: Everyone does a better job if they can self-serve,
if they can get real answers
from prod without having to have a translator
there in place.
Shelby: And I think that's like another
of the failings of these complicated tools
where you have to learn their bespoke query language
in order to parse, you know, the log spew from production.
And then it's like a 30-second, like,
turnaround time to get an answer from your query,
and so you never really learn the query language
because it's, like, this terrible feedback loop,
and so frustrating.
You know, product managers
and all these folks like they're technical,
like the, you know, they go in
and they write these Excel macros that like blow my mind.
It's not that they can't go in
and investigate a system and learn these tools,
it's the tools don't allow you to learn them.
And so that's why, I think, like,
at Honeycomb we have a relatively strict SLO
for query results, because we want
that feedback loop to be really tight, you know,
and I think it's helped me learn our systems better
because I can just go in and just start investigating
and poking around
and it's like, "Oh okay, well that wasn't, you know
that wasn't the right thing to ask.
Let me ask a new question,"
and that's how we learn as humans.
You definitely shouldn't have to like compose a query
and then go get coffee, and then, like,
by the time you've gotten back, what even were you thinking?
You know, it really does have to be rapid
in order for you to iterate on it.
Charity: This is something that, like, we rarely call out
when we're talking about Honeycomb,
because it seems so obvious to us,
but so many times new people start using Honeycomb
and that's the thing that they're most blown away by.
They're just like, how is it so fast?
Shelby: Yeah, and so, I think about all-- I mean,
I have a background in like education
and I was a teacher and stuff.
So I think a lot about just like the learning process
and the onboarding process
for like new tools and new paradigms
and it's really important to have those feedback loops.
And, you know, when I was working in a job
where like I was making changes
to Chef cookbooks, and it was,
like, a five-, ten-minute feedback loop,
like I would forget what I was doing
by the time I got the result of my change.
And, like, this was, you know,
we were making a lot of big, heavy changes.
We were upgrading our Chef version
and it was really important for me to, like, know
what broke and stuff,
and it got really, really frustrating.
And so, you know, not having to change context,
switch context between asking your question
and getting your answer.
It's just important across the board.
Charity: Well, what would you say to other testers
who haven't tried observability yet
or who maybe are, you know,
just hearing about it for the first time,
and what would you tell software engineers
who I guess listen to testers more?
I don't know.
What rants do you have for your software engineers?
Parveen: Yeah, I think for testers,
this is something completely new
and I think we are still in the early stages of trying it.
Like, you know what,
like it's something like, oh, is this for us?
Like, do we have to learn this?
Is it something for us or not?
Because I see this thing, like,
you know, when I'm trying to learn,
I want to attend a lot of, like, webinars,
or, like, now we are all virtual, so virtual conferences,
to, like, learn more about it.
So it's like, when I was trying to register myself,
like, you know, you have those options about, like,
what do you do?
Like, you know, most of the options were like SRE, Ops
or developer.
So I used to feel like, oh, even we as testers want
to learn about this.
So we should have that option for sure.
So I feel like, for any testers,
like, we are very good at, you know,
learning new tools
which can help us in building our quality.
So I really like to say this to any testers
who are trying to explore this observability:
it is about us trying to learn another thing
that helps us in delivering better quality,
and this is something very new for us
as testers basically.
So that's the reason why I think I am trying to learn
and share about what I'm trying to learn
and I try to blog about it, like,
just sharing my viewpoints and my experience
about how I'm learning and what I'm trying
to get out of it basically.
Shelby: Well, thank you so much for joining us today.
Oh and Parveen, you're giving a talk on observability
in a couple of months, right?
Parveen: Yeah, I'm just sharing how I try to learn
and what I'm trying to do, so, just again,
like, sharing my experience so
that any testers, like, if they're interested,
they can learn from it.
You know, I can share what I'm learning.
So yeah, I'm trying to share my journey so far,
how I learn and what I'm doing with observability.
So I'm doing this at another
conference in a couple of months.
Shelby: Cool, we'll be sure to link to that in the show notes.
Charity: Thanks for that.
Shelby: Thanks so much.
Parveen: Thank you so much for having me. Thank you.