Unintended Consequences
30 MIN

Ep. #4, Everything is an Experiment with Charity Majors and Liz Fong-Jones of Honeycomb

About the episode

In episode 4 of Unintended Consequences, Yoz and Kim speak with Charity Majors and Liz Fong-Jones of Honeycomb about scaling teams and company values. They share how their team makes space for curiosity and learning as a group, and why this has been so important for the Honeycomb platform.

Charity Majors is the CTO & Co-Founder of Honeycomb. Liz Fong-Jones is the Principal Developer Advocate at Honeycomb. Together they co-host the popular observability podcast o11ycast.


Yoz Grahame: Today we're joined by Charity Majors and Liz Fong-Jones of Honeycomb, a rapidly growing, remarkably successful startup specializing in providing observability tools to software developers.

These are tools which make it much easier for those developers to work out what their code is actually doing.

That was an off-the-cuff definition of observability. Liz and Charity, would it match how you think about what observability means?

Charity Majors: You're not wrong.

Liz Fong-Jones: There are many correct definitions of observability as it turns out.

Charity: And many wrong definitions of observability but yours is far more right than wrong.

Liz: People often confuse the data types that you can use to answer some of these questions with the actual capability to answer the questions because--

It turns out that no amount of having data is sufficient if you're not able to actually analyze it and use it to answer your specific questions.

Charity: They like this definition because it lets them sell three products, and it frustrates us because it's not actually observability. It's just the same old fucking tools.

Liz: It's a product suite, Charity. It's got more value because it's got all three elements.

Charity: It certainly costs more dollars.

Yoz: It's a dessert topping and a floor wax.

Charity: I like those "three great pillars."

Yoz: So what are the pillars that they tend to use?

Charity: Metrics, logs and traces.

Liz: Yep and I think even for a while, New Relic was saying that it was metrics, events, logs, and traces--

Where they define an event as being like something happened in your system, like a Kubernetes pod started or stopped.

Charity: Yeah.

Liz: So they're making jokes about a delicious MELT sandwich, and it's like, really? Really? You're contorting the metaphor.

Charity: Yeah.

Yoz: Oh yeah, that never happens in our industry.

Charity: Uh

Liz: Right? Like I think that's part of the challenge is that we have to kind of figure out how to avoid creating a bunch of marketing crap.

Yoz: Right.

Liz: With love and respect to my friends in marketing. There is good marketing and bad marketing.

Charity: Yes.

Yoz: Oh yeah. We're all very aware of that.

And it's something that Kim and I have talked about a great deal and what we're looking for.

Charity: Yes.

Yoz: It's kind of in common with how you want people to use data, right?

You're trying to cut through huge amounts of potentially irrelevant stuff, probably irrelevant stuff, to find what is actually useful.

Liz: Yeah, there's the signal to noise ratio problem.

Charity: Yeah, observability-- we're really borrowing it from this rich heritage in mechanical engineering and control systems theory, right?

Which is about, can you just look at the outside of the system and understand what's happening on the inside?

Even if what's happening on the inside is something that's never happened before, that you couldn't have predicted, that you haven't seen fail in predictable ways over and over again. Which means you need to be able to ask any question, right? Interrogate your systems with any combination of questions, and follow the trail of breadcrumbs until you find the answer.

Which is a very different modality than the previous method of having dashboards, where you build this sand castle in your brain and then you look at the dashboard and you're like, that's it, right?

And you jump straight to the end.

And if it's not it, you look at more dashboards, your eyeballs just scanning, right?

Looking for patterns. And that's not actually debugging.

It's not scientific, and it's not observability.
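Charity's point about interrogating systems with arbitrary, unanticipated questions can be sketched with wide structured events. This is a minimal illustration of the idea, not Honeycomb's implementation; every field name and value here is made up:

```python
# A sketch of "ask any question" over wide structured events.
# Field names and data are illustrative, not Honeycomb's schema.
from collections import Counter

events = [
    {"endpoint": "/export", "region": "eu-1", "build_id": "9f2c", "duration_ms": 4800},
    {"endpoint": "/export", "region": "eu-1", "build_id": "9f2c", "duration_ms": 5100},
    {"endpoint": "/home",   "region": "us-2", "build_id": "9f2c", "duration_ms": 80},
    {"endpoint": "/export", "region": "us-2", "build_id": "8a1b", "duration_ms": 95},
]

# An ad-hoc question no dashboard anticipated: where do slow requests cluster?
slow = [e for e in events if e["duration_ms"] > 1000]
hot = Counter((e["endpoint"], e["region"], e["build_id"]) for e in slow)
print(hot.most_common(1))  # → [(('/export', 'eu-1', '9f2c'), 2)]
```

Because every event carries every dimension, a new question is just another filter and group-by, rather than a new pre-built dashboard.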

Yoz: There's definitely a tendency for people to mistake the symptom for the cause with a lot of this stuff.

You know, if your CPU is maxing out, you just go, "Oh, that's the problem."

Charity: The dirty secret in systems is that most outages go completely unexplained forever.

Like nobody understands them. Nobody is ever going to understand them.

And we just kind of guilty look at each other and move on with our day.

Liz: Because they just keep piling up faster than you can actually understand all of them, right?

Like, so you fight a fire. The next one comes up and you're like, oh, let's just leave that smoldering. It's fine for now.

Charity: Yeah, you can either screech to a halt and do nothing else with your life but try to understand what's happening, because it's that hard, right?

Our tools have not made it easy to understand novel failures.

Our argument is that it can be that easy. You can just follow the trail of breadcrumbs.

Liz: It requires investment, right? The problem is it requires you to change your behavior.

It requires you to stop doing the old behavior. And unfortunately, a lot of people are invested in just trying to keep the thing running the way that they always have.

Yoz: Right. So today for this podcast we're actually using the theme of Goodhart's law.

I came up with the idea for the podcast series of using a different adage for each episode and Goodhart's law which --

I really should have it to hand in front of me.

Go for it, Kim.

Kim: Yeah, I wrote this down yesterday.

When a measure becomes a target, it ceases to be a good measure.

Charity: Oh, that one, 'cause everybody teaches to the test. Everybody starts trying to optimize that metric instead of aiming for system health and the things driving the metrics--

Yoz: Right.

Charity: Yeah.

Yoz: And with this in mind-- do you see a lot of that when you're working with companies on how they use observability, and on changing the mindset from monitoring?

Charity: I think that it mostly comes up in the context of SLOs.

Yoz: SLOs being service level objectives.

Charity: Yeah, do you want to talk about that Liz?

Liz: So the concept of the SLO is that it is attempting to measure the quality of service that you're delivering to your customers so that you can weigh the priority of reliability as a product feature.

So kind of in that way, I feel like it is more resilient to Goodhart's law in that it's not something artificial that you've made up. Right?

Instead you're trying to actually measure what you're doing for your customers and measure your customer's happiness, right.

And I think that that is a good thing to target, as opposed to some kind of internal productivity metric, which you can game.
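Liz's description of an SLO as a measured, customer-facing target can be made concrete with a small error-budget calculation. This is a generic sketch with illustrative numbers, not Honeycomb's SLO feature:

```python
# A generic error-budget sketch; the SLO target and request counts are illustrative.
def error_budget(slo_target: float, total_requests: int, failed_requests: int):
    """Return the allowed failure count and the fraction of budget remaining."""
    allowed = total_requests * (1 - slo_target)
    remaining = 1 - failed_requests / allowed if allowed else 0.0
    return allowed, remaining

# A 99.9% availability SLO over 1,000,000 requests allows ~1,000 failures.
allowed, remaining = error_budget(0.999, 1_000_000, 250)
print(round(allowed), round(remaining, 2))  # → 1000 0.75
```

Because the target is customer-visible success rather than an internal proxy, "gaming" it means actually making users' requests succeed, which is the resilience to Goodhart's law Liz describes.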

Charity: Yeah, I mean with the little systems metrics, there's just too many of them. Yes, you're trying to track as many of them as possible, but you're not gaming any of them.

Liz: Where like there's this interesting thing that happened at Google.

Something like 10 years ago, in which someone got a bee in their bonnet about, you know, hey, our systems utilization is 30 to 40%, right?

Like that should be more like 50 to 60% systems utilization. Right?

And the bad way to handle that, right, like would be to literally run, make work jobs in order to utilize your machines.

So they don't get taken away from you, right?

Like that's an example of Goodhart's law in action in a bad way, right.

Because that's completely decoupled from any reality.

Yoz: With departments that get budgets and have to use the whole budget by the end of the year.

Liz: Yep which actually, funnily enough, Honeycomb has profited from in at least one case where people like, in more than one case, we've actually had customers who spent the rest of their budget on Honeycomb.

And like, those are bad customers for us in the longterm. Right?

Like, because they just spent the money and didn't intend to use the product.

They just wanted to get the money off of their budget.

Charity: I mean, we appreciate it but we know they're going to churn, right? So it's kind of six of one and half a dozen of the other.

Yoz: So there's an aspect with this especially when you're talking about looking at the system from the outside and the inside and being able to understand how it's working.

You have a lot of experience and expertise at doing this with mechanical systems and computer systems.

And one of the things I'm fascinated to explore with this podcast is comparing that to companies' systems of people, and how they scale, how they change radically at certain levels of scale.

One thing we've seen, certainly a bunch of companies that I've been at where the problems of managing a company of 10 people and a hundred people are radically different.

You know, even just going from 50 to 150 requires a major change in how you're thinking about the company.

Charity: Yeah, and this is why I like the word socio-technical, because you can't actually measure and optimize the people and teams in isolation.

Just like you can't, you know, manage the tools that you use in isolation.

They're all part of one system.

The tools that you use change your behavior, and over time they change who you are, how you think, how you reason about things.

And that's why, you know, sometimes I've heard people say, "Why don't we just go hire some Starbucks managers to manage our engineering teams?"

As if management is all you need. Because you actually need both skill sets, right?

You need to be literate in the human domain and in the technical domain, and in how they interact with each other, in order to produce good production systems.

And if you don't have observability, it's going to be very hard to like tune and improve these systems.

'Cause it's like putting your glasses on before you start working in your workshop, right?

If you're as blind as I am, working without glasses is not a good idea; you're going to waste a lot of effort just fumbling around going, "Oh, whoops, that's the wrong hole, you know?"

Oh, whoops, wrong wire. Right?

If you can't even see what you're doing, everything gets harder and slower and more error prone.

Yoz: Right. So when you've been doing this, when you've been creating and growing this company, what are the major scale shocks that you've found, the things that you weren't expecting?

Charity: We're still quite small.

We're still at about 50 people. We've just gone through a growth spurt too.

This might be a bit of a diversion, but I am such a passionate believer in the power of small, lean teams to move fast.

Like a fucking arrow to the target, right?

The larger you get, the more powerful you are in some ways, the less powerful you are in other ways.

So Honeycomb's had nine or 10 people writing code for the last two years, for everything from the storage engine on up.

Liz: We're finally growing that, like, we're finally getting towards like 15, 18 engineers. And it is such a relief to have that slack.

Charity: But that's everything: the storage engine, the query planner, the APIs, the SDKs, the application, the UI, the UX, the integrations, security, fucking everything.

And yeah, we were a bit understaffed.

That's a lot of surface area for like 10 people to cover.

But also we covered a lot of ground, right.

Liz: The snarky, sarcastic thing that I'll say is like, we are running circles around, like, my former team at Google Stackdriver. Right?

Like Google Stackdriver had like 200 engineers.

And it's just so slow, wading through molasses to get anything done.

Charity: All of our competitors have an order of magnitude or more people than we do.

And while we need a few more people like, we just doubled the size of the team to, you know, almost 20, like Liz said.

But like constraints are good. Right? Focus is good.

Making the ruthless decisions every day not to do all of those things, and to focus on this one thing and doing it well. And I will say that having observability means we don't waste half of our cycles just thrashing, like I think most teams do. But I don't think that size is necessarily all good.

I think that the parts of our brain that want to build little empires need to be talked to sternly, tamped down, and replaced with an appreciation of autonomy, mastery, and meaning, right?

That's what gives us joy in what we do.

But I feel like the small teams who are treated like adults--

One of our company values is we hire adults, right? Like we trust people to be autonomous.

And you know that trust comes with a lot of responsibility, you know, you can just--

Liz: Not just autonomous, right?

Like, I think it's also the emotional component of we trust people to be emotionally mature to be able to handle like giving and receiving feedback.

Charity: So you're asking about scaling systems.

And I think that, like, what I'm trying to say is we're five years in and we're still only 50 people.

And we haven't really had a lot of, I mean, there's been the obvious--

The moments when Christine and I recognized we weren't allowed to like touch code anymore or like build things ourselves, you know, those are fairly pedestrian.

I think that I'm just trying to be like, I'm a real cheerleader for staying as small and compact as possible for as long as possible.

I think that it is underappreciated.

Liz: Hmm, I would say some of the stress points that I've personally seen have been you know, certainly the switch of Charity and Christine swapping the CEO and CTO seats.

That kind of was one moment of stress.

I think another moment of stress was kind of us now at 50 people having to look at our values and reexamined our values.

I think that that kind of, you know, those two are particularly interesting moments that we might be able to dig into.

Yoz: Certainly it would be fascinating to dig into what's been happening with the company values.

Charity: Sure. So about two and a half years ago, we published our company values in a blog post, and the values are basically: we hire adults; fast and close to right is better than perfect; everything is an experiment; feedback is a gift. And do I still remember this one? Yes: we do it with style.

Yoz: Oh, yes. From the Tao of Linden.

Charity: It's the Tao of Linden.

Yeah, but those values came out just from me and Jinsu sitting down, me free associating and Jinsu wordsmithing.

And while I think they've been fairly good values for us, like I think that they reflect who we are as a company fairly well.

I think that what I was increasingly feeling was a lack of legitimacy, almost? Though legitimacy is too strong a term.

Maybe a better word would be just: they weren't well understood.

Liz: Yeah, there was a lack of shared understanding of what they meant and how to apply them.

Charity: There's a lack of shared understanding.

And so we decided to revisit them. We were actually going to kick this off toward the first of the year, but then of course the world ended and the pandemic started.

We were like, this doesn't feel like a necessary thing to drop on people right now. But we're circling back to it now, 'cause we're going through a big stage of growth and it feels necessary to re-examine them and figure out: are these still our values?

Have they changed? What are the scenarios and what are some stress tests that we can apply to them?

So we divided up the company, 50 people, into five groups of 10, right.

And we've just been, you know, leading each group through a set of exercises.

And we assigned one company value to each team to workshop.

And we're about halfway through this process.

So I don't yet know what the result will be.

It's a little bit terrifying and dizzying, but this is a brilliant group of people, and I think it will be good.

Yoz: Have you had any experiences in so far in this exercise that have been surprising for either of you?

Charity: So I am very cynical about company values.

I rolled my eyes and groaned about having them, because every time I see a company's values on their website, it just makes me want to vomit.

I don't think more highly of that company. I'm just like, oh great, "We strive for excellence," huh? That's nice, you know?

Or just like, we aspire to be the best, you know, it's just like, fuck you.

It's just like, you take yourselves so seriously.

And I just have no regard for this whatsoever but I will say that like I am coming around to believing that they are a way for us to invest in the longevity of what I do care about.

I care about treating people like adults. Like I hate being infantilized.

I want the company, even if I were to leave, you know, or be demoted or whatever, I would want the company to continue to spend its money on healthcare and you know, not like kombucha fountains in the office. Right?

Like there are ways in which I feel like, if you get the values right, what it does is it frees people up to make decisions knowing what you would say if you were in the room, even when you aren't.

Yoz: Right and that's to me one of the most valuable things about values.

And I share some of that cynicism of Charity.

When we were both at Linden Lab, we saw the values there. It was described as culture rather than values a lot of the time.

Charity: One of them was "You choose your own work," and that rapidly became problematic.

Yoz: Oh yes, it really did.

That was one of the first ones to go when leadership changed. But ultimately values are, you know, a decision-making tool, and especially a way to show what is more important to us than money, right?

What are the things that matter when it comes to the crunch?

I think, as you were saying earlier about testing things in extremis, that is the point at which the values are most important.

And if you don't stick to your values at those points, then they're not your values.

Liz: I a hundred percent agree with that.

And I think one of the, you know, to answer your question about what was surprising.

I think that it was surprising when people were willing to indulge me when I was like, "Hey, how can this value be misused?"

Right, like to think of the potential for harm, right?

Like people had a surprisingly positive reaction towards like, let's think this out. Right?

Like, you know, for instance, one of the values we were workshopping was about being contrarian, right?

Or taking a different path, right? Does that mean that people will be insubordinate?

Does that mean that people will be devil's advocating all the time. Right?

Yoz: Right.

Liz: And we've seen interesting things like that, where, you know, at certain unnamed companies in the Pacific Northwest, there's this idea that if you give negative feedback about someone, they give negative feedback about you.

It cancels out. But if you're nice and don't give negative feedback, then you just end up on the receiving end of, "Hey, you know you've had X complaints about you, right?"

Like you don't want to have a culture like that. Right? Like, or maybe you do.

Right but it's important to make sure that you're not arriving at that kind of thing by accident because people found a way to abuse and manipulate your values.

Kim: I'm kind of curious when you say stress test, what does that mean?

Charity: It means you come up with hard scenarios. Where we spend our money is a good one.

Another is, come up with an example of a time where you would need to use this value.

So yesterday one of the teams was going through this, and their value was: you are not alone.

And just like something about, you know, camaraderie or something like that.

And one of the stress tests they came up with was someone in our community submits a pull request that is not up to our standard.

What do we do? And the outcome that we want is for someone to reach out to them, talk them through it, help them understand, you know, work with them to make it better.

So like, what should the value be that would inspire people to do that, right?

How do we phrase it so that people can use that to guide their behavior?

Another one, of course, is discretionary spending: if we were to choose how to spend our money on our people, could we use the values for that?

We should still be comfortable with that feeling of this is completely new to me.

It's very disconcerting and very unfamiliar.

And I feel like our natural response is to freeze, to get defensive, and to look around to make sure we're still in control.

And we need to practice not being like that, because it's so damaging.

Liz: In other circles, like in leadership development circles, this is described as growth mindset, kind of to use the formal term.

But I think that for company values, we should, you know, not use the clinical term.

Charity: I don't like cliches, yeah.

Kim: There's something I'm curious about.

How do your values come into play when you're thinking about building Honeycomb?

Like, are there moments where you decide, this would be an interesting feature or offering, however it clashes with our values, or it supports them? How do you decide what to build and what not to build, and do your values ever come into play?

Charity: That is a great question.

So, this is going to sound a little bit egotistical, but bear with me.

I think that part of what makes a company unique, you know, is that a lot of it emanates from who the founders are, right?

And so I feel like, in the early days, I don't think there was any self-conscious use of values--

Christine and I just knew what we wanted, so, so clearly, right?

And so the values work is much more like, okay, how do we codify this?

How do we make it so that other people have access to the same values and principles that we were making those decisions on?

That is one of the stress tests we have: how can this apply, or how should this apply, not only to the product and how we build it, but to our customers and how they interact with it, right?

Which is why curiosity is so clearly central. Our product is nothing if not the distillation of being curious about your systems, right?

Following that "I wonder what this is," not shutting down in the face of uncertainty, but leading people on a friendly path to a novel answer.

Liz: I think the other fun one is the statement that everything is an experiment, right?

Like that we celebrate trying experiments and seeing whether or not they work.

And, you know, we don't necessarily have to commit to getting everything perfectly polished or to shipping everything that we build.

Charity: We often abbreviate this as just EIAE.

And in the early days, Christine and I would just shoot an EIAE to each other all the time in chat. Because, you know, Christine is a bit of a perfectionist, and a lot of times we would get blocked and be like, oh, this feels like a big decision, you know?

And just like reminding each other, just an experiment.

Every decision can be unmade, everything can be undone. You know, just do something, try it, and move on.

It was really liberating.

Yoz: A lot of companies are heading toward that. Certainly the kind of blameless culture in systems has been a huge help in getting people to have more confidence to make bold decisions, and to make mistakes that they learn from.

But something that is still hard, and I'm interested in how Honeycomb deals with this, is how do you learn as a group?

How do you take the learnings out of a particular experiment, whatever the success or failure, and spread them across the rest of the organization?

Charity: Yeah.

Liz: Blameless is a company that I'm advising that is focusing on exactly this.

How do you get people to systematically conduct retrospectives, how do you get people to systematically look at the lessons?

How do you distill those lessons and spread them across the company?

And, you know, their product is more oriented at large enterprises where you can have hundreds of retrospectives per year, rather than, you know, a dozen or fewer retrospectives per year.

But kind of distilling those insights and saying, you know what, we've had five or 10 failures that are all related to lack of feature flagging, or we've had five or 10 failures that are all related to deploy systems.

These are the areas that we can just systematically invest in our infrastructure, right.
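The roll-up Liz describes, spotting that many incidents share one contributing factor, can be sketched by tallying tags across retrospectives. The incident data and tag names here are invented for illustration:

```python
# A sketch of aggregating retrospective findings; incidents and tags are illustrative.
from collections import Counter

incidents = [
    {"id": 1, "factors": ["no-feature-flag", "deploy-tooling"]},
    {"id": 2, "factors": ["deploy-tooling"]},
    {"id": 3, "factors": ["no-feature-flag"]},
    {"id": 4, "factors": ["config-drift", "no-feature-flag"]},
]

# Tally contributing factors across all retrospectives.
counts = Counter(f for i in incidents for f in i["factors"])
print(counts.most_common(2))  # → [('no-feature-flag', 3), ('deploy-tooling', 2)]
```

The factors that recur across many incidents mark where systematic infrastructure investment pays off, rather than treating each fire as unique.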

So I think that that's kind of one angle but I think that almost like that can be a little bit too clinical in that we, as human beings learn from stories, right?

Like they're kind of these lessons that cause you to not make that mistake again, if you've ever lived through one of them. Right?

And I think that spreading those stories is another very powerful way of doing it that's harder to productize and systematize.

Charity: Something that we've talked a lot about as a team is, just in our tool and our product.

How do we bring everyone up to the level of the best debugger in every area?

Because when you're working on a service or something, you know everything, you know it intimately, right.

You know how it lives and breathes but then you move on to another service. Right?

And your knowledge of that service decays.

Likewise, when you're having to debug, you know, a request that's failing, you don't only have to look at your part of the system, you have to look at the entire system.

I mean, I don't know fuck all about the part that Liz is working on or part that you're working on. Right.

But in the product, you know, we have baked in a sense of history.

You know, you have all of the queries you've ever done.

You know, your team's history is there.

You should be able to look and go, "This looks like a MySQL problem, but I don't know anything about MySQL. But Ben is the company expert, so I'm going to look at how Ben interacted with the system the last time he had a problem like this. I just want to see what questions he asked, what he thought was meaningful, what he attached to a post-mortem, what he attached comments to," and that'll probably take me within a tweak or two of the answer.

And we, you know, we see this and we only built like 1% of what we want to build here of course.

But there's a sense that it should be your outsourced brain.

It should be accessible to everyone by taking things out of our brains and putting them into the tool.

It has a very democratizing effect.

You know, like we've all worked with these systems where the person who's the best debugger is the person who's been there the longest.

And it's basically insurmountable; you can't ever catch up with them, because so much of their ability to be expert in the system is bound up in their scar tissue.

The number of incidents that they've been exposed to, their ability to just pattern match and recognize anything that's going wrong--that doesn't scale.

And it's not fair, 'cause it's all in your scar tissue, right?

Liz: And then you can never take a vacation.

Charity: Never take a vacation. It's an anti-pattern for sure.

Kim: Something that I find interesting that's come up a lot is how you have scaled your team, and a general theme and topic that we're interested in is how systems scale.

I mean, the fact that Honeycomb can exist right now is pretty unique and interesting.

I don't think it could have existed the way it does, even five years ago.

Liz: At least as a startup rather than you know, something at Google or Facebook. Right.

Kim: Exactly, yeah. And Charity, you just said you've built maybe 1% of the things that you have in mind.

I mean, what is that next step? What is that next level?

When Honeycomb scales again, like doubles in size, or just hits that next level up and continues to grow?

Charity: We're definitely on a very steep upward trajectory at this point, you know.

I think that we're going to be working a lot on the bottom-up motion, you know.

Making it easy for individual engineers to get onboarded and start using us over the next year or so. But the pace is quickening.

And I see this as, yes, we've built some things that helped us achieve product-market fit, that made it easier for people to get data in and get understanding out.

But it's almost more like the pace at which the world is coming to meet us. Right?

Like the world is changing their expectations of their systems.

They are changing their expectations of their tools.

In large part in response to things that we have said.

But it's funny and weird watching it happen because once the process has started, it's completely out of your control.

And I credit many of our competitors with helping to spread it.

They have adopted so much of our messaging, and just wholeheartedly started preaching on our behalf.

And their megaphones are much louder than ours are.

So they're kind of doing us a favor. I don't know.

Like if you look at the sheer number of companies that have changed their strategy over the past year or two, and they're now saying that they are an observability company.

Or they are building observability tooling, like everyone from fucking Elastic hired a product manager and asked them to go build them a new Honeycomb, you know, like--

So we've got database companies, monitoring companies, you know, APM companies, logging and security companies, TSDB companies, you know, it's just like.

There's this constellation of people who are all just, like, trying to get to where we sit technically faster than we can get to where they sit in terms of their business.

Liz: I think there's another element to be kind of only having built 1% of the surface area that we would ideally want to have, which is that I do not see Honeycomb as a tool that is just about querying data.

I see Honeycomb as kind of your lab notebook, right?

I see Honeycomb as where you go to perform an investigation into what your application is doing. Right.

And I think that, you know, I've been pushing Charity ever since I joined the company.

And she's always like, "We don't have the resources to do that, Liz. We don't have the resources to do that, Liz." Right?

Like what does it look like to have almost like Google docs, right?

Like Google docs for debugging, like being able to actually collaborate side-by-side as if you're kind of sitting next to each other, right.

Sharing a mouse, like, you know, spotting new graphs like, that kind of collaborative thing just, you know does not exist anywhere, right.

I would love it if Honeycomb were the ones to build it, right?

That's the kind of thing that's pie in the sky; we don't have the resources to do it right now. But it is something that we know would make every single developer's life easier.

Charity: There's a lot of stuff that goes into this.

And part of it is it, like, we are just now gearing up to invest in design, in a real big way.

And I have been coming to the conclusion over the past six months or so that, you know--

There were so many things that we have engineered the shit out of that just haven't connected with our users. Or they just can't find them; they don't see them the same way we do; it's buried. Our vision of how they should interact with the tool isn't making it out to them. And I've realized that this is a design problem. This is a design problem and a product problem.

And so now we've got more funding, and one of the first things we did was invest in design, because we had zero design people for most of the last two years.

We had one for the last year, and now we're going to have like six by the end of the year. These are design problems.

And a year from now, we need to have as high-caliber, as highly functional a design team as we do an engineering team.

Liz: Yeah, 'cause like we have never had a problem with kind of getting that first user in.

The problem is user number two. Right? How do you get adoption within a company?

And that only comes if the tool is easy to use.

Charity: Yeah, well, the first year or two we struggled with incentivizing users to enter words into the tool.

To describe what they're doing or what they're seeing.

Like you would think that would be pretty easy.

The value is enormous, right? Like humans attach meaning to things.

That's what helps you share.

And we've tried shipping annotations like three or four times; we just haven't been able to figure out how to do it.

This is a design problem, right?

So we just need to go whole hog on design.

And I believe that we're going to start taking baby steps toward the world that Liz wants and that I want. But it starts with making it so that we're taking bits of wisdom out of people's heads and putting them into the tool in a way that's accessible to other people.