The Right Track
81 MIN

Ep. #10, Getting to Know Your Users with Boris Jabes of Census

About the Episode

In episode 10 of The Right Track, Stef speaks with Boris Jabes, CEO and Co-Founder of Census. They discuss the impact of SaaS on product releases, the challenges of stale data, insights on data ownership and how product management has evolved.

Boris Jabes is the CEO and Co-Founder of Census, a platform for operationalizing data through a process often referred to as reverse ETL. He previously co-founded Meldium, a groundbreaking account and password management solution for teams, acquired by LogMeIn in 2014. Prior to this, Boris was a senior Product Manager at Microsoft, where he worked on Visual Studio, a widely adopted IDE.


Stef Olafsdottir: All right, Boris. It is a pleasure to have you on the show.

Could you kick us off by telling us a little bit about who you are, what you do and how you got there?

Boris Jabes: Sure. Thanks for having me, Stef. It's a pleasure to be here.

My name is Boris, Boris Jabes. I am the CEO of a company called Census and we build software for operationalizing data, which means bringing analytics into every application in a company so that business users and people in the company are all powered by the work that analytics teams do.

I've been working on this for over three years now and my career before that has always been in, what I call, building tools.

I started my career at Microsoft where I worked on Visual Studio, which a lot of people are users of to this day.

In fact, a lot of people in the data world are just now discovering the use of a code editor, which is kind of interesting.

Then I started a company about a decade ago called Meldium that was a security company helping people manage their passwords and these kinds of things.

Stef: This is of course a very impressive background, I have to say.

Boris: I'd say it's eclectic. It's eclectic, for sure.

Stef: Microsoft for seven years, that is probably a very strong kickoff into a strong product background, and likely the reason why you went on to found, to date, two companies. Can you talk a little bit about your role over there and how that-

Boris: Yeah. Not only did I start my career at Microsoft, but I started it at a time when the Web as a platform for applications was just being born, right?

So it was before Google Apps and all of those kinds of applications. We were just at the birth of what we now call today SaaS, and so the majority of software, not just at Microsoft but everywhere, was built to be in a box.

This is what we used to call it, and you work on it for long periods of time and then ship it out for the world to use for potentially up to nearly a decade in a supported fashion.

So as a product manager working on that, or in my case the Microsoft title for this was a Program Manager, the onus on getting it right was very high because you couldn't iterate on the product on a week by week basis the way that we can today.

So today when you have a piece of software and people don't like something or you show something that doesn't really land with customers you can just fix that. You can remove it, you can change it, you can add to it. But imagine if you shipped something and it was out in the world for something on the order of like five to ten years, so you'd really want to spend more time gathering feedback from customers, narrowing down what should be in the box and really honing it in.

Not just from a bugs and quality perspective, but also in terms of what features should we build, which ones are strategic and which ones should go in a product that comes out three years later?

So in hindsight, now no software is built that way pretty much, maybe outside of operating systems, and so everything now we do in an accelerated fashion.

One of the things that's great is that I learned a lot about thinking strategically in multiyear timeframes for products, and then after I left I had to unlearn that and learn how to ship on a day-to-day basis, which is a lot of how we ship nowadays.

Stef: Unlearn it?

Boris: Yeah. If you think about a product as, once it goes out, fixing it is very difficult, if not impossible.

So one of the most expensive things at Microsoft was what they called a Hot Fix.

So if there's a bug in your software, remember it was on a CD, right? It was shipped around the world, and so now you have to fix it.

How do you get that fix in the hands of people? Even just shipping the fix was difficult. And then how do you apply it?

You have to be able to make a patch that runs on the software that people installed on their computer potentially a year later, and it has to be able to work even if they haven't installed the other patch that you released a month before.

So it's a very difficult software engineering problem, and so you were very careful about what you ship. You're really trying to minimize damage.

Whereas in modern software development, especially with SaaS, not only can you fix things immediately if you break them, you have to embrace a much more experimental approach because-

Stef: You have the opportunity to embrace a much more experimental approach?

Boris: Exactly. And I think if you don't, someone else will. So shipping early and often became the new mantra, and up to a point, right?

Obviously you don't want to have massive failures in front of customers even if it's a web based piece of software that you can fix within hours or days.

Obviously you have to be careful here as well. But it should shift your baseline to, why not ship it now?

Versus when you ship software that's in a box that's very hard to update, your default... I used to say this, this is a very famous phrase inside Microsoft that I've since then kept, I think it's used in other companies too, which is that, "Every feature, every idea started with negative 100 points."

Because the potential danger of the problems that it causes are very high.

There's also user overload, people shouldn't have too many features. You should often focus on a few things.

But it was this way of thinking, and I think when you can ship all the time you can flip that script a little bit and say, "Why not ship this and test it and see what people say?"

And then you have new problems, like maybe how do you retire features that you've shipped because they don't have enough usage?

So I think that's what I had to unlearn, and then in SaaS or in a startup you have to learn new things like being able to ship quickly but also be willing to realize that this feature that you shipped is not having the momentum or the effect or the upside or the kind of usage that you wanted and so you have to cut it. Which is very difficult for most people.
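The loop Boris describes, shipping quickly and then cutting features that don't earn their usage, can be sketched as a minimal feature-flag registry that counts exposures and nominates retirement candidates. This is an illustrative sketch; the class and method names are hypothetical, not any real flagging library:

```python
from collections import Counter

class FlagRegistry:
    """Minimal sketch: gate features, count exposures, surface low-usage flags."""

    def __init__(self):
        self.enabled = {}           # flag name -> on/off
        self.exposures = Counter()  # flag name -> times a user actually hit it

    def register(self, name, enabled=True):
        self.enabled[name] = enabled

    def is_on(self, name):
        on = self.enabled.get(name, False)
        if on:
            self.exposures[name] += 1  # record real usage, not just the rollout
        return on

    def retirement_candidates(self, min_exposures):
        """Features that shipped but never got enough usage to justify keeping."""
        return [n for n, on in self.enabled.items()
                if on and self.exposures[n] < min_exposures]

flags = FlagRegistry()
flags.register("new_editor")
flags.register("legacy_export")
for _ in range(50):
    flags.is_on("new_editor")   # heavily used
flags.is_on("legacy_export")    # barely used
print(flags.retirement_candidates(min_exposures=10))  # ['legacy_export']
```

The point is the last call: usage data, not opinions, nominates the features to cut.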

Stef: Yeah. You're touching on something really, really like a fundamental shift that is the reason why product analytics exists, right? Today I would say.

Boris: Yeah, basically.

Stef: And why it's grown so huge and why we're seeing all of these companies that support people's journey on updating datasets that have to change every day or every week, because you have to change your product analytics every time you ship a new product to be able to rapidly respond to the decision to ship this thing, and it was a completely different concept.

Boris: Yeah. You always need to have creativity, right? I think analytics doesn't take away the need for people who develop products to be creative, thank goodness.

Stef: Agree.

Boris: But it changed the feedback loop, right? That's really what it changed.

To put it in perspective, the feedback loop for software that you ship, again, in a box would be you might have emails and user groups that you could find, you could get in front of people and get their feedback in person or get on calls and see how they use the software.

When you were trying to ship on the way to something you would build betas, then people in focus group rooms with the fake mirror, you know all that kind of stuff, to try to see how they use the product.

Those are the things you had to do, and even then you're doing it on a very, very small, very small sample of the population, and so you're always getting a very myopic view of your users.

The Web changed all that and product analytics changed all that, because now you can get very fine grain information on most people that use your product.

Stef: Yeah. Inspiration for a lot of tangents as we love to go on, so that's very exciting.

Boris: Yeah. People should realize they have it on Easy Mode now. My old man card is that--

We used to have virtually no data about how people used our products and the data that we had was very biased and limited.

Stef: There are a few things here that I definitely want to touch on. One of them is the Program Manager versus the Product Manager.

The other thing, I just can't skip over the fact that you worked on Visual Studio. I can't skip over that.

Then this concept of unlearning the desire to ship very carefully versus the ability to be able to ship with experimentation and rapid learning. I wonder, and would love to get your thoughts on it, how have people that have been in this industry for a long time...

I mean the industry, the software industry and the product development industry and the SaaS industry is pretty filled with a lot of young people right now because it blew up, obviously, and became like one of the most sought after careers in computer science and all that stuff.

How do you think people have managed to unlearn this, that have been in this industry for a really long time? How was that shift?

Boris: So I think maintaining a beginner's mind is super important in life and in any field of study, and thinking that you know the answer to most things is very dangerous, even as you develop a lot of wisdom over the years and eventually decades that you work in an industry.

I think the best first step for people to stop being overly careful at shipping software, and instead to embrace a culture of experimentation and, let's be clear, a culture of failing in public, is to realize that you're going to have failures even when you prepare a lot as well.

So there are really famous examples of carefully planned products that Microsoft and other companies shipped throughout the last many decades that would still have massive mistakes, super costly recalls and, like I said, Hot Fixes.

I would actually spend a lot of time, when I would interview Program Managers at Microsoft, putting them in these situations where it's like, "How do they think about their mental model for how to be careful?" Especially I would put them in these situations of, "It's now weeks or days before the CD is going to get pressed. You're right at the end. The product or the game is going to go out. How do you think about bugs that you uncover in that phase, where every bug that you fix could uncover new bugs, and so just changing the code at all is dangerous?"

So we had this concept of a code freeze near the end, and it was the role of a Program Manager to help decide what would stop the presses versus not stop the presses, because it's a whole machinery that you're stopping in that case.

I mean, to some people who would work in a newspaper, it's almost like you're going to print a newspaper, right?

So it's like, "What's a story that is sufficiently important that we would pause printing the newspaper and put it in?"

Again, now that doesn't make any sense because you can just put it on the Web. The truth is that you will make mistakes in that process, and I lived through some and the company as a whole lived through many of those long before my time and in parallel.

So I think the best way to embrace this kind of SaaS mindset, the experimentation mindset, the always be shipping mindset is that you're going to screw up regardless so you may as well screw up quickly and fix things quickly.

That's my best way of making it obvious to people that even as you get older in the industry, you should maintain that kind of mindset.

Stef: Yeah, I like that. There are two analogies that come to mind for this. One is just staying in an industry for a while and having a difficult time adapting to change.

I feel like that's a classic story of anyone I know who becomes a teacher.

They become a teacher and they have this vision for changing the educational system and they are met with a bunch of teachers that have been in the industry for a while, and they don't necessarily have the same visions.

But also they're a little bit on their heels and on the fence, like, "No, no, no. That's not going to work." That's the mindset, rather than, "Let's do this!"

And I think that is a tough situation to be in so it's probably interesting to be a part of this shift in a company that's been around for a really long time.

Boris: I also think it's just really difficult to truly know your users. That goes both ways, right?

Where there's the famous, all the famous stories of people just wanted a faster horse, but in reality you had to give them a car.

Or everyone was angry at every change to the Facebook Newsfeed, but every time that turned out to be the right move afterwards, right? With tremendous societal effects down the road.

So even when you live and breathe a product, and the job of a Program Manager was very much to immerse themselves in the user, understand the needs to be able to prioritize the work that the engineering team was going to do. It doesn't matter how good you are, you will still have some blind spots.

I'll give you one of the classic stories that occurred before my time at Microsoft, but it was so famous internally that it had turned into a parable and it was used in interviews all the time.

One of the most famous products that Microsoft ever built was Microsoft Flight Simulator. They've been making that product for nearly 40 years now, over 30 anyway.

It's a game where you fly a plane and it's pretty boring unless you're a pilot because it's flying a plane in real time.

It actually doesn't go fast, it's not a fighter jet game, you're just flying a plane. So you can get in a 737 and you can fly it across the Atlantic Ocean.

Long ago they were about to ship the game and it was weeks before shipping, and someone found a bug.

The bug was that there was a bridge missing, and the Product Manager decided, "We can live without a bridge. That's okay. It's not like a runway is missing on an airport, right? You can still play the game, so it doesn't matter. It's a bridge."

Stef: You mean like you couldn't see a specific bridge that was supposed to exist in the world?

Boris: Yeah. So remember, the game is living in the real world, right?

It's Planet Earth and so you take off from airports, you land at airports, you have to simulate the engine.

That's the core of the game. Then of course there's scenery and somewhere in the scenery there was a bridge missing, so just imagine a bridge was missing.

It's like one of the bridges in London, let's say, was missing or something like that.

The team decided, "It's not worth fixing that because it doesn't affect the core gameplay. It has nothing to do with flying the plane, taking off, landing, et cetera."

This turned out to be a huge debacle because the players of Flight Simulator were pilots, that's the player, and they don't just think about the runway and the airport and the plane and the knobs.

They think about the landmarks when they fly because they're simulating flight, and this bridge turned out to be a very important bridge for pilots because it was the bridge that signaled, "Now you turn left and then you go land." And so they were missing this landmark.

Stef: What a twist!

Boris: Right. And so something that seemed trivial to the team because they were not pilots was actually a big deal to the user, and even if you had been a pilot, could you have flown everywhere in the world?

No, so you'll never know all the things that really matter to your users. It's like even the best prepared is still going to hit this problem.

Stef: Awesome. That's a great story. I love that story.

Boris: And I'm not sure data would've solved that, by the way.

Stef: No, exactly. We could've seen a lot of pilots that were flying that route just were failing miserably and going off track exactly on the spot where the bridge was. I don't know.

Boris: See, even there you think about the feedback loop as in the data, but the feedback loop became anger, right? It was anger that you had broken a fundamental user expectation, which is-

Stef: The presence of a bridge, of a landmark.

Boris: Yeah. The presence of a bridge, the presence of a landmark. Exactly.

Stef: That's a great story. The other analogy that I was thinking of is just the shift that's currently going on really in the analytics space, which is we're going a little bit away from a centralized BI team and analytics being something quite many steps removed from the product team, for example, and into being just very integrated.

That's the biggest goal of most of the folks that I talk to, it's just like there is no connection between there.

And, unfortunately, which is really tricky, one of the things I've learned is, and I say this with utmost respect for everything around data engineering and data engineering is super, super important as a role and we do it all the time, all day, every day as data analysts or data scientists.

But it's interesting to me that there is this... I've seen a trend where data engineers lack trust in their product teams or Product Managers managing their data, because they're so burnt by having to pay the debt when someone on the product team shipped bad data.

And so, they want to have full control over all of the data that gets shipped, rather than trying to bridge a gap and get a little bit closer to a collaborative data governance and collaborative analytics releases.

Boris: Sure. There's a lot to unpack in what you said. I see a lot of change if you ask me in the 15 year timeframe that you're talking about.

One is absolutely what you said, which is that there's a much greater amalgamation of data.

So if I think back to then, there was product data and then there was business data, and those were two different worlds.

Then if you think about product data 15 years ago, and obviously I'm going to date myself here, it was very difficult to acquire analytical data about your products. The way that Microsoft and other companies acquired analytical data about usage of their products, the way they would capture those metrics about who is using what feature and are they clicking on these buttons, was in an opt-in form, right?

So most people who have had a Windows computer will have seen this where you get this popup and it said, "Would you like to join the customer experience improvement program?"

If you didn't say yes to that, then there was no telemetry sent from your computer to Microsoft so they didn't know how you used Microsoft Excel, they didn't know how you used Windows.

And of course what telemetry you were gathering had to be planned way ahead of time, so just the quantity of information was sorely lacking.

Now of course Microsoft is so large that even if only roughly 10% of people would opt in, 10% was still a massive number of people, so the amount of telemetry about Windows and Office was still very large.

But it had to be planned ahead of time, you had to know exactly what questions and what telemetry you wanted to ask, and then you had no control over the biases that would be built into who opts in versus who doesn't.
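The sampling bias Boris mentions is easy to demonstrate: if heavy users opt in to telemetry more often than casual users, the opted-in sample overstates average usage. A tiny simulation with made-up numbers (the segments and opt-in rates are invented for illustration):

```python
import random

random.seed(0)

# Hypothetical population: 20% power users (40 sessions/week), 80% casual (4/week).
population = [("power", 40)] * 2000 + [("casual", 4)] * 8000

# Opt-in rates differ by segment: power users opt in far more often.
opt_in_rate = {"power": 0.25, "casual": 0.05}
sample = [sessions for kind, sessions in population
          if random.random() < opt_in_rate[kind]]

true_mean = sum(s for _, s in population) / len(population)
sample_mean = sum(sample) / len(sample)
print(f"true mean sessions: {true_mean:.1f}")    # 11.2
print(f"opt-in sample mean: {sample_mean:.1f}")  # well above the true mean
```

Because power users opt in five times as often, they dominate the sample and pull the observed mean far above the population's, and nothing in the telemetry itself tells you that.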

So that's one big shift that I've seen, we're now awash in data. Getting it is a lot easier than it once was.

In fact, there are products now for product analytics that will just capture all the data and then let you make sense of it down the road.

There aren't even predefined schemas you need to create; every button click, every mouse movement will just be captured.

Stef: Hot topic, very hot topic.

Boris: Right? Exactly. So now on the product side, A, we're awash in data so I think that's the biggest shift over the last decade plus.

Then in terms of the whole business, I think to your point, the functions are merging not just because there's more of it and we have to specialize in the processing and management of data, but because in a SaaS company the interconnections, actually even in any company the interconnections between product and business are omnipresent.

Whereas the connections between those before were loose, right? It was like, "We sold Windows, here's our revenue."

How people use Windows didn't matter to the finance organization. But now pricing, for example, is something SaaS companies change all the time, and pricing is tied to usage of the product and which levers should we use in the pricing models?

All of that is interconnected so you need to join more kinds of data and the BI function cannot solely work on business metrics.

They have to understand product data. And so all of that has to have merged, right?

And I think that's what you're pointing out, that you now have to have a shared understanding and at least a shared substrate of infrastructure so that those data can be connected. You have to be able to join that data at the very least.

Stef: Exactly. Yeah, and like you were talking about, right now business cares about product usage for a few different reasons, and one of them is also just...

You mentioned impacting pricing and all that stuff, but also with SaaS what we see is all of these subscription models, and ultimately people will unsubscribe if they don't get the value from the product that they need.

So we've entered this shift also of business caring about product usage and product experience and retention because it highly impacts the revenue predictions, for example.

Boris: Yeah. If you were to look through the various ways people use our product, I think you would see that managing retention is near the top and Census at its core is used to sync data from your BI, from your warehouse, from your analytics function, out into business tools like a CRM, like a marketing tool, et cetera.

And this retention function, whether that's customer success in B2B or retention specialists in B2C, or even worse like at your company in the United States, they need the detailed information about what people are doing in a product in order to serve their users and, more importantly, to be proactive in telling them about the parts of a product they're not using or catching them when their usage starts to drop so that they can hopefully right the ship. It's like an interesting symbiosis of automation with the product playing a role, analytics playing a role, but also humans playing a role.

It might just be that you're going to automate when to make a phone call and find out what the company needs to do to change course.
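The sync Boris describes reduces to "run SQL against the warehouse, push the rows to a business tool's API." A minimal sketch using SQLite as a stand-in warehouse and a stub CRM client; the table, columns, and `crm_update` function are hypothetical illustrations, not the Census API:

```python
import sqlite3

def crm_update(records):
    """Stub for a CRM API call; a real sync would POST these records to the CRM."""
    return {r["account_id"]: r["health"] for r in records}

# Stand-in warehouse holding a modeled "account health" table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account_health (account_id TEXT, weekly_events INTEGER)")
conn.executemany("INSERT INTO account_health VALUES (?, ?)",
                 [("acme", 120), ("globex", 3), ("initech", 45)])

# The analytics team's definition of "at risk" lives in SQL, not in the CRM.
rows = conn.execute(
    "SELECT account_id,"
    " CASE WHEN weekly_events < 10 THEN 'at_risk' ELSE 'healthy' END"
    " FROM account_health").fetchall()

synced = crm_update([{"account_id": a, "health": h} for a, h in rows])
print(synced)  # {'acme': 'healthy', 'globex': 'at_risk', 'initech': 'healthy'}
```

A production reverse-ETL sync would also diff against the previously synced state so only changed records hit the API, but the shape is the same: the warehouse stays the source of truth, and the business tool receives a projection of it.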

Stef: Yeah. Help you get better value from the product, for example. Awesome. So I have to mention Visual Studio. This was, I don't know, a few years after the famous Developers, Developers, Developers, right?

Boris: Yeah.

Stef: Can you talk a little bit about that journey and what was it like to build an IDE right then at that time, when this mindset was going on within Microsoft?

Boris: Yeah. So I think I subconsciously or consciously was attracted to tools. There are products you can put broadly in what I would call experiences.

A video game is of course the most extreme version of that. And then there are products that you would put in the category of tools, as in Steve Jobs' famous line that the computer is "a bicycle for the mind."

I think tools are just enhancers of other people and the work that they do.

Visual Studio I think of as potentially one of the ultimate tools because it is the tool of tool builders.

So if I give someone a tool like a hammer to build furniture, that's great, right?

The carpenter can perform better and build a more beautiful piece of furniture.

But if you make the best IDE, the best development tool, what you're doing is helping programmers be the best versions of themselves and be the most productive versions of themselves, which is like the second derivative, if you will, on the software industry. I don't think I intuited that when I first started working there, but that's what made me most excited to spend time on those problems: millions of developers would use our product, and in turn tens of millions, if not hundreds of millions, if not billions of people would be affected by the work that those people would do.

So I'll give you a great example, you don't even have to go outside Microsoft. I worked on a pretty low level team, Visual Studio is a large organization, and for a while I worked on the C++ team.

So this is the various languages in Visual Studio that you would use, so if you're in data world you might use the Python experience in Visual Studio or the SQL experience in Visual Studio, and I worked on the C++ experience which is the language used to build Visual Studio itself.

Stef: Ah, ha! Meta.

Boris: Right, right. Very meta. But also the language used to build Windows, so we sat underneath Windows, so when Windows had a problem with the compiler, we would get called because we are the ultimate dependency.

So there was a lot of work that we would do which would be about optimizing performance of... we're talking 1%, and that 1% would mean Windows might be 1% faster, which is a mind-boggling result, right?

If you can change one piece of code down at the bottom and then with no work from any other developer, you've basically improved everybody else by 1%, it's just a really, really big deal.

And so I really just felt very early on the power of leverage. I really didn't intend for this to connect to what we do now, but I think what our product does, and what I think is most important for people who work in data today, is to figure out how to magnify themselves, how to lever themselves. How to take their work, whether that's data engineering or analysis or predictions using ML, it doesn't matter; whatever you're doing, what I have found is that in most cases it is underutilized by the company, and your true value is tied to how much leverage you have in all things.

The reason you pay managers a lot of money is because they affect a lot of humans underneath them, so they're very leveraged by a number of people.

The reason you pay a programmer a lot of money is because their line of code is very leveraged, a single line of code can impact millions of users.

And so data professionals should think the same way, and I think the underlying mission statement of Census for the user is, "I'm trying to magnify your output."

I'm trying to say, "If you have a prediction on when a customer is going to churn, don't make a chart. Let's push that directly into the alerting system of the customer success team so that you are impacting the business directly, rather than potentially using a meeting."

And I think that is great for your career, it's also just what everyone should be aiming to achieve.

So I think I learned that working on, like I said, one of the most ultimately leveraged tools in the world which is Visual Studio.

Stef: Great story. And I have to say, I very much relate to wanting to see the data professional's roles be more leveraged, and it certainly is getting there so much with all of the tooling that we are currently building based on the data experiences that we've been having over the past 10 years, 15 years and things like that. It's really refreshing.

Boris: Yeah, I agree. Here's a good litmus test to go back to how would we know that that's happening, I'll give you a qualitative version, which is will we start to hear about famous data analysts, famous data professionals?

Stef: Exciting.

Boris: Right? When I joined Microsoft, the most famous programmers at Microsoft I knew by name.

I never worked with them, but I knew who they were and I think we are still a ways away from that.

But it would start within your company, so in your company how famous is the data team or the data person or the analytics management or whoever it is, the analytics engineer?

It doesn't matter, how well known are they? And that's a good proxy for are you impacting the business at the right level.

Then of course the ultimate version of that is are you known in the industry? Which is the highest kind of level you can achieve.

Stef: Yeah, exactly like a chef's chef type of thing.

Boris: This is good, I used to think about this for managing people.

There were a lot of rules of thumb. People always asked, "What is it to be a junior engineer, senior engineer, principal engineer?"

Same for a Product Manager, same for a marketer, it doesn't matter. There's a lot of ways you define your competencies and hopefully a good company lists out, in relatively detailed ways, what it means to go from level one to two to three to four, right?

But a really useful shorthand I tell people, especially in larger companies, it's harder to do this in a small company.

But in a larger company you would just frame it as like, "What is your scope?"

If you work on a single feature then you're a basic programmer, you're a level one kind of product manager.

You work on a feature, same for an engineer. If you work on a product, or half a product depending on the size of the company, then you're a senior.

Then if you're influencing a whole division, then you're principal. And if your scope of influence is the whole company, then you're maybe a distinguished engineer.

If you're impacting the whole industry, then you're even more than that, right? You're a technical fellow or something.

And so I think it's a really good rule of thumb for thinking about what is your level, it's like, "How many people in your company or eventually in the world, are affected by what you do?"

Stef: Yeah, that's lovely. That's the mission for all of us building tools for the data industry, right? I really relate.

I have a couple of ways for how I see the success metrics of our data culture that we internally built back in QuizUp, back in the days.

One of them was literally how much time does the data team get to actually focus on the high impact, challenging data problems versus just answering some basic questions because nobody else could do that?

Boris: Right. But I think you told me once when you were talking to me about QuizUp, you framed one of the major transitions in terms of when a data team is valued, right?

Or the maturity of a data team was when the product and engineering team listened to the insights that you delivered, right?

In other words, that you drove change in the product and that, to me, is a perfect example.

It's like now your scope of influence has expanded and therefore you should be rewarded in kind.

Stef: Exactly, yeah. And that's the other metric that I used to obsess over, because when we see product developers and product engineers care about data and care about insights, that is high leverage for data quality.

When the people that write the code that generates the data points from the product care about the reason and the outcome of that work, as opposed to seeing it as an analytics task, or just some task they have to complete for a coworker that has no impact on their job, that mindset shift is what creates this huge shift in data quality and data impact, I think.

Which is high leverage for just every single product team that has to rely on that data going forward.

The reason I had the other metric, how much time the data professionals really get to allocate to things like predicting retention or building recommendation algorithms or something like that, is that it's all of the cool stuff that data scientists ultimately applied for the job because of.

I think when that proportion of time goes up, it is because we have created a leverage within the company with tools and with access to data so that anyone in the company can be making that impact and those decisions and applying that to the product strategy on their own, even, without the help of a data scientist.

So that's a foundational leverage shift, and the intermediate step is when the data professionals are actually the people who provide the insights.

Then ultimately we want people to be able to find those insights, which is the same as you were describing with, "Don't just build a chart, just feed it directly into decisions."

Boris: Yeah. It's a good way to frame success. I think it even implies something that is not necessarily true at most companies, which is the more the data organization is in demand, the more it will be involved.

You will have this pressure to turn into a kind of IT organization which is a dependency for everybody in the company, but is a highly reactive dependency where it's a service oriented model, where it's, "We need something, we go get it."

And it's always frustrating for everyone in the company to depend on that resource, but that's just the way it's done.

It's kind of like a law firm, right? You're not scaling; you're underlying everything, like, "We must ask legal, we must ask IT."

But you're always competing for resources against everyone else in the company, every other department.

The way out of that, which you make implicit but I like to make explicit, is to model your team as a product team.

And the reason I do that is because product is a way of delivering value that is leveraged and scaled by default. It's not, "I will hand you this, hand you that, hand you this, hand you that."

I will hand you a system that allows you to do something yourself, and so I think that's the real shift.

Then when you think about what you said, you said, "Percentage of time you work on the interesting stuff versus the non-interesting stuff."

But I think the product framing of that would be, "How much are you fixing bugs versus building features?"

And I think the first step is moving the mindset of the organization to this: what we ship, what we build, is a product, and sometimes it has bugs, like the dashboard being broken.

It's a bug, right? Sometimes it needs to have product support where people don't know how to do something and you show them.

But over time you should frame it as like, "Well, what is the next version of the product? What are we shipping next and how do we drive down the interrupts and problems so that we can drive up quality and drive up the big new ideas?"

Stef: I love that. We've already touched on so many of the areas that I wanted to cover explicitly on this episode.

But I know, and I love to hear, that one of the things that makes data real for people, and makes you real as a person who has dealt with data, is inspiring data stories and frustrating data stories.

It's also just something that helps us unite, like we've all been there. So would you mind sharing an inspiring data story and a frustrating data story?

Boris: Sure, interesting one. I get to see a lot of those from our customers every day, but I'll tell you one.

Since we're doing a walk down memory lane, I'll tell you one from the Microsoft days that is easily Googleable, right?

Everyone can go read about it because it was not a small thing. It was, in fact, an enormous kind of change.

So before the move to the cloud Microsoft was, by and large, lots of products but two big ones, right? Windows and Microsoft Office.

And Office doesn't need an introduction, it's used by a ton of people. Both of these products had been on like Version 132.

I think Excel shipped for the first time in the 80s and has been iterated on continuously since then, same for Word, et cetera.

And so making real change, non incremental change, was very difficult. The older a product gets, the harder it is.

To your point about how individuals get out of their mindsets, it's even harder for the older products. One of the things the data showed in Microsoft Office came from overlaying all of the support requests and all of the feature requests from customers, where users of Office would say, "I love Word but I wish it could do this. I love Excel, I wish it could do this."

A shocking percentage of the time what they asked for already existed in Word and Excel, because these products had been around for 30 years and they were super sophisticated, really, really sophisticated.

So what the product team realized or decided was that the issue wasn't that we needed to build new features, it's that we needed to find a better way to advertise the features that we already had. And so the data was pointing us somewhere, right?

The data was saying, "Look, setting the font to bold? Okay, no problem."

Everyone knew how to do that in Word, right? But there were all these other things that people didn't seem to know how to do or didn't seem to realize you could do.

So the assumption was the menu system was no longer functional, too many things were deep in the system and you couldn't get to it through the menus or you couldn't find it.

Menus were no longer a good discovery mechanism and so they went about a massive shift which is very famous, you can Google this.

It's still to this day the design of Office, which is the transition to what they called the Ribbon.

So they went and said, "We're going to break a fundamental assumption that's been going back since the Macintosh, which is that you have menus at the top of the window with like File, Edit, et cetera, and then you click on a menu and then you get a vertical list of options in that menu, et cetera, because things were too deeply nested.

So we're going to provide this new kind of Ribbon which is going to be a tabbed interface and on it there's going to be a lot of large icons and large sections that, inside those, you'll have drop downs." So it was kind of a 2D version of a menu.

Now, this is obviously the most significant change in Office that happened in the last 20, basically 30, years, aside from moving to the Web.

So if you think of Office and you want to summarize it, the two biggest things that happened was, one, it went to the Web and, two, it developed this Ribbon interface.

Everything else is details. So this Ribbon shipped, and once it shipped, you would see proof of improvement: people using features that had always been there and now could finally find them.

So the most important thing is this was a success, right? This was a success in terms of using product data showing that these features existed but weren't being used, combined with feedback data at large scale.

This was not three people mentioning something. It was feedback about feature requests that had been analyzed and normalized at scale to find that, "Wow, 80% of feature requests are already in the product."

And it was a great use of creativity. You took a problem, people not knowing how to find features, and came up with a really interesting solution to it, because you could've done product tours or something, right?

You could've solved it a lot of other ways, or made more videos, I don't know.

There was a lot of things you could've done and they chose to rearchitect the UI completely.

But this is also a great example of a kind of failure in data, because one of the things this failed to warn the team about is that while there were a lot of people asking for features they couldn't find, there were also a lot of people who were power users of this product, right?

Excel especially. Excel more than anything else has a lot of power users. This is why it cannot really be unseated in the industry: lightweight spreadsheets, whether they're made by Google or by Apple, just do not compare to the depth of Excel, so entire banking products are built on Excel.

The users in those places are like the pilots in Flight Simulator, they rely on very specific functionality and they use it like an airplane cockpit.

So you and I would walk into the cockpit of a plane and we'd say, "This is unusable."

But to a pilot it's very usable because it's the best way they've come up with to present this information.

And so what the Ribbon caused was a break in a lot of the keyboard shortcuts that power users of Excel had used for a decade.

Stef: Which are fundamental, you can see it when Excel power users go into Spreadsheets and they're paralyzed.

Boris: Exactly. And so the data failed to present that kind of emergent problem from changing the user experience, because you couldn't change the user experience and also keep all of the existing keyboard shortcuts, since a lot of them were tied to flying through the menus using your keyboard.

And so this caused a really big uproar. Again, I think this is a really good example of how data can paint a lot of different stories, and uncovering something like power users is an interesting problem in data.

Even if you can uncover them, how to think about those users in terms of features and prioritization is also really, really hard.

So this is a good example of a blessing and a curse. They used data to uncover the problem and get the backing to make a fundamental shift, the biggest shift in the product in decades, and it screwed up.

Stef: And it's also a really interesting conversation because, ultimately, some products are meant to be able to do really intricate and detailed things, but the best way to introduce you to the journey of getting there is not to shove all of those things in your face at once.

And so there's this really interesting balance, generally I think, in product design where you have to make the experience feel a bit smooth in the beginning but still manage to get people to the power user stage and so this is a really good story.

I wonder how it compares with just the general industry ribbon. I mean, ribbons are everywhere. It's fundamental right now.

Boris: I mean, that's really interesting that you say that. This is one of those things that causes hubris as well, but when you reach the scale of Apple or Microsoft in terms of your application usage, anything you do has almost a guaranteed downstream effect on the style of every other application.

So Apple started going flat, more people started going flat, Microsoft did the Ribbon and eventually more people go with the ribbon.

Some of that is because smaller companies just don't have the time to invest in fundamental user experience research so they'll be like, "Let's just do what those guys did. It's got to be correct."

Stef: Exactly. "They've probably learned something cool, we can just follow that lead for now."

Boris: For now. Then of course a good side effect of that, and something that's really undervalued in user experience by the way, and I'm sure there's an analogy here for data teams, is one of the things that people don't think about. I used to not think about it as much either, until I had a colleague who was dyslexic; that really changed my perspective on these things.

People underestimate the importance of consistency of user experience. It's better to be wrong the same way everywhere in your product than to be wrong and right in a bunch of different ways.

So if you've decided, I don't know, let's take an extreme: let's say Microsoft put the X on its windows on the far right and Apple put the X on the far left, right?

And I don't think either one of them is correct or incorrect, but let's just take that, let's just assume one of them was better.

It's still more appropriate for you to be consistent across every application in your system than to mix and match because users really build muscle memory, to your point.

Not just power users, but regular users. And so consistency really matters, and so if you're going to copy a design that's not bad because it actually increases the level of muscle memory amongst the general public.

Stef: Expectations.

Boris: Yeah, missed expectations really hurts, right?

So I think something you probably think about a lot with your users and your product is how to name things, how to name elements in a schema, and I think it's more important for people to be able to come up with a style and stick with it than to be constantly in search of a better scheme.

Stef: Absolutely, +1 on that.

Particularly when you look at all of that. I mean, this is a great example of leverage, because when you name all of the events that represent all of the different user actions, and you name them consistently, that is a powerful tool to enable data discovery for any data consumer in your organization, and the opposite is true as well.

If you name them inconsistently it is detrimental to data discovery in your organization. Awesome. I love that inspiring story. Can you think of a frustrating data story?

Boris: Day to day I think the number one frustration I see here is that people just don't have the data they want, right?

And that has more to do with efficient communication and prioritizing and helping all sides realize that they can help each other by being proactive about what they want and telling them and having a data organization that can deliver on that in a way that is predictable-

Stef: And really empowering, by making sure that each product team, or each team that needs its own data, is empowered to generate that data and use it as well.

Boris: Yeah, yeah. I think where I sit in our organization, it's almost like the kind of frustration that I see is that I know that there's places where we could be operating with data and that we don't, and it's kind of the inertia argument of, "It's not there so whatever, I won't bother that person or I won't go do the work."

And as data-savvy management it's like I know the potential energy that's there that's not being put to good use, right? That's probably my personal frustration.

Then in the day to day, the probably most common kind of frustration is not the lack of data but how easily data gets stale and not knowing whether that matters or not.

So you'll have a source pipeline that will fail, from your side of the world or the product team changed something and now it's like data is missing.

Or it's like one step lower down where the ingestion pipelines are failing for some other reason like credentials or whatever, or size, suddenly you hit a threshold that causes pipelines to fail and it's going to take a while to fix, right?

It's probably the most common way my team and I get frustrated. It's like, "Oh, the system is now behind by like two days."

But also, is that a problem? What is the thing that's affected by that? In our case we have a high sensitivity to that latency because we operationalize a lot of our data, so we're going to send bad trial expiry emails if the data pipelines are stale.

So someone extended your trial, and you get an email saying, "Your trial is over."

So really this is the kind of stupidity you want to avoid, so we have a fairly high sensitivity to stale data. That's probably the kind of frustration that I feel the most.

But if someone were to come at me saying, "Our data is always stale."

I would say, "Well, hold on. How do you build up some tolerance towards that? Do you really need to freak out every time the data is a little behind? Can you create different SLAs for different kinds of data so that not everything is a fire drill?"

So that's probably the most common thing for us that I deal with. Look, I think a lot of the tooling in place for analytics has been optimized over a decade for answering questions in the timeframe of weeks, right?

So if you're trying to get numbers on revenue you close the books and then you have a couple of weeks before you have to tell Wall St., so you can really make sure all that ingestion works, really process the data, clean it, identify holes in that data, ask people about why those things are in there, et cetera.

And I'd say with our company and what our users need, to use the music thing, we turn that up to 11, right?

Where we are taking analytics, distilled analytics, post modeled analytics, right? So post capture, post ingestion, post analysis and predictions and ML processing, post all that, and powering sales marketing, customer success, report, et cetera, with that data.

So we have very stringent latency expectations on data that was not where BI historically has been.

It's understandable that we'd be very frustrated by that, and I think we're all moving... I think of our product as almost a catalyst for forcing the rest of the analytic stack to get more, not necessarily real time, but lower latency in general and surface failures more quickly.
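Boris's earlier point about tiered SLAs, where not every delay is a fire drill, can be sketched concretely. Here is a minimal sketch in Python (the dataset names and thresholds are hypothetical illustrations, not Census's actual configuration) of a freshness check that only alarms when a dataset's own SLA is breached:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical SLA tiers: tight for operational data that drives
# customer-facing actions, loose for slow-moving reporting tables.
SLAS = {
    "trial_expirations": timedelta(hours=1),    # powers lifecycle emails
    "crm_account_sync": timedelta(hours=6),
    "weekly_revenue_rollup": timedelta(days=3),
}

# Sentinel for datasets that have never been updated.
EPOCH = datetime.min.replace(tzinfo=timezone.utc)

def stale_datasets(last_updated):
    """Return only the datasets whose own SLA is actually breached,
    so a slightly-behind reporting table doesn't become a fire drill."""
    now = datetime.now(timezone.utc)
    return [
        name for name, sla in SLAS.items()
        if now - last_updated.get(name, EPOCH) > sla
    ]
```

With this shape, a two-day delay on the weekly rollup is tolerated while a two-hour delay on trial expirations pages someone, which matches the "different SLAs for different kinds of data" idea.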

Stef: Yeah, and I think you've hit the nail on the head there: there probably need to be different SLAs on different datasets.

It reminds me of a conversation I had earlier this year with Maura Church, who is Director of Data Science at Patreon. She was talking about how they deal with staleness too, not necessarily stale pipelines or stale data streams,

But stale metrics and stale dashboards and things like that, to prevent people from being overwhelmed when they don't know exactly which dashboard they should be looking at, or which dashboard is ultimately the thing that should guide them as a product manager.

And there are a couple of things that they do, and one of them is they define like, "These are really the key metrics and they matter and they should not break, they should never break."

Then another thing that they do is they do these regular pizza and sodas where they sit down and clean up stuff, just like, "This is deprecated, get rid of that, get rid of that and all that stuff."

And I think the stale data problem that you're just talking about has probably these two angles on it: what are the most important things that really can't break, and then also-

Boris: Which analyses themselves are getting stale from a business perspective. Yeah.

I've joked about this a few times where maybe if I were building a BI tool, which I'm not, I might say, "Every dashboard, no matter what, every chart, every dashboard has a built in expiration date and you have to forcibly extend it."

Stef: I love that.

Boris: That's not an optional thing, it's just the built in behavior.

So it's like, "Does the business care? Is anyone using this?"

Let's assume by default that everything is going to get sunset, and then have people go, "No, I need this, I love this."

And now it's like, "Oh. Okay, great." It's almost like a forcing function for conversation.
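Boris's thought experiment, every dashboard expiring unless someone actively extends it, could be sketched like this in Python (the `Dashboard` class and its defaults are purely illustrative, not a real BI tool's API):

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

# Hypothetical default: everything sunsets unless someone intervenes.
DEFAULT_TTL_DAYS = 90

@dataclass
class Dashboard:
    name: str
    expires_on: date = field(
        default_factory=lambda: date.today() + timedelta(days=DEFAULT_TTL_DAYS)
    )

    def is_expired(self, today=None):
        """Expired dashboards would be hidden or archived by the tool."""
        return (today or date.today()) >= self.expires_on

    def extend(self, days=DEFAULT_TTL_DAYS):
        # The forcing function: someone has to actively say "I still need this."
        self.expires_on = date.today() + timedelta(days=days)
```

The design choice is that extension is the only escape hatch, so every surviving chart represents a deliberate "yes, the business still cares" conversation.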

Stef: Exactly, and I feel like that's growing in the data space too, this idea of "verified." It's like a verified dashboard or verified analysis, and you can see when it was last verified, so I think we're seeing the birth of something like that.

Boris: Yeah. To go back to something you said earlier where a good leveraged data team is not answering every question, they're building a system that allows everyone else in the company to also ask and answer a question.

Which means that you have to find a way to say, "This is the surface I give you and I stand by it."

Even though you're shipping every day, I hope as a data team, some of the same things that we did in larger software apply.

So this is a release, this is going to be supported, this is a set of metrics or tables or sets of schemas that we are saying, "These are perfectly supported. Anything here that you think is wrong, it's not and we will always fix it if so."

Then there's the, "Okay. Well, this is the experimental grounds. You can go play around, but you'll get less support, and if you ever go up and present that, you're going to get dinged."

All these kinds of things you could try to do and you might start to have to. So yeah, I think a lot about how do you communicate in the software layer what is blessed data, I guess, that's maybe a weird term for it. But what is approved or properly supported?

And maybe, by the way, that could solve some of the things that you probably encounter, which is if it reaches a certain support level then it cannot be instantly deprecated by the source.

So if you want to change your eventing model, that's fine for things that have potentially low support thresholds.

But if it's a critical thing then maybe it has to go through a two phase kind of deprecation, where it's like step one is, "We are going to change it."

And then you can communicate that out, and then a month later, two months later, whatever it is, you can say, "Okay. Now it's gone."

And you're not allowed to do that in one move, which is how software does it. When you work on programming languages like I did at Microsoft, APIs in programming languages are very stringent contracts between the end user and the tool builder, and you cannot break those things overnight.

Apple does, but it comes with a lot of pain. But the proper way to do it is to signal that it's coming, so you start admitting that it's deprecated, that this event cannot be depended on, this API cannot be depended on, this language feature cannot be depended on because it's going to change.

Then in a subsequent release you enact the changes so no one can complain.
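The two-phase deprecation Boris describes, warn in one release and remove in the next, might look like this applied to tracked events (a hypothetical sketch; the event names and the `track` function are invented for illustration):

```python
import warnings

# Phase one of the deprecation: events listed here keep working but
# warn every caller. Only a later release is allowed to delete them.
DEPRECATED_EVENTS = {
    "signup_completed": "Use 'account_created' instead; removal planned next release.",
}

def track(event_name, properties):
    """Emit an analytics event, warning (not failing) on deprecated names."""
    if event_name in DEPRECATED_EVENTS:
        warnings.warn(
            f"Event '{event_name}' is deprecated: {DEPRECATED_EVENTS[event_name]}",
            DeprecationWarning,
            stacklevel=2,
        )
    # In a real pipeline this would enqueue the event; here we just return it.
    return {"event": event_name, **properties}
```

Consumers get a full release cycle of warnings before the contract actually breaks, mirroring how programming-language APIs signal deprecation before removal.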

Stef: This is actually really good, this is juicy stuff here, we're on the philosophy of how to maintain a dataset.

Boris: Yeah, like what's a contract? When does a dataset become a contract?

Stef: Exactly. This is great. I'm writing a blog post about data quality right now and how to maintain data trust and data literacy so I am definitely going to quote you right there.

We've already talked about a lot of industry changes and things like that, but I know you have some exciting thoughts that we talked about a little bit before we started recording.

And often I like to think about this as from the perspective of what's changed in the data space or in this industry over just the past two years?

Because it's a really rapidly moving space. But in this case, I would love to also go further back, and I think that will be a fun segue into a moment where you realized something needed to change, or you wanted to change something, and potentially into why you started Census.

Boris: So I think you and I already talked a bunch about what's changed in the long horizon, like the 10+ year horizon and I think I would summarize that as, one, we have a lot more data than we did before.

Capturing data 10 years ago or 15 years ago was an opt in and now it's an opt out experience, so we just have a lot more data at our fingertips.

That's one. Two, we've already said it's more interconnected so the data that comes out of product and the data that comes out of the business are more interrelated than ever and so you need a shared substrate to deal with that.

I think that gets us to the more recent past, so I think the biggest shift that I've seen in the last few years is the one that's obvious to everybody, it's of course the shift to the cloud data warehouse, right?

But I think people adopted those because they had very smooth pricing curves, so everyone could get started. You didn't have to be a big company, which is how it used to be for a data warehouse.

Of course the experience of a modern cloud data warehouse is just so, so attractive, right?

So you can have separated workflows and it's really nice for multiple people to collaborate on a warehouse.

But I think the actual pressure came from the quantity of data, the omnipresence of it and the need to have it be interconnected.

So I think the biggest change was that it wasn't necessarily on people's minds, they were like, "I love Snowflake, it's easy to use."

But really what's happening is you need to be able to join any dataset to any dataset, and the silo of product analytics versus business analytics is no longer okay.

So the only way to solve for that, there are really fancy ways you can try to solve for that, but the best way to try to solve for that is to put them in the same data substrate which is a cloud data warehouse, and then you can compute on it and join it and aggregate.

So I think that's what happened over the last few years, then when you have that...

So what I found was everyone had been, or has been, investing in the data infrastructure to join all their data together. They were bringing in more data than they had ever had in the past, right?

Not just from the product but also from all these business tools.

But what was lacking, and that's what led to the birth of Census three years ago, so it was a little while ago now, was that the analytics team was building out this amazing infrastructure but they were using it primarily to answer questions looking into the past and for slow moving processes, like a quarterly review.

All that investment in data, whether it's analytics or infra, was just dramatically underutilized, under leveraged on the rest of the business.

When I say the rest of the business, I really mean everything. I mean product team, sales team, marketing team, support team, finance team, customer success team, you name it.

All of them were dying to be more data driven, and I think most companies before Census existed were, broadly speaking, under-informed with data.

At best, they were very well informed with their data, but they weren't driving day to day decisions with it. That's really where Census was born.

It was in saying, "How do we connect these two worlds? How do we take all of the interesting value that an analytics team can build and put that into the hands of where it can have the most impact, which is all sorts of business users."

I joke about this, like there was a left hand and a right hand in the business, and there was the analysis side of the business and the action side of the business and they were not talking to each other, and that's why Census was born.

The reason it's called Census, by the way, it's tied to this idea that I've always felt that there should be one shared substrate, one view of this.

You don't have three different counts of the number of people in Iceland or in the United States, right?

You have one and it's the census, and then that is what everyone depends on to make decisions whether that's how many parliamentarians you need, how much taxation you should get or financing decisions in the future or loans. Everything.

How many buses we should plan for in that part of town? All of that is tied to the same core data.

Companies should work the same way.

Stef: That's great. I love that story about the name of the company, that's very good and it's a fun analogy and it's something that we all dream of, a census around our data.

Boris: Exactly. Then eventually I guess if we get really good at it, we'll just have to take over the census.

Stef: Exactly, that's right. But it's interesting, because I think you're right that data teams use the data warehouse to look back, and you mentioned they're maybe even asked seldom, say for quarterly reviews, and how much of a waste it potentially is to have all of this infrastructure but not use it to its fullest.

What do you think was stopping them from doing that? This was operationalized data and operationalized analytics, so what was stopping them from piping those insights even further?

Boris: There's a few things here. One is it historically has required engineering skills, not just engineering skills, but also coordination between an engineer who knows how to build it and the various applications in the mix.

So I don't know how to move data from a data warehouse to HubSpot so I need an engineer to do that.

Okay, how do I do that well? Well, that requires real work. But then it's even more subtle because the engineer doesn't necessarily even understand what the needs are in HubSpot so it's a whole process just to turn that into a feature that you can deliver.

So if someone says, "Hey, we need data from here to here."

That engineer probably needs a Program Manager to go talk to the sales team and go like, "What do you need in HubSpot? How does HubSpot work? What is the definition of a company in HubSpot?" All these things.

And so that is everything that we have abstracted, that is what our product does which is to say, "Hey, you don't need to understand the depth of HubSpot because we do. You don't need to write the code to move data because we do. You don't need to manage API quotas because we do. And we understand the impedance between your warehouse and HubSpot and that's where we help you resolve."

That's one. But then there's also non product things that I think were hindering people's ability to do this.

The first that I talk about a lot is people in business intelligence, when your job is reporting on what happened last quarter or last year, et cetera, your primary way of modeling data is in an aggregated form.

You take all of the transactions and you say, "How much money did we make?"

But when you're trying to operationalize data, when you're trying to put it into these systems whether they're sales systems, support systems, et cetera, you actually want to do it in a disaggregated way.

I don't care about the total usage of a feature or the total revenue. I care about each individual's usage of the feature, each company's use of the feature and each company's revenue and each user's revenue.

So data professionals had to figure out how to model their data correctly in order to operationalize it, and so I think those are the...

There's all the technical reasons that we resolve and then all that remains is the hard part, which is the interesting part, which is the modeling part, the data modeling part.
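The aggregated-versus-disaggregated distinction Boris draws can be shown in a few lines of Python (with toy data; the event tuples and account names are invented for illustration):

```python
from collections import defaultdict

# Toy usage events: (account, user, feature). BI reporting collapses
# these into one number; operationalizing them keeps one row per account.
events = [
    ("acme", "ann", "export"), ("acme", "ann", "export"),
    ("acme", "bob", "api"), ("globex", "cyd", "export"),
]

# BI-style aggregate: a single number for the whole business.
total_usage = len(events)

# Operational shape: per-account usage, ready to sync into a CRM field
# so a sales or success tool can act on each individual account.
per_account = defaultdict(int)
for account, _user, _feature in events:
    per_account[account] += 1
```

The modeling work Boris mentions is exactly this reshaping: the same raw events, kept at the grain of the entity (account or user) that the downstream tool operates on.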

Stef: This is so inspiring, this is so exciting to talk about, really. I love the identification that you're making.

I was literally hoping you would say that, that that would be the stopper, the show stopper for people being able to build this. Actually, I'm working on another blog post right now in preparation for the Coalesce Conference in December, and it has a hot, tacky name which I might rebrand a little bit.

But it's called Don't Hire A Data Engineer Yet. Again, this is with high respect for data engineering, and everything that we do as data people, as data professionals, entails a lot of data engineering.

But this gap, we tried having a data engineer that was working on a specific project and trying to build something, but he was just so many steps removed from the actual problem that ultimately I think it was way more powerful to build a team of data professionals that were also proficient in building all of the things that we really needed for ourselves, with some support from engineering.

This was a completely different thing from having someone who specializes in building integrations or building some specific piece, because that really, really requires a lot more personal connection and project management. Super, super interesting.

Boris: I agree.

Stef: So we've covered a lot of ground.

Boris: We've covered a lot of ground.

Stef: I think I would still want to touch on, before we wrap this up, I would love to touch a little bit on data trust.

I think someone saying, "I don't trust this data," is a really common statement. What is your take on that? Why do people say that?

Boris: It's a really, really big deal. It's like this hidden tax on the whole company because it means people may not use data when they should, and you won't necessarily even know.

Because you may have built the analytics or you may have thought you've done the work, but it's not actually put to use because people don't trust it.

So I think there's a lot of ways in which people end up with untrusted data, and I think we're still in the very early days of resolving that.

It's easy not to realize this when you live only in analytics and BI. You think of all the reasons you have untrusted data there, so you look at a dashboard and think, "I don't trust it."

But if you zoom out and look at the company writ large, the reasons people don't trust data go far beyond the BI team.

It actually is tied to the fact that the way companies and people within companies get data is haphazard. It is fragmented.

So there are actually hidden data pipelines all over your company, and some of those you won't even think of as data pipelines.

It's like an integration between your support tool and Salesforce, which exists just because when you bought the support tool it plugged into Salesforce and did something. You don't even realize that that is a kind of data pipeline. It's part of the ecosystem of data in your company.

So the first and hardest thing to do is the thing my team obsesses about: how do we get more of a hub-and-spoke centralization model going for how data goes in and out of a company?

So the cloud data warehouse is a big part of that, right? Because you can get all of your data in one place, and again, it wasn't so long ago that you'd have multiple warehouses in a single company.

And so now we just have to figure out the best way to broker data so that, by and large, it all goes through this infrastructure, and then ideally through a single team that can own some percentage of it. I don't think the data team can own everything that's going on, because that doesn't scale past a certain size. But you can route it through a central system that you can then put, to use a term we used earlier, expectations and SLAs on.

It's a lot easier to make data trustworthy if there's only one place that everyone agrees it emanates from.

So I think that's been the first and hardest thing: there are a lot of what I think of as peer-to-peer connections in your company.

The marketing and sales teams have a connection, the support and sales teams have a connection, and the product and data teams, or the analytics team, have a connection, and there is not a... People use all sorts of names for these. What is it these days?

Like a data mesh or whatever. Data is going in every direction and more importantly you don't even realize that it's moving around.

So I think getting a handle on that is key and I think a lot of what we are trying to accomplish is that. But even if you have that, that only opens up the possibility for the data organization to build the trust.

So how do you do that? Well, it's all the same ways you build trust in software.

It's not by saying that everything is going to be perfect, that's guaranteed to be false.

So you have to build a culture of, A, continuous improvement and being able to point at something that is being improved.

So you want to have testing, you want to have monitoring around your data, you want clear definitions of what correct means, and you want a feedback loop on data with your users, who are the other people in the company.

Because you can specify it all you want; there is no theorem prover on your data that's going to say it's correct, no categorical sense of true.

So I think it's about creating a culture of committing to a certain quality bar and then continuously improving it, and having, over time, one throat to choke, one system to point at, to build trust in. That's my view.
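The testing Boris describes can start as lightweight as a few assertions run against a metrics table before it's shared. Here is a minimal sketch in Python; the table shape, column names, and checks are illustrative only, not anything Census-specific:

```python
def check_daily_signups(rows):
    """Run basic expectations against daily signup rows.

    rows: list of {"date": str, "signups": int} dicts.
    Returns a list of human-readable failures (empty = data meets the bar).
    """
    failures = []
    # Expectation 1: every row has a signup count
    if any(r["signups"] is None for r in rows):
        failures.append("null signup counts found")
    # Expectation 2: counts are never negative
    if any(r["signups"] is not None and r["signups"] < 0 for r in rows):
        failures.append("negative signup counts found")
    # Expectation 3: one row per date, or the metric double-counts
    dates = [r["date"] for r in rows]
    if len(dates) != len(set(dates)):
        failures.append("duplicate dates: metric may be double-counted")
    return failures

# One duplicated date slips in, as can happen with a re-run backfill
rows = [
    {"date": "2021-11-01", "signups": 120},
    {"date": "2021-11-02", "signups": 135},
    {"date": "2021-11-02", "signups": 135},
]
print(check_daily_signups(rows))  # → ['duplicate dates: metric may be double-counted']
```

Checks like these, run on a schedule with failures surfaced to the team, are one simple way to give an SLA something concrete to point at.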

Stef: View accepted. Okay, excellent.

I think I want to maybe end this conversation talking a little bit about data misconceptions, and potentially how we can get over some of the data misconceptions and potentially help more teams get their analytics right and things like that.

What do you think is some of the biggest misconceptions people have about data or product analytics, how that works?

Boris: I think trust and quality in data is a process, not an endpoint. It's not a state that you end up in, it's a way that you work, it's a culture, it's a process.

The most common way I see people get in the way of themselves, of not taking advantage of their analytics or their data, or not engaging across teams in a company, is that they make perfect the enemy of good enough.

And so it's better to start with a very small piece of information, potentially a single metric, a single column, and start building trust and usage around that, rather than say, "Well, I can't share this information with the sales team or the marketing team because it's not perfect yet, it's not ready yet, and then they won't use it. Or if they do, they'll misconstrue it, and then it'll be worse, and then it'll go up to the CEO, and then they'll say the data team was wrong."

When really, they just over-interpreted the data. So it's a very real problem, and that politics is very painful for analysts. But I think you can't keep hoping, "We're going to get the data right soon; eventually it'll be right."

You just have to build in this idea that it can be a little bit wrong and the key is to iterate and to start thinking about... It goes back to what we said at the beginning, right?

Don't ship once a year. Find a way to ship more often. It's maybe not so much a misconception as a view from the outside world: people see data as this thing that is absolutely true, and that's not the point.

Stef: Awesome.

Boris: Truth is just a really hard thing to get at anyway.

Stef: I think this actually can work as really great final words for this show, Boris.

We should start small, potentially even a single metric, get that right and build the conversations around that and start somewhere in building the trust, and ship often. I like that as well.

Boris: Yeah, I think those are good takeaways for people.

Stef: Yes. It was such a pleasure to have you on The Right Track. A crossover episode, we should definitely use this opportunity to hype your podcast.

Boris: That's right. You are one of the best episodes we have done of The Sequel Show, so thank you for joining ours as well.

Stef: Big words, big words.

Boris: We had a good time. We definitely went off on the most tangents of any episode I've ever done. But I think, hey, listen, to me I think this format is also conversational so it's like you may as well.

Stef: Yeah. This is what happens when two mathematicians get together.

Boris: It's true, it's true.

Stef: Excellent. Thank you so much for joining us on The Right Track, Boris. It was a pleasure having you on.

Boris: This was really fun. Thanks for having me.