Practical Product
41 MIN

Ep. #5, Data Driven Product Management At Yammer

about the episode

In this episode of Practical Product, Craig and Rimas are joined by Anna Marie Clifton to discuss Data Driven Product Management at Yammer. Anna Marie discusses the role of data in her day-to-day work at Yammer, and the group talks through how to get the data you need from your product, even at an early stage.

Anna Marie Clifton is a Product Manager at Yammer, an Enterprise Social Network. Anna Marie is also co-host of the product management podcast Clearly Product, where she and her co-host Sandi MacPherson review and discuss a product-oriented book once a month.

transcript

Craig Kerstiens: All right, awesome. We're back this week for our next episode. We've got a guest with us, Anna Marie, from Yammer. You want to tell us a little bit about yourself?

Anna Marie Clifton: Thanks. My name is Anna Marie Clifton. I'm a product manager at Yammer, pretty recent there. I've been there for about six months, so I'm spending all my time learning all of the things and soaking them up like a sponge.

Before that, I did a bit of an apprenticeship in product management at Asana, and before that a tiny, developer-facing startup with five people, Hakka Labs. And a couple of other things before that, and then long before, I was doing gallery management in New York City.

Craig: Interesting, I did not know that last bit. Let's see if we can tie a little bit of that back into product management, but we'll get there. So, you're at Yammer. Interesting company, acquired a while ago. You're Microsoft now, right?

Anna Marie: Sort of. We're part of Microsoft. Yammer has about 250 employees. In the Microsoft org we report up through Microsoft, and there are some interesting realities that come with reporting into a mega org of 100,000+. It's also really interesting, because at Yammer we run on Yammer and we exist in our product, which is a workplace collaboration tool.

The Yammer network is now the Microsoft network internally, so in my internal workplace collaboration tool, there are 100,000 people on the network, which is really interesting.

But yeah, we've been a part of Microsoft for about four years, so long before I joined.

Craig: I think that's pretty interesting, because Yammer is in the space, I would say, to me, there's one interesting thing there: you don't have revenue. You are part of Microsoft. Microsoft obviously has revenue. I think they know how to generate revenue. But you're in this indirect line, right?

Anna Marie: Right, absolutely. It's really fascinating being product managers who aren't tied to any revenue line. We are only ever judged, internally, by our engagement numbers.

Rimas Silkaitis: Wait, so you're telling me you're not charging for your product at all?

Anna Marie: There's basically no way to buy Yammer on its own. You buy the Microsoft Office Suite, like the Office 365 Suite, and you get Yammer. The lowest-level SKU comes with Yammer, so if anyone's buying Microsoft Word online, they get Yammer with it, right?

So there's almost no way that you can tie, like, "Well, which percentage of this seat is actually going to be using Yammer? And which percentage of this sale can we attribute to...?"

Craig: Isn't the correct answer 100% in those cases? When in doubt, go ahead and claim credit for the biggest amount.

Anna Marie: We all report up through the same org, and so those revenue numbers are being generated from all of the products in the suite, and there are just dozens of them. And there's a huge sales force.

The other interesting thing about not being tied directly to revenue is we're also not connected directly to our sales force. So we have the Microsoft Office sales force selling Yammer, but we have almost no direct communication with them. So even talking to our sales force about what Yammer is and making sure they're messaging the right thing, and when to use this tool as opposed to other tools in the collaboration suite, is really a fascinating challenge.

Craig: For those not in the room, Rimas looks really confused.

Rimas: I'm still confused here because does this mean that Yammer then gets attached to a sale, when customers have hit certain thresholds? I'm thinking about tiering out to various sales, right?

Anna Marie: Yammer is in the lowest tier, so there's almost no way to buy anything from Microsoft without getting Yammer with it. Yammer is a cross-suite collaboration tool, so we're embedded in a lot of the other tools. If you have SharePoint, you have Yammer. You get the core product, but then also, because it's embedded in SharePoint, there are all these cross-suite integrations. So it's very difficult to have any part of the suite without having Yammer as well, because we're just in the whole system.

Rimas: I'm with Craig, then. You attribute everything to Yammer.

Anna Marie: 100% of the revenue.

Craig: 100% of the users, 100% of the revenue, it sounds like a win. So, this is interesting to me because it's not a consumer-focused one where you care about adoption, right?

Anna Marie: No, it is.

Craig: But you still have businesses as customers, right?

Anna Marie: That's the interesting thing, is that

we think about two different user types. We have customers and we have users, and they're different.

Craig: Customers are who pays, right?

Anna Marie: Customers are the IT buyers. Customers are the network admins. We have a whole network where we communicate directly with our customers all the time. We almost never develop features based on anything that comes out of that network. We're only ever developing features based on usage patterns that we see in the data from end users. So we're very, very end-user focused.

If we see a feature that network admins are like, "Oh, this is a little bit uncomfortable for us," yet we see engagement spike up, not down, we're going to ship that feature. And we also do a lot of A/B testing on the user level, which is another thing.

There's just a whole slew of things that we do, because we maintain a user focus that's to perhaps the chagrin of the network admins and the customers, so we do talk to customers a lot. We listen to them and we try to understand what their problems are. But we very rarely do what they exactly want. We're not very customer driven.

Craig: That almost sounds counter to, I think, almost everything in product management, saying the person writing the check is the one you want to make sure is happy, right? And the user, especially the one that's free and not paying? Who cares?

Anna Marie: The point of having Yammer and the point of using Yammer in your network is cross communication in the entire org, right? And so it's information discovery from one section of your org to an entire different section. So that only works if the users are using it and putting information into the system. So the whole thing falls apart if you build something that the admins want, like locked groups, they want private content, they want things to be, by default, closed, closed, closed.

Craig: They sound like awful people.

Anna Marie: If you're being charged with handling harassment or handling things like that, you want more and more tools to lock things down, shut it down to prevent it from becoming an issue. Whereas if you want to promote more sharing and responsiveness within the org, you have to build things that, by default, groups are public.

That's something that a lot of admins would like to toggle where they can change that for their network. Because they, of course, think that for their network, it's going to be different. But when you change that default, then all of a sudden you siphon off the access to information that other people have, and the whole product stops functioning.

Rimas: I just want to be clear, though. You're not saying that a customer can't also be a user, right? They still have other needs that can be met and features that could be built for what they need. I'm thinking about things like billing or consolidated things. Well, granted, you're not charging anything.

Anna Marie: The way the Yammer product management org is structured, we have a specific department explicitly for handling customer concerns and market concerns like that. That's a particular PM who owns that and has staffing allocation for those kinds of concerns.

We do things like we're working on EU datacenters and things like that, which is a customer concern. No end-user engagement metric is ever going to say, "Hey, you need EU datacenters." What metric would tell you that? You're never going to see that in the data, so we have that set aside and protected within the org, and then the rest of the PMs are agnostic.

We all float between mobile and web and email and all the other things. We switch that up depending on whatever initiative we think is most valuable at the time. But that area of the org works on that area forever, kind of unlocking and unblocking that.

Craig: Cool, that sounds pretty interesting.

I'm reasonably convinced that, because you don't care about revenue (someone else does), engagement is the right metric.

How are you measuring it? Is it just like time in the system overall? What's your top metric there?

Anna Marie: Absolutely. We actually don't think that spending time in the system or having long sessions is super valuable for users. Our top metric is days engaged, so the number of days in a given period of time that you, as a unique user, logged in. So for a week, your max days engaged could be seven, right? It's a very noiseless metric; it's binary.

Craig: So you want people to work on the weekends?

Anna Marie: We don't want people to work on the weekends.

Craig: But okay, like a recurring active user.

Anna Marie: Yeah, recurring active users. We see that if you're a 21-days-engaged user in a month, you're using the product for work, right? At that point, it's part of your workflow. We see that you log in, it's something that you're checking, so that's our absolute top line.
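
To make the definition concrete, here is a minimal sketch, in Python, of how a days-engaged count could be computed from raw activity events. The event shape, names, and seven-day window are illustrative assumptions, not Yammer's actual pipeline.

```python
from collections import defaultdict
from datetime import date

# Hypothetical activity log: (user_id, calendar day of any qualifying activity).
events = [
    ("alice", date(2016, 5, 2)), ("alice", date(2016, 5, 2)),  # two sessions, same day
    ("alice", date(2016, 5, 4)),
    ("bob",   date(2016, 5, 3)),
]

def days_engaged(events, start, end):
    """Count distinct calendar days each user was active in [start, end].

    The metric is binary per day: ten sessions on Monday count the same as
    one, so for a one-week window the maximum possible value is 7.
    """
    active_days = defaultdict(set)
    for user, day in events:
        if start <= day <= end:
            active_days[user].add(day)
    return {user: len(days) for user, days in active_days.items()}

print(days_engaged(events, date(2016, 5, 1), date(2016, 5, 7)))
# {'alice': 2, 'bob': 1}
```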

We look at a lot of retention metrics. There's a funny story around two-week retention. What is two-week retention versus week-two retention? There are about five different ways to measure or define second-week retention. Is it on day 14? Is it at any point between day seven and day 14? There are just so many floating definitions out there, so

we do look at two-week retention as a pretty strong marker in enterprise collaboration.

Craig: Which of those measures of two-week retention did you settle on? Because it sounds like there's some really deep analysis on them.

Anna Marie: Absolutely, which is really interesting, when everyone in the org is talking about our two-week retention numbers and you find out in a product meeting that we're talking about five different things. The analysts know exactly what they're describing, and the PMs are hearing something different, and the designers are hearing a third thing.

So the way we define two-week retention is that it's rolling: given your start date, did you log in at any point between day seven and day 14? So it's second-week retention.

Craig: I'm thinking through that, there. Interesting, so you bucket it so that you don't care about the weekday. Full week, anytime in there works.

Anna Marie: Right, so your seat could be activated by a network admin who's in a different time zone than you, and it could happen on a weekend for you. We don't tend to tie it to, "Seven days after you logged in or after your account was activated, did you come back?" We look at sometime in that second week: "Are you still retained?"
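
A small sketch of that rolling definition, so that "two-week retention" means the same thing to everyone computing it; the function and argument names here are made up for illustration, not Yammer's implementation.

```python
from datetime import date, timedelta

def second_week_retained(activation_day, active_days):
    """Rolling second-week retention as described above: did the user show up
    at any point between day 7 and day 14 after their own activation date,
    regardless of which weekday (or weekend) that activation fell on?
    """
    window_start = activation_day + timedelta(days=7)
    window_end = activation_day + timedelta(days=14)
    return any(window_start <= day <= window_end for day in active_days)

# Seat activated by an admin on a Saturday; the user comes back mid next week.
print(second_week_retained(date(2016, 5, 7),
                           {date(2016, 5, 7), date(2016, 5, 18)}))  # True
```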

Craig: Now, where did you get the two weeks?

Rimas: Yeah, how do you get there?

Craig: Why two weeks? Why not three weeks, why not one? It sounds like there was quite a bit of data analysis that went into that.

Anna Marie: This metric was established before I joined. I know that in me trying to understand the metric, I uncovered that three different people in my reporting chain had different definitions of it. So we had some clarity discussions around that.

If you think about a product that you use and how likely you are to come back within a one-week work-week context, that changes a lot the next week. The work week is bucketed by this Monday-to-Friday thing, and there's some continuity within one Monday-through-Friday chunk. And so coming back in another chunk like that is a pretty good indicator. It's an early indicator of sticking power.

And one of the reasons for looking at early indicators is we don't want to have to have tests run for a very long time in order to know if you can ship something, especially because we're testing things at the user level and not the network level.

Craig: I have so many questions on so many anomalies, like, what happens with the holiday week, the Thanksgiving week? Do you change the metric just for that time?

Anna Marie: Or August; all of Europe goes on vacation in August. You should see our August numbers. It's always like, "By the way, you should know this is August." We have tons of users in Europe and we see massive dips, and whenever we're talking to people in the U.S. about it, they don't understand as much.

Rimas: This is when you hire the data scientist, right? Or we'll just hire you, Craig, right? Your SQL skills are paramount.

Craig: It's all SQL, that's all it is. We can probably save digging too much deeper into the metrics, because I think Rimas and I could go on for a while about this.

Anna Marie: Oh, I love it, too.

Craig: It's super interesting, and I think there are all sorts of anomalies comparing year over year, month over month, all sorts of things. But it does make a lot of sense that, in a business setting, if they come back the second week, they are actively engaged with the product. It's a good driver for future engagement.

What's the next factor down from the thing you're measuring? Are you looking at everything that you do and whether it affects this number? Are you A/B testing that? What level of testing are you doing, and what are the other metrics that lead into that? Or is that just the one that you pay attention to?

Anna Marie: I talk to people who want to get into product management, or people who are getting advice on interviews, and they ask about this concept a lot. We think a lot about the difference between global metrics and local metrics when we're testing. The global ones are days engaged, second-week retention, and two others, like a binary posting metric, which I can talk about if you want me to dig into that.

We have basically four core global metrics, and we want to see any test that we do affect those at some level, unless we're trying to ship something for strategic reasons or to cut code for decomplexity reasons. So those are global metrics, and ostensibly anything that you test will have local metrics, like, "Did they push the button?" And that's the metric that most people, especially when they're getting into PM for the first time, think about: the success metric for whether this feature is good is "How many people use this feature?"

But what you want to be able to see is that, "The people who use this feature actually saw their global metrics go up as well." And so we structure our hypothesis by, "Here's the change that we're affecting in the product, here is the local metric that we expect to change." And then based on this assumption, about this change in local metric, this is why we think it's going to affect the global metric.

When you're looking at a test result and you're looking at your local metrics, and they're all going up and your global metrics are going down, that tells you that your assumption was wrong. And if you're looking at your test results and you see that everything is flat, both local and global metrics, that tells you that users didn't find it or didn't think it was useful from the outside.

If you see your local metrics are flat but your global metrics are up, that tells you something's very interesting. That's where you really want to dig in.
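
As a rough illustration of that reading of results (not Yammer's actual shipping policy), the local-versus-global grid can be written down as a small decision function; the thresholds, names, and wording here are assumptions made for the sketch.

```python
def interpret(local_lift, global_lift, p_local, p_global, alpha=0.05):
    """Toy decision grid for the local-vs-global readings described above.

    Lifts are relative changes of treatment vs. control; a movement only
    "counts" if its p-value clears the significance bar alpha.
    """
    local_up = local_lift > 0 and p_local < alpha
    global_up = global_lift > 0 and p_global < alpha
    global_down = global_lift < 0 and p_global < alpha

    if local_up and global_down:
        return "Local up, global down: the assumption linking them was wrong."
    if not local_up and not global_up and not global_down:
        return "Everything flat: users didn't find it, or didn't find it useful."
    if not local_up and global_up:
        return "Local flat, global up: something interesting is going on; dig in."
    if local_up and global_up:
        return "Local and global up: the hypothesis played out."
    return "Mixed result: look at segments before deciding."

print(interpret(local_lift=0.12, global_lift=-0.02, p_local=0.01, p_global=0.03))
# Local up, global down: the assumption linking them was wrong.
```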

Rimas: I'm curious, given the size of the organization, and you talk about global metrics, how do you parse out how your changes affect the global metrics, when you've got any number of other PMs releasing things at the same time?

Craig: If it's positive, it's yours. If it's negative, it's someone else's.

Anna Marie: We're looking at global metrics within the confines of an A/B test. So we're looking at the control group and their global metrics, and we're looking at the treatment group and their global metrics. We want to see the treatment group global metrics go up relative to the control group. If our randomization is good, we shouldn't see a lot of cross pollination.

We do always check that, and our analysts are always aware of other experiments that are going on in the same area of the product that could interact with that. I've run six tests so far and I haven't had anything come up as a potential interaction effect. We always check if there are any potential interaction effects that are in the same area.

Then we always check our randomizations, and our analysts do an amazing job of checking the distributions and making sure that we don't happen to have a strange distribution where most of the people in the control are in Europe, or there's even a larger percentage of them in Europe than in the rest of the world. So we're watching things like that all the time.
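
One way such a randomization check could look, assuming region is the covariate being inspected; this is a generic chi-square balance test sketched for illustration, not the Avocado team's tooling.

```python
from collections import Counter
from scipy.stats import chi2_contingency

def balance_p_value(control_regions, treatment_regions):
    """Chi-square test that a covariate (here, region) is split the same way
    across control and treatment. A very small p-value is a red flag that the
    randomization is lopsided, e.g. control accidentally heavy on Europe.
    """
    regions = sorted(set(control_regions) | set(treatment_regions))
    control = Counter(control_regions)
    treatment = Counter(treatment_regions)
    table = [[control[r] for r in regions],
             [treatment[r] for r in regions]]
    chi2, p_value, dof, expected = chi2_contingency(table)
    return p_value

p = balance_p_value(["EU"] * 700 + ["US"] * 300,
                    ["EU"] * 480 + ["US"] * 520)
print(f"balance p-value: {p:.4g}")  # tiny here, so the split deserves a second look
```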

Craig: So this almost sounds like a perfect world for a product. There's a two-week timeframe for you to run a test, roughly to start to see the impact. You're measuring the actual thing that you care about, the top-line metric. For most people this is probably revenue, but in your case, that two-week metric makes sense. It should always be this easy, right?

Rimas: I wish. But do you have to do a lot of planning before? Because you talk about bringing other analysts and other things like that. Yes, a two-week period sounds great and all, but do you have infinite resources with your analysts? Do you have to plan three weeks ahead of time to actually get your test in?

Anna Marie: I do live in a world that's a bit of an embarrassment of riches at Yammer. We have user researchers that inform us when we're thinking about a spec: "Oh, we've done some research here; think about this kind of usability issue." We have analysts that will look at our spec early, and they're almost the first people we get involved when we're talking about hypotheses, to validate whether they're actually testable and falsifiable or not.

They'll point us to what we might want to think about, in terms of adding log events that we might not have thought of. Our designers are great UX designers as well. Then once we start working with engineers, the analysts are already ready; they're working with the engineers as well. The shipping of the experiment is only ever gated on the engineering time. The analysts are never a bottleneck there. I've never seen them be a bottleneck.

Then the amazing thing is all of the analysts are just product analysts. They don't do any pipeline engineering or data work like that. We have an entire section of the org, the Avocado team. Avocado is our internal experiment reporting team that is now part of Microsoft.

People talk a lot about how a big part of the acquisition was getting this team to teach Microsoft how to do data. And so they actually now support all the experiment reporting for all of Microsoft's online products. And so we have this entire data tooling team that was built out at Yammer. It's dozens of people, and we invested so much, even pre-acquisition, into data tooling, so we were able to have analysts just help us work on that.

Craig: That all sounds great.

Rimas: Well, luckily I've got a Craig, so I don't need a whole analyst team.

Craig: What is your advice to someone that doesn't have that setup, that can't lean on that? How do you create it, or how do you cheat and get some of these benefits? Because that is a well-oiled machine that took years and years to build, and I expect there was interesting influence in the very early days of Yammer. How do I get that now, at a 10-person startup?

Anna Marie: Right, so the folklore for why it existed at Yammer is that it was the only way that people had to push back against the personality of David Sacks. If you ever wanted to not do something that David Sacks wanted to do, you had to have a lot of data to back it up. The org put a lot of resources into building out this data tooling so that they could stand up to him. There's a lot of folklore around it.

Craig: So if we've got opinions, let's go with mine. If we've got data, let's use it.

Anna Marie: Exactly, yes.

When you have as strong an opinion as that in the room, it will create this vacuum where on the other side you've got to build something up.

And so there were a lot of really strong personalities who said, "Okay, we don't agree with you, necessarily, and the only thing that we can do, Mr. CEO, is build all the data tooling to prove it." A lot of this was built to prove David Sacks wrong about various things, and this is common knowledge in the org. I've heard anecdotally that other orgs build this up because CTOs are very data interested.

There's this interest in having a binary "yes or no" if something's good or bad, and in the absence of this product-fluid intuition space, if you have an engineering-heavy org, you might be able to push that through on the engineering side where like, "Hey, no, we really want to have answers to things. We don't want to just guess." But it depends.

Craig: I think we made it a record 15 to 20 minutes into the podcast before we said "it depends." So I just wanted to point that out, that this may be the least product management podcast yet.

Rimas: It depends. We still have plenty more time to go.

Anna Marie: Yeah, I'm a big fan of not depending.

Rimas: Actually, I was going to interject and say, doesn't it depend on the size of the organization that you're working with? And can't you just make data say anything that you want it to say?

Anna Marie: You can. What is it, "There are three kinds of lies: lies, damned lies, and statistics"? My mother was an actuary, so I've heard that one anecdotally, growing up.

Craig: I can make SQL do a lot of things.

Anna Marie: It's really, really hard to make SQL tell you that people are engaging in the product more days on average in one group than the other.

Craig: I think you underestimate some people's ability to write bad SQL.

Anna Marie: There's a lot of people that are checking, and obviously at some point I, as the product manager, take a lot of things on faith from my analyst team. I'm not going to go and double check things. I'll jump in and query if I have, "Oh, how many people are touching this button I'm thinking about changing?" Or something like that. But I'm not going to do really complex queries on my own, and I'm not going to check other people's. I don't have that skill.

Craig: One thing I'm hearing, though, is that top-line metric is very, very, very important, that it's a very clear one. It's intuitive, people understand it. It sounds like, with the debate around the two-week engagement timeframe, defining the metric is incredibly important.

Anna Marie: Absolutely, and it's really important that everyone in the entire org is really fluent about this, so when we have our product all-hands, our designers will ask about P-values if we don't put our P-values in a report. They'll be like, "Oh yeah, you have that metric lift there, but what's the P-value on that? I don't see that."

If the entire org is thinking that way, it gets really easy to spot errors in your work.

I was talking actually to our head of UX about this, where she was arguing for the Yammer way, that you've already got so many people who are invested in it, and when you onboard a new person to the org, learning about this data mindset is a big part of that. And you just breathe that in. It's really hard to slip things past people when everyone in the org is like, "Something smells fishy there."

Rimas: So does data only make sense for features or work on workflows that are already known? Because what I'm thinking about is, at some point, or maybe even not, I don't know, you have to build something entirely new and you have no sense of what that might affect.

Craig: You talked about that early on. You glossed over it, like, "We don't do something unless the data says to do it, unless it's strategic."

Anna Marie: Right. I should have brought this with me.

We have the seven reasons to ship something, and data is one of them.

Craig: What are the other six? Because we've talked a lot about data, of course.

Anna Marie: Data is the core. Decomplexity is another reason to ship something. If we test a feature that removes a lot of code complexity (not necessarily UI complexity, although that's also a win sometimes, but specifically code complexity), say a really outdated algorithm or some black box that not many people know or even understand anymore, and removing it doesn't hurt metrics, we'll remove it. So we do a lot of work to try to remove things over time. Yammer's been around since 2009, I think.

Craig: Sounds about right.

Anna Marie: In terms of enterprise-collaboration-cloud-SaaS software in that niche, that's really old. And there's something really interesting that happens with workflows, where people will build out how they're working around your product. And so changing that over time becomes more and more difficult, as you've added more that people are calcified around in their workflow.

Trying to keep things fresh and trying to keep people up to date on the entire code base is a big challenge, so we try to keep it as clean as possible and remove things as often as we can.

Rimas: Does that mean that that's mutually exclusive from data, as part of the other reasons for why you might ship something?

Anna Marie: No, we always specify in the spec if we're going to ship it regardless of what the data outcomes are. I've had a few specs where I've specified something that we're trying to get for parity in our Android and iOS clients.

If this is a flat test on Android, we're going to ship it, because it brings parity between iOS and Android and that helps us with velocity later on. We don't have to think about these two different clients and maintaining different realities. If we ever want to ship something not because of data, we specify in the spec in advance that we're planning on doing that.

Rimas: So then if somebody else comes to you with some data, they can totally override that?

Anna Marie: If they come with data that says what? No, we specify if we're going to ignore the data. And it gets really complicated when you try to specify that we think this is a UI pattern that's better for our users.

Obviously the blinking light is going to make them click on something, but we don't want to become Vegas, right?

Rimas: I love Vegas, though.

Anna Marie: But do you love Vegas in your workplace collaboration software?

Rimas: It depends on how boring my workplace is.

Craig: To be fair, they have disabled the blink tag from the browser, so it's not even an option these days. It is fully gone now. Now, marquee does still work, so there might be some options around that.

Anna Marie: I don't even know what that is.

Craig: You don't know what the marquee tag is?

Anna Marie: No. I'm too young.

Rimas: We just dated ourselves, Craig.

Anna Marie: And you thought you were data-ing yourself.

Craig: So, what's the balance? How often do you ignore data? Because to me, I would just say, "This is a strategic decision," every time. I'm a strategy guy. I have the big picture in mind, so how do you say, "Data doesn't matter in this case"?

Anna Marie: I'm definitely more "in the weeds" level, so I'll give an example of a feature. We had a jarring feature in the product that just bugged us all, and everyone was like, "Ugh." We look at it, and it just makes us go, "Ugh, why do we have this in our product? It's blinking, basically, at our users."

I was trying to design a way to remove that without taking a metrics hit, because we had tried just removing it and it hit engagement pretty hard. So, "Okay, let's see what we can do." Actually, I tested a couple different variants, one that was really subtle and one that was halfway between.

I did a lot of scenario planning around how we'd validate it: if the subtle one loses by a little bit but not by a lot, we'd rather ship that, because we think the subtle one is better, long term, for a user's perception of the product. And we just would prefer not to have to support something slightly more garish.

Craig: How many people are in that decision? That sounds like a big deal any time, given the culture and the strong focus on data, you say, "We're going to make this call despite what the data says."

Rimas: The vision committee meetings all the way up to the Chief Product Officer or something, right?

Anna Marie: In this case, this was my decision, and I had to validate it. I knew what my gut said: "Okay, if it's slightly worse to ship the subtle version, we're going to ship the subtle version as long as it's close to control." But if the garish version is a lot better than the subtle version and they're both better than control, how do you validate that?

That sounds good to talk about, but what is that in actual numbers? How does that translate into raw numbers? That's something I spent a fair amount of time on, digging all through the interwebs and all through past projects: has any product manager here ever specified a point where we were going to take metric losses, and then validated what the threshold for that is? And it was really, really hard to find.

We knew when we just removed the feature to begin with, what the metrics hits were. And so I was like, "Okay, so, that percentage is definitely below our threshold, so we're going to say anything less than half of that percentage is going to be at the threshold." I put all of these numbers in the spec and I had multiple variants being tested. And so there's lots of different ways different things can move.

And then, even within the global metrics, we have two different global metrics that we're really watching. Then there's new users and existing users, and you start getting into segments and all of that. So I had all of this all spreadsheeted out, and it's like, "Here are the five most expected paths that I think this could go through, and given each of these five results, here's what we're going to do."
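
A heavily simplified sketch of what pre-registering one of those decision rules might look like; the numbers and the single-metric framing are made up for illustration, whereas the real spec covered multiple variants, metrics, and segments.

```python
def decision(subtle_lift, known_removal_hit=-0.04):
    """Pre-registered rule, written before results come in: if the subtle
    variant's engagement change stays within half of the hit we saw when the
    feature was removed outright (a made-up -4% here), ship the subtle one.
    """
    threshold = known_removal_hit / 2  # i.e. anything better than -2% is acceptable
    if subtle_lift >= threshold:
        return "ship the subtle variant"
    return "keep the existing, more garish treatment"

print(decision(subtle_lift=-0.01))  # small loss, inside the pre-set threshold: ship
print(decision(subtle_lift=-0.03))  # too big a hit: don't ship
```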

Oh, happy day. The one that I expected to happen the most happened to a T, and I had fleshed that one out, where I expected even some local metrics to go down and some to go up, pointing to a lot of different things. I built out a narrative around that, and that narrative came through. So it was really convenient that the one I'd done all the work around defining actually happened in the metrics.

But it took a lot of work and a lot of managing up to the Head of Product, and I had to tell him at a point, "Hey, sorry, this is my decision, and it's actually exactly counter to your decision. But I think it's very important, and here is why." That was terrifying.

Rimas: Can you dig into that disagreement and how you managed it? Well, maybe not that situation in particular, but just in general, how do you deal with disagreement? Data is one thing, but in the day-in, day-out of being a PM, how do you handle that with engineering staff, analysts, whomever else?

Anna Marie: I think,

especially about disagreeing with designers, that's one of the most subtle and delicate ones that a PM has to manage.

When it comes to disagreeing with people that you report up through, the power structure is such that you're going to bring a lot of evidence to say, "Hey, here's why I disagree: it's XYZ and ABC," and all of that, and our management structure is very open to listening to that.

We have the strong thesis that the person closest to the work should have the final say, and that's great. It gets really complicated when you disagree with someone on your actual project team, and most so when you disagree with the designer.

I've found it to be true in several cases that designers are oftentimes really emotionally attached to their work, and so disagreeing with their work can feel like disagreeing with them. And just by the very nature of being a PM, especially at Yammer, you are looking for evidence to back up your reasons, and designers aren't really in that mindset. I had some difficulty with this early on, and I've landed on a paradigm that I really like for disagreeing with designers.

Craig: Oh, tell me more.

Anna Marie: This is my revelation: I had a really particularly contentious disagreement where I saw from a distance that I really disagreed with the designer on something and I knew that if I came into the room and said, "Hey, I disagree, you disagree, let's hash it out," that I would just end up barreling over this designer because there's almost no way that someone who doesn't come with a lot of evidence-based thinking is going to be able to stand up to my evidence-based thinking.

Craig: And so you win?

Anna Marie: And so you win, right.

Craig: Cool.

Craig: I'm not seeing the problem yet.

Anna Marie: Right, so the thing that is a win for me is not necessarily a win for our users.

Craig: Okay, so it's stepping back, being the adult in the room saying, "It matters that we make the right decision," not that "I can just bulldoze this."

Anna Marie: Right, absolutely.

Craig: It sounds less fun, but keep going.

Anna Marie: But it's very core to how I think about working in project teams, is that the firmer you hold on to your opinion, the more that you mandate that the other person digs their heels in. Because it's the only way that they can stand up to that energy that you have.

If you actually want either of you to arrive at the best decision, you have to let go of your opinion and step aside from it.

Instead of looking at, "Here's why I think this is right," step aside from your own opinion and meet your designer somewhere in a space that's outside of "who believes what."

I went through this exercise with a designer a few weeks ago that was really, really, really effective and very healthy. It felt great, where I walked into the room and I said, "Hey, we could whiteboard about this decision until the cows come home. We can just dig our heels in and argue and argue and argue, and we know that we could do that.

"In the end I'd probably win because I'm just going to come with more arguments at you, and at some point, you're going to get exhausted by that." You just want it to stop, so I said, "Let's not do that. Let's set aside our opinions."

Craig: You just took away my fun, but go on.

Anna Marie: I was like, "Let's set aside our opinions and let's go back to the user. I'm going to define the user that I think we're building for, and you can define the user that you think we're building for, and then let's see at the points that we disagree at.

Because there's something underneath what we believe, because we both have our user's best interest at heart. "There's obviously something much, much deeper that we disagree on, so let's try and start at the absolute first principle and build up from there."

Then we see where we diverge, and we ended up finding two different points where we diverged about, like, our core assumptions on our users. We were able to build up a quick usability test or a click test with user research to figure out which one was more accurate, and she won. We had a six-person usability test. We had six people come in, try a prototype, and six out of six were completely on her side, not even ambiguously on her side.

They were confused by my idea, and my idea was also the one that my manager wanted. The head of our initiative wanted my idea, our Head of Product wanted my idea, the Head of Design wanted my idea. The only person on the other side of this idea was this designer... and all the users.

Craig: One of my favorite questions is, "What problem are we trying to solve here? What are we building for?" Stepping back and saying, "Why are we doing this, and who for?" Right? A re-framing of it from fresh principles.

Anna Marie: Absolutely, and trying to find the absolute lowest level that you can start to build from. Because when we asked why we thought we wanted to do it, we'd had a few discussions about it, and we couldn't find anything that we really disagreed on other than that I thought mine was better and she thought hers was better.

And so it was like, "Okay, let's just go into the deepest level we can find." It's like a "five whys." The five whys start at the top and goes down, and this one we're like, "Okay, we think this is the bottom. Let's start here and then build up the whys and see at which point along the way we go different directions."

Rimas: I also feel, as part of that, you also need to create a psychologically safe space so that when you're saying these things or talking about these, even going down to the low level of the problem we really are trying to solve,

if either of you don't have that trust that you can say things that are divergent, you probably even lose before you even get in the room.

Is that a fair statement?

Anna Marie: Definitely, psychological safety is all the vogue. All the vogue, is that even a phrase? All the vogue these days.

Craig: But I think there is a counter there, because this sounds like a very nice setup. Same kind of discussion, but there is the other side, where you do have the heated debates.

Anna Marie: That comes through with the PM team and the analytics team. We get our really healthy tension, head-to-head, "I think you're wrong." "No, I think you're wrong." And we're both really prepared to have those conversations, and I think it's really healthy.

I think that you want to promote that tension where both sides are equally balanced and equally empowered in terms of how they're thinking in that way. We definitely have that "rawr." I'm gesturing with my fists against each other. You can't see that on the podcast, but there can be a lot of tension.

I've had an analyst flat out tell me, "No, that's ridiculous. I can't believe you'd even think of that," basically. And then I've said the same to them at one point, they were trying to do something, and I was like, "That is insane. Why would we do that?"

Craig: As long as both sides can hang with it, that kind of tension can be healthy?

Anna Marie: Absolutely, and you talk about psychological safety, it's what makes you feel comfortable being your full self. And I think the responsibility resides often with the PM to make sure that everyone on the team feels like they're in a place they can be their full self.

For analysts, that means that I'm comfortable being like, "No, that is wrong," because then they'll come back at me and say, "No, that is wrong." That's their full self and they feel really comfortable with that. The designers that I've worked with, I've had a couple at other companies that are more that way, but most designers at Yammer are not that way.

Finding space for them to be their full self and bring their full creativity is one of the tactics of being an effective PM.

Rimas: Wow, I feel like we could spend a whole other episode just on emotional intelligence, psychological safety and bringing out the full self.

Anna Marie: I would like to.

Craig: It sounds like a lot less fun to me.

Anna Marie: I would like to listen to that episode.

Craig: You mentioned a little bit about approaching the designer one way. It sounds like you had a nice private meeting there. Data analysts and PMs going at it. What happens when it's a little more complicated than that, and it's the middle of a meeting and there are 10 people in the room, and you see the designer over there?

How do you handle those more complicated dynamics of a lot of parties? How does that come to fruition, or is it always like, "Let's move this to a one-on-one meeting?" How do you handle those bigger dynamics?

Anna Marie: That's a great question. I think the meeting dynamics are, again, part of the grit of what it means to be a good PM. Managing that is a lot of times where our work happens.

First of all, I think if you ever get into a meeting where individuals in the room don't know what's going to be talked about or you don't know the opinions that they're going to have, you've already slipped up a bit, especially going into product review.

If your Head of Product doesn't already know what it is that you think you're going to do, you have failed a little bit and you're setting yourself up for a really dramatic ride.

Rimas: A little bit? Probably a lot.

Anna Marie: A lot of bit. It's so dramatic. You walk into this, you've done all this work, you've got your analyst there, you've got your researcher, you've got the Head of Initiative, other PMs that are there, and the Head of Product asks three or four questions. They've never seen your stuff before. They ask three or four questions you were not anticipating, they don't know where you're going, you don't know where they're going, and

all of a sudden you're on a roller coaster and you're trying to hold on. You're asking your analyst to look things up in real time.

Rimas: Eject, eject, eject.

Anna Marie: I have been in one and a half of those. My manager pulled me aside at one point, and he's like, "Hey, okay, so here's what you want to think about." He's like, "Inception, you need to incept your ideas with anyone who's going to be a stakeholder in the room long before you get into the room."

Craig: Or if you're on the other side, you just grab some popcorn and you enjoy the show while it's happening to someone else.

Anna Marie: Exactly.

Craig: No, it's actually maybe more uncomfortable to observe it, I think, than to be on the receiving end.

Anna Marie: Oh god, you're antsy. You're squirming in your seat. You're like, "Oh, I feel for you. I've been there." I've sat on one of those as well.

It depends on who we're talking about. When we're talking about head of product or anyone who has veto power or things like that, in a product review setting, incepting is really important. Even the kickoff meetings with the whole team, getting the engineers.

The Yammer way, we get a lot of the UX and hypothesis building set before we bring engineers on board. So when we do have that kickoff meeting, where the engineers get staffed and everything gets really expensive, making sure that they already know, before that meeting, what we're thinking about, that we've thought about some of the engineering constraints, and that we've chatted with them offline about that, is very, very important.

When it comes to things in the meeting, I think one of the strongest skills a PM can bring into a meeting is identifying, when something comes up, if it can be resolved there or if not. And if it cannot be resolved there, tabling it.

I've gotten really, really good at saying, "Okay, I think this is what you guys are discussing. I think this is the kind of thing that we're not going to be able to come to a decision about right now. I appreciate that we disagree on it. I'm going to schedule a separate discussion for this, and let's move forward on something that we can agree on." And I probably do that in every single meeting at this point.

Craig: And it sounds like it's heavily coupled, too, with clearly saying what the goal of the meeting is from the outset, right? If you have an actual agenda, people know why we're actually meeting, versus it just being on the calendar.

Anna Marie: Oh god, I hate that so much. I'm the biggest fan of meetings ending early. I'm the biggest fan of having as few meetings as possible. When I was at Asana, we had "No-Meeting Wednesdays," where the entire org doesn't have meetings on Wednesdays. Let's just try to give people, what is it, the IC's time versus the manager's time?

Craig: The makers versus the managers.

Anna Marie: The manager's schedule and the maker's schedule. In a maker's schedule, they need these blocks of time.

Craig: It was Thursday, at Heroku.

Anna Marie: Oh really, you had "No-Meeting Thursdays"? And scheduling matters too. Even being thoughtful when you're scheduling your meetings, putting them just before lunch or just after lunch, where you're already going to be interrupted at that point in time, so you don't end up breaking apart an entire afternoon into separate chunks.

I'm very, very conscious when I'm scheduling a meeting that there's a clear agenda. People understand why we're going to be there, we state at the beginning at what point we're going to leave the meeting. "Here are the things we want to figure out in this meeting, and then the meeting will be over."

I'd say at least 80% of my meetings, I end early. There's no reason for us to spend the rest of the hour here.

We've already figured out everything we need to, and we'll follow up online.

Craig: I think that's a huge tip for a lot of people. They don't realize, just because it's on the calendar doesn't mean you have to keep going.

Anna Marie: Absolutely, and give those people that time back.

Craig: Cool. I think this is all super interesting and helpful. We covered a lot from data to discarding data to being nice to people. I don't quite follow that one.

Rimas: Psychological safety.

Craig: That sounds a little better, there. Anything you want to leave with listeners before taking off?

Anna Marie: The one thing I'm thinking most about right now is what I call "45 decisions, 55 decisions," which is: if you can't necessarily tell what's going to be the right way to go, it's vague, 45 one way, 55 the other, and you can't tell where it splits, it's more effective to just make a decision than to chase that extra value from being sure which way to go.

Drive for decisions, even if there's a little bit of ambiguity. If it's not a ton of ambiguity, it's better to just make a decision one way. The org will thank you later.

Rimas: Awesome, that's great advice.

Craig: Anything else you want to plug?

Anna Marie: I'll plug my product podcast. I have a podcast I'm starting with Sandi MacPherson called Clearly Product. It's a book club podcast where we read books about product management and then get together and talk about them on the podcast: what we learned, the big-company PM take, the small-company PM take. Don't read the book yourself, listen to us.

Craig: So, if you don't get enough of it from us, there's another source for your product management information.

Anna Marie: Clearlyproduct.com. Thanks guys, this was really great.