In episode 8 of O11ycast, Charity and Rachel are joined by Nylas Co-Founder and CTO Christine Spang to discuss navigating the complex ecosystem of emails and how Nylas has managed to create an API for it.
Charity Majors: An email startup in this decade might seem surprising. Even a little bit old fashioned. What opportunities did you find that everyone else seems to have missed?
Christine Spang: A lot of times I have to end up explaining the email ecosystem to people.
Charity: Oh, boy. This is going to be a five hour long podcast.
Christine: I'll keep it short. The TL;DR of this is that everybody else's email startups fall into basically two different categories. One is, "I'm building a new sort of email client that has xyz feature that's better than everything else that's out there." And funny story, we actually--
Charity: Did this startup.
Christine: Did this for a little while back in the early days. And these startups tend to be pretty short-lived. They either have to get bought or they go out of business, because all of the major tech giants out there have basically set a baseline cost of zero dollars for an email client by subsidizing them with their other businesses. So Microsoft has an email client, Google has an email client. And if you're building an email client as a business, you have to be better in terms of features, but you also have to get over this psychological hurdle of like--
Charity: Be freer.
Christine: Yeah, be freer, which is basically impossible as a way to make money. I don't recommend doing that. But the other kind of category of other email businesses out there that has been started in the past decade is essentially what's called transactional email sending services. I consider these to be the cousin of Nylas. In that they are APIs and they have to do with email, but they have a specific use case that they're useful for, in that they're only sending. When you're sending, you're not authenticating as a specific person.
So these APIs are used for sending mass emails to like 100,000 people for marketing campaigns and things like that. There's a specific set of use cases which these APIs are really useful for, and those are things that Nylas is not good at. We're essentially complementary to these other email services. But the opportunity that no one else had taken advantage of is really seeing the value in the data of email.
Charity: Not just marketing to people based on that data.
Christine: Yeah, exactly. Email's been around for like 50 years at this point, and in the beginning it was just people sending messages to each other. Literally electronic mail. I could send you a letter or I could type in the words and then send it through the internet, and then you'll get it and it'll be like a letter. But over the past decades people have started using email for all sorts of different things, and now it's essentially the lingua franca of business. People are sending documents, they're signing contracts, they're setting up meetings.
Charity: They're sending highly structured data in a lot of ways.
Christine: For sure. And yet the actual data storage that people use for email is literally just a pile of documents.
Rachel Chalmers: And it's 50 years old.
Christine: It's like a giant file cabinet that's in chronological order of what--
Rachel: It wouldn't surprise me if it were a literal file cabinet with one really overworked clerk just running.
Charity: Indexes, file catalogs, all this stuff. This seems like a really great chance to introduce yourself.
Christine: For sure. Hi everybody, my name is Christine Spang and I am the founder and CTO of Nylas. We are a technology startup based in San Francisco and New York. We've been around for over five years at this point. It always blows my mind whenever I say that 'cause I definitely didn't expect it to work out this well.
Charity: I might have cut you off a little bit when you were talking about the value props. Please continue, if so. Sorry.
Christine: Anyway, the company is 45 people at this point, which always blows my mind. But to circle back and talk a little bit about what we do, we have taken the last 50 years of email client history and abstracted it all away for folks that are building email integrations. We give developers what they want, a modern REST API that allows you to connect to any email mailbox that's out there. An end user can connect their mailbox to some developer's application and then the developer gets a token, and then they can do all of the CRUD operations on a mailbox.
The reason this is really powerful is because the email ecosystem is really complex. There are millions of different email servers deployed around the world, a few different major families of email server types, and hundreds of thousands of different client implementations. Because email was originally created as an open standard, it allowed for all this innovation in terms of who was building the email servers and who was building the mail clients. The complexity that built up over that time has made it very hard to--
Charity: Once you give people a really flexible open-ended thing-- it's really hard to rein that in. It's really hard to dial it back to a more structured and less open-ended way.
Rachel: At the same time, email has been for me personally and for everyone I think who's used it for the last 20-25 years, the base of Maslow's Hierarchy of Needs. I imagine that sticking an API in there is going to like present you with some really significant performance challenges. People are very intolerant of slow delivery. How do you make your API go fast?
Christine: There was a really fundamental decision that we made early on when building Nylas, which actually was really scary at the time, but looking back it was really fundamental and I think it worked out for the best. That decision was essentially between two different options for how to build this thing. One is, "We could be a proxy layer, where someone makes a request to our API and we translate that into the Google API way to request the data, or the Microsoft API way to request that data." But we threw this idea out really fast, specifically because of this speed requirement and also for reliability.
So what we decided to do was basically build an email client that lives in the cloud and is fronted by an API, so that we are constantly syncing and caching a copy of this data and can serve it really quickly. That means there are a couple of different components to our service, which makes it more complicated than some random API that does something simpler. An API request for us is really boring. It literally just hits a proxy, talks to our application server, loads some data from the database, and returns it. But the interesting part is that that data has to be up to date with the backend provider, so our speed is not just "How fast can I serve a request?" but "How fast are we getting that data into our system and making it available for end users?"
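The design being described here, in rough outline, is a background sync loop that keeps a local copy of mailbox data fresh, while API reads are served entirely from that local store and never proxy to the provider. A minimal sketch, with all names and the fake provider purely illustrative (this is not Nylas's actual code):

```python
class FakeProvider:
    """Stand-in for an upstream mail provider (purely illustrative)."""
    def __init__(self):
        self.messages = {}

    def deliver(self, msg_id, body):
        self.messages[msg_id] = body

    def fetch_changes(self, known_ids):
        # Return only the messages the local store has not seen yet.
        return {i: b for i, b in self.messages.items() if i not in known_ids}


class CachedMailAPI:
    """An 'email client in the cloud' fronted by an API: reads hit the
    local store, and a separate sync loop keeps that store up to date."""
    def __init__(self, provider):
        self.provider = provider
        self.store = {}  # locally synced and cached copy of mailbox data

    def sync_once(self):
        # One iteration of the background sync loop.
        self.store.update(self.provider.fetch_changes(set(self.store)))

    def get_message(self, msg_id):
        # API reads never touch the provider, so latency is just a lookup.
        return self.store.get(msg_id)


provider = FakeProvider()
api = CachedMailAPI(provider)
provider.deliver("m1", "hello")
api.sync_once()
print(api.get_message("m1"))  # "hello", served from the local store
```

The trade-off this illustrates is exactly the one discussed next: request latency becomes a boring lookup, and the hard metric shifts to how stale the local store is allowed to get.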
Rachel: What performance goals do you set around that? Were there any surprises?
Christine: I don't think that there's anything really weird in terms of what kinds of things we're measuring. It's all standard: success rate, latency. But we also have a bunch of custom instrumentation around this sync service that is keeping our data stores up to date. For example, we measure the message ingest rate and the delta. Every message has a timestamp from the server on it, and we keep track of the difference between the timestamp on the server and when we commit it to our database in order to generate metrics, because we want to make that delta as small as possible. That's a really interesting one. We don't have one master metric that tells us that the system is working. We've brainstormed a few different ideas.
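The ingest-delta metric described here is simple to compute: for each message, subtract the server's timestamp from the commit time and emit the difference. A minimal sketch, assuming timezone-aware timestamps; the function name and the metrics call mentioned in the comment are hypothetical:

```python
from datetime import datetime, timezone

def record_sync_lag(server_ts: datetime, commit_ts: datetime) -> float:
    """Return the ingest delta in seconds: how long after the server's
    timestamp the message was committed to the local database."""
    lag = (commit_ts - server_ts).total_seconds()
    # In a real pipeline this would feed a histogram metric, e.g. a
    # hypothetical statsd.timing("sync.ingest_lag_seconds", lag) call.
    return lag

server_ts = datetime(2019, 1, 1, 12, 0, 0, tzinfo=timezone.utc)
commit_ts = datetime(2019, 1, 1, 12, 0, 3, tzinfo=timezone.utc)
print(record_sync_lag(server_ts, commit_ts))  # 3.0
```

Tracked as a distribution rather than an average, a metric like this shows both typical freshness and the long tail of slow mailboxes.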
Rachel: Does anyone really? I'm not a big believer in the master metric.
Charity: Most people have one, they are just all crappy.
Christine: I've heard a lot said about the like Amazon orders--
Charity: Or like Pingdom. A lot of people will just set up a Pingdom check that runs once a minute from all around the world, and then literally at the end of the month they're like, "Our uptime was 99.8%." And you're like, "Jesus."
Rachel: That feels all kinds of participation trophy to me.
Charity: Oh god, so much yes. There is something powerful in being able to condense it down to a thing that you can track though. And often when it's your first stabs into this arena it's too depressing to do anything else. You got to get that low hanging fruit, you got to go, "OK now we're at 99.9. Now let's start to look under the covers."
Rachel: Maybe it's the difference between testing and teaching to the test. Having a guideline versus optimizing to maximize that.
Charity: It is. It's also like when you start-- A lot of people, like at Parse, when I started there they didn't have anything. They had some health checks, but nothing else. When we set up a check it was like, "Oh my God, this end to end check is failing how much?" And then the temptation is to go, "The test is flaky," and roll it back out. But usually it's not. Usually it's your infrastructure that is way more flaky than you ever dreamed. You just have to start paying down that debt.
Christine: For us we've tried to figure out, "Can we just measure how many objects are being synced into our system over time?"
Charity: Yeah. Which is valuable, but it's not going to catch the problems before the object gets into the sync thing and out of the-- I really do think the gold standard is end to end tests. It just depresses everyone how flaky they are.
Christine: I really think that there's this under explored area where if you're dealing with a system that's made of state, your end to end checks are not going to catch a million different things that could go wrong.
Charity: Talk about that. What are some things you guys have devised for dealing with state problems, for telling whether or not there's a critical corruption bug, or--?
Christine: Yeah. We've tried a lot of different things and I would say that we still don't feel like we have a good solution for this.
Charity: That's probably a good sign.
Christine: I just think it's a really hard problem.
Charity: It really is.
Christine: But when, for example, when we make major changes to the sync engine we've done things like doing side by side data comparisons with the old version of the code just so you can at least see what changed. For small mailboxes you can do a manual inspection. We've tried all sorts of data snapshotting type things, but they're all really complicated systems just in terms of testing.
Charity: And expensive. When we were rolling out our storage engine, we rolled out a change that did compression, and so we kept both copies for a while. That was expensive.
Christine: For sure.
Charity: We needed it to get confidence, but it's definitely a pretty tough trade-off.
Christine: People are just not talking about this particular problem.
Charity: People don't talk about the fact that the closer you get to storing bits on disk, the more paranoid you have to be. The game all changes. A lot of people are used to living their lives up in happy fairy land, where it's all APIs and everything's abstracted. But bits get on disks, and that's where your mistakes are permanent, they're irreversible. They can often put companies out of business, and they are often not detectable until it's too late.
Christine: Right. If you have a CRUD app, where the data is not that complicated, it's like an order of magnitude easier.
Charity: Yeah, but it's discontinuous. The complexity and the difficulty and the magnitude of very small errors.
Rachel: As you create these goals, as you create this way of measuring whether you're on the right path or not, how do the engineers feel about that? Do they chafe at suddenly having things to be measured by?
Charity: You were there from literally the very beginning. Often when you're like, "OK engineering team, you've been writing code. Now it's time to have SLAs." Did they chafe, or did they embrace it?
Christine: I would say in the early days it was not a problem to establish an on call rotation, and some basic metrics and stuff like that. Because the alternative was literally the people that were using our platform calling us.
Christine: Most engineers, they want to build something that is useful to people and is reliable, and when they see that a thing is not like living up to that they want to fix it.
Charity: In my experience there can be friction or disagreements though when it feels like there's an arbitrary number that's being imposed upon you. Or when it feels like it correlates to someone else's pain, not to yours. You're like, "Why do I--? This doesn't actually reflect the health of the system as I understand it. Why am I--?" Because they don't want to teach to the test, they want to build.
Rachel: To Spang's point though, I do feel like that tends to creep in at a later stage. When it's all early stage and the engineers are building and testing in production, it's very personal and they take real pride in their work. But as organizations scale, they lose that agency and autonomy, and they lose that direct connection between the product of their work and customer happiness. I think that's where I see metrics suddenly becoming a real bone of contention.
Charity: Do you feel like this is a--?
Christine: I've never seen that. Like at Nylas, and--
Rachel: Nylas is still at the scale where I would say everything's still really personal. If you're looking at BMC or CA or IBM--
Charity: Or if you're, like I think of what you're talking about as the different stage where you've got sales people who are going out and selling a number, and the engineers are going, "You sold what? Wait, what does it even mean? I didn't sign up for this."
Rachel: Responsibility without input.
Christine: Because it's a little bit difficult to define what the end to end reliability of our system is, it's also hard to put in a contract. So we very intentionally tried to minimize.
Charity: Contracts-- Lawyers are weaselly as fuck. They can figure this out.
Christine: "What are the contract numbers?" Because there's just so many details there that we don't want to be arguing about it at a legal level.
Charity: How did you know it was time to start introducing these numbers though?
Christine: People were building production things on our service while it was still in beta.
Charity: But there are multiple. There's a, "OK. It's time to have a number." And then there's the, "Our numbers aren't good enough." What are the things that feed into your intuition that says, "OK. It's time for us to level up in terms of our commitment to our users?"
Christine: It's a lot of just talking to customers. They're very vocal with us about when they feel like--
Charity: But there's a difference between somebody saying, "Your service is not reliable enough right now," and somebody saying, "We need a clearer guarantee. It doesn't have to be perfect, but we need to know what we can base it on." In my experience, those have been two different experiences.
Christine: Interesting, I haven't had that experience.
Charity: Interesting, OK.
Christine: Because we've been pretty minimalist in terms of our guaranteed SLAs, it's all based on, "Are the customers happy?" That's the goal at the end of the day. If the customers aren't happy, then we introspect.
Charity: What's been your process for coming up with these goals? Is it pretty much, you get in a room with a couple people and then you come out? Are the engineers part of it?
Christine: Obviously the engineers are part of it. A big input into our process is literally our head of customer support categorizing things.
Charity: Support teams are chronically undervalued in this ecosystem.
Rachel: Oh my god, yes.
Christine: I completely agree with that.
Rachel: Support and documentation, unsung heroes of our industry.
Christine: The thing is that our support teams ended up being the experts on what is good and bad about our product.
Charity: They can show up to a meeting and they can embody, they can channel your entire user base and just be like the voice of God if you dare to listen to it. Like, "This is what you should do."
Rachel: That's why people don't listen. Because it is terrifying.
Christine: What we've ended up doing is trying to systematize the knowledge that they're getting out of just being on the ground with customers every day. We use this tool called productboard to identify the different features, improvements, projects, or reliability things that need to get worked on, and associate customer feedback with that. Then we actually have a scoring system to help us prioritize, because it's really easy to get into this recency bias mode, where it's like, "We're going to prioritize the thing that this customer was really unhappy about last week," whereas maybe you've heard about a different thing eight times over the past three or six months, and you just don't do it because it didn't happen right before your planning process. I don't think that our process is perfect, but it at least feels somewhat data driven.
Charity: But you have a process.
Rachel: Talk us through incident response. Customer calls, customer is not happy, what happens?
Christine: We have a few different things. One is we try to proactively realize when there are problems going on in the system through end to end alerting, and alerting on some important symptoms. In that case, just super standard, like our on call engineer is in charge of the incident, they pull in whatever help they need and we try to get it fixed.
But where it gets more complicated and nuanced is customer reports where "I'm seeing such and such behavior" that is not what's supposed to be happening. Then we have to figure out what's going on. So our systems generate a crap ton of data about what they're doing, because it's a really stateful system and we need to have the breadcrumbs to be able to diagnose those things when people report them.
Charity: Do you know how many events get generated roughly on the backend for every API request that comes in your front door?
Christine: Again, it's not the API requests that matter. It's-- Well, they matter, but they're a small piece. A simple piece. All of the mailboxes that we're constantly syncing are generating events. If they're active, they're going to be generating an event every time we sync the data. So imagine how many emails you get every day, or every time you mark something as read, or add a label, or move it somewhere else. All of that is generating changes on our end. And not just that, but things that the email provider is doing are things that we have to respond to. I'm not sure what the right order of magnitude is, but it really depends on the email account.
Charity: I remember at Facebook somebody calculated, and this took a long time, that every web request somebody made to Facebook.com generated between 200 and 500 debugging events on the back end as it made its way through the stack of all the dozens of services and databases and everything.
Christine: It really blows my mind how you have to spend the overhead on--
Charity: This is why you have to sample, why you have to sample. It's like not even a question.
Christine: For sure. But it's like, it's not a science, it's an art.
Charity: It is absolutely an art.
Christine: Like, "How much money do we want to spend on debugging information? How does that affect the speed at which we're able to debug things?"
Charity: Because engineers would love to spend 200 to 500 times as much on observability as they do on production, and obviously that's not reasonable. So it's always negotiations, it's always an art. The massaging of sample rates in modern-day observability is the replacement for what used to be the massaging of pager rules. It was a job that was never finished, it went on forever, you were always trying to find that perfect balance, and it was constantly changing. It woke you up in the middle of the night, and we're just trying to move it up the stack a little bit so you're not getting woken up but you're still getting those signals.
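One common shape for the kind of sampling being discussed is to keep every rare, interesting event (errors), thin out routine ones, and decide deterministically by hashing a request or trace ID so that related events are kept or dropped together. A minimal sketch, where the event kinds and 1-in-N rates are illustrative assumptions, not anything stated in the episode:

```python
import hashlib

# Illustrative keep rates: keep 1 in N events of each kind.
SAMPLE_RATES = {"error": 1, "slow": 5, "ok": 100}

def keep_event(event_id: str, kind: str) -> bool:
    """Deterministic hash-based sampling: the same event_id always gets
    the same keep/drop decision, so a whole request's events stay together."""
    rate = SAMPLE_RATES.get(kind, 100)
    if rate <= 1:
        return True  # errors and other rare events are always kept
    digest = hashlib.sha256(event_id.encode("utf-8")).digest()
    bucket = int.from_bytes(digest[:4], "big") % rate
    return bucket == 0

# Errors always survive sampling; routine traffic is heavily thinned.
print(keep_event("req-123", "error"))  # True
```

The "art" part is choosing and re-tuning those rates per event kind (and often per customer) as traffic and costs change, which is exactly the never-finished balancing act described above.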
Rachel: So is the art in setting tiers of importance? What do you do when there are multiple things broken?
Charity: You can't care about everything equally. You have explicit--
Christine: Some of the weirdest things for us to handle are when they have complicated steps to reproduce. It's like, "When I do things x, y, and z in this order on a mailbox, this thing gets out of sync." We have had to keep around some semblance of an ordered event log for our sync service, because for sync it seems to be really important to have things in order and to not drop important events.
Charity: A write-ahead log, basically.
Christine: What we end up doing is try to be really vigilant about what events are not providing signal, and try to just put them on debug and not log them in production. If multiple things are broken at the same time we try to prioritize things according to customer impact, but obviously that's not a science either.
Charity: At Parse I remember we would prioritize. The API was P1.
Christine: It's like, "If we're not serving API requests, we better fix that really soon."
Charity: Writes are more important than reads, because at least you know the reads can catch up later. But it was always the API first, and then push notifications, and writes before reads. And then the website was somewhere way down at like P10 or something.
Christine: We don't even manage our own website, we let marketing do it.
Charity: Living the dream.
Christine: I'm a fan of all these tools and services that mean our engineering team doesn't have to manage that thing.
Rachel: How have you improved over time, and what else would you like to achieve?
Christine: Gosh. It's hard to even compare. Our reliability in year one--
Charity: Oh my god, I've known you, I didn't know you then but I think I've known you since year two or so.
Christine: Definitely not production ready.
Charity: It's phenomenal how far you guys have come.
Christine: I mean the scale is like--
Rachel: Oh, God. We met back at seed stage and I dragged it in to my former employer, and yeah.
Christine: My pitching was so bad then.
Rachel: You should have seen the dudes. You and Edith were my big misses.
Christine: I remember really--
Charity: You were very earnest.
Christine: Feeling really awkward.
Rachel: It's all theater.
Christine: Rachel, you're super intimidating.
Rachel: How have you improved?
Charity: What do you have left to do?
Christine: I'm really proud of the fact that no one else has done this before. Like, we've been around for five years and for three of those years I was like, "Am I a bad engineer or is this just really, really hard?" Because it's like--
Charity: Yeah, it's hard.
Christine: "We're just constantly fixing things and making it better."
Charity: Email's hard.
Christine: Engineers are naturally often a little bit perfectionist, and it's like, "I'm not satisfied with my thing until it's working all of the time." And the nature of software engineering just means that never happens.
Rachel: That was the valid pushback on Nylas at seed stage. It was like, "This is huge. Can it actually be done?"
Charity: And then it takes this much money to store all this email forever.
Christine: So we had to figure out how to sell it in a way that would pay for those costs. And that's really why we ended up focusing on the business market. Because email's cool, everyone uses it, but it's so much more valuable for businesses.
Rachel: Well this is what I keep telling people about going after the enterprise market. You go to the enterprise because that's where the money is, that's why you rob banks. Not that enterprise software is bank robbery.
Charity: Not that startup founders are robbing anyone whatsoever.
Christine: I mean consumer is fun, but like--
Charity: No it's not, people are dip shits.
Rachel: Consumer gets all of the oxygen because the big exits have been consumer, but you look at enterprise exits, there's more of them and they're really consistently--
Charity: If you're a data company-- We're going back to the difference between APIs and data, the level of abstraction. If you are a data company there are just things that are attached to that, and you need to be talking to enterprises. It moves too slowly to keep up with fads like consumers.
Christine: I'm really a practical person and I think it's great to just be a really good plumber.
Charity: To be boring.
Rachel: This is why I'm always drawn to infra. It's hugely leveraged, like indoor plumbing: it's the basis of civilization, but nobody pays any attention to it.
Charity: It's been so weird for me, just this last year, learning how to do product stuff, quote unquote. 'Cause I show up and I'm like, "They don't know anything." In infrastructure I have always shown up knowing what my priorities were, because it was all, "Do or die. You get this done or nobody gets to do anything anymore, 'cause your databases are not there." On the product side they're just like, "We don't know, we have to sniff around after some intuitions, or see what people are feeling like this week."
Rachel: You just have a hard time with human factors because you are at heart a murder bot. I love that about you, but--
Charity: Good times. Spang, lessons learned.
Christine: Right, lessons learned. I was just thinking about how one of the hard problems in enterprise is just being able to explain well what the fuck you actually do. It took us several years.
Rachel: Not because the buyer's dumb, it's because the buyers have enormous breadth and they're doing all the things all the time. You just have to find the story that matches their pain.
Christine: Right, so it's always tricky in recruiting because you have to be able to explain what the value of the company is in a way that people can say at parties.
Charity: Without leading with all of the, "We're doing microservices and Golang and blah blah blah."
Rachel: This is the thing that distinguishes the really great start-ups from the ones that don't quite make it, is the ability to tell the story.
Christine: I've had practice with this before, because the previous company I worked for was doing kernel engineering stuff, and nobody understood what the fuck I was working on. Working on email, that's a lot easier.
Rachel: It's probably the fundamental part of all three of our jobs. It's infrastructure, but you need to care about it.
Christine: Scale, and infrastructure, and just making the bits flow in a useful way is a really interesting problem from an engineering point of view.
Rachel: I go on road trips to look at interesting bridges, you've got me.
Charity: I agree. But if there's one thing that I-- Over the course of the last-- We're almost at three years with Honeycomb. Looking back on all of the mistakes that I knew I was going to make and tried not to make, I nevertheless made them anyway, with my eyes wide open. One of them being that we spent the first year writing a storage engine.
I knew people like me always under-invest in the product, and they don't hire soon enough, and they don't blah blah blah. I did not do all those things, and I thought that I was compensating, that I was leaning hard in the other direction, but I was leaning half as hard as I should have. So, Spang: if you could just go back and whisper in the ear of yourself, let's say four years ago, what would you tell yourself?
Christine: That's a great question. I honestly feel like the story of this company has been a bunch of really smart people who have no experience--
Charity: I don't know why anyone funds first time founders.
Christine: Trying to figure things out from scratch, and it kind of worked but I also kind of wished that we had known some more experienced people and had some more best practices from the beginning. But I also feel like when it comes to experience, you don't know that if you had taken the time or had the experience to do things right the first time that you ever would have made it to like the point--
Charity: You can't actually take someone else's advice and best practices and just adopt them without understanding them--
Christine: But if we had spent twice as long launching the product maybe it never would have come.
Charity: I'm a big fan of falling off a cliff and seeing how it works.
Christine: So it's like, "Yes the first version was shitty."
Charity: Let's constrain this to best practices around observability and metrics and stuff. How do you think if you were doing another startup two years from now, what lessons would you take away from this one?
Christine: I definitely underestimated the importance of trying to figure out how to get consistency right from the beginning. I feel like once you build a system and it has all the bugs in it, it's infinitely harder to go back and remove them again.
Charity: Because you gotta reproduce the bugs too.
Christine: Because there's complicated ways that like they've gotten in there, and that person who wrote that code might not even be there anymore. Debugging is just harder than writing code.
Charity: Debugging is harder than writing code.
Rachel: Oh god, yeah.
Charity: That should be our maxim on this.
Christine: We shot from the hip on that in the early days. And I'm seeing, looking back over time how hard it's been to just remove those bugs.
Rachel: From my perspective, the reason I invest in first time founders is precisely because you don't know what you don't know. Sometimes you make a mistake that turns out to be something somebody thought was impossible and you do it, like Nylas.
Christine: I really think that there's this weird thing with start ups where you need to be persistent, and also--
Christine: Just delusional enough to not give up until you actually get to a good enough point.
Charity: It's basically impossible, but you don't quite know that so you keep going.
Christine: But I've also struggled so hard over the years with, "Is this completely pointless and impossible?"
Charity: But here you are.
Christine: It seems to be possible.
Charity: I have to throw this in there at the end, just because Nylas was our first paying customer at Honeycomb. Which is a terrible thing to inflict on anyone, I'm so glad we're still friends. What's it like working with services that are immature and bleeding edge? When is it worth it, when is it not, and what advice would you give someone else about when to take the plunge, and how to evaluate that risk versus reward?
Christine: It's really important to know the context here because otherwise you can't figure out what the lessons are. We had a really great experience working with Honeycomb in the early days, partially because your engineering team is awesome, so even in the early days the technology worked and it was pretty stable. I was constantly just excited and amazed by the fact that you guys had so much experience building reliable, scalable services.
Charity: You had just come from the dumpster fire that was Parse.
Christine: We never had to worry about, "Is this thing going to scale?" Or like--
Charity: It was so green from a product perspective.
Christine: "Is this thing gonna actually work?" Because we knew you guys, and we trusted that.
Charity: We were working on the same block, so you could walk around the block and throw things at us.
Christine: It was a three minute walk or something like that.
Rachel: You were also the dream customer. You lived the problem statement everyday.
Christine: One was like--
Charity: That's why we wanted you so badly, 'cause platforms.
Christine: We had a really big pain point, and when you have a really big pain point the factors that you're considering are weighed differently. Because if it works out your pain will go away. So if you don't have a big pain point you probably shouldn't take a bet on some tiny little start-up, but there were definitely things that were difficult.
Charity: What was that pain point?
Christine: There was no documentation, no best practices. We would ask questions and it'd be like, "We're just making this up as we go along."
Charity: "That's a great question."
Christine: But on the other hand, we could walk around the block and we have an engineer from your team come and literally plug in the integration for us.
Charity: That's true. Early start-ups will be so grateful they will bend over backwards to help.
Christine: We knew that there was this trust from just getting that hands on support and knowing that--
Charity: What was that key problem that we--
Christine: Your team's brains were working on our problems and trying to solve our pain.
Charity: You had all of our brains on a silver platter.
Christine: But the pain point was just like, "Because we're a multi-tenant platform and we have all of these different customers that are using us in different ways, we really need to be able to drill in and see things from their point of view." There's no other tool that gives us that in the same way that Honeycomb does.
Rachel: It's technical empathy.
Charity: It is.
Christine: I had to really become an evangelist for Honeycomb to my team, because there wasn't the brand for that. It was hard to explain.
Charity: We did not have it figured out how to explain at all. You were really helpful.
Christine: I had to go teach the team how useful it was. But the really key point is that we had this huge pain point that this product solves, and we had those personal ties and partnership, a trust relationship. You can develop that with someone you don't know personally, but it's gonna take face time, and you gotta treat it like a relationship and not like something that you're gonna get packaged with a bow tie on it.
Charity: A good rule of thumb for me if I'm asking someone to adopt something that is brand new, I have to be able to say that I think it's 10X better than what they have. If I can say that, then I can say, "It's worth the pain. Trust me."
Christine: For sure.
Charity: If not, then it's probably not. I never would have gone after somebody for first customer who didn't have the pain of a platform where I knew that we could have that much of an impact, because I didn't want you to have a bad experience.
Christine: In a sense, we didn't know what we didn't know. We were like, "You guys built Parse and know things that we don't about building platforms," and so there was that line of trust of like, "We think we need this tool," and these people have used it, or used something like it, on a similar type of problem--
Charity: We're eternally grateful. It's been so much fun watching Nylas grow up over the past couple of years.
Rachel: So much fun.
Charity: I'm really excited to see what you do next. Congratulations belatedly on the Series B.
Christine: Totally. I'm sad that you guys are not around the block anymore.
Charity: I know, we're three blocks down from you now.
Rachel: They're a long three blocks.
Christine: Come on. It takes-- I guess you guys moved again.
Charity: Yeah we did.
Christine: I'm not sure I've seen the new offices.
Charity: You've been by for happy hour once.
Christine: No, not the new one. I've only been to the one next to the bar.
Charity: We should resolve this.
Rachel: Party time. Thanks so much Spang, it's always great to see you.