In episode 19 of EnterpriseReady, Grant speaks with Mark Geene, CEO and Co-Founder of Cloud Elements. They discuss Mark’s early days in enterprise, integrations between apps and APIs, and the go-to-market lessons learned along the way.
Grant Miller: Alright, Mark. Thank you so much for joining us.
Mark Geene: Grant, it's great to be here today. Thanks for inviting me.
Grant: Of course. Maybe let's just jump in, and get a quick overview of your background?
Mark: Sure, yeah. I've been in the software industry for far too many years, but really I've always been in the enterprise software space.
I started as a software developer, building applications, and like I said I've stayed in the enterprise software world ever since.
I have had roles in engineering and product management, which I'd say are some of the areas I've really enjoyed best over time.
Sales, professional services, customer support, all those different roles at a variety of companies.
IBM, Oracle, I had a really good run at Oracle during a good portion of the 90's when they were growing super rapidly.
I actually left with some other Oracle executives and we started a company called Tenfold, which we ended up taking public, and that was a fantastic run and really my first experience outside of a big corporation like an IBM and Oracle.
I got the bug for startups ever since, so now Cloud Elements is actually my third startup.
Grant: Amazing. How long ago did you start Cloud Elements?
Mark: We founded the company about seven years ago, but the first couple of years were really--
We knew we wanted to solve a problem in the integration space for software companies, but we didn't know the problem/solution fit.
We actually spent the first couple of years self-funding the company, doing consulting and working in that API integration space for companies, then we built out our product and launched our first version of our product in the beginning of 2015.
Grant: All right, cool. I'm going to dive more into Cloud Elements in a minute, but I wanted to dig around in your background a little bit because it sounds pretty interesting.
So, you took a company public? Was that in the internet--?
Mark: Yeah, that was in "the bubble." We were there in the late 90's, early 2000's.
What made us unique was that we were actually enterprise software. A number of Oracle execs left and built a company that took what ERP was doing on a horizontal basis and applied it to vertical markets.
Like financial services on the trading side, and utilities, and insurance, really taking these bigger ERP concepts for the first time and applying them to vertical market solutions.
Unlike a lot of things in the .com bubble, we had real revenues, and we grew that to a few hundred million in revenue with real profits as we scaled the company.
Grant: And you weren't selling to other technology companies?
Mark: No, we were selling to real businesses. Like the Goldman Sachs' of the world and the Chase's and United Insurance, and other firms like that.
They were real businesses, really looking to transform their operations using more internet-ready software.
Mark: We were actually quite early in helping enterprises take and apply web-based services and applications to solving problems that were outside of their finance departments, real operational problems in their businesses.
Grant: Yes. First of all, selling to actual companies at that point was probably fairly rare.
I think that's a lot of the reason that some of these companies went under, is because their customers were all other startups and--
Mark: Exactly. It was all this fictitious-- Other startups, or there was a huge run of telecommunication companies that were sprouting up everywhere and going out of business left and right as well.
Grant: So, if you had real companies paying you real revenue, then-- Yeah, sure. Is that business still around?
Mark: No. Tenfold essentially is no longer a standalone operation; the various parts got folded into other businesses and other software companies over time.
So as you can imagine, as we built out a vertical catalog, interest in that vertical catalog came from a lot of different companies, like the Enfours of the world or whatever, that were going after particular vertical segments.
Mark: It ended up spawning out about five different types of companies and technologies.
Grant: That makes sense, because realistically each vertical becomes so specific and the buyers become so specific.
I think you see this and there's some really big companies that are built on taking a vertical approach.
Mark: Yeah. You look at some of those markets, like policy management systems for insurance companies, that's an entire business in and of itself.
That's a big market and a big set of companies, and buy-side trading systems and things like that are big businesses in and of themselves.
A big part of the lessons I learned through that company was the importance of having a good platform that can be applied to various use cases.
Essentially what Tenfold did was build a very sophisticated platform to make it easier to build any type of application.
That got applied to building vertical market applications with our go to market strategy there.
Grant: That's cool.
Mark: Yeah, but that was my first taste of the startup world.
After engaging in that and seeing how quickly you can make decisions versus a big company like an Oracle, I just got addicted to that speed of operation that you only get in a leaner organization.
Grant: Then, was Cloud Elements--? Actually, you went to another company.
Mark: Yeah. The next company was a company called Channel Insights, and we built essentially the largest aggregator of point of sale and inventory data for companies that sell through a distribution channel.
So I got some exposure to distribution channels throughout my time at Oracle and Tenfold, and other things like that.
It always amazed me, like these companies that are selling technology products, whether it's HP or Symantec or Intel, they sell their product out to a distributor and then they have no idea which end customers are buying that product or how much is really left in the channel after it goes out from a distributor to a re-seller.
So there was a company called Info Now in Denver that had this idea, they were involved in a number of things.
I spun off one of their divisions and really focused on building this business called Channel Insights, to really give businesses that sold through complex distribution channels for the first time visibility into who's buying their product, and how much inventory is still out in that channel.
So if I'm selling millions of units in the UK, I would love to know more: "How many are being sold to retailers in the UK? How much to the financial services industry, and how much to big companies versus small companies?"
Things like that, we would collect this data from thousands and thousands of re-sellers worldwide directly out of that re-seller's CRM and/or accounting systems, and then clean it and aggregate it together, and then sell it to the manufacturers.
So they could get better visibility in running their business, pay their partners more efficiently, see which ones were accomplishing the goals they wanted, manage market development funds, pay commissions to their salespeople and manage their inventory more effectively.
Grant: That's really interesting. Are any of those same principles or software similarly applied to enterprise channel sales?
Mark: It all is the data enabler for enterprise channel sales. We would work with, whether it was the Salesforce.com's--
We were an AppExchange partner with them and worked in a partnership program to resell our product to Salesforce customers who were trying to get a better handle on sales data.
We would work with other vendors in the PRM and channel management space as well, and really solve that last mile problem.
Which is, "Get the data and aggregate it." It was really a big data play before it was called "big data," aggregating that data together and providing analytics on it, for the first time, all the way through to that end customer.
Grant: OK, cool. I'm guessing some of those experiences were formative in how you came to what you're going to do with Cloud Elements? Or, how did that come about?
Mark: Yeah, absolutely. That was the inspiration for Cloud Elements. So Channel Insights is now part of Model N.
So, that was an exit. Vineet, who is our CTO, and I were looking to do our next thing, and we really sat down and said, "What was the biggest challenge we had at Channel Insights?"
We sat there and looked at each other and said, "Integration."
The challenge we had is that we tried integration platforms, some of the state of the art platforms out there like MuleSoft and others, and when we were at Channel Insights we tried to solve this problem.
Because we had to connect to CRM and accounting systems, and we learned that they weren't really built for us as software developers or software companies.
They were built for the buyers of applications at enterprises to make all the shit they bought work together, but they weren't really built for us as a provider of an application to work with the applications used by our customers and partners.
That informed our decision in founding Cloud Elements as our core thesis, that more and more of the integration responsibility was going to move from the buyer of apps to the provider of applications.
So as you describe on EnterpriseReady, one of our beliefs was that to be really enterprise-ready as a software provider, you have to take more and more of the integration burden and responsibility away from the buyer.
Or they may not buy from you or your product may not be as sticky, or your onboarding time may be too long if you can't work with that ecosystem of apps that those companies are using.
Grant: Yeah. For everyone listening, if you've been to EnterpriseReady you know that integration is one of the core principles that we think builds up an enterprise-ready application.
Mark, I'd love your perspective: if you're an application vendor, whether you already share this perspective or you're just hearing it now, what are the first steps you should take to actually be integration-ready and start to build a great integration foundation and framework?
Mark: I actually look at it at multiple levels, and that first step is to design your application to be based on APIs.
It sounds basic, but you still see a lot of new application software being built with the UI first, with the APIs catching up later:
"I'll document and define the services underneath afterward." So I think that first step is to really develop a good service-level foundation for whatever functionality you're building, even if you only expose it internally initially.
You've got that set of APIs, which then makes it easier to expose them out to the world.
Really make sure you're developing on that service-based, microservice-based architecture in today's world: defining what those services are, keeping them granular rather than big monolithic services, and getting them well-documented in an OpenAPI framework.
That's foundation number one that will serve you well for integrating in the future.
But I look at that as foundational, because there's a problem that crops up with even some of the biggest and most successful software companies we work with: on the other end, the consuming side of APIs, there aren't enough developers in the world to consume the exponential growth of APIs that have been published.
So you've got this incredible fragmentation of software markets. Each software application could have dozens to hundreds of individual API endpoints associated with it, and then you've got this developer on the other end whose enterprise might have a hundred to thousands of applications, dealing with potentially thousands to millions of different API endpoints they have to try to work with.
The first step is you have to have that API, but one of the next steps we start looking at is providing some pre-built integration experiences to own some of that within your ecosystem, and let the API be more of a place to integrate long tail scenarios that aren't pre-packaged, or other use cases like that.
But at Channel Insights those re-sellers that we had to connect to, unless I worked with their accounting system or their CRM system, whichever or both that I was connecting to, they just were like "OK. Great. You've got an API, but we're not going to spend the time to write to that.
We don't care if you're going to pay us for the data we're providing or not. I don't have a developer sitting around ready to do this, and I'm not going to hire a consulting firm to do it."
So we offered out-of-the-box integrations to dozens of different systems, making those more self-service experiences from within our application, simple and relatively easy for the common use cases we saw, and let writing to our API be for the edge cases or non-standardized, more customized types of scenarios.
Grant: That's a great foundation. So hopefully these folks are thinking about actually building an enterprise piece of software, and they understand that architecture is important.
Are there any data considerations you should make, in terms of how you structure the data?
I think about holistically the idea that your system is likely not to be the final and only system of record, but leaving space for other considerations? Any thoughts around that?
Mark: Yeah, absolutely. Designing your solution around that ecosystem. We call it "the application ecosystem that your application lives in."
As an application provider, what is the context of other applications that my business coexists in and lives in?
Looking at how I can help harmonize data, and whether that's synchronizing back to other services or combining the data with other services, loading data up to other services or receiving it from them in batch or bulk, whatever it might be.
But if you think in terms of that example of, "I'm a new fintech company and I'm a B2B fintech in the payables space."
For sure all your customers have a payables system of record, they've got an accounting system or something where those payables are managed out of.
How do I bring together that data from that system from a user experience perspective with the data that I have in my application that they're going to use to complete that payment?
And then how do I reconcile it back up to whatever accounting system, like QuickBooks or NetSuite or Intacct, SAP, whatever it is that's their system of record.
So there's that user experience question of "Do I need to be in the other system?"
Being a single pane of glass in that accounting system where they're working, that may be one scenario.
Or do I need to make it seamless for that user to come to my user experience but have all their data synchronized in near real time from the accounting system they're using?
So they're comfortable coming to my user experience and being there, because I can control and manage that better?
That aspect of really starting from the experience that my user needs, the context of which other systems they use, and then how they want to engage and operate with those other systems, is really a critical success factor for every software company in today's world.
Because even the SAP's, SAP re-sells our product to connect to 175 other third party products.
Even the SAP's of the world realize that they don't exist as an island, and business processes aren't completed through just one system any longer.
It's very rare that a business process is end-to-end completed through one application.
It's completed through multiple applications in your application ecosystem, or analytics if it's not a process.
Like I'm analyzing something, same type of thing: it's pulled together from data that could exist in multiple sources. So I like to start with the personas of the users, because integration is just a means to an end.
In and of itself it's nothing; it's plumbing. But the user experience and the personas of the users are going to be engaged in pulling data together from other systems in my ecosystem.
How can I optimize that experience for them? How can I make that be the most productive and powerful user experience, whether it's spanning across multiple systems or not?
Or, how can I make it the most efficient transactionally if it's a more transactional type of pattern?
Those are things that I need as a product owner and product manager to really be considering in my roadmap for my new application and my functionality.
Grant: Yeah, that's great. OK, so there's a handful of different areas here that I want to touch on.
First is, as an application vendor, how should I go about thinking about the ecosystem?
Obviously you mentioned there's financial services that has one app ecosystem, and there's different app ecosystems in analytics.
How should I go about understanding and discovering what applications are in that ecosystem?
Just wait for customers to tell me, or how do I find that?
Mark: There are multiple dimensions. First of all, it's "What are the systems of record being used by the types of customers I'm targeting?"
And "How do I engage with that system of record?" You start with the--
If I'm creating a new human capital piece of software for applicant tracking or whatever it might be, and my customer wants to move from tracking the applicant to onboarding the applicant, and I'm targeting large enterprise, I better have Workday and SuccessFactors and Taleo, for example.
Maybe Ultimate Software as well, as targets that I'm going to need to connect to in order to complete that process from tracking to onboarding. That's just one example.
But thinking through that use case, "In which system of record do I need to touch and engage with?"
That's the starting point, because that will become the roadblock to customer adoption over time.
Now, if you have to get the IT organization involved and it's a complex integration into, let's say, SuccessFactors,
you've just slowed down your sales cycle, because now that human capital department has to go find a consultant or get on the IT backlog, which could be a year out, or whatever it might be, in order to connect back into that system of record.
If you can take on some of that burden, that's what I talk about: move and shift some of that burden from the buyer to you as the provider.
Now I can shorten that cycle. I always like to encourage starting with those systems of record.
But if I'm going after small and mid-sized companies, then I'm going to be looking at things like Namely and Bamboo, and other products like that that are used in the mid-market-- QuickBooks, for example.
Those are going to be different targets. So the market segments you're going after, and the systems of record for the type of business you're in,
start to set the basis and the foundation for "OK, this is where I need to start with my integration strategy."
So that's the starting point, then there's the complementary systems that are in your ecosystem that you tend to coexist with.
If I make a marketing automation system, I've got to go take on HubSpot in the mid-market, and in addition to Salesforce there are probably a few CRM systems that I'm going to need to work with.
But there are probably a number of other marketing content systems, or if I'm a lead generation system, others that I'm going to work with and integrate to that are not necessarily the system of record, but they are complementary, increasing the value of my offering in areas where I may have a gap in my product or don't plan to solve myself.
So that's that next layer out of those systems of record, and then there's the other systems that may be partners that I want to go to market jointly with.
If I integrate with this partner that makes it a faster and more seamless experience to get to market together, how does it fit into my business development and my partnering strategy over time in reaching this market?
That'll drive some of the integration plans, and then as you also mentioned you'll hear from those early customers.
I hear over and over again from CEOs at software companies that it's usually the second or third question out of the mouth of a prospective customer: "Do you work with blank?"
So capture that data really well, get your sellers to capture what that blank is, so you're gathering real data from your sales prospects and your actual customers and you can start prioritizing your roadmap of integrations over time, beyond those obvious systems of record.
Grant: Yeah, that makes sense. If your answer is always, "We have an API so you can just hook it up."
You're probably missing an opportunity.
Mark: Yeah, it's just naive to think in most cases-- If the Marketo's of the world, or-- Again, name a big software company.
"Go write to my API" is generally not always the answer. Sometimes partners will go do that, but not all partners will.
We work with some major financial services companies, biggest brands in the world that have a lot of market pull, and they want to integrate their payment service.
This big financial services company wanted their payment service integrated, and they were expecting every accounting vendor to essentially write to their API.
But they could only get a few of them to do it, because again, it's not that they're not interested in this big brand, but it takes time and it takes effort.
Who has to maintain that? And those big accounting system vendors have really big roadmaps with more things than they can get to that they have to do for their own features, so you end up with this chicken and egg type of scenario.
"I got my API, go write to that" and they're like, "No. You write to our API," whatever it is.
At the end of the day the customer in many cases will be willing to just say, "That's no problem. I'll go hire a consultant and write to that API," or "I'll get my IT team to do it."
Grant: One of the challenges with pre-built integrations is that a lot of times different companies use their applications a bit differently.
They have different workflows or they use different fields to do different things, so if you're an application vendor, how much customization do you need to expose to your customers to actually configure that integration?
Are you talking about mapping fields and defining various events? Is that a part of what you have to offer in order to do a pre-built integration really well?
Mark: Yeah. It's very important that you can accommodate the level of customization with a pre-built integration like you described, especially when you're connecting into another system of record of some sort in whatever space you're in.
90% of the time they're going to have custom data and custom fields, so the first area of customization has to be accommodating custom data fields and custom data objects.
So we generally encourage customers who are going to offer pre-built integrations to be able to have default mapping to the objects that they're connecting to.
So, let's say you're connecting to a marketing automation system and you're passing lead information back to it.
Everybody has customized their contact and lead objects, and if it goes into the CRM system, they've customized the opportunity objects with all sorts of rich data that you could never anticipate exists.
The key is to have defaults for the standard field mappings, so your customer doesn't have to map "Street 1" and "Street 2" and "City, state" and all those basic things.
You can anticipate that work for them, but then have the ability to discover the custom data fields at those endpoints, and usually present a data mapper so your customer can map the custom fields self-service.
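The default-plus-custom mapping Mark describes could be sketched roughly as follows. All field names here are illustrative assumptions, not any particular vendor's schema:

```python
# Default mappings cover the standard fields so customers never map
# "Street 1", "City", etc. by hand; the names below are made up.
DEFAULT_MAPPING = {
    "street_1": "MailingStreet",   # our field -> target system's field
    "city": "MailingCity",
    "state": "MailingState",
}

def map_record(record, custom_mapping=None):
    """Apply the default mappings, then any customer-defined mappings
    for custom fields discovered at the endpoint (self-service)."""
    mapping = dict(DEFAULT_MAPPING)
    if custom_mapping:
        mapping.update(custom_mapping)  # customer extends/overrides defaults
    return {target: record[source]
            for source, target in mapping.items()
            if source in record}

# A customer maps one of their custom fields through a data mapper:
mapped = map_record(
    {"street_1": "1 Main St", "city": "Denver", "deal_stage": "closed"},
    custom_mapping={"deal_stage": "StageName__c"},
)
```

The point is the split of responsibility: the provider ships the defaults, and the customer only touches the fields they customized.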
So for a lot of our clients-- Not to make a commercial here, but that's what our platform does.
It essentially gives a software company a full integration platform. Authentication, data discovery, data mapping and transformation.
Mapping is just mapping the fields. Transformation may be, "I'm connecting into Zendesk, and Zendesk uses ticket priority one, two, three, but I use high, medium, low."
I've got to transform the values when I move data back and forth between those two. So, that's transformation.
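That value transformation could be sketched like this; the numeric codes are an assumption based on the one-two-three example above, not Zendesk's actual schema:

```python
# Our vocabulary on one side, the remote system's numeric codes on the
# other; both tables are illustrative.
TO_REMOTE = {"high": 1, "medium": 2, "low": 3}
FROM_REMOTE = {code: name for name, code in TO_REMOTE.items()}

def outbound(ticket):
    """Transform our priority value into the remote system's code."""
    t = dict(ticket)
    t["priority"] = TO_REMOTE[t["priority"]]
    return t

def inbound(ticket):
    """Transform the remote code back into our vocabulary."""
    t = dict(ticket)
    t["priority"] = FROM_REMOTE[t["priority"]]
    return t
```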
Eventing, which is knowing when data has changed. "Was something created, updated or deleted?"
I should know when that happens, if I'm going to build an event-based type of integration scenario.
Then there's orchestration. So, workflow: do I need to apply some "if this, then that" logic to a different scenario in how I move the data? And then logging and recovery.
All those things, whether you do it yourself or you use a platform like ours, you need to consider in an integration offering, from authentication through to logging and recovery. In some cases companies build that themselves; in some cases, again, they leverage platforms to do it. But that then gives you the ability to understand the data structure, build a map and then transform it.
That's the most common type of customization. Sometimes you need to customize the workflow or the orchestration based on your client making changes to their underlying system.
That's not as necessary initially, especially if you're doing a mapping to each system, but that would be another level of customization.
Whether it's filtering to get only the data fields that I want, applying some logic to when I do or don't want to update, and some scheduling associated with that.
Those might be areas that have to be tailored and customized a bit as well, but in our experience the biggest areas are just making sure the data can flow seamlessly between the systems.
Grant: Yeah, it's super helpful.
I love that you guys are addressing some of these issues, because it definitely feels like a lot to build. Another question: you mentioned events, and I was thinking about this because I think there's this--
I get a little fuzzy on this sometimes, but there are APIs that you can request information from, and then there are web hooks, which I think are the other side, and a little more event-driven.
Can you talk about how an application vendor should think about what they offer as an API versus when a web hook should be available?
Mark: Absolutely. Web hooks are super valuable to enhance your API with essentially an automated, near real-time response when something was created, updated or deleted.
Unfortunately-- We have our State of API Integration report. I shouldn't say "Unfortunately," we do it every year and it's a great report.
But unfortunately, through that report we discovered that the share of endpoints that support web hooks is around the 20% range of all endpoints.
So if you don't provide a web hook and you're in an event-based integration scenario, and somebody wants to know that a new contact was added to your system, they then have to poll you and ask, "Has a new contact been added since such and such a date?"
Whatever the window of time, polling's super inefficient for you as the application provider, because it's somebody hitting your endpoint over and over, asking "Has something changed? Has something changed?"
Versus you just telling them when something changed, which is what a web hook does.
I'm glad you brought that up, Grant, because it's an important part of moving beyond just providing an API to providing APIs that can be really well-consumed in synchronization scenarios, where I need to know when your data changed. Adding a web hook will cause some overhead in your system as well,
because you have to provide that response back, but it's less overhead than somebody polling your system to figure out when something changed.
Because you're essentially pushing to them and letting them know. So we feel that adding web hooks and enabling the APIs that are going to participate in that ecosystem-- your core data objects--
You don't necessarily need them on your administrative APIs, like the APIs that add a new user to your system and things like that.
But for your core pieces of data content that are going to be consumed into that ecosystem of applications, adding web hooks to those is really valuable and really enhances the power of your API.
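The contrast Mark draws between polling and web hooks can be sketched as follows; the payload shape and field names are hypothetical, not any specific vendor's format:

```python
import json

def poll_for_changes(fetch, last_checked):
    """Polling: repeatedly ask the provider 'has anything changed since
    X?' Most calls find nothing, which is the inefficiency described above."""
    return [c for c in fetch() if c["updated_at"] > last_checked]

def handle_webhook(raw_body):
    """Web hook: the provider POSTs to an endpoint we registered,
    telling us what changed and when, so no asking is required."""
    event = json.loads(raw_body)
    # A typical minimal payload: object id, event type, timestamp.
    return (event["id"], event["event"], event["occurred_at"])
```

With the web hook, the consumer only does work when something actually changed; with polling, the provider pays for every empty check.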
Grant: Great. So with a web hook you're basically allowing the integrator, or the user, to provide the endpoint.
Then when the event happens on your side, then you're just calling that endpoint for them. Right?
Mark: Yeah. You're just pushing your response to that endpoint, saying "At this time, such and such--"
Usually it has an ID or whatever it might be: "This data field has changed."
So now, instead of having to ask, you tell me that it changed and now I can take action accordingly as your customer or the integration activity that you're providing.
Especially, like we were talking about earlier, if you think about an end-to-end process getting completed for your customer, and the piece of the process that your application fits in.
It's likely the other applications that have to participate in that process will benefit greatly if you can provide them essentially an HTTP call that says, "This has changed."
Now they can act accordingly in their workflows or orchestration and complete that process.
Grant: Right. That's one of those areas where I appreciate your feedback on that, because it's always something I've thought about and it's a little bit hard to figure out exactly when to use it.
But I think your description is really useful. One of the challenges around publishing your own API, or integrating someone else's API and doing that hard integration, is around versioning.
So how do you manage this, both from the perspective of providing an API for folks to work with and versioning it correctly, as well as consuming APIs and making sure that when there is a new version you can move to it or utilize it?
Mark: There are a lot of layers to versioning APIs. First of all, the best practice is to have a clear versioning strategy: a version number that you can pass along in the header of the API call.
So the same API at the same URL can be called against the old version or the new version just by changing the version number, and the developer doesn't have to make much in the way of a change to invoke and call that API.
Then supporting multiple versions in your documentation is important as well, because if somebody implemented to version 1.2 of that object and you dramatically changed it in version 2.0, they may not be ready to consume version 2.0.
As with any software, being able to support historical versions for some period of time, to ensure that the developer can operate with minimal changes to their systems, is important.
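The header-based versioning Mark describes might look like this; the header name "Api-Version" is an illustrative convention, since providers vary (some use `Accept` media types instead):

```python
import urllib.request

def build_request(url, version):
    """Same URL for every version; only the header value changes, so
    the developer's call site barely changes between versions."""
    req = urllib.request.Request(url)
    req.add_header("Api-Version", version)
    return req

# A consumer pinned to 1.2 and another on 2.0 hit the same endpoint:
old = build_request("https://api.example.com/contacts", "1.2")
new = build_request("https://api.example.com/contacts", "2.0")
```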
The most well-behaved APIs are backward-compatible with changes, and it's so important, because backward-incompatible changes, whether to the authentication mechanism or the data model structure, things like that--
Obviously it then requires your customer to have a developer on the other end, or an intermediary like us to have a developer on the other end, to make that change and support it.
We see most well-behaved applications really maintaining strong backward-compatibility in their API versions, which is really productive, and usually it's extending the data object or the data model or things like that. Adding additional fields, or richness to it, or additional APIs, but being compatible with the previous version.
The biggest and most damaging breakage of APIs tends to be around changing your authentication mechanism.
We've had some leading providers, I won't mention them, but ones with large install bases, move from OAuth 1.0 to OAuth 2.0, and then everybody who's done an integration has to re-authenticate and change their authentication pattern.
That's an area where we shield our customers, because we do all that maintenance and we provide a level of abstraction over the API, so we can absorb a lot of those changes when they come up.
That's a value proposition that a platform like ours offers, with what we call "Elements," or "Connectors."
By supporting and maintaining those with our own team and our own tooling and automation, we can minimize the impact on our customers' applications and systems.
Grant: OK. With versioning, the other area that's always interesting is testing.
You have all these different integrations; when you update your software, how do you test against remote APIs?
Is that something that you recommend integrators do?
Mark: Yeah, absolutely. Very critical that you're testing on an ongoing basis, and especially if you're adding additional connectors and adding pre-built integration scenarios for your customers.
They expect those to work, and they obviously also expect your own APIs to work.
So the ability to exercise an endpoint matters, whether you're going directly to a sandbox that you have for that service and calling it on a very frequent basis. We tend to run nearly every API that we integrate to on a nightly basis, testing for a successful return and payload back.
That's something you can automate. You need a sandbox to test against in order to do that, or a mock testing service that can essentially mock the endpoint to validate the call as well.
But things can go stale and then impact your quality quickly, and erode that confidence in your service.
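The nightly check Mark describes can be as simple as calling each endpoint and asserting on the status code and the payload shape. Here is a minimal sketch, with the HTTP call injected so the same check can run against a real sandbox or a stub (the URL, field names, and stub are all hypothetical):

```python
def smoke_test_endpoint(fetch, url, required_fields):
    """Run one health check. fetch(url) -> (status_code, payload_dict).

    Returns (ok, reason) so a nightly job can aggregate failures.
    """
    try:
        status, payload = fetch(url)
    except Exception as exc:
        return False, f"request failed: {exc}"
    if status != 200:
        return False, f"unexpected status {status}"
    missing = [f for f in required_fields if f not in payload]
    if missing:
        return False, f"missing fields: {missing}"
    return True, "ok"

# In production, `fetch` would wrap a real HTTP client pointed at the
# provider's sandbox; in CI it can be a stub like this:
def stub_fetch(url):
    return 200, {"id": "1", "status": "active"}
```

A scheduler (cron, CI, etc.) would then run `smoke_test_endpoint` once per connector per night and alert on any `(False, reason)` result.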
Grant: Yeah. We do a lot of contract testing around that kind of mocking.
We use a form of contract testing called "Pact testing." It's something we've started to implement much more over the last year, and we've been pretty happy with it, both for internal services and for a handful of external services.
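The idea behind the Pact-style contract testing Grant mentions, stripped to its essence: the consumer records the request/response shape it depends on as a "contract," and the provider is later verified against that recorded contract. Real Pact tooling does far more (matchers, a broker, provider states); this hand-rolled sketch just shows the concept, with invented endpoint and field names:

```python
# The consumer's expectation of the provider, recorded as a contract.
contract = {
    "request": {"method": "GET", "path": "/orders/99"},
    "response": {"status": 200, "required_fields": ["id", "total"]},
}

def verify_provider(contract, call_provider):
    """Replay the contract's request against a provider and check the reply.

    call_provider(method, path) -> (status_code, body_dict).
    """
    req, expected = contract["request"], contract["response"]
    status, body = call_provider(req["method"], req["path"])
    if status != expected["status"]:
        return False
    return all(field in body for field in expected["required_fields"])

# A provider implementation (or a stub of one) to verify against.
# Extra fields are fine; missing required fields break the contract.
def provider(method, path):
    return 200, {"id": "99", "total": 12.5, "currency": "USD"}
```

The consumer's test suite writes `contract`; the provider's CI runs `verify_provider` against the real service, catching breaking changes before deployment.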
Mark: It's absolutely got to be part of your strategy, and part of your commitment to ensuring that level of testing.
Grant: Cool. There's a couple other areas around API management that I think about.
One question is around the API keys, and management of API keys.
Generally I think about an API key as basically just a password generated by the server, but with passwords most of the time the server doesn't actually store the password.
You just store a hashed and salted version of the password. But with API keys, I feel like most services actually just generate an API key for you and then keep it stored on their server.
Are there best practices around that right now?
Mark: If you're integrating to a service and you're getting that API key back, clearly the encryption of that key is critical. Then separating the ability to decrypt that key from where it's encrypted and stored is going to be super important as well, because there's so much attention on the vulnerabilities associated with APIs and where those can be compromised.
When you're doing API integrations for your customers as well, where is that key going to be stored?
Generally you're going to end up having to store it in your platform if you're connecting your customer into their QuickBooks system, and you're going to get that authentication token back from QuickBooks.
So the first step is obviously-- And I'm not a security expert per se, but what we do is make it multi-factor as well.
Through our platform, our customers are essentially an organization, then they have a user, and then they have the actual token for the endpoint.
You have to have all three of those keys present for that endpoint.
Let's say it's the OAuth 2 authorization for QuickBooks: we combine that with a key for the user who's accessing it, and then the organization they're accessing it from.
Then we combine those three together to be authorized to access that endpoint.
Grant: Cool. One other security-related question that I think about, and this is a bit higher-level and a little less deeply technical, is thinking about it almost in the context of GDPR and the idea of processors and sub-processors.
One of the challenges with the API economy, or so it is sometimes called, is that it's very easy for data to go from one system into another system and then into another system--
I call this "The data perimeter." So your data can just keep going everywhere, and how do you address that?
What are folks doing to keep tabs on their data as it goes into all these different systems?
Are there any best practices in this space, or any thoughts around that?
Mark: Yeah. I guess what I can share is what we see enterprises and large software companies essentially demanding of us, and that is we become essentially a sub-processor for our customers.
Those sub-processors have to be identified in their processor agreement: which sub-processors they use, and whether they meet the security requirements of, for example, SAP, one of our customers.
We have to meet the highest level of security standards to process data on behalf of SAP, and we have to make it real clear through that where that data goes and where it gets stored and where it doesn't get stored.
We've addressed that problem by frankly not storing any data, just being a pass through of the data.
Unless you have to, don't store the data, other than the metadata. We store the metadata about the transaction, like how long it took, but not the payload data.
For our own strategy, that's been able to give us the ability to participate in American Express's and SAP's and Western Union's financial payment flows and everything else.
Those are some of the most secure flows, not only because of our standards as a sub-processor in terms of what we do, but because we guarantee where that data is going and what's stored and not stored through that agreement.
The way to control that is through your clarity with your sub-processors, and then you know exactly where it's going and are in control of it.
Grant: Right. I think that addresses the first step.
You talk about how you're a sub-processor for SAP, but one of the challenges in the integration ecosystem is that from Cloud Elements it's going into all these other applications.
So, how does an application vendor--? Do they think about all the downstream systems as additional sub-processors? Is that how they list this out?
Mark: Good question. If you really think about it, when you're accomplishing an integration scenario on behalf of your customer, it's their system.
It's their Workday system, or it's their HubSpot system or their Salesforce application. It's important that you're not moving the data anywhere beyond--
If you're accomplishing an integration from your application to your customer's HubSpot application, you're just moving the data between those two.
All you have to worry about in that case is your customer has to be comfortable how you're managing the data and where you're putting that data, meaning where you're putting it back into their system and who you're giving access to.
It's not-- You're never taking it, and that's part of what you have to honor, you're never taking it outside of their domain, their systems that they're responsible for.
I'm updating their instance of Workday and absolutely nothing else.
So if you're doing integration for your customer, you're not becoming a data processor, per se.
You're just in the flow and following the security flow of their own systems that they use to manage their business, and that data is just going back into their instance.
Grant: So basically, it's under the assumption that every account you're integrating into is an account that the customer who's turning on the integration controls. That makes total sense. That's a great answer to that question.
Mark: It flows under their GDPR and their requirements, and you can't do anything with that data that they're not authorizing you to do, beyond the purpose of that integration scenario.
Grant: Then just from a security perspective, the one challenge that I would have there if I was a CISO would be--
OK, so we have done a very intensive audit of Workday and we've classified them to be able to accept our most confidential data.
But then there's an integration to go from Workday into some other little app that just extends something, and they don't have the same level of authorization to accept the most confidential data.
The challenge there is "I need to be able to train and make sure my employees aren't turning on these integrations and pumping data into just any account that we have, they have to pump it into something that we've validated."
Mark: Absolutely, yeah. That onboarding validates what you have permission to update and where, or even just what data you have permission to get.
Let's say in a human capital scenario or something like that is super sensitive, so that permission level is critical to establish that trust initially.
Grant: Yeah, and just the overall training around "Look, we have this information in one application.
Which doesn't mean that we need it to be integrated into every other application that it can possibly be integrated into, because for some of that data we might want to mask a field or change something else out."
I'd love to just spend a few more minutes talking less about integrations and more just about, how do you think about taking a product or a new feature to market?
How do you discover what you need to be developing next as an application vendor? How do you get your teams together?
How do you do that whole end-to-end process, from problem discovery to POC to getting it into customers' hands?
Mark: One of our core values, Grant, as a company is --
We call it "Iterate to success," and it's really built around combining Agile software development principles with lean startup principles.
From the lean startup, the core takeaway for me was the "build, measure, learn" loop.
You build something based on a hypothesis you establish, you build an MVP to test it, you measure the results against your hypothesis, you learn, and then you iterate and keep going from there.
That's how we approach new things with our product or new areas, and it's actually how we started the company, even. It was based on really trying to optimize, and we did our MVP in a month for our first integration.
My core hypothesis was that developers would like one place to go to do all the integrations they need.
We're like, "What's the simplest thing we could do to test that out?"
We essentially built a unified API that could send messages through SendGrid and Twilio, because we were finding lots of developers were doing both email messaging through SendGrid and text messaging through Twilio in their applications.
We're like, "Why don't we just build a single API so they could do that all at once?"
So they would save time, it would take half the time to do that integration, and our core hypothesis was that they'd rather do it all in one place.
We built that in less than a month, got that out there in our initial version of our platform, and we never sold it once.
Because frankly it's really easy to integrate SendGrid and Twilio, and I didn't need necessarily--
Even though I could save some time, I'm not really ready to pay for saving that time because it's so minimal in terms of how much you'd save.
But we learned through that, and we were able to iterate our product. As we talked to those developers and got them to try our SendGrid-and-Twilio unified API, we started discovering other problems.
They'd say, "I need to integrate multiple cloud storage systems," or "I need to integrate to multiple human capital, marketing automation, or CRM systems."
Those are hard because I have to transform the payloads and things like that, but it gave us the basis to iterate to "What was the real product?"
We generally start every new initiative with the hypothesis, and then "What's that MVP that I can get to get started on that 'Build, measure, learn, loop' as fast as possible and get the feedback on it?"
Those MVPs usually go to very controlled user communities and groups that we try them with, in order to get that feedback and validation.
Then we add more and more features as we get that validation on the use case, to get to a real release.
So that's the process we use to test and validate new ideas.
Grant: OK, that's really interesting. It's actually great to dive into the early story of Cloud Elements. Who were you selling to?
Obviously you got some feedback on your first unified API, and then you started to offer something a little bit more detailed, integrating with different back ends.
Was that the first thing you were selling then? Is that the first thing people bought?
Mark: Yeah. After we did that unified API for messaging that, like I said, we never really made any money on, we discovered another problem.
Because we were talking to developers, essentially every product manager and developer we could talk to.
We ended up with about 100 of them that we would just go out and say--
We had what I call "lean research," which was five questions: "How many integrations do you expect to do over the course of the next 12-24 months? Which ones do you plan to do? How long do they take on average?"
And a few questions like that. Since we at least had this initial test to show them, we were able to ask some of these other questions.
They're like, "I need to integrate to Google Drive, Box and Dropbox," for example, and "SharePoint down the road, and OneDrive, and other things."
So we started discovering that there's actually a need to have a unified way. We were on the right path with the unified API, but we just had the wrong endpoints, because they were too easy to do.
There's more complexity in dealing with a handful of cloud storage services to normalize those and unify those, so that's what we did next.
We discovered that through just sharing and having an initial MVP and prototype, and then being able to give us the credibility to ask some of those additional questions or what they needed to do next.
So that was actually our first real unified API that we made money on, was what we called the "Cloud storage hub," which is still one of our top selling products.
Now we integrate about 15 different cloud storage services, all in a unified way of doing that.
Then getting that to market, we started discovering and saying, "OK, that's interesting. But there's a lot of interest in building new marketing automation applications."
Say, "If I'm one of the 8,000 companies on the Martech landscape, if I'm one of those companies building a new marketing automation system, I almost invariably have to connect to HubSpot, Eloqua, Marketo, Pardot, some of these main marketing system-of-record solutions."
So that was then what helped us discover that type of use case, and that was a next level of integration because you had to transform the data payloads and deal with custom data.
Things you don't have to do necessarily when you're connecting to a file service, and so that gave us enough knowledge to say "OK. Some of these same customers who started with this cloud storage want to connect into their marketing or CRM systems, or things like that."
Then that led to being able to continue to deal with a more complex set of integrations that we could handle as we continued to iterate based on feedback from the customer base.
Grant: So, when I think about the first unified API for SendGrid and Twilio, that feels like something really any developer or product person at a consumer company or a SaaS company could be using.
Then your next product felt a little bit more focused on business applications, so did you realize that there were these two segments and that you weren't as interested in serving the consumer-oriented companies and you wanted to focus on the B2B?
Mark: Yeah, absolutely. That was a learning as we came in, because we didn't have a focus on B2B versus consumer; we had a focus on making it easier to integrate.
Making integration faster for the software and application provider. The learning was that we started seeing very different use cases and different endpoints that companies wanted to connect to on the consumer-oriented side.
It was less about connecting to the customer's applications and more about building an app like Uber, where I'm connecting to Google Maps and payment services and messaging services, and other things like that.
So we quickly pivoted to the B2B use cases exclusively, because we saw a lot more complexity and therefore more value in the platform. If you have to transform the data payloads and deal with more complex data structures, the need isn't only to make it faster to build an application, but to be able to connect into that ecosystem, as we described, of what your customers are using.
We started seeing that being very pervasive almost across every B2B application provider having a need for that type of service.
Grant: Yeah, that makes sense. One of the challenges of developing for B2B application developers, and we sell to the same customer at Replicated, is what I call "cooking for chefs."
These are people that know how to build software and they often want to build more software, and we obviously both tackle fairly ugly problems.
We do on-prem deployment and you do thousands of integrations, but we still find that there are engineers who want to build it themselves.
So I'm sure that you find the same problem, how do you address that?
Mark: First of all, yes, our number one competitor is coding, coding it yourself.
How over time we've dealt with it is not to deal with it.
Our best prospects are people who've done a couple of integrations, one or two integrations themselves to business systems, and then they almost invariably come back.
Even if we had talked to them a year ago, they'll come back and be like "This is harder than we thought."
Because every developer in the world looks at an API, especially if it's a reasonably-behaved RESTful endpoint, and says, "I can write to that in a couple of days and I could accomplish that."
It's true, it's really easy to connect to an API. Integration is hard because of all the things below the tip of the iceberg of just invoking a RESTful call to GET or POST data.
Like you were talking about earlier, "I need to store this key now somewhere and make sure it's held securely and safely.
I need to refresh it, and how I refresh the key for Salesforce is different from how I refresh the key for Microsoft, so now I've got to deal with that as the next one comes up.
I've got to deal with custom data. How do I do that? Where do I transform and deal with the custom data?
I've got to have an eventing framework, so I have to build a polling framework, because very few of these have webhooks. I thought they all had webhooks, but they don't."
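When a provider has no webhooks, the "eventing framework" Mark mentions usually comes down to polling with a cursor: ask for records changed since the last sync, emit an event per record, and advance the cursor. A minimal sketch, with the provider call injected so the loop itself stays generic (the record shape and stub data are made up):

```python
def poll_changes(fetch_since, cursor, emit):
    """One polling cycle.

    fetch_since(cursor) -> (changed_records, new_cursor); each record
    is a dict with an 'updated_at' timestamp. Emits one synthetic event
    per changed record and returns the advanced cursor for the next run.
    """
    records, new_cursor = fetch_since(cursor)
    for record in records:
        emit({"type": "record.updated", "data": record})
    return new_cursor

# A stub provider: returns records updated after the cursor timestamp.
DATA = [{"id": "a", "updated_at": 5}, {"id": "b", "updated_at": 9}]

def stub_fetch_since(cursor):
    changed = [r for r in DATA if r["updated_at"] > cursor]
    latest = max([cursor] + [r["updated_at"] for r in changed])
    return changed, latest
```

A scheduler runs `poll_changes` on an interval; persisting the returned cursor between runs is what keeps the same record from being emitted twice, which is exactly the bookkeeping a webhook would have made unnecessary.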
What we find is our best prospects have discovered the complexity by doing it, and once they've done that, you start realizing it's like "I don't want to do that for a living. I'd rather build my unique application functionality versus building and maintaining an entire integration platform and a handful of connectors." Because there's going to be no limit to the amount of demand that product management, sales, business development are going to have for new integrations.
So now you're like, "OK. Great. This was fun at first, but it is not fun now doing the third, fourth, fifth, tenth one over time."
So, that's how we've approached it. Illuminate the complexity and the problem and what needs to be done, and then let people experience that for themselves.
Grant: Yeah, I totally get that because we face the same thing.
It's funny because you know that they're going to go into a few months of pain or year of pain, and you're like "I could help you now, but you don't want me to and you don't really understand it or value it as much as you will in a year.
So, I'll wait a year and then you'll come back and you'll be very happy with what we have. You'll pay us and you'll be a great customer."
Yeah, that's funny.
Mark: You just have to, like I said, just have to be willing to wait until they discover more of the complexity.
Grant: It's frustrating because you want to help people, you're like "I can save you this pain."
But we all have to learn for ourselves.
OK, so I know that you have been selling to application vendors, but you started to get some enterprise adoption, with people using Cloud Elements to integrate internally built applications with external systems.
Because obviously there's a ton of internal applications that are being built by these companies as well.
So I think that's the next interesting step: you're actually getting real enterprise pull. Realistically, there were a lot of big companies using your software before, like SAP, but most of them were probably more mid-market or enterprise software companies themselves.
Now you're really getting enterprise adoption, can you talk a bit about what that felt like and how you started to address those customer needs?
Mark: Enterprises are building lots of software as well, as part of digital transformation initiatives and re-engineering their businesses.
They're investing massively in publishing and building APIs as well, so they were a little behind where software companies were, but not far.
Take financial services companies: every single one of them, especially in the B2B space, is exposing APIs for everything from completing payments to, now in Europe, what's mandated with PSD2 for banking, providing third parties with the ability to get credit information or account history and things like that through APIs.
It's mandated by the government. So it's that same problem I talked about earlier: APIs are table stakes for what is now an integration responsibility.
As we start seeing-- Again, I'll take financial services.
All these major financial services institutions have to deal with integrating their APIs with things like the accounting systems used by their customers, or commerce systems, to do reconciliation for payments, foreign exchange transfers, liquidity management, whatever it might be.
We actually discovered the same type of problems we were solving for software companies were existing within enterprises, although I have to say it was quite a learning experience of the additional requirements that enterprises had beyond what even some of the biggest software companies had in terms of security and compliance and other things like that we had to step up to.
If you would have told me a little over a year ago that I would have had a chief security officer on my staff, I would have said "No way. We're not big enough."
But sure enough, we have a CISO now, because in order to meet the compliance and security requirements of the world's largest enterprises, we really needed to step up our game in that area.
And not only in our compliance certifications, but in how we ran our business, to be in that grown-up world with these massive enterprises.
Grant: How did they start coming to you, and what was that pull like?
Mark: Yeah. We started getting some inbound, just through shows and through content that we produce, with Western Union and Silicon Valley Bank and some others coming to us and saying, "We've got this problem with integration, this next-generation problem with our API."
Then we started targeting it.
So once we started seeing a pattern there with some of these early customers in that space, we really started validating that there was a real demand, and then started participating in industry conferences and creating vertical content and industry content to drive more of those leads.
Especially, like I said, in financial services initially but other industries as well that we'll expand to.
Grant: Great. Then in terms of features, obviously you said you did some compliance pieces, probably went off and got some certifications.
Did you already have integrations with SSO, and were you doing SAML sign-on? Or did you add audit logging or any of those features?
Mark: Audit logging we were really strong on, that was all API-based from our platform to begin with.
But more sophisticated role-based access and single sign-on support for identity vendors was critical.
I'd say the next level was GDPR and data sovereignty: really being able to not only have the data stay in Europe, but have the servers and things maintained by people in Europe.
So really have true sovereignty in the data, which was becoming a critical item with a number of our European customers.
Being able to have our data processor agreement and hand that to somebody, hand over a compliance and security white paper that described all our PCI and SOC 2 and ISO 27001 certifications, and those things.
Those certifications are all great, but enterprises want to hear it quickly and have a way to consume it and route it around the organization: how you do those things, not just that you do them.
That there are regular reviews, and it's a regular process. They're sophisticated enough to not just accept a checkmark that you've gotten SOC 2 compliance.
They want to know what your ongoing controls and reviews and things like that are as well, so that really led to a set of requirements that were a significant step up in investment for our organization to be able to meet.