
Ep. #41, Covenant with Suzanne EL-Moursi
In episode 41 of Generationship, Brighthive CEO Suzanne EL-Moursi joins Rachel Chalmers to unpack the “three-layer cake” of modern data architecture: composable stacks, agentic AI, and governance-first design. From integrating hundreds of data sources to enforcing real-time compliance, Suzanne shares how her team is tackling the grunt work of data so humans can focus on innovation. Plus, her vision for turning every organization—big or small—into a data powerhouse.
Suzanne EL-Moursi is the co-founder and CEO of Brighthive, a full-stack data management platform powered by agentic AI. With over 22 years of experience spanning digital transformation, brand strategy, and startup leadership, she’s dedicated to making AI-ready data accessible, governed, and insightful for organizations of all sizes. Her mission: turn every knowledge worker into a confident, empowered data analyst.
transcript
Rachel Chalmers: Today I'm thrilled to have Suzanne EL-Moursi on the show.
Suzanne's a serial entrepreneur with 22 years of experience specializing in digital transformation, brand strategy, growth marketing, entrepreneurship, and startup operations, building new brands and successful businesses. That followed 14 years working for leading corporate brands, including IBM, HSBC, GE Healthcare, and SapientNitro, leading innovation and growth strategies.
Her passion for entrepreneurship has led her career shift to build startup businesses since 2012.
As co-founder and CEO of Brighthive, Suzanne is currently laying down every brick in building the company, a productivity platform for data analysts.
Brighthive is the leading full-stack data management platform with end-to-end agentic AI powering the entire data management workflow. So we have plenty to talk about.
Suzanne, thank you so much for coming on the show. It's a pleasure to have you.
Suzanne EL-Moursi: Thank you for having me. And thank you for giving us the opportunity to tell you about our Brighthive mission.
Rachel: Let's get down and dirty. What are the core technical components of Brighthive?
How are you handling things like access control and governance at scale?
Suzanne: So when we step back and explain to the market, as we're educating the market on what Brighthive is and how to think about it, we highlight that what we've been building is really a multilayer cake, a cake of three parts, if you will.
Rachel: I love cake.
Suzanne: Who doesn't, right? Like, anything that's delicious, you eat a lot of it.
So we hope that that materializes into hopefully great growth for us.
But we've architected the platform to be a fully agentic platform, designed with an intention for automation and collaboration for folks, and that is all wrapped around trust.
And what does that mean? Well, you can never trust something when it doesn't feel right, when it isn't accurate, right?
So the foundation, first layer, is that it's a composable modern data stack.
By composable, we are saying that it's deployable in multiple places if it needs to be, and it has agility: Airbyte for ETL, OpenMetadata for data cataloging, dbt for transformations, Great Expectations for data quality, and open data contract standards for all the enforceable data governance that we have.
Rachel: Things that we already know, that are trustworthy, that code is out there.
Suzanne: Mm-hm. It was best described on one of our sales calls from a technical leader, you get to a point in sales, you know, where they have to do compliance and the CTOs kind of come in, and they said, "Wow, Suzanne, you're in everybody's category, but you're in a category of your own."
Rachel: Oh, I like that.
Suzanne: That's the best way to describe it, right? Because all these brands that I just mentioned, you know.
So it's quality baked in from the beginning. It shows the value of collaboration on this journey.
On top of this stack that I just mentioned, we have built our own approach to agentic AI for data and that's the agentic integration layer. That's the second layer.
And this is using LangGraph, which lets us compose all the multi-agent workflows that people are so excited about, the promise of the future, so that they function like a fully staffed data team in a box.
And I will say data team in a box for everyone. It's the future where, you know--
You hear companies say that AI is going to make a builder out of everybody. And we believe Brighthive is going to make a data analyst out of everybody in that same kind of hopefully promise.
These agents that we've built, the seven agents, they handle ingestion, transformation, quality checks, policy enforcement, and even the analytics and the reporting, all in an autonomous manner, which frees up humanity, frees up, you know, the high-value, high-intelligence brainpower of humans to do the much more innovative work.
In a way, I like to say that our agents really deal with 80% of the grunt work of data, right? That we just have to deal with 'cause we've never known any other way.
And then finally, for access, the third layer is control and governance, which is important. We treat data contracts as code. Truly code.
These policies are codified on our platform. Like, who can access your data and under what conditions can they do it?
And we've codified and enforced these policies at runtime, all through our agents, using metadata from OpenMetadata and contract specs from the ODCS.
This lets us really kind of manage the complex multi-tenant nature of data environments and audit access decisions in real time.
And when we say multi-tenant, in one organization, it could be multiple business units or multiple geographies.
McDonald's, for example, you've got multiple countries, multiple geographies, and so forth.
Or externally, it could be an ecosystem of data owners when you think about labor and workforce data, right?
What private sector and public sector have on us as people that work in the labor force.
So however you look at it, your data is great, my data is great. Together, we're powerful. Where do we come to share our data set together in a governed manner and at the same time be able to benefit from the different pieces of the puzzles that we have without losing control individually?
That's the Brighthive originating story. And then add on to that, now that we brought all this data together, and usually it's a lot of it, how do we efficiently unlock the insights?
We always say our time to insight, Brighthive's most important KPI, is fast, but also in a secure manner.
So that's Brighthive's architecture and three-layer cake that really brings in the foundational piece that gives you a great data architecture, that then powers our agents because they're not going to tell you the obvious answer; they're going to tell you the insightful answer off of this.
And all of this is happening in a trusted model in your own environment.
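The layering Suzanne describes — a governance gate that clears first, then chained agents running over a composable stack — can be sketched in a few lines of plain Python. This is an illustrative toy, not Brighthive's implementation; every function, policy, and field name here is hypothetical:

```python
# Toy three-layer pipeline: a governance gate (layer 3) that every
# record must clear first, then a chain of agents (layer 2) that
# mimic steps of the composable stack (layer 1).

def governance_gate(record, policy):
    # Layer 3: contract-as-code check runs before anything else.
    return all(field not in record for field in policy["blocked_fields"])

def ingest_agent(record):
    # Layers 1-2: ingestion normalizes field names, like an ETL step.
    return {k.strip().lower(): v for k, v in record.items()}

def quality_agent(record):
    # Layer 2: drop empty values, the way a quality agent would flag them.
    return {k: v for k, v in record.items() if v not in (None, "")}

def run_pipeline(record, policy):
    if not governance_gate(record, policy):
        raise PermissionError("blocked by governance policy")
    for agent in (ingest_agent, quality_agent):
        record = agent(record)
    return record

policy = {"blocked_fields": ["ssn"]}
clean = run_pipeline({" Name ": "Ada", "score": ""}, policy)
print(clean)  # {'name': 'Ada'}
```

The point of the shape, as in the description above, is ordering: nothing downstream runs until the governance layer has cleared the record.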
Rachel: So this is almost a solved problem if I have like a pure, say, Snowflake environment.
You know, I've got role-based access control. I can do data analysis in that closed little world.
But in the real world, I have Snowflake and I have an Oracle database and I have a bunch of CSV files and I have a bunch of Excel files.
How do I integrate those multiple sources? How do I enforce schema consistency? How do I make the data integration seamless?
Suzanne: It's true. So that's the reality of data. And that's when we say data is hard, data is ugly, it's because of that.
It's fragmented, it's siloed. In many cases, it's in proprietary environments.
I started my career when I came from Egypt 30 years ago in IBM Global Business Services.
Big Blue was a big deal in the '90s. And I often think about, you know, the 20-year-old Suzanne that started then. Why?
Because it was a lot of legacy. A lot of stuff was just locked up in different systems, and, you know, the accounts that I had, Bank of America, State Farm, these are big brands still today, and they were adopting the internet, right?
It was a huge transformation. And at the core of it, their data is all over the place and in all of these proprietary environments, and to a great extent, Rachel, that's still true today.
There's a lot of data that's sitting in environments where it's just not accessible.
So the start of the Brighthive experience for our customers is being able to pull your data from wherever it is.
We have over 350 connectors and counting just to be able to bring data from cloud environments.
We take in structured or unstructured data, flat files, you know, CSVs. We allow you to first of all bring it all into the same pool, or playground, whatever you want to call it.
And when we're doing this, we're also empowering our customers. They do bring us on and say, "I need you to create a couple other custom connectors for these environments that are not, you know, turned on yet."
So this is how you see the very first wave where you're just creating liquidity and velocity around data being able to come together.
The next thing we have a responsibility towards is that schema consistency, and it's really non-negotiable, because what's the point of bringing all this data together when it can't talk to each other, right?
It's not mapped and it doesn't work. And so we ensure that in two ways. The first is the open data contracts.
Every data set that comes in is accompanied by a machine-readable contract that defines expected schema, types, constraints, and meaning.
And those contracts become the mechanism by which Brighthive polices the schema, the versioning of it, and the governance of it, all collaboratively.
We've codified that. This isn't just AI; this is the fact that software, writing software code, can help you solve that problem, policing by way of machine versus just a human being opening the file and looking at all of it.
Then you're able to take that and run it at scale, because that machine-readable file is going to check when, you know, Rachel plugs in her information and shares it with Suzanne by way of Brighthive. It's going to error and tell you your file, you know, doesn't meet the schema definitions in these fields, right?
Or in these columns. So that is kind of very powerful for folks.
Metadata-driven ingestion and validation starts to happen as well because of this, and our agents use the OpenMetadata and Great Expectations to validate every oncoming record against that contract that we mentioned.
That also starts to happen at scale.
When something breaks, it is as easy as the agent just being able to trigger that and show it on the UI.
And the failure it's communicating to you isn't just point and click, where you accept it and go try to figure it out.
We actually notify you and open a ticket to whoever the responsible party is, because it could be a couple of colleagues in different functions, or it could be a couple of data owners from different organizations altogether, suggesting what we need to fix.
It will tell you down to the naming convention or if you're using an underscore or even identifying, you know, something may be completely empty, right?
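A minimal sketch of that kind of contract-driven check — a machine-readable contract plus a validator that reports failures down to the field level — might look like the following. This is purely illustrative: the contract format and field names here are hypothetical, not actual ODCS syntax or Brighthive code.

```python
# Hypothetical machine-readable data contract: expected fields,
# their types, and whether they are required.
contract = {
    "fields": {
        "user_id": {"type": int, "required": True},
        "signup_date": {"type": str, "required": True},
    }
}

def validate(rows, contract):
    """Check every record against the contract, reporting
    field-level failures: naming, missing values, types, emptiness."""
    errors = []
    expected = contract["fields"]
    for name in expected:
        # Naming-convention check, e.g. enforcing snake_case.
        if name != name.lower() or " " in name:
            errors.append(f"{name}: field names must be snake_case")
    for i, row in enumerate(rows):
        for name, spec in expected.items():
            value = row.get(name)
            if value is None:
                if spec["required"]:
                    errors.append(f"row {i}: {name} is missing")
            elif not isinstance(value, spec["type"]):
                errors.append(f"row {i}: {name} has wrong type")
            elif value == "":
                errors.append(f"row {i}: {name} is empty")
    return errors

rows = [{"user_id": 1, "signup_date": "2024-01-02"},
        {"user_id": "oops", "signup_date": ""}]
for e in validate(rows, contract):
    print(e)
# row 1: user_id has wrong type
# row 1: signup_date is empty
```

Because the contract is machine-readable, the same check can run unattended on every incoming file instead of relying on a human opening it and eyeballing the columns.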
We've been able to help CIOs and big university systems, which, by the way, a lot of data lives there as we think about current students, previous students, alumni, and so forth.
Just the fact that for one of our customers, we discovered 16 completely empty tables, right?
And that was uploaded, it was sitting in there in Snowflake, but it's not monitored by anybody and it's just useless, right?
Rachel: Getting paid for.
Suzanne: So think about what I just said.
And that's possible today because we're able to help at the level of discovery and harmonization and mapping, building the schema faster for you and enforcing it immediately on day one. So all of these disparate sources, as you mentioned, become more unified and become AI-ready, in an AI-ready format, without the hours and months of wrangling that are otherwise required before you can even adopt AI.
What we've been finding as we've been talking to the market and figuring out how we can help people is that everyone now knows that AI is not just some trend and it's going to go away.
Like, the conversation's moved from being, "Is this thing real," to now, "What is my AI adoption strategy?"
And I'm sure you've read--
AI is artificial until it's not. You have to teach it. And the data problem, like I said previously and always, we've never been really solving a very sexy thing. We're solving a very mission-critical thing, which is the quality of your data.
This is step number one. And when we're able to unify and implement this schema and get things to be in order to create these clean data pipelines, now you have a greater chance of adopting AI today, right?
Which is now becoming a competitive advantage for a lot of companies. That's what gets us excited is that solving the age old problem to move forward a very important agenda for everybody.
Rachel: Let's come back to that in a sec. I love the idea of data contracts as code.
We also have a bunch of social contracts that we need to observe. How are you thinking about things like GDPR, CPRA, whatever AI act regulations come along?
You know, we're doing real-time data processing and we have to be compliant with the law. How are you managing that?
Suzanne: Yeah, ourselves as Brighthive, everybody as our customers, everybody as our data partners, that's where we believe our moat is the governance agent that we've built and our approach to governance as a whole.
As you can imagine, when we pitch Brighthive and we talk about it, right away people's minds go to the visualization of data and just the analysis.
But back up from that point. No one is really starting from the point of that ingestion that I just explained and all of its harmonization.
The next one that kicks in is our governance agent, and governance for us is just baked in the platform, right?
Nothing will happen in visualization and/or all the others, data engineering, writing dbt code, doing Jupyter Notebooks, all that stuff in that crowded category, if the governance isn't cleared.
And that's our first approach. It's baked in our platform. It's our moat as a company.
And that means we built it to monitor data usage in real time, with cross-checking agents that look at the actions people are taking against policies embedded at the platform level, policies aligned with those frameworks you mentioned, like GDPR.
So we take the framework, we break it down into snackable code, and then we've built the data agent to look for that before any other processing of the data happens.
To do that requires a governance-first approach, because you've created an agent that says, "Hey, this is not in compliance with the latest law. You cannot allow for these two things to be shared together."
A Social Security number and a birthday, for example. You cannot reveal those because, by the way, there's a lot of PII, you know.
And so it's truly, I say the word policing and I have to be careful obviously now, but from the very beginning, we want to take this responsibility where we do oversee and police kind of how the data is being shared because most of the time when the breaches happen and you end up in the front page of "The New York Times," it's because you weren't thinking about it at all, right?
And no one can check it. That's what we're doing here. Yeah.
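The co-occurrence rule Suzanne mentions — blocking a Social Security number and a birthday from being shared together — can be sketched as code. The rule format below is hypothetical, a toy illustration rather than an actual GDPR or CPRA encoding:

```python
# Hypothetical governance rule: certain PII fields may never be
# shared together because the combination enables re-identification.
FORBIDDEN_COMBINATIONS = [
    {"ssn", "birth_date"},
    {"email", "ssn"},
]

def sharing_allowed(requested_fields):
    """Return False if the requested fields contain any
    forbidden combination in full."""
    requested = set(requested_fields)
    for combo in FORBIDDEN_COMBINATIONS:
        if combo <= requested:  # every field of a banned combo present
            return False
    return True

print(sharing_allowed(["name", "birth_date"]))  # True
print(sharing_allowed(["ssn", "birth_date"]))   # False
```

Running a gate like this before any processing is what a governance-first posture amounts to: the share is refused up front, rather than audited after a breach.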
Rachel: So that's great for the insider threat.
Now, if I'm evil, which I'm not, but if I were, I'm thinking about how to corrupt the data upstream before ingestion.
How do you think about verifying provenance and making sure that the AIs are ingesting real data and not malicious or harmful data?
Suzanne: Right, we treat that with this approach that we talk about often internally in all of our, you know, kind of brainstorming sessions around data lineage and provenance as first-class citizens.
I think that generally people, you know, they imagine things being in a perfect sense, right?
But the work that it takes to actually get under the hood to get you to that perfect state of being, that's where you find that all of us at Brighthive really try to start any agentic definition of our work, start there.
What is all of the things that have been very hard and people just overlook them because they're hard?
How can we create new thinking and ideas and approaches around them? And by the way, it is going to be hard to code, but we can crack it if we have some very fundamental requirements and, you know, milestones around it.
And so when we look at data lineage and looking at provenance as a first-class citizen, why hasn't it been important from the very beginning?
How can we make it first class now? Every data set that comes into Brighthive is traced end to end, right?
Because then we said the hard part is also that traceability. If you never have done it, if you never made it a priority, it hasn't happened.
Can we do that from the original data system that it's coming, you know, the data's coming from before it even hits our AI model?
So it's very much at the, you know, upfront part of the stream.
Our agents log these things to say, "How can we be end to end?"
The source system metadata from the very beginning, that's our ingestion agent doing this.
The transformation steps that need to happen end up happening. The quality scoring.
We look at all of the different, you know, components that are coming in, and they're obviously not all equal, so how are we going to identify that and what are we going to do about it?
And then the usage context. We have a lot of folks that will bring, say, you know, "We've been bringing this. We've always used it. It's always been X."
Now, is it because you defined it as such or just you inherited that, right? They're two different things.
So it means that if a model, you know, today starts to drift, and we can identify this from the very beginning against the work that our data agents are tasked to do, not all AI is equal and I speak about this a lot--
We are in the data space, which is complex work by definition, our agents are doing mission-critical work, and we look at every model that we're plugged into and we are across four or five. Our value proposition to our customers is that because we're logging this from the very beginning, we're monitoring the performance of that model data that's coming in from the very beginning.
And when we see it start to drift, we can trace it back to the root causes and be able to say, whatever, stale feed, there may be a policy change, why isn't it trending well, or even a data entry anomaly that maybe passed through checks, and if we don't trust it, it doesn't go forward.
It doesn't become part of what trains the agents. It doesn't become part of what happens downstream when the analysis and visualization work is happening.
So we're essentially debugging and maintaining the AI system from the beginning, you know, plugging it into an LLM all the way to the data product that comes out by way of the Brighthive workflow, which that work product then is going to travel to many people after that.
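The drift detection Suzanne describes — compare what a feed looks like now against what was logged at ingestion, then trace the alert back to its source — can be approximated with a simple statistical gate. The thresholds and data below are illustrative only, not Brighthive's monitoring logic:

```python
# Minimal drift check: compare a recent window of a numeric feed
# against the baseline recorded at ingestion time. If the recent
# mean is too many standard deviations away, raise an alert that
# can be traced back to the logged source.
from statistics import mean, stdev

def drift_alert(baseline, recent, z_threshold=3.0):
    """Return (drifted, z_score) for the recent window."""
    mu, sigma = mean(baseline), stdev(baseline)
    z = abs(mean(recent) - mu) / (sigma or 1.0)
    return z > z_threshold, z

baseline = [10.0, 11.0, 9.5, 10.5, 10.2, 9.8]  # logged at ingestion
recent = [14.9, 15.2, 15.1]                    # e.g. a stale or corrupted feed

drifted, z = drift_alert(baseline, recent)
print(drifted)  # True
```

When the alert fires, the provenance log (source system, transformation steps, quality scores) is what lets you distinguish a stale feed from a policy change from a data-entry anomaly.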
Rachel: Very cool. It gets expensive. You know, I think you mentioned that 90% of all data was created in the last 24 months.
We've got incredibly sprawling systems. We're spending on storage, compute, network efficiency.
What optimizations can you recommend? How can we manage the cost of our infra?
Suzanne: Yeah, it's becoming expensive. It's going to continue to be expensive.
We do believe, you know, at some point there may be some commoditization, that, you know, some things become cheaper.
We don't know what direction it will be, and we don't know how fast.
But at this point when we kind of step back, if we don't pay attention to this, we perhaps hinder or affect adoption.
And what we want is we want adoption, Brighthive and yourself, everybody.
We want people that are, you know, we want this to be an equalizer.
We want people to adopt AI in their work and their life. We want them to benefit and see the value of it.
We want to create delight around it. So if it's expensive, you'll never try, right? You'll never be even thinking that it's right for me because it's cost prohibitive.
The secret for us is optimizing the AI, you know, around the smart orchestration, not just bigger boxes.
And what we mean by this is we separate, right? We separate storage from compute in the stream levels. We can look at those two things differently.
Where is the cost spiking and what is driving that? Is it legitimate real cost to incur or is it just fake, right?
You kind of have to look at things in smaller pools before you just give an aggregated blend of what something costs.
A lot of people are used to hearing things like, "This model took $10 million and $12 million to train."
How did that happen? Or this agentic layer, you know, took that much.
Well, there's a difference between the upfront cost to innovate and R&D and then the commercialization and kind of distribution, right?
Then also I would advise folks who are not listening to the fine tune of it, look at it specifically too from that point.
For us, because we don't have our own LLM but we're users of it, we're looking at these things in very, very fine comb detail.
We cache all the time. We cache frequently used datasets that people have created and pre-compute popular transformations.
If the work has already happened once and it's up to quality, and these data sets are quality, we cache them based on that frequency. They're available again, and we can track that as a popular, positive thing that contributes to customer adoption.
And we use agentic load balancers to essentially reroute the heavy analytical tasks.
You don't know what the prompts are going to ask, but some of them can be very heavy and will be for sure.
So we're looking at heavy analytical tasks during peak periods and avoiding warehouse surges. How do you reroute as well?
To build a company and to build a value-adding agentic solution, you have to find yourself. And if you are not doing that, there's risk for your company.
But you have to step back and build almost the control dashboard that optimizes your platform, because without you owning that, you are at risk of it never happening and then you could be out of the game.
And that's how I'd speak to it as kind of the CEO of the company. We know what the end result is, is we want to make data analysts out of everyone in a team or an organization.
To do that, we want them to know that they can adopt Brighthive and Brighthive is the right partner, so we look at this as a partnership, to not only give you the most insightful agents and all the stack that makes that true, but then also in our platform architecture as a company, we're looking at infrastructure costs and efficiency.
We've baked in checks and balances that help us understand what can be reusable and helpful and what's not and we know when we have to take a different measure because something is cost prohibitive and not, you know, ultimately good for the people.
And that is a role when you think about incepting a company. It may not be in the hypothesis or in the idea, but that's where execution matters in startups, right?
No shortage of good ideas. It's execution and shortage in the operations of it.
Rachel: Yeah, absolutely.
As the CEO of an AI company, do you have a view on whether the training data should be controlled by private companies or do you think there's a case for decentralized, publicly-governed data infrastructure?
Suzanne: Yeah, such an important question. It's never going to have one singular answer.
I do think it needs to have a vision and it needs to have intention and that hopefully that intention is the right one for people, for humanity and for society as a whole.
Private data stacks are optimized for speed. This is why everybody's hungry for their private data, the control and the monetization of it.
That will always be true. And, you know, you kind of risk these silos that get created because it's private.
Like, it's owned by one entity. But we can't solve that, right? We can't tell someone or any entity that you shouldn't be the only person.
It has to come from that owner, decide that for the benefit of, you know, whatever, I'm going to allow you to use it.
So when we think about what we can do that is within our control, the middle path is federated, policy-governed data collaboration.
That was actually the start of Brighthive in 2018, well before AI and the potential and the art of the possible of agentic AI to data, you know, became the headline news of '24 and now '25.
Brighthive helps build trust frameworks and that is a very niche, specific area in the data space, right? For data nerds if they're playing in it.
And a lot of people that care about creating that balance and access between the private and public data, it won't happen until you have policy-governed data collaborations between both entities regardless of where you are.
Brighthive was founded to be that, to be that neutral Switzerland that brings people together and allows the private and the public data to come together and be used for the service of all.
And so we help build trust frameworks where public and private data entities can come together safely and be AI-ready data under clear rules that everybody accepts and conforms to with enforcement mechanisms that are baked in.
That's basically our role as that neutral, you know, Switzerland model.
So I do believe there is a case to be made for publicly-governed AI infrastructure, but it needs to be built on trust, interoperability, and enforceable contracts. And that's the hard part: who's enforcing, the intention you're coming to the party with, and being able to trust that that enforceable-contracts piece can happen, that it's for your benefit and everyone else's.
That way this data doesn't just stay private over here and this data stays public over here and then there's no way for us to actually use both for the purposes of the greater good.
Rachel: And how does that differ from something like a Snowflake data exchange or AWS data exchange?
Suzanne: Yeah, marketplaces, Snowflake, just all of them.
At the end of the day, when we look at it, because, actually, Brighthive deploys on, well, you get to pick as a customer.
Are you on Snowflake, or do you not have a lakehouse or, you know, warehouse at all? Are you on Redshift? Whatever the case is.
It still heavily is helping people that are in the private side, right? A lot of our customers that are on the public side of this don't necessarily have all that sophistication figured out yet.
So when we look at, first of all, who's coming with what, that is very important to zero in on, double-click, and figure out who's where.
But marketplaces as a whole for us, and by the way, we are definitely a Snowflake and AWS, you know, partner and player, they're great for one-to-many data monetization, and not everybody has that as a goal.
So do you have them in play? Are you already on them? Or have you not even thought about monetization? Question number one.
But at the end, it doesn't solve what people come and tell us at Brighthive. "I want deep collaboration based on trust."
Well, that may not be on either of them, right? And it may be, it could look something else. So that's the intention part that I'm talking about.
And we're different in we're trying to achieve the, "I want to engage with Rachel based on trust. I want to share data with her. I want to use her data. And that is more important to me than the tech stack part of it."
So we focus on saying, "Okay, great, that's your intention. That's what we are, you know, obsessed with."
Our ongoing policy-driven, multi-party data sharing that's baked in the Brighthive platform allows you to achieve that intention and goal without just one-off data dumps that you can't control or you don't know what's going to be happening or you don't know who's monetizing it, that kind of downstream that happens.
We manage not just the obligation piece of it, but also who's using the data and for how long and under what terms and the feedback.
This is all visible on our platform, on the UI. You're showing the restrictions. You're showing everything that has happened to it.
Not every platform today has that level of transparency.
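The kind of sharing record that makes this visible — who may use a dataset, under what terms, until when, with every access logged for the UI — can be sketched as follows. All field names here are hypothetical, an illustration of the idea rather than Brighthive's data model:

```python
# A multi-party sharing grant: terms, grantee, and expiry, plus an
# audit log so every access decision is visible afterwards.
from datetime import date

grant = {
    "dataset": "workforce_outcomes",
    "grantee": "partner_org",
    "terms": "research only, no re-sharing",
    "expires": date(2026, 1, 1),
}
audit_log = []

def access(grant, who, on_date):
    """Allow access only to the named grantee before expiry,
    and record the decision either way."""
    allowed = who == grant["grantee"] and on_date <= grant["expires"]
    audit_log.append({"who": who, "date": on_date, "allowed": allowed})
    return allowed

print(access(grant, "partner_org", date(2025, 6, 1)))   # True
print(access(grant, "someone_else", date(2025, 6, 1)))  # False
print(len(audit_log))                                   # 2
```

Because denials are logged alongside approvals, the audit trail shows not just what happened to the data but what was attempted, which is the transparency being described.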
And I think because if you're, what is your intention from the very beginning?
To bring these data sets together under this one specific brand.
As we've evolved the company from being the data collaboration and sharing partner to being also the platform where all data work happens, whether it's just going to be private or just going to be public, this is where we start to see more of that "we're in everybody's category, but in a category of our own."
We are very friendly and work with the Snowflake team. We're very friendly and work with the Redshift team. We don't get pushed to do something that's serving either agenda, right? And that's where the customer trust lies with us.
And for that reason, I think the intention piece that we are so obsessed with, what is the customer trying to do and what's right for them, some people have come to us and said, "Can you help us milk the marketplace, because we're doing all this work on your platform," and that is not our place.
We don't keep any of your raw data. We are not a data marketplace. We don't sell it and we don't necessarily, our platform doesn't drive that use case.
Rachel: That makes sense. So what still needs to be built in AI infrastructure? Is it distributed training? Is it a real-time data pipeline? Is it model deployment?
Suzanne: Yeah, totally.
The next big leap is coming for all of this agentic AI infrastructure, and we're at the very early days. I'm going to continue to say, as hyped and excited as we are, we are still in infancy, you know?
The embryo stage, if you will. Data systems that reason for themselves and take corrective action. We're moving, you know, from data lakes to data organisms.
And let me explain what that is. You know, it's kind of like even the rise of Snowflake.
Like, what is Snowflake? What is a lakehouse? What is a warehouse? Why do I need it, right?
So 1.0 is just about if you don't have it, you know, you're not as ready for whatever.
And we didn't necessarily think about AI. You're not as ready to be data mature as a culture in a company.
You're not as ready to really unlock insights and bring them online.
And by the way, since the beginning of time, we're trying to get insights out of data. We don't want just the data. We want the insights. That's what everybody wants to do.
And it was the pathway for how you can solve for this.
But with this agentic AI infrastructure being the new normal, it's like the new internet. You're moving away from having a lakehouse to complete organisms, meaning an organism is always alive and evolving and growing and self-correcting and sharing.
So it's just even defining the future to be a lot more real and available. And essentially, you know, the anti-stale model, that doesn't happen just because you have a lakehouse.
What you do that with it, what guardrails you put around it, how you tool it, how you even present it, you know, allow it to exist in the world, in the same way as an organism.
So then you are looking at standardization of agent interfaces. We never had reason to discuss that across an entire stack of technology.
No singular entity is going to own that. It's all of us who are building things that are going to coexist in this ecosystem, we have to think about standard UI for agents, right?
Trustworthy model data feedback loops. Everybody should understand that the platform you're offering needs to create a data feedback loop, and some other entity, another agent or another player, is going to benefit from that feedback, and it has to be trustworthy.
So it's almost like the nervous system that we collectively have to build for this entire organism to continue to grow.
And then the commons infrastructure for sharing compute context across organizations.
The reason I say we're so early is we're now learning these words. We've been learning these words for the past 18 months.
We're going to move from learning a new language to living in a completely different way, hence an organism, being alive and thriving and hopefully changing humanity for the better.
I don't fear this at all. Don't fear this moment. I don't fear its potential.
Rachel: I'm excited for your vision and your platform. If everything goes exactly how you'd like it to for the next five years, what is the future going to look like?
Suzanne: Yeah, it's a big question because, you know, entrepreneurship is not for the faint of heart, as you know.
And if it were, we'd all be millionaires, billionaires, right? Because it'd be easy.
So you have to obsess in your conviction on what the world would look like, because today it's so hard. It continues to be hard until you see the vision that you believe in actually be true.
And for us as a company, and for myself as someone leading this company, we want to see that regardless of your size, big or small, private or public, funded or not funded, this is at the human-being level for me, you're able to stand up any mission-critical data platform in minutes.
Just think about this. I just, you know, started from the very beginning saying data is hard for people.
Ask anybody how they feel about working with their data today and, since the beginning of time, they cringe.
They think it's hard. They don't think they're qualified to do it. It is not delightful. It is not a delightful part of your job.
I always argue, on every podcast and in every conversation, wouldn't it be great, and my goal is to measure this, if people described their jobs as delightful because the knowledge work they're responsible for has become data-informed work, and they were at the core and center of understanding the data they have?
People are smart, but not everything we have to do in our jobs is tapping into our intelligence.
We conform to exist and we do the task and the job as we are trained.
But imagine now, Rachel, Suzanne, wherever we're working, we know we have data sets in our inbox, in our Dropbox, wherever, that hold the answers we need for that ask from a manager or from senior leadership.
Anybody can plug in, if Brighthive is in your world, right?
You load that data or connect to it, and you begin unlocking it and getting the insights, the evidence you're going to use to continue the project or make a recommendation or show a strategy or solve a mission-critical problem.
So for that to happen, everyone becomes an empowered data analyst. You flip the switch from being afraid to look at data, not even knowing where to begin because it's so technical, to making it something you start every day with, because your work has become data-informed work. That is the future and the magic wand: you've removed the on-ramp and the friction completely, so that everybody is a data analyst.
And with that full transparency, belief in the automated governance, and AI agents handling all the grunt work, you are completely liberated as a knowledge worker to focus on the things that are more innovative and exciting.
And so that frontline for us includes anybody: educators, the workforce, state governments, the private sector, startups. Everyone is really collaborating with AI in the middle, not just the few who can afford it or are trained in it, which is our current day.
So it really goes back to our claim on our website and all our marketing. Everyone has a data team. We are a data team in a box for everybody.
Rachel: Last question, best question. If you had a generation ship for voyaging to the stars, what would you name it?
Suzanne: It's called Covenant.
I mean, you kind of think back and forth, and it's not because we're sci-fi nerds, any single one of us, but really the future I just described, the future of working with data, is less about the AI that built it, you know, and more about the agreements and all of that, the fact that we got you there and changed the way you even behave.
And so AI is empowering us to stand up the agreements and the shared purpose and the collaboration at scale.
We say you don't get to the stars alone and then wonder what to do with it, right?
It is a whole connective tissue that brings us all together. And we do think about the world differently.
We think about it with much more excitement than we have today, trusting the fact that when you get to the stars, you know exactly what to do, right?
That's what I would love to close on: we're Brighthive, and we're bringing everybody to the stars. But the whole thing that comes with it is trusting that it's the right set of data at the right time, that the right agreements are around it, that you're empowered with the right capabilities as a human being, and that the collaboration you're partaking in is happening at scale, with everybody doing it with you, right?
That's kind of how we look at the future as a whole.
Rachel: There's a quote from the Chilean economist Fernando Flores that I think you'll love, "Work is the making and keeping of promises."
Suzanne: I absolutely love that. It's so true. Yeah.
Rachel: Suzanne, what a joy to have you on the show. Thank you so much for taking the time.
Suzanne: Thank you so much, Rachel, for listening to us and to me, and for giving us the opportunity to tell the world our story.