In episode 10 of Demuxed, Matt and Steve are joined by Colleen Kelly Henry and Eric Zawolkow, to talk about their experiences overseeing all things video-related, and what it’s like maintaining complex live video streams for large audiences.
About the Guests
Colleen Kelly Henry is the lead Media Systems Engineer at Facebook, and previously did video engineering projects for Google and YouTube. She was also a senior tech advisor for the popular television show Silicon Valley. Eric Zawolkow works beside Colleen as a Media Systems Engineer at Facebook, where he was previously an Event Operations Technician.
Matt McClure: Hey, everybody. It's been a minute since the last podcast was released. But since then we've had a successful Demuxed 2018. At least, I think it was pretty successful.
Steve Heffernan: I had fun.
Matt: There you go. Starting the new year right, we're going to try to line up a few podcasts. We usually try to keep timing irrelevant in these things so we can release whenever we want, but we're going to talk about some stuff coming up at NAB, and hopefully this gets released before NAB.
But anyway, we don't have Phil on the call today, but we will in the next few episodes. Thank goodness. Apparently, time zones are hard. But we do have Colleen and Eric here.
Eric Zawolkow: Hey.
Colleen Kelly Henry: Hello.
Matt: Before we jump into other things, let's talk about that stuff happening at NAB. There's a Demuxed party, that'll be on Monday night. It's from six to eight.
Steve: That sounds right.
Matt: Six to eight-ish.
Steve: We'll go with that.
Matt: There'll be events and stuff. So, ask a friend, and--
Eric: Any charity events this year?
Matt: Like the charity poker tournament?
Eric: The charity poker tournament.
Matt: I don't think we're doing that this year.
Colleen: Who was the charity?
Matt: There was lots of--
Eric: There was three or four different charities last year.
Colleen: There was an actual charity? It wasn't just, in air quotes, "charity" poker tournament--?
Eric: No, this was a real charity poker tournament for Demuxed.
Matt: What is an "air quotes charity"?
Colleen: Like, John Doyle comes in and wrecks everyone and takes home all the money. And he's the charity.
Steve: Didn't James win?
Matt: James was one of the winners, yeah.
Steve: I do like the idea of John having a persona of being a total poker shark. But, he is. You heard it here first. Talking about the Demuxed party first so that it's not overshadowed by Colleen's party, but Colleen, why don't you tell people about your party that you're throwing?
Colleen: I thought you wanted to talk about the Demuxed party first.
Steve: I just did.
Matt: Wait, is it at the--?
Eric: It's going to be at the same place it was last year.
Steve: Yes. Same place it was last year, Millennium Fandom. You're going to get in your Uber or Lyft, the driver is going to look at the destination and be confused, and he's going to be like "Are you sure you want me to drop you off here?"
Eric: It's not a bad neighborhood--
Steve: It's not a bad neighborhood.
Eric: There's just nothing there.
Steve: There's just not really much around, but there is this amazing cosplay bar called Millennium Fandom, and we've got the whole place just like last year. There's a little gallery, and we'll have tacos, and drinks and good times.
Colleen: I don't remember there being cosplay. Is that their normal thing, and we just rent out the joint? OK.
Matt: But the bar is cosplay themed, so there's all the--
Eric: If you want to show up in an R2-D2 costume, no one's going to look twice at you. If you show up--
Colleen: I can take my giant Snorlax suit out.
Matt: If somebody full sends it in a Big Buck Bunny outfit, I will be--
Eric: Why does that not exist in the industry?
Colleen: Nikolay, I think his name is, from Streamroot, wants to join in the party, and he wanted to do a Big Buck Bunny themed party to continue the theme from the video that he did.
Matt: BBC High Enough.
Colleen: I told him you can have one of the rooms in the place, you can just make a Big Buck Bunny room. It's like a house of horrors.
Eric: It would be cool to have pinatas made of tiny squirrels. Or you could do an inflatable obstacle course for Big Buck Bunny, that was themed after the traps in Big Buck Bunny.
Colleen: There you go.
Matt: Pretty good. If you hadn't seen Big Buck Bunny--
Eric: You probably shouldn't be listening to this podcast if you haven't seen Big Buck Bunny.
Steve: Let's just blow through that one.
Matt: You've got to let me finish. I mean the video that Streamroot did. They submitted a talk as a joke to Demuxed 2018--
Colleen: It was a joke.
Matt: About how Big Buck Bunny is really an allegory for capitalism, or something. It was one of the most highly reviewed talks, everybody wanted to see it happen, so I told them they had to make that happen and they ended up making this incredible video.
Colleen was in it, Will Law, a bunch of other-- Basically the who's who of the Demuxed community was in this video talking about how they had PTSD from watching Big Buck Bunny too much. That should also be on the Demuxed 2018 YouTube channel, but I'm sure you can find it in Video Dev or something like that if you're there.
Eric: Is it on the Demuxed.com website?
Matt: It will be soon. The community portal is launching any day now. I promise.
Eric: Great segue, right? Into the community portal?
Matt: Yeah. We've got the community portal. We've talked about this for years, but it's finally happening: Demuxed.com will migrate over to being a community portal with content like recent news, upcoming meetups from all around the world, videos, stuff like that. Anybody can contribute just by submitting a PR. Contributed, things like--
Colleen: Who knows.
Matt: Who knows. Demuxed.com is your oyster.
Eric: It would be so weird to have videos on a website about videos.
Colleen: We could have a video platform.
Steve: Hold on, let's not get too crazy here.
Matt: Then Demuxed 2019 is moving to a subdomain like all the others. So 2019.Demuxed.com will be this year's conference, which is happening October 23rd and 24th here in San Francisco.
Eric: Is that the first official announcement of the dates?
Matt: Unless something happens between now and the release of this podcast, that is the first official announcement.
Colleen: Do you have sponsors yet?
Matt: Yeah. We do have our first few sponsors already. This will not be the case by the time we announce this, but right now we're in the-- Anybody that sponsored last year gets first dibs on sponsorships for next year.
Colleen: As you know, the video community is pretty ballin'.
Matt: Speaking of which, tell us about your party today.
Colleen: I've gone to NAB for years and years, and there's always the AJA party that's massive, although they didn't do it last year. The Demuxed party, and different things like that. Usually they're associated with some organization or company, and they pay for the drinks, and generally it's cool and all, but that's how it's always done. But I've been threatening for years to throw my own party, that it's just the Colleen party or maybe the Streamline party.
Eric: The anti-NAB party.
Colleen: I don't have to follow the rules of my company being associated with it, or some other companies there pitching something. It's just a bunch of video nerds hanging out talking about video. I ended up last year at NAB at this ridiculous suite in the Hard Rock, and it's got an indoor jacuzzi that's like--
Eric: It had a bathtub between the dining room and the living room.
Colleen: It fits like six people, and it's like gilded gold and stuff.
Steve: That sounds delightful.
Colleen: Then outside there's a small pool that fits maybe 12 people, and this is a private thing in the room. It's heated so it's like a giant hot tub and stuff, and the place was 4,300 square feet or something. It was truly ridiculous. It was four grand.
So I called up and I'm like, "I want this. This is where we're going to hold the Colleen party," and I booked it. The plan is that I want to create something transparent, anyone who's invited is welcome to donate and help reimburse me or not, it's not required to join.
If it goes over the price of what it is, it will just go into the party next year. If I stop doing the parties I'll just donate to EFF or something. I think maybe Patreon might be a good platform, so we can transparently share that. I'm going to see if I can keep it anonymous, because I don't want companies to try to be like, "We're sponsoring this."
Eric: Unlike Demuxed, we don't want to advertise sponsorships for the Colleen party.
Steve: You don't want the liability.
Eric: You don't want your name associated with this.
Colleen: No. So that's the plan. So far people are really stoked about it. I think they mostly are just stoked about the room, but that's cool.
Eric: We should see if we can get whiteboards in the room, so people can whiteboard at the party.
Colleen: Or we could get, you know how in movies they're always drawing on windows with pens and stuff? We get glowing pens and maybe some black lights or something? It's more portable than whiteboards.
Matt: That's how we interviewed our first hire.
Colleen: A Beautiful Mind.
Matt: At Mux, it was a window.
Matt: It was weird. We were working at Steve's apartment, or Steve's house, because we didn't have an office. That was the closest we got to a whiteboard, the windows in his living room.
Eric: That's perfect.
Matt: OK so, wait. How do I get into this party?
Colleen: You be my friend and you hit me up on Slack. The only real rule is "No douche bags allowed." People keep asking, "Who does that entail?" And there's one person in particular, but other than that.
I think I'm going to make it pool party themed, because there's pools, and normally parties like this people forget their swimsuit or something. Or there's two people in the pool and everyone else is just standing around staring at them and they feel super awkward. So, if everyone looks like they're possibly getting in the pool--
Matt: Beach themed.
Colleen: I think people would be more comfortable getting in the pool, considering that that's one of the main features of this ridiculous thing. I really like hot tubs so it'll make me feel better.
Steve: We've been talking for a while, and I don't feel like we've really talked about who you two are. Which for most people in the community is probably fine because who doesn't know your shining faces, but for anybody that doesn't, why don't you guys give a little background about yourself? Colleen?
Colleen: I'm Colleen, and this is my co-worker Eric. We work together at Facebook.
Eric: You could call us friends.
Colleen: Sometimes I call you my work wife.
Eric: Work wife.
Colleen: I said "My partner at work," but then people started to think that was a thing. But for some reason if you say "Work wife" and you're a dude, they're like, "They're clearly not in a relationship." But we basically spend all day every day together working on stuff. I can see why they would think that.
But I joined Facebook five and a quarter years ago or something like that to work on video. I was the only person working on video at the time. Before that I was at Google, and other startup stuff. I joined and there was this guy Eric who I ended up meeting, and he was a contractor on the events team at the time, and it was a cute origin story.
Eric: I was on the events team and we used to do all kinds of crazy stuff. It was running big events for the company, when we'd have all-hands meetings and stuff like that. But also, people would want their meetings recorded.
So imagine being in this conference room and there's a guy in the corner with a video camera recording the whole thing. And that was me.
Colleen: Which is the most ridiculous service.
Eric: It was such a--
Matt: You had to wear all black and try and blend in?
Eric: Kind of. It was so--
Matt: I like to think about you in the Green Man suit.
Eric: Exactly. Everyone would be sitting in this meeting, like having the meeting, and you'd be standing there staring at them. It was uncomfortable because they'd have to pretend you weren't there. It was the worst part of the job. Everything else was great. I got to work at Facebook, I was fresh out of college.
The only other place I'd worked at as a full time employee was at an Apple store. So I was like, "There's free yogurt everywhere. This is amazing." But I had to stand in conference rooms and record people's meetings, which was a little-- It was not the best.
But one day I was recording a meeting, and I stopped the camera and I popped the SD card out before the file finished writing. I completely corrupted--
Colleen: So, it was an MP4 file that was being recorded so the metadata has to be right at the end, right?
Eric: Or it's whatever the weird proprietary Sony file type.
Colleen: I seem to remember that it was an MP4, and that's relevant to why popping it out early would corrupt it.
Eric: Either way, the metadata was written at the end, so it corrupted the entire file. I didn't find that out until much later in my career, but we had just hired Colleen and one of my managers had said "If you ever have questions about video, talk to Colleen, we just hired her and she knows everything about video."
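Eric's explanation is the classic moov-at-the-end failure: an MP4 is a sequence of length-prefixed boxes, and many cameras only write the moov box (the index players need) when the recording is stopped cleanly. A minimal sketch of walking the top-level boxes, using toy hand-built bytes rather than a real recording:

```python
# Toy illustration of why pulling the card early breaks the file: an
# MP4 is a series of length-prefixed boxes, and cameras often write
# the 'moov' box (the index/metadata) last. A recording cut short has
# 'mdat' but no 'moov', so players can't open it.
import struct

def top_level_boxes(data: bytes):
    boxes, pos = [], 0
    while pos + 8 <= len(data):
        size, kind = struct.unpack(">I4s", data[pos:pos + 8])
        boxes.append(kind.decode("ascii"))
        if size < 8:  # size 0 (to end) / 1 (64-bit) skipped for brevity
            break
        pos += size
    return boxes

# A toy "truncated recording": ftyp + mdat, metadata never flushed.
ftyp = struct.pack(">I4s", 16, b"ftyp") + b"isom\x00\x00\x02\x00"
mdat = struct.pack(">I4s", 12, b"mdat") + b"\x00" * 4
truncated = ftyp + mdat

print(top_level_boxes(truncated))            # ['ftyp', 'mdat']
print("moov" in top_level_boxes(truncated))  # False -> unplayable as-is
```

Recovery tools fix files like this by reconstructing an index from the raw samples still sitting in mdat, which is presumably the kind of thing Colleen did by hand.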
And I was like, "I've never talked to her before." I just hit her up on IM and I was like, "I can't get this video file to open. Do you think you could take a look?" She's like, "Yeah. Send it over." So I send her the file and immediately she writes back, "Did you pop this out of the camera too early?"
And I was like, "You know what, maybe. Maybe I did that." And she's like, "All right. Give me a couple hours." I was like, "OK." So the whole day I'm fending off my bosses who are looking for this video file to be delivered, and I'm like "No, it's coming guys, don't worry I'm working on it."
And Colleen comes back three hours later and she's like, "Here's your video. If you ever want to learn how I did that, come sit by my desk." So then everyday after work for the next four or five months I would just-- After I would get done with my day, Colleen works late and my job was early, so I would get done with work and I would just go sit by Colleen's desk and I would just watch her as she was typing furiously.
Every once in a while she'd be like, "This is what I'm doing right now. Here's a bunch of words, you should figure out what these mean." Through a multi-month process she eventually convinced her boss to hire me, and now I'm the video engineer with no training in how to be a video engineer.
Colleen: Don't worry, I had no training either.
Eric: I don't think any video engineers have any formal training.
Colleen: It just happens.
Steve: So you went from the creeper in the corner, to the creeper standing behind Colleen, to a video engineer. Which is a pretty normal progression.
Matt: Now it's like, "Who didn't get here that way?"
Eric: It's pretty fair.
Colleen: Long term, my actual plan is I want him to be the manager of my team, because I feel if you have a manager that owes you, it's pretty sweet. And I don't want to be the manager, so I've been building him up over the years to try and get him to that spot. Then people are like, "We're never going to let you do it."
And I'm like, "Why?" And they're like, "You're not going to do what he says." And I was like, "Of course not."
Eric: That's fair.
Matt: That's a solid strategy.
Steve: Between the voices there and the scenario, it sounds like you're basically Rick and Morty.
Eric: That's not wrong. Rick would probably-- "Did you pop that SD card out too early there, Morty?" "Aw jeez, Rick. You know what, I might've."
Colleen: Honestly that really is our relationship at work. I run off and do crazy things and I'm like, "No time to explain. Get in the car. You're coming." And he's just like, "It's worked out so far."
Matt: Why don't you walk us through some of these crazy things?
Colleen: There was the time where in a weekend I had to build Oculus Cinema, which is basically like Netflix but in VR or something like that, and live streaming spherical video end to end. From fabricating the camera, 3D printing it, a bunch of GoPro software, all of it in one 72-hour straight push-- Because I had to pitch it later.
Then it worked. That's an example. I was like, "Eric. No time to explain. I need the following things."
Eric: "Just get me these things, and then don't bother me."
Colleen: We've done some truly crazy things on short notice.
Eric: We've done some crazy streams.
Colleen: There was the time where at a particular developers conference somebody created a spherical live stream platform without telling me, and they didn't properly provision their origin server. So when all the people joined, it just slammed the origin; the CDN made too many requests and it fell over.
So I get a call saying "The live stream is down." I'm like, "What live stream?" "The spherical one." I'm like, "What? What are you talking about? I have no idea--" I had to run in, figure out what was going on, and reverse engineer the system.
I then stayed up all night building a load testing platform. After we put an origin shield in front of it and configured it, we had to slam it with 20x the number of expected viewers to make sure it worked. Then about 7:00 AM we're like, "It works now." Just random things.
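The failure Colleen describes, every edge node missing on the same brand-new segment at once, is a thundering-herd problem, and an origin shield fixes it by collapsing concurrent requests for the same object into one origin fetch. A toy model of the effect (not the actual system; the numbers are invented):

```python
# Without a shield, every CDN edge that misses on a fresh live segment
# goes straight to the origin. A shield tier sits in front of the
# origin and collapses concurrent misses into one origin fetch per
# unique object.
def origin_hits(edge_requests, shield=False):
    if not shield:
        return len(edge_requests)   # every edge miss hits the origin
    return len(set(edge_requests))  # one fetch per unique object

# 500 edges all asking for the newest segment of one live stream:
reqs = ["seg_1042.ts"] * 500
print(origin_hits(reqs))               # 500 -> origin falls over
print(origin_hits(reqs, shield=True))  # 1   -> origin barely notices
```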
Eric and I are responsible for anything and everything related to video, there's no part of it that we don't do. Between a lot of times I'd describe it as photon to photon. Like photons go in, photons go out. I can explain that. But we don't own the cameras--
Eric: Fix anything in between.
Colleen: We know how the cameras work, but we don't run the cameras, we don't do the lights, we don't do the microphones. But somebody hands us off-- We don't personally go and physically do these things typically, but we build the infrastructure for it.
Eric: Sometimes we do.
There was one time during a live stream we had an audio issue where the left and right channels were completely out of phase with each other, so it was canceling out all the audio.
I was running the stream so I was trying to keep it up, and Colleen was physically tracing cables throughout the back of house area trying to figure out which cable led to which piece of equipment so that we could figure out what--
Colleen: First off, it was like, "Why can't we hear the person on the stream?" What it was, was a phasing issue: somebody had soldered a cable wrong, so it was backwards.
Eric: Colleen traced the cables, found the piece of equipment that had the cables soldered backwards, and flipped the--
Colleen: Because everyone's immediate reaction was, "The live stream platform is broken." I can hear it on this phone, I can't hear it on an iPhone. If I put headphones on, I can hear it. What it is: because of the phasing, even though it was mono, since the channels were identical but inverted, when they were mixed together it became nothing.
The iPhones had mono speakers, so you couldn't hear anything. So I go-- and this is a vendor who's providing all of the infrastructure for lights, cameras, switchers and stuff like that. So we go to the audio guy and we're like, "There's a phasing issue." And he's like, "It's fine here."
Of course we had to quickly figure out how to compensate for that. In the encoder, we're like "Alright we'll take the left channel and copy it," so that fixed the issue.
But then of course everyone's blaming the streaming platform, and it's like "No, that's not the platform that would do that. We're just taking the signal in and compressing and delivering it to people. We don't make the signal."
Nobody was taking responsibility, so we had to go in real time and trace down all the cables and pull out a scope. It's right there.
Eric: It was like, follow a cable, pull it out, plug in a scope, see if it's in phase. Pull it out, plug it back in and go to the next piece of equipment until we narrowed it down. It was like a patch panel, that ended up being it.
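The phase problem they chased down is easy to reproduce numerically. A minimal sketch of the miswired cable (the right channel as a polarity-inverted copy of the left) and the encoder-side workaround of copying the left channel over the right:

```python
# When right = -left, a mono downmix (L+R)/2 cancels to exact silence,
# which is why mono iPhone speakers played nothing while stereo
# headphones still worked.
import math

def tone(n=48, amp=0.5):
    return [amp * math.sin(2 * math.pi * i / n) for i in range(n)]

left = tone()
right = [-s for s in left]               # miswired cable: inverted polarity

mono = [(l + r) / 2 for l, r in zip(left, right)]
print(max(abs(s) for s in mono))         # 0.0 -> silence

right_fixed = list(left)                 # the fix: copy left onto right
mono_fixed = [(l + r) / 2 for l, r in zip(left, right_fixed)]
print(max(abs(s) for s in mono_fixed))   # 0.5 -> signal restored
```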
Colleen: Typically we don't; we deal more with servers and transcoding and packaging, stuff like that.
Sometimes you have to go and crawl under tables and trace cables. You have to know how to do everything.
Matt: That's a good sentiment. I know that both of you have done things all across the video stack for a long time, but particularly for what sound like pretty big internal events, things like that.
Colleen: Internal events, external events.
Eric: I did a live stream with the ISS once.
Colleen: The space station.
Matt: Can you walk us through--? What do these events typically look like from your perspective?
Eric: There's a large scale event with a lot of moving pieces, including a space station that's orbiting the planet. This one was like a little--
Colleen: Time dilation is a factor in the signal flow.
Eric: There's a bunch of principles in large event video streaming that are important to take into account.
Your event space is going to be impacted in some way. You want to get your signal offsite as quickly as possible, and through really robust means.
We had a downlinking facility somewhere in Texas that was downlinking the satellite feed from the ISS. Then we had our facility here in California where our talent was talking to the space station from.
Then we had mission control, which was back in Houston I guess, and we had to be able to talk to them too. Then we also had an encoding facility in Washington DC where we were sending all of our video to get encoded.
You need satellite downlinking to get the signal down, you have satellite uplinking to get our signal back to DC, you have a phone call going on because that's how they communicate. Because you can't send the video back and forth fast enough to actually talk over video conferencing. They're on a phone call.
I have a really good diagram of how I set this all up, that is pretty cool, and I can show you guys later. But in the diagram there's signals going everywhere, and you end up using--
Matt: Like space.
Eric: Only two signals from space. The video--
Colleen: If you remember when the Elemental guys keynoted NAB and they did the 4K live stream. It's actually on Amazon Prime. People super downvote it because from the logo they think they're seeing a space station documentary, and it's like "First live 4K from space," and they click and it's an NAB opening thing.
They're like, "What the hell is this?" So they're talking about doing their live stream from space, and they're super stoked. Then Eric next to me is stressing out: "I did that."
Eric: I already did that. I didn't do it in 4K, they did it in 4K. I didn't do that. But it was pretty fun. We used fiber backhauls for a lot of it, because it's super reliable. That way you can get your signal offsite to a facility where you have guaranteed bandwidth.
Colleen: This is IP delivery, but it's dedicated. It's reserved and you're not competing on the public internet.
Eric: Dark Fiber video, if that helps.
Colleen: Although if you were to do that, you might use something like Zixi, to force your way through.
Eric: Then we had a whole video production team on site and a separate team in DC where all the signals were being compiled together, both the ISS signal and the signal that we were sending out.
They were compositing those two together and mixing it all up and encoding that, and sending it up to our streaming platform.
Matt: Is that workflow pretty common?
Eric: It was common because we had talent in two separate locations. Having someone on the ISS is pretty similar to having one person in California and one person in Brazil, or--
Colleen: Except for one is floating.
Eric: Africa or Europe. Having people in remote locations is a challenge, whether they're in space or they're terrestrial. It's a pretty similar problem to deal with.
To be fair, NASA dealt with getting the signal down from the space station, so I didn't have to do that part. I have to figure out how to--
Colleen: You can tell them that, they would've believed you.
Eric: I know. It's a satellite. They beam their signal to the same satellites that you're using for sending sports games around.
Matt: It's just one long coax cable that runs and is raking across the ground all around the Earth.
Eric: Lots of them. And we took one up into space. Baseband video over the wires is the best way to run it.
Colleen: This makes us sound more like broadcast engineers, and--
Eric: We're a little bit broadcast engineers.
Colleen: Sometimes. But we build entire platforms end to end, so we build encoders that are acquiring the signal. We do the entire workflow. We do the transcoding, packaging, origin servers, content delivery networks, players, everything.
Back in the day we just ran all of that ourselves. As Facebook Live has come out, we've used that more and more. That's what we use in general. But before Facebook Live existed we built all of that ourselves.
A lot of what we focus on in the live space is ingestion, because garbage in, garbage out. We've designed APIs where live encoders are super easy to use and automatically configure themselves, so that we don't have to transcode it and introduce multigenerational loss, that kind of stuff.
But then I also work on some OnDemand stuff, so spherical, VR, volumetric. Stuff like that.
Eric: I troubleshoot live events, fix problems.
Matt: A lot of what you said there segues nicely into, tell us about Streamline.
Colleen: All right. Since I work at a company that has a platform, which is Facebook Live, we have our own CDN, we have our own players, we have our own everything. You can use an Elemental Live appliance or something like that to get the signal in, or you can use OBS or Wirecast.
Not that I would recommend using software on an arbitrary OS with random driver versions and stuff for a super high-end event, but it's super flexible and easy.
One of the things we did at Facebook is we designed these APIs and workflows for-- it was called Project Blueprint at the time, and basically it makes ingesting live video into Facebook Live reliably as easy as using a Roku.
You just get a box, turn it on, and it says "Go to Facebook.com/device and type in the six digit code." Then you pop it in and you go to Facebook and hit "Go live." Your encoder turns on remotely, automatically configures itself, and then you hit "Go live" and it counts down, then you start immediately when it tells you.
That's inherently linked with Facebook. We have our own infrastructure at Facebook, we have our own CDN, we have all that stuff. But it's important there for people to have an easy, high quality workflow that is opaque to them.
The partner who first implemented that workflow is Wowza, and they've done an amazing job with ClearCaster for making it all work really well. But to make it that easy, it's inherently linked with our platform, and our platform is also inherently linked with users and audience and all that stuff.
But Eric and I are not building all the infrastructure end to end ourselves anymore; we've been collaborating more and more with teams like our CDN team. I got a little bit of an itch to want to build everything myself and give it all away for free, in a transparent way where people could learn about it.
So I started this thing called Streamline. It came out of you saying, "Colleen, do a talk." And I'm like, "What should I do a talk on?" Then my thought was, "Why not 'How all of live video works, end to end from hardware to software to everything,' and try to condense it into the basics."
The trick is, "How deep do you go? How high level do you go?" All that stuff. Streamline is a reference system for live streaming end to end for educational purposes, from the perspective of, "Here's how live video works."
What it is, it's a GitHub repository. You can get it there on GitHub, or the website is streamline.wtf. That's why I didn't tell you earlier. I knew that you'd--. So, you go there and it's crazy simple.
It's just a couple of bash scripts at the most basic level. Now, it's all black magic fuckery incantations of random video stuff, but it's not complicated code. All the actual complicated code is, for example, in the server or in the CDN or in ffmpeg; we're leveraging all that stuff.
What this is, is an architecture that shows you how everything goes together end to end. Another way of saying it is: you go to this GitHub repo and you click on a link that takes you to an Amazon shopping cart that has computer parts with a capture card, and you buy that. Then you go back to the repo and there's very in-depth directions and walkthroughs that explain how live video works step by step.
Get computer, assemble, install Ubuntu, run the script, now you have an encoder. What that does is it downloads ffmpeg and all that stuff, compiles everything, drivers for your GPU for the adaptive bitrate encoding, all of it. Then it teaches you to log in to AWS; one of the assumptions is that you have an AWS account.
Then you provision an origin server and you SSH in, and you run one script and you have an origin server. Then it teaches you how to hook up CloudFront for your CDN. Now you have an encoder, you have an origin, you have a CDN.
Then on the encoder, from the command line, you run launch, and you say, "Launch, here's the origin, here's the CDN path." It says, "Cool. Here's the URL to a player on a CDN." What it's doing, though, is taking in the hardware capture, HD-SDI or HDMI into a Blackmagic capture card.
Then ffmpeg is running, taking that source, scaling it into multiple different resolutions, and sending it to a combination of software encoding with libx264 and hardware-accelerated encoding with NVENC.
It packages that up into HLS, then does HTTP PUTs to the origin server, and it also writes a simple web page that has an HTML5 player in it. Then it uploads that with the segments and says, "Here you go."
It's super high quality. It's good HLS. It's 4K; you can do 4K30, you can do 1080p60. Whatever, all of it end to end, but it's super simple.
The whole point is that it's everything you need, nothing you don't, for it to be transparent for you to learn how it all works.
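As a sketch only (this is not Streamline's actual script; the input name, ladder rungs, bitrates, and origin URL are made up for illustration), the single ffmpeg invocation just described might be assembled like this:

```python
# Hedged sketch of the encoder pipeline: capture in, split/scale to an
# ABR ladder, encode one rung with libx264 (software) and one with
# h264_nvenc (NVIDIA hardware), package as HLS, and push playlists and
# segments to an origin over HTTP PUT.
def build_ffmpeg_cmd(input_url, origin_url):
    return [
        "ffmpeg", "-i", input_url,
        # one decode, two scaled copies
        "-filter_complex",
        "[0:v]split=2[a][b];"
        "[a]scale=1920:1080[hi];[b]scale=1280:720[lo]",
        # rung 0: software x264, rung 1: hardware NVENC
        "-map", "[hi]", "-c:v:0", "libx264", "-b:v:0", "6000k",
        "-map", "[lo]", "-c:v:1", "h264_nvenc", "-b:v:1", "3000k",
        "-map", "a:0", "-map", "a:0", "-c:a", "aac",
        # HLS packaging, one variant playlist per rung, pushed via PUT
        "-f", "hls", "-hls_time", "4",
        "-var_stream_map", "v:0,a:0 v:1,a:1",
        "-master_pl_name", "master.m3u8",
        "-method", "PUT", f"{origin_url}/stream_%v.m3u8",
    ]

cmd = build_ffmpeg_cmd("decklink_input", "http://origin.example/live")
print(" ".join(cmd))
```

Running this for real would also need the DeckLink input options and the GPU drivers that the repo's setup scripts install; the point is just that the whole encoder is one transparent command line you can read.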
One of the reasons I chose HLS instead of DASH for this particular use case is that with HLS it's easier to read the manifest and learn how it works. I don't think that it's better; I think for the concept of Streamline, learning how it all works together is more transparent for people to understand.
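To make the readability point concrete: a two-rung HLS master playlist is short enough to eyeball, and pairing each #EXT-X-STREAM-INF line with the URI below it is essentially all a parser has to do. The playlist here is invented for illustration:

```python
# A made-up two-variant HLS master playlist and a tiny parser that
# pairs each #EXT-X-STREAM-INF attribute line with the variant URI
# that follows it.
MASTER = """\
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=6000000,RESOLUTION=1920x1080
stream_0.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=3000000,RESOLUTION=1280x720
stream_1.m3u8
"""

def variants(master: str):
    out, attrs = [], None
    for line in master.splitlines():
        if line.startswith("#EXT-X-STREAM-INF:"):
            # note: real attribute lists can contain quoted commas;
            # this naive split is fine for the simple example above
            attrs = dict(kv.split("=") for kv in
                         line.split(":", 1)[1].split(","))
        elif attrs is not None and line and not line.startswith("#"):
            out.append((int(attrs["BANDWIDTH"]), line))
            attrs = None
    return out

print(variants(MASTER))  # [(6000000, 'stream_0.m3u8'), (3000000, 'stream_1.m3u8')]
```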
The reason that things are simple bash scripts is you can look at them and you can say, "This is the drivers that I need. This is how it's all going together. This is the ffmpeg commands," all that stuff.
People are always asking me, "Wouldn't it be better to write this in Go?" or to abstract this, when I'm like, "I want to show my work and I want to make it as simple as possible end to end."
I didn't really do that much from a software perspective; what it is, is how all of the software and all of the pieces go together. Hardware, everything, and the configuration of how they all fit. So you can look at it like, "That's how you do HLS in ffmpeg. All of it's right there for you to look at."
I get tons of IMs from people saying, "This has been unbelievably helpful, because I didn't know how to do this thing, and I looked at it, and now I get it."
And they say, "But have you considered doing this fancier thing?" I'm like, "Not in Streamline. Because the whole point is it's the basics that you need. All of it." The downside is I in no way make any promise that you should bet on this. It's going to explode. I write that.
I don't do a ton of testing; it turns on, it works, and you can see how it all works. But you wouldn't want to do an event where you get fired if it crashes. That's not what it's for. What it's for is to teach you how this works together.
Now, you could take it and you could burn it and test it and all that stuff, but that wasn't the point of that version of it. The point was education and learning how it all goes together. I'm working on two next iterations of it.
The next one is called Screamline, which is-- What it is, is all of the fancy stuff. One of the projects that I did recently with some friends is we built, with a combination of dash.js and ffmpeg stuff that Akamai did--
We wrote an origin server and built a demo of how you can do low-latency live streaming on a public CDN scalably, with 2.2 seconds of latency. It's the low-latency CMAF stuff.
The thing that was missing in the community is nobody had a server that could do it, and nobody knew how it went together.
This is a combination of the ffmpeg work to ingest and encode the contribution, then we have the server and we have the player properly configured. You can see it all going end to end, right?
That was just, again, an educational thing where people can look at it and say, "That's how low-latency CMAF works." I like getting fancier for fun. Like FPGA-accelerated encoding, or other wacky stuff like that.
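(Editor's note: a hypothetical low-latency CMAF command using ffmpeg's DASH muxer, to illustrate the idea — the input file, segment sizes, and bitrates are not the demo's real configuration.)

```shell
# Low-latency DASH/CMAF: short CMAF chunks inside 2-second segments,
# streamed out as they are produced so the player can fetch partial segments.
ffmpeg -re -i input.mp4 \
  -c:v libx264 -preset veryfast -tune zerolatency -g 60 -b:v 3M \
  -c:a aac -b:a 128k \
  -f dash -streaming 1 -ldash 1 \
  -seg_duration 2 -frag_type duration -frag_duration 0.2 \
  -use_template 1 -use_timeline 0 \
  -utc_timing_url "https://time.akamai.com/?iso" \
  out/manifest.mpd
```

The `-streaming 1 -ldash 1` pair is what makes the muxer emit chunked, low-latency-friendly output, and the UTC timing URL lets a player like dash.js stay synced to the live edge.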
Screamline is, instead of being educational and simple to learn how it works, going to be the fancy, wacky, gloves-are-off "here's how you do cool stuff." Again, no promise that this is going to be robust, but it's going to be like drag racing.
One of the techniques that people will use is they will take an engine block and fill the cooling passages with cement and just go down the strip, because they only have to make it a quarter mile. That's it. And the thing is, that keeps them from exploding, and they don't need a cooling system because they're not running more than a couple of minutes.
This thing is not designed for longevity, reliability, that stuff. It's to demonstrate, "Here's how you get bits from here to there the fastest, in the highest quality, with the coolest stuff."
For example, we're going to be doing ingest and encode into HEVC, and I would never deliver HEVC because of the super terrible patent things, but for contribution you can get away with it because nobody can catch you. Amazon now has a service called MediaConnect which is Zixi compatible.
You can take MediaConnect and you get the free Zixi proxy software, and this will do bonding across multiple connections. With the Zixi protocol it's super robust and it will go to MediaConnect, then from there it'll go to an F1 instance running the NGCodec FPGA-accelerated VP9 stuff, then that'll do VP9, which is 30% smaller for a given quality.
Package that into DASH, it'll be low-latency DASH, and just do the whole system. That's basically the hot rod.
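(Editor's note: a hypothetical sketch of the server-side VP9-to-DASH step, using libvpx-vp9 as a software stand-in for the FPGA encoder; all settings are illustrative.)

```shell
# Transcode the contribution feed to VP9 in realtime mode and package it
# as low-latency DASH. An FPGA encoder would replace libvpx-vp9 here.
ffmpeg -i input.mp4 \
  -c:v libvpx-vp9 -deadline realtime -cpu-used 8 -row-mt 1 \
  -b:v 2M -g 60 \
  -c:a libopus -b:a 96k \
  -f dash -streaming 1 -ldash 1 -seg_duration 2 \
  out/vp9/manifest.mpd
```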
Matt: Is it all open source?
Colleen: All open source. Well, the NGCodec FPGA stuff runs on an AMI by the hour. At some point you have to pay for something.
Matt: Running it on Amazon?
Colleen: My thought is everyone has a credit card and can use an Amazon account, whether to purchase hardware or to use AWS. That's basically the rule for me. For Streamline it was all open source, and then you buy the parts on Amazon, computer parts, and you can set it all up.
Matt: I pulled up your shopping cart here, it looks like $1,500.
Colleen: The next one is going to be much cheaper from a hardware perspective, because instead of doing the encoding-- On Streamline we do all of the encoding on the encoder, and contribute all of it pre-done to the origin. The origin doesn't do any transcoding or packaging, it just serves it.
Which is a tradeoff that we make that we explain. One of the tradeoffs is the encoders are more expensive, but if you are going to run this thing 24/7 it's way cheaper.
The quality is better because you are not doing multi-generational transcoding, but it requires a faster internet connection because you're doing the contribution of all of the levels, not just one top level.
For Screamline we're doing HEVC contribution over bonding. It'll be a cheap little Intel NUC with a Blackmagic capture card or something, and all the transcoding will happen server side.
Now, the NGCodec FPGA IP is closed source. One of the things is that you can't just walk in off the street and license it, but they do have an AMI where you can go and, for X dollars an hour, use it. And there's ffmpeg.
That is an easier way, because then they don't have to procure an FPGA, flash it, negotiate IP, and all that stuff. This time we're doing server-side transcoding, and it also shows a different signal flow. It's not that one's better than the other; I explain why you do one versus the other.
Streamline's more like you would have a fast internet connection in a facility, you'd be running a 24/7 channel or something like that. Whereas the other one's more of an event-based thing where it needs to be small, light, cheap, go across multiple internet connections on untrusted networks, things like that. Then you do a server-side transcode and package it up and deliver it.
The final phase I would like to get to at some point is called Dreamline, which is-- You like that? Dreamline is, hopefully will take some of this stuff that's been educational/experimental, and make it robust for people to use reliably and that they can count on.
The first one is like, "Here's how you learn the basics." The next one is like, "Now let's get fancy and you can do the advanced class." But all of this is not meant for-- Like, go buy something from Elemental or Wowza if you want something reliable and supported.
If you get fired because it exploded, I'm not going to care, because I literally tell you don't use this. You can learn how it works, you could build something out of it that's reliable, but I have not spent all the time on that.
Because reliability is so important in live streaming, I want to be very clear that you should not use this.
Steve: What are the specific pieces, reliability-wise, that you think you might run into first if you were trying to run this in production?
Colleen: For example, I haven't run it for 10,000 hours straight. Things happen with video, weird timestamp stuff, all sorts of things I don't know, unlike the ClearCaster workflow where everything's automated.
This thing is meant to be transparent to the people who are configuring everything, so they can look at it. But because of that, you also have all the knobs to screw yourself with if you mess them up.
This shows you all the settings, but you can also change the settings. Dreamline would be, "All right, we'll do the fancy stuff and it will have a workflow where you can easily-- 'Here's the parts, download a file, burn a USB stick, plug it in, turn it on and it boots right up.'"
Plus the software: you spin up one CloudFormation thing and you've got a platform.
Matt: Have you guys ever tried to do it?
Eric: Yeah. It's weirdly easy and fun to build your own streaming platform out of nothing. It's also one of those things that I wish you had made before you found me, because it would have made learning about this stuff so much easier.
It's hard to find this information just out there in the world, and to have one repository where you're like, "This is everything I need."
It's one of those things where you don't even need to look at the code to get value out of it.
The readme itself is--
Eric: The software isn't important, you have to look at what ffmpeg is doing.
Matt: Do you have to buy the computer and put it together in order to use it?
Colleen: Yes. From the perspective of, "I have made some specific design decisions and it means that the scripts that build that encoder, for example, I haven't tested them on any other configuration. You literally buy that."
For high end live streaming you want a completely controlled stack. I was saying earlier about OBS, amazing software, super flexible, powerful and cool. The problem is it can run on anything.
You don't know what the OS version is, what the driver version is, what the-- Hugh, who writes OBS, is an awesome guy. He doesn't have a whole QA team making sure everything's perfect. He rolls something out, and if there are some bugs, then he unrolls it and fixes it and rolls it out again.
The architectural approach with Streamline was to work towards Dreamline. It's that we control the entire stack of hardware and software and everything end to end. If you wanted to, for example, take Streamline and the basic stuff there, rip off the capture card and put an RTMP input flag on the ffmpeg command and run it on a server? Totally.
You can modify it into tons of different things. There's a lot of value if you want to start tweaking things, but that's what I want you to be able to get to eventually by using this one controlled simple stack.
Hopefully you can eventually say, "But what I want to do is the following thing," or "I want that." You start tweaking it, but it's a reference starting point of how everything goes together end to end. It's not like Wowza where you can just install that server software on whatever machine and configure it.
It's very simple and transparent to get you started on particularly open source based and standards based video contribution and delivery. Does that make sense?
Matt: I like this a lot. It's like, if you work your way up to Dreamline you have this solid base of understanding to get to. You understand what's happening behind the scenes as you're streaming these events, and if you need to fix something you're in a much better place than if you'd not understood any of the mechanisms that are happening.
Colleen: Or even you get the Streamline stuff, or you just read the repository and look at it, and you understand it deeper, then you say, "Cool. Now I'm going to buy the Ferrari, I'm going to buy the Elemental Live," or whatever the equivalent of this is. But the thing is, nobody teaches you how it all goes together.
Eric: I was just thinking about this. The beta version of Streamline was you telling people, "Go learn what the NGINX RTMP module is, and set it up on your own, and stream something." This is like-- I don't know what you would call it--
Colleen: The end conclusion, where we have the contribution and we have the hardware, and we have the player, and all that.
Eric: As a way to get into video and streaming and learning how all the pieces go together.
Colleen: But it's also a fun basis for me to play with new technologies for self education. For example, the new Nvidia cards just came out. The RTX generation has really good hardware encoders in them.
I was previously using the previous generation of it, and now we're going to be using that one. I'm playing with the NGCodec FPGA stuff; I can put that in there. Low-latency CMAF DASH, I can put that in there. It's a playground to see how things go together.
But you would be amazed at how many vendors in the industry will try to sell you something, and I'm not saying that in a disrespectful way, but let's say that you are Elemental.
They're like, "Here is an Elemental Live. You can take in video and you can stream it." The next thing that a beginner is going to ask is, "Cool. What next? What server do I use? What CDN do I use? What player do I use?" Very few people actually give you--
Eric: "What settings do I use?"
Colleen: Unless you're using an off the shelf service, like Twitch or something like that. They all have pieces. Nobody knows how the pieces go together, holistically, and then shares that knowledge in a simple accessible way. That's the point of the project.
Steve: That's awesome. You're talking about how it could run on a badger.
Colleen: Install Linux on a dead badger.
Eric: My contribution to this universe of projects is going to be Seamline, and it's Streamline that runs live streams from internet-connected sewing machines.
Matt: Oh my God.
Eric: Of course, the natural progression of this would be--
Matt: 2019 is going to be wild, y'all.
Eric: To use the small amount of compute on them.
Steve: Last question, I think we're running close to time, but to wrap this all together what would you say is the relationship between Streamline and your work? How do those things tie together?
Colleen: My work is based around people doing live streams that are going to be watched. It's part of a platform that has to be reliable and there have to be viewers, and it has to be easy to use, and that's that world.
But because of that simplicity and reliability, things become opaque. And you can't learn about it. I also work with teams and teams and teams of people, whereas on the other side I'm like, "Wait a second. Hold my beer. I'll just show you how this works."
There's no promise that anyone is going to show up and watch it, there's no promise that it's going to run for a long period of time. It's not expensive, it doesn't have the community aspect of it or anything like that in the perspective of generating viewership or whatever. What it does is: video goes in, video goes out.
But it's an educational open source tool for people to learn about this stuff, as opposed to actually accomplishing the task of doing a live stream. I don't care whether or not I can look under the hood when I've got tens of thousands of people watching, just don't fail.
The opposite one is, "What if failure is OK, and I want to learn how this works?"
Steve: I love open source projects that at least start as an educational platform. It sets up the project so much better for building community, and building contributors, and helping people understand it.
I know there are a lot of open source projects that focus on the technology first and doing the best thing there, and they do things one specific way that only certain contributors are going to understand. But this is like starting on a base of education.
Colleen: It's architecturally sound. This is why, even though we're going for easy and cheap, I still provide you the entire hardware stack and OS specs, even though we could just say, "Go use AWS."
The reason is I want to be able to work towards in the future making it more and more refined and robust and reliable. The bones are there. The architecture is there. It works great already, but you have to refine the reliability and ease of use over time.
That's also when a lot of this stuff becomes opaque. Architecturally, Streamline is great. Screamline is also going to be great in a different architecture for different use cases. The goal of Dreamline is to make it eventually easy and turn key in something like that end to end, but it's going to take a community to do that.
I'm just one person screwing together different things and showing stuff off. A lot of the work that is in Streamline isn't something that you see in the repo. What it is, is I get part A, and I get slot B, and I try to put them together and they're supposed to work together, and they don't.
I track down the people who own that muxer in ffmpeg, or this server doing this protocol, or whatever it is. I work with them to fix their bugs and get it all together so that the configuration is super easy and simple, whereas before it was like, "I didn't know that didn't work."
But nobody tried it before. The reason it's so simple is I bugged all the people in the system to try to get it to be that simple.
Steve: So users of this have the force of Colleen ahead of them, blazing the way and making it simple.
Colleen: For example, adding to the DASH muxer the ability to use the HTTP PUT method instead of HTTP POST, something like that. That was not a flag that was previously available in ffmpeg. Simple add, right?
I worked with Karthik who maintains that. I was like, "Can you add this?" And he's like, "OK. Chill."
The next thing is, I could have used the POST stuff and had a server-side thing that takes that data and writes it to a file, but if we just do PUT, it's just a configuration for NGINX and it writes to a file. Does that make sense?
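(Editor's note: a hypothetical sketch of that pattern — ffmpeg PUTting DASH output straight to an NGINX origin that writes to disk. The URL and paths are made up.)

```shell
# ffmpeg side: push DASH output with HTTP PUT instead of POST.
ffmpeg -re -i input.mp4 -c:v libx264 -c:a aac \
  -f dash -method PUT \
  https://origin.example.com/live/manifest.mpd

# Origin side: nginx (built with ngx_http_dav_module) accepts the PUTs
# and writes them straight to the filesystem -- no server-side code needed.
# nginx.conf snippet, shown here as a comment:
#   location /live/ {
#       root /var/www;
#       dav_methods PUT;
#       create_full_put_path on;
#   }
```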
Steve: Seriously, if you haven't checked out Streamline yet, the readme alone is worth a visit. That's what I've been pointing people to who have joined Video Dev and ask, "I need a high-level overview of how all this stuff works, for--"
Colleen: "How do I video?"
Steve: "An engineer that's just joined our team."
Colleen: By the way, all of the information is there. It's not exclusively relevant to live streaming. The reason that I chose live streaming is that it shows you how everything works end to end by definition.
The only difference between live streaming and on demand video is you are creating the video, contributing it, encoding, doing all of it in real time. Therefore you can see the whole system.
If you remove the real time requirement, that's way easier. You can rip off the hardware, you can rip off the live real time encoding, and you can just take your time.
But all the configs for the HLS ABR delivery, the player, the CDN, the origin, it's all the same. So, I figure if you can dodge a wrench, you can dodge a ball. Do live.
Steve: That's real.
Matt: Thank you so much for joining, you two, I appreciate it.
Eric: Thanks for having us.
Matt: Of course, any time. If you have any questions for Colleen or Eric, both of them are also on Video Dev. Video-Dev.org.
Colleen: There's a Streamline channel in the Video Dev Slack. If you want to come and ask questions, we're helpful. I'm trying to build more of a community around it. If you want to work on it, that's awesome.
Contributions welcome, but talk to me first before you waste time on it. A lot of people are like, "Patches welcome!" And I'm like, "Don't go write a bunch of stuff and just show up with a patch, and I'm going to be like, 'I disagree.' Just be like, 'I want to work on this,' and I'll be like, 'OK, great,' if that fits."
For example, a lot of the philosophy behind it is that it's got to be simple and transparent; sometimes people want to make it easier to use but harder to understand. A lot of this is architectural: it's got to be simple.
But if you want to join and work on it, that's great. If you want to ask questions about how video works, we're super happy to help. Hopefully, eventually, we will build the opaque simple one with Dreamline.
Matt: Awesome. As always, you can catch some subset of us, or all of us, at SF Video at the end of the month.
Colleen: If you're not a douche bag and you want to come to a cool party--
Eric: Come to NAB.
Colleen: On Tuesday, join--
It's important to be part of the video industry, and NAB is where the video industry comes together.
Colleen: On Monday there's the Demuxed party, on Tuesday there is my party. There's also the Wowza party, which would be super awesome. It's been awesome the last couple of years, so maybe go to that first and then to mine.
Steve: There will be parties, there will be booths.
Colleen: There will be booze.
Matt: Booths will be had.
Colleen: What's the phrase from one of the Alien movies? "Big things have small beginnings."
Steve: Yes, something like that. Is that from Prometheus?
Colleen: It is. The robot guy?
Matt: Thanks again, y'all. Hopefully we'll see you at the next SF Video or Demuxed. Thanks, everyone. See you soon.