Matt: Hey everybody.
Welcome to the first Demuxed episode, or at least the first Demuxed recording, of 2021.
It's been a hot minute since our last episode.
I think the last one dropped for Phil's birthday, actually, in October 2020.
Either around Demuxed, or right after but--
Phil: It was the day before Demuxed, I think.
Matt: Oh, that's right.
Phil: I seem to remember. It was the day before; we leaned into that a bit.
Matt: If you missed it, that was the low latency, real time conversation with Kwin.
Kwindla from Daily. Congratulations to them, they just announced their Series A, I think, in the last few weeks, and it's a cool new product.
If you haven't checked that episode out, check it out. It's great.
But yeah, so we're trying our best to actually be--
We say this, I think we've said this every year, but this year it's going to happen for real starting in 2021, where we're going to try to be a little bit more consistent and get folks on and have more conversations.
If you know anybody that would want to join the podcast, or somebody that you think we should reach out to talk to, reach out to firstname.lastname@example.org, or just ping any of us on Video Dev.
That's video-dev.org or Twitter or snail mail.
Phil: We take snail mail? Okay. We have a P.O. box that people can send us things to?
Matt: I shouldn't have said snail mail. I'm not accepting your mail.
Speaking of Demuxed 2020, or Demuxed 2021.
We do have dates, that's the week of October 4th.
So we'll probably do something similar to last year, where it's either three days, more in the mornings, or go back to the two-day format.
We're still, as you can hear being a little bit--
We're keeping our options a little open now while we wait and see a little bit more how vaccinations continue to roll out.
We're really, really, hopeful given how well things have been going here for the last month or two with numbers and vaccination numbers that we can at least get domestic folks in for a sizable event in October.
That would be really exciting.
And then if it's still weird around international travel, then we still want to make sure that international friends can join and really lean into having a great hybrid experience with a couple remote speakers and whatever else that means.
But we're hopeful right now that we'll be able to do something in person, and then maybe something hybrid, or if we need to and things, heaven forbid, take a different direction, then we'll have the same great experience online that we did last year.
But either way that will happen the week of October 4th, so keep an eye out.
Just go ahead and clear your schedule that week. You know, put "OOO" on the calendar. Whatever you need to do.
But in the meantime before then, as you know last year was supposed to be the first time we were running Demuxed.
Phil had done an amazing job of putting together a slate of sponsors and we had a call for papers out.
We were really excited about all the stuff.
And then that was supposed to be in March, right? No. April.
Phil: April, I think. Yeah.
Matt: Yeah. And then something happened and we weren't able to do it.
This year, still not quite there as you can--
We might be okay by the end of the year but right now is not the time so obviously this year is also not going to happen.
However, Jeremy from Sydney Video Technology is doing the same thing we did last year and we're organizing--
There's very little "we" there, except as local organizers on Phil's and my part.
Phil: I think what we technically did was give him a stream key at this point.
Matt: Yes. Well, we did that excellently. My bad.
He's putting together a global video tech meetup, so it's a 24 hour Mega Meetup where he's got meetup organizers from all the other meetups around the world taking an hour to present during their timezone so that we can have a full 24 hour meetup.
It's going to be on May 27th. If you're interested in speaking at it, the format differs from meetup to meetup: a lot of them have you pre-record and then do a Q&A after, or do all pre-records, or just do it live on Zoom or something.
The organizers themselves can help you out individually, but just remember: last Thursday of May. Yeah.
Any other high points I'm missing there, Phil?
Phil: No. What an exciting end of the year, it's looking like, though.
Matt: I know, right?
Phil: NAB, IBC, Demuxed.
I'm so excited, so excited to see people in person.
This is while we have a travel ban and the UK can't go anywhere, obviously.
I just, I'm worried about NAB being early in that list of events in Vegas after nearly two years of no human interaction.
It's going to get messy.
Matt: My dice skills are going to be rusty is the real problem there.
Phil: That's true. That's true. Your craps will be crappy, one might say.
Matt: I didn't know where you're going to take that.
I'm glad that, that went a better direction than I thought that was going to go. But cool.
Anyway, looking forward to seeing hopefully many of you at the end of the year and hopefully in person.
Anyway, this episode, we were lucky enough to get Rob from JW Player and the hls.js project coming in to talk about coming close to hitting 1.0.
We've been through a bunch of release candidates, and I think I downloaded an RC with seven numbers after it the other day.
Tons of work going into that thing, a bunch of amazing, exciting features right in the pipeline, and so we wanted to get Rob on to talk about what are some of those exciting new things?
Where the project is? And yeah, just life, liberty, pursuit of open source stuff.
Whatever else we want to talk about today.
But anyway, thanks for joining Rob.
Do you want to give a high level overview of either yourself or hls.js or both?
Robert Walch: Sure, man. Thanks. I work on the Web Player team at JW Player.
I've been there since 2013. My first job in video.
My background is, I like to say in interactive media, meaning I guess, I just knew Adobe and Macromedia products well, prior to really working on Web Video.
But what was good about that is I was exposed to video in different ways in QuickTime and getting QuickTime to sync with other interactive applications.
Same deal with Flash, and that converted pretty well over in 2013 when Flash was on its way out, but it was still carrying a lot of the load for a lot of browsers.
Based out in New York City, where JW Player is. I don't know what else there is to tell there.
Matt: Yeah. That's a start. Jumping into what is hls.js?
How did that project, what are the origins of the project? Can you give us a little detail there?
Robert: The project was created by Guillaume du Pontavice in 2015.
There's a lot of background there.
I'm not sure how much is good to talk about because as I mentioned, I was working on a Flash HLS player for JW Player.
They had it open to the public, not necessarily open source, and Guillaume had flashls, a Flash HLS player, based on that.
I think JW Player didn't like him doing consulting work with it.
I think he created it while he was at Dailymotion, or at least was maintaining it while at Dailymotion.
It really took off in terms of an easy to use performant HLS solution for browsers that didn't support HLS playback natively in HTML5 video tag, but that did have MSE support.
For me at that time, 2015, '16, we were trying to build our own at JW Player.
We didn't do a port of our Flash HLS player.
We built something new that had a codename, Caterpillar because the little pieces of the caterpillar were segments of media.
But ultimately, it had some flaws in terms of yielding too much in the browser, right? Everything was async.
It yielded so much in between like getting a sample of something and trying to transcode it and then appending it to the buffer, that other things would get in the way.
At the time, Jeroen, the JW behind JW Player, was like, "Just use hls.js. It's really fast. Just use it."
After saying no to that five times, he just gave up.
There's too many other things to do.
I think we were, I don't know if it was--
Yeah, I think we were trying to get JW Player 8 at the time, and John Bartos I think had just started at JW Player and we're like, "Let's give this to John. He's getting the hang of things, and pretty good," right?
He could make this new provider, as we call them: a video adapter that could detect HLS, and then wrap hls.js to handle it in our player.
He did, and eventually started to maintain the project as well, which is pretty cool.
Steve: I'll say, I remember when hls.js came out.
I remember having Skype calls with Guillaume, and I met up with him at IBC years and years ago.
Yeah, I really appreciated what he was doing with it because it was just this nice, focused project.
When you're looking at something like a Web Player, sometimes it's hard to think about, what is the right place to break off a solid module of this thing and just focus on this?
And open it up to everybody else to contribute to, right?
I just really appreciated the focus of that project and how clear it was and how open it was.
Robert: It fit in perfectly for us too, in the model of needing different providers, or different module adapters, for types of media. If it's an MP4, we have something that just handles video source equals this URL.
But if canPlayType, or whatever the test is in the browser, doesn't say "probably", if the browser says it can't handle it, we're going to have to handle it some other way.
Yeah. We're mostly dealing with HLS streams and hls.js was, that was a perfect solution.
At the same time, we were looking at doing Dash with either dash.js or Shaka Player. It's not every day that there's a something.js that solves all your problems.
Video.js probably does most days, but we couldn't do that either for obvious reasons. It's nice.
I think there's a lot to say about what hls.js is, and what its purpose is related to that, right?
It would be great to have Guillaume on the show and get their take on it.
But for me, the goal of hls.js was always to play HLS streams in the browser, as well as Safari does.
In other words, when you're not in Safari, and you want to play an HLS stream, if you use hls.js, it should be able to play the same streams as Safari can play just as well.
But on top of that, it's really configurable.
You can set up how much you want to buffer ahead, or how much of the back buffer you want to clear.
There's so many settings. It's extensible.
You can replace parts of it if you want to.
If you want to use your own ABR algorithm, you have to replace the whole module that handles ABR, but you can do that.
It's pretty transparent in terms of giving you access to what's in the stream, or what at least the player is parsing from the stream.
Not just metadata, but all the details from the manifest, how performant loading is and all the events of everything happening.
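A quick sketch of what that configurability looks like; `maxBufferLength`, `backBufferLength`, and `abrController` are real hls.js config options, but the values here are arbitrary examples, not recommendations:

```javascript
// Illustrative hls.js configuration sketch (values are arbitrary).
const config = {
  maxBufferLength: 30,   // target seconds to buffer ahead of the playhead
  backBufferLength: 90,  // seconds of already-played media to keep before clearing
  // abrController: MyAbrController, // swapping ABR means replacing the whole module
};
// const hls = new Hls(config);
```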
There it goes overboard compared to Safari, but the bar for whether we should fix a stream issue is: does it work in Safari?
Steve: Yeah, it's funny that you mentioned Safari as the bar, because I think, a lot of people who've worked with HLS and Safari actually have had bad experiences in some ways, right?
And so I think what is a big value of doing HLS, let's say manually in the browser, is that sometimes you as the person building the player actually knows better about what the audience needs than the browser is expecting. Right?
And so having everything in a black box underneath the browser is not always going to create the best option.
Matt: Yeah. I think it's also worth taking a quick step back and talking about what is actually happening here in hls.js, because it's gotten better in recent years.
But that shit is cray.
Correct me if I'm wrong, Rob, but I assume this has gotten much better with fMP4.
It's really incredible.
Robert: Yeah. I think everybody asked, "Why would you do that?"
And then it's like, "We'll broadcast and pick two." Still don't get it?
Also, you're a web developer. Why don't we use JSON?
Well, come on. You've got to use XML or weird playlists and stuff, like broadcast.
Okay. You want to do ads?
Man, it's the way. But yeah.
I have a lot of feelings, opinions, thoughts about MPEG-2 TS, and fragmented MP4, CMAF, and where things were, and where they're going, right? The standard, when I got started, was that you had the TS segments, and they would be...
The standard was 10 seconds long.
The duration of these things is a big deal, and they've only been getting smaller in general, but that's a decision of folks creating streams and what latency they want, or performance.
So we're being forced to use really small segments with these parts, I'm sure which we'll get to.
But even without that, as a maintainer looking at issues, you'll have someone in HLS going, "I've got one second segments and the stream is going crazy. There's so much stuff."
Well, they don't complain about all this stuff in the network panel, except for the times of errors or whatever, but I definitely would have always pushed back on anyone trying to do two-second segments, at a certain point in time.
That's crazy. You don't want to do that. That was even one of the reasons for being of Dash, right?
With Dash, two-second segments, you can do that.
It's going to be faster than hls.js, or HLS.
But yeah, you can do that now with HLS, but at a certain point, you lose some of the efficiency of how a player tries to offload that transmuxing.
It does exactly what you said, Matt, but it will also take the data that it loads, pass it off to a worker, which can then do that transmuxing in another thread, right?
And then it has to give the data back to the main thread to be appended to MSE, to actually be buffered once it's been turned into MP4.
I guess that maybe there's less work with fMP4, but there's a couple things going on. It's not as pass through as some folks would make you believe.
I've heard a lot of things being said, like, "Well, with CMAF, for example, it's really sticking to a certain specification of fragmented MP4."
But players need to know or want to verify.
At least hls.js wants to verify the media's actual duration for each track, audio and video, and so there's some unboxing of the MP4 going on.
You might also want to unbox the codecs which are tricky to do. Hls.js just gets the codec type.
It will get the first part: oh, this is avc1. But that's it. It doesn't get the rest of the little profile and level data after that.
If you don't know that, if that wasn't delivered in the manifest correctly, you're going to have problems when you try to initialize your source buffers and start buffering.
That's just one example of a little thing I see go wrong with people's streams, especially since they might not have the main manifest.
They might just give us the playlist, and so you have to guess or parse out a codec.
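What Rob describes, getting just the base codec type from a CODECS string without the profile/level tail, can be sketched like this; the helper name and return shape are mine, not hls.js internals:

```javascript
// Sketch: split a CODECS attribute value into base type and profile/level data.
function parseCodecs(codecsAttr) {
  return codecsAttr.split(',').map((entry) => {
    const [base, ...rest] = entry.trim().split('.');
    return { base, profileLevel: rest.join('.') || null };
  });
}

// parseCodecs('avc1.64001f,mp4a.40.2')
// The base "avc1" is enough to pick a demux path, but MSE needs the full
// string ("avc1.64001f") to initialize a SourceBuffer correctly.
```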
And then the TS segments are most of the time going to be muxed. You're going to have audio and video elementary streams in a single playlist.
That's rare, or harder to do, with fragmented MP4, and so you're almost always, and especially with CMAF where I think it's defined, going to have separate audio and video.
Now, you're dealing with twice as many playlists, twice as many segments.
You've got to align the two, which is easy in VOD, but not always that easy in a live stream, where you might get the two in a slightly different position and have to parse.
Again, parsing things, because you want to know the timecode that's in there to align them. You have to do that even with fMP4.
I forget. PTS, there was PT. We had program timestamps, right?
Presentation timestamp, sorry, with MPEG-2. Yeah.
What are we looking at again, with fMP4? I forget.
Phil: BMDT. Base Media Decode Time, right?
That's how it gets put on the timeline, I seem to remember. I could be wrong.
Robert: I don't even know off the top of my head, what's happening in hls.js there.
There's plenty of gray areas like that for me that one day we'll get to.
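The tfdt box Phil refers to carries baseMediaDecodeTime, the fMP4 counterpart to the MPEG-2 PTS discussion above. A minimal reader for just that field might look like this; it assumes you have already walked moof > traf down to the tfdt payload, which real parsers have to do first:

```javascript
// Read baseMediaDecodeTime from a tfdt box body (after the size/type header).
// Layout per ISO/IEC 14496-12: 1 byte version, 3 bytes flags, then a 32-bit
// (version 0) or 64-bit (version 1) baseMediaDecodeTime.
function readTfdtBaseMediaDecodeTime(data) {
  const view = new DataView(data.buffer, data.byteOffset, data.byteLength);
  const version = view.getUint8(0);
  if (version === 1) {
    // 64-bit value: combine the two big-endian 32-bit words
    // (loses precision past 2^53, fine for illustration)
    return view.getUint32(4) * 2 ** 32 + view.getUint32(8);
  }
  return view.getUint32(4); // version 0: 32-bit value
}
```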
Phil: Things fMP4 was supposed to fix, but really, it would have been complicated.
Robert: Sure. I think we're all going that direction, and this project will get better. I think there's definitely improvements I've made with version one.
I'm not going to call any out because maybe some things are worse.
But that is one of the things we're going to look at more: we feel pretty confident that TS is solid, but with CMAF, before we move to smaller and smaller segments, there's still some work to do around just improving the performance of their handling.
It's not always just pass-through. There's still some work that needs to happen.
Actually, sometimes by offloading things to a worker, we're creating more work, or we're recreating that scenario of too many async yields in the browser, that maybe don't need to be there anymore.
That same problem I was describing with the failed HLS project that I worked on, it could be happening as we go to smaller segments. Take smaller sips of media.
There's less of a need to offload it to crunch the data, and more of a need to just get it buffered as quickly as possible.
I know that the folks on the media team at Chrome have done a lot of work to handle MSE in workers, and network requests in workers, and hls.js should probably already be doing that at this point.
That's definitely one of the things I just haven't got around to, but that's definitely on the list of future enhancements.
Matt: You did mention v1 in there.
We mentioned at the beginning that, that's something we were going to talk about. Yeah.
It's been a long time coming. We're coming up on-- Yeah.
I love the notion of 1.0 is when it's supposed to be production ready and hls.js has been in production for-
Phil: I don't think it's ever been used in production actually. That's what I heard.
Matt: Yeah. I mean, who would be crazy enough to use it in production? I don't know. Yeah.
But we are coming up on a V1 of hls.js here, which is, I don't think that it's the milestone of its production ready, because I think we can all agree that it's been production ready for quite some time now.
But what does that milestone mean to you and everyone else that's been working on it?
Robert: All right. V1 means production ready.
The V0 was, we're not confident yet. I don't know.
At the point where I became maintainer of the project, the project was already going in this direction, right?
Just going back to earlier, I mentioned that John Bartos at JW Player was maintaining hls.js.
He put forward a spec for low latency HLS, the Bartos spec.
It was getting a lot of momentum; a lot of people were in, because there were already companies using this chunked transfer style of live streaming on top of HLS.
But a lot of people were going, "Well, Apple hasn't said okay yet. We're trying to get their blessing on this spec."
They came out and said, "Well, no, we have a different idea of how this is going to work." It's Apple's LLHLS.
It's quite different. It doesn't use chunked transfer; it has those parts.
And then John famously got off one camel and onto another, as seen in Will Law's talk at Demuxed, and essentially, in other words, went to work at Twitch and stopped working on the project.
At that point, it looked like I was going to pick up maintenance, and try to shepherd through the work until we get to 1.0.
Getting to your question, what is 1.0?
I think the plan was basically do a conversion to TypeScript for all the code in the project.
That was three quarters of the way through, or more, as well as adding low latency in one form or another.
We already had the chunk transfer support there, and a demo mode. That was pretty much it.
There's whatever else we might fix.
There'll be issues coming in, and someone might respond, "Well, seems to work in our version one fork, so that's what it's going to be."
I made the decision that we wouldn't support LHLS, since the spec no longer had traction or support.
We would add support instead for Apple's LLHLS, and that would be the main project.
Fix any issues, try to make sure it's a little more performant than the previous release, and call it v1.
There's a few more things that went in feature-wise: some better codec support, looking at some of the fMP4 support, trying to improve that.
Some more live features, calculating drift and managing latency.
Controlling playback rate so you can catch up if you need to. Some things like that related to support.
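The playback-rate catch-up idea can be sketched as follows. hls.js v1 does expose `latency` and `targetLatency` on the Hls instance, but the rate curve and constants below are invented for illustration:

```javascript
// Hedged sketch: nudge playbackRate up when measured live latency drifts
// past the target, capped so audio doesn't sound too chipmunky.
function catchUpRate(latency, targetLatency, maxRate = 1.5) {
  if (latency <= targetLatency) return 1.0; // on target: normal speed
  const overshoot = latency - targetLatency;
  return Math.min(maxRate, 1.0 + overshoot * 0.05); // +5% per second of drift
}

// video.playbackRate = catchUpRate(hls.latency, hls.targetLatency);
```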
But it did take a long while. That's not just because LLHLS is complex.
It's made up of a lot of different features.
It's also because I had three different forks in my lap that I was working with.
Version one was actually based off of JW Player's fork of hls.js. It had some different caption handling features.
A couple other little tweaks in there.
There was the current v0 which is pretty production ready.
People were using 0.12, whatever it was. And then there was JW Player's fork that hadn't been updated to v1.
There was a fork of 0.12 or 14, or 13, whatever it was.
What was important for me was not just, okay, I got to take over what John was doing and get version one out.
I have to prove myself as a maintainer of the project here, and show people that I'm supporting their use cases for the version of hls.js they're using now, before just shoving this new major release down their throats, right?
I had to get rid of one of these forks. Basically, I did this in 0.13 and 0.14.
I totally messed with the MP4 remuxing and got familiar with that.
That probably took me six months.
Oh, and there was this whole pandemic thing.
"2020 happened" I think is a better way of putting it.
It wasn't until maybe just last summer that I started feeling comfortable with where I was taking the previous version of the project.
At that point, I had replaced JW Player's fork with the open source one.
JW Player was using 0.13 or 14 releases that I was cutting.
I was starting work on LLHLS. Yeah.
Now, another, I don't know what year later, there's been six release candidates.
I just cut the sixth one yesterday.
The seven digits that you were seeing might have been a commit hash or something. I'm not sure.
There's been a lot of those and probably went from beta to RC a little early.
What happened there was just we started getting a lot more feedback after cutting RC.
It's surprising how many more people start to test upgrading players when they see we're getting close to a big release.
I think more folks are actually rolling out solutions to LLHLS on the streaming side, right?
That's really cool. People are putting it through its paces. I've been responding to issues there.
I'm cutting more RCs than I'd like to, but hopefully by the time people hear this, 1.0 will be out. I think we're really close.
I say that there could be 10 new issues I need to triage and I'm not in a rush, but I do want to make sure it's not just solving my company's needs.
It really serves the community well. It is in production today with JW Player.
RC5 is in production today with JW Player. It has been for a few of the RCs.
Most of the fixes in these RCs have been around LLHLS support.
We pretty much built that feature with two sample streams from Apple, and here and there, some from Wowza.
There's a lot more to do there in terms of testing stability and performance, but hopefully with the use case that's similar to what Apple is presenting, one second parts, you should be good. I hope.
Matt: Often with a major version, there's breaking changes along with it?
Are there any major breaking changes to speak to?
Robert: Sure. I think for the most part, version one doesn't have breaking changes for a simple setup use case like set up hls.js, attach a video tag and load a URL.
That should work exactly the same.
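The simple setup path Rob describes, feature-detect, attach a video tag, load a URL, looks roughly like this. `isSupported`, `loadSource`, and `attachMedia` are real hls.js calls; passing `Hls` and the video element in as parameters is my framing to keep the sketch self-contained:

```javascript
// Minimal wiring sketch: native HLS where the browser supports it,
// otherwise MSE-backed playback through hls.js.
function setUpPlayer(Hls, video, src) {
  if (video.canPlayType('application/vnd.apple.mpegurl')) {
    video.src = src; // Safari: native HLS, no MSE needed
    return null;
  }
  if (Hls.isSupported()) { // everywhere else with MSE support
    const hls = new Hls();
    hls.loadSource(src);
    hls.attachMedia(video);
    return hls;
  }
  throw new Error('No HLS playback path available');
}
```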
Where things start to change a bit are in the configuration options.
There's some new ones, some ones that have changed slightly.
A couple of defaults have changed, like automatically clearing the back buffer.
A lot of smart TV folks wanted that, and you can configure it depending on your smart TV.
After that, the biggest breaking changes are going to be if you're listening to events: the object structure of those events, and a little bit the order of certain events.
There's some things that changed there when we did this refactor to support chunk transfer.
There was an experiment, I've labeled it an experimental feature, right?
This one configuration option called Progressive.
It was the option that would stream bytes from a segment progressively.
Rather than getting the whole segment, you could get the chunks and it would append those. That's off by default.
But you can turn it on, if you still want to play around and make LHLS a thing.
Just don't expect a lot of support from me.
I've broken it once or twice in the pre-releases and fixed it, but there's just certain situations and streams where not everything's going to work perfectly.
It's experimental, and not the good experimental of, we're almost there and we're going to make it work.
It's maybe actually deprecated before arrival, but it was a big part of the v1 work, and I thought it was worth keeping in.
But yeah, so breaking changes: events got touched pretty heavily. Oh, and the stats for loading.
When you get a loading event, you get this stats object that says when something started loading and when it ended loading.
That structure is totally different.
If those metrics are important to you, you'll need to follow the migration guide to just move things over.
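For example, reading the restructured stats might look like this. The nested `{ loading: { start, first, end } }` shape is my assumption based on the discussion here; verify the exact field names against the migration guide before relying on them:

```javascript
// Sketch of deriving load metrics from v1-style fragment stats.
function loadDurationMs(stats) {
  return stats.loading.end - stats.loading.start; // total load time
}
function timeToFirstByteMs(stats) {
  return stats.loading.first - stats.loading.start; // TTFB
}

// hls.on(Hls.Events.FRAG_LOADED, (event, data) => {
//   console.log(loadDurationMs(data.frag.stats));
// });
```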
Phil: Are you taking notes here, Heff, or?
Heff: Yeah. It's good to know.
Robert: We made a guide for you, and I would love feedback on that.
It's been a running thing.
A lot of this stuff changed over a year ago, and so I haven't made a new app with the old version and migrated over to the new version to double check everything.
A kitchen sink of migration tests.
That would be cool, and I'd love to.
It was in the ticket. It was like, let's do this. No. Too much work.
We have a lot of notes. We put them together in the migration guide, and we wish you all luck.
Let us know if we missed anything.
It will go right into the migration guide, and feel free to contribute to it as well.
Matt: On that note, I would love to hear your thoughts on where do you see the future of the project?
Where do you see hls.js going?
Robert: Well, one thing is Apple is still updating the HLS spec.
We've got some new features coming. We're interested in those.
Me personally, I would really like to have a little more focus on some broadcast features, and features that might be good for smart TVs, so HDR and working with UHD content.
It would just be fun. I don't get a chance to do or haven't had much of a chance to do much with that.
Like I said, there is some better support for codecs and codec switching.
In the old version, if you had a stream that had both HEVC and AVC, hls.js would either ignore the HEVC altogether, or, even in Safari, where that codec is supported, it might try to switch between the two but not do it properly.
That's supported now.
You can manually switch between the two, but initially, it's going to pick one, and it's going to, the ABR is just going to stick to a certain codec.
But maybe in the future, ABR should be a little smarter. It should adapt based on some other principles it doesn't have now, right?
Like which codec is more efficient?
I think there's a feature in HLS about prioritizing, or giving a score to, variants, but we don't use that.
Matt: Yeah. The new label.
Robert: Yeah. We're also not using media capabilities in Chrome or other browsers, which I'd really like to do.
Preventing a constrained device from using 4K, if it's not a good idea, with the Media Capabilities API would be really cool.
We have issues.
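That check could look like this. `mediaCapabilities.decodingInfo` is the real browser API (on `navigator` in Chrome and others); the wrapper function and its injected parameter are hypothetical:

```javascript
// Sketch: ask the Media Capabilities API whether a variant's video config
// can be decoded smoothly before letting ABR pick it.
async function canSmoothlyDecode(mediaCapabilities, videoConfig) {
  const info = await mediaCapabilities.decodingInfo({
    type: 'media-source', // MSE playback, which is what hls.js uses
    video: videoConfig,   // { contentType, width, height, bitrate, framerate }
  });
  return info.supported && info.smooth;
}

// const ok = await canSmoothlyDecode(navigator.mediaCapabilities, variantConfig);
```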
We have a media capabilities label that we can apply to issues.
There's a couple waiting there. A big one is performance.
I mean, I wish I could say all the performance issues have been resolved in version one.
It's ready. This is it. Version one. But there's just tons more to do.
Like I was saying, MSE in workers and handling the network in workers, there's a lot of refactoring that needs to happen there.
Yes, there will be yet another refactor of hls.js.
Maybe that's version two, maybe we get started, and it's not that complicated.
Version one is not the end all be all.
Hopefully, I think a lot of other folks would say it's a major update, but maybe we'll take these numbers a little less seriously now, or just as seriously as we have and move quicker. Right? We'll see.
Matt: Well, that was the whole thing with React, a while back, right?
Because they went from 0.14 straight to 15, 16. Wow.
Because they were like, look, from the perspective of 1.0 meaning you can use it in production, we've been that way for a while. It's time to just make the jump.
And so, I personally see a big parallel there. Hls.js is massively used in production.
It's out there. People are using all the time.
And then just this, I'm excited about this 1.0 feeling the project can put the foot on the gas a little bit more.
You know what I mean? Is that how it feels to you at all?
Going from 1.0 to 1.10, 1.20 feels like less of a big deal than going from 0.14 to one, just because of the perception. Am I remotely on base?
Robert: Yeah. I mean, everybody has that perception.
You can't help it. But ultimately, major update is about breaking changes being introduced in the API, or in the functionality and even user experience, I think would be acceptable.
If there were a version two, it wouldn't just be something in the guts of hls.js changing.
I think it might be, just thinking about the future:
Maybe there's too many configuration options in hls.js, or the whole config is too flat, and we need to think about restructuring the API a little bit.
It also depends on how things go with this release and the migration guide.
If folks upgrade and they find they're just having a lot of trouble, or we didn't document things as well as we could, as we course correct within version one, we might take those lessons and apply them to version two, right?
How do we set this up, so it's more intuitive for folks to find the correct path to build their implementation or get at the underlying data in HLS?
I think that will be what drives major updates and a little bit of how often we want to take them on.
But there is certainly something about just cruising in maintenance mode, and just making it really as solid as it can be, and seeing if more and more folks will keep using the project.
It was really encouraging during Demuxed 2020 to hear how many people were using hls.js, and I felt a bit remiss for not trying to even do this talk at the conference last year.
Matt: I hear there's something happening this year.
Robert: Well, the other thing about that, was it October?
Big Sur was coming out, and iOS 14. I don't know if it had just come out or was about to come out, I forget.
But that for me was also a milestone of, we've got to get this done.
We want to drop hls.js v1 with LLHLS at the same time Apple drops their update.
I couldn't do it. It was tough getting things in over the summer.
Delta playlists and part loading and stuff like that.
But again, I really pushed through to try to get it closer, while feeling like, man, who even has these streams?
Who's going to use this? But then hearing at Demuxed, in every other talk, "Yes. I threw this together with hls.js."
Or, "Check it out. A component video player with hls.js." It was awesome.
It really encouraged me and kept me going.
Matt: We're at time for our session here, but before we wrap up, we wanted to throw it out there for you.
What do you need? What do you want?
What are you looking for? If folks want to help out with the project, get involved... Managing and maintaining a large open source project, as Steve can attest, is a pain in the ass sometimes.
It must be really rewarding too, but what are ways that the community can help and support you, if people want to get involved?
What are the on ramps that you see into the project for folks that want to start contributing?
Robert: Yeah. I see you guys as the example of community building.
For myself, it's not a strong suit.
A challenge I have is looking at contributions, PRs specifically, and trying to give good feedback in PR review.
Sometimes you see something and you're like, I don't know if this is a good idea.
Sometimes you know it's going in the right direction, but you're not so sure about the implementation.
There's other times I'm just really distracted with some other aspect of the project, and I can't really devote the amount of time it needs.
Sometimes you can't just look at a PR on the surface and go, "The code looks good, so let's just accept it."
There's a whole feature underneath that you really need to think about.
Is this feature being implemented correctly? That can be really difficult.
It can take a lot of time just to ramp up to understand the problem.
I'm sure there are ways to mitigate that, or at least help, by giving some guidance in PR templates or contributing guidelines: asking folks to really explain why something is important, or how to do that ramp up.
Just because I've been working on this project for the last year, and at that video company for however many years, it doesn't make me an expert at everything, right?
I do have to say no sometimes. But more important than that, I don't want that to turn people off.
I want to really grow the community.
I'd like to see some other folks be interested in coming on and helping maintain the project, whether it's triaging and answering issues, collaborating on future projects.
A really, really big one that I feel is a huge gap in the project as it stands now is our demo and sample streams.
Our test streams. I'm really jealous of dash.js and the industry forums' collection of sample videos, with little credits like, "Oh, you contributed these," and "I also contributed these streams."
And it's like, "Oh, how come they're not just handing me streams?
I need a beautiful list of streams, by feature, by company, by encoder, and all of that."
If you're listening, throw some streams at me.
But I think our demo also needs a little bit nicer design, to make it a more welcoming home for said contributions.
That's something I've been thinking about, and hopefully something that's in the future for the project.
I think that would be a great way for people to get involved.
Matt: Awesome. Well, any parting words from you all, Phil, and Steve?
Phil: This has been great. Appreciate you jumping on, Rob, to chat and do this.
Steve: Thanks. This is really exciting. Congratulations on getting 1.0 production ready.
Robert: Thanks, guys. Thanks so much.
Matt: Thanks again, Rob. That was awesome.
Really looking forward to 1.0. Hopefully by the time this is released, it will be available.
Check out hls.js, github.com/video-dev/hls.js.
If you want to get involved, it sounds like there could be some help needed triaging issues or whatever else that might mean, and I bet Rob would more than welcome some help there, or test streams.
Also, don't forget to start thinking about your Demuxed talk, Rob.
It sounds like you should be thinking about a Demuxed talk.
We're probably planning on opening up the call for papers at the same time as usual, early summer, so think-
Phil: A little earlier, even, than last year. I think last year was a little late.
Matt: A little earlier than last year, so think May. End of April or May for when you can start submitting talks.
Be thinking about those. There's also the mega meetup in May if you want to give a talk there, or at any of our other meetups.
Thanks again, Rob. Thanks for listening, and we'll see you all next time.