Ep. #15, Low-Latency HLS, Pt. 2
about the episode
Matt: Hey, everybody. Welcome to part two of the Low-Latency HLS series. I guess two things are a series?
We're without Heff today. And when I say without, I mean, we are celebrating Heff today.
He just had a second child, so he is out on paternity leave and hopefully he'll be around for the next episode but we're not going to have him today because he's enjoying a new life in his family. So that's great.
Phil: So lucky, congratulations Heff.
Matt: Congratulations Heff. If you see him on Twitter, which you won't, because he's not on Twitter really, you should tell him congratulations or send him an email.
Phil: Just tell him to get off Twitter, is what you should do.
Matt: You should probably tell him to get off Twitter.
So yeah, first episode, we talked about kind of the background of Low-Latency HLS and like what got us to this point, community specs, Apple stance, what latency looks like in these different ways of delivering video, blah, blah, blah.
And so now comes kind of where we were building up to, which is Apple's newly released spec.
So before we jump in to talking about the new spec in detail, can you give us a quick TLDR on like what we're seeing in this new spec?
Phil: Oh yeah, absolutely. That's a great idea before we get really into the weeds of it.
So the new spec is oriented around delivering what Apple are calling parts. We'll talk a bit more about those.
You can just think of these as sub-segments really, and those can either be transport stream segments or CMAF Chunks.
And there's a new playlist tag to advertise these little parts, which are between, you know, a third of a second, let's say, and about a second, at the bottom of a manifest.
There's then two new blocking behaviors that are in the specification.
One is a Blocking Playlist Reload as Apple calls it. So this is allowing you to request a manifest update.
And another part of it is delta updates to manifests: these are just little bits at the end of the manifest, so you don't have to load a full manifest anymore. And fundamentally, you can ask for these before they're generated.
So that's Blocking Playlist Updates, the second part of it.
The third piece of it is Blocking Preload Hints which is the ability to advertise these parts before they actually become available.
So again, you can ask for a part, and the server has to block and then respond with the part when it's available.
Those parts have to be delivered over HTTP/2. That's another piece of the puzzle.
And then finally, there's something called Rendition Reports as well, which are designed to allow you to jump faster between different renditions within a piece of media.
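To make those tags concrete, a Low-Latency HLS media playlist using them might look roughly like this (the URIs, durations, and attribute values here are made up for illustration, not taken from a real stream):

```
#EXTM3U
#EXT-X-VERSION:9
#EXT-X-TARGETDURATION:4
#EXT-X-SERVER-CONTROL:CAN-BLOCK-RELOAD=YES,PART-HOLD-BACK=1.0,CAN-SKIP-UNTIL=24.0
#EXT-X-PART-INF:PART-TARGET=0.334
#EXT-X-MEDIA-SEQUENCE:100
#EXTINF:4.000,
segment100.mp4
#EXT-X-PART:DURATION=0.334,URI="segment101.part0.mp4",INDEPENDENT=YES
#EXT-X-PART:DURATION=0.334,URI="segment101.part1.mp4"
#EXT-X-PRELOAD-HINT:TYPE=PART,URI="segment101.part2.mp4"
#EXT-X-RENDITION-REPORT:URI="../720p/playlist.m3u8",LAST-MSN=101,LAST-PART=1
```

The EXT-X-PART entries are the sub-segment "parts", the EXT-X-PRELOAD-HINT advertises the not-yet-available next part, and the EXT-X-RENDITION-REPORT is what lets a player jump quickly to another rendition.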
Matt: Awesome. Okay, then let's get into the weeds a little bit.
Phil: Yeah, of course.
I can't remember when we recorded the last one, and I'm not going to give away real-world dates, but it's obviously been a couple of weeks since we recorded the last one, and the exciting thing that's happened since then is we've had a WWDC, right?
And we don't have one of those every day, but we had one, and, I don't know, I really enjoyed WWDC this year with it going online.
A lot of the content felt a little bit more, I don't know, cleaned up.
I don't know, it was actually really great, I really enjoyed it.
And kind of the big news from the low-latency perspective is we kind of all knew there were a lot of changes ongoing in the specification but there is now kind of a timeline for that being available in Apple's platform.
So Low-Latency HLS will be available to everyone, as described by Roger, in iOS 14, tvOS 14, and watchOS 7, for all y'all.
Yeah, that's for all y'all with low-latency-video-on-your-watch requirements.
Matt: And, it's dead.
Phil: Yes, okay bro. It's really low-latency for about five minutes. And macOS as well, so that's macOS 11, I believe, technically, and I have forgotten the code name.
Matt will know it off the top of his head. Big Sur, I remember it, Big Sur, that's it, macOS Big Sur, which I believe is macOS 11 according to the prerelease builds.
Matt: I thought it was the Catalina and I'm pretty sure that was like three versions ago.
Phil: That's a while ago now. But beyond that, with it being available to everyone, the requirement to use an entitlement to enable Low-Latency HLS is seemingly gone in these releases.
Matt: Could you tell me what that means?
Phil: Yeah, absolutely.
So really what that means is with the prerelease, there was what's called an entitlement, which is kind of these flags you set on your app when you put it in for App Store review that might enable certain features that aren't available to everyone.
And low-latency playback was behind one of these flags because Apple didn't want to let everyone get access to it and I'm not actually sure that any app made it into the App Store with that entitlement, but you know, this new version won't have any flagging or anything, it will just be enabled by default.
Now, what's interesting, as far as I understand it right now, is I don't think this is going to be available in macOS Safari natively.
I think it is going to be behind kind of the app player right now or at least it didn't seem to work in the testing I was doing.
And so I think it is still going to be hidden behind AVPlayer a little bit, beyond it just being enabled.
Now, that may be a bug or something, but certainly, you know, when you're using AVPlayer on iOS or on tvOS or, you know, on iPadOS, it'll be there and available. So it might look a little bit different, but it should all be there and magically working with no magical entitlements needed.
And what I think is super exciting about that is beyond that, the work that's been done really has brought kind of every HLS feature into that release.
So this isn't going to break anything. You're still going to be able to do things like ad insertion. You're still going to be able to do DRM. You're still going to be able to do all the other cool things you do with HLS today, but you're going to be able to do them at low latency, maybe with, in some cases, a couple of changes to how you do it. But it will be fundamentally compatible with all of that HLS feature set, which I think is really exciting for an ecosystem. You know, there are relatively few sacrifices happening there.
Matt: The Safari decision feels big. I mean, I'm going to put my tin-foil hat on for just a second. Love all of you at Apple, I promise.
But like on one side of things, that's a huge incentive to go native versus web. And that would suck.
That would suck really bad if they were just like, you have to use MSE, which is like questionably supported anyway.
Phil: Yeah. I agree with you. I think it's going to be super interesting if they don't choose to do it, it would feel like a weird decision.
One of the weird things was like, even if you put a Low-Latency HLS stream into Safari today in the current macOS, you actually get it trying to make some part requests.
And so something obviously slipped into a macOS release somewhere.
And I don't know if that's been kind of backed out, or what the real plans are, but yeah.
Matt: Okay, so we can just assume it's not ready yet, not that it's not coming.
Phil: I think it's not ready. That's my gut instinct, you know. It certainly is ready under AVPlayer, which is great, so that's probably the most exciting thing anyway.
Matt: Okay, that makes me feel a lot better. But, okay, so when you mentioned the standard HLS feature support, backing up a bit, for the context of kind of the community spec, the Bartos spec--
I need to write a spec just so I can have my name next to it because that always makes me feel really like, "Ah, Bartos, yes, I know that guy."
So, yeah, a huge feature of the Bartos spec was it being backwards compatible. It was just like some extension tags.
And if your player was low-latency aware, then it would use them but if it wasn't, it was fine. It would just carry on as usual.
When you say that this supports the standard HLS feature set, is this also backwards compatible, as in, is hls.js going to be fine with a low-latency stream even, like, right now? You know what I mean?
Phil: Yeah, absolutely. It really should be. It is designed to be.
There is, as many people listening to this will know, you know, there is a big difference between how the HLS spec is interpreted on some devices.
You know, there are certainly devices out there that just, you know, can't deal with unrecognized EXT-X tags, which kind of sucks, but--
You know, at a fundamental level, yes, you know, it doesn't redefine anything that already exists in the specification, which is really important.
It's only additive as a specification, and there is kind of a very strict set of requirements to turn on low-latency mode that will need to be satisfied and--
You know, so long as your players, you know, don't crash on tags they don't recognize, it is totally compatible with existing players.
You can pick up the latest reference streams from Apple and check them in your players already, if you do want to check that your player doesn't have issues with them. But yeah, it should totally be compatible with really any existing player out there if it's well written, for sure.
Matt: Nice. I mean, I'm sure that's going to-- That's going to break something somewhere.
Phil: Oh, 100%. I could name two or three devices that will crash but we'll go slow first.
Matt: I mean, all the Android HLS support was flawless, so nothing will go wrong there.
Phil: Perfect. And of course smart TVs are perfect in their reading the HLS specifications.
Matt: And they stay up to date, which is great.
Phil: Yeah, yeah, right. As you've asked, that's a really interesting point as well.
One of the things that I think many of us weren't necessarily expecting and which Apple have done a great job on here is all of this is now in the RFC version of the HLS specifications as well.
This isn't Apple's specific extension. This is in the HLS RFC.
What Roger has referred to as v2, but which is, I think, actually v7 of the HLS RFC, contains all of the low-latency protocol, including some interoperability stuff and some reference implementation stuff.
So, you know, this isn't something that's hopefully only going to exist on Apple devices.
This really is something that Apple is saying, "Hey, here's the HLS RFC, this is the v2 of it, it includes a low-latency interoperability mode and hey, go at it, device manufacturers, stick it in your smart TV, put it in your set top box or whatever."
So, that I think is a big positive in this. You know, it's not going to be an Apple specific thing anymore.
It's hopefully going to be industry-wide, available for adoption, or at least much more easily adoptable.
Matt: Nice. Okay, cool. So let's talk about the protocol a little bit underlying this thing.
So there was-- And, I guess before we dig too far into there, I would love to hear a little bit about what happened with some of the more controversial pieces of the initial announcement of this.
Because there was, a lot of the crying and gnashing of teeth was around like no CDNs really support this.
This is like technically really difficult to implement. Like where have we come from that initial announcement in your mind?
Phil: Yeah, great question. Fantastic. It's super interesting.
Well, are we a long way from the original conversations and the original specification?
I actually don't think we're that far at all.
You know, really, what the challenges boiled down to in that initial specification, and the conversations that came off it, was that the initial specification had a requirement to push segments (well, push parts, to be more accurate) over HTTP/2. The fundamental mechanisms and the fundamental thought process behind this specification haven't really changed, in my mind.
You know, when you think about how HLS works at a fundamental level, there is this polling interval on the manifest, right?
You have to poll the manifest to see that there's a new segment available to then ask that segment.
And really what the idea was all the time behind this version of specification was to try and reduce that polling, to try and minimize the amount of polling and the waiting and the interval that really happens.
And this is phrased, in the case of Roger's presentations, as reducing segment delay.
And I think in my mind originally, this is about reducing number of round trips.
But the way that it's phrased now is kind of reducing segment delay, which is reduce the time spent waiting for a segment to start coming down.
And that includes both kind of that poll cycle of polling manifests and then having to make an upfront request for the segment as well when you know it's become available.
So, originally that was fundamentally addressed by HTTP/2 Push in combination with the manifest requests.
So, starting with something that is still in the new version of the specification: you can now ask for a manifest in advance of it being available, or in this case, specifically, a delta manifest. Delta manifests are a new concept in HLS.
A delta manifest allows you to not have to get the full manifest when you want a manifest update. You can just ask for kind of a little snapshot of the latest piece of that manifest. This is great because manifest bloat is a very real thing in HLS, and it's one of the things that DASH generally doesn't suffer from. So, you know, you're not transferring around these huge lists of segments anymore, even if zipped; you're only transferring around these little deltas for the head of your manifest, as it were.
Matt: Excuse my ignorance. But is this akin to like HTTP byte range requests? Like how does that delta--
Phil: Yeah, you could absolutely look at it that way. It doesn't work as a range request, though.
It kind of could, interestingly. Range requests are super important in this spec, but we'll come onto that, for sure.
Really, you just say-- So if you think about how HLS segment numbers work, right, you have this just-incrementing segment number, and you can say, "Hey, only give me the manifest after segment 100," for example.
And it might be that there's one or two segments after 100, or whatever.
So really, you're just saying in your manifest request: throw away the first, you know, 100 segments of this manifest, or 1,000, or 10,000.
And when you're working with really small segments that can add up to quite a lot of bloat that's being reduced.
So really, it's this new thing called the Origin API, as it's being referred to in the specification, which basically defines a bunch of things that you can set as parameters when you ask for a manifest, including, yes, get up to a certain part of the manifest.
And that's a super cool bit of technology that's going to make a big difference.
It doesn't stop you having to ask for a full manifest when you first start playing back content, so there's still going to be a little bit of bloat when you first start playback. But for, like, ongoing subscribers, it'll be quite a significant improvement on the chattiness of the protocol, which is great.
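As a concrete sketch of that, the spec's delivery directives are plain query parameters on the playlist URL; appending `_HLS_skip=YES` is how a client asks for a delta update (the hostname below is hypothetical):

```python
# Build a delta-update request by appending the _HLS_skip delivery
# directive from Apple's Low-Latency HLS spec to a playlist URL.
from urllib.parse import urlencode, urlsplit, urlunsplit

def delta_playlist_url(playlist_url: str) -> str:
    """Append _HLS_skip=YES so the server may omit older segments,
    replacing them with an EXT-X-SKIP tag in its response."""
    scheme, netloc, path, query, frag = urlsplit(playlist_url)
    query = query + ("&" if query else "") + urlencode({"_HLS_skip": "YES"})
    return urlunsplit((scheme, netloc, path, query, frag))

print(delta_playlist_url("https://example.com/live/chunklist.m3u8"))
# -> https://example.com/live/chunklist.m3u8?_HLS_skip=YES
```

The server's delta response then carries an EXT-X-SKIP tag standing in for the older segments it left out.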
Matt: Right. So interestingly enough, it almost feels like glorified pagination.
Phil: Yeah, completely agree.
So originally, the next kind of step on that was that those requests, those partial manifests, were going to be able to block until there was a change.
So you'd be able to ask for a delta manifest update, for kind of these future upcoming parts, and I'll talk more about parts in a second, but you're going to be able to ask for these upcoming manifest updates before they happen, which is cool.
And this is something that survives in the spec today. This, in the specification, is called Blocking Playlist Reload: you ask for an update to the playlist before that update has happened, and then when it happens, you know, that request gets unblocked, and suddenly you've got that information.
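Server-side, that blocking behavior boils down to parking a request until the playlist advances. A minimal, illustrative sketch (all names here are invented, not from the spec, with the real `_HLS_msn` directive reduced to a plain argument):

```python
# Toy model of Blocking Playlist Reload: a reload request for media
# sequence number N blocks until the packager has published N.
import threading

class PlaylistState:
    def __init__(self):
        self.latest_msn = 0        # highest published media sequence number
        self.cond = threading.Condition()

    def publish(self, msn: int):
        """Packager side: segment `msn` just became available."""
        with self.cond:
            self.latest_msn = msn
            self.cond.notify_all()  # release any parked reload requests

    def blocking_reload(self, want_msn: int, timeout: float = 5.0) -> int:
        """Request-handler side: block until `want_msn` exists, then answer."""
        with self.cond:
            self.cond.wait_for(lambda: self.latest_msn >= want_msn, timeout)
            return self.latest_msn

state = PlaylistState()
threading.Timer(0.05, state.publish, args=(101,)).start()  # packager finishes MSN 101 shortly
got = state.blocking_reload(101)   # parks here until publish() fires
print(got)  # -> 101
```

In a real origin the parked request would answer with the updated playlist body rather than just the sequence number, but the hold-then-release shape is the same.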
Now, originally when that delta update was released from a blocking request, originally there was going to be a HTTP/2 Push chained off the back of that response.
So the idea would be that, "Hey, give me the next manifest updates, wait, okay, the next manifest update's here, by the way, I also happen to know that you're going to need this part of video as well so I'm going to push that to you at the same time."
And that's where the HTTP/2 Push piece came into this.
And that was fundamentally the piece that challenged most of the major CDNs right now and beyond that, it also created a major problem.
So, taking a little bit of step back. So HTTP/2, as a general concept, is pretty well supported, right?
Matt: Yeah. Overall.
Phil: Yep, it's pretty much something you can get on you know, all the major CDNs, it's a fundamental piece of functionality.
Matt: Most programming languages' HTTP implementations support it by default now.
Phil: Yep. So HTTP/2, you know, now a widely supported technology.
Unfortunately, HTTP/2 Push is a less widely supported piece of that specification. You know, of the big CDNs, there was, I think, only one that, at the time it was announced, could do an HTTP/2 Push, but specifically an HTTP/2 Push programmatically.
This was something that was kind of available if, say, all you're doing is serving an HTML page and you want to push the CSS along with it; that was something that was in there for a couple of major CDNs. But being able to do this programmatically was not something that was available.
And specifically, there was a header that had been defined, an HTTP Link header, which was being used to link together a couple of objects. So if you were responding with an HTML page, for example, and you wanted to bundle in that CSS, but you needed to do it in a kind of dynamic way, this Link header was being used to give the CDN some knowledge of what other thing to go and get from the origin, to then package up for this HTTP/2 response that then included a push as well.
And what had become the problem there, and the limitation, or one of the large limitations, with how this was being approached was that, while it worked, it required an extra round trip to the origin.
So in the case of these delta manifest updates: after the delta manifest was released back to the CDN, the CDN would then be reading the Link header and saying, "Oh shoot, now I have to go off again to the origin and get the partial segment that I need to package up and send as an HTTP/2 Push."
So that was a pretty big sticking point for CDNs, really eliminating that second round trip would have been a big deal, but that just doesn't exist right now as a technology, unfortunately.
There is actually a posting from Apple on some of the IETF mailing lists saying, "Hey, how could we do a chained HTTP/2 Push?", which was a--
Well there kind of isn't a way to do it right now, unfortunately.
So eliminating that kind of second origin round trip was certainly a big deal for CDNs. And, you know, getting a programmatic HTTP/2 Push into some of the CDNs was going to be a long lead time. And at a fundamental level, at least in my mind, for this technology to gather major adoption, it had to work across all the major CDNs pretty quickly. And that means, you know, that all of the major CDN players had to have a way to get this technology in on a realistic timeline.
Because if you think about the use cases for Low-Latency HLS, you know, a lot of use cases, I think in people's minds, well, let's do my next big sporting event using Low-Latency HLS.
And that means having a really good multi CDN strategy, right?
So if you don't have that, then you're in a bit of a pickle.
There was one other little complication with the HTTP/2 Push approach, which was that it did require the manifests and the segments to be served from the same edge hostname. Unfortunately, many people in the video industry have spent a very long time decoupling their manifest servers from their kind of media delivery stacks.
Speaking from experience, that kind of eliminated all that hard work so that, you know, again, limited the ability to use a true multi CDN stack.
So, you know, one of the pieces of backing out the HTTP/2 Push was to deal with that as well.
Matt: Interesting. So I will say one thing I've been surprised by in this whole process was like, I kind of thought that this was coming faster than it did, you know, this is looking at the past with some rosy glasses for sure.
It felt like the community spec, the Bartos spec was close, like there were people working on implementations, we were working on an implementation, felt like there's momentum.
Phil: I agree.
Matt: And then like this spec was released, everything else halted.
It came to a screeching halt, because there's just, like, why work on something that's just going to die in a few months, theoretically? And then I kind of thought that we would see a few big names rolling it out almost immediately.
And that just hasn't happened as far as I know.
Phil: Yeah, no, I agree with you. I think there's a few factors to that in my mind.
I think, really, the feedback of, "Hey, this HTTP/2 Push stuff's going to be really hard," put a big stop on it.
And, you know, a lot of people in the industry did obviously start working on it.
And there were some products that announced compatibility that, you know, were based on the initial version of the specification as well. But obviously, without kind of any really good place to play it, or like a comprehensive, conformant environment to play it in, you know, there wasn't much noise about that, really.
And I agree, you know, it felt like there was a lot of momentum behind getting an ultra-low-latency solution in place, or, you know, even a consistent, reliable five-second latency in place, right?
Which, I think, you know, this put an epic halt on. I think there was a component of once bitten, twice shy about this.
You know, back in October, there were two meetings with Apple, which involved, you know, large portions of the video streaming industry.
One after Mile High Video in Denver and one after Demuxed in Cupertino.
And I think, you know, in late last year it became pretty clear that there were going to be changes, namely to remove the HTTP/2 Push piece of the puzzle. So I think people really understood that, you know, without changes to the specification, there wasn't going to be mass adoption.
I think now we've got a timeline of, you know, being in iOS, tvOS, watchOS, and macOS and that sort of thing, we're going to see a lot of momentum behind it this time, for sure.
And I'm excited about that. I'm excited to see everyone's implementations and really excited to see how quickly it goes.
Matt: Yeah, absolutely. I mean, like this is something as an industry we desperately need, and honestly, I'm kind of shocked that we haven't seen DASH eat up more market share.
And market share is not the right phrase, but you know what I mean? I'm surprised like DASH just hasn't kind of left HLS in the dust because of this.
And that's purely because of iOS' power but that's kind of surprising to me.
Phil: I completely agree with you, you know, this was arguably one of the biggest opportunities for DASH to gain market share.
But I completely agree, you know, there's, if you can't do it on iOS, you know, for many organizations that can be 60 plus percent of viewership, depending on, you know, your target market.
So, you know, if you can't hit your primary market then why bother, right?
And as well, I think beyond that, there is a reluctance in many, many organizations to give that split experience, right?
That, "Hey, if you're watching on an iPhone, sorry, you get, you know, 30 seconds latency, but hey, good, you know, get an Android phone or watch on your desktop and you'll get, you know, the ultra low-latency five seconds."
I think there is a big reluctance in traditional media outlets, particularly to give that split user experience for sure.
Matt: Oh, it looks like you just made a shitty app on one platform, is what that ends up looking like.
Phil: On the premium platform as well. That's like, "Oh, you've got a $1,200 iPhone 11 Pro, yeah, sorry, that's going to be 30 seconds latency." Yeah.
Matt: Yeah, so speaking of DASH, you've mentioned parts a few times, are we still with TS here? Like how does that look in this new spec?
Phil: Yeah, absolutely. And this is something that hasn't changed from the first version of the specification.
So, one of the core pieces of this new specification is what Apple are calling parts, and they have a little bit of a flexible definition of that.
But I would say, think of this as something smaller than a segment that doesn't necessarily have to contain a complete GOP, is how I would define it.
And Apple consider this to be either a small chunk of TS (you know, it still has to be a valid TS segment, it's just shorter, and doesn't necessarily have to have an I-frame in it), or what Apple define as a CMAF Chunk.
And that's super exciting because, as you mentioned, DASH low-latency also uses CMAF Chunks.
Now, a CMAF Chunk isn't anything super special. You know, it's a moof and an mdat box, which contains, yep, some video data, and, you know, maybe has an I-frame, maybe doesn't. And the idea is that these parts are between--
Apple have a pretty flexible definition of how long a part can be.
I would say roughly speaking, parts are going to be between 300 milliseconds or 333 milliseconds probably, and probably a second at the higher end.
And they, yeah, are just kind of sub segments or sub fragments of your more traditional segment as you think of it in a HLS playlist.
So the exciting thing is those are fundamentally interchangeable with DASH low-latency as a chunk of media, which is great.
So I think this is going to be a big driver for, you know, people who have already obviously started merging their DASH and HLS delivery chains as HLS introduced, you know CMAF delivery a while ago now so that's available to most Apple devices these days, but this is going to be another encouragement to kind of unify that delivery stack for sure.
Matt: Nice. Okay, with all of these other things that we're talking about in the HLS spec, and DASH having their own low-latency thing, does this start to get us to a place where, you mentioned earlier, people had started going down this path of breaking apart manifest generation from media?
And that was actually looking pretty good for a little while there, because we'd already gotten to fragmented MP4 support in new versions of iOS, which meant that DASH and HLS could support the same underlying segments, and so you could just generate two different manifests.
Does this new spec, like the CMAF support seems like it helps there, but everything else I'm hearing about this feels like you would really be kind of building two completely disparate systems if you wanted to support low-latency DASH and this low-latency version of HLS, am I reading that wrong?
Phil: So that was certainly true of the first version of the specification. With this version, no.
You should be able to use one delivery chain for your media to deliver both your CMAF Chunks as parts for Apple's Low-Latency HLS and DASH LL kind of CMAF Chunks as well, they should be completely interchangeable.
They are basically delivered in kind of the exact same way, because of the dropping of the requirement for HTTP/2 Push.
You know, Apple's requirement now is just that you can get a CMAF Chunk and you can, there's some really interesting intricacy, we'll go into it in a minute around how you can get these CMAF Chunks, but they are the kind of--
The delivery technique, while it's described a little bit differently in the two specifications-- Apple had been fairly reluctant to call what they're doing Chunked Transfer Encoding, but it really is Chunked Transfer Encoding by any other name at this point.
I think they do actually call it Chunk Transferring in the latest version, but, you know, the caveat there is that Chunked Transfer Encoding doesn't really exist in HTTP/2.
It's just, you know, something without a Content-Length header. So, you know, it's a little more complicated than that from, you know, like, a definition perspective.
So, well, that's the beauty of the specifications and--
From a practical standpoint, it is interchangeable.
So we've talked a little bit about these chunks, these are advertised.
If you think about the dumbest implementation you could do for Low-Latency HLS, what happens is you load up your playlist, you then download, you know, the last segments, and then you look at the part definitions. So these are new definitions in the playlist called EXT-X-PART, and they just pretty much have a URL to a part in them.
And then kind of the dumbest implementation is the player fetches those parts and off it goes and it's just really small parts.
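That naive flow can be sketched as a parsing step; the playlist text below is invented, and a real parser would handle attribute ordering and quoting more carefully:

```python
# Pull the EXT-X-PART URIs out of a playlist: the "dumbest implementation"
# just fetches each of these small parts in order.
import re

PLAYLIST = """#EXTM3U
#EXT-X-PART:DURATION=0.334,URI="seg5.part0.mp4",INDEPENDENT=YES
#EXT-X-PART:DURATION=0.334,URI="seg5.part1.mp4"
"""

def part_uris(playlist: str) -> list[str]:
    """Return the URI attribute of every EXT-X-PART tag, in playlist order."""
    return re.findall(r'#EXT-X-PART:.*?URI="([^"]+)"', playlist)

print(part_uris(PLAYLIST))  # -> ['seg5.part0.mp4', 'seg5.part1.mp4']
```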
Now, that'll fall apart really quickly, which is why Apple have the Blocking Preload Hints for media, where the manifest can contain what's called an EXT-X-PRELOAD-HINT. What this is, effectively, is a notification of an upcoming part.
So this is saying to the player, "Hey, you can request this part."
And this is just the exact same technology they're using for the Blocking Playlist Reload: you can request this part in advance of it becoming available, just like you can request a playlist in advance of it becoming available.
So if you know that there's an upcoming little chunk of media, you can ask for it before the encoder has even generated it and the CDN and the packager are going to have to hold on to that request until it's available. So this is kind of reducing all of that segment delay concept that we talked about earlier on.
So, that's kind of the first step of implementing this: you know, hey, ask for these parts and ask for these playlist deltas upfront, before they're available.
And, you know, that reduces the segment delay in getting these parts down to you, which is great.
That alone doesn't really make you interchangeable with DASH Low-Latency.
DASH Low-Latency uses kind of traditional chunked transfer encoding, delivering CMAF Chunks.
So, you know, the encoder and packager generate it as you go: you ask for segment four, and the packager goes, "Cool, here's a bit of segment four."
Then the player can decode that and play it. "Here's another bit of segment four."
Okay, decode and play. "Here's another bit of segment four," et cetera.
So if you imagine, like, a two-second segment, that could be split up into four 500-millisecond CMAF Chunks. And, you know, in Apple's world, yep, you could do the same, and you could actually just add those as parts onto the bottom of your manifest, and that would work just fine.
You could even address them as byte ranges if you wanted, but the elegance of what Apple have actually designed here goes a little bit beyond that.
So, you can define a part as a byte range, but you can also define a part as a byte range of an upcoming segment that doesn't yet have a length.
So if you imagine, you've generated segment three, you're about to start generating segment four. You start delivering segment four.
You can make a range request for bytes N onwards in that segment.
So, the way that Apple want this to work is that, just like chunked transfer encoding, you would send that CMAF Chunk as soon as you had it available.
The idea is, provided you can make an upfront range request for kind of that whole segment, you can just deal with it as if it was chunked-transfer-encoded, sending these 500-millisecond CMAF Chunks down.
Now, the idea is you would still be updating your playlist while you were doing this request.
You'd still be saying, and you know, there is, and we now know how long that CMAF part was, that chunk was. So I now know the length.
If anyone else was joining the stream, it can catch up by using exact byte ranges rather than having to make this open-ended request.
It can catch up using exact ranges for those little pieces. And there are other reasons you might want to request these parts individually, knowing their ranges, outside of just the startup behavior, mainly if you need to jump between playlists more efficiently.
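As an illustration, the tail of a media playlist using byte-range-addressed parts might look something like this. The tag names follow Apple's published spec, but the URIs, durations, and byte offsets are invented; note how the completed parts of segment four have exact ranges, while the preload hint for the next part is open-ended.

```
#EXT-X-PART-INF:PART-TARGET=0.5
#EXTINF:2.0,
segment3.mp4
#EXT-X-PART:DURATION=0.5,URI="segment4.mp4",BYTERANGE="180000@0"
#EXT-X-PART:DURATION=0.5,URI="segment4.mp4",BYTERANGE="175000@180000"
#EXT-X-PRELOAD-HINT:TYPE=PART,URI="segment4.mp4",BYTERANGE-START=355000
```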
Matt: So this byte range addressability of the parts themselves, would that allow you to like, so, okay, in a traditional HLS manifest, a really common thing to do was like stitch together manifests, toss an EXT-X-DISCONTINUITY tag in between them and then bam, you've got like concatenated playlists.
So you'd be able to do this all the time for pre-rolls and ad insertion and all that sort of stuff.
Where it didn't work very well at all is if you didn't have a perfect clip between the two.
So would this allow you to do, I hesitate to say like frame accurate, but could you reasonably use this byte range request to like, say, start mid-chunk, or mid-part, and stitch these manifests together?
Is that the kind of thing this would allow or is that kind of out of the scope of what we're talking about here?
Phil: Ah, man, I wish I could answer that with 100% confidence. I think you can put a discontinuity in the middle of a part, but I'm not 100% sure.
Matt: Got it. So right now, this is more from the perspective of like jumping into a playlist, not necessarily like within a sub-manifest.
Phil: Yeah, that's correct.
Matt: Got it.
Phil: So, this is super cool. You know, if you imagine treating two interchangeable head ends the same way, one for DASH LL and one for LLHLS, we're kind of now there, because as long as you're using the byte range addressable format within LLHLS, that is fundamentally compatible with DASH LL, which is super exciting.
It should just work. Now, there's a caveat. There always is. There is a little bit of work remaining to be done on the CDN end of this.
Some CDNs don't currently support responding with a kind of sub-piece of something that you're byte-range addressing.
Some CDNs will consider that open-ended byte range request to be completed on, like, the first response, and it's finished. Like, it's complicated, but yeah, there is a little bit of work to do on some CDNs for sure, to get to a place where that will work completely compatibly with everything else.
So it's a little more complicated because, while this is working today on all major CDNs for DASH LL, doing it with an open-ended range request is not how DASH LL does it today.
It's just doing an HTTP GET for that upcoming segment, rather than what we'll be doing in LLHLS, which is a range request for that upcoming segment.
So it is subtly different and that does require a little bit of work for sure.
Matt: Got it. So at one point you mentioned being able to jump between renditions.
That's always kind of been one of those things where it works in HLS today, that's one of the big benefits, but it can be a little jarring or slow, like, is that related?
Like, what do you mean when you say jumping between renditions? How does that change from the current implementations?
Phil: Yeah, it changes a little bit.
And one of the challenges with doing this and as we discussed in the first episode, one of the challenges with all low-latency technologies is you're always going to have this trade off, right?
You trade off stability and, you know, buffer length and scalability for each chunk of latency.
So, being able to react to changing network conditions in a low-latency mode is really, really important, right?
You're so much more impacted by minor network fluctuations than you otherwise would be if you've got a significant buffer built up.
So what Apple wants to do is have a way to jump between renditions really quickly.
And to enable that, they've added something called Rendition Reports. This is something that was actually originally optional in the spec but has now become mandatory, and effectively all this does is allow you, in a response to a playlist or a delta playlist update, to give the client a peek into another playlist.
And in fact, to give a client a peek into all of the playlists now, as it has become mandatory.
So you just basically say, "Hey, I'm on part 42 of the following segments."
And, you know, so that if you needed to make a delta manifest request for a different rendition, you would be able to do that without necessarily needing to go and fetch that whole manifest, parse it, or work out where you are.
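What that peek looks like in the playlist is a rendition report entry per other rendition. The tag and attribute names below follow Apple's published spec; the URIs and sequence numbers are invented.

```
#EXT-X-RENDITION-REPORT:URI="../720p/playlist.m3u8",LAST-MSN=41,LAST-PART=2
#EXT-X-RENDITION-REPORT:URI="../480p/playlist.m3u8",LAST-MSN=41,LAST-PART=2
```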
So it reduces round trips when you just want to jump as quickly as possible into a different rendition, which should make a big difference. You know, something we haven't talked about extensively, that I think is exciting, is, you know, obviously bandwidth estimation is hard for low-latency.
You know, you're so much more sensitive to jitters and that sort of thing.
And that's one of the big challenges the HTTP/2 Push approach had as well.
HTTP/2 Push was not going to be easy to measure in the browser whereas this approach should be a good chunk easier to measure performance wise in a browser.
So, hopefully, you know, that then means that rendition reports are much more useful, obviously, because you are able to understand quicker when you have a bandwidth-constrained network. That's important for sure.
Matt: That's awesome. Thanks for the high level overview. Well, that wasn't even that high level, but thank you for the overview.
Okay, probably should give a quick overview of what's going on with Demuxed right now.
We've kind of mentioned this a few times. You've probably heard rumors or seen stuff around but we are going to be an online native event this year.
It's official by the time this podcast is released. The website should be up and the emails should have gone out.
And the call for papers should be open. So it's open for all of July: 2020.demuxed.com/submit.
The deal is same dates as we were planning in person, October 7th and 8th, with a bonus 6th.
And part of the reasoning there is, particularly since Demuxed Europe was the first casualty of COVID, this allows us to reasonably let a lot of Europe join too if we keep most of the talks in the morning.
So that's what we're going to do. We're going to do three days, keep the content in the morning, keep the afternoons free.
That way Europe can still join. Most of US can still join comfortably.
And that way we can try to be at least a little bit more internationally friendly.
But yeah, so we'll take talks, we'll help you record them, it's going to be great. We're really excited. I mean, there's like--
Phil and I have been talking about this a lot and, to be terribly honest, I've been procrastinating making the announcement because it sucks.
Like, I really enjoy seeing a lot of people in person in a venue that's not like a giant soulless trade floor.
There's something really exciting about like seeing a lot of friends in the industry that way.
Phil: Oh, yeah.
Matt: And it's a bummer that like, that's not going to happen this year.
So, just want to be upfront that that part does kind of suck a little bit.
That being said, we're actually really, really excited about the possibilities here.
Like we've got a lot of great ideas about how we want to handle chat interaction and what the speaker experience looks like and we've been going to a lot of online events.
And speaking at them, I think we've at least got a good framework for how a well run online event looks and how we can make it a good experience for everybody involved or at least as good as can be expected given the circumstances of everybody needing to be home.
Phil: Yeah, I think we're lucky from that perspective, right?
You know, we're in October, the world has had a little bit of time to adjust to this stuff and you know, we're not going to be one of the first, you know, rushing to do this so I think that it gives us a great opportunity.
Matt: Totally. 100%. And the other piece of good news here is that I will not get a single complaint about chairs this year.
Phil: Man, my sofa is so uncomfortable, Matt, please get me a sofa.
Matt: Yeah, if anybody's complaining about their chair this year, that is your own damn fault. You should probably make an investment there.
Phil: This Aeron is just not acceptable.
Matt: But yeah, we'll figure out some other fun, like there's some other fun stuff that we've got in the pipeline around swag and-
Phil: Oh, yeah. So excited.
Matt: All sorts of stuff like that. And making sure to give to good causes, that's a piece of this too.
So we wrote a whole thing online, you should go read it, about the reasoning and logistics, all that sort of stuff.
If you have thoughts, questions, suggestions, feel free to reach out.
And we're also doing a matching right now.
So part of the ticket sales, that's all going to go to social awareness causes.
But if you want to give right now outside of your ticket, we're matching up to $5,000 of donations.
We'll be splitting our matches between the ACLU and dev/color, which, if you're not familiar, you should look it up.
It's really great, they're in our YC batch, we're friendly, but we will also accept donations to other organizations, bail funds.
If there are organizations that you know that are working on diversity, inclusion, and social justice, feel free to donate to those and just send us the receipt and we'll match to the ACLU or dev/color as part of that.
Phil: And especially if you're international like me, please do feel free to donate to a local charity as well, for sure.
Matt: Absolutely. Great, well, thank you all so much for taking the time to listen and we look forward to recording more of these.
And as always let us know what you think and hopefully we'll see you, at least online, in October.
Phil: Yeah, and tell us what subjects you want us to cover as well. We'd love to get more feedback on that as well.
Matt: Yeah, thanks everyone. We'll see you next time, bye.