QUESTION: Comparison with SRT #12

Open
ut0mt8 opened this issue Sep 18, 2018 · 4 comments

Comments


ut0mt8 commented Sep 18, 2018

Just back from IBC, I noticed that the other emerging standard for ingest seems to be SRT.
How does it compare with your proposal?

Best,


ghost commented Sep 18, 2018

Good question.

It's my understanding that SRT's design is for transporting a single stream at a high bitrate. It's a bi-directional, UDP-based protocol and can use ARQ to deal with missing packets. In my view it makes most sense to use SRT in OB scenarios, before the OTT distribution transcode. Comparable protocols and products include VSF's RIST protocol and Zixi, among others.

This standard's focus is on transporting a final distribution transcode, which may contain up to tens of representations of the same video at varying bitrates for OTT delivery. That output may then be transformed by a packager or CDN (which the spec generically calls a "media processing entity") before being served to the audience as an adaptive format such as HLS or DASH. Unlike SRT, we use HTTP, which gives us retransmission. I don't believe there is an existing comparable standard to this work; existing implementations in this space tend to extend existing distribution protocols and formats with proprietary implementations.
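The retransmission point above can be sketched in a few lines. This is a minimal, hypothetical retry loop, not taken from the spec: `send_fn` stands in for a single delivery attempt (e.g. one HTTP POST of the segment bytes) and is an assumption of this sketch.

```python
import time

def send_with_retry(send_fn, segment, max_retries=3, base_delay=0.5):
    """Attempt to deliver one fMP4 segment, retrying on failure.

    send_fn is any callable performing a single delivery attempt
    (e.g. an HTTP POST of the segment bytes); it should return True
    on success and False (or raise IOError) on failure.
    """
    for attempt in range(max_retries + 1):
        try:
            if send_fn(segment):
                return True
        except IOError:
            pass  # treat transport errors like a failed attempt
        if attempt < max_retries:
            # back off exponentially before the next attempt
            time.sleep(base_delay * (2 ** attempt))
    return False
```

Because a segment is an atomic unit, retrying it is safe: the receiver either has the whole segment or it doesn't, which is what makes HTTP-level retransmission straightforward here.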

@RufaelDev
Contributor

Dear Rafael, thanks to tigertoes for the answer, which I agree with. Indeed, this protocol targets HTTP workflows where each transcoded format is typically ingested (although single-bitrate ingest is also possible in some cases). There is currently no good protocol for ingesting such content to web servers and media clouds (media processing entities that provide added value), which typically use HTTP-based transmission. Using HLS/DASH for this is not ideal or well defined, and it is hard to achieve redundant workflows with them, which is one of the things fmp4 ingest targets.

HTTP workflows have been shown to scale, and HTTP transmission has many advantages: many existing technologies can be re-used, such as HTTP middleboxes and CDNs, which do not work well with SRT. In addition, fmp4 provides clear segment boundaries, making re-transmission and redundancy easy to support and troubleshooting much easier. Overall, this protocol targets highly redundant live OTT workflows, which is quite different from SRT, which targets low-latency ingest at a single bitrate. The fmp4 ingest format combined with HTTP allows re-transmission and caching of data transparently in CDNs and clouds, without firewall problems.

Having seen some SRT demos at IBC: while it gives low latency, it can produce blocking and colouring artefacts, which is not really acceptable, and caching such content in large-scale SRT-based workflows is difficult; low frame rates may also result, as was seen in some of the SRT demos at IBC as well. Overall, I think there are use cases for both, with fmp4 mainly targeting highly robust and redundant OTT workflows serving millions of users, while SRT will probably be used for low-latency, smaller-scale use cases such as user-generated content.
To summarize, fmp4 ingest is a format for highly robust, scalable OTT, which is not found in any other standard or specification that is not proprietary; therefore we believe this fmp4 ingest specification will be useful in this space. In addition, the Common Media Application Format (CMAF) is targeted for use in the major streaming formats, making this ingest closely aligned with them and avoiding a lot of media processing overhead from conversions when using fmp4 ingest. The spec already has quite some industry support, most notably from Microsoft and their Azure cloud, and many encoder vendors have implemented it as well, such as Amazon Elemental, Harmonic, and MediaExcel. I think SRT and this spec both still need some work, but overall they should be complementary, chosen based on the requirements of the workflow. I hope this sufficiently answers your question. Other protocols like RIST and Zixi also typically do not target content with all transcoded bitrates for OTT workflows: RIST targets contribution from camera to production studio, while MPEG Media Transport (MMT) is more similar to SRT in terms of features.
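The "clear segment boundaries" point above comes from the ISO BMFF box layout: in fmp4 ingest each media segment is a self-contained `moof`+`mdat` pair, so a receiver can locate retransmission units by scanning top-level boxes. A minimal sketch, assuming 32-bit box sizes only (no 64-bit `largesize` or size-zero cases); `iter_boxes` and `segment_boundaries` are illustrative names, not from the spec:

```python
import struct

def iter_boxes(data):
    """Yield (box_type, offset, size) for each top-level ISO BMFF box.

    Sketch only: handles the common 32-bit size field; real streams
    may also use 64-bit 'largesize' or size 0 (box extends to EOF).
    """
    pos = 0
    while pos + 8 <= len(data):
        size, = struct.unpack(">I", data[pos:pos + 4])
        box_type = data[pos + 4:pos + 8].decode("ascii")
        if size < 8:
            break  # unsupported or invalid size field
        yield box_type, pos, size
        pos += size

def segment_boundaries(data):
    """Return byte offsets where each moof begins: a natural
    retransmission unit, since each moof+mdat pair is self-contained."""
    return [off for box_type, off, _ in iter_boxes(data)
            if box_type == "moof"]
```

This is why retransmission and troubleshooting are simpler than with a continuous UDP stream: a dropped or corrupted segment can be identified and re-posted whole, without any state shared with neighbouring segments.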

@ut0mt8
Author

ut0mt8 commented Sep 18, 2018

Thank you guys for the clarification.
Reading the spec, I now understand that this is the missing standard between encoders and transcoders in live OTT workflows. For now we are using a mix of RTMP and MSS, but this is far from perfect. (I know you at USP have been against RTMP from the beginning.)

I hope this standard will be widely adopted.
It's a good start that encoder vendors have already implemented it. (I will check on my Elemental server.)
What would also help now is an open-source lib/implementation, like the SRT guys have :)

@RufaelDev
Contributor

Thanks for your encouragement. We hope to advance this work until the end of the year and promote it; some implementations already exist in the industry.
