
Feature: Multiple streams/channels #9

Open · cmaddick opened this issue Jan 4, 2021 · 24 comments
Labels: enhancement (New feature or request), Low Priority (These are the least important)

Comments

cmaddick (Contributor) commented Jan 4, 2021

Title is really self-descriptive. It would be really cool to have the ability to make multiple feeds available.

Fantastic work so far @GRVYDEV, I've been looking for something like this for a while!

GRVYDEV (Owner) commented Jan 4, 2021

While this would be cool, I am not exactly sure that this is where I want this project to go. I will keep it in mind though! Best case scenario, this won't be added for a while anyway.

krakow10 commented Jan 6, 2021

Does this mean multiple video and audio tracks for a single streamer, or is it meant as a suggestion to allow more than one streamer to use the same server with different streams? As for the former, I imagine a livestream with two video tracks where the viewer can change the location and size of the face camera or disable it entirely, and independently change the volume of two or more audio tracks such as game audio, voice, and music.

GRVYDEV (Owner) commented Jan 6, 2021

> Does this mean multiple video and audio tracks for a single streamer, or is it meant as a suggestion to allow more than one streamer to use the same server with different streams? […]

This would be awesome... but a nightmare, as it would involve making OBS send multiple streams to the server, and that would take a lot of modification to OBS.

woodrowbarlow commented Jan 6, 2021

I would like to +1 this feature request and volunteer some time to work on this. In early 2020 I went about setting up my own streaming server and found the landscape to be pretty empty. Your project goes end-to-end, even including a frontend, and is more straightforward and self-contained than anything else out there. This project has a lot of potential, great job!

I've been thinking about what it would take to turn the backend (ingest + webrtc) into a multi-tenant server (can receive multiple streams from multiple origins and send each to a unique webrtc endpoint). It would definitely involve collaboration between ingest and webrtc, in the form of sharing config and communicating events to each other.

In other words, I'm imagining a server that could be configured something like this, where you explicitly define each endpoint:

[server]
# the websocket port
webrtc-port = 8080
# need to have a "pool" of RTP ports that can be handed out in FTL handshakes
# the backend isn't listening to any of these at first, but when a handshake is finalizing,
# the backend chooses one available port from this pool, marks it in-use, and starts
# listening on that port. when a stream disconnects, need to put its port back into
# the pool.
rtp-port-range-start = 65515
rtp-port-range-end = 65535

[streams]

[streams.default]
# "default" is just the stream name used for logging
# path is used to build the websocket URL. so to view this stream, the frontend will
# open a websocket to http(s)://example.com:8080/websocket
path = "websocket"
# this is the channel id used to derive the stream key. the "server" config section would
# probably need a "hmac" config value, pair those together to generate the stream key.
channel = 123456789

[streams.example]
path = "test"
channel = 987654321
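
To make the port-pool comments above concrete, here is a rough sketch of what I have in mind, in Go since that's what the webrtc side already uses (all names are hypothetical, not existing project API):

import (
	"errors"
	"sync"
)

// portPool hands out RTP ports from the configured range during FTL
// handshakes and takes them back when a stream disconnects.
type portPool struct {
	mu    sync.Mutex   // guards inUse across concurrent handshakes
	inUse map[int]bool // ports currently assigned to a stream
	start int          // first port of the configured range
	end   int          // last port of the configured range (inclusive)
}

func newPortPool(start, end int) *portPool {
	return &portPool{inUse: make(map[int]bool), start: start, end: end}
}

// acquire marks the first free port in the range as in-use and returns it.
func (p *portPool) acquire() (int, error) {
	p.mu.Lock()
	defer p.mu.Unlock()
	for port := p.start; port <= p.end; port++ {
		if !p.inUse[port] {
			p.inUse[port] = true
			return port, nil
		}
	}
	return 0, errors.New("no free RTP ports in pool")
}

// release puts a disconnected stream's port back into the pool.
func (p *portPool) release(port int) {
	p.mu.Lock()
	defer p.mu.Unlock()
	delete(p.inUse, port)
}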

With all the communication between "ingest" and "webrtc" (to open/close UDP for RTP ports, and to start/stop accepting requests at certain websocket URLs, all in response to handshakes), it would be a lot easier if the backend were a single binary. I see that Rust's webrtc implementation is not ready for use, which explains why Go handles those parts. But given the simplicity of the FTL handshaking process, it is conceivable that the Go code could do the handshake as well as its current roles.
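
For reference, the authentication step of that handshake is essentially an HMAC challenge; as I understand it from the ftl-sdk sources, the server sends a random hex-encoded nonce and the client answers with HMAC-SHA512 over the nonce, keyed with the stream key. A rough Go sketch of the server-side check (names are mine, not the project's):

import (
	"crypto/hmac"
	"crypto/sha512"
	"encoding/hex"
)

// verifyConnect checks the hex-encoded HMAC from the client's
// "CONNECT <channel> $<hmacHex>" against the nonce we handed out.
func verifyConnect(nonce, streamKey []byte, hmacHex string) bool {
	clientMAC, err := hex.DecodeString(hmacHex)
	if err != nil {
		return false
	}
	mac := hmac.New(sha512.New, streamKey)
	mac.Write(nonce)
	return hmac.Equal(clientMAC, mac.Sum(nil))
}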

Would you consider a PR that subsumes the ingest project into the webrtc project, creating a single Go binary for the entire backend? This would be a preliminary step towards multi-tenant; the next step would probably be to add a configuration system.

cmaddick (Contributor, Author) commented Jan 6, 2021

> Does this mean multiple video and audio tracks for a single streamer, or is it meant as a suggestion to allow more than one streamer to use the same server with different streams? […]

Multiple video & audio feeds from different streamers. The reason I ask is that I was thinking about using this as a centralised low-latency "cleanfeed" (that remote talent accesses) for some live video production work, which sometimes involves multiple distinct feeds. There are closed-source, enterprise solutions to this, but the investment needed is a little too much for my purposes (and wallet). :D

@woodrowbarlow gets what I mean.

woodrowbarlow commented:

I see this comment from you (@GRVYDEV) in #16:

> This is the reason that I want to keep the ingest decoupled from the WebRTC broadcast though as someone could easily swap another ingest in as long as it outputs RTP packets.

Which makes perfect sense to me and implies that you would be more interested in adding an API between ingest and webrtc than you would be in combining them -- if going this route at all, that is.

kanki6315 commented:

> Does this mean multiple video and audio tracks for a single streamer, or is it meant as a suggestion to allow more than one streamer to use the same server with different streams? […]

> Multiple video & audio feeds from different streamers. […] @woodrowbarlow gets what I mean.

+1 here. The ability to host a single instance of this, with multiple producers each sending a production feed to the same instance when multiple broadcasts are ongoing, and multiple commentators per producer connecting to each of the clean feeds, would literally be our drop-in replacement for Mixer. It would also mean that for our large-scale events, where we want as many remote camera operators as possible, one instance of this project could bring in multiple 1080p60 streams with minimal latency for broadcasting purposes.

GRVYDEV (Owner) commented Jan 6, 2021

> +1 here. The ability to host a single instance of this, with multiple producers each sending a production feed to the same instance, would literally be our drop-in replacement for Mixer. […]

Let's not forget about bandwidth here, though!

GRVYDEV (Owner) commented Jan 6, 2021

> I see this comment from you (@GRVYDEV) in #16: […] Which makes perfect sense to me and implies that you would be more interested in adding an API between ingest and webrtc than you would be in combining them […]

I love the idea of multiple channels, HOWEVER, there are a lot of challenges to overcome. I am willing to support this if people are willing to help; my initial hesitation is due to the time it would take to implement this by myself, but I have always planned to support this in the future. If you look here you can see that I want the ingest to relay on the loopback interface. I feel as if this would make it easier to control, but I also like the idea of an API between the ingest and webrtc services (since we may run into loopback bandwidth issues), which could easily be done via websockets on the loopback interface (although I'm not sure where Rust support for websockets is at). I am definitely interested in discussing this further!
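
To sketch what such an API could look like (purely illustrative; none of these names exist in the project yet), the events could be as small as a JSON message defined like this in Go:

// ControlEvent is what ingest might send to the webrtc service over a
// loopback websocket when an FTL handshake completes or a stream ends.
// All field names here are hypothetical.
type ControlEvent struct {
	Type      string `json:"type"`     // "stream_started" or "stream_stopped"
	ChannelID int    `json:"channel"`  // channel id from the FTL CONNECT
	RTPPort   int    `json:"rtp_port"` // loopback port the RTP relay will use
	Path      string `json:"path"`     // websocket path viewers subscribe to
}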

kanki6315 commented:

> Let's not forget about bandwidth here, though!

True! To explain the request a bit more: bandwidth limitations are actually why this is such a game changer for some of our team. While some of our producers have high upload speeds, others have more limited bandwidth. After Mixer shut down, we had to move to more peer-to-peer-based solutions, as most server-based solutions were commercial in nature and, being honest, well beyond our budget. This means that some of our producers actually use more bandwidth pushing out multiple individual peer-to-peer streams than they have left to push our broadcasts to their respective platforms, so this will help even the members of our team who have two ISPs to try and work around this cap. Really love this project; it's early days, but I'm excited about its future!

woodrowbarlow commented:

@kanki6315 I think you've misunderstood; FTL+RTP+WebRTC can provide sub-second latency, but only if the available bandwidth is sufficient for the feeds you're pushing. Lightspeed doesn't reduce bandwidth consumption. If you're maxing out bandwidth, you'll drop frames across the board and the only fix is to increase bandwidth or decrease stream quality.

kanki6315 commented:

> @kanki6315 I think you've misunderstood; FTL+RTP+WebRTC can provide sub-second latency, but only if the available bandwidth is sufficient for the feeds you're pushing. […]

Maybe I did - the way I understood it is that the streamer sends the data to the RTC instance with a single stream. Then, each consumer of the stream has a unique WebRTC connection to the RTC instance which is relaying and repeating the incoming stream to each viewer. This means the streamer is only sending data once no matter how many people are connected to the stream.

woodrowbarlow commented Jan 6, 2021

@kanki6315 Oh, it seems I misunderstood your original issue. You are correct. Although, if you want to do anything with the independent streams, like composite them into a single feed, you'd need one instance of OBS that takes in all the streams (except I don't think mainline OBS can take in a webrtc stream) and re-streams the final mix to yet another endpoint.

GRVYDEV (Owner) commented Jan 6, 2021

> Maybe I did - the way I understood it is that the streamer sends the data to the RTC instance with a single stream. […] This means the streamer is only sending data once no matter how many people are connected to the stream.

Sorry, I was super vague, haha. What I meant is that your bottleneck will be the egress bandwidth of the RTC server, but if you had a 10-gig NIC and you were using this internally, then you should be able to support quite a few connections!
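
(For a rough sense of scale, assuming ~6 Mbit/s per 1080p60 viewer: a 10 Gbit/s NIC gives roughly 10,000 / 6 ≈ 1,600 concurrent WebRTC connections before egress saturates, ignoring protocol and retransmission overhead.)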

GRVYDEV (Owner) commented Jan 6, 2021

> @kanki6315 Oh, it seems I misunderstood your original issue. You are correct. […]

This could be an interesting project in the future!

GRVYDEV (Owner) commented Jan 6, 2021

> True! To explain the request a bit more: bandwidth limitations are actually why this is such a game changer for some of our team. […]

Also, I'm glad you love the project! May I ask what your company does?

kanki6315 commented:

Both OP and I are part of RaceSpot TV - we are a sim racing broadcaster that primarily does iRacing broadcasts!

kinghat commented Jan 7, 2021

> […] if you want to do anything with the independent streams, like composite them into a single feed, you'd need one instance of OBS that takes in all the streams (except I don't think mainline OBS can take in a webrtc stream) and re-streams the final mix to yet another endpoint.

Would this help at all? https://github.com/steveseguin/obsninja

GRVYDEV added the "enhancement" and "Low Priority" labels on Jan 7, 2021
excedra commented Jan 10, 2021

> I would like to +1 this feature request and volunteer some time to work on this. […] Would you consider a PR that subsumes the ingest project into the webrtc project, creating a single Go binary for the entire backend? […]

This would be very nice. Some thoughts here:

Currently my solution for this problem would be to have multiple instances of Lightspeed up and running on different ports. Because I found no way to change the ports of the three services, this leads me to Docker containers, which can then handle the NAT (as discussed in #2: bring up an instance for each bundle of ports). But this wouldn't be dynamic, so the number of concurrent users/sessions is limited to the number of prepared and started Docker containers, and each user would then have an individual React page. The problem then becomes the number of ports used by webrtc, which only exists in the multi-Docker-container scenario. If there were one binary handling webrtc, this problem could be solved (because it would know which session is using which port).

If the services could handle multiple connections or a port range (each port corresponding to a specific session), this would fix the problem and provide a basic multi-user experience. Example of the idea:

  • On startup of all three services, define the number of users (via a parameter or config value, like --user=10)
  • React then takes ports 80 to 90
  • Ingest then takes 8084 to 8094
  • webrtc then takes 8080 to 8090 (a collision with ingest's port range! Perhaps move one of those to a higher range, like 9084 and up?)

This would give each service/user a "pack" of ports (80/8084/8080 or 84/8088/8084, for example). It would then be possible to grab the individual streams with ffmpeg, for example, and post-process them.

I'm an infrastructure guy, not a coder, so that is the part where I could contribute; it's also why I can only talk about the concept here. I cannot validate the coding part. ;-)

GRVYDEV (Owner) commented Jan 10, 2021

> Currently my solution for this problem would be to have multiple instances of Lightspeed up and running on different ports. […]

So multiple streams would be possible with only one instance of Lightspeed if implemented right. The reason you can't have multiple instances on one server is that NAT breaks WebRTC and FTL only uses port 8084, meaning you can only have one ingest server per interface.

excedra commented Jan 10, 2021

> So multiple streams would be possible with only one instance of Lightspeed if implemented right. The reason you can't have multiple instances on one server is that NAT breaks WebRTC and FTL only uses port 8084, meaning you can only have one ingest server per interface.

And what if webrtc/FTL identified the individual streams by their stream keys (on the input side) and then provided different ports on the output/webrtc side? Then all streams would come in on 8084.

GRVYDEV (Owner) commented Jan 10, 2021

> And what if webrtc/FTL identified the individual streams by their stream keys (on the input side) and then provided different ports on the output/webrtc side? Then all streams would come in on 8084.

Correct. I just need to work out the WebRTC logic for handling multiple video and audio tracks in the most efficient manner.

excedra commented Jan 10, 2021

> Correct. I just need to work out the WebRTC logic for handling multiple video and audio tracks in the most efficient manner.

But then again, if webrtc is using port 8080 and up, the ingest port 8084 would have to be moved "away" (as mentioned, for example to 9084). That would leave 1,004 possible incoming connections/streams (ports 8080 through 9083)... which will be enough, if I think about bandwidth. But shouldn't this port change be made sooner rather than later, so that the impact on existing installations is as small as possible?

Also: nice project overall! I've found nothing comparable, and my first thought (after doing my research) was: "Why does this not exist?"

GRVYDEV (Owner) commented Jan 10, 2021

You could negotiate multiple SDP offers over the same websocket, so you wouldn't need more than one. The only thing you would need then is enough UDP ports for WebRTC to use.
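
To illustrate (hypothetical field names, not the current frontend protocol), each signalling message on that one websocket could simply carry the channel it belongs to, so offers, answers, and ICE candidates for different streams can be multiplexed:

// SignalMessage multiplexes negotiation for several streams over a
// single websocket connection.
type SignalMessage struct {
	Channel   int    `json:"channel"`             // which stream this message refers to
	Kind      string `json:"kind"`                // "offer", "answer", or "candidate"
	SDP       string `json:"sdp,omitempty"`       // set for offers and answers
	Candidate string `json:"candidate,omitempty"` // set for ICE candidates
}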
