groupWithin prematurely closes resources #3477
Some of the fs2 combinators don't respect resources. The cause is that the foreground stream is compiled separately here. If possible, can you structure your stream like this:

```scala
Stream
  .resource(
    Resource.make(IO.println("start"))(_ => IO.println("stop"))
  )
  .flatMap { _ =>
    Stream(1, 2, 3, 4)
      .covary[IO]
      .metered(100.millis)
      .take(2)
      .evalTap(IO.println)
      .groupWithin(2, 1.second)
      .evalTap(IO.println)
  }
  .compile
  .drain
```

I think a resource-safe version of most combinators can be implemented via a more expressive lease operation:

```scala
object Pull2 {
  // Option   = is there anything to lease now?
  // Resource = try to take a lease
  // Boolean  = is the resource still available, or was the Scope's resource
  //            closed between the Pull and when the resource was opened?
  def lease[F[_]]: Pull[F, Nothing, Option[Resource[F, Boolean]]] = ???
}
```

This would allow transfer of resource ownership across separately compiled streams. A very contrived example which shows that ownership can cross through a channel:
```scala
def leaseChunks[F[_], A](implicit
    F: Async[F]
): fs2.Pipe[F, A, (Chunk[A], F[Unit])] = stream =>
  Stream.resource(Supervisor[F]).flatMap { sup =>
    Stream
      .eval {
        concurrent.Channel.bounded[F, (Chunk[A], F[Unit])](1)
      }
      .flatMap { chan =>
        def send(c: Chunk[A]): Pull[F, Nothing, Unit] = for {
          r <- Pull2.lease[F]
          _ <- Pull.eval {
            r match {
              case None => F.unit
              case Some(r) =>
                F.uncancelable { poll =>
                  poll(r.allocated).flatMap {
                    case (false, _) => F.unit
                    case (true, release) =>
                      sup.supervise(Async[F].never[Unit].onCancel(release)).flatMap { fib =>
                        poll(chan.send((c, fib.cancel))).void
                      }
                  }
                }
            }
          }
        } yield ()

        val bg = stream.chunks.flatMap(c => send(c).stream).onFinalize(chan.close.void)
        chan.stream.concurrently(bg)
      }
  }
```

I have an issue open here regarding this.
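The lease semantics sketched above can be modeled without fs2. Below is a minimal, dependency-free sketch (the `RefCounted` name and API are hypothetical, not part of fs2): a lease is a reference count that keeps the finalizer from running while any holder is alive, and a lease that returns `None` corresponds to the "resource already closed" (`false`) case.

```scala
import java.util.concurrent.atomic.AtomicInteger

// Hypothetical model of a leasable resource: the finalizer runs only once
// the owner has released AND every lease has been released.
final class RefCounted(finalizer: () => Unit) {
  // count = 1 for the owner; each outstanding lease adds 1
  private val count = new AtomicInteger(1)

  private def decrement(): Unit =
    if (count.decrementAndGet() == 0) finalizer()

  // Try to take a lease. None means the resource is already closed,
  // mirroring the Boolean in lease's Resource[F, Boolean].
  def lease(): Option[() => Unit] = {
    val prev = count.getAndUpdate(c => if (c > 0) c + 1 else c)
    if (prev > 0) Some(() => decrement()) else None
  }

  def releaseOwner(): Unit = decrement()
}

object LeaseDemo {
  def main(args: Array[String]): Unit = {
    var closed = false
    val res = new RefCounted(() => closed = true)

    val release = res.lease().get // downstream takes a lease on a chunk
    res.releaseOwner()            // upstream scope closes...
    assert(!closed)               // ...but the lease keeps the resource open
    release()                     // downstream is done with the chunk
    assert(closed)                // now the finalizer runs
    assert(res.lease().isEmpty)   // late leases observe the closed resource
    println("ok")
  }
}
```

This is only the reference-counting skeleton; the real thing would also need cancellation safety, which is what the `uncancelable`/`poll` dance in the example above provides.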
---

In this simple example that I adapted a little bit, using [...]

---

Actually no, it does seem to work on [...]. See, I only added [...]. I've actually run into this issue before: #3081 (comment)
---

Resources and interruption are implemented via a combinator which introduces a new Scope tree (fs2/core/shared/src/main/scala/fs2/Compiler.scala, lines 156 to 158 at eea0c25).

When the lookup function is evaluated, which traverses the scope tree, the link between the sub-stream's scopes and the previous parent scopes is broken.

There might be an implementation of `concurrently` that shares the root scope; however, I am unsure of the implications. All in all, the rule is: if a stream originates as a substream, it may hold weak references to state in its origin stream through scope ids, and thus you may not compile that stream separately.
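The scope-tree point can be made concrete with a toy model (hypothetical types, not fs2's actual internals): a resource lookup walks parent links from the current scope up to the root, so a separately compiled stream, which gets a fresh root, can no longer reach resources registered in the original tree.

```scala
// Toy scope tree: lookup walks from a scope to the root via parent links.
final case class Scope(id: String, parent: Option[Scope], resources: Set[String]) {
  def lookup(resourceId: String): Boolean =
    resources.contains(resourceId) || parent.exists(_.lookup(resourceId))
}

object ScopeDemo {
  def main(args: Array[String]): Unit = {
    val root  = Scope("root", None, Set("res-1"))
    val child = Scope("child", Some(root), Set.empty)

    // Within the original compilation, the substream's scope can see res-1.
    assert(child.lookup("res-1"))

    // Compiling the substream separately gives it a new root with no link
    // back to the original tree: the lookup for res-1 now fails.
    val separateRoot = Scope("root-2", None, Set.empty)
    val recompiled   = Scope("child-2", Some(separateRoot), Set.empty)
    assert(!recompiled.lookup("res-1"))
    println("ok")
  }
}
```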
---

Well, a foolproof and easy API for transferring resource ownership between streams would be great.
---

Code:

Result:

The resource is already closed before the stage after `groupWithin` gets to process the chunk. So this can leak a closed resource.
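The ordering can be modeled without fs2 (a contrived, dependency-free sketch): the upstream's finalizer runs once the upstream is done emitting into the internal buffer, while the stage downstream of the buffer only processes the chunk afterwards, so it observes an already closed resource.

```scala
import scala.collection.mutable.ArrayBuffer

object PrematureCloseDemo {
  def main(args: Array[String]): Unit = {
    var open = false
    val log  = ArrayBuffer.empty[String]

    // Upstream: acquire, emit into a buffer (standing in for groupWithin's
    // internal channel), then finalize because the upstream's scope closes.
    open = true; log += "start"
    val buffered = Vector(1, 2)
    open = false; log += "stop"

    // Downstream: processes the buffered chunk only after the finalizer ran.
    buffered.foreach { n => log += s"use $n (open=$open)" }

    assert(log.toList ==
      List("start", "stop", "use 1 (open=false)", "use 2 (open=false)"))
    println("ok")
  }
}
```

A lease, as proposed above, would keep `open` true until the downstream releases each chunk.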