PrepPipeline daemon thread shutdown #65
base: master
Conversation
Make PreparedDbProvider AutoCloseable so the background thread started in PrepPipeline can be shut down
I'm afraid this cannot be changed that easily, because it is intended behavior. If there are multiple tests running in a sequence, it is desirable to start only a single database cluster (postgres instance) for all of them. Typically, most tests share the same preparer configuration, so in the end there should be only a single background thread, or a very limited number of them. If your experience is different, please double check which preparer implementation your tests are actually using.

To make it clearer: if I have a test class like the one sketched below and run it several times, it should always refer to the same database cluster. Only if I change the migration location parameter (the preparer configuration) will it use a new database cluster.
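For illustration, a minimal sketch of such a test class (assuming the io.zonky.test flavour of the FlywayPreparer/PreparedDbProvider API and ScalaTest; class and migration names are placeholders):

import io.zonky.test.db.postgres.embedded.{FlywayPreparer, PreparedDbProvider}
import org.scalatest.funsuite.AnyFunSuite

class UsersRepositorySpec extends AnyFunSuite {
  // Same preparer configuration => the provider hands out databases from one shared cluster.
  private val provider = PreparedDbProvider.forPreparer(
    FlywayPreparer.forClasspathLocation("db/migration"))

  test("reads and writes users") {
    val dataSource = provider.createDataSource()
    // ... run queries against dataSource ...
  }
}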
Okay, so perhaps I can clarify further. As you can see, sbt is run here as a long-lived program rather than as a one-shot command (as mvn or gradle typically are), so it does not start a new JVM for every test run. (There is an option to have sbt fork a new JVM for each test run, but that would make testing much slower.) Running the tests leaves these background threads behind; 7 each time might not sound like much, but it adds up when I run the test suite many times in the same session. Right now the thread inside PrepPipeline keeps creating databases in the background. So what I am basically looking for is some way to shut these processes down, and I thought killing the internal thread in PrepPipeline would be the way.
Okay, so I attempted to do this with a hack (finding all the threads whose names contain "postgres" and stopping them). An even worse hack is to manually find and run the shutdown hooks added by the embedded postgres instances:

import scala.collection.JavaConverters._ // for .asScala (scala.jdk.CollectionConverters._ on 2.13+)

// Reach into the JVM's registered shutdown hooks via reflection.
val field = Class.forName("java.lang.ApplicationShutdownHooks").getDeclaredField("hooks")
field.setAccessible(true)
val hooks = field.get(null)
val threads: List[Thread] = hooks.asInstanceOf[java.util.Map[Thread, Thread]].asScala.keySet.toList
// Pick out the hooks registered by the embedded postgres instances.
val postgresThreads = threads.filter(thread => thread.getName.contains("postgres") && thread.getName.contains("closer"))
// Deregister the hooks and run them immediately to stop the postgres processes now.
postgresThreads.foreach(thread => Runtime.getRuntime.removeShutdownHook(thread))
postgresThreads.foreach(thread => thread.run())
@amumurst Thanks a lot for the reproducer, it made it a lot easier for me to investigate the problem. I have confirmed that the SBT server is indeed using only a single JVM process. However, the problem is that the server uses a custom class loader, which loads the same classes over and over again each time the project is run. So within the same JVM process there are duplicate classes that have no relation to each other. That prevents the caching mechanism, which is used to store instances of the embedded databases, from working properly: the cache relies on a static variable, and in this case the static variable refers to a new object every time the project is run.

So to solve this problem, we need to figure out how to implement a cache that persists data between multiple runs of a project executed via the SBT server. Alternatively, we could implement a special shutdown hook that is called when the project run is shutting down rather than when the SBT server (the JVM process) shuts down.
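To illustrate the class-loader issue, here is a hypothetical sketch (not code from this library; ClusterCache and the map values are placeholders):

// A cache held in a singleton object lives in the class loader that defined it.
object ClusterCache {
  private val clusters = scala.collection.concurrent.TrieMap.empty[String, String]
  def getOrStart(preparerKey: String): String =
    // Imagine this starting a postgres cluster the first time a key is seen.
    clusters.getOrElseUpdate(preparerKey, s"cluster-for-$preparerKey")
}
// Under the SBT server each run of the project can get a fresh class loader,
// so ClusterCache is effectively a different object every time: the map starts
// empty again and a new cluster is started even for an identical preparerKey.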
Force-pushed from 616f7f3 to 69c9b08
Make PreparedDbProvider AutoCloseable such that the internal background thread started in PrepPipeline can be shut down without needing to shut down the entire JVM.
This is useful for build tools that run in interactive mode, e.g. Scala's sbt. Here you typically run
sbt
and then a series of test invocations:
test -> test -> test
running the tests multiple times without shutting down the underlying JVM in between. The current implementation then keeps multiple preparers running in the background, since the daemon thread is never killed. I am not too familiar with the internals here, so there might be more to clean up, but I think this would work.
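For illustration, a rough sketch of how the closeable provider could be used once this change lands (close() is what this PR adds; the import path and migration location are assumptions, and scala.util.Using requires Scala 2.13+):

import io.zonky.test.db.postgres.embedded.{FlywayPreparer, PreparedDbProvider}
import scala.util.Using

// Using.resource calls provider.close() at the end of the block, which would
// stop PrepPipeline's background thread and its postgres processes.
Using.resource(PreparedDbProvider.forPreparer(
    FlywayPreparer.forClasspathLocation("db/migration"))) { provider =>
  val dataSource = provider.createDataSource()
  // ... run the tests against dataSource ...
}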