When using JdbcJobRepository, PurgeBatchlet doesn't work properly.
Looking at the code, with JdbcJobRepository, PurgeBatchlet only honors the sql and sqlFile parameters; with the other parameters (like numberOfRecentJobExecutionsToKeep), JobExecutions are only removed from memory.
What's the point of that?
JdbcJobRepository should override AbstractJobRepository's removeJobExecutions method and properly delete every piece of data JBeret may have created.
JBeret users SHOULD NOT have to use or know any JBeret implementation detail to clean the JobRepository!
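To make the request concrete, here is a minimal, self-contained sketch of the pruning logic such an override would need, independent of JBeret's real API (PurgePlanner and executionsToDelete are illustrative names, not existing JBeret classes or methods): select the executions to drop so that only the N most recent remain, then issue the corresponding DELETEs against the repository's own tables, which should stay an implementation detail hidden from users.

```java
import java.time.Instant;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Map;

public class PurgePlanner {

    /**
     * Given execution end times keyed by execution id, return the ids to
     * delete so that only the {@code keep} most recent executions survive.
     * A JDBC-backed repository would then run DELETE statements for these
     * ids against its job-execution tables.
     */
    public static List<Long> executionsToDelete(Map<Long, Instant> endTimes, int keep) {
        List<Map.Entry<Long, Instant>> sorted = new ArrayList<>(endTimes.entrySet());
        // Sort newest first, so the first `keep` entries are the survivors.
        sorted.sort(Map.Entry.<Long, Instant>comparingByValue(Comparator.reverseOrder()));
        List<Long> toDelete = new ArrayList<>();
        for (int i = keep; i < sorted.size(); i++) {
            toDelete.add(sorted.get(i).getKey());
        }
        return toDelete;
    }
}
```

For example, with three executions ending at seconds 10, 20 and 30 and keep=1, only the newest (the one ending at 30) survives and the other two ids are returned for deletion.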
Also, even if this is not the right place, the WildFly integration should be improved as well.
The jberet submodule should have a parameter specifying the number of jobs to keep, and use the JBeret implementation-specific methods to clean the data, without resorting to PurgeBatchlet.
This is absolutely not something end-user programmers should have to know about...
I really like the JSR-352 specification, but my production environment was polluted with thousands of old entries and I was unable to clean them up in a proper way.
Also, I took a look inside JdbcJobRepository, and I believe what we really need is a JpaJobRepository.
I don't think raw JDBC is the proper way to handle database persistence in 2021.
We don't want to maintain N different delete queries for N different databases.
JPA does exactly that for us; let's not reinvent the wheel.
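For illustration, assuming a hypothetical mapped entity (JobExecutionEntity is an invented name here, not an existing JBeret class), a single portable JPQL bulk delete would replace all the per-database SQL variants:

```
DELETE FROM JobExecutionEntity e
WHERE e.endTime < :cutoff
```

The JPA provider translates this one statement into the correct SQL dialect for whichever database is configured, which is exactly the wheel we shouldn't reinvent.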
Furthermore, for future development, I suggest a NoSqlJobRepository built on the new NoSQL specification (its Document driver should be good enough for JBeret).
That way we wouldn't need a MongoDB-specific implementation.
The broken purge causes problems like:
https://issues.redhat.com/browse/WFLY-7418
https://developer.jboss.org/thread/272830
For now, I'll switch to the in-memory repository and use PurgeBatchlet with a low numberOfRecentJobExecutionsToKeep value.
Hopefully that will be enough.
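For anyone hitting the same problem, a sketch of that workaround as a job XML (the property name numberOfRecentJobExecutionsToKeep comes from this issue; the batchlet ref "purgeBatchlet" and the job/step ids are assumptions to check against your deployment):

```xml
<job id="purgeJob" xmlns="http://xmlns.jcp.org/xml/ns/javaee" version="1.0">
  <step id="purgeStep">
    <batchlet ref="purgeBatchlet">
      <properties>
        <!-- With the in-memory repository this parameter is honored;
             with JdbcJobRepository only sql/sqlFile reach the database. -->
        <property name="numberOfRecentJobExecutionsToKeep" value="20"/>
      </properties>
    </batchlet>
  </step>
</job>
```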
Thank you in advance