The Spring Batch 'chunking' concept supports the failure-and-restart scenario you've described: you've already created 500 records, a failure occurs, and you don't want to lose what's already been committed when you restart.
A simple configuration for such a job might be as follows:
<batch:job id="entityCreationJob">
    <batch:step id="entityCreationJob.step1">
        <batch:tasklet>
            <batch:chunk reader="entityReader" writer="entityWriter" commit-interval="250"/>
        </batch:tasklet>
    </batch:step>
</batch:job>
This simple configuration will do the following:
- read/create a single record per row (e.g. entity.getEntityManager(session).createEntity(e)); see the reader sketch after this list
- commit the records in blocks of 250, as set by the commit-interval
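For the reader side, a minimal sketch using Spring Batch's JpaPagingItemReader might look like the following; the entityManagerFactory bean and the Entity class here are assumptions standing in for your own setup:

<bean id="entityReader" class="org.springframework.batch.item.database.JpaPagingItemReader">
    <!-- entityManagerFactory is assumed to be defined elsewhere in your context -->
    <property name="entityManagerFactory" ref="entityManagerFactory"/>
    <!-- 'Entity' is a placeholder for your own mapped class -->
    <property name="queryString" value="select e from Entity e"/>
    <property name="pageSize" value="250"/>
</bean>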
Should a failure occur (let's say at record 1190), with a commit interval of 250 you will only 'lose' the work of 190 records: the previous 1000 (four chunks of 250) have already been committed to the database. When the job is restarted, it will pick up at record 1001 and continue, committing in 250-record blocks.
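This restart behaviour relies on a persistent JobRepository, which records where the last committed chunk ended. As a sketch (the dataSource and transactionManager bean names are assumptions), the repository can be declared like this:

<batch:job-repository id="jobRepository"
    data-source="dataSource"
    transaction-manager="transactionManager"/>

Launching the job again with identical job parameters is what tells Spring Batch to resume the failed execution rather than start a fresh one.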
To make best use of the commit/retry behaviour, I would suggest you use the JpaItemWriter together with either the JpaTransactionManager or a JTA transaction manager.
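As a sketch of that wiring (again assuming an entityManagerFactory bean defined elsewhere), the writer and transaction manager could be declared as follows:

<bean id="transactionManager" class="org.springframework.orm.jpa.JpaTransactionManager">
    <!-- entityManagerFactory is assumed to exist elsewhere in the context -->
    <property name="entityManagerFactory" ref="entityManagerFactory"/>
</bean>

<bean id="entityWriter" class="org.springframework.batch.item.database.JpaItemWriter">
    <property name="entityManagerFactory" ref="entityManagerFactory"/>
</bean>

The tasklet then picks the transaction manager up via its transaction-manager attribute, e.g. <batch:tasklet transaction-manager="transactionManager">. JpaItemWriter merges each item in the chunk into the persistence context, so the whole 250-record chunk is flushed and committed as one transaction.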