I am using Hazelcast 2.0.1 and I update my data frequently (roughly every 2 minutes); the update first deletes the entries and then reloads them from the database. However, at some point one of the threads holds a lock on a key, which blocks the remove operation and throws an exception (java.util.ConcurrentModificationException: Another thread holds a lock for the key: abc@gmail.com). Please help me get my map in Hazelcast updated.
My code is given below.
DeltaParallelizer
def dataOperations = new DataOperator()   // created the same way as in EagerDeltaTask below
def customerDetails = dataOperations.getDistributedStore(DataStructures.customer_project.name()).keySet()
ExecutorService service = Hazelcast.getExecutorService()
def result
try {
    customerDetails?.each { customerEmail ->
        log.info String.format('Creating delta task for customer:%s', customerEmail)
        def dTask = new DistributedTask(new EagerDeltaTask(customerEmail))
        service.submit(dTask)
    }
    customerDetails?.each { customerEmail ->
        log.info String.format('Creating task customer aggregation for %s', customerEmail)
        def task = new DistributedTask(new EagerCustomerAggregationTask(customerEmail))
        service.submit(task)
    }
}
catch (Exception e) {
    e.printStackTrace()
}
EagerDeltaTask
class EagerDeltaTask implements Callable, Serializable {
    private final def emailId

    EagerDeltaTask(email) {
        emailId = email
    }

    @Override
    public Object call() throws Exception {
        log.info(String.format("Eagerly computing delta for %s", emailId))
        def dataOperations = new DataOperator()
        def tx = Hazelcast.getTransaction()
        tx.begin()
        try {
            deleteAll(dataOperations)
            loadAll(dataOperations)
            tx.commit()
        }
        catch (Exception e) {
            tx.rollback()
            log.error(String.format('Delta computation is screwed while loading data for the project:%s', emailId), e)
        }
    }

    private void deleteAll(dataOperations) {
        log.info String.format('Deleting entries for customer %s', emailId)
        def projects = dataOperations.getDistributedStore(DataStructures.customer_project.name()).get(emailId)
        projects?.each { project ->
            log.info String.format('Deleting entries for project %s', project[DataConstants.PROJECT_NUM.name()])
            def srs = dataOperations.srs(project[DataConstants.PROJECT_NUM.name()])?.collect { it[DataConstants.SR_NUM.name()] }
            def activitiesStore = dataOperations.getDistributedStore(DataStructures.sr_activities.name())
            srs?.each { sr ->
                activitiesStore.remove(sr)
            }
            dataOperations.getDistributedStore(DataStructures.project_sr_aggregation.name()).remove(project[DataConstants.PROJECT_NUM.name()])
        }
        dataOperations.getDistributedStore(DataStructures.customer_project.name()).remove(emailId)
    }

    private void loadAll(dataOperations) {
        log.info(String.format('Loading entries for customer %s', emailId))
        def projects = dataOperations.projects(emailId)
        projects?.each { project ->
            log.info String.format('Loading entries for project %s', project[DataConstants.PROJECT_NUM.name()])
            def srs = dataOperations.srs(project[DataConstants.PROJECT_NUM.name()])
            srs?.each { sr ->
                dataOperations.activities(sr[DataConstants.SR_NUM.name()])
            }
        }
    }
}
DataOperator
class DataOperator {
    def getDistributedStore(String name) {
        Hazelcast.getMap(name)
    }
}
I get the exception in deleteAll while removing the SRs, so only some of the map contents get deleted, and new data is loaded only for the maps whose contents were deleted, while the rest of the maps keep the old data. So I do not end up with updated data in my Hazelcast maps. Please share your views on how I can get the updated data into my Hazelcast maps.
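To make the question concrete, this is the kind of per-key locking I am considering around the delete/reload instead of relying only on the transaction. It is just a sketch under my assumptions: that IMap.lock(key)/unlock(key) behave this way in 2.0.1, and LockedRefresh/refresh are illustrative names, not part of my code.

import com.hazelcast.core.Hazelcast
import com.hazelcast.core.IMap

class LockedRefresh {
    // Sketch only: serialize the delete + reload for one customer by locking
    // the customer key on the customer_project map first, so no other thread
    // can hold a lock on that key while the delta runs.
    static void refresh(String emailId, Closure deleteAll, Closure loadAll) {
        IMap customerProjects = Hazelcast.getMap('customer_project')  // same map as DataStructures.customer_project.name()
        customerProjects.lock(emailId)        // other threads block on this key only
        try {
            deleteAll.call()
            loadAll.call()
        } finally {
            customerProjects.unlock(emailId)  // always release the key lock
        }
    }
}

Would that avoid the "Another thread holds a lock for the key" error, or does the transaction already take the same per-key lock?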
Will Hazelcast.getTransaction also work for this purpose from a client?
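By "from a client" I mean roughly the following. This is only a sketch based on my assumptions about the 2.0.1 client API (ClientConfig.addAddress and HazelcastClient.newHazelcastClient); the address is a placeholder.

import com.hazelcast.client.ClientConfig
import com.hazelcast.client.HazelcastClient
import com.hazelcast.core.HazelcastInstance
import com.hazelcast.core.Transaction

// Sketch only: connect a client and run the same begin/commit/rollback pattern there.
ClientConfig clientConfig = new ClientConfig()
clientConfig.addAddress('127.0.0.1:5701')       // placeholder address; adjust to the real cluster
HazelcastInstance client = HazelcastClient.newHazelcastClient(clientConfig)

Transaction tx = client.getTransaction()        // instead of the member-side Hazelcast.getTransaction()
tx.begin()
try {
    client.getMap('customer_project').remove('abc@gmail.com')
    tx.commit()
} catch (Exception e) {
    tx.rollback()
}

Or are transactions only supported on a member in 2.0.1?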
Note: a customer can have multiple project_nums, one project_num can also be shared by multiple customers, and one project_num can have multiple SR_NUMs.
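For completeness, this is how the sharing looks in the data. It is just a sketch that walks the customer_project map; the value shape (a list of project rows keyed by PROJECT_NUM) is taken from my deleteAll code above.

import com.hazelcast.core.Hazelcast

// Sketch only: list the projects that are shared by more than one customer,
// i.e. the keys that several delta tasks can delete/reload at the same time.
def customerProjects = Hazelcast.getMap('customer_project')
def owners = [:]
customerProjects.keySet().each { email ->
    customerProjects.get(email)?.each { project ->
        def projectNum = project['PROJECT_NUM']   // same key as DataConstants.PROJECT_NUM.name()
        owners.get(projectNum, []) << email       // Groovy get-with-default stores the empty list
    }
}
owners.findAll { projectNum, emails -> emails.size() > 1 }.each { projectNum, emails ->
    println "project ${projectNum} is shared by customers ${emails}"
}

If projects really are shared like this, I assume that is how two delta tasks end up competing for the same keys.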