Imagine a simple case:
```java
import java.util.ArrayList;
import java.util.List;

class B {
    public final String text;

    public B(String text) {
        this.text = text;
    }
}

class A {
    private List<B> bs = new ArrayList<B>();

    public B getB(String text) {
        for (B b : bs) {
            if (b.text.equals(text)) {
                return b;
            }
        }
        return null;
    }

    // [getter/setter]
}
```
Imagine that for each instance of `A`, the `List<B>` is large and we need to call `getB(String)` often. However, assume it is also possible for the list to change (elements added or removed, or the list reference reassigned).

At this stage, the average complexity of `getB(String)` is O(n). To improve that, I was wondering if we could use some clever caching.
Imagine we cache the `List<B>` in a `Map<String, B>` where the key is `B.text`. That would improve lookup performance, but it won't work if the list is changed (an element added or removed) or reassigned (`A.bs` points to a new reference).
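As a minimal sketch of that idea (the mutator methods `addB`, `setBs` and `rebuildCache` are hypothetical additions, assuming every change to the list goes through `A`), the cache could be kept in sync explicitly:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class A {
    private List<B> bs = new ArrayList<B>();
    // Cache keyed by B.text; must be kept in sync with bs.
    private Map<String, B> cache = new HashMap<String, B>();

    public B getB(String text) {
        return cache.get(text); // O(1) on average
    }

    public void addB(B b) {
        bs.add(b);
        cache.put(b.text, b); // update the cache on every mutation
    }

    public void setBs(List<B> newList) {
        bs = newList;
        rebuildCache(); // a reassignment invalidates the whole cache
    }

    private void rebuildCache() {
        cache = new HashMap<String, B>();
        for (B b : bs) {
            cache.put(b.text, b);
        }
    }
}
```

This only works if every change to the list goes through `A`, which is exactly the guarantee the hash-check idea below tries to do without.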
To get around that, I thought that, along with the `Map<String, B>`, we could store a hash of the list `bs`. When we call the `getB(String)` method, we compute the hash of the list `bs`. If the hash hasn't changed, we fetch the result from the map; if it has, we rebuild the map.
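A sketch of that hash-check variant (the field names are hypothetical; note that the hash check itself is O(n), which is exactly the concern below):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class A {
    private List<B> bs = new ArrayList<B>();
    private Map<String, B> cache = null; // null until the first lookup
    private int cachedHash;              // bs.hashCode() at the time the cache was built

    public B getB(String text) {
        int currentHash = bs.hashCode();       // O(n): visits every element
        if (cache == null || currentHash != cachedHash) {
            rebuildCache(currentHash);         // also O(n)
        }
        return cache.get(text);
    }

    private void rebuildCache(int hash) {
        Map<String, B> fresh = new HashMap<String, B>();
        for (B b : bs) {
            fresh.put(b.text, b);
        }
        cache = fresh;
        cachedHash = hash;
    }
}
```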
The problem is that computing the hash of a `java.util.List` goes through all the elements of the list and computes their hashes, which is at least O(n).
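For reference, this is what the `List.hashCode()` contract requires; the class and method below are just a hypothetical wrapper around the computation described in the `java.util.List` Javadoc:

```java
import java.util.List;

final class ListHashing {
    // The hashCode computation required by the java.util.List contract:
    // it calls hashCode() on every element, so it is O(n) at best.
    static <E> int listHashCode(List<E> list) {
        int hashCode = 1;
        for (E e : list) {
            hashCode = 31 * hashCode + (e == null ? 0 : e.hashCode());
        }
        return hashCode;
    }
}
```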
Question
What I'd like to know is whether the JVM will be faster at computing the hash of the `List` than at going through my loop in the `getB(String)` method. Maybe that depends on the implementation of `hashCode()` for `B`; if so, what kind of approaches could work? In a nutshell, I'd like to know whether this is stupid or whether it could bring some performance improvement.