If testing two objects for equality would be expensive, and if the hash codes of the objects are known, it may be helpful to test the hash codes as a first step toward testing equality. If the hash codes are not equal, there's no need to look any further. If they are equal, then examine things in more detail. Suppose, for example, that one had many 100,000-character strings that happened to differ only in the last ten characters (but there was no reason to expect that to be the case). Even if there were a 1% false-match rate with hash codes, checking hash codes before checking the string contents in detail could offer a nearly 100-fold speedup versus repeatedly examining the first 99,990 characters of every string.
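The hash-first comparison described above can be sketched as follows (a minimal illustration in Python; the `HashedString` wrapper and its field names are hypothetical, and the built-in `hash` stands in for whatever hash function the application uses):

```python
class HashedString:
    """Wraps a string with a precomputed hash so equality can fail fast."""

    def __init__(self, text):
        self.text = text
        self.code = hash(text)  # computed once, reused for every comparison

    def __eq__(self, other):
        # Cheap test first: unequal hashes guarantee unequal strings.
        if self.code != other.code:
            return False
        # Equal hashes may still be a false match, so confirm with a
        # full character-by-character comparison.
        return self.text == other.text


a = HashedString("x" * 99_990 + "0123456789")
b = HashedString("x" * 99_990 + "9876543210")
print(a == b)  # almost certainly rejected by the hash test alone,
               # without scanning the 99,990 identical leading characters
```

Note that the hash test can only ever say "definitely unequal" or "possibly equal"; the expensive comparison is still needed to confirm a match.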
The goal of a hash code is generally not to be unique, but rather to bring the cost of comparisons involving false hash matches into the same ballpark as the cost of computing the hash codes themselves. If a given hash function generates so many false matches that the time spent processing them dominates the time spent computing hashes, then spending more time on a better hash function may be worthwhile if it reduces the number of false matches. Conversely, if the hash function is so effective that the time spent computing hashes dominates the time spent on false matches, it may be better to use a faster hashing algorithm even if the number of false matches would increase.
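This tradeoff can be made concrete with a simple cost model (all numbers below are hypothetical, chosen only to illustrate the balance the paragraph describes):

```python
def expected_cost(hash_cost, false_match_rate, full_compare_cost):
    """Expected time per comparison: hash first, full compare only on a match."""
    return hash_cost + false_match_rate * full_compare_cost

full_compare = 1000.0  # time units for one detailed comparison

# A slow but accurate hash versus a fast but sloppier one.
strong = expected_cost(hash_cost=50.0, false_match_rate=0.0001,
                       full_compare_cost=full_compare)
weak = expected_cost(hash_cost=5.0, false_match_rate=0.01,
                     full_compare_cost=full_compare)

print(f"strong hash: {strong:.1f}")  # 50.0 + 0.1  = 50.1
print(f"weak hash:   {weak:.1f}")    # 5.0  + 10.0 = 15.0
```

With these particular numbers the weaker hash wins: its extra false matches cost far less than the time saved on hashing. If the full comparison were much more expensive, the balance would flip in favor of the stronger hash.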