Tools vary in what they detect and how well they detect it. As a general rule, I recommend running as many tools as is practical on the source code. Granted, there are a number of considerations in doing that. First and foremost is the cost of owning and maintaining any one tool.
The big names (Fortify, CodeSonar, Coverity, Klocwork, etc.) are all expensive to buy and carry a hefty yearly maintenance cost. On the upside, they all tend to perform better than the open-source tools.
Any tool, be it open-source or proprietary, will require "care and feeding": creating custom rules, tuning what is reported, and so on. In my opinion, this should be done by a dedicated senior programmer who is well versed in the theory and practice of secure programming.
The evaluation of the tool reports should likewise be done by a programmer/analyst well versed in security. The take-away message here is that a proficient programmer is not necessarily a secure programmer; secure programming requires an additional set of knowledge and skills.
For a brief overview of various tools, I would suggest looking at the various SAMATE (Software Assurance Metrics And Tool Evaluation) reports located here. Although I do not believe the SAMATE team ever evaluated "Sparse".
I know these are generalities about the use of static analysis tools, but given the current state of the art, I suspect this is about the best guidance you are going to get. You can also check out this State of the Art report on software assurance.