There is usually something you can check - for instance, that your algorithm always returns solutions that satisfy their constraints, even if they are not optimal. You should also add assertion checks at every opportunity. These will be specific to your program, but might verify that some quantity is conserved, that a value which should only increase or stay the same never decreases, or that a supposed local optimum really is a local optimum.
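To make that concrete, here is a minimal Python sketch. The toy problem and every name in it are invented purely for illustration, but it shows the three kinds of assertion just mentioned: a bounds check, a check that a value which should never decrease does not decrease, and a check that the returned answer really is a local optimum.

```python
def objective(x, y):
    # Toy concave objective; its maximum is at (3, -1).
    return -(x - 3) ** 2 - (y + 1) ** 2


def neighbours(x, y):
    # The eight surrounding grid points.
    return [(x + dx, y + dy)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)]


def hill_climb(start, lo=-10, hi=10):
    current = start
    best = objective(*current)

    while True:
        # Bounds check: the current point must stay inside the search box.
        assert all(lo <= c <= hi for c in current), "point escaped its bounds"

        prev_best = best
        improved = False
        for cand in neighbours(*current):
            if not all(lo <= c <= hi for c in cand):
                continue
            value = objective(*cand)
            if value > best:
                current, best, improved = cand, value, True

        # Something that should increase or at worst stay the same
        # must never decrease.
        assert best >= prev_best, "objective decreased during hill climbing"

        if not improved:
            break

    # The supposed local optimum really is a local optimum: no in-bounds
    # neighbour scores better.
    assert all(objective(*n) <= best
               for n in neighbours(*current)
               if all(lo <= c <= hi for c in n)), "not actually a local optimum"
    return current
```

Running, say, hill_climb((-7, 5)) walks to (3, -1) with every assert passing along the way; a bug in the neighbour generation or the acceptance test tends to trip one of them quickly.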
Given these sorts of checks, and the checks on bounds that you have already mentioned, I favour running tests on a very large number of randomly generated small problems, with random seeds chosen in such a way that if the test fails on problem 102324 you can reproduce that failure for debugging without running through the 102323 problems before it. With a large number of problems, you increase the chance that an underlying bug will cause an error obvious enough to fail your checks. With small problems, you increase the chance that you will be able to find and fix the bug.
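Here is one way to arrange the seeding, sketched in Python and reusing the hill_climb toy from above. The base seed and the problem generation are made up; the only point is that each problem's seed comes from its index, so a failing problem can be rerun on its own.

```python
import random


def run_one(index, base_seed=12345):
    # Derive a per-problem seed from the index so this exact problem can be
    # regenerated later in isolation.
    rng = random.Random(base_seed + index)
    start = (rng.randint(-10, 10), rng.randint(-10, 10))  # a small random problem
    hill_climb(start)  # all the assertion checks fire inside


def run_many(n_problems, base_seed=12345):
    for index in range(n_problems):
        try:
            run_one(index, base_seed)
        except AssertionError:
            # Report the index so the failure can be rerun on its own.
            print(f"checks failed on problem {index}: rerun run_one({index})")
            raise


if __name__ == "__main__":
    run_many(10_000)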