As with many questions involving compiler codegen decisions, the answer is "it depends".
Building your own hash table will probably run faster than the compiler-generated code in many cases, because the compiler is balancing cost metrics that you are not: primarily, memory consumption.
A hash table will use more memory than a handful of if-then-else IL instructions. If the compiler spat out a hash table for every switch statement in a program, memory use would explode.
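For a concrete sense of the cheap case, here is a minimal sketch (the class and method names are invented for illustration) of the kind of small switch that typically compiles down to a few compare-and-branch instructions rather than any table:

```csharp
// Illustrative sketch: with only a few cases like this, the compiler
// typically lowers the switch to a short chain of compare-and-branch
// IL instructions. No table, no extra data structures, very little memory.
static class SmallSwitchExample
{
    public static int Classify(char c)
    {
        switch (c)
        {
            case 'a': return 1;
            case 'b': return 2;
            case 'c': return 3;
            default:  return 0;
        }
    }
}
```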
As the number of case blocks in the switch statement grows, you will probably see the compiler produce different code. With more cases, there is greater justification for the compiler to abandon small and simple if-then-else patterns in favor of faster but fatter alternatives.
I don't know offhand whether the C# or JIT compilers perform this particular optimization, but a common compiler trick for switch statements, when the case selectors are many and mostly sequential, is to compute a jump vector. This requires more memory (in the form of compiler-generated jump tables embedded in the code stream) but executes in constant time. Subtract arg - 'a', use the result as an index into the jump table, and jump to the appropriate case block. Boom, yer done, regardless of whether there are 20 or 2000 cases.
A compiler is more likely to shift into jump table mode when the switch selector type is char or int or enum and the values of the case selectors are mostly sequential ("dense"), since these types can be easily subtracted to create an offset or index. String selectors are a little harder.
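To make the subtract-and-index idea concrete, here is a rough hand-rolled analogue of a jump table over a dense range of char selectors. This is only an illustration of the shape of the work; the compiler's real jump table is a set of branch targets embedded in the code stream, and the class and handler names below are made up for the example:

```csharp
using System;

// Hand-rolled "jump table" over the dense selector range 'a'..'e':
// one subtraction, one bounds check, one constant-time indexed dispatch.
static class DenseDispatch
{
    // One delegate slot per selector in the dense range.
    private static readonly Func<string>[] Handlers =
    {
        () => "case a",
        () => "case b",
        () => "case c",
        () => "case d",
        () => "case e",
    };

    public static string Dispatch(char c)
    {
        int index = c - 'a';                       // subtract to get an offset
        if ((uint)index >= (uint)Handlers.Length)  // out of range falls through to default
            return "default";
        return Handlers[index]();                  // indexed jump, constant time
    }
}
```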
String selectors are "interned" by the C# compiler, meaning the compiler adds the string selector values to an internal pool of unique strings. The address or token of an interned string can be used as its identity, allowing for int-like optimizations when comparing interned strings for identity / byte-wise equality. With enough case selectors, the C# compiler will produce IL code that looks up the interned equivalent of the arg string (a hash table lookup), then compares (or jump-tables) the interned token against the precomputed case selector tokens.
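For reference, this is what such a switch looks like at the source level; the lowering described above happens behind the scenes, and the exact threshold and strategy are compiler-version details you would want to verify against the emitted IL:

```csharp
static class StringSwitchExample
{
    // With only a few cases the compiler emits sequential string comparisons;
    // with many cases it may switch to a hash-based lookup behind the scenes.
    // The exact threshold and lowering strategy vary by compiler version.
    public static int MonthNumber(string name)
    {
        switch (name)
        {
            case "jan": return 1;
            case "feb": return 2;
            case "mar": return 3;
            case "apr": return 4;
            case "may": return 5;
            case "jun": return 6;
            default:    return 0;
        }
    }
}
```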
If you can coax the compiler into producing jump-table code in the char/int/enum selector case, this can execute faster than using your own hash table.
For the string selector case, the IL code still has to do a hash lookup, so any performance difference from using your own hash table is probably a wash.
In general, though, you shouldn't dwell on these compiler nuances too much when writing application code. Switch statements are generally much easier to read and understand than a hash table of function pointers. Switch statements that are big enough to push the compiler into jump table mode are often too big to be humanly readable.
If you find that a switch statement is in a performance hotspot of your code, and you have measured with a profiler that it has a tangible performance impact, then changing your code to use your own dictionary is a reasonable tradeoff for the performance gain.
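If a profile does point at the switch, the hand-rolled alternative is straightforward. Here is a minimal sketch of dictionary-based dispatch (the command names and handlers are invented for illustration):

```csharp
using System;
using System.Collections.Generic;

// Minimal sketch of dictionary-based dispatch. Build the table once and
// reuse it; rebuilding it on every call would throw away any gain.
static class CommandDispatch
{
    private static readonly Dictionary<string, Action> Handlers =
        new Dictionary<string, Action>(StringComparer.Ordinal)
        {
            ["start"] = () => Console.WriteLine("starting"),
            ["stop"]  = () => Console.WriteLine("stopping"),
            ["pause"] = () => Console.WriteLine("pausing"),
        };

    public static void Run(string command)
    {
        if (Handlers.TryGetValue(command, out var handler))
            handler();                     // one hash lookup, then invoke
        else
            Console.WriteLine("unknown command");
    }
}
```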
Writing your code to use a hash table from the start, with no performance measurements to justify the choice, is over-engineering that will lead to unfathomable code with unnecessarily high maintenance costs. Keep It Simple.