If you are trying to minimise latency, I have found writing to memory-mapped files to be faster because it avoids the need to make a system call on each write. This assumes you don't need to force the data to disk and are happy for the OS to flush it on a best-effort basis.
The typical latency for writing to a memory-mapped file is the same as writing to memory, so I don't believe you will get faster. As the file grows you need to map additional regions, which can take 50 to 100 microseconds; this is significant, but should be rare enough that it doesn't matter.
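To illustrate, here is a minimal sketch of this pattern using the JDK's `FileChannel.map` and `MappedByteBuffer`. The class name, region size, and remapping policy are my assumptions for the example, not a specific library's API; each `put` is a plain memory store into the page cache, and a fresh region is mapped only when the current one fills up.

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Hypothetical example class: appends records via a memory-mapped region,
// remapping a new region as the file grows (the 50-100 us cost mentioned above).
public class MappedWriter {
    private static final int REGION_SIZE = 64 * 1024; // illustrative region size

    private final FileChannel channel;
    private MappedByteBuffer region;
    private long regionStart = 0;

    public MappedWriter(Path file) throws IOException {
        channel = FileChannel.open(file,
                StandardOpenOption.CREATE,
                StandardOpenOption.READ,
                StandardOpenOption.WRITE);
        region = channel.map(FileChannel.MapMode.READ_WRITE, 0, REGION_SIZE);
    }

    // Writes are plain memory stores; no system call on the common path.
    // Assumes each record is smaller than REGION_SIZE.
    public void write(byte[] data) throws IOException {
        if (region.remaining() < data.length) {
            // Rare slow path: map the next region contiguously after what we wrote.
            regionStart += region.position();
            region = channel.map(FileChannel.MapMode.READ_WRITE, regionStart, REGION_SIZE);
        }
        region.put(data);
    }

    public void close() throws IOException {
        channel.close();
    }

    public static void main(String[] args) throws IOException {
        Path file = Files.createTempFile("mapped", ".dat");
        MappedWriter w = new MappedWriter(file);
        byte[] msg = "hello world\n".getBytes();
        for (int i = 0; i < 10_000; i++)
            w.write(msg);
        w.close();
        System.out.println("wrote " + (10_000 * msg.length) + " bytes");
    }
}
```

Note that mapping beyond the current end of file extends the file, so the file on disk will be rounded up to a whole number of regions; the OS writes the dirty pages back in the background.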
Writing to a file via a system call takes on the order of 5 to 10 microseconds, which is fast enough for most applications, but relatively much slower if latency matters to you.
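For contrast, this is what the system-call path looks like: each `FileChannel.write` turns into a `write(2)` call into the kernel. The timing print is illustrative only; actual numbers depend heavily on your OS and hardware.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class SyscallWriter {
    public static void main(String[] args) throws IOException {
        Path file = Files.createTempFile("syscall", ".dat");
        try (FileChannel ch = FileChannel.open(file, StandardOpenOption.WRITE)) {
            ByteBuffer buf = ByteBuffer.wrap("hello world\n".getBytes());
            long start = System.nanoTime();
            ch.write(buf); // one system call per invocation
            long elapsed = System.nanoTime() - start;
            System.out.println("write took " + elapsed + " ns");
        }
    }
}
```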
If you need to see the data as it is written with low latency, I suggest you look at my library, Java Chronicle, which supports reading data with a typical latency of 100 ns from the time it is written.
Note: while memory-mapped files can reduce the latency of individual writes, they don't increase the write throughput of your disk subsystem. This means that if you have a slow disk subsystem, your memory will soon become exhausted (even if you have many GBs of it) and will become the performance bottleneck regardless of which approach you take.
For example, if you have SATA or Fibre Channel, you might have a limit of 500 MB/s, which is easy to exceed. Say you produce 1 GB/s against a 500 MB/s disk: dirty pages then accumulate at 500 MB/s, so 30 GB of free memory is exhausted in about a minute. Once you hit your memory limit, this will slow you down regardless of which approach you chose.