Although this is an old question, I think none of the answers so far mention the real motivation for total functional programming, which is:

If programs are proofs, and proofs are programs, then a program with "holes" is meaningless as a proof, and introduces logical inconsistency.
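This correspondence can be sketched in Lean (a small illustration of my own, not from any particular source): a proof term is literally a program of the proposition's type, and a nonterminating definition, if accepted, would inhabit any proposition.

```lean
-- Proofs are programs: `And.intro` is just a pairing function,
-- so proving a conjunction means constructing a pair.
theorem conj (p q : Prop) (hp : p) (hq : q) : p ∧ q :=
  And.intro hp hq

-- If nonterminating definitions were admitted as proofs, the line
-- below would "prove" every proposition, including `False` — which
-- is exactly why the termination checker rejects it:
--
--   def bogus (p : Prop) : p := bogus p   -- error: fails termination check
```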
Basically, if a proof is a program, an infinite loop can be used to prove anything. This is really bad, and provides much of the motivation for why we might want to program totally. The other answers tend to neglect the flip side: although total languages are technically not Turing-complete, you can recover a lot of interesting programs by using coinductive definitions and functions. We are very prone to think in terms of inductive data, but codata serves an important purpose in these languages: it lets you give a total definition of an object that is infinite. A terminating computation will only ever consume a finite piece of that object — though a program meant to run forever, such as an operating system, may well consume it all.
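The codata idea can be illustrated in Python (an analogy only — Python is not a total language and has no termination checking): a generator plays the role of a corecursive stream definition, productive at every step, and a terminating consumer observes only a finite prefix of it.

```python
from itertools import islice

def nats(n=0):
    # Corecursive-style definition: an infinite stream of naturals.
    # Each step is productive (yields one element), mirroring the
    # guardedness condition that total languages impose on codata.
    while True:
        yield n
        n += 1

# A terminating computation consumes only a finite prefix of the stream:
print(list(islice(nats(), 5)))  # -> [0, 1, 2, 3, 4]
```

A server or operating system, by contrast, would consume the stream without bound — still well-defined as codata, just never finished.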
It is also worth noting that most proof assistants, Coq for example, are built on this principle.