Here are the two biggest things I can take from the How to Design Programs (simplified Racket) course I just finished, straight from the lecture notes of the course:

1) Tail call optimization, and the lack thereof in non-functional languages:

Sadly, most other languages do not support TAIL CALL OPTIMIZATION. Put another way, they do build up a stack even for tail calls.

Tail call optimization was invented in the mid 70s, long after the main elements of most languages were developed. Because they do not have tail call optimization, these languages provide a fixed set of LOOPING CONSTRUCTS that make it possible to traverse arbitrary sized data.

a) What are the equivalents to this type of optimization in procedural languages that don't feature it? b) Does using those equivalents mean we avoid building up a stack in similar situations in languages that don't have it?

2) Mutation and multicore processors

This mechanism is fundamental in almost any other language you program in. We have delayed introducing it until now for several reasons:

  • despite being fundamental, it is surprisingly complex

  • overuse of it leads to programs that are not amenable to parallelization (running on multiple processors). Since multi-core computers are now common, the ability to use mutation only when needed is becoming more and more important

  • overuse of mutation can also make it difficult to understand programs, and difficult to test them well

But mutable variables are important, and learning this mechanism will give you more preparation to work with Java, Python and many other languages. Even in such languages, you want to use a style called "mostly functional programming".

I learned some Java, Python and C++ before taking this course, so I came to take mutation for granted. Now all of that has been thrown into the air by the statement above. My questions are:

a) Where could I find more detailed information regarding what is suggested in the 2nd bullet, and what to do about it? b) What kind of patterns would emerge from a "mostly functional programming" style, as opposed to the more careless style I probably would have developed had I continued on with those other languages instead of taking this course?

3 Answers

As Leppie points out, looping constructs manage to recover the space savings of proper tail calls, for the particular kinds of loops that they support. The only problem with looping constructs is that the ones you have are never enough, unless you just punt the ball into the user's court and force them to model the stack explicitly.

As an example, suppose you're traversing a binary tree using a loop. It works... but you need to explicitly keep track of the "objects to come back to". A recursive traversal in a tail-calling language allows you to have your cake and eat it too: no wasted space when you don't need it, and no need to keep track of the stack yourself.
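Here is a small sketch in Python of the bookkeeping this describes (`Node` and both traversal helpers are made-up names for illustration): the loop version must manage a stack of "places to return to" by hand, while the recursive version lets the language do it.

```python
class Node:
    """A minimal binary tree node, just for illustration."""
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def inorder_iterative(root):
    # The loop works, but we must explicitly model the call stack:
    # 'stack' holds every ancestor we still need to come back to.
    result, stack, node = [], [], root
    while stack or node:
        while node:                 # descend left, remembering ancestors
            stack.append(node)
            node = node.left
        node = stack.pop()          # "return" to the saved ancestor
        result.append(node.value)
        node = node.right
    return result

def inorder_recursive(root):
    # The recursive version: the language tracks the stack for us.
    if root is None:
        return []
    return (inorder_recursive(root.left)
            + [root.value]
            + inorder_recursive(root.right))
```

Both produce the same in-order listing; the difference is purely in who maintains the stack.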

Your question about parallelism and concurrency is much broader, and the best pointers are probably to areas of research rather than existing solutions. I think most people would agree that there's a crisis going on in the computing world: how do we adapt our mutation-heavy programming skills to the new multi-core world?

Simply switching to a functional paradigm isn't a silver bullet here, either: we still don't know how to write high-level code that compiles down to blazingly fast, non-mutating, concurrently running code. Lots of people are working on it, though!

Answered 2011-12-15T08:37:43.760
To expand on the "mutability makes parallelism hard" idea: when you have multiple cores going, and you want to modify something from one core and have all the other cores see the change consistently, you have to use synchronization.

Getting synchronization right is hard. If you over-synchronize, you get deadlocks, slow (serial rather than parallel) performance, and so on. If you under-synchronize, you get partial changes (where another core sees only part of the modifications you made from a different core), leaving your objects in an invalid "halfway changed" state.

It is for this reason that many functional programming languages encourage a message-queue concept instead of a shared-state concept. In that case, the only shared state is the message queue, and managing synchronization in a message queue is a solved problem.
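As a minimal sketch of the message-queue style in Python, using the standard library's thread-safe `queue.Queue` (the worker function and its `None` sentinel protocol are my own illustration, not a standard API):

```python
import queue
import threading

def worker(tasks, results):
    # The worker shares nothing with other threads except the two
    # queues; all synchronization lives inside queue.Queue itself.
    while True:
        item = tasks.get()
        if item is None:              # sentinel: no more work
            break
        results.put(item * item)      # reply with a message, not by
                                      # mutating shared state

tasks, results = queue.Queue(), queue.Queue()
threads = [threading.Thread(target=worker, args=(tasks, results))
           for _ in range(2)]
for t in threads:
    t.start()
for n in range(5):
    tasks.put(n)
for _ in threads:
    tasks.put(None)                   # one sentinel per worker
for t in threads:
    t.join()

squares = sorted(results.get() for _ in range(5))
print(squares)                        # [0, 1, 4, 9, 16]
```

No locks appear in user code: the queues are the only shared state, and they already handle their own synchronization.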

Answered 2011-12-15T15:06:32.163
a) What are the equivalents to this type of optimization in procedural languages that don't feature it? b) Does using those equivalents mean we avoid building up a stack in similar situations in languages that don't have it?

Well, the whole point of a tail call is that it can evaluate another function without adding to the call stack, so anything that builds up the stack can't really be called an equivalent.

A tail call behaves essentially like a jump to the new code, using the language trappings of a function call and all the appropriate detail management. So in languages without this optimization, you'd use a jump within a single function: loops, conditional blocks, or even arbitrary goto statements if nothing else works.
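For example, in Python (which, like most procedural languages, does not perform tail call optimization) the mechanical rewrite looks like this; both function names are made up for illustration:

```python
# Tail-recursive sum of 1..n: the recursive call is the last thing done.
def sum_to_rec(n, acc=0):
    if n == 0:
        return acc
    return sum_to_rec(n - 1, acc + n)    # tail position, but Python
                                         # still grows the call stack

# The mechanical translation to a loop: the "call" becomes rebinding
# the parameters and jumping back to the top of the loop.
def sum_to_loop(n, acc=0):
    while n != 0:
        n, acc = n - 1, acc + n
    return acc
```

Only the loop version runs in constant stack space; the recursive one hits Python's recursion limit for large `n`, which is exactly the cost the lecture notes describe.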

a) Where could I find more detailed information regarding what is suggested in the 2nd bullet, and what to do about it?

The second bullet sounds like an oversimplification. There are many ways to make parallelization harder than it needs to be, and overuse of mutation is just one of them.

Note, however, that parallelization (splitting a task into pieces that can be done simultaneously) is not quite the same thing as concurrency (having multiple, possibly interacting tasks executing at the same time), although there's certainly overlap. Avoiding mutation is enormously helpful when writing concurrent programs, because immutable data avoids a great many race conditions and resource-contention problems that would otherwise be possible.
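A single-threaded Python sketch of the hazard that immutability removes (the variable names are my own illustration; in real concurrent code the two "workers" would be separate threads racing on the same object):

```python
# Shared *mutable* state: one holder's change silently leaks into
# every other holder of a reference to the same object.
settings = {"timeout": 30}
worker_a = settings               # both names alias one dict
worker_b = settings
worker_a["timeout"] = 0           # worker_b now sees 0 as well

# Shared *immutable* state: handing out references is always safe,
# because the value cannot be changed out from under anyone.
frozen = (("timeout", 30),)
try:
    frozen[0] = ("timeout", 0)    # tuples reject in-place mutation
except TypeError:
    mutation_rejected = True      # any "change" must build a new value
```

With immutable data, "updating" means constructing a new value and publishing it, so no reader can ever observe a halfway-changed state.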

b) What kind of patterns would emerge from a "mostly functional programming" style, as opposed to the more careless style I probably would have had had I continued on with those other languages instead of taking this course?

Have you looked at Haskell or Clojure? Both lean heavily toward a functional style that emphasizes controlled mutation. Haskell is more rigorous about it, but has many tools for working with limited forms of mutability, while Clojure is a bit more informal, and may feel more familiar to you since it is another Lisp dialect.

Answered 2011-12-15T15:19:19.163