Summary of “Why Functional Programming Matters” (John Hughes)

The paper demonstrates the significance of functional programming and clearly outlines its advantages.

## Modularity

Modular design is essential to successful programming: break problems into parts, solve the parts, and glue the solutions together. The available methods of gluing dictate the ways in which problems can be broken up.

## Functional programming advantages (or lacks)

Commonly cited advantages of functional programming (FP): no side effects, no assignment statements, referential transparency, immutability.

These describe what FP lacks rather than what it offers, making them unconvincing arguments on their own.

## Structured programming

“No gotos” summed up the advantages of structured programming in the same negative terms, much like the FP “lacks” above.

The most important difference between structured and unstructured programs is that structured programs are designed in a modular way. Modular design brings with it great productivity improvements:

• small modules coded quickly and easily
• general purpose modules reused, leading to faster development later
• modules can be tested independently

The absence of gotos helps with programming in the small, but modular design helps with programming in the large.

## Two new glues in functional programming

These let smaller and more general modules be glued together using:

• higher-order functions
• lazy evaluation

### Higher-order functions

Higher-order functions enable simple functions to be glued together to build complex ones.

Computations that remain invisible in conventional programming languages can be expressed as a combination of:

• general higher-order function
• particular specializing functions

An example from the paper is foldr, which captures a common pattern of computation over lists. In Haskell this enables definitions like:

```haskell
-- foldr op init replaces each (:) in a list with op and the final [] with init
sum       = foldr (+) 0
product   = foldr (*) 1
anytrue   = foldr (||) False
alltrue   = foldr (&&) True
length    = foldr count 0      -- where count a n = n + 1
map f     = foldr ((:) . f) []
summatrix = sum . map sum
```


Modularizing a simple function such as sum into foldr and (+) means the same pattern can be reused to write many other list functions with little programming effort.
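
A rough Python sketch of the same pattern (the helper names `foldr`, `my_sum`, `my_map`, and `summatrix` are mine, not from the paper) may help make the right fold concrete:

```python
from functools import reduce

def foldr(op, init, xs):
    # Right fold: foldr(op, init, [a, b, c]) == op(a, op(b, op(c, init)))
    return reduce(lambda acc, x: op(x, acc), reversed(xs), init)

def my_sum(xs):
    # sum = foldr (+) 0
    return foldr(lambda a, b: a + b, 0, xs)

def my_map(f, xs):
    # map f = foldr ((:) . f) [] -- cons becomes list prepending
    return foldr(lambda a, acc: [f(a)] + acc, [], xs)

def summatrix(m):
    # summatrix = sum . map sum
    return my_sum([my_sum(row) for row in m])
```

Each specialized function is just the general fold plus a small specializing function, mirroring the Haskell definitions.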

### Lazy evaluation

Lazy evaluation makes it practical to modularize a program into:

• a generator that constructs a large number of possible answers
• a selector that chooses the appropriate answer
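
Python generators give an approximate analogue of this generator/selector split (the functions below are illustrative, not from the paper):

```python
from itertools import count

def squares():
    # generator: an unbounded stream of candidate answers
    for i in count(1):
        yield i * i

def first_exceeding(limit, stream):
    # selector: consumes only as much of the stream as it needs
    for x in stream:
        if x > limit:
            return x

first_exceeding(50, squares())  # -> 64
```

The generator never terminates on its own; the selector decides how much of it is ever evaluated.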

Lazy evaluation and side effects cannot coexist, since lazy evaluation relies on giving up control over the order in which parts of a program execute. Side effects, on the other hand, rely on that ordering and on knowing the context in which each part is embedded.

Powerful and expressive levels of abstraction can be reached quickly.

#### Example: computing Newton-Raphson square roots

The algorithm computes the square root of a number n, starting from an initial approximation a[0] and computing increasingly accurate ones using the rule:

a[i + 1] = (a[i] + n / a[i]) / 2


The approximation converges rapidly to a limit. Square root programs take a tolerance eps and stop when two consecutive approximations differ by less than eps.

```python
def root(n, eps, a0):
    x = a0
    y = a0 + 2 * eps  # dummy value so the loop runs at least once

    while abs(x - y) >= eps:
        y = x
        x = (x + n / x) / 2

    return x
```


Note that this traditional implementation is an indivisible whole in a conventional language. Lazy evaluation lets us modularize it.

```haskell
f n x = (x + n / x) / 2
```


where f n computes the next approximation from the current one. The sequence of approximations is then:

```haskell
[a0, f a0, f (f a0), f (f (f a0)), ...]
```


Define a function to compute this:

```haskell
repeat f a = a : (repeat f (f a))   -- the standard Haskell iterate
```


Given n and a0, the list of computations is computed as:

```haskell
repeat (f n) a0
```


repeat’s output is potentially infinite, but only as much of it as is demanded will be computed.
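
This laziness can be mimicked with a Python generator: the stream is conceptually infinite, yet only the demanded prefix is ever built (the names here are my own):

```python
from itertools import islice

def repeat_fn(f, a):
    # lazy infinite stream [a, f a, f (f a), ...]
    while True:
        yield a
        a = f(a)

f = lambda x: (x + 2 / x) / 2  # next Newton approximation for n = 2

# only four elements are ever computed, despite the infinite loop above
prefix = list(islice(repeat_fn(f, 1.0), 4))
```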

Now we just need the approximation accuracy checker:

within eps (a:b:t) = if abs (a - b) <= eps then b else within eps (b:t)


Gluing everything together:

```haskell
newton_raphson a0 eps n = within eps (repeat (f n) a0)
sqrt = newton_raphson 1 0.000001  -- default starting point and tolerance
sqrt 2                            -- ≈ 1.41421356
```
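
The same modular structure can be sketched in Python, with a lazy generator of approximations and a separate selector (function names are my own, not the paper's):

```python
def repeat_fn(f, a):
    # generator of approximations: a, f(a), f(f(a)), ...
    while True:
        yield a
        a = f(a)

def within(eps, stream):
    # selector: first approximation within eps of its predecessor
    a = next(stream)
    for b in stream:
        if abs(a - b) <= eps:
            return b
        a = b

def newton_raphson(a0, eps, n):
    return within(eps, repeat_fn(lambda x: (x + n / x) / 2, a0))

def my_sqrt(n):
    return newton_raphson(1.0, 1e-6, n)
```

Unlike the indivisible loop in `root`, the generator and the selector are independent pieces that can be recombined.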


Extensions:

• within can be swapped out for another convergence checker (e.g. one based on relative error)
• within and other convergence checkers can be reused in other numerical algorithms that generate a sequence of approximations (e.g. numerical differentiation and integration)

More examples in paper.