**Chain Rule for Multivariate Calculus**

We continue our discussion of multivariate calculus. The first item here is the analogue of the **Chain Rule** for the multivariate case. Suppose we have parameters *f*, *u*, *v*, *x*, *y*, *z*. Suppose {*u*, *v*} are independent parameters (in particular, the system is at least 2-dimensional), and assume that (i) we can write *x*, *y* and *z* as functions of {*u*, *v*}, and (ii) we can write *f* as a function of *x*, *y* and *z*. This also means we can write *f* as a function of *u* and *v*. Upon perturbing the system, we get:

$$\Delta f \approx \left(\frac{\partial f}{\partial x}\right)_{y,z}\Delta x + \left(\frac{\partial f}{\partial y}\right)_{x,z}\Delta y + \left(\frac{\partial f}{\partial z}\right)_{x,y}\Delta z$$

and

$$\Delta f \approx \left(\frac{\partial f}{\partial u}\right)_v\Delta u + \left(\frac{\partial f}{\partial v}\right)_u\Delta v.$$

We wish to find a formula which expresses the second set of partial derivatives in terms of the first. To do that, we divide the first equation by $\Delta u$ and obtain:

$$\frac{\Delta f}{\Delta u} \approx \left(\frac{\partial f}{\partial x}\right)_{y,z}\frac{\Delta x}{\Delta u} + \left(\frac{\partial f}{\partial y}\right)_{x,z}\frac{\Delta y}{\Delta u} + \left(\frac{\partial f}{\partial z}\right)_{x,y}\frac{\Delta z}{\Delta u}.$$

If we maintain $\Delta v = 0$ and let $\Delta u \to 0$, then the LHS converges to $\left(\frac{\partial f}{\partial u}\right)_v$ by definition. Taking the limit on the RHS also, we obtain:

$$\left(\frac{\partial f}{\partial u}\right)_v = \left(\frac{\partial f}{\partial x}\right)_{y,z}\left(\frac{\partial x}{\partial u}\right)_v + \left(\frac{\partial f}{\partial y}\right)_{x,z}\left(\frac{\partial y}{\partial u}\right)_v + \left(\frac{\partial f}{\partial z}\right)_{x,y}\left(\frac{\partial z}{\partial u}\right)_v.$$

This is usually written in books in the simplified form

$$\frac{\partial f}{\partial u} = \frac{\partial f}{\partial x}\frac{\partial x}{\partial u} + \frac{\partial f}{\partial y}\frac{\partial y}{\partial u} + \frac{\partial f}{\partial z}\frac{\partial z}{\partial u},$$

which is acceptable since the context is clear: the coordinate *x* is assumed to occur together with *y* and *z*, while *u* and *v* are always assumed to occur together. We left all the subscripts in our initial equation because we’re really trying to be careful here.

To remember the above formula, use the diagram:

$$f \;\longrightarrow\; \{x,\, y,\, z\} \;\longrightarrow\; \{u,\, v\}$$

Thus to compute $\left(\frac{\partial f}{\partial u}\right)_v$ we find all possible paths from *f* to *u* through the intermediate parameters {*x*, *y*, *z*} and take the sum of all terms, where each term is the product of the corresponding partial derivatives along the way.
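This path-sum picture is easy to confirm symbolically. Here is a minimal sketch using Python’s sympy, with the intermediate functions chosen purely for illustration (any smooth choices would do):

```python
import sympy as sp

u, v = sp.symbols('u v')
x, y, z = sp.symbols('x y z')

# Hypothetical functions, chosen only for illustration.
X, Y, Z = u + v, u*v, u - v          # x, y, z as functions of (u, v)
f = x**2 + y*z                        # f as a function of (x, y, z)

# LHS: substitute everything into f and differentiate directly with respect to u.
direct = sp.diff(f.subs({x: X, y: Y, z: Z}), u)

# RHS: sum over the three paths f -> x -> u, f -> y -> u, f -> z -> u.
chain = (sp.diff(f, x)*sp.diff(X, u)
         + sp.diff(f, y)*sp.diff(Y, u)
         + sp.diff(f, z)*sp.diff(Z, u)).subs({x: X, y: Y, z: Z})

assert sp.simplify(direct - chain) == 0
```

The two computations agree identically, which is exactly the statement of the multivariate chain rule.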

Example 1. Suppose $f = x^2 + y^2 + z^2$ and $x = u + v$, $y = u - v$, $z = uv$. Then

$$\left(\frac{\partial f}{\partial u}\right)_v = \frac{\partial f}{\partial x}\frac{\partial x}{\partial u} + \frac{\partial f}{\partial y}\frac{\partial y}{\partial u} + \frac{\partial f}{\partial z}\frac{\partial z}{\partial u} = 2x\cdot 1 + 2y\cdot 1 + 2z\cdot v.$$

Together with $x = u + v$, $y = u - v$ and $z = uv$, we get the desired relation $\left(\frac{\partial f}{\partial u}\right)_v = 4u + 2uv^2$, which is convenient if we wish to calculate explicit values.
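A computation of this type is quick to double-check by machine. Taking, as a hypothetical example, $f = x^2 + y^2 + z^2$ with $x = u+v$, $y = u-v$, $z = uv$, sympy reproduces the hand computation:

```python
import sympy as sp

u, v = sp.symbols('u v')
# Substitute x = u+v, y = u-v, z = uv directly into f = x^2 + y^2 + z^2.
f = (u + v)**2 + (u - v)**2 + (u*v)**2

# Differentiating the substituted expression agrees with the chain-rule answer.
assert sp.expand(sp.diff(f, u)) == 4*u + 2*u*v**2
```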

Example 2. Suppose we have polar coordinates $x = r\cos\theta$, $y = r\sin\theta$. Then for any *f* = *f*(*x*, *y*),

$$\frac{\partial f}{\partial r} = \frac{\partial f}{\partial x}\cos\theta + \frac{\partial f}{\partial y}\sin\theta$$ – (1)

$$\frac{\partial f}{\partial \theta} = -\frac{\partial f}{\partial x}\,r\sin\theta + \frac{\partial f}{\partial y}\,r\cos\theta$$ – (2)

But if we wish to express $\frac{\partial f}{\partial x}$ and $\frac{\partial f}{\partial y}$ in terms of $\frac{\partial f}{\partial r}$ and $\frac{\partial f}{\partial \theta}$, then taking $\cos\theta\times(1) - \frac{\sin\theta}{r}\times(2)$, the equation simplifies to: $\frac{\partial f}{\partial x} = \cos\theta\,\frac{\partial f}{\partial r} - \frac{\sin\theta}{r}\,\frac{\partial f}{\partial\theta}$. A similar computation gives us an expression for $\frac{\partial f}{\partial y}$. In short:

$$\frac{\partial f}{\partial x} = \cos\theta\,\frac{\partial f}{\partial r} - \frac{\sin\theta}{r}\,\frac{\partial f}{\partial\theta}, \qquad \frac{\partial f}{\partial y} = \sin\theta\,\frac{\partial f}{\partial r} + \frac{\cos\theta}{r}\,\frac{\partial f}{\partial\theta}.$$
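These inversion formulas can be checked mechanically. A minimal sympy sketch, with a hypothetical test function $f = x^2 y$ (any smooth $f$ would work equally well):

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
x, y = sp.symbols('x y')
f = x**2 * y                       # hypothetical test function

polar = {x: r*sp.cos(th), y: r*sp.sin(th)}
f_polar = f.subs(polar)
fr, fth = sp.diff(f_polar, r), sp.diff(f_polar, th)

# Claimed: df/dx = cos(th) df/dr - (sin(th)/r) df/dth,
#          df/dy = sin(th) df/dr + (cos(th)/r) df/dth.
fx_claimed = sp.cos(th)*fr - sp.sin(th)/r*fth
fy_claimed = sp.sin(th)*fr + sp.cos(th)/r*fth

# Compare against differentiating f in rectilinear coordinates directly.
assert sp.simplify(fx_claimed - sp.diff(f, x).subs(polar)) == 0
assert sp.simplify(fy_claimed - sp.diff(f, y).subs(polar)) == 0
```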

**Higher Order Multivariate Derivatives**

Recall that in the single-variable case, we can take successive derivatives of the function *f*(*x*) to obtain $f'(x)$, $f''(x)$, etc. Let’s consider the multivariate case here.

Suppose {*x*, *y*, *z*, *w*} forms a set of coordinates. If we fix the values of *y*, *z*, and *w*, then we can differentiate a function with respect to *x* as many times as we please. Thus we write this as:

$$\frac{\partial^n f}{\partial x^n}, \text{ keeping } y, z, w \text{ constant}.$$

For example, if $f = x^3 y^2 z$, then $\frac{\partial^2 f}{\partial x^2} = 6xy^2z$.
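Repeated partial differentiation of this kind is one-liner territory in sympy; the sketch below uses $f = x^3 y^2 z$ as an illustration (the remaining coordinates are held constant automatically):

```python
import sympy as sp

x, y, z, w = sp.symbols('x y z w')
f = x**3 * y**2 * z                 # hypothetical example

# diff(f, x, n) differentiates n times with respect to x,
# treating y, z, w as constants.
assert sp.diff(f, x, 2) == 6*x*y**2*z
assert sp.diff(f, x, 3) == 6*y**2*z
```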

On the other hand, if we fix the values of *z* and *w*, then we can differentiate with respect to *x* first while keeping *y* constant, then with respect to *y* while keeping *x* constant. This is denoted by:

$$\frac{\partial^2 f}{\partial y\,\partial x} := \frac{\partial}{\partial y}\left(\frac{\partial f}{\partial x}\right).$$

But one can also switch the order around: differentiate with respect to *y* first, then with respect to *x*. It turns out **the order doesn’t matter** if the function is nice enough, i.e. we get:

$$\frac{\partial^2 f}{\partial x\,\partial y} = \frac{\partial^2 f}{\partial y\,\partial x}.$$

Here’s an intuitive (but non-rigorous) explanation of the reason. Since *z*, *w* are fixed throughout, let’s simplify our notation by denoting $f(x, y) := f(x, y, z, w)$. Now consider a small perturbation $(\Delta x, \Delta y)$ and consider the following:

$$\frac{f(x+\Delta x,\, y+\Delta y) - f(x+\Delta x,\, y) - f(x,\, y+\Delta y) + f(x,\, y)}{\Delta x\,\Delta y} = \frac{1}{\Delta x}\left[\frac{f(x+\Delta x,\, y+\Delta y) - f(x+\Delta x,\, y)}{\Delta y} - \frac{f(x,\, y+\Delta y) - f(x,\, y)}{\Delta y}\right].$$

If we let $\Delta y \to 0$ with $\Delta x$ constant, the two terms on the RHS converge to $\frac{\partial f}{\partial y}(x+\Delta x, y)$ and $\frac{\partial f}{\partial y}(x, y)$ respectively. If we now let $\Delta x \to 0$, the expression converges to $\frac{\partial^2 f}{\partial x\,\partial y}$. By symmetry, the expression also converges to $\frac{\partial^2 f}{\partial y\,\partial x}$ if we switch the order of convergence. Since it shouldn’t matter whether we let $\Delta y \to 0$ first then $\Delta x \to 0$ or vice versa, the two derivatives are equal.

[ *Warning: pathological examples where the two derivatives differ do exist! Such functions are explicitly forbidden in our consideration.* ]
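The standard pathological example is $f(x, y) = \frac{xy(x^2 - y^2)}{x^2 + y^2}$ with $f(0, 0) = 0$: away from the origin everything is fine, but at the origin the two mixed derivatives come out to $+1$ and $-1$. A quick finite-difference sketch makes the discrepancy visible numerically:

```python
def f(x, y):
    # Classic function whose mixed partials at the origin disagree.
    if x == 0.0 and y == 0.0:
        return 0.0
    return x*y*(x*x - y*y)/(x*x + y*y)

h, k = 1e-4, 1e-7     # outer and inner step sizes

def fy(x):            # central difference approximating df/dy at (x, 0)
    return (f(x, k) - f(x, -k)) / (2*k)

def fx(y):            # central difference approximating df/dx at (0, y)
    return (f(k, y) - f(-k, y)) / (2*k)

d2f_xy = (fy(h) - fy(-h)) / (2*h)   # approximates d/dx (df/dy) at the origin
d2f_yx = (fx(h) - fx(-h)) / (2*h)   # approximates d/dy (df/dx) at the origin

assert abs(d2f_xy - 1.0) < 1e-3     # converges to +1
assert abs(d2f_yx + 1.0) < 1e-3     # converges to -1
```

The culprit is exactly the issue in the intuitive argument above: for this $f$, the two orders of taking limits give different answers.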

Example 3. Consider $f = x^2 y^3$. Then the two derivatives are:

$$\frac{\partial}{\partial y}\left(\frac{\partial f}{\partial x}\right) = \frac{\partial}{\partial y}\left(2xy^3\right) = 6xy^2,$$

$$\frac{\partial}{\partial x}\left(\frac{\partial f}{\partial y}\right) = \frac{\partial}{\partial x}\left(3x^2y^2\right) = 6xy^2.$$
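The same check takes one line in sympy (using $f = x^2 y^3$, a typical polynomial example):

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x**2 * y**3

# Differentiate in both orders; the results coincide.
assert sp.diff(f, x, y) == sp.diff(f, y, x) == 6*x*y**2
```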

Example 4. Consider rectilinear coordinates (*x*, *y*) and polar coordinates (*r*, *θ*), where the two are related via $x = r\cos\theta$, $y = r\sin\theta$. We already know from example 2 that:

$$\frac{\partial f}{\partial x} = \cos\theta\,\frac{\partial f}{\partial r} - \frac{\sin\theta}{r}\,\frac{\partial f}{\partial\theta}, \qquad \frac{\partial f}{\partial y} = \sin\theta\,\frac{\partial f}{\partial r} + \frac{\cos\theta}{r}\,\frac{\partial f}{\partial\theta}.$$

Let’s see if we can express the second derivatives with respect to {*x*, *y*} in terms of those with respect to {*r*, *θ*}. It may look horrid, but the calculations can be simplified by thinking of $\frac \partial{\partial x}$ as an operator, i.e. a function which takes functions to other functions! Thus we shall write:

$$\frac{\partial}{\partial x} = \cos\theta\,\frac{\partial}{\partial r} - \frac{\sin\theta}{r}\,\frac{\partial}{\partial\theta}.$$

So to get the second derivative in terms of *x*, we just apply the operator to itself:

$$\frac{\partial^2 f}{\partial x^2} = \left(\cos\theta\,\frac{\partial}{\partial r} - \frac{\sin\theta}{r}\,\frac{\partial}{\partial\theta}\right)\left(\cos\theta\,\frac{\partial f}{\partial r} - \frac{\sin\theta}{r}\,\frac{\partial f}{\partial\theta}\right).$$

Since the operators are all additive (an operator *D* is said to be **additive** if *D*(*f* + *g*) = *Df* + *Dg* for all functions *f* and *g*), we can use the distributive property to expand the RHS. Beware, though, that operators are in general not commutative; for example, by the product law we get:

$$\frac{\partial}{\partial r}\left(\frac{\sin\theta}{r}\,\frac{\partial f}{\partial\theta}\right) = -\frac{\sin\theta}{r^2}\,\frac{\partial f}{\partial\theta} + \frac{\sin\theta}{r}\,\frac{\partial^2 f}{\partial r\,\partial\theta}.$$

Now the reader has enough tools to verify the following:

$$\frac{\partial^2 f}{\partial x^2} = \cos^2\theta\,\frac{\partial^2 f}{\partial r^2} - \frac{2\sin\theta\cos\theta}{r}\,\frac{\partial^2 f}{\partial r\,\partial\theta} + \frac{\sin^2\theta}{r^2}\,\frac{\partial^2 f}{\partial\theta^2} + \frac{\sin^2\theta}{r}\,\frac{\partial f}{\partial r} + \frac{2\sin\theta\cos\theta}{r^2}\,\frac{\partial f}{\partial\theta},$$

$$\frac{\partial^2 f}{\partial y^2} = \sin^2\theta\,\frac{\partial^2 f}{\partial r^2} + \frac{2\sin\theta\cos\theta}{r}\,\frac{\partial^2 f}{\partial r\,\partial\theta} + \frac{\cos^2\theta}{r^2}\,\frac{\partial^2 f}{\partial\theta^2} + \frac{\cos^2\theta}{r}\,\frac{\partial f}{\partial r} - \frac{2\sin\theta\cos\theta}{r^2}\,\frac{\partial f}{\partial\theta}.$$

The case of $\frac{\partial^2 f}{\partial x\,\partial y}$ is left as an exercise for the reader.
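Operator manipulations like these are easy to get wrong by hand. A sympy sketch that applies the operator $\cos\theta\,\frac{\partial}{\partial r} - \frac{\sin\theta}{r}\,\frac{\partial}{\partial\theta}$ twice to a generic $f(r, \theta)$ and compares against the expansion of $\frac{\partial^2 f}{\partial x^2}$ above; summing the two second derivatives recovers the well-known polar form of the Laplacian as a bonus consistency check:

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
f = sp.Function('f')(r, th)

def Dx(g):   # the operator d/dx = cos(th) d/dr - (sin(th)/r) d/dth
    return sp.cos(th)*sp.diff(g, r) - sp.sin(th)/r*sp.diff(g, th)

def Dy(g):   # the operator d/dy = sin(th) d/dr + (cos(th)/r) d/dth
    return sp.sin(th)*sp.diff(g, r) + sp.cos(th)/r*sp.diff(g, th)

fxx, fyy = Dx(Dx(f)), Dy(Dy(f))

s, c = sp.sin(th), sp.cos(th)
claimed_fxx = (c**2*sp.diff(f, r, 2) - 2*s*c/r*sp.diff(f, r, th)
               + s**2/r**2*sp.diff(f, th, 2)
               + s**2/r*sp.diff(f, r) + 2*s*c/r**2*sp.diff(f, th))
assert sp.simplify(fxx - claimed_fxx) == 0

# Bonus: the sum is the Laplacian in polar coordinates.
laplacian = sp.diff(f, r, 2) + sp.diff(f, r)/r + sp.diff(f, th, 2)/r**2
assert sp.simplify(fxx + fyy - laplacian) == 0
```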

**Exercises**

All hints are ROT-13 encoded to avoid spoilers.

- Obligatory mechanical exercises: in each of the following examples, verify that $\frac{\partial^2 f}{\partial x\,\partial y} = \frac{\partial^2 f}{\partial y\,\partial x}$, etc., via explicit computations.
  - $f(x, y) = x^3y + xe^y$.
  - $f(x, y) = \sin(xy) + \frac{x}{y}$.

- If *f* = *f*(*x*, *y*) and *z* = *x* + *y*, is there any relationship between $\frac{\partial f}{\partial x}$, $\frac{\partial f}{\partial y}$ and $\frac{\partial f}{\partial z}$? [ Hint: *Ner nyy guerr cnegvny qrevingvirf jryy-qrsvarq?* ]
- In 3-D space, we can define spherical coordinates $(r, \theta, \phi)$ which satisfy $x = r\sin\phi\cos\theta$, $y = r\sin\phi\sin\theta$, $z = r\cos\phi$. For a function *f* = *f*(*x*, *y*, *z*), express the partial derivatives $\frac{\partial f}{\partial x}$, $\frac{\partial f}{\partial y}$, $\frac{\partial f}{\partial z}$ in terms of spherical coordinates.
- Prove that there does not exist *f*(*x*, *y*) such that $\frac{\partial f}{\partial x} = y$ and $\frac{\partial f}{\partial y} = -x$.
- Given $u = x + y$, $v = xy$, for a function *f*(*x*, *y*), express $\frac{\partial f}{\partial x}$ and $\frac{\partial f}{\partial y}$ in terms of *u* and *v*, and the partial derivatives of *f* with respect to *u* and *v*.
- Find all points on the curve where the curve is tangent to a circle centred at the origin (see diagram below for a sample circle). You may use wolframalpha to numerically obtain the values.
- (*) (**Legendre transform**) Suppose we have a 2-dimensional system with (non-independent) parameters *u*, *v*, *w*. Define the parameter *f* satisfying $\left(\frac{\partial f}{\partial u}\right)_v = w$. Explicitly write down a new parameter *g* in terms of *u*, *v*, *w*, *f* such that $\left(\frac{\partial g}{\partial w}\right)_v = -u$. [ Hint: *lbh pna pbzcyrgryl vtaber bar bs gur cnenzrgref*. ]
- (*) For a set of coordinates {*x*, *t*} in the plane, the differential equation $\frac{\partial^2 f}{\partial t^2} = \frac{\partial^2 f}{\partial x^2}$ is called the **1-dimensional wave equation**. Find all general solutions of this equation. [ Hint: *fhofgvghgr gur gjb inevnoyrf ol gur fhz naq gur qvssrerapr.* ]