Calculus: many variables
Open and closed sets
To make precise statements about functions of many variables, we need to generalize the notions of open and closed intervals to many dimensions.
We want to say that a set of points is "open" if it does not include its boundary. But how exactly can we define the boundary of an arbitrary set of vectors?
We say that a point x is a boundary point of a set of n-vectors if there are points in the set that are arbitrarily close to x, and also points outside the set that are arbitrarily close to x. A point x is an interior point of a set if we can find a (small) number ε such that all points within the distance ε of x are in the set.
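These two notions can be checked numerically. The sketch below uses the closed unit disk in two dimensions as an illustrative set (my choice of example, not from the text): (1, 0) behaves like a boundary point, since every small disk around it contains points both in and out of the set, while (0.5, 0) behaves like an interior point.

```python
import math

def in_closed_unit_disk(x, y):
    """Membership test for the set S = {(x, y) : x^2 + y^2 <= 1}."""
    return x * x + y * y <= 1.0

# Boundary point (1, 0): for every small eps there are points of S and
# points outside S within distance eps of (1, 0).
for eps in (0.1, 0.01, 0.001):
    assert in_closed_unit_disk(1 - eps / 2, 0)       # in S, distance eps/2
    assert not in_closed_unit_disk(1 + eps / 2, 0)   # outside S, distance eps/2

# Interior point (0.5, 0): every point within eps = 0.4 of it lies in S,
# since its distance from the origin is then at most 0.5 + 0.4 = 0.9 < 1.
eps = 0.4
for angle in range(0, 360, 30):
    px = 0.5 + eps * math.cos(math.radians(angle))
    py = eps * math.sin(math.radians(angle))
    assert in_closed_unit_disk(px, py)
```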
The green point in the following figure, for example, is a boundary point of the (two-dimensional) gray set because every disk centered at the point, however small, contains both points in the set and points outside the set. The red point is an interior point because the blue disk (and all smaller disks, as well as some larger ones) contain exclusively points in the set.
Here is a more precise definition of the two notions. Let S be a set of n-vectors. A point x is an interior point of S if there is a number ε > 0 such that every point within the distance ε of x is a member of S. A point x is a boundary point of S if, for every number ε > 0, some point within the distance ε of x is a member of S and some point within the distance ε of x is not a member of S. The set S is open if every point of S is an interior point of S, and closed if every boundary point of S is a member of S.
Differentiability
The derivative of a function of a single variable at a point is a good linear approximation of the function around the point. If no good linear approximation exists at some point x (as is the case if the graph of the function has a "kink" at x), then the function is not differentiable at x. The definition of differentiability for a function of many variables, of which I do not give a precise statement, captures the same idea: a function of many variables is differentiable at a point if there exists a good linear approximation of the function around the point. Like the graph of a differentiable function of a single variable, the graph of a differentiable function of many variables is "smooth", with no "kinks".
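The "kink" can be seen numerically. A minimal sketch, using f(x) = |x| (the standard example, my choice rather than one from the text): the slopes of secant lines through 0 approach 1 from the right and −1 from the left, so no single linear approximation fits on both sides.

```python
def f(x):
    return abs(x)  # graph has a kink at x = 0

# Secant slopes (f(h) - f(0)) / h for small h of each sign.
right_slopes = [(f(h) - f(0)) / h for h in (0.1, 0.01, 0.001)]
left_slopes = [(f(-h) - f(0)) / (-h) for h in (0.1, 0.01, 0.001)]

# The one-sided limits disagree (+1 vs -1), so f is not differentiable at 0.
assert all(abs(s - 1) < 1e-9 for s in right_slopes)
assert all(abs(s + 1) < 1e-9 for s in left_slopes)
```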
In economic theory we often assume that the functions in our models (production functions, utility functions, ...) are differentiable.
Partial derivatives
Let f be a differentiable function of n variables. (The value of f at the point (x1, ..., xn), for example, is f (x1, ..., xn).) Suppose that we fix the values of all variables except xi. Specifically, suppose that xj = cj for all j ≠ i. Denote by g the resulting function of the single variable xi. Precisely, g(xi) = f (c1, ..., ci−1, xi, ci+1, ..., cn) for all xi. The derivative of g at the point ci is called the partial derivative of f with respect to its ith argument at the point (c1, ..., cn). We may characterize the process of obtaining this partial derivative by saying that we "differentiate f with respect to its ith argument, holding all the other arguments fixed".
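The recipe above — fix all arguments but the ith, then differentiate the resulting one-variable function — translates directly into a numerical check. A sketch under my own illustrative example f (x1, x2) = x1²x2 (not from the text): the partial derivative with respect to the first argument at (2, 3) is 2·2·3 = 12, and a central difference applied to the one-variable function g recovers it.

```python
def f(x1, x2):
    return x1 ** 2 * x2

# Fix x2 = 3; g is the resulting function of the single variable x1.
c1, c2 = 2.0, 3.0
def g(x1):
    return f(x1, c2)

# Central-difference approximation of g'(c1), i.e. the partial derivative
# of f with respect to its first argument at (2, 3).
h = 1e-6
partial_1 = (g(c1 + h) - g(c1 - h)) / (2 * h)

print(partial_1)  # approximately 12 (= 2 * c1 * c2)
```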
Following common practice in economics, I usually denote the value of the partial derivative of f with respect to its ith argument at the point (c1, ..., cn) by f 'i(c1, ..., cn). This notation does not work well for a function that itself has a subscript. Suppose, for example, that we are working with the functions gj for j = 1, ..., m, each of which is a function of n variables. How do we denote the partial derivative of gj with respect to its kth argument? I write g'jk, but this notation is not elegant. In the mathematical literature, the notation Di f (c1, ..., cn) is used for the value of the partial derivative of f with respect to its ith argument at the point (c1, ..., cn). This notation is elegant and may be used unambiguously for any function. However, it is not common in economics.
The notation (∂ f /∂xi)(c1, ..., cn) is also sometimes used. Although this notation is clumsy in using six symbols (∂ f /∂xi) where three (Di f ) suffice, it is often used by economists, and I sometimes follow this practice.
Occasionally the argument of a function may be more conveniently referred to by its name than by its index. If I have called the arguments of f by the names w and p, for example (writing f (w, p)), I may write f 'p(w, p) for the value of the partial derivative of f with respect to its second argument at the point (w, p).
Let f be a function of n variables. The fact that each of the n partial derivatives of f exists for all values of the argument of f does not imply that f is differentiable. In fact, it does not even imply that f is continuous! (See the example if you are curious.) However, if all the partial derivatives of f exist and are continuous functions, then f is differentiable, and in fact its derivative is continuous. This result justifies our defining a function to be continuously differentiable if all its partial derivatives exist and all these partial derivatives are continuous functions. (Absent the result, this terminology would be misleading, because it does not contain the word "partial", though it concerns properties of the partial derivatives.)
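The standard counterexample behind the parenthetical remark is f (x, y) = xy/(x² + y²) with f (0, 0) = 0: both partial derivatives exist at the origin (along each axis f is identically 0), yet f equals 1/2 everywhere on the line y = x, so f is not continuous at the origin. A quick numerical sketch:

```python
def f(x, y):
    """Both partials exist at (0, 0), but f is discontinuous there."""
    if x == 0 and y == 0:
        return 0.0
    return x * y / (x ** 2 + y ** 2)

# Along each axis f is identically 0, so both partials at the origin are 0.
assert all(f(t, 0) == 0 and f(0, t) == 0 for t in (0.1, 0.01, 0.001))

# But along the diagonal y = x the value is 1/2, however close to the origin,
# so f does not approach f(0, 0) = 0 and is not continuous there.
assert all(f(t, t) == 0.5 for t in (0.1, 0.01, 0.001))
```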
The derivative of f 'i with respect to its jth argument, evaluated at (x1, ..., xn), is denoted f ''ij(x1, ..., xn) and is called a "cross partial". The following result shows that for functions satisfying relatively mild conditions, the order in which the differentiations are performed does not matter.
Precisely, if the cross partials f ''ij and f ''ji exist and are continuous, then f ''ij(x1, ..., xn) = f ''ji(x1, ..., xn) at every point (x1, ..., xn).
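For a concrete check, take the illustrative function f (x, y) = x³y² (my choice, not from the text): both cross partials equal 6x²y, and central differences applied in either order agree. A sketch:

```python
def f(x, y):
    return x ** 3 * y ** 2

h = 1e-4

def d1(g, x, y):
    """Central-difference approximation to the partial w.r.t. the 1st argument."""
    return (g(x + h, y) - g(x - h, y)) / (2 * h)

def d2(g, x, y):
    """Central-difference approximation to the partial w.r.t. the 2nd argument."""
    return (g(x, y + h) - g(x, y - h)) / (2 * h)

x0, y0 = 1.5, 2.0
f_12 = d2(lambda a, b: d1(f, a, b), x0, y0)  # differentiate in x, then y
f_21 = d1(lambda a, b: d2(f, a, b), x0, y0)  # differentiate in y, then x

# Both approximate 6 * x0**2 * y0 = 27, and they agree.
assert abs(f_12 - f_21) < 1e-5
assert abs(f_12 - 6 * x0 ** 2 * y0) < 1e-3
```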