- Source: Vector-valued function
A vector-valued function, also referred to as a vector function, is a mathematical function of one or more variables whose range is a set of multidimensional vectors or infinite-dimensional vectors. The input of a vector-valued function could be a scalar or a vector (that is, the dimension of the domain could be 1 or greater than 1); the dimension of the function's domain has no relation to the dimension of its range.
Example: Helix
A common example of a vector-valued function is one that depends on a single real parameter t, often representing time, producing a vector r(t) as the result. In terms of the standard unit vectors i, j, k of Cartesian 3-space, these specific types of vector-valued functions are given by expressions such as
\mathbf{r}(t) = f(t)\mathbf{i} + g(t)\mathbf{j} + h(t)\mathbf{k}
where f(t), g(t) and h(t) are the coordinate functions of the parameter t, and the domain of this vector-valued function is the intersection of the domains of the functions f, g, and h. The same function can also be written in angle-bracket notation:
\mathbf{r}(t) = \langle f(t), g(t), h(t) \rangle
The vector r(t) has its tail at the origin and its head at the coordinates evaluated by the function.
For example, the evaluation of the function
\langle 2\cos t,\, 4\sin t,\, t \rangle
near t = 19.5 (between 6π and 6.5π; i.e., somewhat more than 3 rotations) gives a vector lying on an elliptical helix. The helix is the path traced by the tip of the vector as t increases from zero through 8π.
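As a quick numerical illustration, the helix above can be sampled in a few lines of Python (a minimal sketch, not part of the original article; the function name r is chosen to match the notation above):

```python
import numpy as np

def r(t):
    """The helix <2 cos t, 4 sin t, t> as a vector-valued function."""
    return np.array([2 * np.cos(t), 4 * np.sin(t), t])

print(r(19.5))  # the vector near t = 19.5, somewhat more than 3 rotations up the helix

# The path traced by the tip of the vector as t increases from 0 through 8*pi:
ts = np.linspace(0, 8 * np.pi, 400)
path = np.stack([r(t) for t in ts])  # an array of shape (400, 3)
```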
In two dimensions, vector-valued functions can analogously be written as
\mathbf{r}(t) = f(t)\mathbf{i} + g(t)\mathbf{j}
or
\mathbf{r}(t) = \langle f(t), g(t) \rangle
Linear case
In the linear case the function can be expressed in terms of matrices:
\mathbf{y} = A\mathbf{x},
where y is an n × 1 output vector, x is a k × 1 vector of inputs, and A is an n × k matrix of parameters. Closely related is the affine case (linear up to a translation) where the function takes the form
\mathbf{y} = A\mathbf{x} + \mathbf{b},
where in addition b is an n × 1 vector of parameters.
The linear case arises often, for example in multiple regression, where for instance the n × 1 vector \hat{\mathbf{y}} of predicted values of a dependent variable is expressed linearly in terms of a k × 1 vector \hat{\boldsymbol{\beta}} (k < n) of estimated values of model parameters:
\hat{\mathbf{y}} = X\hat{\boldsymbol{\beta}},
in which X (playing the role of A in the previous generic form) is an n × k matrix of fixed (empirically based) numbers.
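To make the linear and affine forms concrete, here is a minimal numerical sketch (all matrix and vector values are invented for illustration):

```python
import numpy as np

A = np.array([[1.0,  2.0],
              [0.0,  1.0],
              [3.0, -1.0]])        # n x k matrix of parameters (n = 3, k = 2)
b = np.array([1.0, 0.0, 2.0])      # n x 1 vector of parameters
x = np.array([0.5, 2.0])           # k x 1 vector of inputs

y_linear = A @ x                   # linear case:  y = A x
y_affine = A @ x + b               # affine case:  y = A x + b
```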
Parametric representation of a surface
A surface is a 2-dimensional set of points embedded in (most commonly) 3-dimensional space. One way to represent a surface is with parametric equations, in which two parameters s and t determine the three Cartesian coordinates of any point on the surface:
(x, y, z) = (f(s,t),\, g(s,t),\, h(s,t)) \equiv \mathbf{F}(s,t).
Here F is a vector-valued function. For a surface embedded in n-dimensional space, one similarly has the representation
(x_1, x_2, \dots, x_n) = (f_1(s,t),\, f_2(s,t),\, \dots,\, f_n(s,t)) \equiv \mathbf{F}(s,t).
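For instance, the unit sphere in 3-dimensional space admits such a representation. The sketch below (an illustrative example, with the roles of the parameters chosen here, not taken from the source) evaluates the vector-valued function F on a grid of (s, t) values:

```python
import numpy as np

def F(s, t):
    """Parametric unit sphere: s is the polar angle, t the azimuthal angle."""
    return np.array([np.sin(s) * np.cos(t),
                     np.sin(s) * np.sin(t),
                     np.cos(s)])

# Evaluate F over a grid of parameter values to sample the whole surface.
s, t = np.meshgrid(np.linspace(0, np.pi, 50),
                   np.linspace(0, 2 * np.pi, 100), indexing="ij")
surface = F(s, t)   # shape (3, 50, 100): x, y, z coordinates over the grid
```

F(s, t) works unchanged on arrays because each coordinate function is applied elementwise.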
Derivative of a three-dimensional vector function
Many vector-valued functions, like scalar-valued functions, can be differentiated by simply differentiating the components in the Cartesian coordinate system. Thus, if
\mathbf{r}(t) = f(t)\mathbf{i} + g(t)\mathbf{j} + h(t)\mathbf{k}
is a vector-valued function, then
\frac{d\mathbf{r}}{dt} = f'(t)\mathbf{i} + g'(t)\mathbf{j} + h'(t)\mathbf{k}.
The vector derivative admits the following physical interpretation: if r(t) represents the position of a particle, then the derivative is the velocity of the particle
\mathbf{v}(t) = \frac{d\mathbf{r}}{dt}.
Likewise, the derivative of the velocity is the acceleration
\frac{d\mathbf{v}}{dt} = \mathbf{a}(t).
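Componentwise differentiation is easy to check numerically. The sketch below (illustrative, reusing the helix from earlier) compares a central-difference approximation of dr/dt with the exact componentwise derivative:

```python
import numpy as np

def r(t):
    return np.array([2 * np.cos(t), 4 * np.sin(t), t])

def v(t):
    # exact componentwise derivative: f'(t) i + g'(t) j + h'(t) k
    return np.array([-2 * np.sin(t), 4 * np.cos(t), 1.0])

t, h = 1.3, 1e-6
v_numeric = (r(t + h) - r(t - h)) / (2 * h)   # central difference
assert np.allclose(v_numeric, v(t), atol=1e-5)
```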
Partial derivative
The partial derivative of a vector function a with respect to a scalar variable q is defined as
\frac{\partial \mathbf{a}}{\partial q} = \sum_{i=1}^{n} \frac{\partial a_i}{\partial q}\, \mathbf{e}_i
where ai is the scalar component of a in the direction of ei. It is also called the direction cosine of a and ei or their dot product. The vectors e1, e2, e3 form an orthonormal basis fixed in the reference frame in which the derivative is being taken.
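A small numerical sketch of this definition (the example function a(q) is invented; its components are taken in a fixed orthonormal basis e1, e2, e3):

```python
import numpy as np

def a(q):
    # components of a in a fixed basis e1, e2, e3
    return np.array([q**2, np.sin(q), 3.0 * q])

q, h = 0.7, 1e-6
da_dq = (a(q + h) - a(q - h)) / (2 * h)  # differentiate each component a_i
assert np.allclose(da_dq, [2 * q, np.cos(q), 3.0], atol=1e-5)  # exact: (2q, cos q, 3)
```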
Ordinary derivative
If a is regarded as a vector function of a single scalar variable, such as time t, then the equation above reduces to the first ordinary time derivative of a with respect to t,
\frac{d\mathbf{a}}{dt} = \sum_{i=1}^{n} \frac{da_i}{dt}\, \mathbf{e}_i.
Total derivative
If the vector a is a function of a number n of scalar variables qr (r = 1, ..., n), and each qr is only a function of time t, then the ordinary derivative of a with respect to t can be expressed, in a form known as the total derivative, as
\frac{d\mathbf{a}}{dt} = \sum_{r=1}^{n} \frac{\partial \mathbf{a}}{\partial q_r} \frac{dq_r}{dt} + \frac{\partial \mathbf{a}}{\partial t}.
Some authors prefer to use capital D to indicate the total derivative operator, as in D/Dt. The total derivative differs from the partial time derivative in that the total derivative accounts for changes in a due to the time variance of the variables qr.
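The total derivative formula can be verified on a concrete example. In the sketch below (all function choices invented), a depends on q1(t) = sin t, q2(t) = t² and explicitly on t:

```python
import numpy as np

q1, q2 = np.sin, lambda t: t**2

def a(t):
    # a(q1, q2, t) = (q1*q2, t + q1, q2), with q1 and q2 themselves functions of t
    return np.array([q1(t) * q2(t), t + q1(t), q2(t)])

t = 0.9
da_dq1 = np.array([q2(t), 1.0, 0.0])        # partial a / partial q1
da_dq2 = np.array([q1(t), 0.0, 1.0])        # partial a / partial q2
da_dt_explicit = np.array([0.0, 1.0, 0.0])  # partial a / partial t
total = da_dq1 * np.cos(t) + da_dq2 * (2 * t) + da_dt_explicit  # dq1/dt = cos t, dq2/dt = 2t

h = 1e-6
numeric = (a(t + h) - a(t - h)) / (2 * h)
assert np.allclose(total, numeric, atol=1e-5)
```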
Reference frames
Whereas for scalar-valued functions there is only a single possible reference frame, to take the derivative of a vector-valued function requires the choice of a reference frame (at least when a fixed Cartesian coordinate system is not implied as such). Once a reference frame has been chosen, the derivative of a vector-valued function can be computed using techniques similar to those for computing derivatives of scalar-valued functions. A different choice of reference frame will, in general, produce a different derivative function. The derivative functions in different reference frames have a specific kinematical relationship.
Derivative of a vector function with nonfixed bases
The above formulas for the derivative of a vector function rely on the assumption that the basis vectors e1, e2, e3 are constant, that is, fixed in the reference frame in which the derivative of a is being taken, and therefore the e1, e2, e3 each has a derivative of identically zero. This often holds true for problems dealing with vector fields in a fixed coordinate system, or for simple problems in physics. However, many complex problems involve the derivative of a vector function in multiple moving reference frames, which means that the basis vectors will not necessarily be constant. In such a case where the basis vectors e1, e2, e3 are fixed in reference frame E, but not in reference frame N, the more general formula for the ordinary time derivative of a vector in reference frame N is
\frac{{}^{\mathrm{N}} d\mathbf{a}}{dt} = \sum_{i=1}^{3} \frac{da_i}{dt}\, \mathbf{e}_i + \sum_{i=1}^{3} a_i \frac{{}^{\mathrm{N}} d\mathbf{e}_i}{dt}
where the superscript N to the left of the derivative operator indicates the reference frame in which the derivative is taken. As shown previously, the first term on the right hand side is equal to the derivative of a in the reference frame where e1, e2, e3 are constant, reference frame E. It also can be shown that the second term on the right hand side is equal to the relative angular velocity of the two reference frames cross multiplied with the vector a itself. Thus, after substitution, the formula relating the derivative of a vector function in two reference frames is
\frac{{}^{\mathrm{N}} d\mathbf{a}}{dt} = \frac{{}^{\mathrm{E}} d\mathbf{a}}{dt} + {}^{\mathrm{N}}\boldsymbol{\omega}^{\mathrm{E}} \times \mathbf{a}
where NωE is the angular velocity of the reference frame E relative to the reference frame N.
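This relation can be verified numerically for a frame rotating at a constant rate; the setup below is an invented illustration (not from the source), with E rotating about the z-axis of N:

```python
import numpy as np

omega = np.array([0.0, 0.0, 0.5])   # angular velocity of E in N (rad/s)

def R(t):
    # orientation of frame E in frame N: rotation about z by angle omega_z * t
    c, s = np.cos(omega[2] * t), np.sin(omega[2] * t)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

a_E = lambda t: np.array([np.cos(t), t, 1.0])  # components of a in frame E
a_N = lambda t: R(t) @ a_E(t)                  # the same vector expressed in N

t, h = 1.2, 1e-6
dN = (a_N(t + h) - a_N(t - h)) / (2 * h)           # derivative taken in N
dE = R(t) @ ((a_E(t + h) - a_E(t - h)) / (2 * h))  # E-frame derivative, expressed in N
assert np.allclose(dN, dE + np.cross(omega, a_N(t)), atol=1e-5)
```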
One common example where this formula is used is to find the velocity of a space-borne object, such as a rocket, in the inertial reference frame using measurements of the rocket's velocity relative to the ground. The velocity NvR in inertial reference frame N of a rocket R located at position rR can be found using the formula
\frac{{}^{\mathrm{N}} d}{dt}(\mathbf{r}^{\mathrm{R}}) = \frac{{}^{\mathrm{E}} d}{dt}(\mathbf{r}^{\mathrm{R}}) + {}^{\mathrm{N}}\boldsymbol{\omega}^{\mathrm{E}} \times \mathbf{r}^{\mathrm{R}},
where NωE is the angular velocity of the Earth relative to the inertial frame N. Since velocity is the derivative of position, NvR and EvR are the derivatives of rR in reference frames N and E, respectively. By substitution,
{}^{\mathrm{N}}\mathbf{v}^{\mathrm{R}} = {}^{\mathrm{E}}\mathbf{v}^{\mathrm{R}} + {}^{\mathrm{N}}\boldsymbol{\omega}^{\mathrm{E}} \times \mathbf{r}^{\mathrm{R}}
where EvR is the velocity vector of the rocket as measured from a reference frame E that is fixed to the Earth.
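In numbers (a rough illustration with invented rocket data, but Earth's actual sidereal rotation rate): for a rocket on the equator climbing radially outward relative to the ground, the cross-product term contributes the roughly 465 m/s eastward speed of the launch site itself:

```python
import numpy as np

omega_NE = np.array([0.0, 0.0, 7.2921159e-5])  # Earth's rotation rate (rad/s), z = spin axis
r_R = np.array([6.378e6, 0.0, 0.0])            # rocket position on the equator (m), invented
v_E = np.array([100.0, 0.0, 0.0])              # climbing radially at 100 m/s relative to the ground, invented

v_N = v_E + np.cross(omega_NE, r_R)            # inertial-frame velocity
# np.cross(omega_NE, r_R) is about [0, 465, 0]: the eastward velocity of the ground
```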
Derivative and vector multiplication
The derivative of a product of vector functions behaves similarly to the derivative of a product of scalar functions. Specifically, in the case of scalar multiplication of a vector, if p is a scalar-valued function of q,
\frac{\partial}{\partial q}(p\mathbf{a}) = \frac{\partial p}{\partial q}\mathbf{a} + p\frac{\partial \mathbf{a}}{\partial q}.
In the case of dot multiplication, for two vectors a and b that are both functions of q,
\frac{\partial}{\partial q}(\mathbf{a} \cdot \mathbf{b}) = \frac{\partial \mathbf{a}}{\partial q} \cdot \mathbf{b} + \mathbf{a} \cdot \frac{\partial \mathbf{b}}{\partial q}.
Similarly, the derivative of the cross product of two vector functions is
\frac{\partial}{\partial q}(\mathbf{a} \times \mathbf{b}) = \frac{\partial \mathbf{a}}{\partial q} \times \mathbf{b} + \mathbf{a} \times \frac{\partial \mathbf{b}}{\partial q}.
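All three product rules can be checked numerically with a generic central-difference helper (the example functions p, a, and b are invented):

```python
import numpy as np

p = lambda q: q**3
a = lambda q: np.array([q, q**2, 1.0])
b = lambda q: np.array([np.sin(q), np.cos(q), q])

def d(f, q, h=1e-6):
    """Central-difference derivative of a scalar- or vector-valued function f at q."""
    return (f(q + h) - f(q - h)) / (2 * h)

q = 0.8
assert np.allclose(d(lambda q: p(q) * a(q), q),
                   d(p, q) * a(q) + p(q) * d(a, q), atol=1e-5)                     # scalar multiple
assert np.allclose(d(lambda q: np.dot(a(q), b(q)), q),
                   np.dot(d(a, q), b(q)) + np.dot(a(q), d(b, q)), atol=1e-5)       # dot product
assert np.allclose(d(lambda q: np.cross(a(q), b(q)), q),
                   np.cross(d(a, q), b(q)) + np.cross(a(q), d(b, q)), atol=1e-5)   # cross product
```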
Derivative of an n-dimensional vector function
A function f of a real number t with values in the space \mathbb{R}^n can be written as
\mathbf{f}(t) = (f_1(t), f_2(t), \ldots, f_n(t)).
Its derivative equals
\mathbf{f}'(t) = (f_1'(t), f_2'(t), \ldots, f_n'(t)).
If f is a function of several variables, say of t \in \mathbb{R}^m, then the partial derivatives of the components of f form an n \times m matrix called the Jacobian matrix of f.
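A finite-difference sketch makes the Jacobian concrete (the example map f: R² → R³ is invented, so here n = 3 and m = 2):

```python
import numpy as np

def f(t):
    t1, t2 = t
    return np.array([t1 * t2, np.sin(t1), t1 + t2**2])

def jacobian(f, t, h=1e-6):
    """n x m matrix of partial derivatives df_i/dt_j, by central differences."""
    t = np.asarray(t, dtype=float)
    cols = []
    for j in range(t.size):
        e = np.zeros_like(t)
        e[j] = h
        cols.append((f(t + e) - f(t - e)) / (2 * h))
    return np.stack(cols, axis=1)

J = jacobian(f, [1.0, 2.0])   # shape (3, 2)
# exact Jacobian at (1, 2): [[2, 1], [cos 1, 0], [1, 4]]
assert np.allclose(J, [[2.0, 1.0], [np.cos(1.0), 0.0], [1.0, 4.0]], atol=1e-5)
```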
Infinite-dimensional vector functions
If the values of a function f lie in an infinite-dimensional vector space X, such as a Hilbert space, then f may be called an infinite-dimensional vector function.
Functions with values in a Hilbert space
If the argument of f is a real number and X is a Hilbert space, then the derivative of f at a point t can be defined as in the finite-dimensional case:
\mathbf{f}'(t) = \lim_{h \to 0} \frac{\mathbf{f}(t+h) - \mathbf{f}(t)}{h}.
Most results of the finite-dimensional case also hold in the infinite-dimensional case, mutatis mutandis. Differentiation can also be defined for functions of several variables (e.g., t \in \mathbb{R}^n, or even t \in Y, where Y is an infinite-dimensional vector space).
N.B. If X is a Hilbert space, then one can easily show that any derivative (and any other limit) can be computed componentwise: if
\mathbf{f} = (f_1, f_2, f_3, \ldots)
(i.e., \mathbf{f} = f_1\mathbf{e}_1 + f_2\mathbf{e}_2 + f_3\mathbf{e}_3 + \cdots, where \mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3, \ldots is an orthonormal basis of the space X), and \mathbf{f}'(t) exists, then
\mathbf{f}'(t) = (f_1'(t), f_2'(t), f_3'(t), \ldots).
However, the existence of a componentwise derivative does not guarantee the existence of a derivative, as componentwise convergence in a Hilbert space does not guarantee convergence with respect to the actual topology of the Hilbert space.
Other infinite-dimensional vector spaces
Most of the above hold for other topological vector spaces X too. However, not as many classical results hold in the Banach space setting; for example, an absolutely continuous function with values in a suitable Banach space need not have a derivative anywhere. Moreover, in most Banach spaces there are no orthonormal bases.
See also
Vector field
Coordinate vector
Curve
Multivalued function
Parametric surface
Position vector
Parametrization
External links
Vector-valued functions and their properties (from Lake Tahoe Community College)
Weisstein, Eric W. "Vector Function". MathWorld.
Everything2 article
3 Dimensional vector-valued functions (from East Tennessee State University)
"Position Vector Valued Functions" Khan Academy module