maa.org/sites/default/files/Kominers-CMJ0918817.pdf
At first glance, this problem appears to be quite difficult. Beyond the likely difficulty
of finding such a matrix $N(x)$, it is not even immediately clear how one would prove
that a matrix $N(x)$ is actually a solution without a great deal of matrix algebra. However, this problem is not as hard as it seems. In fact, it is one of a large class of problems
that can be solved via a surprising method based upon single-variable calculus.
In this JOURNAL, Khan [2] used nilpotent matrices and Taylor series to find matrix
functions satisfying the exponential functional equation, $f (x + y) = f (x) · f (y)$. His
method is an example of a much more general theory of matrix power series due to
Weyr [4], which can be used to find matrix functions satisfying a variety of functional
equations. (Rinehart [3] gives an excellent survey of Weyr’s approach. Higham [1,
ch. 4] gives a more comprehensive account, as well as further applications.)
We say that a set of real-valued functions $\{f_{i}(x)\}_{i=1}^{n} \subset \mathcal{C}^{\infty}(\mathbb{R})$ satisfies an analytic functional equation $E$ if there is an analytic function $E$ such that
$$
E\left(f_{1}, \ldots, f_{n}\right)(x)=0
$$
identically for all $x \in \mathbb{R}$. For example, the trigonometric functions $f_{1}(x)=\sin (x)$ and $f_{2}(x)=\cos (x)$ satisfy the analytic functional equation
$$
E\left(f_{1}, f_{2}\right)=f_{1}^{2}+f_{2}^{2}-1 \equiv 0
$$
Now, for any set of real-valued functions $\left\{f_{i}(x)\right\}_{i=1}^{n} \subset \mathcal{C}^{\infty}(\mathbb{R})$ satisfying the analytic functional equation $E$, we will find a set of associated matrix functions $\left\{A_{i}(x)\right\}_{i=1}^{n}$ satisfying the same functional equation $E$.
Approximating each $f_{i}$ by its Taylor expansion about the origin, we obtain
$$
f_{i}(x)=f_{i}(0) x^{0}+\frac{f_{i}^{(1)}(0) x}{1 !}+\frac{f_{i}^{(2)}(0) x^{2}}{2 !}+\cdots+\frac{f_{i}^{(k)}(0) x^{k}}{k !}+\cdots
$$
where $f_{i}^{(j)}$ is the $j$ th derivative of the function $f_{i}$. We let $A$ be any nilpotent matrix with index of nilpotence $k$ and then take
$$
\begin{aligned}
A_{i}(x):=f_{i}(A x) &=f_{i}(0) I+\frac{f_{i}^{(1)}(0) A x}{1 !}+\frac{f_{i}^{(2)}(0) A^{2} x^{2}}{2 !}+\cdots \\
&=f_{i}(0) I+\frac{f_{i}^{(1)}(0) A x}{1 !}+\frac{f_{i}^{(2)}(0) A^{2} x^{2}}{2 !}+\cdots+\frac{f_{i}^{(k-1)}(0) A^{k-1} x^{k-1}}{(k-1) !}
\end{aligned}
$$
If the functions $\left\{f_{i}(x)\right\}_{i=1}^{n}$ satisfy the analytic functional equation $E$, then the Taylor series of the $f_{i}$ do as well, as $E$ is continuous. Thus, the matrix functions $\left\{A_{i}(x)\right\}_{i=1}^{n}$ found from the Taylor series of the $f_{i}$ must also satisfy the functional equation $E$.
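This construction is mechanical, so it is easy to carry out by computer. The sketch below (the helper `matrix_function` and the $2 \times 2$ Jordan-block example are ours, not from the text, and the SymPy library is assumed to be available) truncates the Taylor series of $f_{i}(Ax)$ at the index of nilpotence $k$:

```python
# A minimal sketch of the construction, assuming SymPy is available.
# For a nilpotent A with A**k == 0, the Taylor series of f(A*x)
# terminates after k terms, so A_i(x) := f_i(A*x) is a matrix polynomial.
import sympy as sp

x = sp.symbols('x')

def matrix_function(f, A, k):
    """Truncated Taylor series of f(A*x) for a nilpotent matrix A (A**k == 0)."""
    n = A.shape[0]
    F = sp.zeros(n, n)
    for j in range(k):
        # j-th Taylor coefficient f^(j)(0) / j!
        coeff = sp.diff(f(x), x, j).subs(x, 0) / sp.factorial(j)
        F += coeff * A**j * x**j
    return F.expand()

# Example: the 2x2 Jordan block, for which A**2 == 0,
# so every series truncates after the linear term.
A = sp.Matrix([[0, 1], [0, 0]])
S = matrix_function(sp.sin, A, 2)   # sin(A*x) = A*x
C = matrix_function(sp.cos, A, 2)   # cos(A*x) = I
```

Even in this tiny example, the matrices inherit the functional equation: $S^{2}+C^{2}=I$.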
We begin with a simple example. For aesthetic reasons, we will work with the nilpotent matrix
$$
A=\left(\begin{array}{llll}
0 & 0 & 12 & 0 \\
0 & 0 & 0 & 12 \\
6 & 6 & 0 & 0 \\
-6 & -6 & 0 & 0
\end{array}\right)
$$
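As a quick sanity check (a verification sketch, assuming SymPy is available), one can confirm that $A^{3} \neq 0$ while $A^{4}=0$, so $A$ has index of nilpotence $4$:

```python
# Verify that A is nilpotent with index of nilpotence 4:
# A**3 != 0 but A**4 == 0 (a check sketch; assumes SymPy is available).
import sympy as sp

A = sp.Matrix([
    [ 0,  0, 12,  0],
    [ 0,  0,  0, 12],
    [ 6,  6,  0,  0],
    [-6, -6,  0,  0],
])

A3 = A**3   # nonzero
A4 = A**4   # the zero matrix
```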
We obtain from the Taylor series for $f_{1}=\sin (x)$ and $f_{2}=\cos (x)$ the matrices
$$
\begin{aligned}
&A_{1}(x)=f_{1}(A x)=A x-\frac{A^{3} x^{3}}{6}=\left(\begin{array}{llll}
0 & 0 & 12 x-144 x^{3} & -144 x^{3} \\
0 & 0 & 144 x^{3} & 144 x^{3}+12 x \\
6 x & 6 x & 0 & 0 \\
-6 x & -6 x & 0 & 0
\end{array}\right), \\
&A_{2}(x)=f_{2}(A x)=I-\frac{A^{2} x^{2}}{2}=\left(\begin{array}{llll}
1-36 x^{2} & -36 x^{2} & 0 & 0 \\
36 x^{2} & 36 x^{2}+1 & 0 & 0 \\
0 & 0 & 1-36 x^{2} & -36 x^{2} \\
0 & 0 & 36 x^{2} & 36 x^{2}+1
\end{array}\right) .
\end{aligned}
$$
We have immediately from this construction that $A_{1}(x)$ and $A_{2}(x)$ commute. More interestingly, these matrix functions satisfy the trigonometric functional equations. We therefore find the familiar identity
$$
\left(A_{1}(x)\right)^{2}+\left(A_{2}(x)\right)^{2}=I .
$$
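This identity is straightforward to verify symbolically. The sketch below (assuming SymPy is available) enters the two matrices from the display above and checks both that they commute and that their squares sum to $I$:

```python
# Check that A_1(x) and A_2(x) commute and satisfy
# A_1(x)**2 + A_2(x)**2 == I (a verification sketch; assumes SymPy).
import sympy as sp

x = sp.symbols('x')

A1 = sp.Matrix([
    [   0,    0, 12*x - 144*x**3,       -144*x**3],
    [   0,    0,        144*x**3, 144*x**3 + 12*x],
    [ 6*x,  6*x,               0,               0],
    [-6*x, -6*x,               0,               0],
])

A2 = sp.Matrix([
    [1 - 36*x**2,    -36*x**2,           0,           0],
    [    36*x**2, 36*x**2 + 1,           0,           0],
    [          0,           0, 1 - 36*x**2,    -36*x**2],
    [          0,           0,     36*x**2, 36*x**2 + 1],
])
```

Both checks reduce to polynomial identities in $x$, so expanding the entries settles them exactly.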
Similarly, we obtain analogues of the "double-angle" formulas,
$$
\begin{aligned}
&A_{1}(2 x)=2 A_{1}(x) A_{2}(x) \\
&A_{2}(2 x)=\left(A_{2}(x)\right)^{2}-\left(A_{1}(x)\right)^{2}=2\left(A_{2}(x)\right)^{2}-I
\end{aligned}
$$
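These, too, can be checked by machine. In the sketch below (assuming SymPy is available), `A1` and `A2` are hypothetical helpers returning $A_{1}(t)$ and $A_{2}(t)$ from the displays above, so each double-angle analogue reduces to a polynomial identity:

```python
# Symbolic check of the double-angle analogues
# (a verification sketch; assumes SymPy is available).
import sympy as sp

x = sp.symbols('x')

def A1(t):
    """A_1(t) = sin(A*t) for the 4x4 nilpotent A above."""
    return sp.Matrix([
        [   0,    0, 12*t - 144*t**3,       -144*t**3],
        [   0,    0,        144*t**3, 144*t**3 + 12*t],
        [ 6*t,  6*t,               0,               0],
        [-6*t, -6*t,               0,               0],
    ])

def A2(t):
    """A_2(t) = cos(A*t)."""
    return sp.Matrix([
        [1 - 36*t**2,    -36*t**2,           0,           0],
        [    36*t**2, 36*t**2 + 1,           0,           0],
        [          0,           0, 1 - 36*t**2,    -36*t**2],
        [          0,           0,     36*t**2, 36*t**2 + 1],
    ])
```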
Although the matrices found via this method need not be nonsingular in general, the matrix $A_{2}(x)$ is, since $\operatorname{det} A_{2}(x)=1 \neq 0$. We can therefore invert $A_{2}(x)$ and obtain an analogue of the secant-tangent trigonometric square identity:
$$
\left(A_{2}^{-1}(x)\right)^{2}=I+\left(A_{1}(x) A_{2}^{-1}(x)\right)^{2} .
$$
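Since $\operatorname{det} A_{2}(x)=1$, the inverse has polynomial entries, and the identity can again be confirmed symbolically (a verification sketch, assuming SymPy is available; `Sec` and `Tan` are our names for the matrix analogues of secant and tangent):

```python
# Check the secant-tangent analogue (A_2^{-1})**2 == I + (A_1 * A_2^{-1})**2
# (a verification sketch; assumes SymPy is available).
import sympy as sp

x = sp.symbols('x')

A1 = sp.Matrix([
    [   0,    0, 12*x - 144*x**3,       -144*x**3],
    [   0,    0,        144*x**3, 144*x**3 + 12*x],
    [ 6*x,  6*x,               0,               0],
    [-6*x, -6*x,               0,               0],
])

A2 = sp.Matrix([
    [1 - 36*x**2,    -36*x**2,           0,           0],
    [    36*x**2, 36*x**2 + 1,           0,           0],
    [          0,           0, 1 - 36*x**2,    -36*x**2],
    [          0,           0,     36*x**2, 36*x**2 + 1],
])

# det A_2(x) = 1, so the inverse has polynomial entries.
Sec = A2.inv().applyfunc(sp.simplify)
Tan = (A1 * Sec).expand()   # analogue of tan(x) = sin(x)/cos(x)
```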