# Frobenius solution to the hypergeometric equation

In the following we solve the second-order differential equation called the hypergeometric differential equation using the Frobenius method, named after Ferdinand Georg Frobenius. This is a series method: we assume that the solution takes the form of a generalized power series and determine its coefficients. It is the standard method for linear ordinary differential equations near a regular singular point.

The solution of the hypergeometric differential equation is very important. For instance, Legendre's differential equation can be shown to be a special case of the hypergeometric differential equation. Hence, by solving the hypergeometric differential equation, one may obtain the solutions of Legendre's differential equation after making the necessary substitutions. For more details, see the article on the hypergeometric differential equation.

We shall prove that this equation has three singularities, namely at "x" = 0, "x" = 1 and at infinity. However, as these will turn out to be regular singular points, we will be able to assume a solution in the form of a series. Since this is a second-order differential equation, we must have two linearly independent solutions.

The problem, however, will be that our assumed solutions may or may not be independent, or, worse, may not even be defined (depending on the values of the parameters of the equation). This is why we shall study the different cases for the parameters and modify our assumed solution accordingly.

= The equation =

Solve the hypergeometric equation around all singularities:

:$x(1-x)y'' + \left[\gamma - (1+\alpha+\beta)x\right]y' - \alpha\beta y = 0.$
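As a quick numerical sanity check, the Gauss series ${}_2F_1(\alpha, \beta; \gamma; x) = \sum_r \frac{(\alpha)_r (\beta)_r}{(\gamma)_r \, r!} x^r$, which the derivation below recovers as a solution around "x" = 0, can be verified to satisfy this equation. A minimal Python sketch with hypothetical sample parameter values:

```python
# Truncated Gauss series 2F1(alpha, beta; gamma; x) plugged into the
# hypergeometric equation x(1-x)y'' + (gamma - (1+alpha+beta)x)y' - alpha*beta*y = 0.
alpha, beta, gamma = 0.5, 1.5, 2.25   # hypothetical sample values
x, R = 0.2, 40                        # evaluation point |x| < 1, truncation order

# Series coefficients c_r = (alpha)_r (beta)_r / ((gamma)_r r!), built iteratively.
coef = [1.0]
for r in range(R):
    coef.append(coef[r] * (alpha + r) * (beta + r) / ((gamma + r) * (r + 1)))

# The truncated series and its first two derivatives, term by term.
y   = sum(c * x**r for r, c in enumerate(coef))
yp  = sum(r * c * x**(r - 1) for r, c in enumerate(coef) if r >= 1)
ypp = sum(r * (r - 1) * c * x**(r - 2) for r, c in enumerate(coef) if r >= 2)

# The residual of the equation vanishes up to the truncation error of order x^R.
residual = x * (1 - x) * ypp + (gamma - (1 + alpha + beta) * x) * yp - alpha * beta * y
assert abs(residual) < 1e-10
```

The tolerance is generous: with |x| < 1 the neglected tail is of order x^R and far below rounding error here.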

= Solution around "x" = 0 =

Let

:$P_0(x) = -\alpha\beta, \quad P_1(x) = \gamma - (1+\alpha+\beta)x, \quad P_2(x) = x(1-x),$

so that the equation reads $P_2(x)y'' + P_1(x)y' + P_0(x)y = 0$. Then

:$P_2(0) = 0, \quad P_2(1) = 0.$

Hence, "x" = 0 and "x" = 1 are singular points. Let's start with "x" = 0. To see if it is regular, we study the following limits:

:$\lim_{x \to 0} \frac{x P_1(x)}{P_2(x)} = \lim_{x \to 0} \frac{x\left(\gamma - (1+\alpha+\beta)x\right)}{x(1-x)} = \gamma, \qquad \lim_{x \to 0} \frac{x^2 P_0(x)}{P_2(x)} = \lim_{x \to 0} \frac{-\alpha\beta x^2}{x(1-x)} = 0.$

Hence, both limits exist and "x" = 0 is a regular singular point. Therefore, we assume the solution takes the form

:$y = \sum_{r=0}^{\infty} a_r x^{r+c}$

with "a"0 ≠ 0. Hence,

:$y' = \sum_{r=0}^{\infty} a_r (r+c) x^{r+c-1} \quad \text{and} \quad y'' = \sum_{r=0}^{\infty} a_r (r+c)(r+c-1) x^{r+c-2}.$

Substituting these into the hypergeometric equation, we get

:$x(1-x)\sum_{r=0}^{\infty} a_r(r+c)(r+c-1)x^{r+c-2} + \left[\gamma - (1+\alpha+\beta)x\right]\sum_{r=0}^{\infty} a_r(r+c)x^{r+c-1} - \alpha\beta\sum_{r=0}^{\infty} a_r x^{r+c} = 0.$

That is,

:$\sum_{r=0}^{\infty} a_r(r+c)(r+c-1)x^{r+c-1} - \sum_{r=0}^{\infty} a_r(r+c)(r+c-1)x^{r+c} + \gamma\sum_{r=0}^{\infty} a_r(r+c)x^{r+c-1} - (1+\alpha+\beta)\sum_{r=0}^{\infty} a_r(r+c)x^{r+c} - \alpha\beta\sum_{r=0}^{\infty} a_r x^{r+c} = 0.$

In order to simplify this equation, we need all powers to be the same, equal to "r" + "c" - 1, the smallest power. Hence, we switch the indices as follows:

:$\sum_{r=0}^{\infty} a_r(r+c)(r+c-1)x^{r+c-1} + \gamma\sum_{r=0}^{\infty} a_r(r+c)x^{r+c-1} - \sum_{r=1}^{\infty} a_{r-1}(r+c-1)(r+c-2)x^{r+c-1} - (1+\alpha+\beta)\sum_{r=1}^{\infty} a_{r-1}(r+c-1)x^{r+c-1} - \alpha\beta\sum_{r=1}^{\infty} a_{r-1}x^{r+c-1} = 0.$

Thus, isolating the first term of the sums starting from 0 we get

:$a_0\left(c(c-1) + \gamma c\right)x^{c-1} + \sum_{r=1}^{\infty}\left[a_r\left((r+c)(r+c-1) + \gamma(r+c)\right) - a_{r-1}\left((r+c-1)(r+c-2) + (1+\alpha+\beta)(r+c-1) + \alpha\beta\right)\right]x^{r+c-1} = 0.$

Now, from the linear independence of all powers of "x", that is, of the functions 1, "x", "x"2, etc., the coefficients of "x"k vanish for all "k". Hence, from the first term, we have

:$a_0\left(c(c-1) + \gamma c\right) = 0,$

which is the indicial equation. Since "a"0 ≠ 0, we have

:$c(c - 1 + \gamma) = 0.$

Hence,

:$c_1 = 0 \quad \text{and} \quad c_2 = 1 - \gamma.$

Also, from the rest of the terms, we have

:$a_r\left((r+c)(r+c-1) + \gamma(r+c)\right) - a_{r-1}\left((r+c-1)(r+c-2) + (1+\alpha+\beta)(r+c-1) + \alpha\beta\right) = 0.$

Hence,

:$a_r = \frac{(r+c-1)(r+c-2) + (1+\alpha+\beta)(r+c-1) + \alpha\beta}{(r+c)(r+c-1+\gamma)} a_{r-1}.$

But

:$(r+c-1)(r+c-2) + (1+\alpha+\beta)(r+c-1) + \alpha\beta = (r+c-1)^2 + (\alpha+\beta)(r+c-1) + \alpha\beta = (r+c-1+\alpha)(r+c-1+\beta).$

Hence, we get the recurrence relation

:$a_r = \frac{(r+c-1+\alpha)(r+c-1+\beta)}{(r+c)(r+c-1+\gamma)} a_{r-1}, \quad r \ge 1.$

Let's now simplify this relation by giving "a""r" in terms of "a"0 instead of "a""r" − 1. From the recurrence relation (note: below, expressions of the form ("u")"r" refer to the Pochhammer symbol),

:$a_1 = \frac{(c+\alpha)(c+\beta)}{(c+1)(c+\gamma)} a_0, \quad a_2 = \frac{(c+1+\alpha)(c+1+\beta)}{(c+2)(c+1+\gamma)} a_1 = \frac{(c+\alpha)_2 (c+\beta)_2}{(c+1)_2 (c+\gamma)_2} a_0, \quad \ldots$

As we can see,

:$a_r = \frac{(c+\alpha)_r (c+\beta)_r}{(c+1)_r (c+\gamma)_r} a_0.$

Hence, our assumed solution takes the form

:$y = a_0 \sum_{r=0}^{\infty} \frac{(c+\alpha)_r (c+\beta)_r}{(c+1)_r (c+\gamma)_r} x^{r+c}.$
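The passage from the recurrence relation to this closed form in Pochhammer symbols can be checked with exact rational arithmetic. A small Python sketch (the parameter values are hypothetical, chosen so that no denominator vanishes):

```python
from fractions import Fraction

def pochhammer(u, r):
    """Rising factorial (u)_r = u (u+1) ... (u+r-1)."""
    p = Fraction(1)
    for k in range(r):
        p *= u + k
    return p

# Hypothetical sample values.
alpha, beta, gamma, c = Fraction(1, 2), Fraction(3), Fraction(5, 4), Fraction(0)

# a_r from the recurrence a_r = a_{r-1} (r+c-1+alpha)(r+c-1+beta) / ((r+c)(r+c-1+gamma)).
a = [Fraction(1)]
for r in range(1, 11):
    a.append(a[r - 1] * (r + c - 1 + alpha) * (r + c - 1 + beta)
             / ((r + c) * (r + c - 1 + gamma)))

# Closed form a_r = a_0 (c+alpha)_r (c+beta)_r / ((c+1)_r (c+gamma)_r).
closed = [pochhammer(c + alpha, r) * pochhammer(c + beta, r)
          / (pochhammer(c + 1, r) * pochhammer(c + gamma, r)) for r in range(11)]

assert a == closed   # both routes give exactly the same coefficients
```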

We are now ready to study the solutions corresponding to the different cases for "c"1 − "c"2 = γ − 1 (this reduces to studying the nature of the parameter γ: whether or not it is an integer).

Analysis of the solution in terms of the difference γ − 1 of the two roots

γ not an integer

Then "y"1 = "y"|"c" = 0 and "y"2 = "y"|"c" = 1 − γ. Since

:$y = a_0 \sum_{r=0}^{\infty} \frac{(c+\alpha)_r (c+\beta)_r}{(c+1)_r (c+\gamma)_r} x^{r+c},$

we have

:$y_1 = a_0 \sum_{r=0}^{\infty} \frac{(\alpha)_r (\beta)_r}{(1)_r (\gamma)_r} x^r = a_0 \, {}_2F_1(\alpha, \beta; \gamma; x),$

:$y_2 = a_0 \sum_{r=0}^{\infty} \frac{(\alpha+1-\gamma)_r (\beta+1-\gamma)_r}{(2-\gamma)_r (1)_r} x^{r+1-\gamma} = a_0 x^{1-\gamma} \, {}_2F_1(\alpha-\gamma+1, \beta-\gamma+1; 2-\gamma; x).$

Hence, "y" = "A"′"y"1 + "B"′"y"2. Let "A"′"a"0 = "A" and "B"′"a"0 = "B". Then

:$y = A \, {}_2F_1(\alpha, \beta; \gamma; x) + B x^{1-\gamma} \, {}_2F_1(\alpha-\gamma+1, \beta-\gamma+1; 2-\gamma; x).$
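Both branches can be confirmed to solve the equation termwise: substituting the series into the differential equation turns it into the condition that every coefficient of a power of "x" vanishes, which exact arithmetic verifies for "c" = 0 and "c" = 1 − γ alike. A sketch with hypothetical parameter values:

```python
from fractions import Fraction

def pochhammer(u, r):
    """Rising factorial (u)_r."""
    p = Fraction(1)
    for k in range(r):
        p *= u + k
    return p

# Hypothetical sample parameters with gamma not an integer.
alpha, beta, gamma = Fraction(1, 3), Fraction(2), Fraction(3, 2)

def operator_coefficients(c, R):
    """Coefficients of x^{r+c-1}, r = 0..R, after applying
    x(1-x)y'' + (gamma - (1+alpha+beta)x)y' - alpha*beta*y
    to y = sum_r a_r x^{r+c}, with a_r in Pochhammer closed form."""
    a = [pochhammer(c + alpha, r) * pochhammer(c + beta, r)
         / (pochhammer(c + 1, r) * pochhammer(c + gamma, r)) for r in range(R + 1)]
    out = [a[0] * (c * (c - 1) + gamma * c)]   # indicial term
    for r in range(1, R + 1):
        out.append(a[r] * ((r + c) * (r + c - 1) + gamma * (r + c))
                   - a[r - 1] * ((r + c - 1) * (r + c - 2)
                                 + (1 + alpha + beta) * (r + c - 1) + alpha * beta))
    return out

# Every retained coefficient vanishes for both indicial roots c = 0 and c = 1 - gamma.
for c in (Fraction(0), 1 - gamma):
    assert all(v == 0 for v in operator_coefficients(c, 12))
```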

γ = 1

Then "y"1 = "y"|"c" = 0. Since γ = 1, the two roots of the indicial equation coincide, and we have

:$y_1 = a_0 \sum_{r=0}^{\infty} \frac{(\alpha)_r (\beta)_r}{\left((1)_r\right)^2} x^r = a_0 \, {}_2F_1(\alpha, \beta; 1; x).$

A second, linearly independent solution is obtained as

:$y_2 = \left.\frac{\partial y}{\partial c}\right|_{c = 0}.$

To calculate this derivative, note that with γ = 1 our assumed solution is

:$y = a_0 \sum_{r=0}^{\infty} \frac{(c+\alpha)_r (c+\beta)_r}{\left((c+1)_r\right)^2} x^{r+c}.$

Differentiating each term logarithmically with respect to "c" (using $\frac{\partial}{\partial c}(c+u)_r = (c+u)_r \sum_{k=0}^{r-1}\frac{1}{c+u+k}$ and $\frac{\partial}{\partial c}x^{r+c} = x^{r+c}\ln x$) and setting "c" = 0, we get

:$y_2 = a_0 \sum_{r=0}^{\infty} \frac{(\alpha)_r (\beta)_r}{\left((1)_r\right)^2} \left(\ln x + \sum_{k=0}^{r-1}\left(\frac{1}{\alpha+k} + \frac{1}{\beta+k} - \frac{2}{1+k}\right)\right) x^r.$

Hence, "y" = "C"′"y"1 + "D"′"y"2. Let "C"′"a"0 = "C" and "D"′"a"0 = "D". Then

:$y = C \, {}_2F_1(\alpha, \beta; 1; x) + D \sum_{r=0}^{\infty} \frac{(\alpha)_r (\beta)_r}{\left((1)_r\right)^2} \left(\ln x + \sum_{k=0}^{r-1}\left(\frac{1}{\alpha+k} + \frac{1}{\beta+k} - \frac{2}{1+k}\right)\right) x^r.$
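The logarithmic formula for "y"2 is just the termwise "c"-derivative, so it can be cross-checked by automatic differentiation with dual numbers. The sketch below uses hypothetical sample values; the class `Dual` and the truncation order `R` are ours, not part of the derivation:

```python
import math

class Dual:
    """Numbers a + b*eps with eps**2 = 0: the b-part tracks d/dc automatically."""
    def __init__(self, a, b=0.0):
        self.a, self.b = float(a), float(b)
    def _lift(self, o):
        return o if isinstance(o, Dual) else Dual(o)
    def __add__(self, o):
        o = self._lift(o)
        return Dual(self.a + o.a, self.b + o.b)
    __radd__ = __add__
    def __mul__(self, o):
        o = self._lift(o)
        return Dual(self.a * o.a, self.a * o.b + self.b * o.a)
    __rmul__ = __mul__
    def __truediv__(self, o):
        o = self._lift(o)
        return Dual(self.a / o.a, (self.b * o.a - self.a * o.b) / (o.a * o.a))

def x_pow(x, c):
    """x**c for a dual exponent c, using d/dc x**c = x**c * ln(x)."""
    v = x ** c.a
    return Dual(v, v * math.log(x) * c.b)

# Hypothetical sample values; gamma = 1, |x| < 1 so the series converges.
alpha, beta, x, R = 0.3, 1.7, 0.4, 30

# Dual part of y(c) = sum_r (c+alpha)_r (c+beta)_r / ((c+1)_r)^2 x^{r+c} at c = 0.
c = Dual(0.0, 1.0)
y2_dual = Dual(0.0)
for r in range(R + 1):
    term = x_pow(x, c + r)
    for k in range(r):
        term = term * (c + alpha + k) * (c + beta + k) / ((c + 1 + k) * (c + 1 + k))
    y2_dual = y2_dual + term

# Closed form of the logarithmic solution (a_0 = 1), truncated at the same order.
y2_closed, p = 0.0, 1.0
for r in range(R + 1):
    inner = math.log(x) + sum(1 / (alpha + k) + 1 / (beta + k) - 2 / (1 + k)
                              for k in range(r))
    y2_closed += p * inner * x ** r
    p *= (alpha + r) * (beta + r) / (1 + r) ** 2

assert abs(y2_dual.b - y2_closed) < 1e-9
```

Since both sums are truncated at the same order, they agree term by term up to rounding error.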

γ an integer and γ ≠ 1

γ ≤ 0

From the recurrence relation

:$a_r = \frac{(r+c-1+\alpha)(r+c-1+\beta)}{(r+c)(r+c-1+\gamma)} a_{r-1},$

we see that when "c" = 0 (the smaller root), the factor "r" + "c" − 1 + γ in the denominator vanishes at "r" = 1 − γ, so "a"1 − γ → ∞. Hence, we must make the substitution "a"0 = "b"0("c" − "c""i"), where "c""i" is the root for which our solution is infinite. Hence, we take "a"0 = "b"0"c" and our assumed solution takes the new form

:$y_b = b_0 c \sum_{r=0}^{\infty} \frac{(c+\alpha)_r (c+\beta)_r}{(c+1)_r (c+\gamma)_r} x^{r+c}.$

Then "y"1 = "y""b"|"c" = 0. As we can see, all terms before

:$\frac{(c+\alpha)_{1-\gamma} (c+\beta)_{1-\gamma}}{(c+1)_{1-\gamma} (c+\gamma)_{1-\gamma}} x^{1-\gamma+c}$

vanish because of the "c" in the numerator. Starting from this term, however, the "c" in the numerator cancels. To see this, note that

:$(c+\gamma)_{1-\gamma} = (c+\gamma)(c+\gamma+1) \cdots c.$

Hence, our solution takes the form

:$y_1 = \frac{b_0}{(\gamma)_{-\gamma}} \sum_{r=1-\gamma}^{\infty} \frac{(\alpha)_r (\beta)_r}{(1)_r (1)_{r+\gamma-1}} x^r.$

Now,

:$y_2 = \left.\frac{\partial y_b}{\partial c}\right|_{c = 1-\gamma}.$

Differentiating "y""b" with respect to "c" as in the case γ = 1 above (the calculation again produces logarithmic terms) and evaluating at "c" = 1 − γ gives "y"2. Hence, "y" = "E"′"y"1 + "F"′"y"2, where "E"′"b"0 = "E" and "F"′"b"0 = "F".

γ > 1

From the recurrence relation

:$a_r = \frac{(r+c-1+\alpha)(r+c-1+\beta)}{(r+c)(r+c-1+\gamma)} a_{r-1},$

we see that when "c" = 1 − γ (the smaller root), the factor "r" + "c" in the denominator vanishes at "r" = γ − 1, so "a"γ − 1 → ∞. Hence, we must make the substitution "a"0 = "b"0("c" − "c""i"), where "c""i" is the root for which our solution is infinite. Hence, we take "a"0 = "b"0("c" + γ − 1) and our assumed solution takes the new form

:$y_b = b_0 (c+\gamma-1) \sum_{r=0}^{\infty} \frac{(c+\alpha)_r (c+\beta)_r}{(c+1)_r (c+\gamma)_r} x^{r+c}.$

Then "y"1 = "y""b"|"c" = 1 − γ. All terms before

:$\frac{(c+\alpha)_{\gamma-1} (c+\beta)_{\gamma-1}}{(c+1)_{\gamma-1} (c+\gamma)_{\gamma-1}} x^{\gamma-1+c}$

vanish because of the "c" + γ − 1 in the numerator. Starting from this term, however, the "c" + γ − 1 in the numerator cancels. To see this, note that

:$(c+1)_{\gamma-1} = (c+1)(c+2) \cdots (c+\gamma-1).$

Hence, our solution takes the form

:$y_1 = \frac{b_0}{(2-\gamma)_{\gamma-2}} \sum_{r=\gamma-1}^{\infty} \frac{(\alpha+1-\gamma)_r (\beta+1-\gamma)_r}{(1)_{r+1-\gamma} (1)_r} x^{r+1-\gamma}.$

Now,

:$y_2 = \left.\frac{\partial y_b}{\partial c}\right|_{c = 0}.$

Differentiating "y""b" with respect to "c" as in the second case above and evaluating at "c" = 0 gives "y"2. Hence, "y" = "G"′"y"1 + "H"′"y"2, where "G"′"b"0 = "G" and "H"′"b"0 = "H".

= Solution around "x" = 1 =

Let us now study the singular point "x" = 1. To see if it is regular, we study the following limits:

:$\lim_{x \to 1} \frac{(x-1) P_1(x)}{P_2(x)} = \lim_{x \to 1} \frac{(x-1)\left(\gamma - (1+\alpha+\beta)x\right)}{x(1-x)} = 1 + \alpha + \beta - \gamma, \qquad \lim_{x \to 1} \frac{(x-1)^2 P_0(x)}{P_2(x)} = \lim_{x \to 1} \frac{-\alpha\beta(x-1)^2}{x(1-x)} = 0.$

Hence, both limits exist and "x" = 1 is a regular singular point. Now, instead of assuming a solution of the form

:$y = \sum_{r=0}^{\infty} a_r (x-1)^{r+c},$

we will try to express the solutions of this case in terms of the solutions for the point "x" = 0. We proceed as follows: we had the hypergeometric equation

:$x(1-x)y'' + \left[\gamma - (1+\alpha+\beta)x\right]y' - \alpha\beta y = 0.$

Let "z" = 1 − "x". Then

:$\frac{dy}{dx} = -\frac{dy}{dz}, \qquad \frac{d^2 y}{dx^2} = \frac{d^2 y}{dz^2}.$

Hence, the equation takes the form

:$z(1-z)\frac{d^2 y}{dz^2} + \left[(\alpha+\beta-\gamma+1) - (1+\alpha+\beta)z\right]\frac{dy}{dz} - \alpha\beta y = 0.$

Since "z" = 1 − "x", the solution of the hypergeometric equation at "x" = 1 is the same as the solution of this equation at "z" = 0. But the solution at "z" = 0 is identical to the solution we obtained for the point "x" = 0, if we replace each γ by α + β − γ + 1. Hence, to get the solutions, we just make this substitution in the previous results. Note that for "x" = 0, "c"1 = 0 and "c"2 = 1 − γ; hence, in our case, "c"1 = 0 while "c"2 = γ − α − β. In the following we have replaced each "z" by 1 − "x".
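The two classifying limits at "x" = 1 are easy to probe numerically. A minimal sketch with hypothetical parameter values:

```python
# P_0, P_1, P_2 as defined for the hypergeometric equation.
alpha, beta, gamma = 0.5, 1.25, 2.0   # hypothetical sample values

def P2(x): return x * (1 - x)
def P1(x): return gamma - (1 + alpha + beta) * x
def P0(x): return -alpha * beta

x = 1 - 1e-8   # approach the singular point x = 1
lim1 = (x - 1) * P1(x) / P2(x)        # should approach 1 + alpha + beta - gamma
lim2 = (x - 1) ** 2 * P0(x) / P2(x)   # should approach 0

assert abs(lim1 - (1 + alpha + beta - gamma)) < 1e-6
assert abs(lim2) < 1e-6
```

Both limits are finite, which is exactly the regular-singular-point criterion used in the text.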

Analysis of the solution in terms of the difference γ − α − β of the two roots

γ − α − β not an integer

:$y = A \, {}_2F_1(\alpha, \beta; \alpha+\beta-\gamma+1; 1-x) + B (1-x)^{\gamma-\alpha-\beta} \, {}_2F_1(\gamma-\alpha, \gamma-\beta; \gamma-\alpha-\beta+1; 1-x).$

γ − α − β = 0

:$y = C \, {}_2F_1(\alpha, \beta; 1; 1-x) + D \sum_{r=0}^{\infty} \frac{(\alpha)_r (\beta)_r}{\left((1)_r\right)^2} \left(\ln(1-x) + \sum_{k=0}^{r-1}\left(\frac{1}{\alpha+k} + \frac{1}{\beta+k} - \frac{2}{1+k}\right)\right) (1-x)^r.$

γ − α − β is an integer and γ − α − β ≠ 0

γ − α − β > 0

:$y = E \sum_{r=\gamma-\alpha-\beta}^{\infty} \frac{(\alpha)_r (\beta)_r}{(1)_r (1)_{r+\alpha+\beta-\gamma}} (1-x)^r + F (1-x)^{\gamma-\alpha-\beta} \sum_{r=0}^{\infty} \frac{(\gamma-\alpha-\beta)(\gamma-\beta)_r (\gamma-\alpha)_r}{(1+\gamma-\alpha-\beta)_r (1)_r} \left(\ln(1-x) + \frac{1}{\gamma-\alpha-\beta} + \sum_{k=0}^{r-1}\left(\frac{1}{k+\gamma-\beta} + \frac{1}{k+\gamma-\alpha} - \frac{1}{1+k+\gamma-\alpha-\beta} - \frac{1}{1+k}\right)\right) (1-x)^r.$

γ − α − β < 0

:$y = G (1-x)^{\gamma-\alpha-\beta} \sum_{r=\alpha+\beta-\gamma}^{\infty} \frac{(\gamma-\beta)_r (\gamma-\alpha)_r}{(1)_r (1)_{r+\gamma-\alpha-\beta}} (1-x)^r + H \sum_{r=0}^{\infty} \frac{(\alpha+\beta-\gamma)(\alpha)_r (\beta)_r}{(1+\alpha+\beta-\gamma)_r (1)_r} \left(\ln(1-x) + \frac{1}{\alpha+\beta-\gamma} + \sum_{k=0}^{r-1}\left(\frac{1}{\alpha+k} + \frac{1}{\beta+k} - \frac{1}{1+k} - \frac{1}{\alpha+\beta-\gamma+1+k}\right)\right) (1-x)^r.$

= Solution around infinity =

Finally, we study the singularity as "x" → ∞. Since we cannot study this directly, we let "x" = "s"−1. Then the solution of the equation as "x" → ∞ is identical to the solution of the modified equation when "s" = 0. We had

:$x(1-x)\frac{d^2 y}{dx^2} + \left[\gamma - (1+\alpha+\beta)x\right]\frac{dy}{dx} - \alpha\beta y = 0, \qquad \frac{dy}{dx} = -s^2 \frac{dy}{ds}, \qquad \frac{d^2 y}{dx^2} = s^4 \frac{d^2 y}{ds^2} + 2s^3 \frac{dy}{ds}.$

Hence, the equation takes the new form

:$\frac{1}{s}\left(1 - \frac{1}{s}\right)\left(s^4 \frac{d^2 y}{ds^2} + 2s^3 \frac{dy}{ds}\right) + \left(\gamma - \frac{1+\alpha+\beta}{s}\right)\left(-s^2 \frac{dy}{ds}\right) - \alpha\beta y = 0,$

which reduces to

:$s^2(s-1)\frac{d^2 y}{ds^2} + \left[(2-\gamma)s^2 + (\alpha+\beta-1)s\right]\frac{dy}{ds} - \alpha\beta y = 0.$

Let

:$P_0(s) = -\alpha\beta, \quad P_1(s) = (2-\gamma)s^2 + (\alpha+\beta-1)s, \quad P_2(s) = s^2(s-1).$

As we said, we shall only study the solution when "s" = 0. As we can see, this is a singular point since "P"2(0) = 0. To see if it's regular,

:$\lim_{s \to 0} \frac{s P_1(s)}{P_2(s)} = \lim_{s \to 0} \frac{(2-\gamma)s + (\alpha+\beta-1)}{s-1} = 1 - \alpha - \beta, \qquad \lim_{s \to 0} \frac{s^2 P_0(s)}{P_2(s)} = \lim_{s \to 0} \frac{-\alpha\beta}{s-1} = \alpha\beta.$

Hence, both limits exist and "s" = 0 is a regular singular point. Therefore, we assume the solution takes the form

:$y = \sum_{r=0}^{\infty} a_r s^{r+c}$

with "a"0 ≠ 0.

Hence,

:$y' = \sum_{r=0}^{\infty} a_r (r+c) s^{r+c-1} \quad \text{and} \quad y'' = \sum_{r=0}^{\infty} a_r (r+c)(r+c-1) s^{r+c-2}.$

Substituting in the modified hypergeometric equation we get

:$s^2(s-1)\sum_{r=0}^{\infty} a_r(r+c)(r+c-1)s^{r+c-2} + \left[(2-\gamma)s^2 + (\alpha+\beta-1)s\right]\sum_{r=0}^{\infty} a_r(r+c)s^{r+c-1} - \alpha\beta\sum_{r=0}^{\infty} a_r s^{r+c} = 0.$

i.e.,

:$\sum_{r=0}^{\infty} a_r(r+c)(r+c-1)s^{r+c+1} - \sum_{r=0}^{\infty} a_r(r+c)(r+c-1)s^{r+c} + (2-\gamma)\sum_{r=0}^{\infty} a_r(r+c)s^{r+c+1} + (\alpha+\beta-1)\sum_{r=0}^{\infty} a_r(r+c)s^{r+c} - \alpha\beta\sum_{r=0}^{\infty} a_r s^{r+c} = 0.$

In order to simplify this equation, we need all powers to be the same, equal to "r" + "c", the smallest power. Hence, we switch the indices as follows:

:$\sum_{r=1}^{\infty} a_{r-1}(r+c-1)(r+c-2)s^{r+c} - \sum_{r=0}^{\infty} a_r(r+c)(r+c-1)s^{r+c} + (2-\gamma)\sum_{r=1}^{\infty} a_{r-1}(r+c-1)s^{r+c} + (\alpha+\beta-1)\sum_{r=0}^{\infty} a_r(r+c)s^{r+c} - \alpha\beta\sum_{r=0}^{\infty} a_r s^{r+c} = 0.$

Thus, isolating the first term of the sums starting from 0 we get

:$a_0\left(-c(c-1) + (\alpha+\beta-1)c - \alpha\beta\right)s^c + \sum_{r=1}^{\infty}\left[a_r\left(-(r+c)(r+c-1) + (\alpha+\beta-1)(r+c) - \alpha\beta\right) + a_{r-1}\left((r+c-1)(r+c-2) + (2-\gamma)(r+c-1)\right)\right]s^{r+c} = 0.$

Now, from the linear independence of all powers of "s" (that is, of the functions 1, "s", "s"2, etc.), the coefficients of "s"k vanish for all "k". Hence, from the first term we have

:$a_0\left(-c(c-1) + (\alpha+\beta-1)c - \alpha\beta\right) = 0,$

which is the indicial equation. Since "a"0 ≠ 0, we have

:$c^2 - (\alpha+\beta)c + \alpha\beta = (c-\alpha)(c-\beta) = 0.$

Hence, "c"1 = α and "c"2 = β.

Also, from the rest of the terms we have

:$a_r\left(-(r+c)(r+c-1) + (\alpha+\beta-1)(r+c) - \alpha\beta\right) + a_{r-1}\left((r+c-1)(r+c-2) + (2-\gamma)(r+c-1)\right) = 0.$

Hence,

:$a_r = \frac{(r+c-1)(r+c-2) + (2-\gamma)(r+c-1)}{(r+c)(r+c-1) - (\alpha+\beta-1)(r+c) + \alpha\beta} a_{r-1}.$

But

:$(r+c-1)(r+c-2) + (2-\gamma)(r+c-1) = (r+c-1)(r+c-\gamma) \quad \text{and} \quad (r+c)(r+c-1) - (\alpha+\beta-1)(r+c) + \alpha\beta = (r+c-\alpha)(r+c-\beta).$

Hence, we get the recurrence relation

:$a_r = \frac{(r+c-1)(r+c-\gamma)}{(r+c-\alpha)(r+c-\beta)} a_{r-1}.$

Let's now simplify this relation by giving "a""r" in terms of "a"0 instead of "a""r" − 1. From the recurrence relation,

:$a_1 = \frac{c(c+1-\gamma)}{(c+1-\alpha)(c+1-\beta)} a_0, \quad a_2 = \frac{(c+1)(c+2-\gamma)}{(c+2-\alpha)(c+2-\beta)} a_1 = \frac{(c)_2 (c+1-\gamma)_2}{(c+1-\alpha)_2 (c+1-\beta)_2} a_0, \quad \ldots$

As we can see,

:$a_r = \frac{(c)_r (c+1-\gamma)_r}{(c+1-\alpha)_r (c+1-\beta)_r} a_0.$

Hence, our assumed solution takes the form

:$y = a_0 \sum_{r=0}^{\infty} \frac{(c)_r (c+1-\gamma)_r}{(c+1-\alpha)_r (c+1-\beta)_r} s^{r+c}.$
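As with the point "x" = 0, one can verify termwise and in exact arithmetic that this series solves the transformed equation for both roots "c" = α and "c" = β. A Python sketch with hypothetical sample parameters:

```python
from fractions import Fraction

def pochhammer(u, r):
    """Rising factorial (u)_r."""
    p = Fraction(1)
    for k in range(r):
        p *= u + k
    return p

# Hypothetical sample parameters with alpha - beta not an integer.
alpha, beta, gamma = Fraction(1, 2), Fraction(9, 4), Fraction(3)

def ode_coefficients(c, R):
    """Coefficients of s^{r+c}, r = 0..R, after applying
    s^2(s-1)y'' + ((2-gamma)s^2 + (alpha+beta-1)s)y' - alpha*beta*y
    to y = sum_r a_r s^{r+c} with a_r = (c)_r (c+1-gamma)_r / ((c+1-alpha)_r (c+1-beta)_r)."""
    a = [pochhammer(c, r) * pochhammer(c + 1 - gamma, r)
         / (pochhammer(c + 1 - alpha, r) * pochhammer(c + 1 - beta, r))
         for r in range(R + 1)]
    out = [a[0] * (-c * (c - 1) + (alpha + beta - 1) * c - alpha * beta)]   # indicial term
    for r in range(1, R + 1):
        out.append(a[r] * (-(r + c) * (r + c - 1) + (alpha + beta - 1) * (r + c) - alpha * beta)
                   + a[r - 1] * ((r + c - 1) * (r + c - 2) + (2 - gamma) * (r + c - 1)))
    return out

# Both indicial roots c = alpha and c = beta give a formal solution.
for c in (alpha, beta):
    assert all(v == 0 for v in ode_coefficients(c, 12))
```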

We are now ready to study the solutions corresponding to the different cases for "c"1 − "c"2 = α − β.

Analysis of the solution in terms of the difference α − β of the two roots

α − β not an integer

Then "y"1 = "y"|"c" = α and "y"2 = "y"|"c" = β. Since

:$y = a_0 \sum_{r=0}^{\infty} \frac{(c)_r (c+1-\gamma)_r}{(c+1-\alpha)_r (c+1-\beta)_r} s^{r+c},$

we have

:$y_1 = a_0 \sum_{r=0}^{\infty} \frac{(\alpha)_r (\alpha+1-\gamma)_r}{(1)_r (\alpha+1-\beta)_r} s^{r+\alpha} = a_0 s^{\alpha} \, {}_2F_1(\alpha, \alpha+1-\gamma; \alpha+1-\beta; s),$

:$y_2 = a_0 \sum_{r=0}^{\infty} \frac{(\beta)_r (\beta+1-\gamma)_r}{(\beta+1-\alpha)_r (1)_r} s^{r+\beta} = a_0 s^{\beta} \, {}_2F_1(\beta, \beta+1-\gamma; \beta+1-\alpha; s).$

Hence, "y" = "A"′"y"1 + "B"′"y"2. Let "A"′"a"0 = "A" and "B"′"a"0 = "B". Then, noting that "s" = "x"−1,

:$y = A x^{-\alpha} \, {}_2F_1(\alpha, \alpha+1-\gamma; \alpha+1-\beta; x^{-1}) + B x^{-\beta} \, {}_2F_1(\beta, \beta+1-\gamma; \beta+1-\alpha; x^{-1}).$

α − β = 0

Then "y"1 = "y"|"c" = α. Since α = β, we have

:$y = a_0 \sum_{r=0}^{\infty} \frac{(c)_r (c+1-\gamma)_r}{\left((c+1-\alpha)_r\right)^2} s^{r+c}.$

Hence,

:$y_1 = a_0 \sum_{r=0}^{\infty} \frac{(\alpha)_r (\alpha+1-\gamma)_r}{\left((1)_r\right)^2} s^{r+\alpha} = a_0 s^{\alpha} \, {}_2F_1(\alpha, \alpha+1-\gamma; 1; s), \qquad y_2 = \left.\frac{\partial y}{\partial c}\right|_{c = \alpha}.$

Using the method in the case γ = 1 above, differentiating each term logarithmically with respect to "c" and setting "c" = α, we get

:$y_2 = a_0 \sum_{r=0}^{\infty} \frac{(\alpha)_r (\alpha+1-\gamma)_r}{\left((1)_r\right)^2} \left(\ln s + \sum_{k=0}^{r-1}\left(\frac{1}{\alpha+k} + \frac{1}{\alpha+1-\gamma+k} - \frac{2}{1+k}\right)\right) s^{r+\alpha}.$

Hence, "y" = "C"′"y"1 + "D"′"y"2. Let "C"′"a"0 = "C" and "D"′"a"0 = "D". Noting that "s" = "x"−1,

:$y = C x^{-\alpha} \, {}_2F_1(\alpha, \alpha+1-\gamma; 1; x^{-1}) + D x^{-\alpha} \sum_{r=0}^{\infty} \frac{(\alpha)_r (\alpha+1-\gamma)_r}{\left((1)_r\right)^2} \left(\ln x^{-1} + \sum_{k=0}^{r-1}\left(\frac{1}{\alpha+k} + \frac{1}{\alpha+1-\gamma+k} - \frac{2}{1+k}\right)\right) x^{-r}.$

α − β an integer and α − β ≠ 0

α − β > 0

From the recurrence relation

:$a_r = \frac{(r+c-1)(r+c-\gamma)}{(r+c-\alpha)(r+c-\beta)} a_{r-1},$

we see that when "c" = β (the smaller root), the factor "r" + "c" − α in the denominator vanishes at "r" = α − β, so "a"α − β → ∞. Hence, we must make the substitution "a"0 = "b"0("c" − "c""i"), where "c""i" is the root for which our solution is infinite. Hence, we take "a"0 = "b"0("c" − β) and our assumed solution takes the new form

:$y_b = b_0 (c - \beta) \sum_{r=0}^{\infty} \frac{(c)_r (c+1-\gamma)_r}{(c+1-\alpha)_r (c+1-\beta)_r} s^{r+c}.$

Then "y"1 = "y""b"|"c" = β. As we can see, all terms before

:$\frac{(c)_{\alpha-\beta} (c+1-\gamma)_{\alpha-\beta}}{(c+1-\alpha)_{\alpha-\beta} (c+1-\beta)_{\alpha-\beta}} s^{\alpha-\beta+c}$

vanish because of the "c" − β in the numerator.

But starting from this term, the "c" − β in the numerator cancels. To see this, note that

:$(c+1-\alpha)_{\alpha-\beta} = (c+1-\alpha)(c+2-\alpha) \cdots (c-\beta).$

Hence, our solution takes the form

:$y_1 = \frac{b_0}{(\beta+1-\alpha)_{\alpha-\beta-1}} \sum_{r=\alpha-\beta}^{\infty} \frac{(\beta)_r (\beta+1-\gamma)_r}{(1)_{r+\beta-\alpha} (1)_r} s^{r+\beta}.$

Now,

:$y_2 = \left.\frac{\partial y_b}{\partial c}\right|_{c = \alpha}.$

Differentiating "y""b" with respect to "c" as in the case γ = 1 above, at "c" = α we get "y"2. Hence, "y" = "E"′"y"1 + "F"′"y"2. Let "E"′"b"0 = "E" and "F"′"b"0 = "F"; substituting "s" = "x"−1 in "y"1 and "y"2 then gives the solution in terms of "x".

α − β < 0

From the symmetry of the situation here, we see that the solution for this case follows from the previous one with α and β interchanged.

