If some of the roots form a curve, what properties does this curve have? Can it bifurcate? Can the curve be parametrised by univariate polynomials? Can the curve have end points (other than at $\pm\infty$)?
How badly ill-conditioned is bivariate polynomial root finding? For example, the polynomial $p(x,y) = x^2+2x+1$ has one zero curve, but $p(x,y) = x^2 + (2-\epsilon)x+1$ has two zero curves (for small $\epsilon<0$). Is there a polynomial $p(x,y)$ with a zero curve such that a small perturbation of the coefficients leaves only isolated zeros?
I don't think most of your questions are appropriate for MO; you should try math.stackexchange.com for any follow-up questions, as your knowledge and questions sound advanced-undergraduate/beginning-graduate level, but I'll try to answer some of them. A good reference is Fulton's "Algebraic Curves", or perhaps some older books, like Walker's, which go deeper into real points.
The zero set is a union of finitely many (real smooth) curves and points and has a finite number of singularities (nodes, cusps and higher), which look like multiple crosses and pointy bits (you have to look at a picture). It cannot end abruptly and, if by bifurcation you mean one curve separating into two, then I don't think you can have that. The curve can be parametrized by univariate polynomials only in very special cases (irreducible, genus zero and only one point at infinity).
At most $d^2$ isolated points (the bound can be improved slightly), because they are singular. There are also at most $(d-1)(d-2)/2 + 1$ connected components ("zero curves"); this bound is Harnack's theorem.
I don't know; ask a numerical analyst. Edit: Re the last part of the question. Having a curve of solutions is an open condition on the coefficients. A useful example is $x^2+y^2=a$; look at $a$ small and positive, negative, or zero.
I need to evaluate polynomials in multiple variables using Horner's scheme in Fortran90/95. The main reason for doing this is the increased efficiency and accuracy gained by using Horner's scheme to evaluate polynomials.
I currently have an implementation of Horner's scheme for univariate/single variable polynomials. However, developing a function to evaluate multivariate polynomials using Horner's scheme is proving to be beyond me.
The paper you cite, "On the multivariate Horner scheme" (Pena, Sauer), has an explicit algorithm specified on p. 3. The remaining challenge is to penetrate the notation and conventions laid out in the first three pages of the paper far enough to turn their algorithm presentation into code.
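As a stepping stone before tackling the paper's notation, here is a minimal recursive sketch of the idea in Python (the question asks for Fortran, but the structure carries over directly): treat the polynomial as a polynomial in the first variable whose coefficients are themselves polynomials in the remaining variables, and apply the classic one-variable Horner recurrence at each level. This is my own illustration, not the Pena–Sauer algorithm verbatim.

```python
def horner_eval(poly, point):
    """Evaluate a multivariate polynomial by nested Horner steps.

    poly:  dict mapping exponent tuples (e1, ..., em) to coefficients,
           e.g. {(2, 1): 3.0, (1, 1): 2.0, (0, 0): 5.0} for 3x^2y + 2xy + 5.
    point: sequence of m variable values.
    """
    if not poly:
        return 0.0
    nvars = len(next(iter(poly)))
    if nvars == 0:
        return sum(poly.values())  # constant term remains
    x = point[0]
    # Group terms by their exponent in the first variable; each group is a
    # polynomial in the remaining variables.
    groups = {}
    for exps, c in poly.items():
        groups.setdefault(exps[0], {})[exps[1:]] = c
    # One-variable Horner recurrence over the first variable, highest power down.
    result = 0.0
    for e in range(max(groups), -1, -1):
        inner = horner_eval(groups[e], point[1:]) if e in groups else 0.0
        result = result * x + inner
    return result

# 3x^2y + 2xy + 5 at (x, y) = (2, 3): 36 + 12 + 5 = 53
print(horner_eval({(2, 1): 3.0, (1, 1): 2.0, (0, 0): 5.0}, (2.0, 3.0)))  # 53.0
```

Note that this fixes one factorisation (always splitting off the first variable); as the multivar_horner note below the paper discussion points out, other orderings can give fewer operations.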
Abstract. An extension of Horner's algorithm to the evaluation of m-variate polynomials and their derivatives is obtained. The schemes of computation are represented by trees because this type of graph describes exactly in which order the computations must be done. Some examples of algorithms for one and two variables are given.
NOTE: In contrast to the one-dimensional case, there are multiple possible Horner factorisations of multivariate polynomials. One can search over the possible factorisations to find a minimal representation, as described HERE. In most cases, however, it suffices to use a heuristic to find a single "good" factorisation. multivar_horner implements the greedy heuristic described in "Greedy Algorithms for Optimizing Multivariate Horner Schemes".
Given a bivariate and symmetric polynomial $P(x,y)$ of high degree (probably larger than 8), is there any algorithm that helps me decide whether $P(x,y)$ has a root over $\mathbb{R}^+$ or not? I may not need a specific root; I just want to check whether there is a root or not.
P.S.: I'm new to SageMath. Currently, I know some basic operations with multivariate polynomials, like factorizing, adding, subtracting, ... two polynomials. It would be great if you could explain your answer a little bit. Thank you.
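Pending a rigorous answer (e.g. via real quantifier elimination), one pragmatic sanity check is numerical: minimize $P(x,y)^2$ over the open positive quadrant from several starting points; a minimum near zero is evidence of a root, though not a proof, and a nonzero minimum does not certify absence. A sketch with SciPy (`has_positive_root`, the starting points, and the tolerance are my own choices):

```python
import numpy as np
from scipy.optimize import minimize

def has_positive_root(P, starts=((0.5, 0.5), (1.0, 2.0), (3.0, 0.3)), tol=1e-6):
    """Heuristically check whether P(x, y) = 0 has a solution with x, y > 0
    by minimizing P(x, y)**2 over the positive quadrant from several starts.
    A minimum near zero suggests a root; this is evidence, not a proof."""
    best = np.inf
    for x0 in starts:
        res = minimize(lambda v: P(v[0], v[1]) ** 2, x0,
                       bounds=[(1e-9, None), (1e-9, None)])
        best = min(best, float(res.fun))
    return best < tol

print(has_positive_root(lambda x, y: x * y - 1))        # True: root at (1, 1)
print(has_positive_root(lambda x, y: x**2 + y**2 + 1))  # False: no real root
```

For a certified answer one would instead use cylindrical algebraic decomposition or a positivity certificate, which Sage can reach through interfaces such as QEPCAD.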
where \(q(x,y) \equiv 1\) in the case of minimizing a bivariate polynomial. With polysol2, Tensorlab offers a numerically robust way of computing isolated real roots of a system of bivariate polynomials.
In fact, given a system of bivariate polynomials (1), polysol2 will first convert it to the form (2) before computing the roots as the eigenvalues of a (complex) generalized eigenvalue problem. Stationary points of real-valued polyanalytic polynomials and rational functions may be computed with polymin2 and ratmin2, respectively.
For example, the polynomial \(p(x) = x^3 + 2x^2 + 3x + 4\) is represented by the row vector p = [1 2 3 4]. Its derivative \(\frac{dp}{dx}\) can be computed with polyder(p) and its zeros can be computed with roots(p).
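Tensorlab follows MATLAB's highest-degree-first coefficient convention; for comparison, NumPy's legacy polynomial routines use the same convention, so the example above translates directly (a side-by-side illustration, not part of Tensorlab):

```python
import numpy as np

# p(x) = x^3 + 2x^2 + 3x + 4, highest-degree coefficient first.
p = np.array([1.0, 2.0, 3.0, 4.0])

dp = np.polyder(p)   # derivative: 3x^2 + 4x + 3
zeros = np.roots(p)  # the three (complex) roots of p

print(dp)                    # [3. 4. 3.]
print(np.polyval(p, zeros))  # ~0 at each computed root
```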
The corresponding system of polynomials is solved by polysol2. The latter includes several balancing steps to improve the accuracy of the solution. For some poorly scaled problems, the method may fail to find all solutions of the system. In that case, try setting polysol2's balancing tolerance options.TolBal to a smaller value, e.g.,
The functions polymin2 and ratmin2 depend on the lower-level function polysol2 to compute the isolated solutions of systems of bivariate polynomials (1) or systems of polyanalytic univariate polynomials (2). In the case (1), polysol2(p,q) computes the isolated real solutions of the system \(p(x,y) = q(x,y) = 0\). A solution \((x^*,y^*)\) is said to be isolated if there exists a neighbourhood of \((x^*,y^*)\) in \(\C^2\) that contains no solution other than \((x^*,y^*)\). Some systems may have solutions that are isolated in \(\R^2\), but not in \(\C^2\).
While there may not be much difference at the low level of expressions, where we store parameters as ordinary variables, at the mathematical level polynomials in several variables are different from polynomials in one variable that have symbols as coefficients. In this lecture we make the important connection between factoring and root finding, symbolically as well as numerically.
Let us start with a definition in plain words. A univariate polynomial is a finite sum of terms, where every term is a coefficient multiplied by a monomial, and where every monomial is a power of the variable; by default, call this variable x. All coefficients in the polynomial are of the same type.
The keys of the dictionary are the exponents of those monomials that appear with nonzero coefficient. The dictionary representation can be very convenient for sparse polynomials, where only few monomials appear with nonzero coefficient relative to the degree of the polynomial.
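The dictionary representation can be sketched in plain Python (Sage's own dict() method on polynomials returns a similar exponent-to-coefficient mapping; the helper names below are my own):

```python
# Sparse dictionary representation of x^100 + 3x^2 - 5:
# keys are the exponents with nonzero coefficient, values the coefficients.
p = {100: 1, 2: 3, 0: -5}

def degree(p):
    """Degree is the largest exponent present (or -1 for the zero polynomial)."""
    return max(p) if p else -1

def eval_sparse(p, x):
    """Evaluate a sparse univariate polynomial given as {exponent: coefficient}."""
    return sum(c * x**e for e, c in p.items())

print(degree(p))          # 100
print(eval_sparse(p, 2))  # 2**100 + 3*4 - 5
```

Only three entries are stored for a degree-100 polynomial, while a dense coefficient list would need 101 slots.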
Because the coefficient in the term with \(x^3\) is zero,the sum of the roots in both the symbolic and the numeric representationmust be zero. We can see this easily from the symbolic representation,but let us verify this on the numerical representation.
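The lecture's polynomial itself is not reproduced here, so as an illustration take a degree-4 polynomial with zero \(x^3\) coefficient; by Vieta's formulas the roots must sum to (minus) that coefficient, i.e. to zero, and the numerically computed roots confirm it up to rounding:

```python
import numpy as np

# p(x) = x^4 - 2x^2 + 3x - 1: the x^3 coefficient is zero, so by Vieta's
# formulas the four roots sum to zero.
p = [1.0, 0.0, -2.0, 3.0, -1.0]
roots = np.roots(p)
print(roots.sum())  # ~0, up to rounding in the eigenvalue computation
```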
To recapitulate, we distinguish between two main forms of root finding,one is symbolic, the other numeric.The default numeric field is the field of complex numbers,whereas symbolically we extend the field of rational numberswith sufficiently many symbols to represent all roots.
Polynomials in several variables are declared similarly to polynomials in one variable. The quotes around the names are needed if we have not used or declared them explicitly before as variables. We take a random polynomial of degree 4, and with terms we can give an upper bound on the number of terms in the polynomial.
The default order appears to be Degree reverse lexicographic term order. To change the ordering of the monomials in the polynomial, we coerce p into another ring. In a lexicographic order, all monomials in which x occurs come first.
We can view a polynomial in several variables as a polynomial in one variable by collecting terms. Because of the type of the argument of polynomial(), the selection from the tuple returned by p.variables() is needed.
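The lecture works in Sage; the same idea can be shown with SymPy (an analogue of my own, not the lecture's code): constructing a Poly in the single generator x collects the terms, so the coefficients become polynomials in the remaining variable.

```python
from sympy import symbols, Poly

x, y = symbols('x y')
p = x**2*y + x*y**2 + x + 3

# View p as a univariate polynomial in x: the coefficients, from the highest
# power of x down, are themselves polynomials in y.
px = Poly(p, x)
print(px.all_coeffs())  # [y, y**2 + 1, 3]
```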
Let us now introduce a new set of polynomials \(Q_{n,k}^{\alpha,\beta,\gamma}(u,v)\) normalized in such a way that \(Q_{n,k}^{\alpha,\beta,\gamma}(1,1)=1\) for all \(n\ge 0\) and \(0\le k\le n\). \(Q_{n,k}^{\alpha,\beta,\gamma}(u,v)\) can also be defined as
Observe again, from (2.4) and recalling that \(\alpha,\beta,\gamma>-1\), \(\alpha+\gamma+3/2>0\) and \(\beta+\gamma+3/2>0\), that the coefficients \(a_{n,k}^{(1)}, a_{n,k}^{(3)}, c_{n,k}^{(1)}, c_{n,k}^{(3)}\) in (2.10) are all positive. The coefficient \(a_{n,k}^{(3)}\) may have a different sign if \(k=n=0\) or \(k=n\). But a direct computation looking at the factors of \(a_{n,n}^{(3)}\) shows that \(a_{n,n}^{(3)}>0\) for \(n\ge 0\). The same applies to \(a_{n,k}^{(1)}\) for \(k=n\). Unlike the coefficient \(b_{n,k}\) in (2.8), we were not able to find a simplified expression for \(b_{n,k}^{(2)}\).
In this section, we will study under what conditions we may provide a probabilistic interpretation of the coefficients of the three-term recurrence relations (2.6) and (2.9). From the recurrence relations (2.3), we can define the following two block tridiagonal Jacobi matrices:
The behavior of \(F_k^{\alpha,\beta}\), \(k\ge 0\), depends strongly on the behavior of the denominator \(D_k^{\alpha,\beta}\). In the following lemma, we study the properties of \(D_k^{\alpha,\beta}\).
As a function of k, \(D_k^{\alpha,\beta}\) is a parabola with vertex located at \(k_0=-\frac{1}{2}(\alpha+\beta+1)\in (-\infty,1/2)\) and \(D_{k_0}^{\alpha,\beta}=2\alpha^2-2\beta^2-1\). If \(1+2\beta^2-2\alpha^2\ge 0\), then the real zeros of \(D_k^{\alpha,\beta}\) are given by
From the previous analysis, we see that the graph of \(F_k^{\alpha,\beta}\) depends strongly on the values of the parameters \(\alpha\) and \(\beta\). In particular, the monotonicity [see (3.8)] depends on the sign of \(\alpha^2-\beta^2\) and \(\alpha+\beta+1\); the existence of the zeros of the parabola \(D_k^{\alpha,\beta}\) depends on the sign of the hyperbola \(1+2\beta^2-2\alpha^2\) [see (3.9)], and their location also depends on the sign of \(3\alpha-\beta+2\) and \(\alpha+\beta\) (see the end of Lemma 3.1). Therefore, keeping all these curves in mind, let us consider the subdivision of the two-dimensional region \(\alpha,\beta>-1\) given in Fig. 2.