Root finding for degenerate polynomials #446

Open

HechtiDerLachs opened this issue Nov 23, 2022 · 1 comment

HechtiDerLachs commented Nov 23, 2022

Let me first say that I was very happy to find the capabilities of Arb within Nemo and Oscar!

I have one application in mind where I need to track the roots of a univariate polynomial f(u) as its coefficients a_i vary. It lies in the nature of my problem that the limit polynomial f -> f_1 has at least one multiple root. I learned from the Arb documentation that, internally, the Durand-Kerner method is used, and according to this article it is possible to obtain reasonable convergence even in this degenerate case.
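For reference, one Durand-Kerner (Weierstrass) sweep replaces every approximation r_i by r_i - f(r_i)/prod_{j != i}(r_i - r_j). Just to fix notation, here is a minimal, self-contained toy version of that iteration in plain Complex{BigFloat} arithmetic; the helper durand_kerner_step is my own name, and this is only an illustration of the method, not Arb's implementation, which works with certified balls.

# Toy Durand-Kerner (Weierstrass) iteration; illustration only, not Arb's code.
setprecision(BigFloat, 128)

# One simultaneous update of all root approximations r for the monic polynomial
# f(z) = z^n + c_{n-1} z^{n-1} + ... + c_0 with coefficient vector c = [c_0, ..., c_{n-1}].
function durand_kerner_step(c::Vector{Complex{BigFloat}}, r::Vector{Complex{BigFloat}})
  n = length(r)
  f(z) = z^n + sum(c[i+1]*z^i for i in 0:n-1)
  return [r[i] - f(r[i]) / prod(r[i] - r[j] for j in 1:n if j != i) for i in 1:n]
end

# Example: f(z) = z^3 - 1, starting from the usual spread-out initial values.
c = Complex{BigFloat}[-1, 0, 0]
r = [Complex{BigFloat}(4//10, 9//10)^k for k in 1:3]
for _ in 1:60
  global r = durand_kerner_step(c, r)
end
@show r   # converges to the three cube roots of unity

Near a multiple root the denominators r_i - r_j become tiny for the clustered approximations, which is exactly where both the convergence and the error bounds deteriorate.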

Playing around with the Arb methods, I found that the error bounds for the returned list of roots of f become significantly worse once f approaches f_1, which has a multiple root. Usually one would circumvent this problem by making f square-free. But in my setting it is most convenient to have the input in inexact form, i.e. f_1 is not an exact polynomial with a multiple root, but rather an Arb polynomial with an 'approximate multiple root'. Therefore, doing exact division beforehand is not a feasible way to go.
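To illustrate the effect I mean (this is not my actual application, just the simplest setup that shows it), something along the following lines compares the reported accuracy of the returned root balls as two simple roots approach each other; I am assuming here that roots for acb_poly and Nemo.accuracy_bits behave as documented.

using Oscar

prec = 128
CC = ComplexField(prec)
CCu, u = CC["u"]

# The roots 1 and 1 + t merge as t -> 0.  The coefficients stay exact, but the
# certified error bounds of the returned balls should already degrade while the
# roots are merely close, not equal.
for t in [CC(1//10), CC(1//10^6), CC(1//10^12)]
  f = (u - 1)*(u - 1 - t)*(u - 2)
  rts = roots(f)
  println("t = ", t, ":  accuracy in bits: ", [Nemo.accuracy_bits(x) for x in rts])
end

The expectation is that the accuracy of the two balls near 1 drops noticeably as t shrinks, even though the working precision stays at 128 bits.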

Question: Can we modify the implementation of the root-finding algorithm in such a way that it returns a list of roots with higher precision in the degenerate cases? It already returns a list of roots without any claims about their multiplicities, and that is fine for me; it is only that the precision really seems to go down.

The following is sample code in Julia using Oscar, derived from @JHanselman's work.

using Oscar

# Track the approximate roots r of f to roots of g by stepping along the segment
# from f to g.  A step is only taken when the error estimate below certifies that
# Durand-Kerner will converge from the current approximations; otherwise the
# segment is bisected and the two halves are handled recursively.
function recursive_continuation(f::acb_poly, g::acb_poly, r::Vector{acb})
  P = parent(f)
  P === parent(g) || error("polynomials must belong to the same parent ring")
  CC = coefficient_ring(P)
  prec = precision(CC)
  n = degree(f)
  degree(g) == n || error("degree of g must be equal to the degree of f")
  length(r) == n || error("number of approximate roots must be equal to the degree of the polynomial")

  # The following is an error estimation from Neurohr's thesis:
  # d is the minimal distance between the approximate roots, and w is the maximal
  # squared modulus of the Weierstrass corrections W_i = f(r_i)/prod_{j != i}(r_i - r_j).
  d = reduce(min, [abs(r[i] - r[j]) for i in 1:n for j in 1:n if i != j])
  W = [ f(r[i]) // prod([r[i] - r[j] for j in vcat((1:i - 1), i+1:n)]; init = one(CC)) for i in (1:n)]
  w = reduce(max, map(t -> real(t)^2 + imag(t)^2, W))

  if w < d // (2*n)
    # The approximations are good enough: refine them directly as roots of g.
    temp_vec = Nemo.acb_vec(copy(r)) # TODO: Why does fill! not work?
    temp_vec_res = Nemo.acb_vec(n)
    dd = ccall((:acb_poly_find_roots, Nemo.libarb), Cint,
               (Ptr{Nemo.acb_struct}, Ref{acb_poly}, Ptr{Nemo.acb_struct}, Int, Int),
               temp_vec_res, g, temp_vec, 0, prec)

    dd == n || println("number of roots has dropped to $dd")
    result = Nemo.array(CC, temp_vec_res, n)

    Nemo.acb_vec_clear(temp_vec, n)
    Nemo.acb_vec_clear(temp_vec_res, n)

    return result
  else
    # The step from f to g is too large: insert the midpoint h and continue in two stages.
    h = CC(1//2)*(f + g)
    return recursive_continuation(h, g, recursive_continuation(f, h, r))
  end
end

prec = 128
CC = ComplexField(prec)

d = 5
P, a = PolynomialRing(QQ, push!(["a$i" for i in 0:d-1], "z"))
z = pop!(a)
F = sum([a[i]*z^(i-1) for i in 1:d]) + z^d

v = [CC(i) for i in 1:d]

CCu, u = CC["u"]

f = evaluate(F, push!(CCu.(v), u))

r = roots(f)

w = CC.([20, -10, 5, 25, -3])
g = u^d + sum([w[i]*u^(i-1) for i in 1:d])
@show r2 = recursive_continuation(f, g, r) # This produces good results!
g = u^d - u^(d-1) 
@show recursive_continuation(f, g, r) # This produces worse results.
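
To make the comparison of 'good' and 'worse' quantitative, one can append something like the following and compare the relative accuracies of the returned balls (again assuming Nemo.accuracy_bits):

r3 = recursive_continuation(f, g, r)        # continuation to the degenerate target u^5 - u^4
@show [Nemo.accuracy_bits(x) for x in r2]   # accuracies for the generic target
@show [Nemo.accuracy_bits(x) for x in r3]   # accuracies for the degenerate target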
HechtiDerLachs (Author) commented:
@fieker has asked me to do my homework first and look into the theory. He pointed out that there is a theoretical bound on how well the roots r of a polynomial f can be approximated in case the coefficients of f are not exact, but only approximate. The key should be "Ostrowski's theorem", taken from the Rapport de recherche "Algorithmique du théorème fondamental de l'algèbre" by X. Gourdon, INRIA, programme 2, No 1852 (sorry about not providing a link!).

It states that if f has degree n and the coefficients a_i of f move by ε (epsilon), then we have to expect the roots to move by O(ε^(1/n)), the n-th root of ε, regardless of how ill-conditioned the polynomial f is.
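The classic example behind this bound is u^n: its n-fold root at 0 spreads out to n simple roots of modulus ε^(1/n) for u^n - ε, so a coefficient perturbation of size ε really can move a root by ε^(1/n), and one cannot expect better in general. A quick sanity check in the same setup as above (again only a sketch):

using Oscar

# u^n has an n-fold root at 0, while u^n - epsilon has n simple roots of
# modulus epsilon^(1/n).  A constant-coefficient perturbation of size 2^-100
# therefore moves the root by about 2^-20 when n = 5.
prec = 256
CC = ComplexField(prec)
CCu, u = CC["u"]

n = 5
epsilon = CC(1//2)^100
rts = roots(u^n - epsilon)
println("moduli of the perturbed roots: ", [abs(x) for x in rts])  # all close to 2^-20

So with ε = 2^(-100) and n = 5 the roots sit on a circle of radius about 2^(-20): one loses a factor n in the number of accurate bits, but not more.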

Thus, if the coefficients of f are given up to an accuracy of 2^(-m), I would expect the roots to be computable to an accuracy of roughly 2^(-m/n), if not better. So, having done this homework: this bound does not seem to be realized in the current implementation. At least that is what the output of the above code suggests: if we start with an exact polynomial f and then interpolate to g = u^5 - u^4 over a complex field of precision prec = 128000 (which is already absurdly high), the roots of g come out with an accuracy of only ~e^{-60}, which is far from the expected accuracy of roughly 2^(-25600) (that is, 2^(-m/n) with m = 128000 and n = 5).

Now I am far from being an expert on this! But is there a reason why we do not find such bounds realized here? Or is there hope that one might improve the code in this regard? If so, I think that would be great, with a view towards various applications, including numerical integration on curves (CC: @JHanselman).
