feature request: cond() for sparse matrices #104
Slow but accurate: … Probably faster for large matrices, but subject to iterative tolerances: … (the denominator may have the wrong sign if the matrix is singular). Edit: it might be better to do what MATLAB's …
OK, thanks for the recommendation!
We should consider whether the one- and inf-norm condition number estimates that LAPACK computes can be generalised to the sparse case. It appears that MATLAB does something like that.
Yep, it would be good to have Hager-Higham condition estimates for general matrices. There's some pseudocode here: http://www.cse.psu.edu/~barlow/cse451/Hager.ps
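To make the idea concrete, here is a minimal sketch of Hager's scalar one-norm estimator (LAPACK and Higham's block variant refine this). All function names are illustrative, not an existing API; `F` is assumed to be any factorization supporting both `F \ b` and `F' \ b`.

```julia
using LinearAlgebra, SparseArrays

# Sketch of Hager's one-norm estimator. Given a factorization F of A,
# it produces a lower bound on ‖A⁻¹‖₁ using only a few solves with A
# and Aᵀ, never forming A⁻¹ explicitly.
function hager_est_norm1_inv(F, n::Integer)
    x = fill(1.0 / n, n)                # start from the uniform vector
    est = 0.0
    for _ in 1:5                        # a handful of iterations suffice
        y = F \ x                       # y = A⁻¹ x
        est = norm(y, 1)
        ξ = map(t -> t >= 0 ? 1.0 : -1.0, y)
        z = F' \ ξ                      # z = A⁻ᵀ ξ
        j = argmax(abs.(z))
        abs(z[j]) <= dot(z, x) && break # converged: no ascent direction left
        fill!(x, 0.0)
        x[j] = 1.0                      # restart from the unit vector eⱼ
    end
    return est                          # lower bound on ‖A⁻¹‖₁
end

# cond₁(A) ≈ ‖A‖₁ · (estimate of ‖A⁻¹‖₁), via the sparse LU factorization
condest1(A::SparseMatrixCSC) = opnorm(A, 1) * hager_est_norm1_inv(lu(A), size(A, 1))
```

The estimate is a lower bound on the true one-norm condition number, and for small well-behaved matrices it is often exact.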
UMFPACK does provide a rough estimate of the condition number:
Would this be a good enough start?
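For context on why this estimate is rough: UMFPACK's cheap reciprocal-condition estimate is essentially the ratio of the smallest to the largest magnitude on the diagonal of U from the (scaled, permuted) LU factorization, per the UMFPACK user guide. A sketch of the same idea (the function name is illustrative):

```julia
using LinearAlgebra, SparseArrays

# UMFPACK-style cheap reciprocal condition estimate: the ratio
# min|Uᵢᵢ| / max|Uᵢᵢ| from the sparse LU factorization. Only a loose
# indicator of 1/cond(A); it can be wildly optimistic.
function umfpack_style_rcond(A::SparseMatrixCSC)
    d = abs.(diag(lu(A).U))
    return minimum(d) / maximum(d)
end
```

Because it looks only at the U diagonal (and UMFPACK scales rows by default), the estimate can miss ill-conditioning entirely, which is why it is "very rough".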
that's... very rough
Indeed. Some of these estimates (even in LAPACK) I've found to be practically useless.
For condition estimation, another possible algorithm is being championed by Sivan Toledo. It allows estimating the conditioning of non-square matrices:
http://www.eecs.berkeley.edu/~odedsc/seminars/scmc-seminar-spring2013_files/Sivan-Toledo-Slides.pdf
Interesting -- much faster than computing the SVD, though slow sometimes. He does survey other methods.
@gwhowell Thanks, these look like pretty interesting slides. The basic idea has been around for a while - Parlett's and Saad's books (amongst others) describe how you can get estimates of eigenvalues (and hence the condition number) by studying the Lanczos vectors that underlie iterative solvers. The new idea here seems to be the use of the forward error to study the convergence rate of LSQR. Normally the forward error is unknown, but the idea here is to simply pick a problem with a known solution and compute the Rayleigh quotient for the smallest singular value once it has converged. The usual caveats of iterative methods apply. The main problem (which is acknowledged in the slides but not addressed) is that the convergence rate depends strongly on how well separated the singular value you are interested in is from the other singular values. Many matrices do not have good spectral gaps around the smallest singular value, so estimating the condition number remains a difficult problem in the general case. This could be a feature request for IterativeSolvers.jl
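As an illustration of the underlying principle (not Toledo's LSQR-based method), the extreme singular values, and hence the 2-norm condition number, can be estimated iteratively without a full SVD: plain power iteration on AᵀA converges toward σ_max, and inverse iteration through a sparse LU converges toward σ_min. The function name and iteration count below are illustrative assumptions.

```julia
using LinearAlgebra, SparseArrays

# Estimate cond₂(A) for square sparse A via power iteration (σ_max) and
# inverse iteration (σ_min). As noted above, convergence depends on the
# spectral gap at each end; poorly separated singular values converge slowly.
function cond2_est(A::SparseMatrixCSC; iters::Int=200)
    n = size(A, 2)
    F = lu(A)
    v = normalize(randn(n))
    w = normalize(randn(n))
    for _ in 1:iters
        v = normalize(A' * (A * v))   # iterate with AᵀA: converges to σ_max direction
        w = normalize(F \ (F' \ w))   # iterate with (AᵀA)⁻¹: converges to σ_min direction
    end
    return norm(A * v) / norm(A * w)  # σ_max / σ_min
end
```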
For what it's worth, the de facto method right now for one-norm estimation is the block variant of the mentioned Hager-Higham algorithm: http://epubs.siam.org/doi/abs/10.1137/S0895479899356080?journalCode=sjmael
Hi Jack, [edit: link to documentation: http://octave.sourceforge.net/octave/function/condest.html]
Please don't quote the copyrighted documentation of other projects here. It might qualify as fair use, but it's better not to risk any copyright infringement.
@gwhowell Agreed, that's what I was implying when I said "de facto". From what I recall from my last brief conversation with Nick Higham, MATLAB defaults to setting the blocksize to 2.
attn: @gwhowell - we've asked you on previous occasions not to do this.
@StefanKarpinski @jiahao I assume that quoting documentation from permissively licensed (e.g., New BSD) projects is okay?
Better to just link to things in general, if they're more than a few lines.
I implemented the Hager-Higham algorithm in Julia (a tentative gist) and would be happy to add it to Base, but I'm not sure what the best way to do this would be. If I run:

```julia
julia> cond(A,2)
ERROR: MethodError: `svdvals!` has no method matching svdvals!(::Base.SparseMatrix.SparseMatrixCSC{Float64,Int64})
 in cond at linalg/dense.jl:494
 in cond at linalg/dense.jl:493

julia> cond(A,1)
ERROR: MethodError: `cond` has no method matching cond(::Base.SparseMatrix.UMFPACK.UmfpackLU{Float64,Int64}, ::Int64)
Closest candidates are:
  cond(::Number, ::Any)
  cond{T<:Union{Complex{Float32},Float64,Float32,Complex{Float64}},S}(::Base.LinAlg.LowerTriangular{T<:Union{Complex{Float32},Float64,Float32,Complex{Float64}},S}, ::Real)
  cond{T<:Union{Complex{Float32},Float64,Float32,Complex{Float64}},S}(::Base.LinAlg.UnitLowerTriangular{T<:Union{Complex{Float32},Float64,Float32,Complex{Float64}},S}, ::Real)
  ...
 in cond at linalg/dense.jl:499

julia> cond(A,Inf)
ERROR: MethodError: `cond` has no method matching cond(::Base.SparseMatrix.UMFPACK.UmfpackLU{Float64,Int64}, ::Float64)
Closest candidates are:
  cond(::Number, ::Any)
  cond{T<:Union{Complex{Float32},Float64,Float32,Complex{Float64}},S}(::Base.LinAlg.LowerTriangular{T<:Union{Complex{Float32},Float64,Float32,Complex{Float64}},S}, ::Real)
  cond{T<:Union{Complex{Float32},Float64,Float32,Complex{Float64}},S}(::Base.LinAlg.UnitLowerTriangular{T<:Union{Complex{Float32},Float64,Float32,Complex{Float64}},S}, ::Real)
  ...
 in cond at linalg/dense.jl:499
```

I get errors which I think aren't better than the original. I think what we should have is:
Does this sound reasonable? If it does, I'll be happy to adapt the gist for Base and open a pull request. Comments on the code would be appreciated as well.
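A proposal along these lines might look like the following dispatch sketch. Everything here is hypothetical: `cond_sparse` is not the real Base method, and `normestinv_placeholder` stands in (via an exact dense inverse) for what a real implementation would compute with the Hager-Higham estimator.

```julia
using LinearAlgebra, SparseArrays

# Placeholder: exact ‖A⁻¹‖ₚ via a dense inverse. A real implementation
# would use a Hager-Higham-style estimate and never densify A.
normestinv_placeholder(A, p) = opnorm(inv(Matrix(A)), p)

# Hypothetical dispatch: route sparse 1- and Inf-norm condition numbers
# through a norm estimator; reject the 2-norm with a helpful message
# rather than silently densifying.
function cond_sparse(A::SparseMatrixCSC, p::Real=2)
    if p == 1 || p == Inf
        return opnorm(A, p) * normestinv_placeholder(A, p)
    elseif p == 2
        throw(ArgumentError("2-norm condition number is not cheaply available for sparse matrices; consider cond(Matrix(A), 2)"))
    else
        throw(ArgumentError("p must be 1, 2, or Inf"))
    end
end
```

The design point is that the sparse methods should give either a cheap answer or a clear error, instead of the unhelpful `MethodError`s shown above.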
I've changed my mind since April 2014: while I still think this code would be really good to have cleaned up and easily available somewhere, I'm not sure it absolutely needs to be in Base unless it's important enough to be a dependency of all other code written in the language. Especially now with package precompilation, what about making a package for it, or contributing it somewhere like https://github.com/andreasnoack/LinearAlgebra.jl (which is not yet registered, but probably will be sooner or later)?
@tkelman Ok, thank you, I'll clean it up and then see where to put it. I might need it at some point to implement one of the functions in JuliaLang/julia#5840, but I'm not sure yet.
As long as we haven't moved larger pieces of the sparse functionality out of Base, I think the condition number estimates for sparse matrices belong in Base. It should just be called
Let's be honest, condition estimates are a pretty niche feature. I guess we're calling the LAPACK 1- and inf-norm estimators in the dense case for
Improvements to the documentation are always welcome, and I agree that it would be great to clarify how the different condition numbers are computed, preferably with links to the relevant sources/papers.
Along with a link to the relevant papers, the documentation should point
In general, I don't think we should have different functions for different algorithms for the same quantity. We should pick a good default and document it. If we'd like to give choices (which I'm considering for eigen and singular value decompositions), I think we should use a keyword argument to select the algorithm.
In this case,
I wouldn't bother with the spectral condition number for now. It would be great just to have 1 and Inf. The smallest singular value would be pretty inaccurate if calculated with
Closed by JuliaLang/julia#12467
Julia does have a condition number computation (the `cond` algorithm) for dense matrices, but lacks it for sparse matrices. Thanks!