
[ITensors] Port VUMPS functions #1544


Open · wants to merge 7 commits into main

Conversation

ryanlevy
Contributor

This PR adds a few of the functions living in https://github.com/ITensor/ITensorInfiniteMPS.jl/blob/main/src/ITensors.jl

The one thing missing at the moment is the more efficient sqrt(T::DiagTensor), which I couldn't get working, but the eigen implementation seems sane at the moment, so there may be no need.
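For context on why a specialized sqrt for diagonal storage would be worthwhile: the square root of a diagonal matrix is just the elementwise square root of its diagonal, with no eigendecomposition needed. A minimal plain-LinearAlgebra sketch using Diagonal as a stand-in (this is illustrative, not the NDTensors DiagTensor code):

```julia
using LinearAlgebra

D = Diagonal([0.25, 4.0, 9.0])

# O(n) elementwise route available for diagonal storage:
fast = Diagonal(sqrt.(D.diag))

# The eigen-based route the PR falls back to:
F = eigen(Symmetric(Matrix(D)))
slow = F.vectors * Diagonal(sqrt.(F.values)) * F.vectors'

@assert fast * fast ≈ D
@assert Matrix(fast) ≈ slow
```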

@ryanlevy
Contributor Author

Looks like there's an assumption being made that slicing an MPS, e.g. psi[2:4], returns a vector of ITensors instead of an MPS; not sure if we want to adjust that here or in ITensorInfiniteMPS.

@ryanlevy
Contributor Author

Re the sliced MPS, it seems the assumption is about setting/getting a unit range:

function deepcopy_ortho_center(M::AbstractMPS)
  M = copy(M)
  c = ortho_lims(M)
  # TODO: define `getindex(::AbstractMPS, I)` to return `AbstractMPS`
  M[c] = deepcopy(typeof(M)(M[c]))
  return M
end

What are your thoughts given the TODO, Matt?

@mtfishman
Member

mtfishman commented Oct 15, 2024

I think slicing an MPS/MPO with a unit range should return an MPS/MPO; if we make that change, we can just update other code like the one you quoted accordingly.

However, the definition in ITensorInfiniteMPS.jl: https://github.com/ITensor/ITensorInfiniteMPS.jl/blob/fc1078a60442081010447caf65c9a22c7278e4ce/src/ITensors.jl#L186 is too naive, as the comment states, since it doesn't preserve information about the orthogonality center.
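One way an ortho-center-preserving unit-range getindex could work can be sketched with a hypothetical plain-Julia analogue (ToyMPS and its fields are illustrative, not the ITensorMPS API): the slice returns the same type, with the orthogonality window shifted into the slice's coordinates and intersected with it.

```julia
# Hypothetical stand-in for an MPS carrying an orthogonality-center range.
struct ToyMPS
    tensors::Vector{Matrix{Float64}}
    ortho_lims::UnitRange{Int}
end

function Base.getindex(M::ToyMPS, r::UnitRange{Int})
    # Shift the orthogonality window into the slice's coordinates, then
    # intersect with the slice; an empty result means no site in the slice
    # is known to be an orthogonality center.
    shift = first(r) - 1
    shifted = (first(M.ortho_lims) - shift):(last(M.ortho_lims) - shift)
    return ToyMPS(M.tensors[r], intersect(shifted, 1:length(r)))
end

M = ToyMPS([zeros(2, 2) for _ in 1:5], 2:4)
S = M[2:4]
@assert S isa ToyMPS
@assert length(S.tensors) == 3
@assert S.ortho_lims == 1:3
```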

@mtfishman mtfishman changed the title Port VUMPS functions [ITensors] Port VUMPS functions Oct 20, 2024
@ryanlevy
Contributor Author

ryanlevy commented Oct 21, 2024

Just pushed a hopefully better-performing version of sqrt. The original logic is a bit strange to me: it will fail for any matrix that isn't PSD unless atol is large. How do you feel about allowing conversion to complex, like Base.sqrt does?

julia> eigvals(array(A,i,i'))
2-element Vector{Float64}:
 0.031100580805835387
 1.5475689615743486

julia> sqrt(Hermitian(array(A,i,i'))) ≈ array(sqrt(A),i,i')
true

julia> eigvals(array(B,i,i'))
2-element Vector{Float64}:
 -0.9688994191941646
  0.5475689615743488

julia> sqrt(array(B,i,i'))
2×2 Matrix{ComplexF64}:
 0.638411+0.135107im  0.254641-0.338726im
 0.254641-0.338726im  0.101568+0.84922im

julia> sqrt(B)
ERROR: DomainError with -0.9688994191941646:
sqrt was called with a negative real argument but will only return a complex result if called with a complex argument. Try sqrt(Complex(x)).
Stacktrace:
  [1] throw_complex_domainerror(f::Symbol, x::Float64)
    @ Base.Math ./math.jl:33
  [2] sqrt
    @ ./math.jl:608 [inlined]
...
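For reference, Base.sqrt already does this widening for plain arrays: a real symmetric matrix with a negative eigenvalue gets a complex square root rather than a DomainError. A self-contained sketch (the matrix here is illustrative, not the B from the session above):

```julia
using LinearAlgebra

B = [-0.5 0.3; 0.3 0.4]   # symmetric, but one eigenvalue is negative
@assert any(<(0), eigvals(Symmetric(B)))

S = sqrt(B)                # Base widens the element type to Complex
@assert eltype(S) <: Complex
@assert S * S ≈ B
```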

@ryanlevy
Contributor Author

ryanlevy commented Nov 1, 2024

@mtfishman now that ITensorMPS is out of ITensors, I think this PR is complete pending your review, and I'll move over there to fix the slice issue.

# return itensor(sqrt(tensor(T)))
#end
D, U = eigen(T; ishermitian=ishermitian)
sqrtD = map_diag(x -> x < 0 && abs(x) < atol ? 0 : sqrt(x), D)
Member

Suggested change
sqrtD = map_diag(x -> x < 0 && abs(x) < atol ? 0 : sqrt(x), D)
sqrtD = map_diag(x -> x < 0 && abs(x) < atol ? zero(real(eltype(T))) : sqrt(x), D)

Additionally, I agree with your comment above that for large negative values we should convert to complex.
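The clamp in the suggestion can be sketched in isolation (plain Julia; the atol value and the function name are illustrative): small negatives within atol are treated as numerical noise and mapped to a type-stable zero, so sqrt never sees a slightly negative number.

```julia
atol = 1e-12

# Tiny negatives (numerical noise) map to a zero of the same type. A large
# negative like -1.0 still throws a DomainError; that's the case where the
# discussion proposes converting to complex instead.
clamped_sqrt(x) = (x < 0 && abs(x) < atol) ? zero(x) : sqrt(x)

@assert clamped_sqrt(-1e-15) === 0.0
@assert clamped_sqrt(4.0) == 2.0
```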

@ryanlevy
Contributor Author

Thanks, I implemented all of those suggested changes. Do you have any suggestions on how to handle the complex conversion? It seems that map_diag won't convert with a type change Float -> Complex, i.e.

A = random_itensor(i, i')
ITensors.NDTensors.map_diag(x -> x + 1.0im, A)

crashes

ERROR: InexactError: Float64(-1.4801732244792405 + 1.0im)
Stacktrace:
  [1] Real
    @ ./complex.jl:44 [inlined]
  [2] convert
    @ ./number.jl:7 [inlined]
  [3] setindex!
    @ ./array.jl:987 [inlined]
  [4] setindex!
    @ ./subarray.jl:384 [inlined]
...

(Theoretically this could be on the user but that seems a bit annoying)

@mtfishman
Member

Looks like NDTensors.map_diag needs to be generalized to handle the element type widening.

@mtfishman
Member

mtfishman commented Apr 21, 2025

It's a bit hacky, but one strategy could be to change NDTensors.map_diag to:

# Should go in `NDTensors.Exposed`, and specialized definitions may need to be added
# for certain GPU backends.
LinearAlgebra.copy_oftype(exposed_t::Exposed, elt::Type) = LinearAlgebra.copy_oftype(unexpose(exposed_t), elt)

function map_diag(f::Function, exposed_t::Exposed)
  new_elt = Base._return_type(f, Tuple{eltype(exposed_t)})
  t_dest = LinearAlgebra.copy_oftype(exposed_t, new_elt)
  map_diag!(f, expose(t_dest), exposed_t)
  return t_dest
end
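The widening idea can also be sketched with a plain-Julia stand-in (map_diag_widen is illustrative, not the NDTensors API): compute the mapped element type up front, allocate a destination of that type, then write the mapped diagonal into it.

```julia
function map_diag_widen(f, A::AbstractMatrix)
    # Widen to whatever f returns, so Float -> Complex maps don't hit an InexactError.
    newelt = promote_type(eltype(A), Base.promote_op(f, eltype(A)))
    dest = convert(Matrix{newelt}, A)
    for i in 1:min(size(A)...)
        dest[i, i] = f(A[i, i])
    end
    return dest
end

A = [1.0 2.0; 3.0 4.0]
B = map_diag_widen(x -> x + 1.0im, A)
@assert eltype(B) == ComplexF64
@assert B[1, 1] == 1.0 + 1.0im
@assert B[1, 2] == 2.0   # off-diagonal entries are untouched
```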

@ryanlevy
Contributor Author

Thanks Matt, it seems that I'm not familiar enough with the NDTensors internals to determine what should be changed. That snippet doesn't seem to work, as it hits a call to copyto! which is not implemented:
https://github.com/ITensor/ITensors.jl/blob/main/NDTensors/src/tensor/tensor.jl#L207
