Compatibility with MultiTensorKit #247
base: master

Conversation
Codecov Report
Attention: Patch coverage is …

Additional details and impacted files

@@            Coverage Diff             @@
##           master     #247      +/-   ##
==========================================
+ Coverage   82.65%   82.68%   +0.02%
==========================================
  Files          43       43
  Lines        5593     5589       -4
==========================================
- Hits         4623     4621       -2
+ Misses        970      968       -2

View full report in Codecov by Sentry.
Overall this looks like a very minimal set of changes, which is really impressive!

I think I fully agree with your assessment of splitting the different points into separate PRs. Indeed, the typo fixes can be merged as-is, so we can easily get those out of the way.

The fusion tree manipulations might indeed require some form of tests, so we may need to consider adding a very simple but non-trivial multifusion category to TensorKitSectors, just for testing purposes. For example, simply using the `multifuse(Z2, Z2)` sectors seems feasible?

The `scalartype` issue is independent and would actually have already appeared when dealing with complex sector types and real tensors; I think this combination is just something people rarely use, which explains why we haven't run into it (yet).
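As a hedged, standalone illustration of that clash (plain Julia arrays standing in for the tensor data; nothing here is TensorKit API): coefficients produced by fusion tree manipulations can be complex, and writing them into a real-valued destination fails.

```julia
# Real tensor entries, as a stand-in for a real TensorMap's storage.
data = rand(Float64, 2, 2)

# A complex coefficient, e.g. coming from a complex F-symbol of the sector type.
coeff = cis(π / 4)

# Scaling the real data in place by a genuinely complex number throws an
# InexactError, which is the kind of failure the scalartype handling must avoid.
data .*= coeff
```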
src/tensors/tensoroperations.jl (outdated)
For these changes, I think it would be nice to have a dedicated discussion (indeed in a separate PR) and some comments explaining what is going on. The bottom line is that we want the `sectorscalartype` to mix into the actual `scalartype` in a very subtle way: it is only really used to promote from `<:Real` to `<:Complex` while leaving the actual precision the same. (I think?)
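A minimal sketch of that promotion rule, using a hypothetical helper name (`promote_with_sectorscalartype` is not an actual TensorKit function):

```julia
# Hypothetical helper: a complex sector scalartype S promotes a real element
# type T to its complex counterpart, but never changes the precision of T.
promote_with_sectorscalartype(::Type{T}, ::Type{S}) where {T<:Number,S<:Number} =
    S <: Real ? T : complex(T)

promote_with_sectorscalartype(Float64, Float64)        # Float64
promote_with_sectorscalartype(Float64, ComplexF64)     # ComplexF64
promote_with_sectorscalartype(ComplexF32, ComplexF64)  # ComplexF32 (precision kept)
```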
The implementation below is somewhat of a hack, since normally TensorOperations is responsible for figuring out the destination scalar type `TC`, so in principle we would expect `scalartype(tensoradd_type(TC, ...)) === TC`, which is not the case here.
There are a couple of options for fixing this:

- Since TensorOperations uses `scalartype`, we could consider redefining `scalartype(::AbstractTensorMap)` to take the `sectorscalartype` into account. This would boil down to really having a different meaning for `eltype` and `scalartype` for TensorMaps, where the former is the type of the stored data and the latter also contains information about the field (see the sketch after this list).
- We always require complex entries if the `sectorscalartype` is complex. This doesn't have too many performance implications, since most operations on tensors would result in a complex tensor anyway, but we would have to think carefully about `DiagonalTensorMap`, since that one could have real entries (e.g. for singular value decompositions).
- We rework the implementation of `promote_contract` in TensorOperations and the macros to actually work on values or types thereof instead of directly on `scalartype`s, such that we can overload `promote_contract(::AbstractTensorMap, ::AbstractTensorMap, ::Number)` or `promote_contract(::Type{<:AbstractTensorMap}, ...)`. This is quite a big internal change for TensorOperations though, since currently all macros expand to `promote_contract(scalartype(A), scalartype(B), ...)`.
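To make the first option concrete, here is a self-contained toy (the `ToyTensor` type and the `scalartype` definition below are purely illustrative assumptions, not the actual TensorKit implementation):

```julia
# Toy tensor type: E is the element type of the stored data,
# S plays the role of the sector scalartype.
struct ToyTensor{E<:Number,S<:Number}
    data::Vector{E}
end

# `eltype` reports the type of the stored data ...
Base.eltype(::Type{ToyTensor{E,S}}) where {E,S} = E

# ... while `scalartype` also folds in the field information coming from the
# sectors: a complex sector scalartype forces a complex scalar type, at the
# same precision as the stored data.
scalartype(::ToyTensor{E,S}) where {E,S} = S <: Real ? E : complex(E)

t = ToyTensor{Float64,ComplexF64}(zeros(3))
eltype(t)      # Float64     -- type of the stored entries
scalartype(t)  # ComplexF64  -- field over which the linear map is defined
```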
This PR can be divided into three parts:

- The largest part concerns fusion tree manipulations where the type of unit (`leftone` or `rightone`) matters in a multifusion context (see the sketch after this list). For now, during the draft, I've left some comments where the type of unit was not rigorously derived but guessed to get the code running; these will be removed once benchmarking MultiTensorKit is complete.
- A potential hack used to promote storage types, which came up when I was working with `ComplexF64` F-symbols in MultiTensorKit.
- Minor docs and docstring typos.
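As a small illustration of the first point, here is a hedged sketch; it assumes the `leftone`/`rightone` unit functions referenced above are callable on sector instances, which is an assumption about the API rather than something shown in this PR.

```julia
using TensorKitSectors  # assumed to provide leftone/rightone as referenced above

a = Z2Irrep(1)
# For a group-like sector such as Z2Irrep, left and right units coincide with
# the trivial sector, so code that picks the "wrong" unit still happens to work:
leftone(a) == rightone(a) == one(a)  # true

# In a genuine multifusion category (as in MultiTensorKit) the left and right
# units of a sector differ in general, which is why each fusion tree
# manipulation has to derive the correct one rather than guess.
```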