Use floating point type for edge weights and tensor contraction costs
So far the cost of contracting a tensor is a `u64`. Theoretically, this could be insufficient for very big but still realistic networks. On the other hand, using `u128` for the cost seems excessive. Using integers also means that we have to deal with the possibility of overflow.

Using floating point for the weights and costs seems like the right way to resolve these problems. Even `f32` should be enough, but why not use `f64`? Theoretically, the consequence could be that the algorithm won't find the optimal contraction, but only one that is worse by a factor on the order of the machine precision. This should not matter at all.
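A minimal sketch of the motivation (function names are illustrative, not taken from the codebase): a `u64` cost can overflow on a realistic contraction, while the same product computed in `f64` stays finite and merely loses a little precision.

```rust
/// Contraction cost as u64, detecting overflow explicitly.
/// (Hypothetical helper for illustration only.)
fn cost_u64(dims: &[u64]) -> Option<u64> {
    dims.iter().try_fold(1u64, |acc, &d| acc.checked_mul(d))
}

/// The same cost as f64: no overflow in practice, only rounding error.
fn cost_f64(dims: &[u64]) -> f64 {
    dims.iter().map(|&d| d as f64).product()
}

fn main() {
    // 16 indices of dimension 16: cost = 16^16 = 2^64,
    // which is one past u64::MAX.
    let dims = [16u64; 16];
    assert_eq!(cost_u64(&dims), None); // u64 overflows
    assert_eq!(cost_f64(&dims), 2f64.powi(64)); // exactly representable in f64
    println!("f64 cost: {}", cost_f64(&dims));
}
```

Note that `f64` exactly represents integers up to 2^53, so for costs beyond that the comparison between two candidate contractions can be off by at most a relative error around the machine epsilon, which is the "worse by a factor about the machine precision" scenario described above.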