
The equation is

$C'_{ijkl} = Q_{im} Q_{jn} C_{mnop} (Q^{-1})_{ok} (Q^{-1})_{pl}$

I am able to use

np.einsum('im,jn,mnop,ok,pl', Q, Q, C, Q_inv, Q_inv)

to do the job, and I also expected

np.tensordot(np.tensordot(np.tensordot(Q, np.tensordot(Q, C, axes=[1,1]), axes=[1,0]), Q_inv, axes=[2,0]), Q_inv, axes=[3,0])

to work, but it doesn't.

Specifics:

C is a fourth-order elasticity tensor:

array([[[[ 552.62389047,   -0.28689554,   -0.32194701],
         [  -0.28689554,  118.89168597,   -0.65559912],
         [  -0.32194701,   -0.65559912,  130.21758722]],

        [[  -0.28689554,  166.02923119,   -0.00000123],
         [ 166.02923119,    0.49494431,   -0.00000127],
         [  -0.00000123,   -0.00000127,   -0.57156702]],

        [[  -0.32194701,   -0.00000123,  165.99413061],
         [  -0.00000123,   -0.64666809,   -0.0000013 ],
         [ 165.99413061,   -0.0000013 ,    0.42997465]]],


       [[[  -0.28689554,  166.02923119,   -0.00000123],
         [ 166.02923119,    0.49494431,   -0.00000127],
         [  -0.00000123,   -0.00000127,   -0.57156702]],

        [[ 118.89168597,    0.49494431,   -0.64666809],
         [   0.49494431,  516.15898907,   -0.33132485],
         [  -0.64666809,   -0.33132485,  140.09010389]],

        [[  -0.65559912,   -0.00000127,   -0.0000013 ],
         [  -0.00000127,   -0.33132485,  165.98553869],
         [  -0.0000013 ,  165.98553869,    0.41913346]]],


       [[[  -0.32194701,   -0.00000123,  165.99413061],
         [  -0.00000123,   -0.64666809,   -0.0000013 ],
         [ 165.99413061,   -0.0000013 ,    0.42997465]],

        [[  -0.65559912,   -0.00000127,   -0.0000013 ],
         [  -0.00000127,   -0.33132485,  165.98553869],
         [  -0.0000013 ,  165.98553869,    0.41913346]],

        [[ 130.21758722,   -0.57156702,    0.42997465],
         [  -0.57156702,  140.09010389,    0.41913346],
         [   0.42997465,    0.41913346,  486.62412063]]]])

Q is a rotation matrix that swaps the x and y coordinates:

array([[ 0,  1,  0],
       [-1,  0,  0],
       [ 0,  0,  1]])

Q_inv is

array([[-0., -1., -0.],
       [ 1.,  0.,  0.],
       [ 0.,  0.,  1.]])

np.einsum gives

array([[[[ 516.15898907,   -0.49494431,   -0.33132485],
         [  -0.49494431,  118.89168597,    0.64666809],
         [  -0.33132485,    0.64666809,  140.09010389]],

        [[  -0.49494431,  166.02923119,    0.00000127],
         [ 166.02923119,    0.28689554,   -0.00000123],
         [   0.00000127,   -0.00000123,    0.57156702]],

        [[  -0.33132485,    0.00000127,  165.98553869],
         [   0.00000127,   -0.65559912,    0.0000013 ],
         [ 165.98553869,    0.0000013 ,    0.41913346]]],


       [[[  -0.49494431,  166.02923119,    0.00000127],
         [ 166.02923119,    0.28689554,   -0.00000123],
         [   0.00000127,   -0.00000123,    0.57156702]],

        [[ 118.89168597,    0.28689554,   -0.65559912],
         [   0.28689554,  552.62389047,    0.32194701],
         [  -0.65559912,    0.32194701,  130.21758722]],

        [[   0.64666809,   -0.00000123,    0.0000013 ],
         [  -0.00000123,    0.32194701,  165.99413061],
         [   0.0000013 ,  165.99413061,   -0.42997465]]],


       [[[  -0.33132485,    0.00000127,  165.98553869],
         [   0.00000127,   -0.65559912,    0.0000013 ],
         [ 165.98553869,    0.0000013 ,    0.41913346]],

        [[   0.64666809,   -0.00000123,    0.0000013 ],
         [  -0.00000123,    0.32194701,  165.99413061],
         [   0.0000013 ,  165.99413061,   -0.42997465]],

        [[ 140.09010389,    0.57156702,    0.41913346],
         [   0.57156702,  130.21758722,   -0.42997465],
         [   0.41913346,   -0.42997465,  486.62412063]]]])

which I believe is correct, whereas the chained np.tensordot expression gives

array([[[[ 552.62389047,   -0.28689554,    0.32194701],
         [  -0.28689554,  118.89168597,    0.65559912],
         [  -0.32194701,   -0.65559912, -130.21758722]],

        [[  -0.28689554,  166.02923119,    0.00000123],
         [ 166.02923119,    0.49494431,    0.00000127],
         [  -0.00000123,   -0.00000127,    0.57156702]],

        [[  -0.32194701,   -0.00000123, -165.99413061],
         [  -0.00000123,   -0.64666809,    0.0000013 ],
         [ 165.99413061,   -0.0000013 ,   -0.42997465]]],


       [[[  -0.28689554,  166.02923119,    0.00000123],
         [ 166.02923119,    0.49494431,    0.00000127],
         [  -0.00000123,   -0.00000127,    0.57156702]],

        [[ 118.89168597,    0.49494431,    0.64666809],
         [   0.49494431,  516.15898907,    0.33132485],
         [  -0.64666809,   -0.33132485, -140.09010389]],

        [[  -0.65559912,   -0.00000127,    0.0000013 ],
         [  -0.00000127,   -0.33132485, -165.98553869],
         [  -0.0000013 ,  165.98553869,   -0.41913346]]],


       [[[   0.32194701,    0.00000123,  165.99413061],
         [   0.00000123,    0.64666809,   -0.0000013 ],
         [-165.99413061,    0.0000013 ,    0.42997465]],

        [[   0.65559912,    0.00000127,   -0.0000013 ],
         [   0.00000127,    0.33132485,  165.98553869],
         [   0.0000013 , -165.98553869,    0.41913346]],

        [[-130.21758722,    0.57156702,    0.42997465],
         [   0.57156702, -140.09010389,    0.41913346],
         [  -0.42997465,   -0.41913346,  486.62412063]]]])

Note the large negative entries.
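The mismatch is not specific to the elasticity data above; a minimal sketch with a random, purely illustrative C shows that the chained np.tensordot expression contracts different axes than the einsum does:

```python
import numpy as np

# Illustrative random fourth-order tensor in place of the elasticity data.
rng = np.random.default_rng(0)
C = rng.standard_normal((3, 3, 3, 3))
Q = np.array([[0, 1, 0], [-1, 0, 0], [0, 0, 1]])
Q_inv = np.linalg.inv(Q)

# Reference transformation: C'_ijkl = Q_im Q_jn C_mnop Qinv_ok Qinv_pl.
ref = np.einsum('im,jn,mnop,ok,pl', Q, Q, C, Q_inv, Q_inv)

# The naive chain from the question: the second tensordot contracts
# Q's axis 1 against the 'j' axis of the intermediate, not the 'm' axis.
chained = np.tensordot(
    np.tensordot(
        np.tensordot(Q, np.tensordot(Q, C, axes=[1, 1]), axes=[1, 0]),
        Q_inv, axes=[2, 0]),
    Q_inv, axes=[3, 0])

print(np.allclose(ref, chained))  # prints False
```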

1 Answer

Approach #1

One way to get the same result with np.tensordot as with np.einsum, though not in a single step, is with the help of trusty broadcasting -

# Get the broadcasted elementwise multiplication between two versions of Q.
# This corresponds to the "np.einsum('im,jn->ijmn', Q, Q)" part, producing an
# 'ijmn' broadcasted version of elementwise multiplications between the Q's.
Q_ext = Q[:,None,:,None]*Q[:,None,:]

# Similarly for Q_inv : for the "np.einsum('ok,pl->opkl', Q_inv, Q_inv)" part,
# get an 'opkl' broadcasted version of elementwise multiplications
# between the Q_inv's.
Q_inv_ext = Q_inv[:,None,:,None]*Q_inv[:,None,:]

# Perform the "np.einsum('ijmn,mnop', Q_ext, C)" part with "np.tensordot".
# Notice that we are using the last two axes from 'Q_ext', so "axes=[2,3]"
# for it, and the first two from 'C', so "axes=[0,1]" for it.
# These axes are reduced by the dot-product, leaving us with 'ijop'.
parte1 = np.tensordot(Q_ext,C,axes=([2,3],[0,1]))

# Do it one more time, performing "np.einsum('ijop,opkl', parte1, Q_inv_ext)"
# to reduce the dimensions represented by 'o,p', leaving us with 'ijkl'.
# To confirm, compare the following against original einsum approach :
# "np.einsum('im,jn,mnop,ok,pl->ijkl', Q, Q, C, Q_inv, Q_inv)"
out = np.tensordot(parte1,Q_inv_ext,axes=([2,3],[0,1]))
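As a quick self-contained check (with an illustrative random C rather than the elasticity tensor from the question), the two-step broadcasted version agrees with the one-shot einsum:

```python
import numpy as np

# Illustrative inputs; Q is the 90-degree rotation from the question.
rng = np.random.default_rng(1)
C = rng.standard_normal((3, 3, 3, 3))
Q = np.array([[0., 1., 0.], [-1., 0., 0.], [0., 0., 1.]])
Q_inv = np.linalg.inv(Q)

# Broadcasted outer products: Q_ext[i,j,m,n] = Q[i,m]*Q[j,n] and
# Q_inv_ext[o,p,k,l] = Q_inv[o,k]*Q_inv[p,l].
Q_ext = Q[:, None, :, None] * Q[:, None, :]
Q_inv_ext = Q_inv[:, None, :, None] * Q_inv[:, None, :]

# Contract 'mn' with C, then 'op' with Q_inv_ext, leaving 'ijkl'.
parte1 = np.tensordot(Q_ext, C, axes=([2, 3], [0, 1]))
out = np.tensordot(parte1, Q_inv_ext, axes=([2, 3], [0, 1]))

ref = np.einsum('im,jn,mnop,ok,pl', Q, Q, C, Q_inv, Q_inv)
print(np.allclose(out, ref))  # prints True
```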

Approach #2

If you would like to avoid broadcasting, at the cost of two additional np.tensordot calls, you can do it like so -

# Perform "np.einsum('jn,mnop', Q, C)". Notice how Q is represented by 'jn'
# and C by 'mnop'. We need to reduce the 'n' dimension, i.e. reduce "axes=[1]"
# from Q and "axes=[1]" from C, corresponding to 'n' in each of the inputs.
# Thus, 'jn' + 'mnop' => 'jmop' after 'n' is reduced and order is maintained.
Q_C1 = np.tensordot(Q,C,axes=([1],[1]))

# Perform "np.einsum('im,jn,mnop', Q, Q, C)". We need to use Q and Q_C1.
# Q is 'im' and Q_C1 is 'jmop'. Thus, again we need to reduce "axes=[1]"
# from Q and "axes=[1]" from Q_C1, corresponding to 'm' in each of the inputs.
# Thus, 'im' + 'jmop' => 'ijop' after 'm' is reduced and order is maintained.
parte1 = np.tensordot(Q,Q_C1,axes=([1],[1]))

# Use the same philosophy for the rest of the einsum equivalent, but now
# start from parte1 and contract with Q_inv twice on the right.
out = np.tensordot(np.tensordot(parte1,Q_inv,axes=([2],[0])),Q_inv,axes=([2],[0]))
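Tracking the subscripts through the chain (again with an illustrative random C) confirms it reproduces the einsum result:

```python
import numpy as np

# Illustrative inputs; Q is the 90-degree rotation from the question.
rng = np.random.default_rng(2)
C = rng.standard_normal((3, 3, 3, 3))
Q = np.array([[0., 1., 0.], [-1., 0., 0.], [0., 0., 1.]])
Q_inv = np.linalg.inv(Q)

Q_C1 = np.tensordot(Q, C, axes=([1], [1]))         # 'jn' + 'mnop' -> 'jmop'
parte1 = np.tensordot(Q, Q_C1, axes=([1], [1]))    # 'im' + 'jmop' -> 'ijop'
out = np.tensordot(
    np.tensordot(parte1, Q_inv, axes=([2], [0])),  # 'ijop' + 'ok' -> 'ijpk'
    Q_inv, axes=([2], [0]))                        # 'ijpk' + 'pl' -> 'ijkl'

ref = np.einsum('im,jn,mnop,ok,pl', Q, Q, C, Q_inv, Q_inv)
print(np.allclose(out, ref))  # prints True
```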

The trick with np.tensordot is to keep track of the dimensions that are reduced by the axes parameter, and of how the collapsed dimensions align against those of the remaining inputs.
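That alignment follows one simple rule, shown here on a tiny example with made-up shapes: the output keeps the uncontracted axes of the first argument, in order, followed by those of the second.

```python
import numpy as np

a = np.ones((2, 3, 4))  # axes 'xyz'
b = np.ones((3, 5))     # axes 'yw'

# Contracting 'y' leaves 'x','z' from a followed by 'w' from b -> 'xzw'.
r = np.tensordot(a, b, axes=([1], [0]))
print(r.shape)  # prints (2, 4, 5)
```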

answered 2016-03-03T17:01:58.473