Workshop on Matrix Equations and Tensor Techniques 2011
Aachen, 21 November 2011


                  ADI for Tensor Structured Equations
                       Thomas Mach and Jens Saak

        Max Planck Institute for Dynamics of Complex Technical Systems
           Computational Methods in Systems and Control Theory
                              Magdeburg


Max Planck Institute Magdeburg                                 Thomas Mach, Jens Saak, Tensor-ADI         1/37
ADI          ADI for Tensors       Numerical Results and Shifts        Conditioning of the Problem       Conclusions



   Classic ADI                                                          [Peaceman/Rachford ’55]


       Developed to solve linear systems related to Poisson problems

                            −∆u = f      in Ω ⊂ R^d, d = 2,
                              u = 0      on ∂Ω.

       uniform grid size h, centered differences, d = 1,

                            ⇒ ∆1,h u = h² f

                   ∆1,h = [  2  −1              ]
                          [ −1   2  −1          ]
                          [      ⋱   ⋱   ⋱     ]
                          [         −1   2  −1  ]
                          [             −1   2  ]




   Classic ADI                                                              [Peaceman/Rachford ’55]


       Developed to solve linear systems related to Poisson problems

                            −∆u = f      in Ω ⊂ R^d, d = 2,
                              u = 0      on ∂Ω.

       uniform grid size h, 5-point difference star, d = 2,

                            ⇒ ∆2,h u = h² f

            ∆2,h = [  K  −I              ]               K = [  4  −1              ]
                   [ −I   K  −I          ]                   [ −1   4  −1          ]
                   [      ⋱   ⋱   ⋱     ]      with         [      ⋱   ⋱   ⋱     ]
                   [         −I   K  −I  ]                   [         −1   4  −1  ]
                   [             −I   K  ]                   [             −1   4  ]





   Classic ADI                                                       [Peaceman/Rachford ’55]


       Observation
                            ∆2,h = (I ⊗ ∆1,h) + (∆1,h ⊗ I) =: H + V.

       Solve ∆2,h u = h² f =: f̃ exploiting the structure in H and V.

       For certain shift parameters p_i perform

                            (H + p_i I) u_{i+1/2} = (p_i I − V) u_i + f̃,
                            (V + p_i I) u_{i+1}   = (p_i I − H) u_{i+1/2} + f̃,

       until u_i is good enough.
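As a quick plain-NumPy illustration of the two half-steps (not the authors' code; the grid size n, the single fixed shift p, and the iteration count are arbitrary choices made for this sketch):

```python
import numpy as np

n = 16
h = 1.0 / (n + 1)
# Delta_{1,h} = tridiag(-1, 2, -1), centered differences
D1 = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
I = np.eye(n)
H = np.kron(I, D1)              # I (x) Delta_{1,h}
V = np.kron(D1, I)              # Delta_{1,h} (x) I
f = h**2 * np.ones(n * n)       # f_tilde = h^2 f for f = 1
In2 = np.eye(n * n)

u = np.zeros(n * n)
p = 2.0                         # one crude fixed shift; real ADI cycles shifts
for _ in range(300):
    u_half = np.linalg.solve(H + p * In2, (p * In2 - V) @ u + f)
    u = np.linalg.solve(V + p * In2, (p * In2 - H) @ u_half + f)

residual = np.linalg.norm((H + V) @ u - f)
```

Each half-step only requires solves with H + pI and V + pI, which are (permuted) block-tridiagonal and therefore cheap; that is the point of the splitting.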





   ADI and Lyapunov Equations                                                         [Wachspress ’88]


       Lyapunov Equation
                            F X + X F^T = −G G^T

       Vectorized Lyapunov Equation
                 ((I ⊗ F) + (F ⊗ I)) vec(X) = −vec(G G^T),   H_F := I ⊗ F,  V_F := F ⊗ I

                            Same structure ⇒ apply ADI

                 (F + p_i I) X_{i+1/2}  = −G G^T − X_i (F^T − p_i I),
                 (F + p_i I) X_{i+1}^T  = −G G^T − X_{i+1/2}^T (F^T − p_i I).






   LR-ADI for Lyapunov Equations

       Lyapunov Equation
                            F X + X F^T = −G G^T

              Often the singular values of X decay rapidly when G is "thin"
                            ⇒ X = Z Z^T with Z "thin".

       LR-ADI                                                      [Penzl '99, Li/White '02]

                 Z_0 = [ ],   V_1 = √(−2 Re(p_1)) (F + p_1 I)^{−1} G,
                 V_i = √(Re(p_i)/Re(p_{i−1})) ( I − (p_i + p̄_{i−1})(F + p_i I)^{−1} ) V_{i−1},
                 Z_i = [Z_{i−1}, V_i].
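A dense-matrix sketch of this recursion with real shifts (F, G, and the log-spaced shift sequence are made up for illustration; they are not the Wachspress/Penzl shifts discussed later):

```python
import numpy as np

n = 50
F = -np.diag(np.arange(1.0, n + 1))          # a stable diagonal test matrix
G = np.ones((n, 1))
shifts = -np.logspace(0, np.log10(n), 20)    # crude negative shifts covering spec(F)
I = np.eye(n)

# V_1 = sqrt(-2 Re p_1) (F + p_1 I)^{-1} G
V = np.sqrt(-2 * shifts[0]) * np.linalg.solve(F + shifts[0] * I, G)
Z = V.copy()
for i in range(1, len(shifts)):
    p, q = shifts[i], shifts[i - 1]
    # V_i = sqrt(Re p_i / Re p_{i-1}) (I - (p_i + conj(p_{i-1}))(F + p_i I)^{-1}) V_{i-1}
    V = np.sqrt(p / q) * (V - (p + q) * np.linalg.solve(F + p * I, V))
    Z = np.hstack([Z, V])

X = Z @ Z.T                                   # low-rank approximation of the solution
res = np.linalg.norm(F @ X + X @ F.T + G @ G.T) / np.linalg.norm(G @ G.T)
```

Note that only solves with F + p_i I and the thin factor Z are ever formed; X itself is never stored explicitly in practice.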





   Generalizing Matrix Equations

                            ∆2,h vec(X) = vec(B)
              ( I ⊗ ∆1,h + ∆1,h ⊗ I ) vec(X) = vec(B),
              H := I ⊗ ∆1,h,  V := ∆1,h ⊗ I,  u := vec(X),  f := vec(B)


       [Tensor diagram: ∆_{µa} applied to mode a of X_{ac}, plus ∆_{µc} applied to
        mode c of X_{ac}, equals B_{ac}.]






   Generalizing Matrix Equations

                            ∆4,h vec(X) = vec(B)
              ( I ⊗ I ⊗ I ⊗ ∆1,h + I ⊗ I ⊗ ∆1,h ⊗ I + I ⊗ ∆1,h ⊗ I ⊗ I + ∆1,h ⊗ I ⊗ I ⊗ I ) vec(X) = vec(B),
              H := I ⊗ I ⊗ I ⊗ ∆1,h,  V := I ⊗ I ⊗ ∆1,h ⊗ I,  R := I ⊗ ∆1,h ⊗ I ⊗ I,  Q := ∆1,h ⊗ I ⊗ I ⊗ I,
              u := vec(X),  f := vec(B)


       [Tensor diagram: ∆_{µa}, ∆_{µb}, ∆_{µc}, ∆_{µd} applied to modes a, b, c, d of
        X_{abcd}; their sum equals B_{abcd}.]
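The correspondence between the Kronecker-sum matrix and the sum of mode products can be checked numerically on a small example (sizes are arbitrary; a third-order tensor is used here rather than the slide's d = 2 and d = 4):

```python
import numpy as np

n = 4
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # Delta_{1,h}
I = np.eye(n)

# I(x)I(x)A + I(x)A(x)I + A(x)I(x)I
big = (np.kron(np.kron(I, I), A)
       + np.kron(np.kron(I, A), I)
       + np.kron(np.kron(A, I), I))

rng = np.random.default_rng(0)
X = rng.standard_normal((n, n, n))

def mode_mult(T, M, k):
    # T x_k M: apply the matrix M along axis k of the tensor T
    return np.moveaxis(np.tensordot(M, T, axes=(1, k)), 0, k)

Y = sum(mode_mult(X, A, k) for k in range(3))           # sum of the mode products
err = np.linalg.norm(big @ X.reshape(-1) - Y.reshape(-1))
```

The sum over all modes is invariant under the vectorization ordering, so the check is insensitive to the kron convention used.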







   Generalizing ADI

              ( I ⊗ ∆1,h + ∆1,h ⊗ I ) vec(X) = vec(B),
              H := I ⊗ ∆1,h,  V := ∆1,h ⊗ I,  u := vec(X),  f := vec(B)

                 (H + I ⊗ p_{i,1} I) X_{i+1/2} = (p_{i,1} I − V) X_i       + B
                 (V + p_{i,2} I ⊗ I) X_{i+1}   = (p_{i,2} I − H) X_{i+1/2} + B


              ( I ⊗ I ⊗ I ⊗ ∆1,h + I ⊗ I ⊗ ∆1,h ⊗ I + I ⊗ ∆1,h ⊗ I ⊗ I + ∆1,h ⊗ I ⊗ I ⊗ I ) vec(X) = vec(B),
              H, V, R, Q := the four summands in order,  u := vec(X),  f := vec(B)

                 (H + I ⊗ I ⊗ I ⊗ p_{i,1} I) X_{i+1/4} = (p_{i,1} I − V − R − Q) X_i       + B
                 (V + I ⊗ I ⊗ p_{i,2} I ⊗ I) X_{i+1/2} = (p_{i,2} I − H − R − Q) X_{i+1/4} + B
                 (R + I ⊗ p_{i,3} I ⊗ I ⊗ I) X_{i+3/4} = (p_{i,3} I − H − V − Q) X_{i+1/2} + B
                 (Q + p_{i,4} I ⊗ I ⊗ I ⊗ I) X_{i+1}   = (p_{i,4} I − H − V − R) X_{i+3/4} + B




   Goal

                                              Solve AX = B



                                 A =   I ⊗ I ⊗ · · · ⊗ I ⊗ I ⊗ A1
                                     + I ⊗ I ⊗ · · · ⊗ I ⊗ A2 ⊗ I
                                     + · · ·
                                     + Ad ⊗ I ⊗ · · · ⊗ I ⊗ I



       B is given in tensor train decomposition
       ⇒ X is sought in tensor train decomposition.








   Tensor Trains                                                     [Oseledets, Tyrtyshnikov ’09]



                         r1,…,r_{d−1}
        T(i1, i2, …, id) =    Σ        G1(i1, α1) G2(α1, i2, α2) · · · Gj(α_{j−1}, ij, αj) · · ·
                        α1,…,α_{d−1}=1
                                       G_{d−1}(α_{d−2}, i_{d−1}, α_{d−1}) Gd(α_{d−1}, id).


        [Tensor train diagram: G1(i1, α1) — α1 — G2(α1, i2, α2) — α2 — · · · — Gd(α_{d−1}, id)]
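In code, one entry of a TT tensor is just a product of small matrices, one slice per core (random cores, purely illustrative; n, d, r are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, r = 5, 4, 3
# Cores G_j(alpha_{j-1}, i_j, alpha_j); the boundary ranks are 1
cores = ([rng.standard_normal((1, n, r))]
         + [rng.standard_normal((r, n, r)) for _ in range(d - 2)]
         + [rng.standard_normal((r, n, 1))])

def tt_entry(cores, idx):
    v = np.ones((1, 1))
    for G, i in zip(cores, idx):
        v = v @ G[:, i, :]          # contract over the rank index alpha_j
    return v[0, 0]

# Cross-check against the explicitly assembled full tensor
full = cores[0]
for G in cores[1:]:
    full = np.tensordot(full, G, axes=(-1, 0))
full = full.reshape((n,) * d)

idx = (1, 2, 3, 0)
err = abs(tt_entry(cores, idx) - full[idx])
```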







   Tensor Trains                                                    [Oseledets, Tyrtyshnikov ’09]


       Tensor trains are
              computable, and
              require only O(d n r²) storage, with TT-rank r and T ∈ R^(n^d).

       Canonical representation

              T(i1, i2, …, id) = Σ_α G1(i1, α) · · · Gd(id, α)

       Tucker decomposition

              T(i1, i2, …, id) = Σ_{α1,…,αd} C(α1, …, αd) G1(i1, α1) · · · Gd(id, αd)








   Tensor Trains                                                    [Oseledets, Tyrtyshnikov ’09]


                                   (I ⊗ · · · ⊗ I ⊗ A1) T


       [Diagram: A1(β, i1) is attached to the i1 leg of the first core; contracting
        gives a new first core G̃(β, α1) = A1 G1, the rest of the train is unchanged.]

        T(i1, …, id) ×1 A1 = Σ_{α1,…,α_{d−1}} Σ_{i1} A1(β, i1) G1(i1, α1) G2(α1, i2, α2)
                                 · · · G_{d−1}(α_{d−2}, i_{d−1}, α_{d−1}) Gd(α_{d−1}, id)

       and likewise with the inverse,

        T(i1, …, id) ×1 A1^{−1} = Σ_{α1,…,α_{d−1}} Σ_{i1} A1^{−1}(β, i1) G1(i1, α1) G2(α1, i2, α2)
                                 · · · G_{d−1}(α_{d−2}, i_{d−1}, α_{d−1}) Gd(α_{d−1}, id)
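That multiplying mode 1 by A1 (or solving with A1) touches only the first core can be checked on a tiny d = 2 train (random data; all names here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n, r = 5, 3
G1 = rng.standard_normal((n, r))                   # first core G1(i1, alpha1)
G2 = rng.standard_normal((r, n))                   # last core G2(alpha1, i2)
A1 = rng.standard_normal((n, n)) + n * np.eye(n)   # safely invertible test matrix

T = G1 @ G2                                        # the full d = 2 tensor (a matrix)

# T x_1 A1: replace G1 by A1 @ G1, leave G2 untouched
err_mult = np.linalg.norm(A1 @ T - (A1 @ G1) @ G2)

# T x_1 A1^{-1}: a linear solve on the first core only
err_solve = np.linalg.norm(np.linalg.solve(A1, T) - np.linalg.solve(A1, G1) @ G2)
```

This is what makes the shifted solves in the tensor ADI step cheap: each (A_k + p_i I)^{−1} acts on a single core of size r × n × r, not on the full tensor.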






   Eigenvalues

       A = I ⊗ · · · ⊗ I ⊗ A1 + I ⊗ · · · ⊗ I ⊗ A2 ⊗ I + · · · + Ad ⊗ I ⊗ · · · ⊗ I

       Theorem of Stéphanos:

                   ⇒ λ_i(A) = λ_{i1}(A1) + λ_{i2}(A2) + · · · + λ_{id}(Ad),

       with i = i1 + i2 n1 + · · · + id Π_{j=1}^{d−1} nj.


                                            d
                            AX = B   ⇔      Σ  X ×j Aj = B
                                           j=1

       A is regular   ⇔   λ_i(A) ≠ 0 ∀i   ⇐   Aj Hurwitz ∀j
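The eigenvalue statement is easy to verify numerically for small symmetric Aj (sizes and matrices arbitrary, chosen symmetric so the spectra are real and sortable):

```python
import numpy as np

rng = np.random.default_rng(3)
mats = []
for _ in range(3):
    M = rng.standard_normal((3, 3))
    mats.append(M + M.T)                       # symmetric => real spectrum
A1, A2, A3 = mats
I = np.eye(3)

A = (np.kron(np.kron(I, I), A1)
     + np.kron(np.kron(I, A2), I)
     + np.kron(np.kron(A3, I), I))

# Every eigenvalue of A is a sum lam_{i1}(A1) + lam_{i2}(A2) + lam_{i3}(A3)
sums = sorted(l1 + l2 + l3
              for l1 in np.linalg.eigvalsh(A1)
              for l2 in np.linalg.eigvalsh(A2)
              for l3 in np.linalg.eigvalsh(A3))
err = np.max(np.abs(np.array(sums) - np.sort(np.linalg.eigvalsh(A))))
```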





   Algorithm

       Input: {A1, . . . , Ad}, tensor train B, accuracy ε
       Output: tensor train X with AX = B
       forall j ∈ {1, . . . , d} do
           Xj^(0) := zeros(n, 1, 1)
       end
       while ‖r^(i)‖ > ε do
           Choose shift p_i
           forall k ∈ {1, . . . , d} do

               X^(i+k/d) := ( B + p_i X^(i+(k−1)/d) − Σ_{j=1, j≠k}^{d} X^(i+(k−1)/d) ×j Aj ) ×k (Ak + p_i I)^{−1}

           end
       end

       Residual: r^(i) := B − Σ_{j=1}^{d} X^(i) ×j Aj, accumulated term by term as
       r^(i) := r^(i) − X^(i) ×j Aj over j ∈ {1, . . . , d}.

       Note: X^(i+(k−1)/d) ×j Aj is the tensor form of
       (I ⊗ I ⊗ · · · ⊗ I ⊗ Aj ⊗ I ⊗ · · · ⊗ I) X^(i+(k−1)/d).
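A dense-tensor sketch of this sweep (d = 3, one fixed hand-picked shift, no TT arithmetic or truncation, so it only mirrors the update formula, not the authors' TT implementation):

```python
import numpy as np

n, d = 6, 3
A1 = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # Delta_{1,h}, SPD
As = [A1] * d
rng = np.random.default_rng(4)
B = rng.standard_normal((n,) * d)
I = np.eye(n)

def mode_mult(T, M, k):
    # T x_k M: apply the matrix M along axis k of the tensor T
    return np.moveaxis(np.tensordot(M, T, axes=(1, k)), 0, k)

X = np.zeros((n,) * d)
p = 4.0                          # single fixed shift, adequate for this SPD example
for _ in range(60):
    for k in range(d):
        # X <- (B + p X - sum_{j != k} X x_j A_j) x_k (A_k + p I)^{-1}
        S = sum(mode_mult(X, As[j], j) for j in range(d) if j != k)
        X = mode_mult(B + p * X - S, np.linalg.inv(As[k] + p * I), k)

rel = (np.linalg.norm(B - sum(mode_mult(X, As[j], j) for j in range(d)))
       / np.linalg.norm(B))
```

In the TT setting the same update costs only core-sized operations, since each ×j Aj and each shifted solve acts on a single core.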
ADI          ADI for Tensors    Numerical Results and Shifts   Conditioning of the Problem      Conclusions



   Improvements
               better shifts, e.g. Wachspress/Penzl
               test residual in innermost loop
               replace inner loop by random k
               use tensor train truncation and start with low accuarcy




Max Planck Institute Magdeburg                                     Thomas Mach, Jens Saak, Tensor-ADI   14/37
ADI           ADI for Tensors    Numerical Results and Shifts           Conditioning of the Problem        Conclusions



   Improvements
                better shifts, e.g. Wachspress/Penzl
                test residual in innermost loop
                replace inner loop by random k
                use tensor train truncation and start with low accuarcy

       [...]
       while r (i) > do
          Choose shifts pi,j
          forall k ∈ {1, . . . , d} do

                                                   k −1
                                                                 d
                           k                                              k                                   −1
                    X (i+ d ) := B + pi,k X (i+      d )   −         X (i+ d ) ×j Aj ×k (Ak − pi,k I )
                                                               j=1
                                                               j=k
          end
       end

Max Planck Institute Magdeburg                                                Thomas Mach, Jens Saak, Tensor-ADI   14/37
ADI           ADI for Tensors    Numerical Results and Shifts           Conditioning of the Problem        Conclusions



   Improvements
                better shifts, e.g. Wachspress/Penzl
                test residual in innermost loop
                replace inner loop by random k
                use tensor train truncation and start with low accuarcy

       [...]
       while r (i) > do
          Choose shifts pi,j
          forall k ∈ {1, . . . , d} do

                                                   k −1
                                                                 d
                           k                                              k                                   −1
                    X (i+ d ) := B + pi,k X (i+      d )   −         X (i+ d ) ×j Aj ×k (Ak − pi,k I )
                                                               j=1
                                                               j=k
          end
       end

Max Planck Institute Magdeburg                                                Thomas Mach, Jens Saak, Tensor-ADI   14/37
ADI           ADI for Tensors    Numerical Results and Shifts           Conditioning of the Problem      Conclusions



   Improvements
                better shifts, e.g. Wachspress/Penzl
                test residual in innermost loop
                replace inner loop by random k
                use tensor train truncation and start with low accuarcy

       [...]
       while r (i) > do
          Choose shifts pi,j
          forall   = 1, . . . , 5 do
              k := random(1, . . . , d)
                                                    −1
                                                                 d
                                                                                                            −1
                    X (i+ 5 ) := B + pi, X (i+      5 )   −          X (i+ 5 ) ×j Aj ×k (Ak − pi, I )
                                                              j=1
                                                              j=k
          end
       end

Max Planck Institute Magdeburg                                              Thomas Mach, Jens Saak, Tensor-ADI   14/37
Additional remark: The improvement with randomly chosen directions works for the Lyapunov case with the special right-hand side as in the examples. The investigation of the convergence behavior for random j is work in progress.
[Figure: storage in doubles (0 to 1·10^5, left axis) and truncation error (10^−2 to 10^−20, right axis) over 30 iterations, comparing a constant truncation error of 10^−2 with a tightened truncation error.]
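As a concrete illustration, here is a minimal NumPy sketch of the iteration with a randomly chosen direction k per half-step, for dense A_j and a full tensor X (no tensor train truncation). The names `mode_product` and `tensor_adi` are ours, not from the talk, and the shift signs are arranged so that the exact solution is a fixed point of each half-step (conventions for the sign of p vary); this is a sketch of the scheme, not the authors' implementation.

```python
import numpy as np

def mode_product(X, A, k):
    """Mode-k product X ×_k A: apply the matrix A along mode k of the tensor X."""
    Xk = np.moveaxis(X, k, 0)
    Y = A @ Xk.reshape(Xk.shape[0], -1)
    return np.moveaxis(Y.reshape((A.shape[0],) + Xk.shape[1:]), 0, k)

def tensor_adi(As, B, shifts, sweeps=5, maxit=100, tol=1e-10, seed=0):
    """ADI for sum_j X ×_j A_j = B, choosing the direction k at random per half-step."""
    rng = np.random.default_rng(seed)
    d = len(As)
    X = np.zeros_like(B)
    for i in range(maxit):
        # residual test in the outer loop
        R = B - sum(mode_product(X, As[j], j) for j in range(d))
        if np.linalg.norm(R) <= tol * np.linalg.norm(B):
            break
        for ell in range(sweeps):
            k = int(rng.integers(d))
            p = shifts[(i * sweeps + ell) % len(shifts)]  # p < 0 for Hurwitz A_j
            # right-hand side B + p X - sum_{j != k} X ×_j A_j ...
            rhs = B + p * X - sum(mode_product(X, As[j], j)
                                  for j in range(d) if j != k)
            # ... then solve along mode k with the shifted matrix A_k + p I
            n = As[k].shape[0]
            Rk = np.moveaxis(rhs, k, 0)
            Y = np.linalg.solve(As[k] + p * np.eye(n), Rk.reshape(n, -1))
            X = np.moveaxis(Y.reshape(Rk.shape), 0, k)
    return X
```

For symmetric Hurwitz A_j one can check componentwise in the eigenbasis that a half-step contracts the error whenever |p − Σ_{j≠k} µ_j| < |µ_k + p| for all eigenvalue combinations µ_j of the A_j.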
Lemma

Lemma                                                        [Grasedyck ’04]
The tensor equation

    Σ_{j=1}^{d} X ×_j A_j = B

with λ_i(A) ≠ 0 ∀i and A_k Hurwitz has the solution

    X = − ∫_0^∞ B ×_1 exp(A_1 t) ×_2 ··· ×_d exp(A_d t) dt.

Proof sketch: set

    Z(t) = B ×_1 exp(A_1 t) ×_2 ··· ×_d exp(A_d t),

then

    Ż(t) = Σ_{j=1}^{d} Z(t) ×_j A_j,        Z(∞) − Z(0) = ∫_0^∞ Ż(t) dt,

so

    0 − B = Σ_{j=1}^{d} ( ∫_0^∞ Z(t) dt ) ×_j A_j.
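The lemma can be checked numerically for d = 2, where the equation reads A₁X + XA₂ᵀ = B. The sketch below is our construction (plain NumPy, symmetric A so that exp(At) is available via an eigendecomposition): it compares the integral representation, evaluated by a trapezoid rule, against a direct Kronecker-form solve.

```python
import numpy as np

n = 4
h = 1.0 / (n + 1)
# 1D Dirichlet Laplacian; Hurwitz, so the integral representation applies
A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / h**2
B = np.ones((n, n))

# direct solve of X ×_1 A + X ×_2 A = A X + X A^T = B (row-major vectorization)
I = np.eye(n)
X_direct = np.linalg.solve(np.kron(A, I) + np.kron(I, A), B.ravel()).reshape(n, n)

# X = -∫_0^∞ B ×_1 exp(A t) ×_2 exp(A t) dt via eigendecomposition + trapezoid rule
w, V = np.linalg.eigh(A)
def Z(t):
    E = V @ np.diag(np.exp(w * t)) @ V.T   # exp(A t) for symmetric A
    return E @ B @ E.T
ts = np.linspace(0.0, 2.0, 20001)          # integrand decays like e^{2 max(w) t}
vals = np.array([Z(t) for t in ts])
dt = ts[1] - ts[0]
X_quad = -(vals.sum(axis=0) - 0.5 * (vals[0] + vals[-1])) * dt

rel = np.linalg.norm(X_quad - X_direct) / np.linalg.norm(X_direct)
```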
Theorem

Theorem
{A_1, …, A_d} ⇒ A,  Λ(A) ⊂ [−λ_max, −λ_min] ⊕ ı[−µ, µ] ⊂ C⁻.
Let k ∈ ℕ and use the quadrature points and weights

    h_st := π/√k,   t_j := log( e^{j·h_st} + √(1 + e^{2j·h_st}) ),   w_j := h_st / √(1 + e^{−2j·h_st}).

Let B be given in the TT format

    B(i_1, i_2, …, i_d) = Σ_{α_1,…,α_{d−1}} G_1(i_1, α_1) G_2(α_1, i_2, α_2) ··· G_{d−1}(α_{d−2}, i_{d−1}, α_{d−1}) G_d(α_{d−1}, i_d).

Then the solution X can be approximated by

    X̃(i_1, i_2, …, i_d) = − Σ_{α_1,…,α_{d−1}=1}^{r_1,…,r_{d−1}} H_1(i_1, α_1) ··· H_d(α_{d−1}, i_d),

with

    H_p(α_{p−1}, i_p, α_p) := Σ_{j=−k}^{k} (2w_j/λ_min) Σ_{β_p} [ e^{(2t_j/λ_min) A_p} ]_{i_p, β_p} G_p(α_{p−1}, β_p, α_p),

and the approximation error

    ‖X − X̃‖_2 ≤ (C_st/(π λ_min)) e^{(2µ λ_min^{−1} + 1)/π} e^{−π√k} ∮_Γ ‖(λI − 2A/λ_min)^{−1}‖_2 dΓ_λ ‖B‖_2,

extending [Grasedyck ’04] (X and B of low Kronecker rank) to low TT-rank.
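For d = 2 and symmetric A_p, the construction in the theorem can be sketched directly (no TT cores; the matrix exponential is realized by an eigendecomposition; all variable names below are our own):

```python
import numpy as np

# X ≈ -sum_j (2 w_j/λmin) e^{(2 t_j/λmin) A1} B (e^{(2 t_j/λmin) A2})^T  for d = 2
n = 4
h = 1.0 / (n + 1)
A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / h**2
B = np.ones((n, n))

w_eig, V = np.linalg.eigh(A)            # A symmetric: exp(sA) via eigendecomposition
def expmA(s):
    return V @ np.diag(np.exp(w_eig * s)) @ V.T

# spectrum of the sum operator A: all pairwise sums w_i + w_j (real, negative; µ = 0)
sums = w_eig[:, None] + w_eig[None, :]
lmin = -sums.max()                      # smallest magnitude of the spectrum

k = 25
hst = np.pi / np.sqrt(k)
j = np.arange(-k, k + 1)
t = np.log(np.exp(j * hst) + np.sqrt(1.0 + np.exp(2.0 * j * hst)))
w = hst / np.sqrt(1.0 + np.exp(-2.0 * j * hst))

X_tilde = -sum(2.0 * wj / lmin * expmA(2.0 * tj / lmin) @ B @ expmA(2.0 * tj / lmin).T
               for tj, wj in zip(t, w))

# reference: direct Kronecker solve of A X + X A^T = B
I = np.eye(n)
X_ref = np.linalg.solve(np.kron(A, I) + np.kron(I, A), B.ravel()).reshape(n, n)
rel = np.linalg.norm(X_tilde - X_ref) / np.linalg.norm(X_ref)
```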
Proof – Part 1
The quadrature formula (t_j, w_j) can be found in [Stenger ’93, Example 4.2.11], with d = π/2, α = β = 1, n = N = M = k. [Hackbusch ’09, D.4.3] shows that d = π/2 is optimal. The quadrature formula is used to approximate 1/r, resp. the inverse of a matrix, by

    1/(−r) = ∫_0^∞ e^{tr} dt ≈ Σ_{j=−k}^{k} w_j e^{t_j r}.

The quadrature error is bounded by [Stenger ’93, (4.2.60)]

    | ∫_0^∞ e^{tr} dt − Σ_{j=−k}^{k} w_j e^{t_j r} | ≤ C_3 e^{−π√k},

with [Grasedyck, Hackbusch, Khoromskij ’03]

    C_3 ≤ C_st e^{|ℑ(z)|/π}.
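A quick numerical check of this scalar quadrature and its e^{−π√k}-type decay (our script; `sinc_quad` is a name we introduce):

```python
import numpy as np

def sinc_quad(r, k):
    """Approximate ∫_0^∞ e^{tr} dt = 1/(-r) for Re r < 0 with the Stenger rule."""
    hst = np.pi / np.sqrt(k)
    j = np.arange(-k, k + 1)
    t = np.log(np.exp(j * hst) + np.sqrt(1.0 + np.exp(2.0 * j * hst)))
    w = hst / np.sqrt(1.0 + np.exp(-2.0 * j * hst))
    return float(np.sum(w * np.exp(t * r)))

# error at r = -1 (exact value 1) for increasing k
errs = [abs(sinc_quad(-1.0, k) - 1.0) for k in (4, 9, 16, 25)]
```

The error should shrink roughly like e^{−π√k}, up to the constant C₃.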
Proof – Part 2
Combining the lemma

    X = − ∫_0^∞ B ×_1 exp(A_1 t) ×_2 ··· ×_d exp(A_d t) dt

with the scaling 2/λ_min gives the formula

    X̃(i_1, i_2, …, i_d) = − Σ_{α_1,…,α_{d−1}=1}^{r_1,…,r_{d−1}} H_1(i_1, α_1) ··· H_d(α_{d−1}, i_d),

where

    H_p(α_{p−1}, i_p, α_p) := Σ_{j=−k}^{k} (2w_j/λ_min) Σ_{β_p} [ e^{(2t_j/λ_min) A_p} ]_{i_p, β_p} G_p(α_{p−1}, β_p, α_p),

using the scalar quadrature

    1/(−r) = ∫_0^∞ e^{tr} dt ≈ Σ_{j=−k}^{k} w_j e^{t_j r}.
Proof – Part 3
For the error it holds that

    ‖X − X̃‖_2 = ‖ Σ_{α_1,…,α_{d−1}=1}^{r_1,…,r_{d−1}} Σ_{p=1}^{d} Σ_{β_p} G_p(α_{p−1}, β_p, α_p)
                   ( − ∫_0^∞ e^{(2t/λ_min) A_p} dt + Σ_{j=−k}^{k} w_j e^{(2t_j/λ_min) A_p} )_{i_p, β_p} ‖_2.

Dunford–Cauchy representation of the matrix exponential:

    e^{t A_p} = (1/2πı) ∮_Γ e^{tλ} (λI − A_p)^{−1} dΓ_λ.

The Dunford–Cauchy formula and the quadrature error give

    ‖ − ∫_0^∞ e^{(2t/λ_min) A_p} dt + Σ_{j=−k}^{k} w_j e^{(2t_j/λ_min) A_p} ‖

    ≤ (1/2π) ‖ − ∮_Γ ∫_0^∞ e^{tλ} (λI − 2A_p/λ_min)^{−1} dt dΓ_λ + Σ_{j=−k}^{k} w_j ∮_Γ e^{t_j λ} (λI − 2A_p/λ_min)^{−1} dΓ_λ ‖

    ≤ (1/2π) ∮_Γ | − ∫_0^∞ e^{tλ} dt + Σ_{j=−k}^{k} w_j e^{t_j λ} | ‖(λI − 2A_p/λ_min)^{−1}‖ dΓ_λ

    ≤ (1/2π) C_st e^{|ℑ(z)|/π} e^{−π√k} ∮_Γ ‖(λI − 2A_p/λ_min)^{−1}‖_2 dΓ_λ.

Summing over p completes the proof.
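The Dunford–Cauchy integral itself is easy to test numerically: discretizing a circular contour around Λ(A) with the trapezoidal rule (geometrically convergent for periodic analytic integrands) reproduces e^{tA}. The example below is our sketch, with contour parameters chosen for this particular A.

```python
import numpy as np

# e^{tA} = (1/2πı) ∮_Γ e^{tλ} (λI − A)^{−1} dΓ_λ, Γ a circle enclosing Λ(A)
A = np.array([[-2.0, 1.0, 0.0],
              [1.0, -2.0, 1.0],
              [0.0, 1.0, -2.0]])
t = 0.5
n = A.shape[0]
c, rho = -2.0, 3.0                     # circle center/radius; Λ(A) ⊂ (−4, 0) lies inside
M = 128                                # trapezoid points in the angle θ
theta = 2.0 * np.pi * np.arange(M) / M
lam = c + rho * np.exp(1j * theta)
dlam = 1j * rho * np.exp(1j * theta)   # dλ/dθ
E = sum(np.exp(t * l) * np.linalg.inv(l * np.eye(n) - A) * dl
        for l, dl in zip(lam, dlam)) / (1j * M)

# reference via eigendecomposition (A symmetric)
w, V = np.linalg.eigh(A)
E_ref = V @ np.diag(np.exp(w * t)) @ V.T
err = np.linalg.norm(E.real - E_ref)
```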
Numerical Results
Example: Laplace – A_i = ∆_{1, 1/11}

    A_i = ∆_{1, 1/11}
    B = [0 0 … 0 1]

Shifts:
    p_i := λ_∗(A_1) + … + λ_∗(A_d)        — randomly chosen eigenvalues
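The eigenvalues of A_i = ∆_{1,1/11} are known in closed form, so eigenvalue-sum shifts of this kind are cheap to draw. A sketch with our own helper names (`laplace_eigs`, `random_shift`):

```python
import numpy as np

def laplace_eigs(n):
    """Eigenvalues of the 1D Dirichlet Laplacian with n interior points, h = 1/(n+1)."""
    h = 1.0 / (n + 1)
    jj = np.arange(1, n + 1)
    return (2.0 * np.cos(jj * np.pi / (n + 1)) - 2.0) / h**2

def random_shift(d, n=10, rng=np.random.default_rng(1)):
    """Shift p = sum of one randomly chosen eigenvalue per dimension."""
    ev = laplace_eigs(n)
    return float(sum(ev[rng.integers(n)] for _ in range(d)))

p = [random_shift(d=5) for _ in range(4)]   # a few shifts for d = 5, n = 10 (h = 1/11)
```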
Numerical Results – A_i = ∆_{1, 1/11}

      d        t in s        residual    mean(#it)
      2    0.3887 e+00    7.015 e−10        112.8
      5    5.3975 e+00    7.467 e−10         45.8
      8    6.0073 e+00    6.936 e−10         12.8
     10    3.6624 e+00    7.685 e−10          6.8
     25    3.1421 e+01    2.437 e−10          5.0
     50    2.2682 e+02    2.049 e−10          5.0
     75    7.1918 e+02    4.036 e−10          5.0
    100    1.6997 e+03    1.864 e−10          5.0
    150    5.5375 e+03    1.801 e−10          5.0
    200    1.2795 e+04    1.472 e−10          5.0
    250    2.4991 e+04    1.816 e−10          5.0
    300    4.2979 e+04    2.535 e−10          5.0
    500    1.9515 e+05    2.039 e−10          5.0
ADI           ADI for Tensors       Numerical Results and Shifts         Conditioning of the Problem         Conclusions



   Numerical Results – Ai = ∆1, 11
                                 1




                                                sparse                                           dense
           d       TADI                       MESS                 Penzl’s sh.                               lyap
           2       0.310         0.0006         0.024                    0.003       0.0003                 0.0005
           4       3.130         0.1695         0.011                    0.049        6.331                  0.012
           6       8.147             —          0.076                    0.094           —                   7.165
           8       5.458             —          5.863                    1.097           —              13 698.212
          10       5.306             —      3 445.523                 249.464            —                       —




   Numerical Results – Ai = ∆1,1/11

       [Figure: computation time in s versus dimension d, logarithmic axes
        (10⁻² to 10⁵ s, d from 10 to 300), comparing Tensor ADI, sparse
        MESS, sparse Penzl’s shifts, and dense lyap.]

   Single Shift and Convergence

       A = I ⊗ · · · ⊗ I ⊗ A1 + I ⊗ · · · ⊗ I ⊗ A2 ⊗ I + . . . + Ad ⊗ I ⊗ · · · ⊗ I
       We assume Λ(Ak) ⊂ R−.

       Error Propagation, Single Shift

           ‖G1‖2 ≤ max_{λk ∈ Λ(Ak), k=1,...,d} ∏_{l=1}^{d} | (p − Σ_{k=1}^{d} λk + λl) / (p + λl) |
                 = max_{λk ∈ Λ(Ak), k=1,...,d} ∏_{l=1}^{d} | 1 − (Σ_{k=1}^{d} λk) / (p + λl) | .

       If ‖G1‖2 < 1, then the ADI iteration converges.
           p < 0 and p > −∞
           p < λi(A) = Σ_{k=1}^{d} λk(Ak)   ∀i
           Lyapunov case (Ak = A0 ∀k): p < (d − 2)/2 · λmin(A0), which for d = 2 gives p < 0.

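The contraction bound can be checked numerically. The sketch below (ours, not from the slides) brute-forces the right-hand side over all eigenvalue tuples for d = 2, taking Ak = −tridiag(−1, 2, −1) with ten interior grid points (so Λ(Ak) ⊂ R−, matching the ∆1,1/11 setup); any single shift p < 0 should then give a factor below 1.

```python
import numpy as np
from itertools import product

# Spectrum of A0 = -tridiag(-1, 2, -1), n = 10 interior points, h = 1/11:
n, d = 10, 2
lam = np.array([-(2 - 2 * np.cos((k + 1) * np.pi / (n + 1))) for k in range(n)])

def error_factor(p, lams):
    """Spectral bound prod_l |1 - (sum_k lambda_k)/(p + lambda_l)| for one tuple."""
    s = sum(lams)
    return np.prod([abs(1 - s / (p + l)) for l in lams])

p = -1.0  # any p < 0 should yield a contraction in the Lyapunov case d = 2
worst = max(error_factor(p, t) for t in product(lam, repeat=d))
print(worst < 1)  # the single-shift ADI sweep is a contraction
```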
   Shifts

       Min-Max Problem

           min_{{p1,1, ..., pℓ,d} ⊂ C−}  max_{λk ∈ Λ(Ak) ∀k}  ∏_{i=1}^{ℓ} ∏_{k=1}^{d} | (pi,k − Σ_{j≠k} λj) / (pi,k + λk) |

       Min-Max Problem, Lyapunov case (Ak = A0 ∀k, A0 Hurwitz)

           min_{{p1, ..., pℓ} ⊂ C−}  max_{λk ∈ Λ(A0) ∀k}  ∏_{i=1}^{ℓ} ∏_{k=1}^{d} | (pi − Σ_{j≠k} λj) / (pi + λk) |

       Heuristic: λk = λ0 ∀k, so that Σ_{j≠k} λj = (d − 1)λ0.
       Penzl’s idea: choose {p1, . . . , pℓ} ⊂ (d − 1)Λ(A0).

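Penzl’s shift choice from the last bullet can be sketched as follows; the function name and the restriction to the largest-magnitude eigenvalues are our own assumptions for illustration, not prescribed by the talk.

```python
import numpy as np

def penzl_tensor_shifts(A0, d, num_shifts):
    """Candidate ADI shifts for the tensor Lyapunov case (Ak = A0 for all k),
    drawn from the scaled spectrum (d - 1) * Lambda(A0)."""
    eigs = np.linalg.eigvals(A0)
    idx = np.argsort(-np.abs(eigs))      # largest magnitude first (our choice)
    return (d - 1) * eigs[idx][:num_shifts]

A0 = -np.diag([0.1, 1.0, 3.0, 12.0])     # toy Hurwitz matrix
print(penzl_tensor_shifts(A0, d=5, num_shifts=2))
```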
   Random Example

       seed := 1;
       R := rand(10);
       R := R + Rᵀ;
       R := R − (λmin(R) − 0.1) I;
       A0 := −R;

       Λ(A0) = {−0.1000, −0.2250, −1.1024, −1.7496, −2.0355,
                −2.4402, −3.1330, −3.3961, −3.9347, −11.9713}

       ⇒ The random shifts do not lead to convergence. Use instead

           p0 = (d − 1) λ10(A0),
           p1 = (d − 1) λ9(A0).

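In NumPy the construction reads as below. Note that NumPy’s generator and MATLAB’s rand(10) with seed 1 produce different matrices, so only the structure is reproduced (symmetric, spectrum shifted so that λmax(A0) = −0.1), not the spectrum listed on the slide.

```python
import numpy as np

rng = np.random.default_rng(1)           # stands in for seed := 1
R = rng.random((10, 10))                 # uniform [0, 1), like rand(10)
R = R + R.T                              # symmetrize -> real spectrum
R = R - (np.linalg.eigvalsh(R).min() - 0.1) * np.eye(10)  # lambda_min(R) = 0.1
A0 = -R                                  # Hurwitz: Lambda(A0) in (-inf, -0.1]

lam = np.linalg.eigvalsh(A0)
print(lam.max())                         # largest eigenvalue, about -0.1
```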
   Numerical Results – Ai = −R
       random k, test residual every 5 inner iterations, max. 250 iterations

                                   d         t in s             residual        #it
                                   2        0.3627            2.6327 e−06       250
                                   5       17.6850            1.4517 e−07       250
                                   8       62.4336            9.3164 e−09       200
                                  10       44.1547            8.5963 e−09       125
                                  15       12.2231            5.0356 e−09        60
                                  20       15.7506            3.3142 e−09        50
                                  25       25.2221            3.6501 e−09        45
                                  50       49.2004            5.4141 e−09        35
                                  75      118.1297            6.8682 e−09        30
                                 100      614.4017            2.4598 e−09        30




   Conditioning of the Problem

   Eigenvalues

       A = I ⊗ · · · ⊗ I ⊗ A1 + I ⊗ · · · ⊗ I ⊗ A2 ⊗ I + . . . + Ad ⊗ I ⊗ · · · ⊗ I

       Stéphanos’ theorem:

           λi(A) = λi1(A1) + λi2(A2) + · · · + λid(Ad),

       with i = i1 + i2 n1 + · · · + id ∏_{j=1}^{d−1} nj.

       With mk = argmax_{j ∈ {1,...,nk : Im(λj(Ak)) ≥ 0}} |λj(Ak)| this gives

           max_l |λl(A)| ≥ | Σ_{k=1}^{d} λmk(Ak) | ≥ | Σ_{k=1}^{d} Re(λmk(Ak)) | ,   if ∀k: Ak Hurwitz,

           min_l |λl(A)| ≤ Σ_{k=1}^{d} min_{lk} |λlk(Ak)| .

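Stéphanos’ theorem is easy to verify on a small example. This sketch (ours) uses diagonal blocks for d = 2 so the pairwise eigenvalue sums can be listed by hand.

```python
import numpy as np

# Eigenvalues of the Kronecker sum I (x) A1 + A2 (x) I are all pairwise
# sums lambda_{i1}(A1) + lambda_{i2}(A2).
A1 = np.diag([-1.0, -2.0])
A2 = np.diag([-3.0, -5.0])
I = np.eye(2)
A = np.kron(I, A1) + np.kron(A2, I)

spec = np.sort(np.linalg.eigvals(A).real)
pairwise = sorted(l1 + l2 for l1 in np.diag(A1) for l2 in np.diag(A2))
print(spec)  # [-7. -6. -5. -4.]
```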
   Normal Matrices Aj

       Lemma
       If all Ai, i = 1, . . . , d, are normal, then A is normal, too.

       Proof.
       Here d = 3; the extension to larger d is obvious.

       AAᵀ = (A3 ⊗ I ⊗ I + I ⊗ A2 ⊗ I + I ⊗ I ⊗ A1)(A3ᵀ ⊗ I ⊗ I + I ⊗ A2ᵀ ⊗ I + I ⊗ I ⊗ A1ᵀ)
           = A3A3ᵀ ⊗ I ⊗ I + A3 ⊗ A2ᵀ ⊗ I  + A3 ⊗ I ⊗ A1ᵀ
           + A3ᵀ ⊗ A2 ⊗ I  + I ⊗ A2A2ᵀ ⊗ I + I ⊗ A2 ⊗ A1ᵀ
           + A3ᵀ ⊗ I ⊗ A1  + I ⊗ A2ᵀ ⊗ A1  + I ⊗ I ⊗ A1A1ᵀ

       Since each Ak is normal, AkAkᵀ = AkᵀAk, so every term also occurs in the expansion of

       AᵀA = (A3ᵀ ⊗ I ⊗ I + I ⊗ A2ᵀ ⊗ I + I ⊗ I ⊗ A1ᵀ)(A3 ⊗ I ⊗ I + I ⊗ A2 ⊗ I + I ⊗ I ⊗ A1),

       hence AAᵀ = AᵀA.

       For normal A the condition number is

           κ2(A) = ‖A‖2 ‖A⁻¹‖2 = σmax(A)/σmin(A) = max_l |λl(A)| / min_l |λl(A)| ,

       and for Ai = A0 ∀i this becomes max_{lu} |λlu(A0)| / min_{ll} |λll(A0)|.

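The lemma can be checked numerically. The sketch below builds normal blocks as shifted skew-symmetric matrices (one convenient choice, not the only one) and tests ‖AAᵀ − AᵀA‖ for the d = 3 Kronecker sum.

```python
import numpy as np

def kron3(X, Y, Z):
    return np.kron(np.kron(X, Y), Z)

rng = np.random.default_rng(0)

def random_normal_matrix(n):
    S = rng.standard_normal((n, n))
    S = S - S.T                # skew-symmetric, hence normal
    return S - 2 * np.eye(n)   # shifting by a multiple of I keeps normality

I = np.eye(3)
A1, A2, A3 = (random_normal_matrix(3) for _ in range(3))
A = kron3(A3, I, I) + kron3(I, A2, I) + kron3(I, I, A1)

print(np.linalg.norm(A @ A.T - A.T @ A))  # ~ 0 up to rounding
```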
   Lower Bounds for Non-Normal Matrices Aj

       FX + XFᵀ = −G      (i.e., d = 2, A1 = A2 = F Hurwitz)                  [Zhou ’02]

           κ2(A) ≥ max_l |λl(F)| / min_l |λl(F)|

       by observing

           σmax(A) = ‖A‖2 = sup_{‖y‖2=1} ‖Ay‖2 ≥ sup_{‖y‖2=1, y EV} ‖Ay‖2
                   = max_l |λl(A)| = 2 max_l |λl(F)|

       and

           σmin(A) = ‖A⁻¹‖2⁻¹ = ( sup_y ‖A⁻¹y‖2 / ‖y‖2 )⁻¹ = inf_y ‖y‖2 / ‖A⁻¹y‖2 = inf_y ‖Ay‖2 / ‖y‖2
                   ≤ inf_{y EV} ‖Ay‖2 / ‖y‖2 = min_l |λl(A)| = 2 min_l |λl(F)|.

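Zhou’s lower bound is cheap to test for a small non-normal F; this sketch (ours) uses a 2 × 2 triangular Hurwitz matrix and the d = 2 Lyapunov operator A = I ⊗ F + F ⊗ I.

```python
import numpy as np

F = np.array([[-1.0, 5.0],
              [ 0.0, -4.0]])      # triangular, eigenvalues -1 and -4, non-normal
I = np.eye(2)
A = np.kron(I, F) + np.kron(F, I)

kappa = np.linalg.cond(A, 2)       # sigma_max(A) / sigma_min(A)
lam = np.linalg.eigvals(F)
bound = np.abs(lam).max() / np.abs(lam).min()
print(kappa >= bound)              # True
```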
   Lower Bounds for Non-Normal Matrices Aj

       Tensor Lyapunov Equations
       (i.e., ∀i = 1, . . . , d: Ai = A0 Hurwitz)                              [M./S. ’11]

           κ2(A) ≥ max_l |λl(A0)| / min_l |λl(A0)|

       since

           σmax(A) ≥ max_l |λl(A)| = d max_l |λl(A0)|
           σmin(A) ≤ min_l |λl(A)| = d min_{l0} |λl0(A0)|

   Lower Bounds for Non-Normal Matrices Aj

       Tensor Sylvester Equations
       (i.e., ∀i = 1, . . . , d: Ai Hurwitz)                                   [M./S. ’11]

           κ2(A) ≥ | Σ_{k=1}^{d} λmk(Ak) | / Σ_{k=1}^{d} min_l |λl(Ak)| ,      mk as before

       since

           σmax(A) ≥ max_l |λl(A)| ≥ | Σ_{k=1}^{d} λmk(Ak) |
           σmin(A) ≤ min_l |λl(A)| ≤ Σ_{k=1}^{d} min_{lk} |λlk(Ak)|

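The two singular-value estimates underlying these bounds can be spot-checked for d = 3 with a non-normal Hurwitz block; the matrix below is an arbitrary choice of ours.

```python
import numpy as np

A0 = np.array([[-1.0, 3.0],
               [ 0.0, -2.0]])     # non-normal, eigenvalues -1 and -2
I = np.eye(2)
A = (np.kron(np.kron(A0, I), I)
     + np.kron(np.kron(I, A0), I)
     + np.kron(np.kron(I, I), A0))

svals = np.linalg.svd(A, compute_uv=False)
lam = np.abs(np.linalg.eigvals(A))
# sigma_max >= max |lambda(A)| and sigma_min <= min |lambda(A)|
print(svals.max() >= lam.max(), svals.min() <= lam.min())  # True True
```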
A study of the worst case ratio of a simple algorithm for simple assembly lin...narmo
 
Spectral methods for solving differential equations
Spectral methods for solving differential equationsSpectral methods for solving differential equations
Spectral methods for solving differential equationsRajesh Aggarwal
 
Principal component analysis and matrix factorizations for learning (part 3) ...
Principal component analysis and matrix factorizations for learning (part 3) ...Principal component analysis and matrix factorizations for learning (part 3) ...
Principal component analysis and matrix factorizations for learning (part 3) ...zukun
 
class 12 2014 maths solution set 1
class 12 2014 maths solution set 1class 12 2014 maths solution set 1
class 12 2014 maths solution set 1vandna123
 
Linear Discriminant Analysis (LDA) Under f-Divergence Measures
Linear Discriminant Analysis (LDA) Under f-Divergence MeasuresLinear Discriminant Analysis (LDA) Under f-Divergence Measures
Linear Discriminant Analysis (LDA) Under f-Divergence MeasuresAnmol Dwivedi
 
MCQMC 2020 talk: Importance Sampling for a Robust and Efficient Multilevel Mo...
MCQMC 2020 talk: Importance Sampling for a Robust and Efficient Multilevel Mo...MCQMC 2020 talk: Importance Sampling for a Robust and Efficient Multilevel Mo...
MCQMC 2020 talk: Importance Sampling for a Robust and Efficient Multilevel Mo...Chiheb Ben Hammouda
 
Phenomenological Decomposition Heuristics for Process Design Synthesis of Oil...
Phenomenological Decomposition Heuristics for Process Design Synthesis of Oil...Phenomenological Decomposition Heuristics for Process Design Synthesis of Oil...
Phenomenological Decomposition Heuristics for Process Design Synthesis of Oil...Alkis Vazacopoulos
 
Simplified Runtime Analysis of Estimation of Distribution Algorithms
Simplified Runtime Analysis of Estimation of Distribution AlgorithmsSimplified Runtime Analysis of Estimation of Distribution Algorithms
Simplified Runtime Analysis of Estimation of Distribution AlgorithmsPK Lehre
 

What's hot (20)

Gentle Introduction to Dirichlet Processes
Gentle Introduction to Dirichlet ProcessesGentle Introduction to Dirichlet Processes
Gentle Introduction to Dirichlet Processes
 
F4 Final Sbp 2006 Math Skema P 1 & P 2
F4 Final Sbp 2006 Math Skema P 1 & P 2 F4 Final Sbp 2006 Math Skema P 1 & P 2
F4 Final Sbp 2006 Math Skema P 1 & P 2
 
F4 Final Sbp 2007 Maths Skema P 1 & P2
F4 Final Sbp 2007 Maths Skema P 1 & P2F4 Final Sbp 2007 Maths Skema P 1 & P2
F4 Final Sbp 2007 Maths Skema P 1 & P2
 
Tensor train to solve stochastic PDEs
Tensor train to solve stochastic PDEsTensor train to solve stochastic PDEs
Tensor train to solve stochastic PDEs
 
Trial Sbp 2007 Answer Mm 1 & 2
Trial Sbp 2007 Answer Mm 1 & 2Trial Sbp 2007 Answer Mm 1 & 2
Trial Sbp 2007 Answer Mm 1 & 2
 
Maths Answer Ppsmi2006 F4 P2
Maths Answer Ppsmi2006 F4 P2Maths Answer Ppsmi2006 F4 P2
Maths Answer Ppsmi2006 F4 P2
 
Functions
FunctionsFunctions
Functions
 
My presentation at University of Nottingham "Fast low-rank methods for solvin...
My presentation at University of Nottingham "Fast low-rank methods for solvin...My presentation at University of Nottingham "Fast low-rank methods for solvin...
My presentation at University of Nottingham "Fast low-rank methods for solvin...
 
Convex optimization methods
Convex optimization methodsConvex optimization methods
Convex optimization methods
 
Using Alpha-cuts and Constraint Exploration Approach on Quadratic Programming...
Using Alpha-cuts and Constraint Exploration Approach on Quadratic Programming...Using Alpha-cuts and Constraint Exploration Approach on Quadratic Programming...
Using Alpha-cuts and Constraint Exploration Approach on Quadratic Programming...
 
9 pd es
9 pd es9 pd es
9 pd es
 
Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applie...
Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applie...Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applie...
Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applie...
 
A study of the worst case ratio of a simple algorithm for simple assembly lin...
A study of the worst case ratio of a simple algorithm for simple assembly lin...A study of the worst case ratio of a simple algorithm for simple assembly lin...
A study of the worst case ratio of a simple algorithm for simple assembly lin...
 
Spectral methods for solving differential equations
Spectral methods for solving differential equationsSpectral methods for solving differential equations
Spectral methods for solving differential equations
 
Principal component analysis and matrix factorizations for learning (part 3) ...
Principal component analysis and matrix factorizations for learning (part 3) ...Principal component analysis and matrix factorizations for learning (part 3) ...
Principal component analysis and matrix factorizations for learning (part 3) ...
 
class 12 2014 maths solution set 1
class 12 2014 maths solution set 1class 12 2014 maths solution set 1
class 12 2014 maths solution set 1
 
Linear Discriminant Analysis (LDA) Under f-Divergence Measures
Linear Discriminant Analysis (LDA) Under f-Divergence MeasuresLinear Discriminant Analysis (LDA) Under f-Divergence Measures
Linear Discriminant Analysis (LDA) Under f-Divergence Measures
 
MCQMC 2020 talk: Importance Sampling for a Robust and Efficient Multilevel Mo...
MCQMC 2020 talk: Importance Sampling for a Robust and Efficient Multilevel Mo...MCQMC 2020 talk: Importance Sampling for a Robust and Efficient Multilevel Mo...
MCQMC 2020 talk: Importance Sampling for a Robust and Efficient Multilevel Mo...
 
Phenomenological Decomposition Heuristics for Process Design Synthesis of Oil...
Phenomenological Decomposition Heuristics for Process Design Synthesis of Oil...Phenomenological Decomposition Heuristics for Process Design Synthesis of Oil...
Phenomenological Decomposition Heuristics for Process Design Synthesis of Oil...
 
Simplified Runtime Analysis of Estimation of Distribution Algorithms
Simplified Runtime Analysis of Estimation of Distribution AlgorithmsSimplified Runtime Analysis of Estimation of Distribution Algorithms
Simplified Runtime Analysis of Estimation of Distribution Algorithms
 

Viewers also liked

ANNA UNIVERSITY, CHENNAI AFFILIATED INSTITUTIONS R-2008 B.TECH. INFORMATION T...
ANNA UNIVERSITY, CHENNAI AFFILIATED INSTITUTIONS R-2008 B.TECH. INFORMATION T...ANNA UNIVERSITY, CHENNAI AFFILIATED INSTITUTIONS R-2008 B.TECH. INFORMATION T...
ANNA UNIVERSITY, CHENNAI AFFILIATED INSTITUTIONS R-2008 B.TECH. INFORMATION T...Anirudhan Guru
 
Finite difference method
Finite difference methodFinite difference method
Finite difference methodDivyansh Verma
 
Numerical methods for 2 d heat transfer
Numerical methods for 2 d heat transferNumerical methods for 2 d heat transfer
Numerical methods for 2 d heat transferArun Sarasan
 
FDM Numerical solution of Laplace Equation using MATLAB
FDM Numerical solution of Laplace Equation using MATLABFDM Numerical solution of Laplace Equation using MATLAB
FDM Numerical solution of Laplace Equation using MATLABAya Zaki
 
Interpolation with Finite differences
Interpolation with Finite differencesInterpolation with Finite differences
Interpolation with Finite differencesDr. Nirav Vyas
 
Finite DIfference Methods Mathematica
Finite DIfference Methods MathematicaFinite DIfference Methods Mathematica
Finite DIfference Methods Mathematicaguest56708a
 

Viewers also liked (8)

ANNA UNIVERSITY, CHENNAI AFFILIATED INSTITUTIONS R-2008 B.TECH. INFORMATION T...
ANNA UNIVERSITY, CHENNAI AFFILIATED INSTITUTIONS R-2008 B.TECH. INFORMATION T...ANNA UNIVERSITY, CHENNAI AFFILIATED INSTITUTIONS R-2008 B.TECH. INFORMATION T...
ANNA UNIVERSITY, CHENNAI AFFILIATED INSTITUTIONS R-2008 B.TECH. INFORMATION T...
 
Finite difference equation
Finite difference equationFinite difference equation
Finite difference equation
 
Finite difference method
Finite difference methodFinite difference method
Finite difference method
 
Numerical methods for 2 d heat transfer
Numerical methods for 2 d heat transferNumerical methods for 2 d heat transfer
Numerical methods for 2 d heat transfer
 
FDM Numerical solution of Laplace Equation using MATLAB
FDM Numerical solution of Laplace Equation using MATLABFDM Numerical solution of Laplace Equation using MATLAB
FDM Numerical solution of Laplace Equation using MATLAB
 
Interpolation with Finite differences
Interpolation with Finite differencesInterpolation with Finite differences
Interpolation with Finite differences
 
Finite DIfference Methods Mathematica
Finite DIfference Methods MathematicaFinite DIfference Methods Mathematica
Finite DIfference Methods Mathematica
 
Numerical method
Numerical methodNumerical method
Numerical method
 

Similar to ADI for Tensor Structured Equations

Tensor Train data format for uncertainty quantification
Tensor Train data format for uncertainty quantificationTensor Train data format for uncertainty quantification
Tensor Train data format for uncertainty quantificationAlexander Litvinenko
 
UNIT-II.pptx
UNIT-II.pptxUNIT-II.pptx
UNIT-II.pptxJyoReddy9
 
NIPS2010: optimization algorithms in machine learning
NIPS2010: optimization algorithms in machine learningNIPS2010: optimization algorithms in machine learning
NIPS2010: optimization algorithms in machine learningzukun
 
Expert Lecture on GPS at UIET, CSJM, Kanpur
Expert Lecture on GPS at UIET, CSJM, KanpurExpert Lecture on GPS at UIET, CSJM, Kanpur
Expert Lecture on GPS at UIET, CSJM, KanpurSuddhasheel GHOSH, PhD
 
Localized methods for diffusions in large graphs
Localized methods for diffusions in large graphsLocalized methods for diffusions in large graphs
Localized methods for diffusions in large graphsDavid Gleich
 
Finite_Element_Method analysis and theory of yeild line .ppt
Finite_Element_Method analysis and theory of yeild line .pptFinite_Element_Method analysis and theory of yeild line .ppt
Finite_Element_Method analysis and theory of yeild line .pptatifmx3
 
lecture01_lecture01_lecture0001_ceva.pdf
lecture01_lecture01_lecture0001_ceva.pdflecture01_lecture01_lecture0001_ceva.pdf
lecture01_lecture01_lecture0001_ceva.pdfAnaNeacsu5
 
1627 simultaneous equations and intersections
1627 simultaneous equations and intersections1627 simultaneous equations and intersections
1627 simultaneous equations and intersectionsDr Fereidoun Dejahang
 
Hierarchical matrices for approximating large covariance matries and computin...
Hierarchical matrices for approximating large covariance matries and computin...Hierarchical matrices for approximating large covariance matries and computin...
Hierarchical matrices for approximating large covariance matries and computin...Alexander Litvinenko
 
How to Generate Personalized Tasks and Sample Solutions for Anonymous Peer Re...
How to Generate Personalized Tasks and Sample Solutions for Anonymous Peer Re...How to Generate Personalized Tasks and Sample Solutions for Anonymous Peer Re...
How to Generate Personalized Tasks and Sample Solutions for Anonymous Peer Re...Mathias Magdowski
 
Gauge Systems With Noncommutative Phase Space
Gauge Systems With Noncommutative Phase SpaceGauge Systems With Noncommutative Phase Space
Gauge Systems With Noncommutative Phase Spacevcuesta
 
Gauge Systems With Noncommutative Phase Space
Gauge Systems With Noncommutative Phase SpaceGauge Systems With Noncommutative Phase Space
Gauge Systems With Noncommutative Phase Spaceguest9fa195
 
Fast relaxation methods for the matrix exponential
Fast relaxation methods for the matrix exponential Fast relaxation methods for the matrix exponential
Fast relaxation methods for the matrix exponential David Gleich
 

Similar to ADI for Tensor Structured Equations (20)

Tensor Train data format for uncertainty quantification
Tensor Train data format for uncertainty quantificationTensor Train data format for uncertainty quantification
Tensor Train data format for uncertainty quantification
 
UNIT-II.pptx
UNIT-II.pptxUNIT-II.pptx
UNIT-II.pptx
 
NIPS2010: optimization algorithms in machine learning
NIPS2010: optimization algorithms in machine learningNIPS2010: optimization algorithms in machine learning
NIPS2010: optimization algorithms in machine learning
 
Expert Lecture on GPS at UIET, CSJM, Kanpur
Expert Lecture on GPS at UIET, CSJM, KanpurExpert Lecture on GPS at UIET, CSJM, Kanpur
Expert Lecture on GPS at UIET, CSJM, Kanpur
 
Fdtd
FdtdFdtd
Fdtd
 
Hagen poise
Hagen poiseHagen poise
Hagen poise
 
Localized methods for diffusions in large graphs
Localized methods for diffusions in large graphsLocalized methods for diffusions in large graphs
Localized methods for diffusions in large graphs
 
Ki2518101816
Ki2518101816Ki2518101816
Ki2518101816
 
Ki2518101816
Ki2518101816Ki2518101816
Ki2518101816
 
Pres metabief2020jmm
Pres metabief2020jmmPres metabief2020jmm
Pres metabief2020jmm
 
QMC: Operator Splitting Workshop, Proximal Algorithms in Probability Spaces -...
QMC: Operator Splitting Workshop, Proximal Algorithms in Probability Spaces -...QMC: Operator Splitting Workshop, Proximal Algorithms in Probability Spaces -...
QMC: Operator Splitting Workshop, Proximal Algorithms in Probability Spaces -...
 
Finite_Element_Method analysis and theory of yeild line .ppt
Finite_Element_Method analysis and theory of yeild line .pptFinite_Element_Method analysis and theory of yeild line .ppt
Finite_Element_Method analysis and theory of yeild line .ppt
 
lecture01_lecture01_lecture0001_ceva.pdf
lecture01_lecture01_lecture0001_ceva.pdflecture01_lecture01_lecture0001_ceva.pdf
lecture01_lecture01_lecture0001_ceva.pdf
 
Beck Workshop on Modelling and Simulation of Coal-fired Power Generation and ...
Beck Workshop on Modelling and Simulation of Coal-fired Power Generation and ...Beck Workshop on Modelling and Simulation of Coal-fired Power Generation and ...
Beck Workshop on Modelling and Simulation of Coal-fired Power Generation and ...
 
1627 simultaneous equations and intersections
1627 simultaneous equations and intersections1627 simultaneous equations and intersections
1627 simultaneous equations and intersections
 
Hierarchical matrices for approximating large covariance matries and computin...
Hierarchical matrices for approximating large covariance matries and computin...Hierarchical matrices for approximating large covariance matries and computin...
Hierarchical matrices for approximating large covariance matries and computin...
 
How to Generate Personalized Tasks and Sample Solutions for Anonymous Peer Re...
How to Generate Personalized Tasks and Sample Solutions for Anonymous Peer Re...How to Generate Personalized Tasks and Sample Solutions for Anonymous Peer Re...
How to Generate Personalized Tasks and Sample Solutions for Anonymous Peer Re...
 
Gauge Systems With Noncommutative Phase Space
Gauge Systems With Noncommutative Phase SpaceGauge Systems With Noncommutative Phase Space
Gauge Systems With Noncommutative Phase Space
 
Gauge Systems With Noncommutative Phase Space
Gauge Systems With Noncommutative Phase SpaceGauge Systems With Noncommutative Phase Space
Gauge Systems With Noncommutative Phase Space
 
Fast relaxation methods for the matrix exponential
Fast relaxation methods for the matrix exponential Fast relaxation methods for the matrix exponential
Fast relaxation methods for the matrix exponential
 

More from Thomas Mach

Fast and backward stable computation of roots of polynomials
Fast and backward stable computation of roots of polynomialsFast and backward stable computation of roots of polynomials
Fast and backward stable computation of roots of polynomialsThomas Mach
 
On Deflations in Extended QR Algorithms
On Deflations in Extended QR AlgorithmsOn Deflations in Extended QR Algorithms
On Deflations in Extended QR AlgorithmsThomas Mach
 
On Deflations in Extended QR Algorithms
On Deflations in Extended QR AlgorithmsOn Deflations in Extended QR Algorithms
On Deflations in Extended QR AlgorithmsThomas Mach
 
Eigenvalues of Symmetrix Hierarchical Matrices
Eigenvalues of Symmetrix Hierarchical MatricesEigenvalues of Symmetrix Hierarchical Matrices
Eigenvalues of Symmetrix Hierarchical MatricesThomas Mach
 
Computing Inner Eigenvalues of Matrices in Tensor Train Matrix Format
Computing Inner Eigenvalues of Matrices in Tensor Train Matrix FormatComputing Inner Eigenvalues of Matrices in Tensor Train Matrix Format
Computing Inner Eigenvalues of Matrices in Tensor Train Matrix FormatThomas Mach
 
Preconditioned Inverse Iteration for Hierarchical Matrices
Preconditioned Inverse Iteration for Hierarchical MatricesPreconditioned Inverse Iteration for Hierarchical Matrices
Preconditioned Inverse Iteration for Hierarchical MatricesThomas Mach
 
Preconditioned Inverse Iteration for Hierarchical Matrices
Preconditioned Inverse Iteration for Hierarchical MatricesPreconditioned Inverse Iteration for Hierarchical Matrices
Preconditioned Inverse Iteration for Hierarchical MatricesThomas Mach
 
Hierarchical Matrices: Concept, Application and Eigenvalues
Hierarchical Matrices: Concept, Application and EigenvaluesHierarchical Matrices: Concept, Application and Eigenvalues
Hierarchical Matrices: Concept, Application and EigenvaluesThomas Mach
 

More from Thomas Mach (8)

Fast and backward stable computation of roots of polynomials
Fast and backward stable computation of roots of polynomialsFast and backward stable computation of roots of polynomials
Fast and backward stable computation of roots of polynomials
 
On Deflations in Extended QR Algorithms
On Deflations in Extended QR AlgorithmsOn Deflations in Extended QR Algorithms
On Deflations in Extended QR Algorithms
 
On Deflations in Extended QR Algorithms
On Deflations in Extended QR AlgorithmsOn Deflations in Extended QR Algorithms
On Deflations in Extended QR Algorithms
 
Eigenvalues of Symmetrix Hierarchical Matrices
Eigenvalues of Symmetrix Hierarchical MatricesEigenvalues of Symmetrix Hierarchical Matrices
Eigenvalues of Symmetrix Hierarchical Matrices
 
Computing Inner Eigenvalues of Matrices in Tensor Train Matrix Format
Computing Inner Eigenvalues of Matrices in Tensor Train Matrix FormatComputing Inner Eigenvalues of Matrices in Tensor Train Matrix Format
Computing Inner Eigenvalues of Matrices in Tensor Train Matrix Format
 
Preconditioned Inverse Iteration for Hierarchical Matrices
Preconditioned Inverse Iteration for Hierarchical MatricesPreconditioned Inverse Iteration for Hierarchical Matrices
Preconditioned Inverse Iteration for Hierarchical Matrices
 
Preconditioned Inverse Iteration for Hierarchical Matrices
Preconditioned Inverse Iteration for Hierarchical MatricesPreconditioned Inverse Iteration for Hierarchical Matrices
Preconditioned Inverse Iteration for Hierarchical Matrices
 
Hierarchical Matrices: Concept, Application and Eigenvalues
Hierarchical Matrices: Concept, Application and EigenvaluesHierarchical Matrices: Concept, Application and Eigenvalues
Hierarchical Matrices: Concept, Application and Eigenvalues
 

Recently uploaded

How To Create Editable Tree View in Odoo 17
How To Create Editable Tree View in Odoo 17How To Create Editable Tree View in Odoo 17
How To Create Editable Tree View in Odoo 17Celine George
 
How to Send Pro Forma Invoice to Your Customers in Odoo 17
How to Send Pro Forma Invoice to Your Customers in Odoo 17How to Send Pro Forma Invoice to Your Customers in Odoo 17
How to Send Pro Forma Invoice to Your Customers in Odoo 17Celine George
 
8 Tips for Effective Working Capital Management
8 Tips for Effective Working Capital Management8 Tips for Effective Working Capital Management
8 Tips for Effective Working Capital ManagementMBA Assignment Experts
 
Spellings Wk 4 and Wk 5 for Grade 4 at CAPS
Spellings Wk 4 and Wk 5 for Grade 4 at CAPSSpellings Wk 4 and Wk 5 for Grade 4 at CAPS
Spellings Wk 4 and Wk 5 for Grade 4 at CAPSAnaAcapella
 
Stl Algorithms in C++ jjjjjjjjjjjjjjjjjj
Stl Algorithms in C++ jjjjjjjjjjjjjjjjjjStl Algorithms in C++ jjjjjjjjjjjjjjjjjj
Stl Algorithms in C++ jjjjjjjjjjjjjjjjjjMohammed Sikander
 
Trauma-Informed Leadership - Five Practical Principles
Trauma-Informed Leadership - Five Practical PrinciplesTrauma-Informed Leadership - Five Practical Principles
Trauma-Informed Leadership - Five Practical PrinciplesPooky Knightsmith
 
MuleSoft Integration with AWS Textract | Calling AWS Textract API |AWS - Clou...
MuleSoft Integration with AWS Textract | Calling AWS Textract API |AWS - Clou...MuleSoft Integration with AWS Textract | Calling AWS Textract API |AWS - Clou...
MuleSoft Integration with AWS Textract | Calling AWS Textract API |AWS - Clou...MysoreMuleSoftMeetup
 
AIM of Education-Teachers Training-2024.ppt
AIM of Education-Teachers Training-2024.pptAIM of Education-Teachers Training-2024.ppt
AIM of Education-Teachers Training-2024.pptNishitharanjan Rout
 
Basic Civil Engineering notes on Transportation Engineering & Modes of Transport
Basic Civil Engineering notes on Transportation Engineering & Modes of TransportBasic Civil Engineering notes on Transportation Engineering & Modes of Transport
Basic Civil Engineering notes on Transportation Engineering & Modes of TransportDenish Jangid
 
SPLICE Working Group: Reusable Code Examples
SPLICE Working Group:Reusable Code ExamplesSPLICE Working Group:Reusable Code Examples
SPLICE Working Group: Reusable Code ExamplesPeter Brusilovsky
 
Andreas Schleicher presents at the launch of What does child empowerment mean...
Andreas Schleicher presents at the launch of What does child empowerment mean...Andreas Schleicher presents at the launch of What does child empowerment mean...
Andreas Schleicher presents at the launch of What does child empowerment mean...EduSkills OECD
 
Spring gala 2024 photo slideshow - Celebrating School-Community Partnerships
Spring gala 2024 photo slideshow - Celebrating School-Community PartnershipsSpring gala 2024 photo slideshow - Celebrating School-Community Partnerships
Spring gala 2024 photo slideshow - Celebrating School-Community Partnershipsexpandedwebsite
 
Transparency, Recognition and the role of eSealing - Ildiko Mazar and Koen No...
Transparency, Recognition and the role of eSealing - Ildiko Mazar and Koen No...Transparency, Recognition and the role of eSealing - Ildiko Mazar and Koen No...
Transparency, Recognition and the role of eSealing - Ildiko Mazar and Koen No...EADTU
 
Major project report on Tata Motors and its marketing strategies
Major project report on Tata Motors and its marketing strategiesMajor project report on Tata Motors and its marketing strategies
Major project report on Tata Motors and its marketing strategiesAmanpreetKaur157993
 
DEMONSTRATION LESSON IN ENGLISH 4 MATATAG CURRICULUM
DEMONSTRATION LESSON IN ENGLISH 4 MATATAG CURRICULUMDEMONSTRATION LESSON IN ENGLISH 4 MATATAG CURRICULUM
DEMONSTRATION LESSON IN ENGLISH 4 MATATAG CURRICULUMELOISARIVERA8
 
The Story of Village Palampur Class 9 Free Study Material PDF
The Story of Village Palampur Class 9 Free Study Material PDFThe Story of Village Palampur Class 9 Free Study Material PDF
The Story of Village Palampur Class 9 Free Study Material PDFVivekanand Anglo Vedic Academy
 
Improved Approval Flow in Odoo 17 Studio App
Improved Approval Flow in Odoo 17 Studio AppImproved Approval Flow in Odoo 17 Studio App
Improved Approval Flow in Odoo 17 Studio AppCeline George
 

Recently uploaded (20)

How To Create Editable Tree View in Odoo 17
How To Create Editable Tree View in Odoo 17How To Create Editable Tree View in Odoo 17
How To Create Editable Tree View in Odoo 17
 
How to Send Pro Forma Invoice to Your Customers in Odoo 17
How to Send Pro Forma Invoice to Your Customers in Odoo 17How to Send Pro Forma Invoice to Your Customers in Odoo 17
How to Send Pro Forma Invoice to Your Customers in Odoo 17
 
8 Tips for Effective Working Capital Management
8 Tips for Effective Working Capital Management8 Tips for Effective Working Capital Management
8 Tips for Effective Working Capital Management
 
Spellings Wk 4 and Wk 5 for Grade 4 at CAPS
Spellings Wk 4 and Wk 5 for Grade 4 at CAPSSpellings Wk 4 and Wk 5 for Grade 4 at CAPS
Spellings Wk 4 and Wk 5 for Grade 4 at CAPS
 
Stl Algorithms in C++ jjjjjjjjjjjjjjjjjj
Stl Algorithms in C++ jjjjjjjjjjjjjjjjjjStl Algorithms in C++ jjjjjjjjjjjjjjjjjj
Stl Algorithms in C++ jjjjjjjjjjjjjjjjjj
 
Trauma-Informed Leadership - Five Practical Principles
Trauma-Informed Leadership - Five Practical PrinciplesTrauma-Informed Leadership - Five Practical Principles
Trauma-Informed Leadership - Five Practical Principles
 
MuleSoft Integration with AWS Textract | Calling AWS Textract API |AWS - Clou...
MuleSoft Integration with AWS Textract | Calling AWS Textract API |AWS - Clou...MuleSoft Integration with AWS Textract | Calling AWS Textract API |AWS - Clou...
MuleSoft Integration with AWS Textract | Calling AWS Textract API |AWS - Clou...
 
AIM of Education-Teachers Training-2024.ppt
AIM of Education-Teachers Training-2024.pptAIM of Education-Teachers Training-2024.ppt
AIM of Education-Teachers Training-2024.ppt
 
Basic Civil Engineering notes on Transportation Engineering & Modes of Transport
Basic Civil Engineering notes on Transportation Engineering & Modes of TransportBasic Civil Engineering notes on Transportation Engineering & Modes of Transport
Basic Civil Engineering notes on Transportation Engineering & Modes of Transport
 
SPLICE Working Group: Reusable Code Examples
SPLICE Working Group:Reusable Code ExamplesSPLICE Working Group:Reusable Code Examples
SPLICE Working Group: Reusable Code Examples
 
Andreas Schleicher presents at the launch of What does child empowerment mean...
Andreas Schleicher presents at the launch of What does child empowerment mean...Andreas Schleicher presents at the launch of What does child empowerment mean...
Andreas Schleicher presents at the launch of What does child empowerment mean...
 
Including Mental Health Support in Project Delivery, 14 May.pdf
Including Mental Health Support in Project Delivery, 14 May.pdfIncluding Mental Health Support in Project Delivery, 14 May.pdf
Including Mental Health Support in Project Delivery, 14 May.pdf
 
Spring gala 2024 photo slideshow - Celebrating School-Community Partnerships
Spring gala 2024 photo slideshow - Celebrating School-Community PartnershipsSpring gala 2024 photo slideshow - Celebrating School-Community Partnerships
Spring gala 2024 photo slideshow - Celebrating School-Community Partnerships
 
Transparency, Recognition and the role of eSealing - Ildiko Mazar and Koen No...
Transparency, Recognition and the role of eSealing - Ildiko Mazar and Koen No...Transparency, Recognition and the role of eSealing - Ildiko Mazar and Koen No...
Transparency, Recognition and the role of eSealing - Ildiko Mazar and Koen No...
 
Major project report on Tata Motors and its marketing strategies
Major project report on Tata Motors and its marketing strategiesMajor project report on Tata Motors and its marketing strategies
Major project report on Tata Motors and its marketing strategies
 
DEMONSTRATION LESSON IN ENGLISH 4 MATATAG CURRICULUM
DEMONSTRATION LESSON IN ENGLISH 4 MATATAG CURRICULUMDEMONSTRATION LESSON IN ENGLISH 4 MATATAG CURRICULUM
DEMONSTRATION LESSON IN ENGLISH 4 MATATAG CURRICULUM
 
The Story of Village Palampur Class 9 Free Study Material PDF
The Story of Village Palampur Class 9 Free Study Material PDFThe Story of Village Palampur Class 9 Free Study Material PDF
The Story of Village Palampur Class 9 Free Study Material PDF
 
VAMOS CUIDAR DO NOSSO PLANETA! .
VAMOS CUIDAR DO NOSSO PLANETA!                    .VAMOS CUIDAR DO NOSSO PLANETA!                    .
VAMOS CUIDAR DO NOSSO PLANETA! .
 
OS-operating systems- ch05 (CPU Scheduling) ...
OS-operating systems- ch05 (CPU Scheduling) ...OS-operating systems- ch05 (CPU Scheduling) ...
OS-operating systems- ch05 (CPU Scheduling) ...
 
Improved Approval Flow in Odoo 17 Studio App
Improved Approval Flow in Odoo 17 Studio AppImproved Approval Flow in Odoo 17 Studio App
Improved Approval Flow in Odoo 17 Studio App
 

ADI for Tensor Structured Equations

  • 1. Workshop on Matrix Equations and Tensor Techniques 2011 Aachen, 21 November 2011 ADI for Tensor Structured Equations Thomas Mach and Jens Saak Max Planck Institute for Dynamics of Complex Technical Systems Computational Methods in Systems and Control Theory MAX PLANCK INSTITUTE FOR DYNAMICS OF COMPLEX TECHNICAL SYSTEMS MAGDEBURG Max Planck Institute Magdeburg Thomas Mach, Jens Saak, Tensor-ADI 1/37
  • 2. Classic ADI [Peaceman/Rachford ’55]: Developed to solve linear systems related to Poisson problems −∆u = f in Ω ⊂ R^d, d = 2, u = 0 on ∂Ω. Uniform grid size h, centered differences, d = 1 ⇒ ∆_{1,h} u = h² f with ∆_{1,h} = tridiag(−1, 2, −1).
  • 3. Classic ADI [Peaceman/Rachford ’55]: Developed to solve linear systems related to Poisson problems −∆u = f in Ω ⊂ R^d, d = 2, u = 0 on ∂Ω. Uniform grid size h, 5-point difference star, d = 2 ⇒ ∆_{2,h} u = h² f with ∆_{2,h} = blocktridiag(−I, K, −I) and K = tridiag(−1, 4, −1).
  • 4. Classic ADI [Peaceman/Rachford ’55]: Observation: ∆_{2,h} = (∆_{1,h} ⊗ I) + (I ⊗ ∆_{1,h}), with H := ∆_{1,h} ⊗ I and V := I ⊗ ∆_{1,h}.
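The Kronecker-sum identity on this slide is easy to verify numerically. The snippet below (an illustrative check, not part of the talk) builds ∆_{2,h} from the 1D stencil and confirms the block-tridiagonal structure with K = ∆_{1,h} + 2I:

```python
import numpy as np

def lap1d(n):
    """1D centered-difference Laplacian stencil: tridiag(-1, 2, -1)."""
    return 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

n = 4
D1, I = lap1d(n), np.eye(n)
H = np.kron(D1, I)       # couples points along one grid direction
V = np.kron(I, D1)       # couples points along the other direction
D2 = H + V               # Kronecker sum = 5-point stencil matrix

K = D1 + 2*np.eye(n)     # diagonal block: tridiag(-1, 4, -1)
assert np.allclose(D2[:n, :n], K)       # first diagonal block is K
assert np.allclose(D2[:n, n:2*n], -I)   # first off-diagonal block is -I
```

The same construction extends to d > 2 by summing d Kronecker products with one 1D stencil each, which is the structure the tensor ADI later exploits.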
  • 5. Classic ADI [Peaceman/Rachford ’55]: Observation: ∆_{2,h} = (∆_{1,h} ⊗ I) + (I ⊗ ∆_{1,h}), with H := ∆_{1,h} ⊗ I and V := I ⊗ ∆_{1,h}. Solve ∆_{2,h} u = h² f =: f̃ exploiting the structure in H and V.
  • 6. ADI ADI for Tensors Numerical Results and Shifts Conditioning of the Problem Conclusions Classic ADI [Peaceman/Rachford ’55] Observation ∆2,h = (∆1,h ⊗ I ) + (I ⊗ ∆1,h ). =:H =:V ˜ Solve ∆2,h u = h2 f =: f exploiting structure in H and V . For certain shift parameters perform ˜ (H + pi I ) ui+ 1 = (pi I − V ) ui + f , 2 ˜ (V + pi I ) ui+1 = (pi I − H) ui+ 1 + f , 2 until ui is good enough. Max Planck Institute Magdeburg Thomas Mach, Jens Saak, Tensor-ADI 3/37
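The two half-steps above can be sketched directly in NumPy (a minimal illustration, not the talk's code; matrices are built densely, and the single shift p = √(λ_min λ_max) is a common heuristic, not prescribed by the slide):

```python
import numpy as np

def lap1d(n):
    # 1-D Laplacian Delta_{1,h}: tridiag(-1, 2, -1)
    return 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

def adi_poisson(n, f, p, maxit=300, tol=1e-10):
    # Peaceman-Rachford ADI for (H + V) u = f with
    # H = Delta_{1,h} (x) I and V = I (x) Delta_{1,h}
    T, I = lap1d(n), np.eye(n)
    H, V = np.kron(T, I), np.kron(I, T)
    E = np.eye(n*n)
    u = np.zeros(n*n)
    for _ in range(maxit):
        u = np.linalg.solve(H + p*E, (p*E - V) @ u + f)   # first half-step
        u = np.linalg.solve(V + p*E, (p*E - H) @ u + f)   # second half-step
        if np.linalg.norm((H + V) @ u - f) <= tol*np.linalg.norm(f):
            break
    return u
```

In practice H + pI is never formed explicitly; each half-step decouples into tridiagonal solves along grid lines.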
  • 7-9. ADI and Lyapunov Equations [Wachspress '88]. Lyapunov equation: F X + X F^T = −G G^T. Vectorized: ((I ⊗ F) + (F ⊗ I)) vec(X) = −vec(G G^T), with H_F := I ⊗ F and V_F := F ⊗ I. Same structure ⇒ apply ADI:
    (F + p_i I) X_{i+1/2} = −G G^T − X_i (F^T − p_i I),
    (F + p_i I) X_{i+1}^T = −G G^T − X_{i+1/2}^T (F^T − p_i I).
  • 10-12. LR-ADI for Lyapunov Equations. Lyapunov equation F X + X F^T = −G G^T. Often the singular values of X decay rapidly when G is "thin" ⇒ X ≈ Z Z^T with Z "thin". LR-ADI [Penzl '99, Li/White '02]:
    Z_0 = [ ],
    V_1 = √(−2 Re(p_1)) (F + p_1 I)^{−1} G,
    V_i = √(Re(p_i)/Re(p_{i−1})) [ I − (p_i + p̄_{i−1})(F + p_i I)^{−1} ] V_{i−1},
    Z_i = [Z_{i−1}, V_i].
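A compact NumPy sketch of the LR-ADI recursion above, under the simplifying assumption of real negative shifts (shift selection is left to the caller; the cycling of a fixed shift list is my addition for illustration):

```python
import numpy as np

def lr_adi(F, G, shifts, steps):
    # LR-ADI [Penzl '99, Li/White '02] for F X + X F^T = -G G^T,
    # returning Z with X ~= Z Z^T; real negative shifts assumed.
    n = F.shape[0]
    I = np.eye(n)
    V = np.sqrt(-2.0*shifts[0]) * np.linalg.solve(F + shifts[0]*I, G)
    Z = V
    for i in range(1, steps):
        p, q = shifts[(i-1) % len(shifts)], shifts[i % len(shifts)]
        # V_i = sqrt(p_i/p_{i-1}) (I - (p_i + p_{i-1})(F + p_i I)^{-1}) V_{i-1}
        V = np.sqrt(q/p) * (V - (q + p)*np.linalg.solve(F + q*I, V))
        Z = np.hstack([Z, V])
    return Z
```

Each step adds one block column to Z, so the low-rank factor grows by the number of columns of G per iteration.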
  • 13. Generalizing Matrix Equations. ∆_{2,h} vec(X) = vec(B), i.e. (I ⊗ ∆_{1,h} + ∆_{1,h} ⊗ I) vec(X) = vec(B) (= H + V applied to u = vec(X), f = vec(B)). In matrix form this is the Sylvester-type equation ∆^{µ_a} X_{ac} + X_{ac} ∆^{µ_c} = B_{ac}, the operator acting on the two modes a and c of X.
  • 14. Generalizing Matrix Equations. ∆_{4,h} vec(X) = vec(B), i.e.
    (I ⊗ I ⊗ I ⊗ ∆_{1,h} + I ⊗ I ⊗ ∆_{1,h} ⊗ I + I ⊗ ∆_{1,h} ⊗ I ⊗ I + ∆_{1,h} ⊗ I ⊗ I ⊗ I) vec(X) = vec(B)
    (=: H + V + R + Q, u, f). In tensor form: X_{abcd} ×_a ∆^{µ_a} + X_{abcd} ×_b ∆^{µ_b} + X_{abcd} ×_c ∆^{µ_c} + X_{abcd} ×_d ∆^{µ_d} = B_{abcd}, the operator acting on each of the four modes of X.
  • 15-16. Generalizing ADI. For d = 2, (I ⊗ ∆_{1,h} + ∆_{1,h} ⊗ I) vec(X) = vec(B) (=: H + V, u, f):
    (H + p_{i,1} I) X_{i+1/2} = (p_{i,1} I − V) X_i + B,
    (V + p_{i,2} I) X_{i+1} = (p_{i,2} I − H) X_{i+1/2} + B.
    For d = 4 (operators H, V, R, Q as before), one sweep consists of four quarter-steps:
    (H + p_{i,1} I) X_{i+1/4} = (p_{i,1} I − V − R − Q) X_i + B,
    (V + p_{i,2} I) X_{i+1/2} = (p_{i,2} I − H − R − Q) X_{i+1/4} + B,
    (R + p_{i,3} I) X_{i+3/4} = (p_{i,3} I − H − V − Q) X_{i+1/2} + B,
    (Q + p_{i,4} I) X_{i+1} = (p_{i,4} I − H − V − R) X_{i+3/4} + B.
  • 17. Goal. Solve A X = B with
    A = I ⊗ I ⊗ ··· ⊗ I ⊗ A_1 + I ⊗ I ⊗ ··· ⊗ A_2 ⊗ I + ... + A_d ⊗ I ⊗ ··· ⊗ I ⊗ I.
    B is given in tensor train decomposition ⇒ X is sought in tensor train decomposition.
  • 18-19. Tensor Trains [Oseledets, Tyrtyshnikov '09].
    T(i_1, i_2, ..., i_d) = Σ_{α_1,...,α_{d−1}=1}^{r_1,...,r_{d−1}} G_1(i_1, α_1) G_2(α_1, i_2, α_2) ··· G_j(α_{j−1}, i_j, α_j) ··· G_{d−1}(α_{d−2}, i_{d−1}, α_{d−1}) G_d(α_{d−1}, i_d).
    [Diagram: a chain of cores G_1(i_1, α_1), G_2(α_1, i_2, α_2), ..., G_d(α_{d−1}, i_d), linked by the rank indices α_1, ..., α_{d−1}.]
  • 20. Tensor Trains [Oseledets, Tyrtyshnikov '09]. Tensor trains are computable and require only O(d n r^2) storage, with TT-rank r and T ∈ R^{n^d}. For comparison:
    canonical representation: T(i_1, ..., i_d) = Σ_α G_1(i_1, α) ··· G_d(i_d, α);
    Tucker decomposition: T(i_1, ..., i_d) = Σ_{α_1,...,α_d} C(α_1, ..., α_d) G_1(i_1, α_1) ··· G_d(i_d, α_d).
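The TT contraction above is just a chain of small matrix products; a minimal entry-evaluation routine (a hypothetical helper, with core shapes as in the slide: G_1 ∈ R^{n×r_1}, G_j ∈ R^{r_{j−1}×n×r_j}, G_d ∈ R^{r_{d−1}×n}):

```python
import numpy as np

def tt_entry(cores, idx):
    # T(i1,...,id) = G1(i1,:) @ G2(:,i2,:) @ ... @ Gd(:,id)
    v = cores[0][idx[0], :]
    for G, i in zip(cores[1:-1], idx[1:-1]):
        v = v @ G[:, i, :]
    return float(v @ cores[-1][:, idx[-1]])
```

For a rank-1 (separable) tensor T(i,j,k) = a_i b_j c_k the cores are simply a, b, c reshaped to the core shapes, which gives a quick correctness check.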
  • 21-25. Tensor Trains [Oseledets, Tyrtyshnikov '09]: applying (I ⊗ ··· ⊗ I ⊗ A_1) to T touches only the first core:
    T ×_1 A_1 = Σ_{α_1,...,α_{d−1}} ( Σ_{i_1} [A_1]_{β, i_1} G_1(i_1, α_1) ) G_2(α_1, i_2, α_2) ··· G_{d−1}(α_{d−2}, i_{d−1}, α_{d−1}) G_d(α_{d−1}, i_d),
    i.e. G_1 is replaced by G̃_1(β, α_1) = (A_1 G_1)(β, α_1). The same holds for the inverse: T ×_1 A_1^{−1} replaces G_1 by A_1^{−1} G_1, so mode products and mode solves act on a single core.
  • 26-27. Eigenvalues. A = I ⊗ ··· ⊗ I ⊗ A_1 + I ⊗ ··· ⊗ I ⊗ A_2 ⊗ I + ... + A_d ⊗ I ⊗ ··· ⊗ I.
    Stéphanos' theorem ⇒ λ_i(A) = λ_{i_1}(A_1) + λ_{i_2}(A_2) + ··· + λ_{i_d}(A_d), with i = i_1 + i_2 n_1 + ··· + i_d Π_{j=1}^{d−1} n_j.
    Further, A X = B ⇔ Σ_{j=1}^{d} X ×_j A_j = B, and A is regular ⇔ λ_i(A) ≠ 0 ∀i ⇐ A_i Hurwitz ∀i.
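Stéphanos' theorem is easy to check numerically for small d (a quick illustration; `kron_sum` is a hypothetical helper that builds A with A_1 acting in the rightmost Kronecker factor, as on the slide):

```python
import numpy as np
from functools import reduce

def kron_sum(mats):
    # A = I (x) ... (x) A_1 + ... + A_d (x) I (x) ... (x) I  (A_1 rightmost)
    terms = []
    for k, Ak in enumerate(mats):
        factors = [np.eye(M.shape[0]) for M in mats]
        factors[k] = Ak                      # A_k in slot k ...
        terms.append(reduce(np.kron, factors[::-1]))  # ... reversed Kronecker order
    return sum(terms)
```

For diagonal blocks the spectrum of the Kronecker sum is visibly the set of all sums of block eigenvalues.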
  • 28-31. Algorithm.
    Input: {A_1, ..., A_d}, tensor train B, accuracy ε
    Output: tensor train X with A X = B
    forall j ∈ {1, ..., d} do X_j^{(0)} := zeros(n, 1, 1)
    while ||r^{(i)}|| > ε do
        choose shift p_i
        forall k ∈ {1, ..., d} do
            X^{(i+k/d)} := (B + p_i X^{(i+(k−1)/d)} − X^{(i+(k−1)/d)} Σ_{j=1, j≠k}^{d} ×_j A_j) ×_k (A_k + p_i I)^{−1}
    Residual: r^{(i)} := B − Σ_{j=1}^{d} X^{(i)} ×_j A_j. The mode-k solve ×_k (A_k + p_i I)^{−1} realizes the application of (I ⊗ ··· ⊗ A_k ⊗ ··· ⊗ I + p_i I)^{−1} and touches only core k of X^{(i+(k−1)/d)}.
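A dense-tensor sketch of the sweep above (the talk works in TT format, where each mode solve touches one core; here full tensors and explicit inverses keep the illustration short, and the fixed shift list is my simplification):

```python
import numpy as np

def mode_mult(X, M, k):
    # mode-k product X x_k M: apply M along axis k of X
    return np.moveaxis(np.tensordot(M, X, axes=([1], [k])), 0, k)

def tensor_adi(As, B, shifts, tol=1e-10, maxit=60):
    # ADI sweeps for sum_j X x_j A_j = B on a dense tensor X
    d = len(As)
    X = np.zeros_like(B)
    normB = np.linalg.norm(B)
    for i in range(maxit):
        p = shifts[i % len(shifts)]
        for k in range(d):
            rhs = B + p*X
            for j in range(d):
                if j != k:
                    rhs = rhs - mode_mult(X, As[j], j)
            X = mode_mult(rhs, np.linalg.inv(As[k] + p*np.eye(As[k].shape[0])), k)
        R = B - sum(mode_mult(X, As[j], j) for j in range(d))
        if np.linalg.norm(R) <= tol*normB:
            break
    return X
```

The residual check once per sweep mirrors the while-loop condition of the slide's algorithm.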
  • 32-36. Improvements: better shifts, e.g. Wachspress/Penzl; test the residual in the innermost loop; replace the inner loop by a random direction k; use tensor train truncation and start with a low accuracy.
    Variant with direction-wise shifts p_{i,k}: while ||r^{(i)}|| > ε do: choose shifts p_{i,k}; forall k ∈ {1, ..., d} do
    X^{(i+k/d)} := (B + p_{i,k} X^{(i+(k−1)/d)} − X^{(i+(k−1)/d)} Σ_{j=1, j≠k}^{d} ×_j A_j) ×_k (A_k − p_{i,k} I)^{−1}.
    Variant with random directions: forall ℓ = 1, ..., 5 do: k := random(1, ..., d);
    X^{(i+ℓ/5)} := (B + p_{i,ℓ} X^{(i+(ℓ−1)/5)} − X^{(i+(ℓ−1)/5)} Σ_{j=1, j≠k}^{d} ×_j A_j) ×_k (A_k − p_{i,ℓ} I)^{−1}.
    Remark: the improvement with randomly chosen directions works for the Lyapunov case with the special right-hand side as in the examples; the investigation of the convergence behavior for random right-hand sides is work in progress.
  • 37. Improvements (cont.): use tensor train truncation and start with a low accuracy. [Plot: storage in doubles (up to about 1·10^5) and truncation error (from 10^{−2} down to 10^{−20}) over 30 iterations, comparing a constant truncation error with a successively tightened one.]
  • 38. Lemma [Grasedyck '04]. The tensor equation Σ_{j=1}^{d} X ×_j A_j = B with λ_i(A) ≠ 0 ∀i and A_k Hurwitz has the solution
    X = −∫_0^∞ B ×_1 exp(A_1 t) ×_2 ··· ×_d exp(A_d t) dt.
    Proof sketch: let Z(t) = B ×_1 exp(A_1 t) ×_2 ··· ×_d exp(A_d t). Then Ż(t) = Σ_{j=1}^{d} Z(t) ×_j A_j and Z(∞) − Z(0) = ∫_0^∞ Ż(t) dt, hence 0 − B = Σ_{j=1}^{d} (∫_0^∞ Z(t) dt) ×_j A_j, so −∫_0^∞ Z(t) dt solves the equation.
  • 39-43. Theorem. {A_1, ..., A_d} ⇒ A with Λ(A) ⊂ [−λ_max, −λ_min] ⊕ ı[−µ, µ] ⊂ C^−. Let k ∈ N and use the quadrature points and weights
    h_st := π/√k,  t_j := log(e^{j·h_st} + √(1 + e^{2j·h_st})),  w_j := h_st / √(1 + e^{−2j·h_st}).
    Then, for B given in tensor train form with cores G_p, the solution X can be approximated by
    X̃(i_1, i_2, ..., i_d) = − Σ_{α_1,...,α_{d−1}=1}^{r_1,...,r_{d−1}} H_1(i_1, α_1) ··· H_d(α_{d−1}, i_d),
    with H_p(α_{p−1}, i_p, α_p) := Σ_{j=−k}^{k} Σ_{β_p} (2w_j/λ_min) [exp((2t_j/λ_min) A_p)]_{i_p, β_p} G_p(α_{p−1}, β_p, α_p),
    and with the approximation error
    ||X − X̃||_2 ≤ (C_st / 2π) e^{(2µ λ_min^{−1} + 1)/π} e^{−π√k} Σ_{p=1}^{d} ∫_Γ ||(λI − 2A_p/λ_min)^{−1}||_2 dΓ_λ · ||B||_2.
    This extends [Grasedyck '04] (X and B of low Kronecker rank) to low TT-rank.
  • 44. Proof, Part 1. The quadrature formula (t_j, w_j) can be found in [Stenger '93, Example 4.2.11], with d = π/2, α = β = 1, n = N = M = k. [Hackbusch '09, D.4.3] shows that d = π/2 is optimal. The quadrature formula approximates 1/(−r), resp. the inverse of a matrix, via
    1/(−r) = ∫_0^∞ e^{tr} dt ≈ Σ_{j=−k}^{k} w_j e^{t_j r}  (Re(r) < 0).
    The quadrature error is bounded by [Stenger '93, (4.2.60)]
    |∫_0^∞ e^{tr} dt − Σ_{j=−k}^{k} w_j e^{t_j r}| ≤ C_3 e^{−π√k},
    with C_3 ≤ C_st e^{|Im(z)|/π} [Grasedyck, Hackbusch, Khoromskij '03].
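The stated quadrature points and weights can be sanity-checked for the scalar integral (a small numerical check of the Stenger rule as quoted on the slide; tolerances are chosen loosely since the error constant depends on r):

```python
import numpy as np

def sinc_quad_inv(r, k):
    # approximate 1/(-r) = int_0^inf e^{t r} dt, Re(r) < 0, with the
    # quadrature of [Stenger '93, Ex. 4.2.11]: h_st = pi/sqrt(k)
    h = np.pi / np.sqrt(k)
    j = np.arange(-k, k + 1)
    t = np.log(np.exp(j*h) + np.sqrt(1 + np.exp(2*j*h)))   # t_j
    w = h / np.sqrt(1 + np.exp(-2*j*h))                    # w_j
    return float(np.sum(w * np.exp(t * r)))
```

The error decays like e^{−π√k}, so already k = 25 gives several correct digits.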
  • 45. Proof, Part 2. The lemma X = −∫_0^∞ B ×_1 exp(A_1 t) ×_2 ··· ×_d exp(A_d t) dt together with the scaling 2/λ_min yields the formula
    X̃(i_1, i_2, ..., i_d) = − Σ_{α_1,...,α_{d−1}=1}^{r_1,...,r_{d−1}} H_1(i_1, α_1) ··· H_d(α_{d−1}, i_d),
    where H_p(α_{p−1}, i_p, α_p) := Σ_{j=−k}^{k} Σ_{β_p} (2w_j/λ_min) [exp((2t_j/λ_min) A_p)]_{i_p, β_p} G_p(α_{p−1}, β_p, α_p),
    using 1/(−r) = ∫_0^∞ e^{tr} dt ≈ Σ_{j=−k}^{k} w_j e^{t_j r}.
  • 46-51. Proof, Part 3. For the error we have
    ||X − X̃||_2 = || Σ_{α_1,...,α_{d−1}=1}^{r_1,...,r_{d−1}} Π_{p=1}^{d} Σ_{β_p} G_p(α_{p−1}, β_p, α_p) [ −∫_0^∞ exp((2t/λ_min) A_p) dt + Σ_{j=−k}^{k} w_j exp((2t_j/λ_min) A_p) ]_{i_p, β_p} ||_2.
    With the Dunford-Cauchy representation of the matrix exponential, e^{tA_p} = (1/2πı) ∫_Γ e^{tλ} (λI − A_p)^{−1} dΓ_λ, and the quadrature error bound:
    || −∫_0^∞ e^{(2t/λ_min) A_p} dt + Σ_{j=−k}^{k} w_j e^{(2t_j/λ_min) A_p} ||
    ≤ (1/2π) || −∫_0^∞ ∫_Γ e^{tλ} (λI − 2A_p/λ_min)^{−1} dΓ_λ dt + Σ_{j=−k}^{k} w_j ∫_Γ e^{t_j λ} (λI − 2A_p/λ_min)^{−1} dΓ_λ ||
    ≤ (1/2π) ∫_Γ | −∫_0^∞ e^{tλ} dt + Σ_{j=−k}^{k} w_j e^{t_j λ} | · ||(λI − 2A_p/λ_min)^{−1}|| dΓ_λ
    ≤ (1/2π) C_st e^{|Im(z)|/π} e^{−π√k} ∫_Γ ||(λI − 2A_p/λ_min)^{−1}||_2 dΓ_λ.
    Summing over p completes the proof.
  • 52. Numerical Results.
  • 53. Example: Laplace, A_i = ∆_{1, 1/11}. Right-hand side B = [0 0 ... 0 1] (rank 1). Shifts: p_i := λ_{*_1}(A_1) + ... + λ_{*_d}(A_d), i.e. a randomly chosen eigenvalue of A (via Stéphanos' theorem).
  • 54. Numerical Results, A_i = ∆_{1, 1/11}:
        d     t in s        residual      mean(#it)
        2     0.3887e+00    7.015e−10     112.8
        5     5.3975e+00    7.467e−10     45.8
        8     6.0073e+00    6.936e−10     12.8
        10    3.6624e+00    7.685e−10     6.8
        25    3.1421e+01    2.437e−10     5.0
        50    2.2682e+02    2.049e−10     5.0
        75    7.1918e+02    4.036e−10     5.0
        100   1.6997e+03    1.864e−10     5.0
        150   5.5375e+03    1.801e−10     5.0
        200   1.2795e+04    1.472e−10     5.0
        250   2.4991e+04    1.816e−10     5.0
        300   4.2979e+04    2.535e−10     5.0
        500   1.9515e+05    2.039e−10     5.0
  • 55. Numerical Results, A_i = ∆_{1, 1/11}; computation times in s, tensor ADI (TADI) against sparse and dense comparison solvers ("—" = not computed):
        d     TADI     MESS       Penzl's sh.   MESS      Penzl's sh.   lyap
                       (sparse)   (sparse)      (dense)   (dense)
        2     0.310    0.0006     0.024         0.003     0.0003        0.0005
        4     3.130    0.1695     0.011         0.049     6.331         0.012
        6     8.147    —          0.076         0.094     —             7.165
        8     5.458    —          5.863         1.097     —             13 698.212
        10    5.306    —          3 445.523     249.464   —             —
  • 56-57. Numerical Results, A_i = ∆_{1, 1/11}. [Log-log plot: computation time in s (10^{−2} to 10^5) versus dimension d (10 to 300) for Tensor ADI, sparse MESS, Penzl's shifts, and dense lyap.]
  • 58-61. Single Shift and Convergence. A = I ⊗ ··· ⊗ I ⊗ A_1 + ... + A_d ⊗ I ⊗ ··· ⊗ I; we assume Λ(A_k) ⊂ R^−. Error propagation with a single shift p:
    ||G_1||_2 ≤ max_{λ_k ∈ Λ(A_k), k=1,...,d} Π_{k=1}^{d} |p − Σ_{l≠k} λ_l| / |p + λ_k| = max_{λ_k ∈ Λ(A_k), k=1,...,d} Π_{k=1}^{d} |1 − (Σ_{l=1}^{d} λ_l)/(p + λ_k)|.
    If ||G_1||_2 < 1, then the ADI iteration converges. This requires p < 0 and p > −∞, as well as p < λ_i(A) = Σ_{k=1}^{d} λ_{i_k}(A_k) ∀i. Lyapunov case (A_k = A_0 ∀k): p < (d−2)/2 · λ_min(A_0); for d = 2 this bound becomes (2−2)/2 · λ_min(A_0) = 0, i.e. every p < 0 is admissible.
  • 62-65. Shifts. Min-max problem:
    min_{{p_{1,1},...,p_{ℓ,d}} ⊂ C} max_{λ_k ∈ Λ(A_k) ∀k} Π_{i=1}^{ℓ} Π_{k=1}^{d} |p_{i,k} − Σ_{j≠k} λ_j| / |p_{i,k} + λ_k|.
    Min-max problem, Lyapunov case (A_k = A_0 ∀k, A_0 Hurwitz), with a single shift per sweep:
    min_{{p_1,...,p_ℓ} ⊂ C} max_{λ_k ∈ Λ(A_0) ∀k} Π_{i=1}^{ℓ} Π_{k=1}^{d} |p_i − Σ_{j≠k} λ_j| / |p_i + λ_k|.
    Setting λ_k = λ_0 ∀k suggests Penzl's idea: choose {p_1, ..., p_ℓ} ⊂ (d − 1) Λ(A_0).
  • 66. Random Example. seed := 1; R := rand(10); R := R + R^T; R := R − (λ_min(R) − 0.1) I; A_0 = −R.
    Λ(A_0) = {−0.1000, −0.2250, −1.1024, −1.7496, −2.0355, −2.4402, −3.1330, −3.3961, −3.9347, −11.9713}
    ⇒ The random shifts do not lead to convergence. Use instead p_0 = λ_{10}(A_0)(d − 1), p_1 = λ_9(A_0)(d − 1).
  • 67. Numerical Results, A_i = −R; random direction k, residual tested every 5 inner iterations, max. 250 iterations:
        d     t in s      residual      #it
        2     0.3627      2.6327e−06    250
        5     17.6850     1.4517e−07    250
        8     62.4336     9.3164e−09    200
        10    44.1547     8.5963e−09    125
        15    12.2231     5.0356e−09    60
        20    15.7506     3.3142e−09    50
        25    25.2221     3.6501e−09    45
        50    49.2004     5.4141e−09    35
        75    118.1297    6.8682e−09    30
        100   614.4017    2.4598e−09    30
  • 68. Conditioning of the Problem.
  • 69-71. Eigenvalues. A = I ⊗ ··· ⊗ I ⊗ A_1 + I ⊗ ··· ⊗ I ⊗ A_2 ⊗ I + ... + A_d ⊗ I ⊗ ··· ⊗ I.
    Stéphanos' theorem ⇒ λ_i(A) = λ_{i_1}(A_1) + λ_{i_2}(A_2) + ··· + λ_{i_d}(A_d), with i = i_1 + i_2 n_1 + ··· + i_d Π_{j=1}^{d−1} n_j.
    With m_k := argmax_{j ∈ {1,...,n_k}: Im(λ_j(A_k)) ≥ 0} |λ_j(A_k)| this yields the bounds
    max_l |λ_l(A)| ≥ |Σ_{k=1}^{d} λ_{m_k}(A_k)| ≥ Σ_{k=1}^{d} |Re(λ_{m_k}(A_k))|, if ∀k: A_k Hurwitz,
    min_l |λ_l(A)| ≤ Σ_{k=1}^{d} min_{l_k} |λ_{l_k}(A_k)|.
• 72. Normal Matrices Aj

  Lemma. If all Ai, i = 1, …, d, are normal, then A is normal as well.

  Proof. Here d = 3; the extension to larger d is obvious.

  AAᵀ = (A3 ⊗ I ⊗ I + I ⊗ A2 ⊗ I + I ⊗ I ⊗ A1)(A3ᵀ ⊗ I ⊗ I + I ⊗ A2ᵀ ⊗ I + I ⊗ I ⊗ A1ᵀ)

      = A3A3ᵀ ⊗ I ⊗ I + A3 ⊗ A2ᵀ ⊗ I + A3 ⊗ I ⊗ A1ᵀ
      + A3ᵀ ⊗ A2 ⊗ I + I ⊗ A2A2ᵀ ⊗ I + I ⊗ A2 ⊗ A1ᵀ
      + A3ᵀ ⊗ I ⊗ A1 + I ⊗ A2ᵀ ⊗ A1 + I ⊗ I ⊗ A1A1ᵀ

  Using the normality of each factor, AkAkᵀ = AkᵀAk, the diagonal terms may be reordered:

      = A3ᵀA3 ⊗ I ⊗ I + A3 ⊗ A2ᵀ ⊗ I + A3 ⊗ I ⊗ A1ᵀ
      + A3ᵀ ⊗ A2 ⊗ I + I ⊗ A2ᵀA2 ⊗ I + I ⊗ A2 ⊗ A1ᵀ
      + A3ᵀ ⊗ I ⊗ A1 + I ⊗ A2ᵀ ⊗ A1 + I ⊗ I ⊗ A1ᵀA1

      = (A3 ⊗ I ⊗ I + I ⊗ A2 ⊗ I + I ⊗ I ⊗ A1)ᵀ (A3 ⊗ I ⊗ I + I ⊗ A2 ⊗ I + I ⊗ I ⊗ A1)

      = AᵀA.
• 77. Normal Matrices Aj

  Lemma. If all Ai, i = 1, …, d, are normal, then A is normal as well.

  Consequence: for normal A,

      κ2(A) = ‖A‖2 ‖A⁻¹‖2 = σmax(A)/σmin(A) = max_l |λ_l(A)| / min_l |λ_l(A)|,

  and if ∀i: Ai = A0,

      κ2(A) ≥ max_{l_u} |λ_{l_u}(A0)| / min_{l_l} |λ_{l_l}(A0)|.
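A quick numerical check of the lemma and the resulting condition number formula — a sketch (not from the slides) with assumed random normal factors and d = 2 for brevity:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
S = rng.standard_normal((n, n))
# Skew-symmetric plus a real shift is normal; the negative shift
# also places all eigenvalues in the open left half-plane (Hurwitz).
A1 = (S - S.T) / 2 - 2.0 * np.eye(n)
A2 = (S.T - S) / 2 - 3.0 * np.eye(n)

I = np.eye(n)
A = np.kron(I, A1) + np.kron(A2, I)   # d = 2 Kronecker sum

norm_err = np.linalg.norm(A @ A.T - A.T @ A)   # ~0: A inherits normality
lam = np.linalg.eigvals(A)
kappa = np.linalg.cond(A, 2)
ratio = np.abs(lam).max() / np.abs(lam).min() # equals kappa for normal A
print(norm_err, kappa, ratio)
```

Since A is normal, its singular values are the moduli of its eigenvalues, so `np.linalg.cond(A, 2)` coincides with the eigenvalue-modulus ratio up to rounding.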
• 78. Lower Bounds for Non-Normal Matrices Aj

  FX + XFᵀ = −G   (i.e., d = 2, A1 = A2 = F Hurwitz)   [Zhou ’02]

      κ2(A) ≥ max_l |λ_l(F)| / min_l |λ_l(F)|

  by observing

      σmax(A) = ‖A‖2 = sup_{‖y‖2=1} ‖Ay‖2 ≥ sup_{‖y‖2=1, y eigenvector} ‖Ay‖2 = max_l |λ_l(A)| = 2 max_l |λ_l(F)|

  and

      σmin(A) = ‖A⁻¹‖2⁻¹ = (sup_y ‖A⁻¹y‖2 / ‖y‖2)⁻¹ = inf_y ‖Ay‖2 / ‖y‖2
              ≤ inf_{y eigenvector} ‖Ay‖2 / ‖y‖2 = min_l |λ_l(A)| ≤ 2 min_l |λ_l(F)|.
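Zhou's bound can likewise be checked numerically. A sketch (not from the slides) with an assumed random, shifted — and hence Hurwitz — F:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
F = rng.standard_normal((n, n)) - 3.0 * n * np.eye(n)  # shift => Hurwitz
assert np.all(np.linalg.eigvals(F).real < 0)

I = np.eye(n)
A = np.kron(I, F) + np.kron(F, I)   # Lyapunov operator, d = 2

lam = np.linalg.eigvals(F)
bound = np.abs(lam).max() / np.abs(lam).min()
kappa = np.linalg.cond(A, 2)
print(kappa >= bound)   # the lower bound holds
```

The bound holds for any Hurwitz F, since the extreme singular values of A are sandwiched by 2 max|λ(F)| and 2 min|λ(F)| as on the slide above.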
• 81. Lower Bounds for Non-Normal Matrices Aj

  Tensor Lyapunov Equations (i.e., ∀i = 1, …, d: Ai = A0 Hurwitz)   [M./S. ’11]

      κ2(A) ≥ max_l |λ_l(A0)| / min_l |λ_l(A0)|

  since

      σmax(A) ≥ max_l |λ_l(A)| = d max_l |λ_l(A0)|,
      σmin(A) ≤ min_l |λ_l(A)| ≤ d min_{l0} |λ_{l0}(A0)|.
• 82. Lower Bounds for Non-Normal Matrices Aj

  Tensor Sylvester Equations (i.e., ∀i = 1, …, d: Ai Hurwitz)   [M./S. ’11]

      κ2(A) ≥ |∑_{k=1}^{d} λ_{m_k}(Ak)| / ∑_{k=1}^{d} min_{l_k} |λ_{l_k}(Ak)|

  with m_k as before, since

      σmax(A) ≥ max_l |λ_l(A)| ≥ |∑_{k=1}^{d} λ_{m_k}(Ak)|,
      σmin(A) ≤ min_l |λ_l(A)| ≤ ∑_{k=1}^{d} min_{l_k} |λ_{l_k}(Ak)|.
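The general Sylvester bound can be checked the same way. A sketch (not from the slides) for d = 3 with assumed random, shifted Hurwitz factors; the chosen eigenvalue sum is itself an eigenvalue of A, which is what makes the numerator a valid lower bound on σmax(A):

```python
import numpy as np
from functools import reduce

rng = np.random.default_rng(3)
ns = [2, 3, 2]
As = [rng.standard_normal((n, n)) - 4.0 * n * np.eye(n) for n in ns]

def kron_sum(mats):
    """Sum over k of I (x) ... (x) Ak (x) ... (x) I, innermost factor first."""
    terms = []
    for k, Ak in enumerate(mats):
        factors = [np.eye(n) for n in ns]
        factors[k] = Ak
        terms.append(reduce(np.kron, reversed(factors)))
    return sum(terms)

A = kron_sum(As)
spectra = [np.linalg.eigvals(Ak) for Ak in As]
# Numerator: modulus of one largest-modulus eigenvalue per factor, summed.
num = abs(sum(lam[np.argmax(np.abs(lam))] for lam in spectra))
# Denominator: sum of the smallest eigenvalue moduli per factor.
den = sum(np.abs(lam).min() for lam in spectra)
kappa = np.linalg.cond(A, 2)
print(kappa >= num / den)   # the lower bound holds
```

Note the bound remains valid for any choice of one eigenvalue per factor; the restriction to the closed upper half-plane in the definition of m_k merely avoids cancellation between conjugate pairs and so makes the bound as large as possible.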