LoGiC: Generating RPG Worlds using
   Self-Learning Neural Networks

             Marlon Etheredge

     Amsterdam University of Applied Sciences
            marlon.etheredge@hva.nl


             December 5, 2012
About This Research


      Fast exact graph matching using adjacency matrices
          Describes a solution for the Graph Isomorphism problem
          Strong focus on Procedurally Generated Content
           Describes graph-based game structures rewritable by rewrite
           rules
          milk: Implementation of the algorithm (open source)
          Builds on work by Joris Dormans
      LoGiC
          Learning Game world Creator
           Describes a method of creating RPG (or other genres of)
           game worlds using a self-learning Neural Network (NN)
          Makes use of SARSA
A Subproblem: Genetic vs Graph Based Procedural
Content (Reiteration)

      Requirement of real-time alteration of structures within the
      game
      Evolutionary algorithms require an arbitrary number of
      generations
          Undesirable given the quick and direct transformations we
          demand
      Graph Based Procedural Content offers us fast and direct
      modification of structures within the game
      Practically any structure within the game may be represented
      by a graph
          Allows for the modification of many structures within the
          game
Another Problem: Graph Isomorphism (Reiteration)


      Graph Based Procedural Content requires fast real-time Graph
      Transformation
      We need a fast real-time algorithm for solving the Graph
      Isomorphism problem
           Existing well-known algorithms include:
                VF2: mainly focused on large graphs
                Ullmann: too slow for use in our project
                R. Heckel: exponential processing time
      Need for a fast algorithm still offering full flexibility
      Solved by milk
milk: Adjacency Matrices (Reiteration)

      Uses adjacency matrices
      Adjacency matrix generation by a trivial function (see the
      sketch below)
           For every connection in the edge set, set a one in a
           two-dimensional matrix, indexed by the first node and the
           second node
      Matrices should store the connection count for each row and
      column; it is convenient to store this at the end of the row or
      column
      [Figure: the Pattern graph over nodes C1, E2, B3, A4, A5, B6
      and its adjacency matrix; the extra row/column l holds the
      connection counts.]

                  C1   E2   B3   A4   A5   B6   l
            C1     0    0    0    0    0    0   0
            E2     1    0    1    0    0    0   2
            B3     0    0    0    0    0    0   0
            A4     0    1    0    0    0    0   1
            A5     0    0    0    1    0    0   1
            B6     0    0    0    0    1    0   1
            l      1    1    1    1    1    0   0
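A minimal sketch of such a trivial generation function in C. This is not the milk source; Edge, build_adjacency, and the fixed node count N are assumptions for illustration:

```c
#include <string.h>

#define N 6  /* number of nodes in the pattern graph */

typedef struct { int from, to; } Edge;

/* m is (N+1) x (N+1): m[i][N] holds row i's connection count,
 * m[N][j] holds column j's connection count. */
void build_adjacency(int m[N + 1][N + 1], const Edge *edges, int count)
{
    memset(m, 0, sizeof(int[N + 1][N + 1]));
    for (int i = 0; i < count; ++i) {
        m[edges[i].from][edges[i].to] = 1;  /* a one per connection    */
        m[edges[i].from][N] += 1;           /* row connection count    */
        m[N][edges[i].to] += 1;             /* column connection count */
    }
}
```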
Neural Network: Eve

      Neurons and connections
      Weights determine when neurons fire and what they pass through
      Input Layer (input neurons, sensors)
      Hidden Layers
      Output Layer (output, actions/decisions/others)
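For concreteness, a minimal C sketch of this layered layout; the struct and field names are illustrative, and Eve's actual data structures may differ:

```c
typedef struct {
    int     inputs;    /* neurons feeding this layer               */
    int     outputs;   /* neurons in this layer                    */
    double *weights;   /* inputs x outputs connection weights      */
    double *bias;      /* one bias per neuron in this layer        */
} Layer;

typedef struct {
    Layer *layers;     /* input layer -> hidden layer(s) -> output */
    int    count;
} Network;
```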
Letting Eve learn
        Backpropagation, two phases (propagation and weight update)
           First phase step 1: Forward propagation, run input through the
           NN, generate output
           First phase step 2: Backward propagation, run the output in
           reversed order through the NN and determine ∆ for all
           neurons in the hidden and output layer
            Second phase step 1: Multiply ∆ by the input activation to
            determine the gradient
            Second phase step 2: Update each weight by stepping against
            the gradient, scaled by a defined Learning Rate
        AND training set (truth table below)
            Training set
            Generalization (truly unseen data)
            Validation (verify the training accuracy)

                            x1   x2   x1 AND x2
                             0    0       0
                             0    1       0
                             1    0       0
                             1    1       1
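As an illustration of the two phases above, here is a minimal C sketch that trains on the AND truth table. Eve itself has hidden layers; AND is linearly separable, so a single sigmoid output neuron is enough here, and every name in the sketch is hypothetical:

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double in[4][2]  = { {0,0}, {0,1}, {1,0}, {1,1} };
    const double target[4] = { 0, 0, 0, 1 };
    double w[2] = { 0.1, -0.1 }, b = 0.0, rate = 0.5;  /* learning rate */

    for (int epoch = 0; epoch < 10000; ++epoch) {
        for (int i = 0; i < 4; ++i) {
            /* phase 1, step 1: forward propagation */
            double z   = w[0] * in[i][0] + w[1] * in[i][1] + b;
            double out = 1.0 / (1.0 + exp(-z));
            /* phase 1, step 2: delta for the (single) output neuron */
            double delta = (out - target[i]) * out * (1.0 - out);
            /* phase 2: gradient = delta * input activation;
             * step against it, scaled by the learning rate */
            w[0] -= rate * delta * in[i][0];
            w[1] -= rate * delta * in[i][1];
            b    -= rate * delta;
        }
    }
    for (int i = 0; i < 4; ++i) {
        double out = 1.0 / (1.0 + exp(-(w[0]*in[i][0] + w[1]*in[i][1] + b)));
        printf("%g AND %g -> %.3f\n", in[i][0], in[i][1], out);
    }
    return 0;
}
```

After training, the four outputs approach 0, 0, 0, 1, matching the table.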
SARSA



                             Try to follow me
    Q(s_t, a_t) ← Q(s_t, a_t) + α [ r_t + γ Q(s_{t+1}, a_{t+1}) − Q(s_t, a_t) ]

    Update the Q value of a pair (s_t, a_t), taking into account a
    learning rate α and an error term
    Q values are driven by a prediction for the next pair,
    Q(s_{t+1}, a_{t+1})
    In other words: from a prediction of the next (s_t, a_t) pair,
    determine some sort of ’weight’ and pick the best (s_t, a_t) pair
    See what’s happening here?
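The update rule translates almost directly into C. A minimal sketch assuming a tabular Q; N_STATES, N_ACTIONS, and sarsa_update are illustrative names, not taken from the LoGiC sources:

```c
#define N_STATES  64
#define N_ACTIONS 4

static double Q[N_STATES][N_ACTIONS];

/* Q(s_t, a_t) <- Q(s_t, a_t)
 *             + alpha * [r_t + gamma * Q(s_{t+1}, a_{t+1}) - Q(s_t, a_t)] */
void sarsa_update(int s, int a, double r, int s_next, int a_next,
                  double alpha, double gamma)
{
    double td_error = r + gamma * Q[s_next][a_next] - Q[s][a];
    Q[s][a] += alpha * td_error;
}
```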
SARSA, continued: Backpropagation


      Defined an ideal (s_t, a_t) pair
      Act according to this (s_t, a_t) pair
      At some point, this will lead to some result set (in the case of
      logic-game, the player finishing the game)
      When using a SARSA agent within our NN we need to keep
      track of all the possible (s_t, a_t) pairs for later reference
      Use backpropagation to generalize the chosen (s_t, a_t) pair
      through the NN:
           Run the actual result set in reverse order through the NN
           Update the weights
           Iterate to live another world (hopefully a bit smarter than
           before)
The Big Picture
XML to Graphs to World


      Using XML to represent an abstract form of our soon-to-be
      graph-based structures
      Graph-based structures are directly converted to geometry
      Currently supporting two overlapping geometry types (sketched
      below):

          GeoEntity: May contain children (represents divisible
          geometry, or space)
          GameEntity: Is a leaf (represents concrete objects)
      An assumption with logic is that we always supply a very basic
      default world
      Rewriting of game structures is performed on graph-structure
      level
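A minimal C sketch of the two entity kinds as a tree, assuming a plain parent/children representation; the type and field names are illustrative, not taken from the logic sources:

```c
typedef enum { GEO_ENTITY, GAME_ENTITY } EntityKind;

typedef struct Entity {
    EntityKind      kind;
    struct Entity **children;     /* GeoEntity: divisible space       */
    int             child_count;  /* GameEntity: leaf, child_count 0  */
} Entity;
```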
Summary


     logic-game: 2D top-down RPG, engine written on top of XNA
     4.0 (for convenience)
     Eve: NN implementation in C
     SARSA agent: State-Action-Reward-State-Action; defines the
     best state-action pair according to the predicted result
     determined by the NN
     Backpropagation: Used to update the internal connection
     weights in the NN
     milk: Graph-isomorphism implementation used to rewrite the
     structures inside the game, rewrite rules define concrete action
     sets
Work in progress




      Components
          logic-game
          Eve
          SARSA Agent
      The Big Picture
      Open source: Visit marlonetheredge.name for updates
Thank You Very Much



                        Thank you!
                I’ll now show some source.

                    Marlon Etheredge
                 marlon.etheredge@hva.nl
              http://marlonetheredge.name/

                   "Big results require big ambitions."

                              Heraclitus
