Dependent Haskell has long been desired by the Haskell community. The goal of this project is to make Haskell's core language, known as System FC, dependently typed, as a step towards dependent Haskell.
This is a work-in-progress project. As a small step towards our final goal, this talk focuses on coercion quantification. Coercion quantification is necessary to support homogeneous equality, which simplifies the core language and is important for the meta-theory of a dependently typed core.
Coercion quantification is interesting both for people working on the core language and for Haskell users. For GHC hackers, the patch to the core formalization deserves attention: adding coercion quantification requires refactoring many files in the compilation pipeline and introduces several subtleties. For Haskell users, coercion quantification opens up new questions in the design space of source Haskell, which requires a non-trivial extension of the constraint solver. We would like Haskell users to tell us whether this feature is desired in their development.
In this talk, we will share the high-level storyline of the dependently typed core, our low-level progress in implementing coercion quantification, and the design space involved, and seek feedback from the broader community.
Cross-Lingual Sentiment Analysis using modified BRAE, by marujirou
1) The document summarizes a paper that presents a model called BRAE (Bilingually Constrained Recursive Auto-encoder) for cross-lingual sentiment analysis using parallel corpora.
2) BRAE uses a recursive auto-encoder structure to learn joint representations for phrases in different languages that share the same semantic meaning.
3) It additionally incorporates sentiment supervision in the resource-rich language and transforms representations to the resource-poor language to perform sentiment classification without labeled data in that language.
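The recursive composition step at the heart of BRAE can be sketched with a toy auto-encoder node: two child phrase vectors are encoded into one parent vector, and the reconstruction error measures how well the parent preserves its children. This is a minimal illustrative sketch with random weights and invented names (W1/b1, W2/b2), not the paper's trained model.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4                                  # toy embedding dimension
W1 = rng.normal(0, 0.1, (d, 2 * d))    # encoder: two children -> parent
b1 = np.zeros(d)
W2 = rng.normal(0, 0.1, (2 * d, d))    # decoder: parent -> reconstruction
b2 = np.zeros(2 * d)

def compose(c1, c2):
    """Encode two child phrase vectors into one parent vector."""
    return np.tanh(W1 @ np.concatenate([c1, c2]) + b1)

def reconstruction_error(c1, c2):
    """Auto-encoder loss: how well the parent reconstructs its children."""
    p = compose(c1, c2)
    rec = np.tanh(W2 @ p + b2)
    return float(np.sum((rec - np.concatenate([c1, c2])) ** 2))

c1, c2 = rng.normal(size=d), rng.normal(size=d)
parent = compose(c1, c2)
print(parent.shape, reconstruction_error(c1, c2))
```

Applying `compose` bottom-up over a parse tree yields one vector per phrase; the bilingual constraint of the paper additionally ties together the vectors of aligned phrase pairs.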
This document discusses computing commonalities between SPARQL conjunctive queries. It defines the concept of a least general generalization (lgg) of queries: the most specific query that generalizes (is entailed by) each of the input queries. The document presents definitions for the lgg of basic graph pattern queries in SPARQL with respect to a set of RDF entailment rules and RDFS constraints. It computes the lgg of a set of queries by iteratively taking the lgg of query pairs. The goal is to study computing lggs in the conjunctive fragment of SPARQL, with applications to query optimization and recommendation.
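The core of generalizing two triple patterns is anti-unification: positions where the patterns agree are kept, and disagreements are replaced by a variable, using the same variable for the same pair of terms. The sketch below illustrates just this step on single triples; it ignores the RDF entailment rules and RDFS constraints the document covers, and the names are invented.

```python
def lgg_triple(t1, t2, varmap):
    """Anti-unify two triple patterns, sharing variables via varmap."""
    out = []
    for a, b in zip(t1, t2):
        if a == b:
            out.append(a)                          # terms agree: keep them
        else:
            key = (a, b)
            varmap.setdefault(key, f"?v{len(varmap)}")
            out.append(varmap[key])                # disagree: shared fresh variable
    return tuple(out)

varmap = {}
q1 = (":alice", ":knows", ":bob")
q2 = (":alice", ":knows", ":carol")
print(lgg_triple(q1, q2, varmap))  # (':alice', ':knows', '?v0')
```

Reusing the same `varmap` across all triple pairs of two basic graph patterns is what lets the generalization preserve joins between patterns.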
This document discusses the rise of dynamic programming languages. It provides examples of popular dynamic languages such as JavaScript, Ruby, Python, and Lisp. It outlines key characteristics of dynamic languages: dynamic typing, late binding, interpretation, reflection, and lightweight syntax. The document uses R as a case study to illustrate how dynamic languages can be functional, support powerful data structures and graphics, and be embeddable and extensible through packages. It argues that dynamic languages are widely used and growing in popularity due to being interactive, portable, and failure-oblivious.
Audio Quality Assurance: An Application of Cross Correlation (SCAPE Project)
Jesper Sindahl Nielsen, State and University Library, Denmark, presented algorithms for automated quality assurance on audio files in the context of preservation actions and access. Cross correlation is used to compare the soundwaves.
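The comparison can be sketched with NumPy's cross-correlation: the position of the correlation peak reveals the relative offset between two recordings, and its normalized height indicates how similar they are. This is a minimal illustrative sketch using synthetic noise as a stand-in for audio; the 8 kHz rate and 100-sample delay are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(42)
original = rng.normal(size=8000)                             # "1 s" of audio at 8 kHz
migrated = np.concatenate([np.zeros(100), original])[:8000]  # delayed copy of the same content

# Full cross-correlation; the peak's position reveals the relative offset,
# and its normalized height indicates how similar the two signals are.
corr = np.correlate(migrated, original, mode="full")
lag = int(corr.argmax()) - (len(original) - 1)
peak = corr.max() / (np.linalg.norm(original) * np.linalg.norm(migrated))
print(lag, round(float(peak), 3))
```

A migrated file that truly contains the same content should produce a sharp, near-1 normalized peak; a low, flat correlation signals a QA failure.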
In: iPRES 2012 – Proceedings of the 9th International Conference on Preservation of Digital Objects. Toronto 2012, 144-149.
ISBN 978-0-9917997-0-1
This document discusses 3 technologies: Evri, which provides content discovery options; Ustream, a live video streaming service; and Wordle, which generates visually striking word clouds.
The document discusses the work being done at DLab to model complex dynamical systems using the κ language. Currently, members are modeling muscle contraction, simulating zombie attacks on human populations, and adapting model checking techniques for κ systems. DLab is also working on space-related simulations involving compartmentalization and diffusion events, as well as timing control using polymer-driven rules. An example is provided for compartmentalization in κ using location tags on agents.
Paul Nathan gave a presentation on practical functional programming. He began by introducing himself and explaining why functional programming is an important paradigm to learn. He then defined key concepts in functional programming like first class functions, differences from object oriented programming, avoiding side effects, and advanced type systems. The presentation included examples in Haskell and Common Lisp to demonstrate how functional programming is implemented in different languages.
Eitaro Fukamachi presents CL21, a redesign of Common Lisp for the 21st century. CL21 aims to improve Common Lisp's consistency, expressiveness, compatibility, and efficiency. It focuses on simplifying naming conventions, removing unnecessary symbols, and making the language more suitable for modern use while maintaining 100% compatibility with Common Lisp code and libraries. The project is still in development with discussions ongoing about final syntax and standard library decisions. CL21 hopes to make Lisp a premier language for prototyping by building on Common Lisp's strengths.
LISP: How I Learned To Stop Worrying And Love Parentheses, by Dominic Graefen
The document discusses functional programming and compares it to object-oriented programming. It provides a brief history of functional programming languages like Lisp from the 1950s and newer languages that have become popular more recently like Clojure, F# and Scala. It explains some key aspects of functional programming like higher-order functions, recursion, pure functions and using functions as values. It also discusses why functional programming has become more popular again recently, in part due to multi-core processors and a need for concurrency.
Though yacc-like LR parser generators provide ways to resolve shift-reduce conflicts using token precedences, no mechanism is provided for resolving difficult reduce-reduce conflicts. To resolve such conflicts, the language designer has to modify the grammar.
A programming paradigm is a style or approach to programming. Some common paradigms include imperative, declarative, structured, procedural, functional, object-oriented, logic-based, and constraint-based programming. Paradigms define aspects like control flow, use of variables and data, and how programmers specify programs. Many popular languages support multiple paradigms to varying degrees. Pure languages focus on a single paradigm while multi-paradigm languages facilitate multiple approaches. Paradigms are not exclusive categories as programs may incorporate elements of different styles.
ADAM is an open source platform for scalable genomic analysis that defines a data schema, Scala API, and command line interface. It uses Apache Spark for efficient parallel and distributed processing of large genomic datasets stored in Parquet format. Key features of ADAM include its ability to perform iterative analysis on whole genome datasets while minimizing data movement through Spark. The document also describes using ADAM and PacMin for long read assembly through techniques like minhashing for fast read overlapping and building consensus sequences on read graphs.
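The minhashing step mentioned above can be illustrated with a toy MinHash estimate of read overlap: each read is reduced to its set of k-mers, each hash function keeps only the minimum hash over that set, and the fraction of matching minimums estimates the Jaccard similarity. This is a self-contained sketch (short synthetic reads, SHA-1 as a stand-in hash family), not PacMin's implementation.

```python
import hashlib

def kmers(seq, k=5):
    """Set of all length-k substrings of a read."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def minhash_signature(items, num_hashes=64):
    """One min-hash value per (salted) hash function."""
    return [
        min(int(hashlib.sha1(f"{h}:{it}".encode()).hexdigest(), 16)
            for it in items)
        for h in range(num_hashes)
    ]

def estimated_jaccard(sig_a, sig_b):
    """Fraction of positions where the signatures agree."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

r1 = "ACGTACGTACGTACGTTTTT"
r2 = "ACGTACGTACGTACGTGGGG"
est = estimated_jaccard(minhash_signature(kmers(r1)), minhash_signature(kmers(r2)))
true = len(kmers(r1) & kmers(r2)) / len(kmers(r1) | kmers(r2))
print(round(est, 2), round(true, 2))
```

Because signatures are short and comparable position-by-position, candidate overlapping read pairs can be found without comparing full k-mer sets, which is what makes the approach fast at genome scale.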
This document discusses how Hadoop can be used for bioinformatics applications. It provides examples of how Hadoop has been used to efficiently process large genomic datasets, such as read mapping and genome assembly, in a distributed, parallel manner. Hadoop allows bioinformatics workflows and algorithms to be rethought and scaled to handle the growing size of genomic data. Key applications discussed include read mapping, variant discovery, and de novo assembly.
Computational Techniques for the Statistical Analysis of Big Data in R, by herbps10
The document describes techniques for improving the computational performance of statistical analysis of big data in R. It uses as a case study the rlme package for rank-based regression of nested effects models. The workflow involves identifying bottlenecks, rewriting algorithms, benchmarking versions, and testing. Examples include replacing sorting with a faster C++ selection algorithm for the Wilcoxon Tau estimator, vectorizing a pairwise function, and preallocating memory for a covariance matrix calculation. The document suggests future directions like parallelization using MPI and GPUs to further optimize R for big data applications.
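The vectorization step in that workflow is language-agnostic; a minimal sketch in Python/NumPy (illustrative, not from the document) contrasts a pairwise loop with a broadcasting-based version that computes the same result without an explicit Python-level loop:

```python
import itertools
import numpy as np

def pairwise_diffs_loop(x):
    """Naive O(n^2) loop: all pairwise differences x[i] - x[j] for i < j."""
    out = []
    for i, j in itertools.combinations(range(len(x)), 2):
        out.append(x[i] - x[j])
    return np.array(out)

def pairwise_diffs_vectorized(x):
    """Same result via broadcasting: build the difference matrix, keep i < j."""
    diff = x[:, None] - x[None, :]          # diff[i, j] = x[i] - x[j]
    iu, ju = np.triu_indices(len(x), k=1)   # index pairs with i < j
    return diff[iu, ju]

x = np.arange(5.0)
assert np.allclose(pairwise_diffs_loop(x), pairwise_diffs_vectorized(x))
```

The vectorized form trades memory (the full n-by-n matrix) for speed, the same trade-off the document's C++ rewrite of the Wilcoxon Tau estimator makes in a different direction.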
This document discusses using Hadoop for bioinformatics applications. It describes how Hadoop can be used to efficiently process and analyze the large datasets involved in genome sequencing and analysis. Examples are given of applications that map DNA reads, call SNPs, and compare genomes across species. The document argues that Hadoop provides a scalable framework for parallelizing bioinformatics workflows and enables new types of analysis that were previously computationally infeasible due to the volumes of data.
End-to-end speech recognition in Neon presented by Anthony Ndirango and Tyler Lee
Modern automatic speech recognition systems incorporate a tremendous amount of expert knowledge and a wide array of machine learning techniques. The promise of deep learning is to strip away much of this complexity in favor of the flexibility of neural networks. We will describe our efforts in implementing end-to-end speech recognition in Neon by combining convolutional and recurrent neural networks to create an acoustic model, followed by a graph-based decoding scheme. These models are trained to go directly from raw waveforms to transcribed speech without requiring any kind of explicit forced alignment. We will also discuss additional challenges that must be overcome to produce state-of-the-art results.
The document summarizes the evolution of the Scala programming language from its origins to its present state and future directions. It discusses Scala's combination of object-oriented and functional programming, its type system, tooling improvements, and the emergent ecosystem around Scala. It also outlines plans to develop a Scala-specific platform called TASTY and explore new language concepts like effect systems to model side effects through capabilities.
1. pre-kappa expander for κ language
Héctor Urbina (hurbina)
July 18, 2011

2. Outline
1 Introduction
  What is κ
  κ syntax
2 Kappa at DLab
  DLab's current work
  Space-related simulations
  Timing control
  pre-Kappa expander
3. What is κ
κ is a formal language for defining agents as sets of sites.
Sites hold an internal state as well as a binding state.
κ also enables the expression of rules of interaction between agents.
These rules are executable, inducing a stochastic dynamics on a mixture of agents.
A κ model is a collection of rules (with rate constants) and an initial mixture of agents on which such rules begin to act.
Krivine et al. Programs as models: Kappa language basics. Unpublished work.
8. κ Syntax: short introduction
Rule in English:
"Unphosphorylated Site1 of A binds to Site1 of B."
κ Rule:
A(Site1~u),B(Site1) → A(Site1~u!1),B(Site1!1)
Agent Names: an identifier.
Agent Sites: an identifier.
Internal States: ~〈value〉.
Binding States: !〈n〉, !_ or !?.
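As a rough illustration of this syntax, the agents and sites of a rule's side can be pulled apart with a couple of regular expressions. This is an ad-hoc sketch for the subset shown above (invented helper names, not the official KaSim parser):

```python
import re

# Each agent is Name(site, site, ...); each site is name, optionally
# followed by an internal state (~state) and a binding state (!n, !_, ...).
AGENT_RE = re.compile(r"(\w+)\(([^)]*)\)")
SITE_RE = re.compile(r"(\w+)(~\w+)?(!\S+)?")

def parse_side(expr):
    """Parse one side of a rule into (agent, [(site, internal, binding)])."""
    agents = []
    for name, sites in AGENT_RE.findall(expr):
        parsed = []
        for s in filter(None, (t.strip() for t in sites.split(","))):
            m = SITE_RE.fullmatch(s)
            parsed.append((m.group(1), m.group(2), m.group(3)))
        agents.append((name, parsed))
    return agents

lhs = parse_side("A(Site1~u),B(Site1)")
print(lhs)  # [('A', [('Site1', '~u', None)]), ('B', [('Site1', None, None)])]
```

Matching a rule's left-hand side against a mixture then amounts to finding agents whose sites satisfy these internal- and binding-state constraints.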
11. Kappa file structure
#### Signatures
%agent: A(x,c) # Declaration of agent A
%agent: B(x) # Declaration of B
%agent: C(x1~u~p,x2~u~p) # Declaration of C with 2 modifiable sites
#### Rules
'a.b' A(x),B(x) -> A(x!1),B(x!1) @ 'on rate' #A binds B
'a..b' A(x!1),B(x!1) -> A(x),B(x) @ 'off rate' #AB dissociation
'ab.c' A(x!_,c),C(x1~u) -> A(x!_,c!2),C(x1~u!2) @ 'on rate' #AB binds C
'mod x1' C(x1~u!1),A(c!1) -> C(x1~p),A(c) @ 'mod rate' #AB modifies x1
'a.c' A(x,c),C(x1~p,x2~u) -> A(x,c!1),C(x1~p,x2~u!1) @ 'on rate' #A binds C on x2
'mod x2' A(x,c!1),C(x1~p,x2~u!1) -> A(x,c),C(x1~p,x2~p) @ 'mod rate' #A modifies x2
#### Variables
%var: 'on rate' 1.0E-4 # per molecule per second
%var: 'off rate' 0.1 # per second
%var: 'mod rate' 1 # per second
%obs: 'AB' A(x!x.B)
%obs: 'Cuu' C(x1~u,x2~u)
%obs: 'Cpu' C(x1~p,x2~u)
%obs: 'Cpp' C(x1~p,x2~p)
#### Initial conditions
%init: 1000 A,B
%init: 10000 C
KaSim reference manual v1.06
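The stochastic semantics behind such a file can be sketched with a tiny Gillespie-style simulation of just the binding/unbinding pair 'a.b'/'a..b', using the rate constants above. This is an illustrative population-level sketch, not KaSim's graph-rewriting machinery:

```python
import random

# Population-level Gillespie simulation of A + B <-> AB.
on_rate, off_rate = 1.0e-4, 0.1      # 'on rate' and 'off rate' from the file
A, B, AB = 1000, 1000, 0             # initial mixture: %init: 1000 A,B
t, t_end = 0.0, 100.0
rng = random.Random(0)

while t < t_end:
    a_bind = on_rate * A * B          # propensity of 'a.b'
    a_unbind = off_rate * AB          # propensity of 'a..b'
    total = a_bind + a_unbind
    if total == 0:
        break
    t += rng.expovariate(total)       # exponential waiting time to next event
    if rng.random() < a_bind / total: # pick which rule fires, weighted by propensity
        A, B, AB = A - 1, B - 1, AB + 1
    else:
        A, B, AB = A + 1, B + 1, AB - 1

print(A, B, AB)                       # near equilibrium: on_rate*A*B ≈ off_rate*AB
```

With these constants, the simulated mixture settles around the detailed-balance point where binding and unbinding propensities match, which is exactly the behavior the %obs declarations would let you plot over time.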
12. DLab's current work
DLab members study complex dynamical systems.
Currently, Cesar Ravello is modeling muscle contraction and Felipe Núñez is simulating massive responses to zombie attacks on human populations, whereas Ricardo Honorato is adapting Model Checking techniques to be used with systems expressed in the κ language.
Without intervening in the κ language, we have reached some interesting levels of abstraction!
15. DLab's current work
Space-related simulations:
Compartmentalization.
Diffusion events.
Timing control:
Polymer-driven rules to manipulate latency.
18. Compartmentalization
#Signatures
%agent: A(x,c,loc~i~j~k)
%agent: B(x,loc~i~j~k)
#Rules
#A binds B
A(x,loc~i),B(x,loc~i) → A(x!1,loc~i),B(x!1,loc~i) @ 'on rate'
A(x,loc~j),B(x,loc~j) → A(x!1,loc~j),B(x!1,loc~j) @ 'on rate'
A(x,loc~k),B(x,loc~k) → A(x!1,loc~k),B(x!1,loc~k) @ 'on rate'
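Writing one copy of each rule per location by hand is exactly the kind of boilerplate a pre-processor can generate; a minimal sketch of such an expander (illustrative of the idea, not the actual pre-κ expander, and the %LOC% placeholder is an invented convention):

```python
def expand_locations(template, locations):
    """Emit one concrete rule per location by substituting the placeholder."""
    return [template.replace("%LOC%", loc) for loc in locations]

template = ("A(x,loc~%LOC%),B(x,loc~%LOC%) -> "
            "A(x!1,loc~%LOC%),B(x!1,loc~%LOC%) @ 'on rate'")

for rule in expand_locations(template, ["i", "j", "k"]):
    print(rule)
```

The same substitution scheme extends to per-location rate names (as on the next slide, where each compartment gets its own 'on rate loc(...)').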
20. Compartmentalization
#Locations i, j and k have different volume/area!
#Signatures
%agent: A(x,c,loc~i~j~k)
%agent: B(x,loc~i~j~k)
#Rules
#A binds B
A(x,loc~i),B(x,loc~i) → A(x!1,loc~i),B(x!1,loc~i) @ 'on rate loc(i)'
A(x,loc~j),B(x,loc~j) → A(x!1,loc~j),B(x!1,loc~j) @ 'on rate loc(j)'
A(x,loc~k),B(x,loc~k) → A(x!1,loc~k),B(x!1,loc~k) @ 'on rate loc(k)'
#AB dissociation
A(x!1,loc~i),B(x!1,loc~i) → A(x,loc~i),B(x,loc~i) @ 'off rate'
A(x!1,loc~j),B(x!1,loc~j) → A(x,loc~j),B(x,loc~j) @ 'off rate'
A(x!1,loc~k),B(x!1,loc~k) → A(x,loc~k),B(x,loc~k) @ 'off rate'
23. Diffusion events

#Signatures
%agent: A(x,c,loc~i~j~k)
%agent: B(x,loc~i~j~k)
%agent: T(s,org~i~j~k,dst~i~j~k)
25. polymer-driven rules

#Signatures
%agent: S(x)
%agent: Z()
%agent: V(p,n)

#Rules
'Infection' Z(),S(x) → Z(),S(x!1),V(p!1,n) @ 'infection rate'
'Polymerization' V(n) → V(n!1),V(p!1,n) @ 'polymer rate'
'Expression' S(x!1),V(p!1,n!2),V(p!2,n!3),V(p!3,n!4),
             V(p!4,n!5),V(p!5,n!6),V(p!6,n!7),V(p!7,n!8),V(p!8,n!9),
             V(p!9,n!10),V(p!10,n) → Z() @ [inf]
29. pre-Kappa expander

A Python 2 script that takes as input a (built in-house) pre-κ file and outputs a Kappa file which can subsequently be used with KaSim.

This is done using lexer & parser techniques, available in Python through the ply library.

It facilitates κ abstraction while reducing error-proneness.
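To make the expander's front end concrete, here is a minimal sketch of how such a script might recognize pre-κ directives and pass everything else through untouched. The real in-house tool uses ply's lexer/parser; this simplified stand-in uses a plain regular expression, and the directive names are taken from the examples on the following slides.

```python
import re

# Hypothetical, simplified front end for a pre-Kappa expander.
# Lines starting with a pre-kappa directive are classified for expansion;
# ordinary Kappa lines are copied through unchanged.
DIRECTIVE = re.compile(
    r'^%(?P<name>loc|locl|expand-agent|expand-init|expand-rule):\s*(?P<body>.*)$'
)

def classify(line):
    """Return (directive_name, body) for a pre-Kappa directive,
    or ('kappa', line) for plain Kappa text to pass through."""
    m = DIRECTIVE.match(line.strip())
    if m:
        return m.group('name'), m.group('body')
    return 'kappa', line

print(classify('%loc: i 100'))       # a pre-kappa directive
print(classify('%agent: A(x,c)'))    # plain Kappa, passed through
```

A full implementation would dispatch on the directive name and rewrite the file in a second pass, which is where a proper parser (as with ply) pays off.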
32. pre-Kappa syntax: Locations

#Locations
%loc: i 100
%loc: j 1000
%loc: k 500

#Location list
%locl: all i j k

#Signatures
%expand-agent: all A(x,c)
%expand-agent: all B(x)

gives:
%agent: A(x,c,loc~i~j~k)
%agent: B(x,loc~i~j~k)
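The %expand-agent rewrite above is mechanical: append one loc site, valued over the location list, to each signature. A sketch (function names are illustrative, not from the actual script):

```python
import re

def expand_agent(agent, locations):
    """Rewrite 'A(x,c)' with locations ['i','j','k']
    into '%agent: A(x,c,loc~i~j~k)'."""
    name, sites = re.match(r'(\w+)\((.*)\)', agent).groups()
    loc_site = 'loc' + ''.join('~' + loc for loc in locations)
    # filter(None, ...) drops the empty string for site-less agents like Z()
    all_sites = ','.join(filter(None, [sites, loc_site]))
    return '%agent: {}({})'.format(name, all_sites)

locs = ['i', 'j', 'k']
print(expand_agent('A(x,c)', locs))  # %agent: A(x,c,loc~i~j~k)
print(expand_agent('B(x)', locs))    # %agent: B(x,loc~i~j~k)
```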
36. pre-Kappa syntax: Locations

#Locations
%loc: i 100
%loc: j 1000
%loc: k 500

#Location list
%locl: all i j k

#Initializations (expand if densities are equal)
%expand-init: all ADensity A(x,c)
%expand-init: all BDensity B(x)

gives:
%init: ADensity * 100 A(x,c,loc~i)
%init: ADensity * 1000 A(x,c,loc~j)
%init: ADensity * 500 A(x,c,loc~k)
%init: BDensity * 100 B(x,loc~i)
%init: BDensity * 1000 B(x,loc~j)
%init: BDensity * 500 B(x,loc~k)
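The %expand-init rewrite scales a shared density by each location's volume to get per-location counts. A sketch of that step, with illustrative names:

```python
def expand_init(density_var, agent_name, sites, loc_volumes):
    """Emit one %init line per location: count = density * volume,
    with the agent tagged by its loc site."""
    lines = []
    for loc, vol in loc_volumes.items():
        site_list = ','.join(filter(None, [sites, 'loc~' + loc]))
        lines.append('%init: {} * {} {}({})'.format(
            density_var, vol, agent_name, site_list))
    return lines

volumes = {'i': 100, 'j': 1000, 'k': 500}
for line in expand_init('ADensity', 'A', 'x,c', volumes):
    print(line)
# %init: ADensity * 100 A(x,c,loc~i)
# %init: ADensity * 1000 A(x,c,loc~j)
# %init: ADensity * 500 A(x,c,loc~k)
```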
38. pre-Kappa syntax: Locations

A bimolecular stochastic rate constant γ, expressed in s⁻¹ molecule⁻¹, is related to its deterministic counterpart k, expressed in s⁻¹ M⁻¹, as

    γ = k / (A·V),    (1)

where A is Avogadro's number and V is the reaction volume.

Krivine et al. Programs as models: Execution. Unpublished work.
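A quick worked instance of equation (1); the numbers are illustrative (roughly an E. coli-sized volume), not taken from the slides:

```python
# Convert a deterministic bimolecular rate constant k (s^-1 M^-1)
# to its stochastic counterpart gamma (s^-1 molecule^-1) via
# equation (1): gamma = k / (A * V).
N_AVOGADRO = 6.022e23   # A: molecules per mole
k = 1.0e6               # deterministic rate constant, s^-1 M^-1
V = 1.0e-15             # reaction volume in litres (illustrative)

gamma = k / (N_AVOGADRO * V)
print('gamma = %.3e s^-1 molecule^-1' % gamma)
# -> gamma = 1.661e-03 s^-1 molecule^-1
```

This is exactly the correction the expander applies when it divides a base rate by a location's volume, as shown on the previous rule-expansion slides.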
39. pre-Kappa syntax: Locations

#Locations
%loc: i 100
%loc: j 1000
%loc: k 500

#Location list
%locl: all i j k

#A binds B
%expand-rule: all A(x),B(x) → A(x!1),B(x!1) @ 'on base rate'

gives:
A(x,loc~i),B(x,loc~i) → A(x!1,loc~i),B(x!1,loc~i) @ 'on base rate' / 100
A(x,loc~j),B(x,loc~j) → A(x!1,loc~j),B(x!1,loc~j) @ 'on base rate' / 1000
A(x,loc~k),B(x,loc~k) → A(x!1,loc~k),B(x!1,loc~k) @ 'on base rate' / 500
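The %expand-rule step above combines the two previous ideas: replicate the rule per location, tag every agent with its loc site, and divide the bimolecular rate by the location's volume. A simplified sketch (it assumes every agent in the rule lists at least one site, and uses '->' in place of the arrow):

```python
import re

def expand_rule(rule, rate_expr, loc_volumes):
    """Replicate a Kappa rule once per location, appending loc~<l> to
    every agent and scaling the rate by 1/volume (cf. equation (1))."""
    lines = []
    for loc, vol in loc_volumes.items():
        # append loc~<loc> as the last site of every agent
        localized = re.sub(r'\)', ',loc~' + loc + ')', rule)
        lines.append('{} @ {} / {}'.format(localized, rate_expr, vol))
    return lines

volumes = {'i': 100, 'j': 1000, 'k': 500}
rules = expand_rule("A(x),B(x) -> A(x!1),B(x!1)", "'on base rate'", volumes)
for line in rules:
    print(line)
# A(x,loc~i),B(x,loc~i) -> A(x!1,loc~i),B(x!1,loc~i) @ 'on base rate' / 100
# A(x,loc~j),B(x,loc~j) -> A(x!1,loc~j),B(x!1,loc~j) @ 'on base rate' / 1000
# A(x,loc~k),B(x,loc~k) -> A(x!1,loc~k),B(x!1,loc~k) @ 'on base rate' / 500
```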