This document discusses using effective field theory (EFT) to search for new physics beyond the Standard Model. It introduces the kappa framework, which parametrizes deviations from the Standard Model in a non-gauge invariant way. EFT provides an alternative approach that is compatible with quantum field theory. The document outlines how to build a basis of higher-dimensional operators in the Standard Model EFT and perform calculations to NLO while renormalizing ultraviolet divergences. Open questions remain about the validity range, possible high-energy completions, and combining bottom-up and top-down EFT approaches.
1. NLO Higgs Effective Field Theory and κ Framework
arXiv:1505.03706: M. Ghezzi, R. Gómez-Ambrosio, G. Passarino, S. Uccirati
Raquel Gómez-Ambrosio
HiggsTools @ Università & INFN Torino & CMS @ CERN
Planck Conference 2015, Ioannina, GR
May 27, 2015
2. higgstools
Outline
Introduction:
The Search of BSM physics
The kappa framework
Effective Field theory
What is Effective Field Theory
Why choose Effective Field Theory
Hands on EFT:
HowTo
SM EFT
Summary & Open Questions
3.
Introduction:
The Search of BSM physics
Introduction: Why are we here?
The Standard Model Today:
Higgs-like particle with $J^{PC} = 0^{++}$,
$M_H = 125.09 \pm 0.24$ GeV, found in 2012.
No new physics found yet, despite open problems:
Neutrino masses
Dark Matter
Graviton
4.
CMS results: “Constraints on the Higgs boson width . . . ” (1405.3455)
$M_H$ can be extracted from the peak; for $\Gamma_H$ one has to look at the off-shell region.
5.
Introduction:
The kappa framework
Search for BSM physics: The kappa framework
First proposed by the LHC-HXSWG in 1209.0040
Idea: Introduce ad-hoc deviations for some SM observables
(Higgs’ σ’s and Γ’s)
Provide a series of benchmark parametrizations in order to
test deviations against experimental data
6.
The simplest example:
a diphoton final state produced via gluon-gluon fusion,
$$(\sigma \cdot \mathrm{BR})(gg \to H \to \gamma\gamma) = (\sigma_{ggH})_{\mathrm{SM}} \cdot (\mathrm{BR}_{H\to\gamma\gamma})_{\mathrm{SM}} \cdot \frac{\kappa_g^2\,\kappa_\gamma^2}{\kappa_H^2}$$
where
$$\kappa_g^2 = \frac{\sigma_{ggH}}{(\sigma_{ggH})_{\mathrm{SM}}}, \qquad \kappa_\gamma^2 = \frac{\Gamma_{\gamma\gamma}}{(\Gamma_{\gamma\gamma})_{\mathrm{SM}}}, \qquad \kappa_H^2 = \frac{\Gamma_H}{(\Gamma_H)_{\mathrm{SM}}}$$
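The rescaling above is straightforward to evaluate numerically. The sketch below (with purely hypothetical κ values, not fit results) computes the signal strength implied by a given set of κ's:

```python
# Signal strength in the kappa framework for gg -> H -> gammagamma:
# mu = (sigma.BR) / (sigma.BR)_SM = kappa_g^2 * kappa_gamma^2 / kappa_H^2
# The kappa values used below are hypothetical, for illustration only.

def signal_strength(kappa_g, kappa_gamma, kappa_H):
    """Ratio of (sigma.BR) to its SM prediction."""
    return kappa_g**2 * kappa_gamma**2 / kappa_H**2

# SM limit: all kappas -> 1, so mu -> 1
print(signal_strength(1.0, 1.0, 1.0))  # 1.0

# A hypothetical 10% shift of the effective Hgg coupling:
mu = signal_strength(1.1, 1.0, 1.0)
print(f"{mu:.3f}")  # 1.210, i.e. a 21% excess in the rate
```

This is why the benchmark κ's are quoted squared: rates, not couplings, are what is measured.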
7.
Disadvantages . . .
Ad-hoc deviations are not compatible with QFT
(they break gauge invariance and unitarity)
The κ’s don’t have a direct physical interpretation
With the available amount of data and theoretical
predictions, no deviation has been found
Need to go to higher orders in perturbation theory
(NLO for signal AND background processes)
Need higher experimental accuracy
Or, maybe, need another approach . . .
11.
Effective Field theory
What is Effective Field Theory
Alternative strategy: Effective field theory
Definition:
An effective field theory (EFT) is a field theory designed to reproduce the
behaviour of some underlying physical theory in some limited regime. It focuses
on the degrees of freedom relevant to that regime, simplifying the problem but
leaving aside some physics.
12.
Effective Field theory
Why choose Effective Field Theory
Why choose EFT?
Historically legitimate: large-scale physics, as we know it, is made of EFTs: fluid
dynamics, solid-state and condensed-matter physics.
Newton's theory of gravity is an effective low-energy theory of general relativity,
which is itself presumably a low-energy effective theory of a quantum theory of gravity.
13.
Hands on EFT:
HowTo
Things the apprentice has to know: top-down vs. bottom-up approach
In the Top-down approach: (model dependent)
Start from a complete high energy theory.
Integrate out heavy fields: $e^{i S_{\mathrm{eff}}[\phi](\mu)} = \int \mathcal{D}\Phi\; e^{i S_{\mathrm{UV}}[\phi,\Phi](\mu)}$
Use RG-flow to study the resulting theory in its low-energy regime
In the Bottom-up approach: (model independent)
Start from a low-energy known theory (the SM).
Add operators consistent with the symmetries
(recall Wilson: only dim > 4 makes sense)
Calculate (pseudo)-observables and compare with experiments
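As a minimal top-down illustration (a textbook toy model, not taken from this talk): integrating out a heavy scalar $S$ of mass $M$ with a Yukawa-type coupling $y$ to a light fermion $\psi$ generates, already at tree level, a dimension-6 four-fermion operator, exactly as in Fermi's theory of weak interactions:

```latex
\mathcal{L}_{\mathrm{UV}} \supset \frac{1}{2}(\partial_\mu S)^2 - \frac{1}{2}M^2 S^2
   - y\, S\, \bar\psi\psi
\;\;\xrightarrow{\;\text{EOM: } S \simeq -\,y\,\bar\psi\psi/M^2\;}\;\;
\mathcal{L}_{\mathrm{eff}} \supset \frac{y^2}{2M^2}\,(\bar\psi\psi)^2
   + \mathcal{O}\!\left(\frac{1}{M^4}\right)
```

Matching this onto a bottom-up expansion identifies $a_i/\Lambda^2 = y^2/(2M^2)$, so $\Lambda \sim M$ for $y \sim 1$.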
14.
SM EFT (bottom-up approach)
$$\mathcal{L}_{\mathrm{eff}} = \underbrace{\mathcal{L}_{\mathrm{SM}}}_{\text{dim 4}} + \underbrace{\sum_i \frac{a_i\,\mathcal{O}_i}{\Lambda^2}}_{\text{dim 6}} + \underbrace{\dots}_{\text{higher-dim. operators}}$$
The $a_i$ can be Wilson coefficients or the $\kappa$'s introduced previously.
For current experimental thresholds, dim 6 operators are enough.
Using equations of motion and gauge symmetries, one can build a 59-operator
basis (for one generation of particles; for three generations → 2499 operators).
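To get a feel for the $1/\Lambda^2$ suppression, the sketch below evaluates the naive relative size $a_i v^2/\Lambda^2$ of a dim-6 effect (assuming $a_i \sim 1$ and $v = 246$ GeV; conventional choices for illustration, not numbers from this talk):

```python
# Typical relative size of a dim-6 SMEFT effect: delta ~ a_i * v^2 / Lambda^2
V = 246.0  # Higgs vacuum expectation value in GeV

def dim6_deviation(lambda_tev, a_i=1.0):
    """Naive size of a dim-6 deviation for a new-physics scale given in TeV."""
    lam_gev = lambda_tev * 1000.0
    return a_i * V**2 / lam_gev**2

for lam in (1.0, 3.0, 5.0, 7.0):
    print(f"Lambda = {lam:.0f} TeV -> delta ~ {dim6_deviation(lam):.2%}")
```

Deviations are at the few-percent level or below for multi-TeV Λ, which is why higher-order (NLO) precision is needed on the SM side.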
15.
Hands on EFT:
SM EFT
Some Assumptions
There is one Higgs doublet with a linear representation
The EFT does not add new light degrees of freedom
The heavy degrees of freedom of the EFT decouple
The heavy degrees of freedom do not mix with the Higgs doublet
The UV completion is weakly coupled and renormalizable
Also: Restrict to dim 6 and NLO
As a consequence: 5 TeV < Λ < 7 TeV
16.
Hands on: The Strategy to follow
1. Start from the SM Lagrangian
2. Add all possible dim 6 operators ("a basis")
3. Redefine fields and parameters to recover the wanted expression:
$\mathcal{L} = \mathcal{L}_{\mathrm{SM}} + \mathcal{L}_{\mathrm{dim}\,6}$
4. Write down Feynman rules and renormalize this $\mathcal{L}$
5. Do your calculations!
17.
Redefinition and Renormalization
Include wave-function factors and counterterms:
$$\Phi = Z_\Phi\,\Phi_{\mathrm{ren}}, \qquad M_i = Z_i\,M_{i,\mathrm{ren}}, \qquad Z_i = 1 + \frac{g^2}{16\pi^2}\underbrace{\left( dZ_i^{(4)} + g_6\, dZ_i^{(6)} \right)}_{\text{counterterms}}$$
Dyson-resum the propagators. For example, the Higgs self-energy:
$$S_{HH} = \frac{g^2}{16\pi^2}\,\Sigma_{HH} = \frac{g^2}{16\pi^2}\left( \Sigma_{HH}^{(4)} + \Sigma_{HH}^{(6)} \right)$$
Add counterterms; remove the UV divergences.
Use Ward-Slavnov-Taylor identities to check consistency.
N.B.: the counterterms remove the O(4) UV divergences, not the O(6) ones:
the Wilson coefficients mix!
Finite renormalization instead of RG flow: connect with pseudo-observables.
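The statement that the Wilson coefficients mix can be pictured schematically: the counterterm needed for one coefficient is a linear combination of all of them. The 2×2 mixing matrix below is invented purely for illustration; real SMEFT mixing matrices are much larger and come out of the one-loop calculation.

```python
# Schematic Wilson-coefficient mixing: the counterterm for a_i is
# proportional to sum_j gamma_ij * a_j, so coefficients renormalize
# into each other. This 2x2 gamma is a made-up toy, NOT a real SMEFT matrix.
GAMMA = [[2.0, 0.5],
         [0.0, 1.0]]

def dim6_counterterms(a, loop_factor):
    """Counterterm for each coefficient: da_i = loop_factor * sum_j gamma_ij a_j."""
    return [loop_factor * sum(g * aj for g, aj in zip(row, a)) for row in GAMMA]

# Switch on only the second coefficient at tree level:
a = [0.0, 1.0]
da = dim6_counterterms(a, 0.01)
print(da)  # da[0] is non-zero: a_1 needs a counterterm even though a_1 = 0
```

A diagonal γ would let each coefficient renormalize independently; off-diagonal entries are exactly the "mixing" referred to above.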
18.
The message
We want to do precision physics: we are looking for tiny deviations from the SM,
and the energy range of our theory is relatively narrow.
Therefore we cannot use the renormalization group equations.
We want to isolate the O(6) contributions to the amplitudes from the O(4) ones.
If you manage to do this, it should be easy to measure SM deviations in Higgs
production and decays (through the couplings).
19.
Step 5. Calculations: Higgs decays and all that
Example: H → γγ
The amplitude for the process is:
$$\mathcal{A}^{\mu\nu}_{HAA} = T_{HAA}\,\frac{p_2^{\mu}\,p_1^{\nu} - p_1\cdot p_2\,\delta^{\mu\nu}}{M_H^2}$$
where we find $T$ to be
$$T_{HAA} = i\,\frac{g^3}{16\pi^2}\Big( T^{(4)}_{HAA} + g_6\,\underbrace{T^{(6),b}_{HAA}}_{\text{UV divergent}} \Big) + i\,g\,g_6\,\underbrace{T^{(6),a}_{HAA}}_{\text{UV finite}} \qquad (1)$$
We need to renormalize $T^{(6),b}_{HAA}$ → mixing of the Wilson coefficients ($\equiv \kappa$'s)
Find a final expression for the amplitudes in terms of $\kappa$'s and subamplitudes!
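Numerically, isolating the O(6) contribution amounts to linearizing the squared amplitude in $g_6$. The sketch below uses invented complex sub-amplitudes (toy numbers, not the real $H \to \gamma\gamma$ sub-amplitudes) to show that the full rate and its linearized form agree up to $\mathcal{O}(g_6^2)$:

```python
# Linearizing an EFT amplitude T = T4 + g6*T6 in the dim-6 coupling g6:
#   |T|^2 = |T4|^2 * (1 + 2*g6*Re(T6/T4)) + O(g6^2)
# T4, T6 and g6 below are invented toy numbers for illustration only.
T4 = 1.0 - 0.2j   # "SM" sub-amplitude (toy)
T6 = 0.3 + 0.1j   # "dim-6" sub-amplitude (toy)
g6 = 0.01         # 1/Lambda^2 suppression, in suitable units

rate_full = abs(T4 + g6 * T6) ** 2
rate_linear = abs(T4) ** 2 * (1.0 + 2.0 * g6 * (T6 / T4).real)
print(rate_full, rate_linear)  # the two agree to O(g6^2)
```

The interference term $2\,g_6\,\mathrm{Re}(T^{(6)}/T^{(4)})$ is the quantity one would fit to LHC data.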
21.
Summary & Open Questions
We presented an effective field theory approach to BSM physics.
EFT is a very good choice regarding model independence.
It also goes beyond LO:
starting from the kappa framework, we propose an NLO extension of it.
We can identify the deviations inside the amplitudes
and therefore compare with LHC data.
Open Questions
What is the range of validity of the effective theory?
What kind of theory do we find at (even) higher energies?
How do we combine the bottom-up and top-down approaches to EFT?
Do SM deviations have a SM basis?
What happens to PDFs & theoretical uncertainties?