Exploring temporal graph data with Python: a study on tensor decomposition o...
André Panisson
Tensor decompositions have gained steadily increasing popularity in data mining applications. Data sources from sensor networks and Internet-of-Things applications promise a wealth of interaction data that can be naturally represented as multidimensional structures such as tensors. For example, time-varying social networks collected from wearable proximity sensors can be represented as 3-way tensors. By representing these data as tensors, we can use tensor decomposition to extract community structures with their structural and temporal signatures.
The current standard framework for working with tensors, however, is MATLAB. We will show how tensor decompositions can be carried out using Python, how to obtain latent components and how they can be interpreted, and what some applications of this technique are in academia and industry. We will see a use case where a Python implementation of tensor decomposition is applied to a dataset that describes social interactions of people, collected using the SocioPatterns platform. This platform was deployed in different settings such as conferences, schools and hospitals, in order to support mathematical modelling and simulation of airborne infectious diseases. Tensor decomposition has been used in these scenarios to solve different types of problems: it can be used for data cleaning, where time-varying graph anomalies can be identified and removed from the data; it can also be used to assess the impact of latent components on the spreading of a disease, and to devise intervention strategies that can reduce the number of infection cases in a school or hospital. These are just a few examples that show the potential of this technique in data mining and machine learning applications.
1. TENSOR DECOMPOSITION WITH PYTHON
LEARNING STRUCTURES FROM MULTIDIMENSIONAL DATA
ANDRÉ PANISSON
@apanisson
ISI Foundation, Torino & New York City
2. WHAT IS DATA DECOMPOSITION?
DECOMPOSITION == FACTORIZATION
Representing a dataset as a sum of (interpretable) parts
▸ Represent data as the combination of many components / factors
▸ Dimensionality reduction: each new dimension
represents a latent variable:
▸ text corpus => topics
▸ shopping behaviour => segments (user segmentation)
▸ social network => groups, communities
▸ psychology surveys => personality traits
▸ electronic medical records => health conditions
▸ chemical solutions => chemical ingredients
4. DATA DECOMPOSITION
▸ Decomposition of data represented in two dimensions:
MATRIX FACTORIZATION
▸ text: documents X terms
▸ surveys: subjects X questions
▸ electronic medical records: patients X diagnosis/drugs
▸ Decomposition of data represented in more dimensions:
TENSOR FACTORIZATION
▸ social networks: user X user (adjacency matrix) X time
▸ text: authors X terms X time
▸ spectroscopy:
solution sample X wavelength (emission) X wavelength (excitation)
5. WHY TENSOR FACTORIZATION + PYTHON?
▸ Matrix Factorization is already used in many fields
▸ Tensor Factorization is becoming very popular
for multiway data analysis
▸ TF is very useful to explore time-varying network data
▸ But still, the most used tool is Matlab
▸ There’s room for improvement in
the Python libraries for TF
7. FACTOR ANALYSIS
Spearman ~1900
X ≈ WH
X_(tests × subjects) ≈ W_(tests × intelligences) H_(intelligences × subjects)
Spearman, 1927: The abilities of man.
[Figure: the tests × subjects matrix X factored into W (tests × intelligences) times H (intelligences × subjects)]
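To make the X ≈ WH factor model concrete, here is a minimal NumPy sketch using a truncated SVD on synthetic data; the matrix sizes and the two-factor structure are invented for the example, and the SVD recovers the product WH (the factors themselves are only identified up to rotation):

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic "tests x subjects" matrix generated by two latent factors
W_true = rng.random((10, 2))
H_true = rng.random((2, 50))
X = W_true @ H_true

# rank-2 truncated SVD recovers the low-rank product W H exactly,
# since X has rank 2 by construction
U, s, Vt = np.linalg.svd(X, full_matrices=False)
X2 = U[:, :2] * s[:2] @ Vt[:2]
assert np.allclose(X, X2)
```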
8. TOPIC MODELING / LATENT SEMANTIC ANALYSIS
Blei, David M. "Probabilistic topic models." Communications of the ACM 55.4 (2012): 77-84.
[Figure from Blei 2012: topics as word distributions (e.g. gene/dna/genetic, life/evolve/organism, brain/neuron/nerve, data/number/computer, each word with a probability such as 0.04, 0.02, 0.01), a set of documents, and per-document topic proportions and assignments]
9. TOPIC MODELING / LATENT SEMANTIC ANALYSIS
X ≈ WH
Non-negative Matrix Factorization (NMF):
(~1970 Lawson, ~1995 Paatero, ~2000 Lee & Seung)
2005 Gaussier et al. "Relation between PLSA and NMF and implications."
arg min_{W,H} ‖X − WH‖  s.t.  W, H ≥ 0
[Figure: the sparse documents × terms matrix X factored into W (documents × topics) times H (topics × terms)]
10. NON-NEGATIVE MATRIX FACTORIZATION (NMF)
NMF gives a parts-based representation
(Lee & Seung - Nature 1999)
[Figure: NMF factors of face images are local parts, while PCA components are holistic "eigenfaces"]
NMF is similar to Spectral Clustering
(Ding et al. - SDM 2005)
arg min_{W,H} ‖X − WH‖  s.t.  W, H ≥ 0
Multiplicative updates:
W ← W ∘ (XHᵀ) / (WHHᵀ)
H ← H ∘ (WᵀX) / (WᵀWH)
NMF brings interpretation!
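The multiplicative updates above can be sketched in a few lines of NumPy. This is an illustrative toy implementation on a synthetic rank-2 matrix, not the scikit-learn solver; the epsilon term, the iteration count and the test matrix are arbitrary choices:

```python
import numpy as np

def nmf_multiplicative(X, r, n_iter=1000, eps=1e-9, seed=0):
    """Toy NMF via Lee & Seung multiplicative updates (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    W = rng.random((X.shape[0], r))
    H = rng.random((r, X.shape[1]))
    for _ in range(n_iter):
        # W <- W * (X H^T) / (W H H^T); H <- H * (W^T X) / (W^T W H)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
        H *= (W.T @ X) / (W.T @ W @ H + eps)
    return W, H

# an exactly rank-2 non-negative matrix
X = np.outer([1., 2., 3.], [4., 5., 6.]) + np.outer([2., 1., 0.], [1., 0., 2.])
W, H = nmf_multiplicative(X, r=2)
```

The updates keep W and H non-negative by construction (they only multiply by non-negative ratios), which is where the interpretability comes from.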
11. from sklearn import datasets, decomposition, utils
import matplotlib.pyplot as plt

# fetch_mldata was removed from scikit-learn; fetch_openml replaces it
digits = datasets.fetch_openml('mnist_784', as_frame=False)
A = utils.shuffle(digits.data)

nmf = decomposition.NMF(n_components=20)
W = nmf.fit_transform(A)
H = nmf.components_

plt.rc("image", cmap="binary")
plt.figure(figsize=(8, 4))
for i in range(20):
    plt.subplot(4, 5, i + 1)  # 4x5 grid fits all 20 components
    plt.imshow(H[i].reshape(28, 28))
    plt.xticks(())
    plt.yticks(())
plt.tight_layout()
13. BEYOND MATRICES: HIGH DIMENSIONAL DATASETS
Cichocki et al. Nonnegative Matrix and Tensor Factorizations
Environmental analysis
▸ Measurement as a function of (Location, Time, Variable)
Sensory analysis
▸ Score as a function of (Wine sample, Judge, Attribute)
Process analysis
▸ Measurement as a function of (Batch, Variable, Time)
Spectroscopy
▸ Intensity as a function of (Wavelength, Retention, Sample, Time,
Location, …)
…
MULTIWAY DATA ANALYSIS
21. RANK-1 TENSOR
The outer product of N vectors results in a rank-1 tensor:
T = a^(1) ∘ a^(2) ∘ ··· ∘ a^(N),  T_ijk = a^(1)_i a^(2)_j a^(3)_k

import numpy as np

a = np.array([1, 2, 3])
b = np.array([1, 2, 3, 4])
c = np.array([1, 2])
T = np.zeros((a.shape[0], b.shape[0], c.shape[0]))
for i in range(a.shape[0]):
    for j in range(b.shape[0]):
        for k in range(c.shape[0]):
            T[i, j, k] = a[i] * b[j] * c[k]

array([[[  1.,   2.],
        [  2.,   4.],
        [  3.,   6.],
        [  4.,   8.]],

       [[  2.,   4.],
        [  4.,   8.],
        [  6.,  12.],
        [  8.,  16.]],

       [[  3.,   6.],
        [  6.,  12.],
        [  9.,  18.],
        [ 12.,  24.]]])
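As a side note, the triple loop above can be vectorized; this sketch, using the same example vectors, builds the identical rank-1 tensor with np.einsum and again with plain broadcasting:

```python
import numpy as np

a = np.array([1., 2., 3.])
b = np.array([1., 2., 3., 4.])
c = np.array([1., 2.])

# T[i, j, k] = a[i] * b[j] * c[k], without the triple loop
T = np.einsum('i,j,k->ijk', a, b, c)

# the same rank-1 tensor via broadcasting
T2 = a[:, None, None] * b[None, :, None] * c[None, None, :]
assert np.array_equal(T, T2)
```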
22. TENSOR RANK
▸ Every tensor can be written as a sum of rank-1 tensors:
T ≈ Σ_{r=1}^{R} a_r ∘ b_r ∘ c_r ≡ ⟦A, B, C⟧
▸ Tensor rank: the smallest number of rank-1 tensors that can generate the tensor by summing up:
X ≈ Σ_{r=1}^{R} a_r^(1) ∘ a_r^(2) ∘ ··· ∘ a_r^(N) ≡ ⟦A^(1), A^(2), ···, A^(N)⟧
23. array([[[ 61., 82.],
[ 74., 100.],
[ 87., 118.],
[ 100., 136.]],
[[ 77., 104.],
[ 94., 128.],
[ 111., 152.],
[ 128., 176.]],
[[ 93., 126.],
[ 114., 156.],
[ 135., 186.],
[ 156., 216.]]])
A = np.array([[1, 2, 3],
              [4, 5, 6]]).T
B = np.array([[1, 2, 3, 4],
              [5, 6, 7, 8]]).T
C = np.array([[1, 2],
              [3, 4]]).T
T = np.zeros((A.shape[0], B.shape[0], C.shape[0]))
for i in range(A.shape[0]):
    for j in range(B.shape[0]):
        for k in range(C.shape[0]):
            for r in range(A.shape[1]):
                T[i, j, k] += A[i, r] * B[j, r] * C[k, r]

# equivalently, in one line:
T = np.einsum('ir,jr,kr->ijk', A, B, C)

T = Σ_r a_r ∘ b_r ∘ c_r ≡ ⟦A, B, C⟧ is called a Kruskal tensor.
24. TENSOR FACTORIZATION
▸ CANDECOMP/PARAFAC factorization (CP)
▸ extensions of SVD / PCA / NMF to tensors
NON-NEGATIVE TENSOR FACTORIZATION
▸ Decompose a non-negative tensor into a sum of R non-negative rank-1 tensors:
arg min_{A,B,C} ‖T − ⟦A, B, C⟧‖
with ⟦A, B, C⟧ ≡ Σ_{r=1}^{R} a_r ∘ b_r ∘ c_r
subject to A ≥ 0, B ≥ 0, C ≥ 0
25. TENSOR FACTORIZATION: HOW TO
Alternating Least Squares (ALS):
fix all but one factor matrix, and solve a least-squares problem for that one:
min_{A ≥ 0} ‖T_(1) − A(C ⊙ B)ᵀ‖
min_{B ≥ 0} ‖T_(2) − B(C ⊙ A)ᵀ‖
min_{C ≥ 0} ‖T_(3) − C(B ⊙ A)ᵀ‖
⊙ denotes the Khatri-Rao product, a column-wise Kronecker product:
C ⊙ B = [c_1 ⊗ b_1, c_2 ⊗ b_2, …, c_r ⊗ b_r]
T_(k) denotes the tensor unfolded on the k-th mode, so that
T_(1) = Â(Ĉ ⊙ B̂)ᵀ,  T_(2) = B̂(Ĉ ⊙ Â)ᵀ,  T_(3) = Ĉ(B̂ ⊙ Â)ᵀ
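To make the unfolding and Khatri-Rao notation concrete, here is a small self-contained NumPy sketch that checks the identity T_(1) = A(C ⊙ B)ᵀ and performs one unconstrained least-squares step for A. The unfolding convention (Fortran-order flattening of the other modes, following Kolda & Bader) and the example factors are assumptions for the sketch:

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding T_(n): the chosen mode becomes the rows, the other
    modes are flattened in Fortran order (Kolda & Bader convention)."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1, order='F')

def khatri_rao(C, B):
    """Column-wise Kronecker product: C ⊙ B = [c_1 ⊗ b_1, ..., c_r ⊗ b_r]."""
    return np.stack([np.kron(C[:, j], B[:, j]) for j in range(C.shape[1])],
                    axis=1)

# build an exactly rank-2 tensor from known factors
A = np.array([[1., 4.], [2., 5.], [3., 6.]])
B = np.array([[1., 5.], [2., 6.], [3., 7.], [4., 8.]])
C = np.array([[1., 3.], [2., 4.]])
T = np.einsum('ir,jr,kr->ijk', A, B, C)

# the unfolding identity from the slide: T_(1) = A (C ⊙ B)^T
assert np.allclose(unfold(T, 0), A @ khatri_rao(C, B).T)

# one (unconstrained) ALS step for A: least squares against T_(1)
A_new = unfold(T, 0) @ np.linalg.pinv(khatri_rao(C, B).T)
```

Since T was built exactly from A, B, C, the least-squares step recovers A; real ALS alternates such steps over all modes, with a non-negativity constraint for NTF.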
26. import numpy as np

# factor matrices, one per mode, with non-negative random init
# (an all-zero init would be a fixed point of the multiplicative update)
F = [np.random.rand(n, r), np.random.rand(m, r), np.random.rand(o, r)]
FF_init = [f.T.dot(f) for f in F]

def iter_solver(T, F, FF_init):
    # Update each factor
    for k in range(len(F)):
        # Hadamard product of the Gram matrices of all other factors
        FF = np.ones((r, r))
        for i in list(range(k)) + list(range(k + 1, len(F))):
            FF = FF * FF_init[i]
        # unfolded tensor times Khatri-Rao product (uttkrp, as in sktensor)
        XF = T.uttkrp(F, k)
        F[k] = F[k] * XF / (F[k].dot(FF))
        # F[k] = nnls(FF, XF.T).T
        FF_init[k] = F[k].T.dot(F[k])
    return F, FF_init

This is the tensor analogue of the NMF multiplicative update W ← W ∘ (XHᵀ)/(WHHᵀ): for mode 1 the numerator XHᵀ becomes T_(1)(C ⊙ B), and the Gram matrix HHᵀ becomes the Hadamard product of the other factors' Gram matrices.

J. Kim and H. Park. Fast Nonnegative Tensor Factorization with an Active-set-like Method. In High-Performance Scientific Computing: Algorithms and Applications, Springer, 2012, pp. 311-326.
27. HOW TO INTERPRET: USER X TERM X TIME
X is a 3-way tensor (N×M×T) in which x_nmt is 1 if term m was used by user n at interval t, and 0 otherwise.
A (N×K) gives the association of each user n to a factor k.
B (M×K) gives the association of each term m to a factor k.
C (T×K) shows the time activity of each factor.
[Figure: the users × terms × time tensor X (N×M×T) decomposed into the factor matrices A (N×K), B (M×K) and C (T×K)]
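As a hedged sketch of how such factors might be read off in practice, with entirely synthetic factor matrices and made-up user/term names: for each factor k, the largest entries of A[:, k] and B[:, k] identify its users and terms, and C[:, k] gives its activity over time:

```python
import numpy as np

# hypothetical output of a rank-2 decomposition of a users x terms x time
# tensor; all numbers and labels are invented for illustration
users = ['u0', 'u1', 'u2']
terms = ['python', 'tensor', 'fuzzy', 'numbers']
A = np.array([[.9, .0], [.8, .1], [.0, .9]])            # users x factors
B = np.array([[.9, .0], [.7, .2], [.1, .8], [.0, .9]])  # terms x factors
C = np.array([[1., 0.], [.8, .2], [.1, .9]])            # time  x factors

k = 0  # inspect factor 0
top_users = [users[i] for i in np.argsort(A[:, k])[::-1][:2]]
top_terms = [terms[i] for i in np.argsort(B[:, k])[::-1][:2]]
peak_time = int(np.argmax(C[:, k]))
```

Here factor 0 would be read as "users u0 and u1 discussing python/tensor, most active at the first time interval", which is the kind of community-with-temporal-signature interpretation described above.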