An overview is provided of the Stravinsqi-Jun2014 algorithm and its performance on the MediaEval 2014 C@merata Task. Stravinsqi stands for STaff Representation Analysed VIa Natural language String Query Input. The algorithm parses a symbolic representation of a piece of music together with a query string consisting of a natural language expression, and identifies where the event(s) specified by the query occur in the music. The output for any given query is a list of time windows corresponding to the locations of the relevant events. To evaluate the algorithm, its output time windows are compared with those specified by music experts for the same query-piece combinations. In an evaluation consisting of twenty pieces and 200 questions, Stravinsqi-Jun2014 had recall 0.91 and precision 0.46 at the measure level, and recall 0.87 and precision 0.44 at the beat level. Important potential applications of this work in music-educational software and musicological research are discussed.
http://ceur-ws.org/Vol-1263/mediaeval2014_submission_50.pdf
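The window-based evaluation described above can be sketched in a few lines of Python. This is a hypothetical illustration, not the task's official scorer; the overlap criterion and the `(start, end)` window representation are assumptions made for the example.

```python
# Hypothetical sketch of time-window evaluation: a returned window counts
# as correct if it overlaps some gold-standard window, and vice versa.

def overlaps(a, b):
    """True if half-open windows a=(start, end) and b=(start, end) overlap."""
    return a[0] < b[1] and b[0] < a[1]

def recall_precision(gold, returned):
    """Recall = fraction of gold windows matched by a returned window;
    precision = fraction of returned windows matching a gold window."""
    matched_gold = sum(1 for g in gold if any(overlaps(g, r) for r in returned))
    matched_ret = sum(1 for r in returned if any(overlaps(r, g) for g in gold))
    recall = matched_gold / len(gold) if gold else 0.0
    precision = matched_ret / len(returned) if returned else 0.0
    return recall, precision

gold = [(4.0, 6.0), (12.0, 13.0)]       # expert-annotated windows (in beats)
returned = [(4.5, 5.0), (20.0, 21.0)]   # windows returned for the query
print(recall_precision(gold, returned))  # (0.5, 0.5)
```

Scoring at the measure level versus the beat level then amounts to expressing the windows in coarser or finer time units before comparison.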
This webinar by Oleksandr Navka (Lead Software Engineer, Consultant, GlobalLogic) was delivered at Java Community Webinar #1 on August 12, 2020.
Webinar agenda:
- Java Records, a new structural unit of the program
- Pattern matching for the instanceof operator
- The updated switch (switch expressions)
More details and presentation: https://www.globallogic.com/ua/about/events/java-community-webinar-1/
Extending Spark SQL API with Easier to Use Array Types Operations with Marek ... (Databricks)
Big companies typically integrate data from various heterogeneous systems when building a data lake as a single point of access to data. To achieve this goal, technical teams often deal with data defined by complex schemas and in various data formats. Spark SQL Datasets are currently compatible with data formats such as XML, Avro and Parquet by providing primitive and complex data types such as structs and arrays.
Although the Dataset API offers a rich set of functions, general manipulation of arrays and deeply nested data structures is lacking. We demonstrate this by providing examples of data that are currently very hard to process efficiently in Spark. We designed and developed an extension of the Dataset API that allows developers to work with array and complex-type elements in a more straightforward and consistent way. The extension should help users dealing with complex, structured big data to use Apache Spark as a truly generic processing framework.
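The kind of operation such an extension targets can be illustrated in plain Python (not the actual Spark extension, and not Spark code at all): applying a function to every element of a nested array column while keeping the row structure intact, instead of exploding and re-grouping.

```python
# Plain-Python illustration of a transform-style higher-order operation on
# a nested array column; column and field names are made up for the example.

rows = [
    {"order_id": 1, "items": [{"price": 10.0, "qty": 2}, {"price": 3.5, "qty": 1}]},
    {"order_id": 2, "items": [{"price": 7.0, "qty": 3}]},
]

def transform_array(rows, column, fn):
    """Apply fn to each element of an array column, keeping row structure."""
    return [{**row, column: [fn(x) for x in row[column]]} for row in rows]

# Add a line total to every nested item, without flattening the arrays.
with_totals = transform_array(rows, "items",
                              lambda it: {**it, "total": it["price"] * it["qty"]})
print(with_totals[0]["items"][0]["total"])  # 20.0
```

In Spark itself, the same shape of operation is what SQL higher-order functions such as `transform` provide for array columns.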
UNED @ Retrieving Diverse Social Images Task (multimediaeval)
This paper summarizes the participation of UNED at the 2014 Retrieving Diverse Social Images Task [3]. We propose a novel approach based on Formal Concept Analysis (FCA) to detect the latent topics related to the images, followed by Hierarchical Agglomerative Clustering (HAC) to group the images according to these latent topics. Diversification is based on offering images from the different topics detected. In order to detect these latent topics, two kinds of data were tested: only information related to the description of the images, and all the textual information related to the images. The results show that our proposal is suitable for search result diversification, achieving results similar to the state of the art.
http://ceur-ws.org/Vol-1263/mediaeval2014_submission_9.pdf
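As a rough illustration of the clustering step (not the authors' implementation), hierarchical agglomerative clustering over textual data can be sketched with Jaccard distance between tag sets and single linkage; the threshold and distance choice here are assumptions for the example.

```python
# Hypothetical HAC sketch: group images by Jaccard distance between tag sets.

def jaccard_dist(a, b):
    """Jaccard distance between two sets (0 = identical, 1 = disjoint)."""
    return 1.0 - len(a & b) / len(a | b) if (a | b) else 0.0

def agglomerate(tagsets, threshold):
    """Single-linkage HAC: repeatedly merge the two closest clusters until
    the smallest inter-cluster distance exceeds the threshold."""
    clusters = [[i] for i in range(len(tagsets))]
    def dist(c1, c2):
        return min(jaccard_dist(tagsets[i], tagsets[j]) for i in c1 for j in c2)
    while len(clusters) > 1:
        d, a, b = min((dist(clusters[i], clusters[j]), i, j)
                      for i in range(len(clusters))
                      for j in range(i + 1, len(clusters)))
        if d > threshold:
            break
        clusters[a] = clusters[a] + clusters[b]
        del clusters[b]
    return clusters

tags = [{"beach", "sea"}, {"sea", "sand"}, {"church", "tower"}]
print(agglomerate(tags, threshold=0.8))  # [[0, 1], [2]]
```

Diversification then corresponds to interleaving images drawn from different resulting clusters.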
Overview of the MediaEval 2014 Visual Privacy Task (multimediaeval)
This paper presents an overview of the Visual Privacy Task (VPT) of MediaEval 2014, its objectives, related dataset, and evaluation approaches. Participants in this task were required to implement a privacy filter or a combination of filters to protect various personal information regions in the provided video sequences. The challenge was to achieve an adequate balance between the degree of privacy protection, intelligibility (how much useful information is retained after privacy filtering), and pleasantness (how minimal the adverse effects of filtering on the appearance of the video frames were). The submissions from the eight teams who participated in this task were evaluated subjectively by surveillance experts, practitioners and data protection experts, and by naïve viewers using a crowdsourcing approach.
http://ceur-ws.org/Vol-1263/mediaeval2014_submission_37.pdf
The Search and Hyperlinking Task at MediaEval 2014 (multimediaeval)
The Search and Hyperlinking Task at MediaEval 2014 is the third edition of this task. As in previous editions, it consisted of two sub-tasks: (i) answering search queries from a collection of roughly 2700 hours of BBC broadcast TV material, and (ii) linking anchor segments from within the videos to other target segments within the video collection. For MediaEval 2014, both sub-tasks were based on an ad-hoc retrieval scenario, and were evaluated using a pooling procedure across participants' submissions with crowdsourced relevance assessment using Amazon Mechanical Turk.
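The pooling procedure mentioned above can be sketched simply: the union of the top-k results from every participant run forms the pool that is sent for relevance assessment. This is a generic sketch of pooling, not the task's exact protocol; run names and segment ids are invented.

```python
# Hedged sketch of pooling across participant runs for relevance assessment.

def build_pool(runs, k):
    """runs: {run_name: ranked list of segment ids}; returns the pool of
    segments to be judged (union of each run's top-k results)."""
    pool = set()
    for ranking in runs.values():
        pool.update(ranking[:k])
    return sorted(pool)

runs = {
    "team_a": ["s3", "s1", "s7", "s9"],
    "team_b": ["s1", "s4", "s3", "s2"],
}
print(build_pool(runs, k=2))  # ['s1', 's3', 's4']
```

Only pooled segments are judged (here, by Mechanical Turk workers), which keeps assessment cost bounded while covering every run's best candidates.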
Emotional expression is an important property of music; its emotional characteristics are thus especially natural criteria for music indexing and recommendation. The Emotion in Music task addresses automatic music emotion prediction and is held for the second year in 2014. Compared to the previous year, we modified the task by offering a new feature-development subtask and releasing a new evaluation set. We employed a crowdsourcing approach to collect the data, using Amazon Mechanical Turk. The dataset consists of music licensed under Creative Commons from the Free Music Archive, which can be shared freely without restrictions. In this paper we describe the dataset collection, annotations, and evaluation criteria, as well as the required and optional runs.
http://ceur-ws.org/Vol-1263/mediaeval2014_submission_33.pdf
The NNI Query-by-Example System for MediaEval 2014 (multimediaeval)
In this paper we describe the system proposed by the NNI (NWPUNTU-I2R) team for the QUESST task within the MediaEval 2014 evaluation. To solve the problem, we used both dynamic time warping (DTW) and symbolic search (SS) based approaches. The DTW system performs template matching using a subsequence DTW algorithm and posterior representations. The symbolic search is performed on phone sequences generated by phone recognizers. For both symbolic and DTW search, partial sequence matching is performed to reduce the miss rate, especially for query types 2 and 3. After fusing 9 DTW systems, 7 symbolic systems, and query-length side information, we obtained 0.6023 actual normalized cross entropy (actCnxe) for all queries combined. For type 3 complex queries, we achieved 0.7252 actCnxe.
http://ceur-ws.org/Vol-1263/mediaeval2014_submission_69.pdf
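The subsequence-DTW idea used by such systems can be sketched in a few lines. This is a minimal hypothetical illustration on 1-D values, not the NNI system (which matches posteriorgram frames): the query may start and end anywhere inside the searched utterance, so the first row of the cost matrix is all zeros and the answer is the minimum of the last row.

```python
# Minimal subsequence-DTW sketch: find the best accumulated distance of a
# short query anywhere inside a longer series (free start and free end).

def subsequence_dtw(query, series):
    """Return the lowest accumulated distance of query as a subsequence
    of series, using |a - b| as the local distance."""
    INF = float("inf")
    prev = [0.0] * (len(series) + 1)   # row 0 all zeros: free start in series
    for q in query:
        cur = [INF] * (len(series) + 1)
        for j in range(1, len(series) + 1):
            d = abs(q - series[j - 1])
            cur[j] = d + min(prev[j], prev[j - 1], cur[j - 1])
        prev = cur
    return min(prev[1:])               # free end: best cell of the last row

series = [0.0, 0.1, 1.0, 2.0, 3.0, 0.2]
print(subsequence_dtw([1.0, 2.0, 3.0], series))  # 0.0 (exact match inside)
```

Partial sequence matching for type 2 and 3 queries roughly corresponds to relaxing this further, e.g. also allowing the query itself to be matched only in part.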
4845 Rotary Ambassadors Program 2016-2017 (Miguel DE PAOLI)
District 4845 of Argentina and Paraguay invites applications for the selection of a Team Leader and Team Members to take part in the Rotary Ambassadors Program; those selected will travel to District 4060 in Kansas City in 2017.
The SPL-IT Query by Example Search on Speech System for MediaEval 2014 (multimediaeval)
This document briefly describes the system submitted by the Speech Processing Lab of Instituto de Telecomunicações, Coimbra pole (SPL-IT), to the Query by Example Search on Speech Task (QUESST) of MediaEval 2014. Our approach is based on merging the results of a phoneme recognition system using three different languages. A version of Dynamic Time Warping (DTW) using posteriorgram distances was created to allow finding some of the peculiar search cases of this task. Our primary submission merges two approaches: simple DTW for detecting entire queries, and a version where cutting the final portions of queries is allowed. The late submission merges five approaches that account for all the search possibilities described for the task, though improved results were only observed on the evaluation dataset for type 3 queries.
http://ceur-ws.org/Vol-1263/mediaeval2014_submission_74.pdf
TALP-UPC at MediaEval 2014 Placing Task: Combining Geographical Knowledge Bas... (multimediaeval)
This paper describes our georeferencing approaches, experiments, and results at the MediaEval 2014 Placing Task evaluation. The task consists of predicting the most probable geographical coordinates of Flickr images and videos using their associated visual, audio and metadata features. Our approaches used only the textual metadata annotations and tagsets of Flickr users. We used four approaches for this task: 1) an approach based on Geographical Knowledge Bases (GeoKB); 2) the Hiemstra Language Model (HLM) approach with re-ranking; 3) a combination of the GeoKB and the HLM (GeoFusion); and 4) a combination of the GeoFusion with an HLM model derived from the georeferenced pages of the English Wikipedia. The HLM approach with re-ranking showed the best performance within 10 m to 1 km distances. The GeoFusion approaches achieved the best results within the margins of error from 10 km to 5000 km.
http://ceur-ws.org/Vol-1263/mediaeval2014_submission_77.pdf
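The margin-of-error evaluation style used in the Placing Task can be sketched as follows: a prediction counts as correct within a given margin if its great-circle distance to the ground-truth coordinates is below that margin. This is a generic sketch, not the official scorer; the sample coordinates are invented.

```python
# Hedged sketch of distance-based accuracy for georeferencing predictions.

from math import radians, sin, cos, asin, sqrt

def haversine_km(p, q):
    """Great-circle distance in km between (lat, lon) points in degrees."""
    lat1, lon1, lat2, lon2 = map(radians, (*p, *q))
    h = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(h))      # 6371 km: mean Earth radius

def accuracy_within(preds, truths, margin_km):
    """Fraction of predictions within margin_km of the ground truth."""
    hits = sum(1 for p, t in zip(preds, truths)
               if haversine_km(p, t) <= margin_km)
    return hits / len(truths)

preds = [(48.86, 2.35), (40.0, -3.0)]
truths = [(48.85, 2.35), (41.39, 2.17)]    # first is ~1 km off, second ~500 km
print(accuracy_within(preds, truths, margin_km=10))  # 0.5
```

Reporting this accuracy at margins from 10 m up to 5000 km yields the kind of distance profile the abstract refers to.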
MediaEval 2014: THU-HCSIL Approach to Emotion in Music Task using Multi-level... (multimediaeval)
This working notes paper describes the system proposed by the THU-HCSIL team for dynamic music emotion recognition. The procedure is divided into two modules: feature extraction and regression. Both feature selection and feature combination are used to form the final THU feature set. In the regression module, a Booster-based Multi-level Regression method is presented, which outperforms the baseline significantly on test data in the RMSE metric for the dynamic task.
http://ceur-ws.org/Vol-1263/mediaeval2014_submission_15.pdf
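For reference, the RMSE metric used to compare such systems against the baseline is straightforward to sketch (the sample values are invented):

```python
# Root Mean Squared Error between predicted and actual emotion values.
from math import sqrt

def rmse(predicted, actual):
    """RMSE over paired prediction/ground-truth sequences."""
    return sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual))

print(rmse([0.2, 0.4, 0.6], [0.2, 0.5, 0.8]))  # ≈ 0.129
```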
Synchronizing Multi-User Photo Galleries with MRF (multimediaeval)
We present a novel solution to the MediaEval 2014 Event Synchronization Task: Synchronization of Multi-User Event Media (SEM). The framework is based on a probabilistic graphical model. Thanks to the simple topology of the graph, the estimation of the true temporal displacement among multiple photo collections can be performed efficiently through exact inference. The underlying fitness function is defined in a flexible way, for which it is possible to integrate easily new information (e.g., text tags or social network data). The flexibility makes the framework suitable and adaptable to cope with many real situations. The method is evaluated on two datasets, obtaining an overall accuracy of more than 85% in both cases.
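The core idea of the task (estimating clock offsets between photo collections) can be sketched without the paper's MRF machinery. In this hypothetical illustration, exact inference degenerates to exhaustively scoring candidate offsets for a pair of cameras against matched photo pairs; the fitness function and timestamps are invented.

```python
# Hypothetical sketch: estimate the time offset between two cameras' clocks
# by exhaustively scoring candidate offsets against matched photo pairs.

def best_offset(pairs, candidates):
    """pairs: (timestamp_cam_a, timestamp_cam_b) for photos of the same moment.
    Returns the candidate offset minimizing the total absolute mismatch."""
    def cost(off):
        return sum(abs((tb + off) - ta) for ta, tb in pairs)
    return min(candidates, key=cost)

# Camera B's clock runs about 120 s behind camera A in this toy example.
pairs = [(1000, 880), (1500, 1382), (2000, 1879)]
print(best_offset(pairs, candidates=range(-300, 301)))  # 120
```

With more than two collections, a graphical model over pairwise offsets keeps the estimates mutually consistent, which is what makes the graph topology matter.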
This paper provides an overview of the Social Event Detection (SED) system developed at LIMSI for the 2014 campaign. Our approach is based on a hierarchical agglomerative clustering that uses textual metadata, user-based knowledge and geographical information. These different sources of knowledge, either used separately or in cascade, reach good results for the full clustering subtask, with a normalized mutual information equal to 0.95 and F1 scores greater than 0.82 for our best run.
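The normalized mutual information (NMI) score cited above can be sketched directly from its definition. This uses the sqrt-normalization variant (other normalizations exist), and the toy labelings are invented.

```python
# Sketch of normalized mutual information between two clusterings.

from math import log, sqrt
from collections import Counter

def nmi(labels_a, labels_b):
    """NMI = I(A;B) / sqrt(H(A) * H(B)) over two label assignments."""
    n = len(labels_a)
    ca, cb = Counter(labels_a), Counter(labels_b)
    cab = Counter(zip(labels_a, labels_b))
    h_a = -sum(c / n * log(c / n) for c in ca.values())
    h_b = -sum(c / n * log(c / n) for c in cb.values())
    mi = sum(c / n * log((c / n) / ((ca[a] / n) * (cb[b] / n)))
             for (a, b), c in cab.items())
    return mi / sqrt(h_a * h_b) if h_a and h_b else 1.0

# Identical clusterings (up to label names) give NMI = 1.
print(round(nmi([0, 0, 1, 1], ["x", "x", "y", "y"]), 6))  # 1.0
```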
This paper describes the results of the CERTH participation in the Synchronization of Multi-User Event Media Task of MediaEval 2014. We used a near-duplicate image detector to identify very similar photos, which allowed us to temporally align photo galleries; we then used time, geolocation and visual information, including the results of visual concept detection, to cluster all photos into different events.
http://ceur-ws.org/Vol-1263/mediaeval2014_submission_40.pdf
RECOD at MediaEval 2014: Violent Scenes Detection Task (multimediaeval)
This paper presents the RECOD approaches used in the MediaEval 2014 Violent Scenes Detection task. Our system is based on the combination of visual, audio, and text features. We also evaluate the performance of a convolutional network as a feature extractor. We combined those features using a fusion scheme. We participated in the main and the generalization tasks.
http://ceur-ws.org/Vol-1263/mediaeval2014_submission_89.pdf
The Munich LSTM-RNN Approach to the MediaEval 2014 “Emotion in Music” Task (multimediaeval)
In this paper we describe TUM's approach for the MediaEval “Emotion in Music” task. The goal of this task is to automatically estimate the emotions expressed by music (in terms of Arousal and Valence) in a time-continuous fashion. Our system consists of Long Short-Term Memory Recurrent Neural Networks (LSTM-RNN) for dynamic Arousal and Valence regression. We used two different sets of acoustic and psychoacoustic features that have previously been proven effective for emotion prediction in music and speech. The best model yielded an average Pearson's correlation coefficient of 0.354 (Arousal) and 0.198 (Valence), and an average Root Mean Squared Error of 0.102 (Arousal) and 0.079 (Valence).
http://ceur-ws.org/Vol-1263/mediaeval2014_submission_7.pdf
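The Pearson correlation used to score Arousal/Valence predictions is easy to sketch from its definition (the sample sequences are invented):

```python
# Pearson's correlation coefficient between predictions and ground truth.
from math import sqrt

def pearson(x, y):
    """Pearson's r between two equal-length numeric sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Perfectly linearly related sequences correlate at r = 1.
print(round(pearson([1, 2, 3], [10, 20, 30]), 6))  # 1.0
```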
In the ‘old’ days, when a circuit seemed to have a problem, one could inspect each circuit to find the bad one. Today, finding a bad chip is hard to do. Since seeing a chip’s circuit may not be possible, how then can one determine that a chip is bad?
For example, in the 1970s, Memorex Corp. was designing its first “Thin Film Head” (TFH) and using microchip components. It was soon realized that something was wrong with a small percentage of the disc drives being made with these chips. But how to detect or find the bad drives was not known. Destructive testing seemed like the only choice. Years went by. No solution!
Text analytics in Python and R with examples from Tobacco Control (Ben Healey)
Ben has been doing data sciencey work since 1999 for organisations in the banking, retailing, health and education industries. He is currently on contracts with Pharmac and Aspire2025 (a Tobacco Control research collaboration) where, happily, he gets to use his data-wrangling powers for good.
This presentation focuses on analysing text, with Tobacco Control as the context. Examples include monitoring mentions of NZ's smokefree goal by politicians and examining media uptake of BATNZ's Agree/Disagree PR campaign. It covers common obstacles during data extraction, cleaning and analysis, along with the key Python and R packages you can use to help clear them.
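A small sketch of the kind of mention monitoring described can be written with the standard library alone. The pattern and documents below are illustrative inventions, not the presenter's code or data.

```python
# Count documents mentioning a topic with a regular-expression pattern.
import re

# Illustrative pattern for mentions of NZ's smokefree goal.
SMOKEFREE = re.compile(r"smoke[- ]?free\s+2025", re.IGNORECASE)

docs = [
    "The minister restated the Smokefree 2025 goal in Parliament.",
    "Retailers responded to the campaign launch.",
    "Opposition MPs also mentioned the smoke-free 2025 target.",
]

mentions = sum(1 for d in docs if SMOKEFREE.search(d))
print(mentions)  # 2
```

Real pipelines of this kind typically add the extraction and cleaning steps the talk covers before any pattern matching is reliable.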
Kernel Recipes 2019 - GNU poke, an extensible editor for structured binary data (Anne Nicolas)
GNU poke is a new interactive editor for binary data. Not limited to editing basic entities such as bits and bytes, it provides a full-fledged procedural, interactive programming language designed to describe data structures and to operate on them. Once a user has defined a structure for binary data (usually matching some file format) she can search, inspect, create, shuffle and modify abstract entities such as ELF relocations, MP3 tags, DWARF expressions, partition table entries, and so on, with primitives resembling simple editing of bits and bytes. The program comes with a library of already written descriptions (or “pickles” in poke parlance) for many binary formats.
GNU poke is useful in many domains. It is very well suited to aid in the development of programs that operate on binary files, such as assemblers and linkers. This was in fact the primary inspiration that brought me to write it: easily injecting flaws into ELF files in order to reproduce toolchain bugs. Due to its flexibility, poke is also very useful for reverse engineering, where the real structure of the data being edited is discovered by experiment, interactively. It is also good for the fast development of prototypes for programs like linkers, compressors or filters, and it provides a convenient foundation to write other utilities such as diff and patch tools for binary files.
This talk (unlike Gaul) is divided into four parts. First I will introduce the program and show what it does: from simple bits/bytes editing to user-defined structures. Then I will show some of the internals, and how poke is implemented. The third block will cover the way of using poke to describe user data, which is to say the art of writing “pickles”. The presentation ends with the status of the project, a call for hackers, and a hint at future work.
Jose E. Marchesi
Classification of Strokes in Table Tennis with a Three Stream Spatio-Temporal... (multimediaeval)
Paper: http://ceur-ws.org/Vol-2882/paper62.pdf
YouTube: https://youtu.be/gV-rvV3iFDA
Pierre-Etienne Martin, Jenny Benois-Pineau, Boris Mansencal, Renaud Péteri and Julien Morlier : Classification of Strokes in Table Tennis with a Three Stream Spatio-Temporal CNN for MediaEval 2020. Proc. of MediaEval 2020, 14-15 December 2020, Online.
This work presents a method for classifying table tennis strokes using spatio-temporal convolutional neural networks. The fine-grained classification is performed on trimmed video segments recorded at 120 fps with different players performing in natural conditions. From those segments, the frames are extracted, their optical flow is computed and the pose of the player is estimated. From the optical flow amplitude, a region of interest is inferred. A three-stream spatio-temporal convolutional neural network using a combination of those modalities and 3D attention mechanisms is presented in order to perform the classification.
Presented by: Pierre-Etienne Martin
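One step of the pipeline above, inferring a region of interest from the optical-flow amplitude, can be sketched in a simple form. This is a hypothetical illustration (the paper's actual procedure may differ): keep the bounding box of grid cells whose flow magnitude exceeds a threshold.

```python
# Hypothetical sketch: bounding-box ROI from an optical-flow magnitude grid.

def roi_from_flow(magnitude, threshold):
    """magnitude: 2D list of flow amplitudes per cell.
    Returns (top, left, bottom, right) of cells above threshold, or None."""
    cells = [(r, c) for r, row in enumerate(magnitude)
             for c, v in enumerate(row) if v > threshold]
    if not cells:
        return None
    rows = [r for r, _ in cells]
    cols = [c for _, c in cells]
    return (min(rows), min(cols), max(rows), max(cols))

flow = [
    [0.0, 0.1, 0.0, 0.0],
    [0.0, 2.5, 3.0, 0.0],
    [0.0, 0.2, 2.8, 0.0],
]
print(roi_from_flow(flow, threshold=1.0))  # (1, 1, 2, 2)
```

The intuition is that the fast-moving player and racket dominate the flow amplitude, so cropping to that region focuses the network on the stroke.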
HCMUS at MediaEval 2020: Ensembles of Temporal Deep Neural Networks for Table... (multimediaeval)
Paper: http://ceur-ws.org/Vol-2882/paper50.pdf
Hai Nguyen-Truong, San Cao, N. A. Khoa Nguyen, Bang-Dang Pham, Hieu Dao, Minh-Quan Le, Hoang-Phuc Nguyen-Dinh, Hai-Dang Nguyen and Minh-Triet Tran : HCMUS at MediaEval 2020: Ensembles of Temporal Deep Neural Networks for Table Tennis Strokes Classification Task. Proc. of MediaEval 2020, 14-15 December 2020, Online.
The Sports Video Classification Task in the Multimedia Evaluation 2020 Challenge focuses on classifying different types of table tennis strokes in video segments. In this task, we, the HCMUS Team, performed multiple experiments with a combination of models such as SlowFast, Optical Flow, DensePose, R(2+1)D and Channel-Separated Convolutional Networks to classify 21 types of table tennis strokes from video segments. In total, we submitted eight runs corresponding to five different models, with different sets of hyper-parameters for each model. In addition, we applied some pre-processing techniques to the dataset so that our models learn and classify more accurately. According to the evaluation results, one of our team's methods outperforms those of the other teams. In particular, our best run achieves 31.35% global accuracy, and all of our methods show promising results in terms of local and global accuracy for action recognition tasks.
Sports Video Classification: Classification of Strokes in Table Tennis for Me... (multimediaeval)
Paper: http://ceur-ws.org/Vol-2882/paper2.pdf
YouTube: https://youtu.be/-bRL868b8ys
Pierre-Etienne Martin, Jenny Benois-Pineau, Boris Mansencal, Renaud Péteri, Laurent Mascarilla, Jordan Calandre and Julien Morlier : Sports Video Classification: Classification of Strokes in Table Tennis for MediaEval 2020. Proc. of MediaEval 2020, 14-15 December 2020, Online.
Fine-grained action classification has raised new challenges compared to classical action classification problems. Sport video analysis is a very popular research topic, due to the variety of application areas, ranging from multimedia intelligent devices with user-tailored digests up to analysis of athletes' performances. Running since 2019 as a part of MediaEval, we offer a task which consists of classifying table tennis strokes from videos recorded in natural conditions at the University of Bordeaux. The aim is to build tools for teachers, coaches and players to analyse table tennis games. Such tools could lead to automatic profiling of players and adaptation of their training to improve their sport skills more efficiently.
Presented by: Pierre-Etienne Martin
Predicting Media Memorability from a Multimodal Late Fusion of Self-Attention...
Paper: http://ceur-ws.org/Vol-2882/paper61.pdf
YouTube: https://youtu.be/brmI4g3jLS4
Ricardo Kleinlein, Cristina Luna-Jiménez, Fernando Fernández-Martínez and Zoraida Callejas : Predicting Media Memorability from a Multimodal Late Fusion of Self-Attention and LSTM Models. Proc. of MediaEval 2020, 14-15 December 2020, Online.
This paper reports on the GTH-UPM team's experience in the Predicting Media Memorability task at MediaEval 2020. Teams were asked to predict both short-term and long-term memorability scores, where the score measures whether a video endures in a viewer's memory. Our proposed system relies on a late fusion of the scores predicted by three sequential models, each trained on a different modality: video captions, aural embeddings and visual optical-flow-based vectors. Whereas the single-modality models show a low or zero Spearman correlation coefficient, their combination considerably boosts performance over development data, up to 0.2 in the short-term memorability prediction subtask and 0.19 in the long-term subtask. However, performance over test data drops to 0.016 and -0.041, respectively.
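As a rough illustration of the late-fusion idea described above, the sketch below averages per-modality scores with optional weights. This is a minimal, hypothetical sketch (function name and weights are assumptions), not the authors' implementation:

```python
# Minimal late-fusion sketch (illustrative only): each modality model has
# already produced a memorability score for a video; the fused prediction
# is a weighted average of those scores.

def late_fusion(caption_score, audio_score, flow_score,
                weights=(1 / 3, 1 / 3, 1 / 3)):
    """Fuse per-modality memorability scores into a single prediction."""
    scores = (caption_score, audio_score, flow_score)
    return sum(w * s for w, s in zip(weights, scores))

# Example: three modality predictions for one video, equal weights.
fused = late_fusion(0.9, 0.6, 0.3)  # plain mean -> 0.6
```

Because fusion happens at the score level, each modality model can be trained and tuned independently, which is the main practical appeal of late fusion.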
Presented by: Ricardo Kleinlein
Essex-NLIP at MediaEval Predicting Media Memorability 2020 Task
Paper: http://ceur-ws.org/Vol-2882/paper52.pdf
Janadhip Jacutprakart, Rukiye Savran Kiziltepe, John Q. Gan, Giorgos Papanastasiou and Alba G. Seco de Herrera : Essex-NLIP at MediaEval Predicting Media Memorability 2020 Task. Proc. of MediaEval 2020, 14-15 December 2020, Online.
In this paper, we present our approach and the main results from the Essex NLIP Team's participation in the MediaEval 2020 Predicting Media Memorability task. The task requires participants to build systems that can predict short-term and long-term memorability scores on the real-world video samples provided. Our approach focuses on colour-based visual features as well as the video annotation metadata; hyper-parameter tuning was also explored. Despite the simplicity of the methodology, our approach achieves competitive results. We investigated different visual features and assessed the performance of various regression models, with Random Forest regression as our final model for predicting the memorability of videos.
Overview of MediaEval 2020 Predicting Media Memorability task: What Makes a V...
Paper: http://ceur-ws.org/Vol-2882/paper6.pdf
YouTube: https://youtu.be/ySGGu_4vaxs
Alba García Seco De Herrera, Rukiye Savran Kiziltepe, Jon Chamberlain, Mihai Gabriel Constantin, Claire-Hélène Demarty, Faiyaz Doctor, Bogdan Ionescu and Alan F. Smeaton : Overview of MediaEval 2020 Predicting Media Memorability task: What Makes a Video Memorable? Proc. of MediaEval 2020, 14-15 December 2020, Online.
This paper describes the MediaEval 2020 Predicting Media Memorability task. First proposed at MediaEval 2018, the task is in its 3rd edition this year, as the prediction of short-term and long-term video memorability (VM) remains challenging. In 2020, the format remained the same as in previous editions. This year the videos are a subset of the TRECVid 2019 Video to Text dataset, containing more action-rich video content compared with the 2019 task. This paper describes the main aspects of the task, including its characteristics, the collection, the ground-truth dataset, the evaluation metrics and the requirements for run submission.
Presented by: Rukiye Savran Kiziltepe
Fooling an Automatic Image Quality Estimator
Paper: http://ceur-ws.org/Vol-2882/paper45.pdf
Benoit Bonnet, Teddy Furon and Patrick Bas : Fooling an Automatic Image Quality Estimator. Proc. of MediaEval 2020, 14-15 December 2020, Online.
In this paper we present our work on the 2020 MediaEval task "Pixel Privacy: Quality Camouflage for Social Images". Blind Image Quality Assessment (BIQA) is a classifier that returns a quality score for any given image. Our task is to modify an image to decrease its BIQA score while maintaining good perceived quality. Since BIQA is a deep neural network, we approached the problem as an adversarial attack.
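The adversarial-attack idea can be illustrated in miniature: treat the quality estimator as a function of the pixels and step against its gradient while keeping the perturbation small. The sketch below is purely illustrative, not the authors' method; the "estimator" is a stand-in quadratic and the gradient is estimated numerically:

```python
# Toy signed-gradient attack sketch (illustrative only).

def quality(pixels):
    # Hypothetical stand-in for a BIQA network's quality score.
    return sum(p * p for p in pixels)

def attack_step(pixels, eps=0.1, h=1e-5):
    """One small signed-gradient step that decreases the quality score."""
    grad = []
    for i in range(len(pixels)):
        bumped = list(pixels)
        bumped[i] += h
        # Forward-difference estimate of d(quality)/d(pixel_i).
        grad.append((quality(bumped) - quality(pixels)) / h)
    # Move each pixel a small amount against the gradient's sign.
    return [p - eps * (1 if g > 0 else -1 if g < 0 else 0)
            for p, g in zip(pixels, grad)]

img = [0.8, -0.5, 0.3]
adv = attack_step(img)
print(quality(adv) < quality(img))  # True: the score decreased
```

In a real attack the gradient would come from backpropagation through the network, and the step size bounds how visible the perturbation is.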
Fooling Blind Image Quality Assessment by Optimizing a Human-Understandable C...
Paper: http://ceur-ws.org/Vol-2882/paper16.pdf
YouTube: https://youtu.be/ix_b9K7j72w
Zhengyu Zhao : Fooling Blind Image Quality Assessment by Optimizing a Human-Understandable Color Filter. Proc. of MediaEval 2020, 14-15 December 2020, Online.
This paper presents the submission of our RU-DS team to the Pixel Privacy Task 2020. We propose to fool the blind image quality assessment model by transforming images with a directly optimized, human-understandable color filter. In contrast to common work that relies on small, $L_p$-bounded additive pixel perturbations, our approach yields large yet smooth perturbations. Experimental results demonstrate that, in the specific context of this task, our approach achieves strong adversarial effects, though at some cost to image appeal.
Presented by: Zhengyu Zhao
Pixel Privacy: Quality Camouflage for Social Images
Paper: http://ceur-ws.org/Vol-2882/paper77.pdf
YouTube: https://youtu.be/8Rr4KknGSac
Zhuoran Liu, Zhengyu Zhao, Martha Larson and Laurent Amsaleg : Pixel Privacy: Quality Camouflage for Social Images. Proc. of MediaEval 2020, 14-15 December 2020, Online.
High-quality social images shared online can be misappropriated for unauthorized purposes, where the quality filtering step is commonly carried out by automatic Blind Image Quality Assessment (BIQA) algorithms. Pixel Privacy benchmarks privacy-protective approaches that protect privacy-sensitive images against unethical computer vision algorithms. In the 2020 task, participants are encouraged to develop camouflage methods that effectively decrease the BIQA quality score of high-quality images while maintaining image appeal. The camouflage needs to be either imperceptible to the human eye or a visible enhancement.
Presented by: Zhuoran Liu
HCMUS at MediaEval 2020: Image-Text Fusion for Automatic News-Images Re-Matching
Paper: http://ceur-ws.org/Vol-2882/paper73.pdf
YouTube: https://youtu.be/TadJ6y7xZeA
Thuc Nguyen-Quang, Tuan-Duy Nguyen, Thang-Long Nguyen-Ho, Anh-Kiet Duong, Xuan-Nhat Hoang, Vinh-Thuyen Nguyen-Truong, Hai-Dang Nguyen and Minh-Triet Tran : HCMUS at MediaEval 2020: Image-Text Fusion for Automatic News-Images Re-Matching. Proc. of MediaEval 2020, 14-15 December 2020, Online.
Matching text and images based on their semantics plays an important role in cross-media retrieval. However, text and images in articles have a complex connection. In the context of the MediaEval 2020 challenge, we propose three multi-modal methods for mapping the text and images of news articles into a shared space in order to perform efficient cross-retrieval. Our methods show consistent improvement and validate our hypotheses, with the best-performing method reaching a recall@100 score of 0.2064.
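The recall@100 metric mentioned above is easy to compute: for each query, check whether the relevant items appear in the top-100 ranked candidates. A minimal sketch (variable names are illustrative):

```python
def recall_at_k(ranked_candidates, relevant, k=100):
    """Fraction of relevant items that appear in the top-k ranked list."""
    top_k = set(ranked_candidates[:k])
    if not relevant:
        return 0.0
    return sum(1 for r in relevant if r in top_k) / len(relevant)

# Example: a ranking of 200 candidate images for one article.
ranking = ["img%d" % i for i in range(200)]
print(recall_at_k(ranking, {"img4"}, k=100))    # relevant image ranked 5th -> 1.0
print(recall_at_k(ranking, {"img150"}, k=100))  # ranked 151st -> 0.0
```

The reported score is this quantity averaged over all query articles.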
Presented by: Thuc Nguyen-Quang
Efficient Supervision Net: Polyp Segmentation using EfficientNet and Attentio...
Paper: http://ceur-ws.org/Vol-2882/paper72.pdf
Sabarinathan D and Suganya Ramamoorthy : Efficient Supervision Net: Polyp Segmentation using EfficientNet and Attention Unit. Proc. of MediaEval 2020, 14-15 December 2020, Online.
Colorectal cancer is the third most common cancer worldwide, and identifying it in its early stages remains a challenging problem. Motivated by this, the main objective of this paper is to develop a multi-supervision net algorithm for segmenting polyps on a comprehensive dataset. The risk of colorectal cancer can be reduced by early diagnosis of polyps during a colonoscopy. The disease and its symptoms vary widely, so doctors and medical analysts constantly need to update their knowledge; diseases fall into different categories, and a small variation in symptoms may indicate a much higher risk. We use the Medico polyp challenge dataset, which consists of 1000 segmented polyp images from the gastrointestinal tract. We use EfficientNet-B4 as the pre-trained backbone in our multi-supervision net, and the model is trained with multiple output layers. We present quantitative results on the colorectal dataset and achieve good results on all performance metrics. The experimental results show that the proposed model is robust and segments polyps accurately in terms of Dice coefficient, recall, precision and F2.
HCMUS at Medico Automatic Polyp Segmentation Task 2020: PraNet and ResUnet++ ...
Paper: http://ceur-ws.org/Vol-2882/paper47.pdf
YouTube: https://youtu.be/vMsM4zg2-JY
Tien-Phat Nguyen, Tan-Cong Nguyen, Gia-Han Diep, Minh-Quan Le, Hoang-Phuc Nguyen-Dinh, Hai-Dang Nguyen and Minh-Triet Tran : HCMUS at Medico Automatic Polyp Segmentation Task 2020: PraNet and ResUnet++ for Polyps Segmentation. Proc. of MediaEval 2020, 14-15 December 2020, Online.
The Medico task at MediaEval 2020 explores the challenge of building accurate and high-performance algorithms to detect all types of polyps in endoscopic images. We propose different approaches leveraging the advantages of either the ResUnet++ or the PraNet model to efficiently segment polyps in colonoscopy images, with modifications to the network structure, parameters, and training strategies to tackle various observed characteristics of the given dataset. Our methods outperform the other teams' methods in both accuracy and efficiency: after the evaluation, we ranked second in task 1 (with a Jaccard index of 0.777 and the best precision and accuracy scores) and first in task 2 (with 67.52 FPS and a Jaccard index of 0.658).
Depth-wise Separable Atrous Convolution for Polyps Segmentation in Gastro-Int...
Paper: http://ceur-ws.org/Vol-2882/paper31.pdf
Syed Muhammad Faraz Ali, Muhammad Taha Khan, Syed Unaiz Haider, Talha Ahmed, Zeshan Khan and Muhammad Atif Tahir : Depth-wise Separable Atrous Convolution for Polyps Segmentation in Gastro-Intestinal Tract. Proc. of MediaEval 2020, 14-15 December 2020, Online.
Identification of polyps in endoscopic images is critical for the diagnosis of colon cancer. Finding the exact shape and size of polyps requires the segmentation of endoscopic images. This research explores the advantage of using depth-wise separable convolution in the atrous convolution of the ResUNet++ architecture. Deep atrous spatial pyramid pooling was also implemented on the ResUNet++ architecture. The results show that architecture with separable convolution has a smaller size and fewer GFLOPs without degrading the performance too much.
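The size reduction the abstract reports comes from how a depth-wise separable convolution factorizes a standard one. A back-of-the-envelope parameter count (channel sizes below are illustrative, not taken from the paper):

```python
# A standard k x k convolution with c_in input and c_out output channels has
# k*k*c_in*c_out weights. A depth-wise separable convolution factors it into
# a k*k depth-wise filter per input channel plus a 1x1 point-wise mix:
# k*k*c_in + c_in*c_out weights.

def standard_conv_params(k, c_in, c_out):
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out):
    return k * k * c_in + c_in * c_out

k, c_in, c_out = 3, 64, 128
std = standard_conv_params(k, c_in, c_out)   # 73728
sep = separable_conv_params(k, c_in, c_out)  # 8768
print(f"reduction: {std / sep:.1f}x")        # roughly 8x fewer weights
```

The same factorization shrinks FLOPs proportionally, which is why the abstract can report a smaller model and fewer GFLOPs at similar accuracy.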
Deep Conditional Adversarial learning for polyp Segmentation
Paper: http://ceur-ws.org/Vol-2882/paper22.pdf
Debapriya Banik and Debotosh Bhattacharjee : Deep Conditional Adversarial learning for polyp Segmentation. Proc. of MediaEval 2020, 14-15 December 2020, Online.
This paper addresses the Medico automatic polyp segmentation challenge, part of MediaEval 2020. We propose a deep conditional adversarial learning network for the automatic polyp segmentation task. The network comprises two interdependent models, a generator and a discriminator. The generator is an FCN that predicts the polyp mask, while the discriminator forces the segmentation to be as similar as possible to the real segmented mask (ground truth). Our proposed model achieved competitive results on the test dataset provided by the organizers of the challenge.
A Temporal-Spatial Attention Model for Medical Image Detection
Paper: http://ceur-ws.org/Vol-2882/paper21.pdf
Hwang Maxwell, Wu Cai, Hwang Kao-Shing, Xu Yong Si and Wu Chien-Hsing : A Temporal-Spatial Attention Model for Medical Image Detection. Proc. of MediaEval 2020, 14-15 December 2020, Online.
A local region model with attentive temporal-spatial pathways is proposed for automatically learning various target structures. The attentive spatial pathway highlights the salient region to generate bounding boxes and ignores irrelevant regions in an input image. The proposed attention mechanism allows efficient object localization, and overall predictive performance increases because there are fewer false positives in the object detection task for medical images with manual annotations. The experimental results show that the proposed models consistently increase the base architectures' predictive performance across different datasets and training sizes without undue computational cost.
HCMUS-Juniors 2020 at Medico Task in MediaEval 2020: Refined Deep Neural Netw...
Paper: http://ceur-ws.org/Vol-2882/paper20.pdf
YouTube: https://youtu.be/CVelQl5Luf0
Quoc-Huy Trinh, Minh-Van Nguyen, Thiet-Gia Huynh and Minh-Triet Tran : HCMUS-Juniors 2020 at Medico Task in MediaEval 2020: Refined Deep Neural Network and UNet for Polyps Segmentation. Proc. of MediaEval 2020, 14-15 December 2020, Online.
The Medico: Multimedia Task focuses on developing an efficient and accurate framework for computer-aided diagnosis systems, with automatic polyp segmentation to detect all types of polyps in endoscopic images of the gastrointestinal (GI) tract. We, the HCMUS team, propose a solution that combines Residual modules, Inception modules, and an adaptive convolutional neural network with the UNet and PraNet models to semantically segment all types of polyps in endoscopic images. We submit multiple runs with different architectures and parameters in our models. Our methods show promising accuracy and efficiency across multiple experiments.
Fine-tuning for Polyp Segmentation with Attention
Paper: http://ceur-ws.org/Vol-2882/paper15.pdf
Rabindra Khadka : Transfer of Knowledge: Fine-tuning for Polyp Segmentation with Attention. Proc. of MediaEval 2020, 14-15 December 2020, Online.
This paper describes how the transfer of prior knowledge, helped by attention mechanisms, can effectively tackle segmentation tasks. A UNet model pretrained on a brain MRI dataset was fine-tuned on the polyp dataset, with an attention mechanism integrated to focus on relevant regions of the input images. The implemented architecture is evaluated on 200 validation images using intersection over union and Dice score between the ground truth and the predicted region. The model demonstrates promising results with computational efficiency.
Bigger Networks are not Always Better: Deep Convolutional Neural Networks for...
Paper: http://ceur-ws.org/Vol-2882/paper12.pdf
Adrian Krenzer and Frank Puppe : Bigger Networks are not Always Better: Deep Convolutional Neural Networks for Automated Polyp Segmentation. Proc. of MediaEval 2020, 14-15 December 2020, Online.
This paper presents our team's (AI-JMU) approach to the Medico automated polyp segmentation challenge. We consider deep convolutional neural networks to be well suited for this task. To determine the best architecture, we test and compare state-of-the-art backbones and two different heads. We achieve a Jaccard index of 73.74% on the challenge test set. We further demonstrate that bigger networks do not always perform better, although growing network size always increases computational complexity.
Insights for wellbeing: Predicting Personal Air Quality Index using Regressio...
Paper: http://ceur-ws.org/Vol-2882/paper51.pdf
Amel Ksibi, Amina Salhi, Ala Alluhaidan and Sahar A. El-Rahman : Insights for wellbeing: Predicting Personal Air Quality Index using Regression Approach. Proc. of MediaEval 2020, 14-15 December 2020, Online.
Providing air pollution information to individuals enables them to understand the air quality of their living environments. Thus, the association between people's wellbeing and the properties of the surrounding environment is an essential area of investigation. This paper proposes predicting air quality by harvesting public/open data, which are usually incomplete, and leveraging them to compute a personal air quality index. To cope with missing data, we applied KNN imputation. To predict the personal air quality index, we apply a voting regression approach built on three base regressors: a gradient boosting regressor, a random forest regressor, and a linear regressor. Evaluating the experimental results with the RMSE metric, we obtained an average score of 35.39 for Walker and 51.16 for Car.
Use Visual Features From Surrounding Scenes to Improve Personal Air Quality ...
Paper: http://ceur-ws.org/Vol-2882/paper40.pdf
YouTube: https://youtu.be/SL5Hvu1mARY
Trung-Quan Nguyen, Dang-Hieu Nguyen and Loc Tai Tan Nguyen : Use Visual Features From Surrounding Scenes to Improve Personal Air Quality Data Prediction Performance. Proc. of MediaEval 2020, 14-15 December 2020, Online.
In this paper, we propose a method to predict the personal air quality index in an area by combining the levels of the pollutants PM2.5, NO2, and O3 measured at nearby weather stations with photos of surrounding scenes taken in that area. Our approach uses the Inverse Distance Weighted (IDW) technique to estimate missing air pollutant levels and then uses regression to integrate visual features from the photos to refine the predicted values. From those values we calculate the Air Quality Index (AQI). The results show that the proposed method may not improve prediction performance in some cases.
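The IDW step referenced above is a standard spatial interpolation: a missing pollutant level at a target location is estimated as a distance-weighted average of station readings. A minimal sketch (coordinates and the power parameter are illustrative):

```python
import math

def idw(target, stations, p=2):
    """Inverse Distance Weighted interpolation.

    stations: list of ((x, y), value) pairs; target: (x, y).
    """
    num = den = 0.0
    for (x, y), value in stations:
        d = math.hypot(target[0] - x, target[1] - y)
        if d == 0:
            return value  # target coincides with a station
        w = 1.0 / d ** p  # closer stations get more weight
        num += w * value
        den += w
    return num / den

# Estimate PM2.5 at the origin from two nearby stations.
pm25 = idw((0.0, 0.0), [((1.0, 0.0), 12.0), ((0.0, 2.0), 30.0)])
print(pm25)  # weighted toward the closer station -> 15.6
```

Larger p concentrates weight on the nearest stations; p=2 is the conventional default.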
SOCRadar Research Team: Latest Activities of IntelBroker
The European Union Agency for Law Enforcement Cooperation (Europol) has suffered an alleged data breach after a notorious threat actor claimed to have exfiltrated data from its systems. Infamous data leaker IntelBroker posted on the even more infamous BreachForums hacking forum, saying that Europol suffered a data breach this month.
The alleged breach affected Europol agencies CCSE, EC3, Europol Platform for Experts, Law Enforcement Forum, and SIRIUS. Infiltration of these entities can disrupt ongoing investigations and compromise sensitive intelligence shared among international law enforcement agencies.
However, this is neither the first nor the last activity of IntelBroker. We have compiled what has happened over the last few days. To track such hacker activities on dark web sources like hacker forums, private Telegram channels, and other hidden platforms where cyber threats often originate, you can check SOCRadar’s Dark Web News.
Stay Informed on Threat Actors’ Activity on the Dark Web with SOCRadar!
Climate Science Flows: Enabling Petabyte-Scale Climate Analysis with the Eart...
The Earth System Grid Federation (ESGF) is a global network of data servers that archives and distributes the planet’s largest collection of Earth system model output for thousands of climate and environmental scientists worldwide. Many of these petabyte-scale data archives are located close to large high-performance computing (HPC) or cloud computing resources, but the primary workflow for data users consists of transferring data and applying computations on a different system. As part of the ESGF 2.0 US project (funded by the United States Department of Energy Office of Science), we developed pre-defined, on-demand data workflows that apply data-reduction and data-analysis operations to the large ESGF data archives and transfer only the resulting products (e.g., visualizations, smaller data files). In this talk, we showcase a few of these workflows, highlighting how Globus Flows can be used for petabyte-scale climate analysis.
Understanding Globus Data Transfers with NetSage
NetSage is an open, privacy-aware network measurement, analysis, and visualization service designed to help end users visualize and reason about large data transfers. NetSage has traditionally used a combination of passive measurements, including SNMP and flow data, and active measurements, mainly perfSONAR, to provide longitudinal network performance data visualization. It has been deployed by dozens of networks worldwide and is supported domestically by the Engagement and Performance Operations Center (EPOC), NSF #2328479. We have recently expanded the NetSage data sources to include logs of Globus data transfers, following the same privacy-preserving approach as for flow data. Using the logs for the Texas Advanced Computing Center (TACC) as an example, this talk walks through several example questions that NetSage can answer, including: Who is using Globus to share data with my institution, and what performance are they able to achieve? How many transfers has Globus supported for us? Which sites are we sharing the most data with, and how is that changing over time? How is my site using Globus to move data internally, and what performance do we see for those transfers? What percentage of data transfers at my institution used Globus, and how did their performance compare to that of other transfers?
Developing Distributed High-performance Computing Capabilities of an Open Sci...
COVID-19 had an unprecedented impact on scientific collaboration. The pandemic and its broad response from the scientific community has forged new relationships among public health practitioners, mathematical modelers, and scientific computing specialists, while revealing critical gaps in exploiting advanced computing systems to support urgent decision making. Informed by our team’s work in applying high-performance computing in support of public health decision makers during the COVID-19 pandemic, we present how Globus technologies are enabling the development of an open science platform for robust epidemic analysis, with the goal of collaborative, secure, distributed, on-demand, and fast time-to-solution analyses to support public health.
Large Language Models and the End of Programming
Talk by Matt Welsh at Craft Conference 2024 on the impact that Large Language Models will have on the future of software development. In this talk, I discuss the ways in which LLMs will impact the software industry, from replacing human software developers with AI, to replacing conventional software with models that perform reasoning, computation, and problem-solving.
Paketo Buildpacks: the best way to build OCI images? DevopsDa... (Anthony Dahanne)
Buildpacks have been around for more than 10 years! At first, they were used to detect and build an application before deploying it to certain PaaS platforms. Then, with their latest generation, Cloud Native Buildpacks (a CNCF incubating project), we became able to build Docker (OCI) images. Are they a good alternative to the Dockerfile? What are the Paketo buildpacks? Which communities support them, and how?
Come find out in this ignite session.
Listen to the keynote address and hear about the latest developments from Rachana Ananthakrishnan and Ian Foster who review the updates to the Globus Platform and Service, and the relevance of Globus to the scientific community as an automation platform to accelerate scientific discovery.
A Comprehensive Look at Generative AI in Retail App Testing.pdf
Traditional software testing methods are being challenged in retail, where customer expectations and technological advancements continually shape the landscape. Enter generative AI—a transformative subset of artificial intelligence technologies poised to revolutionize software testing.
Check out the webinar slides to learn more about how XfilesPro transforms Salesforce document management by leveraging its world-class applications. For more details, please connect with sales@xfilespro.com
If you want to watch the on-demand webinar, please click here: https://www.xfilespro.com/webinars/salesforce-document-management-2-0-smarter-faster-better/
In 2015, I used to write extensions for Joomla, WordPress, phpBB3, etc and I ...Juraj Vysvader
In 2015, I used to write extensions for Joomla, WordPress, phpBB3, etc. I didn't get rich from it, but my extensions had 63K downloads (powering possibly tens of thousands of websites).
Software Engineering, Software Consulting, Tech Lead.
Spring Boot, Spring Cloud, Spring Core, Spring JDBC, Spring Security,
Spring Transaction, Spring MVC,
Log4j, REST/SOAP WEB-SERVICES.
OpenFOAM solver for Helmholtz equation, helmholtzFoam / helmholtzBubbleFoam
In these slides, we show a simulation example and how to compile the solvers.
The Helmholtz equation can be solved with helmholtzFoam; the Helmholtz equation with uniformly dispersed bubbles can be simulated with helmholtzBubbleFoam.
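For reference, the homogeneous Helmholtz equation that such solvers address has the standard form

```latex
\nabla^2 u + k^2 u = 0,
```

where \(u\) is the scalar field and \(k\) the wavenumber.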
Code reviews are vital for ensuring good code quality. They serve as one of our last lines of defense against bugs and subpar code reaching production.
Yet, they often turn into annoying tasks riddled with frustration, hostility, unclear feedback and lack of standards. How can we improve this crucial process?
In this session we will cover:
- The Art of Effective Code Reviews
- Streamlining the Review Process
- Elevating Reviews with Automated Tools
By the end of this presentation, you'll know how to organize and improve your code review process.
Top Nidhi software solution free download
This presentation emphasizes the importance of data security and legal compliance for Nidhi companies in India. It highlights how online Nidhi software solutions, like Vector Nidhi Software, offer advanced features tailored to these needs. Key aspects include encryption, access controls, and audit trails to ensure data security. The software complies with regulatory guidelines from the MCA and RBI and adheres to Nidhi Rules, 2014. With customizable, user-friendly interfaces and real-time features, these Nidhi software solutions enhance efficiency, support growth, and provide exceptional member services. The presentation concludes with contact information for further inquiries.
Providing Globus Services to Users of JASMIN for Environmental Data Analysis
JASMIN is the UK’s high-performance data analysis platform for environmental science, operated by STFC on behalf of the UK Natural Environment Research Council (NERC). In addition to its role in hosting the CEDA Archive (NERC’s long-term repository for climate, atmospheric science & Earth observation data in the UK), JASMIN provides a collaborative platform to a community of around 2,000 scientists in the UK and beyond, providing nearly 400 environmental science projects with working space, compute resources and tools to facilitate their work. High-performance data transfer into and out of JASMIN has always been a key feature, with many scientists bringing model outputs from supercomputers elsewhere in the UK, to analyse against observational or other model data in the CEDA Archive. A growing number of JASMIN users are now realising the benefits of using the Globus service to provide reliable and efficient data movement and other tasks in this and other contexts. Further use cases involve long-distance (intercontinental) transfers to and from JASMIN, and collecting results from a mobile atmospheric radar system, pushing data to JASMIN via a lightweight Globus deployment. We provide details of how Globus fits into our current infrastructure, our experience of the recent migration to GCSv5.4, and of our interest in developing use of the wider ecosystem of Globus services for the benefit of our user community.
In software engineering, the right architecture is essential for robust, scalable platforms. Wix has undergone a pivotal shift from event sourcing to a CRUD-based model for its microservices. This talk will chart the course of this pivotal journey.
Event sourcing, which records state changes as immutable events, provided robust auditing and "time travel" debugging for Wix Stores' microservices. Despite its benefits, the complexity it introduced in state management slowed development. Wix responded by adopting a simpler, unified CRUD model. This talk will explore the challenges of event sourcing and the advantages of Wix's new "CRUD on steroids" approach, which streamlines API integration and domain event management while preserving data integrity and system resilience.
Participants will gain valuable insights into Wix's strategies for ensuring atomicity in database updates and event production, as well as caching, materialization, and performance optimization techniques within a distributed system.
Join us to discover how Wix has mastered the art of balancing simplicity and extensibility, and learn how the re-adoption of the modest CRUD has turbocharged their development velocity, resilience, and scalability in a high-growth environment.
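The core event-sourcing mechanic the talk contrasts with CRUD can be sketched in a few lines. This is a minimal, illustrative sketch (event names are hypothetical, not Wix's implementation): state is never stored directly but rebuilt by replaying an append-only log of immutable events.

```python
def apply(state, event):
    """Fold one immutable event into the current state."""
    kind, payload = event
    if kind == "ItemAdded":
        state[payload["sku"]] = state.get(payload["sku"], 0) + payload["qty"]
    elif kind == "ItemRemoved":
        state[payload["sku"]] -= payload["qty"]
    return state

def replay(events):
    """Rebuild state from the log; replaying a prefix gives 'time travel'."""
    state = {}
    for event in events:
        state = apply(state, event)
    return state

log = [("ItemAdded", {"sku": "mug", "qty": 3}),
       ("ItemAdded", {"sku": "pen", "qty": 1}),
       ("ItemRemoved", {"sku": "mug", "qty": 2})]
print(replay(log))      # current state: {'mug': 1, 'pen': 1}
print(replay(log[:2]))  # state as of the second event: {'mug': 3, 'pen': 1}
```

The auditing and debugging benefits come for free from the log; the complexity the talk describes comes from everything else (projections, schema evolution, atomic write-plus-publish), which is what motivates the move back to CRUD.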
How Does XfilesPro Ensure Security While Sharing Documents in Salesforce?
Worried about document security while sharing them in Salesforce? Fret no more! Here are the top-notch security standards XfilesPro upholds to ensure strong security for your Salesforce documents while sharing with internal or external people.
To learn more, read the blog: https://www.xfilespro.com/how-does-xfilespro-make-document-sharing-secure-and-seamless-in-salesforce/
4. • Stravinsqi stands for STaff Representation Analysed VIa Natural language String Query Input.
1. Get question string and division value, analyse it and split into compound questions if necessary.
2. Load staff names, time signatures, and point-set representations of the piece.
3. Produce time-interval answers to elemental question(s).
4. Cross-check timings of answers to compound questions.
5. Convert time-interval answers to required format.
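The five steps above can be sketched in miniature for one compound query type. Everything below is an assumption made for illustration: the data layout (a piece as a point set of (onset, pitch) pairs, onsets in crotchets) and the function names are hypothetical, and the real system is written in Common Lisp over symbolic scores (the later slides load ".krn" files).

```python
# Toy sketch of steps 1-4 for a query of the form
# "<first> followed <gap> crotchets later by <second>".

def answer_elemental(piece, pitch):
    """Step 3: onsets (in crotchets) where the queried pitch occurs."""
    return [onset for onset, p in piece if p == pitch]

def answer_compound(piece, first, second, gap):
    """Steps 1 and 4: split the query, then cross-check timings."""
    firsts = answer_elemental(piece, first)
    seconds = set(answer_elemental(piece, second))
    # Keep only windows whose timings cross-check: the second event must
    # fall exactly `gap` crotchets after the first.
    return [(t, t + gap) for t in firsts if t + gap in seconds]

# A piece as a point set: (onset, pitch) pairs.
piece = [(0, "F#"), (1, "A"), (2, "G"), (3, "F#")]
print(answer_compound(piece, "F#", "G", 2))  # [(0, 2)]
```

Step 5 would then convert such (start, end) windows into the bar/beat output format required by the evaluation.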
12. #| Let's begin by looking at Morley's 'April is in my mistress' face'. |#
(setq notation-name "f1")
(setq notation-path&name
      (merge-pathnames
       (make-pathname :name notation-name :type "krn")
       notation-path))
(setq question-number "001")
13. #| *********************************
   ****  1. ANALYSE QUESTION STRING  ****
   ********************************* |#
; Load question string and division value.
(setq question&division
      (c@merata2014-question-file2question-string
       question-number question-path&name notation-name))
; --> ("F# followed two crotchets later by a G" 1)
(setq division (second question&division))
; --> 1
14. ; Split up compound questions.
(setq questions (followed-by-splitter (first question&division)))
; --> (("F#" 0) ("G" 2))
; Try US version of the question string.
(setq questions
      (followed-by-splitter "F# followed two quarter notes later by a G"))
; --> (("F#" 0) ("G" 2))
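The behaviour of `followed-by-splitter` on the examples above can be sketched in Python. This is a minimal re-implementation inferred only from the inputs and outputs shown on these slides: the regular expression, the supported number words, and the duration vocabulary are all assumptions, and the 'consecutive' form from the later slide is not handled.

```python
import re

NUMBER_WORDS = {"one": 1, "two": 2, "three": 3, "four": 4}

def followed_by_splitter(question):
    """Split a compound query such as "F# followed two crotchets later
    by a G" into (sub-question, offset) pairs, tagging bar-based offsets
    with "bars". Hypothetical sketch of the real Lisp function."""
    m = re.match(
        r"(?P<a>.+?)\s+followed\s+(?P<n>\w+)\s+"
        r"(?P<unit>crotchets?|quarter\s+notes?|bars?)\s+later\s+by\s+a\s+"
        r"(?P<b>.+)",
        question)
    if m is None:
        # Not a recognised compound form: treat as one elemental question.
        return [(question, 0)]
    offset = NUMBER_WORDS.get(m.group("n"), 0)
    second = ((m.group("b"), offset, "bars")
              if m.group("unit").startswith("bar")
              else (m.group("b"), offset))
    return [(m.group("a"), 0), second]
```

As on the slides, junk words are simply carried along inside the sub-questions rather than rejected.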
15. ; Try junk version of the question string.
(setq questions
      (followed-by-splitter "F# blah followed two crotchets later by a blah G"))
; --> (("F# blah" 0) ("blah G" 2))
; Try reference to bars rather than a specific duration.
(setq questions
      (followed-by-splitter "F# followed two bars later by a G"))
; --> (("F#" 0) ("G" 2 "bars"))
16. ; Try 'consecutive' question.
(setq questions
      (followed-by-splitter "F# followed by two consecutive crotchets"))
; --> (("F#" 0) ("crotchet" 0) ("crotchet" 0))
17. ; Try linguistic variant of compound query.
(setq questions
      (followed-by-splitter
       (concatenate 'string
        "F# then two bars later a G "
        "then consecutive fifths in the left hand")))
#| --> (("F#" 0) ("G" 2 "bars")
        ("fifths in the left hand" 0)
        ("fifths in the left hand" 0)) |#
18. ; Back to original question.
(setq questions (followed-by-splitter (first question&division)))
; --> (("F#" 0) ("G" 2))
19. #| ***************************************
   ****  2. LOAD REPRESENTATIONS OF PIECE  ****
   *************************************** |#
#| Load staff names and names of the opening clefs on each staff. |#
(setq staff&clef-names
      (staves-info2staff&clef-names notation-path&name))
#| --> (("Bass" "bass clef") ("Tenor" "tenor clef")
        ("Alto" "treble clef") ("Soprano" "treble clef")) |#
20. #| Load time-signature information and any changes across the piece. |#
(setq ontimes-signatures
      (append-ontimes-to-time-signatures
       (kern-file2ontimes-signatures notation-path&name)))
; --> ((1 4 4 0))
; If there was a change, it might look like:
; --> ((1 4 4 0) (9 3 2 32))
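The ontime column appended by `append-ontimes-to-time-signatures` can be reproduced with a short sketch. The row layout (bar, numerator, denominator) and the convention of counting ontimes in crotchet beats from the start are inferred from the two example outputs above, not taken from the Stravinsqi source:

```python
def append_ontimes_to_time_signatures(sigs):
    """Append to each (bar, numerator, denominator) time-signature change
    the ontime, in crotchet beats from the start of the piece, at which
    it takes effect. Hypothetical re-implementation."""
    out = []
    ontime = 0
    prev_bar, crotchets_per_bar = 1, 0
    for bar, num, den in sigs:
        # Bars elapsed under the previous signature, converted to crotchets.
        ontime += (bar - prev_bar) * crotchets_per_bar
        out.append((bar, num, den, ontime))
        # A bar of num/den lasts num * 4/den crotchets (e.g. 3/2 -> 6).
        prev_bar, crotchets_per_bar = bar, num * 4 // den
    return out
```

For the slide's example, eight bars of 4/4 before the change at bar 9 give an ontime of 8 × 4 = 32 crotchets.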
25. ; This is the function of interest on this occasion...
(setq ans-pitch
      (mapcar
       #'(lambda (x)
           (pitch-class-time-intervals
            (first x) point-set staff&clef-names))
       questions))
#| --> (((32 36) (57 115/2) (58 60))
        ((12 13) (12 13) (24 32) (26 27) (28 32) (46 48) (48 50)
         (49 50) (50 52) (50 52) (54 57) (60 62) (60 61) (61 62)
         (62 63) (63 64))) |#
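What `pitch-class-time-intervals` appears to compute can be sketched as a scan over the point-set representation for notes of the requested pitch class, returning their (ontime, offtime) windows. The point-set row layout used here is an assumption, and the real function also consults the staff and clef names:

```python
PITCH_CLASSES = {"C": 0, "C#": 1, "D": 2, "Eb": 3, "E": 4, "F": 5,
                 "F#": 6, "G": 7, "Ab": 8, "A": 9, "Bb": 10, "B": 11}

def pitch_class_time_intervals(pitch_name, point_set):
    """Return sorted (ontime, offtime) intervals during which a note of
    the given pitch class sounds. Each point-set row is assumed to be
    (ontime in crotchets, MIDI note number, duration in crotchets).
    Hypothetical sketch of the real Lisp function."""
    pc = PITCH_CLASSES[pitch_name]
    return sorted((ontime, ontime + duration)
                  for ontime, midi, duration in point_set
                  if midi % 12 == pc)
```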
26. #| ****************************************************
   ****  4. Cross-check timings for compound questions  ****
   **************************************************** |#
; Put the answers next to one another in one list.
(setq time-intervals
      (list ans-harmonic-interval ans-melodic-interval ans-dur-pitch
            ans-pitch ans-dur ans-nadir-apex ans-triad
            ans-triad-inversion ans-texture ans-cadence
            ans-word&event ans-word ans-artic ans-dur-rest
            ans-tied&event))
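The cross-checking step can be sketched as an interval join: keep a pair of answers only when the second event begins the stated number of crotchets after the first. This is one plausible reading of "F# followed two crotchets later by a G"; the exact matching rule Stravinsqi uses may differ in detail:

```python
def cross_check(first_intervals, second_intervals, offset):
    """Pair up answers to the two halves of a compound question: keep
    (a, b) whenever the second event's ontime is `offset` crotchets
    after the first event's ontime. Hypothetical sketch of step 4."""
    return [(a, b)
            for a in first_intervals
            for b in second_intervals
            if b[0] == a[0] + offset]
```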
33. • The Stravinsqi algorithm has effectively solved seven of the twelve C@merata task categories (pitch, duration, pitch and duration, articulation, voice specific, lyrics, and melodic interval)
• As for the remaining five categories, precision for one category (compound queries) can be improved by fixing a selection-criteria bug
• More data are required for the triad, cadence, and texture query categories
• A big “thank you” to Richard Sutcliffe and team
37. Researchers at CCARH for creating and hosting kern scores
Collaborators at:
Johannes Kepler University: Gerhard Widmer, Andreas Arzt, Sebastian Böck, and Sebastian Flossmann
Center for Mind and Brain, UC Davis: Petr Janata, Fred Barrett, and Joy Geng
Aalborg University: David Meredith
University of Lyon: Barbara Tillmann
The Open University: Robin Laney, Alistair Willis, and Paul Garthwaite
Funding bodies: Austrian Science Fund (FWF) project number Z159 (Wittgenstein Grant), NSF grant #1025310, EPSRC
38. Code for Stravinsqi (as well as other projects): http://www.tomcollinsresearch.net
40. [Musical score: Allegro non tanto — beginning of op. 56 no. 1 by Chopin]