View- and Scale-Based Progressive Transmission of Vector Data
Padraig Corcoran, Adam Winstanley, Peter Mooney - National University of Ireland Maynooth
Michela Bertolotto - University College Dublin
A PROGRESSIVE MESH METHOD FOR PHYSICAL SIMULATIONS USING LATTICE BOLTZMANN METHOD - ijdpsjournal
In this paper, a new progressive mesh algorithm is introduced to perform fast physical simulations with a lattice Boltzmann method (LBM) on a single-node multi-GPU architecture. The algorithm automatically meshes the simulation domain according to the propagation of fluids, and can also be applied to several other types of physical simulation. We associate the algorithm with a multiphase and multicomponent lattice Boltzmann model (MPMC-LBM), which can perform various types of simulations on complex geometries. Combined with the massive parallelism of GPUs, the algorithm achieves very good performance compared with the static-mesh method used in the literature. Several simulations are presented to evaluate the algorithm.
Reconfiguration layers of convolutional neural network for fundus patches classification - journalBEEI
Convolutional neural network (CNN) is a supervised deep learning method. Architectures such as AlexNet, VGG16, VGG19, ResNet50, ResNet101, GoogleNet, Inception-V3, Inception-ResNet-V2, and SqueezeNet have 25 to 825 layers. This study aims to simplify the layers of CNN architectures while increasing accuracy for fundus patch classification. Fundus patches are classified into two categories: normal and neovascularization. The data used for classification come from MESSIDOR and the Retina Image Bank, totalling 2,080 patches. Results show a best accuracy of 93.17% on the original data and 99.33% on the augmented data using a 31-layer CNN, consisting of an input layer, 7 convolutional layers, 7 batch normalization layers, 7 rectified linear units, 6 max-pooling layers, a fully connected layer, a softmax layer, and an output layer.
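As a rough sketch of how the 31 layers enumerated in the abstract compose: channel widths and kernel sizes are not given there, so this only reproduces the layer sequence, with names invented for illustration.

```python
# Illustrative composition of the 31-layer CNN described above.
# Only the layer *sequence* comes from the abstract; no widths or
# kernel sizes are specified because none are given.
def build_layer_list():
    layers = ["input"]
    for i in range(7):  # 7 conv blocks: conv + batch-norm + ReLU
        layers += [f"conv{i + 1}", f"batchnorm{i + 1}", f"relu{i + 1}"]
        if i < 6:       # 6 max-pooling layers (none after the last block)
            layers.append(f"maxpool{i + 1}")
    layers += ["fully_connected", "softmax", "output"]
    return layers

print(len(build_layer_list()))  # 31
```

Counting confirms the abstract's arithmetic: 1 input + 7x3 block layers + 6 pooling + 3 head layers = 31.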
A Review on Image Compression in Parallel using CUDA - IJERD Editor
Nowadays images are very large, so they do not fit easily into applications, and image compression is required. Image compression algorithms are resource-intensive and take considerable time to complete. This problem can be overcome with a parallel implementation of the compression algorithm. CUDA (Compute Unified Device Architecture), NVIDIA's parallel computing platform, provides parallel execution through multi-threading on the GPU (Graphics Processing Unit), whose many cores support parallel execution, so image compression can also be implemented in parallel using CUDA. Among the many image compression algorithms, the DWT (Discrete Wavelet Transform) is best suited to parallel implementation because of its heavy mathematical computation and good compression results compared to other methods. This paper surveys different parallel techniques for image compression. Implementing a compression algorithm on the GPU with CUDA performs its operations in parallel, so a large reduction in processing time is possible and the performance of image compression algorithms can be improved.
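The review targets a CUDA implementation of the DWT; as background, here is a sequential sketch of one level of the 1-D Haar transform, the simplest wavelet usually used to illustrate the idea (the paper itself does not fix a wavelet family, so Haar is an assumption here).

```python
# One level of the 1-D Haar DWT: pairwise averages (approximation) and
# differences (detail). Each output pair depends only on its own two
# inputs, which is what makes the transform easy to map onto GPU threads.
def haar_dwt_1d(signal):
    assert len(signal) % 2 == 0, "needs an even-length signal"
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail

a, d = haar_dwt_1d([4, 6, 10, 12, 8, 8, 0, 2])
print(a, d)  # [5.0, 11.0, 8.0, 1.0] [-1.0, -1.0, 0.0, -1.0]
```

A 2-D image DWT applies this along rows and then columns; in CUDA each pair (or each row) would be handled by a separate thread.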
MediaEval 2016 - MLPBOON Predicting Media Interestingness System - multimediaeval
Presenter: Jayneel Parekh
Jayneel Parekh and Sanjeel Parekh, "The MLPBOON Predicting Media Interestingness System for MediaEval 2016," in Working Notes Proceedings of the MediaEval 2016 Workshop, Hilversum, Netherlands, October 20-21, CEUR-WS.org (2016).
Paper: http://ceur-ws.org/Vol-1739/MediaEval_2016_paper_25.pdf
Video: https://youtu.be/nAnrdYiy7nc
Abstract: This paper describes the system developed by team MLPBOON for the MediaEval 2016 Predicting Media Interestingness Image Subtask. After experimenting with various features and classifiers on the development dataset, our final system uses CNN features (the fc7 layer of AlexNet) as the input representation and logistic regression as the classifier. For the proposed method, the MAP of the best run reaches 0.229.
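The classifier named above is plain logistic regression over precomputed feature vectors. A minimal from-scratch sketch of that idea (the fc7 features, learning rate, and iteration count here are illustrative stand-ins, not the system's actual settings):

```python
import math

# Minimal logistic regression trained by stochastic gradient descent,
# of the kind applied to precomputed CNN feature vectors.
def train_logistic(X, y, lr=0.5, iters=2000):
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(iters):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-max(-30.0, min(30.0, z))))  # clamped sigmoid
            g = p - yi                       # gradient of the log loss
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b > 0 else 0

# Toy 2-D "features": class 1 when the first component dominates.
X = [[1.0, 0.1], [0.9, 0.2], [0.1, 1.0], [0.2, 0.8]]
y = [1, 1, 0, 0]
w, b = train_logistic(X, y)
print([predict(w, b, x) for x in X])  # [1, 1, 0, 0]
```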
DYNAMIC TASK SCHEDULING BASED ON BURST TIME REQUIREMENT FOR CLOUD ENVIRONMENT - IJCNCJournal
Cloud computing has an indispensable role in the modern digital scenario. The fundamental challenge of cloud systems is to accommodate user requirements, which keep varying. This dynamic environment demands sophisticated algorithms to solve the problem of task allotment, since the overall performance of cloud systems is rooted in the efficiency of their task scheduling algorithms, and the dynamic nature of cloud systems makes it challenging to find an optimal solution satisfying all evaluation metrics. The new approach is built on the Round Robin and Shortest Job First algorithms: Round Robin reduces starvation, and Shortest Job First decreases the average waiting time. In this work, the advantages of both algorithms are combined to improve the makespan of user tasks.
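One natural way to combine the two algorithms the abstract names is to order the ready queue by burst time (Shortest Job First) and then serve it with a fixed time quantum (Round Robin). This is a hedged sketch of that combination, not the paper's exact algorithm; the quantum value is an illustrative assumption.

```python
from collections import deque

def hybrid_schedule(bursts, quantum=4):
    """bursts: {task: burst_time}. Returns {task: completion_time}."""
    # SJF ordering of the ready queue, then RR service with a quantum.
    queue = deque(sorted(bursts.items(), key=lambda kv: kv[1]))
    time, completion = 0, {}
    while queue:
        task, remaining = queue.popleft()
        run = min(quantum, remaining)
        time += run
        if remaining > run:
            queue.append((task, remaining - run))  # pre-empt and requeue
        else:
            completion[task] = time
    return completion

print(hybrid_schedule({"A": 3, "B": 6, "C": 9}))
```

Short jobs finish early (reducing average waiting time, the SJF benefit) while long jobs still get periodic service (avoiding starvation, the RR benefit).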
A NOVEL GRAPH REPRESENTATION FOR SKELETON-BASED ACTION RECOGNITION - sipij
Graph convolutional networks (GCNs) have proven effective for processing structured data: they capture the features of related nodes and improve model performance, and increasing attention is being paid to employing GCNs in skeleton-based action recognition. However, existing GCN-based methods face some challenges. First, the consistency of temporal and spatial features is ignored because features are extracted node by node and frame by frame. We design a generic representation of skeleton sequences for action recognition and propose a novel model called Temporal Graph Networks (TGN), which obtains spatiotemporal features simultaneously. Second, the adjacency matrix describing the relations between joints mostly depends on their physical connections. We propose a multi-scale graph strategy that adopts a full-scale graph, a part-scale graph, and a core-scale graph to capture both the local features of each joint and the contour features of important joints. Extensive experiments on two large datasets, NTU RGB+D and Kinetics Skeleton, show that TGN with our graph strategy outperforms other state-of-the-art methods.
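The multi-scale idea can be illustrated on a toy skeleton: the same body is seen as a full-scale graph (physical bones), a part-scale graph (joints merged into body parts), and a core-scale graph (only "important" joints kept). The joint and part groupings below are invented for illustration; the paper defines its own scales for the NTU/Kinetics skeletons.

```python
joints = ["head", "torso", "l_hand", "r_hand", "hip"]
bones = [("head", "torso"), ("torso", "l_hand"),
         ("torso", "r_hand"), ("torso", "hip")]

def adjacency(nodes, edges):
    # Symmetric 0/1 adjacency matrix over the given node order.
    idx = {n: i for i, n in enumerate(nodes)}
    A = [[0] * len(nodes) for _ in nodes]
    for u, v in edges:
        A[idx[u]][idx[v]] = A[idx[v]][idx[u]] = 1
    return A

full_scale = adjacency(joints, bones)                   # physical bones
part_scale = adjacency(["upper", "arms", "lower"],      # joints merged into parts
                       [("upper", "arms"), ("upper", "lower")])
core_scale = adjacency(["torso", "hip"],                # important joints only
                       [("torso", "hip")])
```

A GCN layer then convolves over each adjacency matrix separately, so the model sees both fine-grained joint relations and coarse body-contour relations.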
Mlp mixer image_process_210613 deeplearning paper review! - taeseon ryu
Hello, this is the Deep Learning Paper Reading Group!
The paper we introduce today is titled MLP-Mixer.
It is currently available only on arXiv and was published by the Google Brain team.
CNNs are the layers most widely used in computer vision, but recently networks such as the Transformer have begun entering the vision domain as well, achieving SOTA in several areas. This paper succeeds in achieving results competitive with recent work using only multi-layer perceptrons.
Dawoon Heo of the image processing team kindly provided a detailed review of the paper. As always, thank you in advance for your interest!
Memory Efficient Graph Convolutional Network based Distributed Link Prediction - miyurud
Graph Convolutional Networks (GCNs) have found multiple applications in graph-based machine learning. However, training GCNs on large graphs with billions of nodes and edges and rich node attributes consumes significant time and memory, making it impossible to train such GCNs on general-purpose commodity hardware; such use cases demand high-end servers with accelerators and ample memory. In this paper we implement memory-efficient GCN-based link prediction on top of a distributed graph database server called JasmineGraph. Our approach is based on federated training on partitioned graphs with multiple parallel workers. We conduct experiments with three real-world graph datasets, DBLP-V11, Reddit, and Twitter, and demonstrate that our approach produces optimal performance for a given hardware setting. JasmineGraph was able to train a GCN on the largest dataset, DBLP-V11 (>10 GB), in 20 hours and 24 minutes for 5 training rounds and 3 epochs by partitioning it into 16 partitions with 2 workers on a single server, while the conventional training method could not process it at all for lack of memory. The second-largest dataset, Reddit, took 9 hours 8 minutes to train conventionally, while JasmineGraph took only 3 hours and 11 minutes with 8 partitions and 4 workers on the same hardware, a threefold improvement. On the Twitter dataset JasmineGraph gave a fivefold improvement (10 hours 31 minutes vs. 2 hours 6 minutes; 16 partitions, 16 workers).
Deep Learning Fast MRI Using Channel Attention in Magnitude Domain - Joonhyung Lee
My presentation on how we participated in the fastMRI Challenge in 2019.
Aside from theoretical considerations, it also explains key implementation issues that arise in all deep learning for MRI, such as disk I/O and CPU/GPU load balancing.
Used for the oral presentation at ISBI 2020.
Accidentally wrote the title as "Deep Learning Sum-of-Squares Images in Accelerated Parallel MRI". Sorry for the mistake!
A Study of BFLOAT16 for Deep Learning Training - Subhajit Sahu
Highlighted notes of:
A Study of BFLOAT16 for Deep Learning Training
This paper presents the first comprehensive empirical study demonstrating the efficacy of the Brain Floating Point (BFLOAT16) half-precision format for Deep Learning training across image classification, speech recognition, language modeling, generative networks, and industrial recommendation systems. BFLOAT16 is attractive for Deep Learning training for two reasons: the range of values it can represent is the same as that of the IEEE 754 floating-point format (FP32), and conversion to/from FP32 is simple. Maintaining the same range as FP32 is important to ensure that no hyper-parameter tuning is required for convergence; e.g., IEEE 754-compliant half-precision floating point (FP16) requires hyper-parameter tuning. In this paper, we discuss the flow of tensors and various key operations in mixed-precision training and delve into details of operations, such as the rounding modes for converting FP32 tensors to BFLOAT16. We have implemented a method to emulate BFLOAT16 operations in TensorFlow, Caffe2, IntelCaffe, and Neon for our experiments. Our results show that deep learning training using BFLOAT16 tensors achieves the same state-of-the-art (SOTA) results across domains as FP32 tensors in the same number of iterations and with no changes to hyper-parameters.
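One of the rounding modes the paper examines, round-to-nearest-even, amounts to keeping the top 16 bits of the FP32 bit pattern, since BF16 keeps FP32's sign bit and 8 exponent bits and truncates the mantissa from 23 to 7 bits. A bit-level sketch (NaN handling omitted for brevity):

```python
import struct

def fp32_to_bf16_bits(x):
    # Reinterpret the float as its 32-bit pattern, then round the low
    # 16 bits away with round-to-nearest-even (ties round to even).
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    lsb = (bits >> 16) & 1
    return ((bits + 0x7FFF + lsb) >> 16) & 0xFFFF

def bf16_bits_to_fp32(b):
    # A BF16 value is exactly the FP32 value with 16 zero low bits.
    return struct.unpack("<f", struct.pack("<I", b << 16))[0]

assert bf16_bits_to_fp32(fp32_to_bf16_bits(1.0)) == 1.0
assert bf16_bits_to_fp32(fp32_to_bf16_bits(3.140625)) == 3.140625  # exact in 7 mantissa bits
```

Because the exponent field is unchanged, the representable range matches FP32, which is exactly the property the paper credits for BF16 needing no hyper-parameter retuning.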
A presentation on the "no new UNet" model, which attempts to automate hyper-parameter selection for medical image segmentation. The paper was accepted to Nature Methods.
SpecAugment: Park, Daniel S., et al. "SpecAugment: A Simple Data Augmentation Method for Automatic Speech Recognition." Proc. Interspeech 2019 (2019): 2613-2617. Review by June-Woo Kim.
THRESHOLD BASED VM PLACEMENT TECHNIQUE FOR LOAD BALANCED RESOURCE PROVISIONING - IJCNCJournal
Load imbalance is a multi-variable, multi-constraint problem that degrades the performance and productivity of computing resources. Load balancing techniques address the two undesirable situations of overloading and underloading. Cloud computing relies on scheduling and load balancing for a virtualized environment with resource sharing across the cloud infrastructure, and both must be handled well to achieve optimal resource sharing; hence, efficient resource reservation is required to guarantee load optimization in the cloud. This work presents an integrated resource reservation and load balancing algorithm for effective cloud provisioning. The approach develops a Priority-based Resource Scheduling Model to obtain resource reservation with threshold-based load balancing, improving the efficiency of the cloud framework. Better utilization of virtual machines through appropriate workload adjustment is then achieved by dynamically selecting a job from the submitted jobs using the Priority-based Resource Scheduling Model. Experimental evaluations show that the proposed scheme gives better results, reducing execution time with minimal resource cost and improved resource utilization under dynamic resource provisioning conditions.
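The threshold-based part of the scheme can be sketched as follows: hosts above an upper utilization threshold shed VMs to the least-loaded host that is still below a lower threshold. The threshold values and the "migrate the smallest VM first" policy are illustrative assumptions, not the paper's exact parameters.

```python
UPPER, LOWER = 0.8, 0.3  # illustrative utilization thresholds

def rebalance(hosts):
    """hosts: {host: [vm_load, ...]} with loads as fractions of capacity.
    Returns the list of migrations performed as (vm, source, target)."""
    moves = []
    for host, vms in hosts.items():
        while sum(vms) > UPPER and len(vms) > 1:
            target = min(hosts, key=lambda h: sum(hosts[h]))
            if target == host or sum(hosts[target]) >= LOWER:
                break  # no under-loaded host left to receive a VM
            vm = min(vms)              # migrate the smallest VM first
            vms.remove(vm)
            hosts[target].append(vm)
            moves.append((vm, host, target))
    return moves

hosts = {"h1": [0.5, 0.4, 0.2], "h2": [0.1], "h3": [0.4]}
print(rebalance(hosts))  # [(0.2, 'h1', 'h2')]
```

Here h1 (total 1.1) sheds its smallest VM to h2; further migration stops once h2 reaches the lower threshold, which bounds the number of migrations.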
Modelling Proximal Space in Urban Cellular Automata
Ivan Blečić, Arnaldo Cecchini, Giuseppe A. Trunfio - Department of Architecture, Planning and Design, University of Sassari, Alghero
Improvement of Spatial Data Quality Using the Data Conflation - Beniamino Murgante
Improvement of Spatial Data Quality Using the Data Conflation
Silvija Stankute, Hartmut Asche - Geoinformation Research Group, Department of Geography, University of Potsdam
Accessibility Analysis and Modeling in Public Transport Networks - A Raster based Approach - Beniamino Murgante
Accessibility Analysis and Modeling in Public Transport Networks - A Raster based Approach
Morten Fuglsang - National Environmental Research Institute, Aarhus University and Aalborg University Copenhagen
Henning Sten Hansen - Aalborg University Copenhagen
Bernd Münier - National Environmental Research Institute, Aarhus University
Hierarchical clustering through spatial interaction data. The case of commuting flows in South-Eastern France - Beniamino Murgante
Hierarchical clustering through spatial interaction data. The case of commuting flows in South-Eastern France
Giovanni Fusco, Matteo Caglioni - University of Nice Sophia-Antipolis
Quantitative Analysis of Pollutant Emissions in the Context of Demand Responsive Transport - Beniamino Murgante
Quantitative Analysis of Pollutant Emissions in the Context of Demand Responsive Transport
Julie Prud'homme, Didier Josselin, Jagannath Aryal - University of Avignon
Conceptual approach to measure the potential of Urban Heat Islands from Landuse datasets and Landuse projections - Beniamino Murgante
Conceptual approach to measure the potential of Urban Heat Islands from Landuse datasets and Landuse projections
Christian Daneke, Benjamin Bechtel, Jürgen Böhner, Thomas Langkamp, Jürgen Oßenbrügge - University of Hamburg
Mapping the anthropic backfill of the historical center of Rome (Italy) by using Intrinsic Random Functions of order k (IRF-k) - Beniamino Murgante
Mapping the anthropic backfill of the historical center of Rome (Italy) by using Intrinsic Random Functions of order k (IRF-k)
Ciotoli Giancarlo, Francesco Stigliano, Fabrizio Marconi, Massimiliano Moscatelli, Marco Mancini, Gian Paolo Cavinato - Institute of Environmental Geology and Geo-engineering (I.G.A.G.), National Research Council, Italy
GIS and Remote Sensing to study urban-rural transformation during a fifty-year period - Beniamino Murgante
GIS and Remote Sensing to study urban-rural transformation during a fifty-year period
Carmelo Riccardo Fichera, Giuseppe Modica - Mediterranea University of Reggio Calabria
Maurizio Pollino - National Agency for New Technologies, Energy and Sustainable Economic Development (ENEA, UTMEA-TER)
An Adaptive Neural Network-Based Method for Tile Replacement in a Web Map Cache - Beniamino Murgante
An Adaptive Neural Network-Based Method for Tile Replacement in a Web Map Cache
Ricardo García Martín, Juan Pablo de Castro Fernández, María Jesús Verdú Pérez, Elena Verdú Pérez, Luisa María Regueras Santos, Pablo López Escobés - Higher Technical School of Telecommunications Engineering, University of Valladolid
GaruaGeo: Global Scale Data Aggregation in Hybrid Edge and Cloud Computing Environments - Otávio Carvalho
Research work published at the 9th International Conference on Cloud Computing and Services Science (CLOSER 2019), held in Heraklion, Crete.
The combination of edge computing devices and cloud computing resources brings the best of both worlds: data aggregation closer to the source and scalable resources to grow the network on demand. However, leveraging ever more powerful edge nodes to decentralize data processing and aggregation is still a significant challenge for both industry and academia. In this work, we extend the Garua platform to analyze the impact of a data aggregation model on a global-scale smart grid application dataset. The platform is extended to support global data aggregators placed near the edge nodes where data is collected. This makes it possible to aggregate data not only at the edge of the network but also to pre-process it in nearby geographic areas before it is aggregated globally by centralization nodes. The results show that the implemented testbed application, through edge node aggregation, geographically distributed data aggregators, and messaging windows, can achieve collection rates above 400 million measurements per second.
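The tiered aggregation described above can be sketched as follows: edge nodes reduce raw measurements into fixed-size message windows, regional aggregators combine windows from nearby edges, and a global node combines the regional summaries. The window size and the (count, sum) summary format are illustrative assumptions, not Garua's actual wire format.

```python
WINDOW = 4  # illustrative message-window size

def edge_windows(measurements):
    """Reduce raw readings into (count, sum) summaries, one per window."""
    return [(len(w), sum(w))
            for w in (measurements[i:i + WINDOW]
                      for i in range(0, len(measurements), WINDOW))]

def combine(summaries):
    """Merge (count, sum) summaries; reused at regional and global tiers."""
    return (sum(c for c, _ in summaries), sum(s for _, s in summaries))

region_a = combine(edge_windows([1, 2, 3, 4, 5]))  # (5, 15)
region_b = combine(edge_windows([10, 10]))         # (2, 20)
print(combine([region_a, region_b]))               # (7, 35)
```

Because `combine` is associative, the same reduction can run at every tier, which is what lets the global node work on small regional summaries rather than raw measurements.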
'How to build efficient backend based on microservice architecture' by Anton ... - OdessaJS Conf
This talk is about microservices and the approaches and practices used in building them: how to build communication between microservices effectively, and which approaches are commonly used for this.
We will also talk a little about distributed transactions, and touch on infrastructure, monitoring, and scaling components. I want to inspire my listeners to develop themselves in the direction of backend development and to look towards scalable application architecture.
You cannot find this information in the documentation :) The talk also includes real-life examples.
AME-1934: Enable Active-Active Messaging Technology to Extend Workload Balancing - wangbo626
Session Type : Breakout Session
Date/Time : Thu, 26-Feb, 10:30 AM-11:30 AM
Venue : Mandalay Bay
Room : Surf Ballroom E
Descriptions:
Active-Active is the target model of the modern data center. Its successful adoption involves not only the mainframe but also heterogeneous and peripheral distributed platforms, which makes it complex to implement. Data synchronization is at the heart of the various active-active technologies, and messaging technology was chosen for its implementation.
This session gives an overview of active-active technologies on both z and distributed platforms; highlight how does the Active-Active gives the benefits of both high availability and workload balancing, we also discuss China customer cases to implement messaging based active-active.
Resource-aware and incremental mosaics of wide areas from small-scale UAVs - bhaskar reddy gurram
We study the problem of placing mobile sensors to achieve high coverage, based on Voronoi diagrams.
BASIC PROTOCOLS, VIRTUAL MOVEMENT PROTOCOLS
The presentation slides of my Ph.D. thesis. For more information - https://kkpradeeban.blogspot.com/2019/07/my-phd-defense-software-defined-systems.html
Rethinking the Mobile Code Offloading Paradigm: From Concept to Practice - MobileSoft
Rethinking the Mobile Code Offloading Paradigm: From Concept to Practice, by José I. Benedetto, Andrés Neyem, Jaime Navón and Guillermo Valenzuela. MobileSoft 2017, Buenos Aires.
Improving Resource Utilization in Cloud using Application Placement Heuristics - AtakanAral
Application placement is an important concept when providing software as a service in cloud environments. Because of the potential downtime cost of application migration, additional resource acquisition is usually preferred over migrating the applications residing in the virtual machines (VMs). This situation results in under-utilized resources. To overcome this problem, static or dynamic estimations of the resource requirements of VMs and/or applications can be performed.
A simpler strategy is to use heuristics during the application placement process instead of naively applying greedy strategies such as round-robin. In this paper, we propose a number of novel heuristics and compare them with the round-robin placement strategy and several placement heuristics proposed in the literature to explore the performance of heuristics in the application placement problem. Our focus is to better utilize the resources offered by the cloud environment while minimizing the number of application migrations. Our results indicate that an application heuristic that relies on the difference between the maximum and minimum utilization rates of the resources not only outperforms other application placement approaches but also significantly improves on the conventional approaches in the literature.
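The abstract's best-performing heuristic keys on the spread between a host's most- and least-utilized resources. The sketch below is a hypothetical reading of that idea (the paper's exact scoring rule, resource dimensions, and figures are not given here): place each application on the candidate host that would end up with the smallest max-min utilization difference.

```python
# Hypothetical sketch of a min-max utilization-spread placement heuristic.
# Host and application resource figures are illustrative, not from the paper.

def utilization_spread(host_used, host_capacity, app_demand):
    """Max-min utilization difference if the app were placed on this host."""
    utils = [(u + d) / c for u, d, c in zip(host_used, app_demand, host_capacity)]
    if any(u > 1.0 for u in utils):
        return None  # host cannot fit the application
    return max(utils) - min(utils)

def place(hosts, app_demand):
    """hosts: list of (used, capacity) vectors per resource dimension."""
    best, best_score = None, float("inf")
    for i, (used, cap) in enumerate(hosts):
        score = utilization_spread(used, cap, app_demand)
        if score is not None and score < best_score:
            best, best_score = i, score
    return best

hosts = [
    ([4, 1], [8, 8]),   # CPU-heavy host: unbalanced after most placements
    ([3, 3], [8, 8]),   # balanced host
]
print(place(hosts, [2, 2]))  # → 1, the host that stays most balanced
```

Minimizing the spread tends to fill all resource dimensions evenly, which is one plausible way to read the abstract's "difference between the maximum and minimum utilization rates" criterion.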
Semantic Segmentation on Satellite Imagery - RAHUL BHOJWANI
This is an Image Semantic Segmentation project targeted on Satellite Imagery. The goal was to detect the pixel-wise segmentation map for various objects in Satellite Imagery including buildings, water bodies, roads etc. The data for this was taken from the Kaggle competition <https://www.kaggle.com/c/dstl-satellite-imagery-feature-detection>.
We implemented the FCN, U-Net and SegNet deep learning architectures for this task.
Analyzing and assessing ecological transition in building sustainable cities - Beniamino Murgante
"Analyzing and assessing ecological transition in building sustainable cities" Keynote presentation at "International Conference on Sustainable Environment and Technologies" 23 September 2022, Nicolas Tesla University Union, Belgrade, Serbia
Smart Cities: New Science for the Cities
Beniamino Murgante
School of Engineering, University of Basilicata
Lecture at the Department of Community and Regional Planning
Smart Cities course - Professor Alenka Poplin
Keynote at the 24th International Conference on Urban Planning and Regional Development in the Information Society
GeoMultimedia 2019, 2-4 April 2019
Karlsruhe Institute of Technology, Germany
Involving citizens in smart energy approaches: the experience of an energy pa... - Beniamino Murgante
Involving citizens in smart energy approaches: the experience of an energy park in Calvello municipality
4th International Conference on Urban e-Planning, University of Lisbon, 23-24 April 2019
Programmazione per la governance territoriale in tema di tutela della biodive... - Beniamino Murgante
Programmazione per la governance territoriale in tema di tutela della biodiversità (Planning for territorial governance in the field of biodiversity protection) - Sabrina Lai, Regione Sardegna, Direzione generale della difesa dell'ambiente, slai@regione.sardegna.it
Università degli Studi di Cagliari, DICAAR, sabrinalai@unica.it
RISCHIO TERRITORIALE NEL GOVERNO DEL TERRITORIO: Ricerca e formazione nelle scuole di ingegneria (Territorial risk in land governance: research and training in engineering schools) - Beniamino Murgante
Giuseppe Las Casas, Beniamino Murgante, Francesco Scorza
UrbIng 2016
GEOGRAPHIC INFORMATION – NEED TO KNOW (GI-N2K) Towards a more demand-driven g... - Beniamino Murgante
GEOGRAPHIC INFORMATION – NEED TO KNOW (GI-N2K) Towards a more demand-driven geospatial workforce education/training system
Mauro Salvemini, Giuliana Vitiello, Monica Sebillo, Sergio Farruggia. Beniamino Murgante
Focussing Energy Consumers’ Behaviour Change towards Energy Efficiency and Lo... - Beniamino Murgante
Focussing Energy Consumers’ Behaviour Change towards Energy Efficiency and Low Carbon Economy: Perspective for Policy Making, Transnational Cooperation and Research.
Beniamino Murgante, Francesco Scorza,
Alessandro Attolico, Federico Amato
Presented at the REAL CORP 2016 - 21st International Conference on Urban Planning
and Regional Development in the Information Society
GEOGRAPHIC INFORMATION – NEED TO KNOW (GI-N2K) Towards a more demand-driven g... - Beniamino Murgante
GEOGRAPHIC INFORMATION – NEED TO KNOW (GI-N2K) Towards a more demand-driven geospatial workforce education/training system
Mauro Salvemini, Francesco Di Massa, Monica Sebillo, Sergio Farruggia. Beniamino Murgante
Garden in motion. An experience of citizens involvement in public space regen... - Beniamino Murgante
Garden in motion. An experience of citizens involvement in public space regeneration.
Sara Lorusso, Gerardo Sassano, Michele Scioscia, Antonio Graziadei, Pasquale Passannante, Sara Bellarosa, Francesco Scaringi, Beniamino Murgante
Until the end of the 1980s, an urban planner who tried to support planning reasoning with computing could obtain, at best, some statistical data on the population. Over the years, the use of technologies for building the knowledge frameworks that support the planning process has steadily increased, up to the current Information Explosion Era.
The talk will cover theoretical and applied aspects, from the experience of Ian McHarg up to the latest "fashion" of Smart Cities.
Introduction
Andreina Maahsen-Milan
Università di Bologna
Tecnologie, Territorio, Smartness
Beniamino Murgante
Università della Basilicata
Facoltà Ingegneria Edile di Ravenna - Università di Bologna
Via Tombesi dall'Ova 55, 48121 Ravenna
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0! - SOFTTECHHUB
As the digital landscape continually evolves, operating systems play a critical role in shaping user experiences and productivity. The launch of Nitrux Linux 3.5.0 marks a significant milestone, offering a robust alternative to traditional systems such as Windows 11. This article delves into the essence of Nitrux Linux 3.5.0, exploring its unique features, advantages, and how it stands as a compelling choice for both casual users and tech enthusiasts.
Generative AI Deep Dive: Advancing from Proof of Concept to Production - Aggregage
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024 - Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview, including the concepts of Customer Key and Double Key Encryption.
GridMate - End to end testing is a critical piece to ensure quality and avoid... - ThomasParaiso2
End to end testing is a critical piece to ensure quality and avoid regressions. In this session, we share our journey building an E2E testing pipeline for GridMate components (LWC and Aura) using Cypress, JSForce, FakerJS…
A tale of scale & speed: How the US Navy is enabling software delivery from l... - sonjaschweigert1
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATO’s (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
Essentials of Automations: The Art of Triggers and Actions in FME - Safe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
How to Get CNIC Information System with Paksim Ga.pptx - danishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024 - Neo4j
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
Enhancing adoption of Open Source Libraries. A case study on Albumentations.AI - Vladimir Iglovikov, Ph.D.
Presented by Vladimir Iglovikov:
- https://www.linkedin.com/in/iglovikov/
- https://x.com/viglovikov
- https://www.instagram.com/ternaus/
This presentation delves into the journey of Albumentations.ai, a highly successful open-source library for data augmentation.
Created out of a necessity for superior performance in Kaggle competitions, Albumentations has grown to become a widely used tool among data scientists and machine learning practitioners.
This case study covers various aspects, including:
People: The contributors and community that have supported Albumentations.
Metrics: The success indicators such as downloads, daily active users, GitHub stars, and financial contributions.
Challenges: The hurdles in monetizing open-source projects and measuring user engagement.
Development Practices: Best practices for creating, maintaining, and scaling open-source libraries, including code hygiene, CI/CD, and fast iteration.
Community Building: Strategies for making adoption easy, iterating quickly, and fostering a vibrant, engaged community.
Marketing: Both online and offline marketing tactics, focusing on real, impactful interactions and collaborations.
Mental Health: Maintaining balance and not feeling pressured by user demands.
Key insights include the importance of automation, making the adoption process seamless, and leveraging offline interactions for marketing. The presentation also emphasizes the need for continuous small improvements and building a friendly, inclusive community that contributes to the project's growth.
Vladimir Iglovikov brings his extensive experience as a Kaggle Grandmaster, ex-Staff ML Engineer at Lyft, sharing valuable lessons and practical advice for anyone looking to enhance the adoption of their open-source projects.
Explore more about Albumentations and join the community at:
GitHub: https://github.com/albumentations-team/albumentations
Website: https://albumentations.ai/
LinkedIn: https://www.linkedin.com/company/100504475
Twitter: https://x.com/albumentations
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl... - DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Maruthi Prithivirajan, Head of ASEAN & IN Solution Architecture, Neo4j
Get an inside look at the latest Neo4j innovations that enable relationship-driven intelligence at scale. Learn more about the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building apps with interconnected data and generative AI.
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ... - James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or CI/CD process) involves many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to release software to market, combined with traditionally slow and manual security checks, has created gaps in continuous security, an important piece of the software supply chain. Today, organizations feel more susceptible to external and internal cyber threats due to the vast attack surface of their application supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work and a knack for helping others understand how things work. He has around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Secstrike: Reverse Engineering & Pwnable tools for CTF.pptx
View- and Scale-Based Progressive Transmission of Vector Data
1. View- and Scale-Based Progressive Transmission of Vector Data
Padraig Corcoran, Peter Mooney, Adam Winstanley and Michela Bertolotto.
Department of Computer Science, National University of Ireland Maynooth.
School of Computer Science and Informatics, University College Dublin
2. Introduction
● Web application development is in the middle of a paradigm shift.
● Web-GIS applications still linger behind desktop-GIS in terms of:
● Functionality.
● Interface.
● User Interaction.
● This can be attributed to the manner in which spatial data is transmitted.
3. Tile-Based Transmission
● Predominant transmission methodology.
● Vector data is converted to raster map tiles on the server.
● Map tiles are transmitted to the client.
● Used by Google Maps and OpenStreetMap.
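For context, the tile addressing used by slippy-map services such as OpenStreetMap maps a longitude/latitude pair to a tile coordinate at a given zoom level via the standard Web Mercator formula. This snippet is illustrative background on how such services index their pre-computed raster tiles, not part of the presented work:

```python
import math

def lonlat_to_tile(lon, lat, zoom):
    """Standard slippy-map tile indexing: lon in [-180, 180), Web Mercator lat."""
    n = 2 ** zoom                      # tiles per axis at this zoom level
    x = int((lon + 180.0) / 360.0 * n) # linear in longitude
    lat_rad = math.radians(lat)
    # Mercator projection of latitude, normalized to [0, 1] from north to south
    y = int((1.0 - math.log(math.tan(lat_rad) + 1.0 / math.cos(lat_rad)) / math.pi) / 2.0 * n)
    return x, y

print(lonlat_to_tile(0.0, 0.0, 1))  # → (1, 1)
```

Because every (x, y, zoom) request names a pre-rendered image, all data requests can be pre-computed, which is exactly the advantage the slides list for tile-based transmission.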
4. ● Advantages:
● HTML has native support for images.
● Image compression is an advanced science.
● All data requests are pre-computed.
5. ● Disadvantage:
● Vector data is not transmitted; therefore the client cannot perform spatial queries or adapt the visualization.
6. Vector-Based Transmission
● Can we transmit vector data and maintain the advantages of tile-based transmission?
● Development of such technology is a main goal in the field of Progressive Transmission.
7. Progressive Transmission
● For large data sets a trade-off exists between:
● Transmission of high levels of detail.
● Transmission in reasonable time.
● Progressive transmission attempts to optimize this trade-off for vector data.
8. ● Progressive transmission is characterized by two properties:
● Data is transmitted in the form of increments or refinements.
● To reduce redundancy, data is not re-transmitted.
9. View- and Scale-Based Transmission
● In order to structure existing research in this field, we propose a classification.
● All methods for progressive transmission may be classified as view- or scale-based.
10. View-Based Transmission
● Data is transmitted progressively as a function of a changing viewing window.
[Figure: sequence of map views over time, with a progressively changing view]
11. Scale-Based Transmission
● Data is transmitted progressively as a function of changing scale.
[Figure: sequence of map views over time, with a progressively changing scale]
12. Scale-Based Implementation
● Refinement is the inverse of generalization.
● All refinements are actually generalizations and therefore satisfy the same objectives.
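The idea that refinement inverts generalization can be sketched minimally as follows. This is an illustrative vertex-elimination scheme, not the topology-preserving algorithm of the paper: generalization repeatedly removes the least significant interior vertex and logs each removal; replaying the log in reverse yields the refinement increments, and no vertex is ever transmitted twice.

```python
# Sketch: generalization produces both a coarse line and a removal log;
# the log, replayed in reverse, is exactly the refinement stream.

def point_line_dist(p, a, b):
    # Perpendicular distance of p from segment a-b (a simple significance measure).
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == dy == 0:
        return ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    cx, cy = ax + t * dx, ay + t * dy
    return ((px - cx) ** 2 + (py - cy) ** 2) ** 0.5

def generalize(line, keep):
    """Remove interior vertices until `keep` remain; return the coarse line
    plus the removal log (index, vertex): the refinement stream."""
    line, removals = list(line), []
    while len(line) > keep:
        i = min(range(1, len(line) - 1),
                key=lambda k: point_line_dist(line[k], line[k - 1], line[k + 1]))
        removals.append((i, line.pop(i)))
    return line, removals

def refine(coarse, removals):
    """Client side: apply increments in reverse removal order."""
    line = list(coarse)
    for i, v in reversed(removals):
        line.insert(i, v)
    return line

original = [(0, 0), (1, 0.1), (2, 2), (3, 0.1), (4, 0)]
coarse, increments = generalize(original, 3)
print(refine(coarse, increments) == original)  # → True
```

Because each increment is a vertex removed during generalization, applying the increments in reverse order exactly reverses the generalization, which is the inverse relationship the slide states.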
13. Fusing View- and Scale-Based Transmission
● Both approaches reduce the volume of data transmitted, in different ways.
● To maximise efficiency, concepts from both must be fused.
● Currently the most advanced fusion method is that of Li et al. (2009).
14. Li et al. Methodology
● The vector data is divided into tiles.
● The subset of tiles a user views is determined.
● Each of these tiles is then transmitted using a scale-based transmission strategy.
15. ● Disadvantages:
● Features which span multiple tiles must be segmented and rejoined.
● Such features cannot be generalized.
16. Proposed Fusion Methodology
● A transmission method which removes the requirement for tiles is proposed.
● First, all features are generalized in a manner which maintains topology (Corcoran et al., IJGIS 2011).
17. ● Features are then inserted into an R-tree (a spatial indexing method).
● Given a viewing window, the features contained within it are progressively transmitted while maintaining topology (Corcoran et al., AGILE 2011).
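The window query step above can be sketched as follows. Here a linear scan over feature bounding boxes stands in for the R-tree, and the feature data is made up for illustration; a real R-tree would answer the same query without visiting every feature:

```python
# Sketch of the viewing-window query: select the features whose bounding
# boxes intersect the window, as candidates for progressive transmission.
# A linear scan stands in for the R-tree spatial index.

def bbox(points):
    xs, ys = zip(*points)
    return (min(xs), min(ys), max(xs), max(ys))

def intersects(a, b):
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    return ax0 <= bx1 and bx0 <= ax1 and ay0 <= by1 and by0 <= ay1

def window_query(features, window):
    """features: dict name -> point list; window: (x0, y0, x1, y1)."""
    return sorted(name for name, pts in features.items()
                  if intersects(bbox(pts), window))

features = {
    "road":  [(0, 0), (5, 5)],
    "river": [(10, 10), (12, 14)],
    "park":  [(4, 4), (6, 6)],
}
print(window_query(features, (3, 3, 7, 7)))  # → ['park', 'road']
```

Only the selected features then enter the progressive (scale-based) transmission stage, which is how the proposed method fuses view- and scale-based transmission without tiles.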
18. Implementation
● Implemented using a client-server model.
● Server-client communication uses the HTML5 WebSocket API.
● Client rendering uses the HTML5 Canvas API.
[Figure: sequence diagram of client-server communication]
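The increments travelling over the WebSocket connection might be framed as small JSON messages. The schema below (field names `feature`, `insert_at`, `vertex`) is purely hypothetical, since the slides specify only the transport; the sketch shows encoding on the server side and application on the client side:

```python
# Hypothetical JSON framing for refinement increments sent over a WebSocket.
# Field names are illustrative; the slides only state that the HTML5
# WebSocket API carries the data and that nothing is re-transmitted.
import json

def encode_increment(feature_id, index, vertex):
    return json.dumps({"feature": feature_id, "insert_at": index,
                       "vertex": vertex})

def apply_increment(features, message):
    msg = json.loads(message)
    features[msg["feature"]].insert(msg["insert_at"], tuple(msg["vertex"]))

# Client holds a coarse version; one increment refines it in place.
features = {"road": [(0, 0), (4, 0)]}
apply_increment(features, encode_increment("road", 1, (2, 1)))
print(features["road"])  # → [(0, 0), (2, 1), (4, 0)]
```

In the actual system the client side would be JavaScript, applying each message to the vector data it renders via the Canvas API.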
22. Conclusions
● We provide an analysis and propose a framework to classify existing progressive transmission methods.
● Subsequently, a new fusion method is proposed.
● Requests are computed on the fly; future work will aim to reduce computational complexity.