The document proposes a cognitive behavior analysis framework with three sub-paradigms for failure prediction in cloud computing. The framework uses a probabilistic behavior analysis approach, simulated probabilistic analysis, and behavior-time profile modeling to analyze system behavior at different layers and identify potential failures. The framework is intended to be scalable and improve system dependability through failure prediction.
Identification of Causal Variables for Building Energy Fault Detection by Semi-supervised LDA and Decision Boundary Analysis - Keigo Yoshida
K. Yoshida, M. Inui, T. Yairi, K. Machida, M. Shioya, and Y. Masukawa, "Identification of Causal Variables for Building Energy Fault Detection by Semi-supervised LDA and Decision Boundary Analysis", in Proc. ICDM Workshops, 2008, pp. 164-173.
This document describes Sumant Tambe's PhD dissertation defense on end-to-end reliability of non-deterministic stateful components. The dissertation addresses challenges in specifying, composing, deploying, and configuring fault tolerance for distributed real-time embedded systems. It presents the Group-failover Protocol to solve the orphan request problem and ensure exactly-once execution semantics without transactions. It also introduces typed traversals and the LEESA solution to compose components while enforcing type safety. The dissertation contributes model-driven techniques for modularizing quality of service concerns and generating fault tolerance aspects to weave dependability into system artifacts.
A typical data-centric IT solution focuses on data objects and scattered orchestration of activities, with business rules hard-coded in program logic. Process coverage is limited as end-to-end processes are not fully automated and have gaps between systems. A process-centric solution with a business process management system provides flexible end-to-end process orchestration, a single access point for users, and configurable business rules to fully automate and integrate end-to-end processes across multiple systems.
The document discusses e-commerce technologies, consumer behavior in e-commerce, factors that influence customer behavior, and web marketing strategies. It describes how businesses use technologies like the internet, dynamic content generation, and client-server architectures to enable e-commerce. It also examines how consumer values, web experiences, and segmentation can be used to target different customer groups in e-commerce.
*If the slides do not display well on screen, please download the file.*
E-commerce customer behavior analysis
- Online Consumer Spending Habits and E-Commerce Checkouts
- Types of E-Shoppers
- Virtual main behavior patterns
Human Action Recognition in Videos Employing 2DPCA on 2DHOOF and Radon Transform - Fadwa Fouad
This document provides an overview of a Master's thesis that proposes algorithms for human action recognition. It begins with an introduction that discusses the importance of human action recognition, challenges in the field, and differences between actions and activities. It then presents an agenda that outlines an introduction, overview, and details of two proposed algorithms: 2DHOOF/2DPCA contour-based optical flow and human gesture recognition using Radon transform/2DPCA. The overview section describes the general structure of action recognition systems from video capture to classification. Experimental results on benchmark datasets demonstrate the effectiveness of the proposed algorithms.
Principles of Elastic Processes on Clouds and Some Enabling Techniques - Hong-Linh Truong
While contemporary research mostly interprets elasticity as the capability of scaling machine-based computing resources in and out within a single cloud, we argue that elasticity can be realized in multiple dimensions, such as resource, cost, and quality. Furthermore, elasticity can also be realized in hybrid machine- and human-based computing systems. In this talk, we will first discuss principles of such elastic processes and their research challenges. Second, we will present some initial results on techniques that can be used in the development of elastic processes, such as composable cost evaluation, contract compatibility evaluation, and quality-of-data evaluation for workflows using both software and human capabilities. Finally, we will outline challenges in bringing these techniques to hybrid systems of software- and human-based computing elements.
Human Action Recognition Using Deep Learning - IRJET Journal
This document discusses human action recognition using deep learning models. It proposes using two deep learning models - Convolutional Neural Networks (CNN) and Long-term Recurrent Convolutional Networks (LRCN) - to recognize human actions in videos. The Kinetics dataset is used to train and evaluate the models. Results show that both CNN and LRCN are able to accurately recognize human actions like playing piano or archery in test videos. The LRCN model achieves slightly higher accuracy compared to the traditional two-stream CNN method.
This document discusses verifying computations in cloud computing. It presents the RunTest approach, which randomly sends data along multiple processing paths and matches intermediate results to build an "attestation graph" showing node agreement. Nodes that are always inconsistent are identified as malicious. The Bron-Kerbosch algorithm finds the largest consistent clique to identify malicious nodes. The approach was evaluated on an IBM System S, detecting different attack patterns and assessing data quality. Issues discussed include the algorithm's complexity and scalability.
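The clique-finding step lends itself to a small sketch. Below is a minimal Bron-Kerbosch enumeration (without the pivoting optimization) applied to a toy attestation graph; the node names and agreement edges are invented for illustration, not taken from the RunTest evaluation:

```python
def bron_kerbosch(R, P, X, graph, cliques):
    """Enumerate maximal cliques of an undirected graph.

    graph: dict mapping node -> set of neighbours.
    R: nodes in the current clique; P: candidates; X: already processed.
    """
    if not P and not X:
        cliques.append(R)
        return
    for v in list(P):
        bron_kerbosch(R | {v}, P & graph[v], X & graph[v], graph, cliques)
        P.remove(v)
        X.add(v)

def largest_consistent_clique(graph):
    cliques = []
    bron_kerbosch(set(), set(graph), set(), graph, cliques)
    return max(cliques, key=len)

# Toy attestation graph: an edge means two nodes always agreed.
agreement = {
    "n1": {"n2", "n3"},
    "n2": {"n1", "n3"},
    "n3": {"n1", "n2"},
    "n4": set(),          # n4 never agrees with anyone -> suspect
}
honest = largest_consistent_clique(agreement)
print(sorted(honest))     # -> ['n1', 'n2', 'n3']
```

Nodes outside the largest consistent clique (here `n4`) are the ones flagged as potentially malicious.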
Technicolor/INRIA/Imperial College London at the MediaEval 2012 Violent Scene Detection Task - MediaEval2012
This document summarizes the results of 5 different systems for detecting violent scenes in videos that were submitted by Technicolor, INRIA, and Imperial College London to the MediaEval 2012 Violent Scene Detection Task. System 1 used similarity measures between frames, System 2 used bag-of-audio-words modeling, System 3 used Bayesian network structure learning, System 4 used a naive Bayesian classifier, and System 5 fused the outputs of the first 3 systems. System 3 performed best with a MAP of 61.82% while the fusion system was the 4th best overall system. The document concludes with perspectives on improving the different approaches.
Elastic High Performance Applications – A Composition Framework - Hong-Linh Truong
With diverse and rich offerings from cloud computing providers in the open cloud market, scientists have great opportunities to design and conduct complex applications by utilizing and combining computational resources, software components, and data sources in an elastic manner. While existing techniques focus mainly on resource elasticity in a single cloud infrastructure, scientists expect to design applications that are elastic in multiple dimensions, so that their applications can operate on multiple clouds with minimum software engineering effort. In this paper we focus on providing techniques for scientists to compose elastic high performance applications from traditional software components, user-provided components, and cloud services. We characterize elastic compositions via their resource, quality, cost, available-time, and usage-rights elasticity, enabling scientists to evaluate and decide how to develop, deploy, and control the compositions to match their elastic needs. To illustrate our approach, we present several real-world application compositions for multi-cloud environments.
Tim Malthus: Towards standards for the exchange of field spectral datasets - TERN Australia
This document discusses the development of standards for the exchange of field spectral datasets. It notes the importance of metadata for determining the quality and representativeness of spectral data obtained in the field. A workshop was held in 2012 to discuss best practices for data collection and exchange; key conclusions included the need for standards to facilitate accurate comparison across studies and the importance of thorough metadata. Work is ongoing to enhance the SPECCHIO system for hosting spectral libraries and metadata, and to establish it as the international tool for the storage and exchange of spectral datasets.
This document presents a proposed churn prediction model based on data mining techniques. The model consists of six steps: identifying the problem domain, data selection, investigating the data set, classification, clustering, and utilizing the knowledge gained. The authors apply their model to a data set of 5,000 mobile service customers using data mining tools. They train classification models using decision trees, neural networks, and support vector machines. Customers are classified as churners or non-churners. Churners are then clustered into three groups. The results are interpreted to gain insights into customer retention.
M2M Platform-as-a-Service for Sustainability Governance - Hong-Linh Truong
Recently, cloud computing technologies have been employed for large-scale machine-to-machine (M2M) systems, as they can potentially offer better solutions for managing monitoring data and analytics applications to support the needs of different consumers. However, there exist complex relationships between monitored objects, monitoring data, analysis features, and stakeholders in M2M systems, and these relationships require efficient handling. This paper presents techniques for linking and managing monitored objects, sustainability monitoring data, and analytics applications for different stakeholders in cloud-based M2M systems. We describe a Platform-as-a-Service for sustainability governance that implements these techniques, and we illustrate our prototype on a real-world cloud system for facility monitoring.
Robust techniques for background subtraction in urban traffic video - taylor_1313
Robust techniques for background subtraction in urban traffic video aim to identify moving objects from video sequences. The paper surveys and compares various background subtraction algorithms, including simple techniques like frame differencing and adaptive median filtering, as well as more sophisticated probabilistic modeling. Experiments show that while complex techniques often perform best, simple adaptive median filtering produces good results with much lower computational complexity for detecting vehicles and pedestrians in traffic video.
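The adaptive median filtering idea mentioned above is simple enough to sketch: the background estimate for each pixel drifts one intensity step toward the current frame, and foreground is whatever differs from the estimate by more than a threshold. The 1-D "video" and threshold below are invented for illustration:

```python
import numpy as np

def adaptive_median_update(background, frame):
    """Move each background pixel one intensity step toward the current frame."""
    step = np.sign(frame.astype(int) - background.astype(int))
    return (background.astype(int) + step).clip(0, 255).astype(np.uint8)

def foreground_mask(background, frame, threshold=25):
    """Flag pixels that deviate from the background estimate."""
    diff = np.abs(frame.astype(int) - background.astype(int))
    return diff > threshold

# Toy 1-D "video": a flat background with one bright moving object.
bg = np.full(8, 50, dtype=np.uint8)
frame = bg.copy()
frame[3] = 200                          # a "vehicle" appears at pixel 3
mask = foreground_mask(bg, frame)
print(mask.astype(int))                 # -> [0 0 0 1 0 0 0 0]
bg = adaptive_median_update(bg, frame)  # background drifts slowly: bg[3] becomes 51
```

Because the update moves only one step per frame, a stopped vehicle is absorbed into the background only slowly, which is part of why this cheap scheme works reasonably well in traffic video.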
The document describes the Green Computing Observatory (GCO), which monitors energy usage at a large computing center. The GCO collects data from sensors that measure energy usage of individual components. It publishes the data through XML files to analyze energy efficiency. The GCO aims to address barriers to improved energy efficiency by collecting detailed usage data and defining semantics for the data through an ontology. It currently collects data from sensors on servers, power distribution units, and the Ganglia monitoring system. Future work includes developing a consistent XML schema and calculating total energy consumption.
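As a toy illustration of publishing a monitoring sample as XML in the spirit of the GCO's per-component dumps (the element names here are invented, not the GCO's actual schema):

```python
import xml.etree.ElementTree as ET

def energy_sample_xml(host, component, watts, timestamp):
    """Serialize one invented energy-monitoring sample as an XML string."""
    root = ET.Element("energySample")
    ET.SubElement(root, "host").text = host
    ET.SubElement(root, "component").text = component
    ET.SubElement(root, "powerWatts").text = str(watts)
    ET.SubElement(root, "timestamp").text = timestamp
    return ET.tostring(root, encoding="unicode")

sample = energy_sample_xml("node42", "cpu0", 65.2, "2012-06-01T12:00:00Z")
print(sample)
```

Pinning such a schema down consistently (and attaching ontology-based semantics to the element names) is exactly the future work the GCO summary describes.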
Machine learning in Dynamic Adaptive Streaming over HTTP (DASH) - Eswar Publications
Recently, machine learning has been introduced into the area of adaptive video streaming. This paper explores a novel taxonomy that covers six state-of-the-art machine learning techniques that have been applied to Dynamic Adaptive Streaming over HTTP (DASH): (1) Q-learning, (2) Reinforcement learning, (3) Regression, (4) Classification, (5) Decision tree learning, and (6) Neural networks.
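To give a flavor of how tabular Q-learning maps onto bitrate adaptation, here is a deliberately tiny sketch. The two buffer states, the rewards, and the transition model are illustrative assumptions, not taken from the paper:

```python
import random

# Invented toy model: states are coarse buffer levels, actions are bitrates.
STATES = ["low_buffer", "high_buffer"]
ACTIONS = ["360p", "1080p"]
ALPHA, GAMMA, EPSILON = 0.1, 0.5, 0.1

def reward(state, action):
    # Reward high quality, penalise high bitrate on a low buffer (rebuffer risk).
    if action == "1080p":
        return 1.0 if state == "high_buffer" else -2.0
    return 0.1

def next_state(state, action):
    # Simplistic dynamics: high bitrate drains the buffer, low bitrate fills it.
    return "low_buffer" if action == "1080p" else "high_buffer"

Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
random.seed(0)
state = "high_buffer"
for _ in range(2000):
    if random.random() < EPSILON:                      # explore
        action = random.choice(ACTIONS)
    else:                                              # exploit
        action = max(ACTIONS, key=lambda a: Q[(state, a)])
    s2 = next_state(state, action)
    best_next = max(Q[(s2, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward(state, action) + GAMMA * best_next - Q[(state, action)])
    state = s2

policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in STATES}
print(policy)   # -> {'low_buffer': '360p', 'high_buffer': '1080p'}
```

The learned policy requests the high bitrate only when the buffer is healthy, which is the qualitative behavior Q-learning-based DASH controllers aim for.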
This document presents RadioSense, a system that utilizes wireless communication patterns for body sensor network activity recognition. RadioSense recognizes user activities based on patterns in packet delivery ratio and received signal strength observed by radio sensors on the body. It optimizes system parameters like transmission power level, packet sending rate, and smoothing window size to maximize the discriminative capacity of communication patterns for accurate activity recognition. Evaluation shows RadioSense achieves over 90% accuracy for single activities with latency under 10 seconds, while extending battery life up to 175 hours and reducing privacy risks through lower transmission power.
Compression technique using DCT fractal compression - Alexander Decker
1) The document discusses and compares different image compression techniques, specifically DCT and fractal compression.
2) Fractal compression works by finding self-similar patterns within an image during encoding, but can have a long computation time. DCT transforms an image into frequency coefficients that can be quantized for compression.
3) The document reviews previous work combining DCT and fractal compression with steganography and encryption to improve hiding capacity, imperceptibility, and security against subterfuge attacks. However, prior methods had limitations like low data hiding amounts or lack of protection for compressed data.
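The DCT half of the pipeline can be sketched in a few lines: transform an 8x8 block into frequency coefficients, then quantize them. The uniform step size below is an invented stand-in for a real quantization table:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix."""
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)
    return C

def block_dct_quantize(block, q=10):
    """2-D DCT of an 8x8 block, then coarse uniform quantization (step q)."""
    C = dct_matrix(8)
    coeffs = C @ block @ C.T
    return np.round(coeffs / q)

block = np.full((8, 8), 100.0)     # flat block: all energy lands in the DC term
coeffs = block_dct_quantize(block)
print(int(coeffs[0, 0]))           # -> 80 (the DC coefficient)
print(np.count_nonzero(coeffs))    # -> 1 (every AC term quantizes to zero)
```

Compression comes from exactly this effect: after quantization most coefficients are zero and can be stored very cheaply, at the cost of the rounding error introduced by `q`.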
Compression technique using DCT fractal compression - Alexander Decker
This document summarizes and compares different image compression techniques, including DCT, fractal compression, and their applications in steganography. It discusses how DCT works by transforming image data into frequency domains, while fractal compression exploits self-similarity within images. The document reviews several existing studies on combining these techniques with steganography and encryption. Specifically, it examines approaches that use DCT and fractal compression to improve data hiding capacity and security. Overall, the document provides an overview of key compression algorithms and their applications in digital watermarking and steganography.
Matthias Vallentin - Towards Interactive Network Forensics and Incident Response - boundary_slides
Incident response, post-facto forensics, and network troubleshooting rely on the ability to quickly extract relevant information. To this end, security analysts and network operators need a system that (i) allows a query to be expressed directly using domain-specific constructs, (ii) delivers the performance required for interactive analysis, and (iii) is not degraded by a continuously arriving stream of semi-structured data.
This talk covers the design and implementation plans of a distributed analytics platform that meets these requirements. Well-proven Google architectures like GFS, BigTable, Chubby, and Dremel heavily influenced the design of the system, which leverages bitmap indexes to meet the interactive query requirements. The goal is to develop a prototype ready for production usage in the next few months and obtain feedback from using it on various large-scale sites serving tens of thousands of machines.
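The bitmap-index idea behind the interactive-query requirement is easy to miniaturize: keep one bit vector per attribute value, with bit i set when record i matches, so a conjunctive query becomes a single bitwise AND. The connection-log records below are invented for illustration:

```python
# Miniature bitmap index using Python integers as bit vectors.
from collections import defaultdict

class BitmapIndex:
    def __init__(self):
        self.bitmaps = defaultdict(int)   # value -> bitmap over record ids
        self.n = 0                        # next record id

    def add(self, value):
        self.bitmaps[value] |= 1 << self.n
        self.n += 1

    def query(self, value):
        return self.bitmaps.get(value, 0)

def record_ids(bitmap):
    """Expand a bitmap back into the matching record ids."""
    return [i for i in range(bitmap.bit_length()) if bitmap >> i & 1]

proto, port = BitmapIndex(), BitmapIndex()
for p, d in [("tcp", 80), ("udp", 53), ("tcp", 443), ("tcp", 80)]:
    proto.add(p)
    port.add(d)

# Conjunctive query "proto == tcp AND port == 80" is one bitwise AND.
hits = proto.query("tcp") & port.query(80)
print(record_ids(hits))   # -> [0, 3]
```

Production systems add compression (e.g. run-length encoded bitmaps) on top of this scheme, but the query path is the same bitwise combination shown here.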
IaaS Cloud Benchmarking: Approaches, Challenges, and Experience - Alexandru Iosup
IaaS cloud benchmarking approaches aim to quantify cloud performance and properties through formalized real-world scenarios, real traces, workload modeling, and repeatable experiments. Main challenges include developing statistical workload models, isolating performance under multi-tenancy, and measuring variability and elasticity beyond traditional metrics. The team studied IaaS cloud workloads including bags of tasks, workflows, MapReduce models, and big data, and evaluated cloud performance across providers to understand implications for real applications.
Project Report on Modeling and Robust Control of Blu-Ray disc Servo Mechanisms - Manu Mitra
This project deals with the modeling and robust control of the next generation of optical disc drive servo-mechanisms. While in many industrial servo-control implementations the radial and focus loops are considered decoupled, e.g. DVD drives, this is no longer true for HD-DVD and Blu-ray disc (BD) formats, which are more sensitive to opto-mechanical interactions at high frequencies. The impact of such phenomena on the robustness of the servo is evaluated using experimental data, and an H∞ controller is designed to reduce the coupling effect by including a suitable disturbance model in the problem formulation. Simulations using experimental data illustrate the performance improvement of the compensated system despite the parametric uncertainties in mass-production optical drives. (Aug 2009 - Dec 2009)
This document summarizes a course on Lean/Six Sigma systems taught at MIT. It provides an overview of the course structure and content, which is organized around 33 single-point lessons covering the foundations and principles of Lean thinking, Six Sigma, and systems change. It also describes how student teams took a "leader-as-teacher" role in presenting some of the lessons.
IRJET - Face Recognition in Digital Documents with Live Image - IRJET Journal
This document proposes a system for face recognition in digital documents using a live image for verification. It aims to improve on existing ID photo matching systems which are slow, labor-intensive, and unreliable. The system uses a blockchain-based digital certificate system to securely store document photos. A deep learning model called dynamic weight imprinting is used to match live images to stored photos for faster and more accurate verification. An evaluation on an ID dataset showed the proposed face recognition system achieved a true accept rate of 95.95% at a false accept rate of 0.01%, outperforming existing general face identification methods.
Real-time Analytics with HBase (short version) - alexbaranau
The document discusses using an append-only approach to enable real-time analytics with HBase. It describes how replacing read-modify-write operations with simple append writes can increase throughput and efficiently process updates. An open-source implementation called HBaseHUT is presented that uses update processors to aggregate data on reads and periodic jobs to merge updates in batches. The approach aims to provide real-time visibility of data changes while handling high volumes of input.
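The core append-only trick is small enough to sketch outside HBase: every update is appended as a new delta record, reads fold the pending deltas into an aggregate on the fly, and a periodic job compacts them. The counter below is an invented stand-in for HBaseHUT's machinery:

```python
# Sketch of the append-only update pattern (a toy in-memory analogue,
# not the HBaseHUT implementation).
from collections import defaultdict

class AppendOnlyCounter:
    def __init__(self):
        self.log = defaultdict(list)   # row key -> list of appended deltas

    def append(self, key, delta):
        # No read-modify-write: an update is just an append.
        self.log[key].append(delta)

    def read(self, key):
        # Aggregate at read time, as HBaseHUT's update processors do.
        return sum(self.log[key])

    def compact(self, key):
        # Periodic merge job: fold all deltas into a single record.
        self.log[key] = [sum(self.log[key])]

counter = AppendOnlyCounter()
for delta in (5, 3, -2):
    counter.append("page:home", delta)
print(counter.read("page:home"))   # -> 6
counter.compact("page:home")
print(counter.log["page:home"])    # -> [6]
```

Writes never contend on a row's current value, which is where the throughput gain over read-modify-write comes from; the cost is moved to reads and the periodic compaction.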
Paper presented at the 6th International Work-Conference on Ambient Assisted Living.
Abstract: Due to the increasing demand for multi-camera setups and long-term monitoring in vision applications, real-time multi-view action recognition has gained great interest in recent years. In this paper, we propose a multiple kernel learning based fusion framework that employs a motion-based person detector for finding regions of interest and local descriptors with bag-of-words quantisation for feature representation. The experimental results on a multi-view action dataset suggest that the proposed framework significantly outperforms simple fusion techniques and state-of-the-art methods.
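The bag-of-words quantisation step mentioned in the abstract can be sketched directly: assign each local descriptor to its nearest codeword and histogram the assignments. The two-codeword codebook and the descriptors below are invented, not the paper's learned vocabulary:

```python
import numpy as np

def bag_of_words(descriptors, codebook):
    """Quantise descriptors against a codebook into a normalised histogram."""
    # Squared distance from every descriptor to every codeword (broadcasting).
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)                    # nearest codeword per descriptor
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()                     # normalised bag-of-words vector

codebook = np.array([[0.0, 0.0], [10.0, 10.0]])
descriptors = np.array([[0.2, 0.1], [9.8, 10.1], [10.2, 9.9], [0.1, 0.0]])
print(bag_of_words(descriptors, codebook))       # -> [0.5 0.5]
```

The resulting fixed-length histograms are what a kernel-based classifier (here, the multiple kernel learning fusion) consumes, regardless of how many raw descriptors each view produced.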
In this work, the implications of new technologies, specifically the new optical FTTH access technologies, are studied from both functional and non-functional perspectives. In particular, some direct impacts are identified in the form of abandoning non-functional technologies, such as micro-registration, which were implicitly required for a functioning operation before the arrival of the new high-bandwidth access technologies. It is shown that abandoning such non-functional best practices, which sit mainly at the management level of ICT, immediately results in additional consumption and environmental footprint, and that some other new innovations might be 'missed'. Unconstrained deployment of these access technologies is therefore not aligned with a sustainable ICT picture unless they are regulated. An approach to pricing best practices, covering both functional and non-functional technologies, is proposed in order to develop a regulation and policy framework for sustainable broadband access.
IaaS Cloud Benchmarking: Approaches, Challenges, and ExperienceAlexandru Iosup
IaaS cloud benchmarking approaches aim to quantify cloud performance and properties through formalized real-world scenarios, real traces, workload modeling, and repeatable experiments. Main challenges include developing statistical workload models, isolating performance under multi-tenancy, and measuring variability and elasticity beyond traditional metrics. The team studied IaaS cloud workloads including bags of tasks, workflows, MapReduce models, and big data, and evaluated cloud performance across providers to understand implications for real applications.
Project Report on Modeling and Robust Control of Blu-Ray disc Servo MechanismsManu Mitra
This project deals with the modeling and the robust control of the next generation of optical disc drives servo-mechanisms. While in many industrial servo-control implementations, the radial and focus loops are considered as decoupled, e.g. DVD drives, this is no longer true for HD-DVD and Blu-ray disc (BD) formats which are more sensitive to opto-mechanical interactions at high frequencies. The impact of such phenomena on the robustness of the servo is evaluated by using experimental data, and an h∞ controller is designed to reduce the coupling effect, by using a suitable disturbance model into the problem formulation. Simulations using experimental data illustrate the performance improvement of the compensated system despite the parametric uncertainties in mass-production optical drives. (Aug 2009 - Dec 2009)
This document summarizes a course on Lean/Six Sigma systems taught at MIT. It provides an overview of the course structure and content, which is organized around 33 single-point lessons covering the foundations and principles of Lean thinking, Six Sigma, and systems change. It also describes how student teams took a "leader-as-teacher" role in presenting some of the lessons.
IRJET - Face Recognition in Digital Documents with Live ImageIRJET Journal
This document proposes a system for face recognition in digital documents using a live image for verification. It aims to improve on existing ID photo matching systems which are slow, labor-intensive, and unreliable. The system uses a blockchain-based digital certificate system to securely store document photos. A deep learning model called dynamic weight imprinting is used to match live images to stored photos for faster and more accurate verification. An evaluation on an ID dataset showed the proposed face recognition system achieved a true accept rate of 95.95% at a false accept rate of 0.01%, outperforming existing general face identification methods.
Real-time Analytics with HBase (short version)alexbaranau
The document discusses using an append-only approach to enable real-time analytics with HBase. It describes how replacing read-modify-write operations with simple append writes can increase throughput and efficiently process updates. An open-source implementation called HBaseHUT is presented that uses update processors to aggregate data on reads and periodic jobs to merge updates in batches. The approach aims to provide real-time visibility of data changes while handling high volumes of input.
Paper presented at the 6th International Work-Conference on Ambient Assisted Living.
Abstract: Due to the increasing demand of multi-camera setup and long-term monitoring in vision applications, real-time multi-view action recognition has gain a great interest in recent years. In this paper, we propose a multiple kernel learning based fusion framework that employs a motion-based person detector for finding regions of interest and local descriptors with bag-of-words quantisation for feature representation. The experimental results on a multi-view action dataset suggest that the proposed framework significantly outperforms simple fusion techniques and state-of-the-art methods.
Cognitive Behavior Analysis framework for Fault Prediction in Cloud Computing
1. Cognitive Behavior Analysis Framework for Fault Prediction in Cloud Computing
(NoF’12, Nov 21st-23rd, 2012, Tunis, Tunisia)
Reza FARRAHI MOGHADDAM, Fereydoun FARRAHI MOGHADDAM, Vahid ASGHARI, Mohamed CHERIET
Synchromedia Lab, ETS, University of Quebec, Montreal, Quebec, Canada
Laboratory for Multimedia Communication in Telepresence
2. Outline
Motivation for Behavior Analysis (BA) and Failure Prediction
Proposed BA framework
Probabilistic Behavior Analysis
Simulated Probabilistic Behavior Analysis
Behavior-Time Profile Modeling and Analysis
Scalability of the Proposed BA framework
Conclusions and Future Prospects
11/23/2012 NoF’12 2
3. Why Behavior Analysis (BA)?
Benefits of BA for Failure Prediction
Preventing Service-Layer or System-Level failures
Enabling operation in “unallowable” states to save energy and cost, and also to reduce footprint
Profiling the Actors
Profiling end users, service providers, and other actors in a computing business (for example, a telecom business)
The ensemble of these actors resembles an ecosystem more than a system
Profiling helps in:
• Smart management of resources
• Building reputations and trust for actors
• Identifying and isolating wrong-acting actors and threats
4. Why Failure Prediction?
A new failure source: Cyclic ElastoPlastic Operation (CEPO)
[Diagram: CEPO shown alongside the hardware, software, middleware, human, and other failure factors]
5. Cyclic elastoplastic operation (CEPO): in Civil and Mechanical Engineering
Safe operation in plastic mode
Repeatable transitions between elastic and plastic modes
Cyclic operation is the key
[Figure: stress curve showing the elastic regime, the plastic regime, and the collapse point]
6. Cyclic elastoplastic operation (CEPO): its counterparts in Computing Systems
Carbon Enabling Effect and Green Push: Doing more with less
1. PUE of Data centers
Increasing inlet air flow temperature (2-4% energy saving per 1°C increase)
For example: PUE = 1.5, 20% saving (5°C) → PUE = 1.2
Reducing or eliminating fans
Failure at the component level (servers) increases with temperature (ASHRAE TC 9.9, 2011)
Failure Prediction and Behavior Analysis can isolate component-level failures (even before their occurrence) in order to prevent system-level failures (which may violate SLO constraints)
Again, cyclic operation is the key to success
2. Can this be applied to bandwidth too? Uncertainty increases with the length of stay in the plastic mode
[Figure: stress on the system vs. inlet temperature, showing the bearable stress level, the elastic mode over the allowable elastic range, and the plastic mode]
7. The Proposed BA framework
An Ensemble-of-Experts approach:
The sub-paradigms:
• Probabilistic Behavior Analysis
• Simulated Probabilistic Behavior Analysis
• Behavior-Time Profile Modeling and Analysis
Two different pictures:
Systemic picture
Ecosystemic picture
9. BA Framework: Ecosystemic picture
10. Multiple layers in BA framework
Layers vs. (physical and non-physical) location: Toward Location Intelligence in Computing systems
Various layers:
Hardware (Compute/Network)
Hardware Drivers/Software
Middleware/Protocols
Virtualware
Virtualware Drivers/Software
Applications (Software)
11. Sub-paradigm 1: Probabilistic Behavior Analysis
Each layer of the system is considered as a graph
Sub-graphs constitute super-components of higher levels (vertical scaling)
The behavior is modeled as the PoA (Probability of Availability)
The PoA is related to the CDF of failure
The Differential Density Function (DDF)
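The formulas on this slide were rendered as images and did not survive extraction. A plausible reconstruction, assuming T denotes the failure time of a component, F(t) its failure CDF, and the DDF its density (these symbols are assumptions, not taken from the slide):

```latex
\mathrm{PoA}(t) = 1 - F(t), \qquad
F(t) = \Pr\left[\, T \le t \,\right], \qquad
\mathrm{DDF}(t) = \frac{\mathrm{d}F(t)}{\mathrm{d}t}
```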
12. Sub-paradigm 1: Probabilistic Behavior Analysis
An example of a 2-component system:
15. Sub-paradigm 2: Simulated Probabilistic Behavior Analysis
For highly-complex system topologies, the CDFs of high-level sub-graphs and components are estimated using simulation based on the CDFs of basic components
It can also be used to validate the calculations of the first sub-paradigm
A Monte Carlo strategy is used
In each run, the fault time of each basic component is drawn randomly from its CDF
The cumulative behavior of all runs of the high-level sub-graph is used to estimate its CDF
1000-run simulations have been used
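The Monte Carlo procedure above can be sketched as follows. This is a minimal illustration, assuming exponential component failure CDFs and a toy topology (a series chain plus a redundant parallel pair); the component rates and the graph are illustrative, not from the slides:

```python
import math
import random

def draw_failure_time(rate):
    """Inverse-CDF sampling from an exponential failure CDF F(t) = 1 - exp(-rate*t)."""
    return -math.log(1.0 - random.random()) / rate

def subgraph_failure_time(rates_series, rates_parallel):
    """Toy topology: a series chain fails at the earliest component fault;
    a redundant parallel pair fails only when all of its components have faulted."""
    t_series = min(draw_failure_time(r) for r in rates_series)
    t_parallel = max(draw_failure_time(r) for r in rates_parallel)
    return min(t_series, t_parallel)

def estimate_cdf(runs=1000):
    """The cumulative behavior over all runs gives the empirical CDF of the
    high-level sub-graph (1000 runs, as on the slide)."""
    times = sorted(subgraph_failure_time([0.1, 0.2], [0.05, 0.05])
                   for _ in range(runs))
    return lambda t: sum(1 for x in times if x <= t) / runs

random.seed(42)
cdf = estimate_cdf()
print(round(cdf(5.0), 2))  # fraction of runs in which the sub-graph fails by t = 5
```

The same empirical CDF can then be compared against the analytical result of sub-paradigm 1 to validate the calculations.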
16. Sub-paradigm 2: Simulated Probabilistic Behavior Analysis
[Figures: MC simulation of G_1,1 and of G_2,1]
17. Sub-paradigm 2: Simulated Probabilistic Behavior Analysis
[Figures: MC-simulated CDFs and DDFs]
18. Sub-paradigm 3: Behavior-Time Profile Modeling and Analysis
Time-profiles of component characteristics are collected by opportunistic agents across the system (or ecosystem)
Time-profiles of state transitions in components and also higher-level sub-graphs at various layers are collected or injected by the BSU
Machine learning methods are used to match the state transitions with the characteristics:
Support Vector Machine (SVM)
Bayesian networks
Agent-based data mining
Fuzzy logic
···
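As a toy illustration of matching characteristics to state transitions, the sketch below uses a nearest-neighbour rule; the feature names (inlet temperature, CPU load), the data, and the k-NN choice are all assumptions for illustration (the slide lists SVMs, Bayesian networks, and other learners for this role):

```python
import math

# Hypothetical time-profile samples: (inlet_temperature_C, cpu_load) paired
# with the state transition that followed them.
profiles = [
    ((25.0, 0.30), "stable"),
    ((27.0, 0.40), "stable"),
    ((38.0, 0.90), "pre-failure"),
    ((40.0, 0.85), "pre-failure"),
]

def predict_transition(sample, k=3):
    """Label a new characteristics sample by majority vote among its k
    nearest recorded profiles (Euclidean distance in feature space)."""
    ranked = sorted(profiles, key=lambda p: math.dist(p[0], sample))
    votes = [label for _, label in ranked[:k]]
    return max(set(votes), key=votes.count)

print(predict_transition((39.0, 0.88)))  # -> "pre-failure"
```

A real deployment would replace this rule with one of the learners named on the slide, trained on agent-collected time-profiles rather than a hand-written table.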
19. Sub-paradigm 3: Behavior-Time Profile Modeling and Analysis
Four motivations for behavior-time profile analysis:
Spontaneous faults have been reduced significantly compared to cause-and-effect faults
• Fewer purely hardware-caused faults compared to interaction-caused faults
Patterns and cycles in fault occurrence and, in general, in behavior
Handling of faulty systems that do not have any faulty components
• Context-sensitive diagnosis [Lamperti2011]
Handling of gradual events
20. Sub-paradigm 3: Behavior-Time Profile Modeling and Analysis
A simple example:
21. SLA and Service Grading
Even without considering the elastoplastic use case, BA can help in upgrading a service (for example, to the telco grade)
Probability of Availability (PoA): Lease-based business models
Predicting, isolating, and resolving failure events at the component or sub-system level before they reach the Service Layer
Probability of Completion (PoC): Task-based business models
Countermeasure options:
Taking high-risk components out of service (maintenance tickets)
Temporal redundancy
But all of this depends on the ability to predict high risk or failure
An example:
No BA: a major fault mode with MTBF = 10 weeks and MTTR = 10 minutes → 52:09 minutes of downtime a year < 52:33 → 4 nines
With BA: 90% of faults are detected 15 minutes before system failure → 5:13 minutes of downtime a year < 5:15 → 5 nines
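The slide's downtime arithmetic can be reproduced as follows; this sketch assumes that a fault detected 15 minutes in advance is averted entirely, as the example implies:

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes_per_year(mtbf_weeks, mttr_minutes, detection_rate=0.0):
    """Expected annual downtime: failures per year times repair time,
    discounted by the fraction of faults predicted and averted by BA."""
    failures_per_year = 365.0 / (mtbf_weeks * 7)
    return failures_per_year * mttr_minutes * (1.0 - detection_rate)

def availability(downtime_min):
    return 1.0 - downtime_min / MINUTES_PER_YEAR

no_ba = downtime_minutes_per_year(10, 10)          # ~52.14 min/year (52:09)
with_ba = downtime_minutes_per_year(10, 10, 0.90)  # ~5.21 min/year (5:13)
print(f"No BA:   {no_ba:.2f} min/year, availability {availability(no_ba):.6f}")
print(f"With BA: {with_ba:.2f} min/year, availability {availability(with_ba):.6f}")
```

The 4-nines and 5-nines budgets on the slide are 0.01% and 0.001% of a year (52:33 and 5:15 minutes respectively), which both computed figures stay under.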
22. Countermeasures and cost savings
An example: Full system
Two alternative modes to save both energy (cost) and life expectancy of components
24. Conclusions and Future Prospects
A multi-paradigm, multi-layer, multi-level cognitive behavior analysis framework is introduced
Three sub-paradigms (cross-cover):
Statistical inference
Statistical inference by means of simulation
Time-profile modeling and analysis
Multiple-granularity analysis and scalability:
Horizontal, vertical, and hierarchical scaling
Including other layers in the analysis: virtualware and middleware
Estimation of the PoA to improve system dependability and its service grade
A new distribution is introduced: the Tanh distribution
Validated on a real database: the lanl05 database
Future Prospects:
Large-scale operation of each sub-paradigm
Cognitive Response: Multi-Expert Decision Making, Cognitive Models
Integration of the framework with real computing systems:
• OpenStack, Open GSN
Machine learning techniques for the time-profile modeling sub-paradigm
Development of more sophisticated distributions
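The conclusions name a new "Tanh distribution" validated on the lanl05 database, but the slides do not give its functional form. A purely hypothetical sketch, assuming a failure CDF of F(t) = tanh(t/τ) for t ≥ 0 (both this form and the scale parameter τ are my assumptions, not the authors' definition):

```python
import math

# ASSUMED form: F(t) = tanh(t / tau) for t >= 0 rises monotonically from 0
# to 1, so it is a valid failure-time CDF; tau is a hypothetical scale.
def tanh_cdf(t, tau=1.0):
    return math.tanh(t / tau) if t >= 0 else 0.0

def tanh_pdf(t, tau=1.0):
    # d/dt tanh(t/tau) = (1/tau) * sech^2(t/tau)
    return (1.0 / tau) / math.cosh(t / tau) ** 2 if t >= 0 else 0.0

print(round(tanh_cdf(1.0), 3))  # ≈ 0.762
```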
25. Thank you! Any questions?
BATG
Reza FARRAHI MOGHADDAM, Eng., Ph.D., MIEEE (Research Associate): imriss@ieee.org, rfarrahi@synchromedia.ca
Fereydoun FARRAHI MOGHADDAM, Eng., M.Sc., MIEEE (PhD Student): farrahi@ieee.org, ffarrahi@synchromedia.ca
Vahid ASGHARI, Eng., Ph.D., MIEEE (Postdoctoral Fellow): vahid@emt.inrs.ca
Mohamed CHERIET, Eng., Ph.D., SMIEEE (Director of Synchromedia Lab): mohamed.cheriet@etsmtl.ca
http://www.synchromedia.ca/
NSERC