This white paper, written for non-technical audiences, discusses the scope and applicability of the IEEE (Institute of Electrical and Electronics Engineers) 1366 standard methodology. It focuses on solutions available to improve utility indices and Network Operations Center (NOC) metrics. The purpose is to educate and inform without overloading the audience with statistical formulas and complex details.
One goal of this white paper is to illustrate the statistical concept and raise awareness. Another is to summarize the concepts and their applications so they can be socialized with a wider audience, such as Critical Infrastructure, Network Operations Centers (NOCs), and the Sustainability program.
Abstract
Big data plays a serious role in business for making better predictions over business data collected from the real world. Finance is the newest sector where big data technologies such as Hadoop and NoSQL are making their mark, as analysts use them to draw predictions from financial data. This is especially interesting for stock market decisions, where predictions can translate into greater stock market profits. Such stock market analysis requires both daily data and the historical data of a specific stock market. Various techniques are used for analyzing unstructured data such as stock market reviews (day-to-day data) and historical time series of financial data, respectively. This paper discusses the methods used for analyzing both types of data.
Keywords: Big data, prediction, finance, stock market, business intelligence
A Clustering Method for Weak Signals to Support Anticipative Intelligence (CSCJournals)
Organizations need appropriate anticipative information to support their decision-making process. In contrast to strategic information analyses that help managers establish patterns using past information, anticipative intelligence is intended to help managers act based on the analysis of pieces of information that indicate a trend that may materialize in the future. One example of this kind of information is the weak signal, a short text related to a specific domain. In this work, pairs of weak signals, written in Portuguese, are compared to each other so that similarities can be identified and correlated weak signals can be clustered together. The idea is that analyzing the resulting groups may lead to the formulation of a hypothesis that can support the decision-making process. The proposed technique consists of two main steps: preprocessing the set of weak signals and clustering. The method was evaluated on a database of bio-energy weak signals. The main innovations of this work are: (i) the application of a computational methodology from the literature for analyzing anticipative information; and (ii) the adaptation of data mining techniques to implement this methodology in a software product.
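To make the two-step pipeline concrete, here is a minimal Python sketch: vectorize the short texts, then cluster by cosine similarity. The example signals are in English for readability, and the vectorizer settings and distance threshold are illustrative assumptions, not the authors' actual configuration.

```python
# A minimal sketch of the pipeline: preprocess weak signals, then cluster.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import AgglomerativeClustering

weak_signals = [
    "new enzyme lowers the cost of cellulosic ethanol",
    "startup pilots a cheaper cellulosic ethanol enzyme",
    "government announces a biodiesel tax incentive",
]

# Preprocessing: lowercasing and TF-IDF weighting; real Portuguese signals
# would also need Portuguese stop-word removal and stemming.
X = TfidfVectorizer(lowercase=True).fit_transform(weak_signals).toarray()

# Clustering: merge signals whose cosine distance is below a threshold,
# without fixing the number of clusters in advance.
labels = AgglomerativeClustering(
    n_clusters=None, distance_threshold=0.9,
    metric="cosine", linkage="average",
).fit_predict(X)
print(labels)  # e.g. [0, 0, 1]: the two enzyme signals end up together
```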
Dynamic Rule Base Construction and Maintenance Scheme for Disease Prediction (ijsrd.com)
Business and healthcare applications are tuned to automatically detect and react to events generated from local or remote sources. Event detection refers to an action taken in response to an activity, and association rule mining techniques are used to detect activities from data sets. Events are divided into two types: external events and internal events. External events are generated on remote machines and deliver data across distributed systems; internal events are delivered and derived by the system itself. The gap between the actual event and the event notification should be minimized, and event derivation should scale to a large number of complex rules. Attacks and their severity are identified from event derivation systems. Transactional databases and external data sources are used in the event detection process. The new event discovery process is designed to support uncertain data environments, where event derivation is performed on uncertain data values. Relevance estimation is a more challenging task under uncertain event analysis. Selectability and sampling mechanisms are used to improve derivation accuracy: selectability filters out events that are irrelevant to derivation by any rule, and a selectability algorithm is applied to extract new event derivations. A Bayesian network representation is used to derive new events given the arrival of an uncertain event and to compute their probability, and a sampling algorithm is used for efficient approximation of new event derivation. A medical decision support system is designed with this event detection model. The system adopts the new rule mapping mechanism for disease analysis and handles rule base construction and maintenance operations. Rule probability estimation is carried out using the Apriori algorithm, and the rule derivation process is optimized for a domain-specific model.
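As a concrete illustration of Apriori-style rule probability estimation, here is a minimal Python sketch; the symptom/disease transactions and the example rule are hypothetical, and a real rule base would mine all frequent itemsets rather than score one handwritten rule.

```python
# A minimal sketch of rule probability (confidence) estimation with
# Apriori-style support counting over hypothetical symptom/disease records.

transactions = [
    {"fever", "cough", "flu"},
    {"fever", "cough", "flu"},
    {"fever", "rash"},
    {"cough", "flu"},
]

def support(itemset, transactions):
    """Fraction of transactions containing every item in the itemset."""
    return sum(1 for t in transactions if itemset <= t) / len(transactions)

# Probability of the rule {fever, cough} -> {flu}:
antecedent = {"fever", "cough"}
confidence = support(antecedent | {"flu"}, transactions) / support(antecedent, transactions)
print(round(confidence, 2))  # 1.0: flu is present whenever fever and cough are
```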
AI TESTING: ENSURING A GOOD DATA SPLIT BETWEEN DATA SETS (TRAINING AND TEST) ... (ijsc)
Artificial Intelligence and Machine Learning have been around for a long time. In recent years, there has been a surge in popularity for applications integrating AI and ML technology. As with traditional development, software testing is a critical component of a successful AI/ML application. The development methodology used in AI/ML contrasts significantly from traditional development. In light of these distinctions, various software testing challenges arise. The emphasis of this paper is on the challenge of effectively splitting the data into training and testing data sets. By applying a k-Means clustering strategy to the data set followed by a decision tree, we can significantly increase the likelihood of the training data set to represent the domain of the full dataset and thus avoid training a model that is likely to fail because it has only learned a subset of the full data domain.
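A minimal sketch of the idea in Python, under illustrative assumptions (synthetic blob data, k=3): cluster first, then stratify the split on the cluster labels so every region of the data domain is represented in training.

```python
# A minimal sketch: k-Means cluster labels used to stratify the split,
# so the training set covers every cluster of the full data domain.
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split

X, _ = make_blobs(n_samples=300, centers=3, random_state=0)

clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Stratifying on cluster membership keeps each cluster proportionally
# represented in both the training and the test set.
X_train, X_test = train_test_split(X, test_size=0.2, stratify=clusters,
                                   random_state=0)
print(len(X_train), len(X_test))  # 240 60
```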
An enhanced kernel weighted collaborative recommended system to alleviate spa... (IJECEIAES)
User reviews in the form of ratings give an opportunity to judge user interest in the available products and a chance to recommend similar new items to customers. Personalized recommender techniques play a vital role in today's e-commerce era in predicting users' interest. Collaborative Filtering (CF) is one of the most widely used recommender approaches; it relies entirely on user ratings to provide recommendations. In this paper, an enhanced Collaborative Filtering system is proposed using a Kernel Weighted K-means Clustering (KWKC) approach with Radial Basis Functions (RBF) to alleviate the sparsity problem, where the lack of ratings makes accurate recommendation challenging. The proposed system has two phases of state transitions: Connected and Disconnected. In the Connected state, the system is in 'Recommendation mode', where the active user is given the predicted recommended items. In the Disconnected state, it is in 'Learning mode', where a hybrid learning approach and user clusters are used to define similar-user models. Disconnected-state activities are performed in the hidden layer of the RBF network and Connected-state activities in the output layer, while the input layer uses the original user ratings. The proposed KWKC is used to smooth the sparse original rating matrix and define the similar-user clusters. A benchmark comparative study is also made against classical learning and prediction techniques in terms of accuracy and computational time. The experimental setup uses the MovieLens dataset.
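A minimal sketch of the smoothing idea, assuming a toy rating matrix and a Gaussian (RBF) kernel over user similarity; the paper's full hidden/output-layer machinery and hybrid learning are omitted.

```python
# A minimal sketch: smooth a sparse rating matrix with RBF-kernel-weighted
# averages over similar users (0 means a missing rating).
import numpy as np

R = np.array([
    [5, 0, 3, 0],
    [4, 0, 3, 1],
    [0, 2, 0, 5],
], dtype=float)
mask = R > 0

def rbf_user_weights(R, mask, gamma=0.5):
    """Gaussian kernel on users, computed over co-rated items only."""
    n = R.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            common = mask[i] & mask[j]
            # No co-rated items: fall back to a large, fixed distance.
            d2 = ((R[i, common] - R[j, common]) ** 2).sum() if common.any() else 9.0
            W[i, j] = np.exp(-gamma * d2)
    return W

W = rbf_user_weights(R, mask)

# Fill each missing rating with the kernel-weighted mean of users who rated it.
filled = R.copy()
for i, j in zip(*np.where(~mask)):
    raters = mask[:, j]
    if raters.any():
        w = W[i, raters]
        filled[i, j] = (w @ R[raters, j]) / w.sum()
print(np.round(filled, 2))
```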
A Comprehensive review of Conversational Agent and its prediction algorithm (vivatechijri)
There is an exponential increase in the use of conversational bots, which can be described as platforms that chat with people using artificial intelligence. Recent advancements have made AI capable of learning from data and producing an output. This learning can be performed using various machine learning algorithms: machine learning involves constructing algorithms that can learn from data and predict outcomes. This paper reviews the efficiency of different machine learning algorithms used in conversational bots.
In power system operation, operators must monitor a large and complex set of operational data while operating all kinds of control equipment and devices to maintain system reliability and stability within particular constraints. Violating these constraints could lead to misoperation, although some tools can help operators avoid this situation. With the interconnection of power systems, grid operators must be able to analyze and evaluate a large amount of data and make cognitively demanding evaluations and critical, timely decisions under high pressure, all while evaluating ever larger, more complex, and changing data streams and staying fully aware of changing system operational conditions. They must focus on individually demanding and precise tasks while maintaining an overall understanding of a large amount of dynamic data affecting their perception and operation.
The process requires maintaining awareness of the overall situation while evaluating an overwhelming amount of critical and ever-changing information. In power grid operation, situation awareness describes the performance of an operator during operation of the grid. From a human factors point of view, situation awareness is described as (Endsley, 1995): "An expert's perception of the elements in the environment within a volume of time and space, the comprehension of their meaning, and the projection of their status in the near future". An example is the set of visualization tools that enable operators and other decision makers to operate a power grid effectively without cognitive overload.
Granularity analysis of classification and estimation for complex datasets wi... (IJECEIAES)
Dispersed and unstructured datasets make it hard to determine exactly how much space is required. Depending on the size and distribution of the data, and especially when classes overlap significantly, the level of granularity needed for precise classification of the datasets grows. Data complexity is one of the major attributes governing the proper level of granularity, as it has a direct impact on performance. Dataset classification is a vital step in complex data analytics, ensuring that a dataset is ready to be efficiently scrutinized. Data collections routinely contain missing, noisy, and out-of-range values, and analytics that have not been properly classified for such problems can produce unreliable outcomes. Hence, classification of complex data sources helps improve the accuracy of datasets fed to machine learning algorithms. Dataset complexity and pre-processing time reflect the effectiveness of an individual algorithm. Once the complexity of datasets is characterized, comparatively simpler datasets can be investigated further with a parallelism approach. Speedup performance is measured by executing an MOA simulation. Our proposed classification approach outperforms alternatives and improves the granularity level of complex datasets.
Clustering Prediction Techniques in Defining and Predicting Customers Defecti... (IJECEIAES)
With the growth of the e-commerce sector, customers have more choices, which encourages them to divide their purchases among several e-commerce sites and compare competitors' products, increasing the risk of churn. A review of the literature on customer churn models reveals that no prior research has considered both partial and total defection in non-contractual online environments; instead, studies focused on either total or partial defection. This study proposes a customer churn prediction model in an e-commerce context, wherein a clustering phase is based on the integration of the k-means method and the Length-Recency-Frequency-Monetary (LRFM) model. This phase is employed to define churn, followed by a multi-class prediction phase based on three classification techniques: simple decision tree, artificial neural networks, and decision tree ensemble, in which the dependent variable classifies a particular customer as continuing loyal buying patterns (Non-churned), a partial defector (Partially-churned), or a total defector (Totally-churned). Macro-averaging measures, including average accuracy and macro-averages of Precision, Recall, and F-1, are used to evaluate classifier performance under 10-fold cross-validation. Using real data from an online store, the results show the decision tree ensemble model is more efficient than the other models in identifying both future partial and total defection.
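A minimal sketch of the clustering phase under illustrative assumptions (a toy transaction log, a fixed reference date, k=3): derive LRFM features per customer, then segment with k-means. The churn labeling and the three-classifier prediction phase are omitted.

```python
# A minimal sketch: LRFM features per customer from a transaction log,
# then k-means segmentation.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

tx = pd.DataFrame({
    "customer": ["a", "a", "a", "b", "b", "c"],
    "date": pd.to_datetime(
        ["2024-01-05", "2024-03-01", "2024-05-20",
         "2024-01-10", "2024-02-01", "2024-05-28"]),
    "amount": [50.0, 20.0, 35.0, 200.0, 150.0, 10.0],
})
now = pd.Timestamp("2024-06-01")

lrfm = tx.groupby("customer").agg(
    L=("date", lambda d: (d.max() - d.min()).days),  # length of relationship
    R=("date", lambda d: (now - d.max()).days),      # recency
    F=("date", "count"),                             # frequency
    M=("amount", "mean"),                            # monetary value
)

X = StandardScaler().fit_transform(lrfm)
lrfm["segment"] = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(lrfm)
```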
Identification of important features and data mining classification technique... (IJECEIAES)
Employee absenteeism at work costs organizations billions a year. Predicting employee absenteeism and the reasons behind absences helps organizations reduce expenses and increase productivity. Data mining turns the vast volume of human resources data into information that can help in decision-making and prediction. Although feature selection is a critical step in data mining for enhancing the efficiency of the final prediction, it is not yet known which feature selection method is better. Therefore, this paper compares the performance of three well-known feature selection methods in absenteeism prediction: relief-based feature selection, correlation-based feature selection, and information-gain feature selection. In addition, this paper aims to find the combination of feature selection method and data mining technique that best enhances absenteeism prediction accuracy. Seven classification techniques were used as prediction models, and a cross-validation approach was utilized to assess them for more realistic and reliable results. The dataset was built at a courier company in Brazil from records of absenteeism at work. In the experimental results, correlation-based feature selection surpasses the other methods across the performance measurements. Furthermore, the bagging classifier was the best-performing data mining technique when features were selected using correlation-based feature selection, with an accuracy rate of 92%.
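A minimal sketch of this kind of comparison in Python, under illustrative assumptions: synthetic data, k=10 selected features, a bagging classifier downstream, and the ANOVA F-score standing in for correlation-based selection; relief-based selection is not in scikit-learn, so it is omitted here.

```python
# A minimal sketch: compare feature-selection methods by the downstream
# cross-validated accuracy of the same classifier.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif, mutual_info_classif
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

X, y = make_classification(n_samples=500, n_features=30, n_informative=8,
                           random_state=0)

selectors = {
    "correlation-based (ANOVA F)": SelectKBest(f_classif, k=10),
    "information-gain (mutual info)": SelectKBest(mutual_info_classif, k=10),
}
for name, selector in selectors.items():
    pipe = make_pipeline(selector, BaggingClassifier(random_state=0))
    acc = cross_val_score(pipe, X, y, cv=10).mean()  # 10-fold CV, as in the paper
    print(f"{name}: {acc:.3f}")
```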
CPU HARDWARE CLASSIFICATION AND PERFORMANCE PREDICTION USING NEURAL NETWORKS ... (gerogepatton)
We propose a set of methods to classify vendors based on estimated CPU performance and to predict CPU performance based on hardware components. For vendor classification, we use the highest and lowest estimated performance and the frequency of occurrences of each vendor to create classification zones. These zones can be used to identify vendors who manufacture hardware that satisfies a given performance requirement. We use multi-layered neural networks for performance prediction, which account for nonlinearity in performance data. Various neural network architectures are analysed in comparison to linear, quadratic, and cubic regression. Experiments show that neural networks obtain low error and high correlation between predicted and published performance values, while cubic regression produces higher correlation than neural networks when more data is used for training than testing. An analysis of how the neural network architecture affects prediction is also performed. The proposed methods can be used to identify suitable hardware replacements.
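A minimal sketch of the prediction comparison, with synthetic hardware-style features and an assumed network shape in place of the paper's benchmark data and architectures:

```python
# A minimal sketch: a small multi-layer neural network versus cubic
# polynomial regression on nonlinear "performance" data.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(1, 5, size=(400, 3))           # e.g. clock, cores, cache
y = 10 * X[:, 0] * X[:, 1] + 5 * X[:, 2] ** 2  # nonlinear target
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

mlp = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(32, 16),
                                 max_iter=5000, random_state=0))
cubic = make_pipeline(PolynomialFeatures(degree=3), LinearRegression())

for name, model in [("neural network", mlp), ("cubic regression", cubic)]:
    model.fit(X_tr, y_tr)
    print(name, round(model.score(X_te, y_te), 3))  # R^2 on held-out data
```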
MIDDLE SCHOOL STUDENTS AND THE INTERNET IN TYCHERO, 2012 (tryfonid)
The survey was conducted in mid-March 2012, as part of a research project titled "Learning to surf creatively and safely", using closed-ended questionnaires that were answered anonymously.
ANOMALY DETECTION AND ATTRIBUTION USING AUTO FORECAST AND DIRECTED GRAPHS (IJDKP)
In the business world, decision makers rely heavily on data to back their decisions. With the quantum of data increasing rapidly, traditional methods used to generate insights from reports and dashboards will soon become intractable. This creates a need for efficient systems that can substitute human intelligence and reduce latency in decision making. This paper describes an approach to efficiently process time series data with multiple dimensions, such as geographies, verticals, and products, to detect anomalies in the data, and further, to explain potential reasons for the occurrence of the anomalies. The algorithm implements automatic selection of forecast models to make reliable forecasts and detect such anomalies. Depth First Search (DFS) is applied to analyse each of these anomalies and find its root causes. The algorithm filters the redundant causes and reports the insights to the stakeholders. Apart from being a hair-trigger KPI tracking mechanism, this algorithm can also be customized for problems like A/B testing, campaign tracking, and product evaluations.
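A minimal sketch of the anomaly detection step, assuming a synthetic KPI series and a simple rolling-mean forecaster in place of the paper's auto-selected models; the DFS-based root-cause attribution over dimensions is omitted.

```python
# A minimal sketch: forecast a KPI from a rolling window and flag points
# whose residual z-score exceeds a threshold.
import numpy as np

rng = np.random.default_rng(0)
series = 100 + rng.normal(0, 1, 200)  # a flat KPI with noise
series[150] += 10                     # inject an anomaly

window, threshold = 20, 4.0
anomalies = []
for t in range(window, len(series)):
    history = series[t - window:t]
    forecast = history.mean()   # stand-in for the auto-selected forecast model
    z = (series[t] - forecast) / (history.std() + 1e-9)
    if abs(z) > threshold:
        anomalies.append(t)
print(anomalies)  # expected to contain index 150
```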
Survey of streaming data warehouse update scheduling (eSAT Journals)
In this paper, we study the problem of scheduling updates for streaming data warehouses, which combine traditional data warehouses with data stream systems. Here, jobs are the processes responsible for loading new data into the tables, and the purpose of scheduling is to decrease data staleness. The approach also handles the challenges faced by streaming warehouses, such as data consistency, view hierarchies, heterogeneity in update jobs caused by dissimilar arrival times and data sizes, and update preemption. Data staleness is the scheduling metric considered here.
Keywords: partitioning strategy, scalable scheduling, data stream management system.
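A minimal sketch of staleness-driven scheduling under illustrative assumptions (three tables with made-up arrival and processing times): at each step, run the pending update for the table whose data is currently most stale.

```python
# A minimal sketch: greedily run the pending update job for the table
# whose data is currently most stale.

# (table, last_refreshed, update_arrival, processing_time)
jobs = [("orders", 0, 2, 3), ("clicks", 0, 1, 1), ("sensors", 0, 4, 2)]

clock = 0
schedule = []
pending = sorted(jobs, key=lambda j: j[2])  # order by update arrival time
while pending:
    clock = max(clock, pending[0][2])       # wait for the next arrival if idle
    ready = [j for j in pending if j[2] <= clock]
    # Staleness of a table = time elapsed since its data was last fresh.
    job = max(ready, key=lambda j: clock - j[1])
    pending.remove(job)
    clock += job[3]
    schedule.append((job[0], clock))
print(schedule)  # [(table, completion_time), ...] in staleness-greedy order
```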
The needed transition from traditional maintenance practices to a dependency on data, using analytics to alter maintenance practices, has the potential to add value while creating new rewards and challenges for the utility world.
Similar to IEEE 2.5 Beta Method Unraveled - A Technical Paper Prepared for SCTE/ISBE (20)
Communications Mining Series - Zero to Hero - Session 1 (DianaGray10)
This session provides an introduction to UiPath Communications Mining, its importance, and a platform overview. You will acquire a good understanding of the phases in Communications Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
A tale of scale & speed: How the US Navy is enabling software delivery from l... (sonjaschweigert1)
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATOs (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf (91mobiles)
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Generative AI Deep Dive: Advancing from Proof of Concept to Production (Aggregage)
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
Pushing the limits of ePRTC: 100ns holdover for 100 days (Adtran)
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
Epistemic Interaction - tuning interfaces to provide information for AI support (Alan Dix)
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Climate Impact of Software Testing at Nordic Testing Days (Kari Kakkonen)
My slides at Nordic Testing Days 6.6.2024
Climate impact and sustainability of software testing are discussed in the talk. ICT and testing must carry their part of the global responsibility to help with climate warming. We can minimize the carbon footprint, but we can also have a carbon handprint, a positive impact on the climate. Quality characteristics can be extended with sustainability and then measured continuously. Test environments can be used less, at smaller scale, and on demand. Test techniques can be used to optimize or minimize the number of tests. Test automation can be used to speed up testing.
DevOps and Testing slides at DASA Connect (Kari Kakkonen)
Slides by me and Rik Marselis from the DASA Connect conference on 30.5.2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps is. We closed with a lovely workshop in which participants tried to find different ways to think about quality and testing in different parts of the DevOps infinity loop.
Transcript: Selling digital books in 2024: Insights from industry leaders - T... (BookNet Canada)
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
Removing Uninteresting Bytes in Software Fuzzing (Aftab Hussain)
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing xml documents, and Binutil's readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format). Our preliminary results show that AFL+DIAR does not only discover new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
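To illustrate the core idea of dropping uninteresting seed bytes, here is a minimal, hypothetical Python sketch. The get_coverage function is a toy stand-in for running an instrumented target (it is not part of AFL or the DIAR tooling), and real trimming would need to re-validate the seed after bytes are removed.

```python
# A hypothetical sketch of seed trimming: keep only bytes whose mutation
# changes observed coverage. get_coverage is a toy stand-in for an
# instrumented target run, not a real fuzzer API.
def get_coverage(data: bytes) -> frozenset:
    edges = {"start"}
    if data[:4] == b"\x7fELF":          # only the header bytes matter here
        edges.add("parsed_header")
        if len(data) > 4 and data[4] > 128:
            edges.add("big_field")
    return frozenset(edges)

def trim_seed(seed: bytes) -> bytes:
    base = get_coverage(seed)
    keep = bytearray()
    for i, b in enumerate(seed):
        mutated = seed[:i] + bytes([b ^ 0xFF]) + seed[i + 1:]
        if get_coverage(mutated) != base:  # flipping this byte is "interesting"
            keep.append(b)
    return bytes(keep)

seed = b"\x7fELF\xff" + b"lots of padding"
print(trim_seed(seed))  # b'\x7fELF\xff': the padding bytes are dropped
```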
Securing your Kubernetes cluster: a step-by-step guide to success! (KatiaHIMEUR1)
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... (James Anderson)
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices want to take full advantage of the features available on those devices, but many of these features provide convenience and capability at the expense of security. This best practices guide outlines steps users can take to better protect their personal devices and information.