The document discusses quantitative and qualitative research methods, including experiments, sampling, statistical significance, correlation, and their appropriate uses and limitations. It notes that quantitative research aims to quantify results for statistical analysis while qualitative research provides rich context and interpretation. Key points include that experiments offer control but reduce real-world validity, sampling should be representative but often relies on convenience, and statistical significance and correlation do not necessarily prove causation.
Implicit vs Explicit trust in Social Matrix Factorization – Alejandro Bellogin
1) The document discusses implicit vs explicit trust in social matrix factorization for recommender systems. It aims to evaluate methods for predicting implicit trust scores between users when explicit trust scores are unavailable.
2) Several trust metrics are evaluated to find the best method for inferring implicit trust scores based on user interaction data. The metric from O'Donovan and Smyth performed best at predicting implicit trust scores.
3) Social matrix factorization using the best implicit trust scoring method performed as accurately as using explicit trust scores, showing that implicit trust can be incorporated when explicit trust is unavailable.
Implicit vs. Explicit Trust in Social Matrix Factorization – Soudé Fazeli
1) The document discusses implicit vs explicit trust in social matrix factorization for recommender systems. It aims to accurately predict trust between users based on interaction histories rather than relying on explicit trust scores provided by users.
2) Several trust metrics are evaluated to find the best method for inferring implicit trust scores, and the metric by O'Donovan and Smyth performed best. Social matrix factorization using the implicit trust scores performed as accurately as using explicit trust scores.
3) Incorporating implicit trust inferred from interaction data allows for social matrix factorization even when explicit trust scores are unavailable, outperforming baseline recommender systems. Future work aims to define trust taking context into account and evaluate additional recommendation quality dimensions.
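Neither summary reproduces the trust formula itself. As a rough sketch of the general idea behind profile-level implicit trust in the spirit of O'Donovan and Smyth (the paper's exact formulation may differ), a producer's trust score can be taken as the fraction of co-rated items on which their rating would have been an acceptable prediction for the consumer:

```python
# Rough sketch of a profile-level implicit trust score: the fraction of
# co-rated items on which the producer's rating falls within a tolerance of
# the consumer's own rating. Illustrative only; the paper's exact definition
# may differ.

def implicit_trust(consumer_ratings, producer_ratings, epsilon=1.0):
    co_rated = set(consumer_ratings) & set(producer_ratings)
    if not co_rated:
        return 0.0
    correct = sum(1 for item in co_rated
                  if abs(consumer_ratings[item] - producer_ratings[item]) <= epsilon)
    return correct / len(co_rated)

alice = {"item1": 4, "item2": 2, "item3": 5}
bob   = {"item1": 5, "item2": 2, "item4": 3}
print(implicit_trust(alice, bob))  # 1.0: both co-rated items agree within tolerance
```

Scores computed this way could then stand in for the explicit trust values in the social matrix factorization model when users have not stated trust explicitly.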
Confusion matrix and classification evaluation metrics – Minesh A. Jethva
This document discusses classification evaluation metrics and their limitations. It introduces the confusion matrix and metrics calculated from it such as precision, recall, F1-score, and accuracy. The summary highlights that these metrics can be "hacked" and misleading. More robust alternatives like balanced accuracy and MCC are presented that account for true negatives and are not as affected by class imbalance. Comprehensive reporting of multiple metrics from different perspectives is recommended for fully understanding a model's performance.
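As a quick illustration of why a single headline metric can mislead under class imbalance (the toy labels below are invented for this sketch, not taken from the document), the following scikit-learn snippet computes the metrics the summary contrasts from one confusion matrix:

```python
# Sketch: standard vs. more robust classification metrics on an imbalanced
# toy example (labels are illustrative only).
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, balanced_accuracy_score,
                             matthews_corrcoef, confusion_matrix)

y_true = [0]*90 + [1]*10                   # 90% negatives, 10% positives
y_pred = [0]*88 + [1]*2 + [0]*8 + [1]*2    # a weak classifier biased toward class 0

print(confusion_matrix(y_true, y_pred))    # rows: true class, cols: predicted class
print("accuracy         ", accuracy_score(y_true, y_pred))
print("precision        ", precision_score(y_true, y_pred))
print("recall           ", recall_score(y_true, y_pred))
print("f1               ", f1_score(y_true, y_pred))
print("balanced accuracy", balanced_accuracy_score(y_true, y_pred))
print("MCC              ", matthews_corrcoef(y_true, y_pred))
```

On this skewed example plain accuracy looks strong while balanced accuracy and MCC expose the weak minority-class performance, which is the kind of misleading reading the document warns about.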
The document discusses attacks on collaborative recommendation systems. It finds that relatively small attacks can be effective at influencing recommendations. Various attack types are explored, including average, bandwagon, segment and nuke attacks. Detection methods are also examined, such as clustering, dimensionality reduction and supervised classification. While detection makes attacks less effective, obfuscated attacks remain a challenge. Overall, the document analyzes how to model and defend against attacks aimed at manipulating the insights provided by collaborative systems.
Textual & Sentiment Analysis of Movie Reviews – Yousef Fadila
This document discusses analyzing sentiment in movie reviews using machine learning. It motivates the use of sentiment analysis to help movie studios understand popularity and develop marketing strategies. It describes the dataset, objectives of analyzing sentiment, preliminary analysis showing 86% accuracy, and exploring models like SVC and KNN. Parameter tuning improved SVC accuracy to 84%. The document discusses identifying false positives/negatives and finding better features to distinguish sentiment. Overall it aims to help movie studios make business decisions from review sentiment analysis.
This document summarizes a study analyzing large-scale Amazon book review data to understand how people evaluate opinions. The study found that the perceived helpfulness of a review depends on both its content and how its rating compares to other reviews of the same product. Specifically, reviews with ratings closer to the average rating tended to be considered more helpful, supporting a "conformity hypothesis". However, reviews above the average were sometimes considered slightly more helpful than reviews below the average. The study also found differences across countries, with negative reviews more favored in Japan.
The use of an architecture–centered development process for delivering information technology began with the introduction of client/server based systems. Early client/server and legacy mainframe applications did not provide the architectural flexibility needed to meet the changing business requirements of the modern manufacturing organization. With the introduction of Object Oriented systems, the need for an architecture–centered process became a critical success factor. Object reuse, layered system components, data abstraction, web based user interfaces, CORBA, and rapid development and deployment processes all provide economic incentives for object technologies. However, adopting the latest object oriented technology, without an adequate understanding of how this technology fits a specific architecture, risks the creation of an instant legacy system.
There are physical phenomena in everyday life that are taken for granted simply because the explanation of their behavior closely matches the expectations of the observer. For some of these phenomena, an extensive body of theoretical knowledge exists which matches the experimental observations. The electromagnetic force is one of these phenomena. The observer can envision empty space filled with electromagnetic waves, and describe these waves and their effects on matter with mathematical precision.
Devices can be constructed, based on electromagnetic theory, that confirm our belief that the electromagnetic phenomena are well understood— that is, observations are produced consistent with expectations. With further investigation new questions arise, requiring a reformulation of the theory which supports these observations.
Parameter Validation for Software Reliability – Glen Alleman
The passing of parameters to procedures within a programming language allows the user great freedom in the design of procedures. A general purpose algorithm may be constructed which takes various parameters as input and produces various results, depending upon the input values. The concept of parameter passing is embedded within most programming languages in some manner, either by explicit parameter identifiers as seen in FORTRAN-type calling sequences or by implicit parameter identifiers as seen in stack-oriented languages. Interpreter-based programming languages make use of variants of both of these types, such as APL's argument lists, which are pushed on a stack when the function is invoked.
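As a loose modern illustration (in Python rather than the languages discussed above) of explicit parameter identifiers versus an implicit, list-like argument mechanism, and of why the latter shifts validation work onto the callee:

```python
# Illustration only: explicit named parameters versus arguments that arrive
# as a single sequence the callee must unpack and validate itself.

def area_explicit(width, height):   # explicit parameter identifiers
    return width * height

def area_implicit(*args):           # arguments arrive as one sequence
    width, height = args            # the callee unpacks (and should validate) them
    return width * height

print(area_explicit(3, 4))   # 12
print(area_implicit(3, 4))   # 12 -- but area_implicit(3) would fail at unpack time
```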
The use of an architecture–centered development process for delivering information technology began with the introduction of client/server based systems. Early client/server and legacy mainframe applications did not provide the architectural flexibility needed to meet the changing business requirements of the modern publishing organization. With the introduction of Object Oriented systems, the need for an architecture–centered process became a critical success factor. Object reuse, layered system components, data abstraction, web based user interfaces, CORBA, and rapid development and deployment processes all provide economic incentives for object technologies. However, adopting the latest object oriented technology, without an adequate understanding of how this technology fits a specific architecture, risks the creation of an instant legacy system.
Publishing software systems must be architected in order to deal with the current and future needs of the business organization. Managing software projects using architecture–centered methodologies must be an intentional step in the process of deploying information systems – not an accidental by–product of the software acquisition and integration process.
We cannot determine the Value of something unless we know its cost. But determining Value requires having tangible measures to compare against that cost. In the Systems Engineering Paradigm, these are the Measures of Effectiveness, Measures of Performance, Technical Performance Measures, and Key Performance Parameters.
Exception Handling in CORBA Environments – Glen Alleman
Component–based software development introduces new sources of risk because (i) independently developed components cannot be fully trusted to conform to their published specifications and (ii) software failures are caused by systemic patterns of interaction that cannot be localized to any individual component. The need for a separate exception handling infrastructure to address these issues becomes the responsibility of the exception handling subsystem. COTS components focus on executing their own normal problem solving behavior, while their exception handling service focuses on detecting and resolving exceptions within the local COTS domain [Dellarocas 98]. The exception handling architecture of the integrated system is realized by adding exception handling logic to each application component using a middleware approach.
Traditional project management methods are based on scientific principles considered “normal science,” but lack a theoretical basis for this approach. These principles make use of linear step–wise refinement of the project management processes using a planning–as–management paradigm. Plans made in this paradigm are adjusted by linear feedback methods. These plans cannot cope with the multiple interacting and continuously changing technology and market forces. They behave as a linear, deterministic, Closed–Loop control system.
Making Agile Development work in Government Contracting – Glen Alleman
Before any of the current "agile" development methods, Earned Value Management provided information for planning and controlling complex projects by measuring how much "value" was produced for a given cost in a period of time. One shortcoming of an agile development method is its inability to forecast the future cost and schedule of the project beyond the use of "yesterday's weather" metrics. These agile methods assume the delivered value, "velocity" in the case of XP, is compared with the estimated value – this is a simple comparison between budget and actual cost resulting in a Cost Variance.
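For reference, a minimal worked example of the Earned Value comparison described above, using the standard EVM formulas with invented numbers:

```python
# Minimal worked example of the Earned Value comparison (values illustrative).
budgeted_cost_of_work_scheduled = 100_000   # PV: planned value to date
budgeted_cost_of_work_performed = 80_000    # EV: earned value (work actually done)
actual_cost_of_work_performed   = 90_000    # AC: actual cost to date

cost_variance     = budgeted_cost_of_work_performed - actual_cost_of_work_performed    # CV = EV - AC
schedule_variance = budgeted_cost_of_work_performed - budgeted_cost_of_work_scheduled  # SV = EV - PV
cpi = budgeted_cost_of_work_performed / actual_cost_of_work_performed
spi = budgeted_cost_of_work_performed / budgeted_cost_of_work_scheduled

print(f"CV = {cost_variance:+,}  SV = {schedule_variance:+,}  CPI = {cpi:.2f}  SPI = {spi:.2f}")
# CV = -10,000 (over cost), SV = -20,000 (behind plan)
```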
There are many lists describing the reasons for project failure. That’s easy to do.
Assuring the success of an IT project is much harder. This success starts with assessing the capabilities of the project delivery process and participants
The Impedance Mismatch in Integrated Engineering Design Systems is an issue in the Integration of commercial off the shelf (COTS) components.
This issue is a member of the Impedance Mismatch problems found when commercial off the shelf components are assembled into systems. This mismatch occurs when event, control sequence, or data semantics of two or more participating application domains are mismatched. During the system integration process the impedance mismatch must be addressed through some means, either through an integration layer which hides the mismatch or through an integrating service, such as CORBA, which facilitates the impedance adaptation between the applications.
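As a generic sketch of the integration-layer option (the class names, fields, and units below are hypothetical, not taken from the document), an adapter can translate one application's data semantics into the form the other expects:

```python
# Generic sketch of an integration layer hiding an impedance mismatch:
# one application reports lengths in millimetres under "len_mm", the other
# expects metres under "length_m". All names and units are hypothetical.

class CadAdapter:
    """Adapts CAD-style events to the analysis application's expected schema."""
    def __init__(self, analysis_app):
        self.analysis_app = analysis_app

    def on_cad_event(self, event: dict) -> None:
        translated = {
            "part_id": event["id"],
            "length_m": event["len_mm"] / 1000.0,   # unit/semantic translation
        }
        self.analysis_app.receive(translated)

class AnalysisApp:
    def receive(self, record: dict) -> None:
        print(f"analysing part {record['part_id']} of length {record['length_m']} m")

CadAdapter(AnalysisApp()).on_cad_event({"id": "P-42", "len_mm": 1250})
```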
Program governance is the process of developing and implementing policies, procedures, roles, and processes to increase the likelihood of project success. It aims to align projects with business needs, provide predictable processes, enable efficient delivery, and support measurable improvement. Effective program governance provides decision-making structures, collaboration processes, and accountability to help connect issues to resolutions and deliver expected value from projects.
The resources listed here are the starting point for anyone interested in applying the principles developed in this briefing for integrating Agile with Earned Value Management projects
This document discusses using Kaizen, or continuous improvement, to improve software development processes. It outlines three steps: 1) Reduce Waste by identifying unnecessary steps and motions in processes. 2) Assure Process Usage by standardizing work and establishing pull-based workflows. 3) Define Controls by establishing measures and accountability to ensure improvements are sustained long-term. The document provides examples of applying Kaizen principles like identifying different types of waste, conducting Kaizen events to generate improvements, and using the Kaizen cycle of focus, evaluate, solve, and act. The overall message is that incremental, daily improvements to processes can significantly increase organizational value over time.
Project governance provides a framework to ensure projects deliver expected value. It involves defining what the organization wants to achieve, how projects will be planned and executed, and how success will be measured. Implementing a project governance model based on a maturity framework like OGC P3M3 can improve budget/schedule predictability, productivity, quality and customer satisfaction. Reaching level 3 maturity involves defining standard processes in key areas like risk management and implementing them consistently across projects.
Pseudo–science and the art of software methods – Glen Alleman
We hear all the time about the next big thing that will undo all the standard principles of business management, software development methods, and processes needed to produce reliable, robust products as planned. Here are some "test" questions to get answered before getting too excited.
The document discusses principles of program governance, which focuses on delivering products or services to support revenue growth while reducing costs. Effective program governance transitions organizations from solely focusing on operational effectiveness to also prioritizing strategy. This involves installing strategy, objectives and metrics to manage operations strategically. Drivers for governance include addressing perceived costs, integrating siloed business processes, and increasing visibility of costs and value. The role of governance is to provide strategic leadership, manage from a customer perspective, and reduce alignment, execution and innovation gaps between business units.
Root Cause Analysis is the method of problem solving that identifies the root causes of failures or problems. A root cause is the source of a problem and its resulting symptom, that once removed, corrects or prevents an undesirable outcome from recurring.
The naturally occurring uncertainties (Aleatory) in cost, schedule, and technical performance can be modeled in a Monte Carlo Simulation tool. The Event Based uncertainties (Epistemic) require capture, modeling of their impacts, defining handling strategies, modeling the effectiveness of these handling efforts, and assessing the residual risks and the impacts of both the original and residual risks on the program.
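A minimal sketch of the aleatory part of that modeling, assuming three-point duration estimates and a triangular distribution (all numbers invented for illustration):

```python
# Minimal Monte Carlo sketch: sample task durations from triangular
# distributions and examine the spread of the total. The three-point
# estimates below are illustrative only.
import random

tasks = {            # (optimistic, most likely, pessimistic) durations in days
    "design":    (10, 15, 25),
    "build":     (20, 30, 55),
    "integrate": ( 5, 10, 20),
}

def one_trial():
    return sum(random.triangular(lo, hi, mode) for lo, mode, hi in tasks.values())

trials = sorted(one_trial() for _ in range(10_000))
p50, p80 = trials[len(trials)//2], trials[int(len(trials)*0.8)]
print(f"50% confidence finish: {p50:.1f} days, 80% confidence: {p80:.1f} days")
```

Epistemic, event-based risks are not captured by this kind of distribution sampling; they need the explicit handling strategies and residual-risk assessment described above.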
The document discusses the presidential transition process in the United States. It outlines that there are over 100 government agencies that will brief the president-elect. It also details the budgets allocated for the outgoing administration, president-elect, and presidential candidates for the transition. The new president has just 75 days after being elected to prepare to govern and take over from the outgoing administration in a six hour handover on Inauguration Day. The presidential transitions act was passed in 2016 to help smooth the transition between administrations.
A NOVEL APPROACH FOR TWITTER SENTIMENT ANALYSIS USING HYBRID CLASSIFIER – IRJET Journal
This document discusses a novel approach for Twitter sentiment analysis using a hybrid classifier. It begins with an abstract that outlines the goal of examining and analyzing Twitter sentiment during important events using a Bayesian network classifier and implementing principal component analysis for feature extraction. It then combines linear regression, XGBoost, and random forest classifiers. The results are evaluated based on accuracy, precision, recall, and F1-score metrics. The document then discusses challenges in sentiment analysis like co-reference resolution, association with time periods, sarcasm handling, domain dependency, negations, and spam detection that impact the sentiment analysis process.
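The paper's code is not reproduced here; as a loose sketch of one way such a hybrid could be assembled with scikit-learn (gradient boosting and logistic regression standing in for the paper's XGBoost and linear model, and random placeholder data instead of tweets):

```python
# Loose sketch of a hybrid classifier: PCA feature extraction feeding a
# soft-voting ensemble. Placeholder data only; not the paper's pipeline.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.ensemble import VotingClassifier, RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

X = np.random.rand(300, 50)          # placeholder feature vectors
y = np.random.randint(0, 2, 300)     # placeholder sentiment labels

ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("gb", GradientBoostingClassifier()),
                ("rf", RandomForestClassifier())],
    voting="soft")
model = make_pipeline(PCA(n_components=10), ensemble)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model.fit(X_tr, y_tr)
print(classification_report(y_te, model.predict(X_te)))  # accuracy, precision, recall, F1
```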
Focused on social media strategies and effective ways to monitor success for your non-profit or change-focused organization. Christopher Berry, Group Director of Marketing Science at Critical Mass will speak on practical social analytics.
http://home.ubalt.edu/ntsbarsh/business-stat/opre/partIX.htm
Tools for Decision Analysis: Analysis of Risky Decisions
If you will begin with certainties, you shall end in doubts, but if you will be content to begin with doubts, you shall end in almost certainties. -- Francis Bacon
Making decisions is certainly the most important task of a manager and it is often a very difficult one. This site offers a decision making procedure for solving complex problems step by step. It presents the decision-analysis process for both public and private decision-making, using different decision criteria, different types of information, and information of varying quality. It describes the elements in the analysis of decision alternatives and choices, as well as the goals and objectives that guide decision-making. The key issues related to a decision-maker's preferences regarding alternatives, criteria for choice, and choice modes, together with the risk assessment tools, are also presented.
Professor Hossein Arsham
MENU
1. Introduction & Summary
2. Probabilistic Modeling: From Data to a Decisive Knowledge
3. Decision Analysis: Making Justifiable, Defensible Decisions
4. Elements of Decision Analysis Models
5. Decision Making Under Pure Uncertainty: Materials are presented in the context of Financial Portfolio Selections.
6. Limitations of Decision Making under Pure Uncertainty
7. Coping with Uncertainties
8. Decision Making Under Risk: Presentation is in the context of Financial Portfolio Selections under risk.
9. Making a Better Decision by Buying Reliable Information: Applications are drawn from Marketing a New Product.
10. Decision Tree and Influence Diagram
11. Why Managers Seek the Advice From Consulting Firms
12. Revising Your Expectation and its Risk
13. Determination of the Decision-Maker's Utility
14. Utility Function Representations with Applications
15. A Classification of Decision Maker's Relative Attitudes Toward Risk and Its Impact
16. The Discovery and Management of Losses
17. Risk: The Four Letters Word
18. Decision's Factors-Prioritization & Stability Analysis
19. Optimal Decision Making Process
20. JavaScript E-labs Learning Objects
21. A Critical Panoramic View of Classical Decision Analysis
22. Exercise Your Knowledge to Enhance What You Have Learned (PDF)
23. Appendix: A Collection of Keywords and Phrases
Companion Sites:
· Business Statistics
· Success Science
· Leadership Decision Making
· Linear Programming (LP) and Goal-Seeking Strategy
· Linear Optimization Software to Download
· Artificial-variable Free LP Solution Algorithms
· Integer Optimization and the Network Models
· Tools for LP Modeling Validation
· The Classical Simplex Method
· Zero-Sum Games with Applications
· Computer-assisted Learning Concepts and Techniques
· Linear Algebra and LP Connections
· From Linear to Nonlinear Optimization with Business Applications
· Construction of the Sensitivity Region for LP Models
· Zero Sagas in Four Dimensions
· Systems Simulation
This document summarizes a study that compares systematic and automated methods for sentiment analysis. The study extracted product features from online reviews of Samsung tablet PCs and used Naive Bayes classification to determine the positive, negative, and neutral sentiment distributions for each feature. Features like battery life had the highest positive sentiment, while cost had low positive sentiment. Weight had equal positive and negative sentiment. The study concludes the systematic approach provides more useful insight for product improvement than automated tools, which fail to identify specific sentiment-causing features.
Size Of Writing Paper. Writing Paper Sizes Chart. 2019-01-16 – Kimberly Gomez
This document provides instructions for requesting writing assistance from HelpWriting.net. It outlines a 5-step process:
1. Create an account by providing a password and email.
2. Complete a 10-minute order form with instructions, sources, deadline, and attaching a sample for style imitation.
3. Review bids from writers and choose one based on qualifications, history, and feedback. Place a deposit to start work.
4. Ensure the paper meets expectations and authorize final payment if pleased. Free revisions are provided.
5. Multiple revisions can be requested to ensure satisfaction. Plagiarized work results in a full refund. HelpWriting.net aims to fully meet customer needs.
Social life in digital societies: Trust, Reputation and Privacy EINS summer s... – i_scienceEU
Ralph Holz (Technische Universität München)
Pablo Aragón (Barcelona Media)
Katleen Gabriels (IBBT-SMIT, Vrije Universiteit Brussel)
Janet Xue (Macquarie University)
Anna Satsiou (Centre for Research and Technology Hellas – Information Technologies Institute)
Sorana Cimpan (Université de Savoie)
Norbert Blenn (Delft University of Technology)
More information: http://www.internet-science.eu/
Bram Wessel on UX Techniques for better Information Modeling – Bram Wessel
Bram Wessel's presentation at Taxonomy Bootcamp 2013 on how to use techniques from the User Experience discipline to develop and refine better Information Models
The document discusses relationship forecasting and why it is better than traditional budgeting approaches. Some key points:
1) Forecasting focuses on what is likely to happen rather than target-setting, and uses a range to capture uncertainty rather than a single number.
2) Considering best- and worst-case scenarios through a range helps have more honest, meaningful discussions about opportunities and risks.
3) Relationship forecasting emphasizes building trust between parties to improve forecast accuracy, which benefits the overall organization.
4) A variety of statistical tools from simple conversations to more advanced models like Monte Carlo simulations can help quantify probabilities within a forecast range.
1) The document discusses text analytics and sentiment analysis, explaining that these tools are important for businesses to make better data-driven decisions based on customer feedback and opinions expressed online.
2) It covers different approaches to sentiment analysis such as using natural language processing (NLP) to identify concepts and attributes, and data mining techniques that represent text as numeric vectors that can be modeled.
3) The benefits and drawbacks of the NLP and data mining approaches are compared, noting that NLP provides more control and interpretability while data mining may achieve better predictive performance.
Sentiment analysis and opinion mining are nearly the same thing; the minor difference is that opinion mining extracts and analyzes people's opinions about an entity, while sentiment analysis searches for sentiment words and expressions in a text and then analyzes them.
It uses machine learning techniques like SVM (Support Vector Machines) to analyze the text and classify it as positive, negative, or neutral.
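A minimal sketch of that SVM-based classification step, using a TF-IDF pipeline in scikit-learn on a tiny invented corpus (a real system would train on a large labelled dataset and may add a neutral class):

```python
# Minimal sketch of SVM-based sentiment classification (tiny made-up corpus).
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

texts  = ["great phone, love the battery", "terrible screen, waste of money",
          "absolutely fantastic value",     "worst purchase I have made"]
labels = ["positive", "negative", "positive", "negative"]

model = make_pipeline(TfidfVectorizer(), LinearSVC())
model.fit(texts, labels)
print(model.predict(["the battery is fantastic", "what a waste"]))
```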
A set of practical strategies and techniques for tackling vagueness in data modeling and creating models that are semantically more accurate and interoperable.
This document summarizes some useful tips for performing sentiment analysis. It discusses several factors to consider, including:
1) Using both lexicon-based and learning-based techniques, with lexicon-based providing higher precision but lower recall.
2) Considering statistical and syntactic techniques, with statistical techniques being more adaptable to other languages.
3) Training classifiers to detect neutral sentiments in addition to positive and negative, to avoid overfitting.
4) Selecting optimal tokenization, part-of-speech tagging, stemming/lemmatization, and feature selection algorithms for the given topic, language and domain. Feature selection methods like information gain can improve classification accuracy.
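As a small sketch of the feature-selection point in item 4 (tiny invented corpus; mutual information is used here as the information-gain score):

```python
# Sketch of information-gain-style feature selection: score candidate word
# features by mutual information with the sentiment label and keep the top k.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, mutual_info_classif

texts  = ["good great excellent", "bad awful terrible",
          "great battery good value", "terrible screen awful sound"]
labels = np.array([1, 0, 1, 0])

vec = CountVectorizer()
X = vec.fit_transform(texts)
selector = SelectKBest(mutual_info_classif, k=4).fit(X, labels)
kept = np.array(vec.get_feature_names_out())[selector.get_support()]
print("selected features:", kept)
```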
Diamonds in the Rough (Sentiment(al) Analysis) – Scott K. Wilder
Gary Angel and Scott K. Wilder presented on sentiment analysis. Gary is the president of Semphonic, a web analytics consultancy, and Scott is a digital strategist. They discussed how sentiment analysis works, its limitations, and best practices for using it. Specifically, they noted that sentiment analysis provides anecdotal, not primary, insights and that the most accurate approach combines automated tools with manual review of verbatim comments.
Branops - Making Your Story Your Strategy – Business901
In BRANOPS, we scale by looking at marketing from a Growth Mindset. We don’t start with a complex market and try to work back by tweaking and modifying it.
This document provides an overview of a case study on depression. It describes the participants in the study, which included 18 patients who met the inclusion criteria of having major depressive disorder. The study aimed to assess the effectiveness of cognitive behavioral therapy (CBT) for treating depression. Patients received 8 to 16 sessions of individual CBT and completed assessments before and after treatment to measure changes in depression symptoms. The results showed that CBT was effective at reducing depressive symptoms, with the majority of patients no longer meeting the criteria for major depressive disorder after treatment. CBT was found to be a viable treatment option for depression.
Who and why uses estimates - talk about waste 4devs – Agata Sobek-Kreft
This document discusses estimates and estimating in software development. It provides definitions of estimates and perspectives from different sources arguing that estimates are just guesses and that the actual effort often differs. Estimates also take time away from development. Alternative approaches like #noestimates are proposed that focus on business-value delivery over estimates. With #noestimates, data on past deliveries is used instead of estimates to help plan and establish capacity. The biggest value of #noestimates is focusing on business-value delivery with less time wasted on guessing estimates.
This document provides an introduction to valuation and discusses various concepts and approaches related to valuation including:
- Discounted cash flow valuation which values an asset based on the present value of expected future cash flows.
- Relative valuation which values an asset based on comparable assets and common valuation multiples like price-to-earnings.
- Sources of bias, uncertainty and complexity that exist in valuations and how they can be addressed.
- When different valuation approaches like discounted cash flow and relative valuation work best depending on the situation.
This document discusses sentiment analysis on unstructured product reviews. It begins with an introduction to sentiment analysis and opinion mining. The author then reviews related work on aspect-based sentiment analysis and feature extraction. The proposed work involves extracting features from unstructured reviews, determining sentiment polarity using SentiStrength, and classifying features using Naive Bayes. The experiment uses 575 reviews to identify prominent product aspects and determine sentiment scores. Naive Bayes classification is performed in Tanagra to obtain prior distributions of sentiment for each feature. Figures and tables are included to illustrate the process.
Similar to A response to the No Estimates paradigm (20)
Managing risk with deliverables planning – Glen Alleman
This document discusses managing risk through continuous risk management (CRM). It introduces the five principles of risk management and outlines the CRM process, which includes identifying risks, analyzing and prioritizing them, planning mitigations, tracking mitigation progress and risks, making decisions based on risk data, and communicating throughout the project. The presentation provides examples of risk statements, evaluation criteria, classification approaches, and integrating risks and mitigation plans into project schedules. The goal of CRM is to continually identify, assess, and mitigate risks to improve project outcomes.
Planning projects usually starts with tasks and milestones. The planner gathers this information from the participants – customers, engineers, subject matter experts. This information is usually arranged in the form of activities and milestones. PMBOK defines "project time management" in this manner. The activities are then sequenced according to the project's needs and mandatory dependencies.
Increasing the Probability of Project Success – Glen Alleman
This document discusses principles and practices for increasing the probability of project success by managing risk from uncertainty. It defines risk as the effect of uncertainty on objectives. There are two types of uncertainty - epistemic (reducible) and aleatory (irreducible). Risk from epistemic uncertainty can be reduced through work on the program, while risk from aleatory uncertainty requires establishing margins. The document argues that effective risk management is needed to deliver capabilities on time and budget by identifying risks, understanding their interactions and impacts, and implementing risk handling strategies. This increases the likelihood of project success by preventing problems, improving quality, enabling better resource use, and promoting teamwork.
Process Flow and Narrative for Agile+PPM – Glen Alleman
This document describes how an organization integrates agile software development practices with earned value management (EVM) to provide program status updates. It outlines a process that begins with developing a rough order of magnitude estimate of features needed. These features are then prioritized, mapped to a product roadmap and product backlog. Stories are developed from features and estimated, and tasks are estimated in hours. Physical percent complete data from tasks in Rally is used to calculate EVM metrics to inform stakeholders.
This document discusses principles of effective risk management for projects. It emphasizes the importance of clearly defining requirements and success criteria before releasing requests for proposals. This includes quantifying measures of effectiveness and performance for different use scenarios. Effective risk management also requires developing a funded implementation plan informed by historical risks and uncertainties. The document outlines key data and processes needed to reduce risks and increase the probability of a project's success, including defining requirements, developing plans and schedules, identifying risks and adjustments needed to plans. It discusses uncertainties from both known and unknown sources that can impact cost, schedule and performance.
Cost and schedule growth for complex projects is created when unrealistic technical performance expectations, unrealistic cost and schedule estimates, inadequate risk assessments, unanticipated technical issues, and poorly performed and ineffective risk management contribute to project technical and programmatic shortfalls.
From Principles to Strategies for Systems Engineering – Glen Alleman
From Principles to Strategies: How to apply Principles, Practices, and Processes of Systems Engineering to solve complex technical, operational, and organizational problems.
Building a Credible Performance Measurement Baseline – Glen Alleman
The document discusses establishing a credible Performance Measurement Baseline (PMB) for programs by integrating technical and programmatic plans. It recommends starting with a Work Breakdown Structure (WBS) that identifies system elements, associated risks, and processes to produce outcomes. An Integrated Master Plan (IMP) should then define how system elements mature at Program Events, with Measures of Effectiveness (MOEs) and Measures of Performance (MOPs) assigned. Finally, an Integrated Master Schedule (IMS) should arrange tasks to increase technical maturity, identify reducible and irreducible risks, and establish a risk-adjusted PMB to increase the probability of program success. Connecting these elements through the WBS, IMP and IMS
Integrated master plan methodology (v2) – Glen Alleman
The document describes a methodology for developing an Integrated Master Plan (IMP). It outlines five conditions an IMP must meet, five steps in the development process, five common questions about IMP development, five common mistakes, and provides five templates/samples for key IMP sections. The methodology is intended to help program and project teams create effective IMPs that integrate execution plans and align with contractual requirements.
Capabilities–Based Planning identifies the capabilities needed to accomplish a mission or fulfill a business strategy. Only when capabilities are defined can we start requirements elicitation.
The process starts with the development of a Rough Order of Magnitude (ROM) estimate of work and duration, then creates the Product Roadmap and Release Plan and the Product and Sprint Backlogs, executes and statuses the Sprint, and informs the Earned Value Management System, using Physical Percent Complete to measure progress to plan.
Program Management Office Lean Software Development and Six Sigma – Glen Alleman
Successfully combining a PMO, Agile, and Lean/Six Sigma starts with understanding what benefit each paradigm brings to the table. Architecting a solution for the enterprise requires assembling a "system" of processes, people, and principles – all sharing the goal of business improvement.
This resource document describes the Program Governance Road map for product development, deployment, and sustainment of products and services in compliance with CMS guidance, ITIL IT management, CMMI best practices, and other guidance to assure high quality software is deployed for sustained operational success in mission critical domains.
HCL Notes and Domino License Cost Reduction in the World of DLAU – panagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU and licensing under the CCB and CCX models have been a hot topic for many in the HCL community since last year. As a Notes or Domino customer, you may be struggling with unexpectedly high user counts and license fees. You may be wondering how this new kind of licensing works and what benefit it brings you. Above all, you certainly want to stay within your budget and save costs wherever possible. We understand that, and we want to help!
We explain how to resolve common configuration problems that can cause more users to be counted than necessary, and how to identify and remove superfluous or unused accounts to save money. There are also some approaches that can lead to unnecessary spending, for example when a person document is used instead of a mail-in for shared mailboxes. We show you such cases and their solutions. And of course we explain the new license model.
Join this webinar, in which HCL Ambassador Marc Thomas and guest speaker Franz Walder introduce you to this new world. It gives you the tools and know-how to keep an overview. You will be able to reduce your costs through an optimized Domino configuration and keep them low in the future.
These topics will be covered:
- Reducing license costs by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how best to use it
- Tips for common problem areas, such as team mailboxes, functional/test users, etc.
- Practical examples and best practices you can apply immediately
"Choosing proper type of scaling", Olena SyrotaFwdays
Imagine an IoT processing system that is already quite mature and production-ready and for which client coverage is growing and scaling and performance aspects are life and death questions. The system has Redis, MongoDB, and stream processing based on ksqldb. In this talk, firstly, we will analyze scaling approaches and then select the proper ones for our system.
AppSec PNW: Android and iOS Application Security with MobSF – Ajin Abraham
Mobile Security Framework - MobSF is a free and open source automated mobile application security testing environment designed to help security engineers, researchers, developers, and penetration testers to identify security vulnerabilities, malicious behaviours and privacy concerns in mobile applications using static and dynamic analysis. It supports all the popular mobile application binaries and source code formats built for Android and iOS devices. In addition to automated security assessment, it also offers an interactive testing environment to build and execute scenario based test/fuzz cases against the application.
This talk covers:
Using MobSF for static analysis of mobile applications.
Interactive dynamic security assessment of Android and iOS applications.
Solving Mobile app CTF challenges.
Reverse engineering and runtime analysis of Mobile malware.
How to shift left and integrate MobSF/mobsfscan SAST and DAST in your build pipeline.
Dandelion Hashtable: beyond billion requests per second on a commodity serverAntonios Katsarakis
This slide deck presents DLHT, a concurrent in-memory hashtable. Despite optimization efforts that go as far as sacrificing core functionality, state-of-the-art hashtable designs still incur multiple memory accesses per request and block request processing in three cases. First, most hashtables block while waiting for data to be retrieved from memory. Second, open-addressing designs, which represent the current state of the art, either cannot free index slots on deletes or must block all requests to do so. Third, index resizes block every request until all objects are copied to the new index. Defying folklore wisdom, DLHT forgoes open addressing and adopts a fully-featured and memory-aware closed-addressing design based on bounded cache-line chaining. This design (1) offers lock-free index operations and deletes that free slots instantly, (2) completes most requests with a single memory access, (3) utilizes software prefetching to hide memory latencies, and (4) employs a novel non-blocking and parallel resizing. On a commodity server and a memory-resident workload, DLHT surpasses 1.6B requests per second and provides 3.5x (12x) the throughput of the state-of-the-art closed-addressing (open-addressing) resizable hashtable on Gets (Deletes).
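To make the closed-addressing idea concrete, here is a minimal, single-threaded sketch of a hash table with bounded, fixed-size bucket chains and deletes that free slots immediately. The bucket size and names are illustrative assumptions; DLHT itself is lock-free, prefetch-aware, and far more sophisticated.

```python
# Minimal sketch (not the DLHT implementation): closed addressing with bounded,
# fixed-size bucket chains, loosely mirroring bounded cache-line chaining.
BUCKET_SLOTS = 7  # assumed: roughly how many entries fit in one cache line

class BoundedChainTable:
    def __init__(self, num_buckets: int = 1024):
        # Each bucket is a fixed-size list of slots; a slot is None or (key, value).
        self.buckets = [[None] * BUCKET_SLOTS for _ in range(num_buckets)]

    def _bucket(self, key):
        return self.buckets[hash(key) % len(self.buckets)]

    def put(self, key, value) -> bool:
        bucket = self._bucket(key)
        free = None
        for i, slot in enumerate(bucket):
            if slot is not None and slot[0] == key:
                bucket[i] = (key, value)   # update in place
                return True
            if slot is None and free is None:
                free = i                   # remember first free slot
        if free is None:
            return False                   # bucket full: a real design would resize
        bucket[free] = (key, value)
        return True

    def get(self, key):
        for slot in self._bucket(key):
            if slot is not None and slot[0] == key:
                return slot[1]
        return None

    def delete(self, key) -> bool:
        bucket = self._bucket(key)
        for i, slot in enumerate(bucket):
            if slot is not None and slot[0] == key:
                bucket[i] = None           # slot is reusable immediately, unlike
                return True                # tombstones in open addressing
        return False
```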
5th LF Energy Power Grid Model Meet-up SlidesDanBrown980551
5th Power Grid Model Meet-up
It is with great pleasure that we extend to you an invitation to the 5th Power Grid Model Meet-up, scheduled for 6th June 2024. This event will adopt a hybrid format, allowing participants to join us either through an online Microsoft Teams session or in person at TU/e, located at Den Dolech 2, Eindhoven, Netherlands. The meet-up will be hosted by Eindhoven University of Technology (TU/e), a research university specializing in engineering science & technology.
Power Grid Model
The global energy transition is placing new and unprecedented demands on Distribution System Operators (DSOs). Alongside upgrades to grid capacity, processes such as digitization, capacity optimization, and congestion management are becoming vital for delivering reliable services.
Power Grid Model is an open source project from Linux Foundation Energy and provides a calculation engine that is increasingly essential for DSOs. It offers a standards-based foundation enabling real-time power systems analysis, simulations of electrical power grids, and sophisticated what-if analysis. In addition, it enables in-depth studies and analysis of the electrical power grid’s behavior and performance. This comprehensive model incorporates essential factors such as power generation capacity, electrical losses, voltage levels, power flows, and system stability.
Power Grid Model is currently being applied in a wide variety of use cases, including grid planning, expansion, reliability, and congestion studies. It can also help in analyzing the impact of renewable energy integration, assessing the effects of disturbances or faults, and developing strategies for grid control and optimization.
What to expect
For the upcoming meetup we are organizing, we have an exciting lineup of activities planned:
-Insightful presentations covering two practical applications of the Power Grid Model.
-An update on the latest advancements in Power Grid Model technology during the first and second quarters of 2024.
-An interactive brainstorming session to discuss and propose new feature requests.
-An opportunity to connect with fellow Power Grid Model enthusiasts and users.
Connector Corner: Seamlessly power UiPath Apps, GenAI with prebuilt connectorsDianaGray10
Join us to learn how UiPath Apps can directly and easily interact with prebuilt connectors via Integration Service--including Salesforce, ServiceNow, Open GenAI, and more.
The best part is you can achieve this without building a custom workflow! Say goodbye to the hassle of using separate automations to call APIs. By seamlessly integrating within App Studio, you can now easily streamline your workflow, while gaining direct access to our Connector Catalog of popular applications.
We’ll discuss and demo the benefits of UiPath Apps and connectors including:
Creating a compelling user experience for any software, without the limitations of APIs.
Accelerating the app creation process, saving time and effort.
Enjoying high-performance CRUD (create, read, update, delete) operations for seamless data management.
Speakers:
Russell Alfeche, Technology Leader, RPA at qBotic and UiPath MVP
Charlie Greenberg, host
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/temporal-event-neural-networks-a-more-efficient-alternative-to-the-transformer-a-presentation-from-brainchip/
Chris Jones, Director of Product Management at BrainChip, presents the “Temporal Event Neural Networks: A More Efficient Alternative to the Transformer” tutorial at the May 2024 Embedded Vision Summit.
The expansion of AI services necessitates enhanced computational capabilities on edge devices. Temporal Event Neural Networks (TENNs), developed by BrainChip, represent a novel and highly efficient state-space network. TENNs demonstrate exceptional proficiency in handling multi-dimensional streaming data, facilitating advancements in object detection, action recognition, speech enhancement and language model/sequence generation. Through the utilization of polynomial-based continuous convolutions, TENNs streamline models, expedite training processes and significantly diminish memory requirements, achieving notable reductions of up to 50x in parameters and 5,000x in energy consumption compared to prevailing methodologies like transformers.
Integration with BrainChip’s Akida neuromorphic hardware IP further enhances TENNs’ capabilities, enabling the realization of highly capable, portable and passively cooled edge devices. This presentation delves into the technical innovations underlying TENNs, presents real-world benchmarks, and elucidates how this cutting-edge approach is positioned to revolutionize edge AI across diverse applications.
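The TENN architecture itself is not spelled out in this abstract, but the sketch below illustrates the general idea of a polynomial-based kernel: a long convolution described by only a handful of polynomial coefficients rather than one weight per tap. The Legendre basis, kernel length, and coefficient values are assumptions for illustration, not BrainChip's formulation.

```python
# Hedged sketch of a convolution kernel parameterized by polynomial coefficients.
import numpy as np
from numpy.polynomial import legendre

def polynomial_kernel(coeffs: np.ndarray, num_taps: int) -> np.ndarray:
    # Evaluate k(t) = sum_i c_i * P_i(t) on a discrete grid over [-1, 1].
    t = np.linspace(-1.0, 1.0, num_taps)
    return legendre.legval(t, coeffs)

def causal_conv1d(signal: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    # Simple causal 1-D convolution of a streaming signal with the kernel.
    return np.convolve(signal, kernel, mode="full")[: len(signal)]

# A 64-tap kernel described by only 4 coefficients (values are made up).
coeffs = np.array([0.5, -0.3, 0.2, 0.1])
kernel = polynomial_kernel(coeffs, num_taps=64)
signal = np.random.randn(1000)
out = causal_conv1d(signal, kernel)
print(out.shape)  # (1000,)
```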
How to Interpret Trends in the Kalyan Rajdhani Mix Chart.pdfChart Kalyan
A Mix Chart displays historical data of numbers in a graphical or tabular form. The Kalyan Rajdhani Mix Chart specifically shows the results of a sequence of numbers over different periods.
Conversational agents, or chatbots, are increasingly used to access all sorts of services using natural language. While open-domain chatbots - like ChatGPT - can converse on any topic, task-oriented chatbots - the focus of this paper - are designed for specific tasks, like booking a flight, obtaining customer support, or setting an appointment. Like any other software, task-oriented chatbots need to be properly tested, usually by defining and executing test scenarios (i.e., sequences of user-chatbot interactions). However, there is currently a lack of methods to quantify the completeness and strength of such test scenarios, which can lead to low-quality tests, and hence to buggy chatbots.
To fill this gap, we propose adapting mutation testing (MuT) for task-oriented chatbots. To this end, we introduce a set of mutation operators that emulate faults in chatbot designs, an architecture that enables MuT on chatbots built using heterogeneous technologies, and a practical realisation as an Eclipse plugin. Moreover, we evaluate the applicability, effectiveness and efficiency of our approach on open-source chatbots, with promising results.
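As a hedged illustration of mutation testing applied to a chatbot design (the paper's own operators, architecture, and Eclipse plugin are not reproduced here), the sketch below generates mutants by deleting one training phrase at a time; a test suite that fails on, i.e. "kills", more mutants is considered stronger.

```python
# Toy sketch of one mutation operator for a chatbot "design".
# The intent structure and the operator are illustrative assumptions.
import copy

design = {  # intents mapped to training phrases (made-up example)
    "book_flight": ["book a flight", "I need a plane ticket", "fly to Paris"],
    "cancel_booking": ["cancel my booking", "drop my reservation"],
}

def delete_training_phrase_mutants(design: dict):
    """Mutation operator: yield one mutant per removed training phrase."""
    for intent, phrases in design.items():
        for i in range(len(phrases)):
            mutant = copy.deepcopy(design)
            del mutant[intent][i]
            yield mutant

# A mutant is "killed" if at least one test scenario fails against it;
# mutation score = killed mutants / total mutants measures test-suite strength.
mutants = list(delete_training_phrase_mutants(design))
print(f"generated {len(mutants)} mutants")
```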
The Microsoft 365 Migration Tutorial For Beginner.pptxoperationspcvita
This presentation will help you understand the power of Microsoft 365. It covers every productivity app included in Office 365. Additionally, it outlines common Office 365 migration scenarios and how we can help you.
You can also read: https://www.systoolsgroup.com/updates/office-365-tenant-to-tenant-migration-step-by-step-complete-guide/
Freshworks Rethinks NoSQL for Rapid Scaling & Cost-EfficiencyScyllaDB
Freshworks creates AI-boosted business software that helps employees work more efficiently and effectively. Managing data across multiple RDBMS and NoSQL databases was already a challenge at their current scale. To prepare for 10X growth, they knew it was time to rethink their database strategy. Learn how they architected a solution that would simplify scaling while keeping costs under control.
"Frontline Battles with DDoS: Best practices and Lessons Learned", Igor IvaniukFwdays
In this talk we will discuss DDoS protection tools and best practices, network architectures, and what AWS has to offer. We will also look into one of the largest DDoS attacks on Ukrainian infrastructure, which happened in February 2022. We'll see what techniques helped keep the web resources available for Ukrainians and how AWS improved DDoS protection for all customers based on the experience in Ukraine.
Skybuffer SAM4U tool for SAP license adoptionTatiana Kojar
Manage and optimize your license adoption and consumption with SAM4U, a complimentary SAP software asset management tool for customers.
SAM4U, an SAP complimentary software asset management tool for customers, delivers a detailed and well-structured overview of license inventory and usage with a user-friendly interface. We offer a hosted, cost-effective, and performance-optimized SAM4U setup in the Skybuffer Cloud environment. You retain ownership of the system and data, while we manage the ABAP 7.58 infrastructure, ensuring fixed Total Cost of Ownership (TCO) and exceptional services through the SAP Fiori interface.
Here's a post asking for a conversation about estimates, and here's a response. Let's ignore the term FACT as untestable and see how an answer can be arrived at. These answers come from the paradigm of Software Intensive Systems, where the Microeconomics of decision making, based on the Opportunity Costs of those decisions, is the framework used to make decisions.
• FACT: It is possible, and sometimes necessary, to estimate software tasks and
projects.
o It is always possible to estimate the future
o The confidence in the estimate's value is part of the estimating process
o The value at risk is one attribute of the estimate
o Low value at risk provides a wider range on the confidence value
o High value at risk requires higher confidence
• FACT: Questioning the intent behind a request for an estimate is the professional
thing to do
o The intent of estimates is to inform those accountable for the money to make
decisions about that money informed by the value at risk.
o To question that intent assumes that those making those decisions no longer have the fiduciary responsibility of being stewards of the money, and that this responsibility has been transferred to those spending the money.
o This would imply that the separation of concerns on which any governance-based business rests has been suspended.
• FACT: #NoEstimates is a Twitter hashtag and was never intended to become a
demand, a method or a black-and-white truth
o The HashTag's original poster makes a clear and concise statement
o We can make decisions in the absence of estimating the impact of those
decisions.
o Until those original words are addressed, the hashtag will remain contentious, since accepting them would mean the principle of Microeconomics would no longer be applicable in that business domain.
• FACT: The #NoEstimates hashtag became something due to the interest it
generated
o This is a shouting fire in a theater approach to conversation
o Without a domain and governance paradigm, the notion of making decisions
in the absence of estimates has no basis for being tested.
• FACT: A forecast is a type of estimate, whether probabilistic, deterministic,
bombastic or otherwise
• FACT: Forecasting is distinct from estimation, at least in the common usage of the
words, in that it involves using data to make the “estimate” rather than relying on a
person or people drawing on “experience” or guessing
o These definitions are not found outside the poster's personally selected operational definitions.
o Texts like Estimating Software Intensive Systems do not make this
distinction
o Estimating is about past, present, and future approximation of values found in systems with uncertainty.
§ Estimate - a number that approximates a value of interest in a system with uncertainty.
§ Estimating - the process used to make such a calculation.
§ To Estimate - to find a value close to the actual value; for example, 2 ≈ 2.3, where 2 is an approximation of the value 2.3.
o Forecasts are about future approximations of values found in systems with uncertainty (see the short probabilistic sketch after this list).
o Looking for definitions outside the domain of software development and applying them to fit the needs of the argument is disingenuous.
• FACT: People who tweet with the hashtag #NoEstimates, or indeed any other
hashtag, are not automatically saying “My tweet is congruent and completely in
agreement with the literal meaning of the words in the hashtag”
o Those who tweet with the hashtag are in fact retweeting the notion that decisions can be made without estimates, unless they explicitly challenge that notion.
o If that challenge is not established, there is implicit support of the original idea.
• FACT: The prevailing way estimation is done in software projects is single point
estimation
o This is likely personal experience, since many of those stating it have limited experience outside their own domain.
• FACT: The prevailing way estimates are used in software organizations is a push
for a commitment, and then an excuse for a whipping when the estimate is not met.
o Again, this is likely personal experience.
o If the poster said "in my experience..." that would establish the limits of the statement.
o IME takes 3 letters. Those letters are rarely seen from those suggesting that not estimating is a desirable approach to managing in the presence of uncertainty while spending other people's money.
o Those complaining about the phrase "spending other people's money" are likely not doing that, or not doing that with any substantial value at risk.
• FACT: The above fact does not make estimates a useless artifact, nor estimation
itself a useless or damaging activity
o Those proffering that decisions can be made without estimating have in FACT said that estimating is damaging, useless, and a waste of time.
o Until that is countered, it will remain the basis of #NoEstimates.
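As a short illustration of the points above about estimates, confidence, and value at risk, the sketch below produces a probabilistic estimate of total duration from made-up three-point task judgments via Monte Carlo simulation; the task names and ranges are illustrative only.

```python
# Hedged sketch: an estimate with a stated confidence, built by Monte Carlo
# simulation over three-point (optimistic / most likely / pessimistic) judgments.
import random

tasks = {  # (optimistic, most likely, pessimistic) durations in days, made up
    "design": (2, 4, 9),
    "build":  (5, 8, 20),
    "test":   (3, 5, 12),
}

def simulate_total_days(trials: int = 10_000) -> list[float]:
    totals = []
    for _ in range(trials):
        # Sample each task from a triangular distribution and sum the durations.
        total = sum(random.triangular(lo, hi, mode) for lo, mode, hi in tasks.values())
        totals.append(total)
    totals.sort()
    return totals

totals = simulate_total_days()
p50 = totals[len(totals) // 2]          # 50th percentile of simulated totals
p80 = totals[int(len(totals) * 0.8)]    # 80th percentile
print(f"50% confidence: <= {p50:.1f} days, 80% confidence: <= {p80:.1f} days")
```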