
Doing it – if not p then what
http://alandix.com/statistics/course/
In this part we will look at the major kinds of statistical analysis methods:
* Hypothesis testing (the dreaded p!) – robust but confusing
* Confidence intervals – powerful but underused
* Bayesian stats – mathematically clean but fragile
None of these is a magic bullet; all need care and a level of statistical understanding to apply. We will discuss how these are related, including the relationship between ‘likelihood’ in hypothesis testing and conditional probability as used in Bayesian analysis.
There are common issues, including the need to clearly report the numbers and the tests/distributions used, avoiding cherry-picking, dealing with outliers, non-independent effects and correlated features. However, there are also specific issues for each method. Classic statistical methods used in hypothesis testing and confidence intervals depend on ideas of ‘worse’ for measures, which are sometimes obvious, sometimes need thought (one- vs. two-tailed tests), and sometimes outright confusing. In addition, care is needed in hypothesis testing to avoid classic fails such as treating non-significant as no-effect and inflated effect sizes. In Bayesian statistics different problems arise: the need to decide, in a robust and defensible manner, the expected likelihoods of different hypotheses before an experiment; and the dangers of common causes leading to inflated probability estimates due to a single initial fluke event or an optimistic prior.
Crucially, while all methods have problems that need to be avoided, we will see how not using statistics at all can be far worse.
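The contrast between the three approaches can be made concrete with a toy example (the numbers here are made up for illustration and are not from the course): 60 successes in 100 trials, tested against a null rate of 0.5.

```python
import math

# Illustrative data, not from the course: 60 successes in 100 trials,
# null hypothesis: true rate p = 0.5.
n, k, p0 = 100, 60, 0.5

def binom_pmf(n, k, p):
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

# 1. Hypothesis test: exact two-tailed binomial p-value
#    (sum of all outcomes no more probable than the observed one).
p_at_k = binom_pmf(n, k, p0)
p_value = sum(binom_pmf(n, i, p0) for i in range(n + 1)
              if binom_pmf(n, i, p0) <= p_at_k + 1e-12)

# 2. Confidence interval: 95% Wald interval for the observed rate.
p_hat = k / n
se = math.sqrt(p_hat * (1 - p_hat) / n)
ci = (p_hat - 1.96 * se, p_hat + 1.96 * se)

# 3. Bayesian: flat Beta(1, 1) prior gives a Beta(1 + k, 1 + n - k) posterior.
post_mean = (1 + k) / (2 + n)

print(f"p-value        = {p_value:.4f}")              # about 0.057: 'not significant' at 0.05
print(f"95% CI         = ({ci[0]:.3f}, {ci[1]:.3f})")  # about (0.504, 0.696)
print(f"posterior mean = {post_mean:.3f}")
```

Note how the three outputs frame the same data differently: the p-value sits just above the conventional threshold, the interval only just excludes 0.5, and the posterior mean summarises belief directly, which is exactly why each needs its own kind of care.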


Making Sense of Statistics in HCI: part 4 - so what

So what? – making sense of results
http://alandix.com/statistics/course/
You have done your experiment or study and have your data – what next? How do you make sense of the results? In fact, one of the best ways to design a study is to imagine this situation before you start!
This part will address a number of questions to think about during analysis (or design), including:
* Whether your work is to test an existing hypothesis (validation) or to find out what you should be looking for (exploration).
* Whether it is a one-off study, or part of a process (e.g. ‘5 users’ for iterative development).
* How to make sure your results and data can be used by others (e.g. repeatability, meta-analysis).
* Looking at the data, and asking if it makes sense given your assumptions (e.g. Fitts’ Law experiments that assume index of difficulty is all that matters).
* Thinking about the conditions – what have you really shown: some general result, or simply that one system or group of users is better than another?

Making Sense of Statistics in HCI: part 3 - gaining power

Gaining power – the dreaded ‘too few participants’
http://alandix.com/statistics/course/
Statistical power is about whether an experiment or study is likely to reveal an effect if it is present. Without a sufficiently ‘powerful’ study, you risk being in the middle ground of ‘not proven’, not being able to make a strong statement either for or against whatever effect, system, or theory you are testing.
In HCI studies the greatest problem is often finding sufficient participants to do meaningful statistics. For professional practice we hear that ‘five users are enough’, but less often that this figure was based on particular historical contingencies and in the context of single iterations, not summative evaluations, which still need the equivalent of ‘power’ to be reliable.
However, power arises from a combination of the size of the effect you are trying to detect, the size of the study (number of trials/participants) and the size of the ‘noise’ (the random or uncontrolled factors).
Increasing the number of participants is not the only way to increase power and we will discuss various ways in which careful design, selection of subjects and tasks can increase the power of your study, albeit sometimes requiring care in interpreting results. For example, using a very narrow user group can reduce individual differences in knowledge and skill (reduce noise) and make it easier to see the effect of a novel interaction technique, but also reduces generalisation beyond that group. In another example, we will also see how careful choice of a task can even be used to deal with infrequent expert slips.
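The three-way trade-off between effect size, study size and noise can be sketched numerically. This is a back-of-envelope normal approximation with made-up numbers, not a substitute for a proper power analysis:

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def approx_power(effect, noise_sd, n_per_group):
    """Approximate power of a two-sided, two-sample comparison of means:
    effect = true mean difference, noise_sd = within-group standard
    deviation, n_per_group = participants per condition.
    Normal approximation; 1.96 is the two-sided 5% critical value."""
    d = effect / noise_sd                       # standardised effect size
    return phi(d * math.sqrt(n_per_group / 2) - 1.96)

# Same true effect, three routes to more power:
small_noisy = approx_power(10, 25, 10)   # few participants, noisy measure
more_people = approx_power(10, 25, 40)   # quadruple the participants
less_noise  = approx_power(10, 12, 10)   # tighter design halves the noise
print(small_noisy, more_people, less_noise)
```

Running this shows the narrow-user-group strategy from the text in action: halving the noise raises power by roughly as much as quadrupling the number of participants.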

Making Sense of Statistics in HCI: From P to Bayes and Beyond – introduction

Many find statistics confusing, and perhaps more so given recent publicity of problems with traditional p-values and alternative statistical techniques including confidence intervals and Bayesian statistics. This course aims to help attendees navigate this morass: to understand the debates and more importantly make appropriate choices when designing and analysing experiments, empirical studies and other forms of quantitative data.

How to become a data scientist in 6 months

This is my talk from the PyDataLondon conference in May 2016. I outline some time management techniques and useful learning resources for those interested in transitioning into data science.

Risk Management and Reliable Forecasting using Un-reliable Data (magennis) - ...

To meet expectations and optimize flow, managing risk is an important part of Kanban. Anticipating and adapting to things that "go wrong" and the uncertainty they cause is topic of this session. We look at techniques for quantifying what risks should be considered important to deal with.
Although discouraging, forecasting size, effort, staff and cost is sometimes necessary. Of course we have to do as little of this as possible, but when we do, we have to do it well with the data we have available. Forecasting is made difficult by un-reliable information as inputs to our process – the amount of work is uncertain, the historical data we are basing our forecasts on is biased and tainted, the situation seems hopeless. But it isn't. Good decisions can be made on imperfect data, and this session discusses how. This session shows immediately usable and simple techniques to capture, analyze, cleanse and assess data, and then use that data for reliable forecasting.
Second, and hopefully final, draft of the LKCE 2014 talk.

What is the story with agile data keynote agile 2018 (Magennis)

This document discusses using data to improve agile practices and outcomes. It argues that agile has lost the "data war" by not capturing and utilizing data from teams effectively. It suggests that data needs to be handled safely to avoid embarrassing people and destroying the utility of historical data. Better ways are needed to measure outcomes rather than just output, and to balance predictability with creativity. The document also discusses visualizing and managing dependencies, comparing performance across teams, and using the right metrics depending on a team's characteristics and challenges. The overarching message is that data needs to be used carefully and conversationally to drive the right actions and improve agile practices.

I love the smell of data in the morning (getting started with data science) ...

Data Science 101 for software development. I know it misses the purist view of Data Science, but this is intended to get you started! First presented at Agile 2017 in Florida.

Agile Analysis 101: Agile Stats v Command & Control Maths

Introducing Agile teams to Statistical Analysis. It's the tool that will help them self-manage and I introduce simple methods to measure efficacy. We also compare and contrast the traditional use of mathematics for command and control versus statistics and learning for contemporary agile development and EA.


D. Mayo: Philosophy of Statistics & the Replication Crisis in Science

D. Mayo discusses various disputes – notably the replication crisis in science – in the context of her just-released book: Statistical Inference as Severe Testing: How to Get Beyond the Statistics Wars.

Meeting #1 Slides Phil 6334/Econ 6614 SP2019

Mayo slides for Meeting #1, Phil 6334/Econ 6614 SP2019: Current Debates on Statistical Inference and Modeling

April 3 2014 slides mayo

1. The document discusses Dorling's solution to Duhem's problem using Bayesian probability and error statistics. Dorling illustrates how anomalous data could rationally discredit an auxiliary hypothesis rather than the main hypothesis using Bayes' theorem.
2. However, an error statistician would argue that simply satisfying Bayesian conditions is not enough to show strong evidence against the auxiliary hypothesis. The test of the auxiliary hypothesis needs to be severe, meaning it would likely uncover errors if present.
3. Examples are discussed where probabilistic instantiation is used incorrectly to derive epistemic probabilities. While epistemic probabilists may endorse this, it fails to ensure the severity assessments needed. Instantiating probabilities to particular hypotheses or individuals is a fallacy.

P values and the art of herding cats

Talk given at RSS 2016 Manchester
I consider the problems that the ASA faced in getting a P-value statement together, not in terms of the process, but by looking at the expressed opinion of 21 published commentaries on the agreed statement. I then trace the history of the development of P-values. I show that the perceived problem with P-values is not just one of a supposed inadequacy of frequentist statistics but reflects a struggle at the very heart of Bayesian inference. I conclude that replacing P-values by automatic Bayesian approaches is unlikely to abolish controversy. It may be better to try and embrace diversity than to pretend it is not there.

Probing with Severity: Beyond Bayesian Probabilism and Frequentist Performance

Slides from Rutgers Seminar talk by Deborah G Mayo
December 3, 2014
Rutgers, Department of Statistics and Biostatistics
Abstract: Getting beyond today’s most pressing controversies revolving around statistical methods, I argue, requires scrutinizing their underlying statistical philosophies. Two main philosophies about the roles of probability in statistical inference are probabilism and performance (in the long run). The first assumes that we need a method of assigning probabilities to hypotheses; the second assumes that the main function of statistical method is to control long-run performance. I offer a third goal: controlling and evaluating the probativeness of methods. An inductive inference, in this conception, takes the form of inferring hypotheses to the extent that they have been well or severely tested. A report of poorly tested claims must also be part of an adequate inference. I develop a statistical philosophy in which error probabilities of methods may be used to evaluate and control the stringency or severity of tests. I then show how the “severe testing” philosophy clarifies and avoids familiar criticisms and abuses of significance tests and cognate methods (e.g., confidence intervals). Severity may be threatened in three main ways: fallacies of statistical tests, unwarranted links between statistical and substantive claims, and violations of model assumptions.
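The severity idea can be illustrated with the standard textbook case of a one-sided test on a normal mean with known standard deviation. The function and numbers below are an illustrative sketch, not taken from the slides:

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def severity(xbar, mu1, sigma, n):
    """Severity with which the claim 'mu > mu1' passes, given an observed
    mean xbar from n observations with known standard deviation sigma:
    the probability the test would have produced a smaller observed mean
    were mu1 the true value (one-sided normal test sketch)."""
    return phi((xbar - mu1) * math.sqrt(n) / sigma)

# Made-up data: observed mean 0.4, sigma = 1, n = 100.
# The modest claim 'mu > 0.2' passes with high severity;
# the ambitious claim 'mu > 0.39' passes barely better than chance.
print(severity(0.4, 0.2, 1.0, 100))   # ~0.977
print(severity(0.4, 0.39, 1.0, 100))  # ~0.540
```

The same data thus warrant weaker claims more strongly than stronger ones, which is how the severity assessment blocks the fallacy of reading a bare rejection as support for any effect size one likes.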

P value wars

The document summarizes recent criticisms of the use of p-values and significance testing in research. It discusses how influential researchers like Ioannidis and replicability studies have questioned whether most published findings are true. It also summarizes the American Statistical Association's statement on p-values, which advises that scientific conclusions should not be based solely on p-value thresholds. The document then provides background on the history of p-values, noting that the debate is not just between frequentist and Bayesian approaches, but between different Bayesian views. It concludes by advising that p<0.05 is a weak standard, replication is important, and researchers should follow the ASA's recommendations to avoid overreliance on p-values alone.

Phil 6334 Mayo slides Day 1

This document discusses four waves in the philosophy of statistics from the 1930s to the present. The first wave centered around debates between Fisher, Neyman, Pearson, and others regarding hypothesis testing and the interpretation of p-values. The second wave from the 1960s-1980s involved criticisms of Neyman-Pearson methods regarding their applicability before and after obtaining data. The third wave from 1980-2005 explored likelihoodism and debates over the likelihood principle. The fourth and ongoing wave since 2005 continues philosophical debates in statistics.

The replication crisis: are P-values the problem and are Bayes factors the so...

Today’s posterior is tomorrow’s prior. Dennis Lindley [1]
It has been claimed that science is undergoing a replication crisis and that when looking for culprits, the cult of significance is the chief suspect. It has also been claimed that Bayes factors might provide a solution.
In my opinion, these claims are misleading and part of the problem is our understanding of the purpose and nature of replication, which has only recently been subject to formal analysis [2]. What we are or should be interested in is truth. Replication is a coherence, not a correspondence, requirement [3] and one that has a strong dependence on the size of the replication study [4].
Consideration of Bayes factors raises a puzzling question. Should the Bayes factor for a replication study be calculated as if it were the initial study? If the answer is yes, the approach is not fully Bayesian and furthermore the Bayes factors will be subject to exactly the same replication ‘paradox’ as P-values. If the answer is no, then in what sense can an initially found Bayes factor be replicated and what are the implications for how we should view replication of P-values?
A further issue is that little attention has been paid to false negatives and, by extension to true negative values. Yet, as is well known from the theory of diagnostic tests, it is meaningless to consider the performance of a test in terms of false positives alone.
I shall argue that we are in danger of confusing evidence with the conclusions we draw and that any reforms of scientific practice should concentrate on producing evidence that is as reliable as it can be qua evidence. There are many basic scientific practices in need of reform. Pseudoreplication [5], for example, and the routine destruction of information through dichotomisation [6] are far more serious problems than many matters of inferential framing that seem to have excited statisticians.
References
1. Lindley DV. Bayesian statistics: A review. SIAM; 1972.
2. Devezer B, Navarro DJ, Vandekerckhove J, Ozge Buzbas E. The case for formal methodology in scientific reform. R Soc Open Sci. Mar 31 2021;8(3):200805. doi:10.1098/rsos.200805
3. Walker RCS. Theories of Truth. In: Hale B, Wright C, Miller A, eds. A Companion to the Philosophy of Language. John Wiley & Sons; 2017:532-553: chap 21.
4. Senn SJ. A comment on replication, p-values and evidence by S.N. Goodman, Statistics in Medicine 1992; 11:875-879. Letter. Statistics in Medicine. 2002;21(16):2437-44.
5. Hurlbert SH. Pseudoreplication and the design of ecological field experiments. Ecological monographs. 1984;54(2):187-211.
6. Senn SJ. Being Efficient About Efficacy Estimation. Research. Statistics in Biopharmaceutical Research. 2013;5(3):204-210. doi:10.1080/19466315.2012.754726
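The question of how to compute a Bayes factor for a replication can be made concrete with a minimal Beta-Binomial sketch. The counts and the flat Beta(1, 1) prior are assumptions for illustration, not from the abstract: H1 lets the success rate vary, H0 pins it at 0.5, and the two replication analyses differ only in which prior they use.

```python
import math

def log_beta(a, b):
    """Log of the Beta function via log-gamma."""
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def bf10(k, n, a=1.0, b=1.0):
    """Bayes factor for H1: p ~ Beta(a, b) versus H0: p = 0.5,
    given k successes in n binomial trials."""
    log_m1 = log_beta(a + k, b + n - k) - log_beta(a, b)  # marginal likelihood under H1
    log_m0 = n * math.log(0.5)                            # likelihood under H0
    return math.exp(log_m1 - log_m0)

# Hypothetical counts: initial study 30/40 successes, replication 24/40.
k1, n1, k2, n2 = 30, 40, 24, 40

bf_initial = bf10(k1, n1)                           # initial study, flat prior
bf_fresh   = bf10(k2, n2)                           # replication treated 'as if initial'
bf_updated = bf10(k2, n2, a=1 + k1, b=1 + n1 - k1)  # yesterday's posterior as today's prior
print(bf_initial, bf_fresh, bf_updated)
```

The two replication numbers disagree, which is the puzzle in miniature: the ‘fully Bayesian’ posterior-as-prior analysis and the ‘fresh start’ analysis answer different questions, so ‘the Bayes factor replicated’ is ambiguous in just the way the abstract describes.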

Crisis of confidence, p-hacking and the future of psychology

The document discusses issues with statistical analysis and interpretation in research. It notes that traditional null hypothesis significance testing can lead to problems like publication bias. Bayesian statistics are presented as an alternative that considers the probability of the data under both the null and alternative hypotheses. However, Bayesian methods still require transparent reporting and are not a panacea. Overall statistical power in many fields remains low, and selective reporting can still undermine reliability regardless of the statistical approach. Transparency in analysis, open sharing of data and materials, and preregistration of hypotheses are emphasized as ways to improve the credibility of research findings.


Severe Testing: The Key to Error Correction

D. G. Mayo's slides for her presentation given March 17, 2017 at the Boston Colloquium for Philosophy of Science, Alfred I. Taub forum: "Understanding Reproducibility & Error Correction in Science"

Philosophy of Science and Philosophy of Statistics

D. Mayo 2020 presentation for APA annual Central meeting, section "Epistemology Meets Philosophy of Statistics"

Probability Homework Help

I am Nelson. I am a Probability Assignment Expert at statisticsassignmenthelp.com. I hold a Masters in Statistics from City University of London.
I have been helping students with their homework for the past 5 years. I solve assignments related to Probability. Visit statisticsassignmenthelp.com or email info@statisticsassignmenthelp.com.
You can also call on +1 678 648 4277 for any assistance with Probability Assignments.

P-Value "Reforms": Fixing Science or Threat to Replication and Falsification

D. G. Mayo
Presented February 7, 2020 at the Fixing Science Workshop, National Assoc. of Scholars, Independent Institute

D. Mayo: Philosophical Interventions in the Statistics Wars

ABSTRACT: While statistics has a long history of passionate philosophical controversy, the last decade especially cries out for philosophical illumination. Misuses of statistics, Big Data dredging, and P-hacking make it easy to find statistically significant, but spurious, effects. This obstructs a test's ability to control the probability of erroneously inferring effects–i.e., to control error probabilities. Disagreements about statistical reforms reflect philosophical disagreements about the nature of statistical inference–including whether error probability control even matters! I describe my interventions in statistics in relation to three events. (1) In 2016 the American Statistical Association (ASA) met to craft principles for avoiding misinterpreting P-values. (2) In 2017, a "megateam" (including philosophers of science) proposed "redefining statistical significance," replacing the common threshold of P ≤ .05 with P ≤ .005. (3) In 2019, an editorial in the main ASA journal called for abandoning all P-value thresholds, and even the words "significant/significance".
A word on each. (1) Invited to be a "philosophical observer" at their meeting, I found the major issues were conceptual. P-values measure how incompatible data are with what is expected under a hypothesis that there is no genuine effect: the smaller the P-value, the more indication of incompatibility. The ASA list of familiar misinterpretations–P-values are not posterior probabilities, statistical significance is not substantive importance, no evidence against a hypothesis need not be evidence for it–I argue, should not be the basis for replacing tests with methods less able to assess and control erroneous interpretations of data. (Mayo 2016, 2019). (2) The "redefine statistical significance" movement appraises P-values from the perspective of a very different quantity: a comparative Bayes Factor. Failing to recognize how contrasting approaches measure different things, disputants often talk past each other (Mayo 2018). (3) To ban P-value thresholds, even to distinguish terrible from warranted evidence, I say, is a mistake (2019). It will not eradicate P-hacking, but it will make it harder to hold P-hackers accountable. A 2020 ASA Task Force on significance testing has just been announced. (I would like to think my blog errorstatistics.com helped.)
To enter the fray between rival statistical approaches, it helps to have a principle applicable to all accounts. There's poor evidence for a claim if little if anything has been done to find it flawed even if it is. This forms a basic requirement for evidence I call the severity requirement. A claim passes with severity only if it is subjected to and passes a test that probably would have found it flawed, if it were. It stems from Popper, though he never adequately cashed it out. A variant is the frequentist principle of evidence developed with Sir David Cox (Mayo and Cox 20

Mayod@psa 21(na)

1) The document discusses philosophical interventions in statistical debates and proposed reforms. It outlines three types of interventions: illuminating debates within statistics, reformulating frequentist tools through a severity perspective, and scrutinizing proposed reforms from the replication crisis.
2) A key idea is that evidence for a claim only comes when it has undergone severe testing, meaning it would probably have failed if it were false. The document argues for keeping the best aspects of Fisherian and Neyman-Pearson testing through a severity interpretation.
3) In scrutinizing reforms, it questions proposals to abandon statistical significance and P-value thresholds, arguing this could exacerbate selective reporting instead of reducing it. The document advocates reformulating tests.

Paradoxes and Fallacies - Resolving some well-known puzzles with Bayesian net...

There are a number of paradoxes and fallacies that keep recurring as popular and mind-bending puzzles in the media. Although there is (now) complete agreement among scientists on how to resolve them, the correct answers are often perplexing to the casual observer and still cause bewilderment.
As this paper will show, the formally correct solutions of these probabilistic paradoxes are counterintuitive. In addition to being counterintuitive, there are few tools assisting us in solving them.
Although we won’t be able to overcome inherent mental biases and cognitive limitations, we can now provide a very practical new tool for the correct inference in the form of Bayesian networks.
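The abstract does not list the puzzles it covers, so the Monty Hall problem is assumed here as a representative example. The exact enumeration below is the kind of conditional-probability bookkeeping a Bayesian network automates:

```python
from fractions import Fraction

def monty_hall():
    """Exact win probabilities for 'stay' and 'switch' in the Monty Hall
    problem, by enumerating every (car, pick, opened-door) outcome with
    exact rational weights: car and pick uniform over 3 doors; the host
    opens a door that hides no car and was not picked, choosing uniformly
    when more than one such door exists."""
    doors = [0, 1, 2]
    stay_wins = Fraction(0)
    switch_wins = Fraction(0)
    for car in doors:
        for pick in doors:
            options = [d for d in doors if d != pick and d != car]
            for opened in options:
                weight = Fraction(1, 9) * Fraction(1, len(options))
                remaining = next(d for d in doors if d not in (pick, opened))
                if pick == car:
                    stay_wins += weight
                if remaining == car:
                    switch_wins += weight
    return stay_wins, switch_wins

stay, switch = monty_hall()
print(stay, switch)  # 1/3 2/3
```

The counterintuitive 2/3 for switching drops out mechanically once the host's constrained choice is modelled explicitly, which is precisely the step casual intuition skips.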

Applied statistics part 2

Applied statistics part 2, by Mohammad Hadi Farjoo MD, PhD, Shahid Beheshti University of Medical Sciences

This document discusses key concepts in applied statistics including hypothesis testing, p-values, types of errors, sensitivity and specificity. It provides examples and explanations of these topics using scenarios about testing if feeding chickens chocolate changes the gender ratio of offspring. Hypothesis testing involves defining the null and alternative hypotheses and using a statistical test to either reject or fail to reject the null hypothesis based on the p-value. Type I and type II errors in hypothesis testing are explained. Sensitivity and specificity in diagnostic tests are introduced using an example about detecting if a car is being stolen.

Frequentist inference only seems easy, by John Mount

This is part of Alpine ML Talk Series:
The talk is called “Frequentist inference only seems easy” and is about the theory of simple statistical inference (based on material from this article http://www.win-vector.com/blog/2014/07/frequenstist-inference-only-seems-easy/ ). The talk includes some simple dice games (I bring dice!) that really break the rote methods commonly taught as statistics. This is actually a good thing, as it gives you time and permission to work out how common statistical methods are properly derived from basic principles. This takes a little math (which I develop in the talk), but it changes some statistics from “do this” to “here is why you calculate like this”. It should appeal to people interested in the statistical and machine learning parts of data science.

D. G. Mayo Columbia slides for Workshop on Probability &Learning

- The document discusses ongoing debates between frequentist and Bayesian approaches to statistical inference and the importance of resolving philosophical differences given issues like the replication crisis.
- It notes statisticians are not blameless for failing to agree on a unified methodology and that philosophical correctness is important for methodology.
- The document advocates an "error statistical" or "probativist" approach focused on assessing the severity or stringency of tests, beyond just probabilism or performance, to determine what claims are warranted by evidence. This involves considering a test's ability to detect flaws in claims.

AI for HCI – could this be a better title if I’d asked ChatGPT

Seminar in Pisa, Italy, 11th June 2024
https://www.alandix.com/academic/talks/Pisa-AI4HCI-2024/
AI has entered into all aspects of life. Sometimes this is hidden, below the surface of the devices and applications we use; sometimes much more explicit in interactions with devices and user interfaces. In this talk I'll explore some of the ways in which AI can be used to enhance existing interactions and also how we can effectively design user interfaces for AI-rich systems. In addition, AI can be used by UX designers, and AI developers need better ways to interact with their tools and systems. Perhaps more fundamental are not the direct effects of AI, but the ways in which it is fundamentally changing the society and world in which we live.

Just Counting – a tool ecosystem for personal numeric information

Paper presented at 17th International Conference on Advanced Visual Interfaces (AVI 2024), Arenzano (Genoa), Italy. June 3rd -7th 2024
https://alandix.com/academic/papers/AVI2024-justcounting/
Numbers are part of day-to-day life from household budgeting to making sense of global warming and planning academic projects. But, for many, dealing with numeric information is daunting with multiple step changes in complexity moving from, say simple calculations to spreadsheet use, as well as difficulties managing different sources of complex information. In this paper we present an ecosystem of interconnected prototype tools that explore this space, including TSoW interpreting unfamiliar orders of magnitude; calQ a four-function calculator that shifts seamlessly to micro-spreadsheet; WS2 embedding spreadsheet-like features in web pages; and myData collating and connecting the diverse data sources. Collectively, these tools offer an envisionment to prompt discussion both of the way end-users can more easily deal with numeric information and of the background technical infrastructure necessary for this to happen.

D. Mayo: Philosophy of Statistics & the Replication Crisis in Science

D. Mayo discusses various disputes, notably the replication crisis in science, in the context of her just-released book: Statistical Inference as Severe Testing: How to Get Beyond the Statistics Wars.

Meeting #1 Slides Phil 6334/Econ 6614 SP2019

Mayo slides for Meeting #1, Phil 6334/Econ 6614 SP2019: Current Debates on Statistical Inference and Modeling

April 3 2014 slides mayo

1. The document discusses Dorling's solution to Duhem's problem using Bayesian probability and error statistics. Dorling illustrates how anomalous data could rationally discredit an auxiliary hypothesis rather than the main hypothesis using Bayes' theorem.
2. However, an error statistician would argue that simply satisfying Bayesian conditions is not enough to show strong evidence against the auxiliary hypothesis. The test of the auxiliary hypothesis needs to be severe, meaning it would likely uncover errors if present.
3. Examples are discussed where probabilistic instantiation is used incorrectly to derive epistemic probabilities. While epistemic probabilists may endorse this, it fails to ensure the severity assessments needed. Instantiating probabilities to particular hypotheses or individuals is a fallacy.

P values and the art of herding cats

Talk given at RSS 2016 Manchester
I consider the problems that the ASA faced in getting a P-value statement together, not in terms of the process, but by looking at the expressed opinion of 21 published commentaries on the agreed statement. I then trace the history of the development of P-values. I show that the perceived problem with P-values is not just one of a supposed inadequacy of frequentist statistics but reflects a struggle at the very heart of Bayesian inference. I conclude that replacing P-values by automatic Bayesian approaches is unlikely to abolish controversy. It may be better to try to embrace diversity than to pretend it is not there.

Probing with Severity: Beyond Bayesian Probabilism and Frequentist Performance

Slides from Rutgers Seminar talk by Deborah G Mayo
December 3, 2014
Rutgers, Department of Statistics and Biostatistics
Abstract: Getting beyond today’s most pressing controversies revolving around statistical methods, I argue, requires scrutinizing their underlying statistical philosophies. Two main philosophies about the roles of probability in statistical inference are probabilism and performance (in the long-run). The first assumes that we need a method of assigning probabilities to hypotheses; the second assumes that the main function of statistical method is to control long-run performance. I offer a third goal: controlling and evaluating the probativeness of methods. An inductive inference, in this conception, takes the form of inferring hypotheses to the extent that they have been well or severely tested. A report of poorly tested claims must also be part of an adequate inference. I develop a statistical philosophy in which error probabilities of methods may be used to evaluate and control the stringency or severity of tests. I then show how the “severe testing” philosophy clarifies and avoids familiar criticisms and abuses of significance tests and cognate methods (e.g., confidence intervals). Severity may be threatened in three main ways: fallacies of statistical tests, unwarranted links between statistical and substantive claims, and violations of model assumptions.

P value wars

The document summarizes recent criticisms of the use of p-values and significance testing in research. It discusses how influential researchers like Ioannidis and replicability studies have questioned whether most published findings are true. It also summarizes the American Statistical Association's statement on p-values, which advises that scientific conclusions should not be based solely on p-value thresholds. The document then provides background on the history of p-values, noting that the debate is not just between frequentist and Bayesian approaches, but between different Bayesian views. It concludes by advising that p<0.05 is a weak standard, replication is important, and researchers should follow the ASA's recommendations to avoid overreliance on p-values alone.

Phil 6334 Mayo slides Day 1

This document discusses four waves in the philosophy of statistics from the 1930s to the present. The first wave centered around debates between Fisher, Neyman, Pearson, and others regarding hypothesis testing and the interpretation of p-values. The second wave from the 1960s-1980s involved criticisms of Neyman-Pearson methods regarding their applicability before and after obtaining data. The third wave from 1980-2005 explored likelihoodism and debates over the likelihood principle. The fourth and ongoing wave since 2005 continues philosophical debates in statistics.

The replication crisis: are P-values the problem and are Bayes factors the so...

Today’s posterior is tomorrow’s prior. Dennis Lindley [1]
It has been claimed that science is undergoing a replication crisis and that when looking for culprits, the cult of significance is the chief suspect. It has also been claimed that Bayes factors might provide a solution.
In my opinion, these claims are misleading and part of the problem is our understanding of the purpose and nature of replication, which has only recently been subject to formal analysis [2]. What we are or should be interested in is truth. Replication is a coherence not a correspondence requirement [3] and one that has a strong dependence on the size of the replication study [4].
Consideration of Bayes factors raises a puzzling question. Should the Bayes factor for a replication study be calculated as if it were the initial study? If the answer is yes, the approach is not fully Bayesian and furthermore the Bayes factors will be subject to exactly the same replication ‘paradox’ as P-values. If the answer is no, then in what sense can an initially found Bayes factor be replicated and what are the implications for how we should view replication of P-values?
A further issue is that little attention has been paid to false negatives and, by extension to true negative values. Yet, as is well known from the theory of diagnostic tests, it is meaningless to consider the performance of a test in terms of false positives alone.
I shall argue that we are in danger of confusing evidence with the conclusions we draw and that any reforms of scientific practice should concentrate on producing evidence that is as reliable as it can be qua evidence. There are many basic scientific practices in need of reform. Pseudoreplication [5], for example, and the routine destruction of information through dichotomisation [6] are far more serious problems than many matters of inferential framing that seem to have excited statisticians.
References
1. Lindley DV. Bayesian statistics: A review. SIAM; 1972.
2. Devezer B, Navarro DJ, Vandekerckhove J, Ozge Buzbas E. The case for formal methodology in scientific reform. R Soc Open Sci. Mar 31 2021;8(3):200805. doi:10.1098/rsos.200805
3. Walker RCS. Theories of Truth. In: Hale B, Wright C, Miller A, eds. A Companion to the Philosophy of Language. John Wiley & Sons; 2017:532-553: chap 21.
4. Senn SJ. A comment on replication, p-values and evidence by S.N.Goodman, Statistics in Medicine 1992; 11:875-879. Letter. Statistics in Medicine. 2002;21(16):2437-44.
5. Hurlbert SH. Pseudoreplication and the design of ecological field experiments. Ecological monographs. 1984;54(2):187-211.
6. Senn SJ. Being Efficient About Efficacy Estimation. Research. Statistics in Biopharmaceutical Research. 2013;5(3):204-210. doi:10.1080/19466315.2012.754726
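The point above, that it is meaningless to judge a diagnostic test on false positives alone, can be illustrated with a short Bayes'-theorem sketch. The test characteristics and prevalence below are hypothetical numbers chosen for illustration, not figures from the talk:

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """P(condition | positive result) by Bayes' theorem: the share of
    positive results that are true positives."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# A sensitive test (99%) with modest specificity (90%) applied to a
# rare condition (1% prevalence): most positive results are false.
ppv = positive_predictive_value(0.99, 0.90, 0.01)
```

Even with 99% sensitivity, fewer than one positive in ten is genuine here; only by also accounting for specificity (and hence false and true negatives) does the test's performance become interpretable.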

Crisis of confidence, p-hacking and the future of psychology

The document discusses issues with statistical analysis and interpretation in research. It notes that traditional null hypothesis significance testing can lead to problems like publication bias. Bayesian statistics are presented as an alternative that considers the probability of the data under both the null and alternative hypotheses. However, Bayesian methods still require transparent reporting and are not a panacea. Overall statistical power in many fields remains low, and selective reporting can still undermine reliability regardless of the statistical approach. Transparency in analysis, open sharing of data and materials, and preregistration of hypotheses are emphasized as ways to improve the credibility of research findings.
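The Bayesian comparison mentioned above (the probability of the data under both the null and alternative hypotheses) reduces, for two point hypotheses, to a likelihood ratio. A minimal sketch, with the particular success rates chosen purely for illustration:

```python
from math import comb

def bayes_factor(n, k, p0=0.5, p1=0.75):
    """Likelihood ratio P(data | H1) / P(data | H0) for k successes in
    n trials, comparing two point hypotheses about the success rate.
    (Real Bayesian analyses usually place a prior distribution over the
    alternative rather than fixing a single value, as assumed here.)"""
    lik0 = comb(n, k) * p0**k * (1 - p0)**(n - k)
    lik1 = comb(n, k) * p1**k * (1 - p1)**(n - k)
    return lik1 / lik0

bf = bayes_factor(20, 15)   # > 1: the data favour p = 0.75 over p = 0.5
```

Note that the ratio says nothing about selective reporting or low power by itself, which is the summary's point: the machinery changes, but transparency requirements do not.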

Severe Testing: The Key to Error Correction

D. G. Mayo's slides for her presentation given March 17, 2017 at the Boston Colloquium for Philosophy of Science, Alfred I. Taub forum: "Understanding Reproducibility & Error Correction in Science"

Philosophy of Science and Philosophy of Statistics

D. Mayo 2020 presentation for APA annual Central meeting, section "Epistemology Meets Philosophy of Statistics"

Probability Homework Help

I am Nelson. I am a Probability Assignment Expert at statisticsassignmenthelp.com. I hold a Masters in Statistics from City University of London.
I have been helping students with their homework for the past 5 years. I solve assignments related to Probability. Visit statisticsassignmenthelp.com or email info@statisticsassignmenthelp.com.
You can also call on +1 678 648 4277 for any assistance with Probability Assignments.

P-Value "Reforms": Fixing Science or Threat to Replication and Falsification

D. G. Mayo
Presented February 7, 2020 at the Fixing Science Workshop, National Assoc. of Scholars, Independent Institute

D. Mayo: Philosophical Interventions in the Statistics Wars

ABSTRACT: While statistics has a long history of passionate philosophical controversy, the last decade especially cries out for philosophical illumination. Misuses of statistics, Big Data dredging, and P-hacking make it easy to find statistically significant, but spurious, effects. This obstructs a test's ability to control the probability of erroneously inferring effects–i.e., to control error probabilities. Disagreements about statistical reforms reflect philosophical disagreements about the nature of statistical inference–including whether error probability control even matters! I describe my interventions in statistics in relation to three events. (1) In 2016 the American Statistical Association (ASA) met to craft principles for avoiding misinterpreting P-values. (2) In 2017, a "megateam" (including philosophers of science) proposed "redefining statistical significance," replacing the common threshold of P ≤ .05 with P ≤ .005. (3) In 2019, an editorial in the main ASA journal called for abandoning all P-value thresholds, and even the words "significant/significance".
A word on each. (1) Invited to be a "philosophical observer" at their meeting, I found the major issues were conceptual. P-values measure how incompatible data are from what is expected under a hypothesis that there is no genuine effect: the smaller the P-value, the more indication of incompatibility. The ASA list of familiar misinterpretations–P-values are not posterior probabilities, statistical significance is not substantive importance, no evidence against a hypothesis need not be evidence for it–I argue, should not be the basis for replacing tests with methods less able to assess and control erroneous interpretations of data. (Mayo 2016, 2019). (2) The "redefine statistical significance" movement appraises P-values from the perspective of a very different quantity: a comparative Bayes Factor. Failing to recognize how contrasting approaches measure different things, disputants often talk past each other (Mayo 2018). (3) To ban P-value thresholds, even to distinguish terrible from warranted evidence, I say, is a mistake (2019). It will not eradicate P-hacking, but it will make it harder to hold P-hackers accountable. A 2020 ASA Task Force on significance testing has just been announced. (I would like to think my blog errorstatistics.com helped.)
To enter the fray between rival statistical approaches, it helps to have a principle applicable to all accounts. There's poor evidence for a claim if little if anything has been done to find it flawed even if it is. This forms a basic requirement for evidence I call the severity requirement. A claim passes with severity only if it is subjected to and passes a test that probably would have found it flawed, if it were. It stems from Popper, though he never adequately cashed it out. A variant is the frequentist principle of evidence developed with Sir David Cox (Mayo and Cox 20

Mayod@psa 21(na)

1) The document discusses philosophical interventions in statistical debates and proposed reforms. It outlines three types of interventions: illuminating debates within statistics, reformulating frequentist tools through a severity perspective, and scrutinizing proposed reforms from the replication crisis.
2) A key idea is that evidence for a claim only comes when it has undergone severe testing, meaning it would probably have failed if it were false. The document argues for keeping the best aspects of Fisherian and Neyman-Pearson testing through a severity interpretation.
3) In scrutinizing reforms, it questions proposals to abandon statistical significance and P-value thresholds, arguing this could exacerbate selective reporting instead of reducing it. The document advocates reformulating tests

Paradoxes and Fallacies - Resolving some well-known puzzles with Bayesian net...

There are a number of paradoxes and fallacies that keep recurring as popular and mind-bending puzzles in the media. Although there is (now) complete agreement among scientists on how to resolve them, the correct answers are often perplexing to the casual observer and still cause bewilderment.
As this paper will show, the formally correct solutions of these probabilistic paradoxes are counterintuitive, and there are few tools to assist us in solving them.
Although we won’t be able to overcome inherent mental biases and cognitive limitations, we can now provide a very practical new tool for the correct inference in the form of Bayesian networks.
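One of the recurring puzzles of this kind, the Monty Hall problem, can also be settled without a full Bayesian network, simply by exhaustively enumerating the equally likely cases. A minimal sketch (my own example, not code from the paper):

```python
from itertools import product

def monty_hall_strategies():
    """Enumerate the 9 equally likely (car position, first pick) cases.
    Staying wins exactly when the first pick was right; switching wins
    in every other case, since the host always opens a goat door."""
    cases = list(product(range(3), range(3)))
    stay = sum(1 for car, pick in cases if pick == car)
    switch = sum(1 for car, pick in cases if pick != car)
    return stay / len(cases), switch / len(cases)

stay_rate, switch_rate = monty_hall_strategies()   # 1/3 vs 2/3
```

The counterintuitive 2/3 advantage for switching falls straight out of the enumeration; what a Bayesian network adds is a reusable, graphical way to perform this kind of conditioning for larger problems.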

Applied statistics part 2

Applied statistics part 2, by Mohammad Hadi Farjoo MD, PhD, Shahid Beheshti University of Medical Sciences

A flexible QR-code infrastructure for heritage

Paper presented at AVICH 2024: Workshop on Advanced Visual Interfaces and Interactions in Cultural Heritage. AVI 2024 at Arenzano (Genoa), Italy, 4th June 2024
https://alandix.com/academic/papers/AVI2CH2024-qrarch/
QR codes are often used in outdoor cultural heritage settings. They are an established technology but inflexible, especially if the websites to which they point change their structure, or even disappear. This paper describes a web infrastructure for deploying QR codes that can be remapped dynamically, both as web resources move or change and to allow personalized and adaptable content. This is a small change in the underlying technology, but it radically changes potential applications. It can be used to personalise content to viewers’ preferences such as language choices, but could also be used to support bespoke events or applications such as school visits or treasure hunts. The infrastructure has been deployed at the Memorial Gardens in the lost village of Troedrhiwfuwch, enabling the stories of fallen WWI and WWII servicemen to be retold for the current generation.

Epistemic Interaction - tuning interfaces to provide information for AI support

Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.

AI and the Humanities – provocations – The Arts, Humanities & Responsible AI...

Keynote at The Arts, Humanities & Responsible AI Symposium Aberystwyth University, Wales, 29th June 2024
https://www.alandix.com/academic/talks/AHRAI-2024/
The talk takes an excursion through several frameworks or ways of looking at the way artificial intelligence impacts:
* humanities and social science research
* social justice
* fundamental changes in society

Can AI do good? at 'offtheCanvas' India HCI prelude

Invited talk at 'offtheCanvas' IndiaHCI prelude, 29th June 2024.
https://www.alandix.com/academic/talks/offtheCanvas-IndiaHCI2024/
The world is being changed fundamentally by AI and we are constantly faced with newspaper headlines about its harmful effects. However, there is also the potential both to ameliorate these harms and to use the new abilities of AI to transform society for the good. Can you make the difference?

CDT Away Day Talk: Qualitative–Quantitative reasoning and lightweight numbers

Talk at EPIC CDT Away Day, St Davids Hotel, Cardiff, 11th April 2024.
https://alandix.com/academic/talks/CDT-away-day-April-2024-QQ/
As academics we need to deal with numbers, including project management spreadsheets and student marks. In addition, they are part of day-to-day life, whether household budgeting or working out how many socks to pack for a journey. Perhaps most crucially, many national and global issues require an understanding of numeric information, from climate change to tax rates, and of course the Covid-19 pandemic. If citizens are not able to make sense of this, democracy fails. Of course, many are not only uncertain when dealing with numbers, but suffer more or less extreme maths anxiety. Indeed a recent UK survey found that, “over a third of adults (35%) say that doing maths makes them feel anxious, while one in five are so fearful it even makes them feel physically sick”. Sometimes detailed calculations are necessary, but often the critical skill is qualitative–quantitative reasoning, that is a qualitative understanding of quantitative phenomena. This can often be aided by the ability to use back-of-the-envelope calculations and to deal with lightweight numeric information. This talk discusses these issues and presents some prototype tools to explore the design space for personal numeric information.
This talk is largely the same as the one of the same name given at Ulster University in February. However, the slides have been updated to correct web material misattributed to the BBC which was actually from the Guardian. An eagle-eyed member of the audience spotted that the font in the screenshot was one used on the Guardian website and not the BBC’s.

Swan(sea) Song – personal research during my six years at Swansea ... and bey...

Talk at the Computational Foundry, Swansea University, 24th April 2024.
https://alandix.com/academic/talks/Swansea-song-April-2024/
This talk was my last formal act as Director of the Computational Foundry, before retiring from Swansea University at the end of April 2024. It was the last part in a research afternoon of the School of Mathematics and Computer Science, during which there were talks by other members of the school including a wonderful potted history of Maths and CS in the University and the Computational Foundry by John Tucker and Matt Jones.
This talk is a (partial) summary of more personal research through my time at Swansea, some in collaboration with others across the university, some with those external to Swansea, and some more individual. The talk used a number of web-based prototypes and systems that I've developed, many as weekend projects, to look at areas including AI, digital humanities and heritage, qualitative-quantitative reasoning, statistics and maths education, physical prototyping and UX tools. The talk included work inspired by teaching, consultancy and other real-world problems, but almost always also including a strong theoretical dimension. This reflects my personal background, as the son of a carpenter, but where mathematics was my academic 'first love' -- always seeking out ways in which practical making and fundamental knowledge interact. A theme that runs through many of the examples is the way in which many of the things that were completed while at Swansea had roots before, and things I started here will continue into the future. And now I look forward to the coming years; although my employment at Swansea has ended, I will continue to collaborate with many in the University, both those I have met since being in Swansea and those I knew before.

From Family Reminiscence to Scholarly Archive .

digital economy, human-computer interaction, design thinking, computational thinking
Keynote at Transforming Heritage Research in a Transforming World, the
International Hellenic University, Serres, Greece, 16-17 April 2024.
https://alandix.com/academic/talks/CAA-GR-2024/

Human-Centred Artificial Intelligence – Malta 2024

Rarely a day goes by without an AI story in the news. Sometimes there is good news, such as the use of AI to discover a new pharmaceutical, but often it is darker, about AI bias or the way it may rob us of jobs, privacy or autonomy. The human impact of AI is of two kinds. First, what AI does directly - systems that we use and can design better or worse. Second, how AI shapes society, the way AI can create mismatches of power between large corporations and nation states, and between organisations and individuals. Recent advances in large-language models in particular may mean that AI is only in the hands of those who can afford massive computational power and technical expertise. However, there are signs of hope, in particular the way that generative AI might enable niche applications that would otherwise be impossible. In education this may allow personalised tuition, but it also changes what needs to be learnt ... not necessarily digital; the ability of LLMs to generalise may offer ways for minority languages to survive; and in health there is the possibility of personalised medicine, and affordable ways to help well-being and mental health.

The future of UX design support tools - talk Paris March 2024

talk to ACM SIGCHI Paris Chapter at Université Paris-Saclay, 19th March 2024.
https://alandix.com/academic/talks/Paris-UX-2024/
From the 1980s graphical interfaces have dominated the way we envisage user interactions. While we know and teach our students about the importance of taking a wider perspective, the vast majority of tools used in practical UX (user experience) design are dominated by screens. In this talk I will explore ways in which future design can step beyond the pixelated surface.
One strand is understanding the physical nature of devices, human bodies, and the environments within which they engage; this is addressed in my 2022 book “TouchIT – Understanding Design in a Physical-Digital World” co-authored with Steve Gill, Jo Hare, and Devina Ramduny-Ellis.
The other strand is work over the last few years with Miriam Sturdee and Anna Carter, collectively entitled InContext, in which we have been exploring next generation UX tools, including investigative workshops and focus groups. While not a major issue, AI was mentioned in these workshops, but they were before the recent rise in awareness fostered by ChatGPT. I will contextualise this building on two other books that I am completing relating to AI and Human–Computer Interaction.
I will demonstrate two tools that explore the space of design beyond the screen. One, ScenarioViewer enables screen-based prototypes, at various levels of fidelity, to be embedded within story-board-like contextual images. The other, PhysProto, allows physical prototypes to be interactively explored remotely using video clips and Physigrams, executable models of the physical behaviour of the device. These are prototypes of prototyping tools, but also provotypes, designed to provoke you to consider for yourself the future of UX design support tools.

Qualitative–Quantitative reasoning and lightweight numbers

Seminar at University of Ulster, 21st February 2024.
https://alandix.com/academic/talks/Ulster-2024-QQ/
As academics we need to deal with numbers including project management spreadsheets and student marks. In addition, they are part of day-to-day life whether household budgeting or working out how many socks to pack for a journey. Perhaps most crucially, many national and global issues require an understanding of numeric information from climate change to tax rates, and of course the Covid-19 pandemic. If citizens are not able to make sense of this, democracy fails.
Of course, many are not only uncertain when dealing with numbers, but suffer more or less extreme maths anxiety. Indeed a recent UK survey found that, “over a third of adults (35%) say that doing maths makes them feel anxious, while one in five are so fearful it even makes them feel physically sick”.
Sometimes detailed calculations are necessary, but often the critical skill is qualitative–quantitative reasoning, that is, a qualitative understanding of quantitative phenomena. This can often be aided by the ability to use back-of-the-envelope calculations and to deal with lightweight numeric information.
This talk discusses these issues and presents some prototype tools to explore the design space for personal numeric information.
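The abstract above mentions back-of-the-envelope calculations and the sock-packing example; as a tiny illustration (all figures are round, made-up assumptions, not numbers from the talk), a Fermi-style estimate might look like:

```python
# Back-of-the-envelope estimate: roughly how many socks to pack
# for a 10-day trip? All inputs are rough, illustrative assumptions.
days = 10
pairs_per_day = 1             # one fresh pair a day
spare_fraction = 0.2          # ~20% spares for rain or losses

pairs = days * pairs_per_day
pairs_with_spares = round(pairs * (1 + spare_fraction))
socks = 2 * pairs_with_spares  # two socks per pair -> 24
```

The point of such a sketch is not precision but a defensible order of magnitude: whether the answer is nearer 20 than 200.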

Invited talk at Diversifying Knowledge Production in HCI

Invited talk at workshop on Diversifying Knowledge Production in HCI: Exploring Materiality and Novel Formats for Scholarly Expression.
TEI'24, Cork, 11th Feb 2024
https://alandix.com/academic/talks/TEI-workshop-2024/

Exceptional Experiences for Everyone

Invited talk at AMID 2023 – 1st International Workshop on Accessibility and Multimodal Interaction Design Approaches in Museums for People with Impairments, in conjunction with the MobileHCI Conference, 26 Sept. 2023.
https://alandix.com/academic/talks/AMID2023-exceptional/
Often accessibility is an afterthought or sticking plaster to fix the holes in an experience that was designed with a central audience in mind: maybe middle-aged, fully abled, well educated. Ideally we would have user experiences designed specifically for different kinds of modalities and in different styles, not just because of the wide diversity of users, but also because any one user has varying needs and varying abilities at different times. In the context of a large museum or cultural institution this is already challenging, but appears impossible for smaller archives or local community heritage. Yet if heritage and history are to be accessible, this also applies to production, democratising digitisation and empowering marginalised groups. We need appropriate architectures, tools, technology infrastructure and platforms that make this not just possible, but simple. In this talk I offer some insights, some examples and many research challenges towards the goal of enabling exceptional experiences for everyone.

Inclusivity and AI: opportunity or threat

Keynote at 9th International Conference on Computing and Informatics (ICOCI 2023), "Nurturing an inclusive digital society for a sustainable nation", 13-14 September 2023, Kuala Lumpur, Malaysia
https://alandix.com/academic/talks/ICOCI2023-keynote/
AI is transforming many sectors of the economy and our day-to-day lives. We hear of success stories including medical advances, but also worries that AI will destroy jobs or even be an existential threat to humanity. We also know that for previous waves of technology – mechanical, electronic and digital – the costs and benefits do not fall equally to everyone in society. There are clear dangers that AI will further entrench existing power and deepen the digital divide: both at an individual level and globally. For example, training foundation models, such as GPT-4, requires enormous computational power and massive data sets accessible only to the largest corporations. However, the ways in which these can be used generatively in more niche areas offer potential for minority languages and individualised learning that was previously accessible only to the rich. Whether the threats of AI or its opportunities dominate is not simply an abstract question, but one that impacts the most disadvantaged around us, and one that, as researchers and practitioners in digital technology, we can affect. If we truly want an inclusive digital society, then we need to make it happen.

Hidden Figures architectural challenges to expose parameters lost in code

Position paper presented at Engineering Interactive Systems Embedding AI Technologies at EICS 2023, Swansea, Wales, UK. 27 June. 2023.
https://alandix.com/academic/papers/EISEAIT2023-hidden/
Many critical user interaction design decisions are made in the heat of detailed development. These include simple parameter choices or more complex weightings in intelligent algorithms. Many would be appropriate for expert design review, user-preference choices or optimisation by machine learning, but they are buried deep in the code. Although the developer may realise this potential, the location of the decision in the code is far removed from where user feedback occurs, data can be collected and machine learning could be applied. This position paper describes several case studies and uses them to frame an architectural challenge for tools and infrastructure to uncover these hidden variables, making them available for machine learning and user inspection.
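The position paper itself proposes an architectural challenge rather than a specific mechanism; as one loose Python sketch of the idea (registry, names and weighting all invented for illustration), buried parameters could be registered so an outer layer can inspect and override them:

```python
# A global registry of 'hidden' design parameters. Deep code registers
# each tunable with a name and default; the outer layer (a design review
# tool, user preferences, or a learning loop) can then list and adjust them.
TUNABLES = {}

def tunable(name, default):
    """Register a buried design parameter and return its current value."""
    return TUNABLES.setdefault(name, default)

def rank_results(scores):
    # Deep in an algorithm: a weighting that would otherwise be a bare literal.
    w = tunable("search.recency_weight", 0.3)
    return sorted(scores, key=lambda s: s["match"] + w * s["recency"], reverse=True)

results = [{"match": 0.9, "recency": 0.1}, {"match": 0.5, "recency": 0.9}]
rank_results(results)                       # registers the parameter as a side effect
TUNABLES["search.recency_weight"] = 0.8     # e.g. tuned from user feedback or ML
```

With the higher recency weight, the same call now ranks the fresher result first: the decision has surfaced from the code into something inspectable.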

ChatGPT, Culture and Creativity simulacrum and alterity

Keynote at Creative AI Research Conference 2023
https://alandix.com/academic/talks/CAR2023-keynote/
Over the years many of the ‘red lines’ of artificial intelligence have been crossed: challenges that were deemed to require uniquely human understanding. In 1997, chess fell as Deep Blue defeated Kasparov; then, twenty years later, AlphaGo beat Ke Jie, the world’s top Go player. Arguably, game playing can be considered artificial and formal, not representing the rich nuanced nature of human intelligence embodied in the real world. However, large language models have challenged these assumptions, producing dialogue and texts that appear human – passing the Turing test. Furthermore, the texts and poems generated by ChatGPT and the images created by DALL-E appear almost creative.
Has the last bastion fallen or is it merely the babbling of ‘stochastic parrots’? Is AI the ultimate charlatan peddling plagiarism, or instead the child’s cry that reveals the emperor’s clothes of human creativity to be a sham? And what does it mean to be creative anyway?
I will attempt, if not to answer these deep questions, at least to lay down some pointers. We will test the limits of the myth of the individual innate genius with inspiration gifted by the muses, and explore the way creativity is always embodied in culture and technology. Yet, while artists and philosophers debate, the child draws on.

Why pandemics and climate change are hard to understand and make decision mak...

Talk given as part of Online Seminars on Human Computer Interaction and User Experience
Presented by British Computer Society Interaction Group
and Interacting with Computers, 27 February 2023
https://alandix.com/academic/talks/BCS-IwC-Covid-Feb-2023/
This talk draws on diverse psychological, behavioural and numerical literature to understand some of the challenges we all face in making sense of large-scale phenomena, and uses this to create a roadmap for HCI responses. This body of research points the way toward current challenges and equips us with tools and principles that can help HCI researchers deliver value. The talk is framed by looking at patterns and information that highlight some of the common misunderstandings that arise – not just for politicians and the general public but also for those at the heart of the academic community. This talk does not have all the answers, but we hope it provides some and, perhaps more importantly, raises questions that we need to address as scientific and technical communities.

Beyond the Wireframe: tools to design, analyse and prototype physical devices

Keynote at Fifth European Tangible Interaction Studio, ENAC Toulouse. Nov. 7-10 2022.
https://alandix.com/academic/talks/ETIS2022-keynote/
For many years interaction design was driven by the abstractions of WIMP (windows, icons, menus, pointer). The details differ across desktop applications, web pages and smartphones, and the ‘pointer’ has evolved from mouse to trackpad and touch-based interactions; however, for many digital applications the central aspects are unchanged. What is different is that the screens we encounter, as Weiser predicted, are everywhere: embedded in physical appliances such as showers and toasters, and situated in office walls and building facades. Furthermore, we are often engaging with digital applications that have no obvious screen, or where the screen, if present, is only a small part of the interaction; these include voice assistants, semi-autonomous vehicles, and smart cities.
Even where the dominant interaction is focused on a screen, the places where we use devices and the physical activities we are doing fundamentally affect the nature of the interactive experience: using a smartphone while sitting in an armchair watching television is very different from thumbing a quick message whilst walking down a busy city road on a rainy night.
In this talk I will describe several design techniques and prototype tools that seek to address the physicality of digital interactions including the physical nature of the device itself and the physical context in which it is placed. This will include ‘soft’ formal methods to describe physical aspects of devices, ways to use video to model physical prototypes during early design and tools to encourage designers to keep the context of use in mind even when working on largely screen-based interactions.
The talk draws on some long-standing work, parts of the recently published book 'TouchIT: Understanding Design in a Physical-Digital World' (co-authored with Steve Gill, Devina Ramduny-Ellis, and Jo Hare) and the InContext project (in collaboration with Miriam Sturdee and Anna Carter). The latter arose from the realisation that despite the vast number of design tools available, nearly all focus entirely on the screen and wireframes. We are asking "what is the Next Generation of UX design tool?" – perhaps you would like to join this conversation.

Forever Cyborgs – a long view on physical-digital interaction

Keynote at the European Conference on Cognitive Ergonomics (ECCE 2022), Kaiserslautern, Germany, 7th Oct 2022.
https://alandix.com/academic/talks/ECCE2022-keynote/
From prehistory to the internet age, humans have always lived as part of a technologically mediated world. Knapped flints have given way to touch-screens, cuneiform to CSS, but in both rapid hand-eye coordination and long-term social interactions, our experiences and actions in the world are embedded in a physical, mechanical, symbolic and digital nexus. After far too long in the writing, my co-authors and I are delighted that "TouchIT: Understanding Design in a Physical-Digital World" is finally published – symbolic words, recorded in digital media and printed on physical paper. This book covers established and emergent digital technology, but repeatedly the continuity of current and past technology, physical and digital worlds is evident. The fundamental cognitive resources that enable our digital existence in an age of constant flux are the result of aeons of development in a physical world that we remake and reimagine. In this talk I will explore multiple scales of digital interaction from seconds to years, informed by and illuminating what it means to be a fully embodied and richly reflective human.

AI for HCI – could this be a better title if I’d asked ChatGPT

Just Counting – a tool ecosystem for personal numeric information

A flexible QR-code infrastructure for heritage

Epistemic Interaction - tuning interfaces to provide information for AI support

AI and the Humanities – provocations – The Arts, Humanities & Responsible AI...

Can AI do good? at 'offtheCanvas' India HCI prelude

CDT Away Day Talk: Qualitative–Quantitative reasoning and lightweight numbers

Swan(sea) Song – personal research during my six years at Swansea ... and bey...


DSSML24_tspann_CodelessGenerativeAIPipelines

Codeless Generative AI Pipelines
(GenAI with Milvus)
https://ml.dssconf.pl/user.html#!/lecture/DSSML24-041a/rate
Discover the potential of real-time streaming in the context of GenAI as we delve into the intricacies of Apache NiFi and its capabilities. Learn how this tool can significantly simplify the data engineering workflow for GenAI applications, allowing you to focus on the creative aspects rather than the technical complexities. I will guide you through practical examples and use cases, showing the impact of automation on prompt building. From data ingestion to transformation and delivery, witness how Apache NiFi streamlines the entire pipeline, ensuring a smooth and hassle-free experience.
Timothy Spann
https://www.youtube.com/@FLaNK-Stack
https://medium.com/@tspann
https://www.datainmotion.dev/
milvus, unstructured data, vector database, zilliz, cloud, vectors, python, deep learning, generative ai, genai, nifi, kafka, flink, streaming, iot, edge

Orchestrating the Future: Navigating Today's Data Workflow Challenges with Ai...

Navigating today's data landscape isn't just about managing workflows; it's about strategically propelling your business forward. Apache Airflow has stood out as the benchmark in this arena, driving data orchestration forward since its early days. As we dive into the complexities of our current data-rich environment, where the sheer volume of information and its timely, accurate processing are crucial for AI and ML applications, the role of Airflow has never been more critical.
In my journey as the Senior Engineering Director and a pivotal member of Apache Airflow's Project Management Committee (PMC), I've witnessed Airflow transform data handling, making agility and insight the norm in an ever-evolving digital space. At Astronomer, our collaboration with leading AI & ML teams worldwide has not only tested but also proven Airflow's mettle in delivering data reliably and efficiently—data that now powers not just insights but core business functions.
This session is a deep dive into the essence of Airflow's success. We'll trace its evolution from a budding project to the backbone of data orchestration it is today, constantly adapting to meet the next wave of data challenges, including those brought on by Generative AI. It's this forward-thinking adaptability that keeps Airflow at the forefront of innovation, ready for whatever comes next.
The ever-growing demands of AI and ML applications have ushered in an era where sophisticated data management isn't a luxury—it's a necessity. Airflow's innate flexibility and scalability are what makes it indispensable in managing the intricate workflows of today, especially those involving Large Language Models (LLMs).
This talk isn't just a rundown of Airflow's features; it's about harnessing these capabilities to turn your data workflows into a strategic asset. Together, we'll explore how Airflow remains at the cutting edge of data orchestration, ensuring your organization is not just keeping pace but setting the pace in a data-driven future.
Session in https://budapestdata.hu/2024/04/kaxil-naik-astronomer-io/ | https://dataml24.sessionize.com/session/667627

End-to-end pipeline agility - Berlin Buzzwords 2024

We describe how we achieve high change agility in data engineering by eliminating the fear of breaking downstream data pipelines through end-to-end pipeline testing, and by using schema metaprogramming to safely eliminate boilerplate involved in changes that affect whole pipelines.
A quick poll on agility in changing pipelines from end to end indicated a huge span in capabilities. For the question "How long does it take for all downstream pipelines to be adapted to an upstream change," the median response was 6 months, but some respondents could do it in less than a day. When quantitative data engineering differences between the best and worst are measured, the span is often 100x-1000x, sometimes even more.
A long time ago, we suffered at Spotify from fear of changing pipelines due to not knowing what the impact might be downstream. We made plans for a technical solution to test pipelines end-to-end to mitigate that fear, but the effort failed for cultural reasons. We eventually solved this challenge, but in a different context. In this presentation we will describe how we test full pipelines effectively by manipulating workflow orchestration, which enables us to make changes in pipelines without fear of breaking downstream.
Making schema changes that affect many jobs also involves a lot of toil and boilerplate. Using schema-on-read mitigates some of it, but has drawbacks since it makes it more difficult to detect errors early. We will describe how we have rejected this tradeoff by applying schema metaprogramming, eliminating boilerplate but keeping the protection of static typing, thereby further improving agility to quickly modify data pipelines without fear.
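The talk does not show its implementation; as a loose Python sketch of the schema-metaprogramming idea (schema names and fields invented for illustration), record classes can be generated from a single schema definition so that a field change propagates everywhere while instances stay statically structured rather than schema-on-read dicts:

```python
from dataclasses import make_dataclass, fields

# Pipeline schemas defined once as plain data (names invented for illustration).
SCHEMA = {
    "PlayEvent":   [("user_id", str), ("track_id", str), ("ms_played", int)],
    "UserSummary": [("user_id", str), ("total_ms", int)],
}

# Generate one typed record class per schema entry: adding or renaming a
# field happens in SCHEMA alone, and every job that uses the generated
# class picks it up, keeping attribute-level protection against typos.
RECORDS = {name: make_dataclass(name, cols) for name, cols in SCHEMA.items()}

PlayEvent = RECORDS["PlayEvent"]
event = PlayEvent(user_id="u1", track_id="t42", ms_played=183000)
```

A misspelled field is then a construction-time error rather than a silently missing key downstream, which is the static-typing protection the abstract contrasts with schema-on-read.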


writing report business partner b1+ .pdf

Beyond the Basics of A/B Tests: Highly Innovative Experimentation Tactics You...

This webinar will explore cutting-edge, less familiar but powerful experimentation methodologies which address well-known limitations of standard A/B Testing. Designed for data and product leaders, this session aims to inspire the embrace of innovative approaches and provide insights into the frontiers of experimentation!

Predictably Improve Your B2B Tech Company's Performance by Leveraging Data

Harness the power of AI-backed reports, benchmarking and data analysis to predict trends and detect anomalies in your marketing efforts.
Peter Caputa, CEO at Databox, reveals the strategies and tools you can use to increase your growth rate (and margins!).
From metrics to track to data habits to pick up, enhance your reporting for powerful insights to improve your B2B tech company's marketing.
- - -
This is the webinar recording from the June 2024 HubSpot User Group (HUG) for B2B Technology USA.
Watch the video recording at https://youtu.be/5vjwGfPN9lw
Sign up for future HUG events at https://events.hubspot.com/b2b-technology-usa/

STATATHON: Unleashing the Power of Statistics in a 48-Hour Knowledge Extravag...

"Join us for STATATHON, a dynamic 2-day event dedicated to exploring statistical knowledge and its real-world applications. From theory to practice, participants engage in intensive learning sessions, workshops, and challenges, fostering a deeper understanding of statistical methodologies and their significance in various fields."

Population Growth in Bataan: The effects of population growth around rural pl...

A population analysis specific to Bataan.

4th Modern Marketing Reckoner by MMA Global India & Group M: 60+ experts on W...

The Modern Marketing Reckoner (MMR) is a comprehensive resource packed with POVs from 60+ industry leaders on how AI is transforming the 4 key pillars of marketing – product, place, price and promotions.

Global Situational Awareness of A.I. and where it's headed

You can see the future first in San Francisco.
Over the past year, the talk of the town has shifted from $10 billion compute clusters to $100 billion clusters to trillion-dollar clusters. Every six months another zero is added to the boardroom plans. Behind the scenes, there’s a fierce scramble to secure every power contract still available for the rest of the decade, every voltage transformer that can possibly be procured. American big business is gearing up to pour trillions of dollars into a long-unseen mobilization of American industrial might. By the end of the decade, American electricity production will have grown tens of percent; from the shale fields of Pennsylvania to the solar farms of Nevada, hundreds of millions of GPUs will hum.
The AGI race has begun. We are building machines that can think and reason. By 2025/26, these machines will outpace college graduates. By the end of the decade, they will be smarter than you or I; we will have superintelligence, in the true sense of the word. Along the way, national security forces not seen in half a century will be unleashed, and before long, The Project will be on. If we’re lucky, we’ll be in an all-out race with the CCP; if we’re unlucky, an all-out war.
Everyone is now talking about AI, but few have the faintest glimmer of what is about to hit them. Nvidia analysts still think 2024 might be close to the peak. Mainstream pundits are stuck on the wilful blindness of “it’s just predicting the next word”. They see only hype and business-as-usual; at most they entertain another internet-scale technological change.
Before long, the world will wake up. But right now, there are perhaps a few hundred people, most of them in San Francisco and the AI labs, that have situational awareness. Through whatever peculiar forces of fate, I have found myself amongst them. A few years ago, these people were derided as crazy—but they trusted the trendlines, which allowed them to correctly predict the AI advances of the past few years. Whether these people are also right about the next few years remains to be seen. But these are very smart people—the smartest people I have ever met—and they are the ones building this technology. Perhaps they will be an odd footnote in history, or perhaps they will go down in history like Szilard and Oppenheimer and Teller. If they are seeing the future even close to correctly, we are in for a wild ride.
Let me tell you what we see.

A presentation that explain the Power BI Licensing

Power BI Licensing

Analysis insight about a Flyball dog competition team's performance

Insight of my analysis about a Flyball dog competition team's last year performance. Find more: https://github.com/rolandnagy-ds/flyball_race_analysis/tree/main

Udemy_2024_Global_Learning_Skills_Trends_Report (1).pdf

Guia de Aprendizagem Globlal

Experts live - Improving user adoption with AI

Bekijk de slides van onze sessie Enhancing Modern Workplace Efficiency op Experts Live 2024.

原版一比一利兹贝克特大学毕业证(LeedsBeckett毕业证书)如何办理

原版一比一利兹贝克特大学毕业证(LeedsBeckett毕业证书)如何办理

DSSML24_tspann_CodelessGenerativeAIPipelines

DSSML24_tspann_CodelessGenerativeAIPipelines

Orchestrating the Future: Navigating Today's Data Workflow Challenges with Ai...

Orchestrating the Future: Navigating Today's Data Workflow Challenges with Ai...

End-to-end pipeline agility - Berlin Buzzwords 2024

End-to-end pipeline agility - Berlin Buzzwords 2024

在线办理(英国UCA毕业证书)创意艺术大学毕业证在读证明一模一样

在线办理(英国UCA毕业证书)创意艺术大学毕业证在读证明一模一样

原版制作(unimelb毕业证书)墨尔本大学毕业证Offer一模一样

原版制作(unimelb毕业证书)墨尔本大学毕业证Offer一模一样

writing report business partner b1+ .pdf

writing report business partner b1+ .pdf

原版一比一多伦多大学毕业证(UofT毕业证书)如何办理

原版一比一多伦多大学毕业证(UofT毕业证书)如何办理

Beyond the Basics of A/B Tests: Highly Innovative Experimentation Tactics You...

Beyond the Basics of A/B Tests: Highly Innovative Experimentation Tactics You...

Predictably Improve Your B2B Tech Company's Performance by Leveraging Data

Predictably Improve Your B2B Tech Company's Performance by Leveraging Data

STATATHON: Unleashing the Power of Statistics in a 48-Hour Knowledge Extravag...

STATATHON: Unleashing the Power of Statistics in a 48-Hour Knowledge Extravag...

Population Growth in Bataan: The effects of population growth around rural pl...

Population Growth in Bataan: The effects of population growth around rural pl...

一比一原版(Unimelb毕业证书)墨尔本大学毕业证如何办理

一比一原版(Unimelb毕业证书)墨尔本大学毕业证如何办理

一比一原版(UCSF文凭证书)旧金山分校毕业证如何办理

一比一原版(UCSF文凭证书)旧金山分校毕业证如何办理

4th Modern Marketing Reckoner by MMA Global India & Group M: 60+ experts on W...

4th Modern Marketing Reckoner by MMA Global India & Group M: 60+ experts on W...

Global Situational Awareness of A.I. and where its headed

Global Situational Awareness of A.I. and where its headed

A presentation that explain the Power BI Licensing

A presentation that explain the Power BI Licensing

Analysis insight about a Flyball dog competition team's performance

Analysis insight about a Flyball dog competition team's performance

Udemy_2024_Global_Learning_Skills_Trends_Report (1).pdf

Udemy_2024_Global_Learning_Skills_Trends_Report (1).pdf

Experts live - Improving user adoption with AI

Experts live - Improving user adoption with AI

- 1. Making Sense of Statistics in HCI: From P to Bayes and Beyond – Alan Dix. Part 2 – Doing It: if not p then what. http://alandix.com/statistics/
- 2. probing the unknown – quantified uncertainty, accepting unknowing, and common sense
- 3. recall … the job of statistics
- 4. conditionals and likelihood: probability the road will be busy vs. probability busy given it is 4am on Sunday; probability a die is a 6 vs. prob. of a 6 given it is 4 or bigger – conditional probability is probability given some knowledge
- 5. likelihood: likelihood is the probability of a measurement given unknown things in the world (a conditional probability), e.g. prob. of six heads given the coin is fair = 1/64 … but is it fair?
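The six-heads likelihood on slide 5 can be checked with a few lines of Python – a minimal sketch, assuming only that the tosses are independent and the coin fair:

```python
from fractions import Fraction

# Likelihood of six heads in a row, given the coin is fair:
# each toss is independent, so multiply the per-toss probabilities.
p_heads = Fraction(1, 2)
likelihood_six_heads = p_heads ** 6

print(likelihood_six_heads)   # 1/64, matching the slide
```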
- 6. counterfactual: statistics turns this round – from the likelihood of a measurement (probabilistic knowledge given the unknown world) to uncertain knowledge of the unknown world based on the measurement … but in various different ways
- 7. statistical reasoning
- 8. types of statistics: • hypothesis testing (the dreaded p!) – robust but confusing • confidence intervals – powerful but underused • Bayesian stats – mathematically clean but fragile – issues of independence
- 9.
- 10. hypothesis testing – the ubiquitous p
- 11. significance test: hypotheses: H1 – what we want to show; H0 – null hypothesis (to disprove). idea (when the experiment/study is successful): if H0 were true then the observed effect would be very unlikely, therefore H1 is (likely to be) true
- 12. 5% significance level? it says: if H0 were true then the probability of the observed effect happening by chance is less than 1 in 20 (5%), so H0 is unlikely to be true and H1 is likely to be true
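The reasoning on slides 11–12 can be made concrete with a one-tailed binomial p-value – a sketch using a hypothetical coin experiment, not anything from the slides beyond the fair-coin H0:

```python
from math import comb

def p_at_least_k_heads(k, n):
    """P(k or more heads in n fair tosses): the chance of an effect
    at least this extreme if H0 ('the coin is fair') were true."""
    return sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n

# 9 or more heads in 10 tosses: 11/1024, about 0.011 < 0.05,
# so we could reject H0 at the 5% level (one-tailed)
print(p_at_least_k_heads(9, 10))
```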
- 13. does not say: ✗ probability of H0 is < 1 in 20 ✗ probability of H1 is > 0.95 ✗ the effect is large or important, i.e. significant in the real sense
- 14. all it says: ✓ if H0 were true … the effect is unlikely (prob. < 1 in 20)
- 15. non-significant result: can NEVER reason "non-significant result → H1 false → things are the same"; all you can say is: H1 is not statistically proven
- 16.
- 17. confidence intervals – bounds of uncertainty
- 18. proving equality: non-significant means not proved different; the real difference may always be smaller than the experimental error; you can never prove equality, but you can put bounds on inequality
- 19. confidence interval: a bound on the true value – same theory as p values. e.g. mean of data is 0.3, 95% confidence interval is [–0.7, 1.3]: this says if the real value is not in the range [–0.7, 1.3], the probability of seeing the observed data is less than 5%
- 20. counterfactuals: "95% confidence interval is [–0.7, 1.3]" does not say there is a 95% probability that the real mean is in the range [–0.7, 1.3] – it either is or it isn’t! all it says: the probability of seeing the observed data if the real value is outside the range is less than 5%
- 21. proven … what? H0: no difference (real mean is zero); experimental result: mean is 0.3; significance test: n.s. at 5% – so what? 95% confidence interval: [–0.7, 1.3] – is 1.3 an important difference?
- 22. … and don’t forget … you still need to say what test/distribution – e.g. Student’s t – and how many degrees of freedom; it is still uncertain – the real value could be outside the interval
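A confidence interval like the one on slides 19–21 can be computed as follows – a sketch using the normal approximation from Python's standard library (for small samples you would use Student's t with n−1 degrees of freedom, as slide 22 notes); the data values are invented for illustration:

```python
from math import sqrt
from statistics import NormalDist, mean, stdev

def confidence_interval(data, level=0.95):
    """Normal-approximation confidence interval for the mean.
    Uses z rather than Student's t for simplicity, which is only
    reasonable for larger samples."""
    z = NormalDist().inv_cdf((1 + level) / 2)   # about 1.96 at 95%
    m = mean(data)
    half = z * stdev(data) / sqrt(len(data))
    return (m - half, m + half)

# Hypothetical data with sample mean 0.3, echoing the slide's example
lo, hi = confidence_interval([-1.2, 0.0, 0.3, 0.9, 1.5])
```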
- 23.
- 24. Bayesian statistics – putting a number on it
- 25. traditional statistics: probability of having antennae: if Martian = 1, if human = 0.001. hypotheses: H0: no Martians in the High Street; H1: the Martians have landed! you meet someone with antennae → reject the null hypothesis, p ≤ 0.1% → call the Men in Black!
- 26. Bayesian thinking: prior probability of meeting: a Martian = 0.000 001, a human = 0.999 999 (the prior is what you think before the evidence). probability of having antennae: Martian = 1; human = 0.001. you meet someone with antennae → posterior probability of being: a Martian ≈ 0.001, a human ≈ 0.999
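Slide 26's Martian posterior is a direct application of Bayes' rule; a minimal sketch with the slide's own numbers:

```python
def posterior(prior_h1, likelihood_h1, prior_h0, likelihood_h0):
    """Bayes' rule: P(H1 | evidence) = prior x likelihood, normalised
    over both hypotheses."""
    num = prior_h1 * likelihood_h1
    return num / (num + prior_h0 * likelihood_h0)

# prior(Martian) = 0.000001, P(antennae | Martian) = 1,
# prior(human)   = 0.999999, P(antennae | human)   = 0.001
p_martian = posterior(0.000001, 1.0, 0.999999, 0.001)
print(round(p_martian, 3))   # 0.001: almost certainly still a human
```

Note how the tiny prior overwhelms the strong likelihood ratio: the same evidence that looked like p ≤ 0.1% in the traditional framing still leaves the Martian hypothesis at only about one in a thousand.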
- 27. Bayesian inference: combine the prior with the likelihood – prior distribution → posterior
- 28. Bayesian issues: how do you get the prior? sometimes it doesn't matter too much! traditional stats is rather like a uniform prior. handling multiple evidence: can re-apply iteratively, but there are problems with interactions. internecine warfare: traditionalists and Bayesians often fight ;)
- 29.
- 30. philosophical differences – probing the unknown
- 31. philosophical stance: what do you know about the world? traditional statistics – assume nothing! reason from unknowledge. Bayesian statistics – 'prior' probabilities, reason from guess-timates
- 32. uncertain and unknown: assumptions: traditional – NO knowledge of the real value; Bayesian – a precise probability distribution. in reality … we sort of half know: traditional – choice of acceptable p; Bayesian – ‘spread’ distributions (uniform, Cauchy). ideally one should do a sensitivity analysis … but no one does
- 33.
- 34. issues – from “what is worse” to really significant
- 35. issues: likelihood is conditional probability; what to do with 'non-sig.'; priors for experiments? finding or verifying? significance vs. effect size vs. power; the idea of 'worse' for measures
- 36. example: playing cards – a run of cards: badly shuffled?
- 37. example: playing cards (2) – probability three cards make a decreasing run: first card needs to be 3 or bigger: 11/13; second card exactly the next one down: 1/51; third card exactly one down again: 1/50 – = 11/33150, approx 1 in 3000 – p < 0.001 (any card except ace or 2, next card down, next again)
- 38. example: playing cards (3) – but we would be equally surprised if … either direction, 15 positions for the first card = 15 × 2 × 11/33150, approx 1 in 100 … and then, how often do I play?
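The arithmetic on slides 37–38 can be verified with exact fractions – a sketch that simply multiplies out the slide's own factors:

```python
from fractions import Fraction

# Slide 37: probability three successive cards form a decreasing run
first  = Fraction(11, 13)   # first card is 3 or bigger (any rank but A or 2)
second = Fraction(1, 51)    # exactly the next rank down, among 51 cards left
third  = Fraction(1, 50)    # one rank down again, among 50 cards left
run_down = first * second * third
print(run_down)             # 11/33150, roughly 1 in 3000

# Slide 38: equally surprising in either direction, starting at any of
# 15 positions -- so the real "surprise" rate is much larger
surprise = 15 * 2 * run_down
print(float(surprise))      # about 0.00995, roughly 1 in 100
```

This is exactly the cherry-picking trap of slide 40 in miniature: the more ways a "surprising" pattern could have appeared, the less surprising any one appearance is.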
- 39. dangers: if it can go wrong, then with 95% confidence …
- 40. dangers – avoiding cherry picking: multiple tests, multiple stats; outliers; post-hoc hypotheses; non-independent effects (e.g. fat and sugar); correlated features (e.g. weight, diet and exercise); including trying both traditional and Bayes! the same UI but ‘not’ consistent
- 41.
- 42. so which is it? to p or not to p; Bayes forward?
- 43. the statistical crisis? mostly well-known problems with known solutions; maybe more people doing stats with less training? some of the papers about it use flawed stats!
- 44. on balance (my advice!): traditional is more conservative – if done properly. Bayes has many of the same problems as trad. plus a few more … and needs expertise. for most things stick with trad. … but always report confidence intervals as well as p. for special things go for Bayes: if you know the prior, for decision making with costs, for internal algorithms
- 45.
- 46. a quick experiment … but are they fair?
- 47. calculations – six coins: given the coin is fair: probability of six heads = 1/2^6 = 1/64; probability of six tails = 1/2^6 = 1/64; probability of either = 2/64 ≈ 3%. H0 – the coin is fair; H1 – the coin is not fair. likelihood( HHHHHH or TTTTTT | H0 ) < 5%
- 48. your experiment: toss 6 coins and record how many heads or tails; if HHHHHH or TTTTTT you can reject H0 with p < 5%; repeat the experiment 2 more times with the rest of the coins
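The experiment on slides 47–48 can be simulated to see the false-positive rate in action – a sketch assuming genuinely fair coins and an arbitrary fixed seed for reproducibility:

```python
import random

def six_coin_experiment(rng):
    """One run of the slide's experiment: toss six fair coins and
    'reject H0' only on all heads or all tails (p = 2/64, about 3%)."""
    tosses = [rng.random() < 0.5 for _ in range(6)]
    return all(tosses) or not any(tosses)

# Repeating the experiment many times with fair coins: roughly 3% of
# runs reject H0 purely by chance -- the false positives that a 5%
# significance threshold knowingly accepts.
rng = random.Random(42)   # illustrative seed, not from the slides
rate = sum(six_coin_experiment(rng) for _ in range(10_000)) / 10_000
print(rate)               # close to 2/64 ≈ 0.031
```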
- 49.