We are turning more and more “work” over to computers, and with that comes a great deal of responsibility. As we automate work, the impact of bad policies and decisions is amplified. We need to be vigilant to make sure that our work produces accurate results using sound research methods.
We need to remember that the process of research is as important as the results. It is easy to forsake methodology, as Big Data distances researchers from the research process and puts the focus on data collection, storage, and processing. However, practicing solid methods is the best way to produce accurate results. During this presentation we will explore important research topics. For example, we will explore the exponential increase in noise (spurious relationships) as the number of variables increases and time horizons narrow. We will also cover ways to detect and prevent spurious relationships in a Big Data context.
1. Big Data
& Research Methods
PRESENTED BY
Grant Stanley, CEO
Tadd Wood, Chief Data Scientist
Contemporary Analysis
1209 Harney Street, Suite 200
Omaha, NE 68102
2. Big Data & Research Methods
INTRO
The process of research is as
important as the results.
• Correct research methods improve results.
• They also allow others to collaborate on and
improve your work.
Contemporary Analysis canworksmart.com
3. Big Data & Research Methods
INTRO
We’ll explore the dangers of:
• Spurious Correlation
• Sampling Errors
• Model Selection
• Heteroscedasticity
• Overfitting
• Lack of Background
• Solutions instead of Theories
• Lack of the Scientific Method
• Correlation vs. Causation
4. Big Data & Research Methods
INTRO
Big Data can’t just be about
collecting, processing & storing
more data.
It has to be put to use. We need to
conduct research, build models,
and develop reports.
5. Big Data & Research Methods
THE DANGER OF FALSE POSITIVES
The car has little impact without
the highway or interstate.
If we take Big Data beyond
engineering, we are building
the equivalent of the highway
or interstate for the computer &
Internet.
6. Big Data & Research Methods
SPURIOUS RELATIONSHIPS
A spurious relationship arises when
two or more events or variables
have no direct causal connection,
yet it is wrongly inferred that they
do, due either to coincidence or to
the presence of an unseen third
factor.
7. Big Data & Research Methods
SPURIOUS RELATIONSHIPS
Big Data Errors: Spurious Correlations

[Chart: number of spurious correlations vs. number of variables; as variables grow from 500 to 2,000, the count of correlations rises from roughly 20,000 past 140,000, most of them spurious.]
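The growth shown in the chart can be reproduced with a small simulation. This is an illustrative sketch, not figures from the deck: the series length (30 observations), variable counts, and the 5% significance cutoff (|r| > 0.361 for n = 30) are our own choices.

```python
import numpy as np

rng = np.random.default_rng(42)

def count_spurious(n_vars, n_obs, r_crit=0.361):
    """Count variable pairs of pure noise whose sample correlation
    clears the two-tailed 5% significance cutoff (r_crit is the
    critical |r| for n_obs = 30)."""
    data = rng.standard_normal((n_vars, n_obs))   # nothing is truly related
    corr = np.corrcoef(data)
    iu = np.triu_indices(n_vars, k=1)             # each pair counted once
    return int((np.abs(corr[iu]) > r_crit).sum())

for n in (500, 1000, 2000):
    print(f"{n} variables: {count_spurious(n, 30)} 'significant' "
          f"correlations out of {n * (n - 1) // 2} pairs, all spurious")
```

Because the number of pairs grows quadratically with the number of variables, even a fixed 5% false-positive rate yields tens of thousands of spurious hits at 2,000 variables.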
8. Big Data & Research Methods
SPURIOUS RELATIONSHIPS
Maine’s divorce rate vs. US margarine consumption

Year                                       2000 2001 2002 2003 2004 2005 2006 2007 2008 2009
Divorce rate in Maine
(divorces per 1,000 people, US Census)     5.0  4.7  4.6  4.4  4.3  4.1  4.2  4.2  4.2  4.1
Margarine consumption
(pounds per capita, US, USDA)              8.2  7.0  6.5  5.3  5.2  4.0  4.6  4.5  4.2  3.7

Correlation: 0.992558

[Chart: both series decline in near lockstep from 2000 to 2009.]
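The quoted correlation can be checked directly from the tabulated values. A minimal Pearson computation in Python (the `pearson` helper is ours, not from the deck):

```python
import math

divorce = [5.0, 4.7, 4.6, 4.4, 4.3, 4.1, 4.2, 4.2, 4.2, 4.1]    # per 1,000 people
margarine = [8.2, 7.0, 6.5, 5.3, 5.2, 4.0, 4.6, 4.5, 4.2, 3.7]  # lbs per capita

def pearson(xs, ys):
    """Sample Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

print(f"r = {pearson(divorce, margarine):.4f}")  # near-perfect, yet no causal link
```

A correlation above 0.99 between two series that merely share a downward trend is exactly the kind of spurious relationship the slide warns about.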
9. Big Data & Research Methods
SAMPLING
There are two reasons for
sampling a population:
• The cost of collecting and processing data
is too high or impossible.
• To ensure that the results are representative
of the population.
10. Big Data & Research Methods
SAMPLING
Sampling still matters in Big Data.
Data is not information. It is simply
a representation of information.
You have to think about what the
data you are using represents.
11. Big Data & Research Methods
SAMPLING
Is smartphone data representative of the population?
Gender by Platform           iPhone   Android
Male                           57%       73%
Female                         43%       27%

Age by Platform              iPhone   Android
17 or younger                   7%       13%
18–24                          12%       17%
25–34                          21%       30%
35–44                          21%       21%
45+                            32%       25%
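To see why an unrepresentative sample misleads, here is an illustrative simulation (all population shares, spending levels, and panel inclusion rates are invented for the sketch): a panel that over-represents younger users underestimates the population's average spending.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented population: spending differs by age group
N = 100_000
age = rng.choice(["18-24", "25-34", "45+"], size=N, p=[0.15, 0.20, 0.65])
spend = np.where(age == "18-24", 40.0, np.where(age == "25-34", 60.0, 90.0))
spend = spend + rng.normal(0, 5, N)

true_mean = spend.mean()

# A smartphone panel that over-samples the young (invented rates)
incl_prob = np.select([age == "18-24", age == "25-34"], [0.30, 0.20], 0.05)
in_panel = rng.random(N) < incl_prob
panel_mean = spend[in_panel].mean()

print(f"population mean {true_mean:.1f}, biased-panel mean {panel_mean:.1f}")
```

The panel's composition, not its size, drives the error: sampling more panelists would only make the biased estimate more precisely wrong.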
12. Big Data & Research Methods
MODEL SELECTION
OLS is not a catch-all.
You have to know your data.
Is it continuous, discrete, binary,
ordinal, or categorical? Is your
data symmetric or asymmetric? Are
there outliers?
13. Big Data & Research Methods
MODEL SELECTION
14. Big Data & Research Methods
HETEROSCEDASTICITY
Heteroscedasticity refers to
the circumstance in which the
variability of a variable is unequal
across the range of values of a
second variable that predicts it.
15. Big Data & Research Methods
HETEROSCEDASTICITY
Predicting equipment pricing based on machine hours

[Chart: market price vs. hours on machine with the fitted line Ŷ = a + bx; the scatter of prices around the line differs across the ranges T1, T2, and T3, showing unequal variance.]
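A quick way to check for this pattern is to compare residual variance across the range of the predictor, in the spirit of a Goldfeld–Quandt test. A minimal sketch with simulated data (all numbers invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated machine hours, with price noise that grows with hours
hours = rng.uniform(100, 5000, size=500)
price = 50_000 - 8 * hours + rng.normal(0, 1, size=500) * (0.5 * hours)

# Ordinary least squares fit: price ≈ a + b * hours
b, a = np.polyfit(hours, price, 1)
resid = price - (a + b * hours)

# Compare residual variance in the lower vs. upper half of the predictor
mid = np.median(hours)
var_low = resid[hours < mid].var()
var_high = resid[hours >= mid].var()
print(f"residual variance: low-hours {var_low:.0f}, high-hours {var_high:.0f}")
```

A large ratio between the two variances is the signature of heteroscedasticity; with homoscedastic errors the two halves would have similar variance.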
16. Big Data & Research Methods
[Figure: six residual plots illustrating the combinations: Unbiased & Homoscedastic, Biased & Homoscedastic (two variants), Unbiased & Heteroscedastic, and Biased & Heteroscedastic (two variants).]
17. Big Data & Research Methods
OVERFITTING
Overfitting occurs when a
statistical model captures
more than just the underlying
relationships.
The model is fitted as closely as
possible to the data, including its
random errors, outliers, and noise.
18. Big Data & Research Methods
OVERFITTING
An overfitted model nearly
perfectly matches the training
set, but does not perform well
with new data. While an overfitted
model looks great, it will have poor
predictive performance.
19. Big Data & Research Methods
OVERFITTING
The mark of a good model isn’t
how well it performs on the data
used to build the model, but on
fresh data outside of the training
data set.
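That gap between training data and fresh data is easy to demonstrate. A minimal sketch fitting a straight line and a degree-9 polynomial to ten noisy points from a linear process (the data, seed, and degrees are our own illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(7)

def mse(y, yhat):
    return float(np.mean((y - yhat) ** 2))

# Ground truth is linear: y = 2x + noise
x_train = np.linspace(0, 1, 10)
y_train = 2 * x_train + rng.normal(0, 0.2, 10)
x_test = np.linspace(0.05, 0.95, 10)          # fresh points from the same process
y_test = 2 * x_test + rng.normal(0, 0.2, 10)

for degree in (1, 9):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = mse(y_train, np.polyval(coeffs, x_train))
    test_err = mse(y_test, np.polyval(coeffs, x_test))
    print(f"degree {degree}: train MSE {train_err:.4f}, test MSE {test_err:.4f}")
```

The degree-9 polynomial threads through every training point (near-zero training error) because it has as many parameters as data points, yet it has memorized the noise and does worse than the simple line once evaluated on fresh data.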
20. Big Data & Research Methods
OVERFITTING
Overfitting Example: Training Classification Table

General Election (Observed)   Predicted: Did not vote   Predicted: Voted   Percentage Correct
Did not vote                  132423                    3                  99.99773%
Voted                         0                         411099             100%
Overall Correct Percentage                                                 100%
21. Big Data & Research Methods
OVERFITTING
Overfitting Example: Prediction Classification Table

General Election (Observed)   Predicted: Did not vote   Predicted: Voted   Percentage Correct
Did not vote                  35726                     4068               90%
Voted                         45924                     77199              63%
Overall Correct Percentage                                                 69%
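The percentages in these tables follow directly from the counts. A small helper (ours, not from the deck) reproduces them:

```python
def class_table(tn, fp, fn, tp):
    """Per-class and overall percentage correct from a 2x2
    classification table (rows: observed, columns: predicted).
    tn/fp are observed 'did not vote', fn/tp are observed 'voted'."""
    did_not_vote_pct = 100 * tn / (tn + fp)
    voted_pct = 100 * tp / (fn + tp)
    overall_pct = 100 * (tn + tp) / (tn + fp + fn + tp)
    return did_not_vote_pct, voted_pct, overall_pct

# Training table from the slides: near-perfect in-sample accuracy
print(class_table(132_423, 3, 0, 411_099))

# Prediction (holdout) table: accuracy collapses on fresh data
print(class_table(35_726, 4_068, 45_924, 77_199))
```

Running both tables through the same arithmetic makes the overfitting visible: in-sample accuracy rounds to 100%, while holdout accuracy drops to 69%.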
24. Big Data & Research Methods
OVERFITTING
Simple Model Example: Training Classification Table

General Election (Observed)   Predicted: Did not vote   Predicted: Voted   Percentage Correct
Did not vote                  95397                     37029              72%
Voted                         43439                     367660             89%
Overall Correct Percentage                                                 85%
25. Big Data & Research Methods
OVERFITTING
Simple Model Example: Prediction Classification Table

General Election (Observed)   Predicted: Did not vote   Predicted: Voted   Percentage Correct
Did not vote                  72167                     9483               88%
Voted                         15131                     66136              81%
Overall Correct Percentage                                                 85%
26. Big Data & Research Methods
OVERFITTING
Big Data Errors: Spurious Correlations

[Chart, as on slide 7: spurious correlations vs. number of variables, rising sharply as variables grow from 500 to 2,000.]
28. Big Data & Research Methods
OVERFITTING
Overstuffing Example: Training Classification Table

General Election (Observed)   Predicted: Did not vote   Predicted: Voted   Percentage Correct
Did not vote                  93029                     39397              70%
Voted                         36228                     374871             91%
Overall Correct Percentage                                                 86%
29. Big Data & Research Methods
LACK OF BACKGROUND
The farther we are from the work,
the more likely we are to be tricked
by the data.
We owe it to the end user to
get out of the library, and try to
understand what we are modeling.
30. Big Data & Research Methods
SOLUTIONS INSTEAD OF THEORIES
There is an element of data
science that should be frustrating,
confusing, & despair inducing.
It should make us stand back in
awe of the complexity of the world,
not the simplicity to which we can
reduce it.
31. Big Data & Research Methods
SOLUTIONS INSTEAD OF THEORIES
“The great thing about economics
is that we admit that we know
nothing about anything”
- Thomas Piketty, author of “Capital in the Twenty-First Century”
32. Big Data & Research Methods
SOLUTIONS INSTEAD OF THEORIES
As we learn more, we realize
there’s more to learn.
The hallmark of genius is the sharp
awareness of what is and what is
not possible. We become aware of
complexity, ambiguity and nuance.
33. Big Data & Research Methods
CORRELATION & CAUSATION
The anthem of the Big Data
age is “correlation does not
imply causation.”
34. Big Data & Research Methods
CORRELATION & CAUSATION
The problem is that this statement
is tautological. It is always correct,
and can never be wrong.
35. Big Data & Research Methods
CORRELATION & CAUSATION
Don’t let people use it as a kill
switch for discussion.
• True causation is pretty rare. There are few
things where, if I do this, this will happen.
• Research should create discussions not shut
them down. Models can’t explain everything.
There is always an “X” variable that captures
the unknown.
36. Big Data & Research Methods
SOLUTIONS INSTEAD OF THEORIES
37. Big Data & Research Methods
FAILING TO AUDIT
Primary reasons that we fail to
have our work peer-reviewed:
• Lack of funding to “repeat” work.
• We hide behind the complexity of our work.
38. Big Data & Research Methods
FAILING TO AUDIT
39. Big Data & Research Methods
FAILING TO AUDIT
Other tools:
• rMarkdown: for creating webpages and
documents in R
• IPython notebooks: for creating webpages and
documents interactively in Python
• Galaxy Project: for creating reproducible
workflows (well suited to people with less
scripting experience)
40. Big Data & Research Methods
TRAINING
We offer training on:
• Data Visualization
• Managerial Statistics
• Predictive Modeling

You will be introduced to:
• R
• SPSS
• Tableau
• MySQL
• Git
41. Big Data & Research Methods
TRAINING
Training sessions last three days.
We will work through projects,
practice different approaches,
and learn which approach is best
for different scenarios.
42. Big Data & Research Methods
QUESTIONS?
Grant Stanley, CEO
Contemporary Analysis
1209 Harney Street, Suite 200
Omaha, NE 68102
grant@canworksmart.com
(402) 679-8398