Speaker: Junho Kim (Lunit)
Date: January 2018
Among medical-AI topics, this talk covers the nodule detection problem.
Using data from the LUNA16 medical imaging challenge, I will show how classification is done in medical AI and how preprocessing is carried out.
Then, using this data, I will implement and apply "Curriculum Adaptive Sampling for Extreme Data Imbalance", presented at MICCAI 2017 (a top-tier medical imaging conference), and offer tips on how to handle the problems that can arise along the way (Python multi-processing data loading, input pipeline).
I chose this paper because, among works that not only classify but also accurately locate nodules, its performance is remarkably high.
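The talk mentions Python multi-processing data loading as one of its practical tips. A minimal sketch of that kind of input pipeline is below; all names are hypothetical, and a real LUNA16 loader would read CT volumes (e.g. with SimpleITK) rather than generating fake patches.

```python
# Minimal sketch of a multi-process data-loading pipeline of the kind the
# talk refers to. All names are hypothetical; real code would crop 3D
# patches around candidate nodule coordinates instead of faking values.
import multiprocessing as mp

def load_and_preprocess(patch_id):
    # Placeholder for reading one candidate patch and normalizing it.
    raw = [patch_id + i for i in range(4)]   # fake voxel values
    mean = sum(raw) / len(raw)
    return [v - mean for v in raw]           # zero-center the patch

def build_batch(patch_ids, workers=4):
    # Load patches in parallel so training is not starved by slow I/O.
    with mp.Pool(workers) as pool:
        return pool.map(load_and_preprocess, patch_ids)

if __name__ == "__main__":
    batch = build_batch(list(range(8)))
    print(len(batch), batch[0])
```

In practice the worker count is tuned to the number of CPU cores, and the loaded batches are handed to the training loop through a queue.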
PLOTCON NYC: Behind Every Great Plot There's a Great Deal of Wrangling — Plotly
If you are struggling to make a plot, tear yourself away from stackoverflow for a moment and ... take a hard look at your data. Is it really in the most favorable form for the task at hand? Time and time again I have found that my visualization struggles are really a symptom of unfinished data wrangling. R has long had excellent facilities for data aggregation or "split-apply-combine": split an object into pieces, compute on each piece, and glue the result back together again. Recent developments, especially in the purrr package, have made "split-apply-combine" even easier and more general. But this requires a certain comfort level with lists, especially with lists that are columns inside a data frame. This is unfamiliar to most of us. I give an overview of this set of problems and match them up with solutions based on grouped, nested, and split data frames.
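The talk itself uses R (dplyr/purrr); the "split-apply-combine" idea it describes can also be illustrated in Python with pandas, shown here only as an analogous sketch with made-up data.

```python
# "Split-apply-combine" illustrated with pandas: split rows into groups,
# apply a summary to each piece, and combine the results into one table.
# The talk uses R; this Python analog is only an illustration.
import pandas as pd

df = pd.DataFrame({
    "species": ["a", "a", "b", "b", "b"],
    "length":  [1.0, 2.0, 3.0, 4.0, 5.0],
})

# split by "species", apply mean and count per group, combine into a frame
summary = df.groupby("species")["length"].agg(["mean", "count"])
print(summary)
```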
Instead of trees or other weak classifiers, we take Naive Bayes, which is not necessarily a weak learner, and evaluate what happens when cross-validating a not-so-weak learner.
Mini-lab 1: Stochastic Gradient Descent classifier, Optimizing Logistic Regre... — Yao Yao
https://github.com/yaowser/data_mining_group_project
https://www.kaggle.com/c/zillow-prize-1/data
From the Zillow real estate data set of properties in the southern California area, conduct the following data cleaning, data analysis, predictive analysis, and machine learning algorithms:
Mini-lab 1: Stochastic Gradient Descent classifier, Optimizing Logistic Regression Model Performance, Optimizing Support Vector Machine Classifier, Accuracy of results and efficiency, Logistic Regression Feature Importance, interpretation of support vectors, Density Graph
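A minimal sketch of the kinds of classifiers the mini-lab compares (an SGD-trained linear model, logistic regression, and a support vector machine); synthetic data stands in for the Zillow dataset, which is not reproduced here.

```python
# Sketch of the mini-lab's model comparison on synthetic stand-in data
# (the actual Zillow features are not reproduced here).
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier, LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=400, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

sgd = SGDClassifier(random_state=0).fit(X_tr, y_tr)      # linear model via SGD
logreg = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
svm = SVC(kernel="rbf", random_state=0).fit(X_tr, y_tr)  # SVM classifier

for name, model in [("SGD", sgd), ("LogReg", logreg), ("SVM", svm)]:
    print(name, "accuracy:", model.score(X_te, y_te))
```

Accuracy on a held-out split, as above, is the "accuracy of results and efficiency" comparison the lab title refers to; feature importance for the logistic model can then be read off its coefficients.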
Mehar Singh, CEO of ProCogia, and Jason Grahn, Senior Business Analyst at Apptio, co-present on the journey from Excel to R at the second Bellevue chapter useR Group Meetup.
If we’re producing analysis that drives business decision making, that’s production-grade code! This talk will address this question, which in turn shows why R is the way to go – assumptions are built into the code and enables the analyst to automate & reproduce their efforts.
This presentation includes:
- Data importing (opening a CSV or connecting to a SQL in both tools)
- Filtering, grouping, summarizing (pivot tables in Excel vs. tidy code in R)
- Visualizations (charts in Excel vs. ggplot in R)
Rattle is Free (as in Libre) Open Source Software and the source code is available from the Bitbucket repository. We give you the freedom to review the code, use it for whatever purpose you like, and to extend it however you like, without restriction, except that if you then distribute your changes you also need to distribute your source code too.
Rattle - the R Analytical Tool To Learn Easily - is a popular GUI for data mining using R. It presents statistical and visual summaries of data, transforms data so that it can be readily modelled, builds both unsupervised and supervised models from the data, presents the performance of models graphically, and scores new datasets. One of the most important features (in my view) is that all of your interactions through the graphical user interface are captured as an R script that can be readily executed in R independently of the Rattle interface.
Rattle clocks between 10,000 and 20,000 installations per month from the RStudio CRAN node (one of over 100 nodes). Rattle has been downloaded several million times overall.
INFORMATIVE ESSAY: The purpose of the Informative Essay assignme.docx — carliotwaycave
INFORMATIVE ESSAY
The purpose of the Informative Essay assignment is to choose a job or task that you know how to do and then write an Informative Essay of at least 2 full pages and at most 3 full pages teaching the reader how to do that job or task. You will follow the organization techniques explained in Unit 6.
Here are the details:
1. Read the Lecture Notes in Unit 6. You may also find the information in Chapter 10.5 in our text on Process Analysis helpful. The lecture notes will really be the most important to read in writing this assignment. However, here is a link to that chapter that you may look at in addition to the lecture notes:
https://open.lib.umn.edu/writingforsuccess/chapter/10-5-process-analysis/ (Links to an external site.)
2. Choose your topic, that is, the job or task you want to teach. As the notes explain, this should be a job or task that you already know how to do, and it should be something you can do well. At this point, think about your audience (reader). Will your reader need any knowledge or experience to do this job or task, or will you write these instructions for a general reader where no experience is required to perform the job?
3. Plan your outline to organize this essay. Unit 6 notes offer advice on this organization process. Be sure to include an introductory paragraph that has the four main points presented in the lecture notes.
4. Write the essay. It will need to be at least 2 FULL pages long, maximum of 3 full pages long. You will use the MLA formatting that you used in previous essays from Units 3, 4, and 5.
5. Be sure to include a title for your essay.
6. After writing the essay, be sure to take time to read it several times for revision and editing. It would be helpful to have at least one other person proofread it as well before submitting the assignment.
Quiz2
# comments start with #
# to quit q()
# two steps to install any library
#install.packages("rattle")
#library(rattle)
setwd("D:/AJITH/CUMBERLANDS/Ph.D/SEMESTER 3/Data Science & Big Data Analy (ITS-836-51)/RStudio/Week2")
getwd()
x <- 3 # x is a vector of length 1
print(x)
v1 <- c(2,4,6,8,10)
print(v1)
print(v1[3])
v <- c(1:10) # creates a vector of 10 elements, numbered 1 through 10
print(v)
print(v[6])
# Import test data
test<-read.csv("CVEs.csv")
test1<-read.csv("CVEs.csv", sep=",")
test2<-read.table("CVEs.csv", sep=",")
write.csv(test2, file="out.csv")
# Write CSV in R
write.table(test1, file = "out1.csv",row.names=TRUE, na="",col.names=TRUE, sep=",")
head(test)
tail(test)
summary(test)
head <- head(test)
tail <- tail(test)
cor(test$X, test$index)
sd(test$index)
var(test$index)
plot(test$index)
hist(test$index)
str(test$index)
quit()
Quiz3
setwd("C:/Users/ialsmadi/Desktop/University_of_Cumberlands/Lectures/Week2/RScripts")
getwd()
# Import test data
data<-read.csv("yearly_sales.csv")
#A 5-number summary is a set of 5 descriptive statistics for summarizing a continuous univariate data set.
#It consists of the minimum, the first quartile, the median, the third quartile, and the maximum.
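The five-number summary described above can be computed in Python as well; a small numpy sketch follows, with a stand-in sample since yearly_sales.csv is not reproduced here.

```python
# Five-number summary (min, Q1, median, Q3, max) with numpy;
# the values below are a stand-in for the yearly_sales.csv column.
import numpy as np

sales = np.array([100, 150, 200, 250, 300, 350, 400, 450, 500])
five_num = np.percentile(sales, [0, 25, 50, 75, 100])
print(five_num)   # → [100. 200. 300. 400. 500.]
```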
Data Manipulation with Numpy and Pandas in Python: Starting with Numpy — OllieShoresna
Data Manipulation with Numpy and Pandas in Python
Starting with Numpy
#load the library and check its version, just to make sure we aren't using an older version
import numpy as np
np.__version__
'1.12.1'
#create a list comprising numbers from 0 to 9
L = list(range(10))
#converting integers to string - this style of handling lists is known as list comprehension.
#List comprehension offers a versatile way to handle list manipulations tasks easily. We'll learn about them in future tutorials. Here's an example.
[str(c) for c in L]
['0', '1', '2', '3', '4', '5', '6', '7', '8', '9']
[type(item) for item in L]
[int, int, int, int, int, int, int, int, int, int]
Creating Arrays
Numpy arrays are homogeneous in nature, i.e., they comprise one data type (integer, float, double, etc.) unlike lists.
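This homogeneity can be seen directly (an added illustration, not part of the original tutorial): mixing an int and a float in one array upcasts everything to a single dtype.

```python
# numpy arrays hold a single dtype, unlike Python lists:
# mixing an int and a float upcasts the whole array to float64.
import numpy as np

mixed_list = [1, 2.5]            # a list keeps distinct element types
arr = np.array(mixed_list)       # the array coerces to one dtype
print(arr.dtype)                 # float64
print([type(v) for v in mixed_list])
```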
#creating arrays
np.zeros(10, dtype='int')
array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0])
#creating a 3 row x 5 column matrix
np.ones((3,5), dtype=float)
array([[ 1., 1., 1., 1., 1.],
[ 1., 1., 1., 1., 1.],
[ 1., 1., 1., 1., 1.]])
#creating a matrix with a predefined value
np.full((3,5),1.23)
array([[ 1.23, 1.23, 1.23, 1.23, 1.23],
[ 1.23, 1.23, 1.23, 1.23, 1.23],
[ 1.23, 1.23, 1.23, 1.23, 1.23]])
#create an array with a set sequence
np.arange(0, 20, 2)
array([0, 2, 4, 6, 8,10,12,14,16,18])
#create an array of evenly spaced values over the given range
np.linspace(0, 1, 5)
array([ 0., 0.25, 0.5 , 0.75, 1.])
#create a 3x3 array of normally distributed random values with mean 0 and standard deviation 1
np.random.normal(0, 1, (3,3))
array([[ 0.72432142, -0.90024075, 0.27363808],
[ 0.88426129, 1.45096856, -1.03547109],
[-0.42930994, -1.02284441, -1.59753603]])
#create an identity matrix
np.eye(3)
array([[ 1., 0., 0.],
[ 0., 1., 0.],
[ 0., 0., 1.]])
#set a random seed
np.random.seed(0)
x1 = np.random.randint(10, size=6) #one dimension
x2 = np.random.randint(10, size=(3,4)) #two dimension
x3 = np.random.randint(10, size=(3,4,5)) #three dimension
print("x3 ndim:", x3.ndim)
print("x3 shape:", x3.shape)
print("x3 size: ", x3.size)
('x3 ndim:', 3)
('x3 shape:', (3, 4, 5))
('x3 size: ', 60)
Array Indexing
The important thing to remember is that indexing in Python starts at zero.
x1 = np.array([4, 3, 4, 4, 8, 4])
x1
array([4, 3, 4, 4, 8, 4])
#access the value at index zero
x1[0]
4
#access the fifth value
x1[4]
8
#get the last value
x1[-1]
4
#get the second last value
x1[-2]
8
#in a multidimensional array, we need to specify row and column index
x2
array([[3, 7, 5, 5],
[0, 1, 5, 9],
[3, 0, 5, 0]])
#3rd row and 4th column value
x2[2,3]
0
#3rd row, last column value
x2[2,-1]
0
#replace value at 0,0 index
x2[0,0] = 12
x2
array([[12, 7, 5, 5],
[ 0, 1, 5, 9],
[ 3, 0, 5, 0]])
Array Slicing
Now, we'll learn to access multiple or a range of elements from an array.
x = np.arange(10)
x
array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
#from start to 4th position
x[: ...
SAP Sapphire 2024 - ASUG301 building better apps with SAP Fiori.pdf — Peter Spielvogel
Building better applications for business users with SAP Fiori.
• What is SAP Fiori and why it matters to you
• How a better user experience drives measurable business benefits
• How to get started with SAP Fiori today
• How SAP Fiori elements accelerates application development
• How SAP Build Code includes SAP Fiori tools and other generative artificial intelligence capabilities
• How SAP Fiori paves the way for using AI in SAP apps
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
Transcript: Selling digital books in 2024: Insights from industry leaders - T... — BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
zkStudyClub - Reef: Fast Succinct Non-Interactive Zero-Knowledge Regex Proofs — Alex Pruden
This paper presents Reef, a system for generating publicly verifiable succinct non-interactive zero-knowledge proofs that a committed document matches or does not match a regular expression. We describe applications such as proving the strength of passwords, the provenance of email despite redactions, the validity of oblivious DNS queries, and the existence of mutations in DNA. Reef supports the Perl Compatible Regular Expression syntax, including wildcards, alternation, ranges, capture groups, Kleene star, negations, and lookarounds. Reef introduces a new type of automata, Skipping Alternating Finite Automata (SAFA), that skips irrelevant parts of a document when producing proofs without undermining soundness, and instantiates SAFA with a lookup argument. Our experimental evaluation confirms that Reef can generate proofs for documents with 32M characters; the proofs are small and cheap to verify (under a second).
Paper: https://eprint.iacr.org/2023/1886
The Metaverse and AI: how can decision-makers harness the Metaverse for their... — Jen Stirrup
The Metaverse is popularized in science fiction, and now it is becoming closer to being a part of our daily lives through the use of social media and shopping companies. How can businesses survive in a world where Artificial Intelligence is becoming the present as well as the future of technology, and how does the Metaverse fit into business strategy when futurist ideas are developing into reality at accelerated rates? How do we do this when our data isn't up to scratch? How can we move towards success with our data so we are set up for the Metaverse when it arrives?
How can you help your company evolve, adapt, and succeed using Artificial Intelligence and the Metaverse to stay ahead of the competition? What are the potential issues, complications, and benefits that these technologies could bring to us and our organizations? In this session, Jen Stirrup will explain how to start thinking about these technologies as an organisation.
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdf — Paige Cruz
Monitoring and observability aren’t traditionally found in software curriculums and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is a part of your current company’s observability stack.
While the dev and ops silo continues to crumble….many organizations still relegate monitoring & observability as the purview of ops, infra and SRE teams. This is a mistake - achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party, and will share these foundational concepts to build on:
The new frontiers of AI in RPA with UiPath Autopilot™ — UiPathCommunity
In this free online event, organized by the Italian UiPath Community, you can explore the new features of Autopilot, the tool that integrates Artificial Intelligence into the development and use of Automations.
📕 Together we will look at some examples of how Autopilot is used across several tools in the UiPath Suite:
Autopilot for Studio Web
Autopilot for Studio
Autopilot for Apps
Clipboard AI
GenAI applied to Document Understanding
👨🏫👨💻 Speakers:
Stefano Negro, UiPath MVPx3, RPA Tech Lead @ BSP Consultant
Flavio Martinelli, UiPath MVP 2023, Technical Account Manager @UiPath
Andrei Tasca, RPA Solutions Team Lead @NTT Data
Removing Uninteresting Bytes in Software Fuzzing — Aftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing xml documents, and Binutil's readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format). Our preliminary results show that AFL+DIAR does not only discover new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor... — SOFTTECHHUB
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024 — Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
2. About me
Twitter: @TJO_datasci
Data Scientist (Quant Analyst) in the Recruit group, a group of companies in advertising media and human resources, known as a major player in big data
Current mission: ad-hoc analysis of various marketing data
Actually, I'm still new to the field of data science
2014/4/17 2
3. About me
Original background: neuroscience of the human brain (6 years' experience as a postdoc researcher)
2014/4/17 3
(Ozaki, PLoS One, 2011)
4. About me
English version of my blog
http://tjo-en.hatenablog.com/
2014/4/17 4
7. Advantage of this technique
More intuitive
Easy to grasp even for high-dimensional data
Even non-experts can easily understand it
Useful for presentations
2014/4/17 7
8. Supervised learning: lower dimension, more intuitive
In case of 2D data… (e.g. nonlinear SVM)
2014/4/17 8
x         y         label
0.924335  -1.0665   Yes
2.109901  2.615284  No
0.988192  -0.90812  Yes
1.299749  0.944518  No
-0.60885  0.457816  Yes
-2.25484  1.615489  Yes
12. Why association rules and its visualization?
Roughly speaking, association rules can be interpreted as a kind of (likeness of) generative modeling: a large set of conditional probabilities
If they can be regarded as a set of conditional probabilities, they can also be described as (a likeness of) a Bayesian network: "X → Y"
If it's like a Bayesian network, it can be visualized as a graph representation, e.g. with {igraph}
2014/4/17 12
supp(X → Y) = σ(X ∪ Y) / M
conf(X → Y) = supp(X → Y) / supp(X)
lift(X → Y) = conf(X → Y) / supp(Y)
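These definitions can be checked numerically; a minimal Python sketch over a toy transaction list follows (the item names are made up, echoing the sample data later in the deck; M is the number of transactions and σ(S) counts transactions containing itemset S).

```python
# Numeric check of the support / confidence / lift definitions above,
# on a toy transaction list (item names are hypothetical).
transactions = [
    {"game1", "app1"},
    {"app1", "cv"},
    {"game1", "app1", "cv"},
    {"social1"},
]
M = len(transactions)

def sigma(itemset):
    # number of transactions that contain the whole itemset
    return sum(itemset <= t for t in transactions)

def supp(itemset):
    return sigma(itemset) / M

X, Y = {"app1"}, {"cv"}
supp_rule = supp(X | Y)            # supp(X -> Y) = sigma(X u Y) / M
conf_rule = supp_rule / supp(X)    # conf(X -> Y) = supp(X -> Y) / supp(X)
lift_rule = conf_rule / supp(Y)    # lift(X -> Y) = conf(X -> Y) / supp(Y)
print(supp_rule, conf_rule, lift_rule)
```

A lift above 1, as here, means X and Y co-occur more often than independence would predict, which is why lift is the usual interestingness measure for a rule.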
15. Sample data “d1”
2014/4/17 15
game1  game2  game3  social1  social2  app1  app2  cv
0      0      0      1        0        0     0     No
1      0      0      1        1        0     0     No
0      1      1      1        1        1     0     Yes
0      0      1      1        0        1     1     Yes
1      0      1      0        1        1     1     Yes
0      0      0      1        1        1     0     No
…      …      …      …        …        …     …     …
Imagine you're working on a platform for web entertainment. It has 3 SP games, 2 SP social networking services, and 2 apps. The data records each user's activity on each content item in the month after registration, and the "cv" label describes whether they are still active after that month has passed.
16. In the case with svm {e1071}…
2014/4/17 16
> d1.svm<-svm(cv~.,d1) # install and require {e1071}
# svm {e1071}
> table(d1$cv,predict(d1.svm,d1[,-8]))
No Yes
No 1402 98
Yes 80 1420
# Good accuracy (only for training data)
17. In the case with randomForest {randomForest}…
2014/4/17 17
> tuneRF(d1[,-8],d1[,8],doBest=T) # install and require {randomForest}
# (omitted)
> d1.rf<-randomForest(cv~.,d1,mtry=2)
# randomForest {randomForest}
> table(d1$cv,predict(d1.rf,d1[,-8]))
No Yes
No 1413 87
Yes 92 1408
# Good accuracy
> importance(d1.rf)
MeanDecreaseGini
game1 20.640253
game2 12.115196
game3 2.355584
social1 189.053648
social2 76.476470
app1 796.937087
app2 2.804019
# Variable importance (without any directionality)
20. Run apriori {arules} to get association rules
2014/4/17 20
> d2.ap.small<-apriori(as.matrix(d2)) # install and require {arules}
parameter specification:
confidence minval smax arem aval originalSupport support minlen
maxlen target ext
0.8 0.1 1 none FALSE TRUE 0.1 1 10 rules FALSE
algorithmic control:
filter tree heap memopt load sort verbose
0.1 TRUE TRUE FALSE TRUE 2 TRUE
apriori - find association rules with the apriori algorithm
version 4.21 (2004.05.09) (c) 1996-2004 Christian Borgelt
set item appearances ...[0 item(s)] done [0.00s].
set transactions ...[9 item(s), 3000 transaction(s)] done [0.00s].
sorting and recoding items ... [9 item(s)] done [0.00s].
creating transaction tree ... done [0.00s].
checking subsets of size 1 2 3 4 5 done [0.00s].
writing ... [50 rule(s)] done [0.00s]. # only 50 rules…
creating S4 object ... done [0.00s].
21. Run apriori {arules} to get association rules
2014/4/17 21
> d2.ap.large<-apriori(as.matrix(d2),parameter=list(support=0.001))
parameter specification:
confidence minval smax arem aval originalSupport support minlen
maxlen target ext
0.8 0.1 1 none FALSE TRUE 0.001 1 10 rules FALSE
algorithmic control:
filter tree heap memopt load sort verbose
0.1 TRUE TRUE FALSE TRUE 2 TRUE
apriori - find association rules with the apriori algorithm
version 4.21 (2004.05.09) (c) 1996-2004 Christian Borgelt
set item appearances ...[0 item(s)] done [0.00s].
set transactions ...[9 item(s), 3000 transaction(s)] done [0.00s].
sorting and recoding items ... [9 item(s)] done [0.00s].
creating transaction tree ... done [0.00s].
checking subsets of size 1 2 3 4 5 6 7 8 done [0.00s].
writing ... [182 rule(s)] done [0.00s]. # as much as 182 rules
creating S4 object ... done [0.00s].
22. OK, just visualize it
2014/4/17 22
> require("arulesViz")
# (omitted)
> plot(d2.ap.small, method="graph", control=list(type="items",
layout=layout.fruchterman.reingold))
> plot(d2.ap.large, method="graph", control=list(type="items",
layout=layout.fruchterman.reingold))
# The Fruchterman-Reingold force-directed graph drawing algorithm locates
nodes at distances proportional to the "shortest path length" between them
# Nodes (items) are therefore located based on their "closeness" to each other
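The same Fruchterman-Reingold layout is available in Python; a minimal sketch with networkx follows (spring_layout implements that algorithm), laying out a toy rule graph rather than the actual {arulesViz} output.

```python
# Toy rule graph laid out with the Fruchterman-Reingold algorithm
# (networkx's spring_layout); edge names echo the sample data and are
# not the real apriori output.
import networkx as nx

g = nx.DiGraph()
g.add_edges_from([("app1", "Yes"), ("social1", "No"), ("game2", "No")])

pos = nx.spring_layout(g, seed=0)   # force-directed node positions
for node, (x, y) in pos.items():
    print(f"{node}: ({x:.2f}, {y:.2f})")
```

Items that appear in many rules together end up close in the layout, which is exactly the "closeness" reading the slide describes.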
23. Small set of rules visualized with {arulesViz}
2014/4/17 23
25. Large set of rules visualized with {arulesViz}
2014/4/17 25
26. Compare with a result of randomForest
2014/4/17 26
> tuneRF(d1[,-8],d1[,8],doBest=T) # install and require {randomForest}
# (omitted)
> d1.rf<-randomForest(cv~.,d1,mtry=2)
# randomForest {randomForest}
> table(d1$cv,predict(d1.rf,d1[,-8]))
No Yes
No 1413 87
Yes 92 1408
# Good accuracy
> importance(d1.rf)
MeanDecreaseGini
game1 20.640253
game2 12.115196
game3 2.355584
social1 189.053648
social2 76.476470
app1 796.937087
app2 2.804019
# Variable importance (without any directionality)
27. See how far nodes are from yes / no
2014/4/17 27
28. Large set of rules visualized with {arulesViz}
2014/4/17 28
29. Advantage of this technique
More intuitive
Easy to grasp even for high-dimensional data
Even non-experts can easily understand it
Useful for presentations
2014/4/17 29