Transcript

  • 1. ETL & Basic OLAP Operations
    CSE 590 Data Mining, Prof. Anita Wasilewska, SUNY Stony Brook
    Presented by:
    - Preeti Kudva (106887833)
    - Kinjal Khandhar (106878039)
  • 2. References:
    - Jiawei Han and Micheline Kamber, Data Mining: Concepts and Techniques.
    - Presentation slides of Prof. Anita Wasilewska.
    - http://en.wikipedia.org/wiki/Extract,_transform,_load
    - Ralph Kimball and Joe Caserta, The Data Warehouse ETL Toolkit: Practical Techniques for Extracting, Cleaning, Conforming and Delivering Data.
    - Panos Vassiliadis, Alkis Simitsis, and Spiros Skiadopoulos, Conceptual Modeling for ETL Processes.
    - http://en.wikipedia.org/wiki/Category:ETL_tools
    - http://www.1keydata.com/datawarehousing/tooletl.html
    - http://www.bi-bestpractices.com/view-articles/4738
    - http://www.computerworld.com/databasetopics/data/story/0,10801,80222,00.html
  • Overview
    What is ETL?
    ETL In the Architecture.
    General ETL issues
    - Extract
    - Transformations/cleansing
    - Load
    ETL Example
  • 11. What is ETL?
    Extract
    - Extract relevant data.
    Transform
    - Transform data to DW format.
    - Build DW keys, etc.
    - Cleansing of data.
    Load
    - Load data into DW.
    - Build aggregates, etc.
    https://eprints.kfupm.edu.sa/74341/1/74341.pdf
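The following is a minimal, hypothetical Python sketch of the three steps above (extract from a flat file, transform to the DW format, load into the warehouse); the file name, column names, and SQLite target are assumptions made for illustration, not something taken from the slides.

```python
# Minimal ETL sketch (illustrative only; file, table, and column names are
# hypothetical): extract rows from a flat-file export, transform them into
# the DW format, and load them into a warehouse table.
import csv
import sqlite3

def extract(path):
    # Extract: read relevant rows from a flat-file source.
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            yield row

def transform(row):
    # Transform: cleanse and convert to the DW format.
    return (
        row["customer_id"].strip(),
        row["name"].strip().title(),       # simple cleansing
        float(row["amount"] or 0.0),       # crude NULL/empty handling
    )

def load(rows, dw_path="warehouse.db"):
    # Load: insert the transformed rows into the warehouse table.
    con = sqlite3.connect(dw_path)
    con.execute("CREATE TABLE IF NOT EXISTS sales (customer_id TEXT, name TEXT, amount REAL)")
    con.executemany("INSERT INTO sales VALUES (?, ?, ?)", rows)
    con.commit()
    con.close()

if __name__ == "__main__":
    load(transform(r) for r in extract("sales_export.csv"))
```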
  • 12. Data Warehouse Architecture (diagram)
    Data sources (operational DBs, other sources) → Extract, Transform, Load, Refresh → data storage (data warehouse, data marts, metadata) → OLAP/serve layer → front-end tools (analysis, query, reports, data mining).
    http://infolab.stanford.edu/warehousing/
  • 13. Extract
  • 14. Extract
    • Goal: fast extraction of relevant data.
    • Extract data from different data source formats, such as flat files, relational database systems, etc.
    • Convert data into a specific format for transformation processing.
    • Parse the extracted data and check that it meets the expected pattern/structure.
    http://en.wikipedia.org/wiki/Extract,_transform,_load
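As a sketch of the "check that data meets the expected pattern/structure" step, the snippet below validates extracted records; the expected field names and the CCYYMMDD date pattern are assumptions chosen to echo the later example, not a prescribed interface.

```python
# Sketch of a structural check on extracted records (hypothetical fields).
import re

EXPECTED_FIELDS = {"customer_id", "name", "amount", "sale_date"}
DATE_PATTERN = re.compile(r"^\d{8}$")   # e.g., CCYYMMDD as in the example later

def validate(record):
    # Reject records that do not match the expected pattern/structure.
    missing = EXPECTED_FIELDS - record.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    if not DATE_PATTERN.match(record["sale_date"]):
        raise ValueError(f"bad date format: {record['sale_date']!r}")
    return record
```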
  • 19. Types of Data Sources
    • Non-cooperative sources
    - Snapshot sources – provides only full copy of source, e.g., files
    - Specific sources – each is different, e.g., legacy systems
    - Logged sources – writes change log, e.g., DB log
    - Queryable sources – provides query interface, e.g., RDBMS
    • Cooperative sources
    - Replicated sources – publish/subscribe mechanism
    - Call back sources – calls external code (ETL) when changes occur
    - Internal action sources – only internal actions when changes occur, e.g., DB triggers.
    • Extract strategy depends on the source types.
    https://intranet.cs.aau.dk/fileadmin/user_upload/Education/Courses/2009/DWML/slides/DW4_ETL.pdf
  • 20. Extract from Operational System
    • Design Time
    – Create/Import Data Sources definition
    – Define stage or work areas
    – Validate Connectivity
    – Preview/Analyze Sources
    – Define Extraction Scheduling
    Determine Extract Windows for source system
    Batch Extract (Overnight, weekly, monthly)
    Continuous extracts (Trigger on Source Table)
    • Run Time
    – Connect to the predefined Data Sources as scheduled
    – Get raw data and save it locally in a workspace DB
  • 21. Transform
  • 22. Transformation – a series of rules/functions.
    Common transformations are:
    • Convert data into a consistent, standardized form.
    • Cleanse (automated).
    • Synonym substitutions.
    • Spelling corrections.
    • Encode free-form values (e.g., map "Male" to 1 and "Mr" to "M").
    • Merge/purge (join data from multiple sources).
    • Aggregate (e.g., roll-up).
    • Calculate (e.g., sale_amt = qty * price).
    • Data type conversion.
    • Data content audit.
    • Null value handling (e.g., null = do not load).
    • Customized transformations (user-defined).
    http://www.bi-bestpractices.com/view-articles/4738
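A small Python sketch of a few of the transformations listed above (value encoding, a calculated field, and null handling); the column names and code mappings are hypothetical.

```python
# Sketch of common transformations (hypothetical columns and mappings).
GENDER_CODES = {"Male": 1, "Female": 2}        # encode free-form values
TITLE_MAP = {"Mr": "M", "Mrs": "F", "Ms": "F"}

def transform_record(rec):
    # Null handling: do not load records with a missing quantity.
    if rec.get("qty") in (None, ""):
        return None
    out = dict(rec)
    # Encode free-form values (e.g., map "Male" to 1 and "Mr" to "M").
    out["gender_code"] = GENDER_CODES.get(rec.get("gender"), 0)
    out["title_code"] = TITLE_MAP.get(rec.get("title"), "U")
    # Calculated field: sale_amt = qty * price.
    out["sale_amt"] = float(rec["qty"]) * float(rec["price"])
    return out
```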
  • 34. Common Transformations (contd.)
    • Data type conversions
    - EBCDIC to ASCII/Unicode.
    - String manipulations.
    - Date/time format conversions.
    -- E.g., Unix time 1201928400 = what time?
    • Normalization/denormalization
    - To the desired DW format.
    - Depending on the source format.
    • Building keys
    - A mapping table matches production keys to surrogate DW keys.
    - Correct handling of history - especially for a total reload.
    https://intranet.cs.aau.dk/fileadmin/user_upload/Education/Courses/2009/DWML/slides/DW4_ETL.pdf
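Two of the conversions above, sketched in Python: interpreting the Unix timestamp from the slide and mapping production keys to surrogate DW keys (the in-memory key table is a simplification for illustration).

```python
# Unix time conversion and surrogate-key assignment (illustrative sketch).
from datetime import datetime, timezone

# 1201928400 seconds since the epoch is 2008-02-02 05:00:00 UTC.
print(datetime.fromtimestamp(1201928400, tz=timezone.utc).isoformat())

surrogate_keys = {}   # production key -> surrogate DW key

def surrogate_key(production_key):
    # Assign the next surrogate key the first time a production key is seen.
    if production_key not in surrogate_keys:
        surrogate_keys[production_key] = len(surrogate_keys) + 1
    return surrogate_keys[production_key]
```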
  • 35. Cleansing
    • Why cleansing? Garbage in, garbage out.
    • BI does not work on "raw" data
    - Pre-processing is necessary for BI analysis.
    • Handle inconsistent data formats
    - Spellings, codings, …
    • Remove unnecessary attributes
    - Production keys, comments,…
    • Replace codes with text [for easy understanding]
    - City name instead of ZIP code, e.g., Aalborg Centrum vs. DK-9000
    • Combine data from multiple sources with common key
    - E.g., customer data from customer address, customer name, …
    Aalborg University 2009 - DWML course
  • 37. Cleansing
    • Don’t use “special” values (e.g., 0, -1) in your data.
    -They are hard to understand in query/analysis operations.
    • Mark facts with Data Status dimension
    - Normal, abnormal, outside bounds, impossible,…
    - Facts can be taken in/out of analyses.
    • Uniform treatment of NULL
    - Use NULLs only for measure values (estimates instead?)
    - Use a special dimension key (i.e., surrogate key value) for NULL dimension values. E.g., for the time dimension, instead of NULL, use special key values to represent "Date not known" or "Soon to happen".
    - This avoids problems in joins, since NULL is not equal to NULL.
    Aalborg University 2009 - DWML course
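A sketch of the "special dimension key instead of NULL" idea above; the specific key values and lookup logic are assumptions for illustration.

```python
# Special surrogate keys for NULL/unknown dimension values (illustrative).
DATE_NOT_KNOWN = -1    # time-dimension key meaning "Date not known"
SOON_TO_HAPPEN = -2    # time-dimension key meaning "Soon to happen"

def time_key_for(date_value, known_dates):
    # known_dates maps loaded dates to their normal surrogate keys.
    if date_value is None:
        return DATE_NOT_KNOWN
    if date_value not in known_dates:
        return SOON_TO_HAPPEN      # e.g., a future date not yet in the dimension
    return known_dates[date_value]
```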
  • 38. Data Quality – most important
    • Data almost never has decent quality.
    • Data in the DW must be:
    1] Precise
    - DW data must match known numbers - or an explanation is needed.
    2] Complete
    - The DW has all relevant data and the users know it.
    3] Consistent
    - No contradictory data: aggregates fit with detail data.
    4] Unique
    - The same thing is called the same and has the same key (e.g., customers).
    5] Timely
    - Data is updated "frequently enough" and the users know when.
  • 40. Improving Data Quality
    • Appoint a "data quality administrator"
    - Responsible for data quality.
    - Includes manual inspections and corrections!
    • Source-controlled improvements.
    • Construct programs that check data quality (see the sketch below)
    - Are totals as expected?
    - Do results agree with an alternative source?
    - Number of NULL values?
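A minimal sketch of such a data-quality checking program (a control total and a NULL count); the table and column names, and the SQLite target, are assumptions.

```python
# Data-quality checks: compare a control total and count NULL keys (sketch).
import sqlite3

def quality_report(dw_path, expected_total):
    con = sqlite3.connect(dw_path)
    total = con.execute("SELECT SUM(sale_amt) FROM sales_fact").fetchone()[0]
    nulls = con.execute(
        "SELECT COUNT(*) FROM sales_fact WHERE customer_key IS NULL"
    ).fetchone()[0]
    con.close()
    return {
        "total_matches_source": abs((total or 0) - expected_total) < 0.01,
        "null_customer_keys": nulls,
    }
```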
  • 42. Transformation in Operational System
    • Design Time
    – Specify Criteria/Filter for aggregation.
    – Define operators (Mostly Set/SQL based).
    – Map columns using operators/Lookups.
    – Define other transformation rules.
    – Define mappings and/or add new fields.
    • Run Time
    – Transform (Cleanse, consolidate, Apply Business Rule, De-Normalize/Normalize) Extracted Data by applying the operators mapped in design time.
    – Aggregate (create & populate raw table).
    – Create & populate Staging table.
  • 43. Load
  • 44. Load
    • Goal: fast loading into the end target (DW).
    - Loading chunks is much faster than a total load.
    • SQL-based update is slow.
    - Large overhead (optimization, locking, etc.) for every SQL call.
    - DB load tools are much faster.
    • Indexes on tables slow the load a lot.
    - Drop indexes and rebuild them after the load.
    - Can be done per index partition.
    • Parallelization
    - Dimensions can be loaded concurrently
    - Fact tables can be loaded concurrently
    - Partitions can be loaded concurrently
    http://en.wikipedia.org/wiki/Extract,_transform,_load
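A sketch of the load-tuning ideas above (drop indexes, bulk-insert in one transaction, rebuild indexes), using SQLite purely for illustration; a real warehouse would use its native bulk loader, and the names here are hypothetical.

```python
# Bulk load with index drop/rebuild (illustrative; names are hypothetical).
import sqlite3

def bulk_load(dw_path, rows):
    con = sqlite3.connect(dw_path)
    con.execute("DROP INDEX IF EXISTS idx_sales_time")   # indexes slow the load
    con.executemany(
        "INSERT INTO sales_fact (time_key, item_key, units_sold) VALUES (?, ?, ?)",
        rows,
    )
    con.execute("CREATE INDEX idx_sales_time ON sales_fact (time_key)")  # rebuild
    con.commit()
    con.close()
```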
  • 45. Load
    • Relationships in the data
    - Referential integrity and data consistency must be ensured before loading (Why?)
    --Because they won’t be checked in the DW again
    - Can be done by loader
    • Aggregates
    - Can be built and loaded at the same time as the detail data.
    • Load tuning
    - Load without log.
    -Sort load file first.
    -Make only simple transformations in loader.
    -Use loader facilities for building aggregates.
    http://en.wikipedia.org/wiki/Extract,_transform,_load
  • 46. Load in Operational Systems
    • Design Time
    – Design Warehouse.
    – Map Staging Data to fact or dimension table attributes.
    • Run Time
    – Publish staging data to the data mart (update dimension tables along with the fact tables).
  • 47. ETL Tools
    • From big vendors :
    -Oracle Warehouse Builder
    -IBM DB2 Warehouse Manager
    -Microsoft Integration Services
    • These offer much functionality at a reasonable price:
    - Data modeling
    -ETL code generation
    -Scheduling DW jobs
    • The “best” tool does not exist
    - Choose based on your own needs
    http://en.wikipedia.org/wiki/Category:ETL_tools
  • 48. ETL Example (http://www.stylusstudio.com/etl/)
  • 49. Example of how ETL works
    Consider the HR department database:
  • 50. Extract Step for the Use Case:
    • Take the data from the dBASE III file and convert it into a more usable format: XML.
    • Extraction can be done using XML converters: just select the table, choose the dBASE III converter, and it will transfer the data into XML.
    • The result of this extraction will be an XML file similar to this:
  • 53.
    <?xml version="1.0" encoding="UTF-8"?>
    <table date="20060731" rows="5">
        <row row="1">
            <NAME>Guiles, Makenzie</NAME>
            <STREET>145 Meadowview Road</STREET>
            <CITY>South Hadley</CITY>
            <STATE>MA</STATE>
            <ZIP>01075</ZIP>
            <DEAR_WHO>Macy</DEAR_WHO>
            <TEL_HOME>(413)555-6225</TEL_HOME>
            <BIRTH_DATE>19770201</BIRTH_DATE>
            <HIRE_DATE>20060703</HIRE_DATE>
            <INSIDE>yes</INSIDE>
        </row>
        ...
    </table>
  • Extract – Part 2
    Find the target schema, which can be done using the DB-to-XML Data Source module. This example uses the Northwind database that comes with standard SQL Server (saved as etl-target.rdbxml).
  • 54. Transforming Data into the Target Form
    • Use a series of XSLT transforms to modify this data.
    In a production ETL operation, each step would likely be more complicated and/or would use different technologies or methods.
    1] Convert the dates from CCYYMMDD into CCYY-MM-DD (the "ISO 8601" format) [etl-code-1.xsl]
    2] Split the first and last names [etl-code-2.xsl]
    3] Assign the manager based on inside or external sales [etl-code-3.xsl]
    4] Map the data to the new schema [etl-code-4.xsl]
    This mapping can be done using the XSLT mapper.
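For readers who prefer to see the logic outside XSLT, here is a hypothetical Python sketch of what the first three steps do; the manager names and the "Last, First" input format are assumptions based on the sample XML above.

```python
# Python equivalents of XSLT steps 1-3 above (illustrative only).
def iso_date(ccyymmdd):
    # Step 1: CCYYMMDD -> CCYY-MM-DD (ISO 8601).
    return f"{ccyymmdd[0:4]}-{ccyymmdd[4:6]}-{ccyymmdd[6:8]}"

def split_name(name):
    # Step 2: "Last, First" -> (first, last).
    last, first = [part.strip() for part in name.split(",", 1)]
    return first, last

def assign_manager(inside):
    # Step 3: manager depends on inside vs. external sales (names made up).
    return "Inside Sales Manager" if inside == "yes" else "External Sales Manager"

first, last = split_name("Guiles, Makenzie")
print(first, last, iso_date("19770201"), assign_manager("yes"))
```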
  • 55. The output of the above steps, together with etl-target.rdbxml, gives:
  • 56. Loading Our ETL Results into the Data Repository
    Loading is just a matter of writing the output of the last XSLT transform step into the etl-target.rdbxml map we built earlier.
  • 57. OLAP Operations
  • 58. References:
    Data Mining: Concepts & Techniques by Jiawei Han and Micheline Kamber.
    http://personalpages.manchester.ac.uk/staff/G.Nenadic/CN3023/Lecture4.pdf
    http://en.wikipedia.org/wiki/Online_Analytical_Processing
    http://www.cs.sfu.ca/~han
    http://en.wikipedia.org/wiki/OLAP_cube#cite_note-OLAPGlossary1995-5
    http://www.fmt.vein.hu/softcom/dw
  • 59. Overview
    OLAP
    OLAP Cube & Multidimensional data
    OLAP Operations:
    - Roll up (Drill up)
    - Drill down (Roll down)
    - Slice & Dice
    - Pivot
    - Other operations
    • Examples
  • OLAP
    Online analytical processing, or OLAP, is an approach to quickly answer multi-dimensional analytical queries.[http://en.wikipedia.org/wiki/Online_analytical_processing]
    The typical applications of OLAP are in business reporting for sales, marketing, management reporting, business process management (BPM), budgeting and forecasting, financial reporting and similar areas.
    The term OLAP was created as a slight modification of the traditional database term OLTP (Online Transaction Processing). [http://en.wikipedia.org/wiki/Online_analytical_processing]
  • 60. OLAP Cube
    Data warehouse & OLAP tools are based on a multidimensional data model which views data in the form of a data cube.
    An OLAP (Online Analytical Processing) cube is a data structure that allows fast analysis of data.
    The OLAP cube consists of numeric facts called measures which are categorized by dimensions.
    -Dimensions: perspective or entities with respect to which an organization wants to keep records.
    -Facts: quantities by which we want to analyze relations between dimensions.
    The cube metadata may be created from a star schema or snowflake schema of tables in a relational database. Measures are derived from the records in the fact table and dimensions are derived from the dimension tables.
    Reference: http://en.wikipedia.org/wiki/OLAP_cube#cite_note-OLAPGlossary1995-5
  • 61. Concept Hierarchy
    A concept hierarchy defines a sequence of mappings from a set of low level concepts to higher level, more general concepts.
    Each of the elements of a dimension could be summarized using a hierarchy. The hierarchy is a series of parent-child relationships, typically where a parent member represents the consolidation of the members which are its children. Parent members can be further aggregated as the children of another parent.
    Reference: http://en.wikipedia.org/wiki/OLAP_cube#cite_note-OLAPGlossary1995-5
  • 62. Example – Star Schema
    time dimension table: time_key, day, day_of_the_week, month, quarter, year
    item dimension table: item_key, item_name, brand, type, supplier_type
    branch dimension table: branch_key, branch_name, branch_type
    location dimension table: location_key, street, city, province_or_state, country
    Sales fact table: time_key, item_key, branch_key, location_key, units_sold (measure)
    Reference: http://www.cs.sfu.ca/~han
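The star schema above, written out as DDL; this uses SQLite through Python purely for illustration, and the data types are guesses rather than anything specified on the slide.

```python
# Star schema DDL for the example above (illustrative types, SQLite dialect).
import sqlite3

DDL = """
CREATE TABLE time_dim     (time_key INTEGER PRIMARY KEY, day TEXT, day_of_the_week TEXT,
                           month TEXT, quarter TEXT, year INTEGER);
CREATE TABLE item_dim     (item_key INTEGER PRIMARY KEY, item_name TEXT, brand TEXT,
                           type TEXT, supplier_type TEXT);
CREATE TABLE branch_dim   (branch_key INTEGER PRIMARY KEY, branch_name TEXT, branch_type TEXT);
CREATE TABLE location_dim (location_key INTEGER PRIMARY KEY, street TEXT, city TEXT,
                           province_or_state TEXT, country TEXT);
CREATE TABLE sales_fact   (time_key INTEGER, item_key INTEGER, branch_key INTEGER,
                           location_key INTEGER, units_sold INTEGER);
"""

con = sqlite3.connect(":memory:")   # in-memory DB, just to show the schema builds
con.executescript(DDL)
```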
  • 63. Example:
    Dimensions: Item, Location, Time
    Hierarchical summarization paths:
    - Item: Type → Brand → Item
    - Location: Region → Country → City → Street
    - Time: Year → Quarter → Month → Week → Day
    Reference: http://www.cs.sfu.ca/~han
  • 64. Working Example(1)
    Reference: http://www.fmt.vein.hu/softcom/dw
  • 65. Roll up (Drill up)
    Performs aggregation on a data cube either by climbing up the concept hierarchy for a dimension or by dimension reduction. [http://www.cs.sfu.ca/~han]
    A specific grouping on one dimension, where we go from a lower level of aggregation to a higher one. [http://personalpages.manchester.ac.uk/staff/G.Nenadic/CN3023/lecture4.pdf]
    • e.g., summing up over a whole fiscal year
    • e.g., summarization over an aggregate hierarchy (total sales per region, state)
  • Drill down (Roll down)
    The reverse of roll-up. [http://www.cs.sfu.ca/~han]
    Navigates from less detailed data to more detailed data.
    Can be realized by either stepping down a concept hierarchy for a dimension or introducing additional dimensions.
    A finer-grained view of aggregated data, i.e., going from higher to lower aggregation. [http://personalpages.manchester.ac.uk/staff/G.Nenadic/CN3023/lecture4.pdf]
    • e.g., disaggregate the volume of product sales by region/city.
  • Roll Up & Drill down on Working Example(1)
    Roll up sales on time from month to quarter
    Drill down sales on location from city to plant
    Reference: http://www.fmt.vein.hu/softcom/dw
  • 67. Slice and Dice
    Slice:
    - performs a selection on one dimension of the given cube, resulting in a sub-cube.
    • e.g., slicing the volume of products in the product dimension for product_model = '1996'.
    Dice:
    - performs a selection operation on two or more dimensions.
    • e.g., dicing the central cube based on the following selection criteria: (location = "Montreal" or "Vancouver") and (time = "Q1" or "Q2") and (item = "cell phone" or "pager").
  • Slice & Dice on Working Example(1)
    Dicing volume of Products in product & time dimension
    Slicing volume of Products in product dimension
    Reference: http://www.fmt.vein.hu/softcom/dw
  • 68. Pivot
    Visualization operation which rotates the data axes in view in order to provide an alternate presentation of data.
    Select a different dimension (orientation) for analysis[http://personalpages.manchester.ac.uk/staff/G.Nenadic/CN3023/lecture4.pdf]
    E.g. pivot operation where location & item in a 2D slice are rotated.
    Other examples:
    - rotating the axes in a 3D cube.
    - transforming 3D cube into series of 2D planes.
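A small pandas sketch of a pivot: the same made-up sales numbers viewed location-by-item and then rotated to item-by-location; the data and column names are invented for illustration.

```python
# Pivot (rotate) a 2-D view of sales data (illustrative data).
import pandas as pd

sales = pd.DataFrame({
    "location":   ["Montreal", "Montreal", "Vancouver", "Vancouver"],
    "item":       ["cell phone", "pager", "cell phone", "pager"],
    "units_sold": [100, 40, 120, 30],
})

by_location = sales.pivot_table(index="location", columns="item",
                                values="units_sold", aggfunc="sum")
by_item = by_location.T     # pivot: swap the axes of the 2-D view
print(by_location, by_item, sep="\n\n")
```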
  • 69. Working Example(2)
    Dimension Tables:
    Market (Market_ID, City, Region)
    Product (Product_ID, Name, Category)
    Time (Time_ID, Week, Month, Quarter)
    Fact Table:
    Sales (Market_ID, Product_ID, Time_ID, Amount)
    Reference: http://personalpages.manchester.ac.uk/staff/G.Nenadic/CN3023/lecture4.pdf
  • 70. Roll up & Drill down on Working Example (2)
    SELECT S.Product_ID, M.City, SUM(S.Amount) AS Amount INTO City_Sales
    FROM Sales S, Market M
    WHERE M.Market_ID = S.Market_ID
    GROUP BY S.Product_ID, M.City
    Roll up sales on Market from city to region:
    SELECT T.Product_ID, M.Region, SUM(T.Amount)
    FROM City_Sales T, Market M
    WHERE T.City = M.City
    GROUP BY T.Product_ID, M.Region
    Drill down sales on Market from region to city (i.e., the city-level query above).
    Reference: http://personalpages.manchester.ac.uk/staff/G.Nenadic/CN3023/lecture4.pdf
  • 71. Slice & Dice on Working Example (2)
    Dicing sales in the time dimension (e.g., total sales for each product in each quarter):
    SELECT S.Product_ID, T.Quarter, SUM(S.Amount)
    FROM Sales S, Time T
    WHERE T.Time_ID = S.Time_ID AND T.Week = 'Week12' AND (S.Product_ID = '1002' OR S.Product_ID = '1003')
    GROUP BY T.Quarter, S.Product_ID
    Slicing the data cube in the time dimension (e.g., choosing sales only in week 12):
    SELECT S.*
    FROM Sales S, Time T
    WHERE T.Time_ID = S.Time_ID AND T.Week = 'Week12'
    Reference: http://personalpages.manchester.ac.uk/staff/G.Nenadic/CN3023/lecture4.pdf
  • 72. Other Operations
    drill across:
    executes queries involving (across) more than one fact table.
    drill through:
    makes use of relational SQL facilities to drill through the bottom level of the cube to its back-end relational tables.
    Reference: [http://www.cs.sfu.ca/~han]
  • 73. 10 Challenging Problems in Data Mining Research
    Xindong Wu
    Department of Computer Science
    University of Vermont
    33 Colchester Avenue, Burlington, Vermont 05405, USA
    xwu@cs.uvm.edu
    Qiang Yang
    Department of Computer Science
    Hong Kong University of Science & Technology
    Clearwater Bay, Kowloon, Hong Kong, China
    Presented at ICDM '05, the Fifth IEEE International Conference on Data Mining
  • 74. Contributors
    Pedro Domingos
    Charles Elkan
    Johannes Gehrke
    Jiawei Han
    David Heckerman
    Daniel Keim
    Jiming Liu
    Gregory Piatetsky-Shapiro
    Vijay V. Raghavan
    Rajeev Rastogi
    Salvatore J. Stolfo
    Alexander Tuzhilin
    Benjamin W. Wah
  • A New Feature at ICDM 2005
    What are the 10 most challenging problems in data mining, today?
    Different people have different views, and the answer is a function of time as well.
    What do the experts think?
    - Experts we consulted:
    Previous organizers of IEEE ICDM and ACM KDD
    They were asked to list their 10 problems (requests sent out in Oct 2005, replies obtained in Nov 2005)
    Replies:
    - Edited and presented in this paper
    - Hopefully useful for young researchers
    - Not in any particular order of importance
  • 80. 1. Developing a Unifying Theory of Data Mining
    The current state of the art of data-mining research is too "ad hoc"
    - techniques are designed for individual problems
    (e.g., classification or clustering)
    - no unifying theory
    A theoretical framework is required that unifies:
    • Data Mining tasks –
    Clustering
    Classification
    Association Rules etc.
    • Data Mining approaches -
    Statistics
    Machine Learning
    Database systems etc.
  • 81. 1. Developing a Unifying Theory of Data Mining
    Long-standing problems in statistical research
    - How to avoid spurious correlations?
    - Sometimes related to the problem of mining for "deep knowledge".
    • Example:
    Strong correlation found between the timing of TV series by a particular star and the occurrences of small market crashes in Hong Kong. Can we conclude that there is a hidden cause behind the correlation?
  • 82. 2. Scaling Up for High-Dimensional Data and High-Speed Data Streams
    Scaling up is needed because of the following challenges:
    - Classifiers with hundreds or billions of features need to be built for applications like text mining and drug safety analysis.
    Challenge: how to design classifiers to handle ultra-high-dimensional classification problems.
    - Satellite and computer network data comprise extremely large databases (e.g., 100 TB). Data mining technology today is still slow.
    Challenge: how can data mining technology handle data of this scale?
  • 83. 2. Scaling Up for High-Dimensional Data and High-Speed Data Streams
    • Data Mining should be a continuous online process, rather than an occasional one shot process.
    E.g. Analysis of high speed network traffic for identifying anomalous events.
    Challenge: how to compute models over streaming data which accommodate changing environments from which data is drawn. (“Concept drift” or “Environment drift”)
    • Incremental Mining and effective model updating to maintain accurate modeling of the current stream required.
  • 3. Sequential and Time Series Data
    How to efficiently and accurately cluster, classify and predict the trends in sequential and time series data ?
    Time series data used for predictions are contaminated by noise.
    How to do accurate short-term and long-term predictions?
    Signal processing techniques introduce lags in the filtered data, which reduces accuracy
  • 84. 3. Sequential and Time Series Data
    Real time series data obtained from Wireless sensors in Hong Kong UST CS department hallway
  • 85. 4. Mining Complex Knowledge from Complex Data
    Important type of complex knowledge is in the form of graphs.
    Challenge: More research required in the field of discovering graphs and structured patterns from large data.
    Data that are not i.i.d. (independent and identically distributed)
    -many objects are not independent of each other, and are not of a single type.
    Challenge: Data mining systems required that can soundly mine the rich structure of relations among objects.
    -E.g.: interlinked Web pages, social networks, metabolic networks in the cell
  • 86. 4. Mining Complex Knowledge from Complex Data
    • Most organizations' data is in text form and in complex data formats like image, multimedia, and Web data.
    Challenge: how to mine non-relational data.
    Integration of data mining and knowledge inference is required.
    Challenge (the biggest gap): mining systems are unable to relate the results of mining to the real-world decisions they affect - all they can do is hand the results back to the user.
    More research on the interestingness of knowledge is needed.
  • 87. 5. Data Mining in a Network Setting
    Community and Social Networks:
    -Linked data between emails, Web pages, blogs, citations, sequences and people
    Problems:
    • It is critical to have the right characterization of the "community" to be detected.
    • Entities/nodes are distributed; hence, distributed means of identification are desired.
    • A snapshot-based dataset may not be able to capture the real picture.
    Challenge:
    To understand -
    • the network's static structures (e.g., topologies and structures)
    • its dynamic behavior (e.g., growth factor, robustness, functional efficiency)
  • 5. Data Mining in a Network Setting
    Mining in and for Computer Networks
    - Network links are increasing in speed (1-10 Gigabit Ethernet).
    - To be able to detect anomalies, fast capture of IP packets at high-speed links and analysis of massive amounts of data are required.
    Challenge:
    - Highly scalable solutions are required,
    - i.e., good algorithms are needed to
    (a) detect DoS attacks,
    (b) trace back to find attackers, and
    (c) drop packets that belong to attack traffic.
  • 91. 6. Distributed Data Mining and Mining Multi-agent Data
    • Important in network problems.
    • In a distributed environment (sensor/IP network), distributed probes are placed at locations within the network.
    • Problems:
    1] Need to correlate and discover data patterns at various probes.
    2] Communication overhead (amount of data shipped between various sites).
    3] How to mine across multiple heterogeneous data sources.
    • Adversary data mining: adversaries deliberately manipulate the data to sabotage the mining (produce false negatives), e.g., email spam, counter-terrorism, intrusion detection/computer security, click spam, search engine spam, fraud detection, shopbots, file sharing, etc.
    • Multi-agent data mining: agents are often distributed and have proactive and reactive features.
    http://www-ai.cs.uni-dortmund.de/auto?self=$ejr31cyc
    http://www.csc.liv.ac.uk/~ali/wp/MADM.pdf
  • 95. 7. Data Mining for Biological and Environmental Problems
    Mining biological data is an extremely important problem, e.g., HIV vaccine design.
    Molecular biology, e.g., DNA: chemical properties, 3D structures, functional properties.
  • 96. 7. Data Mining for Ecological & Environmental Problems
    • We should utilize our natural environment and resources properly. But...
    How can data mining be used to study and find contributing factors for:
    1] the number of hurricane occurrences,
    2] global climate changes and potential "bird flu" epidemics,
    3] human-centered systems (e.g., user-adapted human-computer interaction or P2P transactions)?
    • "Killer" applications (bioinformatics, CRM/personalization and security applications).
    Reported in Science Magazine
  • 97. 8. Data-Mining-Process Related Problems
    How to automate the mining process?
    Issues:
    1] 90% of the cost is in pre-processing.
    2] Systematic documentation of data cleaning.
    3] Combining visual interactive and automatic DM.
    4] In exploratory data analysis, the DM goal is undefined.
    • Challenges:
    - The composition of data mining operations.
    - Data cleaning, with logging capabilities.
    - Visualization and mining automation.
    Need a methodology: help users avoid many data mining mistakes.
    -What are the approaches for multi-step mining queries?
    - What is a canonical set of data mining operations?
  • 98. 8. Data-Mining-Process Related Problems
    (Process flow: Sampling → Feature Selection → Mining → ...)
  • 99. 9. Security, Privacy and Data Integrity
    • How to ensure users' privacy while their data are being mined?
    • How to do data mining for the protection of security and privacy?
    • Knowledge integrity assessment
    - Data are intentionally modified from their original version in order to misinform the recipients, or for privacy and security.
    - Development of measures to evaluate the knowledge integrity of a collection of
    -- data
    -- knowledge and patterns.
  • 102. 9. Security, Privacy and Data Integrity
    • Challenges:
    1] Develop efficient algorithms for comparing the knowledge contents of the two (before and after) versions of the data.
    2] Develop algorithms for estimating the impact that certain modifications of the data have on the statistical significance of individual patterns obtainable by broad classes of data mining algorithms.
    • Headlines (Nov 21 2005) Senate Panel Approves Data Security Bill - The Senate Judiciary Committee on Thursday passed legislation designed to protect consumers against data security failures by, among other things, requiring companies to notify consumers when their personal information has been compromised. While several other committees in both the House and Senate have their own versions of data security legislation, S. 1789 breaks new ground by including provisions permitting consumers to access their personal files …
    http://www.cdt.org/privacy/
  • 103. 10. Dealing with Non-static, Unbalanced and Cost-sensitive Data
    Data is non-static and constantly changing, e.g., data collected in 2000, then 2001, 2002, ... The problem is to correct the bias.
    Dealing with unbalanced and cost-sensitive data:
    There is much information on costs and benefits, but no overall model of profit and loss.
    Data may evolve with a bias introduced by sampling.
    ICML 2003 Workshop on Learning from Imbalanced Data Sets
  • 104. 10. Dealing with Non-static, Unbalanced and Cost-sensitive Data
    (Medical diagnosis example: cardiogram? blood test? pressure? temperature 39 degrees? biopsy?)
    • Each test incurs a cost
    • Data extremely unbalanced
    • Data change with time
  • 105. Conclusion
    There is still a lack of timely exchange of important topics in the community as a whole.
    These problems are sampled from a small, albeit important, segment of the community.
    The list should obviously be a function of time for this dynamic field.
  • 106. THANK YOU!!!
