Foreign Exchange Data Crawling and Analysis for Knowledge Discovery Leading to Informative Decision Making
1. Foreign Exchange Data Crawling and Analysis for Knowledge Discovery Leading to Informative Decision Making
Data mining
Dr. Nasser Ghadiri
Mostafa Arjmand
2. The Presentation Includes the Following:
• Introduction
• Data Coverage and Size
• Framework Structure and the Data Fetching Mechanism
• Experimental Study
• Ranking and Classification
• Conclusions
4. Introduction:
• The foreign exchange (Forex) market
• Development of a framework
• The framework allows streaming and visualization of historical (previous) and current currency prices in close to real time
• The framework benchmarks every monitored broker to decide whether it is trustworthy
5. Broker
• However, the human decision-making process is mostly subjective and emotional
• Not every broker necessarily considers the right factors affecting the exchange rate, or gives each factor the right weight
6. CTS-Forex Performance Study
A forex monitoring system designed to:
• Automatically track and monitor brokers by fetching, visualizing and analyzing their announced exchange rates
• Our current effort focuses on the EUR/USD exchange rate, which covers 24.1% of the forex market
7. Main Components of the Proposed Framework:
Visualizing the Data Streams
The ability to visualize previously captured data as well as current data is essential for an expert:
1) Single Broker Visualization
2) Multiple Brokers Visualization
3) Ranking and Classification Visualization
8. Main Components of the Proposed Framework:
Analyzing Captured Data
The ability to analyze and interpret the captured data provides useful information for enhancing the decision-making process
Analysis horizons: short term, medium term, long term
10. Data Coverage and Size:
The Data Structure
Snapshot: the data recorded for each broker
• Date of the snapshot
• Number of seconds recorded in the snapshot
Tick: each individual piece of data is called a tick
• Date of the tick
• The ASK price of the currency
• The BID price of the currency
Recording Periods
Indeed, data is the main source for knowledge discovery leading to informative decision making
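As a rough illustration, the snapshot and tick records described above could be modeled as follows. This is a minimal Python sketch; the class and field names are ours, not taken from the framework itself.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Tick:
    """One captured price quote ("each piece of data is called a tick")."""
    timestamp: datetime  # date of the tick
    ask: float           # ASK price of the currency
    bid: float           # BID price of the currency

@dataclass
class Snapshot:
    """One recording period of ticks for a single broker."""
    broker: str                                 # broker identifier (assumed field)
    date: datetime                              # date of the snapshot
    seconds: int                                # number of seconds recorded
    ticks: list = field(default_factory=list)   # the Ticks in this snapshot
```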
11. Data Coverage and Size:
Brokers Data Summary
• We monitored a total of 52 brokers
• All of the collected data comes from the MetaTrader 4 platform
(Table: brokers summary)
12. Data Coverage and Size:
Daily Data Distribution (Monthly Stacked)
15. Framework Features
• The main function of this framework is capturing and visualizing streaming data
• The number of online brokers is increasing
• This rapid increase has to be handled by our framework by making it easy to add new brokers
• Brokers use different platforms to provide traders with several features, so the framework should support them with minimal modifications
• The framework should provide runtime visualization as well as visualization of previously captured data
16. Framework Features
Framework should provide the ability to analyze data at three levels:
1) Short term analysis
2) Medium term analysis
3) Long term analysis
17. Available Resources
MetaTrader 4
• The software consists of two components: a client and a server
• The server is used by brokers to provide a client component to their clients
• The client provides end users with the ability to:
• View live streaming prices
• Place orders
• Write their own scripts that automate the trading process
18. Client Applications Installation
• The first task is to install the client applications (MetaTrader) to start fetching the data
• The problem we faced was fitting all the clients on the same computer, as they consume a lot of memory
• We therefore distribute the client applications, which saves more memory for short term analysis
19. Local Data Centralization
Client/Server Model vs. Database Insertion and Selection Model
• The first model we came up with was a client/server model
• The second model is a database insertion and selection model
• Although the first showed better results, we decided to stick with the second model, as it provides better integrity (the second model is sketched below)
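A minimal sketch of the database insertion and selection model, assuming an SQLite table for simplicity (the slides do not name the actual database engine): every updated tick is inserted by the capture side and later selected by the fetcher application.

```python
import sqlite3

def insert_tick(conn, broker, ts, ask, bid):
    """Capture side: insert every updated tick into the database."""
    conn.execute(
        "INSERT INTO ticks (broker, ts, ask, bid) VALUES (?, ?, ?, ?)",
        (broker, ts, ask, bid),
    )
    conn.commit()

def select_new_ticks(conn, last_rowid):
    """Fetcher side: select the ticks inserted since the last seen row."""
    cur = conn.execute(
        "SELECT rowid, broker, ts, ask, bid FROM ticks "
        "WHERE rowid > ? ORDER BY rowid",
        (last_rowid,),
    )
    return cur.fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ticks (broker TEXT, ts TEXT, ask REAL, bid REAL)")
insert_tick(conn, "BrokerA", "2014-01-06 10:00:01", 1.3621, 1.3619)
print(select_new_ticks(conn, 0))
```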
20. Global Data Centralization
We have all the fetched data on each computer centralized in its fetcher application:
• This allows us to run short term analysis and some medium term analysis
• But it does not allow us to run long term analysis, broker ranking, or benchmarking
• Centralizing the data, however, has its own drawbacks
21. Online/Local Visualization
One of the implemented features was a visualization system for online users:
• They can track their favorite brokers and compare them against others
• The first model was storing the data in a MySQL database
• The second model was pre-compiling the data into JSON data files
• We still had to find a way to pre-compile the data stored locally on our computers and push it to the online server (see the sketch below)
(Chart: database model vs. JSON model)
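A minimal sketch of the JSON pre-compilation model: captured ticks are grouped per broker and written out as static JSON files that the online server can serve directly. The file layout and field names are assumptions.

```python
import json
from collections import defaultdict
from pathlib import Path

def precompile_to_json(ticks, out_dir="precompiled"):
    """Group ticks by broker and write one static JSON file per broker."""
    Path(out_dir).mkdir(exist_ok=True)
    by_broker = defaultdict(list)
    for broker, ts, ask, bid in ticks:
        by_broker[broker].append({"ts": ts, "ask": ask, "bid": bid})
    for broker, rows in by_broker.items():
        # The online visualization then fetches e.g. precompiled/BrokerA.json
        Path(out_dir, f"{broker}.json").write_text(json.dumps(rows))

precompile_to_json([("BrokerA", "2014-01-06 10:00:01", 1.3621, 1.3619)])
```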
22. Manager Application
Online Visualization Structure
The manager application is responsible for:
• Running many of the medium and long term analyses
• Ranking the brokers
• Benchmarking them
• Prioritizing the snapshots
24. Experimental Study
• Windows 8 Pro with 32GB of installed memory (RAM)
• Intel Core i7-3770 CPU @ 3.40 GHz
• The first disk is an HDD; the second is an SSD
(Table: data set summary)
Two studies were conducted: a local centralization study and a visualization component study
26. Visualization Study
(Charts: weeks 1 and 2 visualization throughput results, model 1 vs. model 2)
• Model 1: fetching data from the database
• Model 2: fetching data from pre-compiled JSON files
31. Ranking and Classification
• The ranking algorithm was just the first step toward distinguishing good brokers
• We do not have any gold-standard rules that specify the ranges of good and bad brokers' scores
• Therefore, we decided to use a clustering algorithm
• For each ranked snapshot, we cluster the brokers based on their final scores
• Thus we are clustering one-dimensional data (see the sketch below)
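The slides do not name the clustering algorithm, so the sketch below uses plain one-dimensional k-means over the final scores, with four clusters to match the four classes discussed later; both choices are assumptions.

```python
def kmeans_1d(scores, k=4, iters=100):
    """Cluster one-dimensional broker scores into k classes (plain k-means)."""
    scores = sorted(scores)
    # Spread the initial centroids evenly across the sorted scores.
    centroids = [scores[int(i * (len(scores) - 1) / (k - 1))] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for s in scores:
            nearest = min(range(k), key=lambda i: abs(s - centroids[i]))
            clusters[nearest].append(s)
        new_centroids = [sum(c) / len(c) if c else centroids[i]
                         for i, c in enumerate(clusters)]
        if new_centroids == centroids:  # converged
            break
        centroids = new_centroids
    return clusters

# Hypothetical final scores for seven brokers, split into four classes.
print(kmeans_1d([0.91, 0.88, 0.75, 0.52, 0.49, 0.20, 0.18]))
```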
33. Ranking and Classification
2) Membership:
To determine whether a broker is good or not, we have to study its membership in all classes during its life cycle (see the sketch below)
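A minimal sketch of the membership idea: given the class a broker was assigned to in each ranked snapshot period, compute the fraction of its life cycle spent in each class. The data representation is hypothetical.

```python
from collections import Counter

def class_membership(assignments):
    """Fraction of ranked snapshot periods a broker spent in each class."""
    counts = Counter(assignments)
    total = len(assignments)
    return {cls: count / total for cls, count in sorted(counts.items())}

# Hypothetical broker that sat in class 1 for most of its life cycle.
print(class_membership([1, 1, 2, 1, 1, 3, 1]))
```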
35. Conclusions
• We presented a group of issues that exist in the current forex market
• We designed and implemented a full framework to monitor a list of brokers by fetching their data continuously
• Comparing the brokers to each other speeds up and enhances domain experts' decision-making process
As explained in the next section, our framework avoids recording information all the time; instead, it records only 1/5 of the available data, i.e., one minute out of every five.
The other 4/5 are ignored, since our algorithm studies feed-price performance and brokers are expected to be consistent at all times, so it should not matter which portion of time is recorded for the analysis.
The number of recording periods and the number of seconds per recording can easily be modified in our system from the administrator's portal.
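A minimal sketch of this recording gate: one minute is recorded out of every five-minute period, with both numbers as simple constants that an administrator's portal could expose. The function and constant names are illustrative.

```python
RECORD_SECONDS = 60    # seconds recorded per period (configurable)
PERIOD_SECONDS = 300   # full period length: 1 minute kept, 4 ignored

def should_record(epoch_seconds):
    """True during the recorded 1/5 of each five-minute period."""
    return epoch_seconds % PERIOD_SECONDS < RECORD_SECONDS

# Seconds 0-59 of every period are recorded; seconds 60-299 are ignored.
assert should_record(30) and not should_record(200)
```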
In our situation, we used two computers, with 4GB of RAM each, to install the client applications: 30 applications on the first computer and 25 on the second.
No data is recorded on Saturday and Sunday, when the market is closed.
The first dataset (weeks 1 and 2) is used to study the visualization component; the second dataset (weeks 3 and 4) is used to study local data centralization.
The local centralization study compares the two models by testing every tick separately and watching the throughput of each model.
The visualization component study compares two models by visualizing chunks of ticks at the same time.
The first model is a client/server model that directly sends updated ticks to the fetcher application through a socket with a predefined port number.
The second model is a database model that inserts every updated tick into a database to be fetched later by the fetcher application.
Tests were not run simultaneously but consecutively. However, Table X shows the results of the experiments conducted in the second-week scenarios, where each test was run simultaneously to check its effect on the throughput. It seems to have had an effect on model 1 but not on model 2.
The ranking algorithm runs separately on each snapshot period. Then we create weekly, monthly, and yearly reports by finding the average score over all snapshot periods combined, as in the sketch below.
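A minimal sketch of that aggregation: per-snapshot-period scores are averaged per broker over the chosen report window (week, month, or year). The data layout is assumed.

```python
from statistics import mean

def report(snapshot_scores):
    """Average each broker's per-snapshot-period scores over a report window."""
    return {broker: mean(scores) for broker, scores in snapshot_scores.items()}

# Hypothetical weekly report built from three ranked snapshot periods.
print(report({"BrokerA": [0.90, 0.85, 0.95], "BrokerB": [0.40, 0.55, 0.50]}))
```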
A 10% threshold provided the best distribution for differentiating between trusted brokers (classes 1 and 2, with the minimum values), gray brokers (class 3, with the maximum value), and bad brokers (class 4).
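The slides do not spell out exactly how the 10% threshold is applied; one plausible reading, sketched below, starts a new class whenever the relative gap between consecutive sorted scores exceeds the threshold.

```python
def threshold_classes(scores, threshold=0.10):
    """Start a new class when the relative gap between consecutive
    sorted scores exceeds the threshold (assumes positive scores)."""
    ordered = sorted(scores, reverse=True)
    classes = [[ordered[0]]]
    for prev, cur in zip(ordered, ordered[1:]):
        if (prev - cur) / prev > threshold:
            classes.append([cur])      # gap too large: open a new class
        else:
            classes[-1].append(cur)
    return classes

# Hypothetical scores splitting into four classes at a 10% threshold.
print(threshold_classes([0.95, 0.93, 0.80, 0.78, 0.55, 0.30]))
```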