This document describes a study on behavior mining of cloud computing users. It proposes an algorithm called Time Weighted Sequence Mining Algorithm (TWSMA) to mine service usage patterns from log data, considering both service usage frequency and time. TWSMA creates a multi-dimensional weighted service sequence database, then mines it to find frequent sequential patterns. The patterns are used to recommend services to different user groups. Experiments on a test cloud system called Jyaguchi show that TWSMA has higher precision and recall than other algorithms for service recommendation. Future work could include implementing the algorithm on other cloud systems and adding user profile dimensions.
1. A Study on Behavior Mining of Cloud Computing Users
Shree Krishna Shrestha
12054071
Graduate School of Engineering
Muroran Institute of Technology, Muroran, Hokkaido, Japan
2014.02.14
3. INTRODUCTION AND PURPOSE
Purpose:
- A framework to recommend services
- A method to mine services based on the behavior of service users
- A method to recommend services based on the result data of service mining
Jyaguchi:
- A cloud system proposed by Bishnu Prasad Gautam
- Based on a service-on-demand and pay-per-use business model
4. TEST BED CLOUD SYSTEM: JYAGUCHI (OVERVIEW)
Jyaguchi is a SaaS-based cloud that provides a platform to develop applications as services with multi-language support.
[Architecture diagram: service components (AddService, CalculatorService, SubtractService, MultiplyService, DivideService) implemented in multiple languages (JavaScript, Ruby, Python, Groovy).]
Features: Software as a Service (SaaS), Distributed Resource Management, Pay-per-use Business Model, Service on Demand.
Ref: As per the definition of the inventor of Jyaguchi, Asst. Prof. Bishnu Prasad Gautam.
5. TEST BED CLOUD SYSTEM: JYAGUCHI
Jyaguchi logs the activity of users within its interface.
6. MAJOR ISSUE IN MINING OF SERVICES
What is the difference between mining items and mining services? Why can current item-mining techniques not be used directly for service mining? The distinguishing factor is usage time.
Service mining: mining for frequent service usage patterns, considering not only service usage frequency but also service usage time.
7. ALGORITHM FOR SERVICE MINING
We propose an algorithm for service mining that considers the time of service usage: the Time Weighted Sequence Mining Algorithm (TWSMA). It has two phases:
1. Create a Multi-dimensional Weighted Service Sequence Database (MDWSSDB)
2. Mine the multi-dimensional sequences
8. CREATION OF SERVICE WEIGHT INPUT SEQUENCE
Input: service usage logs; unit time u
Output: Multi-dimensional Weighted Service Sequence Database (MDWSSDB)
1: Calculate the service usage time from the service usage logs for each service at each position.
2: Create a multi-dimensional service usage time sequence from the service usage logs.
3: Calculate the Service Count for each service at each position.
4: Calculate the Absolute Service Weight for each service at each position.
5: Calculate the Relative Service Weight for each service at each position.
6: Make the weighted sequence (ws) by integrating each service id sj with its Relative Service Weight.
7: Create the MDWSSDB by integrating ws with the associated user id.
9. CALCULATION OF RELATIVE SERVICE WEIGHT
Multi-dimensional service usage time sequence (each pair is (Service ID, Use time in minutes)):

Seq. id | User_id | Sequence
1 | 10 | (2,6),(123,16),(456,31),(2,33),(456,35)
2 | 10 | (2,21),(2,20),(2,22),(1,22),(2,21)
3 | 16 | (2,1),(123,9),(456,1),(123,1),(456,15)
4 | 15 | (456,19),(456,24),(234,24),(456,43)
5 | 15 | (234,20),(234,11),(234,30),(456,38)
6 | 16 | (456,19),(123,39),(456,30),(234,30)

Service Weight of service 2 for user 10:
ST(2,10) = (6 + 33 + 21 + 20 + 22 + 21) min = 123 min
T(10) = (6 + 16 + 31 + 33 + 35 + 21 + 20 + 22 + 22 + 21) min = 227 min
ASW(2,10) = 123/227 = 0.542
For unit time (ut) of 5 min, the service usage count for service 2 at position 1 of sequence 1 is SC(2,1,1) = 6/5 = 1.2
RSW(2,1,1) = 1.2 * 0.542 = 0.650
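The arithmetic above can be checked line by line (plain Python; the variable names are illustrative):

```python
# Worked example for service 2 and user 10 (unit time 5 min):
st_2_10 = 6 + 33 + 21 + 20 + 22 + 21                    # total time on service 2: 123 min
t_10 = 6 + 16 + 31 + 33 + 35 + 21 + 20 + 22 + 22 + 21  # total time in the system: 227 min
asw = st_2_10 / t_10   # Absolute Service Weight ASW(2,10), about 0.542
sc = 6 / 5             # Service Count SC(2,1,1) for the 6-minute use at position 1
rsw = sc * asw         # Relative Service Weight RSW(2,1,1), about 0.650
```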
10. EXAMPLE OF INPUT SEQUENCE
Jyaguchi log data (each pair is (Service ID, Use time)):

Seq. id | User_id | Sequence
1 | 10 | (2,6),(123,16),(456,31),(2,33),(456,35)
2 | 10 | (2,21),(2,20),(2,22),(1,22),(2,21)
3 | 16 | (2,1),(123,9),(456,1),(123,1),(456,15)
4 | 15 | (456,19),(456,24),(234,24),(456,43)
5 | 15 | (234,20),(234,11),(234,30),(456,38)
6 | 16 | (456,19),(123,39),(456,30),(234,30)

Calculation of service weights (each pair is (Service ID, Service weight)):

Seq. id | User_id | Sequence
1 | 10 | (2,0.650),(123,0.224),(456,1.804),(2,3.577),(456,2.037)
2 | 10 | (2,2.276),(2,2.168),(2,2.385),(1,0.427),(2,2.276)
3 | 16 | (2,0.0014),(123,0.608),(456,0.089),(123,0.068),(456,1.344)
4 | 15 | (456,2.253),(456,2.846),(234,1.954),(456,5.1)
5 | 15 | (234,1.628),(234,0.895),(234,2.442),(456,4.507)
6 | 16 | (456,1.702),(123,2.636),(456,2.688),(234,1.242)
11. MINING MULTIDIMENSIONAL SEQUENCE
Input: Multi-dimensional Weighted Service Sequence Database (MDWSSDB); minimum support min_support
Output: The complete set of labeled frequent patterns
1: Calculate the sequence database weight SDW of the MDWSSDB.
2: Calculate the minimum weight Wm.
3: Call ModifiedPrefixSpan.
4: End if no frequent pattern is found or at the end of the database.
5: Form the projected sequence database.
6: Mine labeled frequent patterns from the projected sequence database.
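Steps 1 and 2, plus the support test applied during mining, can be sketched as follows (assuming the database is a list of (user_id, weighted_sequence) pairs as in the earlier example; the names are mine, not the paper's):

```python
def frequent_services(mdwssdb, min_support):
    """Steps 1-2: compute the sequence database weight (SDW) and the
    minimum weight, then return the services whose accumulated weight
    reaches that minimum (the support test used during mining)."""
    sdw = sum(w for _, ws in mdwssdb for _, w in ws)  # Step 1: SDW
    min_weight = sdw * min_support                    # Step 2: Wm
    totals = {}
    for _, ws in mdwssdb:
        for service, w in ws:
            totals[service] = totals.get(service, 0.0) + w
    frequent = {s: tw for s, tw in totals.items() if tw >= min_weight}
    return min_weight, frequent
```

On the example database this gives SDW of about 49.83 and, at 5% minimum support, keeps service 123 (total weight about 3.53) while dropping service 1.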
12. MINING SEQUENTIAL PATTERN
Using the weighted service sequence database from slide 10:
Total Database Weight: SDW = 0.650 + 0.224 + 1.804 + ... + 1.242 = 49.83
For min_support = 5%: min_weight = 49.83 * 0.05 = 2.49
Service id 123: total weight = 0.224 + 0.608 + 0.068 + 2.636 = 3.53, which exceeds min_weight, so 123 is frequent.

Projected databases (an underscore marks the position of the removed prefix):

Prefix  | Postfix
2       | <_123,456,2,456>, <_2,2,1,2>, <_123,456,123,456>   (the <2>-projected database)
123     | <_456,2,456>, <_456,123,456>, <_456,234>           (the <123>-projected database)
2,123   | <_456,2,456>, <_456,123,456>, <_456>               (the <2,123>-projected database)
123,456 | <_2,456>, <_123,456>

Frequent pattern: <123, 456>
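The projection step behind the <2>- and <123>-projected databases can be sketched on plain service-id sequences, ignoring the weights (an illustrative simplification with my own naming, not the paper's implementation):

```python
def project(sequences, prefix):
    """Return the <prefix>-projected database: for every sequence that
    contains `prefix` as a subsequence, keep the postfix that follows the
    earliest occurrence of the prefix."""
    projected = []
    for seq in sequences:
        i = 0
        found = True
        for item in prefix:
            try:
                i = seq.index(item, i) + 1  # locate item at or after position i
            except ValueError:
                found = False
                break
        if found and seq[i:]:
            projected.append(seq[i:])
    return projected
```

On the service-id sequences of the example database, project(seqs, [123]) yields [456,2,456], [456,123,456] and [456,234], matching the <123>-projected database above.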
13. MINING SEQUENTIAL PATTERN
For the frequent service sequence <123, 456> (found in sequences 1, 3 and 6 of the weighted service sequence database, belonging to users 10, 16 and 16):
User_id 16 and * are found frequent from the postfix database.
Labeled frequent patterns: (16, <123, 456>); (*, <123, 456>)
14. RECOMMENDATION OF SERVICES
Based on the labeled frequent patterns resulting from TWSMA, services are recommended to 3 user groups:
1. Anonymous/first-time user group
2. Registered user group without previous history of service usage (no current service usage log)
3. Registered user group with previous history of service usage (has a current service usage log)
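The grouping itself can be expressed as a small dispatch (the criteria are from the slide; the function and its arguments are my own illustration):

```python
def user_group(user_id, has_usage_log, known_users):
    """Classify a user into one of the three recommendation groups above.

    user_id: the current user's id, or None for an anonymous session
    has_usage_log: whether a current service usage log exists for this user
    known_users: set of registered user ids
    """
    if user_id is None or user_id not in known_users:
        return 1  # anonymous / first-time user group
    if not has_usage_log:
        return 2  # registered, without previous service usage history
    return 3      # registered, with previous service usage history
```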
21. EXPERIMENTS (TWSMA)
Experiment Methodology
• Implemented on the Jyaguchi system
• Used actual logs of Jyaguchi users
• Varied the minimum support to observe the variation in the number of patterns found and the processing time
• Compared the number of patterns found and the processing time with the SEQ-DIM algorithm
22. EXPERIMENT RESULTS (1)
• Number of patterns and processing time versus number of sequences for varied minimum support
23. EXPERIMENT RESULTS (2)
• Number of patterns and processing time versus number of sequences for varied minimum support
24. EXPERIMENTS (TWSMA)
Precision- and recall-based evaluation
Experiment Methodology
Learning Phase:
• Found frequent services from log data gathered prior to implementing the TWSMA algorithm, for various minimum supports.
• Conducted an online survey among Jyaguchi users about their favorite services.
• Found the services common to the survey data and the frequent services for various minimum supports; these are used as the relevant services.
Evaluation Phase:
• Users use the Jyaguchi system, where services are recommended by 3 algorithms: 1. TWSMA, 2. SEQ-DIM and 3. Random.
• Calculate precision and recall for each user.
• Take the average precision and recall for various minimum supports.
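Per-user precision and recall follow the standard definitions (a sketch, not the paper's code):

```python
def precision_recall(recommended, relevant):
    """Precision = hits / |recommended|, Recall = hits / |relevant|,
    where hits are the recommended services that are also relevant."""
    recommended, relevant = set(recommended), set(relevant)
    hits = len(recommended & relevant)
    precision = hits / len(recommended) if recommended else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall
```

The per-user values are then averaged over all users for each minimum support.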
25. EXPERIMENT RESULTS (3)
Comparison of precision and recall for various minimum supports for the 3 algorithms
Minimum_support: 7%
Minimum_support: 10%
26. EXPERIMENT RESULTS (4)
Comparison of precision and recall for various minimum supports for the 3 algorithms
Minimum_support: 12%
Minimum_support: 15%
27. CONCLUSION AND FUTURE WORKS
Conclusion
• Proposed a framework for recommending services that uses service usage time as the service weight.
• Implemented the algorithm in the Jyaguchi system.
• Evaluated the proposed framework on the Jyaguchi system.
Future Tasks
• Implement and evaluate the algorithm on other SaaS-based cloud systems.
• Add the dimension of user profile for better recommendation.
Ladies and gentlemen, I am Shree Krishna, currently studying for a Master's degree at Muroran Institute of Technology, Hokkaido, Japan. Today I am going to present on Recommendation of a Cloud Service Item Based on Service Utilization Patterns in Jyaguchi.
In this presentation, I will first introduce my research and its purpose. Then I will briefly introduce the Jyaguchi system, which we used as a testbed for our research. Then I will talk about the problems with existing mining algorithms when mining services, and describe our algorithm for service mining. Then I will discuss the recommendation algorithm used in this research. Then I will explain the experiments and results regarding our algorithms. Finally, I will conclude the presentation.
In this presentation we propose an algorithm for mining services and recommending services based on the behaviour of service users in a cloud system. We have defined the mining of services, service mining, as mining for frequent service usage patterns considering not only service usage frequency but also service usage time. For this purpose we used the cloud system Jyaguchi, which was proposed by Bishnu Prasad Gautam.
The Jyaguchi system was proposed and developed by Prof. Bishnu Prasad Gautam during his Master's degree. It has many features, four of which I have listed here. One of them is the pay-per-use business model: unlike systems in which software is installed and used on a personal machine, all services run on the server computer, and the user pays for the time that service is used. Service on Demand is the feature which allows Jyaguchi users to use services at their preferred time. This sample uses four scripting languages: JavaScript for addition, Ruby for subtraction, Python for multiplication, and Groovy for division.
These are the user interfaces of the Jyaguchi system. The left one is called the unified user interface, which has clouds of services and other information boxes. … Leaving Jyaguchi now, I will talk about why service mining was needed.
These days cloud computing is a very hot topic, and all the big-name companies are shifting to clouds. How, then, can we build a recommendation system for clouds that use the pay-per-use business model? Before answering this, we should answer two questions. What is the difference between mining of items and mining of services? Why can current item mining not be used for service mining? The main difference is usage time. An item, once bought, is done with; it does not matter how long or when the buyer used it, so most item mining algorithms are based on item frequency. But for services, usage time is very important: a service is used for a certain period and payment is made for that period. So a service with a long period of use, together with its frequency of use, should be recommended to users. The solution for mining services on such cloud systems is service mining, which is mining of services utilizing both the frequency of use and the service usage time of users.
For service mining we proposed a new algorithm called TWSMA. It is not a completely new algorithm but a modification of existing algorithms: we modified a multidimensional sequence mining algorithm so that it also takes account of service usage time. The algorithm has two main parts: creation of service weights, and mining the multidimensional sequence.
We calculate the service weight of each service in each sequence. The weight of a service in a sequence is based on the ratio of the time a user spends on that service to the total time that user spends in the system. The calculated service weights, together with the service ids, then make up the input sequence.
Here is an example of the input sequence. The first table has service ids and their respective usage times, separated by commas. After calculating the weight of each service at each position, the second table shows the service-weight input sequence, in which service_id and service weight form a pair. The user id is the next dimension of this sequence.
With the frequent sequence and the dimension, the projected MD-database is created. We got user ids 10 and 16 for the pattern 2, 123, 456.