In recent years, social networks have become a major phenomenon on the Internet, growing steadily in popularity and use. With millions of users worldwide, they give companies the opportunity to reach a large and diverse audience for their advertising campaigns. Companies do this by creating and spreading content across the social network, increasing the visibility of that content and, with it, its chance of becoming popular. Every piece of content needs time to reach a given destination on the network. In this article, we study the competition between several contents, each characterized by a given popularity, that seek to attract more consultations from a limited set of destinations. We first model the system, then analyze the competition between contents using game theory, and finally provide numerical results that give insight into the effect of the system's various parameters.
Sensing Trending Topics in Twitter for Greater Jakarta Area - IJECEIAES
Information and communication technology, especially the internet, is growing fast. Twitter is one internet application that produces a large amount of textual data, called tweets. Tweets may reflect real-world situations discussed in a community, so Twitter can be an important medium for urban monitoring. The ability to monitor these situations may help local governments respond quickly or shape public policy. Topic detection, for example using non-negative matrix factorization, is an important automatic tool for understanding tweets. In this paper, we study the use of Twitter as a medium for urban monitoring in Jakarta and its surrounding areas, known as Greater Jakarta. We first analyze the accuracy of the detected topics in terms of their interpretability, then visualize topic trends so that popular topics can be identified easily. Our simulations show that the topic detection methods extract topics at a reasonable level of accuracy and draw trends that make topic monitoring straightforward.
The document summarizes a research paper on detecting stable and temporal topics from social media data using a unified mixture model. It proposes a model that distinguishes temporal topics, which are discussed intensely for a short period related to real-world events, from stable topics, which are regularly discussed interests. The model mixes user and temporal features to determine topic type, with stable topics dependent on users and temporal topics dependent on time. An EM algorithm is used to estimate model parameters and maximize the log-likelihood of detecting both topic types from a user-time-associated document collection.
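The EM estimation described above can be illustrated with a generic two-component mixture. This is a plain 1D Gaussian mixture, a stand-in for the paper's user/time mixture, not its actual model; the data and initial parameters are invented.

```python
# Generic EM for a two-component 1D Gaussian mixture.
import numpy as np

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 200), rng.normal(5, 1, 200)])

pi, mu, sigma = 0.5, np.array([-1.0, 1.0]), np.array([1.0, 1.0])
for _ in range(50):
    # E-step: responsibility of component 1 for each point
    # (the 1/sqrt(2*pi) normal constant cancels in the ratio)
    p0 = (1 - pi) * np.exp(-(x - mu[0]) ** 2 / (2 * sigma[0] ** 2)) / sigma[0]
    p1 = pi * np.exp(-(x - mu[1]) ** 2 / (2 * sigma[1] ** 2)) / sigma[1]
    r = p1 / (p0 + p1)
    # M-step: update mixing weight, means, standard deviations
    pi = r.mean()
    mu = np.array([((1 - r) * x).sum() / (1 - r).sum(),
                   (r * x).sum() / r.sum()])
    sigma = np.array([
        np.sqrt(((1 - r) * (x - mu[0]) ** 2).sum() / (1 - r).sum()),
        np.sqrt((r * (x - mu[1]) ** 2).sum() / r.sum()),
    ])

print(sorted(mu))   # the two component means after convergence
```

In the paper's setting, the E-step responsibilities would instead decide whether a topic is "stable" (user-driven) or "temporal" (time-driven), but the alternating expectation/maximization structure is the same.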
This document outlines the author's PhD research, which bridges probabilistic modelling and deep learning. It first provides an overview of the author's work moving from probabilistic modelling to deep learning. It then provides a deep dive into a probabilistic model for temporal interaction data using point processes. Finally, it discusses open questions around bridging statistical methods and deep learning by making each approach more effective.
On the Dynamics of Machine Learning Algorithms and Behavioral Game Theory - Rikiya Takahashi
Presentation material used for a guest lecture at the University of Tsukuba on September 17, 2016.
The target audience is part-time PhD students working on a machine learning, data mining, or agent-based simulation project.
Predicting Preference Reversals via Gaussian Process Uncertainty Aversion - Rikiya Takahashi
This document presents a Bayesian model for predicting preference reversals in choice data. The model treats utility as samples disclosed to a dual mental system (UC and DMS). DMS does Bayesian shrinkage estimation of utilities, representing mental conflict. Context effects arise from Bayesian uncertainty aversion. The model was evaluated on real choice data, demonstrating accurate prediction and interpretable mechanisms for compromise effects and prioritization. Future work could explore more realistic human models and choice-set optimization avoiding irrational effects.
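The Bayesian shrinkage idea can be shown in its simplest normal-normal form: a noisy utility observation is pulled toward a prior mean, with the pull growing as the observation's noise grows. The numbers are invented for illustration; this is not the paper's dual-system model.

```python
# Toy normal-normal shrinkage of a noisy utility sample.
def shrink(sample, prior_mean, prior_var, noise_var):
    """Posterior mean of a normal mean given one noisy observation."""
    w = prior_var / (prior_var + noise_var)   # weight on the data
    return w * sample + (1 - w) * prior_mean

# A noisy observation (noise_var = 3) is shrunk strongly toward the prior:
print(shrink(10.0, 0.0, 1.0, 3.0))   # -> 2.5
```

Uncertainty aversion in the paper's sense arises because options whose utilities carry more noise are shrunk harder, so they lose attractiveness relative to better-understood options.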
Economic indicators such as the Consumer Price Index (CPI) are frequently used by financial policy makers to predict a country's future economic health. Following the guidance of research studies, most central banks have recently adopted an inflation-targeting monetary policy regime, which creates a strong need for an effective CPI prediction model. However, the prediction accuracy reported in numerous studies is still low, leaving room for improvement. This manuscript presents the findings of a study that uses a neuro-fuzzy technique to design a machine-learning model that trains and tests on data to predict a univariate CPI time series. The study builds a matrix of monthly CPI data from the Tanzania National Bureau of Statistics covering January 2000 to December 2015 as a case study, then runs simulation experiments in MATLAB, with 95% of the data used to train the model and 5% for testing. Root mean square error (RMSE) and mean absolute percentage error (MAPE) serve as the evaluation metrics. The results show that the neuro-fuzzy model, with a 5:74:1 architecture and Gaussian membership functions (2, 2, 2, 2, 2), achieves an RMSE of 0.44886 and a MAPE of 0.23384, far better than existing research studies.
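The two error metrics named in the abstract are standard and easy to state precisely. The toy series below is invented; note also that whether MAPE is reported as a fraction or a percentage varies between papers.

```python
# RMSE and MAPE, the two evaluation metrics used in the study, on toy data.
import numpy as np

def rmse(actual, predicted):
    a, p = np.asarray(actual, dtype=float), np.asarray(predicted, dtype=float)
    return float(np.sqrt(np.mean((a - p) ** 2)))

def mape(actual, predicted):
    a, p = np.asarray(actual, dtype=float), np.asarray(predicted, dtype=float)
    return float(np.mean(np.abs((a - p) / a)) * 100)  # in percent

actual = [100.0, 102.0, 105.0]   # e.g. monthly CPI values (invented)
pred = [101.0, 101.0, 104.0]
print(rmse(actual, pred), mape(actual, pred))
```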
Selection of optimal hyper-parameter values of support vector machine for sen... - journalBEEI
Sentiment analysis and classification is used in recommender systems to analyze movie reviews, tweets, Facebook posts, online product reviews, blogs, discussion forums, and comments in social networks. The classification is usually performed with supervised machine learning methods such as the support vector machine (SVM) classifier, which has many distinct parameters. The choice of values for these parameters can greatly influence classification accuracy and can be treated as an optimization problem. Here we analyze the use of three nature-inspired heuristic optimization techniques, cuckoo search optimization (CSO), ant lion optimizer (ALO), and polar bear optimization (PBO), for parameter tuning of SVM models with various kernel functions. We validate the approach on the sentiment classification task for a Twitter dataset and compare the results using classification accuracy and the Nemenyi test.
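The tuning loop these heuristics share can be sketched with a plain random search standing in for CSO/ALO/PBO; the population-based heuristics differ in how they propose new candidates, not in the evaluate-and-keep-best loop. The dataset, search ranges, and iteration budget below are all illustrative assumptions.

```python
# Hyper-parameter tuning of an RBF-kernel SVM via stochastic search
# (a stand-in for the nature-inspired optimizers named in the abstract).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=10, random_state=0)
rng = np.random.default_rng(0)

best_score, best_params = -1.0, None
for _ in range(20):
    # sample C and gamma log-uniformly over plausible ranges
    C = 10 ** rng.uniform(-2, 3)
    gamma = 10 ** rng.uniform(-4, 1)
    score = cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()
    if score > best_score:
        best_score, best_params = score, {"C": C, "gamma": gamma}

print(best_score, best_params)
```

Cross-validated accuracy is the fitness function here, matching the accuracy metric the paper compares on.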
On Intuitionistic Fuzzy Transportation Problem Using Pentagonal Intuitionisti... - YogeshIJTSRD
In this paper, a new method is proposed for finding an optimal solution to pentagonal intuitionistic fuzzy transportation problems, in which the cost values are pentagonal intuitionistic fuzzy numbers. The procedure is illustrated with a numerical example. P. Parimala | P. Kamalaveni, "On Intuitionistic Fuzzy Transportation Problem Using Pentagonal Intuitionistic Fuzzy Numbers Solved by Modi Method", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-5, Issue-3, April 2021. URL: https://www.ijtsrd.com/papers/ijtsrd41094.pdf Paper URL: https://www.ijtsrd.com/mathemetics/applied-mathematics/41094/on-intuitionistic-fuzzy-transportation-problem-using-pentagonal-intuitionistic-fuzzy-numbers-solved-by-modi-method/p-parimala
A Game theoretic approach for competition over visibility in social networks - journalBEEI
Social networks have undergone an important evolution in the last few years. These structures, made up of individuals tied by one or more types of interdependency, are the window through which members express their opinions and thoughts by posting to their own walls or to others' timelines. When a new piece of content arrives, it is placed at the top of the timeline, pushing older messages down. This creates a permanent competition over visibility among subscribers. Our study models this competition as a non-cooperative game in which each source must choose the posting frequencies that ensure its visibility. Using the theory of concave games, we look for an equilibrium: a situation in which no player can benefit by unilaterally deviating from its current strategy. We formulate the game, analyze it, and prove that there is exactly one Nash equilibrium, to which all players' best responses converge. We finally provide numerical results for a system of two sources with a specific frequency space and analyze the effect of different parameters on the sources' visibility on social network walls.
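The best-response convergence in this kind of game can be illustrated on a toy two-source instance. The payoff below (visibility share minus a linear posting cost) is an invented example, not the paper's model, but it is concave in each player's own rate, matching the concave-game setting: source i posting at rate x_i earns x_i/(x_1+x_2) - c*x_i, whose first-order condition gives the closed-form best response sqrt(x_j/c) - x_j.

```python
# Best-response dynamics for a toy two-source visibility game.
import math

def best_response(x_other, c):
    # maximizer of x/(x + x_other) - c*x over x >= 0
    return max(math.sqrt(x_other / c) - x_other, 0.0)

c = 1.0                 # posting cost per unit rate (illustrative)
x1, x2 = 0.9, 0.1       # arbitrary starting rates
for _ in range(100):
    x1 = best_response(x2, c)
    x2 = best_response(x1, c)

print(x1, x2)  # both converge to the unique symmetric equilibrium 1/(4c)
```

Setting x1 = x2 = x* in the best-response formula gives 2x* = sqrt(x*/c), hence x* = 1/(4c) = 0.25 here, and the iteration reaches it from any positive start, mirroring the uniqueness-and-convergence result the paper proves for its game.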
INTELLIGENT SOCIAL NETWORKS MODEL BASED ON SEMANTIC TAG RANKING - dannyijwest
Social networks have become one of the most popular platforms for users to communicate and share their interests without being in the same geographical location. The great and rapid growth of social media sites such as Facebook, LinkedIn, and Twitter produces a huge amount of user-generated content, so improving information quality and integrity has become a great challenge for all social media sites: users should get the desired content, or be linked to the best relation, through improved search and linking techniques. Introducing semantics to social networks thus widens their representation. In this paper, a new model of social networks based on semantic tag ranking is introduced, built on the concept of multi-agent systems. In the proposed model, the representation of social links is extended with the semantic relationships found in vocabularies, known as tags in most social networks. The proposed social media engine is based on enhanced Latent Dirichlet Allocation (E-LDA) as the semantic indexing algorithm, combined with TagRank as the social network ranking algorithm. The E-LDA phase improves on LDA by using optimal parameter values, and a filter is introduced to enhance the final indexing output. In the ranking phase, applying TagRank to the indexing output improves the ranking. Simulation results of the proposed model show improvements in both indexing and ranking output.
INTELLIGENT SOCIAL NETWORKS MODEL BASED ON SEMANTIC TAG RANKING - IJwest
The document presents a new model for intelligent social networks based on semantic tag ranking. It uses a multi-agent system approach with agents performing indexing and ranking. For indexing, it uses an enhanced Latent Dirichlet Allocation (E-LDA) model that optimizes LDA parameters. Tags above a threshold from E-LDA output are ranked using Tag Rank. Simulation results showed improvements in indexing and ranking over conventional methods. The model introduces semantics to social networks to improve search and link recommendation.
The document discusses value encounters and extensions to the e3-value business model. It proposes that value is co-created through interactions between actors in "value encounters" where resources are combined. Value encounters can be connected through causal relationships where one encounter reinforces another. It also outlines dimensions for analyzing value encounters, including financial, operational, knowledge, and social considerations. The conclusion advocates extending e3-value with the concept of value encounters and developing more network analysis methods to better capture co-creation of value.
An improvised model for identifying influential nodes in multi parameter soci... - csandit
Influence maximization is one of the major tasks in viral marketing and community detection. Based on the observation that social networks in general are multi-parameter graphs while viral marketing and influence maximization rely on only a few parameters, we propose to convert general social networks into "interest graphs". We have proposed an improvised model for identifying influential nodes in multi-parameter social networks using these interest graphs. Experiments conducted on the interest graphs have shown better results than the method proposed in [8].
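The interest-graph idea can be sketched minimally: keep only the edges whose endpoints both share the target interest, then rank nodes within that subgraph. The tiny graph and interest sets are invented, and plain degree is a simple stand-in for the paper's influence measure.

```python
# Sketch: project a social graph onto an "interest graph" for one topic,
# then rank nodes by degree inside that subgraph.
from collections import defaultdict

edges = [("a", "b"), ("a", "c"), ("b", "e"), ("c", "d"), ("d", "e")]
interests = {"a": {"sports"}, "b": {"sports"}, "c": {"sports"},
             "d": {"music"}, "e": {"music"}}

def influential(edges, interests, topic, k=2):
    degree = defaultdict(int)
    for u, v in edges:
        # keep the edge only if both endpoints share the topic
        if topic in interests[u] and topic in interests[v]:
            degree[u] += 1
            degree[v] += 1
    return sorted(degree, key=degree.get, reverse=True)[:k]

print(influential(edges, interests, "sports"))
```

In a real influence-maximization setting the ranking step would use a diffusion-based measure rather than raw degree, but the projection onto the interest subgraph is the part specific to the model described above.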
Rostislav Yavorsky - Research Challenges of Dynamic Socio-Semantic Networks - Witology
This document presents a model for dynamic socio-semantic networks and identifies key research challenges. The model represents social networks as weighted multi-graphs of members and their relationships, and content as multi-graphs of elements and relations. Authorship links members and content. Network dynamics include changes to members, relationships, content, and context over time. Research challenges involve discovering influential members and texts from network evolution data using interdisciplinary approaches that combine social science, linguistics and machine learning.
Clustering plays an important role in data mining, as many applications use it as a preprocessing step for data analysis. Traditional clustering focuses on grouping similar objects, while two-way co-clustering groups dyadic data (objects as well as their attributes) simultaneously. Most co-clustering research focuses on single-correlation data, but other descriptions of the dyadic data might improve co-clustering performance. In this research, we extend ITCC (Information Theoretic Co-Clustering) to the problem of co-clustering with an augmented matrix. We propose CCAM (Co-Clustering with Augmented Data Matrix) to include this augmented data for better co-clustering. We apply CCAM to the analysis of online advertising, where both ads and users must be clustered. The key data connecting ads and users is the user-ad link matrix, which identifies the ads each user has linked; both ads and users also have their own feature data, i.e. the augmented data matrix. To evaluate the proposed method, we use two measures: classification accuracy and K-L divergence. The experiment uses advertisement and user data from Morgenstern, a financial social website focused on advertising agencies. The results show that CCAM outperforms ITCC since it considers the augmented data during clustering.
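One of the two evaluation measures named above, K-L divergence, has a compact definition worth stating. The toy distributions are invented; the natural logarithm is used here (use log base 2 for bits).

```python
# Kullback-Leibler divergence D(p || q) between two discrete distributions.
import math

def kl_divergence(p, q):
    # terms with p_i == 0 contribute 0 by convention
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.5]
q = [0.9, 0.1]
print(kl_divergence(p, q))   # positive; zero only when p equals q
```

In a co-clustering context such as ITCC/CCAM, a lower divergence between the original matrix and its co-clustered approximation indicates that less information was lost by the clustering.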
In the Internet market, content providers (CPs) continue to play a primordial role in the process of accessing different types of data: images, text, videos, etc. Competition in this area is fierce; customers look for providers that offer good content (credible content and quality of service) at a reasonable price. In this work, we analyze this competition between CPs and the economic influence of their strategies on the market. We formulate the problem as a non-cooperative game among multiple CPs competing for the same market. Through a detailed analysis, we prove uniqueness of the pure Nash equilibrium (NE). Furthermore, we present a fully distributed algorithm that converges to the NE point. To quantify how efficient the NE point is, we analyze the Price of Anarchy (PoA) to assess the performance of the system at equilibrium. Finally, we provide an extensive numerical study describing the interactions between CPs and highlighting the importance of quality of service (QoS) and content credibility in the market.
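The Price of Anarchy mentioned above is the ratio of the total cost at equilibrium to the total cost at the social optimum. As a hedged illustration, here it is computed on the textbook Pigou routing example rather than the paper's CP game: one unit of traffic chooses between a fixed-latency link (latency 1) and a load-dependent link (latency x).

```python
# Price of Anarchy on the classic Pigou two-link routing example.
def total_cost(x):
    """x = fraction of traffic on the load-dependent link."""
    return x * x + (1 - x) * 1.0   # x users at latency x, rest at latency 1

# At equilibrium everyone takes the load-dependent link (its latency
# x <= 1 never exceeds the fixed link's), so:
eq_cost = total_cost(1.0)
# The social optimum splits traffic; minimise over a fine grid:
opt_cost = min(total_cost(i / 1000) for i in range(1001))
poa = eq_cost / opt_cost
print(poa)   # -> 4/3 for this instance
```

A PoA close to 1 means selfish equilibrium behavior loses little efficiency, which is exactly the kind of guarantee the paper's PoA analysis aims to establish for its CP game.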
Sentiment analysis (SA) is an ongoing field of research in text mining: the computational treatment of opinions and sentiment in text. This paper gives a comprehensive overview of recent developments in the field. Many recently proposed algorithms and various SA applications are investigated and presented briefly. Fields related to SA that have recently attracted researchers, such as transfer learning, emotion detection, and resource building, are also discussed. The main objective of this paper is to give a nearly complete picture of SA techniques and related fields with brief details. The main contributions are a careful categorization of a large number of recent articles and an illustration of recent research trends in sentiment analysis and its related areas. Prof. Richa Mehra | Diksha Saxena | Shubham Gupta | Joy Joseph, "Sentiment Analysis", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3, Issue-3, April 2019. URL: https://www.ijtsrd.com/papers/ijtsrd23375.pdf
Paper URL: https://www.ijtsrd.com/engineering/computer-engineering/23375/sentiment-analysis/prof-richa-mehra
This document discusses using agent-based modeling and neural networks for web analytics in e-learning. It proposes a hybrid agent-based model with embedded artificial neural networks to simulate the production and dissemination of knowledge among three agent types (authors, tutors, and students) in online courses. The model aims to assess knowledge trends and is implemented using the AnyLogic software package for a case study at a Ukrainian higher education institution. Agent-based modeling and neural networks are described as effective tools for developing learning analytics to understand knowledge flow between e-learning participants.
College Essay Tell Us Something Yourself - Diana Jordan
The document discusses two ancient Greek philosophers, Leucippus and Democritus, who are often mentioned together due to their similar ideas about atomism. While the exact relationship between the two men is unclear, they both proposed that all matter is composed of tiny, indivisible particles called atoms moving through empty space, an idea that influenced later philosophy and science. Their works did not survive intact, so knowledge of their ideas comes from fragments and reports by other ancient writers who discussed and debated their theories.
The EigenRumor algorithm calculates contribution scores for participants and information objects in online communities. It considers information provision and evaluation as links between participants and objects. The algorithm calculates three mutually reinforcing scores: authority score for participants' information provision ability, hub score for their evaluation ability, and reputation score for objects. The reputation score of an object is influenced by the authority score of its provider and hub scores of evaluators. In turn, authority and hub scores are influenced by the reputation scores of objects participants provide or evaluate. Calculating the scores through this mutually reinforcing process allows the algorithm to identify high contributors.
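The mutual reinforcement described above can be sketched as a small power iteration. This is a simplified illustration in the spirit of EigenRumor, not the algorithm itself: the mixing weight alpha and the tiny provision/evaluation matrices are invented, and the paper's actual weighting of the three scores is omitted.

```python
# Simplified mutually reinforcing authority / hub / reputation scores.
import numpy as np

P = np.array([[1, 1, 0],    # provision: participant i provided object j
              [0, 0, 1],
              [0, 1, 0]], dtype=float)
E = np.array([[0, 0, 1],    # evaluation: participant i evaluated object j
              [1, 1, 0],
              [1, 0, 1]], dtype=float)
alpha = 0.5                 # illustrative mix of provision vs evaluation

a = np.ones(3)  # authority (information provision ability)
h = np.ones(3)  # hub (evaluation ability)
for _ in range(100):
    # object reputation draws on providers' authority and evaluators' hubs
    r = alpha * P.T @ a + (1 - alpha) * E.T @ h
    r /= np.linalg.norm(r)
    a = P @ r; a /= np.linalg.norm(a)   # providing reputable objects
    h = E @ r; h /= np.linalg.norm(h)   # evaluating reputable objects
print(a, h, r)
```

As in HITS, normalizing after each pass keeps the iteration stable, and the scores settle into a fixed point where high contributors stand out.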
DISCOVERING USERS TOPIC OF INTEREST FROM TWEET - ijcsit
Nowadays, social media hosts some of the largest gatherings of people online, and industries have many ways to promote their products to the public through advertising. The variety of advertisements is increasing dramatically; businesses depend on advertising because it demonstrably brings success in the market, and it is practiced by all major industries. Companies therefore try hard to draw the attention of customers on social networks through online advertisement. One of the most popular social media platforms is Twitter, known for sharing short texts called 'Tweets', where people create profiles with basic information. To ensure advertisements are shown to relevant people, Twitter targets users by language, gender, interest, follower, device, behaviour, tailored audiences, keyword, and geography, generating interest sets from users' activity on Twitter. Our framework determines the topic of interest, if any, from a given list of Tweets; this process is called Entity Intersect Categorizing Value (EICV). Each category topic generates a set of words or phrases related to that topic. An entity set is created by processing tweets, generating keywords from Twitter data obtained through the Twitter API. The entity values are matched against the category sets; if the overlap crosses a threshold value, the matching category is reported as the desired interest category. For smaller data sizes, the results show that our framework performs with a high accuracy rate.
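The threshold-matching step described above can be sketched as a set intersection. The category keyword sets, the example entities, and the threshold value are all invented for illustration; the real framework derives these from tweets via the Twitter API.

```python
# Sketch of the EICV matching step: count how many extracted entities
# fall in each category's keyword set and keep categories whose overlap
# crosses the threshold.
categories = {
    "sports": {"match", "goal", "league", "coach"},
    "tech": {"phone", "app", "update", "android"},
}

def match_categories(entities, categories, threshold=2):
    hits = {name: len(entities & kws) for name, kws in categories.items()}
    return [name for name, n in hits.items() if n >= threshold]

entities = {"goal", "league", "phone"}   # entities extracted from tweets
print(match_categories(entities, categories))
```

Only "sports" clears the threshold of 2 here; "tech" matches a single entity and is dropped, which is the filtering behavior the abstract describes.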
This slide includes:
Types of Machine Learning
Supervised Learning
Brain
Neuron
Design a Learning System
Perspectives
Issues in Machine Learning
Learning Task
Learning as Search
Hypothesis
Version Spaces
Candidate elimination algorithm
Linear Discriminant
Perceptron
Linear Separability
Linear Regression
Unsupervised Learning
Reinforcement Learning
Evolutionary Learning
Name (Last, First) _________________________________Practice Fi.docx - rosemarybdodson23141
Name (Last, First): _________________________________
Practice Final Exam
ChE 2002 Introduction to Chemical Engineering Computing
Part 1 Circle all answers that are correct; there can be 0 to 5 correct answers.
1. (5 points) Which of the statements below is true?
a) A subroutine cannot open a message box
b) A subroutine can only return ONE value to an open spreadsheet
c) A subroutine can insert a new spreadsheet
d) A subroutine cannot use a function
e) A subroutine can use either a function and/or another subroutine
2. (5 points) Which of the statements below is false regarding Macro Recording?
a) The macro recorder creates a subroutine that can format a table
b) The macro recorder creates a function that generates a plot in an open worksheet
c) The macro recorder creates VBA code that can calculate a formula by using values from a spreadsheet
d) The macro recorder creates VBA code for a user-defined function within a subroutine
e) The macro recorder creates a subroutine that assigns a variable type in a message box
3. (10 points) If the VBA code below were complete, what is the value of TodaysValue assuming ? Show the calculation the computer would do.
Dim Values(1 To 10, 1 To 10) As Single
For i = 1 To 10
    For j = 1 To 10
        Values(i, j) = (i + 50) / (k * j)
    Next j
Next i
TodaysValue = Values(3, 2)
After the code has executed what is the value of the variable TodaysValue?
TodaysValue = __________
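The assumed value of k was lost from the problem statement above, so the numeric answer cannot be pinned down. A Python trace of the same nested loop, with k = 2 purely as an illustrative assumption, shows how Values(3, 2) would be computed.

```python
# Python trace of the VBA nested loop above.  The exam's value of k was
# lost in transcription; k = 2 here is an illustrative assumption only.
k = 2

# 11x11 grid so indices 1..10 mirror VBA's 1-based array; slot 0 unused
values = [[0.0] * 11 for _ in range(11)]
for i in range(1, 11):
    for j in range(1, 11):
        values[i][j] = (i + 50) / (k * j)

todays_value = values[3][2]  # (3 + 50) / (k * 2) = 53 / (2 * k)
print(todays_value)          # 13.25 when k = 2
```

Whatever k is, the answer is 53 / (2k); only the i = 3, j = 2 cell matters for TodaysValue.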
Part 2 Programming Exercises
Note: Prepare all of your programming solutions in ONE Excel workbook. Put EACH PROBLEM on a SEPARATE WORKSHEET. Save the workbook with your name, for example Last First Final Exam.xlsm. Submit your Excel workbook on D2L in the electronic drop box designated for the Final Exam.
Problem 1 (20 points)
Using Sheet1 of your Excel workbook, write a user-defined function with an If statement to evaluate the following function:
Use your user-defined function to plot from to .
Problem 2 (20 points)
On Sheet2 of your workbook, find the number of real roots of the following polynomial using a user defined function:
Remembering that the number of real roots equals the number of times the polynomial crosses the x-axis, find the roots by plotting the polynomial on the interval . List the number and approximate value of the roots you find:
1. Number of real roots = _________________________
2. Approximate value of roots = ______________________________________________
Problem 3 (35 points)
The Maclaurin series for the inverse hyperbolic tangent is given by tanh^-1(x) = x + x^3/3 + x^5/5 + ... (the sum of x^(2n+1)/(2n+1) for n = 0, 1, 2, ...), valid for |x| < 1.
Using Sheet3 of your workbook, create a UserForm that allows the user to
1. Choose the option to write the nth term of the series on the worksheet
2. Choose the option to write the sum of the first n-terms on the worksheet
3. Make the OK CommandButton and the OptionButton for the sum of the n terms the default buttons
4. Create a button on the spreadsheet that starts the UserForm
Use your program on Sheet3 to calculate:
1. The value of the 10th term.
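The two computations the UserForm is asked to perform can be sketched directly from the standard Maclaurin series tanh^-1(x) = sum of x^(2n+1)/(2n+1). The exam's value of x was lost in transcription, so x = 0.5 below is an illustrative assumption.

```python
import math


def nth_term(n, x):
    """n-th term (1-indexed) of the Maclaurin series for atanh(x):
    x^(2n-1) / (2n-1)."""
    p = 2 * n - 1
    return x ** p / p


def partial_sum(n, x):
    """Sum of the first n terms of the series."""
    return sum(nth_term(i, x) for i in range(1, n + 1))


x = 0.5  # illustrative; the exam's value of x was lost in transcription
print(nth_term(10, x))    # 10th term: 0.5**19 / 19
print(partial_sum(10, x)) # close to math.atanh(0.5), about 0.549306
```

For |x| < 1 the terms shrink geometrically, so ten terms already agree with math.atanh(0.5) to better than 1e-6.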
An Implementational approach to genetic algorithms for TSP (Sougata Das)
This document summarizes a research paper that proposes using a genetic algorithm to solve the travelling salesman problem (TSP) as applied to water distribution network optimization. The genetic algorithm aims to find Pareto optimal solutions by evolving two populations that optimize the objectives of minimizing cost and maximizing profit of water distribution routes. The algorithm uses techniques like selection, crossover and mutation over generations to evolve solutions toward the Pareto front. It represents solutions as ordered lists of cities and evaluates fitness based on multiple criteria to maintain genetic diversity.
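The crossover step for tours represented as ordered lists of cities can be illustrated with ordered crossover (OX), a standard TSP recombination operator. The paper's exact operators and cut points are not given in the summary above, so this is a generic sketch.

```python
# Ordered crossover (OX) for TSP tours, a standard recombination
# operator (a generic illustration; the summarized paper's exact
# operators may differ).

def ordered_crossover(p1, p2, cut1, cut2):
    """Copy p1[cut1:cut2] into the child, then fill the remaining
    slots with p2's cities in the order they appear in p2.  The child
    is always a valid permutation of the cities."""
    child = [None] * len(p1)
    child[cut1:cut2] = p1[cut1:cut2]
    fill = [city for city in p2 if city not in child]
    for i in range(len(child)):
        if child[i] is None:
            child[i] = fill.pop(0)
    return child


p1 = ["A", "B", "C", "D", "E"]
p2 = ["E", "D", "C", "B", "A"]
child = ordered_crossover(p1, p2, 1, 3)
print(child)  # keeps p1's middle segment ["B", "C"], orders the rest by p2
```

OX is popular for TSP precisely because it never duplicates or drops a city, so no repair step is needed after crossover.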
This document describes an electronic doorbell system that uses a keypad and GSM for home security. The system consists of a doorbell connected to a microcontroller that triggers a GSM module to send an SMS to the homeowner when the doorbell is pressed. The homeowner can then respond via button press to open the door, or a message will be displayed if they do not respond. An authorized person can enter a password on the keypad, and if multiple wrong passwords are entered, a message will be sent to the homeowner about a potential burglary attempt. The system aims to provide notification to homeowners and prevent unauthorized access through use of passwords and messaging capabilities.
Augmented reality, a new-age technology, has widespread applications in every field imaginable and has proven to be an inflection point in numerous verticals, improving lives and performance. In this paper, we explore possible applications of Augmented Reality (AR) in the field of medicine. The value of using AR in medicine, or in any field, is that it helps motivate the user, makes sessions interactive, and assists in faster learning. In this paper, we discuss the applicability of AR to medical diagnosis. Augmented reality reinforces remote collaboration, allowing doctors to diagnose patients from a different locality. Additionally, we believe that a much more pronounced effect can be achieved by bringing together the cutting-edge technology of AR and the lifesaving field of medical sciences. AR can also be applied in the learning process. Similarly, virtual reality could be used in fields where practical experience is essential, such as driving, sports, and neonatal care training.
A Game theoretic approach for competition over visibility in social networks (journalBEEI)
Social networks have undergone an important evolution in the last few years. These structures, made up of individuals tied by one or more specific types of interdependency, are the window through which members express their opinions and thoughts by sending posts to their own walls or to others' timelines. When a new content item arrives, it is placed at the top of the timeline, pushing older messages down. This creates a permanent competition over visibility among subscribers competing to stay visible. Our study presents this competition as a non-cooperative game: each source has to choose the posting frequencies that assure its visibility. We model it using the theory of concave games to reach an equilibrium, a situation in which no player can gain by unilaterally deviating from its current strategy. We formulate the game, then analyze it and prove that there is exactly one Nash equilibrium, to which all players' best responses converge. We finally provide some numerical results for a system of two sources with a specific frequency space, and analyze the effect of different parameters on sources' visibility on the walls of social networks.
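The convergence of best responses to a unique equilibrium can be illustrated with a toy two-source model. The utility below (visibility share minus a linear posting cost) is a generic stand-in, not the paper's actual model; the cost c and starting frequencies are illustrative assumptions.

```python
# Toy two-source visibility game, an illustrative stand-in for the
# paper's model (not its actual utility functions): source i posts at
# frequency f_i, gains visibility share f_i / (f_1 + f_2), and pays a
# linear cost c * f_i.  The first-order condition gives the best
# response  f_i = sqrt(f_j / c) - f_j.
import math


def best_response(f_other, c):
    return max(math.sqrt(f_other / c) - f_other, 0.0)


def iterate_best_responses(c=1.0, f=(0.1, 0.1), rounds=50):
    """Alternate best responses until (numerical) convergence."""
    f1, f2 = f
    for _ in range(rounds):
        f1 = best_response(f2, c)
        f2 = best_response(f1, c)
    return f1, f2


f1, f2 = iterate_best_responses()
print(f1, f2)  # both approach the symmetric equilibrium 1/(4c) = 0.25
```

In this toy model the symmetric fixed point f = 1/(4c) is the unique interior Nash equilibrium, mirroring the paper's uniqueness-and-convergence result in miniature.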
INTELLIGENT SOCIAL NETWORKS MODEL BASED ON SEMANTIC TAG RANKING (dannyijwest)
Social networks have become one of the most popular platforms allowing users to communicate and share their interests without being in the same geographical location. The great and rapid growth of social media sites such as Facebook, LinkedIn, and Twitter produces a huge amount of user-generated content. Thus, improving information quality and integrity becomes a great challenge for all social media sites, allowing users to get the desired content or be linked to the best relation through improved search and link techniques. Introducing semantics to social networks widens the representation of the social network. In this paper, a new model of social networks based on semantic tag ranking is introduced. The model is based on the concept of multi-agent systems. In this proposed model, the representation of social links is extended by the semantic relationships found in the vocabularies known as tags in most social networks. The proposed model for the social media engine is based on enhanced Latent Dirichlet Allocation (E-LDA) as a semantic indexing algorithm, combined with Tag Rank as a social network ranking algorithm. The E-LDA phase improves the LDA algorithm by using optimal parameters, and a filter is introduced to enhance the final indexing output. In the ranking phase, using Tag Rank on top of the indexing phase improved the ranking output. Simulation results of the proposed model show improvements in indexing and ranking output.
INTELLIGENT SOCIAL NETWORKS MODEL BASED ON SEMANTIC TAG RANKING (IJwest)
The document presents a new model for intelligent social networks based on semantic tag ranking. It uses a multi-agent system approach with agents performing indexing and ranking. For indexing, it uses an enhanced Latent Dirichlet Allocation (E-LDA) model that optimizes LDA parameters. Tags above a threshold from E-LDA output are ranked using Tag Rank. Simulation results showed improvements in indexing and ranking over conventional methods. The model introduces semantics to social networks to improve search and link recommendation.
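The ranking phase can be illustrated with a PageRank-style power iteration over a small tag graph. The graph, damping factor, and update rule below are generic illustrative choices, not the paper's actual Tag Rank specifics.

```python
# PageRank-style power iteration over a small tag graph, a generic
# illustration of a tag-ranking phase (the paper's actual Tag Rank
# update rule and graph construction may differ).

def tag_rank(links, damping=0.85, iters=50):
    """links: {tag: [tags it points to]}; returns a score per tag."""
    tags = list(links)
    n = len(tags)
    rank = {t: 1.0 / n for t in tags}
    for _ in range(iters):
        new = {t: (1 - damping) / n for t in tags}
        for t, outs in links.items():
            if not outs:  # dangling tag: spread its mass evenly
                for u in tags:
                    new[u] += damping * rank[t] / n
            else:
                for u in outs:
                    new[u] += damping * rank[t] / len(outs)
        rank = new
    return rank


links = {"music": ["rock"], "rock": ["music", "guitar"], "guitar": ["rock"]}
scores = tag_rank(links)
print(max(scores, key=scores.get))  # "rock" collects the most link mass
```

Because both "music" and "guitar" link only to "rock", its stationary score dominates; the scores stay a probability distribution (they sum to 1) throughout the iteration.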
The document discusses value encounters and extensions to the e3-value business model. It proposes that value is co-created through interactions between actors in "value encounters" where resources are combined. Value encounters can be connected through causal relationships where one encounter reinforces another. It also outlines dimensions for analyzing value encounters, including financial, operational, knowledge, and social considerations. The conclusion advocates extending e3-value with the concept of value encounters and developing more network analysis methods to better capture co-creation of value.
An improvised model for identifying influential nodes in multi parameter soci... (csandit)
Influence Maximization is one of the major tasks in the fields of viral marketing and community detection. Based on the observation that social networks in general are multi-parameter graphs while viral marketing or Influence Maximization is based on few parameters, we propose to convert general social networks into "interest graphs". We have proposed an improvised model for identifying influential nodes in multi-parameter social networks using these "interest graphs". The experiments conducted on these interest graphs have shown better results than the method proposed in [8].
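A minimal baseline for picking influential nodes in a weighted interest graph is to rank nodes by total incident edge weight. This degree-strength heuristic is only an illustrative baseline, not the improvised model the abstract proposes.

```python
# Degree-strength baseline for influential-node selection in a weighted
# "interest graph" (an illustrative baseline, not the paper's model).

def top_influencers(edges, k=2):
    """edges: iterable of (u, v, weight) pairs; returns the k nodes
    with the highest total incident weight."""
    strength = {}
    for u, v, w in edges:
        strength[u] = strength.get(u, 0.0) + w
        strength[v] = strength.get(v, 0.0) + w
    return sorted(strength, key=strength.get, reverse=True)[:k]


# Toy interest graph: edge weights encode shared-interest strength.
edges = [("a", "b", 3.0), ("a", "c", 2.0), ("b", "c", 1.0), ("c", "d", 0.5)]
print(top_influencers(edges, k=2))  # ["a", "b"]
```

Real influence-maximization models go further, e.g. greedy seed selection under an independent-cascade diffusion model, but they all start from some node-importance signal like this one.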
Rostislav Yavorsky - Research Challenges of Dynamic Socio-Semantic Networks (Witology)
This document presents a model for dynamic socio-semantic networks and identifies key research challenges. The model represents social networks as weighted multi-graphs of members and their relationships, and content as multi-graphs of elements and relations. Authorship links members and content. Network dynamics include changes to members, relationships, content, and context over time. Research challenges involve discovering influential members and texts from network evolution data using interdisciplinary approaches that combine social science, linguistics and machine learning.
Clustering plays an important role in data mining, as many applications use it as a preprocessing step for data analysis. Traditional clustering focuses on grouping similar objects, while two-way co-clustering can group dyadic data (objects as well as their attributes) simultaneously. Most co-clustering research focuses on single-correlation data, but other possible descriptions of dyadic data could improve co-clustering performance. In this research, we extend ITCC (Information Theoretic Co-Clustering) to the problem of co-clustering with an augmented matrix. We propose CCAM (Co-Clustering with Augmented Data Matrix) to include this augmented data for better co-clustering. We apply CCAM to the analysis of online advertising, where both ads and users must be clustered. The key data that connects ads and users is the user-ad link matrix, which identifies the ads that each user has linked; both ads and users also have their own feature data, i.e., the augmented data matrix. To evaluate the proposed method, we use two measures: classification accuracy and K-L divergence. The experiment uses the advertisement and user data from Morgenstern, a financial social website that focuses on the advertisement agency. The experimental results show that CCAM provides better performance than ITCC since it considers the augmented data during clustering.
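One of the two evaluation measures named above, K-L divergence, can be computed directly for discrete distributions; the toy distributions below are illustrative.

```python
import math


def kl_divergence(p, q):
    """Kullback-Leibler divergence D(P || Q) for discrete distributions
    given as aligned probability lists.  Terms with p_i = 0 are skipped,
    since 0 * log 0 is taken as 0."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)


p = [0.5, 0.5]
q = [0.9, 0.1]
print(kl_divergence(p, q))  # positive; 0 only when P == Q
```

D(P || Q) is asymmetric and is zero exactly when the two distributions coincide, which is why it is a natural fit for comparing cluster-induced distributions against reference ones.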
In the Internet market, content providers (CPs) continue to play a primordial role in the process of accessing different types of data: images, texts, videos, etc. Competition in this area is fierce; customers look for providers that offer good content (credibility of content and quality of service) at a reasonable price. In this work, we analyze this competition between CPs and the economic influence of their strategies on the market. We formulate our problem as a non-cooperative game among multiple CPs for the same market. Through a detailed analysis, we prove the uniqueness of the pure Nash Equilibrium (NE). Furthermore, a fully distributed algorithm to converge to the NE point is presented. In order to quantify how efficient the NE point is, a detailed analysis of the Price of Anarchy (PoA) is adopted to ensure the performance of the system at equilibrium. Finally, we provide an extensive numerical study to describe the interactions between CPs and to point out the importance of quality of service (QoS) and credibility of content in the market.
Sentiment Analysis (SA) is an ongoing field of research in text mining. SA is the computational treatment of opinions, sentiments, and subjectivity of text. This paper gives a comprehensive overview of recent updates in this field. Many recently proposed algorithm enhancements and various SA applications are investigated and presented briefly. The fields related to SA that have recently attracted researchers, such as transfer learning, emotion detection, and building resources, are also discussed. The main objective of this paper is to give a nearly full picture of SA techniques and related fields with brief details. The main contributions of this paper include the sophisticated categorization of a large number of recent articles and the illustration of recent research trends in sentiment analysis and its related areas. Prof. Richa Mehra | Diksha Saxena | Shubham Gupta | Joy Joseph "Sentiment Analysis" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3 | Issue-3, April 2019, URL: https://www.ijtsrd.com/papers/ijtsrd23375.pdf
Paper URL: https://www.ijtsrd.com/engineering/computer-engineering/23375/sentiment-analysis/prof-richa-mehra
This document discusses using agent-based modeling and neural networks for web analytics in e-learning. It proposes a hybrid agent-based model with embedded artificial neural networks to simulate the production and dissemination of knowledge among three agent types (authors, tutors, and students) in online courses. The model aims to assess knowledge trends and is implemented using the AnyLogic software package for a case study at a Ukrainian higher education institution. Agent-based modeling and neural networks are described as effective tools for developing learning analytics to understand knowledge flow between e-learning participants.
College Essay Tell Us Something Yourself (Diana Jordan)
The document discusses two ancient Greek philosophers, Leucippus and Democritus, who are often mentioned together due to their similar ideas about atomism. While the exact relationship between the two men is unclear, they both proposed that all matter is composed of tiny, indivisible particles called atoms moving through empty space, an idea that influenced later philosophy and science. Their works did not survive intact, so knowledge of their ideas comes from fragments and reports by other ancient writers who discussed and debated their theories.
The EigenRumor algorithm calculates contribution scores for participants and information objects in online communities. It considers information provision and evaluation as links between participants and objects. The algorithm calculates three mutually reinforcing scores: authority score for participants' information provision ability, hub score for their evaluation ability, and reputation score for objects. The reputation score of an object is influenced by the authority score of its provider and hub scores of evaluators. In turn, authority and hub scores are influenced by the reputation scores of objects participants provide or evaluate. Calculating the scores through this mutually reinforcing process allows the algorithm to identify high contributors.
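The mutually reinforcing calculation described above can be sketched as a HITS-like power iteration. The link matrices, mixing weight alpha, and normalization below are illustrative assumptions, not the EigenRumor paper's exact update rules.

```python
# HITS-like power-iteration sketch of mutually reinforcing scores,
# in the spirit of EigenRumor (the exact update rules and weights in
# the paper may differ).  P[i][j] = 1 if participant i provided object
# j; E[i][j] = 1 if participant i evaluated object j.

def eigenrumor(P, E, alpha=0.7, iters=100):
    n_part, n_obj = len(P), len(P[0])
    a = [1.0] * n_part  # authority: information-provision ability
    h = [1.0] * n_part  # hub: evaluation ability
    r = [1.0] * n_obj   # reputation of objects
    for _ in range(iters):
        # an object's reputation mixes its provider's authority with
        # its evaluators' hub scores (alpha is an assumed weight)
        r = [alpha * sum(P[i][j] * a[i] for i in range(n_part))
             + (1 - alpha) * sum(E[i][j] * h[i] for i in range(n_part))
             for j in range(n_obj)]
        # participants' scores feed back from the objects they touch
        a = [sum(P[i][j] * r[j] for j in range(n_obj)) for i in range(n_part)]
        h = [sum(E[i][j] * r[j] for j in range(n_obj)) for i in range(n_part)]
        for vec in (r, a, h):  # normalize to keep the iteration bounded
            norm = sum(vec) or 1.0
            for k in range(len(vec)):
                vec[k] /= norm
    return a, h, r


P = [[1, 0], [0, 1]]  # who provided which object
E = [[0, 1], [1, 1]]  # who evaluated which object
a, h, r = eigenrumor(P, E)
print(a, h, r)
```

As with HITS, repeated application of the coupled updates converges to a stable score assignment in which high contributors stand out.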
Image fusion is a subfield of image processing in which more than one image is fused to create an image where all the objects are in focus. Image fusion is performed for multi-sensor and multi-focus images of the same scene. Multi-sensor images of the same scene are captured by different sensors, whereas multi-focus images are captured by the same sensor. In multi-focus images, the objects in the scene which are closer to the camera are in focus and the farther objects get blurred; conversely, when the farther objects are focused, the closer objects get blurred. To achieve an image where all the objects are in focus, image fusion is performed either in the spatial domain or in a transformed domain. In recent times, the applications of image processing have grown immensely. Usually, due to the limited depth of field of optical lenses, especially those with greater focal length, it becomes impossible to obtain an image where all the objects are in focus. Image fusion thus plays an important role in other image processing tasks such as image segmentation, edge detection, stereo matching, and image enhancement. Hence, a novel feature-level multi-focus image fusion technique has been proposed which fuses multi-focus images. The results of extensive experimentation performed to highlight the efficiency and utility of the proposed technique are presented. The proposed work further explores a comparison between fuzzy-based image fusion and a neuro-fuzzy fusion technique, along with quality evaluation indices.
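The core multi-focus idea, keeping whichever source is locally sharper at each position, can be sketched in the spatial domain on a 1-D toy signal. This is only a minimal illustration; the proposed technique is feature-level and fuzzy/neuro-fuzzy based, which this sketch does not reproduce.

```python
# Spatial-domain sketch of multi-focus fusion: at each position, keep
# the sample from whichever source is locally sharper (higher variance
# in a small window).  A 1-D toy signal stands in for image rows; the
# window size is an illustrative choice.

def local_variance(sig, i, w=1):
    window = sig[max(0, i - w): i + w + 1]
    mean = sum(window) / len(window)
    return sum((v - mean) ** 2 for v in window) / len(window)


def fuse(sig_a, sig_b, w=1):
    """Pick, per sample, the source with higher local variance."""
    return [a if local_variance(sig_a, i, w) >= local_variance(sig_b, i, w)
            else b
            for i, (a, b) in enumerate(zip(sig_a, sig_b))]


sharp_left = [0, 9, 0, 4, 4, 4]   # detail on the left, flat (blurred) right
sharp_right = [3, 3, 3, 0, 9, 0]  # flat left, detail on the right
print(fuse(sharp_left, sharp_right))
```

The fused output keeps the high-variance (in-focus) detail from each side, which is the behavior full 2-D fusion methods generalize with richer sharpness features.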
Graphs have become the dominant representation for many tasks, as they provide a structure to capture entities and the corresponding relations. A powerful role of networks/graphs is to bridge local features of vertices as they blossom into patterns that help explain how nodal relations and their edges produce complex effects that ripple through a graph. User clusters are formed as a result of interactions between entities. Many users can hardly categorize their contacts into groups such as "family", "friends", or "colleagues" today. Thus, analyzing such a user social graph via implicit clusters enables dynamism in contact management. This study seeks to implement this dynamism via a comparative study of a deep neural network and a friend-suggest algorithm. We analyze a user's implicit social graph and seek to automatically create custom contact groups using metrics that classify contacts based on the user's affinity to them. Experimental results demonstrate the importance of both the implicit group relationships and the interaction-based affinity in suggesting friends.
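The interaction-based affinity idea can be sketched as a decayed interaction count: recent interactions contribute more weight to a contact's score. The half-life and scoring rule are illustrative assumptions; neither the deep neural network nor the friend-suggest algorithm compared in the study is reproduced here.

```python
# Illustrative interaction-based affinity score: each interaction adds
# weight 2^(-age / half_life), so recent contact counts more.  The
# half-life value is an assumption, not taken from the study.

def affinity_scores(interactions, half_life_days=30.0):
    """interactions: list of (contact, days_ago) events; returns a
    recency-weighted score per contact."""
    scores = {}
    for contact, days_ago in interactions:
        weight = 2.0 ** (-days_ago / half_life_days)
        scores[contact] = scores.get(contact, 0.0) + weight
    return scores


events = [("alice", 1), ("alice", 2), ("bob", 90), ("bob", 120)]
scores = affinity_scores(events)
print(max(scores, key=scores.get))  # "alice": recent events dominate
```

Ranking contacts by such a score is one simple way to seed the implicit clusters the study builds on; a learned model would replace this hand-set decay with trained weights.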
This paper applies the Gryllidae Optimization Algorithm (GOA) to solve the optimal reactive power problem. The proposed GOA approach is based on the chirping characteristics of Gryllidae (crickets). In general, male Gryllidae chirp, though some female Gryllidae do as well. Male Gryllidae draw females with the sound they produce; moreover, they warn other Gryllidae against dangers with this sound. The hearing organs of Gryllidae are housed in an expansion of their forelegs, through which they orient toward the produced fluttering sounds. The proposed GOA has been tested on the standard IEEE 14- and 30-bus test systems, and simulation results show that the projected algorithm reduces real power loss considerably.
In the wake of the sudden replacement of wood and kerosene by gas cookers for several purposes in Nigeria, gas leakage has caused several damages in homes, laboratories, and elsewhere. The installation of gas leakage detection devices has been globally adopted to eliminate accidents related to gas leakage. We present an alternative approach to developing a device that can automatically detect and control gas leakages and also monitor temperature. The system detects leakage of LPG (Liquefied Petroleum Gas) using a gas sensor, then triggers the control system response, which employs a ventilator system, a mobile phone alert, and an alarm when the LPG concentration in the air exceeds a certain level. The performance of two gas sensors (MQ5 and MQ6) was tested for a guided decision. Also, when the temperature of the environment poses a danger, an LED indicator, a buzzer, and a 16x2 LCD display indicate temperature and gas leakage status in degrees Celsius and PPM respectively. Attention was given to the response time of the control system, and it was ascertained that this system significantly increases the chances and efficiency of eliminating gas-leakage-related accidents.
Feature selection is one of the main problems in the text and data mining domain. This paper presents a comparative study of feature selection methods for Arabic text classification. Five feature selection methods were selected: ICHI square, CHI square, Information Gain, Mutual Information, and Wrapper. Each was tested with five classification algorithms: Bayes Net, Naive Bayes, Random Forest, Decision Tree, and Artificial Neural Networks. An Arabic data collection consisting of 9,055 documents was used, and the methods were compared by four criteria: precision, recall, F-measure, and time to build the model. The results showed that the improved ICHI feature selection achieved almost all the best results in comparison with the other methods.
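The chi-square feature selection score mentioned above is computed from a 2x2 term-class contingency table; the toy counts below are illustrative values, not from the paper's corpus.

```python
# Chi-square score of a term for a class, from the 2x2 contingency
# table: A = docs in the class containing the term, B = docs outside
# the class containing it, C = in-class docs without it, D = the rest.

def chi_square(A, B, C, D):
    n = A + B + C + D
    num = n * (A * D - C * B) ** 2
    den = (A + C) * (B + D) * (A + B) * (C + D)
    return num / den if den else 0.0


# A term concentrated in the target class scores high; a term spread
# evenly across classes scores zero (statistically independent).
print(chi_square(40, 5, 10, 45))   # strongly class-indicative
print(chi_square(25, 25, 25, 25))  # independent of class: 0.0
```

Ranking terms by this score and keeping the top-k is the standard CHI-based selection; variants like ICHI adjust the statistic but keep the same contingency-table input.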
The document proposes the Gentoo Penguin Algorithm (GPA) to solve the optimal reactive power problem. The goal is to minimize real power loss. GPA is inspired by the natural behaviors of Gentoo penguins. In GPA, penguin positions represent potential solutions. Penguins move toward other penguins with lower "cost" or higher heat concentration, representing better solutions. Cost is defined by heat concentration and distance between penguins. Heat radiation decreases with distance. The algorithm is tested on the IEEE 57 bus system and reduces real power loss effectively compared to other methods.
08 20272 academic insight on applicationIAESIJEECS
This research has thrown up many questions in need of further investigation.There was an expressive quantitative-qualitative research, which a common investigation form was used in.The dialogue item was also applied to discover if the contributors asserted the media-based attitude supplements their learning of academic English writing classes or not.Data recounted academic” insights toward using Skype as a sustaining implement for lessons releasing based on chosen variables: their occupation, year of education, and knowledge with Skype discovered that there were no important statistical differences in the use of Skype units because of medical academics major knowledge. There are statistically important differences in using Skype units. The findings also, disclosed that there are statistically significant differences in using Skype units due to the practice with Skype variable, in favors of academics with no Skype use practice. Skype instrument as an instructive media is a positive medium to be employed to supply academic medical writing data and assist education. Academics who do not have enough time to contribute in classes believe comfortable using the Skype-based attitude in scientific writing. They who took part in the course claimed that their approval of this media is due to learning academic innovative medical writing.
Cloud computing has sweeping impact on the human productivity. Today it’s used for Computing, Storage, Predictions and Intelligent Decision Making, among others. Intelligent Decision-Making using Machine Learning has pushed for the Cloud Services to be even more fast, robust and accurate. Security remains one of the major concerns which affect the cloud computing growth however there exist various research challenges in cloud computing adoption such as lack of well managed service level agreement (SLA), frequent disconnections, resource scarcity, interoperability, privacy, and reliability. Tremendous amount of work still needs to be done to explore the security challenges arising due to widespread usage of cloud deployment using Containers. We also discuss Impact of Cloud Computing and Cloud Standards. Hence in this research paper, a detailed survey of cloud computing, concepts, architectural principles, key services, and implementation, design and deployment challenges of cloud computing are discussed in detail and important future research directions in the era of Machine Learning and Data Science have been identified.
Notary is an official authorized to make an authentic deed regarding all deeds, agreements and stipulations required by a general rule. Activities carried out at the notary office such as recording client data and file data still use traditional systems that tend to be manual. The problem that occurs is the inefficiency in data processing and providing information to clients. Clients have difficulty getting information related to the progress of documents that are being taken care of at the notary's office. The client must take the time to arrive to the notary's office repeatedly to check the progress of the work of the document file. The purpose of this study is to facilitate clients in obtaining information about the progress of the work in progress, and make it easier for employees to process incoming documents by implementing an administrative system. This system was developed with the waterfall system development method and uses the Multi-Channel Access Technology integrated in the website to simplify the process of delivering information and requesting information from clients and to clients with Telegram and SMS Gateway. Clients will come to the office only when there is a notification from the system via Telegram or SMS notifying that the client must come directly to the notary's office, thus leading to an efficient time and avoiding excessive transportation costs. The overall functional system can function properly based on the results of alpha testing. The results of beta testing conducted by distributing the system feasibility test questionnaire to end users, get a percentage of 96% of users agree the system is feasible to be implemented.
In this work Tundra wolf algorithm (TWA) is proposed to solve the optimal reactive power problem. In the projected Tundra wolf algorithm (TWA) in order to avoid the searching agents from trapping into the local optimal the converging towards global optimal is divided based on two different conditions. In the proposed Tundra wolf algorithm (TWA) omega tundra wolf has been taken as searching agent as an alternative of indebted to pursue the first three most excellent candidates. Escalating the searching agents’ numbers will perk up the exploration capability of the Tundra wolf wolves in an extensive range. Proposed Tundra wolf algorithm (TWA) has been tested in standard IEEE 14, 30 bus test systems and simulation results show the proposed algorithm reduced the real power loss effectively.
In this work Predestination of Particles Wavering Search (PPS) algorithm has been applied to solve optimal reactive power problem. PPS algorithm has been modeled based on the motion of the particles in the exploration space. Normally the movement of the particle is based on gradient and swarming motion. Particles are permitted to progress in steady velocity in gradient-based progress, but when the outcome is poor when compared to previous upshot, immediately particle rapidity will be upturned with semi of the magnitude and it will help to reach local optimal solution and it is expressed as wavering movement. In standard IEEE 14, 30, 57,118,300 bus systems Proposed Predestination of Particles Wavering Search (PPS) algorithm is evaluated and simulation results show the PPS reduced the power loss efficiently.
In this paper, Mine Blast Algorithm (MBA) has been intermingled with Harmony Search (HS) algorithm for solving optimal reactive power dispatch problem. MBA is based on explosion of landmines and HS is based on Creativeness progression of musicians-both are hybridized to solve the problem. In MBA Initial distance of shrapnel pieces are reduced gradually to allow the mine bombs search the probable global minimum location in order to amplify the global explore capability. Harmony search (HS) imitates the music creativity process where the musicians supervise their instruments’ pitch by searching for a best state of harmony. Hybridization of Mine Blast Algorithm with Harmony Search algorithm (MH) improves the search effectively in the solution space. Mine blast algorithm improves the exploration and harmony search algorithm augments the exploitation. At first the proposed algorithm starts with exploration & gradually it moves to the phase of exploitation. Proposed Hybridized Mine Blast Algorithm with Harmony Search algorithm (MH) has been tested on standard IEEE 14, 300 bus test systems. Real power loss has been reduced considerably by the proposed algorithm. Then Hybridized Mine Blast Algorithm with Harmony Search algorithm (MH) tested in IEEE 30, bus system (with considering voltage stability index)- real power loss minimization, voltage deviation minimization, and voltage stability index enhancement has been attained.
Artificial Neural Networks have proved their efficiency in a large number of research domains. In this paper, we have applied Artificial Neural Networks on Arabic text to prove correct language modeling, text generation, and missing text prediction. In one hand, we have adapted Recurrent Neural Networks architectures to model Arabic language in order to generate correct Arabic sequences. In the other hand, Convolutional Neural Networks have been parameterized, basing on some specific features of Arabic, to predict missing text in Arabic documents. We have demonstrated the power of our adapted models in generating and predicting correct Arabic text comparing to the standard model. The model had been trained and tested on known free Arabic datasets. Results have been promising with sufficient accuracy.
In the present-day communications speech signals get contaminated due to
various sorts of noises that degrade the speech quality and adversely impacts
speech recognition performance. To overcome these issues, a novel approach
for speech enhancement using Modified Wiener filtering is developed and
power spectrum computation is applied for degraded signal to obtain the
noise characteristics from a noisy spectrum. In next phase, MMSE technique
is applied where Gaussian distribution of each signal i.e. original and noisy
signal is analyzed. The Gaussian distribution provides spectrum estimation
and spectral coefficient parameters which can be used for probabilistic model
formulation. Moreover, a-priori-SNR computation is also incorporated for
coefficient updation and noise presence estimation which operates similar to
the conventional VAD. However, conventional VAD scheme is based on the
hard threshold which is not capable to derive satisfactory performance and a
soft-decision based threshold is developed for improving the performance of
speech enhancement. An extensive simulation study is carried out using
MATLAB simulation tool on NOIZEUS speech database and a comparative
study is presented where proposed approach is proved better in comparison
with existing technique.
Previous research work has highlighted that neuro-signals of Alzheimer’s disease patients are least complex and have low synchronization as compared to that of healthy and normal subjects. The changes in EEG signals of Alzheimer’s subjects start at early stage but are not clinically observed and detected. To detect these abnormalities, three synchrony measures and wavelet-based features have been computed and studied on experimental database. After computing these synchrony measures and wavelet features, it is observed that Phase Synchrony and Coherence based features are able to distinguish between Alzheimer’s disease patients and healthy subjects. Support Vector Machine classifier is used for classification giving 94% accuracy on experimental database used. Combining, these synchrony features and other such relevant features can yield a reliable system for diagnosing the Alzheimer’s disease.
Attenuation correction designed for PET/MR hybrid imaging frameworks along with portion making arrangements used for MR-based radiation treatment remain testing because of lacking high-energy photon weakening data. We present a new method so as to uses the learned nonlinear neighborhood descriptors also highlight coordinating toward foresee pseudo-CT pictures starting T1w along with T2w MRI information. The nonlinear neighborhood descriptors are acquired through anticipating the direct descriptors interested in the nonlinear high-dimensional space utilizing an unequivocal constituent guide also low-position guess through regulated complex regularization. The nearby neighbors of every near descriptor inside the data MR pictures are looked during an obliged spatial extent of the MR pictures among the training dataset. By that point, the pseudo-CT patches are evaluated through k-closest neighbor relapse. The planned procedure designed for pseudo-CT forecast is quantitatively broke downward on top of a dataset comprising of coordinated mind MRI along with CT pictures on or after 13 subjects.
The cognitive radio prototype performance is to alleviate the scarcity of spectral resources for wireless communication through intelligent sensing and quick resource allocation techniques. Secondary users (SU’s) actively obtain the spectrum access opportunity by supporting primary users (PU’s) in cognitive radio networks (CRNs). In present generation, spectrum access is endowed through cooperative communication-based link-level frame-based cooperative (LLC) principle. In this SUs independently act as conveyors for PUs to achieve spectrum access opportunities. Unfortunately, this LLC approach cannot fully exploit spectrum access opportunities to enhance the throughput of CRNs and fails to motivate PUs to join the spectrum sharing processes. Therefore, to overcome this con, network level cooperative (NLC) principle was used, where SUs are integrated mutually to collaborate with PUs session by session, instead of frame based cooperation for spectrum access opportunities. NLC approach has justified the challenges facing in LLC approach. In this paper we make a survey of some models that have been proposed to tackle the problem of LLC. We show the relevant aspects of each model, in order to characterize the parameters that we should take in account to achieve a spectrum access opportunity.
In this paper, the author provides insights and lessons that can be learned from colleagues at American universities about their online education experiences. The literature review and previous studies of online educations gains are explored and summarized in this research. Emerging trends in online education are discussed in detail, and strategies to implement these trends are explained. The author provides several tools and strategies that enable universities to ensure the quality of online education. At the end of this research paper, the researcher provides examples from Arab universities who have successfully implemented online education and expanded their impact on the society. This research provides a strategy and a model that can be used by universities in the Middle East as a roadmap to implement online education in their regions.
Skybuffer AI: Advanced Conversational and Generative AI Solution on SAP Busin...Tatiana Kojar
Skybuffer AI, built on the robust SAP Business Technology Platform (SAP BTP), is the latest and most advanced version of our AI development, reaffirming our commitment to delivering top-tier AI solutions. Skybuffer AI harnesses all the innovative capabilities of the SAP BTP in the AI domain, from Conversational AI to cutting-edge Generative AI and Retrieval-Augmented Generation (RAG). It also helps SAP customers safeguard their investments into SAP Conversational AI and ensure a seamless, one-click transition to SAP Business AI.
With Skybuffer AI, various AI models can be integrated into a single communication channel such as Microsoft Teams. This integration empowers business users with insights drawn from SAP backend systems, enterprise documents, and the expansive knowledge of Generative AI. And the best part of it is that it is all managed through our intuitive no-code Action Server interface, requiring no extensive coding knowledge and making the advanced AI accessible to more users.
Have you ever been confused by the myriad of choices offered by AWS for hosting a website or an API?
Lambda, Elastic Beanstalk, Lightsail, Amplify, S3 (and more!) can each host websites + APIs. But which one should we choose?
Which one is cheapest? Which one is fastest? Which one will scale to meet our needs?
Join me in this session as we dive into each AWS hosting service to determine which one is best for your scenario and explain why!
Trusted Execution Environment for Decentralized Process MiningLucaBarbaro3
Presentation of the paper "Trusted Execution Environment for Decentralized Process Mining" given during the CAiSE 2024 Conference in Cyprus on June 7, 2024.
Ivanti’s Patch Tuesday breakdown goes beyond patching your applications and brings you the intelligence and guidance needed to prioritize where to focus your attention first. Catch early analysis on our Ivanti blog, then join industry expert Chris Goettl for the Patch Tuesday Webinar Event. There we’ll do a deep dive into each of the bulletins and give guidance on the risks associated with the newly-identified vulnerabilities.
Fueling AI with Great Data with Airbyte WebinarZilliz
This talk will focus on how to collect data from a variety of sources, leveraging this data for RAG and other GenAI use cases, and finally charting your course to productionalization.
This presentation provides valuable insights into effective cost-saving techniques on AWS. Learn how to optimize your AWS resources by rightsizing, increasing elasticity, picking the right storage class, and choosing the best pricing model. Additionally, discover essential governance mechanisms to ensure continuous cost efficiency. Whether you are new to AWS or an experienced user, this presentation provides clear and practical tips to help you reduce your cloud costs and get the most out of your budget.
HCL Notes und Domino Lizenzkostenreduzierung in der Welt von DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU und die Lizenzen nach dem CCB- und CCX-Modell sind für viele in der HCL-Community seit letztem Jahr ein heißes Thema. Als Notes- oder Domino-Kunde haben Sie vielleicht mit unerwartet hohen Benutzerzahlen und Lizenzgebühren zu kämpfen. Sie fragen sich vielleicht, wie diese neue Art der Lizenzierung funktioniert und welchen Nutzen sie Ihnen bringt. Vor allem wollen Sie sicherlich Ihr Budget einhalten und Kosten sparen, wo immer möglich. Das verstehen wir und wir möchten Ihnen dabei helfen!
Wir erklären Ihnen, wie Sie häufige Konfigurationsprobleme lösen können, die dazu führen können, dass mehr Benutzer gezählt werden als nötig, und wie Sie überflüssige oder ungenutzte Konten identifizieren und entfernen können, um Geld zu sparen. Es gibt auch einige Ansätze, die zu unnötigen Ausgaben führen können, z. B. wenn ein Personendokument anstelle eines Mail-Ins für geteilte Mailboxen verwendet wird. Wir zeigen Ihnen solche Fälle und deren Lösungen. Und natürlich erklären wir Ihnen das neue Lizenzmodell.
Nehmen Sie an diesem Webinar teil, bei dem HCL-Ambassador Marc Thomas und Gastredner Franz Walder Ihnen diese neue Welt näherbringen. Es vermittelt Ihnen die Tools und das Know-how, um den Überblick zu bewahren. Sie werden in der Lage sein, Ihre Kosten durch eine optimierte Domino-Konfiguration zu reduzieren und auch in Zukunft gering zu halten.
Diese Themen werden behandelt
- Reduzierung der Lizenzkosten durch Auffinden und Beheben von Fehlkonfigurationen und überflüssigen Konten
- Wie funktionieren CCB- und CCX-Lizenzen wirklich?
- Verstehen des DLAU-Tools und wie man es am besten nutzt
- Tipps für häufige Problembereiche, wie z. B. Team-Postfächer, Funktions-/Testbenutzer usw.
- Praxisbeispiele und Best Practices zum sofortigen Umsetzen
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdfMalak Abu Hammad
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
Salesforce Integration for Bonterra Impact Management (fka Social Solutions A...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on integration of Salesforce with Bonterra Impact Management.
Interested in deploying an integration with Salesforce for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Digital Marketing Trends in 2024 | Guide for Staying AheadWask
https://www.wask.co/ebooks/digital-marketing-trends-in-2024
Feeling lost in the digital marketing whirlwind of 2024? Technology is changing, consumer habits are evolving, and staying ahead of the curve feels like a never-ending pursuit. This e-book is your compass. Dive into actionable insights to handle the complexities of modern marketing. From hyper-personalization to the power of user-generated content, learn how to build long-term relationships with your audience and unlock the secrets to success in the ever-shifting digital landscape.
Main news related to the CCS TSI 2023 (2023/1695)Jakub Marek
An English 🇬🇧 translation of a presentation to the speech I gave about the main changes brought by CCS TSI 2023 at the biggest Czech conference on Communications and signalling systems on Railways, which was held in Clarion Hotel Olomouc from 7th to 9th November 2023 (konferenceszt.cz). Attended by around 500 participants and 200 on-line followers.
The original Czech 🇨🇿 version of the presentation can be found here: https://www.slideshare.net/slideshow/hlavni-novinky-souvisejici-s-ccs-tsi-2023-2023-1695/269688092 .
The videorecording (in Czech) from the presentation is available here: https://youtu.be/WzjJWm4IyPk?si=SImb06tuXGb30BEH .
Introduction of Cybersecurity with OSS at Code Europe 2024Hiroshi SHIBATA
I develop the Ruby programming language, RubyGems, and Bundler, which are package managers for Ruby. Today, I will introduce how to enhance the security of your application using open-source software (OSS) examples from Ruby and RubyGems.
The first topic is CVE (Common Vulnerabilities and Exposures). I have published CVEs many times. But what exactly is a CVE? I'll provide a basic understanding of CVEs and explain how to detect and handle vulnerabilities in OSS.
Next, let's discuss package managers. Package managers play a critical role in the OSS ecosystem. I'll explain how to manage library dependencies in your application.
I'll share insights into how the Ruby and RubyGems core team works to keep our ecosystem safe. By the end of this talk, you'll have a better understanding of how to safeguard your code.
GraphRAG for Life Science to increase LLM accuracyTomaz Bratanic
GraphRAG for life science domain, where you retriever information from biomedical knowledge graphs using LLMs to increase the accuracy and performance of generated answers
IJ-ICT, Vol. 6, No. 2, August 2017: 86 – 94, ISSN: 2252-8776
Analysis of Competition Fronting the Popularity of Content in Social Networks (Siham Hafidi)
In this work, we present a competitive situation among the contents of a social network, where each content seeks to attract more customers and maximize its own revenues; we cover the modeling of the problem and the formulation and analysis of the game.
Related Works:
Applying game theory to competition problems in social networks is an active research area, in which game-theoretic models have been developed and studied over the last decades [1-3].
Past works in this area have focused on competition between contents in social networks over popularity and over visibility space, in conjunction with advertisement issues to promote content. Both fully dynamic [3] and semi-dynamic [1] models have been proposed. It has been noted that these problems are similar in nature to the problem of competition over shelf space.
Our model is inspired by [1], where Altman handled a situation of competition between service providers that create content and that use acceleration methods, in particular publication in social networks, which increase the popularity of their content.
Altman examined x_i(t), the number of destinations that have obtained the content of seed i by time t, with x(t) = Σ_{i=1}^{N} x_i(t), and showed that:

lim_{t→∞} x_i(t) = M λ_i / λ    (1)

where λ = Σ_{i=1}^{N} λ_i.
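Equation (1) can be illustrated with a short Monte Carlo sketch. We assume here (our reading of the model) that each of the M destinations adopts whichever content reaches it first, the arrival time of content i being exponentially distributed with rate λ_i, so that content i wins a destination with probability λ_i/λ:

```python
import random

random.seed(0)

def limiting_shares(rates, M=20000):
    """Count, for M destinations, which content arrives first when
    content i's arrival time is Exponential(rates[i])."""
    wins = [0] * len(rates)
    for _ in range(M):
        times = [random.expovariate(lam) for lam in rates]
        wins[times.index(min(times))] += 1
    return wins

rates = [1.0, 3.0]                    # lambda_1, lambda_2 (illustrative)
wins = limiting_shares(rates)
# equation (1): x_i(inf) = M * lambda_i / lambda
predicted = [20000 * lam / sum(rates) for lam in rates]
```

With these rates, equation (1) predicts shares of 5000 and 15000 destinations; the simulated counts fall close to these values.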
2. UTILITY MODEL
In this section, we formulate the interaction among contents as a non-cooperative game. We assume that there is a set N of N competing contents of a social network, and let M be the number of subscribers who access the network and are interested in the N contents.
We assume that opportunities for accessing content i arrive at destination m according to an exponential law with parameter λ_i, starting at time t = 0. The time for content i to arrive at destination m is therefore exponentially distributed with parameter λ_i, and by the expected value of an exponentially distributed random variable t, the average time for content i to arrive at destination m is exactly the reciprocal of this parameter: E[t] = 1/λ_i.
We shall consider opportunities to accelerate the rate of a content by putting in some effort, such as publishing the content in different social networks. Without any such effort, we assume that λ_i = φ_i, where the φ_i are some constants.
Thus, we assume that each content i has a (unique) utility function. This function depends on the content variable λ_i, but also on a cost and a price. The utility function of content i is given by the following formula:

U_i(λ, p_i, γ_i) = p_i × M × λ_i / Σ_{j=1}^{N} λ_j − γ_i × M × (λ_i − φ_i)    (2)

where p_i and γ_i are two parameters which respectively represent the price and the production cost of a content. This function refers to the net income of the content, represented by the difference between the total income and the expenses of this content.
Where:
p_i × M × λ_i / Σ_{j=1}^{N} λ_j : the total income,
γ_i × M × (λ_i − φ_i) : the total cost to produce the content,
and:
γ_i : production cost of content i,
p_i : price of content i,
φ_i : popularity rate of content i,
M : the total number of subscribers to the network.
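As a sanity check, equation (2) translates directly into code; the function name and the sample values below are ours, chosen purely for illustration:

```python
def utility(i, lam, p, gamma, phi, M):
    """U_i from equation (2): revenue share minus acceleration cost."""
    income = p[i] * M * lam[i] / sum(lam)
    cost = gamma[i] * M * (lam[i] - phi[i])
    return income - cost

# illustrative values (ours): two contents, 3000 subscribers
lam, p, gamma, phi, M = [0.2, 0.3], [10.0, 5.0], [15.0, 5.0], [0.1, 0.1], 3000
u0 = utility(0, lam, p, gamma, phi, M)   # income 12000, cost 4500
u1 = utility(1, lam, p, gamma, phi, M)   # income 9000, cost 3000
```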
3. A NON-COOPERATIVE GAME FORMULATION
For a precise formulation of a non-cooperative game, we have to specify the number of players, the possible actions available to each player, any constraints that may be imposed on them, and finally the utility function of each player, which it attempts to optimize.
Here we consider the following formulation of the game:
Let J = {N, U_i(λ, p_i, γ_i), λ = (λ_1, ..., λ_N)} denote the non-cooperative λ game, where:
N = {1, ..., N} is the index set identifying the contents,
U_i is the utility function,
λ = (λ_1, ..., λ_N) is the strategy space of each content,
P = (p_1, ..., p_N) is the price vector,
γ = (γ_1, ..., γ_N) is the cost vector.
We assume that each content i controls only one decision variable, namely λ_i, and seeks to maximize its utility function. λ_i* is an equilibrium for content i when it maximizes U_i(λ, p_i, γ_i), which we can express as:

U_i(λ*, p_i, γ_i) = max_{λ_i} U_i(λ_1, ..., λ_N, p_i, γ_i)    (3)
3.1. The Nash Equilibrium
The Nash equilibrium is the natural solution concept of non-cooperative games; it describes a state of a game in which no player has an incentive to deviate unilaterally from its current strategy. In this section, we first study the Nash equilibrium solution of the induced game; we show that a Nash equilibrium solution exists and is unique by using the theory of concave games [4].
We recall that a non-cooperative game J is called concave if all players' utility functions are strictly concave with respect to their corresponding strategies [4]. According to [4], a Nash equilibrium exists in a concave game if the strategy space is compact and convex, and the utility function that any given player seeks to maximize is concave in its own strategy and continuous at every point in the product strategy space.
The Nash equilibrium in λ is formally defined as:
Definition 1: The vector λ* = (λ_1*, ..., λ_N*) is a Nash equilibrium of the game J = {N, U_i(λ, p_i, γ_i), λ = (λ_1, ..., λ_N)} if, ∀ (i, λ_i) ∈ (N, λ):

U_i(λ_1*, ..., λ_i*, ..., λ_N*) ≥ U_i(λ_1*, ..., λ_i, ..., λ_N*)    (4)
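Definition 1 can be checked numerically. The sketch below is ours: it uses the utility of equation (2) with the Table 1 values p = (10, 5) and γ = (15, 5), takes φ_i = 0 as a simplification (the φ_i term only shifts U_i by a constant in λ_i), and tests the equilibrium λ* = (0.16, 0.24) reported with Figure 1 against a grid of unilateral deviations:

```python
def U(i, lam, p=(10.0, 5.0), g=(15.0, 5.0), M=3000):
    """Equation (2) with phi_i = 0 (constant in lambda_i, so irrelevant here)."""
    return p[i] * M * lam[i] / sum(lam) - g[i] * M * lam[i]

star = (0.16, 0.24)
u_star = (U(0, star), U(1, star))
grid = [0.01 * k for k in range(1, 101)]          # candidate deviations
no_profitable_deviation = all(
    u_star[0] >= U(0, (d, star[1])) - 1e-9 and
    u_star[1] >= U(1, (star[0], d)) - 1e-9
    for d in grid
)
```

No deviation on the grid improves either player's utility, as inequality (4) requires.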
Theorem 1: A Nash equilibrium in terms of λ for the game J = {N, U_i(λ, p_i, γ_i), λ = (λ_1, ..., λ_N)} exists and is unique.
Proof:
To prove existence, we note that each content's strategy space λ_i is defined on a closed interval bounded by the minimum and maximum λ. Thus, the joint strategy space λ is a nonempty, convex, and compact subset of the Euclidean space ℝ^N. In addition, a Nash equilibrium exists if the utility functions are concave with respect to λ, as can be seen from the second derivative test; in other words, a Nash equilibrium exists if, ∀ i ∈ N:

∂²U_i(λ, p_i, γ_i)/∂λ_i² ≤ 0    (5)
We have:

∂²U_i(λ, p_i, γ_i)/∂λ_i² = ∂²/∂λ_i² ( p_i × M × λ_i / Σ_{j=1}^{N} λ_j − γ_i × M × (λ_i − φ_i) )

∂²U_i(λ, p_i, γ_i)/∂λ_i² = − 2 p_i M / (Σ_{j=1}^{N} λ_j)² + 2 p_i M λ_i / (Σ_{j=1}^{N} λ_j)³
After simplification, we get:
∂²U_i(λ, p_i, γ_i)/∂λ_i² = 2 p_i M (λ_i − Σ_{i=1}^{N} λ_i) / (Σ_{i=1}^{N} λ_i)³ ≤ 0, ∀ i ∈ N

since λ_i < Σ_{i=1}^{N} λ_i, and therefore ∂²U_i(λ, p_i, γ_i)/∂λ_i² ≤ 0.
This ensures the existence of a Nash equilibrium.
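The simplified second derivative can be cross-checked against a central finite difference of U_i; this sketch uses p_i = 10, γ_i = 15, M = 3000 and φ_i = 0 as our illustrative choices:

```python
def U(lam_i, others, p_i=10.0, g_i=15.0, M=3000):
    """Equation (2) for one content; 'others' is the sum of opponents' rates."""
    return p_i * M * lam_i / (lam_i + others) - g_i * M * lam_i

def d2_closed_form(lam_i, others, p_i=10.0, M=3000):
    """2 p_i M (lambda_i - sum(lambda)) / sum(lambda)^3 from the proof."""
    total = lam_i + others
    return 2 * p_i * M * (lam_i - total) / total ** 3

x, rest, h = 0.3, 0.5, 1e-4
fd = (U(x + h, rest) - 2 * U(x, rest) + U(x - h, rest)) / h ** 2
cf = d2_closed_form(x, rest)      # negative, confirming concavity of U_i
```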
To prove the uniqueness of the Nash equilibrium for this game, we proceed as follows:
We suppose that there are several equilibrium states, each influenced by the initial state of the game; in other words, that the equilibrium states depend on the initial values taken by the parameter λ of the game.
We then compute the equilibrium corresponding to two situations with different initial values of the parameter λ and, for each situation, plot the corresponding curve.
The following figure illustrates the curves obtained:
Figure 1. Nash Equilibrium for different values of λ
Figure 1 shows that, whatever strategies the players select in the initial state, the game always reaches the same equilibrium state, given by λ_1* = 0.16 and λ_2* = 0.24. These results ensure the uniqueness of the Nash equilibrium.
3.2. Algorithm: Best Response Dynamics
In this section, we study a fully distributed algorithm to learn the equilibrium parameter. Assuming that contents are selfish and that each dynamically chooses the best lambda that maximizes its profit, the distributed algorithm can be thought of as a protocol that players are programmed to follow. The design and analysis of distributed algorithms converging to equilibria in the context of games has also received considerable attention, most commonly the convergence of best response dynamics.
The solutions of the equations obtained by setting the partial derivatives to zero correspond to the best responses BR_λ^i(·) of each content as a function of the strategies of its opponents. Since the Nash equilibrium point is unique, a best-response-based dynamic converges to the joint λ Nash equilibrium. The parameter best response dynamic is detailed in Algorithm 1.
Algorithm 1: Best Response Dynamics
1. Initialize the parameter λ;
2. For each content i ∈ N at iteration t:
   λ_i^{t+1} = BR_λ^i(λ^t).
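For the utility of equation (2), the best response has a closed form: setting the first derivative to zero gives p_i Λ_{-i} / (λ_i + Λ_{-i})² = γ_i, where Λ_{-i} = Σ_{j≠i} λ_j, hence BR_i(Λ_{-i}) = sqrt(p_i Λ_{-i} / γ_i) − Λ_{-i}. The following is a minimal sketch of Algorithm 1 (ours), using p = (10, 5), γ = (15, 5) and the starting point λ = (0.01, 0.001) from Table 1:

```python
import math

def best_response(p_i, g_i, others):
    """Closed-form maximizer of U_i in lambda_i, clipped at zero."""
    return max(math.sqrt(p_i * others / g_i) - others, 0.0)

def best_response_dynamics(p, g, lam, iters=100):
    """Algorithm 1: repeatedly replace each lambda_i by its best response."""
    lam = list(lam)
    for _ in range(iters):
        for i in range(len(lam)):
            others = sum(lam) - lam[i]
            lam[i] = best_response(p[i], g[i], others)
    return lam

lam_star = best_response_dynamics((10.0, 5.0), (15.0, 5.0), (0.01, 0.001))
```

With these parameters the dynamics converge in a handful of iterations to λ* ≈ (0.16, 0.24), the equilibrium shown in Figure 1.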
3.3. Price of Anarchy
The concept of social welfare [5], or total surplus [6], is defined as the sum of the utilities of all agents in the system. It is well known in game theory that agent selfishness, such as at a Nash equilibrium, does not in general lead to a socially efficient situation. As a measure of the loss of efficiency due to the divergence of user interests, we use the Price of Anarchy (PoA) [7], which quantifies the efficiency loss caused by the actors' selfishness.
A PoA close to 1 indicates that the equilibrium is approximately socially optimal, and thus the consequences of selfish behavior are relatively benign. The term Price of Anarchy was first used by Koutsoupias and Papadimitriou [7], but the idea of measuring the inefficiency of equilibria is older.
As in [8], we measure the loss of efficiency due to the actors' selfishness as the quotient of the social welfare obtained at the Nash equilibrium and the maximum value of the social welfare:
PoA = [∑_{i=1}^{N} Ui(λ*)] / [max_λ ∑_{i=1}^{N} Ui(λ)]    (6)
where ∑_{i=1}^{N} Ui(λ*) represents the sum of the utilities of all players at the Nash equilibrium, and ∑_{i=1}^{N} Ui(λ) represents the social welfare.
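Equation (6) can be checked numerically. The sketch below again assumes the hypothetical utility Ui(λ) = Pi·λi/(λ1 + λ2) − γi·λi with Table 1's values P = (10, 5), γ = (15, 5) and the reported equilibrium (λ1*, λ2*) = (0.16, 0.24); the resulting PoA value is only illustrative, since the paper's exact utility is not reproduced here.

```python
# Price of Anarchy (Eq. 6) for a two-player game under a hypothetical
# utility U_i(lam) = p[i] * lam[i] / (lam[0] + lam[1]) - gamma[i] * lam[i].

def welfare(lam, p=(10, 5), gamma=(15, 5)):
    """Social welfare: sum of the two contents' utilities."""
    total = lam[0] + lam[1]
    return sum(p[i] * lam[i] / total - gamma[i] * lam[i] for i in range(2))

# Numerator of Eq. (6): welfare at the (reported) Nash equilibrium.
w_ne = welfare((0.16, 0.24))

# Denominator of Eq. (6): maximum welfare, approximated by a grid search.
# The grid is bounded away from zero because the utility is undefined at
# lam = (0, 0).
grid = [k / 200 for k in range(1, 201)]       # lam in (0, 1]
w_max = max(welfare((x, y)) for x in grid for y in grid)

poa = w_ne / w_max
print(round(poa, 3))                          # a value in (0, 1]
```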
4. NUMERICAL INVESTIGATIONS
To test the validity of our theoretical study, we perform a numerical study based on the best response dynamics, the contents' utility functions, and the Price of Anarchy. We consider a system with N = 2 contents seeking to maximize their revenues, and we plot curves of the Nash equilibrium while varying the parameters of the utility function. Table 1 summarizes the system parameter values used in this numerical study:
Table 1. System parameters used for the numerical examples
P1 = 10, P2 = 5, γ1 = 15, γ2 = 5, λ1 = 0.01, λ2 = 0.001
γ1 = γ2 = 5, P1 = P2 = 15, φ1 = φ2 = 5, M = 3000, λ̄1 = λ̄2 = 20
λ1 = λ2 = 0, P̄1 = P̄2 = 100, P1 = P2 = 1, γ̄1 = γ̄2 = 100, γ1 = γ2 = 1
Figure 2 presents the convergence of the parameter λ to the Nash equilibrium. It is clear that the best response dynamics converges to the unique Nash equilibrium for the parameter λ. We also observe that the speed of convergence is relatively high: four iterations are sufficient to reach the Nash equilibrium of the parameter λ.
Analysis of Competition Fronting the Popularity of Content in Social Networks (Siham Hafidi)
Figure 2. Convergence to the λ Nash Equilibrium
Then, we plot in Figures 3 and 4, respectively, the influence of the cost γ and of the price p on the parameter λ at the Nash equilibrium.
Figure 3. The influence of the cost γ on λ
Figure 3 shows the impact of the cost on the parameter λ: when the cost increases, λ decreases, and the time for a content n to reach a destination m increases progressively. This means that users of the network will judge the content poorly, which reduces the content's revenue. Players should therefore adopt configurations in which the parameter λ is large and the production cost is low, so as to increase their profits.
Figure 4. The influence of the price p on λ
Figure 4 shows the price impact on the parameter λ: when the price increases, λ increases and the propagation time t decreases, so the subscribers of the network are satisfied; consequently, these contents attract more customers, which increases their revenues.
In the following, we plot the variation of the utility as a function of the system parameters.
Figure 5. Utility variation as a function of the cost γ
Figure 5 shows that the utility decreases when the cost γ takes high values; in this situation, players must raise their prices to increase their profits.
Figure 6. Utility variation as a function of the price p
Figure 6 shows that the utility increases progressively as prices increase, which reveals that raising the price ensures optimal revenues.
Finally, we show the impact of the system parameters on the system efficiency in terms of the Price of Anarchy.
Figure 7. PoA variation as a function of the cost γ
Figure 7 shows the PoA variation as a function of the cost γ with fixed prices P1 = P2. For small values of γ, players are selfish (PoA < 0.5): each player tries to maximize his outcome without considering the other player. For higher costs, the contents converge to the same Price of Anarchy, which is close to 1, showing that players cooperate toward an optimal social welfare. We note that the Nash equilibrium then performs well and the loss of efficiency is only about 1%. This result indicates that the Nash equilibrium of this game is good and socially efficient.
Figure 8 shows the PoA variation as a function of the price p with fixed costs γ1 = γ2. We note that the Price of Anarchy decreases as the price rises. For low prices, the Price of Anarchy is close to 1, which means that players cooperate and the Nash equilibrium is socially efficient; for higher prices, players become more selfish, each acting alone to maximize his outcome.
Figure 8. PoA variation as a function of the price
5. CONCLUSION
We have presented in this paper a type of competitive interaction between contents in a social network, where the competition is over a limited common set of destinations. We analyzed the behavior of contents seeking to attract more consultations. The proposed model is based on a simple utility function that takes into account not only the characteristics of the current content but also those of all the other contents; it further relies on two parameters describing each content: its price and its production cost.
We showed that the equilibrium becomes socially efficient when the production cost of a content lies within a given interval. We also showed that the competition becomes fiercer as prices increase.
REFERENCES
[1] Eitan Altman, “A Semi-Dynamic Model for Competition Over Popularity and Over Advertisement Space in Social
Networks’’, INRIA Sophia-Antipolis, 2004 Route des Lucioles, 06902 Sophia-Antipolis Cedex, France.
[2] Eitan Altman, Parmod Kumar, Srinivasan Venkatramanan, Anurag Kumar, “Competition Over Timeline in Social
Networks’’, INRIA Sophia-Antipolis, 2004 Route des Lucioles, 06902 Sophia-Antipolis Cedex, France.
[3] Eitan Altman, “Stochastic Game Approach for Competition Over Popularity in Social Networks”, INRIA Sophia-
Antipolis, 2004 Route des Lucioles, 06902 Sophia-Antipolis Cedex, France.
[4] Rosen J., “Existence and Uniqueness of Equilibrium Points for Concave n-person Games”, Econometrica 33: 520–534, 1965.
[5] Maille, P. & Tuffin, B., “Analysis of Price Competition in a Slotted Resource Allocation Game”, in Proc. of IEEE
INFOCOM. 2008.
[6] Varian, H., “Microeconomic Analysis”, Norton New York. 1992.
[7] Koutsoupias, E. & Papadimitriou, C., “Worst-Case Equilibria”, in STACS, pp. 404–413. 1999.
[8] Guijarro, L., Pla, V., Vidal, J. & Martinez-Bauset, J. “Analysis of Price Competition under Peering and Transit
Agreements in Internet Service Provision to peer-to-peer Users”, IEEE Consumer Communications and Networking
Conference (CCNC2011), Las Vegas, Nevada USA, pp. 9–12. 2011.