The document proposes an S-curve network model to describe finite networks with bulk growth. It notes that most network models assume unbounded growth, whereas real networks are finite. The model adds a batch of new nodes at each time step following a logistic (S-shaped) growth curve, so the total number of nodes approaches a carrying capacity, and it connects each new node preferentially to existing high-degree nodes. The model aims to better capture the limited growth of real networks such as the Chinese IPv4 address network.
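As a rough illustration of the mechanism described above, here is a minimal Python sketch that adds node batches following a logistic curve and wires each new node preferentially to high-degree nodes. The function name, parameters, and the round-to-integer batch rule are illustrative assumptions; the summary does not give the paper's exact update rule.

```python
import random

def s_curve_network(capacity=200, r=0.5, t_max=30, m=2, seed=7):
    """Toy S-curve growth: the node count follows logistic increments
    dN = r * N * (1 - N / capacity), and each arriving node attaches to up
    to m existing nodes chosen with probability proportional to degree + 1."""
    random.seed(seed)
    degree = {0: 0, 1: 0}        # start from two isolated nodes
    edges = set()
    n = 2
    for _ in range(t_max):
        births = max(0, round(r * n * (1 - n / capacity)))
        for _ in range(births):
            existing = list(degree)                 # snapshot before adding
            new = n
            degree[new] = 0
            targets = random.choices(existing,
                                     weights=[degree[v] + 1 for v in existing],
                                     k=min(m, len(existing)))
            for t in set(targets):                  # dedupe repeated picks
                edges.add((t, new))
                degree[t] += 1
                degree[new] += 1
            n += 1
    return degree, edges
```

With the defaults above, the node count rises steeply at first and then saturates just below the carrying capacity, which is the S-curve behaviour the model is named for.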
Computational Frameworks for Higher-order Network Data Analysis (Austin Benson)
1. The document discusses computational frameworks for analyzing higher-order network data, where interactions can involve more than two nodes. Real-world systems often feature such higher-order interactions, yet they are commonly reduced to pairwise connections, losing structural information.
2. The author presents several datasets involving higher-order interactions and shows that predicting the formation of new higher-order connections is similar to link prediction but considers groups of nodes rather than individual links. Structural properties like edge density and tie strength influence the likelihood of simplicial closure.
3. Models are proposed to score open simplices based on structural features and predict which will transition to closed simplices. Accounting for higher-order structure provides new insights beyond traditional network analysis of pairwise connections.
This document discusses predicting new friendships in social networks using temporal information: supervised learning models are trained on temporal features extracted from past network interactions. The researchers used 28 months of anonymized Facebook data to train decision-tree and neural-network classifiers to predict new relationships, finding that models using temporal information outperformed those without it.
This work takes a temporal approach to link prediction in dynamic social networks, proposing a new predictor called Latest Common Friend (LCF). Social networks are modeled as sequences of snapshots over time periods, and each edge is assigned a weight based on its timestamp. The LCF score for a node pair is the cumulative weight of their common friends, giving more weight to friends with later timestamps. LCF outperforms traditional predictors such as Common Neighbor, Adamic-Adar, and the Jaccard coefficient on 8 real-world dynamic network datasets in terms of average AUC. Modeling networks temporally and weighting edges by timestamp allows LCF to better predict future links in dynamic social networks.
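The LCF idea above can be sketched in a few lines. The snapshot representation and the linear weighting, where later snapshots count more, are illustrative assumptions; the paper's exact weighting scheme may differ.

```python
def lcf_score(snapshots, u, v):
    """Latest Common Friend sketch: a common friend contributes the weight of
    the latest snapshot in which it links to both u and v. The linear weight
    (t + 1) / T is an illustrative assumption, not the paper's exact formula.

    `snapshots` is a time-ordered list of undirected edge sets."""
    def neighbours(es, x):
        return {b if a == x else a for a, b in es if x in (a, b)}

    T = len(snapshots)
    common = set()
    for es in snapshots:
        common |= neighbours(es, u) & neighbours(es, v)
    score = 0.0
    for w in common:
        latest = max(t for t, es in enumerate(snapshots)
                     if w in neighbours(es, u) and w in neighbours(es, v))
        score += (latest + 1) / T
    return score
```

A friend shared with u and v only in an early snapshot thus contributes less than one shared in the most recent snapshot, which is the temporal signal LCF exploits.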
EKAW2014 - A Hybrid Semantic Approach to Building Dynamic Maps of Research C... (Francesco Osborne)
A Hybrid Semantic Approach to Building Dynamic Maps of Research Communities
by F. Osborne, G. Scavo, E. Motta
URL: http://oro.open.ac.uk/41083/
In earlier papers we characterised the notion of diachronic topic-based communities – i.e., communities of people who work on semantically related topics at the same time. These communities are important to enable topic-centred analyses of the dynamics of the research world. In this paper we present an innovative algorithm, called Research Communities Map Builder (RCMB), which is able to automatically link diachronic topic-based communities over subsequent time intervals to identify significant events. These include topic shifts within a research community; the appearance and fading of a community; communities splitting, merging, spawning other communities; and others. The output of our algorithm is a map of research communities, annotated with the detected events, which provides a concise visual representation of the dynamics of a research area. In contrast with existing approaches, RCMB enables a much more fine-grained understanding of the evolution of research communities, with respect to both the granularity of the events and the granularity of the topics. This improved understanding can, for example, inform the research strategies of funders and researchers alike. We illustrate our approach with two case studies, highlighting the main communities and events that characterized the World Wide Web and Semantic Web areas in the 2000–2010 decade.
Higher-order link prediction and other hypergraph modeling (Austin Benson)
Higher-order link prediction and other forms of hypergraph modeling can better represent real-world systems composed of higher-order interactions, which are often reduced to pairwise ones. Hypergraphs model interactions among more than two nodes, such as groups of collaborating people, multiple recipients of an email, students gathering in groups, and drug compounds made of several substances.
Discovering the hidden treasure in your data using graphs (Ana Appel)
The document discusses networks and graph theory. It begins by introducing some key concepts in networks like nodes, edges, degree distribution, connected components, and clustering. It then discusses properties of real-world networks like having power law degree distributions (heavy tails), high clustering, and small world properties. The document emphasizes that networks are useful frameworks for understanding complex systems in many domains like social interactions, biological systems, and information networks.
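The basic quantities introduced above, degree distribution and clustering, are straightforward to compute directly. The following small helper is an illustrative sketch for an undirected graph given as an edge list; names and structure are my own.

```python
from collections import Counter

def graph_stats(edges):
    """Compute the degree distribution and average local clustering
    coefficient for a small undirected graph given as (u, v) edge pairs."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    degrees = Counter(len(nbrs) for nbrs in adj.values())

    # local clustering: fraction of a node's neighbour pairs that are linked
    def local_cc(node):
        nbrs = adj[node]
        k = len(nbrs)
        if k < 2:
            return 0.0
        links = sum(1 for a in nbrs for b in nbrs if a < b and b in adj[a])
        return 2 * links / (k * (k - 1))

    avg_cc = sum(local_cc(node) for node in adj) / len(adj)
    return degrees, avg_cc
```

On a triangle the average clustering is 1.0; heavy-tailed degree distributions of the kind the slides describe would show up as a `degrees` counter with a few very large keys.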
This document describes a new method for detecting community structure in complex networks based on node similarity. The method works as follows:
1. It calculates the similarity between all node pairs using a local node similarity metric.
2. It treats each node as its own community initially. Then it iteratively incorporates the community of the current node with the communities containing its most similar nodes.
3. It selects the most similar uncovered node as the next current node, and repeats the process until all nodes have been incorporated into communities.
The method requires only local network information and has a computational complexity of O(nk) for a network with n nodes and average degree k. It is evaluated on real and computer-generated networks, demonstrating its effectiveness in identifying community structure.
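The three steps above can be sketched as follows. Jaccard similarity of closed neighbourhoods stands in for the local similarity metric, which this summary does not specify, so treat the metric and the tie-breaking as assumptions.

```python
def similarity_communities(edges):
    """Sketch of the three-step method above, with Jaccard similarity of
    closed neighbourhoods as the (assumed) local similarity metric."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)

    def sim(a, b):
        na, nb = adj[a] | {a}, adj[b] | {b}
        return len(na & nb) / len(na | nb)

    comm = {node: i for i, node in enumerate(adj)}   # step 2: singletons
    covered = set()
    current = next(iter(adj))
    while len(covered) < len(adj):
        covered.add(current)
        if adj[current]:
            best = max(adj[current], key=lambda b: sim(current, b))
            old, new = comm[current], comm[best]     # merge the communities
            for node in comm:
                if comm[node] == old:
                    comm[node] = new
        uncovered = [node for node in adj if node not in covered]
        if not uncovered:
            break
        # step 3: the most similar uncovered node becomes the next current node
        current = max(uncovered, key=lambda node: sim(current, node))
    return comm
```

Each node is visited once and only neighbourhood sets are consulted, consistent with the O(nk) local-information claim.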
Linked Open (Geo)Data and the Distributed Ontology Language – a perfect match (Christoph Lange)
The Distributed Ontology Language (DOL) is a meta-language for integrating ontologies written in different languages. Our notion of "distributed" comprises logical heterogeneity within ontologies, modularity and reuse, and links across ontologies in different places of the Web. Not only can ontologies be distributed across the Web, but DOL's supply of supported ontology languages can also be extended in a decentralised way. For this functionality, DOL builds on the Linked Open Data (LOD) principles. But DOL also contributes to LOD use cases. Many current LOD applications are limited by the weak expressivity of the RDF and RDFS languages commonly used to express data and vocabularies, while completely switching to a more expressive language would impair scalability to big datasets. DOL addresses both the scalability and the expressivity requirements by allowing each aspect of a dataset to be represented in the most suitable language while keeping these different representations connected. This is particularly useful in geographic information systems, where big datasets (e.g. LinkedGeoData, the LOD version of OpenStreetMap) need to be integrated with formalisations of complex spatial notions (e.g. in the first-order language Common Logic).
Sub-Graph Finding Information over Nebula Networks (ijceronline)
Social and information networks have been studied extensively over the years. This paper studies a new query type: top-k subgraph search over heterogeneous, uncertain networks. Given an uncertain network of N objects, the goal is to discover the top-k subgraphs of entities with rare and surprising associations, i.e., to compute all subgraphs matching a query and rank the results by the rarity and interestingness of the associations they contain. To evaluate top-k selection queries, the authors compute an "information nebula" using a global structural-context similarity measure that is independent of connection subgraphs. Prior work on the matching problem supports only a naive match-then-rank strategy, which is infeasible for large graphs, where a query may have an enormous number of matching subgraphs. The paper identifies several important properties of top-k selection queries and proposes novel top-k mechanisms that exploit indexes to answer such subgraph queries efficiently.
The document presents a new greedy incremental approach for community detection in social networks. It begins by calculating node degrees and sorting nodes in descending order of degree. Initial communities are seeded with the highest-degree nodes, and the remaining nodes are added incrementally to a community whenever doing so increases that community's density. The approach is tested on standard datasets and detects communities reasonably well in less dense graphs; there is scope to improve performance on very dense graphs, for example by implementing the method with parallel processing.
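The greedy procedure just described can be sketched as follows. Seeding with two communities, requiring a node to touch the community, and accepting a non-decreasing density (so that cliques can keep growing) are my illustrative choices, not details from the paper.

```python
def greedy_density_communities(edges, num_seeds=2):
    """Greedy incremental sketch: seed communities with the highest-degree
    nodes, then attach each remaining node to the community whose internal
    edge density it does not decrease, preferring the densest result."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)

    def density(nodes):
        if len(nodes) < 2:
            return 0.0
        internal = sum(1 for a in nodes for b in adj[a] if b in nodes) / 2
        return internal / (len(nodes) * (len(nodes) - 1) / 2)

    ranked = sorted(adj, key=lambda x: len(adj[x]), reverse=True)
    comms = [{s} for s in ranked[:num_seeds]]
    for node in ranked[num_seeds:]:
        best, best_d = None, -1.0
        for c in comms:
            if not (adj[node] & c):          # node must touch the community
                continue
            new_d = density(c | {node})
            if new_d >= density(c) and new_d > best_d:
                best, best_d = c, new_d
        if best is not None:
            best.add(node)
    return comms
```

Nodes that would lower every community's density are simply left unassigned in this sketch; a fuller implementation would start new communities for them.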
An Efficient Modified Common Neighbor Approach for Link Prediction in Social ... (IOSR Journals)
This document discusses link prediction in social networks. It analyzes shortcomings of existing leading link prediction methods like common neighbor. It then proposes a modified common neighbor approach that takes into account both topological network structure and node similarities based on features. The approach generates a weight for each link based on the number of common features between nodes, divided by the total number of features. It then calculates a contribution score for each common neighbor by multiplying the weights of that neighbor's links to the two nodes. Experimental results on co-authorship networks show the modified common neighbor approach outperforms existing methods.
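The scoring rule described above translates directly into code. The dictionary names and the representation of features as sets are assumptions for illustration.

```python
def link_weight(features, a, b):
    """Weight of link (a, b): number of shared features over total features."""
    total = features[a] | features[b]
    return len(features[a] & features[b]) / len(total) if total else 0.0

def modified_cn(adj, features, u, v):
    """Modified common-neighbour score: each common neighbour w of u and v
    contributes the product of the weights of links (u, w) and (w, v)."""
    return sum(link_weight(features, u, w) * link_weight(features, w, v)
               for w in adj[u] & adj[v])
```

Unlike plain common-neighbour counting, a shared neighbour whose links carry little feature overlap with u and v contributes almost nothing to the score.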
This document discusses extracting communities from web archives over time. It begins by defining key terms used, such as the web community chart and notations for time periods and communities. It then describes types of changes that can occur to communities over time, such as emerging, dissolving, growing, shrinking, splitting, and merging. It also defines metrics to measure a community's evolution, such as growth rate, stability, disappearance rate, and merge rate. The document explains how web archives are used to build web graphs and extract community structures over multiple time periods to analyze how the community structure changes dynamically over time.
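Some of the metrics named above have natural set-based definitions. The formulas below are illustrative toy versions for one community tracked across two periods; the document's exact definitions may differ, and merge rate is omitted because it needs the full set of communities in each period.

```python
def evolution_metrics(prev, curr):
    """Illustrative set-based versions of three of the metrics named above,
    for one community observed in two consecutive periods (member sets)."""
    retained = prev & curr
    growth_rate = (len(curr) - len(prev)) / len(prev)
    stability = len(retained) / len(prev)    # fraction of members retained
    disappearance_rate = 1 - stability       # fraction of members lost
    return growth_rate, stability, disappearance_rate
```

Applied period by period over a web archive, such metrics let one label each community as emerging, growing, shrinking, or dissolving.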
An information-theoretic, all-scales approach to comparing networks (Jim Bagrow)
My presentation at NetSci 2018 on Portrait Divergence, a new approach to comparing networks that is simple, general-purpose, and easy to interpret.
The preprint: https://arxiv.org/abs/1804.03665
The code: https://github.com/bagrow/portrait-divergence
Simplicial closure and higher-order link prediction LA/OPT (Austin Benson)
- The speaker proposes a framework called "higher-order link prediction" to evaluate models of higher-order network data. This extends classical link prediction to predict new groups of nodes that will form simplices.
- Analysis of several datasets shows that they contain many "open triangles": triples of nodes that are pairwise connected by edges but do not appear together in any simplex. A simple probabilistic model accounts for the variation in the number of open triangles across datasets.
- The probability of simplicial closure depends on edge density and tie strength between the nodes, for both 3-node and 4-node groups.
- For higher-order link prediction, the speaker evaluates score functions based on edge weights, structural properties, whole-network similarities, and machine learning to predict which open triangles will close.
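The "open triangles" in the bullets above can be enumerated directly from an edge list plus a list of simplices. This is a naive sketch suitable for small datasets; real higher-order datasets would need indexed data structures.

```python
from itertools import combinations

def open_triangles(edges, simplices):
    """Enumerate triples that are pairwise connected by edges but never
    appear together inside a single simplex (naive, for small data)."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    closed = [frozenset(s) for s in simplices]
    out = []
    for a, b, c in combinations(sorted(adj), 3):
        if b in adj[a] and c in adj[a] and c in adj[b]:
            tri = frozenset((a, b, c))
            if not any(tri <= s for s in closed):
                out.append(tuple(sorted(tri)))
    return out
```

The higher-order link-prediction task is then to score exactly these open triples by how likely they are to later close into a simplex.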
The document discusses the development of the Semantic Web, which extends the current web to a web of data through the use of metadata, ontologies, and formal semantics. It describes key technologies like the Resource Description Framework (RDF) and Web Ontology Language (OWL) that add machine-readable meaning to web documents. The Semantic Web aims to enable machines to process and understand the semantics of information on the web.
Interlinking Data and Knowledge in Enterprises, Research and Society with Lin... (Christoph Lange)
The Linked Data paradigm has emerged as a powerful enabler for data and knowledge interlinking and exchange using standardised Web technologies.
In this article, we discuss our vision of how the Linked Data paradigm can be employed to evolve the intranets of large organisations -- be it enterprises, research organisations or governmental and public administrations -- into networks of internal data and knowledge.
In particular for large enterprises data integration is still a key challenge. The Linked Data paradigm seems a promising approach for integrating enterprise data. Like the Web of Data, which now complements the original document-centred Web, data intranets may help to enhance and flexibilise the intranets and service-oriented architectures that exist in large organisations. Furthermore, using Linked Data gives enterprises access to 50+ billion facts from the growing Linked Open Data (LOD) cloud. As a result, a data intranet can help to bridge the gap between structured data management (in ERP, CRM or SCM systems) and semi-structured or unstructured information in documents, wikis or web portals, and make all of these sources searchable in a coherent way.
Keynote at Baltic DB&IS 2014, 9 June 2014, Tallinn, Estonia
Forecasting the Spreading of Technologies in Research Communities @ K-CAP 2017 (Francesco Osborne)
Technologies such as algorithms, applications and formats are an important part of the knowledge produced and reused in the research process. Typically, a technology is expected to originate in the context of a research area and then spread and contribute to several other fields. For example, Semantic Web technologies have been successfully adopted by a variety of fields, e.g., Information Retrieval, Human Computer Interaction, Biology, and many others. Unfortunately, the spreading of technologies across research areas may be a slow and inefficient process, since it is easy for researchers to be unaware of potentially relevant solutions produced by other research communities. In this paper, we hypothesise that it is possible to learn typical technology propagation patterns from historical data and to exploit this knowledge i) to anticipate where a technology may be adopted next and ii) to alert relevant stakeholders about emerging and relevant technologies in other fields. To do so, we propose the Technology-Topic Framework, a novel approach which uses a semantically enhanced technology-topic model to forecast the propagation of technologies to research areas. A formal evaluation of the approach on a set of technologies in the Semantic Web and Artificial Intelligence areas has produced excellent results, confirming the validity of our solution.
This document lists and describes the top 5 men's style gifts for the 12 Days of #MBOB Gift Giving. The gifts include: 1) 300 SL gearshift knob cuff links made of stainless steel and acetate; 2) A three-piece BBQ set including tongs, fork, and spatula packaged in a metal case; 3) An MB microfiber cap with tapered gray accents; 4) A 2016 Mercedes-Benz Classic calendar featuring moments from the company's history; 5) Black metal aviator style sunglasses with gray tinted lenses.
Shan Jiang completed a Bachelor of Arts/Science degree in Software Engineering and Management from the University of Gothenburg in 2016. Over the course of the 180-credit program, Shan Jiang achieved strong grades in courses related to programming, software processes, quality management, and software architecture. Shan Jiang's bachelor's thesis project involved comparing approaches for mobile application development on iOS and Android platforms. The transcript is verifiable online through the university's registration system.
A Facebook built-in application that could reduce the distraction of using social media at night, so users can get a proper amount of peaceful sleep and, hopefully, a fresh wake-up in the morning.
P.S.: downloading is recommended for better resolution.
Accessories for handrails in stainless steel. Different ranges of railing systems in satin and polished finishes. AISI 304, AISI 316.
Easy fixing. Modular railing systems in stainless steel.
This document describes the Tribeca Collection bedroom set from Ligna Furniture. The set is available in graphite and snow white colors and features tropical hardwoods and hardwood veneers. It includes nightstands, panel beds, arched beds, a high chest, and a dresser. The pieces undergo a multi-step hand-rubbed finishing process and use high-quality wood and construction methods.
A digital magazine specialising in sports coverage of the Ciénega region, aimed at readers aged eighteen to thirty who want to keep up with sporting events in Ocotlán, Jamay, La Barca, Poncitlán and Atotonilco el Alto.
Guidelines for children's oral hygiene recommend wiping with moist gauze during the first 6 months, a fingertip toothbrush until age two and a half, toothpaste from age two and a half with parental supervision, and correct brushing from age five and a half using the Bass technique.
Sub-Graph Finding Information over Nebula Networksijceronline
Social and information networks have been extensively studied over years. This paper studies a new query on sub graph search on heterogeneous networks. Given an uncertain network of N objects, where each object is associated with a network to an underlying critical problem of discovering, top-k sub graphs of entities with rare and surprising associations returns k objects such that the expected matching sub graph queries efficiently involves, Compute all matching sub graphs which satisfy "Nebula computing requests" and this query is useful in ranking such results based on the rarity and the interestingness of the associations among nebula requests in the sub graphs. "In evaluating Top k-selection queries, "we compute information nebula using a global structural context similarity, and our similarity measure is independent of connection sub graphs". We need to compute the previous work on the matching problem can be harnessed for expected best for a naive ranking after matching for large graphs. Top k-selection sets and search for the optimal selection set with the large graphs; sub graphs may have enormous number of matches. In this paper, we identify several important properties of top-k selection queries, We propose novel top–K mechanisms to exploit these indexes for answering interesting sub graph queries efficiently.
The document presents a new greedy incremental approach for community detection in social networks. It begins by calculating the degree of nodes and sorting them in descending order. Initial communities are formed with the highest degree nodes. Then nodes are incrementally added to communities if it increases the community density. The approach is tested on standard datasets and able to detect communities reasonably well in less dense graphs. However, there is scope to improve performance on very dense graphs such as implementing it in parallel processing.
An Efficient Modified Common Neighbor Approach for Link Prediction in Social ...IOSR Journals
This document discusses link prediction in social networks. It analyzes shortcomings of existing leading link prediction methods like common neighbor. It then proposes a modified common neighbor approach that takes into account both topological network structure and node similarities based on features. The approach generates a weight for each link based on the number of common features between nodes, divided by the total number of features. It then calculates a contribution score for each common neighbor by multiplying the weights of that neighbor's links to the two nodes. Experimental results on co-authorship networks show the modified common neighbor approach outperforms existing methods.
This document discusses extracting communities from web archives over time. It begins by defining key terms used, such as the web community chart and notations for time periods and communities. It then describes types of changes that can occur to communities over time, such as emerging, dissolving, growing, shrinking, splitting, and merging. It also defines metrics to measure a community's evolution, such as growth rate, stability, disappearance rate, and merge rate. The document explains how web archives are used to build web graphs and extract community structures over multiple time periods to analyze how the community structure changes dynamically over time.
An information-theoretic, all-scales approach to comparing networksJim Bagrow
My presentation at NetSci 2018 on Portrait Divergence, a new approach to comparing networks that is simple, general-purpose, and easy to interpret.
The preprint: https://arxiv.org/abs/1804.03665
The code: https://github.com/bagrow/portrait-divergence
Simplicial closure and higher-order link prediction LA/OPTAustin Benson
- The speaker proposes a framework called "higher-order link prediction" to evaluate models of higher-order network data. This extends classical link prediction to predict new groups of nodes that will form simplices.
- Analysis of datasets shows that many have many "open triangles" of nodes connected by edges but not in a simplex. A simple probabilistic model can account for variation in open triangles.
- Simplicial closure probability depends on edge density and tie strength between nodes, both for 3 and 4-node groups.
- For higher-order link prediction, the speaker evaluates score functions based on edge weights, structural properties, whole-network similarities, and machine learning to predict which open triangles will close.
The document discusses the development of the Semantic Web, which extends the current web to a web of data through the use of metadata, ontologies, and formal semantics. It describes key technologies like the Resource Description Framework (RDF) and Web Ontology Language (OWL) that add machine-readable meaning to web documents. The Semantic Web aims to enable machines to process and understand the semantics of information on the web.
Interlinking Data and Knowledge in Enterprises, Research and Society with Lin...Christoph Lange
The Linked Data paradigm has emerged as a powerful enabler for data and knowledge interlinking and exchange using standardised Web technologies.
In this article, we discuss our vision how the Linked Data paradigm can be employed to evolve the intranets of large organisations -- be it enterprises, research organisations or governmental and public administrations -- into networks of internal data and knowledge.
In particular for large enterprises data integration is still a key challenge. The Linked Data paradigm seems a promising approach for integrating enterprise data. Like the Web of Data, which now complements the original document-centred Web, data intranets may help to enhance and flexibilise the intranets and service-oriented architectures that exist in large organisations. Furthermore, using Linked Data gives enterprises access to 50+ billion facts from the growing Linked Open Data (LOD) cloud. As a result, a data intranet can help to bridge the gap between structured data management (in ERP, CRM or SCM systems) and semi-structured or unstructured information in documents, wikis or web portals, and make all of these sources searchable in a coherent way.
Keynote at Baltic DB&IS 2014, 9 June 2014, Tallinn, Estonia
Forecasting the Spreading of Technologies in Research Communities @ K-CAP 2017Francesco Osborne
Technologies such as algorithms, applications and formats are an important part of the knowledge produced and reused in the research process. Typically, a technology is expected to originate in the context of a research area and then spread and contribute to several other fields. For example, Semantic Web technologies have been successfully adopted by a variety of fields, e.g., Information Retrieval, Human Computer Interaction, Biology, and many others. Unfortunately, the spreading of technologies across research areas may be a slow and inefficient process, since it is easy for researchers to be unaware of potentially relevant solutions produced by other research communities. In this paper, we hypothesise that it is possible to learn typical technology propagation patterns from historical data and to exploit this knowledge i) to anticipate where a technology may be adopted next and ii) to alert relevant stakeholders about emerging and relevant technologies in other fields. To do so, we propose the Technology-Topic Framework, a novel approach which uses a semantically enhanced technology-topic model to forecast the propagation of technologies to research areas. A formal evaluation of the approach on a set of technologies in the Semantic Web and Artificial Intelligence areas has produced excellent results, confirming the validity of our solution.
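The idea of learning propagation patterns from historical adoption data can be sketched, in a deliberately simplified form, as counting which research area tends to follow which in past adoption sequences. The area names and histories below are invented for illustration, and this frequency count is a toy stand-in for the paper's semantically enhanced technology-topic model:

```python
from collections import Counter, defaultdict

def learn_transitions(histories):
    """Count area-to-area transitions in chronological adoption
    sequences, one sequence per technology."""
    transitions = defaultdict(Counter)
    for seq in histories:
        for src, dst in zip(seq, seq[1:]):
            transitions[src][dst] += 1
    return transitions

def predict_next(transitions, area):
    """Forecast the most frequent follower of `area`."""
    followers = transitions.get(area)
    return followers.most_common(1)[0][0] if followers else None

# Toy historical data: the order in which three technologies spread.
histories = [
    ["Semantic Web", "Information Retrieval", "Biology"],
    ["Semantic Web", "Information Retrieval", "HCI"],
    ["Semantic Web", "HCI"],
]
model = learn_transitions(histories)
print(predict_next(model, "Semantic Web"))  # → Information Retrieval
```

Even this crude model captures the paper's two use cases: it anticipates where a technology may be adopted next, and the same counts could be inverted to alert a field about technologies that historically arrive from its neighbours.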
This document lists and describes the top 5 men's style gifts for the 12 Days of #MBOB Gift Giving. The gifts include: 1) 300 SL gearshift knob cuff links made of stainless steel and acetate; 2) A three-piece BBQ set including tongs, fork, and spatula packaged in a metal case; 3) An MB microfiber cap with tapered gray accents; 4) A 2016 Mercedes-Benz Classic calendar featuring moments from the company's history; 5) Black metal aviator style sunglasses with gray tinted lenses.
Shan Jiang completed a Bachelor of Arts/Science degree in Software Engineering and Management from the University of Gothenburg in 2016. Over the course of the 180-credit program, Shan Jiang achieved strong grades in courses related to programming, software processes, quality management, and software architecture. Shan Jiang's bachelor's thesis project involved comparing approaches for mobile application development on iOS and Android platforms. The transcript is verifiable online through the university's registration system.
A Facebook built-in application that reduces the distraction of using social media at night, so users can get a proper amount of peaceful sleep and, hopefully, a fresh wake-up in the morning.
P.S. Recommended to download for better resolution.
Accessories for handrails in stainless steel. Different ranges of railing systems in satin and polished finishes. AISI 304, AISI 316.
Easy fixing. Modular railing systems in stainless steel.
This document describes the Tribeca Collection bedroom set from Ligna Furniture. The set is available in graphite and snow white colors and features tropical hardwoods and hardwood veneers. It includes nightstands, panel beds, arched beds, a high chest, and a dresser. The pieces undergo a multi-step hand-rubbed finishing process and use high-quality wood and construction methods.
A digital magazine specialising in sports coverage of the Ciénega Region, aimed at readers aged eighteen to thirty who like to stay informed about sporting events in Ocotlán, Jamay, La Barca, Poncitlán and Atotonilco el Alto.
The guidelines for children's oral hygiene recommend the use of moist gauze during the first 6 months, a finger brush up to age two and a half, toothpaste from age two and a half with parental supervision, and correct brushing from age five and a half using the Bass technique.
This document analyzes the impact of network coding configuration on performance in ad hoc networks. It considers throughput loss and decoding loss as overhead of network coding. For static networks using physical-layer network coding, results show network coding does not improve goodput or delay/goodput tradeoff. For mobile ad hoc networks using random linear network coding, two transmission schemes are analyzed under different mobility models. The optimal network coding configuration is derived to optimize delay/goodput tradeoff and goodput for each scenario. Main findings are that network coding improves goodput for mobile networks, but does not significantly improve delay/goodput tradeoff except for one case. This is the first work to investigate scaling laws of network coding performance and configuration while considering
Optimal Configuration of Network Coding in Ad Hoc Networks (1crore projects)
IEEE PROJECTS 2015
1 Crore Projects is a leading provider of guidance for IEEE projects and real-time project work.
It has provided guidance to thousands of students and helped them benefit across all of its technology training.
Dot Net
DOTNET Project Domain list 2015
1. IEEE based on datamining and knowledge engineering
2. IEEE based on mobile computing
3. IEEE based on networking
4. IEEE based on Image processing
5. IEEE based on Multimedia
6. IEEE based on Network security
7. IEEE based on parallel and distributed systems
Java Project Domain list 2015
1. IEEE based on datamining and knowledge engineering
2. IEEE based on mobile computing
3. IEEE based on networking
4. IEEE based on Image processing
5. IEEE based on Multimedia
6. IEEE based on Network security
7. IEEE based on parallel and distributed systems
ECE IEEE Projects 2015
1. Matlab project
2. Ns2 project
3. Embedded project
4. Robotics project
Eligibility
Final Year students of
1. BSc (C.S)
2. BCA/B.E(C.S)
3. B.Tech IT
4. BE (C.S)
5. MSc (C.S)
6. MSc (IT)
7. MCA
8. MS (IT)
9. ME(ALL)
10. BE(ECE)(EEE)(E&I)
TECHNOLOGY USED AND FOR TRAINING IN
1. DOT NET
2. C sharp
3. ASP
4. VB
5. SQL SERVER
6. JAVA
7. J2EE
8. STRINGS
9. ORACLE
10. VB dotNET
11. EMBEDDED
12. MAT LAB
13. LAB VIEW
14. Multi Sim
CONTACT US
1 CRORE PROJECTS
Door No: 214/215,2nd Floor,
No. 172, Raahat Plaza, (Shopping Mall) ,Arcot Road, Vadapalani, Chennai,
Tamil Nadu, INDIA - 600 026
Email id: 1croreprojects@gmail.com
website:1croreprojects.com
Phone : +91 97518 00789 / +91 72999 51536
ALBERT-LÁSZLÓ BARABÁSI, NETWORK SCIENCE: THE BARABÁSI-ALBERT MODEL, ACKNOWLEDGEM... (Todd Turner)
This document discusses the Barabási-Albert model of network growth and the emergence of scale-free networks. It introduces two key concepts: (1) networks expand through the continuous addition of new nodes, and (2) new nodes prefer to attach to existing nodes that already have many connections, known as preferential attachment. The Barabási-Albert model, introduced in 1999, was the first to incorporate these two principles of growth and preferential attachment to explain the emergence of hubs and power-law degree distributions in real-world networks, resolving limitations of previous random network models.
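The two principles, growth and preferential attachment, can be illustrated with a short self-contained simulation. This is a minimal sketch of the standard construction (parameter names are mine), not code from the book:

```python
import random

def barabasi_albert(n, m, seed=None):
    """Grow a graph by preferential attachment: start from a complete
    graph on m + 1 nodes, then attach each new node to m distinct
    existing nodes chosen with probability proportional to degree."""
    rng = random.Random(seed)
    edges = [(i, j) for i in range(m + 1) for j in range(i)]
    # A node with degree k appears k times in `targets`, so uniform
    # sampling from this list realizes preferential attachment.
    targets = [v for edge in edges for v in edge]
    for new in range(m + 1, n):
        chosen = set()
        while len(chosen) < m:
            chosen.add(rng.choice(targets))
        for v in chosen:
            edges.append((new, v))
            targets.extend([new, v])
    return edges
```

Sampling attachment targets from the repeated-endpoint list is the usual trick: a node of degree k is picked with probability proportional to k, which is exactly what lets early, well-connected nodes grow into hubs and produces the power-law degree distribution.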
Six Degrees of Separation to Improve Routing in Opportunistic Networks (ijujournal)
This document discusses using small-world network concepts for routing in opportunistic networks. It analyzes three real-world datasets representing contact graphs and finds they exhibit small-world properties with high clustering and short path lengths. The document proposes a simple routing algorithm that applies these findings and concludes it outperforms other algorithms in simulations by taking temporal contact factors into account.
SIX DEGREES OF SEPARATION TO IMPROVE ROUTING IN OPPORTUNISTIC NETWORKS (ijujournal)
Opportunistic Networks are able to exploit social behavior to create connectivity opportunities. This paradigm uses pair-wise contacts for routing messages between nodes. In this context we investigated whether the "six degrees of separation" conjecture of small-world networks can be used as a basis to route messages in Opportunistic Networks. We propose a simple approach for routing that outperforms some popular protocols in simulations carried out with real-world traces using the ONE simulator. We conclude that static graph models are not suitable for underlay routing approaches in highly dynamic networks like Opportunistic Networks without taking account of temporal factors such as the time, duration and frequency of previous encounters.
Distribution of maximal clique size of the... (IJCNCJournal)
Our primary objective in this paper is to study the distribution of the maximal clique size of the vertices in complex networks. We define the maximal clique size for a vertex as the maximum size of the clique that the vertex is part of; such a clique need not be the maximum size clique for the entire network. We determine the maximal clique size of the vertices using a modified version of a branch-and-bound based exact algorithm that was originally proposed to determine the maximum size clique for an entire network graph. We then run this algorithm on two categories of complex networks. One category of networks captures the evolution of small-world networks from regular networks (according to the well-known Watts-Strogatz model) and their subsequent evolution to random networks; we show that the distribution of the maximal clique size of the vertices follows a Poisson-style distribution at different stages of the evolution of the small-world network to a random network; on the other hand, the maximal clique size of the vertices is observed to be invariant, and very close to the maximum clique size for the entire network graph, as the regular network is transformed to a small-world network. The second category of complex networks studied are real-world networks (ranging from random networks to scale-free networks), and we observe the maximal clique size of the vertices in five of the six real-world networks to follow a Poisson-style distribution. In addition to the above case studies, we also analyze the correlation between the maximal clique size and the clustering coefficient, as well as the assortativity index of the vertices with respect to maximal clique size and node degree.
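The per-vertex quantity studied here can be reproduced on small graphs with a plain Bron-Kerbosch enumeration of maximal cliques. This is a simple exact method chosen for illustration, not the modified branch-and-bound algorithm the paper uses:

```python
def maximal_cliques(adj):
    """Enumerate all maximal cliques of a small undirected graph
    (Bron-Kerbosch, no pivoting) given as {vertex: set(neighbours)}."""
    found = []
    def expand(R, P, X):
        if not P and not X:
            found.append(R)
            return
        for v in list(P):
            expand(R | {v}, P & adj[v], X & adj[v])
            P.remove(v)
            X.add(v)
    expand(set(), set(adj), set())
    return found

def maximal_clique_size(adj):
    """Maximal clique size per vertex: the size of the largest
    maximal clique each vertex belongs to."""
    size = {v: 0 for v in adj}
    for clique in maximal_cliques(adj):
        for v in clique:
            size[v] = max(size[v], len(clique))
    return size

# A triangle 0-1-2 with a pendant edge 2-3.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(maximal_clique_size(adj))  # → {0: 3, 1: 3, 2: 3, 3: 2}
```

Note the distinction the paper draws: vertex 3's maximal clique size is 2 (the edge 2-3 is a maximal clique) even though the maximum clique of the whole graph has size 3.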
A Distributed Approach to Solving the Overlay Mismatching Problem (Zhenyun Zhuang)
This document proposes an algorithm called Adaptive Connection Establishment (ACE) to address the topology mismatch problem between the logical overlay network and physical underlying network in unstructured peer-to-peer systems. ACE builds a minimum spanning tree among each source node and its neighbors within a certain diameter, optimizes connections not on the tree to reduce redundant traffic, while retaining search scope. It evaluates tradeoffs between topology optimization and information exchange overhead by changing the diameter. Simulation results show ACE can significantly reduce unnecessary P2P traffic by efficiently matching the overlay and physical network topologies.
International Journal of Computational Engineering Research (IJCER) (ijceronline)
International Journal of Computational Engineering Research (IJCER) is an international, English-language online journal publishing monthly. The journal publishes original research work that contributes significantly to furthering scientific knowledge in engineering and technology.
New prediction method for data spreading in social networks based on machine ... (TELKOMNIKA JOURNAL)
Information diffusion prediction is the study of the path of dissemination of news, information, or topics in structured data such as a graph. Research in this area is focused on two goals: tracing the information diffusion path and finding the members that determine the next path. The major problem of traditional approaches in this area is the use of simple probabilistic methods rather than intelligent methods. Recent years have seen growing interest in the use of machine learning algorithms in this field. Recently, deep learning, a branch of machine learning, has been increasingly used in the field of information diffusion prediction. This paper presents a machine learning method based on the graph neural network algorithm, which involves the selection of inactive vertices for activation based on the neighboring vertices that are active in a given scientific topic. Basically, in this method, information diffusion paths are predicted through the activation of inactive vertices by active vertices. The method is tested on three scientific bibliography datasets: The Digital Bibliography and Library Project (DBLP), Pubmed, and Cora. The method attempts to answer the question of who will publish the next article in a specific field of science. The comparison of the proposed method with other methods shows 10% and 5% improved precision on the DBLP and Pubmed datasets, respectively.
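The core activation idea, inactive vertices becoming active based on their active neighbours, can be shown with a minimal deterministic threshold rule. The graph, the threshold, and the rule itself are my illustrative choices; the paper's method scores vertices with a graph neural network rather than a fixed threshold:

```python
def activate_step(adj, active, threshold=2):
    """Return the active set after one diffusion step: an inactive
    vertex activates when at least `threshold` of its neighbours
    are already active."""
    newly = {v for v in adj if v not in active
             and sum(u in active for u in adj[v]) >= threshold}
    return active | newly

# Toy co-authorship graph; authors 1 and 2 have published on the topic.
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
print(activate_step(adj, {1, 2}))  # → {1, 2, 3}
```

Iterating the step until the active set stops changing traces out a predicted diffusion path through the graph.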
The document discusses analyzing the Web of Data (WoD) as a complex network at multiple scales. At the graph scale, the WoD contains over 100 nodes (datasets) connected by 350 edges. At the triple scale, a network of over 600,000 nodes and 800,000 edges was analyzed. Network analysis found the WoD exhibits properties like short average path lengths, power law degree distributions, and a few highly central nodes like DBpedia. Ongoing challenges include implicit links, multi-relations, and dynamics as data is continuously added.
Scale-Free Networks to Search in Unstructured Peer-To-Peer Networks (IOSR Journals)
This document discusses using scale-free networks to improve search efficiency in unstructured peer-to-peer networks. It proposes the EQUATOR architecture, which creates an overlay network topology based on the scale-free Barabasi-Albert model. Simulation results show that EQUATOR achieves good lookup performance comparable to the ideal Barabasi-Albert network, with low message overhead even under node churn. The scale-free topology allows random walks to efficiently locate resources by directing searches to high-degree "hub" nodes with greater knowledge of the network.
An Optimal Algorithm For Relay Node Assignment In Cooperative Ad Hoc Networks (Nancy Ideker)
The document presents an optimal polynomial time algorithm called ORA for solving the relay node assignment problem in cooperative ad hoc networks. The goal is to assign available relay nodes to multiple competing source-destination pairs in order to maximize the minimum data rate among all pairs.
The key contributions are:
1) Developing the ORA algorithm that uses a "linear marking" mechanism to achieve polynomial time complexity for relay node assignment.
2) Providing a formal proof that ORA finds the optimal solution.
3) Demonstrating via numerical results that ORA efficiently assigns relay nodes to maximize the minimum data rate.
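For intuition, the max-min objective can be reproduced on tiny instances by brute force. The rate numbers below are made up, and exhaustive enumeration is only feasible for toy sizes; ORA's contribution is reaching the optimum in polynomial time via its linear marking mechanism:

```python
from itertools import permutations

def max_min_rate(pairs, relays, rate):
    """Try every one-to-one assignment of relays to source-destination
    pairs and keep the one whose worst pair rate is highest
    (illustrates the objective, not ORA's algorithm)."""
    best_rate, best_assignment = float("-inf"), None
    for perm in permutations(relays, len(pairs)):
        worst = min(rate[(p, r)] for p, r in zip(pairs, perm))
        if worst > best_rate:
            best_rate, best_assignment = worst, dict(zip(pairs, perm))
    return best_rate, best_assignment

# Hypothetical achievable rates (Mbps) for each (pair, relay) choice.
rate = {("A", "r1"): 5, ("A", "r2"): 3, ("B", "r1"): 4, ("B", "r2"): 2}
print(max_min_rate(["A", "B"], ["r1", "r2"], rate))
# → (3, {'A': 'r2', 'B': 'r1'})
```

Note how the optimum gives pair A its second-best relay: greedily handing A the fastest relay would leave B with a rate of 2, worse for the minimum.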
Studies of heterogeneous Mobile Ad hoc Networks (MANETs) as well as inter-domain routing are still at a very early stage. After reviewing the existing literature, it was found that the problems associated with scalability, interoperability, and security are not defined up to the mark required for pervasive computing in future networks. Moreover, existing studies do not consider the complexities associated with heterogeneous MANETs to a large extent, leading to a narrowed research scope. Hence, this paper introduces a novel scheme called Secure, Scalable and Interoperable (SSI) routing, where a joint algorithm is designed, developed, and implemented. The outcome exhibits the correctness of the scheme via simulation, assisted by analysis, for inter-domain routing.
A Review Paper On IPv4 And IPv6: A Comprehensive Survey (Hannah Baker)
This document summarizes a review paper on IPv4 and IPv6. It discusses that IPv4 addresses are running out due to increased internet usage. IPv6 was created as a successor to IPv4 to address this issue by using a 128-bit address space providing vastly more addresses. However, transitioning the entire internet from IPv4 to IPv6 is challenging due to incompatibility between the protocols. The paper reviews literature on IPv4 and IPv6 addressing issues and proposed transition solutions.
A DYNAMIC ROUTE DISCOVERY SCHEME FOR HETEROGENEOUS WIRELESS SENSOR NETWORKS B... (csandit)
With the development of new networking paradigms and wireless protocols, nodes with different capabilities are used to form heterogeneous networks. The performance of this kind of network is seriously deteriorated by bottlenecks inside the network. In addition, because of application requirements, different routing schemes are required for one particular application. This calls for a tool for designing protocols that avoid the bottlenecked nodes and adapt to application requirements. Polychromatic sets theory has the ability to do so. This paper demonstrates the applications of polychromatic sets theory in route discovery and protocol design for heterogeneous networks. Extensive simulations show that nodes with high priority are selected for routing, which greatly increases the performance of the network. This demonstrates that a new type of graph theory can be applied to solve problems of complex networks.
COMMUNICATIONS OF THE ACM November 2004, Vol. 47, No. 11, p. 15 (.docx) (monicafrancis71118)
Network Laws
The Profession of IT
Peter J. Denning

Many networks, physical and social, are complex and scale-invariant. This has important implications, from the spread of epidemics and innovations to protection from attack.

Networks are hot. The Internet has made it possible to observe and measure linkages representing relationships of all kinds. We now recognize networks everywhere: air traffic, banking, chemical bonds, data communications, ecosystems, finite element grids, fractals, interstate highways, journal citations, material structures, nervous systems, oil pipelines, organizational networks, power grids, social structures, transportation, voice communication, water supply, Web URLs, and more.

Several fields are collaborating on the development of network theory, measurement, and mapping: mathematics (graph theory), sociology (networks of influence and communication), computing (Internet), and business (organizational networks). This convergence has produced useful results for risk assessment and reduction in complex infrastructure networks, attacking and defending networks, protecting against network connectivity failures, operating businesses, spreading epidemics (pathogens as well as computer viruses), and spreading innovation. Here, I will survey the fundamental laws of networks that enable these results.

Defining a Network
A network is usually defined as a set of nodes and links. The nodes represent entities such as persons, machines, molecules, documents, or businesses; the links represent relationships between pairs of entities. A link can be directed (one-way relationship) or undirected (mutual relationship). A hop is a transition from one node to another across a single link separating them. A path is a series of hops. Networks are very general: they can represent any kind of relation among entities.

Some common network topologies (interconnection patterns) have their own names: clique or island (a connected subnetwork that may be isolated from other cliques), hierarchical network (tree structured), hub-and-spoke network (a special node, the hub, connected directly to every other node), and multi-hub network (several hubs connected directly to many nodes). Some network topologies are planned, such as the electric grid, the interstate highway system, or the air traffic system; others are unplanned. In his seminal papers about the Internet, Paul Baran proposed that a planned, distributed network would be more resilient to failures than a hub-and-spoke network.

A host of physical systems easily fit a network model. Perhaps less obvious is that human social networks also fit the model. The individuals of an organization are linked by their relationships: who emails whom, who seeks advice from whom, or who influences whom.
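The node/link/hop/path vocabulary defined above maps directly onto a breadth-first search, which finds the minimum number of hops between two nodes. A generic sketch over a hypothetical hub-and-spoke network:

```python
from collections import deque

def hops(adj, src, dst):
    """Minimum number of hops between src and dst in an undirected
    network given as an adjacency list, via breadth-first search."""
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == dst:
            return dist
        for nxt in adj[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None  # no path between the two nodes

# A hub-and-spoke topology: the hub links directly to every spoke.
adj = {"hub": ["a", "b", "c"], "a": ["hub"], "b": ["hub"], "c": ["hub"]}
print(hops(adj, "a", "c"))  # → 2 (spoke to hub to spoke)
```

The two-hop result also hints at the fragility Baran worried about: every spoke-to-spoke path in this topology runs through the single hub.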
The document reviews different routing protocols for mobile ad hoc networks (MANETs). It begins with an introduction to MANETs and discusses some of their key characteristics including decentralized operation and dynamic topology. It then reviews several popular routing protocol categories for MANETs - flat, hierarchical, and location-based. Flat protocols like distance vector and link state are discussed as well as their limitations in dynamic MANET environments. The review covers over a dozen different specific routing protocols that have been proposed.
This paper targets a comparative study of the throughput in bits/second, packet throughput, network delay, and response time in seconds of both IPv4 and IPv6. Since the system proposes coexistence of both IPv4 and IPv6, the solution projected in this paper is "dual stack where you can, and tunnel where you have to".
Technology S-Curve Analysis (TSC) is a method to determine the relationship between investment in improving technology and corresponding market sales. The curve typically takes the shape of an S as the market develops through phases of launch, growth, maturity, and decline. TSC Analysis is particularly useful for launching new technologies and assessing the remaining lifespan of technologies to guide business strategy and shifting resources to next-generation technologies. TSC Analysis provides intelligence on the current technology lifespan, potential limits, economics of innovation, a company's position, and quantifying investment versus payoff.
The document discusses S curves, which plot cumulative project quantities like costs or hours against time. It describes different types of S curves, including baseline, target, and actual S curves. The document explains how S curves can be used to analyze a project's progress, growth, and slippage. It also provides details on generating S curves from project schedule data and interpreting S curve analyses.
The S curve is used to spread project costs over time in a bell-shaped curve. It is based on the sine and cosine waves, with the integral of the sine wave producing a leaning S-shaped cumulative cost curve. The document describes tweaking the basic S curve model by introducing variables to skew the timing of costs (make them front-loaded or back-loaded) and adjust the peakness to flatten or steepen the curve. Equations are provided to calculate cost at any point in time or rate of spending based on total project cost, time, skewness, and peakness variables.
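The basic (unskewed) form described above can be written down directly: if the spending rate follows a half sine wave over the project duration, integrating it gives the leaning-S cumulative cost curve. The skewness and peakness variables are omitted in this minimal sketch:

```python
import math

def cumulative_cost(total, t, T):
    """Cumulative spend at time t for a project of duration T and
    total cost `total`, when the spending rate follows a half sine
    wave: integrating sin(pi * t / T) yields the leaning-S curve."""
    return total * (1 - math.cos(math.pi * t / T)) / 2
```

At t = 0 nothing has been spent, at t = T/2 exactly half the budget has been spent, and spending tapers off toward t = T; the skewness and peakness variables described in the document would deform this basic shape to make the curve front-loaded, back-loaded, flatter, or steeper.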
This document discusses technology S-curves and their implications for renewable energy alternatives. It finds that both wind and geothermal energy are poised to become more economical than fossil fuels relatively soon based on historical data. Government R&D funding has been lower for wind and geothermal compared to solar, and funding for fossil fuels may be excessive given their diminishing performance. Analyzing renewable technologies through an S-curve lens provides insights not found using other approaches and has implications for future government and industry investment decisions.
The document provides safety instructions and an overview of the features and operation of the Dual 31 Band Equalizer. Key points include:
- The equalizer has 31 frequency bands covering 20Hz to 20kHz that can each be boosted or cut by up to 12dB or 6dB depending on the range setting.
- It has electronically balanced inputs and outputs, variable low cut filter, bypass switches, and LED VU meters.
- The document explains the controls on the front and rear panels and provides instructions for setting up and using the equalizer for different applications such as with a mixer, patchbay, or real time analyzer.
This document discusses the application of S-shaped curves, also known as logistic curves or S-curves, to model the evolution of systems over time. It provides background on the origin and development of S-curves as models of growth. S-curves have been widely used across many domains to describe trends like population growth, market penetration of new technologies, and diffusion of innovations. The document reviews several examples of S-curve applications and discusses their use in areas like technological forecasting and TRIZ problem solving. It argues that S-curves provide forecasting power because growth is ultimately limited by scarce resources based on mathematical concepts like Verhulst's logistic growth equation.
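Verhulst's logistic equation, mentioned above, has a closed-form solution that is easy to evaluate (symbols: carrying capacity K, initial value P0, growth rate r; the numbers in the usage below are arbitrary):

```python
import math

def logistic(t, K, P0, r):
    """Closed-form solution of Verhulst's equation
    dP/dt = r * P * (1 - P / K): growth is near-exponential at
    first, then saturates at the carrying capacity K."""
    return K / (1 + (K - P0) / P0 * math.exp(-r * t))
```

This is the mathematical reason for the forecasting power the document claims: early data points pin down r and P0, while the inevitable resource limit K forces the curve to flatten, producing the characteristic S.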
This document presents a case study analyzing the S-curve of a construction project. It describes the original planned S-curve, which projected slow initial progress accelerating in the middle before slowing at completion. Actual progress fell far below this, requiring three adjustments - adding scope, and extending the schedule by 9 months total - to get the project back on track. By the third adjustment, actual progress more closely aligned with the planned S-curve, allowing the project to progress through start-up, growth, and commissioning phases to completion. The case study demonstrates how S-curve analysis can identify issues and inform decisions to successfully manage a project over its lifecycle.
The document provides documentation and a tutorial for implementing an S-curve motion profile in a MyoStat Motion Control system. An S-curve allows for smooth acceleration and deceleration during motion to prevent damage. The K69 parameter controls the S-curve function, with higher values creating a more pronounced curve. The tutorial instructs the user to set parameters, collect speed data, and graph results at different K69 values to observe the S-curve motion profile.
The document discusses optimizing S-curve velocity profiles for motion control. An S-curve velocity profile is a smooth curve that is differentiable to the second order. The document describes decomposing 3D motion into 1D components and synthesizing physical constraints on jerk, acceleration, and velocity for each axis. It then presents calculations for generating an optimized S-curve velocity profile that satisfies the constraints and produces smooth, fast, and accurate motion between a start and end point.
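The idea of a velocity profile that is smooth up to the second derivative can be seen by numerically integrating a bang-bang jerk profile. This is a simplified 1-D sketch with parameters of my choosing, not the paper's constrained optimization over jerk, acceleration, and velocity limits:

```python
def s_curve_velocity(j_max, T, n=1000):
    """Euler-integrate a bang-bang jerk profile: +j_max for the first
    half of a 2*T move, -j_max for the second half. Acceleration
    ramps up and back down to zero, so velocity traces a smooth S
    ending near j_max * T**2."""
    dt = 2 * T / n
    a = v = 0.0
    samples = []
    for i in range(n):
        jerk = j_max if i < n // 2 else -j_max
        a += jerk * dt
        v += a * dt
        samples.append(v)
    return samples
```

Because acceleration returns to zero at the end of the segment, the velocity curve has no corners, which is what makes S-curve profiles gentler on machinery than trapezoidal ones.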
The document discusses how to generate S-curves in Oracle Primavera P6 to analyze project progress and performance. S-curves show cumulative costs, labor hours, or other metrics plotted against time and typically have an S-shape. In Primavera P6, S-curves can be generated by activity or resource in the usage profile windows. Various analysis can then be done by comparing baseline, target, and actual S-curves to determine project growth, slippage, and progress. The S-curves can also be published from Primavera P6 as prints or embedded in webpages.
This document describes a methodology for modeling S-curves to forecast cost distribution over time for construction projects. It discusses three approaches to S-curve modeling: 1) Analyzing cost-time curves from literature, 2) Examining data from completed projects, and 3) Creating standard critical path models. The results from each approach are presented through mathematical expressions and diagrams. Finally, the results of the three approaches are integrated to develop final S-curves showing standard cost dynamics over time for different project types (buildings, tunnels, motorways). The proposed methodology can be used to forecast cost schedules during early project phases when detailed information is limited.
This document summarizes an optimization model for a contractor's S-curve developed using genetic algorithms. The model aims to minimize total construction costs while considering the tradeoff between different resource productivity and costs of resource mobilization/demobilization. An example project is optimized, resulting in a smoother resource allocation and lower total cost compared to the early and late schedules. The optimal S-curve developed from the model provides a baseline for measuring impacts of changes on construction costs.
This document discusses innovation lifecycles and how companies can leverage S-curves to drive breakthrough growth. It contains the following key points:
1. Products, services, and technologies progress along S-curves over time from emergence to maturity. Understanding where opportunities fall in their lifecycles can help companies innovate and avoid disruption.
2. Companies often fail to transition to new S-curves due to not focusing on or defending emerging technologies, cultural inertia, or lack of foresight.
3. Organizational strategies like leadership, structure, and metrics should evolve to support innovation or optimization depending on a business's point in the lifecycle. Tailoring the organization maximizes growth across the portfolio.
This document discusses three frameworks used by Gartner Group to analyze information systems research: the technology maturity curve, adoption curve, and identification of strategic technologies. The maturity curve tracks how a technology matures over time through various stages from embryonic to obsolescence. The adoption curve shows how technologies are adopted cumulatively by organizations over time. Considering where technologies fall on these curves can provide insights into appropriate research questions and methodologies. Identifying strategic technologies may help determine promising areas for new research.
The document discusses an S-curve model that relates per-capita income to insurance penetration. It finds:
1) Estimating life and non-life insurance penetration globally yields an S-curve, where income elasticity starts and ends at 1 but exceeds 1 at intermediate income levels.
2) For life insurance, the income at maximum elasticity is $15,000, while for non-life it is $10,000.
3) Using purchasing power parities rather than market exchange rates increases estimated penetration levels and elasticities for developing countries.
This document discusses exploring the limits of technology S-curves by examining their usefulness for managers in planning new technology development. It focuses on the disk drive industry as a case study. The author makes four key points: 1) S-curves accurately describe industry-level technology substitution patterns, 2) to improve products, managers must oversee both component and architectural technology development, 3) S-curves describe individual firm experiences with components but cannot prescribe strategy, and 4) attackers gain advantage in this industry through architectural, not just technological, innovation in new applications.
This document describes proposed changes to improve a model for estimating project S-curves based on project attributes and conditions. The key changes are:
1. Changing the model outputs from the polynomial function's parameters (a, b) to the inflection point position (p) and slope (s), which better indicate schedule performance.
2. Adding two new input factors - project difficulty and participant competence - to capture schedule influences beyond basic attributes like cost and duration.
3. Using fuzzy inference systems instead of neural networks to build transparent input-output relationships from historical project data, applying fuzzy clustering and hybrid training.
The goal is to develop a model that more accurately predicts project progress curves by incorporating schedule performance indicators.
The document discusses patterns of technological substitution that challenge the traditional S-curve model. It presents several historical examples that demonstrate more complex substitution patterns than the smooth S-curve, including: concatenated generations in steelmaking technologies; overlapping generations in IBM mainframe computers; and a case of long-term feedback reversing the substitution of DDT as an insecticide due to environmental concerns. The author argues that accounting for these complex real-world patterns requires broadening the theoretical framework for understanding technological substitutions.
This document discusses the history and development of semiconductors and integrated circuits. It describes how the transistor enabled electronics to be performed using silicon, leading to solid-state electronics like transistor radios. The integrated circuit was developed using the planar process to fabricate multiple transistors on silicon wafers. Moore's Law, proposed in 1965, predicted that the number of transistors on an integrated circuit would double every 18 months. This prediction has proven remarkably accurate and has driven innovation in the semiconductor industry for over 40 years. Continued shrinking of circuit elements has enabled faster processing speeds, higher functionality, and lower costs over time.
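Moore's Law as stated here is a simple exponential. Using an illustrative 2,300-transistor starting chip, the projected count after t years with an 18-month doubling period is:

```python
def transistor_count(n0, years, doubling_period=1.5):
    """Project a transistor count under Moore's Law: the count
    doubles every `doubling_period` years (18 months)."""
    return n0 * 2 ** (years / doubling_period)

# After 3 years the count has doubled twice: 2300 -> 9200.
print(transistor_count(2300, 3))
```

The same one-liner shows why the prediction compounds so dramatically: over 40 years the exponent is about 26.7, a factor of roughly a hundred million.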
The document summarizes the key points from the book "Jumping the S-Curve" by Paul Nunes and Tim Breene. It discusses how companies can succeed by repeatedly climbing business S-curves and jumping to new S-curves before performance plateaus. To climb an S-curve, companies must identify a large market opportunity, build the necessary capabilities, and attract top talent. To jump to a new S-curve, companies must manage hidden S-curves, develop edge-centric strategies, reconstitute leadership teams early, and cultivate a talent pipeline. High performers are able to continually reinvent themselves through repeated S-curve climbing and jumping.
artificial intelligence and data science contents.pptxGauravCar
What is artificial intelligence? Artificial intelligence is the ability of a computer or computer-controlled robot to perform tasks that are commonly associated with the intellectual processes characteristic of humans, such as the ability to reason.
› ...
Artificial intelligence (AI) | Definitio
Redefining brain tumor segmentation: a cutting-edge convolutional neural netw...IJECEIAES
Medical image analysis has witnessed significant advancements with deep learning techniques. In the domain of brain tumor segmentation, the ability to
precisely delineate tumor boundaries from magnetic resonance imaging (MRI)
scans holds profound implications for diagnosis. This study presents an ensemble convolutional neural network (CNN) with transfer learning, integrating
the state-of-the-art Deeplabv3+ architecture with the ResNet18 backbone. The
model is rigorously trained and evaluated, exhibiting remarkable performance
metrics, including an impressive global accuracy of 99.286%, a high-class accuracy of 82.191%, a mean intersection over union (IoU) of 79.900%, a weighted
IoU of 98.620%, and a Boundary F1 (BF) score of 83.303%. Notably, a detailed comparative analysis with existing methods showcases the superiority of
our proposed model. These findings underscore the model’s competence in precise brain tumor localization, underscoring its potential to revolutionize medical
image analysis and enhance healthcare outcomes. This research paves the way
for future exploration and optimization of advanced CNN models in medical
imaging, emphasizing addressing false positives and resource efficiency.
International Conference on NLP, Artificial Intelligence, Machine Learning an...gerogepatton
International Conference on NLP, Artificial Intelligence, Machine Learning and Applications (NLAIM 2024) offers a premier global platform for exchanging insights and findings in the theory, methodology, and applications of NLP, Artificial Intelligence, Machine Learning, and their applications. The conference seeks substantial contributions across all key domains of NLP, Artificial Intelligence, Machine Learning, and their practical applications, aiming to foster both theoretical advancements and real-world implementations. With a focus on facilitating collaboration between researchers and practitioners from academia and industry, the conference serves as a nexus for sharing the latest developments in the field.
Optimizing Gradle Builds - Gradle DPE Tour Berlin 2024Sinan KOZAK
Sinan from the Delivery Hero mobile infrastructure engineering team shares a deep dive into performance acceleration with Gradle build cache optimizations. Sinan shares their journey into solving complex build-cache problems that affect Gradle builds. By understanding the challenges and solutions found in our journey, we aim to demonstrate the possibilities for faster builds. The case study reveals how overlapping outputs and cache misconfigurations led to significant increases in build times, especially as the project scaled up with numerous modules using Paparazzi tests. The journey from diagnosing to defeating cache issues offers invaluable lessons on maintaining cache integrity without sacrificing functionality.
Use PyCharm for remote debugging of WSL on a Windo cf5c162d672e4e58b4dde5d797...shadow0702a
This document serves as a comprehensive step-by-step guide on how to effectively use PyCharm for remote debugging of the Windows Subsystem for Linux (WSL) on a local Windows machine. It meticulously outlines several critical steps in the process, starting with the crucial task of enabling permissions, followed by the installation and configuration of WSL.
The guide then proceeds to explain how to set up the SSH service within the WSL environment, an integral part of the process. Alongside this, it also provides detailed instructions on how to modify the inbound rules of the Windows firewall to facilitate the process, ensuring that there are no connectivity issues that could potentially hinder the debugging process.
The document further emphasizes on the importance of checking the connection between the Windows and WSL environments, providing instructions on how to ensure that the connection is optimal and ready for remote debugging.
It also offers an in-depth guide on how to configure the WSL interpreter and files within the PyCharm environment. This is essential for ensuring that the debugging process is set up correctly and that the program can be run effectively within the WSL terminal.
Additionally, the document provides guidance on how to set up breakpoints for debugging, a fundamental aspect of the debugging process which allows the developer to stop the execution of their code at certain points and inspect their program at those stages.
Finally, the document concludes by providing a link to a reference blog. This blog offers additional information and guidance on configuring the remote Python interpreter in PyCharm, providing the reader with a well-rounded understanding of the process.
Discover the latest insights on Data Driven Maintenance with our comprehensive webinar presentation. Learn about traditional maintenance challenges, the right approach to utilizing data, and the benefits of adopting a Data Driven Maintenance strategy. Explore real-world examples, industry best practices, and innovative solutions like FMECA and the D3M model. This presentation, led by expert Jules Oudmans, is essential for asset owners looking to optimize their maintenance processes and leverage digital technologies for improved efficiency and performance. Download now to stay ahead in the evolving maintenance landscape.
arXiv:1005.2122v2
S-curve networks and an approximate method for estimating degree
distributions of complex networks*
Guo Jin-Li (郭进利)
Business School, University of Shanghai for Science and Technology, Shanghai 200093, China
In the study of complex networks, almost all theoretical models have the property of infinite growth, but the size of actual networks is finite. Based on statistics of China's Internet IPv4 (Internet Protocol version 4) addresses, this paper proposes a forecasting model using the S curve (logistic curve). The growing trend of IPv4 addresses in China is forecasted, which has some reference value for optimizing the allocation of IPv4 address resources and for the development of IPv6. Based on the laws of IPv4 growth, namely bulk growth and a finite growth limit, we propose a finite network model with bulk growth, called an S-curve network. Analysis demonstrates that the analytic method based on uniform distributions (i.e., the Barabási-Albert method) is not suitable for this network. We develop an approximate method to predict the growth dynamics of the individual nodes, and use it to calculate analytically the degree distribution and the scaling exponents. The analytical result agrees well with the simulation, obeying an approximately power-law form. This method overcomes a shortcoming of the Barabási-Albert method commonly used in current network research.
* Project supported by the National Natural Science Foundation of China (Grant No. 70871082) and the Shanghai Leading Academic Discipline Project (Grant No. S30504).
Keywords: complex network, scale-free network, power-law distribution, IPv4 standard, standard diffusion, logistic curve.
PACC: 0540J, 8980H
1. Introduction
In the past decade, network science has made some breakthroughs, and recent efforts have focused on mathematically describing different network topologies. In 1998, Watts and Strogatz developed the small-world network.[1,2] In 1999, Barabási and Albert reconciled empirical data on networks with a mathematical representation, describing the scale-free network.[3] The small-world effect and scale-free characteristics of complex networks have a high degree of universality, which has attracted wide attention in China and abroad.
In the ten years since Barabási and Albert did their groundbreaking work on scale-free networks, research on complex networks has focused on empirical studies, synchronization of complex networks, evolving network models, and so on.[1,2-10] Since empirical studies of complex networks report scale-free phenomena, the question of what kind of model can depict actual scale-free networks is particularly important. Many scholars have generalized and modified the BA (Barabási-Albert) model; models with nonlinear preferential attachment, with dynamic edge rewiring, fitness models and deterministically growing models can be found in the literature.[4] In real-life networks, incomers may only connect to
a few others in a local area because of their limited information, and individuals in a local area are likely to have close relations. Accordingly, we proposed a local preferential attachment model,[8] in which a local-area network stands for a node and all its neighbors. Tanaka and Aoyagi presented a weighted scale-free network model in which the power-law exponents can be controlled by the model parameters.[9] The network is generated through the weight-driven preferential attachment of new nodes to existing nodes and the growth of the weights of existing links.[9] In sociology, preferential attachment is referred to as the Matthew effect; Price called it cumulative advantage.[6] When Price studied the citation network, in order to avoid a zero probability that a new node attaches to an isolated node, he suggested that each node be assigned an initial attractiveness.[6] One can consider the initial publication of a paper to be its first citation (of itself by itself).[6] Dorogovtsev et al. further discussed networks with initial attractiveness, using the initial attractiveness to tune the power-law exponent.[10] This attractiveness is a constant, and the attractiveness of different nodes is independent of each other. Obviously, this is not always reasonable: in the real world, many complex networks have attractiveness that changes over time. For example, WWW pages have timeliness. Some new Web pages are fresh, so their attractiveness may be larger; as time goes by, the freshness gradually disappears, and the attractiveness of these pages may gradually decrease as well. In a citation network, a paper is not a reference at the moment it is published, but as the citation network evolves, its contents acquire a certain attraction. To describe this kind of network, a competitive network with dynamic attractiveness was proposed in Refs. [11,12]; the fitness model[13] is a special case of the competitive network.
The networks mentioned above share two common features: nodes grow linearly, that is, a single node is added at each time step, and the networks grow infinitely. However, the growth of actual networks is limited. In the literature, except for Refs. [7,14], almost all models grow without limit. Are there any finitely growing networks in the real world? How can we establish a finitely growing model for complex networks? What is the topology of this kind of network?
Bulk arrival is important as well. Barabási et al. proposed a deterministic model of geometrically growing networks.[15] In order to demonstrate that metabolic networks have features such as modularity, locally high clustering coefficients and a scale-free topology, Ravasz et al. proposed a deterministic model of a geometrically growing network.[16] Inspired by the idea of Ravasz et al., a random model of a geometrically growing network was proposed.[17] It grows by continuously copying nodes and describes metabolic networks better than the model of Ravasz et al. Concerning the problem of Apollonian packing, a geometrically growing network with common ratio 3 was introduced by Andrade et al. in 2005. Unfortunately, the stated degree distribution of the Apollonian network is incorrect, and the method used by Andrade et al. has a regrettable shortcoming. We first found this error and corrected it.[17,18] The analytic method based on uniform distributions (i.e., the Barabási-Albert method) is likewise not suitable for the analysis of the geometrically growing network.[17] In fact, the Barabási-Albert method has shortcomings for the analysis of other models as well.[19] Up to now, finite networks with bulk growth have rarely been studied. This paper aims to propose a finite model with bulk growth, and we develop a method to predict the growth dynamics of the individual nodes.
The paper is organized as follows. In Sec. 2 we collect data on China's IPv4 (Internet Protocol version 4) addresses, fit them with a logistic curve (S curve), and forecast the growing trend of China's IPv4 addresses. In Sec. 3 we address the questions raised above: according to a law of IPv4 growth, we propose a finite network model with bulk growth, called the S-curve network. In Sec. 4 we first show that the Barabási-Albert method is not suitable for the analysis of the model; we then develop a method to predict the degree distribution of complex networks and estimate the degree distribution of the S-curve network. In Sec. 5 simulations of the network are given; the analytical result agrees well with the simulation, obeying an approximately power-law form. Conclusions are drawn in Sec. 6.
2. Study on IPv4 standard diffusion tendency in China
2.1 Data set
IPv4 is the fourth revision in the development of the Internet Protocol (IP), and IPv6 (Internet Protocol version 6) is the sixth. IPv4 was the first version of the protocol to be widely deployed. Together with IPv6, it is at the core of the standards-based internetworking methods of the Internet, and it is still by far the most widely deployed Internet-layer protocol. Using this protocol, the Internet implements interconnection between different hardware structures and different operating systems: each node is tied to the others by its unique IP address. IPv4 uses only 32 bits and thus provides approximately 4.3 billion addresses. The China Internet Network Information Center annually publishes the "Statistical Survey Report on the Internet Development in China". The 25th statistical survey report pointed out that by December 2009 the number of Chinese IPv4 addresses had reached 232 million, up 28.2% from late 2008. In the recent two years, the average number of IPv4 addresses per Internet user fell slightly; meanwhile, IPv4 addresses are facing exhaustion. The data used here are obtained from previous releases of the China Internet Network Information Center's "Statistical Survey Report on the Internet Development in China". Table 1 gives the number of Chinese IPv4 addresses over time.
Table 1. Accumulative number of Chinese IPv4 addresses

Time      n(IPv4)/10^4    Time      n(IPv4)/10^4
2002-12   2900            2006-12   9802
2003-06   3208            2007-06   11825
2003-12   4146            2007-12   13527
2004-06   4942            2008-06   15814
2004-12   5995            2008-12   18127
2005-06   6830            2009-06   20503
2005-12   7439            2009-12   23245
2006-06   8479
2.2 Forecasting model
The logistic curve model, also called the S curve, was developed by the biological mathematician Verhulst in 1845 to study population growth. Unfortunately, it was neglected for a long time; Pearl and Reed rediscovered its applications in the 1920s. Logistic curves are widely used in economics, politics, population statistics, human tumor proliferation, chemistry, plant population dynamics, insect ecology and forest growth.
The logistic curve is

N = L / (1 + e^(a-rt)),    (1)

where N is the growth; t is time; r is a constant called the instantaneous growth rate; L is also a constant, called the carrying capacity. The coefficient a determines the position of the logistic curve on the time axis. Once a, r and L are found, the curve is obtained. Based on the four-point method[20] and the data in Table 1, we have L = 140359.5. According to the regression analysis, we obtain r ≈ 0.159 and e^a ≈ 541.6. Thus, the fitted logistic curve is

N = 140359.5 / (1 + 541.6 e^(-0.159 t)).    (2)
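As a quick numerical sanity check on Eq. (2), the sketch below (Python; the constants are read directly off the fitted curve) verifies that the curve grows monotonically, saturates at the carrying capacity L, and passes through L/2 at its inflection point t = a/r:

```python
import math

# Constants read off the fitted curve (2): carrying capacity L,
# instantaneous growth rate r, and e^a ~ 541.6.
L, r, EXP_A = 140359.5, 0.159, 541.6

def logistic(t):
    """Eq. (2): N(t) = L / (1 + e^a * e^(-r t))."""
    return L / (1.0 + EXP_A * math.exp(-r * t))

assert logistic(0.0) < logistic(10.0) < logistic(20.0)   # monotone growth
assert abs(logistic(500.0) - L) < 1e-3                   # saturates at L
t_inflect = math.log(EXP_A) / r                          # inflection at t = a/r
assert abs(logistic(t_inflect) - L / 2.0) < 1e-6         # N(a/r) = L/2
```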
Fig. 1 Logistic curve fitting the actual number of IPv4 addresses (actual data and prediction curve).
Table 2. Accumulative number of Chinese IPv4 addresses compared with the prediction of the logistic curve
3. Description of the model
3.1. Competitive network model
The mechanism of cumulative advantage proposed by Barabási and Albert ten years ago is now widely accepted as the probable explanation for the power-law degree distributions observed not only in the WWW but in a wide variety of other networks as well. In fact, the mechanism had already been proposed for citation networks by Price three decades earlier.[6,21] The work of Price himself, however, is largely unknown in the scientific community, and cumulative advantage did not achieve currency as a model of network growth until its rediscovery some decades later by Barabási and Albert.[6]
The BA model satisfies the following rules. Starting with a small number m0 of nodes, at every time step we add a new node with m (m ≤ m0) edges that link the new node to m different nodes already present in the system. When choosing the nodes to which the new node connects, we assume that the probability Π that the new node will be connected to node i depends on the degree k_i of node i, such that

Π(k_i) = k_i / Σ_j k_j.

In order to avoid the situation that an isolated node is never connected, an initial attractiveness A of nodes was introduced.[10]
Because A is a constant, the model in Ref. [10] cannot reflect the competition in networks. To describe the competition, we proposed the Poisson NPA competition model, whose algorithm is as follows. (1) Random growth: starting with a small number m0 of nodes, the arrival process of nodes is a Poisson process with rate λ. At time t, if a new node is added to the system, the new node is connected to m (m ≤ m0) different nodes already present in the system. (2) Preferential attachment: when choosing the nodes to which the new node connects, we assume that the probability Π that the new node will be connected to node i depends on the degree k_i(t) of node i and its attractiveness a_i(t), such that

Π(k_i(t)) = (k_i(t) + a_i(t)) / Σ_i (k_i(t) + a_i(t)),

where, at time t, the attractiveness a_i(t) of node i and the degrees of the nodes of the network are independent of each other, or a_i(t)/k_i(t) is chosen from a distribution F(x); N(t) denotes the number of nodes at time t; and a_i(t) > 1 - k_i(t), i = 1, 2, ..., with Σ_{i=1}^{N(t)} a_i(t) = O(t^θ), 0 ≤ θ ≤ 1.
3.2. S-curve network
We regard an IPv4 address as a node and a contact between two IPv4 addresses as an edge: if there is any contact between two addresses, an edge is added between the two nodes. Therefore, all IPv4 addresses in China form a complex network, called the IPv4 network. Its key features are as follows: (1) nodes arrive in bulk; (2) the limit of the growth is finite, and the accumulative number of nodes follows a logistic curve. Based on these two features, we propose the S-curve network as follows. (a) S-curve growth: starting with a small number m0 of nodes, at time t we add

r L e^(a-rt) / (1 + e^(a-rt))^2

new nodes to the system. Each new node is individually connected to m (m ≤ m0) different nodes already present in the system, and the new nodes are not connected to each other. (b) Preferential attachment: when choosing the nodes to which a new node connects, we assume that the probability Π that the new node will be connected to node i depends on the degree k_i of node i, such that

Π(k_i(t)) = (k_i(t) + A) / Σ_i (k_i(t) + A),    (3)

where A is the initial attractiveness of node i, and A > -m.
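Rules (a) and (b) can be sketched as a small simulation. This is only an illustrative sketch, not the simulation of Sec. 5: the parameter values (L = 1000, m = 4, a = 4, r = 0.5, T = 40) are our own, we take A = 1 so that the initially isolated seed nodes can be selected at all, and degrees are frozen at the start of each batch (a simplification, since the model does not specify within-batch updating).

```python
import math
import random

def scurve_network(L=1000.0, m=4, a=4.0, r=0.5, A=1.0, T=40, seed=1):
    """Sketch of the S-curve network of Sec. 3.2.
    (a) S-curve growth: at step t, add round(r*L*e^(a-rt)/(1+e^(a-rt))^2) nodes.
    (b) Preferential attachment: each new node links to m distinct older nodes,
        chosen with probability proportional to k_i + A (Eq. (3), A > -m)."""
    rng = random.Random(seed)
    deg = [0] * round(L / (1.0 + math.exp(a)))   # first batch, m0 ~ L/(1+e^a)
    for t in range(1, T + 1):
        e = math.exp(a - r * t)
        n_exist = len(deg)                # new nodes attach only to older nodes
        weights = [deg[j] + A for j in range(n_exist)]
        for _ in range(round(r * L * e / (1.0 + e) ** 2)):
            targets = set()
            while len(targets) < m:       # m distinct targets, Eq. (3) weights
                targets.add(rng.choices(range(n_exist), weights=weights)[0])
            for j in targets:
                deg[j] += 1
            deg.append(m)
    return deg

deg = scurve_network()
# The size saturates near the carrying capacity L instead of growing forever.
assert abs(len(deg) - 1000) / 1000 < 0.15
# Preferential attachment produces hubs with degree well above m.
assert max(deg) > 20
```

With these parameters the batch sizes peak near t = a/r = 8 and vanish for t much larger than a/r, so the network effectively stops growing, as the model intends.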
4. Analysis of the model
4.1 Estimation of network size
For convenience, take m0 ≈ L / (1 + e^a); at t = 0, the first batch of nodes is added to the system. Then at time t the number of nodes of the network is

N(t) = m0 + ∫_0^t [r L e^(a-rτ) / (1 + e^(a-rτ))^2] dτ = m0 + L/(1 + e^(a-rt)) - L/(1 + e^a) = L / (1 + e^(a-rt)),    (4)

lim_{t→∞} N(t) = lim_{t→∞} L / (1 + e^(a-rt)) = L.    (5)

Therefore, although the network keeps growing, the network size is finite. It displays an S curve, see Fig. 2.
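Equation (4) can be checked numerically: integrating the batch-arrival rate of rule (a) and adding m0 = L/(1 + e^a) reproduces the closed form. A sketch in Python (plain trapezoidal rule; the fitted values of Sec. 2 are used only for illustration):

```python
import math

L, r = 140359.5, 0.159
a = math.log(541.6)            # so that e^a matches the fitted curve (2)

def rate(t):
    """Batch-arrival rate of rule (a): dN/dt = r L e^(a-rt) / (1+e^(a-rt))^2."""
    e = math.exp(a - r * t)
    return r * L * e / (1.0 + e) ** 2

def n_closed(t):
    """Closed form of Eq. (4): N(t) = L / (1 + e^(a-rt))."""
    return L / (1.0 + math.exp(a - r * t))

# Trapezoidal integration of the rate from 0 to t_end, plus m0 = L/(1+e^a),
# should reproduce N(t_end).
t_end, steps = 50.0, 100000
h = t_end / steps
integral = 0.5 * h * (rate(0.0) + rate(t_end)) + h * sum(
    rate(i * h) for i in range(1, steps))
m0 = L / (1.0 + math.exp(a))
assert abs(m0 + integral - n_closed(t_end)) < 1.0
```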
Fig. 2 Schematic diagram of the size of the S-curve network.
4.2 Degree distribution based on Barabási-Albert method
Let t_i denote the time at which the ith batch of nodes is added to the system. For convenience, label each node of the ith batch with j accordingly, and let k_ij(t) denote the degree of the jth node in the ith batch at time t. Assuming that k_ij(t) is a continuous real variable, the rate at which k_ij(t) changes is expected to be proportional to the degree k_ij(t) of the jth node in the ith batch. Consequently, k_ij(t) satisfies the dynamical equation

∂k_ij/∂t = m [r L e^(a-rt) / (1 + e^(a-rt))^2] (k_ij + A) / Σ_{i,j} (k_ij + A).    (6)
Taking into account

Σ_{i,j} (k_ij + A) = 2mN(t) + AN(t) = (2 + A/m) mL / (1 + e^(a-rt)),
and substituting this into Eq. (6), we obtain

∂k_ij/∂t = [r e^(a-rt) / ((2 + A/m)(1 + e^(a-rt)))] (k_ij + A).    (7)
The solution of this equation, with the initial condition k_ij(t_i) = m, is

k_ij(t) = (m + A) [ (1 + e^(a-rt_i)) / (1 + e^(a-rt)) ]^(1/(A/m+2)) - A.    (8)
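One can confirm that Eq. (8) indeed solves Eq. (7) by integrating the ODE numerically and comparing with the closed form; a sketch (Python, fixed-step fourth-order Runge-Kutta, with illustrative parameter values of our own choosing):

```python
import math

m, A, a, r = 8.0, 2.0, 4.0, 0.5    # illustrative values satisfying A > -m
t_i = 3.0                          # arrival time of the ith batch

def dkdt(t, k):
    """Right-hand side of Eq. (7)."""
    e = math.exp(a - r * t)
    return r * e / ((2.0 + A / m) * (1.0 + e)) * (k + A)

def k_closed(t):
    """Closed-form solution, Eq. (8), with initial condition k_ij(t_i) = m."""
    ratio = (1.0 + math.exp(a - r * t_i)) / (1.0 + math.exp(a - r * t))
    return (m + A) * ratio ** (1.0 / (A / m + 2.0)) - A

# Fixed-step RK4 integration of Eq. (7) from t_i to t_end.
t, k, h = t_i, m, 1e-3
steps = int((30.0 - t_i) / h)
for _ in range(steps):
    k1 = dkdt(t, k)
    k2 = dkdt(t + h / 2, k + h * k1 / 2)
    k3 = dkdt(t + h / 2, k + h * k2 / 2)
    k4 = dkdt(t + h, k + h * k3)
    k += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    t += h
# The numerical trajectory matches the closed form of Eq. (8).
assert abs(k - k_closed(t)) < 1e-6
```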
Using Eq. (8), one can write the probability that a node has degree k_ij(t) smaller than k, P{k_ij(t) < k}, as

P{k_ij(t) < k} = P{ t_i > a/r - (1/r) ln[ ((k + A)/(m + A))^(A/m+2) (1 + e^(a-rt)) - 1 ] }.    (9)
Assuming that we add the batches of nodes at equal time intervals to the network, the t_i values have the constant probability density[2,3]

P(t_i) = 1 / (m0 + t).    (10)
Substituting this into Eq. (9), we obtain

P{k_ij(t) < k} = 1 - [1/(m0 + t)] { a/r - (1/r) ln[ ((k + A)/(m + A))^(A/m+2) (1 + e^(a-rt)) - 1 ] }.    (11)
The degree distribution P(k) can be obtained using

P(k) = ∂P{k_ij(t) < k}/∂k = (2 + A/m)(k + A)^(A/m+1) (1 + e^(a-rt)) / { r (m0 + t) [ (k + A)^(A/m+2) (1 + e^(a-rt)) - (m + A)^(A/m+2) ] },

predicting that asymptotically (t → ∞)

P(k) = 0,  k = m, m + 1, m + 2, ….    (12)
Eq. (12) shows that the analytic method based on uniform distributions (i.e.,
Barabási-Albert method) is not suitable for the model.
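The degeneracy behind Eq. (12) is easy to see numerically: for any fixed k, the density predicted by the uniform-t_i argument decays like 1/(m0 + t). A sketch (Python, illustrative parameter values of our own choosing):

```python
import math

m, A, a, r, m0 = 8.0, 0.0, 4.0, 0.5, 18

def p_ba(k, t):
    """P(k) from the uniform-t_i (Barabasi-Albert-style) continuum argument,
    i.e. the expression preceding Eq. (12)."""
    E = math.exp(a - r * t)
    num = (2.0 + A / m) * (k + A) ** (A / m + 1.0) * (1.0 + E)
    den = r * (m0 + t) * ((k + A) ** (A / m + 2.0) * (1.0 + E)
                          - (m + A) ** (A / m + 2.0))
    return num / den

# As t grows, the predicted P(k) vanishes for every fixed k: the method
# degenerates on a bulk-growing, finite network.
assert p_ba(10.0, 1000.0) < p_ba(10.0, 100.0) < p_ba(10.0, 10.0)
assert p_ba(10.0, 1e6) < 1e-5
```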
4.3. Approximate method for estimating distributions
For any given i, h, t > 0 with h < i and t_h < t_i ≤ t, from Eq. (8) we know that

k_hj(t) = (m + A) [ (1 + e^(a-rt_h)) / (1 + e^(a-rt)) ]^(1/(A/m+2)) - A > (m + A) [ (1 + e^(a-rt_i)) / (1 + e^(a-rt)) ]^(1/(A/m+2)) - A = k_ij(t).    (13)
From Eq. (13) we know that the degree of a node that entered the network before the ith batch is almost everywhere greater than k_ij. Because the degrees in the network may grow continuously, at time t, when the ith batch of nodes is added to the system, the degrees occurring in the network may take the values m, m+1, m+2, ..., k_ij(t), ..., k_0j(t). Thus, for any given integer k with k_0j(t) ≥ k > m, there may exist k_ij(t) such that k_ij(t) = k. The number of nodes whose degree is greater than or equal to k is N(t_i). Thus, the cumulative degree distribution is obtained as follows:

P_cum(k) = N(t_i) / N(t) = (m + A)^(A/m+2) / (k + A)^(A/m+2),    (14)

which is the probability that the degree is greater than or equal to k. The cumulative distribution reduces the noise in the tail. Since the degrees of the network may grow continuously, from Eq. (14) we know that the network is scale-free, with degree exponent

γ = 3 + A/m.
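Equation (14) is an exact power law in k + A, so the degree exponent γ = 3 + A/m can be read off by comparing P_cum at two values of k (the cumulative exponent is γ - 1 = 2 + A/m). A minimal check (Python):

```python
# Cumulative degree distribution of Eq. (14):
# P_cum(k) = ((m + A) / (k + A))^(A/m + 2), so gamma = 3 + A/m.
def p_cum(k, m=8, A=0.0):
    return ((m + A) / (k + A)) ** (A / m + 2.0)

# For A = 0 the tail falls exactly as k^{-2} (gamma = 3): doubling k
# quarters the cumulative distribution.
assert abs(p_cum(200) / p_cum(100) - 0.25) < 1e-9
# For A = m the exponent steepens to gamma = 4, i.e. P_cum ~ (k + A)^{-3}:
# doubling k + A divides the cumulative distribution by 8.
assert abs(p_cum(192, m=8, A=8.0) / p_cum(92, m=8, A=8.0) - 0.125) < 1e-9
```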
5. Simulation result
We now present some simulations. The number of nodes in our simulation is L = 120060. The other parameters are as follows: m0 = 60, m = 8, a = 7 and A = 0. The analytical result agrees well with the simulation, obeying an approximately power-law form with exponent 3; see Figs. 3-6.
Fig. 3. The degree distribution of the present model with r= 1. The squares and solid curve
represent the simulation and analytical results, respectively.
Fig. 4. The degree distribution of the present model with r= 0.8. The squares and solid
curve represent the simulation and analytical results, respectively.
Fig. 5. The degree distribution of the present model with r= 0.5. The squares and solid
curve represent the simulation and analytical results, respectively.
Fig. 6. The degree distribution of the present model with r= 0.1. The squares and solid
curve represent the simulation and analytical results, respectively.
6. Summary and conclusions
Using the logistic model, we have studied the diffusion tendency of the IPv4 standard in China and forecast the IPv4 diffusion trend. Our result shows that the limited resources of IPv4 will be a great restriction on China's Internet development. Therefore, we suggest that the country should step up the development and marketing efforts related to IPv6 standards. From a strategic height, the development of the next-generation Internet is a task of top priority.[22]

The network formed by IPv4 addresses exhibits bulk growth, and the limit of the growth is finite; the accumulative number of nodes follows an S curve. Many practical networks have S-curve characteristics, such as the network formed by Internet users. We propose the S-curve network model based on these features of the IPv4 network. Barabási and Albert assumed that the time at which a node is added to the system obeys a uniform distribution. Although this assumption is not rigorous, it does not affect the result of estimating the degree distribution of singly growing networks.[2] However, the Barabási-Albert method cannot estimate the degree distribution of bulk-growing networks. To overcome this difficulty, we propose an approximate method for estimating the degree distribution of complex networks. Simulation results show that this method is effective. Although it is not a rigorous method, it is an intuitive and effective method for estimating the degree distribution. A rigorous analysis of complex networks requires stochastic process theory.[7,12,19]
References
[1] Watts D J and Strogatz S H 1998 Nature 393 440
[2] Albert R and Barabási A L 2002 Rev. Mod. Phys. 74 47
[3] Barabási A L and Albert R 1999 Science 286 509
[4] Boccalettia S, Latora V, Moreno Y, Chavez M and Hwang D U 2006 Physics Reports
424 175
[5] Wang D, Yu H, Jing Y W, Jiang N and Zhang S Y 2009 Acta Physica Sinica 58 6802
(in Chinese)
[6] Newman M E J 2003 SIAM Review 45 167
[7] Guo J L 2007 Chin. Phys. B 16 1239
[8] Wang L N, Guo J L, Yang H X, and Zhou T 2009 Physica A 388 1713.
[9] Tanaka T and Aoyagi T 2008 Physica D 237 898
[10] Dorogovtsev S N, Mendes J F F and Samukhin A N 2000 Phys. Rev. Lett. 85 4633
[11] Guo J L and Jia H Y 2008 J. University of Shanghai for Science and Technology 30
205 (in Chinese)
[12] Guo J L 2010 Mathematics in Practice and Theory 40(4) 175 (in Chinese)
[13] Bianconi G and Barabási A L 2001 Europhys. Lett. 54 436
[14] Xie Y B, Zhou T and Wang B H 2008 Physica A 387 1683
[15] Barabási A L, Ravasz E and Vicsek T 2001 Physica A 299 559
[16] Ravasz E, Somera A L, Mongru D A, Oltvai Z N and Barabási A L 2002 Science 297 1551
[17] Guo J L 2010 Chin. Phys. Lett. 27 038901.
[18] Guo J L and Wang L N 2010 Physics Procedia 3 1791
[19] Guo J L and Bai Y Q 2006 Dyn. Conti. Disc. Impul. Syst. B 13 523
[20] Yin Z Y 2002 Application of Statistics and Management 21 41
[21] Price D J de S 1976 J. Amer. Soc. Inform. Sci. 27 292
[22] Meng F D and He M S 2008 Journal of Dalian University of Technology 48 137 (in
Chinese)