M.Tech Term Paper Report | Cognitive Radio Network
Term Paper Report
On
Artificial Intelligence Based Cognitive Routing for
Cognitive Radio Networks
Submitted in partial fulfilment of the requirements for the degree
Of
Master of Technology
In
Digital Communication
By
Shashank Narayan
Under supervision of
Dr. Aarti Jain
Assistant Professor
Ambedkar Institute of Advanced Communication and Research
Geeta Colony, New Delhi
DECLARATION
I hereby declare that the term paper project report entitled “Artificial Intelligence Based
Cognitive Routing for Cognitive Radio Networks” submitted by me in partial fulfilment of
the requirement for the award of degree M. Tech. in Digital Communication to Ambedkar
Institute of Advanced Communication and Research is a record of bona fide project work
carried out by me under the guidance of Dr. Aarti Jain, Assistant Professor. I further declare
that the work reported in this project comprises only my original work and due
acknowledgement has been made in the text to all other material used.
Date -
Shashank Narayan
CERTIFICATE
This is to certify that the term paper project report entitled “Artificial Intelligence Based
Cognitive Routing for Cognitive Radio Networks” submitted by Shashank Narayan in partial fulfilment of
the requirement for the award of degree M. Tech. in Digital Communication to Ambedkar
Institute of Advanced Communication and Research is an authentic work carried out by
him under my supervision and guidance. The matter in this thesis is original and has not been
submitted for the award of any other degree.
Date -
Dr. R. K. Sharma                                Dr. Aarti Jain
Head of Department, ECE                         Assistant Professor, ECE
ACKNOWLEDGEMENT
I would like to express my sincere thanks to my project supervisor Dr. Aarti Jain, Assistant
Professor, Electronics and Communication Engineering, Ambedkar Institute of Advanced
Communication and Research for her constant support, timely help, guidance and sincere
cooperation during the entire period of my work. I am grateful to her for providing all the
necessary facilities during the course of the project work.
Last but not least, I am thankful to all those persons with whom I have interacted and who
directly or indirectly contributed significantly to the successful completion of my project.
Shashank Narayan
Table of Figures
Figure 1: Spectrum Hole Concept
Figure 2: Cognitive Radio Transceiver
Figure 3: Cognitive Cycle
Figure 4: Digital Implementation of Energy Detector
Figure 5: Adder Output. User Present (1st and 5th), User Absent (2nd, 3rd, and 4th)
Figure 6: Used Bands (1st and 5th), Unused Bands (2nd, 3rd, and 4th)
Figure 7: 1st Unused Band Assigned to Secondary User 1
Figure 8: 2nd Unused Band Assigned to Secondary User 2
Figure 9: All of the Spectrum Bands Are in Use
Figure 10: SNR = 5 dB
Figure 11: SNR = 15 dB
Figure 12: Attenuation = 10%
Figure 13: Attenuation = 15%
ABSTRACT
This term paper, titled “Artificial Intelligence Based Cognitive Routing for
Cognitive Radio Networks”, concerns cognitive radio networks (CRNs): networks of nodes
equipped with cognitive radios that can optimize performance by adapting to network
conditions. While CRNs are envisioned as intelligent networks,
relatively little research has focused on the network level functionality of CRNs. Although
various routing protocols, incorporating varying degrees of adaptiveness, have been proposed
for CRNs, it is imperative for the long-term success of CRNs that the design of cognitive routing
protocols be pursued by the research community.
Cognitive routing protocols are envisioned as routing protocols that fully and seamlessly
incorporate AI-based techniques into their design. In this paper, we provide a self-contained
tutorial on various AI and machine-learning techniques that have been, or can be, used for
developing cognitive routing protocols. We also survey the application of various classes of AI
techniques to CRNs in general, and to the problem of routing in particular. We discuss various
decision making techniques and learning techniques from AI and document their current and
potential applications to the problem of routing in CRNs. We also highlight the various
inference, reasoning, modeling, and learning subtasks that a cognitive routing protocol must
solve. Finally, open research issues and future directions of work are identified.
INTRODUCTION
In cognitive radio networks (CRNs), nodes are equipped with cognitive radios (CRs) that can
sense, learn, and react to changes in network conditions. Mitola envisioned that CRs could be
realized through incorporation of substantial computational or artificial intelligence (AI)—
particularly, machine learning, knowledge reasoning and natural language processing — into
software-defined radio (SDR) hardware. In a modern setting, this is achieved by incorporating a cognitive engine
(CE) using various AI-based techniques through which the CR adapts to the network conditions
to satisfy some notion of optimality. CRs have also been proposed for a wide range of
applications including intelligent transport systems, public safety systems, femtocells,
cooperative networks, dynamic spectrum access, and smart grid communications. CR promises
to dramatically improve spectrum access, capacity, and link performance while also
incorporating the needs and the context of the user. CRs are increasingly being viewed as an
essential component of next-generation wireless networks. Although cognitive behavior of
CRNs can enable diverse applications, perhaps the most cited application of CRNs is dynamic
spectrum access (DSA). DSA is proposed as a solution to the problem of artificial spectrum
scarcity that results from static allocation of available wireless spectrum using the command-
and-control licensing approach. Under this approach, licensed applications represented by
primary users (PUs) are allocated exclusive access to portions of the available wireless
spectrum prohibiting other users from access even when the spectrum is idle. With most of the
radio spectrum already being licensed in this fashion, innovation in wireless technology is
constrained. The problem is compounded by the observation, replicated in numerous
measurement-based studies the world over, that the licensed spectrum is grossly underutilized. The
DSA paradigm proposes to allow secondary users (SUs), also called cognitive users, access to
the licensed spectrum subject to the condition that SUs do not interfere with the operations of
the primary network of incumbents.
3. COGNITIVE RADIO
Cognitive radio is a radio that alters its transmission parameters according to the environment
in which it operates, and is therefore dynamic in nature. The main objective of a CR is to
choose the best spectrum. The CR user senses the spectrum in order to find vacant portions;
such vacant spectrum is called a spectrum hole or white space. The CR user continues its
transmission until the PU reappears, at which point it leaves the spectrum, as illustrated in Fig. 1.
The CR user should be aware of the interference level it causes to the PU, and for seamless
transmission it moves to a new vacant spectrum band.
Figure 1 : Spectrum Hole Concept
The CR transceiver contains a radio frequency (RF) unit, an analog-to-digital converter and a
baseband processing unit. The RF unit and the analog-to-digital converter together are called the
RF front end. A general CR transceiver is shown in Fig. 2. The RF front end amplifies the received
signal and converts it to a digital signal. The signal is then modulated/demodulated and encoded/
decoded in the baseband processing unit.
Figure 2 : Cognitive Radio transceiver
3.1 CHARACTERISTICS
The main characteristics of cognitive radio are:
3.1.1 Cognitive capability: This refers to the ability of a CR node to sense and gather information
such as transmission frequency, bandwidth, power, modulation, etc., from its environment. By
appropriate sensing, the SU can choose the best spectrum by adjusting its parameters.
3.1.2 Reconfigurability: The ability to adjust parameters such as operating frequency, modulation
and transmission power based on the gathered information, without any modification to
hardware components.
3.2 FUNCTIONS
The main functions of a CR are illustrated in Fig. 3. The CR senses the environment and collects
information; based on this it makes a decision and adjusts its parameters. These functions
are named spectrum sensing, spectrum decision, spectrum sharing and spectrum mobility.
3.2.1 Spectrum sensing: The CR senses the spectrum, determines the spectrum holes and
captures information about them.
3.2.2 Spectrum decision: Out of the sensed spectrum the CR selects the best spectrum and
determines the transmission parameters.
3.2.3 Spectrum sharing: It coordinates the spectrum access with other users.
3.2.4 Spectrum mobility: The SU vacates the channel when the licensed user reappears. For
continuous transmission, the CR user moves to another spectrum hole.
Figure 3 : Cognitive Cycle
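The four functions above can be sketched as a single pass of the cognitive cycle. The following Python sketch is purely illustrative: the channel model and the first-available-hole decision rule are assumptions made for this example, not part of the report's implementation.

```python
def cognitive_cycle_step(channels, pu_active, current=None):
    """One illustrative pass of the cognitive cycle:
    sensing -> decision, with mobility if the PU reappears."""
    holes = [ch for ch in channels if not pu_active[ch]]   # spectrum sensing
    if current is not None and pu_active[current]:         # spectrum mobility: vacate
        current = None
    if current is None and holes:                          # spectrum decision
        current = holes[0]   # assumed rule: take the first available hole
    return current, holes

# PUs occupy channels 1 and 3; channel 2 is the only spectrum hole
current, holes = cognitive_cycle_step([1, 2, 3], {1: True, 2: False, 3: True})
# current → 2
```

Spectrum sharing would additionally coordinate the hole assignment across multiple SUs; it is omitted here to keep the sketch to a single node.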
4. MACHINE LEARNING
For a radio to be deemed a cognitive radio, it is necessary for it to be equipped with the ability
of learning. On receiving certain environmental input, systems (e.g., animals, automata, and in
our case, cognitive radios) exhibit some kind of behavior. If the system changes its
behavior over time in order to improve its performance at a certain task, it is said to learn from
its interaction with its environment. This implies that these systems may respond differently to
the same input later on than they did earlier. Machine learning is the field of research that
formally studies such learning systems and algorithms, focusing on their theory, properties and
performance. It is a highly interdisciplinary
field building upon ideas from diverse fields such as statistics, artificial intelligence, cognitive
science, information theory, optimization theory, optimal control, operations research, and
many other disciplines of science, engineering and mathematics. Russell and Norvig describe
machine learning to be the ability to “adapt to new circumstances and to detect and extrapolate
patterns”. Machine learning techniques have proven themselves to be of great practical utility
in diverse domains such as pattern recognition, robotics, natural language processing and
autonomous control systems. They are particularly useful in domains, like CRNs, where the
agents must dynamically adapt to changing conditions.
Types of machine learning algorithms: Machine learning concerns itself with a learner using a
set of observations to uncover the underlying process. There are principally three variations to
this broad definition and machine learning can be classified into three broad classes with
respect to the sort of feedback that the learner can access:
i) supervised learning,
ii) unsupervised learning, and
iii) reinforcement learning.
Briefly, supervised learning is one extreme in which the learner is provided with labeled
examples by its environment (alternatively, a supervisor or teacher) in a training phase through
which the learner attempts to generalize so that it can respond correctly to inputs it has not seen
yet. We can think of learning a simple categorization task as supervised learning. Unsupervised
learning is the other extreme in which the learner receives no feedback from the environment
at all. The learner’s task is to organize or categorize the inputs in clusters, categories, or with
reduced set of dimensions. A third alternative, closer to supervised learning than to
unsupervised learning, is reinforcement learning in which although the learner is not provided
feedback about what exactly the correct response should have been, it gets indirect feedback
about the appropriateness of the response through a reward (or reinforcement). Reinforcement
learning, therefore, depends more on exploration through trial-and-error. We will be covering
these three kinds of learning in more detail in Sections 4.1, 4.2 and 4.3, respectively.
4.1 SUPERVISED LEARNING
In supervised learning, algorithms are developed to learn and extract knowledge from a set of
training data which is composed of inputs and corresponding outputs assumed to be labelled
correctly by a ‘teacher’ or a ‘supervisor’. To understand supervised learning, imagine a
machine that experiences a series of inputs: x1, x2, x3, and so on. The machine is also given
the corresponding desired outputs y1, y2, y3, and so on, and the goal is to learn the general
function f(x) through which correct output can be determined given a new input xi (not
necessarily seen in the training examples provided).
The output can be a continuous value for a regression problem, or can be a discrete value for a
classification problem. The objective of supervised learning is to predict the output given any
valid input. In other words, the task in supervised learning is to discover the function through
which an input is transformed into an output. This contrasts with ‘unsupervised learning’, in
which example objects are available only in an unlabelled or unclassified form.
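As a concrete, deliberately tiny illustration of this setting, the sketch below fits a linear function to labelled (x, y) pairs by least squares and then generalizes to an input not seen in training. The data and the linear form of f are assumptions made purely for illustration.

```python
def fit_linear(xs, ys):
    """Least-squares fit of y ≈ a*x + b from labelled training pairs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx     # slope and intercept

# training set labelled by the "teacher": here the assumed rule is y = 2x + 1
xs, ys = [0, 1, 2, 3, 4], [1, 3, 5, 7, 9]
a, b = fit_linear(xs, ys)
prediction = a * 10 + b   # generalizing to x = 10, unseen in training
# prediction → 21.0
```

This is a regression problem; replacing the continuous outputs with discrete labels would turn it into the classification problem described next.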
Types of supervised learning problems: There are essentially two types of supervised learning
problems—classification and regression (or estimation). Classifiers themselves can be further
divided into computational classifiers such as support vector machines (SVMs), statistical
classifiers such as linear classifiers (e.g., Naive Bayes classifier or logistic regression), hidden
Markov model (HMM) and Bayesian networks, or connectionist classifiers such as neural
networks.
A central result in ‘supervised learning theory’ is the ‘no free lunch theorem’ which informs
that there is no single learning method that will outperform all others regardless of the problem
domain and the underlying distributions. For this reason, a variety of domain and application
specific techniques have emerged to deal with diverse applications with varying degrees of
success. The design of practical learning algorithms is therefore a mixture of art and science [19].

Major issues in supervised learning: The major issue with supervised learning is the need
to generalize a function from the learned data so that the technique is able to produce the
correct output even for inputs it has not explicitly seen in the training data. This task of
generalization cannot be solved exactly without some additional assumptions being made
about the nature of the target function, as it is possible for yet unseen inputs to have arbitrary
output values. Potential problems in supervised learning include creating a model that is
underfitted (perhaps due to limited amounts of training data) or overfitted (in which an
unnecessarily complex model is built that models the spurious and uncharacteristic noisy
attributes of the data). Depending on the application, huge amounts of training data may be
necessary for the supervised learning algorithm to work.
4.2 UNSUPERVISED LEARNING
In supervised learning, it was assumed that a labeled set of training data consisting of some
inputs and their corresponding outputs was provided. In contrast, in unsupervised learning, no
such assumption is made. The objective of unsupervised learning is to identify the structure of
the input data. To understand unsupervised learning, again imagine the machine that
experiences a series of inputs: x1, x2, x3, and so on. The goal of the machine in unsupervised
learning is to build a model of x that can be useful for decision making, reasoning, prediction,
communication, etc. The basic method in unsupervised learning is clustering (which can be
thought of as the unsupervised counterpart of the supervised learning task of classification).
This clustering is used to find the groups of inputs which have similarity in their characteristics.
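A minimal sketch of the clustering idea, using 1-D k-means on hypothetical received-power readings. k-means is a simple stand-in chosen only to illustrate grouping by similarity; it is not the DPMM technique discussed below, and the readings are invented.

```python
def kmeans_1d(points, k=2, iters=20):
    """Minimal 1-D k-means: group unlabelled inputs by similarity."""
    centroids = list(points[:k])           # crude initialisation
    clusters = []
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                   # assign each point to nearest centroid
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        centroids = [sum(c) / len(c) if c else centroids[i]   # recompute centres
                     for i, c in enumerate(clusters)]
    return centroids, clusters

# hypothetical received-power readings (dBm): noise floor vs. PU transmissions
readings = sorted([-95, -97, -60, -96, -62, -61])
centroids, clusters = kmeans_1d(readings)
# the two centroids separate the noise-floor group from the PU-present group
```

Note that k must be fixed in advance here; the appeal of the Dirichlet process model described next is precisely that the number of groups need not be specified.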
Application of unsupervised learning to CRNs: An application to which unsupervised learning
is particularly suited is the extraction of knowledge about primary signals on the basis of
measurements. A prominent unsupervised classification technique that has been applied to
CRNs particularly for this problem is the Dirichlet process mixture model (DPMM). The
DPMM is a Bayesian non-parametric model which makes very few assumptions about the
distribution from which the data are drawn by using a Dirichlet process prior distribution.
The benefit of Dirichlet process based learning is that training data is not needed anymore, thus
allowing this approach to be used for identification of unknown signals in an unsupervised
setting. The Dirichlet process has been proposed in the literature for identifying and classifying
spectrum usage by unidentified systems in CRNs.
4.3 REINFORCEMENT LEARNING
Reinforcement learning (RL) is inspired from how learning takes place in animals. It is well
known that an animal can be taught to respond in a desired way by rewarding and punishing it
appropriately; conversely, it can be said that the animal learns how it must act so as to maximize
positive reinforcement or reward. A crucial advantage of reinforcement learning over other
learning approaches, and a main reason for its practical significance, is that it does not require
any information about the environment except for the reinforcement signal.
To understand RL, we again take recourse to the example of the machine which experiences a
series of inputs: x1, x2, x3, and so on. In this new setting, the machine can also perform certain
actions a1, a2,... through which it can affect the state of the world and receive rewards (or
punishments) r1, r2, and so on. The mapping from the actions to rewards is probabilistic in
general. The objective of a reinforcement learner is to discover a policy (i.e., a mapping from
situations to actions) such that expected long-term reward is maximized.
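The setting above can be sketched with tabular Q-learning, a standard RL algorithm, here collapsed to a single state for brevity. The two-channel environment is a made-up example: the learner is told nothing about the environment except the reward signal, yet its greedy policy settles on the better channel through trial and error.

```python
import random

def q_learning(n_actions, step, episodes=500, alpha=0.1, gamma=0.9, eps=0.2):
    """Single-state tabular Q-learning driven only by the reward signal."""
    Q = [0.0] * n_actions
    for _ in range(episodes):
        if random.random() < eps:                      # explore
            a = random.randrange(n_actions)
        else:                                          # exploit current estimate
            a = max(range(n_actions), key=lambda i: Q[i])
        r = step(a)
        Q[a] += alpha * (r + gamma * max(Q) - Q[a])    # reinforcement update
    return Q

# made-up environment: two candidate channels, channel 1 reliably rewards more
def env(a):
    return 1.0 if a == 1 else 0.0

random.seed(0)
Q = q_learning(n_actions=2, step=env)
policy = max(range(2), key=lambda a: Q[a])   # learned policy picks channel 1
```

The eps parameter controls the exploration/exploitation trade-off that distinguishes RL from the other two learning paradigms.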
5. DECISION AND PLANNING TECHNIQUES
The cognitive cycle which epitomizes the essence of a cognitive radio is based on a cognitive
radio’s ability to:
i) observe its operating environment,
ii) decide how best to adapt to the environment, and then, as the cycle repeats,
iii) reason and learn from past actions and observations.
The term planning, for the purpose of our discussion, refers to any computational process that
produces (or improves) a decision policy of how to interact with the environment given a model
of the environment. Planning is often referred to as a search task, since we are
essentially searching through the space of all possible plans.
In the remainder of this section, we will discuss two major decision planning frameworks that
have been widely applied to CRNs. Specifically, we shall be studying Markov decision
processes and game theory.
5.1 MARKOV DECISION PROCESSES
Markov decision processes (MDPs) provide a mathematical framework for modeling
sequential planning or decision making by an agent in real-life stochastic situations where the
outcome does not follow deterministically from actions. In such cases, the output (also called
the reward) is specified by a probability distribution that depends on the action adopted in a
particular state. MDPs approach this multi-stage, sequential decision-making process as an
‘optimal’ control problem in which the aim is to select actions that maximize some measure of
long-term reward. MDPs differ from classical deterministic AI planning algorithms in that their
action model is stochastic (i.e., the outcome does not follow deterministically from the action
chosen).
More formally, an MDP is a discrete-time stochastic optimal control process. At every time step,
the process is in some state s, and the decision maker has to choose some action a from amongst
the A actions available in the current state. After taking the action, the process will move
randomly to some new state s′, with the decision maker obtaining a corresponding reward Ra(s,
s′). We note here that reward is used in a neutral sense: it can imply either a positive reward
or a negative reinforcement (i.e., a penalty). The choice of action a in state s influences the
probability that the process will move to some new state s′. This probability (of going from
state s to s′ by taking action a) is given by the state transition function Pa(s, s′). The next state
s′, therefore, depends stochastically on current state s and the action a taken therein by the
decision maker. In MDPs, a crucial extra condition holds: given s and a, Pa(s, s′) is
conditionally independent of all previous states and actions. This condition is known as the
Markov property and this condition is critical for keeping MDP analysis tractable.
To put MDPs into perspective, we note here that they are a generalization of Markov chains.
The difference is that MDPs incorporate actions and rewards in the model while Markov chains
do not. Conversely, the special case of MDPs with only one action available for each state and
with identical rewards (e.g., zero) is in fact a Markov chain. This, and the relationship of
various Markov models and games that we will develop later in this paper, can be seen
graphically in figure. The roots of such problems can be traced to the work of Richard Bellman
who showed that the computational burden of solving an MDP can be reduced quite
dramatically via techniques that are now referred to as dynamic programming (DP). We will
discuss these techniques next.

Solving an MDP: The core problem in MDPs is to determine an optimal ‘policy’ for the
decision maker, which is defined to be a function π that maps a state s to an action π(s).
Intuitively, the policy π specifies what action the agent must perform when
in various states so that the long-term rewards are maximized. It may be noted that once the
MDP is specified with a policy, the action at various states is fixed, and the resulting MDP
effectively behaves like a Markov chain. We can now make the notion of long-term rewards
more precise. In a potentially infinite-horizon environment, with continuous decision
making that goes on forever, to reason about the various possible policies it is important
that the cumulative reward remain finite. This is usually accomplished through
discounting, through which the preference of immediate rewards over delayed rewards may be
quantified. Discounting works by reducing future rewards by a factor of γ, chosen such that 0 ≤
γ < 1, in every time step. The discount factor γ is used as a parameter to describe the relative
importance of future rewards. If γ is chosen to be 0, the agent will become short-sighted or
‘myopic’ and will consider current rewards only. As γ approaches 1, the agent will become
long-sighted and will strive for long-term rewards. To ensure that action values do not diverge,
γ must be kept strictly less than 1.
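The role of the discount factor can be made concrete with value iteration, one of Bellman's dynamic-programming techniques for solving an MDP. The 2-state, 2-action MDP below is hypothetical (chosen only so the optimal policy is easy to check by hand); the code returns the converged state values and the greedy policy.

```python
def value_iteration(P, R, gamma=0.9, tol=1e-8):
    """P[s][a][s2] = transition probability, R[s][a] = expected reward.
    Returns the state values V and the greedy policy pi(s)."""
    n_s = len(P)

    def q(s, a, V):
        # one-step lookahead: immediate reward plus discounted future value
        return R[s][a] + gamma * sum(P[s][a][s2] * V[s2] for s2 in range(n_s))

    V = [0.0] * n_s
    while True:                            # Bellman backup until convergence
        V_new = [max(q(s, a, V) for a in range(len(P[s]))) for s in range(n_s)]
        if max(abs(x - y) for x, y in zip(V, V_new)) < tol:
            break
        V = V_new
    policy = [max(range(len(P[s])), key=lambda a: q(s, a, V)) for s in range(n_s)]
    return V, policy

# hypothetical MDP: in state 0, action 1 pays more now (R = 2) but risks a
# jump to the rewardless state 1; discounting arbitrates this trade-off
P = [[[1.0, 0.0], [0.2, 0.8]],   # state 0: a0 stays put, a1 likely jumps
     [[0.5, 0.5], [0.0, 1.0]]]   # state 1
R = [[1.0, 2.0],
     [0.0, 0.0]]
V, policy = value_iteration(P, R)
# with gamma = 0.9 the patient action a0 wins in both states
```

With γ = 0.9 the value of staying in state 0 is 1/(1 − γ) = 10, which beats the myopic payoff of 2; a smaller γ would flip the preference, illustrating the short-sighted/long-sighted behaviour described above.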
5.2 GAME THEORY
Game theory is a mathematical decision framework composed of various models and tools
through which we can study and analyze competitive interaction between multiple self-
interested rational agents. Although game-theoretic models exist for both cooperative and non-
cooperative settings, the ability to model competition mathematically distinguishes game
theory from optimal control-theoretic frameworks such as the MDP. Game theory is also
differentiated from optimization theory (which caters to a single-decision-maker scenario) in
its ability to model multi-agent decision-making scenarios where the decisions of each agent
affect each other. Every game involves a set of players, actions for each of the players
representing how the players interact, and preferences for each of the players defined over all
the possible outcomes. The preferences, or payoffs, are typically defined through a utility function,
or a payoff function, which maps each possible outcome to a number representing that
outcome’s desirability. An outcome brings more reward, or is more desirable, if it has a higher
utility. In order to maximize its payoff, each player acts according to its strategy. More
formally, a game can be mathematically represented by the 3-tuple G = (N, S,U) where N
represents the set of players, S the set of strategies, and U the set of payoff functions.
The terms strategy and action should not be confused together: the strategy in fact specifies
how the player should act in each possible situation, and can be envisioned as a complete
algorithm documenting how the player will play the game. The strategy of a player can be a
single action (for a single-shot or static game) or a set of actions during the game (for a
sequential or dynamic game). A player’s strategy set defines what strategies are available
for it to play: the strategy set may be finite (e.g., when a choice is made from a countable
discrete set of values) or infinite (e.g., when some continuous value is chosen). A pure strategy
deterministically defines how a player will play a game, while a mixed strategy assigns a
probability to each pure strategy. The strategy profile, or the
action profile, documents the strategy of each player and it fully specifies all actions in a game.
The outcome of the game depends, possibly stochastically, on the player’s strategy profile and
returns payoffs to various players.
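A minimal sketch of these definitions: the code enumerates the pure-strategy profiles of a two-player game G = (N, S, U) and keeps those in which each player's strategy is a best response to the other's, i.e. the pure Nash equilibria. The channel-access payoffs are a made-up example, not taken from the report.

```python
def pure_nash(payoffs):
    """Enumerate pure-strategy profiles and keep those where neither
    player gains by a unilateral deviation (pure Nash equilibria)."""
    n0, n1 = len(payoffs), len(payoffs[0])
    equilibria = []
    for a0 in range(n0):
        for a1 in range(n1):
            u0, u1 = payoffs[a0][a1]
            if (all(payoffs[d][a1][0] <= u0 for d in range(n0)) and
                    all(payoffs[a0][d][1] <= u1 for d in range(n1))):
                equilibria.append((a0, a1))
    return equilibria

# hypothetical channel-access game: two CRs, two bands; sharing a band collides
# action 0 = band A, action 1 = band B; entries are (player-0, player-1) payoffs
payoffs = [[(0, 0), (5, 5)],
           [(5, 5), (0, 0)]]
equilibria = pure_nash(payoffs)
# equilibria → [(0, 1), (1, 0)]: the stable profiles put the CRs on different bands
```

Both equilibria here happen to be socially optimal; in general (e.g. in prisoner's-dilemma-like spectrum games) an equilibrium can be far from the socially optimal point discussed below.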
Game theory is popularly used in CRNs since each CR in a CRN interacts with a dynamic
environment composed of other rational agents that sense, act, and learn while aiming to
maximize personal utility. For games specific to CRNs, individual CRs typically represent the
players, and the actions may include the choice of various system or design parameters such
as, e.g., the modulation scheme, transmit power level, flow control parameter, etc. One of the
main goals of game theory is to determine the equilibrium points of a given game. These are sets of
stable strategies from which individuals are unlikely to unilaterally change their behaviour. To
gauge their efficiency, these equilibrium points are often contrasted with some notion of a socially
optimal point which produces the ‘best’ outcome when the interests of all the players are taken
into account.
In recent years, game theory has provided deep insights into how to design decentralized
algorithms for resource sharing in networks particularly through the theory known as
mechanism design, sometimes known as reverse game theory. While traditional game theory
focuses on analyzing how rational players would play a given game, in mechanism design we
are interested in engineering or designing a game that rational players will play into a desired
equilibrium point. Intuitively, mechanism design aims to set up the game such that players do
what the designers want them to do, but because the players themselves want to do it.
6. IMPLEMENTATION
The implementation is done using MATLAB Simulink software. Digital implementations offer
more flexibility by using FFT-based spectral estimates. Fig. 4 shows the architecture for the
digital implementation of an energy detector of a primary user signal.
There are three signal processing techniques commonly used for spectrum sensing:
• Matched filter
• Energy detector
• Cyclostationary feature detector
Figure 4 : Digital Implementation of Energy Detector
The energy detector based approach is the most common way of performing spectrum sensing
because of its low computational and implementation complexity. Energy detection is used when
the primary user signal is unknown or the receiver cannot gather sufficient information about
the primary user signal. This method is optimal for detecting any unknown zero-mean
constellation signal and can be applied to cognitive radios (CRs). The process flow of the
energy detector is as follows: the received signal is passed through the ADC, the FFT
coefficients are calculated, these values are squared and then averaged over the observation
interval. Finally, the output of the detector is compared to a predefined threshold value to
decide whether the primary user is present or not.
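The process flow just described can be sketched as follows. This is an illustrative Python version, not the MATLAB Simulink model used in the report; the sample count, noise level and threshold are assumptions, and the DFT is written out directly for clarity where a real detector would use an FFT.

```python
import cmath
import math
import random

def energy_detect(samples, n_fft, threshold):
    """Energy detection: transform, magnitude-square, average, compare."""
    # DFT coefficients of the digitized signal
    spectrum = [sum(samples[t] * cmath.exp(-2j * cmath.pi * k * t / n_fft)
                    for t in range(n_fft)) for k in range(n_fft)]
    # square the coefficients and average over the observation interval
    energy = sum(abs(x) ** 2 for x in spectrum) / n_fft
    return energy > threshold, energy     # compare with the predefined threshold

random.seed(1)
n = 64
noise = [random.gauss(0, 0.1) for _ in range(n)]          # band with no PU
tone = [math.cos(2 * math.pi * 8 * t / n) + noise[t]      # band with a PU tone
        for t in range(n)]
busy, _ = energy_detect(tone, n, threshold=5.0)    # PU detected
idle, _ = energy_detect(noise, n, threshold=5.0)   # band declared a spectrum hole
```

By Parseval's theorem the averaged spectral energy equals the time-domain signal energy, so the threshold directly separates the noise floor from a noise-plus-signal band.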
6.1 SIMULATION RESULTS AND DISCUSSIONS
The cognitive radio system continuously searches for a spectrum hole, where a primary user is
not present, determined by the method of energy detection. When it finds a spectrum
hole, it immediately allots it to a Secondary User (SU), and whenever the Primary User (PU)
wants to occupy the slot, the Secondary User immediately leaves it. For the simulation, five
signals with carrier frequencies of 1 MHz, 2 MHz, 3 MHz, 4 MHz and 5 MHz and a sampling
frequency of 12 MHz are used. The Power Spectral Density (PSD) of the signal is calculated
and compared with the predefined threshold value to determine the presence of a primary user
signal. It is assumed that the 1st and 5th primary users are present and the 2nd, 3rd and 4th
primary users are not present. The following results are then obtained, shown in the figures
below.
Figure 5 : Adder Output. User Present (1st and 5th), User Absent (2nd, 3rd, and 4th)
Figure 6 : Used bands (1st and 5th), Unused bands (2nd, 3rd, and 4th)
Now the Cognitive Radio (CR) system will look for the first available gap (Spectrum hole) and
automatically assign it to the secondary user (SU) in the spectrum. It is shown in the figure that
the first available spectral gap was occupied by the secondary user (SU) 1.
Figure 7 : 1st unused band assigned to Secondary User 1
Now the system will search the next spectrum hole and automatically assign it to the secondary
user (SU) in the spectrum. As shown in the Figure, the next available gap was occupied by the
secondary user (SU) 2.
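The allocation behaviour just described, in which each waiting SU takes the first unused band while PU bands are skipped, can be sketched as follows. The band indices and SU names are illustrative labels, not identifiers from the Simulink model.

```python
def assign_sus(pu_present, su_queue):
    """Give each waiting SU the first unused band; bands with a PU are skipped."""
    allocation = {band: "PU" for band, busy in enumerate(pu_present) if busy}
    for su in su_queue:
        for band in range(len(pu_present)):
            if band not in allocation:     # first available spectrum hole
                allocation[band] = su
                break
    return allocation

# bands at 1..5 MHz; PUs occupy the 1st and 5th, as in the simulation
pu_present = [True, False, False, False, True]
alloc = assign_sus(pu_present, ["SU1", "SU2", "SU3"])
# alloc → {0: 'PU', 4: 'PU', 1: 'SU1', 2: 'SU2', 3: 'SU3'}: every band in use
```

After SU3 is admitted the spectrum is fully occupied, matching the state shown in Figure 9.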
Figure 8 : 2nd unused band assigned to Secondary User 2
Figure 9 : All of the Spectrum bands are in use
Now just one slot is left empty, which will be filled by the addition of another Secondary User (SU)
as shown in the figure. All of the frequency bands are used efficiently after the last spectrum
hole is filled by secondary user 3. The low peaks in the figure are for the 2nd, 3rd and 4th primary
users, who are not present, and the high peaks for the 1st and 5th primary users (PUs), who are
present. It is seen in the figure that there is an increase in the peak of the 2nd slot after allocating
the 2nd slot to secondary user 1. Similarly, an increase in the peak of the 3rd slot after allocating it to the
secondary user (SU) 2 is observed. At this instant the 4th primary user leaves its slot, and finally
the allotment of the 4th slot to secondary user 3 by the cognitive radio network is seen in the figure.
Once all the slots have been assigned, the system will not entertain further users, but it is able to
free up the spectral gaps (slots) one by one. If asked to empty a slot, it will delete the data in the
first spectral gap and make it ready for the next assignment.
To analyze the channel characteristics, noise and attenuation parameters can be added. If the
Signal-to-Noise Ratio (SNR) is set to 5 dB and then 15 dB, the results shown in the figures are
obtained. The disturbance in the spectrum can be observed to decrease with the increase in SNR,
which means that a noisier channel will increase the probability of error in the received signal.
Figure 10 : SNR = 5dB
Figure 11 : SNR = 15dB
If attenuation values of 10% and 15% are taken, the results shown in the figures are observed.
The signal peaks are proportionately reduced with increasing attenuation; attenuation in the
channel thus reduces the signal power, which in essence impairs proper signal reception.
Figure 12 : Attenuation = 10%
Figure 13 : Attenuation = 15%
The simulation can also be carried out with the following combinations of parameters:
SNR = 20 dB and Attenuation = 15%
SNR = 25 dB and Attenuation = 10%
SNR = 20 dB and Attenuation = 20%
In all cases the probability of false alarm will also be taken into account. These results will be
formulated and communicated later.
[7] Mohamed Hamid, “Dynamic Spectrum Access in Cognitive Radio Networks: Aspects of MAC
Layer Sensing”, Blekinge Institute of Technology, December 2008.
[8] A. Bansal and R. Mahajan, “Building Cognitive Radio System Using Matlab”, IJECSE, Vol. 1,
No. 3, 2012.
[9] Linda E. Doyle, “Essentials of Cognitive Radio”, Trinity College, Dublin.
[10] Mansi Subhedar and Gajanan Birajdar, “Spectrum Sensing Techniques in Cognitive Radio
Networks: A Survey”, International Journal of Next-Generation Networks, Vol. 3, No. 2, 2011.
[11] John G. Proakis, Digital Communications, McGraw-Hill College, 2000.
[12] Anita Garhwal and Partha Pratim Bhattacharya, “A Survey on Dynamic Spectrum Access
Techniques for Cognitive Radio”, International Journal of Next-Generation Networks (IJNGN),
Vol. 3, No. 4, December 2011.
[13] Tulika Mehta, Naresh Kumar and Surender S. Saini, “Comparison of Spectrum Sensing
Techniques in Cognitive Radio Networks”, IJECT, Vol. 4, Issue Spl-3, April–June 2013.
[14] Amit Kataria, “Cognitive Radios – Spectrum Sensing Issues”, thesis presented to the Faculty
of the Graduate School, University of Missouri-Columbia.
[15] FCC, ET Docket No. 03-237, Notice of Inquiry and Notice of Proposed Rulemaking, 2003.
[16] S. Taruna and Bhumika Pahwa, “Simulation of Cognitive Radio Using Periodogram”,
International Journal of Engineering Research & Technology (IJERT), Vol. 2, Issue 9,
September 2013.