ECDSA stands for "Elliptic Curve Digital Signature Algorithm". It is used to create a digital signature of data (a file, for example) so that its authenticity can be verified without compromising its security. This paper presents the architecture of the finite field multiplication: the proposed multiplier used in this processor is a hybrid Karatsuba multiplier, and for multiplicative inversion we choose the Itoh-Tsujii Algorithm (ITA). The work presents the design of a high-performance elliptic curve crypto processor (ECCP) for an elliptic curve over the finite field GF(2^233); the chosen curve is the standard curve for digital signatures. The processor is synthesized for a Xilinx FPGA.
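As an illustration of the multiplier the abstract describes, the following Python sketch (a software model under my own assumptions, not the paper's hardware design) implements a hybrid Karatsuba carry-less multiplier over GF(2)[x]: polynomials are held as integer bit masks, Karatsuba recursion splits the operands, and a schoolbook shift-and-XOR routine handles operands below a threshold.

```python
def clmul_schoolbook(a, b):
    # Carry-less (GF(2)[x]) multiplication: XOR shifted copies of a.
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def clmul_karatsuba(a, b, bits=8):
    # "Hybrid" Karatsuba: recurse while both operands are wider than
    # `bits`, then fall back to the schoolbook routine.
    if a < (1 << bits) or b < (1 << bits):
        return clmul_schoolbook(a, b)
    n = max(a.bit_length(), b.bit_length())
    h = n // 2
    mask = (1 << h) - 1
    a0, a1 = a & mask, a >> h
    b0, b1 = b & mask, b >> h
    low = clmul_karatsuba(a0, b0, bits)
    high = clmul_karatsuba(a1, b1, bits)
    # In characteristic 2, subtraction is XOR, so Karatsuba's middle
    # term (a0+a1)(b0+b1) - low - high becomes a chain of XORs.
    mid = clmul_karatsuba(a0 ^ a1, b0 ^ b1, bits) ^ low ^ high
    return (high << (2 * h)) ^ (mid << h) ^ low
```

A hardware hybrid multiplier follows the same idea, with the recursion threshold chosen to balance area against delay.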
A Fast and Dirty Intro to NetworkX (and D3) – Lynn Cherny
Using the python lib NetworkX to calculate stats on a Twitter network, and then display the results in several D3.js visualizations. Links to demos and source files. I'm @arnicas and live at www.ghostweather.com.
This presentation covers what is meant by an algorithm, what hashing is, and various hashing algorithms and their applications: the approximate counting algorithm, algorithms for counting distinct elements, and frequency estimation algorithms, each with their applications. It also lists the research papers we referenced.
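As one concrete member of the frequency-estimation family mentioned above, here is a minimal count-min sketch in Python (an illustrative sketch; the presentation's exact algorithms and parameters may differ). Each item is hashed into one counter per row, and the estimate is the minimum over rows, so it can overestimate a count but never underestimate it.

```python
import hashlib

class CountMinSketch:
    def __init__(self, width=256, depth=4):
        self.width, self.depth = width, depth
        self.table = [[0] * width for _ in range(depth)]

    def _hash(self, item, row):
        # One deterministic hash per row, derived from sha256.
        digest = hashlib.sha256(f"{row}:{item}".encode()).hexdigest()
        return int(digest, 16) % self.width

    def add(self, item, count=1):
        for row in range(self.depth):
            self.table[row][self._hash(item, row)] += count

    def estimate(self, item):
        # Minimum over rows: collisions only inflate counters,
        # so the true count is a lower bound on every row's cell.
        return min(self.table[row][self._hash(item, row)]
                   for row in range(self.depth))
```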
Secure Linear Transformation Based Cryptosystem using Dynamic Byte Substitution – CSCJournals
Many classical cryptosystems are based on simple substitution. A hybrid cryptosystem using byte substitution and variable-length sub-key groups is a simple nonlinear algorithm, but a cryptanalyst can find the length of each sub-key group because the byte substitution is static, and if the modulo prime number is small, the byte substitution is limited to the first few rows of the S-box. In this paper an attempt is made to introduce nonlinearity into the linear-transformation-based cryptosystem using dynamic byte substitution over GF(2^8). A secret value is added to the index value to shift the substitution to a new secret location dynamically. This adds extra security on top of nonlinearity, confusion, and diffusion. The performance of the method is also evaluated and presented.
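The index-shifting idea can be sketched as follows (a hypothetical toy model, not the paper's exact scheme over GF(2^8); the function names and the per-position offset are my own invention for illustration): a secret offset relocates each byte's lookup within a permuted S-box, and varying it with the position makes the substitution dynamic rather than static.

```python
import random

def make_sbox(seed):
    # A permutation of 0..255 standing in for a real S-box.
    rng = random.Random(seed)
    box = list(range(256))
    rng.shuffle(box)
    return box

def substitute(data, sbox, secret):
    # Dynamic substitution: the secret plus the byte position shift
    # the lookup index, so identical bytes map to different outputs.
    return bytes(sbox[(b + secret + i) % 256] for i, b in enumerate(data))

def invert(data, sbox, secret):
    # Invert the permutation, then undo the dynamic index shift.
    inv = [0] * 256
    for i, v in enumerate(sbox):
        inv[v] = i
    return bytes((inv[b] - secret - i) % 256 for i, b in enumerate(data))
```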
NetworkX is an open-source Python software package for the creation, manipulation, and study of the structure, dynamics, and function of complex networks. NetworkX can load, store and analyze networks, generate new networks, build network models, and draw networks. It is a computational network modelling tool, not a software development tool. The first public release of the library, which is written entirely in Python, was in April 2005.
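A minimal example of the kind of analysis NetworkX supports (assuming the library is installed, e.g. via `pip install networkx`; the graph here is made up):

```python
import networkx as nx

# Build a small undirected social graph.
G = nx.Graph()
G.add_edges_from([("ana", "bob"), ("bob", "carol"),
                  ("ana", "carol"), ("carol", "dave")])

centrality = nx.degree_centrality(G)   # degree / (n - 1) per node
density = nx.density(G)                # edges present / edges possible
path = nx.shortest_path(G, "ana", "dave")
```

The same statistics (centrality, density, shortest paths) are what a D3.js front end would then visualize.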
Backtracking based integer factorisation, primality testing and square root c... – csandit
Breaking a big integer into two factors has been a famous problem in mathematics and cryptography for years. Many cryptosystems use such a big number as their key, or part of a key, on the assumption that it is so big that the fastest factorisation algorithms running on the fastest computers would take an impractically long time to factorise it. Hence, many efforts have been made over the decades to break those cryptosystems by finding the two factors of an integer. In this paper, a new factorisation technique is proposed based on the concept of backtracking: binary bit-by-bit operations are performed to find two factors of a given integer. The proposed solution can also be applied to computing square roots, primality testing, finding the prime factors of integers, etc. If the proposed solution proves efficient enough, it may break the security of many cryptosystems. Implementation and performance comparison of the technique are left for future research.
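The bit-by-bit idea can be sketched as follows (an illustrative Python toy with exponential worst-case cost, under my own assumptions rather than the paper's actual algorithm): fix one bit of each candidate factor at a time, starting from the least significant bit, and backtrack whenever the partial product cannot match n modulo the corresponding power of two.

```python
def factor_backtrack(n):
    # Find a non-trivial factor pair (p, q) with p * q == n by fixing
    # bits from the LSB up and pruning branches whose partial product
    # already disagrees with n modulo 2^(k+1).
    if n % 2 == 0:
        return 2, n // 2
    bits = n.bit_length()

    def search(p, q, k):
        if p * q == n and p > 1 and q > 1:
            return p, q
        if k > bits:
            return None
        mod = 1 << (k + 1)
        for pb in (0, 1):
            for qb in (0, 1):
                cand_p, cand_q = p | (pb << k), q | (qb << k)
                if (cand_p * cand_q) % mod == n % mod:
                    found = search(cand_p, cand_q, k + 1)
                    if found:
                        return found
        return None

    # n is odd, so both factors are odd: bit 0 of each is 1.
    return search(1, 1, 1)
```

For a prime n the search exhausts every branch and returns None, which is how the same routine doubles as a (slow) primality test.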
A New Signature Protocol Based on RSA and Elgamal Scheme – Zac Darcy
In this paper, we present a new signature scheme based on the factoring and discrete logarithm problems, derived from a variant of the ElGamal signature protocol and the RSA algorithm.
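For context, the classical ElGamal signature scheme that such protocols build on works as follows (a textbook sketch with toy parameters, not the paper's new protocol; real deployments need large primes and a fresh random k per message):

```python
import hashlib

def H(m, p):
    # Hash the message into Z_{p-1}.
    return int.from_bytes(hashlib.sha256(m).digest(), "big") % (p - 1)

def sign(m, p, g, x, k):
    # Textbook ElGamal signature: k must be coprime to p - 1.
    r = pow(g, k, p)
    s = (H(m, p) - x * r) * pow(k, -1, p - 1) % (p - 1)
    return r, s

def verify(m, sig, p, g, y):
    # Accept iff g^H(m) == y^r * r^s (mod p).
    r, s = sig
    if not 0 < r < p:
        return False
    return pow(g, H(m, p), p) == (pow(y, r, p) * pow(r, s, p)) % p
```

The correctness follows from H(m) ≡ x·r + k·s (mod p-1), so g^H(m) = (g^x)^r · (g^k)^s = y^r · r^s (mod p).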
Enhancement of DES Algorithm with Multi State Logic – IJORCS
The principal goal in designing any encryption algorithm must be security against unauthorized access or attacks. The Data Encryption Standard (DES) is a symmetric key algorithm used to secure data. Enhanced DES algorithms work by increasing the key length, using a more complex S-box design, increasing the number of states in which the information is represented, or a combination of these. Increasing the key length increases the number of possible keys, making a brute-force attack harder for the intruder. A more complex S-box design yields a better avalanche effect. And as the number of states in which the information is represented increases, it becomes harder for the intruder to recover the actual information. The proposed algorithm replaces the predefined XOR operation applied during the 16 rounds of the standard algorithm with a new operation, called a "Hash function", that depends on two keys: one key is used in the "F" function, and the other consists of a combination of 16 states (0, 1, 2, ..., 13, 14, 15) instead of the ordinary 2-state key (0, 1). This replacement adds a new level of protection and more robustness against breaking methods.
A LIGHT-WEIGHT DISTRIBUTED SYSTEM FOR THE PROCESSING OF REPLICATED COUNTER-LI... – ijdpsjournal
In order to increase availability in a distributed system, some or all of the data items are replicated and stored at separate sites. This is an issue of key concern, especially given the proliferation of wireless technologies and mobile users. However, the concurrent processing of transactions at separate sites can generate inconsistencies in the stored information. We have built a distributed service that manages updates to widely deployed counter-like replicas. There are many heavy-weight distributed systems targeting large information-critical applications; our system is intentionally, and relatively, lightweight, and useful for somewhat less information-critical applications. The service is built on our distributed concurrency control scheme, which combines optimism and pessimism in the processing of transactions: a transaction is processed immediately (optimistically) at any individual replica as long as it satisfies a cost bound, while all transactions are also processed in a concurrent, pessimistic manner to ensure mutual consistency.
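The optimistic/pessimistic split can be sketched as follows (a single-process toy, not the actual distributed service; the class, `cost_bound` parameter, and method names are invented for illustration):

```python
import threading

class CounterReplica:
    """One replica of a counter. Small updates within the cost bound are
    applied immediately (optimistic); larger ones are queued for a
    pessimistic round that every replica would apply in the same order."""

    def __init__(self, cost_bound=10):
        self.value = 0
        self.cost_bound = cost_bound
        self.pending = []          # updates awaiting the pessimistic round
        self.lock = threading.Lock()

    def update(self, delta):
        with self.lock:
            if abs(delta) <= self.cost_bound:
                self.value += delta        # optimistic local commit
                return "committed"
            self.pending.append(delta)     # defer to global processing
            return "queued"

    def run_pessimistic_round(self):
        # In the real service this round is coordinated across sites;
        # here we simply drain the local queue in a deterministic order.
        with self.lock:
            for delta in sorted(self.pending):
                self.value += delta
            self.pending.clear()
```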
HIGHLY SCALABLE, PARALLEL AND DISTRIBUTED ADABOOST ALGORITHM USING LIGHT WEIG... – ijdpsjournal
AdaBoost is an important algorithm in machine learning and is widely used in object detection. AdaBoost works by iteratively selecting the best among weak classifiers, and then combining several weak classifiers to obtain a strong classifier. Even though AdaBoost has proven to be very effective, its learning execution time can be quite large depending upon the application; in face detection, for example, the learning time can be several days. Due to its increasing use in computer vision applications, the learning time needs to be drastically reduced so that an adaptive, near-real-time object detection system can be built. In this paper, we develop a hybrid parallel and distributed AdaBoost algorithm that exploits the multiple cores in a CPU via lightweight threads, and also uses multiple machines via a web-service software architecture, to achieve high scalability. We present a novel hierarchical web-services-based distributed architecture and achieve nearly linear speedup up to the number of processors available to us. In comparison with previously published work, which used a single-level master-slave parallel and distributed implementation [1] and only achieved a speedup of 2.66 on four nodes, we achieve a speedup of 95.1 on 31 workstations, each having a quad-core processor, resulting in a learning time of only 4.8 seconds per feature.
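The sequential core that such systems parallelize looks like this (a minimal AdaBoost with one-dimensional threshold stumps; the exhaustive search over weak classifiers inside the loop is the part the paper spreads across threads and machines):

```python
import math

def stump_predict(x, thresh, polarity):
    # Weak classifier: sign depends on which side of the threshold x is.
    return polarity if x >= thresh else -polarity

def train_adaboost(xs, ys, rounds=3):
    n = len(xs)
    w = [1.0 / n] * n          # per-example weights
    model = []
    for _ in range(rounds):
        # Pick the stump with minimum weighted error (the costly search).
        best = None
        for thresh in xs:
            for polarity in (1, -1):
                err = sum(wi for wi, x, y in zip(w, xs, ys)
                          if stump_predict(x, thresh, polarity) != y)
                if best is None or err < best[0]:
                    best = (err, thresh, polarity)
        err, thresh, polarity = best
        err = max(err, 1e-10)                    # avoid log(0)
        alpha = 0.5 * math.log((1 - err) / err)  # stump's vote weight
        model.append((alpha, thresh, polarity))
        # Re-weight: boost the examples this stump got wrong.
        w = [wi * math.exp(-alpha * y * stump_predict(x, thresh, polarity))
             for wi, x, y in zip(w, xs, ys)]
        total = sum(w)
        w = [wi / total for wi in w]
    return model

def predict(model, x):
    score = sum(a * stump_predict(x, t, p) for a, t, p in model)
    return 1 if score >= 0 else -1
```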
SURVEY ON QOE/QOS CORRELATION MODELS FOR MULTIMEDIA SERVICES – ijdpsjournal
This paper presents a brief review of some existing correlation models which attempt to map Quality of
Service (QoS) to Quality of Experience (QoE) for multimedia services. The term QoS refers to deterministic
network behaviour, so that data can be transported with a minimum of packet loss, delay and maximum
bandwidth. QoE is a subjective measure that involves human dimensions; it ties together user perception,
expectations, and experience of the application and network performance. The Holy Grail of subjective
measurement is to predict it from the objective measurements; in other words predict QoE from a given set
of QoS parameters or vice versa. Whilst there are many quality models for multimedia, most of them are
only partial solutions to predicting QoE from a given QoS. This contribution analyses a number of previous
attempts and optimisation techniques that can reliably compute the weighting coefficients for the QoS/QoE
mapping.
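One well-known QoS-to-QoE mapping of the kind surveyed here is the IQX hypothesis, which models QoE as an exponential function of a QoS impairment. A minimal sketch follows; the coefficients are invented for illustration, as in practice they are fitted per service:

```python
import math

def iqx_qoe(impairment, alpha=3.0, beta=0.5, gamma=1.5):
    # IQX hypothesis: QoE decays exponentially as a QoS impairment
    # (e.g. packet loss in %) grows; gamma is the floor QoE score.
    return alpha * math.exp(-beta * impairment) + gamma

# QoE (MOS-like score) for increasing packet loss.
mos = [round(iqx_qoe(loss), 2) for loss in (0, 1, 2, 5, 10)]
```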
Crypto multi tenant: an environment of secure computing using cloud SQL – ijdpsjournal
Today's most active research area in computing is cloud computing, due to its ability to diminish the costs associated with virtualization, its high availability and dynamic resource pools, and the way it increases the efficiency of computing. But it still has some drawbacks, such as privacy and security. This paper focuses on the security of data in the multi-tenant model that arises from the virtualization feature of cloud computing. We use the AES-128 algorithm and Cloud SQL to protect sensitive data before storing it in the cloud. When an authorized customer requests the data, it is first decrypted and then provided to the customer. The multi-tenant infrastructure is supported by Google, which prefers pushing content in short iteration cycles. As customers are distributed and their demands can arise anywhere at any time, data cannot be stored at a single site; it must also be available at different sites, and for fast access by different users from different places Google is the best option. For high reliability and availability, data is encrypted before being stored in the database and updated after every use. The system is very easy to use without requiring any software, and authenticated users can recover their encrypted and decrypted data, affording efficient data storage security in the cloud.
Advanced delay reduction algorithm based on GPS with Load Balancing – ijdpsjournal
A Mobile Ad-Hoc Network (MANET) is a self-configuring network of mobile nodes connected by wireless links that form an arbitrary topology. The nodes are free to move arbitrarily, so the network's wireless topology may be random and may change quickly. An ad hoc network can be formed by sensor networks consisting of sensing, data processing, and communication components. Congested links occur frequently in such a network, as wireless links inherently have significantly lower capacity than hardwired links and are therefore more prone to congestion. Here we propose an algorithm that reduces delay with the help of a Request_set created from the location information of the destination node; the load is distributed equally across the paths found in the Route_reply (RREP) packets.
Management of context aware software resources deployed in a cloud environmen... – ijdpsjournal
In cloud computing environments, context information is continuously created by context providers and consumed by applications on mobile devices. An important characteristic of cloud-based context-aware services is meeting service level agreements (SLAs) to deliver a certain quality of service (QoS), such as guarantees on response time or price. The response time to a request of context-aware software is affected by loading extensive context data from multiple resources on the chosen server, so the speed of such software decreases during execution. Hence, proper scheduling of such services is indispensable, because the customers face time constraints. In this research, a new scheduling algorithm for context-aware services is proposed, based on classifying similar context consumers and dynamically scoring the requests, to improve the performance of the server hosting highly requested context-aware software while reducing the costs of the cloud provider. The approach is evaluated via simulation and comparison with the gi-FIFO scheduling algorithm. Experimental results demonstrate the efficiency of the proposed approach.
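A toy sketch of score-based request scheduling (the class and the scoring rule are hypothetical, not the paper's algorithm): requests from frequently seen consumer classes receive higher scores and are served first, which is one simple way to favour the server's most popular context consumers.

```python
import heapq

class ContextScheduler:
    """Serve the highest-scoring pending request first; ties are
    broken by arrival order (FIFO)."""

    def __init__(self):
        self.heap = []
        self.seen = {}     # consumer class -> request count so far
        self.order = 0     # arrival counter for stable tie-breaking

    def submit(self, consumer_class, request):
        # Score grows with how often this consumer class appears.
        self.seen[consumer_class] = self.seen.get(consumer_class, 0) + 1
        score = self.seen[consumer_class]
        # heapq is a min-heap, so negate the score for max-first order.
        heapq.heappush(self.heap, (-score, self.order, request))
        self.order += 1

    def next_request(self):
        return heapq.heappop(self.heap)[2]
```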
Implementing database lookup method in mobile wimax for location management a... – ijdpsjournal
Mobile WiMAX plays a vital role in accessing delay-sensitive audio, video streaming, and mobile IPTV. To minimize handover delay, a Location Management Area (LMA) based Multicast and Broadcast Service (MBS) zone is established. The handover delay increases with the size of the MBS zone. In this paper, the Location Management Area is easily identified using a Database Lookup Method to obtain efficient bandwidth utilization along with reduced handover delay and increased throughput. The handover delay and throughput are calculated by implementing this scenario in the OPNET tool.
On the fly porn video blocking using distributed multi gpu and data mining ap... – ijdpsjournal
Preventing users from accessing adult videos while at the same time allowing them to access good educational videos and other material through a campus-wide network is a big challenge for organizations. Most existing web-filtering systems are based on textual content or link analysis; as a result, potential users cannot access qualitative and informative video content that is available online. Adult content detection in video based on motion features or skin detection requires significant computing power and time. The judgment to identify a pornographic video is made by processing every chunk of the video, each consisting of a specific number of frames, sequentially one after another. This is not feasible in real time, when the user has already started watching the video and the decision about blocking needs to be taken within a few seconds.
In this paper, we propose a model where the user is allowed to start watching any video; at the backend, a porn-detection process using extracted video and image features runs on distributed nodes with multiple GPUs (Graphics Processing Units). The video is processed on a parallel and distributed platform in the shortest possible time, and the decision about filtering the video is taken in real time. A track record of blocked content and websites is also cached; for every new video download, the cache is checked to prevent repeated content analysis. On-the-fly blocking is feasible thanks to the latest GPU architectures, CUDA (Compute Unified Device Architecture), and CUDA-aware MPI (Message Passing Interface). It is possible to achieve coarse-grained as well as fine-grained parallelism: video chunks are processed in parallel on distributed nodes, and the porn-detection algorithm can also achieve parallelism across the frames of a chunk using the GPUs on a single node. The result is blocking pornographic videos on the fly while allowing educational and informative videos.
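The chunk-level (coarse-grained) parallelism can be sketched in plain Python, with threads standing in for the distributed GPU nodes; `classify_chunk` here is a hypothetical stand-in for the real feature-based detector:

```python
from concurrent.futures import ThreadPoolExecutor

def classify_chunk(chunk):
    # Stand-in for the GPU detection kernel: flag a chunk if it contains
    # the marker value. The real system extracts motion/skin features
    # from the frames in the chunk instead.
    return any(frame == "flagged" for frame in chunk)

def should_block(video_frames, chunk_size=4, workers=4):
    # Split the frame stream into fixed-size chunks and classify them
    # in parallel; block the video if any chunk is flagged.
    chunks = [video_frames[i:i + chunk_size]
              for i in range(0, len(video_frames), chunk_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return any(pool.map(classify_chunk, chunks))
```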
Survey comparison estimation of various routing protocols in mobile ad hoc ne... – ijdpsjournal
A MANET is an autonomous system of mobile nodes attached by wireless links. It represents a complex and dynamic distributed system consisting of mobile wireless nodes that can freely self-organize into an ad-hoc network topology. The devices in the network may have limited transmission range, so multiple hops may be needed for one node to transfer data to another node in the network. This leads to the need for an effective routing protocol. In this paper we study various classifications of routing protocols and their types for wireless mobile ad-hoc networks, such as DSDV, GSR, AODV, DSR, ZRP, FSR, CGSR, LAR, and Geocast protocols. We also compare the different routing protocols on a given set of parameters: scalability, latency, bandwidth, control overhead, and mobility impact.
Spectrum requirement estimation for imt systems in developing countries – ijdpsjournal
In this paper we analyze the methodology developed by the International Telecommunication Union (ITU) for estimating the spectrum requirement for International Mobile Telecommunications (IMT) systems. The ITU estimates spectrum requirements by following ITU-R Rec. M.1768. Although this methodology is adopted by ITU-R, there are discrepancies when estimating the spectrum requirement for developing countries. The ITU estimates the spectrum requirement from technical and market parameters that were provided by the most developed countries, with high income and high development index. Developed countries have a very rapidly expanding telecom market due to their high level of penetration, dominant user density, and usage of high-volume multimedia services. In contrast, developing countries use less bandwidth-intensive services such as voice communication, low-rate data, and low and medium multimedia. So, while the input parameters are adequate for developed countries, they do not reflect the status of developing countries. For this reason the ITU spectrum estimation overestimates the exact spectrum requirements of IMT systems for developing countries. This paper presents an approach based on technical and market-related parameters that is thought to be applicable for overcoming the shortcomings of the current ITU methodology in estimating the spectrum requirement for developing countries like Bangladesh.
Derivative threshold actuation for single phase wormhole detection with reduc... – ijdpsjournal
Communication in mobile ad hoc networks takes place via multi-hop paths. Owing to the distributed nature and restricted resources of nodes, a MANET is highly prone to wormhole attacks; wormhole attacks pose severe threats to every ad hoc routing protocol and to some security enhancements. Thus, in order to discover wormholes, many different techniques are in use. In all those techniques the threshold is fixed merely by trial and error, or at random. Moreover, wormhole detection is done in two phases: nodes that exceed the threshold are placed in a suspicious set, and a node is then predicted to be a wormhole using other algorithms. Our aim in this paper is to deduce the traffic threshold level by a derivational approach, for identifying wormholes in a single phase in a relay network having dissimilar characteristics.
BREAST CANCER DIAGNOSIS USING MACHINE LEARNING ALGORITHMS – A SURVEY – ijdpsjournal
Breast cancer has become common nowadays. Despite this, not all general hospitals have the facilities to diagnose breast cancer through mammograms, and waiting a long time for a diagnosis may increase the possibility of the cancer spreading. Therefore, computerized breast cancer diagnosis has been developed to reduce the time taken to diagnose breast cancer and to reduce the death rate. This paper summarizes a survey of breast cancer diagnosis using various machine learning algorithms and methods that are used to improve the accuracy of predicting cancer. The survey also gives an overview of the papers that implement breast cancer diagnosis.
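As one concrete example of the classifiers such surveys compare, here is a minimal k-nearest-neighbours classifier on made-up two-feature data (illustrative only; real diagnosis systems train on datasets of mammogram-derived features):

```python
from collections import Counter

def knn_classify(train, query, k=3):
    # train: list of (feature_vector, label) pairs. Classify `query` by
    # majority vote among the k training points nearest in (squared)
    # Euclidean distance.
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(train, key=lambda item: dist(item[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]
```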
Backtracking based integer factorisation, primality testing and square root c...csandit
Breaking a big integer into two factors is a famous problem in the field of Mathematics and
Cryptography for years. Many crypto-systems use such a big number as their key or part of a
key with the assumption - it is too big that the fastest factorisation algorithms running on the
fastest computers would take impractically long period of time to factorise. Hence, many efforts
have been provided to break those crypto-systems by finding two factors of an integer for
decades. In this paper, a new factorisation technique is proposed which is based on the concept
of backtracking. Binary bit by bit operations are performed to find two factors of a given
integer. This proposed solution can be applied in computing square root, primality test, finding
prime factors of integer numbers etc. If the proposed solution is proven to be efficient enough, it
may break the security of many crypto-systems. Implementation and performance comparison of
the technique is kept for future research.
A New Signature Protocol Based on RSA and Elgamal SchemeZac Darcy
In this paper, we present a new signature scheme based on factoring and discrete logarithm problems.
Derived from a variant of ElGamal signature protocol and the RSA alg
Enhancement of DES Algorithm with Multi State LogicIJORCS
The principal goal to design any encryption algorithm must be the security against unauthorized access or attacks. Data Encryption Standard algorithm is a symmetric key algorithm and it is used to secure the data. Enhanced DES algorithm works on increasing the key length or complex S-BOX design or increased the number of states in which the information is to be represented or combination of above criteria. By increasing the key length, the number of combinations for key will increase which is hard for the intruder to do the brute force attack. As the S-BOX design will become the complex there will be a good avalanche effect. As the number of states increases in which the information is represented, it is hard for the intruder to crack the actual information. Proposed algorithm replace the predefined XOR operation applied during the 16 round of the standard algorithm by a new operation called “Hash function” depends on using two keys. One key used in “F” function and another key consists of a combination of 16 states (0,1,2…13,14,15) instead of the ordinary 2 state key (0, 1). This replacement adds a new level of protection strength and more robustness against breaking methods.
A LIGHT-WEIGHT DISTRIBUTED SYSTEM FOR THE PROCESSING OF REPLICATED COUNTER-LI...ijdpsjournal
In order to increase availability in a distributed system some or all of the data items are replicated and
stored at separate sites. This is an issue of key concern especially since there is such a proliferation of
wireless technologies and mobile users. However, the concurrent processing of transactions at separate
sites can generate inconsistencies in the stored information. We have built a distributed service that
manages updates to widely deployed counter-like replicas. There are many heavy-weight distributed
systems targeting large information critical applications. Our system is intentionally, relatively lightweight
and useful for the somewhat reduced information critical applications. The service is built on our
distributed concurrency control scheme which combines optimism and pessimism in the processing of
transactions. The service allows a transaction to be processed immediately (optimistically) at any
individual replica as long as the transaction satisfies a cost bound. All transactions are also processed in a
concurrent pessimistic manner to ensure mutual consistency
HIGHLY SCALABLE, PARALLEL AND DISTRIBUTED ADABOOST ALGORITHM USING LIGHT WEIG...ijdpsjournal
AdaBoost is an important algorithm in machine learning and is being widely used in object detection.
AdaBoost works by iteratively selecting the best amongst weak classifiers, and then combines several weak
classifiers to obtain a strong classifier. Even though AdaBoost has proven to be very effective, its learning
execution time can be quite large depending upon the application e.g., in face detection, the learning time
can be several days. Due to its increasing use in computer vision applications, the learning time needs to
be drastically reduced so that an adaptive near real time object detection system can be incorporated. In
this paper, we develop a hybrid parallel and distributed AdaBoost algorithm that exploits the multiple
cores in a CPU via light weight threads, and also uses multiple machines via a web service software
architecture to achieve high scalability. We present a novel hierarchical web services based distributed
architecture and achieve nearly linear speedup up to the number of processors available to us. In
comparison with the previously published work, which used a single level master-slave parallel and
distributed implementation [1] and only achieved a speedup of 2.66 on four nodes, we achieve a speedup of
95.1 on 31 workstations each having a quad-core processor, resulting in a learning time of only 4.8
seconds per feature.
SURVEY ON QOE\QOS CORRELATION MODELS FORMULTIMEDIA SERVICESijdpsjournal
This paper presents a brief review of some existing correlation models which attempt to map Quality of
Service (QoS) to Quality of Experience (QoE) for multimedia services. The term QoS refers to deterministic
network behaviour, so that data can be transported with a minimum of packet loss, delay and maximum
bandwidth. QoE is a subjective measure that involves human dimensions; it ties together user perception,
expectations, and experience of the application and network performance. The Holy Grail of subjective
measurement is to predict it from the objective measurements; in other words predict QoE from a given set
of QoS parameters or vice versa. Whilst there are many quality models for multimedia, most of them are
only partial solutions to predicting QoE from a given QoS. This contribution analyses a number of previous
attempts and optimisation techniquesthat can reliably compute the weighting coefficients for the QoS/QoE
mapping.
Crypto multi tenant an environment of secure computing using cloud sqlijdpsjournal
Today’s most modern research area of computing is cloud comput
ing due to its ability to diminish the costs
associated with virtualization, high availability, dynamic resource pools and increases the efficien
cy of
computing. But still it contains some drawbacks such as privacy, security, etc. This paper is thorou
ghly
focused on the security of data of multi tenant model obtains from the virtualization feature of clo
ud
computing. We use AES
-
128 bit algorithm and cloud SQL to protect sensitive data before storing in the
cloud. When the authorized customer arises for usag
e of data, then data firstly decrypted after that
provides to the customer. Multi tenant infrastructure is supported by Google, which prefers pushing
of
contents in short iteration cycle. As the customer is distributed and their demands can arise anywhe
re,
anytime so data can’t store at particular site it must be available different sites also. For this f
aster
accessing by different users from different places Google is the best one. To get high reliability a
nd
availability data is stored in encrypted befor
e storing in database and updated every time after usage. It is
very easy to use without requiring any software. This authenticate user can recover their encrypted
and
decrypted data, afford efficient and data storage security in the cloud.
Advanced delay reduction algorithm based on GPS with Load Balancingijdpsjournal
A Mobile Ad-Hoc Network (MANET) is a self-configuring network of mobile nodes connected by wireless
links, to form an arbitrary topology. The nodes are free to move arbitrarily in the topology. Thus, the
network's wireless topology may be random and may change quickly. An ad Hoc network is formed by
sensor networks consisting of sensing, data processing, and communication components. There is frequent
occurrence of congested links in such a network as wireless links inherently have significantly lower
capacity than hardwired links and are therefore more prone to congestion. Here we proposed a algorithm
which involves the reduction in the delay with the help of Request_set created on the basis of the location
information of the destination node. Across the paths found in the Route_reply (RREP) packets the load is
equally distributed
Management of context aware software resources deployed in a cloud environmen...ijdpsjournal
In cloud computing environments, context information is continuously created by context providers and
consumed by the applications on mobile devices. An important characteristic of cloud-based context aware
services is meeting the service level agreements (SLAs) to deliver a certain quality of service (Qos), such as
guarantees on response time or price. The response time to a request of context-aware software is affected
by loading extensive context data from multiple resources on the chosen server. Therefore, the speed of
such software would be decreased during execution time. Hence, proper scheduling of such services is
indispensable because the customers are faced with time constraints. In this research, a new scheduling
algorithm for context aware services is proposed which is based on classifying similar context consumers
and dynamically scoring the requests to improve the performance of the server hosting highly-requested
context-aware software while reducing costs of cloud provider. The approach is evaluated via simulation
and comparison with gi-FIFO scheduling algorithm. Experimental results demonstrate the efficiency of the
proposed approach.
Implementing database lookup method in mobile wimax for location management a...ijdpsjournal
The mobile WiMAX plays a vital role in accessing the delay sensitive audio, video streaming and mobi
le
IPTV. To minimize the handover delay, a Location
Management Area (LMA) based Multicast and
Broadcast Service (MBS) zone is established. The handover delay is increased based on the size of th
e MBS
zone. In this paper, Location Management Area is easily identified by using Database Lookup Method t
o
obtai
n efficient bandwidth utilization along with reduced handover delay and increased throughput. The
handover delay and throughput is calculated by implementing this scenario in OPNET tool.
On the fly porn video blocking using distributed multi gpu and data mining ap...ijdpsjournal
Preventing users from accessing adult videos while at the same time allowing them to access good
educational videos and other materials through a campus-wide network is a big challenge for organizations.
Major existing web filtering systems are based on textual content or link analysis. As a result, potential
users cannot access qualitative and informative video content which is available online. Adult content
detection in video based on motion features or skin detection requires significant computing power and
time. The judgment to identify pornographic videos is taken based on processing every chunk of the video,
consisting of a specific number of frames, sequentially one after another. This solution is not feasible in
real time when a user has started watching the video and a decision about blocking needs to be taken
within a few seconds.
In this paper, we propose a model where the user is allowed to start watching any video; at the backend, a
porn detection process using extracted video and image features runs on distributed nodes with multiple
GPUs (Graphics Processing Units). The video is processed on a parallel and distributed platform in the
shortest time and the decision about filtering the video is taken in real time. A track record of blocked
content and websites is cached, too. For every new video download, the cache is checked to prevent
repetitive content analysis. On-the-fly blocking is feasible due to the latest GPU architectures, CUDA
(Compute Unified Device Architecture) and CUDA-aware MPI (Message Passing Interface). It is possible
to achieve coarse-grained as well as fine-grained parallelism: video chunks are processed in parallel on
distributed nodes, and the porn detection algorithm on the frames of the chunks of a video can also achieve
parallelism using GPUs on a single node. This ultimately results in blocking porn videos on the fly while
allowing educational and informative videos.
Survey comparison estimation of various routing protocols in mobile ad hoc ne...ijdpsjournal
A MANET is an autonomous system of mobile nodes attached by wireless links. It represents a complex
and dynamic distributed system that consists of mobile wireless nodes that can freely self-organize into an
ad-hoc network topology. The devices in the network may have limited transmission range, therefore
multiple hops may be needed by one node to transfer data to another node in the network. This leads to the
need for an effective routing protocol. In this paper we study various classifications of routing protocols
and their types for wireless mobile ad-hoc networks, like DSDV, GSR, AODV, DSR, ZRP, FSR, CGSR,
LAR, and Geocast protocols. We also compare different routing protocols based on a given set of
parameters: Scalability, Latency, Bandwidth, Control-overhead, and Mobility impact.
Spectrum requirement estimation for imt systems in developing countriesijdpsjournal
In this paper we analyze the methodology developed by the International Telecommunication Union (ITU)
for estimating the spectrum requirement for International Mobile Telecommunications (IMT) systems. The
ITU estimates spectrum requirements by following ITU-R Rec. M.1768. Although this methodology is
adopted by ITU-R, there are discrepancies in estimating the spectrum requirement for developing
countries. The ITU estimates the spectrum requirement by considering technical and market parameters
that were provided by the most developed countries with high income and a high development index.
Developed countries have a very rapidly expanding telecom market due to the high level of penetration,
dominant user density and usage of high-volume multimedia services. In contrast, developing countries use
less bandwidth-intensive services such as voice communication, low-rate data, and low and medium
multimedia. So while the input parameters are adequate for developed countries, they do not reflect the
status of developing countries. For this reason the ITU spectrum estimation overestimates the exact
spectrum requirements for IMT systems in developing countries. This paper presents an approach based on
technical and market related parameters, which is thought to be applicable for overcoming the
shortcomings of the current ITU methodology in estimating the spectrum requirement for developing
countries like Bangladesh.
Derivative threshold actuation for single phase wormhole detection with reduc...ijdpsjournal
Communication in mobile ad hoc networks is completed via multi-hop paths. Owing to the distributed
specification and restricted resources of nodes, a MANET is very prone to wormhole attacks; wormhole
attacks place severe threats on every ad hoc routing protocol and on some security enhancements. Thus, in
order to discover wormholes, totally different techniques are in use. In all those techniques, fixation of the
threshold is merely by a trial-and-error methodology or in a random manner. Additionally, wormhole
detection is done in two parts, by putting the nodes that are above the threshold in a suspicious set and then
predicting a node as a wormhole by using some other algorithms. Our aim in this paper is to deduce the
traffic threshold level by a derivational approach for identifying wormholes in a single phase in a relay
network having dissimilar characteristics.
BREAST CANCER DIAGNOSIS USING MACHINE LEARNING ALGORITHMS –A SURVEYijdpsjournal
Breast cancer has become common nowadays. Despite this, not all general hospitals have the facilities to
diagnose breast cancer through mammograms. Waiting a long time for a breast cancer diagnosis may
increase the possibility of the cancer spreading. Therefore computerized breast cancer diagnosis has been
developed to reduce the time taken to diagnose breast cancer and reduce the death rate. This paper
summarizes a survey on breast cancer diagnosis using various machine learning algorithms and methods,
which are used to improve the accuracy of predicting cancer. This survey can also help us know the
number of papers that have been implemented to diagnose breast cancer.
STUDY OF VARIOUS FACTORS AFFECTING PERFORMANCE OF MULTI-CORE PROCESSORSijdpsjournal
Advances in integrated circuit processing allow for more microprocessor design options. As the Chip Multiprocessor (CMP) becomes the predominant topology for leading microprocessors, critical components of the system are now integrated on a single chip. This enables sharing of computation resources that was not previously possible. In addition, the virtualization of these computation resources exposes the system to a mix of diverse and competing workloads. On-chip cache memory is a resource of primary concern, as it can be dominant in controlling overall throughput. This paper presents an analysis of various parameters affecting the performance of multi-core architectures: varying the number of cores, changing the L2 cache size, and varying the directory size from 64 to 2048 entries on 4-node, 8-node, 16-node and 64-node chip multiprocessors. This in turn presents an open area of research on multicore processors with private/shared last level cache, as the future trend seems to be towards tiled architectures executing multiple parallel applications with optimized silicon area utilization and excellent performance.
Target Detection System (TDS) for Enhancing Security in Ad hoc Networkijdpsjournal
The idea of an ad hoc network is a new paradigm that allows mobile hosts (nodes) to communicate without
relying on a predefined infrastructure to keep the network connected. Most nodes are assumed to be mobile
and communication is assumed to be wireless. Ad-hoc networks are collaborative in the sense that each
node is assumed to relay packets for other nodes, which will in return relay its packets. Thus all nodes in an
ad-hoc network form part of the network’s routing infrastructure. The mobility of nodes in an ad-hoc
network means that both the population and the topology of the network are highly dynamic. It is very
difficult to design a once-for-all target detection system; instead, an incremental enrichment strategy may
be more feasible. A safe and sound protocol should at least include mechanisms against known attack
types. In addition, it should provide a way to easily add new security features in the future. Due to the
significance of MANET routing protocols, we focus on the recognition of attacks targeted at MANET
routing protocols.
Intrusion detection techniques for cooperation of nodes in a MANET have been chosen as the security
parameter. This includes the Watchdog and Pathrater approach. It also covers reputation-based schemes, in
which the reputation of every node is measured and propagated to every node in the network. Reputation is
defined as a node’s contribution to network operation. The CONFIDANT [23], CORE [25], and
OCEAN [24] schemes are analyzed and compared based on various parameters.
EFFICIENT SCHEDULING STRATEGY USING COMMUNICATION AWARE SCHEDULING FOR PARALL...ijdpsjournal
Parallel job scheduling is an important field of research in computer science. Finding the best suited
processor on a high-performance or cluster computing system for user-submitted jobs plays an important
role in measuring system performance. A new scheduling technique called communication-aware
scheduling is devised and is capable of handling serial jobs, parallel jobs, mixed jobs and dynamic jobs.
This work focuses on the comparison of communication-aware scheduling with the available parallel job
scheduling techniques, and the experimental results show that communication-aware scheduling performs
better than the available parallel job scheduling techniques.
Mathematics Research Paper - Mathematics of Computer Networking - Final DraftAlexanderCominsky
This Research Paper goes into the mathematics of computer networking hardware as well as encryption methods used to ensure data can safely and securely be transmitted from one point to another across a computer network and the web.
A New Signature Protocol Based on RSA and Elgamal SchemeZac Darcy
In this paper, we present a new signature scheme based on factoring and discrete logarithm problems. Derived from a variant of the ElGamal signature protocol and the RSA algorithm, this method can be seen as an alternative protocol if known systems are broken.
Symmetric Image Encryption Algorithm Using 3D Rossler System - Vishnu G. Kamat and Madhu Sharma
Node Monitoring with Fellowship Model against Black Hole Attacks in MANET - Rutuja Shah, M.Tech (I.T.-Networking), Lakshmi Rani, M.Tech (I.T.-Networking) and S. Sumathy, AP [SG]
Load Balancing using Peers in an E-Learning Environment - Maria Dominic and Sagayaraj Francis
E-Transparency and Information Sharing in the Public Sector - Edison Lubua (PhD)
A Survey of Frequent Subgraphs and Subtree Mining Methods - Hamed Dinari and Hassan Naderi
A Model for Implementation of IT Service Management in Zimbabwean State Universities - Munyaradzi Zhou, Caroline Ruvinga, Samuel Musungwini and Tinashe Gwendolyn Zhou
Present a Way to Find Frequent Tree Patterns using Inverted Index - Saeid Tajedi and Hasan Naderi
An Approach for Customer Satisfaction: Evaluation and Validation - Amina El Kebbaj and A. Namir
Spam Detection in Twitter – A Review - C. Divya Gowri and Professor V. Mohanraj
ANOTHER PROOF OF THE DENUMERABILITY OF THE COMPLEX NUMBERScsandit
This short paper suggests that there might be numerals that do not represent numbers. It
introduces an alternative proof that the set of complex numbers is denumerable, and also an
algorithm for denumerating them.
Implementation of RSA Algorithm with Chinese Remainder Theorem for Modulus N ...CSCJournals
Cryptography has several important aspects in supporting the security of data: it guarantees
confidentiality, integrity and validity (authenticity) of the data. One public-key cryptosystem is RSA. The
greater the size of the modulus n, the more difficult it is to factor the value of n. But a flaw of the RSA
algorithm is that the time required by the decryption process is very long. The theorem used in this
research is the Chinese Remainder Theorem (CRT). The goal is to find out how much time RSA-CRT
takes, for modulus sizes n of 1024 bits and 4096 bits, to perform the encryption and decryption process,
and to implement it in the Java programming language. This implementation is intended as a means of
proof for the tests performed, and produces a cryptographic system with the name "RSA and RSA-CRT
Text Security". The result of testing is that RSA-CRT with 1024 bits is approximately 3 times faster at
performing the decryption. Testing RSA-CRT with 4096 bits leads to the conclusion that its decryption
process is also performed more rapidly. However, a drawback of key generation for 4096-bit RSA and
RSA-CRT is that the time needed to generate the keys is longer.
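To make the CRT speed-up concrete, here is a minimal Python sketch (not the paper's Java implementation) of RSA decryption with and without the Chinese Remainder Theorem; the toy primes and the helper names `decrypt_plain` and `decrypt_crt` are illustrative assumptions.

```python
# Sketch of RSA decryption with and without the Chinese Remainder Theorem.
# Toy-sized primes for illustration only; real RSA uses 1024+ bit primes.
p, q = 61, 53
n = p * q            # modulus
e = 17               # public exponent
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)  # private exponent (modular inverse, Python 3.8+)

def decrypt_plain(c):
    # One full-size modular exponentiation.
    return pow(c, d, n)

def decrypt_crt(c):
    # Exponentiate modulo each prime separately (half-size exponents and
    # moduli, hence roughly a 4x cheaper per-exponentiation cost in theory),
    # then recombine the two residues with the CRT.
    dp, dq = d % (p - 1), d % (q - 1)
    q_inv = pow(q, -1, p)
    m1 = pow(c % p, dp, p)
    m2 = pow(c % q, dq, q)
    h = (q_inv * (m1 - m2)) % p
    return m2 + h * q

m = 65
c = pow(m, e, n)               # encrypt
assert decrypt_plain(c) == decrypt_crt(c) == m
```

With real key sizes the two half-size exponentiations make CRT decryption several times faster, which matches the roughly 3x speed-up the abstract reports for 1024-bit keys.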
RSA ALGORITHM WITH A NEW APPROACH ENCRYPTION AND DECRYPTION MESSAGE TEXT BY A...ijcisjournal
In many research works there has been an orientation towards studying and developing applications of public-key cryptography to secure data while it is transmitted in systems. In this paper we present an approach to encrypt and decrypt message text according to ASCII (American Standard Code for Information Interchange) and the RSA algorithm: the message text is converted into its binary representation, this representation is divided into bytes (groups of eight 0s and 1s), a bijective function is applied between the group of those bytes and the group of ASCII characters, and this mechanism is then made compatible with the RSA algorithm. Finally, a Java application was built to apply this approach directly.
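As a rough illustration of the byte-wise idea described above (not the authors' Java application), the following Python sketch encrypts each ASCII byte of a message with textbook RSA; the key sizes are toy-sized and the function names are assumptions, and byte-wise RSA is insecure in practice since it amounts to a simple substitution over 256 values.

```python
# Toy byte-wise textbook RSA over the ASCII codes of a message.
p, q, e = 61, 53, 17
n = p * q                          # modulus (3233), larger than any byte value
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent

def encrypt_text(text):
    # Each ASCII byte value b (< 256 < n) is encrypted independently.
    return [pow(b, e, n) for b in text.encode("ascii")]

def decrypt_text(cipher):
    # Invert each byte with the private exponent and rebuild the string.
    return bytes(pow(c, d, n) for c in cipher).decode("ascii")

ct = encrypt_text("RSA")
assert decrypt_text(ct) == "RSA"
```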
This paper provides an overview of the RSA algorithm, exploring its foundations and mathematics in detail. It then uses this base information to explore issues with the RSA algorithm that detract from the algorithm's security.
Performance Analysis of Data Encryption Standard DESijtsrd
Information security is becoming much more important in data storage and transmission with the fast progression of digital data exchange by electronic means. Cryptography has come up as a solution which plays a vital role in information security systems against malicious attacks. Cryptography is the most important aspect of communications security and is becoming an important building block for computer security. This security mechanism uses algorithms to scramble data into unreadable text which can only be decoded or decrypted by a party that possesses the associated key. Some of the most commonly used private-key cryptography methods to protect sent messages are LOKI89, LOKI91, LOKI97, DES, triple DES, AES, Blowfish, etc. These algorithms also involve several computational issues, as well as the analysis of the DES algorithm. The main features that specify and differentiate one algorithm from another are the speed of encryption and decryption of the input plain text. This paper analyzes the private-key algorithms DES and LOKI91 by computing the index of coincidence (IC) and time efficiency. Thida Soe | Soe Soe Mon | Khin Aye Thu "Performance Analysis of Data Encryption Standard (DES)" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3 | Issue-5, August 2019, URL: https://www.ijtsrd.com/papers/ijtsrd26650.pdf Paper URL: https://www.ijtsrd.com/computer-science/computer-security/26650/performance-analysis-of-data-encryption-standard-des/thida-soe
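The index of coincidence used in this analysis can be computed directly from letter frequencies; the short Python sketch below is an assumed illustration, not the authors' code.

```python
from collections import Counter

def index_of_coincidence(text):
    """Index of coincidence of a letter sequence: the probability that two
    letters drawn from distinct random positions are the same letter."""
    letters = [c for c in text.upper() if c.isalpha()]
    n = len(letters)
    if n < 2:
        return 0.0
    counts = Counter(letters)
    # Sum over letters of f*(f-1), normalized by the number of ordered pairs.
    return sum(f * (f - 1) for f in counts.values()) / (n * (n - 1))

# A single repeated letter gives the maximum IC of 1.0; well-encrypted text
# approaches the uniform value 1/26 (about 0.0385), which is how IC is used
# to compare how well DES and LOKI91 flatten the plaintext statistics.
assert index_of_coincidence("AAAA") == 1.0
```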
Similar to Design Of Elliptic Curve Crypto Processor with Modified Karatsuba Multiplier and its Performance Analysis
State of ICS and IoT Cyber Threat Landscape Report 2024 previewPrayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Essentials of Automations: The Art of Triggers and Actions in FMESafe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
A tale of scale & speed: How the US Navy is enabling software delivery from l...sonjaschweigert1
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATO’s (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
Enhancing Performance with Globus and the Science DMZGlobus
ESnet has led the way in helping national facilities—and many other institutions in the research community—configure Science DMZs and troubleshoot network issues to maximize data transfer performance. In this talk we will present a summary of approaches and tips for getting the most out of your network infrastructure using Globus Connect Server.
Climate Impact of Software Testing at Nordic Testing DaysKari Kakkonen
My slides at Nordic Testing Days 6.6.2024
Climate impact / sustainability of software testing is discussed in the talk. ICT and testing must carry their part of global responsibility to help with climate warming. We can minimize the carbon footprint, but we can also have a carbon handprint, a positive impact on the climate. Quality characteristics can be extended with sustainability, and then measured continuously. Test environments can be used less, at smaller scale and on demand. Test techniques can be used to optimize or minimize the number of tests. Test automation can be used to speed up testing.
The Metaverse and AI: how can decision-makers harness the Metaverse for their...Jen Stirrup
The Metaverse is popularized in science fiction, and now it is becoming closer to being a part of our daily lives through the use of social media and shopping companies. How can businesses survive in a world where Artificial Intelligence is becoming the present as well as the future of technology, and how does the Metaverse fit into business strategy when futurist ideas are developing into reality at accelerated rates? How do we do this when our data isn't up to scratch? How can we move towards success with our data so we are set up for the Metaverse when it arrives?
How can you help your company evolve, adapt, and succeed using Artificial Intelligence and the Metaverse to stay ahead of the competition? What are the potential issues, complications, and benefits that these technologies could bring to us and our organizations? In this session, Jen Stirrup will explain how to start thinking about these technologies as an organisation.
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
PHP Frameworks: I want to break free (IPC Berlin 2024)Ralf Eggert
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk is aimed at encouraging a more independent approach to using PHP frameworks, moving towards a more flexible and future-proof approach to PHP development.
zkStudyClub - Reef: Fast Succinct Non-Interactive Zero-Knowledge Regex ProofsAlex Pruden
This paper presents Reef, a system for generating publicly verifiable succinct non-interactive zero-knowledge proofs that a committed document matches or does not match a regular expression. We describe applications such as proving the strength of passwords, the provenance of email despite redactions, the validity of oblivious DNS queries, and the existence of mutations in DNA. Reef supports the Perl Compatible Regular Expression syntax, including wildcards, alternation, ranges, capture groups, Kleene star, negations, and lookarounds. Reef introduces a new type of automata, Skipping Alternating Finite Automata (SAFA), that skips irrelevant parts of a document when producing proofs without undermining soundness, and instantiates SAFA with a lookup argument. Our experimental evaluation confirms that Reef can generate proofs for documents with 32M characters; the proofs are small and cheap to verify (under a second).
Paper: https://eprint.iacr.org/2023/1886
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Le nuove frontiere dell'AI nell'RPA con UiPath Autopilot™UiPathCommunity
In this free online event, organized by the Italian UiPath Community, you can explore the new features of Autopilot, the tool that integrates Artificial Intelligence into the development and use of Automations.
📕 Together we will look at some examples of using Autopilot in various tools of the UiPath Suite:
Autopilot for Studio Web
Autopilot for Studio
Autopilot for Apps
Clipboard AI
GenAI applied to Document Understanding
👨🏫👨💻 Speakers:
Stefano Negro, UiPath MVPx3, RPA Tech Lead @ BSP Consultant
Flavio Martinelli, UiPath MVP 2023, Technical Account Manager @UiPath
Andrei Tasca, RPA Solutions Team Lead @NTT Data
Removing Uninteresting Bytes in Software FuzzingAftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing XML documents, and Binutils' readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format) files. Our preliminary results show that AFL+DIAR not only discovers new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
Design Of Elliptic Curve Crypto Processor with Modified Karatsuba Multiplier and its Performance Analysis
International Journal of Distributed and Parallel Systems (IJDPS) Vol.4, No.3, May 2013
DOI: 10.5121/ijdps.2013.4308
Design Of Elliptic Curve Crypto Processor with
Modified Karatsuba Multiplier and its
Performance Analysis
Asst. Prof. Y. A. Suryawanshi
Department of Electronics Engineering
Yeshwantrao Chavan College of Engineering
Nagpur (M.S.), India
yogesh_surya8@rediffmail.com
Neha Trimbak Khadgi, M.Tech (VLSI)
Department of Electronics Engineering
Yeshwantrao Chavan College of Engineering
Nagpur (M.S.), India
nehakhadgi@gmail.com
ABSTRACT
ECDSA stands for “Elliptic Curve Digital Signature Algorithm”. It is used to create a digital signature of
data (a file, for example) in order to allow you to verify its authenticity without compromising its security.
This paper presents the architecture of a finite field multiplier: the proposed multiplier is a hybrid
Karatsuba multiplier used in this processor, and for the multiplicative inverse we choose the Itoh-Tsujii
Algorithm (ITA). This work presents the design of a high-performance elliptic curve crypto processor
(ECCP) for an elliptic curve over the finite field GF(2^233). The curve we choose is the standard curve for
digital signatures. The processor is synthesized for a Xilinx FPGA.
KEYWORDS
Elliptic Curve Cryptography (ECC), Field Programmable Gate Array (FPGA), ITA, Elliptic Curve Crypto Processor (ECCP).
1. INTRODUCTION
In this era, communication using the world wide web has increased. Data is transmitted over wired or wireless networks, and some transactions carry critical data that must remain confidential, with users authenticated; hence some security is required. Cryptography is used to provide secure communications. An authorized user is identified by a cryptographic key: a user holding the correct key is able to access the transmitted information, while impostors and other parties have no rights to use it.
There are two types of cryptographic algorithms: symmetric key and asymmetric key algorithms. Symmetric key cryptography uses the same key for both encryption and decryption. It is the most widely used method because it is fast and simple, but it can be used only after the two parties have agreed on the secret key. This is difficult because exchanging keys is not easy for the users.
1.1 PRINCIPLE OF ECDSA
ECDSA stands for "Elliptic Curve Digital Signature Algorithm". It is used to create a digital signature of data (a file, for example) so that you can verify its authenticity without compromising its security. The ECDSA algorithm is basically all about mathematics. You have a mathematical equation which draws a curve on a graph, and you choose a random point on that curve and consider it your point of origin. Then you generate a random number; this is your private key. You do some mathematical operation using that random number and that point of origin and you get a second point on the curve: that is your public key. When you want to sign a file, you use the private key (the random number) with a hash of the file (a unique number representing the file) in an equation, and that gives you your signature. The signature itself is divided into two parts, called R and S. To verify that the signature is correct, you only need the public key (the point on the curve that was generated from the private key); you put it into another equation with one part of the signature (S), and if the file was signed correctly with the private key, the equation gives you the other part of the signature (R). In short, a signature consists of two numbers, R and S; you use the private key to generate R and S, and if a mathematical equation using the public key and S gives you R, then the signature is valid. There is no way to recover the private key, or to create a signature, using only the public key.
ECDSA uses only integer mathematics; there are no floating point numbers (possible values are 1, 2, 3, etc., but not 1.5). The range of the numbers is bounded by how many bits are used in the signature: more bits mean higher numbers and more security, as it becomes harder to 'guess' the critical numbers used in the equation. As you should know, computers use 'bits' to represent data; a bit is a 'digit' in binary notation (0 or 1), and 8 bits represent one byte. Every time you add one bit, the maximum number that can be represented doubles: with 4 bits you can represent the values 0 to 15 (16 possible values in total), with 5 bits 32 values, with 6 bits 64 values, and so on. One byte (8 bits) can represent 256 values, and 32 bits can represent 4294967296 values (4 giga). Usually ECDSA uses 160 bits in total, which makes for a very huge number with 49 digits in it.
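The doubling rule above is easy to check directly (a quick illustrative snippet, not from the paper):

```python
# Each extra bit doubles the number of representable values.
for bits in (4, 5, 6, 8, 32):
    print(bits, 2 ** bits)       # 16, 32, 64, 256, 4294967296
# A 160-bit value has at most 49 decimal digits.
print(len(str(2 ** 160)))        # 49
```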
ECDSA is used with a SHA1 cryptographic hash of the message to sign (the file). A hash is simply another mathematical operation applied to every byte of data which gives you a number that is unique to your data. For example, the sum of the values of all bytes could be considered a very dumb hash function. So if anything changes in the message (the file), the hash will be completely different. In the case of the SHA1 hash algorithm, the hash is always 20 bytes (160 bits). It is very useful for validating that a file has not been modified or corrupted: you get the 20 byte hash for a file of any size, and you can easily recalculate that hash to make sure it matches. What ECDSA signs is actually that hash, so if the data changes, the hash changes, and the signature is no longer valid.
Elliptic Curve cryptography is based on an equation of the form:
y^2 = (x^3 + a * x + b) mod p
The first thing you notice is that there is a modulo and that the 'y' is squared. This means that for any x coordinate you will have two values of y, and that the curve is symmetric about the X axis. The modulo is a prime number; it makes sure that all the values stay within our range of 160 bits, and it allows the use of "modular square root" and "modular multiplicative inverse" mathematics, which make calculating things easier. Since we have a modulo (p), the possible values of y^2 lie between 0 and p-1, which gives us p total possible values. However, since we are dealing with integers, only a smaller subset of those values will be a "perfect square" (the square of an integer), which gives us N possible points on the curve where N < p (N being the number of perfect squares between 0 and p). Since each such x yields two points (the positive and negative values of the square root of y^2), this means that there are N/2 possible 'x' coordinates that are valid and give a point on the curve. So this elliptic curve has a finite number of points on it, all because of the integer calculations and the modulus. Another thing you need to know about elliptic curves is the notion of "point addition": adding one point P to another point Q gives a point S such that, if you draw a line from P to Q, it intersects the curve at a third point R which is the negative of S (remember that the curve is symmetric about the X axis). We define S = -R, the reflection of R across the X axis.
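These properties can be checked on a toy curve (the parameters p = 17, a = 2, b = 2 below are illustrative only, not the paper's GF(2^233) curve):

```python
# Enumerate the affine points (x, y) satisfying y^2 = x^3 + 2x + 2 (mod 17).
p, a, b = 17, 2, 2
points = [(x, y) for x in range(p) for y in range(p)
          if (y * y - (x ** 3 + a * x + b)) % p == 0]
print(len(points))   # a finite set of points
# Symmetry about the X axis: if (x, y) is on the curve, so is (x, p - y).
print(all((x, (p - y) % p) in points for (x, y) in points))   # True
```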
There is also point multiplication, where k*P is the point P added to itself k times. One particularity of point multiplication is that if you have a point R = k*P, where you know R and you know P, there is no way to find out the value of 'k'. Since there is no point subtraction or point division, you cannot just solve k = R/P. Also, since millions of point additions may have been performed, you just end up on another point on the curve with no way of knowing "how" you got there. You cannot reverse the operation and find the value 'k' that was multiplied with your point P to give the resulting point R. This inability to find the multiplicand even when you know the original and destination points is the whole basis of the security behind the ECDSA algorithm, and the principle is called a "trap door function".
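In practice k*P is computed with repeated point doubling and addition; the sketch below (same illustrative toy curve, p = 17, a = 2, b = 2, G = (5, 1)) also shows why recovering k is hard: the only generic attack is to try every k in turn.

```python
p, a = 17, 2   # toy curve y^2 = x^3 + 2x + 2 mod 17 (illustrative only)

def point_add(P, Q):
    # affine point addition; None stands for the point at infinity
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    if P == Q:
        m = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p   # tangent slope
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, p) % p          # chord slope
    x3 = (m * m - x1 - x2) % p
    return (x3, (m * (x1 - x3) - y1) % p)

def scalar_mult(k, P):
    # double-and-add: P added to itself k times
    R = None
    while k:
        if k & 1:
            R = point_add(R, P)
        P = point_add(P, P)
        k >>= 1
    return R

G = (5, 1)
R = scalar_mult(13, G)
# Recovering k from R and G means trying candidates one by one; feasible on
# this tiny curve, hopeless on a 233-bit curve.
k = next(i for i in range(1, 19) if scalar_mult(i, G) == R)
print(k)   # 13
```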
For ECDSA, you first need to know your curve parameters: a, b, p, N and G. You already know that 'a' and 'b' are the parameters of the curve function (y^2 = x^3 + ax + b), that 'p' is the prime modulus, and that 'N' is the number of points of the curve; but ECDSA also needs 'G', which represents a 'reference point', or point of origin if you prefer. These curve parameters are important: without knowing them, you obviously cannot sign or verify a signature. Verifying a signature is not just about knowing the public key; you also need to know the curve parameters from which that public key was derived.
So first of all, you have a private and a public key. The private key is a random number (of 20 bytes) that is generated, and the public key is a point on the curve obtained by point multiplication of G by the private key. We set 'dA' as the private key (the random number) and 'Qa' as the public key (a point), so we have Qa = dA * G (where G is the reference point in the curve parameters).
First, you need to know that the signature is 40 bytes, represented by two values of 20 bytes each; the first is called R and the second is called S, so the pair (R, S) together is your ECDSA signature. First you must generate a random value 'k' (of 20 bytes) and use point multiplication to calculate the point P = k*G. That point's x value will represent R. Since the point P on the curve is represented by its (x, y) coordinates (each 20 bytes long), you only need the x value (20 bytes) for the signature, and that value is called R. Now all you need is the S value.
To calculate S, you take a SHA1 hash of the message; this gives you a 20 byte value that you consider as a very huge integer number, which we call 'z'. Now you can calculate S using the equation:
S = k^-1 (z + dA * R) mod p
Note the k^-1 here, which is the 'modular multiplicative inverse' of k. It is basically the inverse of k, but since we are dealing with integer numbers, a true inverse is not possible, so it is a number such that (k^-1 * k) mod p is equal to 1. As a reminder, k is the random number used to generate R, z is the hash of the message to sign, dA is the private key, and R is the x coordinate of k*G (where G is the point of origin of the curve parameters).
Now that you have your signature, you want to verify it; you only need the public key (and the curve parameters, of course) to do that. You use this equation to calculate a point P:
P = S^-1 * z * G + S^-1 * R * Qa
If the x coordinate of the point P is equal to R, the signature is valid; otherwise it is not.
Now we are going to verify it, which requires some mathematics. We have:
P = S^-1 * z * G + S^-1 * R * Qa
but Qa = dA * G, so:
P = S^-1 * z * G + S^-1 * R * dA * G = S^-1 (z + dA * R) * G
The x coordinate of P must match R, and R is the x coordinate of k * G, which means that:
k * G = S^-1 (z + dA * R) * G
We can simplify by removing G, which gives us:
k = S^-1 (z + dA * R)
By inverting k and S, we get:
S = k^-1 (z + dA * R)
and that is the equation used to generate the signature. So it matches, and that is the reason why you can verify the signature with it.
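The whole scheme can be traced end to end on a toy curve. The parameters below are illustrative only (y^2 = x^3 + 2x + 2 mod 17 with G = (5, 1) of order n = 19, a fixed z standing in for the hash, and a fixed nonce k); note that the (R, S) arithmetic in this sketch is reduced modulo the order n of G, which is the standard ECDSA convention.

```python
p, a, n = 17, 2, 19
G = (5, 1)

def point_add(P, Q):
    # affine point addition; None stands for the point at infinity
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    if P == Q:
        m = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (m * m - x1 - x2) % p
    return (x3, (m * (x1 - x3) - y1) % p)

def scalar_mult(k, P):
    # double-and-add
    R = None
    while k:
        if k & 1:
            R = point_add(R, P)
        P = point_add(P, P)
        k >>= 1
    return R

def sign(z, dA, k):
    R = scalar_mult(k, G)[0] % n            # x coordinate of k*G
    S = pow(k, -1, n) * (z + dA * R) % n
    return R, S

def verify(z, Qa, R, S):
    w = pow(S, -1, n)                       # S^-1 mod n
    P = point_add(scalar_mult(z * w % n, G),
                  scalar_mult(R * w % n, Qa))
    return P is not None and P[0] % n == R

dA = 7                    # private key (a "random" number)
Qa = scalar_mult(dA, G)   # public key Qa = dA * G
z = 8                     # stand-in for the SHA1 hash of the message, mod n
R, S = sign(z, dA, k=3)   # k would be a fresh random nonce in practice
print(verify(z, Qa, R, S))       # True
print(verify(z + 1, Qa, R, S))   # False: a tampered hash breaks the signature
```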
Note that you need both 'k' (the random number) and 'dA' (the private key) to calculate S, but you only need R and Qa (the public key) to validate the signature. Since R = k*G and Qa = dA*G, and because of the trap door function in ECDSA point multiplication (explained above), we cannot calculate dA or k from knowing Qa and R. This makes the ECDSA algorithm secure: there is no way of finding the private key, and there is no way of faking a signature without knowing the private key.
The ECDSA algorithm is used everywhere, has not been cracked, and is a vital part of most of today's security.
2. IMPLEMENTATION OF ELLIPTIC CURVE CRYPTO PROCESSOR
Figure 1. Block Diagram of Elliptic Curve Crypto Processor.
3. DESCRIPTION OF THE BLOCKS USED IN ELLIPTIC CURVE CRYPTO
PROCESSOR
The quad-based ITA architecture was shown to have the best performance compared to other architectures; therefore we design our 2^n circuit based ITA as a generalization of this architecture.
We now give a brief description of the 2^n based ITA architecture, which is shown in the block diagram. This design presents the high performance ECCP for the curve over the finite field GF(2^233). The finite field multiplier used is of combinational type and follows the Karatsuba algorithm. The quad block is used to raise its input to any power of 2. Buffers are used to latch the outputs of the multiplier and the quad block respectively. A control signal is used in each clock cycle to latch the output of either the multiplier or the power block, but not both. Any intermediate result that is required in a later step is stored in a register bank. A control block, realized as a finite state machine (FSM), generates all the control signals.
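The role of the inversion unit can be sketched in software. Itoh-Tsujii computes a^-1 as a^(2^m - 2) using only squarings and multiplications (the quad circuits accelerate the repeated squarings). The minimal sketch below is illustrative: it uses a small field GF(2^7) with reduction polynomial x^7 + x + 1 instead of the paper's GF(2^233), and the naive square-and-multiply chain rather than ITA's shorter addition chain.

```python
M = 7
POLY = 0b10000011   # x^7 + x + 1, an irreducible trinomial

def gf_mul(a, b):
    # carry-less polynomial multiplication with on-the-fly reduction
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a >> M:
            a ^= POLY
    return r

def gf_inv(a):
    # Fermat: a^(2^M - 2) = a^-1 in GF(2^M); exponent 2^M - 2 = 2 + 4 + ... + 2^(M-1)
    r = 1
    for _ in range(M - 1):
        a = gf_mul(a, a)   # square
        r = gf_mul(r, a)   # multiply
    return r

x = 0b1010011
print(gf_mul(x, gf_inv(x)))   # 1
```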
3.1 REGISTER BANK
The register file contains eight registers, each of size 233 bits. The registers are used to store the results of the computations done at every clock cycle.
Table 1. Utility of Registers in Register Bank

Register  Description
RA1       1. During initialization it is loaded with Px.
          2. Stores the x coordinate of the result.
          3. Also used for temporary storage.
RA2       Stores Px.
RB1       1. During initialization it is loaded with Py.
          2. Stores the y coordinate of the result.
          3. Also used for temporary storage.
RB2       Stores Py.
RB3       Used for temporary storage.
RB4       Stores the curve constant b.
RC1       1. During initialization it is set to 1.
          2. Stores the z coordinate of the projective result.
RC2       Used for temporary storage.
3.2 FINITE FIELD ARITHMETIC UNIT
The arithmetic unit is built using finite field arithmetic circuits. The AU has five inputs (A0 to A3 and Qin) and three outputs (C0, C1 and Qout). The main components of the AU are the quad block and a multiplier. The multiplier is based on the hybrid Karatsuba algorithm. The quad block consists of 14 cascaded quad circuits and is capable of generating the output; it is used only for the inversion operation. The AU also has several adders and squarer circuits.
Figure 2. Architecture of 233 Bit Hybrid Karatsuba Multiplier.
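The hybrid multiplier's core idea can be sketched with one Karatsuba step on binary (carry-less) polynomials; the split width w and the operands below are illustrative only.

```python
def clmul(a, b):
    # schoolbook carry-less multiplication (the base case)
    r, i = 0, 0
    while b >> i:
        if (b >> i) & 1:
            r ^= a << i
        i += 1
    return r

def karatsuba_clmul(a, b, w):
    # split a = ah*x^w + al and b = bh*x^w + bl; three multiplications
    # instead of four, at the cost of extra XORs (additions in GF(2))
    mask = (1 << w) - 1
    al, ah = a & mask, a >> w
    bl, bh = b & mask, b >> w
    lo = clmul(al, bl)
    hi = clmul(ah, bh)
    mid = clmul(al ^ ah, bl ^ bh) ^ lo ^ hi
    return (hi << (2 * w)) ^ (mid << w) ^ lo

a, b = 0b1101101, 0b1011001
print(karatsuba_clmul(a, b, 4) == clmul(a, b))   # True
```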
3.3 CONTROL UNIT
At every clock cycle the control unit produces a control word. The control word signals control the flow of data and also decide the operations performed on the data. There are 33 control signals generated by the control word.
4. RESULT OF ALU BLOCK USED IN ELLIPTIC CURVE CRYPTO PROCESSOR
Figure 3. RTL View of Arithmetic Unit.
Figure 4. Output of ALU.
5. RESULT OF COMPLETE PROCESSOR WITH HYBRID KARATSUBA MULTIPLIER
Figure 5. RTL View of ECCP Processor using Hybrid Karatsuba Multiplier.
Figure 6. RTL View of ECCP Processor.
Figure 7. Output of the ECCP Processor.
6. CONCLUSION
The arithmetic unit is used in an Elliptic Curve Crypto Processor to compute the scalar product kP. This design achieves more efficient FPGA utilization. High speed is obtained by the implementation of quad and squarer circuits. The Quad ITA algorithm is used to reduce computation time. We use a carry look-ahead adder in the ALU because it is faster than a simple ripple-carry adder: it generates and propagates the carry so that less computation time is required.
The area and power requirements of this design are also low. The most important factors contributing to the performance are the finite field multiplication and the finite field inversion.
7. FUTURE WORK
We have designed the ALU and the ECCP completely for the curve over the finite field GF(2^233). The ALU uses a hybrid Karatsuba multiplier; we will next design a modified Karatsuba multiplier and compare the results of both multipliers, then decide which multiplier gives the best results. We will calculate the time, area and power requirements of the ECCP by using each multiplier in the ALU unit.
8. RELATED WORK
Revisiting the Itoh-Tsujii Inversion Algorithm for FPGA Platforms [1]: inversion circuits of this kind are components of several cryptographic implementations such as elliptic curve cryptography. For binary fields generated by irreducible trinomials, the paper proposes a modified ITA for efficient implementation on field programmable gate array (FPGA) platforms.
Theoretical Modelling of the Itoh-Tsujii Inversion Algorithm for Enhanced Performance on k-LUT based FPGAs [5]: maximizing the performance of the ITA on FPGAs requires tuning several design parameters, which is often time consuming and difficult. The paper presents a theoretical model of the ITA for any Galois field and any k-input LUT based FPGA. Such a model helps a hardware designer select the ideal design parameters quickly.
A high performance elliptic curve cryptographic processor for general curves over GF(p) based on a systolic arithmetic unit [4]: this brief presents a high performance elliptic curve cryptographic processor for general curves over GF(p) which features a systolic arithmetic unit. It proposes a new unified systolic array that efficiently implements addition, subtraction and multiplication.
The Elliptic Curve Digital Signature Algorithm [9]: this paper shows that ECDSA is the elliptic curve analogue of the Digital Signature Algorithm. It was accepted in 1999 as an ANSI standard, and in 2000 as IEEE and NIST standards. Unlike the ordinary discrete logarithm problem and the integer factorization problem, no sub-exponential time algorithm is known for the elliptic curve discrete logarithm problem.
REFERENCES
[1] Chester Rebeiro, Sujoy Sinha, "Revisiting the Itoh-Tsujii Inversion Algorithm for FPGA Platforms", IEEE Transactions on VLSI Systems, Vol. 19, No. 8, August 2011.
[2] Surabhi Mahajan, "Security and Privacy in VANET", International Journal of Computer Applications, Volume 1, February 2010.
[3] Kuldeep Singh, "Implementation of Elliptic Curve Digital Signature Algorithm", International Journal of Computer Applications, Volume 2, No. 2, May 2010.
[4] Gang Chen and Guoqiang Bai, "A High Performance Elliptic Curve Cryptographic Processor for General Curves over GF(p) Based on a Systolic Arithmetic Unit", IEEE Transactions on Circuits and Systems-II: Express Briefs, Vol. 54, No. 5, May 2007.
[5] Sujoy Sinha Roy and Chester Rebeiro, "Theoretical Modelling of the Itoh-Tsujii Inversion Algorithm for Enhanced Performance on k-LUT based FPGAs".
[6] Debdeep Mukhopadhyay, "High Performance Elliptic Curve Crypto Processor for FPGA Platforms", NTT Labs, Japan, June 2010.
[7] Qiucheng Deng, Xuefei Bai, Li Guo and Yao Wang, "Hardware Implementation of Multiplicative Inversion in GF(2^m)", Department of Electronics Science and Technology of China.
[8] Mohsen Machhout, Zied Guitouni, Kholdoun Torki and Rached, "Coupled FPGA/ASIC Implementation of Elliptic Curve Crypto Processor", International Journal of Network Security and its Applications (IJNSA), Volume 2, Number 2, April 2010.
[9] Don Johnson, Alfred Menezes and Scott Vanstone, "The Elliptic Curve Digital Signature Algorithm (ECDSA)", Certicom Research, Canada.
[10] Felipe Tellez and Jorge Ortiz, "Behaviour of Elliptic Curve Cryptosystems for the Wormhole Intrusion in MANET: A Survey and Analysis", IJCSNS International Journal of Computer Science and Network Security, Vol. 11, No. 9, September 2011.