ANDROID WALKIE TALKIE
A Dissertation Submitted to School of Computer Science
In Partial Fulfillment of the Requirement of the Degree of
Bachelor in Computer Science
Under the Supervision of
DR. ABDUL HYEE
Deputy Director (ERP), FESCO.
by
Talha Habib
Registration No. FD0121231728
Email: talha@codeot.com
National College of Business Administration and Economics
40/E-1, Gulberg III, Lahore-54660, Pakistan
ANDROID WALKIE TALKIE
A Dissertation Submitted to
School of Computer Science
In Partial Fulfillment of the
Requirement of the Degree of
BS (Computer Science)
by
Talha Habib
Registration No. FD0121231728
Under the Supervision of
DR. ABDUL HYEE
Deputy Director (ERP), FESCO.
National College of Business Administration and Economics
40/E-1, Gulberg III, Lahore-54660, Pakistan
Declaration by student
I hereby declare that the contents of the thesis "Android Walkie Talkie" are research based and that no part has been copied from any published source (except the references and some standard mathematical or genetic models/equations/protocols, etc.). I further declare that this work has not been submitted for the award of any other diploma/degree. The University may take action if the above statement is found inaccurate at any stage.
__________________________
Name: Talha Habib
To,
The Controller of Examinations,
Chenab College of Advanced Studies, Faisalabad
We, the supervisory committee, certify that the contents and form of the thesis submitted by
Mr. Talha Habib have been found satisfactory and recommend that it be processed for evaluation
by the external examiner(s) for the award of the degree.
Supervisory Committee
1. Supervisor :_______________________________
(Dr. Abdul Hyee)
2. Member :_______________________________
3. Member :_______________________________
DEDICATED
TO
The Holy Prophet Hazrat
MUHAMMAD
Peace Be Upon Him
He is the greatest Teacher of the World
&
My Loving & Caring Parents
Who blessed every moment of my life with untiring support, and whose affection, love, encouragement and day-and-night prayers enabled me to achieve such success and honor and to accomplish this task.
My Respectable Teacher,
Who has always been with me and guided me with love and kindness.
Acknowledgement
First of all, I would like to thank “ALLAH Almighty” the Merciful, the Creator of mind; who blessed
me with the knowledge and granted me the courage and ability to complete this documentation
successfully.
Thanks to my parents, who cherished every moment of my life with their support; their hands were always raised for me in prayer.
I deeply appreciate the efforts of my supervisor, Dr. Abdul Hyee, who helped me a great deal. Despite the pressure of his own work he took the time to listen, assist and offer guidance. He knew where to look for answers to obstacles while leading me to the right sources, theory and perspective. He was always available for my questions, always positive, and gave generously of his time and vast knowledge. Without his guidance I would not have been able to accomplish this task.
Talha Habib
Table of contents
DECLARATION BY STUDENT........................................................................................................ 4
ACKNOWLEDGEMENT................................................................................................................... 7
TABLE OF CONTENTS ..................................................................................................... 8
LIST OF FIGURES.............................................................................................................................. 9
LIST OF ABBREVIATIONS ............................................................................................ 10
WALKIE TALKIE ............................................................................................................................. 12
History......................................................................................................................................................................................................12
Amateur radio........................................................................................................................................................................................13
Personal Use............................................................................................................................................................................................14
OBJECTIVES..................................................................................................................................... 14
LIMITATION OF STUDY................................................................................................................ 15
HYPOTHESIS SET TO ACHIEVE THE OBJECTIVE................................................................ 15
Send and receive procedure................................................................................................................................................................17
Connectivity and searching for station............................................................................................................................................18
HAND-SHAKE CLIENT VS HAND-SHAKE SERVER ............................................................... 18
SOFTWARE REQUIREMENT SPECIFICATION ....................................................................... 18
Functional requirements .....................................................................................................................................................................19
Non-Functional Requirements .........................................................................................................................................19
SYSTEM DESIGNS........................................................................................................................... 19
Strings.xml ..............................................................................................................................................................................................21
XML (Extensible Markup Language) ............................................................................................................................................22
Hand-Shake Server-Client..................................................................................................................................................................24
Client side handshake........................................................................................................................................................................25
Server side handshake.......................................................................................................................................................................25
TCP-Three Way Handshaking.........................................................................................................................................................26
SMTP ...................................................................................................................................................................................................27
TLS.......................................................................................................................................................................................................27
WPA2 Wireless ..................................................................................................................................................................................29
Dial up access modems .....................................................................................................................................................................30
SERVER SIDE NDS HANDSHAKE – RECEIVING PACKETS ................................................. 31
Station Information and Connectivity ............................................................................................................................................33
Channel................................................................................................................................................................................................37
Audio Player .......................................................................................................................................................................................43
Audio Recorder ..................................................................................................................................................................................46
Session Manager................................................................................................................................................................................52
State View ...........................................................................................................................................................................................53
Walkie Talkie Services......................................................................................................................................................................55
Switch Button .....................................................................................................................................................................................59
Main Activity......................................................................................................................................................................................65
Channel Session .................................................................................................................................................................................71
Configuration......................................................................................................................................................................................74
Database...............................................................................................................................................................................................74
PROTOCOL ....................................................................................................................................... 75
Basic Requirement of protocols.........................................................................................................................................................75
Protocols and Programming languages...........................................................................................................................................77
Protocol Layering..................................................................................................................................................................................78
Software Layering.................................................................................................................................................................................82
APPLICATION STRUCTURE ........................................................................................................ 85
USE CASE .......................................................................................................................................... 87
SDLC ................................................................................................................................................... 88
SEQUENCE DIAGRAM................................................................................................................... 90
ENTITY RELATION DIAGRAM.................................................................................................... 91
List of figures
List of figures Page No.
Figure 1.0 Working model of JS collider 16
Figure 2.0 sending-receiving voice 17
Figure 3.0 Hand-shaking 24
Figure 4.0 Three-way handshake 26
Figure 5.0 SMTP based handshake 27
Figure 6.0 TLS Layout 27
Figure 7.0 TLS Handshake over SSL 28
Figure 8.0 Simple TLS Handshaking 28
Figure 9.0 TCP Four Way Handshake 29
Figure 10.0 Modem/Device/Server connection hand-shaking 30
Figure 11.0 how ping works 33
Figure 12.0 App setting layout/Station name setting 34
Figure 13.0 Volume control in setting layout/screen 34
Figure 14.0 Use volume buttons as PTT on settings screen 35
Figure 15.0 Wi-Fi Status check on start 36
Figure 16.0 Channel 37
Figure 17.0 Playing voice using inner audio player 46
Figure 18.0 Protocol Layering without modem 78
Figure 19.0 Protocol Layering with modem/router 80
Figure 20.0 Software Layering 82
Figure 21.0 Protocols and software layering working model 84
Figure 22.0 Use Case 87
Figure 23.0 SDLC concept 88
Figure 24.0 Sequence Design Process – water fall model 90
Figure 25.0 Entity Diagram for Walkie Talkie 91
List of abbreviations
NDS: Network Discovery Service
N: Nodes
P2P: Peer-to-peer
N2N: Node-to-node
JS: JavaScript
JSC: JS-Collider
BT: Bluetooth
DPI: Dots per inch
PX: Pixels
UHF: Ultra high frequency
VHF: Very high frequency
PTT: Push-to-talk
SCR: Set, Complete, Radio (US military set designation)
RF: Radio frequency
HT: Handheld transceiver
AN/PRC: Army-Navy / Portable Radio Communication
AN/PRR: Army-Navy / Portable Radio Receiver
HDPI: High-density pixels
XHDPI: Extra-high-density pixels
MDPI: Medium-density pixels
LDPI: Low-density pixels
ACK: Acknowledgment
SYN: Synchronize
FRS: Family Radio Service
GMRS: General Mobile Radio Service
PMR: Private Mobile Radio
GPS: Global Positioning System
NFS: Network File System
DHCP: Dynamic Host Configuration Protocol
NPM: Node Package Manager
IEEE: Institute of Electrical and Electronics Engineers
Abstract
Android Wi-Fi Walkie Talkie is an application based on the Walkie Talkie concept that uses Wi-Fi technology for autonomous communication between devices. Over the last several years technology has been moving forward faster than before, yet the Walkie Talkie has remained a genuinely useful tool: it is still in use by police and for other metered communication, for example contacting support or administration within a large building, or calling out for management. This study investigates the possibility of developing a lightweight alternative: a peer-to-peer communication app that uses only common gateways, such as an ordinary DHCP server or a modem's Wi-Fi hotspot, to connect Android devices and treat them as Walkie Talkie handsets.
In the study a prototype was first developed as a simple sound-recorder application that sent the recorded voice over the medium to another device, where the application played it back, making voice communication straightforward. It was initially implemented as a Bluetooth voice sender and receiver; the more the app was used, the more new features and flexibility became apparent, and with the help of real-time helper libraries such as JS-Collider it became flexible enough to open its own socket port.
Keywords: Android Wi-Fi communication.
Chapter 1
Walkie Talkie
A Walkie Talkie is a hand-held, portable, two-way radio transceiver. Its development during the
Second World War has been variously credited to Donald L. Hings, radio engineer Alfred J. Gross,
and engineering teams at Motorola. First used for infantry, similar designs were created for field
artillery and tank units, and after the war, Walkie Talkies spread to public safety and eventually
commercial and jobsite work. A Walkie Talkie is a half-duplex communication device; multiple Walkie
Talkies use a single radio channel, and only one radio on the channel can transmit at a time, although
any number can listen. The transceiver is normally in receive mode; when the user wants to talk, he
presses a "push-to-talk” button that turns off the receiver and turns on the transmitter. Typical Walkie
Talkies resemble a telephone handset, possibly slightly larger but still a single unit, with an antenna
mounted on the top of the unit. Where a phone's earpiece is only loud enough to be heard by the user,
a Walkie Talkie's built-in speaker can be heard by the user and those in the user's immediate vicinity.
Hand-held transceivers may be used to communicate between each other, or to vehicle-mounted or
base stations.
History
The Walkie Talkie was developed by the US military during World War II. The first radio transceiver
to be widely nicknamed "Walkie Talkie" was the backpacked Motorola SCR-300, created by an
engineering team in 1940 at the Galvin Manufacturing Company. The team consisted of Dan Noble,
who conceived of the design using frequency modulation; Henryk Magnuski, who was the principal
RF engineer; Marion Bond; Lloyd Morris; and Bill Vogel. The first hand-held Walkie Talkie was the
AM SCR-536 transceiver also made by Motorola, named the "Handie-Talkie". The terms are often
confused today, but the original Walkie Talkie referred to the back mounted model, while the handie-
talkie was the device which could be held entirely in the hand. Both devices used vacuum tubes and
were powered by high voltage dry cell batteries. Alfred J. Gross, a radio engineer and one of the
developers of the Joan-Eleanor system, also worked on the early technology behind the Walkie Talkie
between 1934 and 1941, and is sometimes credited with inventing it. Canadian inventor Donald
Hings is also credited with the invention of the Walkie Talkie: he created a portable radio signaling
system for his employer CM&S in 1937. He called the system a "packset", but it later became known
as the "Walkie Talkie". In 2001, Hings was formally decorated for the invention's significance to the war effort.
Hings' model C-58 "Handy-Talkie" was in military service by 1942, the result of a secret R&D effort
that began in 1940. Following World War II, Raytheon developed the SCR-536's military replacement,
the AN/PRC-6. The AN/PRC-6 circuit used 13 vacuum tubes; a second set of 13 tubes was supplied
with the unit as running spares. The unit was factory set with one crystal which could be changed to a
different frequency in the field by replacing the crystal and re-tuning the unit. It used a 24-inch whip
antenna. There was an optional handset H-33C/PT that could be connected to the AN/PRC-6 by a 5-
foot cable. A web sling was provided.
In the mid-1970s the United States Marine Corps initiated an effort to develop a squad radio to
replace the unsatisfactory helmet-mounted AN/PRR-9 receiver and receiver/transmitter hand-held
AN/PRT-4. The AN/PRC-68 was first produced in 1976 by Magnavox, was issued to the Marines in
the 1980s, and was adopted by the US Army as well. The abbreviation HT, derived from Motorola's
"Handie Talkie" trademark, is commonly used to refer to portable handheld ham radios, with "Walkie
Talkie" often used as a layman's term or specifically to refer to a toy. Public safety or commercial
users generally refer to their handhelds simply as "radios". Surplus Motorola Handie Talkies found
their way into the hands of ham radio operators immediately following World War II. Motorola's
public safety radios of the 1950s and 1960s, were loaned or donated to ham groups as part of the
Civil Defense program. To avoid trademark infringement, other manufacturers use designations such
as "Handheld Transceiver" or "Handie Transceiver" for their products
Amateur radio
Walkie Talkies are widely used among amateur radio operators. While converted commercial gear by
companies such as Motorola are not uncommon, many companies such as Yaesu, Icom, and
Kenwood design models specifically for amateur use. While superficially similar to commercial and
personal units, amateur gear usually has a number of features that are not common to other gear,
including:
Wide-band receivers, often including radio scanner functionality, for listening to non-amateur radio
bands.
Multiple bands; while some operate only on specific bands such as 2 meters or 70 cm, others support
several UHF and VHF amateur allocations available to the user. Since amateur allocations usually are
not channelized, the user can dial in any frequency desired in the authorized band. Multiple
modulation schemes: a few amateur HTs may allow modulation modes other than FM, including AM,
SSB, and CW, and digital modes such as radio-tele-type or PSK31. Some may have TNCs built in to
support packet radio data transmission without additional hardware. A newer addition to the Amateur
Radio service is Digital Smart Technology for Amateur Radio or D-STAR. Handheld radios with this
technology have several advanced features, including narrower bandwidth, simultaneous voice and
messaging, GPS position reporting, and call-sign routed radio calls over a wide ranging international
network.
As mentioned, commercial Walkie Talkies can sometimes be reprogrammed to operate on amateur
frequencies. Amateur radio operators may do this for cost reasons or due to a perception that
commercial gear is more solidly constructed or better designed than purpose-built amateur gear.
Personal Use
The personal Walkie Talkie has become popular also because of the U.S. Family Radio Service and
similar license-free services in other countries. While FRS Walkie Talkies are also sometimes used as
toys because mass-production makes them low cost, they have proper superheterodyne receivers and
are a useful communication tool for both business and personal use. The boom in license-free
transceivers has, however, been a source of frustration to users of licensed services that are
sometimes interfered with. For example, FRS and GMRS overlap in the United States, resulting in
substantial pirate use of the GMRS frequencies. Use of the GMRS frequencies requires a license;
however, most users either disregard this requirement or are unaware. Canada reallocated frequencies
for license-free use due to heavy interference from US GMRS users. The European PMR446
channels fall in the middle of a United States UHF amateur allocation, and the US FRS channels
interfere with public safety communications in the United Kingdom. Designs for personal Walkie
Talkies are in any case tightly regulated, generally requiring non-removable antennas and forbidding
modified radios.
Objectives
• The broad objective was to study real-time communication on Android and the functionality of a Walkie Talkie. The specific objectives of the study were:
• To examine real-time communication on Android.
• To examine how flexibly Android can handle communication and how far one can go using Java as the language and Android as the OS.
• To examine whether Android can be used as a sender-receiver without using GSM, internet services, or any other third-party software or hardware.
• To determine whether Android can act as a sender-receiver while staying offline and using only the local network to communicate.
• To examine local network communication speed and its limitations.
• To examine how many nodes can communicate through one channel, and the channel's speed and limitations.
• To examine how many nodes can communicate with each other at the same time while staying on one channel.
• To determine whether increasing the number of nodes slows down the channel.
• To determine whether increasing the number of nodes slows down the Android device.
Limitation of study
• Not all modems could be used as the communication medium, because of differing firewall settings, variation in firmware, or the absence of DHCP.
• Additional/third-party firewalls on Android, or a firewall in the medium, were a challenge, because inbound and outbound connections on the specific channel must be open for Android to send and receive voice over the line without interruption.
• Variation in Android OS versions was a big challenge; the package dependencies do not work on modified or older OS versions, and the app requires at least Android 4.x to perform at its fullest.
• BT (Bluetooth), infrared and NFS were so slow that they could only handle 2-3 nodes per channel.
• A GSM-based broadcast channel would have been possible, but because the objective was to make the app work offline, GSM was not used to broadcast the signal; instead a medium such as Wi-Fi, a hotspot, or a DHCP network was introduced.
• Wi-Fi/hotspot-based media tend to give faster connections but fewer nodes; a hotspot can only handle roughly 50-70 nodes.
• DHCP is the best and, in practice, the only option, but connection quality over DHCP is only above average, and the firewall is the main challenge with it.
Hypothesis set to achieve the objective.
The objective of the study is to make communication as fast and as close to real time as possible using only the local network.
It was hypothesized that real-time communication might be possible by using a node-based module or a JS/AJAX-style service to update the communication line. Instead of implementing everything natively, which would make the app large, it seemed best to use a JS-based library; this is both a shortcut and a safeguard when there are too many nodes and the native system is too busy running its own operations, which could otherwise crash the app. NodeJS is relatively new on the market, but almost every developer knows it is no less than a standalone, stable platform; its most attractive feature here was Socket.IO, a socket-based system that works over a custom port, committing and emitting messages to communicate. The real-time speed and performance of NodeJS is unmatched, so I started looking for an alternative: a way to use a JS-style library if not a NodeJS module itself, because NodeJS runs on its own platform using NPM, and Android cannot emulate a Node module inside a native app. In the end JS-Collider was used as the alternative; it provides TCP/IP session emit and commit just like a Node module, and it is also very lightweight and developer-friendly.
Chapter 2
JS-Collider Working
According to the authors and developers of JS-Collider:
“JS-Collider is an asynchronous event-driven Java network (NIO) application framework designed to
provide maximum performance and scalability for applications having not too many connections but
significant amount of network traffic (both incoming and outgoing). Performance is achieved by
specially designed threading model and lock-free algorithms”
Working module of JS-collider:
(Figure 1.0 – Working model of JS collider)
Figure 1.0 shows how JS-Collider works.
The blocks marked "S" are devices (nodes) connected within the local area network. Each device emits its station number and acts as a hand-shake server on its own, while looking for hand-shake clients to validate and bind a connection with. The green "S" blocks are devices connected to the local area network that are not verified yet. The purple "S" block is a device emitting its station info to the local area network, and the yellow "S" block is a device validating that station info and establishing connectivity.
The model keeps expanding as more nodes connect. Each node is a server in its own right and treats the other devices as clients; the DHCP server, hotspot or modem is just the medium used to establish the connection between them for interaction and communication.
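To make the model concrete, the following sketch shows roughly how a node can start a JS-Collider event loop and register an acceptor so that it acts as its own hand-shake server. It is a minimal illustration only: the Collider.create()/addAcceptor()/run() entry points follow JS-Collider's published examples and may differ slightly between versions, and the listener returned here is a placeholder rather than the app's real HandshakeServerSession.

import java.net.InetSocketAddress;
import org.jsl.collider.Acceptor;
import org.jsl.collider.Collider;
import org.jsl.collider.Session;

public class StationNode {
    public static void main( String[] args ) throws Exception {
        final Collider collider = Collider.create();                        // event loop for all sockets
        collider.addAcceptor( new Acceptor( new InetSocketAddress(0) ) {    // port 0 = any free port
            public void onAcceptorStarted( Collider c, int localPort ) {
                System.out.println( "acceptor started on port " + localPort );
            }
            public Session.Listener createSessionListener( Session session ) {
                return null;   // placeholder: a real node would return a HandshakeServerSession here
            }
        } );
        collider.run();   // blocks, dispatching network events to the listeners
    }
}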
Working with JS-collider:
JS-Collider is used as the connectivity service that ties the nodes together and makes communication as close to real time as possible; through its emit and commit functionality, sending and receiving were achieved.
At that point the only remaining challenge was sending voice over the medium. The Wi-Fi hotspot completes the first step of Android connection; JS-Collider completes the other two steps, connectivity over the broadcast channel and sending/receiving. The remaining complexity was "how to keep the emit and commit alive".
If emit and commit run on an interval, or the nodes stay connected peer-to-peer, this is easy to manage. A Walkie Talkie, however, has a PTT (push-to-talk) button, and the user has to press it before broadcasting his voice over the medium. Starting the emit on button press and stopping it on release is not trivial, because the devices are then no longer continuously connected peer-to-peer, and because of the variation in connections another solution was needed.
Send and receive procedure
Once sending and receiving had been worked out, it was decided to send voice over the medium by recording sound and sending it as one commit; when the other device receives the emit it automatically plays the committed voice. Recording the voice and treating it as a data chunk (slicing it before sending) was used to keep each packet light and make the send-receive-play cycle possible.
(Figure 2.0 sending-receiving voice)
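As a rough sketch of this record-and-slice step (not the app's exact code), Android's AudioRecord can deliver the microphone input in small PCM chunks that are then committed to the channel one by one. The sample rate, the pttPressed flag and the sendOverChannel() helper below are assumptions made for illustration.

import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;

// Capture microphone audio and hand it on in small chunks while the PTT button is held.
final int sampleRate = 8000;                              // assumed sample rate
final int minBuf = AudioRecord.getMinBufferSize( sampleRate,
        AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT );
final AudioRecord recorder = new AudioRecord( MediaRecorder.AudioSource.MIC,
        sampleRate, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, minBuf );
final byte[] chunk = new byte[minBuf];
recorder.startRecording();
while (pttPressed) {                                      // hypothetical flag tied to the PTT button
    final int read = recorder.read( chunk, 0, chunk.length );
    if (read > 0) {
        sendOverChannel( chunk, read );                   // hypothetical helper: commit this chunk to the session
    }
}
recorder.stop();
recorder.release();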
Connectivity and searching for station.
The application uses a hard-coded string/parameter as its station signature, which helps other devices running the same application find each other; this procedure is the hand-shake. Once the application is running it broadcasts its signature within the local network. If the same application is running somewhere else on the local network, both will hand-shake and confirm each other's identity; while they are validating and connecting, each application receives the other device's name (station name). The application builds a list of connected nodes and displays it, each node with its own name, so the user knows exactly who he is talking to.
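A minimal sketch of this broadcast-and-lookup step using Android's NsdManager, which the connectivity code later in this document also relies on. The service type string, the localPort variable and the two listeners are illustrative placeholders, not the app's actual values.

import android.content.Context;
import android.net.nsd.NsdManager;
import android.net.nsd.NsdServiceInfo;

// Register this station so that identical apps on the LAN can discover it.
NsdManager nsdManager = (NsdManager) context.getSystemService( Context.NSD_SERVICE );
NsdServiceInfo info = new NsdServiceInfo();
info.setServiceName( "WalkieTalkie-Station-1" );   // the station name shown to peers
info.setServiceType( "_wtalkie._tcp." );           // assumed application signature
info.setPort( localPort );                         // port opened by the acceptor
nsdManager.registerService( info, NsdManager.PROTOCOL_DNS_SD, registrationListener );
// Look for other stations carrying the same signature.
nsdManager.discoverServices( "_wtalkie._tcp.", NsdManager.PROTOCOL_DNS_SD, discoveryListener );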
Hand-shake Client vs Hand-Shake Server
The hand-shake procedure is divided into two parts:
• Server hand-shake
• Client hand-shake
1. Server hand-shake:
Server hand-shake nodes are devices that act as the DHCP server by turning on their hotspot and connecting other devices through it.
2. Client hand-shake:
Client hand-shake nodes are ordinary devices connected to each other through a centralized medium/DHCP/network; these devices look up other devices within the network and hand-shake with them to learn their names and add them to the user interface.
Software requirement specification
The application does not require any additional library support from Android or any other third-party resource to function; all libraries the application needs are already part of the application. There are no external API or resource calls. However, the application requires certain Android permissions to work normally; without those permissions the application cannot run.
The permissions required by the application from the Android system are:
o Internet permission
o Wi-Fi permission
o Recording permission
o Change/Read Wi-Fi state permission
Functional requirements
Wi-Fi hardware and API level 21 or above are required and encouraged. Older versions of Android come with minimal hardware specifications, which can lead to application crashes and device lag. The app may not install on an older version in the first place; even if it does install it may not work, and even if it installs and works, connecting more devices will slow down sending/receiving on weak hardware, resulting in device and application lag or crashes. The application was tested on various API levels and Android OS versions; the test results are as follows:
#  OS VERSION  API VERSION  STATUS
1  2.x.x       8            FAIL
2  3.x.x       12           FAIL
3  4.x.x       18           BUGS
4  5.x.x       21           PASS
5  6.x.x       23           PASS
6  7.x.x       25           PASS
Non-Functional Requirements
Devices should be on the same local network; the application is intended to work on local networks only and cannot work online or remotely.
Chapter 3
Chapter 3
System Designs
The app has various modules working together. Instead of a database, the app uses shared preferences to store settings. The modules/methods that are part of the app are as follows:
1. Hand-Shake Client
2. Hand-Shake Server
3. Station Information
4. Connectivity
5. Channel
6. Audio player
7. Audio recorder
8. Protocol
9. Session manager
10. State view
11. Walkie Talkie services
12. Switch buttons
13. Main Activity
14. Channel Session
15. Configurations
The app has several layouts, as follows:
1. Home
a. Connected devices lists
b. Drop down
i. About
ii. Setting
iii. Exit
2. Wi-Fi connectivity
The app ships image assets in 4 densities, the app logo in 5 densities, and the status bar logo in 5 densities; the densities used for the application's images and logos are as follows:
1. image
a. HDPI
b. MDPI
c. XHDPI
d. XXHDPI
2. App logo:
a. HDPI
b. MDPI
c. XHDPI
d. XXHDPI
e. LDPI
3. Status bar logo:
a. HDPI
b. MDPI
c. XHDPI
d. XXHDPI
e. LDPI
App permissions are requested through the main activity and validated from the other methods accordingly; whenever a process is about to run, the first step the system takes is to validate the permissions. All of those permissions are declared in the manifest (AndroidManifest.xml) file.
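As an illustration of that validation step (assuming API 23+ runtime permissions and the AndroidX compatibility helpers; the request code is an arbitrary placeholder), the main activity might check and request the recording permission like this:

import android.Manifest;
import android.content.pm.PackageManager;
import androidx.core.app.ActivityCompat;
import androidx.core.content.ContextCompat;

// Inside the main activity: make sure the microphone permission is granted before recording.
private static final int REQ_RECORD_AUDIO = 1;   // arbitrary request code
private void ensureRecordPermission() {
    if (ContextCompat.checkSelfPermission( this, Manifest.permission.RECORD_AUDIO )
            != PackageManager.PERMISSION_GRANTED) {
        ActivityCompat.requestPermissions( this,
                new String[]{ Manifest.permission.RECORD_AUDIO }, REQ_RECORD_AUDIO );
    }
}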
Strings.xml
Android has a feature called string resources, where all the strings used in the app are declared and defined. Whenever a string is needed it is referenced by its name. For example, if I have a sentence saying "this app is developed by Talha Habib", I can define it using XML markup in the strings.xml file under the res/values folder. Each string entry in strings.xml is a DOM node in the XML structure and can be given an id (its name attribute), so the string can be referenced and reused whenever it is needed later.
XML has a DOM-object-based structure in which we can define our own node names and our own attributes on those nodes. For example, it could be something like this:
<class>
<section name="c">
<student name="Talha Habib" roll="1807" id="talha"></student>
</section>
<section name="d">
<student name="Umer Najeeb" roll="1802" id="umer"></student>
</section>
</class>
This is how the code looks in an XML file. It describes a class with two sections, "c" and "d"; each contains a "student" element holding the student's id, name and other attributes, which could be anything. If we need to know the name of the person with roll number 1807, we read the "name" attribute of that element and can keep working through our records. In exactly the same way, strings.xml holds values we can reuse later: we know our app name is "Wi-Fi Walkie Talkie", so whenever we need to display it again we do not have to type it out again; if we have given it an id, all we need to do is reference that id. XML stands for Extensible Markup Language (note the spelling: "Extensible", not "Xtensible").
XML nodes are called elements, not tags. In the HTML DOM the nodes are called tags; XML and HTML/DHTML may look similar in syntax, but they work differently and have different scopes.
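For instance, assuming the app name is stored in strings.xml as <string name="app_name">Wi-Fi Walkie Talkie</string>, it can be reused from Java anywhere a Context is available:

// Reusing the string resource instead of retyping the text:
String appName = getString( R.string.app_name );   // resolves to "Wi-Fi Walkie Talkie"
setTitle( appName );                                // e.g. show it in the activity title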
XML (Extensible Markup Language)
In computing, Extensible Markup Language is a markup language that defines a set of rules for
encoding documents in a format that is both human-readable and machine-readable. The W3C's XML
1.0 Specification and several other related specifications, all of them free open standards, define XML.
The design goals of XML emphasize simplicity, generality, and usability across the Internet. It is a
textual data format with strong support via Unicode for different human languages. Although the
design of XML focuses on documents, the language is widely used for the representation of arbitrary
data structures such as those used in web services. Several schema systems exist to aid in the
definition of XML-based languages, while programmers have developed many application
programming interfaces to aid the processing of XML data. Hundreds of document formats using XML syntax have been developed, including RSS, Atom, SOAP, and XHTML. XML-based formats became the default for many office-productivity tools, including Microsoft Office, OpenOffice.org and LibreOffice, and Apple's iWork.
XML has also provided the base language for communication protocols such as XMPP. Applications for the Microsoft .NET Framework use XML files for configuration, and Apple has an implementation of a registry based on XML. XML has come into common use for the interchange of data over the Internet. IETF RFC 7303 gives rules for the construction of Internet media types for use when sending XML. It also defines the media types application/xml and text/xml, which say only that the data is in XML and nothing about its semantics. The use of text/xml has been criticized as a potential source of encoding problems, and it has been suggested that it should be deprecated. When an attribute value has to carry a list, it is common to impose some format beyond what XML itself defines; usually this is a comma- or semicolon-delimited list or, if the individual values are known not to contain spaces, a space-delimited list. For example, in <div class="inner greeting-box">Welcome!</div> the attribute "class" has the single value "inner greeting-box", which also indicates the two CSS class names "inner" and "greeting-box".
XML declaration
XML documents consist entirely of characters from the Unicode repertoire. Except for a small
number of specifically excluded control characters, any character defined by Unicode may appear
within the content of an XML document. XML includes facilities for identifying the encoding of the
Unicode characters that make up the document, and for expressing characters that, for one reason or
another, cannot be used directly.
Valid characters
Unicode code points in the following ranges are valid in XML 1.0 documents: U+0009, U+000A,
U+000D: these are the only C0 controls accepted in XML 1.0; U+0020–U+D7FF, U+E000–
U+FFFD: this excludes some non-characters in the BMP; U+10000–U+10FFFF: this includes all
code points in supplementary planes, including non-characters. XML 1.1 extends the set of allowed
characters to include all the above, plus the remaining characters in the range U+0001–U+001F. At
the same time, however, it restricts the use of C0 and C1 control characters other than U+0009,
U+000A, U+000D, and U+0085 by requiring them to be written in escaped form. In the case of C1
characters, this restriction is a backwards incompatibility; it was introduced to allow common
encoding errors to be detected. The code point U+0000 is the only character that is not permitted in
any XML 1.0 or 1.1 document.
Encoding detection
The Unicode character set can be encoded into bytes for storage or transmission in a variety of
different ways, called "encodings". Unicode itself defines encodings that cover the entire repertoire;
well-known ones include UTF-8 and UTF-16. There are many other text encodings that predate
Unicode, such as ASCII and ISO/IEC 8859; their character repertoires in almost every case are
subsets of the Unicode character set. XML allows the use of any of the Unicode-defined encodings,
and any other encodings whose characters also appear in Unicode. XML also provides a mechanism
whereby an XML processor can reliably, without any prior knowledge, determine which encoding is
being used. Encodings other than UTF-8 and UTF-16 are not necessarily recognized by every XML
parser.
Hand-Shake Server-Client
In information technology, telecommunications, and related fields, handshaking is an automated
process of negotiation that dynamically sets parameters of a communications channel established
between two entities before normal communication over the channel begins. It follows the physical
establishment of the channel and precedes normal information transfer. The handshaking process
usually takes place in order to establish rules for communication when a computer sets about
communicating with a foreign device. When a computer communicates with another device like a
modem, printer, or network server, it needs to handshake with it to establish a connection. Handshaking can negotiate parameters that are acceptable to equipment and systems at both ends of the communication channel, including information transfer rate, coding alphabet, parity, interrupt procedure, and other protocol or hardware features. Handshaking is a technique of communication between two entities. However, within the TCP/IP RFCs, the term "handshake" is most commonly used to reference the TCP three-way handshake; for example, the term is not present in the RFCs covering FTP or SMTP. A simple handshaking protocol might only involve the receiver sending a message meaning "I received your last message and I am ready for you to send me another one."
(Figure 3.0 Hand-shaking Client – Server)
Client side handshake
public HandshakeClientSession( ARGS ) {
    // declarations omitted
    if (pingInterval > 0) {   // ping interval for keep-alive packets
        m_timerHandler = new TimerHandler();
        timerQueue.schedule( m_timerHandler, pingInterval, TimeUnit.SECONDS );
    }
    try {
        final ByteBuffer handshakeRequest =
                Protocol.HandshakeRequest.create( audioFormat, stationName );
        session.sendData( handshakeRequest );              // send the handshake request
    } catch (final CharacterCodingException ex) {
        Log.e( LOG_TAG, getLogPrefix() + ex.toString() );  // debugging
        session.closeConnection();                         // close the session
    }
}
Server side handshake
public HandshakeServerSession( ARGS ) {
    // declarations omitted
    if (pingInterval > 0) {
        m_timerHandler = new TimerHandler();
        m_timerQueue.schedule( m_timerHandler, pingInterval, TimeUnit.SECONDS );
    }
    Log.i( LOG_TAG, getLogPrefix() + "connection accepted" );
}
There are several other types of handshaking and several ways to perform it; some of the methods are as follows:
1. TCP-Three-way handshake
2. WPA/WPA2 Four-way Handshake
TCP-Three Way Handshaking
The first host (Alice) sends the second host (Bob) a "synchronize" (SYN) message with its own sequence number x, which Bob receives. Bob replies with a synchronize-acknowledgment (SYN-ACK) message with its own sequence number y and acknowledgement number x+1, which Alice receives. Alice replies with an acknowledgment (ACK) message with acknowledgement number y+1, which Bob receives and to which he does not need to reply. In this setup, the synchronize messages act as service requests from one server to the other, while the acknowledgement messages return to the requesting server to let it know the message was received.
Establishing a normal TCP connection requires three separate steps:
(Figure 4.0 Three-way handshake)
One of the most important aspects of the three-way handshake is how the two sides exchange the starting sequence numbers they plan to use: the client first sends a segment with its own initial sequence number x, then the server responds with a segment carrying its own sequence number y and the acknowledgement number x+1, and finally the client responds with a segment carrying acknowledgement number y+1.
The reason for the client and server not using the default sequence number such as 0 for establishing
connection is to protect against two incarnations of the same connection reusing the same sequence
number too soon, which means a segment from an earlier incarnation of a connection might interfere
with a later incarnation of the connection.
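In Java the three-way handshake itself is carried out by the operating system when a socket connects; a minimal sketch with a placeholder address and port:

import java.net.InetSocketAddress;
import java.net.Socket;

// connect() returns only after SYN -> SYN-ACK -> ACK has completed.
Socket socket = new Socket();
socket.connect( new InetSocketAddress( "192.168.1.10", 5000 ), 3000 );   // 3 s timeout
// ... exchange application data ...
socket.close();   // closing triggers the FIN/ACK teardown sequence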
Hand-shaking can use one of several protocols, including the following:
1. SMTP
2. TLS
3. WPA2 wireless
4. Dial-up access modems
SMTP
The Simple Mail Transfer Protocol (SMTP) is the key Internet standard for email transmission. It
includes handshaking to negotiate authentication, encryption and maximum message size.
(Figure 5.0 SMTP based handshake)
TLS
When a Transport Layer Security (SSL or TLS) connection starts, the record layer encapsulates a "control" protocol: the handshake messaging protocol. This protocol is used to exchange all the information required by both sides for the exchange of the actual application data by TLS. It defines the formatting of the messages containing this information and the order of their exchange.
(Figure 6.0 TLS Layout)
These may vary according to the demands of the client and server—i.e., there are several possible
procedures to set up the connection. This initial exchange results in a successful TLS connection
(both parties ready to transfer application data with TLS) or an alert message. The protocol is used to
negotiate the secure attributes of a session.
(Figure 7.0 TLS handshake over SSL)
(Figure 8.0 Simple TLS Handshaking)
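From Java, the whole exchange shown in the figures above is driven by a single handshake call on an SSL socket; a minimal sketch with a placeholder host and port:

import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
SSLSocket tlsSocket = (SSLSocket) factory.createSocket( "example.com", 443 );
tlsSocket.startHandshake();   // negotiates protocol version, cipher suite and keys
System.out.println( "Negotiated cipher suite: " + tlsSocket.getSession().getCipherSuite() );
tlsSocket.close();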
WPA2 Wireless
The WPA2 standard for wireless uses a four-way handshake defined in IEEE 802.11i-2004. Wi-Fi Protected Access (WPA) and Wi-Fi Protected Access II (WPA2) are two security protocols and security certification programs developed by the Wi-Fi Alliance to secure wireless computer networks. The Alliance defined these in response to serious weaknesses researchers had found in the previous system, Wired Equivalent Privacy (WEP). WPA (sometimes referred to as the draft IEEE 802.11i standard) became available in 2003. The Wi-Fi Alliance intended it as an intermediate measure in anticipation of the availability of the more secure and complex WPA2. WPA2 became available in 2004 and is a common shorthand for the full IEEE 802.11i (or IEEE 802.11i-2004)
standard. A flaw in a feature added to Wi-Fi, called Wi-Fi Protected Setup, allows WPA and WPA2
security to be bypassed and effectively broken in many situations.
The WPA and WPA2 security protocols implemented without using the Wi-Fi Protected Setup
feature are unaffected by the security vulnerability. The WPA protocol implements much of the IEEE
802.11i standard. Specifically, the Temporal Key Integrity Protocol (TKIP) was adopted for WPA.
WEP used a 64-bit or 128-bit encryption key that must be manually entered on wireless access points
and devices and does not change. TKIP employs a per-packet key, meaning that it dynamically
generates a new 128-bit key for each packet and thus prevents the types of attacks that compromised
WEP.
(Figure 9.0 TCP Four Way Handshake)
WPA also includes a message integrity check, which is designed to prevent an attacker from altering and resending data packets. This replaces the cyclic redundancy check (CRC) that was used by the WEP standard. CRC's main flaw was that it did not provide a sufficiently strong data integrity guarantee for the packets it handled. Well-tested message authentication codes existed to solve these problems, but they required too much computation to be used on old network cards. WPA uses a message integrity check algorithm known as Michael (part of TKIP) to verify the integrity of the packets. Michael is much stronger than a CRC, but not as strong as the algorithm used in WPA2.
Researchers have since discovered a flaw in WPA that relied on older weaknesses in WEP and the
limitations of Michael to retrieve the keystream from short packets to use for re-injection and
spoofing.
Dial up access modems
One classic example of handshaking is that of dial-up modems, which typically negotiate
communication parameters for a brief period when a connection is first established, and thereafter use
those parameters to provide optimal information transfer over the channel as a function of its quality
and capacity.
(Figure 10.0 Modem/Device/Server connection hand-shaking)
The "squealing" (which is actually a sound that changes in pitch 100 times every second) noises
made by some modems with speaker output immediately after a connection is established are in fact
the sounds of modems at both ends engaging in a handshaking procedure; once the procedure is
completed, the speaker might be silenced, depending on the settings of operating system or the
application controlling the modem.
Server side NDS handshake – receiving packets:
public void onDataReceived( RetainableByteBuffer data ) {                // called for every received packet
    final RetainableByteBuffer msg = m_streamDefragger.getNext( data );  // reassemble one message from the stream
    if (msg == null) {
        /* HandshakeRequest is fragmented; very rare, but it still happens. */
    }
    else if (msg == StreamDefragger.INVALID_HEADER) {                    // message header is invalid
        m_session.closeConnection();                                     // close connection
    }
    else {                                                               // message is complete
        if (m_timerHandler != null) {                                    // cancel the idle/ping timer
            try {
                if (m_timerQueue.cancel( m_timerHandler ) != 0) {
                    return;
                }
            }
            catch (final InterruptedException ex) {                      // got interrupted
                Thread.currentThread().interrupt();                      // restore the interrupt flag
            }
        }
        // check the message ID
        if (messageID == Protocol.HandshakeRequest.ID) {                 // verify ID
            final short protocolVersion = Protocol.HandshakeRequest.getProtocolVersion( msg );
            if (protocolVersion == Protocol.VERSION) {
                try {
                    final String audioFormat = Protocol.HandshakeRequest.getAudioFormat( msg );
                    final String stationName = Protocol.HandshakeRequest.getStationName( msg );
                    final AudioPlayer audioPlayer = AudioPlayer.create( args );
                    if (audioPlayer == null) {                           // audio player could not be created
                        Log.i( LOG_TAG, getLogPrefix() );                // debug case
                        m_session.closeConnection();                     // close connection
                    }
                    else {
                        Log.i( LOG_TAG, getLogPrefix() + "handshake ok" );
                        final ByteBuffer handshakeReply = Protocol.HandshakeReplyOk.create();
                        m_session.sendData( handshakeReply );
                        m_channel.setStationName( m_session, stationName );
                        final ChannelSession channelSession = new ChannelSession( args );
                        m_session.replaceListener( channelSession );     // hand the session over to the channel
                    }
                }
                catch (final CharacterCodingException ex) {
                    Log.e( LOG_TAG, getLogPrefix() + ex.toString() );
                    m_session.closeConnection();
                }
            }
            else {
                /* Protocol version is different, cannot continue; a failure reply is sent (next listing). */
Client-side NDS handshake – receiving packets:
/* Continuation of the protocol-version-mismatch branch from the listing above. */
final String statusText = "Protocol version mismatch";
try {
    final ByteBuffer handshakeReply = Protocol.HandshakeReplyFail.create( statusText );
    m_session.sendData( handshakeReply );
}
catch (final CharacterCodingException ex) {
    Log.i( LOG_TAG, ex.toString() );
}
m_session.closeConnection();
            }
        }
        else {
            // unknown message ID
            m_session.closeConnection();
        }
    }
}
Chapter 4
Station Information and Connectivity
Each device running the app has its own unique address, but even with such unique, ID-like addresses, every running Walkie Talkie application holds the same signature on every node, so that the system can look the nodes up using handshaking and pings.
(Figure 11.0 how ping works)
Let us assume devices A, B and C are Android devices, 1.1.1.1 is their LAN IP, and the /24 at the end is the subnet mask used to calculate how many nodes can be connected within the LAN; the subnet mask is also used to discover other visible devices that can accept pings. In the diagram, device A sends B a request to find out whether it is online and can respond; if device B responds, that means device B is online and discoverable. The same applies from device B to C. The ping basically collects all the nodes that can reply back, and then, after the handshake and validation of station information and a proper signature response, connectivity between the devices is established. Ping sends packets of a known size in bytes and measures them by latency: the quicker the response, the faster the connection. Latency is measured in milliseconds (1000 milliseconds is 1 second); a normal, recommended latency between two nodes is 20-60 milliseconds. If one device takes longer than about 200 milliseconds it causes slight lag and delayed responses on both ends, because the sender has already dispatched its next packet before the previous one has been collected by the other node, and as a result some of the packet data goes missing or arrives corrupted. The user can change the broadcast name (station name), which is used for display to make the UI/UX understandable and user friendly. Changing the username/broadcasting person's name has no effect on the station itself, because the station signature is the same on every node and cannot be changed, for connectivity-establishment and security reasons.
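A rough sketch of such a reachability-and-latency probe from Java (the address and timeout are placeholders; the app's own ping packets travel over its JS-Collider sessions rather than through this call, and isReachable() may fall back to a TCP echo probe depending on privileges):

import java.net.InetAddress;

// Check whether a candidate node answers, and roughly how fast.
InetAddress peer = InetAddress.getByName( "192.168.1.12" );
long start = System.nanoTime();
boolean online = peer.isReachable( 1000 );                       // 1000 ms timeout
long latencyMs = (System.nanoTime() - start) / 1_000_000;
if (online) {
    System.out.println( "node reachable, approx. latency " + latencyMs + " ms" );
}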
(Figure 12.0 App Setting layout/Station name setting)
This is an in-app preview of the settings dialog/popup. The layout contains one input for the station name, which is basically the node name; the real station signature used for connectivity is hard-coded. The station name is like the name of the person using the application: if someone changes his station name, the other connected devices see the new name in the station list on the main page.
(Figure 13.0 Volume control in setting layout/screen)
Volume control is provided as an alternative control: if the user wishes to use a volume button as PTT (push-to-talk), he can manage his volume settings through the settings screen.
(Figure 14.0 Use volume buttons as PTT on settings screen)
One can also start the background service that checks the Wi-Fi status. This is useful when the user has not turned Wi-Fi on and is trying to use the app: the application simply shows a dialog saying that Wi-Fi needs to be turned on in order to use the application, since the main purpose of this app is to run over Wi-Fi. The checkbox control on the settings screen enables an automated service that checks the Wi-Fi status on every start of the application, so the user does not miss any important connectivity by mistake.
(Figure 15.0 Wi-Fi Status check on start)
However, all controls on the settings screen are optional; the user is not required to set them up before using the app. They are simply additional customization and performance-tuning options for more productivity.
Station information parameters and values:
public StationInfo( String name, String addr, int transmission, long ping ) {
    this.name = name;
    this.addr = addr;
    this.transmission = transmission;
    this.ping = ping;
}
Channel
Channels are simply identifiers used to communicate and to check integrity between nodes; a channel is also used to make a sequenced connection between nodes and to broadcast packets through it.
(Figure 16.0 Channel)
A channel also carries the signature and reflects the stations in it; connectivity is established channel to channel. Channels are also a means of keeping sessions and extracting other information such as device state, ping rate, station name, session life span, and so on.
Through a channel it is possible to keep a background service that triggers the connection events at a specific interval, so the app stays in contact with the other apps even if the user interface is shut down or the user switches to another application. An activity runs that renews sessions and keeps connectivity alive in the background, and gathers newly updated commits and changes such as sent voice, name changes, changes in ping rate, and session renewals. These sessions create a cloud of local devices for distributed communication.
Accepting Connection:
private class ChannelAcceptor extends Acceptor {
    public Session.Listener createSessionListener( Session session ) {
        Log.i( LOG_TAG, "session accepted" );
        m_lock.lock();
        try {
            if (m_stopLatch == null) {
                final SessionInfo sessionInfo = new SessionInfo();
                m_sessions.put( session, sessionInfo );
                return new HandshakeServerSession( args );
            }
        } finally {
            m_lock.unlock();
        }
        return null;   // channel is stopping, reject the session
    }
}
When Channel connection is accepted:
public void onAcceptorStarted( Collider collider, int localPort ) {
    Log.i( LOG_TAG, m_name + ": acceptor started: " + localPort );
    m_lock.lock();
    try {
        if (m_stopLatch == null) {
            m_localPort = localPort;
        }
        if (m_stateListener != null) {
            updateStateLocked();
        }
        Log.i( LOG_TAG, "register service" );   // the NSD service registration starts here
    }
    finally {
        m_lock.unlock();
    }
}
State listener and Exception handling:
private class ChannelConnector extends Connector {
    private final String m_serviceName;

    public ChannelConnector( InetSocketAddress addr, String serviceName ) {
        super( addr );
        m_serviceName = serviceName;
    }

    public Session.Listener createSessionListener( Session session ) {   // listen for sessions
        m_lock.lock();   // lock while another device is being bound, to prevent races
        /* ... body shortened in this listing ... */
    }

    public void onException( IOException ex ) {   // connection attempt failed
        m_lock.lock();
        try {
            final ServiceInfo serviceInfo = m_serviceInfo.get( m_serviceName );
            if (serviceInfo == null) {
                // serviceInfo missing: nothing to clean up
            } else {
                if (BuildConfig.DEBUG
                        && ((serviceInfo.connector != this) || (serviceInfo.session != null))) {
                    throw new AssertionError();
                }
            }
        } finally {
            m_lock.unlock();
        }
    }
}
Getting station info
private StationInfo[] getStationListLocked() {
    if (BuildConfig.DEBUG) {
        if (!m_lock.isHeldByCurrentThread())
            throw new AssertionError();
        if (m_serviceName == null)
            throw new AssertionError();
    }
    else if (m_serviceName == null)
        return new StationInfo[0];

    /* Count the stations that have already reported a name. */
    int sessions = 0;
    for (Map.Entry<String, ServiceInfo> e : m_serviceInfo.entrySet()) {
        if (m_serviceName.compareTo(e.getKey()) > 0) {
            if (e.getValue().stationName != null)
                sessions++;
        }
    }
    for (Map.Entry<Session, SessionInfo> e : m_sessions.entrySet()) {
        if (e.getValue().stationName != null)
            sessions++;
    }

    /* Fill the result array. */
    final StationInfo[] stationInfo = new StationInfo[sessions];
    int idx = 0;
    for (Map.Entry<String, ServiceInfo> e : m_serviceInfo.entrySet()) {
        if (m_serviceName.compareTo(e.getKey()) > 0) {
            if (e.getValue().stationName != null) {
                final ServiceInfo serviceInfo = e.getValue();
                stationInfo[idx++] = new StationInfo( args );
            }
        }
    }
    /* ... the entries from m_sessions are appended the same way (omitted here) ... */
    return stationInfo;
}
Establishing connection between nodes and DHCP:
public void onServiceFound( NsdServiceInfo nsdServiceInfo ) {
    final String serviceName = nsdServiceInfo.getServiceName();
    m_lock.lock();
    try {
        if (BuildConfig.DEBUG && (m_stopLatch != null))
            throw new AssertionError();
        ServiceInfo serviceInfo = m_serviceInfo.get( serviceName );
        if (serviceInfo == null) {
            serviceInfo = new ServiceInfo();
            m_serviceInfo.put( serviceName, serviceInfo );
        }
        serviceInfo.nsdServiceInfo = nsdServiceInfo;
        serviceInfo.nsdUpdates++;
        if ((m_serviceName != null) && (m_serviceName.compareTo(serviceName) > 0)) {
            if ((serviceInfo.session == null) && (serviceInfo.connector == null)) {
                if (m_resolveListener == null) {
                    Log.i( LOG_TAG, m_name + ": onServiceFound, resolve: " + nsdServiceInfo );
                    serviceInfo.nsdUpdates = 0;
                    m_resolveListener = new ResolveListener( serviceName );
                    m_nsdManager.resolveService( nsdServiceInfo, m_resolveListener );
                } else {
                    Log.i( LOG_TAG, m_name + ": onServiceFound: " + nsdServiceInfo );
                }
            }
        }
    } finally {
        m_lock.unlock();
    }
}
When a connection is lost:
public void onServiceLost( NsdServiceInfo nsdServiceInfo )
{
    final String serviceName = nsdServiceInfo.getServiceName();
    m_lock.lock();
    try
    {
        final ServiceInfo serviceInfo = m_serviceInfo.get( serviceName );
        if (serviceInfo == null)
        {
            Log.w( LOG_TAG, ": internal error: service not found: " + nsdServiceInfo );
        }
        else if ((m_serviceName != null) && (m_serviceName.compareTo(serviceName) > 0))
        {
            if (((m_resolveListener != null) &&
                 m_resolveListener.getServiceName().equals(serviceName)) ||
                (serviceInfo.connector != null) || (serviceInfo.session != null))
            {
                serviceInfo.nsdServiceInfo = null;
            }
            else
            {
                m_serviceInfo.remove( serviceName );
                final StateListener stateListener = m_stateListener;
                if (stateListener != null)
                    stateListener.onStationListChanged( getStationListLocked() );
            }
        }
        else
        {
            m_serviceInfo.remove( serviceName );
        }
    }
    finally
    {
        m_lock.unlock();
    }
}
Setting the Station Name:
Setting and retrieving the station name for the current session, register/unregister handling of sessions, setting ping rates, etc.:
public void setStationName( String serviceName, String stationName )
{
    m_lock.lock();
    try
    {
        final ServiceInfo serviceInfo = m_serviceInfo.get( serviceName );
        if (serviceInfo != null)
        {
            serviceInfo.stationName = stationName;
            serviceInfo.addr = serviceInfo.session.getRemoteAddress().toString();
            serviceInfo.state = 0;
            serviceInfo.ping = 0;
        }
    }
    finally
    {
        m_lock.unlock();
    }
}
Audio Player
The application does not use an external audio player. When audio is received from another node it is played through a player implemented inside the application itself; this player has no user interface of its own and simply drives the device's speaker hardware to play the received voice.
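For context only, a minimal sketch of what such an embedded playback loop can look like using Android's AudioTrack is shown here; the class name, sample rate and the frame queue are assumptions, not the project's actual code. The application's own queueing logic for received frames follows after it.

import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioTrack;

class SimplePcmPlayer implements Runnable
{
    private final java.util.concurrent.BlockingQueue<byte[]> m_frames =
            new java.util.concurrent.LinkedBlockingQueue<byte[]>();
    private final int m_sampleRate = 16000; // assumed; the app negotiates the rate per recorder

    public void play( byte[] pcmFrame ) { m_frames.offer( pcmFrame ); }

    public void run()
    {
        final int minBufferSize = AudioTrack.getMinBufferSize(
                m_sampleRate, AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT );
        final AudioTrack track = new AudioTrack(
                AudioManager.STREAM_MUSIC, m_sampleRate,
                AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
                minBufferSize, AudioTrack.MODE_STREAM );
        track.play();
        try
        {
            for (;;)
            {
                final byte[] frame = m_frames.take();   // wait for the next received voice frame
                track.write( frame, 0, frame.length );  // push PCM samples straight to the speaker
            }
        }
        catch (final InterruptedException ex)
        {
            Thread.currentThread().interrupt();
        }
        finally
        {
            track.stop();
            track.release();
        }
    }
}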
Playing Audio:
public void play( RetainableByteBuffer audioFrame )
{
    final Node node = new Node( audioFrame );
    audioFrame.retain();
    for (;;)
    {
        final Node tail = m_tail;
        if (BuildConfig.DEBUG && (tail != null) && (tail.audioFrame == null))
        {
            audioFrame.release();
            throw new AssertionError();
        }
        if (s_tailUpdater.compareAndSet(this, tail, node))
        {
            if (tail == null)
            {
                m_head = node;
                m_sema.release();
            }
            else
            {
                tail.next = node;
            }
            break;
        }
    }
}
Waiting for the remaining frames and stopping after playback finishes:
public void stopAndWait()
{
    final Node node = new Node( null );
    for (;;)
    {
        final Node tail = m_tail;
        if (BuildConfig.DEBUG && (tail != null) && (tail.audioFrame == null))
        {
            throw new AssertionError();
        }
        if (s_tailUpdater.compareAndSet(this, tail, node))
        {
            if (tail == null)
            {
                m_head = node;
                m_sema.release();
            }
            else
            {
                tail.next = node;
            }
            break;
        }
    }
    try
    {
        m_thread.join();
    }
    catch (final InterruptedException ex)
    {
        Log.e( LOG_TAG, ex.toString() );
    }
}
(Figure 17.0 Playing voice using inner audio player)
Not depending on the Android stock music player or any other external player app spares the application the trouble of locating, matching and allocating an external player and keeping it on standby; it also avoids the extra validation and the increase in application size that such a dependency would bring.
Audio Recorder
The audio recorder is triggered when push-to-talk is active: while the PTT button is pressed and held the application records audio, and as soon as the button is released the recorded voice is transmitted over the LAN, where the other devices receive it and play it inside the app.
Figure 2.0 (Sending-receiving voice) shows in detail how the PTT button works and the role the recorder plays.
Recording voice:
public void startRecording()
{
Log.d( LOG_TAG, "startRecording" );
m_lock.lock();
try
{
if (m_state == IDLE)
{
m_state = START;
m_cond.signal();
}
else if (m_state == STOP)
m_state = RUN;
}
finally
{
m_lock.unlock();
}
}
public void stopRecording()
{
m_lock.lock();
try
{
if (m_state != IDLE)
m_state = STOP;
}
finally
{
m_lock.unlock();
}
}
Initializing AudioRecorder:
public static AudioRecorder create( SessionManager sessionManager, boolean repeat )
{
final int rates [] = { 11025, 16000, 22050, 44100 };
for (int sampleRate : rates)
{
final int channelConfig = AudioFormat.CHANNEL_IN_MONO;
final int minBufferSize = AudioRecord.getMinBufferSize(
sampleRate, channelConfig, AudioFormat.ENCODING_PCM_16BIT );
if ((minBufferSize != AudioRecord.ERROR) &&
(minBufferSize != AudioRecord.ERROR_BAD_VALUE))
{
final int frameSize = (sampleRate * (Short.SIZE / Byte.SIZE) / 2) & (Integer.MAX_VALUE - 1);
int bufferSize = (frameSize * 4);
if (bufferSize < minBufferSize)
bufferSize = minBufferSize;
final AudioRecord audioRecord = new AudioRecord(
MediaRecorder.AudioSource.MIC,
sampleRate,
channelConfig,
AudioFormat.ENCODING_PCM_16BIT,
bufferSize );
final String audioFormat = ("PCM:" + sampleRate);
return new AudioRecorder( sessionManager, audioRecord, audioFormat, frameSize,
bufferSize, repeat );
}
}
return null;
}
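A hedged usage sketch (not taken from the project) of how this factory could be called and the capture loop started on its own thread; whether the real code starts the thread here or inside the recorder is an assumption:

// Assumed usage: create() returns null if no sample rate is supported by the device.
final SessionManager sessionManager = new SessionManager();
final AudioRecorder recorder = AudioRecorder.create( sessionManager, /*repeat*/ false );
if (recorder != null)
{
    final Thread recorderThread = new Thread( new Runnable()
    {
        public void run() { recorder.run(); } // drives the state machine used by startRecording()/stopRecording()
    }, "AudioRecorder" );
    recorderThread.start();
}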
Sending voice over protocol layering and handling recorder process:
public void run()
{
Log.i( LOG_TAG, "run [" + m_audioFormat + "]: frameSize=" + m_frameSize + "
bufferSize=" + m_bufferSize );
android.os.Process.setThreadPriority( Process.THREAD_PRIORITY_URGENT_AUDIO );
RetainableByteBuffer byteBuffer = m_byteBufferCache.get();
byte [] byteBufferArray = byteBuffer.getNioByteBuffer().array();
int byteBufferArrayOffset = byteBuffer.getNioByteBuffer().arrayOffset();
int frames = 0;
try
{
for (;;)
{
m_lock.lock();
try
{
while (m_state == IDLE)
m_cond.await();
if (m_state == START)
{
m_audioRecord.startRecording();
}
else if (m_state == STOP)
{
m_audioRecord.stop();
m_state = IDLE;
if (m_list != null)
{
int replayedFrames = 0;
for (RetainableByteBuffer msg : m_list)
{
m_audioPlayer.play( msg );
msg.release();
replayedFrames++;
}
m_list.clear();
Log.i( LOG_TAG, "Replayed " + replayedFrames + " frames." );
}
Log.i( LOG_TAG, "Sent " + frames + " frames." );
continue;
}
else if (m_state == SHTDN)
break;
}
finally
{
m_lock.unlock();
}
int position = byteBuffer.position();
if ((byteBuffer.limit() - position) < Protocol.AudioFrame.getMessageSize(m_frameSize))
{
byteBuffer.release();
byteBuffer = m_byteBufferCache.get();
byteBufferArray = byteBuffer.getNioByteBuffer().array();
byteBufferArrayOffset = byteBuffer.getNioByteBuffer().arrayOffset();
position = 0;
if (BuildConfig.DEBUG && (byteBuffer.position() != position))
throw new AssertionError();
}
Protocol.AudioFrame.init( byteBuffer.getNioByteBuffer(), m_frameSize );
if (BuildConfig.DEBUG && (byteBuffer.remaining() <m_frameSize))
throw new AssertionError();
final int bytesReady = m_audioRecord.read(
byteBufferArray, byteBufferArrayOffset+byteBuffer.position(), m_frameSize );
if (bytesReady == m_frameSize)
{
final int limit = position + Protocol.AudioFrame.getMessageSize( m_frameSize );
byteBuffer.position( position );
byteBuffer.limit( limit );
final RetainableByteBuffer msg = byteBuffer.slice();
m_sessionManager.send( msg );
frames++;
if (m_list != null)
{
m_list.add( Protocol.AudioFrame.getAudioData(msg) );
}
msg.release();
byteBuffer.limit( byteBuffer.capacity() );
byteBuffer.position( limit );
}
else
{
Log.e( LOG_TAG, "readSize=" + m_frameSize + " bytesReady=" + bytesReady );
break;
}
}
}
catch (final InterruptedException ex)
{
Log.e( LOG_TAG, ex.toString() );
Thread.currentThread().interrupt();
}
m_audioRecord.stop();
m_audioRecord.release();
byteBuffer.release();
Log.i( LOG_TAG, "run [" + m_audioFormat + "]: done" );
}
Session Manager
The session manager is part of the application's back end. Its role is to allocate, control, connect and keep track of session flows. It provides the administrative control needed to view, alter and retrieve sessions, and these sessions are used to build the connection path over which communication takes place.
Adding/removing session:
public void addSession( ChannelSession session )
{
m_lock.lock();
try
{
if (BuildConfig.DEBUG &&m_sessions.contains(session))
throw new AssertionError();
final HashSet<ChannelSession> sessions = (HashSet<ChannelSession>) m_sessions.clone();
sessions.add( session );
m_sessions = sessions;
}
finally
{
m_lock.unlock();
}
}
public void removeSession( ChannelSession session )
{
m_lock.lock();
try
{
final HashSet<ChannelSession> sessions = (HashSet<ChannelSession>) m_sessions.clone();
final boolean removed = sessions.remove( session );
if (BuildConfig.DEBUG && !removed)
throw new AssertionError();
m_sessions = sessions;
}
finally
{
m_lock.unlock();
}
}
Sending a session broadcast:
public void send( RetainableByteBuffer msg )
{
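    // m_sessions is replaced as a whole (copy-on-write) in addSession/removeSession,
    // so this iteration reads an immutable snapshot and needs no lock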
for (ChannelSession session : m_sessions)
session.sendMessage(msg);
}
State View
The state view is an indicator: a green circle shown next to a node's entry in the list to show that this node has sent a broadcast. It uses a Canvas to draw the circle and highlights it through the drawable attributes of the canvas.
Drawing state indicator using canvas
protected void onDraw( Canvas canvas )
{
super.onDraw( canvas );
if (m_state <m_paint.length)
{
final float cx = (getWidth() / 2);
final float cy = (getHeight() / 2);
final float cr = (cx - cx / 2f);
canvas.drawCircle( cx, cy, cr, m_paint[m_state] );
}
}
public StateView( Context context, AttributeSet attrs )
{
super( context, attrs );
final TypedArray a = context.obtainStyledAttributes(
attrs, new int [] { android.R.attr.minHeight }, android.R.attr.buttonStyle, 0 );
if (a != null)
{
final int minHeight = a.getDimensionPixelSize( 0, -1 );
if (minHeight != -1)
setMinimumHeight( minHeight );
a.recycle();
}
setWillNotDraw( false );
m_paint = new Paint[2];
m_paint[0] = new Paint();
m_paint[0].setColor( Color.DKGRAY );
m_paint[1] = new Paint();
m_paint[1].setColor( Color.GREEN );
}
Indication of state:
void setIndicatorState( int state )
{
if (state <m_paint.length)
{
if (m_state != state)
{
m_state = state;
invalidate();
}
}
else if (BuildConfig.DEBUG)
throw new AssertionError();
}
Walkie Talkie Services
The walkie-talkie service is the back-end class that sends and receives packets; it is the main engine responsible for the sending and receiving functionality. It is the container that holds the js-collider functionality and all NDS (Network Service Discovery) based handshakes and signature generation.
Performing NDS via js-collider, initializing the js-collider process:
private static class ColliderThread extends Thread
{
private final Collider m_collider;
public ColliderThread( Collider collider )
{
super( "ColliderThread" );
m_collider = collider;
}
public void run()
{
Log.i( LOG_TAG, "Collider thread: start" );
m_collider.run();
Log.i( LOG_TAG, "Collider thread: done" );
}
}
Discovering other nearby nodes with the same service/signature (NDS discovery):
private class DiscoveryListener implements NsdManager.DiscoveryListener
{
public void onStartDiscoveryFailed( String serviceType, int errorCode )
{
m_lock.lock();
try
{
if (m_cond != null)
m_cond.signal();
}
finally
{
m_lock.unlock();
}
}
public void onStopDiscoveryFailed( String serviceType, int errorCode )
{
Log.e( LOG_TAG, "Stop discovery failed: " + errorCode );
}
public void onDiscoveryStarted( String serviceType )
{
Log.i( LOG_TAG, "Discovery started" );
m_lock.lock();
try
{
if (m_cond == null)
m_discoveryStarted = true;
else
m_nsdManager.stopServiceDiscovery( this );
}
finally
{
m_lock.unlock();
}
}
When a service/node is found:
public void onServiceFound( NsdServiceInfo nsdServiceInfo )
{
try
{
final String[] ss = nsdServiceInfo.getServiceName().split( SERVICE_NAME_SEPARATOR );
final String channelName = new String( Base64.decode( ss[0], 0 ) );
Log.i( LOG_TAG, "onServiceFound: " + channelName + ": " + nsdServiceInfo );
if (channelName.compareTo( SERVICE_NAME ) == 0)
m_channel.onServiceFound( nsdServiceInfo );
}
catch (final IllegalArgumentException ex)
{
Log.w( LOG_TAG, ex.toString() );
}
}
Getting device ID:
private static String getDeviceID( ContentResolver contentResolver )
{
long deviceID = 0;
final String str = Settings.Secure.getString( contentResolver, Settings.Secure.ANDROID_ID );
if (str != null)
{
try
{
final BigInteger bi = new BigInteger( str, 16 );
deviceID = bi.longValue();
}
catch (final NumberFormatException ex)
{
Log.i( LOG_TAG, ex.toString() );
}
}
if (deviceID == 0)
{
/* Let's use random number */
deviceID = new Random().nextLong();
}
final byte [] bb = new byte[Long.SIZE / Byte.SIZE];
for (int idx=(bb.length - 1); idx>=0; idx--)
{
bb[idx] = (byte) (deviceID &0xFF);
deviceID >>= Byte.SIZE;
}
return Base64.encodeToString( bb, (Base64.NO_PADDING | Base64.NO_WRAP) );
}
Allocating other resources:
public int onStartCommand( Intent intent, int flags, int startId )
{
Log.d( LOG_TAG, "onStartCommand: flags=" + flags + " startId=" + startId );
if (m_audioRecorder == null)
{
final String deviceID = getDeviceID( getContentResolver() );
final SessionManager sessionManager = new SessionManager();
m_audioRecorder = AudioRecorder.create( sessionManager, /*repeat*/false );
if (m_audioRecorder != null)
{
startForeground( 0, null );
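// note: a foreground service normally needs a non-zero notification id and a real Notification;
// with (0, null) no status-bar entry is created by this call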
final int audioStream = MainActivity.AUDIO_STREAM;
final AudioManager audioManager = (AudioManager) getSystemService( AUDIO_SERVICE );
m_audioPrvVolume = audioManager.getStreamVolume( audioStream );
final String stationName = intent.getStringExtra( MainActivity.KEY_STATION_NAME );
int audioVolume = intent.getIntExtra( MainActivity.KEY_VOLUME, -1 );
if (audioVolume <0)
audioVolume = audioManager.getStreamMaxVolume( audioStream );
Log.d( LOG_TAG, "setStreamVolume(" + audioStream + ", " + audioVolume + ")" );
audioManager.setStreamVolume( audioStream, audioVolume, 0 );
try
{
m_collider = Collider.create();
m_colliderThread = new ColliderThread( m_collider );
final TimerQueue timerQueue = new TimerQueue( m_collider.getThreadPool() );
m_channel = new Channel(
deviceID,
stationName,
m_audioRecorder.getAudioFormat(),
m_collider,
m_nsdManager,
SERVICE_TYPE,
SERVICE_NAME,
sessionManager,
timerQueue,
Config.PING_INTERVAL );
m_discoveryListener = new DiscoveryListener();
m_nsdManager.discoverServices( SERVICE_TYPE, NsdManager.PROTOCOL_DNS_SD,
m_discoveryListener );
m_colliderThread.start();
}
catch (final IOException ex)
{
Log.w( LOG_TAG, ex.toString() );
}
}
}
return START_REDELIVER_INTENT;
}
Switch Button
SwitchButton.java is the back-end class that implements the PTT button and its gesture handling: pressing PTT turns the recorder on, while dragging the finger downward locks the button in the pressed state (see the touch handling below). It is also responsible for all gesture-pattern handling currently used in the application.
Handling touch events:
public boolean onTouchEvent( MotionEvent ev )
{
final int action = ev.getAction();
switch (action)
{
case MotionEvent.ACTION_DOWN:
if (isEnabled())
{
if (m_state == STATE_IDLE)
{
setPressed( true );
setBackground( m_pressedBackground );
m_state = STATE_DOWN;
m_touchX = ev.getX();
m_touchY = ev.getY();
if (m_stateListener != null)
m_stateListener.onStateChanged( true );
return true;
}
else if (m_state == STATE_LOCKED)
{
m_state = STATE_DOWN;
m_touchX = ev.getX();
m_touchY = ev.getY();
return true;
}
else
{
if (BuildConfig.DEBUG)
throw new AssertionError();
}
}
break;
case MotionEvent.ACTION_MOVE:
{
final float x = ev.getX();
final float y = ev.getY();
final float dx = (x - m_touchX);
final float dy = (y - m_touchY);
switch (m_state)
{
case STATE_IDLE:
break;
case STATE_DOWN:
if ((Math.abs(dx) >m_touchSlop) ||
(Math.abs(dy) >m_touchSlop))
{
if (Math.abs(dx) > Math.abs(dy))
{
if (dx >0.0)
{
m_state = STATE_DRAGGING_RIGHT;
Log.d( LOG_TAG, "STATE_DOWN -> STATE_DRAGGING_RIGHT" );
}
else if (dx <0.0)
{
m_state = STATE_DRAGGING_LEFT;
Log.d( LOG_TAG, "STATE_DOWN -> STATE_DRAGGING_LEFT" );
}
getParent().requestDisallowInterceptTouchEvent( true );
m_touchX = x;
m_touchY = y;
}
}
return true;
case STATE_DRAGGING_RIGHT:
if ((dx > -0.5f) && (Math.abs(dx) > Math.abs(dy)))
{
m_touchX = x;
m_touchY = y;
}
else if (dy >= 0)
{
m_touchX = x;
m_touchY = y;
m_state = STATE_DRAGGING_DOWN;
Log.d( LOG_TAG, "STATE_DRAGGING_RIGHT ->
STATE_DRAGGING_DOWN" );
}
else
{
getParent().requestDisallowInterceptTouchEvent( false );
m_state = STATE_IDLE;
Log.d( LOG_TAG, "STATE_DRAGGING_RIGHT -> STATE_IDLE" );
}
return true;
case STATE_DRAGGING_LEFT:
if ((dx <0.5f) && (Math.abs(dx) > Math.abs(dy)))
{
m_touchX = x;
m_touchY = y;
}
else if (dy >= 0)
{
m_touchX = x;
m_touchY = y;
m_state = STATE_DRAGGING_DOWN;
Log.d( LOG_TAG, "STATE_DRAGGING_LEFT -> STATE_DRAGGING_DOWN"
);
}
else
{
getParent().requestDisallowInterceptTouchEvent( false );
m_state = STATE_IDLE;
Log.d( LOG_TAG, "STATE_DRAGGING_LEFT -> STATE_IDLE" );
}
return true;
case STATE_DRAGGING_DOWN:
if ((dy > -1.0f) || (Math.abs(dx) <1.0f))
{
m_touchX = x;
m_touchY = y;
}
else
{
getParent().requestDisallowInterceptTouchEvent( false );
m_state = STATE_IDLE;
Log.d( LOG_TAG, "STATE_DRAGGING_DOWN -> STATE_IDLE" );
}
return true;
}
}
break;
case MotionEvent.ACTION_UP:
case MotionEvent.ACTION_CANCEL:
if (m_state == STATE_DRAGGING_DOWN)
{
/* Keep button pressed */
m_state = STATE_LOCKED;
getParent().requestDisallowInterceptTouchEvent( false );
}
else
{
m_stateListener.onStateChanged( false );
setBackground( m_defaultBackground );
setPressed( false );
if (m_state != STATE_IDLE)
{
m_state = STATE_IDLE;
getParent().requestDisallowInterceptTouchEvent( false );
}
}
break;
}
return super.onTouchEvent( ev );
}
Running the canvas drawing while the button is pressed:
protected void onDraw( Canvas canvas )
{
super.onDraw( canvas );
if ((m_state == STATE_DOWN) && (m_pl != null) && (m_pr != null))
{
final int width = getWidth();
final int height = getHeight();
canvas.drawCircle( width/2, height/2, height/8, m_paint );
canvas.drawPath( m_pl, m_paint );
canvas.drawPath( m_pr, m_paint );
}
}
Drawing with canvas
protected void onSizeChanged( int width, int height, int oldWidth, int oldHeight )
{
final float centerX = (width / 2);
final float centerY = (height / 2);
final int hh = (height / 8);
int w = (width / hh / 2);
if (w <14)
{
/* Too small */
m_pl = null;
m_pr = null;
}
else
{
if (w >20)
w = 20;
m_pl = new Path();
/*1*/ m_pl.moveTo( centerX - hh*2, centerY - hh );
/*2*/ m_pl.lineTo( centerX - hh*(w-4), centerY-hh );
/*3*/ m_pl.lineTo( centerX - hh*(w-4), centerY+hh*2 );
/*4*/ m_pl.lineTo( centerX - hh*(w-2), centerY+hh*2 );
/*5*/ m_pl.lineTo( centerX - hh*(w-5), centerY+hh*4 );
/*6*/ m_pl.lineTo( centerX - hh*(w-8), centerY+hh*2 );
/*7*/ m_pl.lineTo( centerX - hh*(w-6), centerY+hh*2 );
/*8*/ m_pl.lineTo( centerX - hh*(w-6), centerY+hh );
/*9*/ m_pl.lineTo( centerX - hh*2, centerY + hh );
m_pl.close();
m_pr = new Path();
/*1*/ m_pr.moveTo( centerX + hh*2, centerY - hh );
/*2*/ m_pr.lineTo( centerX + hh*(w-4), centerY-hh );
/*3*/ m_pr.lineTo( centerX + hh*(w-4), centerY+hh*2 );
/*4*/ m_pr.lineTo( centerX + hh*(w-2), centerY+hh*2 );
/*5*/ m_pr.lineTo( centerX + hh*(w-5), centerY+hh*4 );
/*6*/ m_pr.lineTo( centerX + hh*(w-8), centerY+hh*2 );
/*7*/ m_pr.lineTo( centerX + hh*(w-6), centerY+hh*2 );
/*8*/ m_pr.lineTo( centerX + hh*(w-6), centerY+hh );
/*9*/ m_pr.lineTo( centerX + hh*2, centerY + hh );
m_pr.close();
}
}
Chapter 5
Main Activity
MainActivity.java is the main screen and the container for all visible features. Start-up validation, UI/UX operations and the attachment of functionality to objects and layouts are performed inside this class.
The main activity is a titled Android activity: it shows the logo in the upper-left corner with the application name centred next to it, and a menu button in the upper-right corner of the title bar. The menu opens the following list:
1. Settings
2. About
3. Exit
MainActivity.java prevents the application from terminating when the user switches apps or the screen turns off, because the connection must stay uninterrupted; the app is intended to keep running in the background. Whenever the app is running, a status-bar entry is shown with the app logo, a title and a description saying that the app is running, so the user does not miss communication from the other nodes.
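A hedged sketch of how such an ongoing status-bar entry can be built is shown below; the icon, title and text are placeholders, not taken from the project, and on Android 8.0 and later a notification channel would also be required:

import android.app.Notification;
import android.content.Context;

final class RunningNotification
{
    // Builds an "ongoing" notification shown while the service is alive.
    static Notification build( Context context )
    {
        return new Notification.Builder( context )
                .setSmallIcon( android.R.drawable.stat_sys_speakerphone ) // placeholder icon
                .setContentTitle( "Walkie Talkie" )
                .setContentText( "app is running" )
                .setOngoing( true )  // cannot be swiped away while the service runs
                .build();
    }
}

In practice the result would be handed to startForeground() with a non-zero id so the service keeps running with a visible status-bar entry.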
Register Buttons and allocate button switch:
private class ButtonTalkListener implements SwitchButton.StateListener
{
public void onStateChanged( boolean state )
{
if (state)
{
if (!m_recording)
{
m_recording = true;
m_audioRecorder.startRecording();
}
}
else
{
if (m_recording)
{
m_recording = false;
m_audioRecorder.stopRecording();
}
}
}
}
Retrieving and generating list for connected nodes:
private static class ListViewAdapter extends ArrayAdapter<StationInfo>
{
private final LayoutInflater m_inflater;
private final StringBuilder m_stringBuilder;
private StationInfo [] m_stationInfo;
private static class RowViewInfo
{
public final TextView textViewStationName;
public final TextView textViewAddrAndPing;
public final StateView stateView;
public RowViewInfo( TextView textViewStationName, TextView textViewAddrAndPing,
StateView stateView )
{
this.textViewStationName = textViewStationName;
this.textViewAddrAndPing = textViewAddrAndPing;
this.stateView = stateView;
}
}
Start Recording on button press:
public boolean onKeyDown( int keyCode, KeyEvent event )
{
if (m_useVolumeButtonsToTalk)
{
if ((keyCode == KeyEvent.KEYCODE_VOLUME_UP) ||
(keyCode == KeyEvent.KEYCODE_VOLUME_DOWN))
{
if (!m_recording)
{
m_audioRecorder.startRecording();
m_recording = true;
m_buttonTalk.setPressed( true );
}
return true;
}
}
return super.onKeyDown( keyCode, event );
}
1. Settings:
The Settings entry opens a dialog box containing the station name, a volume control and a "check Wi-Fi status on start" option.
These settings are stored in Android shared preferences.
Setting station info:
public void setStationInfo( StationInfo [] stationInfo )
{
m_stationInfo = stationInfo;
notifyDataSetChanged();
}
If the user has selected the volume buttons as PTT, recording stops when the volume button is released:
public boolean onKeyUp( int keyCode, KeyEvent event )
{
if (m_useVolumeButtonsToTalk)
{
if ((keyCode == KeyEvent.KEYCODE_VOLUME_UP) ||
(keyCode == KeyEvent.KEYCODE_VOLUME_DOWN))
{
if (m_recording)
{
m_audioRecorder.stopRecording();
m_recording = false;
m_buttonTalk.setPressed( false );
}
return true;
}
}
return super.onKeyUp( keyCode, event );
}
2. About:
The About entry opens a dialog box with a short description of the app, its dependencies and a disclaimer.
Registering the settings dialog and preparing it for use:
private class SettingsDialogClickListener implements DialogInterface.OnClickListener
{
private final EditText m_editTextStationName;
private final SeekBar m_seekBarVolume;
private final CheckBox m_checkBoxCheckWiFiStateOnStart;
private final CheckBox m_switchButtonUseVolumeButtonsToTalk;
public SettingsDialogClickListener(
EditText editTextStationName,
SeekBar seekBarVolume,
CheckBox checkBoxCheckWiFiStateOnStart,
CheckBox switchButtonUseVolumeButtonsToTalk )
{
m_editTextStationName = editTextStationName;
m_seekBarVolume = seekBarVolume;
m_checkBoxCheckWiFiStateOnStart = checkBoxCheckWiFiStateOnStart;
m_switchButtonUseVolumeButtonsToTalk = switchButtonUseVolumeButtonsToTalk;
}
public void onClick( DialogInterface dialog, int which )
{
if (which == DialogInterface.BUTTON_POSITIVE)
{
final String stationName = m_editTextStationName.getText().toString();
final int audioVolume = m_seekBarVolume.getProgress();
final SharedPreferences sharedPreferences = getPreferences(Context.MODE_PRIVATE);
final SharedPreferences.Editor editor = sharedPreferences.edit();
if (m_stationName.compareTo(stationName) != 0)
{
final String title = getString(R.string.app_name) + ": " + stationName;
setTitle(title);
editor.putString( KEY_STATION_NAME, stationName );
m_binder.setStationName( stationName );
m_stationName = stationName;
}
if (audioVolume != m_audioVolume)
{
editor.putString( KEY_VOLUME, Integer.toString(audioVolume) );
final int audioStream = MainActivity.AUDIO_STREAM;
final AudioManager audioManager = (AudioManager) getSystemService( AUDIO_SERVICE );
Log.d(LOG_TAG, "setStreamVolume(" + audioStream + ", " + audioVolume + ")");
audioManager.setStreamVolume(audioStream, audioVolume, 0);
m_audioVolume = audioVolume;
}
final boolean useVolumeButtonsToTalk = m_switchButtonUseVolumeButtonsToTalk.isChecked();
editor.putBoolean( KEY_USE_VOLUME_BUTTONS_TO_TALK, useVolumeButtonsToTalk );
editor.apply();
MainActivity.this.m_useVolumeButtonsToTalk = useVolumeButtonsToTalk;
}
}
3. Exit
Exit is the only option for terminating the app from within the application itself. It is the last entry in the menu list and shuts down all services, including the application instance.
Below the title bar is the main centred container, which lists all connected nodes. Each list entry has two text lines on the left: the upper line shows the name of the device station and the lower line shows the channel and session information for that node. On the right side there is a grey circle that indicates who is speaking, i.e. whose message is currently being played; when a node uses PTT the indicator for that node turns green while its voice is played.
At the bottom is the PTT button, labelled TALK, which is responsible for the interaction between the sending and receiving units. Pressing it triggers recording, and as soon as the user releases the button a second event is triggered: the recorded voice is sent through the Walkie Talkie Service after the channel and switching process.
Destroying all instances:
public void onDestroy()
{
Log.i( LOG_TAG, "onDestroy" );
super.onDestroy();
}
Channel Session
The ChannelSession class is responsible for renewing and altering the session that is currently interacting with another device.
Handling ping rates:
private void handlePingTimeout()
{
if (m_lastBytesReceived == m_totalBytesReceived)
{
if (++m_pingTimeouts == 10)
{
Log.i( LOG_TAG, getLogPrefix() + "connection timeout, closing connection." );
m_session.closeConnection();
}
}
else
{
m_lastBytesReceived = m_totalBytesReceived;
m_pingTimeouts = 0;
}
Log.v( LOG_TAG, getLogPrefix() + "ping" );
m_pingSendTime = System.currentTimeMillis();
m_session.sendData( Protocol.Ping.create() );
}
Receiving packets from nodes:
public void onDataReceived( RetainableByteBuffer data )
{
final int bytesReceived = data.remaining();
RetainableByteBuffer msg = m_streamDefragger.getNext( data );
while (msg != null)
{
if (msg == StreamDefragger.INVALID_HEADER)
{
Log.i("invalid message received, close connection." );
m_session.closeConnection();
break;
}
else
{
handleMessage( msg );
msg = m_streamDefragger.getNext();
}
}
s_totalBytesReceivedUpdater.addAndGet( this, bytesReceived );
}
Sending session data to the other node for validation:
public final int sendMessage( RetainableByteBuffer msg )
{
return m_session.sendData( msg );
}
Handling messages
private void handleMessage( RetainableByteBuffer msg )
{
final short messageID = Protocol.Message.getID( msg );
switch (messageID)
{
case Protocol.AudioFrame.ID:
final RetainableByteBuffer audioFrame = Protocol.AudioFrame.getAudioData( msg );
m_audioPlayer.play( audioFrame );
audioFrame.release();
break;
case Protocol.Ping.ID:
m_session.sendData( Protocol.Pong.create() );
break;
case Protocol.Pong.ID:
final long ping = (System.currentTimeMillis() - m_pingSendTime) / 2;
if (Math.abs(ping - m_ping) >10)
{
m_ping = ping;
m_channel.setPing( m_serviceName, m_session, ping );
}
break;
case Protocol.StationName.ID:
try
{
final String stationName = Protocol.StationName.getStationName( msg );
if (stationName.length() >0)
{
if (m_serviceName == null)
m_channel.setStationName( m_session, stationName );
else
m_channel.setStationName( m_serviceName, stationName );
}
}
catch (final CharacterCodingException ex)
{
Log.w( LOG_TAG, ex.toString() );
}
break;
default:
Log.w( LOG_TAG, getLogPrefix() + "unexpected message " + messageID );
break;
}
}
Configuration
The Configuration class is the set of parameters and variables that holds almost all configuration for the system: settings such as the ping interval, ping rate, station information and the hard-coded signature.
Configuring ping rates:
class Config
{
public static int PING_INTERVAL = 5;
}
Database
The application does not use a database; instead it uses the Android shared-preferences system to store settings such as "check Wi-Fi status on start", "use volume buttons as PTT" and the station name.
Setting volume control as PTT
checkBoxUseVolumeButtonsToTalk.setChecked(arg);
Allocating preferences:
final SharedPreferences sharedPreferences = getPreferences( Context.MODE_PRIVATE );
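A minimal sketch of how these values can be read back inside MainActivity; the key names match the settings dialog above, but the method name and the default values are assumptions:

import android.content.Context;
import android.content.SharedPreferences;

// Inside MainActivity (sketch): restore stored settings on start-up.
private void loadPreferences()
{
    final SharedPreferences prefs = getPreferences( Context.MODE_PRIVATE );
    m_stationName = prefs.getString( KEY_STATION_NAME, "" );
    m_audioVolume = Integer.parseInt( prefs.getString( KEY_VOLUME, "-1" ) ); // volume is stored as a string
    m_useVolumeButtonsToTalk = prefs.getBoolean( KEY_USE_VOLUME_BUTTONS_TO_TALK, false );
}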
Protocol
A network medium may carry many protocols. In telecommunications, a communication
protocol is a system of rules that allow two or more entities of a communications system to transmit
information via any kind of variation of a physical quantity. These are the rules or standard that
defines the syntax, semantics and synchronization of communication and possible error recovery
methods. Protocols may be implemented by hardware, software, or a combination of both,
communicating systems use well-defined formats (protocol) for exchanging various messages. Each
message has an exact meaning intended to elicit a response from a range of possible responses pre-
determined for that particular situation. The specified behavior is typically independent of how it is to
be implemented. Communications protocols have to be agreed upon by the parties involved. To reach
agreement, a protocol may be developed into a technical standard. A programming language
describes the same for computations, so there is a close analogy between protocols and programming
languages: protocols are to communications what programming languages are to computations.
Multiple protocols often describe different aspects of a single communication. A group of protocols
designed to work together are known as a protocol suite; when implemented in software they are a
protocol stack.
Most recent protocols are assigned by the IETF for Internet communications, and the IEEE, or the
ISO organizations for other types. The ITU-T handles telecommunications protocols and formats for
the PSTN. As the PSTN and Internet converge, the two sets of standards are also being driven
towards convergence.
Basic Requirement of protocols
Getting the data across a network is only part of the problem for a protocol. The data received has to
be evaluated in the context of the progress of the conversation, so a protocol has to specify rules
describing the context. These kinds of rules are said to express the syntax of the communications.
Other rules determine whether the data is meaningful for the context in which the exchange takes
place. These kinds of rules are said to express the semantics of the communications.
Messages are sent and received on communicating systems to establish communications. Protocols should therefore specify rules governing the transmission. In general, much of the following should be addressed:

Data formats for data exchange. Digital message bit-strings are exchanged. The bit-strings are divided in fields and each field carries information relevant to the protocol. Conceptually the bit-string is divided into two parts called the header area and the data area. The actual message is stored in the data area, so the header area contains the fields with more relevance to the protocol. Bit-strings longer than the maximum transmission unit (MTU) are divided in pieces of appropriate size.

Address formats for data exchange. Addresses are used to identify both the sender and the intended receiver(s). The addresses are stored in the header area of the bit-strings, allowing the receivers to determine whether the bit-strings are intended for themselves and should be processed or should be ignored. A connection between a sender and a receiver can be identified using an address pair (sender address, receiver address). Usually some address values have special meanings. An all-1s address could be taken to mean an addressing of all stations on the network, so sending to this address would result in a broadcast on the local network. The rules describing the meanings of the address value are collectively called an addressing scheme.

Address mapping. Sometimes protocols need to map addresses of one scheme on addresses of another scheme. For instance, to translate a logical IP address specified by the application to an Ethernet hardware address. This is referred to as address mapping.

Routing. When systems are not directly connected, intermediary systems along the route to the intended receiver(s) need to forward messages on behalf of the sender. On the Internet, the networks are connected using routers. This way of connecting networks is called internetworking.

Detection of transmission errors is necessary on networks which cannot guarantee error-free operation. In a common approach, CRCs of the data area are added to the end of packets, making it possible for the receiver to detect differences caused by errors. The receiver rejects the packets on CRC differences and arranges somehow for retransmission.

Acknowledgements of correct reception of packets are required for connection-oriented communication. Acknowledgements are sent from receivers back to their respective senders.
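As a small illustration of the CRC idea above (not part of the application's own protocol), Java's built-in CRC32 can be used to append a checksum to a packet's data area and verify it on reception:

import java.nio.ByteBuffer;
import java.util.zip.CRC32;

final class CrcExample
{
    // Append a CRC32 of the data area to the end of the packet.
    static byte[] appendCrc( byte[] data )
    {
        final CRC32 crc = new CRC32();
        crc.update( data, 0, data.length );
        final ByteBuffer packet = ByteBuffer.allocate( data.length + 4 );
        packet.put( data );
        packet.putInt( (int) crc.getValue() );
        return packet.array();
    }

    // Recompute the CRC on the receiving side and compare it with the transmitted one.
    static boolean check( byte[] packet )
    {
        final int dataLength = packet.length - 4;
        final CRC32 crc = new CRC32();
        crc.update( packet, 0, dataLength );
        final int received = ByteBuffer.wrap( packet, dataLength, 4 ).getInt();
        return received == (int) crc.getValue();
    }
}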
Loss of information - timeouts and retries. Packets may be lost on the network or suffer from long delays. To cope with this, under some protocols, a sender may expect an acknowledgement of correct reception from the receiver within a certain amount of time. On timeouts, the sender must assume the packet was not received and retransmit it. In case of a permanently broken link, the retransmission has no effect so the number of retransmissions is limited. Exceeding the retry limit is considered an error.

Direction of information flow needs to be addressed if transmissions can only occur in one direction at a time, as on half-duplex links. This is known as Media Access Control. Arrangements have to be made to accommodate the case when two parties want to gain control at the same time.

Sequence control. We have seen that long bit-strings are divided in pieces, and then sent on the network individually. The pieces may get lost or delayed or take different routes to their destination on some types of networks. As a result, pieces may arrive out of sequence. Retransmissions can result in duplicate pieces. By marking the pieces with sequence information at the sender, the receiver can determine what was lost or duplicated, ask for necessary retransmissions and reassemble the original message.

Flow control is needed when the sender transmits faster than the receiver or intermediate network equipment can process the transmissions. Flow control can be implemented by messaging from receiver to sender.
Chapter 6
Protocols and Programming languages
Protocols are to communications what algorithms or programming languages are to computations.
This analogy has important consequences for both the design and the development of protocols. One
has to consider the fact that algorithms, programs and protocols are just different ways of describing
expected behavior of interacting objects. A familiar example of a protocolling language is the HTML
language used to describe web pages which are the actual web protocols. In programming languages,
the association of identifiers to a value is termed a definition. Program text is structured using block
constructs and definitions can be local to a block. The localized association of an identifier to a value
established by a definition is termed a binding and the region of program text in which a binding is
effective is known as its scope. The computational state is kept using two components: the
environment, used as a record of identifier bindings, and the store, which is used as a record of the
effects of assignments.
In communications, message values are transferred using transmission media. By analogy, the
equivalent of a store would be a collection of transmission media, instead of a collection of memory
locations. A valid assignment in a protocol (as an analog of programming language) could be
Ethernet := 'message', meaning a message is to be broadcast on the local Ethernet.
On a transmission medium there can be many receivers. For instance, a mac-address identifies an
ether network card on the transmission medium (the 'ether'). In our imaginary protocol, the
assignment Ethernet[mac-address] := message_value could therefore make sense. By extending the
assignment statement of an existing programming language with the semantics described, a
protocolling language could easily be imagined. Operating systems provide reliable communication
and synchronization facilities for communicating objects confined to the same system by means of
system libraries. A programmer using a general-purpose programming language (like C or Ada) can
use the routines in the libraries to implement a protocol, instead of using a dedicated protocolling
language.
Protocol Layering
Protocol layering now forms the basis of protocol design. It allows the decomposition of single,
complex protocols into simpler, cooperating protocols, but it is also a functional decomposition,
because each protocol belongs to a functional class, called a protocol layer. The protocol layers each
solve a distinct class of communication problems. The Internet protocol suite consists of the
following layers: application-, transport-, internet- and network interface-functions. Together, the
layers make up a layering scheme or model.
(Figure 18.0 Protocol Layering without modem)
In computations, we have algorithms and data, and in communications, we have protocols and
messages, so the analog of a data flow diagram would be some kind of message flow diagram. To
visualize protocol layering and protocol suites, a diagram of the message flows in and between two
systems, A and B, is shown in figure 3.
The systems both make use of the same protocol suite. The vertical flows (and protocols) are in
system and the horizontal message flows (and protocols) are between systems. The message flows are
governed by rules, and data formats specified by protocols. The blue lines therefore mark the
boundaries of the (horizontal) protocol layers.
The vertical protocols are not layered because they don't obey the protocol layering principle which
states that a layered protocol is designed so that layer n at the destination receives exactly the same
object sent by layer n at the source. The horizontal protocols are layered protocols and all belong to
the protocol suite. Layered protocols allow the protocol designer to concentrate on one layer at a time,
without worrying about how other layers perform.
The vertical protocols need not be the same protocols on both systems, but they have to satisfy some
minimal assumptions to ensure the protocol layering principle holds for the layered protocols. This
can be achieved using a technique called Encapsulation.
Usually, a message or a stream of data is divided into small pieces, called messages or streams,
packets, IP datagrams or network frames depending on the layer in which the pieces are to be
transmitted. The pieces contain a header area and a data area. The data in the header area identifies
the source and the destination on the network of the packet, the protocol, and other data meaningful
to the protocol like CRC's of the data to be sent, data length, and a timestamp.
The rule enforced by the vertical protocols is that the pieces for transmission are to be encapsulated in
the data area of all lower protocols on the sending side and the reverse is to happen on the receiving
side. The result is that at the lowest level the piece looks like this: 'Header1, Header2, Header3, data'
and in the layer directly above it: 'Header2, Header3, data' and in the top layer: 'Header3, data', both
on the sending and receiving side. This rule therefore ensures that the protocol layering principle
holds and effectively virtualizes all but the lowest transmission lines, so for this reason some message
flows are colored red in figure 3.
To ensure both sides use the same protocol, the pieces also carry data identifying the protocol in their
header.
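To make the encapsulation rule above concrete, a small sketch (not the application's actual Protocol class) that wraps a payload in successive headers on the sending side could look like this:

import java.nio.ByteBuffer;

final class EncapsulationExample
{
    // Prepend one layer's header to whatever the layer above produced.
    static byte[] wrap( byte[] header, byte[] upperLayerPiece )
    {
        final ByteBuffer out = ByteBuffer.allocate( header.length + upperLayerPiece.length );
        out.put( header );          // this layer's header area
        out.put( upperLayerPiece ); // everything from the layers above becomes the data area
        return out.array();
    }

    public static void main( String[] args )
    {
        final byte[] data = "data".getBytes();
        // The top layer adds Header3, the next layer Header2, the lowest layer Header1:
        final byte[] layer3 = wrap( "Header3,".getBytes(), data );
        final byte[] layer2 = wrap( "Header2,".getBytes(), layer3 );
        final byte[] layer1 = wrap( "Header1,".getBytes(), layer2 );
        // layer1 now reads "Header1,Header2,Header3,data", matching the description above.
        System.out.println( new String( layer1 ) );
    }
}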
The design of the protocol layering and the network (or Internet) architecture are interrelated, so one
cannot be designed without the other. Some of the more important features in this respect of the
Internet architecture and the network services it provides are described next.
The Internet offers universal interconnection, which means that any pair of computers connected to
the Internet is allowed to communicate. Each computer is identified by an address on the Internet. All
the interconnected physical networks appear to the user as a single large network. This
interconnection scheme is called an internetwork or internet.
Thesis
Thesis
Thesis
Thesis
Thesis
Thesis
Thesis
Thesis
Thesis
Thesis
Thesis
Thesis
Thesis

More Related Content

Viewers also liked

Ecuacionytangente
EcuacionytangenteEcuacionytangente
Ecuacionytangentefgrino2
 
Presentacion 135555 132381
Presentacion 135555 132381Presentacion 135555 132381
Presentacion 135555 132381gloabraham
 
Im training september-2014- orig
Im training september-2014- origIm training september-2014- orig
Im training september-2014- origBarbara Solomon
 
Introduction to ferpa module 1
Introduction to ferpa   module 1Introduction to ferpa   module 1
Introduction to ferpa module 1Barbara Solomon
 
Magazine Cover Deconstruction
Magazine Cover Deconstruction Magazine Cover Deconstruction
Magazine Cover Deconstruction hope_wildfire
 
Presentación plástica contenidos primaria
Presentación plástica contenidos primariaPresentación plástica contenidos primaria
Presentación plástica contenidos primariaanitaalba
 
What is the best strategy to facilitate the decarbonization of existing tower...
What is the best strategy to facilitate the decarbonization of existing tower...What is the best strategy to facilitate the decarbonization of existing tower...
What is the best strategy to facilitate the decarbonization of existing tower...Josephine (Viet Ha) Pham
 
Solomon final reflection
Solomon   final reflectionSolomon   final reflection
Solomon final reflectionBarbara Solomon
 
Agile Results for Modern Life
Agile Results for Modern LifeAgile Results for Modern Life
Agile Results for Modern LifeZarrus
 
PERÍFRASIS VERBALS. "EL PENTÀGON"
PERÍFRASIS VERBALS. "EL PENTÀGON"PERÍFRASIS VERBALS. "EL PENTÀGON"
PERÍFRASIS VERBALS. "EL PENTÀGON"anitaalba
 
International Language Conference Report-min
International Language Conference Report-minInternational Language Conference Report-min
International Language Conference Report-minAhmet Ozirmak
 
Bab1 KBAT nombor berarah
Bab1 KBAT nombor berarah Bab1 KBAT nombor berarah
Bab1 KBAT nombor berarah hapiszah
 

Viewers also liked (18)

Ecuacionytangente
EcuacionytangenteEcuacionytangente
Ecuacionytangente
 
Presentacion 135555 132381
Presentacion 135555 132381Presentacion 135555 132381
Presentacion 135555 132381
 
Cambridge Checkpoint
Cambridge CheckpointCambridge Checkpoint
Cambridge Checkpoint
 
Im training september-2014- orig
Im training september-2014- origIm training september-2014- orig
Im training september-2014- orig
 
Introduction to ferpa module 1
Introduction to ferpa   module 1Introduction to ferpa   module 1
Introduction to ferpa module 1
 
Magazine Cover Deconstruction
Magazine Cover Deconstruction Magazine Cover Deconstruction
Magazine Cover Deconstruction
 
Adelman Writing
Adelman WritingAdelman Writing
Adelman Writing
 
Bachelor thesis
Bachelor thesisBachelor thesis
Bachelor thesis
 
Presentación plástica contenidos primaria
Presentación plástica contenidos primariaPresentación plástica contenidos primaria
Presentación plástica contenidos primaria
 
What is the best strategy to facilitate the decarbonization of existing tower...
What is the best strategy to facilitate the decarbonization of existing tower...What is the best strategy to facilitate the decarbonization of existing tower...
What is the best strategy to facilitate the decarbonization of existing tower...
 
Solomon final reflection
Solomon   final reflectionSolomon   final reflection
Solomon final reflection
 
Solomon redesign im
Solomon redesign imSolomon redesign im
Solomon redesign im
 
Agile Results for Modern Life
Agile Results for Modern LifeAgile Results for Modern Life
Agile Results for Modern Life
 
Our Creative Power to Innovate
Our Creative Power to InnovateOur Creative Power to Innovate
Our Creative Power to Innovate
 
PERÍFRASIS VERBALS. "EL PENTÀGON"
PERÍFRASIS VERBALS. "EL PENTÀGON"PERÍFRASIS VERBALS. "EL PENTÀGON"
PERÍFRASIS VERBALS. "EL PENTÀGON"
 
Context clues
Context cluesContext clues
Context clues
 
International Language Conference Report-min
International Language Conference Report-minInternational Language Conference Report-min
International Language Conference Report-min
 
Bab1 KBAT nombor berarah
Bab1 KBAT nombor berarah Bab1 KBAT nombor berarah
Bab1 KBAT nombor berarah
 

Similar to Thesis

Final presentation1
Final presentation1Final presentation1
Final presentation1amalalsubaie
 
Employability_Skills_XIIBOOK.pdf for typography subject
Employability_Skills_XIIBOOK.pdf for typography subjectEmployability_Skills_XIIBOOK.pdf for typography subject
Employability_Skills_XIIBOOK.pdf for typography subjectANKITKUMARGAUTAM13
 
Employability_Skills_XII (1).pdf
Employability_Skills_XII (1).pdfEmployability_Skills_XII (1).pdf
Employability_Skills_XII (1).pdfscore97in12th
 
Employability_Skills_XII_for_class12.pptx
Employability_Skills_XII_for_class12.pptxEmployability_Skills_XII_for_class12.pptx
Employability_Skills_XII_for_class12.pptxgakixoc612
 
OEI Student Success Fall 2015
OEI Student Success Fall 2015OEI Student Success Fall 2015
OEI Student Success Fall 2015Cyrus Helf
 
Independence, Critical Thinking, and Blended Learning
Independence, Critical Thinking, and Blended LearningIndependence, Critical Thinking, and Blended Learning
Independence, Critical Thinking, and Blended LearningStaci Trekles
 
Hafiza Gulnaz Fatima 2020 lcwu lhr lahore college for women university.pdf
Hafiza Gulnaz Fatima 2020 lcwu lhr lahore college for women university.pdfHafiza Gulnaz Fatima 2020 lcwu lhr lahore college for women university.pdf
Hafiza Gulnaz Fatima 2020 lcwu lhr lahore college for women university.pdfShinyMerriment
 
Advancing Teaching and Learning Conference
Advancing Teaching and Learning ConferenceAdvancing Teaching and Learning Conference
Advancing Teaching and Learning ConferenceLisa DuBois Low
 
The Role of Multi-Access Learning in Mainstreaming Open Education
The Role of Multi-Access Learning in Mainstreaming Open EducationThe Role of Multi-Access Learning in Mainstreaming Open Education
The Role of Multi-Access Learning in Mainstreaming Open EducationBCcampus
 
Plaksha University | Post Graduate Program - Technology Leaders Program (TLP)
Plaksha University | Post Graduate Program - Technology Leaders Program (TLP)Plaksha University | Post Graduate Program - Technology Leaders Program (TLP)
Plaksha University | Post Graduate Program - Technology Leaders Program (TLP)Plaksha University
 
EDUCATIONAL COUNSELING SERVICES A NEEDS ASSESSMENT OF JUNIOR SECONDARY SCHOO...
EDUCATIONAL COUNSELING SERVICES  A NEEDS ASSESSMENT OF JUNIOR SECONDARY SCHOO...EDUCATIONAL COUNSELING SERVICES  A NEEDS ASSESSMENT OF JUNIOR SECONDARY SCHOO...
EDUCATIONAL COUNSELING SERVICES A NEEDS ASSESSMENT OF JUNIOR SECONDARY SCHOO...Gloria Mazhim De Decker
 
The Community Stakeholders in educational leadership
The Community Stakeholders in educational leadershipThe Community Stakeholders in educational leadership
The Community Stakeholders in educational leadershipRusselMartinezPagana
 
COMPARATIVE ANALYSIS OF HIGHER EDUCATION IN INDIA AND ABROAD
COMPARATIVE ANALYSIS OF HIGHER EDUCATION IN INDIA AND ABROADCOMPARATIVE ANALYSIS OF HIGHER EDUCATION IN INDIA AND ABROAD
COMPARATIVE ANALYSIS OF HIGHER EDUCATION IN INDIA AND ABROADUniversal Business School
 
Building online learning environments
Building online learning environmentsBuilding online learning environments
Building online learning environmentsStephanie Sherman
 
Building online learning environments
Building online learning environmentsBuilding online learning environments
Building online learning environmentsStephanie Sherman
 

Similar to Thesis (20)

Final presentation1
Final presentation1Final presentation1
Final presentation1
 
Cd submission
Cd submissionCd submission
Cd submission
 
Er.lakshita
Er.lakshitaEr.lakshita
Er.lakshita
 
Employability_Skills_XIIBOOK.pdf for typography subject
Employability_Skills_XIIBOOK.pdf for typography subjectEmployability_Skills_XIIBOOK.pdf for typography subject
Employability_Skills_XIIBOOK.pdf for typography subject
 
Employability_Skills_XII (1).pdf
Employability_Skills_XII (1).pdfEmployability_Skills_XII (1).pdf
Employability_Skills_XII (1).pdf
 
Employability_Skills_XII_for_class12.pptx
Employability_Skills_XII_for_class12.pptxEmployability_Skills_XII_for_class12.pptx
Employability_Skills_XII_for_class12.pptx
 
OEI Student Success Fall 2015
OEI Student Success Fall 2015OEI Student Success Fall 2015
OEI Student Success Fall 2015
 
Field study 3
Field study 3Field study 3
Field study 3
 
Independence, Critical Thinking, and Blended Learning
Independence, Critical Thinking, and Blended LearningIndependence, Critical Thinking, and Blended Learning
Independence, Critical Thinking, and Blended Learning
 
Hafiza Gulnaz Fatima 2020 lcwu lhr lahore college for women university.pdf
Hafiza Gulnaz Fatima 2020 lcwu lhr lahore college for women university.pdfHafiza Gulnaz Fatima 2020 lcwu lhr lahore college for women university.pdf
Hafiza Gulnaz Fatima 2020 lcwu lhr lahore college for women university.pdf
 
Advancing Teaching and Learning Conference
Advancing Teaching and Learning ConferenceAdvancing Teaching and Learning Conference
Advancing Teaching and Learning Conference
 
The Role of Multi-Access Learning in Mainstreaming Open Education
The Role of Multi-Access Learning in Mainstreaming Open EducationThe Role of Multi-Access Learning in Mainstreaming Open Education
The Role of Multi-Access Learning in Mainstreaming Open Education
 
Plaksha University | Post Graduate Program - Technology Leaders Program (TLP)
Plaksha University | Post Graduate Program - Technology Leaders Program (TLP)Plaksha University | Post Graduate Program - Technology Leaders Program (TLP)
Plaksha University | Post Graduate Program - Technology Leaders Program (TLP)
 
EDUCATIONAL COUNSELING SERVICES A NEEDS ASSESSMENT OF JUNIOR SECONDARY SCHOO...
EDUCATIONAL COUNSELING SERVICES  A NEEDS ASSESSMENT OF JUNIOR SECONDARY SCHOO...EDUCATIONAL COUNSELING SERVICES  A NEEDS ASSESSMENT OF JUNIOR SECONDARY SCHOO...
EDUCATIONAL COUNSELING SERVICES A NEEDS ASSESSMENT OF JUNIOR SECONDARY SCHOO...
 
The Community Stakeholders in educational leadership
The Community Stakeholders in educational leadershipThe Community Stakeholders in educational leadership
The Community Stakeholders in educational leadership
 
COMPARATIVE ANALYSIS OF HIGHER EDUCATION IN INDIA AND ABROAD
COMPARATIVE ANALYSIS OF HIGHER EDUCATION IN INDIA AND ABROADCOMPARATIVE ANALYSIS OF HIGHER EDUCATION IN INDIA AND ABROAD
COMPARATIVE ANALYSIS OF HIGHER EDUCATION IN INDIA AND ABROAD
 
Project Report
Project ReportProject Report
Project Report
 
Aliyu shehu yakubu. sbs22
Aliyu shehu yakubu. sbs22Aliyu shehu yakubu. sbs22
Aliyu shehu yakubu. sbs22
 
Building online learning environments
Building online learning environmentsBuilding online learning environments
Building online learning environments
 
Building online learning environments
Building online learning environmentsBuilding online learning environments
Building online learning environments
 

Thesis

  • 1. ANDROID WALKIE TALKIE A Dissertation Submitted to School of Computer Science In Partial Fulfillment of the Requirement of the Degree of Bachelor in Computer Science Under the Supervision of DR. ABDUL HYEE Deputy Director (ERP), FESCO. by Talha Habib Registration No. FD0121231728 Email: talha@codeot.com National College of Business Administration and Economics 40/E-1, Gulberg III, Lahore-54660, Pakistan
  • 2. 2
  • 3. 3 ANDROID WALKIE TALKIE A Dissertation Submitted to School of Computer Science In Partial Fulfillment of the Requirement of the Degree of BS (Computer Science) by Talha Habib Registration No. FD0121231728 Under the Supervision of DR. ABDUL HYEE Deputy Director (ERP), FESCO. National College of Business Administration and Economics 40/E-1, Gulberg III, Lahore-54660, Pakistan
  • 4. 4 Declaration by student I hereby declare that the contents of the thesis “Android Walkie Talkie” is research based and no part has been copied from any published source (except the references, some standard mathematical or genetic models/equations/protocols etc.). I further declare that this work has not been submitted for the award of any other diploma/degree. The University may take action if the above statement is found inaccurate at any stage. __________________________ Name: Talha Habib
  • 5. 5 To, The Controller ofExaminations, Chenab College ofAdvanced Studies, Faisalabad We, the supervisory committee, certify that the contents and form of thesis submitted by Mr. Talha Habib have been found satisfactory and recommend that it be processed for evaluation by the external examiner(s) for the award of the degree. Supervisory Committee 1. Supervisor :_______________________________ (Dr. Abdul Hyee) 2. Member :_______________________________ 3. Member :_______________________________
  • 6. 6 DEDICATED TO The Holy Prophet Hazrat MUHAMMAD Peace Be Upon Him He is the greatest Teacher of the World & My Loving & Caring Parents Who praised every moment of my life with and untiring sustenance. Whose affection, love, encouragement and prayers of day and night make me able to get such success and honor to accomplish this task. TeacherRespectableMy Who is always with me and guided me with love and gratitude
  • 7. 7 Acknowledgement First of all, I would like to thank “ALLAH Almighty” the Merciful, the Creator of mind; who blessed me with the knowledge and granted me the courage and ability to complete this documentation successfully. Thanks to my parents, who cherished every moment of my life with support. Their hands always rose for me in their prayers. I deeply appreciate the efforts of my supervisor, Dr. Abdul Hyee who helped me a lot. Despite the pressure of work he spent time to listen and assist and offered guidance. He knew where to look for the answers to obstacles while leading me to the right source, theory and perspective. He was always available for my questions and he was positive and gave generously of his time and vast knowledge. Without his guidance I would not have been able to accomplish this task. Talha Habib
  • 8. 8 Table of contents
DECLARATION BY STUDENT 4
ACKNOWLEDGEMENT 7
TABLE OF CONTENTS 8
LIST OF FIGURES 9
LIST OF ABBREVIATIONS 10
WALKIE TALKIE 12
  History 12
  Amateur radio 13
  Personal Use 14
OBJECTIVES 14
LIMITATION OF STUDY 15
HYPOTHESIS SET TO ACHIEVE THE OBJECTIVE 15
  Send and receive procedure 17
  Connectivity and searching for station 18
HAND-SHAKE CLIENT VS HAND-SHAKE SERVER 18
SOFTWARE REQUIREMENT SPECIFICATION 18
  Functional requirements 19
  Non-functional requirements 19
SYSTEM DESIGNS 19
  Strings.xml 21
  XML (Extensible Markup Language) 22
  Hand-Shake Server-Client 24
    Client side handshake 25
    Server side handshake 25
    TCP Three-Way Handshaking 26
    SMTP 27
    TLS 27
    WPA2 Wireless 29
    Dial-up access modems 30
SERVER SIDE NDS HANDSHAKE – RECEIVING PACKETS 31
  Station Information and Connectivity 33
  Channel 37
  Audio Player 43
  Audio Recorder 46
  Session Manager 52
  State View 53
  Walkie Talkie Services 55
  Switch Button 59
  Main Activity 65
  Channel Session 71
  Configuration 74
  Database 74
PROTOCOL 75
  Basic Requirement of protocols 75
  Protocols and Programming languages 77
  Protocol Layering 78
  Software Layering 82
APPLICATION STRUCTURE 85
USE CASE 87
SDLC 88
SEQUENCE DIAGRAM 90
ENTITY RELATION DIAGRAM 91
  • 9. 9 List of figures
Figure  Page No.
Figure 1.0 Working model of JS collider  16
Figure 2.0 Sending-receiving voice  17
Figure 3.0 Hand-shaking  24
Figure 4.0 Three-way handshake  26
Figure 5.0 SMTP based handshake  27
Figure 6.0 TLS Layout  27
Figure 7.0 TLS Handshake over SSL  28
Figure 8.0 Simple TLS Handshaking  28
Figure 9.0 TCP Four Way Handshake  29
Figure 10.0 Modem/Device/Server connection hand-shaking  30
Figure 11.0 How ping works  33
Figure 12.0 App setting layout/Station name setting  34
Figure 13.0 Volume control in setting layout/screen  34
Figure 14.0 Use volume buttons as PTT on settings screen  35
Figure 15.0 Wi-Fi status check on start  36
Figure 16.0 Channel  37
Figure 17.0 Playing voice using inner audio player  46
Figure 18.0 Protocol Layering without modem  78
Figure 19.0 Protocol Layering with modem/router  80
Figure 20.0 Software Layering  82
Figure 21.0 Protocols and software layering working model  84
Figure 22.0 Use Case  87
Figure 23.0 SDLC concept  88
Figure 24.0 Sequence Design Process – waterfall model  90
Figure 25.0 Entity Diagram for Walkie Talkie  91
  • 10. 10 List of abbreviations
NDS: Network Discovery Service
N: Nodes
P2P: Peer-to-peer
N2N: Node-to-node
JS: JavaScript
JSC: JS-Collider
BT: Bluetooth
DPI: Dots per inch
PX: Pixels
UHF: Ultra high frequency
VHF: Very high frequency
PTT: Push-to-talk
SCR: Set, Complete, Radio (U.S. Signal Corps radio designation)
RF: Radio frequency
HT: Handheld transceiver
AN/PRC: Army-Navy/Portable Radio Communication
AN/PRR: Army-Navy/Portable Radio Receiver
HDPI: High-density pixels
XHDPI: Extra-high-density pixels
MDPI: Medium-density pixels
LDPI: Low-density pixels
ACK: Acknowledgment
SYN: Synchronize
FRS: Family Radio Service
GMRS: General Mobile Radio Service
PMR: Private Mobile Radio
GPS: Global Positioning System
NFS: Network File System
DHCP: Dynamic Host Configuration Protocol
NPM: Node Package Manager
IEEE: Institute of Electrical and Electronics Engineers
  • 11. 11 Abstract
Android Wi-Fi Walkie Talkie is an application built on the walkie-talkie concept that uses Wi-Fi to provide autonomous communication between devices. Although technology has advanced rapidly over the last several years, the walkie-talkie has remained a genuinely useful utility; it is still used by police and for other closed-group communication, for example contacting support or administration inside a large building, or calling out to management. This study investigates the feasibility of a lightweight alternative: a peer-to-peer communication app that uses only common gateways, such as an ordinary DHCP server or a modem's Wi-Fi hotspot, to connect Android devices so that they can act as walkie-talkie handsets. A prototype was first developed as a simple sound-recorder application that sent the recorded voice over the medium to another device, where the application played it back, making voice communication straightforward; it was initially implemented as a Bluetooth voice sender and receiver. The more the app was used, the more new features and flexibility became apparent, and with the help of real-time helper libraries such as JS-Collider it became flexible enough to open its own socket port. Keywords: Android, Wi-Fi, communication.
  • 12. 12 Chapter 1 Walkie Talkie A Walkie Talkie is a hand-held, portable, two-way radio transceiver. Its development during the Second World War has been variously credited to Donald L. Hings, radio engineer Alfred J. Gross, and engineering teams at Motorola. First used for infantry, similar designs were created for field artillery and tank units, and after the war, Walkie Talkies spread to public safety and eventually commercial and jobsite work. Walkie Talkie is a half-duplex communication device; multiple Walkie Talkies use a single radio channel, and only one radio on the channel can transmit at a time, although any number can listen. The transceiver is normally in receive mode; when the user wants to talk, he presses a "push-to-talk” button that turns off the receiver and turns on the transmitter. Typical Walkie Talkies resemble a telephone handset, possibly slightly larger but still a single unit, with an antenna mounted on the top of the unit. Where a phone's earpiece is only loud enough to be heard by the user, a Walkie Talkie's built-in speaker can be heard by the user and those in the user's immediate vicinity. Hand-held transceivers may be used to communicate between each other, or to vehicle-mounted or base stations. History The Walkie Talkie was developed by the US military during World War 2. The first radio transceiver to be widely nicknamed "Walkie Talkie" was the backpacked Motorola SCR-300, created by an engineering team in 1940 at the Galvin Manufacturing Company. The team consisted of Dan Noble, who conceived of the design using frequency modulation; Henryk Magnuski, who was the principal RF engineer; Marion Bond; Lloyd Morris; and Bill Vogel. The first hand-held Walkie Talkie was the AM SCR-536 transceiver also made by Motorola, named the "Handie-Talkie". The terms are often confused today, but the original Walkie Talkie referred to the back mounted model, while the handie- talkie was the device which could be held entirely in the hand. Both devices used vacuum tubes and were powered by high voltage dry cell batteries. Alfred J. Gross, a radio engineer and one of the developers of the Joan-Eleanor system, also worked on the early technology behind the Walkie Talkie between 1934 and 1941, and is sometimes credited with inventing it. Canadian inventor Donald Hings is also credited with the invention of the Walkie Talkie: he created a portable radio signaling system for his employer CM&S in 1937. He called the system a "packset", but it later became known as the "Walkie Talkie". In 2001, Hings was formally decorated for its significance to the war effort. Hing's model C-58 "Handy-Talkie" was in military service by 1942, the result of a secret R&D effort that began in 1940.Following World War II, Raytheon developed the SCR-536's military replacement, the AN/PRC-6. The AN/PRC-6 circuit used 13 vacuum tubes; a second set of 13 tubes was supplied with the unit as running spares. The unit was factory set with one crystal which could be changed to a
  • 13. 13 different frequency in the field by replacing the crystal and re-tuning the unit. It used a 24-inch whip antenna. There was an optional handset H-33C/PT that could be connected to the AN/PRC-6 by a 5- foot cable. A web sling was provided. In the mid-1970s the United States Marine Corps initiated an effort to develop a squad radio to replace the unsatisfactory helmet-mounted AN/PRR-9 receiver and receiver/transmitter hand-held AN/PRT-4. The AN/PRC-68 was first produced in 1976 by Magnavox, was issued to the Marines in the 1980s, and was adopted by the US Army as well. The abbreviation HT, derived from Motorola's "Handie Talkie" trademark, is commonly used to refer to portable handheld ham radios, with "Walkie Talkie" often used as a layman's term or specifically to refer to a toy. Public safety or commercial users generally refer to their handhelds simply as "radios". Surplus Motorola Handie Talkies found their way into the hands of ham radio operators immediately following World War II. Motorola's public safety radios of the 1950s and 1960s, were loaned or donated to ham groups as part of the Civil Defense program. To avoid trademark infringement, other manufacturers use designations such as "Handheld Transceiver" or "Handie Transceiver" for their products Amateur radio Walkie Talkies are widely used among amateur radio operators. While converted commercial gear by companies such as Motorola are not uncommon, many companies such as Yaesu, Icom, and Kenwood design models specifically for amateur use. While superficially similar to commercial and personal units, amateur gear usually has a number of features that are not common to other gear, including: Wide-band receivers, often including radio scanner functionality, for listening to non-amateur radio bands. Multiple bands; while some operate only on specific bands such as 2 meters or 70 cm, others support several UHF and VHF amateur allocations available to the user. Since amateur allocations usually are not channelized, the user can dial in any frequency desired in the authorized band. Multiple modulation schemes: a few amateur HTs may allow modulation modes other than FM, including AM, SSB, and CW, and digital modes such as radio-tele-type or PSK31. Some may have TNCs built in to support packet radio data transmission without additional hardware. A newer addition to the Amateur Radio service is Digital Smart Technology for Amateur Radio or D-STAR. Handheld radios with this technology have several advanced features, including narrower bandwidth, simultaneous voice and
  • 14. 14 messaging, GPS position reporting, and call-sign-routed radio calls over a wide-ranging international network. As mentioned, commercial Walkie Talkies can sometimes be reprogrammed to operate on amateur frequencies. Amateur radio operators may do this for cost reasons or due to a perception that commercial gear is more solidly constructed or better designed than purpose-built amateur gear.

Personal Use
The personal Walkie Talkie has become popular also because of the U.S. Family Radio Service (FRS) and similar license-free services in other countries. While FRS Walkie Talkies are also sometimes used as toys because mass production makes them low cost, they have proper superheterodyne receivers and are a useful communication tool for both business and personal use. The boom in license-free transceivers has, however, been a source of frustration to users of licensed services that are sometimes interfered with. For example, FRS and GMRS overlap in the United States, resulting in substantial pirate use of the GMRS frequencies. Use of the GMRS frequencies requires a license; however, most users either disregard this requirement or are unaware of it. Canada reallocated frequencies for license-free use due to heavy interference from US GMRS users. The European PMR446 channels fall in the middle of a United States UHF amateur allocation, and the US FRS channels interfere with public safety communications in the United Kingdom. Designs for personal Walkie Talkies are in any case tightly regulated, generally requiring non-removable antennas and forbidding modified radios.

Objectives
The broad objective was to study real-time communication on Android and the functionality of a Walkie Talkie. The specific objectives of the study were:
 To examine real-time communication on Android
 To examine how flexibly Android can handle communication, and how far one can go using Java as the language and Android as the OS
 To examine whether Android can be used as a sender-receiver without GSM, internet services, or any other third-party software or hardware
 To determine whether Android can act as a sender-receiver while staying an offline device, using only the local network to communicate
 To examine the local network's communication speed and limitations
 To examine how many nodes can communicate through one channel, and that channel's speed and limitations
 To examine how many nodes can communicate with each other at the same time while staying on one channel
 To determine whether increasing the number of nodes slows down the channel
 To determine whether increasing the number of nodes slows down the Android device
  • 15. 15 Limitation of study
 Not all modems could be considered as a communication medium, because of differing firewall settings, variation in firmware, and the absence of DHCP on some models.
 Additional or third-party firewalls on Android, or a firewall in the medium, were a challenge, because inbound and outbound connections on the specific channel have to be open for Android to send and receive voice over the line without interruption.
 Variation in Android OS versions was a big challenge; package dependencies do not work on modified or older OS versions, and the app requires at least Android 4.x to perform at its fullest.
 BT (Bluetooth), infrared and NFS were too slow: they can only handle 2-3 nodes per channel.
 A GSM-based broadcast channel would have been possible, but because the objective was to work offline, GSM was not used; instead, a medium was introduced in the form of Wi-Fi, a hotspot or DHCP.
 Wi-Fi/hotspot-based media tend to give faster connections but support fewer nodes; a hotspot can only handle about 50-70 nodes.
 DHCP is the only and best option, but connection quality over DHCP is only above average, and the firewall remains the main challenge.

Hypothesis set to achieve the objective
The objective of the study is to make communication over the local network as fast and as close to real time as possible. It is hypothesized that real-time communication might be possible by using a node-based module or a JS/AJAX-based service to update the communication line. Instead of building everything natively, which would make the app large, it would be better to use a JS-style library; this could be a shortcut and could also help when there are too many nodes and the native system is too busy running its own operations, which would otherwise crash the app. NodeJS is comparatively new on the market, but almost every developer knows it is no less than a standalone, stable platform. Its most attractive feature here was socket.io, a socket-based system that works on a custom port, emitting and committing messages to communicate. The real-time speed and performance of NodeJS is unmatched, so I looked for an alternative, a way to use a comparable library if not a NodeJS module itself, because NodeJS runs on its own platform using NPM and Android cannot emulate a Node module inside a native app. In the end JS-Collider was used as the alternative: it provides TCP/IP session emit and commit just like a Node module, and it is also very lightweight and developer-friendly.
  • 16. 16 Chapter 2
JS-Collider Working
According to the authors and developers of JS-Collider: "JS-Collider is an asynchronous event-driven Java network (NIO) application framework designed to provide maximum performance and scalability for applications having not too many connections but significant amount of network traffic (both incoming and outgoing). Performance is achieved by specially designed threading model and lock-free algorithms."
Working model of JS-Collider:
(Figure 1.0 – Working model of JS collider)
Figure 1.0 shows how JS-Collider is used here. The blocks marked "S" are devices (nodes) connected within the local area network. Each device emits its station number and acts as a handshake server on its own, while looking for a handshake client to validate and bind a connection with. The green "S" blocks are devices connected to the local area network that have not yet been verified. The purple "S" block is a device emitting its station information onto the local network, and the yellow "S" block is a device validating that station information and establishing connectivity. The model keeps expanding as more nodes connect. Each node is a server in its own right and treats the other devices as clients; the DHCP server, hotspot or modem is just the medium used to establish the connection between them for interaction and communication.
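As a concrete illustration of this per-node server pattern, the minimal sketch below opens a JS-Collider acceptor whose sessions hand incoming buffers to a listener, in the style of the ChannelAcceptor and HandshakeServerSession classes shown later in this chapter. It is only a sketch: the WalkieTalkieAcceptor name is illustrative, the package name is assumed, and the exact JS-Collider constructor and listener signatures may differ in the library version used by the project.

import java.nio.ByteBuffer;
import org.jsl.collider.Acceptor;
import org.jsl.collider.Collider;
import org.jsl.collider.RetainableByteBuffer;
import org.jsl.collider.Session;

public class WalkieTalkieAcceptor extends Acceptor {
    public Session.Listener createSessionListener(final Session session) {
        // Every accepted connection gets its own listener,
        // just like HandshakeServerSession in the application code.
        return new Session.Listener() {
            public void onDataReceived(RetainableByteBuffer data) {
                // Placeholder for the real handshake / audio-frame handling.
                System.out.println("received " + data.remaining() + " bytes");
                session.sendData(ByteBuffer.wrap("ok".getBytes()));
            }
            public void onConnectionClosed() {
                System.out.println("connection closed");
            }
        };
    }

    public static void main(String[] args) throws Exception {
        Collider collider = Collider.create();            // event loop
        collider.addAcceptor(new WalkieTalkieAcceptor()); // start accepting sessions
        collider.run();                                   // blocks, dispatching network events
    }
}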
  • 17. 17 Working with JS-Collider:
JS-Collider is the connectivity layer that connects nodes together and makes communication as close to real time as possible; with its emit/commit functionality, sending and receiving were achieved. The only remaining challenge was sending voice over the medium. The Wi-Fi hotspot completes the first step of Android connectivity, and JS-Collider completes the other two steps, connecting to the broadcast channel and sending/receiving. The main complexity was how to keep the emit and commit alive: if emits and commits run on an interval, or the nodes stay connected peer-to-peer, this is easy to manage, but a Walkie Talkie has a PTT (push-to-talk) button and the user has to press it before broadcasting his voice over the medium. Starting the emit on button press and stopping it on release is more complex, because the connection is no longer a continuous peer-to-peer stream, and because of the variation in connections another solution was needed.

Send and receive procedure
Once sending and receiving had been worked out, it was decided to send voice over the medium by recording sound and transmitting it after one commit; when the other device receives the emit, it automatically plays the committed voice. Recording the voice and treating it as data chunks, slicing it before sending (as sketched below), keeps each packet light and makes the send-receive-play cycle possible.
(Figure 2.0 sending-receiving voice)
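The following sketch shows the recording-and-chunking idea in its simplest form, independent of the project's own AudioRecorder class shown in Chapter 4. It records 16-bit PCM from the microphone and hands fixed-size frames to a send callback; the FrameSink interface and the 20 ms frame size are illustrative assumptions, not part of the original application (the RECORD_AUDIO permission is required).

import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;

public class PttRecorder {
    public interface FrameSink { void send(byte[] frame, int length); } // hypothetical send callback

    private static final int SAMPLE_RATE = 16000;                 // Hz, assumed
    private static final int FRAME_BYTES = SAMPLE_RATE / 50 * 2;  // ~20 ms of 16-bit mono PCM

    private volatile boolean recording;

    public void start(final FrameSink sink) {
        recording = true;
        new Thread(new Runnable() {
            public void run() {
                int minBuf = AudioRecord.getMinBufferSize(SAMPLE_RATE,
                        AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
                AudioRecord rec = new AudioRecord(MediaRecorder.AudioSource.MIC,
                        SAMPLE_RATE, AudioFormat.CHANNEL_IN_MONO,
                        AudioFormat.ENCODING_PCM_16BIT, Math.max(minBuf, FRAME_BYTES * 4));
                byte[] frame = new byte[FRAME_BYTES];
                rec.startRecording();
                while (recording) {
                    int n = rec.read(frame, 0, frame.length); // blocks until one frame is filled
                    if (n > 0) sink.send(frame, n);           // one commit per audio chunk
                }
                rec.stop();
                rec.release();
            }
        }).start();
    }

    public void stop() { recording = false; } // called when the PTT button is released
}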
  • 18. 18 Connectivity and searching for station
The application uses a hard-coded string/parameter as its station signature, which helps other devices running the same application find each other; this procedure is called the handshake. Once the application is running, it broadcasts its signature within the local network. If the same application is running somewhere else on the local network, the two will handshake and confirm each other's identity; while they are validating and connecting, each application receives the other device's name (station name). The application builds a list of connected nodes and displays it, with each node showing its own name so the user knows exactly whom he is talking to.

Hand-shake Client vs Hand-Shake Server
The handshake procedure is divided into two parts:
 Server hand-shake
 Client hand-shake
1. Server hand-shake: the server side of the handshake runs on devices acting as the DHCP server, i.e. devices that turn on their hotspot and connect other devices through it.
2. Client hand-shake: the client side of the handshake runs on ordinary nodes connected through a centralized medium/DHCP/network; these devices look up other devices within the network and handshake with them to learn their names and add them to the user interface.

Software requirement specification
The application does not require any additional libraries from Android or any third-party resource to function; every library the application needs is already part of the application. There are no external API or resource calls. However, the application does require Android permissions to work; without those permissions it cannot run. The permissions required from the Android system are (a runtime check is sketched after this list):
o Internet permission
o Wi-Fi permission
o Recording permission
o Change/Read Wi-Fi state permission
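As an illustration of how these permissions map onto the Android API, the sketch below checks and, if necessary, requests them at runtime from an Activity. It assumes the support/compat library is available; in practice only RECORD_AUDIO is normally treated as a dangerous permission needing a runtime prompt, while the network and Wi-Fi permissions are granted at install time from the manifest.

import android.Manifest;
import android.app.Activity;
import android.content.pm.PackageManager;
import android.support.v4.app.ActivityCompat;
import android.support.v4.content.ContextCompat;

public final class Permissions {
    private static final int REQUEST_CODE = 42; // arbitrary request identifier

    private static final String[] REQUIRED = {
            Manifest.permission.INTERNET,
            Manifest.permission.ACCESS_WIFI_STATE,
            Manifest.permission.CHANGE_WIFI_STATE,
            Manifest.permission.RECORD_AUDIO
    };

    /** Returns true if everything is already granted, otherwise asks the user. */
    public static boolean checkAndRequest(Activity activity) {
        for (String permission : REQUIRED) {
            if (ContextCompat.checkSelfPermission(activity, permission)
                    != PackageManager.PERMISSION_GRANTED) {
                ActivityCompat.requestPermissions(activity, REQUIRED, REQUEST_CODE);
                return false; // result arrives in onRequestPermissionsResult()
            }
        }
        return true;
    }
}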
  • 19. 19 Functional requirements
Wi-Fi hardware and API level 21 or above are required and encouraged. Older versions of Android ship on minimal hardware, which can lead to application crashes and device lag. The app may not install on older versions at all; if it does install, it may not work, and even if it works, limited hardware means that more connected devices will slow down sending/receiving, with the device and the application lagging or crashing as a result. The application was tested on various API levels and OS versions of Android; the test results are as follows:

#  OS version  API version  Status
1  2.x.x       8            FAIL
2  3.x.x       12           FAIL
3  4.x.x       18           BUGS
4  5.x.x       21           PASS
5  6.x.x       23           PASS
6  7.x.x       25           PASS

Non-functional requirements
Devices should be on the same (local) network; the application is intended to work on local networks only and cannot work online or remotely.

Chapter 3
System Designs
The app has various methods working together. Instead of a database, the app uses shared preferences to store settings. The modules/methods that are part of the app are as follows:
1. Hand-Shake Client
2. Hand-Shake Server
3. Station Information
4. Connectivity
5. Channel
6. Audio player
7. Audio recorder
8. Protocol
9. Session manager
  • 20. 20 
10. State view
11. Walkie Talkie services
12. Switch buttons
13. Main Activity
14. Channel Session
15. Configurations

The app has several layouts, as follows:
1. Home
   a. Connected devices list
   b. Drop-down menu
      i. About
      ii. Settings
      iii. Exit
2. Wi-Fi connectivity

The app provides its images in four density variants, and the app logo and status bar logo in five variants each. The densities used for the application artwork are as follows:
1. Images:
   a. HDPI
   b. MDPI
   c. XHDPI
   d. XXHDPI
2. App logo:
   a. HDPI
   b. MDPI
   c. XHDPI
   d. XXHDPI
   e. LDPI
3. Status bar logo:
   a. HDPI
   b. MDPI
   c. XHDPI
   d. XXHDPI
   e. LDPI
  • 21. 21 App permissions are requested through the main activity and validated from the other methods as needed; whenever a process is about to run, the first step the system takes is to validate permissions. All of these permissions are declared in the Manifest.xml file.

Strings.xml
Android has a feature called string values, where all the strings used in the app are declared and defined. Whenever a string is needed it is referenced by its name. For example, if I have a sentence saying "this app is developed by Talha Habib", I can define this sentence with XML markup in the strings.xml file in the resource file-system structure. Each string node (DOM object/line) in strings.xml can be given an id, so that the string can be looked up and reused whenever it is needed later (a short example follows at the end of this section). XML is a DOM-object-based structure in which we can define our own node names and our own attributes on each node; for example, it could be something like this:
<class>
  <section name="c">
    <student name="Talha Habib" roll="1807" id="talha"></student>
  </section>
  <section name="d">
    <student name="Umer Najeeb" roll="1802" id="umer"></student>
  </section>
</class>
This is how the data looks in an XML file. It says the class has two sections, "c" and "d"; each contains a node named "student" carrying the student's id, name and other attributes, which could be anything. If we need to know the name of the person with roll number 1807, we read the "name" attribute of that node and continue through our records. In the same way, strings.xml holds values we can reuse later: if we know that our app name is "Wi-Fi Walkie Talkie", we never have to type it again; once it has an id, all we need to do is call that id. XML stands for Extensible Markup Language (note the spelling: "Extensible", not "Xtensible"). XML nodes are called elements, not tags; in HTML the DOM nodes are called tags. XML and HTML/DHTML may look similar in syntax, but they differ in how they work and in scope.
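As a small illustration of the idea above: a string resource is declared once in res/values/strings.xml and then referenced by id from Java. The resource name app_name and the ExampleActivity class shown here are assumptions for the example, not taken from the project's source.

import android.app.Activity;
import android.os.Bundle;
import android.widget.TextView;

public class ExampleActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Declared once in res/values/strings.xml:
        //   <resources>
        //       <string name="app_name">Wi-Fi Walkie Talkie</string>
        //   </resources>
        // Looked up by id wherever it is needed:
        String appName = getString(R.string.app_name);
        TextView title = new TextView(this);
        title.setText(appName);
        setContentView(title);
    }
}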
  • 22. 22 XML (Extensible Markup Language)
In computing, Extensible Markup Language is a markup language that defines a set of rules for encoding documents in a format that is both human-readable and machine-readable. The W3C's XML 1.0 Specification and several other related specifications, all of them free open standards, define XML. The design goals of XML emphasize simplicity, generality, and usability across the Internet. It is a textual data format with strong support via Unicode for different human languages. Although the design of XML focuses on documents, the language is widely used for the representation of arbitrary data structures such as those used in web services. Several schema systems exist to aid in the definition of XML-based languages, while programmers have developed many application programming interfaces to aid the processing of XML data. Hundreds of document formats using XML syntax have been developed, including RSS, Atom, SOAP, and XHTML. XML-based formats became the default for many office-productivity tools, including Microsoft Office, OpenOffice.org and LibreOffice, and Apple's iWork. XML has also provided the base language for communication protocols such as XMPP; applications for the Microsoft .NET Framework use XML files for configuration, and Apple has an implementation of a registry based on XML. XML has come into common use for the interchange of data over the Internet. IETF RFC 7303 gives rules for the construction of Internet Media Types for use when sending XML. It also defines the media types application/xml and text/xml, which say only that the data is in XML, and nothing about its semantics. The use of text/xml has been criticized as a potential source of encoding problems, and it has been suggested that it should be deprecated. Because an XML attribute holds only a single string, lists of values must be encoded with some format beyond what XML defines itself; usually this is a comma- or semicolon-delimited list or, if the individual values are known not to contain spaces, a space-delimited list. For example, in <div class="inner greeting-box">Welcome!</div> the attribute "class" has the single value "inner greeting-box", which also indicates the two CSS class names "inner" and "greeting-box".
  • 23. 23 XML declaration XML documents consist entirely of characters from the Unicode repertoire. Except for a small number of specifically excluded control characters, any character defined by Unicode may appear within the content of an XML document. XML includes facilities for identifying the encoding of the Unicode characters that make up the document, and for expressing characters that, for one reason or another, cannot be used directly. Valid characters Unicode code points in the following ranges are valid in XML 1.0 documents: U+0009, U+000A, U+000D: these are the only C0 controls accepted in XML 1.0; U+0020–U+D7FF, U+E000– U+FFFD: this excludes some non-characters in the BMP; U+10000–U+10FFFF: this includes all code points in supplementary planes, including non-characters.XML 1.1 extends the set of allowed characters to include all the above, plus the remaining characters in the range U+0001–U+001F. At the same time, however, it restricts the use of C0 and C1 control characters other than U+0009, U+000A, U+000D, and U+0085 by requiring them to be written in escaped form. In the case of C1 characters, this restriction is a backwards incompatibility; it was introduced to allow common encoding errors to be detected. The code point U+0000 is the only character that is not permitted in any XML 1.0 or 1.1 document. Encoding detection The Unicode character set can be encoded into bytes for storage or transmission in a variety of different ways, called "encodings". Unicode itself defines encodings that cover the entire repertoire; well-known ones include UTF-8 and UTF-16. There are many other text encodings that predate Unicode, such as ASCII and ISO/IEC 8859; their character repertoires in almost every case are subsets of the Unicode character set.XML allows the use of any of the Unicode-defined encodings, and any other encodings whose characters also appear in Unicode. XML also provides a mechanism whereby an XML processor can reliably, without any prior knowledge, determine which encoding is being used. Encodings other than UTF-8 and UTF-16 are not necessarily recognized by every XML parser.
  • 24. 24 Hand-Shake Server-Client
In information technology, telecommunications, and related fields, handshaking is an automated process of negotiation that dynamically sets the parameters of a communications channel established between two entities before normal communication over the channel begins. It follows the physical establishment of the channel and precedes normal information transfer. The handshaking process usually takes place in order to establish rules for communication when a computer sets about communicating with a foreign device. When a computer communicates with another device like a modem, printer, or network server, it needs to handshake with it to establish a connection. Handshaking can negotiate parameters that are acceptable to equipment and systems at both ends of the communication channel, including information transfer rate, coding alphabet, parity, interrupt procedure, and other protocol or hardware features. Handshaking is a technique of communication between two entities. However, within the TCP/IP RFCs, the term "handshake" is most commonly used to reference the TCP three-way handshake; for example, the term is not present in the RFCs covering FTP or SMTP. A simple handshaking protocol might only involve the receiver sending a message meaning "I received your last message and I am ready for you to send me another one."
(Figure 3.0 Hand-shaking Client – Server)
  • 25. 25 Client side handshake
public HandshakeClientSession(ARGS) { // ARGS: constructor arguments elided
    // declarations
    if (pingInterval > 0) { // ping interval for packet interactions
        m_timerHandler = new TimerHandler();
        timerQueue.schedule(m_timerHandler, pingInterval, TimeUnit.SECONDS);
    }
    try {
        final ByteBuffer handshakeRequest = Protocol.HandshakeRequest.create(audioFormat, stationName);
        session.sendData(handshakeRequest); // send data through handshake request
    } catch (final CharacterCodingException ex) {
        Log.e(LOG_TAG, getLogPrefix() + ex.toString()); // debugging
        session.closeConnection(); // close session
    }
}

Server side handshake
public HandshakeServerSession(ARGS) { // ARGS: constructor arguments elided
    // declarations
    if (pingInterval > 0) {
        m_timerHandler = new TimerHandler();
        m_timerQueue.schedule(m_timerHandler, pingInterval, TimeUnit.SECONDS);
    }
    Log.i(LOG_TAG, getLogPrefix() + "connection accepted");
}

There are many other types of handshaking and several ways to perform it; some of the methods are as follows:
1. TCP three-way handshake
2. WPA/WPA2 four-way handshake
  • 26. 26 TCP-Three Way Handshaking
The first host (Alice) sends the second host (Bob) a "synchronize" (SYN) message with its own sequence number x, which Bob receives. Bob replies with a synchronize-acknowledgment (SYN-ACK) message with its own sequence number y and acknowledgement number x+1, which Alice receives. Alice replies with an acknowledgment message with acknowledgement number y+1, which Bob receives and to which he does not need to reply. In this setup, the synchronize messages act as service requests from one server to the other, while the acknowledgement messages return to the requesting server to let it know the message was received. Establishing a normal TCP connection thus requires three separate steps:
(Figure 4.0 Three-way handshake)
One of the most important aspects of the three-way handshake is the exchange of the starting sequence numbers the two sides plan to use: the client first sends a segment with its own initial sequence number x, then the server responds with a segment carrying its own sequence number y and the acknowledgement number x+1, and finally the client responds with a segment carrying acknowledgement number y+1. The reason the client and server do not use a default sequence number such as 0 for establishing the connection is to protect against two incarnations of the same connection reusing the same sequence number too soon, which could let a segment from an earlier incarnation of a connection interfere with a later incarnation.
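A concrete trace may make the sequence-number bookkeeping clearer. The initial sequence numbers 3000 and 7000 below are arbitrary values chosen only for this example:

Alice -> Bob:   SYN      seq = 3000
Bob   -> Alice: SYN-ACK  seq = 7000, ack = 3001   (acknowledges Alice's SYN)
Alice -> Bob:   ACK      seq = 3001, ack = 7001   (acknowledges Bob's SYN; connection established)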
  • 27. 27 Hand-shaking can use one of many protocols, including the following:
1. SMTP
2. TLS
3. WPA2 wireless
4. Dial-up access modems

SMTP
The Simple Mail Transfer Protocol (SMTP) is the key Internet standard for email transmission. It includes handshaking to negotiate authentication, encryption and maximum message size.
(Figure 5.0 SMTP based handshake)

TLS
When a Transport Layer Security (SSL or TLS) connection starts, the record encapsulates a "control" protocol: the handshake messaging protocol. This protocol is used to exchange all the information required by both sides for the exchange of the actual application data by TLS. It defines the format of the messages containing this information and the order of their exchange.
(Figure 6.0 TLS Layout)
  • 28. 28 These may vary according to the demands of the client and server—i.e., there are several possible procedures to set up the connection. This initial exchange results in a successful TLS connection (both parties ready to transfer application data with TLS) or an alert message. The protocol is used to negotiate the secure attributes of a session. (Figure 7.0 TLS handshake over SSL) (Figure 8.0 Simple TLS Handshaking)
  • 29. 29 WPA2 Wireless The WPA2 standard for wireless uses a four-way handshake defined in IEEE 802.11i-2004.Wi-Fi Protected Access (WPA) and Wi-Fi Protected Access II (WPA2) are two security protocols and security certification programs developed by the Wi-Fi Alliance to secure wireless computer networks. The Alliance defined these in response to serious weaknesses researchers had found in the previous system, Wired Equivalent Privacy (WEP). WPA (sometimes referred to as the draft IEEE 802.11i standard) became available in 2003.The Wi-Fi Alliance intended it as an intermediate measure in anticipation of the availability of the more secure and complex WPA2. WPA2 became available in 2004 and is a common shorthand for the full IEEE 802.11i (or IEEE 802.11i-2004) standard. A flaw in a feature added to Wi-Fi, called Wi-Fi Protected Setup, allows WPA and WPA2 security to be bypassed and effectively broken in many situations. The WPA and WPA2 security protocols implemented without using the Wi-Fi Protected Setup feature are unaffected by the security vulnerability. The WPA protocol implements much of the IEEE 802.11i standard. Specifically, the Temporal Key Integrity Protocol (TKIP) was adopted for WPA. WEP used a 64-bit or 128-bit encryption key that must be manually entered on wireless access points and devices and does not change. TKIP employs a per-packet key, meaning that it dynamically generates a new 128-bit key for each packet and thus prevents the types of attacks that compromised WEP. (Figure 9.0 TCP Four Way Handshake) WPA also includes a message integrity check, which is designed to prevent an attacker from altering and resending data packets.This replaces the cyclic redundancy check (CRC) that was used by the WEP standard. CRC's main flaw was that it did not provide a sufficiently strong data integrity guarantee for the packets it handled. Well tested message authentication codes existed to solve these problems, but they required too much computation to be used on old network cards. WPA uses a
  • 30. 30 message integrity check algorithm called TKIP to verify the integrity of the packets. TKIP is much stronger than a CRC, but not as strong as the algorithm used in WPA2. Researchers have since discovered a flaw in WPA that relied on older weaknesses in WEP and the limitations of Michael to retrieve the keystream from short packets to use for re-injection and spoofing. Dial up access modems One classic example of handshaking is that of dial-up modems, which typically negotiate communication parameters for a brief period when a connection is first established, and thereafter use those parameters to provide optimal information transfer over the channel as a function of its quality and capacity. (Figure 10.0 Modem/Device/Server connection hand-shaking) The "squealing" (which is actually a sound that changes in pitch 100 times every second) noises made by some modems with speaker output immediately after a connection is established are in fact the sounds of modems at both ends engaging in a handshaking procedure; once the procedure is completed, the speaker might be silenced, depending on the settings of operating system or the application controlling the modem.
  • 31. 31 Server side NDS handshake – receiving packets:
public void onDataReceived(RetainableByteBuffer data) { // called for every received packet
    final RetainableByteBuffer msg = m_streamDefragger.getNext(data); // reassemble the next message from the stream
    if (msg == null) {
        /* HandshakeRequest is fragmented; very rare, but it still happens. */
    } else if (msg == StreamDefragger.INVALID_HEADER) { // the message header is invalid
        m_session.closeConnection(); // close connection
    } else { // message is complete
        if (m_timerHandler != null) { // cancel the idle timer
            try {
                if (m_timerQueue.cancel(m_timerHandler) != 0) {
                    return;
                }
            } catch (final InterruptedException ex) { // interrupted while cancelling
                Thread.currentThread().interrupt(); // restore the interrupt flag
            }
        }
        // get the message ID
        if (messageID == Protocol.HandshakeRequest.ID) { // verify ID
            final short protocolVersion = Protocol.HandshakeRequest.getProtocolVersion(msg);
            if (protocolVersion == Protocol.VERSION) {
                try {
                    final String audioFormat = Protocol.HandshakeRequest.getAudioFormat(msg);
                    final String stationName = Protocol.HandshakeRequest.getStationName(msg);
                    final AudioPlayer audioPlayer = AudioPlayer.create(args);
                    if (audioPlayer == null) { // no audio player could be created
                        Log.i(LOG_TAG, getLogPrefix()); // debug case
                        m_session.closeConnection(); // close connection
                    } else {
                        Log.i(LOG_TAG, getLogPrefix() + "handshake ok"); // debug case
                        final ByteBuffer handshakeReply = Protocol.HandshakeReplyOk.create();
  • 32. 32 
                        m_session.sendData(handshakeReply);
                        m_channel.setStationName(m_session, stationName);
                        final ChannelSession channelSession = new ChannelSession(args);
                        m_session.replaceListener(channelSession);
                    }
                } catch (final CharacterCodingException ex) {
                    Log.e(LOG_TAG, getLogPrefix() + ex.toString());
                    m_session.closeConnection();
                }
            } else {
                /* Protocol version is different, cannot continue. */

Client side based NDS handshake – receiving packets:
                final String statusText = "Protocol version mismatch:";
                try {
                    final ByteBuffer handshakeReply = Protocol.HandshakeReplyFail.create(statusText);
                    m_session.sendData(handshakeReply);
                } catch (final CharacterCodingException ex) {
                    Log.i(LOG_TAG, ex.toString());
                }
                m_session.closeConnection();
            }
        } else { // unexpected message ID
            m_session.closeConnection();
        }
    }
}
  • 33. 33 Chapter 4
Station Information and Connectivity
Each device running the app has its own unique address, but every running Walkie Talkie application also carries the same signature on every node, so that the system can look the other nodes up using handshaking and pings.
(Figure 11.0 how ping works)
Assume devices A, B and C are Android devices and 1.1.1.1 is their LAN IP; the /24 at the end is the subnet mask, which determines how many nodes can sit on the LAN (a /24 mask leaves 8 host bits, i.e. up to 254 usable addresses). The subnet is also scanned to discover other visible devices that can accept pings. In the diagram, device A sends B a request to learn whether it is online and can respond; if device B responds, it is online and discoverable. The same goes from device B to C. The ping step basically collects all the nodes that can reply; then, after the handshake, validation of the station information and a proper signature response, connectivity between the devices is established. Ping sends packets measured in bytes and measures the latency of the reply: the quicker the response, the faster the connection. Latency is measured in milliseconds (1000 milliseconds make one second); a normal, recommended latency between two nodes is 20-60 milliseconds. If one device takes longer than about 200 milliseconds it causes slight lag and delayed responses on both ends, because the sender's packets arrive faster than the other node can collect them, with the result that some packet data goes missing or is corrupted. The user can change his broadcast name (station name), which is used for display to give a proper, user-friendly UI/UX. Changing the username/broadcasting person's name has no effect on the station itself, because the station signature is the same on every node and cannot be changed, for connectivity-establishment and security reasons.
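The reachability probe described above can be approximated in plain Java with InetAddress.isReachable(), which uses ICMP echo where permitted and falls back to a TCP echo-port probe otherwise. The address prefix and timeout below are illustrative assumptions; the project's actual discovery runs through NSD and the handshake rather than this exact call.

import java.net.InetAddress;

public class SubnetScan {
    public static void main(String[] args) throws Exception {
        // Probe every host address of a /24 network (8 host bits -> 254 usable addresses).
        String prefix = "192.168.1.";                // assumed LAN prefix for the example
        for (int host = 1; host <= 254; host++) {
            InetAddress addr = InetAddress.getByName(prefix + host);
            long start = System.currentTimeMillis();
            if (addr.isReachable(200)) {             // 200 ms timeout per node
                long latency = System.currentTimeMillis() - start;
                System.out.println(addr.getHostAddress() + " is up, ~" + latency + " ms");
            }
        }
    }
}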
  • 34. 34 (Figure 12.0 App Setting layout/Station name setting)
This is the in-app preview of the settings dialog/popup. The layout contains one input for the station name, which is in effect the node's display name; the real station name used as the signature for connectivity is hard-coded. The editable station name is like the name of the person using the application: if someone changes it, the other connected devices see the new name in the station list on the main page.
(Figure 13.0 Volume control in setting layout/screen)
  • 35. 35 Volume control is provided as an alternative control: if the user wishes to use the volume buttons as PTT (push-to-talk), he can manage his volume settings through the settings screen.
(Figure 14.0 Use volume buttons as PTT on settings screen)
The user can also start the background service and have the app check the Wi-Fi status. This is useful when the user has not turned Wi-Fi on and tries to use the app: the application simply shows a dialog telling him he needs to turn Wi-Fi on in order to use it, because the main purpose of this app is to run over Wi-Fi. The checkbox control on the settings screen enables an automated check of the Wi-Fi status on every start of the application, so that the user does not miss any important connectivity by mistake.
  • 36. 36 (Figure 15.0 Wi-Fi Status check on start)
All of the controls on the settings screen are optional; the user is not required to set them up before using the app. They are simply additional customization and performance-tuning options.
Station information parameters and values:
public StationInfo(String name, String addr, int transmission, long ping) {
    this.name = name;
    this.addr = addr;
    this.transmission = transmission;
    this.ping = ping;
}
  • 37. 37 Channel
Channels are simply identifiers used to communicate and to check integrity between nodes; a channel is also used to build a sequenced connection between nodes and to broadcast packets through it.
(Figure 16.0 Channel)
The channel also identifies the signature and reflects the station within it. Connectivity happens channel to channel, and channels are also the means of keeping sessions and extracting other information such as device state, ping rate, station name and session life span. Through the channel it is possible to keep a background service that triggers the connection events at a specific interval, so the app stays in contact with the other apps even if the user interface is shut down or the user switches to another application (a sketch of such a keep-alive service follows). A background component runs that renews sessions, keeps connectivity alive and gathers newly committed updates and changes, such as sent voice, name changes, changes in ping rate and session renewals. These sessions create a cloud of local devices for distributed communication.
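The keep-alive idea can be sketched as a plain Android Service that re-triggers a session-refresh callback on a fixed interval. The KeepAliveService name, the 5-second period and the SessionRefresher interface are assumptions for illustration only; the project's actual background work is done by its own service and channel classes.

import android.app.Service;
import android.content.Intent;
import android.os.IBinder;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class KeepAliveService extends Service {
    public interface SessionRefresher { void renewSessions(); } // hypothetical hook into the channel layer

    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    private SessionRefresher refresher; // would be wired to the real channel/session manager

    @Override
    public int onStartCommand(Intent intent, int flags, int startId) {
        // Renew sessions every 5 seconds, even when the UI is not in the foreground.
        scheduler.scheduleAtFixedRate(new Runnable() {
            public void run() {
                if (refresher != null) refresher.renewSessions();
            }
        }, 0, 5, TimeUnit.SECONDS);
        return START_STICKY; // ask the system to keep the service running
    }

    @Override
    public void onDestroy() {
        scheduler.shutdownNow(); // stop the periodic renewals
        super.onDestroy();
    }

    @Override
    public IBinder onBind(Intent intent) {
        return null; // started service only; no binding needed for this sketch
    }
}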
  • 38. 38 Accepting Connection:
private class ChannelAcceptor extends Acceptor {
    public Session.Listener createSessionListener(Session session) {
        Log.i("session accepted");
        m_lock.lock();
        try {
            if (m_stopLatch == null) {
                final SessionInfo sessionInfo = new SessionInfo();
                m_sessions.put(session, sessionInfo);
                return new HandshakeServerSession(args);
            }
        } finally {
            m_lock.unlock();
        }
        return null;
    }
}

When the channel connection is accepted:
public void onAcceptorStarted(Collider collider, int localPort) {
    Log.i(LOG_TAG, m_name + ": acceptor started: " + localPort);
    m_lock.lock();
    try {
        if (m_stopLatch == null) {
            m_localPort = localPort;
        }
        if (m_stateListener != null) {
            updateStateLocked();
        }
        Log.i("register service");
        return;
    }
}
  • 39. 39 State listener and Exception handling:
private class ChannelConnector extends Connector {
    private final String m_serviceName;

    public ChannelConnector(InetSocketAddress addr, String serviceName) {
        super(addr);
        m_serviceName = serviceName;
    }

    public Session.Listener createSessionListener(Session session) {
        // listen for sessions
        m_lock.lock();
        // lock when another device is found, to prevent

    public void onException(IOException ex) { // on error
        m_lock.lock();
        try {
            final ServiceInfo serviceInfo = m_serviceInfo.get(m_serviceName);
            if (serviceInfo == null) {
                // if serviceInfo is empty: throw error
            } else {
                if (BuildConfig.DEBUG && ((serviceInfo.connector != this) || (serviceInfo.session != null))) {
                    throw new AssertionError();
                }
            }
        }
  • 40. 40 Getting station info private StationInfo[] getStationListLocked(){ if(BuildConfig.DEBUG){ if(!m_lock.isHeldByCurrentThread()) thrownew AssertionError(); if(m_serviceName ==null) thrownew AssertionError(); } elseif(m_serviceName ==null) returnnew StationInfo[0]; int sessions =0; for(Map.Entry < String, ServiceInfo > e: m_serviceInfo.entrySet()){ if(m_serviceName.compareTo(e.getKey())>0){ if(e.getValue().stationName !=null) sessions++; } } for(Map.Entry < Session, SessionInfo > e: m_sessions.entrySet()){ if(e.getValue().stationName !=null) sessions++; } final StationInfo[] stationInfo =new StationInfo[sessions]; int idx =0; for(Map.Entry < String, ServiceInfo > e: m_serviceInfo.entrySet()){ if(m_serviceName.compareTo(e.getKey())>0){ if(e.getValue().stationName !=null){ final ServiceInfo serviceInfo = e.getValue(); stationInfo[idx++]=new StationInfo(args); } } } } return stationInfo; }
  • 41. 41 Establishing connection between nodes and DHCP: publicvoid onServiceFound(NsdServiceInfo nsdServiceInfo){ final String serviceName = nsdServiceInfo.getServiceName(); m_lock.lock(); try{ if(BuildConfig.DEBUG &&(m_stopLatch !=null)) thrownew AssertionError(); ServiceInfo serviceInfo = m_serviceInfo.get(serviceName); if(serviceInfo ==null){ serviceInfo =new ServiceInfo(); m_serviceInfo.put(serviceName, serviceInfo); } serviceInfo.nsdServiceInfo = nsdServiceInfo; serviceInfo.nsdUpdates++; if((m_serviceName !=null)&&(m_serviceName.compareTo(serviceName)>0)){ if((serviceInfo.session ==null)&&(serviceInfo.connector ==null)){ if(m_resolveListener ==null){ Log.i(LOG_TAG, m_name +": onServiceFound, resolve: "+ nsdServiceInfo); serviceInfo.nsdUpdates =0; m_resolveListener =new ResolveListener(serviceName); m_nsdManager.resolveService(nsdServiceInfo, m_resolveListener); }else{ Log.i(LOG_TAG, m_name +": onServiceFound: "+ nsdServiceInfo); } } } }finally{ m_lock.unlock(); } }
  • 42. 42 On Connection lost publicvoid onServiceLost( NsdServiceInfo nsdServiceInfo ) { final String serviceName = nsdServiceInfo.getServiceName(); m_lock.lock(); try { final ServiceInfo serviceInfo = m_serviceInfo.get( serviceName ); if(serviceInfo ==null) { Log.w(": internal error: service not found: "+ nsdServiceInfo ); } elseif((m_serviceName !=null)&&(m_serviceName.compareTo(serviceName)>0)){ if(((m_resolveListener !=null)&& m_resolveListener.getServiceName().equals(serviceName))|| (serviceInfo.connector !=null)||(serviceInfo.session !=null)){ serviceInfo.nsdServiceInfo =null; }else{ m_serviceInfo.remove( serviceName ); final StateListener stateListener = m_stateListener; if(stateListener !=null) stateListener.onStationListChanged( getStationListLocked()); } } else{ m_serviceInfo.remove( serviceName ); } } finally{ m_lock.unlock(); } }
  • 43. 43 Setting Station Name:
Setting the station name, generating and getting the station name for the generated session, register/unregister handling of sessions, setting ping rates, etc.:
public void setStationName(String serviceName, String stationName) {
    m_lock.lock();
    try {
        final ServiceInfo serviceInfo = m_serviceInfo.get(serviceName);
        if (serviceInfo != null) {
            serviceInfo.stationName = stationName;
            serviceInfo.addr = serviceInfo.session.getRemoteAddress().toString();
            serviceInfo.state = 0;
            serviceInfo.ping = 0;
        }
    } finally {
        m_lock.unlock();
    }
}

Audio Player
The application does not use a physical/external audio player. It is programmed to play audio as soon as it is received from another node, using a player embedded in the application itself: the player has no body (user interface) of its own, it is built programmatically and only drives the speaker hardware to play the voice.
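The project's own play() implementation follows below; purely as a simpler illustration of the same idea (streaming received PCM straight to the speaker with no player UI), a received frame can be written to android.media.AudioTrack. The 16 kHz mono format here is an assumption matching the recorder sketch earlier, not necessarily the format negotiated in the handshake.

import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioTrack;

public class SpeakerSink {
    private final AudioTrack track;

    public SpeakerSink(int sampleRate) {
        int bufferSize = AudioTrack.getMinBufferSize(sampleRate,
                AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT);
        track = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
                AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
                bufferSize, AudioTrack.MODE_STREAM);
        track.play(); // start the output stream; write() below feeds it
    }

    /** Called for every received audio frame (16-bit PCM bytes). */
    public void onFrameReceived(byte[] pcm, int length) {
        track.write(pcm, 0, length); // blocks until the frame is queued to the speaker
    }

    public void release() {
        track.stop();
        track.release();
    }
}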
  • 44. 44 Playing Audio: public void play( RetainableByteBuffer audioFrame ) { final Node node = new Node( audioFrame ); audioFrame.retain(); for (;;) { final Node tail = m_tail; if (BuildConfig.DEBUG && (tail != null) && (tail.audioFrame == null)) { audioFrame.release(); throw new AssertionError(); } if (s_tailUpdater.compareAndSet(this, tail, node)) { if (tail == null) { m_head = node; m_sema.release(); } else { tail.next = node; break; } } }
  • 45. 45 Waiting for other voices, and stop after playing one voice broadcast: public void stopAndWait() { final Node node = new Node( null ); for (;;){ final Node tail = m_tail; if (BuildConfig.DEBUG && (tail != null) && (tail.audioFrame == null)) { throw new AssertionError(); } if (s_tailUpdater.compareAndSet(this, tail, node)){ if (tail == null) { m_head = node; m_sema.release(); } else{ tail.next = node; break; }}try{ m_thread.join(); } catch (final InterruptedException ex) {Log.e( LOG_TAG, ex.toString() ); } }
  • 46. 46 (Figure 17.0 Playing voice using inner audio player)
Having no external calls to the Android stock music player or any other music player app saves the trouble of locating, matching and allocating a suitable player and keeping it on standby and operational; it also keeps the application smaller and avoids an extra validation step.

Audio Recorder
The audio recorder is triggered when push-to-talk is active: while the PTT button is pressed and held, the application records audio, and as soon as the user releases the button the app transmits the voice within the LAN, where the other devices receive it and play it inside the app. Figure 2.0 (Sending-receiving voice) shows in detail how the PTT button works and what role the recording plays.
  • 47. 47 Recording voice: public void startRecording() { Log.d( LOG_TAG, "startRecording" ); m_lock.lock(); try { if (m_state == IDLE) { m_state = START; m_cond.signal(); } else if (m_state == STOP) m_state = RUN; } finally { m_lock.unlock(); } } public void stopRecording() { m_lock.lock(); try { if (m_state != IDLE) m_state = STOP; } finally { m_lock.unlock(); } }
  • 48. 48 Initializing AudioRecorder: public static AudioRecorder create( SessionManager sessionManager, boolean repeat ) { final int rates [] = { 11025, 16000, 22050, 44100 }; for (int sampleRate : rates) { final int channelConfig = AudioFormat.CHANNEL_IN_MONO; final int minBufferSize = AudioRecord.getMinBufferSize( sampleRate, channelConfig, AudioFormat.ENCODING_PCM_16BIT ); if ((minBufferSize != AudioRecord.ERROR) && (minBufferSize != AudioRecord.ERROR_BAD_VALUE)) { final int frameSize = (sampleRate * (Short.SIZE / Byte.SIZE) / 2) & (Integer.MAX_VALUE - 1); int bufferSize = (frameSize * 4); if (bufferSize < minBufferSize) bufferSize = minBufferSize; final AudioRecord audioRecord = new AudioRecord( MediaRecorder.AudioSource.MIC, sampleRate, channelConfig, AudioFormat.ENCODING_PCM_16BIT, bufferSize ); final String audioFormat = ("PCM:" + sampleRate); return new AudioRecorder( sessionManager, audioRecord, audioFormat, frameSize, bufferSize, repeat ); } } return null; }
  • 49. 49 Sending voice over protocol layering and handling recorder process: public void run() { Log.i( LOG_TAG, "run [" + m_audioFormat + "]: frameSize=" + m_frameSize + " bufferSize=" + m_bufferSize ); android.os.Process.setThreadPriority( Process.THREAD_PRIORITY_URGENT_AUDIO ); RetainableByteBuffer byteBuffer = m_byteBufferCache.get(); byte [] byteBufferArray = byteBuffer.getNioByteBuffer().array(); int byteBufferArrayOffset = byteBuffer.getNioByteBuffer().arrayOffset(); int frames = 0; try { for (;;) { m_lock.lock(); try { while (m_state == IDLE) m_cond.await(); if (m_state == START) { m_audioRecord.startRecording(); } else if (m_state == STOP) { m_audioRecord.stop(); m_state = IDLE; if (m_list != null) { int replayedFrames = 0; for (RetainableByteBuffer msg : m_list) { m_audioPlayer.play( msg ); msg.release();
  • 50. 50 replayedFrames++; } m_list.clear(); Log.i( LOG_TAG, "Replayed " + replayedFrames + " frames." ); } Log.i( LOG_TAG, "Sent " + frames + " frames." ); continue; } else if (m_state == SHTDN) break; } finally { m_lock.unlock(); } int position = byteBuffer.position(); if ((byteBuffer.limit() - position) < Protocol.AudioFrame.getMessageSize(m_frameSize)) { byteBuffer.release(); byteBuffer = m_byteBufferCache.get(); byteBufferArray = byteBuffer.getNioByteBuffer().array(); byteBufferArrayOffset = byteBuffer.getNioByteBuffer().arrayOffset(); position = 0; if (BuildConfig.DEBUG && (byteBuffer.position() != position)) throw new AssertionError(); } Protocol.AudioFrame.init( byteBuffer.getNioByteBuffer(), m_frameSize ); if (BuildConfig.DEBUG && (byteBuffer.remaining() <m_frameSize)) throw new AssertionError(); final int bytesReady = m_audioRecord.read( byteBufferArray, byteBufferArrayOffset+byteBuffer.position(), m_frameSize ); if (bytesReady == m_frameSize) { final int limit = position + Protocol.AudioFrame.getMessageSize( m_frameSize ); byteBuffer.position( position );
  • 51. 51 byteBuffer.limit( limit ); final RetainableByteBuffer msg = byteBuffer.slice(); m_sessionManager.send( msg ); frames++; if (m_list != null) { m_list.add( Protocol.AudioFrame.getAudioData(msg) ); } msg.release(); byteBuffer.limit( byteBuffer.capacity() ); byteBuffer.position( limit ); } else { Log.e( LOG_TAG, "readSize=" + m_frameSize + " bytesReady=" + bytesReady ); break; } } } catch (final InterruptedException ex) { Log.e( LOG_TAG, ex.toString() ); Thread.currentThread().interrupt(); } m_audioRecord.stop(); m_audioRecord.release(); byteBuffer.release(); Log.i( LOG_TAG, "run [" + m_audioFormat + "]: done" ); }
• 52. 52 Session Manager
The session manager is part of the application's back end: it allocates, controls, connects and stores the session flows. It provides the central administrative control for viewing, altering and retrieving sessions, and these sessions build the connection path over which communication takes place.
Adding/removing a session:
public void addSession( ChannelSession session ) { m_lock.lock(); try { if (BuildConfig.DEBUG && m_sessions.contains(session)) throw new AssertionError(); final HashSet<ChannelSession> sessions = (HashSet<ChannelSession>) m_sessions.clone(); sessions.add( session ); m_sessions = sessions; } finally { m_lock.unlock(); } } public void removeSession( ChannelSession session ) { m_lock.lock(); try { final HashSet<ChannelSession> sessions = (HashSet<ChannelSession>) m_sessions.clone(); final boolean removed = sessions.remove( session ); if (BuildConfig.DEBUG && !removed) throw new AssertionError(); m_sessions = sessions; }
• 53. 53 finally { m_lock.unlock(); } }
Sending a session broadcast:
public void send( RetainableByteBuffer msg ) { for (ChannelSession session : m_sessions) session.sendMessage(msg); }
State View
StateView is an indicator drawn as a circle on the left side of each node entry in the list; it turns green to show that the node has sent a broadcast. The view draws the circle on a Canvas and colours it through the drawable attributes (Paint objects) it holds.
Drawing the state indicator using Canvas:
protected void onDraw( Canvas canvas ) { super.onDraw( canvas ); if (m_state < m_paint.length) { final float cx = (getWidth() / 2); final float cy = (getHeight() / 2); final float cr = (cx - cx / 2f); canvas.drawCircle( cx, cy, cr, m_paint[m_state] ); } } public StateView( Context context, AttributeSet attrs ) { super( context, attrs ); final TypedArray a = context.obtainStyledAttributes( attrs, new int [] { android.R.attr.minHeight }, android.R.attr.buttonStyle, 0 ); if (a != null)
  • 54. 54 { final int minHeight = a.getDimensionPixelSize( 0, -1 ); if (minHeight != -1) setMinimumHeight( minHeight ); a.recycle(); } setWillNotDraw( false ); m_paint = new Paint[2]; m_paint[0] = new Paint(); m_paint[0].setColor( Color.DKGRAY ); m_paint[1] = new Paint(); m_paint[1].setColor( Color.GREEN ); } Indication of state: void setIndicatorState( int state ) { if (state <m_paint.length) { if (m_state != state) { m_state = state; invalidate(); } } else if (BuildConfig.DEBUG) throw new AssertionError(); }
• 55. 55 Walkie Talkie Services
The Walkie Talkie service is the back-end class that sends and receives packets; it is the main engine responsible for the sending and receiving functionality. It is the container that holds the JS-Collider networking code and all NSD (Network Service Discovery) based handshakes and signature generation.
Initializing the JS-Collider thread:
private static class ColliderThread extends Thread { private final Collider m_collider; public ColliderThread( Collider collider ) { super( "ColliderThread" ); m_collider = collider; } public void run() { Log.i( LOG_TAG, "Collider thread: start" ); m_collider.run(); Log.i( LOG_TAG, "Collider thread: done" ); } }
Discovering other nodes nearby that offer the same service/signature (NSD discovery listener):
private class DiscoveryListener implements NsdManager.DiscoveryListener { public void onStartDiscoveryFailed( String serviceType, int errorCode ) { m_lock.lock(); try { if (m_cond != null) m_cond.signal(); } finally {
  • 56. 56 m_lock.unlock(); } } public void onStopDiscoveryFailed( String serviceType, int errorCode ) { Log.e( LOG_TAG, "Stop discovery failed: " + errorCode ); } public void onDiscoveryStarted( String serviceType ) { Log.i( LOG_TAG, "Discovery started" ); m_lock.lock(); try { if (m_cond == null) m_discoveryStarted = true; else m_nsdManager.stopServiceDiscovery( this ); } finally { m_lock.unlock(); } } When a service/node is found: public void onServiceFound( NsdServiceInfo nsdServiceInfo ) { try { final String[] ss = nsdServiceInfo.getServiceName().split( SERVICE_NAME_SEPARATOR ); final String channelName = new String( Base64.decode( ss[0], 0 ) ); Log.i( LOG_TAG, "onServiceFound: " + channelName + ": " + nsdServiceInfo ); if (channelName.compareTo( SERVICE_NAME ) == 0) m_channel.onServiceFound( nsdServiceInfo );
  • 57. 57 } catch (final IllegalArgumentException ex) { Log.w( LOG_TAG, ex.toString() ); } } Getting device ID: private static String getDeviceID( ContentResolver contentResolver ) { long deviceID = 0; final String str = Settings.Secure.getString( contentResolver, Settings.Secure.ANDROID_ID ); if (str != null) { try { final BigInteger bi = new BigInteger( str, 16 ); deviceID = bi.longValue(); } catch (final NumberFormatException ex) { Log.i( LOG_TAG, ex.toString() ); } } if (deviceID == 0) { /* Let's use random number */ deviceID = new Random().nextLong(); } final byte [] bb = new byte[Long.SIZE / Byte.SIZE]; for (int idx=(bb.length - 1); idx>=0; idx--) { bb[idx] = (byte) (deviceID &0xFF); deviceID >>= Byte.SIZE; }
  • 58. 58 return Base64.encodeToString( bb, (Base64.NO_PADDING | Base64.NO_WRAP) ); } Allocating other resources: public int onStartCommand( Intent intent, int flags, int startId ) { Log.d( LOG_TAG, "onStartCommand: flags=" + flags + " startId=" + startId ); if (m_audioRecorder == null) { final String deviceID = getDeviceID( getContentResolver() ); final SessionManager sessionManager = new SessionManager(); m_audioRecorder = AudioRecorder.create( sessionManager, /*repeat*/false ); if (m_audioRecorder != null) { startForeground( 0, null ); final int audioStream = MainActivity.AUDIO_STREAM; final AudioManager audioManager = (AudioManager) getSystemService( AUDIO_SERVICE ); m_audioPrvVolume = audioManager.getStreamVolume( audioStream ); final String stationName = intent.getStringExtra( MainActivity.KEY_STATION_NAME ); int audioVolume = intent.getIntExtra( MainActivity.KEY_VOLUME, -1 ); if (audioVolume <0) audioVolume = audioManager.getStreamMaxVolume( audioStream ); Log.d( LOG_TAG, "setStreamVolume(" + audioStream + ", " + audioVolume + ")" ); audioManager.setStreamVolume( audioStream, audioVolume, 0 ); try { m_collider = Collider.create(); m_colliderThread = new ColliderThread( m_collider ); final TimerQueue timerQueue = new TimerQueue( m_collider.getThreadPool() ); m_channel = new Channel( deviceID, stationName, m_audioRecorder.getAudioFormat(), m_collider, m_nsdManager, SERVICE_TYPE,
• 59. 59 SERVICE_NAME, sessionManager, timerQueue, Config.PING_INTERVAL ); m_discoveryListener = new DiscoveryListener(); m_nsdManager.discoverServices( SERVICE_TYPE, NsdManager.PROTOCOL_DNS_SD, m_discoveryListener ); m_colliderThread.start(); } catch (final IOException ex) { Log.w( LOG_TAG, ex.toString() ); } } } return START_REDELIVER_INTENT; }
Switch Button
SwitchButton.java is the custom button class in the back end that guards the PTT control and makes gesture-based handling operational: pressing the PTT button turns on the recorder, and sliding downward moves the main node list down to display all connected nodes. It is also responsible for the gesture-pattern handling currently operational in the application.
Handling touch events:
public boolean onTouchEvent( MotionEvent ev ) { final int action = ev.getAction(); switch (action) { case MotionEvent.ACTION_DOWN: if (isEnabled()) { if (m_state == STATE_IDLE) {
  • 60. 60 setPressed( true ); setBackground( m_pressedBackground ); m_state = STATE_DOWN; m_touchX = ev.getX(); m_touchY = ev.getY(); if (m_stateListener != null) m_stateListener.onStateChanged( true ); return true; } else if (m_state == STATE_LOCKED) { m_state = STATE_DOWN; m_touchX = ev.getX(); m_touchY = ev.getY(); return true; } else { if (BuildConfig.DEBUG) throw new AssertionError(); } } break; case MotionEvent.ACTION_MOVE: { final float x = ev.getX(); final float y = ev.getY(); final float dx = (x - m_touchX); final float dy = (y - m_touchY); switch (m_state) { case STATE_IDLE: break; case STATE_DOWN: if ((Math.abs(dx) >m_touchSlop) ||
  • 61. 61 (Math.abs(dy) >m_touchSlop)) { if (Math.abs(dx) > Math.abs(dy)) { if (dx >0.0) { m_state = STATE_DRAGGING_RIGHT; Log.d( LOG_TAG, "STATE_DOWN -> STATE_DRAGGING_RIGHT" ); } else if (dx <0.0) { m_state = STATE_DRAGGING_LEFT; Log.d( LOG_TAG, "STATE_DOWN -> STATE_DRAGGING_LEFT" ); } getParent().requestDisallowInterceptTouchEvent( true ); m_touchX = x; m_touchY = y; } } return true; case STATE_DRAGGING_RIGHT: if ((dx > -0.5f) && (Math.abs(dx) > Math.abs(dy))) { m_touchX = x; m_touchY = y; } else if (dy >= 0) { m_touchX = x; m_touchY = y; m_state = STATE_DRAGGING_DOWN; Log.d( LOG_TAG, "STATE_DRAGGING_RIGHT -> STATE_DRAGGING_DOWN" ); } else
  • 62. 62 { getParent().requestDisallowInterceptTouchEvent( false ); m_state = STATE_IDLE; Log.d( LOG_TAG, "STATE_DRAGGING_RIGHT -> STATE_IDLE" ); } return true; case STATE_DRAGGING_LEFT: if ((dx <0.5f) && (Math.abs(dx) > Math.abs(dy))) { m_touchX = x; m_touchY = y; } else if (dy >= 0) { m_touchX = x; m_touchY = y; m_state = STATE_DRAGGING_DOWN; Log.d( LOG_TAG, "STATE_DRAGGING_LEFT -> STATE_DRAGGING_DOWN" ); } else { getParent().requestDisallowInterceptTouchEvent( false ); m_state = STATE_IDLE; Log.d( LOG_TAG, "STATE_DRAGGING_LEFT -> STATE_IDLE" ); } return true; case STATE_DRAGGING_DOWN: if ((dy > -1.0f) || (Math.abs(dx) <1.0f)) { m_touchX = x; m_touchY = y; } else {
  • 63. 63 getParent().requestDisallowInterceptTouchEvent( false ); m_state = STATE_IDLE; Log.d( LOG_TAG, "STATE_DRAGGING_DOWN -> STATE_IDLE" ); } return true; } } break; case MotionEvent.ACTION_UP: case MotionEvent.ACTION_CANCEL: if (m_state == STATE_DRAGGING_DOWN) { /* Keep button pressed */ m_state = STATE_LOCKED; getParent().requestDisallowInterceptTouchEvent( false ); } else { m_stateListener.onStateChanged( false ); setBackground( m_defaultBackground ); setPressed( false ); if (m_state != STATE_IDLE) { m_state = STATE_IDLE; getParent().requestDisallowInterceptTouchEvent( false ); } } break; } return super.onTouchEvent( ev ); } Initializing functionality, running canvas drawers. protected void onDraw( Canvas canvas ) { super.onDraw( canvas );
  • 64. 64 if ((m_state == STATE_DOWN) && (m_pl != null) && (m_pr != null)) { final int width = getWidth(); final int height = getHeight(); canvas.drawCircle( width/2, height/2, height/8, m_paint ); canvas.drawPath( m_pl, m_paint ); canvas.drawPath( m_pr, m_paint ); } } Drawing with canvas protected void onSizeChanged( int width, int height, int oldWidth, int oldHeight ) { final float centerX = (width / 2); final float centerY = (height / 2); final int hh = (height / 8); int w = (width / hh / 2); if (w <14) { /* Too small */ m_pl = null; m_pr = null; } else { if (w >20) w = 20; m_pl = new Path(); /*1*/ m_pl.moveTo( centerX - hh*2, centerY - hh ); /*2*/ m_pl.lineTo( centerX - hh*(w-4), centerY-hh ); /*3*/ m_pl.lineTo( centerX - hh*(w-4), centerY+hh*2 ); /*4*/ m_pl.lineTo( centerX - hh*(w-2), centerY+hh*2 ); /*5*/ m_pl.lineTo( centerX - hh*(w-5), centerY+hh*4 ); /*6*/ m_pl.lineTo( centerX - hh*(w-8), centerY+hh*2 ); /*7*/ m_pl.lineTo( centerX - hh*(w-6), centerY+hh*2 );
  • 65. 65 /*8*/ m_pl.lineTo( centerX - hh*(w-6), centerY+hh ); /*9*/ m_pl.lineTo( centerX - hh*2, centerY + hh ); m_pl.close(); m_pr = new Path(); /*1*/ m_pr.moveTo( centerX + hh*2, centerY - hh ); /*2*/ m_pr.lineTo( centerX + hh*(w-4), centerY-hh ); /*3*/ m_pr.lineTo( centerX + hh*(w-4), centerY+hh*2 ); /*4*/ m_pr.lineTo( centerX + hh*(w-2), centerY+hh*2 ); /*5*/ m_pr.lineTo( centerX + hh*(w-5), centerY+hh*4 ); /*6*/ m_pr.lineTo( centerX + hh*(w-8), centerY+hh*2 ); /*7*/ m_pr.lineTo( centerX + hh*(w-6), centerY+hh*2 ); /*8*/ m_pr.lineTo( centerX + hh*(w-6), centerY+hh ); /*9*/ m_pr.lineTo( centerX + hh*2, centerY + hh ); m_pr.close(); } } Chapter 5 Main Activity MainActivity.java is main screen container of all visible features and screens. Validation on start, UI/UX based operations, functionality attachment to objects and layouts are performed inside MainActivity.java class. Main activity display titled android activity, shows logo on upper left side and application name with it aligned centered, on upper right corner inside that title bar there is menu button, which opens menu layout/screens and display the following list: 1. Settings 2. About 3. Exit MainActivity.java prevents application termination on app switching and on screen off, because to make the connection un-interrupted, app is intended to run on background, whenever app is running a status indicator will show in status bar showing the app logo on left side and the title and description about the status bar entry saying “app is running.” this way user won’t miss his important communication between other nodes.
  • 66. 66 Register Buttons and allocate button switch: private class ButtonTalkListener implements SwitchButton.StateListener { public void onStateChanged( boolean state ) { if (state) { if (!m_recording) { m_recording = true; m_audioRecorder.startRecording(); } } else { if (m_recording) { m_recording = false; m_audioRecorder.stopRecording(); } } } } Retrieving and generating list for connected nodes: private static class ListViewAdapter extends ArrayAdapter<StationInfo> { private final LayoutInflater m_inflater; private final StringBuilder m_stringBuilder; private StationInfo [] m_stationInfo; private static class RowViewInfo { public final TextView textViewStationName; public final TextView textViewAddrAndPing;
  • 67. 67 public final StateView stateView; public RowViewInfo( TextView textViewStationName, TextView textViewAddrAndPing, StateView stateView ) { this.textViewStationName = textViewStationName; this.textViewAddrAndPing = textViewAddrAndPing; this.stateView = stateView; } } Start Recording on button press: public boolean onKeyDown( int keyCode, KeyEvent event ) { if (m_useVolumeButtonsToTalk) { if ((keyCode == KeyEvent.KEYCODE_VOLUME_UP) || (keyCode == KeyEvent.KEYCODE_VOLUME_DOWN)) { if (!m_recording) { m_audioRecorder.startRecording(); m_recording = true; m_buttonTalk.setPressed( true ); } return true; } } return super.onKeyDown( keyCode, event ); }
• 68. 68 1. Settings:
The Settings entry opens a dialog box popup containing the station name, a volume control option, and a "check Wi-Fi status on start" preference. These settings are stored in Android shared preferences.
Setting station info:
public void setStationInfo( StationInfo [] stationInfo ) { m_stationInfo = stationInfo; notifyDataSetChanged(); }
If the user has selected the volume buttons as PTT, recording stops when the volume button is released (the counterpart of onKeyDown shown above):
public boolean onKeyUp( int keyCode, KeyEvent event ) { if (m_useVolumeButtonsToTalk) { if ((keyCode == KeyEvent.KEYCODE_VOLUME_UP) || (keyCode == KeyEvent.KEYCODE_VOLUME_DOWN)) { if (m_recording) { m_audioRecorder.stopRecording(); m_recording = false; m_buttonTalk.setPressed( false ); } return true; } } return super.onKeyUp( keyCode, event ); }
• 69. 69 2. About:
The About entry shows a dialog box popup with a short description of the app, its dependencies and a disclaimer.
Registering the settings dialog's click listener and keeping it ready for operation:
private class SettingsDialogClickListener implements DialogInterface.OnClickListener { private final EditText m_editTextStationName; private final SeekBar m_seekBarVolume; private final CheckBox m_checkBoxCheckWiFiStateOnStart; private final CheckBox m_switchButtonUseVolumeButtonsToTalk; public SettingsDialogClickListener( EditText editTextStationName, SeekBar seekBarVolume, CheckBox checkBoxCheckWiFiStateOnStart, CheckBox switchButtonUseVolumeButtonsToTalk ) { m_editTextStationName = editTextStationName; m_seekBarVolume = seekBarVolume; m_checkBoxCheckWiFiStateOnStart = checkBoxCheckWiFiStateOnStart; m_switchButtonUseVolumeButtonsToTalk = switchButtonUseVolumeButtonsToTalk; } public void onClick( DialogInterface dialog, int which ) { if (which == DialogInterface.BUTTON_POSITIVE) { final String stationName = m_editTextStationName.getText().toString(); final int audioVolume = m_seekBarVolume.getProgress(); final SharedPreferences sharedPreferences = getPreferences(Context.MODE_PRIVATE); final SharedPreferences.Editor editor = sharedPreferences.edit(); if (m_stationName.compareTo(stationName) != 0) { final String title = getString(R.string.app_name) + ": " + stationName;
• 70. 70 setTitle(title); editor.putString( KEY_STATION_NAME, stationName ); m_binder.setStationName( stationName ); m_stationName = stationName; } if (audioVolume != m_audioVolume) { editor.putString( KEY_VOLUME, Integer.toString(audioVolume) ); final int audioStream = MainActivity.AUDIO_STREAM; final AudioManager audioManager = (AudioManager) getSystemService( AUDIO_SERVICE ); Log.d(LOG_TAG, "setStreamVolume(" + audioStream + ", " + audioVolume + ")"); audioManager.setStreamVolume(audioStream, audioVolume, 0); m_audioVolume = audioVolume; } final boolean useVolumeButtonsToTalk = m_switchButtonUseVolumeButtonsToTalk.isChecked(); editor.putBoolean(KEY_USE_VOLUME_BUTTONS_TO_TALK, useVolumeButtonsToTalk); editor.apply(); MainActivity.this.m_useVolumeButtonsToTalk = useVolumeButtonsToTalk; } }
3. Exit
The Exit entry is the only way to terminate the application from within its own UI; it is the last item in the menu list and shuts down all services, including the application instance itself.
Below the title bar sits the main centred container, which lists all connected nodes. Each entry in that list has two text fields on the left: the upper one shows the station name of the device, and the lower one shows channel and session information for that node. On the right there is a greyish circle indicating who is speaking, i.e. whose message is being played; when a node uses PTT, the app shows a green indicator for that node and plays its voice. At the bottom there is the PTT button, labelled TALK, which is responsible for all interaction between the sending and receiving units: pressing it triggers recording, and as soon as the recording is complete and the user releases the PTT button, a second event sends the voice through the Walkie Talkie Service after all the channel and switching processing.
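The menu handling behind the Exit entry is not shown on the slides. A minimal sketch of what selecting Exit could do inside MainActivity is given below; the menu-item id and the service class name are hypothetical placeholders, and the snippet assumes the usual android.content.Intent and android.view.MenuItem imports.

// Hedged sketch (inside MainActivity): stopping the background service and closing
// the activity when the user picks Exit. R.id.menu_exit and WalkieService are
// placeholder names, not necessarily the project's actual identifiers.
@Override
public boolean onOptionsItemSelected( MenuItem item ) {
    if (item.getItemId() == R.id.menu_exit) {
        stopService( new Intent(this, WalkieService.class) ); // shut down the walkie-talkie service
        finish();                                             // terminate the activity instance
        return true;
    }
    return super.onOptionsItemSelected( item );
}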
• 71. 71 Destroying all instances:
public void onDestroy() { Log.i( LOG_TAG, "onDestroy" ); super.onDestroy(); }
Channel Session
The ChannelSession class is responsible for renewing and updating the session that is currently interacting with another device.
Handling ping timeouts:
private void handlePingTimeout() { if (m_lastBytesReceived == m_totalBytesReceived) { if (++m_pingTimeouts == 10) { Log.i( LOG_TAG, getLogPrefix() + "connection timeout, closing connection." ); m_session.closeConnection(); } } else { m_lastBytesReceived = m_totalBytesReceived; m_pingTimeouts = 0; } Log.v( LOG_TAG, getLogPrefix() + "ping" ); m_pingSendTime = System.currentTimeMillis(); m_session.sendData( Protocol.Ping.create() ); }
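In the project, the timer that invokes handlePingTimeout() every Config.PING_INTERVAL seconds is wired through JS-Collider's TimerQueue (created in onStartCommand). The sketch below only illustrates the scheduling pattern using a standard ScheduledExecutorService, so the API shown here is not the one the application actually uses.

// Hedged sketch (inside ChannelSession): periodically running the ping check shown above.
private final java.util.concurrent.ScheduledExecutorService m_pingTimer =
        java.util.concurrent.Executors.newSingleThreadScheduledExecutor();

private void startPingTimer() {
    m_pingTimer.scheduleAtFixedRate(
            this::handlePingTimeout,                 // the method shown above
            Config.PING_INTERVAL,                    // initial delay, in seconds
            Config.PING_INTERVAL,                    // period, in seconds
            java.util.concurrent.TimeUnit.SECONDS ); // Config.PING_INTERVAL is defined later in this chapter
}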
• 72. 72 Receiving packets from nodes:
public void onDataReceived( RetainableByteBuffer data ) { final int bytesReceived = data.remaining(); RetainableByteBuffer msg = m_streamDefragger.getNext( data ); while (msg != null) { if (msg == StreamDefragger.INVALID_HEADER) { Log.i( LOG_TAG, "invalid message received, close connection." ); m_session.closeConnection(); break; } else { handleMessage( msg ); msg = m_streamDefragger.getNext(); } } s_totalBytesReceivedUpdater.addAndGet( this, bytesReceived ); }
Sending session data to the other node for validation:
public final int sendMessage( RetainableByteBuffer msg ) { return m_session.sendData( msg ); }
• 73. 73 Handling messages:
private void handleMessage( RetainableByteBuffer msg ) { final short messageID = Protocol.Message.getID( msg ); switch (messageID) { case Protocol.AudioFrame.ID: final RetainableByteBuffer audioFrame = Protocol.AudioFrame.getAudioData( msg ); m_audioPlayer.play( audioFrame ); audioFrame.release(); break; case Protocol.Ping.ID: m_session.sendData( Protocol.Pong.create() ); break; case Protocol.Pong.ID: final long ping = (System.currentTimeMillis() - m_pingSendTime) / 2; if (Math.abs(ping - m_ping) > 10) { m_ping = ping; m_channel.setPing( m_serviceName, m_session, ping ); } break; case Protocol.StationName.ID: try { final String stationName = Protocol.StationName.getStationName( msg ); if (stationName.length() > 0) { if (m_serviceName == null) m_channel.setStationName( m_session, stationName ); else m_channel.setStationName( m_serviceName, stationName ); } }
• 74. 74 catch (final CharacterCodingException ex) { Log.w( LOG_TAG, ex.toString() ); } break; default: Log.w( LOG_TAG, getLogPrefix() + "unexpected message " + messageID ); break; } }
Configuration
The Configuration class is the set of rules, parameter values and variables that holds almost all of the system's settings: session parameters, the ping interval, station information, the hard-coded service signature, and so on.
Configuring the ping interval:
class Config { public static int PING_INTERVAL = 5; }
Database
The application does not use a database; instead it uses the Android shared-preferences system to store settings such as "check Wi-Fi status on start", "use the volume buttons as PTT" and the station name.
Setting "use the volume buttons as PTT":
checkBoxUseVolumeButtonsToTalk.setChecked( arg );
Allocating preferences:
final SharedPreferences sharedPreferences = getPreferences( Context.MODE_PRIVATE );
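As a complement, a minimal sketch of reading these settings back (for example in MainActivity.onCreate()) follows. The default values and the "checkWiFiStateOnStart" key name are assumptions; the other keys appear on the earlier slides. Note that the settings dialog stores the volume as a string (editor.putString), so it is parsed back from a string here.

// Hedged sketch (inside MainActivity): restoring the stored settings at start-up.
final SharedPreferences sharedPreferences = getPreferences( Context.MODE_PRIVATE );
m_stationName = sharedPreferences.getString( KEY_STATION_NAME, "Station" );                       // placeholder default
m_audioVolume = Integer.parseInt( sharedPreferences.getString( KEY_VOLUME, "-1" ) );              // stored via putString above
m_useVolumeButtonsToTalk = sharedPreferences.getBoolean( KEY_USE_VOLUME_BUTTONS_TO_TALK, false );
final boolean checkWiFiStateOnStart = sharedPreferences.getBoolean( "checkWiFiStateOnStart", true ); // assumed key name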
• 75. 75 Protocol
A network medium may carry many protocols. In telecommunications, a communication protocol is a system of rules that allows two or more entities of a communications system to transmit information via any kind of variation of a physical quantity. These rules or standards define the syntax, semantics and synchronization of communication and the possible error-recovery methods. Protocols may be implemented by hardware, software, or a combination of both. Communicating systems use well-defined formats (protocols) for exchanging messages. Each message has an exact meaning intended to elicit a response from a range of possible responses pre-determined for that particular situation. The specified behavior is typically independent of how it is implemented. Communication protocols have to be agreed upon by the parties involved; to reach agreement, a protocol may be developed into a technical standard. A programming language describes the same for computations, so there is a close analogy between protocols and programming languages: protocols are to communications what programming languages are to computations. Multiple protocols often describe different aspects of a single communication. A group of protocols designed to work together is known as a protocol suite; when implemented in software, it is a protocol stack. Most recent protocols are assigned by the IETF for Internet communications, and by the IEEE or ISO for other types; the ITU-T handles telecommunication protocols and formats for the PSTN. As the PSTN and the Internet converge, the two sets of standards are also being driven towards convergence.
Basic Requirements of protocols
Getting the data across a network is only part of the problem for a protocol. The received data has to be evaluated in the context of the progress of the conversation, so a protocol has to specify rules describing the context; these rules are said to express the syntax of the communication. Other rules determine whether the data is meaningful for the context in which the exchange takes place; these rules are said to express the semantics of the communication. Messages are sent and received on communicating systems to establish communication, so protocols should specify rules governing the transmission. In general, much of the following should be addressed:
Data formats for data exchange. Digital message bit-strings are exchanged. The bit-strings are divided into fields, and each field carries information relevant to the protocol. Conceptually the bit-string is divided into two parts, called the header area and the data area.
• 76. 76 The actual message is stored in the data area, so the header area contains the fields with more relevance to the protocol. Bit-strings longer than the maximum transmission unit (MTU) are divided into pieces of appropriate size.
Address formats for data exchange. Addresses are used to identify both the sender and the intended receiver(s). The addresses are stored in the header area of the bit-strings, allowing the receivers to determine whether the bit-strings are intended for them and should be processed, or should be ignored. A connection between a sender and a receiver can be identified by an address pair (sender address, receiver address). Usually some address values have special meanings: an all-1s address could be taken to mean an addressing of all stations on the network, so sending to this address would result in a broadcast on the local network. The rules describing the meanings of the address values are collectively called an addressing scheme.
Address mapping. Sometimes protocols need to map addresses of one scheme onto addresses of another scheme, for instance to translate a logical IP address specified by the application to an Ethernet hardware address. This is referred to as address mapping.
Routing. When systems are not directly connected, intermediary systems along the route to the intended receiver(s) need to forward messages on behalf of the sender. On the Internet, the networks are connected using routers; this way of connecting networks is called internetworking.
Detection of transmission errors is necessary on networks which cannot guarantee error-free operation. In a common approach, CRCs of the data area are added to the end of packets, making it possible for the receiver to detect differences caused by errors. The receiver rejects packets with CRC differences and arranges for retransmission.
Acknowledgement of correct reception of packets is required for connection-oriented communication. Acknowledgements are sent from receivers back to their respective senders.
Loss of information - timeouts and retries. Packets may be lost on the network or suffer long delays. To cope with this, under some protocols a sender may expect an acknowledgement of correct reception from the receiver within a certain amount of time. On a timeout, the sender must assume the packet was not received and retransmit it. In case of a permanently broken link, the retransmission has no effect, so the number of retransmissions is limited; exceeding the retry limit is considered an error.
Direction of information flow needs to be addressed if transmissions can only occur in one direction at a time, as on half-duplex links. This is known as media access control, and arrangements have to be made for the case when two parties want to gain control at the same time.
Sequence control. Long bit-strings are divided into pieces that are sent on the network individually. The pieces may get lost, be delayed, or take different routes to their destination on some types of networks, so they may arrive out of sequence.
• 77. 77 Retransmissions can result in duplicate pieces. By marking the pieces with sequence information at the sender, the receiver can determine what was lost or duplicated, ask for the necessary retransmissions, and reassemble the original message.
Flow control is needed when the sender transmits faster than the receiver or intermediate network equipment can process the transmissions. Flow control can be implemented by messaging from the receiver to the sender.
Chapter 6
Protocols and Programming languages
Protocols are to communications what algorithms or programming languages are to computations. This analogy has important consequences for both the design and the development of protocols. One has to consider the fact that algorithms, programs and protocols are just different ways of describing the expected behavior of interacting objects. A familiar example of a protocolling language is HTML, the language used to describe web pages. In programming languages, the association of an identifier with a value is termed a definition. Program text is structured using block constructs, and definitions can be local to a block. The localized association of an identifier with a value established by a definition is termed a binding, and the region of program text in which a binding is effective is known as its scope. The computational state is kept using two components: the environment, used as a record of identifier bindings, and the store, which is used as a record of the effects of assignments. In communications, message values are transferred using transmission media. By analogy, the equivalent of a store would be a collection of transmission media instead of a collection of memory locations. A valid assignment in a protocol (as an analog of a programming language) could be Ethernet := 'message', meaning a message is to be broadcast on the local Ethernet. On a transmission medium there can be many receivers; for instance, a MAC address identifies a network card on the transmission medium (the 'ether'). In our imaginary protocol, the assignment Ethernet[mac-address] := message value could therefore make sense. By extending the assignment statement of an existing programming language with the semantics described, a protocolling language can easily be imagined. Operating systems provide reliable communication and synchronization facilities for communicating objects confined to the same system by means of system libraries; a programmer using a general-purpose programming language (like C or Ada) can use the routines in those libraries to implement a protocol instead of using a dedicated protocolling language.
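As the chapter notes, a protocol can be implemented with ordinary library routines in a general-purpose language. The sketch below illustrates two of the requirements listed earlier — a data format (a small header) and detection of transmission errors (a CRC32 trailer) — in plain Java; it is an example only, not the wire format used by the walkie-talkie application.

// Hedged sketch: framing a message with a header (length + type) and a CRC32 trailer.
import java.nio.ByteBuffer;
import java.util.zip.CRC32;

final class FramedPacket {
    // header: 4-byte payload length, 2-byte message type; trailer: 4-byte CRC32
    static byte[] encode(short type, byte[] payload) {
        final CRC32 crc = new CRC32();
        crc.update(payload, 0, payload.length);
        final ByteBuffer buf = ByteBuffer.allocate(4 + 2 + payload.length + 4);
        buf.putInt(payload.length);
        buf.putShort(type);
        buf.put(payload);
        buf.putInt((int) crc.getValue());
        return buf.array();
    }

    // Returns the payload, or null if the CRC check fails (the receiver would drop the packet).
    static byte[] decode(byte[] packet) {
        final ByteBuffer buf = ByteBuffer.wrap(packet);
        final int length = buf.getInt();
        buf.getShort();                       // message type, unused in this sketch
        final byte[] payload = new byte[length];
        buf.get(payload);
        final int storedCrc = buf.getInt();
        final CRC32 crc = new CRC32();
        crc.update(payload, 0, payload.length);
        return ((int) crc.getValue() == storedCrc) ? payload : null;
    }
}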
• 78. 78 Protocol Layering
Protocol layering now forms the basis of protocol design. It allows the decomposition of single, complex protocols into simpler, cooperating protocols, but it is also a functional decomposition, because each protocol belongs to a functional class called a protocol layer. The protocol layers each solve a distinct class of communication problems. The Internet protocol suite consists of the following layers: application, transport, internet and network interface functions. Together, the layers make up a layering scheme or model.
(Figure 18.0 Protocol Layering without modem)
In computations we have algorithms and data, and in communications we have protocols and messages, so the analog of a data-flow diagram would be some kind of message-flow diagram. To visualize protocol layering and protocol suites, a diagram of the message flows in and between two systems, A and B, is shown in Figure 18.0. Both systems make use of the same protocol suite. The vertical flows (and protocols) are in-system, and the horizontal message flows (and protocols) are between systems. The message flows are governed by rules and data formats specified by protocols. The blue lines in the figure mark the boundaries of the (horizontal) protocol layers.
  • 79. 79 The vertical protocols are not layered because they don't obey the protocol layering principle which states that a layered protocol is designed so that layer n at the destination receives exactly the same object sent by layer n at the source. The horizontal protocols are layered protocols and all belong to the protocol suite. Layered protocols allow the protocol designer to concentrate on one layer at a time, without worrying about how other layers perform. The vertical protocols need not be the same protocols on both systems, but they have to satisfy some minimal assumptions to ensure the protocol layering principle holds for the layered protocols. This can be achieved using a technique called Encapsulation. Usually, a message or a stream of data is divided into small pieces, called messages or streams, packets, IP datagrams or network frames depending on the layer in which the pieces are to be transmitted. The pieces contain a header area and a data area. The data in the header area identifies the source and the destination on the network of the packet, the protocol, and other data meaningful to the protocol like CRC's of the data to be sent, data length, and a timestamp. The rule enforced by the vertical protocols is that the pieces for transmission are to be encapsulated in the data area of all lower protocols on the sending side and the reverse is to happen on the receiving side. The result is that at the lowest level the piece looks like this: 'Header1, Header2, Header3, data' and in the layer directly above it: 'Header2, Header3, data' and in the top layer: 'Header3, data', both on the sending and receiving side. This rule therefore ensures that the protocol layering principle holds and effectively virtualizes all but the lowest transmission lines, so for this reason some message flows are colored red in figure 3. To ensure both sides use the same protocol, the pieces also carry data identifying the protocol in their header. The design of the protocol layering and the network (or Internet) architecture are interrelated, so one cannot be designed without the other. Some of the more important features in this respect of the Internet architecture and the network services it provides are described next. The Internet offers universal interconnection, which means that any pair of computers connected to the Internet is allowed to communicate. Each computer is identified by an address on the Internet. All the interconnected physical networks appear to the user as a single large network. This interconnection scheme is called an internetwork or internet.
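Returning to the encapsulation rule described above ('Header1, Header2, Header3, data'), the short sketch below shows how each layer prepends its header on the sending side and strips it on the receiving side, so that layer n at the destination receives exactly the object sent by layer n at the source; the single-byte headers are illustrative only.

// Hedged sketch: the encapsulation rule, with one tag byte standing in for each header.
import java.util.Arrays;

final class EncapsulationSketch {
    // Sending side: wrap the upper layer's piece in this layer's header.
    static byte[] encapsulate(byte layerTag, byte[] upperLayerData) {
        final byte[] framed = new byte[upperLayerData.length + 1];
        framed[0] = layerTag;
        System.arraycopy(upperLayerData, 0, framed, 1, upperLayerData.length);
        return framed;
    }

    // Receiving side: strip this layer's header and hand the rest to the layer above.
    static byte[] decapsulate(byte[] framed) {
        return Arrays.copyOfRange(framed, 1, framed.length);
    }

    public static void main(String[] args) {
        byte[] data = "hello".getBytes();        // application payload
        byte[] l3 = encapsulate((byte) 3, data); // Header3, data
        byte[] l2 = encapsulate((byte) 2, l3);   // Header2, Header3, data
        byte[] l1 = encapsulate((byte) 1, l2);   // Header1, Header2, Header3, data
        // The reverse happens on the receiving side, layer by layer:
        byte[] up = decapsulate(decapsulate(decapsulate(l1)));
        System.out.println(new String(up));      // prints "hello" -- the layering principle holds
    }
}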