PROJECT REPORT ON E-Doctor Appointment
By
ADARSH MISHRA
Enrollment No.: 186246990
Under Guidance
Of
Mr. Shahab Ahmad Siddiqui
Submitted to the School of Computer and Information Sciences, IGNOU
in partial fulfilment of the requirements
for the award of the degree
Bachelor of Computer Applications (BCA)
2020-21
Indira Gandhi National Open University
Maidan Garhi, New Delhi – 110068
ACKNOWLEDGEMENT
First of all, I would like to express my heartfelt thanks to the Almighty God for this
opportunity and for the physical strength and pleasant mind to complete this project.
I thank my project guide, Mr. Shahab Ahmad Siddiqui, for his guidance, advice and
support in completing the project on time.
I extend my thanks and gratitude to my parents, sister, uncle and all those who helped
me, directly or indirectly, in the successful completion of this project work.
TABLE OF CONTENTS

INTRODUCTION (SYNOPSIS)
INTRODUCTION
OBJECTIVE
1.1 PURPOSE AND SCOPE
2. PROJECT CATEGORY
Software Requirements
Client side
PROJECT OBJECTIVE
MODULE DESCRIPTION
SPECIFICATION
Software Requirements
Client side
PROJECT CATEGORY
SYSTEM ANALYSIS
IDENTIFICATION OF NEED
PROBLEM ANALYSIS
FEASIBILITY CHECKPOINTS
SYSTEM DESIGN
INPUT DESIGN
OUTPUT DESIGN
DATABASE DESIGN
Doctor Registration
2. Patient Table
4. Admin
5. Hospital Table
6. Appointment
DATA FLOW DIAGRAM
SYSTEM TESTING
INTRODUCTION
If anybody is ill and wants to visit a doctor for a checkup, he or she needs to go to the hospital
and wait until the doctor is available. The patient also waits in a queue while getting an
appointment. If the doctor cancels the appointment for some emergency reason, the patient
has no way of knowing about the cancellation unless he or she visits the hospital. As mobile
communication technology is developing rapidly, mobile applications can be used to
overcome such problems and inconvenience for patients.
There is much work in the literature in this regard. Patients for whom there is no appointment
who ask to be seen on the same day can be a source of irritation. During March 1978, 214 of
these patients were compared with 749 who made appointments. No evidence was found that
such patients were abusing the system, and almost half were found to be children. It was
concluded that these consultations were inevitable, but the attitude of the doctors to them may
have been affecting their management. The system used produces some loss of personal care,
the effects of which need to be studied further. In two separate studies that included data from
more than 650 patients, researchers from Brock University discovered that around 50 per cent
of doctors' appointments start late. Most often, it was because the physician was running late,
rather than the patients. Writing about their research recently in The Conversation, Brock
Professor of Operations Management Kenneth Klassen and Associate Professor of Operations
Management Reena Yoogalingam said the key to fixing the problem of late appointments could
be better scheduling. "Scheduling would be easy if no one ever ran late," the pair wrote about
their research, which was originally completed in 2013 and 2014. "You could simply spread
out the appointments evenly across the day. If treatments always take 10 minutes, then schedule
one patient every 10 minutes." The problem is, health care is unpredictable. Appointments
sometimes take longer than expected, physicians get interrupted by emergencies, or a doctor or
patient arrives late. Using simulation modeling from real-world data, Klassen and
Yoogalingam's research discovered creative scheduling could be the answer. The first method
was to put appointments closer together at the beginning and the end of the day or work session,
which keeps physicians busy, but spreads appointments farther apart in between. In this
method, if a physician is working a session from 8 a.m. until a noon lunch break, appointments
at the start of the day and just before noon might be scheduled eight or nine minutes apart while
mid-morning appointments would be 11 or 12 minutes apart. The second approach is to book
appointments closer together, but in clusters of two or three, with a bit of time in between each
cluster. As the day unfolds, the time between appointments shrinks, but the time between
clusters increases. "The clusters keep physicians busy. The spaces between clusters reduce
patient waiting," the researchers wrote in their Conversation piece which they co-authored with
Brock colleague Michael Armstrong, Associate Professor of Operations Research. "By keeping
physicians busy, effective appointment scheduling helps them see more patients per day. That
increased capacity reduces the number of days patients must wait for their appointments."
In view of the above problems, E-Doctor Appointment is a smart web application that provides
registration and login for doctors, patients and institutions. A doctor can register by giving the
necessary details such as timings, fee and category. After successful registration, the doctor can
log in with a username and password. The doctor can view booking requests from patients, and if
he accepts a request, the status shown to the patient changes to booking confirmed. He can
also view the feedback given by patients. Patients must register and log in to book a doctor
based on the category, the type of problem they are facing and the location. The search results
show the list of doctors matching the patient's criteria; the patient can select one and send a
request. The request is forwarded to the admin, who forwards it to the doctor; if the doctor is
available, he sends a confirmation back to the admin, who updates the booking request and
marks it confirmed to the patient. The patient can view the status in the status tab and also
receives a mail saying the booking is confirmed.
OBJECTIVE
The following are the main objectives of the E-Doctor Appointment System:
1. Provide a common platform for doctors, patients and hospitals.
2. Save time, as an e-appointment gives confirmation of the appointment with the proper
date, time and location.
3. Patients get confirmation on the requested date of whether their appointment is
confirmed or not.
1.1 PURPOSE AND SCOPE
1.1.1 PURPOSE
The purpose of this project is to create a platform where patients and doctors can access and
interact with each other efficiently, and to provide ease and comfort to patients. It also aims to
resolve the problems that patients face while taking appointments and keeping medical files.
Patients can choose a medical practitioner based on the professional profile and other patients'
reviews, while doctors can access and update a patient's medical record after every checkup.
1.1.2 SCOPE
This system is implemented for all individuals who want to get treated by the city
practitioners. Users can participate only if they have created an account through the registration
form and have provided their medical history. Once registered, they can easily apply for an
appointment, and the doctor can approve the appointment with the selected hospital.
2. PROJECT CATEGORY
The project can be categorized as an internet-based database application, i.e. a web site supported by
MSSQL as a backend; therefore the project falls under the following categories:
1. RDBMS:
Since the project is data-based and uses MSSQL as the backend, it falls under this
category. A Relational Database Management System (RDBMS) is a Database
Management System based on the relational model introduced by E.F. Codd.
The relational model represents the database as a collection of relations. Each relation
resembles a table of values or, to some extent, a flat file of records. When a relation is
thought of as a table of values, each row in the table represents a collection of
related data values. In formal relational model terminology, a row is called a tuple and a
column header is called an attribute. The types of values that can
appear in each column are described by a domain of possible values.
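The terminology above can be made concrete with a short sketch in C#, the project's front-end language; the Doctor relation and its attribute values here are hypothetical examples, not the project's actual schema.

```csharp
using System;
using System.Collections.Generic;

// One tuple of a hypothetical Doctor relation: each property is an
// attribute, and the property's type stands in for the attribute's domain.
public record DoctorTuple(int DoctorId, string Name, string Category, decimal Fee);

public static class RelationDemo
{
    // A relation is a collection of tuples sharing the same attributes,
    // i.e. one "table of values".
    public static List<DoctorTuple> DoctorRelation() => new List<DoctorTuple>
    {
        new DoctorTuple(1, "A. Kumar", "Cardiology", 500m),
        new DoctorTuple(2, "S. Verma", "Dermatology", 300m),
    };
}
```

Each `DoctorTuple` row plays the role of a tuple, and the list as a whole plays the role of the relation.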
2. OOPS:
Since the project uses C# as the front-end, it falls under this category.
The object-oriented programming (OOP) concept is the foundation of languages such as C#
and Java. Object-oriented programming is a very different approach to software
development compared to what most of us have experienced before. OOP is a method
of implementation in which programs are organized as collections of objects, each of which
represents an instance of some class, and whose classes are members of a hierarchy of
classes related via inheritance relationships.
There are three important postulates of OOP:
1. Objects, not algorithms, as fundamental logical building blocks of programs, i.e.
OOP supports objects that are data abstractions with operations and hidden local
state.
2. Objects have an associated type, i.e. each object is an instance of some class.
3. Classes are related to one another via inheritance relationships.
Some of the key features of OOP are:
1. Emphasis is on data rather than procedures.
2. Data is hidden and cannot be accessed by external functions.
3. Follows bottom-up approach in program design.
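The second feature, data hiding, can be sketched as follows; the PatientRecord class is a hypothetical illustration, not code from this project.

```csharp
using System;

public class PatientRecord
{
    // Hidden state: external code cannot read or write this field directly.
    private string _history = "";

    // The only way to change the data is through a method the class exposes,
    // which lets the class enforce its own rules.
    public void AppendNote(string note)
    {
        if (string.IsNullOrWhiteSpace(note))
            throw new ArgumentException("note must not be empty");
        _history += note + "; ";
    }

    // Controlled read access.
    public string Summary() => _history.TrimEnd();
}
```

Because `_history` is private, the validation in `AppendNote` cannot be bypassed by external functions.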
3. TOOLS & PLATFORMS:
3.1 SOFTWARE & HARDWARE SPECIFICATION
The following are the minimum software and hardware requirements for running the project.
Server side
Hardware Requirements:
Processor: Pentium 4
RAM: 1 GB
Hard disk: 256 GB
Monitor: SVGA color
Printer: Dot-matrix or laser printer
Other: CD, DVD, pen drive, etc.
Software Requirements:
OS: Windows 2000 Server or above
Database: SQL Server 2008
Framework: .NET 4.0
Server: ASP.NET server
Browser: Internet Explorer 6 or above, Opera, etc.
Client side
Hardware Requirements:
Processor: Pentium 4 or higher
RAM: 1 GB
Hard disk: 256 GB
Monitor: SVGA color
Printer: Dot-matrix or laser printer
Internet: Compatible connection
Software Requirements:
OS: Any operating system
Browser: Internet Explorer 6.0 or above, Opera, etc.
3.2 TOOLS USED
3.2.1 Server-side Components: Active Server Pages
ASP.NET web pages, known officially as Web Forms, were the main building blocks for application
development in ASP.NET before the introduction of MVC. There are two basic methodologies for Web
Forms: a web application format and a web site format. Web applications need to be compiled before
deployment, while the web site structure allows the user to copy the files directly to the server without
prior compilation. Web Forms are contained in files with a ".aspx" extension; these files typically
contain static (X)HTML markup or component markup. The component markup can include
server-side Web Controls and User Controls that have been defined in the framework or the web
page. For example, a textbox component can be defined on a page as
<asp:textbox id='myid' runat='server'>, which is rendered into an HTML input box. Additionally,
dynamic code, which runs on the server, can be placed in a page within a
<% -- dynamic code -- %> block, which is similar to
other Web development technologies such as PHP, JSP, and ASP. With ASP.NET Framework 2.0,
Microsoft introduced a new code-behind model that lets static text remain on the .aspx page, while
dynamic code remains in an .aspx.vb or .aspx.cs or .aspx.fs file (depending on the programming
language used).
Microsoft recommends dealing with dynamic program code by using the code-behind model, which
places this code in a separate file or in a specially designated script tag. Code-behind files typically
have names like "MyPage.aspx.cs" or "MyPage.aspx.vb" while the page file is MyPage.aspx (same
filename as the page file (ASPX), but with the final extension denoting the page language). This practice
is automatic in Visual Studio and other IDEs, though the user can change the code-behind page. Also,
in the web application format, the pagename.aspx.cs is a partial class that is linked to the
pagename.designer.cs file. The designer file is a file that is autogenerated from the ASPX page and
allows the programmer to reference components in the ASPX page from the CS page without having
to declare them manually, as was necessary in ASP.NET versions before version 2. When using this
style of programming, the developer writes code to respond to different events, such as the page
being loaded, or a control being clicked, rather than a procedural walkthrough of the document.
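The event-driven shape of a code-behind class can be mimicked with plain C#; Button, AppointmentPage and the handler below are simplified stand-ins for illustration, not the real System.Web types.

```csharp
using System;

// Stand-in for a server control that raises events.
public class Button
{
    public event EventHandler Click;
    public void RaiseClick() => Click?.Invoke(this, EventArgs.Empty);
}

// The "code-behind" class: instead of a top-to-bottom walkthrough of the
// document, logic lives in handlers that respond to events.
public class AppointmentPage
{
    public Button BookButton { get; } = new Button();
    public string Status { get; private set; } = "";

    public AppointmentPage()
    {
        // Wiring a control's event to a handler, as the designer file would.
        BookButton.Click += OnBookClicked;
    }

    private void OnBookClicked(object sender, EventArgs e)
        => Status = "Booking requested";
}
```

The page class never walks the document; it simply reacts when the framework raises the control's event.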
ASP.NET's code-behind model marks a departure from Classic ASP in that it encourages
developers to build applications with separation of presentation and content in mind. In theory,
this would allow a Web designer, for example, to focus on the design markup with less potential
for disturbing the programming code that drives it. This is similar to the separation of the
controller from the view in model–view–controller (MVC) frameworks.
ASP.NET applications are hosted by a Web server and are accessed using the stateless HTTP
protocol. As such, if an application uses stateful interaction, it has to implement state
management on its own. ASP.NET provides various functions for state management.
Conceptually, Microsoft treats "state" as GUI state. Problems may arise if an application must
track "data state"; for example, a finite-state machine that may be in a transient state between
requests (lazy evaluation) or takes a long time to initialize. State management in ASP.NET
pages with authentication can make Web scraping difficult or impossible.
Server-side session state is held by a collection of user-defined session variables that are
persistent during a user session. These variables, accessed using the Session collection, are
unique to each session instance. The variables can be set to be automatically destroyed after a
defined time of inactivity even if the session does not end. Client-side user session is
maintained by either a cookie or by encoding the session ID in the URL itself.
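The idea of per-session variables with an inactivity timeout can be sketched with a small store; this is a hypothetical illustration of the concept, not ASP.NET's actual Session implementation.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical sketch of a server-side session store: variables live per
// session ID and are treated as gone after a period of inactivity.
public class SessionStore
{
    private class Entry
    {
        public Dictionary<string, object> Vars = new Dictionary<string, object>();
        public DateTime LastSeen;
    }

    private readonly TimeSpan _timeout;
    private readonly Dictionary<string, Entry> _sessions = new Dictionary<string, Entry>();

    public SessionStore(TimeSpan timeout) => _timeout = timeout;

    public void Set(string sessionId, string key, object value)
    {
        if (!_sessions.TryGetValue(sessionId, out var e) || Expired(e))
        {
            e = new Entry();                     // fresh session after expiry
            _sessions[sessionId] = e;
        }
        e.Vars[key] = value;
        e.LastSeen = DateTime.UtcNow;
    }

    public object Get(string sessionId, string key)
    {
        if (_sessions.TryGetValue(sessionId, out var e) && !Expired(e)
            && e.Vars.TryGetValue(key, out var v))
            return v;
        return null;                             // unknown or timed-out session
    }

    private bool Expired(Entry e) => DateTime.UtcNow - e.LastSeen > _timeout;
}
```

In real ASP.NET the session ID that keys this lookup comes from a cookie or the URL, as described above.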
ASP.NET supports three modes of persistence for server-side session variables:
1. In-process mode
The session variables are maintained within the ASP.NET process. This is the fastest way;
however, in this mode the variables are destroyed when the ASP.NET process is recycled or
shut down.
2. State server mode
ASP.NET runs a separate Windows service that maintains the state variables. Because state
management happens outside the ASP.NET process, and because the ASP.NET engine
accesses data using .NET Remoting, ASP State is slower than In-Process. This mode allows an
ASP.NET application to be load-balanced and scaled across multiple servers. Because the state
management service runs independently of ASP.NET, the session variables can persist across
ASP.NET process shutdowns. However, since session state server runs as one instance, it is
still one point of failure for session state. The session-state service cannot be load-balanced,
and there are restrictions on types that can be stored in a session variable.
3. SQL Server mode
State variables are stored in a database, allowing session variables to be persisted across
ASP.NET process shutdowns. The main advantage of this mode is that it allows the application
to balance load on a server cluster, sharing sessions between servers. This is the slowest
method of session state management in ASP.NET. ASP.NET session state enables you to store
and retrieve values for a user as the user navigates ASP.NET pages in a Web application. HTTP
is a stateless protocol. This means that a Web server treats each HTTP request for a page as
an independent request. The server retains no knowledge of variable values that were used
during previous requests. ASP.NET session state identifies requests from the same browser
during a limited time window as a session, and provides a way to persist variable values for
the duration of that session. By default, ASP.NET session state is enabled for all ASP.NET
applications.
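In classic ASP.NET the persistence mode is chosen declaratively in web.config; a sketch of the relevant section follows (the connection string is a placeholder):

```xml
<configuration>
  <system.web>
    <!-- mode: InProc | StateServer | SQLServer -->
    <sessionState mode="SQLServer"
                  sqlConnectionString="Data Source=.;Integrated Security=True"
                  timeout="20" />
  </system.web>
</configuration>
```

For State server mode, a stateConnectionString attribute (e.g. "tcpip=localhost:42424") points at the state service instead of a SQL connection string.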
3.2.2 Middleware: C#
During the development of the .NET Framework, the class libraries were originally written
using a managed code compiler system called "Simple Managed C" (SMC). In January 1999,
Anders Hejlsberg formed a team to build a new language at the time called Cool, which stood
for "C-like Object Oriented Language". Microsoft had considered keeping the name "Cool" as
the final name of the language, but chose not to do so for trademark reasons. By the time the
.NET project was publicly announced at the July 2000 Professional Developers Conference,
the language had been renamed C#, and the class libraries and ASP.NET runtime had been
ported to C#.
Hejlsberg is C#'s principal designer and lead architect at Microsoft, and was
previously involved with the design of Turbo Pascal, Embarcadero Delphi (formerly CodeGear
Delphi, Inprise Delphi and Borland Delphi), and Visual J++. In interviews and technical papers
he has stated that flaws in most major programming languages (e.g. C++, Java, Delphi, and
Smalltalk) drove the fundamentals of the Common Language Runtime (CLR), which, in turn,
drove the design of the C# language itself.
James Gosling, who created the Java programming language in 1994, and Bill Joy, a cofounder
of Sun Microsystems, the originator of Java, called C# an "imitation" of Java; Gosling further
said that "[C# is] sort of Java with reliability, productivity and security deleted." Klaus Kreft
and Angelika Langer (authors of a C++ streams book) stated in a blog post that "Java and C#
are almost identical programming languages. Boring repetition that lacks innovation," "Hardly
anybody will claim that Java or C# are revolutionary programming languages that changed the
way we write programs," and "C# borrowed a lot from Java - and vice versa. Now that C#
supports boxing and unboxing, we'll have a very similar feature in Java." In July 2000,
Hejlsberg said that C# is "not a Java clone" and is "much closer to C++" in its design.
Since the release of C# 2.0 in November 2005, the C# and Java languages have
evolved on increasingly divergent trajectories, becoming two very different languages. One of
the first major departures came with the addition of generics to both languages, with vastly
different implementations. C# makes use of reification to provide "first-class" generic objects
that can be used like any other class, with code generation performed at class-load time.
Furthermore, C# has added several major features to accommodate functional-style
programming, culminating in the LINQ extensions released with C# 3.0 and its supporting
framework of lambda expressions, extension methods, and types. These features enable C#
programmers to use functional programming techniques, such as closures, when it is
advantageous to their application. The LINQ extensions and the functional imports help
developers reduce the amount of boilerplate code that is included in common tasks like
querying a database, parsing an xml file, or searching through a data structure, shifting the
emphasis onto the actual program logic to help improve readability and maintainability.
C# used to have a mascot called Andy (named after Anders Hejlsberg). It was retired
on January 29, 2004. C# was originally submitted to the ISO subcommittee JTC 1/SC 22 for
review under ISO/IEC 23270:2003; it was withdrawn and then approved under ISO/IEC
23270:2006. C# supports strongly typed implicit variable declarations with the keyword var,
and implicitly typed arrays with the keyword new[] followed by a collection initializer.
C# supports a strict Boolean data type, bool. Statements that take conditions, such as while and
if, require an expression of a type that implements the true operator, such as the Boolean type.
While C++ also has a Boolean type, it can be freely converted to and from integers, and
expressions such as if(a) require only that a is convertible to bool, allowing a to be an int, or a
pointer. C# disallows this "integer meaning true or false" approach, on the grounds that forcing
programmers to use expressions that return exactly bool can prevent certain types of
programming mistakes such as if (a = b) (use of assignment = instead of equality ==).
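A few lines condense the points about var, new[] and the strict bool type; the commented-out line shows the kind of condition C# rejects. This is an illustrative sketch, not project code.

```csharp
public static class TypeDemo
{
    // 'new[]' with a collection initializer yields an implicitly typed int[].
    public static int[] SampleFees() => new[] { 200, 350, 500 };

    public static int CountPositive(int[] xs)
    {
        var count = 0;                 // 'var' is inferred as int at compile time
        foreach (var x in xs)
        {
            // if (x) count++;         // rejected by the compiler: an int is not a bool
            if (x > 0) count++;        // a condition must be exactly bool
        }
        return count;
    }
}
```

Because `if (x)` does not compile, the `if (a = b)` class of mistakes is caught before the program ever runs.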
C# is more type safe than C++. The only implicit conversions by default are those that are
considered safe, such as widening of integers. This is enforced at compile-time, during JIT,
and, in some cases, at runtime. No implicit conversions occur between Booleans and integers,
nor between enumeration members and integers (except for literal 0, which can be implicitly
converted to any enumerated type). Any user-defined conversion must be explicitly marked as
explicit or implicit, unlike C++ copy constructors and conversion operators, which are both
implicit by default.
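The explicit/implicit marking can be seen with a user-defined conversion pair; the Rupees type below is hypothetical, chosen only to fit this project's domain.

```csharp
public readonly struct Rupees
{
    public decimal Amount { get; }
    public Rupees(decimal amount) => Amount = amount;

    // Safe, lossless direction: allowed to be implicit.
    public static implicit operator Rupees(decimal d) => new Rupees(d);

    // The reverse direction must be requested with a cast.
    public static explicit operator decimal(Rupees r) => r.Amount;
}

public static class ConversionDemo
{
    public static decimal DoubleFee(Rupees fee) => (decimal)fee * 2;  // explicit cast required

    public static Rupees StandardFee()
    {
        Rupees fee = 500m;            // implicit conversion from decimal
        return fee;
    }
}
```

Unlike a C++ conversion operator, neither direction happens silently unless the author opted in with the implicit keyword.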
C# has explicit support for covariance and contravariance in generic types, unlike C++ which
has some degree of support for contravariance simply through the semantics of return types on
virtual methods. Enumeration members are placed in their own scope. The C# language does
not allow global variables or functions. All methods and members must be declared within
classes. Static members of public classes can substitute for global variables and functions.
Local variables cannot shadow variables of the enclosing block, unlike in C and C++.
3.2.3 BACK-END
MSSQL Server 2008 Database Management System
Microsoft SQL Server also allows user-defined composite types (UDTs) to be defined and used.
It also makes server statistics available as virtual tables and views (called Dynamic
Management Views or DMVs). In addition to tables, a database can also contain other objects
including views, stored procedures, indexes and constraints, along with a transaction log. A
SQL Server database can contain a maximum of 2^31 objects, and can span multiple OS-level
files with a maximum file size of 2^60 bytes (1 exabyte). The data in the database are stored in
primary data files with an extension .mdf. Secondary data files, identified with a .ndf extension,
are used to allow the data of a single database to be spread across more than one file, and
optionally across more than one file system. Log files are identified with the .ldf extension.
Storage space allocated to a database is divided into sequentially numbered pages, each 8 KB
in size. A page is the basic unit of I/O for SQL Server operations. A page is marked with a 96-
byte header which stores metadata about the page including the page number, page type, free
space on the page and the ID of the object that owns it. Page type defines the data contained in
the page: data stored in the database, index, allocation map which holds information about how
pages are allocated to tables and indexes, change map which holds information about the
changes made to other pages since last backup or logging, or contain large data types such as
image or text. While a page is the basic unit of an I/O operation, space is actually managed in
terms of an extent which consists of 8 pages. A database object can either span all 8 pages in
an extent ("uniform extent") or share an extent with up to 7 more objects ("mixed extent"). A
row in a database table cannot span more than one page, so is limited to 8 KB in size. However,
if the data exceeds 8 KB and the row
contains varchar or varbinary data, the data in those columns are moved to a new page (or
possibly a sequence of pages, called an allocation unit) and replaced with a pointer to the data.
For physical storage of a table, its rows are divided into a series of partitions (numbered 1 to
n). The partition size is user defined; by default all rows are in a single partition. A table is split
into multiple partitions in order to spread a database over a computer cluster. Rows in each
partition are stored in either B-tree or heap structure. If the table has an associated, clustered
index to allow fast retrieval of rows, the rows are stored in-order according to their index values,
with a B-tree providing the index. The data sits in the leaf nodes of the B-tree, with the other
nodes storing the index values needed to reach the leaf data from the respective nodes. If the index is
non-clustered, the rows are not sorted according to the index keys. An indexed view has the
same storage structure as an indexed table. A table without a clustered index is stored in an
unordered heap structure. However, the table may have non-clustered indices to allow fast
retrieval of rows. In some situations the heap structure has performance advantages over the
clustered structure. Both heaps and B-trees can span multiple allocation units.
SQL Server buffers pages in RAM to minimize disk I/O. Any 8 KB page can be buffered in
memory, and the set of all pages currently buffered is called the buffer cache. The amount of
memory available to SQL Server decides how many pages will be cached in memory. The
buffer cache is managed by the Buffer Manager. Either reading from or writing to any page
copies it to the buffer cache. Subsequent reads or writes are redirected to the in-memory copy,
rather than the on-disc version. The page is updated on the disc by the Buffer Manager only if
the in-memory cache has not been referenced for some time. While writing pages back to disc,
asynchronous I/O is used whereby the I/O operation is done in a background thread so that
other operations do not have to wait for the I/O operation to complete. Each page is written
along with its checksum when it is written. When reading the page back, its checksum is
computed again and matched with the stored version to ensure the page has not been damaged
or tampered with in the meantime.
SQL Server allows multiple clients to use the same database concurrently. As such, it needs to
control concurrent access to shared data, to ensure data integrity—when multiple clients update
the same data, or clients attempt to read data that is in the process of being changed by another
client. SQL Server provides two modes of concurrency control: pessimistic concurrency and
optimistic concurrency. When pessimistic concurrency control is being used, SQL Server
controls concurrent access by using locks. Locks can be either shared or exclusive. Exclusive
lock grants the user exclusive access to the data—no other user can access the data as long as
the lock is held. Shared locks are used when some data is being read—multiple users can read
from data locked with a shared lock, but not acquire an exclusive lock. The latter would have
to wait for all shared locks to be released.
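The same shared/exclusive distinction is exposed in .NET by ReaderWriterLockSlim, which makes for a compact illustration; the appointment counter here is a hypothetical shared resource.

```csharp
using System.Threading;
using System.Threading.Tasks;

public static class LockDemo
{
    private static readonly ReaderWriterLockSlim Lock = new ReaderWriterLockSlim();
    private static int _appointments;

    // Shared lock: any number of readers may hold it at the same time.
    public static int Read()
    {
        Lock.EnterReadLock();
        try { return _appointments; }
        finally { Lock.ExitReadLock(); }
    }

    // Exclusive lock: a writer must wait for all readers and other writers.
    public static void Book()
    {
        Lock.EnterWriteLock();
        try { _appointments++; }
        finally { Lock.ExitWriteLock(); }
    }

    // With exclusive writes, concurrent bookings never lose an update.
    public static int RunConcurrently(int writers)
    {
        Parallel.For(0, writers, _ => Book());
        return Read();
    }
}
```

Without the exclusive lock around the increment, two concurrent bookings could read the same value and overwrite each other's update.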
Locks can be applied on different levels of granularity—on entire tables, pages, or even on a
per-row basis on tables. For indexes, it can either be on the entire index or on index leaves. The
level of granularity to be used is defined on a per-database basis by the database administrator.
While a fine-grained locking system allows more users to use the table or index simultaneously,
it requires more resources, so it does not automatically yield higher performance. SQL Server
also includes two more lightweight mutual exclusion solutions— latches and spinlocks—which
are less robust than locks but are less resource intensive. SQL Server uses them for DMVs and
other resources that are usually not busy. SQL Server also monitors all worker threads that
acquire locks to ensure that they do not end up in deadlocks—in case they do, SQL Server takes
remedial measures, which in many cases are to kill one of the threads entangled in a deadlock
and roll back the transaction it started. To implement locking, SQL Server contains the Lock
Manager. The Lock Manager maintains an in-memory table that manages the database objects
and locks, if any, on them along with other metadata about the lock. Access to any shared object
is mediated by the lock manager, which either grants access to the resource or blocks it.
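As a hedged illustration (the Appointment table and its values are taken from this project's schema; the sessions and data are hypothetical), the locks tracked by the Lock Manager can be observed through the sys.dm_tran_locks dynamic management view:

```sql
-- In one session, take an exclusive row lock and hold it open:
BEGIN TRANSACTION;
UPDATE Appointment SET Status = 'Confirmed' WHERE Patient_Id = 'P1001';

-- From another session, inspect the locks held in the current database:
SELECT resource_type,   -- OBJECT, PAGE, KEY, RID: the lock granularity
       request_mode,    -- S = shared, X = exclusive, IX = intent exclusive
       request_status   -- GRANT or WAIT
FROM   sys.dm_tran_locks
WHERE  resource_database_id = DB_ID();

-- Back in the first session, release the locks:
ROLLBACK;
```

Running the SELECT while the first transaction is open would typically show intent locks on the table and page plus an exclusive (X) lock on the updated key.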
SQL Server also provides the optimistic concurrency control mechanism, which is similar to
the multiversion concurrency control used in other databases. The mechanism allows a new
version of a row to be created whenever the row is updated, as opposed to overwriting the row,
i.e., a row is additionally identified by the ID of the transaction that created the version of the
row. Both the old as well as the new versions of the row are stored and maintained, though the
old versions are moved out of the database into a system database identified as Tempdb. When
a row is in the process of being updated, any other requests are not blocked (unlike locking)
but are executed on the older version of the row. If the other request is an update statement, it
will result in two different versions of the rows—both of them will be stored by the database,
identified by their respective transaction IDs.
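The row-versioning behaviour described above corresponds to SQL Server's snapshot isolation, which has to be enabled per database before use. A minimal sketch (the database name MyHospitalDb is a placeholder):

```sql
-- Enable optimistic (row-versioning) concurrency for the database;
-- old row versions are kept in the tempdb system database from here on.
ALTER DATABASE MyHospitalDb SET ALLOW_SNAPSHOT_ISOLATION ON;

-- A session then opts in per transaction:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
-- Reads see the row versions current when the transaction started,
-- without blocking concurrent writers:
SELECT Status FROM Appointment WHERE Patient_Id = 'P1001';
COMMIT;
```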
4. ANALYSIS
System analysis is the first and most basic step in the development of software, and forms its
backbone. It is a systematic investigation of a real or planned system to determine the functions
of the system and how they relate to each other and to any other system. It is performed to
develop a proper design for the software and to fulfil the needs of the firm or customer. System
analysis includes requirement analysis, which plays an important role in identifying the
expectations a firm has of the proposed system. Requirement analysis is a process of discovery,
refinement, modeling and specification. After requirement analysis, an analysis model is built
which describes the data required by the system and specifies its functions and behavior. The
main purpose of conducting system analysis is to study the various processes and to find out
their requirements. These may include ways of capturing or processing data, producing
information, controlling a business activity or supporting management. Determining the
requirements entails studying the existing details of the system to find out what those
requirements are. System analysis is conducted with the following objectives in mind:
1. Identify the needs of the customer
2. Evaluate the system concept for feasibility & perform economic and technical analysis
3. Allocate functions to system elements like hardware, software, people, database and others
4. Establish cost and schedule constraints
5. Create a system definition that forms the foundation for all subsequent engineering work
Analysis was performed to accomplish the objectives listed above. The necessary details
required for the analysis of the project were collected from the record registers, staff members,
students and the principal of the institution.
During the analysis phase of this project the following set of principles was considered:
1. The information domain of the problem must be represented and understood
2. The functions that the software is to perform must be defined
3. The behavior of the software must be represented
4. The models that depict information, function and behavior must be partitioned in a manner
that uncovers details in a layered fashion
5. The analysis process should move from essential information toward implementation details.
4.1 Gantt Chart
A Gantt chart is a popular type of chart that illustrates a project schedule. Gantt charts illustrate the
start and finish dates of the terminal elements and summary elements of a project. Terminal elements
and summary elements comprise the work breakdown structure of the project.
5. Structure
Name of Modules
1. Admin Module
2. Patient Module
3. Hospital Module
4. Doctor Module
Details of the modules
Admin:
The admin logs in with a username and password. On the admin home screen he can see the
basic admin functionalities. The admin can view the registered doctors and patients, view the
requests raised by patients and doctors, and confirm those requests.
Doctor:
A doctor registers by providing the necessary details such as experience, timings and fees.
After registering, the doctor logs in and, on the home screen, can view the basic
functionalities. The doctor can view patient requests forwarded by the admin, accept them,
and view the feedback given by patients.
Patient:
A patient registers and logs in. After logging in, the patient can search for a doctor by
location and by the reason or problem. Based on the doctor's availability, the admin confirms
the booking request and sends a mail stating that the booking is confirmed. The patient can
also view the booking status and give feedback based on the doctor's performance.
Hospital:
A hospital registers and logs in. After logging in, it can search for doctors by appointment
date and prepare itself with the required facilities.
5.1 DATA STRUCTURE
1. Doctor Registration
Field name        Data type [size]   Constraints
Id                Varchar(20)        Primary Key
Username          Varchar(20)        Not Null
Password          Varchar(15)        Not Null
Firstname         Varchar(15)        Not Null
Lastname          Varchar(15)        Not Null
Address           Varchar(50)        Not Null
2. Patient Table
Field name        Data type [size]   Constraints
Patient_Name      Varchar(50)        Primary Key
User_Id           Varchar(20)        Not Null
Password          Varchar(50)        Not Null
Problem           Varchar(20)        Not Null
3. Feedback
Field name        Data type [size]   Constraints
Patient_Id        Varchar(50)        Allow Nulls
Doctor_Id         Varchar(50)        Allow Nulls
Ratings           Varchar(50)        Primary Key
4. Admin
Field name        Data type [size]   Constraints
Admin_Name        Varchar(50)        Allow Nulls
Password          Varchar(50)        Allow Nulls
5. Hospital Table
Field name        Data type [size]   Constraints
Hospital_Id       Varchar(50)        Primary Key
Name              Varchar(50)        Allow Nulls
Address           Varchar(50)        Allow Nulls
6. Appointment
Field name        Data type [size]   Constraints
Patient_Id        Varchar(20)        Primary Key
Doctor_Id         Varchar(20)        Not Null
Appointment_Date  Varchar(15)        Not Null
Status            Varchar(15)        Not Null
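Two of the tables above can be sketched as T-SQL DDL. This is only an illustrative sketch: the foreign-key relationship and the DATE type for the appointment date are assumptions that go beyond the field lists given above.

```sql
CREATE TABLE Doctor (
    Id        VARCHAR(20) PRIMARY KEY,
    Username  VARCHAR(20) NOT NULL,
    Password  VARCHAR(15) NOT NULL,
    Firstname VARCHAR(15) NOT NULL,
    Lastname  VARCHAR(15) NOT NULL,
    Address   VARCHAR(50) NOT NULL
);

CREATE TABLE Appointment (
    Patient_Id       VARCHAR(20) PRIMARY KEY,
    Doctor_Id        VARCHAR(20) NOT NULL
        REFERENCES Doctor (Id),          -- assumed relationship
    Appointment_Date DATE        NOT NULL,  -- DATE assumed over Varchar(15)
    Status           VARCHAR(15) NOT NULL
);
```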
5.3 Testing Process
The following testing techniques were used in the development of the project:
1. Black-Box Testing
The technique of testing without having any knowledge of the interior workings of the
application is called black-box testing. The tester is oblivious to the system architecture and
does not have access to the source code. Typically, while performing a black-box test, a
tester will interact with the system's user interface by providing inputs and examining
outputs without knowing how and where the inputs are worked upon.
2. White-Box Testing
White-box testing is the detailed investigation of the internal logic and structure of the code.
It is also called glass testing or open-box testing. In order to perform white-box testing on an
application, a tester needs to know the internal workings of the code. The tester needs to look
inside the source code and find out which unit or chunk of the code is behaving
inappropriately.
3. Grey-Box Testing
Grey-box testing is a technique for testing an application with limited knowledge of its
internal workings. In software testing, the phrase "the more you know, the better" carries a
lot of weight, and mastering the domain of a system always gives the tester an edge over
someone with limited domain knowledge. Unlike black-box testing, where the tester only
tests the application's user interface, in grey-box testing the tester has access to design
documents and the database. With this knowledge, a tester can prepare better test data and
test scenarios while making a test plan.
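As a hedged sketch of the black-box approach, the following exercises a hypothetical appointment validator purely through its inputs and outputs; the AppointmentValidator class and its rule (no bookings in the past) are assumptions for illustration, not part of the actual project code.

```csharp
using System;

// Hypothetical unit under test: the tester only knows its contract,
// not its implementation.
static class AppointmentValidator
{
    public static bool IsValidDate(DateTime requested, DateTime today)
        => requested.Date >= today.Date;   // rule assumed for illustration
}

static class BlackBoxTests
{
    static void Main()
    {
        var today = new DateTime(2021, 3, 1);

        // Feed inputs, observe outputs; no knowledge of internals is used.
        Console.WriteLine(AppointmentValidator.IsValidDate(
            new DateTime(2021, 3, 5), today));   // True  (future date)
        Console.WriteLine(AppointmentValidator.IsValidDate(
            new DateTime(2021, 2, 25), today));  // False (past date)
    }
}
```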
5. FUTURE SCOPE
The following points will be covered in future development of the system:
1. Online billing will be made available.
2. Online sale of medicines will be started.
3. A chat facility with doctors and specialists will be added.
PROJECT OBJECTIVES
The following are the main objectives of the E-Doctor Appointment System:
1. Provide a common platform to doctors, patients and hospitals.
2. Save time, as an e-appointment gives confirmation of the appointment with the proper
date, time and location.
3. Inform the patient whether or not the appointment is confirmed for the requested date.
4. Increase the efficiency of managing doctors, patients and hospitals.
5. Monitor the information and appointments of patients.
6. Improve the editing, adding and updating of records, resulting in proper management of
doctor data.
7. Integrate all records of appointment numbers.
MODULE DESCRIPTION
This project contains modules such as:
1. Admin Module
2. Patient Module
3. Hospital Module
4. Doctor Module
Module Description
Admin:
The admin logs in with a username and password. On the admin home screen he can see the
basic admin functionalities. The admin can view the registered doctors and patients, view the
requests raised by patients and doctors, and confirm those requests.
Doctor:
A doctor registers by providing the necessary details such as experience, timings and fees.
After registering, the doctor logs in and, on the home screen, can view the basic
functionalities. The doctor can view patient requests forwarded by the admin, accept them,
and view the feedback given by patients.
Patient:
A patient registers and logs in. After logging in, the patient can search for a doctor by
location and by the reason or problem. Based on the doctor's availability, the admin confirms
the booking request and sends a mail stating that the booking is confirmed. The patient can
also view the booking status and give feedback based on the doctor's performance.
Hospital:
A hospital registers and logs in. After logging in, it can search for doctors by appointment
date and prepare itself with the required facilities.
SYSTEM SPECIFICATION
The following are the minimum software and hardware requirements for running the project.
Server side
Hardware Requirements
Processor : Pentium 4
RAM       : 1 GB
Hard disk : 256 GB
Monitor   : SVGA color
Printer   : Dot-matrix or laser printer
Other     : CD, DVD, pen drive, etc.
Software Requirements
OS        : Windows 2000 Server or above
Database  : SQL Server 2008
Framework : .NET 4.0
Server    : ASP.NET server
Browser   : Internet Explorer 6 or above, Opera, etc.
Client side
Hardware Requirements
Processor : Pentium 4 or higher
RAM       : 1 GB
Hard disk : 256 GB
Monitor   : SVGA color
Printer   : Dot-matrix or laser printer
Internet  : Compatible connection
Software Requirements
OS        : Any operating system
Browser   : Internet Explorer 6.0 or above, Opera, etc.
PROJECT CATEGORY
1. RDBMS:
Since the project is database driven and uses MS SQL Server as the backend, it falls under this
category. A Relational Database Management System (RDBMS) is a Database Management
System based on the relational model introduced by E.F. Codd. The relational model
represents the database as a collection of relations. Each relation resembles a table of values
or, to some extent, a flat file of records. When a relation is thought of as a table of values,
each row in the table represents a collection of related data values. In formal relational model
terminology, a row is called a tuple and a column header is called an attribute. The type of
value that can appear in each column is described by a domain of possible values.
Features of RDBMS:
Creation & Manipulation: databases can be created, and records can be updated and deleted
through manipulation operations.
Speed: an RDBMS provides fast retrieval of data.
Integrity Rules: integrity checks are applied and managed by the DBA (Database
Administrator).
Concurrency Control: it provides parallel access to data.
Redundancy Control: an RDBMS also controls duplication of data.
Flexibility: RDBMSs are flexible because users do not have to supply keys to input
information.
Productivity: RDBMSs are more productive because SQL is easy to use and learn.
Normalization: normalizing records is one of the most important features of an RDBMS.
2. OOPS:
Since the project uses C# as the front-end, it falls under this category. The object-oriented
programming (OOP) concept is the foundation of modern languages such as C# and Java.
Object-oriented programming is a very different approach to software development compared
with what most of us have experienced before. OOP is a method of implementation in which
programs are organized as collections of objects, each of which represents an instance of
some class, and whose classes are members of a hierarchy of classes related via inheritance
relationships.
There are three important postulates of OOP:
1. Objects, not algorithms, are the fundamental logical building blocks of programs, i.e.
OOP supports objects that are data abstractions with operations and hidden local state.
2. Objects have an associated type, i.e. each object is an instance of some class.
3. Classes are related to one another via inheritance relationships.
Some of the key features of OOP are:
1. Emphasis is on data rather than procedures.
2. Data is hidden and cannot be accessed by external functions.
3. It follows a bottom-up approach in program design.
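These ideas can be sketched minimally in C#, using hypothetical classes from this project's domain (encapsulated state, a class hierarchy, and an overridden operation):

```csharp
using System;

// Base class: data is hidden behind a read-only property.
abstract class User
{
    public string Name { get; }                // hidden local state
    protected User(string name) => Name = name;
    public abstract string Describe();         // operation each subclass provides
}

// Inheritance: a Doctor is-a User.
class Doctor : User
{
    private readonly string speciality;        // encapsulated, invisible to callers
    public Doctor(string name, string speciality) : base(name)
        => this.speciality = speciality;
    public override string Describe() => $"Dr. {Name} ({speciality})";
}

static class Demo
{
    static void Main()
    {
        User u = new Doctor("Mehta", "Cardiology");  // an object is an instance of a class
        Console.WriteLine(u.Describe());             // prints: Dr. Mehta (Cardiology)
    }
}
```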
SOFTWARE SPECIFICATION
Server side Components: Active Server Pages
ASP.NET web pages, known officially as Web Forms, were the main building blocks for application
development in ASP.NET before the introduction of MVC. There are two basic methodologies for Web
Forms: a web application format and a web site format. Web applications need to be compiled before
deployment, while the web site structure allows the user to copy the files directly to the server without
prior compilation. Web forms are contained in files with a ".aspx" extension; these files typically
contain static (X)HTML markup or component markup. The component markup can include server-
side Web Controls and User Controls that have been defined in the framework or the web
page. For example, a textbox component can be defined on a page as
<asp:TextBox ID="myid" runat="server" />, which is rendered into an HTML input box.
Additionally, dynamic code, which runs on the server, can be placed in a page within a
<% ... %> block, which is similar to other Web development technologies such as PHP, JSP,
and ASP. With ASP.NET Framework 2.0, Microsoft introduced a new code-behind model that
lets static text remain on the .aspx page, while dynamic code goes into an .aspx.vb, .aspx.cs or
.aspx.fs file (depending on the programming language used).
Microsoft recommends dealing with dynamic program code by using the code-behind model, which
places this code in a separate file or in a specially designated script tag. Code-behind files typically
have names like "MyPage.aspx.cs" or "MyPage.aspx.vb" while the page file is MyPage.aspx (same
filename as the page file (ASPX), but with the final extension denoting the page language). This practice
is automatic in Visual Studio and other IDEs, though the user can change the code-behind page. Also,
in the web application format, the pagename.aspx.cs is a partial class that is linked to the
pagename.designer.cs file. The designer file is a file that is autogenerated from the ASPX page and
allows the programmer to reference components in the ASPX page from the CS page without having
to declare them manually, as was necessary in ASP.NET versions before version 2. When using this
style of programming, the developer writes code to respond to different events, such as the page
being loaded, or a control being clicked, rather than a procedural walkthrough of the document.
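A hedged sketch of this event-driven code-behind model (the page and control names are hypothetical): the markup in MyPage.aspx might declare <asp:Button ID="SaveButton" runat="server" OnClick="SaveButton_Click" /> and a StatusLabel, while MyPage.aspx.cs responds to events rather than walking through the document procedurally:

```csharp
using System;
using System.Web.UI;

// MyPage.aspx.cs - a partial class; the matching control declarations
// live in the autogenerated MyPage.aspx.designer.cs file.
public partial class MyPage : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        if (!IsPostBack)
            StatusLabel.Text = "Ready";          // runs only on the first request
    }

    protected void SaveButton_Click(object sender, EventArgs e)
    {
        StatusLabel.Text = "Appointment saved";  // runs when the button is clicked
    }
}
```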
ASP.NET's code-behind model marks a departure from Classic ASP in that it encourages
developers to build applications with separation of presentation and content in mind. In theory,
this would allow a Web designer, for example, to focus on the design markup with less potential
for disturbing the programming code that drives it. This is similar to the separation of the
controller from the view in model–view–controller (MVC) frameworks.
ASP.NET applications are hosted by a Web server and are accessed using the stateless HTTP
protocol. As such, if an application uses stateful interaction, it has to implement state
management on its own. ASP.NET provides various functions for state management.
Conceptually, Microsoft treats "state" as GUI state. Problems may arise if an application must
track "data state"; for example, a finite-state machine that may be in a transient state between
requests (lazy evaluation) or takes a long time to initialize. State management in ASP.NET
pages with authentication can make Web scraping difficult or impossible.
Server-side session state is held by a collection of user-defined session variables that are
persistent during a user session. These variables, accessed using the Session collection, are
unique to each session instance. The variables can be set to be automatically destroyed after a
defined time of inactivity even if the session does not end. Client-side user session is
maintained by either a cookie or by encoding the session ID in the URL itself.
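For example, server-side session variables are read and written through the Session collection; the key name and values here are hypothetical:

```csharp
// Inside any ASP.NET page or handler with session state enabled:

// Store a value for the duration of this user's session.
Session["PatientId"] = "P1001";

// A later request from the same session can read it back.
string patientId = (string)Session["PatientId"];

// Destroy the session explicitly, e.g. on logout.
Session.Abandon();
```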
ASP.NET supports three modes of persistence for server-side session variables:
1. In-process mode
The session variables are maintained within the ASP.NET process. This is the fastest way;
however, in this mode the variables are destroyed when the ASP.NET process is recycled or
shut down.
2. State server mode
ASP.NET runs a separate Windows service that maintains the state variables. Because state
management happens outside the ASP.NET process, and because the ASP.NET engine
accesses data using .NET Remoting, ASP State is slower than In-Process. This mode allows an
ASP.NET application to be load-balanced and scaled across multiple servers. Because the state
management service runs independently of ASP.NET, the session variables can persist across
ASP.NET process shutdowns. However, since session state server runs as one instance, it is
still one point of failure for session state. The session-state service cannot be load-balanced,
and there are restrictions on types that can be stored in a session variable.
3. SQL Server mode
State variables are stored in a database, allowing session variables to be persisted across
ASP.NET process shutdowns. The main advantage of this mode is that it allows the application
to balance load on a server cluster, sharing sessions between servers. This is the slowest
method of session state management in ASP.NET. ASP.NET session state enables you to store
and retrieve values for a user as the user navigates ASP.NET pages in a Web application. HTTP
is a stateless protocol. This means that a Web server treats each HTTP request for a page as
an independent request. The server retains no knowledge of variable values that were used
during previous requests. ASP.NET session state identifies requests from the same browser
during a limited time window as a session, and provides a way to persist variable values for
the duration of that session. By default, ASP.NET session state is enabled for all ASP.NET
applications.
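The three persistence modes are selected through the sessionState element in web.config; the connection string below is a placeholder:

```xml
<configuration>
  <system.web>
    <!-- mode can be "InProc", "StateServer", or "SQLServer" -->
    <sessionState
        mode="SQLServer"
        sqlConnectionString="Data Source=DBSERVER;Integrated Security=SSPI"
        timeout="20" />
  </system.web>
</configuration>
```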
Middleware: C#
During the development of the .NET Framework, the class libraries were originally written
using a managed code compiler system called "Simple Managed C" (SMC). In January 1999,
Anders Hejlsberg formed a team to build a new language, at the time called Cool, which stood
for "C-like Object Oriented Language". Microsoft had considered keeping "Cool" as the final
name of the language, but chose not to do so for trademark reasons. By the time the
.NET project was publicly announced at the July 2000 Professional Developers Conference,
the language had been renamed C#, and the class libraries and ASP.NET runtime had been
ported to C#.
Hejlsberg is C#'s principal designer and lead architect at Microsoft, and was
previously involved with the design of Turbo Pascal, Embarcadero Delphi (formerly CodeGear
Delphi, Inprise Delphi and Borland Delphi), and Visual J++. In interviews and technical papers
he has stated that flaws in most major programming languages (e.g. C++, Java, Delphi, and
Smalltalk) drove the fundamentals of the Common Language Runtime (CLR), which, in turn,
drove the design of the C# language itself.
James Gosling, who created the Java programming language in 1994, and Bill Joy, a co-founder
of Sun Microsystems, the originator of Java, called C# an "imitation" of Java; Gosling further
said that "[C# is] sort of Java with reliability, productivity and security deleted." Klaus Kreft
and Angelika Langer (authors of a C++ streams book) stated in a blog post that "Java and C#
are almost identical programming languages. Boring repetition that lacks innovation," "Hardly
anybody will claim that Java or C# are revolutionary programming languages that changed the
way we write programs," and "C# borrowed a lot from Java - and vice versa. Now that C#
supports boxing and unboxing, we'll have a very similar feature in Java." In July 2000,
Hejlsberg said that C# is "not a Java clone" and is "much closer to C++" in its design.
Since the release of C# 2.0 in November 2005, the C# and Java languages have
evolved on increasingly divergent trajectories, becoming two very different languages. One of
the first major departures came with the addition of generics to both languages, with vastly
different implementations. C# makes use of reification to provide "first-class" generic objects
that can be used like any other class, with code generation performed at class-load time.
Furthermore, C# has added several major features to accommodate functional-style
programming, culminating in the LINQ extensions released with C# 3.0 and its supporting
framework of lambda expressions, extension methods, and types. These features enable C#
programmers to use functional programming techniques, such as closures, when it is
advantageous to their application. The LINQ extensions and the functional imports help
developers reduce the amount of boilerplate code that is included in common tasks like
querying a database, parsing an XML file, or searching through a data structure, shifting the
emphasis onto the actual program logic to help improve readability and maintainability.
C# used to have a mascot called Andy (named after Anders Hejlsberg); it was retired
on January 29, 2004. C# was originally submitted to the ISO subcommittee JTC 1/SC 22 for
review as ISO/IEC 23270:2003, which was withdrawn and then superseded by ISO/IEC
23270:2006. C# supports strongly typed implicit variable declarations with the keyword var,
and implicitly typed arrays with the keyword new[] followed by a collection initializer.
C# supports a strict Boolean data type, bool. Statements that take conditions, such as while and
if, require an expression of a type that implements the true operator, such as the Boolean type.
While C++ also has a Boolean type, it can be freely converted to and from integers, and
expressions such as if(a) require only that a is convertible to bool, allowing a to be an int, or a
pointer. C# disallows this "integer meaning true or false" approach, on the grounds that forcing
programmers to use expressions that return exactly bool can prevent certain types of
programming mistakes such as if (a = b) (use of assignment = instead of equality ==).
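A short sketch of the two points above, implicit typing with var/new[] and the strict bool type (the variable names are illustrative):

```csharp
using System;

static class TypeDemo
{
    static void Main()
    {
        var count = 3;                       // implicitly typed as int
        var fees = new[] { 500, 750, 900 };  // implicitly typed int[] array

        int a = 1, b = 2;
        // if (a = b) { }                    // compile error: int is not bool
        if (a == b)                          // conditions must evaluate to bool
            Console.WriteLine("equal");
        else
            Console.WriteLine(fees.Length + count);  // prints 6
    }
}
```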
C# is more type safe than C++. The only implicit conversions by default are those that are
considered safe, such as widening of integers. This is enforced at compile-time, during JIT,
and, in some cases, at runtime. No implicit conversions occur between Booleans and integers,
nor between enumeration members and integers (except for literal 0, which can be implicitly
converted to any enumerated type). Any user-defined conversion must be explicitly marked as
explicit or implicit, unlike C++ copy constructors and conversion operators, which are both
implicit by default.
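The last point can be sketched with a user-defined conversion; the Celsius type is hypothetical:

```csharp
using System;

readonly struct Celsius
{
    public double Degrees { get; }
    public Celsius(double degrees) => Degrees = degrees;

    // Implicit conversion: considered safe, no cast required at the call site.
    public static implicit operator double(Celsius c) => c.Degrees;

    // Explicit conversion: the caller must write a cast, making intent visible.
    public static explicit operator Celsius(double d) => new Celsius(d);
}

static class ConversionDemo
{
    static void Main()
    {
        Celsius bodyTemp = (Celsius)36.6;  // explicit: the cast is mandatory
        double d = bodyTemp;               // implicit: no cast needed
        Console.WriteLine(d);              // prints the degrees value
    }
}
```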
C# has explicit support for covariance and contravariance in generic types, unlike C++ which
has some degree of support for contravariance simply through the semantics of return types on
virtual methods. Enumeration members are placed in their own scope. The C# language does
not allow global variables or functions; all methods and members must be declared within
classes. Static members of public classes can substitute for global variables and functions.
Local variables cannot shadow variables of the enclosing block, unlike in C and C++.
BACK-END
MSSQL Server 2008 Database Management System
Microsoft SQL Server also allows user-defined composite types (UDTs) to be defined and used.
It also makes server statistics available as virtual tables and views (called Dynamic
Management Views or DMVs). In addition to tables, a database can also contain other objects
including views, stored procedures, indexes and constraints, along with a transaction log. A
SQL Server database can contain a maximum of 2^31 objects, and can span multiple OS-level
files with a maximum file size of 2^60 bytes (1 exabyte). The data in the database are stored in
primary data files with an extension .mdf. Secondary data files, identified with a .ndf extension,
are used to allow the data of a single database to be spread across more than one file, and
optionally across more than one file system. Log files are identified with the .ldf extension.
Storage space allocated to a database is divided into sequentially numbered pages, each 8 KB
in size. A page is the basic unit of I/O for SQL Server operations. A page is marked with a
96-byte header which stores metadata about the page, including the page number, page type,
free space on the page and the ID of the object that owns it. The page type defines the data
contained in the page: data stored in the database, an index, an allocation map which holds
information about how pages are allocated to tables and indexes, a change map which holds
information about the changes made to other pages since the last backup or logging, or large
data types such as image or text. While the page is the basic unit of an I/O operation, space is actually managed in
terms of an extent which consists of 8 pages. A database object can either span all 8 pages in
an extent ("uniform extent") or share an extent with up to 7 more objects ("mixed extent"). A
row in a database table cannot span more than one page, so is limited to 8 KB in size. However,
if the data exceeds 8 KB and the row
contains varchar or varbinary data, the data in those columns are moved to a new page (or
possibly a sequence of pages, called an allocation unit) and replaced with a pointer to the data.
For physical storage of a table, its rows are divided into a series of partitions (numbered 1 to
n). The partition size is user defined; by default all rows are in a single partition. A table is split
into multiple partitions in order to spread a database over a computer cluster. Rows in each
partition are stored in either a B-tree or a heap structure. If the table has an associated clustered
index to allow fast retrieval of rows, the rows are stored in order according to their index values,
with a B-tree providing the index. The actual data is in the leaf nodes, with the other nodes
storing the index values by which the leaf data is reached. If the index is non-clustered, the
rows are not sorted according to the index keys. An indexed view has the same storage structure
as an indexed table. A table without a clustered index is stored in an unordered heap structure.
However, the table may have non-clustered indices to allow fast retrieval of rows. In some
situations the heap structure has performance advantages over the clustered structure. Both
heaps and B-trees can span multiple allocation units.
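The clustered and non-clustered cases can be sketched in T-SQL against this project's Appointment table (the index names are hypothetical):

```sql
-- Clustered index: the table's rows are physically ordered by this key,
-- so only one clustered index can exist per table.
CREATE CLUSTERED INDEX IX_Appointment_Date
    ON Appointment (Appointment_Date);

-- Non-clustered index: a separate B-tree whose leaves point back at the
-- rows; the rows themselves stay in clustered-index (or heap) order.
CREATE NONCLUSTERED INDEX IX_Appointment_Doctor
    ON Appointment (Doctor_Id);
```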
SQL Server buffers pages in RAM to minimize disk I/O. Any 8 KB page can be buffered
inmemory, and the set of all pages currently buffered is called the buffer cache. The amount of
memory available to SQL Server decides how many pages will be cached in memory. The
buffer cache is managed by the Buffer Manager. Either reading from or writing to any page
copies it to the buffer cache. Subsequent reads or writes are redirected to the in-memory copy,
rather than the on-disc version. The page is updated on the disc by the Buffer Manager only if
the in-memory cache has not been referenced for some time. While writing pages back to disc,
asynchronous I/O is used whereby the I/O operation is done in a background thread so that
other operations do not have to wait for the I/O operation to complete. Each page is written
along with its checksum when it is written. When reading the page back, its checksum is
computed again and matched with the stored version to ensure the page has not been damaged
or tampered with in the meantime.
SQL Server allows multiple clients to use the same database concurrently. As such, it needs to
control concurrent access to shared data, to ensure data integrity—when multiple clients update
the same data, or clients attempt to read data that is in the process of being changed by another
client. SQL Server provides two modes of concurrency control: pessimistic concurrency and
optimistic concurrency. When pessimistic concurrency control is being used, SQL Server
controls concurrent access by using locks. Locks can be either shared or exclusive. Exclusive
lock grants the user exclusive access to the data—no other user can access the data as long as
the lock is held. Shared locks are used when some data is being read—multiple users can read
from data locked with a shared lock, but not acquire an exclusive lock. The latter would have
to wait for all shared locks to be released.
46. Locks can be applied on different levels of granularity—on entire tables, pages, or even on a
per-row basis on tables. For indexes, it can either be on the entire index or on index leaves. The
level of granularity to be used is defined on a per-database basis by the database administrator.
While a fine-grained locking system allows more users to use the table or index simultaneously,
it requires more resources, so it does not automatically yield higher performance. SQL Server
also includes two more lightweight mutual exclusion solutions— latches and spinlocks—which
are less robust than locks but are less resource intensive. SQL Server uses them for DMVs and
other resources that are usually not busy. SQL Server also monitors all worker threads that
acquire locks to ensure that they do not end up in deadlocks—in case they do, SQL Server takes
remedial measures, which in many cases are to kill one of the threads entangled in a deadlock
and roll back the transaction it started. To implement locking, SQL Server contains the Lock
Manager. The Lock Manager maintains an in-memory table that manages the database objects
and locks, if any, on them along with other metadata about the lock. Access to any shared object
is mediated by the lock manager, which either grants access to the resource or blocks it.
SQL Server also provides an optimistic concurrency control mechanism, which is similar to
the multiversion concurrency control used in other databases. Under this mechanism, a new
version of a row is created whenever the row is updated, instead of overwriting the row;
each version is additionally identified by the ID of the transaction that created it.
Both the old and the new versions of the row are stored and maintained, though the
old versions are moved out of the database into the system database tempdb. While
a row is in the process of being updated, other requests are not blocked (unlike with locking)
but are executed against the older version of the row. If the other request is an update statement,
the result is two different versions of the row, both of which are stored by the database,
identified by their respective transaction IDs.
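The row-versioning idea above can be illustrated with a minimal sketch: an update appends a new version tagged with its transaction ID instead of overwriting, so readers keep seeing the last committed version. The class and its methods are hypothetical teaching aids, not SQL Server internals.

```python
class VersionedRow:
    """Toy multiversion row: updates create new versions keyed by
    transaction ID; readers are never blocked."""
    def __init__(self, value):
        self._versions = [(0, value)]   # list of (txn_id, value) pairs
        self._committed = 0             # highest committed transaction ID

    def read(self):
        # Return the newest committed version; never blocks on writers.
        return max(v for v in self._versions if v[0] <= self._committed)[1]

    def update(self, txn_id, value):
        # Append a new version; the old version is kept, not overwritten.
        self._versions.append((txn_id, value))

    def commit(self, txn_id):
        self._committed = max(self._committed, txn_id)
```

A reader that runs while transaction 1's update is still uncommitted sees the old value, and sees the new value only after the commit.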
47. SYSTEM ANALYSIS
System analysis is the first and most basic step in the development of software; it is the backbone
of software development. It refers to a systematic investigation of a real or planned system to
determine the functions of the system and how they relate to each other and to any other system. It
is performed to develop a proper design for the software to be built and to fulfil the needs of the
firm or customer. System analysis includes requirement analysis, which plays an important role in
identifying what a firm expects from the proposed system. The task of requirement
analysis is a process of discovery, refinement, modeling and specification. After requirement analysis,
an analysis model is built which gives information about the data required in the system and also
specifies the functions and behavior of the system. The main purpose of conducting system analysis is
to study the various processes and find out their requirements. These may include ways of capturing
or processing data, producing information, controlling a business activity or supporting management.
Determining the requirements entails studying the existing details of the system to find out what
those requirements are. System analysis is conducted with the following objectives in mind:
IDENTIFICATION OF NEED
This project is used to store the information of new visitors.
In the past, this data was stored in a conventional file system, which was very costly
and time consuming. Today, time is very valuable, so this project is needed to save
both time and money: it runs on a computer and stores information in the computer's
memory, saving our TIME and MONEY.
PROBLEM ANALYSIS
Problem analysis is done to obtain a clear understanding of the users and of what exactly
is desired from the software, its information, documentation and so forth. One of the
major problems during analysis is how to organize the obtained information so that it can
be effectively evaluated for completeness and consistency. The second major problem during
analysis is resolving the contradictions that may exist in the information from different
sources.
48. FEASIBILITY ANALYSIS
Feasibility of a system refers to the potentiality and workability of the system. A
system is said to be feasible if its development is beneficial to the organization.
Feasibility analysis is the process of analyzing the system to determine whether
it would be feasible or not. Feasibility analysis should be performed throughout the system
development life cycle.
FEASIBILITY CHECKPOINTS
A feasibility study is done at various points in the system development life cycle. The
scope and complexity of an apparently feasible project can change after the current
problems are fully understood, after the end-user's needs have been defined in detail,
or after the technical requirements have been established. A project that is feasible at one
checkpoint may become infeasible, or less feasible, at a later one.
The various checkpoints in the system development life cycle where a feasibility study is
performed are:
1. Survey phase checkpoint
2. Study phase checkpoint
3. Selection phase checkpoint
4. Acquisition phase checkpoint
5. Design phase checkpoint
49. SYSTEM DESIGN
System design is the step that follows system analysis in software development, and it
needs careful and intricate planning. It helps us prepare a detailed technical design of the
application-based system. It is based on requirement analysis. It provides the
specification and design for the system, giving a brief overview of user functions,
requirements and their actual implementation.
DESIGN OBJECTIVES
The goals that were kept in mind while designing the system are:
1. To make the system as user friendly as possible.
2. To make the flow of the program comprehensible to the user.
3. To have transparency in the work, i.e. to show stepwise how everything is being done
by the system.
50. ARCHITECTURAL DESIGN
Architectural design represents the data structure and program components that are
required to build the computer-based system. It considers the structure and properties of
the components that constitute the system and the relationships that exist between all
architectural components of the system.
PROCEDURAL DESIGN
Procedural design, or component-level design, occurs after the data,
architectural and interface designs have been established. The intent is to
translate the design model into operational software. However, the level of
abstraction of the existing design model is relatively high, while the
abstraction level of the operational program is low.
The system design process encompasses the following activities:
Partition the analysis model into subsystems.
Identify concurrency that is dictated by the problem.
Allocate subsystems to processors and tasks.
Develop a design for the user interface.
Choose a basic strategy for implementing data management.
Identify global resources and the control mechanisms required to access them.
Design an appropriate control mechanism for the system, including task management.
Consider how boundary conditions should be handled.
Review and consider trade-offs.
51. INPUT DESIGN
Input design is a part of the overall system design and requires very careful analysis of the
input data items. The goal of input design is to make data entry easy, logical
and free from errors. The user controls the input data.
The commonly used input and output devices are the mouse, the keyboard and the visual
display unit. Well designed, well organized screen formats are used to acquire the
inputs, and the accepted data is stored in database files.
Our system is classified into subsystems such as:
Admin
Doctors
Patients
Hospital details
Appointment details
Feedback details
Data report
OUTPUT DESIGN
Output is the most important and direct source of information for the user.
Efficient and intelligent output design improves the system's relationship with its users
and helps in decision-making. The output is collected in order to help the user
make wise decisions.
53. 1. Doctor Registration
Field name      Data type [size]    Constraints
Id              Varchar(20)         Primary key
Username        Varchar(20)         Not Null
Password        Varchar(15)         Not Null
Firstname       Varchar(15)         Not Null
Lastname        Varchar(15)         Not Null
Address         Varchar(50)         Not Null
2. Patient Table
Field name      Data type [size]    Constraints
Patient_Name    Varchar(50)         Primary key
User_Id         Varchar(20)         Not Null
Password        Varchar(50)         Not Null
Problem         Varchar(20)         Not Null
54. 3. Feedback
Field name      Data type [size]    Constraints
Patient_Id      Varchar(50)         Allow Nulls
Doctor_Id       Varchar(50)         Allow Nulls
Ratings         Varchar(50)         Primary key
4. Admin
Field name      Data type [size]    Constraints
Admin_Name      Varchar(50)         Allow Nulls
Password        Varchar(50)         Allow Nulls
5. Hospital Table
Field name      Data type [size]    Constraints
Hospital_Id     Varchar(50)         Primary Key
Name            Varchar(50)         Allow Nulls
Address         Varchar(50)         Allow Nulls
6. Appointment
Field name        Data type [size]  Constraints
Patient_Id        Varchar(20)       Primary key
Doctor_Id         Varchar(20)       Not Null
Appointment_Date  Varchar(15)       Not Null
Status            Varchar(15)       Not Null
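The table designs above can be expressed as SQL DDL. The sketch below uses Python's built-in SQLite driver purely for illustration (the report targets SQL Server, where the syntax is very similar); it creates three of the six tables, with field names taken from the design and sizes treated as advisory.

```python
import sqlite3

# In-memory database so the sketch is self-contained and repeatable.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Doctor (
    Id        VARCHAR(20) PRIMARY KEY,
    Username  VARCHAR(20) NOT NULL,
    Password  VARCHAR(15) NOT NULL,
    Firstname VARCHAR(15) NOT NULL,
    Lastname  VARCHAR(15) NOT NULL,
    Address   VARCHAR(50) NOT NULL
);
CREATE TABLE Patient (
    Patient_Name VARCHAR(50) PRIMARY KEY,
    User_Id      VARCHAR(20) NOT NULL,
    Password     VARCHAR(50) NOT NULL,
    Problem      VARCHAR(20) NOT NULL
);
CREATE TABLE Appointment (
    Patient_Id       VARCHAR(20) PRIMARY KEY,
    Doctor_Id        VARCHAR(20) NOT NULL,
    Appointment_Date VARCHAR(15) NOT NULL,
    Status           VARCHAR(15) NOT NULL
);
""")
# Sample row (hypothetical data) to confirm the schema accepts inserts.
conn.execute("INSERT INTO Doctor VALUES ('D1', 'drsmith', 'pw', 'John', 'Smith', 'Delhi')")
rows = conn.execute("SELECT COUNT(*) FROM Doctor").fetchone()
```

The remaining tables (Feedback, Admin, Hospital) follow the same pattern.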
57. DATA FLOW DIAGRAM
A Data Flow Diagram (DFD) is a design tool constructed to show how data flows within the
system. It is designed from the data collected during the data collection phase.
A DFD is otherwise called a "Bubble Chart".
Four symbols are used in a DFD, each with its own meaning:
Rectangle - External Entity (Source or Destination)
Arrow - Data Flow
Circle - Process
Open Rectangle - Data Store
62. ENTITY-RELATIONSHIP DIAGRAM
An entity–relationship model (ER model) describes inter-related things of interest in a
specific domain of knowledge. An ER model is composed of entity types (which
classify the things of interest) and specifies relationships that can exist between
instances of those entity types.
In software engineering an ER model is commonly formed to represent things that a
business needs to remember in order to perform business processes. Consequently, the
ER model becomes an abstract data model that defines a data or information structure
that can be implemented in a database, typically a relational database.
Entity–relationship modeling was developed for database design by Peter Chen
and published in a 1976 paper. However, variants of the idea existed
previously. Some ER modelers show super and subtype entities connected by
generalization-specialization relationships, and an ER model can be used also
in the specification of domain-specific ontologies.
65. SYSTEM TESTING
System testing is a very important aspect of software quality assurance (SQA) and represents
the ultimate review of specification, design and code generation. The testing process
focuses both on the logical internals of the software, ensuring that all statements have
been tested, and on the functional externals, i.e. ensuring that defined inputs produce actual
results that agree with the required results.
System testing is performed on the entire system in the context of a Functional
Requirement Specification (FRS) and/or a System Requirement Specification
(SRS). System testing tests not only the design, but also the behaviour, and even the
believed expectations of the customer. It is also intended to test up to and beyond the
bounds defined in the software/hardware requirements specification.
Unit Testing:
Unit testing is a crucial part of the software development process. It involves
testing each unit of code separately to make sure that it works on its
own, independent of the other units.
Unit testing is essentially a set of path tests performed to examine the several different
paths through a module. Unit testing is normally done by programmers with the
help of a unit testing framework (such as JUnit or CppUnit, depending on the language
the source code is written in). Unit testing is usually an automated process performed
within the programmer's IDE.
Unit testing is an activity used to validate that separate units of source code remain
working properly, for example that a function, method, loop or statement in a program
works correctly. It is executed by the developer. In unit testing, individual functions or
procedures are tested to make sure that they operate correctly, and all
components are tested individually.
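A minimal unit test might look like the sketch below, written with Python's built-in unittest framework. The function under test (`is_slot_free`) is a hypothetical helper from the appointment module, invented here for illustration.

```python
import unittest

def is_slot_free(booked_slots, requested_slot):
    # Hypothetical unit from the appointment module: a slot is free
    # if no existing booking occupies it.
    return requested_slot not in booked_slots

class TestSlotBooking(unittest.TestCase):
    # Each test exercises one path through the unit in isolation.
    def test_free_slot(self):
        self.assertTrue(is_slot_free({"10:00", "11:00"}, "09:00"))

    def test_taken_slot(self):
        self.assertFalse(is_slot_free({"10:00"}, "10:00"))

# Run the tests programmatically, as an IDE or build tool would.
suite = unittest.TestLoader().loadTestsFromTestCase(TestSlotBooking)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Each test method covers one execution path of the unit, matching the path-testing idea described above.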
Integration Testing:
Integration testing, as the name suggests, is a software test where individual units
are combined and examined as one unit. The idea behind this test is to validate the
performance, functionality and reliability of the software.
Integration testing, or I&T (integration and testing), tests the conjunction between
components and the interfaces among various systems. This test is
executed between unit testing and validation testing. The software is first split into
modules, and each module is then unit-tested. These unit-tested modules are then
packaged for integration testing. Integration testing is generally carried out by the
development team and is considered to be a part of the development cycle.
Integration testing has two approaches: the Big Bang approach and
the Incremental approach.
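The incremental approach can be sketched as follows: two hypothetical modules, a booking service and a notifier, each assumed to be unit-tested on their own, are combined and then tested through their shared interface.

```python
class Notifier:
    """Hypothetical notification module; records sent messages."""
    def __init__(self):
        self.sent = []

    def send(self, to, msg):
        self.sent.append((to, msg))

class BookingService:
    """Hypothetical booking module; drives the notifier interface."""
    def __init__(self, notifier):
        self.notifier = notifier       # the interface under test
        self.appointments = {}

    def book(self, patient, doctor, slot):
        self.appointments[(doctor, slot)] = patient
        self.notifier.send(patient, f"Booked {slot} with {doctor}")

# Integration test: does booking correctly drive the notifier?
n = Notifier()
svc = BookingService(n)
svc.book("P1", "D1", "10:00")
assert n.sent == [("P1", "Booked 10:00 with D1")]
```

The assertion checks the behaviour across the module boundary, which is exactly what unit tests of either module alone could not verify.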
Validation Testing:
Validation testing can be defined in many ways, but a simple definition is that validation
succeeds when the software functions in a manner that can be reasonably expected by the
customer. After a validation test has been conducted, one of two possible conditions exists:
• The functions or performance characteristics conform to the specification
and are accepted.
• A deviation from the specification is uncovered and a deficiency list is
created.
The proposed system under consideration has been tested using validation testing
and found to be working satisfactorily.
White-Box Testing:
White-box testing (also known as clear box testing, glass box testing, transparent box
testing, and structural testing) is a method of testing software that tests internal
structures or workings of an application, as opposed to its functionality (i.e. black-box
testing). In white-box testing an internal perspective of the system, as well as
programming skills, are used to design test cases. The tester chooses inputs to exercise
paths through the code and determine the appropriate outputs. This is analogous to
testing nodes in a circuit, e.g. in-circuit testing (ICT). White-box testing can be
applied at the unit, integration and system levels of the software testing process.
Although traditional testers tended to think of white-box testing as being done at the
unit level, it is used for integration and system testing more frequently today. It can
test paths within a unit, paths between units during integration, and paths between
subsystems during a system-level test. Though this method of test design can uncover
many errors or problems, it has the potential to miss unimplemented parts of the
specification or missing requirements.
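In white-box style, test inputs are chosen by inspecting the code's internal structure so that every path is exercised. The sketch below uses a hypothetical fee calculator with two branches; the inputs are picked deliberately to cover both.

```python
def consultation_fee(age, base=500):
    """Hypothetical fee calculator with two execution paths."""
    if age >= 60:       # path 1: senior-citizen discount branch
        return base // 2
    return base         # path 2: default branch

# White-box test cases: one input per path, chosen from the code itself.
assert consultation_fee(65) == 250   # exercises path 1
assert consultation_fee(30) == 500   # exercises path 2
assert consultation_fee(60) == 250   # boundary value of the branch condition
```

Note that the boundary input (age 60) comes directly from reading the `>=` in the condition, something a purely black-box tester could not know to target.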
Black Box Testing:
Black box testing is also known as functional testing. The sole purpose of black box
testing is to test the application or software from its functionality point of view. In this
type of testing, the software is tested to check whether it fulfils all the
specified requirements. In black box testing, the tester is not concerned with testing the
logic of the program, and the internal details of the program are not known to the tester. In
this type of testing, the software is like a black box to the tester, whose internal details
are undisclosed. The tester only tests the functionality of the program by supplying an
input and observing the output.
As already stated, an application or software is developed to fulfil certain objectives
or requirements. Black box testing is a detailed inspection of the software's
functionality against the already specified requirements for which it was developed.
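A black-box test treats the unit purely as input and output pairs drawn from the specification. The validator below and its ID format are hypothetical examples; the point is that the test cases never look at the function's internals.

```python
import re

def valid_patient_id(pid):
    """Hypothetical validator: a patient ID is 'P' followed by 4 digits."""
    return bool(re.fullmatch(r"P\d{4}", pid))

# Black-box cases: (input, expected output) pairs from the specification,
# chosen without reading the implementation.
cases = [
    ("P1234", True),    # well-formed ID
    ("p1234", False),   # wrong case
    ("P12",   False),   # too short
    ("X1234", False),   # wrong prefix
]
for given, expected in cases:
    assert valid_patient_id(given) == expected
```

Each case checks only observable behaviour, which is what distinguishes this from the white-box approach described earlier.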
68. Limitation of the Project:
No system, even one developed by a professional, can be said to be ideal
in itself, and the present project implemented by us is no exception. Different
types of constraints have led to a system with limitations, but these
can be removed with some modification. Because of limited time, some of these
limitations remain in the system.
CODING