1. Low-level design inspections
• Low-level design (LLD) is a component-level
design process that follows step-by-step
refinement.
• This process can be used for designing data
structures, the required software architecture,
source code and, ultimately, performance
algorithms.
2. Low-Level Design (LLD)
• In LLD, the focus is on designing each
component in detail: which classes are
needed, which abstractions to use, how
objects should be created, how data flows
between different objects, and so on. LLD
converts the high-level design into detailed,
ready-to-code components.
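The kind of decisions an LLD pins down can be sketched in Java. The names below (Notifier, EmailNotifier, NotifierFactory) are hypothetical, chosen only to illustrate picking an abstraction, a concrete class, and an object-creation mechanism:

```java
// Hypothetical LLD-level sketch for a small notification feature.

// Abstraction chosen at LLD time: callers depend on this interface.
interface Notifier {
    String send(String recipient, String message);
}

// Concrete component whose detailed behavior is spelled out.
class EmailNotifier implements Notifier {
    public String send(String recipient, String message) {
        // Real code would call a mail gateway; here we just format the result.
        return "email to " + recipient + ": " + message;
    }
}

// LLD also decides how object creation happens, e.g. via a factory.
class NotifierFactory {
    static Notifier forChannel(String channel) {
        if ("email".equals(channel)) {
            return new EmailNotifier();
        }
        throw new IllegalArgumentException("unknown channel: " + channel);
    }
}
```

At this level of detail, a developer can start coding directly from the design document.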
3. High Level Design
• In HLD, the focus is more on designing the
high-level architecture of the system, defining
the high-level components with their
interactions, and also the database design.
HLD converts the business requirements to a
high-level solution.
4. Phases of code inspection
• Planning: The inspection is planned by the
moderator.
• Overview meeting: The author describes the
background of the work product.
• Preparation: Each inspector examines the
work product to identify possible defects.
• Inspection meeting: The reader steps through
the work product while the inspectors raise
the defects they found.
• Rework: The author corrects the defects
found during the inspection.
• Follow-up: The moderator verifies that the
corrections have been made.
5. Components of LLD
• The LLD comprises granular-level details of the
functional logic of each module as
pseudocode, database tables, all the
properties with their type and size, interface,
API details, dependencies as well as error
message listings.
• With a well-analyzed low-level design
document, creating programs becomes fairly
easy.
6. Purpose Of Code Inspection
• The main purpose of code inspection is to find defects;
it can also identify potential process improvements.
• An inspection report lists the findings, which include
metrics that can be used to aid improvements to the
process as well as correcting defects in the document
under review.
• Preparation before the meeting is essential; this
includes reading any source documents to check
consistency.
• Inspections are often led by a trained moderator, who
is not the author of the code.
7. Purpose Of Code Inspection
• The inspection process is the most formal type
of review based on rules and checklists and
makes use of entry and exit criteria.
• It usually involves peer examination of the
code, with each participant playing a defined role.
• After the meeting, a formal follow-up process
is used to ensure that corrective action is
completed in a timely manner.
8. Code Review
• Code Review is a systematic examination,
which can find and remove the vulnerabilities
in the code such as memory leaks and buffer
overflows.
• Technical reviews are well documented and
use a well-defined defect detection process
that includes peers and technical experts.
• Reviewers prepare for the review meeting and
prepare a review report with a list of findings.
9. Advantages Of Code Inspection
• Improves overall product quality.
• Discovers the bugs/defects in software code.
• Identifies opportunities for process improvement.
• Finds and removes defects efficiently and
quickly.
• Helps the team learn from previous defects.
10. Unit Tests
• Unit testing is a software development
process in which the smallest testable parts of
an application, called units, are individually
scrutinized for proper operation.
• Software developers and sometimes QA staff
complete unit tests during the development
process.
11. Purpose of Unit Test
• A unit test is a type of software test that
focuses on components of a software product.
• The purpose is to ensure that each unit of
software code works as expected.
• A unit can be a function, method, module,
object, or other entity in an application's
source code.
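As a minimal sketch, assuming a hypothetical PriceCalculator class as the unit under test, a unit test simply calls the unit and checks its result (a framework such as JUnit is normally used; explicit checks keep the example self-contained):

```java
// Unit under test: a single, small, testable piece of logic.
class PriceCalculator {
    // Applies a percentage discount to a price.
    static double applyDiscount(double price, double percent) {
        if (percent < 0 || percent > 100) {
            throw new IllegalArgumentException("percent out of range");
        }
        return price - price * percent / 100.0;
    }
}

// Each test exercises one expected behavior of the unit.
class PriceCalculatorTest {
    static void testTenPercentOff() {
        double result = PriceCalculator.applyDiscount(200.0, 10.0);
        if (result != 180.0) throw new AssertionError("expected 180.0, got " + result);
    }

    static void testZeroDiscountLeavesPriceUnchanged() {
        double result = PriceCalculator.applyDiscount(99.0, 0.0);
        if (result != 99.0) throw new AssertionError("expected 99.0, got " + result);
    }

    public static void main(String[] args) {
        testTenPercentOff();
        testZeroDiscountLeavesPriceUnchanged();
        System.out.println("all tests passed");
    }
}
```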
14. Unit Testing Best Practices
Tests should be isolated:
• While writing unit tests, it is important to keep in
mind that each unit test is written independently
of the others.
• The arrangement of the cases might vary from
person to person.
• Clusters of tests can also be organized however
you choose.
• Just note that each test must be orthogonal: it
must not depend on, or overlap with, the other
test cases.
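The isolation rule above can be illustrated with a hypothetical Cart class: each test constructs its own instance, so no state leaks from one test to another and the execution order does not matter:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical class under test.
class Cart {
    private final List<String> items = new ArrayList<>();
    void add(String item) { items.add(item); }
    int size() { return items.size(); }
}

// Isolated tests: a fresh fixture per test, no shared mutable state.
class CartTest {
    static void testNewCartIsEmpty() {
        Cart cart = new Cart();          // this test's own instance
        if (cart.size() != 0) throw new AssertionError("new cart should be empty");
    }

    static void testAddingOneItemGivesSizeOne() {
        Cart cart = new Cart();          // a separate instance, not shared
        cart.add("book");
        if (cart.size() != 1) throw new AssertionError("cart should hold one item");
    }
}
```

Because neither test touches the other's Cart, they pass or fail on their own merits regardless of ordering.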
15. High Speed:
• Unit tests are written by developers to be executed
repeatedly, to make sure that no errors or minor
bugs remain in the system.
• If these tests take a long time to execute, the
overall test run slows down accordingly.
• Even one slow test case affects the overall speed
of execution of the test suite.
• That is why developers should use good coding
practices to reduce the execution time of every test
case, which decreases the total execution time.
16. High Readability
• Every unit test must be easy to read.
• The test has to be clear and readable.
• One should be able to understand which behavior
the test verifies just by reading it.
• It must clearly state the scenario under test, and
if it fails, it should report the reason for the
failure in a clear way.
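A readable test can be sketched as follows, assuming a hypothetical Account class: the method name states the scenario, and each failure message states why the check matters:

```java
// Hypothetical class under test.
class Account {
    private int balance;
    Account(int opening) { balance = opening; }
    boolean withdraw(int amount) {
        if (amount > balance) return false;  // insufficient funds
        balance -= amount;
        return true;
    }
    int balance() { return balance; }
}

class AccountTest {
    // The name alone explains the scenario and the expected outcome.
    static void withdrawalLargerThanBalanceIsRejected() {
        Account account = new Account(50);
        boolean accepted = account.withdraw(100);
        if (accepted) {
            throw new AssertionError("withdrawal exceeding the balance should be rejected");
        }
        if (account.balance() != 50) {
            throw new AssertionError("a rejected withdrawal must not change the balance");
        }
    }
}
```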
17. Securing Information
• Information security protects sensitive
information from unauthorized activities,
including inspection, modification, recording,
and any disruption or destruction.
• The goal is to ensure the safety and privacy of
critical data such as customer account details,
financial data or intellectual property.
18. Practical ways to keep information
safe and secure
1. Back up your data
2. Use strong passwords and multi-factor
authentication
3. Be aware of your surroundings
4. Be wary of suspicious emails
5. Install anti-virus and malware protection
6. Protect your device when it’s unattended
7. Make sure your Wi-Fi connection is secure
8. Take care when sharing your screen
19. 3 Principles of Information Security
Confidentiality
• Confidentiality measures are designed to
prevent unauthorized disclosure of
information.
• The purpose of the confidentiality principle is
to keep personal information private, ensuring
that it is visible and accessible only to those
individuals who own it or need it to perform
their organizational functions.
20. Integrity
• Integrity includes protection against
unauthorized changes (additions, deletions,
alterations, etc.) to data.
• The principle of integrity ensures that data is
accurate and reliable and is not modified
incorrectly, whether accidentally or
maliciously.
21. Availability
• Availability is the protection of a system’s ability
to make software systems and data fully available
when a user needs it (or at a specified time).
• The purpose of availability is to make the
technology infrastructure, the applications and
the data available when they are needed for an
organizational process or for an organization’s
customers.
22. Data Integrity
• Data integrity is a concept and process that
ensures the accuracy, completeness,
consistency, and validity of an organization's
data.
• By following the process, organizations not
only ensure the integrity of the data but
guarantee they have accurate and correct data
in their database.
24. • Data integrity means the data has been
collected and stored accurately, as well as
being contextually accurate to the model at
hand.
• To maintain integrity, data must be collected
and stored in an ethical, law-abiding way and
must have a complete structure where all
defining characteristics are correct and can be
validated.
25. • Data can become compromised in a variety of
ways:
• Human error, such as unintended alterations
• Errors in transferring
• Malware/hacker interference
• Disk crashes
• Bugs and physical device damage
• Illegal data collection
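A common defense against the accidental kinds of corruption listed above (transfer errors, disk faults) is to compare a cryptographic checksum taken before and after the data moved. A minimal Java sketch using the standard MessageDigest API:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Arrays;

class IntegrityCheck {
    // SHA-256 digest of a byte array; SHA-256 is always available in the JDK.
    static byte[] sha256(byte[] data) {
        try {
            return MessageDigest.getInstance("SHA-256").digest(data);
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }

    // Equal digests mean the received bytes are (almost certainly) identical.
    static boolean unchanged(byte[] original, byte[] received) {
        return Arrays.equals(sha256(original), sha256(received));
    }

    public static void main(String[] args) {
        byte[] sent = "account=42;balance=100".getBytes(StandardCharsets.UTF_8);
        byte[] corrupted = "account=42;balance=900".getBytes(StandardCharsets.UTF_8);
        System.out.println(unchanged(sent, sent));       // true
        System.out.println(unchanged(sent, corrupted));  // false
    }
}
```

Checksums catch accidental corruption; protecting against a deliberate attacker additionally requires signatures or keyed MACs.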
26. Different Types of Data Integrity
• Physical integrity
• Logical integrity
Physical Integrity
• Physical integrity is the overall protection of the
wholeness of a data set as it is stored and
retrieved.
• Anything that impedes the ability to retrieve this
data, such as power disruption, malicious
disruption, storage erosion and a slew of
additional issues may cause a lack of physical
integrity.
27. • Many companies outsource their data storage
to cloud providers, such as AWS, to manage
the physical integrity of the data. This is
particularly useful for small companies that
benefit from offloading data storage to spend
more time focusing on their business.
28. Logical Integrity
• Logical integrity allows data to remain
unchanged as it is utilized in a relational
database.
• Maintaining logical integrity helps protect
from human error and malicious intervention
as well, but does so in different ways than
physical integrity depending on its form.
29. Databases use four variations of logical
integrity:
• Entity integrity
• Referential integrity
• Domain integrity
• User-defined integrity
30. Entity integrity
• It involves the creation of primary keys to
identify data as distinct entities and ensure
that no data is listed more than once or is null.
• This allows data to be linked to and enables its
usage in a variety of ways.
31. Referential integrity
• It is the series of processes that is used to store
and access data uniformly, which allows rules to
be embedded into a database’s structure
regarding the use of foreign keys.
• This allows for a consistent and meaningful
combination of data sets across the database.
• Critically, referential integrity allows the ability to
combine various tables within a relational
database, facilitating uniform insertion and
deletion practices.
32. • Domain integrity refers to the collection of
processes that ensure accuracy in each piece
of data included in a domain, or a set of
acceptable values that a column may contain.
• User-defined integrity provides rules and
constraints that are created by the user in
order to use data for their specific purpose.
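The four variations can be mimicked in plain Java with a hypothetical pair of customers/orders "tables" (a real database enforces them through keys and constraints in the schema):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical in-memory stand-in for two related database tables.
class TinyDatabase {
    private final Map<Integer, String> customers = new HashMap<>();
    private final Map<Integer, Integer> orders = new HashMap<>(); // orderId -> customerId

    void addCustomer(Integer id, String name) {
        // Entity integrity: the primary key must be non-null and unique.
        if (id == null || customers.containsKey(id)) {
            throw new IllegalArgumentException("primary key missing or duplicated");
        }
        // Domain integrity: the name column only accepts non-empty strings.
        if (name == null || name.isEmpty()) {
            throw new IllegalArgumentException("name outside its domain");
        }
        customers.put(id, name);
    }

    void addOrder(int orderId, int customerId, int quantity) {
        // Referential integrity: the foreign key must point at an existing row.
        if (!customers.containsKey(customerId)) {
            throw new IllegalArgumentException("foreign key has no matching customer");
        }
        // User-defined integrity: a business rule, e.g. quantity must be positive.
        if (quantity <= 0) {
            throw new IllegalArgumentException("quantity must be positive");
        }
        orders.put(orderId, customerId);
    }

    int orderCount() { return orders.size(); }
}
```

In SQL these same rules would be expressed as PRIMARY KEY, FOREIGN KEY, and CHECK constraints.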
33. Java- Managing Denial of Service
• The Denial of Service (DoS) attack is focused
on making a resource (site, application,
server) unavailable for the purpose it was
designed.
• There are many ways to make a service
unavailable for legitimate users by
manipulating network packets, programming,
logical, or resources handling vulnerabilities,
among others.
34. • Denial of service is typically accomplished by
flooding the targeted machine or resource
with surplus requests in an attempt to
overload systems and prevent some or all
legitimate requests from being fulfilled.
• For example, if a bank website can handle 10
people a second by clicking the Login button,
an attacker only has to send 10 fake requests
per second to make it so no legitimate users
can log in.
35. • The most famous DoS technique is the Ping of
Death.
• The Ping of Death attack works by generating
and sending special network messages,
specifically ICMP (Internet Control Message
Protocol) packets of non-standard sizes, that
cause problems for the systems that receive them.
36. Following is the command for performing flooding
of requests on an IP (Windows syntax):
ping ip_address -t -l 65500
• "ping" sends the data packets to the victim.
• "ip_address" is the IP address of the victim.
• "-t" means the data packets should be sent until
the program is stopped.
• "-l 65500" specifies the size of the data load to
be sent to the victim.
37. Problems caused by DoS attacks
• Ineffective services
• Inaccessible services
• Interruption of network traffic
• Connection interference
38. Features to help mitigate DoS attacks:
• Network Segmentation: Segmenting the
network can help prevent a DoS attack from
spreading throughout the entire network.
• This limits the impact of an attack and helps to
isolate the affected systems.
• Implement Firewalls: Firewalls can help
prevent DoS attacks by blocking traffic from
known malicious IP addresses or by limiting
the amount of traffic allowed from a single
source.
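The "limit the traffic allowed from a single source" idea can be sketched as a fixed-window counter per source IP. This is a hypothetical, simplified example; real deployments rely on firewalls, load balancers, or token-bucket limiters:

```java
import java.util.HashMap;
import java.util.Map;

// Fixed-window rate limiter: each source IP gets a request quota per window.
class RequestLimiter {
    private final int maxPerWindow;
    private final Map<String, Integer> counts = new HashMap<>();

    RequestLimiter(int maxPerWindow) { this.maxPerWindow = maxPerWindow; }

    // Returns true if the request is allowed, false if the source is over its limit.
    boolean allow(String sourceIp) {
        int used = counts.getOrDefault(sourceIp, 0);
        if (used >= maxPerWindow) {
            return false;                 // drop: source exceeded its quota
        }
        counts.put(sourceIp, used + 1);
        return true;
    }

    // Called by a timer at the start of each window (e.g. every second).
    void resetWindow() { counts.clear(); }
}
```

A flood from one address is capped at maxPerWindow requests per window, while other sources keep getting served.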
39. • Use Intrusion Detection and Prevention
Systems: Intrusion Detection and Prevention
Systems (IDS/IPS) can help to detect and block
DoS attacks by analyzing network traffic and
blocking malicious traffic.