This document discusses distributed systems applications in real life, including three key areas: distributed rendering in computer graphics, peer-to-peer networks, and massively multiplayer online gaming. It describes how distributed rendering parallelizes graphics processing across multiple computers. Peer-to-peer networks are defined as decentralized networks where nodes act as both suppliers and consumers of resources. Examples of peer-to-peer applications include file sharing and content delivery networks. The document also outlines the challenges of designing multiplayer online games using a distributed architecture rather than a traditional client-server model.
2. CONTENTS
• Applications of Distributed Systems
1. Distributed Rendering in Computer
Graphics
2. Peer-To-Peer Networks
3. Massively Multiplayer Online Gaming
3. APPLICATIONS OF
DISTRIBUTED SYSTEMS
• Telecommunication networks:
Telephone networks and cellular networks
Computer networks such as the Internet
Wireless sensor networks
Routing algorithms
• Network applications:
World wide web and peer-to-peer networks
Massively multiplayer online games and virtual reality communities
Distributed databases and distributed database management systems
Network file systems
• Real-time process control:
Aircraft control systems
Industrial control systems
• Parallel computation:
Scientific computing, including cluster computing and grid computing
Distributed rendering in computer graphics
5. DISTRIBUTED RENDERING
• Basic concepts of parallel rendering in
computer graphics
• The importance of parallel rendering
• Different types of rendering parallelism
• Algorithms in parallel rendering
6. INTRODUCTION
• What is Rendering and Parallel / distributed
rendering?
• For complex objects or high quality images, the
rendering process requires massive computational
resources. For example, the rendering of medical
visualization and some CAD applications may need
millions or billions of floating-point and integer
operations for each image.
• To obtain the required computing power, the only
practical solution is to exploit multiple processing
units to speed up the rendering process. This
technology is known as parallel rendering
7. RENDERING FUNCTIONS
• The rendering process can be divided into
a number of distinct functions:
• 1) Geometry processing: modeling
transformation.
• 2) Rasterization: scan-conversion, shading, and visibility determination.
9. FUNCTIONAL PARALLELISM
• Functions are applied in series to individual data items.
• Functions are assigned to different processing units; once a
processing unit completes its work on a data item, it sends the data
item to the next unit and receives a new item from its upstream
neighbor.
• Functional parallelism is suitable for polygon and surface
rendering applications. We feed the 3D geometric objects into the
beginning of the pipeline, and retrieve the final pixel values from the
end of pipeline.
10. DATA PARALLELISM
• Data parallelism focuses on distributing the data across different
processing units
• The graphics data are divided into a number of streams; multiple
data items can then be processed simultaneously on different
processing units, and the results are merged to create the final
image
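The data-parallel decomposition above can be sketched in a few lines: the image is split into horizontal bands, each band is shaded independently, and the results are merged into the final image. This is an illustrative sketch only; `shade()` is a hypothetical stand-in for a real per-pixel rendering computation, and threads stand in for the separate processing units a real renderer would use.

```python
from concurrent.futures import ThreadPoolExecutor

WIDTH, HEIGHT = 64, 48

def shade(x, y):
    # Placeholder per-pixel computation (a simple two-axis gradient).
    return (x * 255 // (WIDTH - 1), y * 255 // (HEIGHT - 1), 0)

def render_band(band):
    # Each worker shades one contiguous band of scanlines.
    y0, y1 = band
    return y0, [[shade(x, y) for x in range(WIDTH)] for y in range(y0, y1)]

def render_parallel(n_workers=4):
    step = HEIGHT // n_workers
    bands = [(i * step, HEIGHT if i == n_workers - 1 else (i + 1) * step)
             for i in range(n_workers)]
    image = [None] * HEIGHT
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        for y0, rows in pool.map(render_band, bands):
            image[y0:y0 + len(rows)] = rows  # merge step
    return image
```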
11. TEMPORAL PARALLELISM
• Temporal parallelism is usually adopted by animation
applications. In this kind of application, a large number
of images are generated for subsequent playback;
parallelism is obtained by decomposing the overall
rendering task in the time domain; and processors are
assigned a number of frames to render, along with the
data needed to produce those frames.
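The frame-to-processor assignment described above can be sketched as a simple round-robin schedule. This is an illustrative sketch; a real system would also ship each processor the scene data for its frames.

```python
def assign_frames(n_frames, n_procs):
    # Round-robin assignment: processor p renders frames p, p + n_procs, ...
    return {p: list(range(p, n_frames, n_procs)) for p in range(n_procs)}
```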
12. PARALLEL RENDERING –
A SORTING PROBLEM
• As stated above, in the rasterization stage each processor is
assigned a portion of the pixel calculations. The
rendering can be considered as determining how each
object affects each pixel. Since the modeling and viewing
transformations are arbitrary, an object can fall anywhere
on the screen. Therefore we can view rendering as a
sorting problem, i.e. sorting objects to the screen
13. THREE GROUPS OF ALGORITHM
1. Sort-First
2. Sort-Middle
3. Sort-Last
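Of these, sort-last is the easiest to sketch: each processor rasterizes only its own subset of objects into a full-resolution color and depth buffer, and the final image is composed by keeping, at every pixel, the fragment with the nearest depth. A minimal illustration; the buffer layout (a list of per-processor color/depth grids) is an assumption for this sketch.

```python
def composite(buffers):
    # buffers: one (color_grid, depth_grid) pair per processor, all at
    # full resolution. At each pixel the fragment nearest to the viewer
    # (smallest depth value) wins.
    h = len(buffers[0][0])
    w = len(buffers[0][0][0])
    out = [[None] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            color, _ = min(buffers, key=lambda b: b[1][y][x])
            out[y][x] = color[y][x]
    return out
```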
17. CONCLUSION
• Although parallel rendering techniques have
been successfully employed in many
applications, challenges still remain to improve
the scalability, load balancing, communication,
and image composition of these algorithms and
produce higher performance.
19. INTRODUCTION
• A peer-to-peer (P2P) network is a type of
decentralized and distributed network
architecture in which individual nodes in the
network (called "peers") act as both suppliers
and consumers of resources, in contrast to the
centralized client–server model where client
nodes request access to resources provided by
central servers.
20. CURRENT APPLICATIONS OF P2P
• File-sharing networks
• Communications
• Content delivery
• Streaming media
21. PEER-TO-PEER FILE SHARING
• Peer-to-peer file sharing is the distribution and
sharing of digital documents and computer files
using the technology of peer-to-peer (P2P)
networking.
• P2P file sharing allows users to access media files
such as books, music, movies, and games using a
specialized P2P software program that searches for
other connected computers on a P2P network and
locates the desired content. The nodes (peers) of
such networks are end-user computer systems that
are interconnected via the Internet.
22. THE BITTORRENT PROTOCOL
• BitTorrent is a protocol that enables fast downloading
of large files using minimal Internet bandwidth. It
costs nothing to use and includes no spyware or pop-
up advertising.
• Unlike other download methods, BitTorrent maximizes
transfer speed by gathering pieces of the file you want
and downloading these pieces simultaneously from
people who already have them. This process makes
popular and very large files, such as videos and
television programs, download much faster than is
possible with other protocols.
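One piece-selection heuristic commonly described for BitTorrent is "rarest first": among the pieces a downloader still needs, it requests the one held by the fewest connected peers, which spreads rare data through the swarm quickly. A hedged sketch of the idea; the data structures below are assumptions for illustration, not the protocol's actual wire format.

```python
from collections import Counter

def rarest_first(needed, peer_bitfields):
    # needed: set of piece indices we still lack.
    # peer_bitfields: the set of piece indices each connected peer holds.
    # Pick the needed piece held by the fewest peers (ties -> lowest index).
    counts = Counter(p for bf in peer_bitfields for p in bf if p in needed)
    available = [p for p in sorted(needed) if counts[p] > 0]
    return min(available, key=lambda p: counts[p]) if available else None
```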
23. TRADITIONAL CLIENT-SERVER
DOWNLOADING
• You open a Web page and click a link to download a file
to your computer.
• The Web browser software on your computer (the client)
tells the server (a central computer that holds the Web
page and the file you want to download) to transfer a
copy of the file to your computer.
• The transfer is handled by a protocol (a set of rules), such
as FTP (File Transfer Protocol) or HTTP (HyperText
Transfer Protocol).
26. FUTURE SCOPE
• Academic Study Material in Schools and
Colleges
• E-papers
• Free-to-Air Radio channel services
• Large software updates and other general
data files
• Global availability of private data
28. INTRODUCTION
• Previously, Client/server communication architecture was used for
gaming.
• This architecture has the advantage that a single authority orders
events, resolves conflicts in the simulation, acts as a central
repository for data, and is easy to secure.
• However, it also has drawbacks. First, it introduces delay because
messages between players are always forwarded through the server. Second,
traffic at the server increases with the number of players, creating
localized congestion. Third, the size of the virtual world and the quantity
of data it contains is limited by the storage capacity of the server.
29. A distributed architecture for MMOGs has the potential
to overcome these problems.
• Players can send messages directly to each other, thereby reducing
delay and eliminating localized congestion. The storage capacity
created by combining the storage resources of all players can easily
exceed that of a single server. Furthermore, the ability to execute
thousands of remote processes in parallel on user machines
provides unparalleled computational power for MMOGs
30. Distributed architecture has several fundamental
and challenging problems that must be overcome:
• Authenticating players and granting access rights to the
game.
• Maintaining consistency, ordering events, and
propagating events to intended recipients
• Providing tamper-resistant storage of characters and
game state
• Scheduling computations across the players
• Providing responsiveness to interactive elements of the
game
• Ensuring the architecture is cheat-proof by preventing
players from taking advantage of the architecture itself in
order to cheat.
31. PROTOCOLS
1st protocol – MiMaze Protocol
Uses multicast; vulnerable to cheating
2nd protocol - Lockstep protocol
32. Lockstep protocol
• The lockstep protocol is a partial solution to the look-ahead cheating problem
in peer-to-peer architecture multiplayer games, in which a cheating client delays his
own actions to await the messages of other players.
• To avoid this method of cheating, the lockstep protocol requires each player to first
announce a "commitment" to an action before revealing the action itself.
• Drawback:
• As all players must wait for all commitments to arrive before sending their actions, the
game progresses as slowly as the player with the highest latency.
• Overcome: the asynchronous lockstep protocol relaxes this requirement.
• Lockstep is a major advance in communication protocols for distributed games
because it is provably secure against several cheats. Unfortunately, time progresses in
the lockstep protocol at the speed of the slowest player, so it cannot meet the
bounded-time requirements necessary for interactive games.
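The commitment step can be sketched with a hash-based commit/reveal scheme: each player broadcasts a hash of their action combined with a secret nonce, and reveals the action and nonce only after all commitments have arrived, so no one can pick an action based on the others' moves. This is a generic illustration of the idea, not the lockstep protocol's exact message format.

```python
import hashlib
import secrets

def commit(action):
    # Broadcast the digest now; keep the nonce secret until the reveal phase.
    nonce = secrets.token_hex(16)
    digest = hashlib.sha256(f"{nonce}:{action}".encode()).hexdigest()
    return digest, nonce

def verify(digest, nonce, action):
    # Run by every other player once the action and nonce are revealed.
    return hashlib.sha256(f"{nonce}:{action}".encode()).hexdigest() == digest
```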
33. KNUTSSON
• The virtual world is divided into regions, and players in each region form a group.
Each region maps to a multicast group through Scribe so that updates from the
players are multicast to the group. Consistency is achieved through the use of
coordinators. Every object in the game is assigned to a coordinator; therefore, any
updates to an object must be sent to the coordinator who resolves any consistency
problems. Fault tolerance is achieved through replication.
34. PEER TO PEER GAME
ARCHITECTURE
We divide the architecture into four main components:
Authentication, Communications, Storage, and Computation.
First, the authentication component is responsible for controlling
access to the game. The communication component determines how
players send messages to each other and the storage component
provides long-term storage of world state.
Last, the computation component schedules computations
across the player base.
35. AUTHENTICATION
The challenging problems with distributed authentication are security and
scalability
We use a hash table: it provides a unique ID with a name and an expiration date.
To help with scalability, once the pairs are generated, they are stored on a DHT
by the player or server.
The advantage of using a DHT is that as the number of players increase, the
storage capacity increases.
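The scheme above might be sketched as follows. `SimpleDHT` is a single-process stand-in for a real distributed hash table, and the record layout (name, ID, expiration) is an assumption for illustration.

```python
import hashlib
import time

class SimpleDHT:
    # Single-process stand-in for a distributed hash table.
    def __init__(self):
        self._store = {}
    def put(self, key, value):
        self._store[key] = value
    def get(self, key):
        return self._store.get(key)

def register(dht, name, ttl_seconds=3600):
    # Key the record by a hash of the player name; the record carries a
    # unique ID and an expiration date, as described above.
    key = hashlib.sha1(name.encode()).hexdigest()
    dht.put(key, {"name": name, "id": key[:8],
                  "expires": time.time() + ttl_seconds})
    return key

def is_valid(dht, key):
    record = dht.get(key)
    return record is not None and record["expires"] > time.time()
```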
36. COMMUNICATION COMPONENT
A distributed communication architecture has the possibility of supporting millions of players.
In client/server architectures, all communications must be sent to the server before being
sent to players, introducing additional delay. Distributed communication allows players to
send messages directly to each other, creating a more responsive game.
Consistency is required when players interact from different regions; if errors arise, they
will not be able to play together.
Communication is divided between zones: as the number of players increases, the virtual
world is divided into zones, each mapped to a peer-to-peer group.
To organize the peer-to-peer networks into a hierarchy of groups, we use distributed election
protocols to elect one or more leaders to act as nodes in the hierarchy. The nodes assist in
event propagation and group location. To help build the hierarchy of groups, we propose using
a DHT to map unique zone IDs to the list of nodes in a given zone.
Consistency and event ordering are provided by a new protocol called NEO, which ensures low
latency and also prevents cheating.
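The zone-to-group mapping can be sketched as follows. `ZONE_SIZE` and the square-grid layout are assumptions for illustration; in the design described above, the member lists would live in the DHT keyed by zone ID rather than in a local dictionary.

```python
ZONE_SIZE = 100  # hypothetical side length of a square zone

def zone_id(x, y):
    # A player's world position maps to the ID of the zone containing it.
    return (x // ZONE_SIZE, y // ZONE_SIZE)

def join_zone(groups, player, x, y):
    # groups plays the role of the DHT mapping zone IDs to member lists.
    zid = zone_id(x, y)
    groups.setdefault(zid, []).append(player)
    return zid
```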
37. STORAGE
• TWO TYPES:
1. Ephemeral Data
• Data which is ephemeral is simply stored on the DHT, indexed by its
geographical area in the virtual world. Players that enter an area, query the
DHT for all ephemeral data.
2. Permanent Data
• Permanent data is stored both locally by the player and as a replica in the
network. This allows the player to make sure the data is always available when
they are online, but provides a backup in case their local data becomes
corrupted.
38. COMPUTATION COMPONENT
• The server in a client/server architecture typically does
not have enough power to simulate complex interactions
between players and the artificial intelligence (AI) behind
monsters and other non-player characters.
39. FUTURE SCOPE
• Currently available for expert users only.
Can be extended to novice users.
• Scope for cost reduction in terms of
hardware and resources.
• Maximum no. of players is currently
limited.