Resilience design pattern. What is resilience? What makes it different, and how can we use it? We apply it to two case studies: designing an e-commerce flash-sale event, and designing an Uber-like app.
Static Analysis Security Testing for Dummies... and You | Kevin Fealey
Most enterprise application security teams have at least one Static Analysis Security Testing (SAST) tool in their tool-belt; but for many, the tool never leaves the belt. SAST tools have gotten a reputation for being slow, error-prone, and difficult to use; and out of the box, many of them are – but with a little more knowledge behind how these tools are designed, a SAST tool can be a valuable part of any security program.
In this talk, we’ll help you understand the strengths and weaknesses of SAST tools by illustrating how they trace your code for vulnerabilities. You’ll see out-of-the-box rules for commercial and open-source SAST tools, and learn how to write custom rules for the widely-used open source SAST tool, PMD. We’ll explain the value of customizing tools for your organization; and you’ll learn how to integrate SAST technologies into your existing build and deployment pipelines. Lastly, we’ll describe many of the common challenges organizations face when deploying a new security tool to security or development teams, as well as some helpful hints to resolve these issues.
Apache Kafka is becoming the message bus of choice for transferring huge volumes of data from various sources into Hadoop. It also enables many real-time frameworks and use cases.
Managing Apache Kafka and building clients around it can be challenging. In this talk, we will go through best practices for deploying Apache Kafka in production: how to secure a Kafka cluster, how to pick topic partitions, upgrading to newer versions, and migrating to the new Kafka Producer and Consumer APIs. We will also cover best practices for running producers and consumers.
In the Kafka 0.9 release, we’ve added SSL wire encryption, SASL/Kerberos for user authentication, and pluggable authorization. Kafka now supports user authentication and access control over who can read from and write to a Kafka topic. Apache Ranger also uses a pluggable authorization mechanism to centralize security for Kafka and other Hadoop ecosystem projects.
We will showcase an open-sourced Kafka REST API and an Admin UI that help users create topics, reassign partitions, issue Kafka ACLs, and monitor consumer offsets.
CI/CD with an Idempotent Kafka Producer & Consumer | Kafka Summit London 2022 | Hosted by Confluent
Idempotence is a mathematical property of certain operations: the operation can be applied multiple times without changing the result beyond the initial application.
The main driver behind the idempotency requirement is often to handle duplicated messages. As developers and architects, we need to pay close attention to how we deal with our production data during new deployments to ensure we are not losing any data, duplicating messages, or introducing malformed data into our system. Furthermore, we need to figure out how to automate the process and add testing guarantees to prevent any potential human error.
In this session, you will learn about the idempotent Kafka Producer & Consumer architecture and how to automate the CI/CD process with open-source tools.
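The idempotence property described above can be sketched in a few lines. This is a minimal illustration, not Confluent's implementation: a consumer deduplicates redelivered messages by a unique message ID, so applying the same message twice has no effect beyond the first application. The names `Message` and `process_once` are illustrative assumptions.

```python
# Sketch of an idempotent consumer: duplicates are detected by message ID
# and become no-ops, which is exactly the idempotence property.
from dataclasses import dataclass

@dataclass(frozen=True)
class Message:
    msg_id: str   # unique ID assigned by the producer
    payload: str

class IdempotentConsumer:
    def __init__(self):
        self.seen = set()    # in production this would be a durable store
        self.results = []

    def process_once(self, msg: Message) -> bool:
        """Apply the message's effect at most once; return True if applied."""
        if msg.msg_id in self.seen:
            return False     # duplicate delivery: changes nothing
        self.seen.add(msg.msg_id)
        self.results.append(msg.payload.upper())  # the "effect"
        return True

consumer = IdempotentConsumer()
m = Message("id-1", "hello")
assert consumer.process_once(m) is True
assert consumer.process_once(m) is False   # redelivery is a no-op
assert consumer.results == ["HELLO"]
```

In a real pipeline the `seen` set would live in a database or Redis so deduplication survives restarts.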
Zero Code Multi-Cloud Automation with Ansible and Terraform | Avi Networks
Does your automation require more or less work? Avi's take is less. That’s why Avi offers zero-code multi-cloud automation for Day 0 and Day 1+. DevOps and IT teams can achieve self-service application and infrastructure resources provisioning (Day 0) without writing custom scripts per app or per cloud. We will walk through how to leverage Ansible and Terraform to automate tasks throughout the lifecycle of an application (Day 1+) using YAML-based declarative configurations.
Learn how to:
- Achieve efficient, repeatable, and automated app provisioning without writing code
- Use Ansible roles and modules or Terraform providers to easily automate common tasks
- Deploy across multi-cloud environments with consistent experience without customizations
- Gain visibility into network topology and app performance
- Apply closed-loop analytics to drive automation
Watch the full webinar: https://info.avinetworks.com/webinars-ansible-and-terraform-recipes
Trunk based development and Canary deployment | Hai Lu
Applying trunk-based development and canary deployment to the VinID Platform:
- Intro to trunk based development
- Intro to deployment strategies
- Canary deployment with Kubernetes and Istio
- Acceptance testing and load testing with Postman and K6
- Safe and automation friendly canary deployments with Flagger
- Next year's challenge: Multi-cloud canary deployment with Spinnaker
C* Summit 2013: How Not to Use Cassandra by Axel Liljencrantz | DataStax Academy
At Spotify, we see failure as an opportunity to learn. During the two years we've used Cassandra in our production environment, we have learned a lot. This session touches on some of the exciting design anti-patterns, performance killers and other opportunities to lose a finger that are at your disposal with Cassandra.
Pragmatic Guide to Apache Kafka®'s Exactly Once Semantics | Confluent
Gwen Shapira, Confluent, Engineering Leader
It is easy to find information on how Kafka's exactly once semantics work. It isn't as easy to understand what it all means for you - what is and what is not guaranteed? Which kinds of use-cases are a good fit, and which are unlikely to work as expected? In this talk, we will separate hype from reality and explore what Kafka's Exactly-Once semantics means to developers using Kafka.
https://www.meetup.com/KafkaBayArea/events/276013048/
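The exactly-once guarantees discussed in this talk are enabled largely through client configuration. As a rough sketch (using librdkafka/confluent-kafka style key names; the broker address and `transactional.id` values are placeholders, not from the talk), the relevant settings look like:

```python
# Hedged sketch: typical configuration keys for exactly-once processing with
# confluent-kafka (librdkafka naming). "localhost:9092" and "order-app-1"
# are placeholder values.

producer_config = {
    "bootstrap.servers": "localhost:9092",
    "enable.idempotence": True,         # broker dedupes producer retries
    "transactional.id": "order-app-1",  # enables atomic multi-partition writes
    "acks": "all",                      # required for idempotent producing
}

consumer_config = {
    "bootstrap.servers": "localhost:9092",
    "group.id": "order-app",
    "isolation.level": "read_committed",  # skip records from aborted transactions
    "enable.auto.commit": False,          # commit offsets inside the transaction
}
```

With settings like these, sends and offset commits can be wrapped in one transaction, which is the mechanism behind Kafka's exactly-once semantics.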
Integrating Splunk into your Spring Applications | Damien Dallimore
How much visibility do you really have into your Spring applications? How effectively are you capturing, harnessing and correlating the logs, metrics, and messages from your Spring applications that can be used to deliver this visibility? What tools and techniques are you providing your Spring developers with to better create and utilize this mass of machine data? In this session, I'll answer these questions and show how Splunk can be used not only to provide historical and real-time visibility into your Spring applications, but also as a platform that developers can use to become more "DevOps effective" and easily create custom big data integrations and standalone solutions. I'll discuss and demonstrate many of Splunk's Java apps, frameworks and SDKs, and also cover the Spring Integration adaptors for Splunk.
Learning Rust the Hard Way for a Production Kafka + ScyllaDB Pipeline | ScyllaDB
🎥 Sign up for upcoming webinars or browse through our library of on-demand recordings here: https://www.scylladb.com/resources/webinars/
About this webinar:
Numberly operates business-critical data pipelines and applications where failure and latency mean "lost money" in the best-case scenario. Most of their data pipelines and applications are deployed on Kubernetes and rely on Kafka and ScyllaDB, with Kafka acting as the message bus and ScyllaDB as the source of data for enrichment. The availability and latency of both systems are thus very important for the data pipelines. While most of Numberly's applications are developed in Python, they needed to move high-performance applications to Rust in order to benefit from a lower-level programming language.
Learn the lessons from Numberly’s experience, including:
- Rationale for selecting a lower-level language
- Developing using a lower-level Rust code base
- Observability and analyzing latency impacts with Rust
- Tuning everything from Apache Avro to driver client settings
- How to build a mission-critical system combining Apache Kafka and ScyllaDB
- Feedback from half a year of Rust in production
This is the latest version of the State of the DevSecOps presentation, which was given by Stefan Streichsbier, founder of guardrails.io, as the keynote for the Singapore Computer Society - DevSecOps Seminar in Singapore on the 13th January 2020.
Druid and Hive Together: Use Cases and Best Practices | DataWorks Summit
Two popular open source technologies, Druid and Apache Hive, are often mentioned as viable solutions for large-scale analytics. Hive works well for storing large volumes of data, although it is not optimized for ingesting streaming data and making it available for queries in real time. Druid, on the other hand, excels at low-latency, interactive queries over streaming data and makes data available for queries in real time. Although the high-level messaging presented by both projects may lead you to believe they are competing for the same use case, the technologies are in fact extremely complementary.
By combining the rich query capabilities of Hive with the powerful real-time streaming and indexing capabilities of Druid, we can build more powerful, flexible, and extremely low latency real-time streaming analytics solutions. In this talk we will discuss the motivation to combine Hive and Druid, along with the benefits, use cases, best practices and benchmark numbers.
The agenda of the talk:
1. Motivation behind integrating Druid with Hive
2. Druid and Hive together - benefits
3. Use Cases with Demos and architecture discussion
4. Best Practices - Do's and Don'ts
5. Performance vs Cost Tradeoffs
6. SSB Benchmark Numbers
Agile Velocity - Deliver double the value in half the time | David Hawks
Learn practical techniques to guide your teams and escape the top 6 traps preventing organizations from realizing the full benefits of agile.
64% of product features built in software development are rarely or never used. Too many teams focus on increasing the amount of output. Not enough teams focus on delivering the most value with the least amount of output. In this interactive presentation, David Hawks will share the key factors that sabotage product success and what to do about it. Learn practical tools and techniques that accelerate learning throughout the product development cycle to deliver double the value in half the time.
Converting an idea or even a lab prototype into a real, customer-ready product is no simple task. Steve Carkner of Panacis Medical discusses the topic of product development.
At the FileMaker Konferenz 2016 in Salzburg, HOnza Koudelka explains how to build an audit solution with FileMaker and how to improve its performance.
This presentation by Kyle Sherman, LinkedIn iOS developer for the SlideShare iOS app, covers fixing jittery scroll performance in iOS applications: the basics of using Instruments to measure and fix problems, tips for using Instruments, and a concrete example from the new LinkedIn iOS flagship application.
A lean automation blueprint for testing in continuous delivery | Sauce Labs
Testing in continuous delivery changes test automation: it demands more automation, but also requires immediate feedback. Many test teams today suffer from one of two extremes: little or no automation at one end, or hundreds of thousands of tests constantly running on all kinds of VMs and taking multiple days to execute at the other. Either state makes any hope of continuous delivery or pipeline automation unsustainable.
[WSO2Con EU 2017] Resilience Patterns with Ballerina | WSO2
Today almost all systems are distributed and have complex interactions with each other to provide useful functionality. In a software system, resilience is the ability to recover to a working condition after being affected by a serious incident. Ballerina has built-in functionality to make programs resilient to network failures. This slide deck explores how to build resilience patterns with Ballerina.
Meet TransmogrifAI, Open Source AutoML That Powers Einstein Predictions | Matthew Tovbin
Despite huge progress in machine learning over the past decade, building production-ready machine learning systems is still hard. Three years ago, when we set out to build machine learning capabilities into the Salesforce platform, we learned that building enterprise-scale machine learning systems is even harder. To solve the problems we encountered, we built TransmogrifAI (https://transmogrif.ai) (pronounced trans-mog-ri-phi), an end-to-end automated machine learning library for structured data that is used in production today to help power our Salesforce Einstein AI platform. This talk highlights key capabilities of the TransmogrifAI library and demonstrates them in action on a real-life machine learning application.
About
An indigenized remote-control interface card suitable for MAFI-system CCR equipment and compatible with the IDM8000 CCR: a backplane-mounted serial and TCP/Ethernet communication module for CCR remote access, providing IDM 8000 CCR remote control over serial and TCP protocols.
• Remote control: Parallel or serial interface.
• Compatible with MAFI CCR system.
• Compatible with IDM8000 CCR.
• Compatible with Backplane mount serial communication.
• Compatible with commercial and Defence aviation CCR system.
• Remote control system for accessing CCR and allied system over serial or TCP.
• Indigenized local Support/presence in India.
• Easy configuration using DIP switches.
Final project report on grocery store management system | Kamal Acharya
In today’s fast-changing business environment, it is extremely important to be able to respond to client needs in the most effective and timely manner, and customers increasingly expect to find your business online with instant access to your products or services.
Online Grocery Store is an e-commerce website that retails various grocery products. The project lets visitors view the available products, enables registered users to purchase desired products instantly using the Paytm and UPI payment processors (Instant Pay), and also lets them place orders using a Cash on Delivery (Pay Later) option. It gives administrators and managers easy access to view orders placed with both the Pay Later and Instant Pay options.
To develop an e-commerce website, a number of technologies must be studied and understood. These include multi-tiered architecture, server- and client-side scripting techniques, implementation technologies, programming languages (such as PHP, HTML, CSS, and JavaScript), and the MySQL relational database. The objective of this project is to develop a basic shopping-cart website for consumers and to learn about the technologies used to build such a website.
This document will discuss each of the underlying technologies to create and implement an e- commerce website.
Industrial Training at Shahjalal Fertilizer Company Limited (SFCL) | MdTanvirMahtab2
This presentation covers the working procedures of Shahjalal Fertilizer Company Limited (SFCL), a government-owned company of the Bangladesh Chemical Industries Corporation under the Ministry of Industries.
Saudi Arabia stands as a titan in the global energy landscape, renowned for its abundant oil and gas resources. It's the largest exporter of petroleum and holds some of the world's most significant reserves. Let's delve into the top 10 oil and gas projects shaping Saudi Arabia's energy future in 2024.
Hierarchical Digital Twin of a Naval Power System | Kerry Sado
A hierarchical digital twin of a Naval DC power system has been developed and experimentally verified. Similar to other state-of-the-art digital twins, this technology creates a digital replica of the physical system executed in real-time or faster, which can modify hardware controls. However, its advantage stems from distributing computational efforts by utilizing a hierarchical structure composed of lower-level digital twin blocks and a higher-level system digital twin. Each digital twin block is associated with a physical subsystem of the hardware and communicates with a singular system digital twin, which creates a system-level response. By extracting information from each level of the hierarchy, power system controls of the hardware were reconfigured autonomously. This hierarchical digital twin development offers several advantages over other digital twins, particularly in the field of naval power systems. The hierarchical structure allows for greater computational efficiency and scalability while the ability to autonomously reconfigure hardware controls offers increased flexibility and responsiveness. The hierarchical decomposition and models utilized were well aligned with the physical twin, as indicated by the maximum deviations between the developed digital twin hierarchy and the hardware.
Hybrid optimization of pumped hydro system and solar | Engr. Abdul-Azeez | fxintegritypublishin
Advancements in technology unveil a myriad of electrical and electronic breakthroughs geared towards efficiently harnessing limited resources to meet human energy demands. The optimization of hybrid solar PV panels and pumped hydro energy supply systems plays a pivotal role in utilizing natural resources effectively. This initiative not only benefits humanity but also fosters environmental sustainability. The study investigated the design optimization of these hybrid systems, focusing on understanding solar radiation patterns, identifying geographical influences on solar radiation, formulating a mathematical model for system optimization, and determining the optimal configuration of PV panels and pumped hydro storage. Through a comparative analysis approach and eight weeks of data collection, the study addressed key research questions related to solar radiation patterns and optimal system design. The findings highlighted regions with heightened solar radiation levels, showcasing substantial potential for power generation and emphasizing the system's efficiency. Optimizing system design significantly boosted power generation, promoted renewable energy utilization, and enhanced energy storage capacity. The study underscored the benefits of optimizing hybrid solar PV panels and pumped hydro energy supply systems for sustainable energy usage. Optimizing the design of solar PV panels and pumped hydro energy supply systems as examined across diverse climatic conditions in a developing country, not only enhances power generation but also improves the integration of renewable energy sources and boosts energy storage capacities, particularly beneficial for less economically prosperous regions. Additionally, the study provides valuable insights for advancing energy research in economically viable areas. 
Recommendations included conducting site-specific assessments, utilizing advanced modeling tools, implementing regular maintenance protocols, and enhancing communication among system components.
Cosmetic shop management system project report | Kamal Acharya
Buying new cosmetic products is difficult. It can even be scary for those who have sensitive skin and are prone to skin trouble. The information needed to alleviate this problem is on the back of each product, but it's tough to interpret those ingredient lists unless you have a background in chemistry.
Instead of buying and hoping for the best, we can use data science to help us predict which products may be good fits for us. The project includes various functions and programs to carry out the tasks mentioned above.
Data file handling has been effectively used in the program.
The automated cosmetic shop management system deals with the automation of the general workflow and administration processes of the shop. The main processes of the system focus on customer requests, where the system is able to find the most appropriate products and deliver them to the customers. It helps the employees quickly identify the cosmetic products that have reached the minimum quantity, keeps track of the expiry date for each cosmetic product, and helps the employees find the rack number in which a product is placed. It is also a faster and more efficient way of working.
Student information management system project report II | Kamal Acharya
Our project is about student management. It covers the various actions related to student details, makes it easy to add, edit and delete student details, and provides a less time-consuming process for viewing, adding, editing and deleting students' marks.
10. Disaster Recovery
A good disaster recovery solution is all about getting your business’s valuable data and operations out of the failed infrastructure and running again on new infrastructure. Disaster recovery is not about saving the infrastructure. Disaster recovery is about saving the business.
11. High Availability
A high availability design means that the outage will be brief, because it will not take long to redeploy the required component. Unlike replacing a tire, with high availability the redeployment is completely automated following a component failure.
45. Challenges
• A smart queue: it’s impossible to call back “users”, so avoid sticky sessions.
• Keep the frontend stateless, so machines can be launched quickly and the load shared.
• Use a shared backend that can afford the capacity, and minimize state.
• Counters to track: # of total requests / second, # of limited requests to checkout, # of current inventory, # of planned inventory.
• 3-round burst.
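The counters the slide lists can be combined into a simple admission gate for checkout. The sketch below is an assumed design, not the deck's actual code: a fixed-window counter that tracks total requests per second and caps how many are admitted to checkout.

```python
# Sketch of a fixed-window checkout gate tracking the slide's counters:
# total requests/sec and requests admitted to checkout, with a cap.
import time

class CheckoutGate:
    def __init__(self, max_per_window: int, window_secs: float = 1.0,
                 clock=time.monotonic):
        self.max_per_window = max_per_window
        self.window_secs = window_secs
        self.clock = clock
        self.window_start = clock()
        self.total_requests = 0   # "# of total requests / second"
        self.admitted = 0         # requests let through to checkout

    def try_admit(self) -> bool:
        now = self.clock()
        if now - self.window_start >= self.window_secs:
            self.window_start = now
            self.admitted = 0     # new window: reset the admission counter
        self.total_requests += 1
        if self.admitted < self.max_per_window:
            self.admitted += 1
            return True
        return False              # over capacity: turn the request away

gate = CheckoutGate(max_per_window=2)
decisions = [gate.try_admit() for _ in range(5)]
assert decisions == [True, True, False, False, False]
assert gate.total_requests == 5
```

In production the counters would live in Redis so all stateless frontends share one view of the gate.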
60. Key Takeaways – 3-round burst
• Use O(1) commands; don’t use O(n) commands.
• PHP and Node.js are different: you can’t expect “microsecond” delay intervals from Node.js callbacks.
• Use a persistent connection to the Redis server to reduce the connection-open overhead.
• You must execute load tests.
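The persistent-connection takeaway is easy to see with a toy model. This is an illustration only: `FakeConnection` stands in for a real Redis client connection (an assumption, since no server is available here), and the counter shows how many connection setups each strategy pays for.

```python
# Illustrative sketch: reusing one persistent connection vs opening a new
# connection per command. Each open stands in for a TCP (+AUTH) round trip.

class FakeConnection:
    opens = 0
    def __init__(self):
        FakeConnection.opens += 1   # count every connection setup

class Client:
    def __init__(self, persistent: bool):
        self.persistent = persistent
        self.conn = FakeConnection() if persistent else None

    def execute(self, cmd: str) -> str:
        conn = self.conn if self.persistent else FakeConnection()
        return f"OK {cmd}"          # pretend the command succeeded

FakeConnection.opens = 0
naive = Client(persistent=False)
for i in range(100):
    naive.execute(f"GET key:{i}")
naive_opens = FakeConnection.opens   # one open per command

FakeConnection.opens = 0
persistent = Client(persistent=True)
for i in range(100):
    persistent.execute(f"GET key:{i}")

assert naive_opens == 100
assert FakeConnection.opens == 1     # a single reused connection
```

The O(1)-commands point is analogous: `GET`/`SET` cost constant time per call, while commands like `KEYS` scan the whole keyspace and stall the server under flash-sale load.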
Architecture
67. One-person limit – Client side and Server side
• Client side: use a token or cookie.
  • Some flash-sale designs: if you did not win a “lottery” ticket on your first connect, you never get another chance unless you clean up your cookies and cache.
  • Benefit: easy to implement and scale.
• Server side: use a Redis server.
  • Keep the user ID to record completed payments.
  • Drawback: you need to change your checkout logic for this special case.
Suggestion: don’t add this business constraint.
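The server-side variant above maps naturally onto Redis's SETNX (set-if-not-exists) semantics: the first claim for a user ID wins, and repeats are rejected. The sketch below is an assumed design with a plain dict standing in for Redis; the key format `sale:2024:claimed:<user>` is a hypothetical naming choice.

```python
# One-person purchase limit, server side: a dict mimics Redis SETNX so the
# first claim per user succeeds and all later attempts are rejected.

class ClaimStore:
    def __init__(self):
        self.store = {}

    def setnx(self, key: str, value: str) -> bool:
        """Mimics Redis SETNX: set only if the key does not already exist."""
        if key in self.store:
            return False
        self.store[key] = value
        return True

def claim_purchase(store: ClaimStore, user_id: str) -> bool:
    # One successful claim per user for this sale (key name is illustrative).
    return store.setnx(f"sale:2024:claimed:{user_id}", "done")

store = ClaimStore()
assert claim_purchase(store, "alice") is True
assert claim_purchase(store, "alice") is False   # second attempt rejected
assert claim_purchase(store, "bob") is True
```

With real Redis this is a single atomic command per checkout, so the limit holds even across many stateless frontends.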
93. Token-Based vs Cookie-Based
Benefits:
• Stateless: the backend doesn’t need to keep token records.
• Scalable: the server doesn’t need to consider O(n) lookups.
• Decoupled: the app and the security server can be separated.
• Cross-Origin Resource Sharing (CORS): cookies can’t cross domains.
• Mobile ready.
Drawbacks:
• Size.
• Encode/decode: don’t put sensitive data in the token.
95. Dedupe – how can we quickly do it?
• Design a unique key: use the JWT signature.
• SET the key with an EXPIRE in Redis; if EXISTS(key), ask the user to confirm.
• Time-slot window: use the JWT expiry value (for example, 10 minutes).
• Compare against the data table.
Benefits:
• Avoids scanning all the data.
• Uses NoSQL for a quick check.
• The user confirms the order within the short time-slot window.
Be aware:
• Key length: at most 64 bytes for a SHA-256 signature.
• Can’t use memcached (250-character key limit).
• Keep the data table and the Redis key atomically in sync.
Redis: JWT + EXPIRE.
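The dedupe flow above reduces duplicate detection to one O(1) key lookup: the JWT signature is the key, and the expiry gives the time-slot window. This sketch is an assumed design with a dict plus timestamps standing in for Redis SET + EXPIRE; the signature string is a hypothetical value.

```python
# Dedupe sketch: the JWT signature acts as a unique key with an expiry
# window, so duplicate-order detection is one key lookup, not a table scan.
import time

class DedupeCache:
    def __init__(self, ttl_secs: float = 600.0, clock=time.monotonic):
        self.ttl = ttl_secs          # e.g. a 10-minute window
        self.clock = clock
        self.keys = {}               # signature -> expiry; mimics SET + EXPIRE

    def first_seen(self, signature: str) -> bool:
        now = self.clock()
        expiry = self.keys.get(signature)
        if expiry is not None and expiry > now:
            return False             # duplicate in the window: ask user to confirm
        self.keys[signature] = now + self.ttl
        return True

cache = DedupeCache(ttl_secs=600)
sig = "jwt-signature-abc123"         # hypothetical JWT signature value
assert cache.first_seen(sig) is True
assert cache.first_seen(sig) is False   # same order resubmitted in the window
```

The "be aware" items still apply: the real store must update the orders table and this key atomically, or a crash between the two leaves them out of sync.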
147. Grid (1990s ~ 2007)
Grid computing is the collection of computer resources from multiple locations to reach a common goal.
• Loosely coupled
• Heterogeneous network
• Geographically dispersed
158. Fallacies of distributed computing
1. The network is reliable.
2. Latency is zero.
3. Bandwidth is infinite.
4. The network is secure.
5. Topology doesn’t change.
6. There is one administrator.
7. Transport cost is zero.
8. The network is homogeneous.
• Must have error handling for networking errors.
• Minimize communication costs: bandwidth, size, latency, frequency, etc.
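Because the first fallacy says the network is never actually reliable, every remote call needs error handling. A common pattern for that (an illustrative choice here, not something prescribed by the slides) is retry with exponential backoff:

```python
# Retry with exponential backoff: re-attempt only transient network errors,
# doubling the delay between attempts.
import time

def with_retries(call, attempts: int = 3, base_delay: float = 0.01,
                 sleep=time.sleep):
    last_exc = None
    for attempt in range(attempts):
        try:
            return call()
        except ConnectionError as exc:         # retry only transient failures
            last_exc = exc
            sleep(base_delay * (2 ** attempt))  # 10 ms, 20 ms, 40 ms, ...
    raise last_exc

calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("network blip")
    return "ok"

# sleep is stubbed out so the example runs instantly.
assert with_retries(flaky, sleep=lambda _: None) == "ok"
assert calls["n"] == 3
```

This also respects the cost fallacies: backoff bounds how often a struggling dependency is hammered, keeping the communication frequency down.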
165. ACID Model (CA)
• Atomic: all operations in a transaction succeed, or every operation is rolled back.
• Consistent: on the completion of a transaction, the database is structurally sound.
• Isolated: transactions do not contend with one another; contentious access to data is moderated by the database so that transactions appear to run sequentially.
• Durable: the results of applying a transaction are permanent, even in the presence of failures.
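The Atomic property above can be shown with a toy store: either every write in the batch is applied, or the whole batch rolls back. This is an illustration only; real databases use write-ahead logs rather than full copies, and `TinyDB` is a made-up name.

```python
# Toy illustration of atomicity: apply a batch of writes all-or-nothing,
# rolling back to a snapshot if any write fails.

class TinyDB:
    def __init__(self):
        self.data = {}

    def transact(self, ops):
        """ops: list of (key, value) writes applied all-or-nothing."""
        snapshot = dict(self.data)   # real DBs use logs, not full copies
        try:
            for key, value in ops:
                if not isinstance(key, str):
                    raise ValueError("keys must be strings")
                self.data[key] = value
        except Exception:
            self.data = snapshot     # roll back: structurally sound again
            raise

db = TinyDB()
db.transact([("a", 1), ("b", 2)])
assert db.data == {"a": 1, "b": 2}

try:
    db.transact([("c", 3), (42, "bad-key")])   # second write fails
except ValueError:
    pass
assert db.data == {"a": 1, "b": 2}   # the partial write of "c" was undone
```

The rollback step is also what keeps the Consistent property: a failed transaction leaves the database exactly as it found it.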