This document provides guidance on optimizing application performance for Azure. It discusses several techniques:
1. Using event-driven messaging in Service Bus to process messages asynchronously and improve throughput.
2. Avoiding lazy loading of entity data for complex processing scenarios to reduce unnecessary data retrieval.
3. Token size and the use of encryption in the Access Control Service can affect performance; smaller tokens, and encrypting only when necessary, improve it.
4. Data serialization is important for Azure applications due to costs for storage, transfers and caching; efficient serialization optimizes performance and costs.
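To make the serialization point concrete, here is a small Python sketch (independent of any Azure SDK; the record fields are invented) comparing a JSON payload with a compact binary packing of the same data:

```python
import json
import struct

# A telemetry record an application might queue, cache, or store.
record = {"sensor_id": 42, "temperature": 21.5, "humidity": 63.2}

# JSON: readable and flexible, but field names travel with every record.
as_json = json.dumps(record).encode("utf-8")

# Compact binary packing (schema agreed out of band): one int32, two float64s.
as_binary = struct.pack("<idd", record["sensor_id"],
                        record["temperature"], record["humidity"])

print(len(as_json), len(as_binary))  # the binary form is far smaller
```

The trade-off is flexibility: the binary layout must be agreed on by producer and consumer, while JSON is self-describing.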
IRJET - Confidential Image De-Duplication in Cloud Storage (IRJET Journal)
This document proposes a confidential image de-duplication system for cloud storage. It introduces a hybrid cloud architecture using both public and private clouds. To provide greater security, the private cloud employs tiered authentication. The system performs de-duplication by comparing hash values of files generated using MD5 and SHA algorithms, to detect duplicate files and reduce storage usage. It encrypts files using AES before storage in the cloud. The private cloud server manages encryption keys and performs de-duplication checks by comparing file hashes and contents. This allows detection of duplicate files while preserving data privacy through encryption.
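A minimal Python sketch of the hash-comparison step described above (the digest pairing follows the paper's use of MD5 and SHA; the AES encryption of file contents before upload is omitted here):

```python
import hashlib

def file_digests(data: bytes):
    """Compute the MD5 and SHA-256 digests used for the duplicate check."""
    return (hashlib.md5(data).hexdigest(), hashlib.sha256(data).hexdigest())

class DedupIndex:
    """Toy de-duplication index: store a file only if its digests are new."""
    def __init__(self):
        self._seen = {}  # digest pair -> name of the stored object

    def store(self, name: str, data: bytes) -> str:
        key = file_digests(data)
        if key in self._seen:
            return self._seen[key]   # duplicate: point at the existing object
        self._seen[key] = name
        return name

dedup = DedupIndex()
print(dedup.store("photo_a.jpg", b"...image bytes..."))  # stored as photo_a.jpg
print(dedup.store("photo_b.jpg", b"...image bytes..."))  # duplicate -> photo_a.jpg
```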
IRJET- Continuous Auditing Approach to the Cloud Service Addressing Attri... (IRJET Journal)
This document proposes a continuous auditing approach for cloud services to address security attributes. It discusses using a third-party auditor to continuously audit selected security certification criteria of cloud services to increase trust in certificates over time as the cloud environment changes. The document outlines a system where data owners can delegate auditing to a trusted third party, allowing audits to be done publicly and efficiently while protecting data privacy. It describes desirable properties for such a public auditing system, such as minimizing overhead, protecting data privacy during audits, supporting dynamic data changes, and allowing the third party to efficiently handle multiple concurrent auditing tasks.
Trusted Hardware Database With Privacy And Data Confidentiality (theijes)
The International Journal of Engineering & Science is aimed at providing a platform for researchers, engineers, scientists, or educators to publish their original research results, to exchange new ideas, to disseminate information in innovative designs, engineering experiences and technological skills. It is also the Journal's objective to promote engineering and technology education. All papers submitted to the Journal will be blind peer-reviewed. Only original articles will be published.
Kerberos Survival Guide - St. Louis Day of .Net (J.D. Wade)
This document provides an overview and introduction to Kerberos authentication. It discusses the logon process, accessing a web site, troubleshooting Kerberos, and delegation. The presenter JD Wade is a SharePoint consultant who will demonstrate how Kerberos works and common troubleshooting techniques. The agenda includes details on the Kerberos protocol, dependencies, service principal names, and references for further reading.
The Three Musketeers (Authentication, Authorization, Accounting) (Sarah Conway)
The document discusses authentication, authorization and accounting (AAA) in PostgreSQL. It provides an overview of the AAA model and covers topics like authentication methods, user accounts, SSL configuration, and authorization files like pg_hba.conf and postgresql.conf. Specific configuration options for authentication timeouts, SSL certificates and other security settings are also examined.
This document proposes a system for secure and dependable storage in cloud computing. It introduces key challenges with cloud data security and proposes a distributed storage solution with lightweight communication and computation. The solution ensures strong data security, fast error detection, and supports dynamic operations on outsourced data. It uses algorithms like Byzantine fault tolerance and Reed-Solomon coding to detect errors and recover from failures. An overview of the system architecture, modules, use cases and technologies used is also provided.
MMB Cloud-Tree: Verifiable Cloud Service Selection (IJAEMSJORNAL)
In the existing cloud brokerage system, the client has no way to verify the result of the cloud service selection. The cloud broker may be biased in selecting the best Cloud Service Provider (CSP) for a client: a compromised or dishonest broker can unfairly select a CSP for its own advantage by cooperating with the selected CSP. To address this problem, we propose a mechanism to verify the broker's CSP selection result. In this verification mechanism, the properties of every CSP are also verified. A trusted third party gathers the clustering results from the cloud broker and also serves as a base station collecting CSP properties in a multi-agent system. Software agents are installed and run on every CSP, monitoring it as the customer's representative inside the cloud. These agents report to the third party, which must be trusted by the CSPs, the customers, and the cloud broker, and which provides transparency by publishing reports to the authorized parties (CSPs and customers).
Based on the Star Wars theme, this session focuses on how Java EE 7 provides an extensive set of new and enhanced features to support standards such as HTML5, WebSocket, and Server-Sent Events, among others. The session shows how these features are designed to work together for developing lightweight solutions that match end users' high expectations for web application responsiveness. It covers best practices and design patterns, associating the technologies with analogies from Star Wars. So join me in this fun-filled talk where technology meets science and innovation.
May the force be with you!
FRONT END AND BACK END DATABASE SECURITY IN THREE TIER WEB APPLICATION (ijiert bestjournal)
This document discusses security techniques for front-end and back-end databases in three-tier web applications. It proposes a double security system that assigns each user session to a dedicated container or virtual computing environment. This allows the system to map and profile activity between the web server and database server, enabling it to detect attacks. The system separates traffic by session, analyzes HTTP requests and SQL queries, maps requests to queries, and can detect direct database attacks or SQL injection attacks by checking for unmapped queries.
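The unmapped-query check described above can be sketched in a few lines; the endpoint names and SQL templates here are hypothetical:

```python
# Known-good mapping learned during a profiling (training) phase: each HTTP
# request type is allowed to trigger only specific SQL query templates.
ALLOWED_QUERIES = {
    "GET /users": {"SELECT id, name FROM users WHERE id = ?"},
    "POST /login": {"SELECT pwd_hash FROM accounts WHERE user = ?"},
}

def session_is_clean(http_request: str, observed_queries: set) -> bool:
    """A session is clean if every SQL query observed in its container
    was mapped to this HTTP request during profiling."""
    return observed_queries <= ALLOWED_QUERIES.get(http_request, set())

# Normal traffic: the observed query matches the profiled mapping.
print(session_is_clean("GET /users",
                       {"SELECT id, name FROM users WHERE id = ?"}))
# An injected query deviates from every profiled template and is flagged.
print(session_is_clean("GET /users",
                       {"SELECT id, name FROM users WHERE id = ? OR 1=1"}))
```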
This document describes CryptDB, a system that allows unmodified database servers to store and query encrypted data while providing confidentiality guarantees. It addresses the key challenges of supporting SQL queries on encrypted data, carefully defining and achieving privacy even with an untrusted database server, and making the system practical. CryptDB uses an SQL-aware encryption strategy with encryption schemes that allow queries to be executed directly on ciphertexts. It also employs adjustable query-based encryption to dynamically adjust encryption levels based on the queries used. Data is encrypted in layers from weaker to stronger encryption to efficiently adjust levels as needed. The goal is to enable standard SQL queries over encrypted data without client-side processing or database modifications while protecting data confidentiality.
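The equality-preserving (deterministic) layer can be modeled in a few lines of Python. An HMAC here only stands in for the deterministic encryption scheme CryptDB actually uses: it preserves equality but, unlike real encryption, is not decryptable, so this is purely a sketch of the property that lets the server run equality queries on ciphertexts:

```python
import hashlib
import hmac

# Per-column key held by the trusted proxy, never by the database server.
COLUMN_KEY = b"column-specific key (illustrative)"

def det_token(value: str) -> str:
    """Equality-preserving token: equal plaintexts map to equal tokens, so
    the server can evaluate WHERE col = ? without seeing any plaintext."""
    return hmac.new(COLUMN_KEY, value.encode(), hashlib.sha256).hexdigest()

# The proxy stores tokens and rewrites query constants into tokens as well.
stored_column = [det_token(name) for name in ["alice", "bob", "alice"]]
needle = det_token("alice")
matches = [i for i, tok in enumerate(stored_column) if tok == needle]
print(matches)  # rows 0 and 2 match, found without any decryption
```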
Mmb authenticated index for verifiable (Kamal Spring)
Cloud brokers have been recently introduced as an additional computational layer to facilitate cloud selection and service management tasks for cloud consumers. However, existing brokerage schemes on cloud service selection typically assume that brokers are completely trusted, and do not provide any guarantee over the correctness of the service recommendations. It is then possible for a compromised or dishonest broker to easily take advantage of the limited capabilities of the clients and provide incorrect or incomplete responses. To address this problem, we propose an innovative Cloud Service Selection Verification (CSSV) scheme and index structures (MMBcloud-tree) to enable cloud clients to detect misbehavior of the cloud brokers during the service selection process. We demonstrate correctness and efficiency of our approaches both theoretically and empirically.
Cued click point image based kerberos authentication protocol (IAEME Publication)
The document presents a proposed authentication system that combines cued click point (CCP) graphical passwords with the Kerberos authentication protocol. CCP uses a sequence of images where the user selects one click point per image. This is made more secure through the addition of a sound signature. The system aims to address weaknesses in text passwords by leveraging human memory for visual information. It also utilizes Kerberos to provide network security and mutual authentication between clients and servers. The proposed model would allow administrators to assign user credentials for system access. Users would select a tolerance level and set graphical passwords by choosing images and click points. Their profile would be generated and the entire login process secured using Kerberos authentication.
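A sketch of the click-point tolerance check at the core of CCP; the coordinates and tolerance value are illustrative, and the sound signature and Kerberos integration are not modeled here:

```python
def within_tolerance(click, stored, tolerance):
    """Accept a click if it falls inside the tolerance square around the
    stored click point, as in cued click point schemes."""
    (x, y), (sx, sy) = click, stored
    return abs(x - sx) <= tolerance and abs(y - sy) <= tolerance

def verify_password(clicks, stored_points, tolerance=10):
    """One click per image; all must match their stored points in order."""
    return (len(clicks) == len(stored_points) and
            all(within_tolerance(c, s, tolerance)
                for c, s in zip(clicks, stored_points)))

stored = [(120, 45), (300, 210), (75, 160)]   # chosen at enrollment
print(verify_password([(118, 49), (305, 205), (80, 158)], stored))  # close enough
print(verify_password([(118, 49), (400, 205), (80, 158)], stored))  # second click off
```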
Towards secure and dependable storage service in cloud (sibidlegend)
The document proposes a distributed storage integrity auditing mechanism for cloud data storage that allows for lightweight communication and computation during audits. The proposed design ensures strong correctness guarantees for stored data and enables fast error localization to identify misbehaving servers. It also supports secure and efficient dynamic operations like modifying, deleting, and appending blocks of outsourced data. Analysis shows the scheme is efficient and resilient against various attacks.
Service operator aware trust scheme for resource (jayaramb)
The document proposes a service operator-aware trust scheme (SOTS) for resource matchmaking across multiple clouds. SOTS uses a middleware framework to evaluate trust based on multi-dimensional resource service operators to improve dependability. The broker can efficiently select the most trusted resources in advance using an adaptive trust evaluation approach based on information entropy theory. This overcomes limitations of traditional schemes that manually weight trust factors.
Towards secure & dependable storage services in cloud computing (Rahid Abdul Kalam)
The document discusses a project presented towards secure and dependable storage services in cloud computing. It discusses algorithms used including Byzantine Fault Tolerance and Reed-Solomon, and covers existing system limitations. It then describes the design modules including login, user registration, client manipulation, and administrator login and manipulation. Finally it discusses operational modules, use case diagrams, class diagrams, flowcharts, and dataflow diagrams related to the project.
This document provides an overview and guide to Kerberos authentication including:
- The logon process involving the KDC and TGTs
- Accessing a web site using Kerberos and the request for a service ticket
- Common troubleshooting steps like checking SPNs and time sync
- Demos of delegation and forms-based authentication
- References for further Kerberos reading
- Firewalls provide protection from external threats but not internal threats, where Kerberos authentication comes in. Kerberos uses encryption and tickets to verify the identity of clients and servers on a network, preventing data sniffing and impersonation. It involves an authentication server that issues session keys and tickets to allow communication between clients and servers.
- Kerberos is an authentication protocol that allows clients to prove their identity to servers in a secure manner. It uses tickets and encryptions to authenticate users and allows authorized access to resources.
- The logon process involves a client getting a ticket-granting ticket from the key distribution center after proving their identity, which can then be used to request service tickets to access specific resources.
- Common issues that can break Kerberos authentication include time synchronization problems, incorrect service principal name configurations, expired tickets, and non-default port configurations.
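The ticket flow in the bullets above can be caricatured in a few lines of Python. An HMAC stands in for Kerberos's symmetric encryption, and all key material and names are invented; this only illustrates why a matching service key proves a ticket came from the KDC:

```python
import hashlib
import hmac

def seal(key: bytes, data: bytes) -> bytes:
    """Stand-in for Kerberos ticket encryption: an HMAC binds data to a key."""
    return hmac.new(key, data, hashlib.sha256).digest()

# Long-term keys known to the KDC (illustrative values).
KDC_KEY = b"kdc long-term secret"
SERVICE_KEY = b"key shared between KDC and the HTTP/web01 service"

# 1. AS exchange: after the client proves its identity, the KDC issues a TGT.
tgt = seal(KDC_KEY, b"user=alice")

# 2. TGS exchange: presenting the TGT, the client requests a service ticket
#    for a specific service principal name (SPN).
spn = b"HTTP/web01.example.com"
service_ticket = seal(SERVICE_KEY, b"user=alice;spn=" + spn)

# 3. AP exchange: the service recomputes the seal with its own key; a match
#    proves the ticket was issued by the KDC -- no password crossed the wire.
accepted = service_ticket == seal(SERVICE_KEY, b"user=alice;spn=" + spn)
print("service accepts ticket:", accepted)
```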
This document provides an overview and agenda for a Kerberos survival guide presentation. The presentation will cover Kerberos logon process, accessing a web site using Kerberos, miscellaneous Kerberos information, and complex Kerberos configurations. It includes dependencies, service principal names (SPNs), and troubleshooting tools for Kerberos. The presentation aims to provide essential information about Kerberos without overcomplicating details.
Survey on Restful Web Services Using Open Authorization (Oauth) - I01545356 (IOSR Journals)
Abstract: Web services are application programming interfaces (APIs), or web APIs, accessed through the Hypertext Transfer Protocol (HTTP) to execute on a remote system hosting the requested services. A RESTful web service is an emerging technology and a lightweight approach that does not restrict client-server communication. The open authorization (OAuth) 2.0 protocol enables users to grant third-party applications access to their web resources without sharing their login credentials. The Authorization Server includes authorization information with the Access Token and signs it; an access token can be reused until it expires. An authentication filter is used for business services. This paper presents secure communication at the message level with minimum overhead and provides fine-grained authenticity using the Jersey framework.
Keywords: Open authorization (OAuth), RESTful web services, HTTP protocol, uniform resource identifier (URI).
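A Python stand-in for the token check the abstract describes (the paper uses a Jersey authentication filter in Java; the class names and fields here are illustrative):

```python
import time

class AccessToken:
    """Minimal model of an OAuth 2.0 access token: issued with a scope and
    a lifetime, reusable until it expires."""
    def __init__(self, subject: str, scope: set, lifetime_s: float):
        self.subject = subject
        self.scope = scope
        self.expires_at = time.time() + lifetime_s

class AuthFilter:
    """Toy authentication filter guarding a RESTful business service."""
    def allow(self, token: AccessToken, required_scope: str) -> bool:
        # Reject expired tokens and tokens lacking the required scope.
        return time.time() < token.expires_at and required_scope in token.scope

token = AccessToken("alice", {"orders:read"}, lifetime_s=3600)
fltr = AuthFilter()
print(fltr.allow(token, "orders:read"))   # valid and in scope
print(fltr.allow(token, "orders:write"))  # scope was never granted
```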
Secure data transfer and deletion from counting bloom filter in cloud computing (Venkat Projects)
The document discusses a proposed system for secure data transfer and deletion from one cloud to another. It aims to achieve verifiable data transfer and reliable data deletion without a trusted third party. The system uses a counting Bloom filter scheme to allow a data owner, original cloud, and target cloud to verify that data was completely and accurately transferred or deleted. The scheme ensures data confidentiality, integrity, and public verifiability during the transfer and deletion processes.
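A counting Bloom filter differs from a plain Bloom filter in that each bit becomes a counter, which is what makes deletion (and hence deletion checking) possible. A minimal sketch, with illustrative sizes and none of the paper's verification protocol:

```python
import hashlib

class CountingBloomFilter:
    """Counting Bloom filter: each slot holds a counter instead of a bit,
    so elements can be removed by decrementing their slots."""
    def __init__(self, size=64, hashes=3):
        self.size, self.hashes = size, hashes
        self.counts = [0] * size

    def _slots(self, item: bytes):
        # Derive several slot indices by salting one hash function.
        for i in range(self.hashes):
            h = hashlib.sha256(bytes([i]) + item).digest()
            yield int.from_bytes(h[:4], "big") % self.size

    def add(self, item: bytes):
        for s in self._slots(item):
            self.counts[s] += 1

    def remove(self, item: bytes):
        for s in self._slots(item):
            self.counts[s] -= 1

    def might_contain(self, item: bytes) -> bool:
        # False means definitely absent; True may be a false positive.
        return all(self.counts[s] > 0 for s in self._slots(item))

cbf = CountingBloomFilter()
cbf.add(b"block-17")
print(cbf.might_contain(b"block-17"))  # present
cbf.remove(b"block-17")
print(cbf.might_contain(b"block-17"))  # gone after deletion
```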
This document discusses preserving data integrity in cloud computing through third party auditing. It introduces an effective third party auditor that can perform multiple auditing tasks simultaneously using the technique of bilinear aggregate signature. This reduces computation costs and storage overhead for integrity verification. The system supports dynamic data operations through techniques like fragment structure, random sampling and an index-hash table. It also allows efficient scheduling of audit activities in an audit period and assigns each third party auditor to audit a batch of files to save time. The system provides advantages like improved performance and reduced extra storage requirements.
OPoR: enabling proof of retrievability in cloud computing with resource cons... (Pvrtechnologies Nellore)
OPoR is a new cloud storage scheme involving a cloud storage server and a cloud audit server. It aims to enable proof of retrievability for cloud storage with resource-constrained devices by outsourcing heavy computation of data tag generation to the cloud audit server. The cloud audit server pre-processes and uploads data on behalf of clients, eliminating their involvement in auditing and preprocessing. OPoR is proven secure against reset attacks while supporting efficient public verifiability and dynamic data operations. Future work may further reduce trust in the audit server and find more efficient solutions.
Tomcat is a web container rather than a full web server; its HTTP Connector lets it act as a web server and handle HTTP requests. To enable SSL/HTTPS in Tomcat:
1. Generate a self-signed certificate with keytool, creating a keystore file for secure connections.
2. Configure server.xml to enable the SSL connector and point it at the keystore file.
3. Add a security constraint to the application's web.xml specifying a "CONFIDENTIAL" transport guarantee so that resources require HTTPS.
SSL can also be enabled for PHP applications running on XAMPP without additional configuration, since XAMPP already includes OpenSSL support.
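The Tomcat steps above map to configuration roughly like the following (attribute set shown in the classic Tomcat connector style; ports, paths, and the keystore password are illustrative):

```xml
<!-- server.xml: after generating a keystore with, for example,
     keytool -genkeypair -alias tomcat -keyalg RSA -keystore conf/keystore.jks -->
<Connector port="8443" protocol="org.apache.coyote.http11.Http11NioProtocol"
           SSLEnabled="true" scheme="https" secure="true"
           keystoreFile="conf/keystore.jks" keystorePass="changeit"
           clientAuth="false" sslProtocol="TLS"/>

<!-- web.xml (application): force HTTPS for all resources -->
<security-constraint>
  <web-resource-collection>
    <web-resource-name>all</web-resource-name>
    <url-pattern>/*</url-pattern>
  </web-resource-collection>
  <user-data-constraint>
    <transport-guarantee>CONFIDENTIAL</transport-guarantee>
  </user-data-constraint>
</security-constraint>
```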
Mutual Authentication For Wireless Communication (manish kumar)
The document discusses mutual authentication for wireless communication. It defines mutual authentication as a process where a client and server authenticate each other by exchanging digital certificates using TLS protocol. It describes different types of mutual authentication like certificate-based and username/password-based. It also discusses how to set up mutual authentication, common authentication protocols, and attacks on protocols. It covers advantages of mutual authentication and limitations.
RADIUS uses UDP for authentication and authorization, encrypting only the password field, while TACACS+ uses TCP and encrypts the entire payload. TACACS+ separates authentication, authorization, and accounting functions, allowing for different authentication mechanisms to be used, while RADIUS combines these steps. TACACS+ supports additional network protocols and provides more granular control over authorized commands.
Symmetric and asymmetric cryptography complement each other. Because the same key is used for encryption and decryption, symmetric ciphers are considerably faster than asymmetric ones and are therefore used to encrypt bulk data. Asymmetric cryptography, in turn, solves the key-distribution problem: a session key for a symmetric cipher can be exchanged securely using an asymmetric key pair, after which the faster symmetric cipher protects the actual traffic. Thus both symmetric and asymmetric cryptography are needed, each according to its own strengths.
The document provides an overview of the Secure Sockets Layer (SSL) protocol. It discusses SSL's goals of providing confidentiality, integrity, and authentication for network communications. It describes the SSL handshake process, where the client and server authenticate each other and negotiate encryption parameters before transmitting application data. It also discusses SSL applications like securing web traffic and online payments. The document concludes that SSL is vital for web security and ensures user confidentiality and integrity.
IRJET- Data and Technical Security Issues in Cloud Computing Databases (IRJET Journal)
This document discusses several technical security issues related to cloud computing databases. It begins with an introduction to cloud computing and its benefits of reducing costs. However, security concerns arise when data is outsourced to external cloud providers. The document then examines specific security issues like XML signature wrapping attacks on web services. It also discusses how browser-based access to cloud services introduces vulnerabilities related to the same-origin policy and TLS verification. Potential attacks on cloud authentication using programs are explained. In summary, the document analyzes technical challenges regarding data security, integrity and privacy in cloud computing environments.
The document discusses web security considerations and threats. It provides 3 levels at which security can be implemented - at the IP level using IPSec, at the transport level using SSL/TLS, and at the application level using protocols like SET. SSL/TLS works by establishing an encrypted channel between the client and server for secure communication. It uses handshake, change cipher spec, and alert protocols for negotiation and management of the secure session. Common web security threats include eavesdropping, message modification, denial of service attacks, and impersonation which can be mitigated using encryption, authentication and other cryptographic techniques.
FRONT END AND BACK END DATABASE SECURITY IN THREE TIER WEB APPLICATIONijiert bestjournal
This document discusses security techniques for front-end and back-end databases in three-tier web applications. It proposes a double security system that assigns each user session to a dedicated container or virtual computing environment. This allows the system to map and profile activity between the web server and database server, enabling it to detect attacks. The system separates traffic by session, analyzes HTTP requests and SQL queries, maps requests to queries, and can detect direct database attacks or SQL injection attacks by checking for unmapped queries.
This document describes CryptDB, a system that allows unmodified database servers to store and query encrypted data while providing confidentiality guarantees. It addresses the key challenges of supporting SQL queries on encrypted data, carefully defining and achieving privacy even with an untrusted database server, and making the system practical. CryptDB uses an SQL-aware encryption strategy with encryption schemes that allow queries to be executed directly on ciphertexts. It also employs adjustable query-based encryption to dynamically adjust encryption levels based on the queries used. Data is encrypted in layers from weaker to stronger encryption to efficiently adjust levels as needed. The goal is to enable standard SQL queries over encrypted data without client-side processing or database modifications while protecting data confidentiality.
Mmb authenticated index for verifiableKamal Spring
Cloud brokers have been recently introduced as an additional computational layer to facilitate cloud selection and service management tasks for cloud consumers. However, existing brokerage schemes on cloud service selection typically assume that brokers are completely trusted, and do not provide any guarantee over the correctness of the service recommendations. It is then possible for a compromised or dishonest broker to easily take advantage of the limited capabilities of the clients and provide incorrect or incomplete responses. To address this problem, we propose an innovative Cloud Service Selection Verification (CSSV) scheme and index structures (MMBcloud-tree) to enable cloud clients to detect misbehavior of the cloud brokers during the service selection process. We demonstrate correctness and efficiency of our approaches both theoretically and empirically.
Cued click point image based kerberos authentication protocolIAEME Publication
The document presents a proposed authentication system that combines cued click point (CCP) graphical passwords with the Kerberos authentication protocol. CCP uses a sequence of images where the user selects one click point per image. This is made more secure through the addition of a sound signature. The system aims to address weaknesses in text passwords by leveraging human memory for visual information. It also utilizes Kerberos to provide network security and mutual authentication between clients and servers. The proposed model would allow administrators to assign user credentials for system access. Users would select a tolerance level and set graphical passwords by choosing images and click points. Their profile would be generated and the entire login process secured using Kerberos authentication.
Towards secure and dependable storage service in cloudsibidlegend
The document proposes a distributed storage integrity auditing mechanism for cloud data storage that allows for lightweight communication and computation during audits. The proposed design ensures strong correctness guarantees for stored data and enables fast error localization to identify misbehaving servers. It also supports secure and efficient dynamic operations like modifying, deleting, and appending blocks of outsourced data. Analysis shows the scheme is efficient and resilient against various attacks.
Service operator aware trust scheme for resourcejayaramb
The document proposes a service operator-aware trust scheme (SOTS) for resource matchmaking across multiple clouds. SOTS uses a middleware framework to evaluate trust based on multi-dimensional resource service operators to improve dependability. The broker can efficiently select the most trusted resources in advance using an adaptive trust evaluation approach based on information entropy theory. This overcomes limitations of traditional schemes that manually weight trust factors.
Towards secure & dependable storage services in cloud computingRahid Abdul Kalam
The document discusses a project presented towards secure and dependable storage services in cloud computing. It discusses algorithms used including Byzantine Fault Tolerance and Reed-Solomon, and covers existing system limitations. It then describes the design modules including login, user registration, client manipulation, and administrator login and manipulation. Finally it discusses operational modules, use case diagrams, class diagrams, flowcharts, and dataflow diagrams related to the project.
This document provides an overview and guide to Kerberos authentication including:
- The logon process involving the KDC and TGTs
- Accessing a web site using Kerberos and the request for a service ticket
- Common troubleshooting steps like checking SPNs and time sync
- Demos of delegation and forms-based authentication
- References for further Kerberos reading
- Firewalls provide protection from external threats but not internal threats, where Kerberos authentication comes in. Kerberos uses encryption and tickets to verify the identity of clients and servers on a network, preventing data sniffing and impersonation. It involves an authentication server that issues session keys and tickets to allow communication between clients and servers.
- Kerberos is an authentication protocol that allows clients to prove their identity to servers in a secure manner. It uses tickets and encryptions to authenticate users and allows authorized access to resources.
- The logon process involves a client getting a ticket-granting ticket from the key distribution center after proving their identity, which can then be used to request service tickets to access specific resources.
- Common issues that can break Kerberos authentication include time synchronization problems, incorrect service principal name configurations, expired tickets, and non-default port configurations.
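The ticket mechanics described above (a shared secret, a validity window, and rejection of expired or tampered tickets) can be illustrated with a toy HMAC-based stand-in. This is not real Kerberos, which encrypts tickets with keys held by the KDC; the shared key, field names, and lifetimes below are invented for the sketch.

```python
import hashlib
import hmac
import json
import time

KDC_SERVICE_KEY = b"shared-secret-between-kdc-and-service"  # hypothetical shared key

def issue_ticket(client, lifetime_s=300, now=None):
    """Toy 'service ticket': a set of claims plus an HMAC over them."""
    now = time.time() if now is None else now
    claims = {"client": client, "issued": now, "expires": now + lifetime_s}
    payload = json.dumps(claims, sort_keys=True).encode()
    mac = hmac.new(KDC_SERVICE_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "mac": mac}

def verify_ticket(ticket, now=None):
    """Service-side check: the MAC must match and the ticket must not be expired."""
    now = time.time() if now is None else now
    payload = json.dumps(ticket["claims"], sort_keys=True).encode()
    expected = hmac.new(KDC_SERVICE_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, ticket["mac"]):
        return False  # forged or tampered ticket
    return now < ticket["claims"]["expires"]  # expired tickets are rejected

ticket = issue_ticket("alice", lifetime_s=300, now=1000.0)
print(verify_ticket(ticket, now=1100.0))  # True: valid and within lifetime
print(verify_ticket(ticket, now=2000.0))  # False: expired
```

The expiry check is also why clock skew breaks Kerberos: if client and service disagree about `now` by more than the allowed window, valid tickets look expired (or not yet valid).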
This document provides an overview and agenda for a Kerberos survival guide presentation. The presentation will cover Kerberos logon process, accessing a web site using Kerberos, miscellaneous Kerberos information, and complex Kerberos configurations. It includes dependencies, service principal names (SPNs), and troubleshooting tools for Kerberos. The presentation aims to provide essential information about Kerberos without overcomplicating details.
Survey on RESTful Web Services Using Open Authorization (OAuth) (I01545356, IOSR Journals)
Abstract: Web services are application programming interfaces (APIs), or web APIs, that are accessed through the Hypertext Transfer Protocol (HTTP) to execute on a remote system hosting the requested services. A RESTful web service is an emerging technology and a lightweight approach that does not restrict client-server communication. The Open Authorization (OAuth) 2.0 protocol enables users to grant third-party applications access to their web resources without sharing their login credentials. The authorization server includes authorization information with the access token and signs the access token. An access token can be reused until it expires. An authentication filter is used for business services. This paper presents secure communication at the message level with minimal overhead and provides fine-grained authenticity using the Jersey framework.
Keywords: Open Authorization (OAuth), RESTful web services, HTTP protocol, Uniform Resource Identifier (URI).
Secure data transfer and deletion from counting Bloom filter in cloud computing (Venkat Projects)
The document discusses a proposed system for secure data transfer and deletion from one cloud to another. It aims to achieve verifiable data transfer and reliable data deletion without a trusted third party. The system uses a counting Bloom filter scheme to allow a data owner, original cloud, and target cloud to verify that data was completely and accurately transferred or deleted. The scheme ensures data confidentiality, integrity, and public verifiability during the transfer and deletion processes.
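A counting Bloom filter is what makes the deletion side of such a scheme checkable: unlike a plain Bloom filter, each slot holds a counter rather than a single bit, so items can be removed as well as inserted. A minimal sketch follows; the sizes, hash construction, and API are illustrative, not the paper's parameters.

```python
import hashlib

class CountingBloomFilter:
    def __init__(self, size=1024, num_hashes=4):
        self.size = size
        self.num_hashes = num_hashes
        self.counters = [0] * size

    def _indexes(self, item: bytes):
        # Derive k slot indexes from salted SHA-256 digests.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(i.to_bytes(4, "big") + item).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item: bytes):
        for idx in self._indexes(item):
            self.counters[idx] += 1

    def remove(self, item: bytes):
        # Decrementing counters (instead of clearing bits) is what makes
        # deletion safe: other items sharing a slot keep it nonzero.
        if not self.might_contain(item):
            raise KeyError("item not present")
        for idx in self._indexes(item):
            self.counters[idx] -= 1

    def might_contain(self, item: bytes) -> bool:
        # No false negatives; false positives possible, as in any Bloom filter.
        return all(self.counters[idx] > 0 for idx in self._indexes(item))

cbf = CountingBloomFilter()
cbf.add(b"block-001")
print(cbf.might_contain(b"block-001"))  # True
cbf.remove(b"block-001")
print(cbf.might_contain(b"block-001"))  # False: deletion is reflected
```

After a transfer, the target cloud's filter can be queried to confirm a block arrived, and after deletion the original cloud's filter can be queried to confirm it is gone, which is the verifiability property the proposal relies on.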
This document discusses preserving data integrity in cloud computing through third party auditing. It introduces an effective third party auditor that can perform multiple auditing tasks simultaneously using the technique of bilinear aggregate signature. This reduces computation costs and storage overhead for integrity verification. The system supports dynamic data operations through techniques like fragment structure, random sampling and an index-hash table. It also allows efficient scheduling of audit activities in an audit period and assigns each third party auditor to audit a batch of files to save time. The system provides advantages like improved performance and reduced extra storage requirements.
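The random-sampling idea is independent of the bilinear-signature machinery and can be sketched with plain hashes: the auditor keeps one digest per block (a simple index-hash table) and challenges the server on a random subset, so the audit cost does not grow with total data size. This is a simplified stand-in, not the paper's protocol.

```python
import hashlib
import random

def build_index(blocks):
    """Auditor-side index-hash table: one digest per data block."""
    return [hashlib.sha256(b).hexdigest() for b in blocks]

def audit(index, fetch_block, sample_size=3, rng=None):
    """Challenge the server on a random sample of block indexes."""
    rng = rng or random.Random()
    challenged = rng.sample(range(len(index)), sample_size)
    return all(
        hashlib.sha256(fetch_block(i)).hexdigest() == index[i]
        for i in challenged
    )

blocks = [f"block-{i}".encode() for i in range(100)]
index = build_index(blocks)

# An honest server returns the blocks it stored.
print(audit(index, lambda i: blocks[i], rng=random.Random(0)))  # True

# A server that lost or corrupted the data fails the challenge.
corrupted = [b"corrupted"] * len(blocks)
print(audit(index, lambda i: corrupted[i], rng=random.Random(0)))  # False
```

In the real schemes the per-block digests are replaced by homomorphic authenticators, which let the server return one short aggregated proof instead of the sampled blocks themselves; the sampling logic, however, is the same.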
OPoR: Enabling Proof of Retrievability in Cloud Computing with Resource-Constrained Devices (Pvrtechnologies Nellore)
OPoR is a new cloud storage scheme involving a cloud storage server and a cloud audit server. It aims to enable proof of retrievability for cloud storage with resource-constrained devices by outsourcing heavy computation of data tag generation to the cloud audit server. The cloud audit server pre-processes and uploads data on behalf of clients, eliminating their involvement in auditing and preprocessing. OPoR is proven secure against reset attacks while supporting efficient public verifiability and dynamic data operations. Future work may further reduce trust in the audit server and find more efficient solutions.
Tomcat is a web container rather than a full web server; its HTTP Connector lets it act as one and handle HTTP requests directly. To enable SSL/HTTPS in Tomcat, one must:
1. Generate a self-signed certificate using keytool to create a keystore file for secure connections.
2. Configure the server.xml file to enable the SSL connector and specify the keystore file location.
3. Add a security constraint to the application's web.xml file to specify "CONFIDENTIAL" transport guarantee and require HTTPS for resources.
SSL can also be enabled for PHP applications running on XAMPP without additional configuration, since XAMPP already includes OpenSSL support.
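Under the three numbered steps above, a minimal configuration might look like the following. The keystore path, alias, password, and port are placeholders; the connector attributes shown are the classic per-connector form, while Tomcat 9+ prefers the nested `<SSLHostConfig>` element.

```shell
# Step 1: generate a self-signed certificate in a new keystore (placeholder values).
keytool -genkeypair -alias tomcat -keyalg RSA -keysize 2048 -validity 365 \
  -keystore /path/to/keystore.jks -storepass changeit
```

```xml
<!-- Step 2: in conf/server.xml, enable an SSL connector pointing at the keystore. -->
<Connector port="8443" protocol="org.apache.coyote.http11.Http11NioProtocol"
           SSLEnabled="true" scheme="https" secure="true"
           keystoreFile="/path/to/keystore.jks" keystorePass="changeit"
           clientAuth="false" sslProtocol="TLS" />

<!-- Step 3: in the application's web.xml, require HTTPS for all resources. -->
<security-constraint>
  <web-resource-collection>
    <web-resource-name>Protected</web-resource-name>
    <url-pattern>/*</url-pattern>
  </web-resource-collection>
  <user-data-constraint>
    <transport-guarantee>CONFIDENTIAL</transport-guarantee>
  </user-data-constraint>
</security-constraint>
```

With the `CONFIDENTIAL` transport guarantee in place, Tomcat redirects plain-HTTP requests for those resources to the SSL connector's port.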
Mutual Authentication for Wireless Communication (manish kumar)
The document discusses mutual authentication for wireless communication. It defines mutual authentication as a process in which a client and server authenticate each other by exchanging digital certificates using the TLS protocol. It describes different types of mutual authentication, such as certificate-based and username/password-based, and explains how to set it up, which authentication protocols are commonly used, and what attacks those protocols face. It closes with the advantages and limitations of mutual authentication.
RADIUS uses UDP for authentication and authorization, encrypting only the password field, while TACACS+ uses TCP and encrypts the entire payload. TACACS+ separates authentication, authorization, and accounting functions, allowing for different authentication mechanisms to be used, while RADIUS combines these steps. TACACS+ supports additional network protocols and provides more granular control over authorized commands.
Symmetric and asymmetric cryptography complement each other. Because the same key encrypts and decrypts, a symmetric cipher is considerably faster than an asymmetric one, so symmetric algorithms are used to encrypt bulk data. Asymmetric cryptography, in turn, solves the key-distribution problem that symmetric ciphers cannot: it lets two parties establish or exchange a symmetric session key without a pre-shared secret. In practice the two are combined, with an asymmetric exchange protecting a short-lived symmetric key that then encrypts the bulk traffic. Thus both symmetric and asymmetric cryptography are needed, each according to its own strengths.
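A toy sketch of that division of labor: a fresh random session key does the bulk symmetric work, while the asymmetric wrapping of that key is only noted in a comment, since Python's standard library includes no RSA. The XOR keystream below is a stand-in for a real cipher such as AES, not something to use in practice.

```python
import hashlib
import itertools
import os

def keystream(key: bytes):
    """Toy keystream: SHA-256 in counter mode. Stand-in for a real cipher."""
    for counter in itertools.count():
        yield from hashlib.sha256(key + counter.to_bytes(8, "big")).digest()

def sym_encrypt(key: bytes, plaintext: bytes) -> bytes:
    # Symmetric: the same key and operation encrypt and decrypt,
    # because XOR is its own inverse.
    return bytes(p ^ k for p, k in zip(plaintext, keystream(key)))

sym_decrypt = sym_encrypt  # identical operation with the same key

# Hybrid scheme: a fresh random session key does the fast bulk work.
# In a real system this session key would be wrapped with the recipient's
# RSA/ECC public key -- the only step where asymmetric crypto is needed.
session_key = os.urandom(32)
message = b"bulk data" * 1000
ciphertext = sym_encrypt(session_key, message)
assert sym_decrypt(session_key, ciphertext) == message
```

The asymmetric step touches only the 32-byte key, which is why the (slow) public-key operation adds negligible cost even when the payload is large.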
The document provides an overview of the Secure Sockets Layer (SSL) protocol. It discusses SSL's goals of providing confidentiality, integrity, and authentication for network communications. It describes the SSL handshake process, where the client and server authenticate each other and negotiate encryption parameters before transmitting application data. It also discusses SSL applications like securing web traffic and online payments. The document concludes that SSL is vital for web security and ensures user confidentiality and integrity.
IRJET- Data and Technical Security Issues in Cloud Computing Databases (IRJET Journal)
This document discusses several technical security issues related to cloud computing databases. It begins with an introduction to cloud computing and its benefits of reducing costs. However, security concerns arise when data is outsourced to external cloud providers. The document then examines specific security issues like XML signature wrapping attacks on web services. It also discusses how browser-based access to cloud services introduces vulnerabilities related to the same-origin policy and TLS verification. Potential attacks on cloud authentication using programs are explained. In summary, the document analyzes technical challenges regarding data security, integrity and privacy in cloud computing environments.
The document discusses web security considerations and threats. It provides 3 levels at which security can be implemented - at the IP level using IPSec, at the transport level using SSL/TLS, and at the application level using protocols like SET. SSL/TLS works by establishing an encrypted channel between the client and server for secure communication. It uses handshake, change cipher spec, and alert protocols for negotiation and management of the secure session. Common web security threats include eavesdropping, message modification, denial of service attacks, and impersonation which can be mitigated using encryption, authentication and other cryptographic techniques.
Towards secure and dependable storage service in cloud (sibidlegend)
The document proposes a distributed storage integrity auditing mechanism for cloud data storage that allows for lightweight communication and computation during audits. The proposed design ensures strong correctness guarantees for stored data and enables fast error localization to identify misbehaving servers. It also supports secure and efficient dynamic operations like modifying, deleting, and appending blocks of outsourced data. Analysis shows the scheme is efficient and resilient against various attacks.
IRJET- Survey on Blockchain based Digital Certificate System (IRJET Journal)
The document discusses using blockchain technology to create a digital certificate system. It provides an overview of blockchain and how it can be used to issue and verify graduation certificates in a secure and decentralized manner. Several examples of digital certificate systems that use blockchain and smart contracts are described to address issues with forgery and validate the authenticity and integrity of certificates.
Kerberos Security in Distributed Systems (IRJET Journal)
Kerberos is a network authentication protocol that provides single sign-on capabilities for client-server applications by allowing nodes communicating over a non-secure network to prove their identity to one another in a secure manner. It uses tickets and secret session keys to authenticate users and services. When a client wants to access a service, Kerberos issues it a ticket-granting ticket, which it can use to obtain service tickets from the ticket granting service. These tickets contain encrypted proofs of the client's identity that can be verified by the service. Kerberos supports cross-realm authentication and uses shared symmetric keys and timestamps to securely authenticate users within distributed systems. While effective, it has some limitations, such as increased computation load and a single point of failure if the KDC becomes unavailable.
This document summarizes a research paper published in the International Journal of Computer Engineering and Technology. The paper proposes a model for data storage security in cloud computing using Kerberos authentication. Kerberos is an authentication protocol that allows nodes on a network to securely prove their identity to one another. The proposed model uses Kerberos to authenticate customers connecting to cloud servers. When a customer wants to store data in the cloud, they must first register with a third party and are then issued a password and identity. The customer connects to Kerberos and gets a ticket-granting ticket, which they can use to obtain tickets to access specific cloud services. The model aims to address security issues with managing data in cloud computing environments by leveraging Kerberos authentication.
IRJET- A Novel and Secure Approach to Control and Access Data in Cloud Storage (IRJET Journal)
This document proposes a novel approach to securely control and access data stored in the cloud using Ciphertext-Policy Attribute-Based Encryption (CP-ABE). The approach aims to address abuse of access credentials by tracing malicious insiders and revoking their access. It presents two new CP-ABE frameworks that allow traceability of malicious cloud clients, identification of misbehaving authorities, and auditing without requiring extensive storage. The frameworks provide fine-grained access control and can revoke credentials of traced attackers.
An Auditing Protocol for Protected Data Storage in Cloud Computing (ijceronline)
Cloud computing is a mechanism that provides resources and information on demand over the internet. The cloud is used to store important content for long periods, which requires trust in the safety of the stored content. The main issue in cloud computing is data security. Many earlier techniques were suitable only for static, archived data; encryption techniques introduced later for dynamic data include masking and the use of bilinear properties with dynamic auditing. This paper proposes an effective auditing protocol that supports dynamic operations on data, using the RSA, MD5, and ID3 algorithms to enhance data safety. Analysis and simulation results show the scheme is effective and secure, incurring minimal communication and computation cost for the auditor.
This document discusses enhancing security through token generation in a distributed environment. It proposes a new token generation scheme to encrypt user data with specified key parameters, making resources more robust. The token generation scheme would add security for both authentication and authorization. Existing algorithms focus on encrypting data on the user side, which incurs high computational and communication costs. The document suggests a token generation algorithm for distributed data files that provides secure and dependable server storage while maintaining low overhead. It analyzes related work on token-based authentication and security techniques to provide context.
This document proposes a decentralized KYC (Know Your Customer) system using blockchain and IPFS. The current centralized KYC systems have issues like single points of failure, data redundancy, and third party involvement. The proposed system stores user identity data like documents and photos in a distributed IPFS database for redundancy and security. It then stores the IPFS hash and username on an Ethereum blockchain to make the data immutable. This removes single points of failure and third party involvement. Testing showed the proposed system uses less gas, making it more cost efficient than alternatives without using IPFS for storage. The system provides the same functionality as traditional KYC systems in a decentralized manner with improved security, efficiency and trust.
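The hash-on-chain idea above can be sketched with plain dictionaries standing in for IPFS and the smart contract's storage; the names and structure are illustrative, not the paper's implementation. The document bytes live off-chain in content-addressed storage, and only the small, fixed-size hash goes on-chain, which is what keeps gas cost low and makes tampering detectable.

```python
import hashlib

# Stand-ins for the two stores: "ipfs" holds document bytes keyed by content
# hash; "ledger" holds only (username -> content hash), as a contract would.
ipfs = {}    # content hash -> document bytes
ledger = {}  # username -> content hash (append-only in a real blockchain)

def register(username: str, document: bytes) -> str:
    """Store the document off-chain and record its hash on-chain."""
    digest = hashlib.sha256(document).hexdigest()  # content address
    ipfs[digest] = document
    ledger[username] = digest  # only the 32-byte hash is stored on-chain
    return digest

def verify(username: str, document: bytes) -> bool:
    """KYC check: does the presented document match the on-chain hash?"""
    return ledger.get(username) == hashlib.sha256(document).hexdigest()

register("alice", b"passport scan bytes")
print(verify("alice", b"passport scan bytes"))  # True
print(verify("alice", b"forged document"))      # False
```

Any change to the off-chain document changes its SHA-256 digest, so the immutable on-chain record is enough to detect substitution without storing the document itself on the blockchain.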
Providing user security guarantees in public infrastructure clouds (Kamal Spring)
The infrastructure cloud (IaaS) service model offers improved resource flexibility and availability, where tenants – insulated from the minutiae of hardware maintenance – rent computing resources to deploy and operate complex systems. Large-scale services running on IaaS platforms demonstrate the viability of this model; nevertheless, many organizations operating on sensitive data avoid migrating operations to IaaS platforms due to security concerns. In this paper, we describe a framework for data and operation security in IaaS, consisting of protocols for a trusted launch of virtual machines and domain-based storage protection. We continue with an extensive theoretical analysis with proofs about protocol resistance against attacks in the defined threat model. The protocols allow trust to be established by remotely attesting host platform configuration prior to launching guest virtual machines and ensure confidentiality of data in remote storage, with encryption keys maintained outside of the IaaS domain. Presented experimental results demonstrate the validity and efficiency of the proposed protocols. The framework prototype was implemented on a test bed operating a public electronic health record system, showing that the proposed protocols can be integrated into existing cloud environments.
1) The document provides tips for optimizing performance on WebSphere DataPower devices by adjusting caching, enabling persistent connections, using processing rules efficiently, optimizing MQ and XSLT configurations, and leveraging synchronous and asynchronous actions appropriately.
2) It recommends creating a "facade service" to monitor and shape requests to external services like logging servers to prevent slow responses from impacting core transactions. This facade service would use monitors and service level management policies to control latencies.
3) Using separate delegate services with monitoring is suggested to avoid direct connections to external services that could become slow and bottleneck transactions if they degrade in performance.
Cloud-Trust—a Security Assessment Model
for Infrastructure as a Service (IaaS) Clouds
Dan Gonzales, Member, IEEE, Jeremy M. Kaplan, Evan Saltzman, Zev Winkelman, and Dulani Woods
Abstract—The vulnerability of cloud computing systems (CCSs) to advanced persistent threats (APTs) is a significant concern to government and industry. We present a cloud architecture reference model that incorporates a wide range of security controls and best practices, and a cloud security assessment model—Cloud-Trust—that estimates high-level security metrics to quantify the degree of confidentiality and integrity offered by a CCS or cloud service provider (CSP). Cloud-Trust is used to assess the security level of four multi-tenant IaaS cloud architectures equipped with alternative cloud security controls. Results show the probability of CCS penetration (high-value data compromise) is high if a minimal set of security controls is implemented. CCS penetration probability drops substantially if a cloud defense-in-depth security architecture is adopted that protects virtual machine (VM) images at rest, strengthens CSP and cloud tenant system administrator access controls, and employs other network security controls to minimize cloud network surveillance and discovery of live VMs.
Index Terms—Cloud computing, cyber security, advanced persistent threats, security metrics, virtual machine (VM) isolation
1 INTRODUCTION
The flexibility and scalability of CCSs can offer significant benefits to government and private industry [1], [2]. However, it can be difficult to transition legacy software to the cloud [3]. Concerns have also been raised as to whether cloud users can trust CSPs to protect cloud tenant data and whether CCSs can prevent the unauthorized disclosure of sensitive or private information. The literature is rife with studies of CCS security vulnerabilities that can be exploited by APTs [4], [5], [6], [7].
Virtualization, the basis for most CCSs, enables CSPs to start, stop, move, and restart computing workloads on demand. VMs run on computing hardware that may be shared by cloud tenants. This enables flexibility and elasticity, but introduces security concerns. The security status of a CCS depends on many factors, including security applications running on the system, the hypervisor (HV) and associated protection measures, the design patterns used to isolate the control plane from cloud tenants, the level of protection provided by the CSP to cloud tenant user data and VM images, as well as other factors.
These concerns raise questions. Can the overall security status of a CCS or a CSP offering be assessed using a framework that addresses the unique vulnerabilities of CCSs, and can such assessments be applied to alternative CCS architectures and CSP offerings in an unbiased way? The federal government has issued security controls that CSPs must implement to obtain FedRAMP CCS security certification [8] that are based on Na ...
The document discusses the hardware and software requirements for launching an online trading company called Primus Securities. It addresses the servers needed, including application servers to process transactions and web servers to deliver content to customers. It also covers software considerations like website design, security, and data storage. Connectivity options are examined to make the website available to customers. Primus will need to match or exceed the features of competitors like Ameritrade, Charles Schwab, and E-Trade to attract new customers.
The document discusses considerations for using an API gateway in a microservices architecture. It describes how an API gateway acts as a single entry point, addressing concerns like security, monitoring, and routing requests to backend services. The gateway can provide authentication, authorization, throttling, caching, load balancing and other capabilities in a centralized manner. It abstracts microservices and allows flexible scaling. Security features the document outlines include using federated identity protocols like OAuth for authentication, and configuring the gateway to protect against DDoS attacks and ensure secure communication.
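The gateway's single-entry-point role (authorize once at the edge, then route to a backend) can be sketched as a table-driven dispatcher. The routes, scopes, and internal hostnames below are invented for illustration; a real gateway would add TLS termination, throttling state, caching, and retries on top of this core lookup.

```python
from dataclasses import dataclass

@dataclass
class Route:
    prefix: str          # public path prefix exposed by the gateway
    backend: str         # internal service the request is forwarded to
    required_scope: str  # OAuth scope the caller's token must carry
    rate_limit: int      # max requests/minute, enforced centrally

ROUTES = [
    Route("/orders", "http://orders.internal:8080", "orders:read", 600),
    Route("/users",  "http://users.internal:8080",  "users:read",  300),
]

def dispatch(path: str, token_scopes: set):
    """Single entry point: authorize, then pick a backend (longest-prefix match)."""
    matches = [r for r in ROUTES if path.startswith(r.prefix)]
    if not matches:
        return 404, None
    route = max(matches, key=lambda r: len(r.prefix))
    if route.required_scope not in token_scopes:
        return 403, None  # authorization enforced once, at the edge
    return 200, f"{route.backend}{path}"

print(dispatch("/orders/42", {"orders:read"}))  # (200, 'http://orders.internal:8080/orders/42')
print(dispatch("/orders/42", {"users:read"}))   # (403, None)
```

Because every request passes through this one table, cross-cutting concerns (authentication, throttling, monitoring) live in one place, and backend services can be scaled or relocated by editing routes rather than clients.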
IRJET- Improving Data Storage Security and Performance in Cloud Environment (IRJET Journal)
1. The document discusses improving data storage security and performance in cloud environments. It proposes a middleware framework that integrates different Infrastructure as a Service (IaaS) storage clouds and relies on a service level manager to split files during upload according to node computing capabilities, encrypt file segments, and decrypt and merge files for download.
2. It analyzes factors affecting the performance of the OpenStack Cinder block storage service, such as the number of API workers and storage driver selection. Distributed and encrypted storage of file segments across nodes based on their capabilities could improve both security and performance.
3. The proposed system authenticates users in OpenStack and uses block encryption of volumes, with keys provided via secure connections, to enhance the security of stored data.
This document discusses security considerations for software-as-a-service (SaaS) providers. It covers identity management including internal authentication, single sign-on, and authorization. It also addresses data storage through encryption at the customer level or using multiple database instances. Data transmission security is discussed in terms of confidentiality, integrity, and non-repudiation using SSL/TLS encryption. Physical security of SaaS infrastructure is also highlighted as an important consideration. The document provides an overview of key security best practices for SaaS providers across technical architectural components.
Similar to Azure applications performance checklist (20)
This document provides a template for documenting a software development project. It outlines sections to include such as authentication methods, documentation location, naming conventions, coding guidelines, database details, deployment process, testing procedures, and revision history. The template aims to standardize project documentation to make projects easier to maintain and develop. Specific project requirements are also defined for different project types including MVC, Web API, and WCF.
The document describes how to configure single sign-on from an on-premises Active Directory to Office 365 using federated identity. It discusses migrating users from AD to Azure AD, configuring Azure AD Connect for directory synchronization and federated authentication, and filtering options for controlling which on-premises objects are synchronized to Azure AD. The goal is to allow users to authenticate to Office 365 with their on-premises AD credentials without re-entering their password.
Decide if PhoneGap is for you as your mobile platform selection (Salim M Bhonhariya)
The document discusses strategies for developing a mobile application. It compares web applications, hybrid applications, and native applications. Hybrid applications like PhoneGap allow developing using HTML5/JavaScript while accessing device features, providing a compromise between web and native. The document suggests PhoneGap is best if performance and user experience are not primary concerns and a shorter timeline is needed, as it allows building once and releasing across platforms quickly. Otherwise, native may be preferable for the best performance, experience, and access to device features.
This document provides guidance on optimizing performance when connecting large numbers of devices to Azure Service Bus. It discusses the different protocols that can be used to connect (SBMP, AMQP, HTTP), and explains that SBMP and AMQP use persistent TCP connections while HTTP uses request/response. It emphasizes that the MessagingFactory class establishes TCP connections, and that creating multiple factories can increase throughput by utilizing multiple connections. Quotas for Service Bus entities are also addressed, with clarification that each client connected over a "connection link" counts towards quotas, even though they may share a single TCP connection.
This document discusses several ways to access on-premise LOB systems from Azure applications, including Service Bus Relay, BizTalk Services Hybrid Connections, Service Bus queues, and Express Route. It analyzes the similarities and differences between Service Bus Relay and Hybrid Connections, and provides examples of scenarios where each option would be best used. Both products allow firewall-friendly connectivity but have some variations in supported protocols, security models, and integration capabilities. The document considers tradeoffs for each approach based on architecture and development strategies.
This document provides an outline for an architect to document a software system. It includes sections for describing the functional overview, quality attributes, constraints, principles, software architecture, external interfaces, code structure, vision, risks, data model, infrastructure, deployment, operations and support, and decisions made. The goal is to model all possible failures and reasons for failure in order to understand how to avoid and fix issues when they occur. Details are only included if they can help reason about potential failures.
Best 20 SEO Techniques To Improve Website Visibility In SERP (Pixlogix Infotech)
Boost your website's visibility with proven SEO techniques! Our latest blog dives into essential strategies to enhance your online presence, increase traffic, and rank higher on search engines. From keyword optimization to quality content creation, learn how to make your site stand out in the crowded digital landscape. Discover actionable tips and expert insights to elevate your SEO game.
GraphRAG for Life Science to increase LLM accuracy (Tomaz Bratanic)
GraphRAG for the life science domain, where you retrieve information from biomedical knowledge graphs using LLMs to increase the accuracy and performance of generated answers.
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slack (shyamraj55)
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
Removing Uninteresting Bytes in Software Fuzzing (Aftab Hussain)
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing xml documents, and Binutil's readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format). Our preliminary results show that AFL+DIAR does not only discover new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
Pushing the limits of ePRTC: 100ns holdover for 100 days (Adtran)
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
TrustArc Webinar - 2024 Global Privacy Survey (TrustArc)
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced performance (SOFTTECHHUB)
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe (Paige Cruz)
Monitoring and observability aren’t traditionally found in software curriculums and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is a part of your current company’s observability stack.
While the dev and ops silo continues to crumble….many organizations still relegate monitoring & observability as the purview of ops, infra and SRE teams. This is a mistake - achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party will share these foundational concepts to build on:
“An Outlook of the Ongoing and Future Relationship between Blockchain Technologies and Process-aware Information Systems.” Invited talk at the joint workshop on Blockchain for Information Systems (BC4IS) and Blockchain for Trusted Data Sharing (B4TDS), co-located with with the 36th International Conference on Advanced Information Systems Engineering (CAiSE), 3 June 2024, Limassol, Cyprus.
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024Neo4j
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
How to Get CNIC Information System with Paksim Ga.pptxdanishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdfMalak Abu Hammad
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
Building Production Ready Search Pipelines with Spark and MilvusZilliz
Spark is the widely used ETL tool for processing, indexing and ingesting data to serving stack for search. Milvus is the production-ready open-source vector database. In this talk we will show how to use Spark to process unstructured data to extract vector representations, and push the vectors to Milvus vector database for search serving.
1. Azure Application
Performance Guide
Bhonhariya, Salim (Cloud Architect)
1/9/2015
All rights reserved
2. 1
Contents
Event-Driven Messaging on the Receiver Side of Service Bus....................................................2
Azure Access Control Service (ACS) ..............................................................................................................3
Azure Web Applications and Serialization....................................................................................................5
Physical location of services..........................................................................................................................5
New D-Series of Azure VMs with 60% Faster CPUs, More Memory and Local SSD Disks............................5
Local SSD Disk and SQL Server Buffer Pool Extensions.................................................................................6
Pre-load required data for complex processing ...........................................................................................7
Use batch processing....................................................................................................................................8
Use Asynchronous processing ......................................................................................................................8
SQL Azure Performance:...............................................................................................................................8
Scaling Out: ...............................................................................................................................................8
Setting in SQL Azure:...............................................................................................................................11
Monitoring Azure SQL Database Using Dynamic Management Views...................................................12
Monitoring Connections .....................................................................................................................12
Monitoring Query Performance..........................................................................................................13
Monitoring Blocked Queries...............................................................................................................14
Monitoring Query Plans......................................................................................................................14
Get SQL Profiler info from SQL Azure .................................................................................................14
Analysis Resources and Tools .....................................................................................................................17
Tracing:........................................................................................................................................................17
Coming soon: ..............................................................................................................................................20
3. 2
Event-Driven Messaging on the Receiver Side of Service Bus
Instead of polling the queue, we'll use the new message pump (push model) on
the receiver side of the code.
http://fabriccontroller.net/blog/posts/introducing-the-event-driven-message-programming-model-for-the-windows-azure-service-bus/
// Build the messaging options.
var eventDrivenMessagingOptions = new OnMessageOptions();
eventDrivenMessagingOptions.AutoComplete = true;
eventDrivenMessagingOptions.ExceptionReceived += OnExceptionReceived;
eventDrivenMessagingOptions.MaxConcurrentCalls = 5;
MaxConcurrentCalls defines how many threads will process the queue concurrently
(for improved throughput).
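The push model can also be sketched language-agnostically. The sketch below is a minimal illustration, not part of the Service Bus SDK: the on_message handler and the message list are invented, and the thread pool bounds concurrency the way MaxConcurrentCalls does.

```python
from concurrent.futures import ThreadPoolExecutor

processed = []

def on_message(msg):
    # Handler invoked once per message; with AutoComplete enabled the
    # message would be settled automatically when the handler returns.
    processed.append(msg.upper())

def run_pump(messages, max_concurrent_calls=5):
    # The pool bounds concurrency the way MaxConcurrentCalls does:
    # at most max_concurrent_calls handlers run at the same time.
    with ThreadPoolExecutor(max_workers=max_concurrent_calls) as pool:
        for msg in messages:
            pool.submit(on_message, msg)

run_pump(["order-1", "order-2", "order-3"])
```

Raising the concurrency bound improves throughput only while the handler is I/O-bound; past that point it mostly adds contention.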
Avoid lazy loading of Entity Data for intense processing scenarios
If you are not careful when defining navigation properties, EF will lazy load
unnecessary data behind the scenes. For EF entities with relationships, a
developer will often write something like this:
public class Customer
{
    public int CustomerID { get; set; }
    public int CustomerTypeId { get; set; } // Foreign key
    public virtual CustomerType CustomerType { get; set; }
}
This is naturally the easiest way, especially if your application needs to
retrieve CustomerType data through the Customer entity. However, because the
navigation property is declared virtual, each access to it on a Customer
object causes EF to lazy load the related CustomerType entity with a separate
query. This has an impact on performance, especially when the data set becomes
large. In these scenarios, to increase performance, remove the virtual keyword
(or disable lazy loading on the context with
Configuration.LazyLoadingEnabled = false) so that EF stops lazy loading, and
instead eager load the related data where you need it by using "Include" in
your queries on the "Customer" object:
4. 3
i.e.: var data = Customers.Include("CustomerType").ToList(). This forces EF to
eager load the related data in a single query.
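The N+1 query pattern that lazy loading produces is independent of EF. Here is a minimal sketch with an invented in-memory "database" that counts round trips:

```python
query_count = 0

CUSTOMER_TYPES = {1: "Retail", 2: "Wholesale"}
CUSTOMERS = [(1, 1), (2, 2), (3, 1)]  # (customer_id, customer_type_id)

def query(fn):
    # Every call stands for one round trip to the database.
    global query_count
    query_count += 1
    return fn()

# Lazy style: one query for the customers, then one more per row
# to resolve its CustomerType -- N+1 queries in total.
customers = query(lambda: list(CUSTOMERS))
lazy = [(cid, query(lambda tid=tid: CUSTOMER_TYPES[tid]))
        for cid, tid in customers]
lazy_queries = query_count

# Eager style (what Include does): a single joined query.
query_count = 0
eager = query(lambda: [(cid, CUSTOMER_TYPES[tid]) for cid, tid in CUSTOMERS])
eager_queries = query_count
```

Both styles return the same data; the lazy style costs four round trips for three rows, the eager style one.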
Azure Access Control Service (ACS)
The two main factors affecting ACS resource usage, and thus performance, are
the token size, and encryption.
In general, the key attributes of performance are response time, throughput, and
resource utilization. For example, if the application has limited resources such as
memory, chances are some information will hit the file system which is much
slower than in-memory operations; this will affect overall response time. As
another example, if the application sends a large amount of data over a network
with limited bandwidth, response times will be slower than desired. One
approach to solving performance issues is to add more resources, such as
network bandwidth, a faster CPU, and more memory; this may help, although not
always. Another approach is to improve the code so that it uses fewer resources
and exchanges less data. In the context of claims-aware applications, and
considering what is under the developer's control, the key ACS-related factors
that affect performance are tokens and the cryptographic operations related to
the tokens.
Token size and encryption are key factors under the developer’s control that
affect performance of applications integrated with ACS.
Token Size. Token size affects performance in two ways. First it directly
affects performance related to network bandwidth to some degree. The
larger the token the more network bandwidth it will take resulting in slower
response overall. Second, the larger the token the more CPU cycles required
to verify the integrity and extract claims in the token. Token processing
includes parsing the token and de-serializing it into binary format so that
your code can use it, the processing also includes several cryptography
operations such as signature validation, and optionally decryption. The larger
the token the more CPU cycles spent on its processing resulting in higher
resource utilization and slower overall response. Token size depends on
several factors: token format, cryptography applied to the token, and the
claims in the token. ACS supports SAML, SWT, and JWT tokens. Generally a
SWT or a JWTJWT token is smaller than a SAML token that carries equivalent
5. 4
amount of information. For more information, read Token Formats Supported
in ACS. There is a caveat, though: different token formats are optimized for
different protocols and application architectures.
SWT Tokens are issued over WS-Federation, WS-Trust, and OAuth WRAP
or OAuth 2.0 protocols. This means that SWT tokens can be used in Web
applications, WCF (SOAP) services, and WCF (REST) services. WIF does
not support the SWT token handler.
SAML Tokens are issued over WS-Trust and WS-Federation protocols.
This means that SAML tokens can be used in Web applications and WCF
(SOAP) services. WIF supports both SAML 2.0 and SAML 1.1 tokens.
Read more about WIF in the following topic Windows Identity
Foundation
JWT Tokens are issued over WS-Federation, WS-Trust, and OAuth 2.0
protocols. This means that JWT tokens can be used in Web applications,
WCF (SOAP) services, and WCF (REST) services.
One factor that contributes most to the token size is the claims
contained in the token. The more claims the token carries the larger its
size. In most cases claims that come with the token are under
developer’s control. The claims used by an application are added,
removed, or changed by the Security Token Service (STS) such as AD FS
or ACS. ACS uses rule groups and rules to add, remove or change claims
in a token. For more information read Rule Groups and Rules.
Encryption. Encryption and other cryptographic operations such as signing,
signature validation, and decryption directly affect performance.
Cryptography operations consume computation power due to the complex
algorithms involved. ACS signs all tokens issued as an integrity measure to
counter tampering attacks. Signature validation of tokens is not optional.
Token encryption is required if a relying party application is a web service
that is not using SSL to encrypt communications. WCF-based services using
SOAP require encrypted, proof-of-possession tokens with the WS-Trust
protocol. Token encryption is required to protect sensitive information over
an unencrypted channel. However, in cases where the communication
channel is encrypted, such as using SSL encryption, then using token
encryption is optional and may not be applied in favor of improved
performance.
6. 5
Azure Web Applications and Serialization
When writing applications for Azure, data serialization becomes increasingly
important when compared to writing on-premises applications. Azure
applications get charged for database use, general storage, data transfers and
caching. For more information on pricing, see Azure Pricing Overview.
Additionally, Azure applications may be used from mobile devices such as
phones and tablets, which can introduce latency because of their partially
connected nature. All of these factors mean it is very important to think about how your
application will send and receive data. Smaller payloads will reduce your costs for
bandwidth and storage and may help minimize latency.
The best serializer and encoder to use will depend upon your application and its
interoperability needs. For scenarios where the service and client are both
running under the .NET Framework consider using the binary encoder and the
DataContractSerializer. For scenarios where interoperability is required, use the
text encoder. In these types of scenarios the DataContractJsonSerializer will
provide the smallest representation, but this requires use of a non-SOAP service.
If you need to use SOAP, consider using the DataContractSerializer.
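The payload-size differences are easy to see with a toy record. The sketch below uses JSON and a hand-written XML-ish string purely for illustration; it does not call the .NET serializers or encoders discussed above.

```python
import json

record = {"customerId": 42, "name": "Contoso", "active": True}

# A verbose, XML-like text encoding of the same record.
xml_like = (
    "<Customer><CustomerId>42</CustomerId>"
    "<Name>Contoso</Name><Active>true</Active></Customer>"
)

# Compact JSON: no whitespace around separators.
compact_json = json.dumps(record, separators=(",", ":"))

# Pretty-printed JSON is noticeably larger on the wire.
pretty_json = json.dumps(record, indent=4)

sizes = {
    "xml_like": len(xml_like),
    "compact_json": len(compact_json),
    "pretty_json": len(pretty_json),
}
```

Since Azure bills for transfers and storage, the smaller representation pays off on every request.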
Physical location of services
If possible, co-locate different nodes or application layers within the same data
center. Otherwise network latency and cost will be greater.
For example, locate the web application in the same data center as the SQL
Database instance that it accesses, rather than in a different data center, or on-
premises.
New D-Series of Azure VMs with 60% Faster CPUs, More Memory and Local SSD
Disks
(http://weblogs.asp.net/scottgu/new-d-series-of-azure-vms-with-60-faster-cpus-more-memory-and-local-ssd-disks)
7. 6
Today I’m excited to announce that we just released a new set of VM sizes for
Microsoft Azure. These VM sizes are now available to be used immediately by
every Azure customer.
The new D-Series of VMs can be used with both Azure Virtual Machines and Azure
Cloud Services. In addition to offering faster vCPUs (approximately 60% faster
than our A series) and more memory (up to 112 GB), the new VM sizes also all
have a local SSD disk (up to 800 GB) to enable much faster IO reads and writes.
The new VM sizes available today include the following:
General Purpose D-Series VMs
Name vCores Memory (GB) Local SSD Disk (GB)
Standard_D1 1 3.5 50
Standard_D2 2 7 100
Standard_D3 4 14 200
Standard_D4 8 28 400
High Memory D-Series VMs
Name vCores Memory (GB) Local SSD Disk (GB)
Standard_D11 2 14 100
Standard_D12 4 28 200
Standard_D13 8 56 400
Standard_D14 16 112 800
For pricing information, please see Virtual Machine Pricing Details.
Local SSD Disk and SQL Server Buffer Pool Extensions
A temporary drive on the VMs (D: on Windows, /mnt or /mnt/resource on Linux)
is mapped to the local SSDs exposed on the D-Series VMs, and provides a really
good option for replicated storage workloads, like MongoDB, or for
significantly increasing the performance of SQL Server 2014 by enabling its
unique Buffer Pool Extensions (BPE) feature.
8. 7
SQL Server 2014’s Buffer Pool Extensions allows you to extend the SQL Engine
Buffer Pool with the memory of local SSD disks to significantly improve the
performance of SQL workloads. The Buffer Pool is a global memory resource used
to cache data pages for much faster read operations. Without any code changes
in your application, you can enable buffer pool extension over the SSD of the
D-Series VMs using a simple T-SQL statement:
ALTER SERVER CONFIGURATION
SET BUFFER POOL EXTENSION ON
(FILENAME = 'D:\SSDCACHE\EXAMPLE.BPE', SIZE = <size> [ KB | MB | GB ]);
No code changes are required in your application, and all write operations will
continue to be durably persisted in VM drives persisted in Azure Storage. More
details on configuring and using BPE can be found here.
Pre-load required data for complex processing
Typically, code like the following gets written:
void Process() {
    // Loop starts here
    var data = DBContext.Customers.ToList();
    // Loop ends here
}
The problem with this approach is that each time the loop executes, it queries
the database. Every call to the .ToList() method sends the underlying SQL
query to the database, which multiplies the number of database hits and, in a
production environment, causes a significant performance issue. To resolve
this, you can follow another approach which is quite simple: load the data
into a C# object once and manipulate it from that object, using LINQ over the
stored objects. The modified code will look like this:
9. 8
void Process() {
    // Load the data up front, so the database is hit only once.
    List<Customer> listData = DBContext.Customers.ToList();
    // Loop starts here
    var data = listData.Where(...);
    // Loop ends here
}
Use batch processing
If your application involves processing data as well as writing it to the
database, you can make use of batch methods, which reduce database hits.
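The batching idea can be sketched as follows; write_batch is an invented stand-in for whatever bulk-insert API the data layer provides.

```python
write_calls = 0

def write_batch(rows):
    # Stand-in for a bulk-insert API: one round trip per batch,
    # regardless of how many rows the batch carries.
    global write_calls
    write_calls += 1

def save_in_batches(rows, batch_size=100):
    # N rows cost ceil(N / batch_size) round trips instead of N.
    for i in range(0, len(rows), batch_size):
        write_batch(rows[i:i + batch_size])

save_in_batches(list(range(250)), batch_size=100)
```

Here 250 rows cost three round trips instead of 250; the batch size trades memory per request against the number of database hits.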
Use Asynchronous processing
If you run a select query that retrieves 10,000 records from an Azure SQL
database, you will notice that Azure streams the selected data back
incrementally, i.e. the first 1,000 records, then the next 1,000, and so on.
For your application's listing/display scenarios, you can likewise make the
application behave asynchronously, which makes the user experience much better.
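The incremental retrieval behavior can be imitated in application code with a paging generator; fetch_page is an invented stand-in for a paged query (for example OFFSET/FETCH in T-SQL).

```python
DATA = list(range(10))  # Stand-in for a large result set.

def fetch_page(offset, page_size):
    # Stand-in for a paged query against the database.
    return DATA[offset:offset + page_size]

def stream_records(page_size=4):
    # Yield records page by page so the UI can start rendering
    # the first page before the rest has arrived.
    offset = 0
    while True:
        page = fetch_page(offset, page_size)
        if not page:
            break
        yield from page
        offset += page_size

records = list(stream_records(page_size=4))
```

A consumer that renders each page as it arrives sees first results after one round trip rather than waiting for all of them.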
SQL Azure Performance:
Scaling Out:
The most dramatic performance improvements achievable in Azure
applications come from the scaling out and partitioning of resources.
Building scalable applications in Azure requires leveraging the scale-out of
resources by their physical partitioning: SQL databases, storage, compute
nodes, etc. This partitioning enables parallel execution of application tasks,
and is thus the basis for high performance, because Azure has the
resources of an entire data center available, and handles the physical
partitioning for you. To achieve this level of overall performance requires
the use of proper scale-out design patterns.
10. 9
Ensuring maximum scalability: deciding whether, and how to partition your
data.
Custom Sharding in SQL Azure and Federations in SQL Azure Database
Auto-scaling SQL Azure: create more SQL virtual machine instances (the
same as multiple database instances, similar to creating more instances
of a Web role or worker role)
With Premium SQL Azure, the concurrent request limit can be set higher than 180.
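One common scale-out pattern mentioned above, hash-based sharding, can be sketched as follows; the shard count and the customer_id key are illustrative, not a prescription.

```python
import hashlib

SHARD_COUNT = 4

def shard_for(customer_id):
    # A stable hash (unlike Python's built-in hash(), which varies
    # between processes) so a key always routes to the same shard.
    digest = hashlib.md5(str(customer_id).encode()).hexdigest()
    return int(digest, 16) % SHARD_COUNT

# Deterministic routing: any node computes the same shard for a key.
assignments = {cid: shard_for(cid) for cid in range(100)}
```

Because every node computes the same mapping, no central lookup table is needed, which is what makes the partitions independently scalable.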
12. 11
Setting in SQL Azure:
Verify that the disk cache setting is None for high-throughput I/O operations
in this case.
Make sure data compression is on; compress tables and indexes. That way
more data is stored on fewer pages and read I/O operations are faster.
Make sure replication of the SQL Azure database is done in the same region
for minimum latency.
Use SQL file groups across multiple disks instead of disk striping.
Put logs, data, and backups on separate disks. When doing this we have to
disable geo-replication on the storage account for consistency, because
replication is done asynchronously.
A single storage account is limited to 20,000 IOPS (for a 1 KB message size;
in our case that is 20,000 operations of 1 KB each per second), and only
500 IOPS per disk are allowed. If we need more than that, we have to add more
storage accounts to avoid throttling. We should put a storage throttling alert
in Azure to see how close to the IOPS limit we get
(http://blogs.msdn.com/b/mast/archive/2014/08/02/how-to-monitor-for-storage-account-throttling.aspx)
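Taken together, the quoted limits (20,000 IOPS per account, 500 IOPS per disk) bound how many fully loaded disks one account can serve; a quick sanity-check calculation:

```python
import math

ACCOUNT_IOPS_LIMIT = 20_000   # per storage account, as quoted above
DISK_IOPS_LIMIT = 500         # per individual disk

def accounts_needed(disk_count):
    # Each account can host at most 20,000 / 500 = 40 fully loaded
    # disks; beyond that, extra storage accounts are required to
    # avoid throttling.
    disks_per_account = ACCOUNT_IOPS_LIMIT // DISK_IOPS_LIMIT
    return math.ceil(disk_count / disks_per_account)
```

So 40 disks fit in one account, a 41st forces a second account, and 100 disks need three.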
13. 12
Monitoring Azure SQL Database Using Dynamic Management Views
(http://msdn.microsoft.com/en-us/library/windowsazure/ff394114.aspx)
Microsoft Azure SQL Database partially supports three categories of dynamic
management views:
Database-related dynamic management views.
Execution-related dynamic management views.
Transaction-related dynamic management views.
Monitoring Connections
You can use the sys.dm_exec_connections view to retrieve information about the
connections established to a specific Azure SQL Database server and the details of
14. 13
each connection. In addition, the sys.dm_exec_sessions view is helpful for
retrieving information about all active user connections and internal tasks.
The following query retrieves information on the current connection:
-- monitor connections
SELECT
e.connection_id,
s.session_id,
s.login_name,
s.last_request_end_time,
s.cpu_time
FROM
sys.dm_exec_sessions s
INNER JOIN sys.dm_exec_connections e
ON s.session_id = e.session_id
GO
Monitoring Query Performance
Slow or long running queries can consume significant system resources. This
section demonstrates how to use dynamic management views to detect a few
common query performance problems. For detailed information,
see Troubleshooting Performance Problems in SQL Server 2005 article on
Microsoft TechNet.
Finding Top N Queries
The following example returns information about the top five queries ranked by
average CPU time. This example aggregates the queries according to their query
hash, so that logically equivalent queries are grouped by their cumulative
resource consumption.
-- Find top 5 queries
SELECT TOP 5 query_stats.query_hash AS "Query Hash",
SUM(query_stats.total_worker_time) / SUM(query_stats.execution_count) AS
"Avg CPU Time",
MIN(query_stats.statement_text) AS "Statement Text"
FROM
(SELECT QS.*,
SUBSTRING(ST.text, (QS.statement_start_offset/2) + 1,
((CASE statement_end_offset
WHEN -1 THEN DATALENGTH(st.text)
ELSE QS.statement_end_offset END
- QS.statement_start_offset)/2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS QS
CROSS APPLY sys.dm_exec_sql_text(QS.sql_handle) as ST) as query_stats
15. 14
GROUP BY query_stats.query_hash
ORDER BY 2 DESC;
GO
Monitoring Blocked Queries
Slow or long-running queries can contribute to excessive resource consumption
and be the consequence of blocked queries. The cause of the blocking can be
poor application design, bad query plans, the lack of useful indexes, and so on.
You can use the sys.dm_tran_locks view to get information about the current
locking activity in your Azure SQL Database. For example code,
see sys.dm_tran_locks (Transact-SQL)in SQL Server Books Online.
Monitoring Query Plans
An inefficient query plan also may increase CPU consumption. The following
example uses the sys.dm_exec_query_stats view to determine which query uses
the most cumulative CPU.
-- Monitor query plans
SELECT
highest_cpu_queries.plan_handle,
highest_cpu_queries.total_worker_time,
q.dbid,
q.objectid,
q.number,
q.encrypted,
q.[text]
FROM
(SELECT TOP 50
qs.plan_handle,
qs.total_worker_time
FROM
sys.dm_exec_query_stats qs
ORDER BY qs.total_worker_time desc) AS highest_cpu_queries
CROSS APPLY sys.dm_exec_sql_text(plan_handle) AS q
ORDER BY highest_cpu_queries.total_worker_time desc
Get SQL Profiler info from SQL Azure
(http://blogs.msdn.com/b/benko/archive/2012/05/19/cloudtip-14-how-do-i-get-sql-profiler-info-from-sql-azure.aspx)
18. 17
Analysis Resources and Tools
A number of third-party (non-Microsoft) tools are available for analyzing
Azure performance:
Cerebrata
SQL Server and SQL Database Performance Testing: Enzo SQL Baseline
Other Resources
SQL Database Performance and Elasticity Guide
SQL Database
Storage
Networking
Service Bus
Azure Planning - A Post-decision Guide to Integrate Azure in Your Environment
Tracing:
To see which method is taking time, we can do the following:
add trace statements in the code of the receiving worker process, and,
in web.config,
configure the Azure site for trace logging.
19. 18
Then start the log streaming service: click on Logs, then Stream Logs.
20. 19
Then run the web site code; the streamed logs will show where most of the
time is being spent.
If the worker process is crashing, it may cause performance issues; we can
find out through the following process:
21. 20
Open eventlog.xml in the app's event log folder in Azure to see why, or
click Download File after the test and inspect it.
Coming soon:
Coming soon in preview are a larger maximum database size (the current limit
is 500 GB) and parallel queries (using multiple threads for query execution).
There will also be support for in-memory columnstore queries in the premium
version of Azure SQL Database. Columnstore indexes store data in an in-memory,
column-oriented format, enabling much faster analytic queries at the expense
of easy updating.