The opportunity to access on-demand, virtually unbounded computation and storage resources has increasingly motivated users to move their health records from local data centers to the cloud. This shift can reduce the costs of managing data sharing, lower communication overhead, and improve Quality of Service (QoS). However, processing, storing, hosting, and archiving e-Health data without physical access and control exacerbates authentication and access control issues in this new environment. Convincing users to move sensitive medical records to the cloud therefore requires implementing secure, strong authentication and access control methods to protect the data.
This document discusses security aspects in distributed operating systems. It covers authentication methods like symmetric cryptography and public key cryptography. It also discusses authorization mechanisms like access control matrices, lists and capabilities. The document proposes a design for distributed authentication using public key Kerberos and distributed authorization using trust management systems. It concludes that current security in distributed OS still relies on traditional centralized mechanisms and that alternatives like trust management systems are still immature.
An authentication protocol allows two entities to verify each other's identity. It is a type of computer communications protocol designed to transfer authentication data between two entities, allowing the receiving entity to authenticate the connecting entity. There are several types of authentication protocols including Kerberos, NTLM, SSL/TLS, digest authentication, smart cards, VPN and RAS. The authentication process involves determining a user's identity and access levels through security mechanisms before authorizing access to system resources.
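The challenge-response pattern underlying many of these protocols (Kerberos and digest authentication included) can be sketched, in heavily simplified form, with a shared key and an HMAC. The function names and message flow below are illustrative only, not any specific protocol:

```python
import hashlib
import hmac
import os

def make_challenge() -> bytes:
    """Server issues a fresh random nonce as the challenge."""
    return os.urandom(16)

def respond(shared_key: bytes, challenge: bytes) -> bytes:
    """Client proves knowledge of the key without ever sending it."""
    return hmac.new(shared_key, challenge, hashlib.sha256).digest()

def verify(shared_key: bytes, challenge: bytes, response: bytes) -> bool:
    """Server recomputes the MAC and compares in constant time."""
    expected = hmac.new(shared_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

key = b"shared-secret"
challenge = make_challenge()
assert verify(key, challenge, respond(key, challenge))
assert not verify(key, challenge, respond(b"wrong-key", challenge))
```

Because the challenge is random per session, a captured response cannot be replayed against a later challenge, which is the property real protocols build on.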
The document outlines 10 security principles for data security: economy of mechanism, fail-safe defaults, complete mediation, open design, separation of privilege, least privilege, least common mechanism, psychological acceptability, work factor, and compromise recording. The principles stress simplicity, conservative protection by default, checking all access for compliance, public availability of design, requiring multiple conditions for access, minimal necessary privileges, minimizing shared mechanisms, intuitive user interfaces, comparing circumvention cost to attacker resources, and sometimes prioritizing recording intrusions over prevention.
Externalising or modularising access rules outside of the application in so-called policies represents a fundamental software engineering tactic. In this talk, we discuss how application developers can decouple access control rules from application code and how current technologies support this. We also discuss how a security expert can express access control policies in an independent format and how those policies can be managed to align with, for example, rules from different departments or enterprise-wide regulations.
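As a toy illustration of this decoupling, access rules can live in a data file that a security expert edits, while the application only calls a generic policy engine. The JSON policy format and rule fields below are hypothetical, not any particular policy language such as XACML:

```python
import json

# Access rules live outside the application, e.g. in a JSON policy file
# maintained by a security expert rather than hard-coded by developers.
POLICY = json.loads("""
{
  "rules": [
    {"role": "doctor",  "resource": "medical_record", "actions": ["read", "write"]},
    {"role": "nurse",   "resource": "medical_record", "actions": ["read"]},
    {"role": "billing", "resource": "invoice",        "actions": ["read", "write"]}
  ]
}
""")

def is_allowed(role: str, resource: str, action: str) -> bool:
    """Generic policy engine: the application only asks yes/no questions."""
    return any(
        r["role"] == role and r["resource"] == resource and action in r["actions"]
        for r in POLICY["rules"]
    )

assert is_allowed("doctor", "medical_record", "write")
assert not is_allowed("nurse", "medical_record", "write")
```

Changing who may do what then means editing the policy file, not redeploying the application.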
PLM Software, What is PLM, Cloud Based PLM, Supply Chain, Cloud vs. on premises PLM systems, PLM Benefits, Bill of Materials, BOM Management, BOMControl, Content Management
Data security authorization and access control (Leo Mark Villar)
This document discusses various methods of authorization and access control for data security. It describes identification, authentication, authorization, and the principle of least privilege. Access control methods like access control lists and capability-based security are explained. Different access control models such as discretionary access control, mandatory access control, role-based access control and attribute-based access control are also summarized. Finally, it briefly discusses multi-level access control and physical access control.
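The contrast between the two bookkeeping styles mentioned, access control lists versus capabilities, can be sketched minimally as follows (object names and users are made up for illustration):

```python
from collections import namedtuple

# Access control list: each object lists who may perform which actions.
acl = {"report.txt": {"alice": {"read", "write"}, "bob": {"read"}}}

def acl_check(user: str, obj: str, action: str) -> bool:
    """Look the subject up in the object's list."""
    return action in acl.get(obj, {}).get(user, set())

# Capability: the subject holds a token naming the object and its rights;
# possession of the token is the authorization.
Capability = namedtuple("Capability", ["obj", "rights"])
bobs_cap = Capability("report.txt", frozenset({"read"}))

def cap_check(cap: Capability, obj: str, action: str) -> bool:
    """No central lookup: check only the presented token."""
    return cap.obj == obj and action in cap.rights

assert acl_check("alice", "report.txt", "write")
assert cap_check(bobs_cap, "report.txt", "read")
assert not cap_check(bobs_cap, "report.txt", "write")
```

The ACL keeps authority with the object's owner; the capability travels with the subject, which is why capabilities suit distributed systems but make revocation harder.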
This document discusses various security concepts including types of threats, security mechanisms, authentication methods, access control, and electronic payment systems. It provides examples of security architectures like Globus and protocols like Kerberos. Key topics covered include encryption, digital signatures, firewalls, capabilities, and privacy in electronic payment systems.
The document discusses cyber security and outlines the objectives, system vulnerabilities, business value of security controls, frameworks for security and control, technologies and tools, and management challenges and solutions. It provides an introduction to cyber security concepts over several pages with definitions, figures, and examples.
Achieving Defendable Architectures Via Threat Driven Methodologies (Priyanka Aash)
The document discusses using threat-driven methodologies to achieve defendable system architectures. It describes a threat analysis methodology that involves identifying assets, attack vectors, threat actors, and controls. A case study applies this methodology to analyze threats to a smart card system. Key threats are identified and addressed through controls that focus on visibility, manageability, and survivability of the system. The presentation emphasizes starting with threat intelligence and analysis to select appropriate protections and designing systems to be defended.
This document provides a table of technical parameters for evaluating a SIEM (security information and event management) system during a proof of concept assessment. The table includes parameters such as data collection, data normalization, event correlation, threat detection, alerting and reporting, incident response, user management, data privacy and security, scalability and performance, and integration with other security tools. Evaluating a SIEM against these comprehensive technical parameters can provide a deeper understanding of its capabilities and help determine if it is suitable for full deployment in an organization's network environment.
This document summarizes a research paper that proposes a novel symmetric key cryptography algorithm (N-SKC) to improve data security in cloud computing. The N-SKC algorithm uses multiple computational steps, random operator and delimiter selections to encrypt data with the same key producing different ciphertexts. It is designed to protect against brute force attacks. The paper also proposes using RSA for key exchange between the cloud provider and user to secretly share a symmetric key for encryption. Experimental results testing the N-SKC algorithm show it produces different ciphertexts for the same plaintext and key.
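The randomized-encryption property claimed for N-SKC, where the same key and plaintext yield different ciphertexts, can be illustrated with a generic nonce-based stream construction. This is a toy sketch of the general idea only, not the paper's N-SKC algorithm, and not suitable for real use:

```python
import hashlib
import os

def keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    """Derive n keystream bytes from key+nonce by counter-mode hashing."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    nonce = os.urandom(16)  # fresh randomness per message
    ks = keystream(key, nonce, len(plaintext))
    return nonce + bytes(p ^ k for p, k in zip(plaintext, ks))

def decrypt(key: bytes, ciphertext: bytes) -> bytes:
    nonce, body = ciphertext[:16], ciphertext[16:]
    ks = keystream(key, nonce, len(body))
    return bytes(c ^ k for c, k in zip(body, ks))

key, msg = b"shared-key", b"patient record"
c1, c2 = encrypt(key, msg), encrypt(key, msg)
assert c1 != c2                  # same key and plaintext, different ciphertexts
assert decrypt(key, c1) == msg == decrypt(key, c2)
```

Because an attacker never sees the same ciphertext twice for repeated messages, frequency analysis and precomputed brute-force tables lose much of their leverage, which is the motivation the paper gives for its randomized design.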
This document provides an overview of network security fundamentals. It discusses why security is needed given how the internet has evolved beyond its original design. It covers security concepts like access control, authentication, authorization, risks, threats and vulnerabilities. It also describes different types of attacks, sources of attacks, and strategies for risk analysis and management.
This document provides an overview of network security fundamentals. It discusses why security is needed given how the internet has evolved beyond its original design. It covers security concepts like access control, authentication, authorization, integrity, and risk analysis. It also examines threats, vulnerabilities, and different types of attacks sources. The goal is to help network administrators understand security on different layers and how to implement mitigation techniques.
TechWiseTV Workshop: Cisco Stealthwatch and ISE (Robb Boyd)
Replay the live event: http://cs.co/90008z2Ar
Learn how your existing Cisco network can help you know exactly who is doing what on the network with end-to-end visibility, differentiate anomalies from normal behavior with contextual threat intelligence, and stop threats and mitigate risk with one-click containment of users and devices.
It’s time for the network to protect itself. Please make time for this important workshop.
Resources:
Watch the Cisco Stealthwatch and ISE full episode: http://cs.co/90008z24M
Network as a Sensor-Enforcer on CCO:
http://www.cisco.com/c/en/us/solutions/enterprise-networks/enterprise-network-security/net-sensor.html
Cisco ISE Community
http://cs.co/ise-community
This document summarizes a seminar presentation on public key infrastructure (PKI). It discusses key concepts of PKI including digital signatures, certificates, validation, revocation, and the roles of certification authorities. The presentation covers how asymmetric encryption, hashing, and digital signatures enable secure authentication and authorization in a PKI. It also examines the entities, operations, and technologies involved in implementing and managing a PKI, such as certificate authorities, registration authorities, key generation and storage, and certification revocation lists.
Multi-Server user Authentication Scheme for Privacy Preservation with Fuzzy C... (IJCNCJournal)
The integration of artificial intelligence technology with a scalable Internet of Things (IoT) platform facilitates diverse smart communication services, allowing remote users to access services from anywhere at any time. The multi-server environment within IoT introduces a flexible security service model, enabling users to interact with any server through a single registration. To ensure secure and privacy preservation services for resources, an authentication scheme is essential. Zhao et al. recently introduced a user authentication scheme for the multi-server environment, utilizing passwords and smart cards, claiming resilience against well-known attacks. This paper conducts cryptanalysis on Zhao et al.'s scheme, focusing on denial of service and privacy attacks, revealing a lack of user-friendliness. Subsequently, we propose a new multi-server user authentication scheme for privacy preservation with fuzzy commitment over the IoT environment, addressing the shortcomings of Zhao et al.'s scheme. Formal security verification of the proposed scheme is conducted using the ProVerif simulation tool. Through both formal and informal security analyses, we demonstrate that the proposed scheme is resilient against various known attacks and those identified in Zhao et al.'s scheme.
Modeling and Utilizing Security Knowledge for Eliciting Security Requirements (Shinpei Hayashi)
This document proposes a technique for eliciting security requirements early in the development process. It involves using security knowledge from security targets and common criteria standards. Potential threats are detected by matching scenario descriptions to threat patterns using graph transformation. When a threat is detected, a negative scenario is derived and security objectives are embedded to address the threat. An evaluation of the technique found it could accurately detect threats in scenarios from different domains, though some false positives and negatives occurred. Overall, the technique shows promise for aiding the elicitation of security requirements.
This document discusses public key infrastructure (PKI) and its role in enabling electronic commerce. It covers traditional commerce versus e-commerce, security requirements and attacks for safe commerce, security services like encryption and authentication, and security mechanisms like digital signatures. It also provides details on cryptographic algorithms like symmetric, asymmetric, and hash algorithms. It describes how digital certificates, certification authorities, and hardware security modules help establish trust in e-commerce transactions.
The document outlines the syllabus for a course on cryptography and network security. It discusses key topics that will be covered including cryptographic algorithms, network security concepts, security services, security mechanisms, and types of security attacks. The goal is for students to understand the fundamentals of network security and how to apply cryptographic techniques and authentication schemes to secure applications and networks.
The document discusses several coding security practices for developing secure software, including input validation, output handling, parameterizing queries, identity and authentication controls, and access controls. It provides examples and recommendations for implementing each practice to prevent common vulnerabilities like injection and data tampering. The goal is to integrate security at the code level from the beginning to reduce risks.
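The parameterized-query practice can be demonstrated with Python's sqlite3 module; the table and the attacker input below are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

attacker_input = "alice' OR '1'='1"

# Vulnerable: string concatenation lets the input rewrite the query itself.
vulnerable = conn.execute(
    "SELECT * FROM users WHERE name = '" + attacker_input + "'"
).fetchall()

# Safe: the placeholder binds the input as data, never as SQL.
safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (attacker_input,)
).fetchall()

assert len(vulnerable) == 1   # the injected OR clause matched the stored row
assert len(safe) == 0         # no user is literally named "alice' OR '1'='1"
```

The same principle, keeping untrusted input out of the code channel, also underlies the output-handling and validation practices the document lists.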
A study of cryptography for satellite applications (Rajesh Ishida)
This document discusses and compares various cryptographic algorithms for use in securing satellite downlink communications. It begins with an overview of aspects of satellite security and introduces cryptography. It then covers symmetric and asymmetric cryptographic algorithms like AES, RSA and stream ciphers. It analyzes the performance of algorithms in terms of hardware usage and throughput. Block cipher modes and attacks on cryptosystems are also examined. The document concludes by recommending the KHAZAD block cipher and A5/1 stream cipher as best suited for satellite security based on a performance comparison.
The document discusses various principles and methods of access control, including:
1) Three main principles of access control are identity, authority, and accountability. Identity establishes a user's identity, authority authorizes access privileges, and accountability tracks user actions.
2) Common methods to establish identity include passwords, tokens, biometrics, and digital certificates. Multifactor authentication combines multiple methods for stronger security.
3) Access control models like Bell-LaPadula and Biba are used to enforce security policies based on confidentiality, integrity, and transactions. Clark-Wilson defines processes to control access to critical data items.
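The Bell-LaPadula rules mentioned in point 3 reduce to two lattice comparisons, often summarized as "no read up, no write down". A minimal sketch with assumed classification levels:

```python
# Hypothetical classification lattice, lowest to highest.
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top_secret": 3}

def can_read(subject_level: str, object_level: str) -> bool:
    """Simple security property: a subject may not read above its level."""
    return LEVELS[subject_level] >= LEVELS[object_level]

def can_write(subject_level: str, object_level: str) -> bool:
    """*-property: a subject may not write below its level (no leaking down)."""
    return LEVELS[subject_level] <= LEVELS[object_level]

assert can_read("secret", "confidential")
assert not can_read("confidential", "secret")    # no read up
assert can_write("confidential", "secret")
assert not can_write("secret", "confidential")   # no write down
```

Biba inverts both comparisons to protect integrity rather than confidentiality, which is why the two models are usually taught as duals.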
This document outlines a project to develop and implement an access control mechanism for outsourced data on the cloud. The project has four modules: a supporting application, service development, encryption/decryption, and role-based access. It discusses key terms like access control, access control models, outsourced data, and cloud computing. The system works by allowing a data owner to encrypt and outsource data to the cloud, which then processes and forwards the data to authorized users based on an agreed access policy.
Eric Golpe. Security, privacy, and compliance concerns can be significant hurdles to cloud adoption. Azure can help customers move to the cloud with confidence by providing a trusted foundation, demonstrating compliance with security standards, and making strong commitments to safeguard the privacy of customer data. This presentation will educate you in the fundamentals of Azure security as they pertain to the Cortana Analytics Suite, including capabilities in place for threat defense, network security, access control, and data protection as well as data privacy and compliance. Go to https://channel9.msdn.com/ to find the recording of this session.
IRJET- Detection of Intrinsic Intrusion and Auspice System by Utilizing Data ... (IRJET Journal)
This document proposes a system called the Detection of Intrinsic Intrusion and Auspice System (DIIAS) to detect insider attacks at the system call level using data mining and machine learning. The DIIAS generates user profiles by analyzing system call patterns in user log files to model normal user behavior. It then monitors current system calls and compares them to the user's profile to detect anomalies that may indicate an insider attack. The DIIAS achieves a user identification accuracy of 93% with a response time of less than 0.35 seconds, allowing it to efficiently detect and shut down insider attacks.
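One simple way to realize the profile-and-compare idea described, not necessarily the DIIAS authors' exact method, is a normalized system-call frequency vector per user compared by cosine similarity:

```python
import math
from collections import Counter

def profile(calls: list[str]) -> dict[str, float]:
    """Normalized system-call frequency vector for one user's log."""
    counts = Counter(calls)
    total = sum(counts.values())
    return {c: n / total for c, n in counts.items()}

def similarity(p: dict[str, float], q: dict[str, float]) -> float:
    """Cosine similarity between two frequency profiles."""
    keys = set(p) | set(q)
    dot = sum(p.get(k, 0.0) * q.get(k, 0.0) for k in keys)
    norm = (math.sqrt(sum(v * v for v in p.values()))
            * math.sqrt(sum(v * v for v in q.values())))
    return dot / norm if norm else 0.0

baseline = profile(["open", "read", "read", "close"] * 50)   # learned profile
normal   = profile(["open", "read", "close"] * 40)           # same user, later
attack   = profile(["socket", "connect", "exec", "unlink"] * 20)

assert similarity(baseline, normal) > 0.9    # matches the profile
assert similarity(baseline, attack) < 0.1    # anomaly: flag for response
```

A real detector would use richer features (call sequences, timing) and a learned threshold, but the monitor-compare-flag loop is the same.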
Week Topic Code Access vs Event Based.pptx (ArjayBalberan1)
The document discusses code access security and evidence-based security models in .NET. It covers basic security concepts like authentication, authorization, and threats. It then describes how .NET uses evidence like strong names and publishers to determine the permissions granted to assemblies based on policy levels and code groups. The .NET configuration tools allow editing and creating custom permissions and code groups.
KuberTENes Birthday Bash Guadalajara - K8sGPT first impressions (Victor Morales)
K8sGPT is a tool that analyzes and diagnoses Kubernetes clusters. This presentation was used to share the requirements and dependencies to deploy K8sGPT in a local environment.
Similar to Authentication and Access Control in e-Health Systems in the Cloud Computing
Week Topic Code Access vs Event Based.pptxArjayBalberan1
The document discusses code access security and evidence-based security models in .NET. It covers basic security concepts like authentication, authorization, and threats. It then describes how .NET uses evidence like strong names and publishers to determine the permissions granted to assemblies based on policy levels and code groups. The .NET configuration tools allow editing and creating custom permissions and code groups.
Similar to Authentication and Access Control in e-Health Systems in the Cloud Computing (20)
KuberTENes Birthday Bash Guadalajara - K8sGPT first impressionsVictor Morales
K8sGPT is a tool that analyzes and diagnoses Kubernetes clusters. This presentation was used to share the requirements and dependencies to deploy K8sGPT in a local environment.
Advanced control scheme of doubly fed induction generator for wind turbine us...IJECEIAES
This paper describes a speed control device for generating electrical energy on an electricity network based on the doubly fed induction generator (DFIG) used for wind power conversion systems. At first, a double-fed induction generator model was constructed. A control law is formulated to govern the flow of energy between the stator of a DFIG and the energy network using three types of controllers: proportional integral (PI), sliding mode controller (SMC) and second order sliding mode controller (SOSMC). Their different results in terms of power reference tracking, reaction to unexpected speed fluctuations, sensitivity to perturbations, and resilience against machine parameter alterations are compared. MATLAB/Simulink was used to conduct the simulations for the preceding study. Multiple simulations have shown very satisfying results, and the investigations demonstrate the efficacy and power-enhancing capabilities of the suggested control system.
Presentation of IEEE Slovenia CIS (Computational Intelligence Society) Chapte...University of Maribor
Slides from talk presenting:
Aleš Zamuda: Presentation of IEEE Slovenia CIS (Computational Intelligence Society) Chapter and Networking.
Presentation at IcETRAN 2024 session:
"Inter-Society Networking Panel GRSS/MTT-S/CIS
Panel Session: Promoting Connection and Cooperation"
IEEE Slovenia GRSS
IEEE Serbia and Montenegro MTT-S
IEEE Slovenia CIS
11TH INTERNATIONAL CONFERENCE ON ELECTRICAL, ELECTRONIC AND COMPUTING ENGINEERING
3-6 June 2024, Niš, Serbia
TIME DIVISION MULTIPLEXING TECHNIQUE FOR COMMUNICATION SYSTEMHODECEDSIET
Time Division Multiplexing (TDM) is a method of transmitting multiple signals over a single communication channel by dividing the signal into many segments, each having a very short duration of time. These time slots are then allocated to different data streams, allowing multiple signals to share the same transmission medium efficiently. TDM is widely used in telecommunications and data communication systems.
### How TDM Works
1. **Time Slots Allocation**: The core principle of TDM is to assign distinct time slots to each signal. During each time slot, the respective signal is transmitted, and then the process repeats cyclically. For example, if there are four signals to be transmitted, the TDM cycle will divide time into four slots, each assigned to one signal.
2. **Synchronization**: Synchronization is crucial in TDM systems to ensure that the signals are correctly aligned with their respective time slots. Both the transmitter and receiver must be synchronized to avoid any overlap or loss of data. This synchronization is typically maintained by a clock signal that ensures time slots are accurately aligned.
3. **Frame Structure**: TDM data is organized into frames, where each frame consists of a set of time slots. Each frame is repeated at regular intervals, ensuring continuous transmission of data streams. The frame structure helps in managing the data streams and maintaining the synchronization between the transmitter and receiver.
4. **Multiplexer and Demultiplexer**: At the transmitting end, a multiplexer combines multiple input signals into a single composite signal by assigning each signal to a specific time slot. At the receiving end, a demultiplexer separates the composite signal back into individual signals based on their respective time slots.
### Types of TDM
1. **Synchronous TDM**: In synchronous TDM, time slots are pre-assigned to each signal, regardless of whether the signal has data to transmit or not. This can lead to inefficiencies if some time slots remain empty due to the absence of data.
2. **Asynchronous TDM (or Statistical TDM)**: Asynchronous TDM addresses the inefficiencies of synchronous TDM by allocating time slots dynamically based on the presence of data. Time slots are assigned only when there is data to transmit, which optimizes the use of the communication channel.
### Applications of TDM
- **Telecommunications**: TDM is extensively used in telecommunication systems, such as in T1 and E1 lines, where multiple telephone calls are transmitted over a single line by assigning each call to a specific time slot.
- **Digital Audio and Video Broadcasting**: TDM is used in broadcasting systems to transmit multiple audio or video streams over a single channel, ensuring efficient use of bandwidth.
- **Computer Networks**: TDM is used in network protocols and systems to manage the transmission of data from multiple sources over a single network medium.
### Advantages of TDM
- **Efficient Use of Bandwidth**: TDM all
International Conference on NLP, Artificial Intelligence, Machine Learning an...gerogepatton
International Conference on NLP, Artificial Intelligence, Machine Learning and Applications (NLAIM 2024) offers a premier global platform for exchanging insights and findings in the theory, methodology, and applications of NLP, Artificial Intelligence, Machine Learning, and their applications. The conference seeks substantial contributions across all key domains of NLP, Artificial Intelligence, Machine Learning, and their practical applications, aiming to foster both theoretical advancements and real-world implementations. With a focus on facilitating collaboration between researchers and practitioners from academia and industry, the conference serves as a nexus for sharing the latest developments in the field.
A review on techniques and modelling methodologies used for checking electrom...nooriasukmaningtyas
The proper function of the integrated circuit (IC) in an inhibiting electromagnetic environment has always been a serious concern throughout the decades of revolution in the world of electronics, from disjunct devices to today’s integrated circuit technology, where billions of transistors are combined on a single chip. The automotive industry and smart vehicles in particular, are confronting design issues such as being prone to electromagnetic interference (EMI). Electronic control devices calculate incorrect outputs because of EMI and sensors give misleading values which can prove fatal in case of automotives. In this paper, the authors have non exhaustively tried to review research work concerned with the investigation of EMI in ICs and prediction of this EMI using various modelling methodologies and measurement setups.
Batteries -Introduction – Types of Batteries – discharging and charging of battery - characteristics of battery –battery rating- various tests on battery- – Primary battery: silver button cell- Secondary battery :Ni-Cd battery-modern battery: lithium ion battery-maintenance of batteries-choices of batteries for electric vehicle applications.
Fuel Cells: Introduction- importance and classification of fuel cells - description, principle, components, applications of fuel cells: H2-O2 fuel cell, alkaline fuel cell, molten carbonate fuel cell and direct methanol fuel cells.
Literature Review Basics and Understanding Reference Management.pptxDr Ramhari Poudyal
Three-day training on academic research focuses on analytical tools at United Technical College, supported by the University Grant Commission, Nepal. 24-26 May 2024
Using recycled concrete aggregates (RCA) for pavements is crucial to achieving sustainability. Implementing RCA for new pavement can minimize carbon footprint, conserve natural resources, reduce harmful emissions, and lower life cycle costs. Compared to natural aggregate (NA), RCA pavement has fewer comprehensive studies and sustainability assessments.
Harnessing WebAssembly for Real-time Stateless Streaming PipelinesChristina Lin
Traditionally, dealing with real-time data pipelines has involved significant overhead, even for straightforward tasks like data transformation or masking. However, in this talk, we’ll venture into the dynamic realm of WebAssembly (WASM) and discover how it can revolutionize the creation of stateless streaming pipelines within a Kafka (Redpanda) broker. These pipelines are adept at managing low-latency, high-data-volume scenarios.
Introduction- e - waste – definition - sources of e-waste– hazardous substances in e-waste - effects of e-waste on environment and human health- need for e-waste management– e-waste handling rules - waste minimization techniques for managing e-waste – recycling of e-waste - disposal treatment methods of e- waste – mechanism of extraction of precious metal from leaching solution-global Scenario of E-waste – E-waste in India- case studies.
Authentication and Access Control in e-Health Systems in the Cloud Computing
1. Authentication and Access Control in e-Health
Systems in the Cloud
Nafiseh Kahani, Khalid Elgazzar, James R. Cordy
School of Computing
Queen’s University, Canada
3. Introduction
Security and access control issues are a strong barrier for users to
use cloud computing systems
How can we authenticate the various communicating entities?
How can we manage access by these communicating entities?
• Traditional Access Control mechanisms
5. System Architecture and Parameters
ID; Public Keys (AAM and CS); System Parameters
Data is represented in n fragments (R={R1,...,Rn})
Secret, sensitive, and official security levels based on the administrative,
medical, and accounting records
Each fragment is encrypted with its category-specific secret key and generates
C = {C1, ..., Cn}
Multiple keywords in the search request (encrypted searchable indexes) [1]
6. Authentication Phase
Authentication is performed based on the secret ID that users
receive during their registration with the system
The zero-knowledge protocol can prove the identity of the user
anonymously without revealing any knowledge
8. Access Control Phase
Subjects and Roles
• S = {S1,...,Sn} and R = {R1,...,Rm}
Subject Attributes
Objects
• O = {O1, ..., On}
Object Attributes
Actions/Rights
• a = {a1,...,am} and α={α1,…,αn}
10. Access Control- Dynamic
Using the concept “Intention” to narrow down the user rights
according to the intention per request
Intentions presented in hierarchical relationships [3]
Each leaf node is linked to a record:
• Trapdoor
• Rights
• Fragment
13. Features and Security Risk Analysis
Data Confidentiality
• standard symmetric-key algorithm
Fine-grained Access Control
• two-step access control
Non-repudiation
• Using logs
Authentication and User Privacy
• the zero-knowledge protocol does not reveal the user ID
15. Conclusion
The proposed method simultaneously achieves authentication and
fine-grained access control
The authentication process, based on a zero-knowledge method,
determines authorized users
The access control method determines user privileges to access
data
17. References
[1] N. Cao, C. Wang, M. Li, K. Ren, and W. Lou, "Privacy-preserving multi-keyword ranked search over
encrypted cloud data," IEEE Transactions on Parallel and Distributed Systems, vol. 25, pp. 222-233, 2014.
[2] A. A. Kalam, R. E. Baida, P. Balbiani, S. Benferhat, F. Cuppens, Y. Deswarte, and G. Trouessin,
"Organization based access control," in IEEE 4th International Workshop on Policies for Distributed
Systems and Networks, pp. 120-131, June 2003.
[3] J. W. Byun, E. Bertino, and N. Li, "Purpose based access control of complex data for privacy
protection," in Proceedings of the 10th ACM Symposium on Access Control Models and Technologies, pp. 102-110,
June 2005.
[4] M. Fallah and N. Kahani, "TDPF: A traceback-based distributed packet filter to mitigate DDoS
attacks," Security and Communication Networks, vol. 7, pp. 1-20, 2014.
Editor's Notes
The opportunity to access on-demand, unbounded computation and storage resources has increasingly motivated users to move their health records from local data centers to the cloud environment. This change can reduce the costs associated with the management of data sharing, communication overhead and improve Quality of Service (QoS). Processing, storing, hosting and archiving data related to e-Health systems without physical access and control can exacerbate authentication and access control issues in this new environment. Therefore, convincing users to move sensitive medical records to the cloud environment requires implementing secure and strong authentication and access control methods to protect the data.
This paper proposes a new information access method that preserves both authentication and access control in cloud-based e-Health systems. Our method is based on a zero-knowledge protocol combined with two-stage keyed access control. For each access request, the minimum access rights are extracted from the user's maximum rights. To establish secure connections between different entities in the system, a two-step combination of public key encryption and DUKPT is used. We analyze our scheme with respect to data confidentiality and resistance to common attacks on the network.
e-Health services provide efficient exchange of the patient's data. The cloud computing paradigm provides great opportunities to support flexible and controlled information exchange. To perform safe and secure data exchange, we must address two concerns: (1) How can we authenticate the various communicating entities?
And (2) How can we manage access by these communicating entities?
To address the first concern, we need a secure and accurate authentication protocol to authenticate authorized entities, while preserving the privacy of users. This authentication protocol must grant access to users according to policies set by data owners. Regarding the second concern, traditional access control mechanisms cannot be applied in this environment since they assume servers in the same domain are fully trusted and can enforce access control policies. However, in cloud environments, servers are outside of the user’s trusted domain, and hence sensitive data must be protected from unauthorized access. These security concerns become more challenging in real-time health-care services.
The proposed system is composed of the following entities: data owners, service provider (SP), cloud server (CS), users (U), and an authentication and access control manager (AAM). The service provider sets and administrates access policies on behalf of data owners. The AAM is a trusted server which manages the process of authentication and access control. Data owners outsource their data files to an SP which determines and manages access control on their behalf. The CS provides the required resources to data owners to store and manage their data on the cloud. This server is typically managed by a cloud service provider (CSP) and can provide abundant storage capacity and computational power on demand. The architecture minimizes the computational overhead on the data owner's side as well as the time that data owners are required to be available online.
We suppose that data is represented in n fragments R = {R1, ..., Rn}. These fragments do not overlap with each other and cover the whole data set. In addition, R is categorized into three levels of security, as secret, sensitive, and official, based on the administrative records, medical records and accounting records, respectively. A different encryption secret key is used for each category. This classification and usage of different secret keys decreases the risk of data disclosure, especially during searching. Access levels are structured so that users with a lower access level cannot have access to data from higher levels without the explicit consent of the owner. Searching with multiple keywords over the encrypted indexes produces accurate results.
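The per-category keying described above can be illustrated with a toy sketch: one key is derived per security level, and each fragment is enciphered with a keyed XOR keystream. This is not the paper's concrete cipher (the scheme uses a standard symmetric-key algorithm); the master secret, category names, and keystream construction here are assumptions for illustration only.

```python
import hashlib
import hmac

# Map each record type to its security level, as described above.
CATEGORIES = {"administrative": "secret",
              "medical": "sensitive",
              "accounting": "official"}

def category_key(master_secret: bytes, level: str) -> bytes:
    # One independent key per security level, derived from a master secret.
    return hmac.new(master_secret, level.encode(), hashlib.sha256).digest()

def xor_stream(key: bytes, data: bytes) -> bytes:
    # Toy keystream: hash of key || counter. XOR is self-inverse,
    # so the same call both encrypts and decrypts.
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

master = b"owner-master-secret"            # hypothetical owner secret
fragment = b"patient blood pressure log"   # one fragment R_i
key = category_key(master, CATEGORIES["medical"])
ciphertext = xor_stream(key, fragment)     # ciphertext C_i
assert xor_stream(key, ciphertext) == fragment
```

Because categories use independent keys, compromising the accounting key reveals nothing about the medical fragments.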
Users may enter private and sensitive personal information during registration that they do not want to reveal to the AAM. The challenge-response protocol can prove the identity of the user anonymously without revealing any knowledge.
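A zero-knowledge challenge-response of this kind can be sketched with a Schnorr-style identification round: the prover demonstrates knowledge of the secret behind a public commitment without revealing it. The tiny group parameters below are illustrative only (a real deployment uses large groups), and the mapping of the registration secret to the exponent x is an assumption, not the paper's exact protocol.

```python
import secrets

# Toy Schnorr-style identification. p = 2q + 1, and g = 4 generates
# the order-q subgroup of Z_p^*.
p, q, g = 2039, 1019, 4

x = secrets.randbelow(q)          # user's secret (received at registration)
y = pow(g, x, p)                  # public value stored by the verifier (AAM)

# One round of the protocol:
r = secrets.randbelow(q)          # prover's ephemeral nonce
t = pow(g, r, p)                  # commitment sent to the verifier
c = secrets.randbelow(q)          # verifier's random challenge
s = (r + c * x) % q               # prover's response

# Verifier checks g^s == t * y^c (mod p) and learns nothing about x.
assert pow(g, s, p) == (t * pow(y, c, p)) % p
```

The check holds because g^s = g^(r + cx) = g^r * (g^x)^c = t * y^c mod p; the transcript (t, c, s) can be simulated without x, which is what makes the proof zero-knowledge.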
si is a sequence index used to derive subsequent session keys. It also serves as a sequence number that prevents attackers from mounting replay attacks. The Time value helps prevent a forced delay, in which an attacker deliberately takes a long time to respond.
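The double duty of si can be sketched as follows: each index deterministically derives a fresh session key, and a receiver rejects any message whose index is not strictly increasing. The HMAC-based derivation is an assumption for illustration, not the paper's exact key-derivation function.

```python
import hashlib
import hmac

def derive_session_key(base_key: bytes, si: int) -> bytes:
    # Each sequence index yields an independent session key.
    return hmac.new(base_key, si.to_bytes(8, "big"), hashlib.sha256).digest()

base = b"initial-shared-key"   # hypothetical shared secret
last_seen = 3                  # highest si accepted so far

def accept(si: int) -> bool:
    # A replayed message reuses an old si and is rejected.
    global last_seen
    if si <= last_seen:
        return False
    last_seen = si
    return True

assert accept(4)           # fresh message accepted
assert not accept(4)       # replay of the same si rejected
k4 = derive_session_key(base, 4)
k5 = derive_session_key(base, 5)
assert k4 != k5            # distinct key per index
```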
Security policies specify and regulate the authorized actions that can be carried out in the system. We define a static access policy that determines the maximum rights of the users, and then narrow down to the minimum rights that fit the user's intention to access the required data. Subject attributes show the characteristics and capabilities of a subject; these include role and job description. Object attributes carry important information about an object, such as its security level. Rights are the set of actions that the subject has permission to perform on objects.
A subject si is authorized in the system if si ∈ S. We use attributes to show the characteristics and capabilities of a subject. These attributes include role, job description, etc.
The entity objects are health records such as medical records, accounting records, and administrative records, divided into n fragments. The object set
is defined as O = {O1, ..., On}. To define the static policies, we adopt the relationships Employ and Define, which means that the organization, org, defines action ai in terms of rights. The SP grants the right αi on object Oj to the subject Si with the role Rj, from the organization org. This definition permits only authorized users to have access to specific health records. This relationship can be defined by the SP in the case that the owner wants to seek advice from a subject.
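A minimal sketch of these relationships, in the spirit of organization-based access control [2]: a grant of right αi on object Oj succeeds only if the organization employs the subject in the stated role. The subject, role, and object names are illustrative assumptions, not from the paper.

```python
from collections import defaultdict

# Employ(org, subject) -> role, as described above.
employs = {("hospital", "alice"): "physician"}

# Static maximum rights: subject -> {(right, object)}.
grants = defaultdict(set)

def grant(org, subject, role, right, obj):
    # The SP grants right 'right' on object 'obj' to 'subject' in 'role',
    # but only if org actually employs the subject in that role.
    if employs.get((org, subject)) == role:
        grants[subject].add((right, obj))

grant("hospital", "alice", "physician", "read", "O_medical")
grant("hospital", "alice", "nurse", "read", "O_accounting")  # wrong role: ignored

assert grants["alice"] == {("read", "O_medical")}
```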
The relationships of the previous slide are used to determine the maximum rights of each subject. When the system receives an authenticated access request, it extracts the least rights that satisfy the request. To achieve this goal, we use the concept of "Intention", which requires the user to express her/his reasons for accessing the data. To represent all possible intentions, different paths in the tree represent various intentions, from specialized to generalized ones. A trapdoor is used to search over the encrypted documents in the cloud and to retrieve the required data fragments.
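The intention hierarchy can be sketched as a tree whose internal nodes generalize intentions and whose leaves carry the record (trapdoor, rights, fragment) used to serve the request, as on slide 10. The tree contents below are hypothetical examples.

```python
# Intention tree: internal nodes are general intentions; each leaf holds
# the (trapdoor, rights, fragment) record linked to that specific intention.
tree = {
    "treatment": {
        "diagnosis": {"trapdoor": "td_diag",
                      "rights": {"read"},
                      "fragment": "R2"},
        "prescription": {"trapdoor": "td_rx",
                         "rights": {"read", "append"},
                         "fragment": "R3"},
    }
}

def resolve(path):
    # Walk from the general intention down to the leaf record.
    node = tree
    for step in path:
        node = node[step]
    return node

leaf = resolve(["treatment", "diagnosis"])
assert leaf["fragment"] == "R2" and leaf["rights"] == {"read"}
```

A more specific stated intention thus resolves to a narrower set of rights and a smaller set of fragments.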
The PDC uses static policies, object attributes, and subject attributes to check whether the user intention complies with the user's role(s), job descriptions and so on. If the match succeeds, the PDC passes the authenticated access request, including the intention, to the RSC. To determine the least access rights, the RSC receives access requests with admitted status from the PDC and generates the access rights that the user needs to perform the required actions.
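The two-step flow above can be sketched as a PDC check (is the intention compatible with the role?) followed by an RSC lookup (what are the least rights for that intention?). The policy tables are illustrative assumptions.

```python
# PDC policy: which intentions each role may state.
role_intentions = {"physician": {"diagnosis", "prescription"}}

# RSC mapping: least rights needed to carry out each intention.
least_rights = {"diagnosis": {"read"},
                "prescription": {"read", "append"}}

def pdc(role, intention):
    # Policy Decision Component: admit only compatible intentions.
    return intention in role_intentions.get(role, set())

def rsc(intention):
    # Right Selection Component: map an admitted intention to least rights.
    return least_rights[intention]

rights = rsc("diagnosis") if pdc("physician", "diagnosis") else set()
assert rights == {"read"}
```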
Here N is a sequence number that prevents a replay attack during data access, so attackers cannot replay outdated data to the user.
The CS decrypts the message with its private key and verifies the token by applying the system public key on the token’s signature. The user selects a secret key KSCS as a session key. The CS also verifies that the token is timely valid. The CS then searches the encrypted data with the trapdoor included in the token.
The decryption keys are only shared with the authorized users. Communication between entities is carried out in encrypted form, since everything is encrypted, from data to trapdoors.
In our Access Control, the first step limits the rights of users based on their roles. The second step then limits the user rights based on the intention of his/her task. Moreover, changing the trapdoor results in different file access rights. Therefore, the data owner is able to define and enforce a flexible access policy for each user.
All communications between parties in our model are encrypted, making it highly immune to replay attacks. The time-stamp of tokens and the timeout period ensure the validity and freshness of the requested message. During the authentication process, the AAM determines the value of Time, which means that the user has to respond within a specific time frame; this effectively prevents a forced delay in response.
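The freshness check described above can be sketched as a token time-stamp validated against a timeout window, so that delayed or replayed tokens are rejected. The 30-second window is an assumption; the paper does not specify a concrete value.

```python
import time

TIMEOUT = 30.0  # assumed timeout window, in seconds

def token_is_fresh(token_ts, now=None):
    # Accept the token only if its time-stamp is not in the future and
    # not older than the timeout window.
    now = time.time() if now is None else now
    return 0.0 <= now - token_ts <= TIMEOUT

now = 1_000_000.0
assert token_is_fresh(now - 5, now)        # within the window
assert not token_is_fresh(now - 60, now)   # stale (possibly replayed) token
assert not token_is_fresh(now + 10, now)   # future time-stamp rejected
```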