Literature review: Image Segmentation Using Deep Learning: A Survey (Toru Tamaki)
Shervin Minaee, Yuri Boykov, Fatih Porikli, Antonio Plaza, Nasser Kehtarnavaz, Demetri Terzopoulos, Image Segmentation Using Deep Learning: A Survey, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 44, no. 7, pp. 3523-3542, 1 July 2022, doi: 10.1109/TPAMI.2021.3059968.
https://ieeexplore.ieee.org/document/9356353
https://arxiv.org/abs/2001.05566
Report on the First Knowledge Graph Reasoning Challenge 2018 - Toward the eXp... (KnowledgeGraph)
JIST2019: The 9th Joint International Semantic Technology Conference
The premium Asian forum on Semantic Web, Knowledge Graph, Linked Data and AI on the Web. Nov. 25-27, 2019, Hangzhou, China.
http://jist2019.openkg.cn/
This document discusses Wasserstein GAN (WGAN) and how it improves upon traditional GANs. WGAN uses the Wasserstein distance as its loss function instead of the Jensen-Shannon divergence used in traditional GANs, which allows for more stable training with less mode collapse. The Wasserstein distance is continuous even where other divergences are not, which helps gradients flow during training. However, the Wasserstein distance is computationally intractable, so WGAN uses weight clipping to make the critic Lipschitz continuous and allow the distance to be estimated. Overall, WGAN provides more meaningful learning curves, and its hyperparameters are easier to tune than those of traditional GANs.
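The weight-clipping step mentioned above can be sketched in a few lines. This is an illustrative toy, not the paper's code: the `clip_weights` helper is a made-up name, though the default threshold `c = 0.01` matches the value the WGAN paper reports using.

```python
# Illustrative sketch: after each critic update, WGAN clamps every critic
# parameter into [-c, c], a crude way to keep the critic Lipschitz continuous.

def clip_weights(weights, c=0.01):
    """Clamp each weight into [-c, c] (WGAN's weight-clipping constraint)."""
    return [max(-c, min(c, w)) for w in weights]

# Weights that drifted outside the box after a gradient step get pulled back:
updated = [0.5, -0.03, 0.004, -0.5]
print(clip_weights(updated))  # [0.01, -0.01, 0.004, -0.01]
```

Later work (WGAN-GP) replaces this clipping with a gradient penalty, since clipping can limit critic capacity, but the sketch above is the mechanism this summary describes.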
Mask R-CNN extends Faster R-CNN by adding a branch for predicting segmentation masks in parallel with bounding box recognition and classification. It introduces a new layer called RoIAlign to address misalignment issues in the RoIPool layer of Faster R-CNN. RoIAlign improves mask accuracy by 10-50% by removing quantization and properly aligning extracted features. Mask R-CNN runs at 5fps with only a small overhead compared to Faster R-CNN.
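The key idea behind RoIAlign is to sample the feature map at continuous, non-quantized coordinates using bilinear interpolation, instead of RoIPool's coordinate rounding. A minimal sketch (the function name and toy feature map are illustrative, not taken from the Mask R-CNN code):

```python
def bilinear_sample(feat, y, x):
    """Sample a 2-D feature map at continuous (y, x) by bilinear interpolation,
    the operation RoIAlign uses in place of RoIPool's quantization."""
    y0, x0 = int(y), int(x)
    y1 = min(y0 + 1, len(feat) - 1)
    x1 = min(x0 + 1, len(feat[0]) - 1)
    dy, dx = y - y0, x - x0
    # Weighted average of the four integer-grid neighbors.
    return (feat[y0][x0] * (1 - dy) * (1 - dx)
            + feat[y0][x1] * (1 - dy) * dx
            + feat[y1][x0] * dy * (1 - dx)
            + feat[y1][x1] * dy * dx)

feat = [[0.0, 1.0],
        [2.0, 3.0]]
print(bilinear_sample(feat, 0.5, 0.5))  # 1.5, the average of the four neighbors
```

Because the sampling points are never rounded, the extracted features stay aligned with the RoI, which is what the 10-50% mask-accuracy improvement is attributed to.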
- Pakistan has the highest incidence of breast cancer in Asia, with over 83,000 new cases reported annually. Breast cancer is the leading cause of cancer mortality among females in Pakistan.
- Early detection of breast cancer significantly improves survival rates, with a 100% 5-year survival rate for cancers detected early. However, Pakistan currently lacks a system for widespread breast cancer screening.
- Artificial intelligence can help by assisting oncologists in diagnosing breast cancer. A machine learning model trained on breast cancer data achieved over 96% accuracy in predicting malignant tumors, which could help detect cancers earlier.
Predictive Analysis of Breast Cancer Detection using Classification Algorithm (Sushanti Acharya)
Dissertation project titled "Predictive analysis of Breast Cancer detection using Classification". The research used the Breast Cancer Wisconsin (Diagnostic) dataset for analysis. Machine learning models were built in R using several algorithms, and the results were visualized to identify the most accurate model (SVM in this case).
Application of Image Segmentation in Brain Tumor Detection (Myat Myint Zu Thin)
This document discusses applications of image segmentation in brain tumor detection. It begins by defining brain tumors and different types. It then discusses various image segmentation methods that can be used for brain tumor segmentation, including k-means clustering, region-based watershed algorithm, region growing, and active contour methods. It demonstrates how these methods can be implemented in Python for segmenting tumors from MRI images. The document also discusses computer-aided diagnosis systems and the roles of artificial intelligence and machine learning in medical image analysis and cancer diagnosis using image processing.
The document discusses brain tumor segmentation from MRI images. It describes how brain tumors are classified, outlines the segmentation process which includes preprocessing, segmentation, feature extraction and classification. Local binary patterns and support vector machines are used for feature extraction and classification. The accuracy, sensitivity and specificity are calculated to measure the performance of the segmentation system. Figures show examples of segmented images and comparisons of results from support vector machines and decision tree approaches.
Correction: the title on p. 7 should be "What makes ImageNet good for transfer learning?", not "Do Better ImageNet Models Transfer Better?". We sincerely apologize for the error.
Meta-survey presentation slides from cvpaper.challenge.
cvpaper.challenge is an initiative that captures the current state of the computer vision field and aims to create new trends. Members write paper summaries, develop ideas, hold discussions, implement methods, and submit papers, sharing all knowledge gained along the way. The goal for 2020 is to submit 30+ papers to top conferences.
http://xpaperchallenge.org/cv/
This document discusses clustering and anomaly detection in data science. It introduces the concept of clustering, which is grouping a set of data into clusters so that data within each cluster are more similar to each other than data in other clusters. The k-means clustering algorithm is described in detail, which works by iteratively assigning data to the closest cluster centroid and updating the centroids. Other clustering algorithms like k-medoids and hierarchical clustering are also briefly mentioned. The document then discusses how anomaly detection, which identifies outliers in data that differ from expected patterns, can be performed based on measuring distances between data points. Examples applications of anomaly detection are provided.
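The assign-then-update loop of k-means described above can be sketched in pure Python for 1-D data. This is a toy illustration under simplifying assumptions (scalar points, naive initialization from the first k points); real implementations work on vectors, randomize initialization, and handle empty clusters more carefully:

```python
def kmeans_1d(points, k, iters=20):
    """Toy 1-D k-means: assign each point to its nearest centroid, then move
    each centroid to the mean of its cluster, repeating a fixed number of times."""
    centroids = points[:k]  # naive init: first k points (real code randomizes)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda j: (p - centroids[j]) ** 2)
            clusters[nearest].append(p)
        # Empty clusters keep their previous centroid.
        centroids = [sum(c) / len(c) if c else centroids[j]
                     for j, c in enumerate(clusters)]
    return centroids

data = [1.0, 1.1, 0.9, 10.0, 10.1, 9.9]
print(sorted(kmeans_1d(data, k=2)))  # centroids settle near 1.0 and 10.0
```

The distance-based anomaly detection the summary mentions follows the same machinery: a point whose distance to its nearest centroid is unusually large relative to the rest of its cluster can be flagged as an outlier.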
This document summarizes recent developments in action recognition using deep learning techniques. It discusses early approaches using improved dense trajectories and two-stream convolutional neural networks. It then focuses on advances using 3D convolutional networks, enabled by large video datasets like Kinetics. State-of-the-art results are achieved using inflated 3D convolutional networks and temporal aggregation methods like temporal linear encoding. The document provides an overview of popular datasets and challenges and concludes with tips on training models at scale.
BIOMETRIC SMARTCARD AUTHENTICATION FOR FOG COMPUTING (IJNSA Journal)
In the IoT scenario, things at the edge can create significantly large amounts of data. Fog computing has recently emerged as the paradigm to address the needs of edge computing in Internet of Things (IoT) and Industrial Internet of Things (IIoT) applications. In a fog computing environment, much of the processing takes place in a router device closer to the edge, rather than having to be transmitted to the cloud. Authentication is an important issue for the security of fog computing, since services are offered to massive-scale end users by front fog nodes. Fog computing also faces new security and privacy challenges besides those inherited from cloud computing. Authentication helps to ensure and confirm a user's identity. Traditional password authentication does not provide enough security for data, and there have been instances where password-based authentication has been manipulated to gain access to data. Since conventional methods such as passwords do not serve the purpose of data security, research efforts have focused on biometric user authentication in fog computing environments. In this paper, we present biometric smartcard authentication to protect the fog computing environment.
This document discusses fog computing, which extends cloud computing to the edge of the network. It describes the existing cloud computing model and proposes fog computing as an alternative to address issues like latency. Key topics covered include security issues, privacy issues, potential scenarios and applications of fog computing, and ideas for future enhancement.
Authentication And Authorization Issues In Mobile Cloud Computing A Case Study (Angie Miller)
The document discusses authentication and authorization issues in mobile cloud computing. It presents the mobile cloud computing (MCC) security solution developed and applied by STMicroelectronics. The solution addresses issues like reducing the need to store multiple passwords/usernames for different services and simplifying security policy management. It takes into account the complexity of STMicroelectronics' geographical and organizational structure. The solution and the tools/technologies used are described. Conclusions on the solution are also discussed.
A STUDY ON ADOPTION OF BLOCKCHAIN TECHNOLOGY IN CYBERSECURITY (IRJET Journal)
This document discusses adopting blockchain technology in cybersecurity. It begins by introducing blockchain and its potential benefits for cybersecurity. These include decentralized data storage, improved availability against DDoS attacks, and enhanced security for IoT systems. The document then outlines the objectives of using blockchain to enhance cybersecurity by making systems more secure and tamper-proof. It presents the methodology and block diagram of how blockchain would work in a cybersecurity system. Several use cases are described, such as decentralized storage, availability, and IoT security. The document concludes by discussing common cybersecurity threats on blockchain networks and outlining the two-part workflow of an integrated blockchain-cybersecurity system.
Rough set method-cloud internet of things: a two-degree verification scheme ... (IJECEIAES)
The rapid development of innovations and the increasing use of the internet of things (IoT) in human life bring numerous challenges, owing to the absence of adequate capacity resources and the tremendous volumes of IoT information. These can be addressed by a cloud-based architecture; consequently, a series of challenging security and privacy concerns has emerged in the cloud-based IoT context. In this paper, a novel approach to providing security in cloud-based IoT environments is proposed. The approach mainly depends on rough set rules for guaranteeing security during data sharing (rough set method-cloud IoT (RSM-CIoTD)). The proposed RSM-CIoTD scheme guarantees secure communication between the user and the cloud service provider (CSP) in a cloud-based IoT. To manage unauthorized users, the RSM-CIoTD scheme utilizes a registered authority which performs a two-degree verification between the network entities. The security and privacy appraisal techniques utilize minimum and maximum trust values of past communication. Experiments show that the proposed system can efficiently and safely store the cloud service while outperforming other security methods.
The fast emergence of the internet of things (IoT) has introduced fog computing as an intermediate layer between end users and cloud datacenters. The fog computing layer is characterized by its closeness to end users for service provisioning compared to the cloud. However, security challenges remain a major concern in both the fog and cloud computing paradigms. In fog computing, one of the most destructive attacks is the man-in-the-middle (MitM) attack. Moreover, MitM attacks are hard to detect since they are performed passively at the network level. This paper proposes a MitM mitigation scheme for a fog computing architecture. The proposal maps the fog layer onto a software-defined network (SDN) architecture and integrates multipath TCP (MPTCP), the moving target defense (MTD) technique, and a reinforcement learning (RL) agent into one framework, which contributes significantly to improving the fog layer's resource utilization and security. The proposed scheme hardens network reconnaissance and discovery, thus improving network security against MitM attacks. The evaluation framework was tested in a simulation environment on Mininet, using the MPTCP kernel and the Ryu SDN controller. The experimental results show that the proposed scheme maintains network resiliency and improves resource utilization without adding significant overhead compared to traditional TCP.
EFFECTIVE METHOD FOR MANAGING AUTOMATION AND MONITORING IN MULTI-CLOUD COMPUT... (IJNSA Journal)
Multi-cloud is an advanced version of cloud computing that allows its users to utilize different cloud systems from several Cloud Service Providers (CSPs) remotely. Although it is a very efficient computing facility, threat detection, data protection, and vendor lock-in are the major security drawbacks of this infrastructure. These factors act as a catalyst in promoting serious cyber-crimes of the virtual world. Privacy and safety issues of a multi-cloud environment have been overviewed in this research paper. The objective of this research is to analyze some logical automation and monitoring provisions, such as monitoring Cyber-physical Systems (CPS), home automation, automation in Big Data Infrastructure (BDI), Disaster Recovery (DR), and secret protection. The results of this research investigation indicate that it is possible to avoid security snags of a multi-cloud interface by adopting these scientific solutions methodically.
Dissertations are among the most important pieces of work which students complete at university. And they allow you to work individually and on something that truly attracts you. Computer science is a hot field for researchers. Many topic ideas can be generated for a dissertation in this special branch of engineering.
Ph.D. Assistance serves as an external mentor to brainstorm your idea and translate that into a research model. Hiring a mentor or tutor is common and therefore let your research committee know about the same. We do not offer any writing services without the involvement of the researcher.
Learn More: https://bit.ly/3bWsGpz
Security and Privacy Issues of Fog Computing: A Survey (HarshitParkar6677)
Abstract. Fog computing is a promising computing paradigm that extends cloud computing to the edge of networks. Similar to cloud computing but with distinct characteristics, fog computing faces new security and privacy challenges besides those inherited from cloud computing. In this paper, we have surveyed these challenges and corresponding solutions in a brief manner.
This document summarizes a paper presented at the International Conference on Emerging Technology Trends (ICETT) in 2011. The paper proposes an architecture called Cloud Protection System (CPS) to provide increased security to cloud resources using virtualization. CPS monitors the integrity of guest virtual machines in a cloud system like Eucalyptus. It also proposes HypeSec, which controls inter-VM communication in the Qemu hypervisor according to security policies. The effectiveness of CPS implemented in Eucalyptus is shown by testing against the Sebek rootkit attack.
A survey of fog computing concepts, applications and issues (Rezgar Mohammad)
This document provides a survey of fog computing that discusses its key concepts, applications, and issues. It defines fog computing as a scenario that provides computation, storage, and networking services between end devices and cloud servers at the edge of the network. Representative applications of fog computing discussed include augmented reality, real-time video analytics, content delivery/caching, and mobile big data analytics. Potential issues covered include fog networking, quality of service concerns regarding connectivity, reliability, and capacity, and resource management challenges in dynamically provisioning and scheduling resources across fog nodes.
SECURITY AND PRIVACY AWARE PROGRAMMING MODEL FOR IOT APPLICATIONS IN CLOUD EN... (ijccsa)
This document summarizes a research paper on privacy-preserving techniques for IoT data in cloud environments. It introduces two differential privacy algorithms: 1) generic differential privacy (GenDP), which provides generalized privacy protection for homogeneous and heterogeneous IoT metadata through data partitioning; and 2) cluster-based differential privacy, which groups similar data into clusters before defining classifiers to validate privacy. The paper evaluates these techniques and finds that the cluster-based approach offers better security than customized interactive algorithms while maintaining data utility. Overall, the study presents new differential privacy methods for anonymizing IoT metadata stored in the cloud.
This document discusses the latest trends in cybersecurity, including increased use of machine learning and artificial intelligence to more effectively detect cyber threats. It also covers growing issues like ransomware attacks, the need for multi-factor authentication beyond passwords, and security challenges around cloud computing and the Internet of Things. Advantages of addressing these trends include better protecting networks and data from unauthorized access and vulnerabilities while enabling earlier threat detection. The conclusion emphasizes that new cybersecurity trends constantly emerge, so organizations must stay informed of developments to best secure themselves.
The document discusses several limitations of IoT-enabled automation solutions:
1. Cybersecurity and privacy concerns are significant as more devices are connected and hackers can more easily access building functions by exploiting vulnerabilities.
2. Lack of integration and interoperability standards means buildings use multiple incompatible protocols, increasing costs.
3. Data capturing and processing has limitations due to the expense of comprehensive sensor deployment and expert analysis needed to derive value from data.
Enabling Security-by-design in Smart Grids: An architecture-based approach (Massimiliano Masi)
An architectural approach to providing security-by-design in Smart Grids, with influence from the healthcare world. Slides presented at the DSOGRI.org workshop in Naples.
This document discusses Internet of Things (IoT) cloud integration and IoT cloud systems. It begins with an overview of cloud computing and the IoT. There are several common models for integrating IoTs and clouds, including using cloud platforms for data analytics and storage from sensors. Effective engineering of IoT cloud systems requires techniques like virtualization, composition and orchestration of services, and the ability to deploy across private, public and hybrid clouds. The integration of IoTs and clouds enables many application domains and helps connect physical things to online services.
Fog Computing is a paradigm that extends Cloud computing and services to the edge of the network. Similar to Cloud, Fog provides data, compute, storage, and application services to end-users. The motivation of Fog computing lies in a series of real scenarios, such as Smart Grid, smart traffic lights in vehicular networks and software defined networks.
A data quarantine model to secure data in edge computing (IJECEIAES)
Edge computing provides an agile data processing platform for latency-sensitive and communication-intensive applications through a decentralized cloud and geographically distributed edge nodes. Gaining centralized control over the edge nodes can be challenging due to security issues and threats. Among several security issues, data integrity attacks can lead to inconsistent data and intrude on edge data analytics. Further intensification of the attack makes it challenging to mitigate and to identify the root cause. Therefore, this paper proposes a new data quarantine model that mitigates data integrity attacks by quarantining intruders. Efficient quarantine-based security solutions in cloud computing, ad-hoc networks, and computer systems have motivated its adoption in edge computing. The data acquisition edge nodes identify the intruders and quarantine all the suspected devices through dimensionality reduction. During quarantine, the proposed concept builds reputation scores to determine falsely identified legitimate devices and sanitizes their affected data to regain data integrity. As a preliminary investigation, this work identifies an appropriate machine learning method, linear discriminant analysis (LDA), for dimensionality reduction. The LDA achieves 72.83% quarantine accuracy and 0.9 seconds training time, which is more efficient than other state-of-the-art methods. In future work, this would be implemented and validated with ground-truth data.
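To make the LDA step concrete, here is a from-scratch sketch of a two-class Fisher discriminant on 2-D points. This is an illustration under stated assumptions (toy data, two classes, two features), not the paper's implementation; it computes the projection direction w = Sw^-1 (m_a - m_b), along which the two classes separate best:

```python
def fisher_direction(class_a, class_b):
    """Two-class Fisher LDA on 2-D points: returns the projection direction
    w = Sw^-1 (m_a - m_b), where Sw is the within-class scatter matrix."""
    def mean(pts):
        n = len(pts)
        return [sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n]

    ma, mb = mean(class_a), mean(class_b)
    # Within-class scatter Sw = sum over both classes of (x - m)(x - m)^T.
    s = [[0.0, 0.0], [0.0, 0.0]]
    for pts, m in ((class_a, ma), (class_b, mb)):
        for p in pts:
            d = [p[0] - m[0], p[1] - m[1]]
            for i in range(2):
                for j in range(2):
                    s[i][j] += d[i] * d[j]
    # Invert the 2x2 scatter matrix directly.
    det = s[0][0] * s[1][1] - s[0][1] * s[1][0]
    inv = [[s[1][1] / det, -s[0][1] / det],
           [-s[1][0] / det, s[0][0] / det]]
    dm = [ma[0] - mb[0], ma[1] - mb[1]]
    return [inv[0][0] * dm[0] + inv[0][1] * dm[1],
            inv[1][0] * dm[0] + inv[1][1] * dm[1]]

# Two clusters separated along the x axis: the learned direction points along x.
a = [(0.0, 0.0), (1.0, 1.0), (0.0, 1.0), (1.0, 0.0)]
b = [(5.0, 0.0), (6.0, 1.0), (5.0, 1.0), (6.0, 0.0)]
print(fisher_direction(a, b))  # [-2.5, 0.0]
```

Projecting each device's feature vector onto w reduces it to a single score, which is the kind of dimensionality reduction the quarantine step described above would operate on.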
IRJET - A Joint Optimization Approach to Security and Insurance Managemen... (IRJET Journal)
This document presents a joint optimization approach for security and cyber insurance management in cloud computing. It proposes using stochastic optimization to optimally provision both security services and insurance to protect against uncertainty from pricing, traffic, and cyberattacks. The model formulates the problem as a mixed integer program and introduces a partial Lagrange multiplier algorithm that exploits total unimodularity to find an optimal solution in polynomial time. The approach screens incoming data traffic, handles unsafe packets detected by security services, and redirects unhandled packets to an insurance management process to calculate damages and refund customers.
Similar to Enabling a Zero Trust Architecture in Smart Grids through a Digital Twin (20)
This document discusses the launch of a Cybersecurity Task Force within ASECAP to address cybersecurity issues in the road transportation sector. It notes that while most industries have undergone digital transformation, the road sector's cybersecurity maturity is still developing. The task force aims to initially represent the sector to the European Union Agency for Cybersecurity (ENISA). It conducted a questionnaire of ASECAP members that found most operate critical digital services but few have certified security governance. The task force seeks to increase harmonization of cybersecurity postures among members and engage in international cooperation activities.
Securing Critical Infrastructures with a cybersecurity digital twin (Massimiliano Masi)
Critical infrastructures are common targets for cyber and physical attacks. Smart grids, water transport systems, railways, and motorways witness an increase in malware and attacks, partially due to IT/OT convergence. Usually, critical infrastructures are composed of legacy software or hardware that cannot be easily patched or upgraded, facilitating the work of attackers by exposing vulnerabilities that were solved in IT decades ago. Moreover, it is usually impossible to have a test system for such infrastructures where a security countermeasure can be evaluated for its impact. In fact, in OT systems availability is of utmost importance, so adding a security countermeasure has to be carefully evaluated to avoid hindering that property. To overcome these shortcomings, digital twins are used. This talk presents how digital twins specifically devised for cybersecurity are used to evaluate threats in cyber-physical systems in an industrial environment. In particular, it shows how a digital twin is systematically derived from the architectural representation of a real-world industrial system (the "collaborative intelligent transport system") and how security measures are evaluated with a specific architectural view.
Security and Safety by Design in the Internet of Actors an Architectural Appr...Massimiliano Masi
The document proposes an architectural approach to designing complex systems like smart grids and healthcare projects with security and safety by design. It introduces the Internet of Actors framework which designs systems using smart actors that cooperate through roles and business processes. The framework is enhanced with an Architecture Development Method and by mapping actors to the RAMI 4.0 reference model. This allows applying the Risk and Impact Assessment Methodology Security Steps at each stage of development to systematically achieve security goals. The approach aims to provide governance, sustainability, security and safety from the early design phases.
Securing Mobile e-Health Environments by Design: A Holistic Architectural App...Massimiliano Masi
The document proposes a holistic architectural approach for securing mobile e-health environments. It combines the Reference Architecture Model Industrie 4.0 (RAMI 4.0), the Reference Model of Information Assurance and Security (RMIAS), and standards like Integrating the Healthcare Enterprise (IHE) and Fast Healthcare Interoperability Resources (FHIR) to address security and interoperability throughout the lifecycle of medical devices. The approach involves applying RMIAS cycles at each layer of the RAMI architecture to integrate ubiquitous medical devices into healthcare IT infrastructures in a secure-by-design manner. A tool called MOSAA is being developed to enable security architects to formally model and evaluate such architectures.
The need for interoperability in blockchain-based initiatives to facilitate c...Massimiliano Masi
Slides for the IEEE Blockchain Symposium in Glasgow, https://blockchain.ieee.org/standards/clinicaltrialseurope18, https://blockchain.ieee.org/standards/clinicaltrialseurope18/speakers
Blockchain technology has many potential use cases in healthcare, but also faces challenges regarding interoperability, security, and performance. While blockchain investments peaked in 2017-2018, many projects have failed due to a lack of real use cases. For healthcare, appropriate uses of blockchain may include payments and supply chain applications that do not require storing sensitive medical data on the blockchain. Overall, blockchain remains an emerging technology that could play a role in healthcare if standards, security, and technical limitations are properly addressed.
The document proposes automating the design of smart grid solution architectures using a formal model. It introduces an approach used in healthcare to define integration profiles and transactions between actors. This is formalized to automatically evaluate interdependencies and quality attributes. As a proof of concept, secure message exchange in a virtual power plant use case is modeled to check throughput requirements. Future work aims to integrate additional smart grid reference architecture components and quality metrics into the formal evaluation.
This is the introductory material to blockchain that I had at the Firenze Linux User Group meeting http://www.firenze.linux.it/2018/02/il-bitcoin-e-le-altre/
Distributed Ledger Technologies have just left the peak of Gartner's Hype Cycle for Emerging Technologies of 2017. However, the state of the art for blockchain-based initiatives in healthcare has not yet been reached, mostly due to a lack of awareness of the need for interoperability among blockchain practitioners.
Following the adage "The nice thing about standards is that you have so many to choose from," the GrapevineWorld project brings together DLT technologies in the healthcare context following the rules set by the IHE international standardisation body, whose specifications are the pillars of continental healthcare information exchange.
First, this presentation will introduce the IHE governance model.
Then it will discuss the benefits of DLTs and introduce the Grapevine research ecosystem.
A governance model for ubiquitous medical devices accessing eHealth data: the...Massimiliano Masi
The Electronic Health Record (EHR) is a reality in almost all the EU and USA regions.
The introduction of EHRs dramatically reduced the need for paper-based records, resulting in improved patient care, including the "freedom of movement" principle across countries. EHRs contain very sensitive information (Protected Health Information, PHI) and are governed by several acts and international regulations defined by each country. Key principles for this sector are interoperability and security. Two overarching standards address them: FHIR and IHE. This short presentation aims to provide an overall status of eHealth security and interoperability, common pitfalls, and a description of common architectures for connecting medical devices to a patient's EHR.
Addressing Security and Privacy through IHE Profiles Massimiliano Masi
This is the talk that I gave at Med-e-Tel 2015, presenting IHE security profiles and how to exploit IHE to fulfill the security needs of local, regional, national, and continental healthcare information exchange.
When I was asked to give a companion lecture in support of ‘The Philosophy of Science’ (https://shorturl.at/4pUXz) I decided not to walk through the detail of the many methodologies in order of use. Instead, I chose to employ a long standing, and ongoing, scientific development as an exemplar. And so, I chose the ever evolving story of Thermodynamics as a scientific investigation at its best.
Conducted over a period of >200 years, Thermodynamics R&D, and application, benefitted from the highest levels of professionalism, collaboration, and technical thoroughness. New layers of application, methodology, and practice were made possible by the progressive advance of technology. In turn, this has seen measurement and modelling accuracy continually improved at a micro and macro level.
Perhaps most importantly, Thermodynamics rapidly became a primary tool in the advance of applied science/engineering/technology, spanning micro-tech, to aerospace and cosmology. I can think of no better a story to illustrate the breadth of scientific methodologies and applications at their best.
Phenomics assisted breeding in crop improvementIshaGoswami9
As the global population grows toward roughly 9 billion by 2050, and with climate change compounding resource shortages, it will be difficult to meet the food requirements of such a large population. Crop yield and quality therefore need to be improved in a sustainable way over the coming decades, and genetic improvement by breeding is the best way to increase crop productivity. With the rapid progression of functional genomics, an increasing number of crop genomes have been sequenced and dozens of genes influencing key agronomic traits have been identified. However, current genome sequence information has not been adequately exploited for understanding complex, multi-gene traits, owing to a lack of crop phenotypic data. Efficient, automatic, and accurate technologies and platforms that can capture phenotypic data linkable to genomic information at all growth stages have become as important as genotyping; the shortage of high-throughput phenotyping has become the major bottleneck restricting crop breeding. Plant phenomics has been defined as the high-throughput, accurate acquisition and analysis of multi-dimensional phenotypes during crop growth at multiple levels, including the cell, tissue, organ, individual plant, plot, and field levels. With the rapid development of novel sensors, imaging technology, and analysis methods, numerous infrastructure platforms have been developed for phenotyping.
Unlocking the mysteries of reproduction: Exploring fecundity and gonadosomati...AbdullaAlAsif1
The pygmy halfbeak, Dermogenys colletei, known for its viviparous nature, presents an intriguing case of relatively low fecundity, raising questions about potential compensatory reproductive strategies employed by this species. Our study examines fecundity and the Gonadosomatic Index (GSI) in the pygmy halfbeak, D. colletei (Meisner, 2001), an intriguing viviparous fish indigenous to Sarawak, Borneo. We hypothesize that the pygmy halfbeak may exhibit unique reproductive adaptations to offset its low fecundity, thus enhancing its survival and fitness. To address this, we conducted a comprehensive study utilizing 28 mature female specimens of D. colletei, carefully measuring fecundity and GSI to shed light on the reproductive adaptations of this species. Our findings reveal that D. colletei indeed exhibits low fecundity, with a mean of 16.76 ± 2.01, and a mean GSI of 12.83 ± 1.27, providing crucial insights into the reproductive mechanisms at play in this species. These results underscore the existence of unique reproductive strategies in D. colletei, enabling its adaptation and persistence in Borneo's diverse aquatic ecosystems, and call for further ecological research to elucidate these mechanisms. This study contributes to a better understanding of viviparous fish in Borneo and to the broader field of aquatic ecology, enhancing our knowledge of species adaptations to unique ecological challenges.
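The Gonadosomatic Index referenced above is conventionally computed as gonad mass divided by total body mass, times 100. A minimal sketch of that formula follows; the sample values are illustrative, not measurements from the study:

```python
def gonadosomatic_index(gonad_mass_g: float, body_mass_g: float) -> float:
    """GSI (%) = 100 * gonad mass / total body mass."""
    if body_mass_g <= 0:
        raise ValueError("body mass must be positive")
    return 100.0 * gonad_mass_g / body_mass_g

# Illustrative values only (not data from the study):
print(round(gonadosomatic_index(0.13, 1.0), 1))  # → 13.0
```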
Deep Behavioral Phenotyping in Systems Neuroscience for Functional Atlasing a...Ana Luísa Pinho
Functional Magnetic Resonance Imaging (fMRI) provides means to characterize brain activations in response to behavior. However, cognitive neuroscience has been limited to group-level effects referring to the performance of specific tasks. To obtain the functional profile of elementary cognitive mechanisms, the combination of brain responses to many tasks is required. Yet, to date, both structural atlases and parcellation-based activations do not fully account for cognitive function and still present several limitations. Further, they do not adapt overall to individual characteristics. In this talk, I will give an account of deep-behavioral phenotyping strategies, namely data-driven methods in large task-fMRI datasets, to optimize functional brain-data collection and improve inference of effects-of-interest related to mental processes. Key to this approach is the employment of fast multi-functional paradigms rich in features that can be well parametrized and, consequently, facilitate the creation of psycho-physiological constructs to be modelled with imaging data. Particular emphasis will be given to music stimuli when studying high-order cognitive mechanisms, due to their ecological nature and their capacity to enable complex behavior composed of discrete entities. I will also discuss how deep-behavioral phenotyping and individualized models applied to neuroimaging data can better account for the subject-specific organization of domain-general cognitive systems in the human brain. Finally, the accumulation of functional brain signatures brings the possibility to clarify relationships among tasks and create a univocal link between brain systems and mental functions through: (1) the development of ontologies proposing an organization of cognitive processes; and (2) brain-network taxonomies describing functional specialization.
To this end, tools to improve commensurability in cognitive science are necessary, such as public repositories, ontology-based platforms and automated meta-analysis tools. I will thus discuss some brain-atlasing resources currently under development, and their applicability in cognitive as well as clinical neuroscience.
The use of Nauplii and metanauplii artemia in aquaculture (brine shrimp).pptxMAGOTI ERNEST
Although Artemia has been known to man for centuries, its use as a food for the culture of larval organisms apparently began only in the 1930s, when several investigators found that it made an excellent food for newly hatched fish larvae (Litvinenko et al., 2023). As aquaculture developed in the 1960s and '70s, the use of Artemia became more widespread, owing both to its convenience and to its nutritional value for larval organisms (Arenas-Pardo et al., 2024). The fact that dormant Artemia cysts can be stored for long periods in cans and then used as an off-the-shelf food requiring only 24 h of incubation makes them the most convenient, least labor-intensive live food available for aquaculture (Sorgeloos & Roubach, 2021). The nutritional value of Artemia, especially for marine organisms, is not constant but varies both geographically and temporally. During the last decade, however, both the causes of Artemia's nutritional variability and methods to improve poor-quality Artemia have been identified (Loufi et al., 2024).
Brine shrimp (Artemia spp.) are used in marine aquaculture worldwide. Annually, more than 2,000 metric tons of dry cysts are used for the cultivation of fish, crustacean, and shellfish larvae. Brine shrimp are important to aquaculture because newly hatched nauplii (larvae) provide a food source for many fish fry (Mozanzadeh et al., 2021). The culture and harvesting of brine shrimp eggs represent another aspect of the aquaculture industry. Nauplii and metanauplii of Artemia play a crucial role in aquaculture due to their nutritional value and suitability as live feed for many aquatic species, particularly in larval stages (Sorgeloos & Roubach, 2021).
Or: Beyond linear.
Abstract: Equivariant neural networks are neural networks that incorporate symmetries. The nonlinear activation functions in these networks result in interesting nonlinear equivariant maps between simple representations, and motivate the key player of this talk: piecewise linear representation theory.
Disclaimer: No one is perfect, so please mind that there might be mistakes and typos.
dtubbenhauer@gmail.com
Corrected slides: dtubbenhauer.com/talks.html
The debris of the ‘last major merger’ is dynamically youngSérgio Sacani
The Milky Way's (MW) inner stellar halo contains an [Fe/H]-rich component with highly eccentric orbits, often referred to as the 'last major merger.' Hypotheses for the origin of this component include Gaia-Sausage/Enceladus (GSE), where the progenitor collided with the MW proto-disc 8–11 Gyr ago, and the Virgo Radial Merger (VRM), where the progenitor collided with the MW disc within the last 3 Gyr. These two scenarios make different predictions about observable structure in local phase space, because the morphology of debris depends on how long it has had to phase mix. The recently identified phase-space folds in Gaia DR3 have positive caustic velocities, making them fundamentally different from the phase-mixed chevrons found in simulations at late times. Roughly 20 per cent of the stars in the prograde local stellar halo are associated with the observed caustics. Based on a simple phase-mixing model, the observed number of caustics is consistent with a merger that occurred 1–2 Gyr ago. We also compare the observed phase-space distribution to FIRE-2 Latte simulations of GSE-like mergers, using a quantitative measurement of phase mixing (2D causticality). The observed local phase-space distribution best matches the simulated data 1–2 Gyr after collision, and certainly not later than 3 Gyr. This is further evidence that the progenitor of the 'last major merger' did not collide with the MW proto-disc at early times, as is thought for the GSE, but instead collided with the MW disc within the last few Gyr, consistent with the body of work surrounding the VRM.
EWOCS-I: The catalog of X-ray sources in Westerlund 1 from the Extended Weste...Sérgio Sacani
Context. With a mass exceeding several 10⁴ M⊙ and a rich, dense population of massive stars, supermassive young star clusters represent the most massive star-forming environment that is dominated by the feedback from massive stars and gravitational interactions among stars.
Aims. In this paper we present the Extended Westerlund 1 and 2 Open Clusters Survey (EWOCS) project, which aims to investigate the influence of the starburst environment on the formation of stars and planets, and on the evolution of both low- and high-mass stars. The primary targets of this project are Westerlund 1 and 2, the closest supermassive star clusters to the Sun.
Methods. The project is based primarily on recent observations conducted with the Chandra and JWST observatories. Specifically, the Chandra survey of Westerlund 1 consists of 36 new ACIS-I observations, nearly co-pointed, for a total exposure time of 1 Ms. Additionally, we included 8 archival Chandra/ACIS-S observations. This paper presents the resulting catalog of X-ray sources within and around Westerlund 1. Sources were detected by combining various existing methods, and photon extraction and source validation were carried out using the ACIS-Extract software.
Results. The EWOCS X-ray catalog comprises 5963 validated sources out of the 9420 initially provided to ACIS-Extract, reaching a photon flux threshold of approximately 2 × 10⁻⁸ photons cm⁻² s⁻¹. The X-ray sources exhibit a highly concentrated spatial distribution, with 1075 sources located within the central 1 arcmin. We have successfully detected X-ray emission from 126 out of the 166 known massive stars of the cluster, and we have collected over 71 000 photons from the magnetar CXO J164710.20-455217.
This presentation gives a brief overview of the structural and functional attributes of nucleotides and the structure and function of genetic materials, along with the impact of UV rays and pH upon them.
ESR spectroscopy in liquid food and beverages.pptxPRIYANKA PATEL
With an increasing population, people need to rely on packaged foodstuffs. Packaging of food materials requires the preservation of food. There are various methods for treating food to preserve it, and irradiation is one of them. It is the most common and most harmless method of food preservation, as it does not alter the essential micronutrients of food materials. Although irradiated food does not harm human health, quality assessment of food is still required to provide consumers with the necessary information about it. ESR spectroscopy is the most sophisticated way to investigate the quality of food and the free radicals induced during food processing. The ESR spin-trapping technique is useful for detecting highly unstable radicals in food. The antioxidant capability of liquid food and beverages is mainly assessed by the spin-trapping technique.
What is greenhouse gasses and how many gasses are there to affect the Earth.moosaasad1975
What are greenhouse gases, how do they affect the Earth and its environment, and what is the future of the environment and the Earth as weather and climate change?
Enabling a Zero Trust Architecture in Smart Grids through a Digital Twin
1. Enabling a Zero Trust Architecture in Smart Grids through a Digital Twin
Giovanni Paolo Sellitto, Helder Aranha, Massimiliano Masi, and Tanja Pavleska
massimiliano.masi@gmail.com
DSOGRI, Virtual Conference, September 13, 2021
2. Problem Statement
Smart Grids are complex systems with high variability
Energy communities and Virtual Power Plants employ different sets of Distributed Energy Resources (DERs)
Operators tend to include prosumers and DERs rather than exclude them, for interoperability and security reasons
And no standard exists yet!
Sellitto et al.: ZTA and DT CC
Massimiliano Masi DSOGRI, Virtual Conference, September 13, 2021 2/13
3. Huge attack surface
DERs and households run in potentially different contexts
Messages may be exchanged over the public internet. Can PLCs be safely exposed? (InterNiche critical bug, CVE-2020-25928)
Safety, availability, integrity, and confidentiality control messages are usually carried over a shared internet link, where vulnerabilities in segregation tools (e.g., VPNs) can have a huge impact on grid stability
Protocols were not designed with security in mind (e.g., IEC 60870, IEC 61850); security was added at a later stage (IEC 62351)
4. Operation-Aware Design
In such a scenario it is paramount to secure the entire system before it goes into production: design the system with security in mind
Cyberattacks will come, with probability = 1
Always consider cyber-and-physical security as a cross-cutting concern
From business to device
Syntax and semantics
5. Our core: SGAM and RMIAS
SGAM supports the definition of security countermeasures during the entire lifecycle of the Smart Grid
Design the target architecture and start reasoning over abstract architectural assets to assess its quality (resilience, security, business continuity plans)
Threat-model it, and simulate its security through a cybersecurity Digital Twin
If possible, derive it automatically
6. Supporting Fidelity
A digital twin comes with the notion of fidelity
How do we ensure that the Digital Twin represents the actual system, given that the system (an ICS) is highly dynamic?
How do we ensure that the actual system is compliant with the SGAM model?
How do we remain flexible with respect to innovation and new products?
The proposal
The digital twin as an architectural deliverable
Explore new architectural variants and observe the evolution of the threat model
Find better business continuity plans and cyber resilience programs
7. Introducing the control view
We represent our SGAM assets as a microservice architecture (service mesh)
Introduce an additional architectural view to SGAM, akin to a Control Plane (common terminology and concept from TelCo)
Segregate the information used to control the system from the information used by the system
Register/de-register DERs using a proxy pattern
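The register/de-register flow above can be sketched as a small in-memory control-plane registry. This is a minimal illustration of the proxy idea, not part of SGAM or of any product; all class and method names are invented:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DERProxy:
    """Proxy entry representing a DER in the control plane."""
    der_id: str
    kind: str  # e.g. "pv", "battery", "ev-charger"
    registered_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

class ControlPlaneRegistry:
    """Keeps control-plane state segregated from the data the grid exchanges."""
    def __init__(self) -> None:
        self._proxies: dict[str, DERProxy] = {}

    def register(self, der_id: str, kind: str) -> DERProxy:
        proxy = DERProxy(der_id, kind)
        self._proxies[der_id] = proxy
        return proxy

    def deregister(self, der_id: str) -> None:
        self._proxies.pop(der_id, None)

    def active(self) -> list[str]:
        return sorted(self._proxies)

registry = ControlPlaneRegistry()
registry.register("der-001", "pv")
registry.register("der-002", "battery")
registry.deregister("der-001")
print(registry.active())  # → ['der-002']
```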
8. The sidecar pattern
We use the well-known sidecar pattern to proxy the instance of the architectural asset in the SGAM cube, injecting its information into the control plane/view
This also gives us a digital twin of the instances, directly mapped from SGAM and inheriting all the measurements and simulations performed ex ante
Reasoning over the SGAM model and its cybersecurity digital twin, and drilling down to the control plane, lets us define, by design, the authorization processes and rules needed to attain a Zero Trust Architecture
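The Zero Trust behaviour of such a sidecar can be sketched as a default-deny, per-request policy check in which no caller is trusted implicitly. The rules and identities below are invented for illustration:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Request:
    source: str  # identity of the calling asset
    target: str  # asset behind this sidecar
    action: str  # e.g. "read-telemetry", "send-setpoint"

class ZeroTrustSidecar:
    """Denies every request unless an explicit rule allows it (default-deny)."""
    def __init__(self, rules: set[tuple[str, str, str]]) -> None:
        self._rules = rules

    def authorize(self, req: Request) -> bool:
        return (req.source, req.target, req.action) in self._rules

rules = {("aggregator", "der-002", "read-telemetry")}
sidecar = ZeroTrustSidecar(rules)

print(sidecar.authorize(Request("aggregator", "der-002", "read-telemetry")))  # → True
print(sidecar.authorize(Request("attacker", "der-002", "send-setpoint")))     # → False
```

In a real mesh these rules would be derived from the threat model evaluated on the digital twin, rather than hard-coded.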
9. The Digital Twin: observability
Observability
The control view enables the management of the information flows needed for the dynamic alignment of the Digital Twin and the Smart Grid. Every new participant that joins the grid is registered through the sidecar and is monitored until it is de-registered
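The alignment loop above (register, monitor, de-register) can be sketched as a stream of lifecycle events that the twin replays to stay in sync with the grid. The event names and replay logic are assumptions for illustration:

```python
from collections import defaultdict

# Lifecycle events emitted by the sidecars (illustrative event stream):
events = [
    ("register", "der-001"),
    ("heartbeat", "der-001"),
    ("register", "der-002"),
    ("heartbeat", "der-002"),
    ("deregister", "der-001"),
]

def replay(events):
    """Rebuild the twin's view of the grid by replaying sidecar events."""
    twin_view = {}
    heartbeats = defaultdict(int)
    for kind, der_id in events:
        if kind == "register":
            twin_view[der_id] = "active"
        elif kind == "heartbeat" and der_id in twin_view:
            heartbeats[der_id] += 1
        elif kind == "deregister":
            twin_view.pop(der_id, None)
    return twin_view, dict(heartbeats)

view, beats = replay(events)
print(view)   # → {'der-002': 'active'}
print(beats)  # → {'der-001': 1, 'der-002': 1}
```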
10. Conclusions and future work
We have presented a method to derive a digital twin of the architecture in order to evaluate its security before deployment, and a process to keep its instance aligned with the current status of the Smart Grid, re-using the concepts of a service mesh
Still to be done: implement the process using a reference implementation such as Istio, defining the metadata obtained through the sidecar and automatically enforcing the connection rules defined by the ZTA
11. Input from Attendees / Discussion
12. Thank You