This presentation is about Azure Quantum. I prepared this for Global Azure Korea 2022.
This deck is for first-time users of Azure Quantum. It includes plenty of screenshots, so it should be easy to follow along. :)
Azure Quantum Workspace for developing Q# based quantum circuits (Vijayananda Mohire)
This document provides steps to develop quantum circuits using Q# on Azure Quantum. It instructs the user to create an Azure subscription, log into the Azure portal, create a Quantum Workspace, and provision storage. It then explains how to define Q# operations, simulate them locally using %simulate, connect to the Azure Quantum workspace with %azure.connect, specify an execution target with %azure.target, submit jobs with %azure.submit, check job status with %azure.status, retrieve outputs with %azure.output, and view all jobs with %azure.jobs. An example quantum random number generation program written in Q# is provided.
This is a demo of getting started with an Azure Quantum workspace for developing quantum projects. Included in the file is a demonstration of running a simple random number generator written in Q# in a Jupyter Notebook.
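The example program mentioned above is a quantum random number generator. As a rough illustration of the idea only (this is not the actual Q# code or the azure-quantum API), a classical Python sketch can mimic what the local simulator does: put a qubit in superposition, measure it for one random bit, and repeat to build an integer.

```python
import random

def measure_superposed_qubit() -> int:
    """Simulate measuring H|0>: each outcome occurs with probability 1/2."""
    return random.randint(0, 1)

def sample_random_number(n_bits: int = 4) -> int:
    """Build an n-bit random integer from repeated single-qubit
    measurements, mirroring the Q# QRNG pattern (allocate, H, M, repeat)."""
    value = 0
    for _ in range(n_bits):
        value = (value << 1) | measure_superposed_qubit()
    return value

if __name__ == "__main__":
    print(sample_random_number(4))  # an integer in [0, 15]
```

In the actual workflow the same operation is defined in a Q# cell, run locally with `%simulate`, and then submitted to hardware with `%azure.submit`.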
Beyond the Query: A Cassandra + Solr + Spark Love Triangle Using Datastax Ent... (DataStax Academy)
Wait! Back away from the Cassandra secondary index. It's ok for some use cases, but it's not an easy button. "But I need to search through a bunch of columns to look for the data and I want to do some regression analysis… and I can't model that in C*, even after watching all of Patrick McFadin's videos. What do I do?" The answer, dear developer, is in DSE Search and Analytics. With its easy Solr API and Spark integration, you can search and analyze data stored in your Cassandra database to your heart's content. Take our hand. We will show you how.
Beyond the Query – Bringing Complex Access Patterns to NoSQL with DataStax - ... (StampedeCon)
Learn how to model beyond traditional direct access in Apache Cassandra. Utilizing the DataStax platform to harness the power of Spark and Solr to perform search, analytics, and complex operations in place on your Cassandra data!
A Cassandra + Solr + Spark Love Triangle Using DataStax Enterprise (Patrick McFadin)
Kyle Hailey is an Oracle expert who has worked with Oracle since 1990. He has experience with Oracle support, porting versions of Oracle, benchmarking, and real world performance. He has also worked with startups, Quest Software, Oracle OEM, and Embarcadero. The document discusses row locks in Oracle and how to find blocking sessions and SQL using tools like ASH, v$lock, and Logminer. It provides examples of creating row lock waits and how to investigate them using these tools.
AWS Study Group - Chapter 03 - Elasticity and Scalability Concepts [Solution ... (QCloudMentor)
Ch3 Elasticity and Scalability Concepts
Technical requirements
Sources of failure
Dividing and conquering
Virtualization technologies
LAMP installation
Scaling the webserver
Resiliency
EC2 persistence model
Disaster recovery
Cascading deletion
Bootstrapping
Scaling the compute layer
Scaling a database server
Summary
Further reading
How to Troubleshoot OpenStack Without Losing Sleep (Sadique Puthen)
The complex architecture and design of OpenStack, and the difficulty of troubleshooting it, amplify the effort of debugging a problem in an OpenStack environment. This can give administrators and support associates sleepless nights if OpenStack's native and supporting components are not configured properly and tuned for optimum performance, especially in large deployments that involve high availability and load balancing.
This document provides an overview of Terraform including its key features and how to install, configure, and use Terraform to deploy infrastructure on AWS. It covers topics such as creating EC2 instances and other AWS resources with Terraform, using variables, outputs, and provisioners, implementing modules and workspaces, and managing the Terraform state.
AWS and Slack Integration - Sending CloudWatch Notifications to Slack.pdf (Manish Chopra)
This document is a brief tutorial for integrating AWS and Slack. It shows how to send an AWS CloudWatch notification to Slack when any of your AWS service metrics crosses a configured threshold.
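A common shape for this integration is CloudWatch alarm → SNS → Lambda → Slack incoming webhook. As a minimal sketch (the field names `AlarmName`, `NewStateValue`, and `NewStateReason` come from the standard CloudWatch alarm SNS message; the webhook URL is whatever Slack generates for your workspace), the Lambda side could look like:

```python
import json
import urllib.request

def build_slack_payload(alarm_message: str) -> dict:
    """Turn a CloudWatch alarm (as delivered in an SNS message body)
    into a Slack incoming-webhook payload."""
    alarm = json.loads(alarm_message)
    text = (f":rotating_light: *{alarm['AlarmName']}* is "
            f"{alarm['NewStateValue']}\n{alarm['NewStateReason']}")
    return {"text": text}

def post_to_slack(webhook_url: str, payload: dict) -> None:
    """POST the payload to a Slack incoming webhook."""
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)
```

Slack's incoming webhooks accept a simple `{"text": ...}` JSON body, which keeps the Lambda free of any third-party dependencies.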
CloudForecast is a system monitoring and visualization tool that uses Perl and RRDTool to collect data from servers and generate graphs. It collects metrics like CPU usage, network traffic, and Gearman worker status. Data is stored in RRD files and a SQLite database. A radar component collects data and a web interface is used to view graphs generated from the collected data.
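CloudForecast itself is written in Perl on top of RRDTool, but the collector/viewer pattern it describes (a "radar" process records samples; a web interface reads them back for graphing) can be sketched with stdlib SQLite in Python. This is purely illustrative, not CloudForecast's actual schema.

```python
import sqlite3
import time

def init_db(conn: sqlite3.Connection) -> None:
    conn.execute("""CREATE TABLE IF NOT EXISTS samples (
        host TEXT, metric TEXT, ts INTEGER, value REAL)""")

def record_sample(conn, host: str, metric: str, value: float) -> None:
    """What the 'radar' collector would do on each polling pass."""
    conn.execute("INSERT INTO samples VALUES (?, ?, ?, ?)",
                 (host, metric, int(time.time()), value))

def latest(conn, host: str, metric: str) -> float:
    """What the web interface would read when rendering a graph point."""
    row = conn.execute(
        "SELECT value FROM samples WHERE host=? AND metric=? "
        "ORDER BY ts DESC LIMIT 1", (host, metric)).fetchone()
    return row[0]
```

RRD files add fixed-size storage and automatic downsampling on top of this basic append-and-query pattern.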
Kernel Recipes 2018 - KernelShark 1.0; What's new and what's coming - Steven ... (Anne Nicolas)
Ftrace is the official tracer of the Linux kernel. It was added in 2008, and in 2009 came trace-cmd, a command-line tool that made interacting with ftrace easier. Shortly after that, KernelShark was created as a GUI for the trace-cmd interface. But since KernelShark and trace-cmd were mostly side projects, they didn't receive the activity they deserved. trace-cmd was updated more often, but KernelShark suffered from bit rot for some time. All that has changed recently, as VMware now has active developers working on it.
KernelShark has been completely rewritten from scratch, and version 1.0 was due to be released in August 2018 (it had already been released as of this talk). This talk discusses what changed, how to use the new tool, and what is coming in the future.
This document describes a scalable, versioned document store built within PostgreSQL. It discusses the motivation for moving from multiple data stores and repositories to a single PostgreSQL database. It then covers the design of storing immutable content as Merkle DAG nodes linked by cryptographic hashes, with references and tags allowing different versions. It also explains how the system was implemented using PostgreSQL functions to generate hashes, insert nodes, and handle migrations from the original data model.
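The document implements this with PostgreSQL functions, but the core Merkle DAG idea is language-agnostic: a node's identity is a hash over its content plus its children's hashes, so any change below a node propagates up to the root, and a "ref" is just a pointer to a root hash. A minimal Python sketch, assuming SHA-256 and an in-memory store (both illustrative choices):

```python
import hashlib

def node_hash(content: bytes, child_hashes: list[str]) -> str:
    """Hash a node's content together with its children's hashes,
    so any change below propagates up to the root (Merkle property)."""
    h = hashlib.sha256()
    h.update(content)
    for child in sorted(child_hashes):  # order-independent linking
        h.update(bytes.fromhex(child))
    return h.hexdigest()

# Insert immutable nodes bottom-up; a "ref" (like a branch or tag)
# simply points at a root hash, giving cheap versioning.
store: dict[str, dict] = {}

def insert_node(content: bytes, children: list[str]) -> str:
    digest = node_hash(content, children)
    store.setdefault(digest, {"content": content, "children": children})
    return digest
```

Because nodes are content-addressed and immutable, inserting the same content twice deduplicates for free, and two document versions share all unchanged subtrees.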
Programming the Physical World with Device Shadows and Rules Engine (Amazon Web Services)
Learn more about how to use AWS IoT's Device Shadows and Rules Engine to build powerful IoT applications. With Device Shadows, you can build applications that interact with your devices by providing always available REST APIs. By taking advantage of AWS IoT's topic-based rules and built-in integrations, you can build IoT applications that gather, process, analyze, and act on data generated by connected devices at global scale, without having to manage any infrastructure.
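A Device Shadow document carries `desired` and `reported` state, and the service computes a `delta` of desired fields the device has not yet reported. As a simplified sketch of that delta computation (flat state only; real shadow documents are nested JSON with version and timestamp metadata):

```python
def shadow_delta(desired: dict, reported: dict) -> dict:
    """Return the fields in `desired` that differ from `reported`,
    mimicking the `state.delta` section of a shadow document."""
    return {k: v for k, v in desired.items() if reported.get(k) != v}

# An app writes `desired`; the device writes `reported`.
shadow = {
    "state": {
        "desired": {"led": "on", "interval": 5},
        "reported": {"led": "off", "interval": 5},
    }
}
delta = shadow_delta(shadow["state"]["desired"],
                     shadow["state"]["reported"])
# delta contains only what the device still has to change
```

This is what makes the shadow an "always available" REST API: the app and device never need to be online at the same time to converge.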
... or why Oracle still cares about CMAN and why you should do it too
The Oracle Connection Manager (CMAN) is the Swiss Army knife for database connections. It can be used for security, routing, high availability, a single point of contact... Starting with Oracle 18c, it has been extended with the new Traffic Director Mode (CMAN TDM), which allows transparent failover for applications that do not implement it natively.
In this session I will briefly introduce what CMAN is capable of, how to configure it in a high-availability environment, and how the new release achieves a higher level of protection.
This document provides an overview of Terraform including its key features, installation process, and common usage patterns. Terraform allows infrastructure to be defined as code and treated similarly to other code. It generates execution plans to avoid surprises when provisioning resources. Complex changes can be automated while avoiding human errors. The document covers installing Terraform, deploying AWS EC2 instances, variables, outputs, modules, and workspaces. It demonstrates how Terraform can be used to provision and manage infrastructure in a safe, efficient manner.
Coolblue - Behind the Scenes Continuous Integration & Deployment (Matthew Hodgkins)
Do you want to know what our process looks like from code to production? Or do you want to learn how we envision the future of deployment at Coolblue? During this evening, our engineers will give you a peek Behind the Scenes and tell you everything about our challenges with Continuous Integration and Deployment.
Cross the Streams! Creating Streaming Data Pipelines with Apache Flink + Apac... (StreamNative)
Despite what the Ghostbusters said, we’re going to go ahead and cross (or, join) the streams. This session covers getting started with streaming data pipelines, maximizing Pulsar’s messaging system alongside one of the most flexible streaming frameworks available, Apache Flink. Specifically, we’ll demonstrate the use of Flink SQL, which provides various abstractions and allows your pipeline to be language-agnostic. So, if you want to leverage the power of a high-speed, highly customizable stream processing engine without the usual overhead and learning curves of the technologies involved (and their interconnected relationships), then this talk is for you. Watch the step-by-step demo to build a unified batch and streaming pipeline from scratch with Pulsar, via the Flink SQL client. This means you don’t need to be familiar with Flink, (or even a specific programming language). The examples provided are built for highly complex systems, but the talk itself will be accessible to any experience level.
Time series with Apache Cassandra - Long version (Patrick McFadin)
Apache Cassandra has proven to be one of the best solutions for storing and retrieving time series data. This talk will give you an overview of the many ways you can be successful. We will discuss how the storage model of Cassandra is well suited for this pattern and go over examples of how best to build data models.
DevOps topics covered: Continuous Integration, Continuous Delivery, and Continuous Deployment using Jenkins and Visual Studio Team Services (VSTS); setting up VSTS build agents; integrating VSTS with SonarQube and NDepend; fully automating the push of code into VSTS from Visual Studio; building code on a Jenkins server hosted on Azure and pushing a successful build to an Azure Web App via a release pipeline or directly from Jenkins; VSTS default agents and setting up a local agent from scratch; agent pools, hosted agents, Hosted VS2017, and hosted Linux agents; setting up agents on VS Dev Test Labs, including dynamic agent creation with template parameters, random machine names, and random passwords; code quality and code analysis with SonarQube and MSBuild; package management with NuGet, npmjs.com, and Chocolatey; semantic versioning; creating a NuGet package (nuspec file, GitVersion plugin, feed URL); and monolithic architecture.
Postgres-BDR with Google Cloud Platform (SungJae Yun)
This document provides an overview of testing PostgreSQL-BDR with Google Cloud Platform. Key points:
- Google Cloud Platform was chosen as the test environment for its regions in Asia, Europe, and US. Nine virtual machines were created, with two in each region configured for PostgreSQL-BDR and one for performance testing.
- PostgreSQL-BDR was installed on the servers and a cluster was created by joining nodes in Asia, US, and Europe. Pgbench tests were run to measure performance of transactions on the replicated database.
- Pgbench results showed transaction rates increased from around 1,000 TPS for a single client/server to over 6,000 TPS when distributed across the BDR cluster nodes.
- The Spark Cassandra Connector allows reading Cassandra data into Spark RDDs and writing Spark RDDs back to Cassandra tables.
- When reading, it partitions RDDs by Cassandra token ranges to co-locate partitions with node data. When writing, it batches writes by partition key to minimize requests.
- This allows efficient distributed processing of Cassandra data using Spark's parallelism while minimizing network usage through co-location of data and tasks.
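The write-side batching described above can be sketched in a few lines. This is a simplified illustration of the grouping idea, not the connector's actual code: the real connector is Scala, caps batches by bytes as well as row count, and is tuned by configuration parameters.

```python
from collections import defaultdict

def group_into_batches(rows, partition_key, max_batch_size=64):
    """Group rows by partition key so each batch targets a single
    Cassandra partition, then split oversized groups into chunks."""
    groups = defaultdict(list)
    for row in rows:
        groups[partition_key(row)].append(row)
    batches = []
    for key, group in groups.items():
        for i in range(0, len(group), max_batch_size):
            batches.append((key, group[i:i + max_batch_size]))
    return batches
```

Batching by partition key matters because a Cassandra batch that spans many partitions forces the coordinator to fan out, losing the efficiency the connector is after.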
The document describes deploying Cosmos DB resources using Terraform in Azure. It outlines prerequisites, environment details, and the configuration files and process used to create a resource group, Cosmos DB account, database, and collection. The main.tf file defines these resources, variables.tf contains configurable values, and output.tf displays output after deployment. Running terraform init and terraform plan commands prepares for deploying the resources.
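A minimal sketch of what such a main.tf might contain, using the azurerm provider's Cosmos DB resources. The names and values here are illustrative placeholders, not the document's actual configuration files.

```hcl
# Illustrative sketch; resource names and values are placeholders.
resource "azurerm_resource_group" "rg" {
  name     = "cosmos-demo-rg"
  location = "eastus"
}

resource "azurerm_cosmosdb_account" "db" {
  name                = "cosmos-demo-account"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  offer_type          = "Standard"

  consistency_policy {
    consistency_level = "Session"
  }

  geo_location {
    location          = azurerm_resource_group.rg.location
    failover_priority = 0
  }
}

resource "azurerm_cosmosdb_sql_database" "sqldb" {
  name                = "demo-db"
  resource_group_name = azurerm_resource_group.rg.name
  account_name        = azurerm_cosmosdb_account.db.name
}
```

With variables factored into variables.tf and outputs into output.tf, `terraform init` then `terraform plan` previews exactly these resources before `terraform apply` creates them.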
This document provides instructions for installing and configuring the OpenStack Glance image service. It begins with setting up the necessary variables and creating the Glance service and database in Keystone. It then walks through installing and configuring Glance, verifying the installation, and uploading two test images. It concludes by discussing some concepts of Glance like image formats and providing references for more documentation. The next steps outlined are expanding the deployment to two servers by modifying Vagrant files and installing necessary Nova packages to introduce compute functionality.
Manchester Hadoop Meetup: Cassandra Spark internals (Christopher Batey)
This document summarizes how the Spark Cassandra Connector works to read and write data between Spark and Cassandra in a distributed manner. It discusses how the connector partitions Spark RDDs based on Cassandra token ranges and nodes, retrieves data from Cassandra in batches using CQL, and writes data back to Cassandra in batches grouped by partition key. Key classes and configuration parameters that control this distributed processing are also outlined.
Qiskit: Building a Quantum Computing Community (Dayeong Kang)
This presentation introduces what the Qiskit Community is and how to contribute to Qiskit. I prepared this for the Community Session at Nano Korea 2022.
How to Make Open-source Contributions and Run Blog with Github (Dayeong Kang)
This presentation is about how to use Github well. It has three parts that explain how Git and Github work and how to utilize them for your own personal branding. I prepared this for the WISET Open Talk session of team Quantum is here.
How to Contribute to Qiskit with Github (Dayeong Kang)
This presentation is about how to contribute to Qiskit with Github. I prepared this for Qiskit Hackathon Korea 2022.
I hope this helps anyone who is wondering how to contribute to Qiskit using Git and Github. For a detailed explanation, please refer to the YouTube link below.
*Detailed explanation(kor): https://youtu.be/5cSdM5nBJ60
Sharing my three Qiskit projects and my hackathon experience. I presented this at the first anniversary of Full-Stack Quantum Computation: https://fullstackquantumcomputation.tech/anniversary/.
This presentation is about Quantum Cryptography, focused on quantum key distribution (QKD). I presented this in the quantum computing class of Modulabs (모두의 연구소).
*Detailed explanation: https://tula3and.github.io/cryptography/quantum-cryptography/#
Quantum Blockchain Solution for Logistics: I presented this at the 2021 Qiskit Hackathon Korea. All sources are in my Github: https://github.com/tula3and/qoupang. If you are interested in our Qoupang, please check it out!
*Some icons were replaced because of copyright considerations.
This presentation is about an Android web application that shows all the restaurants around KNU. It will help students choose what to eat. I used Github for hosting the website and Expo WebView for building it.
This presentation is about Quantum Teleportation: a short description and code with circuits. I presented this in the quantum machine learning class of Modulabs (모두의 연구소).
*Detailed explanation(kor): https://tula3and.github.io/qiskit/qiskit-02-kor/#
This presentation is about what I have done from June until now. As a member of IBM C:LOUDERs, I was able to create a study group (called ZUA) and serve as a project manager. I also started my personal project, OCOL, a newsletter publication with people I met through this group. In this project, I usually write articles about computer security.
*Colors from: https://www.ibm.com/design/language/ibm-logos/rebus/
DEEP LEARNING FOR SMART GRID INTRUSION DETECTION: A HYBRID CNN-LSTM-BASED MODEL (gerogepatton)
As digital technology becomes more deeply embedded in power systems, protecting the communication networks of Smart Grids (SG) has emerged as a critical concern. Distributed Network Protocol 3 (DNP3) is a multi-tiered application layer protocol extensively utilized in Supervisory Control and Data Acquisition (SCADA)-based smart grids to facilitate real-time data gathering and control functionalities. Robust Intrusion Detection Systems (IDS) are necessary for early threat detection and mitigation because the interconnection of these networks makes them vulnerable to a variety of cyberattacks. To address this issue, this paper develops a hybrid Deep Learning (DL) model specifically designed for intrusion detection in smart grids. The proposed approach combines a Convolutional Neural Network (CNN) with the Long Short-Term Memory (LSTM) algorithm. We employed a recent intrusion detection dataset (DNP3), which focuses on unauthorized commands and Denial of Service (DoS) cyberattacks, to train and test our model. The results of our experiments show that our CNN-LSTM method is much better at finding smart grid intrusions than other deep learning algorithms used for classification. In addition, our proposed approach improves accuracy, precision, recall, and F1 score, achieving a high detection accuracy rate of 99.50%.
Electric vehicle and photovoltaic advanced roles in enhancing the financial p...IJECEIAES
Climate change's impact on the planet forced the United Nations and governments to promote green energies and electric transportation. The deployments of photovoltaic (PV) and electric vehicle (EV) systems gained stronger momentum due to their numerous advantages over fossil fuel types. The advantages go beyond sustainability to reach financial support and stability. The work in this paper introduces the hybrid system between PV and EV to support industrial and commercial plants. This paper covers the theoretical framework of the proposed hybrid system including the required equation to complete the cost analysis when PV and EV are present. In addition, the proposed design diagram which sets the priorities and requirements of the system is presented. The proposed approach allows setup to advance their power stability, especially during power outages. The presented information supports researchers and plant owners to complete the necessary analysis while promoting the deployment of clean energy. The result of a case study that represents a dairy milk farmer supports the theoretical works and highlights its advanced benefits to existing plants. The short return on investment of the proposed approach supports the paper's novelty approach for the sustainable electrical system. In addition, the proposed system allows for an isolated power setup without the need for a transmission line which enhances the safety of the electrical network
Comparative analysis between traditional aquaponics and reconstructed aquapon...bijceesjournal
The aquaponic system of planting is a method that does not require soil usage. It is a method that only needs water, fish, lava rocks (a substitute for soil), and plants. Aquaponic systems are sustainable and environmentally friendly. Its use not only helps to plant in small spaces but also helps reduce artificial chemical use and minimizes excess water use, as aquaponics consumes 90% less water than soil-based gardening. The study applied a descriptive and experimental design to assess and compare conventional and reconstructed aquaponic methods for reproducing tomatoes. The researchers created an observation checklist to determine the significant factors of the study. The study aims to determine the significant difference between traditional aquaponics and reconstructed aquaponics systems propagating tomatoes in terms of height, weight, girth, and number of fruits. The reconstructed aquaponics system’s higher growth yield results in a much more nourished crop than the traditional aquaponics system. It is superior in its number of fruits, height, weight, and girth measurement. Moreover, the reconstructed aquaponics system is proven to eliminate all the hindrances present in the traditional aquaponics system, which are overcrowding of fish, algae growth, pest problems, contaminated water, and dead fish.
TIME DIVISION MULTIPLEXING TECHNIQUE FOR COMMUNICATION SYSTEMHODECEDSIET
Time Division Multiplexing (TDM) is a method of transmitting multiple signals over a single communication channel by dividing the signal into many segments, each having a very short duration of time. These time slots are then allocated to different data streams, allowing multiple signals to share the same transmission medium efficiently. TDM is widely used in telecommunications and data communication systems.
### How TDM Works
1. **Time Slots Allocation**: The core principle of TDM is to assign distinct time slots to each signal. During each time slot, the respective signal is transmitted, and then the process repeats cyclically. For example, if there are four signals to be transmitted, the TDM cycle will divide time into four slots, each assigned to one signal.
2. **Synchronization**: Synchronization is crucial in TDM systems to ensure that the signals are correctly aligned with their respective time slots. Both the transmitter and receiver must be synchronized to avoid any overlap or loss of data. This synchronization is typically maintained by a clock signal that ensures time slots are accurately aligned.
3. **Frame Structure**: TDM data is organized into frames, where each frame consists of a set of time slots. Each frame is repeated at regular intervals, ensuring continuous transmission of data streams. The frame structure helps in managing the data streams and maintaining the synchronization between the transmitter and receiver.
4. **Multiplexer and Demultiplexer**: At the transmitting end, a multiplexer combines multiple input signals into a single composite signal by assigning each signal to a specific time slot. At the receiving end, a demultiplexer separates the composite signal back into individual signals based on their respective time slots.
### Types of TDM
1. **Synchronous TDM**: In synchronous TDM, time slots are pre-assigned to each signal, regardless of whether the signal has data to transmit or not. This can lead to inefficiencies if some time slots remain empty due to the absence of data.
2. **Asynchronous TDM (or Statistical TDM)**: Asynchronous TDM addresses the inefficiencies of synchronous TDM by allocating time slots dynamically based on the presence of data. Time slots are assigned only when there is data to transmit, which optimizes the use of the communication channel.
### Applications of TDM
- **Telecommunications**: TDM is extensively used in telecommunication systems, such as in T1 and E1 lines, where multiple telephone calls are transmitted over a single line by assigning each call to a specific time slot.
- **Digital Audio and Video Broadcasting**: TDM is used in broadcasting systems to transmit multiple audio or video streams over a single channel, ensuring efficient use of bandwidth.
- **Computer Networks**: TDM is used in network protocols and systems to manage the transmission of data from multiple sources over a single network medium.
### Advantages of TDM
- **Efficient Use of Bandwidth**: TDM all
A review on techniques and modelling methodologies used for checking electrom...nooriasukmaningtyas
The proper function of the integrated circuit (IC) in an inhibiting electromagnetic environment has always been a serious concern throughout the decades of revolution in the world of electronics, from disjunct devices to today’s integrated circuit technology, where billions of transistors are combined on a single chip. The automotive industry and smart vehicles in particular, are confronting design issues such as being prone to electromagnetic interference (EMI). Electronic control devices calculate incorrect outputs because of EMI and sensors give misleading values which can prove fatal in case of automotives. In this paper, the authors have non exhaustively tried to review research work concerned with the investigation of EMI in ICs and prediction of this EMI using various modelling methodologies and measurement setups.
Redefining brain tumor segmentation: a cutting-edge convolutional neural netw...IJECEIAES
Medical image analysis has witnessed significant advancements with deep learning techniques. In the domain of brain tumor segmentation, the ability to
precisely delineate tumor boundaries from magnetic resonance imaging (MRI)
scans holds profound implications for diagnosis. This study presents an ensemble convolutional neural network (CNN) with transfer learning, integrating
the state-of-the-art Deeplabv3+ architecture with the ResNet18 backbone. The
model is rigorously trained and evaluated, exhibiting remarkable performance
metrics, including an impressive global accuracy of 99.286%, a high-class accuracy of 82.191%, a mean intersection over union (IoU) of 79.900%, a weighted
IoU of 98.620%, and a Boundary F1 (BF) score of 83.303%. Notably, a detailed comparative analysis with existing methods showcases the superiority of
our proposed model. These findings underscore the model’s competence in precise brain tumor localization, underscoring its potential to revolutionize medical
image analysis and enhance healthcare outcomes. This research paves the way
for future exploration and optimization of advanced CNN models in medical
imaging, emphasizing addressing false positives and resource efficiency.
Advanced control scheme of doubly fed induction generator for wind turbine us...IJECEIAES
This paper describes a speed control device for generating electrical energy on an electricity network based on the doubly fed induction generator (DFIG) used for wind power conversion systems. At first, a double-fed induction generator model was constructed. A control law is formulated to govern the flow of energy between the stator of a DFIG and the energy network using three types of controllers: proportional integral (PI), sliding mode controller (SMC) and second order sliding mode controller (SOSMC). Their different results in terms of power reference tracking, reaction to unexpected speed fluctuations, sensitivity to perturbations, and resilience against machine parameter alterations are compared. MATLAB/Simulink was used to conduct the simulations for the preceding study. Multiple simulations have shown very satisfying results, and the investigations demonstrate the efficacy and power-enhancing capabilities of the suggested control system.
Using recycled concrete aggregates (RCA) for pavements is crucial to achieving sustainability. Implementing RCA for new pavement can minimize carbon footprint, conserve natural resources, reduce harmful emissions, and lower life cycle costs. Compared to natural aggregate (NA), RCA pavement has fewer comprehensive studies and sustainability assessments.
Introduction- e - waste – definition - sources of e-waste– hazardous substances in e-waste - effects of e-waste on environment and human health- need for e-waste management– e-waste handling rules - waste minimization techniques for managing e-waste – recycling of e-waste - disposal treatment methods of e- waste – mechanism of extraction of precious metal from leaching solution-global Scenario of E-waste – E-waste in India- case studies.
Optimizing Gradle Builds - Gradle DPE Tour Berlin 2024Sinan KOZAK
Sinan from the Delivery Hero mobile infrastructure engineering team shares a deep dive into performance acceleration with Gradle build cache optimizations. Sinan shares their journey into solving complex build-cache problems that affect Gradle builds. By understanding the challenges and solutions found in our journey, we aim to demonstrate the possibilities for faster builds. The case study reveals how overlapping outputs and cache misconfigurations led to significant increases in build times, especially as the project scaled up with numerous modules using Paparazzi tests. The journey from diagnosing to defeating cache issues offers invaluable lessons on maintaining cache integrity without sacrificing functionality.
Harnessing WebAssembly for Real-time Stateless Streaming PipelinesChristina Lin
Traditionally, dealing with real-time data pipelines has involved significant overhead, even for straightforward tasks like data transformation or masking. However, in this talk, we’ll venture into the dynamic realm of WebAssembly (WASM) and discover how it can revolutionize the creation of stateless streaming pipelines within a Kafka (Redpanda) broker. These pipelines are adept at managing low-latency, high-data-volume scenarios.
23. GenerateRandomBit()
@tula3and
namespace HelloWorld {
    open Microsoft.Quantum.Canon;
    open Microsoft.Quantum.Intrinsic;
    open Microsoft.Quantum.Measurement;

    @EntryPoint()
    operation GenerateRandomBit() : Result {
        use q = Qubit();
        H(q);
        return M(q);
    }
}
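The operation above allocates a qubit in |0⟩, applies the Hadamard gate H to put it into the superposition (|0⟩ + |1⟩)/√2, and measures it with M, which collapses the state to 0 or 1 with probability 1/2 each. A minimal classical sketch of those measurement statistics, assuming nothing beyond the Born rule (the function name `generate_random_bit` is just an illustrative stand-in for the Q# operation, not part of the deck):

```python
import random

def generate_random_bit():
    """Classical simulation of the Q# GenerateRandomBit operation.

    H maps |0> to (|0> + |1>)/sqrt(2); measuring in the computational
    basis then yields 0 or 1, each with probability |1/sqrt(2)|^2 = 1/2.
    """
    amp0 = 1 / 2 ** 0.5              # amplitude of |0> after H(q)
    p_zero = amp0 ** 2               # Born rule: P(0) = |amp0|^2 = 0.5
    return 0 if random.random() < p_zero else 1

# Sampling many times, the frequencies of 0 and 1 approach 50/50.
bits = [generate_random_bit() for _ in range(10_000)]
```

Unlike this classical sketch, the real Q# program's randomness comes from quantum measurement, not a pseudorandom generator.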
24. GenerateRandomBit()
25. GenerateRandomBit()
0
26. ?
GenerateRandomBit()
27. GenerateRandomBit()
0 OR 1
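As the slides above show, each run of GenerateRandomBit() returns 0 OR 1, so repeated runs can be composed into larger random numbers, which is how a quantum random number generator is typically built. A hedged sketch of that composition step, using a classical coin flip in place of the quantum measurement (the helper names `generate_random_bit` and `random_int` are illustrative, not from the deck):

```python
import random

def generate_random_bit():
    # Fair coin standing in for measuring H|0> on real hardware.
    return random.randint(0, 1)

def random_int(n_bits):
    """Compose n independent random bits into one integer in [0, 2**n)."""
    value = 0
    for _ in range(n_bits):
        value = (value << 1) | generate_random_bit()
    return value
```

With real quantum bits each call draws fresh entropy, so `random_int(8)` would give a uniformly random byte.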
52. References
1. (me) Before trying Qiskit: https://tula3and.github.io/qiskit/before-qiskit-kor/#
2. (microsoft) Exercise - Install the QDK for Visual Studio Code: https://docs.microsoft.com/en-us/learn/modules/qsharp-create-first-quantum-development-kit/2-install-quantum-development-kit-code
3. (microsoft) Quickstart: Create a quantum-based random number generator in Azure Quantum: https://docs.microsoft.com/en-us/azure/quantum/quickstart-microsoft-qc?pivots=platform-ionq
4. (microsoft) Exercise - Create a quantum random bit generator: https://docs.microsoft.com/en-us/learn/modules/qsharp-create-first-quantum-development-kit/3-random-bit-generator
5. (me) Take a first step of Azure Quantum: https://tula3and.github.io/azure/azure-quantum-introduction/