The document provides an overview of Amazon Web Services (AWS) and its cloud computing services. It discusses the history of utility computing models and how AWS offers infrastructure as a service based on virtualization and pay-as-you-go pricing. Key AWS services covered include Elastic Compute Cloud (EC2) for virtual servers, Simple Storage Service (S3) for object storage, Elastic Block Store (EBS) for virtual disk volumes, and Glacier for low-cost archival storage. The document provides examples of pricing and commands to get started using these AWS services.
This document provides an overview of Amazon Web Services (AWS) and its cloud computing services including Elastic Compute Cloud (EC2), Simple Storage Service (S3), Elastic Block Store (EBS), and Glacier. It discusses the pricing models and terminology for each service, how they can be integrated into applications, and examples of using the command line interface. The document also covers topics like AWS architecture best practices, availability zones, consistency models, and durability across the different services.
This document provides an overview of architecting applications for the Amazon Web Services (AWS) cloud platform. It discusses key cloud computing attributes like abstract resources, on-demand provisioning, scalability, and lack of upfront costs. It then describes various AWS services for compute, storage, messaging, payments, distribution, analytics and more. It provides examples of how to design applications to be scalable and fault-tolerant on AWS. Finally, it discusses best practices for migrating existing web applications to take advantage of AWS capabilities.
The document provides an overview of Amazon Web Services (AWS) and its computing services. It describes Amazon Elastic Compute Cloud (EC2) which allows users to launch virtual servers called instances in AWS data centers. It provides flexibility, cost effectiveness, scalability, security and reliability. EC2 reduces time to obtain servers and allows users to pay only for what they use.
Amazon Web Services provides a set of cloud computing services including Amazon EC2 for computing power, Amazon S3 for object storage, and Amazon EBS for block-level storage. The document discusses these services as well as Amazon VPC which allows users to provision a virtual private cloud within AWS. It provides flexibility to customize the network configuration and control the virtual networking environment.
Developing And Running A Website On Amazon S — Ejaymuntz
This document provides an overview of Amazon EC2, including what EC2 is, hardware and software options, important concepts like security groups and availability zones, using EC2 to host a LAMP website, advantages and caveats of EC2. It discusses instance types, machine images, storage with S3 and EBS, elastic IP addresses, regions and availability zones, fees, and provides a case study of setting up a LAMP project on EC2.
This document discusses how to scale web applications in the cloud using Amazon Web Services (AWS). It explains key AWS services such as EC2, S3, RDS, and SQS that can be used to build scalable applications. The document also provides an example of how the coding-practice platform Coderloop was built on AWS to handle increasing user demand. It recommends tools such as Puppet, Capistrano, and Nagios for deploying, monitoring, and managing infrastructure on AWS. Lastly, it provides tips to reduce AWS costs and concludes that AWS is an excellent platform for building scalable applications.
The document summarizes an AWS user group presentation by Shaimaa Esmaeil on AWS101. The presentation introduced cloud computing concepts, AWS global infrastructure and services, and demonstrated EC2 and S3. It discussed on-premises vs cloud, cloud models (IaaS, PaaS, SaaS), AWS regions and availability zones. It provided overviews of EC2 instances, AMIs, types, EBS, security groups and S3 buckets and objects. Useful training and practice exam resources were also shared.
Amazon Web Services offers several cloud computing services including Mechanical Turk (MTurk), Simple Queue Service (SQS), Simple Storage Service (S3), and Elastic Compute Cloud (EC2). MTurk allows users to submit human intelligence tasks and pay workers to complete them. SQS provides a queue system to communicate between distributed applications. S3 provides storage buckets for files. EC2 offers virtual servers that can be configured and deployed on demand in the cloud.
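The decoupling that SQS provides between distributed components can be sketched with Python's standard-library `queue` module. This is a local, in-process analogue for illustration only, not the real SQS API (which is accessed over HTTP, typically through an SDK); the comments note which SQS operation each step loosely corresponds to.

```python
import queue
import threading

# A local stand-in for an SQS queue: a producer enqueues messages,
# a consumer polls and processes them independently.
task_queue = queue.Queue()

def producer():
    for i in range(5):
        task_queue.put(f"task-{i}")   # loosely: sqs:SendMessage

def consumer(results):
    while True:
        msg = task_queue.get()        # loosely: sqs:ReceiveMessage
        if msg is None:               # sentinel value: stop consuming
            break
        results.append(msg.upper())   # "process" the message
        task_queue.task_done()        # loosely: sqs:DeleteMessage

results = []
t = threading.Thread(target=consumer, args=(results,))
t.start()
producer()
task_queue.put(None)   # signal the consumer to stop
t.join()
print(results)         # the five processed tasks, in order
```

Because producer and consumer only share the queue, either side can be scaled or replaced without the other noticing — the property that makes SQS useful between distributed applications.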
The IoT Academy_awstraining_part2_aws_ec2_iaas — The IOT Academy
This document provides an overview of Amazon Web Services (AWS) and its infrastructure services, with a focus on Amazon Elastic Compute Cloud (EC2). It describes the main EC2 concepts like Amazon Machine Images (AMIs), instances, regions/availability zones, storage options, networking and security features, monitoring and auto-scaling capabilities. It also discusses how to access and manage EC2 instances using the AWS management console, command line tools, and APIs.
This document provides an overview of architecting applications for the AWS cloud. It discusses key AWS cloud computing attributes like scalability, on-demand provisioning, and efficiency of experts. It also outlines best practices like designing for failure, loose coupling, dynamism, and security. Specific AWS services are mapped to common application needs like compute, storage, content delivery, databases, and more. Overall the document aims to educate readers on how to leverage AWS architectural principles and services.
The document discusses digital media ingest and storage options on AWS. It introduces AWS storage services like Amazon S3, Glacier and Snowball that can be used to create a scalable "content lake" for storing large amounts of media content. It also discusses how these services allow flexible management of content across different resolutions and formats over time. Additional services like Lambda, EFS and Elastic Transcoder can be used to build processing pipelines for tasks like transcoding and sharing content from the lake.
Why Scale Matters and How the Cloud is Really Different (at scale) — Amazon Web Services
This document discusses how various companies scale their services and applications on AWS to handle large user loads and data volumes. It provides examples of Animoto handling over 1 billion files saved per day and Airbnb having over 9 million guests. It then outlines an approach for scaling an application from 1 user to millions by starting with EC2 instances, adding services like S3, DynamoDB, ElastiCache and auto-scaling groups. The document emphasizes using AWS managed services to avoid re-inventing solutions for tasks like queuing, storage and databases.
Amazon Elastic Compute Cloud (Amazon EC2) is a web service that allows users to rent virtual computers on which to run their own computer applications. It provides scalable deployment of applications by allowing users to create, launch, and terminate virtual server instances as needed. EC2 provides control over the location of instances and offers 12 types of instances. It allows flexible scaling of computing power and reduces costs by only paying for resources used.
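The create/launch/terminate lifecycle described above can be pictured as a small state machine. The sketch below is a simplified model for illustration, with hypothetical event names; the real EC2 lifecycle has additional states and is driven through API calls such as RunInstances and TerminateInstances.

```python
# Toy state machine mirroring the simplified EC2 instance lifecycle
# described above. Illustrative only; event names are made up here.
TRANSITIONS = {
    ("pending", "launch_complete"): "running",
    ("running", "stop"): "stopped",
    ("stopped", "start"): "pending",
    ("running", "terminate"): "shutting-down",
    ("shutting-down", "shutdown_complete"): "terminated",
}

def step(state, event):
    """Apply one lifecycle event, rejecting transitions the model disallows."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"event {event!r} not valid in state {state!r}")

# Launch an instance, then terminate it:
state = "pending"
for event in ["launch_complete", "terminate", "shutdown_complete"]:
    state = step(state, event)
print(state)  # terminated
```

The pay-per-use model follows directly from this lifecycle: billing accrues only while an instance sits in the running state.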
As part of the Introduction to AWS Workshop Series, see how to scale your website from your first user, right up to a complex architecture to support 10 million users.
ARC205 Building Web-scale Applications Architectures with AWS - AWS re: Inven... — Amazon Web Services
As both new and established businesses work to increase their customer numbers, revenue, and relevance to the market, they are working to deliver software that scales larger than ever before. The challenge of being the "victim of your own success", whether from viral marketing, social media, or simply dramatic uptake of a new service, is something that troubles the minds of CIOs and engineers alike. This session will focus on ways to avoid creating "technical debt" during initial development, and will share well-established practices and approaches to building applications that can tolerate, and revel in, the challenges of scaling to "web scale". Working through a range of architectural dimensions, patterns, and pithy examples, attendees will leave this session with useful ideas on how to design new applications, as well as the "retro-fitting" that can be done to existing applications to enable them to scale on AWS.
Amazon Web Services (AWS) began offering IT infrastructure services to businesses in the form of web services, now commonly known as cloud computing. One of the key benefits of cloud computing is the opportunity to replace up-front capital infrastructure expenses with low variable costs that scale with your business. With the cloud, businesses no longer need to plan for and procure servers and other IT infrastructure weeks or months in advance. Instead, they can instantly spin up hundreds or thousands of servers in minutes and deliver results faster.
Cloud Computing Primer: Using cloud computing tools in your museum — Robert J. Stein
A presentation by Robert Stein, Charlie Moad and Ari Davidow on cloud computing for the Museum Computer Network Conference in Portland, OR November, 2009
The document discusses cloud computing and provides examples of how museums can utilize cloud services. It describes common cloud applications and utilities, discusses pros and cons of the cloud, and provides specific examples of how the International Museum of Art used Amazon Web Services (AWS) to save costs on data storage and transition their website and video servers to the cloud.
Amazon S3 provides inexpensive cloud storage while EC2 offers virtual computing resources. S3 allows storage of unlimited data for $0.15 per GB per month with data retrieval priced at $0.10-$0.13 per GB depending on amount. EC2's virtual machines range in power and price from $0.10 per hour for a small instance to $0.80 per hour for an extra large one. Both services offer flexibility to scale up or down on demand with no long term commitments.
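Using the rates quoted above, a rough monthly bill can be worked out in a few lines of Python. Note these prices are historical figures from the summarized document, not current AWS rates, and the helper function and its parameters are illustrative assumptions.

```python
# Rough monthly cost estimate using the (historical) prices quoted above.
# Illustrative only; always check the current AWS pricing pages.
S3_STORAGE_PER_GB_MONTH = 0.15   # $/GB-month of storage
S3_RETRIEVAL_PER_GB = 0.13       # $/GB retrieved (upper end of quoted range)
EC2_SMALL_PER_HOUR = 0.10        # $/hour for a small instance
HOURS_PER_MONTH = 730            # average hours in a month

def monthly_cost(stored_gb, retrieved_gb, instance_hours=HOURS_PER_MONTH):
    """Estimate one month's bill: S3 storage + retrieval + one small EC2 instance."""
    s3 = stored_gb * S3_STORAGE_PER_GB_MONTH + retrieved_gb * S3_RETRIEVAL_PER_GB
    ec2 = instance_hours * EC2_SMALL_PER_HOUR
    return round(s3 + ec2, 2)

# 100 GB stored, 20 GB retrieved, one small instance running all month:
# 100*0.15 + 20*0.13 + 730*0.10 = 15.00 + 2.60 + 73.00
print(monthly_cost(100, 20))  # 90.6
```

The arithmetic also shows why the "no long term commitments" point matters: halving `instance_hours` halves the EC2 line of the bill immediately.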
Scaling up to your first 10 million users - Pop-up Loft Tel Aviv — Amazon Web Services
Cloud computing gives you a number of advantages, such as the ability to scale your web application or website on demand. If you have a new web application and want to use cloud computing, you might be asking yourself, "Where do I start?" Join us in this session to understand best practices for scaling your resources from zero to millions of users. We show you how to best combine different AWS services, how to make smarter decisions for architecting your application, and how to scale your infrastructure in the cloud.
This document provides an overview of Amazon Web Services (AWS) and Amazon Elastic Compute Cloud (EC2). It defines common cloud computing models like SaaS, PaaS, IaaS and discusses key EC2 concepts such as elasticity, security, pricing models and cost savings strategies. Instance types, operating systems, data transfer costs and the EC2 API are also summarized. The document aims to introduce readers to AWS and EC2 capabilities for flexible and scalable cloud computing.
Amazon Aurora is a relational database engine that combines the speed and reliability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. Amazon Aurora is designed to be compatible with MySQL 5.6, so that existing MySQL applications and tools can run without requiring modification. AWS Database Migration Service helps you migrate databases to AWS easily and securely. The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database.
Presented by: Danilo Poccia, Technical Evangelist, Amazon Web Services
AWS Webcast - Best Practices in Architecting for the Cloud — Amazon Web Services
Join us to get a better understanding of architecting scalable, reliable applications for the cloud. You'll learn about monitoring, alarming, automatic scaling, load balancing, replication, and more, direct from AWS Senior Evangelist Jeff Barr.
This document provides an overview of AWS services including EC2, EBS, Auto Scaling, ELB, S3, IAM, VPC, Route 53, RDS, CloudWatch, Aurora, Glacier, and CloudFormation. It describes the purpose and basic usage of each service. Key points covered include launching EC2 instances, attaching EBS volumes, setting up auto scaling groups, load balancing with ELB, storing objects in S3, managing access with IAM, building VPC networks, configuring DNS with Route 53, provisioning databases with RDS, monitoring with CloudWatch, archiving with Glacier, and defining infrastructure as code with CloudFormation.
How to run your Hadoop Cluster in 10 minutes — Vladimir Simek
- Two companies faced challenges processing big data on-premises, including high fixed costs, slow deployment, lack of scalability, and outages impacting production.
- Amazon Elastic MapReduce (EMR) provides a managed Hadoop service that allows companies to launch clusters within minutes in the AWS cloud at lower costs by using elastic and scalable infrastructure.
- AOL moved their 2PB on-premises Hadoop cluster to EMR, reducing costs by 4x while gaining automatic scaling and high availability across availability zones. EMR addressed their challenges and allowed faster restatement of historical data.
Cloud storage services like AWS S3 provide scalable and durable storage in the cloud. Files and objects can be stored in S3 buckets and accessed from anywhere with an internet connection. S3 offers different storage classes like Standard and Glacier that provide varying levels of availability and durability depending on how often the data needs to be accessed. Other AWS storage options include EBS for raw block-level storage attached to EC2 instances, EFS for shared file storage across EC2 instances, and Glacier for long-term archival storage of non-critical data.
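The trade-off described above — picking a storage service by access pattern — can be captured as a small rule of thumb. The function below is a hypothetical simplification for illustration, not an official AWS decision procedure, and it covers only the options named in the paragraph.

```python
def suggest_storage(access="frequent", shared=False, block_device=False):
    """Toy rule mapping an access pattern to one of the AWS storage
    options described above. A simplification for illustration only."""
    if block_device:
        return "EBS"       # raw block volume attached to a single EC2 instance
    if shared:
        return "EFS"       # shared file system across multiple EC2 instances
    if access == "archive":
        return "Glacier"   # low-cost, long-term archival storage
    return "S3 Standard"   # durable object storage for regularly accessed data

print(suggest_storage())                  # S3 Standard
print(suggest_storage(access="archive"))  # Glacier
print(suggest_storage(shared=True))       # EFS
```

In practice the choice also depends on latency, retrieval cost, and durability targets, but the access-frequency axis is the dominant factor the paragraph highlights.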
Amazon Aurora is a MySQL-compatible database engine that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. This session introduces you to Amazon Aurora, explains common use cases for the service, and helps you get started with building your first Amazon Aurora–powered application.
4. **Multiplexer and Demultiplexer**: At the transmitting end, a multiplexer combines multiple input signals into a single composite signal by assigning each signal to a specific time slot. At the receiving end, a demultiplexer separates the composite signal back into individual signals based on their respective time slots.
### Types of TDM
1. **Synchronous TDM**: In synchronous TDM, time slots are pre-assigned to each signal, regardless of whether the signal has data to transmit or not. This can lead to inefficiencies if some time slots remain empty due to the absence of data.
2. **Asynchronous TDM (or Statistical TDM)**: Asynchronous TDM addresses the inefficiencies of synchronous TDM by allocating time slots dynamically based on the presence of data. Time slots are assigned only when there is data to transmit, which optimizes the use of the communication channel.
### Applications of TDM
- **Telecommunications**: TDM is extensively used in telecommunication systems, such as in T1 and E1 lines, where multiple telephone calls are transmitted over a single line by assigning each call to a specific time slot.
- **Digital Audio and Video Broadcasting**: TDM is used in broadcasting systems to transmit multiple audio or video streams over a single channel, ensuring efficient use of bandwidth.
- **Computer Networks**: TDM is used in network protocols and systems to manage the transmission of data from multiple sources over a single network medium.
### Advantages of TDM
- **Efficient Use of Bandwidth**: TDM all
TIME DIVISION MULTIPLEXING TECHNIQUE FOR COMMUNICATION SYSTEM
Case Study Amazon AWS
1. CASE STUDY: AMAZON AWS
Cloud Computing
Prof. Douglas Thain
University of Notre Dame
2. CAUTION TO THE READER:
HEREIN ARE EXAMPLES OF PRICES CONSULTED IN
OCTOBER 2014, TO GIVE A SENSE OF THE
MAGNITUDE OF COSTS. DO YOUR OWN RESEARCH
BEFORE SPENDING YOUR OWN MONEY!
3. SEVERAL HISTORICAL TRENDS
Shared Utility Computing
1960s – MULTICS – Concept of a Shared Computing Utility
1970s – IBM Mainframes – rent by the CPU-hour. (Fast/slow switch.)
Data Center Co-location
1990s-2000s – Rent machines for months/years, keep them close to the network access point
and pay a flat rate. Avoid running your own building with utilities!
Pay as You Go
Early 2000s - Submit jobs to a remote service provider where they run on the raw hardware.
Sun Cloud ($1/CPU-hour, Solaris + SGE); IBM Deep Capacity Computing on Demand (50 cents/hour)
Virtualization
1960s – OS-VM, VM-360 – Used to split mainframes into logical partitions.
1998 – VMware – First practical implementation on x86, but with a significant performance hit.
2003 – Xen paravirtualization provides much better performance, but the kernel must assist.
Late 2000s – Intel and AMD add hardware support for virtualization.
4. VIRTUAL-* ALLOWS FOR THE SCALE OF
ABSTRACTION TO INCREASE OVER TIME
Run one process within certain resource limits.
Op Sys has virtual memory, virtual CPU, and virtual storage (file system).
Run multiple processes within certain resource limits.
Resource containers (Solaris), virtual servers (Linux), virtual images (Docker)
Run an entire operating system within certain limits.
Virtual machine technology: VMware, Xen, KVM, etc.
Run a set of virtual machines connected via a private network.
Virtual networks (SDNs) provision bandwidth between virtual machines.
Run a private virtual architecture for every customer.
Automated tools replicate virtual infrastructure as needed.
5. AMAZON AWS
Grew out of Amazon’s need to rapidly provision and configure machines of standard
configurations for its own business.
Early 2000s – Both private and shared data centers began using virtualization to perform
“server consolidation”
2003 – Internal memo by Chris Pinkham describing an “infrastructure service for the
world.”
2006 – S3 first deployed in the spring, EC2 in the fall
2008 – Elastic Block Store available.
2009 – Relational Database Service
2012 – DynamoDB
Does it turn a profit?
7. TERMINOLOGY
Instance = One running virtual machine.
Instance Type = hardware configuration: cores, memory, disk.
Instance Store Volume = Temporary disk associated with an instance.
Image (AMI) = Stored bits which can be turned into instances.
Key Pair = Credentials used to access a VM from the command line.
Region = Geographic location, price, laws, network locality.
Availability Zone = Subdivision of a region that is fault-independent.
9. EC2 PRICING MODEL
Free Usage Tier
On-Demand Instances
Start and stop instances whenever you like; costs are rounded up to the nearest hour. (Worst price.)
Reserved Instances
Pay up front for one/three years in advance. (Best price.)
Unused instances can be sold on a secondary market.
Spot Instances
Specify the price you are willing to pay, and instances get started and stopped without any warning as the market changes. (Kind of like Condor!)
http://aws.amazon.com/ec2/pricing/
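The trade-off between the three purchase options can be sketched with a little arithmetic. The rates below are hypothetical placeholders, not real AWS prices (do your own research, per slide 2):

```python
import math

# Hypothetical hourly rates for one instance type -- NOT real AWS prices.
ON_DEMAND_HOURLY = 0.10      # worst price, no commitment
RESERVED_UPFRONT = 300.00    # one-year term paid in advance
RESERVED_HOURLY = 0.03       # best effective price
SPOT_HOURLY_AVG = 0.04       # varies with the market; no availability guarantee

def yearly_cost(hours_used):
    """Return (on_demand, reserved, spot) cost for one year of usage.
    On-demand usage is billed rounded up to the nearest hour."""
    on_demand = ON_DEMAND_HOURLY * math.ceil(hours_used)
    reserved = RESERVED_UPFRONT + RESERVED_HOURLY * hours_used
    spot = SPOT_HOURLY_AVG * hours_used
    return on_demand, reserved, spot

# Reserved only wins if the instance runs most of the year:
print(yearly_cost(8760))   # always on: reserved beats on-demand
print(yearly_cost(1000))   # light use: on-demand beats the upfront commitment
```

Spot is cheapest of all in this sketch, but only for work that tolerates being stopped without warning.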
10. FREE USAGE TIER
750 hours of EC2 running Linux, RHEL, or SLES t2.micro instance usage
750 hours of EC2 running Microsoft Windows Server t2.micro instance usage
750 hours of Elastic Load Balancing plus 15 GB data processing
30 GB of Amazon Elastic Block Storage in any combination of General Purpose
(SSD) or Magnetic, plus 2 million I/Os (with Magnetic) and 1 GB of snapshot
storage
15 GB of bandwidth out aggregated across all AWS services
1 GB of Regional Data Transfer
15. SIMPLE STORAGE SERVICE (S3)
A bucket is a container for objects and describes location, logging, accounting,
and access control. A bucket can hold any number of objects, which are files of up
to 5TB. A bucket has a name that must be globally unique.
Fundamental operations corresponding to HTTP actions:
http://bucket.s3.amazonaws.com/object
PUT a new object or update an existing object.
GET an existing object from a bucket.
DELETE an object from the bucket.
LIST keys present in a bucket, with a filter.
A bucket has a flat directory structure (despite the appearance given by the
interactive web interface.)
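Because the key space is flat, the "folders" shown by the web interface are just key prefixes recovered at listing time. A plain-Python sketch standing in for the LIST operation with a prefix filter and "/" delimiter (not a real S3 client):

```python
# A bucket is just a flat map from key to object bytes.
bucket = {
    "logs/2014/10/01.txt": b"...",
    "logs/2014/10/02.txt": b"...",
    "images/cat.png": b"...",
}

def list_keys(bucket, prefix="", delimiter=None):
    """Mimic LIST: return (keys, common_prefixes). With a delimiter, deeper
    'subdirectories' are collapsed into common prefixes, as the S3 API does."""
    keys, common = set(), set()
    for key in bucket:
        if not key.startswith(prefix):
            continue
        rest = key[len(prefix):]
        if delimiter and delimiter in rest:
            common.add(prefix + rest.split(delimiter, 1)[0] + delimiter)
        else:
            keys.add(key)
    return sorted(keys), sorted(common)

print(list_keys(bucket, prefix="logs/2014/10/"))  # the "directory" contents
print(list_keys(bucket, delimiter="/"))           # "top level": only prefixes
```

No directories ever exist in the bucket itself; only the listing operation creates that illusion.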
17. BUCKET PROPERTIES
Versioning – If enabled, PUT/DELETE result in the creation of new versions
without destroying the old.
Lifecycle – Delete or archive objects in a bucket a certain time after creation or
last access or number of versions.
Access Policy – Control when and where objects can be accessed.
Access Control – Control who may access objects in this bucket.
Logging – Keep track of how objects are accessed.
Notification – Be notified when failures occur.
18. S3 WEAK CONSISTENCY MODEL
Direct quote from the Amazon developer API:
“Updates to a single key are atomic….”
“Amazon S3 achieves high availability by replicating data across multiple servers
within Amazon's data centers. If a PUT request is successful, your data is safely
stored. However, information about the changes must replicate across Amazon
S3, which can take some time, and so you might observe the following behaviors:
A process writes a new object to Amazon S3 and immediately attempts to read it. Until the
change is fully propagated, Amazon S3 might report "key does not exist."
A process writes a new object to Amazon S3 and immediately lists keys within its bucket. Until
the change is fully propagated, the object might not appear in the list.
A process replaces an existing object and immediately attempts to read it. Until the change is
fully propagated, Amazon S3 might return the prior data.
A process deletes an existing object and immediately attempts to read it. Until the deletion is
fully propagated, Amazon S3 might return the deleted data.”
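A client that needs read-after-write under this model has to poll and retry. A toy sketch: the fake store below stands in for S3 (a write becomes visible only after a few reads, mimicking replication delay); the retry loop is the client-side pattern, not any real AWS API:

```python
class EventuallyConsistentStore:
    """Toy model: a write becomes visible only after `lag` subsequent reads,
    mimicking replication delay across servers."""
    def __init__(self, lag=2):
        self.visible = {}
        self.pending = {}   # key -> (value, reads remaining until visible)
        self.lag = lag

    def put(self, key, value):
        self.pending[key] = (value, self.lag)

    def get(self, key):
        if key in self.pending:
            value, remaining = self.pending[key]
            if remaining <= 0:
                self.visible[key] = value      # change has "propagated"
                del self.pending[key]
            else:
                self.pending[key] = (value, remaining - 1)
        return self.visible.get(key)           # None mimics "key does not exist"

def get_with_retry(store, key, attempts=10):
    """Poll until the key is visible -- the client-side answer to
    'might report key does not exist' right after a write."""
    for _ in range(attempts):
        value = store.get(key)
        if value is not None:
            return value
    raise KeyError(key)

store = EventuallyConsistentStore(lag=2)
store.put("photo.jpg", b"bytes")
print(store.get("photo.jpg"))              # None: not yet propagated
print(get_with_retry(store, "photo.jpg"))  # succeeds after a few polls
```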
22. ELASTIC BLOCK STORE
An EBS volume is a virtual disk of a fixed size with a block read/write interface. It can be
mounted as a filesystem on a running EC2 instance where it can be updated
incrementally. Unlike an instance store, an EBS volume is persistent.
(Compare to an S3 object, which is essentially a file that must be accessed in its entirety.)
Fundamental operations:
CREATE a new volume (1 GB - 1 TB)
COPY a volume from an existing EBS volume or S3 object.
MOUNT on one instance at a time.
SNAPSHOT current state to an S3 object.
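The difference between the two interfaces can be sketched in a few lines: a block device supports cheap in-place writes at an offset, while an object must be replaced wholesale. Plain-Python stand-ins, not any AWS API:

```python
# Block interface (EBS-like): fixed-size volume, incremental writes in place.
volume = bytearray(1024)                 # a tiny "virtual disk"

def write_blocks(volume, offset, data):
    """Update only the touched bytes, as a filesystem on EBS would."""
    volume[offset:offset + len(data)] = data

write_blocks(volume, 512, b"update")     # cheap partial update

# Object interface (S3-like): the unit of update is the whole object.
objects = {}
objects["key"] = b"version 1 of the whole file"
objects["key"] = b"version 2 of the whole file"   # full replacement each time

print(bytes(volume[512:518]))            # the six updated bytes
```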
25. EBS IS APPROX. 3X MORE EXPENSIVE BY VOLUME
AND 10X MORE EXPENSIVE BY IOPS THAN S3.
26. USE GLACIER FOR COLD DATA
Glacier is structured like S3: a vault is a container for an arbitrary number of archives.
Policies, accounting, and access control are associated with vaults, while an archive is a
single object.
However:
All operations are asynchronous and notified via SNS.
Vault listings are updated once per day.
Archive downloads may take up to four hours.
Only 5% of total data can be accessed in a given month.
Pricing:
Storage: $0.01 per GB-month
Operations: $0.05 per 1000 requests
Data Transfer: Like S3, free within AWS.
S3 Policies can be set up to automatically move data into Glacier.
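At the October-2014 prices quoted above, a back-of-envelope monthly cost works out as follows (a sketch only; check current pricing before relying on it):

```python
# Prices from the slide (October 2014 -- do your own research!).
STORAGE_PER_GB_MONTH = 0.01
PER_1000_REQUESTS = 0.05

def glacier_monthly_cost(gb_stored, requests):
    """Storage plus request charges for one month; transfer within AWS is free."""
    return gb_stored * STORAGE_PER_GB_MONTH + (requests / 1000) * PER_1000_REQUESTS

# Archiving 10 TB with 2,000 upload requests in a month:
print(glacier_monthly_cost(10 * 1024, 2000))   # ~$102.50
```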
27. DURABILITY
Amazon claims about S3:
Amazon S3 is designed to sustain the concurrent loss of data in two facilities, e.g. 3+ copies across
multiple availability zones.
99.999999999% durability of objects over a given year.
Amazon claims about EBS:
Amazon EBS volume data is replicated across multiple servers in an Availability Zone to prevent the
loss of data from the failure of any single component.
Volumes with <20 GB of modified data since the last snapshot have an annual failure rate of 0.1% - 0.5%,
resulting in complete loss of the volume.
Commodity hard disks have an AFR of about 4%.
Amazon's claims about Glacier are the same as for S3:
Amazon S3 is designed to sustain the concurrent loss of data in two facilities, e.g. 3+ copies across
multiple availability zones, PLUS periodic internal integrity checks.
99.999999999% durability of objects over a given year.
Beware of oversimplified arguments about low-probability events!
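The meaning of "eleven nines" can be made concrete with a little arithmetic. Note that this sketch assumes independent failures, which is exactly the oversimplification the slide warns about:

```python
# 99.999999999% durability => probability of losing a given object in a year.
p_loss_s3 = 1 - 0.99999999999            # about 1e-11

# Expected annual losses among one billion objects, IF failures were independent:
print(p_loss_s3 * 1e9)                   # ~0.01 objects per year

# Compare annual failure rates for data on a single volume/disk:
p_loss_ebs = 0.002                       # EBS: 0.1%-0.5% AFR, take ~0.2%
p_loss_disk = 0.04                       # commodity disk AFR ~4%
print(p_loss_disk / p_loss_ebs)          # a single disk fails ~20x as often
```

Correlated failures (a fire, a software bug replicated everywhere) can dominate these tiny independent-loss numbers, which is why the slide's caveat matters.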
28. ARCHITECTURE CENTER
Ideas for constructing large scale infrastructures using AWS:
http://aws.amazon.com/architecture/
29. COMMAND LINE SETUP
Go to your profile menu (your name) in the upper right hand corner, select “Security
Credentials” and “Continue to Security Credentials”
Select “Access Keys”
Select “New Access Key” and save the generated keys somewhere.
Edit ~/.aws/config and set it up like this:

[default]
output = json
region = us-west-2
aws_access_key_id = XXXXXX
aws_secret_access_key = YYYYYYYYYYYY

Note the syntax here is different from how it was given in the web console
(AWSAccessKey=XXXXXX, AWSSecretAccessKey=YYYYYYYYY)!

Now test it: aws ec2 describe-instances
30. S3 COMMAND LINE EXAMPLES
aws s3 mb s3://bucket
. . . cp localfile s3://bucket/key
mv s3://bucket/key s3://bucket/newname
ls s3://bucket
rm s3://bucket/key
rb s3://bucket
aws s3 help
aws s3 ls help
31. EC2 COMMAND LINE EXAMPLES
aws ec2 describe-instances
run-instances --image-id ami-xxxxx --count 1
--instance-type t1.micro --key-name keyfile
stop-instances --instance-id i-xxxxxx
aws ec2 help
aws ec2 start-instances help
32. WARMUP: GET STARTED WITH AMAZON
Skim through the AWS documentation.
Sign up for AWS at http://aws.amazon.com
(Skip the IAM management for now)
Apply the service credit you received by email.
Create and download a Key-Pair, save it in your home directory.
Create a VM via the AWS Console
Connect to your newly-created VM like this:
ssh -i my-aws-keypair.pem ec2-user@ip-address-of-vm
Create a bucket in S3 and upload/download some files.