This document discusses how to scale web applications in the cloud using Amazon Web Services (AWS). It explains key AWS services such as EC2, S3, RDS, and SQS that can be used to build scalable applications. The document also provides an example of how the coding practice platform Coderloop was built on AWS to handle increasing user demand. It recommends tools such as Puppet, Capistrano, and Nagios for deployment, monitoring, and managing infrastructure on AWS. Lastly, it provides tips for reducing AWS costs and concludes that AWS is an excellent platform for building scalable applications.
Amazon Aurora is a relational database engine that combines the speed and reliability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. Amazon Aurora is designed to be compatible with MySQL 5.6, so that existing MySQL applications and tools can run without requiring modification. AWS Database Migration Service helps you migrate databases to AWS easily and securely. The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database.
Presented by: Danilo Poccia, Technical Evangelist, Amazon Web Services
Want to get ramped up on how to use Amazon's big data web services and launch your first big data application on AWS? Join us on our journey as we discuss reference architecture, design patterns, and best practices for assembling technologies to meet your big data challenges. We will also build a big data application in real-time using Amazon EMR, Amazon Redshift, Amazon Kinesis, Amazon DynamoDB, and Amazon S3.
Migrating On-Premises Databases to Cloud - AWS PS Summit Canberra – Amazon Web Services
Migrating On-Premises Databases to Cloud
The benefits of running databases in the cloud are compelling but how do you get the data there? In this session we will explore how to use the AWS Database Migration Service and the AWS Schema Conversion Tool to help you migrate, or continuously replicate, your on-premises databases to AWS.
Speaker: Craig Roach, Solutions Architect, Amazon Web Services
Level: 200
Convert and Migrate Your NoSQL Database or Data Warehouse to AWS - May 2017 A... – Amazon Web Services
Learning Objectives:
- Understand the use cases for migrating or replicating databases to the cloud
- Learn about the benefits of cloud-native databases for performance and cost reduction
- See how AWS Database Migration Service helps with your migration
- See how AWS Schema Conversion Tool makes conversions simple and quick
Moving or replicating your databases to the cloud should be simple and inexpensive. AWS has recently enhanced the AWS Database Migration Service and the AWS Schema Conversion Tool with new data sources to increase your migration options. You can now export from MongoDB databases and Greenplum, IBM Netezza, HPE Vertica, Teradata, Oracle DW and Microsoft SQL Server data warehouses to AWS. Learn how to export and migrate your data and procedural code with minimal downtime to the cloud database of your choice, including cloud-native offerings such as Amazon Aurora, Amazon DynamoDB and Amazon Redshift.
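As a concrete illustration of the continuous-replication workflow described above, the following sketch assembles the parameters for an AWS DMS replication task using boto3. The task identifier, endpoint ARNs, and replication-instance ARN are hypothetical placeholders; this is a minimal sketch, not a complete migration runbook.

```python
# Sketch: creating an AWS DMS replication task with boto3.
# All ARNs and identifiers below are hypothetical placeholders.

import json

def build_replication_task_params(task_id, source_arn, target_arn,
                                  instance_arn,
                                  migration_type="full-load-and-cdc"):
    """Assemble the parameters for dms.create_replication_task().

    migration_type "full-load-and-cdc" performs an initial bulk copy and
    then continuously replicates changes, which is what keeps downtime on
    the source database minimal during the migration.
    """
    table_mappings = {
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-all",
            "object-locator": {"schema-name": "%", "table-name": "%"},
            "rule-action": "include",
        }]
    }
    return {
        "ReplicationTaskIdentifier": task_id,
        "SourceEndpointArn": source_arn,
        "TargetEndpointArn": target_arn,
        "ReplicationInstanceArn": instance_arn,
        "MigrationType": migration_type,
        "TableMappings": json.dumps(table_mappings),
    }

def start_migration(params):
    """Actually call DMS (requires AWS credentials); not invoked here."""
    import boto3
    dms = boto3.client("dms")
    return dms.create_replication_task(**params)

params = build_replication_task_params(
    "oracle-to-aurora-demo",
    "arn:aws:dms:us-east-1:123456789012:endpoint:SOURCE",
    "arn:aws:dms:us-east-1:123456789012:endpoint:TARGET",
    "arn:aws:dms:us-east-1:123456789012:rep:INSTANCE",
)
print(params["MigrationType"])  # full-load-and-cdc
```

The table-mapping rule with `%` wildcards selects every schema and table; in practice you would narrow the selection and add transformation rules produced by the Schema Conversion Tool.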
AWS Summit 2013 | India - Scaling Seamlessly and Going Global with the Cloud,... – Amazon Web Services
AWS provides a platform that is ideally suited for deploying highly available and reliable systems that can scale with a minimal amount of human interaction. This talk describes a set of architectural patterns that support highly available services that are also scalable, low cost, low latency and allow for taking your application global with the click of a button. We walk through the various architectural decisions taken to achieve high scale and address global audience.
Getting Started with the Hybrid Cloud: Enterprise Backup and Recovery – Amazon Web Services
This session is for architects and storage admins seeking simple and non-disruptive ways to adopt cloud platforms in their organizations. You will learn how to deliver lower costs and greater scale with nearly seamless integration into your existing B&R processes. Services mentioned: S3, Glacier, Snowball, 3rd party partners, storage gateway, and ingestion services.
(DAT303) Oracle on AWS and Amazon RDS: Secure, Fast, and Scalable – Amazon Web Services
AWS and Amazon RDS provide advanced features and architectures that enable graceful migration, high performance, elastic scaling, and high availability for Oracle database workloads. Learn best practices for realizing the benefits of the cloud while reducing costs, by running Oracle on AWS in a variety of single- and multi-instance topologies. This session teaches you to take advantage of features unique to AWS and Amazon RDS to free your databases from the confines of the conventional data center.
BDA302 Deep Dive on Migrating Big Data Workloads to Amazon EMR – Amazon Web Services
Customers are migrating their analytics, data processing (ETL), and data science workloads running on Apache Hadoop, Spark, and data warehouse appliances from on-premises deployments to Amazon EMR in order to save costs, increase availability, and improve performance. Amazon EMR is a managed service that lets you process and analyze extremely large data sets using the latest versions of over 15 open-source frameworks in the Apache Hadoop and Spark ecosystems. This session will focus on identifying the components and workflows in your current environment and providing the best practices to migrate these workloads to Amazon EMR. We will explain how to move from HDFS to Amazon S3 as a durable storage layer, and how to lower costs with Amazon EC2 Spot instances and Auto Scaling. Additionally, we will go over common security recommendations and tuning tips to accelerate the time to production.
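The two cost levers mentioned above (S3 as the storage layer, Spot for compute) can be sketched as an EMR cluster definition for boto3's `run_job_flow` call. The release label, bucket name, instance types, and counts are illustrative assumptions, not recommendations.

```python
# Sketch: an Amazon EMR cluster definition that logs to S3 (storage via
# EMRFS s3:// paths) and runs its task fleet on Spot instances.
# Release label, bucket, and sizes are illustrative assumptions.

def build_emr_cluster_config(name, log_bucket, release="emr-5.29.0"):
    """Assemble kwargs for emr.run_job_flow(): master and core nodes
    on on-demand capacity, task nodes on Spot with a capped bid."""
    return {
        "Name": name,
        "ReleaseLabel": release,
        "LogUri": f"s3://{log_bucket}/logs/",
        "Applications": [{"Name": "Spark"}, {"Name": "Hadoop"}],
        "Instances": {
            "InstanceGroups": [
                {"Name": "master", "InstanceRole": "MASTER",
                 "Market": "ON_DEMAND", "InstanceType": "m5.xlarge",
                 "InstanceCount": 1},
                {"Name": "core", "InstanceRole": "CORE",
                 "Market": "ON_DEMAND", "InstanceType": "m5.xlarge",
                 "InstanceCount": 2},
                # Spot task nodes do the bursty work; losing one only
                # slows the job, since durable data lives in S3, not HDFS.
                {"Name": "task-spot", "InstanceRole": "TASK",
                 "Market": "SPOT", "BidPrice": "0.192",
                 "InstanceType": "m5.xlarge", "InstanceCount": 4},
            ],
            "KeepJobFlowAliveWhenNoSteps": False,
        },
        "JobFlowRole": "EMR_EC2_DefaultRole",
        "ServiceRole": "EMR_DefaultRole",
    }

def launch(config):
    """Requires AWS credentials; defined but not invoked here."""
    import boto3
    return boto3.client("emr").run_job_flow(**config)

config = build_emr_cluster_config("migration-poc", "my-emr-bucket")
spot = [g for g in config["Instances"]["InstanceGroups"]
        if g["Market"] == "SPOT"]
print(len(spot))  # 1
```

Because `KeepJobFlowAliveWhenNoSteps` is `False`, the cluster terminates when its steps finish, so you pay only for the job's runtime while the data persists in S3.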
Accelerate your Business with SAP on AWS - AWS Summit Cape Town 2017 – Amazon Web Services
From dev and test to large-scale HANA production deployments, enterprise usage of SAP on AWS is rapidly growing across all verticals and geographies. Gain insights on the AWS SAP offering and partnership and why a cloud first approach makes business, technical and financial sense for the numerous SAP solutions that are certified and ready to be deployed today.
Speaker: Michael Needham, Senior Manager, Solutions Architecture, Amazon Web Services
Evolution of Geospatial Workloads on AWS - AWS PS Summit Canberra – Amazon Web Services
Geospatial workloads are often amongst the first to move to AWS in government. This session will cover some common topics in GIS, including optimizing for license costs, leveraging native cloud capabilities and running GIS "desktop" software on the AWS cloud.
Speaker: Herman Coomans, Solutions Architect, Amazon Web Services
Level: 200
Training for AWS Solutions Architect at http://zekelabs.com/courses/amazon-web-services-training-bangalore/. This slide deck describes AWS database offerings: the Relational Database Service (RDS), Read Replicas, Multi-AZ deployments, DynamoDB, ElastiCache, Redshift, Aurora and Neptune.
___________________________________________________
zekeLabs is a technology training platform. We provide instructor-led corporate and classroom training for professionals on industry-relevant, cutting-edge technologies such as Big Data, Machine Learning, Natural Language Processing, Artificial Intelligence, Data Science, Amazon Web Services, DevOps, and Cloud Computing, as well as frameworks such as Django, Spring, Ruby on Rails, Angular 2 and many more.
Reach out to us at www.zekelabs.com, call us at +91 8095465880, or send an email to info@zekelabs.com.
Database Migration – Simple, Cross-Engine and Cross-Platform Migration – Amazon Web Services
Learn about the new AWS Database Migration Service, which helps you migrate databases with minimal downtime from on-premises and Amazon EC2 environments to Amazon RDS, Amazon Redshift, Amazon Aurora and EC2 databases.
AWS March 2016 Webinar Series - Managed Database Services on Amazon Web Services – Amazon Web Services
AWS customers can choose among a variety of managed database services in addition to running databases in Amazon EC2 on their own. Managed database services remove the burden of implementing, managing and maintaining the database and let you focus on your applications.
In this webinar, we will help you understand the differences and common features of these managed database services, and how to choose one or more of them. We will explain the fundamentals of Amazon RDS, a relational database service in the cloud; Amazon DynamoDB, a fully managed NoSQL database service; Amazon ElastiCache, a fast, in-memory caching service in the cloud; and Amazon Redshift, a fully managed, petabyte-scale data warehouse solution. We will also cover how each service can help support your application, how much each service costs, and how to get started.
Learning Objectives:
• Understand the Managed Database Service options available on AWS
• Learn how to choose among the Managed Database Services on AWS for your use cases
Who Should Attend:
• IT Professionals, IT Managers, DBAs, Systems Administrators and Developers
An introduction to Amazon RDS for SQL Server, how you can lower the costs of running SQL Server on Amazon RDS, and how to migrate your data into and out of Amazon RDS for SQL Server.
Managing Data with Amazon ElastiCache for Redis - August 2016 Monthly Webinar... – Amazon Web Services
Many data sets, such as time-series collections or Internet of Things (IoT) deployments, can include huge numbers of sensor reports and other data points, which can be a challenge to manage and aggregate. Amazon ElastiCache for Redis provides an on-demand managed service with the performance and scalability to turn big data into useful information. Join us to learn how to use Amazon ElastiCache to create serverless solutions that let you rapidly make use of large and multisource data sets.
Learning Objectives:
• Learn how to ingest and analyze sensor data using Amazon ElastiCache for Redis and the AWS IoT Service
• Learn how to use ElastiCache Redis for Time-Series data
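The time-series objective above usually comes down to Redis sorted sets, where the score is the reading's timestamp so range queries become time-window queries. The key scheme and payload format in this sketch are illustrative assumptions, not the webinar's actual code.

```python
# Sketch: storing sensor readings as a Redis sorted set, a common
# time-series pattern on ElastiCache for Redis. Key scheme and payload
# format are illustrative assumptions.

import json

def reading_key(sensor_id, day):
    """One sorted set per sensor per day keeps each set small and
    makes it easy to expire old days wholesale."""
    return f"ts:{sensor_id}:{day}"

def encode_reading(timestamp_ms, value):
    """Score = timestamp, so ZRANGEBYSCORE answers time-window queries;
    the member embeds the timestamp so duplicate values stay distinct."""
    member = json.dumps({"t": timestamp_ms, "v": value})
    return member, timestamp_ms

def ingest(client, sensor_id, day, timestamp_ms, value):
    """Write one reading; `client` is a redis.Redis instance
    (not created here -- requires a reachable Redis endpoint)."""
    member, score = encode_reading(timestamp_ms, value)
    client.zadd(reading_key(sensor_id, day), {member: score})

member, score = encode_reading(1718000000000, 21.5)
print(reading_key("sensor-42", "2016-08-01"))  # ts:sensor-42:2016-08-01
```

Reading a window back out would use `client.zrangebyscore(key, start_ms, end_ms)`, and daily aggregation can expire whole keys with `client.expire(key, ttl)` instead of trimming members one by one.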
Intended for customers who have (or will have) thousands of instances on AWS, this session is about reducing the complexity of managing costs for these large fleets so they run efficiently. Attendees will learn about common roadblocks that prevent large customers from cost optimizing, tools they can use to efficiently remove those roadblocks, and techniques to monitor their rate of cost optimization. The session will include a case study that will talk in detail about the millions of dollars saved using these techniques. Customers will learn about a range of templates they can use to quickly implement these techniques, and also partners who can help them implement these templates.
Presented by: Guy Kfir, Senior Account Manager, Amazon Web Services
Customer Guest: David Costa, CTO, Fredhopper
Amazon Aurora is a MySQL-compatible database engine that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. This session introduces you to Amazon Aurora, explains common use cases for the service, and helps you get started with building your first Amazon Aurora–powered application.
Migrating databases with minimal downtime to Amazon RDS, Amazon Redshift and Amazon Aurora
Migration of databases to the same or different engines, and from on-premises to the cloud
Schema conversion from Oracle and SQL Server to MySQL and Aurora
Innovate, optimize and profit with cloud computing – Federico Feroldi
Telecom companies can fully embrace the power of cloud computing technologies by innovating, optimizing costs and creating new revenue streams. Here's how.
Building a microservices architecture means making a lot of decisions, about tools, about frameworks. In this talk I share the decisions that we made at Measurence during our journey for building a microservices architecture based on Scala technologies.
We're going to talk about Spray, Akka, Swagger, Sbt, Docker, Jenkins, Mesos and Marathon.
Learn about the patterns and techniques a business should be using in building their infrastructure on Amazon Web Services to be able to handle rapid growth and success in the early days. From leveraging highly scalable AWS services, to architecting best patterns, there are a number of smart choices you can make early on to help you overcome some typical infrastructure issues.
Presenter: Chris Munns, Solutions Architect, Amazon Web Services
Amazon Web Services (AWS) began offering IT infrastructure services to businesses in the form of web services, now commonly known as cloud computing. One of the key benefits of cloud computing is the opportunity to replace up-front capital infrastructure expenses with low variable costs that scale with your business. With the cloud, businesses no longer need to plan for and procure servers and other IT infrastructure weeks or months in advance. Instead, they can instantly spin up hundreds or thousands of servers in minutes and deliver results faster.
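The "spin up hundreds of servers in minutes" claim above corresponds to a single API call. This minimal boto3 sketch builds that call's parameters; the AMI ID and instance type are hypothetical placeholders.

```python
# Sketch: launching a batch of EC2 instances with one boto3 call.
# AMI ID and instance type are illustrative placeholders.

def build_launch_request(ami_id, count, instance_type="t3.micro"):
    """Assemble kwargs for ec2.run_instances(). MinCount/MaxCount lets
    EC2 launch as many of the requested instances as capacity allows,
    rather than failing the whole request."""
    return {
        "ImageId": ami_id,
        "InstanceType": instance_type,
        "MinCount": 1,
        "MaxCount": count,
        "TagSpecifications": [{
            "ResourceType": "instance",
            "Tags": [{"Key": "Name", "Value": "web-fleet"}],
        }],
    }

def launch(request):
    """Requires AWS credentials; defined but not invoked here."""
    import boto3
    return boto3.client("ec2").run_instances(**request)

request = build_launch_request("ami-0123456789abcdef0", count=100)
print(request["MaxCount"])  # 100
```

Tearing the fleet back down is equally one call (`terminate_instances`), which is what turns capacity from a capital expense into a variable cost.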
Amazon Aurora is a MySQL-compatible database engine that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. This session introduces you to Amazon Aurora, explains common use cases for the service, and helps you get started with building your first Amazon Aurora–powered application.
AWS Cloud Kata 2013 | Singapore - Getting to Scale on AWS – Amazon Web Services
This session will focus on how to get from 'Minimum Viable Product' (MVP) to scale. It will also explain how to deal with unpredictable demand and how to build a scalable business. Attend this session to learn how to:
Scale web servers and app services with Elastic Load Balancing and Auto Scaling on Amazon EC2
Scale your storage on Amazon S3 and S3 Reduced Redundancy Storage
Scale your database with Amazon DynamoDB, Amazon RDS, and Amazon ElastiCache
Scale your customer base by reaching customers globally in minutes with Amazon CloudFront
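The Auto Scaling item in the list above can be sketched with today's Auto Scaling API (target tracking, which postdates this 2013 session but is the current equivalent of its scaling policies). The group and policy names are illustrative assumptions.

```python
# Sketch: a target-tracking Auto Scaling policy that keeps the group's
# average CPU near 50%. Group and policy names are illustrative.

def build_scaling_policy(group_name, target_cpu=50.0):
    """Assemble kwargs for autoscaling.put_scaling_policy(). With target
    tracking, Auto Scaling adds or removes instances automatically to
    hold the chosen metric at the target value."""
    return {
        "AutoScalingGroupName": group_name,
        "PolicyName": f"{group_name}-cpu-target",
        "PolicyType": "TargetTrackingScaling",
        "TargetTrackingConfiguration": {
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ASGAverageCPUUtilization"
            },
            "TargetValue": target_cpu,
        },
    }

def apply(policy):
    """Requires AWS credentials; defined but not invoked here."""
    import boto3
    return boto3.client("autoscaling").put_scaling_policy(**policy)

policy = build_scaling_policy("web-asg")
print(policy["TargetTrackingConfiguration"]["TargetValue"])  # 50.0
```

Paired with an Elastic Load Balancer in front of the group, this is the pattern that absorbs the unpredictable demand the session describes.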
AWS Webcast - Best Practices in Architecting for the Cloud – Amazon Web Services
Join us to get a better understanding around architecting scalable, reliable applications for the cloud. You'll learn about monitoring, alarming, automatic scaling, load balancing, replication, and more, direct from AWS Senior Evangelist Jeff Barr.
An Agile, Functional and Serverless Public Administration: It Can Be Done! - C... – Federico Feroldi
In this talk we recount the journey of the team that designed and built the nationwide messaging platform between public administration and citizens, envisaged by the Three-Year Plan for ICT in Public Administration (Piano Triennale per l'ICT della Pubblica Amministrazione). What were the difficulties? What were the wins? What did we learn along the way? Public administration is a complex, slow machine, but one that, if managed the right way, can generate innovation and state-of-the-art technology.
From 1 to infinity: how to scale your tech organization, build a great cultur... – Federico Feroldi
As a technology leader, one of your most challenging tasks is to bootstrap a new tech organization and grow it to tens or even hundreds of people. In this talk I will share my learnings from 20+ years of experience as member and leader of several tech teams and hacker cultures. We will follow the journey, the successes and the mistakes of a startup founder: starting by himself and eventually growing to a 100+ people organization.
PHP Frameworks: I want to break free (IPC Berlin 2024) – Ralf Eggert
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk is aimed at encouraging a more independent approach to using PHP frameworks, moving towards a more flexible and future-proof approach to PHP development.
Search and Society: Reimagining Information Access for Radical Futures – Bhaskar Mitra
The field of Information retrieval (IR) is currently undergoing a transformative shift, at least partly due to the emerging applications of generative AI to information access. In this talk, we will deliberate on the sociotechnical implications of generative AI for information access. We will argue that there is both a critical necessity and an exciting opportunity for the IR community to re-center our research agendas on societal needs while dismantling the artificial separation between the work on fairness, accountability, transparency, and ethics in IR and the rest of IR research. Instead of adopting a reactionary strategy of trying to mitigate potential social harms from emerging technologies, the community should aim to proactively set the research agenda for the kinds of systems we should build inspired by diverse explicitly stated sociotechnical imaginaries. The sociotechnical imaginaries that underpin the design and development of information access technologies needs to be explicitly articulated, and we need to develop theories of change in context of these diverse perspectives. Our guiding future imaginaries must be informed by other academic fields, such as democratic theory and critical theory, and should be co-developed with social science scholars, legal scholars, civil rights and social justice activists, and artists, among others.
State of ICS and IoT Cyber Threat Landscape Report 2024 preview – Prayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence gathering facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Transcript: Selling digital books in 2024: Insights from industry leaders - T... – BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
"Impact of front-end architecture on development cost", Viktor Turskyi – Fwdays
I have heard many times that architecture is not important for the front-end. I have also often seen developers implement front-end features by simply following a framework's standard conventions, assume that this is enough to launch the project successfully, and then watch the project fail. How can you prevent this, and which approach should you choose? I have launched dozens of complex projects, and during the talk we will analyze which approaches have worked for me and which have not.
Neuro-symbolic is not enough, we need neuro-*semantic* – Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if sometime changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
GraphRAG is All You need? LLM & Knowledge GraphGuy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024Tobias Schneck
As AI technology is pushing into IT I was wondering myself, as an “infrastructure container kubernetes guy”, how get this fancy AI technology get managed from an infrastructure operational view? Is it possible to apply our lovely cloud native principals as well? What benefit’s both technologies could bring to each other?
Let me take this questions and provide you a short journey through existing deployment models and use cases for AI software. On practical examples, we discuss what cloud/on-premise strategy we may need for applying it to our own infrastructure to get it to work from an enterprise perspective. I want to give an overview about infrastructure requirements and technologies, what could be beneficial or limiting your AI use cases in an enterprise environment. An interactive Demo will give you some insides, what approaches I got already working for real.
JMeter webinar - integration with InfluxDB and GrafanaRTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
2. [Chart: monthly page views (0M to 500M PVs, Jan–Dec), comparing user demand with provisioned server capacity]
1/3 of your infrastructure cost is USELESS
3. TOPICS FOR TODAY
‣ What is Scalability?
‣ Amazon Web Services
‣ Building a web application on AWS
‣ Tools and Best Practices
6. WHAT IS SCALABILITY?
A desirable property of a system, which indicates its ability
to...
Gracefully handle increasing demand.
Increase its capacity (transactions, processing, storage,
throughput) proportionally with the addition of more
resources (nodes, CPU, storage, bandwidth).
7. SCALING UP
Add resources to a single node in the system.
Pros: transparent to the software, almost “linear” scaling, quickly fixes performance issues
Cons: risky, may impact the service, can be very expensive, cannot scale “infinitely”
“It's the Stay Puft Marshmallow Man!”
8. SCALING OUT
Add more nodes to the system.
Pros: cheap, almost no impact on the service, can scale “infinitely”
Cons: the system must be properly designed, could be complex to operate
“The best thing about being me: there's so many of me.”
9. SCALE FAST OR DIE!
If your application is accessible from the Internet, you can't decide how many people are going to use it.
The more time it takes for your systems to adapt to user demand, the further in advance you have to provision your system to handle future usage (and most of the time your forecast will be wrong).
10. ELASTICITY
The ability to quickly and gracefully increase capacity by adding more resources, and also to quickly and gracefully release resources when the required capacity decreases.
In a few words: the ability to quickly scale a system's capacity up and down based on demand, in an almost real-time fashion.
12. INTRODUCING
Amazon launched Amazon Web Services in 2006.
Currently the most mature, feature-rich and flexible IaaS platform.
In a few words: AWS is the Data Center in the Cloud (and much more).
13. 5 REASONS TO USE AWS
1. It’s cheap: pay per use, simple cost model.
2. It’s efficient: VMs, DBs, NAS, storage, messaging, CDNs, LBs, etc.
3. It’s safe: proven infrastructure; Amazon itself builds its services on the same technologies.
4. It’s flexible: everything can be managed with an API call; you can build your own tools or use one of the many available.
5. It’s unlimited: virtually no limit on the VM instances or storage you can use.
14. WEB APPS COMPONENTS
Web/Application servers: to serve dynamic pages
Databases: to store user data
File Storage: to store images, videos, documents
Computing nodes: to execute background tasks like
image conversion, video transcoding, email delivery
Messaging infrastructure: asynchronous and reliable
communication between nodes
Load balancers: distribute requests
Monitoring infrastructure: to check that everything is
working well and track the system usage
16. EXAMPLE: CODERLOOP
Coderloop is a community where programmers can practice their skills against each other by solving complex programming problems.
Users send computer programs to solve certain puzzles; Coderloop executes the programs, verifies their correctness and gives a rating.
Main building blocks: serve the web application, store user data, process submissions.
17. FIRST A WEB SERVER
EC2 is the Elastic Compute Cloud.
Users can launch virtual machines (called instances) with a
CLI tool or an API.
They have dynamic IPs but you can reserve fixed IPs
and associate them to an instance.
Instances are created from customizable “images” and
they come in many sizes (memory and CPU).
You pay for the CPU time and the bandwidth.
When an instance is turned off, you lose the data in it
(volatile storage).
18. PERSISTENT USERS’ DATA
RDS is the Relational Database Service.
It’s a fully managed MySQL database.
You can scale it as needed (CPU and available storage).
It’s automatically patched when needed.
Supports Multi-AZ deployments and replication.
You can create backups periodically or with a single API call.
You pay for CPU usage, dedicated primary storage,
backup storage (optional) and bandwidth used.
19. THE NOSQL ALTERNATIVE
SimpleDB is the non-relational data store.
It’s a fully managed, scalable “data store”.
Simple web service API, no SQL.
No need to define a schema in advance.
Scales automatically; you just keep adding data.
You pay for CPU time, used storage and bandwidth.
25 hours of CPU and 1GB of storage for free each month.
20. STORING FILES AND IMAGES
S3 is the Simple Storage Service.
Simple API (REST and SOAP) to write objects (files) in buckets (similar to directories).
Objects can be from 1 byte to 5GB in size.
Standard (99.999999999%) or reduced (99.99%) durability options.
You pay for the storage used and the bandwidth.
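Every S3 REST request must be signed with your secret key. A minimal sketch of the legacy Signature Version 2 scheme that S3's REST API used in this era (current SDKs use Signature Version 4); the credentials and bucket below are dummies:

```python
import base64
import hashlib
import hmac

def sign_s3_request(secret_key, verb, resource, date,
                    content_md5="", content_type="", amz_headers=""):
    """Legacy S3 (Signature V2) signature:
    base64(HMAC-SHA1(secret, string-to-sign))."""
    string_to_sign = ("\n".join([verb, content_md5, content_type, date])
                      + "\n" + amz_headers + resource)
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(),
                      hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

# Dummy credentials and resource, for illustration only.
sig = sign_s3_request("EXAMPLE_SECRET_KEY", "GET",
                      "/mybucket/photos/puppy.jpg",
                      "Tue, 27 Mar 2007 19:36:42 +0000")
# The signature goes in the header: "Authorization: AWS <access_key_id>:<sig>"
print(sig)
```

In practice you would let an SDK or library build this header for you; the point is that every request is individually authenticated, so buckets can be private by default.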
21. PROCESSING SUBMISSIONS
We use EC2 with SQS (Simple Queue Service)
Reliable, highly scalable message queue.
Web servers queue “job requests” that are picked by EC2
instances and processed.
EC2 instances can be added or removed based on the
amount of requests to be processed.
You pay for the amount of requests made to the service
and the amount of data transferred.
The first 100K requests are free each month.
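The queue-and-worker flow above can be sketched with Python's stdlib queue module standing in for SQS (a real system would call the SQS API over HTTP or via an SDK, and the workers would be separate EC2 instances rather than threads):

```python
import queue
import threading

# Stand-in for an SQS queue: web servers put job requests here.
job_queue = queue.Queue()
results = []
results_lock = threading.Lock()

def worker():
    """Simulates an EC2 worker instance polling the queue for jobs."""
    while True:
        job = job_queue.get()
        if job is None:                 # sentinel: shut this worker down
            job_queue.task_done()
            break
        # "Process" the submission (here: pretend to grade a program).
        with results_lock:
            results.append((job, "passed"))
        job_queue.task_done()

# Web servers enqueue job requests...
for submission_id in range(5):
    job_queue.put(submission_id)

# ...and a pool of workers (sized to the backlog) drains them in parallel.
workers = [threading.Thread(target=worker) for _ in range(2)]
for w in workers:
    w.start()
job_queue.join()                        # wait until every job is processed
for w in workers:
    job_queue.put(None)                 # one sentinel per worker
for w in workers:
    w.join()

print(sorted(results))
```

Because the queue decouples producers from consumers, you can add or remove workers at any time without touching the web tier, which is exactly what makes this tier elastic.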
22. SCALING OUT THE APP
We can distribute the requests with the Elastic Load Balancing service.
It monitors the available instances and routes incoming requests to “healthy” instances.
Supports sticky sessions and SSL termination.
You pay for the time and the bandwidth transferred.
With RDS we can easily set up read replicas for our database to scale the read capacity.
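Conceptually, the balancer keeps a health flag per instance and round-robins only across the healthy ones. A toy sketch of that routing logic (not ELB's actual implementation; instance IDs are hypothetical):

```python
import itertools

class ToyLoadBalancer:
    """Round-robin over instances that pass their health check."""
    def __init__(self, instances, health_check):
        self.instances = instances
        self.health_check = health_check        # callable: instance -> bool
        self._cycle = itertools.cycle(instances)

    def route(self):
        # Try each instance at most once per request.
        for _ in range(len(self.instances)):
            instance = next(self._cycle)
            if self.health_check(instance):
                return instance
        raise RuntimeError("no healthy instances available")

# Hypothetical fleet where one instance has failed its health check.
down = {"i-02"}
lb = ToyLoadBalancer(["i-01", "i-02", "i-03"],
                     health_check=lambda i: i not in down)
print([lb.route() for _ in range(4)])   # never routes to i-02
```

The real service also drains connections and re-admits instances once they pass health checks again, but the core idea is this simple: route around failure automatically.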
23. ENABLING ELASTICITY
With CloudWatch we can monitor our servers’ usage in real time: CPU, disk and network for EC2 instances and databases.
This information can be used to enable Auto Scaling: automatically scale your Amazon EC2 capacity up or down according to conditions you define (based on average CPU utilization, network activity or disk utilization).
New EC2 instances are automatically added to or removed from the Elastic Load Balancer.
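A simplified sketch of such a scaling rule; the thresholds, step size and limits are hypothetical, and real Auto Scaling policies are configured declaratively rather than coded like this:

```python
def desired_capacity(current, avg_cpu, minimum=2, maximum=10,
                     scale_up_at=70.0, scale_down_at=30.0):
    """Decide the fleet size from average CPU utilization (percent)."""
    if avg_cpu > scale_up_at:
        target = current + 1            # add one instance at a time
    elif avg_cpu < scale_down_at:
        target = current - 1            # release an instance when idle
    else:
        target = current                # within the comfort band: no change
    return max(minimum, min(maximum, target))

print(desired_capacity(4, avg_cpu=85.0))   # 5: scale out under load
print(desired_capacity(4, avg_cpu=12.0))   # 3: scale in when idle
print(desired_capacity(2, avg_cpu=12.0))   # 2: never below the minimum
```

Real policies also add a cooldown period between scaling actions so the fleet doesn't oscillate while new instances are still warming up.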
24. INCREASING AVAILABILITY
AWS is deployed on multiple Regions and Availability Zones.
AZs are distinct locations, insulated from failures in other AZs, and provide inexpensive, low-latency network connectivity to the other AZs in the same Region.
Regions consist of one or more AZs, are geographically dispersed, and are located in separate geographic areas or countries: currently Northern Virginia, Northern California, Ireland, and Singapore.
27. PUPPET
Centralized configuration management.
Rapid creation of an instance of a pre-defined type: web server, email server, etc.
Ensures uniform configuration of the entire set of instances of the same type at all times.
A Puppet Master is the guardian of configurations; multiple Puppet clients are installed on the EC2 instances.
28. CAPISTRANO
Application deployment and parallel execution of automated tasks.
Script a number of tasks, whether complex or not (deliveries, backups, site publication/maintenance, etc.), and execute them rapidly in parallel on X instances with a single command.
Webistrano (web interface) for 1-click deployments.
29. MONITORING AND LOGS
Supervision verifies the state of a host or a service and sends out an alarm upon detecting any abnormal behavior: Nagios.
Metrology enables instrumentation data to be archived and, if necessary, processed or filtered before it is presented in the form of graphs or reports: Cacti.
The more instances you have, the more scattered the logs on the various instances will be; implement centralized log collection: Syslog-NG.
31. 6 TIPS TO REDUCE COSTS
Keep machines in the same availability zone.
Use spot instances (they can cost up to 3 times less).
Choose your instance types wisely (a c1.medium costs 2x an m1.small but offers 5x the computing power).
Choose the smallest possible storage (it’s very easy to expand the capacity of an RDS instance).
Use Auto Scaling (always run the minimum number of instances needed to handle the traffic).
Reserve your instances (you can reduce your costs significantly by reserving a number of instances for one year or for three years).
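The reserved-instance tip comes down to simple break-even arithmetic: the upfront fee pays off once the hourly savings exceed it. A sketch with purely hypothetical prices (check the current AWS price list for real numbers):

```python
def breakeven_hours(on_demand_per_hour, upfront_fee, reserved_per_hour):
    """Hours of usage after which a reserved instance beats on-demand."""
    saving_per_hour = on_demand_per_hour - reserved_per_hour
    return upfront_fee / saving_per_hour

# Hypothetical prices: $0.10/h on-demand vs $200 upfront + $0.04/h reserved.
hours = breakeven_hours(0.10, 200.0, 0.04)
print(round(hours))    # ~3333 hours, i.e. under 5 months of 24/7 usage
```

The same calculation tells you the opposite, too: instances that run only a few hours a day may never reach break-even and are cheaper on-demand or as spot instances.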
33. Amazon Web Services is an excellent platform to build your first prototype or even run your production service.
It has a very flexible and proven API that lets you manage your infrastructure in ways you never thought were possible before.
It provides you with a lot of building blocks that scale well and that you can trust, so you don’t have to waste time building your infrastructure and can focus on making your service the best in the world.
And when the day comes that you hit the TechCrunch homepage, your infrastructure is ready to scale in minutes.