Brightpearl is a cloud-based business management platform that provides e-commerce, inventory, order, customer, and shipping functionality to over 1,300 customers. It is built on Amazon Web Services (AWS) using various programming languages and services. Some challenges of building and scaling such a platform on AWS include designing for redundancy, performance, concurrency, cost efficiency, and failure tolerance.
Hadoop World 2011: Data Ingestion, Egression, and Preparation for Hadoop - Sa... - Cloudera, Inc.
One of the first challenges Hadoop developers face is accessing all the data they need and getting it into Hadoop for analysis. Informatica PowerExchange accesses a variety of data types and structures at different latencies (e.g. batch, real-time, or near real-time) and ingests data directly into Hadoop. The next step is to parse the data in preparation for analysis in Hadoop. Informatica provides a visual IDE to deploy pre-built parsers, or to design parsers for complex data formats and deploy them on Hadoop. Once the analysis is complete, Informatica PowerExchange delivers the resulting output to other information management systems such as a data warehouse. In this session, learn from Informatica and one of their customers how to get all the data you need into Hadoop, parse a variety of data formats and structures, and egress the resultant output to other systems.
Picking the right database based on imperfect data is challenging. Decades of traditional app development have conditioned us to put everything in a big box. In this session we will look at selecting the right database for the right job.
Speakers:
Steve Abraham - Principal Database Specialist Solutions Architect, AWS
Charles Hammell - Principal Enterprise Architect, AWS
Which Database is Right for My Workload?: Database Week San Francisco - Amazon Web Services
Database Week at the San Francisco Loft: Which Database is Right for My Workload?
Monday, August 27th
Managed Relational Databases on the Cloud
9:30AM–10:00AM
Check In
10:00AM–10:15AM
Database Services at AWS
Short overview of AWS Database and Analytics offerings and an overview of the day's topics.
Speaker: Bill Baldwin - Global Enterprise Support Lead, AWS
10:15AM-11:15AM
Relational Database Services at AWS
Amazon RDS makes it easy to set up, operate, and scale a relational database in the cloud. We’ll look at what RDS does (and does not) do to manage the “muck” of database operations.
Speakers:
Vishwajit Tigadi - Manager, Strategic Accounts, AWS
Bill Baldwin - Global Enterprise Support Lead, AWS
11:15AM-12:15PM
Hands-On Lab: Managed Database Basics
Hands-on Lab to set up and use RDS and Aurora. You’ll need a laptop with a Firefox or Chrome browser.
Speakers:
Vishwajit Tigadi - Manager, Strategic Accounts, AWS
Chris Holmes - Technical Account Manager, AWS
12:15PM-1:15PM
Lunch
1:15PM-1:45PM
Open Source Databases on the Cloud
Speaker: Miguel Cervantes - Associate Solutions Architect, AWS
1:45PM-2:15PM
Oracle and SQL Server on the Cloud
Speaker: Joyjeet Banerjee - Enterprise Solutions Architect, AWS
Speakers:
Miguel Cervantes - Associate Solutions Architect, AWS
Joyjeet Banerjee - Enterprise Solutions Architect, AWS
Which Database is Right for My Workload: Database Week SF - Amazon Web Services
Database Week at the San Francisco Loft
Which Database is Right for My Workload?
Picking the right database based on imperfect data is challenging. Decades of traditional app development have conditioned us to put everything in a big box. In this session we will look at selecting the right database for the right job.
Level: 200
Speakers:
Joyjeet Banerjee - Enterprise Solutions Architect, AWS
Vishwajit Tigadi - Manager, Strategic Accounts, AWS
Building a Modern Data Warehouse: Deep Dive on Amazon Redshift - SRV337 - Chi... - Amazon Web Services
In this chalk talk, we take a deep dive on Amazon Redshift architecture and the latest performance enhancements that give you faster insights into your data. We also cover Amazon Redshift Spectrum, a feature of Amazon Redshift that enables you to analyze data across Amazon Redshift and your Amazon S3 data lake to deliver unique insights not possible by analyzing independent data silos.
The Open Data Lake Platform Brief - Data Sheets | Whitepaper - Vasu S
An open data lake platform provides a robust and future-proof data management paradigm to support a wide range of data processing needs, including data exploration, ad-hoc analytics, streaming analytics, and machine learning.
Cloud Computing and the Microsoft Developer - A Down-to-Earth Analysis - Andrew Brust
Slides from my Keynote at Visual Studio Live Las Vegas 2011 (Day 2).
Closely compares Azure to AWS, and discusses Force.com, Google, Rackspace, VMWare and Red Hat.
Discussion includes capabilities, pricing, strategy.
Architecting Big Data Ingest & Manipulation - George Long
Here's the presentation I gave at the KW Big Data Peer2Peer meetup held at Communitech on 3rd November 2015.
The deck served as a backdrop to the interactive session
http://www.meetup.com/KW-Big-Data-Peer2Peer/events/226065176/
The scope was to drive an architectural conversation about :
o What it actually takes to get the data you need to add that one metric to your report/dashboard?
o What's it like to navigate the early conversations of an analytic solution?
o How is one technology selected over another and how do those selections impact or define other selections?
Redshift is a petabyte-scale data warehouse that is a lot faster, a lot less expensive, and a whole lot simpler to use. How can you get your data into Amazon Redshift? In this webinar, hear from representatives of Attunity (Amazon Redshift Partner) and AWS as they present many of the options available for data integration. Whether your data is in an on-premises platform or a cloud-based database like DynamoDB, we will show you how you can easily load your data into Redshift.
Reasons to attend:
- Learn about best practices to efficiently integrate data into Redshift.
- Attend a Q&A session with Redshift experts.
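One of the standard bulk-load paths mentioned in webinars like this is Redshift's COPY command, which loads files from S3 in parallel rather than row by row. As a rough sketch (the table, bucket, and IAM role names here are hypothetical), the statement can be assembled like this:

```python
def build_copy_statement(table, s3_path, iam_role, fmt="CSV"):
    """Build a Redshift COPY statement for bulk-loading from S3.

    COPY ingests files in parallel across slices, which is why it is
    preferred over row-by-row INSERTs for large loads.
    """
    return (
        f"COPY {table} "
        f"FROM '{s3_path}' "
        f"IAM_ROLE '{iam_role}' "
        f"FORMAT AS {fmt};"
    )

# Hypothetical bucket and role, for illustration only.
stmt = build_copy_statement(
    "sales",
    "s3://my-bucket/sales/",
    "arn:aws:iam::123456789012:role/RedshiftLoad",
)
print(stmt)
```

Splitting the input into multiple files under the same S3 prefix lets each slice load a file concurrently, which is where most of the speedup comes from.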
DAT304: Amazon Aurora Performance Optimization with MySQL - Kamal Gupta
Amazon Aurora is a MySQL- and PostgreSQL-compatible relational database engine with the speed, reliability, and availability of high-end commercial databases at one-tenth the cost. This session introduces you to Amazon Aurora, explores the capabilities and features of Aurora, explains common use cases, and helps you get started with Aurora.
How should you architect a solution in the Cloud? How does it differ from traditional hosting?
We will illustrate the main principles of Cloud development using the example of a typical web application, building its architecture step by step to make it scalable and let it benefit from the advantages of the Cloud.
We will then look at the different possible implementations and technology choices for this architecture on the Microsoft Azure Cloud, covering infrastructure services (VMs, containers, …) as well as higher-level platform services, serverless, managed databases, and more.
Next we will zoom in on data acquisition and processing in a Big Data context, examine the characteristics of a lambda architecture and its possible implementations on Azure (Hadoop, …), and finish with the different ways of adding intelligence to a solution: from the simplest for the developer, via pre-packaged APIs, to the most elaborate and customizable for the Data Scientist. We will also see how to make it more easily accessible to users via a Skype, Facebook, Slack, email, or SMS bot...
Meetup materials: https://www.meetup.com/fr-FR/Duchess-France-Meetup/events/238437772/
AWS re:Invent 2016: ElastiCache Deep Dive: Best Practices and Usage Patterns ... - Amazon Web Services
In this session, we provide a peek behind the scenes to learn about Amazon ElastiCache's design and architecture. See common design patterns with our Redis and Memcached offerings and how customers have used them for in-memory operations to reduce latency and improve application throughput. During this session, we review ElastiCache best practices, design patterns, and anti-patterns.
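The most common in-memory pattern these ElastiCache sessions cover is cache-aside (lazy loading): check the cache first, fall back to the database on a miss, then populate the cache with a TTL. A minimal sketch, with a plain dict standing in for Redis or Memcached:

```python
import time

class CacheAside:
    """Cache-aside (lazy loading) against an in-memory store.

    A dict stands in for Redis/Memcached here; the loader function
    stands in for the database query.
    """
    def __init__(self, loader, ttl_seconds=300):
        self._cache = {}        # key -> (value, expires_at)
        self._loader = loader   # called on a cache miss
        self._ttl = ttl_seconds
        self.hits = 0
        self.misses = 0

    def get(self, key):
        entry = self._cache.get(key)
        if entry and entry[1] > time.monotonic():
            self.hits += 1
            return entry[0]
        # Miss (or expired): load from the backing store and cache it.
        self.misses += 1
        value = self._loader(key)
        self._cache[key] = (value, time.monotonic() + self._ttl)
        return value

db = {"user:1": "Alice"}                # stand-in "database"
cache = CacheAside(lambda k: db[k])
first = cache.get("user:1")             # miss: loads from db
second = cache.get("user:1")            # hit: served from cache
print(cache.hits, cache.misses)         # 1 1
```

The TTL is what keeps a cache-aside deployment from serving stale data forever; the anti-pattern the session warns about is caching without any expiry or invalidation strategy.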
Consuming and producing complex structures in Hadoop MapReduce™ with on-line natural language processing (NLP) enhanced 1.5 Billion Word Wikipedia Text Corpus Example
In this presentation, you will get a look under the covers of Amazon Redshift, a fast, fully managed, petabyte-scale data warehouse service for less than $1,000 per TB per year. Learn how Amazon Redshift uses columnar technology, optimized hardware, and massively parallel processing to deliver fast query performance on data sets ranging in size from hundreds of gigabytes to a petabyte or more. We'll also walk through techniques for optimizing performance, and you’ll hear from a customer about their use case: fast performance on enormous datasets, leveraging economies of scale on the AWS platform.
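The columnar layout mentioned above is easy to illustrate: when a table is stored column-wise, an aggregate over one column only touches that column's values instead of whole rows. A toy sketch of the idea:

```python
# Row-oriented input, as an application would produce it.
rows = [
    {"order_id": 1, "region": "us-west", "amount": 120.0},
    {"order_id": 2, "region": "eu-west", "amount": 80.0},
    {"order_id": 3, "region": "us-west", "amount": 45.5},
]

# Columnar layout: one contiguous list per column.
columns = {name: [r[name] for r in rows] for name in rows[0]}

# SELECT SUM(amount): scans a single column, not every row.
# (Contiguous same-typed values also compress far better, which is
# the other half of the columnar win.)
total = sum(columns["amount"])
print(total)  # 245.5
```

Real columnar engines add per-block metadata (min/max values) on top of this so entire blocks can be skipped, but the scan-less-data principle is the same.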
Bursting on-premise analytic workloads to Amazon EMR using Alluxio - Alluxio, Inc.
Data Orchestration Summit 2020 organized by Alluxio
https://www.alluxio.io/data-orchestration-summit-2020/
Bursting on-premise analytic workloads to Amazon EMR using Alluxio
Roy Hasson, AWS
About Alluxio: alluxio.io
Engage with the open source community on slack: alluxio.io/slack
Big Data Day LA 2015 - NoSQL: Doing it wrong before getting it right by Lawre... - Data Con LA
The team at Fandango heartily embraced NoSQL, using Couchbase to power a key media publishing system. The initial implementation was fraught with integration issues and high latency, and required a major effort to successfully refactor. My talk will outline the key organizational and architectural decisions that created deep systemic problems, and the steps taken to re-architect the system to achieve a high level of performance at scale.
A data lake can be used as a source for both structured and unstructured data - but how? We'll look at using open standards including Spark and Presto with Amazon EMR, Amazon Redshift Spectrum and Amazon Athena to process and understand data.
Speakers:
Neel Mitra - Solutions Architect, AWS
Roger Dahlstrom - Solutions Architect, AWS
(ISM304) Oracle to Amazon RDS MySQL & Aurora: How Gallup Made the Move - Amazon Web Services
Amazon RDS MySQL offers a highly scalable, available, and high-performing database service at a fraction of the cost of a commercially licensed database provider. To take advantage of Amazon RDS MySQL benefits such as Multi-AZ replication and ease of administration, Gallup transitioned its Reporting and Analytics platforms to AWS.
Swapan Golla, Technical Architect at Gallup, will talk about the benefits the company has seen moving from an on-premises Oracle deployment to RDS MySQL. Learn about the solution architecture and how they tuned their schemas and application code to take full advantage of the scalability and performance of RDS. He will also talk about the next steps in the team's roadmap, which involve Amazon Aurora.
Data Analytics Week at the San Francisco Loft
Using Data Lakes
A data lake can be used as a source for both structured and unstructured data - but how? We'll look at using open standards including Spark and Presto with Amazon EMR, Amazon Redshift Spectrum and Amazon Athena to process and understand data.
Speakers:
John Mallory - Principal Business Development Manager Storage (Object), AWS
Hemant Borole - Sr. Big Data Consultant, AWS
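A key mechanic behind the data-lake engines these sessions cover (Athena, Redshift Spectrum, Presto on EMR) is partition pruning: when data is laid out in S3 under key prefixes like dt=YYYY-MM-DD, a query filtered on that column only lists and reads the matching prefixes. A minimal sketch of the idea (the bucket layout is hypothetical):

```python
# S3-style object keys, partitioned by date under a common prefix.
objects = [
    "sales/dt=2018-08-25/part-0.parquet",
    "sales/dt=2018-08-26/part-0.parquet",
    "sales/dt=2018-08-26/part-1.parquet",
]

def prune(keys, partition):
    """Keep only the objects under the requested partition prefix,
    mimicking what a query engine does for WHERE dt = '...'."""
    prefix = f"sales/dt={partition}/"
    return [k for k in keys if k.startswith(prefix)]

scanned = prune(objects, "2018-08-26")
print(len(scanned))  # 2 of 3 objects scanned; the rest are skipped
```

Since Athena and Spectrum bill by data scanned, a sensible partition scheme directly reduces both latency and cost.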
AWS re:Invent 2016: How to Build a Big Data Analytics Data Lake (LFS303) - Amazon Web Services
For discovery-phase research, life sciences companies have to support infrastructure that processes millions to billions of transactions. The advent of a data lake to accomplish such a task is showing itself to be a stable and productive data platform pattern to meet the goal. We discuss how to build a data lake on AWS, using services and techniques such as AWS CloudFormation, Amazon EC2, Amazon S3, IAM, and AWS Lambda. We also review a reference architecture from Amgen that uses a data lake to aid in their Life Science Research.
A data lake can be used as a source for both structured and unstructured data - but how? We'll look at using open standards including Spark and Presto with Amazon EMR, Amazon Redshift Spectrum and Amazon Athena to process and understand data.
Level: Intermediate
Speakers:
Tony Nguyen - Senior Consultant, ProServe, AWS
Hannah Marlowe - Consultant - Federal, AWS
Want to see a high-level overview of the products in the Microsoft data platform portfolio in Azure? I’ll cover products in the categories of OLTP, OLAP, data warehouse, storage, data transport, data prep, data lake, IaaS, PaaS, SMP/MPP, NoSQL, Hadoop, open source, reporting, machine learning, and AI. It’s a lot to digest but I’ll categorize the products and discuss their use cases to help you narrow down the best products for the solution you want to build.
In this session, we show you how to understand what data you have, how to drive insights, and how to make predictions using purpose-built AWS services. Learn about the common pitfalls of building data lakes and discover how to successfully drive analytics and insights from your data. Also learn how services such as Amazon S3, AWS Glue, Amazon Redshift, Amazon Athena, Amazon EMR, Amazon Kinesis, and Amazon ML services work together to build a successful data lake for various roles, including data scientists and business users.
by Mamoon Chowdry, Solutions Architect
AWS Data & Analytics Week is an opportunity to learn about Amazon’s family of managed analytics services. These services provide easy, scalable, reliable, and cost-effective ways to manage your data in the cloud. We explain the fundamentals and take a technical deep dive into the Amazon Redshift data warehouse; Data Lake services including Amazon EMR, Amazon Athena, and Amazon Redshift Spectrum; Log Analytics with Amazon Elasticsearch Service; and data preparation and placement services with AWS Glue and Amazon Kinesis. You'll learn how to get started, how to support applications, and how to scale.
by Avijit Goswami, Sr. Solutions Architect, AWS
A data lake can be used as a source for both structured and unstructured data - but how? We'll look at using open standards including Spark and Presto with Amazon EMR, Amazon Redshift Spectrum and Amazon Athena to process and understand data.
From raw data to business insights. A modern data lake - javier ramirez
In this talk I spoke about the pitfalls when you try to build a data lake, and how you can solve the problem either with unmanaged open source, or with the managed and/or native solutions at AWS. Delivered at the Madrid Data Engineering meetup in May 2019
by Sid Chauhan, Solutions architect, AWS
A data lake can be used as a source for both structured and unstructured data - but how? We'll look at using open standards including Spark and Presto with Amazon EMR, Amazon Redshift Spectrum and Amazon Athena to process and understand data.
For our next ArcReady, we will explore a topic on everyone’s mind: cloud computing. Several companies in the industry have announced cloud computing services. In October 2008 at the Professional Developers Conference, Microsoft announced the next phase of our Software + Services vision: the Azure Services Platform. The Azure Services Platform provides a wide range of internet services that can be consumed from both on-premises environments and the internet.
Session 1: Cloud Services
In our first session we will explore the current state of cloud services. We will then look at how applications should be architected for the cloud and explore a reference application deployed on Windows Azure. We will also look at the services that can be built for on-premises applications using .NET Services, and address some of the concerns that enterprises have about cloud services, such as regulatory and compliance issues.
Session 2: The Azure Platform
In our second session we will take a slightly different look at cloud based services by exploring Live Mesh and Live Services. Live Mesh is a data synchronization client that has a rich API to build applications on. Live services are a collection of APIs that can be used to create rich applications for your customers. Live Services are based on internet standard protocols and data formats.
Securing your Kubernetes cluster: a step-by-step guide to success! - KatiaHIMEUR1
Today, after several years of existence, with an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
The Art of the Pitch: WordPress Relationships and Sales - Laura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers, without pulling teeth or pulling your hair out. Practical tips and strategies for successful relationship building that leads to closing the deal.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024 - Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview, including the concepts of Customer Key and Double Key Encryption.
Transcript: Selling digital books in 2024: Insights from industry leaders - T... - BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Essentials of Automations: Optimizing FME Workflows with Parameters - Safe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
JMeter webinar - integration with InfluxDB and Grafana - RTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
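The JMeter-to-InfluxDB integration described above ultimately writes each sample in InfluxDB's line protocol (measurement, comma-separated tags, space, fields, space, timestamp). A sketch of that serialization, with hypothetical tag and field names, shows the wire format:

```python
def to_line_protocol(measurement, tags, fields, timestamp_ns):
    """Serialize one sample into InfluxDB line protocol:
    measurement,tag=val field=val timestamp
    Tags and fields are sorted for a deterministic output."""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_str} {field_str} {timestamp_ns}"

# A hypothetical JMeter transaction sample.
line = to_line_protocol(
    "jmeter",
    {"transaction": "login"},
    {"avg": 132.5, "count": 10},
    1700000000000000000,
)
print(line)
```

Grafana then queries these points by measurement and tag, which is why consistent tag naming across test plans matters for reusable dashboards.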
UiPath Test Automation using UiPath Test Suite series, part 4 - DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...Ramesh Iyer
In today's fast-changing business world, Companies that adapt and embrace new ideas often need help to keep up with the competition. However, fostering a culture of innovation takes much work. It takes vision, leadership and willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
Connector Corner: Automate dynamic content and events by pushing a buttonDianaGray10
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
Generating a custom Ruby SDK for your web service or Rails API using Smithyg2nightmarescribd
Have you ever wanted a Ruby client API to communicate with your web service? Smithy is a protocol-agnostic language for defining services and SDKs. Smithy Ruby is an implementation of Smithy that generates a Ruby SDK using a Smithy model. In this talk, we will explore Smithy and Smithy Ruby to learn how to generate custom feature-rich SDKs that can communicate with any web service, such as a Rails JSON API.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf
Application design for the cloud using AWS
1. Application Design for the Cloud/AWS
Jonathan Holloway (@jph98)
Application Architect @ Brightpearl
2. A little bit of background
Brightpearl grew out of a company called Lush Longboards in Bristol
Chris Tanner and Andrew Mulvenna identified a need for a single online business management app
First customer on Brightpearl in 2007
Over 1,300 customers currently on Brightpearl and growing
The numbers:
Founded 2007
Today, we have:
- 1,300+ customers
- 53 countries
- 87 employees (in San Francisco and Bristol)
- $1.3 billion GMV processed
3. What Does Brightpearl Do?
- Multi-channel integration (Amazon, eBay)
- Storefront integration
- Inventory, orders, and customers
- Logistics and shipping
- Web and POS interface
- App store for third party integrations
4. Business @ Scale on the Amazon Platform
Over 1,300 customers run on our cloud based platform
5. Company Culture
Rapidly growing
Team, Product, Technology, Career
Team and People
Passionate, Diverse, Agile
The way we work
Open Plan, Open Minded, Autonomous
Great social life
We call our colleagues friends and do fun stuff
together outside the office.
Career development
We invest in our people and support their
career aspirations.
Open atmosphere
Our managers don’t have offices and openly
tell us about the business’ progress.
Multi-cultural workforce
We have employees who come from over 20
different countries.
6. Tech Culture
General tech lunch and learns (TLAL) for various technology talks
(Java, PHP, Javascript)
Ruby project workshop for building apps once every two weeks,
encourage use outside development
Friday - half day for personal tech projects, e.g. dashboards,
product improvements
Java Meetup Group for external talks with developers from other
companies in the Bristol and Bath area. We host PHPSW along with
Basekit in Bristol
7. My Background
Application Architect - Solutions/Technical, Developer
Started with SaaS back in 2010 with email archiving and large scale
storage/search solution
- Cassandra, Hadoop, Pig, Lucene and Jersey
Worked in statistical computing with distributed grids (Oracle Grid Engine, LSF), deployed on large compute clusters internally in pharma; looked at a cloud-based solution using StarCluster and Python
Java background, plus Ruby, Python and Javascript
9. SaaS - Big Product Examples
Lots of big SaaS based products on the web today including:
On-demand movie rental
Customer Relationship Management
Microsoft Office Toolsuite Online
Conference and collaboration
10. SaaS and PaaS and IaaS
It’s all very confusing… think of it as a pyramid
IaaS - Infrastructure as a service (tools/services for devops)
- e.g. Amazon EC2, Windows Azure
PaaS - Platform as a service (tools/services for app devs)
- e.g. AWS Beanstalk, Heroku, Google App Engine, Red Hat OpenShift
SaaS - Software as a service (Apps for end users - yay)
- e.g. Netflix, Brightpearl, Salesforce, Office365
11. How do I know when a product should be SaaS based?
- Have to be careful with data requirements - data at rest and data in transfer matter on the public cloud. VPC and VPN can help.
- I/O and throughput might prohibit movement of files
- Don't just move your application into the cloud as-is… break it down and re-assemble it with cloud based application services
- Factor in availability, performance, failover and reliability. Don't underestimate reliability
12. SaaS - Archiving Solution
[Architecture diagram: mail archives feed content extraction (Tika) and customer account metadata (Postgres/Slony); search services run on Solr; content storage on Solaris/ZFS; search interface built with Django]
- Worked on this for a private cloud solution
- Why doesn’t it fit the public cloud - Amazon/Rackspace?
- Couldn’t move it “as is”
15. Amazon Web Services - Cloud Computing Services
Using AWS for ~4 years as an IaaS platform
Brightpearl is designed for use on AWS
Make use of both US and European datacentres, with multiple availability zones - approx 90 EC2 instances
All Amazon Linux based images; we use CentOS 5.x in dev/test
We approximate the production environment for development and test
16. Architectural Overview
Will break it down into the following views:
- Infrastructure and Operating System
- Application Services (Queuing, Data Storage, Load Balancing)
- Software Stack (Brightpearl Application - JS, PHP and Java)
17. Infrastructure
We build on various Amazon base images (based on Amazon Linux) with different specifications:
- m1.medium (webserver)
- t1.micro (mail relay)
- m1.small (messaging)
EC2Instances (http://www.ec2instances.info/) is a useful reference
Software is provisioned on top of the base O/S with Chef (we maintain this configuration and keep it up to date).
18. Application Services
Elastic Compute Cloud (based on Xen)
- Various instance sizes (small, medium, large, xlarge)
- Basis is an AMI - CentOS, RHEL, Windows
Amazon RDS (Relational Database Service) - managed MySQL
- Data storage
CloudFront - content delivery network, think Akamai
- Global delivery of static resources (images, content)
19. Application Services
Key/value store - useful for storing large data that won't fit in a relational database
Amazon S3 - storage service for files
ELB (Elastic Load Balancer) - instance load balancing
20. Languages - Javascript
Javascript for DOM manipulation, data binding and validation
- Functional, oh so functional
- jQuery for DOM manipulation and UI elements
- Backbone for structure (+ CommonJS)
- Mocha for testing
- JS on the server side - Node
- Dependency management (Bower, NPM)
21. Languages - PHP
PHP for web development in the presentation tier
- Dynamically typed, interpreted
- Single threaded
- Well supported, lots of third party software
- Lightweight, fast and proven
22. Languages - Java
Java for scaling services out
- Statically and strongly typed
- Good for concurrency and parallelisation
- Good library, framework & IDE support (Intellij)
- Build RESTful APIs for PHP to communicate with
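As a minimal sketch of the PHP-to-Java pattern the slide describes: the deck mentions Jersey for real services, but the idea can be shown self-contained with the JDK's built-in `com.sun.net.httpserver` package. The `/orders/123` route and the response fields here are hypothetical, not Brightpearl's actual API.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// Hypothetical sketch: a tiny JSON-over-HTTP endpoint of the kind the PHP
// presentation tier could call. Real services would use Jersey instead.
public class OrdersService {
    public static HttpServer start(int port) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/orders/123", exchange -> {
            byte[] body = "{\"orderId\":123,\"status\":\"SHIPPED\"}"
                    .getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });
        server.start();
        return server;
    }
}
```

The PHP tier then only needs an HTTP client and a JSON parser, keeping the two tiers decoupled.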
23. Languages - Ruby
Ruby - for provisioning infrastructure, configuration and test
- Our “devops” and “test engineer” language
- Dynamically typed, multi paradigm
- Readable, testable, way cool
- Great third party library support
- Chef by Opscode used for configuring EC2 instances
- Cucumber and Webdriver for application testing
25. AWS Design Considerations
Sometimes we need to be agnostic for performance and cost reasons, and to avoid vendor (Amazon) lock-in
Have to build in failover for each individual service
Don't use SQS (Simple Queueing Service) - instead we roll our own
Roll our own datagrid for cross-EC2-instance data - an in-memory datagrid. Think distributed Java collections.
Make use of a distributed file system for transient file storage
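The deck doesn't detail the home-grown queue that replaces SQS, but the core idea of an in-process work queue can be sketched with `java.util.concurrent`. The `WorkQueue` class and its message type are illustrative assumptions.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Hypothetical sketch: a bounded in-process work queue standing in for SQS.
// Producers block when the queue is full, which applies natural back-pressure.
public class WorkQueue {
    private final BlockingQueue<String> queue;

    public WorkQueue(int capacity) {
        this.queue = new LinkedBlockingQueue<>(capacity);
    }

    public void publish(String message) throws InterruptedException {
        queue.put(message); // blocks if the queue is at capacity
    }

    public String consume() throws InterruptedException {
        return queue.take(); // blocks until a message is available
    }

    public int depth() {
        return queue.size();
    }
}
```

A real replacement would add persistence and cross-instance delivery, which is where the in-memory datagrid mentioned above comes in.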
26. Problems at Scale
Design for redundancy:
- Ephemeral storage by default. Use EBS (Elastic Block Storage)
- Multiple copies of services deployed on instances
- Multiple instances for failover
Design for scale:
- Partitioning of accounts using separate RDS instances
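Partitioning accounts across separate RDS instances implies some routing rule from account to database. A minimal sketch, assuming a simple hash-based scheme (the slide doesn't say how Brightpearl actually assigns accounts; names here are illustrative):

```java
// Hypothetical sketch: routing a customer account to one of N database
// hosts (separate RDS instances) by hashing the account identifier.
public class ShardRouter {
    private final String[] shardHosts;

    public ShardRouter(String[] shardHosts) {
        this.shardHosts = shardHosts;
    }

    public String hostFor(String accountId) {
        // floorMod keeps the index non-negative even for negative hash codes
        int index = Math.floorMod(accountId.hashCode(), shardHosts.length);
        return shardHosts[index];
    }
}
```

In practice a lookup table per account is more flexible than pure hashing, since it lets you move a large account to its own instance.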
27. Problems at Scale
Design for concurrency:
- Immutability is key, Actor Model, STM are useful
Design for performance:
- Profile everything (YourKit is great for this)
- Java is very memory hungry… tune the JVM and GC strategy
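The "immutability is key" point above can be shown with a small value object: an immutable instance can be shared across threads with no locking, because nothing can change underneath a reader. The class and fields are illustrative.

```java
// Hypothetical sketch: an immutable value object that is safe to share
// across threads without synchronization.
public final class OrderSnapshot {
    private final String orderId;
    private final int quantity;

    public OrderSnapshot(String orderId, int quantity) {
        this.orderId = orderId;
        this.quantity = quantity;
    }

    // "Mutation" returns a new instance; the original is never changed,
    // so concurrent readers always see a consistent state.
    public OrderSnapshot withQuantity(int newQuantity) {
        return new OrderSnapshot(orderId, newQuantity);
    }

    public String orderId() { return orderId; }
    public int quantity() { return quantity; }
}
```

This copy-on-write style is the same idea underlying the Actor Model and STM mentioned above: state changes produce new values rather than mutating shared ones.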
28. Problems at Scale
Design for cost:
- Reserved instances (up-front cost) can save a fair bit
Design for content delivery:
- Server side caching (Varnish, mod_cache)
- Client side caching (expires headers, etags)
- Content Delivery Network - (Cloudfront, Akamai)
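Client-side caching with ETags works by giving each response a content-derived tag the client can echo back in `If-None-Match`; unchanged content gets a 304 instead of a full body. A minimal sketch of tag generation using only the JDK (the helper name is an assumption):

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Hypothetical sketch: deriving a strong ETag from response bytes so a
// client can revalidate with If-None-Match and receive 304 Not Modified.
public class ETags {
    public static String etagFor(byte[] body) {
        try {
            MessageDigest md5 = MessageDigest.getInstance("MD5");
            byte[] digest = md5.digest(body);
            StringBuilder hex = new StringBuilder("\"");
            for (byte b : digest) {
                hex.append(String.format("%02x", b));
            }
            return hex.append('"').toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // MD5 ships with every JDK
        }
    }
}
```

Varnish and mod_cache on the server side honour the same validators, so one tag serves both caching layers.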
29. Problems at Scale
Design for failure:
- Internal services (redundant copies)
- External services
- Protect against them overloading your internal services
- Don't flood them with your traffic
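One standard way to avoid flooding an external service with your traffic is retrying failed calls with exponential backoff. The slide doesn't name Brightpearl's mechanism; this is a generic sketch with illustrative names and delays.

```java
import java.util.concurrent.Callable;

// Hypothetical sketch: retrying a call to an external service with
// exponential backoff, so transient failures don't become a retry flood.
public class Backoff {
    public static <T> T callWithBackoff(Callable<T> call, int maxAttempts,
                                        long baseDelayMillis) throws Exception {
        Exception last = null;
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            try {
                return call.call();
            } catch (Exception e) {
                last = e;
                // delay doubles on each failure: base, 2x base, 4x base, ...
                Thread.sleep(baseDelayMillis << attempt);
            }
        }
        throw last;
    }
}
```

Production versions usually add jitter and a circuit breaker, so a struggling dependency is given time to recover rather than being hammered in lock-step by every instance.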
30. Future Tech
Some future technology we’re interested in...
- Web Components (Polymer)
- Functional JVM languages (JDK 8 Streams, Clojure)
- Docker (LXC containers) for virtualisation
- Quasar for lightweight threading
31. Oh by the way… we’re expanding...
Current Open Positions
Senior Developers
Graduate Developers
Test Engineers
Key Events @ Brightpearl
5th August - Java Meetup Group
20th August - PHPSW Meetup
Questions?