This document provides an overview of Clark Nuber's cloud plan, indicating which applications and items are currently in the cloud, planned to move to the cloud, or have no plan to move. It shows that many productivity, collaboration, research, and administrative applications and systems like Office 365, SharePoint, and CRM tools have already moved or will move to the cloud by 2016. Servers supporting functions like file storage, email, and databases are also scheduled to transition by 2016. The telephone system is planned for a 2018 cloud move, while core accounting and practice management systems currently have no cloud plans.
Big data is primarily associated with AI and new technology. It is, however, just as much a revolution in patterns of cooperation. Big data entails the democratisation of data within an organisation, enabling agile, data-driven innovation in a way that was previously unavailable. Knowing this, how can you work as an organisation to harvest the fruits, and what can go wrong?
Building Identity Graph at Scale for Programmatic Media Buying Using Apache S... – Databricks
The proliferation of digital channels has made it mandatory for marketers to understand an individual across multiple touchpoints. To improve marketing effectiveness, marketers need to have a good sense of a consumer's identity so that they can reach that consumer via a mobile device, a desktop, or a big TV screen in the living room. Examples of such identity tokens include cookies, app IDs, etc. A consumer can use multiple devices at the same time, so the same consumer should not be treated as different people in the advertising space. The idea of identity resolution addresses this, with the mission and goal of building an omnichannel view of a consumer.
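The core of identity resolution can be illustrated with a toy union-find sketch (all tokens and IDs below are invented; production identity graphs use far richer signals and probabilistic matching):

```python
# Hypothetical sketch: device IDs that share a login token are merged
# into one consumer identity via union-find. IDs are made up.

class UnionFind:
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[rb] = ra

# Observed (identity token, device ID) pairs, e.g. from login events.
observations = [
    ("user-123", "cookie-abc"),   # desktop browser
    ("user-123", "appid-xyz"),    # mobile app
    ("user-456", "cookie-def"),
]

uf = UnionFind()
for token, device in observations:
    uf.union(token, device)

# cookie-abc and appid-xyz now resolve to the same consumer.
same_person = uf.find("cookie-abc") == uf.find("appid-xyz")
```

Transitive merging is the point: two device IDs never seen together still resolve to one consumer if they share any token.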
Solution architecture for big data projects
Keywords: solution architecture, big data, Hadoop, Hive, HBase, Impala, Spark, Apache, Cassandra, SAP HANA, Cognos BigInsights
Building Pinterest Real-Time Ads Platform Using Kafka Streams – Confluent
Building Pinterest Real-Time Ads Platform Using Kafka Streams (Liquan Pei + Boyang Chen, Pinterest) Kafka Summit SF 2018
In this talk, we share our experience building Pinterest’s real-time Ads Platform using Kafka Streams. The real-time budgeting system is the most mission-critical component of the Ads Platform, as it controls how each ad is delivered to maximize user, advertiser, and Pinterest value. The system needs to handle impressions at over 50,000 queries per second (QPS), requires less than five seconds of end-to-end latency, and must recover within five minutes during outages. It also needs to scale to handle the fast growth of Pinterest’s ads business.
The real-time budgeting system is composed of a real-time stream-stream joiner, a real-time spend aggregator, and a spend predictor. At Pinterest’s scale, we had to overcome quite a few challenges to make each component work. For example, the stream-stream joiner needs to maintain terabytes of state while supporting fast recovery, and the real-time spend aggregator needs to publish to thousands of ads servers while supporting over one million read QPS. We chose Kafka Streams because it provides millisecond-level latency guarantees, scalable event-based processing, and easy-to-use APIs. In the process of building the system, we did extensive tuning of RocksDB and the Kafka producer and consumer, and contributed several open source patches to Apache Kafka. We are also working on adding a remote checkpoint for Kafka Streams state to reduce cold-start time when adding more machines to the application. We believe our experience can benefit anyone who wants to build real-time streaming solutions at large scale and deeply understand Kafka Streams.
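The stream-stream join at the heart of such a budgeting pipeline can be sketched in miniature (this is an illustrative toy, not Pinterest's code; Kafka Streams keeps the join state in RocksDB-backed state stores rather than an in-memory dict):

```python
# Toy windowed stream-stream join: match click events to ad impressions
# on event ID within a time window. A plain dict stands in for the
# RocksDB-backed state store; timestamps and IDs are invented.

WINDOW_MS = 5_000

def join_streams(impressions, clicks, window_ms=WINDOW_MS):
    """Yield (event_id, impression_ts, click_ts) for clicks that arrive
    within window_ms after the matching impression."""
    state = {}  # event_id -> impression timestamp (the "state store")
    for event_id, ts in impressions:
        state[event_id] = ts
    joined = []
    for event_id, ts in clicks:
        imp_ts = state.get(event_id)
        if imp_ts is not None and 0 <= ts - imp_ts <= window_ms:
            joined.append((event_id, imp_ts, ts))
    return joined

impressions = [("ad-1", 1_000), ("ad-2", 2_000)]
clicks = [("ad-1", 3_500), ("ad-2", 9_000)]  # ad-2 click is outside the window
result = join_streams(impressions, clicks)
```

In a real streaming system both inputs arrive interleaved and the state store must also expire old entries; this sketch only shows the matching logic.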
In the next five years, 15 to 40 billion additional connected devices are expected to hit the market. How can we handle such volumes and velocity of data?
Introduction to Dynamo storage systems, Riak, Cassandra, time series databases and edge analytics.
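The partitioning idea behind Dynamo-style stores such as Riak and Cassandra can be sketched as a consistent-hashing ring (node names and the virtual-node count are illustrative):

```python
# A minimal consistent-hashing ring, the core partitioning idea behind
# Dynamo-style stores. Node names are invented for illustration.

import bisect
import hashlib

def _hash(key):
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class Ring:
    def __init__(self, nodes, vnodes=64):
        # Each physical node gets many virtual positions on the ring,
        # which smooths out the key distribution.
        self._ring = sorted(
            (_hash(f"{n}#{i}"), n) for n in nodes for i in range(vnodes)
        )
        self._keys = [h for h, _ in self._ring]

    def node_for(self, key):
        # Walk clockwise to the first virtual node at or after the key's hash.
        idx = bisect.bisect(self._keys, _hash(key)) % len(self._ring)
        return self._ring[idx][1]

ring = Ring(["node-a", "node-b", "node-c"])
owner = ring.node_for("sensor-42")  # deterministic placement
```

Adding or removing a node only remaps the keys adjacent to its virtual positions, which is what makes this scheme attractive for elastically scaling device-data workloads.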
50 Shades of Data - Dutch Oracle Architects Platform (February 2018) – Lucas Jellema
Gone are the days of a single enterprise database – typically an Oracle RDBMS – that holds all data in a strictly normalized form. We work with many more types of data (big and fast, structured and unstructured) that we use in various ways. Relational storage and ACID guarantees are not applicable to all of them, and not every use case requires the latest, freshest data. We will continue to see an increase in specialized data stores that cater to specific needs and specific scenarios.
This session combines a presentation and a demonstration on the various dimensions and use cases of using data and data stores in various ways – while ensuring appropriate levels of freshness, integrity, and performance. Key takeaway: what you as an architect should know about the various types of data in enterprise IT and how to store, manage, query, and manipulate them; what products and technologies are at your disposal; and how you can make them work together for a consistent (enough) overall data presentation. How are upcoming architectural patterns such as CQRS (command query responsibility segregation), event sourcing, and microservices influencing the way we handle data in the enterprise? Some of the technologies discussed: products such as MongoDB, MySQL, Neo4J, Apache Kafka, Redis, Elasticsearch, Hadoop/Spark, and Oracle Data Hub Cloud (based on Apache Cassandra) – used locally, in containers, and in the cloud. Additionally, we will discuss data replication scenarios.
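The event-sourcing pattern mentioned above can be illustrated with a minimal fold-over-events sketch (event names and payloads are invented for illustration):

```python
# Tiny event-sourcing sketch: current state is never stored directly;
# it is rebuilt by folding (replaying) the append-only event log.

def apply(state, event):
    kind, payload = event
    if kind == "AccountOpened":
        return {"owner": payload, "balance": 0}
    if kind == "Deposited":
        return {**state, "balance": state["balance"] + payload}
    if kind == "Withdrawn":
        return {**state, "balance": state["balance"] - payload}
    return state  # unknown events are ignored

def replay(events):
    state = {}
    for event in events:
        state = apply(state, event)
    return state

log = [("AccountOpened", "alice"), ("Deposited", 100), ("Withdrawn", 30)]
state = replay(log)
```

Because the log is the source of truth, the same events can feed multiple read models, which is how event sourcing pairs naturally with CQRS.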
The QAware Big Data Landscape provides a detailed overview of the most relevant Big Data technologies, most of them from the open-source ecosystem.
(Please download the file for better readability / the complete view.)
Budapest Data Forum 2017 - BigQuery, Looker And Big Data Analytics At Petabyt... – Rittman Analytics
As big data and data warehousing scale up and move into the cloud, they’re increasingly likely to be delivered as services using distributed cloud query engines such as Google BigQuery, loaded using streaming data pipelines, and queried using BI tools such as Looker. In this session the presenter walks through how data modelling and query processing work when storing petabytes of customer event-level activity in a distributed data store and query engine like BigQuery; how data ingestion and processing work in an always-on streaming data pipeline; how additional services such as the Google Natural Language API can be used to classify sentiment and extract entity nouns from incoming unstructured data; and how BI tools such as Looker and Google Data Studio bring data discovery and business metadata layers to cloud big data analytics.
The QAware Big Data Landscape provides a detailed overview of the most relevant Big Data technologies, most of them from the open-source ecosystem.
(Please download the file for better readability / the rotated view.)
Mar 2018 talk to SW Data Meetup by Mark O'Mahony, Software Engineer, Kx Systems.
Learn how to use a relational, columnar time-series database, together with a tightly integrated query language, capable of performing aggregations and consolidations on billions of streaming and historical records in real time.
Kx Systems are the founders of the world’s fastest time series database – Kdb+ – as well as the ‘q’ query language it is written in. kdb+ is a high-performance, high-volume database designed as a solution to Big Data problems.
Kdb+ is widely adopted by financial institutions around the world, including the top ten global investment banks. However, as a result of the explosion of big data and the Internet of Things, Kx is now expanding its applicability into new verticals such as IoT, utilities, telco, retail, space, and the pharmaceutical industry. Perhaps you would be interested in our most recent work with Aston Martin Red Bull Racing or Airbus.
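The time-bucketed aggregations q runs over tick data can be sketched in plain Python as a rough analogue (the data is made up; q expresses this far more tersely and runs it over billions of rows):

```python
# Plain-Python analogue of a time-bucketed aggregation over tick data,
# the kind of query q/kdb+ is built for. All ticks are invented.

from collections import defaultdict

ticks = [  # (timestamp in seconds, symbol, price)
    (0, "AAPL", 100.0), (30, "AAPL", 101.0),
    (70, "AAPL", 102.0), (10, "MSFT", 50.0),
]

def avg_by_minute(ticks):
    """Average price per (symbol, minute bucket)."""
    buckets = defaultdict(list)
    for ts, sym, price in ticks:
        buckets[(sym, ts // 60)].append(price)
    return {k: sum(v) / len(v) for k, v in buckets.items()}

result = avg_by_minute(ticks)
```

In q this whole function collapses to roughly `select avg price by sym, 1 xbar time from ticks` (paraphrased), operating column-at-a-time rather than row-at-a-time.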
CData Power BI Connectors - MS Business Application Summit – Jerod Johnson
A CData presentation introducing and demonstrating the CData Power BI Connectors, which offer live connectivity to more than 100 SaaS, Big Data, and NoSQL sources.
It’s All About The Cards: Sharing on Social Media Encouraged HTML Metadata G... – Shawn Jones
In a perfect world, all articles consistently contain sufficient metadata to describe the resource. We know this is not the reality, so we are motivated to investigate the evolution of the metadata that is present when authors and publishers supply their own. Because applying metadata takes time, we recognize that each news article author has a limited metadata budget with which to spend their time and effort. How are they spending this budget? What are the top metadata categories in use? How did they grow over time? What purpose do they serve? We also recognize that not all metadata fields are used equally. What is the growth of individual fields over time? Which fields experienced the fastest adoption? In this paper, we review 227,726 HTML news articles from 29 outlets captured by the Internet Archive between 1998 and 2016. Upon reviewing the metadata fields in each article, we discovered that 2010 began a metadata renaissance as publishers embraced metadata for improved search engine ranking, search engine tracking, social media tracking, and social media sharing. When analyzing individual fields, we find that one application of metadata stands out above all others: social cards -- the cards generated by platforms like Twitter when one shares a URL. Once a metadata standard was established for cards in 2010, its fields were adopted by 20% of articles in the first year and reached more than 95% adoption by 2016. This rate of adoption surpasses efforts like schema.org and Dublin Core by a fair margin. When confronted with these results on how news publishers spend their metadata budget, we must conclude that it is all about the cards.
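The card metadata the paper measures lives in `<meta>` tags; here is a small stdlib-only sketch of extracting Twitter Card and Open Graph fields from article HTML (the sample HTML is invented):

```python
# Extract social-card metadata (Twitter Card / Open Graph fields) from
# an article's HTML using only the standard library.

from html.parser import HTMLParser

class CardMetaParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.cards = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        a = dict(attrs)
        # Twitter uses name=, Open Graph uses property=.
        key = a.get("name") or a.get("property") or ""
        if key.startswith(("twitter:", "og:")):
            self.cards[key] = a.get("content")

html = """<html><head>
<meta name="twitter:card" content="summary_large_image">
<meta property="og:title" content="Example headline">
<meta name="description" content="ignored: not a card field">
</head></html>"""

parser = CardMetaParser()
parser.feed(html)
```

Counting which of these keys appear per article, per year, is essentially the measurement the paper performs at archive scale.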
Unlocking Geospatial Analytics Use Cases with CARTO and Databricks – Databricks
Many companies need to analyze large datasets that include location information. To derive business insights from these datasets, you need a solution that provides geospatial analysis functionality and can scale to manage large volumes of information. The combination of CARTO and Databricks lets you solve this kind of large-scale geospatial analytics problem. CARTO provides a location intelligence platform to discover and predict key insights through location data. In this session we will see how to integrate CARTO and Databricks and how to take advantage of this combination to solve specific problems for industries such as logistics, telecommunications, and financial services.
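A basic geospatial building block behind such use cases is great-circle distance; a minimal sketch, with invented coordinates, assigning a point to its nearest depot:

```python
# Haversine (great-circle) distance and nearest-depot assignment, a toy
# version of a common logistics geospatial primitive. Coordinates are
# illustrative only.

from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

def nearest_depot(point, depots):
    """Return the name of the depot closest to (lat, lon) point."""
    return min(depots, key=lambda d: haversine_km(*point, *depots[d]))

depots = {"madrid": (40.42, -3.70), "barcelona": (41.39, 2.17)}
```

Platforms like CARTO and Databricks apply the same kind of computation with spatial indexes so it scales to billions of points instead of a loop over a dict.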
Achieving Real-Time Analytics at Hermes | Zulf Qureshi, HVR and Dr. Stefan Ro... – HostedbyConfluent
Hermes, Germany's largest post-independent logistics service provider for deliveries, had one main goal—make faster and smarter data-driven business decisions. But with high volumes of diverse and disparate data, how can you effectively leverage it as an asset for real-time insights and business intelligence? During this session, Hermes will share their data challenges and how HVR's high volume data replication capabilities enabled Hermes to securely and seamlessly integrate data into Kafka for real-time decision-making and greater visibility into the entire logistics process.
Onboarding process made agile with Confluent and Flowable – mimacom
Slides of the presentation "Onboarding process made agile with Confluent and Flowable: from days to minutes" given by Nelo Puchades, Solution Architect at mimacom and Javier del Águila, Solution Architect at Flowable.
Build Next Generation Real-time Applications with SAP HANA on AWS (BDT211) | ... – Amazon Web Services
(Presented by SAP) SAP HANA, available on the AWS Cloud, is an industry-transforming in-memory platform that has been adopted by many startups and ISVs, as well as traditional SAP enterprise customers. SAP HANA converges database and application platform capabilities in-memory to transform transactions, analytics, text analysis, predictive, and spatial processing so businesses can operate in real time. Please join us to learn what SAP HANA can do for you!
Doug Turner, CEO of Mantis Technologies, and an early adopter of SAP HANA One on AWS, will present and share his experience migrating his Sentiment Analysis solution from MySQL to SAP HANA One. He will talk about following benefits that he achieved with this migration:
-Dramatic simplification of his system architecture and landscape
-System consolidation by moving from 23 MySQL instances to one SAP HANA One instance
-Reduced overall AWS infrastructure cost, as well as reduced admin effort and improved efficiency
We will conclude with an overview of all the key SAP HANA capabilities on the AWS Cloud, such as text analysis, predictive analytics, geospatial processing, and data integration. We will round out the session with an in-depth view of the new HANA deployment options available on the AWS Cloud, such as customers’ ability to bring their own licenses (BYOL) of SAP HANA to run on AWS in a variety of configurations ranging from 244 GB up to 1.22 TB.
Top 6 Benefits of Moving On-Premise SAP S/4HANA to MS Azure Cloud – VCERPConsultingPvtLt1
With Microsoft and SAP working cooperatively, customers can migrate on-premise SAP applications and ERP solutions to the Azure Cloud. Organizations across the world depend on SAP ERP solutions to streamline, standardize, and integrate offices across the full range of their organization. Website: https://www.vc-erp.com/top-6-benefits-of-moving-on-premise-sap-s-4hana-to-ms-azure-cloud/
Microsoft Dynamics NAV (NAV) began at a small company in Denmark and has evolved into a global power-house being used by more than 110,000 organizations world-wide. NAV is an enterprise resource planning (ERP) software suite for mid-sized organizations that offers applications for financial management, human resources management, manufacturing, multiple and international sites, project management, sales and marketing, service management, supply chain management and business intelligence, with the functionality being particularly strong in manufacturing and distribution. Microsoft Dynamics NAV is also known for being highly customizable and partners have developed a long list of industry-specific configurations to serve various vertical markets.
Wise Men Solutions Cloud Migration Webinar – Wise Men
•Solutioning for Cloud / Hybrid Deployment
•Cloud Administration
•Migrating your Applications to Cloud
•Cloud Application Development
•Managed Services
Modern Thinking: How Big Data and Cognitive are changing Marketing strategy
By: Ismael Yuste, Strategic Cloud Engineer, Google Cloud
Presentation: An introduction to Google's Big Data solutions
Concurrency SharePoint Summit 2016 Presentation. For more information on our SharePoint solution, please visit http://www.concurrency.com/digital-transformation/customer-engagement.
In this white paper, Torry Harris Business Solutions carries out a high-level comparison of the significant features delivered by the industry's key public cloud providers, along with key considerations that enterprises need to take into account as they embark on cloud computing.
This session goes through the Dynamics AX roadmap, highlights some of the changes coming with AX 7 and the benefits of cloud ERP such as Machine Learning.
PHP Frameworks: I want to break free (IPC Berlin 2024) – Ralf Eggert
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk is aimed at encouraging a more independent approach to using PHP frameworks, moving towards a more flexible and future-proof approach to PHP development.
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024 – Tobias Schneck
As AI technology pushes into IT, I found myself wondering, as an “infrastructure container Kubernetes guy”, how this fancy AI technology gets managed from an infrastructure operations point of view. Is it possible to apply our lovely cloud-native principles as well? What benefits could the two technologies bring to each other?
Let me take these questions and lead you on a short journey through existing deployment models and use cases for AI software. Using practical examples, we discuss what cloud/on-premise strategy we may need to apply AI to our own infrastructure and get it to work from an enterprise perspective. I want to give an overview of infrastructure requirements and technologies, and of what could be beneficial for or limiting to your AI use cases in an enterprise environment. An interactive demo will give you some insights into the approaches I have already gotten working for real.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova... – Ramesh Iyer
In today's fast-changing business world, companies that fail to adapt and embrace new ideas often struggle to keep up with the competition. Fostering a culture of innovation, however, takes much work: it takes vision, leadership, and a willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality – Inflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
JMeter webinar - integration with InfluxDB and Grafana – RTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
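Under the hood, a JMeter backend listener ships metrics to InfluxDB in InfluxDB's line protocol; here is a minimal, hand-rolled encoder sketch (the measurement, tag, and field names are invented, and escaping of special characters is omitted):

```python
# Minimal encoder for InfluxDB's line protocol:
#   measurement,tag1=v1,tag2=v2 field1=v1,field2="v2" timestamp_ns
# Names below are made up; real clients also escape spaces and commas.

def to_line(measurement, tags, fields, ts_ns):
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(
        f'{k}="{v}"' if isinstance(v, str) else f"{k}={v}"
        for k, v in sorted(fields.items())
    )
    return f"{measurement},{tag_str} {field_str} {ts_ns}"

line = to_line(
    "jmeter_response",
    {"test": "login", "status": "ok"},
    {"latency_ms": 42, "label": "POST /login"},
    1_700_000_000_000_000_000,
)
```

Grafana then queries these points by measurement and tag, which is what makes the tag/field split matter: tags are indexed, field values are not.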
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
The Art of the Pitch: WordPress Relationships and Sales – Laura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
DevOps and Testing slides at DASA Connect (Kari Kakkonen)
Slides by me and Rik Marselis from the DASA Connect conference on 30.5.2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps looks like. We closed with an enjoyable workshop in which participants explored different ways to think about quality and testing across the DevOps infinity loop.
Accelerate your Kubernetes clusters with Varnish Caching (Thijs Feryn)
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti... (Jeffrey Haguewood)
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Assuring Contact Center Experiences for Your Customers With ThousandEyes
Cloud Plan 2014
1. Application or Item | Currently in the cloud | Planned date to move to cloud | No Plan
Tax Preparation
BNA Planner X
GoSystem X
ProFX 990, 706, 709 2016
Number Cruncher X
Retirement Distributions Planner X
Superforms X
Tax Interest X
Time Value X
Fixed Asset Manager X
Research
BNA X
RIA Checkpoint X
Morningstar X
PPC X
Kleinrock X
Workpapers - SurePrep X
Workflow - Microsoft CRM X
Document Management - MS SharePoint ?
Due date management - CCH PM X
Audit
Trial balance and workpapers - CCH Engagement ?
Programs, checklists, libraries 2016
Research X
Workflow - Microsoft SharePoint 2016
Microsoft Office X
Client Accounting Solution - Right Networks X
Electronic Signature X
Admin Document Management - Microsoft SharePoint 2014-2016
Personal file management - Microsoft OneDrive X
Client Portal - ShareFile X
Client Collaboration - Smartsheet X
Intranet X
Email, Calendar, Contacts X
Online Meetings, IM, Presence, Screen Sharing X
Telephone System 2018
Fax X
Practice Management - CCH Practice Management X
CRM - Microsoft X
Staff Scheduling - Staff Trax X
Human Resources - Microsoft SharePoint X
General Ledger - Microsoft Dynamics Great Plains X
Clark Nuber Cloud Plan
2. Application or Item | Currently in the cloud | Planned date to move to cloud | No Plan
Payroll - Ultipro X
Learning Management - Checkpoint X
Performance Evaluations - Halogen X
Data Extraction and Analysis - CCH Active Data X
Business Intelligence - Microsoft Power BI X
Knowledge Management - None X
Adobe Products - Acrobat, etc X
Successor auditor access X
Backup tools, Data X 2014-2016 X
IT Management, updates, rollouts, antivirus X
Servers X
2 domain controllers X
ADFS 3.0 X
SMTP relay for Office 365 and licensing (KMS) X
Office 365 Directory Synchronization X
VM Manager 2012 R2 X
Varonis file access documentation X
MIP Demo server X
Amelio server X
Engagement RAS X
Test servers X
W2 Mate X
Speech X
Commvault - replaced by Veeam 2016
ADFS 3.0 Server (Primary) 2018
Exchange 2010 for O365 Management 2016
2 Production SharePoint Servers - replaced with Box X
2 Dev SharePoint Servers 2016
SQL ?
SQL Reporting Services ?
File Server - administrative flat file data 2016
2 Domain Controllers X
Shoretel X
Fixed Asset Server 2016
KwikTag - accounts payable X
Print Server 2016
Network Scanning/Misc 2016
Remote Desktop Gateway (published apps) X
Library 7 (shared workstation) ?