Scott Stocker shares the best practices you can follow while upgrading your Sitecore environment to its latest version. He also shares handy solutions for common challenges you may face.
The most important asset for any organization is its data. There can be hundreds of front-end applications utilizing the same data for different purposes, and data plays a central role in any CMS application. This presentation covers different viewpoints on migrating data from an external database into Sitecore CMS.
Using these techniques, we were able to successfully migrate over 500,000 records into Sitecore.
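As a rough sketch of how such a bulk migration can be structured, records can be read from the source and written to the CMS in fixed-size batches; the `create_item` callback below is a hypothetical placeholder for the target write call, not a Sitecore API:

```python
# Illustrative batched bulk-migration loop. `create_item` stands in for
# whatever call writes one record into the target CMS; it is a
# hypothetical placeholder, not a Sitecore API.
def migrate(source_records, create_item, batch_size=1000):
    """Write records to the target in fixed-size batches; return the count."""
    migrated, batch = 0, []
    for record in source_records:
        batch.append(record)
        if len(batch) == batch_size:
            for r in batch:
                create_item(r)
            migrated += len(batch)
            batch = []
    for r in batch:  # flush the final, possibly partial, batch
        create_item(r)
    return migrated + len(batch)

stored = []
print(migrate(({"id": i} for i in range(2500)), stored.append))  # prints 2500
```

Batching keeps memory flat for large record counts and gives a natural checkpoint for retries.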
Support material for the blog post is available at https://hub.alfresco.com/t5/alfresco-content-services-blog/alfresco-7-3-upgrading-to-transform-core-3-0-0/ba-p/315364
This presentation describes the differences between Alfresco Transform Engine and Alfresco Transform Core 3.0.0.
Deployment, configuration and extension topics for Transform Core are covered.
by Brent Rabowsky, Solutions Architect & Itzik Paz, Solutions Architect, AWS
As serverless architectures become more popular, customers need a framework of patterns to help them identify how they can leverage AWS to deploy their workloads without managing servers or operating systems. This session describes reusable serverless patterns while considering costs. For each pattern, we provide operational and security best practices and discuss potential pitfalls and nuances. We also discuss the considerations for moving an existing server-based workload to a serverless architecture. The patterns use services like AWS Lambda, Amazon API Gateway, Amazon Kinesis Streams, Amazon Kinesis Analytics, Amazon DynamoDB, Amazon S3, AWS Step Functions, AWS Config, AWS X-Ray, and Amazon Athena. This session can help you recognize candidates for serverless architectures in your own organizations and understand areas of potential savings and increased agility. What's new in 2017: using X-Ray in Lambda for tracing and operational insight; a pattern on high performance computing (HPC) using Lambda at scale; how a query can be achieved using Athena; Step Functions as a way to handle orchestration for both the Automation and Batch patterns; a pattern for Security Automation using AWS Config rules to detect and automatically remediate violations of security standards; how to validate API parameters in API Gateway to protect your API back ends; and a solid focus on CI/CD development pipelines for serverless, including testing, deploying, and versioning (SAM tools).
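On the API-validation point: API Gateway can enforce required parameters declaratively with request validators, but the same guard expressed in handler code looks roughly like this (the event shape mirrors the Lambda proxy integration; the required field names are invented for illustration):

```python
# Sketch of validating API parameters before any backend work happens.
# The event shape mirrors the API Gateway Lambda proxy integration
# ("queryStringParameters"); the required field names are invented.
import json

REQUIRED = {"user_id", "action"}

def handler(event, context=None):
    params = event.get("queryStringParameters") or {}
    missing = REQUIRED - params.keys()
    if missing:
        # Reject early with a 400 so invalid requests never reach the backend.
        return {"statusCode": 400,
                "body": json.dumps({"error": "missing: %s" % sorted(missing)})}
    return {"statusCode": 200, "body": json.dumps({"ok": True})}

print(handler({"queryStringParameters": {"user_id": "42", "action": "get"}}))
```

Pushing this check to the edge (whether in code or via API Gateway's declarative validators) protects the backend from malformed traffic and saves invocation cost.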
Data Migration Steps PowerPoint Presentation Slides (SlideTeam)
Presenting this set of slides with the name Data Migration Steps PowerPoint Presentation Slides. This PPT deck displays twenty-six slides with in-depth research. We provide a ready-to-use deck with all sorts of relevant topics and subtopics: templates, charts and graphs, overviews, and analysis templates. When you download this deck by clicking the download button below, you get the presentation in both standard and widescreen formats. All slides are fully editable: change the colors and font size, or add and delete text as needed. The presentation is fully compatible with Google Slides and can be easily converted into JPG or PDF format.
Strategic Approach To Data Migration Project Plan (SlideTeam)
Presenting this set of slides with name Strategic Approach To Data Migration Project Plan. This is a six stage process. The stages in this process are Plan, Develop, Validate, Migrate Stage, Test. This is a completely editable PowerPoint presentation and is available for immediate download. Download now and impress your audience. https://bit.ly/3CTswep
AWS Lambda enables developers to build scalable application components with minimal effort. With Step Functions, we can solve the challenge of building large distributed applications using visual workflows. In this session, learn how to get started with Step Functions, and understand how to use them to take your Lambda-based applications to the next level. We start with a few granular functions and stitch them up using Step Functions. As we build the application, we'll add monitoring to ensure that the changes we make result in improvements.
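As an illustration of "stitching up" granular functions, a minimal Amazon States Language definition chaining two Lambda tasks might look like this (state names and ARNs are placeholders, not from the session):

```python
import json

# Minimal Amazon States Language definition chaining two Lambda tasks;
# the function ARNs are placeholders.
definition = {
    "Comment": "Two granular Lambda functions stitched together",
    "StartAt": "ExtractData",
    "States": {
        "ExtractData": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:extract",
            "Next": "TransformData",
        },
        "TransformData": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:transform",
            "End": True,
        },
    },
}

print(json.dumps(definition, indent=2))
```

This JSON is what you would pass to Step Functions when creating the state machine; the visual workflow in the console is rendered from exactly this definition.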
A presentation given at the Lucene/Solr Revolution 2014 conference to show Solr and Elasticsearch features side by side. The presentation time was only 30 minutes, so only the core usability features were compared. The full video is embedded on the last slide.
This talk aims to show how to add RabbitMQ to a Symfony project.
Talk given at Symfony Barcelona on July 2nd. Code examples are available at https://github.com/solilokiam/rabbitmqexample
A Practical Enterprise Feature Store on Delta Lake (Databricks)
The feature store is a data architecture concept used to accelerate data science experimentation and harden production ML deployments. Nate Buesgens and Bryan Christian describe a practical approach to building a feature store on Delta Lake at a large financial organization. This implementation has reduced feature engineering “wrangling” time by 75% and has increased the rate of production model delivery by 15x. The approach described focuses on practicality. It is informed by innovative approaches such as Feast, but our primary goal is evolutionary extensions of existing patterns that can be applied to any Delta Lake architecture.
Key Takeaways:
– Understand the key use cases that motivate the feature store from both a data science and engineering perspective.
– Consider edge cases where there may be opportunities for simplification such as “online” predictions.
– Review a typical logical data model for a feature store and how that can be applied to your business domain.
– Consider options for physical storage of the feature store in the Delta Lake.
– Understand common access patterns including metadata-based feature discovery.
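To make the logical model concrete: a feature store typically keys feature values by entity and timestamp, and training reads use a point-in-time lookup so that no future data leaks into a training row. A toy sketch of that lookup (data and names invented for illustration):

```python
from bisect import bisect_right

# feature_log: {(entity_id, feature_name): time-sorted [(timestamp, value)]}.
# Data and names are invented for illustration.
feature_log = {
    ("user_1", "balance"): [(10, 100.0), (20, 250.0), (30, 80.0)],
}

def point_in_time(entity_id, feature_name, as_of):
    """Latest value at or before `as_of`, so training rows never see the future."""
    history = feature_log.get((entity_id, feature_name), [])
    i = bisect_right(history, (as_of, float("inf")))
    return history[i - 1][1] if i else None

print(point_in_time("user_1", "balance", 25))  # prints 250.0
```

A production store does the same join in bulk (e.g. over Delta Lake tables), but the at-or-before semantics are the essential part.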
Feature Store as a Data Foundation for Machine Learning (Provectus)
Looking to design and build a centralized, scalable Feature Store that your Data Science & Machine Learning teams can take advantage of? Come and learn how from experts at Provectus and Amazon Web Services (AWS)!
Feature Store is a key component of the ML stack and data infrastructure, which enables feature engineering and management. By having a Feature Store, organizations can save massive amounts of resources, innovate faster, and drive ML processes at scale. In this webinar, you will learn how to build a Feature Store with a data mesh pattern and see how to achieve consistency between real-time and training features, to improve reproducibility with time-traveling for data.
Agenda
- Modern Data Lakes & Modern ML Infrastructure
- Existing and Emerging Architectural Shifts
- Feature Store: Overview and Reference Architecture
- AWS Perspective on Feature Store
Intended Audience
Technology executives & decision makers, manager-level tech roles, data architects & analysts, data engineers & data scientists, ML practitioners & ML engineers, and developers
Presenters
- Stepan Pushkarev, Chief Technology Officer, Provectus
- Gandhi Raketla, Senior Solutions Architect, AWS
- German Osin, Senior Solutions Architect, Provectus
Feel free to share this presentation with your colleagues and don't hesitate to reach out to us at info@provectus.com if you have any questions!
REQUEST WEBINAR: https://provectus.com/webinar-feature-store-as-data-foundation-for-ml-nov-2020/
Kappa vs Lambda Architectures and Technology Comparison (Kai Wähner)
Real-time data beats slow data. That’s true for almost every use case. Nevertheless, enterprise architects build new infrastructures with the Lambda architecture that includes separate batch and real-time layers.
This video explores why a single real-time pipeline, called Kappa architecture, is the better fit for many enterprise architectures. Real-world examples from companies such as Disney, Shopify, Uber, and Twitter explore the benefits of Kappa but also show how batch processing fits into this discussion positively without the need for a Lambda architecture.
The main focus of the discussion is on Apache Kafka (and its ecosystem) as the de facto standard for event streaming to process data in motion (the key concept of Kappa), but the video also compares various technologies and vendors such as Confluent, Cloudera, IBM/Red Hat, Apache Flink, Apache Pulsar, AWS Kinesis, Amazon MSK, Azure Event Hubs, Google Pub/Sub, and more.
Video recording of this presentation:
https://youtu.be/j7D29eyysDw
Further reading:
https://www.kai-waehner.de/blog/2021/09/23/real-time-kappa-architecture-mainstream-replacing-batch-lambda/
https://www.kai-waehner.de/blog/2021/04/20/comparison-open-source-apache-kafka-vs-confluent-cloudera-red-hat-amazon-msk-cloud/
https://www.kai-waehner.de/blog/2021/05/09/kafka-api-de-facto-standard-event-streaming-like-amazon-s3-object-storage/
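The core idea of Kappa is that one stream-processing pipeline serves both live traffic and historical reprocessing, simply by replaying the append-only log from the start. A dependency-free sketch of that idea (illustrative, not Kafka code):

```python
# One processing function serves both live events and full historical
# replays; "reprocessing" is just replaying the append-only log through
# the same code path, which is the essence of Kappa.
log = [  # stands in for a Kafka topic
    {"key": "a", "amount": 1},
    {"key": "b", "amount": 2},
    {"key": "a", "amount": 3},
]

def consume(events, start_offset=0):
    """Run the pipeline over live events or a replay from any offset."""
    state = {}
    for event in events[start_offset:]:
        state[event["key"]] = state.get(event["key"], 0) + event["amount"]
    return state

print(consume(log))  # full replay from offset 0
```

Because recomputation and live consumption share one code path, there is no batch layer to keep in sync, which is exactly the maintenance burden Kappa removes relative to Lambda.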
Monitoring Java Applications with Prometheus and Grafana (Justin Reock)
Learn how to modernize your Java application monitoring and dashboarding with Prometheus and Grafana. There's a lot of information out there when it comes to monitoring a Kubernetes cluster with Prometheus, but, in the modern enterprise landscape, applications are still what matters. Learn how to leverage Prometheus and Grafana to build slick, modern monitoring dashboards and threshold logic for Java applications.
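While the talk is Java-centric, the Prometheus pull model is language-agnostic: an application exposes a `/metrics` endpoint in the plain-text exposition format, Prometheus scrapes it, and Grafana charts it. A hand-rolled sketch of that format (shown in Python here; real applications would use a Prometheus client library):

```python
# Hand-rolled rendering of a counter in the Prometheus text exposition
# format; real applications would use a Prometheus client library.
def render_counter(name, help_text, value, labels=None):
    label_str = ""
    if labels:
        # Labels render as {key="value",...}; sorted for stable output.
        pairs = ",".join('%s="%s"' % (k, v) for k, v in sorted(labels.items()))
        label_str = "{" + pairs + "}"
    return ("# HELP %s %s\n" % (name, help_text)
            + "# TYPE %s counter\n" % name
            + "%s%s %s\n" % (name, label_str, value))

print(render_counter("http_requests_total", "Total HTTP requests.",
                     1027, {"method": "post", "code": "200"}))
```

Grafana dashboards and alert thresholds are then built over PromQL queries against series like `http_requests_total`.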
Data Migration Plan PowerPoint Presentation Slides (SlideTeam)
Data transfer is a complex process for every business. Keeping this in mind, we have created Data Migration Plan PowerPoint Presentation Slides. There are various slides provided in this information transfer plan PowerPoint complete deck, such as data migration approach, steps, a simplified illustration of data migration steps, lifecycle, process, data migration on the cloud, and many more. Our team of experts uses all sorts of editable charts, icons, and graphs to design these impressive presentation slides. The content-ready information transfer PPT visuals are fully editable: you can modify the color, text, and font size. It has relevant templates to cater to your business needs, so you can outline all the important concepts without any hassle. Furthermore, the data migration strategy PPT slides are apt for presenting related concepts like data conversion, data curation, data preservation, and system migration, to name a few. Showcase varied ways of data transformation using this professionally designed information migration PPT visual.
Data Versioning and Reproducible ML with DVC and MLflow (Databricks)
Machine Learning development involves comparing models and storing the artifacts they produce. We often compare several algorithms to select the most efficient ones, and we assess different hyper-parameters to fine-tune the model. Git helps us store multiple versions of our code. Additionally, we need to keep track of the datasets we are using: this is important not only for audit purposes but also for assessing the performance of models developed at a later time. Git is the standard code-versioning tool in software development; it can be used to store your datasets, but it does not offer an optimal solution.
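The idea DVC layers on top of Git can be illustrated with the standard library alone: commit a small content-hash "pointer" to version control while the bulky dataset lives in remote storage addressed by that hash. This shows only the concept, not the actual DVC file format:

```python
import hashlib
import os
import tempfile

def dataset_pointer(path):
    """Return a tiny, committable pointer to a dataset: content hash + size."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return {"md5": h.hexdigest(), "size": os.path.getsize(path)}

# The pointer (a few dozen bytes) is committed to Git; the dataset itself
# goes to remote storage addressed by that hash.
with tempfile.NamedTemporaryFile(delete=False, suffix=".csv") as f:
    f.write(b"id,label\n1,cat\n2,dog\n")
    data_path = f.name

print(dataset_pointer(data_path))
```

Because the pointer is deterministic for identical content, checking out an old commit tells you exactly which dataset version trained which model.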
The ability to monitor infrastructure and application performance in real time is essential to every software organization. Now, with the MongoDB Atlas and Datadog integration, you can seamlessly track Atlas performance monitoring data in Datadog. You can use Datadog to correlate performance metrics and events across your entire stack, create custom graphs and dashboards, as well as set up advanced alerting to help identify issues.
Vertex AI - Unified ML Platform for the entire AI workflow on Google Cloud (Márton Kodok)
Vertex AI is a managed ML platform for practitioners to accelerate experiments and deploy AI models.
Enhanced developer experience
- Build with the groundbreaking ML tools that power Google
- Approachable from the non-ML developer perspective (AutoML, managed models, training)
- Ease the life of a data scientist/ML (has feature store, managed datasets, endpoints, notebooks)
- Infrastructure management overhead has been almost completely eliminated
- Unified UI for the entire ML workflow
- End-to-end integration for data and AI with build pipelines that outperform and solve complex ML tasks
- Explainable AI and TensorBoard to visualize and track ML experiments
BDA302 Deep Dive on Migrating Big Data Workloads to Amazon EMR (Amazon Web Services)
Customers are migrating their analytics, data processing (ETL), and data science workloads running on Apache Hadoop, Spark, and data warehouse appliances from on-premises deployments to Amazon EMR in order to save costs, increase availability, and improve performance. Amazon EMR is a managed service that lets you process and analyze extremely large data sets using the latest versions of over 15 open-source frameworks in the Apache Hadoop and Spark ecosystems. This session will focus on identifying the components and workflows in your current environment and providing the best practices to migrate these workloads to Amazon EMR. We will explain how to move from HDFS to Amazon S3 as a durable storage layer, and how to lower costs with Amazon EC2 Spot instances and Auto Scaling. Additionally, we will go over common security recommendations and tuning tips to accelerate the time to production.
What MLflow is, what problems it solves for the machine learning lifecycle and how it solves them, how it is used with Databricks, and building a CI/CD pipeline with Databricks.
The importance of search for modern applications is evident, and nowadays it is higher than ever. Many projects use search forms as the primary interface for communicating with users. Still, implementing intelligent search functionality remains a challenge, and we need a good set of tools.
In this presentation, I will talk through the high-level architecture and benefits of Elasticsearch with some examples. Aside from that, we will also take a look at its existing competitors, their similarities, and differences.
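Under the hood, Elasticsearch and competitors such as Solr build on Lucene's inverted index, a map from each term to the documents that contain it. A toy version with AND-semantics search:

```python
from collections import defaultdict

def build_index(docs):
    """Map each lowercase term to the set of doc ids containing it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

def search(index, query):
    """AND-match: ids of documents containing every query term."""
    terms = query.lower().split()
    if not terms:
        return set()
    results = set(index.get(terms[0], set()))
    for term in terms[1:]:
        results &= index.get(term, set())
    return results

docs = {1: "quick brown fox", 2: "quick red fox", 3: "lazy brown dog"}
index = build_index(docs)
print(search(index, "quick fox"))  # doc ids 1 and 2
```

Real engines add analyzers, relevance scoring, and distributed shards on top, but term-to-postings lookup is the architectural core.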
In this presentation, we'll provide an extensive overview of the latest mobile marketing possibilities on all popular platforms (such as Google, Facebook and Twitter). You’ll know which platform to use for what purpose and what it can mean for your mobile presence. And yes, we promise that you will walk away with rock solid tactics to discover the real power of mobile and actually start driving business value from the mobile touchpoints of your potential clients.
We all use several devices and browsers to visit the same content on the web, on mobile, and in apps. A few months ago we were guessing and trying to get a grip on the customer journey across all these devices. But the time when we can measure, estimate, and understand cross-device usage has now arrived! Learn how to understand and set up cross-device measurement: what can you learn from it, and what are the benefits of understanding the cross-device behavior of your customers?
DF2015: A case for Customer Experience Design (The Reference)
The creation of customer experiences is nothing new, and neither is the occasional design of such experiences. But what is new is customer experience design as a management discipline, with its own principles, tools and techniques. This presentation takes you through the steps to draft a framework for purposeful and profitable customer experience design.
Insights and marketing automation with Sitecore 8 (The Reference)
This presentation, presented by Thomas Danniau on the Sitecore Inspiration Day of The Reference, includes all insights about marketing automation in Sitecore 8.0.
The internet of things, your next crucial challenge - Productize (The Reference)
Discover how the world is transforming into a global nervous system allowing agile companies to perform high resolution management and enter new markets with hybrid product-services following new business models. This session will give you a broad view on the Internet of Things, and guide you through the necessary steps to successfully capture the new values resulting from this revolution.
Sitecore Development Approach Evolution – Destination Helix (Peter Nazarov)
Sitecore officially recommended Helix as a set of overall design principles and conventions for Sitecore development around 18 months ago at SUGCON 2016, alongside an official implementation example, Habitat. Why was it necessary? What are the benefits? Has it worked in practice? Peter Nazarov will share an outlook on why and how a combination of Sitecore Helix and Habitat benefits the business and development users of Sitecore in practice.
Sitecore 9.2 new features for SUGMEA - Presented by Naresh Geepalem of Horizo... (dharmeshharji)
Naresh, Sitecore Architect at Horizontal Integration Dubai, discusses the key new features of Sitecore 9.2, as well as touching on Sitecore's latest acquisition, Stylelabs, now rebranded as Content Hub.
Upgrading to TIBCO Jaspersoft 7 with The Customer Success Team (TIBCO Jaspersoft)
New TIBCO Jaspersoft® 7 delivers highly anticipated reporting, BI, analytics, and next-level integration capabilities.
See its new highly streamlined workflow for designing and embedding data visualizations into your applications, including the all-new ability to embed the Jaspersoft® Ad Hoc Viewer into web applications using our award-winning JavaScript API.
Register for this webinar, led by the TIBCO Customer Success Team, to learn about our Upshift program that:
- Offers BI best practices tailored to your needs throughout your analytics lifecycle
- Enables your team to realize fast ROI
- Helps you efficiently move through implementation
Get your Jaspersoft implementation up to speed with the TIBCO Customer Success Team.
Sitecore 9 key features by Jitendra Soni - Presented in Sitecore User Group UK (Jitendra Soni)
Presented on Sitecore9 key features - After attending the Sitecore Symposium in LA - the USA in 2017
https://eventil.com/events/manchester-sitecore-technical-user-group
This presentation will be useful to those who would like to get acquainted with the lifetime history of a successful monolithic Java application.
It shows the architectural and technical evolution of one Java web startup; it goes beyond the daily coding routine and contains a lot of simplifications, Captain Obvious moments, and internet memes.
This presentation is not, however, intended as a comparison of monolithic vs. microservices architectures.
Dart Past Your Competition by Getting Your Digital Experience into Market Fas... – Perficient, Inc.
During the 2015 IBM Digital Experience, Mark Polly, Perficient Director, Strategic Advisors for Portal, Social, Web Content, demonstrated how you can dart past your competition by getting your digital experience into market faster than ever before.
Continuous Delivery: releasing Better and Faster at Dashlane – Dashlane
An introduction to how the Dashlane Engineering Team worked on achieving Continuous Delivery: the ability to deliver to production, fast, reliably and on-demand, through an industrialized automated Release Pipeline.
Triggered Nurturing using Marketing Automation in Sitecore 9 – edynamic
In the past, triggering a lead nurturing program off of website behavior necessitated data and analytics integrations of some type. For most platforms, it still does, but Sitecore brought this functionality under one roof in its last version and improved it even further in Sitecore 9.
Studies upon studies have been done on how efficient you could be, and how much more ROI you’d gain, if you’d just put an organized lead management process in place. Easier said than done though, and best practices are only a good starting point to get your mind around the concept.
Whether you are looking to optimize your CMS or re-platform to a new one, attend this webinar to walk away with actionable best practices. This webinar will also help you understand the latest cutting-edge features available today in best-of-breed CMS systems and how to use these features to their maximum potential to drive real results.
Law Firm Websites in 2018: Bottlenecks & Recommendations – edynamic
The legal industry is facing significant headwinds due to flat growth, increased competition, and increased client demands. In order to compete in a crowded marketplace, firms must evolve and differentiate themselves to win new business and grow.
Progressive Web Apps, also known as Installable Web Apps or Hybrid Web Apps, are the latest industry trend, helping businesses create more engaged and loyal customers by presenting regular web pages or websites to users as traditional applications or native mobile applications.
A Revenue Engine increases revenue for organizations by seamlessly aligning demand gen strategies with business processes and the customer journey, providing a single view of the customer and integrating an optimized CX tech stack.
As marketers, we are all feeling the pressure to measure the impact of every dollar we spend on marketing, and today, there is mounting focus put on the direct revenue impact of marketing on revenue. During this webinar we will showcase a framework for revenue marketers to maximize ROI through their digital marketing efforts.
Engagement strategies for law firms to compete in the age of the customer – edynamic
Today’s business relationships can be initiated online using personalized thought leadership content to attract prospects to your firm, engage with your attorneys, and transport them along a buying journey to becoming a client. Your digital platform is now the tool that builds trust and relationships with your clients.
Mid-market marketers' task is a lot tougher compared to large enterprise marketers' due to limited budget, lean resources, and complex client needs. For mid-market businesses, it's vital to be able to compete on a level playing field with industry giants.
Learn how a winning combination of a best-in-class digital experience platform in combination with an award-winning agency with a full Marketing Operations model can drive business success and measurable marketing ROI.
Customers no longer follow neat, linear paths toward a purchase. Instead, they utilize various channels across their lifecycle and with each interaction, your organization must be able to capture their past interactions, preferences, and data to advance their journey – and provide the right customer experience, no matter the channel. With the modern customer journey being nearly impossible to predict, the impact of omni-channel consistency is key.
Contextual Commerce: Best Practices for Winning with Customer Experience with... – edynamic
Commerce has evolved. To win customers, you need to win their hearts and minds with customer experience. Great commerce experiences start with understanding your customers, delivering relevant content based on who they are and where they are in the buying journey to allow shoppers to make better-informed decisions and bringing convenience to the buying process.
In the complex mobile technology ecosystem, selecting the right development approach can be intimidating. Sitecore's mobile solution makes the entire process simple for marketers and technologists.
Why Marketing is Broken, and how Time to Value fixes it! – edynamic
Time to Value is a methodology to increase measurable business revenue using Sitecore in under 30 days. Business demands positive results; marketers who don't move the needle and who are not data driven simply won't survive the customer experience revolution.
Crossing Paths: Meet Customers Wherever They Are on Their Journey – edynamic
Caroline Schmid, edynamic’s Vice President for Demand Generation Services will help you to encounter your customers and prospects one-to-one in her session – combining marketing technologies for omni-channel communication.
A tale of scale & speed: How the US Navy is enabling software delivery from l... – sonjaschweigert1
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATO’s (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
PHP Frameworks: I want to break free (IPC Berlin 2024) – Ralf Eggert
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk is aimed at encouraging a more independent approach to using PHP frameworks, moving towards a more flexible and future-proof approach to PHP development.
DevOps and Testing slides at DASA Connect – Kari Kakkonen
Rik Marselis' and my slides from the DASA Connect conference on 30.5.2024. We discuss what testing is, then what agile testing is, and finally what Testing in DevOps is. We also held a lovely workshop with the participants, trying to find different ways to think about quality and testing in different parts of the DevOps infinity loop.
Generative AI Deep Dive: Advancing from Proof of Concept to Production – Aggregage
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
Communications Mining Series - Zero to Hero - Session 1 – DianaGray10
This session provides an introduction to UiPath Communication Mining, its importance, and a platform overview. You will acquire a good understanding of the phases in Communication Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdf – Paige Cruz
Monitoring and observability aren't traditionally found in software curriculums, and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is part of our current company's observability stack.
While the dev and ops silo continues to crumble, many organizations still relegate monitoring & observability to the purview of ops, infra, and SRE teams. This is a mistake - achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party, and will share these foundational concepts to build on:
UiPath Test Automation using UiPath Test Suite series, part 5 – DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series, part 5. In this session, we will cover CI/CD with DevOps.
Topics covered:
CI/CD within UiPath
End-to-end overview of a CI/CD pipeline with Azure DevOps
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
Epistemic Interaction - tuning interfaces to provide information for AI support – Alan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024 – Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
SAP Sapphire 2024 - ASUG301 building better apps with SAP Fiori.pdf – Peter Spielvogel
Building better applications for business users with SAP Fiori.
• What is SAP Fiori and why it matters to you
• How a better user experience drives measurable business benefits
• How to get started with SAP Fiori today
• How SAP Fiori elements accelerates application development
• How SAP Build Code includes SAP Fiori tools and other generative artificial intelligence capabilities
• How SAP Fiori paves the way for using AI in SAP apps
GraphRAG is All You need? LLM & Knowledge Graph – Guy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
State of ICS and IoT Cyber Threat Landscape Report 2024 preview – Prayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
Elevating Tactical DDD Patterns Through Object Calisthenics – Dorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
Pushing the limits of ePRTC: 100ns holdover for 100 days – Adtran
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
Climate Impact of Software Testing at Nordic Testing Days – Kari Kakkonen
My slides at Nordic Testing Days 6.6.2024
Climate impact / sustainability of software testing is discussed in the talk. ICT and testing must carry their part of the global responsibility to help with climate warming. We can minimize the carbon footprint, but we can also have a carbon handprint, a positive impact on the climate. Quality characteristics can be extended with sustainability and then measured continuously. Test environments can be used less, at smaller scale, and on demand. Test techniques can be used to optimize or minimize the number of tests. Test automation can be used to speed up testing.
2. Let's talk!
• Quick Intro
• Why should you upgrade?
• Upgrade basics
• Research and planning
• Recommended approach
• Tips for a smooth upgrade
3. Sitecore expert with 12 years
of CMS experience and over
10 years of experience on the
Sitecore platform. He loves
building personalized web
experiences using Sitecore's
experience platform and
helping clients understand
Sitecore's capabilities. Scott
has architected many
Sitecore applications including
sites serving 200+ million page
views annually.
Hi. I’m Scott.
Scott Stocker
scott.stocker@edynamic.net
@sestocker
4. An award winning
global digital
marketing and
technology agency
focused on customer
experience
5. Exceptional Sitecore Expertise
Sitecore Practice
Sitecore partnership since
2003
Over 200 solutions
delivered
Large global resource pool
> 80 certified Sitecore
resources
Center of Excellence
Best practice development
3 MVPs on staff
Ongoing internal training
program run by MVPs
Creation of industry
accelerators and reusable
frameworks such as content
accelerators, custom data
connectors
Creation of reusable
Sitecore connectors:
Eloqua, Pardot, Marketo,
CRMs
7. • Identify reasons to keep Sitecore on
a current version
• Identify parts of the upgrade process
• Identify areas for research and
planning
• Review the recommended approach
• Get everyone more comfortable with
the upgrade process
Goals Today
17. • Database
• Configuration
• New Sitecore Files
• DLLs
• /sitecore directory
• Code Updates
• New Server Roles
Components of a Sitecore Upgrade
18. • Have to run steps separately as documented by
Sitecore
• Download the .update files
• Usually the Update Installation Wizard
• Sometimes database scripts
• Upgrades the content database and files on disk
• Get needed files from Sitecore
• SDN for versions of Sitecore before 8
• Dev for 8.0 and above
Running the Sitecore Upgrade
21. • Usually only required for major versions
• For example, ContentSearch API from 6.6 to 7.0
• Analytics API changes with the xDB introduction
• Introduction of SPEAK
• Start with new Sitecore DLLs
• Add to your solution in source control
• Look for compilation errors
• Test locally
• If making several leaps, wait until the final version to
make code changes
• For example, if going from 6.6 to 8.1, don't stop at 7.0 to make ContentSearch changes
Code Changes
22. • If you are upgrading from < 7.5:
• MongoDB
• Processing server
• Other options
• xManagement mode
• xDB Cloud from Sitecore
New Server Roles
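The MongoDB dependency introduced with xDB in 7.5 surfaces as new connection strings. As a sketch only (hosts, ports, and database names below are placeholders; check the exact connection string names required by your target version's installation guide), App_Config/ConnectionStrings.config gains entries along these lines:

```xml
<!-- App_Config/ConnectionStrings.config (fragment) -->
<!-- Sketch only: hosts and database names are placeholders -->
<connectionStrings>
  <add name="analytics" connectionString="mongodb://mongo-host:27017/sc_analytics" />
  <add name="tracking.live" connectionString="mongodb://mongo-host:27017/sc_tracking_live" />
  <add name="tracking.history" connectionString="mongodb://mongo-host:27017/sc_tracking_history" />
</connectionStrings>
```

Provisioning these (or choosing xManagement mode / xDB Cloud instead) is part of the planning work, not something to discover mid-upgrade.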
31. • On a developer workstation or sandbox environment
• Practice upgrade steps
• Work with Sitecore Support on any errors
• Use vanilla CM instances
• Remove customizations – especially pipelines
• Upgrade modules along with Sitecore versions
• Document steps and problems
• Upgraded database can now be used for developer
changes
• Re-enable customizations and test
• Make necessary code changes
Local Upgrade
32. • Server where final upgrade will be run
• Practice upgrade steps
• Use vanilla CM instances
• Remove customizations – especially pipelines
• Make it easy to replicate running the upgrade
• Should be able to run the upgrade steps easily
• Practice at least once before the final upgrade of
content
• Have your CM and CD ready ahead of time
• Code/config will be deployed via your build process
• Remember that custom config changes will be patched in
Upgrade Environment
33. • Upgrade the databases using out-of-the-box Sitecore instances
• Sitecore files and configuration should come from the
Sitecore installer
• Custom config should be patched in!
• Don't try to follow the pages of config changes
• Remember to remove Sitecore support files you might have
• Typically these are hotfixes for a particular known issue
• Test both content management and content delivery
• Don't forget to review log files, both during the upgrade and after
Recommendations
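The "patch custom config in" recommendation relies on Sitecore's include-file mechanism: any .config file dropped into App_Config/Include is merged into the runtime configuration, so your customizations survive an upgrade that replaces Sitecore.config wholesale. A minimal sketch (the file name and the patched value are illustrative, not a recommendation):

```xml
<!-- App_Config/Include/zzz.Custom.Settings.config — illustrative patch file -->
<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
  <sitecore>
    <settings>
      <!-- Override a stock setting without editing Sitecore.config itself -->
      <setting name="MailServer">
        <patch:attribute name="value">smtp.example.com</patch:attribute>
      </setting>
    </settings>
  </sitecore>
</configuration>
```

Keeping every customization in patch files like this is also what makes the "use vanilla CM instances" practice runs above feasible: you disable customizations by removing files, not by unpicking edits.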
34. Features
Easy to install on your Sitecore instance - just
select the current and target versions and start
an upgrade
Pre-configured with automated testing
capabilities powered by Selenium
Full support for multiple languages
Can handle any number of websites or
microsites
Tool tracks all upgrade events in Sitecore log
file so you can see success/failure at any time
Full support for user interface resolutions post
upgrade
Full support for any on-premise or cloud
deployment
Comes with 2 weeks of free warranty support
Benefits
Upgrade from any old Sitecore version to a higher version within days
Sitecore Automatic Version
Upgrader
35. edynamic's Auto vUpgrader for Sitecore
The tool detects the current version by itself; the user only needs to select the next version
Confirmation message
displayed once the upgrade is
complete
Automated test cases can be run subsequently with the tool
Test results are displayed
36. Upgrade of 13 sites from Sitecore
6.6 to 8.1 within 1.2 weeks
Georgia-Pacific LLC is one of the world's
leading manufacturers & distributors of pulp,
paper, tissue, toilet and paper towel dispensers
and related chemicals.
They had been struggling for over 4 months to upgrade multiple sites from Sitecore 6.6 to the latest version
Background
Solution & Benefits
• edynamic was hired by G-P to rescue its upgrade project, which had been undergoing delays for months under the incumbent vendor
• With its upgrade utility tool edynamic upgraded 13 sites
from Sitecore 6.6 to Sitecore 8.1 within a record time of
1.2 weeks
• Test results showed minimal errors and were easily fixed
after the upgrade
39. • Upgrade Sitecore regularly
• I recommend at least once every 12-18 months
• Planning is key for success
• Engage a partner for help
• Sitecore partners do upgrades all the time!
• Practice helps – often based on content freeze and
time it takes to upgrade, you will need to do the
process several times
• Engage Sitecore Support for issues you can't google
• Note: only Sitecore certified developers can engage support
• Don’t use IE to run the upgrade
• Modify the web.config/sitecore.config to increase timeouts
Tips
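On the timeout tip above: the standard ASP.NET knob is httpRuntime/executionTimeout in web.config, whose default of 110 seconds is far too short for long-running Update Installation Wizard requests. A hedged sketch (the values are illustrative starting points; revert them once the upgrade is done):

```xml
<!-- web.config fragment — values are illustrative, not prescriptive -->
<system.web>
  <!-- executionTimeout is in seconds; the ASP.NET default (110s)
       will abort long-running upgrade wizard requests.
       maxRequestLength is in KB, to allow large .update uploads. -->
  <httpRuntime executionTimeout="7200" maxRequestLength="512000" />
</system.web>
```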
To put it in simple words: edynamic is a digital marketing technology agency. We focus on bringing technology and marketing expertise together and helping you provide a superior customer experience to your clients.
Deeply committed to Sitecore, our relationship dates back to … delivered hundreds … what differentiates us is the COE … connectors on the marketplace … allows us to deliver great results … awards …
The upgrade process is unique for every solution. Each customer may have different levels of customization, different tolerances for content freezes and different preferences regarding testing and release management.
Note that simple upgrades – such as moving from 8.1 Update-2 to 8.1 Update-3 – will typically require less analysis and testing than upgrades that traverse major versions.
Mainstream support has already ended for versions 6.6 and under
7.0 and 7.1 mainstream support is about to end
Without mainstream support, assistance for errors and product defects will cost additional money
Current browsers work better with the latest versions of Sitecore
If you are not on 7.2 or 8.x I suspect you might see some issues today
8.2 drops the dependency on Silverlight
6.6 doesn’t have support for > Windows 8
Microsoft Mainstream support has already ended for Server 2008
Old software = bad
Want to run SQL 2014? Need to be on the latest version.
Mongo 2.6 has reached end of life. You need to get onto 3.x
New user friendly UI in Sitecore 8
Vastly improved Experience Editor
Planning is probably the key for success
The upgrade needs to be rolled out to all of your environments
The current version needs to be maintained during the upgrade process
CMS upgrade process is pretty straight-forward
Modules are more complicated
Don’t forget the code changes that might be necessary based on new module versions
If all sites can be done at once, that is the simplest approach
CM + CD
Not just prod
Now you know the components and what to plan for – but how do you execute the upgrade? What does the process look like?