This document discusses Intershop Commerce Management's support for Microsoft SQL Server and Azure SQL Database as operational databases. Key points include:
- Intershop Commerce Management version 7.10 now supports Microsoft SQL Server and Azure SQL Database in addition to Oracle Database.
- Microsoft SQL Server and Azure SQL Database provide features for business intelligence, advanced analytics, data management, and machine learning.
- Organizations have options to use SQL Server on-premises, Azure SQL Database on Azure, or let Intershop manage the database through their commerce-as-a-service offering.
- The document outlines the steps taken to migrate an existing Intershop implementation from Oracle to Microsoft SQL Server, including Java code changes, query extraction into query files, and SQL dialect handling.
ANSI SQL - a shortcut to Microsoft SQL Server/Azure SQL Database for Intershop (Jens Kleinschmidt)
These slides are for Intershop developers who want to start looking into an Oracle DB alternative: Microsoft SQL Server or Azure SQL Database.
They include the steps the vendor, Intershop, has undertaken to support MS SQL, as well as migration hints for projects.
Agenda:
Introduction
Evaluation
Why Microsoft SQL Server?
Work Ahead
MS SQL Support
A Story of Epic proportion
ANSI SQL to the rescue
Migration Steps
Outlook
Summary
Q&A
Industry leading: build mission-critical, intelligent apps with breakthrough scalability, performance, and availability.
Security + performance: protect data at rest and in motion. SQL Server has had the fewest vulnerabilities in the NIST vulnerability database for six years running.
End-to-end mobile BI: transform data into actionable insights. Deliver visual reports on any device, online or offline, at one-fifth the cost of other self-service solutions.
In-database advanced analytics: analyze data directly within your SQL Server database using R, the popular statistics language.
Consistent experiences: whether data is in your datacenter, in your private cloud, or on Microsoft Azure, you get a consistent experience.
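The in-database advanced analytics mentioned above is exposed through the documented `sp_execute_external_script` procedure. A minimal sketch follows; the table and column names are invented for illustration, and Machine Learning Services (R) must be installed on the instance:

```sql
-- Enable external scripts once per instance (may require a service restart)
EXEC sp_configure 'external scripts enabled', 1;
RECONFIGURE;

-- Run an R script over a T-SQL result set; the R data frame is returned
-- to the caller as an ordinary result set.
EXEC sp_execute_external_script
    @language = N'R',
    @script = N'OutputDataSet <- data.frame(avg_price = mean(InputDataSet$price))',
    @input_data_1 = N'SELECT price FROM product';  -- hypothetical table
```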
3. INTERSHOP | Support of Operational Databases
This graphic was published by Gartner, Inc. as part of a larger research document and should be evaluated in the context of the entire document. The Gartner document is available upon request. Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner's research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.
With 7.10 we now support both market-leading operational databases (Oracle Database & Microsoft SQL Server).
5. INTERSHOP | Why Microsoft SQL Server
[Diagram: the Microsoft data platform around Commerce Management, on-premises and in the cloud]
Data insights:
Business intelligence: Power BI, SQL Server Reporting Services
Advanced analytics & AI: Azure Machine Learning, Azure Stream Analytics, Azure Cognitive Services, SQL Server Analysis Services, R Services
Data management:
Data warehousing: Azure SQL Data Warehouse
Operational data: Azure SQL Database, SQL Server 2017
6. INTERSHOP | One Database, Various Options
[Diagram: licensing and deployment options for SQL Server 2017 and Azure SQL Database. Operation spans Self-Managed, INTERSHOP CaaS, and INTERSHOP CaaS-Individual; the license is either bring-your-own or a subscription obtained from Intershop; deployment is on-premises or off-premises. Azure SQL Database available with CaaS from Q4/2018.]
8. INTERSHOP | ORACLE versus MICROSOFT
Pricing: comparable. Similar: CPU pricing, support costs. Different: an MS license is only necessary for production environments.
Features: every feature ICM is using has a match within MS SQL.
Performance: roughly 1:1.
9. INTERSHOP | SQL Server 2017 versus Azure SQL Database
Editions: SQL Server: 5 editions (Enterprise, Standard, Express, Web, and Developer); Azure SQL Database: always Enterprise
Updates: SQL Server: possible, must be purchased optionally; Azure SQL Database: automatic, included in the subscription
Backups: SQL Server: possible (full, differential, transaction log); Azure SQL Database: automatic (full, differential, transaction log)
Replicas: SQL Server: possible; Azure SQL Database: active geo-replication with up to 4 readable secondary databases globally distributed across Azure data centers
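For the self-managed SQL Server side, the three backup types listed above map to standard T-SQL commands. A brief sketch, with the database name and file paths as placeholders:

```sql
-- Full, differential, and transaction log backups on self-managed SQL Server
BACKUP DATABASE IntershopICM TO DISK = N'D:\backup\icm_full.bak';
BACKUP DATABASE IntershopICM TO DISK = N'D:\backup\icm_diff.bak' WITH DIFFERENTIAL;
BACKUP LOG      IntershopICM TO DISK = N'D:\backup\icm_log.trn';
```

On Azure SQL Database the equivalent backups run automatically as part of the service.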
10. SCENARIOS | Azure SQL Database Versions
Model: Managed Instance is a full SQL Server instance; Elastic Pool and Single Database are logical instances
Scaling: manual (Managed Instance), automatic (Elastic Pool), manual (Single Database)
Feature set: ~100% SQL Server (Managed Instance); roughly the same as SQL Server (Elastic Pool, Single Database)
ICM support: yes (Managed Instance); no (Elastic Pool, Single Database: no Linked Servers)
Max. vCores: 24 (Gen 4), 80 (Gen 5)
Billing: per hour
Pricing: Managed Instance about 40% cheaper than Elastic Pool and Single Database, which are both the same price
15. MIGRATION STEPS | From Oracle to Microsoft SQL Server
Migrate to ICM 7.10 on Oracle Database, then move over to ICM 7.10 on Microsoft SQL Server / Azure SQL Database:
1. Remove Oracle specifics from Java code
2. Convert functions into procedures
3. Extract queries into query files
4. Implement dialect
5. Use the JUnit query test framework to write query tests!
17. MIGRATION STEPS | Convert Functions into Procedures
Convert all functions that execute DDL or DML into procedures: MS SQL functions can't execute DDL or DML (no side effects are allowed).
Convert all PL/SQL packages into functions: MS SQL doesn't support PL/SQL packages.
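A minimal sketch of that conversion; the table and names are invented for illustration and are not from the ICM schema:

```sql
-- Oracle (PL/SQL): a function may contain DML, e.g. touching a row
CREATE OR REPLACE FUNCTION touch_product(p_uuid VARCHAR2) RETURN NUMBER IS
BEGIN
  UPDATE product SET lastmodified = SYSDATE WHERE uuid = p_uuid;
  RETURN SQL%ROWCOUNT;
END;
/

-- SQL Server (T-SQL): functions may not modify data, so the same logic
-- becomes a stored procedure with an output parameter
CREATE PROCEDURE touch_product @uuid NVARCHAR(28), @rows INT OUTPUT AS
BEGIN
  UPDATE product SET lastmodified = GETDATE() WHERE uuid = @uuid;
  SET @rows = @@ROWCOUNT;
END;
```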
18. MIGRATION STEPS | Extract Queries into Query Files
Before: Oracle-specific SQL inlined in Java code (note the rownum):

Iterator<ProductPO> pIterator = pH.getObjectsBySQLWhere("sku=? and domainID=?
and rownum=1", new String[] { sku, aDomain.getUUID() }).iterator();

After: the query lives in a query file and is executed through the query framework:

Iterator<ProductPO> productsIterator = null;
try
{
    Map<String, Object> params = new HashMap<>();
    params.put("SKU", sku);
    params.put("DomainUUID", aDomain.getUUID());
    productsIterator = appProvider.get().getQueryExecutor()
        .executePageableQuery("product/GetProductBySKUSimple", params);
    if (productsIterator.hasNext())
    {
        ProductPO p = productsIterator.next();
        aProduct = productViewProvider.create(p.getUUID(), aDomain.getUUID());
    }
} …
19. MIGRATION STEPS | Convert Query Files
Remove Oracle specifics by using ANSI SQL, or implement the Microsoft dialect.
A query file can provide a full template per dialect:

<?xml version="1.0" encoding="UTF-8"?>
<query>
  <processor name="JDBC"/>
  <template sqlDialect="Oracle">
    …
  </template>
  <template sqlDialect="Microsoft">
    …
  </template>
</query>

Or only the differing fragment can be switched inside one statement:

SELECT
  <sql-dialect name="Oracle">
    sysdate FROM DUAL
  </sql-dialect>
  <sql-dialect name="Microsoft">
    GETDATE()
  </sql-dialect>
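The dialect switch can be sketched in plain Java. This is a stand-in for illustration only, not the ICM query framework API; the class and method names are invented:

```java
import java.util.Map;

// Illustrative stand-in for the dialect mechanism: the real ICM framework
// picks a <template sqlDialect="..."> block from a query file at runtime.
public class DialectQueries {

    // The same logical query, rendered per SQL dialect
    private static final Map<String, String> CURRENT_DATE = Map.of(
            "Oracle", "SELECT sysdate FROM DUAL",
            "Microsoft", "SELECT GETDATE()");

    public static String currentDateSql(String dialect) {
        String sql = CURRENT_DATE.get(dialect);
        if (sql == null) {
            throw new IllegalArgumentException("Unknown SQL dialect: " + dialect);
        }
        return sql;
    }

    public static void main(String[] args) {
        System.out.println(currentDateSql("Oracle"));    // SELECT sysdate FROM DUAL
        System.out.println(currentDateSql("Microsoft")); // SELECT GETDATE()
    }
}
```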
20. MIGRATION STEPS | Write Query Tests
Use the JUnit query test framework to write query tests!
21. WORK DONE
Data Definition (DDL): >750 tables, 16 views, >2100 indexes
Data Manipulation (DML): >150 stored procedures, ~100 functions, <10 PL/SQL packages (transformed into MS SQL functions)
Data Query (DQL): 750+ queries, 65 of them Oracle/Microsoft-specific
22. EARLY ADOPTER PROGRAM | Microsoft SQL Server
Benefits:
Direct access to Product Management and Engineering
Dedicated Product Manager as single point of communication
Support from Engineering
Influencing the development pipeline for PWA and Microsoft Connectors
Eligibility:
Projects based on ICM 7.10 and using at least one of the new features (PWA, MS SQL, or Microsoft Connectors)
Starting September 2018, until March 2019
Jens Kleinschmidt
23. QUESTIONS AND ANSWERS | Microsoft SQL Server
Jens Kleinschmidt, Technical Product Manager / Architect
Stefan Holzknecht, Senior Software Engineer
ProductManagement@intershop.de
24. The world of commerce is changing.
Unlock your potential with the exciting possibilities
of Intershop omni-channel commerce.
Jena, Germany
Hong Kong, China
Melbourne, Australia
San Francisco, USA
Amsterdam, Netherlands
Berlin, Germany
Frankfurt, Germany
Hamburg, Germany
London, UK
Nuremberg, Germany
Paris, France
Rio de Janeiro, Brazil
Sofia, Bulgaria
Stuttgart, Germany
intershop.com
info@intershop.com
Furthermore Intershop is represented in Austria, Belgium,
China, Denmark, Finland, India, Italy, Norway, Russian
Federation, Spain, Sweden, Switzerland, and Turkey.
For a full overview, as well as for contact details please consult
our website: www.intershop.com/offices-and-subsidiaries