This session provides an overview of the features and architecture of SQL Server on Linux and in containers. It covers installation, configuration, performance, security, HADR, Docker containers, and tools. Find the demos at http://aka.ms/bobwardms
All our aggregates are wrong (ExploreDDD 2018) - Mauro Servienti
It always starts well. At first glance the requirements seem straightforward, and implementation proceeds without hiccups. Then the requirements start to get more complex, and you find yourself in a predicament, introducing technical shortcuts that smell for the sake of delivering the new feature on schedule. In this talk, we'll analyze what appears to be a straightforward e-commerce shopping cart. We'll then go ahead and add a few more use cases that make it more complex and see how they can negatively impact the overall design. Finally, we'll focus our attention on the business needs behind these requirements and see how they can shed light on the correct approach to designing the feature. Walk away with a new understanding of how to take requirements apart to build the right software.
Kafka Streams at Scale (Deepak Goyal, Walmart Labs) - Kafka Summit London 2019, Confluent
Walmart.com generates millions of events per second. At WalmartLabs, I'm working in a team called the Customer Backbone (CBB), where we wanted to upgrade to a platform capable of processing this event volume in real time and storing the state/knowledge of possibly all Walmart customers generated by that processing. Kafka Streams' event-driven architecture seemed like the obvious choice. However, there are a few challenges at Walmart's scale:
– the clusters need to be large, with all the problems that brings;
– infinite retention of changelog topics wastes valuable disk;
– standby task recovery after a node failure is slow (changelog topics hold GBs of data);
– Kafka Streams has no support for repartitioning.
As part of this event-driven development, and to address the challenges above, I'm going to talk about some bold new ideas we developed as features/patches to Kafka Streams to deal with the scale required at Walmart.
– Cold Bootstrap: when a Kafka Streams node fails, instead of recovering the standby from the changelog topic, we bootstrap it from the active's RocksDB using JSch, with zero event loss thanks to careful offset management.
– Dynamic Repartitioning: We added support for repartitioning in Kafka Streams where state is distributed among the new partitions. We can now elastically scale to any number of partitions and any number of nodes.
– Cloud/Rack/AZ aware task assignment: No active and standby tasks of the same partition are assigned to the same rack.
– Decreased Partition Assignment Size: with large clusters like ours (>400 nodes and 3 stream threads per node), the partition assignment of the KS cluster can reach a few hundred MB, and a rebalance takes a long time to settle.
Key Takeaways:
– Basic understanding of Kafka Streams.
– Productionizing Kafka Streams at scale.
– Using Kafka Streams as Distributed NoSQL DB.
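The rack/AZ-aware assignment bullet above boils down to a placement invariant, which can be sketched as a small routine (purely illustrative Python; `assign_tasks` and the node/rack model are our names, not part of the Kafka Streams assignor API):

```python
from itertools import cycle

def assign_tasks(partitions, nodes):
    """Place one active and one standby replica per partition so the two
    replicas never share a rack. Illustrative sketch only; the real
    Kafka Streams assignor is far more involved."""
    assignment = {}
    ring = cycle(nodes)
    for p in partitions:
        active = next(ring)
        # choose a standby on any node whose rack differs from the active's
        standby = next(n for n in nodes if n[1] != active[1])
        assignment[p] = {"active": active, "standby": standby}
    return assignment

nodes = [("n1", "rackA"), ("n2", "rackB"), ("n3", "rackA"), ("n4", "rackB")]
plan = assign_tasks([0, 1, 2], nodes)
# the invariant from the talk: a partition's active and standby live on different racks
assert all(t["active"][1] != t["standby"][1] for t in plan.values())
```

A real assignor would also balance load across nodes and availability zones; the sketch only enforces the rack-separation constraint.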
RTCMultiConnection is an open-source WebRTC wrapper library for all RTCWeb APIs.
RTCMultiConnection is the only "Dynamic" WebRTC Library on the Web!
http://www.rtcmulticonnection.org/
Source: http://www.opennaru.com/cloud/ci-cd/
CI/CD is a method of delivering applications to customers on a shorter cycle by automating the stages of application development. Its core concepts are continuous integration, continuous delivery, and continuous deployment. CI/CD is a solution to the problems that integrating new code creates for development and operations teams (so-called "integration hell").
Specifically, CI/CD provides continuous automation and continuous monitoring across the entire application lifecycle, from the integration and testing phases through delivery and deployment. This practice is commonly called the "CI/CD pipeline" and is supported by agile collaboration between development and operations teams.
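To make the pipeline idea concrete, here is a minimal sketch of the gating behavior at its core (a toy model in Python, not the configuration format or API of any real CI product):

```python
def run_pipeline(stages):
    """Run CI/CD stages in order and stop at the first failure, so a broken
    build or failing test gates everything downstream (e.g. deploy).
    Each stage is a (name, callable-returning-bool) pair. Toy model only."""
    results = []
    for name, step in stages:
        ok = step()
        results.append((name, ok))
        if not ok:
            break  # continuous deployment never runs on a red build
    return results

# a green pipeline runs every stage through to deployment
green = [("build", lambda: True), ("test", lambda: True), ("deploy", lambda: True)]
assert run_pipeline(green) == [("build", True), ("test", True), ("deploy", True)]

# a failing test stops the pipeline before deploy: the essence of CI gating
red = [("build", lambda: True), ("test", lambda: False), ("deploy", lambda: True)]
assert run_pipeline(red) == [("build", True), ("test", False)]
```

Real pipelines add triggers, artifacts, and environments on top, but the stop-on-failure ordering is the part that prevents "integration hell".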
These slides are for my talk at Percona Live 2022: https://sched.co/10KEo
MySQL Cookbook, 4th edition (https://www.target.com/p/mysql-cookbook-4th-edition-by-sveta-smirnova-alkin-tezuysal-paperback/-/A-85851771) is planned for release this spring. I am one of the authors of the book and will show you how to "cook" MySQL. I will show you a few tasks with different priorities: JSON in MySQL for those who need flexibility, modern SQL for analytics, and Group Replication for high availability. I will also show how to write programs using the JavaScript and Python languages, the X DevAPI, and MySQL Shell. I expect this talk will be interesting for MySQL application developers.
Watch the recording: https://youtu.be/V6g1SE4DkK4?list=PLORxAVAC5fUWg_jFcq8hNJEMzELtAD6kc
You can save significant costs by moving from commercial databases such as Oracle and SQL Server to AWS managed database services. In this session, we will look at the types and characteristics of the managed database services AWS offers.
How to Use AWS Blockchain Services - 박혜영, Solutions Architect, AWS / 박선준, Solutions Architect, AWS :: AWS Summit S... - Amazon Web Services Korea
How to Use AWS Blockchain Services
박혜영, Solutions Architect, AWS
박선준, Solutions Architect, AWS
We introduce how to create a blockchain network quickly and easily with AWS Managed Blockchain, and how to analyze and use a variety of blockchain data in the cloud by connecting the ledger database Quantum DB with AWS big-data analytics services.
Goldilocks is an open-source tool by Fairwinds that helps identify a starting point for resource requests and limits. It is a Kubernetes controller providing a dashboard with recommendations on how to set your resource requests. Goldilocks doesn't compute the recommendations itself; it relies on a Kubernetes project called the Vertical Pod Autoscaler (VPA).
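To illustrate the kind of recommendation involved (a simplified sketch only; the real VPA recommender uses decaying histograms of usage, and `recommend_requests` is our hypothetical name, not part of Goldilocks or VPA):

```python
def recommend_requests(cpu_samples_millicores, target_pct=0.90, headroom=1.15):
    """Sketch of a VPA-style starting point for a CPU request: take a high
    percentile of observed usage and add headroom. Simplified illustration,
    not the actual Vertical Pod Autoscaler algorithm."""
    s = sorted(cpu_samples_millicores)
    idx = int(target_pct * (len(s) - 1))  # nearest-rank percentile index
    return round(s[idx] * headroom)

# ten observed CPU usage samples (millicores) with one spike
samples = [100, 120, 110, 500, 130, 125, 115, 140, 135, 128]
# 90th-percentile sample is 140 mc; with 15% headroom the request is 161 mc
assert recommend_requests(samples) == 161
```

Percentile-plus-headroom ignores the single 500 mc spike rather than sizing for it, which is why a dashboard like Goldilocks is useful for sanity-checking the numbers before applying them.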
MySQL and MariaDB share the same replication roots. Both support parallel replication, but they diverge in how parallel replication is implemented.
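One divergence point is how a replica decides which transactions are safe to apply in parallel. A conservative scheme based on commit grouping can be sketched as follows; the data model here is ours for illustration, not either server's binlog format:

```python
from itertools import groupby

def parallel_apply(binlog):
    """Conservative parallel-replication sketch: transactions that committed
    together on the primary (same commit group) did not conflict, so the
    replica may apply them concurrently; groups are applied in order.
    Illustrative only, not MySQL's or MariaDB's implementation."""
    batches = []
    for _, group in groupby(binlog, key=lambda t: t["group"]):
        batches.append([t["id"] for t in group])  # each batch could run concurrently
    return batches

binlog = [
    {"id": "t1", "group": 1}, {"id": "t2", "group": 1},  # committed together
    {"id": "t3", "group": 2},                            # later commit group
]
assert parallel_apply(binlog) == [["t1", "t2"], ["t3"]]
```

The two servers differ in how they derive and extend these groups, which is exactly the divergence the line above refers to.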
Watch on demand: https://www.youtube.com/watch?v=dBLv4V3hRRQ
Large databases such as Oracle RAC are operated across a wide range of customer environments for enterprise mission-critical systems. Many customers adopting the cloud are skeptical about whether cloud-native services can replace these large-database environments. We examine, from a technical perspective, the characteristics of operating and managing data at scale with AWS's cloud-native database services, and show that they are fully capable of serving as a complete replacement for Oracle RAC.
This is a presentation on how to introduce the CQRS pattern to an existing application, step by step, without breaking changes or holding up development.
DB12c: All You Need to Know About the Resource Manager - Maris Elsins
This presentation differs from the previous uploads in that SLOB was used for the testing.
Oracle Database 12c Multitenant provides the highest level of Oracle Database resource efficiency, driven by an improved resource manager. The 12c resource manager effectively allocates resources both within a single database and between multiple pluggable databases in a container. This presentation will review new features of the 12c resource manager, provide guidelines for migration of your current resource management plan to 12c, and will also look into how much overhead the resource manager introduces.
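The core of a shares-based resource plan is simple proportional arithmetic. As a hedged illustration of the model only (Oracle's resource manager is configured in SQL, and `cpu_entitlements` is our name, not an Oracle API):

```python
def cpu_entitlements(shares):
    """Proportional-share CPU entitlement under full contention: each
    pluggable database is guaranteed its fraction of the total shares.
    Illustrative arithmetic only, not Oracle's implementation."""
    total = sum(shares.values())
    return {pdb: round(100 * s / total, 1) for pdb, s in shares.items()}

# three PDBs in one container with shares 3:2:1
assert cpu_entitlements({"PDB1": 3, "PDB2": 2, "PDB3": 1}) == \
    {"PDB1": 50.0, "PDB2": 33.3, "PDB3": 16.7}
```

Under contention each PDB is held to roughly its entitlement; when the machine is idle, unused entitlement can be consumed by others, which is why measuring the manager's overhead (as the talk does) matters.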
SQL Server 2017 will bring SQL Server to Linux for the first time. This presentation covers the scope, schedule, and architecture as well as a background on why Microsoft is making SQL Server available on Linux.
SQL Server vNext is the next major release of SQL Server and the first release which will be available on Linux and Docker. This presentation covers all the details.
BRK3288: SQL Server v.Next with support on Linux, Windows and containers was... - Bob Ward
SQL Server is bringing its world-class RDBMS to Linux and Windows with SQL Server v.Next. In this session you will learn what's next for SQL Server on Linux and how application developers and IT architects can now leverage the enterprise-class features of SQL Server in every edition on Linux, Windows, and containers.
SUSE Webinar - Introduction to SQL Server on Linux - Travis Wright
An introduction to SQL Server on Linux for SUSE customers. It covers the scope of the first release of SQL Server on Linux, the schedule, and the Early Adoption Program. A recording is available here:
https://www.brighttalk.com/webcast/11477/243417
SQL Server goes Linux - Hello, my name is Tux, I would like to join the #SQLF... - Andre Essing
SQL Server and Windows have been an inseparable pair for years and a good combination for processing data. With SQL Server 2017 this will change, because from the north of Europe, more precisely from Helsinki, a little penguin would like to join the SQL Server community.
The upcoming SQL Server version will therefore no longer be exclusive to Windows. In the near future, the relational database engine can also be deployed on Linux and inside Docker. SQL Server thus offers more deployment options and use cases than ever before. But how does SQL Server on Linux work? What is supported, and what is not? Which scenarios are possible? I would like to discuss all of these points in this session.
Microsoft SQL Server 2017 Level 300 Technical Deck - George Walters
This deck covers new features in SQL Server 2017 as well as carryover features from 2012 onwards, including high availability, columnstore, AlwaysOn, in-memory tables, and other enterprise features.
PASS Summit - SQL Server 2017 Deep Dive - Travis Wright
A deep dive into SQL Server 2017 covering SQL Server on Linux, containers, HA improvements, SQL Graph, machine learning, Python, adaptive query processing, and much more.
The event, held on 11th December 2018, was a technical presentation about running MS SQL Server 2017 on Linux. We started off by using containers and then looked at high availability and data protection, more specifically:
- Supported features & Linux differences
- Installing SQL Server on a Linux Container
- Accessing SMB 3.0 shared storage using Samba
- Setting up a failover cluster using Pacemaker
- Setting up AlwaysOn Availability Groups using Pacemaker
- Authenticating to SQL Server using AD Authentication
- Setting up Read-Scale Cross-Platform Availability Groups
https://techspark.mt/sql-server-on-linux-11th-december-2018/
Redis is popular among developers for its incredible performance, versatility, and simplicity. The powerful combination of low-cost memory and high-performance Redis brings to life next-generation analytic uses, such as simultaneous real-time transaction and analytics processing. With Redis Labs' RLEC Flash on AWS SSD instances, you can get fantastic performance at up to 70% lower cost. Join this session to learn how next-generation flash from leading memory provider Intel has made significant strides in performance while retaining its cost advantage over memory. Using a combination of AWS's powerful SSD instances and Redis Labs' RLEC Flash, you can achieve up to 3M ops/sec at sub-millisecond latencies with a combination of RAM and flash. The session will also feature customer use cases from a large university, a large customer-engagement company, and a pioneer of online flash sales. Session sponsored by Redis Labs.
A sharing in a meetup of the AWS Taiwan User Group.
The registration page: https://bityl.co/7yRK
The promotion page: https://www.facebook.com/groups/awsugtw/permalink/4123481584394988/
BRK3043: Azure SQL DB - Intelligent Cloud Database for App Developers (Washington DC) - Bob Ward
Make building and maintaining applications easier and more productive. With built-in intelligence that learns app patterns and adapts to maximize performance, reliability, and data protection, SQL Database is a cloud database built for developers. The session covers our most advanced features to-date including Threat Detection, auto-tuned performance and actionable recommendations across performance and security aspects. Case studies and live demos help you understand how choosing SQL Database will make a difference for your app and your company.
Hekaton is the original project name for In-Memory OLTP and just sounds cooler for a title name. Keeping up the tradition of deep technical “Inside” sessions at PASS, this half-day talk will take you behind the scenes and under the covers on how the In-Memory OLTP functionality works with SQL Server.
We will cover “everything Hekaton”, including how it is integrated with the SQL Server Engine Architecture. We will explore how data is stored in memory and on disk, how I/O works, and how natively compiled procedures are built and executed. We will also look at how Hekaton integrates with the rest of the engine, including Backup, Restore, Recovery, High-Availability, Transaction Logging, and Troubleshooting.
Demos are a must for a half-day session like this, and what would an inside session be if we didn’t bring out the Windows Debugger? As with previous “Inside…” talks I’ve presented at PASS, this session is level 500 and not for the faint of heart. So read through the docs on In-Memory OLTP and bring some extra pain reliever as we move fast and go deep.
This session will appear as two sessions in the program guide but is not a Part I and II. It is one complete session with a small break so you should plan to attend it all to get the maximum benefit.
SQL Server In-Memory OLTP: What Every SQL Professional Should Know - Bob Ward
Perhaps you have heard the term “In-Memory” but are not sure what it means. If you are a SQL Server professional, you will want to know. Even if you are new to SQL Server, you will want to learn more about this topic. Come learn the basics of how In-Memory OLTP technology in SQL Server 2016 and Azure SQL Database can boost your OLTP application by 30X. We will compare how In-Memory OLTP works vs. “normal” disk-based tables. We will discuss what is required to migrate your existing data into memory-optimized tables, or how to build a new set of data and applications to take advantage of this technology. This presentation will cover the fundamentals of what, how, and why this technology is something every SQL Server professional should know.
SQL Server R Services: What Every SQL Professional Should Know - Bob Ward
SQL Server 2016 introduces a new platform for building intelligent, advanced analytic applications called SQL Server R Services. This session is for the SQL Server database professional who wants to learn more about this technology and its impact on managing a SQL Server environment. We will cover the basics of the technology but also look at how it works, troubleshooting topics, and usage scenarios. You don't have to be a data scientist to understand SQL Server R Services, but you need to know how it works, so come upgrade your career by learning more about SQL Server and advanced analytics.
Based on the popular blog series, join me in taking a deep dive and a behind the scenes look at how SQL Server 2016 “It Just Runs Faster”, focused on scalability and performance enhancements. This talk will discuss the improvements, not only for awareness, but expose design and internal change details. The beauty behind ‘It Just Runs Faster’ is your ability to just upgrade, in place, and take advantage without lengthy and costly application or infrastructure changes. If you are looking at why SQL Server 2016 makes sense for your business you won’t want to miss this session.
Strategies for Successful Data Migration Tools - varshanayak241
Data migration is a complex but essential task for organizations aiming to modernize their IT infrastructure and leverage new technologies. By understanding common challenges and implementing these strategies, businesses can achieve a successful migration with minimal disruption. Data migration tools like Ask On Data play a pivotal role in this journey, offering features that streamline the process, ensure data integrity, and maintain security. With the right approach and tools, organizations can turn the challenge of data migration into an opportunity for growth and innovation.
Your Digital Assistant.
Making a complex approach simple: a straightforward process saves time. No more waiting to connect with the people who matter to you. Safety first is not a cliché: information is securely protected in cloud storage to prevent any third party from accessing your data.
Would you rather make your visitors feel burdened by making them wait, or choose VizMan for a stress-free experience? VizMan is an automated visitor management system that works for any industry, not limited to factories, societies, government institutes, and warehouses. It is a new-age, contactless way of logging information about visitors, employees, packages, and vehicles. VizMan is a digital logbook, so it avoids unnecessary use of paper and space: no more bundles of registers left to collect dust in a corner of a room. It records visitors' essential details, helps schedule meetings for visitors and employees, and assists in supervising employee attendance. With VizMan, visitors don't need to wait for hours in long queues. VizMan treats visitors with the value they deserve, because we know time is important to you.
Feasible Features
One Subscription, Four Modules – Admin, Employee, Receptionist, and Gatekeeper – ensures confidentiality and prevents data from being manipulated
User Friendly – can be easily used on Android, iOS, and Web Interface
Multiple Accessibility – Log in through any device from any place at any time
One app for all industries – a Visitor Management System that works for any organisation.
Stress-free Sign-up
Visitor is registered and checked-in by the Receptionist
Host gets a notification, where they opt to Approve the meeting
Host notifies the Receptionist of the end of the meeting
Visitor is checked-out by the Receptionist
Host enters notes and remarks of the meeting
Customizable Components
Scheduling Meetings – Host can invite visitors for meetings and also approve, reject and reschedule meetings
Single/Bulk invites – Invitations can be sent individually to a visitor or collectively to many visitors
VIP Visitors – Additional security of data for VIP visitors to avoid misuse of information
Courier Management – Keeps a check on deliveries like commodities being delivered in and out of establishments
Alerts & Notifications – Get notified on SMS, email, and application
Parking Management – Manage availability of parking space
Individual log-in – Every user has their own log-in id
Visitor/Meeting Analytics – Evaluate notes and remarks of the meeting stored in the system
Visitor Management System is a secure and user-friendly database manager that records, filters, and tracks the visitors to your organization.
"Secure Your Premises with VizMan (VMS) – Get It Now"
Globus Connect Server Deep Dive - GlobusWorld 2024 - Globus
We explore the Globus Connect Server (GCS) architecture and experiment with advanced configuration options and use cases. This content is targeted at system administrators who are familiar with GCS and currently operate, or are planning to operate, broader deployments at their institution.
Into the Box Keynote Day 2: Unveiling amazing updates and announcements for modern CFML developers! Get ready for exciting releases and updates on Ortus tools and products. Stay tuned for cutting-edge innovations designed to boost your productivity.
SOCRadar Research Team: Latest Activities of IntelBroker - SOCRadar
The European Union Agency for Law Enforcement Cooperation (Europol) has suffered an alleged data breach after a notorious threat actor claimed to have exfiltrated data from its systems. Infamous data leaker IntelBroker posted on the even more infamous BreachForums hacking forum, saying that Europol suffered a data breach this month.
The alleged breach affected Europol agencies CCSE, EC3, Europol Platform for Experts, Law Enforcement Forum, and SIRIUS. Infiltration of these entities can disrupt ongoing investigations and compromise sensitive intelligence shared among international law enforcement agencies.
However, this is neither the first nor the last activity of IntelBroker. We have compiled what happened over the last few days. To track such hacker activities on dark-web sources like hacker forums, private Telegram channels, and other hidden platforms where cyber threats often originate, you can check SOCRadar's Dark Web News.
Stay Informed on Threat Actors’ Activity on the Dark Web with SOCRadar!
Prosigns: Transforming Business with Tailored Technology Solutions - Prosigns
Unlocking Business Potential: Tailored Technology Solutions by Prosigns
Discover how Prosigns, a leading technology solutions provider, partners with businesses to drive innovation and success. Our presentation showcases our comprehensive range of services, including custom software development, web and mobile app development, AI & ML solutions, blockchain integration, DevOps services, and Microsoft Dynamics 365 support.
Custom Software Development: Prosigns specializes in creating bespoke software solutions that cater to your unique business needs. Our team of experts works closely with you to understand your requirements and deliver tailor-made software that enhances efficiency and drives growth.
Web and Mobile App Development: From responsive websites to intuitive mobile applications, Prosigns develops cutting-edge solutions that engage users and deliver seamless experiences across devices.
AI & ML Solutions: Harnessing the power of Artificial Intelligence and Machine Learning, Prosigns provides smart solutions that automate processes, provide valuable insights, and drive informed decision-making.
Blockchain Integration: Prosigns offers comprehensive blockchain solutions, including development, integration, and consulting services, enabling businesses to leverage blockchain technology for enhanced security, transparency, and efficiency.
DevOps Services: Prosigns' DevOps services streamline development and operations processes, ensuring faster and more reliable software delivery through automation and continuous integration.
Microsoft Dynamics 365 Support: Prosigns provides comprehensive support and maintenance services for Microsoft Dynamics 365, ensuring your system is always up-to-date, secure, and running smoothly.
Learn how our collaborative approach and dedication to excellence help businesses achieve their goals and stay ahead in today's digital landscape. From concept to deployment, Prosigns is your trusted partner for transforming ideas into reality and unlocking the full potential of your business.
Join us on a journey of innovation and growth. Let's partner for success with Prosigns.
Field Employee Tracking System | MiTrack App | Best Employee Tracking Solution | ... - informapgpstrackings
Keep tabs on your field staff effortlessly with Informap Technology Centre LLC. Real-time tracking, task assignment, and smart features for efficient management. Request a live demo today!
For more details, visit us: https://informapuae.com/field-staff-tracking/
Code reviews are vital for ensuring good code quality. They serve as one of our last lines of defense against bugs and subpar code reaching production.
Yet, they often turn into annoying tasks riddled with frustration, hostility, unclear feedback, and a lack of standards. How can we improve this crucial process?
In this session we will cover:
- The Art of Effective Code Reviews
- Streamlining the Review Process
- Elevating Reviews with Automated Tools
By the end of this presentation, you'll know how to organize and improve your code review process.
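On the automated-tools point: mechanical rules are best enforced by machines so human reviewers can focus on design. A minimal check of this kind might look as follows (the rule and threshold are arbitrary examples of ours, not a recommendation of any specific tool):

```python
import ast

def flag_long_functions(source, max_lines=30):
    """Automated review check: flag Python functions longer than max_lines.
    The kind of nitpick a bot should raise instead of a human reviewer.
    The rule and threshold are illustrative only."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            length = node.end_lineno - node.lineno + 1
            if length > max_lines:
                findings.append((node.name, length))
    return findings

code = "def tiny():\n    return 1\n"
assert flag_long_functions(code) == []                      # 2 lines: passes
assert flag_long_functions(code, max_lines=1) == [("tiny", 2)]
```

Wiring such checks into CI keeps review comments focused on architecture and correctness rather than style.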
Exploring Innovations in Data Repository Solutions - Insights from the U.S. G... - Globus
The U.S. Geological Survey (USGS) has made substantial investments in meeting evolving scientific, technical, and policy driven demands on storing, managing, and delivering data. As these demands continue to grow in complexity and scale, the USGS must continue to explore innovative solutions to improve its management, curation, sharing, delivering, and preservation approaches for large-scale research data. Supporting these needs, the USGS has partnered with the University of Chicago-Globus to research and develop advanced repository components and workflows leveraging its current investment in Globus. The primary outcome of this partnership includes the development of a prototype enterprise repository, driven by USGS Data Release requirements, through exploration and implementation of the entire suite of the Globus platform offerings, including Globus Flow, Globus Auth, Globus Transfer, and Globus Search. This presentation will provide insights into this research partnership, introduce the unique requirements and challenges being addressed and provide relevant project progress.
Gamify Your Mind; The Secret Sauce to Delivering Success, Continuously Improv... - Shahin Sheidaei
Games are powerful teaching tools, fostering hands-on engagement and fun. But they require careful consideration to succeed. Join me to explore factors in running and selecting games, ensuring they serve as effective teaching tools. Learn to maintain focus on learning objectives while playing, and how to measure the ROI of gaming in education. Discover strategies for pitching gaming to leadership. This session offers insights, tips, and examples for coaches, team leads, and enterprise leaders seeking to teach from simple to complex concepts.
First Steps with Globus Compute Multi-User Endpoints - Globus
In this presentation we will share our experiences around getting started with the Globus Compute multi-user endpoint. Working with the Pharmacology group at the University of Auckland, we have previously written an application using Globus Compute that can offload computationally expensive steps in the researcher's workflows, which they wish to manage from their familiar Windows environments, onto the NeSI (New Zealand eScience Infrastructure) cluster. Some of the challenges we have encountered were that each researcher had to set up and manage their own single-user globus compute endpoint and that the workloads had varying resource requirements (CPUs, memory and wall time) between different runs. We hope that the multi-user endpoint will help to address these challenges and share an update on our progress here.
Quarkus Hidden and Forbidden Extensions - Max Andersen
Quarkus has a vast extension ecosystem and is known for its subsonic and subatomic feature set. Some of these features are not as well known, and some extensions are less talked about, but that does not make them less interesting - quite the opposite.
Come join this talk to see some tips and tricks for using Quarkus and some of the lesser known features, extensions and development techniques.
Climate Science Flows: Enabling Petabyte-Scale Climate Analysis with the Eart...Globus
The Earth System Grid Federation (ESGF) is a global network of data servers that archives and distributes the planet’s largest collection of Earth system model output for thousands of climate and environmental scientists worldwide. Many of these petabyte-scale data archives are located in proximity to large high-performance computing (HPC) or cloud computing resources, but the primary workflow for data users consists of transferring data, and applying computations on a different system. As a part of the ESGF 2.0 US project (funded by the United States Department of Energy Office of Science), we developed pre-defined data workflows, which can be run on-demand, capable of applying many data reduction and data analysis to the large ESGF data archives, transferring only the resultant analysis (ex. visualizations, smaller data files). In this talk, we will showcase a few of these workflows, highlighting how Globus Flows can be used for petabyte-scale climate analysis.
Check out the webinar slides to learn more about how XfilesPro transforms Salesforce document management by leveraging its world-class applications. For more details, please connect with sales@xfilespro.com
If you want to watch the on-demand webinar, please click here: https://www.xfilespro.com/webinars/salesforce-document-management-2-0-smarter-faster-better/
Why React Native as a Strategic Advantage for Startup Innovation.pdfayushiqss
Do you know that React Native is being increasingly adopted by startups as well as big companies in the mobile app development industry? Big names like Facebook, Instagram, and Pinterest have already integrated this robust open-source framework.
In fact, according to a report by Statista, the number of React Native developers has been steadily increasing over the years, reaching an estimated 1.9 million by the end of 2024. This means that the demand for this framework in the job market has been growing making it a valuable skill.
But what makes React Native so popular for mobile application development? It offers excellent cross-platform capabilities among other benefits. This way, with React Native, developers can write code once and run it on both iOS and Android devices thus saving time and resources leading to shorter development cycles hence faster time-to-market for your app.
Let’s take the example of a startup, which wanted to release their app on both iOS and Android at once. Through the use of React Native they managed to create an app and bring it into the market within a very short period. This helped them gain an advantage over their competitors because they had access to a large user base who were able to generate revenue quickly for them.
In software engineering, the right architecture is essential for robust, scalable platforms. Wix has undergone a pivotal shift from event sourcing to a CRUD-based model for its microservices. This talk will chart the course of this pivotal journey.
Event sourcing, which records state changes as immutable events, provided robust auditing and "time travel" debugging for Wix Stores' microservices. Despite its benefits, the complexity it introduced in state management slowed development. Wix responded by adopting a simpler, unified CRUD model. This talk will explore the challenges of event sourcing and the advantages of Wix's new "CRUD on steroids" approach, which streamlines API integration and domain event management while preserving data integrity and system resilience.
Participants will gain valuable insights into Wix's strategies for ensuring atomicity in database updates and event production, as well as caching, materialization, and performance optimization techniques within a distributed system.
Join us to discover how Wix has mastered the art of balancing simplicity and extensibility, and learn how the re-adoption of the modest CRUD has turbocharged their development velocity, resilience, and scalability in a high-growth environment.
Accelerate Enterprise Software Engineering with PlatformlessWSO2
Key takeaways:
Challenges of building platforms and the benefits of platformless.
Key principles of platformless, including API-first, cloud-native middleware, platform engineering, and developer experience.
How Choreo enables the platformless experience.
How key concepts like application architecture, domain-driven design, zero trust, and cell-based architecture are inherently a part of Choreo.
Demo of an end-to-end app built and deployed on Choreo.
OpenFOAM solver for Helmholtz equation, helmholtzFoam / helmholtzBubbleFoamtakuyayamamoto1800
In this slide, we show the simulation example and the way to compile this solver.
In this solver, the Helmholtz equation can be solved by helmholtzFoam. Also, the Helmholtz equation with uniformly dispersed bubbles can be simulated by helmholtzBubbleFoam.
OpenFOAM solver for Helmholtz equation, helmholtzFoam / helmholtzBubbleFoam
BRK2051: SQL Server on Linux and Docker
1.
2. Agenda
• Features and Architecture
• Installation and Configuration
• Performance
• Security
• HADR
• Docker
• Tools
3. SQL Server 2017: industry-leading performance and security, now on Linux and Docker
• End-to-end mobile BI on any device
• Choice of platform and language: T-SQL, Java, C/C++, C#/VB.NET, PHP, Node.js, Python, Ruby, R
• Most secure over the last 7 years [chart: vulnerabilities reported 2010–2016]
• A fraction of the cost. Self-service BI per user: Microsoft $120, Tableau $480, Oracle $2,230
• Only commercial DB with AI built-in: R and Python + in-memory at massive scale, native T-SQL scoring
• Industry-leading performance: #1 OLTP performance, #1 DW performance, #1 price/performance
• Most consistent data platform: private cloud, public cloud
• In-memory across all workloads, at 1/10th the cost of Oracle
4. SQL Server 2017
Supported platforms and Linux requirements
• Linux: RedHat Enterprise Linux (RHEL) 7.3+, SUSE Linux Enterprise Server (SLES) v12 SP2, Ubuntu 16.04 and 16.10
• Containers: Docker (Windows & Linux containers)
• Windows: Windows Server / Windows 10
Minimum requirements:
• Memory: 2 GB
• File system: XFS or EXT4 (and NFS)
• Disk space: 6 GB
• Speed: 2 GHz
• Cores: 2
• CPU type: x64-compatible only
5. What's in SQL Server on Linux? (Windows vs. Linux)
• Editions (both platforms): Developer, Express, Web, Standard, Enterprise
• Components: Database Engine, Integration Services; Analysis Services, Reporting Services, MDS, and DQS remain Windows-only
• Limits (same on Windows and Linux): maximum number of cores unlimited; maximum memory utilized per instance 12 TB; maximum database size 524 PB
• OLTP: basic OLTP (basic In-Memory OLTP, basic operational analytics); advanced OLTP (advanced In-Memory OLTP, advanced operational analytics)
• High availability: basic HA (2-node single-database failover, non-readable secondary); advanced HA (Always On: multi-node, multi-database failover, readable secondaries)
• Security: basic security (basic auditing, Row-Level Security, Dynamic Data Masking, Always Encrypted); advanced security (Transparent Data Encryption)
• HADR: Always On Availability Groups, Failover Clustering, Replication
• Data warehousing: PolyBase; basic data warehousing/data marts (basic In-Memory ColumnStore, partitioning, compression); advanced data warehousing (advanced In-Memory ColumnStore)
• Tools: Windows ecosystem: full-fidelity management & dev tools (SSMS & SSDT), command-line tools; Linux/macOS/Windows ecosystem: dev tools (VS Code), DB admin GUI tool, command-line tools
• Developer: programmability (T-SQL, CLR, data types, JSON)
• BI & advanced analytics: basic corporate business intelligence (multi-dimensional models, basic tabular model); basic "R" integration (connectivity to R Open, limited parallelism for ScaleR); advanced "R" integration (full parallelism for ScaleR)
• Hybrid cloud: Stretch Database
6. Linux software loves packages
• Major Linux distributions follow the pattern of using a package manager:
• RHEL: RPM and yum
• Ubuntu: Debian packages and apt-get
• SUSE: RPM and zypper
• Package: binaries + instructions to install.
• Benefits: dependency management, automatic versioning, one tool to rule them all, architecture control.
SQL Server packages: mssql-server, mssql-tools, mssql-server-agent, mssql-server-fts, mssql-server-ha, mssql-server-is
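As a sketch of that package-manager flow, a first-time install on, say, Ubuntu 16.04 looks roughly like this (repository URL and package name as documented for SQL Server 2017; verify against the current release notes before use):

```shell
# Register Microsoft's signing key and the SQL Server 2017 repository
curl -s https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -
curl -s https://packages.microsoft.com/config/ubuntu/16.04/mssql-server-2017.list \
  | sudo tee /etc/apt/sources.list.d/mssql-server.list

# Install the engine; apt resolves dependencies automatically
sudo apt-get update
sudo apt-get install -y mssql-server

# First-time setup: accept the EULA, choose an edition, set the SA password
sudo /opt/mssql/bin/mssql-conf setup

# Confirm the service is running
systemctl status mssql-server
```

On RHEL the same pattern uses yum, and on SLES zypper, with the matching repo files from packages.microsoft.com.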
7. mssql-conf
• mssql-conf is a configuration script that installs with SQL Server 2017 on RHEL, SUSE, and Ubuntu. This utility can be used to set:
• TCP port
• Default data directory
• Default log directory
• Default dump directory
• Default backup directory
• Core dump type
• High availability (enable/disable)
• Trace flags
• Collation
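A few of those settings, sketched with the documented mssql-conf syntax (the port number and directory paths here are arbitrary examples):

```shell
# Change the TCP port SQL Server listens on
sudo /opt/mssql/bin/mssql-conf set network.tcpport 1401

# Relocate default data and log directories (directories must exist and be owned by mssql)
sudo /opt/mssql/bin/mssql-conf set filelocation.defaultdatadir /sqldata/data
sudo /opt/mssql/bin/mssql-conf set filelocation.defaultlogdir /sqldata/log

# Enable Availability Groups
sudo /opt/mssql/bin/mssql-conf set hadr.hadrenabled 1

# Turn a trace flag on
sudo /opt/mssql/bin/mssql-conf traceflag 1204 on

# Most settings take effect only after a restart
sudo systemctl restart mssql-server
```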
8. The installation suite
• Offline install: simple and easy
• Go for the complete unattended experience with SQLCAT
• Simple update to the latest release
• Rollback to a previous update
• Updates based on CU or GDR repositories
• Easy uninstall (remove remaining data files afterwards with: rm -r /var/opt/mssql)
9. SQL Server Linux architecture
• Based on the Microsoft Research Drawbridge project
• SQL Platform Abstraction Layer (SQLPAL) combines the Drawbridge LibOS (Win API and kernel) with SQLOS
• System-resource- and latency-sensitive code paths run through SQLOS; everything else runs through the LibOS
• A Host Extension maps to OS system calls (IO, memory, CPU scheduling) through the Linux APIs (mmap, pthread_create, …)
• SQLSERVR and SQLAGENT run inside a single Linux process (Ring 3) on top of the Linux kernel (Ring 0), crossing the API/ABI boundaries via SQLPAL
• A parent "watchdog" process forks the SQL Server process and monitors it
10.
11. Does it perform?
• #1 TPC-H benchmark¹ at 1,009,065 QphH with price/performance of $0.47 USD per QphH
• World's first enterprise-class diskless database
• Columnstore indexes and In-Memory OLTP: full capabilities
• 180 billion rows scanned in < 20 secs with 480 CPUs
• Adaptive Query Processing and Automatic Tuning
• Linux performance best-practice guidance
¹ #1 non-clustered 1 TB TPC-H (http://www.tpc.org/3331)
Go read what our customers say
12. SQL Server 2017: industry-leading security
Comprehensive, easy-to-use security to implement enterprise-grade protection
(Getting started: https://docs.microsoft.com/en-us/sql/linux/sql-server-linux-security-get-started)
Information protection, access management, threat protection [chart: vulnerabilities reported 2010–2016]
• Database access: SQL Server logins, Active Directory authentication, roles and permissions, schemas
• Application access: Row-Level Security, Dynamic Data Masking
• Encryption at rest: Transparent Data Encryption (TDE), backup encryption, cell-level encryption (via openssl and Linux config)
• Encryption in transit: Transport Layer Security (TLS 1.2)
• Client-side encryption: Always Encrypted
• Tracking & investigation: fine-grained audit
13. AD Authentication on Linux
• Setup + config requires Linux specifics (e.g. realm to join the domain)
• Built on Kerberos and GSSAPI
• The steps are:
Join the SQL Server host to the AD domain
Create an AD user for SQL Server and set the SPN
Configure the SQL Server service keytab
Create AD-based logins in Transact-SQL
Connect to SQL Server using AD authentication
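Sketched on the Linux host, those steps could look like the following (the domain, account names, and keytab path are placeholders; network.kerberoskeytabfile is the documented mssql-conf setting):

```shell
# 1. Join the host to the AD domain (CONTOSO.COM / contoso.com are placeholders)
sudo realm join contoso.com -U 'admin@CONTOSO.COM'

# 2. In AD, create a service account for SQL Server and register the SPN,
#    e.g. from a Windows box:  setspn -A MSSQLSvc/myhost.contoso.com:1433 mssql-svc

# 3. Point SQL Server at the keytab generated for that account
sudo /opt/mssql/bin/mssql-conf set network.kerberoskeytabfile /var/opt/mssql/secrets/mssql.keytab
sudo systemctl restart mssql-server

# 4. Create an AD-based login (run as a SQL admin):
#    CREATE LOGIN [CONTOSO\alice] FROM WINDOWS;

# 5. Connect with Kerberos credentials instead of -U/-P
kinit alice@CONTOSO.COM
sqlcmd -S myhost.contoso.com -E
```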
14. Availability options on Linux
High Availability Availability Group (CLUSTER_TYPE = EXTERNAL):
• Automatic failover via Pacemaker
• SQL resource agent (mssql-server-ha)
• 3 replicas required; a configuration-only replica holds just metadata
• Full AG capabilities
Failover Cluster Instance:
• Pacemaker and Corosync
• Single SQL Server instance
• SQL resource agent (mssql-server-ha)
• Shared storage: iSCSI, NFS, SMB
Read-Scale Availability Group (CLUSTER_TYPE = NONE):
• No clustering required
• Manual or forced failover
• Sync or async replicas
• Read-scale routing
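For the EXTERNAL cluster type, registering an existing AG with Pacemaker can be sketched as follows (resource and AG names such as ag_cluster and ag1 are examples; the commands follow the pattern in the RHEL documentation and should be checked against your distribution):

```shell
# The resource agent ships in the mssql-server-ha package
sudo yum install -y mssql-server-ha

# The AG itself is created in T-SQL with the external cluster type, e.g.:
#   CREATE AVAILABILITY GROUP [ag1] WITH (CLUSTER_TYPE = EXTERNAL) FOR REPLICA ON ...

# Register the AG as a master/slave Pacemaker resource
sudo pcs resource create ag_cluster ocf:mssql:ag ag_name=ag1 \
  meta failure-timeout=60s master notify=true

# Add a floating IP that follows the primary replica
sudo pcs resource create virtualip ocf:heartbeat:IPaddr2 ip=10.0.0.99
sudo pcs constraint colocation add virtualip ag_cluster-master \
  INFINITY with-rsc-role=Master
```

Once registered, Pacemaker handles monitoring, failure detection, and failover orchestration for the AG.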
17. Docker containers: what and why?
• Docker: multi-platform container engine based on Linux and Windows containers.
• NOT virtualization
• Image: a lightweight, stand-alone, executable package that includes everything needed to run a piece of software, including the code, a runtime, libraries, environment variables, and config files
• Container: a runtime instance of an image, i.e. what the image becomes in memory when actually executed
• Imagine a world of "database containers"
• What is all this talk about DevOps?
18. How to run SQL Server in Docker?
• In a matter of minutes…
On Linux:
$ docker pull microsoft/mssql-server-linux:2017-latest
$ sudo docker run -e 'ACCEPT_EULA=Y' -e 'MSSQL_SA_PASSWORD=<YourStrong!Passw0rd>' \
    -p 1401:1433 --name sql3 \
    -d microsoft/mssql-server-linux:2017-latest
Connect:
$ sudo docker exec -it sql3 "bash"
$ /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P '<YourStrong!Passw0rd>'
> "Fire away" like any other SQL Server
19. Docker in the real world
• Run in production (use the Docker Store image):
$ docker run --name sqlenterprise \
    -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=<YourStrong!Passw0rd>' \
    -e 'MSSQL_PID=Enterprise' -p 1401:1433 \
    -d store/microsoft/mssql-server-linux:2017-latest
• To run multiple containers on the same host, use different ports:
$ docker run -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=<YourStrong!Passw0rd>' -p 1401:1433 -d microsoft/mssql-server-linux
$ docker run -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=<YourStrong!Passw0rd>' -p 1402:1433 -d microsoft/mssql-server-linux
• Pull a specific version:
$ docker pull microsoft/mssql-server-linux:<version>
• The Docker Hub image includes the SQL Server engine and tools
• Licensing is the same
• Supports Availability Groups
• Docker files for CentOS and RHEL
• mssql-tools docker image
20. Data persistence
• Mount the drive on the location of the SQL Server on Linux data. Default: /var/opt/mssql
• Docker Engine copies the data over to the container once docker run is executed; SQL Server restores the data files.
• The host data location is locked by the Docker process, so multiple containers can't access the same drive.
$ docker run -e … -e … -v /your/location:/var/opt/mssql -p … -d microsoft/mssql-server-linux
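If you prefer not to manage a host path yourself, a named Docker volume is an alternative worth sketching (the volume name sqldata is arbitrary):

```shell
# Create a named volume and mount it over the SQL Server data location
docker volume create sqldata
docker run -e 'ACCEPT_EULA=Y' -e 'MSSQL_SA_PASSWORD=<YourStrong!Passw0rd>' \
    -p 1401:1433 -v sqldata:/var/opt/mssql \
    -d microsoft/mssql-server-linux:2017-latest

# The databases survive container removal:
docker rm -f <container>        # container is gone...
docker volume inspect sqldata   # ...but the data files remain in the volume
```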
21. SQL Server 2017 on Linux
Tools and programmability
• Windows: SQL Server Management Studio (SSMS)
• Windows: SQL Server Data Tools (SSDT)
• 3rd-party tools continue to work
• Native Linux command-line tools: sqlcmd, bcp, sqlpackage
• Visual Studio Code mssql extension
• Existing drivers/frameworks supported
• SQL Operations Studio (new)
• mssql-cli (new)
22. Introducing SQL Operations Studio
Free, lightweight, modern data operations tool for SQL everywhere (https://aka.ms/sqlopsstudio)
• Cross-platform: based on VS Code; available on Windows, Linux, and macOS
• Modern: rich SQL editor, customizable dashboards, modern code workflow, monitoring + troubleshooting
• Open: source code on GitHub for community + partners to contribute
23. App-dev scenarios
• Connect to SQL running anywhere, Azure SQL DB, and Azure SQL DW
• Connection management (secret store, profiles)
• Edit T-SQL with rich language services (IntelliSense, syntax colorization, auto-complete, T-SQL error validation, T-SQL snippets)
• Run T-SQL queries and see results
• Export results to CSV, JSON
https://aka.ms/mssql-marketplace
24. Command-line tools
Available on Linux, macOS, and Windows: sqlcmd, bcp, mssql-scripter. Available on Linux: DBFS. In preview: mssql-cli.
• sqlcmd (akin to psql & mysql): connect to SQL anywhere; run ad-hoc T-SQL; view results
• bcp (akin to pg_dump but for bulk data loading): connect to SQL anywhere; bulk load/export data
• mssql-scripter (akin to pg_dump & mysqldump): connect to SQL anywhere; generate CREATE scripts for schema; generate INSERT scripts for data; save scripts as .sql files; pipe to native *nix tools
• DBFS: connect to SQL anywhere; monitor live data from SQL Server DMVs as virtual files in a virtual directory
• mssql-cli ("next gen" sqlcmd): connect to SQL anywhere; get results as raw text, formatted CSV, or JSON; CLI tab completion (T-SQL keywords, statements, tables, views, …); platform-native piping with awk, sed, grep, jq, cut, JMESPath
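As an illustration of piping these tools into native *nix utilities, a hypothetical mssql-scripter run might look like this (server, database, and output path are examples; the tool prompts for the SA password):

```shell
# mssql-scripter is distributed via pip
pip install mssql-scripter

# Script the schema of a database to a .sql file
mssql-scripter -S localhost -d AdventureWorks -U SA > ./adventureworks.sql

# Then treat the output like any other text: grep, sed, version control, ...
grep 'CREATE TABLE' ./adventureworks.sql | head
```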
25.
26.
27. FAQ
What Linux platforms are supported?
SQL Server is currently supported on Red Hat Enterprise Server, SUSE Linux Enterprise Server, and
Ubuntu. For the latest information about the supported versions, see Supported platforms.
Can I install SQL Server on the Linux Subsystem for Windows 10?
No. Linux running on Windows 10 is currently not a supported platform for SQL Server and related
tools.
How do I get SQL Server installed on my Linux servers?
Microsoft maintains package repositories for installing SQL Server and supports installation via native
package managers such as yum, zypper, and apt. To quickly install, see one of the quick start tutorials.
Which Linux file systems can SQL Server 2017 use for data files?
Currently SQL Server on Linux supports ext4 and XFS. Support for other file systems will be added as
needed in the future.
Can I download the installation packages to install SQL Server offline?
Yes. For more information, see the package download links in the Release notes. Also, review the instructions for offline installations.
28. FAQ
Can I use the SQL Server Management Studio client on Windows to access SQL Server on Linux?
Yes, you can use all your existing tools that run on Windows to access SQL Server on Linux. These include tools from Microsoft such as SQL
Server Management Studio (SSMS), SQL Server Data Tools (SSDT), and OSS and third-party tools.
Is there a tool like SSMS that runs on Linux?
The new Microsoft SQL Operations Studio (preview) is a cross-platform tool for managing SQL Server. For more information, see What is
Microsoft SQL Operations Studio (preview)?.
Are commands like sqlcmd and bcp available on Linux?
Yes, sqlcmd and bcp are natively available on Linux, macOS, and Windows. In addition, use the new mssql-scripter command line tool on Linux,
macOS, or Windows to generate T-SQL scripts for your SQL database running anywhere.
Is it possible to view Activity Monitor when connected through SSMS on Windows for an instance running on Linux?
Yes, you can use SSMS on Windows to connect remotely and use tools/features such as Activity Monitor on a Linux instance.
What tools are available to monitor SQL Server performance on Linux?
You can use system dynamic management views to collect various types of information about SQL Server, including Linux process information.
29. FAQ
Has Microsoft created an app like the SQL Server Configuration Manager on Linux?
Yes, there is a configuration tool for SQL Server on Linux: mssql-conf.
Does SQL Server on Linux support multiple instances on the same host?
We recommend running multiple containers on a host to have multiple distinct instances running on a host. Each container will need to listen on a
different port. For more information, see Run multiple SQL Server containers.
Are Always On and clustering supported in Linux?
Failover clustering and high availability on Linux are achieved with Pacemaker on Linux. For more information, see Business continuity and
database recovery - SQL Server on Linux.
Is it possible to configure replication from Linux to Windows and vice versa?
Read-scale replicas can be used between Windows and Linux to replicate data one-way.
Is it possible to migrate existing databases in older versions of SQL Server from Windows to Linux?
Yes, there are several methods of achieving this.
Can I migrate my data from Oracle and other database engines to SQL Server on Linux?
Yes. SSMA supports migration from several types of database engines: Microsoft Access, DB2, MySQL, Oracle, and SAP ASE (formerly SAP Sybase
ASE). For an example of how to use SSMA, see Migrate an Oracle schema to SQL Server 2017 on Linux with the SQL Server Migration Assistant.
Can I change the ownership of SQL Server files and directories from the installed mssql account and group?
We do not support changing the ownership of SQL Server directories and files from the default installation. The mssql account and group are specifically used for SQL Server and have no interactive login access.
30. Six ways to get value from data
• Modern BI
• Big Data & Data Warehousing
• Database Modernization
• Data Science & AI
• Migrate to SQL Server or Azure
• IoT
Apply for a complimentary workshop, assessment, or PoC
Application link: www.aka.ms/data6offers
35. Offer: Data Platform Modernization
Are you still running applications on SQL Server 2005 or 2008? It's time to move your data infrastructure to the next level. Upgrading to SQL Server 2017 or moving to Azure SQL DB can help your organization stay compliant; improve performance, reliability, and security; and achieve higher flexibility.
Microsoft partners can help you to learn about the latest version of SQL Server and Azure Data Services, get familiar with the
benefits of modernizing your Data Infrastructure, assess implementation & test drive new solution with a Proof of Concept project.
Ready to upgrade your data infrastructure?
Apply for complimentary workshop, assessment or PoC: www.aka.ms/Data6Offers
* Apply your Software Assurance benefits (SQL Server Deployment Planning Services days) to cover the engagement cost.
Your organization also might be eligible for subsidized deployment services. Please contact your Microsoft Account Manager or
Partner for details.
36. Offer: Migrate to SQL Server & Azure
Why Migrate? Oracle customers have many concerns about moving forward on their Oracle platforms, with one of the risks being
getting locked into a single vendor with high licensing costs. Oracle licenses for the same workload could cost up to 10-12 times
more than SQL Server. By migrating to SQL Server your organization can save money and achieve the performance, scale, and
security your mission-critical applications need.
Microsoft partners can help you to assess your current environment, review architectural considerations, create a detailed
migration plan to SQL Server and pilot a future solution with Proof of Concept project.
• Review architectural considerations and create a migration plan
• Dive into key features of SQL Server and Azure SQL DB through hands-on labs and instructor-led demos
Ready to migrate to SQL Server or Azure?
Apply for complimentary workshop, assessment or PoC: www.aka.ms/Data6Offers
37. Same license, new choice
Buying a SQL Server license gives you the option to use it on Windows Server, Linux, or Docker. Regardless of where you run it (VM, Docker, physical, cloud, on-premises) the licensing model is the same; available features depend on which edition of SQL Server you use.
Check out the SQL Server on Linux with Red Hat offer
38. Linux packages: the result
• Install and upgrade with a single command.
• Remove with a single command.
• 3rd-party software: first-time install by adding the repository and running the install command.
• Fully scriptable/automatable.
40. Security: encrypt data at rest and in motion with Always Encrypted
[diagram: a client-side query passes through an enhanced SQL Server client library, so a search for a single customer record reaches the server as ciphertext]
• Sensitive data remains encrypted inside SQL Server; the result set with unencrypted data is visible only client-side
• Keys: column master key (client side), column encryption key
• Example: server side stores the Credit card # column as ciphertext (e.g. Denny Usher, 0x7ff654ae6d, Exp. 5/17); client side sees plaintext rows (Tim Irish 4839-2939-1919-3987 7/19, Denny Usher 4949-8003-8473-1930 5/17, Alicia Hodge 9000-4899-1600-1324 4/18)
41. Environment variables
These variables can be used to silently configure SQL Server on Linux:
• ACCEPT_EULA: accept the SQL Server license agreement when set to any value (for example, 'Y'). Used on first install only.
• SA_PASSWORD: configure the SA user password. Used on first install only.
• MSSQL_TCP_PORT: configure the TCP port that SQL Server listens on (default 1433).
• MSSQL_IP_ADDRESS: set the IP address. Currently, the IP address must be IPv4 style (0.0.0.0).
• MSSQL_BACKUP_DIR: set the default backup directory location.
• MSSQL_DATA_DIR: change the directory where new SQL Server database data files (.mdf) are created.
• MSSQL_LOG_DIR: change the directory where new SQL Server database log files (.ldf) are created.
• MSSQL_DUMP_DIR: change the directory where SQL Server deposits memory dumps and other troubleshooting files by default.
• MSSQL_ENABLE_HADR: enable Availability Groups.
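Putting a few of these together, a silently configured container start might look like this (port 1402 is an arbitrary example):

```shell
# Start a container with the engine pre-configured via environment variables:
# EULA accepted, SA password set, custom TCP port, Availability Groups enabled
docker run \
  -e 'ACCEPT_EULA=Y' \
  -e 'SA_PASSWORD=<YourStrong!Passw0rd>' \
  -e 'MSSQL_TCP_PORT=1402' \
  -e 'MSSQL_ENABLE_HADR=1' \
  -p 1402:1402 \
  -d microsoft/mssql-server-linux:2017-latest
```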
42. Encrypt connections on Linux using TLS
• TLS 1.0, 1.1, and 1.2 supported
• TLS is used to encrypt connections from a client application to SQL Server
• Clients specify "Encrypt=True" and "TrustServerCertificate=False" in their connection strings
• FIPS compliance
• Recently moved to the openssl stack
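On the server side, TLS is configured through mssql-conf; a sketch using the documented network.* settings (the certificate and key paths are placeholders):

```shell
# Point SQL Server at a certificate and private key on the Linux file system
sudo /opt/mssql/bin/mssql-conf set network.tlscert /etc/ssl/certs/mssql.pem
sudo /opt/mssql/bin/mssql-conf set network.tlskey /etc/ssl/private/mssql.key

# Restrict the protocol to TLS 1.2 and require encrypted connections
sudo /opt/mssql/bin/mssql-conf set network.tlsprotocols 1.2
sudo /opt/mssql/bin/mssql-conf set network.forceencryption 1

sudo systemctl restart mssql-server
```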
43. Use of openssl Crypto
• SQL Server on Linux uses openssl crypto instead of the
Windows crypto
• Use certificates from the Linux file system
• This allows customers to run in FIPS mode.
44. HADR options: from simple to mission critical
• Failover cluster instance: instance-level protection; seconds-to-minutes RTO
• Basic availability groups: AG with 2 replicas
• Log shipping: warm standbys for DR
• VM failover: resilience against guest and OS-level failures; minutes RTO
• Backup and restore: DR; minutes-to-hours RTO
• Always On availability groups: database-level protection; seconds-to-minutes RTO
• Distributed availability groups: DR; seconds-to-minutes RTO
45. Enterprise building a mission-critical app
[diagram: scenarios (backups, reports, HA, DR) mapped to solutions, using sync log synchronization for HA and async log synchronization for DR]
49. Best practices
• Change the SA password:
$ docker exec -it <ContainerID> /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P '<Old Password>' \
    -Q 'ALTER LOGIN SA WITH PASSWORD="<New Password>";'
• To run multiple containers on the same host, use different ports:
$ docker run -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=<YourStrong!Passw0rd>' -p 1401:1433 -d microsoft/mssql-server-linux
$ docker run -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=<YourStrong!Passw0rd>' -p 1402:1433 -d microsoft/mssql-server-linux
• Upgrade SQL Server frequently:
$ docker pull microsoft/mssql-server-linux
• Use volumes for data persistence
Editor's Notes
That’s not all it can do. SQL Server 2017 continues to deliver industry-leading capabilities:
Our latest performance benchmarks on Windows and Linux blow away our old records.
OLTP – We have #1 OLTP TPC-E performance
DW: And, we have the fastest performing DW. With best price/performance.
We offer the most secure database.
According to the US National Institute of Standards and Technology (NIST), we have had fewer vulnerabilities over the last 7 years than Oracle or IBM
Fewer vulnerabilities mean less patching for you!
SQL Server is the first commercial database with Advanced Analytics using R and Python built-in.
Why does this matter to you?
Now you can use SQL Server to operationalize your data science models in a secure and performant way
Use native T-SQL commands to score data in near real-time
And unlike our competitors, mobile BI on every device comes built-in. Or add access to powerful, self-service BI visualizations through Power BI - at a fraction of the cost of our competitors.
SQL Server 2017 gives you your choice of platform and language, and the most consistent on-prem to cloud environment.
And it does all this for 1/10th the cost of Oracle
Available on Windows, but the config and setup have changed on Linux. On Windows, the password policy was whatever was configured on your machine. On Linux we have a hard-coded password policy (complexity, expiration time, etc.).
Use MIT’s implementation of Kerberos
Open source modules of id mapping
Lastly, let's consider the example of an ISV that has a solution on Windows and now with SQL Server available on Linux they want to certify their solution on Linux as well. Or the example of an enterprise moving their SQL Server workload to a Linux infrastructure. But due to their business requirements related to availability SLA they need the migration to be live with minimum downtime.
Using SQL Server 2017 customers can migrate their workloads cross platform, you can setup a DAG with primary AG on Windows for example and secondary on Linux. Again, we do not recommend running in this configuration in a steady state as there is no cluster manager for cross-platform orchestration, but it is the fastest solution for a cross-platform migration with minimum downtime.
TODO: Research how this works further to explain
Uncover insights buried in your data to optimize the way you do business.
Get insight you need to make the right decisions
Make data-driven decisions to drive business results.
Businesses need easy-to-use analytics products that deliver complex analysis and insights with a low cost of ownership
Integrate big data from across the enterprise value chain and use advanced analytics in real time to optimize supply-side performance and save money. Embrace proactive measures with a live view into your supply chain—assess inventory levels, predict product fulfillment needs, and identify potential backlog issues.
Bring together all of the data you need
Data volumes are exploding—from traditional point-of-sale systems and e-commerce websites to new customer sentiment sources like Twitter and IoT sensors that stream data in real time using Apache Hadoop and Spark. By analyzing a diverse dataset from the start, you’ll make more informed decisions that are predictive and holistic rather than reactive and disconnected.
Deliver better experiences and make better decisions by analyzing massive amounts of data in real time. Get the insight you need to deliver intelligent actions that improve customer engagement, increase revenue, and lower costs.
Take charge of your data overload with big data
Better understand customers, enhance decision-making, and increase productivity with insights gained from structured, semi-structured, and unstructured data. By analyzing vast quantities of data that were previously inaccessible, you transform your business culture into one that shapes the future with predictive analytics, rather than hindsight.
Create new services, Reinvent processes, and capture new revenue
SQL Server runtime and associated libraries:
/opt/mssql/bin/
/opt/mssql/lib/
Data and log files for SQL Server databases:
/var/opt/mssql/data/
/var/opt/mssql/log/
NB: This slide is correct as at SQL Server 2017 RC2. The list of unsupported features might change in later releases.
For more information, see: https://docs.microsoft.com/en-us/sql/linux/sql-server-linux-release-notes.
Recently moved to the openssl stack. This allows us to use certificates from the linux file system.
Instead of using Windows crypto, we now use openssl crypto. This allows customers to run in FIPS mode.
As stated since our first public preview release back in November 2016, our goal for SQL Server on Linux is to enable the full spectrum of HADR solutions available on Windows today. The CTP1 release included backup/restore and shared-disk FCI, and with CTP1.3 we released Always On AG capabilities (all flavors). Log shipping is in progress to be released as well.
The focus of this session is Always On AGs, as a mission critical HADR solution for SQL Server on Linux. So let's go through a series of scenarios that AGs are enabling for SQL Server 2017.
For start, let's assume you are an organization with an all Linux infrastructure. You are building a mission critical application and now with SQL Server being available on the new platform, you are considering using SQL Server on Linux. To enable such scenarios we are bringing our flagship HA solution Always On AG on Linux.
This feature is now available on all Linux OS distributions SQL Server v.Next supports — Red Hat Enterprise Linux, Ubuntu and SUSE Linux Enterprise Server. Also, all capabilities that make Availability Groups a flexible, integrated and efficient HADR solution are available on Linux as well: multidatabase failover, fast failure detection and failover, transparent failover using floating IP resource, multiple sync/async secondary replicas, manual/automatic failover with no data loss, active secondary replicas available for read/backup workloads, automatic seeding, read-only routing, database level health monitoring and failover trigger, disaster Recovery configurations.
On Windows, Always On depends on Windows Server Failover Clustering (WSFC) for distributed metadata storage, failure detection, and failover orchestration. On Linux, we are enabling Availability Groups to integrate natively with Pacemaker, a popular Linux clustering technology. Users can add a previously configured SQL Server Availability Group as a resource to a Pacemaker cluster, and all the orchestration regarding monitoring, failure detection, and failover is taken care of. To achieve this, customers use the SQL Server resource agent for Pacemaker, available in the mssql-server-ha package that is installed alongside mssql-server.
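As a sketch on RHEL, assuming a Pacemaker cluster is already up and the AG already exists, the resource registration looks roughly like this; the resource names (ag_cluster, virtualip) and the IP address are illustrative:

```bash
# Install the SQL Server resource agent for Pacemaker
sudo yum install mssql-server-ha

# Register the AG as a master/slave Pacemaker resource via the ocf:mssql:ag agent
sudo pcs resource create ag_cluster ocf:mssql:ag ag_name=ag1 \
    meta failure-timeout=60s master notify=true

# Floating IP for transparent failover, colocated with the primary replica
sudo pcs resource create virtualip ocf:heartbeat:IPaddr2 ip=10.0.0.100
sudo pcs constraint colocation add virtualip ag_cluster-master \
    INFINITY with-rsc-role=Master
```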
In a typical database application, multiple clients run various types of workloads, which can lead to bottlenecks due to resource constraints. To free up resources and achieve higher throughput for the OLTP workload, or to deliver higher performance and scale for read-only workloads, customers can leverage the fastest replication technology for SQL Server and create a group of replicated databases to offload read workloads. They can do this by using Availability Groups.
With Availability Groups, one or more secondary replicas can be configured to support read-only access to secondary databases and/or to permit backups on secondary databases.
The secondary replicas allow direct read-only querying, or can enforce that connections specify 'ReadOnly' as their application intent. SQL Server also offers load-balancing capabilities by routing incoming connections to an availability group listener to a secondary replica that is configured to allow read-only workloads. Starting with SQL Server 2016, you can define a group of secondary replicas in the routing list to be used in round-robin fashion.
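A hedged sketch of this round-robin routing configuration; the AG, replica, and host names (ag1, primary1, replica1, replica2) are illustrative. The nested parentheses around ('replica1','replica2') are what enable round-robin load balancing across that pair:

```sql
-- Sketch: read-only routing with a round-robin group (SQL Server 2016+).
-- Run on the primary: route ReadOnly-intent connections to the replicas first,
-- falling back to the primary itself.
ALTER AVAILABILITY GROUP [ag1]
    MODIFY REPLICA ON N'primary1' WITH (
        PRIMARY_ROLE (READ_ONLY_ROUTING_LIST = (('replica1','replica2'), 'primary1'))
    );

-- Each secondary needs a routing URL and must allow read-only connections.
ALTER AVAILABILITY GROUP [ag1]
    MODIFY REPLICA ON N'replica1' WITH (
        SECONDARY_ROLE (
            ALLOW_CONNECTIONS = READ_ONLY,
            READ_ONLY_ROUTING_URL = N'tcp://replica1:1433'
        )
    );
```

Clients opt in by adding ApplicationIntent=ReadOnly to their connection string when connecting to the listener.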
I mentioned before offloading reporting and analytics workloads to the secondary replicas.
In addition, secondary replicas can also be used to offload backup operations or DBCC checks.
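For example, a maintenance job run on a readable secondary might look like the following sketch; the database name and backup path are illustrative. Note that full database backups on a secondary must be copy-only:

```sql
-- Sketch: offloading maintenance to a readable secondary (run on the secondary).
BACKUP DATABASE [db1]
    TO DISK = N'/var/opt/mssql/backup/db1.bak'
    WITH COPY_ONLY, INIT;   -- COPY_ONLY is required for full backups on secondaries

-- Consistency checks can also run here instead of on the primary
DBCC CHECKDB ([db1]) WITH NO_INFOMSGS;
```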
Q: This is great! What happens if the read-only clients are spread across various geographic areas?
The solution for this scenario is Distributed Availability Groups. They allow offloading read workloads from the primary to replicas on sites that are closer to the source of the read workloads. This not only reduces resource utilization on the primary, but also improves read throughput: network latency is reduced, and read-only workloads benefit from dedicated resources.
A single distributed availability group can have up to 17 readable secondary replicas. For increased scaling capacity, users can daisy-chain multiple availability groups to increase the number of readable replicas even further, or configure two distributed availability groups from the same primary availability group.
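As a sketch, a distributed AG joins two existing availability groups over their listeners; the names (distributedag, ag1, ag2) and listener URLs here are illustrative. This would be run on the primary replica of the first AG:

```sql
-- Sketch: spanning two AGs with a distributed availability group.
CREATE AVAILABILITY GROUP [distributedag]
    WITH (DISTRIBUTED)
    AVAILABILITY GROUP ON
        'ag1' WITH (
            LISTENER_URL = 'tcp://ag1-listener:5022',
            AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,  -- typical for cross-site links
            FAILOVER_MODE = MANUAL,
            SEEDING_MODE = AUTOMATIC
        ),
        'ag2' WITH (
            LISTENER_URL = 'tcp://ag2-listener:5022',
            AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
            FAILOVER_MODE = MANUAL,
            SEEDING_MODE = AUTOMATIC
        );
```

Read clients in each geography then connect to the AG closest to them, which is what reduces the network latency mentioned above.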