The document discusses topics related to designing and implementing an SAP HANA infrastructure, including the hardware and software components required for the SAP HANA server, storage, network, backup, and disaster recovery systems. It provides information on sizing SAP HANA systems, certified hardware partners, storage options like TDI, network requirements, security best practices, backup methods, and high availability and disaster recovery strategies. The presentation aims to help with planning and designing the various elements of an SAP HANA infrastructure.
Better performance and cost effectiveness empower better results in the cognitive era. For more information, visit: http://www.ibm.com/systems/power/hardware/linux-lc.html
Spectrum Scale - Diversified analytic solution based on various storage servi... (Wei Gong)
These slides describe diversified analytic solutions based on Spectrum Scale with various deployment modes, such as storage-rich server, shared storage, IBM DeepFlash 150, and Elastic Storage Server. They take a deep dive into several advanced data management features and solutions for BD&A workloads built on Spectrum Scale.
Big data processing meets non-volatile memory: opportunities and challenges (DataWorks Summit)
Advanced big data processing frameworks have been proposed to harness the fast data transmission capability of remote direct memory access (RDMA) over InfiniBand and RoCE. However, with the introduction of non-volatile memory (NVM), these designs, along with the default execution models like MapReduce and Directed Acyclic Graph (DAG), need to be re-assessed to uncover opportunities for further performance gains.
In this context, we propose an accelerated execution framework (NVMD) for MapReduce and DAG that leverages the benefits of NVM and RDMA. NVMD introduces novel features for MapReduce and DAG, such as a hybrid push and pull shuffle mechanism and dynamic adaptation to the network congestion. The design has been incorporated into Apache Hadoop and Tez. Performance results illustrate that NVMD can achieve up to 3.65x and 3.18x improvement for Hadoop and Tez, respectively. In this talk, we will also present NVM-aware HDFS design and its benefits for MapReduce, Spark, and HBase.
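The hybrid push/pull shuffle mentioned above can be illustrated with a simple policy sketch. This is a hypothetical simplification, not NVMD's actual design: the size threshold and congestion cutoff are assumed values chosen only for illustration.

```python
PUSH_THRESHOLD_BYTES = 64 * 1024  # assumed cutoff for eager pushing

def choose_transfer_mode(partition_size, congestion_level):
    """Decide whether a map-output partition is pushed eagerly to the
    reducer or left for the reducer to pull on demand.

    Small partitions are pushed to hide latency; large partitions are
    pulled so reducers control their own bandwidth. Under congestion,
    fall back to pull so the shuffle does not add more traffic (a crude
    stand-in for NVMD's dynamic adaptation to network congestion).
    """
    if congestion_level > 0.8:
        return "pull"
    if partition_size <= PUSH_THRESHOLD_BYTES:
        return "push"
    return "pull"
```

The real framework operates inside Hadoop and Tez and weighs far more signals; the sketch only shows why a hybrid policy can beat either pure push or pure pull.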
Speaker: Shashank Gugnani, PhD Student, Ohio State University
5 Ways to Avoid Server and Application Downtime (Neverfail Group)
Successfully maintaining server and application uptime requires a diligent watch. This presentation outlines five ways you can avoid server and application downtime to ensure your users are always connected to the programs vital to their success.
Dell PowerEdge zero touch provisioning with Auto Config speeds and simplifies server deployment. Using Server Configuration Profiles and your existing data center infrastructure, deploy one or thousands of PowerEdge servers reliably and repeatably. Learn more: http://www.dell.techcenter.com/LC
SUSE plays an important role as a provider of software-based infrastructure solutions for the Big Data world. These solutions are the foundation for scalable, easy-to-manage Big Data deployments that take advantage of the latest advances in computing, containers, storage, and environment management.
SUSE's agreements with the leading software and hardware vendors enable a well-supported approach to the complex ecosystem of enterprise-grade data management.
The rapid growth of in-memory compute applications is not surprising given the tremendous performance gains they can offer. Jobs that used to take hours can now take minutes or seconds because they are no longer subject to the rotational and seek latencies of spinning media. While Flash memory provides some relief, it is still roughly a hundred times slower than the DRAM that in-memory compute applications use as their primary storage.
One drawback to in-memory compute applications is the high cost associated with DRAM. Not only are its acquisition costs an order of magnitude higher than Flash, DRAM also consumes far more power. Power can be a significant issue in data centers and is a major contributor to operational costs. In addition, a single server has limited DRAM capacity, so larger datasets must either find an alternate solution or cope with the nuisance of sharding. Furthermore, to use a server's maximum DRAM capacity, higher-cost DRAM modules must be installed, further escalating the cost of compute.
We discuss a paradigm that allows in-memory computing applications to extend their capacity by utilizing Flash memory, often with minimal performance loss. We give examples of applications that have been modified to use the paradigm and show performance comparisons. We also discuss TCO and the relative cost per transaction of the different solutions.
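The general idea of extending DRAM capacity with Flash can be sketched as a two-tier key-value store: a small hot tier with LRU eviction, spilling cold entries to a larger, slower tier. This is a generic illustration under assumed semantics, not the specific paradigm the talk describes; the Flash tier is simulated here with a plain dict.

```python
from collections import OrderedDict

class TieredStore:
    """Illustrative two-tier store: a capacity-limited DRAM tier backed
    by a larger Flash tier (simulated with an ordinary dict)."""

    def __init__(self, dram_capacity):
        self.dram = OrderedDict()   # hot tier, ordered by recency
        self.flash = {}             # stand-in for a Flash-backed store
        self.dram_capacity = dram_capacity

    def put(self, key, value):
        self.dram[key] = value
        self.dram.move_to_end(key)
        while len(self.dram) > self.dram_capacity:
            # Spill the least recently used entry to the Flash tier.
            cold_key, cold_val = self.dram.popitem(last=False)
            self.flash[cold_key] = cold_val

    def get(self, key):
        if key in self.dram:
            self.dram.move_to_end(key)   # refresh recency
            return self.dram[key]
        value = self.flash.pop(key)      # promote to DRAM on access
        self.put(key, value)
        return value
```

Real systems hide this tiering behind the application's existing allocation or storage API, which is what makes the "minimal performance loss" claim possible for workloads with skewed access patterns.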
Building Apps with Distributed In-Memory Computing Using Apache Geode (PivotalOpenSourceHub)
Slides from the Meetup on Monday, March 7, 2016, just before the beginning of #GeodeSummit, covering an introduction to the technology and community that is Apache Geode, the in-memory data grid.
Nagios Conference 2014 - Jeremy Rust - Avoiding Downtime Using Linux High Ava... (Nagios)
Jeremy Rust's presentation on Avoiding Downtime Using Linux High Availability.
The presentation was given during the Nagios World Conference North America held Oct 13th - Oct 16th, 2014 in Saint Paul, MN. For more information on the conference (including photos and videos), visit: http://go.nagios.com/conference
Supporting Apache HBase: Troubleshooting and Supportability Improvements (DataWorks Summit)
HBase has been in production in hundreds of clusters across the CDH/HDP customer base, and Cloudera/Hortonworks have supported it for many years.
In this talk, based on our support experience, we aim to introduce useful information for troubleshooting HBase clusters efficiently. First off, we (Daisuke at Cloudera support) are going to talk about typical log messages and web UI info which we can use for troubleshooting (especially when struggling with performance issues). Since their meanings have been changing across versions, we would like to show the differences and improvements as well (e.g. HBASE-20232 for memstore flush, HBASE-16972 for slow scanner, HBASE-18469 for request counter, and also HBASE-21207 for sorting in the web UI). We (Toshihiro at Cloudera, a former Hortonworks employee) will also cover some new tools (e.g. HBASE-21926 Profiler Servlet, HBASE-11062 htop, etc.), which should also be useful for performance troubleshooting.
Data Highway Rainbow - Petabyte Scale Event Collection, Transport & Delivery ... (DataWorks Summit)
This paper will present the architecture and features of Data Highway Rainbow, Yahoo’s hosted multi-tenant infrastructure which offers event collection, transport, and aggregated delivery as a service. Data Highway supports collection from multiple data centers and aggregated delivery in primary Yahoo data centers which provide a big data computing cluster. From a delivery perspective, Data Highway supports endpoints/sinks such as HDFS, Storm, and Kafka, with the Storm and Kafka endpoints tailored towards low-latency consumers.
We will also look into the evolution of the service in terms of prominent features added and the motivation behind them, starting from its initial launch; some were customer asks, while others were driven by optimizing the efficiency and footprint of the deployed infrastructure. Some of the features we will touch upon are:
* Delivery Completeness Audit WebService
* Publisher Daemon & Client API Robustness
* Aggregated HDFS File Delivery
* Filters for Low Latency Delivery
* Schema Registry
* Adaptive Rate Limiting
* Various Load Balancing Techniques
* Event Deduplication
Aggregated Daily Metrics:
* Events Ingested: 250 billion
* Bytes Ingested (Uncompressed): 700 terabytes
* Bytes Delivered (Batch + Near Real Time): 1.5 petabytes
* Near Real Time Delivery (Storm & Kafka) Latency: 95th percentile 500 ms - 1 second
* Batch Delivery Latency (Aggregated into 1-minute files): 95th percentile within 3 minutes
* Production H/W Footprint: 651
* Total Active Event Schema Types: ~200
Underlying Technology Stack: ZeroMQ, Apache Avro, libevent, Apache HttpComponents
The paper will conclude with the next steps we’re considering as a logical evolution for Data Highway in light of considerable developments in similar open source projects such as Apache Kafka.
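Adaptive rate limiting, one of the features listed above, is commonly built around a token bucket. The sketch below is a generic, minimal version under assumed semantics, not Yahoo's Data Highway implementation; an adaptive variant would adjust `rate` at runtime based on downstream backpressure.

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: tokens refill continuously at
    `rate` per second up to `capacity`, and each admitted event spends
    one token. Bursts up to `capacity` are allowed."""

    def __init__(self, rate, capacity):
        self.rate = rate              # tokens added per second
        self.capacity = capacity      # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self, cost=1):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False
```

At Data Highway's scale the limiter would sit in the publisher path per tenant or per schema type, so one noisy producer cannot starve the shared transport.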
Simplifying systems management with Dell OpenManage on 13G Dell PowerEdge ser... (Principled Technologies)
Automated systems management and additional connectivity solutions can reduce the number of administrators you need to run your datacenter or simply free up administrators to innovate rather than tying them up with routine management tasks. We found that the Dell OpenManage suite provides several new features for 13G Dell PowerEdge server solutions to streamline management tasks in both time and steps. Other new features let us easily connect to iDRAC right from the server. Updating firmware with Dell OpenManage features was also easier—eliminating 213 steps for updating a single server compared to updating manually.
The latest versions of the Dell OpenManage suite of system management tools and the power of iDRAC 8 contained within Dell 13G servers give administrators increased flexibility and powerful new options for managing their data centers, translating to demonstrable savings in time and administrative effort. These automated enhancements and new technologies enable administrators to manage increasingly larger workloads while reducing the amount of hands-on work required for each system, bringing real value to systems management and datacenter operations.
Cisco & MapR bring 3 Superpowers to SAP HANA Deployments (MapR Technologies)
SAP HANA is an increasingly popular platform for various analytical and transactional use cases with its in-memory architecture. If you’re an SAP customer you’ve experienced the benefits.
However, the underlying storage for SAP HANA is painfully expensive. This slows down your ability to grow your SAP HANA footprint and serve up more applications.
Best Practices to Administer, Operate, and Monitor an SAP HANA System (SAPinsider Events)
Review this session from HANA 2015 in Las Vegas. Coming to Europe! www.HANA2015.com
Best Practices to Administer, Operate, and Monitor an SAP HANA System by Kurt Hollis, Deloitte
This session provides easy to understand, step-by-step instruction for operation and administration of SAP HANA post go-live. Through live demo and detailed instruction, attendees will:
· Learn how to use the SAP HANA studio for security, user management, credential management, high availability administration, system maintenance, and performance optimization
· Gain a comprehensive understanding of available SAP HANA platform lifecycle management tools, deployment options, and system relocation
· Get an introduction to SAP HANA HA/DR capabilities, and learn best practices for backup and recovery of the SAP HANA system
SAP HANA®, which is 10-1000 times faster than SAP on a traditional database, has revolutionized business operations by streamlining transactions, analytics, and data processing on a single, in-memory database so enterprises can operate at the speed of business -- in real time.
AWS re:Invent 2016: Optimizing workloads in SAP HANA with Amazon EC2 X1 Insta... (Amazon Web Services)
AWS and SAP have worked together closely to certify the AWS platform so that companies of all sizes can fully realize all the benefits of the SAP HANA in-memory database platform on the AWS cloud. By placing SAP systems in the cloud, organizations are achieving greater agility, flexibility, and cost efficiency while saving resources to focus on their core businesses. We will discuss recent SAP and AWS innovations including the Amazon EC2 X1 instance type that offers up to 2TB of RAM, and dive into features of the AWS platform that bring significant flexibility to SAP HANA deployments.
In this presentation, Carl Bachor, from AWS Professional Services, takes us on a deep dive into SAP enterprise software and how it is implemented on the AWS cloud.
YASH Technologies at ASUG Minnesota chapter meeting (YASH Technologies)
Presentation by Lon Blake, YASH Technologies on System Landscape Requirements and Essential Considerations to Prepare Your SAP Landscape for SAP S/4HANA
SAP HANA Financial Closing can help you accelerate your financial closing cycle. Benefit from increased governance, higher user efficiency and automation, strong collaboration, and real-time insight.
Many of the world’s largest enterprises are replacing their traditional SAP server environments with SAP running in the AWS Cloud. As well as increasing business agility and scalability, our cloud platform significantly reduces SAP infrastructure and support costs, simplifies operations and contributes directly to the bottom line.
SAP HANA Distinguished Engineer (HDE) Webinar: Overview of SAP HANA On-Premis... (Tomas Krojzl)
This slide deck was used to present public webinar "SAP HANA On-Premise Deployment Options" - additional information including replay information can be found here: http://scn.sap.com/community/hana-in-memory/blog/2016/05/25/sap-hana-distinguished-engineer-hde-webinar-overview-of-sap-hana-on-premise-deployment-options
AWS re:Invent 2016: Technical Tips for Helping SAP Customers Succeed on AWS (... (Amazon Web Services)
In this session, AWS partners, both with and without SAP focused practices, learn how to develop and design services and solutions to help SAP customers migrate to and run on the AWS Cloud. We discuss the different types of services required by SAP customers and how to identify and qualify SAP on AWS opportunities. Based on actual SAP customer projects, we discuss what patterns work, where the potential pitfalls are, and how to ensure a successful SAP on AWS customer project.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ... (James Anderson)
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to release software to market, combined with traditionally slow and manual security checks, has created gaps in continuous security, an important piece of the software supply chain. Today, organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their application supply chains and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerabilities and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
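A deployment bill of materials is, at its core, a signed-off inventory of what was actually deployed. The sketch below assembles a minimal DBOM record; the field names and shape are illustrative assumptions, not the schema the speakers or any standard define.

```python
import hashlib
import json

def build_dbom(deployment_id, artifacts):
    """Assemble a minimal deployment bill of materials.

    `artifacts` is a list of (name, version, content_bytes) tuples; each
    component is recorded with a SHA-256 digest so the deployed bits can
    later be verified against what the DBOM claims was shipped.
    """
    entries = []
    for name, version, content in artifacts:
        entries.append({
            "name": name,
            "version": version,
            "sha256": hashlib.sha256(content).hexdigest(),
        })
    return json.dumps(
        {"deployment": deployment_id, "components": entries},
        sort_keys=True,
    )
```

Recording digests rather than just version strings is what makes the record useful for audit: a version label can be re-tagged, but a content hash pins the exact bytes.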
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work and a knack for helping others understand how things work. He has around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
UiPath Test Automation using UiPath Test Suite series, part 4 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series, part 4. In this session, we will cover a Test Manager overview along with the SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
GridMate - End to end testing is a critical piece to ensure quality and avoid... (ThomasParaiso2)
End to end testing is a critical piece to ensure quality and avoid regressions. In this session, we share our journey building an E2E testing pipeline for GridMate components (LWC and Aura) using Cypress, JSForce, FakerJS…
UiPath Test Automation using UiPath Test Suite series, part 5 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series, part 5. In this session, we will cover CI/CD with DevOps.
Topics covered:
CI/CD within UiPath
End-to-end overview of a CI/CD pipeline with Azure DevOps
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
Transcript: Selling digital books in 2024: Insights from industry leaders - T... (BookNet Canada)
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf (91mobiles)
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor... (Neo4j)
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdf (Paige Cruz)
Monitoring and observability aren’t traditionally found in software curricula, and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is part of our current company’s observability stack.
While the dev and ops silo continues to crumble, many organizations still relegate monitoring and observability to the purview of ops, infra, and SRE teams. This is a mistake: achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party, and will share these foundational concepts to build on.
Climate Impact of Software Testing at Nordic Testing Days (Kari Kakkonen)
My slides at Nordic Testing Days 6.6.2024
The climate impact and sustainability of software testing are discussed in the talk. ICT and testing must carry their part of global responsibility to help with climate warming. We can minimize the carbon footprint, but we can also have a carbon handprint, a positive impact on the climate. Quality characteristics can be extended with sustainability and then measured continuously. Test environments can be used less, at smaller scale, and on demand. Test techniques can be used to optimize or minimize the number of tests. Test automation can be used to speed up testing.
Generative AI Deep Dive: Advancing from Proof of Concept to Production (Aggregage)
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024 (Albert Hoitingh)
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview, including the concepts of Customer Key and Double Key Encryption.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work and a knack for helping others understand how things work. He has around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms, and is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
SAP Sapphire 2024 - ASUG301 building better apps with SAP Fiori.pdf (Peter Spielvogel)
Building better applications for business users with SAP Fiori.
• What is SAP Fiori and why it matters to you
• How a better user experience drives measurable business benefits
• How to get started with SAP Fiori today
• How SAP Fiori elements accelerates application development
• How SAP Build Code includes SAP Fiori tools and other generative artificial intelligence capabilities
• How SAP Fiori paves the way for using AI in SAP apps
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced performance (SOFTTECHHUB)
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
Elizabeth Buie - Older adults: Are we really designing for our future selves?
TechTalkThai webinar SAP HANA
1.
2. DC Specialist, Cisco Systems
SAP HANA infrastructure
Jarut Nakaramaleerat
9-Dec-2016
http://bit.ly/2gfzVhg
3. Today's agenda
• 11.00 – 12.30
• How SAP HANA works
• Required (or recommended) components for an SAP HANA system
• Server design
• Storage design
• Network design
• Backup design
• DR site design
5. Instances of an SAP System
[Diagram: an SAP system distributed across servers, accessed via Web Browser / Fiori — Central Instance 00, Central Services Instance 01, and Dialog Instances 02 and 03 spread over Servers A–C, with the database on Server D]
6. What is SAP HANA?
SAP HANA
• Stands for "SAP High-Performance Analytic Appliance"
SAP Definition
• SAP HANA is a multipurpose, data source-agnostic in-memory appliance software that combines SAP software components optimized on proven hardware, delivered by SAP's leading hardware partners.
• SAP in-memory computing technology accelerates access to data by storing databases in the main memory of a computer.
• SAP HANA is a data platform developed by SAP. At its core is a relational database that supports a wide range of Business Intelligence, ERP, and other enterprise applications.
8. Faster Access with a Column-Based Approach
Row-Based
• Data is stored horizontally
• Querying without indexes and views is I/O intensive
• Building indexes and views is time consuming
• Requires an expanded database footprint
Column-Based
• Data is stored vertically and serves as the index
• Columns are stored separately
• Only the columns used in the query are retrieved
• Reduces I/O dramatically
[Diagram: the row store scans whole rows to produce results, while the column store reads only the referenced columns]
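The row-vs-column contrast above can be sketched in a few lines of Python. This is an illustrative toy model, not HANA internals: it just counts how many fields each layout has to touch to answer a single-column aggregate.

```python
# Toy row store vs. column store (illustrative only, not HANA internals).
rows = [
    {"id": 1, "name": "A", "region": "TH", "amount": 100},
    {"id": 2, "name": "B", "region": "TH", "amount": 250},
    {"id": 3, "name": "C", "region": "SG", "amount": 400},
]

# Column store: one contiguous list per column.
columns = {key: [r[key] for r in rows] for key in rows[0]}

# SELECT SUM(amount): the row store touches every field of every row ...
row_fields_touched = sum(len(r) for r in rows)   # 3 rows x 4 fields = 12
row_total = sum(r["amount"] for r in rows)

# ... while the column store reads only the 'amount' column.
col_fields_touched = len(columns["amount"])      # 3 values
col_total = sum(columns["amount"])

assert row_total == col_total == 750
```

With wide tables (hundreds of columns) and analytic queries touching only a handful of them, this difference is what the slide means by "reduces I/O dramatically".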
9. What Makes SAP HANA So Fast?
[Diagram: classic architecture — transactions and reports run against a disk-based database, with ETL into a separate Business Intelligence stack fed by spreadsheets and external resources — vs. SAP HANA, where transactions, reports, planning, modeling, and what-if analysis all run in memory against one copy of the data, with persistent storage underneath]
11. S/4HANA = Next Gen Business Suite
HANA = Next Gen Platform
12.
13. Required (or recommended) components for an SAP HANA system
• SAP HANA appliance or TDI (server, storage, virtualization)
• Network switch
• Security and patching
• Backup, HA & DR
• Upgrade plan
15. SAP HANA Scalability
Scales from very small servers to very large clusters
Single Server (OLTP/OLAP)
• 2 CPU/128 GB up to 8 CPU/8 TB (special layouts for Suite on HANA or S/4HANA reach 20 TB+ per host)
• Single SAP HANA deployments for data marts or accelerators with performance demands (socket-to-memory ratio)
• Support for high availability and disaster recovery
Scale-Out Cluster (OLAP only)
• 2 to n servers per cluster
• Each server is either 4 CPU/2 TB or 8 CPU/4 TB
• Largest certified configuration: 112 servers
• Largest tested configuration: 250+ servers
• Support for high availability and disaster recovery
Cloud Deployment (OLTP/OLAP)
• SAP HANA instances can be deployed to public clouds
• BYOL or pay-per-use
12-petabyte data warehouse with SAP HANA → Guinness world record
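The scale-out figures above imply straightforward capacity arithmetic. A small sketch, using only the node sizes quoted on the slide (the helper function and any cluster size other than the certified 112-server maximum are illustrative):

```python
# Scale-out capacity arithmetic based on the node sizes quoted above:
# each server is either 4 CPU/2 TB or 8 CPU/4 TB.
TB_PER_NODE = {"4cpu": 2, "8cpu": 4}

def cluster_memory_tb(n_servers: int, node_type: str = "8cpu") -> int:
    """Total in-memory capacity of a homogeneous scale-out cluster."""
    return n_servers * TB_PER_NODE[node_type]

# Largest certified configuration from the slide: 112 servers.
print(cluster_memory_tb(112, "8cpu"))  # 448 TB with 8 CPU/4 TB nodes
print(cluster_memory_tb(112, "4cpu"))  # 224 TB with 4 CPU/2 TB nodes
```

Note that in practice some capacity is reserved for working memory rather than data, so usable data capacity is lower than the raw totals.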
16. SAP HANA Deployment Model
SAP HANA tailored data center integration is an additional option to the existing appliance delivery model
[Diagram: SAP HANA appliance delivery – the full stack (application, database, operating system, virtualization, server, network, storage) is pre-defined and certified as a unit, with storage built into each HANA server – vs. tailored data center integration, where HANA servers connect over a shared network to enterprise storage]
17. List of certified Hardware
http://global1.sap.com/community/ebook/2014-09-02-hana-hardware/enEN/index.html
20. HANA TDI Storage
Pros
• Advanced data management
• Simplified architecture
Cons
• Initial investment
• Need to meet SAP KPIs
[Diagram: HANA appliance vs. HANA servers attached to enterprise storage]
21. HANA TDI Virtualization
Pros
• Virtualization & mobility
• Easy to move to cloud
Cons
• Beware the overhead
• Limited maximum capacity
[Diagram: vHANA virtual machines on VMware vSphere backed by enterprise storage]
22. Hyperconverged Infra for SAP HANA?
• Certified solution: HCI for the SAP application tier combined with a HANA appliance
• Fully virtualized HCI with SAP HANA: non-production only (production support is in review)
http://www.cisco.com/c/dam/en/us/products/collateral/hyperconverged-infrastructure/hyperflex-hx-series/white_paper_c11-738214.pdf
23. Software Defined Storage as HANA TDI
CVD (Cisco Validated Design)
• UCS Managed
• 40G network
• Scalable C240 storage with 24x 1.8 TB HDD
• RHEL or SLES
• MapR Data Platform
• NFS-attached storage
Status
• Certification completed
• CVD in production
• Roadmaps in development
24. Going Live with SAP HANA TDI - High-Level Process
1) Determine the size of your future SAP HANA system
• The SAP Quick Sizer tool is a quick and easy way for customers to determine the CPU, memory, and SAPS requirements for running their workloads on SAP HANA.
• Consider involving SAP Active Global Support for IT landscape planning.
2) Check the offerings of SAP's HANA hardware partners
• See the Certified SAP HANA Hardware Directory site.
3) Order your SAP HANA server hardware
• If you decided to follow the TDI approach, please note:
  o You do not need to order local disks/integrated storage - these are only required for appliances.
  o If you want your SAP HANA system to boot from SAN, additional Fibre Channel adapters are allowed.
4) Check the offerings of certified storage vendors
• Select one from the list of certified storage families.
• If your preferred storage is not yet on the list, ask the vendor about their plans to get certified.
5) Set up your SAP HANA hardware infrastructure
• Familiarize yourself with SAP HANA's I/O patterns and the impact of SAP's data throughput KPIs during daily SAP HANA operation.
• Configure the storage system following the vendor's directions and recommendations.
  o Ask your storage vendor for a copy of their Configuration Guide for SAP HANA.
• Optional: check the data throughput and latency using HWCCT.
  o SAP's KPIs are listed in the tool documentation.
• Contact your storage vendor if the KPIs are not met.
6) Install the SAP HANA software
• Make sure that only certified personnel perform the SAP HANA installation.
• See SAP's installation guides and related SAP Notes at help.sap.com.
7) Go live
• Consider involving SAP Active Global Support to perform a HANA Go-Live Check before going productive.
25. All Applications – One Platform
• SAP BW on HANA (PRD): 4+ node scale-out cluster
• SAP BW on HANA (PRD, virtual): small scale-up HANA systems
• SAP Suite on HANA (PRD): 2+ node scale-up HA cluster
• Non-SAP applications: VDI, SharePoint, Exchange, etc.
• SAP applications (PRD and non-PRD), Hadoop cluster
• Shared storage with storage pools for HANA persistence
• Aggregation and out-of-band management for SAP (Cisco Nexus and UCS Fabric)
28. SAP HANA in Data Centers
Bandwidth considerations for System Replication
• SAP How-To Guide: Network requirements for System Replication (Link)
• 1. "Throughput": for practical reasons, it must be possible to transport the size of the persistently stored data from the primary to the secondary within one day.
• 2. "Latency": in case of SYNC operation, the redo log shipping wait time for 4 KB log buffers must be less than a millisecond, or in the low single-digit millisecond range depending on the application requirements (relevant for synchronous replication only).
[Diagram: bandwidth over time — a baseline of continuous log and delta-data shipping around the average bandwidth need, with occasional peaks]
• Example bandwidth calculation:
  Given: 4.3 TB of persistently stored data (sum of data backup file sizes).
  Throughput: 4.3 TB per day → ~50 MByte/s → ~0.5 GBit/s minimum connection required
• More info in the SAP HANA Network Requirements paper
• SAP Note 1969700 contains, among others, an SQL statement (in an attached zip archive) to estimate the average per-day bandwidth required for SAP HANA System Replication, depending on the data and log amount per day
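The slide's example can be reproduced with a few lines of arithmetic (decimal units assumed, i.e. 1 TB = 10^12 bytes):

```python
# Reproduce the slide's bandwidth estimate: ship one day's worth of
# persisted data (4.3 TB) from primary to secondary within 24 hours.
persisted_tb = 4.3
bytes_per_day = persisted_tb * 1e12   # decimal TB assumed
seconds_per_day = 24 * 60 * 60

mbyte_per_s = bytes_per_day / seconds_per_day / 1e6
gbit_per_s = mbyte_per_s * 8 / 1000

print(round(mbyte_per_s))    # ~50 MByte/s
print(round(gbit_per_s, 1))  # ~0.4 Gbit/s raw; the slide rounds up to 0.5
```

This baseline covers only the steady-state average; as the diagram indicates, the link must also absorb peaks, which is why rounding up (here to 0.5 GBit/s) is the sensible minimum.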
30. SAP HANA Security – data center integration
• User and role provisioning
  • Out-of-the-box connector for SAP NetWeaver Identity Management
  • SQL interface for integration with other identity management solutions
• Compliance infrastructure
  • Out-of-the-box connector for SAP Access Control 10.1
• Standards-based single sign-on infrastructure (Kerberos, SAML)
  • E.g. Microsoft Active Directory
• Logging infrastructure
  • Database audit trail written via Linux syslog
• Antivirus integration via the NW-VSI-compatible interface (XS)
[Diagram: SAP HANA connected to identity management (SQL), compliance (SQL), single sign-on (Kerberos/SAML), logging (syslog), and antivirus infrastructure]
31. SAP HANA – security patching
• Operating systems
  • SUSE Linux Enterprise and Red Hat Enterprise Linux
• Security patches
  • SAP HANA security patches are published as part of the SAP Security Patch strategy (SAP Security Notes)
  • Delivered as SAP HANA revisions
  • Operating system security patches are provided and published by SUSE/Red Hat
• SAP HANA security documentation
  • General information on SAP HANA security: SAP Help Portal
  • Security whitepaper: http://www.saphana.com/docs/DOC-3751
  • Best-practice document on SAP HANA roles (incl. role templates): https://scn.sap.com/docs/DOC-53974
• Important SAP Notes
  • 1598623: SAP HANA appliance: Security (Central Security Note)
  • 1514967: SAP HANA appliance (Central Appliance Note)
  • 1730929: Using external tools in an SAP HANA appliance
  • 1730930: Using antivirus software in an SAP HANA appliance
  • 1730999: Configuration changes in HANA appliance
33. SAP HANA Backup and Recovery
Options for backup: Comparison

File system
• Advantages: consistency checks on block level; ease of use – no explicit backup file management, integrated into Studio; after completion, backups are immediately available for recovery
• Disadvantages: additional storage required; file system fill level needs to be monitored; network load
• Size: payload only – current data (backup size usually smaller than the data area)
• Duration: I/O-bound (reading from data volume, writing to target); network-bound (writing to file system)

Backint
• Advantages: consistency checks on block level; data center integration; additional features, e.g. encryption or de-duplication
• Disadvantages: additional time needed to make backups available for recovery – in case of recovery, backup files must be returned to a staging area; network load; 3rd-party backup tool necessary
• Size: payload only – current data (backup size usually smaller than the data area)
• Duration: I/O-bound (reading from data volume); network-bound (writing to backup server)

Storage snapshot
• Advantages: fast (usually seconds to minutes); negligible network load; first storage partners offer integration in their tools
• Disadvantages: no consistency checks on block level
• Size: ~ size of the data area, but usually compressed/de-duplicated by the storage
• Duration: usually negligible (logical pointers are replicated)
34. SAP HANA Backup and Recovery
Destinations for backups (I)
• Backups to the file system (e.g. NFS storage)
• Data backups can be triggered using:
  • SAP HANA Cockpit
  • SAP HANA Studio
  • SQL commands
• Scheduled with:
  • DBA Cockpit
  • Standard scheduling tools starting SQL commands to initiate operations
• Log backups
  • Written automatically
  • Triggered every 15 minutes or by a finished log segment
• More information:
  • File systems that are not supported: SAP Note 1820529
  • Scheduling using the XS scheduler: SCN blog
[Diagram: backups created via hdbsql or SAP HANA Studio are written from the SAP HANA database to backup storage, e.g. NFS]
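The dual trigger rule for log backups (every 15 minutes or when a log segment fills) can be sketched as a toy model. This is an illustrative simplification, not SAP HANA code; the segment size is an assumed value:

```python
# Toy model of the log-backup trigger rule: a log segment is backed up
# when it fills up, or at the latest when the timeout elapses.
# Not SAP HANA code; the segment size below is an assumed value.

SEGMENT_SIZE_MB = 1024   # assumed log segment size
TIMEOUT_S = 15 * 60      # the 15-minute interval from the slide

def should_back_up(filled_mb: float, seconds_since_last: float) -> bool:
    """Back up when the segment is full OR the timeout has elapsed."""
    return filled_mb >= SEGMENT_SIZE_MB or seconds_since_last >= TIMEOUT_S

# A busy system hits the size trigger; an idle one hits the time trigger.
assert should_back_up(1024, 60)     # segment full after one minute
assert should_back_up(10, 15 * 60)  # nearly empty, but 15 minutes passed
```

The practical consequence is the one used in the HA & DR discussion later: regardless of write volume, no more than roughly 15 minutes of log ever sits unbackuped.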
35. SAP HANA Backup and Recovery
Destinations for backups (II)
Backups to a 3rd-party backup server
• For both data and log backups
• SAP HANA provides an API, "Backint for SAP HANA", through which 3rd-party backup tools can be connected
• Provides functions for backup, recovery, query, and delete
• The 3rd-party backup agent runs on the SAP HANA server, communicating with the 3rd-party backup server
• Backups are transferred via pipe
Direct integration with SAP HANA:
• Data backups to Backint can be triggered/scheduled using SAP HANA Studio, SQL commands, or DBA Cockpit
• Log backups are automatically written to Backint (if configured)
[Diagram: hdbsql or SAP HANA Studio triggers the backup; the Backint agent on the SAP HANA server streams it to the 3rd-party backup server]
36. SAP HANA Backup and Recovery
Backint Certification
• Certification is an installation prerequisite for tools using the "Backint for SAP HANA" API
• SAP Note 1730932 ("Using backup tools with Backint")
• Certified tools (as of June 2016); online listing: Application Development Partner Directory – search for HANA-BRINT and click a partner name → "SAP Certified Solutions" for details

Vendor | Backup tool | Intel arch. | Power arch. | Support process (SAP Note)
Allen Systems | ASG-Time Navigator 4.4 | ✓ | | 2212571
Commvault | Simpana 10.0, Hitachi Data Protection Suite 10 (via Simpana Backint interface) | ✓ | | 1957450
EMC | Networker 8.2 | ✓ | | 1999166
EMC | Interface for Data Domain Boost 1.0 | ✓ | | 1970559
HP | Data Protector 7.0, 8.1, 9.0; StoreOnce Plug-in for SAP HANA 1.0 | ✓ | | 1970558
IBM | Tivoli Storage Manager for Enterprise 6.4 | ✓ | | 1913500
IBM | Spectrum Protect for Enterprise Resource Planning 7.1 | ✓ | | 1913500
Libelle | BusinessShadow 6.0.6 | ✓ | | 2212575
Mindtree | NBU CONNECTOR for SAP HANA | ✓ | | 2330945
SEP | Sesam 4.4 | ✓ | ✓ | 2024234
Veritas (Symantec) | NetBackup 7.7 | ✓ | | 1913568
37. SAP HANA Backup and Recovery
Recovery steps when using a storage snapshot
1. Using the storage tool, transfer the storage snapshot to the data area of the SAP HANA database.
2. Using SAP HANA Studio, recover the database using the storage snapshot as basis (available in the recovery wizard).
Note: All recovery options are available, including point-in-time recovery using log backups/log from the log area.
[Diagram: the storage tool transfers the data snapshot from external storage to the data area (disk); hdbsql or SAP HANA Studio then runs the database recovery]
38. HA & DR Concepts in General
[Diagram: outage timeline — data is synced or backed up before the failure (the RPO window), then the system is detected as down, recovered, and ramps performance until operation resumes (the RTO window)]
KPIs:
• Recovery Point Objective (RPO) = worst-case data loss
• Recovery Time Objective (RTO) = time to recover from an outage

Solution | Used for | Cost | RPO | RTO | Perf. ramp
Backup & Recovery | HA & DR | $ | high | high | med
SAP HANA Host Auto-Failover (scale-out only) | HA | $ | 0 | med | long
SAP HANA Storage Replication w/ QA, Dev. | DR | $$ | 0* | med | long
SAP HANA System Replication | HA & DR | $$$ | 0* | low | short
SAP HANA System Replication w/ QA, Dev. | HA & DR | $**/$$ | 0* | med | long
* synchronous solution   ** single-host installations
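As a toy illustration of the RPO definition above: under a backup-only strategy, the worst-case data loss is bounded by the interval between the backups that survive the outage. The schedule below is an assumed example, not a value from the slides:

```python
# Illustrative RPO arithmetic for a backup-only strategy (assumed schedule).
# RPO = worst-case data loss = time since the last surviving backup.
daily_data_backup_rpo_min = 24 * 60   # data backups only: up to a day lost
log_backup_interval_min = 15          # log backups every 15 minutes

# If log backups are shipped off-site and survive, they bound the RPO.
rpo_with_log_backups_min = log_backup_interval_min

print(daily_data_backup_rpo_min)  # 1440 minutes
print(rpo_with_log_backups_min)   # 15 minutes
```

This is why the table rates plain Backup & Recovery as "high" RPO, while synchronous replication drives the RPO to zero: no committed transaction is acknowledged until it also exists at the secondary.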
42. Worldwide Data Center Setups
Multi-Tier System Replication – Cascading Systems
• Tier 1: production (Data Center 1)
• Tier 2: local shadow with data preload, replicated synchronously from production (Data Center 1)
• Tier 3: remote system/shadow with or without preload, mixed usage together with non-production operation (Data Center 2)
SAP Note 2303243 – SAP HANA Multitier System Replication – supported replication modes between sites with SPS11: ASYNC&ASYNC, SYNCMEM&SYNC
(Of course, distance (latency) will rule the choice of replication mode!)

Tier 1 → 2 | Tier 2 → 3
SYNCMEM | SYNC
ASYNC | ASYNC
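The small table of supported mode pairs can be captured in a few lines — useful when scripting landscape checks. A sketch based only on the two combinations listed above (the helper function itself is hypothetical):

```python
# Supported tier-1→2 / tier-2→3 replication mode pairs for multi-tier
# system replication with SPS11, as listed on the slide (SAP Note 2303243).
SUPPORTED_CHAINS = {("SYNCMEM", "SYNC"), ("ASYNC", "ASYNC")}

def chain_supported(tier1_to_2: str, tier2_to_3: str) -> bool:
    """Check whether a cascading replication chain uses a supported pair."""
    return (tier1_to_2.upper(), tier2_to_3.upper()) in SUPPORTED_CHAINS

assert chain_supported("SYNCMEM", "SYNC")
assert not chain_supported("SYNC", "ASYNC")
```

As the slide notes, even among supported pairs the inter-site distance (latency) decides which one is actually usable.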
44. SAP HANA Timeline
The journey so far
• SPS2 – 27 June 2011: SAP HANA data marts
• SPS3 – 7 Nov. 2011: SAP BW powered by SAP HANA
• SPS4 – 11 May 2012: round-off release
• SPS5 – 29 Nov. 2012: SAP Suite powered by SAP HANA
• SPS6 – mid-2013, SPS7 – end of 2013: real-time data platform
49. Cisco and SAP
Infrastructure
• End-to-end infrastructure for SAP (compute, network, security, storage, backup)
• Top SAP HANA success cases in Thailand
Cisco Advanced Services
• Assessment Service
• Planning and Design Service
• Implementation Service
• Data Load Service
• Optimization Service
Cisco SAP Managed Services
• Remote management of the SAP solution
• 24x7 solution monitoring
• Patching Service
Cloud
• Cisco Powered Service Provider to host the application(s) and SAP HANA
Contact: ciscoth-sap@cisco.com