This document discusses FlexPod and Cisco solutions for SAP, including:
1. FlexPod, a validated reference architecture of Cisco UCS servers and NetApp storage that provides a scalable, simplified platform for various workloads including SAP.
2. FlexPod Datacenter for SAP which provides a multi-tenant infrastructure for hosting multiple SAP systems with integrated data protection, mobility, and high availability capabilities.
3. Certified solutions for SAP HANA including appliances and tailored configurations of Cisco UCS and NetApp storage optimized for HANA performance and scalability.
CloudBridge and NetApp Storage Solutions - The Killer App – NetApp
Among the largest pain points for most businesses are data storage and backup. Learn about the value and best practices of deploying Citrix CloudBridge to help optimize NetApp SnapMirror storage replication.
Oracle Database Consolidation with FlexPod on Cisco UCS – NetApp
Cisco and Oracle, as technology front-runners, provide you with the tools you need to optimize your Oracle environments. John McAbel, Senior Product Manager - Oracle Solutions on UCS at Cisco Systems, explains how NetApp and Cisco are providing a flexible infrastructure that helps prepare organizations for today, and for future business growth and change.
Enabling the Software Defined Data Center for Hybrid IT – NetApp
Recently, NetApp held a Cloud Breakfast for customers of our High Touch Customer Program. This was a combined presentation from OBS, VMware and NetApp.
Presenters:
Jim Sangster, Senior Director, Solutions Marketing, NetApp - "Cloud for the Hybrid Data Center"
John Gilmartin, Vice President, Cloud Infrastructure Products, VMware - "Next Generation of IT"
Axel Haentjens, Vice President, Marketing and International, Orange Cloud for Business - "NetApp Epic Story OBS"
Tim Waldron, Manager, Cloud Solutions, NetApp EMEA - "Cloud Services – An EMEA Perspective"
The Apache Spark config behind the industry's first 100TB Spark SQL benchmark – Lenovo Data Center
Some configurations deserve their own SlideShare entry: this is one of them. When the industry's first 100TB Spark SQL benchmark was reached, the media took notice. For good reason.
Intel, Mellanox, Lenovo and IBM came together to investigate a topology that leveraged advances in CPU, memory, storage and networking to assess the readiness of Spark SQL to harness new capabilities -- and speeds.
Slides: Start Small, Grow Big with a Unified Scale-Out Infrastructure – NetApp
Slides from the on-demand webcast (showcasing customer Cirrity). Learn how NetApp® clustered Data ONTAP® 8.2 can help you scale multiple workloads on a single unified storage platform with support for multiple protocols such as SMB 3.0 and pNFS, and scale the performance of all of your applications, whether on SAN or NAS infrastructure.
Overview of How NetApp IT Runs NetApp Technology in Their Enterprise – NetApp
Highlights of the NetApp-on-NetApp experience: why and how the internal IT teams (both enterprise IT and engineering IT) use NetApp technology, and, most importantly, the results.
Software Defined Storage - Open Framework and Intel® Architecture Technologies – Odinot Stanislas
This presentation gives a fairly detailed introduction to the notion of an "SDS Controller": in summary, the software layer intended to eventually control all storage technologies (SAN, NAS, distributed storage on disk, flash, and so on) and to expose them to cloud orchestrators, and therefore to applications. Lots of good content.
This white paper describes how BlueData enables virtualization of Hadoop and Spark workloads running on Intel architecture.
Even as virtualization has spread throughout the data center, Apache Hadoop continues to be deployed almost exclusively on bare-metal physical servers. Processing overhead and I/O latency typically associated with virtualization have prevented big data architects from virtualizing Hadoop implementations.
As a result, most Hadoop initiatives have been limited in terms of agility, with infrastructure changes such as provisioning a new server for Hadoop often taking weeks or even months. This infrastructure complexity continues to slow down adoption in enterprise deployments. Apache Spark is a relatively new big data technology, but interest is growing rapidly; many of these same deployment challenges apply to on-premises Spark implementations.
The BlueData EPIC software platform addresses these limitations, enabling data center operators to accelerate Hadoop and Spark implementations on Intel architecture-based servers.
For more information, visit intel.com/bigdata and bluedata.com
Hyper-C is OpenStack on Windows Server 2016, based on Nano Server, Hyper-V, Storage Spaces Direct (S2D), and Open vSwitch for Windows. Bare-metal deployment is handled with Cloudbase Solutions Juju charms and MAAS.
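As a rough sketch of how a Juju-plus-MAAS bare-metal deployment like this is typically driven (the charm names below are placeholders for illustration only, not the actual Cloudbase Solutions charms):

```shell
# Sketch only: drive a bare-metal deployment onto MAAS-managed nodes with Juju.
# "hyper-v-compute" and "s2d-storage" are hypothetical charm names.
juju bootstrap maas-cloud                  # stand up a Juju controller on MAAS
juju deploy hyper-v-compute -n 3           # placeholder: Hyper-V/Nano Server compute nodes
juju deploy s2d-storage                    # placeholder: Storage Spaces Direct
juju add-relation hyper-v-compute s2d-storage
juju status                                # watch the deployment converge
```

The point of the charm/MAAS combination is that MAAS handles bare-metal provisioning (PXE boot, OS install) while Juju models the services and their relations on top.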
MOUG17 Keynote: Oracle OpenWorld Major Announcements – Monica Li
Midwest Oracle Users Group Training Day 2017 presentation by Rich Niemiec, Chief Innovation Officer at Viscosity North America.
Catch up on OOW17's top announcements in this one-hour presentation.
Join Postgres experts Bruce Momjian and Marc Linster as they preview everything new in Postgres 12. You don’t want to miss this!
Highlights include:
- New compatibility features
- PostgreSQL: Table access methods
- Partitioning Improvements
Webinar: 4 Ways to Improve NetApp Storage Performance Without Replacing It – Storage Switzerland
A new on-demand webinar with Storage Switzerland Lead Analyst George Crump and Avere Systems Director Chris Bowen. In this webinar, George and Chris discuss why NAS storage performance is so critical, how to balance storage performance and capacity, and four ways to improve storage performance without replacing your existing NAS system.
Bringing NetApp Data ONTAP & Apache CloudStack Together – David La Motta
CloudStack Collaboration Conference - Denver 2014
Details on the integration between NetApp and CloudStack, and key features that make ONTAP the best operating system for the cloud.
SCI Lab Test Validation Report: NetApp Storage Efficiency – NetApp
Silverton Consulting tested a number of NetApp’s widely used software storage efficiency features on a FAS3240 storage system using a mix of data types. The testing was designed to measure the cumulative impact of multiple efficiency technologies when used together, and as a result, several test phases were required.
Organizations today are faced with multiple options for solving dynamic data management challenges. A hybrid cloud, which is a blend of private and public cloud services, is an innovative approach validated by thought leaders as outlined in this infographic. Learn about the benefits of a data fabric designed to take advantage of the full potential of the hybrid cloud.
NetApp Clustered Data ONTAP with Oracle Databases – NetApp
The ESG Lab Validation report documents the results of hands-on testing of NetApp clustered Data ONTAP in Oracle database environments, with a focus on ease of management, non-disruptive operations, and efficient scaling.
Today, CIOs are moving from being builders of apps and operators of data centers to becoming brokers of information services to the business. They're embracing new technologies and new service models that allow them to make IT faster, cheaper, and smarter, and make their companies more responsive and more competitive. Joel Kaufman, Senior Manager, VMware Technical Marketing at NetApp, explains how NetApp's clustered Data ONTAP fits into the software-defined storage discussion.
Neutron Done the SDN Way
Dragonflow is an open-source distributed control-plane implementation of Neutron, the networking service that is an integral part of OpenStack. Dragonflow introduces innovative solutions and features for implementing networking and distributed network services in a manner that is both lightweight and simple to extend, yet targeted at performance-intensive and latency-sensitive applications. Dragonflow aims to solve the performance and scalability challenges of distributed network services in Neutron.
Give Your Confluent Platform Superpowers! (Sandeep Togrika, Intel and Bert Ha... – HostedbyConfluent
Whether you are a die-hard DC comic enthusiast, mad for Marvel, or completely clueless when it comes to comic books, at the end of the day each of us would love to possess the superpower to transform data in seconds versus minutes or days. But architects and developers are challenged with designing and managing platforms that scale elastically and combine event streams with stored data, to enable more contextually rich data analytics. This is made even more complex with data coming from hundreds of sources, and in hundreds of terabytes, or even petabytes, per day.
Now, with Apache Kafka and Intel hardware technology advances, organizations can turn massive volumes of disparate data into actionable insights with the ability to filter, enrich, join and process data instream. Let's consider Information Security. IT leaders need to ensure all company data and IP is secured against threats and vulnerabilities. A combination of real-time event streaming with Confluent Platform and Intel Architecture has enabled threat detection efforts that once took hours to be completed in seconds, while simultaneously reducing technical debt and data processing and storage costs.
In this session, Confluent and Intel architects will share detailed performance benchmarking results and new joint reference architecture. We’ll detail ways to remove Kafka performance bottlenecks, and improve platform resiliency and ensure high availability using Confluent Control Center and Multi-Region Clusters. And we’ll offer up tips for addressing challenges that you may be facing in your own super heroic efforts to design, deploy, and manage your organization’s data platforms.
Manila, an update from Liberty, OpenStack Summit - Tokyo – Sean Cohen
Manila is a community-driven project that presents the management of file shares (e.g. NFS, CIFS, HDFS) as a core service to OpenStack. Manila currently works with a variety of storage platforms, as well as a reference implementation based on a Linux NFS server.
Manila is exploding with new features, use cases, and deployers. In this session, we'll give an update on the new capabilities added in the Liberty release:
• Integration with OpenStack Sahara
• Migration of shares across different storage back-ends
• Support for availability zones (AZs) and share replication across these AZs
• The ability to grow and shrink file shares on demand
• New mount automation framework
• and much more…
We'll also provide a quick look at what's coming up in the Mitaka release, with a Share Replication demo.
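For context, several of these operations map to simple Manila CLI calls; a minimal sketch, assuming an authenticated OpenStack environment and example share names:

```shell
# Assumes OpenStack credentials are sourced and a Manila back end is configured.
manila create NFS 1 --name demo-share      # create a 1 GB NFS share
manila extend demo-share 2                 # grow the share to 2 GB on demand
manila shrink demo-share 1                 # shrink it back to 1 GB
manila list                                # confirm the share's status and size
```

The share name and sizes here are illustrative; the protocol argument (NFS, CIFS, etc.) selects among the file-share protocols Manila manages.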
Improve performance and gain room to grow by easily migrating to a modern Ope... – Principled Technologies
We deployed this modern environment, then migrated database VMs from legacy servers and saw performance improvements that support consolidation.
Conclusion
If your organization’s transactional databases are running on gear that is several years old, you have much to gain by upgrading to modern servers with new processors and networking components and an OpenShift environment. In our testing, a modern OpenShift environment with a cluster of three Dell PowerEdge R7615 servers with 4th Generation AMD EPYC processors and high-speed 100Gb Broadcom NICs outperformed a legacy environment with MySQL VMs running on a cluster of three Dell PowerEdge R7515 servers with 3rd Generation AMD EPYC processors and 25Gb Broadcom NICs. We also easily migrated a VM from the legacy environment to the modern environment, with only a few steps required to set up and less than ten minutes of hands-on time. The performance advantage of the modern servers would allow a company to reduce the number of servers necessary to perform a given amount of database work, thus lowering operational expenditures such as power and cooling and IT staff time for maintenance. The high-speed 100Gb Broadcom NICs in this solution also give companies better network performance and networking capacity to grow as they embrace emerging technologies such as AI that put great demands on networks.
VMworld 2013: Architecting Oracle Databases on vSphere 5 with NetApp Storage – VMworld
VMworld 2013
Greg Loughmiller, NetApp
Kannan Mani, VMware
Learn more about VMworld and register at http://www.vmworld.com/index.jspa?src=socmed-vmworld-slideshare
VMworld 2013: How SRP Delivers More Than Power to Their Customers – VMworld
VMworld 2013
Sheldon Brown, SRP
Girish Manmadkar, VMware
Learn more about VMworld and register at http://www.vmworld.com/index.jspa?src=socmed-vmworld-slideshare
DevOps the NetApp Way: 10 Rules for Forming a DevOps Team – NetApp
Does your enterprise IT organization practice DevOps without a common team approach? To create a standardized way for development and operations teams to work together at NetApp, the IT team differentiates a DevOps team from a regular development team based on these 10 rules.
Spot Lets NetApp Get the Most Out of the Cloud – NetApp
Prior to NetApp acquiring Spot.io, two of its IT teams had adopted Spot in their operations: Product Engineering for Cloud Volumes ONTAP test automation and NetApp IT for corporate business applications. Check out the results in this infographic.
NetApp has fully embraced tools that allow for seamless, collaborative work from home, and as a result was fully prepared to minimize COVID-19's impact on how we conduct business. Check out this infographic for a look at results from the new remote work reality.
4 Ways FlexPod Forms the Foundation for Cisco and NetApp Success – NetApp
At Cisco and NetApp, seeing our customers succeed in their digital transformations means that we’ve succeeded too. But that’s only one of the ways we measure our performance. What’s another way? Hearing how our wide-ranging IT support helps Cisco and NetApp thrive. Here’s what makes FlexPod an indispensable part of Cisco’s and NetApp’s IT departments.
With the widespread adoption of hybrid multicloud as the de-facto architecture for the enterprise, organizations everywhere are modernizing to deliver tangible business value around data-intensive applications and workloads such as AI-driven IoT and Hyperledgers. Shifting from on-premises to public cloud services, private clouds, and moving from disk to flash – sometimes concurrently – opens the door to enormous potential, but also the unintended consequence of IT complexity.
10 Reasons Why Your SAP Applications Belong on NetApp – NetApp
NetApp has been supporting SAP for 20 years, delivering advanced solutions for SAP applications. Here are 10 reasons why your SAP applications belong on NetApp!
Redefining HCI: How to Go from Hyper Converged to Hybrid Cloud Infrastructure – NetApp
The hyper converged infrastructure (HCI) market is entering a new phase of maturity. A modern HCI solution requires a private cloud platform that integrates with public clouds to create a consistent hybrid multi-cloud experience.
During this webinar, NetApp and an IDC guest speaker covered what led to the next generation of hyper converged infrastructure and which five capabilities are required to go from hyper converged to hybrid cloud infrastructure.
As we enter 2019, what stands out is how trends in business and technology are connected by common themes. For example, AI is at the heart of trends in development, data management, and delivery of applications and services at the edge, core, and cloud. Also essential are containerization as a critical enabling technology and the increasing intelligence of IoT devices at the edge. Navigating the tempests of transformation are developers, whose requirements are driving the rapid creation of new paradigms and technologies that they must then master in pursuit of long-term competitive advantage. Here are some of our perspectives and predictions for 2019.
Artificial Intelligence Is a Top Priority in German Companies – NetApp
According to a recent survey by NetApp, the leading data management specialist in the hybrid cloud, artificial intelligence (AI) is becoming increasingly relevant in German companies.
Hyperconvergence: How It Improves the Economics of Your IT – NetApp
In this NetApp webinar we present how NetApp HCI helps improve the economics of IT: accelerating and assuring performance for each application; simplifying your data center and making your architecture more scalable by reducing waste; implementing and expanding your HCI infrastructure quickly and inexpensively; and making management even simpler and more intuitive, saving time and using the skills you already have in the company.
NetApp IT’s Tiered Archive Approach for Active IQ – NetApp
NetApp AutoSupport technology proactively monitors the health of NetApp systems installed at customers' locations and provides 24/7 actionable intelligence to optimize their storage environments. The amount of data NetApp receives doubles approximately every 16 months. To manage the swelling waves of data to archive, NetApp IT sought a more flexible solution.
GraphRAG Is All You Need? LLM & Knowledge Graph – Guy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Connector Corner: Automate dynamic content and events by pushing a button – DianaGray10
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... – James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to market, combined with traditionally slow and manual security checks, has created gaps in continuous security, an important piece of the software supply chain. Today, organizations feel more susceptible to external and internal cyber threats due to the vast attack surface of their application supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work and a knack for helping others understand how they work. He has around 20 years of solution-engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Neuro-symbolic is not enough, we need neuro-*semantic* – Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
UiPath Test Automation using UiPath Test Suite series, part 4 – DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test-management principles while learning practical approaches to optimizing testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Essentials of Automations: Optimizing FME Workflows with Parameters – Safe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
DevOps and Testing slides at DASA Connect – Kari Kakkonen
Slides by me and Rik Marselis from the DASA Connect conference on 30 May 2024. We discuss what testing is, what agile testing is, and finally what testing in DevOps looks like. We closed with a lovely workshop in which participants explored different ways to think about quality and testing in the different parts of the DevOps infinity loop.
Kubernetes & AI - Beauty and the Beast!?! @KCD Istanbul 2024 – Tobias Schneck
As AI technology pushes into IT, I found myself wondering, as an "infrastructure container Kubernetes guy," how this fancy AI technology gets managed from an infrastructure operations point of view. Is it possible to apply our lovely cloud-native principles as well? What benefits could the two technologies bring to each other?
Let me take these questions and lead you on a short journey through existing deployment models and use cases for AI software. With practical examples, we discuss what cloud or on-premises strategy we may need to apply them to our own infrastructure from an enterprise perspective. I want to give an overview of infrastructure requirements and technologies, and of what could benefit or limit your AI use cases in an enterprise environment. An interactive demo will share some insights into the approaches I already have working for real.
Elevating Tactical DDD Patterns Through Object Calisthenics – Dorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
JMeter webinar - integration with InfluxDB and GrafanaRTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Transcript: Selling digital books in 2024: Insights from industry leaders - T...BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if sometime changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
From Siloed Products to Connected Ecosystem: Building a Sustainable and Scala...
Joint NetApp and Cisco Solutions for SAP: FlexPod and HANA
1. Joint NetApp and Cisco Solutions for SAP: FlexPod and HANA
Craig Sullivan, SAP Solutions Architect, NetApp
2. FlexPod: The Global Leader in Integrated Infrastructure
- Standard, prevalidated, best-in-class infrastructure building blocks:
  - Cisco UCS™ B-Series Blade Servers and Cisco UCS Manager
  - Cisco Nexus® family switches
  - NetApp® FAS storage
  - 10GbE and FCoE connectivity
- Flexible: one platform scales to fit many environments and mixed workloads
  - Add applications and workloads
  - Scale up and out
- Simplified management and repeatable deployments
- Design and sizing guides
- Services to facilitate deployment of different environments
3. FlexPod Datacenter for SAP: Solution Overview
- Application virtualization: SAP® Adaptive Computing
- Server virtualization: VMware® vSphere®
- Compute node virtualization: Cisco Unified Computing System™
- Network virtualization: Cisco Nexus® 1000V and Nexus 5000
- Storage virtualization: NetApp® MultiStore®
- Extensible, open management across the stack
4. Tenant Components
- Infrastructure tenant
  - Computing: ACC, SMSAP, vSphere®, DNS
  - Network: VLAN, routing, firewall
  - Storage: primary and backup vFiler® units
- Tenant 2
  - Computing: DHCP, DNS, NIS
  - Network: VLAN, routing, firewall
  - Storage: primary and backup vFiler units
- Tenant 3
  - Computing: DHCP, DNS, NIS
  - Network: VLAN, routing, firewall
  - Storage: primary and backup vFiler units
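On the storage layer, each tenant maps to its own set of MultiStore vFiler units with an isolated network. As a rough illustration of how such a tenant might be carved out on a Data ONTAP 7-Mode controller, the sketch below only composes the CLI commands (a dry run); the vFiler, IP space, volume, and address names are all hypothetical, and the actual FlexPod deployment guides script this differently:

```python
# Sketch: compose Data ONTAP 7-Mode commands for one tenant's primary
# vFiler unit. Commands are only printed here (dry run); in a real
# deployment they would be sent to the controller, e.g. over ssh.
# All names, volumes, IPs, and IP spaces below are hypothetical examples.

def tenant_vfiler_commands(tenant: str, ipspace: str, ip: str, root_vol: str) -> list[str]:
    """Return the command sequence for one tenant's primary vFiler unit."""
    return [
        f"ipspace create {ipspace}",         # isolate the tenant's IP routing
        f"vol create {root_vol} aggr1 20g",  # root volume for the vFiler unit
        f"vfiler create vf_{tenant} -s {ipspace} -i {ip} /vol/{root_vol}",
    ]

if __name__ == "__main__":
    for cmd in tenant_vfiler_commands("tenant2", "ips_tenant2", "192.168.2.10", "vf_tenant2_root"):
        print(cmd)
```

A backup vFiler unit on the partner controller would be created the same way, which is what gives each tenant the primary/backup pair shown above.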
5. Integrated Data Protection for All Data
- Operating systems: VMware® vSphere® integrated backup with SnapManager® for VI; data replication with Protection Manager
- Non-application data: backups with Protection Manager; data replication with Protection Manager
- Application data: SAP® integrated backup with SnapManager for SAP; data replication with Protection Manager
- All backups flow from the primary vFiler® units to the backup vFiler units
10. System Refresh
FlexPod® approach (effort: several hours; typically 20x faster and fully automated):
- Automated data extraction
- Storage cloning
- Automated OS/DB postprocessing
- Automated SAP® postprocessing
Standard approach (effort: several days):
- Extract source data
- Restore from tape
- OS/DB postprocessing
- SAP postprocessing
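The storage-cloning step is what removes the tape restore from the refresh: instead of copying data back, a writable clone of a Snapshot copy is presented to the target system. As a hedged sketch of the idea on Data ONTAP 7-Mode (volume and Snapshot names are hypothetical, and the real automation also drives the OS/DB and SAP postprocessing):

```python
# Sketch: clone-based system refresh on Data ONTAP 7-Mode (dry run --
# the commands are only printed). Volume and Snapshot names are
# hypothetical examples, not the names used by the actual automation.

def refresh_commands(src_vol: str, snap: str, clone_vol: str) -> list[str]:
    return [
        # 1. Consistent Snapshot copy of the source system's data volume
        f"snap create {src_vol} {snap}",
        # 2. Writable FlexClone volume backed by that Snapshot copy --
        #    near-instant, with no full data copy as in a tape restore
        f"vol clone create {clone_vol} -s none -b {src_vol} {snap}",
    ]

if __name__ == "__main__":
    for cmd in refresh_commands("prd_data", "refresh_snap", "qas_data"):
        print(cmd)
    # OS/DB and SAP postprocessing (hostnames, SID changes, BDLS, etc.)
    # would follow, driven by the automation framework.
```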
12. The SAP HANA + NetApp Building Block
- Simplicity and performance
- Building-block architecture scales without logical limits
- Shared storage for the nodes is mandatory
- SAP HANA database: 4 x 512 GB compute nodes
- 10Gb Ethernet and NFS: standards-based access and interconnect
- Storage: FAS3250AE, ready to scale out by block
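The building block attaches the HANA nodes to the FAS storage over plain NFS on 10Gb Ethernet, so a data volume is just an NFS mount on each Linux node. As an illustration, the sketch below composes such a mount command (dry run); the storage hostname, export path, and mount point are hypothetical, and the option values are typical NFSv3 examples rather than the certified set from the deployment guides:

```python
# Sketch: compose an NFSv3 mount command for a HANA data volume
# (dry run -- the command is only printed, not executed). Hostname,
# export, and mount point are hypothetical; the mount options are
# typical examples, not the authoritative certified values.

NFS_OPTS = "rw,bg,hard,timeo=600,rsize=65536,wsize=65536,nfsvers=3,tcp"

def mount_command(storage_host: str, export: str, mountpoint: str) -> str:
    return f"mount -t nfs -o {NFS_OPTS} {storage_host}:{export} {mountpoint}"

if __name__ == "__main__":
    print(mount_command("fas3250-lif1", "/vol/hana_data1", "/hana/data/mnt00001"))
```

The `hard` and long-timeout options matter here because a HANA data volume must never silently return I/O errors during a brief storage failover.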
13. Cisco Certified Appliances

Building block  May 2012               May 2013               Planned
Compute         4 x compute nodes      4 x compute nodes      6 x compute nodes
Storage         1 x FAS3240A           1 x FAS3250AE          1 x FAS3250AE
Disk shelves    2 x DS4243, 600GB 15K  2 x DS4243, 600GB 15K  3 x DS2246, 600GB 10K
14. Tailored Datacenter Integration
- Ability to certify on-site, non-appliance configurations
- SAP HANA database: 6 x 512 GB compute nodes
- 10Gb Ethernet, NFS
- Storage: FAS6290HA
15. DR Building Block (Cisco)
- SAP HANA production: four 512 GB HANA nodes (Data 1-4) attached over 10GbE to two active storage controllers
- Controllers B and D are not actively used during normal operation
- HA pair MetroCluster 1: Controller A (SourceA, Data 1/2) mirrored over 8Gb FC through MDS switches to Controller B (MirrorA)
- HA pair MetroCluster 2: Controller C (SourceC, Data 3/4) mirrored over 8Gb FC to Controller D (MirrorC)
16. Servers at DR Site for Dev/Test
- Servers at the DR site are used for Dev/Test, but must have their own storage and network
- SAP HANA production: four 512 GB nodes (Data 1-4) at the primary site
- SAP HANA Dev/Test: 512 GB nodes at the DR site, attached to separate Dev/Test storage
- Connectivity as in the DR building block: 10GbE to the nodes; 8Gb FC HA pairs with Controller A (SourceA, Data 1/2) mirrored to Controller B (MirrorA), and Controller C (SourceC, Data 3/4) mirrored to Controller D (MirrorC)
17. Site Failover
- Dev/Test systems are shut down, and their servers are used for production
- SAP HANA Production DR takes over: four 512 GB nodes (Data 1-4) running at the DR site
- Storage layout is unchanged: 10GbE to the nodes; 8Gb FC HA pairs with SourceA/MirrorA (Data 1/2) and SourceC/MirrorC (Data 3/4)
18. NetApp Snapshot Backup
SAP HANA production database: four compute nodes over 10Gb Ethernet and NFS, with data and log volumes Data 1-4 and Log 1-4.
- Step 1: Snap Creator triggers an SAP HANA global synchronized backup savepoint via an hdbsql statement (SAP note 1703435)
- Step 2: Snapshot copies are created on the storage layer
- Step 3: Snapshot copies can be replicated asynchronously on the storage layer to the backup volumes
- Backup completes in minutes, without impact on production
- Restore in minutes
- With the current HANA release, a storage-based Snapshot can only be restored; a forward recovery is not yet possible
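The three steps above can be sketched as command composition. The sketch below is a dry run that only prints the commands it would issue; the host, port, user, and volume names are hypothetical, and the exact SQL for preparing and confirming a database snapshot depends on the HANA revision (SAP note 1703435 is the authoritative reference):

```python
# Sketch: compose the commands behind a storage Snapshot backup of HANA
# (dry run -- commands are only printed, nothing is executed). Host,
# port, user, and volume names are hypothetical; the exact hdbsql
# statements depend on the HANA revision (see SAP note 1703435).

HDBSQL = "hdbsql -n hanahost:30015 -u BACKUP_OPERATOR -p '***'"

def snapshot_backup_commands(data_vols: list[str]) -> list[str]:
    cmds = [
        # Step 1: global synchronized backup savepoint (snapshot prepare)
        f'{HDBSQL} "BACKUP DATA CREATE SNAPSHOT"',
    ]
    # Step 2: Snapshot copies on the storage layer, one per data volume
    cmds += [f"snap create {vol} hana_backup" for vol in data_vols]
    # Confirm the snapshot to HANA so its backup catalog records it
    # (<id> is the backup ID returned by HANA for the prepared snapshot)
    cmds.append(f'{HDBSQL} "BACKUP DATA CLOSE SNAPSHOT BACKUP_ID <id> SUCCESSFUL \'storage snap\'"')
    # Step 3: asynchronous replication to the backup volumes would follow,
    # via replication relationships maintained on the storage layer.
    return cmds

if __name__ == "__main__":
    for cmd in snapshot_backup_commands(["data1", "data2", "data3", "data4"]):
        print(cmd)
```

Because the savepoint makes all four data volumes globally consistent before the Snapshot copies are taken, the backup window is minutes regardless of database size, which is the property the slide highlights.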