The document discusses Cisco UCS with NetApp storage for SAP HANA solutions. It provides an overview of Cisco UCS and how it delivers a unified system for compute, network, virtualization, and storage access. It also covers NetApp storage solutions and how Cisco and NetApp together support SAP HANA deployments, including high availability, disaster recovery, and data migration services.
Oracle Database Consolidation with FlexPod on Cisco UCS (NetApp)
Cisco and Oracle as technology front-runners provide YOU the tools you need to optimize your Oracle environments! John McAbel, Senior Product Manager - Oracle Solutions on UCS at Cisco Systems, explains how NetApp and Cisco are providing a flexible infrastructure that helps prepare organizations for today, and for future business growth and change.
CloudBridge and NetApp Storage Solutions - The Killer App (NetApp)
Among the largest pain points for most businesses are data storage and backup. Learn about the value and best practices of deploying Citrix CloudBridge to help optimize NetApp SnapMirror storage replication.
Enabling the Software Defined Data Center for Hybrid IT (NetApp)
Recently, NetApp held a Cloud Breakfast for customers of our High Touch Customer Program. This was a combined presentation from OBS, VMware and NetApp.
Presenters:
Jim Sangster, Senior Director, Solutions Marketing, NetApp - "Cloud for the Hybrid Data Center"
John Gilmartin, Vice President, Cloud Infrastructure Products, VMware - "Next Generation of IT"
Axel Haentjens, Vice President, Marketing and International, Orange Cloud for Business - "NetApp Epic Story OBS"
Tim Waldron, Manager, Cloud Solutions, NetApp EMEA - "Cloud Services – An EMEA Perspective"
Overview of how NetApp IT Runs NetApp Technology in Their Enterprise (NetApp)
Highlights of the NetApp-on-NetApp experience: the "why" and "how" of the internal IT teams (both enterprise IT and engineering IT) using NetApp technology, and, most importantly, the "results."
EMEA TechTalk – The NetApp Flash Optimized Portfolio (NetApp)
EMEA TechTalk – October 7th, 2014 - Learn how NetApp Flash Optimized Storage improves application performance and reduces storage capacity requirements, costs, and complexity in the data centre.
NetApp IT Data Center Strategies to Enable Digital Transformation (NetApp)
During an Insight Las Vegas 2017 breakout presentation, NetApp IT Customer-1 Director Stan Cox and Senior Storage Architect Eduardo Rivera explained how NetApp IT enables digital transformation with data center strategies that incorporate ONTAP AFF systems in the data center to save power, cooling, and space, along with NetApp Private Storage and ONTAP Cloud to leverage the public cloud while retaining control of their data. Using OnCommand Insight for data center management—and its integration with their configuration management database—the NetApp IT team knows what’s in their data centers in terms of functionality, usage, and interconnections. NetApp IT believes knowing what’s in your data centers is fundamental to maintaining total cost of ownership, adapting to new technologies, leveraging the cloud while owning your data, and enabling digital transformation.
Today, CIOs are moving from being builders of apps and operators of data centers to becoming brokers of information services to the business. They're embracing new technologies and new service models that allow them to make IT faster, cheaper, and smarter, and make their companies more responsive and more competitive. Joel Kaufman, Senior Manager, VMware Technical Marketing at NetApp, explains how NetApp's clustered Data ONTAP fits into the software-defined storage discussion.
During the second half of 2016, IBM built a state of the art Hadoop cluster with the aim of running massive scale workloads. The amount of data available to derive insights continues to grow exponentially in this increasingly connected era, resulting in larger and larger data lakes year after year. SQL remains one of the most commonly used languages used to perform such analysis, but how do today’s SQL-over-Hadoop engines stack up to real BIG data? To find out, we decided to run a derivative of the popular TPC-DS benchmark using a 100 TB dataset, which stresses both the performance and SQL support of data warehousing solutions! Over the course of the project, we encountered a number of challenges such as poor query execution plans, uneven distribution of work, out of memory errors, and more. Join this session to learn how we tackled such challenges and the type of tuning that was required to the various layers in the Hadoop stack (including HDFS, YARN, and Spark) to run SQL-on-Hadoop engines such as Spark SQL 2.0 and IBM Big SQL at scale!
Speaker
Simon Harris, Cognitive Analytics, IBM Research
Need For Speed - Using Flash Storage to optimise performance and reduce costs-... (NetAppUK)
Flash Storage technologies are opening up a wealth of new opportunities for improving the optimisation of applications, data and storage, as well as reducing costs. In this session, Peter Mason, NetApp Consulting Systems Engineer, shares his experiences and discusses the use and impact of different Flash technologies.
NetApp HCI
Hyper Converged Infrastructure (HCI) continues to evolve rapidly to meet the expectations of the enterprise. First-generation HCI platforms achieved an immediate return on investment and met a simple set of goals to achieve rapid adoption and success:
• The ability to collapse and consolidate large traditional infrastructures to reduce capital expenditures (CAPEX)
• Reduction in operating expenses (OPEX) through simplified management tools, reduced complexity, and less dependency on specialized technical resources
NetApp enterprise All Flash Storage
This presentation provides the key messages and differentiation, value propositions, and promotional programs for AFF.
Oracle is strengthening its position in the cloud computing market by acquiring Ravello Systems, the leader in the nested virtualization market, and rapidly developing solutions for moving on-premise capacity to the cloud.
OpenStack at the speed of business with SolidFire & Red Hat (NetApp)
When it comes to OpenStack® and the enterprise, it’s critical that you can rapidly deploy a plug-and-play solution that delivers mixed workload capabilities on a shared infrastructure. Join Red Hat and SolidFire to see how Agile Infrastructure for OpenStack can help your cloud move at the speed of business.
This ESG Lab Validation Report presents the hands-on evaluation and testing results of the NetApp FAS2200 series with Flash Pool. ESG Lab focused on key areas that make the FAS2200 an attractive offering for midsized businesses and distributed enterprises: cost-effective mixed workload performance, ease of implementation, and storage efficiency.
Webinar: How To Use Software Defined Storage to Extend Your SAN, Not Replace It (Storage Switzerland)
Join Storage Switzerland and ioFABRIC for this on-demand webinar, "How to use Software Defined Storage to extend your SAN, not replace it". We discuss the different types of software defined storage, why vendors want to replace your SAN instead of enhancing it, and what you can do not only to protect your current storage investments but also to prepare a path to the future.
MongoDB Europe 2016 - Deploying MongoDB on NetApp storage (MongoDB)
Customer and business requirements are shifting constantly. Today’s powerful programming languages can keep up—but what about your database? NetApp® MongoDB solutions offer a flexible, scalable answer. Learn how NetApp storage solutions will accelerate your MongoDB performance, reduce operational costs, and provide the highest levels of availability and security. These solutions provide advanced fault-recovery features and easy, in-service growth capabilities to accommodate your unpredictable, ever-changing business demands. NetApp storage is designed to help you build a high-performance, cost-efficient, and highly available analytics solution, so you can focus on adding real business value.
How to shut down and power up the NetApp cluster-mode storage system (Saroj Sahu)
This slide deck guides you through shutting down and powering up a NetApp cluster-mode storage system from the command line. It depicts the environmental shutdown process for a SAN environment in a data center.
Building Cloud-Native Applications with OpenStack Platform9
Lately, the industry has been filled with talk about building cloud-native applications. But what does it mean to build cloud-native applications, and how should users approach this changing world? In this webinar, we will provide a better understanding of what makes cloud-native applications different from more traditional applications. We will also review how you can begin preparing your IT infrastructure for hosting cloud-native applications. Technologies we will cover as part of this cloud-native world include:
- OpenStack
- Docker
- CoreOS
- Kubernetes
- Zookeeper
Presented during the Open Source Conference 2012, organized by Accenture and Red Hat on December 14th, 2012. This presentation discusses an open source Big Data case study.
By Jonathan Bender, Consultant, Accenture Technology Labs
AWS Summit 2013 | Singapore - NetApp Private Storage for AWS with Equinix, Pr... (Amazon Web Services)
Cloud computing is going prime time. Organizations can no longer ignore the benefits of the cloud; instead, they must architect their network models to combine new cloud offerings with existing on-premise infrastructure.
Join Clement and Scott to learn how NetApp® Private Storage for AWS with Equinix allows enterprise and mid-market customers to build an agile cloud infrastructure that balances private and cloud resources to best meet their business needs.
VMworld 2013: Low-Cost, High-Performance Storage for VMware Horizon Desktops (VMworld)
VMworld 2013
Courtney Burry, VMware
Donal Geary, VMware
Tristan Todd, VMware
Learn more about VMworld and register at http://www.vmworld.com/index.jspa?src=socmed-vmworld-slideshare
VMworld 2013: Architecting Oracle Databases on vSphere 5 with NetApp Storage (VMworld)
VMworld 2013
Greg Loughmiller, NetApp
Kannan Mani, VMware
Learn more about VMworld and register at http://www.vmworld.com/index.jspa?src=socmed-vmworld-slideshare
The Search Is Over: Integrating Solr and Hadoop in the Same Cluster to Simpli... (lucenerevolution)
Presented by M.C. Srivas | MapR. See conference video - http://www.lucidimagination.com/devzone/events/conferences/lucene-revolution-2012
This session addresses the biggest issue facing Big Data: search, discovery, and analytics need to be integrated. Creating and maintaining separate Solr and Hadoop clusters is time consuming, error prone, and difficult to keep in sync, and most Hadoop installations do not integrate Solr within the same cluster. Find out how to easily integrate these capabilities into a single cluster. The session will also touch on some of the technical aspects of Big Data search, including how to: protect against the silent index corruption that permeates large distributed clusters, overcome the shard distribution problem by leveraging Hadoop to ensure accurate distributed search results, and provide real-time indexing for distributed search, including support for streaming data capture. Srivas will also share relevant experiences from his days at Google, where he ran one of the major search infrastructure teams and GFS, BigTable, and MapReduce were used extensively.
Raleigh NC Docker Meetup presentation from October 21, 2015.
Doing development and test for an application on your laptop or a single server is an effective and quick way to ensure functionality, but can be limiting for larger applications. When the application is ready to deploy into production using containers, you’re going to want to use more than one server to increase resources and provide distributed availability. There are a large number of orchestration platforms for Docker which allow you to define an application consisting of one or more containers, and then coordinate deploying the application across small and large farms of servers. In this session we will concentrate on the three most popular orchestration platforms: Mesos, Kubernetes, and Docker Swarm. We will learn how each of them defines an application, some of their strengths, some weaknesses, and why you might choose one over another.
Presentations | Exclusive Cisco and NetApp Dinner | June 27, 2012 | Spett... (Softcorp)
Softcorp, in partnership with NetApp and Cisco, hosted a special dinner on FlexPod™ technology.
During the event, attendees could learn about the solution's benefits and put technical, operational, and consulting questions to specialists from the three companies.
It was also a good opportunity to exchange experiences with other professionals in the industry.
To lighten the mood, there was a talk with good tips on cuts of meat and the secrets of a good grill master for a successful barbecue.
Cisco & MapR bring 3 Superpowers to SAP HANA Deployments (MapR Technologies)
SAP HANA is an increasingly popular platform for various analytical and transactional use cases with its in-memory architecture. If you’re an SAP customer you’ve experienced the benefits.
However, the underlying storage for SAP HANA is painfully expensive. This slows down your ability to grow your SAP HANA footprint and serve up more applications.
Watch a replay of the webinar: https://www.youtube.com/watch?v=BtzPgLBy56w
451 Research and NuoDB outline the key database criteria for cloud applications. Explore how applications deployed in the cloud require a combination of standard functionality, such as ANSI SQL, and new capabilities specifically required to take full advantage of cloud economics, such as elastic scalability and continuous availability.
Adaptive computing and a pay-as-you-go model for SAP.
Customers should stop buying on CAPEX and signing long-term contracts; it is time to move to consumption-based computing.
AWS re:Invent 2016: Optimizing workloads in SAP HANA with Amazon EC2 X1 Insta... (Amazon Web Services)
AWS and SAP have worked together closely to certify the AWS platform so that companies of all sizes can fully realize all the benefits of the SAP HANA in-memory database platform on the AWS cloud. By placing SAP systems in the cloud, organizations are achieving greater agility, flexibility, and cost efficiency while saving resources to focus on their core businesses. We will discuss recent SAP and AWS innovations including the Amazon EC2 X1 instance type that offers up to 2TB of RAM, and dive into features of the AWS platform that bring significant flexibility to SAP HANA deployments.
DevOps the NetApp Way: 10 Rules for Forming a DevOps Team (NetApp)
Does your enterprise IT organization practice DevOps without a common team approach? To create a standardized way for development and operations teams to work together at NetApp, the IT team differentiates a DevOps team from a regular development team based on these 10 rules.
Spot Lets NetApp Get the Most Out of the Cloud (NetApp)
Prior to NetApp acquiring Spot.io, two of its IT teams had adopted Spot in their operations: Product Engineering for Cloud Volumes ONTAP test automation and NetApp IT for corporate business applications. Check out the results in this infographic.
NetApp has fully embraced tools that allow for seamless, collaborative work from home, and as a result was fully prepared to minimize COVID-19's impact on how we conduct business. Check out this infographic for a look at results from the new remote work reality.
4 Ways FlexPod Forms the Foundation for Cisco and NetApp Success (NetApp)
At Cisco and NetApp, seeing our customers succeed in their digital transformations means that we’ve succeeded too. But that’s only one of the ways we measure our performance. What’s another way? Hearing how our wide-ranging IT support helps Cisco and NetApp thrive. Here’s what makes FlexPod an indispensable part of Cisco’s and NetApp’s IT departments.
With the widespread adoption of hybrid multicloud as the de-facto architecture for the enterprise, organizations everywhere are modernizing to deliver tangible business value around data-intensive applications and workloads such as AI-driven IoT and Hyperledgers. Shifting from on-premises to public cloud services, private clouds, and moving from disk to flash – sometimes concurrently – opens the door to enormous potential, but also the unintended consequence of IT complexity.
10 Reasons Why Your SAP Applications Belong on NetApp (NetApp)
NetApp has been supporting SAP for 20 years, delivering advanced solutions for SAP applications. Here are 10 reasons why your SAP applications belong on NetApp!
Redefining HCI: How to Go from Hyper Converged to Hybrid Cloud Infrastructure (NetApp)
The hyper converged infrastructure (HCI) market is entering a new phase of maturity. A modern HCI solution requires a private cloud platform that integrates with public clouds to create a consistent hybrid multi-cloud experience.
During this webinar, NetApp and an IDC guest speaker covered what led to the next generation of hyper converged infrastructure and which five capabilities are required to go from hyper converged to hybrid cloud infrastructure.
As we enter 2019, what stands out is how trends in business and technology are connected by common themes. For example, AI is at the heart of trends in development, data management, and delivery of applications and services at the edge, core, and cloud. Also essential are containerization as a critical enabling technology and the increasing intelligence of IoT devices at the edge. Navigating the tempests of transformation are developers, whose requirements are driving the rapid creation of new paradigms and technologies that they must then master in pursuit of long-term competitive advantage. Here are some of our perspectives and predictions for 2019.
Artificial Intelligence Is a Top Management Priority in German Companies (NetApp)
According to a recent survey by NetApp, the leading hybrid cloud data management specialist, artificial intelligence (AI) is becoming increasingly relevant in German companies.
Hyperconvergence: How It Improves the Economics of Your IT (NetApp)
In this NetApp webinar we present how NetApp HCI helps improve the economics of IT: accelerating and ensuring performance for each application; simplifying your data center and making your architecture more scalable by reducing waste; implementing and expanding your HCI infrastructure quickly and inexpensively; and making your management simpler and more intuitive, saving time and using the skills you already have in the company.
NetApp IT’s Tiered Archive Approach for Active IQ (NetApp)
NetApp AutoSupport technology proactively monitors the health of NetApp systems installed at customers’ locations and provides 24/7 actionable intelligence to optimize their storage environments. The amount of data received back at NetApp doubles approximately every 16 months. To manage the swelling waves of data to archive, NetApp IT sought a more flexible solution.
Transcript: Selling digital books in 2024: Insights from industry leaders - T... (BookNet Canada)
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do... (UiPathCommunity)
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova... (Ramesh Iyer)
In today's fast-changing business world, companies that fail to adapt and embrace new ideas often struggle to keep up with the competition. Fostering a culture of innovation takes hard work: it takes vision, leadership, and a willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
GraphRAG is All You Need? LLM & Knowledge Graph (Guy Korland)
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti... (Jeffrey Haguewood)
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality (Inflectra)
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
Key Trends Shaping the Future of Infrastructure.pdf (Cheryl Hung)
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
This keynote explores the key trends across hardware, cloud, and open source: how these areas are likely to mature and develop over the short and long term, and how organisations can position themselves to adapt and thrive.
Essentials of Automations: Optimizing FME Workflows with ParametersSafe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Accelerate your Kubernetes clusters with Varnish CachingThijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Agenda

Hi, my name is <name>, and I'm here today to talk to you about how the Cisco UCS solution with NetApp storage for SAP HANA can help create infrastructure that supports real-time business operations.

I'll start by talking about some of the business concerns that may be troubling you, and how SAP HANA and Cisco UCS can help you address those areas of your business and IT infrastructure. Next, I'll explain why traditional approaches to real-time business fall short, and how Cisco and NetApp technologies come together in a high-performance, scalable solution that can help your business. Because business continuity is of the utmost concern, I'll explain the disaster tolerance capabilities of the solution and how they work to keep your business running in the event of an outage or disaster. Finally, I'll describe how we can help you build the infrastructure you need with comprehensive service offerings.
The Cisco Unified Computing System, or Cisco UCS, is the first truly unified data center platform, combining industry-standard, intelligent Intel Xeon processor-based servers with unified management, networking, and storage access into a single cohesive system. The system is a smart infrastructure that is automatically configured through integrated, model-based management to simplify and speed deployment of enterprise-class applications and services running in bare-metal, virtualized, and cloud-computing environments. Cisco servers, combined with a simplified, unified architecture, drive better IT productivity and superior price/performance for lower total cost of ownership (TCO). Only Cisco servers integrate with Cisco UCS, and only Cisco integrates rack and blade servers into a single unified system.

Building on Cisco's strength in enterprise networking, the Cisco Unified Computing System is integrated with a standards-based, high-bandwidth, low-latency, virtualization-aware unified fabric. The system is wired once to support the desired bandwidth and carries all Internet protocol, storage, interprocess communication, and virtual machine traffic with security isolation, visibility, and control equivalent to physical networks. The system's 10 Gigabit Ethernet network meets the bandwidth demands of today's multicore processors, eliminates costly redundancy, and increases workload agility, reliability, and performance.

Cisco UCS helps organizations go beyond efficiency: it helps them become more effective through technologies that breed simplicity rather than complexity. The result is flexible, agile, high-performance, self-integrating information technology that reduces staff costs, increases uptime through automation, and delivers faster return on investment.
Fastest Growing Product in the Market

At Cisco, we didn't create just another server; we designed an evolutionary system that integrates networking and management and is flexible and scalable enough to handle any workload. We aimed to create the ideal, programmable platform for virtualized and cloud environments, and to help solve many of the very real challenges customers face. People said we wouldn't succeed. We did.

Today, Cisco UCS is the fastest growing product on the market. In just a short time, there are more than 23,000 unique Cisco UCS customers. Today, Cisco has the #2 worldwide market share in x86 blade servers. More than 44 ISVs write to the Cisco UCS API, and this number continues to grow. Tens of thousands of applications are supported on the system, so you can have confidence that the applications you depend on can be supported.

Cisco UCS was designed with system and business performance in mind. As of September 2013, Cisco UCS has achieved 79 world records on performance benchmarks. The system has also received numerous industry awards and certifications. It's also readily available, with more than 3,400 channel partners providing businesses with easy access to the technology.

Let's turn to how this innovative system can help your business succeed.
Cisco UCS Powers SAP HANA

Cisco UCS powers SAP HANA, delivering a broad range of advantages to your business.

Reduces cost while increasing manageability. The unified infrastructure inherent in the solution dramatically reduces the number of physical components required. The solution effectively uses limited space, power, and cooling by deploying less infrastructure to perform the same, or even more, work. For example, the unified fabric built into Cisco UCS results in fewer network interface cards (NICs), host bus adapters (HBAs), cables, and upstream switch ports, and eliminates the need for parallel Fibre Channel or management networks.

Scales out effectively. With Cisco UCS, you can easily add more compute and storage building blocks as demand rises. Your IT department can start with the compute and storage infrastructure needed today and scale easily by adding more blocks later. Because the Cisco UCS compute and NetApp storage building blocks integrate into the unified system, they do not require additional supporting infrastructure or expert knowledge. The system simply, quickly, and cost-effectively presents more compute power and storage capacity to SAP HANA applications.

Accelerates time to results. When your company needs to make important business decisions, the performance of analytic and business intelligence systems is critical. SAP HANA innovation, combined with the balanced resources of Cisco UCS, delivers rapid analysis and reporting at less cost. Your IT department can use more servers and distribute data loading and analysis tasks to take advantage of massively parallel processing. The balanced resources of Cisco UCS (processing power, I/O bandwidth, and memory capacity) help ensure your IT department gets more performance from your SAP HANA implementation.

Offers visibility. End-to-end management provides visibility and enables the monitoring and automated remediation of physical servers, storage, and network.
Next, I’d like to turn to the issue of disaster tolerance, and how the Cisco and NetApp solution helps ensure business continuity.
SAP HANA: A New Kind of Architecture

With SAP HANA, the database and its transaction log reside in memory. While this results in accelerated operation, you are probably wondering what happens when there is a failure, and whether the data in memory is lost. The short answer is: don't worry. The transaction log tracks all changes made to the in-memory database. When a transaction is committed, or when the transaction log becomes full, the log is written to persistent storage on the NetApp storage device(s).

By creating a savepoint, the system ensures the entire in-memory database image is stored to disk. This copy can be read in the event of a failure to restart the application and environment. Savepoints occur regularly, every five minutes by default, to ensure database consistency.

Let's take a look at how this saved information is used during a failover situation.
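The log-plus-savepoint recovery scheme described above can be sketched in a few lines. This is a toy model for illustration only, not HANA's actual implementation; the class and method names are invented:

```python
from dataclasses import dataclass, field

@dataclass
class InMemoryDB:
    """Toy HANA-style persistence: every commit is appended to a persisted
    redo log; a savepoint flushes the full in-memory image to 'disk'."""
    data: dict = field(default_factory=dict)             # in-memory database
    log: list = field(default_factory=list)              # persisted redo log
    savepoint_image: dict = field(default_factory=dict)  # last on-disk image

    def commit(self, key, value):
        self.log.append((key, value))   # log entry hits persistent storage on commit
        self.data[key] = value

    def savepoint(self):
        self.savepoint_image = dict(self.data)  # write full image to disk
        self.log.clear()                        # older log entries are now redundant

    def recover(self):
        # Restart after a failure: load the last savepoint, then replay the log.
        self.data = dict(self.savepoint_image)
        for key, value in self.log:
            self.data[key] = value

db = InMemoryDB()
db.commit("a", 1)
db.savepoint()
db.commit("b", 2)   # committed after the savepoint; survives via the redo log
db.data = {}        # simulate losing the in-memory image in a failure
db.recover()
print(db.data)      # {'a': 1, 'b': 2}
```

The key point the slide makes is visible here: nothing committed is lost, because anything newer than the last savepoint is replayed from the persisted log.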
High Availability for SAP HANA

Speaker: This slide is animated and walks you through the steps of a controlled failover of the SAP HANA database.

Additional things to note: During this process, a standby node takes over the persistence of a failed node. The entire process is controlled by the SAP HANA Nameserver, which also triggers calls to the block API. Pay close attention to the color scheme and point out to customers that the SCSI-3 PGR reservation ensures that only the node that has written the reservation to the disk has exclusive access to the persistent storage (data and log).
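The fencing behavior described in these speaker notes can be modeled minimally. SCSI-3 persistent group reservations are a device-level protocol, not a Python API, so this toy class is purely illustrative of the exclusive-access rule:

```python
class SharedLun:
    """Toy SCSI-3 persistent-reservation model: only the node currently
    holding the reservation may write the persistence (data and log)."""
    def __init__(self):
        self.reservation = None
        self.writes = []

    def reserve(self, node):
        if self.reservation is None:
            self.reservation = node
            return True
        return False                    # another node already holds the reservation

    def write(self, node, payload):
        if node != self.reservation:
            raise PermissionError(f"{node} does not hold the reservation")
        self.writes.append(payload)

lun = SharedLun()
assert lun.reserve("standby-node")      # standby fences the LUN during takeover
assert not lun.reserve("failed-node")   # the failed node can no longer reclaim it
lun.write("standby-node", "redo-log")   # only the reservation holder may write
```

This is exactly the property the slide highlights: once the standby node holds the reservation, a partially failed (but still running) node cannot corrupt the shared data and log.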
Regular Operation with Cisco UCS

This slide depicts the layout of solution components during normal operation. The production environment is on the left, with its components highlighted in blue. Standby NetApp storage is on the right, in a second system, with its components highlighted in green. The remainder of the second system is used for non-production activities, such as QA and development.

Let's take a look at how the system responds to a failure condition or other disruptive event.
Disaster Operations with Cisco UCS

Let's assume some sort of failure, such as power loss or a catastrophic event, disrupts access to the production system (on the left). In this case, some of the non-production systems are taken offline and brought back online as active SAP HANA nodes and storage. (Note these systems went from purple to blue.) The standby NetApp storage nodes are activated (green). Any remaining resources are left available to non-production environments (purple).

What makes this continuous data availability possible is the architecture and rapid provisioning capability of Cisco UCS along with NetApp MetroCluster software. The combination of array-based clustering and synchronous mirroring supports continuous availability and zero data loss, and provides transparent recovery from failures so critical applications run uninterrupted. Eliminating repetitive change management activities reduces the risk of human error and administrative overhead. Racks in two locations create a single, geographically distributed cluster that writes data synchronously to both units. Because storage resides in both locations, one system can take over in the event the other site becomes unavailable. This provides both disaster recovery and continuous availability with zero data loss, keeping SAP HANA running through unplanned outages caused by operator error, site failures, network outages, and natural disasters. The cluster enables non-disruptive upgrades so that data analysis can continue 24x7 with no need for planned downtime. In addition, mirroring at the storage layer increases read throughput by up to 80 percent, reducing startup time for the HANA database and the time needed to fail over an active SAP HANA node to a standby node.
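The "zero data loss" property of synchronous mirroring follows from one rule: a write is acknowledged to the application only after both sites have persisted it. A minimal sketch of that rule, with invented names and no attempt to model MetroCluster itself:

```python
class SyncMirror:
    """Toy synchronous mirror: a write is acknowledged only after BOTH
    sites hold it, so failover to the survivor loses no acknowledged data."""
    def __init__(self):
        self.site_a = []   # local copy
        self.site_b = []   # remote copy

    def write(self, block):
        self.site_a.append(block)   # local write
        self.site_b.append(block)   # remote write completes before we ack
        return "ack"                # caller sees success only when both are durable

    def failover(self):
        # Site A is lost: the surviving copy is already identical,
        # so the secondary site can take over immediately.
        return list(self.site_b)

m = SyncMirror()
for blk in ("log-1", "log-2", "data-1"):
    m.write(blk)
assert m.failover() == ["log-1", "log-2", "data-1"]
```

An asynchronous scheme would acknowledge after the local write alone, leaving a window where acknowledged transactions exist only at the failed site; the synchronous ordering above is what closes that window.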
Disaster Tolerance Configuration

In the solution, production SAP HANA storage is replicated to a dedicated storage system on the second side of the solution. Additional storage for non-production systems also resides in this location. To keep operations clean and streamlined, production service profiles communicate only with production storage, and non-production service profiles communicate only with non-production storage.

You may be wondering how long it takes to switch over to the second data center. Here are the approximate time intervals for each stage of the process:

Shutdown of the non-production systems:
- Hard shutdown = 1 minute
- Graceful shutdown (HANA, OS) = 2 to 15 minutes, where 1 minute is allocated to the operating system and the rest of the time to SAP HANA

Import and activate the production configuration and boot the operating system:
- Option 1: back up the non-production systems, delete the non-production environment(s), and import the production environment = 10 to 15 minutes
- Option 2: activate a pre-configured production environment = 5 minutes
- Boot all operating system instances = 3 minutes
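For a rough sense of total switchover time, the stage timings above can simply be summed. This hypothetical helper (not part of any Cisco or NetApp tooling) combines the quoted per-stage minutes into best- and worst-case totals:

```python
def switchover_minutes(graceful_shutdown: bool, reuse_preconfigured: bool):
    """Sum the quoted stage timings (minutes) into a (best, worst) total."""
    # Stage 1: shut down the non-production systems.
    shutdown = (2, 15) if graceful_shutdown else (1, 1)
    # Stage 2: import/activate the production configuration.
    activate = (5, 5) if reuse_preconfigured else (10, 15)
    # Stage 3: boot all operating system instances.
    boot = (3, 3)
    best = shutdown[0] + activate[0] + boot[0]
    worst = shutdown[1] + activate[1] + boot[1]
    return best, worst

# Fastest path: hard shutdown plus a pre-configured production environment.
print(switchover_minutes(False, True))   # (9, 9)
# Slowest path: graceful shutdown plus backup/delete/import.
print(switchover_minutes(True, False))   # (15, 33)
```

So, under the figures quoted on the slide, a switchover completes in roughly 9 minutes at best and about half an hour at worst.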
NetApp Snapshot Backup and Recovery

NetApp storage systems offer simple data protection strategies that help lower administrative and infrastructure costs. Integrated data protection capabilities make it easy to back up, recover, and clone SAP HANA databases and applications.

Key capabilities:
- Snapshot copies are space efficient and near instantaneous
- Restore and recovery actions are fast and granular
- Thin replication makes disaster recovery copies
- Disk-to-disk (D2D) and disk-to-disk-to-tape (D2D2T) features offer simple backup and recovery management for applications
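The "near instantaneous, space efficient" claim rests on snapshots recording block pointers rather than copying data. A toy sketch of that idea (invented class, not the ONTAP implementation):

```python
class Volume:
    """Toy copy-on-write volume: a snapshot records the current block map,
    so taking one is near-instant and shares all unchanged blocks."""
    def __init__(self, blocks):
        self.blocks = dict(blocks)     # block id -> content (the active view)
        self.snapshots = {}

    def snapshot(self, name):
        # Copies references to blocks, not the block data itself.
        self.snapshots[name] = dict(self.blocks)

    def write(self, block_id, content):
        # The active view changes; snapshots keep pointing at the old blocks.
        self.blocks[block_id] = content

    def restore(self, name):
        # Granular, fast restore: re-adopt the snapshot's block map.
        self.blocks = dict(self.snapshots[name])

vol = Volume({0: "base", 1: "log"})
vol.snapshot("hourly")          # near-instant: no data is copied
vol.write(1, "corrupted")       # subsequent writes diverge from the snapshot
vol.restore("hourly")
assert vol.blocks[1] == "log"   # the pre-corruption state is back
```

Real snapshot storage only consumes extra space for blocks that change after the snapshot, which is why frequent snapshots of a large SAP HANA data volume remain affordable.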
SAP HANA Disaster Recovery with NetApp Storage

The solution uses NetApp MetroCluster software to support disaster recovery efforts. The software combines clustering and synchronous mirroring to ensure SAP HANA data is up to date and continuously available. Failover capabilities minimize application interruption. Synchronous mirroring immediately duplicates SAP HANA transactions, ensuring all copies are up to date. With database copies always consistent, the solution can activate the secondary site (labeled Passive SAP HANA Database on the slide) very quickly.
How Cisco Can Help

Cisco offers several services that can help you with your deployment.
Cisco RMS for SAP HANA Appliance

Cisco provides unparalleled solution-level support to help you focus on running your business, rather than focusing on infrastructure issues. It provides a single point of contact to initiate and manage support. You get priority access to Cisco solution experts qualified to troubleshoot and drive resolution of all field issues, including the third-party products in the SAP HANA solution. Cisco will coordinate issue resolution between Cisco and solution technology partner support teams.
Plan Services for SAP HANA

Cisco offers services that can help you formulate a strategy and create detailed plans for implementing your SAP HANA solution.

Assessment Service for SAP HANA focuses on discovery and assessment of your existing environment to justify the business and technical requirements for migrating to SAP HANA. The service confirms your priorities, return-on-investment (ROI) requirements, and migration time frames.

Planning and Design Service for SAP HANA provides an architecture and data analytics assessment and proof of concept (PoC) to help you prepare for a sound implementation.
Implementation Services for SAP HANA

Cisco offers services that can help you with assembly, installation, and data load for your SAP HANA deployment. These services provide an unmatched level of service assurance.

Assembly Service for SAP HANA includes the assembly of the hardware and software required for the SAP HANA solution. Cisco experts or partners verify all hardware components against a certified bill of materials, install third-party hardware and software, ensure proper powering of the chassis and all components, and validate the correct installation of all software and hardware components. This service is mandatory.

Installation Service for SAP HANA installs the solution into your network, connecting it to source systems and pointing it to the proper data source(s). The service includes a pre-installation interview, site planning survey, and design review. Installation validation helps ensure that the system configuration has passed initial startup tests and that the installed SAP HANA solution is ready for data load and SAP HANA software configuration. This service is mandatory.

Data Load Service for SAP HANA is an optional service enabling SAP rapid deployment solutions (RDSs) with data extraction, transformation, and load (ETL). The service includes extracting data from outside sources, transforming it to fit operational needs, and loading the data into the database or warehouse for a fully operational solution.