In this presentation by SolidFire vExpert Jeramiah Dooley, you will learn how consolidating workloads onto a single platform can be the crucial missing step as VMware customers transition from a legacy architecture to a Software Defined Data Center.
Storage has become the boat anchor of the modern, virtualized data center. Until recently it has been slow, unpredictable, and inflexible; it has required far too much operational overhead and has been sized, deployed, and managed using methods created in the 1980s. Worst of all, storage capacity and performance are often still locked together in fixed ratios, making it inevitable that even the best-run storage environment will be wasteful and inefficient as you consolidate your applications.
But it doesn’t have to be this way. The technology and know-how exist to make storage a predictable, flexible, and easily managed part of an overall virtualized platform designed to provide the performance and availability that today’s end-user workloads demand.
In this presentation (originally presented to vExperts on March 31, 2015), you will learn about SolidFire’s technology preview showcasing integration with Rawlinson Rivera’s vRealize Automation (vRA) project at VMware. As an automation solution for VMware, vRA provides a service catalog of application blueprints to build, move, add, and change application infrastructure resources. Enterprise organizations benefit from reduced administrative duties to service the needs of applications running in a Software Defined Data Center.
This tech preview will showcase application service catalog building of software defined infrastructure through VMware’s vSphere 6, vRA, and SolidFire’s all-flash scale-out storage system. You’ll see a database and other applications being built from a template that allows you to select from exposed SolidFire capabilities, such as Quality of Service, encryption, and even Site Recovery Manager (SRM) protection.
TechTarget Event - Storage Architectures for the Modern Data Center - Jeramia... - NetApp
Why Is All-Flash Adoption Growing So Fast?
Presented by Jeramiah Dooley, Principal Architect, SolidFire
To be successful today, IT must transition from a cost center to a competitive advantage – and the path to success is through the data center. More central to business than ever before, the next-generation data center must be powered by all-flash.
All-flash is no longer the future; it's the present. Learn how all-flash can save your IT team time and resources with intelligent policy-based management, automation and more.
Take a look at the Agile Infrastructure approach to successful OpenStack cloud deployments, allowing you to go from concept to cloud in 90 minutes. Learn how:
* To simplify and accelerate the deployment of your self-service, enterprise-ready cloud infrastructure
* SolidFire can enable you to run production and test/dev operations on a single storage platform
* To build an OpenStack infrastructure that supports you now and into the future
(BIZ305) Case Study: Migrating Oracle E-Business Suite to AWS | AWS re:Invent... - Amazon Web Services
With the maturity and breadth of cloud solutions, more enterprises are moving mission-critical workloads to the cloud. American Commercial Lines (ACL) recently migrated their Oracle ERP to AWS. ERP solutions such as Oracle E-Business Suite require specific knowledge in mapping AWS infrastructure to the specific configurations and needs of running these workloads. In this session, Apps Associates and ACL walk through the considerations for running Oracle E-Business Suite on AWS, including deployment architectures, concurrent processing, load balanced forms and web services, varying database transactional workloads, and performance requirements, as well as security and monitoring aspects. ACL shares their experiences and business drivers in making this transition to AWS.
MySQL in the Cloud, is Amazon RDS for you? - Continuent
With more and more businesses moving into the cloud, the inclination is to use cloud-based database services such as Amazon RDS. Deploying Amazon RDS takes just a few clicks, but there are big differences between firing up a simple database for testing and translating that into a full deployment to be used in production. For this to work properly, you have to consider many other aspects of the deployment, including high availability (HA), disaster recovery (DR), and the scalability of your solution within your application's requirements.
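To make the gap between "a few clicks" and a production deployment concrete, here is a minimal, hypothetical boto3 sketch that requests a Multi-AZ MySQL instance with automated backups. The instance identifier, sizes, and credentials are placeholders, and the actual API call is left behind a dry-run guard so the parameters can be reviewed without touching AWS.

```python
# Parameters for a production-leaning RDS MySQL instance. All values are
# illustrative placeholders, not recommendations for any specific workload.
db_params = {
    "DBInstanceIdentifier": "example-mysql-prod",   # hypothetical name
    "Engine": "mysql",
    "DBInstanceClass": "db.m5.large",
    "AllocatedStorage": 100,                        # GiB
    "MasterUsername": "admin",
    "MasterUserPassword": "change-me-securely",     # use a secrets store in practice
    "MultiAZ": True,                  # synchronous standby in a second AZ (HA)
    "BackupRetentionPeriod": 7,       # daily automated backups kept 7 days (DR)
    "StorageEncrypted": True,
}

def create_instance(dry_run=True):
    """Return the request parameters; only call AWS when dry_run is False."""
    if not dry_run:
        import boto3  # requires boto3 installed and AWS credentials configured
        boto3.client("rds").create_db_instance(**db_params)
    return db_params

params = create_instance(dry_run=True)
```

Scaling reads beyond a single instance (read replicas, clustering) would still need to be layered on top, which is exactly the kind of consideration the webinar addresses.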
Continuent Tungsten provides a full data management solution that is already handling hundreds of millions of transactions daily for our customers. This webinar explores how your business can benefit from Continuent Tungsten, a flexible clustering solution that helps data-driven businesses handle billions of transactions daily across a wide range of environments. We'll focus on the following problems in particular:
- Ensuring fully capable cloud DBMS operation
- Avoiding lock-in by choosing solutions that run across clouds as well as on-premises
- Spreading MySQL data over regions using flexible primary/DR and multi-master topologies
- Controlling maintenance intervals and the DBMS stack directly
- Integrating in real-time to data warehouses and on-premises DBMS like Oracle
- Ensuring immediate access to top-notch, 24x7 support when things go south.
Learn how you can use Continuent Tungsten to build scalable management solutions that offer the economic benefits of the cloud with the enterprise capabilities required by businesses that live and die by their data. Your data is too precious to take shortcuts.
AWS re:Invent 2016: Optimizing workloads in SAP HANA with Amazon EC2 X1 Insta... - Amazon Web Services
AWS and SAP have worked together closely to certify the AWS platform so that companies of all sizes can fully realize all the benefits of the SAP HANA in-memory database platform on the AWS cloud. By placing SAP systems in the cloud, organizations are achieving greater agility, flexibility, and cost efficiency while saving resources to focus on their core businesses. We will discuss recent SAP and AWS innovations including the Amazon EC2 X1 instance type that offers up to 2TB of RAM, and dive into features of the AWS platform that bring significant flexibility to SAP HANA deployments.
How do you manage performance in the cloud, in particular in Platform as a Service (PaaS) environments like Windows Azure or Heroku, where you don't have a virtual machine to manage?
Even in Infrastructure as a Service (IaaS) environments like Amazon EC2, there are limitations on the tools you can deploy into that environment to assist in performance management, troubleshooting, etc. (e.g., you can't deploy promiscuous-mode network sniffing tools in EC2).
James Smith from Adactus will give us an overview of cloud services as a whole, and then drill down into some of the issues they have experienced in deploying their "Pulse" Claims Management Solution into the Azure cloud (http://www.pulseclaims.com/home).
Beyond just looking at page-speed performance, he'll talk about the challenges involved in managing SLAs, cloud "support" (or lack of it!), performance troubleshooting, and the whole performance lifecycle.
Lock, Stock and Backup: Data Guaranteed - Jervin Real
Percona Live 2017: the decisions you need to make, the tools we recommend, and the process you need to consider for a successful backup implementation for your MySQL services.
The third in our series of webinars, 'Journey Through the AWS Cloud', this complimentary presentation discusses the use of AWS as a storage and archive platform. We introduce some key mechanisms that will help you use AWS as a flexible deployment environment, talk about customers who are using AWS for development and test, and provide some tips and tricks to help you manage your AWS infrastructure and keep it cost effective.
AWS re:Invent 2016: High Performance Computing on AWS (CMP207) - Amazon Web Services
High performance computing in the cloud is enabling high-scale compute- and graphics-intensive workloads across industries, ranging from aerospace, automotive, and manufacturing to life sciences, financial services, and energy. AWS provides application developers and end users with unprecedented computational power for massively parallel applications, in areas such as large-scale fluid and materials simulations, 3D content rendering, financial computing, and deep learning. This session provides an overview of HPC capabilities on AWS, describes the newest generations of accelerated computing instances (including P2), and highlights customer and partner use cases across industries.
Attendees learn about best practices for running HPC workflows in the cloud, including graphical pre- and post-processing, workflow automation, and optimization. Attendees also learn about new and emerging HPC use cases: in particular, deep learning training and inference, large-scale simulations, and high performance data analytics.
High Availability Infrastructure for Cloud Computing - Bob Rhubart
Infrastructure high availability is extremely critical to Cloud Computing. In a Cloud system that hosts a large number of databases and applications with different SLAs, any unplanned outage can be devastating, and even a small planned downtime may be unacceptable. This presentation will discuss various technology solutions and related best practices that system architects should consider in cloud infrastructure design to ensure high availability.
SQL Server Lift & Shift on Azure - SQL Saturday 921 - Marco Obinu
Slides presented at SQL Saturday 921 on how to plan a lift-and-shift migration for SQL Server workloads, depicting the pros and cons of using different Azure services as landing zones.
Flash Arrays Enable the Next Generation Data Center - NetApp
SolidFire’s all-flash storage is designed to solve the challenges of today’s workloads for a true cloud architecture with predictable, flexible, and easy-to-manage performance and capacity.
In this webinar, Jesse Genson examines how consolidating workloads onto a single platform can be the crucial missing step for highly virtualized data centers on the way to the next generation data center.
Learn how to simplify management and guarantee application performance through the unique combination of vSphere and SolidFire.
- Consolidate multiple mission-critical applications on the same storage system
- Use integrated end-to-end Quality of Service allocation with VMware's SIOC settings to eliminate inconsistent performance
- Eliminate VM sprawl and the need to over-provision storage, allowing you to deploy more VMs on the same infrastructure
Learn more about the tools, techniques and technologies for working productively with data at any scale. This session will introduce the family of data analytics tools on AWS which you can use to collect, compute and collaborate around data, from gigabytes to petabytes. We'll discuss Amazon Elastic MapReduce, Redshift, Hadoop, structured and unstructured data, and the EC2 instance types which enable high performance analytics.
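As a sketch of what "compute at any scale" looks like in practice, here is a minimal, hypothetical boto3 configuration for launching a small Amazon EMR (Elastic MapReduce) cluster running Hadoop and Spark. The cluster name, instance types, and S3 log bucket are placeholder assumptions, and the API call sits behind a dry-run guard so the configuration can be inspected without AWS access.

```python
def build_emr_config():
    """Assemble an illustrative EMR run_job_flow request (placeholder values)."""
    return {
        "Name": "example-analytics-cluster",          # hypothetical name
        "ReleaseLabel": "emr-5.36.0",
        "Applications": [{"Name": "Hadoop"}, {"Name": "Spark"}],
        "LogUri": "s3://example-bucket/emr-logs/",    # hypothetical bucket
        "Instances": {
            "InstanceGroups": [
                {"InstanceRole": "MASTER", "InstanceType": "m5.xlarge",
                 "InstanceCount": 1},
                {"InstanceRole": "CORE", "InstanceType": "m5.xlarge",
                 "InstanceCount": 2},
            ],
            # Shut the cluster down when its steps finish (batch-style usage).
            "KeepJobFlowAliveWhenNoSteps": False,
        },
        "JobFlowRole": "EMR_EC2_DefaultRole",
        "ServiceRole": "EMR_DefaultRole",
    }

def launch(dry_run=True):
    """Return the cluster config; only call AWS when dry_run is False."""
    config = build_emr_config()
    if not dry_run:
        import boto3  # requires boto3 installed and AWS credentials configured
        boto3.client("emr").run_job_flow(**config)
    return config

config = launch(dry_run=True)
```

Scaling from gigabytes to petabytes is then largely a matter of adjusting the instance counts and types in this one configuration block.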
Taming the cost of your first cloud - CCCEU 2014 - Tim Mackey
Today everyone is talking about clouds, and a few are building them, but far fewer are operating successful clouds. In this session we'll examine a variety of paradigm shifts IT makes when moving from a traditional virtualization and management mindset to operating a successful cloud. For most organizations, without careful planning, the hype of a cloud solution can quickly outstrip its capabilities, and pre-existing best practices can combine to create the worst possible cloud scenario: a cloud which isn't economical to operate and which is more cumbersome to manage than a traditional virtualization farm.
Key topics covered include:
- Successful transition of operational and management paradigm
- How the VM density of clouds changes Ops
- What it means to monitor the network in a cloud environment, at hyper-dense virtualization levels
- Preventing storage costs from outpacing delivery costs
OpenStack at the speed of business with SolidFire & Red Hat - NetApp
When it comes to OpenStack® and the enterprise, it’s critical that you can rapidly deploy a plug-and-play solution that delivers mixed workload capabilities on a shared infrastructure. Join Red Hat and SolidFire to see how Agile Infrastructure for OpenStack can help your cloud move at the speed of business.
Join us for our on demand webinar where Storage Switzerland and Tegile Systems discuss how the acquisition and operating costs of flash make it feasible to build a private cloud that is responsive to the needs of the business and cost effective.
Workload Centric Scale-Out Storage for Next Generation Datacenter - Cloudian
For performance workloads, SolidFire provides a scale-out all-flash storage platform designed to deliver guaranteed storage performance to thousands of application workloads side by side, allowing performance workloads to be consolidated on a single storage platform. SolidFire nodes can be combined over standard networking technologies into clusters ranging from 4 to 100 nodes, providing high-performance capacity from 35TB to 3.4PB, and can deliver between 200,000 and 7.5M guaranteed IOPS to more than 100,000 volumes/applications within a single cluster.
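The per-volume guarantees described above are configured through SolidFire's JSON-RPC Element API. The sketch below shows how a ModifyVolume request might pin a volume's minimum, maximum, and burst IOPS; the method and QoS field names follow the Element API, but the endpoint address, volume ID, and IOPS values are hypothetical placeholders.

```python
import json

# Hypothetical management endpoint; a real cluster exposes the Element API
# at its management virtual IP (MVIP).
ENDPOINT = "https://mvip.example.local/json-rpc/8.0"

def build_qos_request(volume_id, min_iops, max_iops, burst_iops):
    """Build a SolidFire Element API ModifyVolume request that sets QoS."""
    return {
        "method": "ModifyVolume",
        "params": {
            "volumeID": volume_id,
            "qos": {
                "minIOPS": min_iops,     # floor the cluster always guarantees
                "maxIOPS": max_iops,     # sustained performance ceiling
                "burstIOPS": burst_iops, # short-term burst ceiling
            },
        },
        "id": 1,
    }

# Illustrative values only: guarantee 1,000 IOPS, cap at 5,000, burst to 10,000.
req = build_qos_request(volume_id=42, min_iops=1000, max_iops=5000, burst_iops=10000)
payload = json.dumps(req)
# Applying it would mean POSTing `payload` to ENDPOINT with cluster admin
# credentials, e.g. requests.post(ENDPOINT, data=payload, auth=(user, pw)).
```

Because QoS is a per-volume property, each of the thousands of consolidated workloads can carry its own guarantee independently of its neighbors.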
Data Warehouse Modernization - Big Data in the Cloud Success with Qubole on O... - Qubole
The effective use of big data is the key to gaining a competitive advantage and outperforming the competition. This change demands that companies consume and blend enormous amounts of data created from divergent and inherently mismatched sources, which represents a paradigm shift from the traditional data warehouse.
Companies need to modernize their data warehouse, augmenting it with a platform that allows storage, processing, exploration, and analysis of large and diverse datasets without limiting data access or the flexibility to respond to the needs of the business. That's where Oracle Cloud and Qubole work together, delivering a new breed of data platform, one capable of storing and processing the overwhelming amounts of data that on-premises big data deployments cannot handle.
Watch this on-demand webinar to understand:
- Why deploying big data on-premises is expensive, complex to maintain and limits your ability to scale across new use cases and data sources
- How Oracle Bare Metal Cloud's predictable, high-performance compute and network services deliver the foundation of a cost-effective, high-performance big data platform
- How Qubole leverages Oracle Bare Metal Cloud to provide a turnkey big data service that optimizes cost, performance, and scale, enabling self-service data exploration.
Qubole delivers a cloud-based, turnkey, self-service big data service that removes the complexity and reduces the cost of doing big data. It leverages Oracle Bare Metal Cloud’s next generation of scalable, inexpensive and performant compute, network and storage public cloud infrastructure to provide a solution that accelerates time to market and reduces the risk of your big data initiatives.
SQL Start! 2020 - SQL Server Lift & Shift on Azure - Marco Obinu
Slides of the session delivered during SQL Start! 2020, where I illustrate different approaches to determine the best landing zone for your SQL Server workloads.
Video (ITA): https://youtu.be/1hqT_xHs0Qs
StorPool Storage presenting at Storage Field Day 25 - StorPool Storage
Storage Field Day 25 took place on March 22–23, 2023, and gathered industry leaders and storage analysts in an exciting 2-day meet up with technical presentations. StorPool Storage participated in the event, and our team showcased our storage platform, its capabilities, and improvements.
Learn more: watch the recording of the presentation: https://storpool.com/blog/storpool-presents-at-storage-field-day-25-video-recordings
All Things Open 2014 - Day 1
Wednesday, October 22nd, 2014
Sergey Razin
Chief Technology Officer for SIOS Technology Corp.
Self-Driving Data Center
Find more by Sergey here: http://www.slideshare.net/techdozor
DevOps the NetApp Way: 10 Rules for Forming a DevOps TeamNetApp
Does your enterprise IT organization practice DevOps without a common team approach? To create a standardized way for development and operations teams to work together at NetApp, the IT team differentiates a DevOps team from a regular development team based on these 10 rules.
Spot Lets NetApp Get the Most Out of the CloudNetApp
Prior to NetApp acquiring Spot.io, two of its IT teams had adopted Spot in their operations: Product Engineering for Cloud Volumes ONTAP test automation and NetApp IT for corporate business applications. Check out the results in this infographic.
NetApp has fully embraced tools that allow for seamless, collaborative work from home, and as a result was fully prepared to minimize COVID-19's impact on how we conduct business. Check out this infographic for a look at results from the new remote work reality.
4 Ways FlexPod Forms the Foundation for Cisco and NetApp SuccessNetApp
At Cisco and NetApp, seeing our customers succeed in their digital transformations means that we’ve succeeded too. But that’s only one of the ways we measure our performance. What’s another way? Hearing how our wide-ranging IT support helps Cisco and NetApp thrive. Here’s what makes FlexPod an indispensable part of Cisco’s and NetApp’s IT departments.
With the widespread adoption of hybrid multicloud as the de facto architecture for the enterprise, organizations everywhere are modernizing to deliver tangible business value around data-intensive applications and workloads such as AI-driven IoT and Hyperledgers. Shifting from on-premises to public cloud services, private clouds, and moving from disk to flash – sometimes concurrently – opens the door to enormous potential, but also the unintended consequence of IT complexity.
10 Reasons Why Your SAP Applications Belong on NetAppNetApp
NetApp has been supporting SAP for 20 years, delivering advanced solutions for SAP applications. Here are 10 reasons why your SAP applications belong on NetApp!
Redefining HCI: How to Go from Hyper Converged to Hybrid Cloud InfrastructureNetApp
The hyperconverged infrastructure (HCI) market is entering a new phase of maturity. A modern HCI solution requires a private cloud platform that integrates with public clouds to create a consistent hybrid multicloud experience.
During this webinar, NetApp and an IDC guest speaker covered what led to the next generation of hyperconverged infrastructure and which five capabilities are required to go from hyperconverged to hybrid cloud infrastructure.
As we enter 2019, what stands out is how trends in business and technology are connected by common themes. For example, AI is at the heart of trends in development, data management, and delivery of applications and services at the edge, core, and cloud. Also essential are containerization as a critical enabling technology and the increasing intelligence of IoT devices at the edge. Navigating the tempests of transformation are developers, whose requirements are driving the rapid creation of new paradigms and technologies that they must then master in pursuit of long-term competitive advantage. Here are some of our perspectives and predictions for 2019.
Artificial Intelligence Is a Top Priority in German Companies - NetApp
According to a recent survey by NetApp, the leading data management specialist in the hybrid cloud, artificial intelligence (AI) is becoming increasingly relevant in German companies.
Hyperconvergence: How It Improves the Economics of Your IT - NetApp
In this NetApp webinar we present how NetApp HCI helps improve the economics of IT: accelerating and ensuring performance for each application, simplifying your data center, making your architecture more scalable by reducing waste, implementing and expanding your HCI infrastructure quickly and inexpensively, and making your management even simpler and more intuitive, saving time and using the skills you already have in the company.
NetApp IT’s Tiered Archive Approach for Active IQNetApp
NetApp AutoSupport technology proactively monitors the health of NetApp systems installed at customers' locations and provides 24/7 actionable intelligence to optimize their storage environments. The amount of data received by NetApp doubles approximately every 16 months. To manage the swelling waves of data to archive, NetApp IT sought a more flexible solution.
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
UiPath Test Automation using UiPath Test Suite series, part 5 - DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series, part 5. In this session, we will cover CI/CD with DevOps.
Topics covered:
CI/CD within UiPath
End-to-end overview of a CI/CD pipeline with Azure DevOps
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
Essentials of Automations: The Art of Triggers and Actions in FME - Safe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
Enchancing adoption of Open Source Libraries. A case study on Albumentations.AIVladimir Iglovikov, Ph.D.
Presented by Vladimir Iglovikov:
- https://www.linkedin.com/in/iglovikov/
- https://x.com/viglovikov
- https://www.instagram.com/ternaus/
This presentation delves into the journey of Albumentations.ai, a highly successful open-source library for data augmentation.
Created out of a necessity for superior performance in Kaggle competitions, Albumentations has grown to become a widely used tool among data scientists and machine learning practitioners.
This case study covers various aspects, including:
People: The contributors and community that have supported Albumentations.
Metrics: The success indicators such as downloads, daily active users, GitHub stars, and financial contributions.
Challenges: The hurdles in monetizing open-source projects and measuring user engagement.
Development Practices: Best practices for creating, maintaining, and scaling open-source libraries, including code hygiene, CI/CD, and fast iteration.
Community Building: Strategies for making adoption easy, iterating quickly, and fostering a vibrant, engaged community.
Marketing: Both online and offline marketing tactics, focusing on real, impactful interactions and collaborations.
Mental Health: Maintaining balance and not feeling pressured by user demands.
Key insights include the importance of automation, making the adoption process seamless, and leveraging offline interactions for marketing. The presentation also emphasizes the need for continuous small improvements and building a friendly, inclusive community that contributes to the project's growth.
Vladimir Iglovikov brings his extensive experience as a Kaggle Grandmaster, ex-Staff ML Engineer at Lyft, sharing valuable lessons and practical advice for anyone looking to enhance the adoption of their open-source projects.
Explore more about Albumentations and join the community at:
GitHub: https://github.com/albumentations-team/albumentations
Website: https://albumentations.ai/
LinkedIn: https://www.linkedin.com/company/100504475
Twitter: https://x.com/albumentations
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
A tale of scale & speed: How the US Navy is enabling software delivery from l...sonjaschweigert1
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATO’s (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
Maruthi Prithivirajan, Head of ASEAN & IN Solution Architecture, Neo4j
Get an inside look at the latest Neo4j innovations that enable relationship-driven intelligence at scale. Learn more about the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building apps with interconnected data and generative AI.
Generative AI Deep Dive: Advancing from Proof of Concept to ProductionAggregage
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
How to Get CNIC Information System with Paksim Ga.pptxdanishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
Communications Mining Series - Zero to Hero - Session 1DianaGray10
This session provides introduction to UiPath Communication Mining, importance and platform overview. You will acquire a good understand of the phases in Communication Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
DevOps and Testing slides at DASA ConnectKari Kakkonen
My and Rik Marselis slides at 30.5.2024 DASA Connect conference. We discuss about what is testing, then what is agile testing and finally what is Testing in DevOps. Finally we had lovely workshop with the participants trying to find out different ways to think about quality and testing in different parts of the DevOps infinity loop.
Removing Uninteresting Bytes in Software FuzzingAftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing xml documents, and Binutil's readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format). Our preliminary results show that AFL+DIAR does not only discover new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
6. What Are We Talking About Today?
• Storage in Today’s Enterprise
• Four Stages of Change
– Workload Consolidation?
– AHHH!!! WORKLOAD CONSOLIDATION!
– Oh Wait, Cool…
– How Do I Get Started?
• What Does Tomorrow Look Like?
13. Modern Infrastructure Consumption Models
• Best of Breed Appliance
• Converged Infrastructure
• Hyperconverged
• As a Service
• Software on Commodity Hardware
14. The Infrastructure Consumption Continuum
• Best of Breed Appliance
• Converged Infrastructure
• Hyperconverged
• As a Service
• Software on Commodity Hardware
15. The Infrastructure Consumption Continuum
• Best of Breed Appliance
• Converged Infrastructure
• Hyperconverged
• As a Service
• Software on Commodity Hardware
Trade-offs across the continuum:
– Ease of Implementation: Easier ↔ Harder
– Degree of Vendor Lock-In: More Lock-In ↔ Less Lock-In
– Implementation Flexibility: Less Flexible ↔ More Flexible
– Cost Efficiency: At Small Scale ↔ At Large Scale
16. The Infrastructure Consumption Continuum
Market segment: SMB / Startups
• Best of Breed Appliance
• Converged Infrastructure
• Hyperconverged
• As a Service
• Software on Commodity Hardware
Trade-offs across the continuum:
– Ease of Implementation: Easier ↔ Harder
– Degree of Vendor Lock-In: More Lock-In ↔ Less Lock-In
– Implementation Flexibility: Less Flexible ↔ More Flexible
– Cost Efficiency: At Small Scale ↔ At Large Scale
17. The Infrastructure Consumption Continuum
Market segments: SMB / Startups, Large Enterprise
• Best of Breed Appliance
• Converged Infrastructure
• Hyperconverged
• As a Service
• Software on Commodity Hardware
Trade-offs across the continuum:
– Ease of Implementation: Easier ↔ Harder
– Degree of Vendor Lock-In: More Lock-In ↔ Less Lock-In
– Implementation Flexibility: Less Flexible ↔ More Flexible
– Cost Efficiency: At Small Scale ↔ At Large Scale
18. The Infrastructure Consumption Continuum
Market segments: SMB / Startups, Large Enterprise, Hyperscale / SP
• Best of Breed Appliance
• Converged Infrastructure
• Hyperconverged
• As a Service
• Software on Commodity Hardware
Trade-offs across the continuum:
– Ease of Implementation: Easier ↔ Harder
– Degree of Vendor Lock-In: More Lock-In ↔ Less Lock-In
– Implementation Flexibility: Less Flexible ↔ More Flexible
– Cost Efficiency: At Small Scale ↔ At Large Scale
19. SolidFire Consumption Models
The Element Operating System underpins every consumption model:
• Best of Breed Appliance: SF Series Appliances
• Converged Infrastructure: Agile Infrastructure
• Software on Commodity Hardware: Element
• As a Service
Common capabilities across all of them:
• Scale-Out Infrastructure Agility
• Guaranteed Quality of Service
• Complete System Automation
• In-Line Data Reduction
• Self-Healing High Availability
20. High-performance storage systems designed for large-scale infrastructure
• Industry-leading Quality of Service (QoS)
• Scale-out architecture
– 4–100 nodes
– 35 TB–3.4 PB usable capacity
– 200K–7.5M controllable IOPS
• Simple, all-inclusive pricing model
• Direct Tier 3 support for every customer
• Industry-standard hardware
– 10 GigE iSCSI, 16/8 Gb FC
21. Applying All-Flash in Your Data Center
(Chart: # of Applications vs. Performance, Low → High)
• Cheap and Deep [object / file]: Backup / Archive / Video Streaming
• Performance and Capacity: Public / Private Cloud, IaaS
• Performance at Any Cost: Transactional Processing / Trading Platforms
22. Applying All-Flash in Your Data Center
(Chart: # of Applications vs. Performance, Low → High)
• Cheap and Deep [object / file]: Backup / Archive / Video Streaming
• Performance at Any Cost: Transactional Processing / Trading Platforms
• Performance and Capacity (Public / Private Cloud, IaaS): requirements constantly changing; consumes the most capital and time
23. Applying All-Flash in Your Data Center
(Chart: # of Applications vs. Performance, Low → High)
• Cheap and Deep [object / file]: Backup / Archive / Video Streaming
• Performance at Any Cost (Transactional Processing / Trading Platforms): the target of Flash V1 (PCI Flash) and Flash V2 (All-Flash Array)
• Performance and Capacity (Public / Private Cloud, IaaS): requirements constantly changing; consumes the most capital and time
24. Applying All-Flash in Your Data Center
(Chart: # of Applications vs. Performance, Low → High)
• Cheap and Deep [object / file]: Backup / Archive / Video Streaming
• Performance at Any Cost (Transactional Processing / Trading Platforms): the target of Flash V1 (PCI Flash) and Flash V2 (All-Flash Array)
• Performance and Capacity (Public / Private Cloud, IaaS): requirements constantly changing; consumes the most capital and time; the target of Flash for the Next Generation Data Center
27. All-flash storage platform for the next generation data center
• Scale-Out Infrastructure Agility
• Guaranteed Quality of Service
• Complete System Automation
• In-Line Data Reduction
• Self-Healing High Availability
29. Scale-Out Agility
Performance and capacity scale together: 35 TB / 200,000 IOPS
• Guaranteed compatibility between all SolidFire storage nodes
– Future-proof your storage investment
– Eliminate storage migrations and forklift upgrades
– Never wait 3 years for an upgrade
• Linear scale of performance and capacity
• Expand / contract without disruption or reconfiguration
30. Scale-Out Agility
Performance and capacity scale together: 35 TB / 200,000 IOPS → 43.6 TB / 250,000 IOPS
• Guaranteed compatibility between all SolidFire storage nodes
– Future-proof your storage investment
– Eliminate storage migrations and forklift upgrades
– Never wait 3 years for an upgrade
• Linear scale of performance and capacity
• Expand / contract without disruption or reconfiguration
31. Scale-Out Agility
Performance and capacity scale together: 35 TB / 200,000 IOPS → 43.6 TB / 250,000 IOPS → 52.2 TB / 300,000 IOPS
• Guaranteed compatibility between all SolidFire storage nodes
– Future-proof your storage investment
– Eliminate storage migrations and forklift upgrades
– Never wait 3 years for an upgrade
• Linear scale of performance and capacity
• Expand / contract without disruption or reconfiguration
32. Scale-Out Agility
Performance and capacity scale together: 35 TB / 200,000 IOPS → 43.6 TB / 250,000 IOPS → 52.2 TB / 300,000 IOPS → 60.8 TB / 350,000 IOPS
• Guaranteed compatibility between all SolidFire storage nodes
– Future-proof your storage investment
– Eliminate storage migrations and forklift upgrades
– Never wait 3 years for an upgrade
• Linear scale of performance and capacity
• Expand / contract without disruption or reconfiguration
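The build above implies a fixed per-node increment: each added node contributes roughly 8.6 TB and 50,000 IOPS. A minimal sketch of that linear scale-out model, with the per-node increments inferred from the slide's deltas rather than taken from vendor specifications:

```python
def projected_cluster(base_tb, base_iops, nodes_added,
                      tb_per_node=8.6, iops_per_node=50_000):
    """Project usable capacity and controllable IOPS after adding nodes,
    assuming the linear scale-out model shown on the slides.
    Per-node increments are inferred from the slide's figures."""
    return (round(base_tb + nodes_added * tb_per_node, 1),
            base_iops + nodes_added * iops_per_node)

# Reproduce the build on the slides: start at 35 TB / 200,000 IOPS,
# then add one node at a time.
for n in range(4):
    tb, iops = projected_cluster(35.0, 200_000, n)
    print(f"+{n} nodes: {tb} TB, {iops:,} IOPS")
```

Running this reproduces the slide's progression (43.6 TB / 250K, 52.2 TB / 300K, 60.8 TB / 350K), which is the point of the "linear scale" claim: capacity and performance grow in lockstep as nodes are added.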
33. Never Obsolete: Unparalleled Investment Protection
• Platform Compatibility Guarantee
– Ensures that all future software and platforms from SolidFire will interoperate with existing infrastructure
– No more forklift upgrades
• Unlimited Drive Wear Guarantee
– Eliminates concerns around flash endurance
– No restrictions on use case or workload
34. Guaranteed Quality of Service (QoS)
• Dynamically allocate, manage, and guarantee storage performance independent of capacity
• Define and enforce Min, Max, and Burst settings for each application / volume
• SolidFire QoS eliminates traditional performance-related storage problems (ESG, 2015)
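Per-volume Min/Max/Burst settings are applied through the SolidFire Element API, which is JSON-RPC over HTTPS. The sketch below builds and posts a `ModifyVolume` request; the cluster address, credentials, volume ID, and the API version in the URL are placeholders, so check them against your cluster before use:

```python
import base64
import json
import urllib.request

def qos_payload(volume_id, min_iops, max_iops, burst_iops):
    """Build the Element API ModifyVolume request body that sets
    per-volume Min/Max/Burst QoS (JSON-RPC)."""
    return {
        "method": "ModifyVolume",
        "params": {
            "volumeID": volume_id,
            "qos": {
                "minIOPS": min_iops,      # floor: guaranteed even under contention
                "maxIOPS": max_iops,      # sustained ceiling
                "burstIOPS": burst_iops,  # short-term ceiling drawn from credits
            },
        },
        "id": 1,
    }

def set_volume_qos(mvip, user, password, **qos_kwargs):
    """POST the request to the cluster management VIP.
    The API version in the URL (8.0) is an assumption; it varies by
    Element OS release."""
    req = urllib.request.Request(
        f"https://{mvip}/json-rpc/8.0",
        data=json.dumps(qos_payload(**qos_kwargs)).encode(),
        headers={"Content-Type": "application/json"},
    )
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Example (placeholder address, credentials, and volume ID):
# set_volume_qos("10.0.0.1", "admin", "secret", volume_id=42,
#                min_iops=1_000, max_iops=5_000, burst_iops=10_000)
```

The key design point is that the three knobs live on the volume, not the array: every volume carries its own floor and ceilings, which is what makes per-application guarantees possible on shared storage.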
40. Consolidate with Guaranteed QoS
• Guarantee storage performance to every application
• Combine a broad array of application workloads within a single storage platform
• Increase application density and resource utilization
• Respond to demands faster and with greater agility than ever before
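A consequence of guaranteed minimums: consolidation only works if the sum of every volume's Min IOPS stays within the cluster's controllable IOPS, otherwise the guarantees are over-committed. A simple admission-check sketch (the figures are illustrative, not from the slides):

```python
def can_admit(existing_min_iops, new_min_iops, cluster_iops):
    """Return True if a new volume's guaranteed minimum still fits:
    the total of all Min IOPS guarantees must not exceed the
    cluster's controllable IOPS."""
    return sum(existing_min_iops) + new_min_iops <= cluster_iops

existing = [20_000, 15_000, 50_000]           # Min IOPS of volumes already placed
print(can_admit(existing, 100_000, 200_000))  # 185,000 <= 200,000: fits
print(can_admit(existing, 120_000, 200_000))  # 205,000 > 200,000: over-committed
```

Because the cluster's IOPS budget grows linearly with node count, the same check also tells you when consolidation of the next workload requires adding a node rather than rebalancing.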
48. Self-Healing High Availability
SolidFire Helix™: cluster-wide, RAID-less data protection
• No single point of failure
• Automatic self-healing: restores redundancy after failure
• Maintains all QoS settings regardless of failure condition
• Non-disruptive hardware and software upgrades
(Diagram: data blocks A–J, each stored twice, distributed across the cluster's nodes)
49. Self-Healing High Availability
SolidFire Helix™: cluster-wide, RAID-less data protection
• No single point of failure
• Automatic self-healing: restores redundancy after failure
• Maintains all QoS settings regardless of failure condition
• Non-disruptive hardware and software upgrades
(Diagram: after a node fails, the surviving copies of blocks A–J are redistributed across the remaining nodes to restore redundancy)
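The Helix idea, every block keeps a second copy on a different node, and lost copies are re-created from survivors rather than rebuilt from parity, can be sketched as a toy simulation. The placement and target-selection logic here is illustrative only, not SolidFire's actual algorithm:

```python
import random

def place(blocks, nodes, copies=2):
    """Place each block's replicas on distinct nodes (RAID-less:
    protection comes from replication, not parity groups)."""
    layout = {n: set() for n in nodes}
    for b in blocks:
        for n in random.sample(nodes, copies):
            layout[n].add(b)
    return layout

def heal(layout, failed):
    """Self-heal: drop the failed node, then re-replicate every block
    that is left with only one surviving copy, choosing the least
    loaded node that does not already hold it."""
    survivors = {n: set(s) for n, s in layout.items() if n != failed}
    for b in layout[failed]:
        holders = [n for n, s in survivors.items() if b in s]
        if len(holders) == 1:  # redundancy lost for this block
            target = min((n for n in survivors if b not in survivors[n]),
                         key=lambda n: len(survivors[n]))
            survivors[target].add(b)
    return survivors

blocks = list("ABCDEFGHIJ")
layout = place(blocks, ["n1", "n2", "n3", "n4"])
healed = heal(layout, "n1")
# After healing, every block is back to two copies; no RAID set was rebuilt.
```

This is why the rebuild cost scales with the failed node's data rather than with an entire RAID group, and why all remaining nodes share the re-replication work.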
50. VMware & SolidFire: Empowering the Software-Defined Data Center
• Leading scale-out all-flash storage architecture
• Strong vSphere storage API support: SIOC + QoS, VAAI, VASA, VVols
• API-driven automation and management: Automation | Ops | Orchestrator, PowerShell, SRM SRA
• Storage Policy Based Management and vCenter Plug-in
• Delivering guaranteed performance to every application: Tier 1 apps, End User Computing, Private Cloud Automation
51. All-Inclusive Feature Set
• Real-Time Replication
• Integrated Backup and Restore
• Encryption at Rest
• Snapshots and Clones
• Consistency Group Snapshots
• VLAN Tagging / Multi-Tenant Networking
• SNMP Monitoring
• VAAI, VASA, VVols, SIOC, Horizon View, and vCenter Plug-In
• Simultaneous Multiprotocol Support (FC / iSCSI)
53. Today’s Legacy Enterprise Data Center
(Diagram: traditional compute connected to a legacy SAN, with siloed application storage; connectivity labels: 1 Gb, FC, Flash)
62. Closing Thoughts
• Don’t be the apes. The bananas belong to you; help each other go get them!
• Consolidation is the only way to increase efficiency to any significant degree
• Architecture matters, but so does integration