This document discusses solid state drives (SSDs) and flash memory from an independent perspective. It provides an overview of SSD technologies such as SLC, MLC, eMLC and TLC flash and how they differ in performance, reliability and cost. It discusses challenges with flash such as write amplification and the "write cliff" effect, covers wear levelling and flash applications, and compares a virtual desktop infrastructure deployment on all-flash storage against traditional disk.
All-flash storage is the wave of the future, and IBM and Catalogic have joined forces to provide the most cost-effective, management-efficient and value-adding flash array solution in the market. Join us to learn how the IBM FlashSystem V9000 and Catalogic Software can jump-start your most critical IT projects.
Real world experience with provisioning services - Citrix
If you use Citrix NetScaler for secure remote access to your Citrix XenApp/Citrix XenDesktop deployment, you may be wondering if there’s more that it can do. You are correct! NetScaler also offers load balancing, global server load balancing, web interface integration, HDX traffic inspection and much more. It can enhance Citrix ShareFile StorageZones and Citrix mobile deployments. Join this session for a quick NetScaler refresher.
On the most suitable storage architecture for virtualization - Jordi Moles Blanco
This is a paper I wrote on the most suitable storage architecture for virtualization, which solved some of the problems we had with shared storage at CDmon. The paper discusses the pros and cons of both iSCSI and NFS and aims for the most stable and best-performing storage solution with the tools we had available at the time.
This blueprint is designed to help customers who are utilising OST technology with Backup Exec's Deduplication Option to improve back-end storage capabilities within a complex backup environment.
Relentless Information Growth
The data deduplication technology within Backup Exec 2014 breaks down streams of backup data into "blocks." Each data block is identified as either unique or non-unique, and a tracking database is used to ensure that only a single copy of a data block is saved to storage by that Backup Exec server. For subsequent backups, the tracking database identifies which blocks have been protected and only stores the blocks that are new or unique. For example, if five different client systems are sending backup data to a Backup Exec server and a data block is found in backup streams from all five of those client systems, only a single copy of the data block is actually stored by the Backup Exec server. This process of reducing redundant data blocks that are saved to backup storage leads to a significant reduction in storage space needed for backups.
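To make the mechanism concrete, here is a minimal, hypothetical sketch of hash-based block deduplication in Python. The 128KB block size, SHA-256 fingerprints and in-memory dictionary are illustrative assumptions, not Backup Exec's actual design:

```python
import hashlib

BLOCK_SIZE = 128 * 1024  # assumed block size for illustration


class DedupStore:
    """Toy content-addressed store: keeps one copy of each unique block."""

    def __init__(self):
        self.blocks = {}       # fingerprint -> block data (the "storage")
        self.bytes_in = 0      # total backup bytes received
        self.bytes_stored = 0  # bytes actually written to storage

    def ingest(self, stream: bytes):
        """Split a backup stream into blocks and store only unseen ones."""
        for i in range(0, len(stream), BLOCK_SIZE):
            block = stream[i:i + BLOCK_SIZE]
            fp = hashlib.sha256(block).hexdigest()  # tracking-database key
            self.bytes_in += len(block)
            if fp not in self.blocks:  # unique block: store a single copy
                self.blocks[fp] = block
                self.bytes_stored += len(block)


# Five clients backing up identical data: only one copy gets stored.
store = DedupStore()
common_data = b"corporate OS image block" * 50_000
for _ in range(5):
    store.ingest(common_data)
print(f"dedupe ratio: {store.bytes_in / store.bytes_stored:.1f}:1")  # ~5.0:1
```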
Data growth has increased the need for greater investment in IT infrastructure. Data protection processes such as backup compound that growth by creating multiple copies of primary data for operational and disaster recovery, which also makes the backup infrastructure far more complex. While disk-based systems inherently offer faster restores, they can also make backup environments more complex and difficult to manage, and many backup solutions struggle to manage advanced storage device capabilities such as data deduplication, replication, and the ability to write directly to tape.
Power of OpenStorage Technology (OST)
Symantec Backup Exec software and the OpenStorage technology (OST) have been designed to provide centrally managed, edge-to-core data protection that spans multiple sites, provides disk-to-disk-to-tape (D2D2T) functionality and automates data movement. The OpenStorage API, introduced in Backup Exec 2010, provides automated movement of data between sites and storage tiers and acts as a single point of management and catalog for backup data, regardless of where it resides (remote office or corporate data center), what type of media it is stored on (disk or tape), or its age (recent backup or long-term archive), providing better control of advanced storage devices.
The OpenStorage initiative allows customers to better utilize advanced, disk-based storage solutions from qualified partners. It enables tighter integration between the backup software and the storage, and greater efficiency and performance, using an easy-to-deploy, purpose-built appliance that does not have the limitations of tape emulation devices: performance improves, and backups to deduplication appliances run faster via a third-party OST plug-in enabled by Backup Exec.
VMworld 2013: How SRP Delivers More Than Power to Their Customers - VMworld
VMworld 2013
Sheldon Brown, SRP
Girish Manmadkar, VMware
Learn more about VMworld and register at http://www.vmworld.com/index.jspa?src=socmed-vmworld-slideshare
What is NetBackup appliance? Is it just NetBackup pre-installed on hardware?
The answer is both yes and no.
Yes, NetBackup appliance is simply backup in a box if you are looking for a solution for your data protection and disaster recovery readiness. That is the business problem you are solving with this turnkey appliance, which installs in minutes and reduces your operational costs.
No, NetBackup appliance is more than backup in a box if you are comparing it with rolling your own hardware for NetBackup, or with third-party deduplication appliances. Here is why I say this…
NetBackup appliance comes with redundant storage in RAID 6 for storing your backups.
Symantec worked with Intel to design the hardware so NetBackup runs optimally, with predictable and consistent performance. This eliminates the guesswork when designing the solution.
Many vendors will talk about various processes running on their devices to perform integrity checks; some solutions even need blackout windows for those operations. NetBackup appliances include Storage Foundation at no additional cost: the storage is managed by Veritas Volume Manager (VxVM) and presented to the operating system through Veritas File System. Why is this important? Storage Foundation is an industry-leading storage management infrastructure that powers the most mission-critical applications in the enterprise space, built for high performance and resiliency. The NetBackup appliance provides 24/7 protection, with data integrity on storage provided by industry-leading technology.
The Linux-based operating system, optimized for NetBackup and hardened by Symantec, eliminates the cost of deploying and maintaining a general-purpose operating system and the associated IT applications.
NetBackup appliances include a built-in WAN optimization driver. Replicate to appliances at remote sites, or to the cloud, up to 10 times faster across high-latency links.
Your backups need to be protected. Symantec Critical System Protection provides non-signature-based host intrusion prevention. It protects against zero-day attacks using granular OS hardening policies along with application, user and device controls, all pre-defined in the NetBackup appliance so that you don't need to worry about configuring them.
Best of all, it reduces your operational expenditure and eliminates complexity! One patch updates everything in this stack! The most holistic data protection solution with the fewest knobs to operate.
Master VMware Performance and Capacity Management - Iwan Rahabok
12 Sep 2016 update: See this http://virtual-red-dot.info/operationalize-sddc-program-2/ for details.
-------------
Based on the book http://virtual-red-dot.info/performance-and-capacity-management/
Master performance and capacity management of VMware SDDC
This presentation provides an introduction to the current activities leading to software architectures and methodologies for new NVM technologies, including the activities of the SNIA Non-Volatile Memory (NVM) Technical Working Group. This session includes a review and discussion of the impacts of the SNIA NVM Programming Model (NPM). We will preview the current work on new technologies, including remote access, high availability, clustering, atomic transactions, error management, and current methodologies for dealing with NVM.
How to configure SQL Server for SSDs and VMs - SolarWinds
Recent trends in virtualized converged infrastructures, coupled with rapidly decreasing costs of solid state storage (SSD), pose new challenges for DBAs and developers alike. Learn answers to pressing questions like:
• If my array uses compression, are there benefits to using database compression?
• How should I arrange my data files on an SSD array?
• If I have limited SSD space, how should I use it?
• On a hybrid array (spinning disk and SSD) how should I structure policies?
• What changes from physical servers to VMs?
How to deploy SQL Server on Microsoft Azure virtual machines - SolarWinds
Running apps on Microsoft Azure Virtual Machines is tempting, promising faster deployments and lower overall TCO. But how easy is it really to configure and run SQL Server in an Azure VM environment? Learn what you should know about tuning, optimizing, and key indicators for monitoring performance, as well as special considerations for high availability and disaster recovery.
Symantec continues to deliver on its information management strategy to enable organizations to protect their information completely, deduplicate everywhere to eliminate redundant data, delete confidently and discover efficiently with Enterprise Vault 9.0, Enterprise Vault Discovery Collector, NetBackup 5000 and the NetBackup Cloud Storage for Nirvanix.
The best DBAs tune SQL Server for performance at the server, instance, and database layers. This allows for both the logical and physical database designs to meet performance expectations. But it can be difficult to know which configuration options are better than others. Learn expert tips from Microsoft Certified Masters Tim Chapman and Thomas LaRock.
If your business is considering a hyperconverged compute/storage solution rather than disparate dedicated appliances, a Nutanix storage cluster powered by Dell XC630 appliances could bring many benefits. Thanks to its powerful Dell servers with Intel processors, this space-efficient solution was able to handle nine SQL Server 2014 OLTP workloads at over 420,000 OPM, 160 mailboxes in Microsoft Exchange 2013, and file/print and web server disk workloads; that's enough to meet your present demands and still have room for future growth. With software-defined tiered storage, high availability, and a redundant network architecture, the hyperconverged solution based on Dell XC630 appliances can help your business get the job done.
Lesson 2 of the Web Design from Ground to Top course - SkillsAndMore
HTML is a markup language that lets you give structure to your web pages. Being able to build this structure is fundamental, because it forms the bones of any website.
Knowing the logic and the elements you can use will let you add information that your visitors and search engines can extract from simple code.
Billboard Connection presents BBI Display's patented amphibious, airtight, portable billboards, which can be displayed on any surface including water, sand, snow, grass, dirt and pavement. The linkable billboards provide a new way to reach audiences in any location. This truly unique medium will catch the attention of targeted consumers in unlimited markets worldwide. It gives advertisers and events the ability to convey messages on the other 75% of the planet's surface.
A fresh commentary on the Tallinn property market from the Goodson & Red real estate consultants in cooperation with Tõnu Toompark. In this just-completed overview of Tallinn's housing and rental market, we briefly touch on the most important developments in the local market and, as usual, look at the housing and loan market in general, transactions in city-centre property, supply and price levels, and of course the rental market, for which we also include a few example transactions.
This presentation was made for 200 participants in a training in basic ICT skills for members of the communities around PMM Girls School in Jinja, Uganda, on 30th April 2014. The presentation is a brief introduction to the Internet: what it is and how it has evolved to today.
In this webinar, join experts from Storage Switzerland and Tegile to discover whether the All-Flash Data Center can become reality. We will explore the return on investment that All-Flash systems can deliver, such as increased user and virtual machine densities, lower drive counts and simpler storage architectures. We will also look at some of the methods All-Flash systems employ to deliver an acceptable cost per GB, like thin provisioning, clones, deduplication and compression. Finally, we will take one last look at disk: does it have a role in the All-Flash Data Center, and if so, what should that role be?
IBM is the first major storage vendor to deliver eMLC flash storage systems and has been incorporating flash into its servers and storage products for many years. This presentation explains the benefits of using IBM FlashSystems with I/O-intensive workloads where lower latency can make the difference; use cases include online transaction processing (OLTP), business intelligence (BI), online analytical processing (OLAP), virtual desktop infrastructure (VDI), high performance computing (HPC) and content delivery solutions (such as cloud storage and video on demand).
Watch the full webinar here: http://bit.ly/1TUuUCK
When considering flash storage, there are many misconceptions and outright myths, especially when equating consumer-grade flash (USB sticks) with enterprise-grade SSDs. In this webinar, SanDisk Chief Architect Adam Roberts will discuss 5 myths of flash storage and highlight what you need to look out for when choosing a storage device to accelerate your data center storage. This webinar will cover:
1. Data Protection
2. Power Fail Protection
3. Temperature Throttling/Overheating
4. QoS for Performance
5. SSD Endurance
Stay tuned for future webinars which will look at the benefits of flash beyond performance…busting a few more myths on flash.
I wanted to introduce a great new solid state disk called the USSD. Our just-announced USSD sets a new milestone for the most performance in the smallest space.
Our single solid state disk can handle about 250 times more small-block random I/Os per second than a hard disk drive. That is, 250 disk drives would be needed to perform the same amount of work as one SSD.
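As a rough sanity check on that ratio (the per-device numbers below are typical published figures I am assuming, not this vendor's specs):

```python
# Small-block random I/O: assumed typical figures, not measured values
hdd_iops = 200      # ~15K RPM hard disk drive, 4K random
ssd_iops = 50_000   # enterprise SSD, 4K random
print(f"{ssd_iops / hdd_iops:.0f} HDDs to match one SSD")  # 250 HDDs
```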
The flash market started out monolithically. Flash was a single media type (high performance, high endurance SLC flash). Flash systems also had a single purpose of accelerating the response time of high-end databases. But now there are several flash options. Users can choose between high performance flash or highly dense, medium performance flash systems. At the same time, high capacity hard disk drives are making a case to be the archival storage medium of choice. How does an IT professional choose?
Solid State Drives - Seminar for Computer Engineering Semester 6 - VIT, Univer... - ravipbhat
Solid state is a term that refers to electronic circuitry built entirely out of semiconductors.
A solid-state drive (SSD) is a data storage device that uses solid state memory to store persistent data; SSDs use the same I/O interfaces developed for hard disk drives.
Implementation of Dense Storage Utilizing HDDs with SSDs and PCIe Flash Acc... - Red_Hat_Storage
At Red Hat Storage Day New York on 1/19/16, Red Hat partner Seagate presented on how to implement dense storage using HDDs with SSDs and PCIe flash accelerator cards.
PHP Frameworks: I want to break free (IPC Berlin 2024) - Ralf Eggert
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk is aimed at encouraging a more independent approach to using PHP frameworks, moving towards more flexible and future-proof PHP development.
State of ICS and IoT Cyber Threat Landscape Report 2024 preview - Prayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
DevOps and Testing slides at DASA Connect - Kari Kakkonen
Slides by me and Rik Marselis from the DASA Connect conference on 30 May 2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps looks like. We ended with a lovely workshop in which participants explored different ways to think about quality and testing in the different parts of the DevOps infinity loop.
Essentials of Automations: Optimizing FME Workflows with Parameters - Safe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do... - UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
UiPath Test Automation using UiPath Test Suite series, part 4 - DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimizing testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024 - Tobias Schneck
As AI technology pushes into IT, I have been wondering, as an "infrastructure container Kubernetes guy", how this fancy AI technology gets managed from an infrastructure operations view. Is it possible to apply our beloved cloud-native principles to it as well? What benefits could the two technologies bring to each other?
Let me take these questions and give you a short journey through existing deployment models and use cases for AI software. Using practical examples, we discuss what cloud/on-premise strategy we may need to apply them to our own infrastructure and make them work from an enterprise perspective. I want to give an overview of infrastructure requirements and technologies, and of what could benefit or limit your AI use cases in an enterprise environment. An interactive demo will give you some insights into which approaches I already have working for real.
Key Trends Shaping the Future of Infrastructure.pdf - Cheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open source: exploring how these areas are likely to mature and develop over the short and long term, and considering how organisations can position themselves to adapt and thrive.
Search and Society: Reimagining Information Access for Radical Futures - Bhaskar Mitra
The field of information retrieval (IR) is currently undergoing a transformative shift, at least partly due to the emerging applications of generative AI to information access. In this talk, we will deliberate on the sociotechnical implications of generative AI for information access. We will argue that there is both a critical necessity and an exciting opportunity for the IR community to re-center our research agendas on societal needs, while dismantling the artificial separation between the work on fairness, accountability, transparency, and ethics in IR and the rest of IR research. Instead of adopting a reactionary strategy of trying to mitigate potential social harms from emerging technologies, the community should aim to proactively set the research agenda for the kinds of systems we should build, inspired by diverse explicitly stated sociotechnical imaginaries. The sociotechnical imaginaries that underpin the design and development of information access technologies need to be explicitly articulated, and we need to develop theories of change in the context of these diverse perspectives. Our guiding future imaginaries must be informed by other academic fields, such as democratic theory and critical theory, and should be co-developed with social science scholars, legal scholars, civil rights and social justice activists, and artists, among others.
3. Trusted Advisor of Choice
►Solid State Solutions (S3)
› Leading independent storage integrator
› Big data and virtualisation
› Reduce risk and cost
› Flexible range of services
►Celebrating 25 Years
› Privately owned and financed
› Customer focused, vendor independent
› Technically excellent, self-sufficient
› ISO 9001 and ISO 27001 certified
4. Trusted Advisor of Choice
►What makes S3 different?
› Solution focused not product biased
› Technically excellent
› S3 Support
› Enduring relationships
►Numerous accolades
› EMC Isilon Partner of the Year - Single Country EMEA 2013
› EMC New Partner of the Year 2012
› Nexsan EMEA Partner of the Year - 2008, 2009, 2010, 2011
› CommVault Trusted Advisor of the Year 2012 EMEA
5. Trusted Advisor of Choice
►Managed Services
› Offload day-to-day management tasks
» Servers, Backups, Storage
› Focus on your projects
› Predictable and controlled costs
►Cloud Services
› Archive and preservation
› 100% integrity guarantee
› Email Archive
› Storage
6. Trusted Advisor of Choice
►Support
› 9/5 on everything we sell
› Support Advantage
› 24/7 uplift
› Close 70% of calls with no vendor involvement
►Professional Services
› Methodology
› Project Management
› Flexible offerings
› Highly referenceable
10. Trusted Advisor of Choice
Demystifying SSD
Flash is not always a good thing (an independent view)
Mark Smith, 21st May 2013
11. Flash Memory
► Flash memory is a non-volatile semiconductor technology originally designed for digital cameras
► It was not intended to be written to in the way most applications do!
► The two big challenges are:
› It wears out
› Read/write imbalance
► The design may be the same, but the secret is in the manufacturing: manufacturing, screening and quality control have a massive impact on the performance and reliability of flash memory
12. SSD Drives
► There are over 300 SSD/flash vendors (59 didn't exist before 1st Jan 2013)
► SSD is just the productisation of flash-based storage
► Different types of flash:
› SLC
› eMLC
› MLC
› TLC

Single Level Cell (SLC)
Each memory cell can either be on or off. The most reliable type of SSD, with a typical write endurance in the 100Ks of cycles.
Performance – Fastest
Cost – Highest
Bit Density – Lowest (1 bit/cell)
Error Sensitivity – Low

Multi Level Cell (MLC)
Each memory cell distinguishes multiple voltage levels so that two bits can be stored per cell, with a typical write endurance in the 1Ks of cycles.
Performance – Middle
Cost – Low
Bit Density – Middle (2 bits/cell)
Error Sensitivity – Low

Enterprise Multi Level Cell (eMLC)
Similar to MLC but with enhanced error correction to give much lower error rates. Typical write endurance in the 10Ks of cycles.
Performance – Middle
Cost – Middle
Bit Density – Middle (2 bits/cell)
Error Sensitivity – Low

Triple Level Cell (TLC)
Similar to MLC in design but with 3 bits of data per cell. Lower cost due to increased density. Typical write endurance in the 100s of cycles.
Performance – Middle
Cost – Lowest
Bit Density – Highest (3 bits/cell)
Error Sensitivity – High
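The pattern above follows from the physics: a cell holding n bits must reliably distinguish 2^n voltage levels, which is why density rises while endurance and error margin fall. A quick illustration in Python, using the endurance orders of magnitude quoted above:

```python
# bits per cell -> voltage levels the cell must distinguish (2 ** bits)
flash_types = {
    # name: (bits per cell, typical write endurance from the slide above)
    "SLC":  (1, "100Ks of cycles"),
    "eMLC": (2, "10Ks of cycles (stronger error correction)"),
    "MLC":  (2, "1Ks of cycles"),
    "TLC":  (3, "100s of cycles"),
}
for name, (bits, endurance) in flash_types.items():
    print(f"{name}: {bits} bit(s)/cell, {2 ** bits} voltage levels, {endurance}")
```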
13. Flash Failures
► Flash memory fails in a very different way to a traditional mechanical disk
► The unrecoverable error rate for a mechanical hard drive remains fairly constant over the number of writes to the drive
► Errors typically occur in adjacent sectors (like a scratched CD)
► Flash unrecoverable errors increase exponentially with every TB that is written
► Over time, the write payload causes more and more errors to occur
Unlike traditional mechanical disks, flash drives are highly unbalanced, performing much more slowly under write loads than under read-only loads. It takes much more time to erase and write a flash cell than it does to read it – typically 10 times as long!
14. Flash Products
Flash products typically fall into four categories:

PCIe Cards – acceleration of single applications
Features
› The fastest flash-based I/O available
› No HA features (mirroring, etc)
› Low capacity, so specific application targeting required
› Not shared across hosts
› Most expensive (cost per GB)

Flash Appliances – acceleration of specific applications
Features
› High performance boost for existing storage
› Can compromise data integrity
› Affects DR planning
› Most useful for read-biased applications
› Expensive (cost per GB) but can accelerate a large storage estate

Hybrid Arrays – acceleration of specific workloads
Features
› Use flash to accelerate SAS
› Limited amount of flash; relies on traditional disk for capacity
› Working set of data can be small
› Flash cache misses can cause performance issues
› Limited enterprise features (replication, snaps etc)

Flash Arrays – general purpose Tier 0/1 storage
Features
› High capacity – all flash
› Typically has enterprise-class features (replication, snaps)
› Not the fastest
› Scales out to the largest flash footprint of any category
› High performance for all workloads
15. Flash and Applications
Flash is fantastic: it's fast, it's funky, so would you like to buy some please?
► Do you have an application that would benefit from flash?
► Do you know your application's performance footprint?
► How does your application access data?
› Read/write ratio?
› Working set size?
› Latency sensitive?
› Persistent or volatile?
16. The “Write Cliff”
So what is the write cliff, and should you be concerned about it?
► The SSD write cliff is the effect where SSD write performance drops off once all the free flash memory pages in an SSD have been written to for the first time and the device cannot provide enough free pages to keep up with subsequent write requests.
► Each new write request then requires the SSD to locate a block that can be erased for the new data.
► If a block that needs to be erased contains active data, the active data must be written to a new location to free up the block to be erased.
► This process of copying valid data from one block to a new block, called ‘write amplification’, increases SSD wear and is the primary cause of the write cliff.
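A toy model makes the cost of write amplification visible. This is a hedged sketch, not any vendor's garbage-collection algorithm: it assumes a collector that reclaims blocks which are still partly full of valid pages, so every host write drags copy-writes along with it.

```python
# Toy write-amplification estimate (illustrative geometry, not a real FTL)
PAGES_PER_BLOCK = 128  # assumed pages per erase block


def write_amplification(valid_fraction: float) -> float:
    """Flash page-writes per host page-write when GC reclaims a block
    that is still `valid_fraction` full of live data."""
    free_pages = (1 - valid_fraction) * PAGES_PER_BLOCK  # space for new data
    copied = valid_fraction * PAGES_PER_BLOCK            # GC copy-writes
    return (free_pages + copied) / free_pages            # = 1 / (1 - vf)


for vf in (0.0, 0.5, 0.8, 0.9):
    print(f"blocks {vf:.0%} valid at GC time -> "
          f"WA = {write_amplification(vf):.1f}x")
```

A fresh drive (0% valid) shows a write amplification of 1.0x; once the drive is full and reclaimed blocks are 90% valid, every host write costs ten flash writes, which is the cliff.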
17. Wear Levelling
Why is wear levelling important?
► Wear levelling is fundamental to the endurance of the flash drive
► SSD drives now contain wear levelling algorithms to improve longevity
► Flash array manufacturers typically add their own software wear levelling and smart data placement routines to further improve flash drive service life
► Advanced wear levelling typically gives an eMLC drive a 9-year lifespan**
The three approaches:
› No wear levelling
› Dynamic wear levelling
› Static wear levelling

No Wear Levelling
Each flash block is permanently mapped to an O/S logical block. This means that every block previously written to must be read, erased and rewritten. Heavily written locations wear out quickly while others may be left unused. Blocks quickly reach their end of life and can render the drive unusable.

Dynamic Wear Levelling
Each time a block of data is rewritten to the flash memory, it is written to a new location. However, blocks that never receive replacement data sit with no additional wear on the flash memory. The drive may last longer than one with no wear levelling, but there is still an uneven wear pattern.

Static Wear Levelling
Static wear levelling works the same as dynamic wear levelling, except that static blocks which do not change are periodically moved so that these low-usage cells can be used by other data. This rotational effect enables an SSD to operate until most of its blocks are near their end of life.

** Pure Storage rate their SSD drives for 9 years due to advanced software wear levelling
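Those endurance numbers convert into a service-life estimate with simple arithmetic. The sketch below is illustrative: the capacity matches the eMLC drives in the VDI case study later in this deck, but the write-amplification factor and daily write load are my assumptions, not any vendor's rating method.

```python
# Rough drive-lifetime estimate from P/E endurance (assumed inputs)
capacity_gb = 200       # e.g. a 200GB eMLC drive
pe_cycles = 10_000      # eMLC endurance, "10Ks of cycles" per slide 12
write_amp = 2.0         # assumed effective write amplification after
                        # wear levelling and smart data placement
host_gb_per_day = 300   # assumed daily host write load

tbw = capacity_gb * pe_cycles / write_amp / 1000      # host TB writable
years = tbw * 1000 / host_gb_per_day / 365
print(f"~{tbw:.0f}TB of host writes -> ~{years:.1f} years "
      f"at {host_gb_per_day}GB/day")                  # ~9.1 years
```

With these assumed inputs the estimate lands close to the 9-year figure quoted above, which shows how sensitive drive lifespan is to the write amplification that wear levelling keeps in check.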
18. The Secret Sauce
So let's talk about flash arrays…
► Most new SSD/flash arrays, regardless of whether they are all-flash or hybrid, have a software layer. It's no secret that most of the intellectual property is in the software.
► Commodity hardware – is it a good or a bad thing?
› Less investment in hardware means more investment in the software
› Easier supply chain with less chance of manufacturing delays
› Are they a software company looking to productise and sell up?
› Technical impact of commodity hardware: will I be hardware constrained?
► Deduplication technology – faster or slower?
› If dedupe was designed in from the start, it should be faster, with better drive wear characteristics and higher capacity
› Dedupe as an afterthought just means wasted I/O
► Replication, snapshots and all that stuff
› Just like deduplication: designed in is great, thrown in as an afterthought is bad
19. VDI
Let's look at VDI as an example flash workload…
► 1000 users with an average of 25 IOPs per user
› 25,000 IOPs (+5,000 IOPs for tertiary workload)
› 2.8TB of storage space for OS and View components
Traditional disk
► 166 data drives, plus parity and hot spares, totals 194 drives!
› 20 x 8+1 RAID groups
› 100TB of available capacity (when only 3TB is needed)
› 20 islands of performance at 1,500 IOPs each
› 48U of rack space
› Power, cooling and purchase/maintenance costs
Workload: 1000 users running Windows 7 Pro with local anti-virus (75% linked clones). The sizing arithmetic is sketched below.
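A quick check of the drive-count maths (the ~180 random IOPs per 15K RPM spindle is my assumed figure; actual per-drive numbers vary):

```python
# IOPs arithmetic behind the traditional-disk sizing above
users, iops_per_user, tertiary_iops = 1000, 25, 5000
required_iops = users * iops_per_user + tertiary_iops  # 30,000 IOPs
iops_per_hdd = 180                                     # assumed 15K RPM drive
print(required_iops / iops_per_hdd)   # ~166.7 -> the slide's 166 data drives,
                                      # before parity and hot spares

# All-flash comparison: 18 eMLC drives rated at 90K IOPs combined
print(90_000 / (users * iops_per_user))  # 3.6 -> "almost 4 times" the
                                         # 25,000 user IOPs requirement
```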
20. VDI
So how did we do it…?
► All-flash commodity array with 18 x 200GB eMLC drives
› FC connectivity (for no other reason than a Brocade SAN was already in place)
› 3.1TB usable capacity
› 90K IOPs maximum performance across two storage pools
› 2U of rack space
› 335W peak power consumption
The savings
► The solution showed a compelling ROI compared with traditional disk
› Almost 4 times the required performance
› Much lower latency (~1 millisecond)
› 95% less rack space
› 94% less power
› End user experience was excellent
21. VDI 2 Years On…
So how are things working today…?
► Project was completed in March 2011
► Grown to 1,300 desktops to date
► The write cliff has not become an issue
► Reaching its maximum performance potential
► Good end user experience testing and user satisfaction
► VDI is an island of storage
► Gold images and persistent data are stored on the enterprise SAN
► Did not need to deploy replication or storage-based snapshots
22. Which technology do I choose?
Think about your applications…
► Flash-based storage is great, but it comes in lots of flavours
› PCIe cards, flash appliances, hybrid arrays and all-flash arrays
› Do you really need 800,000 IOPs?
› Low latency is usually the real answer
› If you copy TBs of data, look for devices that actively manage wear levelling
Think about your investment…
► With so many new flash vendors, how do you avoid buying a dead duck?
› References, case studies, UK presence
› Investment, funding and the goals of the company
› It's easy to make claims, but can they back them up?
› POC, conditional P.O., lab sessions
› VMUG – talk to your peers!
23. 1 Prisma Park, Berrington Way
Basingstoke, Hampshire, RG24 8GT
0870 7776111
Inforeq@s3.co.uk
www.s3.co.uk
We are here to help
Contact