The document describes a test conducted by Hitachi Data Systems and Halliburton Landmark to evaluate the performance of Hitachi's networked storage solution with Halliburton Landmark's SeisSpace seismic processing software. The initial test configuration showed improvements over other vendors but still took over 4 hours to complete certain tasks. A series of configuration changes optimized the solution, reducing completion times by more than 60%. Only Hitachi demonstrated the ability to meet the high performance requirements for both primary and secondary storage simultaneously with a single solution.
Achieve Higher Quality Decisions Faster for a Competitive Edge in the Oil and... (Hitachi Vantara)
Hitachi next-generation unified storage solutions meet the challenges of today’s data-intensive oil and gas exploration and production activities. For more information on Hitachi Unified Storage and Hitachi NAS Platform 4000 series please visit: http://www.hds.com/products/file-and-content/network-attached-storage/?WT.ac=us_mg_pro_hnasp
Face Data Challenges of Life Science Organizations With Next-Generation Hitac... (Hitachi Vantara)
Hitachi Unified Storage 100 family drives efficiency at reduced costs and improves the discovery-to-market cycle for life sciences organizations. For more information on Hitachi Unified Storage and Hitachi NAS Platform 4000 series please visit: http://www.hds.com/products/file-and-content/network-attached-storage/?WT.ac=us_mg_pro_hnasp
Explains how backup-free storage reduces cost and complexity; provides benefits of Hitachi Content Platform; includes brief HDS backup use cases.
For more information on our Unstructured Data Management Solutions please check: http://www.hds.com/go/hitachi-abc-ebook-managing-data/
Learn more about Hitachi Content Platform Anywhere by visiting http://www.hds.com/products/file-and-content/hitachi-content-platform-anywhere.html
and more information on the Hitachi Content Platform is at http://www.hds.com/products/file-and-content/content-platform
As more companies grow their business in global markets, they discover the need to capture new opportunities in a matter of days rather than months to gain competitive advantage and capture new market share. Their machines are producing terabytes of various data types — video, audio, Microsoft® SharePoint®, sensor data, Microsoft Excel® files — and leaders are searching for the right technologies to capture this data and help provide a better understanding of their business. The HDS big data product roadmap will help customers build a big data enterprise plan that ingests data faster and correlates meaningful data sets to create intelligence that is easy to consume and helps leaders make the right business decisions. View this webcast to learn about Hitachi’s product roadmap to big data. For more information on HDS Big Data Solutions please visit: http://www.hds.com/solutions/it-strategies/big-data/?WT.ac=us_mg_sol_bigdat
Simplify Data Center Monitoring With a Single-Pane View (Hitachi Vantara)
Keeping IT systems up and well tuned requires constant attention, but the task is too often complicated by separate monitoring tools required to watch applications, servers, networks and storage. This white paper discusses how system administrators can consolidate oversight of these components, particularly where DataCore SANsymphony V storage hypervisor virtualizes the storage resources. Such visibility is made possible through the integration of SANsymphony-V with Hitachi IT Operations Analyzer.
A key reason for using dynamic tiering for mainframe storage is performance. This session will focus on dynamic tiering in mainframe environments and how to configure and control tiering. The session ends with a detailed discussion of performance considerations when using Hitachi Dynamic Tiering. By viewing this webcast, you will: Understand Hitachi Dynamic Tiering and the options for configuring and controlling tiering. Understand the performance considerations and the type of performance improvements you might experience when you implement Hitachi Dynamic Tiering. For more information on Hitachi Dynamic Tiering please visit: http://www.hds.com/products/storage-software/hitachi-dynamic-tiering.html?WT.ac=us_mg_pro_dyntir
Cisco Big Data Warehouse Expansion Featuring MapR Distribution (Appfluent Technology)
Learn more about the Cisco Big Data Warehouse Expansion Solution featuring MapR Distribution including Apache Hadoop.
The BDWE solution begins with the collection of data usage statistics by Appfluent. The solution then combines Cisco UCS hardware optimized for running the MapR Distribution including Hadoop, software for federating multiple data sources, and a comprehensive services methodology for assessing, migrating, virtualizing, and operating a logically expanded warehouse.
Continuously improving factory operations is of critical importance to manufacturers. Consider the facts: the total cost of poor quality amounts to a staggering 20% of sales (American Society of Quality) and unplanned downtime costs plants approximately $50 billion per year (Deloitte).
The most pressing questions are: which process variables affect quality and yield, and which process variables predict equipment failure? Getting to those answers is giving forward-thinking manufacturers a leg up over competitors.
The speakers address the data management challenges facing today's manufacturers, including proprietary systems and siloed data sources, as well as an inability to make sensor-based data usable.
Integrating enterprise data from ERP, MES, maintenance systems and other sources with real-time operations data from sensors, PLCs, SCADA systems and historians represents a major first step. But how do you get started? What is the value of a data lake? How are AI/ML being applied to enable real-time action?
Join us for this educational session, which includes a rare view from one of our SWAT team experts into our roadmap for an open source industrial IoT data management platform.
Key Takeaways:
• How to choose an initial project from which to quickly demonstrate high value returns
• Understand the value of multivariate data sources, as opposed to a single sensor on a piece of equipment
• Understand advances in big data management and streaming analytics that are paving the way to next-generation factory performance
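The second takeaway can be made concrete with a toy example. Below is a hypothetical, stdlib-only Python sketch (all readings and thresholds are invented for illustration, not taken from the session) in which a fault that a single temperature sensor misses is caught once vibration and current are watched alongside it:

```python
import statistics

# Hypothetical sketch: why multivariate sensor data beats a single sensor
# for spotting an abnormal machine state. Values are made up for illustration.
# Each cycle records (temperature C, vibration mm/s, current A).
normal_cycles = [
    (70.1, 2.0, 11.8), (69.8, 2.1, 12.0), (70.3, 1.9, 12.1),
    (70.0, 2.0, 11.9), (69.9, 2.2, 12.0),
]
# A failing bearing: temperature alone still looks normal,
# but vibration and current drift together.
suspect = (70.2, 3.1, 13.4)

def z_scores(sample, history):
    """Per-sensor z-score of `sample` against the history of normal cycles."""
    cols = list(zip(*history))
    return [
        abs(sample[i] - statistics.mean(cols[i])) / statistics.stdev(cols[i])
        for i in range(len(sample))
    ]

zs = z_scores(suspect, normal_cycles)
single_sensor_alarm = zs[0] > 3   # temperature only: misses the fault
multivariate_alarm = max(zs) > 3  # any sensor out of band: catches it
print(single_sensor_alarm, multivariate_alarm)  # → False True
```

The point is not the particular statistic (a production system would use something richer than per-channel z-scores), but that the anomaly is only visible when several channels are considered together.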
MICHAEL GER, General Manager, Manufacturing and Automotive, Hortonworks and RYAN TEMPLETON, Senior Solutions Engineer, Hortonworks
Consolidate More: High Performance Primary Deduplication in the Age of Abunda... (Hitachi Vantara)
Increase productivity, efficiency and environmental savings by eliminating silos, preventing sprawl and reducing complexity by 50%. Powerful consolidation systems such as Hitachi Unified Storage or Hitachi NAS Platform let you consolidate existing file servers and NAS devices onto fewer nodes. You can perform the same or even more work with fewer devices and lower overhead, while reducing floor space and associated power and cooling costs. View this webcast to learn how to: Shrink your primary file data without disrupting performance. Increase productivity and utilization of available capacity. Defer additional storage purchases. Save on power, cooling and space costs. For more information please visit: http://www.hds.com/products/file-and-content/network-attached-storage/?WT.ac=us_inside_rm_htchunfds
Can data virtualization uphold performance with complex queries? (Denodo)
Watch full webinar here: https://bit.ly/2JzypTx
There are myths about data virtualization that are based on misconceptions and even falsehoods. These myths can confuse and worry people who - quite rightly - look at data virtualization as a critical technology for a modern, agile data architecture.
We've decided that we need to set the record straight, so we put together this webinar series. It's time to bust a few myths!
In the first webinar of the series, we’ll be busting the 'performance' myth. “What about performance?” is usually the first question that we get when talking to people about data virtualization. After all, the data virtualization layer sits between you and your data, so how does this affect the performance of your queries? Sometimes the myth is perpetuated by people with alternative solutions…the ‘Put all your data in our Cloud and everything will be fine. Data virtualization? Nah, you don’t need that! It can't handle big queries anyway,’ type of thing.
Join us for this webinar to look at the basis of the 'performance' myth and examine whether there is any underlying truth to it.
Infosys Deploys Private Cloud Solution Featuring Combined Hitachi and Microsoft® Technologies. For more information on Hitachi Unified Compute Platform Solutions please visit: http://www.hds.com/products/hitachi-unified-compute-platform/?WT.ac=us_mg_pro_ucp
2D and 3D Land Seismic Data Acquisition and Seismic Data Processing (Ali Mahroug)
The seismic method has three principal applications:
a. Delineation of near-surface geology for engineering studies, and coal and mineral exploration within a depth of up to 1 km; the seismic method applied to near-surface studies is known as engineering seismology.
b. Hydrocarbon exploration and development within a depth of up to 10 km; the seismic method applied to the exploration and development of oil and gas fields is known as exploration seismology.
c. Investigation of the earth’s crustal structure within a depth of up to 100 km; the seismic method applied to crustal and earthquake studies is known as earthquake seismology.
TimesTen In-Memory Database for Extreme Performance (Oracle Korea)
In the mobile era, when work can happen anywhere, data volumes have grown dramatically, and processing them requires high-performance, fast databases. Reflecting these requirements, the databases we already rely on are rapidly adopting in-memory technology. In-memory technology has existed for a long time, but hardware limits and a lack of software scalability kept it from wide adoption.
Oracle TimesTen 18.1 is an in-memory relational database that overcomes the limitations of earlier in-memory databases, delivering fast processing and a scale-out distributed architecture.
This session introduces TimesTen's distributed architecture and key features and includes a demo of the latest version, 18.1. It also shares a real-world deployment and performance test results from Eluon, which is currently building services for a Korean telecom carrier on TimesTen.
Performance of persistent apps on Container-Native Storage for Red Hat OpenSh... (Principled Technologies)
For companies in need of a comprehensive strategy for containers and software-defined storage, Red Hat Container Ready Storage paired with Red Hat OpenShift Container Platform offers a solution that allows them to leverage their investment in VMware vSphere. In our proof-of-concept study, we explored the scaling capabilities of a CNS implementation using two types of Western Digital storage media, Ultrastar He10 hard drives and the new Ultrastar SS200 solid-state drives. We tested the solutions under a variety of conditions, using both IO-intensive and CPU-intensive workloads, multiple vCPU allocation counts, and a range of quantities of app instances. In this document, we have presented some of the many resulting data points, including price/performance metrics, which have the potential to assist IT professionals implementing CNS to meet the unique needs of their businesses.
Power the Creation of Great Work Solution Profile (Hitachi Vantara)
This solution discusses how quality and speed are critical in solving storage and data management bottlenecks, delivering cost-effective solutions that are highly scalable for post-production tasks. Whether CGI animation, rendering, or transcoding, Hitachi Data Systems powers digital workflows, enabling extraordinary creative and business achievements with HUS and HNAS infrastructure offerings. For more information on Hitachi Unified Storage and Hitachi NAS Platform 4000 Series please visit: http://www.hds.com/products/file-and-content/network-attached-storage/?WT.ac=us_mg_pro_hnasp
Matlab Based High Level Synthesis Engine for Area And Power Efficient Arithme... (ijceronline)
Embedded systems used in real-time applications require low power, less area and a high computation speed. For digital signal processing (DSP), image processing and communication applications, data are often received at a continuously high rate. Embedded processors have to cope with this high data rate and process the incoming data based on specific application requirements. Even though there are many different application domains, they all require arithmetic operations that quickly compute the desired values using a larger range of operation, reconfigurable behavior, low power and high precision. The type of necessary arithmetic operations may vary greatly among different applications. The RTL-based design and verification of one or more of these functions may be time-consuming. Some High Level Synthesis tools reduce this design and verification time but may not be optimal or suitable for low power applications. The developed MATLAB-based Arithmetic Engine improves design time and reduces the verification process, but the key point is to use a unified design that combines some of the basic operations with more complex operations to reduce area and power consumption. The results indicate that using the Arithmetic Engine from a simple design to more complex systems can improve design time by reducing the verification time by up to 62%. The MATLAB-based Arithmetic Engine generates structural RTL code, a testbench, and gives the designers more control. The MATLAB-based design and verification engine uses optimized algorithms for better accuracy at a better throughput.
Global Financial Leader Consolidates Mainframe Storage and Reduces Costs with... (Hitachi Vantara)
Companies with mainframes and mainframe storage face the same complex issues and desires as other businesses. They need to lower costs, reduce their storage footprint, boost performance and increase scalability, all with flat or declining budgets. And even as they make these improvements, companies also want to reduce operations costs and be freed from the overhead of continually tuning their environments for peak performance. They want and expect data to be moved to the appropriate tier and both capacity and performance to be optimized automatically.
Application Report: Big Data - Big Cluster Interconnects (IT Brand Pulse)
ParAccel is a leading analytics platform that runs on industry-standard hardware and integrates industry-standard database tools and applications, so one of its biggest challenges is to architect and test hardware (servers, storage, interconnects) that makes its software perform at its peak. In this case, ParAccel achieved its mission to eliminate a cluster bottleneck by implementing 10GbE NICs that provide the bandwidth needed today, and well into the future.
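A quick back-of-the-envelope calculation shows why 10GbE matters for a cluster interconnect. The sketch below uses my own illustrative numbers (node count and data size are not from the report) to convert line rate into bytes per second and estimate a data-redistribution time:

```python
# Back-of-the-envelope interconnect arithmetic (illustrative numbers only).
line_rate_gbps = 10                         # 10GbE line rate, gigabits/s
bytes_per_sec = line_rate_gbps * 1e9 / 8    # convert bits to bytes
print(f"{bytes_per_sec / 1e9:.2f} GB/s per NIC")  # → 1.25 GB/s

# Time to redistribute 1 TB across a hypothetical 8-node cluster,
# assuming each node pushes its share at full NIC speed in parallel.
nodes = 8
data_bytes = 1e12
seconds = (data_bytes / nodes) / bytes_per_sec
print(f"{seconds:.0f} s to shuffle 1 TB across {nodes} nodes")  # → 100 s
```

Real throughput is lower once protocol overhead and contention are accounted for, but the order of magnitude explains why a 1GbE interconnect (roughly 0.125 GB/s per NIC) becomes the bottleneck for shuffle-heavy analytics workloads.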
Webinar: The Performance Challenge: Providing an Amazing Customer Experience ... (DataStax)
Building and managing cloud applications is not easy. Delivering one with an amazing customer experience is even harder. Join us for “The Performance Challenge: Providing an Amazing Customer Experience No Matter What” webinar, where we will take a deep dive into the challenges of providing a consistent experience wherever customers are, providing real-time access to data, and how DataStax Enterprise can help.
Link to recording: https://youtu.be/qBGsyNulCOs
View past DataStax webinars: http://www.datastax.com/resources/webinars
The Apache Spark config behind the industry's first 100TB Spark SQL benchmark (Lenovo Data Center)
Some configurations deserve their own SlideShare entry: this is one of them. When the industry's first 100TB Spark SQL benchmark was reached, the media took notice. For good reason.
Intel, Mellanox, Lenovo and IBM came together to investigate a topology that leveraged advances in CPU, memory, storage and networking to assess the readiness of Spark SQL to harness new capabilities -- and speeds.
Denodo Platform 7.0: Redefine Analytics with In-Memory Parallel Processing an... (Denodo)
Watch Pablo's session from Fast Data Strategy on-demand here: https://goo.gl/1aEBo8
The tide is changing for analytics architectures. Traditional approaches, from the data warehouse to the data lake, implicitly assume that all relevant data can be stored in a single, centralized repository. But this approach is slow and expensive, and sometimes not even feasible: some data sources are too big to be replicated, and data is often too widely distributed, as with cloud data sources, for a “full centralization” strategy to succeed.
Watch this session to learn more about:
• Modern data architectures
• Why logical architectures are the best option when integrating big data
• How Denodo’s parallel in-memory capabilities with dynamic query optimization redefine analytics architectures
Hitachi Vantara and our special guest, Dr. Alison Brooks, Research Director at IDC, discuss:
• How video and other IoT data can help your business become smarter, safer and more efficient.
• How to harness IoT data to gain operational intelligence and achieve better business outcomes.
• How Hitachi’s customers are innovating with IoT to excel.
• Which practical applications and best practices will get you started on your own IoT journey to reach your goals and tackle your challenges.
Virtualizing SAP HANA with Hitachi Unified Compute Platform Solutions: Bring... (Hitachi Vantara)
Virtualizing SAP HANA with Hitachi Unified Compute Platform Solutions: Bringing Flexibility, Agility and Readiness to the Real-Time Enterprise. VMworld 2015
Hitachi Virtual Infrastructure Integrator (V2I) is a VMware vCenter plugin plus associated software. It provides data management efficiency for large VM environments. Specifically, the latest release addresses virtual machine backup and recovery and cloning services. Customers want to leverage storage-based snapshots because they are scalable and allow more granular backups, shrinking the interval between backups from hours to minutes and improving RPO. VMworld 2015.
Economist Intelligence Unit: Preparing for Next-Generation Cloud (Hitachi Vantara)
Preparing for next-generation cloud: Lessons learned and insights shared is an Economist Intelligence Unit (EIU) research programme, sponsored by Hitachi Data Systems. In this report, the EIU looks at companies’ experiences with cloud adoption and assesses whether the technology has lived up to expectations. Where the cloud has fallen short of expectations, we set out to understand why. In cases of seamless implementation, we gather best practices from firms using the cloud successfully.
HDS Influencer Summit 2014: Innovating with Information to Address Business N... (Hitachi Vantara)
Top executives at HDS share how the company is innovating with information to address business needs. Learn how the company is transforming now and into the future. #HDSday
Information Innovation Index 2014 UK Research Results (Hitachi Vantara)
Hitachi Data Systems releases insights from its inaugural ‘Information Innovation Index’, a UK research report conducted by independent UK technology market research agency Vanson Bourne. In April 2014, 200 IT decision-makers were surveyed to provide insights into how current approaches to IT are thwarting companies’ ambitions to leverage data to drive innovation and business growth.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... (DanBrown980551)
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses.
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
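To give a flavour of what a power flow actually computes, here is a deliberately tiny DC power flow in plain Python. This is not the PowSyBl/pypowsybl API, and the network data are invented; it only illustrates the kind of calculation the webinar's simulation tools perform at scale:

```python
# Toy DC power flow (NOT the PowSyBl API; numbers invented for illustration).
# Given line susceptances and net power injections, solve for bus voltage
# angles, then derive the active power flow on every line.
b = {(0, 1): 10.0, (1, 2): 8.0, (0, 2): 5.0}   # line susceptances, per-unit
p = {1: -1.0, 2: -0.5}                          # loads at buses 1 and 2, p.u.

# Reduced susceptance matrix B' for the non-slack buses (bus 0 is the slack):
# diagonal = sum of susceptances at the bus, off-diagonal = -b_ij.
B11 = b[(0, 1)] + b[(1, 2)]
B22 = b[(1, 2)] + b[(0, 2)]
B12 = -b[(1, 2)]

# Solve the 2x2 linear system B' * theta = p by Cramer's rule.
det = B11 * B22 - B12 * B12
theta = {
    0: 0.0,                              # slack bus is the angle reference
    1: (B22 * p[1] - B12 * p[2]) / det,
    2: (B11 * p[2] - B12 * p[1]) / det,
}

# Active power flow on line (i, j): f_ij = b_ij * (theta_i - theta_j).
flows = {(i, j): b_ij * (theta[i] - theta[j]) for (i, j), b_ij in b.items()}
print(theta)
print(flows)
```

A real tool such as PowSyBl layers full AC models, contingency analysis and much more on top of this idea, which is why the workshop's interactive notebook is a better entry point than hand-rolled code.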
Neuro-symbolic is not enough, we need neuro-*semantic* (Frank van Harmelen)
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
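For readers unfamiliar with link prediction over knowledge graphs, here is a minimal embedding-based scorer in the style of TransE. It is purely illustrative (hand-made two-dimensional embeddings, not the speaker's method): a triple (head, relation, tail) scores higher when vec(head) + vec(relation) lands close to vec(tail):

```python
import math

# Minimal TransE-style link prediction sketch (illustrative only).
# Tiny hand-made 2-d embeddings for entities and relations.
entity = {
    "Paris":  [0.9, 0.1],
    "France": [1.0, 1.0],
    "Berlin": [0.2, 0.8],
}
relation = {"capital_of": [0.1, 0.9]}

def score(h, r, t):
    """Negative Euclidean distance between h + r and t (higher is better)."""
    diff = [entity[h][i] + relation[r][i] - entity[t][i] for i in range(2)]
    return -math.sqrt(sum(d * d for d in diff))

# Rank candidate tails for the query (Paris, capital_of, ?).
candidates = ["France", "Berlin"]
best = max(candidates, key=lambda t: score("Paris", "capital_of", t))
print(best)  # → France
```

Van Harmelen's point is precisely that such a scorer only becomes "neuro-semantic" when the symbols carry a semantics that makes its inferences predictable, rather than being arbitrary labels.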
Let's dive deeper into the world of ODC! Ricardo Alves (OutSystems) will join us to tell us all about the new Data Fabric. After that, Sezen de Bruijn (OutSystems) will get into the details of how to best design a sturdy architecture within ODC.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... (James Anderson)
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. The constant focus on speed to market, combined with traditionally slow and manual security checks, has caused gaps in continuous security, an important piece of the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface of their application supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work and a knack for helping others understand how things work. He has around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and on application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti... (Jeffrey Haguewood)
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Search and Society: Reimagining Information Access for Radical FuturesBhaskar Mitra
The field of Information retrieval (IR) is currently undergoing a transformative shift, at least partly due to the emerging applications of generative AI to information access. In this talk, we will deliberate on the sociotechnical implications of generative AI for information access. We will argue that there is both a critical necessity and an exciting opportunity for the IR community to re-center our research agendas on societal needs while dismantling the artificial separation between the work on fairness, accountability, transparency, and ethics in IR and the rest of IR research. Instead of adopting a reactionary strategy of trying to mitigate potential social harms from emerging technologies, the community should aim to proactively set the research agenda for the kinds of systems we should build inspired by diverse explicitly stated sociotechnical imaginaries. The sociotechnical imaginaries that underpin the design and development of information access technologies needs to be explicitly articulated, and we need to develop theories of change in context of these diverse perspectives. Our guiding future imaginaries must be informed by other academic fields, such as democratic theory and critical theory, and should be co-developed with social science scholars, legal scholars, civil rights and social justice activists, and artists, among others.
UiPath Test Automation using UiPath Test Suite series, part 3DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation Introduction,
UI automation Sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Connector Corner: Automate dynamic content and events by pushing a buttonDianaGray10
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
Transcript: Selling digital books in 2024: Insights from industry leaders - T...BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
FIDO Alliance Osaka Seminar: Passkeys at Amazon.pdf
Meet the Data Processing Workflow Challenges of Oil and Gas Exploration With Advanced Data Storage: Solution Profile
Halliburton Landmark SeisSpace Software and Hitachi Storage Solution Match New Levels of Sophistication in the Energy Industry
To meet growing worldwide demand, oil and gas exploration and production (E&P) organizations are under greater pressure to find new sources of energy. And while exploration has always been expensive, today's programs are in increasingly hostile environments, making speed to discovery much more urgent.
To expedite efforts, E&P organizations rely on sophisticated geophysical technologies, including reverse time migration, waveform inversion, and 3-D and 4-D downhole sensors, to support higher-quality decision-making. Growing volumes of 3-D data must be analyzed in shorter time frames to make efficient use of resources deployed in the field.
SOLUTION PROFILE
This explosive growth in data volumes presents new processing challenges. To get the most out of raw data, most E&P organizations have turned to 3-D seismic processing applications. Many look to Landmark SeisSpace software in particular.
A division of Halliburton, Landmark designs seismic processing solutions that scale from field quality control up through full-volume, real-time production processing. These software applications can be optimized for specific processing throughput requirements. In particular, Landmark software is optimized for tasks including:
■ General quality control (QC), target investigations, and field-specific data integrity workflows.
■ Conventional time processing, Kirchhoff calculations, and amplitude versus offset (AVO).
■ Production-scale processing.
■ Seismic coverage validation for illumination studies, acquisition planning, and targeted imaging workflows.
■ 3-D prestack time and depth migration, velocity analysis imaging, and finite difference forward modeling.
Unlike conventional processing technologies, Landmark offerings place a special emphasis on high-performance, interactive processing algorithms for today's high-performance computing environments. Organizations need a shared network data storage and management solution that scales to accommodate the growing data from seismic equipment and leverages the parallel performance characteristics of Landmark SeisSpace. The solution must also deliver the performance to feed these high-throughput computational workflows.
To better understand the potential performance gains the right storage solution can offer, Hitachi Data Systems joined with Landmark to set up a test bed to experiment with different system configurations. Exploiting the unique features of the networked storage solution accelerated workflows, significantly cutting the processing time required to derive results. Additionally, the testing found that the Hitachi system could run both primary and secondary Landmark storage workloads at the same time, something no other vendor had been able to achieve.
Test Environment
When trying to match a suitable storage solution with a seismic analysis solution, it is important to keep in mind that you cannot rely on narrow benchmarks. Real-world application workloads are complex, and overall analysis throughput can vary greatly over time, depending on something as minor as an application's configuration settings. With these issues in mind, we jointly examined the challenges, nuances, and potential benefits of integrating and optimizing seismic analysis systems. Storage and data management solutions and high-throughput workflows were also tested.
In particular, the tests explored how to take advantage of specific application features to boost analysis workflows and reduce the typical processing times for analysis.
The testing searched for ways to exploit Landmark SeisSpace performance-enhancing capabilities. Landmark SeisSpace was designed with new parallel-distributed-memory architectures in mind. It supports the JavaSeis prestack format, which allowed for the development of algorithms suited to true volume processing. The key to these parallel efficiencies lies in the software's ability to leverage the parallel memory I/O benefits of JavaSeis. Essential to this capability are a storage filer and network throughput capacities that are unlikely to become overwhelmed by the I/O transactions or storage speed demands of seismic processing.
To take advantage of the software's enhancements, however, an E&P organization must strike a delicate balance between a computing system's processing capabilities and the IT infrastructure's bandwidth and IOPS.
To evaluate the impact of fine-tuning a storage solution to match the performance capabilities of the software, Hitachi Data Systems set up a test bed infrastructure (see Figure 1).
The initial setup consisted of a Hitachi NAS Platform (HNAS) 3090 cluster with a single storage pool containing 180 x 600GB 15K SAS drives. There were 10GbE link aggregation control protocol (LACP) connections into a 10GbE switch and 2 Fibre Channel connections per HNAS 3090 node to the back-end storage.
The testing compute nodes consisted of 32 Linux-based systems (CentOS v5.6), each attached via 1GbE. Each compute node had 2 mounts, to a primary and a secondary file system. Each file system resided on its own enterprise virtual server (EVS), which enabled easy migration of the mount points.
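A quick back-of-envelope check shows why balancing compute capability against storage bandwidth matters in a setup like this: the 32 1GbE-attached nodes can collectively demand more bandwidth than a single filer is rated for. The per-link and filer figures below are assumptions noted in the comments, and the calculation is only a sizing sketch:

```python
# Back-of-envelope sizing check for the test bed described above.
# Assumption: 1GbE sustains roughly 125MB/sec of payload per compute node.
nodes = 32
per_node_mb_s = 125
aggregate_client_demand = nodes * per_node_mb_s  # peak client-side demand

# HNAS 3090 rated throughput without the performance accelerator,
# as cited in the test results later in this profile.
filer_rated_mb_s = 1100

print(f"clients can demand up to {aggregate_client_demand}MB/sec "
      f"vs. {filer_rated_mb_s}MB/sec at the filer")
```

With roughly 4,000MB/sec of potential client demand against an 1,100MB/sec filer, the filer rather than the client network is the side of the balance that storage tuning must address.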
With this configuration, 2 baseline testing runs were conducted. The 1st was a read/write test against seismic shot data; the 2nd was a read/write/sort function against a similar subset of data. The 1st test run completed in approximately 55 minutes, and the 2nd ran in excess of 4 hours. These results were consistent with previous experiences, but with a performance edge over other storage vendors.
Only Hitachi Data Systems with networked storage has demonstrated the ability to meet requirements for Landmark SeisSpace primary and secondary storage with a single solution. This solution allows an organization to consolidate its storage infrastructure.
Figure 1. The test configuration with Hitachi NAS Platform 3090 met the performance requirements for both the primary and secondary storage for Halliburton Landmark SeisSpace software.
Innovation is the engine of change, and information is its fuel. Innovate intelligently to lead your market, grow your company, and change the world. Manage your information with Hitachi Data Systems.
www.hds.com/innovate
A number of configuration changes were then made to fine-tune the performance. The 1st change was to upgrade the existing HNAS 3090 networked storage system to the latest release of the HNAS system software, v8. One thing that sets Hitachi Data Systems apart from competitors is our firmware approach, with hardware acceleration through field programmable gate arrays (FPGAs). This capability allows administrators to change characteristics normally associated with hardware through a software upgrade. The HNAS system software also helps end users analyze data access patterns and then improve performance.
At each stage of the testing, standardized performance reports were gathered against the primary and secondary file systems. The Landmark team adjusted networking parameters in the compute nodes. The NFS mount parameters were optimized for larger block sizes, which resulted in up to a 15% performance improvement.
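The profile does not list the exact mount options used in the test. As an illustrative sketch only, larger NFS block sizes are typically requested with the `rsize` and `wsize` mount options; the values, hostname, and paths below are hypothetical:

```shell
# Hypothetical example: request 64KB NFS read/write block sizes.
# Hostname, export path, and option values are illustrative only,
# not the settings used in the Hitachi/Landmark test.
mount -t nfs -o rsize=65536,wsize=65536,hard,tcp \
    filer01:/primary /mnt/primary
```

The effective block size is negotiated with the server, so the filer must also be configured to support the larger transfer sizes for the change to take effect.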
The parameters for sparse file system functionality were also adjusted. SeisSpace requires sparse file functions for accurate application reporting, including the capability to report the actual space used (sparseness) versus the assumed (thin-provisioned) space utilization.
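The distinction between actual and assumed space can be observed on any sparse-capable Linux file system. This small sketch, which is not taken from the test, compares a file's apparent (thin-provisioned) size with the blocks actually allocated on disk:

```python
import os
import tempfile

# Create a sparse file: seek past a large "hole", then write a few bytes.
fd, path = tempfile.mkstemp()
try:
    with os.fdopen(fd, "wb") as f:
        f.seek(100 * 1024 * 1024)  # 100MB hole; no blocks allocated for it
        f.write(b"end")
    st = os.stat(path)
    apparent = st.st_size            # the assumed (thin-provisioned) size
    allocated = st.st_blocks * 512   # the actual space used on disk
    print(f"apparent={apparent} bytes, allocated={allocated} bytes")
finally:
    os.remove(path)
```

On a sparse-capable file system the allocated figure is a few kilobytes, while the apparent size is just over 100MB; it is exactly this gap that SeisSpace needs the storage to report accurately.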
In subsequent tests, performance results peaked near the specified HNAS 3090 performance (72,921 IOPS; 1,100MB/sec throughput without the performance accelerator). At this point, all EVSs were also migrated to a single physical node to demonstrate the same performance, even without the failover ability of a 2nd cluster node.
The original read/write shots test decreased from a 55-minute runtime to just over 20 minutes (at 1,035MB/sec throughput), a 63% improvement (see Figure 2). The 2nd test, the sort, also yielded a performance improvement of more than 60%.
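The 63% figure follows directly from the two runtimes. Taking "just over 20 minutes" as roughly 20.35 minutes, an assumed value consistent with the stated percentage:

```python
# Improvement of the tuned run over the 55-minute baseline.
baseline_min = 55.0
tuned_min = 20.35  # assumed reading of "just over 20 minutes"
improvement = (baseline_min - tuned_min) / baseline_min
print(f"{improvement:.0%} faster")
```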
The tests also demonstrated that the HNAS 3090 system (even as a single node) could run both sets of Landmark workloads (primary and secondary) simultaneously. No other vendor has been able to sustain this level of performance.
Hitachi NAS Platform
Hitachi NAS Platform is an advanced, integrated network attached storage (NAS) solution. It is a powerful tool for file sharing as well as file server consolidation, data protection, and business-critical NAS workloads. With HNAS, you can solve challenges associated with data growth while achieving a low total cost of ownership (TCO).
Features
■ Powerful hardware-accelerated file system for multiprotocol file services, dynamic provisioning, intelligent tiering, virtualization, and cloud infrastructure.
■ High performance and scalability: up to 2GB/sec and 140,000 input/outputs per second (IOPS) per node, with up to 16PB of usable capacity.
■ File-level virtualization in a global namespace isolates the user from technology or vendor dependencies. It also enables unified access to data stored on storage systems from other vendors or open-source solutions like Lustre.
■ Policy-based, universal file migration simplifies deploying new technology and migrating data, without impacting application workflows.
■ Seamless integration with Hitachi SAN storage, Hitachi Command Suite, and Hitachi Data Discovery Suite for advanced search and indexing across HNAS systems.
Figure 2. After applying best-practices configuration changes, HNAS 3090 performance improved by 63%.
■ Integration with Hitachi Content Platform for active archiving, regulatory compliance, and large object storage for cloud infrastructure.
Benefits
■ Simplifies your IT infrastructure by allowing you to consolidate NAS devices or file servers and migrate data by policy across multiple vendors and technologies.
■ Reduces the complexity of storage management and lowers your TCO.
■ Significantly improves efficiency, agility, and utilization across NAS environments through advanced virtualization and data protection capabilities.
■ Offers exceptional performance and improves productivity for Halliburton Landmark SeisSpace environments.
Figure 3. Highly scalable Hitachi Unified Storage 150 with Hitachi NAS Platform.