The presentation covers solutions from EMC to improve performance, increase data protection, and enhance business continuity for your conventional SAP applications as well as SAP HANA.
UKOUG Tech 15 - Migration from Oracle Warehouse Builder to Oracle Data Integr... – Jérôme Françoisse
Oracle Data Integrator is the strategic Data Integration tool replacing Oracle Warehouse Builder, offering more flexibility and supporting more technologies. Based on the story of Eurocontrol – the European Organisation for the Safety of Air Navigation – we will review a migration from Oracle Warehouse Builder to Oracle Data Integrator using the migration utility and custom scripting. After looking at the roadmap and the architecture used, we will see how to automatically migrate supported components, how to handle the remaining ones, and what needs to be fine-tuned. Finally, we will talk about the testing, the challenges, the risks and the lessons learnt, so you will be ready to successfully achieve such a migration for your company.
This presentation explores a broad cross-section of enterprise Postgres deployments to identify key usage patterns and reveals important aspects of performance, scalability, and availability including:
* Challenges organizations encounter most frequently during the stages of database development, deployment and maintenance
* Tuning parameters used most frequently to improve performance of production databases
* Frequently problematic database maintenance processes and configuration parameters
* Most commonly-used database back-up and recovery strategies
Data today is extremely important and intrinsically valuable for every organization. That is why, when we talk about Oracle Database, we are referring to the capital of our company, whether public or private. To exploit the full potential of the Oracle database, however, you need an infrastructure that makes it easier to access, simplifies its management, and provides the level of performance needed to guarantee the scalability required to maintain these conditions over time. Constant change in society pushes companies to keep up to date, and over time this process leads to growth in the data stored in our databases, with a consequent increase in their criticality. Oracle Database Appliance is the engineered system created by Oracle to manage its databases efficiently, minimizing the effort required to maintain them and thus allowing you to focus your efforts on activities directly related to the core business. During the webinar we will analyze practical use cases demonstrating how, today, you can take advantage of the benefits offered by Oracle Database Appliance to meet the different needs that managing a complex, high-performance IT infrastructure may require.
Oracle MAA Best Practices - Applications Considerations – Markus Michalewicz
Providing the highest levels of availability is the main goal of Oracle's Maximum Availability Architecture (MAA), which has been available for more than two decades. This presentation looks at Oracle MAA from a slightly different angle, as MAA should really be considered by the DBA as well as by developers and even by non-Oracle customers.
Beginner's Guide to High Availability for Postgres – EDB
Highly available databases are essential to organizations depending on mission-critical, 24/7 access to data. PostgreSQL is widely recognized as an excellent open-source database, with the maturity and critical features that allow organizations to scale and achieve high availability.
This webinar will explore:
- High availability concepts and workings
- RPO, RTO, and uptime in high availability
- Postgres high availability using:
- Streaming replication
- Logical replication
- Important high availability parameters in Postgres and options to monitor high availability.
- EDB tools (EDB Postgres Failover Manager, BART etc) to create a highly available Postgres architecture
Virtualizing SAP HANA with Hitachi Unified Compute Platform Solutions: Bring... – Hitachi Vantara
Virtualizing SAP HANA with Hitachi Unified Compute Platform Solutions: Bringing Flexibility, Agility and Readiness to the Real-Time Enterprise. VMworld 2015
Hitachi Unified Compute Platform Select for SAP HANA -- Solution Profile – Hitachi Vantara
A profile of a converged scale-out solution with Hitachi Unified Compute Platform Select SAP HANA. For more information on Hitachi Unified Compute Platform solutions please visit: http://www.hds.com/products/hitachi-unified-compute-platform/?WT.ac=us_mg_pro_ucp
The highlights of this presentation featuring Postgres Enterprise Manager 4.0 include:
• Perfecting your Performance with advanced features such as performance home pages, SQL Profiler, Index Advisor, Postgres Expert, and Tuning Wizard.
• Capacity Planning and Forecasting by automating the collection of your key performance statistics, and customizing metrics and reports to analyze historical trends.
• How to Script Less and Monitor More with a really cool graphical interface that provides a fast and consistent method of working with database probes, alerts and various task managers simultaneously.
• Setting up your Customizable Dashboards that consolidate and display all your data with at-a-glance visualization tools in either a platform-specific client or a web client.
Please visit http://www.Enterprisedb.com/pem for more information.
Apache Geode (incubating) is the core of Pivotal GemFire, now available as an open-source project governed by the Apache Software Foundation Incubator. The legacy of Pivotal GemFire and the ASF community uniquely position Geode as a secret ingredient for modern-day data management architectures.
These types of architectures require a robust in-memory data grid solution to handle a variety of use cases, ranging from enterprise-wide caching to real-time transactional applications at scale. In addition, as growth in memory size and network bandwidth continues to outpace that of disk, the importance of managing large pools of RAM at scale increases. It is essential to innovate at the same pace.
Apache Geode (incubating) has all the right ingredients to do for RAM what HDFS has done for direct-attached disks. The excitement (and funding!) in this area of the big data ecosystem is palpable, and the ASF is the place where the innovation is happening. Come to this session to understand a brief history of Geode, its architecture and use cases, and its design philosophy and principles, but most importantly: how you too can participate in the in-memory data center revolution.
This presentation addresses and tries to provide a reasonable answer to the rather common question: "Why should we still use an Oracle Database?", which more often than not is raised by management, but in the presence of more and more "alternative solutions", also by non-Oracle database administrators who are looking to solve a particular problem.
This is the first presentation of my "Oracle Fundamentals" presentation series.
Note that slide 35 (of 38) is outdated and should not have been included. Unfortunately, SlideShare does not allow for re-uploads anymore and hence, I will leave it in for now.
Best Practices for a Complete Postgres Enterprise Architecture Setup – EDB
This presentation provides the details of a best-practice reference architecture for deploying Postgres into your enterprise for large scale OLTP solutions. It reviews how to put all the key pieces together to build a robust, reliable and cost-effective Postgres infrastructure, providing recommendations for configuration and deployment guidance.
This presentation reviews:
* Standard requirements for robust and reliable OLTP architecture
* How to use open-source-based Postgres Plus building blocks to meet those requirements
* High availability system design with streaming replication
* Backup with logical and physical backup recommendations and setup for point-in-time recovery
* Replication – single master and multi-master considerations
* Database infrastructure monitoring with alerts
* Managing and tuning your Postgres database configuration
To listen to the recording visit www.enterprisedb.com - Resources - Webcasts - On Demand webcasts
Email sales@enterprisedb.com with your questions about Postgres.
EnterpriseDB's Best Practices for Postgres DBAs – EDB
This presentation reviews techniques to become a high performance Postgres DBA such as:
- Day to day monitoring
- Ongoing maintenance tasks including bloat and index maintenance
- Database and OS parameter tuning for performance
- Security practices
- Planning for production deployment
- High availability best practices including strategies for backup and recovery
- Ideas for professional development
To listen to the recording of this presentation, visit Enterprisedb.com > Resources > Webcasts > On Demand Webcasts
EMEA TechTalk – The NetApp Flash Optimized Portfolio – NetApp
EMEA TechTalk – October 7th, 2014 - Learn how NetApp Flash Optimized Storage improves application performance, reduces storage capacity, costs and complexity in the data centre.
NetApp IT Efficiencies Gained with Flash, NetApp ONTAP, OnCommand Insight, Al... – NetApp
During an Insight Las Vegas 2017 breakout presentation, NetApp IT Senior Manager of Customer-1, Pridhvi Appineni, talked about IT's business results of running a global enterprise on NetApp technology. From being cloud ready to data compliant to disaster prepared, NetApp technology is at the heart of our stable, reliable IT data management environment.
Flash is a game-changing technology, at least that is what the market would like you to believe. After all, it enables predictable, consistent performance and I/O efficiency. But…
- Microseconds make the difference, but the rules of the game do not change.
- Not all Flash was created equal
- Disk is not dead, even if some vendors would like you to believe it is
Watch this presentation for a sober look at Flash and to find out what its real impact is on your data center infrastructure.
NetApp IT Data Center Strategies to Enable Digital Transformation – NetApp
During an Insight Las Vegas 2017 breakout presentation, NetApp IT Customer-1 Director, Stan Cox, and Senior Storage Architect, Eduardo Rivera, explained how NetApp IT enables digital transformation with data center strategies that incorporate ONTAP AFF systems in the data center to save power, cooling and space, and NetApp Private Storage and ONTAP Cloud to leverage the public cloud while retaining control of their data. Using OnCommand Insight for data center management – and its integration with their configuration management database – the NetApp IT team knows what's in their data centers, in terms of functionality, usage, and interconnections. NetApp IT believes knowing what's in your data centers is fundamental to maintaining total cost of ownership, adapting to new technologies, leveraging the cloud while owning your data, and enabling digital transformation.
Addressing Issues of Risk & Governance in OpenStack without sacrificing Agili... – OpenStack
Addressing Issues of Risk and Governance in OpenStack without Sacrificing Agility
Audience: Intermediate
Topic: Public & Hybrid Clouds
Abstract: OpenStack has rapidly moved beyond the "science project" label that many of its detractors use, but for many stakeholders there are still many uncertainties around governance, compliance, data security and data retention. These issues are the biggest inhibitors to adoption of any cloud technology and, left unanswered, will slow down the adoption of OpenStack, particularly within government and highly regulated industries such as healthcare. In this presentation NetApp outlines a hybrid approach that leverages the best of open-source and next-generation technologies within an OpenStack deployment, as well as a way of unifying data management across OpenStack, hyperscale public cloud and traditional enterprise architecture that addresses these questions while providing a solid platform for rapid innovation.
Speaker Bio: John Martin, NetApp
John Martin is NetApp’s Director of Strategy and Technology, working as part of the Office of the CTO. Based in Sydney, John is responsible for developing and advocating NetApp’s flash portfolio across the APAC region.
John is one of the driving forces behind NetApp’s continued expansion into flash and works closely with field sales, the channel and alliance technology partners to provide innovative solutions that solve customer business challenges.
While John is NetApp’s flash champion, he continues to provide technology insights and market intelligence to trends that impacts both NetApp and its customers.
Prior to his current role, John was NetApp's ANZ principal technologist for over six years, and he has more than 20 years' experience working in the IT industry.
John joined NetApp in 2006 as a systems engineer. Prior to this, he was a principal of GRID IT, where he built relationships with a variety of major storage vendors while also helping to start two storage-related businesses. At GRID IT, John was involved in senior pre-sales, consulting and training for Legato, Veritas, and StorageTek.
In his spare time, John enjoys singing, writing and cooking. He also spends time researching modernist and postmodernist philosophy, ancient history, social justice and global development.
OpenStack Australia Day Government - Canberra 2016
https://events.aptira.com/openstack-australia-day-canberra-2016/
Lessons learned processing 70 billion data points a day using the hybrid cloud – DataWorks Summit
NetApp receives 70 billion data points of telemetry information each day from its customers' storage systems. This telemetry data contains configuration information, performance counters, and logs. All of this data is processed using multiple Hadoop clusters, and feeds a machine learning pipeline and a data serving infrastructure that produces insights for customers via an application called Active IQ. We describe the evolution of our Hadoop infrastructure from a traditional on-premises architecture to the hybrid cloud, and lessons learned.
We’ll discuss the insights we are able to produce for our customers, and the techniques used. Finally, we describe the data management challenges with our multi-petabyte Hadoop data lake. We solved these problems by building a unified data lake on-premises and using the NetApp Data Fabric to seamlessly connect to public clouds for data science and machine learning compute resources.
Architecting a truly hybrid cloud implementation allowed NetApp to free up its data scientists to use any software on any cloud, kept the customer log data safe on NetApp Private Storage in Equinix, enabled faster innovation and release of new code, and provided the flexibility to use any public cloud while the data remains on NetApp in Equinix.
Speaker
Pranoop Erasani, NetApp, Senior Technical Director, ONTAP
Shankar Pasupathy, NetApp, Technical Director, ACE Engineering
Cephalocon APAC 2018
March 22-23, 2018 - Beijing, China
Lars Marowsky-Brée, SUSE Distinguished Engineer, Ceph Advisory Board member
Marc Koderer, SAP OpenStack Evangelist
FlexPod delivers new integrated infrastructure validated designs with NetApp All-Flash and Cisco ACI that provide new levels of performance and the ability to meet business objectives.
Revolutionising Storage for your Future Business Requirements – NetApp
Non-disruptive Operations, Efficiency and Seamless scale are all topics of discussion by organisations facing challenging growth in the volumes of data stored. In this session Julian Wheeler, NetApp Channel SE Manager, investigates new storage infrastructures that enable you to manage growth, scale and efficiency while improving the service to the business.
Adjusting primitives for graph : SHORT REPORT / NOTES – Subhajit Sahu
Graph algorithms, like PageRank, operate on an in-memory graph representation; Compressed Sparse Row (CSR) is an adjacency-list-based graph representation that is compact and cache-friendly.
Multiply with different modes (map)
1. Performance of sequential vs. OpenMP-based vector multiply.
2. Comparing various launch configs for CUDA-based vector multiply.
Sum with different storage types (reduce)
1. Performance of vector element sum using float vs. bfloat16 as the storage type.
Sum with different modes (reduce)
1. Performance of sequential vs. OpenMP-based vector element sum.
2. Performance of memcpy-based vs. in-place CUDA-based vector element sum.
3. Comparing various launch configs for CUDA-based vector element sum (memcpy).
4. Comparing various launch configs for CUDA-based vector element sum (in-place).
Sum with in-place strategies of CUDA mode (reduce)
1. Comparing various launch configs for CUDA-based vector element sum (in-place).
Levelwise PageRank with Loop-Based Dead End Handling Strategy : SHORT REPORT ... – Subhajit Sahu
Abstract — Levelwise PageRank is an alternative method of PageRank computation which decomposes the input graph into a directed acyclic block-graph of strongly connected components, and processes them in topological order, one level at a time. This enables calculation of ranks in a distributed fashion without per-iteration communication, unlike the standard method where all vertices are processed in each iteration. It however comes with a precondition of the absence of dead ends in the input graph. Here, the native non-distributed performance of Levelwise PageRank was compared against Monolithic PageRank on a CPU as well as a GPU. To ensure a fair comparison, Monolithic PageRank was also performed on a graph where vertices were split by components. Results indicate that Levelwise PageRank is about as fast as Monolithic PageRank on the CPU, but quite a bit slower on the GPU. Slowdown on the GPU is likely caused by a large submission of small workloads, and is expected to be a non-issue when the computation is performed on massive graphs.
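For reference, the per-vertex rank update that both the Monolithic and Levelwise variants compute can be written as follows (a standard formulation, not taken from the report); the loop-based dead-end handling in the title corresponds to giving each dead end a self-loop so every vertex has nonzero out-degree:

```latex
% Rank of vertex v at iteration t+1, with damping factor \alpha (typically 0.85),
% N vertices, in-neighbors \mathrm{in}(v), and out-degree d^{+}(u):
r_{t+1}(v) = \frac{1-\alpha}{N} + \alpha \sum_{u \in \mathrm{in}(v)} \frac{r_t(u)}{d^{+}(u)}
% Loop-based dead-end handling: if d^{+}(u) = 0, add the self-loop (u,u),
% so d^{+}(u) = 1 and u contributes its rank to itself instead of leaking it.
```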
Chatty Kathy - UNC Bootcamp Final Project Presentation - Final Version - 5.23... – John Andrews
SlideShare Description for "Chatty Kathy - UNC Bootcamp Final Project Presentation"
Title: Chatty Kathy: Enhancing Physical Activity Among Older Adults
Description:
Discover how Chatty Kathy, an innovative project developed at the UNC Bootcamp, aims to tackle the challenge of low physical activity among older adults. Our AI-driven solution uses peer interaction to boost and sustain exercise levels, significantly improving health outcomes. This presentation covers our problem statement, the rationale behind Chatty Kathy, synthetic data and persona creation, model performance metrics, a visual demonstration of the project, and potential future developments. Join us for an insightful Q&A session to explore the potential of this groundbreaking project.
Project Team: Jay Requarth, Jana Avery, John Andrews, Dr. Dick Davis II, Nee Buntoum, Nam Yeongjin & Mat Nicholas
The Building Blocks of QuestDB, a Time Series Database – javier ramirez
Talk Delivered at Valencia Codes Meetup 2024-06.
Traditionally, databases have treated timestamps just as another data type. However, when performing real-time analytics, timestamps should be first-class citizens, and we need rich time semantics to get the most out of our data. We also need to deal with ever-growing datasets while remaining performant, which is as fun as it sounds.
It is no wonder time-series databases are now more popular than ever before. Join me in this session to learn about the internal architecture and building blocks of QuestDB, an open-source time-series database designed for speed. We will also review a history of some of the changes we have gone through over the past two years to deal with late and unordered data, non-blocking writes, read replicas, and faster batch ingestion.
Adjusting OpenMP PageRank : SHORT REPORT / NOTES – Subhajit Sahu
For massive graphs that fit in RAM, but not in GPU memory, it is possible to take advantage of a shared-memory system with multiple CPUs, each with multiple cores, to accelerate PageRank computation. If the NUMA architecture of the system is properly taken into account with good vertex partitioning, the speedup can be significant. To take steps in this direction, experiments are conducted to implement PageRank in OpenMP using two different approaches, uniform and hybrid. The uniform approach runs all primitives required for PageRank in OpenMP mode (with multiple threads). On the other hand, the hybrid approach runs certain primitives in sequential mode (i.e., sumAt, multiply).
06-04-2024 - NYC Tech Week - Discussion on Vector Databases, Unstructured Data and AI
Round table discussion of vector databases, unstructured data, ai, big data, real-time, robots and Milvus.
A lively discussion with NJ Gen AI Meetup Lead, Prasad and Procure.FYI's Co-Found
CR-2-2540 Customer Case Study – How Amgen Utilizes NetApp Clustered Data ONTAP and FAS62x0 for Growth and Non-disruptive Operations
Customer Case Study: How Amgen Utilizes NetApp Clustered Data ONTAP and FAS62x0 for Growth and Non-disruptive Operations
Harish Mundre – Principal IS Architect, Amgen Inc.
Richard Stokotelny – Technical Account Manager, NetApp Inc.
CR-2-2540
NetApp Proprietary – Limited Use Only