An introduction to SQL Server 2014 In-Memory OLTP, known as Hekaton (XTP). I presented this session at the SQL User Group Melbourne some time back. Suitable for people who are getting to know Hekaton.
An Expert Guide to Migrating Legacy Databases to PostgreSQL (EDB)
This webinar will review the challenges teams face when migrating from Oracle databases to PostgreSQL. We will share insights gained from running large-scale Oracle compatibility assessments over the last two years, including the more than 2,200,000 Oracle DDL constructs assessed through EDB’s Migration Portal in 2020.
During this session we will address:
Storage definitions
Packages
Stored procedures
PL/SQL code
Proprietary database APIs
Large scale data migrations
We will end the session by demonstrating migration tools that significantly simplify and reduce the risk of migrating Oracle databases to PostgreSQL.
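As a minimal, hedged illustration of the kind of PL/SQL-to-PL/pgSQL rewrite these assessments flag (the function name and body are invented for this sketch; EDB's Oracle compatibility features can often accept the Oracle form unchanged):

```sql
-- Oracle PL/SQL original, shown as a comment:
--   CREATE OR REPLACE FUNCTION get_bonus(p_salary IN NUMBER) RETURN NUMBER IS
--   BEGIN
--     RETURN p_salary * 0.10;
--   END;

-- Community PostgreSQL equivalent in PL/pgSQL:
CREATE OR REPLACE FUNCTION get_bonus(p_salary numeric)
RETURNS numeric
LANGUAGE plpgsql
AS $$
BEGIN
  RETURN p_salary * 0.10;  -- NUMBER maps to numeric; RETURN NUMBER becomes RETURNS numeric
END;
$$;
```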
Transform your DBMS to drive engagement innovation with Big Data (Ashnikbiz)
Erik Baardse and Ajit Gadge from EDB Postgres presented on how to transform your DBMS in order to drive digital business: how Postgres enables you to support a wider range of workloads with your relational database, which opens the Big Data doors. They also cover EnterpriseDB’s strategy around Big Data, which focuses on three areas, and finally, last but not least, how to find money in IT with Big Data and digital transformation.
Operationalizing Data Science Using Cloud Foundry (VMware Tanzu)
SpringOne Platform 2016
Speaker: Lawrence Spracklen; Vice President of Engineering, Alpine Data Labs.
Data science is undoubtedly becoming a key component of every company’s core strategy for growth and increased revenue potential. To meet this market demand, the big data industry has exploded with a variety of tools to address various pieces of the data science value chain, from model scoring, to notebook interfaces, to niche algorithmic techniques. However, despite the increase in innovation in this area, many insights generated by data science teams end up “dying on the vine”. There has to be a better way of deploying operational models to end users through intuitive interfaces that they can use every day.
In this session, we will demo how the joint solution between Alpine’s Chorus Platform and Cloud Foundry addresses this problem and closes the gap between data science insights and business value. We will demo an example of creating a machine learning model leveraging data within MPP databases such as Apache HAWQ or Greenplum Database integrated with the Chorus Platform, and then deploying it as a microservice within Cloud Foundry as a scoring engine. This turnkey solution will show attendees how easy it is to plug analytic insights into end-user applications that scale, without going through lengthy development cycles.
Open Source Software on OpenPOWER systems.
With 100% open source system software (including the firmware), OpenPOWER is the most open server architecture in the market. Based on the IBM POWER8 chip, this new family of servers featuring the latest Nvidia NVLink technology runs all the software solutions presented at OPEN'16 with significant cost advantages. This session explains how Docker, EnterpriseDB and many others benefit from this advanced design, and how 200+ technology companies including Google and RackSpace are collaborating in an open development alliance to build the datacenter of the future.
We cover the IBM solution for HPC. In addition to the hardware and software stack, we show how a rational choice of compilation and runtime parameters helps to significantly improve the performance of technical computing applications.
TechTarget Event - Storage Architectures for the Modern Data Center - Jeramia... (NetApp)
Why Is All-Flash Adoption Growing So Fast?
Presented by Jeramiah Dooley, Principal Architect, SolidFire
To be successful today, IT must transition from a cost center to a competitive advantage – and the path to success is through the data center. More central to business than ever before, the next-generation data center must be powered by all-flash.
All-flash is no longer the future; it's the present. Learn how all-flash can save your IT team time and resources with intelligent policy-based management, automation and more.
5 Tips to Simplify the Management of Your Postgres Database (EDB)
This presentation is a short overview of Postgres database capacity planning, monitoring and acting on key performance indicators, database and application performance evaluation, and other management activities.
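As a small sketch of the kind of capacity and KPI checks the presentation covers (standard PostgreSQL catalog views; no EDB-specific tooling assumed):

```sql
-- Current database size, a basic capacity-planning data point:
SELECT pg_size_pretty(pg_database_size(current_database())) AS db_size;

-- Per-database activity KPIs: connections, commits, cache hits vs. disk reads:
SELECT datname, numbackends, xact_commit, blks_hit, blks_read
FROM pg_stat_database;
```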
Aerospike meetup July 2019 | Big Data Demystified (Omid Vahdaty)
Building a low-latency (sub-millisecond), high-throughput database that can handle big data AND linearly scale is not easy - but we did it anyway...
In this session we will get to know Aerospike, an enterprise distributed primary key database solution.
- We will give an introduction to Aerospike - basic terms, how it works, and why it is widely used in mission-critical system deployments.
- We will understand the 'magic' behind Aerospike's ability to handle small, medium, and even petabyte-scale data while still guaranteeing predictable sub-millisecond latency.
- We will learn how Aerospike DevOps differs from other solutions in the market, and see how easy it is to run Aerospike in cloud environments as well as on premises.
We will also run a demo, showing a live example of the performance and self-healing technologies the database has to offer.
Microsoft has embraced OSS by placing a big bet on Apache YARN to govern the resources of our computing clusters, and we did so by working with the community and adding many new capabilities to YARN. We now look to undertake a similar journey and build the next generation of our job execution engine on top of Apache Tez. We will be building a common platform for executing batch, interactive, ML, and streaming queries at exabyte scale for Microsoft's big data system, Cosmos. This requires us to push the limits of the Tez API: supporting new graph models, changing the executing DAG by dynamically adding new vertices, scheduling for interactive and streaming workloads, squeezing out all the computing power in the cluster by integrating Tez with opportunistic containers in YARN, and scaling a DAG across tens of thousands of machines. We have started out on this journey and want to share our progress and lessons learned, seek help from the community to add these new capabilities, and push Apache Tez to new levels.
SPEAKERS
Hitesh Sharma, Principal Software Engineering Manager, Big Data team, Microsoft
Anupam, Senior Software Engineer, Microsoft
Highly available databases are essential to organizations depending on mission-critical, 24/7 access to data. Postgres is widely recognized as an excellent open-source database, with critical maturity and features that allow organizations to scale and achieve high availability.
This webinar will explore:
- Evolution of replication in Postgres
- Streaming replication
- Logical replication (a minimal sketch follows this list)
- Replication for high availability
- Important high availability parameters
- Options to monitor high availability
- HA infrastructure to patch the database with minimal downtime
- EDB Postgres Failover Manager (EFM)
- EDB tools to create a highly available Postgres architecture
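As a minimal sketch of the logical replication item above (PostgreSQL 10+ syntax; the host, database, and table names are placeholders):

```sql
-- On the publishing (primary) server:
CREATE PUBLICATION sales_pub FOR TABLE public.orders;

-- On the subscribing server; the connection string is a placeholder:
CREATE SUBSCRIPTION sales_sub
  CONNECTION 'host=primary.example.com dbname=sales user=replicator'
  PUBLICATION sales_pub;
```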
This presentation introduces the following functionalities of pgAdmin and PEM that make database management more efficient:
1. Examining the performance of a query using the explain plan visualizer in pgAdmin’s Query Tool (see the example after this list)
2. Examining the performance of a process or session consisting of multiple queries in PEM’s SQL Profiler
3. 24/7 monitoring of Postgres and the underlying host system
4. Capacity management and reporting
5. Alerting the DBA or System Administrator to potential problems
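For item 1, a minimal example of the kind of plan pgAdmin renders graphically (the tables are hypothetical; the same statement works in any Postgres client):

```sql
-- pgAdmin's explain plan visualizer draws the plan this statement produces:
EXPLAIN (ANALYZE, BUFFERS)
SELECT c.name, count(*)
FROM orders o
JOIN customers c ON c.id = o.customer_id
GROUP BY c.name;
```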
DevOps Culture & Enablement with Postgres Plus Cloud DatabaseEDB
The Cloud and DevOps are made for each other. The ease of provisioning computing resources in the cloud is unmatched, cloud scalability allows testing and deployment for any size and type of application, and the cloud lets you reach developers and customers, wherever they may be.
Before you start down the path to DevOps, you'll need to work through organizational and cultural issues that are just as important as your technological issues.
View this presentation to get an overview of DevOps and the steps you need to take to be successful.
When was the last time Oracle costs went down? Find out how EDB Postgres can help:
- Cap, reduce and in some cases, eliminate your Oracle spend
- Mitigate the impact of Oracle ULAs
- Provide choice in selecting an RDBMS
Join this webinar to explore the technical perspective of moving off Oracle.
If you are unsure which Postgres is right for you, rest assured that others have faced the same challenge. Based on EDB's work with other Postgres users, we have developed a guide providing you with the pros and cons of various Postgres solutions so that you can make an educated decision.
This presentation reviews the following scenarios based on usage profiles and the technical and business risks involved:
- PostgreSQL without commercial support
- Creating your own PostgreSQL fork
- PostgreSQL with a consulting partner
- EDB Postgres Standard
- EDB Postgres Enterprise
This presentation will help you select which level of tooling and support is appropriate for your particular use case.
Target Audience:
This presentation is intended for decision-makers and IT leaders who are in the process of evaluating PostgreSQL and EDB Postgres. The knowledge of differences between the solutions discussed will assist in selecting the right database to support your current operations and plans for growth.
Care Risk Solutions, based in Mumbai, India, provides software solutions for Banking, Financial Services, and Insurance (BFSI) sectors. Care Risk carries a suite of solutions primarily to support companies in the financial risk domain. This includes its Enterprise Risk Management suite (ERM), Asset Liability Management (ALM), Fund Transfer Pricing (FTP), International Financial Reporting Standards (IFRS), Financial Reporting Applications, Lending suite, and Early Warning systems.
To ensure support for its wide range of products, Care Risk Solutions chose to migrate from its legacy databases to EDB Postgres. The switch enabled Care Risk to widen its product availability across multiple databases and gave it the ability to track large contracts across many years. Care Risk also benefited from services support from Chemtrols Infotech, including SLA maintenance with end customers.
In this presentation, a Care Risk representative explains their migration journey from legacy database systems to EDB Postgres, the challenges faced, benefits of migrating to EDB Postgres, and much more.
How does EDB Postgres help achieve business continuity for your database?
Database downtime can cost your business a significant amount of money and stress. Efficient planning, design, architecture, and the right tools can help your organization minimize the damage during a crisis event. Today, PostgreSQL has become an essential database technology and a database of choice for enterprise customers.
It is essential to have the right capabilities and tools for configuring business continuity for databases, and that's where EDB Postgres enables enterprise customers to achieve business continuity for the Postgres database through its features and tools.
In addition to high availability and replication, one of the crucial aspects of planning business continuity for the database is disaster recovery. Here, the EDB Backup and Recovery Tool (BART) plays an important role.
In this webinar, learn how EDB Postgres can ensure your database is secure and accessible even during an unforeseen disaster.
Using PEM to understand and improve performance in Postgres: Postgres Tuning ... (EDB)
The Postgres Enterprise Manager (PEM) Tuning Wizard reviews your installation and recommends a set of configuration options that will help tune a Postgres installation to best suit the anticipated workload. PEM's Performance Diagnostics uses Postgres' wait state information to analyze queries in the context of the current workload and helps identify further performance improvement opportunities in terms of locks, IO, and CPU bottlenecks.
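The wait-state information PEM draws on is exposed by Postgres itself (9.6 and later); a minimal sketch of querying it directly:

```sql
-- Current non-idle sessions with their wait events: the raw data behind
-- PEM's Performance Diagnostics analysis of lock, IO, and CPU bottlenecks.
SELECT pid, wait_event_type, wait_event, state, query
FROM pg_stat_activity
WHERE state <> 'idle';
```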
This webinar covers:
- How to intelligently manage all your database servers with a single console
- How to identify the useful features and functionality needed for visual database administration
- How to manage the performance and design of your database servers
The EDB Postgres Platform is an enterprise-class data management platform based on the open source database PostgreSQL, complemented by tool kits for management, integration, and migration; flexible deployment options, and services and support to enable enterprises to deploy Postgres at scale.
Webinar vs. seminar: an attempt at a comparative analysis of the effectiveness of translator training (Olga Arakelyan)
The talk, devoted to a comparative analysis of the effectiveness of distance versus in-person translator training, was given by company director Fedor Vyacheslavovich Kondratovich at the Summer Translation School of the Union of Translators of Russia in July 2015.
VMworld 2013: Virtualizing Databases: Doing IT Right (VMworld)
VMworld 2013
Michael Corey, Ntirety, Inc
Jeff Szastak, VMware
Learn more about VMworld and register at http://www.vmworld.com/index.jspa?src=socmed-vmworld-slideshare
IBM Consultants & System Integrators Interchange - 2015
http://www-07.ibm.com/events/in/csiinterchange/index.html
Demystify OpenPOWER
Speaker: Anand Haridass, Chief Engineer – Power System, IBM India
OpenPOWER is an open development community, using the POWER Architecture to serve the evolving needs of customers. Hear about the success of the OpenPOWER strategy and Foundation that is building momentum, and fueling an explosion of new development, innovation and collaboration, and improved performance on the POWER Architecture. What does this mean for your clients? Find out how OpenPOWER is expanding the Power ecosystem and capabilities with new solutions coming from IBM and our partners.
Oracle Database In-Memory will be generally available in July 2014 and can be used with all hardware platforms on which Oracle Database 12c is supported.
Accelerate database performance by orders of magnitude for analytics, data warehousing, and reporting while also speeding up online transaction processing (OLTP).
Allow any existing Oracle Database-compatible application to automatically and transparently take advantage of columnar in-memory processing, without additional programming or application changes.
This presentation provides a clear overview of how Oracle Database In-Memory optimizes both analytics and mixed workloads, delivering outstanding performance while supporting real-time analytics, business intelligence, and reporting. It provides details on what you can expect from Database In-Memory in both Oracle Database 12.1.0.2 and 12.2.
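As a hedged sketch of how transparent this is to applications (the table name is hypothetical, and the instance must have INMEMORY_SIZE configured):

```sql
-- Oracle Database 12.1.0.2+: populate a table into the In-Memory column store.
-- Existing queries then use the columnar copy automatically; no SQL changes.
ALTER TABLE sales INMEMORY PRIORITY HIGH;
```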
Oracle Systems Overview
Engineered systems strategy and an overview of Exadata, Exalytics, SuperCluster, Exalogic, the Oracle virtual appliance, and the ZFS appliance.
In these Microsoft slides we can see what SQL 2014 brings in areas such as: memory-optimized tables, changes to cardinality estimation, backup encryption, architecture improvements, Always On, changes to Resource Governor, and data files in Azure.
TimesTen In-Memory Database for Extreme Performance (Oracle Korea)
As we have entered a mobile era where work is possible anywhere, data sizes have grown dramatically, and processing that data requires a fast, high-performance database. Reflecting these requirements, the databases we have long been using are adopting in-memory technology one after another. In-memory technology has been around for a long time, but it saw little use because of hardware limitations and a lack of software scalability.
Oracle TimesTen 18.1 is an in-memory relational database that overcomes the limitations of existing in-memory databases and supports fast processing with a scale-out distributed architecture.
This session introduces Oracle TimesTen's distributed architecture and key features, with a demo of TimesTen 18.1, the latest version. It will also share a real-world adoption case and performance test results from Eluon, which is currently building services for a domestic telecom carrier on TimesTen.
GraphRAG is All You Need? LLM & Knowledge Graph (Guy Korland)
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do... (UiPathCommunity)
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
Essentials of Automations: Optimizing FME Workflows with Parameters (Safe Software)
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Kubernetes & AI - Beauty and the Beast!?! @ KCD Istanbul 2024 (Tobias Schneck)
As AI technology pushes into IT, I was wondering, as an "infrastructure container Kubernetes guy", how does this fancy AI technology get managed from an infrastructure operations view? Is it possible to apply our lovely cloud-native principles as well? What benefits could both technologies bring to each other?
Let me take these questions and provide you a short journey through existing deployment models and use cases for AI software. Using practical examples, we discuss what cloud/on-premise strategy we may need for applying it to our own infrastructure to get it to work from an enterprise perspective. I want to give an overview of infrastructure requirements and technologies, and what could be beneficial or limiting for your AI use cases in an enterprise environment. An interactive demo will give you some insights into the approaches I already have working for real.
UiPath Test Automation using UiPath Test Suite series, part 4 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimizing testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
JMeter webinar - integration with InfluxDB and Grafana (RTTS)
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana (a sample Grafana query sketch follows this list):
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
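As a hedged illustration, a Grafana panel might query the metrics JMeter writes to InfluxDB with InfluxQL like the following. The measurement, field, and tag names follow my recollection of the defaults of JMeter's InfluxDB Backend Listener, so treat them as assumptions and verify against your setup:

```sql
-- Mean response time per transaction from the default "jmeter" measurement;
-- $timeFilter and $__interval are Grafana macros, and rows tagged
-- transaction='internal' are the listener's bookkeeping samples.
SELECT mean("avg")
FROM "jmeter"
WHERE "transaction" <> 'internal' AND $timeFilter
GROUP BY time($__interval), "transaction"
```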
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
Epistemic Interaction - tuning interfaces to provide information for AI support (Alan Dix)
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf (91mobiles)
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
"Impact of front-end architecture on development cost", Viktor TurskyiFwdays
I have heard many times that architecture is not important for the front-end. Also, many times I have seen developers implement features on the front-end just by following the standard rules for a framework and think that this is enough to successfully launch the project, and then the project fails. How do you prevent this, and which approach should you choose? I have launched dozens of complex projects, and during the talk we will analyze which approaches have worked for me and which have not.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... (DanBrown980551)
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Let's dive deeper into the world of ODC! Ricardo Alves (OutSystems) will join us to tell all about the new Data Fabric. After that, Sezen de Bruijn (OutSystems) will get into the details on how to best design a sturdy architecture within ODC.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality (Inflectra)
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
Key Trends Shaping the Future of Infrastructure.pdf (Cheryl Hung)
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
2. Why Hekaton (XTP)?
There is a market need for ever higher throughput and lower latency OLTP at a lower cost, and hardware trends demand architectural changes in the RDBMS to meet those demands: stalled CPU clock rates and the attainment of balanced systems.
Hekaton is a high-performance, memory-optimized OLTP engine integrated into SQL Server and architected for modern hardware trends.
4. What is Hekaton (XTP)?
Hekaton is Greek for "hundred", and it was given this name for its ability to speed up database function 100x (possibly). With new latch-free technology, Hekaton is claimed to dramatically increase performance. SQL Server 2014 allows you to migrate the most-used tables in an existing database to memory-optimised 'Hekaton' technology. It is also known as the SQL Server In-Memory database, touted to accelerate transaction throughput with up to 30x performance increases on existing hardware.
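A minimal sketch of what such a migration looks like in T-SQL (SQL Server 2014 syntax; the database, filegroup, and table names are invented for this example):

```sql
-- The database first needs a memory-optimized filegroup:
ALTER DATABASE SalesDb ADD FILEGROUP SalesDb_mod CONTAINS MEMORY_OPTIMIZED_DATA;
ALTER DATABASE SalesDb ADD FILE (NAME = 'SalesDb_mod1', FILENAME = 'C:\Data\SalesDb_mod1')
  TO FILEGROUP SalesDb_mod;

-- A table recreated as memory-optimized; the hash index bucket count is fixed up front:
CREATE TABLE dbo.ShoppingCart (
    ShoppingCartId INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    UserId INT NOT NULL,
    CreatedDate DATETIME2 NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```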
5. What is Hekaton (XTP)? (continued)
Hekaton doesn't use SQL Server's buffer cache or VAS; it stores everything in RAM.
6. In-Memory OLTP – Architectural Pillars
Drivers. Business: ever higher throughput at lower TCO. Hardware trends: steadily declining memory prices and NVRAM, stalling CPU clock rates, and many-core processors.
Customer benefits: high-performance data operations, efficient business-logic processing, frictionless scale-up, and a hybrid engine with an integrated experience.
The four architectural pillars:
- Main-memory optimized: optimized for in-memory data; indexes (hash and range) exist only in memory; no buffer pool or B-trees; stream-based storage.
- T-SQL compiled to machine code: T-SQL is compiled to machine code via a C code generator and VC; invoking a procedure is just a DLL entry point; aggressive optimizations at compile time (a sketch follows below).
- High concurrency: multi-version optimistic concurrency control with full ACID support; the core engine uses lock-free algorithms; no lock manager, latches, or spinlocks.
- SQL Server integration: the same manageability, administration, and development experience; integrated queries and transactions; integrated HA and backup/restore.
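A hedged sketch of the "T-SQL compiled to machine code" pillar, reusing the hypothetical table from the earlier sketch (SQL Server 2014 requires SCHEMABINDING, an EXECUTE AS clause, and an ATOMIC block for natively compiled procedures):

```sql
CREATE PROCEDURE dbo.AddCartItem @CartId INT, @UserId INT
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    -- Compiled to machine code in a DLL at CREATE time; invoking it is
    -- just a DLL entry point, with no interpretation at run time.
    INSERT INTO dbo.ShoppingCart (ShoppingCartId, UserId, CreatedDate)
    VALUES (@CartId, @UserId, SYSUTCDATETIME());
END;
```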
7. In-Memory OLTP Integration and Application Migration
[Architecture diagram: a client app connects through the TDS handler and session management inside SQLServer.exe. Existing SQL Server components (parser, catalog, optimizer; T-SQL query execution; the buffer pool for tables and indexes; the data filegroup and transaction log) sit alongside in-memory OLTP components (the native compiler, which turns natively compiled stored procedures and schema into generated .dll files; memory-optimized tables and indexes; the memory-optimized data filegroup), connected through query interop.]
Moore's law for CPUs and transistors: CPU clock rates have stalled while memory keeps getting cheaper (the thinking behind In-Memory OLTP). For scale, the Digital Alpha 21164 microprocessor had 9.3 million transistors.
Hekaton is a really interesting technology, but it is a world away from the functionality that we know and love. The SQL team have done a great job of disguising this departure from us by integrating it inside the SQL Server engine, but nonetheless it is a different beast entirely. Although the ultimate aim, I would imagine, is a seamless integration where the user (and developer) is not really concerned with the underlying storage technology, there will be many real-world issues if the differences are not fully understood. Hekaton doesn't use SQL Server's buffer cache or VAS; it stores everything in RAM.
The way that Hekaton stores data is in hash buckets; this is a fundamental tenet. A hash is simply a function applied to some key data, and the bucket is where the related row is stored. For example, if our hash function were X % 5, then the buckets for the values 1 through 10 would be populated like this:
Bucket 0: 5, 10
Bucket 1: 1, 6
Bucket 2: 2, 7
Bucket 3: 3, 8
Bucket 4: 4, 9
As % is the modulo operator (divide and return the remainder), 9 % 5 = 4. The hash function SQL Server uses is much more complicated than this and, like the hash function used in a hash join, will differ depending on the data being hashed. This is quite interesting, because when we define a Hekaton table we need to specify up front the number of buckets that we think (perhaps even assume) we need, as in the BUCKET_COUNT clause in the earlier table sketch; the toy hash itself is reproduced as a query below.
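The note's toy X % 5 hash as a runnable query (plain T-SQL, works on any recent SQL Server version):

```sql
-- Assign the values 1..10 to five buckets with the toy hash X % 5:
SELECT n % 5 AS bucket, n AS value
FROM (VALUES (1),(2),(3),(4),(5),(6),(7),(8),(9),(10)) AS t(n)
ORDER BY bucket, n;
-- Bucket 4 holds 4 and 9, since 9 % 5 = 4, matching the table above.
```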
By taking advantage of modern hardware trends, we will enable new applications with higher real-time processing needs, and hence increase the addressable market size. Think of the pigeonhole analogy with mailboxes: hash bucket collisions are bound to happen, and the aim is to reduce them.