LBBC Technologies are the world’s leading designers and manufacturers of industrial autoclave technology. Aerospace customers use this equipment in the manufacture of high-performance castings, such as turbine blades. With hundreds of machines all over the world, LBBC are pushing the boundaries of the support they can offer customers. All LBBC equipment comes fitted with industrial gateways, which simplify the data connections between industrial PLC controllers and web services such as AWS. This enables LBBC to offer their customers “Connected Support” and Web SCADA. Through their Connected Support software solution, LBBC provide customers with advanced diagnostic tools for troubleshooting and process optimization. Discover how they are using a time series platform to enable faster remote anomaly detection and quicker time to resolution.
Join this webinar as Andrew Smith dives into:
The architecture that LBBC have chosen
The role that InfluxDB plays alongside other elements of LBBC’s IIoT infrastructure
The way in which industrial customers are using InfluxDB to monitor equipment condition and provide advanced support services
An example of how the infrastructure is delivering valuable insights that are leading to competitive advantage
InfluxDB tips and best practices (including the MQTT Native Collector)
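The line-protocol framing behind a gateway-to-InfluxDB pipeline like this can be sketched in Python. The measurement, tag, and field names below are illustrative only, not LBBC’s actual schema:

```python
import time

def to_line_protocol(measurement, tags, fields, ts_ns=None):
    """Render one PLC reading as InfluxDB line protocol:
    measurement,tag=value field=value timestamp (nanoseconds).
    Note: real line protocol needs an 'i' suffix on integer fields
    and escaping for spaces/commas; this sketch assumes clean float
    fields and simple tag values."""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    ts_ns = time.time_ns() if ts_ns is None else ts_ns
    return f"{measurement},{tag_str} {field_str} {ts_ns}"

line = to_line_protocol(
    "autoclave",                            # hypothetical measurement name
    {"site": "leeds", "machine": "ac-07"},  # hypothetical tags
    {"pressure_bar": 6.2, "temp_c": 180.5},
    ts_ns=1700000000000000000,
)
```

A gateway would publish strings like this to an MQTT topic that the MQTT Native Collector subscribes to.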
How a Heat Treating Plant Ensures Tight Process Control and Exceptional Quali... (InfluxData)
American Metal Processing Company ("AMP") is the US' largest commercial rotary heat treat facility with customers in the automotive, construction, military, and agriculture industries. They use their atmosphere-protected rotary retort furnaces to provide their clients with three primary hardening services: neutral hardening (quench and temper), carburizing, and carbonitriding.
This furnace style ensures a consistent, uniform heat treatment process compared to traditional batch- or belt-style furnaces; excels at processing high volumes of smaller parts with tight tolerances; and improves the strength and toughness of plain carbon steels. Discover why AMP’s use of Telegraf, InfluxDB, Node-RED, and Grafana allows them to gain 24/7 insights into their plant operations and metallurgical results. Learn how they use time-stamped data to gain accurate metrics about their consumables usage, furnace profiles, and machine status.
Join this webinar as Grant Pinkos dives into:
American Metal Processing's approach to heat treating in a digitized environment through connected systems
Their approach to collecting and measuring sensor data to enable predictive maintenance and improve product quality
Why they need a time series database for managing and analyzing vast amounts of time-stamped data
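As a toy illustration of the kind of check such time-stamped data enables (not AMP’s actual method), a trailing-window deviation test over furnace temperature readings might look like:

```python
from collections import deque

def rolling_anomalies(readings, window=5, max_dev=10.0):
    """Flag timestamps where a reading deviates from the trailing
    window average by more than max_dev degrees."""
    buf = deque(maxlen=window)
    flagged = []
    for ts, value in readings:
        if len(buf) == window:
            mean = sum(buf) / window
            if abs(value - mean) > max_dev:
                flagged.append(ts)
        buf.append(value)
    return flagged

# A stable 850-degree profile with one sudden excursion at t=10
series = [(t, 850.0) for t in range(10)] + [(10, 880.0)]
alerts = rolling_anomalies(series)   # the spike at t=10 is flagged
```

In practice this sort of rule would run against furnace-profile series stored in the time series database rather than an in-memory list.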
Start Automating InfluxDB Deployments at the Edge with balena (InfluxData)
balena.io helps companies develop, deploy, update, and manage IoT devices. By using Linux containers and other cloud technologies, balena enables teams to quickly and easily build fleets of connected devices. Developers are able to use containers with the language of choice and pull IoT sensor data from 70+ different single board computers into balenaCloud. Discover how to use balena.io to automate your InfluxDB deployments at the edge!
During this one-hour session, experts from balena and InfluxData will demonstrate how to build and deploy your own air quality IoT solution. You will learn:
The fundamentals of IoT sensor deployment and management using balena.
How to use a time series platform to collect and visualize metrics from edge devices.
Tips and tricks to using balenaCloud to automate InfluxDB deployments and Telegraf configurations.
How to use InfluxDB's Edge Data Replication feature to collect sensor data and push it to InfluxDB Cloud for analysis.
No coding experience required, just a curiosity to start your own IoT adventure.
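A rough sketch of the store-and-forward idea behind Edge Data Replication, buffer locally and ship in batches, with `push` standing in for the actual cloud write (this is a conceptual model, not the feature’s implementation):

```python
def replicate(edge_buffer, push, batch_size=100):
    """Drain locally buffered points in batches, mimicking the
    store-and-forward behavior of edge-to-cloud replication.
    `push` is any callable that ships one batch to the cloud endpoint."""
    shipped = 0
    while edge_buffer:
        batch = edge_buffer[:batch_size]
        push(batch)                  # e.g. an HTTP write to InfluxDB Cloud
        del edge_buffer[:batch_size]
        shipped += len(batch)
    return shipped

# Simulated air-quality readings buffered at the edge device
buf = [{"pm25": i} for i in range(250)]
batches = []
shipped = replicate(buf, batches.append)   # ships 250 points in 3 batches
```

The real feature additionally persists the buffer to disk so points survive network outages and device restarts.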
Intro to InfluxDB 2.0 and Your First Flux Query by Sonia Gupta (InfluxData)
In this InfluxDays NYC 2019 talk, InfluxData Developer Advocate Sonia Gupta provides an introduction to InfluxDB 2.0 and a review of its new features. She demonstrates how to install it, insert data, and build your first Flux query.
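A first Flux query typically chains `from`, `range`, and `filter`. Here is a minimal sketch that composes one as a string; the bucket and measurement names are placeholders:

```python
def first_flux_query(bucket, measurement, minutes=5):
    """Compose a minimal Flux query: read the last N minutes of one
    measurement from a bucket."""
    return (
        f'from(bucket: "{bucket}")\n'
        f"  |> range(start: -{minutes}m)\n"
        f'  |> filter(fn: (r) => r._measurement == "{measurement}")'
    )

q = first_flux_query("my-bucket", "cpu")   # example bucket/measurement
```

The resulting string is what you would paste into the InfluxDB 2.0 query editor or send to the query API.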
Reduce SRE Stress: Minimizing Service Downtime with Grafana, InfluxDB and Tel... (InfluxData)
NetApp is a global cloud-led, data-centric software company. They are an industry leader in hybrid cloud data services and data management solutions. Their platform enables their customers to store and share large quantities of digital data across physical and hybrid cloud environments. NetApp Engineering’s Site Reliability Engineering team is tasked with supporting their internal build environment, test, and automation infrastructure. After collecting their time-stamped data in InfluxDB, they are using Kapacitor to push alerts directly to Slack via webhooks. Their globally distributed SRE team are able to seamlessly collaborate and troubleshoot. Discover how NetApp uses a time series platform to detect trends in real time that can result in failures within their environments, and to provide key metrics used in SRE postmortems.
Join this webinar as Dustin Sorge dives into:
NetApp's approach to monitoring their SRE team's metrics, including SLOs and SLIs
Their best practices and techniques for monitoring memory usage and CPU usage
How they use InfluxDB and Telegraf to detect trends and coordinate fixes faster
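The alerting step can be illustrated with a toy threshold rule that builds a Slack-style webhook payload. The metric name and rule shape are hypothetical, not NetApp’s configuration:

```python
import json

def slack_alert(metric, value, threshold):
    """Build the JSON payload an alert handler would POST to a Slack
    incoming webhook when a metric crosses its threshold;
    returns None while the metric is healthy."""
    if value <= threshold:
        return None
    return json.dumps({
        "text": f":warning: {metric} at {value} (threshold {threshold})"
    })

payload = slack_alert("build_queue_depth", 120, 100)  # hypothetical metric
```

Kapacitor evaluates rules like this server-side and posts the resulting message to the configured webhook URL.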
How to Monitor DOCSIS Devices Using SNMP, InfluxDB, and Telegraf (InfluxData)
Wide Open West is one of the US' top broadband providers, with over 3,000 employees. They aim to connect residential homes and businesses to the world with fast and reliable internet, TV and phone services. WOW uses SNMP and Telegraf to collect network data from cable modems and metrics from VMs/containers; they use Kafka to stream all time-stamped data to InfluxDB. Kapacitor is used to send alerts to Slack, ServiceNow, and email. Discover how WOW is using a time series platform to collect, monitor, and alert on their entire service delivery network.
Join this webinar as Peter Jones and Dylan Shorter dive into:
WOW's approach to reducing infrastructure downtime and improving service uptime
Observability and alerting best practices
How they use the InfluxDB platform to monitor 600K+ devices
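As a simplified illustration of fleet-wide monitoring logic (the SNR threshold and field names are assumptions, not WOW’s actual rules), grouping degraded cable modems by fiber node might look like:

```python
def flag_degraded(polls, min_snr_db=30.0):
    """Group SNMP poll results by fiber node and list the modems whose
    downstream SNR has fallen below the alerting floor."""
    degraded = {}
    for modem_id, node, snr_db in polls:
        if snr_db < min_snr_db:
            degraded.setdefault(node, []).append(modem_id)
    return degraded

# (modem_id, fiber_node, downstream SNR in dB) -- sample poll results
polls = [("m1", "node-a", 36.2), ("m2", "node-a", 27.9), ("m3", "node-b", 25.0)]
report = flag_degraded(polls)
```

At 600K+ devices, the same grouping is done with queries over the stored series rather than in application code, but the logic is the same.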
Improving Clinical Data Accuracy: How to Streamline a Data Pipeline Using Nod... (InfluxData)
Pinnacle 21 is a leader in clinical trial data software and services. By streamlining the drug approval process, they aim to bring life-saving medicines and treatments to patients faster. Their platform helps biopharmaceutical organizations collect and prepare all clinical trial data for approval and be ready for regulatory review. Their goal is to create clean data pipelines for their clients that result in successful regulatory submissions. Organizations like the FDA and Japan's PMDA, as well as 22 of the Top 25 pharma companies globally, use the solution to validate clinical trial data. To ensure they are providing the best product to their clients, Pinnacle 21 realized they needed observability into their apps, servers and application availability over HTTP. Discover how Pinnacle 21 reduced their monthly infrastructure monitoring spend by using Telegraf and InfluxDB.
Join this webinar as Josh Gitlin dives into:
Pinnacle 21's approach to improving clinical data pipelines
Their automated DevOps monitoring methodology, including Chef
How a time series platform provided them with better analysis, customized based on data source
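The HTTP-availability measurement mentioned above reduces to a simple ratio of successful checks; a minimal sketch:

```python
def availability(checks):
    """Percent of HTTP health checks that returned a 2xx status,
    as a service-availability figure for one reporting window."""
    if not checks:
        return None
    ok = sum(1 for status in checks if 200 <= status < 300)
    return round(100.0 * ok / len(checks), 2)

uptime = availability([200, 200, 503, 200])   # one failed check out of four
```

Telegraf’s HTTP response input plugin collects the per-check statuses; the aggregation is then a query over the stored series.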
How EnerKey Using InfluxDB Saves Customers Millions by Detecting Energy Usage... (InfluxData)
In this presentation, Martti Kontula discusses EnerKey’s strategy for reducing energy consumption, how using a time series database enhances EnerKey’s competitive advantage, and their approach to using machine learning to help their customers forecast and optimize operations.
How azeti Monitors PLC and SCADA Systems Using MQTT and InfluxDB (InfluxData)
How azeti Improves Industry 4.0 Monitoring Using MQTT and InfluxDB
azeti are the creators of an industrial IoT platform that enables customers in the process and manufacturing industries to leverage their unused shop-floor data to lower process complexity and reduce maintenance and operations costs.
By collecting thousands of data points directly from sensors, machine controls (PLC) or control systems (DCS / SCADA), azeti has been able to save customers hundreds of thousands of dollars annually. Discover how azeti uses InfluxDB to enable IIoT use cases like condition monitoring and predictive maintenance for their clients. In this webinar, Florian Hoenigschmid and Sebastian Koch will dive into:
azeti’s approach to enabling IIoT use cases
Their methodology to improve machine health and utilization
Why they use a time series database to store vibration, temperature and other sensor data
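A common condition-monitoring feature for vibration data is windowed RMS; this toy version, with an assumed limit, shows the shape of such a rule (not azeti’s actual algorithm):

```python
import math

def vibration_rms(samples):
    """Root-mean-square of an accelerometer window -- a standard
    condition-monitoring feature stored alongside the raw readings."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def needs_maintenance(samples, limit_g=1.5):
    """Simple condition rule: flag when windowed RMS exceeds an
    assumed vendor vibration limit (in g)."""
    return vibration_rms(samples) > limit_g

quiet = needs_maintenance([0.1, -0.2, 0.15, -0.1])   # healthy machine
```

Storing the RMS per window instead of every raw sample also keeps series cardinality and volume manageable.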
Everything You Need to Know About HCL Notes 64-Bit Clients (panagenda)
This document provides an overview and agenda for a webinar on upgrading from HCL Notes 32-bit clients to 64-bit clients. It discusses the key differences and challenges, including increased memory limits, changes to file paths, and ensuring compatibility with third-party add-ons. The presentation covers how to properly uninstall 32-bit clients, prepare the data directory, and install Notes 12.0.2 64-bit. It also notes potential issues: code that compares platform types or declares external function calls may need adjustments for 64-bit. Maintaining compatibility with add-ons like MarvelClient and templates is also addressed.
Maximizing performance via tuning and optimization (MariaDB plc)
Ensuring that your end users get the performance they expect from your system requires an organized approach to performance management. This session will cover the planning and measurement necessary to ensure satisfied customers, and will also include tips and tricks learned from MariaDB’s years of supporting many of the most demanding installations in the world.
Best Practices: How to Analyze IoT Sensor Data with InfluxDB (InfluxData)
InfluxDB is the purpose-built time series platform. Its high ingest capability makes it perfect for collecting, storing, and analyzing time-stamped data from sensors, down to the nanosecond. The InfluxDB platform has everything developers need: the data collection agent, the database, visualization tools, and a data querying and scripting language. Join this webinar as Brian Gilmore provides a product overview; he will also deep-dive with some helpful tips and tricks. Stick around for a live demo and Q&A time.
Join this webinar as Brian Gilmore dives into:
The basics of time series data and applications
A platform overview — learn about InfluxDB, Telegraf, and Flux
InfluxDB use case examples — start collecting data at the edge and use your preferred IoT protocol (e.g., MQTT)
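One of the usual ingest best practices is batching writes rather than sending points one at a time; a minimal sketch of the batching step (the 5,000-point batch size is a commonly cited starting point, not a hard rule):

```python
def batch_points(points, max_batch=5000):
    """Split a stream of line-protocol points into write-sized
    batches -- batching writes is a standard InfluxDB ingest
    best practice that cuts per-request overhead."""
    return [points[i:i + max_batch] for i in range(0, len(points), max_batch)]

# 12,000 synthetic points become three write requests
batches = batch_points([f"cpu usage={i} {i}" for i in range(12000)])
```

The official client libraries and Telegraf do this buffering for you; the sketch just shows what they are doing under the hood.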
This document discusses F5 Distributed Cloud Services, which provides networking, security, and application delivery services across cloud, on-premises, and edge environments from a centralized SaaS console. It addresses challenges like complexity in coordinating technologies, automation, security across attack surfaces, and limited observability. The platform offers a unified view with centralized management, advanced security, full-stack observability, and automation. Use cases include hybrid/multi-cloud networking, web app and API protection, and running apps globally in cloud and edge. It is delivered via F5's global private network and provides value to DevOps, SecOps, and NetOps teams.
Redis is an in-memory key-value store that is often used as a database, cache, and message broker. It supports various data structures like strings, hashes, lists, sets, and sorted sets. While data is stored in memory for fast access, Redis can also persist data to disk. It is widely used by companies like GitHub, Craigslist, and Engine Yard to power applications with high performance needs.
High Performance, High Reliability Data Loading on ClickHouse (Altinity Ltd)
This document provides a summary of best practices for high reliability data loading in ClickHouse. It discusses ClickHouse's ingestion pipeline and strategies for improving performance and reliability of inserts. Some key points include using larger block sizes for inserts, avoiding overly frequent or compressed inserts, optimizing partitioning and sharding, and techniques like buffer tables and compact parts. The document also covers ways to make inserts atomic and handle deduplication of records through block-level and logical approaches.
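The block-level deduplication idea can be sketched as hashing each insert block and dropping exact retries; this is a client-side simplification of what ClickHouse’s ReplicatedMergeTree does server-side:

```python
import hashlib

def insert_once(block, seen_hashes, table):
    """Block-level deduplication: an identical re-sent block
    (same rows, same order) hashes to the same digest and is
    dropped, making insert retries safe."""
    digest = hashlib.sha256(repr(block).encode()).hexdigest()
    if digest in seen_hashes:
        return False               # duplicate retry, safely ignored
    seen_hashes.add(digest)
    table.extend(block)
    return True

table, seen = [], set()
insert_once([(1, "a"), (2, "b")], seen, table)   # inserted
insert_once([(1, "a"), (2, "b")], seen, table)   # retry, deduplicated
```

This is why retrying a failed insert of the exact same block is safe on replicated tables, while re-sending the rows split into different blocks is not.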
InfluxDB is an open source time series database written in Go that stores metric data and performs real-time analytics. It has no external dependencies. InfluxDB stores data as time series with measurements, tags, and fields. Data is written using a line protocol and can be visualized using Grafana, an open source metrics dashboard.
The document provides information about a distributed control system (DCS). It discusses the maxDPU4F distributed processing unit, the hardware processing engine of the DCS, which performs primary data acquisition, control, and processing functions. It is a self-contained microprocessor-based unit that communicates with I/O modules and other devices over a maxNET network. The document also describes the front panel features of the maxDPU4F unit and provides specifications for its operation.
CCS (Code Composer Studio) is an IDE for developing applications on TI DSPs and MCUs. It allows creating and managing projects, compiling and building code, and debugging programs on both software simulators and hardware debuggers. The document discusses starting a new project in CCS, configuring build options, and debugging tools like breakpoints and watch variables, and gives an overview of compiler sections and the linker configuration file.
This document discusses working with time series data using InfluxDB. It provides an overview of time series data and why InfluxDB is useful for storing and querying it. Key features of InfluxDB covered include its SQL-like query language, retention policies for managing data storage, continuous queries for aggregation, and tools for data collection, visualization and monitoring.
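The continuous-query idea, aggregating raw points into fixed windows for a downsampled retention policy, can be sketched as:

```python
def downsample(points, window_s=60):
    """Aggregate raw (timestamp, value) points into per-window means,
    as an InfluxDB continuous query would materialize into a
    downsampled series under a longer retention policy."""
    buckets = {}
    for ts, value in points:
        buckets.setdefault(ts - ts % window_s, []).append(value)
    return {w: sum(v) / len(v) for w, v in sorted(buckets.items())}

rolled_up = downsample([(0, 1.0), (30, 3.0), (60, 5.0)])
```

Pairing a short retention policy for raw data with continuous queries writing these rollups into a long-lived policy is the classic InfluxDB pattern for bounding storage.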
Apache Ignite vs Alluxio: Memory Speed Big Data Analytics (DataWorks Summit)
Apache Ignite vs Alluxio: Memory Speed Big Data Analytics - Apache Spark’s in-memory capabilities catapulted it to become the premier processing framework for Hadoop. Apache Ignite and Alluxio, both high-performance, integrated, distributed in-memory platforms, take Apache Spark to the next level by providing an even more powerful, faster, and more scalable platform for the most demanding data processing and analytics environments.
Speaker
Irfan Elahi, Consultant, Deloitte
Understanding How CQL3 Maps to Cassandra's Internal Data Structure (DataStax)
CQL3 is the newly ordained, canonical, and best-practices means of interacting with Cassandra. Indeed, the Apache Cassandra documentation itself declares the Thrift API “legacy” and recommends that CQL3 be used instead. But I’ve heard several people express their concern over the added layer of abstraction. There seems to be an uncertainty about what’s really happening inside of Cassandra.
In this presentation we will open up the hood and take a look at exactly how Cassandra is treating CQL3 queries. Our first stop will be the Cassandra data structure itself. We will briefly review the concepts of keyspaces, columnfamilies, rows, and columns. And we will explain where this data structure excels and where it does not. Composite rowkeys and columnnames are heavily used with CQL3, so we'll cover their functionality as well.
We will then turn to CQL3. I will demonstrate the basic CQL syntax and show how it maps to the underlying data structure. We will see that CQL actually serves as a sort of best practices interface to the internal Cassandra data structure. We will take this point further by demonstrating CQL3 collections (set, list, and map) and showing how they are really just a creative use of this same internal data structure.
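The mapping described above can be modeled with plain Python structures, where the clustering values become part of each internal cell name (a simplified model that ignores cell timestamps and serialized types):

```python
def to_cells(partition_key, clustering, values):
    """Model of how a CQL3 row becomes internal cells: the rowkey is
    the partition key, and each non-key column becomes a cell whose
    composite name is the clustering values plus the column name."""
    return {
        partition_key: {
            clustering + (col,): val for col, val in values.items()
        }
    }

# One CQL3 row: partition key 'user42', clustering columns (year, month)
cells = to_cells("user42", ("2013", "03"), {"amount": 9.99, "item": "book"})
```

Collections follow the same trick: a map entry, for instance, simply appends its map key as one more composite component.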
Attendees will leave with a clear, inside-out understanding of CQL3 and will be able use CQL with a confidence that they are following best-practices.
PGConf.ASIA 2019 - PGSpider High Performance Cluster Engine - Shigeo Hirose (Equnix Business Solutions)
PGSpider is a high-performance SQL cluster engine developed by Toshiba Corporation. It allows distributed querying of heterogeneous data sources using standard SQL. PGSpider improves retrieval performance through parallel queries across nodes and supports multi-tenant querying to retrieve records from the same table across nodes. It utilizes techniques like pushdown of conditional expressions and aggregation functions to nodes to reduce network traffic.
Domino memory is composed of shared and private memory pools. Shared memory is available to all Domino tasks, while private memory is allocated to individual tasks. The NSF buffer pool caches frequently accessed databases in shared memory. Memory dumps and memstats reports can be used to diagnose memory leaks by identifying continually increasing memory addresses over time. The DEBUG_TRAPLEAKS and DEBUG_SHOWLEAKS parameters can help trap specific memory leaks.
Use case and integration of ClickHouse with Apache Superset & Dremio (Altinity Ltd)
This document discusses using ClickHouse for data reporting and integrating it with Apache Superset and Dremio for data visualization. It describes using ClickHouse to ingest health census data from multiple sources into a data lake. Dremio and Superset can then connect to ClickHouse and other databases to run common SQL queries and slice, dice, and visualize the data through various charts and map visualizations. An example dashboard using Superset on sample data is also shown.
ClickHouse Deep Dive, by Aleksei Milovidov (Altinity Ltd)
This document provides an overview of ClickHouse, an open source column-oriented database management system. It discusses ClickHouse's ability to handle high volumes of event data in real-time, its use of the MergeTree storage engine to sort and merge data efficiently, and how it scales through sharding and distributed tables. The document also covers replication using the ReplicatedMergeTree engine to provide high availability and fault tolerance.
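The core of a MergeTree background merge is a k-way merge of already-sorted data parts; a minimal model of that step:

```python
import heapq

def merge_parts(*parts):
    """Merge already-sorted data parts into one sorted part -- the
    core operation behind the MergeTree engine's background merges."""
    return list(heapq.merge(*parts))

# Three small sorted parts become one larger sorted part
merged = merge_parts([1, 4, 9], [2, 3, 10], [5])
```

Because each part is sorted by the primary key, merging is a streaming operation, which is what lets ClickHouse absorb high insert rates and compact data in the background.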
Deep Dive on ClickHouse Sharding and Replication, 2022-09-22 (Altinity Ltd)
Join the Altinity experts as we dig into ClickHouse sharding and replication, showing how they enable clusters that deliver fast queries over petabytes of data. We’ll start with basic definitions of each, then move to practical issues. This includes the setup of shards and replicas, defining schema, choosing sharding keys, loading data, and writing distributed queries. We’ll finish up with tips on performance optimization.
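Hash-based shard routing, the usual way a sharding key maps rows to shards, can be sketched as follows; the modulo scheme is one common choice, not the only one:

```python
import zlib

def shard_for(key, num_shards):
    """Route a row to a shard by hashing its sharding key -- rows
    with the same key always land on the same shard, keeping
    related data co-located for distributed queries."""
    return zlib.crc32(str(key).encode()) % num_shards

# All of one user's rows map to a single shard
shards = {shard_for(user, 4) for user in ["alice", "alice", "alice"]}
```

Choosing a key with enough distinct values to spread load evenly, while still co-locating the rows your queries join or aggregate, is the central trade-off the webinar's "choosing sharding keys" section addresses.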
ServiceNow + Precisely: Getting Business Value and Visibility from Mainframe ... (Precisely)
Your team has a huge responsibility to enable IT services without disruption for your organization. ServiceNow Service Mapping gives you a clear idea of how services are affected by infrastructure issues. However, you are missing one critical piece – visibility into the mainframe (IBM Z). To drive business value, you need visibility into service performance by automatically connecting mainframe CIs into your CMDB and ServiceNow Service Mapping.
In this webinar, Ian Hartley, Director of Product Management at Precisely, will discuss how Precisely Ironstream can help you with ITOM visibility by automatically linking your mainframe to ServiceNow Service Mapping in just a few clicks.
Watch this on-demand webcast to learn:
• The “must-dos” for a proactive approach to mainframe visibility in ServiceNow Service Mapping
• How to auto-populate the CMDB and ServiceNow with mainframe CIs
• The immediate value Ironstream can bring to your business by connecting the mainframe to ServiceNow
Career Paths for Software Professionals (Ahmed Misbah)
This document outlines various career paths for software professionals, including software development, quality engineering, project management, UI/UX design, business analysis, databases and data warehousing, big data, data science, security, agile coaching, DevOps, IT administration, embedded systems, and academic careers. It provides descriptions of common roles within each path as well as typical career progression charts. The data science section in particular outlines technical skills, responsibilities, and example tasks required of data scientists. Overall, the document serves to inform software professionals about options for specializing and advancing their careers.
Alles, was Sie ueber HCL Notes 64-Bit Clients wissen muessenpanagenda
This document provides an overview and agenda for a webinar on upgrading from HCL Notes 32-bit clients to 64-bit clients. It discusses the key differences and challenges including increased memory limits, changes to file paths, and ensuring compatibility with third-party add-ons. The presentation covers how to properly uninstall 32-bit clients, prepare the data directory, and install Notes 12.0.2 64-bit. It also notes potential issues like comparing platform types and declaring function calls may need adjustments for 64-bit. Maintaining compatibility with add-ons like MarvelClient and templates is also addressed.
Maximizing performance via tuning and optimizationMariaDB plc
Ensuring that your end users get the performance they expect from your system requires an organized approach to performance management. This session will cover the planning and measurement necessary to ensure satisfied customers, and will also include tips and tricks learned from MariaDB’s years of supporting many of the most demanding installations in the world.
Best Practices: How to Analyze IoT Sensor Data with InfluxDBInfluxData
InfluxDB is the purpose-built time series platform. Its high ingest capability makes it perfect for collecting, storing and analyzing time-stamped data from sensors — down to the nanosecond. The InfluxDB platform has everything developers need: the data collection agent, the database, visualization tools, and data querying and scripting language. Join this webinar as Brian Gilmore provides a product overview; he will also deep-dive with some helpful tips and ticks. Stick around for a live demo and Q&A time.
Join this webinar as Brian Gilmore dives into:
The basics of time series data and applications
A platform overview — learn about InfluxDB, Telegraf, and Flux
InfluxDB use case examples — start collecting data at the edge and use your preferred IoT protocol (i.e. MQTT)
This document discusses F5 Distributed Cloud Services, which provides networking, security, and application delivery services across cloud, on-premises, and edge environments from a centralized SaaS console. It addresses challenges like complexity in coordinating technologies, automation, security across attack surfaces, and limited observability. The platform offers a unified view with centralized management, advanced security, full-stack observability, and automation. Use cases include hybrid/multi-cloud networking, web app and API protection, and running apps globally in cloud and edge. It is delivered via F5's global private network and provides value to DevOps, SecOps, and NetOps teams.
Redis is an in-memory key-value store that is often used as a database, cache, and message broker. It supports various data structures like strings, hashes, lists, sets, and sorted sets. While data is stored in memory for fast access, Redis can also persist data to disk. It is widely used by companies like GitHub, Craigslist, and Engine Yard to power applications with high performance needs.
High Performance, High Reliability Data Loading on ClickHouseAltinity Ltd
This document provides a summary of best practices for high reliability data loading in ClickHouse. It discusses ClickHouse's ingestion pipeline and strategies for improving performance and reliability of inserts. Some key points include using larger block sizes for inserts, avoiding overly frequent or compressed inserts, optimizing partitioning and sharding, and techniques like buffer tables and compact parts. The document also covers ways to make inserts atomic and handle deduplication of records through block-level and logical approaches.
InfluxDB is an open source time series database written in Go that stores metric data and performs real-time analytics. It has no external dependencies. InfluxDB stores data as time series with measurements, tags, and fields. Data is written using a line protocol and can be visualized using Grafana, an open source metrics dashboard.
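The line protocol mentioned above is a plain-text format of the shape `<measurement>,<tag_set> <field_set> <timestamp>`. A simplified sketch of building one point (it skips special-character escaping and integer type suffixes that the real protocol defines):

```python
def line_protocol(measurement, tags, fields, ts_ns):
    """Render one point in (simplified) InfluxDB line protocol:
    <measurement>,<tag_set> <field_set> <timestamp>"""
    tag_set = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_set = ",".join(
        f'{k}="{v}"' if isinstance(v, str) else f"{k}={v}"
        for k, v in sorted(fields.items())
    )
    return f"{measurement},{tag_set} {field_set} {ts_ns}"

line = line_protocol("cpu", {"host": "web01"}, {"usage": 12.5}, 1700000000000000000)
# e.g. 'cpu,host=web01 usage=12.5 1700000000000000000'
```

Sorting tags by key is a deliberate choice: InfluxDB recommends sorted tag keys for better write performance.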
The document provides information about a DCS system from the company DCS. It discusses the maxDPU4F distributed processing unit, which is the hardware processing engine of the DCS. The maxDPU4F performs primary data acquisition, control, and processing functions. It is a self-contained microprocessor-based unit that communicates with I/O modules and other devices over a maxNET network. The document also describes the front panel features of the maxDPU4F unit and provides specifications for its operation.
CCS is an IDE for developing applications on TI DSPs and MCUs. It allows creating and managing projects, compiling and building code, and debugging programs on both software simulators and hardware debuggers. The document discusses starting a new project in CCS, configuring build options, debugging tools like breakpoints and watch variables, and overview compiler sections and the linker configuration file.
This document discusses working with time series data using InfluxDB. It provides an overview of time series data and why InfluxDB is useful for storing and querying it. Key features of InfluxDB covered include its SQL-like query language, retention policies for managing data storage, continuous queries for aggregation, and tools for data collection, visualization and monitoring.
Apache Ignite vs Alluxio: Memory Speed Big Data Analytics (DataWorks Summit)
Apache Ignite vs Alluxio: Memory Speed Big Data Analytics - Apache Spark's in-memory capabilities catapulted it to become the premier processing framework for Hadoop. Apache Ignite and Alluxio, both high-performance, integrated, distributed in-memory platforms, take Apache Spark to the next level by providing an even more powerful, faster, and more scalable platform for the most demanding data processing and analytics environments.
Speaker
Irfan Elahi, Consultant, Deloitte
Understanding How CQL3 Maps to Cassandra's Internal Data Structure (DataStax)
CQL3 is the newly ordained, canonical, and best-practices means of interacting with Cassandra. Indeed, the Apache Cassandra documentation itself declares the Thrift API "legacy" and recommends that CQL3 be used instead. But I've heard several people express their concern over the added layer of abstraction. There seems to be an uncertainty about what's really happening inside of Cassandra.
In this presentation we will open up the hood and take a look at exactly how Cassandra is treating CQL3 queries. Our first stop will be the Cassandra data structure itself. We will briefly review the concepts of keyspaces, column families, rows, and columns, and we will explain where this data structure excels and where it does not. Composite row keys and column names are heavily used with CQL3, so we'll cover their functionality as well.
We will then turn to CQL3. I will demonstrate the basic CQL syntax and show how it maps to the underlying data structure. We will see that CQL actually serves as a sort of best practices interface to the internal Cassandra data structure. We will take this point further by demonstrating CQL3 collections (set, list, and map) and showing how they are really just a creative use of this same internal data structure.
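The "collections are just a creative use of the same internal data structure" point can be sketched in plain Python: each entry of a CQL3 map effectively becomes its own internal cell whose column name is a composite of the CQL column name and the map key. This is a simplified model (real Cassandra serializes these composites in a binary format; the row key and column names below are hypothetical):

```python
# Simplified model of how a CQL3 map column might be laid out
# internally: one cell per map entry, keyed by a composite of the
# CQL column name and the map key.
def flatten_map_column(row_key, column_name, cql_map):
    return {
        (row_key, (column_name, map_key)): value
        for map_key, value in cql_map.items()
    }

cells = flatten_map_column("user42", "emails", {"home": "a@x.io", "work": "b@y.io"})
```

Sets and lists follow the same pattern, differing only in what goes into the composite column name and the cell value.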
Attendees will leave with a clear, inside-out understanding of CQL3 and will be able use CQL with a confidence that they are following best-practices.
PGConf.ASIA 2019 - PGSpider High Performance Cluster Engine - Shigeo Hirose (Equnix Business Solutions)
PGSpider is a high-performance SQL cluster engine developed by Toshiba Corporation. It allows distributed querying of heterogeneous data sources using standard SQL. PGSpider improves retrieval performance through parallel queries across nodes and supports multi-tenant querying to retrieve records from the same table across nodes. It utilizes techniques like pushdown of conditional expressions and aggregation functions to nodes to reduce network traffic.
Domino memory is composed of shared and private memory pools. Shared memory is available to all Domino tasks, while private memory is allocated to individual tasks. The NSF buffer pool caches frequently accessed databases in shared memory. Memory dumps and memstats reports can be used to diagnose memory leaks by identifying continually increasing memory addresses over time. The DEBUG_TRAPLEAKS and DEBUG_SHOWLEAKS parameters can help trap specific memory leaks.
Use case and integration of ClickHouse with Apache Superset & Dremio (Altinity Ltd)
This document discusses using Clickhouse for data reporting and integrating it with Apache Superset and Dremio for data visualization. It describes using Clickhouse to inject health census data from multiple sources into a data lake. Dremio and Superset can then connect to Clickhouse and other databases to run common SQL queries and slice, dice, and visualize the data through various charts and map visualizations. An example dashboard using Superset on sample data is also shown.
ClickHouse Deep Dive, by Aleksei Milovidov (Altinity Ltd)
This document provides an overview of ClickHouse, an open source column-oriented database management system. It discusses ClickHouse's ability to handle high volumes of event data in real-time, its use of the MergeTree storage engine to sort and merge data efficiently, and how it scales through sharding and distributed tables. The document also covers replication using the ReplicatedMergeTree engine to provide high availability and fault tolerance.
Deep Dive on ClickHouse Sharding and Replication-2202-09-22.pdf (Altinity Ltd)
Join the Altinity experts as we dig into ClickHouse sharding and replication, showing how they enable clusters that deliver fast queries over petabytes of data. We’ll start with basic definitions of each, then move to practical issues. This includes the setup of shards and replicas, defining schema, choosing sharding keys, loading data, and writing distributed queries. We’ll finish up with tips on performance optimization.
ServiceNow + Precisely: Getting Business Value and Visibility from Mainframe ... (Precisely)
Your team has a huge responsibility to enable IT services without disruption for your organization. ServiceNow Service Mapping gives you a clear idea of how services are affected by infrastructure issues. However, you are missing one critical piece – visibility into the mainframe (IBM Z). To drive business value, you need visibility into service performance by automatically connecting mainframe CIs into your CMDB and ServiceNow Service Mapping.
In this webinar, Ian Hartley, Director of Product Management at Precisely, will discuss how Precisely Ironstream can help you with ITOM visibility by automatically linking your mainframe to ServiceNow Service Mapping in just a few clicks.
Watch this on-demand webcast to learn:
• The “must-dos” for a proactive approach to mainframe visibility in ServiceNow Service Mapping
• How to auto-populate the CMDB and ServiceNow with mainframe CIs
• The immediate value Ironstream can bring to your business by connecting the mainframe to ServiceNow
Career Paths for Software Professionals (Ahmed Misbah)
This document outlines various career paths for software professionals, including software development, quality engineering, project management, UI/UX design, business analysis, databases and data warehousing, big data, data science, security, agile coaching, DevOps, IT administration, embedded systems, and academic careers. It provides descriptions of common roles within each path as well as typical career progression charts. The data science section in particular outlines technical skills, responsibilities, and example tasks required of data scientists. Overall, the document serves to inform software professionals about options for specializing and advancing their careers.
This document describes a job posting for a Senior Opto-mechanical Engineer skilled in compact imaging system design. The role requires over 10 years experience designing miniature optics and novel opto-electronic devices. Responsibilities include developing mechanical engineering practices, creating new system concepts, leading test and metrology development, and directing project activities and prototypes. Strong skills in Solidworks, tolerance analysis, precision machining techniques, and an innovative mindset are required.
Caitlin Cassidy interned as a database developer for QPM and IRK at Fidelity. Her responsibilities included creating and enhancing database packages, tables, and views. She wrote unit tests for quality assurance and configured a continuous integration dashboard to monitor code changes. She developed a new screen to allow users to determine category mappings for investment funds, making the funds more inclusive. Through the internship, Caitlin gained experience with SQL, unit testing, agile methodology, and saw an entire software release process.
This document discusses optimizing Microsoft Access databases by using SQL Server as the backend database instead of the default Jet/ACE database engine. It provides advantages of using SQL Server like better performance, security, and scalability. It also discusses best practices like using SQL Server for data storage and queries, using Access only for the user interface, migrating Access data and queries to SQL Server, and designing the application for optimal performance when Access and SQL Server are used together.
This document outlines 10 common data management challenges that can be solved within 3 weeks using an engineering data management solution. It describes challenges such as lack of document security, audit trails, revision control, and file sharing. It then provides solutions to each challenge, including securing documents and controlling access, auditing document lifecycles, automating revision control, and enabling collaboration across sites. The solutions are capable of integrating with CAD software, automating workflows, and providing non-CAD users the ability to view and mark up engineering files.
Better Results. Less Work. Optimize IT with Mainframe Visibility in Splunk (Precisely)
Like it or not, IT is in the spotlight. From the CEO, down to the individual employee or customer experience, IT operations is more important than ever – keeping service levels high, while keeping expenses in check.
Your IT landscape spans mainframe, IBM i, and distributed platforms – on premises and in the cloud – and you need an IT Operations Analytics (ITOA) solution that does as well. You’ll never be able to see the complete picture, meet service level agreements, and drive efficiencies across the enterprise if you’re focusing on one technology silo at a time.
Join this webinar to learn how to include your critical mainframe systems in an ITOA enterprise-wide view with Splunk dashboards.
During this webinar, we will explore:
- Benefits of ITOA for your business
- Challenges of integrating mainframe data in Splunk dashboards and how to overcome them
- Key use cases
In this DNN-Connect 2019 session, I walk the audience through many of the most common things that we've run into over the years when helping clients with their DNN websites. You'll see some of the most common worst practices and how to resolve them.
The document discusses the database system development lifecycle. It notes that 80-90% of database projects do not meet performance goals and are often late and over budget. Reasons for failure include a lack of complete requirements specification, inappropriate development methodology, and poor system decomposition. The solution is to follow a structured approach like the Information Systems Lifecycle, Software Development Lifecycle, or Database System Development Lifecycle. Key stages of the Database System Development Lifecycle include planning, definition, requirements collection, design, prototyping, implementation, data conversion, testing, and operational maintenance.
The document discusses the various roles on an IT project team. It notes that even small projects are complex due to specialized roles, matrixed resources that are over-scheduled, and potential for bottlenecks and communication issues. There are many job titles for IT roles but they generally fall into four basic categories: team leaders, project managers, technical leaders, and implementers. The document then provides more details on specific roles within each category and how roles are further specialized based on the architecture and development lifecycle of a project.
Margaretha Gertruida Du Toit has over 30 years of experience in data analysis, modeling, and ETL design. She is currently a Senior Data Modeler at Standard Bank, where she is responsible for data modeling, ETL design support, vendor management, and training. Previously, she held roles as an ETL Designer and Developer at Standard Bank, and as an Information Analyst, MIS Analyst, and COBOL Programmer at ABSA. She has extensive experience with Teradata, Datastage, SAS, Oracle, and other tools.
SQLSaturday 664 - Troubleshoot SQL Server performance problems like a Microso... (Marek Maśko)
The document discusses tools used by Microsoft engineers to troubleshoot SQL Server performance problems when assisting customers. It describes how the Performance and Diagnostic Monitor (PSSDiag) collects diagnostic data from a SQL Server and how Microsoft engineers analyze the collected data using tools like SQL Nexus and PAL to identify issues, root causes, and solutions.
Learn about the three advances in database technologies that eliminate the need for star schemas and the resulting maintenance nightmare.
Relational databases in the 1980s were typically designed using the Codd-Date rules for data normalization. It was the most efficient way to store data used in operations. As BI and multi-dimensional analysis became popular, the relational databases began to have performance issues when multiple joins were requested. The development of the star schema was a clever way to get around performance issues and ensure that multi-dimensional queries could be resolved quickly. But this design came with its own set of problems.
Unfortunately, the analytic process is never simple. Business users always think up unimaginable ways to query the data, and the data itself often changes in unpredictable ways. These changes drive the need for new dimensions, new and mostly redundant star schemas and their indexes, and maintenance difficulties in handling slowly changing dimensions. The analytical environment becomes overly complex and difficult to maintain, new capabilities are long delayed, and the result is unsatisfactory for both the users and those maintaining it.
There must be a better way!
Watch this webinar to learn:
- The three technological advances in data storage that eliminate star schemas
- How these innovations benefit analytical environments
- The steps you will need to take to reap the benefits of being star schema-free
This resume summarizes Alok Rajkumar's experience and qualifications for an IT role. He has over 3 years of experience in data warehousing and business intelligence using tools like Oracle, PL/SQL, Informatica PowerCenter, and Java. He has worked on projects in healthcare and pharmaceutical domains, developing ETL processes, performing data analysis, and meeting business requirements. He is proficient in technologies like Oracle, Informatica, Java, and Linux and aims to contribute to an organization's growth through innovative ideas and teamwork.
Learnings from 7 Years of Integrating Mission-Critical IBM Z® and IBM i with ... (Precisely)
Mainframe (z/OS®) & IBM i systems are incredibly important for many businesses but are often outside the holistic IT Operations Analytics (ITOA) observability available in Splunk. Exclude them & you will miss valuable, critical insights. Ironstream delivers valuable log data, events and intelligence from both IBM mainframe and IBM i environments into Splunk, providing a true 360-degree view of your IT infrastructure.
This session will examine how adding more agility; faster MTTI/MTTR; deeper visibility & increased efficiencies address problems seen by real-world customers over the last 7 years. Effectively addressing these problems has resulted in millions of dollars of cost-savings.
See how you can use Precisely Ironstream to:
- Tap into mainframe and IBM i machine data the easy way!
- Get machine data into your Cloud, On-Prem, ITSI, Enterprise Security, or AIOps environments
- Address the high-value use cases
- Realize fast ROI & be ready for the future!
The document discusses system administration tasks related to automating desktop management. It describes using disk imaging and kickstart installations to consistently deploy operating systems across many computers. Disk imaging allows cloning a tested installation, while kickstart uses configuration files and network boot to deploy in an unattended way. The document outlines components, tools, and best practices for automating desktop management and software updates at scale.
Designing and Implementing Information Systems with Event Modeling, Bobby Cal... (confluent)
Designing and Implementing Information Systems with Event Modeling, Bobby Calderwood, Founder at Evident Systems
https://www.meetup.com/Saint-Louis-Kafka-meetup-group/events/273869005/
Building Information Systems using Event Modeling (Bobby Calderwood, Evident ...) (confluent)
"Event Modeling is a fairly new information system modeling discipline created by Adam Dymitruk that is heavily influenced by CQRS and Event Sourcing. Its lineage follows from Event Storming, Design Thinking, and other modeling practices from the Agile and Domain-Driven Design communities. The methodology emphasizes simplicity (there are only four model ingredients) and inclusion of non-developer participants.
Like other modeling disciplines, Event Modeling is sufficiently general to enable collaborative learning and knowledge exchange among UI/UX designers, software engineers and architects, and business domain experts. But it's also sufficiently expressive and specific to be directly actionable by the implementors of the information system described by the model.
During this talk, we'll:
* Build an Event Model of a simple information system, including wire-framing the UI/UX experience
* Explore how to proceed from model to implementation using Kafka, its Streams and Connect APIs, and KSQL
* Jump-start the implementation by generating code directly from the Event Model
* Track and measure the work of implementation by generating tasks directly from the Event Model"
This document provides an overview of various topics related to software project management. It begins with a list of suggested topics for discussion, such as challenges specific to software projects, quality measurements, and best practices in Pakistan. It then covers aspects of the software development lifecycle from planning and requirements through deployment and maintenance. Different project models like waterfall, evolutionary prototyping, and spiral development are described along with their advantages and disadvantages. Finally, it touches on using commercial off-the-shelf software.
Similar to Improving Industrial Machine Support Using InfluxDB, Web SCADA, and AWS:
InfluxData is excited to announce InfluxDB Clustered, the self-managed version of InfluxDB 3.0 with unparalleled flexibility, speed, performance, and scale. The evolution of InfluxDB Enterprise, InfluxDB Clustered is delivered as a collection of Kubernetes-based containers and services, which enables you to run and operate InfluxDB 3.0 where you need it, whether that's on-premises or in a private cloud environment. With this new enterprise offering, we’re excited to provide our customers with real-time queries, low-cost object storage, unlimited cardinality, and SQL language support – all with improved data access, support, and security! The newest version of InfluxDB was built on Apache Arrow, and through the open source ecosystem and integrations, extends the value of your time-stamped data.
Join this webinar to learn more about InfluxDB Clustered, and how to manage your large mission-critical workloads in the highly available database service offering!
In this webinar, Balaji Palani and Gunnar Aasen will dive into:
Key features of the new InfluxDB Clustered solution
Use cases for using the newest version of the purpose-built time series database
Live demo
During this 1-hour technical webinar, you’ll also get a chance to ask your questions live.
Best Practices for Leveraging the Apache Arrow Ecosystem (InfluxData)
Apache Arrow is an open source project intended to provide a standardized columnar memory format for flat and hierarchical data. It enables more efficient analytics workloads for modern CPU and GPU hardware, which makes working with large data sets easier and cheaper.
InfluxData and Dremio are both members of the Apache Software Foundation (ASF). Dremio is a data lakehouse management service known for its scalability and capacity for direct querying across diverse data sources. InfluxDB is the purpose-built time series database, and InfluxDB 3.0 has a new columnar storage engine and uses the Arrow format for representing data and moving data to and from Parquet. Discover how InfluxDB and Dremio have advanced their solutions by relying on the Apache Arrow framework.
Join this live panel as Alex Merced and Anais Dotis-Georgiou dive into:
Advantages to utilizing the Apache Arrow ecosystem
Tips and tricks for implementing the columnar data structure
How developers can best utilize the ASF to innovate and contribute to new industry standards
How Bevi Uses InfluxDB and Grafana to Improve Predictive Maintenance and Redu... (InfluxData)
Bevi are the creators of smart water dispensers which empower people to choose their desired beverage — flat or sparkling, their desired flavor and temperature. Since 2014, Bevi users have saved more than 350 million bottles and cans. Their "smart" water coolers have prevented the extraction of 1.4 trillion oz of oil from Earth and have saved 21.7 billion grams of CO2 from the atmosphere.
Discover how Bevi uses a time series database to enable better predictive maintenance and alerting of their entire ecosystem — including the hardware and software. They are using InfluxDB to collect sensor data in real-time remotely from their internet-connected machines about their status and activity — i.e., flavor and CO2 levels, water temp, filter status, etc. They are using these metrics to improve their customer experience and continuously improve their sustainability practices. Gain tips and tricks on how to best utilize InfluxDB's schema-less design.
Join this webinar as Spencer Gagnon dives into:
Bevi's approach to reducing organizations' carbon footprint — they are saving 50K+ bottles and cans annually
Their entire system architecture — including InfluxDB Cloud, Grafana, Kafka, and DigitalOcean
The importance of using time-stamped data to extend the life of their machines
Power Your Predictive Analytics with InfluxDB (InfluxData)
If you're using InfluxDB to store and manage your time series data, you're already off to a great start. But why stop there? In our upcoming webinar, we'll show you how to take your data analysis to the next level by building predictive analytics using a variety of tools and techniques.
We will demonstrate how to use Quix to create custom dashboards and visualizations that allow you to monitor your data in real-time. We'll also introduce you to Hugging Face, a powerful tool for building models that can predict future trends and identify anomalies. With these tools at your disposal, you'll be able to extract valuable insights from your data and make more informed decisions about the future. Don't miss out on this opportunity to improve your data analysis skills and take your business to the next level!
What you will learn:
Use InfluxDB to store and manage time series data
Utilize Quix and Hugging Face to build models, visualize trends, and identify anomalies
Extract valuable insights from your data
Improve your data analysis skills to make informed decisions
How Teréga Replaces Legacy Data Historians with InfluxDB, AWS and IO-Base (InfluxData)
Are you considering replacing your legacy data historian and moving your OT data to the cloud? Join this technical webinar to learn how to adopt InfluxDB and IO-Base — a digital platform used to improve operational efficiencies!
Teréga Solutions are the creators of digital solutions used to improve energy efficiencies and to address decarbonization challenges. Their network includes 5,000+ km of gas pipelines within France; they aim to help France attain carbon neutrality by 2050. With these impressive goals in mind, Teréga has created IO-Base — the digital platform to improve industrial performance, and increase profitability. Creating digital twins for their clients allows them to collect data from all production sites and view it in real time, from anywhere and at any time.
Discover how Teréga uses InfluxDB, Docker, and AWS to monitor its gas and hydrogen pipeline infrastructure. They chose to replace their legacy data historian with InfluxDB — the purpose-built time series database. They are collecting more than 100K different metrics at various frequencies, ranging from every 5 seconds to every 1-2 minutes. They have reduced overall IT spend by 50% and collect 2x the amount of data at 20x the frequency! By using various industrial protocols (Modbus, OPC-UA, etc.), Teréga improved output, reduced the TCO, and is now able to create added-value services: forecasting, monitoring, and predictive maintenance.
Join this webinar as Thomas Delquié dives into:
Teréga's approach to modernizing fossil fuel pipelines IT systems while improving yields and safety
Their centralized methodology to collecting sensor, hardware, and network metrics
The importance of time series data and why they chose InfluxDB
Build an Edge-to-Cloud Solution with the MING Stack (InfluxData)
FlowForge enables organizations to reliably deliver Node-RED applications in a continuous, collaborative, and secure manner. Node-RED is the popular, low-code programming solution that makes it easy to connect different services using a visual programming environment. InfluxData is the creator of InfluxDB, the purpose-built time series database run by developers at scale and in any environment in the cloud, on-premises, or at the edge.
Jump-start monitoring your industrial IoT devices and discover how to build an edge-to-cloud solution with the MING stack. The MING stack includes Mosquitto/MQTT, InfluxDB, Node-RED, and Grafana. This solution can be used to improve fleet management, enable predictive maintenance of industrial machines and power generation equipment (e.g. turbines and generators), and increase safety practices (e.g. buildings, construction sites). Join this webinar to learn best practices from industrial IoT SMEs.
In this webinar, Robert Marcer and Jay Clifford dive into:
Best practices for monitoring sensor data collected by everyone — from the edge to the factory
Tips and tricks for using Node-RED and InfluxDB together
Demo — see Node-RED and InfluxDB live
Meet the Founders: An Open Discussion About Rewriting Using Rust (InfluxData)
The document is an agenda for a discussion between the CTO and founder of Ockam, Mrinal Wadhwa, and the CTO and founder of InfluxData, Paul Dix, about rewriting products using the Rust programming language. It includes an introduction of the founders, an overview of the discussion topics like why they decided to rewrite in Rust and the challenges they faced, how they got their engineers comfortable with Rust, tips they learned in the process, benefits gained from moving to Rust, and how their communities responded to the switch.
InfluxData is excited to announce the general availability of InfluxDB Cloud Dedicated! It is a fully managed time series database service running on cloud infrastructure resources that are dedicated to a single tenant. With this new offering, we’re excited to provide our customers with additional security options, and more custom configuration options to best suit customers’ workload requirements. Join this webinar to learn more about InfluxDB Cloud, and the new dedicated database service offering!
In this webinar, Balaji Palani and Gary Fowler will dive into:
Key features of the new InfluxDB Cloud Dedicated solution
Use cases for using the newest version of the purpose-built time series database
Live demo
During this 1-hour technical webinar, you’ll also get a chance to ask your questions live.
Gain Better Observability with OpenTelemetry and InfluxDB (InfluxData)
Many developers and DevOps engineers have become aware of using their observability data to gain greater insights into their infrastructure systems. InfluxDB is the purpose-built time series database used to collect metrics and gain observability into apps, servers, containers, and networks. Developers use InfluxDB to improve the quality and efficiency of their CI/CD pipelines. Start using InfluxDB to aggregate infrastructure and application performance monitoring metrics to enable better anomaly detection, root-cause analysis, and alerting.
This session will demonstrate how to record metrics, logs, and traces with one library — OpenTelemetry — and store them in one open source time series database — InfluxDB. Zoe will demonstrate how easy it is to set up the OpenTelemetry Operator for Kubernetes and to store and analyze your data in InfluxDB.
How Delft University's Engineering Students Make Their EV Formula-Style Race ... (InfluxData)
Delft University is the oldest and largest technical university in the Netherlands with 25,000+ students. Since 1999, they have had a team of students (undergraduate and graduate) designing, building, and racing cars, as part of the Formula Student worldwide competition. The competition has grown to include teams from 1K+ universities in 20+ countries. Students are responsible for all aspects of car manufacturing (research, construction, testing, developing, marketing, management, and fundraising). Delft University's team includes 90 students across disciplines.
Discover how Delft University's team uses Marple and InfluxDB to collect telemetry and sensor metrics while they develop, test, and race their electric cars. They collect sensor data about their EV's control systems using a time series platform. During races, they are collecting IoT data about their batteries, accelerometer, gyroscope, tires, etc. The engineers are able to share important car stats during races which help the drivers tweak their driving decisions — all with the goal of winning. After races, the entire team is able to analyze data in Marple to understand what to do better next time. By using Marple + InfluxDB, their team is able to collect, share, and analyze high-frequency car data used to make their car faster at competitions.
Join this webinar as Robbin Baauw and Nero Vanbiervliet dive into:
Marple's approach to empowering engineers to organize, analyze, and visualize their data
Delft University's collaborative methodology to building and racing their Formula-style race car
How InfluxDB is crucial to their collaborative engineering and racing process
Introducing InfluxDB’s New Time Series Database Storage Engine | InfluxData
InfluxData is excited to announce the general availability of InfluxDB Cloud's new storage engine! It is a cloud-native, real-time, columnar database optimized for time series data. InfluxDB's rebuilt core was coded in Rust and sits on top of Apache Arrow and DataFusion. InfluxData's team picked Apache Parquet as the persistent format. In this webinar, Paul Dix and Balaji Palani will demonstrate key product features including the removal of cardinality limits!
They will dive into:
The next phase of the InfluxDB platform
How using Apache Arrow's ecosystem has improved InfluxDB's performance and scalability
Key features of InfluxDB Cloud's new core — including SQL native support
Understanding InfluxDB’s New Storage Engine | InfluxData
Learn more about InfluxDB’s new storage engine! The team developed a cloud-native, real-time, columnar database optimized for time series data. We built it all in Rust and it sits on top of Apache Arrow and DataFusion. We chose Apache Parquet as the persistent format, which is an open source columnar data file format. This new storage engine provides InfluxDB Cloud users with new functionality, including the removal of cardinality limits, so developers can bring in massive amounts of time series data at scale.
In this webinar, Anais Dotis-Georgiou will dive into:
Requirements for rebuilding InfluxDB’s core
Key product features and timeline
How Apache Arrow’s ecosystem is used to meet those requirements
Stick around for a demo and live Q&A
Streamline and Scale Out Data Pipelines with Kubernetes, Telegraf, and InfluxDB | InfluxData
RudderStack, the creators of the leading open source Customer Data Platform (CDP), needed a scalable way to collect and store metrics related to customer events and processing times (down to the nanosecond). They provide their clients with data pipelines that simplify data collection from applications, websites, and SaaS platforms. RudderStack's solution enables clients to stream customer data in real time: they quickly deploy flexible data pipelines that send the data to the customer's entire stack without engineering headaches. Customers are able to stream data from any tool using their 16+ SDKs, and they are able to transform the data in transit using JavaScript or Python. How does RudderStack use a time series platform to provide their customers with real-time analytics?
Join this webinar as Ryan McCrary dives into:
RudderStack's approach to streamlining data pipelines with their 180+ out-of-the-box integrations
Their data architecture including Kapacitor for alerting and Grafana for customized dashboards
Why InfluxDB was crucial for fast data collection and for providing a single source of truth for their customers
Ward Bowman [PTC] | ThingWorx Long-Term Data Storage with InfluxDB | InfluxDa... | InfluxData
Customers using ThingWorx and the Manufacturing Solutions often need to store property data for longer than the Solutions' default retention period. These customers are advised to use InfluxDB, and this presentation covers the key considerations for moving to InfluxDB versus the standard ThingWorx value streams. Join this session as Ward highlights ThingWorx’s solution and its straightforward implementation process.
Scott Anderson [InfluxData] | New & Upcoming Flux Features | InfluxDays 2022 | InfluxData
Two new features are coming to Flux that add flexibility and functionality to your data workflow: polymorphic labels and dynamic types. This session walks through these new features and shows how they work.
This document outlines the schedule for Day 2 of InfluxDays 2022, an event hosted by InfluxData. The schedule includes sessions on building developer experience, how developers like to work, an overview of the InfluxDB developer console and API, demos of client libraries and the InfluxDB v2 API, tips for getting involved in the InfluxDB community and university, use cases for network monitoring, crypto/fintech, monitoring/observability, and IIoT, and closing thoughts. Recordings of all sessions will be made available to registered attendees by November 7th. Upcoming events include advanced Flux training in London and resources through the community forums, Slack channel, and online university.
Steinkamp, Clifford [InfluxData] | Welcome to InfluxDays 2022 - Day 2 | Influ... | InfluxData
This document contains the agenda for Day 2 of InfluxDays 2022, which includes:
- Welcome and introductory remarks from Zoe Steinkamp and Jay Clifford of InfluxData.
- Fireside chats and presentations on building great developer experiences, how developers like to work, and use cases for InfluxDB from companies like Tesla, InfluxData, and others.
- Sessions on the InfluxDB developer console, APIs, client libraries, getting involved in the community, accelerating time to awesome with InfluxDB University, and tips for analyzing IoT data with InfluxDB.
- Closing thoughts from Zoe Steinkamp and Jay Clifford, as well as
The document summarizes the agenda and sessions for Day 1 of InfluxDays 2022. It includes sessions on InfluxDB data collection, scripting languages like Flux, the InfluxDB time series engine, tasks, storage, and a closing discussion. The agenda involves talks from InfluxData employees on building applications with real-time data, navigating the developer experience, solving problems, the InfluxDB platform, community, education, use cases in crypto/fintech and IIoT, and tips/tricks for analysis.
Paul Dix [InfluxData] | The Journey of InfluxDB | InfluxDays 2022 | InfluxData
The document summarizes the evolution of InfluxDB from its initial release in 2013 to IOx, its latest incarnation. It started as a time series database that stored time series data and associated metadata. Over time it incorporated features like tags, line protocol, the TSM storage engine, and an inverted index to improve querying capabilities. Version 2.0 refocused it as an all-in-one platform with a new query language called Flux, and aimed to be cloud-first. The latest version, IOx, leverages a columnar database and federated architecture to solve challenges of scale, providing SQL support and the ability to deploy in cloud or edge environments.
Jay Clifford [InfluxData] | Tips & Tricks for Analyzing IIoT in Real-Time | I... | InfluxData
Transforming raw machine data into real business outcomes such as OE and OEE is a journey. In this talk we will learn some tips and tricks for analyzing and transforming your machine data using InfluxDB and other third-party platforms.
HCL Notes and Domino License Cost Reduction in the World of DLAU | panagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able to lower your costs through an optimized configuration and keep them low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc.
- Practical examples and best practices to implement right away
5th LF Energy Power Grid Model Meet-up Slides | DanBrown980551
5th Power Grid Model Meet-up
It is with great pleasure that we extend to you an invitation to the 5th Power Grid Model Meet-up, scheduled for 6th June 2024. This event will adopt a hybrid format, allowing participants to join us either through an online Microsoft Teams session or in person at TU/e, located at Den Dolech 2, Eindhoven, Netherlands. The meet-up will be hosted by Eindhoven University of Technology (TU/e), a research university specializing in engineering science & technology.
Power Grid Model
The global energy transition is placing new and unprecedented demands on Distribution System Operators (DSOs). Alongside upgrades to grid capacity, processes such as digitization, capacity optimization, and congestion management are becoming vital for delivering reliable services.
Power Grid Model is an open source project from Linux Foundation Energy and provides a calculation engine that is increasingly essential for DSOs. It offers a standards-based foundation enabling real-time power systems analysis, simulations of electrical power grids, and sophisticated what-if analysis. In addition, it enables in-depth studies and analysis of the electrical power grid’s behavior and performance. This comprehensive model incorporates essential factors such as power generation capacity, electrical losses, voltage levels, power flows, and system stability.
Power Grid Model is currently being applied in a wide variety of use cases, including grid planning, expansion, reliability, and congestion studies. It can also help in analyzing the impact of renewable energy integration, assessing the effects of disturbances or faults, and developing strategies for grid control and optimization.
What to expect
For the upcoming meetup we are organizing, we have an exciting lineup of activities planned:
-Insightful presentations covering two practical applications of the Power Grid Model.
-An update on the latest advancements in Power Grid Model technology during the first and second quarters of 2024.
-An interactive brainstorming session to discuss and propose new feature requests.
-An opportunity to connect with fellow Power Grid Model enthusiasts and users.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/how-axelera-ai-uses-digital-compute-in-memory-to-deliver-fast-and-energy-efficient-computer-vision-a-presentation-from-axelera-ai/
Bram Verhoef, Head of Machine Learning at Axelera AI, presents the “How Axelera AI Uses Digital Compute-in-memory to Deliver Fast and Energy-efficient Computer Vision” tutorial at the May 2024 Embedded Vision Summit.
As artificial intelligence inference transitions from cloud environments to edge locations, computer vision applications achieve heightened responsiveness, reliability and privacy. This migration, however, introduces the challenge of operating within the stringent confines of resource constraints typical at the edge, including small form factors, low energy budgets and diminished memory and computational capacities. Axelera AI addresses these challenges through an innovative approach of performing digital computations within memory itself. This technique facilitates the realization of high-performance, energy-efficient and cost-effective computer vision capabilities at the thin and thick edge, extending the frontier of what is achievable with current technologies.
In this presentation, Verhoef unveils his company’s pioneering chip technology and demonstrates its capacity to deliver exceptional frames-per-second performance across a range of standard computer vision networks typical of applications in security, surveillance and the industrial sector. This shows that advanced computer vision can be accessible and efficient, even at the very edge of our technological ecosystem.
Dandelion Hashtable: beyond billion requests per second on a commodity server | Antonios Katsarakis
This slide deck presents DLHT, a concurrent in-memory hashtable. Despite efforts to optimize hashtables, which go as far as sacrificing core functionality, state-of-the-art designs still incur multiple memory accesses per request and block request processing in three cases. First, most hashtables block while waiting for data to be retrieved from memory. Second, open-addressing designs, which represent the current state-of-the-art, either cannot free index slots on deletes or must block all requests to do so. Third, index resizes block every request until all objects are copied to the new index. Defying folklore wisdom, DLHT forgoes open-addressing and adopts a fully-featured and memory-aware closed-addressing design based on bounded cache-line-chaining. This design (1) offers lock-free index operations and deletes that free slots instantly, (2) completes most requests with a single memory access, (3) utilizes software prefetching to hide memory latencies, and (4) employs a novel non-blocking and parallel resizing. On a commodity server with a memory-resident workload, DLHT surpasses 1.6B requests per second and provides 3.5x (12x) the throughput of the state-of-the-art closed-addressing (open-addressing) resizable hashtable on Gets (Deletes).
How information systems are built or acquired relegates information, which is what they should be about, to a secondary place. Our language adapted accordingly: we no longer talk about information systems but about applications. Applications evolved in ways that break data into diverse fragments, tightly coupled with applications and expensive to integrate. The result is technical debt, which is repaid by taking even bigger "loans", resulting in ever-increasing technical debt. Software engineering and procurement practices work in sync with market forces to maintain this trend. This talk demonstrates how natural this situation is. The question is: can something be done to reverse the trend?
Digital Banking in the Cloud: How Citizens Bank Unlocked Their Mainframe | Precisely
Inconsistent user experience and siloed data, high costs, and changing customer expectations – Citizens Bank was experiencing these challenges while it was attempting to deliver a superior digital banking experience for its clients. Its core banking applications run on the mainframe and Citizens was using legacy utilities to get the critical mainframe data to feed customer-facing channels, like call centers, web, and mobile. Ultimately, this led to higher operating costs (MIPS), delayed response times, and longer time to market.
Ever-changing customer expectations demand more modern digital experiences, and the bank needed to find a solution that could provide real-time data to its customer channels with low latency and operating costs. Join this session to learn how Citizens is leveraging Precisely to replicate mainframe data to its customer channels and deliver on their “modern digital bank” experiences.
Taking AI to the Next Level in Manufacturing.pdf | ssuserfac0301
Read Taking AI to the Next Level in Manufacturing to gain insights on AI adoption in the manufacturing industry, such as:
1. How quickly AI is being implemented in manufacturing.
2. Which barriers stand in the way of AI adoption.
3. How data quality and governance form the backbone of AI.
4. Organizational processes and structures that may inhibit effective AI adoption.
5. Ideas and approaches to help build your organization's AI strategy.
"Choosing proper type of scaling", Olena SyrotaFwdays
Imagine an IoT processing system that is already quite mature and production-ready, whose client coverage is growing, and for which scaling and performance are life-and-death questions. The system uses Redis, MongoDB, and stream processing based on ksqlDB. In this talk, we will first analyze scaling approaches and then select the proper ones for our system.
Generating privacy-protected synthetic data using Secludy and Milvus | Zilliz
During this demo, the founders of Secludy will demonstrate how their system utilizes Milvus to store and manipulate embeddings for generating privacy-protected synthetic data. Their approach not only maintains the confidentiality of the original data but also enhances the utility and scalability of LLMs under privacy constraints. Attendees, including machine learning engineers, data scientists, and data managers, will witness first-hand how Secludy's integration with Milvus empowers organizations to harness the power of LLMs securely and efficiently.
AppSec PNW: Android and iOS Application Security with MobSF | Ajin Abraham
Mobile Security Framework - MobSF is a free and open source automated mobile application security testing environment designed to help security engineers, researchers, developers, and penetration testers to identify security vulnerabilities, malicious behaviours and privacy concerns in mobile applications using static and dynamic analysis. It supports all the popular mobile application binaries and source code formats built for Android and iOS devices. In addition to automated security assessment, it also offers an interactive testing environment to build and execute scenario based test/fuzz cases against the application.
This talk covers:
Using MobSF for static analysis of mobile applications.
Interactive dynamic security assessment of Android and iOS applications.
Solving Mobile app CTF challenges.
Reverse engineering and runtime analysis of Mobile malware.
How to shift left and integrate MobSF/mobsfscan SAST and DAST in your build pipeline.
Programming Foundation Models with DSPy - Meetup Slides | Zilliz
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
Have you ever been confused by the myriad of choices offered by AWS for hosting a website or an API?
Lambda, Elastic Beanstalk, Lightsail, Amplify, S3 (and more!) can each host websites + APIs. But which one should we choose?
Which one is cheapest? Which one is fastest? Which one will scale to meet our needs?
Join me in this session as we dive into each AWS hosting service to determine which one is best for your scenario and explain why!
Skybuffer SAM4U tool for SAP license adoption | Tatiana Kojar
Manage and optimize your license adoption and consumption with SAM4U, a free SAP software asset management tool for customers.
SAM4U, an SAP complimentary software asset management tool for customers, delivers a detailed and well-structured overview of license inventory and usage with a user-friendly interface. We offer a hosted, cost-effective, and performance-optimized SAM4U setup in the Skybuffer Cloud environment. You retain ownership of the system and data, while we manage the ABAP 7.58 infrastructure, ensuring fixed Total Cost of Ownership (TCO) and exceptional services through the SAP Fiori interface.
JavaLand 2024: Application Development Green Masterplan
Improving Industrial Machine Support Using InfluxDB, Web SCADA, and AWS
1. Engineering the Next Generation of Process Technologies
Andrew Smith
Lead Innovation Engineer
Connected Support
2. Bio
• Control & Systems Engineer
• Chartered in 2003
• Started in Process Control in 1995
• glass manufacture
• water & wastewater treatment
• SCADA, PLC control
• Data Mining, Modelling
• 12 years overseas
• Tech Start-up
• Manufacturing
• Innovation Consultancy
• Last 3 years in Machine Manufacturing, IIoT and Process Data
• Now Lead Innovation Engineer for LBBC
• Live in Leeds, UK
3. Connected Support
• LBBC’s machines & processes
• The value of Connected Support
• Data source
• Typical users
• Typical scenarios
• Requirements
• Architecture
• Data shape as it flows
• InfluxDB role and use
• Troubleshooting
• Data mining & processing
4. Dewaxing Boilerclave® system
Investment Casting equipment (Global)
Ceramic Core Leaching system
Investment Casting: the process of forming a casting (e.g. a turbine blade) from a wax “pattern”, a ceramic “core”, and a ceramic “shell”.
5. The value of “Connected Support”
Hear what customers hear, when they hear it: taking real-time alerts beyond the foundry fence.
See what customers see: taking data insights beyond the foundry fence, live and historical for the life of the machine.
Advanced data processing: complex calculations on combinations of multiple variables at regular intervals.
Data-driven condition monitoring: spotting gradual or imperceptible changes and alerting the right people.
6. Where does the data come from?
LC450 Core Leacher, Industrial PLC (control):
• Heating: Top Band %, Mid Band %, Bottom Band %
• Fault Codes
• Valves: V1 Open/Closed, …, V7 Open/Closed
• Pumps: M1 Running, M2 Running
• Cycle: Cycle No, State No, Phase No, Countdown
• Lid / Lock: Open / Closed, Locked / Unlocked, Shotbolt Engaged
• Pressures: Process Air, Vessel
• Temperatures: Top Band, Mid Band, Bottom Band, Top Vessel, Mid Vessel, Bottom Vessel
• Levels: KOH, Vessel
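To make the flow from these PLC signal groups concrete, here is a small illustrative sketch of how a gateway might flatten such a tag snapshot into the key/value payload it publishes over MQTT. The tag names are borrowed from the slide, but the payload shape and values are assumptions for the example, not LBBC's actual format.

```python
# Flatten a nested PLC tag snapshot into a flat key/value MQTT-style payload.
# Structure, names, and values are illustrative assumptions only.
import json

snapshot = {
    "Temperatures": {"TopBand": 141.2, "MidBand": 139.8, "BottomBand": 140.5},
    "Pressures": {"ProcessAir": 6.1, "Vessel": 7.9},
    "Cycle": {"CycleNo": 1247, "StateNo": 3},
    "Valves": {"V1": "Open", "V7": "Closed"},
}

def flatten(tree, prefix=""):
    """Turn {'Pressures': {'Vessel': 7.9}} into {'Pressures.Vessel': 7.9}."""
    flat = {}
    for key, value in tree.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict):
            flat.update(flatten(value, name + "."))
        else:
            flat[name] = value
    return flat

# Serialize the flattened snapshot as the JSON body of one MQTT message.
payload = json.dumps(flatten(snapshot), sort_keys=True)
```

Dotted keys like `Pressures.Vessel` map naturally onto measurement/field names once the message reaches the time series database.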
7. Needs & Requirements
• Cost efficiency (ingest, query, storage)
• Use of standard hardware between PLC & Cloud
• Security:
• Encryption of data in transit
• MFA for data in storage
• Reliability:
• Not losing data (even if cloud systems are down)
• Being able to guarantee that data arrived
• Serverless infrastructure (not even EC2)
• Data recording for the lifetime of the machine
• Data visualisation & exploration for the inexpert (ideally without writing any code or using known languages)
• Live dashboards
• Alerts to mobile app, email, SMS
• Complex data processing: on data triggers, on schedule
8. Typical Users
Maintenance Engineers
• Electrical / Mechanical background
• No software experience
• Limited PC interest
• Responsible for finding and fixing issues
• Contacted by customer with issues / questions
• Investigate issues using data
Process experts
• Specialist background. Experts in their field
• No software experience
• PC competent
• Obsessed about improving the process
• Called on for challenging process / maintenance issues
Process-Data people
• Data obsessed
• All about performance, KPIs
• Six-Sigma “blackbelt” types
• Excel experts
• Not necessarily programmers
• Create Statistical queries
Database / Software people
• Come from software background
• SQL experience
• Build Visualisations / dashboards
Managers
• Need quick easy to understand visuals for decisions
• Have very little time
• Unlikely to be bothered to point and click, let alone type
Users / Customers
• Observe issues
• Ask questions. Have problems. Want answers
• Don’t understand databases
9. Typical Scenarios
“I’m seeing this. Something has gone wrong. It’s not working. Fix it!”
10. Typical Scenarios
“I want to visualise this data together to troubleshoot what happened when.”
11. Typical Scenarios
“Here’s what I have discovered (with notes and annotations).”
12. Typical Scenarios
“I’m seeing a valve oscillation when the vessel pressurises. Is this a problem? Has it always happened? Does it happen on other machines?”
13. Typical Scenarios
“I need to investigate this further before I answer that. I need data.”
14. Typical Scenarios
“I want to visualise Overall Equipment Availability for each machine in a process / subprocess.”
15. Typical Scenarios
Users / Customers need access to just their data.
“I’m discovering some hidden patterns in data. There is an opportunity OR there is a problem.”
16. Needs & Requirements
• Cost efficiency (ingest, query, storage)
• Use of standard gateways between PLC & Cloud
• Security:
• Encryption of data in transit
• MFA for data in storage
• Reliability:
• Not losing data (even if cloud systems are down)
• Being able to guarantee that data arrived
• Serverless infrastructure (not even EC2)
• Data recording for the lifetime of the machine
• Data visualisation & exploration for the inexpert (ideally without writing any code or using known languages)
• Live dashboards
• Alerts to mobile app, email, SMS
• Complex data processing: on data triggers, on schedule
20–22. Our chosen infrastructure (progressive build)
• Gateways speak MQTT & OPC
• AWS is the MQTT broker
• MQTT messages handled by Lambda functions
• Lambda code:
  • Updates InfluxDB
  • Updates live dashboards (Servitly)
• Failed data is queued and stored in DynamoDB (e.g. if InfluxDB is down)
• Serverless code is watched and alarms on issues
• 3rd party (Servitly) used for alerts / live dashboarding / customer’s own view
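The "Lambda writes InfluxDB, failed data goes to DynamoDB" path above can be sketched as a small handler factory. This is illustrative only: the payload shape, and the injected `write_point` / `queue_failed` callables (which would wrap the InfluxDB write API and a DynamoDB `put_item` in production), are assumptions rather than LBBC's actual schema:

```python
def make_handler(write_point, queue_failed):
    """Build an AWS Lambda-style handler for MQTT messages routed by AWS IoT.

    write_point(record)  -- writes one record to InfluxDB; raises on failure
    queue_failed(record) -- parks the record (e.g. in DynamoDB) for later replay
    """
    def handler(event, context=None):
        # Normalise the incoming MQTT payload (field names are hypothetical)
        record = {
            "equipment": event.get("equipment", "unknown"),
            "fields": event.get("fields", {}),
            "ts": event.get("ts"),
        }
        try:
            write_point(record)           # normal path: straight into InfluxDB
            return {"status": "written"}
        except Exception:
            queue_failed(record)          # e.g. InfluxDB down: queue for replay
            return {"status": "queued"}
    return handler
```

Because Lambda adds parallel instances as message rates rise, the handler itself stays stateless; all durability lives in InfluxDB or the fallback queue.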
29. Connected support procedure:
• Notification: LBBC and the customer are notified of an alert
• The service team attempt to diagnose using just the dashboard
• InfluxDB is used to construct any detailed analysis
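When the dashboard alone isn't enough, the detailed analysis step typically means querying the alert window for one machine out of InfluxDB. A minimal sketch of building such a Flux query in Python; the bucket name and the `equipment` tag are assumptions, not LBBC's actual schema:

```python
def diagnosis_query(bucket, equipment, start, stop):
    """Build a Flux query pulling all series for one machine over an
    alert window, ready to pass to the InfluxDB query API."""
    return (
        f'from(bucket: "{bucket}")\n'
        f'  |> range(start: {start}, stop: {stop})\n'
        f'  |> filter(fn: (r) => r.equipment == "{equipment}")'
    )
```

For example, `diagnosis_query("machines", "autoclave_07", "-2h", "now()")` covers the two hours leading up to an alert.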
30–32. Value of InfluxDB to us (diagram, progressive build: what InfluxDB delivers, mapped to value for the IIoT use case)
• Low cost
• Performant ingest, query, process
• Visualise data
• Process data
• Automate
• Performant UI
• Share insights
• Explore data
• Security
33. What other products did we consider?
Forecast of DATA VOLUMES
Forecast of INGEST + STORE COSTS
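Forecasting data volumes for a product comparison is simple arithmetic: machines × signals × samples per month. A sketch with purely illustrative figures (the machine count, signal count and sample period below are made up, not LBBC's numbers):

```python
def monthly_points(machines, signals_per_machine, sample_period_s):
    """Rough forecast of data points written per 30-day month."""
    seconds_per_month = 30 * 24 * 3600  # 2,592,000 s
    return machines * signals_per_machine * seconds_per_month // sample_period_s

# Illustrative only: 100 machines, 40 signals each, one sample every 10 s
points = monthly_points(100, 40, 10)  # ~1.04 billion points/month
```

Multiplying such a forecast by each vendor's per-point ingest and per-GB storage prices gives the comparable cost figures the slide refers to.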
34. Background:
• Our “Core Leaching” product uses Hydroxide
• Hydroxide is a dangerous & costly chemical
• Hydroxide depletes over months to Silicates
• Both are high pH so measuring pH doesn’t work
• Measuring Hydroxide has a customer value
Chemistry & Physics:
• Hydroxide has a very low vapour pressure.
• Low hydroxide → high vapour pressure
Data:
• 1 year’s worth of cycle data
• Pressures, temperatures, valves, pumps etc
• Data includes both fresh and spent Hydroxide
Challenge:
• Estimate Hydroxide strength from process data
• Predict when Hydroxide is going to be exhausted
Data Exploration Example
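The chemistry above suggests one route to the challenge: since a depleted (low-hydroxide) solution shows a higher vapour pressure, measured pressures from cycle data can be mapped back to an estimated strength. A minimal sketch, assuming an approximately linear relationship over the operating range and using made-up calibration numbers; the real work would fit against the year of cycle data:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b (no external dependencies)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def hydroxide_strength(vapour_pressure, a, b):
    """Estimate % hydroxide remaining from a vapour-pressure reading.

    Assumes a roughly linear, negative-slope relationship over the
    operating range; a and b come from calibration against cycle data
    spanning fresh and spent Hydroxide.
    """
    return a * vapour_pressure + b
```

With an estimator like this, "predict when Hydroxide will be exhausted" reduces to extrapolating the fitted strength trend toward a depletion threshold.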
38. Data Processing & Visualisation:
We’re going to do that with a lot of data:
• 12 months of data
• 150 cycles
• 1.3 million points
Computed and visualised in < 10 secs by the cloud database.
39. Discovery of hidden process patterns with value to the customer!
[Chart annotations: Feb 22nd — LBBC notify the customer that the Hydroxide needs changing; “New Hydroxide” markers; cycle #1247]
Customer: “your timing is right on cue. I thought it was close to spent. Our performance is definitely decreasing”
40. Tips / Learning (opinion only)
Suggest against Telegraf for IIoT. AWS + Lambda + Influx API is a ‘better’ solution:
• Telegraf needs a host. That means hardware or EC2
• AWS Lambda is serverless. AWS infrastructure costs £10/month for millions of msgs!
• AWS Lambda scales with data rates. Parallel instances are added as needed.
Suggest against MQTT ingest for IIoT. AWS+Lambda+Influx API is still a ‘better’ solution:
• If InfluxDB is down, you’ll need more advanced store and forward than MQTT offers
• Most IIoT solutions will need data to go to multiple endpoints and verify it got there. Direct MQTT from the industrial gateway to InfluxDB will not do that
For IIoT, use your hostname or equipment name as the “measurement”.
“Measurement” as a data type doesn’t make sense for the IIoT use case.
• Equipment name is a more natural IIoT primary key (it’s usually the first filter on the data)
• Influx’s Data Explorer doesn’t allow the exploration of multiple “measurements”, i.e. you can’t use it to plot temperatures, pressures and Boolean values on the same chart
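The "equipment name as measurement" tip translates directly into how points are written. A deliberately minimal line-protocol encoder sketch (floats only, no tag/escape handling; the equipment and field names are illustrative, and a production encoder should use the InfluxDB client library's `Point` type instead):

```python
def to_line_protocol(equipment, fields, ts_ns):
    """Emit one InfluxDB line-protocol record with the equipment name as
    the measurement, so the first filter on the data is the machine itself.

    Limitations of this sketch: fields are coerced to float, and no
    special-character escaping is performed.
    """
    field_str = ",".join(f"{k}={float(v)}" for k, v in fields.items())
    return f"{equipment} {field_str} {ts_ns}"
```

A record then reads like `autoclave_07 temperature=182.4,pressure=6.1 <timestamp>`, and every signal for a machine lives under one measurement.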
41. We love InfluxDB, BUT …
It took a lot of effort to put together AWS + Lambda + Influx API.
If only Influx gave users a pre-prepared, fully functional cloud integration (AWS, GCP, Azure)
InfluxDB doesn’t allow us to restrict logins to access only a subset of data.
Users are all ‘owners’ and can view (and delete) the entire database.
We love that a link can be shared to a dashboard where time (start; stop) & variables are encoded in the HTTPS URL.
But this doesn’t work for Notebooks (and neither do variables)
The InfluxDB UI delivers most of what the Industrial IoT / Process Control use case demands, with some
notable exceptions that reduce usefulness in process troubleshooting & exploration:
• The display of Boolean values
• Synchronisation of multiple charts (zoom + pan together)
• X-Y plots need Flux code.
Username/password authentication is basic … but IIoT users probably need MFA to protect their data from view