Paradigm Shift in Industry:
Telemetry-Driven Production
- All-Seeing Eyes -
(c) 2018/2019 Bastian Mäuser / NETZConsult (Germany)
Situation
(not so uncommon)
Customer
●Fr. Ant Niedermayr GmbH & Co. KG (https://niedermayr.net)
●218 years of company history, est. 1801 by Franz Anton Niedermayr
●Situated in the city of Regensburg, Bavaria, southern Germany
●Owner-operated by Johannes Helmberger, a Niedermayr descendant in the
6th generation
●205 employees
●Printing company, creative department, IT + datacenter services
●Approved presenting major parts of the project
Technical origin
●Prior experience with various time-series DBs through personal
open-source project involvement
●IT monitoring: done plenty of times, everything well documented.
●Had nice IT dashboards; why not apply this visibility approach to an
industrial process?
●Aimed for four targets: controlling, prediction, saving $$$, escaping vendor
lock-in
●Quick first results: the initial implementation took just a few days.
●Most of the work: interpreting and validating the numbers
Print?
●1 Lithoman IV 80p and 2 Lithoman-S 96p web offset printing machines, plus
smaller machines
●Output: up to 4.8M A4 pages per hour per unit
●24/7 production
●About 15 different suppliers of main subunits: simplified interfaces to
each other, very proprietary, high complexity. (Front to back: splicer, 4x
inking, dryer, remoistener, web cutter, folder, stitcher, conveyor,
trimmer, stacker, strapper, palletizer/robot, foliator)
Plant
Some Pictures
The Dilemma
●Industrial plants suffer from a notoriously high heterogeneity of data sources
and access protocols across their subunits.
●Manual or semi-automated reporting/aggregation of the different data sources
doesn’t scale; it is paid for with large amounts of manual labor and is prone
to errors.
●Existing reports are job-bound and only available after job completion. An
exact time reference for metrics/events is impossible to achieve.
Data sources that matter
●Plant control Pecom PMI: Postgres RDBMS (via ODBC)
●MAN Roland IDC: MQTT (see the ingestion sketch after this list)
●Quadtech QTI IDC on the 80p press
●Energy: Janitza/GridVis REST API
●ERP system “Chroma”: Oracle 12-based client/server application without
any usable API
●Robot cell: MSSQL without client access, but access to an OPC UA server
●Baldwin fluid management: MQTT
●Technotrans ink supply system
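
A minimal sketch of the MQTT ingestion path (broker host, topic layout, and
payload fields are assumptions, not the real IDC schema): subscribe to a data
source and re-emit values as InfluxDB line protocol, the normalization step
the Node-RED flows perform here.

```python
import json
import time

import paho.mqtt.client as mqtt  # pip install paho-mqtt

def on_message(client, userdata, msg):
    # Payload layout is hypothetical; real IDC topics/fields will differ.
    data = json.loads(msg.payload)
    line = "idc,press=lithoman1 dotgain={:.3f} {}".format(
        data["dotgain"], time.time_ns())
    print(line)  # hand this off to Telegraf or the InfluxDB write endpoint

client = mqtt.Client()
client.on_message = on_message
client.connect("broker.example.local", 1883)
client.subscribe("press/idc/#")
client.loop_forever()
```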
Possible Approaches
●Excel (rly?)
●RRD collector
●Collecting data in a relational structure (Postgres, MySQL, etc.) with
attached visualization (Cacti, Zabbix, etc.)
●Elastic Stack (Elasticsearch, Beats, Logstash, Kibana)
●Graphite (Carbon/Whisper + Graphite + Grafana)
●TICK/G/L (Telegraf + InfluxDB + Chronograf + Kapacitor + Grafana) + LoudML
(disruptive machine-learning API)
Decision for TICK
●Scales well at high ingest rates: happily eats >500k data points per
second on a single instance (we are at about 800 data points/machine/second;
see the write sketch after this list)
●Compelling storage engine (in terms of speed, space efficiency, space reclaim,
retention)
●Extensive ecosystem of input and output plugins (Telegraf)
●Proven production-ready: many big names in IT rely on it
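
For illustration, a minimal sketch of the ingest path Telegraf automates
(database name, measurement, and host are assumptions): batch line-protocol
points into the InfluxDB 1.x HTTP /write endpoint.

```python
import requests

# Three hypothetical per-phase power readings as line protocol.
points = "\n".join(
    "energy,phase=L{} power_w={}".format(i, watts)
    for i, watts in enumerate([1203.5, 1189.2, 1222.8], start=1)
)
resp = requests.post(
    "http://influxdb.example.local:8086/write",
    params={"db": "plant", "precision": "s"},
    data=points,
)
resp.raise_for_status()  # InfluxDB answers 204 No Content on success
```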
Chosen Approach: Node-RED + TICK/G/L
Example view of a Node-RED flow for IDC
Steps
1. Identify the data sources that matter
2. Deploy instrumentation and extend where required
3. Technical interface design: some sources work with plain Telegraf, some
require moderate coding
4. Dashboard design (Grafana, Chronograf)
5. Derive KPIs (see the sketch after this list)
6. Define criteria for alerts
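
A sketch for step 5 (measurement and field names are assumptions): derive a
waste-ratio KPI per hour with a single InfluxQL query against InfluxDB 1.x.

```python
import requests

query = (
    'SELECT sum("waste_copies") / sum("gross_copies") '
    'FROM "press" WHERE time > now() - 8h GROUP BY time(1h)'
)
resp = requests.get(
    "http://influxdb.example.local:8086/query",
    params={"db": "plant", "q": query},
)
# Walk the standard InfluxDB JSON response shape.
for series in resp.json()["results"][0].get("series", []):
    for ts, ratio in series["values"]:
        print(ts, ratio)
```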
Difficulties
1. Reverse engineering might be required
2. Dealing with outdated hardware and software is not uncommon
3. Negotiations with machine suppliers can be challenging
4. Data validation
Good habits
●Implement security right away (at least a reasonable password for
MQTT brokers, even better TLS client certificates; see the sketch after this list)
●Separate VLAN
●Collecting everything that is available isn’t a good idea either
●Avoid redundant values
●Write interpretation documentation (at which physical points do
measurements originate; are they raw or already calculated?)
●Don’t end up with a directory full of custom scripts: we developed a
standard in Node-RED
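
A minimal sketch of the TLS-client-certificate habit with paho-mqtt (file
paths, broker host, and topic are assumptions):

```python
import paho.mqtt.client as mqtt

client = mqtt.Client()
client.tls_set(
    ca_certs="/etc/telemetry/ca.crt",      # CA that signed the broker cert
    certfile="/etc/telemetry/client.crt",  # per-device client certificate
    keyfile="/etc/telemetry/client.key",
)
client.username_pw_set("press1", "use-a-real-secret")  # fallback auth
client.connect("broker.example.local", 8883)           # TLS port
client.loop_start()
client.publish("press/fluids/level", "0.73", qos=1).wait_for_publish()
client.loop_stop()
```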
Electrical Power
●Consumption up to 4 MW
(electrical)
●Biggest savings potential
Paper
●100,000 metric tons/yr
●Quantify waste
●Identify waste causes
●Reduce waste by
reducing wash cycles
●Predict situations to
avoid unplanned downtime
Central Ink supply
●2,700 metric tons/yr
●Validate consumption
●Forecast required
deliveries (toy sketch below)
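
A toy sketch of delivery forecasting, not the production model (tank size
and consumption figures are made up): fit a linear trend to cumulative ink
consumption and estimate when to order.

```python
import numpy as np

days = np.arange(14)  # last two weeks
# Hypothetical cumulative consumption in kg, with some noise.
consumed_kg = 7400 + 520 * days + np.random.normal(0, 40, 14)

slope, intercept = np.polyfit(days, consumed_kg, 1)  # kg per day
tank_capacity_kg = 12000
days_until_empty = (tank_capacity_kg - consumed_kg[-1]) / slope
print(f"~{slope:.0f} kg/day, order in about {days_until_empty:.1f} days")
```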
Result: tactical overview
QA KPI (dot gain)
Instrumentation: ΔE deviation (densitometric)
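
ΔE is commonly computed as the CIE76 Euclidean distance between two CIELAB
measurements; a minimal sketch (the actual instrumentation may use a newer
formula such as CIEDE2000):

```python
def delta_e_cie76(lab1, lab2):
    """Euclidean distance between two (L*, a*, b*) triples."""
    return sum((x - y) ** 2 for x, y in zip(lab1, lab2)) ** 0.5

print(delta_e_cie76((52.1, 42.5, 20.1), (53.0, 44.0, 19.2)))  # ~1.97
```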
Waste quantification and causes
More interesting metrics in print
●Overall Waste
●Washing Waste
●Reel Numbers
●Web Width
Deep Analysis
Consumption vs. efficiency vs. staff vs. jobs/customer vs. consumables vs.
quality KPIs
More Deep Analysis
Consumables in $$$
Incidents in time and $$$
Achievements so far
●Production real-time data (some near real-time, 10-30 s at most; some true
streaming metrics <3)
●Significant energy savings (upper six-digit number/yr)
●Fine-grained values
●LoudML / TensorFlow in place; ML models applied and constantly developed
●Anomaly detection throughout raw data sources (illustrated after this list)
●Close-interval validation of business numbers against actual measurements
●Successfully escaped vendor lock-in
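
LoudML handles the real models; purely as an illustration, a rolling z-score
is the simplest form of anomaly detection on a raw series (window size and
threshold are arbitrary):

```python
from collections import deque
from statistics import mean, stdev

window = deque(maxlen=60)  # last 60 samples of one metric

def is_anomaly(value, threshold=3.0):
    """Flag a sample that deviates > threshold sigmas from the recent window."""
    anomalous = False
    if len(window) >= 10:  # wait for a minimally stable baseline
        mu, sigma = mean(window), stdev(window)
        anomalous = sigma > 0 and abs(value - mu) / sigma > threshold
    window.append(value)
    return anomalous
```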
Future
●Deploy more instrumentation: vibration and waveform analysis (e.g.
precursor identification for bearing failures, conveyor drives, fans) with
specialized hardware
●Even more metrics
●Continue ongoing talks with vendors: deliver all metrics on an MQTT
broker
●Signalling to production: reduce washing waste by using IDC signalling
derived from dot-gain values in InfluxDB (beta run ongoing; sketch below)
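
A sketch of what such a signalling loop could look like (topic, target value,
tolerance, and field names are assumptions, not the beta implementation):
poll the dot-gain series in InfluxDB and publish a wash-inhibit signal while
quality stays within tolerance.

```python
import time

import paho.mqtt.client as mqtt
import requests

mqttc = mqtt.Client()
mqttc.connect("broker.example.local", 1883)
mqttc.loop_start()

while True:
    resp = requests.get(
        "http://influxdb.example.local:8086/query",
        params={
            "db": "plant",
            "q": 'SELECT mean("dotgain") FROM "idc" WHERE time > now() - 5m',
        },
    )
    series = resp.json()["results"][0].get("series")
    if series:
        dotgain = series[0]["values"][0][1]
        in_tolerance = abs(dotgain - 14.0) < 1.5  # target and tolerance assumed
        mqttc.publish("press/signal/wash_inhibit", "1" if in_tolerance else "0")
    time.sleep(30)
```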
Thank you for your attention
Questions?
Bastian Mäuser
<bma@netz.org>
