WIPAC MONTHLY
The Monthly Update from Water Industry Process Automation & Control
	www.wipac.org.uk										Issue 5/2021 - May 2021
Page 2
In this Issue
WIPAC Monthly is a publication of the Water Industry Process Automation & Control Group. It is produced by the group
manager and WIPAC Monthly Editor, Oliver Grievson. This is a free publication for the benefit of the Water Industry, so please
feel free to distribute it to anyone who you feel may benefit. However, due to the ongoing costs of WIPAC Monthly, a donation website
has been set up to allow readers to contribute to the running of WIPAC & WIPAC Monthly. For those wishing to donate,
please visit https://www.patreon.com/Wipac; all donations will be used solely for the benefit and development of WIPAC.
All enquiries about WIPAC Monthly, including those from anyone who wants to publish news or articles within these pages, should be directed
to the publications editor, Oliver Grievson, at olivergrievson@hotmail.com
From the editor............................................................................................................. 3
Industry news.............................................................................................................. 4 - 11
Highlights of the news of the month from the global water industry, centred around the successes of a few
of the companies in the global market.
A brand new rising main monitoring programme........................................................ 12 - 15
This is a case study by Syrinix, who worked with Anglian Water on using their technology on wastewater rising
mains to detect asset failure. There is a fascinating WIPAC Webinar available on the YouTube channel on the
subject.
The Smart Water Industry is no longer a choice....it's a must.......................................... 16 - 19
This article was something I wrote back in 2019 after attending a conference about Smart Water and, like anything
good, it came out of a slight sense of frustration as to why the industry wasn't as far along as it could and should be
with Smart Water.
Optimisation of an SBR using enhanced control............................................................ 20 - 22
In the second case study of this edition we revisit a case study first published in 2018, which looks at the
savings made at the Cookstown WwTW using an advanced activated sludge plant controller, the ASP-CON.
WirelessHART networks: 7 myths that cloud their use for process control..................... 23 - 26
WirelessHART is something that has never really achieved its potential in the water industry. In this revisited
article by ABB we look at the protocol and its benefits to the water industry.
Getting data quality wrong and how to get it right....................................................... 27 - 29
The fundamentals of instrumentation are often ignored, and it has never been more important to get them
right. In this revisited article we look at the basic principles of getting it right.
Is Water 4.0 the future of the Water Industry?.............................................................. 30 - 32
An article I wrote in 2016 about Water 4.0; what was said five years ago is just as relevant now.
Using online water quality distribution systems monitoring to detect and control
nitrification............................................................................................................ 33 - 35
An article from 2015 looking at distribution system monitoring data to provide real-time detection.
The use of APC in the modern wastewater industry.................................................. 36 - 38
And lastly, an article from 2015 looking at APC and the multi-variate process control techniques that allow control of
wastewater plants using both Real Time Control (RTC) and Multi-Variate Process Control.
Workshops, conferences & seminars........................................................................... 39 - 40
The highlights of the conferences and workshops in the coming months.
Page 3
From the Editor
	 		 	
Sometimes it seems only yesterday since I started the WIPAC Group and began putting together WIPAC Monthly, and
sometimes it seems a very long time ago indeed. This month it was precisely ten years since I started the group, and I
remember in detail it growing from a few members to its thousandth member a year later. I remember swapping messages
with a water treatment plant manager who wanted me to convince him as to why he should join the group. Nine years
after that the group has just over 9,500 members, and WIPAC Monthly has gone from a one- or two-page summary to the
bumper edition that I've put together this month, featuring some of my favourite articles from just the past five years.
On the 16th May I put an honest message together for members of the group, and in it stated the honest truth: it is
you, the readers of WIPAC Monthly, everyone who helps me put together the WIPAC Webinars and, more recently, our first
WIPAC Showcase, who have really made the group what it now is ten years later. In the past ten years I have only managed
to meet a fraction of the members in my travels to various conferences & exhibitions around the world and that is, at
least for me, a real shame. Hopefully, over time, when we get back to physical or maybe even hybrid
conferences, that will change. In the meantime the virtual conference circuit has its benefits, and hopefully there will be much more
of the WIPAC webinars, showcases and anything else that we as a group can share with each other. That was
the aim of the WIPAC Group from the very beginning: to share the successes and failures of using instrumentation, automation & control. I have certainly seen
my fair share of success and my fair share of failure too, and the wise words that you learn more from your failures will always be true.
So, hopefully you can forgive me this look back over the articles of the last five years (if I went back any further I think I would burst most people's inboxes),
and I hope that you enjoy this latest edition. My final words in this short editorial are to say that I hope you have enjoyed the last ten years of
WIPAC and WIPAC Monthly; for me there have been some very late nights (and early mornings) putting the monthly editions together, and
there's a lot more to come over the next ten years, and maybe ten more after that.
Have a good month and of course stay safe,
Oliver
Syrinix Launches New Combined Acoustic Leak and Pressure Monitor
Syrinix this month announced the launch of its smart network monitoring tool that combines high-resolution pressure
monitoring and leak detection in one solution: PIPEMINDER-ONE Acoustic. This new version of the popular
PIPEMINDER-ONE extends the tool's existing pressure monitoring with acoustic monitoring to locate leaks and bursts.
Combined with RADAR, Syrinix’s cloud analysis platform, PIPEMINDER-ONE Acoustic locates
leaks on a broad range of pipeline material and sizes. Like the rest of the PIPEMINDER-ONE
family, the Acoustic version triangulates pressure events and sends intelligent alarms so utility
users can identify and fix potential problems on their network. All data is recorded by a precise
time-stamped management information system synced to reliable 4G, 3G, and 2G mobile
networks. Because units are widely spaced along the distribution network, fewer PIPEMINDER-
ONE Acoustic units than traditional leak detectors are needed to obtain valuable high-resolution
data.
“Water and wastewater utilities need cost effective and resilient monitoring systems,” notes Mark Hendy, Vice President of Business Development EMEA at
Syrinix. “The PIPEMINDER-ONE Acoustic can be installed permanently or on a semi-permanent survey basis for use detecting both leaks and the damaging
pressure events that can lead to leaks and bursts.”
The benefits of PIPEMINDER-ONE Acoustic, Hendy adds, translate to significant cost-savings: “Preventing asset deterioration is often the best way to maintain
a viable utility. Using PIPEMINDER-ONE Acoustic for the early detection of leaks and problematic pressure sources, utilities can proactively make operational
adjustments to prevent wear and tear on the network instead of reacting to asset failures.”
By supporting informed decision-making with data to calm networks, previous iterations of the PIPEMINDER monitor transformed how utilities manage and
maintain assets. Adding acoustic leak collection to transient monitoring capabilities, this new iteration of the PIPEMINDER provides new data combinations
in a smaller footprint.
PIPEMINDER-ONE Acoustic records pressure at 128 samples per second, generating both transient and summary data, which can be used for triangulation,
clustering, classification, and export via an API. The addition of acoustic data from a new, improved hydrophone is used in combination with pressure
monitoring to identify a leak position. With speedy and precise detection, utilities can now respond quickly to operational and network failures before
customers notice any problems and, with the same unit, identify and mitigate the pressure events contributing to those leaks and bursts.
Ben Smither, Vice President of Engineering at Syrinix, echoes Hendy’s emphasis on a modern solution: “Modern utilities must monitor for developing leaks
while performing real-time analysis of pressure transient events. Combining leak notifications with high resolution pressure monitoring with zone alarms,
PIPEMINDER-ONE Acoustic empowers operators with the data to save time, save money and improve performance.”
WIPAC hits its 10th Anniversary
On 16th May this month the Water Industry Process Automation & Control Group reached its 10th
anniversary. Launched on 11th May 2011, over the years it has gathered a membership that currently
stands at just over 9,500 members, all interested in instrumentation, control & automation and how
this all fits into the Digital Transformation of the Water Industry.
In a video message to the group, Oliver Grievson, who has now produced 115 editions of WIPAC
Monthly, expressed his gratitude to each and every member of the group, from those who joined on
the very first day to the most recent members.
This month also saw the launch of WIPAC's new initiative, the WIPAC Showcase, which aims to take
new innovations and products in the water industry and invite members of the WIPAC group to keep
up to date with the latest developments.
The first company to showcase their newest developments were Vega Control Systems, who have been a long-time supporter of Water Industry Process
Automation & Control. In this first showcase we saw Matt Westgate, the Water Industry Manager at Vega, and his colleague Peter Devine take us through the
first level-based flow device in the water industry to be certified to operate without a separate transmitter. The C21 and C22 devices, along with the
Vegapuls 21 and 31, can operate independently of the transmitter (the Vegamet 861/862) and have a 2mm accuracy over a 5m range, which puts the radar into
a Class A category of certification.
The first WIPAC showcase is available for members on the WIPAC YouTube channel. Any other companies who are interested in taking part in a WIPAC showcase
should contact Oliver Grievson, the Executive Director at WIPAC.
Page 4
Industry News
Affinity Water in UK first for two novel Industry 4.0 applications
using smart demand management
In a UK first, Affinity Water is set to trial two novel Industry 4.0 (I4) applications
using smart demand management for existing drinking water and rainwater
storage systems. The project is one of Affinity Water’s two winning initiatives
produced in collaboration with other water companies, UK Universities and
government agencies to improve the efficiency and resilience of its water
supplies.
The trial will seek to unlock ‘hidden gems’ by making the most use of existing
water storage assets in a new way in order to build network resilience and pave
the way for the industry to explore new solutions further.
Working in collaboration with the University of Exeter, Aqua Civils and technical
consultants, Affinity Water proposes to develop a 'business model canvas' for
drinking water and rainwater storage tanks to harness real-time monitoring and
control solutions and explore optimised strategies for real-time top-up control.
Affinity Water focussed the design of the proposal to target operational system
resilience and Open Data themes.
Historically, decentralised water tanks, such as feeding tower blocks and rainwater harvesting tanks, automatically fill with mains water during peak water
usage periods. In extended dry spells, rainwater harvesting systems fail to reduce demand on the potable network when they are most needed.
The outcome of the trial will quantify the scale of the opportunity to implement smart water tank control at existing customer assets to build operational
resilience and reduce disruption to customers.
It will significantly enhance Affinity Water’s aim to improve the efficiency, flexibility and resilience of water networks for the benefit of customers in the future
while protecting the environment.
Partners include the University of Exeter and Aqua Civils along with a range of experts and consultants.
Seagrass project will use nature-based solutions
The water company’s innovative ‘Seagrass Seeds of Recovery’ project will form part of its activity to use Nature-Based Solutions to help address the problems
of both a nature crisis and a climate emergency.
Seagrass meadows enhance the stability of coastal zones, locking carbon into the seabed at a rapid rate, improving water quality and creating habitat for
hundreds of thousands of small animals - enhancing the resilience of coastal ecosystems.
In Essex and Suffolk, thousands of hectares of seagrass have been lost and restoration of seagrass will help to support the UK Government’s 25-year Environment
Plan.
A consortium of ten partner organisations has been created to deliver this project and strong collaboration throughout will be maintained. These are: Anglian
Water; Project Seagrass – lead delivery partner; Salix River & Wetland Services; Cefas (Centre for Environment, Fisheries and Aquaculture Science); Environment
Agency; Natural England; Department of Zoology and Wadham College, University of Oxford; Swansea University; University of Essex.
Affinity Water already undertakes significant nature-based activities through its long standing catchment management and river restoration programmes. The
utility is already in the process of considering many more nature-based opportunities including planting at least 110,000 trees by 2030.
Page 5
Thames Water offers to address competition concerns over smart
meter roll out
Following complaints and an investigation opened by Ofwat, Thames Water has offered formal commitments under the Competition Act 1998, which look to
address concerns over the company's approach to rolling out its smart metering programme in the non-household market.
Ofwat was concerned Thames Water unfairly removed or limited access to water consumption data used by retailers and third parties – key information for
detecting leaks, ensuring water efficiency and the accuracy of bills.
Ofwat investigated Thames Water following complaints that it had:
•	 installed smart meters that were incompatible with data logging devices used by retailers and third-party providers;
•	 removed other parties' data logging devices when replacing meters with new digital smart meters; and
•	 failed to offer access to data from its smart meters to retailers and third-party providers on fair, reasonable and non-discriminatory terms.
Monopoly providers, such as Thames Water, have a responsibility to ensure that their actions do not harm competition in active markets.
Where it had installed smart meters, Thames Water had effectively withdrawn direct access to its meters and failed to provide a suitable alternative means
of obtaining the water consumption data they provide. Retailers and third-party providers need that data to provide their own services to customers. Ofwat has concerns
that this has the potential to negatively impact competition and the benefits for customers and the environment.
To address these competition concerns, Thames Water is proposing commitments to introduce technology which allows logging equipment to be attached
to its smart meters, and to ensure that the data services it provides to retailers and third-party providers are offered on fair, reasonable and non-discriminatory
terms.
It has stopped proactively replacing meters that have logging equipment attached until this technology is introduced.
Thames Water also proposes commitments to make improvements to how it engages with retailers and third-party providers to better understand and respond
to their needs; and to ensure it fully considers its impacts on markets when making decisions.
Ofwat considers that, when fully implemented, these commitments will address the concerns it identified and proposes to accept them.
Ofwat is now consulting on the commitments proposed by Thames Water before making its final decision. If Ofwat decides to accept the commitments Thames
Water will have to report to Ofwat on their implementation.
Emma Kelso, Senior Director of Markets and Enforcement, said:
"We're pleased to see Thames recognising the need to address our competition concerns to ensure it plays its part in making sure markets work effectively. And,
as the sector expands its smart meter programmes, it is important that all companies are mindful of the dominant position they hold in the market and how
their actions can affect markets, customers and other providers."
Riventa Puts Paris Pumping Station On Schedule For Big Savings
At a critical pumping station supplying a major tourist destination near Paris, deployment of
Riventa's proprietary FREEFLOWi4.0 pump monitoring system and HydraNet software has
proved that rescheduling pumps for optimum performance provides an immediate
saving of 21%.
Commissioned by leading water utility Saur, Riventa’s challenges were to identify a potential
reduction in life-cycle costs, as well as show how optimisation of controls and operations would
bring about immediate and long-term savings in running costs, including energy efficiency.
Riventa directly assessed the performance of three large and three small variable speed pumps.
They discovered that at the pumping station, during certain flow rates, the system operated at
up to twice the cost of optimum operation for significant amounts of time - and was also well
in excess of the lowest possible specific power.
Steve Barrett, Managing Director of Riventa, commented: “From the existing total operating
cost per year of just over 157,000 Euros, we showed that by optimising the current pumps, we
could reduce that to less than 124,000 Euros – but it is the optimization of pump schedules that provides the main benefit. Payback can be achieved in less than
12 months, with the significant improvements in operational performance also extending the lifetime of pumps”.
He added: “Based on all of the pumps being refurbished and operated at optimum configuration, the operating cost would lower even further to just under
113,500 Euros – though it must be said that the units were in good condition and a low priority for refurbishment at the time of the project. This means that
capital investment can be focused on other priorities. Assuming a CAPEX of a little over 100,000 Euros, payback would be achieved in three years. This is a classic
example of what can be achieved at pumping stations all over the world”.
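The quoted figures are straightforward to sanity-check. As an illustrative calculation only (assuming the euro figures are annual operating costs, as the article implies):

```python
def percent_saving(before, after):
    """Percentage cost reduction, rounded to the nearest whole percent."""
    return round(100 * (before - after) / before)

current = 157_000      # existing annual operating cost in euros, as quoted
optimised = 124_000    # after optimising schedules of the existing pumps
refurbished = 113_500  # after refurbishment and optimum configuration

print(percent_saving(current, optimised))    # prints 21, the headline saving
print(percent_saving(current, refurbished))  # prints 28
```

The 21% headline figure matches the optimisation-only case; refurbishment would lift the saving to roughly 28% of the original operating cost.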
Page 6
Thames Water completes “monumental” £20 million IT upgrade
to future-proof London water supply
The complex computer systems which control London’s drinking water supplies
have been upgraded while keeping the taps running in a “monumental” £20
million project by Thames Water. Moving from the 25-year-old RTAP system
to the new ClearSCADA platform saw the replacement of multiple legacy and
obsolete systems, while keeping customers in supply across the capital.
One of the largest of its type in Europe, the technology monitors output from
the five big treatment works in London – Hampton, Coppermills, Walton,
Ashford and Kempton – as well as more than 200 service reservoirs, pumping
stations and boreholes, many of which are unmanned and need to be operated
remotely.
Carly Bradburn, Thames Water’s head of digital operations, said:
“The computer system oversees the production, treatment and delivery of up
to 2.2 billion litres of drinking water every day. Replacing it has been a very
complex and challenging project.
“The old system was over 25 years old and software updates were no longer
available. Replacing it needed the engagement of multiple stakeholder groups,
external suppliers and companies, and has been a vast undertaking.”
The commissioning of the new system included checking and validating more than 700,000 data points, and around 100,000 functional, mimic, alarm and user
tests to ensure minimal operational disruption and risk.
The new system, supplied by Schneider Electric, was migrated over several months last year and this year, running alongside the old process to resolve any
problems, before taking full control of the whole estate.
Mark Grimshaw, Thames Water’s head of London water production, said:
“Investing in resilient systems and assets is one of our key priorities. There can’t be many more important projects than updating the technology that ensures a
reliable water supply for one of the world’s major cities.
“Keeping the old system up and running while launching the new system alongside it has been a monumental effort by everyone involved – a great example
of teamwork at its very best.”
Sensors for Water Interest Group (SWIG) extends its call for
training videos and announces its next three webinars
The Sensors for Water Interest Group has announced this month that it is extending its call for training videos for what is going to act as a hub for members
to gen up on their knowledge of instrumentation within the water industry. The library, which is currently available on the Sensors for Water Interest Group
website, features 30 videos which cover areas such as:
•	 Water quality,
•	 Level & flow meters
•	 Telemetry, IoT and Logging
This is only the start of the video library on the SWIG website which is designed to act as a vital resource for instrumentation specialists within the water
industry in order to help the utilities companies as much as possible. Companies, who are members of SWIG, are more than welcome to submit training videos
for inclusion in the video library on the SWIG website.
SWIG has also announced their next three webinars, which are going to take place as follows:
•	 "Achieving net zero" will happen on 16th June and is being hosted by Frances Cabrespina of Suez, who are kindly sponsoring the event so SWIG members can
attend free. There is also a taster event happening on 10th June with a keynote by Matt Gordon, the Engineering Manager of Suez, with a live discussion
•	 On the 14th July there will be a webinar on "How sensors protect our coastal waters", chaired by Michael Strahand and kindly sponsored by Xylem Analytics
•	 On the 29th September there is a webinar on "How to get the best value out of sensors", which is being sponsored by Siemens and hosted by Oliver Grievson,
the current SWIG Chairman and Technical Lead at Z-Tech Control Systems
For more details on all of these events please visit the SWIG website at www.swig.org.uk
Page 7
Unlocking the power of water data is becoming a must-have for
utilities
Ovarro’s associate product line manager for RTUs & loggers, Adam Wright, discusses why more frequent capture of water supply and distribution data is becoming
a must-have for utilities as they strive to build network resilience, improve customer experience and meet regulatory expectations.
Adam Wright shares insights into the latest developments in a Q and A session.
What can today’s data tell utilities about their water networks?
Data logging allows water companies to accurately and reliably record parameters for pressure, flow and level across the water network by interfacing with
common industry flow meters and sensors to enable efficient network management. Visibility of district metered areas (DMAs) combined with network models,
pressure surveys, consumer flow monitoring and reservoir depth calculations all mean water companies are able to make informed decisions that will result in
a reduction in cost of network ownership. With more data comes increased insight and ultimately increased value.
What are some of the data capture challenges faced by utilities?
Key challenges include the increasing pressure on data security, a growing need for more battery power to send more data for longer periods, and communications
reliability. These are always front of mind for Ovarro when developing and updating its data loggers. The good news is, technology around sensors, communications
and battery life is advancing rapidly. Ovarro’s data loggers can now communicate with multiple different sensors from one device using the internet of things
(IoT). They are programmed wirelessly using a Bluetooth app and data is sent securely to the cloud or the customer’s system. The rollout of 4G and IoT networks
has significantly improved communications. Ovarro has very recently updated the XiLog advanced data logger following an intensive period of research and
development. The latest version comes with 4G or NB-IoT/CAT-M1 and Bluetooth as standard, with 5G connectivity to follow in the future.
IoT has been a real gamechanger in reducing power consumption and allowing loggers to send data more frequently. Battery technology has also progressed,
allowing Ovarro’s loggers to deliver as much as a 10-year battery life. This means fewer battery changes and site visits, which reduces environmental impact,
while freeing up time and saving costs.
What are the differences between 4G, 5G and IoT networks?
The difference between the available connectivity networks mostly comes down to coverage. Currently, 4G and both narrowband and LTE-M IoT are the most
cost-effective options and together cover most of the world. Use of 5G is currently too expensive for most applications, but the price is slowly coming down.
IoT is available globally and IoT modems consume the least power in sending data. This means that users can either send more data or get a longer life out of the
battery. By contrast, 4G mainly uses region-specific modems. The phasing out of 3G services means that 4G, which is being built out from legacy 3G networks,
will become a requirement. Customers are no longer interested in investing in any technology that has only 3G due to its limited lifespan. It is expected that 5G
will eventually replace 4G, though not imminently.
What advances have been made in the reliability and frequency of data capture?
Historically, data loggers would capture data on a set schedule, say one datapoint every 30 minutes, then relay it once a day. This means that if the signal is
interrupted for any reason then a whole day's data is delayed; the logger would try to send it again the next day. In theory, the data could still be extracted, but it
could be days later. From an operations point of view, receiving the data as close to real-time as possible means that personnel can act quickly on changes and
irregularities. Where data is delayed or lost, severe pressure changes in the water network might be missed or not acted on until days later.
These occurrences could indicate serious incidents like leaks or loss of customer supply. In both cases, the water company wants to see the data as soon
as possible to mitigate the impact on customers. We are seeing now that water companies want data sent every 15 or 30 minutes; thankfully, the
improvements in battery power mean this is now possible.
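The store-and-forward behaviour described here can be sketched in a few lines of Python. This is a hypothetical illustration rather than Ovarro's firmware; the `send` callable stands in for whichever uplink (4G, NB-IoT) the logger uses:

```python
from collections import deque

class BufferedLogger:
    """Store-and-forward sketch: readings queue locally until the uplink
    succeeds, so a communications outage delays data rather than losing it."""

    def __init__(self, send):
        self.send = send        # callable returning True on a successful uplink
        self.buffer = deque()

    def record(self, timestamp, value):
        self.buffer.append((timestamp, value))

    def flush(self):
        """Try to transmit everything queued, oldest first; return count sent."""
        sent = 0
        while self.buffer:
            if not self.send(self.buffer[0]):
                break           # link down: keep the rest for the next attempt
            self.buffer.popleft()
            sent += 1
        return sent

# Simulate a day-long outage: nothing is lost, only delayed.
link_up = {"ok": False}
logger = BufferedLogger(send=lambda reading: link_up["ok"])
for t in range(48):             # one reading every 30 minutes for a day
    logger.record(t, 1.2)
logger.flush()                  # outage: 0 readings sent, 48 still queued
link_up["ok"] = True
logger.flush()                  # link restored: the backlog of 48 drains in one go
```

A real device would also persist the buffer across power cycles and batch readings per transmission to save energy, which is where the battery-life gains discussed above come from.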
What part can data capture play in meeting regulatory targets and net zero carbon goals?
Data logging enables efficient network management, leading to resilient and reliable water supplies - a win-win for both customers and the environment. On
the regulatory side, an efficient network means fewer bursts, supply interruptions and leaks. The larger datasets that the updated XiLog is capable of collecting
and sending are integral to the network management activities that help shape and optimise leak detection programmes.
Maintaining operational control over these critical areas will also play a part in utilities in England and Wales achieving the net zero goal on carbon emissions by
2030. If the amount of water lost through leakage is reduced, the volume of water being treated and put into supply is also reduced, cutting energy consumption
and carbon emissions in the process. Similarly, if a burst or leak causes low pressure for customers, pumps must work harder, therefore consuming more power.
Having reliable data allows action to be taken before any significant customer or environmental impact is felt.
Where should the sector be going from here?
In the near future we can expect to see fully connected networks, with hardware and analytics making real-time decisions based on water company goals and
challenges. The future is not about hardware alone; however, next-generation data loggers are the key to benefiting from a combined approach that includes
IoT connectivity, big data and advanced analytics.
The current trend is clear: the market is moving in a direction that enables water companies to receive more and more data. Now, more than ever, the question is
about getting the most value out of that data and having the right processes and systems in place to do so. We know that with more data comes more potential
insight, but the true value comes when that data is efficiently visualised and analysed.
Page 8
Artificial Intelligence Predicts River Water Quality With Weather
Data
The difficulty and expense of collecting river water samples in remote areas has led to significant — and in some cases, decades-long — gaps in available water
chemistry data, according to a Penn State-led team of researchers. The team is using artificial intelligence (AI) to predict water quality and fill the gaps in the
data. Their efforts could lead to an improved understanding of how rivers react to human disturbances and climate change.
The researchers developed a model that forecasts dissolved oxygen (DO), a key indicator of water’s capability to support aquatic life, in lightly monitored
watersheds across the United States. They published their results in Environmental Science & Technology.
Generally, the amount of oxygen dissolved in rivers and streams reflects their ecosystems, as certain organisms produce oxygen while others consume it. DO also
varies based on the season and elevation, and the area’s local weather conditions cause fluctuations, too, according to Li Li, professor of civil and environmental
engineering at Penn State.
“People usually think about DO as being driven by stream biological and geochemical processes, like fish breathing in the water or aquatic plants making DO on
sunny days,” Li said. “But weather can also be a major driver. Hydrometeorological conditions, including temperature and sunlight, are influencing the life in the
water, and this in turn influences the concentration levels of DO.”
Hydrometeorological data, which tracks how water moves between the surface of the Earth and the atmosphere, is recorded far more frequently and with
more spatial coverage than water chemistry data, according to Wei Zhi, postdoctoral researcher in the Department of Civil and Environmental Engineering
and first author of the paper. The team theorized that a nationwide hydrometeorological database, which would include measurements like air temperature,
precipitation and stream flow rate, could be used to forecast DO concentrations in remote areas.
“There is a lot of hydrometeorological data available, and we wanted to see if there was enough correlation, even indirectly, to make a prediction and help fill in
the river water chemistry data gaps,” Zhi said.
The model was created through an AI framework known as a Long Short-Term Memory (LSTM) network, an approach used to model natural “storage and
release” systems, according to Chaopeng Shen, associate professor of civil and environmental engineering at Penn State.
“Think of it like a box,” Shen said. “It can take in water and store it in a tank at certain rates, while on the other side releasing it at different rates, and each of
those rates are determined by the training. We have used it in the past to model soil moisture, rain flow, water temperature and now, DO.”
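The “storage and release” analogy can be made concrete with a minimal, single-unit LSTM cell step in plain Python. This is an illustrative sketch only, not the team’s actual model (which is a full LSTM network trained on hydrometeorological inputs); the weights here are arbitrary placeholder values.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_cell_step(x, h_prev, c_prev, w):
    """One step of a single-unit LSTM cell: gates decide how much of the
    stored cell state ("the tank") to keep, how much new input to take in,
    and how much to release as output."""
    f = sigmoid(w["wf"] * x + w["uf"] * h_prev + w["bf"])    # forget gate: retention rate
    i = sigmoid(w["wi"] * x + w["ui"] * h_prev + w["bi"])    # input gate: intake rate
    o = sigmoid(w["wo"] * x + w["uo"] * h_prev + w["bo"])    # output gate: release rate
    g = math.tanh(w["wg"] * x + w["ug"] * h_prev + w["bg"])  # candidate new content
    c = f * c_prev + i * g        # updated cell state (the stored "water")
    h = o * math.tanh(c)          # released output
    return h, c

# Run a short input sequence through the cell with fixed illustrative weights;
# training would normally set these to fit the observed DO dynamics.
weights = {k: 0.5 for k in ("wf", "uf", "bf", "wi", "ui", "bi",
                            "wo", "uo", "bo", "wg", "ug", "bg")}
h, c = 0.0, 0.0
for x in [1.0, 0.5, -0.3]:
    h, c = lstm_cell_step(x, h, c, weights)
```

In a real network many such units run in parallel, and the gate weights are learned from the training data rather than fixed by hand.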
The researchers received data from the Catchment Attributes and Meteorology for Large-sample Studies (CAMELS) hydrology database, which included a recent
addition of river water chemistry data from 1980 to 2014 for minimally disturbed watersheds. Of the 505 watersheds included in the “CAMELS-chem” data set,
the team found 236 with the needed minimum of ten DO concentration measurements in the 35-year span.
To train the LSTM network and create a model, they used watershed data from 1980 to 2000, including DO concentrations, daily hydrometeorological
measurements and watershed attributes like topography, land cover and vegetation.
According to Zhi, the team then tested the model’s accuracy against the remaining DO data from 2001 to 2014, finding that the model had generally learned
the dynamics of DO solubility, including how dissolved oxygen decreases at warmer water temperatures and higher elevations. It also proved to have strong
predictive capability in almost three-quarters of test cases.
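The watershed selection and train/test design described above can be sketched as follows; the helper names are hypothetical, and only the cut-off dates come from the article.

```python
from datetime import date

def temporal_split(observations, train_end=date(2000, 12, 31)):
    """Split (date, DO) observations into a 1980-2000 training set and a
    2001-2014 test set, mirroring the paper's evaluation design."""
    train = [(d, v) for d, v in observations if d <= train_end]
    test = [(d, v) for d, v in observations if d > train_end]
    return train, test

def usable(watersheds, min_samples=10):
    """Keep only watersheds with at least `min_samples` DO measurements,
    as the team did when selecting 236 of the 505 CAMELS-chem watersheds."""
    return {wid: obs for wid, obs in watersheds.items() if len(obs) >= min_samples}

obs = [(date(1995, 6, 1), 9.1), (date(2005, 6, 1), 8.4), (date(2012, 6, 1), 7.9)]
train, test = temporal_split(obs)
```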
“It is a really strong tool,” Zhi said. “It surprised us to see how well the model learned DO dynamics across many different watershed conditions on a continental
scale.”
He added that the model performed best in areas with steadier DO levels and stable water flow conditions, but more data would be needed to improve
forecasting capabilities for watersheds with higher DO and streamflow variability.
“If we can collect more samples that capture the high peaks and low troughs of DO levels, we will be able to reflect that in the training process and improve
performance in the future,” Zhi said.
Penn State researchers Dapeng Feng, doctoral candidate in environmental engineering, and Wen-Ping Tsai, postdoctoral researcher in the Department of Civil
and Environmental Engineering, and University of Nevada, Reno researchers Adrian Harpold, associate professor of mountain ecohydrology, and Gary Sterle,
graduate research assistant in hydrological sciences, also contributed to the project.
A seed grant from Penn State’s Institute of Computation and Data Science, the U.S. Department of Energy Subsurface Biogeochemical Research program, and
the National Science Foundation supported this research.
Meteor Communications Innovates Water Quality Monitoring
Market
For decades, anyone needing to monitor water quality would purchase equipment to measure the parameters of interest. Today, an innovative, rapidly growing
company, Meteor Communications, has challenged that model with their ‘Water Quality as a Service’ (WQaaS) solution. “Ultimately, people monitor water
quality because they need data,” explains Meteor’s MD Matt Dibbs. “So, we would happily sell them water quality monitoring systems, but many of our
customers now prefer to just pay for the data - and let us manage the equipment.”
This radical approach has proved so popular with water companies, regulators and environmental consultants that hundreds of stations are now in the field,
delivering continuous, real-time water quality data. Matt says: “Our monitoring systems are ideal for providing real-time data from remote locations because
they operate on very low power and wirelessly connect with the MeteorCloud web portal, giving clients secure access to view and download their
own data.”
Working with water companies and government agencies, Meteor Communications developed the ESNET (Environmental Sensor NETwork) autonomous
water quality monitoring systems to allow rapid deployment with no requirement for pre-existing power or communication infrastructure. Modular and with
multiparameter capability as well as built-in communications, ESNET systems deliver robust, high resolution real-time water quality data within minutes of
deployment. The systems are available as a complete portable monitoring station or as part of a kiosk pumped system for semi-permanent or fixed installations.
ESNET enables the rapid creation of monitoring networks, which is a particular advantage in the monitoring of catchments because it allows water managers
to track the movement of water quality issues as they pass through a river system.
ESNET sondes are typically loaded with sensors for parameters such as dissolved oxygen, temperature, pH, conductivity, turbidity, ammonium, Blue Green
Algae and chlorophyll. However, it is also possible to include other water quality parameters as well as remote cameras, water level and flow, or meteorological
measurements. The addition of autosamplers enables the collection of samples for laboratory analysis; either at pre-set intervals and/or initiated by specific
alarm conditions. This is a particular advantage for water companies and regulators because it enables the immediate collection of samples in response to a
pollution incident, which informs mitigation measures and helps to identify the source of contamination.
Under a WQaaS agreement, Meteor Communications installs ESNET stations at the customers’ sites, measuring pre-specified parameters. Meteor is then
responsible for all aspects of the installation and retains ownership of the equipment. The provision of high-frequency (typically 15-minute interval) water
quality data is assured by daily online checks that the stations are performing correctly. In addition, regular site visits are conducted for service and maintenance,
including monthly visits to swap the water quality sondes with duplicates which have been calibrated at Meteor’s dedicated Water Quality Services Hub near
Basingstoke. “This ability to swap sondes is a vitally important feature of the service,” Matt explains. “By providing this service to all WQaaS customers there
is a major benefit of scale, because this has enabled us to establish a dedicated sonde service and calibration facility that is able to process large batches of
sondes quickly and effectively.”
The most important advantages are financial. With no capital costs, this model provides enormous flexibility for the users of the service because it means that
they only have to spend money on the data that they need. In addition, there are no equipment depreciation costs and no requirement for investment in the
resources that are necessary for ongoing service and calibration.
For many of Meteor’s customers, the main advantage is peace of mind, because continuity of data is usually vitally important. With staff from its Water Quality
Services Hub checking outstations every day, combined with regular site visits, users of the system can rest assured that uninterrupted monitoring will generate
a comprehensive dataset. On rare occasions, monitoring activities can be hampered by vandalism or even natural events, but the WQaaS system ensures
that such issues are detected immediately, so that appropriate action can be implemented quickly to protect the continuity of data. Risk reduction is also
an advantage, because purchased equipment can fail, resulting in a requirement for repairs or replacement parts, which may cause a loss of data continuity.
However, under the WQaaS scheme, Meteor is responsible for the system’s uptime, so spares for all of the ESNET’s modules are kept on standby as rapid
replacements. Where water quality monitoring is required for a specific project, the equipment can be tailored to meet precise needs, and at the end of the
project the monitoring equipment is simply removed. This is ideal for consultants or researchers bidding for projects with a monitoring element, because it
allows them to define the costs very accurately in advance.
Flexibility is the key benefit for water company users of the WQaaS model. Traditionally, final effluent water quality monitoring at wastewater treatment plants
is undertaken by fixed equipment installed with appropriate capital works. This means that mainly larger plants benefit from continuous monitoring, so the
major advantage of the ESNET systems is that they can be rapidly deployed at any site; delivering water quality insights later that same day. Then, once the
investigation is complete, the equipment can be easily moved to a different plant. Summarising Matt says: “This technology has been developed over many
years, and with hundreds of systems already in the field we have invested heavily in the resources that are necessary to support these networks. This means
that our customers do not need to make the same investment, which delivers efficiency and cost-saving benefits for everyone. We still sell ESNET systems to
those for whom ownership makes more sense, but for many others the advantages of WQaaS are significant, because when the monitoring stops, so does the
cost!”
For over 25 years, Meteor Communications has designed, built and installed remote environmental monitoring systems for global governmental, utility,
industrial, consulting and academic organisations. Innovation underpins the success of the company, and all products and solutions have been developed in
close cooperation with customers.
Meteor’s products provide real-time access to vitally important field data, with two main themes. Remote water quality monitoring stations measure background
levels, enabling trend analysis and the identification of pollution from diffuse and point sources. Remote, low-power, rugged cameras provide visualisation
of key assets such as construction sites, flood gates, weirs, flumes, screens, grills etc. Both the cameras and the water quality monitoring stations provide
immediate access to current conditions with alarm capability, which enables prompt remedial action, as well as the optimisation of maintenance activities.
Meteor Communications provides a wide range of off-the-shelf and bespoke monitoring solutions. Most can be deployed within minutes, are solar powered
and do not require significant infrastructure to run. Cloud-based data is accessed via secure login to the Meteor Communications data centre. This is achieved
using any web-enabled device and provides instant access to live and stored data, which includes an interactive graphical display. Meteor Communications has
a large installed base of remote monitoring stations, and the company’s turnover has increased five-fold in the last six years.
‘TOTEX’ is key when purchasing instrumentation
There’s a lot to be considered in the price tag of an ultrasonic instrument. Derek Moore from Siemens explains how the historical way of thinking only
of capital costs needs to change to the more holistic approach of total expenditures (TOTEX).
For any purchase, a prudent decision involves thorough analysis with the long term in mind. When buying a car, for example, we don’t just look at the price tag,
which only represents the initial capital cost. We also consider important operating costs like fuel efficiency, reliability and maintenance. All of these contribute
toward our understanding of the true total expenditure, or “TOTEX”, for the vehicle, and we make our purchase decision accordingly. The sticker price
might be higher on car A than car B, but car A might still be the better deal, because its long-term value could be greater when all of the operating costs over the full
driving life of both vehicles are taken into account.
It’s no different when purchasing an instrument for a water/wastewater facility. In addition to the initial capital cost there are a number of operating costs
that must be considered, but all too often these are overlooked. It starts with installing the devices: some instruments have a simpler and less costly installation
process than others. Then there’s maintenance, with a number of questions to address in assessing that cost. How often does production need to be shut down
for visual inspections and cleaning? How long must each shutdown last? And what does all that shutdown time and cleaning work add up to as a total cost of
lost operating time over many years?
It’s also important to consider the impact of energy costs to determine the true operating cost of an
instrument. Countries including Canada, the UK, Germany, South Africa and Australia charge different
rates according to the time of day or season in which energy is consumed; it can cost up to 80 per cent
more in peak periods than in low periods. Since the instrument needs to run at all times, the high-cost
periods are unavoidable. That’s where features such as those in Siemens’ ultrasonic controllers
can make a big difference to reduce operating costs. The SITRANS LUT430 (Level, Volume, Pump and
Flow Controller) and the SITRANS LUT440 (High-Accuracy Open Channel Monitor) both offer a full suite
of advanced controls so that in normal operation, the controller will turn pumps on once water reaches
the high-level set point, and then begin to pump down toward the low-level set point.
In economy pumping, the controller will pump wells down to their lowest level before the premium rate
period starts, which maximizes the well’s storage capacity. The controller then maintains a higher level
during the higher-cost tariff period by using the storage capacity of the collection network. Pumping in
this way ensures minimal energy use in peak tariff periods.
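The set-point and economy-pumping behaviour described above can be sketched as a small decision function. This is an illustrative reconstruction, not Siemens’ firmware; the function name, set points and tariff window are hypothetical.

```python
def pump_command(level, pumping, high_set, low_set, minutes_to_peak, pre_peak_window=60):
    """Return True if the wet-well pump should run.

    Normal duty: start at the high-level set point and pump down to the
    low-level set point. Economy duty: in the window just before the peak
    tariff starts, pump the well down regardless of level, so its storage
    capacity can be used during the expensive period.
    """
    if 0 <= minutes_to_peak <= pre_peak_window:
        return level > low_set        # pre-peak: empty the well
    if pumping:
        return level > low_set        # hysteresis: keep running down to the low set point
    return level >= high_set          # otherwise start only when the well is full
```

For example, with set points of 2.0 and 0.5, a half-full well would normally leave the pump off, but the same level 30 minutes before the peak tariff would start it.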
In addition, costs can be saved with these and other devices in the SITRANS LUT400 family through pumped volume and built-in data-logging capabilities.
In a closed collection network, it is inefficient and costly to pump rainwater entering the system from degraded pipes that are leaking. The SITRANS LUT400
calculates pumped volumes, which provides useful historical trending information for detecting abnormal increases of pumped water.
A range of Siemens products can bring TOTEX costs down significantly through reduced operational costs.
For example:
•	 All Siemens Echomax ultrasonic transducers are robust and have a self-cleaning face to avoid product build-up which reduces the need to
shut down production for cleaning.
•	 The Siemens HydroRanger 200 and Siemens SITRANS LUT400 have submergence detection, with an alarm triggered before the device is
fully submerged. Pumps can also be activated to attempt to lower the water level. This avoids the costs associated with an overfill.
•	 All Siemens level instruments have intelligent echo processing software that continuously adapts to changing environments and conditions
in the application. Thanks to sophisticated algorithms at the heart of this innovation, users can rely on accurate readings, so they avoid false
readings that lead to costly false alarms.
•	 The new Siemens SIMATIC RTU3030C is a cost-saving device designed for data communications at remote locations. It’s a compact, energy-
self-sufficient Remote Terminal Unit (RTU) with optimized energy consumption, so it requires no external power source. Because it is
battery operated, and works with any Siemens ultrasonic device, no costly trips are needed to remote places to check on instrumentation,
with everything handled from the control centre.
All Siemens instruments can be connected via SIMATIC or other communication protocols, meaning all the needed information is in one place, delivering cost-
saving efficiency to the entire operation.
To put all of these operational savings into full TOTEX perspective, consider a direct comparison between a given ultrasonic device purchased from the fictitious
Zebra company and one bought from Siemens. The two devices might both have the same purchase price, but the self-cleaning face on the Siemens device alone
has a huge impact when looked at across 100 units in your operation over the course of a 15-year lifespan for each instrument. Assume that cleaning feature
saves just $100 per year per device. Over the course of 15 years for 100 devices, that’s a difference of $150,000 in TOTEX. It’s just one simple example to show
how a capital cost is only one part of the equation in understanding the true total cost of an instrument.
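The arithmetic behind that comparison is simple enough to express directly (the helper name is hypothetical):

```python
def lifetime_saving(saving_per_device_per_year, devices, years):
    """Operating-cost difference accumulated across a fleet over its life."""
    return saving_per_device_per_year * devices * years

# The article's example: $100 saved per device per year, 100 devices, 15 years.
total = lifetime_saving(100, devices=100, years=15)  # $150,000
```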
The SIMATIC RTU3030C makes it possible to measure
levels anywhere in the world while providing
all the data to your centralized control centre.
Case Study:
A brand new rising main
monitoring programme
Following successful experience of monitoring pressure transients on water distribution networks, Anglian Water and Syrinix looked to see if there was an opportunity
to transfer this knowledge to wastewater rising mains. Across the UK water industry, monitoring of rising mains is limited, and a partnership approach to analysing
and developing the data is even less common. Pressure monitoring offered a whole new angle on providing data which could inform and influence
working practice and ultimately be beneficial from both an environmental and a cost-saving perspective.
The key driver for Anglian Water was first to explore the capability of pressure monitoring to identify bursts on rising mains.
A burst on a wastewater network, and the pollution it causes, has significant consequences for both customers and the environment. Increasing resilience
by improving visibility and reducing the time to respond is therefore imperative, with obvious all-round benefits. Secondly, there was the issue of
generally having a better understanding about the state of assets. Could Anglian Water get more out of existing assets? Could they last longer? A richer data set
and the information it provided would lead to smarter investment decisions on assets and ultimately a reduction in the need to deliver huge capital solutions
(like mains replacement). Thirdly, there was an efficiency point, taking into account both financial and operational efficiency.
The identification of failing assets (non-return valves, air valves, degrading pumps and, in general, assets which start to cost more than they should)
restores some efficiency and by default aids carbon and energy reduction, all of which are key priorities within the next AMP cycle.
Finally, geography plays its part and the topographical make-up of the Anglian Water area means before the sensors were installed, bursts could occur in rural
areas and remote farmland that could lead to a catastrophic pollution. Having the technology enables AW to manage those areas better and hence it reduces
the overall impact on the environment.
How the partnership works
The rising main monitoring and analysis service from Syrinix combines a high-resolution pressure sensor, deployed at a pumping station outlet, with
diagnostic tools that analyse the retrieved network data to assess the rising main system’s operation and performance.
PIPEMINDER, Syrinix’s high-resolution pressure monitor, collects and analyses 128 samples per second and provides one-minute summary data intervals. The
rechargeable, battery-powered system combines 3G communications and an external digital pressure sensor in a rugged IP68 enclosure, making it an ideal
solution for deployment on rising mains. The data is sent every 6 hours to RADAR, Syrinix’s cloud-based platform, where it is analysed against set performance
parameters, determining the system operating state. This enables the identification of asset issues such as blockages, sticking/passing non-return valves, worn
pumps and burst mains. Syrinix provide a monitoring service to the water company and have automated alerts for burst main identification, which can be
integrated with existing software.
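As an illustration of how 128 samples per second might be reduced to one-minute summary intervals before transmission (a sketch only; the helper name is hypothetical and Syrinix’s actual summary statistics are not specified here):

```python
def minute_summary(samples):
    """Reduce one minute of raw pressure samples (nominally 128 per second,
    about 7,680 per minute) to a compact min/mean/max summary suitable for
    low-bandwidth transmission."""
    return {"min": min(samples), "max": max(samples),
            "mean": sum(samples) / len(samples)}

# A short illustrative window of pressure readings in bar.
summary = minute_summary([4.40, 4.42, 4.38, 4.41])
```

Keeping the extremes alongside the mean matters here, because transient spikes that would vanish in a plain average are exactly what a pressure monitor is looking for.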
This project has been truly ground-breaking in its ability to deliver early value to a wide range of stakeholders: from asset planners, who can now look at the effects
of design on rising main performance and therefore inform future standards, through to operational teams, who can now do more fault diagnostics
remotely, such as identifying poorly performing NRVs which previously may have gone undetected until costs increased or performance was significantly impeded.
Anglian Water installed over 120 Syrinix PIPEMINDER devices
As the project began to take form, several objectives were defined which took the scope beyond a simple alert system to a more sophisticated performance and
diagnostic tool.
These objectives included:
•	 Asset performance monitoring
•	 NRV operation
•	 Rising main failure/ burst
•	 Air valve operation
•	 Asset condition monitoring
•	 Rising main deterioration
•	 Pump efficiency
•	 Impact of pump operation on rising main life
For the solution to be effectively utilised as business as usual (BAU), the following elements regarding integration were also considered:
•	 Business/ stakeholder buy-in
•	 Education and learning
•	 Integration with existing systems
•	 Dashboards
•	 Effective presentation of data
•	 Contextualised information
•	 Intuitive GUI
Initially sensors were placed on poor performing assets which were known to be at higher risk of failure. The intention behind monitoring these assets was to
generate reference data which would allow an understanding of the patterns associated with bursts and poor performance. In doing so Anglian Water would be
able to roll out a broader programme of ‘condition monitoring’ to proactively monitor for events and deteriorating performance, with greater confidence in the
accuracy of the intelligence generated. Via these analysis techniques it became possible to spot blockages and sticking non-return valves, giving predictive
capabilities to asset owners and the proven ability to address under-performance before failure, which gave the project real commercial value.
Data collected from PIPEMINDER devices deployed at the pump station outlet is used in Syrinix’s patent-pending technique to analyse the operation of the
complete pipeline system. This translates into a visual representation of what good, bad and indifferent performance looks like, ultimately meaning Syrinix can
advise on how a system is currently operating compared with what optimum performance should be.
By analysing the one-minute summary data stream, the method extracts the number of minutes spent in each of the following states:
•	 Low static head
•	 Normal static head
•	 Low delivery pressure
•	 Normal delivery pressure
•	 High delivery pressure
•	 Excessive transient.
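A minimal sketch of how each one-minute summary might be binned into those zones follows. The threshold values and function names are hypothetical (the real system derives its performance parameters per pipeline), and transient counting is omitted for brevity.

```python
from collections import Counter

def classify_minute(pressure, pump_running, low_head=1.5, low_del=3.5, high_del=5.0):
    """Bin a one-minute pressure summary into an operating zone.
    When the pump is off, the reading is a static head; when it is
    running, a delivery pressure. Thresholds are illustrative values in bar."""
    if not pump_running:
        return "low static head" if pressure < low_head else "normal static head"
    if pressure < low_del:
        return "low delivery pressure"
    if pressure > high_del:
        return "high delivery pressure"
    return "normal delivery pressure"

# Tally a day's worth of (pressure, pump_running) one-minute summaries by zone.
day = [(2.1, False), (4.4, True), (2.6, True), (5.3, True)]
zone_minutes = Counter(classify_minute(p, running) for p, running in day)
```

Minutes accumulating in the abnormal zones are what drives the state counters and alerts described below.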
A burst alert is raised in the operational control centre, meaning the time to respond to asset failure has
been significantly reduced.
By analysing the time spent in the other zones it became possible to determine system operating states
such as ragging blockages and sticking non-return valves.
This data, tracked over time (figure 3), shows system issues raised in red; a state counter is used to indicate
the asset issue.
An alert is then raised in the Operational control centre so a review and response to the failure can be planned.
This better use of data gives a complete understanding of system performance which can then feed a predictive
maintenance plan.
Working Examples of how monitoring has
made true monetary savings.
Example one – In May 2019, early detection of a burst
rising main meant a repair bill of £1,100, as opposed to
the £25k repair bill received in December 2018, prior
to the burst alert.
Early detection meant Anglian Water could minimise
the impact on the environment, whilst lessening any
impact on customers and on the company’s reputation.
The sensor placement of PIPEMINDER on a gravity wastewater network
A graphical representation of these zones
Example two – Data from RADAR overlaid with hydraulic analysis showed a series of examples where non-return valves (NRVs) were draining back. This
information could potentially save over £1k a year simply by unblocking NRVs.
Drain-down occurs when the non-return valve which follows a pump (two of which can be seen to the left of the PIPEMINDER-ONE monitoring device) does
not close fully and allows some of the fluid to pass. This can be seen quite clearly in the next zone plot: the static head begins to fall as the fluid
begins to flow back into the well.
This means that the well will begin to fill not only from its source but also from the rising main itself. Subsequently, money is wasted in not only pumping this
fluid back up the pipe but also pumping more often as the well fills more quickly.
Using the zone plot, Syrinix has the capability to alert when a rising main is draining down by looking for the presence of the highlighted section. This can
indicate when the NRV needs maintenance and help save money in the long run.
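The drain-down signature, a static head that falls while the pump is off, can be sketched as a simple check. The function name and threshold are hypothetical, not Syrinix’s published method.

```python
def draining_down(static_heads, min_drop=0.2):
    """Flag a possibly passing NRV. While the pump is off, the static head
    should hold roughly steady; a sustained fall suggests fluid flowing
    back into the well. `static_heads` is a sequence of readings (bar)
    from one pump-off period; `min_drop` is an illustrative alert threshold."""
    return len(static_heads) >= 2 and (static_heads[0] - static_heads[-1]) >= min_drop
```

A healthy NRV holds the head roughly constant between pump runs, so `draining_down([2.0, 2.0, 1.95])` stays quiet, while a falling trace such as `[2.0, 1.9, 1.6]` raises the flag.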
A zone plot of the event – delivery pressure dropped from ~4.4 bar to ~2.5 bar, which tripped the zone alarm. Time between first sign of failure and repair: 47.4 hours.
Example 3 – By using the extracted data, Anglian Water have been able to reduce the burst frequency on a site from 19 bursts in 2018 to 0 in 2019. Below are
some examples chosen randomly from the site:
Date             Pump surge    Pump stop to return to steady static pressure
December 2018    1.233         6.15 minutes
April 2019       0.712         3.87 minutes
July 2019        0.312         2.63 minutes
November 2019    0.313         2.71 minutes
The above data shows that over time the aggressiveness of the pump operation has been reduced: the overpressure due to pump surge, and the oscillations
after the pump has stopped, have both decreased significantly.
Anglian Water Optimisation Strategy Manager Rebecca Harrison said: “This new level of monitoring has allowed Anglian Water to deploy strategies aimed at
extending the life of the rising main (such as soft starts on pumps and improved air valve maintenance) which early results suggested will allow deferral of capital
investment by extending asset life. This enhanced understanding of performance also provides an essential targeting tool for the Optimisation team.”
Mueller, Ferguson Waterworks deliver LoRaWAN® Class B Nodes
with AMI system
Mueller and Ferguson Waterworks have announced the successful deployment of the industry’s first LoRaWAN® Class B endpoints. The Town of Florence located
in Central Pinal County, Ariz., is the first water utility to benefit from this technology advancement. LoRaWAN Class B endpoints provide flexibility to scale
network coverage and integrate into remote disconnect meters (RDM), leak detection and pressure monitoring systems – unlocking greater network efficiency
and improving data granularity.
“The deployment of smart meters is accelerating our journey toward digital transformation and the foundation required to build out our smart city grid,” said
Brent Billingsley, Town Manager of the Town of Florence. “We are confident that this open source network will provide new operational efficiencies, enhanced
service opportunities and additional revenue streams.”
Delivered by Mueller Systems, the Mi.Net® node, implemented with LoRaWAN Class B specifications, is a bi-directional endpoint capable of transmitting secure
data to and from a network server within seconds, as opposed to hours with a Class A endpoint. At this unprecedented speed of communication, on-demand
reads can be commanded and delivered without delay, providing real-time data to customer service and operations to identify and resolve outages quicker than
before.
“It is encouraging to see more cities and water utilities like the Town of Florence at the forefront of the Industrial Internet of Things (IIoT) revolution,” said Kenji
Takeuchi, Senior Vice President, Technology Solutions at Mueller. “We understand that municipalities are facing challenges on many fronts. Our technology
solutions can help drive a better focus on utility spending and return on investment, while helping them operate more efficiently.”
By deploying Mi.Net® LoRaWAN Class B endpoints, the Town of Florence can simply pair them with Mueller Systems’ model 420 RDM to allow water meters to
be turned on or off without the need for truck rolls.
Each LoRa-based endpoint maintains the data in its non-volatile onboard memory and communicates with the Mueller Mi.Net® Advanced Metering Infrastructure
(AMI) system. This helps to ensure water utilities are protected against any single point of failure. Alerts such as leak detection, no flow, low flow, and register
tampering are monitored 24/7 by the Mueller Network Operations Center to provide an added layer of security.
Article:
The Smart Water Industry
is no longer a choice...it’s a must
Whatever you call it, be it Smart Water, Water 4.0 or even Digital Transformation, the world of the water industry is changing, and the evolution of a “Smart”
water industry is no longer a choice; it is simply going to happen. This was the fundamental undertone of this year’s WWT Smart Water Networks Conference
and the WEX Global conference earlier this month. For the Smart Water industry it is no longer a case of “if” but of “when.” This is all very well, but “what is
the Smart Water Industry?” was one of the questions asked during the conference sessions... do we have a definition for it? Well, if you look to “Industry 4.0,”
the definition is that of cyber-physical systems. Applying this to the water industry is challenging because, unlike a “Smart Factory” bounded by walls, the
water system is disparate and much more open in structure and form. It is nevertheless a system, or, as Andrew Welsh of Xylem described it, a series of
snapshots of a system that, when brought together, make a whole.
So, in the context of the Water Industry what is “Smart?”
For me, at least, it is bringing together all of the data that we collect to give the industry, at least operationally, “situational awareness”: knowing what is
going on within the operational framework in order to make informed decisions. This can be operational, on a relatively short time-scale; it can refer to the
customer, by giving them the right data to enable them to make decisions; or it can concern the performance of assets or resources over a much longer term,
enabling strategic and planning decisions. This is the fundamental heart of what a “Smart Water Industry” is to me, and in order to get there we must work
out what information the industry requires at a stakeholder level, whether that be the customer, the CEO of the company or the operator on the ground.
The informational needs are all different and may even differ from company to company or region to region, but the fundamental principle is the same.
Where are we and how do we get there?
The discussions have been going on for years and there are some great case studies out there, certainly on a “Smart City” approach. Eva Martinez Diaz from FCC Aqualia gave us some great examples of the “factory” approach to the water industry and the work that the innovation teams there have been doing, including the development of biofuel from algae at Chiclana in Southern Spain, where wastewater is used to grow algae which is then digested to create fuel. This is not within the definition of “Smart” per se, but it certainly takes the principles of the circular economy and the “Factory” approach that was proposed by STOWA so many moons ago in their report on the wastewater treatment works of 2030. Closer to a “Smart” approach is the work that has been done in San Ferran, where the move from manual to automatic meter readings meant that the number of readings sky-rocketed from approximately 9,000 in 2016 to over 2 million the next year. As San Ferran is an island where water resources are stretched, this gave a visibility of unaccounted-for water that meant that the resources could be managed. The project had a clear need and it made sense to take this approach. Where water resources are short it makes obvious sense to adopt the technologies that enable the water industry to take this route. In the UK at least, the report by Sir James Bevan has highlighted an obvious need.
In the UK at least there is an obvious need, certainly around water resources, meaning careful monitoring of what resources we have in the environment but also protecting what we produce through the management of non-revenue water. The need is there, but is the technology? Within the conference this led us to the first poll of the day, with the question shown in figure 1.
The answer that came out of the poll was “cautious” but “interested”, which is a wholly understandable position. Right now the industry is awash with technologies, techniques and various “as a service” offerings, all the way from data to software and the like. It is very difficult to navigate through all of these offerings and it is also very easy to think of a lot of them as “widgets”. One of the reasons for this cautiousness was highlighted in the last poll of the day, which asked the question shown in figure 2.
The biggest barrier? What we already have in place: the legacy systems that have served their purpose over the years but no longer suit the needs of the industry. Of course, replacing legacy systems takes time and a lot of investment, and it does not always stack up financially.
What was interesting to see at this year’s conference is that the technologies to do what we need to do are already present. Various technologies and services are available to the industry. Some neatly address the industry’s needs, such as the chemical inventory and dosing optimisation systems presented by Roderick Abinet of Kemira and Christopher Steele of Black & Veatch.
The key to “Smart” is collaboration....and of course data......
“Smart” is not something that we can deliver in isolation though, and this was demonstrated in many different ways at this year’s conference, with Martin Jackson of Northumbrian Water Group talking about their development journey, the challenges and enablers that they have seen, and the important areas that they’ve looked at, including:
1.	Data Science - Yes, there are the basics, but it is also about leveraging the company’s expertise in different areas by developing those within the business. They did this with a hackathon approach and have created a culture where data is trusted to drive leading performance. It’s not there yet, but it is getting there
2.	Artificial Intelligence - They have looked to bring an AI approach into the business by having an in-house data architect. The focus has been on customer services, as this is an easy-win area where the sheer volume of calls means that a human simply can’t handle it
3.	User Experience - They used an out-of-the-box application rather than a bespoke service. An example of this is using Alexa to interact with the customer. They have also developed a game approach for educational purposes: essentially, using tailored applications
4.	Smart Technologies - There is a balance between new sensors and technologies and the existing estate. It’s about outcomes rather than installing a new widget

Figure 1: How would you sum up the water industry’s attitude to smart technology?
Figure 2: What are the biggest barriers to realising the benefits of smart water networks?
There are enablers out there, with cloud storage prices coming down to the point that companies can use it at scale. Data storage, while not exactly free, is at least priced very reasonably. Cyber security is always going to be a risk, but this can be limited to the data that needs to be secure (for example customer billing data). The now famous Northumbrian Water approach has been through a number of design sprints, hackathons and the like, encouraging others to get involved in an ever-developing landscape. In this, Northumbrian Water, although arguably amongst the most developed, are not on their own.
Welsh Water, in the form of Nial Grimes, presented their collaborative approach and the three small culture hacks that they have taken:
1.	Make a big small change - this was the approach at Welsh Water to hackathons and the like, which enabled going further with technology, faster than they’ve done before; true collaboration between different teams in the company, contractors and the supply chain; and finally a new way of working
2.	The power is in the team - or, if you prefer, in the members of staff within the organisation. With the removal of blockers and the enabling of good ideas a huge amount can be done, including the case study that was given of the development of an application for the capture of real-time data on wastewater treatment works
3.	Find the crazy person - there are some truly talented and passionate people within most businesses who have the great ideas and, if enabled, can develop something truly wonderful and get the company to follow it
What this goes to show is that whether collaboration is internal, external within the UK or outside of it, there are things that we can learn from each other across the water industry. There are barriers to this insofar as we are meant to be a competitive business, although it was argued by Trevor Bishop that innovation & collaboration are almost more important from an industry-wide strategic perspective.
A good case study of this was presented by Tertius Rust of South East Water, where they have leveraged the technical experts available and, working in collaboration with the supply chain, developed a smart network from the sensors in the ground, through the telecommunications systems and cloud services, all the way to a data lake which feeds the systems across the corporate estate and is itself also fed by data loggers in the field. The art of this is knowing what technologies to apply and where (like anything in the water industry), and in order to do this there is a requirement for a multi-disciplinary team which in reality stretches not only across a company but into its supply chain too.
When we look at what is happening around the globe there are numerous good case studies of what “smart” can look like, especially in the case of non-revenue water. To most people non-revenue water means leakage... in reality this is not always the case. Yes, in the main it is leakage, but it can also be things such as meter error, unmetered water use or even water theft from the distribution system. It is probably the most developed area where solutions are actually being developed in the water industry and used right now, all the way from instrument-based systems looking at high-pressure transients and acoustic loggers looking at leakage at step 1, to pressure management systems and event management systems at steps 2 & 3.
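The components above can be expressed as a simple, heavily simplified water balance, loosely in the spirit of the standard IWA approach: everything put into supply that is never billed counts as NRW, whether the cause is leakage, meter under-registration, unmetered use or theft. All volumes below are hypothetical, purely for illustration:

```python
# Illustrative non-revenue water (NRW) balance. NRW is everything put
# into supply that is never billed: real losses (leakage) plus apparent
# losses (meter error, theft) plus unbilled authorised use.
# All volumes below are hypothetical.

def nrw_percent(system_input, billed_consumption):
    """NRW as a percentage of the volume put into supply."""
    return 100.0 * (system_input - billed_consumption) / system_input

system_input = 10_000.0   # Ml/year put into the distribution system
billed = 7_700.0          # Ml/year billed to customers

print(f"NRW = {nrw_percent(system_input, billed):.1f}%")  # NRW = 23.0%
```

The 23% starting point here mirrors where a utility such as EPAL began its journey; pushing the billed fraction up (better metering) or losses down (leak repair) is what moves the number towards single figures.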
Some examples of this across the world come from Australia, where the CEO of Unitywater, George Theo, takes the approach that once a leak is costing more than it costs to repair then the situation is simple... it’s a case of just fixing it. In reality this “simple” approach is much more difficult, as (a) you need to know where the leak is and (b) how much it’s costing. This is where Unitywater has used an event management system approach. Probably one of the best examples that I’ve seen in recent years is that of the Lisbon-based water company, EPAL.
The journey that EPAL have gone through in the past 10 years saw the city drop from NRW levels of 23-24% to as low as 7% in only a few years. They took the approach of managing their data, converting it into information and using it to inform their water network asset replacement programme. However, as presented in the past few days, the path of their NRW has actually taken an up-turn in the past few years, as shown in a copy of their slide. Although the up-turn was slight in comparison to previous years, it was a definite worsening. The positive point was that this was recognised as a problem and the team within the company worked towards its resolution. The problem was traced to the trunk mains within the city and, working in collaboration, they instigated technological detection techniques using the WRc Sahara technique as well as the Xylem SmartBall technique to see where the problems were. This resulted in the discovery of a number of leakage points within the large trunk mains which, with intense planning, were fixed, and the unofficial figures for 2018 show that the NRW% has fallen. The work that EPAL originally did has been documented and is freely available for download from the EPAL website.
Figure 3: The case study of SE Water (from a presentation by Tertius Rust)
Figure 4: The non-revenue water journey of EPAL as presented by Andrew Donnely

Despite all of this, it is recognised that NRW is not just about leakage on the water company’s distribution system but can be about meter error too. It’s an approach that has been investigated on the supplier side by Z-Tech Control Systems, through their experience of meter installation problems, as well as by EPAL
on the water company side of things. In the UK at least it can come with a large financial cost associated with the Outcome Delivery Incentives, with a company that misses its leakage targets paying tens of thousands of pounds in penalties per 1,000 m³, or receiving tens of thousands of pounds per 1,000 m³ if it achieves them. It’s a similar approach in Denmark, where the penalty for a water utility that exceeds 9% NRW is a large tax bill associated with the inefficiency. As a result of this regulation HOFOR, the country’s largest utility, has an NRW of 6.7%, and the socio-cultural attitudes towards water mean that per capita consumption is around 100 litres per day. The Danes have achieved this through lots of hard work, but also through the first key take-away from most workshops and conferences at the moment: the need for collaboration in what we do.
Meter uncertainty is not just linked to large meters though, as demonstrated by the work that EPAL have done. In Portugal there is a legal obligation to replace water flow meters every 12 years. Using their technical specialisms, EPAL have analysed the performance of water meters and their potential for under-reading. If a meter under-reads then (a) the customer isn’t being charged for all of their use and, most importantly, (b) the NRW% looks higher than it is, with the difference in reality being meter error. Looking at this, EPAL have worked out that there is a benefit in replacing at least some of the flow meter stock earlier than the statutory 12 years, giving a return on investment in as little as 12-18 months.
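The economics here can be sketched with some simple arithmetic. The figures below are hypothetical (the article does not give EPAL's actual tariffs or meter costs), but they show how a modest under-reading on a large consumption can repay a meter swap within the 12-18 month window quoted:

```python
# Illustrative payback calculation for replacing an under-reading
# meter early. All figures are hypothetical, not EPAL's actual values.

def payback_months(annual_use_m3, under_read_fraction,
                   tariff_per_m3, replacement_cost):
    """Months for the recovered revenue to repay the meter swap."""
    recovered_per_year = annual_use_m3 * under_read_fraction * tariff_per_m3
    return 12.0 * replacement_cost / recovered_per_year

# A heavy user on a tired meter: 5,000 m3/year consumption, 6%
# under-reading, a 1.50/m3 tariff and a 450 supply-and-fit cost.
print(f"Payback: {payback_months(5_000, 0.06, 1.50, 450.0):.0f} months")
# Payback: 12 months
```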
If information, situational awareness and informed decision-making are all part of the “Smart” Water Industry, then at the root of all of this is data quality. To use the old phrase “Garbage In, Garbage Out”, first used by William Mellin in 1957: if we use data of poor quality then the whole fundamental basis of the Smart Water Industry will fail. If we want to use data analytics and machine learning then the concept that was first used 62 years ago has got to be understood. This is a constant theme that is heard all of the time at the moment, and it is true.
At the heart of this, operationally at least, is the application, installation and maintenance of the online instruments that we use. Figure 5 shows an extreme case of poor installation and maintenance, but it is a real-life example (although not from the UK).
The question has to be asked: where is the data that we are going to use coming from? A person on site looking at this can understand that the data from this particular installation cannot be relied upon, but a person who is remote from the local situation does not have the situational awareness to know that the data is wrong. Of course, nor will a machine. However, these are the obvious errors - what if the error is not so obvious and is down to something that can’t necessarily be seen, such as air getting into the pipeline or a local obstruction causing errors in the flow meter reading?
Analysis that experts at Z-Tech have done (figure 6) shows that in severe cases, due to air or other pipeline disturbances such as poorly fitted meters or meter fouling, the associated error can be in the tens of percent. In areas such as non-revenue water this can create huge apparent losses which in reality don’t exist and which could be resolved by utilising instrumentation expertise.
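One practical first line of defence against such hidden errors is a basic automated plausibility screen on incoming readings. The sketch below is illustrative only - the thresholds are assumptions that would need tuning per site and meter, and it is no substitute for the instrumentation expertise described above:

```python
# A minimal plausibility screen for flow meter readings - the kind of
# automated check that can catch errors before bad data reaches
# analytics. Thresholds here are illustrative assumptions only.

def flag_readings(readings, lo=0.0, hi=500.0, max_step=100.0):
    """Return (index, value, reason) for each suspect reading."""
    flags = []
    prev = None
    for i, v in enumerate(readings):
        if not (lo <= v <= hi):
            flags.append((i, v, "out of plausible range"))
            continue  # don't use an implausible value as the step baseline
        if prev is not None and abs(v - prev) > max_step:
            flags.append((i, v, "implausible step change"))
        else:
            prev = v  # only trusted readings become the new baseline
    return flags

series = [120.0, 118.0, -5.0, 121.0, 350.0, 122.0]  # l/s, hypothetical
for i, v, why in flag_readings(series):
    print(f"sample {i}: {v} l/s - {why}")
```

Here the negative reading and the sudden spike are both flagged, while normal variation passes through untouched.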
...the way to get there....through objectives
At the end of the day there has to be a point to the whole “smart water” concept, and that is to make the industry more efficient. The second poll of the day asked the question shown in figure 7. In the end it is always going to be about improving the service to the customer; in truth the answer to the poll question was “all of the below” and more besides, and the work that has been done in Denmark, Portugal and Australia are cases in point which enable a water company to operate at a leaner level through efficiencies in operation.
In general, there will be horizontal cross-company objectives in terms of increasing data quality through correct installation & maintenance (Level 2), more robust communications systems (Level 3), visualisation (Level 4) and analytics (Level 5), but also vertical segments such as NRW reduction, flooding & pollution and (water/wastewater) system optimisation, and it is in the vertical segments that the objectives and drivers lie.
Take-aways
So, what do we take away from this year’s WWT Smart Water Networks conference and the WEX Global Conference?
1.	Collaboration - As water companies, contractors and the supply chain, the technologies and the ways to deliver them are out there. Specialists will need to be used, especially in the current environment where technical skills can be a challenge. New skills will have to be developed as the industry shifts into areas which are not in the current “typical” skill set, and these can be co-opted as necessary. We should also be looking outside of the water industry, as there is a lot to be learned there.
2.	Data Quality - This goes back to the 1950s, the US Army Corps of Engineers and the infamous phrase “garbage in, garbage out.” If we are to build an industry based upon data we need to make sure that the data is right. Information based on wrong data will be wrong and the resultant analytics may come to the wrong conclusions. A machine cannot necessarily tell when data is wrong; it is a difficult job even for a person. This means we need more robust data sources, and we need to maintain them to ensure that they are right. This is essential moving forward, as otherwise we are building a philosophy on uncertain foundations. However, not only is data quality an issue but so is data availability. The value of data, its quality and its availability are all linked to its usefulness as information - this is a fundamental piece that I wrote about many years ago in what I termed at the time “the resistance to the effective use of information.” Basically put, if we value the information we will look after the data source.

Figure 5: Know where your data comes from?
Figure 6: Error analysis showing obvious installation problems
Figure 7: Where do you think smart technology could make the biggest difference to the water industry?
3.	Technology is currently available in one form or another to enable us to build a smart water industry. It may take a little help and a little trial and error to enable the technology with different people, but it is already in place. It is just a case of starting the journey.
4.	Skills - There are some areas where the skills exist, some areas where the skills are in short supply, and some areas where we don’t know what skills are going to be needed. Some can be developed in-house; some are specialist or can be delivered more effectively externally, either on an ad-hoc or a permanent basis. The people are available, but not necessarily from the traditional sources.
5.	The Smart Water Industry is a reality and is no longer a choice; it is something that we simply must do in the Water Industry moving forward to address the challenges that we face.
These, for me at least, were some of the key takeaway points from the day. The Smart Water Industry is certainly possible and in fact has been highlighted as a necessity. Now it’s up to us as a collaborative industry to deliver it.
There were some questions raised at the conference from the audience and these have been listed below to give some insight into where the industry is unsure
or needs further clarification in certain areas.
Questions
1.	 How are you going to feedback on the questions (un)answered -
2.	 Does the panel (everyone) have an agreed definition of “Smart Water”
3.	 Should we define smart networks - it should include treatment. Are we not considering data driven water to meet and exceed customer
outcomes -
4.	 Do you feel the Water utilities have a digital roadmap and know where they are going and at what pace?
5.	 BIM, Digital Twin, analytics and Big Data are all interlinked. However they seem to be looked at separately within utilities - how do we join
them up
6.	Are you using the meter read data in San Ferran to feed back to customers and drive consumption reduction, and if so how? Can we get a site visit to San Ferran - it looks beautiful
7.	There are a lot of people who want to help provide smarter solutions if only water companies provided access to their data. How can we solve this?
8.	 Could the partnership approach used in Yorkshire Water by Black & Veatch work on a wastewater network
9.	 When trying to implement smart systems into treatment works etc. do you see much resistance from the managers of those sites
10.	No one has mentioned the skills shortage we are facing to enable and fully realise this Smart Future. What are the proposals to bridge this
in time? Can AI help bridge this gap and is Northumbrian Water (or any water company) looking at this possibility? What level of training is
envisaged for end users i.e. the front-line operators at treatment works
11.	At a site tech level do bespoke controllers have a future as most are now integrating their smart “apps” into connected PLCs or Edge Devices
12.	Are the data analysts in the back office now as important as the engineers(/technicians) out front in order for smart networks to succeed?
13.	What are the ethical implications of deploying smart tech, automated condition monitoring and increasing AI
14.	How do we value data
15.	Can you change people or do you need to “change” the people?
16.	England won’t be able to meet demand within 25 years, is there a role for smart networks to help the entire UK sector and not the individual
companies
17.	Tier 2 is the creation of cognitive hydraulic model. In practice can this be the conversion of offline models with added sensor data or
something simpler.
18.	How far away are Northumbrian Water from the South East Water smart network system? How far ahead are South East Water compared to other companies?
19.	How do your smart systems incorporate feedback and learning to continue optimisation from base model?
20.	How as an industry are we connecting into the new innovation and data ethics committee set up by the government
21.	Trevor (Bishop) talked about OFWAT’s push for a systems based approach but only one company’s draft business plan seem to satisfy them.
What were the others missing?
22.	Michael (Strahand) talked about data storage being (virtually) free. How far are we with establishing secure data sharing systems given the growing threat of cyber-attacks?
Introduction – A history of the treatment works at Cookstown
The Wastewater Treatment works at Cookstown in Northern Ireland is a treatment works that has a long and extensive history. It was originally commissioned
in 1965 by the district’s local authority. Situated on the edge of the highly-respected Ballinderry River, the original works was designed to cater for an
equivalent population of 11,500. Within a relatively short period of the old works being commissioned (and following the establishment of Water Service
in 1973), it became apparent that the systems installed - although modern in their day - were not going to be able to deal effectively with the sewage from
the town as well as the surge in volume of effluent being produced from the area’s rapidly expanding pork industry. The trade effluent was extremely high in
strength due to the quantities of blood and fat associated with pig processing and was subsequently putting unprecedented pressure on the works.
By the 1980s Cookstown’s population had increased beyond 24,000, and while the existing works had been extended to cope with the growing domestic and trade pressures, it was clear by the mid-1990s that the sewage plant was operating well beyond its initial capacity. In addition, many of the tanks required unpleasant and labour-intensive operational procedures to maintain them, whilst other items of plant, such as the detritor, had become ineffective. Operational problems, such as blockages, were also frequently encountered.
Despite the processes being well maintained, the fact remained that the works was substantially overloaded both hydraulically and biologically. As a result, the
works had failed on a number of occasions to meet consent standards which meant that fines by the EC were imminent.
During the 1990s, extensive studies were carried out in relation to the building of a new sewage treatment works in Cookstown. The planning authority ruled
out the existing site for a bigger works on the grounds that it was too close to housing and that any development of the site would inhibit further residential
expansion in that area of the town.
Overall a total of seven sites were considered for the location of the new works with Environmental Impact Assessments drawn up for each option. An extensive
public consultation exercise was undertaken to present the various sites to key stakeholders but all options were deemed unacceptable.
Having exhausted all avenues, Water Service’s designers went back to looking in greater detail at ways in which they could overcome the constraints posed by
the existing works site.
The main problem with the site surrounded the restricted footprint that was available for introducing new infrastructure. However research showed that by
utilising more modern treatment processes, Water Service would be able to incorporate a new higher capacity works within a much smaller area. From an
environmental point of view, we knew that careful planting and screening of the new works would overcome any visual objections and that by introducing
robust odour control systems, the tightest of standards would be satisfied.
With this option offering the most economically advantageous option, Water Service proceeded with a design to replace the existing Cookstown WwTW with
a modern new plant on the same site. Five alternative treatment processes were economically and practically appraised for their construction within the
confines of the existing works site.
The most suitable option deemed for the new Cookstown works was a Sequential Batch Reactor (SBR) process - a compact-footprint plant which did not require a separate secondary settlement stage (an element that would take up additional valuable space on site).
Case Study: Optimisation of an SBR using Enhanced Control

Figure 1: Cookstown WwTW
Also, because the SBR process could be integrated into the existing works and operate without a short-term requirement for primary treatment, it eliminated the need for the provision of a significant temporary treatment plant.
In terms of whole life costs, the SBR option proved to be the most economically viable solution to produce high quality effluent.
Working within the confines of the existing site footprint, coupled with the need to keep the existing works live, was probably the biggest challenge facing the construction team. Logistically, the storing of materials also proved to be a significant problem, and while ‘just in time’ deliveries were scheduled as far as possible to maximise space, NI Water were keen to reuse as much of the excavated spoil as possible. To enable this to happen, stockpiles of rock and indigenous landscaping were created in the area just above the works itself.
Much of this existing material was used during phase one of the construction programme (building of the SBR tanks and the inlet works) when much of the
river improvement work was also undertaken.
River improvements
Prior to construction work getting underway, NI Water’s Engineering & Procurement team set up a special river improvement workshop to offer a common platform for all those with an interest in the river to come together to discuss their concerns and put forward ideas for enhancing the river quality and its long-term protection.
During the initial workshop, NI Water highlighted how the design of the works had been developed with cognisance of the adjacent Ballinderry River. To improve the conditions in the river and protect it from construction work in the short term, NI Water took the decision to carry out ancillary upgrades to the existing plant to temporarily raise the quality of the treatment process until the new works was brought on line and complied with current discharge consents.
The first meeting proved a most valuable exercise and from the outset of the scheme, provided a crucial stepping stone to building strategic links with some key
project stakeholders. The knowledge gleaned from the Ballinderry River Enhancement Association (BREA) was fundamental in introducing the most effective
river improvement methods to ensure minimal disturbance to the existing fish or invertebrate life.
To the delight of the NI Water team, their joint venture contractors for the new works wholeheartedly bought into the idea of improving the river. Ahead of
construction, all river banks were strengthened to prevent future erosion and a total of six weirs and groynes lying above and below the works were repaired
using indigenous stone. A boom downstream of the works was introduced so that any silt or debris from the working site was caught and removed and a
number of gravel spawning beds were introduced at agreed locations for the migrating fish such as salmon and dollaghan.
The timing of the works was also taken into account with all construction work in the river undertaken to coincide with the migration of fish.
Moving forward to today – Advanced ASP Control
More recently, the works at Cookstown was struggling to hydraulically treat all of the flows that it was receiving from the network, with the storm tanks regularly filling as the sequencing batch reactor cycles were proving insufficient to complete treatment before further flows arrived, and as such flows were passing to the storm tanks. In order to resolve this situation, a solution was sought to improve the works’ control using an advanced activated sludge control system from Strathkelvin Instruments: the ASP-CON.
The ASP-CON is a multi-parameter Activated Sludge Plant controller designed to measure up to 20 key Activated Sludge Plant parameters that are used to control the Activated Sludge Process. At its heart is a respirometer that measures the Oxygen Utilisation Rate and hence the health of the ASP process, but the multiple measurement techniques that it utilises allow a greater degree of control of the process (figure 2).
The ASP-Con system measures basic parameters such as Dissolved Oxygen, Ammonia, MLSS, pH & Temperature, additional parameters such as Potassium, Conductivity, Settlement and TSS (predicted), as well as advanced WwTW control parameters such as OUR and SOUR. With these parameters fed to the PLC there is complete control of the ASP system.
This unique access to all of the WwTW information allows the operational teams to decide how to deploy scarce operational resource. The in-situ ASP-Con eliminates the need for operators to go out on the plant to grab MLSS (Mixed Liquor Suspended Solids) and settlement samples. Depending on site size and layout this can save up to 2 hours of valuable time, while ensuring consistent sampling techniques and measurement practices. If an issue occurs the ASP-Con can be programmed to grab another sample, or to collect samples more frequently, regardless of the time of day, day of the week, holiday schedule or adverse weather conditions. The samples are then tested in-situ - avoiding the need to send them off to the lab and wait a week for results, not knowing how well the samples were stored and for how long before a lab technician was free to test them - so results are real-time.

The ASP-Con also cuts down the operator time required for routine probe cleaning. All the probes are on one instrument, which runs through a cleaning and calibration programme as dictated by the operations team. Cleaning is built in to the normal operating procedures of the instrument and can be altered if and when required by the site team. The demand on an operator’s time for the maintenance of numerous probes on a site is huge, and the fouling and ragging of “old generation” probes is a significant health and safety issue. The sheer physical effort at times required to lift some probes out of the treatment plant due to excessive ragging should not be under-estimated. In contrast, the ASP-Con’s self-cleaning regime eliminates ragging completely. The regular cleaning regime, automatically implemented, significantly reduces fouling, improving the accuracy, reliability and repeatability of measurements. Health & safety risks to operators in cold, wet and lone-working conditions are also significantly reduced.
Figure 2: ASP-CON System
What this meant at the wastewater treatment works at Cookstown was that the completion of the sequencing batch reactor cycles could be more accurately managed by using the ASP-Con system to measure when the biodegradable load (via the Oxygen Uptake Rate - OUR - and ammonium) has been completely removed during each aeration cycle. Once this has been confirmed as complete, the ASP-Con system takes a sample to measure the MLSS and then the SVI in each basin. The SBR control software for the basin is then stepped on to complete the settle and decant phases before being allowed to idle until the level in the anoxic basin requires the fill/aerate cycle to restart.
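The cycle-stepping logic described above can be sketched as a simple phase sequencer. This is an illustrative reconstruction, not Strathkelvin's actual implementation: the phase names, signal names and endpoint thresholds are all assumptions based on the description in the text:

```python
# Sketch of the SBR cycle-stepping logic: step on to settle/decant once
# the respirometer confirms the biodegradable load is removed, then
# idle until the anoxic basin refills. All thresholds and names are
# hypothetical, not the actual ASP-Con implementation.

OUR_ENDPOINT = 5.0    # mg O2/l/h - load considered removed below this
NH4_ENDPOINT = 1.0    # mg/l ammonium
REFILL_LEVEL = 80.0   # % anoxic basin level that restarts the cycle

def next_phase(phase, our, nh4, anoxic_level):
    """One scan of the basin's phase sequencer."""
    if phase == "FILL_AERATE":
        if our < OUR_ENDPOINT and nh4 < NH4_ENDPOINT:
            return "SETTLE"       # load removed (MLSS/SVI sampled here)
        return "FILL_AERATE"      # keep treating
    if phase == "SETTLE":
        return "DECANT"
    if phase == "DECANT":
        return "IDLE"
    if phase == "IDLE" and anoxic_level >= REFILL_LEVEL:
        return "FILL_AERATE"
    return phase

# Still treating (high OUR), then the endpoint is reached:
print(next_phase("FILL_AERATE", our=40.0, nh4=8.0, anoxic_level=30.0))
print(next_phase("FILL_AERATE", our=3.0, nh4=0.4, anoxic_level=30.0))
```

Ending the aeration phase on a measured endpoint, rather than a fixed timer, is what frees basin time for extra cycles while avoiding over-treatment.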
The SBR basins were optimised by:
•	 Ensuring biodegradable load is completely removed during each aeration cycle.
•	 Avoiding excessive energy consumption by avoiding overtreatment of wastewaters.
•	 Maximising hydraulic throughput by maximising treatment basin availability.
•	 Monitoring biological measures of performance to avoid long term issues.
This can be seen in figure 3:
What this meant, from a hydraulic point of view, was that the number of SBR cycles could be increased by decreasing the SBR cycle time, so that 12 fixed-volume cycles could be treated each week. This increased the hydraulic throughput of the plant by 50%, ensuring that spills to the storm tanks could be limited to genuine storm events and were not due to hydraulic overload of the treatment process.
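The 50% figure follows directly from the cycle counts: since each cycle treats a fixed volume, the stated uplift implies the works previously completed 8 cycles in the same period (an inferred baseline, as the article states only the result):

```python
# With a fixed volume per cycle, throughput scales with cycle count.
# The 8-cycle baseline is inferred: 12 cycles / 1.5 = 8.

def throughput_gain_percent(cycles_before, cycles_after):
    """Extra treated volume, assuming a fixed volume per cycle."""
    return 100.0 * (cycles_after - cycles_before) / cycles_before

print(f"{throughput_gain_percent(8, 12):.0f}% more hydraulic throughput")
# 50% more hydraulic throughput
```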
However, this was not the only benefit of the ASP-CON system at Cookstown, as the plant works on the principle of a Surge Anoxic Mix SBR. This has meant a large decrease in the amount of energy required to treat the wastewater to standard, as can be seen in figure 4. Over a one-month period there was a 50% reduction in the amount of energy consumed by the treatment process. All of these benefits also result in an overall increase in the stability of the treatment process.
Conclusions
By utilising advanced monitoring and control using the ASP-CON
system at Cookstown WwTW there has been a large improvement in
environmental quality by increasing the hydraulic capacity of the works and
decreasing the energy consumption. This is the double benefit that the water
industry is seeking: more is being achieved, quite literally, for less. This
sort of system is usually reserved for larger works where there is a larger
potential for savings. However, Cookstown WwTW, at a relatively small design
population of 24,000, shows that advanced control systems are viable on
treatment works a lot smaller than has been traditionally considered for
advanced ASP control systems. In a time where the water industry is looking
to deliver more for less the ASP-CON system gives the industry a potential
solution to realise the efficiencies that it needs to through instrumentation
and control.
Figure 3: Cookstown unoptimised (left) and optimised (right)
Figure 4: Energy Savings at Cookstown utilising the ASP-CON System
Page 22
WirelessHART® networks: 7 myths that cloud their consideration for process control
Misinformation about WirelessHART networks prevails among many instrument engineers in the process industries. This article attempts to set the record
straight by debunking 7 myths about these networks.
Myth 1: WirelessHART is unsafe
False. WirelessHART is safe. But why? A variety of tools make this so.
Encryption—A WirelessHART network always encrypts communications. The network uses a 128-bit AES encryption system (Advanced Encryption Standard)—a
standard in several fields of wired communication. The encryption cannot be disabled. The security manager in the WirelessHART gateway administers three
parameters. The parameters include:
•	 Network ID,
•	 Join key and
•	 Session key.
Integrating a WirelessHART transmitter into a network requires a network ID and join key. After these are entered, the transmitter first searches for the network
with the right ID. If it finds such a network, it sends a “Join Request” message with the key configured. The WirelessHART gateway checks the join key of the
transmitter. If correct, the network accepts the transmitter. A session key encrypts the communication. Every network subscriber gets a separate session key,
so the join key allows a device to be accepted into the network, but it does not decrypt the encrypted communication of the other subscribers.
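The join sequence just described can be sketched as follows. The key and ID values are made up for illustration, and the "session key" here is a plain string placeholder rather than real 128-bit AES material.

```python
# Illustrative sketch of the WirelessHART join sequence described above.
# IDs and keys are made-up values; real networks use 128-bit AES keys.

class Gateway:
    def __init__(self, network_id, join_key):
        self.network_id = network_id
        self.join_key = join_key
        self.session_keys = {}          # one session key per subscriber

    def handle_join_request(self, device_uid, network_id, join_key):
        # The gateway only answers requests carrying the right network ID.
        if network_id != self.network_id:
            return None
        # The join key is checked; a wrong key means the device is rejected.
        if join_key != self.join_key:
            return None
        # Every accepted subscriber gets its own session key, so knowing the
        # join key does not decrypt other subscribers' traffic.
        session_key = f"session-{device_uid}"   # placeholder, not real crypto
        self.session_keys[device_uid] = session_key
        return session_key

gw = Gateway(network_id=1342, join_key="secret-join-key")
assert gw.handle_join_request("dev-A", 1342, "secret-join-key") is not None
assert gw.handle_join_request("dev-B", 1342, "wrong-key") is None
```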
Access list—After completing commissioning, the acceptance of new network subscribers can be disabled. In this way, no new network subscriber can be
integrated into the network even if the network ID and the join key are correct. To integrate a new subscriber, this function can either be disabled or the UID
(Unique Identifier = unique device serial number) of the network subscriber can be entered manually into the gateway. A network subscriber that does not
appear in the subscriber list of the gateway is also ignored by the other network subscribers when messages are forwarded.
Join counter—If a WirelessHART transmitter is integrated into a network, it records this information in the so-called join counter. If the device is restarted and
if it joins the same network, its join counter is increased. Both the network subscriber and the gateway keep a join counter; neither can be read out. If a device
now tries to integrate into a network with a join counter that does not match the gateway, the gateway declines it. As a result, it is not possible to substitute one
device with another without this being noticed, even if both have the same UID.
Nonce counter—Each transmitted message carries a nonce counter. This is composed, among other things, of the UID and the number of messages sent by the
transmitter so far. Each message is marked uniquely by this mechanism. If a message is intercepted and resent later, it will be identified as outdated
and thus rejected. This technique obstructs any manipulation of the communication.
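The replay-rejection idea behind the nonce counter can be shown in a few lines. This is a simplification of the mechanism described above: it keeps only the highest counter seen per device and rejects anything not newer.

```python
# Sketch of nonce-counter replay protection: a message whose counter is not
# newer than the last one seen from that UID is treated as outdated.

class ReplayFilter:
    def __init__(self):
        self.last_counter = {}   # UID -> highest nonce counter seen

    def accept(self, uid, nonce_counter):
        if nonce_counter <= self.last_counter.get(uid, -1):
            return False         # replayed or outdated message: rejected
        self.last_counter[uid] = nonce_counter
        return True

f = ReplayFilter()
assert f.accept("dev-A", 1)
assert f.accept("dev-A", 2)
assert not f.accept("dev-A", 2)   # intercepted and resent later: rejected
```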
Modifying the network parameters—The network parameters, network ID and join key can only be changed by the gateway itself or at a WirelessHART
transmitter locally via a service interface or the display. No network subscriber or hacker in the network can modify this information.
Myth 2: WirelessHART networks are too expensive
Yes, WirelessHART devices are more expensive than wired HART devices. But, more importantly, how do costs for the overall communication investment
compare? WirelessHART devices are more expensive because:
•	 they contain ultra low power electronics to get long battery life
•	 they require measures to achieve explosion protection
•	 they use high frequency components.
But the whole solution must be considered, not just the devices. The solution involves engineering hours, labour hours and material.
Infrastructure for wired devices—The measurement signal of a new wired device usually must be connected to a PLC or DCS to use the data. This is either done
by the system’s local I/O, a remote I/O system or a fieldbus connection. While this is easy during a new installation (greenfield), it can pose a challenge for an
existing installation (brownfield). To add the new component, spare capacity must exist (free slots, channels, terminals). Another issue concerns
bringing the wires from the measurement to the I/O, requiring routing and protection of the device cabling, junction boxes, cable trays and glands, and all of
their accessories. All this infrastructure must be ordered, prepared and installed. Also an accessible location must be found. Otherwise this access must be
gained by other means, such as by setting up a scaffold tower.
Engineering and labour costs—Before all this, engineers must develop a plan involving where cables can run, which I/O makes sense, and how this work can be
executed. The documentation must be continuously updated to track the location of wires.
Hazardous areas—These areas further increase the difficulty and efforts compared with general purpose areas. Engineers must consider local conditions and
technical issues. An expert in explosion protection must verify the planned installation, including a secure power supply and zone separation.
Page 23
Wireless device break-even points—Of course, some planning and installation is also necessary for a WirelessHART network. The chief difference involves the
effort since only the WirelessHART gateway requires a powered installation. Local conditions will determine affordability. The WirelessHART devices can be
installed in whatever way optimizes the measurement. And separation of explosion zones happens by default since no physical connection exists between the
zones apart from the mechanics (e.g. a thermowell).
But how much could be saved? The wireless solution reaches break-even at the first installation of three or four WirelessHART devices plus one
gateway. For example, consider a well-known case: a monitored heat exchanger having two inputs and two outputs. The heat exchanger will need four
temperature transmitters. So assume:
•	 4 temperature transmitters,
•	 a distance of 100 meters between control room and the scheduled junction box and
•	 10 meters of cables between the junction box and each transmitter.
Realizing this solution will cost about US$20,000, where just 20% represents the cost of the temperature transmitters. In the wireless case, assume:
•	 4 temperature transmitters and
•	 a distance of 10 meters between control room and the WirelessHART gateway.
Realizing this solution will cost about US$15,000, where 80% represents the cost of the WirelessHART devices and the gateway.
So the wireless solution saves 25% compared with the wired one. And it will save even more time: this solution could be available in a quarter of the time. And
the next heat exchanger? Wired, it will cost an additional US$20,000. Wireless, it will just add the cost of the new WirelessHART devices, since the gateway is
already available.
While you could get three wireless solutions for the price of two wired solutions, you could get four wireless solutions in the same time as one wired solution!
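The break-even arithmetic above can be checked directly. The first-installation figures come from the article; the cost of each additional wireless exchanger is an assumption made for illustration, since the article only says it is the device cost without a new gateway.

```python
# Rough cost comparison using the article's figures. The per-exchanger cost
# for additional wireless installations is an assumed illustrative value.

WIRED_PER_EXCHANGER = 20_000   # US$, transmitters are only ~20% of this
WIRELESS_FIRST = 15_000        # US$, devices + gateway are ~80% of this
WIRELESS_ADDITIONAL = 9_000    # US$, assumed: 4 transmitters, no new gateway

def total_cost(n_exchangers, wired=True):
    """Total cost of instrumenting n monitored heat exchangers."""
    if wired:
        return n_exchangers * WIRED_PER_EXCHANGER
    return WIRELESS_FIRST + (n_exchangers - 1) * WIRELESS_ADDITIONAL

# First installation: the wireless solution already saves 25%.
saving = 1 - total_cost(1, wired=False) / total_cost(1, wired=True)
print(f"first exchanger saving: {saving:.0%}")
```

The saving then grows with every subsequent exchanger, because the gateway cost is only paid once.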
Myth 3: WirelessHART networks are unreliable
A communication link for process control, or even monitoring, must be reliable and available as needed. Everyone knows examples of communication
failing at just the wrong moment. So can wireless communication ever be reliable? Surprisingly, it can be more reliable than cable. This is achieved by using a time-
synchronized, frequency-hopping, meshed network.
Meshed network—As mentioned earlier, every network has a gateway that transforms the wireless data into wired data ready for a DCS or PLC. Most
wireless communication has a star architecture, meaning all network participants connect only to the star centre or head. WLAN and mobile phone
communication are prominent examples for a star topology. WirelessHART has a mesh, rather than a star, architecture. Within a meshed network the
participants are communicating with the gateway and additionally among one another. Furthermore, the wireless devices tell the gateway which other
participants they can communicate with.
Other wireless participants in range are called neighbours. The gateway analyses information about neighbours and creates a routing table. This table contains
the information about which network participant has which neighbours. As participants can reach each other, they can also route the data packets from and to
their neighbours. In this way, the gateway can create redundant communication paths for each network participant. Should one communication path fail, the
sender will automatically switch to a redundant path. Since each transmitted packet must be acknowledged by its receiver, it’s easy to recognize a broken link.
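The redundant-path idea can be sketched with a small graph search. This is a deliberately minimal illustration of the meshing principle described above, not the routing algorithm a real gateway runs.

```python
# Minimal sketch of the meshing idea: every device reports its neighbours,
# and a failed link simply means data reaches the gateway another way.

from collections import deque

def paths_exist(neighbours, src, dst, broken=frozenset()):
    """Breadth-first check that src can still reach dst, skipping broken links."""
    seen, queue = {src}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            return True
        for nxt in neighbours.get(node, []):
            link = frozenset((node, nxt))
            if nxt not in seen and link not in broken:
                seen.add(nxt)
                queue.append(nxt)
    return False

# A small mesh: A and B reach the gateway directly, C only via A or B.
mesh = {"A": ["GW", "B", "C"], "B": ["GW", "A", "C"],
        "C": ["A", "B"], "GW": ["A", "B"]}
assert paths_exist(mesh, "C", "GW")
# If the A-GW link fails, C still reaches the gateway through B.
assert paths_exist(mesh, "C", "GW", broken={frozenset(("A", "GW"))})
```

In a star topology, by contrast, the failure of a device's single link to the centre would cut that device off entirely.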
RSSI and path stability—The radio signal strength indicator (RSSI) indicates the quality of a communication link to the gateway. Knowing this, the gateway can
determine if enough reserve strength is available or if the signal level is already too low. Since the gateway gets the RSSI of each single communication link, it
can readily distinguish between high and low level signals. Additionally, the gateway counts the data packets lost during transmission for each link. By
comparing the total number of transmitted packets within a network, the gateway can recognize paths with high losses and retransmissions. It uses both
kinds of information to identify good or bad paths in a network. So the gateway now can pick the good paths that the network participants should use to
communicate.
FHSS and DSSS—To ensure reliability, WirelessHART makes use of two techniques: Frequency Hopping Spread Spectrum (FHSS) and Direct Sequence Spread
Spectrum (DSSS). WirelessHART is a frequency hopper in its 2.4 GHz band. After each transmission between two network participants, the radio channel
changes. Hopping across multiple frequencies is a proven way to sidestep interference and overcome RF challenges. Should a transmission be blocked, the
next transmission will be to an alternate participant on a different frequency. The result is simple but extremely resilient in the face of typical RF interference.
DSSS transmits more information than necessary: it sends eight bits for each single information bit. Every bit is encoded in such a way that the original bit can be
restored even if less than half of the eight bits are received correctly. This makes the communication more robust against short disturbances, and data does not need to
be re-transmitted, which saves time, bandwidth and energy.
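The redundancy idea can be illustrated with simple repetition and a majority vote. Note this is a simplification made for illustration: real DSSS spreads each bit with a pseudo-random chip sequence and recovers it by correlation, not by plain repetition.

```python
# The redundancy idea behind DSSS, illustrated with repetition plus a
# majority vote (a simplification of real chip-sequence correlation).

CHIPS_PER_BIT = 8

def spread(bits):
    """Expand each information bit into eight transmitted chips."""
    return [chip for bit in bits for chip in [bit] * CHIPS_PER_BIT]

def despread(chips):
    """Recover the information bits, tolerating some corrupted chips."""
    bits = []
    for i in range(0, len(chips), CHIPS_PER_BIT):
        group = chips[i:i + CHIPS_PER_BIT]
        # Majority vote: the bit survives even if some chips are corrupted.
        bits.append(1 if sum(group) > len(group) // 2 else 0)
    return bits

tx = spread([1, 0, 1])
tx[0] ^= 1           # corrupt a few chips in transit
tx[9] ^= 1
tx[17] ^= 1
assert despread(tx) == [1, 0, 1]   # restored without any retransmission
```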
Redundancy—Because each WirelessHART device can route data for other devices, it is possible to set up a network topology with redundant paths for each
network participant. Having at least three independent and good communication paths ensures a reliable communication with the gateway. The gateway can
determine all information concerning topology, network traffic, and quality of the communication paths.
Myth 4: The range of WirelessHART networks is too short
A common question concerns the maximum distance that can be covered by WirelessHART. Answers relating to surroundings and obstructions sometimes
confuse the issue. What range does a WirelessHART device actually need to achieve? The practical answer revolves around the network setup, bandwidth, and
repeaters.
Network setup—The ultimate aim of the network is to get the wireless data to a gateway that transforms it into wired data ready for a DCS or PLC. A properly
Page 24
setup WirelessHART network has at least three devices within range of each other, including the gateway. This ensures a reliable connection to the gateway.
In addition, the gateway should be located towards the middle of the network. Otherwise devices near the gateway become pinch points that shorten battery
life and risk network failure.
Following these recommendations for network setup should provide coverage of nearly 200 feet, even in a highly obstructed area. In reality coverage will
often extend to 300 feet. Large installations will involve more measuring points, which automatically expands the network coverage, as every new
WirelessHART device will route communication for other devices.
Frequency spectrum and bandwidth—To minimize power consumption, reduce the number of device transmissions to whatever is necessary to serve an
application. It’s important to keep the number of re-transmissions as low as possible, too. To avoid collisions, WirelessHART uses time-division multiple access.
This means each link has its unique time slot to communicate. If this link fails for some reason, transmission passes to another link.
WirelessHART uses the license-free 2.4GHz ISM band. This band can be used by any other application as well (Industrial, Scientific and Medical Band). So
WirelessHART must share its bandwidth with all other technologies working in the same band. And this will cause collisions and re-transmissions for each
device within the network since these different networks are not synchronized to each other (WLAN, Bluetooth etc.).
To keep the network reliable and stable, time slots for re-transmissions must be reserved even if rarely needed. Faster update rates of a device require more
time slots, and the total available network bandwidth decreases. In fact, an update rate of 1 second could easily limit one gateway to a maximum of 12
devices. As an alternative, operating two WirelessHART networks in parallel is possible, but this will also lead to collisions, reducing the
bandwidth of both networks. As opposed to one long-range network, two short-range networks covering different areas with only a small overlapping
area will increase both their stability and device battery lifetime.
Repeater or routing device—Sometimes a measuring point is too far away from a network to connect. This can be corrected by installing additional routing
devices. Any WirelessHART device will do, but the best fit is a device that is small, requires minimum effort to install, and provides an easily replaceable
battery.
Myth 5: WirelessHART devices constantly need new batteries
What would a wireless device that requires a power cord be? Not completely wireless, of course. So an independent and reliable power supply is mandatory.
Batteries can fulfil this requirement, but with the disadvantage of their finite energy. Dead batteries must, of course, be replaced to get a battery-powered
device running again. But how big is this disadvantage really?
ABB’s WirelessHART devices use an industrial-standard D-size primary cell. This cell was especially designed for extended operating life over a wide
temperature range of -55°C to +85°C to fulfil the requirements of process industries. But how much lifetime is achievable? It depends. Battery life is not
predictable as a hard fact. Rather it behaves like the fuel consumption of a car. Some need more, some need less, depending on acceleration and speed,
vehicle weight, and traffic.
To maximize battery life, ABB electronics have an ultra-low power design—less by a factor of 20 compared to a conventional 4-20 mA HART device. All
components have been chosen by their functionality and their current consumption. The design goal is to consume the minimum energy possible, including
software. For example, sub-circuits power down if not needed. So the sensor itself powers down between two measurements as well as the display. If the
update rate is slow enough, the device will fall into a “deep-sleep mode” between two measurements as often as possible.
The update rate is the user-defined interval at which a wireless device initiates a measurement and transmits the data to the gateway. The update rate has
the largest impact on battery life—the faster the update rate, the lower the battery life. This means the update rate must be as slow as possible, but still meet
the needs of the application. The update rate should be 3 to 4 times faster than the process time constant for monitoring and open loop
control applications, and 4 to 10 times faster for regulatory closed loop control and some types of supervisory control.
Special attention should be paid to update rates faster than four seconds. These faster rates will prevent the device from going into the deep-sleep mode.
They will consume much more power as well, impacting the total number of devices that can be handled by one gateway.
Burst command setup—All WirelessHART devices are able to burst up to three independent HART commands. Of course, the update rate of each command
can be set up separately. But as described before, the device tries to fall into deep-sleep mode as much as possible. By default, the update rates are set up
as multiples of each other, giving the device the best conditions to save as much energy as possible.
Network topology—Mesh-functionality can also influence the battery life since each device has routing capability. If one device acts as a parent for another
device and both devices are set up with the same burst configuration, the parent must transmit data twice as often as its child. The most power-saving network
topology has all devices within effective range of the gateway. While this is rarely possible, it’s more important to think about this before placing the gateway.
To extend battery life, the gateway should be placed more or less in the middle of a planned network. In this way, the devices acting as parents would be
equally distributed—not relying on only a few devices to route data.
Knowing all this about battery life, what can be expected? Taking all these energy saving recommendations into account and assuming the following:
•	 bursting one command
•	 having a direct communication path to the gateway
•	 having three child devices with the same update rate and
•	 using the device at 21°C.
Under these conditions the battery life could last up to
•	 5 years with an update rate of 8 seconds
•	 8 years with an update rate of 16 seconds and
•	 10 years with an update rate of 32 seconds.
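The relationship between update rate and battery life can be approximated with a simple drain model: a fixed sleep current plus a fixed charge per update. The constants below are illustrative fits chosen to roughly reproduce the figures above; they are not ABB datasheet values.

```python
# A simple battery model: constant sleep current plus a fixed charge per
# update. Constants are illustrative fits, not manufacturer data.

HOURS_PER_YEAR = 24 * 365

def battery_life_years(update_rate_s, capacity_mah=17_000,
                       sleep_ma=0.12, charge_per_update_mah=0.00055):
    """Approximate battery life for a given update rate in seconds."""
    updates_per_hour = 3600 / update_rate_s
    drain_ma = sleep_ma + updates_per_hour * charge_per_update_mah
    return capacity_mah / drain_ma / HOURS_PER_YEAR

for rate in (8, 16, 32):
    print(f"{rate:>2} s update rate: ~{battery_life_years(rate):.1f} years")
```

The model makes the main point visible: halving the update rate does not double the battery life, because the sleep current becomes the dominant drain at slow rates.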
Page 25
If a faster update rate is favoured or if the device has a key position for routing within the network, ABB’s Energy Harvester option would reliably relieve the
battery.
And last—but not least—ABB’s WirelessHART transmitters use standard batteries, making them easy to procure. This will not save battery life, but will save
money.
Myth 6: WirelessHART networks require specialists to set up
A lot of engineers think that setting up a wireless network can be an arduous and annoying job. Getting everything running, ensuring safe communication and
including all desired network participants can take much time. But is this true? What do we really need to do to get a WirelessHART network running?
The wireless elements of a WirelessHART network include:
•	 field devices connected to the process or plant equipment. Of course, they must all be WirelessHART capable.
•	 a gateway that enables communication between host applications and the field devices in the WirelessHART network.
•	 a set of network parameters: Network ID and Join Key.
That’s it. Now you can set up your network in a few steps:
Input of network parameters—To get the gateway into proper operation you must input the network parameters. This can be done easily via the
integrated web interface of the WirelessHART gateway; most gateways provide this convenient means of configuration. Now the network participants can join the network.
They also need the network parameters. The easiest way is to order them with the desired network parameters pre-configured; otherwise you must input the
parameters manually.
Since all WirelessHART devices provide a maintenance port, you can use the tools already available for wired HART devices; this avoids the need for additional
equipment. And they can be operated just like the wired HART devices. Additionally, ABB WirelessHART devices can be brought into operation just by using
their HMI. Again, you need not concern yourself with security because it’s built-in.
Update rate—All WirelessHART devices burst their measurement values. By default, all ABB WirelessHART devices burst HART command 9 every 16 seconds.
This includes the dynamic variables PV, SV, TV, QV (for devices with multiple outputs) with the status of each and the remaining battery lifetime. They burst
HART command 48 every 32 seconds—the additional device status information. So typically, you needn’t deal with the burst configuration. Nevertheless the
commands or the update rates can be changed as needed.
Placement of field devices and gateway—Start with the gateway installation first. Find a suitable place for it and power it up. As it is the connection between
host application and the WirelessHART network it will need a power supply and wired connection to the DCS. After the WirelessHART devices have been
prepared they now can be installed in the field.
Installation can be done in the same familiar way as for wired HART devices. But WirelessHART devices require less effort because they have no wires.
This is especially true in hazardous areas where nothing will cross the zones and no output device needs to be checked with its ex parameters against an input
device. After the devices are powered up they will appear in the network automatically. Everything else is handled by the gateway; a user does not need to take
care of meshing the net or which device communicates with which.
Myth 7: WirelessHART is too slow
When asked for the required speed to cover an application, a user will often answer: as fast as possible. The update rate for WirelessHART devices within a
network can be configured individually between once per second and once per hour. Is that fast enough for everything? Let’s look at a few considerations
before answering too quickly.
Usage—At first, examine the uses for which a WirelessHART network is actually intended: condition monitoring and process supervision. Remember, the
wireless sample/update rate should be:
- 3 to 4 times faster than the process time constant for condition monitoring and open loop control applications
- 4 to 10 times faster for regulatory closed loop control.
For measurements in the process industries today, more than 60% simply monitor conditions and are not used for control. So a WirelessHART update rate
of one second or slower may fit many of these applications. Of course, other factors may apply too.
Timing—For wired devices, update rates and timing aren’t often considered. Engineers and operators assume the values in the DCS are the real time values
from the process, achieved by oversampling. In fact, signals often are converted and scaled from the initial sensor element before reaching the DCS. So in a
traditional wired installation, the measurement values also have latencies. Instrument engineers are rarely aware of these, but just assume these values are
timely enough. In the world of WirelessHART, the data packets have time stamps that spell out how old a measurement value is. This indicator lets engineers
assess latencies and properly react to them.
Thinking differently—Instrument engineers must know how fast a process value can change for both control applications and condition monitoring. No
additional knowledge is needed for WirelessHART. For wired installations, this knowledge affects a DCS or PLC. For WirelessHART, it affects the planning of the
network. Because the bandwidth is a limited resource, engineers must consider how fast the update rate needs to be rather than how fast it could be.
Comparing speeds—The traditional FSK-HART loop provides a speed of 1200 bits per second. In practice, HART on RS-485 cable is limited to 38,400 bits per
second. WirelessHART provides a speed of 250,000 bits per second. This means WirelessHART is more than 200 times faster than wired HART and even six
times faster than HART over RS-485 cable. By allocating the “Fast Pipe” to a network participant, the wireless gateway provides a high-bandwidth connection
that is four times faster than normal. This is ideal for transmitting a large amount of data, such as up- and downloading a complete configuration.
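The speed ratios quoted above are easy to verify directly from the stated data rates:

```python
# The speed comparison from the text, checked with the quoted data rates.

FSK_HART_BPS = 1_200         # traditional FSK HART loop
RS485_HART_BPS = 38_400      # HART over RS-485 cable
WIRELESSHART_BPS = 250_000   # WirelessHART radio data rate

print(WIRELESSHART_BPS / FSK_HART_BPS)    # over 200x faster than FSK HART
print(WIRELESSHART_BPS / RS485_HART_BPS)  # over 6x faster than RS-485 HART
print(4 * WIRELESSHART_BPS)               # a 4x "Fast Pipe" link, in bits/s
```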
Page 26
Article:
Getting data quality wrong and
how to get it right
Introduction
The Water Industry has always been known to be data hungry; the data collected in the UK alone is estimated at hundreds
of millions of data points per day, ranging from customer billing data to the operational data used to monitor the performance of the industry on a
continuous basis. However, the question has to be asked: how much can we rely on this data on a day-to-day basis? This is especially the case given recent
articles in the trade press that have disputed the accuracy of things like Smart Meters.
In reality, the customer side of things can be relied upon: it is used for billing purposes, there are strict international standards around the manufacture of
flow monitoring equipment, and quality control checks, independent auditing and management systems ensure that things are accurate. The advent of Smart Meters
has suffered some teething problems in some areas of the world, but how much of this is people getting used to vast quantities of data, and how much of it
is the standard uncertainty normal to all measuring systems, is debatable. Despite all of this, the final quality control check is the customer,
who will rightly challenge where things aren’t right.
Putting this aspect of the Water Industry aside, it is on the operational side of the business that the use of instrumentation to collect data can have
interesting results in informing operations of their current state, especially on the wastewater side of the business where the challenges of
measurement are high.
This article discusses the consequences of poor quality data, and how to use standard calibration & verification techniques to ensure the quality of data on the
operational side of the business (with specific emphasis on wastewater).
Where can things go wrong?
Most operational members of staff have seen the obvious errors in measurement on site, especially when looking at SCADA or telemetry. The mimic that says
the final effluent of the treatment works is supposedly at 1000°C is amusing but certainly won’t be believed for a second. These obvious errors are
annoying, as the true data is not available, but they don’t have the potential to cause any damage: they are simply wrong. On the flip side are the innocuous
errors that could be right but in reality aren’t.
So what are the causes of poor quality data associated with online instrumentation?
•	 Poor selection of the instrument type - for example selecting a high range instrument for a low range application
•	 Poor installation - for example installing a flow meter with insufficient space
•	 Poor commissioning - for example measuring the wrong empty distance on a level based instrument
•	 Poor maintenance - not keeping the instrument clean, or not replacing consumable parts or reagents
•	 Telemetry errors - Not checking the scaling between the instrument & the telemetry system
These are probably the five most common problems. Unlike a physically impossible final effluent reading of 1000°C, the errors they produce are often
believable enough to escape suspicion. An example where a simple error in the calibration of a flow meter caused a surprising result is shown in
figure 1 below.
Figure 1 - A long term calibration error
Page 27
Figure 1 shows a flow meter reading over a period of 8 years which was subject to routine checking procedures. A change in the instrumentation that
performed the measurement, together with an error in the setup, meant that the flow meter was reading significantly higher than the true flow. As the flow meter
was still within its consented dry weather flow over that period, the error was not implausible enough to be disbelieved. It was only a
routine dip check by a particularly diligent member of the maintenance staff that picked up the error, and the correction was made. It was only afterwards,
looking at the long-term trend, that the error in flow measurement became particularly evident. This error was caused by a very minor error in the empty head
distance and, once realised, could be fixed in less than 5 minutes.
Figure 2 demonstrates another common cause of error in on-line instrumentation, telemetry scaling issues.
In figure 2 we see another example of flow-based measurement, but using a different technique. The error in this case is within the realms of believability and
could be typical of a particularly wet year. However, routine calibration of the meter itself showed that the scaling in the telemetry system differed significantly
from that on site. As a result, the meter in telemetry was reading approximately 2.5 times higher than the meter on site, making the site appear
non-compliant with its permitted limit.
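The scaling check implied here can be expressed as a simple comparison between the telemetry value and the on-site reading. The tolerance value is an illustrative assumption, not a regulatory figure.

```python
# Sketch of a telemetry scaling check: compare the value in telemetry with
# the value read locally at the instrument, and flag the pair if the ratio
# drifts outside an agreed tolerance. The 5% tolerance is illustrative.

def scaling_ok(local_reading, telemetry_reading, tolerance=0.05):
    """True if telemetry agrees with the on-site meter within tolerance."""
    if local_reading == 0:
        return telemetry_reading == 0
    ratio = telemetry_reading / local_reading
    return abs(ratio - 1.0) <= tolerance

assert scaling_ok(100.0, 102.0)       # within 5%: fine
assert not scaling_ok(100.0, 250.0)   # a 2.5x scaling error would be flagged
```

Run routinely, a check like this would have caught the figure 2 error long before it distorted the compliance picture.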
In both of these cases a false picture was created by online measurement, in this case of the flow passing through a wastewater treatment works.
Depending upon what is actually being measured and how that measurement is used, this could lead to anything from under-estimation of
what is leaving a wastewater treatment works, to poor control of operational processes, to poor investment decisions. Some scenarios:
Scenario 1 - A Dissolved Oxygen probe on an activated sludge plant is reading 4mg/L and the actual process condition is 1mg/L. The plant is using a standard
PID loop control with no ammonia monitoring on the effluent of the treatment works (a very common situation). The control valves of the aeration system close
decreasing the amount of air to attempt to control to what is thought to be 2mg/L. The process condition is actually sub 0.5mg/L and actually not enough air
is being provided for the bacteria or for maintaining the minimum air flow for mixing. The MLSS level crashes as it all settles to the bottom and the ammonia
levels rise due to both insufficient mixed liquor and insufficient air. A pollution event is the result.
Scenario 2 - A treatment works has a history of flow non-compliance with its dry weather flow consent and so the flow meter readings are trusted. This results
in an investigation into the root cause of the flow non-compliance and it appears to be infiltration related. This triggers surveying of the collection network.
This reveals very little infiltration, as the flow meter readings are actually falsely high. As a result, the investment option is to apply for an increased permit and
expansion of the treatment process. This results in unnecessary investment and a works that is suddenly over-sized for its current flows and loads, creating not
only unnecessary CAPEX but also unnecessary OPEX, and a works that is more difficult to operate.
Both of these scenarios are hypothetical, but each has a grain of plausibility. The impact of poor measurement from online instrumentation can be large. This highlights the importance of maintaining online instrumentation: if there is to be a greater reliance on online instrumentation in the future, it comes with an additional responsibility, the need to maintain the instrumentation in order to maintain the data quality.
Figure 2: A typical example of scaling error
AQC & Maintaining Instrumentation
The Water Industry is expert at maintaining the instrumentation present on its treatment systems. Hundreds of thousands of checks are carried out each year, along with all of the testing that is done in the laboratories.
In the laboratory there is something called Analytical Quality Control (AQC): normal laboratory procedure sees both duplicate and check samples being run. This isn't once a week, once a month or even once a year; the frequency of these checks is at the very least once a day, and depending upon the analyte it can be several times an hour (1 in 20 samples used to be a duplicate and 1 in 20 a check sample, alternating, so that a check of some kind occurred 1 in 10 times). All of the check samples were traceable back to national or international standards, and the laboratory method was certified along with the laboratory itself (typically to ISO 17025).
Moving to online instrumentation, depending upon the type of instrument, this rarely happens. Checking whether an online instrument is actually working correctly depends very much upon operational and maintenance procedures.
All of this does depend upon the type of instrument and what is actually being measured.
•	 Online analytical analysers tend to have an internal calibration sequence that uses traceable standards to calibrate the analyser on a regular basis (typically daily). This ensures that the analyser remains accurate.
•	 Electromagnetic flow meters tend to go through complex factory-based calibrations against a master meter, with the factory calibration hard-coded into the meter. The meter then internally verifies itself to ensure it stays within tolerance of the factory calibration.
•	 Level-based flow meters can typically be compared against a calibration plate to check that a standard distance is maintained.
•	 Dissolved oxygen probes typically have a replaceable measurement cap that needs to be changed in order to maintain measurement integrity (typically an annual task).
Online instrumentation remains accurate only as long as the disciplines so diligently practised in the laboratory are also applied in the field. The operational tasks for online instrumentation are different in practice, but they are based upon the same principles of quality control. For example, when you work in the laboratory the principle of avoiding cross-contamination is driven into you; it is something every analyst has got wrong at some point and paid the price for in wasted analytical time and embarrassment. For online instrumentation the equivalent is keeping your measurement point and your instrument as clean as practically possible. Beyond cleanliness there are the concepts of calibration versus verification. The two are often confused and misunderstood, and this is where online instrumentation needs to borrow from its analytical, laboratory-based relations.
Calibration, in terms of an online instrument, is the procedure of adjusting the measured parameter of an instrument so that it matches a traceable method of measurement. This is often done by applying a factor within the instrument itself, and often requires the instrument to be returned to the original manufacturer, although some manufacturers have field services that can carry out a calibration routine. For analytical instruments this can be accomplished using traceable standards in the field, and should almost always be done against a wet method of analysis. In the laboratory this would be comparable to running a calibration curve.
Verification, in terms of an online instrument, is the checking of an instrument against a known measurement in order to confirm the correct operation of the instrument. It would not normally involve making any changes to the instrument itself. For an analytical analyser this would involve taking an independent sample and comparing the results; for a flow meter, checking the gauge or using an independent meter. A variant is electronic verification, which checks that the electronics of the device are working within a standard tolerance.
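The distinction between the two concepts can be sketched in code. This is a minimal illustration, assuming a hypothetical 10% verification tolerance; the readings are made up and no real standard is implied.

```python
# Sketch of the verification-vs-calibration distinction described above.
# Values and the 10% tolerance are illustrative, not from any standard.

def verify(instrument_reading: float, reference_value: float,
           tolerance_pct: float = 10.0) -> bool:
    """Verification: check the instrument against an independent
    reference without changing the instrument itself."""
    if reference_value == 0:
        return instrument_reading == 0
    deviation_pct = abs(instrument_reading - reference_value) / reference_value * 100
    return deviation_pct <= tolerance_pct

def calibration_factor(instrument_reading: float, reference_value: float) -> float:
    """Calibration: derive the factor that would be applied inside the
    instrument so its output matches the traceable reference."""
    return reference_value / instrument_reading

# A DO probe reads 4.0 mg/L while a traceable check sample gives 1.0 mg/L:
print(verify(4.0, 1.0))              # False: a 300% deviation fails verification
print(calibration_factor(4.0, 1.0))  # a factor of 0.25 would be applied
```

Verification only answers "is the instrument within tolerance?"; calibration actually changes what the instrument reports, which is why it needs a traceable reference.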
Lastly, for any instrument onsite there is end-to-end testing of the telemetry system: checking what the system is actually reading is the last vital check. As the earlier example shows, this is one of the steps that is most often missed.
Discussion
Getting the quality of data right is a very simple thing to do in theory but, as so often in life, one of the most difficult things to put into practice. The simple steps are:
1.	 Select, install and commission any online instrument correctly. Do not cut corners, as this will most often end in poor data quality.
2.	 Keep it clean and maintained. Easier said than done, especially in a wastewater environment, but absolutely vital. If an instrument can't be accessed then move it to where it can be; all instruments should be accessible for maintenance, especially those with consumable parts.
3.	 Keep checking it. Getting into the habit of walking past a meter and seeing whether it is working is the first warning that something is not right; comparing it against a known sample using verification methods is the next step.
4.	 Check that what you are getting onsite is what everyone else is reading too.
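Step 4 is essentially the end-to-end check that caught the 2.5x scaling error described earlier. A minimal sketch, with made-up readings and an assumed 5% tolerance:

```python
# A minimal sketch of step 4: comparing what the telemetry system reports
# against what the meter reads onsite. Readings are illustrative.

def scaling_ratio(telemetry_values, onsite_values):
    """Estimate the average ratio between paired telemetry and onsite
    readings. A ratio far from 1.0 suggests a scaling error between the
    two systems."""
    ratios = [t / o for t, o in zip(telemetry_values, onsite_values) if o != 0]
    return sum(ratios) / len(ratios)

onsite = [102.0, 98.5, 110.2, 95.0]        # flow at the meter, l/s
telemetry = [255.0, 246.3, 275.5, 237.5]   # the same points as seen in telemetry

ratio = scaling_ratio(telemetry, onsite)
if abs(ratio - 1.0) > 0.05:  # 5% tolerance, an assumed threshold
    print(f"Possible scaling error: telemetry reads {ratio:.1f}x the onsite meter")
```

Run against the earlier example, a check like this would have flagged the 2.5x mismatch long before it made the works look non-compliant.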
In reality the practices that have been developed in laboratories under Analytical Quality Control should also be applied in the field to online instrumentation if the quality of data in the water industry is to be relied upon in the future, especially as the volume of data is set to increase dramatically. In essence this is taking the culture of AQC from the laboratory and applying it to the field-based environment. The alternative, as the industry becomes more reliant on field-based online instrumentation, is that the operational situation is seen from a slightly skewed and erroneous point of view.
Introduction
Water 4.0 is a concept that has recently been raised as the "future" of the Water Industry... possibly, but apart from being a paraphrase of Industry 4.0 the question has to be asked: what is it, and what has it got to do with the way the Water Industry operates in its current state?
To define what exactly Water 4.0 is, we have to look at Industry 4.0 and what came before it, i.e. Industry 1.0, 2.0 and 3.0. So what are these?
Industry 1.0 - The first industrial revolution, involving the mechanisation of production using water and steam power. Think water mills and steam engines.
Industry 2.0 - In short, think of electricity and what it did for the mechanisation of industry.
Industry 3.0 - Think electronics and computers: basically the start of automation within industry.
So what is Industry 4.0? It is a collective term for technologies and concepts of value chain organisation. Based on the technological concepts of cyber-physical systems, the Internet of Things and the Internet of Services, it facilitates the vision of the Smart Factory. Within the modular, structured Smart Factories of Industry 4.0, cyber-physical systems monitor physical processes, create a virtual copy of the physical world and make decentralised decisions. Over the Internet of Things, cyber-physical systems communicate and cooperate with each other and with humans in real time. Via the Internet of Services, both internal and cross-organisational services are offered and utilised by participants in the value chain.
It is based upon six design principles
1.	 Interoperability: the ability of cyber-physical systems (i.e. work piece carriers, assembly stations and products), humans and Smart Factories to connect and communicate with each other via the Internet of Things and the Internet of Services
2.	 Virtualization: a virtual copy of the Smart Factory which is created by linking sensor data (from monitoring physical processes) with virtual plant models
and simulation models
3.	 Decentralization: the ability of cyber-physical systems within Smart Factories to make decisions on their own
4.	 Real-Time Capability: the capability to collect and analyze data and provide the insights immediately
5.	 Service Orientation: offering of services (of cyber-physical systems, humans and Smart Factories) via the Internet of Services
6.	 Modularity: flexible adaptation of Smart Factories for changing requirements of individual modules
The “Cyber Physical System” element of this can be defined as a system of collaborating computational elements controlling physical entities. CPS are physical
and engineered systems whose operations are monitored, coordinated, controlled and integrated by a computing and communication core. They allow us to
add capabilities to physical systems by merging computing and communication with physical processes.
So how does this apply to the Water Industry?
Industry 1.0 to 4.0 all apply to the manufacturing industry, and for that industry it is relatively simple: something is being fabricated by putting together distinct parts. The Water Industry is actually quite different: be it potable water or wastewater, water is being cleaned for delivery either to the customer's tap or back to the environment. In reality, operationally, does Industry 4.0 apply to the Water Industry, or are we trying to force concepts from another industry onto the Water Industry and creating something that doesn't quite work?
Possibly. But let's play with the design principles briefly and see how far the Water Industry has got with each of the concepts.
Interoperability - The way I read interoperability is the ability of Water Industry operators to connect to, communicate with and work with the treatment, collection and distribution systems to find out what is going on, and to be able to connect remotely. If you ignore the requirement to do this over the Internet, it is arguable that we already have this ability through SCADA systems. In some ways the Water Industry has achieved this on large treatment works, and to some extent with distribution systems, but it is nowhere near the interoperability concept on smaller treatment works and collection systems.
Rating - Big Tick....at least in parts of the industry
Virtualisation - A virtual copy of the Smart Factory: arguably a big tick in the Water Industry box. We have telemetry systems which at least allow us to see what is going on. On advanced wastewater treatment works we have process models that control aspects of the works, and in both advanced distribution and collection systems we even have model-based simulation. The technology is certainly not there yet on a company-wide basis, but in pockets of the Water Industry it certainly works and is in place.
Article: Is Water 4.0 the future of the Water Industry?
Rating - Not far off
Decentralisation - The ability of the treatment works and network systems to control themselves. Again, arguably this already exists: we have elements of treatment works that are more than capable of controlling themselves through monitoring and control systems, we have pumping stations that control pass-forward pumps based upon the signals from level controllers, and we have PLCs that act as control centres for treatment works or individual parts of them. So has the Water Industry achieved the principle of decentralisation? Perhaps....
Rating - A big tick? Perhaps
Real-Time Capability - The capability to collect and analyse data and provide the insights immediately? Hmmm... how do you define immediately, is it applicable to the Water Industry, and is immediately even necessary? This is an area where the Water Industry can definitely develop. The basics can be said to be done: we have the ability to alarm out if something is wrong and even, on some systems, the potential to react to the alarm remotely to repair the problem. Under Water 4.0 and the principles of virtualisation and decentralisation the system should of course react itself. There is the potential for real-time or even near-time capability (as applicable to the industry), but to be fair this is an area where the Water Industry should grade itself as "an area for improvement".
Rating - An area for improvement
Service Orientation - We're a service industry, so this is an absolute tick in the box... or is it? Well, actually, probably not:
•	 Water meters are mostly manually read once or twice a year
•	 Customer bills and other customer communications are mostly paper-based and come through the post, although some communication is through social media
•	 Customer queries are handled over the telephone, although text messaging and social media are becoming more popular
•	 Customer analytics are rare at best, although with the advent of Smart Metering this is an area that the Industry is actively pursuing and improving in
Rating - An area where improvements are being made, but generally could do better
Modularity - A flexible approach? Changing requirements? Does this design principle apply, and are we already doing it? Again, arguably the answer is yes. The picture to the right is from a large wastewater treatment works and, to me, demonstrates modularity in the design of the final tanks as well as flexibility of operation. The control system of an individual tank will be exactly the same as the control system for the tank next door to it (or, probably in this example, the group of tanks next door).
Some of the Water Companies in the UK have control system libraries, so that they can take a control module from the "library" and apply it to site requirements with a little bit of tweaking.
So has the Water Industry achieved the design principle of modularity? Arguably perhaps, but certainly not across the whole industry, and perhaps not if you take a purist view of Industry 4.0; from a Water 4.0 point of view, though, it's a definite maybe.
Rating: - Getting there
Going purely on the design principles of Industry 4.0, we can argue that Industry 4.0 does apply to the Water Industry, and so as a concept at least Water 4.0 is a direction that we should be moving towards and in parts have actually achieved. But, as with anything, you can take the individual ingredients of any recipe and put them all together in a mixer; it doesn't of course mean that you will get anything resembling sense out of the other end.
Delivering Water 4.0 - What does it practically mean for the Water Industry?
So, in nuts-and-bolts terms, what does Water 4.0 actually look like from a Water Industry point of view? For me it's a case of going back to basics: seeing what the Water Industry currently has, and what it can do to bring the industry forward to a point where we are at least adhering to the design principles. For me at least it is the management of the "anthropogenic water cycle" from when we abstract water from the source to when we return it to the environment, and arguably further than that. It is deciding what we want to do, looking at the technological gaps and then plugging them. There are examples of where this has been done, at least in part, and it is these examples that we must look towards to shape the future of the Water Industry.
To use the principle of the SWAN Layers, where are we?
Physical Layer - The first and most extensive of the layers, including all of the assets themselves, from pipes to tanks to pumps. This is the base of the Water Industry and it must be managed through asset management systems, recording the assets that we have in a consistent way across the whole Water Industry. Believe it or not this is an area of challenge, as the nomenclature differs completely across the Water Industry. All of these assets of course need to be managed in the short, medium and long term with systems such as Computerised Maintenance Management Systems (CMMS) and potentially Condition-Based Maintenance Management Systems (CBMMS).
Sensing & Control Layer - This layer is relatively simple and yet is probably one of the major stumbling blocks within the Water Industry, mainly because the requirements of the Sensing & Control layer have generally been very poorly specified. This has allowed the phenomenon of Data Richness, Information Poverty to proliferate: instrumentation has been installed with little or no attached value. This has led to the devaluing of instrumentation as a whole and the inability to extract usable intelligence from the vast amount of data that is collected every day.
If Water 4.0 is to become a true reality in the Water Industry then an exercise to define the information that the Water Industry needs in order to operate must be completed. From the information requirements come the data needs, and from these the instrumentation that is required to feed them. At this level, Sensing & Control management systems are of course needed, as well as data validation systems to check the quality of the data that is collected. It is the Sensing & Control level that is absolutely vital if the Water Industry is to deliver Water 4.0.
Collection & Communication Layer - The telemetry system layer, where all of the data from the Sensing & Control layer is collected; it also includes PLCs and SCADA systems. It is at this level that a lot of the debate will happen in the Water Industry, and it is potentially where the so-called Internet of Things comes into play, connecting instruments with the wider system. For the Water Industry there are numerous different elements, from the Water Industry Telemetry Standard (WITS) to the existing SCADA and PLC structure. The main concern, and the main stumbling block for Water 4.0, sits within this layer: digital or cyber security.
If you say to a communications or telemetry specialist in the Water Industry that you are just going to connect an instrument up to the Internet of Things, the answer will be a firm "never in a million years"; bring "the Cloud" into the mix and you are definitely not going to be successful in your endeavours, and the less said about local communication protocols the better. In fact the discussion over communication protocols in the Water Industry is assuredly going to be a debate for many years to come. If the definition of Water (or Industry) 4.0 requires connecting to the Internet, then it is more than likely that in the Water Industry it will never become a reality.
The Data Management & Display Layer (Layer 4) and the Data Fusion & Analysis Layer (Layer 5) are probably the layers that are developed in some respects but undeveloped in others. Models of the various aspects of the Water Industry exist, as do complex telemetry and information management systems. In addition there are the business reporting systems (SAP, Click and all of the other management systems), and now all of the Software as a Service (SaaS) systems that are available. On top of this are the various Excel spreadsheets and Access databases that are almost a pre-requisite in the industry. The problem is that there are several different versions of the truth, and access to all of these different systems is compartmentalised across the various companies. The result, of course, is that the truth becomes whatever the information you happen to be looking at says it is.
Conclusions
Water 4.0 - is it something for the Water Industry, is it something that the Water Industry has already achieved, or are we on the path to it?
The quick answers are that it is something for the Water Industry, and in large part we have been moving towards it for a number of years. As an industry we are moving further and further towards a factory approach to the products that we produce, whether potable water for drinking, treated water for returning to the environment or biosolids for use on agricultural land. More and more we are seeing product factories, minimisation of losses (through leakage reduction) and maximisation of the products that we can produce (through resource recovery). We as an industry are focused on providing the best customer service that we can, which is why more and more companies are metering the water they provide, in a large number of cases through "Smart Metering", to work with the customer to provide the best customer service.
Water 4.0, the Smart Water Industry or just plain efficient operation (in truth, whatever you want to call it) is central to these ways of working, and it is through the development of the design principles of Industry 4.0 that we can deliver the future of the Water Industry. However, there are some barriers to this approach to take into account and some decisions that need to be taken, not at a company level but at an industry level as a whole.
The first of these barriers is that of communications protocols, insofar as we are an industry that mainly works off analogue signals, with Profibus on larger plants. The industry seems to be heading towards a future of Ethernet, and in the UK there is the whole direction of the Water Industry Telemetry Standard (WITS), with some heading in that direction and some not.
The second is cyber security, which is becoming an increasingly urgent issue. For those talking about Cloud or Internet of Things environments, proof of absolute security is a must. Incidents of hacking of water treatment works that have hit the recent news, along with past incidents, only make the issue all the more important. A hacking incident that changed chemical dosing could have serious implications for customers or the environment, and zero risk must be the way forward if the Water Industry is even to investigate this area.
The third is instrumentation and data quality, and an end to Data Richness, Information Poverty. The Water Industry has a vast number of instruments which produce a vast amount of data that gives no actionable intelligence, and in reality it needs to move towards an era of simply Information Richness, where the information that is needed is available to the people that need it, in an easy and digestible format that provides one version of the truth. This information of course needs to be accurate, which requires the correct instrumentation to be purchased, installed, operated and maintained correctly. This is not always the case in the Water Industry of today, as the value placed on data and information is relatively low.
Water 4.0 is something that the Water Industry should be aiming towards. How we are going to get there is going to be the fun bit over what probably is going
to be the next decade or two.
Article: Using Online Water Quality Distribution Systems Monitoring to Detect and Control Nitrification
Abstract
Distribution system water quality monitoring is still in its infancy, but there is an emerging realization of its potential value. The initial emphasis has been on
developing ways to detect deliberate or inadvertent chemical and biological attacks on water distribution systems. The potential for harm was made clear in the
case of the Milwaukee Cryptosporidiosis outbreak of 1993 (McGuire 2006), in which thousands of people became ill.
Historically, water quality monitoring of the distribution system has been limited to compliance with regulatory standards such as chlorine residual and total
coliform. Yet the need for more comprehensive monitoring has been demonstrated by a growing body of research that indicates that water quality can change
significantly between the water treatment plant (WTP) and the ultimate consumer (Baribeau et al. 2005; Zhang et al. 2002; LeChevallier 1990). Therefore, a
second potential application for distribution monitoring is to ensure that the water received by the public has not degraded below acceptable standards.
While examining databases from numerous well-operated utilities (Cook et al. 2008), the authors determined that online real-time distribution system
monitoring can provide early-warning of nitrification in chloraminated water systems. Early detection is one of the keys to controlling its spread and severity.
This paper presents a case study in which distribution system monitoring data was used to detect water quality degradation, to explain why the degradation
occurred, and to propose how such monitoring, along with data analysis, could be used to enhance effective operational decision-making.
Introduction
Thomas Kuhn (1922-1996) was a philosopher of science who, in his The Structure of Scientific Revolutions, introduced a new idea of how scientific revolutions happen. A main thesis of his work is that scientific revolutions occur when scientists change their "paradigm" for describing reality, which Kuhn called a "paradigm shift". For example, before Einstein all physicists operated under the paradigm of Isaac Newton's laws, in which mass and energy are constant in space and time. While such a paradigm was only an approximation, it was easily accurate enough to calculate a manned spaceflight to the Moon. Einstein brought a huge paradigm shift by showing how mass, energy and space had to be viewed relative to each other, that mass could change into energy, and that the only true constant is the speed of light (www.plato.stanford.edu/Thomas-Kuhn/). Likewise, a paradigm shift is occurring in how water distribution systems are viewed and how online monitoring of distribution systems could be used to help make operational decisions while improving customer-delivered water quality.
The Original Paradigm
The original paradigm of a distribution system consisted of highly treated potable water being pumped into a series of water mains in which there was little change in water quality between the water treatment plant (WTP) and the customer. This paradigm was based upon the facts that: 1) no chemicals were added en route; 2) water mains are relatively inert chemically; and 3) microbes are disinfected at the point of entry. Hence, the original paradigm held that there was little motivation to establish online monitoring on the distribution system, because it would provide little additional information.
The new paradigm views the distribution system as a spatially large, complex bio-chemical reactor, with the pipe walls supporting bacterial growth, and with reactions taking place both within the bulk water itself and between the bulk water and the bio-films on the pipe walls. Lengthy detention times and microbial action mediate an environment with less-than-perfect water main conditions. As a result, water quality changes between the WTP and the water consumer. The new paradigm allows that variations in water quality on the distribution system can, to a great extent, be explained by the information contained in the water quality data leaving the WTP. Of course, in order to determine the actual water quality that the customer receives, it is necessary to monitor the water as close to the tap as possible. To detect degradations in water quality, various sensors and data analysis tools, some simple and others more complex, are available to assist in determining why water quality has degraded, which affords a greater degree of control. Moreover, distribution system monitoring to protect public health enables remedial steps to be taken at the earliest possible time, both at the WTP and on the distribution system.
Types of Data Analysis and Modelling
Online monitoring of distribution systems generates large quantities of data that must be converted into useful information. It is commonly assumed that there is a strong relationship between WTP water quality and distribution system water quality; however, variability in distribution system water quality cannot always be explained. Frequently, this relationship involves several input variables. However, computers can be programmed to quickly analyze large volumes of data to provide a more complete, multivariate evaluation and to support decision-making to optimize water quality.
Predictive numerical models are a means of analysis that generally fall into one of two categories: those based on equations from physics, and empirical correlation functions that adapt generalized mathematical functions to fit a line or surface through data from two or more variables. The most commonly used and easily understood empirical approach is ordinary least squares (OLS), which relates variables using straight lines, planes, or hyper-planes, whether the actual relationships are linear or not. For systems that are well characterized by data, empirical models can be developed much faster and can be more accurate; however, empirical models are prone to problems such as over-fitting when poorly applied (Roehl et al. 2003).
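The OLS approach described above amounts to solving a linear system. A short sketch with numpy, using synthetic data rather than anything from the paper's study:

```python
# A sketch of the OLS approach described above: fitting a plane through
# multivariate data with numpy. Data are synthetic, for illustration only.
import numpy as np

rng = np.random.default_rng(0)
# Two input variables (think WTP chlorine dose and temperature) ...
X = rng.uniform(0, 1, size=(100, 2))
# ... and a response that is roughly linear in them, plus noise.
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.5 + rng.normal(0, 0.05, 100)

# Ordinary least squares: solve for (a, b, c) in y = a*x1 + b*x2 + c
A = np.column_stack([X, np.ones(len(X))])
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
print(coeffs)  # close to the true values [2.0, -1.5, 0.5]
```

The fitted plane recovers the generating coefficients well here precisely because the underlying relationship is linear; when it is not, OLS still forces a plane through the data, which is the limitation the text notes.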
Techniques such as OLS and physics-based finite-difference models prescribe the functional form of the model's fit to the calibration data. Alternatively, artificial neural networks (ANNs) employ flexible mathematical structures inspired by the brain, where very complicated behaviours are derived from billions of interconnected devices, namely neurons and synapses (Hinton 1992). One type of ANN is the multi-layer perceptron (MLP), which synthesizes rather than prescribes a function that fits curved surfaces through multivariate data (Jensen 1994). MLP ANNs can be more accurate than other modelling approaches when:
1.	 The available data comprehensively describe the behaviours of interest and the model will be used to interpolate within the range of those behaviours; and
2.	 There is significant mutual information shared among the measured variables.
MLP ANNs are commonly used in process engineering applications, e.g., applications to model and control combined man-made and natural systems (Devine et al. 2003; Conrads and Roehl 2005).
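The "fits curved surfaces" idea can be sketched with a tiny MLP written directly in numpy. The architecture, data and training loop below are illustrative only, not the authors' models; the point is simply that a one-hidden-layer network learns a curved relationship that OLS's straight line cannot.

```python
# A minimal one-hidden-layer MLP trained by gradient descent.
# Everything here (data, sizes, learning rate) is an illustrative choice.
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (200, 1))
y = X[:, 0] ** 2  # a curved target a straight-line OLS fit cannot capture

# One hidden layer of 8 tanh units, linear output
W1 = rng.normal(0, 0.5, (1, 8)); b1 = rng.normal(0, 0.5, 8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)

lr = 0.05
for _ in range(3000):  # plain full-batch gradient descent on squared error
    H = np.tanh(X @ W1 + b1)            # hidden activations
    err = (H @ W2 + b2)[:, 0] - y
    # Backpropagate the error through the two layers
    gW2 = H.T @ err[:, None] / len(X); gb2 = err.mean(keepdims=True)
    dH = err[:, None] @ W2.T * (1 - H ** 2)
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

H = np.tanh(X @ W1 + b1)
pred = (H @ W2 + b2)[:, 0]
mse = float(np.mean((pred - y) ** 2))
print(f"final MSE: {mse:.4f}")  # small: the network has fitted the curve
```

A straight-line fit to this symmetric target can do no better than predicting the mean, so a final error well below the target's variance shows the MLP has synthesized the curvature for itself, which is the property the paper relies on.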
Case Study: Detecting Early Nitrification on Distribution
Nitrification is a process in which nitrifying bacteria consume ammonia (NH3) to form nitrite (NO2) and eventually nitrate (NO3). The presence of ammonia in drinking water can be naturally occurring or, more often, a consequence of chloramine disinfection, which is used by many WTPs as an alternative to traditional, DBP-causing chlorination-only disinfection. The disinfectant residual is necessary to inactivate potentially harmful microbes in the distribution system; however, water leaving a WTP can take several days or longer to reach customers. The disinfectant can be broken down by nitrifying bacteria as the water ages, which can impact taste and odour, and allow microbial re-growth (Harrington et al. 2002; Skadsen 2002; Lieu et al. 1993; LeChevallier 1990). Kirmeyer et al. (1995) estimate that two-thirds of the medium-to-large chloraminated systems in the U.S. experience nitrification to some degree, and that fully half of these systems experience operational problems as a result. Nitrification can be inferred from parameters such as total chlorine residual, dissolved oxygen, and pH, among others.
A mid-sized utility in the central U.S. was in the process of installing a number of monitoring sites throughout its distribution system and provided early examples of its data for the authors’ Water Research Foundation study (Cook et al. 2008). For this utility, each site measures total chlorine residual, pH, turbidity, pressure, and temperature. A portion of the distribution system is schematized in Figure 1, showing relative storage tank and booster pump (BP) locations on three mains (1, 2, and 3) originating from the same WTP.
Figure 2 shows approximately 10 months of concurrent total chlorine residual data,
recorded at 15-minute intervals, from BP 1 on Main 1 and from the BPs on Mains 2 and
3. Except where the sensors drop out (downward spikes), the total chlorine residuals of the BPs on Mains 2 and 3 generally track together and do not fall below 2.0 mg/L; however, while BP 1 sometimes tracks the other two, it often falls below 2.0 mg/L and remains there for several days.
Figure 3 shows the total chlorine residuals at BP 1 and at a location just downstream at Tank A. After observation 21,000, the BP 1 residual rises above 3.5 mg/L and shows greatly reduced diurnal variability; however, the Tank A residual continues to have large variability and large negative changes. Note that the upper values of the Tank A residual track the trend at BP 1. One possible explanation is that nitrifying bacteria grew in the tank because of sporadically low inflow residuals, and persisted in the tank even after the inflow residual returned to higher levels. Looking at the frequently low BP 1 residuals in Figure 3, it is also possible that the source of the nitrification originated upstream of BP 1.
Figure 1. Schematic of a portion of a distribution system at a central U.S. utility. “BP”
designates a booster pump station.
Figure 2. Total chlorine residual at booster pumps (BP) on three different mains.
Figure 3. Total chlorine residuals at Booster Pump 1 (BP 1) and Tank A on Main 1.
Page 34
Similarly, Figure 4 shows the residuals at BPs 1, 2, and 3 on Main 1. The residuals at BPs 1 and 2 generally track together throughout the period shown. BP 3 is downstream of a second water tank, Tank B. Between observations 14,000 and 21,000, the residual at BP 3 is often much lower than at the upstream BPs, possibly indicating the presence of nitrification in Tank B. For some time after observation 21,000, the upper values of all three residuals are elevated, which appears to be sufficient to control the nitrifying bacteria in Tank B and allow the BP 3 residual to gradually approach those measured upstream.
The appearance of possible nitrification in only one of three mains from the same WTP suggests that Main 1 is somehow different from the others, perhaps because its flows are lower and detention times longer. The 2008 Water Research Foundation study found that low flows were a contributor to probable nitrification at a second mid-sized utility in the southeast. Figures 2, 3 and 4 also indicate that the conditions that lead to and manifest ongoing nitrification, such as low chlorine residuals, are recognizable from standard SCADA trend charts and could be automatically alarmed using modified statistical process control limits or time derivatives (the rate of change of residuals with respect to time). This is also true for the main break and DBP examples.
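The alarming approach described above can be sketched in a few lines. The simulated residual trace, baseline window, and control limit below are illustrative assumptions, not the utility’s actual data or the study’s method:

```python
import numpy as np

rng = np.random.default_rng(1)
# Simulated 15-minute chlorine residuals: a diurnal cycle plus noise, with a
# nitrification-like decay beginning at sample 600 (values are illustrative).
t = np.arange(1000)
residual = 2.5 + 0.3 * np.sin(2 * np.pi * t / 96) + rng.normal(0, 0.05, t.size)
residual[600:] -= np.linspace(0.0, 1.2, 400)

# Derive a lower control limit from a "healthy" baseline window.
baseline = residual[:500]
lcl = baseline.mean() - 3 * baseline.std()   # lower control limit, mg/L

# Alarm when the residual crosses the limit; the time derivative
# (rate of change) could be monitored in the same way.
below_limit = residual < lcl
slope = np.gradient(residual)                # per-sample rate of change
first_alarm = int(np.argmax(below_limit))    # index of the first alarm sample
```

In practice the limits would be tuned per site and season; the point is only that such alarms can be computed directly from existing SCADA trend data.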
Conclusions
Based upon a substantial body of research, the new paradigm is to consider distribution systems as complex bio-chemical reactors which alter water quality between the WTP and the consuming public. Just as importantly, it has been discovered that distribution system water quality changes in such a way that much of the variability in quality can be explained by analyzing both the finished and distribution system water quality data. Hence, there can be significant correlations between water quality variables in finished water and changes in water quality in the distribution system. A case study was presented to explain degradation in the distribution system caused by nitrification. By knowing the relationships between finished water and distribution system water quality, a utility can be alerted to rapid water quality degradation, with the ultimate goal of providing the best attainable water quality at the customers’ tap.
Acknowledgements
The genesis for this work was the result of research sponsored by the Water Research Foundation (formerly AwwaRF). The authors would like to thank the
many participating utilities for providing valuable research data.
References
Baribeau, H., Gagnon, G., and R. Hofman. 2005. Impact of Distribution System Water Quality on Disinfection Efficacy. Denver, CO.: AwwaRF.
Cook, J., Daamen, R., and E. Roehl. 2008. Distribution System Security and Water Quality Improvements through Data Mining. Denver, CO.: AwwaRF.
Devine, T.W., Roehl, E.A., and J.B. Busby. 2003. Virtual Sensors – Cost Effective Monitoring, In Proceedings from the Air and Waste Management Association
Annual Conference, June 2003.
Emmert, G. L., Brown, M., Simone, P., Geme, G., and C. Gang. 2007. Methods for Real-Time Measurements of THMs and HAAs in Distribution Systems. Denver,
CO.: AWWA and AwwaRF.
Harrington, G. W., Noguera, D., Kandou, A., and D. VanHoven. 2002. Pilot-Scale Evaluation of Nitrification Control Strategies. Jour. AWWA, 94:11:78.
Hinton, G.E. 1992. How Neural Networks Learn from Experience, Scientific American, September 1992, p.145-151.
Jensen, B.A. 1994. Expert Systems - Neural Networks, Instrument Engineers’ Handbook 3rd Edition. Radnor, PA.: Chilton. p. 48-54.
Kirmeyer, G. et al. 1995. Occurrence and Control of Nitrification in Chloraminated Water Systems, Denver, CO.: AwwaRF.
LeChevallier, M. 1990. Coliform Regrowth in Drinking Water: A Review. Jour. AWWA, 82:11:74.
Lieu, N. I., Wolfe, R., and E. Means. 1993. Optimizing Chloramine Disinfection for the Control of Nitrification. Jour. AWWA, 85:2:81.
McGuire, M. 2006. Eight Revolutions in the History of US Drinking Water Disinfection. Jour. AWWA, 98:3:123.
Roberts, M., Singer, P., and A. Obolensky. 2002. Comparing Total HAA and Total THM Concentrations Using ICR Data, Jour. AWWA, 94:1:103.
Roehl, E.A., Conrads, Paul, and Cook, J.B., 2003. Discussion of “Using complex permittivity and artificial neural networks for contaminant prediction”. Jour.
Environmental Engineering, v. 129, p. 1069.
Skadsen, J. 2002. Effectiveness of High pH in Controlling Nitrification. Jour. AWWA, 94:7:73.
Website: www.plato.stanford.edu/Thomas-Kuhn/. Accessed by authors April 9, 2012.
Zhang, M., Semmens, M., Schuler, D. and R. Hozalski. 2002. Biostability and Microbiological Quality in a Chloraminated Distribution System. Jour. AWWA,
94:9:112.
Figure 4. Total chlorine residuals at booster pumps (BP) 1, 2, and 3 on Main 1, with water temperature at BP 2. BP 3 data is missing between observations 8,000 and 13,000.
Page 35
The concepts of controlling the large wastewater treatment plants in the water industry have been around for many years. The majority of the power in a wastewater treatment plant is consumed in aeration, and the first control systems aimed to address this fact by removing the need for an operator to walk to an aeration lane, drop a sensor into the mixed liquor, and then manually switch a blower on or off at a panel. Permanently installed dissolved oxygen probes allowed trending of the oxygen levels, and control systems with PID loops then kept the oxygen levels within tram lines. This is certainly not what the industry would term advanced, but it did reduce the cost of running the activated sludge plant.
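The tram-line DO control mentioned above is a textbook PID loop. A minimal sketch of one update step is below; the gains and control interval are illustrative assumptions, not taken from any plant described in this article:

```python
def pid_step(error, state, kp=1.2, ki=0.1, kd=0.0, dt=1.0):
    """One update of a textbook PID controller.

    error  : DO set point minus measured DO (mg/l)
    state  : (integral, previous error) carried between calls
    returns: (controller output, e.g. blower demand; new state)
    Gains and timestep are illustrative only.
    """
    integral, prev_error = state
    integral += error * dt                     # accumulate the I term
    derivative = (error - prev_error) / dt     # rate of change for the D term
    output = kp * error + ki * integral + kd * derivative
    return output, (integral, error)

# e.g. DO set point 2.0 mg/l, measured 1.5 mg/l -> error of 0.5 mg/l
blower_demand, state = pid_step(0.5, (0.0, 0.0))
```

Called every control interval, the loop nudges the blower until the measured DO sits between the tram lines; nothing about the process itself is modelled, which is why this is basic rather than advanced control.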
The industry moved forward and looked at ammonia control, most prominently feedback control from the end of the aeration lanes, as this was thought to be the area where most of the ammonia is treated, with the oxygen levels controlled based upon the measured ammonia concentration. The more advanced and adventurous water companies looked at feed-forward control, using a process model to predict how much air would be required and delivering that amount. This was typically supplemented with some sort of feedback control system. In some water companies this had more success than in others, and this, in some areas of the industry, is where we are today. Other companies have taken the step of putting Advanced Process Control systems in place; however, the adoption of these technologies can be said to be limited.
So what exactly is meant by “Advanced Process Control”, and how does it differ from normal process control? For me at least, it is using some sort of model to control the process. The modern water industry has the ability to model most of its processes. We only have to look to models such as BioWin and GPS-X, and the specialists within the industry who understand how to do this process modelling, to see the benefits of modelling. Most of these models are based upon the Activated Sludge Models (ASM) of the IWA and its predecessor organisations. Probably one of the most famous examples is the seminal paper by Andy Shaw et al. of Black & Veatch and the case study of Daniel Island (click here), looking at the intelligent control of sequencing batch reactors, which managed, with little adaptation, to double the total daily volume that could be successfully treated. This paper was presented at WEFTEC in 2006.
Daniel Island was not the only development in the area of Advanced Process Control, with one of the first examples in the UK being the development of a system for the Southern Water scheme at Peel Common (click here). This project was undertaken in 2008 and, whilst extending the activated sludge capacity, converted the treatment works to a four-stage Bardenpho Biological Nutrient Removal process.
The scheme at Peel Common was highly successful: over a 10-week trial it achieved a 20 percent reduction in the amount of aeration, control of the amount of ammonia that was discharged, and a 50 percent reduction in the amount of methanol that was consumed.
The system and controller that were developed for this treatment plant looked to monitor and automate the whole process, including nitrification and methanol dosing. This has formed the fundamental basis for the development of a whole range of controllers based upon the use of instrumentation, not just in the activated sludge plant but also in other areas of the production factory that the wastewater industry is moving towards.
So for the activated sludge plant, what was the key to achieving the savings? Firstly, it will have been ammonia monitoring installed in the correct areas of the treatment plant, and of course in the correct way. It was also managing the sludge age of the plant rather than solely concentrating on the F/M ratio, which the industry has concentrated on, and in some areas still does. With Peel Common it was also about controlling the whole process, not just the individual elements.
From the establishment of the Peel Common case study, other projects developed, including the Holdenhurst project for Wessex Water. This was based, like Peel Common, on Hach’s WTOS system. The WTOS system was installed at Wessex Water’s Holdenhurst facility in Bournemouth (175,000 PE), which mainly treats domestic wastewater. Aeration for Holdenhurst’s fine bubble activated sludge treatment system is provided by four large mains-powered variable speed blowers. The site has a good record for maintaining a low ammonia discharge, but had a heavy power/aeration requirement, particularly during storm events. Prior to the installation of the WTOS, LDO™ probes in the aeration lanes fed dissolved oxygen data to the PLC, which controlled the blowers to maintain DO at specific levels (approximately 2.5 mg/l) depending on the treatment zone.
Article:
The use of Advanced Process Control
in the modern Wastewater Industry
Schematic for Peel Common and the four stage Bardenpho BNR configuration
Page 36
Similarly, under the previous sludge management regime, fixed volumes of sludge were returned based on laboratory mixed liquor suspended solids (MLSS)
values and manual settlement tests.
There are three main components to WTOS:
(1) the RTCs
(2) the process analysers
(3) the PROGNOSYS system.
Automated control systems necessitate reliable continuous measurement values 24 hours a day, so the PROGNOSYS system was developed to constantly check the diagnostic signals (health and service status) from the installed instruments in order to achieve the required levels of reliability. The capital outlay for the addition of the system was relatively small; the most significant extra cost was simply a requirement for extra sensors. WTOS overlays and complements existing infrastructure, so it is possible to simply turn the control system off and revert to the former regime. Each RTC was implemented on an industrial PC which communicates with the sc1000 controller network and the local PLC.
The WTOS RTC unit delivers set points for the DO concentration and Surplus Activated Sludge flow rate to the PLC, which applies those set points to the process. Site-specific characteristics such as layout and tank size are also taken into consideration when calculating the set points. All set points can be adjusted either via the SCADA system or the local WTOS controller user interface. This means that when the plant is under RTC control, DO set points are no longer ‘fixed’; instead they ‘float’ according to the load.
To enable this, the N-RTC receives information about the actual
• NH4-N inflow concentration and flow
• MLSS concentration
• Water temperature
A simulation model is integrated within the controller for open loop control to calculate the DO concentrations necessary to achieve the desired ammonia outlet concentration. The N-RTC also constantly reads the NH4-N concentration at the outlet of the aeration lane. This value provides a feedback control loop and ensures that the DO concentration is increased/decreased if the ammonia concentration is above/below the desired NH4-N set point. In this way, the N-RTC control module combines the advantages of feed-forward and feedback control, which are (1) rapid response and (2) set point accuracy.
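The combination of a feed-forward estimate with a feedback trim can be sketched as follows. This is an illustrative scheme only, not the WTOS/N-RTC algorithm: every coefficient, the temperature correction, and the clamping band are assumptions made for the example.

```python
def do_setpoint(nh4_in, flow, mlss, temp_c, nh4_out, nh4_target, kp=0.5):
    """Illustrative feed-forward + feedback DO set point (mg/l).

    nh4_in, nh4_out : inlet / outlet ammonia concentrations (mg/l)
    flow            : influent flow (m3/h); mlss : biomass (mg/l)
    All coefficients are hypothetical, chosen only to show the structure.
    """
    # Feed-forward: scale DO demand with ammonia load per unit biomass.
    load = nh4_in * flow / max(mlss, 1e-6)
    feedforward = 1.0 + 0.04 * load
    feedforward *= 1.0 + 0.02 * max(15.0 - temp_c, 0.0)  # more air when cold

    # Feedback: trim the set point in proportion to the outlet ammonia error.
    feedback = kp * (nh4_out - nh4_target)

    return min(max(feedforward + feedback, 0.5), 3.0)    # clamp to a safe band

sp_high = do_setpoint(30.0, 500.0, 3500.0, 12.0, nh4_out=2.0, nh4_target=1.0)
sp_low = do_setpoint(30.0, 500.0, 3500.0, 12.0, nh4_out=0.5, nh4_target=1.0)
```

The feed-forward term reacts immediately to the incoming load, while the feedback term corrects any residual error at the lane outlet, giving the rapid response and set-point accuracy described above.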
The development of this controller was not unique, and further research and development has led to other controllers for plant control, all based upon instrumentation monitoring what is going on in the operational process, with modelling giving the next steps for the controllers to action. Amongst other advanced control systems, this has been extended to another high-cost area within the wastewater factory approach: sludge dewatering. An example of this is at Northumbrian Water’s treatment works at Bran Sands.
At Bran Sands on Teesside, Northumbrian Water’s site houses a regional sludge treatment centre and effluent treatment works. The site treats the majority
of sludge in the North East − with drying and digestion capabilities. The sludge is processed using the CAMBI thermal hydrolysis digestion process. The plant
processes 40,000 tonnes of dry solids of indigenous and imported sewage sludge per year, and has a generating capacity of up to 4.7MW. Besides a reduction
of carbon emissions, the process leads to huge reductions in consumption of biogas and imported electricity (90 % and 50 % respectively) and thus significantly
saves on operating costs.
Upstream of the CAMBI process, the incoming sludge has to be dewatered to increase the DS content from ~2% to 18%. Sludge dewatering requires mixing the incoming sludge with a polymer solution prior to the actual dewatering step in a decanter centrifuge. Adjusting the polymer dose had been done manually in the past, leading to high polymer consumption and subsequently a high anti-foam consumption to reduce the foam formation caused by an excess of polymer. Hence the objective of the sludge dewatering optimisation was to keep the DS content at the desired 18% and to reduce the polymer consumption.
The installation of dry solids monitoring and a real-time controller enabled the measurement of the feed dry solids, which in turn enabled control of the polymer dose. Avoiding overdosing of polymer minimised the electrostatic repulsion, which in turn decreased the amount of anti-foam that was used. The benefits were a stable dry solids concentration post-dewatering, a 40% reduction in polymer and a 75% reduction in antifoam.
Left diagram, before optimization: Very large variations in polymer dose rates leading to unsatisfactory cake quality (under dosing) and antifoam requirement due to overdosing. Right
diagram, after optimization: Very stable polymer dose rates – Average 5.2 g polymer / kg DS.
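The feed-forward dosing calculation behind this kind of controller can be sketched simply: measure the feed dry solids, convert to a dry-solids mass flow, and dose polymer at a fixed rate per kilogram of DS. The function and its inputs are illustrative; only the 5.2 g/kg DS average dose rate comes from the figures above.

```python
def polymer_dose_g_per_h(feed_ds_pct, sludge_flow_m3h, dose_g_per_kg_ds=5.2):
    """Illustrative feed-forward polymer dose from measured feed dry solids.

    Assumes sludge density of ~1000 kg/m3; the default dose rate is the
    5.2 g polymer / kg DS average quoted for the optimised plant.
    """
    ds_kg_per_h = sludge_flow_m3h * 1000.0 * feed_ds_pct / 100.0  # kg DS/h
    return ds_kg_per_h * dose_g_per_kg_ds                         # g polymer/h

# e.g. 50 m3/h of sludge at 2% dry solids
dose = polymer_dose_g_per_h(2.0, 50.0)
```

Because the dose tracks the measured DS load rather than a fixed volume, variations in feed thickness no longer translate into under- or over-dosing.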
Page 37
Using instrumentation alone to control elements of a treatment works is not the only approach to advanced process control within the water industry. The alternative approach within the UK water industry is controlling on a multi-variate basis, using instrumentation and other process information from intelligence in the treatment works to infer values where necessary. This is the approach that has been used by Perceptive APC in the WaterMV Advanced Process Control technique.
The WaterMV Advanced Process Control technique uses process and quality data to develop a digital model of each plant’s performance, behaviour,
constraints and opportunities. The system is made robust by using software-derived values alongside real-world sensor measurements; when a hardware
sensor fails or begins to drift, or when communications are lost, the ‘soft’ sensors automatically take over. The plant can continue to be tightly controlled, even
with a high proportion of sensors unavailable. In fact, this approach permits fewer sensors to be installed in the first place, or allows sensors to be removed
because they are no longer required.
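The fallback logic described above amounts to choosing between a hardware reading and an inferential estimate based on sensor health. A simplified sketch (the function and its checks are illustrative, not the WaterMV implementation):

```python
import math

def select_reading(hard_reading, soft_estimate, healthy):
    """Use the hardware sensor when it is trustworthy; otherwise fall back
    to the inferential ('soft') sensor estimate. A simplified sketch only:
    real systems also check for drift, staleness, and rate-of-change."""
    if healthy and hard_reading is not None and math.isfinite(hard_reading):
        return hard_reading
    return soft_estimate

normal = select_reading(1.8, 1.7, healthy=True)        # hardware value used
dropout = select_reading(float("nan"), 1.7, healthy=True)  # soft value used
flagged = select_reading(1.8, 1.7, healthy=False)      # soft value used
```

Because the controller always has a usable value, control continuity does not depend on every physical sensor being available at once.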
Because the control strategy is based on an accurate model of how the plant will perform under any set of circumstances, it is properly known as model
predictive control, i.e., control moves are made ahead of time, based on how the plant will respond in future. This is of particular value when automatically
managing storm or first flush events. The control model reaches all the way back to aeration because, in many cases, site aeration control is simply not
accurate or reliable enough. For example, a fine-bubble diffused aeration ASP will have a common manifold juggling air flows across multiple zones, with
multiple actuated valves all demanding or choking off the air in competition with each other. (This is the multivariable nature of many complex processes,
and is the ‘MV’ in the product’s name). Modelling this behaviour enables a more elegant and coordinated scheme to deliver air when it’s needed, where it’s
needed, without tripping blowers, starving pockets or over-aerating zones. As a result, the system can be implemented on ASPs that are surface aerated,
jet-aerated, or use FBD, for which WaterMV is the best possible solution. Provided enough data and controllability exists, it is also a natural fit for BAFF plants
and SBRs.
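The essence of model predictive control is choosing the control move from a prediction of the plant’s future response. A deliberately minimal one-step sketch for an assumed first-order plant is below; real MPC optimises over a multi-step horizon with many interacting variables, and nothing here reflects Perceptive’s actual scheme.

```python
# Assumed first-order plant model: x[k+1] = a*x[k] + b*u[k] + d[k],
# where d is a measured disturbance such as incoming load.
a, b = 0.9, 0.5

def mpc_move(x, d, target, u_min=0.0, u_max=10.0):
    """Pick the control u that drives the one-step-ahead prediction to the
    target, then clip to actuator limits. Illustrative sketch only."""
    u = (target - a * x - d) / b
    return min(max(u, u_min), u_max)

# e.g. current state 2.0, disturbance 0.1, hold the target at 2.0
u = mpc_move(x=2.0, d=0.1, target=2.0)
```

Because the move is computed from the predicted response rather than the current error, the controller can act before a storm or first-flush disturbance has fully arrived, which is the advantage claimed above.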
The WaterMV controller sits above the SCADA/PLC as a supervisory layer; if the underlying PLC-based process control is not fit-for-purpose, WaterMV will not
be considered.
A key constraint built into each model is the final quality desired from that particular process. By improving control, WaterMV reduces variability in the process
and, therefore, reduces risk of non-compliant operation. This can be exploited as further energy savings, or as an increase in process capability, allowing capital
improvement or expansion projects to be deferred. In other words, it is a perfect fit for the aims of totex, which are common across the manufacturing sectors
in which the technology was born. In addition, the model detects discrepancies between predicted and actual behaviour, to help pinpoint either developing
external conditions (such as a toxic event), or a slow drift away from optimal operation caused by process failure or degradation.
The Perceptive system is not limited to the ASP, but can automatically control RAS and SAS, sludge age and FST levels, to provide end-to-end improvement of
the works. In addition, the same approach can be used to both optimise yield from anaerobic digesters, as well as energy generation from the CHP. By tying
this together with ASP control, WaterMV can provide site-wide energy optimisation, with minimal operator intervention. The downside, if there is one, is
that each monitoring and control scheme must be developed to address the challenges and issues associated with each particular site or asset. This is not a
plug-and-play option, simply because no two plants are identical; a unique set of challenges requires a bespoke solution.
The multivariate process technique was used at Lancaster Treatment Works as a proof of concept for United Utilities; the dilemma was poor control of DO and high energy costs. Using historical process data, a robust mathematical model of the plant was constructed to enable the prediction of future behaviour and the impact of disturbances on performance. The model is also capable of assessing the quality of signals taken from the plant, determining which are reliable and which should be discounted from future control decisions. The final control scheme was able to reconstruct missing or corrupt data, in real time, enabling optimal operation to be maintained even if some critical signals are lost or the data becomes untrustworthy. The result of the scheme was an average energy saving over 12 months of 26% when compared with previous best performance, with continuing development offering significantly higher savings and a fast return on investment.
United Utilities calculate an annual reduction in equivalent CO2 of more than 250 Tonnes. Plant performance is more tightly controlled, with less operator
intervention required to maintain optimal process operation and maintain high levels of final effluent discharge quality.
Because the system works on a more process-based approach that questions the quality of instrument data, financial losses can be tracked across the plant, and this is what has been done with this system at another treatment works, providing control-room decision support. Model-based control requires instrumentation, and if and when a sensor drifts or fails, a fall-back position is the default. In the WaterMV system a “soft” inferential sensor takes over to maintain an acceptable safety margin. The system then calculates the additional operating expense, the “Lost Opportunity”, including daily and cumulative costs. This enables operators and managers to prioritise maintenance of process sensors based on the cost and impact of their non-availability.
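The prioritisation idea can be sketched as a simple cost ranking. The sensor names, downtime figures, and tariff below are hypothetical, and the calculation is only one plausible way to cost the conservative fallback operation, not Perceptive’s actual method:

```python
def lost_opportunity_cost(extra_kw, hours_unavailable, tariff_per_kwh):
    """Hypothetical cost of running on a conservative fallback set point
    while a sensor is unavailable: extra power drawn, for how long, at what
    tariff. Illustrative only."""
    return extra_kw * hours_unavailable * tariff_per_kwh

# Hypothetical sensors: (extra kW while in fallback, hours unavailable)
downtime = {"NH4-outlet": (12.0, 36.0), "DO-zone2": (4.0, 120.0), "MLSS": (1.5, 20.0)}
tariff = 0.15  # assumed cost per kWh

costs = {name: lost_opportunity_cost(kw, hrs, tariff)
         for name, (kw, hrs) in downtime.items()}
priority = sorted(costs, key=costs.get, reverse=True)  # most costly first
```

Ranked this way, a modestly power-hungry fallback that persists for weeks can outrank a severe but short-lived one, which is exactly the cost-and-impact view the system gives maintenance planners.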
What this shows is that there is huge potential for Advanced Process Control within the water industry to optimise the performance of a well-run treatment works; however, the benefits of both types of system are not yet fully understood, and this is acting as a barrier to the development of Advanced Process Control in the UK.
Page 38
Water, Wastewater & Environmental Monitoring Virtual
13th - 14th October 2021
The WWEM Conference & Exhibition has been changed to a virtual conference and exhibition for 2021, with a physical conference and exhibition to follow in 2022. Details on WWEM Virtual will be released in the coming months, but it is sure to include a huge amount of technical workshops and events for attendees to enjoy.
International Water Association Digital Water Summit
15th-18th November 2021 - Euskalduna Conference Centre, Bilbao, Spain
In 2021, the first edition of the IWA Digital Water Summit will take place under the tag-line “Join the transformation journey”, designed to be the reference event in digitalisation for the global water sector. The Summit has a focus on business and industry, and technology providers and water utilities will be among the key participants that will discuss and shape its agenda. The programme includes plenary sessions, interactive discussions, side events, an exhibition, technical visits, and social events.
Sensor for Water Interest Group Workshops
The Sensors for Water Interest Group has moved its workshops to an online webinar format for the foreseeable future. The next workshops are:
16th June 2021 - Achieving Net Zero
14th July 2021 - How can sensors protect our coastal waters
Zero Pollutions Conference 2021
14th July 2021, Online
The Zero Pollutions Conference is returning for 2021 and is being hosted by Isle Utilities. The theme this year is "Today & Tomorrow" and tickets are available via Eventbrite.
WEX Global 2021
28th - 30th June 2021 - Valencia, Spain
The WEX Global Conference, sponsored by Idrica, is currently due to take place in Valencia, Spain, in June 2021. The conference concentrates on the circular economy and smart solutions to resolve some of the global water industry's issues.
Page 39
Conferences, Events,
Seminars & Studies
Conferences, Seminars & Events
2021 Conference Calendar
Due to the current international crisis there has been a large amount of disruption to the conference calendar. A lot of workshops have moved online, at least in the interim, and a lot of organisations are using alternative means of getting the knowledge out there, such as webinars popping up at short notice. Do check your regular channels for information on events that are going on. Also do check the dates provided here: they are accurate at the time of publishing but, as normal, are subject to change.
Page 40

WIPAC Monthly - May 2021

  • 1.
    WIPAC MONTHLY The MonthlyUpdate from Water Industry Process Automation & Control www.wipac.org.uk Issue 5/2021- May 2021
  • 2.
    Page 2 In thisIssue WIPAC Monthly is a publication of the Water Industry Process Automation & Control Group. It is produced by the group manager and WIPAC Monthly Editor, Oliver Grievson. This is a free publication for the benefit of the Water Industry and please feel free to distribute to any who you may feel benefit. However due to the ongoing costs of WIPAC Monthly a donation website has been set up to allow readers to contribute to the running of WIPAC & WIPAC Monthly, For those wishing to donate then please visit https://www.patreon.com/Wipac all donations will be used solely for the benefit and development of WIPAC. All enquires about WIPAC Monthly, including those who want to publish news or articles within these pages, should be directed to the publications editor, Oliver Grievson at olivergrievson@hotmail.com From the editor............................................................................................................. 3 Industry news.............................................................................................................. Highlights of the news of the month from the global water industry centred around the successes of a few of the companies in the global market. 4 - 11 A brand new rising main monitoring programme........................................................ This was a case study by Syrinix who worked with Anglian Water on using their technology on wastewater rising mains to detect asset failure. There is a fascinating WIPAC Webinar available on the YouTube channel on the subject. 12 - 15 The Smart Water Industry is no longer a choice....its a must.......................................... 
This article was something I wrote back in 2019 after attending a conference about Smart Water and like anything good came out of a slight sense of frustration as to why the industry wasn't as far along as it could and should be with Smart Water 16 - 19 Optimisation of a SBR using enhanced control............................................................ In this second case study of this edition we revisit the case study that we first published in 2018 and looks at the savings made at the Cookstown WwTW using an advanced activated sludge plant controller, the ASP-CON. 20 - 22 WirelessHART networks: 7 myths that cloud their use for process control..................... WirelessHART is something that has never really achieved what it can do in the water industry. In this revisited article by ABB we look at the protocol and its benefits to the water industry 23 - 26 Getting data quality wrong and how to get it right....................................................... The fundamentals of instrumentation are often ignored and right now the time has never been so important to get it right. In this revisited article we look at the basic principles of getting it right 27 - 29 Is Water 4.0 the future of the Water Industry An article written in 2016 about Water 4.0 by myself, what was said five years ago is just as relevant now 30 - 32 Using online water quality distribution systems monitoring to detect and control nitrification............................................................................................................ An article from 2015 looking at distribution systems monitoring data to provide real-time detection 33 - 35 The use of APC in the modern wastewater industry.................................................. 
And lastly an article from 2015 looking at APC and multi-variate process control techniques that allows control of wastewater plants using both Real Time Control (RTC) and Multi-Variate Process Control 36 - 38 Workshops, conferences & seminars............................................................................ The highlights of the conferences and workshops in the coming months. 39 - 40
  • 3.
    Page 3 From theEditor Sometimes it seems only yesterday since I started the WIPAC Group and started putting together WIPAC Monthly and sometimes it seems a very long time ago in deed. This month it was precisely ten years since I started the group and I remember in detail it growing from a few members to its thousandth member a year later. I remember swapping messages with a water treatment plant manager who wanted me to convince him as to why he should join the group. Nine years after that the group is just over 9,500 members and WIPAC Monthly has gone from a one or two page summary to the bumper edition that I've put together this month featuring some of my favourite articles from just the past five years . On the 16th May I put an honest message together for members of the group and in it really stated the honest truth, it is you the readers of WIPAC Monthly, everyone who helps me put together the WIPAC Webinars and more recently are first WIPAC Showcase that have really made the group what it now is ten years later. In the past ten years I have only managed to meet a fraction of the members in my travels to various conferences & exhibitions around the world and that is, at least for me, something that is a real shame. Hopefully, over time when we get back to physical or maybe even hybrid conferences it will change. In the mean time the virtual conference circuit has its benefits and there will be much more hopefully of the WIPAC webinars, showcases and anything else that we as a group can share with each other. That was the aim of the WIPAC Group from the very beginning to share the successes and failures of using instrumentation, automation & control. I have certainly seen my fair share of success and also my fair share of failure too and the wise words that you learn more from your failures is always going to be true. 
So, hopefully you can forgive me this look back over the past articles of the last five years (if i went back any further I think I would burst most people's inboxes) and I hope that you enjoy this latest edition. My final words in this short editorial this month is to say that I hope that you have enjoyed the last ten years of WIPAC and WIPAC Monthly for me at times there have been some very late nights (and early mornings) putting the monthly editions together and the final words are to say there's alot more to come over the next ten years and maybe ten more after that. Have a good month and of course stay safe, Oliver
Syrinix Launches New Combined Acoustic Leak and Pressure Monitor

Syrinix this month announced the launch of its smart network monitoring tool that combines high-resolution pressure monitoring and leak detection in one solution: PIPEMINDER-ONE Acoustic. This new version of the popular PIPEMINDER-ONE extends the tool’s existing pressure monitoring with acoustic monitoring to locate leaks and bursts. Combined with RADAR, Syrinix’s cloud analysis platform, PIPEMINDER-ONE Acoustic locates leaks on a broad range of pipeline materials and sizes. Like the rest of the PIPEMINDER-ONE family, the Acoustic version triangulates pressure events and sends intelligent alarms so utility users can identify and fix potential problems on their network. All data is recorded by a precise time-stamped management information system synced to reliable 4G, 3G and 2G mobile networks. Because units are widely spaced along the distribution network, fewer PIPEMINDER-ONE Acoustic units than traditional leak detectors are needed to obtain valuable high-resolution data. “Water and wastewater utilities need cost-effective and resilient monitoring systems,” notes Mark Hendy, Vice President of Business Development EMEA at Syrinix. “The PIPEMINDER-ONE Acoustic can be installed permanently or on a semi-permanent survey basis for use detecting both leaks and the damaging pressure events that can lead to leaks and bursts.” The benefits of PIPEMINDER-ONE Acoustic, Hendy adds, translate to significant cost savings: “Preventing asset deterioration is often the best way to maintain a viable utility.
Using PIPEMINDER-ONE Acoustic for the early detection of leaks and problematic pressure sources, utilities can proactively make operational adjustments to prevent wear and tear on the network instead of reacting to asset failures.” By supporting informed decision-making with data to calm networks, previous iterations of the PIPEMINDER monitor transformed how utilities manage and maintain assets. Adding acoustic leak detection to its transient monitoring capabilities, this new iteration of the PIPEMINDER provides new data combinations in a smaller footprint. PIPEMINDER-ONE Acoustic records pressure at 128 samples per second, generating both transient and summary data, which can be used for triangulation, clustering, classification and export via an API. Acoustic data from a new, improved hydrophone is used in combination with pressure monitoring to identify a leak’s position. With speedy and precise detection, utilities can now respond quickly to operational and network failures before customers notice any problems and, with the same unit, identify and mitigate the pressure events contributing to those leaks and bursts. Ben Smither, Vice President of Engineering at Syrinix, echoes Hendy’s emphasis on a modern solution: “Modern utilities must monitor for developing leaks while performing real-time analysis of pressure transient events. Combining leak notifications and high-resolution pressure monitoring with zone alarms, PIPEMINDER-ONE Acoustic empowers operators with the data to save time, save money and improve performance.”

WIPAC hits its 10th Anniversary

This month the Water Industry Process Automation & Control Group reached its 10th anniversary. Launched on 11th May 2011, over the years it has gathered a membership that currently stands at just over 9,500 members, all interested in instrumentation, control and automation and how this all fits into the digital transformation of the water industry.
In a video message to the group, Oliver Grievson, who has now produced 115 editions of WIPAC Monthly, expressed his gratitude to each and every member of the group, from those who joined on the very first day to the most recent members. This month also saw the launch of WIPAC's new initiative, the WIPAC Showcase, which presents new innovations and products in the water industry so that members of the WIPAC group can keep up to date with the latest developments. The first company to showcase their newest developments was Vega Control Systems, who have been a long-time supporter of Water Industry Process Automation & Control. In this first showcase, Matt Westgate, the Water Industry Manager at Vega, and his colleague Peter Devine took us through the first level-based flow device in the water industry certified to operate without a separate transmitter. The C21 and C22 devices, along with the Vegapuls 21 and 31, can operate independently of the transmitter (the Vegamet 861/862) and have a 2 mm accuracy over a 5 m range, which puts the radar into a Class A category of certification. The first WIPAC Showcase is available for members on the WIPAC YouTube channel. Any other companies interested in taking part in a WIPAC Showcase should contact Oliver Grievson, the Executive Director at WIPAC.

Industry News
Affinity Water in UK first for two novel Industry 4.0 applications using smart demand management

In a UK first, Affinity Water is set to trial two novel Industry 4.0 (I4) applications using smart demand management for existing drinking water and rainwater storage systems. The project is one of Affinity Water’s two winning initiatives, produced in collaboration with other water companies, UK universities and government agencies, to improve the efficiency and resilience of its water supplies. The trial will seek to unlock ‘hidden gems’ by making the most of existing water storage assets in a new way, in order to build network resilience and pave the way for the industry to explore new solutions further. Working in collaboration with the University of Exeter, Aqua Civils and technical consultants, Affinity Water proposes to develop a ‘business model canvas’ for drinking water and rainwater storage tanks, harnessing real-time monitoring and control solutions to explore optimised strategies for real-time top-up control. Affinity Water focussed the design of the proposal on the operational system resilience and open data themes. Historically, decentralised water tanks, such as those feeding tower blocks, and rainwater harvesting tanks automatically fill with mains water during peak water usage periods. In extended dry spells, rainwater harvesting systems fail to reduce demand on the potable network when they are most needed. The outcome of the trial will quantify the scale of the opportunity to implement smart water tank control at existing customer assets to build operational resilience and reduce disruption to customers. It will significantly enhance Affinity Water’s aim to improve the efficiency, flexibility and resilience of water networks for the benefit of customers in the future while protecting the environment. Partners include the University of Essex and Aqua Civils, along with a range of experts and consultants.
Seagrass project will use nature-based solutions

The water company’s innovative ‘Seagrass Seeds of Recovery’ project will form part of its activity to use nature-based solutions to help address the problems of both a nature crisis and a climate emergency. Seagrass meadows enhance the stability of coastal zones, locking carbon into the seabed at a rapid rate, improving water quality and creating habitat for hundreds of thousands of small animals, enhancing the resilience of coastal ecosystems. In Essex and Suffolk, thousands of hectares of seagrass have been lost, and restoration of seagrass will help to support the UK Government’s 25-year Environment Plan. A consortium of ten partner organisations has been created to deliver this project, and strong collaboration will be maintained throughout. These are: Anglian Water; Project Seagrass (lead delivery partner); Salix River & Wetland Services; Cefas (Centre for Environment, Fisheries and Aquaculture Science); Environment Agency; Natural England; Department of Zoology and Wadham College, University of Oxford; Swansea University; and University of Essex. Affinity Water already undertakes significant nature-based activities through its long-standing catchment management and river restoration programmes. The utility is already in the process of considering many more nature-based opportunities, including planting at least 110,000 trees by 2030.
Thames Water offers to address competition concerns over smart meter roll-out

Following complaints and an investigation opened by Ofwat, Thames Water has offered formal commitments under the Competition Act 1998 which look to address concerns over the company's approach to rolling out its smart metering programme in the non-household market. Ofwat was concerned that Thames Water unfairly removed or limited access to water consumption data used by retailers and third parties, which is key information for detecting leaks, ensuring water efficiency and the accuracy of bills. Ofwat investigated Thames Water following complaints that it had:
• installed smart meters that were incompatible with data logging devices used by retailers and third-party providers;
• removed other parties' data logging devices when replacing meters with new digital smart meters; and
• failed to offer access to data from its smart meters to retailers and third-party providers on fair, reasonable and non-discriminatory terms.
Monopoly providers, such as Thames Water, have a responsibility to ensure that their actions do not harm competition in active markets. Where it had installed smart meters, Thames Water had effectively withdrawn direct access to its meters and failed to provide a suitable alternative for retailers and third-party providers to access the water consumption data the meters provide. Those parties need that data to provide their own services to customers. Ofwat has concerns that this has the potential to negatively impact competition and the benefits for customers and the environment. To address these competition concerns, Thames Water is proposing commitments to introduce technology which allows its smart meters to have logging equipment attached to them, and will ensure that the data services it provides to retailers and third-party providers are offered on fair, reasonable and non-discriminatory terms.
It has stopped proactively replacing meters that have logging equipment attached until this technology is introduced. Thames Water also proposes commitments to make improvements to how it engages with retailers and third-party providers, to better understand and respond to their needs, and to ensure it fully considers its impacts on markets when making decisions. Ofwat considers that, when fully implemented, these commitments will address the concerns it identified, and proposes to accept them. Ofwat is now consulting on the commitments proposed by Thames Water before making its final decision. If Ofwat decides to accept the commitments, Thames Water will have to report to Ofwat on their implementation. Emma Kelso, Senior Director of Markets and Enforcement, said: "We're pleased to see Thames recognising the need to address our competition concerns to ensure it plays its part in making sure markets work effectively. And, as the sector expands its smart meter programmes, it is important that all companies are mindful of the dominant position they hold in the market and how their actions can affect markets, customers and other providers."

Riventa Puts Paris Pumping Station On Schedule For Big Savings

At a critical pumping station supplying a major tourist destination near Paris, deployment of Riventa’s proprietary FREEFLOW i4.0 pump monitoring system and HydraNet software has proved that rescheduling pumps for optimum performance provides an immediate saving of 21%. Commissioned by leading water utility Saur, Riventa’s challenges were to identify a potential reduction in life-cycle costs, as well as show how optimisation of controls and operations would bring about immediate and long-term savings in running costs, including energy efficiency. Riventa directly assessed the performance of three large and three small variable-speed pumps.
They discovered that at the pumping station, during certain flow rates, the system operated at up to twice the cost of optimum operation for significant amounts of time, and was also well in excess of the lowest possible specific power. Steve Barrett, Managing Director of Riventa, commented: “From the existing total operating cost per year of just over 157,000 Euros, we showed that by optimising the current pumps, we could reduce that to less than 124,000 Euros – but it is the optimisation of pump schedules that provides the main benefit. Payback can be achieved in less than 12 months, with the significant improvements in operational performance also extending the lifetime of pumps.” He added: “Based on all of the pumps being refurbished and operated at optimum configuration, the operating cost would lower even further to just under 113,500 Euros – though it must be said that the units were in good condition and a low priority for refurbishment at the time of the project. This means that capital investment can be focused on other priorities. Assuming a CAPEX of a little over 100,000 Euros, payback would be achieved in three years. This is a classic example of what can be achieved at pumping stations all over the world.”
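The savings figures Barrett quotes hang together arithmetically. As a rough cross-check, a minimal sketch using the rounded figures from the article (illustrative only; the exact CAPEX is only given as "a little over" 100,000 Euros):

```python
# Rough consistency check of the figures quoted in the Riventa piece
# (rounded to the article's numbers; illustrative only).
current_cost = 157_000      # EUR/year, existing operation
optimised_cost = 124_000    # EUR/year, optimised pump scheduling alone
refurbished_cost = 113_500  # EUR/year, refurbished pumps at optimum configuration
capex = 100_000             # EUR, "a little over" this, per the article

scheduling_saving = current_cost - optimised_cost    # 33,000 EUR/year
saving_pct = 100 * scheduling_saving / current_cost  # ~21%, matching the quoted figure
total_saving = current_cost - refurbished_cost       # 43,500 EUR/year
payback_years = capex / total_saving                 # ~2.3 years, within the quoted three
```

With the CAPEX a little over 100,000 Euros, the payback comes out comfortably inside the three years Barrett cites.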
Thames Water completes “monumental” £20 million IT upgrade to future-proof London water supply

The complex computer systems which control London’s drinking water supplies have been upgraded while keeping the taps running, in a “monumental” £20 million project by Thames Water. Moving from the 25-year-old RTAP system to the new ClearSCADA platform saw the replacement of multiple legacy and obsolete systems while keeping customers in supply across the capital. One of the largest of its type in Europe, the technology monitors output from the five big treatment works in London – Hampton, Coppermills, Walton, Ashford and Kempton – as well as more than 200 service reservoirs, pumping stations and boreholes, many of which are unmanned and need to be operated remotely. Carly Bradburn, Thames Water’s head of digital operations, said: “The computer system oversees the production, treatment and delivery of up to 2.2 billion litres of drinking water every day. Replacing it has been a very complex and challenging project. “The old system was over 25 years old and software updates were no longer available. Replacing it needed the engagement of multiple stakeholder groups, external suppliers and companies, and has been a vast undertaking.” The commissioning of the new system included checking and validating more than 700,000 data points, and around 100,000 functional, mimic, alarm and user tests, to ensure minimal operational disruption and risk. The new system, supplied by Schneider Electric, was migrated over several months last year and this year, running alongside the old system to resolve any problems, before taking full control of the whole estate. Mark Grimshaw, Thames Water’s head of London water production, said: “Investing in resilient systems and assets is one of our key priorities. There can’t be many more important projects than updating the technology that ensures a reliable water supply for one of the world’s major cities.
“Keeping the old system up and running while launching the new system alongside it has been a monumental effort by everyone involved – a great example of teamwork at its very best.”

Sensors for Water Interest Group (SWIG) extends its call for training videos and announces its next three webinars

The Sensors for Water Interest Group has announced this month that it is extending its call for training videos for what is going to act as a hub where members can go to gen up on their knowledge of instrumentation within the water industry. The library, which is currently available on the Sensors for Water Interest Group website, features 30 videos covering areas such as:
• water quality;
• level & flow meters; and
• telemetry, IoT and logging.
This is only the start of the video library on the SWIG website, which is designed to act as a vital resource for instrumentation specialists within the water industry, in order to help the utility companies as much as possible. Companies who are members of SWIG are more than welcome to submit training videos for inclusion in the video library on the SWIG website. SWIG has also announced its next three webinars:
• "Achieving net zero" will take place on 16th June and is being hosted by Frances Cabrespina of Suez, who are kindly sponsoring the event so SWIG members can attend free. There is also a taster event happening on 10th June, with a keynote by Matt Gordon, the Engineering Manager of Suez, and a live discussion.
• On 14th July there will be a webinar, "How sensors protect our coastal waters", chaired by Michael Strahand and kindly sponsored by Xylem Analytics.
• On 29th September there is a webinar, "How to get the best value out of sensors", sponsored by Siemens and hosted by Oliver Grievson, the current SWIG Chairman and Technical Lead at Z-Tech Control Systems.
For more details on all of these events please visit the SWIG website at www.swig.org.uk
Unlocking the power of water data is becoming a must-have for utilities

Ovarro’s associate product line manager for RTUs and loggers, Adam Wright, discusses why more frequent capture of water supply and distribution data is becoming a must-have for utilities as they strive to build network resilience, improve customer experience and meet regulatory expectations. Adam shares insights into the latest developments in a Q&A session.

What can today’s data tell utilities about their water networks?

Data logging allows water companies to accurately and reliably record pressure, flow and level across the water network, interfacing with common industry flow meters and sensors to enable efficient network management. Visibility of district metered areas (DMAs), combined with network models, pressure surveys, consumer flow monitoring and reservoir depth calculations, means water companies are able to make informed decisions that will result in a reduced cost of network ownership. With more data comes increased insight and, ultimately, increased value.

What are some of the data capture challenges faced by utilities?

Key challenges include the increasing pressure on data security, a growing need for more battery power to send more data for longer periods, and communications reliability. These are always front of mind for Ovarro when developing and updating its data loggers. The good news is, technology around sensors, communications and battery life is advancing rapidly. Ovarro’s data loggers can now communicate with multiple different sensors from one device using the internet of things (IoT). They are programmed wirelessly using a Bluetooth app, and data is sent securely to the cloud or the customer’s system. The rollout of 4G and IoT networks has significantly improved communications. Ovarro has very recently updated the XiLog advanced data logger following an intensive period of research and development.
The latest version comes with 4G or NB-IoT/Cat-M1 and Bluetooth as standard, with fifth-generation (5G) broadband connectivity to follow in the future. IoT has been a real gamechanger in reducing power consumption and allowing loggers to send data more frequently. Battery technology has also progressed, allowing Ovarro’s loggers to deliver as much as a 10-year battery life. This means fewer battery changes and site visits, which reduces environmental impact while freeing up time and saving costs.

What are the differences between 4G, 5G and IoT networks?

The difference between the available connectivity networks mostly comes down to coverage. Currently, 4G and both narrowband and LTE-M IoT are the most cost-effective options, and together they cover most of the world. Use of 5G is currently too expensive for most applications, but the price is slowly coming down. IoT is available globally, and IoT modems consume the least power in sending data. This means that users can either send more data or get a longer life out of the battery. By contrast, 4G mainly uses region-specific modems. The phasing out of 3G services means that 4G, which is being built out from legacy 3G networks, will become a requirement. Customers are no longer interested in investing in any technology that has only 3G, due to its limited lifespan. It is expected that 5G will eventually replace 4G, though not imminently.

What advances have been made in the reliability and frequency of data capture?

Historically, data loggers would capture data on a set schedule, say one datapoint every 30 minutes, then relay it once a day. This means that if the signal is interrupted for any reason, a whole day's data is delayed. The logger would then try to send it the next day. In theory, the data could still be extracted, but it could be days later.
From an operations point of view, receiving the data as close to real time as possible means that personnel can react quickly to changes and irregularities. Where data is delayed or lost, severe pressure changes in the water network might be missed, or not acted on until days later. These occurrences could indicate serious incidents like leaks or loss of customer supply. In both cases, the water company wants to see the data as soon as possible to mitigate the impact on customers. We are seeing now that water companies want data sent every 15 or 30 minutes – thankfully, the improvements in battery power mean this is now possible.

What part can data capture play in meeting regulatory targets and net zero carbon goals?

Data logging enables efficient network management, leading to resilient and reliable water supplies – a win-win for both customers and the environment. On the regulatory side, an efficient network means fewer bursts, supply interruptions and leaks. The larger datasets that the updated XiLog is capable of collecting and sending are integral to the network management activities that help shape and optimise leak detection programmes. Maintaining operational control over these critical areas will also play a part in utilities in England and Wales achieving the net zero goal on carbon emissions by 2030. If the amount of water lost through leakage is reduced, the volume of water being treated and put into supply is also reduced, cutting energy consumption and carbon emissions in the process. Similarly, if a burst or leak causes low pressure for customers, pumps must work harder, therefore consuming more power. Having reliable data allows action to be taken before any significant customer or environmental impact is felt.

Where should the sector be going from here?

In the near future we can expect to see fully connected networks, with hardware and analytics making real-time decisions based on water company goals and challenges.
The future is not about hardware alone; however, next-generation data loggers are the key to benefiting from this combined approach, which includes IoT connectivity, big data and advanced analytics. The current trend is clear: the market is moving in a direction that enables water companies to receive more and more data. Now, more than ever, the question is about getting the most value out of that data and having the right processes and systems in place to do so. We know that with more data comes more potential insight, but the true value comes when that data is efficiently visualised and analysed.
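The practical difference between the daily-batch relaying and the 15-minute sends that Wright describes can be made concrete with a simple worst-case calculation; this is an illustrative simplification, not Ovarro's transmission logic:

```python
def worst_case_data_age_hours(send_interval_min, failed_sends):
    """Worst-case age of the oldest unsent reading when `failed_sends`
    consecutive transmission windows fail (simplified model: a reading
    logged just after one send waits for failed_sends + 1 windows)."""
    return send_interval_min * (failed_sends + 1) / 60.0

# Legacy scheme: relay once a day; one missed window means some of
# yesterday's readings arrive up to two days old.
legacy_hours = worst_case_data_age_hours(24 * 60, failed_sends=1)  # 48.0 h
# IoT scheme: send every 15 minutes; one missed window costs half an hour.
iot_hours = worst_case_data_age_hours(15, failed_sends=1)          # 0.5 h
```

The contrast (48 hours versus half an hour for a single missed transmission) is why frequent, low-power sends matter for acting on pressure events before they become incidents.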
Artificial Intelligence Predicts River Water Quality With Weather Data

The difficulty and expense of collecting river water samples in remote areas has led to significant – and in some cases, decades-long – gaps in available water chemistry data, according to a Penn State-led team of researchers. The team is using artificial intelligence (AI) to predict water quality and fill the gaps in the data. Their efforts could lead to an improved understanding of how rivers react to human disturbances and climate change. The researchers developed a model that forecasts dissolved oxygen (DO), a key indicator of water’s capability to support aquatic life, in lightly monitored watersheds across the United States. They published their results in Environmental Science & Technology. Generally, the amount of oxygen dissolved in rivers and streams reflects their ecosystems, as certain organisms produce oxygen while others consume it. DO also varies based on the season and elevation, and the area’s local weather conditions cause fluctuations too, according to Li Li, professor of civil and environmental engineering at Penn State. “People usually think about DO as being driven by stream biological and geochemical processes, like fish breathing in the water or aquatic plants making DO on sunny days,” Li said. “But weather can also be a major driver. Hydrometeorological conditions, including temperature and sunlight, are influencing the life in the water, and this in turn influences the concentration levels of DO.” Hydrometeorological data, which tracks how water moves between the surface of the Earth and the atmosphere, is recorded far more frequently and with more spatial coverage than water chemistry data, according to Wei Zhi, postdoctoral researcher in the Department of Civil and Environmental Engineering and first author of the paper.
The team theorized that a nationwide hydrometeorological database, which would include measurements like air temperature, precipitation and stream flow rate, could be used to forecast DO concentrations in remote areas. “There is a lot of hydrometeorological data available, and we wanted to see if there was enough correlation, even indirectly, to make a prediction and help fill in the river water chemistry data gaps,” Zhi said. The model was created through an AI framework known as a Long Short-Term Memory (LSTM) network, an approach used to model natural “storage and release” systems, according to Chaopeng Shen, associate professor of civil and environmental engineering at Penn State. “Think of it like a box,” Shen said. “It can take in water and store it in a tank at certain rates, while on the other side releasing it at different rates, and each of those rates are determined by the training. We have used it in the past to model soil moisture, rain flow, water temperature and now, DO.” The researchers received data from the Catchment Attributes and Meteorology for Large-sample Studies (CAMELS) hydrology database, which included a recent addition of river water chemistry data from 1980 to 2014 for minimally disturbed watersheds. Of the 505 watersheds included in the “CAMELS-chem” data set, the team found 236 with the needed minimum of ten DO concentration measurements in the 35-year span. To train the LSTM network and create a model, they used watershed data from 1980 to 2000, including DO concentrations, daily hydrometeorological measurements and watershed attributes like topography, land cover and vegetation. According to Zhi, the team then tested the model’s accuracy against the remaining DO data from 2001 to 2014, finding that the model had generally learned the dynamics of DO solubility, including how oxygen decreases in warmer water temperatures and at higher elevation. 
It also proved to have strong predictive capability in almost three-quarters of test cases. “It is a really strong tool,” Zhi said. “It surprised us to see how well the model learned DO dynamics across many different watershed conditions on a continental scale.” He added that the model performed best in areas with steadier DO levels and stable water flow conditions, but more data would be needed to improve forecasting capabilities for watersheds with higher DO and streamflow variability. “If we can collect more samples that capture the high peaks and low troughs of DO levels, we will be able to reflect that in the training process and improve performance in the future,” Zhi said. Penn State researchers Dapeng Feng, doctoral candidate in environmental engineering, and Wen-Ping Tsai, postdoctoral researcher in the Department of Civil and Environmental Engineering, and University of Nevada, Reno researchers Adrian Harpold, associate professor of mountain ecohydrology, and Gary Sterle, graduate research assistant in hydrological sciences, also contributed to the project. A seed grant from Penn State’s Institute for Computational and Data Sciences, the U.S. Department of Energy Subsurface Biogeochemical Research program, and the National Science Foundation supported this research.
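Shen's "box" analogy maps directly onto the LSTM cell recurrence: the cell state is the tank, and learned gates set the fill, release and read-out rates. A minimal single-unit sketch of one time step (scalar inputs for clarity; this illustrates the standard LSTM equations, not the team's trained model or its framework):

```python
import math

def lstm_cell(x, h_prev, c_prev, W):
    """One step of a single-unit LSTM cell (scalar weights for clarity).
    The cell state c plays the 'tank': the forget gate sets the release
    rate, the input gate the fill rate, the output gate the read-out."""
    sig = lambda z: 1.0 / (1.0 + math.exp(-z))
    i = sig(W["wi"] * x + W["ui"] * h_prev + W["bi"])        # input (fill) gate
    f = sig(W["wf"] * x + W["uf"] * h_prev + W["bf"])        # forget (release) gate
    o = sig(W["wo"] * x + W["uo"] * h_prev + W["bo"])        # output (read-out) gate
    g = math.tanh(W["wg"] * x + W["ug"] * h_prev + W["bg"])  # candidate input
    c = f * c_prev + i * g   # store: leak some old state, add new input
    h = o * math.tanh(c)     # release: gated read-out of the stored state
    return h, c
```

In the study's setting, the daily inputs x would be hydrometeorological drivers (air temperature, precipitation, streamflow) and the read-out h would feed a prediction of DO concentration; training adjusts the weights in W so the stored state tracks the watershed's "storage and release" behaviour.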
Meteor Communications Innovates Water Quality Monitoring Market

For decades, anyone needing to monitor water quality would purchase equipment to measure the parameters of interest. Today, an innovative, rapidly growing company, Meteor Communications, has challenged that model with its ‘Water Quality as a Service’ (WQaaS) solution. “Ultimately, people monitor water quality because they need data,” explains Meteor’s MD Matt Dibbs. “So, we would happily sell them water quality monitoring systems, but many of our customers now prefer to just pay for the data – and let us manage the equipment.” This radical approach has proved so popular with water companies, regulators and environmental consultants that hundreds of stations are now in the field, delivering continuous, real-time water quality data. Matt says: “Our monitoring systems are ideal for providing real-time data from remote locations because they operate on very low power and connect wirelessly with the MeteorCloud web portal, providing secure access for clients to view and download their own data.” Working with water companies and government agencies, Meteor Communications developed the ESNET (Environmental Sensor NETwork) autonomous water quality monitoring systems to allow rapid deployment with no requirement for pre-existing power or communication infrastructure. Modular, with multiparameter capability as well as built-in communications, ESNET systems deliver robust, high-resolution, real-time water quality data within minutes of deployment. The systems are available as a complete portable monitoring station or as part of a kiosk pumped system for semi-permanent or fixed installations. ESNET enables the rapid creation of monitoring networks, which is a particular advantage in the monitoring of catchments because it allows water managers to track the movement of water quality issues as they pass through a river system.
ESNET sondes are typically loaded with sensors for parameters such as dissolved oxygen, temperature, pH, conductivity, turbidity, ammonium, blue-green algae and chlorophyll. However, it is also possible to include other water quality parameters, as well as remote cameras, water level and flow, or meteorological measurements. The addition of autosamplers enables the collection of samples for laboratory analysis, either at pre-set intervals or when initiated by specific alarm conditions. This is a particular advantage for water companies and regulators because it enables the immediate collection of samples in response to a pollution incident, which informs mitigation measures and helps to identify the source of contamination. Under a WQaaS agreement, Meteor Communications installs ESNET stations at the customers’ sites, measuring pre-specified parameters. Meteor is then responsible for all aspects of the installation and retains ownership of the equipment. The provision of high-intensity (typically 15-minute interval) water quality data is assured by daily online checks that the stations are performing correctly. In addition, regular site visits are conducted for service and maintenance, including monthly visits to swap the water quality sondes with duplicates which have been calibrated at Meteor’s dedicated Water Quality Services Hub near Basingstoke. “This ability to swap sondes is a vitally important feature of the service,” Matt explains. “By providing this service to all WQaaS customers there is a major benefit of scale, because it has enabled us to establish a dedicated sonde service and calibration facility that is able to process large batches of sondes quickly and effectively.” The most important advantages are financial. With no capital costs, this model provides enormous flexibility for the users of the service because it means that they only have to spend money on the data that they need.
In addition, there are no equipment depreciation costs and no requirement for investment in the resources that are necessary for ongoing service and calibration. For many of Meteor’s customers, the main advantage is peace of mind, because continuity of data is usually vitally important. With staff from its Water Quality Services Hub checking outstations every day, combined with regular site visits, users of the system can rest assured that uninterrupted monitoring will generate a comprehensive dataset. On rare occasions, monitoring activities can be hampered by vandalism or even natural events, but the WQaaS system ensures that such issues are detected immediately, so that appropriate action can be taken quickly to protect the continuity of data.

Risk reduction is also an advantage, because purchased equipment can fail, resulting in a requirement for repairs or replacement parts, which may cause a loss of data continuity. Under the WQaaS scheme, however, Meteor is responsible for the system’s uptime, so spares for all of the ESNET’s modules are kept on standby as rapid replacements. Where water quality monitoring is required for a specific project, the equipment can be tailored to meet precise needs, and at the end of the project the monitoring equipment is simply removed. This is ideal for consultants or researchers bidding for projects with a monitoring element, because it allows them to define the costs very accurately in advance.

Flexibility is the key benefit for water company users of the WQaaS model. Traditionally, final effluent water quality monitoring at wastewater treatment plants is undertaken by fixed equipment installed with appropriate capital works. This means that mainly larger plants benefit from continuous monitoring, so the major advantage of the ESNET systems is that they can be rapidly deployed at any site, delivering water quality insights later that same day.
Then, once the investigation is complete, the equipment can easily be moved to a different plant. Summarising, Matt says: “This technology has been developed over many years, and with hundreds of systems already in the field we have invested heavily in the resources that are necessary to support these networks. This means that our customers do not need to make the same investment, which delivers efficiency and cost-saving benefits for everyone. We still sell ESNET systems to those for whom ownership makes more sense, but for many others the advantages of WQaaS are significant, because when the monitoring stops, so does the cost!”

For over 25 years, Meteor Communications has designed, built and installed remote environmental monitoring systems for global governmental, utility, industrial, consulting and academic organisations. Innovation underpins the success of the company, and all products and solutions have been developed in close cooperation with customers. Meteor’s products provide real-time access to vitally important field data, with two main themes. Remote water quality monitoring stations measure background levels, enabling trend analysis and the identification of pollution from diffuse and point sources. Remote, low-power, rugged cameras provide visualisation of key assets such as construction sites, flood gates, weirs, flumes, screens and grills. Both the cameras and the water quality monitoring stations provide immediate access to current conditions with alarm capability, which enables prompt remedial action as well as the optimisation of maintenance activities.

Meteor Communications provides a wide range of off-the-shelf and bespoke monitoring solutions. Most can be deployed within minutes, are solar powered and do not require significant infrastructure to run. Cloud-based data is accessed via secure login to the Meteor Communications data centre.
This is achieved using any web-enabled device and provides instant access to live and stored data, including an interactive graphical display. Meteor Communications has a large installed base of remote monitoring stations, and the company’s turnover has increased five-fold in the last six years.
‘TOTEX’ is key when purchasing instrumentation

There’s a lot to be considered in the price tag of an ultrasonic instrument. Derek Moore from Siemens explains how the historical way of thinking only of capital costs needs to change to the more holistic approach of total expenditure (TOTEX).

For any purchase, a prudent decision involves thorough analysis with the long term in mind. When buying a car, for example, we don’t just look at the price tag, which only represents the initial capital cost. We also consider important operating costs like fuel efficiency, reliability and maintenance. All of these contribute toward our understanding of the true total expenditure – or “TOTEX” – for the vehicle, and we make our purchase decision accordingly. The sticker price might be higher on car A than car B, but car A might still be the better deal because its long-term value could be greater when all of the operating costs over the full driving life of both vehicles are taken into account.

It’s no different when purchasing an instrument for a water/wastewater facility. In addition to the initial capital cost there are a number of operating costs that must be considered, but all too often these are overlooked. It starts with installing the devices: some instruments have a simpler and less costly installation process than others. Then there’s maintenance, with a number of questions to address in assessing that cost. How often does production need to be shut down for visual inspections and cleaning? How long must each shutdown last? And what does all that shutdown time and cleaning work add up to as a total cost for lost operating time over many years?

It’s also important to consider the impact of energy costs to determine the true operating cost of an instrument. Countries including Canada, the UK, Germany, South Africa and Australia have different rates according to the time of day or season in which energy is consumed.
It can cost up to 80 per cent more in peak periods compared to low periods. Since the instrument needs to run at all times, the high-cost periods are unavoidable. That’s where a special feature of Siemens’ ultrasonic controllers can make a big difference in reducing operating costs. The SITRANS LUT430 (Level, Volume, Pump and Flow Controller) and the SITRANS LUT440 (High-Accuracy Open Channel Monitor) both offer a full suite of advanced controls. In normal operation, the controller turns pumps on once water reaches the high-level set point and then pumps down toward the low-level set point. In economy pumping, the controller pumps wells down to their lowest level before the premium rate period starts, which maximises the well’s storage capacity. The controller then maintains a higher level during the higher-cost tariff period by using the storage capacity of the collection network. Pumping in this way ensures minimal energy use in peak tariff periods.

In addition, costs can be saved with these and other devices in the SITRANS LUT400 family through pumped-volume and built-in data-logging capabilities. In a closed collection network, it is inefficient and costly to pump rainwater entering the system from degraded pipes that are leaking. The SITRANS LUT400 calculates pumped volumes, which provides useful historical trending information for detecting abnormal increases of pumped water.

A range of Siemens products can bring TOTEX costs down significantly through reduced operational costs. For example:
• All Siemens Echomax ultrasonic transducers are robust and have a self-cleaning face to avoid product build-up, which reduces the need to shut down production for cleaning.
• The Siemens HydroRanger 200 and Siemens SITRANS LUT400 have submergence detection, with an alarm triggered before the device is fully submerged. Pumps can also be activated to attempt to lower the water level.
This will avoid the costs associated with an overfill.
• All Siemens level instruments have intelligent echo-processing software that continuously adapts to changing environments and conditions in the application. Thanks to sophisticated algorithms at the heart of this innovation, users can rely on accurate readings and avoid the false readings that lead to costly false alarms.
• The new Siemens SIMATIC RTU3030C is a cost-saving device designed for data communications at remote locations. It is a compact, energy-self-sufficient Remote Terminal Unit (RTU) with optimised energy consumption, so it requires no external power source. Because it is battery operated and works with any Siemens ultrasonic device, no costly trips are needed to remote places to check on instrumentation, with everything handled from the control centre.

All Siemens instruments can be connected via SIMATIC or other communication protocols, meaning all the needed information is in one place – delivering cost-saving efficiency to the entire operation.

To put all of these operational savings into full TOTEX perspective, consider a direct comparison between a given ultrasonic device purchased from the fictitious Zebra company and one bought from Siemens. The two devices might have the same purchase price, but the self-cleaning face on the Siemens device alone has a huge impact when looked at across 100 units in your operation over the course of a 15-year lifespan for each instrument. Assume that cleaning feature saves just $100 per year per device: over 15 years for 100 devices, that’s a difference of $150,000 in TOTEX. It’s just one simple example to show how capital cost is only one part of the equation in understanding the true total cost of an instrument.

SIMATIC RTU3030C makes it possible to measure levels anywhere in the world while providing all the data in your centralised control centre.
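The economy pumping scheme described above (pump the well down just before the premium tariff window, then hold a higher level during the window using the network's storage) can be sketched as a simple set-point schedule. This is a minimal illustration of the idea only: the tariff window, level values and function names are assumptions, not the LUT400 firmware.

```python
# Minimal sketch of tariff-aware "economy pumping": choose well level
# set points by time of day. The premium window and the level bands
# below are illustrative assumptions, not Siemens device settings.

PEAK_WINDOW = (7, 19)          # premium tariff assumed 07:00-19:00
NORMAL_SETPOINTS = (1.2, 2.5)  # (pump-stop, pump-start) levels in metres
PEAK_SETPOINTS = (2.0, 3.0)    # hold a higher band during the peak tariff

def setpoints(hour: int) -> tuple:
    """Return the (low, high) level set points for a given hour of day."""
    start, end = PEAK_WINDOW
    if hour == start - 1:
        # Just before the premium window: pump right down to maximise
        # the storage capacity available during the expensive period.
        return (0.5, 2.5)
    if start <= hour < end:
        # During the peak tariff, tolerate a higher level and pump less.
        return PEAK_SETPOINTS
    return NORMAL_SETPOINTS

print(setpoints(6))   # (0.5, 2.5) - pre-peak pump-down
print(setpoints(12))  # (2.0, 3.0) - peak tariff, minimal pumping
print(setpoints(22))  # (1.2, 2.5) - normal operation
```

The effect is exactly what the article describes: the bulk of the pumping energy is shifted into the cheaper tariff periods, while the collection network's storage absorbs inflow during the expensive ones.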
Case Study: A brand new rising main monitoring programme

Following successful learnings from monitoring transients on water distribution networks, Anglian Water and Syrinix looked to see if there was an opportunity to transfer this knowledge across to wastewater rising mains. Across the UK water industry, monitoring of rising mains is limited, and even less common is a partnership approach to analysing and developing the data. Pressure monitoring offered a whole new angle on providing data which could inform and influence working practice and ultimately be beneficial from both an environmental and a cost-saving perspective.

The key driver for Anglian Water was first to explore the capability of pressure monitoring to identify bursts on rising mains. A burst on a wastewater network, and the impact of its pollution, is a problem with significant consequences for both customers and the environment; increasing resilience by improving visibility and the time to respond is therefore imperative, with obvious all-round benefits.

Secondly, there was the issue of gaining a better understanding of the state of assets. Could Anglian Water get more out of existing assets? Could they last longer? A richer data set and the information it provided would lead to smarter investment decisions on assets and ultimately a reduction in the need to deliver huge capital solutions (such as mains replacement).

Thirdly, there was an efficiency point, taking into account both financial and operational efficiency. The identification of failing assets such as non-return valves, air valves, degrading pumps and, in general, assets which start to cost more than they should brings back some efficiency and by default aids carbon and energy reduction, all of which are key priorities within the next AMP cycle.
Finally, geography plays its part: the topographical make-up of the Anglian Water area means that, before the sensors were installed, bursts could occur in rural areas and remote farmland and lead to a catastrophic pollution. Having the technology enables Anglian Water to manage those areas better, reducing the overall impact on the environment.

How the partnership works

The rising main monitoring and analysis service from Syrinix combines a high-resolution pressure sensor, deployed at a pumping station outlet, with diagnostic tools that analyse the retrieved network data to assess the rising main system’s operation and performance. PIPEMINDER, Syrinix’s high-resolution pressure monitor, collects and analyses 128 samples per second and provides one-minute summary data intervals. The rechargeable, battery-powered system combines 3G communications in a rugged IP68 enclosure with an external digital pressure sensor, making it an ideal solution for deployment on rising mains. The data is sent every six hours to RADAR, Syrinix’s cloud-based platform, where it is analysed against set performance parameters to determine the system operating state. This enables the identification of asset issues such as blockages, sticking or passing non-return valves, worn pumps and burst mains. Syrinix provide a monitoring service to the water company and have automated alerts for burst main identification, which can be integrated with existing software.

This project has been truly ground-breaking in its ability to deliver early value to a wide range of stakeholders: from asset planners, who can now look at the effects of design on rising main performance and therefore inform future standards, through to operational teams, who can now do greater amounts of fault diagnostics remotely, such as identifying poorly performing NRVs which previously may have gone undetected until costs increased or performance was significantly impeded.
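The reduction from a 128-samples-per-second pressure stream to one-minute summary intervals, as described above, can be sketched simply: keep the minimum, maximum and mean per minute so that short-lived transients survive the data reduction. The summary fields here are illustrative assumptions; Syrinix's actual summary format is not detailed in the article.

```python
# Sketch of reducing one minute of 128 Hz pressure samples (7,680 values)
# to a compact summary, as a PIPEMINDER-style logger might. Field names
# are illustrative, not Syrinix's actual data model.

def one_minute_summary(samples: list) -> dict:
    """Reduce one minute of pressure samples to min/max/mean/range."""
    return {
        "min_bar": min(samples),
        "max_bar": max(samples),
        "mean_bar": sum(samples) / len(samples),
        # A large within-minute range hints at a pressure transient.
        "range_bar": max(samples) - min(samples),
    }

# One minute of mostly steady pressure containing a single transient spike:
minute = [4.4] * 7679 + [6.1]
s = one_minute_summary(minute)
print(round(s["range_bar"], 2))  # 1.7
```

Keeping min and max alongside the mean is the important design choice: a one-sample transient barely moves the minute's mean, but it shows up clearly in the range, so the six-hourly upload to the cloud platform still carries evidence of the event.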
Anglian Water installed over 120 Syrinix PIPEMINDER devices.

As the project began to take form, several objectives were defined which took the scope beyond a simple alert system to a more sophisticated performance and diagnostic tool. These objectives included:
• Asset performance monitoring
• NRV operation
• Rising main failure/burst
• Air valve operation
• Asset condition monitoring
• Rising main deterioration
• Pump efficiency
• Impact of pump operation on rising main life

For the solution to be effectively utilised as business as usual (BAU), the following elements regarding integration were also considered:
• Business/stakeholder buy-in
• Education and learning
• Integration with existing systems
• Dashboards
• Effective presentation of data
• Contextualised information
• Intuitive GUI

Initially, sensors were placed on poorly performing assets which were known to be at higher risk of failure. The intention behind monitoring these assets was to generate reference data which would allow an understanding of the patterns associated with bursts and poor performance. In doing so, Anglian Water would be able to roll out a broader programme of ‘condition monitoring’ to proactively monitor for events and deteriorating performance with greater confidence in the
accuracy of the intelligence generated. Via analysis techniques it became possible to spot blockages and sticking non-return valves, giving predictive capabilities to asset owners and the proven ability to address under-performance before failure, which gave the project real commercial value.

Data collected from PIPEMINDER devices deployed at the pump station outlet is used in Syrinix’s patent-pending technique to analyse the operation of the complete pipeline system. This translates to a visual representation of what good, bad and indifferent performance looks like, ultimately meaning Syrinix can advise on how a system is currently operating compared to what optimum performance should be. By analysing the one-minute summary data stream, the method extracts the number of minutes with:
• Low static head
• Normal static head
• Low delivery pressure
• Normal delivery pressure
• High delivery pressure
• Excessive transient

A burst alert is raised in the operational control centre, meaning the time to respond to asset failure has reduced significantly. By analysing the time spent in the other zones, it became possible to determine the system operating state, such as ragging blockages and sticking non-return valves. This data, tracked over time (figure 3), shows system issues raised in red, and a state counter is used to indicate the asset issue. An alert is then raised in the operational control centre so a review and response to the failure can be planned. This better use of data gives a complete understanding of system performance, which can then feed a predictive maintenance plan.

Working examples of how monitoring has made true monetary savings
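Before the worked examples, the zone analysis described above can be sketched in code: each one-minute pressure summary is assigned to a zone, and the minutes per zone are counted to characterise the operating state. This is a hypothetical illustration with assumed pressure bands for a single imaginary rising main, not Syrinix's patent-pending method.

```python
# Sketch of zone classification of one-minute pressure summaries.
# All band limits are illustrative assumptions for one hypothetical main.

from collections import Counter

LOW_STATIC = 1.0       # bar: static head below this counts as "low" (assumed)
DELIVERY_LOW = 3.5     # bar (assumed)
DELIVERY_HIGH = 5.5    # bar (assumed)
TRANSIENT_RANGE = 1.5  # bar swing within one minute counts as excessive (assumed)

def classify(minute: dict) -> str:
    """Assign a one-minute summary to one of the zones listed above."""
    if minute["max_bar"] - minute["min_bar"] > TRANSIENT_RANGE:
        return "excessive transient"
    if minute["pumping"]:
        if minute["mean_bar"] < DELIVERY_LOW:
            return "low delivery pressure"   # could indicate a burst or worn pump
        if minute["mean_bar"] > DELIVERY_HIGH:
            return "high delivery pressure"  # could indicate a blockage
        return "normal delivery pressure"
    return "low static head" if minute["mean_bar"] < LOW_STATIC else "normal static head"

minutes = [
    {"pumping": True,  "mean_bar": 4.4, "min_bar": 4.3, "max_bar": 4.5},
    {"pumping": True,  "mean_bar": 2.5, "min_bar": 2.4, "max_bar": 2.6},
    {"pumping": False, "mean_bar": 0.6, "min_bar": 0.5, "max_bar": 0.7},
]
print(Counter(classify(m) for m in minutes))
```

Counting minutes per zone over days or weeks is what turns a raw pressure feed into the state counter and red-flagged system issues described above.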
Example one – In May 2019, early detection of a burst rising main meant a repair bill of £1,100, as opposed to the £25k repair bill received in December 2018, prior to the burst alert. Early detection meant Anglian Water could minimise the impact on the environment whilst lessening any impact on customers and company reputation.

The sensor placement of PIPEMINDER on a gravity wastewater network
A graphical representation of these zones
Example two – Data from RADAR overlaid with hydraulic analysis showed a series of examples where non-return valves (NRVs) were draining back. This information could potentially save over £1k a year simply on unblocking NRVs. Drain-down is when the non-return valve (two of which can be seen to the left of the PIPEMINDER-ONE monitoring device) which follows a pump does not close fully and allows some of the fluid to pass despite being closed. This can be seen quite clearly in the next zone plot: the static head begins to fall as the fluid flows back into the well. This means that the well will begin to fill not only from its source but also from the rising main itself. Subsequently, money is wasted not only in pumping this fluid back up the pipe but also in pumping more often as the well fills more quickly. Using the zone plot, Syrinix has the capability to alert on when a rising main is draining down by looking for the presence of the highlighted section. This can indicate when the NRV needs maintenance and help save money in the long run.

Example three – By using the extracted data, Anglian Water have been able to reduce the burst frequency on a site from 19 bursts in 2018 to 0 for 2019.

A zone plot of the event – pressure drop of delivery pressure from ~4.4 Bar to ~2.5 Bar, which tripped the zone alarm. Time between first sign of failure and repair: 47.4 hours.

Below are
some examples chosen randomly from the site:

Date             Pump surge    Pump stop to return to steady static pressure
December 2018    1.233         6.15 minutes
April 2019       0.712         3.87 minutes
July 2019        0.312         2.63 minutes
November 2019    0.313         2.71 minutes

The above data shows that, as time has gone on, the aggressiveness of the pumps has been decreased. The overpressure due to pump surge and the oscillations after the pump has stopped have been significantly reduced.

Anglian Water Optimisation Strategy Manager Rebecca Harrison said: “This new level of monitoring has allowed Anglian Water to deploy strategies aimed at extending the life of the rising main (such as soft starts on pumps and improved air valve maintenance) which early results suggest will allow deferral of capital investment by extending asset life. This enhanced understanding of performance also provides an essential targeting tool for the Optimisation team.”

Mueller, Ferguson Waterworks deliver LoRaWAN® Class B nodes with AMI system

Mueller and Ferguson Waterworks have announced the successful deployment of the industry’s first LoRaWAN® Class B endpoints. The Town of Florence, located in central Pinal County, Ariz., is the first water utility to benefit from this technology advancement. LoRaWAN Class B endpoints provide flexibility to scale network coverage and integrate into remote disconnect meters (RDM), leak detection and pressure monitoring systems, unlocking greater network efficiency and improving data granularity. “The deployment of smart meters is accelerating our journey toward digital transformation and the foundation required to build out our smart city grid,” said Brent Billingsley, Town Manager of the Town of Florence.
“We are confident that this open source network will provide new operational efficiencies, enhanced service opportunities and additional revenue streams.”

Delivered by Mueller Systems, the Mi.Net® node, implemented with LoRaWAN Class B specifications, is a bi-directional endpoint capable of transmitting secure data to and from a network server within seconds, as opposed to hours with a Class A endpoint. At this speed of communication, on-demand reads can be commanded and delivered without delay, providing real-time data to customer service and operations to identify and resolve outages quicker than before.

“It is encouraging to see more cities and water utilities like the Town of Florence at the forefront of the Industrial Internet of Things (IIoT) revolution,” said Kenji Takeuchi, Senior Vice President, Technology Solutions at Mueller. “We understand that municipalities are facing challenges on many fronts. Our technology solutions can help drive a better focus on utility spending and return on investment, while helping them operate more efficiently.”

By deploying Mi.Net® LoRaWAN Class B endpoints, the Town of Florence can simply pair them with Mueller Systems’ model 420 RDM to allow water meters to be turned on or off without the need for truck rolls. Each LoRa-based endpoint maintains its data in non-volatile onboard memory and communicates with the Mueller Mi.Net® Advanced Metering Infrastructure (AMI) system. This helps to ensure water utilities are protected against any single point of failure. Alerts such as leak detection, no flow, low flow and register tampering are monitored 24/7 by the Mueller Network Operations Center to provide an added layer of security.
Article: The Smart Water Industry is no longer a choice...it’s a must

Whatever you call it, be it Smart Water, Water 4.0 or even Digital Transformation, the world of the water industry is changing, and the evolution of a “Smart” Water Industry is no longer a choice: it is something that is simply going to happen. This was the fundamental undertone of this year’s WWT Smart Water Networks Conference and the WEX Global conference earlier this month. For the Smart Water industry it is now not a case of “if” but a case of “when.”

This is all very well, but “what is the Smart Water Industry?” was one of the questions asked during the conference sessions. Do we have a definition for it? Well, if you look to “Industry 4.0,” the definition is that of cyber-physical systems. Applying this to the water industry is challenging because, unlike a “Smart Factory” bounded by walls, the water industry is a very disparate system with a much more open structure and form. However, it is still a system or, as Andrew Welsh of Xylem put it, a series of snapshots of a system that when brought together make a whole.

So, in the context of the water industry, what is “Smart”? For me, at least, it is bringing together all of the data that we collect to give the industry, at least operationally, something called “situational awareness”: knowing what is going on within the operational framework in order to make an informed decision. This can be operational, on a relatively short time-scale; it can refer to the customer, giving them the right data to enable them to make decisions; or it can be about the performance of assets or resources over a much longer term, enabling strategic and planning decisions.
This is the fundamental heart of what a “Smart Water Industry” is to me, and in order to get there we must work towards knowing what information the industry requires at a stakeholder level, whether that be the customer, the CEO of the company or the operator on the ground. All of the informational needs are different and may even differ from company to company or region to region, but the fundamental principle is the same.

Where are we and how do we get there?

The discussions have been going on for years, and there are some great case studies out there, certainly on a “Smart City” approach. Eva Martinez Diaz from FCC Aqualia gave us some great examples of the “factory” approach to the water industry and the work that the innovation teams there have been doing, including the development of biofuel from algae at Chiclana in southern Spain, where wastewater is used to grow algae which is then digested to create fuel. This is not “Smart” per se, but it certainly takes the principles of the circular economy and the “Factory” approach that was proposed by STOWA so many moons ago in their report on the wastewater treatment works of 2030.

Taking a more “Smart” approach is the work that has been done in San Ferran, where the move from manual to automatic meter readings meant that the amount of data sky-rocketed from approximately 9,000 readings in 2016 to over 2 million the next year. As San Ferran is an island where water resources are stretched, this gave a visibility of unaccounted-for water that meant the resources could be managed. The project had a clear need, and it made sense to take this approach: where water resources are short, it makes obvious sense to adopt the technologies that enable the water industry to take this route. In the UK, the report by Sir James Bevan has highlighted an obvious need.
In the UK at least there is an obvious need, certainly on water resources, meaning careful monitoring of what resources we have in the environment but also protecting what we produce through the management of non-revenue water. The need is there, but is the technology? Within the conference, this led us to the first poll of the day, with the question shown in figure 1. The answer that came out of the poll was “cautious” but “interested”, which is a wholly understandable position. Right now, the industry is awash with technologies, techniques and various “as a service” offerings, covering everything from data to software and the like. It is very difficult to navigate through all of these offerings, and it is also very easy to think of a lot of them as “widgets”.

One of the reasons for this cautiousness was highlighted in the last poll of the day, which asked the question in figure 2. The biggest barrier? What we already have in place: the legacy systems that have served their purpose over the years but no longer suit the needs of the industry. Of course, replacing legacy systems takes time and a lot of investment, and it does not always financially stack up to do so.

What was interesting to see at this year’s conference is that the technologies to do what we need to do are already present. Various technologies are available to the industry, along with various services. Some neatly address the industry’s needs, such as the chemical inventory and dosing optimisation systems presented by Roderick Abinet of Keimira and Christopher Steele of Black & Veatch.

The key to “Smart” is collaboration... and of course data
“Smart” is not something that we can deliver in isolation, though, and this was demonstrated in many different ways at this year’s conference, with Martin Jackson of Northumbrian Water Group talking about their development journey, the challenges and enablers that they have seen, and the important areas that they’ve looked at, including:

1. Data Science – Yes, there are the basics, but it is also about leveraging the company’s expertise in different areas by developing those within the business, which they did with a hackathon approach. They have created a culture where data is trusted to drive leading performance. It’s not there yet, but it is getting there.
2. Artificial Intelligence – They have looked to bring in an AI approach by having an in-house data architect. The focus has

Figure 1: How would you sum up the water industry’s attitude to smart technology?
Figure 2: What are the biggest barriers to realising the benefits of smart water networks?
been through customer services, as this is an easy-win area where the volume of calls means that a human simply can’t do it.
3. User Experience – They used an out-of-the-box application rather than a bespoke service. An example of this is using Alexa to interact with the customer. They have also developed a game approach for educational purposes: basically, using tailored applications.
4. Smart Technologies – There is a balance between new sensors and technologies and the existing ones. It’s about outcomes rather than installing a new widget.

There are enablers out there, with cloud storage prices coming down to the point where companies can use it for a huge amount. Data storage, while not exactly free, is at least priced very reasonably. Cyber security is always going to be a risk, but this can be limited to the data that needs to be secure (for example, customer billing data). The now famous Northumbrian Water approach has been through a number of design sprints, hackathons and the like, encouraging others to get involved in an ever-developing landscape.

In this, Northumbrian Water, although arguably amongst the most developed, are not on their own. Welsh Water, in the form of Nial Grimes, presented their collaborative approach and came up with three small culture hacks that they’ve taken:

1. Make a big small change – This was the approach at Welsh Water to hackathons and the like, which enabled going further with technology faster than they’ve done before, true collaboration between different teams in the company, contractors and the supply chain, and finally a new way of working.
2. The power is in the team – or, if you want to think of it this way, in the members of staff within the organisation; with the removal of blockers, and by enabling good ideas, a huge amount can be done, including the case study that was given: the development of an application for the capture of real-time data on wastewater treatment works.
3.
Find the crazy person – There are some truly talented and passionate people within most businesses who have great ideas and, if enabled, can develop something truly wonderful and get the company to follow it.

What this goes to show is that, whether collaboration is internal, external within the UK or outside of it, there are things that we can learn from each other across the water industry. There are barriers to this insofar as we are meant to be a competitive business, although it was argued by Trevor Bishop that innovation and collaboration are almost more important from an industry-wide strategic perspective. A good case study of this was presented by Tertius Rust of South East Water, where they have leveraged the technical experts available and, working in collaboration with the supply chain, have developed a smart network: from the sensors in the ground, to the telecommunications systems and cloud services, all the way to a data lake which feeds systems across the corporate estate and is also fed by data loggers in the field. The art of this is knowing what technologies to apply and where (like anything in the water industry), and in order to do this there is a requirement for a multi-disciplinary team which in reality stretches not only within a company but into its supply chain too.

When we look at what is happening around the globe, there are numerous good case studies as to what “smart” can look like, especially in the case of non-revenue water. To most people, non-revenue water means leakage; in reality this is not always the case. Yes, in the main it is leakage, but it can also be things such as meter error, unmetered water use or even water theft from the distribution system.
It is probably the most developed area, where solutions are actually being developed in the water industry and are in use right now, all the way from instrument-based systems looking at high-pressure transients, to acoustic loggers looking at leakage at step 1, to pressure management systems and event management systems at steps 2 & 3. Some of the examples of this across the world are in Australia, where the CEO of Unitywater, George Theo, takes the approach that once a leak is costing more than it costs to repair then the situation is simple... it's a case of just fixing it. In reality this "simple" approach is much more difficult as (a) you need to know where the leak is and (b) how much it's costing. This is where Unitywater has used an event management system approach. Probably one of the best examples that I've seen in recent years is that of the Lisbon-based water company, EPAL. The journey that EPAL have gone through in the past 10 years saw the city drop from NRW levels of 23-24% to as low as 7% in only a few years. They took the approach of managing their data, converting it into information and using it to inform their water network asset replacement programme. However, as presented in the past few days, the path of their NRW has actually taken an up-turn in the past few years, as shown in a copy of their slide. Although the up-turn was slight in comparison to previous years it was a definite worsening. The positive point was that this was picked up as a problem and the team within the company worked towards its resolution. As a result, the problem was detected in the trunk mains within the city and, working in collaboration, they instigated technological detection techniques using the WRc Sahara technique as well as the Xylem SmartBall technique to see where the problems were. This resulted in the discovery of a number of leakage points within the large trunk mains that, with intense planning, were fixed, and the unofficial figures for 2018 show the NRW% has fallen.
The work that EPAL originally did has been documented and is freely available for download from the EPAL website. Despite all of this, it is recognised that NRW is not just about leakage on the water company's distribution system but can be about meter error too.

Figure 3: The case study of SE Water (from a presentation by Tertius Rust)
Figure 4: The non-revenue water journey of EPAL as presented by Andrew Donnely

It's an approach that has been investigated on the supplier side by Z-Tech Control Systems, through their experience of meter installation problems, as well as by EPAL on the water company side of things. In the UK at least it can come with a large financial cost associated with the Outcome Delivery Incentives: a company that misses its leakage targets pays tens of thousands of pounds in penalties per 1000m³, while one that achieves its targets receives tens of thousands of pounds per 1000m³. It's a similar approach in Denmark, where the penalty for a water utility that is over 9% NRW is a large tax bill associated with the inefficiency. As a result of this regulation HOFOR, the country's largest utility, has 6.7% NRW, and the socio-cultural attitudes towards water mean that per capita consumption is around 100L per day. The Danish have achieved this through lots of hard work, but also through the first key take-away from most workshops and conferences at the moment - the need for collaboration in what we do. Meter uncertainty is not just linked to large meters though, as demonstrated by the work that EPAL have done. In Portugal there is a legal obligation to replace water flow meters every 12 years. By using their technical specialisms EPAL have analysed the performance of water meters and the potential for under-reading. If a meter under-reads, (a) the customer isn't being charged for all of their use but, most importantly, (b) the NRW% looks higher than it really is, with the apparent loss in reality being meter error. Looking at this, EPAL have figured out that there is benefit in replacing at least some of the flow meter stock earlier than the statutory 12 years, giving a return on investment in as little as 12-18 months. If information, situational awareness and informed decision making are all part of the "Smart" Water industry, then at the root of all of this is data quality.
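EPAL's early-replacement case is, at heart, a payback calculation: revenue recovered by correcting under-reading versus the cost of a new meter. A rough sketch with entirely invented figures, chosen only to land in the 12-18 month range the article quotes:

```python
def payback_months(meter_cost, annual_consumption_m3, tariff_per_m3,
                   under_reading_fraction):
    """Months until the revenue recovered by an accurate meter pays
    back the cost of replacing an under-reading one."""
    recovered_per_year = (annual_consumption_m3 * tariff_per_m3
                          * under_reading_fraction)
    return 12 * meter_cost / recovered_per_year

# Invented figures for a larger commercial meter (illustration only)
months = payback_months(meter_cost=3_000.0,
                        annual_consumption_m3=40_000.0,
                        tariff_per_m3=1.50,
                        under_reading_fraction=0.05)  # 5% under-read
print(f"payback ≈ {months:.0f} months")  # payback ≈ 12 months
```

The same arithmetic also shows why small domestic meters rarely justify early replacement on their own: the recovered revenue per meter is far smaller against a similar installation cost.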
To use the old phrase "Garbage In, Garbage Out", first used by William Mellin in 1957: if we use data of poor quality then the whole fundamental basis of the Smart Water Industry will fail. If we want to use data analytics and machine learning then the concept that was first used 62 years ago has got to be understood. This is a constant theme that is heard all of the time at the moment, and it is true. At the heart of this, operationally at least, is the application, installation and maintenance of the online instruments that we use. Figure 5 shows an extreme case of poor installation and maintenance, but it is a real-life example (although not from the UK). The question has to be asked - where is the data that we are going to use coming from? A person on site looking at this can understand that the data from this particular installation cannot be relied upon, but a person who is remote from the local situation doesn't have the situational awareness to know that the data is wrong. Of course, nor will a machine. However, these are the obvious errors - what if the error is not so obvious, and is down to something that can't necessarily be seen, such as air getting into the pipeline or a local obstruction causing errors in the flow meter reading? Analysis that experts at Z-Tech have done (figure 6) shows that in severe cases, due to air or other pipeline disturbances such as poorly fitted meters or meter fouling, the associated error can be in the tens of percent. When looking at examples such as non-revenue water this can create huge apparent losses which in reality don't exist, and which could be resolved by utilising instrumentation expertise.

...the way to get there... through objectives

At the end of the day, though, there has to be a point to the whole "smart water" concept, and that is to make the industry more efficient.
The second poll of the day asked the question shown in figure 7. In the end it is always going to be about improving the service to the customer - really, the answer to the poll question was "all of the below" and more besides - and the work that has been done in Denmark, Portugal and Australia are cases in point which enable a water company to operate at a leaner level through efficiencies in operation. In general there will be horizontal, cross-company objectives in terms of increasing data quality through correct installation & maintenance (Level 2), more robust communications systems (Level 3), visualisation (Level 4) and analytics (Level 5), but also vertical segments such as NRW reduction, flooding & pollution and (water/wastewater) system optimisation, and it is in the vertical segments that the objectives and drivers lie.

Figure 5: Know where your data comes from?
Figure 6: Error analysis showing obvious installation problems
Figure 7: Where do you think smart technology could make the biggest difference to the water industry?

Take-aways

So, what do we take away from this year's WWT Smart Water Networks conference and the WEX Global Conference?

1. Collaboration - As water companies, contractors and supply chain, the technologies and the ways to deliver them are out there. Specialists will need to be used, especially in the current environment, where technical skills can be a challenge. New skills will have to be developed as the industry shifts into areas which are not in the current "typical" skill set of the industry, and these can be co-opted as necessary. We should also be looking outside of the water industry, as there is a lot to be learned there too.

2. Data Quality - This goes back to the 1950s, the US Army Corps of Engineers and the infamous phrase "garbage in, garbage out." If we
are to build an industry based upon data, we need to make sure that the data is right. Information based on wrong data will be wrong, and the resultant analytics may come to the wrong conclusions. A machine cannot necessarily tell when data is wrong; it is a difficult job even for a person. This means we need more robust data sources, and we need to maintain them to ensure that they stay right. This is essential moving forward, as otherwise we are building a philosophy on uncertain foundations. However, not only is data quality an issue but so is data availability. The value of data, its quality and its availability are all linked to its usefulness as information - this is a fundamental piece that I wrote about many years ago in what I termed at the time "the resistance to the effective use of information." Basically put, if we value the information we will look after the data source.

3. Technology - Technology is currently available, in one form or another, to enable us to build a smart water industry. It may take a little help and a little trial and error to enable the technology with different people, but it is already in place. It is just a case of starting the journey.

4. Skills - There are some areas where the skills exist, there are some areas where the skills are in shortage, and there are some areas where we don't know what skills are going to be needed. Some can be developed in-house; some are specialist or can be delivered more effectively externally, either on an ad-hoc or a permanent basis. The people are available, but not necessarily from the traditional sources.

5. The Smart Water Industry is a reality and is no longer a choice; it is something that we simply must do in the Water Industry moving forward to address the challenges that we face.

These, for me at least, are some of the key takeaway points from the day. The Smart Water industry is certainly possible and in fact has been highlighted as a necessity. Now it's up to us as a collaborative industry to deliver it.
There were some questions raised at the conference from the audience, and these have been listed below to give some insight into where the industry is unsure or needs further clarification.

Questions

1. How are you going to feed back on the questions (un)answered?
2. Does the panel (everyone) have an agreed definition of "Smart Water"?
3. Should we define smart networks? It should include treatment. Are we not considering data-driven water to meet and exceed customer outcomes?
4. Do you feel the water utilities have a digital roadmap and know where they are going, and at what pace?
5. BIM, Digital Twin, analytics and Big Data are all interlinked. However, they seem to be looked at separately within utilities - how do we join them up?
6. Are you using the meter read data in Sant Ferran to feed back to customers and drive consumption reduction and, if so, how? Can we get a site visit to Sant Ferran - it looks beautiful.
7. There are a lot of people who want to help provide smarter solutions, if only water companies provided access to their data. How can we solve this?
8. Could the partnership approach used in Yorkshire Water by Black & Veatch work on a wastewater network?
9. When trying to implement smart systems into treatment works etc., do you see much resistance from the managers of those sites?
10. No one has mentioned the skills shortage we are facing to enable and fully realise this Smart Future. What are the proposals to bridge this in time? Can AI help bridge this gap, and is Northumbrian Water (or any water company) looking at this possibility? What level of training is envisaged for end users, i.e. the front-line operators at treatment works?
11. At a site tech level, do bespoke controllers have a future, as most are now integrating their smart "apps" into connected PLCs or edge devices?
12. Are the data analysts in the back office now as important as the engineers (/technicians) out front in order for smart networks to succeed?
13. What are the ethical implications of deploying smart tech, automated condition monitoring and increasing AI?
14. How do we value data?
15. Can you change people, or do you need to "change" the people?
16. England won't be able to meet demand within 25 years. Is there a role for smart networks to help the entire UK sector and not just the individual companies?
17. Tier 2 is the creation of a cognitive hydraulic model. In practice, can this be the conversion of offline models with added sensor data, or something simpler?
18. How far away are Northumbrian Water from the South East Water smart network system? How far ahead are South East Water compared to other companies?
19. How do your smart systems incorporate feedback and learning to continue optimisation from the base model?
20. How as an industry are we connecting into the new innovation and data ethics committee set up by the government?
21. Trevor (Bishop) talked about OFWAT's push for a systems-based approach, but only one company's draft business plan seems to satisfy them. What were the others missing?
22. Michael (Strahand) talked about data storage being (virtually) free. How far are we with establishing secure data-sharing systems given the growing threat of cyber-attacks?
Introduction – A history of the treatment works at Cookstown

The Wastewater Treatment Works at Cookstown in Northern Ireland has a long and extensive history. It was originally commissioned in 1965 by the district's local authority. Situated on the edge of the highly-respected Ballinderry River, the original works was designed to cater for an equivalent population of 11,500. Within a relatively short period of the old works being commissioned (and following the establishment of Water Service in 1973), it became apparent that the systems installed - although modern in their day - were not going to be able to deal effectively with the sewage from the town as well as the surge in the volume of effluent being produced by the area's rapidly expanding pork industry. The trade effluent was extremely high in strength, due to the quantities of blood and fat associated with pig processing, and was putting unprecedented pressure on the works. By the 1980s Cookstown's population had increased beyond 24,000, and while the existing works had been extended to cope with the growing domestic and trade pressures, it was clear by the mid-1990s that the sewage plant was operating well beyond its initial capacity. In addition, many of the tanks required unpleasant and labour-intensive operational procedures to maintain them, whilst other items of plant, such as the detritor, had become ineffective. Operational problems, such as blockages, were also frequently encountered. Despite the processes being well maintained, the fact remained that the works was substantially overloaded, both hydraulically and biologically. As a result, the works had failed on a number of occasions to meet consent standards, which meant that fines by the EC were imminent. During the 1990s, extensive studies were carried out in relation to the building of a new sewage treatment works in Cookstown.
The planning authority ruled out the existing site for a bigger works on the grounds that it was too close to housing and that any development of the site would inhibit further residential expansion in that area of the town. Overall, a total of seven sites were considered for the location of the new works, with Environmental Impact Assessments drawn up for each option. An extensive public consultation exercise was undertaken to present the various sites to key stakeholders, but all options were deemed unacceptable. Having exhausted all avenues, Water Service's designers went back to looking in greater detail at ways in which they could overcome the constraints posed by the existing works site. The main problem with the site was the restricted footprint that was available for introducing new infrastructure. However, research showed that by utilising more modern treatment processes, Water Service would be able to incorporate a new, higher-capacity works within a much smaller area. From an environmental point of view, we knew that careful planting and screening of the new works would overcome any visual objections, and that by introducing robust odour control systems the tightest of standards would be satisfied. With this offering the most economically advantageous option, Water Service proceeded with a design to replace the existing Cookstown WwTW with a modern new plant on the same site. Five alternative treatment processes were economically and practically appraised for their construction within the confines of the existing works site. The most suitable option deemed for the new Cookstown works was a Sequential Batch Reactor (SBR) process - a compact-footprint plant which did not require a separate secondary settlement stage (an element that would have taken up additional valuable space on site).

Case Study: Optimisation of an SBR using Enhanced Control

Figure 1: Cookstown WwTW
Also, because the SBR process could be integrated into the existing works and operate without a short-term requirement for primary treatment, it eliminated the need for the provision of a significant temporary treatment plant. In terms of whole-life costs, the SBR option proved to be the most economically viable solution to produce high-quality effluent. Working within the confines of the existing site footprint, coupled with the need to keep the existing works live, was probably the biggest challenge that faced the construction team. Logistically, the storing of materials also proved to be a significant problem, and while 'just in time' deliveries were scheduled as far as possible to maximise space, NI Water were keen to reuse as much of the excavated spoil as possible. To enable this to happen, stockpiles of rock and indigenous landscaping were created in the area just above the works itself. Much of this existing material was used during phase one of the construction programme (the building of the SBR tanks and the inlet works), when much of the river improvement work was also undertaken.

River improvements

Prior to construction work getting underway, NI Water's Engineering & Procurement team set up a special river improvement workshop to offer a common platform for all those with an interest in the river to come together to discuss their concerns and put forward ideas for enhancing the river quality and its long-term protection. During the initial workshop, NI Water highlighted how the design of the works had been developed with cognisance of the adjacent Ballinderry River. To improve the conditions in the river and protect it from construction work in the short term, NI Water took the decision to carry out ancillary upgrades to the existing plant to temporarily raise the quality of the treatment process until the new works was brought on line and complied with current discharge consents.
The first meeting proved a most valuable exercise and, from the outset of the scheme, provided a crucial stepping stone to building strategic links with some key project stakeholders. The knowledge gleaned from the Ballinderry River Enhancement Association (BREA) was fundamental in introducing the most effective river improvement methods to ensure minimal disturbance to the existing fish and invertebrate life. To the delight of the NI Water team, their joint venture contractors for the new works wholeheartedly bought into the idea of improving the river. Ahead of construction, all river banks were strengthened to prevent future erosion, and a total of six weirs and groynes lying above and below the works were repaired using indigenous stone. A boom downstream of the works was introduced so that any silt or debris from the working site was caught and removed, and a number of gravel spawning beds were introduced at agreed locations for migrating fish such as salmon and dollaghan. The timing of the works was also taken into account, with all construction work in the river timed around the migration of fish.

Moving forward to today – Advanced ASP Control

More recently, the works at Cookstown was struggling to hydraulically treat all of the flows that it was receiving from the network, with the storm tanks regularly filling as the sequencing batch reactor cycles were proving insufficient to complete treatment before the next flows arrived, the excess flows as a result passing to the storm tanks. In order to resolve this situation, a solution was sought to improve the works' control using an advanced activated sludge control system from Strathkelvin Instruments, the ASP-CON. The ASP-CON is a multi-parameter Activated Sludge Plant controller that is designed to measure up to 20 key Activated Sludge Plant parameters that are used to control the Activated Sludge Process.
At its heart it is a respirometer that measures the Oxygen Uptake Rate and the health of the ASP process, but the multiple measurement techniques that it utilises allow a greater degree of control of the process (figure 2). The ASP-Con system measures basic parameters such as Dissolved Oxygen, Ammonia, MLSS, pH & Temperature, additional parameters such as Potassium, Conductivity, Settlement and TSS (predicted), as well as advanced WwTW control parameters such as OUR and SOUR. With these parameters fed to the PLC there is complete control of the ASP system. This unique access to all of the WwTP information allows the operational teams to decide how to deploy scarce operational resource. The in-situ ASP-Con eliminates the need for operators to go out on the plant and grab MLSS (Mixed Liquor Suspended Solids) and settlement samples. Depending on site size and layout this can save up to 2 hours of valuable time, while ensuring consistent sampling techniques and measurement practices. If an issue occurs, the ASP-Con can be programmed to grab another sample, or to collect samples more frequently, regardless of the time of day, day of the week or holiday schedule, and regardless of adverse weather conditions. The samples are then tested in-situ - avoiding the requirement to send them off to the lab and wait a week for results, not knowing how well samples are stored and for how long before a lab technician is free to test any particular sample - so results are real-time. The ASP-Con also cuts down the operator time required for routine cleaning of probes. All the probes are on one instrument, which runs through a cleaning and calibration programme as dictated by the operations team. Cleaning is built in to the normal operating procedures of the instrument, and this can be altered if and when required by the site team. The demand on an operator's time for the maintenance of numerous probes on a site is huge.
The fouling and ragging of "old generation" probes is a significant health and safety issue. The sheer physical effort required at times to lift some probes out of the treatment plant, due to excessive ragging, should not be under-estimated. In contrast, the ASP-Con's self-cleaning regime eliminates ragging completely. The regular cleaning regime, automatically implemented, significantly reduces fouling, improving the accuracy, reliability and repeatability of measurements. Health & safety risks to operators in cold, wet and lone-working conditions are also significantly reduced.

Figure 2: ASP-CON System
What this means at the wastewater treatment works at Cookstown is that the completion of the sequencing batch reactor cycles could be more accurately managed by using the ASP-Con system to measure when the biodegradable load (by measuring Oxygen Uptake Rate - OUR - and ammonium) is completely removed during each aeration cycle. Once this has been confirmed as complete, the ASP-Con system takes a sample to measure the MLSS and then the SVI in each basin. The SBR control software for the basin is then stepped on to complete the settle and decant phases before being allowed to idle until the level in the anoxic basin requires the fill/aerate cycle to restart. The SBR basins were optimised by:

• Ensuring biodegradable load is completely removed during each aeration cycle.
• Avoiding excessive energy consumption by avoiding over-treatment of wastewaters.
• Maximising hydraulic throughput by maximising treatment basin availability.
• Monitoring biological measures of performance to avoid long-term issues.

This can be seen in figure 3. What this meant, from a hydraulic point of view, was that the number of SBR cycles could be increased by decreasing the SBR cycle time, so that 12 fixed-volume cycles could be treated each week. This increased the hydraulic throughput of the plant by 50%, ensuring that spills to the storm tanks could be limited to genuine storm events and not caused by hydraulic overload of the treatment process. However, this was not the only benefit of the ASP-CON system at Cookstown, as the plant worked on the principle of a Surge Anoxic Mix SBR. This has meant a large decrease in the amount of energy that is required to treat the wastewater to standard, as can be seen in figure 4. Over a one-month period there was a 50% reduction in the amount of energy consumed by the treatment process. All of these benefits also result in an increased stability of the treatment process, meaning the treatment works overall is more stable.
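The cycle-stepping described above can be sketched as a small state machine. The thresholds, parameter names and phase names below are hypothetical illustrations, not Strathkelvin's actual ASP-Con or PLC interface:

```python
# Hypothetical sketch of the SBR cycle-stepping logic described above.
# Endpoint thresholds are invented; a real system would tune them on site.
OUR_ENDPOINT = 5.0   # mg O2/l/h: uptake rate once the load is removed (assumed)
NH4_ENDPOINT = 1.0   # mg/l ammonium endpoint (assumed)

def next_phase(phase, our, ammonium, anoxic_level, refill_level=2.5):
    """Step a basin on only when treatment is genuinely complete."""
    if phase == "aerate":
        # Biodegradable load is removed once both OUR and ammonium fall away
        if our <= OUR_ENDPOINT and ammonium <= NH4_ENDPOINT:
            return "settle"      # trigger the MLSS/SVI sample, then settle
        return "aerate"          # keep aerating: avoids a premature decant
    if phase == "settle":
        return "decant"
    if phase == "decant":
        return "idle"
    if phase == "idle":
        # Restart fill/aerate when the anoxic basin level demands it
        return "aerate" if anoxic_level >= refill_level else "idle"
    raise ValueError(phase)

print(next_phase("aerate", our=12.0, ammonium=4.0, anoxic_level=1.0))  # aerate
print(next_phase("aerate", our=3.5, ammonium=0.4, anoxic_level=1.0))   # settle
```

The design point is that the cycle length becomes demand-driven rather than fixed, which is how the hydraulic throughput gain described above is won.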
Conclusions

By utilising advanced monitoring and control with the ASP-CON system at Cookstown WwTW, there has been a large improvement in environmental quality through increasing the hydraulic capacity of the works and decreasing the energy consumption. This is the double benefit that the water industry is seeking, insofar as more is being achieved for, quite literally, less. This sort of system is usually reserved for larger works, where there is a larger potential for savings. However, Cookstown WwTW, at a relatively small design population of 24,000, shows that advanced control systems are viable on treatment works a lot smaller than has traditionally been considered for advanced ASP control. In a time when the water industry is looking to deliver more for less, the ASP-CON system gives the industry a potential solution for realising the efficiencies that it needs through instrumentation and control.

Figure 3: Cookstown unoptimised (left) and optimised (right)
Figure 4: Energy savings at Cookstown utilising the ASP-CON System
WirelessHART® networks: 7 myths that cloud their consideration for process control

Misinformation about WirelessHART networks prevails among many instrument engineers in the process industries. This article attempts to set the record straight by debunking 7 myths about these networks.

Myth 1: WirelessHART is unsafe

False. WirelessHART is safe. But why? A variety of tools make this so.

Encryption—A WirelessHART network always encrypts communications. The network uses a 128-bit AES (Advanced Encryption Standard) encryption system - a standard in several fields of wired communication. The encryption cannot be disabled. The security manager in the WirelessHART gateway administers three parameters:

• Network ID,
• Join key and
• Session key.

Integrating a WirelessHART transmitter into a network requires a network ID and join key. After these are entered, the transmitter first searches for the network with the right ID. If it finds such a network, it sends a "Join Request" message with the configured key. The WirelessHART gateway checks the transmitter's join key and, if it is correct, the network accepts the transmitter. A session key then encrypts the communication. Every network subscriber gets a separate session key, so while it is possible to be accepted into a network with only the join key, this does not allow decryption of the encrypted communication of the other subscribers.

Access list—After commissioning is complete, the acceptance of new network subscribers can be disabled. In this way, no new network subscriber can be integrated into the network even if the network ID and the join key are correct. To integrate a new subscriber, this blocking function can either be temporarily disabled or the UID (Unique Identifier, the unique device serial number) of the network subscriber can be entered manually into the gateway. A network subscriber that does not appear in the subscriber list of the gateway is also ignored by the other network subscribers when messages are forwarded.
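The join sequence and the per-subscriber session keys can be illustrated with a toy gateway model. This is a sketch of the logic only; the class, its methods and the key handling are invented for illustration and are not a WirelessHART protocol implementation:

```python
import secrets

class ToyGateway:
    """Illustrative model of the join checks described above (not the protocol)."""
    def __init__(self, network_id, join_key):
        self.network_id = network_id
        self.join_key = join_key
        self.accept_new = True      # the "access list" commissioning switch
        self.allowed_uids = set()   # manually whitelisted devices
        self.session_keys = {}      # one session key per subscriber

    def join_request(self, network_id, join_key, uid):
        if network_id != self.network_id or join_key != self.join_key:
            return None                                   # wrong ID or key
        if not self.accept_new and uid not in self.allowed_uids:
            return None                                   # commissioning closed
        # Each subscriber gets its own session key, so joining with the
        # shared join key does not let it decrypt anyone else's traffic.
        self.session_keys[uid] = secrets.token_bytes(16)  # 128-bit, as with AES-128
        return self.session_keys[uid]

gw = ToyGateway(network_id=1234, join_key=b"shared-join-key!")
k1 = gw.join_request(1234, b"shared-join-key!", uid="dev-A")
k2 = gw.join_request(1234, b"shared-join-key!", uid="dev-B")
assert k1 is not None and k1 != k2   # separate session keys per device
gw.accept_new = False
assert gw.join_request(1234, b"shared-join-key!", uid="dev-C") is None
```

The two assertions mirror the two claims in the text: a correct join key admits a device, but each admitted device ends up with its own session key; and once commissioning is closed, even a correct join key is refused.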
Join counter—When a WirelessHART transmitter is integrated into a network, it records this information in a so-called join counter. If the device is restarted and joins the same network again, its join counter is increased. Both the network subscriber and the gateway hold a join counter, and they cannot be read out. If a device now tries to integrate into a network with a join counter that does not match the gateway's, the gateway declines it. As a result, it is not possible to substitute one device with another without this being noticed, even if both have the same UID.

Nonce counter—Each transmitted message has a nonce counter, composed of, among other things, the UID and the number of messages sent by the transmitter so far. Each message is marked uniquely by this mechanism. If a message is intercepted and resent later, it will be identified as outdated and thus rejected. This technique obstructs any manipulation of the communication.

Modifying the network parameters—The network parameters, network ID and join key, can only be changed by the gateway itself or locally at a WirelessHART transmitter via a service interface or the display. No network subscriber or hacker in the network can modify this information.

Myth 2: WirelessHART networks are too expensive

Yes, WirelessHART devices are more expensive than wired HART devices. But, more importantly, how do the costs of the overall communication investment compare? WirelessHART devices are more expensive because:

• they contain ultra-low-power electronics to achieve long battery life,
• they require measures to achieve explosion protection and
• they use high-frequency components.

But the whole solution must be considered, not just the devices. The solution involves engineering hours, labour hours and material.

Infrastructure for wired devices—The measurement signal of a new wired device usually must be connected to a PLC or DCS to use the data.
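The nonce counter amounts to replay protection: a message whose counter is not newer than the last one seen from that device is rejected. A minimal sketch of that check (the class and its interface are invented for illustration):

```python
class ReplayFilter:
    """Toy version of the nonce-counter check described above."""
    def __init__(self):
        self.last_nonce = {}   # highest nonce seen per device UID

    def accept(self, uid, nonce):
        # A replayed or delayed copy carries an old nonce and is rejected
        if nonce <= self.last_nonce.get(uid, -1):
            return False
        self.last_nonce[uid] = nonce
        return True

rf = ReplayFilter()
assert rf.accept("dev-A", 1)
assert rf.accept("dev-A", 2)
assert not rf.accept("dev-A", 2)   # intercepted copy resent later: rejected
assert not rf.accept("dev-A", 1)
assert rf.accept("dev-B", 1)       # counters are tracked per device
```

Because the counter is bound to the device UID and only ever moves forward, an attacker cannot usefully record a valid message and inject it again later.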
This is done either by the system's local I/O, a remote I/O system or a fieldbus connection. While this is easy during a new installation (greenfield), it can be a challenge in an existing installation (brownfield). To add the new component, spare capacity must exist (free slots, channels, terminals). Another issue concerns bringing the wires from the measurement point to the I/O, requiring routing and protection of the device cabling, junction boxes, cable trays and glands, and all of their accessories. All this infrastructure must be ordered, prepared and installed. An accessible location must also be found; otherwise access must be gained by other means, such as by setting up a scaffold tower.

Engineering and labour costs—Before all this, engineers must develop a plan involving where cables can run, which I/O makes sense, and how the work can be executed. The documentation must be continuously updated to track the location of wires.

Hazardous areas—These areas further increase the difficulty and effort compared with general-purpose areas. Engineers must consider local conditions and technical issues. An expert in explosion protection must verify the planned installation, including a secure power supply and zone separation.
Wireless device break-even points—Of course, some planning and installation is also necessary for a WirelessHART network. The chief difference is the effort, since only the WirelessHART gateway requires a powered installation. Local conditions will determine affordability. The WirelessHART devices can be installed in whatever way optimizes the measurement, and separation of explosion zones happens by default, since no physical connection exists between the zones apart from the mechanics (e.g. a thermowell). But how much could be saved? The wireless solution reaches a break-even point with the first installation of three or four WirelessHART devices plus one gateway. For example, consider a well-known case: a monitored heat exchanger with two inputs and two outputs. The heat exchanger will need four temperature transmitters. So assume:

• 4 temperature transmitters,
• a distance of 100 meters between the control room and the scheduled junction box and
• 10 meters of cable between the junction box and each transmitter.

Realizing this solution will cost about US$20,000, where just 20% represents the cost of the temperature transmitters. In the wireless case, assume:

• 4 temperature transmitters and
• a distance of 10 meters between the control room and the WirelessHART gateway.

Realizing this solution will cost about US$15,000, where 80% represents the cost of the WirelessHART devices and the gateway. So the wireless solution saves 25% compared with the wired one. And it will save even more in time; in fact, this solution could be available in a quarter of the time. And the next heat exchanger? Wired, it will cost an additional US$20,000. Wireless, it will just add the cost of the new WirelessHART devices, since the gateway is already available. So while you could get three wireless solutions for the price of two wired solutions, you could get four wireless solutions in the same time as one wired solution!
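The break-even example above is simple arithmetic and can be checked directly. The figures are the article's own, except the split of the wireless total between devices and gateway, which is an assumption made here for the second-exchanger estimate:

```python
wired_first = 20_000      # US$: 4 wired transmitters, cabling, I/O, labour
wireless_first = 15_000   # US$: 4 WirelessHART transmitters plus one gateway

saving = 1 - wireless_first / wired_first
print(f"first installation saving: {saving:.0%}")   # 25%

# The next heat exchanger: wired repeats the full cost, wireless only adds
# devices since the gateway already exists. The article says devices plus
# gateway make up 80% of the wireless total; assume devices alone ~60%.
wired_second = wired_first
wireless_second = 0.60 * wireless_first             # assumed device share
print(f"two exchangers wired:    US${wired_first + wired_second:,}")
print(f"two exchangers wireless: US${wireless_first + wireless_second:,.0f}")
```

Under that assumption, the second exchanger widens the gap from 25% to roughly 40% of the wired cost saved, which is the "three for the price of two" claim in the text.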
Myth 3: WirelessHART networks are unreliable A communication link for process control, or even monitoring, must be reliable and available as needed. Everyone knows examples of communication failing just when it is needed most. So can wireless communication ever be reliable? Surprisingly, it can be more reliable than cable. This is achieved by using a time-synchronized, frequency-hopping, meshed network. Meshed network—As mentioned earlier, every network has a gateway that transforms the wireless data into wired data ready for a DCS or PLC. Most wireless communication has a star architecture, meaning all network participants connect only to the star centre or head. WLAN and mobile phone communication are prominent examples of a star topology. WirelessHART has a mesh, rather than a star, architecture. Within a meshed network the participants communicate with the gateway and additionally among one another. Furthermore, the wireless devices tell the gateway which other participants they can communicate with. Other wireless participants in range are called neighbours. The gateway analyses the information about neighbours and creates a routing table. This table contains the information about which network participant has which neighbours. As participants can reach each other, they can also route data packets from and to their neighbours. In this way, the gateway can create redundant communication paths for each network participant. Should one communication path fail, the sender will automatically switch to a redundant path. Since each transmitted packet must be acknowledged by its receiver, it is easy to recognize a broken link. RSSI and path stability—The radio signal strength indicator (RSSI) indicates the quality of a communication link to the gateway. Knowing this, the gateway can determine whether enough reserve strength is available or whether the signal level is already too low.
Since the gateway gets the RSSI of each single communication link, it can readily distinguish between high- and low-level signals. Additionally, the gateway counts the data packets lost during transmission on each link. By comparing this with the total number of packets transmitted within the network, the gateway can recognize paths with high losses and retransmissions. It uses both kinds of information to identify good and bad paths in the network, and can then pick the good paths that the network participants should use to communicate. FHSS and DSSS—To ensure reliability, WirelessHART makes use of two techniques: Frequency Hopping Spread Spectrum (FHSS) and Direct Sequence Spread Spectrum (DSSS). WirelessHART is a frequency hopper in its 2.4 GHz band: after each transmission between two network participants, the radio channel changes. Hopping across multiple frequencies is a proven way to sidestep interference and overcome RF challenges. Should a transmission be blocked, the next transmission will be to an alternate participant on a different frequency. The result is simple but extremely resilient in the face of typical RF interference. DSSS transmits more information than necessary: it sends eight bits for each single information bit. Every bit is encoded in such a way that the original bit is restored even if fewer than half of the eight bits arrive intact. This makes the communication more robust against short disturbances, and data does not need to be re-transmitted, which saves time, bandwidth and energy. Redundancy—Because each WirelessHART device can route data for other devices, it is possible to set up a network topology with redundant paths for each network participant. Having at least three independent, good communication paths ensures reliable communication with the gateway. The gateway can determine all the information concerning topology, network traffic, and the quality of the communication paths.
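The neighbour-report and redundant-path idea can be illustrated with a toy sketch. The topology, device names and the simple breadth-first path search below are all invented for illustration; a real WirelessHART network manager also weighs RSSI and packet-loss statistics when choosing paths.

```python
# A toy sketch of the neighbour-report / routing-table idea described above.
# Device names and topology are invented for illustration.
from collections import deque

# Each device reports which neighbours it can hear (including the gateway "GW").
neighbour_reports = {
    "T1": {"GW", "T2"},
    "T2": {"GW", "T1", "T3"},
    "T3": {"T1", "T2"},  # T3 cannot reach the gateway directly
}

def paths_to_gateway(device, reports, gateway="GW"):
    """Enumerate loop-free paths from a device to the gateway (BFS)."""
    found = []
    queue = deque([(device, [device])])
    while queue:
        node, path = queue.popleft()
        for nxt in sorted(reports.get(node, set())):
            if nxt == gateway:
                found.append(path + [gateway])
            elif nxt not in path:
                queue.append((nxt, path + [nxt]))
    return found

# T3 has no direct link, yet the mesh gives it redundant routes via T1 and T2,
# so a single broken link does not isolate it.
print(paths_to_gateway("T3", neighbour_reports))
```

If the link T3-T1 fails, the path via T2 remains, which is exactly the redundancy property the gateway engineers into the routing table.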
Myth 4: The range of WirelessHART networks is too short A common question concerns the maximum distance that can be covered by WirelessHART. Answers relating to surroundings and obstructions sometimes confuse the issue. What range does a WirelessHART device actually need to achieve? The practical answer revolves around the network setup, bandwidth, and repeaters. Network setup—The ultimate aim of the network is to get the wireless data to a gateway that transforms it into wired data ready for a DCS or PLC. A properly
set up WirelessHART network has at least three devices within range of each other, including the gateway. This ensures a reliable connection to the gateway. In addition, the gateway should be located towards the middle of the network; otherwise devices near the gateway become pinch points that shorten battery life and risk network failure. Following these recommendations for network setup should provide coverage of nearly 200 feet, even in a highly obstructed area. In reality coverage will often extend to 300 feet. Large installations will involve installing more measuring points, which automatically expands the network coverage, as every new WirelessHART device will route communication for other devices. Frequency spectrum and bandwidth—To minimize power consumption, reduce the number of device transmissions to whatever is necessary to serve the application, and keep the number of re-transmissions as low as possible too. To avoid collisions, WirelessHART uses time-division multiple access, meaning each link has its own time slot in which to communicate. If a link fails for some reason, transmission passes to another link. WirelessHART uses the licence-free 2.4 GHz ISM (Industrial, Scientific and Medical) band. This band can be used by any other application as well, so WirelessHART must share its bandwidth with all other technologies working in the same band (WLAN, Bluetooth etc.). This will cause collisions and re-transmissions for each device within the network, since these different networks are not synchronized with each other. To keep the network reliable and stable, time slots for re-transmissions must be reserved even if rarely needed. Faster update rates for a device require more time slots, and the total available network bandwidth decreases. In fact, an update rate of 1 second can easily limit one gateway to a maximum of about 12 devices.
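The "about 12 devices at a 1-second update rate" rule of thumb can be reproduced with a back-of-envelope slot budget. The 10 ms TDMA slot length is part of WirelessHART; the number of slots reserved per update (publish plus retry and routing overhead) is an assumption chosen here to match the article's figure.

```python
# Back-of-envelope TDMA slot budget for a single WirelessHART gateway.
# Slot length (10 ms) is the WirelessHART time slot; slots_per_update
# is an assumed allowance for retries and routing overhead.

SLOT_MS = 10
SLOTS_PER_SECOND = 1000 // SLOT_MS  # 100 slots available each second

def max_devices(update_rate_s, slots_per_update=8):
    """Devices a gateway can serve if each update reserves several slots."""
    slots_per_device_per_s = slots_per_update / update_rate_s
    return int(SLOTS_PER_SECOND / slots_per_device_per_s)

print(max_devices(1))   # 12 with the assumed per-update overhead
print(max_devices(16))  # slower updates leave room for far more devices
```

The exact capacity depends on retry reservations and topology, but the inverse relationship between update rate and device count holds regardless.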
As an alternative, operating two WirelessHART networks in parallel is possible, but this will also lead to collisions, reducing the bandwidth of both networks. As opposed to one long-range network, having two short-range networks covering different areas with only a small overlapping area will increase their stability and device battery lifetime. Repeater or routing device—Sometimes a measuring point is too far away from a network to connect. This can be corrected by installing additional routing devices. Any WirelessHART device will do, but the best fit is a device that is small, requires minimum effort to install, and provides an easily replaceable battery. Myth 5: WirelessHART devices constantly need new batteries What would a wireless device be if it required a power cord? Not completely wireless, of course. So an independent and reliable power supply is mandatory. Batteries can fulfil this requirement, but with the disadvantage of their finite energy. For sure, dead batteries must be replaced to get a battery-powered device running again. But how big is this disadvantage really? ABB’s WirelessHART devices use an industrial-standard D-size primary cell. This cell was especially designed for extended operating life over a wide temperature range of -55°C to +85°C to fulfil the requirements of the process industries. But how much lifetime is achievable? It depends. Battery life is not predictable as a hard fact; rather it behaves like the fuel consumption of a car. Some need more, some need less, depending on acceleration and speed, vehicle weight, and traffic. To maximize battery life, ABB electronics have an ultra-low-power design, consuming 20 times less power than a conventional 4-20 mA HART device. All components have been chosen for their functionality and their current consumption. The design goal is to consume the minimum energy possible, including in software. For example, sub-circuits power down if not needed.
So the sensor itself powers down between two measurements, as does the display. If the update rate is slow enough, the device will fall into a “deep-sleep mode” between two measurements as often as possible. The update rate is the user-defined interval at which a wireless device initiates a measurement and transmits the data to the gateway. The update rate has the largest impact on battery life: the faster the update rate, the lower the battery life. This means the update rate must be as slow as possible while still meeting the needs of the application. Depending on the time constant of the process variable, the update rate should be 3 to 4 times faster for monitoring and open loop control applications and 4 to 10 times faster for regulatory closed loop control and some types of supervisory control. Special attention should be paid to update rates faster than four seconds. These faster rates will prevent the device from going into the deep-sleep mode. They will also consume much more power, impacting the total number of devices that can be handled by one gateway. Burst command setup—All WirelessHART devices are able to burst up to three independent HART commands. The update rate of each command can be set up separately, but as described before, the device tries to fall into deep-sleep mode as much as possible. By default, the update rates are set up as multiples of each other, giving the device the best conditions to save as much energy as possible. Network topology—Mesh functionality can also influence battery life, since each device has routing capability. If one device acts as a parent for another device and both devices are set up with the same burst configuration, the parent must transmit data twice as often as its child. The most power-saving network topology has all devices within effective range of the gateway. While this is rarely possible, it makes it all the more important to think carefully before placing the gateway.
To extend battery life, the gateway should be placed more or less in the middle of the planned network. In this way, the devices acting as parents are equally distributed, rather than relying on only a few devices to route data. Knowing all this about battery life, what can be expected? Taking all these energy-saving recommendations into account and assuming:
• bursting one command,
• a direct communication path to the gateway,
• three child devices with the same update rate and
• using the device at 21°C,
the battery life could last up to:
• 5 years with an update rate of 8 seconds,
• 8 years with an update rate of 16 seconds and
• 10 years with an update rate of 32 seconds.
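The relationship between update rate and battery life quoted above can be captured by a crude model: a constant sleep-mode drain plus a fixed energy cost per update. The constants below are fitted to reproduce the article's figures approximately and are not measured values.

```python
# A crude battery-life model fitted to the figures quoted above.
# Assumption: 1/life = sleep_term + tx_term / update_rate, with the
# two terms fitted to the article's numbers (not measured data).

def battery_life_years(update_rate_s):
    """Estimated battery life (years) for an update rate in seconds."""
    return 15 * update_rate_s / (update_rate_s + 16)

for rate in (8, 16, 32):
    print(f"{rate:>2} s update rate -> ~{battery_life_years(rate):.1f} years")
```

The fit gives 5.0, 7.5 and 10.0 years at 8, 16 and 32 seconds, close to the "up to" figures above; real life additionally depends on temperature, routing load and burst configuration as described in the text.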
If a faster update rate is favoured, or if the device has a key position for routing within the network, ABB’s Energy Harvester option can reliably relieve the battery. And last—but not least—ABB’s WirelessHART transmitters use standard batteries, making them easy to procure. This will not extend battery life, but it will save money. Myth 6: WirelessHART networks require specialists to set up A lot of engineers think that setting up a wireless network can be an arduous and annoying job: getting everything running, ensuring secure communication and including all desired network participants can take much time. But is this true? What do we really need to do to get a WirelessHART network running? The elements of a WirelessHART network include:
• field devices connected to the process or plant equipment (they must all be WirelessHART capable),
• a gateway that enables communication between host applications and the field devices in the WirelessHART network and
• a set of network parameters: Network ID and Join Key.
That’s it. Now you can set up your network in a few steps: Input of network parameters—To get the gateway into proper operation you must input the network parameters. This can be done easily via the integrated web browser of the WirelessHART gateway; most gateways provide this convenient means of configuration. Now the network participants can join the network. They also need the network parameters. The easiest way is to order the devices pre-configured with the desired network parameters; otherwise you must input the parameters manually. Since all WirelessHART devices provide a maintenance port, you can use the tools already available for wired HART devices, avoiding the need for additional equipment, and they can be operated just like wired HART devices. Additionally, ABB WirelessHART devices can be brought into operation just by using their HMI. Again, you need not concern yourself with security because it’s built in.
Update rate—All WirelessHART devices burst their measurement values. By default, all ABB WirelessHART devices burst HART command 9 every 16 seconds. This includes the dynamic variables PV, SV, TV and QV (for devices with multiple outputs) with the status of each and the remaining battery lifetime. They burst HART command 48, the additional device status information, every 32 seconds. So typically you needn’t deal with the burst configuration at all; nevertheless the commands or the update rates can be changed as needed. Placement of field devices and gateway—Start with the gateway installation. Find a suitable place for it and power it up. As the connection between the host application and the WirelessHART network, it will need a power supply and a wired connection to the DCS. Once the WirelessHART devices have been prepared they can be installed in the field. Installation is done in the same familiar way as for wired HART devices, but WirelessHART devices require less effort because they have no wires. This is especially true in hazardous areas, where nothing will cross the zones and no output device needs to have its Ex parameters checked against an input device. After the devices are powered up they will appear in the network automatically. Everything else is handled by the gateway; the user does not need to take care of meshing the network or of which device communicates with which. Myth 7: WirelessHART is too slow When asked for the required speed to cover an application, a user will often answer: as fast as possible. The update rate for WirelessHART devices within a network can be configured individually between once per second and once per hour. Is that fast enough for everything? Let’s look at a few considerations before answering too quickly. Usage—First, examine the uses for which a WirelessHART network is actually intended: condition monitoring and process supervision.
Remember, the wireless sample/update rate should be:
- 3 to 4 times faster than the process time constant for condition monitoring and open loop control applications and
- 4 to 10 times faster for regulatory closed loop control.
Of the measurements in the process industries today, more than 60% simply monitor conditions and are not used for control applications. So a WirelessHART update rate greater than or equal to one second may fit many of these applications, although other factors may apply too. Timing—For wired devices, update rates and timing aren’t often considered. Engineers and operators assume the values in the DCS are the real-time values from the process, achieved by oversampling. In fact, signals are often converted and scaled from the initial sensor element before reaching the DCS, so in a traditional wired installation the measurement values also have latencies. Instrument engineers are rarely aware of these, but just assume the values are timely enough. In the world of WirelessHART, the data packets have time stamps that spell out how old a measurement value is. This indicator lets engineers assess latencies and react to them properly. Thinking differently—Instrument engineers must know how fast a process value can change, for both control applications and condition monitoring. No additional knowledge is needed for WirelessHART. For wired installations, this knowledge affects a DCS or PLC; for WirelessHART, it affects the planning of the network. Because the bandwidth is a limited resource, engineers must consider how fast the update rate needs to be rather than how fast it could be. Comparing speeds—The traditional FSK-HART loop provides a speed of 1,200 bits per second. In practice, HART on RS-485 cable is limited to 38,400 bits per second. WirelessHART provides a speed of 250,000 bits per second. This means WirelessHART is more than 200 times faster than wired HART and even six times faster than HART over RS-485 cable.
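The speed ratios quoted above follow directly from the three raw bit rates; a two-line check:

```python
# Checking the speed ratios quoted above against the raw bit rates.
FSK_HART = 1_200        # bits/s, traditional FSK HART loop
HART_RS485 = 38_400     # bits/s, HART over RS-485 cable
WIRELESSHART = 250_000  # bits/s, WirelessHART radio rate

print(WIRELESSHART / FSK_HART)    # ≈ 208, i.e. "more than 200 times faster"
print(WIRELESSHART / HART_RS485)  # ≈ 6.5, i.e. "even six times faster"
```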
By allocating the “Fast Pipe” to a network participant, the wireless gateway provides a high-bandwidth connection that is four times faster than normal. This is ideal for transmitting a large amount of data, such as up- and downloading a complete configuration.
Article: Getting data quality wrong and how to get it right
Introduction
The Water Industry has always been known to be data hungry: one estimate puts the amount of data collected in the UK alone at hundreds of millions of pieces of data per day, ranging from customer billing data to the operational data used to monitor the performance of the industry on a continuous basis. However, the question has to be asked as to how much we can rely on this data on a day-to-day basis. This is especially the case given recent articles in the trade press disputing the accuracy of things like Smart Meters. In reality, the customer side of things can be relied upon: it is used for billing purposes, there are strict international standards around the manufacture of flow monitoring equipment, and quality control checks, independent auditing and management systems ensure that things are accurate. The advent of Smart Meters has suffered some teething problems in some areas of the world, but how much of this is people getting used to vast quantities of data, and how much is the standard uncertainty that is normal with all measuring systems, is debatable. Despite all of this there is the final quality control check of the customer, who will rightly challenge where things aren’t necessarily right. Putting this aspect of the Water Industry aside, it is on the operational side of the business that the use of instrumentation to collect data can have interesting results in informing operations as to the current state of affairs, especially on the wastewater side of the business where the challenges of measurement are high. This article discusses the consequences of poor-quality data and how standard calibration & verification techniques can ensure the quality of data on the operational side of the business, with specific emphasis on wastewater.
Where can things go wrong?
Most operational members of staff have seen the obvious errors in measurement on site, especially when looking at SCADA or telemetry. The mimic that says the final effluent of the treatment works is supposedly at 1000°C is amusing but certainly won’t be believed for a second. These obvious errors are annoying, as the true data is not available, but they don’t have the potential to cause any damage: they are simply wrong. On the flip side are the innocuous errors that could be right but in reality aren’t. So what are the causes of poor-quality data associated with online instrumentation?
• Poor selection of the instrument type - for example selecting a high-range instrument for a low-range application
• Poor installation - for example installing a flow meter with insufficient space
• Poor commissioning - for example measuring the wrong empty distance on a level-based instrument
• Poor maintenance - not keeping the instrument clean or not replacing consumable parts or reagents
• Telemetry errors - not checking the scaling between the instrument & the telemetry system
These are probably the five most common problems, and they produce erroneous results that, unlike the physically impossible 1000°C final effluent reading, are plausible enough to be believed. An example of a flow meter result where a simple error in the calibration of a flow meter caused a surprising result is shown in figure 1 below. Figure 1 - A long term calibration error
Figure 1 shows a flow meter reading over a period of 8 years. The installation was subject to routine checking procedures, but a change in the instrumentation performing the measurement, together with an error in the setup, meant that the flow meter was reading significantly higher than the true situation. As the flow meter was still within its consented dry weather flow over the period, the error was not dramatic enough to be disbelieved. It was only during a routine dip check that a particularly diligent member of the maintenance staff picked up the error, and the correction was made. It was only afterwards, looking at the long-term scale, that the error in flow measurement became particularly evident. This error was caused by a very minor error in the empty head distance and, once it was realised, could be fixed in less than 5 minutes. Figure 2 demonstrates another common cause of error in online instrumentation: telemetry scaling issues. In figure 2 we see an example, again of flow-based measurement but using a different technique. The error in this case is within the realms of believability and could be typical of a particularly wet year. However, routine calibration of the meter itself showed that the scaling in the telemetry system differed significantly from that on site. As a result the meter in telemetry was reading approximately 2.5 times higher than the meter on site, making the site appear non-compliant with its permitted limit. In both of these cases a false situation was created by online measurement, in this case of the flow passing through a wastewater treatment works. Depending upon what is actually being measured and how that measurement is being used, this could lead to anything from under-estimation of what is leaving a wastewater treatment works, to poor control of operational processes, to poor investment decisions. Some scenarios.....
Scenario 1 - A Dissolved Oxygen probe on an activated sludge plant is reading 4 mg/L when the actual process condition is 1 mg/L. The plant is using standard PID loop control with no ammonia monitoring on the effluent of the treatment works (a very common situation). The control valves of the aeration system close, decreasing the amount of air in an attempt to control to what is thought to be 2 mg/L. The process condition is actually below 0.5 mg/L, and not enough air is being provided for the bacteria or for maintaining the minimum air flow for mixing. The MLSS level crashes as it all settles to the bottom, and the ammonia levels rise due to both insufficient mixed liquor and insufficient air. A pollution event is the result. Scenario 2 - A treatment works has a history of flow non-compliance with its dry weather flow consent, and the flow meter readings are trusted. This results in an investigation into the root cause of the flow non-compliance, which appears to be infiltration related. This triggers surveying of the collection network, which reveals very little infiltration, as the flow meter readings are actually falsely high. As a result, the investment option chosen is to apply for an increased permit and expansion of the treatment process. This results in unnecessary investment and a works that is suddenly over-sized for its current flows and loads, creating not only unnecessary CAPEX but also unnecessary ongoing OPEX, and a works that is more difficult to operate. Both of these scenarios are hypothetical, but they have a grain of plausibility within them. The impact of poor measurement from online instrumentation can be large. This highlights the importance of the maintenance of online instrumentation: if in the future there is to be a greater reliance on online instrumentation, it comes with an additional responsibility, the need to maintain the instrumentation in order to maintain the data quality. Figure 2: A typical example of scaling error
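To see why the "very minor error in the empty head distance" behind Figure 1 could move the reported flow so far, consider a level-derived flow measurement. The V-notch weir relation (Q = C·h^2.5) and every number below are generic illustrations; the article does not specify the actual primary device at the site.

```python
# Why a "very minor" empty-head error matters: with level-derived flow,
# the head error feeds through a power law. A generic V-notch weir
# relation Q = C * h**2.5 is used purely for illustration; the actual
# site and primary device behind Figure 1 are not specified.

C = 1.4  # illustrative discharge coefficient, SI units

def flow(head_m):
    return C * head_m ** 2.5

true_head = 0.100  # 100 mm of true head over the weir
offset = 0.010     # a 10 mm error in the configured empty distance
error = flow(true_head + offset) / flow(true_head) - 1
print(f"Flow over-read: {error:.0%}")  # 27% high from a 10 mm offset
```

A 10% head error becoming a roughly 27% flow error is the power law at work, which is why a dip check against the configured empty distance is such an effective verification.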
AQC & Maintaining Instrumentation
The Water Industry is expert at maintaining the instrumentation present on its treatment systems. Hundreds of thousands of checks are carried out each year, along with all of the testing that is done in the laboratories. When you work in a laboratory there is something called Analytical Quality Control (AQC): normal laboratory procedure sees both duplicate and check samples being run. This isn’t once a week, once a month or even once a year; the frequency of these checks is at the very least once a day, and depending upon the analyte it can be several times an hour (1 in 20 samples being a duplicate and 1 in 20 a check sample, alternating, so that a check of some kind occurred 1 in 10 times, used to be typical). All of the check samples were traceable back to national or international standards, and the laboratory method was certified along with the laboratory itself (typically to ISO17025). Moving to online instrumentation, depending upon the type of instrumentation, this rarely happens. The checking of whether an online instrument is actually working correctly very much depends upon operational or operational maintenance procedures. All of this depends upon the type of instrument and what is actually being measured.
• Online analytical analysers tend to have an internal calibration sequence that uses traceable standards to calibrate the analyser on a regular basis (typically daily). This ensures that the analyser remains accurate.
• Electromagnetic flow meters tend to go through complex factory-based calibrations against a master meter, with the factory calibration hard-coded into the meter. This is then internally verified by the meter itself to ensure it keeps within the tolerances of the factory calibration.
• Level-based flow meters can typically be compared against a calibration plate to check that a standard distance is maintained.
• Dissolved Oxygen probes typically have a replaceable measurement cap that needs to be changed in order to maintain measurement integrity (typically an annual task).
Online instrumentation is accurate as long as the quality-control disciplines that are so diligently practised in the laboratory are also applied in the field. In principle the operational tasks for online instrumentation are different, but they are based upon the same principles of quality control. For example, when you work in the laboratory the principle of avoiding cross-contamination is driven into you. It is something every analyst in the laboratory has done, and every analyst has paid the price for, in the form of wasted analytical time and embarrassment. For online instrumentation the equivalent is the principle of keeping your measurement point and your instrument as clean as practically possible. Apart from keeping online instrumentation clean, there are the concepts of calibration versus verification. These two concepts are often confused and misunderstood, and this is where online instrumentation needs to borrow from its analytical, laboratory-based relations. Calibration, in terms of an online instrument, is the procedure of adjusting the measured parameter of an instrument so that it matches that of a traceable method of measurement. This is often done by applying a factor within the instrument itself, and it is often something that requires the instrument to be returned to the original manufacturer, although some manufacturers have field services that can accomplish a calibration routine. For analytical instruments this can be accomplished using traceable standards in the field; it should almost always be a wet method of analysis. In the laboratory this would be comparable to running a calibration curve. Verification, in terms of an online instrument, is the checking of an instrument against a known measurement in order to confirm the correct operation of the instrument.
It would not normally involve making any changes to the instrument itself. For an analytical analyser this would involve taking an independent sample and comparing the results; for a flow meter, checking the gauge or using an independent meter. A variant of verification is electronic verification, which checks that the electronics of the device are working within a standard tolerance. Lastly, for any instrument on site there is the end-to-end testing of the telemetry systems: checking what the system is actually reading is the last vital check. As the earlier example shows, this is a step that is often missed. Discussion Getting the quality of data right is actually a very simple thing to do in theory but, as is often the case in life, one of the most difficult things to put into practice. The steps are simple:
1. Select, install and commission any online instrument correctly. Do not cut corners, as this will most often result in poor data quality.
2. Keep it clean & maintained - easier said than done, especially in a wastewater environment, but absolutely vital. If it can’t be accessed then move it to where it can be. All instruments should be accessible for maintenance, especially ones with consumable parts.
3. Keep checking it - getting into the habit of walking by a meter and seeing if it’s working or not provides the first warning that something is not right. Comparing it against a known sample using verification methods is the next step.
4. Check that what you are getting on site is what everyone else is reading too.
In reality the practices that have been developed in laboratories, under Analytical Quality Control, should also be applied in the field with online instrumentation if the quality of the data in the water industry is to be relied upon in the future, especially as the volume of data is set to increase dramatically.
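A field verification of the kind described above reduces to a simple tolerance check: compare the instrument against an independent reference and flag drift, without touching the instrument's settings. The tolerance, readings and reference scenario below are invented for illustration.

```python
# A minimal sketch of a field verification check (not a calibration):
# the instrument reading is compared against an independent reference
# and flagged if it drifts outside a tolerance band. The tolerance and
# readings are invented for illustration.

def verify(instrument_reading, reference_reading, tolerance=0.05):
    """Return (passed, relative_error); no instrument settings are changed."""
    rel_error = abs(instrument_reading - reference_reading) / reference_reading
    return rel_error <= tolerance, rel_error

# e.g. a DO probe read against a freshly calibrated handheld meter
ok, err = verify(instrument_reading=2.08, reference_reading=2.00)
print(ok, f"{err:.1%}")  # passes, at 4.0% relative error
```

Calibration would be the separate, traceable step of adjusting the instrument; verification like this only confirms (or questions) its correct operation, which is exactly the distinction drawn above.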
In essence this means taking the culture of AQC from the laboratory and applying it to the field-based environment. The alternative, as the industry becomes more reliant on field-based online instrumentation, is that the operational situation is seen from a slightly skewed and erroneous point of view.
Introduction
Water 4.0 is a concept that has recently been raised as the “future” of the Water Industry...possibly, but apart from being a paraphrase of Industry 4.0 the question has to be asked - what is it, and what has it got to do with the way the Water Industry operates in its current state? To define what exactly Water 4.0 is we have to look at Industry 4.0 and what came before it, i.e. Industry 1.0, 2.0 & 3.0. So what are these?
Industry 1.0 - The first industrial revolution, involving the mechanisation of production using water and steam power. Think water mills and steam engines.
Industry 2.0 - In short, think of electricity and what it did for the mechanisation of industry.
Industry 3.0 - Think electronics and computers: basically the start of automation within industry.
So what is Industry 4.0? It is a collective term for technologies and concepts of value chain organization. Based on the technological concepts of cyber-physical systems, the Internet of Things and the Internet of Services, it facilitates the vision of the Smart Factory. Within the modular structured Smart Factories of Industry 4.0, cyber-physical systems monitor physical processes, create a virtual copy of the physical world and make decentralized decisions. Over the Internet of Things, cyber-physical systems communicate and cooperate with each other and with humans in real time. Via the Internet of Services, both internal and cross-organizational services are offered and utilized by participants of the value chain. It is based upon six design principles:
1. Interoperability: the ability of cyber-physical systems (i.e. work piece carriers, assembly stations and products), humans and Smart Factories to connect and communicate with each other via the Internet of Things and the Internet of Services
2. Virtualization: a virtual copy of the Smart Factory, created by linking sensor data (from monitoring physical processes) with virtual plant models and simulation models
3.
Decentralization: the ability of cyber-physical systems within Smart Factories to make decisions on their own 4. Real-Time Capability: the capability to collect and analyze data and provide the insights immediately 5. Service Orientation: offering of services (of cyber-physical systems, humans and Smart Factories) via the Internet of Services 6. Modularity: flexible adaptation of Smart Factories for changing requirements of individual modules The “Cyber Physical System” element of this can be defined as a system of collaborating computational elements controlling physical entities. CPS are physical and engineered systems whose operations are monitored, coordinated, controlled and integrated by a computing and communication core. They allow us to add capabilities to physical systems by merging computing and communication with physical processes. So how does this apply to the Water Industry? Industry 1-4 all apply to the manufacturing industry and for that industry it is relatively simple, something is being fabricated and put together putting together distinct parts. The Water Industry is actually quite different - be it Potable water or Wastewater it is being cleaned for discharge either to the customer’s tap or back to the environment. In reality, operationally, does Industry 4.0 apply to the Water Industry or are we trying to force concepts from another industry onto the Water Industry and creating something that doesn’t quite work? Possibly? But let’s play around with the design principles briefly and see where we get and see how far the Water Industry is with the concepts Interoperability - The way that I read interoperability is the ability of Water Industry Operators to connect, communicate and work with the treatment, collection and distribution systems to find out what is going on and be able to connect remotely. If you ignore the concept of doing this over the Internet it is arguable that we already have the ability to do this through SCADA systems. 
In someways you can almost say the Water Industry has achieved this on large treatment works and to some aspects with distribution systems but are no where near the interoperability concept on smaller treatment works and collection systems. Rating - Big Tick....at least in parts of the industry Virtualisation - a virtual copy of the Smart Factory - arguably a big tick in the Water Industry box. We have telemetry systems which at least allows us to see what is going on. On Advanced Wastewater Treatment Works we have process models that control aspects of the treatment works and in both advanced distribution and collection systems we even have a model based simulation models. It is certain that the technology is not quite there yet on a company wide basis but in pockets in the Water Industry it certainly works and is in place. Article: Is Water 4.0 the future of the Water Industry? Page 30
Rating - Not far off

Decentralisation - The ability of the treatment works and network systems to control themselves. Again, arguably this already exists: we as an industry have elements of treatment works that are more than capable of controlling themselves through monitoring and control systems, and we have pumping stations that control pass-forward pumps based upon the signals from level controllers. We have PLCs that act as control centres for treatment works or individual parts of treatment works. So, a big tick: has the Water Industry achieved the principle of decentralisation? Perhaps....

Rating - A big tick? Perhaps

Real-Time Capability - The capability to collect and analyse data and provide the insights immediately? Hmmm... how do you define immediately? Is it applicable to the Water Industry? Is immediately necessary? This is an area in which the Water Industry can definitely develop. The basics can be said to be done: we have the ability to alarm out if something is wrong, and even the potential to react to the alarm remotely (on some systems) to repair the potential problem. Under Water 4.0 and the principles of Virtualisation and Decentralisation the system should of course react itself. There is the potential for real-time or even near-time capability (as applicable to the industry), but to be fair this is an area where the Water Industry could grade itself as "an area for improvement".

Rating - An area for improvement

Service Orientation - We're a service industry, so this is an absolute tick in the box... or is it? Well, actually, probably not:

• Water meters are mostly manually read once or twice a year
• Customer bills and other customer communications are mostly paper based and come through the post, although some communication is through social media
• Customer queries are handled over the telephone, although text messaging and social media are becoming more popular
• Customer analytics are rare at best, although with the advent of Smart Metering this is an area in which the industry is actively pursuing improvements

Rating - An area where improvements are being made, but generally could do better

Modularity - A flexible approach? Changing requirements? Does this design principle apply, and are we already doing it? Again, arguably the answer is yes. The picture to the right is from a large wastewater treatment works and to me demonstrates modularity in the design of the final tanks of the treatment works, as well as flexibility of operation. The control system of an individual tank will be exactly the same as the control system for the tank next door to it (or, probably in this example, the group of tanks next door). Some of the water companies in the UK have control system libraries so that they can take a control module from the "library" and apply it, with a little bit of tweaking, to site requirements. So has the Water Industry achieved the design principle of modularity? Arguably perhaps, but certainly not across the whole industry, and perhaps not if you are going to take a purist view of Industry 4.0; from a Water 4.0 point of view it's a definite maybe.

Rating - Getting there

Purely going on the design principles of Industry 4.0 we can argue that Industry 4.0 does apply to the Water Industry, and so as a concept at least Water 4.0 is a direction that we should be moving towards and in parts have actually achieved. But, as with anything, you can take the individual ingredients of any recipe and put them all together in a mixer; it doesn't of course mean that you will get anything resembling sense out of the other end.

Delivering Water 4.0 - What does it practically mean for the Water Industry?

So, in nuts and bolts, what does Water 4.0 actually look like from a Water Industry point of view? Well, for me it's a case of going back to basics, seeing what the Water Industry currently has and what it can do to bring the industry forward to a point where we are at least adhering to the design principles. For me at least it is the management of the "anthropogenic water cycle", from when we abstract water from the source to when we put it back into the environment, and arguably further than that. It is seeing what we want to do, having a look at the technological gaps and then plugging them. There are examples of where this has been done, at least in part, and it is these examples that we must look towards to shape the future of the Water Industry. To use the principle of the SWAN Layers, where are we?

Physical Layer - The first and most extensive of the layers, including all of the assets themselves, from pipes to tanks to pumps. This is the base of the Water Industry and it must be managed through the use of asset management systems, recording the assets that we have in a consistent way and in the same way across the Water Industry. Believe it or not this is an area of challenge, as across the Water Industry the nomenclature is completely different. All of these assets of course need to be managed in the short, medium and long term with systems such as Computerised Maintenance Management Systems (CMMS) and potentially Condition-Based Maintenance Management Systems (CBMMS).
Sensing & Control Layer - This layer is relatively simple and yet is probably one of the major stumbling blocks within the Water Industry, the main reason being that the requirements of the Sensing & Control layer have generally been very poorly specified, which has allowed the proliferation of the phenomenon of Data Richness, Information Poverty. As such, instrumentation has been installed with little or no attached value. This has led to the devaluing of instrumentation as a whole and the inability to extract usable intelligence from the vast amount of data that is collected every day. If Water 4.0 is to become a true reality in the Water Industry then an exercise to define the information that the Water Industry needs to operate must be completed. From the information requirements come the data needs, and from these the instrumentation that is required to feed the data needs. At this level, Sensing & Control management systems are needed, as well as data validation systems to check the quality of the data that is collected. The Sensing & Control level is absolutely vital if the Water Industry is to deliver Water 4.0.

Collection & Communication Layer - The telemetry system layer, where all of the data from the Sensing & Control Layer is collected, also includes PLCs and SCADA systems. It is at this level that a lot of the debate will happen in the Water Industry, and it is potentially where the so-called Internet of Things comes into play, connecting instruments with the wider system. For the Water Industry there are numerous different elements, from the Water Industry Telemetry Standard (WITS) to the existing SCADA and PLC structure. The main concern, and the main stumbling block for Water 4.0, lies within this layer and concerns digital or cyber security. If you say to a communications or telemetry specialist in the Water Industry that you are just going to connect an instrument up to the Internet of Things, the answer will be a quite firm "never in a million years"; bring "the Cloud" into the mix and you are definitely not going to be successful in your endeavours, and the less said about local communication protocols the better. In fact the discussion over communication protocols in the Water Industry is assuredly going to be a debate for many years to come. If the definition of Water (or Industry) 4.0 is to connect to the Internet then it is more than likely that in the Water Industry it will never become a reality.

The Data Management & Display Layer (Layer 4) and the Data Fusion & Analysis Layer (Layer 5) are probably the layers that are developed in some respects but undeveloped in others. Models of the various aspects of the Water Industry exist, as do complex telemetry and information management systems. In addition to these are the business reporting systems - SAP, Click and all of the other management systems - and now all of the Software as a Service (SaaS) systems that are available. On top of these are the various Excel spreadsheets and Access databases that are almost a prerequisite in the industry. The problem with this is that there are several different versions of the truth, and accessibility to all of these different systems is compartmentalised across the various companies. The result, of course, is that the truth becomes the truth depending upon whose information you are looking at.

Conclusions

Water 4.0 - is it something for the Water Industry, is it something that the Water Industry has already achieved, or are we on the path to it? The quick answers are that it is something for the Water Industry and that in large part we have been moving towards it for a number of years. As an industry we are moving further and further towards a factory approach to the products that we produce, whether it is potable water for drinking, treated water for returning to the environment or biosolids to be used on agricultural land. More and more we are seeing product factories, the minimisation of losses (through leakage reduction) and the maximisation of the products that we can produce (through resource recovery). We as an industry are focused on providing the best customer service that we can, hence why more and more companies are metering the water they provide, in a large number of cases through "Smart Metering", to work with the customer to provide the best customer service. Water 4.0, the Smart Water Industry or just plain efficient operation (in truth, whatever you want to call it) is central to these ways of working, and it is through the development of the design principles of Industry 4.0 that we can deliver the future of the Water Industry.

However, there are some barriers to this approach to take into account, and some decisions that need to be taken not at a company level but, in real terms, at an industry level as a whole. The first of these barriers is that of communication protocols, insofar as we are an industry working mainly off analogue signals, with Profibus on larger plants. The industry seems to be heading towards a future of Ethernet, and in the UK there is the whole direction of the Water Industry Telemetry Standard (WITS), with some heading in that direction and some not. The second is cyber security, which is becoming an increasingly urgent issue. For those talking about Cloud or Internet of Things environments, proof of absolute security is an absolute must. Incidents of hacking of water treatment works which have hit recent news, along with past incidents, only make the issue all the more important. The impact of a hacking incident that changed chemical dosing could have serious implications for customers or the environment, and zero risk must be the way forward for the Water Industry to even investigate this area. The third is instrumentation and data quality, and an end to Data Richness, Information Poverty. The Water Industry has a vast number of instruments which produce a vast amount of data that gives no actionable intelligence, and in reality it needs to move towards an era of simply Information Richness, where the information that is needed is available to the people that need it, in an easy and digestible format that provides one version of the truth. This information of course needs to be accurate, which requires the correct instrumentation to be purchased, installed, operated and maintained correctly. This is not always the case in the Water Industry of today, as the perceived value of data and information is relatively low. Water 4.0 is something that the Water Industry should be aiming towards; how we are going to get there is going to be the fun bit over what is probably going to be the next decade or two.
Article: Using Online Water Quality Distribution Systems Monitoring to Detect and Control Nitrification

Abstract

Distribution system water quality monitoring is still in its infancy, but there is an emerging realization of its potential value. The initial emphasis has been on developing ways to detect deliberate or inadvertent chemical and biological attacks on water distribution systems. The potential for harm was made clear in the case of the Milwaukee Cryptosporidiosis outbreak of 1993 (McGuire 2006), in which thousands of people became ill. Historically, water quality monitoring of the distribution system has been limited to compliance with regulatory standards such as chlorine residual and total coliform. Yet the need for more comprehensive monitoring has been demonstrated by a growing body of research indicating that water quality can change significantly between the water treatment plant (WTP) and the ultimate consumer (Baribeau et al. 2005; Zhang et al. 2002; LeChevallier 1990). Therefore, a second potential application for distribution monitoring is to ensure that the water received by the public has not degraded below acceptable standards. While examining databases from numerous well-operated utilities (Cook et al. 2008), the authors determined that online real-time distribution system monitoring can provide early warning of nitrification in chloraminated water systems. Early detection is one of the keys to controlling its spread and severity. This paper presents a case study in which distribution system monitoring data was used to detect water quality degradation, to explain why the degradation occurred, and to propose how such monitoring, along with data analysis, could be used to enhance effective operational decision-making.

Introduction

Thomas Kuhn (1922-1996) was a philosopher of science who, in his The Structure of Scientific Revolutions, introduced a new idea of how scientific revolutions happen.
A main thesis of his work is that scientific revolutions occur when scientists change their "paradigm" for describing reality, a process Kuhn called a "paradigm shift". For example, before Einstein all physicists operated under the paradigm of Isaac Newton's laws, in which mass and energy are constant in space and time. While such a paradigm was only an approximation, it was easily accurate enough to calculate a manned spaceflight to the Moon. Einstein brought a huge paradigm shift by showing how mass, energy and space had to be viewed relative to each other, that mass could change into energy, and that the only true constant is the speed of light (www.plato.stanford.edu/Thomas-Kuhn/). Likewise, a paradigm shift is occurring in how water distribution systems are viewed and how online monitoring of distribution systems could be used to help make operational decisions while improving customer-delivered water quality.

The Original Paradigm

The original paradigm of a distribution system consisted of highly treated potable water pumped into a series of water mains, with little change in water quality between the water treatment plant (WTP) and the customer. This paradigm was based upon the facts that: 1) no chemicals were added en route; 2) water mains are relatively inert chemically; and 3) microbes are disinfected at the point of entrance. Hence, the original paradigm held that there was little motivation to establish online monitoring on the distribution system because it would provide little additional information. The new paradigm views the distribution system as a spatially large, complex bio-chemical reactor, with the pipe walls supporting bacterial growth, and with reactions taking place both within the bulk water itself and between the bulk water and the biofilms supported on the pipe walls. Lengthy detention times and microbial action mediate an environment with less than perfect water main conditions.
As a result, water quality changes between the WTP and the water consumer. The new paradigm allows that variations in water quality on the distribution system can, to a great extent, be explained by the information contained in the water quality data leaving the WTP. Of course, in order to determine the actual water quality that the customer receives, it is necessary to monitor the water as close to the tap as possible. In order to detect degradations in water quality, various sensors and data analysis tools, some simple and others more complex, are available to assist in determining why water quality has degraded, which enables a greater degree of control. Moreover, distribution system monitoring to protect public health enables remedial steps to be taken at the earliest possible time, both at the WTP and also on the distribution system.

Types of Data Analysis and Modelling

Online monitoring of distribution systems generates large quantities of data that must be converted into useful information. It is commonly assumed that there is a strong relationship between WTP water quality and distribution system water quality; however, variability in distribution system water quality cannot always be explained. Frequently, this relationship involves several input variables. However, computers can be programmed to quickly analyze large volumes of data to provide a more complete, multivariate evaluation and to support decision making to optimize water quality. Predictive numerical models generally fall into one of two categories: those based on equations from physics, and empirical correlation functions that adapt generalized mathematical functions to fit a line or surface through data from two or more variables. The most commonly used and easily understood empirical approach is ordinary least squares (OLS), which relates variables using straight lines, planes, or hyper-planes, whether the actual
relationships are linear or not. For systems that are well characterized by data, empirical models can be developed much faster and can be more accurate; however, empirical models are prone to problems such as over-fitting when poorly applied (Roehl et al. 2003). Techniques such as OLS and physics-based finite-difference models prescribe the functional form of the model's fit to the calibration data. Alternatively, artificial neural networks (ANNs) employ flexible mathematical structures inspired by the brain, where very complicated behaviours are derived from billions of interconnected devices, namely neurons and synapses (Hinton 1992). One type of ANN is the multi-layer perceptron (MLP), which synthesizes rather than prescribes a function that fits curved surfaces through multivariate data (Jensen 1994). MLP ANNs can be more accurate than other modelling approaches when: 1) the available data comprehensively describe the behaviours of interest and the model will be used to interpolate within the range of those behaviours; and 2) there is significant mutual information shared among the measured variables. MLP ANNs are commonly used in process engineering applications, e.g., applications to model and control combined man-made and natural systems (Devine et al. 2003; Conrads and Roehl 2005).

Case Study: Detecting Early Nitrification on the Distribution System

Nitrification is a process in which nitrifying bacteria consume ammonia (NH3) to form nitrite (NO2) and eventually nitrate (NO3). The presence of ammonia in drinking water can be naturally occurring or, more often, a consequence of chloramine disinfection, which is used by many WTPs as an alternative to traditional, DBP-causing chlorination-only disinfection. The disinfectant residual is necessary to inactivate potentially harmful microbes in the distribution system; however, water leaving a WTP can take several days or longer to reach customers.
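As a concrete illustration of the empirical (OLS) approach described above, the sketch below fits a plane relating a downstream chlorine residual to two upstream variables. The data, variable names and coefficients are entirely synthetic assumptions for illustration; they are not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic example: 1,000 hypothetical observations of finished-water
# quality leaving the WTP and the chlorine residual measured downstream.
n = 1000
wtp_residual = 3.0 + 0.5 * rng.standard_normal(n)   # mg/L leaving the WTP
temperature = 15.0 + 5.0 * rng.standard_normal(n)   # deg C
# Assumed "true" relationship: residual carries over from the WTP and
# decays faster at higher temperature, plus unexplained noise.
downstream = 0.8 * wtp_residual - 0.05 * temperature + 0.2 * rng.standard_normal(n)

# OLS fits a (hyper-)plane through the multivariate data: X @ beta ~ y.
X = np.column_stack([np.ones(n), wtp_residual, temperature])
beta, *_ = np.linalg.lstsq(X, downstream, rcond=None)
predicted = X @ beta

# R^2 indicates how much downstream variability the WTP data explains.
ss_res = np.sum((downstream - predicted) ** 2)
ss_tot = np.sum((downstream - downstream.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot
print(f"coefficients: {beta}, R^2: {r_squared:.3f}")
```

Here OLS recovers the assumed coefficients because the underlying relationship really is linear; an MLP ANN would be the better choice where the response surface is curved and the data are rich enough to avoid over-fitting.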
The disinfectant can be broken down by nitrifying bacteria as the water ages, which can impact taste and odour and allow microbial re-growth (Harrington et al. 2002; Skadsen 2002; Lieu et al. 1993; LeChevallier 1990). Kirmeyer et al. (1995) estimates that two-thirds of the medium-to-large chloraminated systems in the U.S. experience nitrification to some degree, and that fully half of these systems experience operational problems as a result. Nitrification can be inferred from parameters such as total chlorine residual, dissolved oxygen, and pH, among others. A mid-sized utility in the central U.S. was in the process of installing a number of monitoring sites throughout its distribution system and provided early examples of its data for the authors' Water Research Foundation study (Cook et al. 2008). For this utility, each site measures total chlorine residual, pH, turbidity, pressure, and temperature. A portion of the distribution system is schematized in Figure 1, showing relative storage tank and booster pump (BP) locations on three mains (1, 2, and 3) originating from the same WTP. Figure 2 shows approximately 10 months of concurrent total chlorine residual data, recorded at 15-minute intervals, from BP 1 on Main 1 and from the BPs on Mains 2 and 3. Except where the sensors drop out (downward spikes), the total chlorine residuals of the BPs on Mains 2 and 3 generally track together and do not fall below 2.0 mg/L; however, while BP 1 sometimes tracks the other two, it often falls below 2.0 mg/L and remains lower for several days. Figure 3 shows the total chlorine residuals at BP 1 and at a location just downstream at Tank A. After observation 21,000, the BP 1 residual rises above 3.5 mg/L and shows greatly reduced diurnal variability; however, the Tank A residual continues to have large variability and large negative changes. Note that the upper values of the Tank A residual track the trend at BP 1.
One possible explanation is that nitrifying bacteria grew in the tank because of sporadically low inflow residuals, and persisted in the tank even after the inflow residual returned to higher levels. Looking at the frequently low BP 1 residuals in Figure 3, it is also possible that the source of the nitrification originated upstream of BP 1.

Figure 1. Schematic of a portion of a distribution system at a central U.S. utility. "BP" designates a booster pump station.
Figure 2. Total chlorine residual at booster pumps (BP) on three different mains.
Figure 3. Total chlorine residuals at Booster Pump 1 (BP 1) and Tank A on Main 1.
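The pattern just described, a residual that falls and remains below 2.0 mg/L for days at a time, lends itself to automatic flagging. The sketch below, on synthetic 15-minute data with assumed thresholds, combines a persistence check with a simple control-limit test on the rate of change; it is an illustration, not the utility's actual alarming scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic total chlorine residual at 15-minute intervals (96 samples/day):
# a healthy diurnal signal around 2.5 mg/L, with a nitrification-like sag
# below 2.0 mg/L injected between days 6 and 9 (all values hypothetical).
samples_per_day = 96
t = np.arange(12 * samples_per_day)
residual = (2.5 + 0.2 * np.sin(2 * np.pi * t / samples_per_day)
            + 0.05 * rng.standard_normal(t.size))
residual[6 * samples_per_day:9 * samples_per_day] -= 1.0  # sustained sag

# Alarm 1: residual persistently below a floor (assumed 2.0 mg/L) for
# longer than a hold-off period (assumed 6 hours = 24 samples).
floor, holdoff = 2.0, 24
below = residual < floor
run = 0
persistent_low = np.zeros(t.size, dtype=bool)
for i, b in enumerate(below):
    run = run + 1 if b else 0
    persistent_low[i] = run >= holdoff

# Alarm 2: a control limit on the time derivative (first difference),
# flagging unusually fast drops in residual.
rate = np.diff(residual, prepend=residual[0])
lower_limit = rate.mean() - 3 * rate.std()
fast_drop = rate < lower_limit

print(f"persistent-low samples: {persistent_low.sum()}, "
      f"fast-drop alarms: {fast_drop.sum()}")
```

The persistence check catches the slow biological decay while the derivative check catches step changes; a real deployment would also have to mask the sensor drop-outs visible in Figure 2 before either test is applied.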
Similarly, Figure 4 shows the residuals at BPs 1, 2, and 3 on Main 1. The residuals at BPs 1 and 2 generally track together throughout the period shown. BP 3 is downstream of a second water tank, Tank B. Between observations 14,000 and 21,000, the residual at BP 3 is often much lower than at the upstream BPs, possibly indicating the presence of nitrification in Tank B. For some time after observation 21,000, the upper values of all three residuals are elevated, which appears to be sufficient to control the nitrifying bacteria in Tank B and to allow the BP 3 residual to gradually approach those measured upstream. The appearance of possible nitrification in only one of three mains from the same WTP suggests that Main 1 is somehow different from the others, perhaps because its flows are lower and detention times longer. The 2008 Water Research Foundation study found that low flows were a contributor to probable nitrification at a second mid-sized utility in the southeast. Figures 2, 3 and 4 also indicate that the conditions that lead to and manifest ongoing nitrification, such as low chlorine residuals, are recognizable from standard SCADA trend charts and could be automatically alarmed using modified statistical process control limits or time derivatives (the rate of change of residuals with respect to time). This is also true for the main break and DBP examples.

Conclusions

Based upon a substantial body of research, the new paradigm is to consider distribution systems as complex bio-chemical reactors which alter water quality between the WTP and the consuming public. Just as importantly, it has been discovered that distribution system water quality changes in such a way that much of the variability in quality can be explained by analyzing both the finished and distribution system water quality data. Hence, there can be significant correlations between water quality variables in finished water and changes in water quality on the distribution system.
A case study was presented to explain degradation in a distribution system caused by nitrification. By knowing the relationships between finished water and distribution system water quality, a utility can be alerted to rapid water quality degradation, with the ultimate goal of providing the best attainable water quality at the customers' tap.

Acknowledgements

The genesis for this work was research sponsored by the Water Research Foundation (formerly AwwaRF). The authors would like to thank the many participating utilities for providing valuable research data.

References

Baribeau, H., Gagnon, G., and R. Hofman. 2005. Impact of Distribution System Water Quality on Disinfection Efficacy. Denver, CO.: AwwaRF.
Cook, J., Daamen, R., and E. Roehl. 2008. Distribution System Security and Water Quality Improvements through Data Mining. Denver, CO.: AwwaRF.
Devine, T.W., Roehl, E.A., and J.B. Busby. 2003. Virtual Sensors - Cost Effective Monitoring. In Proceedings from the Air and Waste Management Association Annual Conference, June 2003.
Emmert, G.L., Brown, M., Simone, P., Geme, G., and C. Gang. 2007. Methods for Real-Time Measurements of THMs and HAAs in Distribution Systems. Denver, CO.: AWWA and AwwaRF.
Harrington, G.W., Noguera, D., Kandou, A., and D. VanHoven. 2002. Pilot-Scale Evaluation of Nitrification Control Strategies. Jour. AWWA, 94:11:78.
Hinton, G.E. 1992. How Neural Networks Learn from Experience. Scientific American, September 1992, p. 145-151.
Jensen, B.A. 1994. Expert Systems - Neural Networks. Instrument Engineers' Handbook, 3rd Edition. Radnor, PA.: Chilton. p. 48-54.
Kirmeyer, G. et al. 1995. Occurrence and Control of Nitrification in Chloraminated Water Systems. Denver, CO.: AwwaRF.
LeChevallier, M. 1990. Coliform Regrowth in Drinking Water: A Review. Jour. AWWA, 82:11:74.
Lieu, N.I., Wolfe, R., and E. Means. 1993. Optimizing Chloramine Disinfection for the Control of Nitrification. Jour. AWWA, 85:2:81.
McGuire, M. 2006. Eight Revolutions in the History of US Drinking Water Disinfection. Jour. AWWA, 98:3:123.
Roberts, M., Singer, P., and A. Obolensky. 2002. Comparing Total HAA and Total THM Concentrations Using ICR Data. Jour. AWWA, 94:1:103.
Roehl, E.A., Conrads, P., and J.B. Cook. 2003. Discussion of "Using complex permittivity and artificial neural networks for contaminant prediction". Jour. Environmental Engineering, v. 129, p. 1069.
Skadsen, J. 2002. Effectiveness of High pH in Controlling Nitrification. Jour. AWWA, 94:7:73.
Website: www.plato.stanford.edu/Thomas-Kuhn/. Accessed by authors April 9, 2012.
Zhang, M., Semmens, M., Schuler, D., and R. Hozalski. 2002. Biostability and Microbiological Quality in a Chloraminated Distribution System. Jour. AWWA, 94:9:112.

Figure 4. Total chlorine residuals at booster pumps (BP) 1, 2, and 3 on Main 1, with water temperature at BP 2. BP 3 data is missing between observations 8,000 and 13,000.
    The concepts ofcontrolling the large wastewater treatment plants in the water industry have been around for many years. The majority of the power in a wastewater treatment plant is consumed in aeration and the first types of control systems aimed to address this fact. By stopping the operator from manually going to a panel to switch a blower on and off based on manually going up to an aeration lane and dropping a sensor in the mixed liquor. Dissolved Oxygen probes allowed trending of the oxygen levels and then control systems with PID loops kept the oxygen levels within tram lines. This is certainly not what the industry would term advanced but it did save the cost of running the activated sludge plant. The industry went forward and looked at Ammonia Control, most prominently feedback control from the end of the aeration lanes, as this was thought to be the area where most of the ammonia is treated and controlling the oxygen levels based upon the measured ammonia concentration. The more advanced and adventurous water companies looked at feed forward control and using a process model to predict how much would be required and deliver that amount of air. This was typically supplemented with some sort of feedback control system. In some water companies this has limited success in others. This, in some areas of the industry is where we are today. Others companies have taken the step of putting Advanced Process Control systems in place however the adoption of these technologies can be said to be limited. So what is quite meant by “Advanced Process Control” and how does it differ from just normal process control? For me at least it is using some sort of model to control the process. The modern water industry has the ability to model most of its processes.. We only have to look to models such as Biowin and GPSX and the specialists within the industry who understand how to do this process modelling. 
Most of these models are based upon the IWA (and its predecessor bodies') Activated Sludge Models (ASM), and it is through them that the benefits of modelling can be seen. Probably the most famous example is the seminal paper by Andy Shaw et al of Black & Veatch and the case study of Daniel Island (click here), looking at the intelligent control of sequencing batch reactors, which managed, with little adaptation, to double the total daily volume that could be successfully treated. This paper was presented at WEFTEC in 2006. Daniel Island was not the only development in the area of Advanced Process Control, with one of the first examples in the UK being the system developed for the Southern Water scheme at Peel Common (click here). This project was undertaken in 2008 and, whilst extending the activated sludge capacity, converted the treatment works to a four-stage Bardenpho Biological Nutrient Removal process. The scheme at Peel Common was highly successful: the trial, conducted over a 10-week period, achieved a 20 percent reduction in aeration, control of the amount of ammonia discharged, and a 50 percent reduction in methanol consumption. The system and controller developed for this treatment plant monitored and automated the whole process, including nitrification and methanol dosing. This has formed the fundamental basis for the development of a whole range of controllers based upon the use of instrumentation, not just in the activated sludge plant but also in other areas of the production factory that the wastewater industry is moving towards. So, for the activated sludge plant, what was the key to achieving the savings? Firstly, it will have been ammonia monitoring installed in the correct areas of the treatment plant and, of course, in the correct way.
It was also managing the sludge age of the plant rather than solely concentrating on the F/M ratio, which the industry has concentrated on, and in some areas still does. With Peel Common it was also about controlling the whole process, not just the individual elements. From the Peel Common case study other projects developed, including the Holdenhurst project for Wessex Water. This was based, like Peel Common, on Hach's WTOS system. The WTOS system was installed at Wessex Water's Holdenhurst facility in Bournemouth (175,000 PE), which mainly treats domestic wastewater. Aeration for Holdenhurst's fine bubble activated sludge treatment system is provided by four large mains-powered variable speed blowers. The site has a good record for maintaining a low ammonia discharge, but had a heavy power/aeration requirement, particularly during storm events. Prior to the installation of the WTOS, LDO™ probes in the aeration lanes fed dissolved oxygen data to the PLC, which controlled the blowers to maintain DO at specific levels (approximately 2.5 mg/l) depending on the treatment zone.

Article: The use of Advanced Process Control in the modern Wastewater Industry

[Figure: Schematic for Peel Common and the four-stage Bardenpho BNR configuration]
Similarly, under the previous sludge management regime, fixed volumes of sludge were returned based on laboratory mixed liquor suspended solids (MLSS) values and manual settlement tests. There are three main components to WTOS: (1) the RTCs; (2) the process analysers; (3) the PROGNOSYS system. Automated control systems necessitate reliable continuous measurement values 24 hours a day, so the PROGNOSYS system was developed to constantly check the diagnostic signals (health and service status) from the installed instruments in order to achieve the required levels of reliability. The capital outlay for the addition of the system was relatively small; the most significant extra cost was simply the requirement for extra sensors. WTOS overlays and complements existing infrastructure, so it is possible to simply turn the control system off and revert to the former regime. Each RTC was implemented on an industrial PC which communicates with the sc1000 controller network and the local PLC. The WTOS RTC unit delivers set points for the DO concentration and surplus activated sludge flow rate to the PLC, which applies those set points to the process. Site-specific characteristics such as layout and tank size are also taken into consideration when calculating the set points. All set points can be adjusted either via the SCADA system or the local WTOS controller user interface. This means that when the plant is under RTC control, DO set points are no longer 'fixed'; instead they 'float' according to the load. To enable this, the N-RTC receives information about the actual:
• NH4-N inflow concentration and flow
• MLSS concentration
• Water temperature
A simulation model is integrated within the controller for open-loop control, to calculate the DO concentrations necessary to achieve the desired ammonia outlet concentration. The N-RTC also constantly reads the NH4-N concentration at the outlet of the aeration lane.
This value provides a feedback control loop and ensures that the DO concentration is increased or decreased if the ammonia concentration is above or below the desired NH4-N set point. In this way, the N-RTC control module combines the advantages of feed-forward and feedback control, which are (1) rapid response and (2) set point accuracy. The development of this controller was not unique, and further research and development has led to other controllers for plant control, all based upon instrumentation monitoring what is going on in the operational process and modelling providing the next steps for the controllers to action. Amongst other advanced control systems, this has been extended to another high-cost area within the wastewater factory approach: sludge dewatering. An example of this is at Northumbrian Water's treatment works at Bran Sands. At Bran Sands on Teesside, Northumbrian Water's site houses a regional sludge treatment centre and effluent treatment works. The site treats the majority of sludge in the North East, with drying and digestion capabilities. The sludge is processed using the CAMBI thermal hydrolysis digestion process. The plant processes 40,000 tonnes of dry solids of indigenous and imported sewage sludge per year, and has a generating capacity of up to 4.7 MW. Besides a reduction in carbon emissions, the process leads to huge reductions in consumption of biogas and imported electricity (90% and 50% respectively) and thus significant savings in operating costs. Upstream of the CAMBI process, the incoming sludge has to be dewatered to increase the DS content from ~2% to 18%. Sludge dewatering requires mixing the incoming sludge with a polymer solution prior to the actual dewatering step in a decanter centrifuge. Adjusting the polymer dose had been done manually in the past, leading to high polymer consumption and subsequently a high anti-foam consumption to reduce the foam formation caused by an excess of polymer.
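Returning to the N-RTC principle described above, the combination of feed-forward and feedback might be sketched as follows. The load model, the gains, the clamp limits and every numeric value are invented assumptions for illustration, not Hach's algorithm:

```python
def feedforward_do(nh4_in, flow, mlss, temp):
    """Open-loop estimate of the DO needed for the incoming ammonia load.
    The load/capacity relationship is a made-up placeholder."""
    load = nh4_in * flow                             # ammonia load proxy
    capacity = mlss * (1.0 + 0.03 * (temp - 15.0))   # crude biomass activity
    return 0.5 + 0.2 * load / max(capacity, 1e-9)


def do_setpoint(nh4_in, flow, mlss, temp, nh4_out, nh4_target,
                k_trim=0.4, lo=0.3, hi=3.0):
    """Feed-forward estimate plus a feedback trim on outlet ammonia,
    clamped to safe operating limits."""
    ff = feedforward_do(nh4_in, flow, mlss, temp)
    trim = k_trim * (nh4_out - nh4_target)  # raise DO if outlet NH4 is high
    return min(hi, max(lo, ff + trim))
```

The feed-forward term reacts immediately to the incoming load, while the trim term corrects any persistent offset at the lane outlet, giving the rapid response and set point accuracy described.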
Hence the objective of the sludge dewatering optimisation was to keep the DS content at the desired 18% and to reduce the polymer consumption. The installation of dry solids monitoring and a real-time controller enabled measurement of the feed dry solids, and this in turn enabled control of the polymer dose. Avoiding overdosing of polymer allowed minimisation of the isostatic repulsion, which in turn decreased the amount of anti-foam used. The benefits were a stable dry solids concentration post-dewatering, a 40% reduction in polymer and a 75% reduction in anti-foam.

[Left diagram, before optimisation: very large variations in polymer dose rates, leading to unsatisfactory cake quality (underdosing) and antifoam requirement due to overdosing. Right diagram, after optimisation: very stable polymer dose rates, averaging 5.2 g polymer/kg DS.]
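The principle of the optimisation, dosing polymer in proportion to the measured dry-solids load rather than at a fixed manual rate, can be sketched as below. The 5.2 g/kg DS figure comes from the case study; the flow units, the polymer solution strength and the clamp limit are assumptions:

```python
def polymer_flow(sludge_flow_m3h, feed_ds_pct, dose_g_per_kg=5.2,
                 polymer_conc_g_per_l=2.0, max_flow_lh=2000.0):
    """Polymer solution flow (l/h) proportional to the dry-solids load."""
    # dry-solids load in kg/h: m3/h x ~1000 kg/m3 x DS fraction
    ds_load_kg_h = sludge_flow_m3h * 1000.0 * feed_ds_pct / 100.0
    polymer_g_h = dose_g_per_kg * ds_load_kg_h
    # clamp to the dosing pump's maximum delivery rate
    return min(max_flow_lh, polymer_g_h / polymer_conc_g_per_l)
```

Because the dose tracks the measured feed DS, a thin sludge no longer receives the polymer sized for a thick one, which is where the polymer (and consequent anti-foam) savings come from.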
Using instrumentation and advanced control on individual elements of a treatment works isn't the only approach to advanced process control within the water industry. The alternative approach within the UK water industry is control on a multivariate basis, using instrumentation and other process information, together with intelligence in the treatment works, to infer values where necessary. This is the approach used by Perceptive APC in the WaterMV Advanced Process Control technique. The WaterMV technique uses process and quality data to develop a digital model of each plant's performance, behaviour, constraints and opportunities. The system is made robust by using software-derived values alongside real-world sensor measurements; when a hardware sensor fails or begins to drift, or when communications are lost, the 'soft' sensors automatically take over. The plant can continue to be tightly controlled, even with a high proportion of sensors unavailable. In fact, this approach permits fewer sensors to be installed in the first place, or allows sensors to be removed because they are no longer required. Because the control strategy is based on an accurate model of how the plant will perform under any set of circumstances, it is properly known as model predictive control, i.e. control moves are made ahead of time, based on how the plant will respond in future. This is of particular value when automatically managing storm or first-flush events. The control model reaches all the way back to aeration because, in many cases, site aeration control is simply not accurate or reliable enough. For example, a fine-bubble diffused aeration ASP will have a common manifold juggling air flows across multiple zones, with multiple actuated valves all demanding or choking off the air in competition with each other. (This is the multivariable nature of many complex processes, and is the 'MV' in the product's name.)
Modelling this behaviour enables a more elegant and coordinated scheme to deliver air when it's needed, where it's needed, without tripping blowers, starving pockets or over-aerating zones. As a result, the system can be implemented on ASPs that are surface aerated or jet-aerated, as well as on fine-bubble diffused plants, for which WaterMV is particularly well suited. Provided enough data and controllability exists, it is also a natural fit for BAFF plants and SBRs. The WaterMV controller sits above the SCADA/PLC as a supervisory layer; if the underlying PLC-based process control is not fit for purpose, WaterMV will not be considered. A key constraint built into each model is the final quality desired from that particular process. By improving control, WaterMV reduces variability in the process and therefore reduces the risk of non-compliant operation. This can be exploited as further energy savings, or as an increase in process capability, allowing capital improvement or expansion projects to be deferred. In other words, it is a perfect fit for the aims of totex, which are common across the manufacturing sectors in which the technology was born. In addition, the model detects discrepancies between predicted and actual behaviour, to help pinpoint either developing external conditions (such as a toxic event) or a slow drift away from optimal operation caused by process failure or degradation. The Perceptive system is not limited to the ASP: it can automatically control RAS and SAS, sludge age and FST levels, to provide end-to-end improvement of the works. The same approach can be used both to optimise yield from anaerobic digesters and to optimise energy generation from the CHP. By tying this together with ASP control, WaterMV can provide site-wide energy optimisation with minimal operator intervention. The downside, if there is one, is that each monitoring and control scheme must be developed to address the challenges and issues associated with each particular site or asset.
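A toy version of the model-predictive idea, choosing the control move whose predicted effect best meets the target, might look like this. The internal plant model, the candidate grid and all constants are invented for illustration; a real MPC scheme uses a proper optimiser with hard constraints:

```python
def predict(do, air, steps, dt=1.0):
    """Hypothetical first-order DO response used as the internal model."""
    for _ in range(steps):
        do += dt * (0.05 * air - 0.02 * do - 0.01)
    return do


def mpc_move(do_now, target, horizon=10):
    """Pick the air flow whose predicted end-of-horizon DO is closest to
    the target, by brute force over a grid of candidate moves."""
    candidates = [i * 0.25 for i in range(21)]  # 0.0 .. 5.0 air units
    return min(candidates,
               key=lambda air: abs(predict(do_now, air, horizon) - target))
```

The essential point is that the move is chosen on predicted future response rather than present error, which is what lets such a scheme act ahead of storm or first-flush events.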
This is not a plug-and-play option, simply because no two plants are identical; a unique set of challenges requires a bespoke solution. The multivariate process technique was used at Lancaster Treatment Works as a proof of concept for United Utilities; the dilemma was poor control of DO and high energy costs. Using historical process data, a robust mathematical model of the plant was constructed to enable the prediction of future behaviour and the impact of disturbances on performance. The model is also capable of assessing the quality of signals taken from the plant, determining which are reliable and which should be discounted from future control decisions. The final control scheme was able to reconstruct missing or corrupt data, in real time, enabling optimal operation to be maintained even if some critical signals are lost or the data becomes untrustworthy. The result of the scheme was an average energy saving over 12 months of 26% when compared with previous best performance, with continuing development offering significantly higher savings and a fast return on investment. United Utilities calculate an annual reduction in equivalent CO2 of more than 250 tonnes. Plant performance is more tightly controlled, with less operator intervention required to maintain optimal process operation and high levels of final effluent discharge quality. Because the system works on a process-based approach that questions the quality of instrument data, financial losses can be tracked across the plant, and this is what has been done with this system at another treatment works, providing control room decision support. It works on the fact that model-based control requires instrumentation, and if and when a sensor drifts or fails, a fall-back position is the default. In the WaterMV system, a 'soft' inferential sensor takes over to operate and maintain an acceptable safety margin.
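The soft-takes-over-from-hard behaviour might be sketched as follows; the validity checks and the interface are invented placeholders, not Perceptive's implementation:

```python
def valid(reading, lo, hi, max_step, last_good):
    """Reject missing or out-of-range values and implausible jumps
    (a crude stand-in for drift and failure detection)."""
    if reading is None or not (lo <= reading <= hi):
        return False
    return last_good is None or abs(reading - last_good) <= max_step


def select_value(hard, soft, lo, hi, max_step, last_good):
    """Prefer the hardware sensor; fall back to the inferred soft value."""
    if valid(hard, lo, hi, max_step, last_good):
        return hard, "hard"
    return soft, "soft"
```

A real scheme would also track how long the soft value has been in use, since the inferential model itself needs periodic re-anchoring to a trusted measurement.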
The system then calculates the additional operating expense as the 'Lost Opportunity', including daily and cumulative costs. This enables operators and managers to prioritise maintenance of process sensors based on the cost and impact of their non-availability. What this shows is that there is huge potential for Advanced Process Control within the water industry to optimise the performance of a well-run treatment works, but the benefits of these systems are not fully understood, and this is acting as a barrier to the development of Advanced Process Control in the UK.
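The Lost Opportunity figure can be illustrated with a trivial calculation; the energy figures and tariff below are invented for the example:

```python
def lost_opportunity(days_down, optimal_kwh_day, fallback_kwh_day,
                     price_per_kwh=0.15):
    """Daily and cumulative extra cost of running at the fallback safety
    margin instead of the optimal operating point while a sensor is out."""
    daily = (fallback_kwh_day - optimal_kwh_day) * price_per_kwh
    return daily, daily * days_down
```

Ranking sensor outages by this figure, rather than by age of the work order, is what lets the approach direct maintenance to the instruments whose absence costs the most.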
Water, Wastewater & Environmental Monitoring Virtual
13th - 14th October 2021
The WWEM Conference & Exhibition has been changed to a virtual conference and exhibition for 2021, with a physical conference and exhibition to follow in 2022. Details of WWEM Virtual will be released in the coming months, but it is sure to include a huge number of technical workshops and events for attendees to enjoy.

International Water Association Digital Water Summit
15th - 18th November 2021 - Euskalduna Conference Centre, Bilbao, Spain
In 2021 the first edition of the IWA Digital Water Summit will take place under the tag-line "Join the transformation journey", designed to be the reference in digitalisation for the global water sector. The Summit has a focus on business and industry, and technology providers and water utilities will be among the key participants who will discuss and shape its agenda. The programme includes plenary sessions, interactive discussions, side events, an exhibition, technical visits and social events.

Sensors for Water Interest Group Workshops
The Sensors for Water Interest Group has moved its workshops to an online webinar format for the foreseeable future. The next workshops are:
16th June 2021 - Achieving Net Zero
14th July 2021 - How can sensors protect our coastal waters

Zero Pollutions Conference 2021
14th July 2021, Online
The Zero Pollutions Conference is returning for 2021 and is being hosted by Isle Utilities. The theme this year is "Today & Tomorrow" and tickets are available via Eventbrite.

WEX Global 2021
28th - 30th June 2021 - Valencia, Spain
The WEX Global Conference, sponsored by Idrica, is currently due to take place in Valencia, Spain, in June 2021.
The conference concentrates on the circular economy and smart solutions to resolve some of the global water industry's issues.

2021 Conference Calendar
Due to the current international crisis there has been a large amount of disruption to the conference calendar. Many workshops have moved online, at least in the interim, and many organisations are using alternative means of getting knowledge out there, such as webinars arranged at short notice. Do check your regular channels for information about events that are going on, and do check the dates provided here: they were correct at the time of publishing but, as ever, are subject to change.