4G is similar – the NodeB is replaced by the eNodeB, plus some new boxes
Acronyms: base station controller (BSC), Radio Network Controller (RNC), mobile switching center (MSC), Short Message Service Center (SMSC), Serving GPRS Support Node (SGSN)
Network Analytics portal – used by Network Operations & Development to detect and troubleshoot problems in the network, and by Customer Technical Support to track the quality of service of a specific customer
Based on batch jobs transforming and moving data between layers (pre-stage, stage, data marts, ...)
Cons: data is stored multiple times; correlations and aggregations are heavy to calculate; about one hour of latency.
Avro allows us to generate Java/Scala classes for our projects. There are Maven/SBT plugins and DDL scripts.
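As an illustration, a minimal Avro schema of the kind those plugins generate classes from (the record and field names here are hypothetical, not from our actual model):

```json
{
  "type": "record",
  "name": "CallRecord",
  "namespace": "example.telco",
  "fields": [
    {"name": "msisdn",     "type": "string"},
    {"name": "cellId",     "type": "string"},
    {"name": "eventTime",  "type": "long"},
    {"name": "durationMs", "type": ["null", "long"], "default": null}
  ]
}
```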
At the time we were choosing a stream processing framework, this was the only one that met our needs.
We were considering Flink, Spark and Kafka Streams.
Spark (1.6) -> did not handle large state well. Kafka Streams -> not such a rich API, and too new at that time.
We have different setups for different clients.
Why? Separation of concerns. More processors in the case of NiFi: copy from sFTP, parse, push to Kafka, copy raw data to HDFS, ... In the case of ASN.1 parsing -> it had already been done for batch processing, generating CSV files; now changed to also produce messages to Kafka.
AVOID A NEW DB/CACHE – there is already a whole Hadoop ensemble to maintain.
PROBLEM: we don’t get updates; we get a new version of each codelist every day.
It took too long until new values were reflected in the data stream.
Receive a command to refresh a codelist, broadcast the command to all parallel instances of the next component, and check by timestamp whether your codelists aren’t already newer.
-> The command can be refresh all, refresh one, refresh from a different location, ...
So far it works. A possible problem arises if our codelists grow too big – e.g. a whole user profile with history for streaming machine-learning algorithms.
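The refresh logic above can be sketched in plain Scala (class and field names are hypothetical; in the real job the command arrives on a broadcast stream to each parallel instance, and codelists are loaded from HDFS rather than a function):

```scala
// Sketch of per-instance codelist refresh with a timestamp guard.
// Each parallel instance keeps its codelists together with the time it
// loaded them, and ignores broadcast commands older than its own copy.

case class RefreshCommand(codelist: String, issuedAt: Long) // "refresh one"

class CodelistHolder(load: String => Map[String, String]) {
  private var lists = Map.empty[String, (Long, Map[String, String])]

  // Returns true if the command actually triggered a reload.
  def onCommand(cmd: RefreshCommand, now: Long): Boolean =
    lists.get(cmd.codelist) match {
      // Our copy is already newer than the command -> nothing to do.
      case Some((loadedAt, _)) if loadedAt >= cmd.issuedAt => false
      case _ =>
        lists += cmd.codelist -> (now, load(cmd.codelist))
        true
    }

  def lookup(codelist: String, key: String): Option[String] =
    lists.get(codelist).flatMap(_._2.get(key))
}
```

The timestamp check is what makes redundant broadcasts cheap: instances that already reloaded simply skip the command.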
Quite simple aggregations – usually SUM or COUNT
We have different jobs calculating different aggregations – each on a differently keyed stream.
We use tumbling windows of 5 minutes – our finest granularity.
Coarser granularities – 15 minutes / 1 hour / 1 day – we calculate with SQL at query time.
But it is also possible to define multiple windows of different lengths.
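The 5-minute tumbling-window SUM/COUNT above can be sketched in plain Scala (the record type is hypothetical; in the actual Flink job the same logic runs as a window aggregate over a keyed stream):

```scala
// Assigns each event to a 5-minute tumbling window by truncating its
// timestamp, then computes SUM(bytes) and COUNT per (key, window start).
// Hypothetical record type; Flink's window assigner does the truncation
// for us in the real job.

case class Event(cellId: String, timestampMs: Long, bytes: Long)

val windowMs = 5 * 60 * 1000L // finest granularity: 5 minutes

def aggregate(events: Seq[Event]): Map[(String, Long), (Long, Long)] =
  events
    .groupBy(e => (e.cellId, e.timestampMs / windowMs * windowMs))
    .map { case (key, es) => key -> (es.map(_.bytes).sum, es.size.toLong) }
```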
A very natural way to write SQL-like syntax in Scala.
STREAMING API – reduce, aggregate, fold
TABLE API
SQL API – SQL, with windows defined in GROUP BY
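The query-time roll-up from the stored 5-minute aggregates to a coarser granularity can be sketched in plain Scala (hypothetical row shape; in production this is a SQL GROUP BY over the 5-minute rows):

```scala
// Rolls 5-minute (windowStart, count) rows up to a coarser granularity
// by re-truncating the window start – the same thing the SQL GROUP BY
// does at query time. Names are illustrative.

case class Row(windowStart: Long, count: Long) // one stored 5-minute aggregate

def rollUp(rows: Seq[Row], granularityMs: Long): Map[Long, Long] =
  rows
    .groupBy(r => r.windowStart / granularityMs * granularityMs)
    .map { case (start, rs) => start -> rs.map(_.count).sum }
```

Because the stored granularity (5 minutes) divides 15 minutes, 1 hour and 1 day evenly, the roll-up loses nothing.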
Stream processing on mobile networks
Apache Flink in action – stream processing of mobile
Future of Data: Real Time Stream Processing with Apache Flink
Who we are
We are a company that deals with data
processing, storage, distribution and
analysis. We combine advanced technology
with expert services to deliver value for
our customers.
Our main focus is on big data technologies
such as Hadoop, Kafka, NiFi and Flink.
What we‘re going to talk about
• Why mobile network operators need stream processing
• Business Challenges
• Operating Flink in Hadoop environment
• Stream processing challenges in our use case
data sources (probes, devices, ...)
Mobile operator’s data
• SMS – simplest transaction (mostly a few records)
• Data – session length determines the number of records
• Calls – the most complex joining of records
• Network usage
• Billing events
Typical use cases in telco
• fraud & security
• Customer Experience Management
• triggers alarms based on customer-related CEM KPIs
• Fast issue diagnosis & Customer support
• reduce the Average Handling Time and improve the First Call Resolution rate
• Data source for analysis:
• Community analysis
• Household detection
• Churn prediction
• Behavioural analysis
• network performance overview
• service management support
• precise problem geolocation
• end-to-end in-depth troubleshooting
• real-time fault detection
• automated troubleshooting (diagnosis, ...)
• QoS KPI trend analysis
Constant monitoring of network,
service and customer KPIs.
Use cases in action
• Network Analytics (web application)
• Getting raw data into HDFS for analysts – SQL queries via
They already do it
• DWH style
• Batch processing
• Conversion from binary format (e.g. ASN.1)
• Tightening the feedback loop
• Have solution ready for future use cases
• Anomaly detection
• Predictive maintenance
• Still allow people to run analytical queries on data