
Visitor Tracking: Cutting through the hype, hot air, false claims and over-promises


Presentation delivered by James Cobb of Crowd Connected at Event Tech Live 2019.

A review of visitor tracking data for events - why it's valuable, and why there's so much hype.

An examination of common inflated claims made about WiFi sniffing, beacons, RFID and indoor positioning.



Visitor Tracking: Cutting through the hype, hot air, false claims and over-promises

  1. Visitor Tracking: Cutting through the hype, hot air, false claims and over-promises. James Cobb, Founder & CEO
  2. Events want to know more about their visitors so they can measure KPIs, personalise the experience, and target marketing.
  3. Demographic data
  4. Demographic data isn’t enough.
  5. Demographic data isn’t enough. Seemingly detailed data... • Born 1948 • Grew up in England • Married twice • 2 children • Successful businesses • Wealthy
  6. Demographic data isn’t enough. Seemingly detailed data... • Born 1948 • Grew up in England • Married twice • 2 children • Successful businesses • Wealthy. Doesn’t tell us much: • Prince Charles • Ozzy Osbourne
  7. Behavioural data is what your attendees do at your event.
  8. Behavioural data is what your attendees do at your event. Patterns in behavioural data allow us to infer why.
  9. Visitor Tracking = gathering on-site behavioural data.
  10. Gathering a detailed behavioural history is hard enough that no one technology has it cracked. Competing technologies are subject to considerable hype.
  11. The Visitor Tracking Hype Cycle (expectation vs. time): Technology Trigger → Peak of Inflated Expectations → Trough of Disillusionment → Slope of Enlightenment → Plateau of Productivity
  12. The Visitor Tracking Hype Cycle, with technologies placed on the curve: Facial Recognition, Beacon on a Badge, Passive phone tracking, Indoor positioning, Beacons, RFID, Badge Scanning
  13. Even the mature technologies are prone to hype, because understanding and comparing accuracy is complex.
  14. Coverage: Are you collecting data only where you install a sensor, or for the entire visitor journey?
  15. Accuracy: Can behaviour like attending a session be reliably distinguished?
  16. The sampling rate problem: RFID badges 100%; app-based 20%; passive WiFi 5%
  17. The reliability problem: badge scanning; location-based; range-based (BLE / WiFi / RFID)
  18. The frequency problem: app-based every 1s–30s; passive WiFi every 2–20 minutes; RFID badges only at discrete checkpoints
  19. Hype, hot air, false claims and over-promises
  20. Passive WiFi sniffing is ‘90% accurate’
  21. Indoor positioning is ‘sub-2m accurate’
  22. Beacons are ‘sub-metre accurate’
  23. RFID badges are ‘100% accurate’
  24. You need to ask questions about accuracy. Providers aren’t telling customers enough about the accuracy and reliability of their technology.
  25. Even if you have accurate and reliable behavioural data, you need to understand the value to your business.
  26. And you need to take the cost into account when you think about value.
  27. To cut through the hype you need to consider coverage, sampling rate, reliability, frequency, cost and value.
  28. To cut through the hype you need to consider coverage, sampling rate, reliability, frequency, cost and value.

Editor's Notes

  • This presentation looks at what Visitor Tracking is, why it’s valuable, why there’s still so much hype in the sector, and includes specific examples of some suspect claims that are frequently heard.

    It covers the basics of visitor data, but only in passing. The last thing we need is another presentation about data. Or Big Data. Or Data being the new oil… There’s enough nonsense talked about big data already.
  • Here’s the simple truth about visitor data.

    Events need to know as much as possible about their visitors. Who they are, what they do, and why they do it. Why is that valuable?
    First you can measure performance - of stands, content, sponsor activations, artists or new event layouts. If you’re not gathering data about your event, you can’t manage it… You might report footfall to an exhibitor. Having more detailed information about who visited their stand, how long they spent, and the demographic make up of passers by, all help when you’re rebooking space. Or brand exposure to a sponsor of a festival. Being able to report the percentage of the audience who were exposed to their brand, and how many engaged, and how long for, can be vital in maximising revenue. That’s analytics.
    Then there’s personalisation of the visitor experience. It could be in app content. Or you might suggest content, or networking contacts to people using an engine like Grip. You can’t do personalisation without data about your visitors.
    And then there’s marketing. Maybe the oldest use of visitor data. You want data about your visitors in your CRM system so you can optimise your marketing mix.
  • The easiest data to gather is demographic. It might just be the postcode for ticket purchases. Or your registration process might request company details (I’ve heard that called ‘firmographic data’) or purchase authority. Demographic data tells us who the visitor is.
  • But demographic data is really limited.
    Here’s a very old example to prove the point - many of you have probably come across it.
    Let’s say your attendees answer detailed questions during your registration process. You’re asking them for personal details like date of birth and country of residence. Family details like marital status and children. And business details like turnover and profit.
  • If your event happens to have Prince Charles visiting (well, we can dream). And that he completes the entire registration form (even more of a stretch) we might collect this data.
    That looks pretty detailed. Now we know what sort of people are visiting our event, and we can include this in our analytics. And we can start recommending networking and content to the visitor, as we have a good idea of their likely interests.

    But even this seemingly detailed demographic picture doesn’t tell us nearly as much about the visitor as you might think.
  • The same data might not have come from Prince Charles. It could be the Prince of Darkness himself, Ozzy Osbourne…
    Now the recommendations for networking and content may not be that relevant. And the reports we’re creating may not be that useful.

    So what sort of data is more useful?
  • What’s really needed to report KPIs, personalise the experience, and target marketing, is behavioural data.
    What do people actually do. And what don’t they do. If Demographic data is the ‘who’, behavioural data is the ‘where’ and the ‘when’ – where visitors go, and what they do there.

    Some events (like a music festival) know almost nothing about what individuals do once they’re through the gates. The only data they gather is transactional from cashless systems - if they use one.
    In fact that’s why I founded Crowd Connected. A frustration, managing festivals many years ago, that I had no idea what festival-goers did once they’d arrived.
    Exhibitions gather more behavioural data. They might scan attendees into every content session. They might use a centralised lead capture system.
    It’s a good start, but there are still a lot of gaps between the designated points where behaviour is captured.
  • If we can fill those gaps – if we have enough behavioural data – we can infer psychographics, which is how people are thinking and feeling. If behavioural data is the ‘where’ and the ‘when’, psychographic data is the ‘why’.

    With the gaps filled in, our personalisation and our reporting become even more accurate and valuable.
  • That’s what I mean by ‘visitor tracking’. Gathering data about what your visitors actually do while they’re at your event.

    There are old technologies like badge scanning that gather limited data. And newer technologies that can track visitors every step of their journey.
    There’s fantastic opportunity to use technology to capture the visitor journey in more and more detail, so you can understand and influence it.
  • But it’s hard. Each technology has strengths, but also weaknesses. There’s no single answer.
    And there seems to be a lot of hype. And over-promising. And hot air.
    To look at how hype operates, I’m going to borrow Gartner’s hype cycle.
  • Once a new technology arrives, it quickly succumbs to hype. It’s new, shiny, and not well understood. So expectations are inflated.

    Over time, after over-promising and under-delivering, expectations and interest fall. We hit the trough of disillusionment.

    New improved versions of the technology arrive, and are better understood. We slowly climb the slope of enlightenment.

    Until we reach the plateau of productivity. The technology is well understood, and expectations are in line with what’s delivered.
  • Here’s an attempt to put some competing visitor tracking technologies onto the curve.

    It’s not a complete list. For that you need an ebook we’ve published - https://engage.crowdconnected.com/complete-guide-to-visitor-data-1
    Technologies to the left have been less tried and tested. There’s a problem of hype, and over-promising is common.

    What’s interesting is that hype seems to be prevalent not just for the newer technologies at ‘peak hype’, but also for more mature technologies like RFID.

    I heard of a big event recently - a senior manager has decided that next year they need to be using RFID. But there’s no explanation of why. What the value is. That’s hype.

    Why do we still see hype – inflated expectations – even in the more mature technologies, which should be better understood?

  • The answer – each technology has strengths and weaknesses that are really hard to unpick. There’s no single ‘accuracy’ number that tells you how good they are. But that’s still what event organisers ask for, leading to confusing claims.

    I’m going to try to break down the key issues that underpin performance for all visitor tracking technologies.
  • First is the coverage problem. 
    Most mature technologies, like scanning session attendance, are discrete – they only gather data at the locations where you put the scanner or sensor. In between, there’s a lot of interest and intent that doesn’t result in a scan, so it’s missed.

    Other technologies can track visitors wherever they are, not just at sensor locations. Even outside the event footprint.
  • And then there are questions of accuracy.
    What’s really behind ‘accuracy’? What’s important is whether the technology can really distinguish between a visit to a content session and a passer-by, or someone chatting on a nearby stand. And can it accurately measure the length of the visit?
    We can break these problems down into sampling rates, reliability and frequency
  • If you’re directly tracking people (maybe using CCTV, or infra red sensors), then there’s no problem. You’re tracking 100% of attendees.
    But as soon as you track a device (a phone, a badge), you’re tracking a sample of your audience, that you hope is representative.
    Badges will be close to 100%.
    Anything app based will depend on download rates, and the percentage of users who allow the right permissions. Typically that might be around 20%. But if the app is the visitor’s ticket, it could be 100%. And it could also be close to zero. Typically it’s somewhere in the middle.

    The same is true when gathering data from a WiFi network that visitors log into. Usage will be highly variable, but might be around 20% – 30%.

    At the other end, passive WiFi sensors (detecting unassociated phones) might be tracking only 5% of attendees.
    This affects the accuracy of the analytics we can do, due to statistical error. Say an area has 500 visitors. At a 20% sample rate we’ll be measuring at an accuracy of +/- 50 (or 10%). For a 5% sampling rate that drops to +/- 100 (or 20%).
    Sample rates are important. Some technologies are honest about it, and some aren’t.
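    The +/- 50 and +/- 100 figures above follow from simple binomial scaling. Here is a minimal sketch (Python, using the slide’s illustrative numbers; it assumes detected devices are an unbiased random sample and reports a one-standard-error band, so the results land close to the rounded figures quoted above):

```python
import math

def scaled_count_error(true_visitors: int, sample_rate: float) -> float:
    """One-standard-error band on a visitor estimate, after scaling a
    sampled device count back up to the full audience."""
    # Detected devices ~ Binomial(true_visitors, sample_rate)
    detected_sd = math.sqrt(true_visitors * sample_rate * (1 - sample_rate))
    # Scaling the count by 1 / sample_rate scales its error the same way
    return detected_sd / sample_rate

for rate in (1.0, 0.20, 0.05):
    err = scaled_count_error(500, rate)
    print(f"{rate:>4.0%} sample of 500 visitors: 500 +/- {err:.0f}")
```

    A 20% sample gives roughly +/- 45, and a 5% sample roughly +/- 97 – the same order as the slide’s rounded +/- 50 and +/- 100.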
  • There’s also a reliability problem. Let’s take a visit to an exhibitor stand. None of these methods can distinguish a passer by from a visit 100% of the time.
    There will be missed visits, and also phantom visits.

    Badge scans (if staff are good) could be near perfect.
    NFC taps will have zero false visits. But they’ll definitely miss visits.
    Location based approaches suffer from both to some degree.
    But range based - RFID readers above an exhibition stand, or a WiFi passive sensor, or a beacon stuck on a stand, suffer from both. And probably more than you realise.

    For someone located 10m away from the beacon or sensor, this technique is probably 10% - 20% likely to think they’re closer than 5m, and equally likely to think they’re over 20m away. So distinguishing between a person sat listening to the content session, and someone just a few metres away working on a stand, is impossible.
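    The ‘a 10m reading could really mean 5m–20m’ claim is consistent with a standard log-distance path-loss model with shadowing. A rough sketch, assuming a path-loss exponent of 2 and about 6 dB of shadowing noise (both illustrative textbook values, not measurements of any specific product):

```python
import math

def prob_within_factor(factor: float, path_loss_exp: float, shadow_db: float) -> float:
    """Probability that an RSSI-derived distance estimate lands within
    `factor`x of the true distance, under a log-distance path-loss model
    with zero-mean Gaussian shadowing of `shadow_db` dB std deviation."""
    # An RSSI error of X dB multiplies the distance estimate by 10**(X / (10 n)),
    # so "within a factor f" means |X| <= 10 * n * log10(f).
    x_limit = 10 * path_loss_exp * math.log10(factor)
    return math.erf(x_limit / (shadow_db * math.sqrt(2)))

# P(a 10m reading corresponds to a true distance between 5m and 20m)
p = prob_within_factor(2.0, path_loss_exp=2.0, shadow_db=6.0)
print(f"{p:.0%}")  # ≈ 68%, close to the 70% quoted above
```

    Less shadowing (a cleaner RF environment) pushes the probability up; more clutter and body blocking pushes it down.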
  • Then there’s a frequency problem.

    App based solutions can update a visitor’s location every 1s to 30s. But passive WiFi sniffing will only update every 2 minutes to 20 minutes.

    That means missed visits. And it means mis-estimating visit durations. It’s impossible to distinguish between a visit of 30 seconds, and one of 30 minutes.
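    The chance of missing a short visit altogether is back-of-envelope arithmetic: if updates arrive every T seconds at a random offset, a visit shorter than T is captured with probability roughly d/T. A hypothetical illustration using the update intervals quoted above:

```python
def prob_visit_detected(visit_s: float, update_interval_s: float) -> float:
    """Probability that at least one location update falls inside a visit,
    assuming updates every `update_interval_s` seconds at a random offset."""
    return min(1.0, visit_s / update_interval_s)

visit = 30  # a 30-second stop at a stand
for label, interval in [("app-based (30s updates)", 30),
                        ("passive WiFi (2 min updates)", 120),
                        ("passive WiFi (20 min updates)", 1200)]:
    p = prob_visit_detected(visit, interval)
    print(f"{label}: {p:.1%} chance of seeing the visit")
```

    At a 20-minute update interval, a 30-second visit is seen only a few percent of the time – and even when it is seen, a single sample says nothing about how long the visit lasted.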
  • All of that is complicated. One number seldom captures what you need to know. But single figures are what event organisers demand. So people just start quoting what their competitors quote, and whole industries start quoting meaningless, and misleading numbers.
    Here are some very specific examples that I repeatedly hear.
  • Passive WiFi sniffing is where you install a WiFi sensor. And it detects phones passing by without the need for them to use (or ‘associate with’) the network.
    I’ve heard multiple providers say their data is ‘90% accurate’. I’m not sure what that single number means. And I don’t think it’s true.

    What would it even mean? Are 90% of the figures correct, and 10% wrong? Or are the outputs within 10% of the truth?
    There’s a sampling rate problem. Not all phones can be tracked using this technique. We recently gathered data from a 250 AP network, for a 3 day event, with approx 20k people there. Only 5% of people had a phone that could be properly tracked without associating with the network.
    And a visit detection problem. If someone is measured at 10m from the sensor, you should probably be only 70% confident they’re between 5m and 20m away. It depends on the device, whether it’s in a pocket, and so on.
    And a problem scaling up from devices to people. I’ve not come across a good reliable way to do this.
    I had one provider say to me - we’re ‘90% accurate because 90% of people have a phone’. 
    And another say ‘We do indeed need to account for the possibility that not all visitors are detected, and that with MAC randomisation counts are definitely not 100% accurate.  From our experience at countless shows, we are roughly accurate at 90% for the final numbers we supply.’
    To say it’s ‘90% accurate’ is just misleading.
  • At Crowd Connected we’ve invested a lot of time and money in improving our indoor positioning capability. So we know a bit about it.
    Almost every indoor positioning provider says their system is accurate to 2 metres. But what does one number mean? Is the position always within 2m of the truth? Or within 2m 95% of the time? Or 70% of the time?
    Accuracy varies dramatically from one phone to another, from one environment to another, from one walk round the hall to another. 
    One number doesn’t capture performance. Are providers quoting the best phone? Or the average phone? Or the 25th centile?
    And I also don’t believe the number they’re quoting. From the work we’ve done, we know the limits of the techniques and technologies being employed. I challenge any provider to come into an environment like this, and demonstrate to me 2m accurate indoor positioning, on my phone, on a walk of my choosing, around the event.
    We need to start asking whether the technology is good enough for navigation, or good enough to capture visits to the content area, or good enough to capture stand visits. Not trying to sum this up in one figure.
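    One way to see why a single ‘accurate to 2m’ number is ambiguous: if 2-D position error is roughly Rayleigh-distributed (a common modelling assumption, not a claim about any specific system), the median and the 95th-percentile error of the very same system differ by more than a factor of two:

```python
import math

def rayleigh_quantile(p: float, sigma: float) -> float:
    """p-quantile of a Rayleigh(sigma) positioning-error distribution."""
    return sigma * math.sqrt(-2 * math.log(1 - p))

# Pick sigma so that the median error is exactly 2 m
sigma = 2 / math.sqrt(-2 * math.log(0.5))

for p in (0.50, 0.70, 0.95):
    print(f"{p:.0%} of fixes within {rayleigh_quantile(p, sigma):.1f} m")
```

    Under that assumption, a system that is ‘2m accurate’ at the median is about 2.6m at the 70th percentile and roughly 4.2m at the 95th – so which percentile a provider is quoting matters enormously.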
  • I’ve heard it said that ‘GPS is fine for very approximate tracking, but if you want to get really accurate (sub-metre) you need beacons’.
    Let’s set the record straight. Beacons are not more accurate than GPS.
    The same is sometimes said about all range based technologies - beacons, passive WiFi sniffing, overhead RFID readers on an exhibition stand. 
    They all claim to be super accurate, or ‘sub-metre’ accurate. I don’t know what that single figure is meant to represent. And I don’t believe it’s true.
    When these technologies measure proximity at 10m, in reality there’s a 70% chance the true distance is between 5m and 20m. That’s in no way hyper accurate, more accurate than GPS, or sub-metre accurate.
    Here’s a beacon manufacturer ‘Accuracy for GPS is never accurate for more than 50m. There will always be 100m of linear accuracy. Even then, you are going to get a 15 second variability for each user. Beacons can solve for this macro-level accuracy.’
    I can only guess exactly what they mean by that. And it’s not true.
  • At a conference or exhibition, it may well be possible to get very close to 100% of attendees wearing a badge. But that doesn’t mean 100% accurate visitor tracking!
    First there’s a big coverage issue. If you only install an arch at the main entrance, you’re not gathering very much useful behavioural data at all.

    Then there’s a reliability issue. I had someone say recently that they stopped using RFID mats this year, and moved to overhead sensors, because the mats weren’t accurate. If a group of people walked over the reader together, they weren’t counted.
    If you stand close to an exhibition stand that has an overhead RFID reader, it won’t register you if you’re facing the wrong way.

    100% of your attendees may have an RFID badge. But that doesn’t make the technology 100% accurate.
  • I’m not saying these technologies are rubbish, and that what Crowd Connected does is perfect. Quite the opposite. There are some cases where our location tracking technology will knock the spots off the competition. But other use cases where it really isn’t the answer. In fact we sometimes recommend to customers that they look at other technologies.
    What I want to do is make sure that event organisers can cut through the hype, and pick the right technology.

    That means asking questions about the performance of any technology.
  • Event organisers need to know what they’re going to do with visitor tracking data, and what value that has to their business. Without that, there’s no way to know if a particular technology is suitable.
  • And the value needs to be balanced with cost. There can be huge differences.
    The difference between sampling at 20% and sampling at 100% might be costs of £10k and of £300k.
    The difference in achieving indoor location accuracy of 3m and 6m might take costs from £10k to £100k. 6m might be good enough for many use cases.

  • So think about the coverage, sample rate, reliability, frequency and cost of any technology you’re considering. And work out if it delivers for your particular use case.
    Coverage – from a single scan on entry, to wide-area continual tracking
    Sample rate – from 100% to 5% (the difference between 500 people, and 500 +/- 100)
    Reliability – very little information available; needs a direct comparison test
    Frequency – from every second, to every 20 minutes
    Cost – from £2k to over £200k for a large exhibition
