Assessment of team-based learning (TBL) activities is hard. We describe the use of 360-degree video, along with activity metrics, to capture a rich perspective on team interactions.
1. Broadening your Perspective in Assessing Team Activities
David Topps & Corey Wirun
Office of Health & Medical Education Scholarship (OHMES)
University of Calgary
ICRE, Ottawa, September 2019
http://olab.ca
8. Activity metrics
• Objective data
• Fewer questionnaires
• Multiple sources
• LRS as intermediary
• xAPI Team Profile
9. Discussion
Main learning points:
• Easy to set up
• Never miss a shot
• Combine with activity metrics
Details are available at https://olab.ca
Editor's Notes
Introduction
Assessment of team-based learning (TBL) activities is hard. There have been solid attempts, such as TeamSTEPPS (AHRQ, 2006), EBAT (Rosen, 2010) and the TOSCE (Singleton, 1999). However, most studies assess whole teams rather than individual contributions. While overall team effectiveness is important, this overlooks the non-contributory coasters and underestimates the quiet workhorses.
We tend to rely too heavily on the subjective observations of facilitators, or on team-member self-assessments, with all of their attendant biases, most of which we are blind to. Our own HSVO project in 2008 found that team activities tend to happen in bursts that rapidly overwhelm the cognitive capacity of even multiple observers, while boring them through long interludes of mild tedium.
Methods 1
In our HSVO Project in 2008, we made use of a highly sophisticated camera array, developed by Jeremy Cooperstock and his team at McGill. The array used identical, high-resolution video cameras, held in a high-precision frame that required very detailed setup and alignment. The McGill team created effective software and firmware that enabled the generation of unique points-of-view (PoV). By combining the image streams from these cameras, they could also create virtual PoVs, such as a viewpoint interpolated from the other data streams. This prevented the crucial lapse caused by the surgeon's head blocking the optimal learner PoV. However, this expensive setup was difficult to implement in real-world situations because of its extraordinary setup and calibration requirements.
In our CollabraCam Project in 2011, we collaborated with the software team that created the CollabraCam app. The setup was cheap and easy, requiring only an iPad, four iPhones and a dedicated WiFi access point. Learners could install the CollabraCam software and use their own iPhones if they wished. Learners were encouraged to move around while filming with their devices, to obtain the best view of the procedure being taught. The video output from each iPhone was streamed to a single iPad, where one of our nursing staff acted as the Director, picking the optimal viewpoint on-the-fly from the four feeds. Higher-resolution video was captured directly on each iPhone, allowing improved post-hoc image extraction. But the main advantage was the huge amount of time saved in post-processing to create a decent team video. Learners were observed to be much more engaged than in their usual stance as passive observers. However, the network setup was not robust and needed careful nursing(!) by the nurse.
Methods 2
The above methods showed some of the advantages of having multiple PoVs trained on a single, central subject. But in team-based learning (TBL) activities, it is often hard to know where to focus your attention. In our HSVO project, even with video support and multiple observers, we often found that the key event would happen off-camera or, ironically, in camera. Sidebar discussions were detected but were not amenable to analysis. Non-verbal communications, a key difference between face-to-face interactions and remotely mediated ones, were often missed, even with multiple cameras, because the cameras tended to be peripheral: outsiders looking in.
Based on the fascinating work of Victoria Brazil in Flinders, who placed a GoPro on the chest of the mannequin, thereby recording the patient's perspective of an emergency team's interactions, we decided to take things one step further with a centrally placed video camera capable of recording a full 360 degrees. This cheap, consumer-level camera was easy to set up and proved quite unobtrusive, despite its central location. However, the first system we tried, the LG 360, was quite sensitive to initial setup and to changes in position. Its PoV was also fixed, both in direction and in scope, which turned out to be surprisingly limiting.
We then turned to the Insta360 camera system, which promised FlowState image stabilization, much higher definition, and unique post-hoc editing capabilities. This is also a consumer-level system of quite reasonable cost.
Initial Results
The biggest surprise for us, initially, was the difference made by being able to retroactively change the virtual PoV. This included both the direction, in any axis, and the angle of view, meaning that we could, post hoc, zoom in to a clearer, more detailed view of the subject at hand. The 5.7K video resolution allowed for very detailed analysis.
It was also possible to generate what we called the Janus view: showing the faces of two participants on opposite sides of the camera (or any angle in between). This has turned out to be a huge advantage in analyzing all the non-verbal, as well as verbal, cues in an intense team interaction. Because this system is capturing all possible angles all the time, there is no such thing as being “out of the shot”. This panopticon effect was dramatic.
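To make the virtual PoV idea concrete: most consumer 360 cameras store each frame as an equirectangular image, and re-aiming the view after the fact is just a resampling of that image. The sketch below is illustrative only (the function and parameter names are our own, not the Insta360 API): it maps each pixel of a flat output view to a direction on the sphere, then back to source-image coordinates.

import numpy as np

def virtual_pov(frame, yaw, pitch, fov_deg=90.0, out_w=640, out_h=480):
    """Render a flat perspective view from an equirectangular 360 frame."""
    src_h, src_w = frame.shape[:2]
    f = (out_w / 2) / np.tan(np.radians(fov_deg) / 2)   # focal length, in pixels

    # A ray direction for every output pixel, in camera coordinates
    xs = np.arange(out_w) - out_w / 2
    ys = np.arange(out_h) - out_h / 2
    x, y = np.meshgrid(xs, ys)
    d = np.stack([x, y, np.full_like(x, f)], axis=-1)
    d = d / np.linalg.norm(d, axis=-1, keepdims=True)

    # Re-aim the rays: rotate by pitch (about x), then yaw (about y)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    d = d @ (Ry @ Rx).T

    # Direction -> longitude/latitude -> source pixel coordinates
    lon = np.arctan2(d[..., 0], d[..., 2])              # [-pi, pi]
    lat = np.arcsin(np.clip(d[..., 1], -1.0, 1.0))      # [-pi/2, pi/2]
    u = ((lon / np.pi + 1) / 2 * (src_w - 1)).astype(int)
    v = ((lat / np.pi + 0.5) * (src_h - 1)).astype(int)
    return frame[v, u]

Rendering virtual_pov(frame, yaw=0, pitch=0) and virtual_pov(frame, yaw=np.pi, pitch=0) from the same frame, side by side, gives the Janus view described above; changing fov_deg after the fact is the post-hoc zoom.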
We have used non-clinical images to illustrate these effects, rather than compromising patient confidentiality during an ongoing study.
Cost Effectiveness
By using this cheap, consumer-grade video system, it is perfectly feasible to distribute these cameras to a number of remote/rural communities. The system is easy to operate: there is no need for a professional AV team on-site, which greatly increases the accessibility of this approach.
The data files that are generated, at over 1 gigabyte per minute of recorded video, are absolutely huge. However, cloud storage is cheap, and we have been able to establish a fully secured, on-shore (in Canada) cloud repository. This mitigates the concerns raised over patient data safety, data governance and access control, keeping IRBs happy. Video data is captured to an on-device SD card and later transmitted to the cloud, avoiding the need for very high upload bandwidth, which is not commonly available in rural/remote communities (see the sketch below). Real-time virtual PoV generation is, however, possible using a securely linked iPhone.
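A quick back-of-envelope sketch shows why store-and-forward matters here. It assumes the roughly 1 GB/minute figure above and a hypothetical 10 Mbps rural uplink; both numbers are assumptions for illustration.

# Back-of-envelope: why we capture to SD card and upload later, rather than stream live
RECORD_RATE_GB_PER_MIN = 1.0   # from the text: "over 1 gigabyte per minute"
UPLINK_MBPS = 10.0             # hypothetical rural upload bandwidth

LIVE_RATE_MBPS = RECORD_RATE_GB_PER_MIN * 8 * 1000 / 60   # ~133 Mbps needed to stream live

def upload_hours(session_minutes):
    """Hours to upload one recorded session over the assumed uplink."""
    megabits = session_minutes * RECORD_RATE_GB_PER_MIN * 8 * 1000
    return megabits / UPLINK_MBPS / 3600

print(f"Live streaming would need ~{LIVE_RATE_MBPS:.0f} Mbps sustained")
for minutes in (10, 30, 60):
    print(f"{minutes:>3} min session -> {minutes * RECORD_RATE_GB_PER_MIN:.0f} GB, "
          f"~{upload_hours(minutes):.1f} h to upload")

Under these assumptions, a 30-minute session produces about 30 GB and takes roughly 6.7 hours to upload, which is workable overnight but far beyond what a rural link could stream live.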
Practical Logistics
The FlowState image stabilization provided several advantages. As well as stabilizing the image so dramatically that the camera could literally be waved around in the air without affecting the PoV at all, it made initial setup much easier. We simply instructed the local staff to screw the camera onto the operating room light handle, which always has a central and unobstructed PoV. Thereafter, the camera could be completely ignored. Even if the staff chose to reposition the light, the PoV did not change at all, because it is virtually generated by the software.
In comparison with the multiple camera views illustrated earlier, we found that a centrally placed, outward-looking panopticon view was much more effective at capturing team activities.
However, we are also finding that video, while necessary, is sometimes not sufficient to capture all the richness in the complex interactions of TBL activities. Other data sources are needed.
Activity Metrics
While less so than in our previous projects, we still find it hard to capture all the richness of team-based activities, especially during bursts. We also remain concerned about the subjective nature of some of the observations. Accordingly, in the PiHPES Project, we are exploring various ways of capturing activity stream data from a variety of workplace information sources. We are all tired of questionnaires. This project aims to extract such activity data from current workflows, without interrupting the essential team performance.
We use a Learning Record Store (LRS) as an intermediary data repository: it is specifically designed to rapidly absorb large amounts of data from multiple simultaneous sources. We feed it via the open-standard xAPI (or TinCan) protocol. We have developed an xAPI Team Profile and an xAPI EMR Profile, specifically tuned to capture rich data about what team members actually do in any given scenario, not what some observer thinks they do. A minimal example follows.
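As a concrete sketch, here is roughly what one team action looks like on the wire. The LRS endpoint, credentials, and the verb/activity IRIs are hypothetical placeholders (the real IRIs are defined in our profiles), but the statement shape, the context.team field, and the X-Experience-API-Version header come from the xAPI specification.

import base64
import json
import urllib.request

LRS_URL = "https://lrs.example.ca/xAPI/statements"   # hypothetical LRS endpoint
AUTH = base64.b64encode(b"client_key:client_secret").decode()   # placeholder credentials

# One statement: who did what, to what, and in which team context
statement = {
    "actor": {"name": "Team Member A", "mbox": "mailto:member.a@example.ca"},
    "verb": {
        "id": "https://olab.ca/xapi/verbs/performed",            # hypothetical profile verb
        "display": {"en-CA": "performed"},
    },
    "object": {
        "id": "https://olab.ca/xapi/activities/primary-survey",  # hypothetical activity
        "definition": {"name": {"en-CA": "Primary survey"}},
    },
    # xAPI's context.team lets every statement carry the group it occurred in
    "context": {"team": {"objectType": "Group", "name": "Resus Team 3"}},
}

req = urllib.request.Request(
    LRS_URL,
    data=json.dumps(statement).encode(),
    headers={
        "Content-Type": "application/json",
        "X-Experience-API-Version": "1.0.3",   # required header per the xAPI spec
        "Authorization": f"Basic {AUTH}",
    },
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.read().decode())   # the LRS returns the stored statement ID(s)

Because each statement names both the individual actor and the team context, the data can later be aggregated by person as well as by team, addressing the coasters-versus-workhorses problem noted in the Introduction.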
Discussion
We are entering the world of Precision Education, using big data to drive personalized learning designs, even in team-based learning scenarios.
Main learning points:
It is important, for sustainability and buy-in, to have an easy-to-use endpoint.
With full-time, 360-degree PoV, you never miss a shot.
Combine the video analytics with activity metrics for a rich picture of TBL interactions.
These slides are available on SlideShare at