Each participant downloads our attentionTRACE Collection App onto their device. The App ensures the following process takes place (in chronological order):
(a) It directs the right viewer to the right viewing session (i.e. sends them to Facebook, YouTube, Twitter, a TV channel or another platform). The participant logs in using their own login details (which we don't scrape, in case you're wondering), so the platform experience feels completely natural. The viewing session length is aligned with the typical experience on each platform, but importantly we measure impact by average second, which removes any duration bias.
(b) While the participant is using the platform, the App activates the user-facing camera whenever an ad is displayed on their screen. It collects facial footage at five frames per second, which is then converted to attention data by one of our attention models.
(c) The App tags every ad the participant is exposed to. It records the on-screen pixels of each ad, its duration on screen, the proportion of the screen the ad covers as the viewer scrolls (we call this coverage), whether the sound is on or off, and the volume level.
(d) Once the participant has finished their session, the App serves no further purpose on their device.
(e) Television collection differs slightly from online platforms. We send participants hardware to set up a second phone in the home, which streams content to their TV and provides a user-facing camera. Participants can freely get up and leave the room, as they would in a natural TV experience. This technology includes Adaptive Bitrate Streaming (as Netflix uses) so that people with sub-optimal wi-fi can still complete the session.
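To make the "average second" idea in step (a) concrete, here is an illustrative sketch (the App's internal calculations are not public, and the function name and scores are hypothetical). Averaging attention per second on screen lets ads of different lengths be compared without a length bias:

```python
# Hypothetical sketch: normalising attention by average second so that ads of
# different durations can be compared fairly. Scores here are invented.
def attention_per_average_second(attention_by_second):
    """attention_by_second: one attention score per second the ad was on screen."""
    if not attention_by_second:
        return 0.0
    return sum(attention_by_second) / len(attention_by_second)

long_ad = [0.8] * 10 + [0.2] * 20   # attention decays across a 30s ad
short_ad = [0.7] * 6                # steady attention across a 6s ad
print(attention_per_average_second(long_ad))   # ~0.4 per average second
print(attention_per_average_second(short_ad))  # ~0.7 per average second
```

On a total-attention basis the 30-second ad would look stronger (12 vs 4.2 attention-seconds); per average second, the shorter ad actually holds attention better.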
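The camera capture in step (b) amounts to a fixed-rate sampling loop. This is a minimal sketch only, assuming hypothetical `camera`, `ad_on_screen` and `record_frame` callables; the real App's internals are not public:

```python
import time

FRAMES_PER_SECOND = 5                      # capture rate stated in the text
FRAME_INTERVAL = 1.0 / FRAMES_PER_SECOND   # 200 ms between frames

def capture_session(camera, ad_on_screen, record_frame):
    """Record facial frames at 5 fps for as long as an ad is on screen.

    All three arguments are hypothetical callables standing in for the
    device camera, the ad-detection signal, and frame storage.
    """
    next_tick = time.monotonic()
    while ad_on_screen():
        record_frame(camera())             # one frame for the attention model
        next_tick += FRAME_INTERVAL        # schedule against a monotonic clock
        time.sleep(max(0.0, next_tick - time.monotonic()))
```

Scheduling against `time.monotonic()` rather than sleeping a flat 200 ms keeps the rate steady even when capture itself takes a few milliseconds.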
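The coverage measure in step (c) can be pictured as the visible fraction of an ad's bounding box. This is a simplified one-dimensional sketch using vertical pixel positions (the actual App presumably works in two dimensions and alongside the duration, sound and volume signals):

```python
def coverage(ad_top, ad_bottom, screen_height):
    """Fraction of the screen an ad occupies as the viewer scrolls.

    Hypothetical 1-D illustration: ad_top/ad_bottom are the ad's pixel
    edges relative to the top of the screen; negative means scrolled off.
    """
    # Clip the ad's extent to the visible screen, then normalise.
    visible = max(0, min(ad_bottom, screen_height) - max(ad_top, 0))
    return visible / screen_height

print(coverage(0, 400, 800))     # ad fills the top half -> 0.5
print(coverage(-200, 200, 800))  # half scrolled off the top -> 0.25
print(coverage(900, 1200, 800))  # entirely below the fold -> 0.0
```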
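The Adaptive Bitrate Streaming mentioned in step (e) boils down to picking the highest-quality rendition the connection can sustain. A minimal sketch, assuming an invented bitrate ladder and a simple bandwidth estimate (real ABR players such as Netflix's use far more sophisticated estimators):

```python
# Illustrative rendition ladder in kbps; real ladders are content-dependent.
LADDER_KBPS = [400, 800, 1500, 3000, 6000]

def choose_rendition(measured_kbps, safety=0.8):
    """Pick the highest bitrate fitting within a safety margin of bandwidth.

    The safety factor leaves headroom so a momentary dip on poor wi-fi
    doesn't stall playback; if nothing fits, fall back to the lowest rung.
    """
    budget = measured_kbps * safety
    fitting = [rate for rate in LADDER_KBPS if rate <= budget]
    return fitting[-1] if fitting else LADDER_KBPS[0]

print(choose_rendition(2000))  # 1500: best fit within 80% of 2 Mbps
print(choose_rendition(300))   # 400: nothing fits, take the lowest rung
```

This is why viewers on sub-optimal wi-fi can still finish a session: quality degrades instead of playback stopping.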
To find out more, or to discuss attention data with a member of the Amplified Intelligence Customer Success Team, contact us directly.