At a meetup last week largely devoted to eye tracking, Theis MacMadsen talked a bit about a media conference he’d recently attended. Hollywood, he said, was employing eye-tracking tests to gauge where on the screen audiences are looking, and whether studios could save money on elements that aren’t being actively looked at.
I assume that most of us would argue that they’re, as usual, well off the mark: those details, the ones that aren’t being actively regarded, are part of the construct filmmakers use to make sure an audience can concentrate on what they do want them to see.
Would that scene work as well if there were fewer people in the café? Or less visual debris in the background? Sci-fi is obviously its own animal, but for all the components of those two shots, I’d argue that there’s no fat on them. And the reason the film works as well as it does is that it does such a convincing job of showing that world.
Documentary works differently. Instead of building up the world you’re portraying, you mainly have to reduce the world you’re looking at, making choices about which details really matter and should be brought to the foreground, and which can stay in shallow focus or be out of shot altogether.
Anyway, back to Theis and his chat about Hollywood and its eternal quest for ways to cut corners and streamline its expensive productions (and who can blame them, really?). So how do you make the watching experience as lean as possible? As it turns out, the same way you atrophy eye muscles: the studios are experimenting with reversing the technique of eye tracking. So if you’re looking at this point of the screen…
Instead of drawing your eye’s attention to this point…
They move it to where your eye has been tracked.
In effect, your eyes and body need never move at all. All the desired action comes to you. So imagine you’re plugged into a VR or 360 headset, and the whole screen slips over so the desired image lands right in front of you.
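Mechanically, the idea is simple enough that it can be sketched in a few lines. This is purely my own hypothetical illustration, not how any studio actually implements it: given the direction the headset reports you’re gazing and the direction of the shot the filmmaker wants you to see, the player rotates the whole 360 sphere by the difference, so the action lands where your eyes already are. The function name and angle conventions here are assumptions for the sake of the example.

```python
# Hypothetical sketch of gaze-driven reframing in a 360 player.
# Instead of cueing the viewer toward the action, the player rotates
# the entire scene so the action lands at the viewer's current gaze.

def reframe_offset(gaze_yaw, gaze_pitch, target_yaw, target_pitch):
    """Return the (yaw, pitch) offset, in degrees, to rotate the scene
    so the target point lands at the viewer's gaze direction.
    Yaw is wrapped to (-180, 180] so the scene takes the short way round."""
    dyaw = (gaze_yaw - target_yaw + 180) % 360 - 180
    dpitch = gaze_pitch - target_pitch
    return dyaw, dpitch

# Viewer stares straight ahead (0, 0); the action is 90 degrees to the right:
print(reframe_offset(0, 0, 90, 0))       # -> (-90, 0): scene slips 90° left

# Wrap-around case: gaze at 170°, target at -170° — only 20° apart:
print(reframe_offset(170, 0, -170, 0))   # -> (-20, 0)
```

In a real player this offset would be eased over time rather than snapped, which is exactly where the “image slip” as a storytelling tool would live.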
The idea freaked the hell out of me. The frightening bits (losing that choice, making the experience more passive, etc.) are obvious, but damn if the potential isn’t exciting too. Assuming a filmmaker’s goal isn’t to fully couch-potato viewers, there’s a whole wide narrative language to be explored. I can only imagine it as abstract for now, but imagine if you could harness that kind of image slip and employ it not as a gimmick, but as a storytelling tool.
In a sense it completely changes what my role as an editor is and would be in this brave new world. As Brian Chirls says, “sounds like the editor needs to learn how to code.” Maybe. At the very least, editors who’ve always used tricks to hide and/or enhance edits have a bit of an inside track when it comes to understanding how the eye functions in making connections. But whatever the case, I have to say that the horizon of narrative possibilities is more and more apparent. Just imagine shooting a doc in 360, where you get to see not only what someone says, but the listener’s reaction. The ability to choose which POV you see, and to watch it over to see the other side, or even something altogether different.
There’s a big wave coming, where tech changes everything. There’s bound to be a lot of bad to go with the good, but I like to think my eyes are open.