
Friday, December 14, 2012

Me on Hobbits and High Frame Rates

Concerning Hobbits....

(These are my opinions and not those of my employer etc. etc....)




A while back, I wrote a post called "Me on 3D", where I explained my fear and uncertainty over whether Peter Jackson had made the right choice in deciding to go to 48 fps for the Hobbit.


The post is here, if you want to read it first.

So I had my whole family there, at opening night, and out of curiosity, I decided to try the HFR 3D version. Not because I actually expected to enjoy it, but as a... let's call it "technical study".

So, in short - did my fears that high-frame-rate cinema looks shitty hold up?

I really wish I could yell a resounding Yes to that, but there are a few niggling parts that keep me from doing so. Make no mistake, though, it's still an 85-90% "Yes", but it is not the 110% doubt-free utter hate I had expected myself to feel. Which is very strange, and is the reason it took me so long to write this post.

So does the Hobbit at 48 fps look like a shitty soap opera?

Oddly - depends on who you ask!

My wife liked it. She thinks I'm being a Luddite of some kind.

And to my surprise, my middle son, Oscar, also said he liked it. This is strange to me, because he was the kid who, when we were watching "2012" in theatres - even though he was just 12 years old at the time - leaned over to me during those few shots shot with a 360 degree shutter (something which is perceptually very similar to a high frame rate) and with NO prompting from me whispered "Dad, did you see that? The framerate was off!"

Oliver (youngest) had no opinion, and Victor (the oldest) was... uncertain.

And if you ask me?

Oh for sure, to me it looks like crap. There is absolutely no doubt there.

...which is partially the reason I don't know what to think. I had expected a universal loathing, because to me it is so unquestionably awful that these other reactions confuse me... and seed.... doubt.


So what's my history with HFR?

As long as I've owned video cameras, I've always despised the "look", and it was only when digital video with DV cameras came onto the scene, and people started to deinterlace the footage, that something happened.

I live in a PAL country, where video is normally shot at 50 "fields" per second, which, while not quite 50 whole "frames", kinda is for all practical purposes (it's every other line, but ignore that for now; the perceived temporal resolution is 50 images per second). And it looks like shite.

Then people realized that when you removed the interlace (effectively tossing out every other line) you also caused the image to be only 25 frames per second... and it looks way better. It stops looking like crappy video, and starts to look "cinematic".
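As a minimal sketch of what that "toss out the interlace" step amounts to (hypothetical numpy arrays standing in for fields, not any particular camera or codec):

import numpy as np

def drop_field_deinterlace(fields_50i):
    # Crudest possible deinterlace: keep only every other field (tossing the
    # rest), and line-double the survivor to full frame height. 50 fields per
    # second in, 25 whole progressive frames per second out - the "25P" look.
    return [np.repeat(field, 2, axis=0) for field in fields_50i[::2]]

# One second of 576-line PAL: 50 fields of 288 lines each
fields = [np.zeros((288, 720)) for _ in range(50)]
print(len(drop_field_deinterlace(fields)))  # -> 25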

Yes, "cinematic". An elusive word we will be re-visiting more in this post.

Soon, cameras had this as a feature: "Progressive". So instead of the insanity of two interlaced half-pictures, you had whole frames, at 25 fps. "25P" became a big selling point on video cameras. I once paid almost $2000 for a video camera specifically so that it would do "25P".



And it wasn't just me. As a matter of fact, every TV programme (at least in Sweden) worth its salt that wanted to look "cool" immediately went to progressive mode, and 25 fps. Only news, sports, documentaries and other stuff that tries to "depict reality" stayed at 50i; everything dramatic and narrative went to 25P.

This has been going on for, what, 15+ years now in TV land?

All was good in the world until .... 

...some crazy person, hoping to put butts in seats, decided to yet again revive the recurring dud of "3D" in movies.

And 24P does have trouble with 3D. The "stroby" motion, which gives such wonderfully surreal-looking action in 2D, plays all sorts of havoc with your spatial sense when blown out to 3D. So 3D in 24/25P barely works. This is known.

So now people have tried to "fix" this with HFR (claiming it as something magically "new", while it's just back to the interlaced video look of 20 years ago - hardly "new" in any way, shape or form).

And sure, it does help 3D.

But.... what else does it do?

Why does HFR look Crappy?

This is the key question, the core of the poodle, as we say in Sweden. Why? What causes this? Or does nothing cause this?

There is a whole set of theories:


Theory #1: Familiarity / "Learned Response"

When I was a kid growing up we had a lot of British TV. One interesting aspect of BBC TV at the time was that stuff shot indoors was shot to tape on huge tube-based video cameras as large as houses. Since these were pretty much immobile, anything shot outdoors was shot on film. This created a very odd sensation that, within the same TV show, the perceived framerate could shift between the "indoor" and "outdoor" looks. And I remember even then thinking that "somehow, the indoor looks cheaper and crappier". I couldn't even articulate why, just that something was cheesy about it, and the outdoor was "cooler".

So there are people who claim that this is a "learned" thing, that we old farts have "learned" that "movies look like this, TV looks like this other thing", and then by repeated association connected one look with "quality" and one look with "crap".

I contest that, because I recall even at a super young age (as per above) pegging the 50i as cheesy-looking, without knowing why. And within the same TV programme, which kinda makes the "by association" theory fly out the window, IMHO.

Yet the fact that some youngsters - and my wife - had nowhere near the amount of trouble watching this thing that I had... means it can't be as universal as I thought.

The people in the "learned response" camp also bring up things like "people panned CDs in the beginning as bad-sounding", or "people didn't like color film at first", or "talkies didn't take root immediately"... I don't believe that.

For example, CDs vs. vinyl... vinyl discs need massive compression (audio level compression, that is) or the needle would - literally - jump out of the groove. But CDs were touted as "high dynamic range" and used very little compression in the beginning. Turns out, psycho-acoustically, we perceive more dynamic compression as "better" for some reason. Not a "learned response", but a psycho-acoustic fact. Today, the audiophile complaint is, ironically, that CDs are overcompressed instead (which they are, because the race in today's world is to "sound the loudest on the radio", and the more you compress, the higher you bring up the average sound pressure level, and the "louder you sound". And the wimpier things like drums sound.)
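A tiny sketch of that loudness arithmetic (a deliberately crude static compressor on a made-up "drum hits over a quiet bed" signal, just to show the average level going up at the same peak level):

import numpy as np

def compress(x, threshold=0.2, ratio=4.0):
    # Crude static compressor: anything above the threshold gets squashed by the ratio.
    over = np.abs(x) > threshold
    y = x.copy()
    y[over] = np.sign(x[over]) * (threshold + (np.abs(x[over]) - threshold) / ratio)
    return y

rng = np.random.default_rng(0)
signal = 0.05 * rng.standard_normal(48000)  # quiet bed
signal[::4800] = 1.0                        # sparse loud "drum hits"

squashed = compress(signal)
squashed *= np.abs(signal).max() / np.abs(squashed).max()  # makeup gain back to the same peak

rms = lambda x: np.sqrt(np.mean(x ** 2))
print(rms(signal), rms(squashed))  # the compressed version has the higher average level

Same peaks, higher average sound pressure... and flatter, wimpier drums.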



Theory #2: Idealized Movement theory

This is a theory I believe in more myself. Because in all the HFR examples I've seen (and that includes the HFR panel at SIGGRAPH 2012 where lots of different frame rates were showcased - more on this at a later time), I notice one very peculiar effect: HFR makes you a worse actor!

Yes, it is weird. Somehow, every mistake and every nuance of movement is suddenly present, and you can tell the tiny fidgety moves people do. The theory is that when we only "sample" the motion at 24 points every second, our brain fills in the intermediate movement with some "ideal", but when we actually have the HFR information, no such "motion smoothing" can occur in our brain, and we see the movements as jiggly and imprecise as they are, making them look more real but then also more like an actor acting.

Also, as many people have pointed out, HFR makes you see more detail. It has to do with how we perceive motion blur... not just that there is more motion blur in 25P content, but the fact that the brain actually adds blur in certain cases (long story, will explain later).

This causes you to see stuff you never saw before... including... slight differences in color between prosthetics and skin, contact lenses being visible, how cheaply constructed some sets are, etc. etc. Even the ridiculousness of proportions somehow becomes more "obvious"... such as the dwarves in the Hobbit ... never sold as real creatures, but as the over-exaggerated prosthetics they were - sadly.



Theory #3: The constant "Stopped Clock Illusion"

This is my personal theory, came up with it all meself' - so sue me.

Part of the reason I came up with this hare-brained theory is the repeated reports from people saying that they perceive HFR imagery as "sped up", as if they were watching something on a slight "fast forward", as if time were passing magically faster... while in fact it is not. And conversely, that the "cinematic" feel of 24P makes things seem "slower" in some... odd way.

What do I think?

Well, I think there is something inherent in the 24p rate that links to the speed of the saccadic movements our eyes make. Our brain is a freaky thing, and it invents almost everything you see on very flimsy foundations (the optic nerve simply doesn't have the bandwidth to give you as much visual information as you think you see - 'tis all an illusion made by the brain). If you look up the Stopped Clock Illusion you will find some interesting psycho-optical perceptual freakery that can make your head fall out your ear...

...so what if 24 frames per second happens to match the average length of a saccade, and the fact that we are fed a new still image every 1/24th of a second causes our brain to, effectively, be in a constant state of the "Stopped Clock Illusion"? Something that higher framerates do not do.
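Purely as a back-of-the-envelope check of that hand-waving (the saccade figures are the commonly cited ballpark, not measurements of anything in this post; take them as an assumption):

# Frame period at various frame rates vs. the rough duration of a saccade.
for fps in (24, 25, 48, 50, 60):
    print(f"{fps:3d} fps -> a new still image every {1000 / fps:5.1f} ms")

saccade_ms = (20, 200)  # commonly cited ballpark; varies a lot with amplitude
print(f"typical saccade duration: roughly {saccade_ms[0]}-{saccade_ms[1]} ms")

At 24 fps you get one new image per roughly saccade-sized chunk of time; at 48 fps and up you get two or more.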

The eye... is freaky. Our psycho-optical subsystem is... freakier. I had an interesting demo program once that scrolled some text. I happened - at the time - to own both a CRT monitor and an LCD monitor. The scrolling text was completely readable on the CRT, but a smeary mess on the LCD. This was not because the LCD was "slow"... it was due to the temporal characteristics of the two media.


Why? Well, it is similar to when I wrote my first raytracer, RayTracker, back in 1989. I had it render every 8th pixel first, then every 4th, then every 2nd, etc. If I did this with little dots on a black background, it looked much better than if I did it with blocks. The eye could interpolate and "assume" the missing information when on black, but when it was ipso facto large blocks, it could not. The blocks were perceived as a much lower resolution than the dots. Psycho-optics at work!
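For the curious, a little Python sketch of that refinement order (a stand-in illustration, obviously not RayTracker's actual 1989 code):

import numpy as np

def progressive_render(trace, width, height, preview="dots"):
    # Shade every 8th pixel, then every 4th, then every 2nd, then all of them.
    # preview="dots":   only the sampled pixel is lit, the rest stays black - the eye fills it in.
    # preview="blocks": the sample fills its whole step*step block - reads as chunky low-res.
    img = np.zeros((height, width))
    for step in (8, 4, 2, 1):
        for y in range(0, height, step):
            for x in range(0, width, step):
                c = trace(x, y)  # stand-in for the actual ray tracing
                if preview == "dots":
                    img[y, x] = c
                else:
                    img[y:y + step, x:x + step] = c
        yield img.copy()  # one intermediate image per pass

# Toy "scene": a diagonal gradient, previewed the blocky way
passes = list(progressive_render(lambda x, y: ((x + y) % 64) / 63.0, 64, 64, preview="blocks"))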


The same thing happens in time: A CRT basically "blinks" the image at you. The image exists for a super brief moment of time, and then fades. Whereas an LCD holds the image for a much longer time. The eye can sort of "interpolate" the movement in the case of the blinking CRT, but cannot in the case of the non-blinking LCD. The eye is trying to figure out the "ideal movement" based on the scrolling text, and the actual deviation from this (the text standing still for x milliseconds) is perceived as a discrepancy, and hence a blur. Whereas the blink looked to the eye like a point-sample, and it could merrily interpolate the in-between information.
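A rough back-of-the-envelope version of that, with made-up numbers:

def tracking_smear_px(scroll_speed_px_per_s, hold_time_s):
    # While the eye smoothly tracks the text, each displayed frame stands still
    # for hold_time_s; in that time the "frozen" image slides across the retina
    # by speed * hold_time, which is perceived as blur.
    return scroll_speed_px_per_s * hold_time_s

speed = 480.0                              # hypothetical scroll speed, pixels per second
print(tracking_smear_px(speed, 1 / 60))    # LCD holds the frame ~16.7 ms -> ~8 px of smear
print(tracking_smear_px(speed, 0.001))     # CRT phosphor "blink" ~1 ms   -> ~0.5 px of smear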



Or What?

I don't know.

Frankly, the fact that not everybody hates this crap is one thing that disturbs me. If my theories were right, everyone and their mother would see that, indeed, the emperor has no clothes, and 48 fps is just ugly-as-crap insanity. But they don't. Some people seem to... like it.

The second problem is that there were pieces of this movie where I didn't care. I.e. for certain segments, at certain ... brief.... moments... I found myself not caring. That disturbed me, because I was expecting to hate all of it with the fury of a thousand exploding suns. I didn't. There were whole stretches - sometimes up to almost a whole minute in length - where I wasn't annoyed out of my skull by the 48 fps. That smells like danger to me. Why was that? I bet the gold lies in knowing when it doesn't annoy you out of your skull.

I don't know.


Alas..... I ramble.


I'll shut up now.


Oh wait. The movie?


Could have lost about 40+ minutes, especially in the beginning. Some cool fanboy-serving bits. Gollum was nice. The lead CG goblin looked like lead CG (Gollum much less so). Some action was stretched out beyond absurdity. But sure... it was viewable. Worth watching, even.

I'm just not completely sure it's necessary to watch 24 extra frames of it every second....


And the re-sizing of Gandalf was awesome and flawless. Wish I could say the same of the incessant color correction of his face....



/Z

ADDENDUM:

Stu Maschwitz is a very clever guy, and has a few posts on this topic that are worth reading w.r.t. this one:

Tuesday, November 22, 2011

Me on 3D

So, I just did something I never thought I'd do: me and my son (a budding film critic) went and re-watched Tintin... in 3D!



Now people may have heard me somewhat whine about 3D, because I think it fundamentally is a distracting viewing experience. So why on earth did I go see Tintin in 3D? Well, there are several reasons:
  1. Primary goal was actually to re-watch it in the original English. My first viewing was together with my in-laws, and my father in law was the one who introduced our kids to Tintin by owning all the albums. I allowed the in-laws to choose what screening to go to, and they favoured the Swedish dub - in 2D.
  2. I've already seen the movie. Since I historically find 3D distracting, it was an advantage that, having now seen the plot, it was "ok" to re-watch the movie on a more "technical" level.
  3. I actually wanted to study it from a technical standpoint and had some hopes that some bits - specifically, the opening credits - could work nicely in 3D!
What were my conclusions after doing this audacious thing? Well, same as always...

The 3D of Tintin - my Verdict

3D still feels fundamentally distracting. While the Tintin 3D was some of the best I've seen, there were still cases where they were poking canes in my eye, or doing swirly stuff just to show off. For example, early in the film, Tintin steps into the street, is almost hit by a car, but is saved by the Thom[p]sons' trademark canes.
I'll bet that shot originally was just the one shot of the car, and them pulling him off the street.
But someone wanted "more 3D", so inserted there is a completely pointless, rotate-y, spinny, undercarriage-viewing shot, which only distracted. It was out of place both editorially and stylistically, IMHO.

Other such sequences were much more gracefully handled; the massive long one-shot action sequence in the middle with the dam bursting was marvellously crafted for 3D. It was actually better (I never thought I'd ever say that!) in 3D than in the 2D showing! The problem with that sequence is instead that Tintin and Haddock pretty much demolish a small town with unknown casualties, and it is treated as a throwaway gag, which felt a bit out-of-place to me.

3D Overall - my verdict

I still think 3D has a long way to go before I willingly accept it as anything but a method for trying to "trick" people into going to the theater for the supposedly "enhanced experience" for extra money. In my life I've seen exactly three things that were better in 3D:

  • "Day and Night", beautiful Pixar short
  • The opening battle from Star Wars Episode 3 (shown as a proof of concept at Siggraph - they managed to make it look large)
  • Aforementioned single shot from Tintin

Here's the thing:

Actually shooting for 3D is tricky and cumbersome, and while it in theory should give the better result, issues of polarization in the mirror rigs generally used actually cause very annoying differences between the eyes.

Converted 3D - don't even get me started on that mess. Yes, it gets "better" every year, but without actually having the full 3D information from both eyes in the first place, some stuff invariably has to be invented: pulling apart stuff from one eye and shifting it haphazardly to some "neat" place for the other eye. Some poor underpaid roto person has to invent the missing pixels hidden behind some object seen from the one camera but needed for the other eye. Conversions tend to look flat and cardboard-cut-out-y, and anything volumetric in nature (smoke, mist, clouds) or spatially complex (A Shrubbery! Ni!) is bound to cause issues. Sometimes the 3D effect is "enhanced" into a completely unrealistic space, and if you have an acute sense of geometry you spot this immediately.
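To make the "invent the missing pixels" part concrete, here is a toy version of the depth-based shifting a conversion does (a deliberately naive sketch with made-up numbers, nothing like a real conversion pipeline):

import numpy as np

def fake_right_eye(left, depth, max_disparity_px=8):
    # Shift each left-eye pixel sideways by a depth-dependent disparity to
    # synthesize a "right eye". depth is normalized 0 (far) .. 1 (near);
    # nearer pixels shift more. Anything that was hidden behind a foreground
    # object comes out as a hole (NaN) that a roto/paint artist must invent.
    h, w = left.shape
    right = np.full((h, w), np.nan)
    disparity = np.round(depth * max_disparity_px).astype(int)
    for y in range(h):
        for x in range(w):
            nx = x - disparity[y, x]
            if 0 <= nx < w:
                right[y, nx] = left[y, x]
    return right

left = np.random.rand(4, 64)                          # toy image
depth = np.tile(np.linspace(0.0, 1.0, 64), (4, 1))    # toy depth ramp, far -> near
holes = np.isnan(fake_right_eye(left, depth))
print(holes.sum(), "pixels have to be invented for the other eye")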

The only 3D that seems to "work" for me is the CG-generated kind, and specifically, the simpler CG-generated kind. Like Day & Night, which I mentioned above, or the way-better-than-the-actual-movie animated sequence about the three brothers in Harry Potter and the Dead Hollow Things.


Why 3D doesn't work, and why Jim and Peter may be right

Others have explained why 3D doesn't work. The primary problems I see are these:
  • The fundamental discrepancy between focus plane and convergence for stuff that sits far off the screen plane. This isn't such a big deal, though, because we really only change the focus of our eyes much for stuff really close to us.
  • The fact that movie-makers try to "solve" this by playing with convergence, which actually ends up changing the "scale" of what you are looking at. Sometimes people are perceived as "huge", sometimes they are perceived as "tiny". I know 2D movies kinda do that too, but I'd honestly rather perceive a close up as "close" than "huge". Maybe that's just me.
  • Incorrect 3D space (mostly happens in converted movies where they literally screw up and make impossible geometry)
  • The fact that the perceived "depth" actually depends on the size of the movie screen and how far you sit from it!! (See the little arithmetic sketch right after this list.)
  • Frame rate judder! Yes! JC and PJ may be right! More on this below
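That screen-size point surprises people, so here's the similar-triangles arithmetic behind it (a sketch only; the 6.5 cm eye separation, the 0.3% parallax, and the screen/seat numbers are just assumed for illustration):

def perceived_distance_m(parallax_m, viewing_distance_m, eye_separation_m=0.065):
    # Where a point appears to sit, given its on-screen parallax
    # (positive = uncrossed, i.e. "behind" the screen plane).
    # If the parallax ever exceeds the eye separation, the eyes would have
    # to diverge, and the whole thing (and this formula) breaks down.
    return viewing_distance_m * eye_separation_m / (eye_separation_m - parallax_m)

# The same shot, with parallax authored as 0.3% of image width, on two screens:
for label, screen_width_m, seat_distance_m in (("living-room TV", 0.9, 2.5),
                                               ("cinema screen", 20.0, 15.0)):
    p = 0.003 * screen_width_m
    print(label, round(perceived_distance_m(p, seat_distance_m), 1), "m away")

Same pixels, wildly different perceived depth: roughly just behind the TV, but a couple of hundred meters away in the cinema.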

Zap, did you say frame rate?

I did.

You see, movies are done at 24 frames per second and I love them for it. I have always been a proponent of 24p filmmaking (and using a 180 degree shutter, i.e. 50% of the time the shutter is open).
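For reference, the shutter-angle arithmetic is simply this:

def exposure_time_s(fps, shutter_angle_deg):
    # Fraction of the frame period during which the shutter is open.
    return (shutter_angle_deg / 360.0) / fps

print(exposure_time_s(24, 180))  # ~0.0208 s = 1/48 s, the classic "filmic" motion blur
print(exposure_time_s(24, 360))  # ~0.0417 s = 1/24 s, the look that screams "VIDEO" below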

I was physically unable to sit through "Public Enemies" (the Johnny Depp gangster flick) because the film was using a 360 degree shutter, which looks suspiciously like a higher framerate - something that screams "VIDEO" to me!

I've spent a lot of extra cash over the years to be sure all cameras I buy shoot 25 progressive frames rather than shooting the standard PAL 50 fields interlaced. It has been expensive but worth it, and I still can't stand high framerate viewing for narrative content. It is perceived as cheap and crappy, sets look like sets and not places, actors look like actors and not characters, and it cheapens the experience tremendously.

Having said all that - Yes - I think 24 fps is one of the issues with 3D! Because somehow, as your eye is trying to follow all the action, the juddery motion of 24 frames adds to the distraction. What is so beautiful in 2D somehow breaks down in the fundamentally different 3D viewing world.

So I honestly think Jim Cameron and Peter Jackson are right in this regard; 3D will look "much more lifelike" when shown at a higher framerate. Still, though, I am afraid, because I retain my fear from my 2D film experience; one could argue that high-framerate 2D viewing is also "more lifelike"... but that is exactly what kills it!

Something strange and magical dies at higher framerates, and I don't know what it is. Does 3D change that... maybe, maybe not? I don't know yet!

I eagerly await seeing The Hobbit with its 48 fps 3D (though I pray the 2D version is done at 24 fps), because I need to see what it looks like.


Because - Yes - I am afraid!

I am afraid that we end up in the same situation as with 2D: while it looks "more realistic", that also murders it as being filmic. That it will make sets look like sets, actors like actors, props like props, make-up like make-up, and completely cheapen the whole thing.

I'm scared. I quiver.



The Zap Solution for 3D

So... what do I think people should do to add the third dimension? Or should they?

Well... movie makers have tried to give "depth" to their films for years. Because stereoscopic viewing (i.e. the parallax difference between the two eyes) is not the only depth cue our brain uses to decode depth. And make no mistake, perception happens in the brain, not in the eyes. If you can fool the brain, you don't need to fool the eyes.

A few other cues for depth are:
  • Motion parallax (as you move side to side, stuff at different distances move at different speeds in your field of view)
  • Perspective change (as you move forward and back, proportions of stuff and angles of lines change)
  • Depth haze (particles in the air change the contrast and saturation of near/far things)
  • Focus (playing with the depth of field of the camera can help induce a depth effect)
Filmmakers have always tried to give the fundamentally flat movie screen a sense of depth by doing all of the above; by adding smoke, focusing the lens, and dollying and trucking the camera. All these depth cues try to communicate depth to your visual cortex. And sometimes it works quite well and can give a good sense of depth.
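Incidentally, the "depth haze" cue above is also exactly what renderers and compositors fake with a simple fog term; a minimal sketch, with the standard exponential falloff and arbitrary numbers:

import numpy as np

def add_depth_haze(color, depth_m, haze_color=0.7, density_per_m=0.02):
    # Blend toward the haze color with distance: the farther away a thing is,
    # the lower its contrast and saturation - a purely 2D trick that still
    # reads as depth to the visual cortex.
    transmittance = np.exp(-density_per_m * np.asarray(depth_m, dtype=float))
    return np.asarray(color) * transmittance + haze_color * (1.0 - transmittance)

print(add_depth_haze(1.0, 5.0))    # near object: barely hazed
print(add_depth_haze(1.0, 200.0))  # distant object: washed toward the haze color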

Unfortunately, two depth cues are fighting it:
  • Eye focus (eyes always focused in the screen plane)
  • Binocular Vision (both of your eyes see the same thing)
The focus thing isn't a massive deal, and good filmmakers have always been able to trick you with depth-of-field effects, but binocular vision is a bigger deal.

Your two eyes are clearly telling you "this is happening on a flat screen". So even if the filmmaker loads up his shot with a metric ton of dollying, trucking and smoke, your two eyes are still telling you... that it's all on a flat screen.

So what can we do?

We have a bunch of cues saying "It is 3D..." and one major cue (your two eyes) saying "....but it's not"?

Simple.

Remove the conflicting cue!

Yes - the Pirates were right. No. Not the ones downloading movies, the other ones. Y'know, the kind that sometimes are from the Caribbean?

Coz they knew that the best, cheapest 3D glasses in the world - the ones which turn any well-shot movie into a 3D movie with no extra production cost....


...is a pirate eyepatch.


You can thank me later.

/Z