Are we sure that footage works?

Getting HDR out the door is still a bit tricky – in theory, everything works, or at least should. But what if you have to KNOW it works? AJA has presented a machine that takes the guesswork out of the process.
AJA HDR Image Analyzer

At IBC in Amsterdam we had a first look at the box and talked to Bryce Button from AJA about why analysis matters, what is going wrong on the production side, and where the whole HDR story is headed.

DP: What does the AJA HDR Image Analyzer actually analyze? And what kinds of errors get logged?
Bryce Button: The AJA HDR Image Analyzer is capable of analyzing signals coming in from Arri, Canon, Panavision, RED and Sony camera log formats, and measuring color gamuts and luminance ranges for both HDR and SDR, including BT.2020, BT.709 (SDR) and P3. BT.2020 is the widest color space, but for now most displays can't handle more than P3, so a lot of the analysis is about ensuring that looks stay within P3 as a matter of practical workflow. Currently Apple, Samsung and most major HDR displays are limited to P3. Other dynamic range inputs measured are SDR 709, PQ (Perceptual Quantizer), also known as ST 2084, and HLG.
The measurement tools include nit light levels – which were never measured before HDR, outside of the standard 100-nit upper limit for SDR – waveforms, histograms and a vectorscope; we even have what is known as Lumi Color, a way of visually representing luminance levels as colors. This is very helpful when trying to understand which color and light combinations might be an issue, so that you can create optimal results with your material.
We also provide a CIE xy gamut view, which means that in HDR mode we can check whether encoded colors are within valid color-range limits. This is especially important in BT.2020, where pixels have to be kept within the P3 color gamut for practical purposes, as mentioned above.
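To make that gamut check concrete, here is a minimal Python sketch (not AJA's implementation) that tests whether a pixel's CIE xy chromaticity falls inside the triangle spanned by the P3 (D65) primaries; the primary coordinates are the published ones, everything else is illustrative.

```python
# Minimal sketch: is a CIE xy chromaticity inside the P3 gamut triangle?
# The primary coordinates are the published P3 (D65 white) values;
# the function names are illustrative, not AJA's API.

P3_PRIMARIES = [(0.680, 0.320),   # red
                (0.265, 0.690),   # green
                (0.150, 0.060)]   # blue

def _sign(p, a, b):
    """Signed area test: which side of edge a-b the point p lies on."""
    return (p[0] - b[0]) * (a[1] - b[1]) - (a[0] - b[0]) * (p[1] - b[1])

def inside_p3(xy):
    """True if the chromaticity point xy = (x, y) lies inside the P3 triangle."""
    r, g, b = P3_PRIMARIES
    d1, d2, d3 = _sign(xy, r, g), _sign(xy, g, b), _sign(xy, b, r)
    has_neg = (d1 < 0) or (d2 < 0) or (d3 < 0)
    has_pos = (d1 > 0) or (d2 > 0) or (d3 > 0)
    return not (has_neg and has_pos)      # all on the same side -> inside

# A fully saturated BT.2020 green (0.170, 0.797) falls outside P3, D65 white inside:
print(inside_p3((0.170, 0.797)))    # False
print(inside_p3((0.3127, 0.3290)))  # True
```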
When it comes to error logging, you can set the nit level in preferences so that anything outside the nit level you are aiming for in a given project is reported automatically as part of your error log.
HDR Image Analyzer will also tell you if a project's color or gamut space is out of range and tie that to timecode, so your team can double-check their work at specific points. The error log can be saved as a file and emailed to whoever needs it. It can also be set to monitor audio levels.
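As an illustration of what such an error log boils down to, here is a hedged Python sketch: flag every frame whose peak luminance exceeds the project's nit target and tie the entry to timecode. The 1,000-nit limit, the frame structure and all names are assumptions made for the example, not the Analyzer's actual interface.

```python
# Illustrative sketch of nit-threshold error logging tied to timecode.
# The data structures and the 1000-nit limit are assumptions for this example,
# not the HDR Image Analyzer's actual interface.

from dataclasses import dataclass

@dataclass
class Frame:
    timecode: str       # e.g. "01:02:03:04"
    peak_nits: float    # brightest pixel in the frame, in cd/m^2 (nits)

NIT_LIMIT = 1000.0      # project-specific delivery target set in preferences

def log_nit_errors(frames, limit=NIT_LIMIT):
    """Return human-readable log entries for frames above the nit limit."""
    entries = []
    for f in frames:
        if f.peak_nits > limit:
            entries.append(f"{f.timecode}  peak {f.peak_nits:.0f} nits exceeds {limit:.0f} nit limit")
    return entries

# The resulting log can be written to a file and mailed to the team:
frames = [Frame("01:00:10:12", 820.0), Frame("01:00:10:13", 1430.0)]
with open("nit_errors.txt", "w") as fh:
    fh.write("\n".join(log_nit_errors(frames)))
```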

If you measure, you have options – a wide range of graphs and measurements is available, including a pixel-specific table (bottom right).

DP: How can something perceptual be measured at all? And which ranges and metrics are even sensible to measure?
Bryce Button: Ranges like BT.2020 and P3 are scientifically defined, so we can certainly measure what falls outside of them. Whether it's the luminance levels, the nit light levels you have set for a particular delivery goal, or the color gamut ranges defined by Rec. 709 for SDR and by P3 and BT.2020 for HDR, these items are indeed measurable. As a result, what really matters for production is understanding the nit level you are going to need to deliver, ensuring the items you care about don't exceed it, and knowing the wide color gamut range you are trying to achieve, so the result isn't overly saturated for the display range you are targeting.
With HDR, the viewing environment affects these decisions the most, and many people don't realize that cinema works at a much lower nit level than a display viewed in a bright room. In a bright room, higher nit levels are necessary to produce results close to what the human visual system perceives, which is the overall goal of HDR.
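Those nit levels are not a perceptual guess: PQ (ST 2084) defines an exact mapping from code value to absolute luminance. The Python sketch below implements that published EOTF; only the sample code value is made up for the example.

```python
# ST 2084 (PQ) EOTF: map a code value to absolute luminance in nits.
# Constants are the ones published in the standard; the sample value is arbitrary.

M1 = 2610 / 16384          # 0.1593017578125
M2 = 2523 / 4096 * 128     # 78.84375
C1 = 3424 / 4096           # 0.8359375
C2 = 2413 / 4096 * 32      # 18.8515625
C3 = 2392 / 4096 * 32      # 18.6875

def pq_to_nits(code_value, bit_depth=10):
    """Convert an integer PQ code value to absolute luminance in cd/m^2 (nits)."""
    e = code_value / (2 ** bit_depth - 1)          # normalize to 0..1
    ep = e ** (1 / M2)
    y = max(ep - C1, 0.0) / (C2 - C3 * ep)         # linear light, 0..1
    return 10000.0 * y ** (1 / M1)                 # PQ tops out at 10,000 nits

# SDR's 100-nit reference white sits at roughly code value 520 in 10-bit PQ:
print(round(pq_to_nits(520), 1))
```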

DP: Which of the different HDR standards are supported?
Bryce Button: It is important, I think, to understand that these aren't standards per se. They are approaches that try to arrive at a result which satisfies the end goal: imagery as close as possible to the human visual system, or HVS. From the consumer end it is all transparent – the latest displays automatically recognize the approach being delivered and simply render it. It's the data and the handshake between the HDMI signal and the display that trigger the correct result. Although there aren't that many HDR approaches, it can be confusing in that different terms are used to describe effectively the same thing.
Dolby as a company did the initial research into HDR and set goal posts that are now being reached in a variety of ways. The entire Dolby approach is based on PQ (Perceptual Quantizer) and was initially designed to work with 12-bit systems and dynamic metadata to render the best results. HDR10, which is the HDMI way of describing PQ but designed to work with 10-bit systems, is a subset of this approach. HDR10 only offers static metadata, so it delivers one overall setting for an entire program, whereas Dolby Vision is designed to allow adjustments from scene to scene or shot to shot. HDR10+ takes the same 10-bit PQ approach but, similarly to Dolby Vision, adds dynamic metadata. HDR10 and HDR10+ are terms used for the output over HDMI, but when it comes to dynamic range they are the same thing – effectively PQ.
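To illustrate the static-metadata point: an HDR10 stream typically carries just two content light-level values for the whole program, MaxCLL (the brightest pixel anywhere) and MaxFALL (the brightest frame average). The Python sketch below shows how such values could be derived from per-frame nit measurements; the frame data is invented for the example.

```python
# Sketch: derive HDR10-style static metadata (MaxCLL / MaxFALL) from per-frame nits.
# One pair of numbers describes the whole program - hence "static" metadata - whereas
# Dolby Vision / HDR10+ carry per-scene or per-shot trim data instead.
# The frame values here are invented for the example.

frames = [
    {"peak_nits": 620.0,  "avg_nits": 48.0},
    {"peak_nits": 1380.0, "avg_nits": 210.0},   # a bright highlight shot
    {"peak_nits": 90.0,   "avg_nits": 12.0},
]

max_cll  = max(f["peak_nits"] for f in frames)   # brightest pixel in the program
max_fall = max(f["avg_nits"]  for f in frames)   # brightest frame average

print(f"MaxCLL = {max_cll:.0f} nits, MaxFALL = {max_fall:.0f} nits")
```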
A second, truly different approach is HLG, which was jointly developed by the BBC and NHK. HLG was designed for live production, where metadata is not always desired for fear of it losing sync in complex pipelines – a real liability for live event capture. The HLG approach lets you deliver both HDR and SDR across the same signal, and the end display renders SDR or HDR depending on what it is capable of. That's why audiences saw HLG Live from Sony and HLG from the BBC and NHK in action this summer, as major sports events began to implement HDR. Often HLG will be used all the way through the pipeline and then converted to PQ at the last step for final delivery, using products like AJA's FS-HDR which can transform signals between HLG and PQ.
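For reference, the HLG curve itself (ARIB STD-B67 / BT.2100) is simple enough to write down: a square-root segment for the lower scene-light range, which behaves much like a conventional SDR camera curve, and a logarithmic segment that compresses the highlights. A Python sketch using the published constants:

```python
# HLG OETF (ARIB STD-B67 / BT.2100): scene-linear light E (0..1) -> signal E' (0..1).
# The square-root branch keeps the lower range close to a conventional SDR camera
# curve, which is what makes HLG backward compatible; the log branch compresses
# highlights. Constants are the published ones.

from math import sqrt, log

A = 0.17883277
B = 1 - 4 * A          # 0.28466892
C = 0.55991073

def hlg_oetf(e):
    """Map normalized scene-linear light to the HLG signal value."""
    if e <= 1 / 12:
        return sqrt(3 * e)
    return A * log(12 * e - B) + C

print(round(hlg_oetf(1 / 12), 3))  # 0.5 - the crossover between the two branches
print(round(hlg_oetf(1.0), 3))     # 1.0 - peak scene light maps to full signal
```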
PQ is currently the most popular approach for produced programming, where trims can be performed for a range of delivery needs and the immediacy of live production is not a concern.

Is it still safe to broadcast? One of the problems that gets analyzed is how far inside – or outside – the broadcast-safe color spaces your footage sits.

DP: Given the fundamental difference between the PQ EOTF and the BBC's HLG (ARIB STD-B67): how can one keep up with what is, in theory, a double mastering of every project?
Bryce Button: Products like AJA's FS-HDR are designed to handle both approaches, with up to four realtime simultaneous signals being processed for HD work. The HDR Image Analyzer can likewise be set up to take in PQ or HLG for analysis. This means the same camera log input can be delivered simultaneously, in realtime, in any combination you want with the FS-HDR. For example, you could take the output from a Sony camera and deliver PQ and HLG at the same time along two separate outputs. With our latest firmware updates for FS-HDR you could also simultaneously output an SDR signal, giving you a number of options. The latest FS-HDR firmware has also added support for the latest BBC HLG LUTs, which include both scene-referred and display-referred options covering EOTF and OETF workflows.
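The math behind such a conversion is public, even if the vendor processing is not. The sketch below follows the BT.2100 formulas to turn a single HLG-coded gray value into its PQ equivalent on an assumed 1000-nit display; it is a deliberately simplified, achromatic illustration, not the FS-HDR's implementation, and full color handling is left out.

```python
# Simplified HLG -> PQ conversion for a single gray value - the kind of transform a
# converter box performs in realtime. This only follows the public BT.2100 formulas
# for an achromatic signal on an assumed 1000-nit display; full color handling and
# any vendor-specific processing are deliberately left out.

from math import exp

# HLG inverse OETF: signal E' (0..1) -> relative scene light E (0..1)
A, B, C = 0.17883277, 0.28466892, 0.55991073
def hlg_signal_to_scene(ep):
    return ep * ep / 3.0 if ep <= 0.5 else (exp((ep - C) / A) + B) / 12.0

# HLG OOTF for an achromatic signal: scene light -> display light on a 1000-nit display
PEAK_NITS, SYSTEM_GAMMA = 1000.0, 1.2
def scene_to_nits(e_scene):
    return PEAK_NITS * e_scene ** SYSTEM_GAMMA

# PQ (ST 2084) inverse EOTF: absolute nits -> PQ signal (0..1)
M1, M2 = 2610 / 16384, 2523 / 4096 * 128
C1, C2, C3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32
def nits_to_pq(nits):
    y = (nits / 10000.0) ** M1
    return ((C1 + C2 * y) / (1 + C3 * y)) ** M2

# HLG signal 0.75 is the nominal HDR reference white; on a 1000-nit display it lands
# around 203 nits, which corresponds to a PQ signal of roughly 0.58.
nits = scene_to_nits(hlg_signal_to_scene(0.75))
print(f"{nits:.0f} nits -> PQ signal {nits_to_pq(nits):.3f}")
```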

DP: Which one of the standards will become the de-facto standard in your personal opinion?
Bryce Button: PQ is currently the most widely adopted standard, but there will be continual variations on that approach for different needs. From the consumer perspective there is no standards battle: any display purchased in late 2018 already handles some combination of HLG, HDR10 and/or Dolby Vision or HDR10+; it's a transparent experience for the end user.

DP: What other problems are showing up that aren't analyzed so far?
Bryce Button: The HDR Image Analyzer is going to analyze everything you need, since it already handles log camera formats; the bigger issues in HDR arise during production. The real challenge on set is ensuring that composition and lighting setups are tuned to the aesthetic demands of the show. This applies to everything from make-up to background details in shots – if you are not viewing correctly in HDR on set, you may not realize that you have introduced overbearing highlights or other background details that fight with the foreground focus. We have seen this regularly in HDR production. Without proper tools like the FS-HDR and an analyzer in play, the risk is not seeing an accurate picture, which can lead to huge headaches and expenses in post to tone down background details that overpower the foreground points of interest.

The common formats are supported – new definitions of those formats will be distributed by AJA.

DP: From your experience: Which mistakes found by the Analyzer can be fixed reasonably well, and which require reshoots?
Bryce Button: HDR Image Analyzer will let you know where you are in terms of what you are producing, and of course it is the perfect tool for QC and mastering work. As long as you are exposing shots correctly, you will have what you need. If you are properly monitoring on set, you will be able to gauge perceptual results for HDR delivery. If you are just reviewing files on a standard-range display and expecting to know the accuracy of what's been captured, you are setting yourself up for headaches down the line in post. The HDR Image Analyzer can help even if you don't have an HDR display, because you are still measuring everything correctly: you can tell whether nit levels are too high or too low and whether the color range is within scope of what you are choosing to deliver. It can gauge SDR productions as well as HDR, so you can measure results for both. A perfect combination is to use the HDR Image Analyzer with the FS-HDR, so you can transform between different approaches in realtime and send that on to the Analyzer to get an idea of what the final result will look like.

DP: With the state of tech these days: If you had to imagine your perfect setup for shooting and delivering HDR content, what would that be?
Bryce Button: It's quite subjective. For any HDR setup it's best to start with the available pro cameras that offer the most dynamic range. Then you can feed signals through a product like the FS-HDR to see how the potential end results will look. It's also possible to upload your own 3D .cube LUTs, if you have a particular look you are aiming for, and analyze the result through the HDR Image Analyzer, because without a proper analysis tool it's often impossible to understand what nit levels you are hitting and how far your color gamut reaches. Ultimately you want to pay attention to your composition, lighting setup and items like make-up, highlights and shadows, and ensure that the balance will play well in both HDR and SDR.
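Since custom look LUTs come up here: a 3D .cube LUT is essentially a text file with a LUT_3D_SIZE header followed by N³ RGB output rows. The Python sketch below parses one and applies it with a simple nearest-node lookup; real tools interpolate (trilinearly or tetrahedrally), the row ordering assumed is the common one with the red index varying fastest, and the file name is hypothetical.

```python
# Minimal sketch: parse a .cube 3D LUT and look up a color with nearest-node indexing.
# Real graders/converters interpolate; this only shows the structure. Assumes the
# common layout where the red index varies fastest and inputs are normalized 0..1.

def load_cube(path):
    size, table = None, []
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#"):
                continue                               # skip blanks and comments
            if line.startswith("LUT_3D_SIZE"):
                size = int(line.split()[1])            # nodes per axis
            elif line[0].isdigit() or line[0] in "+-.":
                table.append(tuple(float(v) for v in line.split()[:3]))
    return size, table

def apply_lut_nearest(rgb, size, table):
    """Map a normalized (r, g, b) triple through the LUT using its nearest node."""
    r_i, g_i, b_i = (min(size - 1, round(c * (size - 1))) for c in rgb)
    return table[r_i + g_i * size + b_i * size * size]   # red varies fastest

# size, table = load_cube("my_look.cube")               # hypothetical file
# print(apply_lut_nearest((0.18, 0.18, 0.18), size, table))
```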

DP: And, speaking as a consumer of HDR content: what would be your perfect at-home setup for a decent HDR experience?
Bryce Button: It involves doing your homework. Make sure you are getting a model capable of handling both HLG and HDR10 at a minimum, so it doesn't matter where the source is coming from. If your OTA or cable source is lean on HDR content, you will also want a delivery device capable of playing HDR content – perhaps an Apple TV 4K or a Sony PlayStation, or one of the many third-party devices that deliver better than standard cable-box quality. There is not a lot of HDR content on standard network channels yet, but ample content is available through OTT services including Amazon, Netflix and YouTube.
