With our next DP issue 18:04 we give an overview of the current line-up of render systems – including reviews of Arnold, Octane, RenderMan, Mantra and Redshift. In preparation, and as part of the fourth issue (release date: June 25), we spoke with the developers of the biased GPU renderer and cleared up a few questions.
Interview with a Redshift Developer
DP: When and why did you start working on Redshift?
Panos: "The three Redshift co-founders (Rob, Nicolas and me) are all ex-videogame developers. We've been working on GPUs and 3d acceleration pretty much since day one (anyone remember the Voodoo Graphics? 🙂 ). While at our video game jobs we noticed the 'GPU compute' trend and decided that the timing was right to try to get production-strength rendering and ray tracing onto the GPU, to reap the performance benefits that were obvious to us. We (and, I'm sure, other people too) were having these thoughts at a time when there were no commercial products on the market and no CUDA or OpenCL. Our first prototype actually ran on DirectX 9 pixel shaders! Then, when CUDA appeared, we ported our code to it. The company was incorporated in 2012 and we released our first beta version in 2013."
DP: When did Redshift start to pick up pace? Which studios and artists use Redshift today?
Panos: "Some heads were turned pretty much on day one (when we announced)! 🙂 Our out-of-core geometry and texture tech and the general philosophy of the renderer were (and, in many ways, continue to be) different from other GPU renderers. Our product is meant to replace CPU renderers, not to live next to them as a preview tool or a 'poor cousin'. This was contrary to the goals of some of our competitors at the time. For this reason, studios were attracted to our product almost immediately, which was pretty much the opposite of what we expected: we thought that in the beginning we'd only attract freelancers and hobbyists. Attracting studios defined a lot of the progress and priorities for our product moving forward. The list of studios and artists using Redshift today is too long to enumerate here. Perhaps the most recognizable studio (to gamers, at least) is Blizzard – their 'Overwatch' shorts were rendered with Redshift! But there are many more studios on that list from across different industries: design, TV, film, commercials and others."
DP: What are the advantages of switching to GPU rendering?
Panos: "The primary reason is speed. And that translates to 1) lower costs (fewer machines, fewer licenses, less power, less space), 2) better creativity, because you can iterate more, 3) better image quality, because you can use effects in your animations which you wouldn't before because of long render times, and 4) less stress in general! We'd like to say this to any prospective users who haven't tried Redshift yet: download the free demo and give it a spin! Many have done so and haven't looked back to CPU rendering since! :-)"
DP: What are the main reasons for artists to choose Redshift as their GPU renderer?
Panos: "Redshift was designed from the ground up to be a production-strength renderer. We support more rendering features and higher rendering performance than any other GPU renderer out there. And with the upcoming release of Redshift 3.0, the feature and performance gap will widen further!"
DP: Biased rendering – what does this mean in regards to your rendering architecture and for achieving photorealistic results – both from technical and a user standpoint? How does it compare to unbiased render engines?
Panos: "This question comes up a lot. In a single sentence: 'biased' means that the end result will not be 100% correct. Now let's explain what 'not correct' means. Renderers often have to take shortcuts for performance or visual-quality reasons. For example, a renderer might limit the number of bounces, or might limit the intensity of certain rays to avoid fireflies, or it might ignore certain light interactions because they produce too many fireflies or are slow to render. If a renderer does any of these tricks, then it's 'biased'. Here's the interesting thing: most renderers (even the 'unbiased' ones) do one or more of the above tricks. So they are biased, too! 🙂
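The "limit the intensity of certain rays" trick Panos mentions can be illustrated with a toy Monte Carlo estimate (a sketch with made-up numbers, not Redshift code): clamping rare, very bright samples removes fireflies (variance), but it also systematically darkens the result – which is exactly the "bias" being traded for a cleaner image.

```python
import random

random.seed(42)

def sample_radiance():
    # Toy stand-in for a path-traced sample: usually small, but
    # occasionally a huge "firefly" value (hypothetical distribution;
    # true mean is 0.001 * 1000 + 0.999 * 0.5 = 1.4995)
    return 1000.0 if random.random() < 0.001 else 0.5

def estimate(n, clamp=None):
    total = 0.0
    for _ in range(n):
        s = sample_radiance()
        if clamp is not None:
            s = min(s, clamp)  # biased: energy above the clamp is simply lost
        total += s
    return total / n

unbiased = estimate(100_000)           # noisy, hovers around the true mean ~1.5
biased   = estimate(100_000, clamp=2)  # smooth but darker, near 0.5 (clamped mean)
```

The clamped estimator converges quickly and never produces fireflies, but no number of samples will bring it back to the true mean – that lost energy is the bias.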
"'Biased' also means being able to optionally approximate some aspects of the render. For example, Redshift (and V-Ray, Mental Ray, etc.) can compute and store GI at points, which is, in certain cases, much faster than doing it the 'brute-force' way. This GI solution might actually be of very good quality and not flicker at all (we certainly try to achieve that in Redshift!), but it will never be as accurate or 'correct' as brute-force GI. So we offer both options. In this case, 'biased' means 'the ability to apply tradeoffs for performance reasons'.
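The point-based GI idea can be sketched in miniature (a hypothetical 1D toy, nothing like Redshift's actual data structures): evaluate the expensive lighting function once at sparse cache points, then interpolate between them at shading time, trading a small systematic error for a large speedup.

```python
import math

def expensive_gi(x):
    # Stand-in for a costly brute-force GI evaluation at position x
    return math.sin(x) * 0.5 + 0.5

# Precompute GI at sparse points (every 0.5 units along a toy 1D scene)
SPACING = 0.5
cache_xs = [i * SPACING for i in range(21)]
cache_vals = [expensive_gi(x) for x in cache_xs]

def cached_gi(x):
    # Linearly interpolate between the two nearest cached points:
    # fast, smooth (no flicker), but a biased approximation of expensive_gi(x)
    i = min(int(x / SPACING), len(cache_xs) - 2)
    t = (x - cache_xs[i]) / SPACING
    return cache_vals[i] * (1.0 - t) + cache_vals[i + 1] * t
```

At a cache point the lookup is exact; between points the interpolation error is small and consistent frame to frame, which is precisely the tradeoff described above.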
"'Biased', in our minds, also means 'tweakable'. For example, being able to have a particular object not cast shadows (via visibility flags), be lit only by specific lights (light-object linking), or show only specific objects in its reflections (trace sets). All these things give powerful artistic control, but they 'break' the realism and, in some cases, the 'unbiased-ness' of the rendering. However, they are near-essential in production. This is especially true in industries like marketing, where the important thing is not to render glass 100% accurately but to translate what the artist has in mind onto the computer screen – even if this means 'breaking the rules'."
DP: What PBR Workflows does Redshift support?
Panos: "Redshift's standard shader (the 'Redshift Material') uses the standard roughness/metalness workflow that can be found in other renderers – including videogame ones. But it has also been extended to allow for multiple BRDFs, and it includes extra controls for transmission and SSS, among other things. So, in some ways, it's a superset of the standard PBR workflow."
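As a rough sketch of the roughness/metalness convention Panos refers to (the generic PBR convention as used across renderers and game engines, not Redshift's internals): metalness blends between a dielectric response (about 4% gray specular at normal incidence plus a diffuse term) and a metallic one (specular tinted by the albedo, no diffuse).

```python
def lerp(a, b, t):
    return a + (b - a) * t

def pbr_base_params(albedo, metalness):
    """Derive per-channel diffuse color and specular reflectance (F0)
    from the standard roughness/metalness convention (illustrative only)."""
    # Dielectrics reflect ~4% at normal incidence; metals tint the
    # specular with their albedo and have no diffuse component.
    f0      = tuple(lerp(0.04, a, metalness) for a in albedo)
    diffuse = tuple(a * (1.0 - metalness) for a in albedo)
    return diffuse, f0

# Gold-ish albedo, fully metallic: no diffuse, colored specular
diffuse, f0 = pbr_base_params((1.0, 0.78, 0.34), metalness=1.0)
```

The appeal of this workflow is that two intuitive sliders (roughness, metalness) plus one color cover most real-world surfaces, which is why the same convention ported so cleanly from games into offline renderers.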
DP: Does Redshift work with partners to facilitate more GPU Cloud rendering services?
Panos: "Redshift has business relationships with a number of commercial render farms (the list can be found here). Beyond that, we have ideas and plans on how to leverage cloud GPUs in more 'direct' ways, and we are currently forming business partnerships with some of the industry's key players. Unfortunately, we cannot go into more detail, as this is still work in progress and no formal announcements have been made."
DP: What main features are you working on for upcoming updates that you can share with our readers?
Panos: "We're currently working on enabling Cryptomatte in our 3d plugins. The core work for it is done, so now we have to deal with the plugin side of things. We're also working on improving the quality of lighting through refractive glass (making it more physically correct without killing performance), increasing the quality of bump and normal mapping, adding area-light 'directionality' controls and adding multi-step deformation blur (we currently only do 2-step deformation and up to 31-step transformation). Beyond those near-term features, there's a long list of Redshift 3.0 items, which will include distributed rendering, automatic sampling (no more having to worry about 'num samples' controls), programmable shaders (including OSL), fixing some transparency trace-depth limitations, and others. We hope to be able to reveal more around SIGGRAPH! :-)"
DP: Are there plans to integrate MaterialX and Toon Shader as well?
Panos: "MaterialX is on our radar, although we're waiting to see how it gets implemented in the various 3d apps, as this might affect our implementation. Toon shading is definitely on the radar too!"
DP: When will OptiX and Altus denoising be available in Redshift customer builds?
Panos: "These are already available in customer builds – in the 2.6 versions! Very soon (next week), we'll be switching all our production builds to 2.6."
DP: What are some of your long term goals?
Panos: "Our focus around and after SIGGRAPH will be the Redshift 3.0 features mentioned above. Beyond these, there are a number of features we're currently working on which I cannot mention yet (for obvious reasons), and they all fall under 'Redshift 3.0'. We hope to be able to say more around SIGGRAPH (and maybe even be able to show some of it!)."
DP: In which directions do you see rendering technology evolving – what should artists expect from future developments?
Panos: "One popular topic currently is real-time ray tracing. And by real-time I don't mean 'progressive rendering until the frame is clear' but the real real-time! 🙂 I.e. the final frame (not a noisy version of it) being rendered at 30/60 fps. The recent Nvidia GTC demos were certainly impressive. The catch, of course, is that these are still limited approximations which, even though perfectly fine for a videogame, might not be great for production rendering. So the challenge is to see how technology like that can be leveraged in a production environment without losing too many features. In terms of rendering algorithms, improvements in Monte Carlo sampling (i.e. shooting 'smarter' rays instead of ones that contribute little to the image) are certainly a domain we're interested in. Micro-displacement/mesostructure algorithms are also interesting and, in our opinion, are actually ideal for GPU rendering!"
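"Shooting smarter rays" is, in essence, importance sampling. A minimal 1D illustration (toy integrand, not renderer code): drawing samples from a density proportional to the integrand makes every sample carry the same weight, collapsing the variance that uniform sampling suffers on a sharply peaked function.

```python
import math
import random

random.seed(0)

def f(x):
    # Toy integrand, sharply peaked near x = 0 (think: a glossy BRDF lobe)
    return math.exp(-8.0 * x)

TRUE = (1.0 - math.exp(-8.0)) / 8.0   # analytic value of the integral over [0, 1]
N = 50_000

# 1) Uniform sampling: most samples land where f is tiny and contribute little
uniform_est = sum(f(random.random()) for _ in range(N)) / N

# 2) Importance sampling: draw x ~ p(x) = 8 e^(-8x) / (1 - e^(-8)) by
#    inverting the CDF, then weight each sample by f(x) / p(x)
norm = 1.0 - math.exp(-8.0)
importance_est = 0.0
for _ in range(N):
    x = -math.log(1.0 - random.random() * norm) / 8.0
    importance_est += f(x) / (8.0 * math.exp(-8.0 * x) / norm)
importance_est /= N
# Here p is exactly proportional to f, so every weight is identical and the
# estimator has zero variance; in a real renderer p only approximates f.
```

Real renderers cannot sample the full light transport integrand exactly, but the closer the sampling density tracks it (BRDF sampling, light sampling, and combinations of both), the fewer rays are wasted on negligible contributions.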
For more information: Visit www.redshift3d.com