The Great Textured Area Light hack
Final rendered frame from my demo reel, using the TGALH technique. Q35 Jumpjet based on the design by Bram Lambrecht.
This page describes a rather contorted, monstrous hack that does the following:
1. creates what amounts to a textured area light that mimics the lighting from a flickering TV screen when used with image sequences, AND also creates spinning-light skydome rigs as a 6-10x faster alternative to background-only radiosity
2. renders hella fast with high AA, using shadow maps
3. Works with HDRI rendering, beating the hell out of BG radiosity for speed, especially in shots with other raytracing going on. Not only is it faster than BG-only radiosity by 3 to 50+ times, it permits soft reflections of the background to occur in the surface's SPECULAR channel. The Beethoven image shown here would go past one hour with soft raytraced reflections and BG-only radiosity (2 recursions).
4. The TGALH rig uses its own image/sequence, so you can use a super-sharp, hires skydome image with small hotspots, while using a low-res, blurred image for the rig. No speckling, and huge time savings versus bumping the rays-per-evaluation with radiosity.
5. This rig can be combined with one-bounce radiosity to yield effectively TWO-bounce results, again at a huge time savings.
The speed benefit and the textured-area-light aspect are the main motives; otherwise I'd have skipped all this and kept the hair I tore out fighting to make it work...
I am not responsible for any depilatory or other effects resulting from working with this hack. I'm dead serious! This process works with the buggiest and least reliable part of LightWave, the Image Editor. Save often! Be sure to feed all your image sequences to an OpenGL-visible texture in your scene someplace to check that everything is updating as it should! And, make this work on an experimental basis before even thinking about using this in some mission-critical application. YOU HAVE BEEN WARNED.
UPDATE: I have arrived at an Image Editor stabilizing trick (step 14), so that you can safely save and reuse your area light hacks. This is the same process -- probably to deal with the same underlying bug(s) -- as used in Timothy Albee's similarly huge Sasquatch hack to deal with Sasquatch "forgetting" that it was using sequences. Scene-loaded image lists getting scrambled in Image Editor are at issue in both cases. (5)
UPDATE 04/26/03: As most LightWavers know, most of the mathematical layer blending modes, including Multiply which is used here, do not actually work the way they should. Multiply, for example, yielded rather washed out results in this hack versus actual radiosity. Fortunately, the inaccuracy turns out to be a gamma applied in the multiply operation. It has no business being there any more than the "slide" belongs in the Texture Channel... but it's there, and we have to work around it by adding a counter-gamma in Image Editor of 0.52 (noted in Step 4).
BIG Update 4/28/03! HDR Images now tested, and found to work with the new gamma correction! Much faster in scenes with soft environmental reflections (from specularity) than the raytraced alternative!
UPDATE 5/16/03: I hope you didn't come back here thinking that the new 7.5c release fixes the issues for this hack. Nope, it doesn't touch a one. Image Ed still crashes and still fouls up the listings. No change. *sigh*
TV screen area light
All images were rendered on a dual Athlon 1800+, Win2k/LightWave 7.5b, 1 thread, Enhanced Medium, no radiosity or raytracing, shadow maps only, except where noted. All render times are in minutes:seconds!
48 seconds (Enhanced Medium)
Note especially that the shadows have the correct color fringes.
TGALH Skydome with HDRI Lighting:
3:09, Enhanced High
7:05 rendering time (600x800 original rez), Enhanced High, Raytrace Reflections 2 recursions, 1 thread. (Wanzer model courtesy Tom Wright, HDR probes from Paul Debevec)
If you want to try this yourself, here it is (1 meg ZIP file). File includes the TV scene, a short sequence, and a working, stabilized skydome (new as of 4/29/03). Saved out of LightWave 7.5b.
Here are the steps to building the rig yourself. It's a big slab 'o' text, I know. The example scene in the .zip file is set up for a target motion blur of 100%, but the rig can be modified for other moblur targets.
1. Load your source image/sequence into your existing scene. (For simple black-and-white TV operation, skip ahead to step 4 and use the source sequence for all Texture Channel inputs. Also, use Texture Channel on the light's intensity instead of the color so you don't need to do it 3x for each light.)
WARNING: the color process cannot be done with DV .avi's because of Image Editor (instances of DV files just don't update, DV's cannot be duplicated, and you can't simply load more than one copy either). To handle those, render out a mini version of the .avi as a Targa or .jpg sequence. Do it low-resolution, like 160x120; this process uses few samples anyway, so small sequences will process much faster, especially when you are checking things with an OpenGL test screen. You should also blur it, to prevent the lights from landing on that one odd-colored pixel. DV will, however, work fine in black-and-white.
2. Clone it three times.
3. Apply a Texture Filter to each of the three clones, and apply a Value Texture to all three, with its blending mode set to Multiply.
4. For the first one, set the Value color to pure red; the second, pure green; the third, pure blue. This will separate out the RGB values from the source image, which we need to do because Texture Channel is monochrome/luminance only. ALSO: set the gamma for your three channels to 0.52. This corrects the error encoded into the Multiply operation. (Thanks to Joth at Wet Cement for showing me the Image Viewer blending modes for doing comparisons of images right inside LightWave.)
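A minimal sketch of what steps 3-4 accomplish. The exact exponent LightWave bakes into its Multiply mode is my assumption (roughly 1/0.52, inferred from the 0.52 counter-gamma that fixes it); the point is to show why the counter-gamma cancels the error:

```python
# Sketch: isolating one channel with a Value layer in Multiply mode,
# and why the 0.52 counter-gamma is needed. The spurious gamma that
# LW applies inside Multiply is assumed here to be approx. 1/0.52.

BUGGY_GAMMA = 1 / 0.52  # assumed gamma hidden in LW's Multiply mode

def buggy_multiply(pixel, mask):
    """LightWave's Multiply blend, including its spurious gamma."""
    return tuple((p * m) ** BUGGY_GAMMA for p, m in zip(pixel, mask))

def isolate_red(pixel):
    """Clone + pure-red (1,0,0) Value layer in Multiply mode, with the
    Image Editor counter-gamma of 0.52 applied to the image first."""
    corrected = tuple(p ** 0.52 for p in pixel)   # Image Editor gamma 0.52
    return buggy_multiply(corrected, (1.0, 0.0, 0.0))

r, g, b = isolate_red((0.25, 0.8, 0.5))  # a normalized RGB pixel
print(r, g, b)  # red comes through undistorted, green/blue go to zero
```

Because the counter-gamma is applied before the multiply, the two exponents cancel on the surviving channel (0.52 * 1/0.52 = 1), which is why the hack now matches actual radiosity instead of washing out.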
5. Create two nulls, the Group Carrier (which will perform the scanning or rotating of the lights, as well as positioning the whole assembly) and the Zipper (which will scan the texture, including the 1m/s compensation). The Zipper doesn't need to be parented to anything, it is merely a convenient control for the texture scanning and desliding process (see item 11 below).
6. Create the first light to be textured. Shadowmapped spot is preferred, depending on application. DON'T FORGET the limitations of these... if your rig has no shadows for some reason, check the shadowmap settings! Use shadows that are just soft enough to blend together at your particular motion blur settings, but not so low-resolution as to lose shadows in small detail areas.
7. Parent this light to the Carrier, and then position it as desired. For the examples seen here, I used a vertical column and scanned over the X axis. For tall screens/light sources, one could build a row of lights and scan up and down in the Y axis, but be sure to note that when setting up TextureChannel.
8. Apply Texture Channel to the red channel. Give it a texture, planar, default size. Set it to the "Zipper" null as reference object. Use the red clone image. Repeat the process for the other channels, using their respective images. Do this with as many lights as you expect to need, and space them out evenly along the scanner axis (in our case, Y), between 0.5 and -0.5 meters.
8A. Optional, but highly recommended: Clone EACH LIGHT once, then set the original group to Diffuse Only, and the second to Specular only. This allows you to control the spec and diffuse separately, which is a VERY GOOD IDEA (1)
9. Texture Channel works on luminance, treating the images as full-color when converting, so the blue and red image clones will be under-valued compared to green (it does a weighted averaging of the three channels, and we've made two of them solid black). To calibrate, use Scale on the modifier (in the Graph Ed)(2) so that all three colors top out at 100% where the test image has color values of 255. SMPTE color bars are good for this calibration -- they don't need to be at any particular height, just the same as each other. The ratios are approximately 1.9:1.0:5.2 (for R,G and B). I suggest using 24-bit images for calibration, then swap in HDR later, if you are going to try that.
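The calibration ratios in step 9 follow from standard luminance weighting. Assuming LightWave uses weights close to the Rec. 601 values (0.299, 0.587, 0.114; the actual weights it uses are my assumption), the scale factors relative to green land right at the quoted 1.9 : 1.0 : 5.2:

```python
# Sketch: deriving the R:G:B calibration ratios from luminance weights.
# Rec. 601 weights are assumed; LW's exact weights are undocumented.
WR, WG, WB = 0.299, 0.587, 0.114

# A red-only clone reads as luminance WR, so it needs a scale of 1/WR
# to hit 100% where the source red is 255. Normalized to green:
ratio_r = WG / WR   # ~1.96
ratio_g = 1.0
ratio_b = WG / WB   # ~5.15

print(round(ratio_r, 2), ratio_g, round(ratio_b, 2))
```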
10. The texture is essentially 1 meter by 1 meter in texture space, where 0,0 is in the middle and the extreme edges are +/- 0.5 or 500mm. Position the texture using its Y attribute to place the light as a specific "scan line" that corresponds to the light's position in the array you built in steps 6-7. The planned density of lights in your array will determine the distribution of the Y values, where the AA setting determines X. I used 9 lights for the above sample, and since I didn't want to hit the edge, I spaced them out in ninths: 4/9, 3/9, 2/9, 1/9, 0 and so on. Don't put any of them right at the edge (0.5), as it becomes unpredictable whether it will get its color from the top or bottom.
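The Y placement in step 10 can be generated mechanically. A sketch for the 9-light column used in the sample, spaced in ninths so nothing lands on the unpredictable +/-0.5 edge:

```python
# Sketch: texture-space Y positions ("scan lines") for a 9-light
# column, spaced in ninths so no light sits at the +/-0.5 texture edge.
n = 9
ys = [(n // 2 - i) / n for i in range(n)]
print(ys)   # runs 4/9, 3/9, 2/9, 1/9, 0, -1/9, ... down to -4/9
```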
11. Animate the "Zipper" in X to scan the texture image. As the texture is 1m wide, you might think that you just have to animate the Zipper at 1m per frame, but of course it is far more complicated than that. (3) In my setup, I had Zipper keys on X at frame 0, value = 0.5, and frame 30, value = 31.5, set to Repeat in both directions. Once done, shorten the animation graph to match your target motion blur by placing the second keyframe at: keyframe = 30*(moblur/100), where moblur is the percentage.
12. For a skydome, arrange the lights in a half circle around the Group Carrier, and spin them around, using a heading spin graph set up with 0 at frame 0, and 360 at the second keyframe. Set the latter for your moblur target as described for the Zipper. Be sure to multiply the intensities of these lights with the cosine of their angle with the X axis, else the top and bottom of your objects will receive too much light. For example, if the middle "equator" light is 100% (cos(0) = 1), multiply the intensities of the others by the cosine of their angular positions. There is no point in having a light directly at the "poles" because cosine 90 = 0.
NOTE: for HDRI skydome rigs, use Paul Debevec's HDRshop or similar utility to convert image probes to spherical images (from "angular map" to "latitude/longitude" form). Image probes projected using Image World are 180 degrees off AND mirrored. So, it's easier to use Textured Environment spherical maps, since you'll need to do that to create low-res, blurred spherical maps for the rig anyhow.
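The cosine weighting from step 12, sketched for lights spaced every 30 degrees of elevation (the spacing is just an example; use whatever your rig has):

```python
import math

# Sketch: intensity for each skydome light, weighted by the cosine of
# its elevation angle so the poles of your objects aren't over-lit.
# The "equator" light is the 100% reference; 30-degree spacing is an
# arbitrary example.
base_intensity = 100.0  # percent
for elev_deg in (-60, -30, 0, 30, 60):
    intensity = base_intensity * math.cos(math.radians(elev_deg))
    print(f"{elev_deg:+4d} deg -> {intensity:5.1f}%")
# No light goes at +/-90 degrees: cos(90) = 0, so it contributes nothing.
```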
For a TV screen/ "area light" (as in the sample image), simply set them up in a vertical line and use the Group Carrier to sweep them across the "screen" in a sawtooth pattern (repeat pre and post). Simply scale and rotate so the entire assembly "scans" the intended light source object.(4)
13. Use this formula to derive the best Motion Blur settings:
MOBLUR = MB - MB*(1 - (passes-1)/passes), where MB = your "intended" motion blur percentage and passes = the number of AA passes; this simplifies to MOBLUR = MB*(passes-1)/passes. Don't forget that you can change the rigs to use different motion blur settings by shortening their graphs to match; e.g. if your target moblur is 50%, take the existing assembly spin and Zipper scan graphs and move the second keyframes so the graph is 15 frames long instead of 30. For 33%, set it to 10 frames, etc. Once that is done, ONLY THEN do you adjust the motion blur using the formula.
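Step 13 worked through in code, with the formula reduced (it comes out to MB times (passes-1)/passes) plus the graph-shortening rule from step 11. The pass count per AA level depends on your LightWave version, so it's left as a parameter here:

```python
# Sketch of the Motion Blur settings math. "passes" is the number of
# AA passes for your chosen antialiasing level (look it up for your
# LW version; 9 below is only an illustrative value).

def moblur_setting(mb_target, passes):
    """MOBLUR = MB - MB*(1 - (passes-1)/passes) == MB*(passes-1)/passes."""
    return mb_target * (passes - 1) / passes

def second_keyframe(mb_target):
    """Shorten the 30-frame Zipper/spin graphs to the moblur target."""
    return 30 * (mb_target / 100)

print(moblur_setting(100, 9))   # e.g. 9 passes at a 100% target
print(second_keyframe(50))      # 50% target -> second keyframe at 15
```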
14. STABILIZE the scene: load a dummy object, texture it using your main image AND its RGB clones, save this object, and then hide it and save the scene. Never save the scene without this object in it, else it will be corrupted on reload.(5)
15. STAGGERING: If this rig is used to light a scene with fast-moving objects in it, the change in the lighting as the rig spins during the blurred motion will produce strange lighting effects on those objects. To minimize this, you can set up the rig to "stagger" the lighting passes in a "star" pattern instead of the circle pattern seen here, so that on each subsequent pass the light rig is on the opposite side of the object from the previous/next passes, instead of right next to them. To set up your rig for staggered operation, multiply the second keyframe value as follows: low AA, *2; medium AA, *4; high AA, *8; extreme AA, *16. I suggest making this change in this manner to distinguish it from the adjustments made for different motion blur ratios.
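The staggering adjustment as plain arithmetic, using the per-level multipliers from step 15:

```python
# Sketch: staggered second-keyframe values per AA level, applied on
# top of the already moblur-shortened graph length (step 15 multipliers).
STAGGER = {"low": 2, "medium": 4, "high": 8, "extreme": 16}

def staggered_keyframe(base_keyframe, aa_level):
    return base_keyframe * STAGGER[aa_level]

print(staggered_keyframe(30, "medium"))   # a 30-frame graph stretches to 120
```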
WARNING: at higher animation frames, the rig will begin to jitter because of the limitations of single-precision floating point numbers as used in LW. This gets to be a problem much faster with shorter moblur settings and with staggered settings. This can be dealt with by using stepped keyframes to "reset" back to 0 for each complete set of passes, but I have not tested that.
(1) Specularity was never meant to be used in this way, so you are going to have some fun getting results that are consistent with normal lighting. Glossiness, for example, only adjusts the size of the highlight, not its intensity, so a smaller highlight means less total highlight to be "stacked up" in this hack. So, if you lower glossiness to make your highlights softer, it will also blow them out because they are bigger. In addition, for reasons not entirely clear, I needed to adjust the specular value of the floor of my TV test up to 450% to get it to show up clearly, which made the spec highlight from normal lights just hideous. So, if you want to use "normal" surfaces with this lighting hack, splitting the spec and diffuse apart into separate light groups will go most of the way to getting decent spec highlights. It will also make mixed lighting a lot easier.
(2) The graph editor WILL LIE to you. It will show you something that looks like the waveform from the image sequence, but IT IS NOT ACCURATE when used with Texture Channel. Use it only as an overall indicator.
A way around it is to copy the texture channel modifier you want to observe, and attach it to something that's easy to observe in the layout window, like the X channel of a null. In fact, if you set that null to move vertically down the screen with two keyframes, and then turn on Show Motion Paths in your display options, you'll get a nice realtime graph of what is really going on. If you are working with SMPTE color bars, you get what looks like a waveform monitor shape.
(3) First, there is a 1m/s default animation or "slide" built into the damn Texture Channel, so the texture slides into negative X at 1m/s all on its own. Except maybe for Alan Hastings, no one has any idea why this is in there, or where the hell it comes from. So, instead of animating a sawtooth at 1 + 1/30 meters per frame, I animate the Zipper at 31 meters per *second*, which holds the pic still while keeping everything integer. (You can animate at 29m/s the other direction, if you need to scan the other way, but I handle scan direction with the Group Carrier sawtooth animation). To top it off, since the zero point is smack in the middle of the image, you need to offset both keys by 0.5m to set the start point at the edge of the image instead. That's where you can make "horizontal hold" adjustments; the third sample image above shows the wrong colors at the TV's left edge, so a minor tweak of the keyframes up or down by .01m or so would adjust the image slightly and get rid of that.
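The slide compensation above as arithmetic, assuming a 30 fps scene:

```python
# Sketch: why the Zipper animates at 31 m/s. Texture Channel drifts
# the texture toward -X at 1 m/s on its own, so to get a net scan of
# exactly 1 m (one texture width) per frame at 30 fps, you add that
# 1 m/s back in.
FPS = 30
SLIDE = 1.0                        # m/s, the built-in drift toward -X

net_scan = 1.0 * FPS               # want 1 m per frame -> 30 m/s net
zipper_speed = net_scan + SLIDE    # 31 m/s, as in the text
reverse_speed = -net_scan + SLIDE  # -29 m/s for scanning the other way

print(zipper_speed, reverse_speed)
```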
For skydomes, rotating the assembly works just as well as messing with the Zipper; in fact, you could get rid of the Zipper entirely and rotate the root assembly null once per second to get rid of the slide.
With Texture Channel, the Axis option in Graph Ed simply determines which of the three "directions" of texture space will be used. (When it's a planar image map, the mapping axis is made effectively irrelevant.) Because we are restricted to one axis, you cannot move your light around in three dimensions in the texture space. That wiped out my first plan, to texture the light by spinning it around a spherical projection, done to world co-ordinates. Fortunately, whether you are doing a skydome or a TV screen, it does not affect what you do here -- we will be "scanning" the image the same way for both. You simply use a spherical projection image (like Skytracer images baked to spherical) for skydomes.
(4) You can skip the scanning process by simply setting up a grid of lights, especially if you don't want the lights to be affected by motion blur settings, but you'd still have to use the Zipper to kill off that damn 1m/s slide.
(5) The secret to the stabilizer is that it introduces the image into the scene FROM AN OBJECT rather than from the scene itself. That seems to bypass the ImageEd bug. With the other bugs, in particular attempting to Replace an image with a different type (sequence to still, or to .avi), you are still going to have trouble. Even when merely swapping for the same type, you'll find that the image numeration for the clones will still screw up (1,2,3 becomes 1,1,1). Don't panic. The clones will update properly despite the numeration problem. BE SURE to go and update the textures on your stabilizer object (they won't catch the change and will still list the "old" clones), save out a new version, then the scene. The numeration issue should go away on reload (i.e. back to 1,2,3) and the lights should work fine.
UPDATE 4/29/03: Apparently, even with a stabilizer present, importing other images, even if they're also on objects, can bugger the list up in Image Ed. You'll know that happened when you load your scene and have six instances instead of three in ImageEd, where the first three are numbered normally but the second three are "instances of instances" [image.tga(2)(1), for instance]. Of course all six will be scattered about the light inputs, and they are hosed again. Your only hope, other than rewiring all 27 (or so) inputs, is to strip out the light rig and stabilizer, resave the scene, reload, and then re-import the rig from your source scene (you did build one of those first, right?). Try to import the TGALH rig last.
When changing out source images, always follow this sequence: 1. Swap image and check instances to be sure they look right. 2. Save new stabilizer. 3. Save scene.
Jim May/Court Jester
Contact The Jester