Random Thoughts – Randocity!

Make LuxRender render faster

Posted in 3D Renderings, Daz Studio by commorancy on March 2, 2015

In addition to writing blogs here at Randocity, I also like creating 3D art. You can see some of it off to the right side of the screen in the Flickr images. I point this out because I typically like to use Daz Studio to do my 3D work. I also prefer working with the human form over still life, but occasionally I’ll do a still life, landscape or some other type of scene. Today, I’m going to talk about a rendering engine that I like to use: LuxRender. More specifically, how to get it to render faster. You can read more about it at www.luxrender.net. Let’s explore.

3Delight and Daz Studio

Daz Studio is what I use to compose my scenes. Built into Daz Studio is a rendering engine named 3Delight. It’s a very capable biased renderer. That is, it prefers to use lighting tricks and internal shortcuts to do its rendering work. While 3Delight does support global illumination (a.k.a. GI or bounced lighting), it doesn’t do it as well or as fast as I would like. When GI is turned on, it takes forever for 3Delight to calculate the bounced light on surfaces. Unfortunately, I can’t wait that long for a render to complete. So, I turn to a more capable renderer: LuxRender. Keep in mind that I do still render in 3Delight and I am able to get some very realistic scenes out of it, too. But those scenes have a completely different look than Lux, and they typically take a whole lot longer to set up (and a ton more lights).


What’s different about Lux? The developers consider it an unbiased renderer; that is, it is physics based. In fact, all renderers attempt to use physics, but Lux attempts to apply physics to all light sources. What’s the end result? Better, more accurate, more realistic lighting… and lighting is the key to making a scene look its best. Without great lighting, the objects in a scene will look dull, flat and without volume. It would be like turning the lights off in a room and attempting to take a photograph without a flash. What you get is a grainy, low light, washed out and flat image. That’s not what you want. For the same reason you use a flash in photography, you want to use LuxRender to produce images.

Now, I’m not here to say that LuxRender is a perfect renderer. No no no. It is, by far, not perfect. It has its share of flaws. But, for lighting, it can produce some of the most realistically lit scenes I’ve found from any 3D renderer. Unfortunately, this renderer is also slow. Not as slow as 3Delight with GI enabled, but definitely not fast by any stretch. Though, the more light you add to a scene, the faster Lux renders.

However, even with sufficient lighting, there are still drawbacks to how fast it can render. Let’s understand why.

LuxRender UI

The developers who designed LuxRender also decided that it needed a UI: a tool that allows you to control and tweak your renders (even while they’re rendering). I applaud what the LuxRender team has done with the UI in terms of image tweaking functionality, but for all of the great things in the UI, there are not-so-smart things done on the rendering side. As cool as a tweakable render-in-progress is, it should never take away from how fast the renderer can render. Unfortunately, it does.

Let’s step back a minute. When you use Daz Studio, you need a bridge to operate Lux. It needs to be able to export the scene into a format that Lux can parse and render. There are two bridges out there. The first is Reality. The second is Luxus. I’ll leave it to you to find the bridge that works best for you. However, Reality has versions for both Daz Studio and Poser. So, if you have both of these, you can get each version and have a similar experience between the two different apps. If you’re solely in the Daz world, you can get Luxus and be fine. Once you have a bridge and export a scene to LuxRender, that’s when you’ll notice a big, glaring problem while rendering.

Render Speed and LuxRender UI

When I first began using LuxRender, one thing became very apparent. LuxRender has this annoying habit of stopping and starting. Because my computer’s fans speed up when the CPU is put under load and slow down when it’s not, I can hear this behavior: the fans spin up and spin down at regular intervals. I decided to investigate why. Note, a renderer should be capable of running all of the CPU cores at full speed until the render has completed. 3Delight does this. Nearly every other rendering engine does this, but not LuxRender.

Here’s part of the answer. There are three automatic activities inside of the LuxRender UI while rendering:

  1. Tonemapping
  2. Saving the image to disk from memory
  3. Writing the FLM resume file

All three of these activities outright halt the rendering process, sometimes for several minutes. This is insane. Now, let’s understand why this is insane. Most systems today offer 4 or more cores (8 or more hyperthreaded cores). Since you have more than one core, it makes no sense to stop all of the cores just to do one of the above tasks. No. Instead, the developers should have dedicated one of the cores to these housekeeping tasks, leaving the rest of the cores to continue rendering all of the time. The developers didn’t do this. Instead, they stop all cores, use one core (or less) to write the file to disk or update the GUI display and then wait and wait and wait. Finally, the cores start up again. Over a long render, this non-rendering time adds up to at least 5 minutes. That’s 5 minutes where zero rendering is taking place. That’s way too long.
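To make the complaint concrete, here’s a minimal sketch of the design I’m describing. This is purely illustrative Python, not LuxRender’s actual code: render workers keep every core busy while a single snapshot thread copies the buffer and does the slow tonemap/write off to the side, so rendering never pauses for I/O.

```python
import threading
import time

def compute_sample():
    time.sleep(0.001)  # stand-in for real path-tracing work
    return 1

def tonemap_and_write(snapshot, written):
    # Stand-in for tonemapping plus writing the image/FLM file to disk.
    written.append(len(snapshot))

def run_render(n_workers=4, duration=0.3, snapshot_interval=0.1):
    buffer, written = [], []
    lock = threading.Lock()
    stop = threading.Event()

    def render_worker():
        while not stop.is_set():
            s = compute_sample()          # heavy work happens outside the lock
            with lock:
                buffer.append(s)          # only this brief append is serialized

    def snapshot_worker():
        while not stop.is_set():
            time.sleep(snapshot_interval)
            with lock:
                snap = list(buffer)       # quick copy; workers barely pause
            tonemap_and_write(snap, written)  # slow part runs with no lock held

    threads = [threading.Thread(target=render_worker) for _ in range(n_workers)]
    threads.append(threading.Thread(target=snapshot_worker))
    for t in threads:
        t.start()
    time.sleep(duration)
    stop.set()
    for t in threads:
        t.join()
    return len(buffer), written

samples, writes = run_render()
```

The point of the sketch: the workers never stop to wait on the snapshot; they only pause for the microseconds it takes to copy the buffer.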

How do I get around this issue? Well, I don’t entirely. If you want to use LuxRender, the first thing to do is run over to luxrender.net and file a complaint about this problem. The second is to set the tonemapping interval, the image write-to-disk interval and the FLM write interval each to 3600 seconds. That means LuxRender will only save to disk every hour, only update the screen every hour and only save a resume file every hour. In other words, LuxRender gets one hour of solid render time without interruptions from these silly update processes. This is especially important when you’re not even looking at the LuxRender UI.
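For reference, these intervals end up as parameters on the Film block of the exported .lxs scene file, so you can also check them there after exporting from Reality or Luxus. The parameter names below are my reading of the fleximage Film settings; treat them as an assumption and verify against your own exported scene file:

```
# Hypothetical excerpt of an exported .lxs scene file -- verify the
# parameter names against your own Reality/Luxus export.
Film "fleximage"
	"integer displayinterval" [3600]   # tonemap/UI refresh: once per hour
	"integer writeinterval" [3600]     # image write to disk: once per hour
	"integer flmwriteinterval" [3600]  # FLM resume-file write: once per hour
```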

Note that many applications set these intervals as short as a few seconds. That’s stupid considering the above. Yeah, we all want instant gratification, but I want my image to render at its absolute fastest. I don’t need to see every single update interval in the UI. No, if I want to see an update, I can ask the UI for one when I bring it to the front. Automatically updating the UI at 10-second intervals (and stopping the rendering to do it) is just insane and a waste of time, especially when I can simply refresh the UI myself manually. In fact, there is absolutely no need for an automatic refresh of the UI ever.

Network Rendering

The second way to speed up rendering is to use other systems you may have around the house. They don’t necessarily need to be the fastest things out there. But even adding one more machine to the rendering pool makes a big difference in how quickly your image completes. This is especially important if you’re rendering at sizes of 3000 by 3000 pixels or higher.
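In practice, network rendering means running a LuxRender node in server mode on each extra machine and pointing your main machine at those servers. The flags below are from memory and the addresses are placeholders, so confirm against luxconsole’s own help output before relying on them:

```
# On each helper machine, start a render node in server mode:
luxconsole -s

# On the main machine, render the exported scene and farm samples
# out to the helper nodes as well as the local cores (example IPs):
luxconsole -u 192.168.1.20 -u 192.168.1.21 myscene.lxs
```

Each node contributes samples to the same image, so render time drops roughly in proportion to the total compute you add.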

System Specs and Lux

Of course, buying a capable system will make rendering faster. To render at your absolute fastest in Lux, it’s a given that you need fast CPU cores, generous CPU cache and large amounts of RAM. So, get what you can afford, but make sure it has a fair number of cores, a reasonable L1 and L2 cache and at least 16GB of RAM (for 3k by 3k or larger images). If you add one or more GPUs to the mix, Lux will throw that processing power on top and render even faster. But this doesn’t solve the problem described above. Even with 32 cores, 128GB of RAM and the fastest L1 and L2 caches, you still haven’t solved the stopping and starting problem with rendering.

If you want to dabble in LuxRender, you should run over to luxrender.net and file a complaint about this cycling problem. In this day and age of multiple cores and multithreading, stopping the render process to save a file or update a UI is absolutely insane. To get your fastest renders, set the update intervals to 3600 seconds. Note, though, that if LuxRender crashes during one of those one-hour intervals, you will lose all of that interval’s work. I haven’t had that happen while rendering, though.

So, that’s how you get your fastest render out of LuxRender.


3D TV: Flat cutouts no more!

Posted in computers, entertainment, movies, video gaming by commorancy on February 18, 2012

So, I’ve recently gotten interested in 3D technology. Well, not recently exactly; 3D technologies have always fascinated me, even back in the blue-red glasses days. However, since there are new technologies that better take advantage of 3D imagery, I’ve recently taken an interest again. My interest was additionally sparked by the purchase of a Nintendo 3DS. With the 3DS, you don’t need glasses: the screen uses small louvers to block a different image from each eye. This is similar to lenticular technology, but instead of lenses it uses those louvers. Not to get into too many technical details, the technology works reasonably well, but requires viewing the screen at a very specific angle or the effect breaks down. For portable gaming, it works ok, but because of that very specific viewing angle, it breaks down further when the action in the game gets heated and you start moving the unit around. So, I find that I’m constantly shifting the unit back into the proper position, which is, of course, very distracting when you’re trying to concentrate on the game itself.

3D Gaming

On the other hand, I’ve found that with the Nintendo 3DS, the games appear truly 3D. That is, the objects in the 3D space appear geometrically correct. Boxes appear square. Spheres appear round. Characters appear to have the proper volumes and shapes and move around the space properly (depth-perception wise). In fact, 3D technology marries very well with 3D games. Although, because of the specific viewing angle, the jury is still out on whether it actually enhances the game play enough to justify it. However, since you can turn the 3D effect off or adjust it stronger or weaker, you can do some things to reduce the viewing-angle problem.

3D Live Action and Films

On the other hand, I’ve tried viewing 3D shorts filmed with actual cameras. For whatever reason, filmed 3D doesn’t work at all. I’ve come to realize that while 3D gaming calculates exact vectors in space, a camera captures just two 2D images shot slightly apart. So, you’re not really sampling enough points in space, just marrying two flat images taken a specified distance apart. As a result, this 3D doesn’t truly appear to be 3D. In fact, what I find is that filmed 3D ends up looking like flat parallax planes moving in space. That is, people and objects end up looking like flat cardboard cutouts, placed in space at specified distances from the camera. It kind of reminds me of a moving shadowbox. I don’t know exactly why this happens, but it makes filmed 3D far less than impressive; it appears fake and unnatural.
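The cutout effect is at least consistent with basic stereo geometry. A camera pair encodes depth only through horizontal disparity, and that disparity falls off quickly with distance, which can be sketched as:

```latex
% A point at depth Z, seen by two cameras with focal length f and
% baseline B, projects with horizontal disparity
\[ d = \frac{fB}{Z} \]
% Differentiating gives the depth change needed to produce a
% resolvable disparity change \Delta d:
\[ \Delta Z \approx \frac{Z^{2}}{fB}\,\Delta d \]
% So for a distant subject, the disparity difference between the front
% and back of one object can drop below what the capture or display can
% resolve, and the whole object collapses onto a single depth plane:
% the "cardboard cutout" look.
```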

At first, I thought this to be a problem with the size of the 3DS screen.  In fact, I visited Best Buy and viewed a 3D film on both a large Samsung and Sony monitor.  To my surprise, the filmed action still appeared as flat cutouts in space.  I believe this is the reason why 3D film is failing (and will continue to fail) with the general public.  Flat cutouts that move in parallax through perceived space just doesn’t cut it. We don’t perceive 3D in this way.  We perceive 3D in full 3D, not as flat cutouts.  For this reason, this triggers an Uncanny Valley response from many people.  Basically, it appears just fake enough that we dismiss it as being slightly off and are, in many cases, repulsed or, in some cases, physically sickened (headaches, nausea, etc).

Filmed 3D translated to 3D vector

To resolve this flat cutout problem, film producers will need to add an extra step to their process to make 3D films actually appear 3D when using 3D glasses. Instead of just filming two flat images and combining them, the entire filming and post-processing pipeline needs to be reworked. The 2D images will need to be mapped onto 3D surfaces in a computer. Then, these 3D environments are ‘re-filmed’ into left- and right-eye information from the computer’s vector information. Basically, the film will be turned into 3D models and filmed as a 3D animation within the computer, effectively turning the film into a 3D vector video game cinematic. Once mapped into a computer’s 3D space, this should immediately resolve the flat cutout problem: the scene is now described by points in space and can be captured properly, much the way a video game works. The characters and objects now appear to have volume along with depth in space. Some care will be needed in the 2D-to-3D conversion, as it can look bad if done wrong. But, done correctly, this will completely enhance the film’s 3D experience and reduce the Uncanny Valley problem. It might even resolve some of the issues causing people to get sick.
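A toy version of that ‘re-filming’ step can be sketched with depth-image-based rendering: once each pixel has a depth value (i.e., the frame has been mapped into 3D space), a new viewpoint is synthesized by shifting each pixel by its own disparity, so surfaces gain internal volume instead of moving as one flat plane. This is a simplified illustration with a made-up depth map, not production pipeline code:

```python
import numpy as np

def reproject_view(image, depth, focal, baseline):
    """Synthesize a horizontally offset viewpoint from one image plus a
    per-pixel depth map. Each pixel moves by its own disparity
    d = focal * baseline / depth, so near surfaces shift more than far
    ones -- the per-pixel depth is what prevents the cutout look."""
    h, w = depth.shape
    out = np.zeros_like(image)
    disparity = np.rint(focal * baseline / depth).astype(int)
    for y in range(h):
        for x in range(w):
            nx = x + disparity[y, x]
            if 0 <= nx < w:
                # Last writer wins; a real DIBR pass would z-buffer
                # occlusions and fill the holes left behind.
                out[y, nx] = image[y, x]
    return out

# Tiny demo: a 4x4 frame where every pixel sits at depth 2 shifts
# uniformly by one pixel; a varying depth map would shift each
# surface by a different amount, restoring volume.
frame = np.arange(16, dtype=float).reshape(4, 4)
shifted = reproject_view(frame, np.full((4, 4), 2.0), focal=2.0, baseline=1.0)
```

With a flat depth map the result is exactly the cardboard-cutout shift the post complains about; feeding in real per-pixel depth is what makes the difference.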

In fact, it might even be better to store the film into a format that can be replayed by the computer using live 3D vector information rather than baking the computer’s 3D information down to 2D flat frames to be reassembled later. Using film today is a bit obsolete anyway.  Since we now have powerful computers, we can do much of this in real-time today. So, replaying 3D vector information overlaid with live motion filmed information should be possible.  Again, it has the possibility of looking really bad if done incorrectly.  So, care must be taken to do this properly.

Rethinking Film

Clearly, to create a 3D film properly, as a filmmaker you’ll need to film the entire scene with not just 2 cameras, but at least 6-8, arranged in either a full 360-degree ring or at least a 180-degree arc. You’ll need that much information for the computer to build a believable model, one that can be ‘re-filmed’ with cameras placed anywhere in this 3D space. Once the original filmed imagery is projected onto the extruded 3D surfaces and the film is animated onto them, the 3D will come alive and will really appear to occupy space. So, when translated to a 3D version of the film, it no longer appears like flat cutouts and now appears to have true 3D volume.

In fact, it would be best to have a computer translate the scene you’re filming into 3D information as you are filming.  This way, you have the vector information from the actual live scene rather than trying to extrapolate this 3D information from 6-8 cameras of information later.  Extrapolation introduces errors that can be substantially reduced by getting the vector information from the scene directly.

Of course, this isn’t without cost because now you need more cameras and a filming computer to get the images to translate the filmed scene into a 3D scene in the computer.  Additionally, this adds the processing work to convert the film into a 3D surface in the computer and then basically recreate the film a second time with the extruded 3D surfaces and cameras within the 3D environment.  But, a properly created end result will speak for itself and end the flat cutout problem.

When thinking about 3D, we really must think truly in 3D, not just as flat images combined to create stereo.  Clearly, the eyes aren’t tricked that easily and more information is necessary to avoid the flat cutout problem.

3D Television: Eye candy or eye strain?

Posted in entertainment, technologies by commorancy on March 12, 2010

For whatever reason, movie producers have decided that 3D is where it’s at.  The entertainment industry has tried 3D technologies in film throughout the last 40 years and, to date, none have been all that successful.  The simple reason, side effects that include eye strain and headaches.  These are fairly hefty side effects to overcome.  Yet, here we are again with a barrage of new 3D films hitting the big screen.

In answer to all of those new films actually shot in 3D, television makers have decided to try their hand at producing home 3D technologies. The problem with any current 3D technology is that it’s based on a simplistic view of how 3D works: each eye sees a different image. Yes, that’s true. However, it’s hard to provide a quality 3D experience using a flat screen where each eye merely gets a different image. There’s more to 3D than that. So, while the each-eye-sees-a-different-image approach does work, it doesn’t seem realistic and, in a lot of other ways, it doesn’t really work.


Over the years, IMAX has had its fair share of 3D features. Part of the appeal of IMAX is its very large screen. You would think that watching 3D on that very large screen would be an astounding experience. The reality is far different. Once you don the special polarized 3D glasses, that huge screen is seemingly cut down to the size of a small TV. The 3D imagery takes care of that. I’m not sure why this happens, but 3D definitely makes a very large screen seem quite small. So, even though the screen is huge, the 3D kills the sense of scale you’d have watching the same imagery flat. Effectively, the screen seems about half or a quarter of its real size compared to watching the same feature flat.

Worse, transitions that work when the film is flat no longer work in 3D. For example, fades from one scene to another are actually very difficult to watch in 3D. The reason is that while this transition is very natural in a flat film, it’s a very unnatural transition in 3D. Part of the problem is that the sudden 3D depth changes confuse the senses and worsen eye strain. Basically, you watch 3D to make the entire film seem more real, but some creative elements don’t function properly in 3D. So, that fade I mentioned makes the film appear strange and hard to watch. While that fade works perfectly flat, it just doesn’t work at all in 3D. Filmmakers need to take these subtle, but important, differences into account.

Just like filmmakers have had to make some concessions to the HD format (every blemish and crease on clothing is seen), the same must be said of 3D features.

Velvet Elvis

Unfortunately, 3D features haven’t really come much farther than the early efforts, like Jaws 3D. So, filmmakers still employ such unnecessary tactics as poking spears at the camera, sending flying objects toward the camera or hovering things close to the camera. It’s all playing to the 3D and not to the story. Such tactics are trite and clichéd… much like a velvet Elvis painting. Film producers need to understand not to employ these silly, trite tactics to ‘take advantage’ of 3D film making. There is no need for any of it. Let the chips fall where they may and let the film’s 3D do the talking. You don’t need to add flying spears or have things thrown toward the camera. If you didn’t need to do this in 2D, you don’t need to do it in 3D.

Emerging technologies

Television manufacturers are now trying their hand at producing 3D TVs. So far, the technologies are limited to polarized screens or wearing glasses. While this does work to produce a 3D effect, it has the same drawbacks as the big screen: eye strain and headaches. So, I can’t see these technologies becoming commonplace in the home until a new technology emerges that requires no glasses and produces no eye strain. For now, these television makers are likely to end up sitting on many of these novelty devices. Worse, the same effect that makes the IMAX screen seem half its size is also present on televisions. So, while you may have that 60″ TV in your living room, donning a pair of 3D glasses and watching a 3D feature will effectively cut that huge screen to half its size or less. You may feel like you’re watching that 3D feature on a 20″ screen.

Going forward, we need a brand new, paradigm-shifting 3D technology. A new technology that does not rely on glasses or polarization. A new technology that can actually create 3D images in space rather than forcing the eyes to see something that isn’t really there. It would be preferable to actually create 3D imagery in space: something that appears real and tangible, but isn’t. Holograms come to mind, but we haven’t been able to perfect that technology yet… especially not projected holograms. Once we have a technology on par with Star Trek’s Holodeck, then we might begin to have immersive 3D experiences that feel and seem real.


For me, the present state of 3D is a novelty and produces too many negative effects. Because it is new, it will win some support, but overall I think people will still prefer to watch flat TV and movies because they cause far less eye strain. So, I fully expect that this resurgence of 3D will dwindle to nothing within the next 2 years. In fact, in 5 years’ time, I’d be surprised if any TV makers are still producing the current 3D TVs, and filmmakers will have dropped back to flat features, citing the lack of support. Effectively, I see this 3D resurgence as similar to the failed quadraphonic technologies of the ’70s.

State of the Art: What is art?

Posted in art, images, render, terragen by commorancy on May 17, 2009

This debate has raged for many, many years and will continue to rage for many more. In certain internet digital art communities, this debate is again resurfacing. Some people put forth that works made using found digital materials, for example, 3D models available through sites such as Daz3d.com and ContentParadise.com, aren’t art when rendered through tools like Poser. Well, I put forth this response to those people.

What exactly is art?

That is an age-old question. It doesn’t matter if you’re talking about ‘old’ mediums such as paint, canvas, pencil, clay or metal, or ‘new’ mediums such as Poser, Daz, Bryce, Photoshop, ZBrush or even Maya. The question is still valid and still remains unanswered. Basically, the answer mostly lies in the eye of the beholder. Thus, whether or not something is art is all based on opinion. Some people never believed that Marcel Duchamp’s urinals were art. Some people never believed that Jackson Pollock’s paint splatters were art. Some people never believed that Robert Rauschenberg’s mixed-medium works (including tires and other found objects) were art. Some people still don’t. But does that make them not art? No. Clearly, these men have been recognized as artists in art history. Thus, what they created is art.

The fact is, controversy has always surrounded new forms of art and new art mediums. There have been many artists who have taken existing pre-made structures and turned them into ‘art’.  In digital media, this is no different than inserting an existing Poser figure and using it in any given digital artwork.  Simply using Poser and a Poser figure does not necessarily make the work less profound as art.  

Creating things from scratch

For those who believe that you must create everything from scratch in 3D, I put forth this argument. Most artists who paint today do not make their own paints, construct their own brushes or create their own canvas (down to spinning the yarn and looming it into fabric). If artists were required to create everything simply to ‘create art’, not much art would be created. Most people would spend their time creating the tools they need to create the art. Should you be required to create the graphite and shape the wood just to turn it into a pencil? No. Sure, I admire those who want to create everything from scratch and I applaud them. But that doesn’t mean every artist needs to work that way.

If you want to take this argument further, then you should be required to write your own Photoshop application each time you want to modify an image.  Clearly, this is silly and no one would think this.  So, why is it that people believe that you must create every object you place in a 3D realized world and rendered image?  You don’t create every object you put in your home, why should you have to create every object you put in 3D world you create?  Again, this argument is completely silly.

Creating 3D objects

Yes, creating 3D objects using a modeling program is an art in itself.  It takes a lot of patience and consumes time creating these objects.  Again, I applaud these content producers. And I agree that it does make those objects art, but only in the sense of industrial design (very much like the camera or a chair).  The object is nothing, however, without a showplace.  Like the camera, if an object isn’t used in some way and no one ever sees it, then it’s not time well spent creating the object.  Thus, without a showplace, the object is not an artwork.  It is art in the sense of industrial design, functional art.  In this case, though, 3D objects only have functions when used in the context of creating scenes or together with other objects.  So, creating a 3D version of  a Ferrari F40 is great, but as an object on its own it’s really not a piece of artwork (other than industrial design).  However, this F40 could be used within a larger scene combined with other objects to create an artwork.  Then, the object becomes much more than its industrial design heritage.

That’s not to say I don’t respect and admire those who create 3D objects.  I do.  I applaud them and encourage them to create more.  Without such objects, artists won’t have the necessary things to create the imaginative scenes they can envision.

Art is what you make of it

Not to be overly redundant… ok, let’s… Art is what you personally make of it. Good art conveys emotion, makes a statement and usually motivates the viewer into a reaction (good or bad). However, whether a specific work is good or bad art is for each person to decide. A 3D object, its texture and bump maps and all of its underlying components can’t, on their own, evoke a reaction or make a statement. Only when these objects are placed within an imaginative scene do they take on a new life and become much more than the sum of their parts.

Probably the single deciding factor for whether a specific piece of art is ‘good’ or ‘bad’ is whether or not the work was intentional (i.e., makes a statement about something). Thus, if someone takes a listless figure, does nothing to it and plops it in the middle of a scene seemingly uncreatively with a few simple lights, the deciding factor is whether the artist did this intentionally to make a point about some subject matter. Intent is the single biggest factor in any artwork. As an artist, you have to understand this single aspect. Everything you put into a scene must be intentionally placed there, and placed there for a reason. If the scene does not seem intentionally constructed, then the artwork has failed as artwork. An artist might copy those who create ‘amateur’ works as a statement about amateur artwork, which then becomes artwork in itself. It’s the statement itself that makes it art.
