Tiling is a common technique to texture terrains, but it suffers from repetition, as it IS the same texture repeated a lot of times. How can this be improved? You could add multiple textures, but that doesn't scale well as the number of textures increases. So what's the other option? Use the same texture, but make it LOOK different.

I accomplished that by creating a small 3D noise texture ( 16 x 16 x 16 ), which I then sample in the pixel shader using the pixel's world space position and a wrap sampler. By combining multiple octaves, you get a nice noise. That noise is used as a mask to blend the same texture sampled at different scales.
And there you go! Repetition is broken.
It could be improved a lot though. Some really good techniques are shown in GPU Gems 3, in a chapter about voxel-based terrains. Also, that mask can be used to blend two different textures to achieve even more detail.
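As a sketch, the masked blend in the pixel shader could look like this ( the texture names, octave frequencies and tiling scales here are placeholders of mine, not the engine's actual values ):

    // Sampled with a wrap sampler so the small 16x16x16 noise tiles over the world
    Texture3D NoiseTex;
    Texture2D DiffuseTex;
    SamplerState WrapSampler;
    
    float3 TerrainColor(float3 worldPos, float2 uv)
    {
        // A few octaves of the noise texture, each at a different frequency
        float mask = NoiseTex.Sample(WrapSampler, worldPos * 0.05).r * 0.5
                   + NoiseTex.Sample(WrapSampler, worldPos * 0.17).r * 0.3
                   + NoiseTex.Sample(WrapSampler, worldPos * 0.61).r * 0.2;
    
        // Same texture, two different tiling scales, blended by the mask
        float3 nearColor = DiffuseTex.Sample(WrapSampler, uv * 8.0).rgb;
        float3 farColor  = DiffuseTex.Sample(WrapSampler, uv * 1.0).rgb;
        return lerp(nearColor, farColor, mask);
    }

The same `mask` value is what you would reuse to blend two different textures instead of two scales of one.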

Here's a screenshot of the mask and of the final texture in my engine ( Fusion Engine ). Note that there's just one texture bound in the pixel shader.
Perlin, is that you?

Imagine this with some more textures...
I plan to add multiple textures in the near future ( next few days )
Here are some concept models ( and some not-so-concept ones ) of guns for the Nanoattack demo.

"...modern guns work by either electromagnetic propulsion ( aka Gauss Rifle ) or plasma acceleration. To provide that amount of energy, miniaturized Casimir Reactors are used. The Casimir Reactors are devices that harvest the force generated by quantum fluctuations in the vacuum ( the so called Casimir Effect ), which is in fact tapping into the zero-point energy, providing an infinite amount of energy. While in theory these guns could shoot forever, the radiation generated by the harvesting of zero-point energy needs to be filtered, thus limiting the safety of the gun to the saturation time of the filter ( this can, however, be bypassed at the expense of high radiation doses ). Another big improvement is the use of Unified Bilateral Image Projectors (uBIP) to emulate the main parts of the gun with a holographic representation. This reduces the size of the guns, while allowing greater customization options..." Introduction to modern firearms, Stalkers Information Database, 2351



I always try to prove that Blender is at least as powerful as the other software around. Shadow Box, a nice feature of ZBrush that allows you to create a mesh from 3 projections ( x, y, z ), can be easily done by combining some modifiers. In a nutshell:
    > Create 3 planes and paint the textures
    > Add a subdivision and a displace modifier to each plane
    > Add some intersect boolean modifiers
    > *optional* Add a remesh and a laplacian smooth modifier to clean things up
Here's a screen of that:

and here's the above .blend file

The next step? Merge all this into a script, so it's more useful.
Here's a model I found in the depths of my hard drive. It's a high-poly sculpt from before the days of Dynamic Topology in Blender, so there are a lot of polys. A lot.
Rendered in Cycles at 200 samples. No normal maps were used; every detail is sculpted. In short: old model, new render.

Screen:

In HDR rendering ( and LDR too ) it is common to add bloom to the scene. To create it, you mask the bright parts of the scene, blur them ( NVIDIA has a nice constant-time Gaussian blur ), and combine the result with the scene, normally by adding them.
A lot of brightpass filters exist, so I'm just gonna post mine here:

   // color = HDR scene color; Power, Scale and Bias are tweakable constants
   float4 bright = saturate((-color + pow(color, Power) * Scale) / Bias);

Screenshots:

Below is a plot of the function, with adjustable parameters. I use 2.58 for Power, 1.58 for Scale and 1.13 for Bias, but feel free to experiment. One of the graphs is the saturated function and the other is the unsaturated one.
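For context, here's a minimal sketch of where the brightpass fits in the bloom chain ( SceneTex, BlurredBrightTex and LinearSampler are placeholder names, not from my engine ):

    // 1) Brightpass: run the filter above over the HDR scene
    float4 BrightPassPS(float2 uv : TEXCOORD0) : SV_Target
    {
        float4 color = SceneTex.Sample(LinearSampler, uv);
        return saturate((-color + pow(color, Power) * Scale) / Bias);
    }
    
    // 2) Blur the brightpass target ( e.g. a separable Gaussian, two passes )
    
    // 3) Composite: additively blend the blurred result over the scene
    float4 CompositePS(float2 uv : TEXCOORD0) : SV_Target
    {
        float4 scene = SceneTex.Sample(LinearSampler, uv);
        float4 bloom = BlurredBrightTex.Sample(LinearSampler, uv);
        return scene + bloom;
    }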

A while back, when working on my first engine, Atom Engine ( DirectX 9 ), I played a bit with shaders for reflections. A fast but extremely hacky method to get them is to grab the pixel coordinates ( in screen space ), negate them in the y or x axis, and sample the scene there, adding the result to the surface color.
In practice, it works quite well.
You can extend it by adding a lot of blur, and also by using the surface normal to select the axis to negate. For example, negate both the x and y axes and use the reflection vector to mix the two samples.

Below are a couple of screens that just negate the y axis and add a little blur.
It works best for planar surfaces.
The black regions are caused by the texture addressing mode, which should be set to wrap.

Shader snippet ( HLSL ) for the simple case:
    // sspixel = pixel coordinates in screen space ( normalized 0 to 1 )
    float2 newPixel = sspixel;
    newPixel.y = 1.0 - newPixel.y; // mirror vertically
    
    // DX 9 style
    float3 reflection = tex2D(texSampler, newPixel).rgb;
    // DX >= 10 style
    //float3 reflection = colTexture.SampleLevel(texSampler, newPixel, 0).rgb;
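The axis-mixing extension mentioned above could be sketched like this ( `viewDir` and `normal` are assumed inputs, and weighting by the reflection vector's components is just one possible choice ):

    // Mirror the screen coordinates on each axis separately
    float2 mirrorY = float2(sspixel.x, 1.0 - sspixel.y);
    float2 mirrorX = float2(1.0 - sspixel.x, sspixel.y);
    
    float3 reflY = tex2D(texSampler, mirrorY).rgb;
    float3 reflX = tex2D(texSampler, mirrorX).rgb;
    
    // Use the reflection vector to decide which mirrored sample dominates:
    // mostly-vertical reflections favor the y-mirrored sample
    float3 R = reflect(-viewDir, normal);
    float w = abs(R.y) / (abs(R.x) + abs(R.y) + 0.001);
    float3 reflection = lerp(reflX, reflY, w);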
