TITLE: 100 Days of Creative Coding
CATEGORY: Creative Coding
DATE: January, 2016 - Present
100 Days of Making is a new class at ITP that focuses on sustaining a daily art practice for a period of 100 days. During my time at ITP I've developed a keen interest in Creative Coding. This interest began with Processing and has since expanded to Cinder, OpenFrameworks and WebGL / Three.js. With graduation drawing near, and my proposed Thesis topic not incorporating much Creative Coding, I've decided to focus my 100 day challenge on building a solid foundation to carry forward after ITP.
For many of my projects at ITP, I've found myself diving in at the deep end and covering different topics as they were needed. I'm also the first to admit that I'm often too ambitious in what I hope to achieve, and would be more successful if I focused on smaller, achievable goals. For my 100 day challenge, I plan to start with the basics and move forward in an iterative, systematic fashion, in the hope that this fills the gaps in my existing knowledge and provides a solid foundation for covering more advanced topics in the future. The main goal for the challenge is to write some code every day that produces a visual output. Initially these will be simple, but hopefully as I cover more conceptually complex topics, the visuals will follow suit. Every day I'll be posting the visual output to Tumblr, and will write about what I've learned and/or observations for that particular day on this page. The total collection can be viewed by clicking on the gallery link below.
My rough plan is as follows (I'll update this as I go):
- Complete the OpenGL in Cinder Guide
- Complete the tutorials on learnopengl.com
GALLERY: Click Here
Created my first custom environment using a cubemap. It's much the same process as yesterday, except this texture is loaded as a single .jpg file with the 6 tiles laid out like a horizontal cross. Because there's a lot more pixel data, it's going to get harder and harder to produce perfect-loop gifs.
Following on from yesterday, this time using a cubemap texture to create an environment map for various geometry. The reflection is achieved by determining the reflection vector in the fragment shader, using the direction from the camera and the direction vector to the vertex.
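The reflection vector described above is the same one GLSL's built-in reflect() computes. A minimal plain-C++ sketch of that formula (the struct and function names here are mine, not Cinder's):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// GLSL-style reflect(): I is the incident direction (camera towards the
// surface), N is the normalized surface normal. R = I - 2 * dot(N, I) * N
Vec3 reflectVec(const Vec3& I, const Vec3& N) {
    float d = 2.0f * dot(N, I);
    return { I.x - d * N.x, I.y - d * N.y, I.z - d * N.z };
}
```

In the shader, the reflected vector is then used to sample the cubemap, so the surface appears to mirror its surroundings.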
Continuing with the theme of environment maps today. This time I'm using the .obj file loader I created a few weeks ago and creating a reflection in the fragment shader by sampling the skybox texture.
Today's sketch deals with refraction. Refraction is described by Snell's Law, which governs how light "bends" or changes course when entering a material. GLSL has a handy refract() function that calculates the refracted direction from the incident vector, the surface normal, and the ratio of refractive indices you feed it.
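For reference, GLSL's refract() follows a standard formulation of Snell's Law; a plain-C++ sketch of it looks like this (function name is mine):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// GLSL-style refract(): I is the normalized incident direction, N the surface
// normal, eta the ratio of refractive indices (e.g. air-to-glass ~ 1.0/1.52).
// Returns the zero vector on total internal reflection.
Vec3 refractVec(const Vec3& I, const Vec3& N, float eta) {
    float nDotI = dot(N, I);
    float k = 1.0f - eta * eta * (1.0f - nDotI * nDotI);
    if (k < 0.0f) return { 0.0f, 0.0f, 0.0f };
    float s = eta * nDotI + std::sqrt(k);
    return { eta * I.x - s * N.x, eta * I.y - s * N.y, eta * I.z - s * N.z };
}
```

As with reflection, the resulting vector is used to sample the cubemap, making the object look like glass or water.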
Continuing yesterday's experiment by rendering a more complex scene offscreen.
Rehashing some old code from the first week, however the original sketch used Cinder's built-in shaders. This time, I'm writing my own shaders, doing some post-processing on the image, and rendering the Fbo I created.
Creating a black and white post-processing film effect by averaging the colours in the frag shader.
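The per-fragment averaging can be sketched in a couple of lines (function name is mine):

```cpp
#include <cassert>
#include <cmath>

// Black-and-white conversion by averaging the three colour channels,
// mirroring what the frag shader does per fragment.
float toGray(float r, float g, float b) {
    return (r + g + b) / 3.0f;
}
```

A plain average slightly over-weights blue relative to how the eye perceives brightness; a common alternative is the weighted luminance 0.2126 r + 0.7152 g + 0.0722 b.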
Some more post-processing effects in the frag shader. This time, using kernels to create a "sharpen" effect. This particular kernel takes the 8 surrounding pixel values and multiplies them by a scaling factor (in this case 2) and multiplies the current pixel by -15. Multiplying the surrounding pixels by a weight and the current pixel by the larger negative value balances out the result.
Creating a kernel that highlights the edges of an object.
This is the last sketch in the Fbo and shader post-processing series. This blurred effect is created by dividing the kernel values by a weighted value (in this case 16). The growing/shrinking effect is caused by oscillating the offset distance for each kernel value.
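The sharpen, edge-detect, and blur effects from the last few sketches all boil down to the same 3x3 convolution; here's a plain-C++ sketch of it using the blur kernel, whose values sum to 16 (hence the division the entry above mentions). The function and variable names are mine:

```cpp
#include <cassert>
#include <cmath>
#include <vector>
#include <algorithm>

// Apply a 3x3 kernel at pixel (x, y) of a single-channel image, clamping
// sample coordinates at the image edges.
float applyKernel(const std::vector<float>& img, int w, int h,
                  int x, int y, const float kernel[9]) {
    float sum = 0.0f;
    for (int ky = -1; ky <= 1; ++ky) {
        for (int kx = -1; kx <= 1; ++kx) {
            int sx = std::max(0, std::min(x + kx, w - 1));
            int sy = std::max(0, std::min(y + ky, h - 1));
            sum += img[sy * w + sx] * kernel[(ky + 1) * 3 + (kx + 1)];
        }
    }
    return sum;
}

// Blur kernel: the raw weights sum to 16, so dividing by 16 keeps overall
// brightness unchanged.
const float kBlur[9] = { 1/16.f, 2/16.f, 1/16.f,
                         2/16.f, 4/16.f, 2/16.f,
                         1/16.f, 2/16.f, 1/16.f };
```

In the frag shader the same loop runs over texture-coordinate offsets rather than integer pixel indices; oscillating those offsets is what produces the growing/shrinking effect.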
Today was a tough one, creating a custom Cubemap from 6 separate textures. It took a while to get the textures to show up on the correct faces of the cube. I was incorrectly passing the faces' positions to the frag shader in world space as opposed to object space. I'm also not really sure where the weird transparency is coming from, I'll need to investigate that further...
I've been working on some old flocking algorithms for a digital performance piece I've been collaborating on, so didn't really have time to start anything new today. This is old code based on Robert Hodgin's intro to Cinder tour.
Continuing on with the particle system. Nothing new to see here.
Today I fixed the issue I was having with the stencil buffer before; the problem was that I wasn't clearing the stencil buffer every frame. Still facing a few issues with the depth buffer though.
Really happy with today's sketch as this was something I was trying to do a few weeks back and failed miserably. I'm loading a .png file to create a texture and evaluating the alpha value of each fragment inside the fragment shader. If the fragment fails the check, the fragment is discarded, resulting in the empty space.
Nothing fancy today. Just playing with the alpha channel of the texture in the frag shader.
Not thrilled with the visual result of today's sketch, but I struggled to come up with a concept that demonstrated the technical stuff going on behind the scenes. Blending textures with alpha values requires drawing the objects from farthest to nearest, which prevents the z-buffer discarding fragments of the objects that sit behind other objects. This is achieved using a std::map<float, vec3>, where the distance of the object from the camera serves as the key. The map automatically sorts its entries by key, so the smallest distances sit at the bottom of the stack. In order to draw the objects in the correct sequence, we use a reverse iterator to draw from the top of the stack.
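The sorting trick above can be sketched in isolation like this (function names are mine; in the sketch the map values would be object positions used for the actual draw calls):

```cpp
#include <cassert>
#include <map>
#include <vector>

struct Vec3 { float x, y, z; };

// Transparent objects must be drawn farthest-to-nearest. std::map keeps its
// keys in ascending order, so iterating in reverse yields the farthest first.
std::vector<Vec3> farToNear(const Vec3& camPos, const std::vector<Vec3>& objects) {
    std::map<float, Vec3> sorted;
    for (const Vec3& p : objects) {
        float dx = p.x - camPos.x, dy = p.y - camPos.y, dz = p.z - camPos.z;
        sorted[dx * dx + dy * dy + dz * dz] = p;  // key: squared distance to camera
    }
    std::vector<Vec3> order;
    for (auto it = sorted.rbegin(); it != sorted.rend(); ++it)
        order.push_back(it->second);
    return order;
}
```

One caveat of using distance as a map key: two objects at exactly the same distance collide on the same key, so one overwrites the other.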
Today's sketch is an exciting one for me. The visual result isn't all that interesting, but it represents a technical "milestone". I'm drawing the white cube and rendering it off screen in an Fbo (frame buffer object). I'm then creating a texture out of the Fbo and applying it to another object. This represents an immense amount of potential moving forward as the flexibility offered by framebuffers is almost limitless.
It took a while to get the look right for this one, in the end I just had to add more lights, although I’m still not 100% happy with the dullness of the material. I need to investigate further how to create more contrast, as raising the ambient and diffuse components only seems to create a matte effect. I managed to get a more “plasticky” look by raising the specular power of the highlight. I also think a model with more details would provide better results. I also cleaned up the code on the CPU side by setting the uniforms for each light inside a for loop, which significantly reduced the amount of code I needed to write in comparison to the previous few days.
Now that I’ve managed to load vertex and normal data from an .obj object, I wanted to try and manipulate it somehow. I created a uniform in the vertex shader that takes the global time that’s elapsed, I then used this to modify the vertex position by adding the product of the normal and the cosine of the time. This causes the model to appear as though it’s expanding and collapsing in on itself. Tomorrow I’ll try and do something a little more interesting, perhaps adding noise inside the shader.
I managed to apply some noise to each vertex inside the vertex shader using some code borrowed from Ken Perlin’s noise functions. I like the effect it achieved, giving the surface of the model a fractured appearance, though I’m not sure why this is happening. The position of each vertex is being manipulated by scaling the position vector by a float value returned by the noise function. I need to investigate why the “cracks” are appearing, instead of the fragment shader interpolating between each new vertex position. Strange, but beautiful.
I left today's sketch pretty late, so decided not to embark on anything new. One of the things I've been trying to "perfect" for a while is a black material with the right amount of shine. I haven't been able to achieve the effect using my previous lighting models, so today I experimented with alternative phong lighting equations. The result is closer to what I was hoping to achieve, but still not perfect, as I'd like some element of ambient lighting to illuminate the features, without creating a matte material. I used a different .obj file for the sake of switching things up.
I've reached the "Advanced OpenGL" section of learnopengl.com, so the topics are starting to get a little more abstract / low-level. Today's session covered the depth buffer and the concept of z-fighting. For today's sketch, the brightness of each fragment is set according to its position in the depth buffer (stored in gl_FragCoord.z). The value first needs to be converted back to normalized device coordinates (NDC), linearized, and then divided by the value assigned to the far clipping plane (set using mCam.setPerspective()).
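The depth-linearization step can be sketched outside the shader like this (function name is mine; near/far are the clipping planes passed to mCam.setPerspective()):

```cpp
#include <cassert>
#include <cmath>

// Convert a non-linear gl_FragCoord.z value back to a linear depth in [0, 1].
float linearizeDepth(float fragZ, float near, float far) {
    float ndcZ = fragZ * 2.0f - 1.0f;  // depth-buffer [0,1] back to NDC [-1,1]
    float linear = (2.0f * near * far) / (far + near - ndcZ * (far - near));
    return linear / far;               // normalize by the far clipping plane
}
```

Because the raw depth value is non-linear (most precision is packed near the camera), skipping this step would make almost the whole scene render nearly white.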
Today was a complete cop out... I left it way too late and was too tired to put any thought into writing new code. This is code I altered and rehashed from day 43.
I've been trying to understand how the stencil buffer works in OpenGL. I think I grasp most of the theory, but the implementation's been a little tricky. I'm trying to draw some geometry, fill the stencil buffer, disable it, scale the geometry up slightly, and redraw it using a simple shader that outputs solid white. The main idea is that the stencil test discards all the fragments in the position of the original geometry, so only the part of the second, larger piece of geometry that extends beyond it gets rendered (i.e. an outline). Confused? Me too... It kind of worked, but was very buggy, so I decided to play around with it. I'll return to it tomorrow and try and figure out what's wrong.
In contrast to yesterday's sketch, I introduced attenuation to the model. This allows me to create a point light, where objects that are closer to the light appear brighter.
Not too much new code today, just playing with the linear and quadratic components of the attenuation equation to create some interesting lighting effects. I'm constantly amazed by how the change in lighting creates a very real sense of motion... something I want to expand on in the future.
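The attenuation equation being tweaked here is the standard constant/linear/quadratic falloff (function and parameter names are mine):

```cpp
#include <cassert>
#include <cmath>

// Point-light attenuation: brightness falls off with distance d according to
// 1 / (Kc + Kl*d + Kq*d*d). The constant term keeps the result from
// exceeding 1 at d = 0.
float attenuation(float d, float kConstant, float kLinear, float kQuadratic) {
    return 1.0f / (kConstant + kLinear * d + kQuadratic * d * d);
}
```

The values 1.0 / 0.09 / 0.032 used in the test below are the ones learnopengl.com suggests for a light with roughly a 50-unit range; changing the linear and quadratic terms reshapes the falloff curve, which is what drives the lighting effects in this sketch.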
Today felt like my biggest victory thus far in the 100-day challenge. I used everything I've learnt over the previous 45 days to fix bugs that I'm confident I wouldn't have been able to solve a week ago.
For today's sketch I created a spotlight by creating a lighting vector that points to a moving object (a focal point). Inside the frag shader, we check if the current fragment falls within the spotlight's sphere of influence (a cosine value set on the CPU side). If the fragment falls within this cone, we apply the desired lighting.
I'm really starting to feel like I'm hitting my stride. I find I'm more capable of executing ideas, without getting stuck on how to do it.
Today's sketch builds on yesterday's. I've added a "faded edge" to the spotlight by creating a secondary cone. If the fragment is inside the inner cone, it is fully illuminated; if it lies between the inner and outer cones, the brightness is a function of its distance from the inner cone. Anything outside the outer cone receives no light.
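The soft-edge interpolation between the two cones reduces to one clamped expression; a plain-C++ sketch (names are mine, and as in the sketch the comparison is done on cosines rather than angles):

```cpp
#include <cassert>
#include <cmath>
#include <algorithm>

// theta:    cosine of the angle between the spotlight axis and the
//           light-to-fragment direction
// innerCos: cosine of the inner cone angle (fully lit inside)
// outerCos: cosine of the outer cone angle (dark outside); innerCos > outerCos
float spotIntensity(float theta, float innerCos, float outerCos) {
    float t = (theta - outerCos) / (innerCos - outerCos);
    return std::max(0.0f, std::min(t, 1.0f));  // clamp into [0, 1]
}
```

The resulting intensity scales the diffuse and specular components in the frag shader, so the light fades smoothly at the cone's rim instead of cutting off hard.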
After experimenting with different types of light, it's time to start using multiple lights in the scene. We create an array of pointLight objects inside the frag shader (using the struct container we've created). We also define a calcPointLight() function that returns a vec3 of the diffuse, ambient and specular components for each light source. The final output colour is determined by adding together the contributions of each individual light source.
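The per-light accumulation can be sketched with just the diffuse term (ambient and specular omitted for brevity; all names here are mine, not the shader's):

```cpp
#include <cassert>
#include <cmath>
#include <vector>
#include <algorithm>

struct Vec3 { float x, y, z; };

float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

Vec3 normalize(const Vec3& v) {
    float len = std::sqrt(dot(v, v));
    return { v.x / len, v.y / len, v.z / len };
}

// Lambertian diffuse term for one point light: max(dot(N, L), 0).
float diffuse(const Vec3& normal, const Vec3& fragPos, const Vec3& lightPos) {
    Vec3 L = normalize({ lightPos.x - fragPos.x,
                         lightPos.y - fragPos.y,
                         lightPos.z - fragPos.z });
    return std::max(dot(normal, L), 0.0f);
}

// Total brightness is the sum over every light, mirroring the loop that
// calls calcPointLight() once per light in the frag shader.
float shade(const Vec3& n, const Vec3& p, const std::vector<Vec3>& lights) {
    float total = 0.0f;
    for (const Vec3& lp : lights) total += diffuse(n, p, lp);
    return total;
}
```

Summing the lights can push the result above 1.0, which is why very bright scenes clip to white unless the contributions are scaled down.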
Not much additional code for today’s sketch. I changed the ambient and diffuse components on the CPU side for each of the point lights, giving each one a bias of either Red, Green, or Blue. I also used a stock shader to colour each of the point lights to indicate which light has the relevant components. The blending ( simple addition of linear equations ) is again done in the frag shader.
I created my own little .obj file loader using Cinder’s ObjLoader. I imported a .obj of myself (which I created using a Kinect 1414 and Skanect). The file contains the vertices, vertex normals and indices for the faces. The data is stored in a TriMesh object, a pointer to which is then used to create a gl::Batch. I used Cinder’s gl::getStockShader() with wireframe enabled to visualize the model. I’ll probably create my own shader to add material and lighting to the object tomorrow.
Adding a specular component to yesterday's fragment shader. The specular highlight isn't quite working as it's supposed to.
I finally managed to get the specular highlight to show up on the model surface. Turns out I needed to multiply the light position by gl::getModelView() on the CPU side before passing it to the shader, but I'm still not 100% on the mechanics involved in this. The code still needs adjusting, as the highlight shows up even when the light is on the opposite side of the object, but overall I'm happy with the progression.
Starting to experiment with materials. This is a first attempt at setting material properties on the CPU side and sending them to the frag shader as uniforms. This sketch just varies the object's base colour over time.
Continuing from yesterday's experiments I added some additional material properties, this time to try and make the surface shinier. I also added structs to the fragment shader to encapsulate the different properties for the light and for the material.
This was a big win for me today in that I finally solved the issue I’ve been having over the past few days where the surface of the object remains illuminated when the object passes between the camera and the light source. It turns out that the problem I was having was due to the fact that all of my lighting calculations in the shader were taking place in view-space, while the light’s position was in world space. Multiplying the light’s position by the camera’s view matrix before sending it to the GPU solved the problem!
mGlsl->uniform("uLightPos", vec3( mCam.getViewMatrix() * vec4( lightPos, 1 )));
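That uniform call multiplies the light's world-space position by the camera's view matrix so the lighting math and the light position share view space. As a sanity check, here's that matrix-vector product in plain C++ (Cinder's vec/mat types come from GLM, which stores matrices column-major; the struct and function names here are mine):

```cpp
#include <cassert>
#include <cmath>

struct Vec4 { float x, y, z, w; };
struct Mat4 { float m[16]; };  // column-major, as in GLM / Cinder

// M * v: each result component dots v with one row of M, read across columns.
Vec4 mul(const Mat4& M, const Vec4& v) {
    return {
        M.m[0]*v.x + M.m[4]*v.y + M.m[8]*v.z  + M.m[12]*v.w,
        M.m[1]*v.x + M.m[5]*v.y + M.m[9]*v.z  + M.m[13]*v.w,
        M.m[2]*v.x + M.m[6]*v.y + M.m[10]*v.z + M.m[14]*v.w,
        M.m[3]*v.x + M.m[7]*v.y + M.m[11]*v.z + M.m[15]*v.w
    };
}
```

With a view matrix that just translates by (0, 0, -5) (a camera at z = 5 looking down the negative z-axis), a light at the world origin lands at (0, 0, -5) in view space, i.e. 5 units in front of the camera, which is exactly what the lighting code expects.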
Continuing with the theme of lighting... For today's sketch, two textures are created to act as the diffuse and specular maps respectively. These are both passed as uniforms to the fragment shader. The diffuse and specular lighting components are then determined by sampling the respective textures, allowing the different materials of the object to take on different physical properties.
Created a simple directional light by using a uniform light direction for every fragment.
Playing around with the random distribution of points on a sphere, as well as Perlin Noise. The normal vectors are derived by subtracting the sphere's origin from the chosen point on its surface. A line is then drawn from this point to a vector derived by multiplying the unit vector by a scalar value.
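One standard way to distribute points uniformly over a sphere is to pick the z coordinate and an azimuth angle uniformly, which avoids the pole clustering that naive angle sampling produces. A sketch under that assumption (names are mine):

```cpp
#include <cassert>
#include <cmath>
#include <random>
#include <algorithm>

struct Vec3 { float x, y, z; };

// Uniformly distributed point on the unit sphere: z uniform in [-1, 1],
// azimuth phi uniform in [0, 2*pi).
Vec3 randomOnSphere(std::mt19937& rng) {
    std::uniform_real_distribution<float> uni(0.0f, 1.0f);
    const float kTwoPi = 6.2831853f;
    float z   = 1.0f - 2.0f * uni(rng);
    float phi = kTwoPi * uni(rng);
    float r   = std::sqrt(std::max(0.0f, 1.0f - z * z));  // radius of the z-slice
    return { r * std::cos(phi), r * std::sin(phi), z };
}
```

On a unit sphere centred at the origin, the returned point is its own normal, which is why subtracting the origin from the surface point gives the normal direction used in the sketch.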
I simply didn't have the energy today to put the effort into something new. This is an amalgamation of previous days' code.
I spent most of the day today trying to get the Cinder-KCB2 block up and running in windows with a KinectV2. This is a screenshot from one of the sample apps that does skeleton tracking.
This is a short video from a rehearsal today for a digital performance piece I'm collaborating on. The visuals are created by a dancer performing in front of a KinectV2.
I spent some time tweaking the visuals from yesterday's coding session.
Nothing fancy today... beginning an exploration into lighting with the simple application of ambient lighting inside the fragment shader, passing the object colour and light colour in as uniforms.
Expanding on yesterday's sketch by using the normal and light direction vectors to calculate the diffuse shading component in the fragment shader. The object's color, the light's position and the light's color are passed to the frag shader as uniforms.
Experimenting with alpha transparencies with textures.
Animating textures with transparencies with Cinder's default shaders.
Distributing objects along the vertices of a sphere and applying Perlin Noise to the translation transformation.
Applying Perlin noise (derivative of fractal Brownian motion) to the translation transformation.
Creating my own view, model, and projection matrices (and applying transformations to them) and passing them to the vertex shader as uniforms, instead of using Cinder's built-in matrix transformations.
I've been experimenting with different camera techniques. In this sketch, the camera oscillates back and forth, bringing the objects in and out of clip space.
Despite the simple output, this was one of my most challenging days. Getting to grips with the different kinds of buffers has been tricky. For this triangle, I'm creating a gl::VboMesh and filling it with the position and normal attributes. I'm also manually drawing each vertex using an index buffer.
Today's sketch is admittedly a bit of a cop-out... I was short of time and applied a texture to a geom::Plane, manipulating the vertices using the previous 3D sine wave code.
I spent a fair amount of time trying to manipulate the vertices from yesterday's sketch in a more interesting way. After a great deal of frustration, the result was not quite what I'd hoped. I applied another texture from an old triangulation experiment.
Having a little more time today, I returned to the code I started on Day 15, attempting to pass individual colours for each vertex. The output was the result of a happy accident where I forgot to clear the drawing buffer, resulting in this colourful glitchy output.
This was a somewhat successful day. I'm still having issues with drawing the triangles manually by defining the order of indices. The warped pattern is happening entirely in the fragment shader.
For today's sketch I revisited what I was trying to do on days 15 and 16. The wave pattern is a result of an uneven offset from the center of the grid. The colour variation is the result of passing individual colours to each vertex, which are determined by said vertex's z-value.
Expanding on yesterday's sketch, I spent most of the time today experimenting with manipulating the vertices.
Writing a custom shader program and passing an updating colour as a uniform. The object's shape comes from rotating 128 icosahedrons around the same point.
Passing texture coordinates to, and sampling texture colours in, a custom shader. The 3D sine wave comes from associating the XY distance of the vertex from the center of the plane with the Z value of the vertex.
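The distance-to-height mapping behind the wave can be sketched in one function (names and the amplitude/frequency parameters are mine):

```cpp
#include <cassert>
#include <cmath>

// Z offset for a plane vertex: a sine of its XY distance from the plane's
// centre, phase-shifted by time so the wave radiates outward.
float waveHeight(float x, float y, float cx, float cy,
                 float amplitude, float frequency, float time) {
    float dx = x - cx, dy = y - cy;
    float dist = std::sqrt(dx * dx + dy * dy);
    return amplitude * std::sin(dist * frequency - time);
}
```

Because every vertex at the same distance from the centre gets the same height, the result is a set of concentric rings rippling outward as time advances.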
Applying the easing animation to cubes laid out in a 10x10 grid.
This is a variation of the classic 3D sine wave. Each sphere oscillates along the z-axis, with its height offset being a function of its distance from the center of the grid.
I called this sketch "ode to the game of life" as it reminds me of the classic programming example of the "game of life". The sketch utilizes the same principles as yesterday, except instead of translating a sphere along the z-axis, I'm rotating flat discs around their own local x-axis. The unusual pattern comes from tweaks in the distance offset from the center.
I was short on time today. This sketch is a variation of day 11, increasing the density of the spheres and changing the camera angle to give a more dramatic view of the 3D sine wave.
Today I decided to combine this week's aesthetic (the undulating sine wave) with some of the code from day 4. I increased the radius between rotations, creating more of a "slinky" look.
Basic 2D transformations and setting HSV colour values based on location.
2D matrix transformations and rotation, varying length based on position.
Using gl::Batch to combine geometry and a default shader, in addition to using gl::scale() to manipulate the spheres.
Continuing to play with gl::Batch, increasing the number of objects to 1028 and mapping the brightness to the height of the cube.
Drawing a 10x10x10 array of icosahedrons and scaling the values by which each object is translated.
Playing with easing, delaying the animation of each element by an offset based on its position in the for loop.
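The staggered easing can be sketched like this, assuming a quadratic ease-in-out curve (the original may use a different easing function; all names here are mine):

```cpp
#include <cassert>
#include <cmath>
#include <algorithm>

// Quadratic ease-in-out over normalized time t in [0, 1].
float easeInOutQuad(float t) {
    return t < 0.5f ? 2.0f * t * t
                    : 1.0f - (-2.0f * t + 2.0f) * (-2.0f * t + 2.0f) / 2.0f;
}

// Stagger the animation: each element starts delayPerElement later than the
// previous one, then its local time is clamped into [0, 1].
float delayedEase(float time, int index, float delayPerElement) {
    float t = std::max(0.0f, std::min(time - index * delayPerElement, 1.0f));
    return easeInOutQuad(t);
}
```

Calling delayedEase with the loop index as `index` gives the cascading effect: early elements are already moving while later ones are still waiting out their delay.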
Preloading a set of images into an array and displaying them at set intervals by binding them sequentially in the draw loop to create a texture animation.