
Three.js Particle Notes 03: FBO Technique: Building a GPGPU particle system from scratch.
This article continues from the template built in the first article and the Data Texture concept introduced in the second article. If those ideas are still unfamiliar, I recommend reading those notes first.
In this article, you will learn what a frame buffer is, what the FBO technique does, and how to use it to build an interactive GPGPU particle system.
What is a frame buffer?
In simple terms, a Frame Buffer Object (FBO) is a chunk of GPU memory that stores pixel data. It is an intermediary step in the rendering process. When a 3D scene is rendered, the graphics pipeline goes through geometry transformation, rasterization, pixel shading, and other stages; at some point in that pipeline, the rendered result needs to be stored before it is displayed on the screen. The frame buffer is that intermediate storage.
What is the FBO, or Frame Buffer Object, technique?
The simple explanation is that we write data into a texture so the GPU can perform calculations on that data. The core idea is to keep updating the texture so the data itself evolves over time; if we only pass a texture into a fragment shader once, it will never update by itself. The workflow in this article is: store each particle's position in a Data Texture, sample that texture in a fragment shader, compute the new position, and write it out through gl_FragColor. This gives us one computed result per particle. That output is rendered into a render target so it can be fed back in as the input texture for the next frame.
The first step, creating the Data Texture, was already explained in the previous article, so here I am only recording the implementation process.
createPositionBuffer(){
  // One particle per texel: size * size particles, 4 floats (RGBA) each.
  this.size = 32;
  this.number = this.size * this.size;
  this.positions = new Float32Array(this.number * 4);
  for(let i = 0; i < this.size; i++){
    for(let j = 0; j < this.size; j++){
      let index = i * this.size + j;
      this.positions[index * 4]     = i / (this.size - 1) - 0.5; // x in [-0.5, 0.5]
      this.positions[index * 4 + 1] = j / (this.size - 1) - 0.5; // y in [-0.5, 0.5]
      this.positions[index * 4 + 2] = 0; // z
      this.positions[index * 4 + 3] = 1; // w
    }
  }
  this.positionTexture = new THREE.DataTexture(this.positions, this.size, this.size, THREE.RGBAFormat, THREE.FloatType);
  this.positionTexture.needsUpdate = true;
}
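The index math above can be checked in plain JavaScript, independent of Three.js. This is a sketch of the same loop as a standalone function (the name buildPositions is mine, not from the article's code):

```javascript
// Rebuild the same RGBA position buffer in plain JS to verify the index math.
// Each particle gets (x, y, 0, 1), with x and y spread evenly across [-0.5, 0.5].
function buildPositions(size) {
  const positions = new Float32Array(size * size * 4);
  for (let i = 0; i < size; i++) {
    for (let j = 0; j < size; j++) {
      const index = i * size + j;
      positions[index * 4]     = i / (size - 1) - 0.5; // x in [-0.5, 0.5]
      positions[index * 4 + 1] = j / (size - 1) - 0.5; // y in [-0.5, 0.5]
      positions[index * 4 + 2] = 0;                    // z
      positions[index * 4 + 3] = 1;                    // w
    }
  }
  return positions;
}
```

For size = 32, the first particle lands at (-0.5, -0.5) and the last at (0.5, 0.5), so the grid is centered on the origin.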
This step creates the minimum elements required for the GPU computation unit: a dedicated scene, an OrthographicCamera, a full-screen plane, and a ShaderMaterial that runs the computation in its fragment shader.
setupFBO(){
  this.sceneFBO = new THREE.Scene();
  // Orthographic camera framing exactly the 2x2 plane below.
  this.cameraFBO = new THREE.OrthographicCamera(-1, 1, 1, -1, 0, 1);
  this.cameraFBO.position.z = 1;
  this.cameraFBO.lookAt(new THREE.Vector3(0, 0, 0));
  this.geoFBO = new THREE.PlaneGeometry(2, 2); // fills the whole render target
  this.matFBO = new THREE.ShaderMaterial({
    uniforms: {
      uMousePos: {value: this.uMousePos},
      uPosTexture: {value: this.positionTexture},      // current positions, updated each frame
      uOriginPosTexture: {value: this.positionTexture} // original positions, never updated
    },
    vertexShader: fboVertex,
    fragmentShader: fboFragment,
  })
  this.meshFBO = new THREE.Mesh(this.geoFBO, this.matFBO);
  this.sceneFBO.add(this.meshFBO);
  ...
}
What is a RenderTarget?
In Three.js, a render target is an object used to capture the output of the rendering process. It represents an area where the scene will be rendered, and it is usually a texture or a framebuffer object.
This step stores the fragment shader output as a new texture. In Three.js, the way to capture rendered output is through a render target:
setupFBO(){
  ...
  /*
   * .magFilter and .minFilter control how the texture is sampled when a texel
   * covers more (mag) or less (min) than one pixel. The default magFilter is
   * THREE.LinearFilter, which bilinearly interpolates the four closest texels;
   * THREE.NearestFilter instead uses the value of the closest texel. The default
   * minFilter is THREE.LinearMipmapLinearFilter, which uses mipmapping and a
   * trilinear filter.
   *
   * .type must correspond to .format. Options include THREE.UnsignedByteType
   * (the default, used for most texture formats), THREE.ByteType, THREE.ShortType,
   * THREE.UnsignedShortType, THREE.IntType, THREE.UnsignedIntType, THREE.FloatType,
   * THREE.HalfFloatType, THREE.UnsignedShort4444Type, THREE.UnsignedShort5551Type,
   * and THREE.UnsignedInt248Type.
   *
   * We use NearestFilter and FloatType here so each texel's data is read back
   * exactly, without interpolation or quantization.
   */
  this.renderTarget = new THREE.WebGLRenderTarget(this.size, this.size, {
    minFilter: THREE.NearestFilter,
    magFilter: THREE.NearestFilter,
    type: THREE.FloatType
  })
}
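Why NearestFilter matters here can be illustrated without WebGL: linear filtering blends neighboring texels, which corrupts per-particle data, while nearest sampling returns the stored value exactly. A toy 1D illustration (these helper functions are mine, not a Three.js API):

```javascript
// Toy 1D texture samplers illustrating NearestFilter vs LinearFilter.
// `data` plays the role of a row of texels; `u` is a coordinate in [0, 1].
function sampleNearest(data, u) {
  const i = Math.min(data.length - 1, Math.round(u * (data.length - 1)));
  return data[i]; // exact stored value: safe for per-particle data
}
function sampleLinear(data, u) {
  const x = u * (data.length - 1);
  const i0 = Math.floor(x);
  const i1 = Math.min(data.length - 1, i0 + 1);
  const t = x - i0;
  return data[i0] * (1 - t) + data[i1] * t; // blends neighbors: corrupts data
}
```

With data = [0, 10, 20], nearest sampling always returns one of the stored values, while linear sampling between texels returns blends that never existed in the buffer.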
The resulting texture is then passed into the material that actually renders the particles.
addObject(){
  this.geometry = new THREE.PlaneGeometry(10, 10, 50, 50);
  this.time = 0;
  this.material = new THREE.ShaderMaterial({
    uniforms: {
      time: {value: this.time},
      uTexture: {value: this.positionTexture}
    },
    vertexShader: vertexShader,
    fragmentShader: fragmentShader,
  })
  this.mesh = new THREE.Points(this.geometry, this.material);
  this.scene.add(this.mesh);
}
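The particle material's vertex shader is not shown above. A minimal sketch of what it needs to do, assuming each point's uv attribute is used to look up its position in uTexture, and assuming a simple distance-based point size (the exact sizing is my choice, not the article's):

```glsl
varying vec2 vUv;
uniform sampler2D uTexture;

void main() {
  vUv = uv;
  // Read this point's position from the computed texture.
  vec3 pos = texture2D(uTexture, uv).xyz;
  vec4 mvPosition = modelViewMatrix * vec4(pos, 1.0);
  // Hypothetical size attenuation: smaller points farther from the camera.
  gl_PointSize = 10.0 / -mvPosition.z;
  gl_Position = projectionMatrix * mvPosition;
}
```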
To map the mouse position into 3D space, we need these steps:
setupMouseEvent(){
  this.uMousePos = new THREE.Vector3(0, 0, 0);
  this.pointer = new THREE.Vector2();
  this.raycaster = new THREE.Raycaster();
  // Invisible plane used only as a raycast target to get a 3D point.
  this.raycastMesh = new THREE.Mesh(
    new THREE.PlaneGeometry(10, 10),
    new THREE.MeshBasicMaterial()
  )
  window.addEventListener('pointermove', (e) => {
    // Convert pixel coordinates to normalized device coordinates in [-1, 1].
    this.pointer.x = (e.clientX / this.width) * 2 - 1;
    this.pointer.y = -(e.clientY / this.height) * 2 + 1;
    this.raycaster.setFromCamera(this.pointer, this.camera);
    const intersects = this.raycaster.intersectObjects([this.raycastMesh]);
    if (intersects.length > 0){
      this.uMousePos = intersects[0].point;
    }
  })
}
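The pointer-to-NDC conversion above is plain arithmetic and can be verified outside the browser. A sketch of the same mapping as a standalone function (the name toNDC is mine):

```javascript
// Convert pixel coordinates to normalized device coordinates (NDC),
// the [-1, 1] range the raycaster expects: +x right, +y up.
function toNDC(clientX, clientY, width, height) {
  return {
    x: (clientX / width) * 2 - 1,
    y: -(clientY / height) * 2 + 1,
  };
}
```

The screen center maps to (0, 0) and the top-left corner to (-1, 1), matching the raycaster's coordinate convention.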
In the render loop, the uniform is kept in sync every frame:
this.matFBO.uniforms.uMousePos.value = this.uMousePos;
In the FBO fragment shader, we sample the current and original positions and compute a force vector pointing from the mouse toward each particle:
vec4 pos = texture2D(uPosTexture, vUv);
vec3 originPos = texture2D(uOriginPosTexture, vUv).xyz;
vec3 force = pos.xyz - uMousePos;
I recommend using desmos.com here. It lets you draw functions as graphs, which makes it much easier to visually understand the formulas.
Once we have a decreasing formula, we can add it to the fragment shader. This creates the effect where particles farther from the interaction point receive a gradually smaller force.
As the force becomes weaker, each particle gradually moves back toward its original position.
vec3 posToGo = originPos + normalize(force) * forceFactor;
pos.xy += (posToGo.xy - pos.xy) * 0.005;
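The falloff used in the shader, 1.0 / max(0.1, len * 50.0), can also be tabulated in plain JavaScript to see its shape before committing it to GLSL:

```javascript
// The force falloff from the FBO fragment shader: 1 / max(0.1, len * 50).
// Near the mouse the force is clamped at 10; farther away it decays as 1/len.
function forceFalloff(len) {
  return 1 / Math.max(0.1, len * 50);
}
```

Any distance below 0.002 hits the clamp (len * 50 < 0.1), so the force never blows up at the interaction point; beyond that it falls off smoothly, which is exactly the decreasing curve worth sketching on Desmos first.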
The complete FBO fragment shader:
varying vec2 vUv;
uniform sampler2D uPosTexture;
uniform sampler2D uOriginPosTexture;
uniform vec3 uMousePos;

void main() {
  vec4 pos = texture2D(uPosTexture, vUv);
  vec3 originPos = texture2D(uOriginPosTexture, vUv).xyz;
  // Direction away from the mouse, with strength falling off with distance.
  vec3 force = pos.xyz - uMousePos;
  float len = length(force);
  float forceFactor = 1. / max(0.1, len * 50.);
  // Target position: the original position pushed away from the mouse.
  vec3 posToGo = originPos + normalize(force) * forceFactor;
  // Ease toward the target a little each frame.
  pos.xy += (posToGo.xy - pos.xy) * 0.005;
  gl_FragColor = vec4(pos.xyz, 1.0);
}
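One detail the shader glosses over: it both reads uPosTexture and writes new positions, so the render loop must alternate between two render targets (ping-pong), with each frame's output becoming the next frame's input. A plain-JS sketch of that swap, where `compute` stands in for rendering the FBO scene into the write target (all names here are hypothetical, not the article's code):

```javascript
// Ping-pong buffering in plain JS. In Three.js, `read` and `write` would be two
// WebGLRenderTarget instances, and `compute` a renderer.render() into `write`.
function createPingPong(initialData) {
  let read = Float32Array.from(initialData);        // sampled as uPosTexture this frame
  let write = new Float32Array(initialData.length); // target being written this frame
  return {
    step(compute) {
      for (let i = 0; i < read.length; i++) write[i] = compute(read[i]);
      [read, write] = [write, read]; // this frame's output is next frame's input
      return read;
    },
  };
}
```

Each call to step() feeds the previous result back in, which is what makes the texture "keep updating" over time.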
Full code: link
To summarize: the FBO technique writes computed data into a texture so the GPU can update particle state over time and feed the result back into another shader. This article implemented the core pieces of that GPGPU workflow: data textures, a dedicated FBO scene, render targets, mouse interaction, and the fragment shader logic that moves the particles.