This article will use:
In this article you will learn:
What is a frame buffer? In simple terms, a frame buffer is a chunk of memory that stores pixel data. It is an intermediate step in the rendering process. When you render a scene in a 3D application, the graphics pipeline runs through several stages, such as geometry transformation, rasterization, and pixel shading. At some point in this pipeline, the rendered image needs to be stored before it is displayed on the screen.
What is the FBO (Frame Buffer Object) technique? Simply put, we write data into a texture so the GPU can run computations on it. The core idea is to update that texture over and over so the data keeps evolving; merely passing a texture into a fragment shader cannot modify the texture itself. The approach in this article is:
Step 1: Create a new scene and capture it with an orthographic camera.
Step 2: Create a new piece of geometry and aim the camera squarely at it.
Step 3: Give that geometry its own fragment shader.
Step 4: Write the computation logic in that fragment shader and output the result to gl_FragColor, which gives us the data after one pass of computation.
Step 5: Output what the camera sees as a new texture, then feed that texture back into the fragment shader; the logic resembles a recursive algorithm.
Step 6: Import the texture the camera produced into other shaders to read the computed data.
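The "recursion" in steps 5 and 6 is usually implemented with two render targets that swap roles each frame, so the shader never reads from the texture it is currently writing to. The article's code below shows a single target, so this two-target "ping-pong" sketch is my own assumption; `targetA`/`targetB` stand in for `THREE.WebGLRenderTarget` instances:

```javascript
// Minimal ping-pong sketch in plain JS: `read` holds last frame's result
// (the shader's input texture) and `write` is rendered into this frame.
function makePingPong(targetA, targetB) {
  let read = targetA;
  let write = targetB;
  return {
    get read() { return read; },
    get write() { return write; },
    // After rendering, the freshly written target becomes next frame's input.
    swap() { [read, write] = [write, read]; }
  };
}

const pp = makePingPong({ name: 'A' }, { name: 'B' });
pp.swap(); // now pp.read is B (this frame's output), pp.write is A
```

Each frame you would render the FBO scene into `pp.write` while the material samples `pp.read.texture`, then call `swap()`.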
This step was covered in the previous article, so here I simply record the process.
createPositionBuffer(){
  this.size = 32;                      // texture is size x size texels
  this.number = this.size * this.size; // one particle per texel
  this.positions = new Float32Array(this.number * 4); // RGBA per texel
  for (let i = 0; i < this.size; i++) {
    for (let j = 0; j < this.size; j++) {
      const index = i * this.size + j;
      this.positions[index * 4]     = i / (this.size - 1) - 0.5; // x in [-0.5, 0.5]
      this.positions[index * 4 + 1] = j / (this.size - 1) - 0.5; // y in [-0.5, 0.5]
      this.positions[index * 4 + 2] = 0; // z
      this.positions[index * 4 + 3] = 1; // w
    }
  }
  this.positionTexture = new THREE.DataTexture(
    this.positions, this.size, this.size,
    THREE.RGBAFormat, THREE.FloatType
  );
  this.positionTexture.needsUpdate = true;
}
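The indexing above can be sanity-checked without three.js at all; each texel stores an (x, y, 0, 1) position on a grid centered on the origin:

```javascript
// Rebuilds the same RGBA position buffer as createPositionBuffer(),
// extracted as a standalone function so the layout can be inspected.
function buildPositions(size) {
  const positions = new Float32Array(size * size * 4);
  for (let i = 0; i < size; i++) {
    for (let j = 0; j < size; j++) {
      const index = i * size + j;
      positions[index * 4]     = i / (size - 1) - 0.5; // x in [-0.5, 0.5]
      positions[index * 4 + 1] = j / (size - 1) - 0.5; // y in [-0.5, 0.5]
      positions[index * 4 + 2] = 0;                    // z
      positions[index * 4 + 3] = 1;                    // w
    }
  }
  return positions;
}

const p = buildPositions(32);
console.log(p[0], p[1]); // first texel is the grid corner: -0.5 -0.5
```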
This step builds the minimal elements our GPU compute unit needs.
setupFBO(){
  this.sceneFBO = new THREE.Scene();
  // a 2x2 orthographic frustum that exactly frames the 2x2 plane below
  this.cameraFBO = new THREE.OrthographicCamera(-1, 1, 1, -1, 0, 1);
  this.cameraFBO.position.z = 1;
  this.cameraFBO.lookAt(new THREE.Vector3(0, 0, 0));
  this.geoFBO = new THREE.PlaneGeometry(2, 2);
  this.matFBO = new THREE.ShaderMaterial({
    uniforms: {
      uMousePos: { value: this.uMousePos },
      uPosTexture: { value: this.positionTexture },
      uOriginPosTexture: { value: this.positionTexture }
    },
    vertexShader: fboVertex,
    fragmentShader: fboFragment,
  });
  this.meshFBO = new THREE.Mesh(this.geoFBO, this.matFBO);
  // this.meshFBO.position.x = 0.5;
  this.sceneFBO.add(this.meshFBO);
  ...
}
What is a render target? In three.js, a render target is an object used to capture the output of the rendering process. It represents an area the scene is rendered into, typically a texture or a framebuffer object.
This step stores the fragment shader's output, untouched, as a new texture. In three.js, the process of using a render target to capture the rendered image is:
Step 1: Declare the render target's image format.
Step 2: When rendering, tell the renderer which camera and scene should be rendered into the render target.
Step 3: Output the data cached in the render target as a texture.
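Steps 2 and 3 can be sketched as one small helper. This is a renderer-agnostic sketch; with three.js, `renderer` would be a `WebGLRenderer`, whose `setRenderTarget` and `render` methods are exactly what is used here:

```javascript
// Point the renderer at the target, draw the FBO scene into it, restore the
// default (on-screen) framebuffer, and hand back the captured texture.
function renderToTexture(renderer, scene, camera, target) {
  renderer.setRenderTarget(target); // step 2: output goes into the target
  renderer.render(scene, camera);
  renderer.setRenderTarget(null);   // back to rendering on screen
  return target.texture;            // step 3: the stored image as a texture
}
```

In the animation loop you would call this with `this.sceneFBO`, `this.cameraFBO`, and the render target created below, then feed the returned texture into the next material.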
setupFBO(){
  ...
  /*
   * .magFilter : how the texture is sampled when a texel covers more than
   *   one pixel. The default is THREE.LinearFilter, which takes the four
   *   closest texels and bilinearly interpolates among them; the other
   *   option is THREE.NearestFilter, which uses the value of the closest texel.
   * .minFilter : how the texture is sampled when a texel covers less than
   *   one pixel. The default is THREE.LinearMipmapLinearFilter, which uses
   *   mipmapping and a trilinear filter.
   * .type : must correspond to .format. The default is THREE.UnsignedByteType;
   *   other options include THREE.FloatType, THREE.HalfFloatType, THREE.IntType,
   *   THREE.UnsignedIntType, and so on. We use FloatType here so the stored
   *   positions keep full floating-point precision, and NearestFilter so
   *   texels are never blended together when sampled.
   */
  this.renderTarget = new THREE.WebGLRenderTarget(this.size, this.size, {
    minFilter: THREE.NearestFilter,
    magFilter: THREE.NearestFilter,
    type: THREE.FloatType
  });
}
Feed the rendered result into the shader that does the actual on-screen rendering.
addObject(){
  this.geometry = new THREE.PlaneGeometry(10, 10, 50, 50);
  this.time = 0;
  this.material = new THREE.ShaderMaterial({
    uniforms: {
      time: { value: this.time },
      uTexture: { value: this.positionTexture } // uniforms need a { value } wrapper
    },
    vertexShader: vertexShader,
    fragmentShader: fragmentShader,
  });
  this.mesh = new THREE.Points(this.geometry, this.material);
  this.scene.add(this.mesh);
}
Mapping the mouse coordinates into 3D space takes the following steps:
setupMouseEvent(){
  this.pointer = new THREE.Vector2();   // pointer position in NDC
  this.raycaster = new THREE.Raycaster();
  this.uMousePos = new THREE.Vector3(0, 0, 0);
  // an invisible plane used only as a surface for the raycast to hit
  this.raycastMesh = new THREE.Mesh(
    new THREE.PlaneGeometry(10, 10),
    new THREE.MeshBasicMaterial()
  );
  window.addEventListener('pointermove', (e) => {
    // convert pixel coordinates to normalized device coordinates (-1..1)
    this.pointer.x = (e.clientX / this.width) * 2 - 1;
    this.pointer.y = -(e.clientY / this.height) * 2 + 1;
    this.raycaster.setFromCamera(this.pointer, this.camera);
    const intersects = this.raycaster.intersectObjects([this.raycastMesh]);
    if (intersects.length > 0) {
      this.uMousePos = intersects[0].point;
    }
  });
}
Then, on every frame, push the latest hit point into the FBO material:
this.matFBO.uniforms.uMousePos.value = this.uMousePos;
In the FBO fragment shader, sample the current and original positions and compute the force relative to the mouse:
vec4 pos = texture2D(uPosTexture, vUv);
vec3 originPos = texture2D(uOriginPosTexture, vUv).xyz;
vec3 force = pos.xyz - uMousePos;
Here is a handy website I recommend:
With a formula that decays over distance, we can add it to the fragment shader so the force applied to a particle shrinks as it gets farther from the point of influence.
As the force weakens, the particle's position gradually moves back toward its original spot.
vec3 posToGo = originPos + normalize(force)*forceFractor;
pos.xy += (posToGo.xy-pos.xy)*0.005;
varying vec2 vUv;
uniform sampler2D uPosTexture;       // positions from the previous frame
uniform sampler2D uOriginPosTexture; // the particles' rest positions
uniform vec3 uMousePos;

void main() {
  vec4 pos = texture2D(uPosTexture, vUv);
  vec3 originPos = texture2D(uOriginPosTexture, vUv).xyz;
  vec3 force = pos.xyz - uMousePos;
  float len = length(force);
  float forceFractor = 1. / max(0.1, len * 50.); // decays with distance
  vec3 posToGo = originPos + normalize(force) * forceFractor;
  pos.xy += (posToGo.xy - pos.xy) * 0.005;       // damped approach
  gl_FragColor = vec4(pos.xyz, 1.0);
}
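The decay and damping can be checked numerically in plain JS. The following is a 1-D analogue of the shader update above (with `Math.sign` playing the role of `normalize`, and the variable name `forceFractor` kept to mirror the GLSL):

```javascript
// One simulation step: the force falls off with distance to the mouse,
// and the position eases toward originPos + decayedOffset at a rate of 0.005.
function step(pos, originPos, mousePos) {
  const force = pos - mousePos;
  const len = Math.abs(force);
  const forceFractor = 1 / Math.max(0.1, len * 50); // same decay as the GLSL
  const posToGo = originPos + Math.sign(force) * forceFractor;
  return pos + (posToGo - pos) * 0.005;             // damped approach
}

// With the mouse far away, the decayed force is tiny, so a displaced
// particle drifts back toward its origin over many frames.
let pos = 0.4;
for (let i = 0; i < 2000; i++) pos = step(pos, 0, 100);
console.log(Math.abs(pos) < 0.01); // → true: the particle returned home
```

The small 0.005 lerp factor is why the motion looks smooth: each frame the particle only covers half a percent of the remaining distance to its target.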
Full code: