ライブデモ Live Demo
Simplex ノイズのリッジ発光(ridged noise)を2レイヤー乗算し、pow で中間調を押しつぶした深緑→シアンのビームが流れる WebGL ヒーロー背景です。スライダーで輝度・速度・マウス影響度を調整してください。
Two layers of ridged simplex noise are multiplied and crushed with a pow function, producing deep-green to cyan beams as a WebGL hero background. Use the sliders to tune glow intensity, drift speed, and mouse influence.
Neural signal backdrop
WebGL simplex shader with mouse-reactive ridged glow — zero assets, pure GLSL.
AI向け説明 AI Description
`M-014` は WebGL(GLSL)で実装した Hero 背景エフェクトです。Stefan Gustavson の Simplex 2D ノイズ (`snoise`) を使い、リッジ発光(`1.0 - abs(snoise(...))`)を2レイヤー生成して乗算後に `pow(beam, u_bloom)` で中間調を圧縮することで、深緑→シアン→ほぼ白のビームを描画します。マウス座標はキャンバス座標系に変換(Y軸反転)し、各レイヤーのサンプリング位置をわずかにオフセットして揺らぎを加えます。モバイルは `window.innerWidth < 768` または `'ontouchstart' in window` で判定し、`requestAnimationFrame` ループの偶数フレームをスキップして実質 30fps に抑えます。`prefers-reduced-motion: reduce` 時はデルタタイムの積算を止め、静止状態で描画します。WebGL が利用できない場合はキャンバスを非表示にし、CSSフォールバックグラデーションを表示します。
`M-014` is a WebGL (GLSL) hero background. It uses Stefan Gustavson's Simplex 2D noise (`snoise`) to create two ridged-noise layers (`1.0 - abs(snoise(...))`), multiplies them, then crushes midtones with `pow(beam, u_bloom)`, producing deep-green → cyan → near-white beams. Mouse coordinates are converted to canvas space (Y-flipped) and used to subtly offset each layer's sampling position. Mobile detection (`window.innerWidth < 768` or `'ontouchstart' in window`) caps rendering to ~30fps by skipping every other frame. Under `prefers-reduced-motion: reduce`, delta-time accumulation stops, freezing the animation. If WebGL is unavailable, the canvas is hidden and a static CSS gradient fallback is shown.
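The ridge-and-crush tone mapping described above can be modeled in plain JavaScript as a scalar sketch of the shader math (the noise inputs and `bloom` value here are illustrative, not taken from the shader):

```javascript
// Scalar model of the shader's tone pipeline:
// ridge(n) = (1 - |n|)^2, beam = pow(r1 * r2, bloom).
// Noise values lie in [-1, 1]; a ridge peaks where the noise crosses zero.
function ridge(n) {
  const r = 1.0 - Math.abs(n);
  return r * r;
}
function beam(n1, n2, bloom) {
  return Math.pow(ridge(n1) * ridge(n2), bloom);
}
// Near a zero crossing both layers are bright and survive the pow:
console.log(beam(0.0, 0.0, 3.5)); // 1
// Mid-range noise is crushed toward black, sharpening the beam edges:
console.log(beam(0.5, 0.5, 3.5) < 0.001); // true
```

This is why raising `u_bloom` makes the beams thinner: only samples very close to a noise zero crossing in both layers stay bright after the exponent is applied.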
調整可能パラメータ Adjustable Parameters
- u_bloom - pow の指数(1.0〜6.0)。値が大きいほど中間調が潰れビームが鋭くなる pow exponent (1.0–6.0); higher values crush midtones for sharper beam edges
- speed - JS 側でデルタ時間に掛ける係数(0.05〜2.0)。ドリフトの速さを制御 multiplier applied to delta-time in JS (0.05–2.0); controls how fast beams drift
- u_mouse_str - マウス影響度(0.0〜0.5)。カーソル周辺のノイズサンプリングをずらす量 mouse influence (0.0–0.5); how much the cursor displaces each layer's noise sample
- layer scales - シェーダー内の `2.2` と `4.0`(ノイズ周波数)を変えてビームの粗さを調整 noise frequency constants `2.2` and `4.0` in the shader control beam coarseness
- color stops - シェーダー内の `bg` / `mid` / `hi` ベクトルでカラーパレットを変更 swap the `bg`, `mid`, `hi` vectors in the shader to retheme the palette
- prefers-reduced-motion - 積算時間を凍結して静止画として描画 freezes accumulated time so the shader renders a still frame
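The mouse-influence parameter depends on the Y-flipped canvas-space conversion performed in the `mousemove` handler. A scalar sketch of that conversion (the rect values below are illustrative):

```javascript
// Canvas-space conversion used by the mousemove/touchmove handlers:
// y is flipped because gl_FragCoord counts from the bottom of the canvas.
function toCanvasSpace(clientX, clientY, rect, canvasHeight) {
  return {
    x: clientX - rect.left,
    y: canvasHeight - (clientY - rect.top),
  };
}
const rect = { left: 10, top: 20 }; // hypothetical bounding rect
const p = toCanvasSpace(10, 340, rect, 320);
console.log(p); // { x: 0, y: 0 } — the bottom-left corner of the canvas
```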
実装 Implementation
HTML + CSS
<div class="neuro-hero" id="neuro_stage">
<canvas id="neuro_canvas" aria-hidden="true"></canvas>
<div class="neuro-card">
<h2>Hero heading</h2>
<p>Subtitle copy here.</p>
</div>
</div>
<style>
.neuro-hero {
position: relative;
overflow: hidden;
border-radius: 20px;
height: 320px;
}
#neuro_canvas {
position: absolute;
inset: 0;
width: 100%;
height: 100%;
display: block;
}
/* WebGL fallback */
.neuro-hero.neuro-fallback {
background: linear-gradient(135deg, #012010, #001208);
}
.neuro-hero.neuro-fallback #neuro_canvas { display: none; }
/* Card sits above canvas */
.neuro-card {
position: absolute;
inset: 0;
display: flex;
flex-direction: column;
justify-content: center;
padding: 40px;
pointer-events: none;
}
</style>
JavaScript (WebGL IIFE)
(function () {
const VS = `
attribute vec2 a_pos;
void main() { gl_Position = vec4(a_pos, 0.0, 1.0); }
`;
const FS = `
precision highp float;
uniform float u_time;
uniform vec2 u_resolution;
uniform vec2 u_mouse;
uniform float u_bloom;
uniform float u_mouse_str;
vec3 permute(vec3 x) {
return mod(((x * 34.0) + 1.0) * x, 289.0);
}
float snoise(vec2 v) {
const vec4 C = vec4(0.211324865405187, 0.366025403784439,
-0.577350269189626, 0.024390243902439);
vec2 i = floor(v + dot(v, C.yy));
vec2 x0 = v - i + dot(i, C.xx);
vec2 i1 = (x0.x > x0.y) ? vec2(1.0,0.0) : vec2(0.0,1.0);
vec4 x12 = x0.xyxy + C.xxzz;
x12.xy -= i1;
i = mod(i, 289.0);
vec3 p = permute(permute(i.y + vec3(0.0,i1.y,1.0))
+ i.x + vec3(0.0,i1.x,1.0));
vec3 m = max(0.5 - vec3(dot(x0,x0),
dot(x12.xy,x12.xy),
dot(x12.zw,x12.zw)), 0.0);
m = m*m; m = m*m;
vec3 x = 2.0 * fract(p * C.www) - 1.0;
vec3 h = abs(x) - 0.5;
vec3 a0 = x - floor(x + 0.5);
m *= 1.79284291400159 - 0.85373472095314*(a0*a0 + h*h);
vec3 g;
g.x = a0.x * x0.x + h.x * x0.y;
g.yz = a0.yz * x12.xz + h.yz * x12.yw;
return 130.0 * dot(m, g);
}
void main() {
vec2 uv = gl_FragCoord.xy / u_resolution;
float asp = u_resolution.x / u_resolution.y;
vec2 st = vec2(uv.x * asp, uv.y);
vec2 mUV = vec2((u_mouse.x / u_resolution.x) * asp,
u_mouse.y / u_resolution.y);
float md = distance(st, mUV);
float mi = u_mouse_str * exp(-md * 3.5);
float t = u_time;
vec2 p1 = st*2.2 + vec2(t*0.18, t*0.10)
+ vec2(mi*0.40, mi*-0.30);
float r1 = 1.0 - abs(snoise(p1)); r1 *= r1;
vec2 p2 = st*4.0 + vec2(-t*0.13, t*0.20)
+ vec2(mi*-0.30, mi*0.50);
float r2 = 1.0 - abs(snoise(p2)); r2 *= r2;
float beam = pow(max(r1 * r2, 0.0), u_bloom);
vec3 bg = vec3(0.008, 0.055, 0.038);
vec3 mid = vec3(0.0, 0.82, 0.68);
vec3 hi = vec3(0.80, 1.0, 1.0);
vec3 col = mix(bg, mid, smoothstep(0.0, 0.55, beam));
col = mix(col, hi, smoothstep(0.55, 1.0, beam));
float vig = 1.0 - smoothstep(0.35, 1.1,
length(uv - 0.5) * 1.7);
col *= mix(0.45, 1.0, vig);
gl_FragColor = vec4(col, 1.0);
}
`;
const canvas = document.getElementById("neuro_canvas");
const gl = canvas.getContext("webgl")
|| canvas.getContext("experimental-webgl");
if (!gl) {
document.getElementById("neuro_stage")
.classList.add("neuro-fallback");
return;
}
function compile(type, src) {
const s = gl.createShader(type);
gl.shaderSource(s, src);
gl.compileShader(s);
if (!gl.getShaderParameter(s, gl.COMPILE_STATUS)) {
console.error(gl.getShaderInfoLog(s));
}
return s;
}
const prog = gl.createProgram();
gl.attachShader(prog, compile(gl.VERTEX_SHADER, VS));
gl.attachShader(prog, compile(gl.FRAGMENT_SHADER, FS));
gl.linkProgram(prog);
if (!gl.getProgramParameter(prog, gl.LINK_STATUS)) {
console.error(gl.getProgramInfoLog(prog));
}
gl.useProgram(prog);
const buf = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, buf);
gl.bufferData(gl.ARRAY_BUFFER,
new Float32Array([-1,-1, 1,-1, -1,1, 1,1]), gl.STATIC_DRAW);
const aPos = gl.getAttribLocation(prog, "a_pos");
gl.enableVertexAttribArray(aPos);
gl.vertexAttribPointer(aPos, 2, gl.FLOAT, false, 0, 0);
const uTime = gl.getUniformLocation(prog, "u_time");
const uRes = gl.getUniformLocation(prog, "u_resolution");
const uMouse = gl.getUniformLocation(prog, "u_mouse");
const uBloom = gl.getUniformLocation(prog, "u_bloom");
const uMouseStr = gl.getUniformLocation(prog, "u_mouse_str");
let accTime = 0, lastTs = performance.now(), frameN = 0;
let mouse = { x: -9999, y: -9999 };
let bloom = 3.5, speed = 0.5, mouseStr = 0.15;
const isMobile = window.innerWidth < 768 || "ontouchstart" in window;
const reduced = window.matchMedia(
"(prefers-reduced-motion: reduce)").matches;
function resize() {
canvas.width = canvas.offsetWidth;
canvas.height = canvas.offsetHeight;
gl.viewport(0, 0, canvas.width, canvas.height);
}
window.addEventListener("resize", resize);
resize();
const stage = document.getElementById("neuro_stage");
stage.addEventListener("mousemove", (e) => {
const r = canvas.getBoundingClientRect();
mouse.x = e.clientX - r.left;
mouse.y = canvas.height - (e.clientY - r.top);
});
stage.addEventListener("mouseleave", () => {
mouse.x = -9999; mouse.y = -9999;
});
stage.addEventListener("touchmove", (e) => {
const r = canvas.getBoundingClientRect();
const t = e.touches[0];
mouse.x = t.clientX - r.left;
mouse.y = canvas.height - (t.clientY - r.top);
}, { passive: true });
function render(ts) {
requestAnimationFrame(render);
frameN++;
if (isMobile && frameN % 2 === 0) return;
const dt = (ts - lastTs) * 0.001;
lastTs = ts;
if (!reduced) accTime += dt * speed;
gl.uniform1f(uTime, accTime);
gl.uniform2f(uRes, canvas.width, canvas.height);
gl.uniform2f(uMouse, mouse.x, mouse.y);
gl.uniform1f(uBloom, bloom);
gl.uniform1f(uMouseStr, mouseStr);
gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4);
}
requestAnimationFrame(render);
// Expose for slider wiring
window.__neuro = {
setBloom: (v) => { bloom = v; },
setSpeed: (v) => { speed = v; },
setMouseStr: (v) => { mouseStr = v; },
};
})();
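The `window.__neuro` setters exposed above can be wired to range inputs along these lines (the slider `id`s are assumptions; adapt them to your markup). The `clamp` helper keeps values inside the documented parameter ranges:

```javascript
// Hypothetical slider wiring for the exposed window.__neuro setters.
// clamp() keeps slider values inside the documented parameter ranges.
function clamp(v, min, max) {
  return Math.min(max, Math.max(min, v));
}
function wireSlider(id, setter, min, max) {
  const el = document.getElementById(id);
  if (!el || !window.__neuro) return;
  el.addEventListener("input", () =>
    setter(clamp(parseFloat(el.value), min, max)));
}
if (typeof document !== "undefined") {
  // Slider element ids below are placeholders.
  wireSlider("bloom_slider", (v) => window.__neuro.setBloom(v), 1.0, 6.0);
  wireSlider("speed_slider", (v) => window.__neuro.setSpeed(v), 0.05, 2.0);
  wireSlider("mouse_slider", (v) => window.__neuro.setMouseStr(v), 0.0, 0.5);
}
```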
React (JSX)
// react/M-014.jsx
import { useEffect, useRef } from "react";
import "./M-014.css";
const VS = `attribute vec2 a_pos; void main(){gl_Position=vec4(a_pos,0.,1.);}`;
// (paste full FS string from JS snippet above)
const FS = `/* ... full fragment shader source ... */`;
export default function NeuroNoise({
speed = 0.5,
bloom = 3.5,
mouseStr = 0.15,
}) {
const canvasRef = useRef(null);
const glRef = useRef(null);
const stateRef = useRef({ speed, bloom, mouseStr });
useEffect(() => {
stateRef.current = { speed, bloom, mouseStr };
}, [speed, bloom, mouseStr]);
useEffect(() => {
const canvas = canvasRef.current;
const gl = canvas.getContext("webgl")
|| canvas.getContext("experimental-webgl");
if (!gl) {
canvas.parentElement.classList.add("neuro-fallback");
return;
}
glRef.current = gl;
// compile / link (same as vanilla snippet)
// ... (compile VS + FS, link prog, upload quad, get uniforms) ...
let accTime = 0, lastTs = performance.now(), frameN = 0;
let rafId;
const mouse = { x: -9999, y: -9999 };
const reduced = window.matchMedia(
"(prefers-reduced-motion: reduce)").matches;
const isMobile = window.innerWidth < 768 || "ontouchstart" in window;
function resize() {
canvas.width = canvas.offsetWidth;
canvas.height = canvas.offsetHeight;
gl.viewport(0, 0, canvas.width, canvas.height);
}
const ro = new ResizeObserver(resize);
ro.observe(canvas);
resize();
function render(ts) {
rafId = requestAnimationFrame(render);
frameN++;
if (isMobile && frameN % 2 === 0) return;
const dt = (ts - lastTs) * 0.001;
lastTs = ts;
const { speed, bloom, mouseStr } = stateRef.current;
if (!reduced) accTime += dt * speed;
// gl.uniform... (u_time, u_resolution, u_mouse, u_bloom, u_mouse_str)
gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4);
}
rafId = requestAnimationFrame(render);
return () => {
cancelAnimationFrame(rafId);
ro.disconnect();
};
}, []);
return (
<div className="neuro-hero">
<canvas ref={canvasRef} className="neuro-canvas" />
<div className="neuro-card">
<h2>Hero heading</h2>
<p>Subtitle copy here.</p>
</div>
</div>
);
}
AIへの指示テンプレート AI Prompt Template
以下のテンプレートをコピーしてAIアシスタントに貼り付けると、このパターンの実装を依頼できます。 Copy the template below and paste it into your AI assistant to request an implementation of this pattern.