M-014 Background complex

How to create a Neuro Noise WebGL shader hero background Neuro Noise WebGL Shader(FV背景)の作り方

A WebGL hero background using ridged simplex noise to render deep-green to cyan beams, with mouse-driven drift. Simplexノイズのリッジ発光で、深緑→シアンのビームが流れるWebGL背景。マウスで揺らぎを操作。

ライブデモ Live Demo

Neuro Noise

Neural signal backdrop

WebGL simplex shader with mouse-reactive ridged glow — zero assets, pure GLSL.


概要・用途・特徴 Overview, Usage & Features

何ができるか What it does

A WebGL hero background using ridged simplex noise to render deep-green to cyan beams, with mouse-driven drift.

Simplexノイズのリッジ発光で、深緑→シアンのビームが流れるWebGL背景。マウスで揺らぎを操作。

どこで使うか Where to use

Hero sections, full-screen backgrounds, technology landing pages, art-focused sites

ヒーローセクション、フルスクリーン背景、テクノロジー系ランディングページ、アート系サイト

特徴 Key features

Real-time WebGL fragment shader producing organic flowing noise for a hero background. Runs on the GPU, so it never blocks the main thread. Uniforms are exposed for speed, bloom, and mouse strength. Falls back to a CSS gradient when WebGL is unavailable.

リアルタイムWebGLフラグメントシェーダーによるオーガニックなフローノイズのヒーロー背景。GPUで実行するためメインスレッドをブロックしない。速度・ブルーム・マウス強度のuniformを公開。WebGL非対応時はCSSグラデーションにフォールバック。
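The availability check behind that CSS fallback can be sketched in plain JavaScript. `getWebGL` and `applyFallback` are hypothetical helper names; the React component below inlines the same logic:

```javascript
// Sketch of the WebGL availability check behind the CSS fallback.
function getWebGL(canvas) {
  // Standard context first, then the legacy prefixed one.
  return (
    canvas.getContext("webgl") ||
    canvas.getContext("experimental-webgl") ||
    null
  );
}

function applyFallback(canvas) {
  const gl = getWebGL(canvas);
  if (!gl) {
    // Tag the wrapper so the CSS gradient fallback takes over.
    canvas.parentElement?.classList.add("neuro-noise--fallback");
    return false;
  }
  return true;
}
```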

調整可能パラメータ Adjustable Parameters

Parameter   Default   Description
speed       0.5       Multiplier applied to the per-frame time increment (flow speed)
bloom       3.5       Exponent sharpening the ridged beams (higher = thinner, brighter)
mouseStr    0.15      Strength of the mouse-driven drift on the noise field
height      320       Height of the hero container in pixels (React prop)

実装コード Implementation Code

// react/M-014.jsx
import { useEffect, useRef } from "react";
import "./M-014.css";

const VS = `attribute vec2 a_pos; void main(){gl_Position=vec4(a_pos,0.,1.);}`;
// (paste the full FRAG string from the complete implementation below)
const FS = `/* ... full fragment shader source ... */`;

export default function NeuroNoise({
  speed    = 0.5,
  bloom    = 3.5,
  mouseStr = 0.15,
}) {
  const canvasRef = useRef(null);
  const glRef     = useRef(null);
  const stateRef  = useRef({ speed, bloom, mouseStr });

  useEffect(() => {
    stateRef.current = { speed, bloom, mouseStr };
  }, [speed, bloom, mouseStr]);

  useEffect(() => {
    const canvas = canvasRef.current;
    const gl = canvas.getContext("webgl");
    if (!gl) {
      canvas.parentElement.classList.add("neuro-fallback");
      return;
    }
    glRef.current = gl;

    // compile VS + FS, link program, upload quad, get uniforms
    // (identical to the full implementation below)

    let accTime = 0, lastTs = performance.now(), frameN = 0;
    let rafId;
    const mouse = { x: -9999, y: -9999 };
    const reduced = window.matchMedia(
      "(prefers-reduced-motion: reduce)").matches;
    const isMobile = window.innerWidth < 768;

    function resize() {
      canvas.width  = canvas.offsetWidth;
      canvas.height = canvas.offsetHeight;
      gl.viewport(0, 0, canvas.width, canvas.height);
    }
    const ro = new ResizeObserver(resize);
    ro.observe(canvas);
    resize();

    function render(ts) {
      rafId = requestAnimationFrame(render);
      frameN++;
      if (isMobile && frameN % 2 === 0) return;
      const dt = (ts - lastTs) * 0.001;
      lastTs = ts;
      const { speed, bloom, mouseStr } = stateRef.current;
      if (!reduced) accTime += dt * speed;
      // gl.uniform... (u_time, u_resolution, u_mouse, u_bloom, u_mouse_str)
      gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4);
    }
    rafId = requestAnimationFrame(render);

    return () => {
      cancelAnimationFrame(rafId);
      ro.disconnect();
    };
  }, []);

  return (
    <div className="neuro-hero">
      <canvas ref={canvasRef} className="neuro-canvas" />
      <div className="neuro-card">
        <h2>Hero heading</h2>
        <p>Subtitle copy here.</p>
      </div>
    </div>
  );
}
.neuro-hero {
  position: relative;
  overflow: hidden;
  border-radius: 20px;
  height: 320px;
}
.neuro-canvas {
  position: absolute;
  inset: 0;
  width: 100%;
  height: 100%;
  display: block;
}
/* WebGL fallback */
.neuro-hero.neuro-fallback {
  background: linear-gradient(135deg, #012010, #001208);
}
.neuro-hero.neuro-fallback .neuro-canvas { display: none; }
/* Card sits above canvas */
.neuro-card {
  position: absolute;
  inset: 0;
  display: flex;
  flex-direction: column;
  justify-content: center;
  padding: 40px;
  pointer-events: none;
}
import { useEffect, useRef } from "react";
import "./M-014.css";

const VERT = "attribute vec2 a_pos;void main(){gl_Position=vec4(a_pos,0.0,1.0);}";

const FRAG = [
  "precision highp float;",
  "uniform float u_time;",
  "uniform vec2  u_resolution;",
  "uniform vec2  u_mouse;",
  "uniform float u_bloom;",
  "uniform float u_mouse_str;",
  "vec3 permute(vec3 x){return mod(((x*34.0)+1.0)*x,289.0);}",
  "float snoise(vec2 v){",
  "  const vec4 C=vec4(0.211324865405187,0.366025403784439,-0.577350269189626,0.024390243902439);",
  "  vec2 i=floor(v+dot(v,C.yy));",
  "  vec2 x0=v-i+dot(i,C.xx);",
  "  vec2 i1=(x0.x>x0.y)?vec2(1.0,0.0):vec2(0.0,1.0);",
  "  vec4 x12=x0.xyxy+C.xxzz; x12.xy-=i1;",
  "  i=mod(i,289.0);",
  "  vec3 p=permute(permute(i.y+vec3(0.0,i1.y,1.0))+i.x+vec3(0.0,i1.x,1.0));",
  "  vec3 m=max(0.5-vec3(dot(x0,x0),dot(x12.xy,x12.xy),dot(x12.zw,x12.zw)),0.0);",
  "  m=m*m; m=m*m;",
  "  vec3 x=2.0*fract(p*C.www)-1.0;",
  "  vec3 h=abs(x)-0.5;",
  "  vec3 a0=x-floor(x+0.5);",
  "  m*=1.79284291400159-0.85373472095314*(a0*a0+h*h);",
  "  vec3 g;",
  "  g.x=a0.x*x0.x+h.x*x0.y;",
  "  g.yz=a0.yz*x12.xz+h.yz*x12.yw;",
  "  return 130.0*dot(m,g);",
  "}",
  "void main(){",
  "  vec2 uv=gl_FragCoord.xy/u_resolution;",
  "  float asp=u_resolution.x/u_resolution.y;",
  "  vec2 st=vec2(uv.x*asp,uv.y);",
  "  vec2 mUV=vec2((u_mouse.x/u_resolution.x)*asp,u_mouse.y/u_resolution.y);",
  "  float md=distance(st,mUV);",
  "  float mi=u_mouse_str*exp(-md*3.5);",
  "  float t=u_time;",
  "  vec2 p1=st*2.2+vec2(t*0.18,t*0.10)+vec2(mi*0.40,mi*-0.30);",
  "  float r1=1.0-abs(snoise(p1)); r1*=r1;",
  "  vec2 p2=st*4.0+vec2(-t*0.13,t*0.20)+vec2(mi*-0.30,mi*0.50);",
  "  float r2=1.0-abs(snoise(p2)); r2*=r2;",
  "  float beam=pow(max(r1*r2,0.0),u_bloom);",
  "  vec3 bg=vec3(0.008,0.055,0.038);",
  "  vec3 mid=vec3(0.0,0.82,0.68);",
  "  vec3 hi=vec3(0.80,1.0,1.0);",
  "  vec3 col=mix(bg,mid,smoothstep(0.0,0.55,beam));",
  "  col=mix(col,hi,smoothstep(0.55,1.0,beam));",
  "  float vig=1.0-smoothstep(0.35,1.1,length(uv-0.5)*1.7);",
  "  col*=mix(0.45,1.0,vig);",
  "  gl_FragColor=vec4(col,1.0);",
  "}",
].join("\n");

function mkShader(gl, type, src) {
  const s = gl.createShader(type);
  gl.shaderSource(s, src);
  gl.compileShader(s);
  // Surface GLSL compile errors instead of failing silently.
  if (!gl.getShaderParameter(s, gl.COMPILE_STATUS)) {
    console.error(gl.getShaderInfoLog(s));
  }
  return s;
}

export default function NeuroNoise({
  speed    = 0.5,
  bloom    = 3.5,
  mouseStr = 0.15,
  height   = 320,
  children,
}) {
  const canvasRef  = useRef(null);
  const stateRef   = useRef({ speed, bloom, mouseStr });

  useEffect(() => {
    stateRef.current = { speed, bloom, mouseStr };
  }, [speed, bloom, mouseStr]);

  useEffect(() => {
    const canvas = canvasRef.current;
    if (!canvas) return;

    const gl =
      canvas.getContext("webgl") ||
      canvas.getContext("experimental-webgl");

    if (!gl) {
      canvas.parentElement?.classList.add("neuro-noise--fallback");
      return;
    }

    const prog = gl.createProgram();
    gl.attachShader(prog, mkShader(gl, gl.VERTEX_SHADER,   VERT));
    gl.attachShader(prog, mkShader(gl, gl.FRAGMENT_SHADER, FRAG));
    gl.linkProgram(prog);
    // Surface link errors during development instead of failing silently.
    if (!gl.getProgramParameter(prog, gl.LINK_STATUS)) {
      console.error(gl.getProgramInfoLog(prog));
    }
    gl.useProgram(prog);

    const buf = gl.createBuffer();
    gl.bindBuffer(gl.ARRAY_BUFFER, buf);
    gl.bufferData(
      gl.ARRAY_BUFFER,
      new Float32Array([-1, -1, 1, -1, -1, 1, 1, 1]),
      gl.STATIC_DRAW
    );
    const aPos = gl.getAttribLocation(prog, "a_pos");
    gl.enableVertexAttribArray(aPos);
    gl.vertexAttribPointer(aPos, 2, gl.FLOAT, false, 0, 0);

    const uTime     = gl.getUniformLocation(prog, "u_time");
    const uRes      = gl.getUniformLocation(prog, "u_resolution");
    const uMouse    = gl.getUniformLocation(prog, "u_mouse");
    const uBloom    = gl.getUniformLocation(prog, "u_bloom");
    const uMouseStr = gl.getUniformLocation(prog, "u_mouse_str");

    let accTime = 0;
    let lastTs  = performance.now();
    let frameN  = 0;
    let rafId;
    const mouse = { x: -9999, y: -9999 };

    const isMobile = window.innerWidth < 768 || "ontouchstart" in window;
    const reduced  = window.matchMedia("(prefers-reduced-motion: reduce)").matches;

    function resize() {
      canvas.width  = canvas.offsetWidth;
      canvas.height = canvas.offsetHeight;
      gl.viewport(0, 0, canvas.width, canvas.height);
    }
    const ro = new ResizeObserver(resize);
    ro.observe(canvas);
    resize();

    const onMove = (e) => {
      const r = canvas.getBoundingClientRect();
      mouse.x = e.clientX - r.left;
      mouse.y = canvas.height - (e.clientY - r.top);
    };
    const onLeave = () => { mouse.x = -9999; mouse.y = -9999; };
    const onTouch = (e) => {
      const r = canvas.getBoundingClientRect();
      const t = e.touches[0];
      mouse.x = t.clientX - r.left;
      mouse.y = canvas.height - (t.clientY - r.top);
    };

    const el = canvas.parentElement;
    el?.addEventListener("mousemove", onMove);
    el?.addEventListener("mouseleave", onLeave);
    el?.addEventListener("touchmove", onTouch, { passive: true });
    el?.addEventListener("touchend", onLeave, { passive: true });

    function render(ts) {
      rafId = requestAnimationFrame(render);
      frameN++;
      if (isMobile && frameN % 2 === 0) return;
      const dt = (ts - lastTs) * 0.001;
      lastTs = ts;
      const { speed, bloom, mouseStr } = stateRef.current;
      if (!reduced) accTime += dt * speed;
      gl.uniform1f(uTime,     accTime);
      gl.uniform2f(uRes,      canvas.width, canvas.height);
      gl.uniform2f(uMouse,    mouse.x, mouse.y);
      gl.uniform1f(uBloom,    bloom);
      gl.uniform1f(uMouseStr, mouseStr);
      gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4);
    }
    rafId = requestAnimationFrame(render);

    return () => {
      cancelAnimationFrame(rafId);
      ro.disconnect();
      el?.removeEventListener("mousemove", onMove);
      el?.removeEventListener("mouseleave", onLeave);
      el?.removeEventListener("touchmove", onTouch);
      el?.removeEventListener("touchend", onLeave);
      gl.deleteProgram(prog);
      gl.deleteBuffer(buf);
    };
  }, []);

  return (
    <div className="neuro-noise" style={{ height }}>
      <canvas ref={canvasRef} className="neuro-noise__canvas" />
      {children && <div className="neuro-noise__content">{children}</div>}
    </div>
  );
}
.neuro-noise {
  position: relative;
  overflow: hidden;
  border-radius: 20px;
}

.neuro-noise__canvas {
  position: absolute;
  inset: 0;
  width: 100%;
  height: 100%;
  display: block;
}

.neuro-noise--fallback {
  background: linear-gradient(135deg, #012010 0%, #001208 100%);
}

.neuro-noise--fallback .neuro-noise__canvas {
  display: none;
}

.neuro-noise__content {
  position: absolute;
  inset: 0;
  display: flex;
  flex-direction: column;
  align-items: flex-start;
  justify-content: center;
  padding: clamp(24px, 5vw, 44px);
  pointer-events: none;
  color: #e8fff8;
}

仕組みとカスタマイズ How It Works & Customization

仕組み How it works

A <canvas> element fills the hero container. A WebGL context runs a custom GLSL fragment shader implementing 2D simplex noise, animated by a time uniform that requestAnimationFrame increments each frame. Each pixel's color is computed as a function of its UV coordinates and time, producing the flowing organic pattern.

<canvas>要素がヒーローコンテナを満たします。WebGLコンテキストがカスタムGLSLフラグメントシェーダーを実行し、requestAnimationFrameで毎フレームインクリメントされる時間uniformでアニメートされた2D simplex noiseを実装。各ピクセルの色はUV座標と時間の関数として計算され、流れるオーガニックパターンを生成します。
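The "ridged" glow comes from folding the raw simplex value n ∈ [-1, 1] around zero, so brightness peaks where the noise crosses zero. The shader's shaping steps, transcribed from the GLSL into plain JavaScript for clarity (function names are illustrative):

```javascript
// Mirrors the GLSL `r = 1.0 - abs(snoise(p)); r *= r;` — a ridged
// remap of raw simplex noise n in [-1, 1], peaking at 1 where n == 0.
function ridge(n) {
  const r = 1.0 - Math.abs(n);
  return r * r;
}

// Mirrors `beam = pow(max(r1 * r2, 0.0), u_bloom);` — two ridged
// octaves multiplied, then sharpened by the bloom exponent.
function beam(n1, n2, bloom) {
  return Math.pow(Math.max(ridge(n1) * ridge(n2), 0), bloom);
}
```

Higher bloom values push mid-tones toward black, which is why the default of 3.5 yields thin, bright filaments rather than broad washes.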

カスタマイズ方法 Customization

Edit the bg, mid, and hi vec3 constants in the fragment shader to shift the palette from the default deep green and cyan toward warm ambers. Lower the speed prop to slow the flow. Raise the noise-frequency multipliers (st * 2.2 and st * 4.0 in the shader) for finer grain. Render the canvas at a reduced resolution (e.g., 50%) and let CSS stretch it to full size for a performance boost.

フラグメントシェーダー内のbg・mid・hiのvec3定数を変更してパレットを深緑→シアンからウォームアンバーにシフト。speedプロパティを下げてフローを遅くする。ノイズ周波数の係数(シェーダー内のst * 2.2とst * 4.0)を上げて細かい粒度に。パフォーマンス向上のためキャンバスを低解像度(例:50%)でレンダリングしCSSでフルサイズに拡大。
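The resolution tip can be sketched as a small sizing helper (`scaledSize` is a hypothetical name; wiring it into resize() is left to the reader):

```javascript
// Hypothetical sizing helper for the resolution tip above: shrink the
// drawing buffer while CSS keeps the canvas stretched over the hero.
function scaledSize(cssWidth, cssHeight, renderScale) {
  return {
    width:  Math.max(1, Math.round(cssWidth * renderScale)),
    height: Math.max(1, Math.round(cssHeight * renderScale)),
  };
}
// In resize(): assign the result to canvas.width / canvas.height and
// pass it to gl.viewport; at renderScale = 0.5 the GPU shades roughly
// 75% fewer fragments.
```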

注意点 Caveats

WebGL requires a GPU-capable browser and is unavailable in some server-side rendering contexts. Always provide a CSS gradient fallback. Heavy shader computations can drain mobile battery; consider disabling or reducing quality on low-end devices detected via navigator.hardwareConcurrency.

WebGLはGPU対応ブラウザが必要で、一部のSSRコンテキストでは利用できません。常にCSSグラデーションのフォールバックを提供してください。重いシェーダー計算はモバイルバッテリーを消耗させるため、navigator.hardwareConcurrencyで検出した低スペック端末では無効化または品質低下を検討してください。
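One way to act on that caveat, as a hedged sketch: `shaderQuality` is a hypothetical helper, and the core-count thresholds are illustrative, not benchmarks:

```javascript
// Hypothetical heuristic for the caveat above. hardwareConcurrency is
// undefined in some browsers, so default conservatively.
function shaderQuality(nav) {
  const cores = nav.hardwareConcurrency || 2;
  if (cores <= 2) return "off";  // use the CSS gradient fallback
  if (cores <= 4) return "half"; // e.g. render at 50% resolution
  return "full";
}
```

Call it once with `navigator` at mount time and pick a render scale (or skip WebGL entirely) before starting the animation loop.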

よくある質問 Frequently Asked Questions

How to customize the neuro noise webgl shader (hero background)? Neuro Noise WebGL Shader (Hero Background)をカスタマイズするには?

Adjust the speed, bloom, and mouseStr props for motion and glow, and edit the bg, mid, and hi color constants in the fragment shader to change the palette. The container height and border radius are plain CSS. See the Adjustable Parameters section for defaults.

speed・bloom・mouseStrのpropsで動きと発光を調整し、フラグメントシェーダー内のbg・mid・hiカラー定数でパレットを変更してください。コンテナの高さや角丸は通常のCSSです。デフォルト値は調整可能パラメータセクションを参照してください。
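When swapping the shader's color constants, a small conversion helper keeps CSS colors and GLSL literals in sync (`hexToVec3` is a hypothetical helper, shown as a sketch):

```javascript
// Hypothetical helper: convert a CSS hex color to a GLSL vec3 literal,
// for editing the bg / mid / hi constants in the fragment shader.
function hexToVec3(hex) {
  const n = parseInt(hex.replace("#", ""), 16);
  const r = ((n >> 16) & 255) / 255;
  const g = ((n >> 8) & 255) / 255;
  const b = (n & 255) / 255;
  return `vec3(${r.toFixed(3)},${g.toFixed(3)},${b.toFixed(3)})`;
}
```

For example, feeding a warm amber such as "#ff9a3c" produces a vec3 literal you can paste in as the new mid color.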

How to use the neuro noise webgl shader (hero background) in React? ReactでNeuro Noise WebGL Shader (Hero Background)を使うには?

Import the provided React component and its CSS file. The component accepts props for customization. Check the React code section for the full implementation and available props.

提供されているReactコンポーネントとCSSファイルをインポートしてください。コンポーネントのpropsでカスタマイズできます。完全な実装と利用可能なpropsはReactコードセクションを参照してください。

What are the performance implications of neuro noise webgl shader (hero background)? Neuro Noise WebGL Shader (Hero Background)のパフォーマンスへの影響は?

All noise computation runs in a GPU fragment shader, so the main thread only issues one draw call per frame and no layout work occurs. The component renders every other frame on mobile, freezes the animation under prefers-reduced-motion, and falls back to a static CSS gradient when WebGL is unavailable. The shader itself can still tax mobile GPUs, so consider lowering the render resolution on low-end devices.

ノイズ計算はすべてGPUのフラグメントシェーダーで実行されるため、メインスレッドは毎フレーム1回のドローコールを発行するだけでレイアウト処理は発生しません。モバイルでは1フレームおきに描画し、prefers-reduced-motion設定時はアニメーションを停止し、WebGL非対応時は静的なCSSグラデーションにフォールバックします。シェーダー自体はモバイルGPUに負荷をかけるため、低スペック端末ではレンダリング解像度を下げることを検討してください。

AIへの指示テンプレート AI Prompt Template

以下をAIに貼り付けると実装を依頼できます。 Paste the following into your AI assistant to request implementation.