Compare commits


41 Commits

Author SHA1 Message Date
7d59b1eed5 docs: add project status, deferred items, and CLAUDE.md
- STATUS.md: completed phases, crate structure, test counts, next steps
- DEFERRED.md: simplified/postponed items per phase
- CLAUDE.md: build rules, wgpu quirks, project conventions

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-24 21:42:49 +09:00
080ac92fbb fix(renderer): merge IBL into group(3) to stay within max_bind_groups limit of 4
wgpu's default max_bind_groups is 4 (groups 0-3), but the PBR shader was
using group(4) for BRDF LUT bindings. This merges IBL bindings into the
shadow bind group (group 3) at binding slots 3-4, removes the standalone
IBL bind group layout/creation, and updates all examples accordingly.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-24 21:38:45 +09:00
9202bfadef feat: add IBL demo with normal mapping and procedural environment lighting
Fix pbr_demo, multi_light_demo, and shadow_demo to use the new 7-param
create_pbr_pipeline with PBR texture bind group (4-entry: albedo+normal)
and IBL bind group. Create ibl_demo showcasing a 7x7 metallic/roughness
sphere grid with IBL-based ambient lighting via BRDF LUT integration.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-24 21:34:06 +09:00
5232552aa4 feat(renderer): add normal mapping and procedural IBL to PBR shader
- Add tangent input (location 3) and TBN computation in vertex shader
- Add normal map sampling (group 1, bindings 2-3) for tangent-space normal mapping
- Add BRDF LUT binding (group 4, bindings 0-1) for specular IBL
- Add procedural sky environment function for diffuse/specular IBL
- Replace flat ambient with split-sum IBL approximation
- Add pbr_texture_bind_group_layout (4 entries: albedo + normal)
- Add create_pbr_texture_bind_group helper and flat_normal_1x1 texture
- Update create_pbr_pipeline to accept ibl_layout parameter (group 4)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-24 21:28:32 +09:00
ea8af38263 feat(renderer): add BRDF LUT generator and IBL resources
Implements CPU-based BRDF LUT generation using the split-sum IBL
approximation (Hammersley sampling, GGX importance sampling, Smith
geometry with IBL k=a²/2). Wraps the 256×256 Rgba8Unorm LUT in
IblResources for GPU upload via wgpu 28.0 API.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-24 21:19:09 +09:00
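The LUT generation above samples the hemisphere with the Hammersley point set, whose second coordinate is the base-2 radical inverse (a bit reversal of the sample index). A minimal sketch of those pieces, with the Smith `k` term written under the convention the commit states (`k = a²/2`); function names here are illustrative, not the crate's actual API:

```rust
/// Van der Corput radical inverse in base 2: reverse the index's bits and
/// scale into [0,1). This is the y coordinate of a Hammersley point.
fn radical_inverse_vdc(bits: u32) -> f32 {
    bits.reverse_bits() as f32 / 4_294_967_296.0 // divide by 2^32
}

/// i-th of n Hammersley sample points in the unit square.
fn hammersley(i: u32, n: u32) -> (f32, f32) {
    (i as f32 / n as f32, radical_inverse_vdc(i))
}

/// Smith geometry k term, following the commit's stated IBL convention k = a^2 / 2.
fn k_ibl(a: f32) -> f32 {
    a * a / 2.0
}

fn main() {
    // x walks linearly over the index; y bit-reverses it.
    assert_eq!(hammersley(0, 4), (0.0, 0.0));
    assert_eq!(hammersley(1, 4), (0.25, 0.5));
    assert_eq!(hammersley(2, 4), (0.5, 0.25));
    assert_eq!(k_ibl(1.0), 0.5);
    println!("ok");
}
```

The low-discrepancy pairs feed GGX importance sampling so the 256×256 LUT converges with far fewer samples than uniform random sampling would need.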
4d7ff5a122 feat(renderer): add tangent to MeshVertex with computation in OBJ parser and sphere generator
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-24 21:18:26 +09:00
88fabf2905 docs: add Phase 4c normal mapping + IBL implementation plan
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-24 21:16:39 +09:00
5f962f376e feat: add shadow demo with directional light shadow mapping and 3x3 PCF
- Add Mat4::orthographic() to voltex_math for light projection
- Fix pbr_demo and multi_light_demo to provide shadow bind group (group 3)
  required by updated PBR pipeline (dummy shadow with size=0 disables it)
- Create shadow_demo with two-pass rendering: shadow depth pass using
  orthographic light projection, then PBR color pass with shadow sampling
- Scene: ground plane, 3 spheres, 2 cubes with directional light

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-24 21:09:27 +09:00
8f962368e9 feat(renderer): integrate shadow map sampling with 3x3 PCF into PBR shader
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-24 21:03:31 +09:00
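3x3 PCF softens shadow edges by comparing the fragment's light-space depth against the nine shadow-map texels around its projected position and averaging the pass/fail results. A CPU sketch of the idea (the real work happens in WGSL with a comparison sampler; this illustrates only the filter logic, with an assumed bias parameter):

```rust
/// 3x3 percentage-closer filter over a CPU-side shadow map. Returns the
/// fraction of the 9 neighbouring texels at which the fragment is lit
/// (1.0 = fully lit, 0.0 = fully shadowed). Edges are clamped.
fn pcf_3x3(map: &[Vec<f32>], x: usize, y: usize, depth: f32, bias: f32) -> f32 {
    let (h, w) = (map.len() as i32, map[0].len() as i32);
    let mut lit = 0.0;
    for dy in -1..=1i32 {
        for dx in -1..=1i32 {
            let sx = (x as i32 + dx).clamp(0, w - 1) as usize;
            let sy = (y as i32 + dy).clamp(0, h - 1) as usize;
            // Lit when the fragment is not behind the stored occluder depth.
            if depth - bias <= map[sy][sx] {
                lit += 1.0;
            }
        }
    }
    lit / 9.0
}

fn main() {
    let map = vec![vec![0.5f32; 5]; 5]; // uniform occluder depth 0.5
    assert_eq!(pcf_3x3(&map, 2, 2, 0.4, 0.005), 1.0); // in front: lit
    assert_eq!(pcf_3x3(&map, 2, 2, 0.9, 0.005), 0.0); // behind: shadowed
    assert_eq!(pcf_3x3(&map, 0, 0, 0.4, 0.005), 1.0); // corner clamping works
    println!("ok");
}
```

On the GPU the per-texel comparison is done by the hardware comparison sampler; the shader only offsets the UV by one texel in each direction and averages.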
b5a6159526 feat(renderer): add ShadowMap, shadow depth shader, and shadow pipeline
Implements ShadowMap (2048x2048 Depth32Float texture with comparison sampler),
shadow_shader.wgsl (depth-only vertex shader), shadow_pipeline (front-face
culling, depth bias constant=2/slope=2.0), and associated uniform types.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-24 21:01:51 +09:00
1ce6acf80c docs: add Phase 4b-2 shadow mapping implementation plan
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-24 21:00:03 +09:00
62f505c838 fix(renderer): align LightsUniform to match WGSL vec3 padding (1056 bytes)
WGSL vec3<f32> requires 16-byte alignment, causing the shader to expect
1056 bytes while Rust struct was 1040. Added padding fields to match.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-24 20:57:10 +09:00
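The mismatch comes from WGSL's std140-style layout: `vec3<f32>` holds 12 bytes of data but aligns to 16, so the Rust mirror struct needs explicit filler after each `vec3` field. An illustrative sketch (field names are assumed for illustration, not the crate's actual `LightData`):

```rust
// Each vec3 field must be padded to 16 bytes on the Rust side, or followed
// by a deliberately placed scalar that occupies the padding slot.
#[repr(C)]
#[derive(Clone, Copy)]
struct LightData {
    position: [f32; 3],
    _pad0: f32,     // fills the vec3 alignment slot
    color: [f32; 3],
    intensity: f32, // a real scalar can occupy the slot instead of padding
}

fn main() {
    // Without the two 4-byte fillers this struct would be 24 bytes and
    // disagree with the 32-byte stride the shader expects.
    assert_eq!(std::mem::size_of::<LightData>(), 32);
    println!("ok");
}
```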
fdfe4aaf5f feat: add multi-light demo with orbiting point lights and spot light
Fix pbr_demo to use LightsUniform/LightData instead of old LightUniform.
Create multi_light_demo with 5 PBR spheres (varying metallic), a ground
plane, 4 colored orbiting point lights, a directional fill light, and a
spot light from above.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-24 20:55:24 +09:00
b0934970b9 feat(renderer): add multi-light system with LightsUniform and updated PBR shader
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-24 20:50:13 +09:00
297b3c633f docs: add Phase 4b-1 multi-light implementation plan
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-24 20:48:06 +09:00
07497c3d80 feat: add PBR demo with metallic/roughness sphere grid
7x7 grid of spheres demonstrating PBR material variation:
metallic increases along X axis, roughness along Y axis.
Uses dynamic UBO pattern for both camera and material uniforms.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-24 20:44:29 +09:00
b09e1df878 feat(renderer): add PBR material, sphere generator, Cook-Torrance shader, and PBR pipeline
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-24 20:41:02 +09:00
cca50c8bc2 docs: add Phase 4a PBR rendering implementation plan
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-24 20:38:54 +09:00
b0c51aaa45 feat: add asset_demo with Handle-based mesh management
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-24 20:33:03 +09:00
9a411e72da feat(asset): add voltex_asset crate with Handle, AssetStorage, and Assets manager
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-24 20:29:55 +09:00
ee22d3e62c docs: add Phase 3c asset manager implementation plan
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-24 20:27:16 +09:00
801ced197a feat: add hierarchy_demo with solar system scene graph
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-24 20:23:59 +09:00
c24c60d080 feat(ecs): add scene serialization/deserialization (.vscn format)
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-24 20:21:11 +09:00
135364ca6d feat(ecs): add WorldTransform propagation through parent-child hierarchy
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-24 20:20:46 +09:00
3e475c93dd feat(ecs): add Parent/Children hierarchy with add_child, remove_child, despawn_recursive
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-24 20:19:28 +09:00
504b7b4d6b docs: add Phase 3b scene graph implementation plan
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-24 20:18:10 +09:00
ecf876d249 fix(many_cubes): use dynamic uniform buffer for per-entity rendering
Previous approach called write_buffer inside render pass which doesn't
work — GPU only sees the last value at submit time. Now pre-computes all
entity uniforms into a dynamic UBO and uses dynamic offsets per draw call.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-24 20:14:18 +09:00
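The dynamic UBO pattern hinges on one detail: every dynamic offset must be a multiple of the device's minimum uniform-buffer offset alignment (commonly 256 bytes), so the per-entity stride is the uniform size rounded up to that alignment. A minimal sketch of the offset math (the 80-byte size is a hypothetical example):

```rust
/// Round `size` up to the device's min_uniform_buffer_offset_alignment so
/// each entity's uniform slice starts on a legal dynamic offset.
fn aligned_stride(size: u64, alignment: u64) -> u64 {
    (size + alignment - 1) / alignment * alignment
}

fn main() {
    let stride = aligned_stride(80, 256); // e.g. an 80-byte model uniform
    assert_eq!(stride, 256);
    assert_eq!(aligned_stride(256, 256), 256);
    assert_eq!(aligned_stride(257, 256), 512);
    // Entity i writes its uniform at byte offset i * stride in the staging
    // buffer, then binds with that offset as the dynamic offset per draw.
    assert_eq!(2 * stride, 512);
    println!("ok");
}
```

All uniforms are written into the buffer once before the render pass begins, which is what makes the per-draw dynamic offsets visible to the GPU at submit time.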
19e37f7f96 feat: add many_cubes ECS demo with 400 entities
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-24 20:09:44 +09:00
59753b2264 feat(ecs): add World with type-erased storage, queries, and Transform component
Implements Task 3 (World: spawn/despawn, add/get/remove components, query/query2
with type-erased HashMap<TypeId, Box<dyn ComponentStorage>>) and Task 4 (Transform:
position/rotation/scale with matrix() building T*RotY*RotX*RotZ*S). 25 tests pass.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-24 20:07:17 +09:00
2d64d226a2 feat(ecs): add voltex_ecs crate with Entity, EntityAllocator, and SparseSet
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-24 20:05:15 +09:00
96cebecc6d docs: add Phase 3a ECS implementation plan
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-24 20:02:50 +09:00
df06615de4 feat: add model viewer demo with OBJ loading, Blinn-Phong lighting, FPS camera
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-24 19:56:28 +09:00
71f6081dc9 feat(renderer): add BMP texture loader and GPU texture upload
Implements parse_bmp (24/32-bit uncompressed BMP to RGBA), GpuTexture with
wgpu 28.0 write_texture API (TexelCopyTextureInfo/TexelCopyBufferLayout),
bind_group_layout, white_1x1 fallback, and 3 BMP parser unit tests.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-24 19:52:35 +09:00
04ca5df062 feat(renderer): add Blinn-Phong shader, light uniforms, mesh pipeline
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-24 19:50:51 +09:00
ffd6d3786b feat(renderer): add Camera and FpsController
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-24 19:49:23 +09:00
c7d089d970 feat(renderer): implement OBJ parser with triangle/quad support
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-24 19:49:23 +09:00
78dcc30258 feat(renderer): add MeshVertex, Mesh, and depth buffer support
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-24 19:47:59 +09:00
82e3c19b53 feat(math): add Mat4 with transforms, look_at, perspective
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-24 19:46:28 +09:00
c644b784a6 feat(math): add Vec2 and Vec4 types
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-24 19:45:28 +09:00
870c412270 docs: add Phase 2 rendering basics implementation plan
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-24 19:41:12 +09:00
81ba6f7e5d feat: implement Phase 1 foundation - triangle rendering
- voltex_math: Vec3 with arithmetic ops, dot, cross, length, normalize
- voltex_platform: VoltexWindow (winit wrapper), InputState (keyboard/mouse),
  GameTimer (fixed timestep + variable render loop)
- voltex_renderer: GpuContext (wgpu init), Vertex + buffer layout,
  WGSL shader, render pipeline
- triangle example: colored triangle with ESC to exit

All 13 tests passing. Window renders RGB triangle on dark background.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-24 19:34:39 +09:00
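The Vec3 operations listed for voltex_math follow the standard definitions; a minimal self-contained sketch (the crate's real API may differ in naming and operator support):

```rust
#[derive(Clone, Copy, Debug, PartialEq)]
struct Vec3 {
    x: f32,
    y: f32,
    z: f32,
}

impl Vec3 {
    fn new(x: f32, y: f32, z: f32) -> Self {
        Self { x, y, z }
    }
    /// Sum of componentwise products.
    fn dot(self, o: Self) -> f32 {
        self.x * o.x + self.y * o.y + self.z * o.z
    }
    /// Right-handed cross product.
    fn cross(self, o: Self) -> Self {
        Self::new(
            self.y * o.z - self.z * o.y,
            self.z * o.x - self.x * o.z,
            self.x * o.y - self.y * o.x,
        )
    }
    fn length(self) -> f32 {
        self.dot(self).sqrt()
    }
    fn normalize(self) -> Self {
        let l = self.length();
        Self::new(self.x / l, self.y / l, self.z / l)
    }
}

fn main() {
    let x = Vec3::new(1.0, 0.0, 0.0);
    let y = Vec3::new(0.0, 1.0, 0.0);
    assert_eq!(x.cross(y), Vec3::new(0.0, 0.0, 1.0)); // right-handed basis
    assert_eq!(x.dot(y), 0.0);
    assert!((Vec3::new(3.0, 4.0, 0.0).length() - 5.0).abs() < 1e-6);
    assert!((Vec3::new(3.0, 4.0, 0.0).normalize().length() - 1.0).abs() < 1e-6);
    println!("ok");
}
```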
74 changed files with 16930 additions and 8 deletions

CLAUDE.md Normal file

@@ -0,0 +1,27 @@
# Voltex Engine
Rust game engine project, built on wgpu 28.0 + winit 0.30.
## Status
- `docs/STATUS.md` — completed phases, crate structure, test status, next steps
- `docs/DEFERRED.md` — list of simplified/deferred items
## Spec
- `docs/superpowers/specs/2026-03-24-voltex-engine-design.md` — overall engine design
## Implementation plans
- `docs/superpowers/plans/` — detailed implementation plan for each phase
## Build/test
```bash
cargo build --workspace
cargo test --workspace
```
## Rules
- Cargo path: `export PATH="$HOME/.cargo/bin:$PATH"` (Windows bash)
- wgpu 28.0 API: use 28.0-specific fields such as `immediate_size`, `multiview_mask`, `TexelCopyTextureInfo`
- WGSL vec3 alignment: Rust structs need padding after vec3 fields (16-byte alignment)
- max_bind_groups = 4 (groups 0-3); merge resources to stay within 4 groups
- Dynamic UBO pattern: per-entity uniforms use an aligned staging buffer + dynamic offsets
- Existing examples (triangle, model_viewer, etc.) use mesh_shader.wgsl (Blinn-Phong) and are unaffected by the PBR changes

Cargo.lock generated

@@ -157,6 +157,23 @@ dependencies = [
"libloading",
]
[[package]]
name = "asset_demo"
version = "0.1.0"
dependencies = [
"bytemuck",
"env_logger",
"log",
"pollster",
"voltex_asset",
"voltex_ecs",
"voltex_math",
"voltex_platform",
"voltex_renderer",
"wgpu",
"winit",
]
[[package]]
name = "atomic-waker"
version = "1.1.2"
@@ -684,6 +701,37 @@ version = "0.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "dfa686283ad6dd069f105e5ab091b04c62850d3e4cf5d67debad1933f55023df"
[[package]]
name = "hierarchy_demo"
version = "0.1.0"
dependencies = [
"bytemuck",
"env_logger",
"log",
"pollster",
"voltex_ecs",
"voltex_math",
"voltex_platform",
"voltex_renderer",
"wgpu",
"winit",
]
[[package]]
name = "ibl_demo"
version = "0.1.0"
dependencies = [
"bytemuck",
"env_logger",
"log",
"pollster",
"voltex_math",
"voltex_platform",
"voltex_renderer",
"wgpu",
"winit",
]
[[package]]
name = "indexmap"
version = "2.13.0"
@@ -895,6 +943,22 @@ dependencies = [
"libc",
]
[[package]]
name = "many_cubes"
version = "0.1.0"
dependencies = [
"bytemuck",
"env_logger",
"log",
"pollster",
"voltex_ecs",
"voltex_math",
"voltex_platform",
"voltex_renderer",
"wgpu",
"winit",
]
[[package]]
name = "memchr"
version = "2.8.0"
@@ -925,6 +989,36 @@ dependencies = [
"paste",
]
[[package]]
name = "model_viewer"
version = "0.1.0"
dependencies = [
"bytemuck",
"env_logger",
"log",
"pollster",
"voltex_math",
"voltex_platform",
"voltex_renderer",
"wgpu",
"winit",
]
[[package]]
name = "multi_light_demo"
version = "0.1.0"
dependencies = [
"bytemuck",
"env_logger",
"log",
"pollster",
"voltex_math",
"voltex_platform",
"voltex_renderer",
"wgpu",
"winit",
]
[[package]]
name = "naga"
version = "28.0.0"
@@ -1294,6 +1388,21 @@ version = "1.0.15"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "57c0d7b74b563b49d38dae00a0c37d4d6de9b432382b2892f0574ddcae73fd0a"
[[package]]
name = "pbr_demo"
version = "0.1.0"
dependencies = [
"bytemuck",
"env_logger",
"log",
"pollster",
"voltex_math",
"voltex_platform",
"voltex_renderer",
"wgpu",
"winit",
]
[[package]]
name = "percent-encoding"
version = "2.3.2"
@@ -1618,6 +1727,21 @@ dependencies = [
"syn",
]
[[package]]
name = "shadow_demo"
version = "0.1.0"
dependencies = [
"bytemuck",
"env_logger",
"log",
"pollster",
"voltex_math",
"voltex_platform",
"voltex_renderer",
"wgpu",
"winit",
]
[[package]]
name = "shlex"
version = "1.3.0"
@@ -1898,6 +2022,17 @@ version = "0.9.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0b928f33d975fc6ad9f86c8f283853ad26bdd5b10b7f1542aa2fa15e2289105a"
[[package]]
name = "voltex_asset"
version = "0.1.0"
[[package]]
name = "voltex_ecs"
version = "0.1.0"
dependencies = [
"voltex_math",
]
[[package]]
name = "voltex_math"
version = "0.1.0"


@@ -4,13 +4,25 @@ members = [
    "crates/voltex_math",
    "crates/voltex_platform",
    "crates/voltex_renderer",
    "crates/voltex_ecs",
    "crates/voltex_asset",
    "examples/triangle",
    "examples/model_viewer",
    "examples/many_cubes",
    "examples/hierarchy_demo",
    "examples/asset_demo",
    "examples/pbr_demo",
    "examples/multi_light_demo",
    "examples/shadow_demo",
    "examples/ibl_demo",
]
[workspace.dependencies]
voltex_math = { path = "crates/voltex_math" }
voltex_platform = { path = "crates/voltex_platform" }
voltex_renderer = { path = "crates/voltex_renderer" }
voltex_ecs = { path = "crates/voltex_ecs" }
voltex_asset = { path = "crates/voltex_asset" }
wgpu = "28.0"
winit = "0.30"
bytemuck = { version = "1", features = ["derive"] }

assets/cube.obj Normal file

@@ -0,0 +1,28 @@
# assets/cube.obj
v -0.5 -0.5 0.5
v 0.5 -0.5 0.5
v 0.5 0.5 0.5
v -0.5 0.5 0.5
v -0.5 -0.5 -0.5
v 0.5 -0.5 -0.5
v 0.5 0.5 -0.5
v -0.5 0.5 -0.5
vn 0.0 0.0 1.0
vn 0.0 0.0 -1.0
vn 1.0 0.0 0.0
vn -1.0 0.0 0.0
vn 0.0 1.0 0.0
vn 0.0 -1.0 0.0
vt 0.0 0.0
vt 1.0 0.0
vt 1.0 1.0
vt 0.0 1.0
f 1/1/1 2/2/1 3/3/1 4/4/1
f 6/1/2 5/2/2 8/3/2 7/4/2
f 2/1/3 6/2/3 7/3/3 3/4/3
f 5/1/4 1/2/4 4/3/4 8/4/4
f 4/1/5 3/2/5 7/3/5 8/4/5
f 5/1/6 6/2/6 2/3/6 1/4/6


@@ -0,0 +1,6 @@
[package]
name = "voltex_asset"
version = "0.1.0"
edition = "2021"
[dependencies]


@@ -0,0 +1,164 @@
use std::any::TypeId;
use std::collections::HashMap;

use crate::handle::Handle;
use crate::storage::{AssetStorage, AssetStorageDyn};

pub struct Assets {
    storages: HashMap<TypeId, Box<dyn AssetStorageDyn>>,
}

impl Assets {
    pub fn new() -> Self {
        Self {
            storages: HashMap::new(),
        }
    }

    fn storage_mut_or_insert<T: 'static>(&mut self) -> &mut AssetStorage<T> {
        self.storages
            .entry(TypeId::of::<T>())
            .or_insert_with(|| Box::new(AssetStorage::<T>::new()))
            .as_any_mut()
            .downcast_mut::<AssetStorage<T>>()
            .unwrap()
    }

    pub fn insert<T: 'static>(&mut self, asset: T) -> Handle<T> {
        self.storage_mut_or_insert::<T>().insert(asset)
    }

    pub fn get<T: 'static>(&self, handle: Handle<T>) -> Option<&T> {
        self.storages
            .get(&TypeId::of::<T>())?
            .as_any()
            .downcast_ref::<AssetStorage<T>>()?
            .get(handle)
    }

    pub fn get_mut<T: 'static>(&mut self, handle: Handle<T>) -> Option<&mut T> {
        self.storages
            .get_mut(&TypeId::of::<T>())?
            .as_any_mut()
            .downcast_mut::<AssetStorage<T>>()?
            .get_mut(handle)
    }

    pub fn add_ref<T: 'static>(&mut self, handle: Handle<T>) {
        if let Some(storage) = self
            .storages
            .get_mut(&TypeId::of::<T>())
            .and_then(|s| s.as_any_mut().downcast_mut::<AssetStorage<T>>())
        {
            storage.add_ref(handle);
        }
    }

    pub fn release<T: 'static>(&mut self, handle: Handle<T>) -> bool {
        if let Some(storage) = self
            .storages
            .get_mut(&TypeId::of::<T>())
            .and_then(|s| s.as_any_mut().downcast_mut::<AssetStorage<T>>())
        {
            storage.release(handle)
        } else {
            false
        }
    }

    pub fn count<T: 'static>(&self) -> usize {
        self.storages
            .get(&TypeId::of::<T>())
            .map(|s| s.count())
            .unwrap_or(0)
    }

    pub fn storage<T: 'static>(&self) -> Option<&AssetStorage<T>> {
        self.storages
            .get(&TypeId::of::<T>())?
            .as_any()
            .downcast_ref::<AssetStorage<T>>()
    }

    pub fn storage_mut<T: 'static>(&mut self) -> &mut AssetStorage<T> {
        self.storage_mut_or_insert::<T>()
    }
}

impl Default for Assets {
    fn default() -> Self {
        Self::new()
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    struct Mesh {
        verts: u32,
    }

    struct Texture {
        width: u32,
    }

    #[test]
    fn insert_and_get_different_types() {
        let mut assets = Assets::new();
        let hm = assets.insert(Mesh { verts: 3 });
        let ht = assets.insert(Texture { width: 512 });
        assert_eq!(assets.get(hm).unwrap().verts, 3);
        assert_eq!(assets.get(ht).unwrap().width, 512);
    }

    #[test]
    fn count_per_type() {
        let mut assets = Assets::new();
        assets.insert(Mesh { verts: 3 });
        assets.insert(Mesh { verts: 6 });
        assets.insert(Texture { width: 512 });
        assert_eq!(assets.count::<Mesh>(), 2);
        assert_eq!(assets.count::<Texture>(), 1);
    }

    #[test]
    fn release_through_assets() {
        let mut assets = Assets::new();
        let h = assets.insert(Mesh { verts: 3 });
        assert_eq!(assets.count::<Mesh>(), 1);
        let removed = assets.release(h);
        assert!(removed);
        assert_eq!(assets.count::<Mesh>(), 0);
        assert!(assets.get(h).is_none());
    }

    #[test]
    fn ref_counting_through_assets() {
        let mut assets = Assets::new();
        let h = assets.insert(Mesh { verts: 3 });
        assets.add_ref(h);
        let r1 = assets.release(h);
        assert!(!r1);
        assert!(assets.get(h).is_some());
        let r2 = assets.release(h);
        assert!(r2);
        assert!(assets.get(h).is_none());
    }

    #[test]
    fn storage_access() {
        let mut assets = Assets::new();
        let h = assets.insert(Mesh { verts: 3 });
        {
            let s = assets.storage::<Mesh>().unwrap();
            assert_eq!(s.len(), 1);
            assert_eq!(s.get(h).unwrap().verts, 3);
        }
        {
            let s = assets.storage_mut::<Mesh>();
            s.get_mut(h).unwrap().verts = 9;
        }
        assert_eq!(assets.get(h).unwrap().verts, 9);
    }
}


@@ -0,0 +1,82 @@
use std::fmt;
use std::hash::{Hash, Hasher};
use std::marker::PhantomData;

pub struct Handle<T> {
    pub(crate) id: u32,
    pub(crate) generation: u32,
    _marker: PhantomData<T>,
}

impl<T> Handle<T> {
    pub(crate) fn new(id: u32, generation: u32) -> Self {
        Self {
            id,
            generation,
            _marker: PhantomData,
        }
    }
}

impl<T> Clone for Handle<T> {
    fn clone(&self) -> Self {
        Self {
            id: self.id,
            generation: self.generation,
            _marker: PhantomData,
        }
    }
}

impl<T> Copy for Handle<T> {}

impl<T> PartialEq for Handle<T> {
    fn eq(&self, other: &Self) -> bool {
        self.id == other.id && self.generation == other.generation
    }
}

impl<T> Eq for Handle<T> {}

impl<T> Hash for Handle<T> {
    fn hash<H: Hasher>(&self, state: &mut H) {
        self.id.hash(state);
        self.generation.hash(state);
    }
}

impl<T> fmt::Debug for Handle<T> {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.debug_struct("Handle")
            .field("id", &self.id)
            .field("generation", &self.generation)
            .finish()
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    struct Dummy;

    #[test]
    fn test_handle_copy() {
        let h: Handle<Dummy> = Handle::new(0, 0);
        let h2 = h; // copy
        let h3 = h; // still usable after copy
        assert_eq!(h2, h3);
    }

    #[test]
    fn test_handle_eq() {
        let h1: Handle<Dummy> = Handle::new(1, 2);
        let h2: Handle<Dummy> = Handle::new(1, 2);
        let h3: Handle<Dummy> = Handle::new(1, 3);
        let h4: Handle<Dummy> = Handle::new(2, 2);
        assert_eq!(h1, h2);
        assert_ne!(h1, h3);
        assert_ne!(h1, h4);
    }
}


@@ -0,0 +1,7 @@
pub mod handle;
pub mod storage;
pub mod assets;
pub use handle::Handle;
pub use storage::AssetStorage;
pub use assets::Assets;


@@ -0,0 +1,227 @@
use std::any::Any;

use crate::handle::Handle;

struct AssetEntry<T> {
    asset: T,
    generation: u32,
    ref_count: u32,
}

pub struct AssetStorage<T> {
    entries: Vec<Option<AssetEntry<T>>>,
    /// Last generation issued per slot, kept even after the slot is cleared.
    /// Reading the generation from the (now-empty) entry on reuse would reset
    /// it to 1 every cycle, letting a stale handle alias a later asset.
    generations: Vec<u32>,
    free_ids: Vec<u32>,
}

impl<T> AssetStorage<T> {
    pub fn new() -> Self {
        Self {
            entries: Vec::new(),
            generations: Vec::new(),
            free_ids: Vec::new(),
        }
    }

    pub fn insert(&mut self, asset: T) -> Handle<T> {
        if let Some(id) = self.free_ids.pop() {
            // Bump the slot's generation so all stale handles stay invalid.
            let generation = self.generations[id as usize] + 1;
            self.generations[id as usize] = generation;
            self.entries[id as usize] = Some(AssetEntry {
                asset,
                generation,
                ref_count: 1,
            });
            Handle::new(id, generation)
        } else {
            let id = self.entries.len() as u32;
            self.entries.push(Some(AssetEntry {
                asset,
                generation: 0,
                ref_count: 1,
            }));
            self.generations.push(0);
            Handle::new(id, 0)
        }
    }

    pub fn get(&self, handle: Handle<T>) -> Option<&T> {
        self.entries
            .get(handle.id as usize)?
            .as_ref()
            .filter(|e| e.generation == handle.generation)
            .map(|e| &e.asset)
    }

    pub fn get_mut(&mut self, handle: Handle<T>) -> Option<&mut T> {
        self.entries
            .get_mut(handle.id as usize)?
            .as_mut()
            .filter(|e| e.generation == handle.generation)
            .map(|e| &mut e.asset)
    }

    pub fn add_ref(&mut self, handle: Handle<T>) {
        if let Some(Some(entry)) = self.entries.get_mut(handle.id as usize) {
            if entry.generation == handle.generation {
                entry.ref_count += 1;
            }
        }
    }

    /// Decrements the ref_count. Returns true if the asset was removed.
    pub fn release(&mut self, handle: Handle<T>) -> bool {
        if let Some(slot) = self.entries.get_mut(handle.id as usize) {
            if let Some(entry) = slot.as_mut() {
                if entry.generation == handle.generation {
                    entry.ref_count = entry.ref_count.saturating_sub(1);
                    if entry.ref_count == 0 {
                        *slot = None;
                        self.free_ids.push(handle.id);
                        return true;
                    }
                }
            }
        }
        false
    }

    pub fn len(&self) -> usize {
        self.entries.iter().filter(|e| e.is_some()).count()
    }

    pub fn is_empty(&self) -> bool {
        self.len() == 0
    }

    pub fn ref_count(&self, handle: Handle<T>) -> u32 {
        self.entries
            .get(handle.id as usize)
            .and_then(|e| e.as_ref())
            .filter(|e| e.generation == handle.generation)
            .map(|e| e.ref_count)
            .unwrap_or(0)
    }

    pub fn iter(&self) -> impl Iterator<Item = (Handle<T>, &T)> {
        self.entries.iter().enumerate().filter_map(|(id, slot)| {
            slot.as_ref()
                .map(|e| (Handle::new(id as u32, e.generation), &e.asset))
        })
    }
}

impl<T> Default for AssetStorage<T> {
    fn default() -> Self {
        Self::new()
    }
}

/// Trait for type-erased access to an AssetStorage.
pub trait AssetStorageDyn: Any {
    fn as_any(&self) -> &dyn Any;
    fn as_any_mut(&mut self) -> &mut dyn Any;
    /// Number of live assets in this storage.
    fn count(&self) -> usize;
}

impl<T: 'static> AssetStorageDyn for AssetStorage<T> {
    fn as_any(&self) -> &dyn Any {
        self
    }
    fn as_any_mut(&mut self) -> &mut dyn Any {
        self
    }
    fn count(&self) -> usize {
        self.len()
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    struct Mesh {
        verts: u32,
    }

    #[test]
    fn insert_and_get() {
        let mut storage: AssetStorage<Mesh> = AssetStorage::new();
        let h = storage.insert(Mesh { verts: 3 });
        assert_eq!(storage.get(h).unwrap().verts, 3);
    }

    #[test]
    fn get_mut() {
        let mut storage: AssetStorage<Mesh> = AssetStorage::new();
        let h = storage.insert(Mesh { verts: 3 });
        storage.get_mut(h).unwrap().verts = 6;
        assert_eq!(storage.get(h).unwrap().verts, 6);
    }

    #[test]
    fn release_removes_at_zero() {
        let mut storage: AssetStorage<Mesh> = AssetStorage::new();
        let h = storage.insert(Mesh { verts: 3 });
        assert_eq!(storage.len(), 1);
        let removed = storage.release(h);
        assert!(removed);
        assert_eq!(storage.len(), 0);
        assert!(storage.get(h).is_none());
    }

    #[test]
    fn ref_counting() {
        let mut storage: AssetStorage<Mesh> = AssetStorage::new();
        let h = storage.insert(Mesh { verts: 3 });
        storage.add_ref(h);
        assert_eq!(storage.ref_count(h), 2);
        let removed1 = storage.release(h);
        assert!(!removed1);
        assert_eq!(storage.ref_count(h), 1);
        let removed2 = storage.release(h);
        assert!(removed2);
        assert!(storage.get(h).is_none());
    }

    #[test]
    fn stale_handle() {
        let mut storage: AssetStorage<Mesh> = AssetStorage::new();
        let h = storage.insert(Mesh { verts: 3 });
        storage.release(h);
        // h is now stale; get should return None
        assert!(storage.get(h).is_none());
    }

    #[test]
    fn id_reuse() {
        let mut storage: AssetStorage<Mesh> = AssetStorage::new();
        let h1 = storage.insert(Mesh { verts: 3 });
        storage.release(h1);
        let h2 = storage.insert(Mesh { verts: 9 });
        // Same slot reused but different generation
        assert_eq!(h1.id, h2.id);
        assert_ne!(h1.generation, h2.generation);
        assert!(storage.get(h1).is_none());
        assert_eq!(storage.get(h2).unwrap().verts, 9);
    }

    #[test]
    fn iter() {
        let mut storage: AssetStorage<Mesh> = AssetStorage::new();
        let h1 = storage.insert(Mesh { verts: 3 });
        let h2 = storage.insert(Mesh { verts: 6 });
        let mut verts: Vec<u32> = storage.iter().map(|(_, m)| m.verts).collect();
        verts.sort();
        assert_eq!(verts, vec![3, 6]);
        // handles from iter should be usable
        let handles: Vec<Handle<Mesh>> = storage.iter().map(|(h, _)| h).collect();
        assert!(handles.contains(&h1));
        assert!(handles.contains(&h2));
    }
}


@@ -0,0 +1,7 @@
[package]
name = "voltex_ecs"
version = "0.1.0"
edition = "2021"
[dependencies]
voltex_math.workspace = true


@@ -0,0 +1,136 @@
#[derive(Clone, Copy, Debug, PartialEq, Eq, Hash)]
pub struct Entity {
    pub id: u32,
    pub generation: u32,
}

struct EntityEntry {
    generation: u32,
    alive: bool,
}

pub struct EntityAllocator {
    entries: Vec<EntityEntry>,
    free_list: Vec<u32>,
    alive_count: usize,
}

impl EntityAllocator {
    pub fn new() -> Self {
        Self {
            entries: Vec::new(),
            free_list: Vec::new(),
            alive_count: 0,
        }
    }

    pub fn allocate(&mut self) -> Entity {
        self.alive_count += 1;
        if let Some(id) = self.free_list.pop() {
            let entry = &mut self.entries[id as usize];
            // generation was already incremented on deallocate
            entry.alive = true;
            Entity {
                id,
                generation: entry.generation,
            }
        } else {
            let id = self.entries.len() as u32;
            self.entries.push(EntityEntry {
                generation: 0,
                alive: true,
            });
            Entity { id, generation: 0 }
        }
    }

    pub fn deallocate(&mut self, entity: Entity) -> bool {
        let Some(entry) = self.entries.get_mut(entity.id as usize) else {
            return false;
        };
        if !entry.alive || entry.generation != entity.generation {
            return false;
        }
        entry.alive = false;
        entry.generation = entry.generation.wrapping_add(1);
        self.free_list.push(entity.id);
        self.alive_count -= 1;
        true
    }

    pub fn is_alive(&self, entity: Entity) -> bool {
        self.entries
            .get(entity.id as usize)
            .map_or(false, |e| e.alive && e.generation == entity.generation)
    }

    pub fn alive_count(&self) -> usize {
        self.alive_count
    }
}

impl Default for EntityAllocator {
    fn default() -> Self {
        Self::new()
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_allocate() {
        let mut alloc = EntityAllocator::new();
        let e0 = alloc.allocate();
        let e1 = alloc.allocate();
        assert_eq!(e0.id, 0);
        assert_eq!(e1.id, 1);
        assert_eq!(e0.generation, 0);
        assert_eq!(e1.generation, 0);
    }

    #[test]
    fn test_deallocate_and_reuse() {
        let mut alloc = EntityAllocator::new();
        let e0 = alloc.allocate();
        let _e1 = alloc.allocate();
        assert!(alloc.deallocate(e0));
        let e0_new = alloc.allocate();
        assert_eq!(e0_new.id, 0);
        assert_eq!(e0_new.generation, 1);
    }

    #[test]
    fn test_is_alive() {
        let mut alloc = EntityAllocator::new();
        let e = alloc.allocate();
        assert!(alloc.is_alive(e));
        alloc.deallocate(e);
        assert!(!alloc.is_alive(e));
    }

    #[test]
    fn test_stale_entity_rejected() {
        let mut alloc = EntityAllocator::new();
        let e = alloc.allocate();
        alloc.deallocate(e);
        // stale entity not alive
        assert!(!alloc.is_alive(e));
        // double-delete fails
        assert!(!alloc.deallocate(e));
    }

    #[test]
    fn test_alive_count() {
        let mut alloc = EntityAllocator::new();
        assert_eq!(alloc.alive_count(), 0);
        let e0 = alloc.allocate();
        let e1 = alloc.allocate();
        assert_eq!(alloc.alive_count(), 2);
        alloc.deallocate(e0);
        assert_eq!(alloc.alive_count(), 1);
        alloc.deallocate(e1);
        assert_eq!(alloc.alive_count(), 0);
    }
}


@@ -0,0 +1,182 @@
use crate::entity::Entity;
use crate::world::World;
use crate::transform::Transform;
#[derive(Debug, Clone, Copy)]
pub struct Parent(pub Entity);
#[derive(Debug, Clone)]
pub struct Children(pub Vec<Entity>);
/// Set `child`'s Parent to `parent` and register `child` in `parent`'s Children list.
/// Does nothing if `child` is already in the Children list (no duplicates).
pub fn add_child(world: &mut World, parent: Entity, child: Entity) {
// Set the Parent component on the child
world.add(child, Parent(parent));
// Check whether parent already has a Children component
if world.get::<Children>(parent).is_some() {
let children = world.get_mut::<Children>(parent).unwrap();
if !children.0.contains(&child) {
children.0.push(child);
}
} else {
world.add(parent, Children(vec![child]));
}
}
/// Remove `child` from `parent`'s Children list and strip the Parent component from `child`.
pub fn remove_child(world: &mut World, parent: Entity, child: Entity) {
// Remove the child from the parent's Children list
if let Some(children) = world.get_mut::<Children>(parent) {
children.0.retain(|&e| e != child);
}
// Remove the Parent component from the child
world.remove::<Parent>(child);
}
/// Despawn `entity` and all of its descendants recursively.
/// Also removes `entity` from its own parent's Children list.
pub fn despawn_recursive(world: &mut World, entity: Entity) {
// Collect children first to avoid borrow conflicts during recursion
let children: Vec<Entity> = world
.get::<Children>(entity)
.map(|c| c.0.clone())
.unwrap_or_default();
// Recurse into each child
for child in children {
despawn_recursive(world, child);
}
// Remove this entity from its parent's Children list (if it has a parent)
let parent_entity = world.get::<Parent>(entity).map(|p| p.0);
if let Some(parent) = parent_entity {
if let Some(siblings) = world.get_mut::<Children>(parent) {
siblings.0.retain(|&e| e != entity);
}
}
// Despawn the entity itself (this removes all its components too)
world.despawn(entity);
}
/// Return all entities that have a Transform but no Parent — i.e. scene roots.
pub fn roots(world: &World) -> Vec<Entity> {
world
.query::<Transform>()
.filter(|(entity, _)| world.get::<Parent>(*entity).is_none())
.map(|(entity, _)| entity)
.collect()
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_add_child() {
let mut world = World::new();
let parent = world.spawn();
let child = world.spawn();
add_child(&mut world, parent, child);
// Parent component set on child
let p = world.get::<Parent>(child).expect("child should have Parent");
assert_eq!(p.0, parent);
// Children component on parent contains child
let c = world.get::<Children>(parent).expect("parent should have Children");
assert!(c.0.contains(&child));
}
#[test]
fn test_add_multiple_children() {
let mut world = World::new();
let parent = world.spawn();
let child1 = world.spawn();
let child2 = world.spawn();
add_child(&mut world, parent, child1);
add_child(&mut world, parent, child2);
let c = world.get::<Children>(parent).expect("parent should have Children");
assert_eq!(c.0.len(), 2);
assert!(c.0.contains(&child1));
assert!(c.0.contains(&child2));
}
#[test]
fn test_remove_child() {
let mut world = World::new();
let parent = world.spawn();
let child = world.spawn();
add_child(&mut world, parent, child);
remove_child(&mut world, parent, child);
// Child should no longer have a Parent
assert!(world.get::<Parent>(child).is_none(), "child should have no Parent after removal");
// Parent's Children list should be empty
let c = world.get::<Children>(parent).expect("parent should still have Children component");
assert!(c.0.is_empty(), "Children list should be empty after removal");
}
#[test]
fn test_despawn_recursive() {
let mut world = World::new();
let root = world.spawn();
let child = world.spawn();
let grandchild = world.spawn();
// Add transforms so they are proper scene nodes
world.add(root, Transform::new());
world.add(child, Transform::new());
world.add(grandchild, Transform::new());
add_child(&mut world, root, child);
add_child(&mut world, child, grandchild);
despawn_recursive(&mut world, root);
assert!(!world.is_alive(root), "root should be despawned");
assert!(!world.is_alive(child), "child should be despawned");
assert!(!world.is_alive(grandchild), "grandchild should be despawned");
}
#[test]
fn test_roots() {
let mut world = World::new();
let root1 = world.spawn();
let root2 = world.spawn();
let child = world.spawn();
world.add(root1, Transform::new());
world.add(root2, Transform::new());
world.add(child, Transform::new());
add_child(&mut world, root1, child);
let r = roots(&world);
assert_eq!(r.len(), 2, "should have exactly 2 roots");
assert!(r.contains(&root1));
assert!(r.contains(&root2));
assert!(!r.contains(&child));
}
#[test]
fn test_no_duplicate_child() {
let mut world = World::new();
let parent = world.spawn();
let child = world.spawn();
add_child(&mut world, parent, child);
add_child(&mut world, parent, child); // add same child again
let c = world.get::<Children>(parent).expect("parent should have Children");
assert_eq!(c.0.len(), 1, "Children should not contain duplicates");
}
}
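The collect-before-recurse pattern in `despawn_recursive` avoids holding a borrow of the children list across the recursive call. A minimal standalone sketch of the same pattern, using a toy `u32`-keyed tree (hypothetical names, independent of the voltex crates):

```rust
use std::collections::HashMap;

// Toy hierarchy: node id -> list of child ids (stand-in for Children).
type Tree = HashMap<u32, Vec<u32>>;

fn despawn_recursive(tree: &mut Tree, alive: &mut Vec<u32>, node: u32) {
    // Clone the child list first so no borrow of `tree` is held
    // while the recursion mutates it.
    let children: Vec<u32> = tree.get(&node).cloned().unwrap_or_default();
    for child in children {
        despawn_recursive(tree, alive, child);
    }
    tree.remove(&node);
    alive.retain(|&n| n != node);
}

fn main() {
    let mut tree: Tree = HashMap::new();
    tree.insert(0, vec![1]);
    tree.insert(1, vec![2]);
    let mut alive = vec![0u32, 1, 2];
    despawn_recursive(&mut tree, &mut alive, 0);
    assert!(alive.is_empty());
}
```

The same shape works for any mutable-world recursion: copy the edges out, then descend.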

View File

@@ -0,0 +1,15 @@
pub mod entity;
pub mod sparse_set;
pub mod world;
pub mod transform;
pub mod hierarchy;
pub mod world_transform;
pub mod scene;
pub use entity::{Entity, EntityAllocator};
pub use sparse_set::SparseSet;
pub use world::World;
pub use transform::Transform;
pub use hierarchy::{Parent, Children, add_child, remove_child, despawn_recursive, roots};
pub use world_transform::{WorldTransform, propagate_transforms};
pub use scene::{Tag, serialize_scene, deserialize_scene};

View File

@@ -0,0 +1,273 @@
use std::collections::HashMap;
use voltex_math::Vec3;
use crate::entity::Entity;
use crate::world::World;
use crate::transform::Transform;
use crate::hierarchy::{add_child, Parent};
/// String tag for entity identification.
#[derive(Debug, Clone)]
pub struct Tag(pub String);
/// Parse three space-separated f32 values into a Vec3.
fn parse_vec3(s: &str) -> Option<Vec3> {
let parts: Vec<&str> = s.split_whitespace().collect();
if parts.len() != 3 {
return None;
}
let x = parts[0].parse::<f32>().ok()?;
let y = parts[1].parse::<f32>().ok()?;
let z = parts[2].parse::<f32>().ok()?;
Some(Vec3::new(x, y, z))
}
/// Parse a transform line of the form "px py pz | rx ry rz | sx sy sz".
fn parse_transform(s: &str) -> Option<Transform> {
let parts: Vec<&str> = s.splitn(3, '|').collect();
if parts.len() != 3 {
return None;
}
let position = parse_vec3(parts[0].trim())?;
let rotation = parse_vec3(parts[1].trim())?;
let scale = parse_vec3(parts[2].trim())?;
Some(Transform { position, rotation, scale })
}
/// Serialize all entities with a Transform component to the .vscn text format.
pub fn serialize_scene(world: &World) -> String {
// Collect all entities with Transform
let entities_with_transform: Vec<(Entity, Transform)> = world
.query::<Transform>()
.map(|(e, t)| (e, *t))
.collect();
// Build entity -> local index map
let entity_to_index: HashMap<Entity, usize> = entities_with_transform
.iter()
.enumerate()
.map(|(i, (e, _))| (*e, i))
.collect();
let mut output = String::from("# Voltex Scene v1\n");
for (local_idx, (entity, transform)) in entities_with_transform.iter().enumerate() {
output.push('\n');
output.push_str(&format!("entity {}\n", local_idx));
// Transform line
let p = transform.position;
let r = transform.rotation;
let s = transform.scale;
output.push_str(&format!(
" transform {} {} {} | {} {} {} | {} {} {}\n",
p.x, p.y, p.z,
r.x, r.y, r.z,
s.x, s.y, s.z
));
// Parent line (if entity has a Parent)
if let Some(parent_comp) = world.get::<Parent>(*entity) {
if let Some(&parent_local_idx) = entity_to_index.get(&parent_comp.0) {
output.push_str(&format!(" parent {}\n", parent_local_idx));
}
}
// Tag line (if entity has a Tag)
if let Some(tag) = world.get::<Tag>(*entity) {
output.push_str(&format!(" tag {}\n", tag.0));
}
}
output
}
/// Parse a .vscn string, create entities in the world, and return the created entities.
pub fn deserialize_scene(world: &mut World, source: &str) -> Vec<Entity> {
// Intermediate storage: local_index -> (transform, tag, parent_local_index)
let mut local_transforms: Vec<Option<Transform>> = Vec::new();
let mut local_tags: Vec<Option<String>> = Vec::new();
let mut local_parents: Vec<Option<usize>> = Vec::new();
let mut current_index: Option<usize> = None;
for line in source.lines() {
let trimmed = line.trim();
// Skip comments and empty lines
if trimmed.is_empty() || trimmed.starts_with('#') {
continue;
}
if let Some(rest) = trimmed.strip_prefix("entity ") {
// A malformed index silently falls back to the next free slot
let idx: usize = rest.trim().parse().unwrap_or(local_transforms.len());
// Ensure vectors are large enough
while local_transforms.len() <= idx {
local_transforms.push(None);
local_tags.push(None);
local_parents.push(None);
}
current_index = Some(idx);
} else if let Some(rest) = trimmed.strip_prefix("transform ") {
if let Some(idx) = current_index {
if let Some(t) = parse_transform(rest) {
local_transforms[idx] = Some(t);
}
}
} else if let Some(rest) = trimmed.strip_prefix("parent ") {
if let Some(idx) = current_index {
if let Ok(parent_idx) = rest.trim().parse::<usize>() {
local_parents[idx] = Some(parent_idx);
}
}
} else if let Some(rest) = trimmed.strip_prefix("tag ") {
if let Some(idx) = current_index {
local_tags[idx] = Some(rest.trim().to_string());
}
}
}
// Create entities
let mut created: Vec<Entity> = Vec::with_capacity(local_transforms.len());
for i in 0..local_transforms.len() {
let entity = world.spawn();
// Add transform (default if not present)
let transform = local_transforms[i].unwrap_or_else(Transform::new);
world.add(entity, transform);
// Add tag if present
if let Some(ref tag_str) = local_tags[i] {
world.add(entity, Tag(tag_str.clone()));
}
created.push(entity);
}
// Apply parent relationships
for (child_local_idx, parent_local_opt) in local_parents.iter().enumerate() {
if let Some(parent_local_idx) = parent_local_opt {
let child_entity = created[child_local_idx];
let parent_entity = created[*parent_local_idx];
add_child(world, parent_entity, child_entity);
}
}
created
}
#[cfg(test)]
mod tests {
use super::*;
use crate::hierarchy::{add_child, roots, Parent};
use voltex_math::Vec3;
#[test]
fn test_serialize_single_entity() {
let mut world = World::new();
let e = world.spawn();
world.add(e, Transform {
position: Vec3::new(1.0, 2.0, 3.0),
rotation: Vec3::ZERO,
scale: Vec3::ONE,
});
world.add(e, Tag("sun".to_string()));
let output = serialize_scene(&world);
assert!(output.contains("entity 0"), "should contain 'entity 0'");
assert!(output.contains("transform"), "should contain 'transform'");
assert!(output.contains("tag"), "should contain 'tag'");
assert!(output.contains("sun"), "should contain tag value 'sun'");
}
#[test]
fn test_serialize_with_parent() {
let mut world = World::new();
let parent = world.spawn();
let child = world.spawn();
world.add(parent, Transform::new());
world.add(child, Transform::new());
add_child(&mut world, parent, child);
let output = serialize_scene(&world);
assert!(output.contains("parent"), "should contain 'parent' for child entity");
}
#[test]
fn test_roundtrip() {
let mut world1 = World::new();
// Entity 0: sun (root)
let sun = world1.spawn();
world1.add(sun, Transform {
position: Vec3::new(1.0, 2.0, 3.0),
rotation: Vec3::new(0.0, 0.5, 0.0),
scale: Vec3::ONE,
});
world1.add(sun, Tag("sun".to_string()));
// Entity 1: planet (child of sun)
let planet = world1.spawn();
world1.add(planet, Transform {
position: Vec3::new(5.0, 0.0, 0.0),
rotation: Vec3::ZERO,
scale: Vec3::new(0.5, 0.5, 0.5),
});
world1.add(planet, Tag("planet".to_string()));
add_child(&mut world1, sun, planet);
let serialized = serialize_scene(&world1);
let mut world2 = World::new();
let entities = deserialize_scene(&mut world2, &serialized);
assert_eq!(entities.len(), 2, "should have 2 entities");
// Verify Transform values
let sun2 = entities[0];
let planet2 = entities[1];
let sun_transform = world2.get::<Transform>(sun2).expect("sun should have Transform");
assert!((sun_transform.position.x - 1.0).abs() < 1e-4, "sun position.x");
assert!((sun_transform.position.y - 2.0).abs() < 1e-4, "sun position.y");
assert!((sun_transform.position.z - 3.0).abs() < 1e-4, "sun position.z");
assert!((sun_transform.rotation.y - 0.5).abs() < 1e-4, "sun rotation.y");
let planet_transform = world2.get::<Transform>(planet2).expect("planet should have Transform");
assert!((planet_transform.position.x - 5.0).abs() < 1e-4, "planet position.x");
assert!((planet_transform.scale.x - 0.5).abs() < 1e-4, "planet scale.x");
// Verify Parent relationship
let parent_comp = world2.get::<Parent>(planet2).expect("planet should have Parent");
assert_eq!(parent_comp.0, sun2, "planet's parent should be sun");
// Verify Tag values
let sun_tag = world2.get::<Tag>(sun2).expect("sun should have Tag");
assert_eq!(sun_tag.0, "sun");
let planet_tag = world2.get::<Tag>(planet2).expect("planet should have Tag");
assert_eq!(planet_tag.0, "planet");
}
#[test]
fn test_deserialize_roots() {
let source = r#"# Voltex Scene v1
entity 0
transform 0 0 0 | 0 0 0 | 1 1 1
tag root_a
entity 1
transform 10 0 0 | 0 0 0 | 1 1 1
tag root_b
entity 2
parent 0
transform 1 0 0 | 0 0 0 | 1 1 1
tag child_of_a
"#;
let mut world = World::new();
deserialize_scene(&mut world, source);
let scene_roots = roots(&world);
assert_eq!(scene_roots.len(), 2, "should have exactly 2 root entities");
}
}
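The `.vscn` transform line is three `|`-separated triples of floats. A self-contained sketch of that grammar (plain `[f32; 3]` triples in place of `Vec3`; `parse_triples` is a hypothetical name, not part of the crate):

```rust
// Parse "px py pz | rx ry rz | sx sy sz" into three [f32; 3] triples,
// mirroring the .vscn transform line grammar.
fn parse_triples(s: &str) -> Option<[[f32; 3]; 3]> {
    let mut out = [[0.0f32; 3]; 3];
    let parts: Vec<&str> = s.splitn(3, '|').collect();
    if parts.len() != 3 {
        return None;
    }
    for (i, part) in parts.iter().enumerate() {
        // Unparseable tokens are dropped; the length check rejects the line.
        let nums: Vec<f32> = part
            .split_whitespace()
            .filter_map(|t| t.parse().ok())
            .collect();
        if nums.len() != 3 {
            return None;
        }
        out[i].copy_from_slice(&nums);
    }
    Some(out)
}

fn main() {
    let t = parse_triples("1 2 3 | 0 0 0 | 1 1 1").unwrap();
    assert_eq!(t[0], [1.0, 2.0, 3.0]);
    assert!(parse_triples("1 2 | 0 0 0 | 1 1 1").is_none());
}
```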

View File

@@ -0,0 +1,256 @@
use std::any::Any;
use crate::entity::Entity;
pub struct SparseSet<T> {
sparse: Vec<Option<usize>>,
dense_entities: Vec<Entity>,
dense_data: Vec<T>,
}
impl<T> SparseSet<T> {
pub fn new() -> Self {
Self {
sparse: Vec::new(),
dense_entities: Vec::new(),
dense_data: Vec::new(),
}
}
pub fn insert(&mut self, entity: Entity, value: T) {
let id = entity.id as usize;
// Grow sparse vec if needed
if id >= self.sparse.len() {
self.sparse.resize(id + 1, None);
}
if let Some(dense_idx) = self.sparse[id] {
// Overwrite existing
self.dense_data[dense_idx] = value;
self.dense_entities[dense_idx] = entity;
} else {
let dense_idx = self.dense_data.len();
self.sparse[id] = Some(dense_idx);
self.dense_entities.push(entity);
self.dense_data.push(value);
}
}
pub fn remove(&mut self, entity: Entity) -> Option<T> {
let id = entity.id as usize;
let dense_idx = *self.sparse.get(id)?.as_ref()?;
// Check entity matches (generation safety)
if self.dense_entities[dense_idx] != entity {
return None;
}
let last_idx = self.dense_data.len() - 1;
self.sparse[id] = None;
if dense_idx == last_idx {
self.dense_entities.pop();
Some(self.dense_data.pop().unwrap())
} else {
// Swap with last
let swapped_entity = self.dense_entities[last_idx];
self.sparse[swapped_entity.id as usize] = Some(dense_idx);
self.dense_entities.swap_remove(dense_idx);
Some(self.dense_data.swap_remove(dense_idx))
}
}
pub fn get(&self, entity: Entity) -> Option<&T> {
let id = entity.id as usize;
let dense_idx = self.sparse.get(id)?.as_ref().copied()?;
if self.dense_entities[dense_idx] != entity {
return None;
}
Some(&self.dense_data[dense_idx])
}
pub fn get_mut(&mut self, entity: Entity) -> Option<&mut T> {
let id = entity.id as usize;
let dense_idx = self.sparse.get(id)?.as_ref().copied()?;
if self.dense_entities[dense_idx] != entity {
return None;
}
Some(&mut self.dense_data[dense_idx])
}
pub fn contains(&self, entity: Entity) -> bool {
let id = entity.id as usize;
self.sparse
.get(id)
.and_then(|opt| opt.as_ref())
.map_or(false, |&dense_idx| {
self.dense_entities[dense_idx] == entity
})
}
pub fn len(&self) -> usize {
self.dense_data.len()
}
pub fn is_empty(&self) -> bool {
self.dense_data.is_empty()
}
pub fn iter(&self) -> impl Iterator<Item = (Entity, &T)> {
self.dense_entities.iter().copied().zip(self.dense_data.iter())
}
pub fn iter_mut(&mut self) -> impl Iterator<Item = (Entity, &mut T)> {
self.dense_entities.iter().copied().zip(self.dense_data.iter_mut())
}
pub fn entities(&self) -> &[Entity] {
&self.dense_entities
}
pub fn data(&self) -> &[T] {
&self.dense_data
}
pub fn data_mut(&mut self) -> &mut [T] {
&mut self.dense_data
}
}
impl<T> Default for SparseSet<T> {
fn default() -> Self {
Self::new()
}
}
pub trait ComponentStorage: Any {
fn as_any(&self) -> &dyn Any;
fn as_any_mut(&mut self) -> &mut dyn Any;
fn remove_entity(&mut self, entity: Entity);
fn storage_len(&self) -> usize;
}
impl<T: 'static> ComponentStorage for SparseSet<T> {
fn as_any(&self) -> &dyn Any {
self
}
fn as_any_mut(&mut self) -> &mut dyn Any {
self
}
fn remove_entity(&mut self, entity: Entity) {
self.remove(entity);
}
fn storage_len(&self) -> usize {
self.dense_data.len()
}
}
#[cfg(test)]
mod tests {
use super::*;
fn make_entity(id: u32, generation: u32) -> Entity {
Entity { id, generation }
}
#[test]
fn test_insert_and_get() {
let mut set: SparseSet<i32> = SparseSet::new();
let e = make_entity(0, 0);
set.insert(e, 42);
assert_eq!(set.get(e), Some(&42));
assert_eq!(set.len(), 1);
}
#[test]
fn test_overwrite() {
let mut set: SparseSet<i32> = SparseSet::new();
let e = make_entity(0, 0);
set.insert(e, 1);
set.insert(e, 99);
assert_eq!(set.get(e), Some(&99));
assert_eq!(set.len(), 1);
}
#[test]
fn test_remove() {
let mut set: SparseSet<i32> = SparseSet::new();
let e0 = make_entity(0, 0);
let e1 = make_entity(1, 0);
let e2 = make_entity(2, 0);
set.insert(e0, 10);
set.insert(e1, 20);
set.insert(e2, 30);
// Remove middle
let removed = set.remove(e1);
assert_eq!(removed, Some(20));
assert_eq!(set.len(), 2);
assert!(set.get(e1).is_none());
// Remaining still accessible
assert_eq!(set.get(e0), Some(&10));
assert_eq!(set.get(e2), Some(&30));
}
#[test]
fn test_remove_nonexistent() {
let mut set: SparseSet<i32> = SparseSet::new();
let e = make_entity(5, 0);
assert_eq!(set.remove(e), None);
}
#[test]
fn test_iter() {
let mut set: SparseSet<i32> = SparseSet::new();
let e0 = make_entity(0, 0);
let e1 = make_entity(1, 0);
set.insert(e0, 100);
set.insert(e1, 200);
let mut values: Vec<i32> = set.iter().map(|(_, v)| *v).collect();
values.sort();
assert_eq!(values, vec![100, 200]);
}
#[test]
fn test_iter_mut() {
let mut set: SparseSet<i32> = SparseSet::new();
let e0 = make_entity(0, 0);
let e1 = make_entity(1, 0);
set.insert(e0, 1);
set.insert(e1, 2);
for (_, v) in set.iter_mut() {
*v *= 10;
}
assert_eq!(set.get(e0), Some(&10));
assert_eq!(set.get(e1), Some(&20));
}
#[test]
fn test_contains() {
let mut set: SparseSet<i32> = SparseSet::new();
let e = make_entity(3, 0);
assert!(!set.contains(e));
set.insert(e, 7);
assert!(set.contains(e));
set.remove(e);
assert!(!set.contains(e));
}
#[test]
fn test_swap_remove_correctness() {
let mut set: SparseSet<i32> = SparseSet::new();
let e0 = make_entity(0, 0);
let e1 = make_entity(1, 0);
let e2 = make_entity(2, 0);
set.insert(e0, 10);
set.insert(e1, 20);
set.insert(e2, 30);
// Remove first (triggers swap with last)
let removed = set.remove(e0);
assert_eq!(removed, Some(10));
assert_eq!(set.len(), 2);
assert!(set.get(e0).is_none());
// Remaining still accessible
assert_eq!(set.get(e1), Some(&20));
assert_eq!(set.get(e2), Some(&30));
}
}
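The key invariant in `remove` is the swap-remove bookkeeping: the last dense element moves into the freed slot, and its sparse entry must be re-pointed before the swap. A stripped-down sketch over plain `u32` keys (no generation check or overwrite handling, unlike the real `SparseSet`):

```rust
// Minimal sparse set demonstrating swap-remove bookkeeping.
struct MiniSet {
    sparse: Vec<Option<usize>>,
    dense_keys: Vec<u32>,
    dense_vals: Vec<i32>,
}

impl MiniSet {
    fn new() -> Self {
        Self { sparse: Vec::new(), dense_keys: Vec::new(), dense_vals: Vec::new() }
    }
    fn insert(&mut self, key: u32, val: i32) {
        let id = key as usize;
        if id >= self.sparse.len() {
            self.sparse.resize(id + 1, None);
        }
        self.sparse[id] = Some(self.dense_keys.len());
        self.dense_keys.push(key);
        self.dense_vals.push(val);
    }
    fn remove(&mut self, key: u32) -> Option<i32> {
        let idx = self.sparse.get(key as usize).copied().flatten()?;
        self.sparse[key as usize] = None;
        let last = self.dense_keys.len() - 1;
        if idx != last {
            // Re-point the sparse entry of the element being swapped in.
            self.sparse[self.dense_keys[last] as usize] = Some(idx);
        }
        self.dense_keys.swap_remove(idx);
        Some(self.dense_vals.swap_remove(idx))
    }
    fn get(&self, key: u32) -> Option<&i32> {
        self.sparse
            .get(key as usize)
            .copied()
            .flatten()
            .map(|i| &self.dense_vals[i])
    }
}

fn main() {
    let mut s = MiniSet::new();
    s.insert(0, 10);
    s.insert(1, 20);
    s.insert(2, 30);
    assert_eq!(s.remove(0), Some(10)); // 30 swaps into slot 0
    assert_eq!(s.get(2), Some(&30));   // still reachable after the swap
    assert_eq!(s.get(0), None);
}
```

Skipping the re-pointing step would leave key 2's sparse entry dangling at the old dense index.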

View File

@@ -0,0 +1,111 @@
use voltex_math::{Vec3, Mat4};
#[derive(Debug, Clone, Copy)]
pub struct Transform {
pub position: Vec3,
pub rotation: Vec3, // euler angles (radians): pitch(x), yaw(y), roll(z)
pub scale: Vec3,
}
impl Transform {
pub fn new() -> Self {
Self {
position: Vec3::ZERO,
rotation: Vec3::ZERO,
scale: Vec3::ONE,
}
}
pub fn from_position(position: Vec3) -> Self {
Self {
position,
rotation: Vec3::ZERO,
scale: Vec3::ONE,
}
}
pub fn from_position_scale(position: Vec3, scale: Vec3) -> Self {
Self {
position,
rotation: Vec3::ZERO,
scale,
}
}
/// Builds the model matrix: Translation * RotY * RotX * RotZ * Scale
pub fn matrix(&self) -> Mat4 {
let t = Mat4::translation(self.position.x, self.position.y, self.position.z);
let ry = Mat4::rotation_y(self.rotation.y);
let rx = Mat4::rotation_x(self.rotation.x);
let rz = Mat4::rotation_z(self.rotation.z);
let s = Mat4::scale(self.scale.x, self.scale.y, self.scale.z);
t * ry * rx * rz * s
}
}
impl Default for Transform {
fn default() -> Self {
Self::new()
}
}
#[cfg(test)]
mod tests {
use super::*;
use voltex_math::Vec4;
use std::f32::consts::FRAC_PI_2;
fn approx_eq(a: f32, b: f32) -> bool {
(a - b).abs() < 1e-5
}
#[test]
fn test_identity_transform() {
let t = Transform::new();
let m = t.matrix();
// Transform point (1, 2, 3) — should be unchanged
let p = Vec4::new(1.0, 2.0, 3.0, 1.0);
let result = m * p;
assert!(approx_eq(result.x, 1.0), "x: {}", result.x);
assert!(approx_eq(result.y, 2.0), "y: {}", result.y);
assert!(approx_eq(result.z, 3.0), "z: {}", result.z);
assert!(approx_eq(result.w, 1.0), "w: {}", result.w);
}
#[test]
fn test_translation() {
let t = Transform::from_position(Vec3::new(10.0, 20.0, 30.0));
let m = t.matrix();
// Transform origin — should move to (10,20,30)
let p = Vec4::new(0.0, 0.0, 0.0, 1.0);
let result = m * p;
assert!(approx_eq(result.x, 10.0), "x: {}", result.x);
assert!(approx_eq(result.y, 20.0), "y: {}", result.y);
assert!(approx_eq(result.z, 30.0), "z: {}", result.z);
}
#[test]
fn test_scale() {
let t = Transform::from_position_scale(Vec3::ZERO, Vec3::new(2.0, 3.0, 4.0));
let m = t.matrix();
// Scale (1,1,1) to (2,3,4)
let p = Vec4::new(1.0, 1.0, 1.0, 1.0);
let result = m * p;
assert!(approx_eq(result.x, 2.0), "x: {}", result.x);
assert!(approx_eq(result.y, 3.0), "y: {}", result.y);
assert!(approx_eq(result.z, 4.0), "z: {}", result.z);
}
#[test]
fn test_rotation_y() {
let mut t = Transform::new();
// 90° Y rotation on (1,0,0) -> approx (0,0,-1)
t.rotation.y = FRAC_PI_2;
let m = t.matrix();
let p = Vec4::new(1.0, 0.0, 0.0, 1.0);
let result = m * p;
assert!(approx_eq(result.x, 0.0), "x: {}", result.x);
assert!(approx_eq(result.y, 0.0), "y: {}", result.y);
assert!(approx_eq(result.z, -1.0), "z: {}", result.z);
}
}
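The composition order in `matrix()` (translation applied last) matters because matrix multiplication does not commute. A quick standalone check with plain column-major arrays, matching the `Mat4` convention (scale-then-translate vs. translate-then-scale):

```rust
// Column-major 4x4 multiply: r[col][row] = sum_k a[k][row] * b[col][k].
fn mul(a: [[f32; 4]; 4], b: [[f32; 4]; 4]) -> [[f32; 4]; 4] {
    let mut r = [[0.0; 4]; 4];
    for col in 0..4 {
        for row in 0..4 {
            for k in 0..4 {
                r[col][row] += a[k][row] * b[col][k];
            }
        }
    }
    r
}

// Matrix * column vector, same convention as mul_vec4.
fn apply(m: [[f32; 4]; 4], p: [f32; 4]) -> [f32; 4] {
    let mut out = [0.0; 4];
    for row in 0..4 {
        for k in 0..4 {
            out[row] += m[k][row] * p[k];
        }
    }
    out
}

fn main() {
    // Translate +5 in x; uniform scale 2x.
    let t = [[1., 0., 0., 0.], [0., 1., 0., 0.], [0., 0., 1., 0.], [5., 0., 0., 1.]];
    let s = [[2., 0., 0., 0.], [0., 2., 0., 0.], [0., 0., 2., 0.], [0., 0., 0., 1.]];
    // T * S (scale first, then translate): (1,0,0) -> (7,0,0)
    assert_eq!(apply(mul(t, s), [1., 0., 0., 1.])[0], 7.0);
    // S * T (translate first, then scale): (1,0,0) -> (12,0,0)
    assert_eq!(apply(mul(s, t), [1., 0., 0., 1.])[0], 12.0);
}
```

This is why `matrix()` builds `T * Ry * Rx * Rz * S`: scale acts on local geometry first, translation positions the result last.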

View File

@@ -0,0 +1,237 @@
use std::any::TypeId;
use std::collections::HashMap;
use crate::entity::{Entity, EntityAllocator};
use crate::sparse_set::{SparseSet, ComponentStorage};
pub struct World {
allocator: EntityAllocator,
storages: HashMap<TypeId, Box<dyn ComponentStorage>>,
}
impl World {
pub fn new() -> Self {
Self {
allocator: EntityAllocator::new(),
storages: HashMap::new(),
}
}
pub fn spawn(&mut self) -> Entity {
self.allocator.allocate()
}
pub fn despawn(&mut self, entity: Entity) -> bool {
if !self.allocator.deallocate(entity) {
return false;
}
for storage in self.storages.values_mut() {
storage.remove_entity(entity);
}
true
}
pub fn is_alive(&self, entity: Entity) -> bool {
self.allocator.is_alive(entity)
}
pub fn entity_count(&self) -> usize {
self.allocator.alive_count()
}
pub fn add<T: 'static>(&mut self, entity: Entity, component: T) {
let type_id = TypeId::of::<T>();
let storage = self.storages
.entry(type_id)
.or_insert_with(|| Box::new(SparseSet::<T>::new()));
let set = storage.as_any_mut().downcast_mut::<SparseSet<T>>().unwrap();
set.insert(entity, component);
}
pub fn get<T: 'static>(&self, entity: Entity) -> Option<&T> {
let type_id = TypeId::of::<T>();
let storage = self.storages.get(&type_id)?;
let set = storage.as_any().downcast_ref::<SparseSet<T>>()?;
set.get(entity)
}
pub fn get_mut<T: 'static>(&mut self, entity: Entity) -> Option<&mut T> {
let type_id = TypeId::of::<T>();
let storage = self.storages.get_mut(&type_id)?;
let set = storage.as_any_mut().downcast_mut::<SparseSet<T>>()?;
set.get_mut(entity)
}
pub fn remove<T: 'static>(&mut self, entity: Entity) -> Option<T> {
let type_id = TypeId::of::<T>();
let storage = self.storages.get_mut(&type_id)?;
let set = storage.as_any_mut().downcast_mut::<SparseSet<T>>()?;
set.remove(entity)
}
pub fn storage<T: 'static>(&self) -> Option<&SparseSet<T>> {
let type_id = TypeId::of::<T>();
let storage = self.storages.get(&type_id)?;
storage.as_any().downcast_ref::<SparseSet<T>>()
}
pub fn storage_mut<T: 'static>(&mut self) -> Option<&mut SparseSet<T>> {
let type_id = TypeId::of::<T>();
let storage = self.storages.get_mut(&type_id)?;
storage.as_any_mut().downcast_mut::<SparseSet<T>>()
}
pub fn query<T: 'static>(&self) -> impl Iterator<Item = (Entity, &T)> {
self.storage::<T>()
.map(|s| s.iter())
.into_iter()
.flatten()
}
pub fn query2<A: 'static, B: 'static>(&self) -> Vec<(Entity, &A, &B)> {
let a_storage = match self.storage::<A>() {
Some(s) => s,
None => return Vec::new(),
};
let b_storage = match self.storage::<B>() {
Some(s) => s,
None => return Vec::new(),
};
// Iterate the smaller set, look up in the larger
let mut result = Vec::new();
if a_storage.len() <= b_storage.len() {
for (entity, a) in a_storage.iter() {
if let Some(b) = b_storage.get(entity) {
result.push((entity, a, b));
}
}
} else {
for (entity, b) in b_storage.iter() {
if let Some(a) = a_storage.get(entity) {
result.push((entity, a, b));
}
}
}
result
}
}
impl Default for World {
fn default() -> Self {
Self::new()
}
}
#[cfg(test)]
mod tests {
use super::*;
#[derive(Debug, PartialEq)]
struct Position { x: f32, y: f32 }
#[derive(Debug, PartialEq)]
struct Velocity { dx: f32, dy: f32 }
#[derive(Debug, PartialEq)]
struct Name(String);
#[test]
fn test_spawn_and_add() {
let mut world = World::new();
let e = world.spawn();
world.add(e, Position { x: 1.0, y: 2.0 });
let pos = world.get::<Position>(e).unwrap();
assert_eq!(pos.x, 1.0);
assert_eq!(pos.y, 2.0);
}
#[test]
fn test_get_missing() {
let world = World::new();
let e = Entity { id: 0, generation: 0 };
assert!(world.get::<Position>(e).is_none());
}
#[test]
fn test_get_mut() {
let mut world = World::new();
let e = world.spawn();
world.add(e, Position { x: 0.0, y: 0.0 });
{
let pos = world.get_mut::<Position>(e).unwrap();
pos.x = 42.0;
}
assert_eq!(world.get::<Position>(e).unwrap().x, 42.0);
}
#[test]
fn test_remove_component() {
let mut world = World::new();
let e = world.spawn();
world.add(e, Position { x: 5.0, y: 6.0 });
let removed = world.remove::<Position>(e);
assert_eq!(removed, Some(Position { x: 5.0, y: 6.0 }));
assert!(world.get::<Position>(e).is_none());
}
#[test]
fn test_despawn() {
let mut world = World::new();
let e = world.spawn();
world.add(e, Position { x: 1.0, y: 2.0 });
world.add(e, Velocity { dx: 3.0, dy: 4.0 });
assert!(world.despawn(e));
assert!(!world.is_alive(e));
assert!(world.get::<Position>(e).is_none());
assert!(world.get::<Velocity>(e).is_none());
}
#[test]
fn test_query_single() {
let mut world = World::new();
let e0 = world.spawn();
let e1 = world.spawn();
let _e2 = world.spawn(); // no Position
world.add(e0, Position { x: 1.0, y: 0.0 });
world.add(e1, Position { x: 2.0, y: 0.0 });
let results: Vec<(Entity, &Position)> = world.query::<Position>().collect();
assert_eq!(results.len(), 2);
let entities: Vec<Entity> = results.iter().map(|(e, _)| *e).collect();
assert!(entities.contains(&e0));
assert!(entities.contains(&e1));
}
#[test]
fn test_query2() {
let mut world = World::new();
let e0 = world.spawn();
let e1 = world.spawn();
let e2 = world.spawn(); // only Position, no Velocity
world.add(e0, Position { x: 1.0, y: 0.0 });
world.add(e0, Velocity { dx: 1.0, dy: 0.0 });
world.add(e1, Position { x: 2.0, y: 0.0 });
world.add(e1, Velocity { dx: 2.0, dy: 0.0 });
world.add(e2, Position { x: 3.0, y: 0.0 });
let results = world.query2::<Position, Velocity>();
assert_eq!(results.len(), 2);
let entities: Vec<Entity> = results.iter().map(|(e, _, _)| *e).collect();
assert!(entities.contains(&e0));
assert!(entities.contains(&e1));
assert!(!entities.contains(&e2));
}
#[test]
fn test_entity_count() {
let mut world = World::new();
assert_eq!(world.entity_count(), 0);
let e0 = world.spawn();
let e1 = world.spawn();
assert_eq!(world.entity_count(), 2);
world.despawn(e0);
assert_eq!(world.entity_count(), 1);
world.despawn(e1);
assert_eq!(world.entity_count(), 0);
}
}
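`query2` iterates the smaller storage and probes the larger, so the join cost is proportional to the smaller component set. The same strategy in a standalone sketch over `HashMap`s (toy `f32` components, hypothetical `join` name):

```rust
use std::collections::HashMap;

// Two-component join: iterate the smaller map, probe the larger.
fn join(a: &HashMap<u32, f32>, b: &HashMap<u32, f32>) -> Vec<(u32, f32, f32)> {
    let mut out = Vec::new();
    if a.len() <= b.len() {
        for (&e, &av) in a {
            if let Some(&bv) = b.get(&e) {
                out.push((e, av, bv));
            }
        }
    } else {
        for (&e, &bv) in b {
            if let Some(&av) = a.get(&e) {
                out.push((e, av, bv));
            }
        }
    }
    out
}

fn main() {
    let pos: HashMap<u32, f32> = [(0, 1.0), (1, 2.0), (2, 3.0)].into_iter().collect();
    let vel: HashMap<u32, f32> = [(0, 0.5), (1, 0.25)].into_iter().collect();
    let mut pairs = join(&pos, &vel);
    pairs.sort_by_key(|&(e, _, _)| e);
    assert_eq!(pairs.len(), 2); // entity 2 has no velocity
    assert_eq!(pairs[0], (0, 1.0, 0.5));
}
```

In the real `query2`, dense `SparseSet` iteration makes the outer loop cache-friendly as well.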

View File

@@ -0,0 +1,135 @@
use voltex_math::Mat4;
use crate::{Entity, World, Transform};
use crate::hierarchy::{Parent, Children};
#[derive(Debug, Clone, Copy)]
pub struct WorldTransform(pub Mat4);
impl WorldTransform {
pub fn identity() -> Self { Self(Mat4::IDENTITY) }
}
pub fn propagate_transforms(world: &mut World) {
// Collect roots: entities with Transform but no Parent
let roots: Vec<Entity> = world.query::<Transform>()
.filter(|(e, _)| world.get::<Parent>(*e).is_none())
.map(|(e, _)| e)
.collect();
for root in roots {
propagate_entity(world, root, Mat4::IDENTITY);
}
}
fn propagate_entity(world: &mut World, entity: Entity, parent_world: Mat4) {
let local = match world.get::<Transform>(entity) {
Some(t) => t.matrix(),
None => return,
};
let world_matrix = parent_world * local;
world.add(entity, WorldTransform(world_matrix));
// Clone children to avoid borrow issues
let children: Vec<Entity> = world.get::<Children>(entity)
.map(|c| c.0.clone())
.unwrap_or_default();
for child in children {
propagate_entity(world, child, world_matrix);
}
}
#[cfg(test)]
mod tests {
use super::*;
use voltex_math::{Vec3, Vec4};
use crate::hierarchy::add_child;
fn approx_eq(a: f32, b: f32) -> bool {
(a - b).abs() < 1e-4
}
#[test]
fn test_root_world_transform() {
let mut world = World::new();
let e = world.spawn();
world.add(e, Transform::from_position(Vec3::new(5.0, 0.0, 0.0)));
propagate_transforms(&mut world);
let wt = world.get::<WorldTransform>(e).expect("WorldTransform should be set");
// Transform the origin — should land at (5, 0, 0)
let result = wt.0 * Vec4::new(0.0, 0.0, 0.0, 1.0);
assert!(approx_eq(result.x, 5.0), "x: {}", result.x);
assert!(approx_eq(result.y, 0.0), "y: {}", result.y);
assert!(approx_eq(result.z, 0.0), "z: {}", result.z);
}
#[test]
fn test_child_inherits_parent() {
let mut world = World::new();
let parent = world.spawn();
let child = world.spawn();
world.add(parent, Transform::from_position(Vec3::new(10.0, 0.0, 0.0)));
world.add(child, Transform::from_position(Vec3::new(0.0, 5.0, 0.0)));
add_child(&mut world, parent, child);
propagate_transforms(&mut world);
let wt = world.get::<WorldTransform>(child).expect("child WorldTransform should be set");
// Child origin in world space should be (10, 5, 0)
let result = wt.0 * Vec4::new(0.0, 0.0, 0.0, 1.0);
assert!(approx_eq(result.x, 10.0), "x: {}", result.x);
assert!(approx_eq(result.y, 5.0), "y: {}", result.y);
assert!(approx_eq(result.z, 0.0), "z: {}", result.z);
}
#[test]
fn test_three_level_hierarchy() {
let mut world = World::new();
let root = world.spawn();
let mid = world.spawn();
let leaf = world.spawn();
world.add(root, Transform::from_position(Vec3::new(1.0, 0.0, 0.0)));
world.add(mid, Transform::from_position(Vec3::new(0.0, 2.0, 0.0)));
world.add(leaf, Transform::from_position(Vec3::new(0.0, 0.0, 3.0)));
add_child(&mut world, root, mid);
add_child(&mut world, mid, leaf);
propagate_transforms(&mut world);
let wt = world.get::<WorldTransform>(leaf).expect("leaf WorldTransform should be set");
// Leaf origin in world space should be (1, 2, 3)
let result = wt.0 * Vec4::new(0.0, 0.0, 0.0, 1.0);
assert!(approx_eq(result.x, 1.0), "x: {}", result.x);
assert!(approx_eq(result.y, 2.0), "y: {}", result.y);
assert!(approx_eq(result.z, 3.0), "z: {}", result.z);
}
#[test]
fn test_parent_scale_affects_child() {
let mut world = World::new();
let parent = world.spawn();
let child = world.spawn();
// Parent scaled 2x at origin
world.add(parent, Transform::from_position_scale(
Vec3::ZERO,
Vec3::new(2.0, 2.0, 2.0),
));
// Child at local (1, 0, 0)
world.add(child, Transform::from_position(Vec3::new(1.0, 0.0, 0.0)));
add_child(&mut world, parent, child);
propagate_transforms(&mut world);
let wt = world.get::<WorldTransform>(child).expect("child WorldTransform should be set");
// Child origin in world space: parent scale 2x means (1,0,0) -> (2,0,0)
let result = wt.0 * Vec4::new(0.0, 0.0, 0.0, 1.0);
assert!(approx_eq(result.x, 2.0), "x: {}", result.x);
assert!(approx_eq(result.y, 0.0), "y: {}", result.y);
assert!(approx_eq(result.z, 0.0), "z: {}", result.z);
}
}
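The propagation step is `world = parent_world * local`, applied depth-first from the roots. With translations only, the matrix product degenerates to vector addition, which makes the traversal easy to check in isolation (toy `u32` ids, not the voltex types):

```rust
use std::collections::HashMap;

// Depth-first propagation of world positions down a parent -> children map,
// using plain translation ([f32; 3] addition) in place of full matrices.
fn propagate(
    children: &HashMap<u32, Vec<u32>>,
    local: &HashMap<u32, [f32; 3]>,
    node: u32,
    parent_world: [f32; 3],
    out: &mut HashMap<u32, [f32; 3]>,
) {
    // Toy sketch: assumes every node has a local transform.
    let l = local[&node];
    let world = [
        parent_world[0] + l[0],
        parent_world[1] + l[1],
        parent_world[2] + l[2],
    ];
    out.insert(node, world);
    for &child in children.get(&node).map(|v| v.as_slice()).unwrap_or(&[]) {
        propagate(children, local, child, world, out);
    }
}

fn main() {
    let children: HashMap<u32, Vec<u32>> =
        [(0, vec![1]), (1, vec![2])].into_iter().collect();
    let local: HashMap<u32, [f32; 3]> = [
        (0, [1.0, 0.0, 0.0]),
        (1, [0.0, 2.0, 0.0]),
        (2, [0.0, 0.0, 3.0]),
    ]
    .into_iter()
    .collect();
    let mut world = HashMap::new();
    propagate(&children, &local, 0, [0.0; 3], &mut world);
    // Same shape as the three-level hierarchy test: leaf lands at (1, 2, 3).
    assert_eq!(world[&2], [1.0, 2.0, 3.0]);
}
```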

View File

@@ -1 +1,9 @@
// Voltex Math Library - Phase 1
pub mod vec2;
pub mod vec3;
pub mod vec4;
pub mod mat4;
pub use vec2::Vec2;
pub use vec3::Vec3;
pub use vec4::Vec4;
pub use mat4::Mat4;

View File

@@ -0,0 +1,355 @@
use std::ops::Mul;
use crate::{Vec3, Vec4};
/// 4x4 matrix in column-major order (matches wgpu/WGSL convention).
///
/// `cols[i]` is the i-th column, stored as `[f32; 4]`.
#[derive(Debug, Clone, Copy, PartialEq)]
pub struct Mat4 {
pub cols: [[f32; 4]; 4],
}
impl Mat4 {
/// The identity matrix.
pub const IDENTITY: Self = Self {
cols: [
[1.0, 0.0, 0.0, 0.0],
[0.0, 1.0, 0.0, 0.0],
[0.0, 0.0, 1.0, 0.0],
[0.0, 0.0, 0.0, 1.0],
],
};
/// Construct from four column vectors.
pub fn from_cols(c0: [f32; 4], c1: [f32; 4], c2: [f32; 4], c3: [f32; 4]) -> Self {
Self { cols: [c0, c1, c2, c3] }
}
/// Return a flat 16-element array reference suitable for GPU upload.
///
/// The pointer cast is sound: `[[f32; 4]; 4]` and `[f32; 16]` have
/// identical layout (64 bytes, 4-byte aligned, no padding).
pub fn as_slice(&self) -> &[f32; 16] {
// SAFETY: [[f32; 4]; 4] is layout-identical to [f32; 16].
unsafe { &*(self.cols.as_ptr() as *const [f32; 16]) }
}
/// Matrix × matrix multiplication.
pub fn mul_mat4(&self, rhs: &Mat4) -> Mat4 {
let mut result = [[0.0f32; 4]; 4];
for col in 0..4 {
for row in 0..4 {
let mut sum = 0.0f32;
for k in 0..4 {
sum += self.cols[k][row] * rhs.cols[col][k];
}
result[col][row] = sum;
}
}
Mat4 { cols: result }
}
/// Matrix × Vec4 multiplication.
pub fn mul_vec4(&self, v: Vec4) -> Vec4 {
let x = self.cols[0][0] * v.x + self.cols[1][0] * v.y + self.cols[2][0] * v.z + self.cols[3][0] * v.w;
let y = self.cols[0][1] * v.x + self.cols[1][1] * v.y + self.cols[2][1] * v.z + self.cols[3][1] * v.w;
let z = self.cols[0][2] * v.x + self.cols[1][2] * v.y + self.cols[2][2] * v.z + self.cols[3][2] * v.w;
let w = self.cols[0][3] * v.x + self.cols[1][3] * v.y + self.cols[2][3] * v.z + self.cols[3][3] * v.w;
Vec4 { x, y, z, w }
}
/// Translation matrix for (x, y, z).
pub fn translation(x: f32, y: f32, z: f32) -> Self {
Self {
cols: [
[1.0, 0.0, 0.0, 0.0],
[0.0, 1.0, 0.0, 0.0],
[0.0, 0.0, 1.0, 0.0],
[x, y, z, 1.0],
],
}
}
/// Uniform/non-uniform scale matrix.
pub fn scale(sx: f32, sy: f32, sz: f32) -> Self {
Self {
cols: [
[sx, 0.0, 0.0, 0.0],
[0.0, sy, 0.0, 0.0],
[0.0, 0.0, sz, 0.0],
[0.0, 0.0, 0.0, 1.0],
],
}
}
/// Rotation around the X axis by `angle` radians (right-handed).
pub fn rotation_x(angle: f32) -> Self {
let (s, c) = angle.sin_cos();
Self {
cols: [
[1.0, 0.0, 0.0, 0.0],
[0.0, c, s, 0.0],
[0.0, -s, c, 0.0],
[0.0, 0.0, 0.0, 1.0],
],
}
}
/// Rotation around the Y axis by `angle` radians (right-handed).
pub fn rotation_y(angle: f32) -> Self {
let (s, c) = angle.sin_cos();
Self {
cols: [
[ c, 0.0, -s, 0.0],
[0.0, 1.0, 0.0, 0.0],
[ s, 0.0, c, 0.0],
[0.0, 0.0, 0.0, 1.0],
],
}
}
/// Rotation around the Z axis by `angle` radians (right-handed).
pub fn rotation_z(angle: f32) -> Self {
let (s, c) = angle.sin_cos();
Self {
cols: [
[ c, s, 0.0, 0.0],
[-s, c, 0.0, 0.0],
[0.0, 0.0, 1.0, 0.0],
[0.0, 0.0, 0.0, 1.0],
],
}
}
/// Right-handed look-at view matrix.
///
/// - `eye` — camera position
/// - `target` — point the camera is looking at
/// - `up` — world up vector (usually `Vec3::Y`)
pub fn look_at(eye: Vec3, target: Vec3, up: Vec3) -> Self {
let f = (target - eye).normalize(); // forward
let r = f.cross(up).normalize(); // right
let u = r.cross(f); // true up
Self {
cols: [
[r.x, u.x, -f.x, 0.0],
[r.y, u.y, -f.y, 0.0],
[r.z, u.z, -f.z, 0.0],
[-r.dot(eye), -u.dot(eye), f.dot(eye), 1.0],
],
}
}
/// Perspective projection for wgpu NDC (z in [0, 1]).
///
/// - `fov_y` — vertical field of view in radians
/// - `aspect` — width / height
/// - `near` — near clip distance (positive)
/// - `far` — far clip distance (positive)
pub fn perspective(fov_y: f32, aspect: f32, near: f32, far: f32) -> Self {
let f = 1.0 / (fov_y / 2.0).tan();
let range_inv = 1.0 / (near - far);
Self {
cols: [
[f / aspect, 0.0, 0.0, 0.0],
[0.0, f, 0.0, 0.0],
[0.0, 0.0, far * range_inv, -1.0],
[0.0, 0.0, near * far * range_inv, 0.0],
],
}
}
/// Orthographic projection for wgpu NDC (z in [0, 1]).
pub fn orthographic(left: f32, right: f32, bottom: f32, top: f32, near: f32, far: f32) -> Self {
let rml = right - left;
let tmb = top - bottom;
let fmn = far - near;
Self::from_cols(
[2.0 / rml, 0.0, 0.0, 0.0],
[0.0, 2.0 / tmb, 0.0, 0.0],
[0.0, 0.0, -1.0 / fmn, 0.0],
[-(right + left) / rml, -(top + bottom) / tmb, -near / fmn, 1.0],
)
}
/// Return the transpose of this matrix.
pub fn transpose(&self) -> Self {
let c = &self.cols;
Self {
cols: [
[c[0][0], c[1][0], c[2][0], c[3][0]],
[c[0][1], c[1][1], c[2][1], c[3][1]],
[c[0][2], c[1][2], c[2][2], c[3][2]],
[c[0][3], c[1][3], c[2][3], c[3][3]],
],
}
}
}
// ---------------------------------------------------------------------------
// Operator overloads
// ---------------------------------------------------------------------------
impl Mul<Mat4> for Mat4 {
type Output = Mat4;
fn mul(self, rhs: Mat4) -> Mat4 {
self.mul_mat4(&rhs)
}
}
impl Mul<Vec4> for Mat4 {
type Output = Vec4;
fn mul(self, rhs: Vec4) -> Vec4 {
self.mul_vec4(rhs)
}
}
// ---------------------------------------------------------------------------
// Tests
// ---------------------------------------------------------------------------
#[cfg(test)]
mod tests {
use super::*;
use std::f32::consts::FRAC_PI_2;
fn approx_eq(a: f32, b: f32) -> bool {
(a - b).abs() < 1e-5
}
fn mat4_approx_eq(a: &Mat4, b: &Mat4) -> bool {
for col in 0..4 {
for row in 0..4 {
if !approx_eq(a.cols[col][row], b.cols[col][row]) {
return false;
}
}
}
true
}
fn vec4_approx_eq(a: Vec4, b: Vec4) -> bool {
approx_eq(a.x, b.x) && approx_eq(a.y, b.y) && approx_eq(a.z, b.z) && approx_eq(a.w, b.w)
}
// 1. IDENTITY * translation == translation
#[test]
fn test_identity_mul() {
let t = Mat4::translation(1.0, 2.0, 3.0);
let result = Mat4::IDENTITY * t;
assert!(mat4_approx_eq(&result, &t));
}
// 2. translate(10,20,30) * point(1,2,3,1) == (11,22,33,1)
#[test]
fn test_translation_mul_vec4() {
let t = Mat4::translation(10.0, 20.0, 30.0);
let v = Vec4 { x: 1.0, y: 2.0, z: 3.0, w: 1.0 };
let result = t * v;
assert!(vec4_approx_eq(result, Vec4 { x: 11.0, y: 22.0, z: 33.0, w: 1.0 }));
}
// 3. scale(2,3,4) * (1,1,1,1) == (2,3,4,1)
#[test]
fn test_scale() {
let s = Mat4::scale(2.0, 3.0, 4.0);
let v = Vec4 { x: 1.0, y: 1.0, z: 1.0, w: 1.0 };
let result = s * v;
assert!(vec4_approx_eq(result, Vec4 { x: 2.0, y: 3.0, z: 4.0, w: 1.0 }));
}
// 4. rotation_y(90°) * (1,0,0,1) -> approximately (0,0,-1,1)
#[test]
fn test_rotation_y_90() {
let r = Mat4::rotation_y(FRAC_PI_2);
let v = Vec4 { x: 1.0, y: 0.0, z: 0.0, w: 1.0 };
let result = r * v;
assert!(approx_eq(result.x, 0.0));
assert!(approx_eq(result.y, 0.0));
assert!(approx_eq(result.z, -1.0));
assert!(approx_eq(result.w, 1.0));
}
// 5. look_at(eye=(0,0,5), target=origin, up=Y) — origin maps to (0,0,-5)
#[test]
fn test_look_at_origin() {
let eye = Vec3::new(0.0, 0.0, 5.0);
let target = Vec3::ZERO;
let up = Vec3::Y;
let view = Mat4::look_at(eye, target, up);
// The world-space origin in homogeneous coords:
let origin = Vec4 { x: 0.0, y: 0.0, z: 0.0, w: 1.0 };
let result = view * origin;
assert!(approx_eq(result.x, 0.0));
assert!(approx_eq(result.y, 0.0));
assert!(approx_eq(result.z, -5.0));
assert!(approx_eq(result.w, 1.0));
}
// 6. Near plane point maps to NDC z = 0
#[test]
fn test_perspective_near_plane() {
let fov_y = std::f32::consts::FRAC_PI_2; // 90°
let aspect = 1.0f32;
let near = 1.0f32;
let far = 100.0f32;
let proj = Mat4::perspective(fov_y, aspect, near, far);
// A point exactly at the near plane in view space (z = -near in RH).
let p = Vec4 { x: 0.0, y: 0.0, z: -near, w: 1.0 };
let clip = proj * p;
// NDC z = clip.z / clip.w should equal 0 for the near plane.
let ndc_z = clip.z / clip.w;
assert!(approx_eq(ndc_z, 0.0), "near-plane NDC z = {ndc_z}, expected 0");
}
// 7. Transpose swaps rows and columns
#[test]
fn test_transpose() {
let m = Mat4::from_cols(
[1.0, 2.0, 3.0, 4.0],
[5.0, 6.0, 7.0, 8.0],
[9.0, 10.0, 11.0, 12.0],
[13.0, 14.0, 15.0, 16.0],
);
let t = m.transpose();
// After transpose, col[i][j] == original col[j][i]
for col in 0..4 {
for row in 0..4 {
assert!(approx_eq(t.cols[col][row], m.cols[row][col]),
"t.cols[{col}][{row}] = {} != m.cols[{row}][{col}] = {}",
t.cols[col][row], m.cols[row][col]);
}
}
}
// 8. Orthographic projection
#[test]
fn test_orthographic() {
let proj = Mat4::orthographic(-10.0, 10.0, -10.0, 10.0, 0.1, 100.0);
// The near-plane center should map to NDC (0, 0, 0).
let p = proj * Vec4::new(0.0, 0.0, -0.1, 1.0);
let ndc = Vec3::new(p.x / p.w, p.y / p.w, p.z / p.w);
assert!(approx_eq(ndc.x, 0.0));
assert!(approx_eq(ndc.y, 0.0));
assert!(approx_eq(ndc.z, 0.0));
// 9. as_slice — identity diagonal
#[test]
fn test_as_slice() {
let slice = Mat4::IDENTITY.as_slice();
assert_eq!(slice.len(), 16);
// Diagonal indices in column-major flat layout: 0, 5, 10, 15
assert!(approx_eq(slice[0], 1.0));
assert!(approx_eq(slice[5], 1.0));
assert!(approx_eq(slice[10], 1.0));
assert!(approx_eq(slice[15], 1.0));
// Off-diagonal should be zero (spot check)
assert!(approx_eq(slice[1], 0.0));
assert!(approx_eq(slice[4], 0.0));
}
}
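As a sanity check on the `perspective` doc comment above, the following standalone sketch (it does not import the crate; `ndc_depth` is a hypothetical helper reproducing just the two depth entries of the matrix) verifies that view-space z = -near lands at NDC 0 and z = -far at NDC 1, matching wgpu's [0, 1] depth convention:

```rust
// Reproduce the depth mapping of Mat4::perspective: clip.z uses the entries
// far * range_inv and near * far * range_inv with range_inv = 1 / (near - far),
// and clip.w = -z_view (the -1 in column 2, row 3).
fn ndc_depth(z_view: f32, near: f32, far: f32) -> f32 {
    let range_inv = 1.0 / (near - far);
    let clip_z = far * range_inv * z_view + near * far * range_inv;
    let clip_w = -z_view;
    clip_z / clip_w
}

fn main() {
    let (near, far) = (1.0_f32, 100.0_f32);
    assert!(ndc_depth(-near, near, far).abs() < 1e-5); // near plane → 0
    assert!((ndc_depth(-far, near, far) - 1.0).abs() < 1e-4); // far plane → 1
    // Depth is non-linear: the view-space midpoint lands far past NDC 0.5.
    assert!(ndc_depth(-50.5, near, far) > 0.9);
}
```

The non-linearity in the last assertion is why depth precision concentrates near the camera, which matters when choosing `near` for a `Depth32Float` buffer.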

View File

@@ -0,0 +1,106 @@
use std::ops::{Add, Sub, Mul, Neg};
/// 2D vector (f32)
#[derive(Debug, Clone, Copy, PartialEq)]
pub struct Vec2 {
pub x: f32,
pub y: f32,
}
impl Vec2 {
pub const ZERO: Self = Self { x: 0.0, y: 0.0 };
pub const ONE: Self = Self { x: 1.0, y: 1.0 };
pub const fn new(x: f32, y: f32) -> Self {
Self { x, y }
}
pub fn dot(self, rhs: Self) -> f32 {
self.x * rhs.x + self.y * rhs.y
}
pub fn length_squared(self) -> f32 {
self.dot(self)
}
pub fn length(self) -> f32 {
self.length_squared().sqrt()
}
pub fn normalize(self) -> Self {
let len = self.length();
Self {
x: self.x / len,
y: self.y / len,
}
}
}
impl Add for Vec2 {
type Output = Self;
fn add(self, rhs: Self) -> Self {
Self { x: self.x + rhs.x, y: self.y + rhs.y }
}
}
impl Sub for Vec2 {
type Output = Self;
fn sub(self, rhs: Self) -> Self {
Self { x: self.x - rhs.x, y: self.y - rhs.y }
}
}
impl Mul<f32> for Vec2 {
type Output = Self;
fn mul(self, rhs: f32) -> Self {
Self { x: self.x * rhs, y: self.y * rhs }
}
}
impl Neg for Vec2 {
type Output = Self;
fn neg(self) -> Self {
Self { x: -self.x, y: -self.y }
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_new() {
let v = Vec2::new(1.0, 2.0);
assert_eq!(v.x, 1.0);
assert_eq!(v.y, 2.0);
}
#[test]
fn test_add() {
let a = Vec2::new(1.0, 2.0);
let b = Vec2::new(3.0, 4.0);
let c = a + b;
assert_eq!(c, Vec2::new(4.0, 6.0));
}
#[test]
fn test_dot() {
let a = Vec2::new(1.0, 2.0);
let b = Vec2::new(3.0, 4.0);
assert_eq!(a.dot(b), 11.0);
}
#[test]
fn test_length() {
let v = Vec2::new(3.0, 4.0);
assert!((v.length() - 5.0).abs() < f32::EPSILON);
}
#[test]
fn test_normalize() {
let v = Vec2::new(4.0, 0.0);
let n = v.normalize();
assert!((n.length() - 1.0).abs() < 1e-6);
assert_eq!(n, Vec2::new(1.0, 0.0));
}
}

View File

@@ -0,0 +1,158 @@
use std::ops::{Add, Sub, Mul, Neg};
/// 3D vector (f32)
#[derive(Debug, Clone, Copy, PartialEq)]
pub struct Vec3 {
pub x: f32,
pub y: f32,
pub z: f32,
}
impl Vec3 {
pub const ZERO: Self = Self { x: 0.0, y: 0.0, z: 0.0 };
pub const ONE: Self = Self { x: 1.0, y: 1.0, z: 1.0 };
pub const X: Self = Self { x: 1.0, y: 0.0, z: 0.0 };
pub const Y: Self = Self { x: 0.0, y: 1.0, z: 0.0 };
pub const Z: Self = Self { x: 0.0, y: 0.0, z: 1.0 };
pub const fn new(x: f32, y: f32, z: f32) -> Self {
Self { x, y, z }
}
pub fn dot(self, rhs: Self) -> f32 {
self.x * rhs.x + self.y * rhs.y + self.z * rhs.z
}
pub fn cross(self, rhs: Self) -> Self {
Self {
x: self.y * rhs.z - self.z * rhs.y,
y: self.z * rhs.x - self.x * rhs.z,
z: self.x * rhs.y - self.y * rhs.x,
}
}
pub fn length_squared(self) -> f32 {
self.dot(self)
}
pub fn length(self) -> f32 {
self.length_squared().sqrt()
}
pub fn normalize(self) -> Self {
let len = self.length();
Self {
x: self.x / len,
y: self.y / len,
z: self.z / len,
}
}
}
impl Add for Vec3 {
type Output = Self;
fn add(self, rhs: Self) -> Self {
Self { x: self.x + rhs.x, y: self.y + rhs.y, z: self.z + rhs.z }
}
}
impl Sub for Vec3 {
type Output = Self;
fn sub(self, rhs: Self) -> Self {
Self { x: self.x - rhs.x, y: self.y - rhs.y, z: self.z - rhs.z }
}
}
impl Mul<f32> for Vec3 {
type Output = Self;
fn mul(self, rhs: f32) -> Self {
Self { x: self.x * rhs, y: self.y * rhs, z: self.z * rhs }
}
}
impl Neg for Vec3 {
type Output = Self;
fn neg(self) -> Self {
Self { x: -self.x, y: -self.y, z: -self.z }
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_new() {
let v = Vec3::new(1.0, 2.0, 3.0);
assert_eq!(v.x, 1.0);
assert_eq!(v.y, 2.0);
assert_eq!(v.z, 3.0);
}
#[test]
fn test_zero() {
let v = Vec3::ZERO;
assert_eq!(v.x, 0.0);
assert_eq!(v.y, 0.0);
assert_eq!(v.z, 0.0);
}
#[test]
fn test_add() {
let a = Vec3::new(1.0, 2.0, 3.0);
let b = Vec3::new(4.0, 5.0, 6.0);
let c = a + b;
assert_eq!(c, Vec3::new(5.0, 7.0, 9.0));
}
#[test]
fn test_sub() {
let a = Vec3::new(4.0, 5.0, 6.0);
let b = Vec3::new(1.0, 2.0, 3.0);
let c = a - b;
assert_eq!(c, Vec3::new(3.0, 3.0, 3.0));
}
#[test]
fn test_scalar_mul() {
let v = Vec3::new(1.0, 2.0, 3.0);
let r = v * 2.0;
assert_eq!(r, Vec3::new(2.0, 4.0, 6.0));
}
#[test]
fn test_dot() {
let a = Vec3::new(1.0, 2.0, 3.0);
let b = Vec3::new(4.0, 5.0, 6.0);
assert_eq!(a.dot(b), 32.0);
}
#[test]
fn test_cross() {
let a = Vec3::new(1.0, 0.0, 0.0);
let b = Vec3::new(0.0, 1.0, 0.0);
let c = a.cross(b);
assert_eq!(c, Vec3::new(0.0, 0.0, 1.0));
}
#[test]
fn test_length() {
let v = Vec3::new(3.0, 4.0, 0.0);
assert!((v.length() - 5.0).abs() < f32::EPSILON);
}
#[test]
fn test_normalize() {
let v = Vec3::new(3.0, 0.0, 0.0);
let n = v.normalize();
assert!((n.length() - 1.0).abs() < 1e-6);
assert_eq!(n, Vec3::new(1.0, 0.0, 0.0));
}
#[test]
fn test_neg() {
let v = Vec3::new(1.0, -2.0, 3.0);
let n = -v;
assert_eq!(n, Vec3::new(-1.0, 2.0, -3.0));
}
}
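The cross product shown in `Vec3::cross` is also how `look_at` and the TBN construction in the normal-mapping commit derive perpendicular axes. A minimal standalone sketch (plain `[f32; 3]` arrays, not the crate's `Vec3`; `basis` is a hypothetical helper) of building an orthonormal basis around a unit normal:

```rust
fn cross(a: [f32; 3], b: [f32; 3]) -> [f32; 3] {
    [
        a[1] * b[2] - a[2] * b[1],
        a[2] * b[0] - a[0] * b[2],
        a[0] * b[1] - a[1] * b[0],
    ]
}
fn dot(a: [f32; 3], b: [f32; 3]) -> f32 {
    a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
}
fn normalize(v: [f32; 3]) -> [f32; 3] {
    let len = dot(v, v).sqrt();
    [v[0] / len, v[1] / len, v[2] / len]
}

// Build (tangent, bitangent) perpendicular to a unit normal n.
fn basis(n: [f32; 3]) -> ([f32; 3], [f32; 3]) {
    // Pick whichever world axis is least aligned with n as the helper.
    let helper = if n[2].abs() < 0.999 { [0.0, 0.0, 1.0] } else { [1.0, 0.0, 0.0] };
    let tangent = normalize(cross(helper, n));
    let bitangent = cross(n, tangent); // already unit: n and tangent are unit and perpendicular
    (tangent, bitangent)
}

fn main() {
    let n = [0.0, 1.0, 0.0];
    let (t, b) = basis(n);
    assert!(dot(t, n).abs() < 1e-6);
    assert!(dot(b, n).abs() < 1e-6);
    assert!(dot(t, b).abs() < 1e-6);
}
```

The helper-axis fallback avoids the degenerate case where `cross` of parallel vectors returns zero length.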

View File

@@ -0,0 +1,112 @@
use std::ops::{Add, Sub, Mul, Neg};
use crate::Vec3;
/// 4D vector (f32)
#[derive(Debug, Clone, Copy, PartialEq)]
pub struct Vec4 {
pub x: f32,
pub y: f32,
pub z: f32,
pub w: f32,
}
impl Vec4 {
pub const ZERO: Self = Self { x: 0.0, y: 0.0, z: 0.0, w: 0.0 };
pub const ONE: Self = Self { x: 1.0, y: 1.0, z: 1.0, w: 1.0 };
pub const fn new(x: f32, y: f32, z: f32, w: f32) -> Self {
Self { x, y, z, w }
}
pub fn from_vec3(v: Vec3, w: f32) -> Self {
Self { x: v.x, y: v.y, z: v.z, w }
}
pub fn xyz(self) -> Vec3 {
Vec3::new(self.x, self.y, self.z)
}
pub fn dot(self, rhs: Self) -> f32 {
self.x * rhs.x + self.y * rhs.y + self.z * rhs.z + self.w * rhs.w
}
pub fn length_squared(self) -> f32 {
self.dot(self)
}
pub fn length(self) -> f32 {
self.length_squared().sqrt()
}
}
impl Add for Vec4 {
type Output = Self;
fn add(self, rhs: Self) -> Self {
Self { x: self.x + rhs.x, y: self.y + rhs.y, z: self.z + rhs.z, w: self.w + rhs.w }
}
}
impl Sub for Vec4 {
type Output = Self;
fn sub(self, rhs: Self) -> Self {
Self { x: self.x - rhs.x, y: self.y - rhs.y, z: self.z - rhs.z, w: self.w - rhs.w }
}
}
impl Mul<f32> for Vec4 {
type Output = Self;
fn mul(self, rhs: f32) -> Self {
Self { x: self.x * rhs, y: self.y * rhs, z: self.z * rhs, w: self.w * rhs }
}
}
impl Neg for Vec4 {
type Output = Self;
fn neg(self) -> Self {
Self { x: -self.x, y: -self.y, z: -self.z, w: -self.w }
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_new() {
let v = Vec4::new(1.0, 2.0, 3.0, 4.0);
assert_eq!(v.x, 1.0);
assert_eq!(v.y, 2.0);
assert_eq!(v.z, 3.0);
assert_eq!(v.w, 4.0);
}
#[test]
fn test_from_vec3() {
let v3 = Vec3::new(1.0, 2.0, 3.0);
let v4 = Vec4::from_vec3(v3, 1.0);
assert_eq!(v4, Vec4::new(1.0, 2.0, 3.0, 1.0));
}
#[test]
fn test_xyz() {
let v4 = Vec4::new(1.0, 2.0, 3.0, 4.0);
let v3 = v4.xyz();
assert_eq!(v3, Vec3::new(1.0, 2.0, 3.0));
}
#[test]
fn test_dot() {
let a = Vec4::new(1.0, 2.0, 3.0, 4.0);
let b = Vec4::new(5.0, 6.0, 7.0, 8.0);
assert_eq!(a.dot(b), 70.0);
}
#[test]
fn test_add() {
let a = Vec4::new(1.0, 2.0, 3.0, 4.0);
let b = Vec4::new(5.0, 6.0, 7.0, 8.0);
let c = a + b;
assert_eq!(c, Vec4::new(6.0, 8.0, 10.0, 12.0));
}
}

View File

@@ -0,0 +1,85 @@
use std::time::{Duration, Instant};
/// Fixed-timestep clock: call `tick` once per frame, then drain `should_fixed_update`.
pub struct GameTimer {
last_frame: Instant,
accumulator: Duration,
fixed_dt: Duration,
frame_time: Duration,
}
impl GameTimer {
pub fn new(fixed_hz: u32) -> Self {
Self {
last_frame: Instant::now(),
accumulator: Duration::ZERO,
fixed_dt: Duration::from_secs_f64(1.0 / fixed_hz as f64),
frame_time: Duration::ZERO,
}
}
pub fn tick(&mut self) {
let now = Instant::now();
self.frame_time = now - self.last_frame;
// Clamp long stalls (breakpoints, window drags) to avoid a spiral of death.
if self.frame_time > Duration::from_millis(250) {
self.frame_time = Duration::from_millis(250);
}
self.accumulator += self.frame_time;
self.last_frame = now;
}
pub fn should_fixed_update(&mut self) -> bool {
if self.accumulator >= self.fixed_dt {
self.accumulator -= self.fixed_dt;
true
} else {
false
}
}
pub fn fixed_dt(&self) -> f32 {
self.fixed_dt.as_secs_f32()
}
pub fn frame_dt(&self) -> f32 {
self.frame_time.as_secs_f32()
}
pub fn alpha(&self) -> f32 {
self.accumulator.as_secs_f32() / self.fixed_dt.as_secs_f32()
}
}
#[cfg(test)]
mod tests {
use super::*;
use std::thread;
#[test]
fn test_fixed_dt() {
let timer = GameTimer::new(60);
let expected = 1.0 / 60.0;
assert!((timer.fixed_dt() - expected).abs() < 1e-6);
}
#[test]
fn test_should_fixed_update_accumulates() {
let mut timer = GameTimer::new(60);
thread::sleep(Duration::from_millis(100));
timer.tick();
let mut count = 0;
while timer.should_fixed_update() {
count += 1;
}
assert!(count >= 5 && count <= 7, "Expected ~6 fixed updates, got {count}");
}
#[test]
fn test_alpha_range() {
let mut timer = GameTimer::new(60);
thread::sleep(Duration::from_millis(10));
timer.tick();
while timer.should_fixed_update() {}
let alpha = timer.alpha();
assert!(alpha >= 0.0 && alpha <= 1.0, "Alpha should be 0..1, got {alpha}");
}
}
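The accumulator pattern that `tick` / `should_fixed_update` / `alpha` implement can be shown deterministically with fixed millisecond values instead of `Instant`-based timing (a standalone sketch; `drain` is a hypothetical helper, not part of the crate):

```rust
use std::time::Duration;

// Drain whole fixed steps from an accumulated frame time, returning the step
// count and the leftover — the same loop a caller runs with should_fixed_update.
fn drain(mut accumulator: Duration, fixed_dt: Duration) -> (u32, Duration) {
    let mut steps = 0;
    while accumulator >= fixed_dt {
        accumulator -= fixed_dt;
        steps += 1;
    }
    (steps, accumulator)
}

fn main() {
    // One 125 ms frame against a 30 ms fixed step:
    let (steps, rest) = drain(Duration::from_millis(125), Duration::from_millis(30));
    assert_eq!(steps, 4); // 4 × 30 ms = 120 ms consumed
    assert_eq!(rest, Duration::from_millis(5)); // 5 ms left over
    // The leftover fraction is the render interpolation factor `alpha`, in [0, 1).
    let alpha = rest.as_secs_f32() / 0.030;
    assert!((alpha - 5.0 / 30.0).abs() < 1e-5);
}
```

This is why `alpha` stays in [0, 1) once the drain loop has run: whatever remains in the accumulator is strictly less than one fixed step.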

View File

@@ -0,0 +1,106 @@
use winit::keyboard::KeyCode;
use std::collections::HashSet;
use winit::event::MouseButton;
/// Per-frame keyboard/mouse state with edge detection (`just_pressed` / `just_released`).
pub struct InputState {
pressed: HashSet<KeyCode>,
just_pressed: HashSet<KeyCode>,
just_released: HashSet<KeyCode>,
mouse_position: (f64, f64),
mouse_delta: (f64, f64),
mouse_buttons: HashSet<MouseButton>,
mouse_buttons_just_pressed: HashSet<MouseButton>,
mouse_buttons_just_released: HashSet<MouseButton>,
mouse_scroll_delta: f32,
}
impl InputState {
pub fn new() -> Self {
Self {
pressed: HashSet::new(),
just_pressed: HashSet::new(),
just_released: HashSet::new(),
mouse_position: (0.0, 0.0),
mouse_delta: (0.0, 0.0),
mouse_buttons: HashSet::new(),
mouse_buttons_just_pressed: HashSet::new(),
mouse_buttons_just_released: HashSet::new(),
mouse_scroll_delta: 0.0,
}
}
pub fn is_key_pressed(&self, key: KeyCode) -> bool {
self.pressed.contains(&key)
}
pub fn is_key_just_pressed(&self, key: KeyCode) -> bool {
self.just_pressed.contains(&key)
}
pub fn is_key_just_released(&self, key: KeyCode) -> bool {
self.just_released.contains(&key)
}
pub fn mouse_position(&self) -> (f64, f64) {
self.mouse_position
}
pub fn mouse_delta(&self) -> (f64, f64) {
self.mouse_delta
}
pub fn is_mouse_button_pressed(&self, button: MouseButton) -> bool {
self.mouse_buttons.contains(&button)
}
pub fn is_mouse_button_just_pressed(&self, button: MouseButton) -> bool {
self.mouse_buttons_just_pressed.contains(&button)
}
pub fn is_mouse_button_just_released(&self, button: MouseButton) -> bool {
self.mouse_buttons_just_released.contains(&button)
}
pub fn mouse_scroll(&self) -> f32 {
self.mouse_scroll_delta
}
pub fn begin_frame(&mut self) {
self.just_pressed.clear();
self.just_released.clear();
self.mouse_buttons_just_pressed.clear();
self.mouse_buttons_just_released.clear();
self.mouse_delta = (0.0, 0.0);
self.mouse_scroll_delta = 0.0;
}
pub fn process_key(&mut self, key: KeyCode, pressed: bool) {
if pressed {
if self.pressed.insert(key) {
self.just_pressed.insert(key);
}
} else if self.pressed.remove(&key) {
self.just_released.insert(key);
}
}
pub fn process_mouse_move(&mut self, x: f64, y: f64) {
self.mouse_delta.0 += x - self.mouse_position.0;
self.mouse_delta.1 += y - self.mouse_position.1;
self.mouse_position = (x, y);
}
pub fn process_mouse_button(&mut self, button: MouseButton, pressed: bool) {
if pressed {
if self.mouse_buttons.insert(button) {
self.mouse_buttons_just_pressed.insert(button);
}
} else if self.mouse_buttons.remove(&button) {
self.mouse_buttons_just_released.insert(button);
}
}
pub fn process_scroll(&mut self, delta: f32) {
self.mouse_scroll_delta += delta;
}
}
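The key detail in `process_key` is that `HashSet::insert` returns `true` only on the press edge, so OS auto-repeat while a key is held never re-enters `just_pressed`. A standalone sketch of that pattern (plain `u32` key ids stand in for `winit::keyboard::KeyCode` to avoid the winit dependency; `Keys` is a hypothetical minimal type):

```rust
use std::collections::HashSet;

#[derive(Default)]
struct Keys {
    pressed: HashSet<u32>,
    just_pressed: HashSet<u32>,
}

impl Keys {
    // Call once at the top of each frame, before feeding events.
    fn begin_frame(&mut self) {
        self.just_pressed.clear();
    }
    fn process(&mut self, key: u32, down: bool) {
        if down {
            // insert() is true only when the key was not already held,
            // i.e. only on the actual press edge.
            if self.pressed.insert(key) {
                self.just_pressed.insert(key);
            }
        } else {
            self.pressed.remove(&key);
        }
    }
}

fn main() {
    let mut keys = Keys::default();
    keys.begin_frame();
    keys.process(32, true);
    assert!(keys.just_pressed.contains(&32)); // press edge this frame

    keys.begin_frame();
    keys.process(32, true); // auto-repeat: key is still held
    assert!(keys.pressed.contains(&32));
    assert!(!keys.just_pressed.contains(&32)); // no new edge
}
```

The same edge logic applies to mouse buttons; the ordering contract is that `begin_frame` runs before the frame's events are processed.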

View File

@@ -1,2 +1,7 @@
// Voltex Platform - Phase 1
// Modules will be added in Task 3
pub mod window;
pub mod input;
pub mod game_loop;
pub use window::{VoltexWindow, WindowConfig};
pub use input::InputState;
pub use game_loop::GameTimer;

View File

@@ -0,0 +1,55 @@
use std::sync::Arc;
use winit::event_loop::ActiveEventLoop;
use winit::window::{Window as WinitWindow, WindowAttributes};
pub struct WindowConfig {
pub title: String,
pub width: u32,
pub height: u32,
pub fullscreen: bool,
pub vsync: bool,
}
impl Default for WindowConfig {
fn default() -> Self {
Self {
title: "Voltex Engine".to_string(),
width: 1280,
height: 720,
fullscreen: false,
vsync: true,
}
}
}
pub struct VoltexWindow {
pub handle: Arc<WinitWindow>,
pub vsync: bool,
}
impl VoltexWindow {
pub fn new(event_loop: &ActiveEventLoop, config: &WindowConfig) -> Self {
let mut attrs = WindowAttributes::default()
.with_title(&config.title)
.with_inner_size(winit::dpi::LogicalSize::new(config.width, config.height));
if config.fullscreen {
attrs = attrs.with_fullscreen(Some(winit::window::Fullscreen::Borderless(None)));
}
let window = event_loop.create_window(attrs).expect("Failed to create window");
Self {
handle: Arc::new(window),
vsync: config.vsync,
}
}
pub fn inner_size(&self) -> (u32, u32) {
let size = self.handle.inner_size();
(size.width, size.height)
}
pub fn request_redraw(&self) {
self.handle.request_redraw();
}
}

View File

@@ -0,0 +1,131 @@
/// Van der Corput sequence via bit-reversal.
pub fn radical_inverse_vdc(mut bits: u32) -> f32 {
bits = (bits << 16) | (bits >> 16);
bits = ((bits & 0x55555555) << 1) | ((bits & 0xAAAAAAAA) >> 1);
bits = ((bits & 0x33333333) << 2) | ((bits & 0xCCCCCCCC) >> 2);
bits = ((bits & 0x0F0F0F0F) << 4) | ((bits & 0xF0F0F0F0) >> 4);
bits = ((bits & 0x00FF00FF) << 8) | ((bits & 0xFF00FF00) >> 8);
bits as f32 * 2.328_306_4e-10 // / 0x100000000
}
/// Hammersley low-discrepancy 2D sample.
pub fn hammersley(i: u32, n: u32) -> [f32; 2] {
[i as f32 / n as f32, radical_inverse_vdc(i)]
}
/// GGX importance-sampled half vector in tangent space (N = (0,0,1)).
pub fn importance_sample_ggx(xi: [f32; 2], roughness: f32) -> [f32; 3] {
let a = roughness * roughness;
let phi = 2.0 * std::f32::consts::PI * xi[0];
let cos_theta = ((1.0 - xi[1]) / (1.0 + (a * a - 1.0) * xi[1])).sqrt();
let sin_theta = (1.0 - cos_theta * cos_theta).max(0.0).sqrt();
[phi.cos() * sin_theta, phi.sin() * sin_theta, cos_theta]
}
/// Smith geometry function for the IBL split-sum, with k = roughness² / 2.
pub fn geometry_smith_ibl(n_dot_v: f32, n_dot_l: f32, roughness: f32) -> f32 {
let a = roughness * roughness;
let k = a / 2.0;
let ggx_v = n_dot_v / (n_dot_v * (1.0 - k) + k);
let ggx_l = n_dot_l / (n_dot_l * (1.0 - k) + k);
ggx_v * ggx_l
}
/// Monte Carlo integration of the split-sum BRDF for a given NdotV and roughness.
/// Returns (scale, bias) such that F_env ≈ F0 * scale + bias.
pub fn integrate_brdf(n_dot_v: f32, roughness: f32) -> (f32, f32) {
const NUM_SAMPLES: u32 = 1024;
// View vector in tangent space where N = (0,0,1).
let v = [
(1.0 - n_dot_v * n_dot_v).max(0.0).sqrt(),
0.0_f32,
n_dot_v,
];
let mut scale = 0.0_f32;
let mut bias = 0.0_f32;
for i in 0..NUM_SAMPLES {
let xi = hammersley(i, NUM_SAMPLES);
let h = importance_sample_ggx(xi, roughness);
// dot(V, H)
let v_dot_h = (v[0] * h[0] + v[1] * h[1] + v[2] * h[2]).max(0.0);
// Reflect V around H to get L.
let l = [
2.0 * v_dot_h * h[0] - v[0],
2.0 * v_dot_h * h[1] - v[1],
2.0 * v_dot_h * h[2] - v[2],
];
let n_dot_l = l[2].max(0.0); // L.z in tangent space
let n_dot_h = h[2].max(0.0);
if n_dot_l > 0.0 {
let g = geometry_smith_ibl(n_dot_v, n_dot_l, roughness);
let g_vis = g * v_dot_h / (n_dot_h * n_dot_v).max(0.001);
let fc = (1.0 - v_dot_h).powi(5);
scale += g_vis * (1.0 - fc);
bias += g_vis * fc;
}
}
(scale / NUM_SAMPLES as f32, bias / NUM_SAMPLES as f32)
}
/// Generate the BRDF LUT for the split-sum IBL approximation.
///
/// Returns `size * size` elements. Each element is `[scale, bias]` where
/// the x-axis (u) maps NdotV in [0, 1] and the y-axis (v) maps roughness in [0, 1].
pub fn generate_brdf_lut(size: u32) -> Vec<[f32; 2]> {
let mut lut = Vec::with_capacity((size * size) as usize);
for row in 0..size {
// v maps to roughness (row 0 → roughness near 0, last row → 1).
let roughness = ((row as f32 + 0.5) / size as f32).clamp(0.0, 1.0);
for col in 0..size {
// u maps to NdotV.
let n_dot_v = ((col as f32 + 0.5) / size as f32).clamp(0.0, 1.0);
let (scale, bias) = integrate_brdf(n_dot_v, roughness);
lut.push([scale, bias]);
}
}
lut
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_brdf_lut_dimensions() {
let size = 64u32;
let lut = generate_brdf_lut(size);
assert_eq!(lut.len(), (size * size) as usize);
}
#[test]
fn test_brdf_lut_values_in_range() {
let lut = generate_brdf_lut(64);
for pixel in &lut {
assert!(
pixel[0] >= 0.0 && pixel[0] <= 1.5,
"scale {} out of range",
pixel[0]
);
assert!(
pixel[1] >= 0.0 && pixel[1] <= 1.5,
"bias {} out of range",
pixel[1]
);
}
}
#[test]
fn test_hammersley() {
let n = 1024u32;
let sample = hammersley(0, n);
assert_eq!(sample[0], 0.0, "hammersley(0, N).x should be 0");
assert_eq!(sample[1], 0.0, "hammersley(0, N).y should be 0");
assert_eq!(radical_inverse_vdc(1), 0.5);
assert_eq!(radical_inverse_vdc(2), 0.25);
}
}
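The bit-reversal in `radical_inverse_vdc` mirrors an index's binary expansion across the radix point, so the sequence fills [0, 1) as 0.5, 0.25, 0.75, 0.125, ... A standalone copy of the function makes the first few values easy to verify:

```rust
/// Van der Corput radical inverse via 32-bit bit reversal (same body as the
/// crate's radical_inverse_vdc, copied here so the check is self-contained).
fn radical_inverse_vdc(mut bits: u32) -> f32 {
    bits = (bits << 16) | (bits >> 16);
    bits = ((bits & 0x5555_5555) << 1) | ((bits & 0xAAAA_AAAA) >> 1);
    bits = ((bits & 0x3333_3333) << 2) | ((bits & 0xCCCC_CCCC) >> 2);
    bits = ((bits & 0x0F0F_0F0F) << 4) | ((bits & 0xF0F0_F0F0) >> 4);
    bits = ((bits & 0x00FF_00FF) << 8) | ((bits & 0xFF00_FF00) >> 8);
    bits as f32 * 2.328_306_4e-10 // * 2^-32
}

fn main() {
    // i = 1 (binary ...0001) reverses to 0x8000_0000 → 0.5, and so on:
    assert_eq!(radical_inverse_vdc(1), 0.5);
    assert_eq!(radical_inverse_vdc(2), 0.25);
    assert_eq!(radical_inverse_vdc(3), 0.75);
    assert_eq!(radical_inverse_vdc(4), 0.125);
}
```

Pairing this with `i / N` in `hammersley` gives a 2D low-discrepancy set, which is what keeps `integrate_brdf` stable at 1024 samples.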

View File

@@ -0,0 +1,143 @@
use voltex_math::{Vec3, Mat4};
pub struct Camera {
pub position: Vec3,
pub yaw: f32, // radians, Y-axis rotation
pub pitch: f32, // radians, X-axis rotation
pub fov_y: f32, // radians
pub aspect: f32,
pub near: f32,
pub far: f32,
}
impl Camera {
pub fn new(position: Vec3, aspect: f32) -> Self {
Self {
position,
yaw: 0.0,
pitch: 0.0,
fov_y: std::f32::consts::FRAC_PI_4, // 45 degrees
aspect,
near: 0.1,
far: 100.0,
}
}
pub fn forward(&self) -> Vec3 {
Vec3::new(
self.yaw.sin() * self.pitch.cos(),
self.pitch.sin(),
-self.yaw.cos() * self.pitch.cos(),
)
}
pub fn right(&self) -> Vec3 {
self.forward().cross(Vec3::Y).normalize()
}
pub fn view_matrix(&self) -> Mat4 {
let target = self.position + self.forward();
Mat4::look_at(self.position, target, Vec3::Y)
}
pub fn projection_matrix(&self) -> Mat4 {
Mat4::perspective(self.fov_y, self.aspect, self.near, self.far)
}
pub fn view_projection(&self) -> Mat4 {
self.projection_matrix() * self.view_matrix()
}
}
pub struct FpsController {
pub speed: f32,
pub mouse_sensitivity: f32,
}
impl FpsController {
pub fn new() -> Self {
Self { speed: 5.0, mouse_sensitivity: 0.003 }
}
pub fn process_movement(
&self, camera: &mut Camera,
forward: f32, right: f32, up: f32, dt: f32,
) {
let cam_forward = camera.forward();
let cam_right = camera.right();
let velocity = self.speed * dt;
camera.position = camera.position
+ cam_forward * (forward * velocity)
+ cam_right * (right * velocity)
+ Vec3::Y * (up * velocity);
}
pub fn process_mouse(&self, camera: &mut Camera, dx: f64, dy: f64) {
camera.yaw += dx as f32 * self.mouse_sensitivity;
camera.pitch -= dy as f32 * self.mouse_sensitivity;
let limit = 89.0_f32.to_radians();
camera.pitch = camera.pitch.clamp(-limit, limit);
}
}
#[cfg(test)]
mod tests {
use super::*;
use std::f32::consts::PI;
fn approx_eq(a: f32, b: f32) -> bool {
(a - b).abs() < 1e-5
}
fn vec3_approx_eq(a: Vec3, b: Vec3) -> bool {
approx_eq(a.x, b.x) && approx_eq(a.y, b.y) && approx_eq(a.z, b.z)
}
#[test]
fn test_camera_default_forward() {
let cam = Camera::new(Vec3::new(0.0, 0.0, 0.0), 1.0);
// yaw=0, pitch=0 → forward ≈ (0, 0, -1)
let fwd = cam.forward();
assert!(
vec3_approx_eq(fwd, Vec3::new(0.0, 0.0, -1.0)),
"Expected (0, 0, -1), got ({}, {}, {})",
fwd.x, fwd.y, fwd.z
);
}
#[test]
fn test_camera_yaw_90() {
let mut cam = Camera::new(Vec3::new(0.0, 0.0, 0.0), 1.0);
cam.yaw = PI / 2.0;
// yaw=PI/2 → forward ≈ (1, 0, 0)
let fwd = cam.forward();
assert!(
vec3_approx_eq(fwd, Vec3::new(1.0, 0.0, 0.0)),
"Expected (1, 0, 0), got ({}, {}, {})",
fwd.x, fwd.y, fwd.z
);
}
#[test]
fn test_fps_pitch_clamp() {
let controller = FpsController::new();
let mut cam = Camera::new(Vec3::new(0.0, 0.0, 0.0), 1.0);
let limit = 89.0_f32.to_radians();
// Extreme upward mouse movement
controller.process_mouse(&mut cam, 0.0, -1_000_000.0);
assert!(
cam.pitch <= limit + 1e-5,
"Pitch should be clamped to +89°, got {}",
cam.pitch.to_degrees()
);
// Extreme downward mouse movement
controller.process_mouse(&mut cam, 0.0, 1_000_000.0);
assert!(
cam.pitch >= -limit - 1e-5,
"Pitch should be clamped to -89°, got {}",
cam.pitch.to_degrees()
);
}
}
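`Camera::forward` is a spherical parameterization of yaw and pitch, so the result is always unit length and never needs normalizing. A standalone sketch of the same formula (a hypothetical free function, not the crate's method):

```rust
// Same mapping as Camera::forward: yaw rotates about Y, pitch tilts toward ±Y.
fn forward(yaw: f32, pitch: f32) -> [f32; 3] {
    [
        yaw.sin() * pitch.cos(),
        pitch.sin(),
        -yaw.cos() * pitch.cos(),
    ]
}

fn main() {
    // Unit length for arbitrary angles: sin²·cos² + sin² + cos²·cos² = 1.
    for i in 0..8 {
        let yaw = i as f32 * 0.7;
        let pitch = i as f32 * 0.3 - 1.2;
        let f = forward(yaw, pitch);
        let len = (f[0] * f[0] + f[1] * f[1] + f[2] * f[2]).sqrt();
        assert!((len - 1.0).abs() < 1e-5);
    }
    // yaw = 0, pitch = 0 looks down -Z, matching the right-handed view convention.
    let f = forward(0.0, 0.0);
    assert!((f[2] + 1.0).abs() < 1e-6 && f[0].abs() < 1e-6);
}
```

The ±89° pitch clamp in `FpsController::process_mouse` keeps `pitch.cos()` away from zero, so `right()` (a cross with `Vec3::Y`) never degenerates.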

View File

@@ -0,0 +1,104 @@
use std::sync::Arc;
use winit::window::Window;
pub const DEPTH_FORMAT: wgpu::TextureFormat = wgpu::TextureFormat::Depth32Float;
fn create_depth_texture(device: &wgpu::Device, width: u32, height: u32) -> wgpu::TextureView {
let texture = device.create_texture(&wgpu::TextureDescriptor {
label: Some("Depth Texture"),
size: wgpu::Extent3d { width, height, depth_or_array_layers: 1 },
mip_level_count: 1,
sample_count: 1,
dimension: wgpu::TextureDimension::D2,
format: DEPTH_FORMAT,
usage: wgpu::TextureUsages::RENDER_ATTACHMENT | wgpu::TextureUsages::TEXTURE_BINDING,
view_formats: &[],
});
texture.create_view(&wgpu::TextureViewDescriptor::default())
}
pub struct GpuContext {
pub surface: wgpu::Surface<'static>,
pub device: wgpu::Device,
pub queue: wgpu::Queue,
pub config: wgpu::SurfaceConfiguration,
pub surface_format: wgpu::TextureFormat,
pub depth_view: wgpu::TextureView,
}
impl GpuContext {
pub fn new(window: Arc<Window>) -> Self {
pollster::block_on(Self::new_async(window))
}
async fn new_async(window: Arc<Window>) -> Self {
let size = window.inner_size();
let instance = wgpu::Instance::new(&wgpu::InstanceDescriptor {
backends: wgpu::Backends::PRIMARY,
..Default::default()
});
let surface = instance.create_surface(window).expect("Failed to create surface");
let adapter = instance
.request_adapter(&wgpu::RequestAdapterOptions {
power_preference: wgpu::PowerPreference::HighPerformance,
compatible_surface: Some(&surface),
force_fallback_adapter: false,
})
.await
.expect("Failed to find a suitable GPU adapter");
let (device, queue) = adapter
.request_device(&wgpu::DeviceDescriptor {
label: Some("Voltex Device"),
required_features: wgpu::Features::empty(),
required_limits: wgpu::Limits::default(),
memory_hints: Default::default(),
..Default::default()
})
.await
.expect("Failed to create device");
let surface_caps = surface.get_capabilities(&adapter);
let surface_format = surface_caps
.formats
.iter()
.find(|f| f.is_srgb())
.copied()
.unwrap_or(surface_caps.formats[0]);
let config = wgpu::SurfaceConfiguration {
usage: wgpu::TextureUsages::RENDER_ATTACHMENT,
format: surface_format,
width: size.width.max(1),
height: size.height.max(1),
present_mode: surface_caps.present_modes[0],
alpha_mode: surface_caps.alpha_modes[0],
view_formats: vec![],
desired_maximum_frame_latency: 2,
};
surface.configure(&device, &config);
let depth_view = create_depth_texture(&device, config.width, config.height);
Self {
surface,
device,
queue,
config,
surface_format,
depth_view,
}
}
pub fn resize(&mut self, width: u32, height: u32) {
if width > 0 && height > 0 {
self.config.width = width;
self.config.height = height;
self.surface.configure(&self.device, &self.config);
self.depth_view = create_depth_texture(&self.device, width, height);
}
}
}

View File

@@ -0,0 +1,82 @@
use crate::brdf_lut::generate_brdf_lut;
pub const BRDF_LUT_SIZE: u32 = 256;
pub struct IblResources {
pub brdf_lut_texture: wgpu::Texture,
pub brdf_lut_view: wgpu::TextureView,
pub brdf_lut_sampler: wgpu::Sampler,
}
impl IblResources {
pub fn new(device: &wgpu::Device, queue: &wgpu::Queue) -> Self {
let size = BRDF_LUT_SIZE;
// Generate CPU-side LUT data.
let lut_data = generate_brdf_lut(size);
// Convert [f32; 2] → RGBA8 pixels (R=scale*255, G=bias*255, B=0, A=255).
let mut pixels: Vec<u8> = Vec::with_capacity((size * size * 4) as usize);
for [scale, bias] in &lut_data {
pixels.push((scale.clamp(0.0, 1.0) * 255.0).round() as u8);
pixels.push((bias.clamp(0.0, 1.0) * 255.0).round() as u8);
pixels.push(0u8);
pixels.push(255u8);
}
let extent = wgpu::Extent3d {
width: size,
height: size,
depth_or_array_layers: 1,
};
// Create the texture (linear, NOT sRGB).
let brdf_lut_texture = device.create_texture(&wgpu::TextureDescriptor {
label: Some("BrdfLutTexture"),
size: extent,
mip_level_count: 1,
sample_count: 1,
dimension: wgpu::TextureDimension::D2,
format: wgpu::TextureFormat::Rgba8Unorm,
usage: wgpu::TextureUsages::TEXTURE_BINDING | wgpu::TextureUsages::COPY_DST,
view_formats: &[],
});
queue.write_texture(
wgpu::TexelCopyTextureInfo {
texture: &brdf_lut_texture,
mip_level: 0,
origin: wgpu::Origin3d::ZERO,
aspect: wgpu::TextureAspect::All,
},
&pixels,
wgpu::TexelCopyBufferLayout {
offset: 0,
bytes_per_row: Some(4 * size),
rows_per_image: Some(size),
},
extent,
);
let brdf_lut_view =
brdf_lut_texture.create_view(&wgpu::TextureViewDescriptor::default());
let brdf_lut_sampler = device.create_sampler(&wgpu::SamplerDescriptor {
label: Some("BrdfLutSampler"),
address_mode_u: wgpu::AddressMode::ClampToEdge,
address_mode_v: wgpu::AddressMode::ClampToEdge,
address_mode_w: wgpu::AddressMode::ClampToEdge,
mag_filter: wgpu::FilterMode::Linear,
min_filter: wgpu::FilterMode::Linear,
mipmap_filter: wgpu::MipmapFilterMode::Nearest,
..Default::default()
});
Self {
brdf_lut_texture,
brdf_lut_view,
brdf_lut_sampler,
}
}
}
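Packing the LUT into `Rgba8Unorm` quantizes each channel to 8 bits, so the sampled value differs from the computed one by at most half a step (1/510 ≈ 0.002). A standalone sketch of the round-trip (hypothetical `quantize`/`dequantize` helpers mirroring the conversion above and the GPU's unorm read):

```rust
// f32 in [0, 1] → u8, as in the LUT upload above.
fn quantize(x: f32) -> u8 {
    (x.clamp(0.0, 1.0) * 255.0).round() as u8
}
// What a shader sees when sampling an Rgba8Unorm texel.
fn dequantize(b: u8) -> f32 {
    b as f32 / 255.0
}

fn main() {
    let mut max_err = 0.0_f32;
    for i in 0..=1000 {
        let x = i as f32 / 1000.0;
        let err = (dequantize(quantize(x)) - x).abs();
        max_err = max_err.max(err);
    }
    // Rounding to the nearest of 256 levels bounds the error at half a step.
    assert!(max_err <= 0.5 / 255.0 + 1e-6, "max error {max_err}");
}
```

That error is well below what the split-sum approximation itself introduces; an `Rg16Float` or `Rg32Float` LUT would remove it entirely at the cost of bandwidth, if higher precision were ever needed.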

View File

@@ -1,2 +1,27 @@
// Voltex Renderer - Phase 1
// Modules will be added in Task 4
pub mod gpu;
pub mod light;
pub mod obj;
pub mod pipeline;
pub mod texture;
pub mod vertex;
pub mod mesh;
pub mod camera;
pub mod material;
pub mod sphere;
pub mod pbr_pipeline;
pub mod shadow;
pub mod shadow_pipeline;
pub mod brdf_lut;
pub mod ibl;
pub use gpu::{GpuContext, DEPTH_FORMAT};
pub use light::{CameraUniform, LightUniform, LightData, LightsUniform, MAX_LIGHTS, LIGHT_DIRECTIONAL, LIGHT_POINT, LIGHT_SPOT};
pub use mesh::Mesh;
pub use camera::{Camera, FpsController};
pub use texture::{GpuTexture, pbr_texture_bind_group_layout, create_pbr_texture_bind_group};
pub use material::MaterialUniform;
pub use sphere::generate_sphere;
pub use pbr_pipeline::create_pbr_pipeline;
pub use shadow::{ShadowMap, ShadowUniform, ShadowPassUniform, SHADOW_MAP_SIZE, SHADOW_FORMAT};
pub use shadow_pipeline::{create_shadow_pipeline, shadow_pass_bind_group_layout};
pub use ibl::IblResources;

View File

@@ -0,0 +1,199 @@
use bytemuck::{Pod, Zeroable};
#[repr(C)]
#[derive(Copy, Clone, Debug, Pod, Zeroable)]
pub struct CameraUniform {
pub view_proj: [[f32; 4]; 4],
pub model: [[f32; 4]; 4],
pub camera_pos: [f32; 3],
pub _padding: f32,
}
impl CameraUniform {
pub fn new() -> Self {
Self {
view_proj: [[1.0,0.0,0.0,0.0],[0.0,1.0,0.0,0.0],[0.0,0.0,1.0,0.0],[0.0,0.0,0.0,1.0]],
model: [[1.0,0.0,0.0,0.0],[0.0,1.0,0.0,0.0],[0.0,0.0,1.0,0.0],[0.0,0.0,0.0,1.0]],
camera_pos: [0.0; 3],
_padding: 0.0,
}
}
}
#[repr(C)]
#[derive(Copy, Clone, Debug, Pod, Zeroable)]
pub struct LightUniform {
pub direction: [f32; 3],
pub _padding1: f32,
pub color: [f32; 3],
pub ambient_strength: f32,
}
impl LightUniform {
pub fn new() -> Self {
Self {
direction: [0.0, -1.0, -1.0],
_padding1: 0.0,
color: [1.0, 1.0, 1.0],
ambient_strength: 0.1,
}
}
}
// Multi-light support
pub const MAX_LIGHTS: usize = 16;
pub const LIGHT_DIRECTIONAL: u32 = 0;
pub const LIGHT_POINT: u32 = 1;
pub const LIGHT_SPOT: u32 = 2;
/// Per-light data. Must be exactly 64 bytes (4 × vec4) for WGSL array alignment.
#[repr(C)]
#[derive(Copy, Clone, Debug, Pod, Zeroable)]
pub struct LightData {
pub position: [f32; 3],
pub light_type: u32, // 16 bytes
pub direction: [f32; 3],
pub range: f32, // 32 bytes
pub color: [f32; 3],
pub intensity: f32, // 48 bytes
pub inner_cone: f32,
pub outer_cone: f32,
pub _padding: [f32; 2], // 64 bytes
}
impl LightData {
pub fn directional(direction: [f32; 3], color: [f32; 3], intensity: f32) -> Self {
Self {
position: [0.0; 3],
light_type: LIGHT_DIRECTIONAL,
direction,
range: 0.0,
color,
intensity,
inner_cone: 0.0,
outer_cone: 0.0,
_padding: [0.0; 2],
}
}
pub fn point(position: [f32; 3], color: [f32; 3], intensity: f32, range: f32) -> Self {
Self {
position,
light_type: LIGHT_POINT,
direction: [0.0; 3],
range,
color,
intensity,
inner_cone: 0.0,
outer_cone: 0.0,
_padding: [0.0; 2],
}
}
pub fn spot(
position: [f32; 3],
direction: [f32; 3],
color: [f32; 3],
intensity: f32,
range: f32,
inner_angle_deg: f32,
outer_angle_deg: f32,
) -> Self {
Self {
position,
light_type: LIGHT_SPOT,
direction,
range,
color,
intensity,
inner_cone: inner_angle_deg.to_radians().cos(),
outer_cone: outer_angle_deg.to_radians().cos(),
_padding: [0.0; 2],
}
}
}
/// Uniform buffer holding up to MAX_LIGHTS lights plus ambient color.
#[repr(C)]
#[derive(Copy, Clone, Debug, Pod, Zeroable)]
pub struct LightsUniform {
pub lights: [LightData; MAX_LIGHTS], // 1024 bytes
pub count: u32, // 4 bytes
pub _pad_count: [f32; 3], // 12 bytes (aligns ambient_color to a 16-byte boundary for WGSL vec3)
pub ambient_color: [f32; 3], // 12 bytes at offset 1040
pub _pad_end: f32, // 4 bytes → total 1056 (matches WGSL)
}
impl LightsUniform {
pub fn new() -> Self {
Self {
lights: [LightData::zeroed(); MAX_LIGHTS],
count: 0,
_pad_count: [0.0; 3],
ambient_color: [0.03, 0.03, 0.03],
_pad_end: 0.0,
}
}
pub fn add_light(&mut self, light: LightData) {
if (self.count as usize) < MAX_LIGHTS {
self.lights[self.count as usize] = light;
self.count += 1;
}
}
pub fn clear(&mut self) {
self.count = 0;
}
}
#[cfg(test)]
mod tests {
use super::*;
use std::mem;
#[test]
fn test_light_data_size() {
assert_eq!(mem::size_of::<LightData>() % 16, 0,
"LightData must be a multiple of 16 bytes for WGSL array alignment");
assert_eq!(mem::size_of::<LightData>(), 64,
"LightData must be exactly 64 bytes");
}
#[test]
fn test_lights_uniform_add() {
let mut u = LightsUniform::new();
u.add_light(LightData::directional([0.0, -1.0, 0.0], [1.0, 1.0, 1.0], 1.0));
u.add_light(LightData::point([0.0, 5.0, 0.0], [1.0, 0.0, 0.0], 2.0, 10.0));
assert_eq!(u.count, 2);
}
#[test]
fn test_lights_uniform_max() {
let mut u = LightsUniform::new();
for _ in 0..20 {
u.add_light(LightData::directional([0.0, -1.0, 0.0], [1.0, 1.0, 1.0], 1.0));
}
assert_eq!(u.count, MAX_LIGHTS as u32,
"count must be capped at MAX_LIGHTS (16)");
}
#[test]
fn test_spot_light_cone() {
let light = LightData::spot(
[0.0, 10.0, 0.0],
[0.0, -1.0, 0.0],
[1.0, 1.0, 1.0],
3.0,
20.0,
15.0,
30.0,
);
let expected_inner = 15.0_f32.to_radians().cos();
let expected_outer = 30.0_f32.to_radians().cos();
assert!((light.inner_cone - expected_inner).abs() < 1e-6,
"inner_cone should be cos(15°)");
assert!((light.outer_cone - expected_outer).abs() < 1e-6,
"outer_cone should be cos(30°)");
}
}
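Because `LightData` is consumed by WGSL as `array<LightData, 16>`, the 64-byte stride is the load-bearing invariant here. A standalone sketch of that layout and the cone encoding (the mirror struct and helper below are illustrative, not the crate's own types):

```rust
// Illustrative mirror of LightData's layout: four 16-byte rows.
// Row 1: position + light_type, row 2: direction + range,
// row 3: color + intensity, row 4: cone cosines + padding.
#[repr(C)]
#[derive(Copy, Clone)]
pub struct LightDataMirror {
    pub position: [f32; 3],
    pub light_type: u32,
    pub direction: [f32; 3],
    pub range: f32,
    pub color: [f32; 3],
    pub intensity: f32,
    pub inner_cone: f32,
    pub outer_cone: f32,
    pub _padding: [f32; 2],
}

// Cone angles are stored as cosines so the fragment shader can compare
// them directly against dot(spot_dir, -L) without a per-pixel acos.
pub fn spot_cones(inner_deg: f32, outer_deg: f32) -> (f32, f32) {
    (inner_deg.to_radians().cos(), outer_deg.to_radians().cos())
}
```

Storing cosines also makes the inner/outer ordering intuitive to check: a narrower (inner) angle has the larger cosine.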

View File

@@ -0,0 +1,51 @@
use bytemuck::{Pod, Zeroable};
#[repr(C)]
#[derive(Copy, Clone, Debug, Pod, Zeroable)]
pub struct MaterialUniform {
pub base_color: [f32; 4],
pub metallic: f32,
pub roughness: f32,
pub ao: f32,
pub _padding: f32,
}
impl MaterialUniform {
pub fn new() -> Self {
Self {
base_color: [1.0, 1.0, 1.0, 1.0],
metallic: 0.0,
roughness: 0.5,
ao: 1.0,
_padding: 0.0,
}
}
pub fn with_params(base_color: [f32; 4], metallic: f32, roughness: f32) -> Self {
Self {
base_color,
metallic,
roughness,
ao: 1.0,
_padding: 0.0,
}
}
pub fn bind_group_layout(device: &wgpu::Device) -> wgpu::BindGroupLayout {
device.create_bind_group_layout(&wgpu::BindGroupLayoutDescriptor {
label: Some("Material Bind Group Layout"),
entries: &[wgpu::BindGroupLayoutEntry {
binding: 0,
visibility: wgpu::ShaderStages::FRAGMENT,
ty: wgpu::BindingType::Buffer {
ty: wgpu::BufferBindingType::Uniform,
has_dynamic_offset: true,
min_binding_size: wgpu::BufferSize::new(
std::mem::size_of::<MaterialUniform>() as u64,
),
},
count: None,
}],
})
}
}
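The layout above sets `has_dynamic_offset: true`, so per-draw `MaterialUniform` instances can share one buffer and be selected by a dynamic offset at bind time. Each offset must be a multiple of the device's `min_uniform_buffer_offset_alignment` (commonly 256; the real value comes from `device.limits()`). A sketch of the stride computation under that 256-byte assumption:

```rust
// Round a uniform struct's size up to the dynamic-offset alignment.
// `alignment` is assumed non-zero (wgpu reports a power of two).
pub fn aligned_stride(elem_size: u64, alignment: u64) -> u64 {
    (elem_size + alignment - 1) / alignment * alignment
}
```

With the 32-byte `MaterialUniform` this yields a 256-byte stride, so material `i` would live at offset `i * stride` in the shared buffer and be selected via the dynamic-offset argument of `set_bind_group`.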

View File

@@ -0,0 +1,24 @@
use crate::vertex::MeshVertex;
use wgpu::util::DeviceExt;
pub struct Mesh {
pub vertex_buffer: wgpu::Buffer,
pub index_buffer: wgpu::Buffer,
pub num_indices: u32,
}
impl Mesh {
pub fn new(device: &wgpu::Device, vertices: &[MeshVertex], indices: &[u32]) -> Self {
let vertex_buffer = device.create_buffer_init(&wgpu::util::BufferInitDescriptor {
label: Some("Mesh Vertex Buffer"),
contents: bytemuck::cast_slice(vertices),
usage: wgpu::BufferUsages::VERTEX,
});
let index_buffer = device.create_buffer_init(&wgpu::util::BufferInitDescriptor {
label: Some("Mesh Index Buffer"),
contents: bytemuck::cast_slice(indices),
usage: wgpu::BufferUsages::INDEX,
});
Self { vertex_buffer, index_buffer, num_indices: indices.len() as u32 }
}
}

View File

@@ -0,0 +1,57 @@
struct CameraUniform {
view_proj: mat4x4<f32>,
model: mat4x4<f32>,
camera_pos: vec3<f32>,
};
struct LightUniform {
direction: vec3<f32>,
color: vec3<f32>,
ambient_strength: f32,
};
@group(0) @binding(0) var<uniform> camera: CameraUniform;
@group(0) @binding(1) var<uniform> light: LightUniform;
@group(1) @binding(0) var t_diffuse: texture_2d<f32>;
@group(1) @binding(1) var s_diffuse: sampler;
struct VertexInput {
@location(0) position: vec3<f32>,
@location(1) normal: vec3<f32>,
@location(2) uv: vec2<f32>,
};
struct VertexOutput {
@builtin(position) clip_position: vec4<f32>,
@location(0) world_normal: vec3<f32>,
@location(1) world_pos: vec3<f32>,
@location(2) uv: vec2<f32>,
};
@vertex
fn vs_main(model_v: VertexInput) -> VertexOutput {
var out: VertexOutput;
let world_pos = camera.model * vec4<f32>(model_v.position, 1.0);
out.world_pos = world_pos.xyz;
out.world_normal = (camera.model * vec4<f32>(model_v.normal, 0.0)).xyz;
out.clip_position = camera.view_proj * world_pos;
out.uv = model_v.uv;
return out;
}
@fragment
fn fs_main(in: VertexOutput) -> @location(0) vec4<f32> {
let tex_color = textureSample(t_diffuse, s_diffuse, in.uv);
let normal = normalize(in.world_normal);
let light_dir = normalize(-light.direction);
let ambient = light.ambient_strength * light.color;
let diff = max(dot(normal, light_dir), 0.0);
let diffuse = diff * light.color;
let view_dir = normalize(camera.camera_pos - in.world_pos);
let half_dir = normalize(light_dir + view_dir);
let spec = pow(max(dot(normal, half_dir), 0.0), 32.0);
let specular = spec * light.color * 0.5;
let result = (ambient + diffuse + specular) * tex_color.rgb;
return vec4<f32>(result, tex_color.a);
}
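The specular term above is classic Blinn-Phong: a half vector `H = normalize(L + V)` dotted with the normal and raised to a shininess exponent of 32. A CPU sketch of the same math (the small vector helpers are local to this example):

```rust
fn add(a: [f32; 3], b: [f32; 3]) -> [f32; 3] {
    [a[0] + b[0], a[1] + b[1], a[2] + b[2]]
}

fn dot(a: [f32; 3], b: [f32; 3]) -> f32 {
    a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
}

fn normalize(a: [f32; 3]) -> [f32; 3] {
    let len = dot(a, a).sqrt();
    [a[0] / len, a[1] / len, a[2] / len]
}

// Blinn-Phong specular: peaks when the half vector H = normalize(L + V)
// aligns with the surface normal N; the exponent controls highlight tightness.
pub fn blinn_phong_spec(n: [f32; 3], l: [f32; 3], v: [f32; 3], shininess: f32) -> f32 {
    let h = normalize(add(l, v));
    dot(n, h).max(0.0).powf(shininess)
}
```

At exponent 32 the highlight falls off quickly: a view direction 90° off the light direction already contributes almost nothing.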

View File

@@ -0,0 +1,294 @@
use std::collections::HashMap;
use crate::vertex::MeshVertex;
pub struct ObjData {
pub vertices: Vec<MeshVertex>,
pub indices: Vec<u32>,
}
pub fn parse_obj(source: &str) -> ObjData {
let mut positions: Vec<[f32; 3]> = Vec::new();
let mut normals: Vec<[f32; 3]> = Vec::new();
let mut uvs: Vec<[f32; 2]> = Vec::new();
// Intermediate face data: list of (v_idx, vt_idx, vn_idx) per face
let mut faces: Vec<Vec<(u32, u32, u32)>> = Vec::new();
for line in source.lines() {
let line = line.trim();
if line.is_empty() || line.starts_with('#') {
continue;
}
let mut parts = line.splitn(2, char::is_whitespace);
let keyword = parts.next().unwrap_or("");
let rest = parts.next().unwrap_or("").trim();
match keyword {
"v" => {
let coords: Vec<f32> = rest
.split_whitespace()
.filter_map(|s| s.parse().ok())
.collect();
if coords.len() >= 3 {
positions.push([coords[0], coords[1], coords[2]]);
}
}
"vn" => {
let coords: Vec<f32> = rest
.split_whitespace()
.filter_map(|s| s.parse().ok())
.collect();
if coords.len() >= 3 {
normals.push([coords[0], coords[1], coords[2]]);
}
}
"vt" => {
let coords: Vec<f32> = rest
.split_whitespace()
.filter_map(|s| s.parse().ok())
.collect();
if coords.len() >= 2 {
uvs.push([coords[0], coords[1]]);
} else if coords.len() == 1 {
uvs.push([coords[0], 0.0]);
}
}
"f" => {
let face: Vec<(u32, u32, u32)> = rest
.split_whitespace()
.map(|token| parse_face_vertex(token))
.collect();
if face.len() >= 3 {
faces.push(face);
}
}
_ => {}
}
}
// Deduplicate vertices using a HashMap keyed by (v_idx, vt_idx, vn_idx)
let mut vertex_map: HashMap<(u32, u32, u32), u32> = HashMap::new();
let mut vertices: Vec<MeshVertex> = Vec::new();
let mut indices: Vec<u32> = Vec::new();
let default_normal = [0.0_f32, 1.0, 0.0];
let default_uv = [0.0_f32, 0.0];
for face in &faces {
// Triangulate using fan method: (0,1,2), (0,2,3), (0,3,4), ...
let fan_anchor = &face[0];
for i in 1..(face.len() - 1) {
let tri = [fan_anchor, &face[i], &face[i + 1]];
for &&(v_idx, vt_idx, vn_idx) in &tri {
let key = (v_idx, vt_idx, vn_idx);
let final_idx = if let Some(&existing) = vertex_map.get(&key) {
existing
} else {
// OBJ indices are 1-based; 0 means missing
let position = if v_idx > 0 {
positions
.get((v_idx - 1) as usize)
.copied()
.unwrap_or([0.0, 0.0, 0.0])
} else {
[0.0, 0.0, 0.0]
};
let normal = if vn_idx > 0 {
normals
.get((vn_idx - 1) as usize)
.copied()
.unwrap_or(default_normal)
} else {
default_normal
};
let uv = if vt_idx > 0 {
uvs.get((vt_idx - 1) as usize)
.copied()
.unwrap_or(default_uv)
} else {
default_uv
};
let new_idx = vertices.len() as u32;
vertices.push(MeshVertex {
position,
normal,
uv,
tangent: [0.0; 4],
});
vertex_map.insert(key, new_idx);
new_idx
};
indices.push(final_idx);
}
}
}
compute_tangents(&mut vertices, &indices);
ObjData { vertices, indices }
}
pub fn compute_tangents(vertices: &mut [MeshVertex], indices: &[u32]) {
// Accumulate tangent per vertex from triangles
let mut tangents = vec![[0.0f32; 3]; vertices.len()];
let mut bitangents = vec![[0.0f32; 3]; vertices.len()];
for tri in indices.chunks(3) {
if tri.len() < 3 { continue; }
let i0 = tri[0] as usize;
let i1 = tri[1] as usize;
let i2 = tri[2] as usize;
let v0 = vertices[i0]; let v1 = vertices[i1]; let v2 = vertices[i2];
let edge1 = [v1.position[0]-v0.position[0], v1.position[1]-v0.position[1], v1.position[2]-v0.position[2]];
let edge2 = [v2.position[0]-v0.position[0], v2.position[1]-v0.position[1], v2.position[2]-v0.position[2]];
let duv1 = [v1.uv[0]-v0.uv[0], v1.uv[1]-v0.uv[1]];
let duv2 = [v2.uv[0]-v0.uv[0], v2.uv[1]-v0.uv[1]];
let det = duv1[0]*duv2[1] - duv2[0]*duv1[1];
if det.abs() < 1e-8 { continue; }
let f = 1.0 / det;
let t = [
f * (duv2[1]*edge1[0] - duv1[1]*edge2[0]),
f * (duv2[1]*edge1[1] - duv1[1]*edge2[1]),
f * (duv2[1]*edge1[2] - duv1[1]*edge2[2]),
];
let b = [
f * (-duv2[0]*edge1[0] + duv1[0]*edge2[0]),
f * (-duv2[0]*edge1[1] + duv1[0]*edge2[1]),
f * (-duv2[0]*edge1[2] + duv1[0]*edge2[2]),
];
for &idx in &[i0, i1, i2] {
tangents[idx] = [tangents[idx][0]+t[0], tangents[idx][1]+t[1], tangents[idx][2]+t[2]];
bitangents[idx] = [bitangents[idx][0]+b[0], bitangents[idx][1]+b[1], bitangents[idx][2]+b[2]];
}
}
// Orthogonalize and compute handedness
for (i, v) in vertices.iter_mut().enumerate() {
let n = v.normal;
let t = tangents[i];
// Gram-Schmidt orthogonalize: T' = normalize(T - N * dot(N, T))
let n_dot_t = n[0]*t[0] + n[1]*t[1] + n[2]*t[2];
let ortho = [t[0]-n[0]*n_dot_t, t[1]-n[1]*n_dot_t, t[2]-n[2]*n_dot_t];
let len = (ortho[0]*ortho[0] + ortho[1]*ortho[1] + ortho[2]*ortho[2]).sqrt();
if len > 1e-8 {
let normalized = [ortho[0]/len, ortho[1]/len, ortho[2]/len];
// Handedness: sign of dot(cross(N, T'), B)
let cross = [
n[1]*normalized[2] - n[2]*normalized[1],
n[2]*normalized[0] - n[0]*normalized[2],
n[0]*normalized[1] - n[1]*normalized[0],
];
let b = bitangents[i];
let dot_b = cross[0]*b[0] + cross[1]*b[1] + cross[2]*b[2];
let w = if dot_b < 0.0 { -1.0 } else { 1.0 };
v.tangent = [normalized[0], normalized[1], normalized[2], w];
} else {
v.tangent = [1.0, 0.0, 0.0, 1.0]; // fallback
}
}
}
/// Parse a face vertex token of the form "v", "v/vt", "v//vn", or "v/vt/vn".
/// Returns (v_idx, vt_idx, vn_idx) where 0 means absent.
fn parse_face_vertex(token: &str) -> (u32, u32, u32) {
let parts: Vec<&str> = token.split('/').collect();
let v = parts.get(0).and_then(|s| s.parse::<u32>().ok()).unwrap_or(0);
let vt = parts.get(1).and_then(|s| s.parse::<u32>().ok()).unwrap_or(0);
let vn = parts.get(2).and_then(|s| s.parse::<u32>().ok()).unwrap_or(0);
(v, vt, vn)
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_parse_triangle() {
let src = "\
v 0.0 0.0 0.0
v 1.0 0.0 0.0
v 0.0 1.0 0.0
vn 0.0 0.0 1.0
f 1//1 2//1 3//1
";
let data = parse_obj(src);
assert_eq!(data.vertices.len(), 3);
assert_eq!(data.indices.len(), 3);
// Verify positions
assert_eq!(data.vertices[0].position, [0.0, 0.0, 0.0]);
assert_eq!(data.vertices[1].position, [1.0, 0.0, 0.0]);
assert_eq!(data.vertices[2].position, [0.0, 1.0, 0.0]);
// Verify normals
for v in &data.vertices {
assert_eq!(v.normal, [0.0, 0.0, 1.0]);
}
}
#[test]
fn test_parse_quad_triangulated() {
let src = "\
v 0.0 0.0 0.0
v 1.0 0.0 0.0
v 1.0 1.0 0.0
v 0.0 1.0 0.0
vn 0.0 0.0 1.0
f 1//1 2//1 3//1 4//1
";
let data = parse_obj(src);
// 4-vertex quad → 2 triangles → 6 indices
assert_eq!(data.indices.len(), 6);
// 4 unique vertices
assert_eq!(data.vertices.len(), 4);
}
#[test]
fn test_parse_with_uv() {
let src = "\
v 0.0 0.0 0.0
v 1.0 0.0 0.0
v 0.0 1.0 0.0
vt 0.0 0.0
vt 1.0 0.0
vt 0.0 1.0
vn 0.0 0.0 1.0
f 1/1/1 2/2/1 3/3/1
";
let data = parse_obj(src);
assert_eq!(data.vertices.len(), 3);
assert_eq!(data.indices.len(), 3);
// Verify UV coordinates
assert_eq!(data.vertices[0].uv, [0.0, 0.0]);
assert_eq!(data.vertices[1].uv, [1.0, 0.0]);
assert_eq!(data.vertices[2].uv, [0.0, 1.0]);
}
#[test]
fn test_vertex_dedup() {
let src = "\
v 0.0 0.0 0.0
v 1.0 0.0 0.0
v 0.0 1.0 0.0
vn 0.0 0.0 1.0
f 1//1 2//1 3//1
f 1//1 3//1 2//1
";
let data = parse_obj(src);
// Both triangles share the same 3 vertices → only 3 unique vertices
assert_eq!(data.vertices.len(), 3);
// 2 triangles → 6 indices
assert_eq!(data.indices.len(), 6);
}
}
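`parse_obj` triangulates n-gon faces with a fan anchored at the face's first vertex. The index pattern in isolation (a sketch, separate from the parser's deduplication logic):

```rust
// Fan triangulation: an n-gon (v0..v[n-1]) becomes triangles
// (v0, v1, v2), (v0, v2, v3), ..., (v0, v[n-2], v[n-1]).
// Faces with fewer than 3 vertices produce no triangles.
pub fn fan_indices(n: usize) -> Vec<[usize; 3]> {
    (1..n.saturating_sub(1)).map(|i| [0, i, i + 1]).collect()
}
```

A fan is only correct for convex faces, which is the common case for OBJ exports; concave polygons would need ear clipping instead.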

View File

@@ -0,0 +1,66 @@
use crate::vertex::MeshVertex;
use crate::gpu::DEPTH_FORMAT;
pub fn create_pbr_pipeline(
device: &wgpu::Device,
format: wgpu::TextureFormat,
camera_light_layout: &wgpu::BindGroupLayout,
texture_layout: &wgpu::BindGroupLayout,
material_layout: &wgpu::BindGroupLayout,
shadow_layout: &wgpu::BindGroupLayout,
) -> wgpu::RenderPipeline {
let shader = device.create_shader_module(wgpu::ShaderModuleDescriptor {
label: Some("PBR Shader"),
source: wgpu::ShaderSource::Wgsl(include_str!("pbr_shader.wgsl").into()),
});
let layout = device.create_pipeline_layout(&wgpu::PipelineLayoutDescriptor {
label: Some("PBR Pipeline Layout"),
bind_group_layouts: &[camera_light_layout, texture_layout, material_layout, shadow_layout],
immediate_size: 0,
});
device.create_render_pipeline(&wgpu::RenderPipelineDescriptor {
label: Some("PBR Pipeline"),
layout: Some(&layout),
vertex: wgpu::VertexState {
module: &shader,
entry_point: Some("vs_main"),
buffers: &[MeshVertex::LAYOUT],
compilation_options: wgpu::PipelineCompilationOptions::default(),
},
fragment: Some(wgpu::FragmentState {
module: &shader,
entry_point: Some("fs_main"),
targets: &[Some(wgpu::ColorTargetState {
format,
blend: Some(wgpu::BlendState::REPLACE),
write_mask: wgpu::ColorWrites::ALL,
})],
compilation_options: wgpu::PipelineCompilationOptions::default(),
}),
primitive: wgpu::PrimitiveState {
topology: wgpu::PrimitiveTopology::TriangleList,
strip_index_format: None,
front_face: wgpu::FrontFace::Ccw,
cull_mode: Some(wgpu::Face::Back),
polygon_mode: wgpu::PolygonMode::Fill,
unclipped_depth: false,
conservative: false,
},
depth_stencil: Some(wgpu::DepthStencilState {
format: DEPTH_FORMAT,
depth_write_enabled: true,
depth_compare: wgpu::CompareFunction::Less,
stencil: wgpu::StencilState::default(),
bias: wgpu::DepthBiasState::default(),
}),
multisample: wgpu::MultisampleState {
count: 1,
mask: !0,
alpha_to_coverage_enabled: false,
},
multiview_mask: None,
cache: None,
})
}
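The pipeline layout above deliberately stops at four bind group layouts: wgpu's default `max_bind_groups` limit is 4 (group indices 0–3), which is why the IBL bindings live inside the shadow group rather than in a fifth group. A trivial guard expressing that budget (the constant mirrors the default limit, not a queried device value):

```rust
// wgpu's default Limits::max_bind_groups. Some devices allow more,
// but portable code should stay within 4 (group indices 0..=3).
pub const DEFAULT_MAX_BIND_GROUPS: usize = 4;

pub fn fits_default_bind_group_limit(layout_count: usize) -> bool {
    layout_count <= DEFAULT_MAX_BIND_GROUPS
}
```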

View File

@@ -0,0 +1,316 @@
struct CameraUniform {
view_proj: mat4x4<f32>,
model: mat4x4<f32>,
camera_pos: vec3<f32>,
};
struct LightData {
position: vec3<f32>,
light_type: u32,
direction: vec3<f32>,
range: f32,
color: vec3<f32>,
intensity: f32,
inner_cone: f32,
outer_cone: f32,
_padding: vec2<f32>,
};
struct LightsUniform {
lights: array<LightData, 16>,
count: u32,
ambient_color: vec3<f32>,
};
struct MaterialUniform {
base_color: vec4<f32>,
metallic: f32,
roughness: f32,
ao: f32,
};
@group(0) @binding(0) var<uniform> camera: CameraUniform;
@group(0) @binding(1) var<uniform> lights_uniform: LightsUniform;
@group(1) @binding(0) var t_diffuse: texture_2d<f32>;
@group(1) @binding(1) var s_diffuse: sampler;
@group(1) @binding(2) var t_normal: texture_2d<f32>;
@group(1) @binding(3) var s_normal: sampler;
@group(2) @binding(0) var<uniform> material: MaterialUniform;
struct ShadowUniform {
light_view_proj: mat4x4<f32>,
shadow_map_size: f32,
shadow_bias: f32,
};
@group(3) @binding(0) var t_shadow: texture_depth_2d;
@group(3) @binding(1) var s_shadow: sampler_comparison;
@group(3) @binding(2) var<uniform> shadow: ShadowUniform;
@group(3) @binding(3) var t_brdf_lut: texture_2d<f32>;
@group(3) @binding(4) var s_brdf_lut: sampler;
struct VertexInput {
@location(0) position: vec3<f32>,
@location(1) normal: vec3<f32>,
@location(2) uv: vec2<f32>,
@location(3) tangent: vec4<f32>,
};
struct VertexOutput {
@builtin(position) clip_position: vec4<f32>,
@location(0) world_normal: vec3<f32>,
@location(1) world_pos: vec3<f32>,
@location(2) uv: vec2<f32>,
@location(3) light_space_pos: vec4<f32>,
@location(4) world_tangent: vec3<f32>,
@location(5) world_bitangent: vec3<f32>,
};
@vertex
fn vs_main(model_v: VertexInput) -> VertexOutput {
var out: VertexOutput;
let world_pos = camera.model * vec4<f32>(model_v.position, 1.0);
out.world_pos = world_pos.xyz;
// Normal transformed by the model matrix (valid while scale stays uniform;
// non-uniform scale would need the inverse-transpose).
out.world_normal = (camera.model * vec4<f32>(model_v.normal, 0.0)).xyz;
out.clip_position = camera.view_proj * world_pos;
out.uv = model_v.uv;
out.light_space_pos = shadow.light_view_proj * world_pos;
let T = normalize((camera.model * vec4<f32>(model_v.tangent.xyz, 0.0)).xyz);
let B = cross(normalize(out.world_normal), T) * model_v.tangent.w;
out.world_tangent = T;
out.world_bitangent = B;
return out;
}
// GGX Normal Distribution Function
fn distribution_ggx(N: vec3<f32>, H: vec3<f32>, roughness: f32) -> f32 {
let a = roughness * roughness;
let a2 = a * a;
let NdotH = max(dot(N, H), 0.0);
let NdotH2 = NdotH * NdotH;
let denom_inner = NdotH2 * (a2 - 1.0) + 1.0;
let denom = 3.14159265358979 * denom_inner * denom_inner;
return a2 / denom;
}
// Schlick-GGX geometry function (single direction)
fn geometry_schlick_ggx(NdotV: f32, roughness: f32) -> f32 {
let r = roughness + 1.0;
let k = (r * r) / 8.0;
return NdotV / (NdotV * (1.0 - k) + k);
}
// Smith geometry function (both directions)
fn geometry_smith(N: vec3<f32>, V: vec3<f32>, L: vec3<f32>, roughness: f32) -> f32 {
let NdotV = max(dot(N, V), 0.0);
let NdotL = max(dot(N, L), 0.0);
let ggx1 = geometry_schlick_ggx(NdotV, roughness);
let ggx2 = geometry_schlick_ggx(NdotL, roughness);
return ggx1 * ggx2;
}
// Fresnel-Schlick approximation
fn fresnel_schlick(cosTheta: f32, F0: vec3<f32>) -> vec3<f32> {
return F0 + (1.0 - F0) * pow(clamp(1.0 - cosTheta, 0.0, 1.0), 5.0);
}
// Point light distance attenuation: inverse-square with smooth falloff at range boundary
fn attenuation_point(distance: f32, range: f32) -> f32 {
let d_over_r = distance / range;
let d_over_r4 = d_over_r * d_over_r * d_over_r * d_over_r;
let falloff = clamp(1.0 - d_over_r4, 0.0, 1.0);
return (falloff * falloff) / (distance * distance + 0.0001);
}
// Spot light angular attenuation
fn attenuation_spot(light: LightData, L: vec3<f32>) -> f32 {
let spot_dir = normalize(light.direction);
let theta = dot(spot_dir, -L);
return clamp(
(theta - light.outer_cone) / (light.inner_cone - light.outer_cone + 0.0001),
0.0,
1.0,
);
}
// Cook-Torrance BRDF contribution for one light
fn compute_light_contribution(
light: LightData,
N: vec3<f32>,
V: vec3<f32>,
world_pos: vec3<f32>,
F0: vec3<f32>,
albedo: vec3<f32>,
metallic: f32,
roughness: f32,
) -> vec3<f32> {
var L: vec3<f32>;
var radiance: vec3<f32>;
if light.light_type == 0u {
// Directional
L = normalize(-light.direction);
radiance = light.color * light.intensity;
} else if light.light_type == 1u {
// Point
let to_light = light.position - world_pos;
let dist = length(to_light);
L = normalize(to_light);
let att = attenuation_point(dist, light.range);
radiance = light.color * light.intensity * att;
} else {
// Spot
let to_light = light.position - world_pos;
let dist = length(to_light);
L = normalize(to_light);
let att_dist = attenuation_point(dist, light.range);
let att_ang = attenuation_spot(light, L);
radiance = light.color * light.intensity * att_dist * att_ang;
}
let H = normalize(V + L);
let NDF = distribution_ggx(N, H, roughness);
let G = geometry_smith(N, V, L, roughness);
let F = fresnel_schlick(max(dot(H, V), 0.0), F0);
let ks = F;
let kd = (vec3<f32>(1.0) - ks) * (1.0 - metallic);
let numerator = NDF * G * F;
let NdotL = max(dot(N, L), 0.0);
let NdotV = max(dot(N, V), 0.0);
let denominator = 4.0 * NdotV * NdotL + 0.0001;
let specular = numerator / denominator;
return (kd * albedo / 3.14159265358979 + specular) * radiance * NdotL;
}
fn calculate_shadow(light_space_pos: vec4<f32>) -> f32 {
// If shadow_map_size == 0, shadow is disabled
if shadow.shadow_map_size == 0.0 {
return 1.0;
}
let proj_coords = light_space_pos.xyz / light_space_pos.w;
// wgpu NDC: x,y [-1,1], z [0,1]
let shadow_uv = vec2<f32>(
proj_coords.x * 0.5 + 0.5,
-proj_coords.y * 0.5 + 0.5,
);
let current_depth = proj_coords.z;
if shadow_uv.x < 0.0 || shadow_uv.x > 1.0 || shadow_uv.y < 0.0 || shadow_uv.y > 1.0 {
return 1.0;
}
if current_depth > 1.0 || current_depth < 0.0 {
return 1.0;
}
// 3x3 PCF
let texel_size = 1.0 / shadow.shadow_map_size;
var shadow_val = 0.0;
for (var x = -1; x <= 1; x++) {
for (var y = -1; y <= 1; y++) {
let offset = vec2<f32>(f32(x), f32(y)) * texel_size;
shadow_val += textureSampleCompare(
t_shadow, s_shadow,
shadow_uv + offset,
current_depth - shadow.shadow_bias,
);
}
}
return shadow_val / 9.0;
}
// Procedural environment sampling for IBL
fn sample_environment(direction: vec3<f32>, roughness: f32) -> vec3<f32> {
var env: vec3<f32>;
if direction.y > 0.0 {
let horizon = vec3<f32>(0.6, 0.6, 0.5);
let sky = vec3<f32>(0.3, 0.5, 0.9);
env = mix(horizon, sky, pow(direction.y, 0.4));
} else {
let horizon = vec3<f32>(0.6, 0.6, 0.5);
let ground = vec3<f32>(0.1, 0.08, 0.06);
env = mix(horizon, ground, pow(-direction.y, 0.4));
}
let avg = vec3<f32>(0.3, 0.35, 0.4);
return mix(env, avg, roughness * roughness);
}
@fragment
fn fs_main(in: VertexOutput) -> @location(0) vec4<f32> {
let tex_color = textureSample(t_diffuse, s_diffuse, in.uv);
let albedo = material.base_color.rgb * tex_color.rgb;
let metallic = material.metallic;
let roughness = material.roughness;
let ao = material.ao;
// Normal mapping via TBN matrix
let T = normalize(in.world_tangent);
let B = normalize(in.world_bitangent);
let N_geom = normalize(in.world_normal);
// Sample normal map (tangent space normal)
let normal_sample = textureSample(t_normal, s_normal, in.uv).rgb;
let tangent_normal = normal_sample * 2.0 - 1.0;
// TBN matrix transforms tangent space -> world space
let TBN = mat3x3<f32>(T, B, N_geom);
let N = normalize(TBN * tangent_normal);
let V = normalize(camera.camera_pos - in.world_pos);
// F0: base reflectivity; 0.04 for dielectrics, albedo for metals
let F0 = mix(vec3<f32>(0.04, 0.04, 0.04), albedo, metallic);
// Accumulate contribution from all active lights
let shadow_factor = calculate_shadow(in.light_space_pos);
var Lo = vec3<f32>(0.0);
let light_count = min(lights_uniform.count, 16u);
for (var i = 0u; i < light_count; i++) {
var contribution = compute_light_contribution(
lights_uniform.lights[i],
N, V, in.world_pos, F0, albedo, metallic, roughness,
);
if lights_uniform.lights[i].light_type == 0u {
contribution = contribution * shadow_factor;
}
Lo += contribution;
}
// IBL ambient term
let NdotV_ibl = max(dot(N, V), 0.0);
let R = reflect(-V, N);
// Diffuse IBL
let irradiance = sample_environment(N, 1.0);
let F_env = fresnel_schlick(NdotV_ibl, F0);
let kd_ibl = (vec3<f32>(1.0) - F_env) * (1.0 - metallic);
let diffuse_ibl = kd_ibl * albedo * irradiance;
// Specular IBL
let prefiltered = sample_environment(R, roughness);
let brdf_val = textureSample(t_brdf_lut, s_brdf_lut, vec2<f32>(NdotV_ibl, roughness));
let specular_ibl = prefiltered * (F0 * brdf_val.r + vec3<f32>(brdf_val.g));
let ambient = (diffuse_ibl + specular_ibl) * ao;
var color = ambient + Lo;
// Reinhard tone mapping
color = color / (color + vec3<f32>(1.0));
// Gamma correction
color = pow(color, vec3<f32>(1.0 / 2.2));
return vec4<f32>(color, material.base_color.a * tex_color.a);
}
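`attenuation_point` above combines physical inverse-square falloff with a windowing term `(1 - (d/r)^4)^2` that pulls the contribution smoothly to exactly zero at `range`, so lights can be culled at their range without a visible seam. A CPU mirror of the function:

```rust
// Mirror of the shader's attenuation_point: inverse-square falloff,
// windowed so the result reaches exactly zero at distance == range.
// The 0.0001 epsilon matches the shader and avoids division by zero.
pub fn attenuation_point(distance: f32, range: f32) -> f32 {
    let d_over_r = distance / range;
    let window = (1.0 - d_over_r.powi(4)).clamp(0.0, 1.0);
    (window * window) / (distance * distance + 0.0001)
}
```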

View File

@@ -0,0 +1,114 @@
use crate::vertex::{Vertex, MeshVertex};
use crate::gpu::DEPTH_FORMAT;
pub fn create_render_pipeline(
device: &wgpu::Device,
format: wgpu::TextureFormat,
) -> wgpu::RenderPipeline {
let shader = device.create_shader_module(wgpu::ShaderModuleDescriptor {
label: Some("Voltex Shader"),
source: wgpu::ShaderSource::Wgsl(include_str!("shader.wgsl").into()),
});
let layout = device.create_pipeline_layout(&wgpu::PipelineLayoutDescriptor {
label: Some("Render Pipeline Layout"),
bind_group_layouts: &[],
immediate_size: 0,
});
device.create_render_pipeline(&wgpu::RenderPipelineDescriptor {
label: Some("Render Pipeline"),
layout: Some(&layout),
vertex: wgpu::VertexState {
module: &shader,
entry_point: Some("vs_main"),
buffers: &[Vertex::LAYOUT],
compilation_options: wgpu::PipelineCompilationOptions::default(),
},
fragment: Some(wgpu::FragmentState {
module: &shader,
entry_point: Some("fs_main"),
targets: &[Some(wgpu::ColorTargetState {
format,
blend: Some(wgpu::BlendState::REPLACE),
write_mask: wgpu::ColorWrites::ALL,
})],
compilation_options: wgpu::PipelineCompilationOptions::default(),
}),
primitive: wgpu::PrimitiveState {
topology: wgpu::PrimitiveTopology::TriangleList,
strip_index_format: None,
front_face: wgpu::FrontFace::Ccw,
cull_mode: Some(wgpu::Face::Back),
polygon_mode: wgpu::PolygonMode::Fill,
unclipped_depth: false,
conservative: false,
},
depth_stencil: None,
multisample: wgpu::MultisampleState {
count: 1,
mask: !0,
alpha_to_coverage_enabled: false,
},
multiview_mask: None,
cache: None,
})
}
pub fn create_mesh_pipeline(
device: &wgpu::Device,
format: wgpu::TextureFormat,
camera_light_layout: &wgpu::BindGroupLayout,
texture_layout: &wgpu::BindGroupLayout,
) -> wgpu::RenderPipeline {
let shader = device.create_shader_module(wgpu::ShaderModuleDescriptor {
label: Some("Mesh Shader"),
source: wgpu::ShaderSource::Wgsl(include_str!("mesh_shader.wgsl").into()),
});
let layout = device.create_pipeline_layout(&wgpu::PipelineLayoutDescriptor {
label: Some("Mesh Pipeline Layout"),
bind_group_layouts: &[camera_light_layout, texture_layout],
immediate_size: 0,
});
device.create_render_pipeline(&wgpu::RenderPipelineDescriptor {
label: Some("Mesh Pipeline"),
layout: Some(&layout),
vertex: wgpu::VertexState {
module: &shader,
entry_point: Some("vs_main"),
buffers: &[MeshVertex::LAYOUT],
compilation_options: wgpu::PipelineCompilationOptions::default(),
},
fragment: Some(wgpu::FragmentState {
module: &shader,
entry_point: Some("fs_main"),
targets: &[Some(wgpu::ColorTargetState {
format,
blend: Some(wgpu::BlendState::REPLACE),
write_mask: wgpu::ColorWrites::ALL,
})],
compilation_options: wgpu::PipelineCompilationOptions::default(),
}),
primitive: wgpu::PrimitiveState {
topology: wgpu::PrimitiveTopology::TriangleList,
strip_index_format: None,
front_face: wgpu::FrontFace::Ccw,
cull_mode: Some(wgpu::Face::Back),
polygon_mode: wgpu::PolygonMode::Fill,
unclipped_depth: false,
conservative: false,
},
depth_stencil: Some(wgpu::DepthStencilState {
format: DEPTH_FORMAT,
depth_write_enabled: true,
depth_compare: wgpu::CompareFunction::Less,
stencil: wgpu::StencilState::default(),
bias: wgpu::DepthBiasState::default(),
}),
multisample: wgpu::MultisampleState { count: 1, mask: !0, alpha_to_coverage_enabled: false },
multiview_mask: None,
cache: None,
})
}

View File

@@ -0,0 +1,22 @@
struct VertexInput {
@location(0) position: vec3<f32>,
@location(1) color: vec3<f32>,
};
struct VertexOutput {
@builtin(position) clip_position: vec4<f32>,
@location(0) color: vec3<f32>,
};
@vertex
fn vs_main(model: VertexInput) -> VertexOutput {
var out: VertexOutput;
out.color = model.color;
out.clip_position = vec4<f32>(model.position, 1.0);
return out;
}
@fragment
fn fs_main(in: VertexOutput) -> @location(0) vec4<f32> {
return vec4<f32>(in.color, 1.0);
}

View File

@@ -0,0 +1,153 @@
use bytemuck::{Pod, Zeroable};
pub const SHADOW_MAP_SIZE: u32 = 2048;
pub const SHADOW_FORMAT: wgpu::TextureFormat = wgpu::TextureFormat::Depth32Float;
pub struct ShadowMap {
pub texture: wgpu::Texture,
pub view: wgpu::TextureView,
pub sampler: wgpu::Sampler,
}
impl ShadowMap {
pub fn new(device: &wgpu::Device) -> Self {
let texture = device.create_texture(&wgpu::TextureDescriptor {
label: Some("Shadow Map Texture"),
size: wgpu::Extent3d {
width: SHADOW_MAP_SIZE,
height: SHADOW_MAP_SIZE,
depth_or_array_layers: 1,
},
mip_level_count: 1,
sample_count: 1,
dimension: wgpu::TextureDimension::D2,
format: SHADOW_FORMAT,
usage: wgpu::TextureUsages::RENDER_ATTACHMENT | wgpu::TextureUsages::TEXTURE_BINDING,
view_formats: &[],
});
let view = texture.create_view(&wgpu::TextureViewDescriptor::default());
let sampler = device.create_sampler(&wgpu::SamplerDescriptor {
label: Some("Shadow Map Sampler"),
address_mode_u: wgpu::AddressMode::ClampToEdge,
address_mode_v: wgpu::AddressMode::ClampToEdge,
address_mode_w: wgpu::AddressMode::ClampToEdge,
mag_filter: wgpu::FilterMode::Linear,
min_filter: wgpu::FilterMode::Linear,
mipmap_filter: wgpu::MipmapFilterMode::Nearest,
compare: Some(wgpu::CompareFunction::LessEqual),
..Default::default()
});
Self { texture, view, sampler }
}
pub fn bind_group_layout(device: &wgpu::Device) -> wgpu::BindGroupLayout {
device.create_bind_group_layout(&wgpu::BindGroupLayoutDescriptor {
label: Some("Shadow Bind Group Layout"),
entries: &[
// binding 0: depth texture
wgpu::BindGroupLayoutEntry {
binding: 0,
visibility: wgpu::ShaderStages::FRAGMENT,
ty: wgpu::BindingType::Texture {
sample_type: wgpu::TextureSampleType::Depth,
view_dimension: wgpu::TextureViewDimension::D2,
multisampled: false,
},
count: None,
},
// binding 1: comparison sampler
wgpu::BindGroupLayoutEntry {
binding: 1,
visibility: wgpu::ShaderStages::FRAGMENT,
ty: wgpu::BindingType::Sampler(wgpu::SamplerBindingType::Comparison),
count: None,
},
// binding 2: ShadowUniform buffer
wgpu::BindGroupLayoutEntry {
binding: 2,
visibility: wgpu::ShaderStages::VERTEX | wgpu::ShaderStages::FRAGMENT,
ty: wgpu::BindingType::Buffer {
ty: wgpu::BufferBindingType::Uniform,
has_dynamic_offset: false,
min_binding_size: wgpu::BufferSize::new(
std::mem::size_of::<ShadowUniform>() as u64,
),
},
count: None,
},
// binding 3: BRDF LUT texture
wgpu::BindGroupLayoutEntry {
binding: 3,
visibility: wgpu::ShaderStages::FRAGMENT,
ty: wgpu::BindingType::Texture {
multisampled: false,
view_dimension: wgpu::TextureViewDimension::D2,
sample_type: wgpu::TextureSampleType::Float { filterable: true },
},
count: None,
},
// binding 4: BRDF LUT sampler
wgpu::BindGroupLayoutEntry {
binding: 4,
visibility: wgpu::ShaderStages::FRAGMENT,
ty: wgpu::BindingType::Sampler(wgpu::SamplerBindingType::Filtering),
count: None,
},
],
})
}
pub fn create_bind_group(
&self,
device: &wgpu::Device,
layout: &wgpu::BindGroupLayout,
shadow_uniform_buffer: &wgpu::Buffer,
brdf_lut_view: &wgpu::TextureView,
brdf_lut_sampler: &wgpu::Sampler,
) -> wgpu::BindGroup {
device.create_bind_group(&wgpu::BindGroupDescriptor {
label: Some("Shadow Bind Group"),
layout,
entries: &[
wgpu::BindGroupEntry {
binding: 0,
resource: wgpu::BindingResource::TextureView(&self.view),
},
wgpu::BindGroupEntry {
binding: 1,
resource: wgpu::BindingResource::Sampler(&self.sampler),
},
wgpu::BindGroupEntry {
binding: 2,
resource: shadow_uniform_buffer.as_entire_binding(),
},
wgpu::BindGroupEntry {
binding: 3,
resource: wgpu::BindingResource::TextureView(brdf_lut_view),
},
wgpu::BindGroupEntry {
binding: 4,
resource: wgpu::BindingResource::Sampler(brdf_lut_sampler),
},
],
})
}
}
#[repr(C)]
#[derive(Copy, Clone, Debug, Pod, Zeroable)]
pub struct ShadowUniform {
pub light_view_proj: [[f32; 4]; 4],
pub shadow_map_size: f32,
pub shadow_bias: f32,
pub _padding: [f32; 2],
}
#[repr(C)]
#[derive(Copy, Clone, Debug, Pod, Zeroable)]
pub struct ShadowPassUniform {
pub light_vp_model: [[f32; 4]; 4],
}


@@ -0,0 +1,77 @@
use crate::vertex::MeshVertex;
use crate::shadow::{SHADOW_FORMAT, ShadowPassUniform};
pub fn shadow_pass_bind_group_layout(device: &wgpu::Device) -> wgpu::BindGroupLayout {
device.create_bind_group_layout(&wgpu::BindGroupLayoutDescriptor {
label: Some("Shadow Pass Bind Group Layout"),
entries: &[
wgpu::BindGroupLayoutEntry {
binding: 0,
visibility: wgpu::ShaderStages::VERTEX,
ty: wgpu::BindingType::Buffer {
ty: wgpu::BufferBindingType::Uniform,
has_dynamic_offset: true,
min_binding_size: wgpu::BufferSize::new(
std::mem::size_of::<ShadowPassUniform>() as u64,
),
},
count: None,
},
],
})
}
pub fn create_shadow_pipeline(
device: &wgpu::Device,
shadow_pass_layout: &wgpu::BindGroupLayout,
) -> wgpu::RenderPipeline {
let shader = device.create_shader_module(wgpu::ShaderModuleDescriptor {
label: Some("Shadow Shader"),
source: wgpu::ShaderSource::Wgsl(include_str!("shadow_shader.wgsl").into()),
});
let layout = device.create_pipeline_layout(&wgpu::PipelineLayoutDescriptor {
label: Some("Shadow Pipeline Layout"),
bind_group_layouts: &[shadow_pass_layout],
immediate_size: 0,
});
device.create_render_pipeline(&wgpu::RenderPipelineDescriptor {
label: Some("Shadow Pipeline"),
layout: Some(&layout),
vertex: wgpu::VertexState {
module: &shader,
entry_point: Some("vs_main"),
buffers: &[MeshVertex::LAYOUT],
compilation_options: wgpu::PipelineCompilationOptions::default(),
},
fragment: None,
primitive: wgpu::PrimitiveState {
topology: wgpu::PrimitiveTopology::TriangleList,
strip_index_format: None,
front_face: wgpu::FrontFace::Ccw,
cull_mode: Some(wgpu::Face::Front),
polygon_mode: wgpu::PolygonMode::Fill,
unclipped_depth: false,
conservative: false,
},
depth_stencil: Some(wgpu::DepthStencilState {
format: SHADOW_FORMAT,
depth_write_enabled: true,
depth_compare: wgpu::CompareFunction::Less,
stencil: wgpu::StencilState::default(),
bias: wgpu::DepthBiasState {
constant: 2,
slope_scale: 2.0,
clamp: 0.0,
},
}),
multisample: wgpu::MultisampleState {
count: 1,
mask: !0,
alpha_to_coverage_enabled: false,
},
multiview_mask: None,
cache: None,
})
}


@@ -0,0 +1,17 @@
struct ShadowPassUniform {
light_vp_model: mat4x4<f32>,
};
@group(0) @binding(0) var<uniform> shadow_pass: ShadowPassUniform;
struct VertexInput {
@location(0) position: vec3<f32>,
@location(1) normal: vec3<f32>,
@location(2) uv: vec2<f32>,
@location(3) tangent: vec4<f32>,
};
@vertex
fn vs_main(model_v: VertexInput) -> @builtin(position) vec4<f32> {
return shadow_pass.light_vp_model * vec4<f32>(model_v.position, 1.0);
}


@@ -0,0 +1,99 @@
use crate::vertex::MeshVertex;
use std::f32::consts::PI;
/// Generate a UV sphere with Y-up coordinate system.
/// Returns (vertices, indices).
pub fn generate_sphere(radius: f32, sectors: u32, stacks: u32) -> (Vec<MeshVertex>, Vec<u32>) {
let mut vertices: Vec<MeshVertex> = Vec::new();
let mut indices: Vec<u32> = Vec::new();
let sector_step = 2.0 * PI / sectors as f32;
let stack_step = PI / stacks as f32;
for i in 0..=stacks {
// Stack angle from PI/2 (top) to -PI/2 (bottom)
let stack_angle = PI / 2.0 - (i as f32) * stack_step;
let xz = radius * stack_angle.cos();
let y = radius * stack_angle.sin();
for j in 0..=sectors {
let sector_angle = (j as f32) * sector_step;
let x = xz * sector_angle.cos();
let z = xz * sector_angle.sin();
let position = [x, y, z];
let normal = [x / radius, y / radius, z / radius];
let uv = [
j as f32 / sectors as f32,
i as f32 / stacks as f32,
];
// Tangent follows the longitude direction (increasing sector angle) in XZ plane
let tangent_x = -sector_angle.sin();
let tangent_z = sector_angle.cos();
vertices.push(MeshVertex {
position,
normal,
uv,
tangent: [tangent_x, 0.0, tangent_z, 1.0],
});
}
}
// Indices: two triangles per quad
for i in 0..stacks {
for j in 0..sectors {
let k1 = i * (sectors + 1) + j;
let k2 = k1 + sectors + 1;
// First triangle
indices.push(k1);
indices.push(k2);
indices.push(k1 + 1);
// Second triangle
indices.push(k1 + 1);
indices.push(k2);
indices.push(k2 + 1);
}
}
(vertices, indices)
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_sphere_vertex_count() {
let sectors = 36u32;
let stacks = 18u32;
let (vertices, _) = generate_sphere(1.0, sectors, stacks);
assert_eq!(vertices.len(), ((stacks + 1) * (sectors + 1)) as usize);
}
#[test]
fn test_sphere_index_count() {
let sectors = 36u32;
let stacks = 18u32;
let (_, indices) = generate_sphere(1.0, sectors, stacks);
assert_eq!(indices.len(), (stacks * sectors * 6) as usize);
}
#[test]
fn test_sphere_normals_unit_length() {
let (vertices, _) = generate_sphere(1.0, 12, 8);
for v in &vertices {
let n = v.normal;
let len = (n[0] * n[0] + n[1] * n[1] + n[2] * n[2]).sqrt();
assert!(
(len - 1.0).abs() < 1e-5,
"Normal length {} is not unit length",
len
);
}
}
}


@@ -0,0 +1,371 @@
pub struct BmpImage {
pub width: u32,
pub height: u32,
pub pixels: Vec<u8>, // RGBA
}
pub fn parse_bmp(data: &[u8]) -> Result<BmpImage, String> {
if data.len() < 54 {
return Err(format!("BMP too small: {} bytes", data.len()));
}
if data[0] != b'B' || data[1] != b'M' {
return Err("Not a BMP file: missing 'BM' signature".to_string());
}
let pixel_offset = u32::from_le_bytes(data[10..14].try_into().unwrap()) as usize;
let width_raw = i32::from_le_bytes(data[18..22].try_into().unwrap());
let height_raw = i32::from_le_bytes(data[22..26].try_into().unwrap());
let bpp = u16::from_le_bytes(data[28..30].try_into().unwrap());
let compression = u32::from_le_bytes(data[30..34].try_into().unwrap());
if bpp != 24 && bpp != 32 {
return Err(format!("Unsupported BMP bpp: {} (only 24 and 32 supported)", bpp));
}
if compression != 0 {
return Err(format!("Unsupported BMP compression: {} (only 0/uncompressed supported)", compression));
}
let width = width_raw.unsigned_abs();
let height = height_raw.unsigned_abs();
let bottom_up = height_raw > 0;
let bytes_per_pixel = (bpp as usize) / 8;
// BMP rows are padded to 4-byte boundaries
let row_size = ((bpp as usize * width as usize + 31) / 32) * 4;
let required_data_size = pixel_offset + row_size * height as usize;
if data.len() < required_data_size {
return Err(format!(
"BMP data too small: need {} bytes, got {}",
required_data_size,
data.len()
));
}
let mut pixels = vec![0u8; (width * height * 4) as usize];
for row in 0..height as usize {
let src_row = if bottom_up { height as usize - 1 - row } else { row };
let row_start = pixel_offset + src_row * row_size;
for col in 0..width as usize {
let src_offset = row_start + col * bytes_per_pixel;
let dst_offset = (row * width as usize + col) * 4;
// BMP stores BGR(A), convert to RGBA
let b = data[src_offset];
let g = data[src_offset + 1];
let r = data[src_offset + 2];
let a = if bytes_per_pixel == 4 { data[src_offset + 3] } else { 255 };
pixels[dst_offset] = r;
pixels[dst_offset + 1] = g;
pixels[dst_offset + 2] = b;
pixels[dst_offset + 3] = a;
}
}
Ok(BmpImage { width, height, pixels })
}
pub struct GpuTexture {
pub texture: wgpu::Texture,
pub view: wgpu::TextureView,
pub sampler: wgpu::Sampler,
pub bind_group: wgpu::BindGroup,
}
impl GpuTexture {
pub fn from_rgba(
device: &wgpu::Device,
queue: &wgpu::Queue,
width: u32,
height: u32,
pixels: &[u8],
layout: &wgpu::BindGroupLayout,
) -> Self {
let size = wgpu::Extent3d {
width,
height,
depth_or_array_layers: 1,
};
let texture = device.create_texture(&wgpu::TextureDescriptor {
label: Some("BmpTexture"),
size,
mip_level_count: 1,
sample_count: 1,
dimension: wgpu::TextureDimension::D2,
format: wgpu::TextureFormat::Rgba8UnormSrgb,
usage: wgpu::TextureUsages::TEXTURE_BINDING | wgpu::TextureUsages::COPY_DST,
view_formats: &[],
});
queue.write_texture(
wgpu::TexelCopyTextureInfo {
texture: &texture,
mip_level: 0,
origin: wgpu::Origin3d::ZERO,
aspect: wgpu::TextureAspect::All,
},
pixels,
wgpu::TexelCopyBufferLayout {
offset: 0,
bytes_per_row: Some(4 * width),
rows_per_image: Some(height),
},
size,
);
let view = texture.create_view(&wgpu::TextureViewDescriptor::default());
let sampler = device.create_sampler(&wgpu::SamplerDescriptor {
label: Some("BmpSampler"),
address_mode_u: wgpu::AddressMode::Repeat,
address_mode_v: wgpu::AddressMode::Repeat,
address_mode_w: wgpu::AddressMode::Repeat,
mag_filter: wgpu::FilterMode::Linear,
min_filter: wgpu::FilterMode::Linear,
mipmap_filter: wgpu::MipmapFilterMode::Linear,
..Default::default()
});
let bind_group = device.create_bind_group(&wgpu::BindGroupDescriptor {
label: Some("BmpBindGroup"),
layout,
entries: &[
wgpu::BindGroupEntry {
binding: 0,
resource: wgpu::BindingResource::TextureView(&view),
},
wgpu::BindGroupEntry {
binding: 1,
resource: wgpu::BindingResource::Sampler(&sampler),
},
],
});
Self { texture, view, sampler, bind_group }
}
pub fn white_1x1(
device: &wgpu::Device,
queue: &wgpu::Queue,
layout: &wgpu::BindGroupLayout,
) -> Self {
Self::from_rgba(device, queue, 1, 1, &[255, 255, 255, 255], layout)
}
pub fn bind_group_layout(device: &wgpu::Device) -> wgpu::BindGroupLayout {
device.create_bind_group_layout(&wgpu::BindGroupLayoutDescriptor {
label: Some("TextureBindGroupLayout"),
entries: &[
wgpu::BindGroupLayoutEntry {
binding: 0,
visibility: wgpu::ShaderStages::FRAGMENT,
ty: wgpu::BindingType::Texture {
multisampled: false,
view_dimension: wgpu::TextureViewDimension::D2,
sample_type: wgpu::TextureSampleType::Float { filterable: true },
},
count: None,
},
wgpu::BindGroupLayoutEntry {
binding: 1,
visibility: wgpu::ShaderStages::FRAGMENT,
ty: wgpu::BindingType::Sampler(wgpu::SamplerBindingType::Filtering),
count: None,
},
],
})
}
/// Create a 1x1 flat normal map texture (tangent-space up: 0,0,1).
/// Uses Rgba8Unorm (linear) since normal data is not sRGB.
pub fn flat_normal_1x1(
device: &wgpu::Device,
queue: &wgpu::Queue,
) -> (wgpu::Texture, wgpu::TextureView, wgpu::Sampler) {
let size = wgpu::Extent3d {
width: 1,
height: 1,
depth_or_array_layers: 1,
};
// [128, 128, 255, 255] maps to (0, 0, 1) after * 2 - 1
let pixels: [u8; 4] = [128, 128, 255, 255];
let texture = device.create_texture(&wgpu::TextureDescriptor {
label: Some("FlatNormalTexture"),
size,
mip_level_count: 1,
sample_count: 1,
dimension: wgpu::TextureDimension::D2,
format: wgpu::TextureFormat::Rgba8Unorm,
usage: wgpu::TextureUsages::TEXTURE_BINDING | wgpu::TextureUsages::COPY_DST,
view_formats: &[],
});
queue.write_texture(
wgpu::TexelCopyTextureInfo {
texture: &texture,
mip_level: 0,
origin: wgpu::Origin3d::ZERO,
aspect: wgpu::TextureAspect::All,
},
&pixels,
wgpu::TexelCopyBufferLayout {
offset: 0,
bytes_per_row: Some(4),
rows_per_image: Some(1),
},
size,
);
let view = texture.create_view(&wgpu::TextureViewDescriptor::default());
let sampler = device.create_sampler(&wgpu::SamplerDescriptor {
label: Some("FlatNormalSampler"),
address_mode_u: wgpu::AddressMode::Repeat,
address_mode_v: wgpu::AddressMode::Repeat,
address_mode_w: wgpu::AddressMode::Repeat,
mag_filter: wgpu::FilterMode::Linear,
min_filter: wgpu::FilterMode::Linear,
mipmap_filter: wgpu::MipmapFilterMode::Linear,
..Default::default()
});
(texture, view, sampler)
}
}
/// Bind group layout for PBR textures: albedo (bindings 0-1) + normal map (bindings 2-3).
pub fn pbr_texture_bind_group_layout(device: &wgpu::Device) -> wgpu::BindGroupLayout {
device.create_bind_group_layout(&wgpu::BindGroupLayoutDescriptor {
label: Some("PBR Texture Bind Group Layout"),
entries: &[
// binding 0: albedo texture
wgpu::BindGroupLayoutEntry {
binding: 0,
visibility: wgpu::ShaderStages::FRAGMENT,
ty: wgpu::BindingType::Texture {
multisampled: false,
view_dimension: wgpu::TextureViewDimension::D2,
sample_type: wgpu::TextureSampleType::Float { filterable: true },
},
count: None,
},
// binding 1: albedo sampler
wgpu::BindGroupLayoutEntry {
binding: 1,
visibility: wgpu::ShaderStages::FRAGMENT,
ty: wgpu::BindingType::Sampler(wgpu::SamplerBindingType::Filtering),
count: None,
},
// binding 2: normal map texture
wgpu::BindGroupLayoutEntry {
binding: 2,
visibility: wgpu::ShaderStages::FRAGMENT,
ty: wgpu::BindingType::Texture {
multisampled: false,
view_dimension: wgpu::TextureViewDimension::D2,
sample_type: wgpu::TextureSampleType::Float { filterable: true },
},
count: None,
},
// binding 3: normal map sampler
wgpu::BindGroupLayoutEntry {
binding: 3,
visibility: wgpu::ShaderStages::FRAGMENT,
ty: wgpu::BindingType::Sampler(wgpu::SamplerBindingType::Filtering),
count: None,
},
],
})
}
/// Create a bind group for PBR textures (albedo + normal map).
pub fn create_pbr_texture_bind_group(
device: &wgpu::Device,
layout: &wgpu::BindGroupLayout,
albedo_view: &wgpu::TextureView,
albedo_sampler: &wgpu::Sampler,
normal_view: &wgpu::TextureView,
normal_sampler: &wgpu::Sampler,
) -> wgpu::BindGroup {
device.create_bind_group(&wgpu::BindGroupDescriptor {
label: Some("PBR Texture Bind Group"),
layout,
entries: &[
wgpu::BindGroupEntry {
binding: 0,
resource: wgpu::BindingResource::TextureView(albedo_view),
},
wgpu::BindGroupEntry {
binding: 1,
resource: wgpu::BindingResource::Sampler(albedo_sampler),
},
wgpu::BindGroupEntry {
binding: 2,
resource: wgpu::BindingResource::TextureView(normal_view),
},
wgpu::BindGroupEntry {
binding: 3,
resource: wgpu::BindingResource::Sampler(normal_sampler),
},
],
})
}
#[cfg(test)]
mod tests {
use super::*;
fn make_bmp_24bit(width: u32, height: u32, pixel_bgr: [u8; 3]) -> Vec<u8> {
let row_size = ((24 * width + 31) / 32 * 4) as usize;
let pixel_data_size = row_size * height as usize;
let file_size = 54 + pixel_data_size;
let mut data = vec![0u8; file_size];
data[0] = b'B';
data[1] = b'M';
data[2..6].copy_from_slice(&(file_size as u32).to_le_bytes());
data[10..14].copy_from_slice(&54u32.to_le_bytes());
data[14..18].copy_from_slice(&40u32.to_le_bytes());
data[18..22].copy_from_slice(&(width as i32).to_le_bytes());
data[22..26].copy_from_slice(&(height as i32).to_le_bytes());
data[26..28].copy_from_slice(&1u16.to_le_bytes());
data[28..30].copy_from_slice(&24u16.to_le_bytes());
for row in 0..height {
for col in 0..width {
let offset = 54 + (row as usize) * row_size + (col as usize) * 3;
data[offset] = pixel_bgr[0];
data[offset + 1] = pixel_bgr[1];
data[offset + 2] = pixel_bgr[2];
}
}
data
}
#[test]
fn test_parse_bmp_24bit() {
let bmp = make_bmp_24bit(2, 2, [255, 0, 0]); // BGR blue
let img = parse_bmp(&bmp).unwrap();
assert_eq!(img.width, 2);
assert_eq!(img.height, 2);
assert_eq!(img.pixels[0], 0); // R
assert_eq!(img.pixels[1], 0); // G
assert_eq!(img.pixels[2], 255); // B
assert_eq!(img.pixels[3], 255); // A
}
#[test]
fn test_parse_bmp_not_bmp() {
let data = vec![0u8; 100];
assert!(parse_bmp(&data).is_err());
}
#[test]
fn test_parse_bmp_too_small() {
let data = vec![0u8; 10];
assert!(parse_bmp(&data).is_err());
}
}


@@ -0,0 +1,65 @@
use bytemuck::{Pod, Zeroable};
#[repr(C)]
#[derive(Copy, Clone, Debug, Pod, Zeroable)]
pub struct Vertex {
pub position: [f32; 3],
pub color: [f32; 3],
}
impl Vertex {
pub const LAYOUT: wgpu::VertexBufferLayout<'static> = wgpu::VertexBufferLayout {
array_stride: std::mem::size_of::<Vertex>() as wgpu::BufferAddress,
step_mode: wgpu::VertexStepMode::Vertex,
attributes: &[
wgpu::VertexAttribute {
offset: 0,
shader_location: 0,
format: wgpu::VertexFormat::Float32x3,
},
wgpu::VertexAttribute {
offset: std::mem::size_of::<[f32; 3]>() as wgpu::BufferAddress,
shader_location: 1,
format: wgpu::VertexFormat::Float32x3,
},
],
};
}
#[repr(C)]
#[derive(Copy, Clone, Debug, Pod, Zeroable)]
pub struct MeshVertex {
pub position: [f32; 3],
pub normal: [f32; 3],
pub uv: [f32; 2],
pub tangent: [f32; 4],
}
impl MeshVertex {
pub const LAYOUT: wgpu::VertexBufferLayout<'static> = wgpu::VertexBufferLayout {
array_stride: std::mem::size_of::<MeshVertex>() as wgpu::BufferAddress,
step_mode: wgpu::VertexStepMode::Vertex,
attributes: &[
wgpu::VertexAttribute {
offset: 0,
shader_location: 0,
format: wgpu::VertexFormat::Float32x3,
},
wgpu::VertexAttribute {
offset: 12,
shader_location: 1,
format: wgpu::VertexFormat::Float32x3,
},
wgpu::VertexAttribute {
offset: 24,
shader_location: 2,
format: wgpu::VertexFormat::Float32x2,
},
wgpu::VertexAttribute {
offset: 32,
shader_location: 3,
format: wgpu::VertexFormat::Float32x4,
},
],
};
}

docs/DEFERRED.md Normal file

@@ -0,0 +1,48 @@
# Deferred / Simplified Implementation Items
## Phase 2
- **Hand-rolled PNG decoder** — deflate + filtering. Currently only BMP is supported.
- **Hand-rolled JPG decoder** — Huffman + DCT. Not implemented.
- **glTF parser** — only OBJ is supported for now.
## Phase 3a
- **Archetype-based storage** → using SparseSet for now. Switch if large scenes hit performance issues.
- **System scheduler** — no dependency-ordered or parallel execution. Systems are plain function calls.
- **Query filters** — With, Without, Changed not implemented. Only query/query2 exist.
- **query3+** — only up to query2.
## Phase 3b
- **JSON serialization** → using a custom .vscn text format instead.
- **Binary scene format** — not implemented.
- **Arbitrary component serialization** — only Transform/Parent/Tag are supported.
## Phase 3c
- **Async loading** — synchronous insert only.
- **Hot reload** — no file-change detection.
## Phase 4a
- **Metallic/Roughness/AO texture maps** → parameter values only; no texture sampling.
- **Emissive maps** — not implemented.
## Phase 4b
- **CSM (Cascaded Shadow Maps)** → single cascade only; distant shadows are low resolution.
- **Point light shadows (cube map)** — not implemented.
- **Spot light shadows** — not implemented.
- **Light culling** — no tile/cluster-based culling.
## Phase 4c
- **HDR cube-map environment maps** → replaced with a procedural sky function.
- **Irradiance/prefiltered map convolution** → procedural approximation.
- **GPU compute BRDF LUT** → generated on the CPU (256x256).
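The CPU-generated BRDF LUT can be sketched as a pure-Rust split-sum integration in the style of Karis's environment-BRDF precomputation. Names such as `integrate_brdf` and `hammersley` are illustrative, not the engine's actual API:

```rust
use std::f32::consts::PI;

/// Van der Corput radical inverse in base 2.
fn radical_inverse_vdc(bits: u32) -> f32 {
    bits.reverse_bits() as f32 * 2.328_306_4e-10 // 1 / 2^32
}

/// Low-discrepancy 2D sample i of n (Hammersley sequence).
fn hammersley(i: u32, n: u32) -> (f32, f32) {
    (i as f32 / n as f32, radical_inverse_vdc(i))
}

/// GGX importance-sampled half vector around the +Z normal.
fn importance_sample_ggx(xi: (f32, f32), roughness: f32) -> [f32; 3] {
    let a = roughness * roughness;
    let phi = 2.0 * PI * xi.0;
    let cos_theta = ((1.0 - xi.1) / (1.0 + (a * a - 1.0) * xi.1)).sqrt();
    let sin_theta = (1.0 - cos_theta * cos_theta).sqrt();
    [phi.cos() * sin_theta, phi.sin() * sin_theta, cos_theta]
}

/// Schlick-GGX Smith geometry term with the IBL k remapping (k = roughness^2 / 2).
fn g_smith_ibl(n_dot_v: f32, n_dot_l: f32, roughness: f32) -> f32 {
    let k = roughness * roughness / 2.0;
    let g1 = |n_dot_x: f32| n_dot_x / (n_dot_x * (1.0 - k) + k);
    g1(n_dot_v) * g1(n_dot_l)
}

/// Integrate the environment BRDF for one LUT cell.
/// The result is applied in the shader as `F0 * scale + bias`.
fn integrate_brdf(n_dot_v: f32, roughness: f32, samples: u32) -> (f32, f32) {
    // View vector in tangent space (normal = +Z).
    let v = [(1.0 - n_dot_v * n_dot_v).sqrt(), 0.0, n_dot_v];
    let (mut scale, mut bias) = (0.0f32, 0.0f32);
    for i in 0..samples {
        let h = importance_sample_ggx(hammersley(i, samples), roughness);
        let v_dot_h = v[0] * h[0] + v[1] * h[1] + v[2] * h[2];
        // Only the z component of L = reflect(-V, H) is needed for N·L.
        let n_dot_l = (2.0 * v_dot_h * h[2] - v[2]).max(0.0);
        if n_dot_l > 0.0 {
            let n_dot_h = h[2].max(0.0);
            let v_dot_h = v_dot_h.max(0.0);
            let g = g_smith_ibl(n_dot_v, n_dot_l, roughness);
            let g_vis = g * v_dot_h / (n_dot_h * n_dot_v).max(1e-6);
            let fc = (1.0 - v_dot_h).powi(5);
            scale += (1.0 - fc) * g_vis;
            bias += fc * g_vis;
        }
    }
    (scale / samples as f32, bias / samples as f32)
}
```

Filling a 256x256 LUT means calling `integrate_brdf` once per texel, with `n_dot_v` and `roughness` swept over (0, 1].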
## Rendering Limitations
- **Per-entity dynamic UBO** — thousands of entities will need instancing.
- **max_bind_groups=4** — resolved by merging IBL into the shadow group. If more groups are needed, merge resources or go bindless.

docs/STATUS.md Normal file

@@ -0,0 +1,95 @@
# Voltex Engine - Project Status
## Completed Phases
### Phase 1: Foundation (triangle rendering)
- voltex_math: Vec3
- voltex_platform: VoltexWindow, InputState, GameTimer
- voltex_renderer: GpuContext, Vertex, shader, pipeline
- examples/triangle
### Phase 2: Rendering Basics
- voltex_math: Vec2, Vec4, Mat4 (transforms, look_at, perspective, orthographic)
- voltex_renderer: MeshVertex(+tangent), Mesh, depth buffer, OBJ parser, Camera, FpsController
- voltex_renderer: Blinn-Phong shader, BMP texture loader, GpuTexture
- examples/model_viewer
### Phase 3a: ECS
- voltex_ecs: Entity(id+generation), SparseSet<T>, World(type-erased storage)
- voltex_ecs: query<T>, query2<A,B>, Transform component
- examples/many_cubes (400 entities, dynamic UBO)
### Phase 3b: Scene Graph
- voltex_ecs: Parent/Children hierarchy, add_child/remove_child/despawn_recursive
- voltex_ecs: WorldTransform propagation (top-down)
- voltex_ecs: Scene serialization (.vscn text format), Tag component
- examples/hierarchy_demo (solar system)
### Phase 3c: Asset Manager
- voltex_asset: Handle<T>(generation), AssetStorage<T>(ref counting), Assets(type-erased)
- examples/asset_demo
### Phase 4a: PBR Rendering
- voltex_renderer: MaterialUniform (base_color, metallic, roughness, ao)
- voltex_renderer: Cook-Torrance BRDF shader (GGX NDF + Smith geometry + Fresnel-Schlick)
- voltex_renderer: Procedural UV sphere generator
- voltex_renderer: PBR pipeline (3→4 bind groups)
- examples/pbr_demo (7x7 metallic/roughness sphere grid)
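The Cook-Torrance terms named above (GGX NDF, Smith geometry, Fresnel-Schlick) follow the standard formulation; a minimal scalar Rust sketch of them, not the engine's shader code (the real shader works per-channel on vec3 F0):

```rust
use std::f32::consts::PI;

/// GGX / Trowbridge-Reitz normal distribution function.
fn d_ggx(n_dot_h: f32, roughness: f32) -> f32 {
    let a2 = (roughness * roughness).powi(2); // alpha^2, with alpha = roughness^2
    let denom = n_dot_h * n_dot_h * (a2 - 1.0) + 1.0;
    a2 / (PI * denom * denom)
}

/// Smith geometry term with Schlick-GGX, using the direct-lighting k remap.
fn g_smith(n_dot_v: f32, n_dot_l: f32, roughness: f32) -> f32 {
    let k = (roughness + 1.0).powi(2) / 8.0;
    let g1 = |n_dot_x: f32| n_dot_x / (n_dot_x * (1.0 - k) + k);
    g1(n_dot_v) * g1(n_dot_l)
}

/// Fresnel-Schlick approximation (scalar F0 here for brevity).
fn f_schlick(v_dot_h: f32, f0: f32) -> f32 {
    f0 + (1.0 - f0) * (1.0 - v_dot_h).powi(5)
}

/// Cook-Torrance specular term: D * G * F / (4 * NdotV * NdotL).
fn cook_torrance_spec(
    n_dot_v: f32, n_dot_l: f32, n_dot_h: f32, v_dot_h: f32,
    roughness: f32, f0: f32,
) -> f32 {
    let d = d_ggx(n_dot_h, roughness);
    let g = g_smith(n_dot_v, n_dot_l, roughness);
    let f = f_schlick(v_dot_h, f0);
    d * g * f / (4.0 * n_dot_v * n_dot_l).max(1e-4)
}
```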
### Phase 4b-1: Multi-Light
- voltex_renderer: LightData (Directional/Point/Spot), LightsUniform (MAX_LIGHTS=16)
- PBR shader: multi-light loop, point attenuation, spot cone falloff
- examples/multi_light_demo (orbiting colored point lights)
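Point attenuation and spot cone falloff, as used in the multi-light loop, can be sketched as below. The exact falloff curves are assumptions (a common range-windowed inverse square and a linear-in-cosine cone blend), not the engine's verified formulas:

```rust
/// Inverse-square point-light attenuation with a smooth range window so the
/// contribution reaches exactly zero at `range`.
fn point_attenuation(distance: f32, range: f32) -> f32 {
    let window = (1.0 - (distance / range).powi(4)).clamp(0.0, 1.0).powi(2);
    window / (distance * distance).max(1e-4)
}

/// Spot cone falloff: 1 inside the inner cone, 0 outside the outer cone,
/// linear in the cosine in between.
fn spot_falloff(cos_angle: f32, cos_inner: f32, cos_outer: f32) -> f32 {
    ((cos_angle - cos_outer) / (cos_inner - cos_outer).max(1e-4)).clamp(0.0, 1.0)
}
```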
### Phase 4b-2: Shadow Mapping
- voltex_renderer: ShadowMap (2048x2048 depth), ShadowUniform, ShadowPassUniform
- Shadow depth-only shader + pipeline (front-face cull, depth bias)
- PBR shader: shadow map sampling + 3x3 PCF
- examples/shadow_demo (directional light shadows)
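The 3x3 PCF mentioned above reads nine neighboring shadow-map texels, compares each against the biased fragment depth, and averages the binary results. A CPU sketch of the shader-side logic over a plain depth grid (names illustrative):

```rust
/// 3x3 percentage-closer filtering over a CPU-side depth grid. Each tap is a
/// binary "is the fragment lit?" comparison (bias guards against shadow acne);
/// the nine results are averaged into a soft shadow factor in [0, 1].
fn pcf_3x3(shadow_map: &[Vec<f32>], x: i32, y: i32, frag_depth: f32, bias: f32) -> f32 {
    let h = shadow_map.len() as i32;
    let w = shadow_map[0].len() as i32;
    let mut lit = 0.0f32;
    for dy in -1..=1 {
        for dx in -1..=1 {
            let sx = (x + dx).clamp(0, w - 1) as usize;
            let sy = (y + dy).clamp(0, h - 1) as usize;
            if frag_depth - bias <= shadow_map[sy][sx] {
                lit += 1.0; // this tap sees no closer occluder
            }
        }
    }
    lit / 9.0
}
```

On the GPU the same comparison is done by the comparison sampler (`SamplerBindingType::Comparison`), so the shader only accumulates the nine compare results.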
### Phase 4c: Normal Map + IBL
- MeshVertex: tangent[4] added, computed in OBJ parser + sphere generator
- voltex_renderer: BRDF LUT (CPU Monte Carlo, 256x256), IblResources
- PBR shader: TBN normal mapping, procedural sky IBL, split-sum approximation
- Texture bind group: albedo + normal map (pbr_texture_bind_group_layout)
- IBL merged into shadow bind group (group 3) due to max_bind_groups=4
- examples/ibl_demo
## Crate Structure
```
crates/
├── voltex_math — Vec2, Vec3, Vec4, Mat4
├── voltex_platform — VoltexWindow, InputState, GameTimer
├── voltex_renderer — GPU, Mesh, OBJ, Camera, Material, PBR, Shadow, IBL, Sphere
├── voltex_ecs — Entity, SparseSet, World, Transform, Hierarchy, Scene, WorldTransform
└── voltex_asset — Handle<T>, AssetStorage<T>, Assets
```
## Tests: all 105 passing
- voltex_asset: 14
- voltex_ecs: 39
- voltex_math: 29 (28 + orthographic)
- voltex_platform: 3
- voltex_renderer: 20
## Examples (9)
- triangle — Phase 1 triangle
- model_viewer — OBJ cube + Blinn-Phong
- many_cubes — renders 400 ECS entities
- hierarchy_demo — solar-system scene graph
- asset_demo — Handle-based asset management
- pbr_demo — metallic/roughness sphere grid
- multi_light_demo — multiple colored lights
- shadow_demo — directional light shadows
- ibl_demo — normal mapping + IBL
## Next: Phase 5 (Physics Engine)
Spec: `docs/superpowers/specs/2026-03-24-voltex-engine-design.md`
## Simplified/Deferred Items
Details: `docs/DEFERRED.md`

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -0,0 +1,784 @@
# Phase 3b: Scene Graph + Serialization Implementation Plan
> **For agentic workers:** REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking.
**Goal:** Build a scene graph with a parent-child transform hierarchy, automatic local→world transform propagation, and scene save/load in a simple text format.
**Architecture:** Add `Parent(Entity)` and `Children(Vec<Entity>)` components to voltex_ecs, with hierarchy management functions in `hierarchy.rs`. World transform propagation is a top-down traversal from roots to leaves. Scene serialization stores Transform/Parent/Children/custom tags in a custom text format (`.vscn`).
**Tech Stack:** Rust 1.94, voltex_math (Vec3, Mat4), voltex_ecs (World, Entity, Transform, SparseSet)
**Spec:** `docs/superpowers/specs/2026-03-24-voltex-engine-design.md` Phase 3 (3-2. Scene Graph)
---
## File Structure
```
crates/voltex_ecs/src/
├── lib.rs # update module re-exports
├── hierarchy.rs # Parent, Children components + hierarchy management functions (NEW)
├── world_transform.rs # WorldTransform + propagation functions (NEW)
├── scene.rs # scene serialization/deserialization (NEW)
examples/
└── hierarchy_demo/ # scene graph demo (NEW)
├── Cargo.toml
└── src/
└── main.rs
```
---
## Task 1: Hierarchy Components + Management Functions
**Files:**
- Create: `crates/voltex_ecs/src/hierarchy.rs`
- Modify: `crates/voltex_ecs/src/lib.rs`
Parent(Entity) and Children(Vec<Entity>) components, plus free functions that manipulate the hierarchy.
- [ ] **Step 1: Write hierarchy.rs**
```rust
// crates/voltex_ecs/src/hierarchy.rs
use crate::{Entity, World};
/// Component pointing to the parent entity
#[derive(Debug, Clone, Copy)]
pub struct Parent(pub Entity);
/// Component holding the list of child entities
#[derive(Debug, Clone)]
pub struct Children(pub Vec<Entity>);
/// Add `child` as a child of `parent`
pub fn add_child(world: &mut World, parent: Entity, child: Entity) {
// set the Parent component on the child
world.add(child, Parent(parent));
// append the child to the parent's Children
if let Some(children) = world.get_mut::<Children>(parent) {
if !children.0.contains(&child) {
children.0.push(child);
}
} else {
world.add(parent, Children(vec![child]));
}
}
/// Detach `child` from `parent` (removes the parent-child link)
pub fn remove_child(world: &mut World, parent: Entity, child: Entity) {
// remove the child from the parent's Children
if let Some(children) = world.get_mut::<Children>(parent) {
children.0.retain(|&e| e != child);
}
// remove the child's Parent component
world.remove::<Parent>(child);
}
/// Recursively despawn `entity` and all of its descendants
pub fn despawn_recursive(world: &mut World, entity: Entity) {
// collect children first (avoids borrow conflicts)
let children: Vec<Entity> = world.get::<Children>(entity)
.map(|c| c.0.clone())
.unwrap_or_default();
for child in children {
despawn_recursive(world, child);
}
// remove self from the parent's Children
if let Some(parent) = world.get::<Parent>(entity).map(|p| p.0) {
if let Some(children) = world.get_mut::<Children>(parent) {
children.0.retain(|&e| e != entity);
}
}
world.despawn(entity);
}
/// Return the root entities (those without a Parent)
pub fn roots(world: &World) -> Vec<Entity> {
// an entity with a Transform but no Parent is a root
world.query::<crate::Transform>()
.filter(|(entity, _)| world.get::<Parent>(*entity).is_none())
.map(|(entity, _)| entity)
.collect()
}
#[cfg(test)]
mod tests {
use super::*;
use crate::Transform;
use voltex_math::Vec3;
#[test]
fn test_add_child() {
let mut world = World::new();
let parent = world.spawn();
let child = world.spawn();
world.add(parent, Transform::new());
world.add(child, Transform::new());
add_child(&mut world, parent, child);
assert_eq!(world.get::<Parent>(child).unwrap().0, parent);
assert_eq!(world.get::<Children>(parent).unwrap().0.len(), 1);
assert_eq!(world.get::<Children>(parent).unwrap().0[0], child);
}
#[test]
fn test_add_multiple_children() {
let mut world = World::new();
let parent = world.spawn();
let c1 = world.spawn();
let c2 = world.spawn();
world.add(parent, Transform::new());
world.add(c1, Transform::new());
world.add(c2, Transform::new());
add_child(&mut world, parent, c1);
add_child(&mut world, parent, c2);
assert_eq!(world.get::<Children>(parent).unwrap().0.len(), 2);
}
#[test]
fn test_remove_child() {
let mut world = World::new();
let parent = world.spawn();
let child = world.spawn();
world.add(parent, Transform::new());
world.add(child, Transform::new());
add_child(&mut world, parent, child);
remove_child(&mut world, parent, child);
assert!(world.get::<Parent>(child).is_none());
assert_eq!(world.get::<Children>(parent).unwrap().0.len(), 0);
}
#[test]
fn test_despawn_recursive() {
let mut world = World::new();
let root = world.spawn();
let child = world.spawn();
let grandchild = world.spawn();
world.add(root, Transform::new());
world.add(child, Transform::new());
world.add(grandchild, Transform::new());
add_child(&mut world, root, child);
add_child(&mut world, child, grandchild);
despawn_recursive(&mut world, root);
assert!(!world.is_alive(root));
assert!(!world.is_alive(child));
assert!(!world.is_alive(grandchild));
}
#[test]
fn test_roots() {
let mut world = World::new();
let r1 = world.spawn();
let r2 = world.spawn();
let child = world.spawn();
world.add(r1, Transform::new());
world.add(r2, Transform::new());
world.add(child, Transform::new());
add_child(&mut world, r1, child);
let root_list = roots(&world);
assert_eq!(root_list.len(), 2);
assert!(root_list.contains(&r1));
assert!(root_list.contains(&r2));
assert!(!root_list.contains(&child));
}
#[test]
fn test_no_duplicate_child() {
let mut world = World::new();
let parent = world.spawn();
let child = world.spawn();
world.add(parent, Transform::new());
world.add(child, Transform::new());
add_child(&mut world, parent, child);
add_child(&mut world, parent, child); // duplicate add
assert_eq!(world.get::<Children>(parent).unwrap().0.len(), 1);
}
}
```
- [ ] **Step 2: Update lib.rs**
```rust
// crates/voltex_ecs/src/lib.rs
pub mod entity;
pub mod sparse_set;
pub mod world;
pub mod transform;
pub mod hierarchy;
pub use entity::{Entity, EntityAllocator};
pub use sparse_set::SparseSet;
pub use world::World;
pub use transform::Transform;
pub use hierarchy::{Parent, Children, add_child, remove_child, despawn_recursive, roots};
```
- [ ] **Step 3: Verify tests pass**
Run: `cargo test -p voltex_ecs`
Expected: 25 existing + 6 hierarchy = 31 tests PASS
- [ ] **Step 4: Commit**
```bash
git add crates/voltex_ecs/
git commit -m "feat(ecs): add Parent/Children hierarchy with add_child, remove_child, despawn_recursive"
```
---
## Task 2: WorldTransform Propagation
**Files:**
- Create: `crates/voltex_ecs/src/world_transform.rs`
- Modify: `crates/voltex_ecs/src/lib.rs`
Compute each entity's world matrix from its local Transform and store it in WorldTransform. Traverse top-down from roots to leaves, multiplying the parent's world matrix by each child's local matrix.
- [ ] **Step 1: Write world_transform.rs**
```rust
// crates/voltex_ecs/src/world_transform.rs
use voltex_math::Mat4;
use crate::{Entity, World, Transform};
use crate::hierarchy::{Parent, Children};
/// Computed world transform (parent world * local)
#[derive(Debug, Clone, Copy)]
pub struct WorldTransform(pub Mat4);
impl WorldTransform {
pub fn identity() -> Self {
Self(Mat4::IDENTITY)
}
}
/// Update the WorldTransform of every entity.
/// Starts from roots (entities without a Parent) and propagates to children.
pub fn propagate_transforms(world: &mut World) {
// collect root entities
let root_entities: Vec<Entity> = world.query::<Transform>()
.filter(|(e, _)| world.get::<Parent>(*e).is_none())
.map(|(e, _)| e)
.collect();
for root in root_entities {
propagate_entity(world, root, Mat4::IDENTITY);
}
}
fn propagate_entity(world: &mut World, entity: Entity, parent_world: Mat4) {
let local = match world.get::<Transform>(entity) {
Some(t) => t.matrix(),
None => return,
};
let world_matrix = parent_world * local;
world.add(entity, WorldTransform(world_matrix));
// collect children (avoids borrow conflicts)
let children: Vec<Entity> = world.get::<Children>(entity)
.map(|c| c.0.clone())
.unwrap_or_default();
for child in children {
propagate_entity(world, child, world_matrix);
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::hierarchy::add_child;
use voltex_math::{Vec3, Vec4};
fn approx_eq(a: f32, b: f32) -> bool {
(a - b).abs() < 1e-4
}
#[test]
fn test_root_world_transform() {
let mut world = World::new();
let e = world.spawn();
world.add(e, Transform::from_position(Vec3::new(5.0, 0.0, 0.0)));
propagate_transforms(&mut world);
let wt = world.get::<WorldTransform>(e).unwrap();
let p = wt.0 * Vec4::new(0.0, 0.0, 0.0, 1.0);
assert!(approx_eq(p.x, 5.0));
}
#[test]
fn test_child_inherits_parent() {
let mut world = World::new();
let parent = world.spawn();
let child = world.spawn();
world.add(parent, Transform::from_position(Vec3::new(10.0, 0.0, 0.0)));
world.add(child, Transform::from_position(Vec3::new(0.0, 5.0, 0.0)));
add_child(&mut world, parent, child);
propagate_transforms(&mut world);
// child's world position: parent(10,0,0) + child(0,5,0) = (10,5,0)
let wt = world.get::<WorldTransform>(child).unwrap();
let p = wt.0 * Vec4::new(0.0, 0.0, 0.0, 1.0);
assert!(approx_eq(p.x, 10.0), "x: {}", p.x);
assert!(approx_eq(p.y, 5.0), "y: {}", p.y);
}
#[test]
fn test_three_level_hierarchy() {
let mut world = World::new();
let root = world.spawn();
let mid = world.spawn();
let leaf = world.spawn();
world.add(root, Transform::from_position(Vec3::new(1.0, 0.0, 0.0)));
world.add(mid, Transform::from_position(Vec3::new(0.0, 2.0, 0.0)));
world.add(leaf, Transform::from_position(Vec3::new(0.0, 0.0, 3.0)));
add_child(&mut world, root, mid);
add_child(&mut world, mid, leaf);
propagate_transforms(&mut world);
// leaf world position: (1, 2, 3)
let wt = world.get::<WorldTransform>(leaf).unwrap();
let p = wt.0 * Vec4::new(0.0, 0.0, 0.0, 1.0);
assert!(approx_eq(p.x, 1.0));
assert!(approx_eq(p.y, 2.0));
assert!(approx_eq(p.z, 3.0));
}
#[test]
fn test_parent_scale_affects_child() {
let mut world = World::new();
let parent = world.spawn();
let child = world.spawn();
world.add(parent, Transform::from_position_scale(
Vec3::ZERO,
Vec3::new(2.0, 2.0, 2.0),
));
world.add(child, Transform::from_position(Vec3::new(1.0, 0.0, 0.0)));
add_child(&mut world, parent, child);
propagate_transforms(&mut world);
// parent scale 2x → child(1,0,0) lands at (2,0,0)
let wt = world.get::<WorldTransform>(child).unwrap();
let p = wt.0 * Vec4::new(0.0, 0.0, 0.0, 1.0);
assert!(approx_eq(p.x, 2.0), "x: {}", p.x);
}
}
```
- [ ] **Step 2: Update lib.rs**
```rust
// add to lib.rs:
pub mod world_transform;
pub use world_transform::{WorldTransform, propagate_transforms};
```
- [ ] **Step 3: Verify tests pass**
Run: `cargo test -p voltex_ecs`
Expected: 31 + 4 = 35 tests PASS
- [ ] **Step 4: Commit**
```bash
git add crates/voltex_ecs/
git commit -m "feat(ecs): add WorldTransform propagation through parent-child hierarchy"
```
---
## Task 3: Scene Serialization (.vscn Format)
**Files:**
- Create: `crates/voltex_ecs/src/scene.rs`
- Modify: `crates/voltex_ecs/src/lib.rs`
Save and load scenes in a simple text format. Format:
```
# Voltex Scene v1
entity 0
transform 1.0 2.0 3.0 | 0.0 0.5 0.0 | 1.0 1.0 1.0
tag solar_system
entity 1
parent 0
transform 5.0 0.0 0.0 | 0.0 0.0 0.0 | 0.5 0.5 0.5
tag planet
```
Rules:
- `entity N` — starts an entity (N is a file-local index)
- ` transform px py pz | rx ry rz | sx sy sz` — Transform component
- ` parent N` — local index of the parent entity
- ` tag name` — string tag (optional, for debugging)
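A line-level parser for the `transform` record might look like this; `parse_transform_line` is a hypothetical helper, separate from the plan's `parse_transform`:

```rust
/// Parse a "px py pz | rx ry rz | sx sy sz" payload into
/// (position, rotation, scale) triples. Returns None on any malformed segment.
fn parse_transform_line(s: &str) -> Option<([f32; 3], [f32; 3], [f32; 3])> {
    let mut parts = s.split('|').map(|seg| {
        let v: Vec<f32> = seg
            .split_whitespace()
            .filter_map(|t| t.parse::<f32>().ok())
            .collect();
        <[f32; 3]>::try_from(v).ok() // exactly three numbers per segment
    });
    let position = parts.next()??;
    let rotation = parts.next()??;
    let scale = parts.next()??;
    Some((position, rotation, scale))
}
```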
- [ ] **Step 1: Write scene.rs**
```rust
// crates/voltex_ecs/src/scene.rs
use crate::{Entity, World, Transform};
use crate::hierarchy::{Parent, Children, add_child};
use voltex_math::Vec3;
/// Tag component for debugging/identification
#[derive(Debug, Clone)]
pub struct Tag(pub String);
/// Serialize the World's scene data to .vscn text
pub fn serialize_scene(world: &World) -> String {
let mut output = String::from("# Voltex Scene v1\n");
    // Collect all entities that have a Transform
let entities: Vec<(Entity, Transform)> = world.query::<Transform>()
.map(|(e, t)| (e, *t))
.collect();
    // Entity → local index mapping
let entity_to_idx: std::collections::HashMap<Entity, usize> = entities.iter()
.enumerate()
.map(|(i, (e, _))| (*e, i))
.collect();
for (idx, (entity, transform)) in entities.iter().enumerate() {
output.push_str(&format!("\nentity {}\n", idx));
// Transform
output.push_str(&format!(
" transform {} {} {} | {} {} {} | {} {} {}\n",
transform.position.x, transform.position.y, transform.position.z,
transform.rotation.x, transform.rotation.y, transform.rotation.z,
transform.scale.x, transform.scale.y, transform.scale.z,
));
// Parent
if let Some(parent) = world.get::<Parent>(*entity) {
if let Some(&parent_idx) = entity_to_idx.get(&parent.0) {
output.push_str(&format!(" parent {}\n", parent_idx));
}
}
// Tag
if let Some(tag) = world.get::<Tag>(*entity) {
output.push_str(&format!(" tag {}\n", tag.0));
}
}
output
}
/// Parse .vscn text and spawn the entities into the World
pub fn deserialize_scene(world: &mut World, source: &str) -> Vec<Entity> {
let mut entities: Vec<Entity> = Vec::new();
let mut current_entity: Option<Entity> = None;
    // local index → parent_local_idx mapping (resolved later)
let mut parent_map: Vec<Option<usize>> = Vec::new();
for line in source.lines() {
let line = line.trim();
if line.is_empty() || line.starts_with('#') {
continue;
}
if line.starts_with("entity ") {
let entity = world.spawn();
entities.push(entity);
current_entity = Some(entity);
parent_map.push(None);
continue;
}
let entity = match current_entity {
Some(e) => e,
None => continue,
};
if line.starts_with("transform ") {
if let Some(transform) = parse_transform(&line[10..]) {
world.add(entity, transform);
}
} else if line.starts_with("parent ") {
if let Ok(parent_idx) = line[7..].trim().parse::<usize>() {
let idx = entities.len() - 1;
parent_map[idx] = Some(parent_idx);
}
} else if line.starts_with("tag ") {
let tag_name = line[4..].trim().to_string();
world.add(entity, Tag(tag_name));
}
}
    // wire up parent relationships
for (child_idx, parent_idx_opt) in parent_map.iter().enumerate() {
if let Some(parent_idx) = parent_idx_opt {
if *parent_idx < entities.len() && child_idx < entities.len() {
add_child(world, entities[*parent_idx], entities[child_idx]);
}
}
}
entities
}
fn parse_transform(s: &str) -> Option<Transform> {
// "px py pz | rx ry rz | sx sy sz"
let parts: Vec<&str> = s.split('|').collect();
if parts.len() != 3 {
return None;
}
let pos = parse_vec3(parts[0].trim())?;
let rot = parse_vec3(parts[1].trim())?;
let scale = parse_vec3(parts[2].trim())?;
Some(Transform {
position: pos,
rotation: rot,
scale,
})
}
fn parse_vec3(s: &str) -> Option<Vec3> {
let nums: Vec<f32> = s.split_whitespace()
.filter_map(|n| n.parse().ok())
.collect();
if nums.len() == 3 {
Some(Vec3::new(nums[0], nums[1], nums[2]))
} else {
None
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::hierarchy::roots;
fn approx_eq(a: f32, b: f32) -> bool {
(a - b).abs() < 1e-5
}
#[test]
fn test_serialize_single_entity() {
let mut world = World::new();
let e = world.spawn();
world.add(e, Transform::from_position(Vec3::new(1.0, 2.0, 3.0)));
world.add(e, Tag("test".into()));
let text = serialize_scene(&world);
assert!(text.contains("entity 0"));
assert!(text.contains("transform 1 2 3"));
assert!(text.contains("tag test"));
}
#[test]
fn test_serialize_with_parent() {
let mut world = World::new();
let parent = world.spawn();
let child = world.spawn();
world.add(parent, Transform::new());
world.add(child, Transform::new());
add_child(&mut world, parent, child);
let text = serialize_scene(&world);
assert!(text.contains("parent"));
}
#[test]
fn test_roundtrip() {
let mut world = World::new();
let root = world.spawn();
let child = world.spawn();
world.add(root, Transform::from_position(Vec3::new(10.0, 0.0, 0.0)));
world.add(root, Tag("root_node".into()));
world.add(child, Transform::from_position(Vec3::new(0.0, 5.0, 0.0)));
world.add(child, Tag("child_node".into()));
add_child(&mut world, root, child);
let text = serialize_scene(&world);
        // deserialize into a fresh World
let mut world2 = World::new();
let entities = deserialize_scene(&mut world2, &text);
assert_eq!(entities.len(), 2);
        // verify Transforms
let t0 = world2.get::<Transform>(entities[0]).unwrap();
assert!(approx_eq(t0.position.x, 10.0));
let t1 = world2.get::<Transform>(entities[1]).unwrap();
assert!(approx_eq(t1.position.y, 5.0));
        // verify the parent-child relationship
let parent_comp = world2.get::<Parent>(entities[1]).unwrap();
assert_eq!(parent_comp.0, entities[0]);
        // verify the Tag
let tag0 = world2.get::<Tag>(entities[0]).unwrap();
assert_eq!(tag0.0, "root_node");
}
#[test]
fn test_deserialize_roots() {
let scene = "\
# Voltex Scene v1
entity 0
transform 0 0 0 | 0 0 0 | 1 1 1
entity 1
parent 0
transform 1 0 0 | 0 0 0 | 1 1 1
entity 2
transform 5 0 0 | 0 0 0 | 1 1 1
";
let mut world = World::new();
let entities = deserialize_scene(&mut world, scene);
assert_eq!(entities.len(), 3);
let root_list = roots(&world);
assert_eq!(root_list.len(), 2); // entity 0 and entity 2
}
}
```
- [ ] **Step 2: Update lib.rs**
```rust
// Add to lib.rs:
pub mod scene;
pub use scene::{Tag, serialize_scene, deserialize_scene};
```
- [ ] **Step 3: Verify tests pass**
Run: `cargo test -p voltex_ecs`
Expected: 35 + 4 = 39 tests PASS
- [ ] **Step 4: Commit**
```bash
git add crates/voltex_ecs/
git commit -m "feat(ecs): add scene serialization/deserialization (.vscn format)"
```
---
## Task 4: hierarchy_demo Example
**Files:**
- Create: `examples/hierarchy_demo/Cargo.toml`
- Create: `examples/hierarchy_demo/src/main.rs`
- Modify: `Cargo.toml` (add to workspace)
A demo that visualizes the scene graph. Solar-system model: sun (spins) → planets (orbit + spin) → moon (orbits). Includes saving the scene to a .vscn file and loading it back.
- [ ] **Step 1: Workspace + Cargo.toml**
Add `"examples/hierarchy_demo"` to the workspace members.
```toml
# examples/hierarchy_demo/Cargo.toml
[package]
name = "hierarchy_demo"
version = "0.1.0"
edition = "2021"
[dependencies]
voltex_math.workspace = true
voltex_platform.workspace = true
voltex_renderer.workspace = true
voltex_ecs.workspace = true
wgpu.workspace = true
winit.workspace = true
bytemuck.workspace = true
pollster.workspace = true
env_logger.workspace = true
log.workspace = true
```
- [ ] **Step 2: Write main.rs**
This file is based on many_cubes but uses the ECS hierarchy. Key changes:
1. Build the solar system: sun (center) → 3 planets (orbiting) → 1 moon (child of a planet)
2. Each frame: increment each Transform's rotation.y by dt, then call `propagate_transforms()`
3. Rendering: use the world matrices directly via `world.query::<WorldTransform>()`
4. S key: save the scene to `scene.vscn`; L key: load `scene.vscn`
The implementation follows many_cubes' dynamic UBO pattern. Key differences:
- Use WorldTransform's matrix (not Transform) as uniform.model
- Per-entity rotation is driven by incrementing Transform.rotation.y
- World matrices are computed by propagate_transforms
Before writing this file, be sure to read `examples/many_cubes/src/main.rs` and follow its dynamic UBO pattern.
Scene setup:
```
Sun: position(0,0,0), scale(2,2,2), rotation.y += dt*0.2
├── Planet1: position(6,0,0), scale(0.5,0.5,0.5), rotation.y += dt*1.0
│   └── Moon: position(1.5,0,0), scale(0.3,0.3,0.3), rotation.y += dt*2.0
├── Planet2: position(10,0,0), scale(0.7,0.7,0.7), rotation.y += dt*0.6
└── Planet3: position(14,0,0), scale(0.4,0.4,0.4), rotation.y += dt*0.3
```
S key: write the output of `voltex_ecs::serialize_scene(&world)` to `scene.vscn`.
L key: read `scene.vscn` and rebuild the World with `voltex_ecs::deserialize_scene()`.
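The parent-to-child composition that `propagate_transforms` performs each frame can be cross-checked with plain math. A minimal, self-contained sketch (independent of voltex_ecs; `rotate_y` and `solar_positions` are illustrative names, and the demo's per-node scales are ignored for brevity):

```rust
// Stand-in for rotation.y: rotate a point about +Y on the XZ plane.
fn rotate_y(angle: f32, p: (f32, f32)) -> (f32, f32) {
    let (s, c) = angle.sin_cos();
    (c * p.0 + s * p.1, -s * p.0 + c * p.1)
}

// World positions of Planet1 and its Moon after `dt` seconds, composing
// rotations parent-to-child the way propagate_transforms would.
fn solar_positions(dt: f32) -> ((f32, f32), (f32, f32)) {
    let sun_angle = dt * 0.2;    // Sun: rotation.y += dt*0.2
    let planet_angle = dt * 1.0; // Planet1: rotation.y += dt*1.0
    let planet = rotate_y(sun_angle, (6.0, 0.0)); // Planet1 local (6,0,0)
    // Moon sees its parent's rotation composed with the sun's.
    let moon_off = rotate_y(sun_angle + planet_angle, (1.5, 0.0));
    let moon = (planet.0 + moon_off.0, planet.1 + moon_off.1);
    (planet, moon)
}

fn main() {
    let (planet, moon) = solar_positions(0.5);
    // The planet stays on a radius-6 orbit; the moon stays 1.5 from the planet.
    println!("planet = {planet:?}, moon = {moon:?}");
}
```

Rotation never changes distances, so the orbit radii are invariant — which is exactly what a correct propagation pass must preserve.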
- [ ] **Step 3: Verify build + tests**
Run: `cargo build --workspace`
Run: `cargo test --workspace`
- [ ] **Step 4: Verify it runs**
Run: `cargo run -p hierarchy_demo`
Expected: planets orbit the sun, the moon orbits its planet. S saves the scene, L loads it.
- [ ] **Step 5: Commit**
```bash
git add Cargo.toml examples/hierarchy_demo/
git commit -m "feat: add hierarchy_demo with solar system scene graph and save/load"
```
---
## Phase 3b Completion Checklist
- [ ] `cargo build --workspace` succeeds
- [ ] `cargo test --workspace` — all tests pass
- [ ] Parent-child transform propagation works (3-level hierarchy)
- [ ] Scene serialization roundtrip: save → load → identical result
- [ ] `cargo run -p hierarchy_demo` — solar system renders, hierarchical rotation, S/L save/load
- [ ] Existing examples (triangle, model_viewer, many_cubes) still work


@@ -0,0 +1,636 @@
# Phase 3c: Asset Manager Implementation Plan
> **For agentic workers:** REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking.
**Goal:** A handle-based asset management system that stores, looks up, and removes assets such as Mesh and Texture per type, managing memory through reference counting.
**Architecture:** Create a `voltex_asset` crate. `Handle<T>` is a lightweight, type-safe reference (u32 id + generation). `AssetStorage<T>` is a generic asset store with reference counting. `Assets` is the central manager holding per-type storages in type-erased form. Loading stays in the app: build assets with the existing parsers (parse_obj, parse_bmp) and `insert` them.
**Tech Stack:** Rust 1.94
**Spec:** `docs/superpowers/specs/2026-03-24-voltex-engine-design.md` Phase 3 (3-3. Asset Manager)
**Scope limits:** Async loading and hot reload are deferred to a separate phase. This phase implements only handles + storage + reference counting.
---
## File Structure
```
crates/
└── voltex_asset/
├── Cargo.toml
└── src/
├── lib.rs # module re-exports
├── handle.rs # generic Handle<T>
├── storage.rs # AssetStorage<T> with reference counting
└── assets.rs # Assets central manager (type-erased)
examples/
└── asset_demo/ # asset manager integration demo (NEW)
├── Cargo.toml
└── src/
└── main.rs
```
---
## Task 1: Handle<T> + AssetStorage<T>
**Files:**
- Create: `crates/voltex_asset/Cargo.toml`
- Create: `crates/voltex_asset/src/lib.rs`
- Create: `crates/voltex_asset/src/handle.rs`
- Create: `crates/voltex_asset/src/storage.rs`
- Modify: `Cargo.toml` (add voltex_asset to the workspace)
A Handle is a lightweight identifier pointing at an asset. AssetStorage manages the asset plus its reference count.
- [ ] **Step 1: Write Cargo.toml**
```toml
# crates/voltex_asset/Cargo.toml
[package]
name = "voltex_asset"
version = "0.1.0"
edition = "2021"
[dependencies]
```
- [ ] **Step 2: Update the workspace**
Root `Cargo.toml`: add `"crates/voltex_asset"` to members and `voltex_asset = { path = "crates/voltex_asset" }` to workspace.dependencies.
- [ ] **Step 3: Write handle.rs**
```rust
// crates/voltex_asset/src/handle.rs
use std::marker::PhantomData;
/// Type-safe asset handle. A lightweight identifier referencing an asset.
#[derive(Debug)]
pub struct Handle<T> {
pub id: u32,
pub generation: u32,
_marker: PhantomData<T>,
}
// derive would add unwanted bounds on T via PhantomData, so implement manually
impl<T> Clone for Handle<T> {
fn clone(&self) -> Self {
Self { id: self.id, generation: self.generation, _marker: PhantomData }
}
}
impl<T> Copy for Handle<T> {}
impl<T> PartialEq for Handle<T> {
fn eq(&self, other: &Self) -> bool {
self.id == other.id && self.generation == other.generation
}
}
impl<T> Eq for Handle<T> {}
impl<T> std::hash::Hash for Handle<T> {
fn hash<H: std::hash::Hasher>(&self, state: &mut H) {
self.id.hash(state);
self.generation.hash(state);
}
}
impl<T> Handle<T> {
pub(crate) fn new(id: u32, generation: u32) -> Self {
Self { id, generation, _marker: PhantomData }
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_handle_copy() {
let h: Handle<String> = Handle::new(0, 0);
let h2 = h;
        assert_eq!(h, h2); // verify Copy
}
#[test]
fn test_handle_eq() {
let a: Handle<i32> = Handle::new(1, 0);
let b: Handle<i32> = Handle::new(1, 0);
let c: Handle<i32> = Handle::new(1, 1);
assert_eq!(a, b);
assert_ne!(a, c);
}
}
```
- [ ] **Step 4: Write storage.rs**
```rust
// crates/voltex_asset/src/storage.rs
use crate::handle::Handle;
use std::any::Any;
struct AssetEntry<T> {
asset: T,
generation: u32,
ref_count: u32,
}
/// Per-type asset storage. Manages memory via reference counting.
pub struct AssetStorage<T> {
    entries: Vec<Option<AssetEntry<T>>>,
    // Last generation used per slot; kept outside the entry so it survives
    // removal and stale handles to a freed-and-reused slot stay invalid.
    generations: Vec<u32>,
    free_ids: Vec<u32>,
}
impl<T> AssetStorage<T> {
    pub fn new() -> Self {
        Self {
            entries: Vec::new(),
            generations: Vec::new(),
            free_ids: Vec::new(),
        }
    }
    /// Add an asset. Returns its handle. Initial ref_count = 1.
    pub fn insert(&mut self, asset: T) -> Handle<T> {
        if let Some(id) = self.free_ids.pop() {
            let idx = id as usize;
            // Bump the slot's generation so handles minted before the reuse
            // no longer match (the old entry itself was dropped on release).
            let generation = self.generations[idx] + 1;
            self.generations[idx] = generation;
            self.entries[idx] = Some(AssetEntry {
                asset,
                generation,
                ref_count: 1,
            });
            Handle::new(id, generation)
        } else {
            let id = self.entries.len() as u32;
            self.entries.push(Some(AssetEntry {
                asset,
                generation: 0,
                ref_count: 1,
            }));
            self.generations.push(0);
            Handle::new(id, 0)
        }
    }
    /// Borrow an asset by handle
pub fn get(&self, handle: Handle<T>) -> Option<&T> {
let entry = self.entries.get(handle.id as usize)?.as_ref()?;
if entry.generation == handle.generation {
Some(&entry.asset)
} else {
None
}
}
    /// Mutably borrow an asset by handle
pub fn get_mut(&mut self, handle: Handle<T>) -> Option<&mut T> {
let entry = self.entries.get_mut(handle.id as usize)?.as_mut()?;
if entry.generation == handle.generation {
Some(&mut entry.asset)
} else {
None
}
}
    /// Increment the reference count
pub fn add_ref(&mut self, handle: Handle<T>) {
if let Some(Some(entry)) = self.entries.get_mut(handle.id as usize) {
if entry.generation == handle.generation {
entry.ref_count += 1;
}
}
}
    /// Decrement the reference count; the asset is removed when it reaches 0.
    /// Returns true if the asset was removed.
pub fn release(&mut self, handle: Handle<T>) -> bool {
let idx = handle.id as usize;
if let Some(Some(entry)) = self.entries.get_mut(idx) {
if entry.generation == handle.generation {
entry.ref_count = entry.ref_count.saturating_sub(1);
if entry.ref_count == 0 {
self.entries[idx] = None;
self.free_ids.push(handle.id);
return true;
}
}
}
false
}
    /// Number of assets currently stored
pub fn len(&self) -> usize {
self.entries.iter().filter(|e| e.is_some()).count()
}
pub fn is_empty(&self) -> bool {
self.len() == 0
}
    /// Query the reference count
pub fn ref_count(&self, handle: Handle<T>) -> u32 {
self.entries.get(handle.id as usize)
.and_then(|e| e.as_ref())
.filter(|e| e.generation == handle.generation)
.map(|e| e.ref_count)
.unwrap_or(0)
}
    /// Iterate all assets as (handle, &asset)
pub fn iter(&self) -> impl Iterator<Item = (Handle<T>, &T)> {
self.entries.iter().enumerate().filter_map(|(i, entry)| {
entry.as_ref().map(|e| (Handle::new(i as u32, e.generation), &e.asset))
})
}
}
/// Type erasure trait for Assets manager
pub trait AssetStorageDyn: Any {
fn as_any(&self) -> &dyn Any;
fn as_any_mut(&mut self) -> &mut dyn Any;
fn count(&self) -> usize;
}
impl<T: 'static> AssetStorageDyn for AssetStorage<T> {
fn as_any(&self) -> &dyn Any { self }
fn as_any_mut(&mut self) -> &mut dyn Any { self }
fn count(&self) -> usize { self.len() }
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_insert_and_get() {
let mut storage = AssetStorage::new();
let h = storage.insert("hello".to_string());
assert_eq!(storage.get(h), Some(&"hello".to_string()));
}
#[test]
fn test_get_mut() {
let mut storage = AssetStorage::new();
let h = storage.insert(vec![1, 2, 3]);
storage.get_mut(h).unwrap().push(4);
assert_eq!(storage.get(h).unwrap().len(), 4);
}
#[test]
fn test_release_removes_at_zero() {
let mut storage = AssetStorage::new();
let h = storage.insert(42);
assert_eq!(storage.len(), 1);
let removed = storage.release(h);
assert!(removed);
assert_eq!(storage.len(), 0);
assert_eq!(storage.get(h), None);
}
#[test]
fn test_ref_counting() {
let mut storage = AssetStorage::new();
let h = storage.insert(42);
assert_eq!(storage.ref_count(h), 1);
storage.add_ref(h);
assert_eq!(storage.ref_count(h), 2);
assert!(!storage.release(h)); // ref_count 2 → 1, not removed
assert_eq!(storage.ref_count(h), 1);
assert!(storage.release(h)); // ref_count 1 → 0, removed
assert_eq!(storage.ref_count(h), 0);
}
#[test]
fn test_stale_handle() {
let mut storage = AssetStorage::new();
let h1 = storage.insert(10);
storage.release(h1);
let h2 = storage.insert(20); // reuse slot, generation+1
assert_eq!(storage.get(h1), None); // old handle invalid
assert_eq!(storage.get(h2), Some(&20));
}
#[test]
fn test_id_reuse() {
let mut storage = AssetStorage::new();
let h1 = storage.insert("first");
let id1 = h1.id;
storage.release(h1);
let h2 = storage.insert("second");
        assert_eq!(h2.id, id1); // ID reused
        assert_eq!(h2.generation, 1); // generation bumped
}
#[test]
fn test_iter() {
let mut storage = AssetStorage::new();
storage.insert(10);
storage.insert(20);
storage.insert(30);
let values: Vec<i32> = storage.iter().map(|(_, v)| *v).collect();
assert_eq!(values.len(), 3);
assert!(values.contains(&10));
assert!(values.contains(&20));
assert!(values.contains(&30));
}
}
```
- [ ] **Step 5: Write lib.rs**
```rust
// crates/voltex_asset/src/lib.rs
pub mod handle;
pub mod storage;
pub use handle::Handle;
pub use storage::AssetStorage;
```
- [ ] **Step 6: Verify tests pass**
Run: `cargo test -p voltex_asset`
Expected: 2 handle + 7 storage = 9 tests PASS
- [ ] **Step 7: Commit**
```bash
git add Cargo.toml crates/voltex_asset/
git commit -m "feat(asset): add voltex_asset crate with Handle<T> and AssetStorage<T>"
```
---
## Task 2: Assets Central Manager
**Files:**
- Create: `crates/voltex_asset/src/assets.rs`
- Modify: `crates/voltex_asset/src/lib.rs`
Assets holds AssetStorages of multiple types behind type erasure, keyed by TypeId just like World.
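The TypeId-keyed type-erasure pattern can be shown in isolation with a minimal, std-only sketch (the `Registry`/`slot_mut` names are illustrative, not engine API; the real manager stores `AssetStorage<T>` instead of `Vec<T>`):

```rust
// One HashMap keyed by TypeId, values behind dyn Any, recovered via downcast.
use std::any::{Any, TypeId};
use std::collections::HashMap;

struct Registry {
    by_type: HashMap<TypeId, Box<dyn Any>>,
}

impl Registry {
    fn new() -> Self {
        Self { by_type: HashMap::new() }
    }

    // Get (or lazily create) the container for type T.
    fn slot_mut<T: 'static>(&mut self) -> &mut Vec<T> {
        self.by_type
            .entry(TypeId::of::<T>())
            .or_insert_with(|| Box::new(Vec::<T>::new()))
            .downcast_mut::<Vec<T>>()
            .unwrap() // safe: the entry for TypeId::of::<T>() is always a Vec<T>
    }
}

fn main() {
    let mut reg = Registry::new();
    reg.slot_mut::<u32>().push(7);
    reg.slot_mut::<String>().push("mesh".to_string());
    println!("{} u32s, {} strings",
             reg.slot_mut::<u32>().len(),
             reg.slot_mut::<String>().len());
}
```

The `unwrap()` after `downcast_mut` is the one place the pattern relies on an invariant rather than the type system: the map entry for `TypeId::of::<T>()` is only ever created as the matching concrete type.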
- [ ] **Step 1: Write assets.rs**
```rust
// crates/voltex_asset/src/assets.rs
use std::any::TypeId;
use std::collections::HashMap;
use crate::handle::Handle;
use crate::storage::{AssetStorage, AssetStorageDyn};
/// Central asset manager. Owns one AssetStorage per type.
pub struct Assets {
storages: HashMap<TypeId, Box<dyn AssetStorageDyn>>,
}
impl Assets {
pub fn new() -> Self {
Self {
storages: HashMap::new(),
}
}
    /// Add an asset; the storage is registered automatically.
pub fn insert<T: 'static>(&mut self, asset: T) -> Handle<T> {
self.storage_mut::<T>().insert(asset)
}
    /// Borrow an asset by handle
pub fn get<T: 'static>(&self, handle: Handle<T>) -> Option<&T> {
self.storage::<T>()?.get(handle)
}
    /// Mutably borrow an asset by handle
pub fn get_mut<T: 'static>(&mut self, handle: Handle<T>) -> Option<&mut T> {
self.storage_mut::<T>().get_mut(handle)
}
    /// Increment the reference count
pub fn add_ref<T: 'static>(&mut self, handle: Handle<T>) {
self.storage_mut::<T>().add_ref(handle);
}
    /// Decrement the reference count
pub fn release<T: 'static>(&mut self, handle: Handle<T>) -> bool {
self.storage_mut::<T>().release(handle)
}
    /// Number of assets of a given type
pub fn count<T: 'static>(&self) -> usize {
self.storage::<T>().map(|s| s.len()).unwrap_or(0)
}
    /// Shared reference to a type's storage
pub fn storage<T: 'static>(&self) -> Option<&AssetStorage<T>> {
self.storages.get(&TypeId::of::<T>())
.and_then(|s| s.as_any().downcast_ref())
}
    /// Mutable reference to a type's storage (created on demand)
pub fn storage_mut<T: 'static>(&mut self) -> &mut AssetStorage<T> {
self.storages
.entry(TypeId::of::<T>())
.or_insert_with(|| Box::new(AssetStorage::<T>::new()))
.as_any_mut()
.downcast_mut()
.unwrap()
}
}
#[cfg(test)]
mod tests {
use super::*;
#[derive(Debug, PartialEq)]
struct MeshData { vertex_count: u32 }
#[derive(Debug, PartialEq)]
struct TextureData { width: u32, height: u32 }
#[test]
fn test_insert_and_get_different_types() {
let mut assets = Assets::new();
let mesh_h = assets.insert(MeshData { vertex_count: 100 });
let tex_h = assets.insert(TextureData { width: 256, height: 256 });
assert_eq!(assets.get(mesh_h).unwrap().vertex_count, 100);
assert_eq!(assets.get(tex_h).unwrap().width, 256);
}
#[test]
fn test_count_per_type() {
let mut assets = Assets::new();
assets.insert(MeshData { vertex_count: 10 });
assets.insert(MeshData { vertex_count: 20 });
assets.insert(TextureData { width: 64, height: 64 });
assert_eq!(assets.count::<MeshData>(), 2);
assert_eq!(assets.count::<TextureData>(), 1);
}
#[test]
fn test_release_through_assets() {
let mut assets = Assets::new();
let h = assets.insert(42_i32);
assert_eq!(assets.count::<i32>(), 1);
assets.release(h);
assert_eq!(assets.count::<i32>(), 0);
}
#[test]
fn test_ref_counting_through_assets() {
let mut assets = Assets::new();
let h = assets.insert("shared".to_string());
assets.add_ref(h);
assert!(!assets.release(h)); // 2 → 1
assert!(assets.get(h).is_some()); // still alive
assert!(assets.release(h)); // 1 → 0
assert!(assets.get(h).is_none());
}
#[test]
fn test_storage_access() {
let mut assets = Assets::new();
assets.insert(10_i32);
assets.insert(20_i32);
let storage = assets.storage::<i32>().unwrap();
let values: Vec<i32> = storage.iter().map(|(_, v)| *v).collect();
assert_eq!(values.len(), 2);
}
}
```
- [ ] **Step 2: Update lib.rs**
```rust
// crates/voltex_asset/src/lib.rs
pub mod handle;
pub mod storage;
pub mod assets;
pub use handle::Handle;
pub use storage::AssetStorage;
pub use assets::Assets;
```
- [ ] **Step 3: Verify tests pass**
Run: `cargo test -p voltex_asset`
Expected: 9 + 5 = 14 tests PASS
- [ ] **Step 4: Commit**
```bash
git add crates/voltex_asset/
git commit -m "feat(asset): add Assets central manager with type-erased storage"
```
---
## Task 3: asset_demo Example
**Files:**
- Create: `examples/asset_demo/Cargo.toml`
- Create: `examples/asset_demo/src/main.rs`
- Modify: `Cargo.toml` (add to workspace)
A demo that manages meshes and textures through the asset manager. Based on many_cubes, but Mesh and GpuTexture are registered in Assets and referenced through Handles.
- [ ] **Step 1: Workspace + Cargo.toml**
Add `"examples/asset_demo"` to the workspace members.
```toml
# examples/asset_demo/Cargo.toml
[package]
name = "asset_demo"
version = "0.1.0"
edition = "2021"
[dependencies]
voltex_math.workspace = true
voltex_platform.workspace = true
voltex_renderer.workspace = true
voltex_ecs.workspace = true
voltex_asset.workspace = true
wgpu.workspace = true
winit.workspace = true
bytemuck.workspace = true
pollster.workspace = true
env_logger.workspace = true
log.workspace = true
```
- [ ] **Step 2: Write main.rs**
This demo is similar to many_cubes, except:
1. Register the Mesh with the `Assets` manager: `let mesh_handle = assets.insert(mesh);`
2. ECS entities reference the asset through a `Handle<Mesh>` component
3. At render time, fetch the Mesh with `assets.get(mesh_handle)`
4. Reference-counting demo: show the asset count in the title bar
5. R key: despawn 10 random entities (the asset survives because other entities still reference it)
6. A single cube mesh is shared by all entities
Before writing this file, read the dynamic UBO pattern in `examples/many_cubes/src/main.rs`.
Core structure:
```rust
struct AppState {
// ...
assets: Assets,
world: World,
    // mesh_handle is stored as an entity component in the world
}
// MeshRef component: newtype wrapping Handle<Mesh>
#[derive(Clone, Copy)]
struct MeshRef(Handle<Mesh>);
```
Render loop:
```rust
// query entities that have both WorldTransform and MeshRef
let entities = state.world.query2::<WorldTransform, MeshRef>();
for (_, wt, mesh_ref) in &entities {
if let Some(mesh) = state.assets.get(mesh_ref.0) {
// set vertex/index buffer, draw
}
}
```
- [ ] **Step 3: Verify build + tests**
Run: `cargo build --workspace`
Run: `cargo test --workspace`
- [ ] **Step 4: Verify it runs**
Run: `cargo run -p asset_demo`
Expected: a grid of cubes renders. Deleting entities with R keeps the mesh asset alive.
- [ ] **Step 5: Commit**
```bash
git add Cargo.toml examples/asset_demo/
git commit -m "feat: add asset_demo with Handle-based mesh management"
```
---
## Phase 3c Completion Checklist
- [ ] `cargo build --workspace` succeeds
- [ ] `cargo test --workspace` — all tests pass (existing 80 + asset 14 = 94)
- [ ] Handle<T>: Copy, Eq, Hash; stale detection via generation
- [ ] AssetStorage<T>: insert/get/release, reference counting, ID reuse
- [ ] Assets: per-type storage management with automatic registration
- [ ] `cargo run -p asset_demo` — meshes managed through handles, R deletes entities
- [ ] All existing examples still work


@@ -0,0 +1,612 @@
# Phase 4a: PBR Rendering Implementation Plan
> **For agentic workers:** REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking.
**Goal:** Render metallic and non-metallic materials realistically from metallic/roughness parameters with a Cook-Torrance BRDF PBR shader.
**Architecture:** Pass MaterialUniform (base_color, metallic, roughness) to the shader through a new bind group. Implement the Cook-Torrance BRDF (GGX NDF + Smith geometry + Fresnel-Schlick) in a PBR WGSL shader. Add a procedural sphere mesh generator so the PBR demo can show a metallic/roughness grid. The existing MeshVertex (position, normal, uv) is unchanged — normal maps arrive in Phase 4c.
**Tech Stack:** Rust 1.94, wgpu 28.0, WGSL
**Spec:** `docs/superpowers/specs/2026-03-24-voltex-engine-design.md` Phase 4 (4-1. PBR Materials)
**Scope limits:** Albedo texture + metallic/roughness parameters only. Normal/AO/emissive maps, multiple lights, shadows, and IBL are Phase 4b/4c.
---
## File Structure
```
crates/voltex_renderer/src/
├── material.rs # MaterialUniform + bind group layout (NEW)
├── pbr_shader.wgsl # PBR Cook-Torrance shader (NEW)
├── pbr_pipeline.rs # PBR render pipeline (NEW)
├── sphere.rs # procedural sphere mesh generation (NEW)
├── lib.rs # updated module re-exports
├── vertex.rs # unchanged
├── mesh.rs # unchanged
├── pipeline.rs # unchanged (Blinn-Phong)
examples/
└── pbr_demo/ # PBR demo (NEW)
├── Cargo.toml
└── src/
└── main.rs
```
---
## Task 1: MaterialUniform + Procedural Sphere
**Files:**
- Create: `crates/voltex_renderer/src/material.rs`
- Create: `crates/voltex_renderer/src/sphere.rs`
- Modify: `crates/voltex_renderer/src/lib.rs`
MaterialUniform carries the PBR parameters to the GPU. The sphere mesh generator is required by the PBR demo.
- [ ] **Step 1: Write material.rs**
```rust
// crates/voltex_renderer/src/material.rs
use bytemuck::{Pod, Zeroable};
/// PBR material parameters (GPU uniform)
#[repr(C)]
#[derive(Copy, Clone, Debug, Pod, Zeroable)]
pub struct MaterialUniform {
pub base_color: [f32; 4], // RGBA
pub metallic: f32,
pub roughness: f32,
pub ao: f32, // ambient occlusion (1.0 = no occlusion)
pub _padding: f32,
}
impl MaterialUniform {
pub fn new() -> Self {
Self {
base_color: [1.0, 1.0, 1.0, 1.0],
metallic: 0.0,
roughness: 0.5,
ao: 1.0,
_padding: 0.0,
}
}
pub fn with_params(base_color: [f32; 4], metallic: f32, roughness: f32) -> Self {
Self {
base_color,
metallic,
roughness,
ao: 1.0,
_padding: 0.0,
}
}
/// Material bind group layout (group 2)
pub fn bind_group_layout(device: &wgpu::Device) -> wgpu::BindGroupLayout {
device.create_bind_group_layout(&wgpu::BindGroupLayoutDescriptor {
label: Some("Material Bind Group Layout"),
entries: &[
wgpu::BindGroupLayoutEntry {
binding: 0,
visibility: wgpu::ShaderStages::FRAGMENT,
ty: wgpu::BindingType::Buffer {
ty: wgpu::BufferBindingType::Uniform,
has_dynamic_offset: true,
min_binding_size: wgpu::BufferSize::new(
std::mem::size_of::<MaterialUniform>() as u64,
),
},
count: None,
},
],
})
}
}
```
- [ ] **Step 2: Write sphere.rs**
UV sphere generator. Produces a MeshVertex array from sectors (longitude) and stacks (latitude) parameters.
```rust
// crates/voltex_renderer/src/sphere.rs
use crate::vertex::MeshVertex;
use std::f32::consts::PI;
/// Generate UV sphere mesh data
pub fn generate_sphere(radius: f32, sectors: u32, stacks: u32) -> (Vec<MeshVertex>, Vec<u32>) {
let mut vertices = Vec::new();
let mut indices = Vec::new();
for i in 0..=stacks {
let stack_angle = PI / 2.0 - (i as f32) * PI / (stacks as f32); // π/2 to -π/2
let xy = radius * stack_angle.cos();
let z = radius * stack_angle.sin();
for j in 0..=sectors {
let sector_angle = (j as f32) * 2.0 * PI / (sectors as f32);
let x = xy * sector_angle.cos();
let y = xy * sector_angle.sin();
let nx = x / radius;
let ny = y / radius;
let nz = z / radius;
let u = j as f32 / sectors as f32;
let v = i as f32 / stacks as f32;
vertices.push(MeshVertex {
position: [x, z, y], // Y-up: swap y and z
normal: [nx, nz, ny],
uv: [u, v],
});
}
}
// Indices
for i in 0..stacks {
for j in 0..sectors {
let first = i * (sectors + 1) + j;
let second = first + sectors + 1;
indices.push(first);
indices.push(second);
indices.push(first + 1);
indices.push(first + 1);
indices.push(second);
indices.push(second + 1);
}
}
(vertices, indices)
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_sphere_vertex_count() {
let (verts, _) = generate_sphere(1.0, 16, 8);
// (stacks+1) * (sectors+1) = 9 * 17 = 153
assert_eq!(verts.len(), 153);
}
#[test]
fn test_sphere_index_count() {
let (_, indices) = generate_sphere(1.0, 16, 8);
// stacks * sectors * 6 = 8 * 16 * 6 = 768
assert_eq!(indices.len(), 768);
}
#[test]
fn test_sphere_normals_unit_length() {
let (verts, _) = generate_sphere(1.0, 8, 4);
for v in &verts {
let len = (v.normal[0].powi(2) + v.normal[1].powi(2) + v.normal[2].powi(2)).sqrt();
assert!((len - 1.0).abs() < 1e-4, "Normal length: {}", len);
}
}
}
```
- [ ] **Step 3: Update lib.rs**
```rust
// Add to crates/voltex_renderer/src/lib.rs:
pub mod material;
pub mod sphere;
pub use material::MaterialUniform;
pub use sphere::generate_sphere;
```
- [ ] **Step 4: Verify tests pass**
Run: `cargo test -p voltex_renderer`
Expected: existing 10 + sphere 3 = 13 tests PASS
- [ ] **Step 5: Commit**
```bash
git add crates/voltex_renderer/
git commit -m "feat(renderer): add MaterialUniform and procedural sphere generator"
```
---
## Task 2: PBR WGSL Shader
**Files:**
- Create: `crates/voltex_renderer/src/pbr_shader.wgsl`
Cook-Torrance BRDF:
- **D (Normal Distribution Function)**: GGX/Trowbridge-Reitz
- **G (Geometry Function)**: Smith's method with Schlick-GGX
- **F (Fresnel)**: Fresnel-Schlick approximation
```
f_cook_torrance = D * G * F / (4 * dot(N,V) * dot(N,L))
```
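These three terms are easy to sanity-check on the CPU before debugging them in a shader. A minimal scalar Rust port (mirrors the standard formulas above; illustrative, not engine code):

```rust
// Scalar versions of the Cook-Torrance terms for unit-testing the math.
const PI: f32 = std::f32::consts::PI;

// GGX/Trowbridge-Reitz normal distribution (alpha = roughness^2).
fn distribution_ggx(n_dot_h: f32, roughness: f32) -> f32 {
    let a2 = (roughness * roughness).powi(2);
    let denom = n_dot_h * n_dot_h * (a2 - 1.0) + 1.0;
    a2 / (PI * denom * denom)
}

// Schlick-GGX geometry term for one direction (direct-lighting k).
fn geometry_schlick_ggx(n_dot_v: f32, roughness: f32) -> f32 {
    let r = roughness + 1.0;
    let k = (r * r) / 8.0;
    n_dot_v / (n_dot_v * (1.0 - k) + k)
}

// Fresnel-Schlick approximation with scalar base reflectance f0.
fn fresnel_schlick(cos_theta: f32, f0: f32) -> f32 {
    f0 + (1.0 - f0) * (1.0 - cos_theta).clamp(0.0, 1.0).powi(5)
}

fn main() {
    // At grazing incidence every surface becomes a mirror: F → 1.
    println!("F(0) = {}", fresnel_schlick(0.0, 0.04));
    // At normal incidence F collapses to the base reflectance f0.
    println!("F(1) = {}", fresnel_schlick(1.0, 0.04));
}
```

Useful invariants: F(0°-to-normal view, i.e. cos = 1) equals f0, F at grazing (cos = 0) equals 1, and D at n·h = 1 with roughness 1 equals 1/π.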
Shader binding layout:
- group(0) binding(0): CameraUniform (view_proj, model, camera_pos) — dynamic offset
- group(0) binding(1): LightUniform (direction, color, ambient)
- group(1) binding(0): albedo texture
- group(1) binding(1): sampler
- group(2) binding(0): MaterialUniform (base_color, metallic, roughness, ao) — dynamic offset
- [ ] **Step 1: Write pbr_shader.wgsl**
```wgsl
// crates/voltex_renderer/src/pbr_shader.wgsl
const PI: f32 = 3.14159265359;
struct CameraUniform {
view_proj: mat4x4<f32>,
model: mat4x4<f32>,
camera_pos: vec3<f32>,
};
struct LightUniform {
direction: vec3<f32>,
color: vec3<f32>,
ambient_strength: f32,
};
struct MaterialUniform {
base_color: vec4<f32>,
metallic: f32,
roughness: f32,
ao: f32,
};
@group(0) @binding(0) var<uniform> camera: CameraUniform;
@group(0) @binding(1) var<uniform> light: LightUniform;
@group(1) @binding(0) var t_albedo: texture_2d<f32>;
@group(1) @binding(1) var s_albedo: sampler;
@group(2) @binding(0) var<uniform> material: MaterialUniform;
struct VertexInput {
@location(0) position: vec3<f32>,
@location(1) normal: vec3<f32>,
@location(2) uv: vec2<f32>,
};
struct VertexOutput {
@builtin(position) clip_position: vec4<f32>,
@location(0) world_normal: vec3<f32>,
@location(1) world_pos: vec3<f32>,
@location(2) uv: vec2<f32>,
};
@vertex
fn vs_main(model_v: VertexInput) -> VertexOutput {
var out: VertexOutput;
let world_pos = camera.model * vec4<f32>(model_v.position, 1.0);
out.world_pos = world_pos.xyz;
out.world_normal = normalize((camera.model * vec4<f32>(model_v.normal, 0.0)).xyz);
out.clip_position = camera.view_proj * world_pos;
out.uv = model_v.uv;
return out;
}
// --- PBR Functions ---
// GGX/Trowbridge-Reitz Normal Distribution Function
fn distribution_ggx(n: vec3<f32>, h: vec3<f32>, roughness: f32) -> f32 {
let a = roughness * roughness;
let a2 = a * a;
let n_dot_h = max(dot(n, h), 0.0);
let n_dot_h2 = n_dot_h * n_dot_h;
let denom = n_dot_h2 * (a2 - 1.0) + 1.0;
return a2 / (PI * denom * denom);
}
// Schlick-GGX Geometry function (single direction)
fn geometry_schlick_ggx(n_dot_v: f32, roughness: f32) -> f32 {
let r = roughness + 1.0;
let k = (r * r) / 8.0;
return n_dot_v / (n_dot_v * (1.0 - k) + k);
}
// Smith's Geometry function (both directions)
fn geometry_smith(n: vec3<f32>, v: vec3<f32>, l: vec3<f32>, roughness: f32) -> f32 {
let n_dot_v = max(dot(n, v), 0.0);
let n_dot_l = max(dot(n, l), 0.0);
let ggx1 = geometry_schlick_ggx(n_dot_v, roughness);
let ggx2 = geometry_schlick_ggx(n_dot_l, roughness);
return ggx1 * ggx2;
}
// Fresnel-Schlick approximation
fn fresnel_schlick(cos_theta: f32, f0: vec3<f32>) -> vec3<f32> {
return f0 + (1.0 - f0) * pow(clamp(1.0 - cos_theta, 0.0, 1.0), 5.0);
}
@fragment
fn fs_main(in: VertexOutput) -> @location(0) vec4<f32> {
let albedo_tex = textureSample(t_albedo, s_albedo, in.uv);
let albedo = albedo_tex.rgb * material.base_color.rgb;
let metallic = material.metallic;
let roughness = material.roughness;
let ao = material.ao;
let n = normalize(in.world_normal);
let v = normalize(camera.camera_pos - in.world_pos);
// Fresnel reflectance at normal incidence
// Non-metal: 0.04, metal: albedo color
let f0 = mix(vec3<f32>(0.04, 0.04, 0.04), albedo, metallic);
// Directional light
let l = normalize(-light.direction);
let h = normalize(v + l);
let n_dot_l = max(dot(n, l), 0.0);
// Cook-Torrance BRDF
let d = distribution_ggx(n, h, roughness);
let g = geometry_smith(n, v, l, roughness);
let f = fresnel_schlick(max(dot(h, v), 0.0), f0);
let numerator = d * g * f;
let denominator = 4.0 * max(dot(n, v), 0.0) * n_dot_l + 0.0001;
let specular = numerator / denominator;
// Energy conservation: diffuse + specular = 1
let ks = f; // specular fraction
let kd = (1.0 - ks) * (1.0 - metallic); // diffuse fraction (metals have no diffuse)
let diffuse = kd * albedo / PI;
// Final color
let lo = (diffuse + specular) * light.color * n_dot_l;
// Ambient (simple constant for now, IBL in Phase 4c)
let ambient = vec3<f32>(0.03, 0.03, 0.03) * albedo * ao;
var color = ambient + lo;
// HDR → LDR tone mapping (Reinhard)
color = color / (color + vec3<f32>(1.0, 1.0, 1.0));
// Gamma correction
color = pow(color, vec3<f32>(1.0 / 2.2, 1.0 / 2.2, 1.0 / 2.2));
return vec4<f32>(color, 1.0);
}
```
- [ ] **Step 2: Verify build**
Run: `cargo build -p voltex_renderer`
Expected: build succeeds (the shader is referenced via include_str!, so the file only needs to exist)
- [ ] **Step 3: Commit**
```bash
git add crates/voltex_renderer/src/pbr_shader.wgsl
git commit -m "feat(renderer): add PBR Cook-Torrance BRDF shader"
```
---
## Task 3: PBR Pipeline
**Files:**
- Create: `crates/voltex_renderer/src/pbr_pipeline.rs`
- Modify: `crates/voltex_renderer/src/lib.rs`
Similar to the existing mesh_pipeline, but with three bind groups: camera+light (0), texture (1), material (2). The material uses a dynamic offset (one material per entity).
- [ ] **Step 1: Write pbr_pipeline.rs**
```rust
// crates/voltex_renderer/src/pbr_pipeline.rs
use crate::vertex::MeshVertex;
use crate::gpu::DEPTH_FORMAT;
pub fn create_pbr_pipeline(
device: &wgpu::Device,
format: wgpu::TextureFormat,
camera_light_layout: &wgpu::BindGroupLayout,
texture_layout: &wgpu::BindGroupLayout,
material_layout: &wgpu::BindGroupLayout,
) -> wgpu::RenderPipeline {
let shader = device.create_shader_module(wgpu::ShaderModuleDescriptor {
label: Some("PBR Shader"),
source: wgpu::ShaderSource::Wgsl(include_str!("pbr_shader.wgsl").into()),
});
let layout = device.create_pipeline_layout(&wgpu::PipelineLayoutDescriptor {
label: Some("PBR Pipeline Layout"),
bind_group_layouts: &[camera_light_layout, texture_layout, material_layout],
immediate_size: 0,
});
device.create_render_pipeline(&wgpu::RenderPipelineDescriptor {
label: Some("PBR Pipeline"),
layout: Some(&layout),
vertex: wgpu::VertexState {
module: &shader,
entry_point: Some("vs_main"),
buffers: &[MeshVertex::LAYOUT],
compilation_options: wgpu::PipelineCompilationOptions::default(),
},
fragment: Some(wgpu::FragmentState {
module: &shader,
entry_point: Some("fs_main"),
targets: &[Some(wgpu::ColorTargetState {
format,
blend: Some(wgpu::BlendState::REPLACE),
write_mask: wgpu::ColorWrites::ALL,
})],
compilation_options: wgpu::PipelineCompilationOptions::default(),
}),
primitive: wgpu::PrimitiveState {
topology: wgpu::PrimitiveTopology::TriangleList,
strip_index_format: None,
front_face: wgpu::FrontFace::Ccw,
cull_mode: Some(wgpu::Face::Back),
polygon_mode: wgpu::PolygonMode::Fill,
unclipped_depth: false,
conservative: false,
},
depth_stencil: Some(wgpu::DepthStencilState {
format: DEPTH_FORMAT,
depth_write_enabled: true,
depth_compare: wgpu::CompareFunction::Less,
stencil: wgpu::StencilState::default(),
bias: wgpu::DepthBiasState::default(),
}),
multisample: wgpu::MultisampleState {
count: 1,
mask: !0,
alpha_to_coverage_enabled: false,
},
multiview_mask: None,
cache: None,
})
}
```
- [ ] **Step 2: Update lib.rs**
```rust
// crates/voltex_renderer/src/lib.rs에 추가:
pub mod pbr_pipeline;
pub use pbr_pipeline::create_pbr_pipeline;
```
- [ ] **Step 3: Verify the build**
Run: `cargo build -p voltex_renderer`
Expected: build succeeds
- [ ] **Step 4: Commit**
```bash
git add crates/voltex_renderer/
git commit -m "feat(renderer): add PBR render pipeline with 3 bind groups"
```
---
## Task 4: PBR Demo
**Files:**
- Create: `examples/pbr_demo/Cargo.toml`
- Create: `examples/pbr_demo/src/main.rs`
- Modify: `Cargo.toml` (add to the workspace)
Render a grid of spheres with metallic varied along the X axis and roughness along the Y axis, shaded with PBR.
- [ ] **Step 1: Workspace + Cargo.toml**
Add `"examples/pbr_demo"` to the workspace members.
```toml
# examples/pbr_demo/Cargo.toml
[package]
name = "pbr_demo"
version = "0.1.0"
edition = "2021"
[dependencies]
voltex_math.workspace = true
voltex_platform.workspace = true
voltex_renderer.workspace = true
voltex_ecs.workspace = true
voltex_asset.workspace = true
wgpu.workspace = true
winit.workspace = true
bytemuck.workspace = true
pollster.workspace = true
env_logger.workspace = true
log.workspace = true
```
- [ ] **Step 2: Write main.rs**
This file combines the dynamic UBO pattern from asset_demo with the PBR pipeline.
Files that must be read first:
1. `examples/many_cubes/src/main.rs` — dynamic UBO pattern
2. `crates/voltex_renderer/src/material.rs` — MaterialUniform API
3. `crates/voltex_renderer/src/sphere.rs` — generate_sphere API
4. `crates/voltex_renderer/src/pbr_pipeline.rs` — create_pbr_pipeline API
5. `crates/voltex_renderer/src/texture.rs` — GpuTexture API
Core structure:
- 7x7 sphere grid (49 spheres). X axis: metallic 0.0→1.0, Y axis: roughness 0.05→1.0
- Each sphere is an ECS entity (Transform + MaterialIndex(usize))
- Camera position: (0, 0, 25), facing the grid head-on
- Directional light: shining from above and in front
Bind group layout:
- group(0): CameraUniform (dynamic) + LightUniform (static) — same as many_cubes
- group(1): albedo texture (white 1x1) + sampler — existing GpuTexture
- group(2): MaterialUniform (dynamic) — per-entity material
Two dynamic UBOs:
1. Camera UBO: per-entity model matrix (dynamic offset)
2. Material UBO: per-entity metallic/roughness (dynamic offset)
Render loop:
```
for (i, entity) in entities.iter().enumerate() {
let camera_offset = i * camera_aligned_size;
let material_offset = i * material_aligned_size;
render_pass.set_bind_group(0, &camera_light_bg, &[camera_offset as u32]);
render_pass.set_bind_group(2, &material_bg, &[material_offset as u32]);
render_pass.draw_indexed(...);
}
```
Sphere generation: `generate_sphere(0.4, 32, 16)` — radius 0.4, enough resolution.
Grid layout: 7x7, spacing 1.2
```
for row in 0..7 { // roughness axis
for col in 0..7 { // metallic axis
let metallic = col as f32 / 6.0;
let roughness = 0.05 + row as f32 * (0.95 / 6.0);
// position: centered grid
}
}
```
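The dynamic offsets used in the render loop above must be multiples of the device's `min_uniform_buffer_offset_alignment` (commonly 256). A minimal sketch of the alignment helper, with hypothetical uniform sizes (the 144/32-byte figures are illustrative, not the engine's real struct sizes):

```rust
/// Round `size` up to the next multiple of `alignment`.
/// wgpu guarantees min_uniform_buffer_offset_alignment is a power of two.
fn align_to(size: u64, alignment: u64) -> u64 {
    (size + alignment - 1) & !(alignment - 1)
}

fn main() {
    let alignment = 256; // typical min_uniform_buffer_offset_alignment
    // Hypothetical sizes for illustration only.
    let camera_aligned_size = align_to(144, alignment);
    let material_aligned_size = align_to(32, alignment);
    assert_eq!(camera_aligned_size, 256);
    assert_eq!(material_aligned_size, 256);
    // Offset for entity i, exactly as used in the render loop above:
    let i = 3u64;
    assert_eq!(i * camera_aligned_size, 768);
}
```

In real code the alignment would be read from `device.limits()` rather than hard-coded.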
- [ ] **Step 3: Build + test**
Run: `cargo build --workspace`
Run: `cargo test --workspace`
- [ ] **Step 4: Run check**
Run: `cargo run -p pbr_demo`
Expected: a 7x7 sphere grid. Metallic increases left→right (more mirror-like), roughness increases bottom→top (rougher). FPS camera; ESC quits.
- [ ] **Step 5: Commit**
```bash
git add Cargo.toml examples/pbr_demo/
git commit -m "feat: add PBR demo with metallic/roughness sphere grid"
```
---
## Phase 4a Completion Checklist
- [ ] `cargo build --workspace` succeeds
- [ ] `cargo test --workspace` — all tests pass
- [ ] PBR shader: GGX NDF + Smith geometry + Fresnel-Schlick
- [ ] Reinhard tone mapping + gamma correction
- [ ] MaterialUniform: base_color, metallic, roughness, ao
- [ ] Procedural sphere: correct normals, adjustable resolution
- [ ] `cargo run -p pbr_demo` — metallic/roughness differences visible across the sphere grid
- [ ] All existing examples still work

# Phase 4b-1: Multi-Light System Implementation Plan
> **For agentic workers:** REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking.
**Goal:** Support up to 16 simultaneous lights of three types — Directional, Point, and Spot — and render multi-light illumination in the PBR shader.
**Architecture:** Replace the existing single `LightUniform` with a `LightsUniform`. Pass the GPU a fixed-size light array (MAX_LIGHTS=16) together with the active light count. The PBR shader loops over the lights, computing radiance per light type. Point lights use distance attenuation (inverse square + range clamp); spot lights apply cone-angle falloff.
**Tech Stack:** Rust 1.94, wgpu 28.0, WGSL
---
## File Structure
```
crates/voltex_renderer/src/
├── light.rs # replace with LightData, LightsUniform (MODIFY)
├── pbr_shader.wgsl # multi-light loop (MODIFY)
├── pbr_pipeline.rs # unchanged (no bind group change — light stays at group(0) binding(1))
├── lib.rs # update re-exports (MODIFY)
examples/
└── multi_light_demo/ # multi-light demo (NEW)
├── Cargo.toml
└── src/
└── main.rs
```
---
## Task 1: LightData + LightsUniform
**Files:**
- Modify: `crates/voltex_renderer/src/light.rs`
- Modify: `crates/voltex_renderer/src/lib.rs`
Keep the existing `LightUniform` (single directional) for backward compatibility, and add the new `LightData` + `LightsUniform` alongside it.
- [ ] **Step 1: Add the new types to light.rs**
Keep the existing CameraUniform and LightUniform (for existing-example compatibility). Add the new types:
```rust
// crates/voltex_renderer/src/light.rs — append below the existing code
pub const MAX_LIGHTS: usize = 16;
/// Light type constants
pub const LIGHT_DIRECTIONAL: u32 = 0;
pub const LIGHT_POINT: u32 = 1;
pub const LIGHT_SPOT: u32 = 2;
/// Per-light data in the GPU upload layout
#[repr(C)]
#[derive(Copy, Clone, Debug, Pod, Zeroable)]
pub struct LightData {
pub position: [f32; 3],
pub light_type: u32, // 0=directional, 1=point, 2=spot
pub direction: [f32; 3],
pub range: f32, // point/spot: maximum influence distance
pub color: [f32; 3],
pub intensity: f32,
pub inner_cone: f32, // spot: cosine of the inner cone angle
pub outer_cone: f32, // spot: cosine of the outer cone angle
pub _padding: [f32; 2],
}
impl LightData {
pub fn directional(direction: [f32; 3], color: [f32; 3], intensity: f32) -> Self {
Self {
position: [0.0; 3],
light_type: LIGHT_DIRECTIONAL,
direction,
range: 0.0,
color,
intensity,
inner_cone: 0.0,
outer_cone: 0.0,
_padding: [0.0; 2],
}
}
pub fn point(position: [f32; 3], color: [f32; 3], intensity: f32, range: f32) -> Self {
Self {
position,
light_type: LIGHT_POINT,
direction: [0.0; 3],
range,
color,
intensity,
inner_cone: 0.0,
outer_cone: 0.0,
_padding: [0.0; 2],
}
}
pub fn spot(
position: [f32; 3],
direction: [f32; 3],
color: [f32; 3],
intensity: f32,
range: f32,
inner_angle_deg: f32,
outer_angle_deg: f32,
) -> Self {
Self {
position,
light_type: LIGHT_SPOT,
direction,
range,
color,
intensity,
inner_cone: inner_angle_deg.to_radians().cos(),
outer_cone: outer_angle_deg.to_radians().cos(),
_padding: [0.0; 2],
}
}
}
/// Multi-light uniform (fixed-size array)
#[repr(C)]
#[derive(Copy, Clone, Debug, Pod, Zeroable)]
pub struct LightsUniform {
pub lights: [LightData; MAX_LIGHTS],
pub count: u32,
// WGSL gives vec3<f32> 16-byte alignment, so in the shader struct
// ambient_color sits at the next 16-byte boundary after `count` and the
// struct size is rounded up to a multiple of 16. These explicit pads keep
// the Rust layout in sync with the WGSL layout.
pub _pad0: [u32; 3],
pub ambient_color: [f32; 3],
pub _pad1: u32,
}
impl LightsUniform {
pub fn new() -> Self {
Self {
lights: [LightData::directional([0.0, -1.0, 0.0], [1.0; 3], 1.0); MAX_LIGHTS],
count: 0,
_pad0: [0; 3],
ambient_color: [0.03, 0.03, 0.03],
_pad1: 0,
}
}
pub fn add_light(&mut self, light: LightData) {
if (self.count as usize) < MAX_LIGHTS {
self.lights[self.count as usize] = light;
self.count += 1;
}
}
pub fn clear(&mut self) {
self.count = 0;
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_light_data_size() {
// LightData must be 16-byte aligned for WGSL array
assert_eq!(std::mem::size_of::<LightData>() % 16, 0);
}
#[test]
fn test_lights_uniform_add() {
let mut lights = LightsUniform::new();
lights.add_light(LightData::point([0.0, 5.0, 0.0], [1.0, 0.0, 0.0], 10.0, 20.0));
lights.add_light(LightData::directional([0.0, -1.0, 0.0], [1.0, 1.0, 1.0], 1.0));
assert_eq!(lights.count, 2);
}
#[test]
fn test_lights_uniform_max() {
let mut lights = LightsUniform::new();
for i in 0..20 {
lights.add_light(LightData::point([i as f32, 0.0, 0.0], [1.0; 3], 1.0, 10.0));
}
assert_eq!(lights.count, MAX_LIGHTS as u32); // capped at 16
}
#[test]
fn test_spot_light_cone() {
let spot = LightData::spot(
[0.0; 3], [0.0, -1.0, 0.0],
[1.0; 3], 10.0, 20.0,
15.0, 30.0,
);
// cos(15°) ≈ 0.9659, cos(30°) ≈ 0.8660
assert!((spot.inner_cone - 15.0_f32.to_radians().cos()).abs() < 1e-4);
assert!((spot.outer_cone - 30.0_f32.to_radians().cos()).abs() < 1e-4);
}
}
```
- [ ] **Step 2: Update lib.rs**
Add to the existing re-exports:
```rust
pub use light::{LightData, LightsUniform, MAX_LIGHTS, LIGHT_DIRECTIONAL, LIGHT_POINT, LIGHT_SPOT};
```
- [ ] **Step 3: Verify the tests pass**
Run: `cargo test -p voltex_renderer`
Expected: 13 existing + 4 new light tests = 17 PASS
- [ ] **Step 4: Commit**
```bash
git add crates/voltex_renderer/
git commit -m "feat(renderer): add LightData and LightsUniform for multi-light support"
```
---
## Task 2: Multi-Light PBR Shader
**Files:**
- Modify: `crates/voltex_renderer/src/pbr_shader.wgsl`
Replace the single directional-light logic with a loop over multiple lights, computing radiance per light type.
- [ ] **Step 1: Replace pbr_shader.wgsl**
```wgsl
// crates/voltex_renderer/src/pbr_shader.wgsl
const PI: f32 = 3.14159265358979;
const MAX_LIGHTS: u32 = 16u;
const LIGHT_DIRECTIONAL: u32 = 0u;
const LIGHT_POINT: u32 = 1u;
const LIGHT_SPOT: u32 = 2u;
struct CameraUniform {
view_proj: mat4x4<f32>,
model: mat4x4<f32>,
camera_pos: vec3<f32>,
};
struct LightData {
position: vec3<f32>,
light_type: u32,
direction: vec3<f32>,
range: f32,
color: vec3<f32>,
intensity: f32,
inner_cone: f32,
outer_cone: f32,
_padding: vec2<f32>,
};
struct LightsUniform {
lights: array<LightData, 16>,
count: u32,
ambient_color: vec3<f32>,
};
struct MaterialUniform {
base_color: vec4<f32>,
metallic: f32,
roughness: f32,
ao: f32,
};
@group(0) @binding(0) var<uniform> camera: CameraUniform;
@group(0) @binding(1) var<uniform> lights_uniform: LightsUniform;
@group(1) @binding(0) var t_diffuse: texture_2d<f32>;
@group(1) @binding(1) var s_diffuse: sampler;
@group(2) @binding(0) var<uniform> material: MaterialUniform;
struct VertexInput {
@location(0) position: vec3<f32>,
@location(1) normal: vec3<f32>,
@location(2) uv: vec2<f32>,
};
struct VertexOutput {
@builtin(position) clip_position: vec4<f32>,
@location(0) world_normal: vec3<f32>,
@location(1) world_pos: vec3<f32>,
@location(2) uv: vec2<f32>,
};
@vertex
fn vs_main(model_v: VertexInput) -> VertexOutput {
var out: VertexOutput;
let world_pos = camera.model * vec4<f32>(model_v.position, 1.0);
out.world_pos = world_pos.xyz;
out.world_normal = normalize((camera.model * vec4<f32>(model_v.normal, 0.0)).xyz);
out.clip_position = camera.view_proj * world_pos;
out.uv = model_v.uv;
return out;
}
// --- PBR Functions ---
fn distribution_ggx(N: vec3<f32>, H: vec3<f32>, roughness: f32) -> f32 {
let a = roughness * roughness;
let a2 = a * a;
let NdotH = max(dot(N, H), 0.0);
let NdotH2 = NdotH * NdotH;
let denom_inner = NdotH2 * (a2 - 1.0) + 1.0;
return a2 / (PI * denom_inner * denom_inner);
}
fn geometry_schlick_ggx(NdotV: f32, roughness: f32) -> f32 {
let r = roughness + 1.0;
let k = (r * r) / 8.0;
return NdotV / (NdotV * (1.0 - k) + k);
}
fn geometry_smith(N: vec3<f32>, V: vec3<f32>, L: vec3<f32>, roughness: f32) -> f32 {
let NdotV = max(dot(N, V), 0.0);
let NdotL = max(dot(N, L), 0.0);
return geometry_schlick_ggx(NdotV, roughness) * geometry_schlick_ggx(NdotL, roughness);
}
fn fresnel_schlick(cosTheta: f32, F0: vec3<f32>) -> vec3<f32> {
return F0 + (1.0 - F0) * pow(clamp(1.0 - cosTheta, 0.0, 1.0), 5.0);
}
// --- Light attenuation ---
fn attenuation_point(distance: f32, range: f32) -> f32 {
let att = 1.0 / (distance * distance + 0.0001);
// Smooth range falloff
let falloff = clamp(1.0 - pow(distance / range, 4.0), 0.0, 1.0);
return att * falloff * falloff;
}
fn attenuation_spot(light: LightData, L: vec3<f32>) -> f32 {
let theta = dot(normalize(light.direction), -L);
let epsilon = light.inner_cone - light.outer_cone;
return clamp((theta - light.outer_cone) / epsilon, 0.0, 1.0);
}
// --- Per-light radiance ---
fn compute_light_contribution(
light: LightData,
N: vec3<f32>,
V: vec3<f32>,
world_pos: vec3<f32>,
F0: vec3<f32>,
albedo: vec3<f32>,
metallic: f32,
roughness: f32,
) -> vec3<f32> {
var L: vec3<f32>;
var radiance: vec3<f32>;
if light.light_type == LIGHT_DIRECTIONAL {
L = normalize(-light.direction);
radiance = light.color * light.intensity;
} else {
// Point or Spot
let to_light = light.position - world_pos;
let distance = length(to_light);
L = to_light / distance;
let att = attenuation_point(distance, light.range);
radiance = light.color * light.intensity * att;
if light.light_type == LIGHT_SPOT {
radiance = radiance * attenuation_spot(light, L);
}
}
let H = normalize(V + L);
let NdotL = max(dot(N, L), 0.0);
if NdotL <= 0.0 {
return vec3<f32>(0.0);
}
let NDF = distribution_ggx(N, H, roughness);
let G = geometry_smith(N, V, L, roughness);
let F = fresnel_schlick(max(dot(H, V), 0.0), F0);
let ks = F;
let kd = (vec3<f32>(1.0) - ks) * (1.0 - metallic);
let numerator = NDF * G * F;
let NdotV = max(dot(N, V), 0.0);
let denominator = 4.0 * NdotV * NdotL + 0.0001;
let specular = numerator / denominator;
return (kd * albedo / PI + specular) * radiance * NdotL;
}
@fragment
fn fs_main(in: VertexOutput) -> @location(0) vec4<f32> {
let tex_color = textureSample(t_diffuse, s_diffuse, in.uv);
let albedo = material.base_color.rgb * tex_color.rgb;
let metallic = material.metallic;
let roughness = material.roughness;
let ao = material.ao;
let N = normalize(in.world_normal);
let V = normalize(camera.camera_pos - in.world_pos);
let F0 = mix(vec3<f32>(0.04, 0.04, 0.04), albedo, metallic);
// Accumulate light contributions
var Lo = vec3<f32>(0.0);
let light_count = min(lights_uniform.count, MAX_LIGHTS);
for (var i = 0u; i < light_count; i = i + 1u) {
Lo = Lo + compute_light_contribution(
lights_uniform.lights[i], N, V, in.world_pos,
F0, albedo, metallic, roughness,
);
}
// Ambient
let ambient = lights_uniform.ambient_color * albedo * ao;
var color = ambient + Lo;
// Reinhard tone mapping
color = color / (color + vec3<f32>(1.0));
// Gamma correction
color = pow(color, vec3<f32>(1.0 / 2.2));
return vec4<f32>(color, material.base_color.a * tex_color.a);
}
```
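The point-light falloff window in `attenuation_point` is easy to get subtly wrong, so a CPU-side Rust mirror of the same formula (a sketch for unit testing, not engine code) is worth keeping next to the shader:

```rust
/// CPU mirror of the WGSL attenuation_point: inverse-square falloff
/// multiplied by a smooth window that reaches exactly zero at `range`.
fn attenuation_point(distance: f32, range: f32) -> f32 {
    let att = 1.0 / (distance * distance + 0.0001);
    let falloff = (1.0 - (distance / range).powi(4)).clamp(0.0, 1.0);
    att * falloff * falloff
}

fn main() {
    // At or beyond the range, the window drives the result to zero.
    assert_eq!(attenuation_point(20.0, 20.0), 0.0);
    assert_eq!(attenuation_point(25.0, 20.0), 0.0);
    // Inside the range, closer means brighter.
    assert!(attenuation_point(1.0, 20.0) > attenuation_point(5.0, 20.0));
}
```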
- [ ] **Step 2: Verify the build**
Run: `cargo build -p voltex_renderer`
Expected: build succeeds
Important: because the PBR shader's light uniform has changed, the existing pbr_demo **still compiles** but can crash at runtime due to the mismatched bind group size. pbr_demo is updated in Task 3.
- [ ] **Step 3: Commit**
```bash
git add crates/voltex_renderer/src/pbr_shader.wgsl
git commit -m "feat(renderer): update PBR shader for multi-light with point and spot support"
```
---
## Task 3: multi_light_demo + pbr_demo Updates
**Files:**
- Create: `examples/multi_light_demo/Cargo.toml`
- Create: `examples/multi_light_demo/src/main.rs`
- Modify: `examples/pbr_demo/src/main.rs` (switch to LightsUniform)
- Modify: `Cargo.toml` (add multi_light_demo to the workspace)
### pbr_demo changes
The existing pbr_demo uses `LightUniform`, so it must be switched to `LightsUniform`. Keep the change minimal:
- `LightUniform::new()` → `LightsUniform::new()` + `add_light(LightData::directional(...))`
- the light_buffer size becomes the size of `LightsUniform`
- everything else stays the same
### multi_light_demo
Demonstrate multi-light rendering with several PBR spheres lit by point/spot lights of various colors.
Scene layout:
- Floor: a large cube (scale 10x0.1x10) at y=-0.5 (roughness 0.8, non-metal)
- 5 spheres in a row with varying metallic/roughness
- 4 point lights: red, green, blue, yellow — orbiting in a circle above the spheres
- 1 directional light: dim white (overall fill)
- 1 spot light: white, aimed at the center of the floor
Camera: (0, 5, 10), pitch=-0.3
Dynamic lights: update each point light's position along a circular orbit every frame, based on time.
Use the dynamic UBO pattern (based on many_cubes). LightsUniform is refreshed every frame via write_buffer (static binding, no dynamic offset).
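The per-frame orbit update boils down to a phase-shifted circle. A minimal sketch (function and parameter names are illustrative, not engine API):

```rust
/// Position of point light `i` of `n` on a circular orbit at the given
/// radius and height, phase-shifted so the lights stay evenly spaced.
fn orbit_position(i: usize, n: usize, time: f32, radius: f32, height: f32) -> [f32; 3] {
    let angle = time + i as f32 * std::f32::consts::TAU / n as f32;
    [radius * angle.cos(), height, radius * angle.sin()]
}

fn main() {
    // At t = 0, light 0 of 4 sits on the +X axis at the orbit radius.
    let p = orbit_position(0, 4, 0.0, 5.0, 3.0);
    assert!((p[0] - 5.0).abs() < 1e-6 && p[1] == 3.0 && p[2].abs() < 1e-6);
}
```

Each frame the demo would write the resulting positions into the LightsUniform and upload it with write_buffer.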
Files that must be read before writing this code:
1. `examples/pbr_demo/src/main.rs` — the file being modified
2. `examples/many_cubes/src/main.rs` — dynamic UBO pattern
3. `crates/voltex_renderer/src/light.rs` — LightData, LightsUniform API
4. `crates/voltex_renderer/src/material.rs` — MaterialUniform API
5. `crates/voltex_renderer/src/sphere.rs` — generate_sphere
- [ ] **Step 1: Modify pbr_demo/main.rs**
Key changes:
- `use voltex_renderer::LightUniform` → `use voltex_renderer::{LightsUniform, LightData}`
- light_uniform initialization: `LightsUniform::new()` + `add_light(LightData::directional([-1.0, -1.0, -1.0], [1.0,1.0,1.0], 1.0))`
- light_buffer size: `std::mem::size_of::<LightsUniform>()`
- pass `bytemuck::cast_slice(&[lights_uniform])` to write_buffer
- [ ] **Step 2: Write multi_light_demo**
Add it to the workspace; write Cargo.toml and main.rs.
Spheres + floor = 6 entities (simple enough to manage directly, without the ECS). Dynamic UBOs for camera (per-entity) and material (per-entity). LightsUniform is static (updated once per frame).
- [ ] **Step 3: Build + test**
Run: `cargo build --workspace`
Run: `cargo test --workspace`
- [ ] **Step 4: Run check**
Run: `cargo run -p pbr_demo` — still works (single directional light)
Run: `cargo run -p multi_light_demo` — multi-colored orbiting lights illuminate the spheres
- [ ] **Step 5: Commit**
```bash
git add Cargo.toml examples/pbr_demo/ examples/multi_light_demo/
git commit -m "feat: add multi-light demo with point/spot lights, update pbr_demo for LightsUniform"
```
---
## Phase 4b-1 Completion Checklist
- [ ] `cargo build --workspace` succeeds
- [ ] `cargo test --workspace` — all tests pass
- [ ] LightData: constructors for the Directional, Point, and Spot types
- [ ] LightsUniform: array of up to 16 lights, add/clear
- [ ] PBR shader: light loop, point attenuation, spot cone falloff
- [ ] `cargo run -p pbr_demo` — existing behavior preserved (single directional)
- [ ] `cargo run -p multi_light_demo` — orbiting multi-colored point lights, spot light
- [ ] All existing examples work (those using mesh_shader.wgsl are unchanged)

# Phase 4b-2: Directional Light Shadow Map Implementation Plan
> **For agentic workers:** REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking.
**Goal:** Generate a shadow map from the directional light and sample it in the PBR shader to render PCF soft shadows.
**Architecture:** Two-pass rendering: (1) Shadow pass — render the scene from the light's viewpoint with an orthographic projection into a depth-only texture; (2) Color pass — the existing PBR rendering plus shadow-map sampling. A ShadowMap struct manages the depth texture and the light's view-projection matrix. A new bind group(3) passes the shadow map and shadow uniform to the PBR shader. 3x3 PCF provides soft shadows.
**Tech Stack:** Rust 1.94, wgpu 28.0, WGSL
---
## File Structure
```
crates/voltex_renderer/src/
├── shadow.rs # ShadowMap, ShadowUniform, shadow depth texture (NEW)
├── shadow_shader.wgsl # Depth-only vertex shader for shadow pass (NEW)
├── shadow_pipeline.rs # Depth-only render pipeline (NEW)
├── pbr_shader.wgsl # add shadow sampling + PCF (MODIFY)
├── pbr_pipeline.rs # add group(3) shadow bind group (MODIFY)
├── lib.rs # update re-exports (MODIFY)
examples/
└── shadow_demo/ # shadow demo (NEW)
├── Cargo.toml
└── src/
└── main.rs
```
---
## Task 1: ShadowMap + Shadow Depth Shader + Shadow Pipeline
**Files:**
- Create: `crates/voltex_renderer/src/shadow.rs`
- Create: `crates/voltex_renderer/src/shadow_shader.wgsl`
- Create: `crates/voltex_renderer/src/shadow_pipeline.rs`
- Modify: `crates/voltex_renderer/src/lib.rs`
### shadow.rs
```rust
// crates/voltex_renderer/src/shadow.rs
use bytemuck::{Pod, Zeroable};
pub const SHADOW_MAP_SIZE: u32 = 2048;
pub const SHADOW_FORMAT: wgpu::TextureFormat = wgpu::TextureFormat::Depth32Float;
/// GPU resources needed for the shadow map
pub struct ShadowMap {
pub texture: wgpu::Texture,
pub view: wgpu::TextureView,
pub sampler: wgpu::Sampler,
}
impl ShadowMap {
pub fn new(device: &wgpu::Device) -> Self {
let texture = device.create_texture(&wgpu::TextureDescriptor {
label: Some("Shadow Map"),
size: wgpu::Extent3d {
width: SHADOW_MAP_SIZE,
height: SHADOW_MAP_SIZE,
depth_or_array_layers: 1,
},
mip_level_count: 1,
sample_count: 1,
dimension: wgpu::TextureDimension::D2,
format: SHADOW_FORMAT,
usage: wgpu::TextureUsages::RENDER_ATTACHMENT | wgpu::TextureUsages::TEXTURE_BINDING,
view_formats: &[],
});
let view = texture.create_view(&wgpu::TextureViewDescriptor::default());
// Comparison sampler for hardware-assisted shadow comparison
let sampler = device.create_sampler(&wgpu::SamplerDescriptor {
label: Some("Shadow Sampler"),
address_mode_u: wgpu::AddressMode::ClampToEdge,
address_mode_v: wgpu::AddressMode::ClampToEdge,
mag_filter: wgpu::FilterMode::Linear,
min_filter: wgpu::FilterMode::Linear,
compare: Some(wgpu::CompareFunction::LessEqual),
..Default::default()
});
Self { texture, view, sampler }
}
/// Shadow bind group layout (group 3)
/// binding 0: shadow depth texture (comparison)
/// binding 1: shadow comparison sampler
/// binding 2: ShadowUniform (light VP + params)
pub fn bind_group_layout(device: &wgpu::Device) -> wgpu::BindGroupLayout {
device.create_bind_group_layout(&wgpu::BindGroupLayoutDescriptor {
label: Some("Shadow Bind Group Layout"),
entries: &[
wgpu::BindGroupLayoutEntry {
binding: 0,
visibility: wgpu::ShaderStages::FRAGMENT,
ty: wgpu::BindingType::Texture {
multisampled: false,
view_dimension: wgpu::TextureViewDimension::D2,
sample_type: wgpu::TextureSampleType::Depth,
},
count: None,
},
wgpu::BindGroupLayoutEntry {
binding: 1,
visibility: wgpu::ShaderStages::FRAGMENT,
ty: wgpu::BindingType::Sampler(wgpu::SamplerBindingType::Comparison),
count: None,
},
wgpu::BindGroupLayoutEntry {
binding: 2,
visibility: wgpu::ShaderStages::VERTEX | wgpu::ShaderStages::FRAGMENT,
ty: wgpu::BindingType::Buffer {
ty: wgpu::BufferBindingType::Uniform,
has_dynamic_offset: false,
min_binding_size: None,
},
count: None,
},
],
})
}
pub fn create_bind_group(
&self,
device: &wgpu::Device,
layout: &wgpu::BindGroupLayout,
shadow_uniform_buffer: &wgpu::Buffer,
) -> wgpu::BindGroup {
device.create_bind_group(&wgpu::BindGroupDescriptor {
label: Some("Shadow Bind Group"),
layout,
entries: &[
wgpu::BindGroupEntry {
binding: 0,
resource: wgpu::BindingResource::TextureView(&self.view),
},
wgpu::BindGroupEntry {
binding: 1,
resource: wgpu::BindingResource::Sampler(&self.sampler),
},
wgpu::BindGroupEntry {
binding: 2,
resource: shadow_uniform_buffer.as_entire_binding(),
},
],
})
}
}
/// Uniform for the shadow pass (light view-projection matrix)
#[repr(C)]
#[derive(Copy, Clone, Debug, Pod, Zeroable)]
pub struct ShadowUniform {
pub light_view_proj: [[f32; 4]; 4],
pub shadow_map_size: f32,
pub shadow_bias: f32,
pub _padding: [f32; 2],
}
impl ShadowUniform {
pub fn new() -> Self {
Self {
light_view_proj: [
[1.0,0.0,0.0,0.0],
[0.0,1.0,0.0,0.0],
[0.0,0.0,1.0,0.0],
[0.0,0.0,0.0,1.0],
],
shadow_map_size: SHADOW_MAP_SIZE as f32,
shadow_bias: 0.005,
_padding: [0.0; 2],
}
}
}
/// Per-object uniform for the shadow pass (light VP * model)
#[repr(C)]
#[derive(Copy, Clone, Debug, Pod, Zeroable)]
pub struct ShadowPassUniform {
pub light_vp_model: [[f32; 4]; 4],
}
```
### shadow_shader.wgsl
The depth-only shader used by the shadow pass. It has no fragment output — only depth is written.
```wgsl
// crates/voltex_renderer/src/shadow_shader.wgsl
struct ShadowPassUniform {
light_vp_model: mat4x4<f32>,
};
@group(0) @binding(0) var<uniform> shadow_pass: ShadowPassUniform;
struct VertexInput {
@location(0) position: vec3<f32>,
@location(1) normal: vec3<f32>,
@location(2) uv: vec2<f32>,
};
@vertex
fn vs_main(model_v: VertexInput) -> @builtin(position) vec4<f32> {
return shadow_pass.light_vp_model * vec4<f32>(model_v.position, 1.0);
}
```
### shadow_pipeline.rs
A depth-only render pipeline with no fragment stage.
```rust
// crates/voltex_renderer/src/shadow_pipeline.rs
use crate::vertex::MeshVertex;
use crate::shadow::SHADOW_FORMAT;
pub fn create_shadow_pipeline(
device: &wgpu::Device,
shadow_pass_layout: &wgpu::BindGroupLayout,
) -> wgpu::RenderPipeline {
let shader = device.create_shader_module(wgpu::ShaderModuleDescriptor {
label: Some("Shadow Shader"),
source: wgpu::ShaderSource::Wgsl(include_str!("shadow_shader.wgsl").into()),
});
let layout = device.create_pipeline_layout(&wgpu::PipelineLayoutDescriptor {
label: Some("Shadow Pipeline Layout"),
bind_group_layouts: &[shadow_pass_layout],
immediate_size: 0,
});
device.create_render_pipeline(&wgpu::RenderPipelineDescriptor {
label: Some("Shadow Pipeline"),
layout: Some(&layout),
vertex: wgpu::VertexState {
module: &shader,
entry_point: Some("vs_main"),
buffers: &[MeshVertex::LAYOUT],
compilation_options: wgpu::PipelineCompilationOptions::default(),
},
fragment: None, // depth-only
primitive: wgpu::PrimitiveState {
topology: wgpu::PrimitiveTopology::TriangleList,
strip_index_format: None,
front_face: wgpu::FrontFace::Ccw,
cull_mode: Some(wgpu::Face::Front), // front-face culling reduces peter-panning
polygon_mode: wgpu::PolygonMode::Fill,
unclipped_depth: false,
conservative: false,
},
depth_stencil: Some(wgpu::DepthStencilState {
format: SHADOW_FORMAT,
depth_write_enabled: true,
depth_compare: wgpu::CompareFunction::LessEqual,
stencil: wgpu::StencilState::default(),
bias: wgpu::DepthBiasState {
constant: 2, // depth bias to prevent shadow acne
slope_scale: 2.0,
clamp: 0.0,
},
}),
multisample: wgpu::MultisampleState::default(),
multiview_mask: None,
cache: None,
})
}
/// Bind group layout for the shadow pass (group 0: ShadowPassUniform)
pub fn shadow_pass_bind_group_layout(device: &wgpu::Device) -> wgpu::BindGroupLayout {
device.create_bind_group_layout(&wgpu::BindGroupLayoutDescriptor {
label: Some("Shadow Pass BGL"),
entries: &[
wgpu::BindGroupLayoutEntry {
binding: 0,
visibility: wgpu::ShaderStages::VERTEX,
ty: wgpu::BindingType::Buffer {
ty: wgpu::BufferBindingType::Uniform,
has_dynamic_offset: true,
min_binding_size: wgpu::BufferSize::new(
std::mem::size_of::<crate::shadow::ShadowPassUniform>() as u64,
),
},
count: None,
},
],
})
}
```
### lib.rs update
```rust
pub mod shadow;
pub mod shadow_pipeline;
pub use shadow::{ShadowMap, ShadowUniform, ShadowPassUniform, SHADOW_MAP_SIZE, SHADOW_FORMAT};
pub use shadow_pipeline::{create_shadow_pipeline, shadow_pass_bind_group_layout};
```
- [ ] **Step 1: Write all of the files above**
- [ ] **Step 2: Verify the build** — `cargo build -p voltex_renderer`
- [ ] **Step 3: Commit**
```bash
git add crates/voltex_renderer/
git commit -m "feat(renderer): add ShadowMap, shadow depth shader, and shadow pipeline"
```
---
## Task 2: Integrate Shadows into the PBR Shader + Pipeline
**Files:**
- Modify: `crates/voltex_renderer/src/pbr_shader.wgsl`
- Modify: `crates/voltex_renderer/src/pbr_pipeline.rs`
Add a group(3) shadow bind group to the PBR shader and sample the directional light's shadow with PCF.
### pbr_shader.wgsl changes
Add the following to the existing code:
**Uniforms (group 3):**
```wgsl
struct ShadowUniform {
light_view_proj: mat4x4<f32>,
shadow_map_size: f32,
shadow_bias: f32,
};
@group(3) @binding(0) var t_shadow: texture_depth_2d;
@group(3) @binding(1) var s_shadow: sampler_comparison;
@group(3) @binding(2) var<uniform> shadow: ShadowUniform;
```
**Add to VertexOutput:**
```wgsl
@location(3) light_space_pos: vec4<f32>,
```
**Compute the light-space position in the vertex shader:**
```wgsl
out.light_space_pos = shadow.light_view_proj * world_pos;
```
**Shadow sampling function:**
```wgsl
fn calculate_shadow(light_space_pos: vec4<f32>) -> f32 {
// Perspective divide
let proj_coords = light_space_pos.xyz / light_space_pos.w;
// NDC → shadow map UV: x [-1,1]→[0,1], y [-1,1]→[0,1] (flip y)
let shadow_uv = vec2<f32>(
proj_coords.x * 0.5 + 0.5,
-proj_coords.y * 0.5 + 0.5,
);
let current_depth = proj_coords.z;
// Out of shadow map bounds → no shadow
if shadow_uv.x < 0.0 || shadow_uv.x > 1.0 || shadow_uv.y < 0.0 || shadow_uv.y > 1.0 {
return 1.0;
}
if current_depth > 1.0 || current_depth < 0.0 {
return 1.0;
}
// 3x3 PCF
let texel_size = 1.0 / shadow.shadow_map_size;
var shadow_val = 0.0;
for (var x = -1; x <= 1; x++) {
for (var y = -1; y <= 1; y++) {
let offset = vec2<f32>(f32(x), f32(y)) * texel_size;
shadow_val += textureSampleCompare(
t_shadow, s_shadow,
shadow_uv + offset,
current_depth - shadow.shadow_bias,
);
}
}
return shadow_val / 9.0;
}
```
**Apply the shadow to the directional light in the fragment shader:**
```wgsl
// inside compute_light_contribution, for the directional light only:
// radiance *= shadow_factor
```
In practice, either add a shadow_factor parameter to `compute_light_contribution`, or apply the shadow only to the directional light (the first light) in the fragment shader.
Simplest approach: in fs_main, compute the shadow once before the light loop and multiply it into the contribution of directional lights (type==0) only.
### pbr_pipeline.rs changes
Add the shadow layout to `create_pbr_pipeline`'s bind_group_layouts:
```rust
bind_group_layouts: &[camera_light_layout, texture_layout, material_layout, shadow_layout],
```
Add a `shadow_layout: &wgpu::BindGroupLayout` parameter to the function signature.
**Note:** this change affects the existing pbr_demo and multi_light_demo examples. They must either add a shadow bind group in Task 3 or use a "no shadow" dummy bind group.
- [ ] **Step 1: Modify pbr_shader.wgsl** — shadow uniforms, vertex output, sampling, PCF
- [ ] **Step 2: Modify pbr_pipeline.rs** — add the shadow bind group layout
- [ ] **Step 3: Verify the build** — `cargo build -p voltex_renderer`
- [ ] **Step 4: Commit**
```bash
git add crates/voltex_renderer/
git commit -m "feat(renderer): integrate shadow map sampling with PCF into PBR shader"
```
---
## Task 3: Update Existing Examples + Shadow Demo
**Files:**
- Modify: `examples/pbr_demo/src/main.rs` — add a dummy shadow bind group
- Modify: `examples/multi_light_demo/src/main.rs` — add a dummy shadow bind group
- Create: `examples/shadow_demo/Cargo.toml`
- Create: `examples/shadow_demo/src/main.rs`
- Modify: `Cargo.toml` (워크스페이스에 shadow_demo 추가)
### pbr_demo and multi_light_demo changes
The PBR pipeline now requires a shadow bind group (group 3), so examples that don't need shadows must provide a dummy one:
- a 1x1 depth texture (value 1.0 = no shadow)
- or create one via ShadowMap::new() and leave it untouched (if the cleared value is 0, everything ends up in shadow — not acceptable)
A simpler approach: create a ShadowMap but never run the shadow pass, leaving the map in its cleared state. `calculate_shadow` in the PBR shader returns shadow=1.0 (no shadow) whenever `current_depth > 1.0`, so a cleared map would behave as if there were no shadow.
In practice, however, the depth texture's clear value is 0.0, so the comparison puts everything in shadow. Ways to avoid this:
- set the shadow uniform's light_view_proj to identity, so every point's z stays positive (roughly its original position) and comparing against depth=0 always yields "lit"
- or, even simpler: set the shadow bias very high (e.g. 10.0) → always lit
Cleanest solution: have the shader disable shadows (return 1.0) when `shadow.shadow_map_size == 0.0`, and set shadow_map_size=0 in the dummy uniform.
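The sentinel setup on the CPU side might look like this (a sketch; the struct here is a local mirror of Task 1's `ShadowUniform` so the snippet is self-contained — real code would use voltex_renderer's type):

```rust
// Minimal mirror of ShadowUniform from Task 1, duplicated for self-containment.
#[repr(C)]
#[derive(Copy, Clone)]
struct ShadowUniform {
    light_view_proj: [[f32; 4]; 4],
    shadow_map_size: f32,
    shadow_bias: f32,
    _padding: [f32; 2],
}

/// Build the "shadows disabled" uniform: shadow_map_size == 0.0 is the
/// sentinel the shader checks before sampling the shadow map.
fn dummy_shadow_uniform() -> ShadowUniform {
    ShadowUniform {
        light_view_proj: [
            [1.0, 0.0, 0.0, 0.0],
            [0.0, 1.0, 0.0, 0.0],
            [0.0, 0.0, 1.0, 0.0],
            [0.0, 0.0, 0.0, 1.0],
        ],
        shadow_map_size: 0.0, // sentinel: shader returns 1.0 (fully lit)
        shadow_bias: 0.0,
        _padding: [0.0; 2],
    }
}

fn main() {
    assert_eq!(dummy_shadow_uniform().shadow_map_size, 0.0);
}
```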
### shadow_demo
Scene:
- Floor plane: a large cube (scale 15x0.1x15) at y=-0.5, roughness 0.8
- 3 spheres + 2 cubes placed on the floor
- Directional light: direction (-1, -2, -1) normalized, shining down at an angle
Render loop:
1. Shadow pass:
- Compute the light VP matrix: `Mat4::orthographic(...) * Mat4::look_at(light_pos, target, up)` (projection on the left, view on the right)
- Needed: an orthographic projection function (added to Mat4 or inlined)
- Render every object into the shadow map with the shadow pipeline
- per-object: ShadowPassUniform { light_vp * model }
2. Color pass:
- Write ShadowUniform { light_view_proj, shadow_map_size, shadow_bias }
- Render with the PBR pipeline (including the shadow bind group)
Camera: (5, 8, 12), pitch=-0.4
**Needed: add Mat4::orthographic**
voltex_math's Mat4 needs an orthographic projection function:
```rust
pub fn orthographic(left: f32, right: f32, bottom: f32, top: f32, near: f32, far: f32) -> Self
```
wgpu NDC (z: [0,1]):
```
col0: [2/(r-l), 0, 0, 0]
col1: [0, 2/(t-b), 0, 0]
col2: [0, 0, 1/(near-far), 0] // note: reversed for wgpu z[0,1]
col3: [-(r+l)/(r-l), -(t+b)/(t-b), near/(near-far), 1]
```
This can be implemented inline in shadow_demo or added to Mat4 in voltex_math. Adding it to Mat4 is better for reuse.
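A minimal standalone sketch of the matrix above, returning column-major data; wiring it into the project's `Mat4` type is left to the implementer:

```rust
/// Orthographic projection for wgpu NDC (z in [0, 1]), column-major.
/// Matches the column layout shown above.
fn orthographic(left: f32, right: f32, bottom: f32, top: f32, near: f32, far: f32) -> [[f32; 4]; 4] {
    let rw = 1.0 / (right - left);
    let rh = 1.0 / (top - bottom);
    let rd = 1.0 / (near - far); // note: near-far, per the wgpu z[0,1] convention
    [
        [2.0 * rw, 0.0, 0.0, 0.0],
        [0.0, 2.0 * rh, 0.0, 0.0],
        [0.0, 0.0, rd, 0.0],
        [-(right + left) * rw, -(top + bottom) * rh, near * rd, 1.0],
    ]
}
```

With this convention a view-space z of -near maps to NDC z 0 and -far maps to 1, matching wgpu's depth range for a right-handed view looking down -Z.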
- [ ] **Step 1: Add orthographic to voltex_math/mat4.rs + tests**
- [ ] **Step 2: Add dummy shadow bind groups to pbr_demo and multi_light_demo**
- [ ] **Step 3: Write shadow_demo**
- [ ] **Step 4: Build + test**
- [ ] **Step 5: Run check** — `cargo run -p shadow_demo`
- [ ] **Step 6: Commit**
```bash
git add Cargo.toml crates/voltex_math/ examples/
git commit -m "feat: add shadow demo with directional light shadow mapping and PCF"
```
---
## Phase 4b-2 Completion Checklist
- [ ] `cargo build --workspace` succeeds
- [ ] `cargo test --workspace` — all tests pass
- [ ] Shadow map: 2048x2048 depth texture, comparison sampler
- [ ] Shadow pass: depth-only pipeline, front-face culling, depth bias
- [ ] PBR shader: shadow map sampling + 3x3 PCF
- [ ] `cargo run -p shadow_demo` — object shadows visible on the floor
- [ ] `cargo run -p pbr_demo` — works without shadows (dummy shadow)
- [ ] `cargo run -p multi_light_demo` — works without shadows
- [ ] Existing examples (those using mesh_shader) are unaffected


@@ -0,0 +1,599 @@
# Phase 4c: Normal Mapping + Simple IBL Implementation Plan
> **For agentic workers:** REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking.
**Goal:** Add surface detail with normal maps, and make roughness-dependent reflection differences visible through procedural environment lighting (simplified IBL) plus a BRDF LUT.
**Architecture:** Add a tangent vector to MeshVertex to build a TBN matrix, and sample the normal map in the PBR shader. For IBL, compute environment light with a procedural sky function instead of a cubemap, and apply the split-sum approximation with a CPU-generated BRDF LUT. Swapping the procedural sky for a real HDR cubemap later yields full IBL.
**Tech Stack:** Rust 1.94, wgpu 28.0, WGSL
---
## File Structure
```
crates/voltex_renderer/src/
├── vertex.rs # add tangent to MeshVertex (MODIFY)
├── obj.rs # add tangent computation (MODIFY)
├── sphere.rs # add tangent computation (MODIFY)
├── brdf_lut.rs # CPU BRDF LUT generation (NEW)
├── ibl.rs # IBL bind group + dummy resources (NEW)
├── pbr_shader.wgsl # normal mapping + IBL (MODIFY)
├── pbr_pipeline.rs # group(4) IBL bind group (MODIFY)
├── shadow_shader.wgsl # reflect the vertex layout change (MODIFY)
├── lib.rs # update re-exports (MODIFY)
examples/
└── ibl_demo/ # normal map + IBL demo (NEW)
├── Cargo.toml
└── src/
└── main.rs
```
---
## Task 1: MeshVertex Tangent + Computation
**Files:**
- Modify: `crates/voltex_renderer/src/vertex.rs`
- Modify: `crates/voltex_renderer/src/obj.rs`
- Modify: `crates/voltex_renderer/src/sphere.rs`
- Modify: `crates/voltex_renderer/src/shadow_shader.wgsl`
Add `tangent: [f32; 4]` to MeshVertex (w = handedness, +1 or -1). Compute tangents in the OBJ parser and the sphere generator.
- [ ] **Step 1: vertex.rs — add tangent to MeshVertex**
```rust
#[repr(C)]
#[derive(Copy, Clone, Debug, Pod, Zeroable)]
pub struct MeshVertex {
    pub position: [f32; 3],
    pub normal: [f32; 3],
    pub uv: [f32; 2],
    pub tangent: [f32; 4], // xyz = tangent direction, w = handedness (+1 or -1)
}
```
Add the tangent attribute to LAYOUT:
```rust
// location 3, Float32x4, offset after uv
wgpu::VertexAttribute {
    offset: (std::mem::size_of::<[f32; 3]>() * 2 + std::mem::size_of::<[f32; 2]>()) as wgpu::BufferAddress,
    shader_location: 3,
    format: wgpu::VertexFormat::Float32x4,
},
```
- [ ] **Step 2: obj.rs — add tangent computation**
Post-process in the OBJ parser with a Mikktspace-like tangent computation. At the end of `parse_obj`, compute per-triangle tangents and accumulate them onto the vertices:
```rust
/// Compute tangent vectors from the triangles in the index array.
/// Derive tangent/bitangent from UVs, accumulate per vertex, then normalize.
pub fn compute_tangents(vertices: &mut [MeshVertex], indices: &[u32]) {
    // For each triangle:
    // edge1 = v1.pos - v0.pos, edge2 = v2.pos - v0.pos
    // duv1 = v1.uv - v0.uv, duv2 = v2.uv - v0.uv
    // f = 1.0 / (duv1.x * duv2.y - duv2.x * duv1.y)
    // tangent = f * (duv2.y * edge1 - duv1.y * edge2)
    // bitangent = f * (-duv2.x * edge1 + duv1.x * edge2)
    // Accumulate, then normalize; handedness = sign(dot(cross(N, T), B))
}
```
Call `compute_tangents(&mut vertices, &indices)` at the end of `parse_obj`.
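The accumulation scheme sketched in the skeleton above can be fleshed out as follows. This is a self-contained sketch, not the crate's actual code: the `MeshVertex` definition is repeated so the snippet stands alone, and the degenerate-UV skip is an assumption.

```rust
#[derive(Copy, Clone)]
pub struct MeshVertex {
    pub position: [f32; 3],
    pub normal: [f32; 3],
    pub uv: [f32; 2],
    pub tangent: [f32; 4],
}

fn sub(a: [f32; 3], b: [f32; 3]) -> [f32; 3] { [a[0] - b[0], a[1] - b[1], a[2] - b[2]] }
fn dot(a: [f32; 3], b: [f32; 3]) -> f32 { a[0] * b[0] + a[1] * b[1] + a[2] * b[2] }
fn cross(a: [f32; 3], b: [f32; 3]) -> [f32; 3] {
    [a[1] * b[2] - a[2] * b[1], a[2] * b[0] - a[0] * b[2], a[0] * b[1] - a[1] * b[0]]
}

pub fn compute_tangents(vertices: &mut [MeshVertex], indices: &[u32]) {
    let mut tan = vec![[0.0f32; 3]; vertices.len()];
    let mut bitan = vec![[0.0f32; 3]; vertices.len()];
    for tri in indices.chunks_exact(3) {
        let (i0, i1, i2) = (tri[0] as usize, tri[1] as usize, tri[2] as usize);
        let (v0, v1, v2) = (vertices[i0], vertices[i1], vertices[i2]);
        let (e1, e2) = (sub(v1.position, v0.position), sub(v2.position, v0.position));
        let duv1 = [v1.uv[0] - v0.uv[0], v1.uv[1] - v0.uv[1]];
        let duv2 = [v2.uv[0] - v0.uv[0], v2.uv[1] - v0.uv[1]];
        let det = duv1[0] * duv2[1] - duv2[0] * duv1[1];
        if det.abs() < 1e-8 { continue; } // skip triangles with degenerate UVs (assumption)
        let f = 1.0 / det;
        for k in 0..3 {
            let t = f * (duv2[1] * e1[k] - duv1[1] * e2[k]);
            let b = f * (-duv2[0] * e1[k] + duv1[0] * e2[k]);
            for &i in &[i0, i1, i2] {
                tan[i][k] += t;
                bitan[i][k] += b;
            }
        }
    }
    for (v, (t, b)) in vertices.iter_mut().zip(tan.into_iter().zip(bitan)) {
        let n = v.normal;
        // Gram-Schmidt: remove the normal component, then normalize.
        let proj = dot(n, t);
        let mut t_ortho = [t[0] - n[0] * proj, t[1] - n[1] * proj, t[2] - n[2] * proj];
        let len = dot(t_ortho, t_ortho).sqrt().max(1e-8);
        for k in 0..3 { t_ortho[k] /= len; }
        // handedness = sign(dot(cross(N, T), B))
        let w = if dot(cross(n, t_ortho), b) < 0.0 { -1.0 } else { 1.0 };
        v.tangent = [t_ortho[0], t_ortho[1], t_ortho[2], w];
    }
}
```

For a quad in the XY plane with standard UVs, this yields a +X tangent with handedness +1.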
- [ ] **Step 3: sphere.rs — add tangent computation**
For a UV sphere the tangent can be computed analytically:
- tangent direction = the tangent along longitude (derivative with respect to the sector angle)
- `tx = -sin(sector_angle), tz = cos(sector_angle)` (with Y-up)
- handedness w = 1.0
Compute the tangent directly for each vertex in `generate_sphere`.
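A quick sketch of the analytic formula. The angle conventions here are assumptions (`sector` is longitude around Y, `stack` is latitude from the equator); the actual `generate_sphere` parameterization may differ:

```rust
/// Unit normal on a Y-up unit sphere (assumed parameterization).
fn sphere_normal(stack: f32, sector: f32) -> [f32; 3] {
    [stack.cos() * sector.cos(), stack.sin(), stack.cos() * sector.sin()]
}

/// Tangent along increasing longitude; independent of latitude.
fn sphere_tangent(sector: f32) -> [f32; 4] {
    [-sector.sin(), 0.0, sector.cos(), 1.0] // w = handedness
}
```

The analytic tangent is already unit length and orthogonal to the normal, so no Gram-Schmidt pass is needed for spheres.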
- [ ] **Step 4: shadow_shader.wgsl — add tangent to the vertex input**
The shadow shader also consumes MeshVertex, so its VertexInput must gain the tangent for the build to succeed:
```wgsl
struct VertexInput {
    @location(0) position: vec3<f32>,
    @location(1) normal: vec3<f32>,
    @location(2) uv: vec2<f32>,
    @location(3) tangent: vec4<f32>,
};
```
The vertex shader ignores the tangent and transforms position only — same as before.
- [ ] **Step 5: Build + test**
Run: `cargo build -p voltex_renderer`
Run: `cargo test -p voltex_renderer`
Expected: existing tests pass; the OBJ tests see MeshVertex with the new tangent field.
Note: the existing OBJ tests only verify position/normal/uv, so adding tangent doesn't break them. The sphere tests only check vertex/index counts, so they are fine too.
- [ ] **Step 6: Commit**
```bash
git add crates/voltex_renderer/
git commit -m "feat(renderer): add tangent to MeshVertex with computation in OBJ parser and sphere generator"
```
---
## Task 2: BRDF LUT + IBL Resources
**Files:**
- Create: `crates/voltex_renderer/src/brdf_lut.rs`
- Create: `crates/voltex_renderer/src/ibl.rs`
- Modify: `crates/voltex_renderer/src/lib.rs`
The BRDF LUT is the core of the split-sum approximation: it stores scale+bias over NdotV (x axis) and roughness (y axis) in a 2D texture.
- [ ] **Step 1: brdf_lut.rs — generate the BRDF LUT on the CPU**
```rust
// crates/voltex_renderer/src/brdf_lut.rs

/// Generate the BRDF LUT (256x256, RG16Float or an RGBA8 approximation).
/// x axis = NdotV (0..1), y axis = roughness (0..1)
/// Output: (scale, bias) per texel → R=scale, G=bias
pub fn generate_brdf_lut(size: u32) -> Vec<[f32; 2]> {
    let mut data = vec![[0.0f32; 2]; (size * size) as usize];
    for y in 0..size {
        let roughness = (y as f32 + 0.5) / size as f32;
        for x in 0..size {
            let n_dot_v = (x as f32 + 0.5) / size as f32;
            let (scale, bias) = integrate_brdf(n_dot_v.max(0.001), roughness.max(0.001));
            data[(y * size + x) as usize] = [scale, bias];
        }
    }
    data
}

/// Hammersley sequence (low-discrepancy)
fn radical_inverse_vdc(mut bits: u32) -> f32 {
    bits = (bits << 16) | (bits >> 16);
    bits = ((bits & 0x55555555) << 1) | ((bits & 0xAAAAAAAA) >> 1);
    bits = ((bits & 0x33333333) << 2) | ((bits & 0xCCCCCCCC) >> 2);
    bits = ((bits & 0x0F0F0F0F) << 4) | ((bits & 0xF0F0F0F0) >> 4);
    bits = ((bits & 0x00FF00FF) << 8) | ((bits & 0xFF00FF00) >> 8);
    bits as f32 * 2.3283064365386963e-10 // 1/2^32
}

fn hammersley(i: u32, n: u32) -> [f32; 2] {
    [i as f32 / n as f32, radical_inverse_vdc(i)]
}

/// GGX importance sampling
fn importance_sample_ggx(xi: [f32; 2], roughness: f32) -> [f32; 3] {
    let a = roughness * roughness;
    let phi = 2.0 * std::f32::consts::PI * xi[0];
    let cos_theta = ((1.0 - xi[1]) / (1.0 + (a * a - 1.0) * xi[1])).sqrt();
    let sin_theta = (1.0 - cos_theta * cos_theta).sqrt();
    [phi.cos() * sin_theta, phi.sin() * sin_theta, cos_theta]
}

/// Numerical integration of the BRDF for a given NdotV and roughness.
/// N is fixed at (0, 0, 1), so it never needs to be materialized.
fn integrate_brdf(n_dot_v: f32, roughness: f32) -> (f32, f32) {
    let v = [
        (1.0 - n_dot_v * n_dot_v).sqrt(), // sin
        0.0,
        n_dot_v, // cos
    ];
    let mut scale = 0.0f32;
    let mut bias = 0.0f32;
    let sample_count = 1024u32;
    for i in 0..sample_count {
        let xi = hammersley(i, sample_count);
        let h = importance_sample_ggx(xi, roughness);
        // L = 2 * dot(V, H) * H - V
        let v_dot_h = (v[0] * h[0] + v[1] * h[1] + v[2] * h[2]).max(0.0);
        let l = [
            2.0 * v_dot_h * h[0] - v[0],
            2.0 * v_dot_h * h[1] - v[1],
            2.0 * v_dot_h * h[2] - v[2],
        ];
        let n_dot_l = l[2].max(0.0); // dot(N, L) where N = (0,0,1)
        let n_dot_h = h[2].max(0.0);
        if n_dot_l > 0.0 {
            let g = geometry_smith_ibl(n_dot_v, n_dot_l, roughness);
            let g_vis = (g * v_dot_h) / (n_dot_h * n_dot_v).max(0.001);
            let fc = (1.0 - v_dot_h).powi(5);
            scale += g_vis * (1.0 - fc);
            bias += g_vis * fc;
        }
    }
    (scale / sample_count as f32, bias / sample_count as f32)
}

fn geometry_smith_ibl(n_dot_v: f32, n_dot_l: f32, roughness: f32) -> f32 {
    let a = roughness;
    let k = (a * a) / 2.0; // IBL uses k = a^2/2 (not (a+1)^2/8)
    let g1 = n_dot_v / (n_dot_v * (1.0 - k) + k);
    let g2 = n_dot_l / (n_dot_l * (1.0 - k) + k);
    g1 * g2
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_brdf_lut_dimensions() {
        let lut = generate_brdf_lut(64);
        assert_eq!(lut.len(), 64 * 64);
    }

    #[test]
    fn test_brdf_lut_values_in_range() {
        let lut = generate_brdf_lut(32);
        for [scale, bias] in &lut {
            assert!(*scale >= 0.0 && *scale <= 1.5, "scale out of range: {}", scale);
            assert!(*bias >= 0.0 && *bias <= 1.5, "bias out of range: {}", bias);
        }
    }

    #[test]
    fn test_hammersley() {
        let h = hammersley(0, 16);
        assert_eq!(h[0], 0.0);
    }
}
```
- [ ] **Step 2: ibl.rs — IBL resource management**
```rust
// crates/voltex_renderer/src/ibl.rs
use crate::brdf_lut::generate_brdf_lut;

pub const BRDF_LUT_SIZE: u32 = 256;

/// IBL resources (BRDF LUT texture)
pub struct IblResources {
    pub brdf_lut_texture: wgpu::Texture,
    pub brdf_lut_view: wgpu::TextureView,
    pub brdf_lut_sampler: wgpu::Sampler,
}

impl IblResources {
    pub fn new(device: &wgpu::Device, queue: &wgpu::Queue) -> Self {
        // Generate BRDF LUT on CPU
        let lut_data = generate_brdf_lut(BRDF_LUT_SIZE);
        // Convert [f32; 2] to RGBA8 (R=scale*255, G=bias*255, B=0, A=255)
        let mut pixels = vec![0u8; (BRDF_LUT_SIZE * BRDF_LUT_SIZE * 4) as usize];
        for (i, [scale, bias]) in lut_data.iter().enumerate() {
            pixels[i * 4] = (scale.clamp(0.0, 1.0) * 255.0) as u8;
            pixels[i * 4 + 1] = (bias.clamp(0.0, 1.0) * 255.0) as u8;
            pixels[i * 4 + 2] = 0;
            pixels[i * 4 + 3] = 255;
        }
        let size = wgpu::Extent3d {
            width: BRDF_LUT_SIZE,
            height: BRDF_LUT_SIZE,
            depth_or_array_layers: 1,
        };
        let brdf_lut_texture = device.create_texture(&wgpu::TextureDescriptor {
            label: Some("BRDF LUT"),
            size,
            mip_level_count: 1,
            sample_count: 1,
            dimension: wgpu::TextureDimension::D2,
            format: wgpu::TextureFormat::Rgba8Unorm, // NOT sRGB
            usage: wgpu::TextureUsages::TEXTURE_BINDING | wgpu::TextureUsages::COPY_DST,
            view_formats: &[],
        });
        queue.write_texture(
            wgpu::TexelCopyTextureInfo {
                texture: &brdf_lut_texture,
                mip_level: 0,
                origin: wgpu::Origin3d::ZERO,
                aspect: wgpu::TextureAspect::All,
            },
            &pixels,
            wgpu::TexelCopyBufferLayout {
                offset: 0,
                bytes_per_row: Some(4 * BRDF_LUT_SIZE),
                rows_per_image: Some(BRDF_LUT_SIZE),
            },
            size,
        );
        let brdf_lut_view = brdf_lut_texture.create_view(&wgpu::TextureViewDescriptor::default());
        let brdf_lut_sampler = device.create_sampler(&wgpu::SamplerDescriptor {
            label: Some("BRDF LUT Sampler"),
            address_mode_u: wgpu::AddressMode::ClampToEdge,
            address_mode_v: wgpu::AddressMode::ClampToEdge,
            mag_filter: wgpu::FilterMode::Linear,
            min_filter: wgpu::FilterMode::Linear,
            ..Default::default()
        });
        Self {
            brdf_lut_texture,
            brdf_lut_view,
            brdf_lut_sampler,
        }
    }

    /// IBL bind group layout (group 4)
    /// binding 0: BRDF LUT texture
    /// binding 1: BRDF LUT sampler
    pub fn bind_group_layout(device: &wgpu::Device) -> wgpu::BindGroupLayout {
        device.create_bind_group_layout(&wgpu::BindGroupLayoutDescriptor {
            label: Some("IBL Bind Group Layout"),
            entries: &[
                wgpu::BindGroupLayoutEntry {
                    binding: 0,
                    visibility: wgpu::ShaderStages::FRAGMENT,
                    ty: wgpu::BindingType::Texture {
                        multisampled: false,
                        view_dimension: wgpu::TextureViewDimension::D2,
                        sample_type: wgpu::TextureSampleType::Float { filterable: true },
                    },
                    count: None,
                },
                wgpu::BindGroupLayoutEntry {
                    binding: 1,
                    visibility: wgpu::ShaderStages::FRAGMENT,
                    ty: wgpu::BindingType::Sampler(wgpu::SamplerBindingType::Filtering),
                    count: None,
                },
            ],
        })
    }

    pub fn create_bind_group(
        &self,
        device: &wgpu::Device,
        layout: &wgpu::BindGroupLayout,
    ) -> wgpu::BindGroup {
        device.create_bind_group(&wgpu::BindGroupDescriptor {
            label: Some("IBL Bind Group"),
            layout,
            entries: &[
                wgpu::BindGroupEntry {
                    binding: 0,
                    resource: wgpu::BindingResource::TextureView(&self.brdf_lut_view),
                },
                wgpu::BindGroupEntry {
                    binding: 1,
                    resource: wgpu::BindingResource::Sampler(&self.brdf_lut_sampler),
                },
            ],
        })
    }
}
```
- [ ] **Step 3: Update lib.rs**
```rust
pub mod brdf_lut;
pub mod ibl;
pub use ibl::IblResources;
```
- [ ] **Step 4: Tests pass**
Run: `cargo test -p voltex_renderer`
Expected: existing tests + the 3 brdf_lut tests PASS
- [ ] **Step 5: Commit**
```bash
git add crates/voltex_renderer/
git commit -m "feat(renderer): add BRDF LUT generator and IBL resources"
```
---
## Task 3: PBR Shader Normal Map + IBL Integration
**Files:**
- Modify: `crates/voltex_renderer/src/pbr_shader.wgsl`
- Modify: `crates/voltex_renderer/src/pbr_pipeline.rs`
### PBR shader changes
1. **Add tangent to VertexInput:**
```wgsl
@location(3) tangent: vec4<f32>,
```
2. **Add tangent/bitangent to VertexOutput:**
```wgsl
@location(4) world_tangent: vec3<f32>,
@location(5) world_bitangent: vec3<f32>,
```
3. **Compute the TBN in vs_main:**
```wgsl
let T = normalize((camera.model * vec4<f32>(model_v.tangent.xyz, 0.0)).xyz);
let B = cross(out.world_normal, T) * model_v.tangent.w;
out.world_tangent = T;
out.world_bitangent = B;
```
4. **Extend group(1) — add the normal map texture:**
```wgsl
@group(1) @binding(2) var t_normal: texture_2d<f32>;
@group(1) @binding(3) var s_normal: sampler;
```
The existing group(1) bind group layout must also gain the normal map bindings. But that changes the existing GpuTexture layout, which has wide impact.
**Alternative:** add the normal map to the material bind group (group 2), or use a separate bind group.
**Simplest approach:** add the normal map to group(1) and extend the texture bind group layout. Existing examples use a 1x1 "flat blue" texture ((128, 128, 255, 255) = (0,0,1) normal) as the normal map.
5. **group(4) IBL bindings:**
```wgsl
@group(4) @binding(0) var t_brdf_lut: texture_2d<f32>;
@group(4) @binding(1) var s_brdf_lut: sampler;
```
6. **Procedural environment function:**
```wgsl
fn sample_environment(direction: vec3<f32>, roughness: f32) -> vec3<f32> {
    let t = direction.y * 0.5 + 0.5;
    let sky = mix(vec3<f32>(0.05, 0.05, 0.08), vec3<f32>(0.3, 0.5, 0.9), t);
    let horizon = vec3<f32>(0.6, 0.6, 0.5);
    let ground = vec3<f32>(0.1, 0.08, 0.06);
    var env: vec3<f32>;
    if direction.y > 0.0 {
        env = mix(horizon, sky, pow(direction.y, 0.4));
    } else {
        env = mix(horizon, ground, pow(-direction.y, 0.4));
    }
    // Roughness → blur (lerp toward average)
    let avg = (sky + horizon + ground) / 3.0;
    return mix(env, avg, roughness * roughness);
}
```
7. **Replace the ambient term with IBL in fs_main:**
Before:
```wgsl
let ambient = lights_uniform.ambient_color * albedo * ao;
```
After:
```wgsl
let NdotV = max(dot(N, V), 0.0);
let R = reflect(-V, N);
// Diffuse IBL
let irradiance = sample_environment(N, 1.0);
let diffuse_ibl = kd_ambient * albedo * irradiance;
// Specular IBL
let prefiltered = sample_environment(R, roughness);
let brdf_sample = textureSample(t_brdf_lut, s_brdf_lut, vec2<f32>(NdotV, roughness));
let F_env = F0 * brdf_sample.r + vec3<f32>(brdf_sample.g);
let specular_ibl = prefiltered * F_env;
let ambient = (diffuse_ibl + specular_ibl) * ao;
```
Here `kd_ambient = (1.0 - F_env_avg) * (1.0 - metallic)` — energy conservation.
### PBR pipeline changes
Add an `ibl_layout: &wgpu::BindGroupLayout` parameter to `create_pbr_pipeline`:
```rust
pub fn create_pbr_pipeline(
    device, format,
    camera_light_layout,
    texture_layout,  // group(1): now includes normal map
    material_layout,
    shadow_layout,
    ibl_layout,      // NEW: group(4)
) -> wgpu::RenderPipeline
```
bind_group_layouts: `&[camera_light, texture, material, shadow, ibl]`
### Extending the texture bind group layout
Modify GpuTexture::bind_group_layout, or add a new function, so that the normal map is included:
```rust
// existing (bindings 0-1): albedo texture + sampler
// newly added (bindings 2-3): normal map texture + sampler
```
For compatibility with existing examples, add a new `texture_with_normal_bind_group_layout(device)` function and keep the existing `bind_group_layout`.
- [ ] **Step 1: Modify pbr_shader.wgsl**
- [ ] **Step 2: Modify pbr_pipeline.rs**
- [ ] **Step 3: Add a normal-map-inclusive bind group layout to texture.rs**
- [ ] **Step 4: Build check** — `cargo build -p voltex_renderer`
- [ ] **Step 5: Commit**
```bash
git add crates/voltex_renderer/
git commit -m "feat(renderer): add normal mapping and procedural IBL to PBR shader"
```
---
## Task 4: Update Existing Examples + IBL Demo
**Files:**
- Modify: `examples/pbr_demo/src/main.rs`
- Modify: `examples/multi_light_demo/src/main.rs`
- Modify: `examples/shadow_demo/src/main.rs`
- Create: `examples/ibl_demo/Cargo.toml`
- Create: `examples/ibl_demo/src/main.rs`
- Modify: `Cargo.toml`
### Updating existing examples
For every PBR example:
1. Pass the new `ibl_layout` parameter to `create_pbr_pipeline`
2. Create IblResources and the IBL bind group
3. Normal map: use a 1x1 "flat blue" texture (128, 128, 255, 255)
4. Add the normal map to the texture bind group
5. Set the IBL bind group (group 4) in the render pass
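The "flat blue" encoding can be sanity-checked on the CPU. A small sketch, assuming the standard normal-map decode `n = c/255 * 2 - 1`, under which (128, 128, 255) comes back as approximately (0, 0, 1):

```rust
/// Pixel data for the 1x1 flat normal map (RGBA8, non-sRGB).
const FLAT_NORMAL_PIXEL: [u8; 4] = [128, 128, 255, 255];

/// Decode an RGB8 normal-map texel back to a tangent-space vector.
fn decode_normal(px: [u8; 4]) -> [f32; 3] {
    let f = |c: u8| c as f32 / 255.0 * 2.0 - 1.0;
    [f(px[0]), f(px[1]), f(px[2])]
}
```

Because 128/255 is not exactly 0.5, the decoded x/y are ~0.004 rather than 0 — close enough that a flat normal map is visually indistinguishable from no normal map.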
### ibl_demo
A 7x7 sphere grid (similar to pbr_demo, but the IBL effect is visible):
- metallic along the X axis, roughness along the Y axis
- with IBL enabled, roughness differences are clearly visible
- smooth metallic spheres reflect the environment; rough spheres show blurry reflections
- Camera: (0, 0, 12)
Required reading: pbr_demo/src/main.rs (the base), shadow_demo/src/main.rs (the shadow bind group pattern)
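One plausible mapping from grid cell to material parameters. The exact values and the roughness clamp are assumptions mirroring what pbr_demo-style grids typically do, not the demo's actual code:

```rust
/// Map a 7x7 grid cell to (metallic, roughness).
/// Column drives metallic, row drives roughness; roughness is clamped
/// away from 0 so the smoothest row is not a perfect mirror (assumption).
fn grid_material(col: u32, row: u32) -> (f32, f32) {
    let metallic = col as f32 / 6.0;
    let roughness = (row as f32 / 6.0).max(0.05);
    (metallic, roughness)
}
```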
- [ ] **Step 1: Update existing examples (pbr_demo, multi_light_demo, shadow_demo)**
- [ ] **Step 2: Write ibl_demo**
- [ ] **Step 3: Build + test**
- [ ] **Step 4: Run check** — `cargo run -p ibl_demo`
- [ ] **Step 5: Commit**
```bash
git add Cargo.toml examples/
git commit -m "feat: add IBL demo with normal mapping and procedural environment lighting"
```
---
## Phase 4c Completion Checklist
- [ ] `cargo build --workspace` succeeds
- [ ] `cargo test --workspace` — all tests pass
- [ ] MeshVertex includes tangent, computed automatically for OBJ/sphere
- [ ] PBR shader: normal map sampled through the TBN matrix
- [ ] BRDF LUT: CPU-generated, 256x256 texture
- [ ] Procedural IBL: sky gradient + roughness-based blur
- [ ] `cargo run -p ibl_demo` — roughness differences clearly visible
- [ ] All existing examples work (flat normal map + IBL)


@@ -0,0 +1,17 @@
[package]
name = "asset_demo"
version = "0.1.0"
edition = "2021"
[dependencies]
voltex_math.workspace = true
voltex_platform.workspace = true
voltex_renderer.workspace = true
voltex_ecs.workspace = true
voltex_asset.workspace = true
wgpu.workspace = true
winit.workspace = true
bytemuck.workspace = true
pollster.workspace = true
env_logger.workspace = true
log.workspace = true


@@ -0,0 +1,434 @@
use winit::{
application::ApplicationHandler,
event::WindowEvent,
event_loop::{ActiveEventLoop, EventLoop},
keyboard::{KeyCode, PhysicalKey},
window::WindowId,
};
use voltex_math::Vec3;
use voltex_platform::{VoltexWindow, WindowConfig, InputState, GameTimer};
use voltex_renderer::{
GpuContext, Camera, FpsController, CameraUniform, LightUniform, Mesh, GpuTexture, pipeline, obj,
};
use voltex_ecs::{World, Entity, Transform, propagate_transforms, WorldTransform};
use voltex_asset::{Assets, Handle};
use wgpu::util::DeviceExt;
/// Component: a handle into the asset system pointing at a Mesh.
struct MeshRef(#[allow(dead_code)] Handle<Mesh>);
const MAX_ENTITIES: usize = 1024;
struct AssetDemoApp {
state: Option<AppState>,
}
struct AppState {
window: VoltexWindow,
gpu: GpuContext,
pipeline: wgpu::RenderPipeline,
assets: Assets,
mesh_handle: Handle<Mesh>,
camera: Camera,
fps_controller: FpsController,
camera_uniform: CameraUniform,
light_uniform: LightUniform,
camera_buffer: wgpu::Buffer,
light_buffer: wgpu::Buffer,
camera_light_bind_group: wgpu::BindGroup,
_texture: GpuTexture,
input: InputState,
timer: GameTimer,
world: World,
time: f32,
uniform_alignment: u32,
r_was_pressed: bool,
}
fn camera_light_bind_group_layout(device: &wgpu::Device) -> wgpu::BindGroupLayout {
device.create_bind_group_layout(&wgpu::BindGroupLayoutDescriptor {
label: Some("Camera+Light Bind Group Layout"),
entries: &[
wgpu::BindGroupLayoutEntry {
binding: 0,
visibility: wgpu::ShaderStages::VERTEX | wgpu::ShaderStages::FRAGMENT,
ty: wgpu::BindingType::Buffer {
ty: wgpu::BufferBindingType::Uniform,
has_dynamic_offset: true,
min_binding_size: wgpu::BufferSize::new(
std::mem::size_of::<CameraUniform>() as u64
),
},
count: None,
},
wgpu::BindGroupLayoutEntry {
binding: 1,
visibility: wgpu::ShaderStages::FRAGMENT,
ty: wgpu::BindingType::Buffer {
ty: wgpu::BufferBindingType::Uniform,
has_dynamic_offset: false,
min_binding_size: None,
},
count: None,
},
],
})
}
impl ApplicationHandler for AssetDemoApp {
fn resumed(&mut self, event_loop: &ActiveEventLoop) {
let config = WindowConfig {
title: "Voltex - Asset Demo".to_string(),
width: 1280,
height: 720,
..Default::default()
};
let window = VoltexWindow::new(event_loop, &config);
let gpu = GpuContext::new(window.handle.clone());
// Dynamic uniform buffer alignment
let uniform_alignment = gpu.device.limits().min_uniform_buffer_offset_alignment;
let uniform_size = std::mem::size_of::<CameraUniform>() as u32;
let aligned_size = ((uniform_size + uniform_alignment - 1) / uniform_alignment) * uniform_alignment;
// Parse OBJ and create Mesh
let obj_src = include_str!("../../../assets/cube.obj");
let obj_data = obj::parse_obj(obj_src);
let mesh = Mesh::new(&gpu.device, &obj_data.vertices, &obj_data.indices);
// Insert mesh into asset system
let mut assets = Assets::new();
let mesh_handle = assets.insert(mesh);
// Camera: position (0, 10, 18), pitch=-0.4
let aspect = gpu.config.width as f32 / gpu.config.height as f32;
let mut camera = Camera::new(Vec3::new(0.0, 10.0, 18.0), aspect);
camera.pitch = -0.4;
let fps_controller = FpsController::new();
// Uniforms
let camera_uniform = CameraUniform::new();
let light_uniform = LightUniform::new();
// Dynamic uniform buffer: room for MAX_ENTITIES camera uniforms
let camera_buffer = gpu.device.create_buffer(&wgpu::BufferDescriptor {
label: Some("Camera Dynamic Uniform Buffer"),
size: (aligned_size as usize * MAX_ENTITIES) as u64,
usage: wgpu::BufferUsages::UNIFORM | wgpu::BufferUsages::COPY_DST,
mapped_at_creation: false,
});
let light_buffer = gpu.device.create_buffer_init(&wgpu::util::BufferInitDescriptor {
label: Some("Light Uniform Buffer"),
contents: bytemuck::cast_slice(&[light_uniform]),
usage: wgpu::BufferUsages::UNIFORM | wgpu::BufferUsages::COPY_DST,
});
// Bind group layouts
let cl_layout = camera_light_bind_group_layout(&gpu.device);
let tex_layout = GpuTexture::bind_group_layout(&gpu.device);
// Bind group
let camera_light_bind_group = gpu.device.create_bind_group(&wgpu::BindGroupDescriptor {
label: Some("Camera+Light Bind Group"),
layout: &cl_layout,
entries: &[
wgpu::BindGroupEntry {
binding: 0,
resource: wgpu::BindingResource::Buffer(wgpu::BufferBinding {
buffer: &camera_buffer,
offset: 0,
size: wgpu::BufferSize::new(std::mem::size_of::<CameraUniform>() as u64),
}),
},
wgpu::BindGroupEntry {
binding: 1,
resource: light_buffer.as_entire_binding(),
},
],
});
let texture = GpuTexture::white_1x1(&gpu.device, &gpu.queue, &tex_layout);
let render_pipeline = pipeline::create_mesh_pipeline(
&gpu.device,
gpu.surface_format,
&cl_layout,
&tex_layout,
);
// ECS: spawn 100 entities in a 10x10 grid, spacing 2.0
let mut world = World::new();
let spacing = 2.0_f32;
let offset = (10.0 - 1.0) * spacing * 0.5;
for row in 0..10 {
for col in 0..10 {
let x = col as f32 * spacing - offset;
let z = row as f32 * spacing - offset;
let entity = world.spawn();
world.add(entity, Transform::from_position(Vec3::new(x, 0.0, z)));
world.add(entity, MeshRef(mesh_handle));
}
}
self.state = Some(AppState {
window,
gpu,
pipeline: render_pipeline,
assets,
mesh_handle,
camera,
fps_controller,
camera_uniform,
light_uniform,
camera_buffer,
light_buffer,
camera_light_bind_group,
_texture: texture,
input: InputState::new(),
timer: GameTimer::new(60),
world,
time: 0.0,
uniform_alignment: aligned_size,
r_was_pressed: false,
});
}
fn window_event(
&mut self,
event_loop: &ActiveEventLoop,
_window_id: WindowId,
event: WindowEvent,
) {
let state = match &mut self.state {
Some(s) => s,
None => return,
};
match event {
WindowEvent::CloseRequested => event_loop.exit(),
WindowEvent::KeyboardInput {
event: winit::event::KeyEvent {
physical_key: PhysicalKey::Code(key_code),
state: key_state,
..
},
..
} => {
let pressed = key_state == winit::event::ElementState::Pressed;
state.input.process_key(key_code, pressed);
if key_code == KeyCode::Escape && pressed {
event_loop.exit();
}
}
WindowEvent::Resized(size) => {
state.gpu.resize(size.width, size.height);
if size.width > 0 && size.height > 0 {
state.camera.aspect = size.width as f32 / size.height as f32;
}
}
WindowEvent::CursorMoved { position, .. } => {
state.input.process_mouse_move(position.x, position.y);
}
WindowEvent::MouseInput { state: btn_state, button, .. } => {
let pressed = btn_state == winit::event::ElementState::Pressed;
state.input.process_mouse_button(button, pressed);
}
WindowEvent::MouseWheel { delta, .. } => {
let y = match delta {
winit::event::MouseScrollDelta::LineDelta(_, y) => y,
winit::event::MouseScrollDelta::PixelDelta(pos) => pos.y as f32,
};
state.input.process_scroll(y);
}
WindowEvent::RedrawRequested => {
state.timer.tick();
let dt = state.timer.frame_dt();
// Camera input
if state.input.is_mouse_button_pressed(winit::event::MouseButton::Right) {
let (dx, dy) = state.input.mouse_delta();
state.fps_controller.process_mouse(&mut state.camera, dx, dy);
}
let mut forward = 0.0f32;
let mut right = 0.0f32;
let mut up = 0.0f32;
if state.input.is_key_pressed(KeyCode::KeyW) { forward += 1.0; }
if state.input.is_key_pressed(KeyCode::KeyS) { forward -= 1.0; }
if state.input.is_key_pressed(KeyCode::KeyD) { right += 1.0; }
if state.input.is_key_pressed(KeyCode::KeyA) { right -= 1.0; }
if state.input.is_key_pressed(KeyCode::Space) { up += 1.0; }
if state.input.is_key_pressed(KeyCode::ShiftLeft) { up -= 1.0; }
state.fps_controller.process_movement(&mut state.camera, forward, right, up, dt);
// R key: remove the first 10 entities returned by the query (on press, not hold)
let r_pressed = state.input.is_key_pressed(KeyCode::KeyR);
if r_pressed && !state.r_was_pressed {
let entities: Vec<Entity> = state.world.query2::<Transform, MeshRef>()
.iter()
.map(|(e, _, _)| *e)
.collect();
let remove_count = entities.len().min(10);
for i in 0..remove_count {
state.world.despawn(entities[i]);
}
log::info!(
"Removed {} entities. Remaining: {}, Mesh assets: {}",
remove_count,
state.world.entity_count(),
state.assets.count::<Mesh>(),
);
}
state.r_was_pressed = r_pressed;
state.input.begin_frame();
state.time += dt;
// Propagate transforms to compute WorldTransform
propagate_transforms(&mut state.world);
// Update window title with entity and asset counts
let entity_count = state.world.query2::<WorldTransform, MeshRef>()
.len();
let mesh_count = state.assets.count::<Mesh>();
state.window.handle.set_title(&format!(
"Voltex - Asset Demo | Entities: {}, Mesh assets: {}",
entity_count, mesh_count,
));
// Pre-compute all entity uniforms and write to dynamic buffer
let view_proj = state.camera.view_projection();
let cam_pos = [
state.camera.position.x,
state.camera.position.y,
state.camera.position.z,
];
let entities = state.world.query2::<WorldTransform, MeshRef>();
let aligned = state.uniform_alignment as usize;
// Build staging data: one CameraUniform per entity, padded to alignment
let total_bytes = entities.len() * aligned;
let mut staging = vec![0u8; total_bytes];
for (i, (_, world_transform, _mesh_ref)) in entities.iter().enumerate() {
let mut uniform = state.camera_uniform;
uniform.view_proj = view_proj.cols;
uniform.camera_pos = cam_pos;
uniform.model = world_transform.0.cols;
let bytes = bytemuck::bytes_of(&uniform);
let offset = i * aligned;
staging[offset..offset + bytes.len()].copy_from_slice(bytes);
}
state.gpu.queue.write_buffer(&state.camera_buffer, 0, &staging);
// Write light uniform
state.gpu.queue.write_buffer(
&state.light_buffer,
0,
bytemuck::cast_slice(&[state.light_uniform]),
);
// Get mesh from asset system
let mesh = match state.assets.get(state.mesh_handle) {
Some(m) => m,
None => return,
};
// Render
let output = match state.gpu.surface.get_current_texture() {
Ok(t) => t,
Err(wgpu::SurfaceError::Lost) => {
let (w, h) = state.window.inner_size();
state.gpu.resize(w, h);
return;
}
Err(wgpu::SurfaceError::OutOfMemory) => {
event_loop.exit();
return;
}
Err(_) => return,
};
let view = output.texture.create_view(&wgpu::TextureViewDescriptor::default());
let mut encoder = state.gpu.device.create_command_encoder(
&wgpu::CommandEncoderDescriptor { label: Some("Render Encoder") },
);
{
let mut render_pass = encoder.begin_render_pass(&wgpu::RenderPassDescriptor {
label: Some("Render Pass"),
color_attachments: &[Some(wgpu::RenderPassColorAttachment {
view: &view,
resolve_target: None,
depth_slice: None,
ops: wgpu::Operations {
load: wgpu::LoadOp::Clear(wgpu::Color {
r: 0.1, g: 0.1, b: 0.15, a: 1.0,
}),
store: wgpu::StoreOp::Store,
},
})],
depth_stencil_attachment: Some(wgpu::RenderPassDepthStencilAttachment {
view: &state.gpu.depth_view,
depth_ops: Some(wgpu::Operations {
load: wgpu::LoadOp::Clear(1.0),
store: wgpu::StoreOp::Store,
}),
stencil_ops: None,
}),
occlusion_query_set: None,
timestamp_writes: None,
multiview_mask: None,
});
render_pass.set_pipeline(&state.pipeline);
render_pass.set_bind_group(1, &state._texture.bind_group, &[]);
render_pass.set_vertex_buffer(0, mesh.vertex_buffer.slice(..));
render_pass.set_index_buffer(
mesh.index_buffer.slice(..),
wgpu::IndexFormat::Uint32,
);
// Draw each entity with its dynamic offset
for (i, _) in entities.iter().enumerate() {
let dynamic_offset = (i as u32) * state.uniform_alignment;
render_pass.set_bind_group(
0,
&state.camera_light_bind_group,
&[dynamic_offset],
);
render_pass.draw_indexed(0..mesh.num_indices, 0, 0..1);
}
}
state.gpu.queue.submit(std::iter::once(encoder.finish()));
output.present();
}
_ => {}
}
}
fn about_to_wait(&mut self, _event_loop: &ActiveEventLoop) {
if let Some(state) = &self.state {
state.window.request_redraw();
}
}
}
fn main() {
env_logger::init();
let event_loop = EventLoop::new().unwrap();
let mut app = AssetDemoApp { state: None };
event_loop.run_app(&mut app).unwrap();
}


@@ -0,0 +1,16 @@
[package]
name = "hierarchy_demo"
version = "0.1.0"
edition = "2021"
[dependencies]
voltex_math.workspace = true
voltex_platform.workspace = true
voltex_renderer.workspace = true
voltex_ecs.workspace = true
wgpu.workspace = true
winit.workspace = true
bytemuck.workspace = true
pollster.workspace = true
env_logger.workspace = true
log.workspace = true


@@ -0,0 +1,489 @@
use winit::{
application::ApplicationHandler,
event::WindowEvent,
event_loop::{ActiveEventLoop, EventLoop},
keyboard::{KeyCode, PhysicalKey},
window::WindowId,
};
use voltex_math::{Vec3, Mat4};
use voltex_platform::{VoltexWindow, WindowConfig, InputState, GameTimer};
use voltex_renderer::{
GpuContext, Camera, FpsController, CameraUniform, LightUniform, Mesh, GpuTexture, pipeline, obj,
};
use voltex_ecs::{
World, Transform, Tag, Entity,
add_child, propagate_transforms, WorldTransform,
};
use wgpu::util::DeviceExt;
const MAX_ENTITIES: usize = 64;
/// Stores entity handle + orbit speed for animation.
struct OrbitalBody {
entity: Entity,
speed: f32,
}
struct HierarchyDemoApp {
state: Option<AppState>,
}
struct AppState {
window: VoltexWindow,
gpu: GpuContext,
pipeline: wgpu::RenderPipeline,
mesh: Mesh,
camera: Camera,
fps_controller: FpsController,
camera_uniform: CameraUniform,
light_uniform: LightUniform,
camera_buffer: wgpu::Buffer,
light_buffer: wgpu::Buffer,
camera_light_bind_group: wgpu::BindGroup,
_texture: GpuTexture,
input: InputState,
timer: GameTimer,
world: World,
bodies: Vec<OrbitalBody>,
time: f32,
uniform_alignment: u32,
}
fn camera_light_bind_group_layout(device: &wgpu::Device) -> wgpu::BindGroupLayout {
device.create_bind_group_layout(&wgpu::BindGroupLayoutDescriptor {
label: Some("Camera+Light Bind Group Layout"),
entries: &[
wgpu::BindGroupLayoutEntry {
binding: 0,
visibility: wgpu::ShaderStages::VERTEX | wgpu::ShaderStages::FRAGMENT,
ty: wgpu::BindingType::Buffer {
ty: wgpu::BufferBindingType::Uniform,
has_dynamic_offset: true,
min_binding_size: wgpu::BufferSize::new(
std::mem::size_of::<CameraUniform>() as u64
),
},
count: None,
},
wgpu::BindGroupLayoutEntry {
binding: 1,
visibility: wgpu::ShaderStages::FRAGMENT,
ty: wgpu::BindingType::Buffer {
ty: wgpu::BufferBindingType::Uniform,
has_dynamic_offset: false,
min_binding_size: None,
},
count: None,
},
],
})
}
impl ApplicationHandler for HierarchyDemoApp {
fn resumed(&mut self, event_loop: &ActiveEventLoop) {
let config = WindowConfig {
title: "Voltex - Hierarchy Demo (Solar System)".to_string(),
width: 1280,
height: 720,
..Default::default()
};
let window = VoltexWindow::new(event_loop, &config);
let gpu = GpuContext::new(window.handle.clone());
// Dynamic uniform buffer alignment
let uniform_alignment = gpu.device.limits().min_uniform_buffer_offset_alignment;
let uniform_size = std::mem::size_of::<CameraUniform>() as u32;
let aligned_size =
((uniform_size + uniform_alignment - 1) / uniform_alignment) * uniform_alignment;
// Parse OBJ
let obj_src = include_str!("../../../assets/cube.obj");
let obj_data = obj::parse_obj(obj_src);
let mesh = Mesh::new(&gpu.device, &obj_data.vertices, &obj_data.indices);
// Camera
let aspect = gpu.config.width as f32 / gpu.config.height as f32;
let mut camera = Camera::new(Vec3::new(0.0, 10.0, 20.0), aspect);
camera.pitch = -0.4;
let fps_controller = FpsController::new();
// Uniforms
let camera_uniform = CameraUniform::new();
let light_uniform = LightUniform::new();
// Dynamic uniform buffer: room for MAX_ENTITIES camera uniforms
let camera_buffer = gpu.device.create_buffer(&wgpu::BufferDescriptor {
label: Some("Camera Dynamic Uniform Buffer"),
size: (aligned_size as usize * MAX_ENTITIES) as u64,
usage: wgpu::BufferUsages::UNIFORM | wgpu::BufferUsages::COPY_DST,
mapped_at_creation: false,
});
let light_buffer = gpu.device.create_buffer_init(&wgpu::util::BufferInitDescriptor {
label: Some("Light Uniform Buffer"),
contents: bytemuck::cast_slice(&[light_uniform]),
usage: wgpu::BufferUsages::UNIFORM | wgpu::BufferUsages::COPY_DST,
});
// Bind group layouts
let cl_layout = camera_light_bind_group_layout(&gpu.device);
let tex_layout = GpuTexture::bind_group_layout(&gpu.device);
// Bind group: camera binding uses dynamic offset, size = one CameraUniform
let camera_light_bind_group = gpu.device.create_bind_group(&wgpu::BindGroupDescriptor {
label: Some("Camera+Light Bind Group"),
layout: &cl_layout,
entries: &[
wgpu::BindGroupEntry {
binding: 0,
resource: wgpu::BindingResource::Buffer(wgpu::BufferBinding {
buffer: &camera_buffer,
offset: 0,
size: wgpu::BufferSize::new(
std::mem::size_of::<CameraUniform>() as u64,
),
}),
},
wgpu::BindGroupEntry {
binding: 1,
resource: light_buffer.as_entire_binding(),
},
],
});
let texture = GpuTexture::white_1x1(&gpu.device, &gpu.queue, &tex_layout);
let render_pipeline = pipeline::create_mesh_pipeline(
&gpu.device,
gpu.surface_format,
&cl_layout,
&tex_layout,
);
// ---- ECS: Build solar system hierarchy ----
let mut world = World::new();
let mut bodies: Vec<OrbitalBody> = Vec::new();
// Sun: position(0,0,0), scale(1.5,1.5,1.5)
let sun = world.spawn();
world.add(
sun,
Transform::from_position_scale(Vec3::ZERO, Vec3::new(1.5, 1.5, 1.5)),
);
world.add(sun, Tag("sun".to_string()));
bodies.push(OrbitalBody { entity: sun, speed: 0.2 });
// Planet1: position(5,0,0), scale(0.5,0.5,0.5)
let planet1 = world.spawn();
world.add(
planet1,
Transform::from_position_scale(Vec3::new(5.0, 0.0, 0.0), Vec3::new(0.5, 0.5, 0.5)),
);
world.add(planet1, Tag("planet1".to_string()));
add_child(&mut world, sun, planet1);
bodies.push(OrbitalBody { entity: planet1, speed: 0.8 });
// Moon1: position(1.5,0,0), scale(0.3,0.3,0.3) — child of Planet1
let moon1 = world.spawn();
world.add(
moon1,
Transform::from_position_scale(
Vec3::new(1.5, 0.0, 0.0),
Vec3::new(0.3, 0.3, 0.3),
),
);
world.add(moon1, Tag("moon1".to_string()));
add_child(&mut world, planet1, moon1);
bodies.push(OrbitalBody { entity: moon1, speed: 2.0 });
// Planet2: position(9,0,0), scale(0.7,0.7,0.7)
let planet2 = world.spawn();
world.add(
planet2,
Transform::from_position_scale(Vec3::new(9.0, 0.0, 0.0), Vec3::new(0.7, 0.7, 0.7)),
);
world.add(planet2, Tag("planet2".to_string()));
add_child(&mut world, sun, planet2);
bodies.push(OrbitalBody { entity: planet2, speed: 0.5 });
// Planet3: position(13,0,0), scale(0.4,0.4,0.4)
let planet3 = world.spawn();
world.add(
planet3,
Transform::from_position_scale(
Vec3::new(13.0, 0.0, 0.0),
Vec3::new(0.4, 0.4, 0.4),
),
);
world.add(planet3, Tag("planet3".to_string()));
add_child(&mut world, sun, planet3);
bodies.push(OrbitalBody { entity: planet3, speed: 0.3 });
self.state = Some(AppState {
window,
gpu,
pipeline: render_pipeline,
mesh,
camera,
fps_controller,
camera_uniform,
light_uniform,
camera_buffer,
light_buffer,
camera_light_bind_group,
_texture: texture,
input: InputState::new(),
timer: GameTimer::new(60),
world,
bodies,
time: 0.0,
uniform_alignment: aligned_size,
});
}
fn window_event(
&mut self,
event_loop: &ActiveEventLoop,
_window_id: WindowId,
event: WindowEvent,
) {
let state = match &mut self.state {
Some(s) => s,
None => return,
};
match event {
WindowEvent::CloseRequested => event_loop.exit(),
WindowEvent::KeyboardInput {
event:
winit::event::KeyEvent {
physical_key: PhysicalKey::Code(key_code),
state: key_state,
..
},
..
} => {
let pressed = key_state == winit::event::ElementState::Pressed;
state.input.process_key(key_code, pressed);
if pressed {
match key_code {
KeyCode::Escape => event_loop.exit(),
KeyCode::KeyP => {
// Print serialized scene to stdout
let scene_str =
voltex_ecs::serialize_scene(&state.world);
println!("--- Scene Snapshot ---\n{}", scene_str);
}
_ => {}
}
}
}
WindowEvent::Resized(size) => {
state.gpu.resize(size.width, size.height);
if size.width > 0 && size.height > 0 {
state.camera.aspect = size.width as f32 / size.height as f32;
}
}
WindowEvent::CursorMoved { position, .. } => {
state.input.process_mouse_move(position.x, position.y);
}
WindowEvent::MouseInput {
state: btn_state,
button,
..
} => {
let pressed = btn_state == winit::event::ElementState::Pressed;
state.input.process_mouse_button(button, pressed);
}
WindowEvent::MouseWheel { delta, .. } => {
let y = match delta {
winit::event::MouseScrollDelta::LineDelta(_, y) => y,
winit::event::MouseScrollDelta::PixelDelta(pos) => pos.y as f32,
};
state.input.process_scroll(y);
}
WindowEvent::RedrawRequested => {
state.timer.tick();
let dt = state.timer.frame_dt();
// ---- Input / Camera ----
if state
.input
.is_mouse_button_pressed(winit::event::MouseButton::Right)
{
let (dx, dy) = state.input.mouse_delta();
state.fps_controller.process_mouse(&mut state.camera, dx, dy);
}
let mut forward = 0.0f32;
let mut right = 0.0f32;
let mut up = 0.0f32;
if state.input.is_key_pressed(KeyCode::KeyW) { forward += 1.0; }
if state.input.is_key_pressed(KeyCode::KeyS) { forward -= 1.0; }
if state.input.is_key_pressed(KeyCode::KeyD) { right += 1.0; }
if state.input.is_key_pressed(KeyCode::KeyA) { right -= 1.0; }
if state.input.is_key_pressed(KeyCode::Space) { up += 1.0; }
if state.input.is_key_pressed(KeyCode::ShiftLeft) { up -= 1.0; }
state
.fps_controller
.process_movement(&mut state.camera, forward, right, up, dt);
state.input.begin_frame();
state.time += dt;
// ---- Animate: rotate each body around Y ----
for body in &state.bodies {
if let Some(t) = state.world.get_mut::<Transform>(body.entity) {
t.rotation.y += dt * body.speed;
}
}
// ---- Propagate hierarchy transforms ----
propagate_transforms(&mut state.world);
// ---- Build per-entity uniforms using WorldTransform ----
let view_proj = state.camera.view_projection();
let cam_pos = [
state.camera.position.x,
state.camera.position.y,
state.camera.position.z,
];
let world_transforms: Vec<(Entity, Mat4)> = state
.world
.query::<WorldTransform>()
.map(|(e, wt)| (e, wt.0))
.collect();
let aligned = state.uniform_alignment as usize;
let total_bytes = world_transforms.len() * aligned;
let mut staging = vec![0u8; total_bytes];
for (i, (_entity, world_mat)) in world_transforms.iter().enumerate() {
let mut uniform = state.camera_uniform;
uniform.view_proj = view_proj.cols;
uniform.camera_pos = cam_pos;
uniform.model = world_mat.cols;
let bytes = bytemuck::bytes_of(&uniform);
let offset = i * aligned;
staging[offset..offset + bytes.len()].copy_from_slice(bytes);
}
state
.gpu
.queue
.write_buffer(&state.camera_buffer, 0, &staging);
// Write light uniform
state.gpu.queue.write_buffer(
&state.light_buffer,
0,
bytemuck::cast_slice(&[state.light_uniform]),
);
// ---- Render ----
let output = match state.gpu.surface.get_current_texture() {
Ok(t) => t,
Err(wgpu::SurfaceError::Lost) => {
let (w, h) = state.window.inner_size();
state.gpu.resize(w, h);
return;
}
Err(wgpu::SurfaceError::OutOfMemory) => {
event_loop.exit();
return;
}
Err(_) => return,
};
let view = output
.texture
.create_view(&wgpu::TextureViewDescriptor::default());
let mut encoder = state.gpu.device.create_command_encoder(
&wgpu::CommandEncoderDescriptor {
label: Some("Render Encoder"),
},
);
{
let mut render_pass =
encoder.begin_render_pass(&wgpu::RenderPassDescriptor {
label: Some("Render Pass"),
color_attachments: &[Some(wgpu::RenderPassColorAttachment {
view: &view,
resolve_target: None,
depth_slice: None,
ops: wgpu::Operations {
load: wgpu::LoadOp::Clear(wgpu::Color {
r: 0.02,
g: 0.02,
b: 0.05,
a: 1.0,
}),
store: wgpu::StoreOp::Store,
},
})],
depth_stencil_attachment: Some(
wgpu::RenderPassDepthStencilAttachment {
view: &state.gpu.depth_view,
depth_ops: Some(wgpu::Operations {
load: wgpu::LoadOp::Clear(1.0),
store: wgpu::StoreOp::Store,
}),
stencil_ops: None,
},
),
occlusion_query_set: None,
timestamp_writes: None,
multiview_mask: None,
});
render_pass.set_pipeline(&state.pipeline);
render_pass.set_bind_group(1, &state._texture.bind_group, &[]);
render_pass.set_vertex_buffer(0, state.mesh.vertex_buffer.slice(..));
render_pass.set_index_buffer(
state.mesh.index_buffer.slice(..),
wgpu::IndexFormat::Uint32,
);
// Draw each entity with its dynamic offset
for (i, _) in world_transforms.iter().enumerate() {
let dynamic_offset = (i as u32) * state.uniform_alignment;
render_pass.set_bind_group(
0,
&state.camera_light_bind_group,
&[dynamic_offset],
);
render_pass.draw_indexed(0..state.mesh.num_indices, 0, 0..1);
}
}
state.gpu.queue.submit(std::iter::once(encoder.finish()));
output.present();
}
_ => {}
}
}
fn about_to_wait(&mut self, _event_loop: &ActiveEventLoop) {
if let Some(state) = &self.state {
state.window.request_redraw();
}
}
}
fn main() {
env_logger::init();
let event_loop = EventLoop::new().unwrap();
let mut app = HierarchyDemoApp { state: None };
event_loop.run_app(&mut app).unwrap();
}
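The dynamic-offset scheme used by these demos packs one padded `CameraUniform` per entity into a single buffer, which hinges on rounding the uniform size up to the device's `min_uniform_buffer_offset_alignment`. A minimal standalone sketch of that round-up and the resulting per-entity offsets (the 144-byte size and 256-byte alignment are illustrative assumptions, not the engine's actual values):

```rust
/// Round `size` up to the next multiple of `alignment`, as required for
/// dynamic uniform buffer offsets (wgpu reports this limit as
/// `min_uniform_buffer_offset_alignment`, commonly 256).
fn align_up(size: u32, alignment: u32) -> u32 {
    ((size + alignment - 1) / alignment) * alignment
}

fn main() {
    // Assume a 144-byte uniform and a 256-byte alignment requirement.
    let aligned = align_up(144, 256);
    assert_eq!(aligned, 256);
    // Entity i's slice in the shared staging buffer starts at i * aligned,
    // which is exactly the value passed as the dynamic offset at draw time.
    let offsets: Vec<u32> = (0..4).map(|i| i * aligned).collect();
    assert_eq!(offsets, vec![0, 256, 512, 768]);
    // Already-aligned sizes are left unchanged.
    assert_eq!(align_up(512, 256), 512);
}
```

The same formula appears inline in the hierarchy demo and as the `align_up` helper in the IBL demo below; the bind group's `size` stays one unpadded uniform while the buffer itself holds `aligned * MAX_ENTITIES` bytes.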


@@ -0,0 +1,15 @@
[package]
name = "ibl_demo"
version = "0.1.0"
edition = "2021"
[dependencies]
voltex_math.workspace = true
voltex_platform.workspace = true
voltex_renderer.workspace = true
wgpu.workspace = true
winit.workspace = true
bytemuck.workspace = true
pollster.workspace = true
env_logger.workspace = true
log.workspace = true


@@ -0,0 +1,534 @@
use winit::{
application::ApplicationHandler,
event::WindowEvent,
event_loop::{ActiveEventLoop, EventLoop},
keyboard::{KeyCode, PhysicalKey},
window::WindowId,
};
use voltex_math::{Vec3, Mat4};
use voltex_platform::{VoltexWindow, WindowConfig, InputState, GameTimer};
use voltex_renderer::{
GpuContext, Camera, FpsController, CameraUniform, LightsUniform, LightData,
Mesh, GpuTexture, MaterialUniform, generate_sphere, create_pbr_pipeline,
ShadowMap, ShadowUniform,
IblResources, pbr_texture_bind_group_layout, create_pbr_texture_bind_group,
};
use wgpu::util::DeviceExt;
const GRID_SIZE: usize = 7;
const NUM_SPHERES: usize = GRID_SIZE * GRID_SIZE;
const SPACING: f32 = 1.2;
struct IblDemoApp {
state: Option<AppState>,
}
struct AppState {
window: VoltexWindow,
gpu: GpuContext,
pipeline: wgpu::RenderPipeline,
mesh: Mesh,
camera: Camera,
fps_controller: FpsController,
camera_buffer: wgpu::Buffer,
light_buffer: wgpu::Buffer,
material_buffer: wgpu::Buffer,
camera_light_bind_group: wgpu::BindGroup,
_albedo_tex: GpuTexture,
_normal_tex: (wgpu::Texture, wgpu::TextureView, wgpu::Sampler),
pbr_texture_bind_group: wgpu::BindGroup,
material_bind_group: wgpu::BindGroup,
shadow_bind_group: wgpu::BindGroup,
_shadow_map: ShadowMap,
_ibl: IblResources,
input: InputState,
timer: GameTimer,
cam_aligned_size: u32,
mat_aligned_size: u32,
}
fn camera_light_bind_group_layout(device: &wgpu::Device) -> wgpu::BindGroupLayout {
device.create_bind_group_layout(&wgpu::BindGroupLayoutDescriptor {
label: Some("Camera+Light Bind Group Layout"),
entries: &[
wgpu::BindGroupLayoutEntry {
binding: 0,
visibility: wgpu::ShaderStages::VERTEX | wgpu::ShaderStages::FRAGMENT,
ty: wgpu::BindingType::Buffer {
ty: wgpu::BufferBindingType::Uniform,
has_dynamic_offset: true,
min_binding_size: wgpu::BufferSize::new(
std::mem::size_of::<CameraUniform>() as u64,
),
},
count: None,
},
wgpu::BindGroupLayoutEntry {
binding: 1,
visibility: wgpu::ShaderStages::FRAGMENT,
ty: wgpu::BindingType::Buffer {
ty: wgpu::BufferBindingType::Uniform,
has_dynamic_offset: false,
min_binding_size: None,
},
count: None,
},
],
})
}
fn align_up(size: u32, alignment: u32) -> u32 {
((size + alignment - 1) / alignment) * alignment
}
impl ApplicationHandler for IblDemoApp {
fn resumed(&mut self, event_loop: &ActiveEventLoop) {
let config = WindowConfig {
title: "Voltex - IBL Demo".to_string(),
width: 1280,
height: 720,
..Default::default()
};
let window = VoltexWindow::new(event_loop, &config);
let gpu = GpuContext::new(window.handle.clone());
// Dynamic uniform buffer alignment
let alignment = gpu.device.limits().min_uniform_buffer_offset_alignment;
let cam_aligned_size = align_up(std::mem::size_of::<CameraUniform>() as u32, alignment);
let mat_aligned_size = align_up(std::mem::size_of::<MaterialUniform>() as u32, alignment);
// Generate sphere mesh
let (vertices, indices) = generate_sphere(0.4, 32, 16);
let mesh = Mesh::new(&gpu.device, &vertices, &indices);
// Camera at (0, 0, 12) looking toward origin
let aspect = gpu.config.width as f32 / gpu.config.height as f32;
let camera = Camera::new(Vec3::new(0.0, 0.0, 12.0), aspect);
let fps_controller = FpsController::new();
// Mild directional light — IBL provides the primary illumination
let mut lights_uniform = LightsUniform::new();
lights_uniform.add_light(LightData::directional(
[-1.0, -1.0, -1.0],
[1.0, 1.0, 1.0],
1.0,
));
// Camera dynamic uniform buffer (one CameraUniform per sphere)
let camera_buffer = gpu.device.create_buffer(&wgpu::BufferDescriptor {
label: Some("Camera Dynamic Uniform Buffer"),
size: (cam_aligned_size as usize * NUM_SPHERES) as u64,
usage: wgpu::BufferUsages::UNIFORM | wgpu::BufferUsages::COPY_DST,
mapped_at_creation: false,
});
let light_buffer = gpu.device.create_buffer_init(&wgpu::util::BufferInitDescriptor {
label: Some("Light Uniform Buffer"),
contents: bytemuck::cast_slice(&[lights_uniform]),
usage: wgpu::BufferUsages::UNIFORM | wgpu::BufferUsages::COPY_DST,
});
// Material dynamic uniform buffer (one MaterialUniform per sphere)
let material_buffer = gpu.device.create_buffer(&wgpu::BufferDescriptor {
label: Some("Material Dynamic Uniform Buffer"),
size: (mat_aligned_size as usize * NUM_SPHERES) as u64,
usage: wgpu::BufferUsages::UNIFORM | wgpu::BufferUsages::COPY_DST,
mapped_at_creation: false,
});
// Bind group layouts
let cl_layout = camera_light_bind_group_layout(&gpu.device);
let pbr_tex_layout = pbr_texture_bind_group_layout(&gpu.device);
let mat_layout = MaterialUniform::bind_group_layout(&gpu.device);
// Camera+Light bind group
let camera_light_bind_group = gpu.device.create_bind_group(&wgpu::BindGroupDescriptor {
label: Some("Camera+Light Bind Group"),
layout: &cl_layout,
entries: &[
wgpu::BindGroupEntry {
binding: 0,
resource: wgpu::BindingResource::Buffer(wgpu::BufferBinding {
buffer: &camera_buffer,
offset: 0,
size: wgpu::BufferSize::new(
std::mem::size_of::<CameraUniform>() as u64,
),
}),
},
wgpu::BindGroupEntry {
binding: 1,
resource: light_buffer.as_entire_binding(),
},
],
});
// PBR texture bind group (albedo + normal)
let old_tex_layout = GpuTexture::bind_group_layout(&gpu.device);
let albedo_tex = GpuTexture::white_1x1(&gpu.device, &gpu.queue, &old_tex_layout);
let normal_tex = GpuTexture::flat_normal_1x1(&gpu.device, &gpu.queue);
let pbr_texture_bind_group = create_pbr_texture_bind_group(
&gpu.device,
&pbr_tex_layout,
&albedo_tex.view,
&albedo_tex.sampler,
&normal_tex.1,
&normal_tex.2,
);
// IBL resources
let ibl = IblResources::new(&gpu.device, &gpu.queue);
// Material bind group
let material_bind_group = gpu.device.create_bind_group(&wgpu::BindGroupDescriptor {
label: Some("Material Bind Group"),
layout: &mat_layout,
entries: &[wgpu::BindGroupEntry {
binding: 0,
resource: wgpu::BindingResource::Buffer(wgpu::BufferBinding {
buffer: &material_buffer,
offset: 0,
size: wgpu::BufferSize::new(
std::mem::size_of::<MaterialUniform>() as u64,
),
}),
}],
});
// Shadow resources (dummy — shadows disabled)
let shadow_map = ShadowMap::new(&gpu.device);
let shadow_layout = ShadowMap::bind_group_layout(&gpu.device);
let shadow_uniform = ShadowUniform {
light_view_proj: [[0.0; 4]; 4],
shadow_map_size: 0.0,
shadow_bias: 0.0,
_padding: [0.0; 2],
};
let shadow_uniform_buffer = gpu.device.create_buffer_init(&wgpu::util::BufferInitDescriptor {
label: Some("Shadow Uniform Buffer"),
contents: bytemuck::cast_slice(&[shadow_uniform]),
usage: wgpu::BufferUsages::UNIFORM,
});
let shadow_bind_group = shadow_map.create_bind_group(
&gpu.device,
&shadow_layout,
&shadow_uniform_buffer,
&ibl.brdf_lut_view,
&ibl.brdf_lut_sampler,
);
// PBR pipeline
let pipeline = create_pbr_pipeline(
&gpu.device,
gpu.surface_format,
&cl_layout,
&pbr_tex_layout,
&mat_layout,
&shadow_layout,
);
self.state = Some(AppState {
window,
gpu,
pipeline,
mesh,
camera,
fps_controller,
camera_buffer,
light_buffer,
material_buffer,
camera_light_bind_group,
_albedo_tex: albedo_tex,
_normal_tex: normal_tex,
pbr_texture_bind_group,
material_bind_group,
shadow_bind_group,
_shadow_map: shadow_map,
_ibl: ibl,
input: InputState::new(),
timer: GameTimer::new(60),
cam_aligned_size,
mat_aligned_size,
});
}
fn window_event(
&mut self,
event_loop: &ActiveEventLoop,
_window_id: WindowId,
event: WindowEvent,
) {
let state = match &mut self.state {
Some(s) => s,
None => return,
};
match event {
WindowEvent::CloseRequested => event_loop.exit(),
WindowEvent::KeyboardInput {
event:
winit::event::KeyEvent {
physical_key: PhysicalKey::Code(key_code),
state: key_state,
..
},
..
} => {
let pressed = key_state == winit::event::ElementState::Pressed;
state.input.process_key(key_code, pressed);
if key_code == KeyCode::Escape && pressed {
event_loop.exit();
}
}
WindowEvent::Resized(size) => {
state.gpu.resize(size.width, size.height);
if size.width > 0 && size.height > 0 {
state.camera.aspect = size.width as f32 / size.height as f32;
}
}
WindowEvent::CursorMoved { position, .. } => {
state.input.process_mouse_move(position.x, position.y);
}
WindowEvent::MouseInput {
state: btn_state,
button,
..
} => {
let pressed = btn_state == winit::event::ElementState::Pressed;
state.input.process_mouse_button(button, pressed);
}
WindowEvent::MouseWheel { delta, .. } => {
let y = match delta {
winit::event::MouseScrollDelta::LineDelta(_, y) => y,
winit::event::MouseScrollDelta::PixelDelta(pos) => pos.y as f32,
};
state.input.process_scroll(y);
}
WindowEvent::RedrawRequested => {
state.timer.tick();
let dt = state.timer.frame_dt();
// Camera input
if state
.input
.is_mouse_button_pressed(winit::event::MouseButton::Right)
{
let (dx, dy) = state.input.mouse_delta();
state
.fps_controller
.process_mouse(&mut state.camera, dx, dy);
}
let mut forward = 0.0f32;
let mut right = 0.0f32;
let mut up = 0.0f32;
if state.input.is_key_pressed(KeyCode::KeyW) {
forward += 1.0;
}
if state.input.is_key_pressed(KeyCode::KeyS) {
forward -= 1.0;
}
if state.input.is_key_pressed(KeyCode::KeyD) {
right += 1.0;
}
if state.input.is_key_pressed(KeyCode::KeyA) {
right -= 1.0;
}
if state.input.is_key_pressed(KeyCode::Space) {
up += 1.0;
}
if state.input.is_key_pressed(KeyCode::ShiftLeft) {
up -= 1.0;
}
state
.fps_controller
.process_movement(&mut state.camera, forward, right, up, dt);
state.input.begin_frame();
// Compute view-projection
let view_proj = state.camera.view_projection();
let cam_pos = [
state.camera.position.x,
state.camera.position.y,
state.camera.position.z,
];
let cam_aligned = state.cam_aligned_size as usize;
let mat_aligned = state.mat_aligned_size as usize;
// Build staging data for camera and material uniforms
let cam_total = NUM_SPHERES * cam_aligned;
let mat_total = NUM_SPHERES * mat_aligned;
let mut cam_staging = vec![0u8; cam_total];
let mut mat_staging = vec![0u8; mat_total];
let half_grid = (GRID_SIZE as f32 - 1.0) * SPACING * 0.5;
for row in 0..GRID_SIZE {
for col in 0..GRID_SIZE {
let i = row * GRID_SIZE + col;
let x = col as f32 * SPACING - half_grid;
let y = row as f32 * SPACING - half_grid;
// Camera uniform: view_proj + model (translation) + camera_pos
let model = Mat4::translation(x, y, 0.0);
let cam_uniform = CameraUniform {
view_proj: view_proj.cols,
model: model.cols,
camera_pos: cam_pos,
_padding: 0.0,
};
let bytes = bytemuck::bytes_of(&cam_uniform);
let offset = i * cam_aligned;
cam_staging[offset..offset + bytes.len()].copy_from_slice(bytes);
// Material: metallic varies with col, roughness with row
// Reddish base color for IBL visibility
let metallic = col as f32 / (GRID_SIZE as f32 - 1.0);
let roughness =
0.05 + row as f32 * (0.95 / (GRID_SIZE as f32 - 1.0));
let mat_uniform = MaterialUniform::with_params(
[0.8, 0.2, 0.2, 1.0],
metallic,
roughness,
);
let bytes = bytemuck::bytes_of(&mat_uniform);
let offset = i * mat_aligned;
mat_staging[offset..offset + bytes.len()].copy_from_slice(bytes);
}
}
state
.gpu
.queue
.write_buffer(&state.camera_buffer, 0, &cam_staging);
state
.gpu
.queue
.write_buffer(&state.material_buffer, 0, &mat_staging);
// Light uniform is static (mild directional; IBL provides ambient) and
// was already uploaded at init via create_buffer_init, so no per-frame
// rebuild or write_buffer call is needed here.
// Render
let output = match state.gpu.surface.get_current_texture() {
Ok(t) => t,
Err(wgpu::SurfaceError::Lost) => {
let (w, h) = state.window.inner_size();
state.gpu.resize(w, h);
return;
}
Err(wgpu::SurfaceError::OutOfMemory) => {
event_loop.exit();
return;
}
Err(_) => return,
};
let view = output
.texture
.create_view(&wgpu::TextureViewDescriptor::default());
let mut encoder = state.gpu.device.create_command_encoder(
&wgpu::CommandEncoderDescriptor {
label: Some("Render Encoder"),
},
);
{
let mut render_pass =
encoder.begin_render_pass(&wgpu::RenderPassDescriptor {
label: Some("IBL Render Pass"),
color_attachments: &[Some(wgpu::RenderPassColorAttachment {
view: &view,
resolve_target: None,
depth_slice: None,
ops: wgpu::Operations {
load: wgpu::LoadOp::Clear(wgpu::Color {
r: 0.1,
g: 0.1,
b: 0.15,
a: 1.0,
}),
store: wgpu::StoreOp::Store,
},
})],
depth_stencil_attachment: Some(
wgpu::RenderPassDepthStencilAttachment {
view: &state.gpu.depth_view,
depth_ops: Some(wgpu::Operations {
load: wgpu::LoadOp::Clear(1.0),
store: wgpu::StoreOp::Store,
}),
stencil_ops: None,
},
),
occlusion_query_set: None,
timestamp_writes: None,
multiview_mask: None,
});
render_pass.set_pipeline(&state.pipeline);
render_pass.set_bind_group(1, &state.pbr_texture_bind_group, &[]);
render_pass.set_bind_group(3, &state.shadow_bind_group, &[]);
render_pass.set_vertex_buffer(0, state.mesh.vertex_buffer.slice(..));
render_pass.set_index_buffer(
state.mesh.index_buffer.slice(..),
wgpu::IndexFormat::Uint32,
);
// Draw each sphere with dynamic offsets for camera and material
for i in 0..NUM_SPHERES {
let cam_offset = (i as u32) * state.cam_aligned_size;
let mat_offset = (i as u32) * state.mat_aligned_size;
render_pass.set_bind_group(
0,
&state.camera_light_bind_group,
&[cam_offset],
);
render_pass.set_bind_group(
2,
&state.material_bind_group,
&[mat_offset],
);
render_pass.draw_indexed(0..state.mesh.num_indices, 0, 0..1);
}
}
state.gpu.queue.submit(std::iter::once(encoder.finish()));
output.present();
}
_ => {}
}
}
fn about_to_wait(&mut self, _event_loop: &ActiveEventLoop) {
if let Some(state) = &self.state {
state.window.request_redraw();
}
}
}
fn main() {
env_logger::init();
let event_loop = EventLoop::new().unwrap();
let mut app = IblDemoApp { state: None };
event_loop.run_app(&mut app).unwrap();
}
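The 7x7 sphere grid above sweeps metallic across columns and roughness across rows. The mapping is easy to check in isolation; a small sketch reusing the demo's own formulas (`GRID_SIZE` and the 0.05..=1.0 roughness range copied from the code above):

```rust
const GRID_SIZE: usize = 7;

/// Per-sphere material parameters: metallic goes 0..=1 left to right,
/// roughness goes 0.05..=1.0 bottom row to top row.
fn grid_material(row: usize, col: usize) -> (f32, f32) {
    let metallic = col as f32 / (GRID_SIZE as f32 - 1.0);
    let roughness = 0.05 + row as f32 * (0.95 / (GRID_SIZE as f32 - 1.0));
    (metallic, roughness)
}

fn main() {
    // The grid corners cover the full parameter space: a fully rough
    // dielectric at one corner, a mirror-like metal near the other.
    let (m, r) = grid_material(0, 0);
    assert!(m.abs() < 1e-6 && (r - 0.05).abs() < 1e-6);
    let (m, r) = grid_material(GRID_SIZE - 1, GRID_SIZE - 1);
    assert!((m - 1.0).abs() < 1e-6 && (r - 1.0).abs() < 1e-6);
}
```

Keeping roughness away from exactly 0.0 at the smooth end (hence the 0.05 floor) avoids the degenerate specular lobe that a zero-roughness GGX term produces.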


@@ -0,0 +1,16 @@
[package]
name = "many_cubes"
version = "0.1.0"
edition = "2021"
[dependencies]
voltex_math.workspace = true
voltex_platform.workspace = true
voltex_renderer.workspace = true
voltex_ecs.workspace = true
wgpu.workspace = true
winit.workspace = true
bytemuck.workspace = true
pollster.workspace = true
env_logger.workspace = true
log.workspace = true


@@ -0,0 +1,390 @@
use winit::{
application::ApplicationHandler,
event::WindowEvent,
event_loop::{ActiveEventLoop, EventLoop},
keyboard::{KeyCode, PhysicalKey},
window::WindowId,
};
use voltex_math::{Vec3, Mat4};
use voltex_platform::{VoltexWindow, WindowConfig, InputState, GameTimer};
use voltex_renderer::{
GpuContext, Camera, FpsController, CameraUniform, LightUniform, Mesh, GpuTexture, pipeline, obj,
};
use voltex_ecs::{World, Transform};
use wgpu::util::DeviceExt;
/// App-level component: index into the mesh list.
struct MeshHandle(#[allow(dead_code)] u32);
const MAX_ENTITIES: usize = 1024;
struct ManyCubesApp {
state: Option<AppState>,
}
struct AppState {
window: VoltexWindow,
gpu: GpuContext,
pipeline: wgpu::RenderPipeline,
mesh: Mesh,
camera: Camera,
fps_controller: FpsController,
camera_uniform: CameraUniform,
light_uniform: LightUniform,
camera_buffer: wgpu::Buffer,
light_buffer: wgpu::Buffer,
camera_light_bind_group: wgpu::BindGroup,
_texture: GpuTexture,
input: InputState,
timer: GameTimer,
world: World,
time: f32,
uniform_alignment: u32,
}
fn camera_light_bind_group_layout(device: &wgpu::Device) -> wgpu::BindGroupLayout {
device.create_bind_group_layout(&wgpu::BindGroupLayoutDescriptor {
label: Some("Camera+Light Bind Group Layout"),
entries: &[
wgpu::BindGroupLayoutEntry {
binding: 0,
visibility: wgpu::ShaderStages::VERTEX | wgpu::ShaderStages::FRAGMENT,
ty: wgpu::BindingType::Buffer {
ty: wgpu::BufferBindingType::Uniform,
has_dynamic_offset: true,
min_binding_size: wgpu::BufferSize::new(
std::mem::size_of::<CameraUniform>() as u64
),
},
count: None,
},
wgpu::BindGroupLayoutEntry {
binding: 1,
visibility: wgpu::ShaderStages::FRAGMENT,
ty: wgpu::BindingType::Buffer {
ty: wgpu::BufferBindingType::Uniform,
has_dynamic_offset: false,
min_binding_size: None,
},
count: None,
},
],
})
}
impl ApplicationHandler for ManyCubesApp {
fn resumed(&mut self, event_loop: &ActiveEventLoop) {
let config = WindowConfig {
title: "Voltex - Many Cubes (ECS)".to_string(),
width: 1280,
height: 720,
..Default::default()
};
let window = VoltexWindow::new(event_loop, &config);
let gpu = GpuContext::new(window.handle.clone());
// Dynamic uniform buffer alignment
let uniform_alignment = gpu.device.limits().min_uniform_buffer_offset_alignment;
let uniform_size = std::mem::size_of::<CameraUniform>() as u32;
let aligned_size = ((uniform_size + uniform_alignment - 1) / uniform_alignment) * uniform_alignment;
// Parse OBJ
let obj_src = include_str!("../../../assets/cube.obj");
let obj_data = obj::parse_obj(obj_src);
let mesh = Mesh::new(&gpu.device, &obj_data.vertices, &obj_data.indices);
// Camera
let aspect = gpu.config.width as f32 / gpu.config.height as f32;
let mut camera = Camera::new(Vec3::new(0.0, 15.0, 25.0), aspect);
camera.pitch = -0.5;
let fps_controller = FpsController::new();
// Uniforms
let camera_uniform = CameraUniform::new();
let light_uniform = LightUniform::new();
// Dynamic uniform buffer: room for MAX_ENTITIES camera uniforms
let camera_buffer = gpu.device.create_buffer(&wgpu::BufferDescriptor {
label: Some("Camera Dynamic Uniform Buffer"),
size: (aligned_size as usize * MAX_ENTITIES) as u64,
usage: wgpu::BufferUsages::UNIFORM | wgpu::BufferUsages::COPY_DST,
mapped_at_creation: false,
});
let light_buffer = gpu.device.create_buffer_init(&wgpu::util::BufferInitDescriptor {
label: Some("Light Uniform Buffer"),
contents: bytemuck::cast_slice(&[light_uniform]),
usage: wgpu::BufferUsages::UNIFORM | wgpu::BufferUsages::COPY_DST,
});
// Bind group layouts
let cl_layout = camera_light_bind_group_layout(&gpu.device);
let tex_layout = GpuTexture::bind_group_layout(&gpu.device);
// Bind group: camera binding uses dynamic offset, size = one CameraUniform
let camera_light_bind_group = gpu.device.create_bind_group(&wgpu::BindGroupDescriptor {
label: Some("Camera+Light Bind Group"),
layout: &cl_layout,
entries: &[
wgpu::BindGroupEntry {
binding: 0,
resource: wgpu::BindingResource::Buffer(wgpu::BufferBinding {
buffer: &camera_buffer,
offset: 0,
size: wgpu::BufferSize::new(std::mem::size_of::<CameraUniform>() as u64),
}),
},
wgpu::BindGroupEntry {
binding: 1,
resource: light_buffer.as_entire_binding(),
},
],
});
let texture = GpuTexture::white_1x1(&gpu.device, &gpu.queue, &tex_layout);
let render_pipeline = pipeline::create_mesh_pipeline(
&gpu.device,
gpu.surface_format,
&cl_layout,
&tex_layout,
);
// ECS: spawn 400 entities in a 20x20 grid
let mut world = World::new();
let spacing = 1.5_f32;
let offset = (20.0 - 1.0) * spacing * 0.5;
for row in 0..20 {
for col in 0..20 {
let x = col as f32 * spacing - offset;
let z = row as f32 * spacing - offset;
let entity = world.spawn();
world.add(entity, Transform::from_position(Vec3::new(x, 0.0, z)));
world.add(entity, MeshHandle(0));
}
}
self.state = Some(AppState {
window,
gpu,
pipeline: render_pipeline,
mesh,
camera,
fps_controller,
camera_uniform,
light_uniform,
camera_buffer,
light_buffer,
camera_light_bind_group,
_texture: texture,
input: InputState::new(),
timer: GameTimer::new(60),
world,
time: 0.0,
uniform_alignment: aligned_size,
});
}
fn window_event(
&mut self,
event_loop: &ActiveEventLoop,
_window_id: WindowId,
event: WindowEvent,
) {
let state = match &mut self.state {
Some(s) => s,
None => return,
};
match event {
WindowEvent::CloseRequested => event_loop.exit(),
WindowEvent::KeyboardInput {
event: winit::event::KeyEvent {
physical_key: PhysicalKey::Code(key_code),
state: key_state,
..
},
..
} => {
let pressed = key_state == winit::event::ElementState::Pressed;
state.input.process_key(key_code, pressed);
if key_code == KeyCode::Escape && pressed {
event_loop.exit();
}
}
WindowEvent::Resized(size) => {
state.gpu.resize(size.width, size.height);
if size.width > 0 && size.height > 0 {
state.camera.aspect = size.width as f32 / size.height as f32;
}
}
WindowEvent::CursorMoved { position, .. } => {
state.input.process_mouse_move(position.x, position.y);
}
WindowEvent::MouseInput { state: btn_state, button, .. } => {
let pressed = btn_state == winit::event::ElementState::Pressed;
state.input.process_mouse_button(button, pressed);
}
WindowEvent::MouseWheel { delta, .. } => {
let y = match delta {
winit::event::MouseScrollDelta::LineDelta(_, y) => y,
winit::event::MouseScrollDelta::PixelDelta(pos) => pos.y as f32,
};
state.input.process_scroll(y);
}
WindowEvent::RedrawRequested => {
state.timer.tick();
let dt = state.timer.frame_dt();
// Input
if state.input.is_mouse_button_pressed(winit::event::MouseButton::Right) {
let (dx, dy) = state.input.mouse_delta();
state.fps_controller.process_mouse(&mut state.camera, dx, dy);
}
let mut forward = 0.0f32;
let mut right = 0.0f32;
let mut up = 0.0f32;
if state.input.is_key_pressed(KeyCode::KeyW) { forward += 1.0; }
if state.input.is_key_pressed(KeyCode::KeyS) { forward -= 1.0; }
if state.input.is_key_pressed(KeyCode::KeyD) { right += 1.0; }
if state.input.is_key_pressed(KeyCode::KeyA) { right -= 1.0; }
if state.input.is_key_pressed(KeyCode::Space) { up += 1.0; }
if state.input.is_key_pressed(KeyCode::ShiftLeft) { up -= 1.0; }
state.fps_controller.process_movement(&mut state.camera, forward, right, up, dt);
state.input.begin_frame();
state.time += dt;
// Pre-compute all entity uniforms and write to dynamic buffer
let view_proj = state.camera.view_projection();
let cam_pos = [
state.camera.position.x,
state.camera.position.y,
state.camera.position.z,
];
let entities = state.world.query2::<Transform, MeshHandle>();
let aligned = state.uniform_alignment as usize;
// Build staging data: one CameraUniform per entity, padded to alignment
let total_bytes = entities.len() * aligned;
let mut staging = vec![0u8; total_bytes];
for (i, (_, transform, _)) in entities.iter().enumerate() {
let mut uniform = state.camera_uniform;
uniform.view_proj = view_proj.cols;
uniform.camera_pos = cam_pos;
let mut t = **transform;
t.rotation.y = state.time * 0.5 + t.position.x * 0.1 + t.position.z * 0.1;
uniform.model = t.matrix().cols;
let bytes = bytemuck::bytes_of(&uniform);
let offset = i * aligned;
staging[offset..offset + bytes.len()].copy_from_slice(bytes);
}
state.gpu.queue.write_buffer(&state.camera_buffer, 0, &staging);
// Write light uniform
state.gpu.queue.write_buffer(
&state.light_buffer,
0,
bytemuck::cast_slice(&[state.light_uniform]),
);
// Render
let output = match state.gpu.surface.get_current_texture() {
Ok(t) => t,
Err(wgpu::SurfaceError::Lost) => {
let (w, h) = state.window.inner_size();
state.gpu.resize(w, h);
return;
}
Err(wgpu::SurfaceError::OutOfMemory) => {
event_loop.exit();
return;
}
Err(_) => return,
};
let view = output.texture.create_view(&wgpu::TextureViewDescriptor::default());
let mut encoder = state.gpu.device.create_command_encoder(
&wgpu::CommandEncoderDescriptor { label: Some("Render Encoder") },
);
{
let mut render_pass = encoder.begin_render_pass(&wgpu::RenderPassDescriptor {
label: Some("Render Pass"),
color_attachments: &[Some(wgpu::RenderPassColorAttachment {
view: &view,
resolve_target: None,
depth_slice: None,
ops: wgpu::Operations {
load: wgpu::LoadOp::Clear(wgpu::Color {
r: 0.1, g: 0.1, b: 0.15, a: 1.0,
}),
store: wgpu::StoreOp::Store,
},
})],
depth_stencil_attachment: Some(wgpu::RenderPassDepthStencilAttachment {
view: &state.gpu.depth_view,
depth_ops: Some(wgpu::Operations {
load: wgpu::LoadOp::Clear(1.0),
store: wgpu::StoreOp::Store,
}),
stencil_ops: None,
}),
occlusion_query_set: None,
timestamp_writes: None,
multiview_mask: None,
});
render_pass.set_pipeline(&state.pipeline);
render_pass.set_bind_group(1, &state._texture.bind_group, &[]);
render_pass.set_vertex_buffer(0, state.mesh.vertex_buffer.slice(..));
render_pass.set_index_buffer(
state.mesh.index_buffer.slice(..),
wgpu::IndexFormat::Uint32,
);
// Draw each entity with its dynamic offset
for (i, _) in entities.iter().enumerate() {
let dynamic_offset = (i as u32) * state.uniform_alignment;
render_pass.set_bind_group(
0,
&state.camera_light_bind_group,
&[dynamic_offset],
);
render_pass.draw_indexed(0..state.mesh.num_indices, 0, 0..1);
}
}
state.gpu.queue.submit(std::iter::once(encoder.finish()));
output.present();
}
_ => {}
}
}
fn about_to_wait(&mut self, _event_loop: &ActiveEventLoop) {
if let Some(state) = &self.state {
state.window.request_redraw();
}
}
}
fn main() {
env_logger::init();
let event_loop = EventLoop::new().unwrap();
let mut app = ManyCubesApp { state: None };
event_loop.run_app(&mut app).unwrap();
}
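The staging loop above packs one `CameraUniform` per entity at offsets padded to the device's `min_uniform_buffer_offset_alignment`, then binds each with a single dynamic offset. A minimal standalone sketch of that round-up and offset arithmetic (the 256-byte alignment and 144-byte uniform size are hypothetical example values, not queried from a real device):

```rust
// Round `size` up to the next multiple of `alignment`.
// wgpu guarantees the alignment limit is a power of two (commonly 256).
fn align_up(size: u32, alignment: u32) -> u32 {
    ((size + alignment - 1) / alignment) * alignment
}

fn main() {
    let alignment = 256u32; // hypothetical min_uniform_buffer_offset_alignment
    let uniform_size = 144u32; // hypothetical size_of::<CameraUniform>()
    let aligned = align_up(uniform_size, alignment);
    assert_eq!(aligned, 256);
    // The dynamic offset for entity i is simply i * aligned size:
    assert_eq!(3 * aligned, 768);
    // An exact multiple is left unchanged:
    assert_eq!(align_up(512, 256), 512);
    println!("aligned = {aligned}");
}
```

This is why the staging `Vec<u8>` is `entities.len() * aligned` bytes rather than `entities.len() * size_of::<CameraUniform>()`: the gap between the uniform's true size and its aligned slot is padding the GPU never reads.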


@@ -0,0 +1,15 @@
[package]
name = "model_viewer"
version = "0.1.0"
edition = "2021"
[dependencies]
voltex_math.workspace = true
voltex_platform.workspace = true
voltex_renderer.workspace = true
wgpu.workspace = true
winit.workspace = true
bytemuck.workspace = true
pollster.workspace = true
env_logger.workspace = true
log.workspace = true


@@ -0,0 +1,337 @@
use winit::{
application::ApplicationHandler,
event::WindowEvent,
event_loop::{ActiveEventLoop, EventLoop},
keyboard::{KeyCode, PhysicalKey},
window::WindowId,
};
use voltex_math::{Vec3, Mat4};
use voltex_platform::{VoltexWindow, WindowConfig, InputState, GameTimer};
use voltex_renderer::{GpuContext, Camera, FpsController, CameraUniform, LightUniform, Mesh, GpuTexture, pipeline, obj};
use wgpu::util::DeviceExt;
struct ModelViewerApp {
state: Option<AppState>,
}
struct AppState {
window: VoltexWindow,
gpu: GpuContext,
pipeline: wgpu::RenderPipeline,
mesh: Mesh,
camera: Camera,
fps_controller: FpsController,
camera_uniform: CameraUniform,
light_uniform: LightUniform,
camera_buffer: wgpu::Buffer,
light_buffer: wgpu::Buffer,
camera_light_bind_group: wgpu::BindGroup,
_texture: GpuTexture,
input: InputState,
timer: GameTimer,
angle: f32,
}
fn camera_light_bind_group_layout(device: &wgpu::Device) -> wgpu::BindGroupLayout {
device.create_bind_group_layout(&wgpu::BindGroupLayoutDescriptor {
label: Some("Camera+Light Bind Group Layout"),
entries: &[
wgpu::BindGroupLayoutEntry {
binding: 0,
visibility: wgpu::ShaderStages::VERTEX | wgpu::ShaderStages::FRAGMENT,
ty: wgpu::BindingType::Buffer {
ty: wgpu::BufferBindingType::Uniform,
has_dynamic_offset: false,
min_binding_size: None,
},
count: None,
},
wgpu::BindGroupLayoutEntry {
binding: 1,
visibility: wgpu::ShaderStages::FRAGMENT,
ty: wgpu::BindingType::Buffer {
ty: wgpu::BufferBindingType::Uniform,
has_dynamic_offset: false,
min_binding_size: None,
},
count: None,
},
],
})
}
impl ApplicationHandler for ModelViewerApp {
fn resumed(&mut self, event_loop: &ActiveEventLoop) {
let config = WindowConfig {
title: "Voltex - Model Viewer".to_string(),
width: 1280,
height: 720,
..Default::default()
};
let window = VoltexWindow::new(event_loop, &config);
let gpu = GpuContext::new(window.handle.clone());
// Parse OBJ
let obj_src = include_str!("../../../assets/cube.obj");
let obj_data = obj::parse_obj(obj_src);
let mesh = Mesh::new(&gpu.device, &obj_data.vertices, &obj_data.indices);
// Camera at (0, 1, 3) looking toward origin
let aspect = gpu.config.width as f32 / gpu.config.height as f32;
let mut camera = Camera::new(Vec3::new(0.0, 1.0, 3.0), aspect);
// Aim the camera at the origin: yaw = 0 already faces (0, 0, -1), so from
// (0, 1, 3) we only need to pitch down along the direction to the origin.
let dir = Vec3::new(0.0, -1.0, -3.0);
camera.yaw = dir.x.atan2(-dir.z);
camera.pitch = (dir.y / (dir.x * dir.x + dir.y * dir.y + dir.z * dir.z).sqrt()).asin();
let fps_controller = FpsController::new();
// Uniforms
let camera_uniform = CameraUniform::new();
let light_uniform = LightUniform::new();
let camera_buffer = gpu.device.create_buffer_init(&wgpu::util::BufferInitDescriptor {
label: Some("Camera Uniform Buffer"),
contents: bytemuck::cast_slice(&[camera_uniform]),
usage: wgpu::BufferUsages::UNIFORM | wgpu::BufferUsages::COPY_DST,
});
let light_buffer = gpu.device.create_buffer_init(&wgpu::util::BufferInitDescriptor {
label: Some("Light Uniform Buffer"),
contents: bytemuck::cast_slice(&[light_uniform]),
usage: wgpu::BufferUsages::UNIFORM | wgpu::BufferUsages::COPY_DST,
});
// Bind group layouts
let cl_layout = camera_light_bind_group_layout(&gpu.device);
let tex_layout = GpuTexture::bind_group_layout(&gpu.device);
// Bind groups
let camera_light_bind_group = gpu.device.create_bind_group(&wgpu::BindGroupDescriptor {
label: Some("Camera+Light Bind Group"),
layout: &cl_layout,
entries: &[
wgpu::BindGroupEntry {
binding: 0,
resource: camera_buffer.as_entire_binding(),
},
wgpu::BindGroupEntry {
binding: 1,
resource: light_buffer.as_entire_binding(),
},
],
});
// Default white texture
let texture = GpuTexture::white_1x1(&gpu.device, &gpu.queue, &tex_layout);
// Pipeline
let render_pipeline = pipeline::create_mesh_pipeline(
&gpu.device,
gpu.surface_format,
&cl_layout,
&tex_layout,
);
self.state = Some(AppState {
window,
gpu,
pipeline: render_pipeline,
mesh,
camera,
fps_controller,
camera_uniform,
light_uniform,
camera_buffer,
light_buffer,
camera_light_bind_group,
_texture: texture,
input: InputState::new(),
timer: GameTimer::new(60),
angle: 0.0,
});
}
fn window_event(
&mut self,
event_loop: &ActiveEventLoop,
_window_id: WindowId,
event: WindowEvent,
) {
let state = match &mut self.state {
Some(s) => s,
None => return,
};
match event {
WindowEvent::CloseRequested => event_loop.exit(),
WindowEvent::KeyboardInput {
event: winit::event::KeyEvent {
physical_key: PhysicalKey::Code(key_code),
state: key_state,
..
},
..
} => {
let pressed = key_state == winit::event::ElementState::Pressed;
state.input.process_key(key_code, pressed);
if key_code == KeyCode::Escape && pressed {
event_loop.exit();
}
}
WindowEvent::Resized(size) => {
state.gpu.resize(size.width, size.height);
if size.width > 0 && size.height > 0 {
state.camera.aspect = size.width as f32 / size.height as f32;
}
}
WindowEvent::CursorMoved { position, .. } => {
state.input.process_mouse_move(position.x, position.y);
}
WindowEvent::MouseInput { state: btn_state, button, .. } => {
let pressed = btn_state == winit::event::ElementState::Pressed;
state.input.process_mouse_button(button, pressed);
}
WindowEvent::MouseWheel { delta, .. } => {
let y = match delta {
winit::event::MouseScrollDelta::LineDelta(_, y) => y,
winit::event::MouseScrollDelta::PixelDelta(pos) => pos.y as f32,
};
state.input.process_scroll(y);
}
WindowEvent::RedrawRequested => {
// 1. Tick timer
state.timer.tick();
let dt = state.timer.frame_dt();
// 2. Read input state BEFORE begin_frame clears it
// Camera rotation via right-click drag
if state.input.is_mouse_button_pressed(winit::event::MouseButton::Right) {
let (dx, dy) = state.input.mouse_delta();
state.fps_controller.process_mouse(&mut state.camera, dx, dy);
}
// WASD movement
let mut forward = 0.0f32;
let mut right = 0.0f32;
let mut up = 0.0f32;
if state.input.is_key_pressed(KeyCode::KeyW) { forward += 1.0; }
if state.input.is_key_pressed(KeyCode::KeyS) { forward -= 1.0; }
if state.input.is_key_pressed(KeyCode::KeyD) { right += 1.0; }
if state.input.is_key_pressed(KeyCode::KeyA) { right -= 1.0; }
if state.input.is_key_pressed(KeyCode::Space) { up += 1.0; }
if state.input.is_key_pressed(KeyCode::ShiftLeft) { up -= 1.0; }
state.fps_controller.process_movement(&mut state.camera, forward, right, up, dt);
// 3. Clear per-frame input state for next frame
state.input.begin_frame();
// 4. Auto-rotate model
state.angle += dt * 0.5;
// Update uniforms
state.camera_uniform.view_proj = state.camera.view_projection().cols;
state.camera_uniform.model = Mat4::rotation_y(state.angle).cols;
state.camera_uniform.camera_pos = [
state.camera.position.x,
state.camera.position.y,
state.camera.position.z,
];
state.gpu.queue.write_buffer(
&state.camera_buffer,
0,
bytemuck::cast_slice(&[state.camera_uniform]),
);
state.gpu.queue.write_buffer(
&state.light_buffer,
0,
bytemuck::cast_slice(&[state.light_uniform]),
);
// 5. Render
let output = match state.gpu.surface.get_current_texture() {
Ok(t) => t,
Err(wgpu::SurfaceError::Lost) => {
let (w, h) = state.window.inner_size();
state.gpu.resize(w, h);
return;
}
Err(wgpu::SurfaceError::OutOfMemory) => {
event_loop.exit();
return;
}
Err(_) => return,
};
let view = output.texture.create_view(&wgpu::TextureViewDescriptor::default());
let mut encoder = state.gpu.device.create_command_encoder(
&wgpu::CommandEncoderDescriptor { label: Some("Render Encoder") },
);
{
let mut render_pass = encoder.begin_render_pass(&wgpu::RenderPassDescriptor {
label: Some("Render Pass"),
color_attachments: &[Some(wgpu::RenderPassColorAttachment {
view: &view,
resolve_target: None,
depth_slice: None,
ops: wgpu::Operations {
load: wgpu::LoadOp::Clear(wgpu::Color {
r: 0.1,
g: 0.1,
b: 0.15,
a: 1.0,
}),
store: wgpu::StoreOp::Store,
},
})],
depth_stencil_attachment: Some(wgpu::RenderPassDepthStencilAttachment {
view: &state.gpu.depth_view,
depth_ops: Some(wgpu::Operations {
load: wgpu::LoadOp::Clear(1.0),
store: wgpu::StoreOp::Store,
}),
stencil_ops: None,
}),
occlusion_query_set: None,
timestamp_writes: None,
multiview_mask: None,
});
render_pass.set_pipeline(&state.pipeline);
render_pass.set_bind_group(0, &state.camera_light_bind_group, &[]);
render_pass.set_bind_group(1, &state._texture.bind_group, &[]);
render_pass.set_vertex_buffer(0, state.mesh.vertex_buffer.slice(..));
render_pass.set_index_buffer(state.mesh.index_buffer.slice(..), wgpu::IndexFormat::Uint32);
render_pass.draw_indexed(0..state.mesh.num_indices, 0, 0..1);
}
state.gpu.queue.submit(std::iter::once(encoder.finish()));
output.present();
}
_ => {}
}
}
fn about_to_wait(&mut self, _event_loop: &ActiveEventLoop) {
if let Some(state) = &self.state {
state.window.request_redraw();
}
}
}
fn main() {
env_logger::init();
let event_loop = EventLoop::new().unwrap();
let mut app = ModelViewerApp { state: None };
event_loop.run_app(&mut app).unwrap();
}
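The viewer above derives the camera's yaw and pitch from a desired look direction: `yaw = x.atan2(-z)` (so yaw = 0 faces -Z) and `pitch = asin(y / |dir|)`. A self-contained sketch of that conversion under the same convention (the helper name is illustrative, not part of the engine's API):

```rust
// Derive FPS-camera yaw/pitch from a look direction, matching the viewer's
// convention: yaw = 0 faces -Z, pitch is elevation above the horizon.
fn yaw_pitch_from_dir(dir: [f32; 3]) -> (f32, f32) {
    let [x, y, z] = dir;
    let len = (x * x + y * y + z * z).sqrt();
    let yaw = x.atan2(-z);
    let pitch = (y / len).asin();
    (yaw, pitch)
}

fn main() {
    // Camera at (0, 1, 3) looking at the origin: direction (0, -1, -3).
    let (yaw, pitch) = yaw_pitch_from_dir([0.0, -1.0, -3.0]);
    assert!(yaw.abs() < 1e-6); // already facing -Z, no turn needed
    assert!(pitch < 0.0); // looking slightly down
    // A direction along +X turns the camera a quarter turn:
    let (yaw_x, _) = yaw_pitch_from_dir([1.0, 0.0, 0.0]);
    assert!((yaw_x - std::f32::consts::FRAC_PI_2).abs() < 1e-6);
}
```

Note that the pitch formula divides by the full vector length, not the horizontal length, which is why `asin` (rather than `atan2` of y over horizontal distance) is the right inverse here.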


@@ -0,0 +1,15 @@
[package]
name = "multi_light_demo"
version = "0.1.0"
edition = "2021"
[dependencies]
voltex_math.workspace = true
voltex_platform.workspace = true
voltex_renderer.workspace = true
wgpu.workspace = true
winit.workspace = true
bytemuck.workspace = true
pollster.workspace = true
env_logger.workspace = true
log.workspace = true


@@ -0,0 +1,611 @@
use winit::{
application::ApplicationHandler,
event::WindowEvent,
event_loop::{ActiveEventLoop, EventLoop},
keyboard::{KeyCode, PhysicalKey},
window::WindowId,
};
use voltex_math::{Vec3, Mat4};
use voltex_platform::{VoltexWindow, WindowConfig, InputState, GameTimer};
use voltex_renderer::{
GpuContext, Camera, FpsController, CameraUniform, LightsUniform, LightData,
Mesh, GpuTexture, MaterialUniform, generate_sphere, create_pbr_pipeline, obj,
ShadowMap, ShadowUniform,
IblResources, pbr_texture_bind_group_layout, create_pbr_texture_bind_group,
};
use wgpu::util::DeviceExt;
const NUM_OBJECTS: usize = 6; // 5 spheres + 1 ground plane
struct MultiLightApp {
state: Option<AppState>,
}
struct AppState {
window: VoltexWindow,
gpu: GpuContext,
pipeline: wgpu::RenderPipeline,
sphere_mesh: Mesh,
ground_mesh: Mesh,
camera: Camera,
fps_controller: FpsController,
camera_buffer: wgpu::Buffer,
light_buffer: wgpu::Buffer,
material_buffer: wgpu::Buffer,
camera_light_bind_group: wgpu::BindGroup,
_albedo_tex: GpuTexture,
_normal_tex: (wgpu::Texture, wgpu::TextureView, wgpu::Sampler),
pbr_texture_bind_group: wgpu::BindGroup,
material_bind_group: wgpu::BindGroup,
shadow_bind_group: wgpu::BindGroup,
_shadow_map: ShadowMap,
_ibl: IblResources,
input: InputState,
timer: GameTimer,
cam_aligned_size: u32,
mat_aligned_size: u32,
time: f32,
}
fn camera_light_bind_group_layout(device: &wgpu::Device) -> wgpu::BindGroupLayout {
device.create_bind_group_layout(&wgpu::BindGroupLayoutDescriptor {
label: Some("Camera+Light Bind Group Layout"),
entries: &[
wgpu::BindGroupLayoutEntry {
binding: 0,
visibility: wgpu::ShaderStages::VERTEX | wgpu::ShaderStages::FRAGMENT,
ty: wgpu::BindingType::Buffer {
ty: wgpu::BufferBindingType::Uniform,
has_dynamic_offset: true,
min_binding_size: wgpu::BufferSize::new(
std::mem::size_of::<CameraUniform>() as u64,
),
},
count: None,
},
wgpu::BindGroupLayoutEntry {
binding: 1,
visibility: wgpu::ShaderStages::FRAGMENT,
ty: wgpu::BindingType::Buffer {
ty: wgpu::BufferBindingType::Uniform,
has_dynamic_offset: false,
min_binding_size: None,
},
count: None,
},
],
})
}
fn align_up(size: u32, alignment: u32) -> u32 {
((size + alignment - 1) / alignment) * alignment
}
impl ApplicationHandler for MultiLightApp {
fn resumed(&mut self, event_loop: &ActiveEventLoop) {
let config = WindowConfig {
title: "Voltex - Multi-Light Demo".to_string(),
width: 1280,
height: 720,
..Default::default()
};
let window = VoltexWindow::new(event_loop, &config);
let gpu = GpuContext::new(window.handle.clone());
// Dynamic uniform buffer alignment
let alignment = gpu.device.limits().min_uniform_buffer_offset_alignment;
let cam_aligned_size = align_up(std::mem::size_of::<CameraUniform>() as u32, alignment);
let mat_aligned_size = align_up(std::mem::size_of::<MaterialUniform>() as u32, alignment);
// Generate sphere mesh (shared by all 5 spheres)
let (vertices, indices) = generate_sphere(0.5, 32, 16);
let sphere_mesh = Mesh::new(&gpu.device, &vertices, &indices);
// Ground plane: cube.obj scaled to (10, 0.1, 10)
let obj_src = include_str!("../../../assets/cube.obj");
let obj_data = obj::parse_obj(obj_src);
let ground_mesh = Mesh::new(&gpu.device, &obj_data.vertices, &obj_data.indices);
// Camera at (0, 5, 12), looking down slightly
let aspect = gpu.config.width as f32 / gpu.config.height as f32;
let mut camera = Camera::new(Vec3::new(0.0, 5.0, 12.0), aspect);
camera.pitch = -0.3;
let fps_controller = FpsController::new();
// Initial lights uniform
let mut lights_uniform = LightsUniform::new();
lights_uniform.add_light(LightData::directional([0.0, -1.0, -0.5], [1.0, 1.0, 1.0], 0.3));
// Point lights at initial positions (will be updated per frame)
lights_uniform.add_light(LightData::point([5.0, 2.0, 0.0], [1.0, 0.0, 0.0], 15.0, 15.0));
lights_uniform.add_light(LightData::point([0.0, 2.0, 5.0], [0.0, 1.0, 0.0], 15.0, 15.0));
lights_uniform.add_light(LightData::point([-5.0, 2.0, 0.0], [0.0, 0.0, 1.0], 15.0, 15.0));
lights_uniform.add_light(LightData::point([0.0, 2.0, -5.0], [1.0, 1.0, 0.0], 15.0, 15.0));
lights_uniform.add_light(LightData::spot(
[0.0, 5.0, 0.0], [0.0, -1.0, 0.0], [1.0, 1.0, 1.0],
20.0, 10.0, 20.0, 35.0,
));
// Camera dynamic uniform buffer (one CameraUniform per object)
let camera_buffer = gpu.device.create_buffer(&wgpu::BufferDescriptor {
label: Some("Camera Dynamic Uniform Buffer"),
size: (cam_aligned_size as usize * NUM_OBJECTS) as u64,
usage: wgpu::BufferUsages::UNIFORM | wgpu::BufferUsages::COPY_DST,
mapped_at_creation: false,
});
let light_buffer = gpu.device.create_buffer_init(&wgpu::util::BufferInitDescriptor {
label: Some("Light Uniform Buffer"),
contents: bytemuck::cast_slice(&[lights_uniform]),
usage: wgpu::BufferUsages::UNIFORM | wgpu::BufferUsages::COPY_DST,
});
// Material dynamic uniform buffer (one MaterialUniform per object)
let material_buffer = gpu.device.create_buffer(&wgpu::BufferDescriptor {
label: Some("Material Dynamic Uniform Buffer"),
size: (mat_aligned_size as usize * NUM_OBJECTS) as u64,
usage: wgpu::BufferUsages::UNIFORM | wgpu::BufferUsages::COPY_DST,
mapped_at_creation: false,
});
// Bind group layouts
let cl_layout = camera_light_bind_group_layout(&gpu.device);
let pbr_tex_layout = pbr_texture_bind_group_layout(&gpu.device);
let mat_layout = MaterialUniform::bind_group_layout(&gpu.device);
// Camera+Light bind group
let camera_light_bind_group = gpu.device.create_bind_group(&wgpu::BindGroupDescriptor {
label: Some("Camera+Light Bind Group"),
layout: &cl_layout,
entries: &[
wgpu::BindGroupEntry {
binding: 0,
resource: wgpu::BindingResource::Buffer(wgpu::BufferBinding {
buffer: &camera_buffer,
offset: 0,
size: wgpu::BufferSize::new(
std::mem::size_of::<CameraUniform>() as u64,
),
}),
},
wgpu::BindGroupEntry {
binding: 1,
resource: light_buffer.as_entire_binding(),
},
],
});
// PBR texture bind group (albedo + normal)
let old_tex_layout = GpuTexture::bind_group_layout(&gpu.device);
let albedo_tex = GpuTexture::white_1x1(&gpu.device, &gpu.queue, &old_tex_layout);
let normal_tex = GpuTexture::flat_normal_1x1(&gpu.device, &gpu.queue);
let pbr_texture_bind_group = create_pbr_texture_bind_group(
&gpu.device,
&pbr_tex_layout,
&albedo_tex.view,
&albedo_tex.sampler,
&normal_tex.1,
&normal_tex.2,
);
// IBL resources
let ibl = IblResources::new(&gpu.device, &gpu.queue);
// Material bind group
let material_bind_group = gpu.device.create_bind_group(&wgpu::BindGroupDescriptor {
label: Some("Material Bind Group"),
layout: &mat_layout,
entries: &[wgpu::BindGroupEntry {
binding: 0,
resource: wgpu::BindingResource::Buffer(wgpu::BufferBinding {
buffer: &material_buffer,
offset: 0,
size: wgpu::BufferSize::new(
std::mem::size_of::<MaterialUniform>() as u64,
),
}),
}],
});
// Shadow resources (dummy — shadows disabled)
let shadow_map = ShadowMap::new(&gpu.device);
let shadow_layout = ShadowMap::bind_group_layout(&gpu.device);
let shadow_uniform = ShadowUniform {
light_view_proj: [[0.0; 4]; 4],
shadow_map_size: 0.0,
shadow_bias: 0.0,
_padding: [0.0; 2],
};
let shadow_uniform_buffer = gpu.device.create_buffer_init(&wgpu::util::BufferInitDescriptor {
label: Some("Shadow Uniform Buffer"),
contents: bytemuck::cast_slice(&[shadow_uniform]),
usage: wgpu::BufferUsages::UNIFORM,
});
let shadow_bind_group = shadow_map.create_bind_group(
&gpu.device,
&shadow_layout,
&shadow_uniform_buffer,
&ibl.brdf_lut_view,
&ibl.brdf_lut_sampler,
);
// PBR pipeline
let pipeline = create_pbr_pipeline(
&gpu.device,
gpu.surface_format,
&cl_layout,
&pbr_tex_layout,
&mat_layout,
&shadow_layout,
);
self.state = Some(AppState {
window,
gpu,
pipeline,
sphere_mesh,
ground_mesh,
camera,
fps_controller,
camera_buffer,
light_buffer,
material_buffer,
camera_light_bind_group,
_albedo_tex: albedo_tex,
_normal_tex: normal_tex,
pbr_texture_bind_group,
material_bind_group,
shadow_bind_group,
_shadow_map: shadow_map,
_ibl: ibl,
input: InputState::new(),
timer: GameTimer::new(60),
cam_aligned_size,
mat_aligned_size,
time: 0.0,
});
}
fn window_event(
&mut self,
event_loop: &ActiveEventLoop,
_window_id: WindowId,
event: WindowEvent,
) {
let state = match &mut self.state {
Some(s) => s,
None => return,
};
match event {
WindowEvent::CloseRequested => event_loop.exit(),
WindowEvent::KeyboardInput {
event:
winit::event::KeyEvent {
physical_key: PhysicalKey::Code(key_code),
state: key_state,
..
},
..
} => {
let pressed = key_state == winit::event::ElementState::Pressed;
state.input.process_key(key_code, pressed);
if key_code == KeyCode::Escape && pressed {
event_loop.exit();
}
}
WindowEvent::Resized(size) => {
state.gpu.resize(size.width, size.height);
if size.width > 0 && size.height > 0 {
state.camera.aspect = size.width as f32 / size.height as f32;
}
}
WindowEvent::CursorMoved { position, .. } => {
state.input.process_mouse_move(position.x, position.y);
}
WindowEvent::MouseInput {
state: btn_state,
button,
..
} => {
let pressed = btn_state == winit::event::ElementState::Pressed;
state.input.process_mouse_button(button, pressed);
}
WindowEvent::MouseWheel { delta, .. } => {
let y = match delta {
winit::event::MouseScrollDelta::LineDelta(_, y) => y,
winit::event::MouseScrollDelta::PixelDelta(pos) => pos.y as f32,
};
state.input.process_scroll(y);
}
WindowEvent::RedrawRequested => {
state.timer.tick();
let dt = state.timer.frame_dt();
// Camera input
if state.input.is_mouse_button_pressed(winit::event::MouseButton::Right) {
let (dx, dy) = state.input.mouse_delta();
state.fps_controller.process_mouse(&mut state.camera, dx, dy);
}
let mut forward = 0.0f32;
let mut right = 0.0f32;
let mut up = 0.0f32;
if state.input.is_key_pressed(KeyCode::KeyW) { forward += 1.0; }
if state.input.is_key_pressed(KeyCode::KeyS) { forward -= 1.0; }
if state.input.is_key_pressed(KeyCode::KeyD) { right += 1.0; }
if state.input.is_key_pressed(KeyCode::KeyA) { right -= 1.0; }
if state.input.is_key_pressed(KeyCode::Space) { up += 1.0; }
if state.input.is_key_pressed(KeyCode::ShiftLeft) { up -= 1.0; }
state.fps_controller.process_movement(&mut state.camera, forward, right, up, dt);
state.input.begin_frame();
state.time += dt;
// Compute view-projection
let view_proj = state.camera.view_projection();
let cam_pos = [
state.camera.position.x,
state.camera.position.y,
state.camera.position.z,
];
let cam_aligned = state.cam_aligned_size as usize;
let mat_aligned = state.mat_aligned_size as usize;
// Build staging data for camera and material uniforms
let cam_total = NUM_OBJECTS * cam_aligned;
let mat_total = NUM_OBJECTS * mat_aligned;
let mut cam_staging = vec![0u8; cam_total];
let mut mat_staging = vec![0u8; mat_total];
// Object layout: indices 0..4 = spheres, index 5 = ground plane
// Spheres at y=0, x = [-4, -2, 0, 2, 4], metallic varies 0.0..1.0
for i in 0..5usize {
let x = -4.0 + i as f32 * 2.0;
let model = Mat4::translation(x, 0.0, 0.0);
let cam_uniform = CameraUniform {
view_proj: view_proj.cols,
model: model.cols,
camera_pos: cam_pos,
_padding: 0.0,
};
let bytes = bytemuck::bytes_of(&cam_uniform);
let offset = i * cam_aligned;
cam_staging[offset..offset + bytes.len()].copy_from_slice(bytes);
let metallic = i as f32 / 4.0;
let mat_uniform = MaterialUniform::with_params(
[0.8, 0.2, 0.2, 1.0],
metallic,
0.3,
);
let bytes = bytemuck::bytes_of(&mat_uniform);
let offset = i * mat_aligned;
mat_staging[offset..offset + bytes.len()].copy_from_slice(bytes);
}
// Ground plane at y=-0.5, scale (10, 0.1, 10)
{
let i = 5;
let model = Mat4::translation(0.0, -0.5, 0.0)
.mul_mat4(&Mat4::scale(10.0, 0.1, 10.0));
let cam_uniform = CameraUniform {
view_proj: view_proj.cols,
model: model.cols,
camera_pos: cam_pos,
_padding: 0.0,
};
let bytes = bytemuck::bytes_of(&cam_uniform);
let offset = i * cam_aligned;
cam_staging[offset..offset + bytes.len()].copy_from_slice(bytes);
let mat_uniform = MaterialUniform::with_params(
[0.5, 0.5, 0.5, 1.0],
0.0,
0.8,
);
let bytes = bytemuck::bytes_of(&mat_uniform);
let offset = i * mat_aligned;
mat_staging[offset..offset + bytes.len()].copy_from_slice(bytes);
}
state.gpu.queue.write_buffer(&state.camera_buffer, 0, &cam_staging);
state.gpu.queue.write_buffer(&state.material_buffer, 0, &mat_staging);
// Update lights with orbiting point lights
let radius = 5.0f32;
let time = state.time;
let mut lights_uniform = LightsUniform::new();
// Directional fill light
lights_uniform.add_light(LightData::directional(
[0.0, -1.0, -0.5], [1.0, 1.0, 1.0], 0.3,
));
// 4 orbiting point lights
let offsets = [0.0f32, std::f32::consts::FRAC_PI_2, std::f32::consts::PI, 3.0 * std::f32::consts::FRAC_PI_2];
let colors = [
[1.0, 0.0, 0.0], // Red
[0.0, 1.0, 0.0], // Green
[0.0, 0.0, 1.0], // Blue
[1.0, 1.0, 0.0], // Yellow
];
for j in 0..4 {
let angle = time + offsets[j];
let px = radius * angle.cos();
let pz = radius * angle.sin();
lights_uniform.add_light(LightData::point(
[px, 2.0, pz], colors[j], 15.0, 15.0,
));
}
// Spot light from above
lights_uniform.add_light(LightData::spot(
[0.0, 5.0, 0.0], [0.0, -1.0, 0.0], [1.0, 1.0, 1.0],
20.0, 10.0, 20.0, 35.0,
));
state.gpu.queue.write_buffer(
&state.light_buffer,
0,
bytemuck::cast_slice(&[lights_uniform]),
);
// Render
let output = match state.gpu.surface.get_current_texture() {
Ok(t) => t,
Err(wgpu::SurfaceError::Lost) => {
let (w, h) = state.window.inner_size();
state.gpu.resize(w, h);
return;
}
Err(wgpu::SurfaceError::OutOfMemory) => {
event_loop.exit();
return;
}
Err(_) => return,
};
let view = output.texture.create_view(&wgpu::TextureViewDescriptor::default());
let mut encoder = state.gpu.device.create_command_encoder(
&wgpu::CommandEncoderDescriptor {
label: Some("Render Encoder"),
},
);
{
let mut render_pass =
encoder.begin_render_pass(&wgpu::RenderPassDescriptor {
label: Some("Multi-Light Render Pass"),
color_attachments: &[Some(wgpu::RenderPassColorAttachment {
view: &view,
resolve_target: None,
depth_slice: None,
ops: wgpu::Operations {
load: wgpu::LoadOp::Clear(wgpu::Color {
r: 0.05,
g: 0.05,
b: 0.08,
a: 1.0,
}),
store: wgpu::StoreOp::Store,
},
})],
depth_stencil_attachment: Some(
wgpu::RenderPassDepthStencilAttachment {
view: &state.gpu.depth_view,
depth_ops: Some(wgpu::Operations {
load: wgpu::LoadOp::Clear(1.0),
store: wgpu::StoreOp::Store,
}),
stencil_ops: None,
},
),
occlusion_query_set: None,
timestamp_writes: None,
multiview_mask: None,
});
render_pass.set_pipeline(&state.pipeline);
render_pass.set_bind_group(1, &state.pbr_texture_bind_group, &[]);
render_pass.set_bind_group(3, &state.shadow_bind_group, &[]);
// Draw 5 spheres (objects 0..4)
render_pass.set_vertex_buffer(0, state.sphere_mesh.vertex_buffer.slice(..));
render_pass.set_index_buffer(
state.sphere_mesh.index_buffer.slice(..),
wgpu::IndexFormat::Uint32,
);
for i in 0..5u32 {
let cam_offset = i * state.cam_aligned_size;
let mat_offset = i * state.mat_aligned_size;
render_pass.set_bind_group(
0,
&state.camera_light_bind_group,
&[cam_offset],
);
render_pass.set_bind_group(
2,
&state.material_bind_group,
&[mat_offset],
);
render_pass.draw_indexed(0..state.sphere_mesh.num_indices, 0, 0..1);
}
// Draw ground plane (object 5)
render_pass.set_vertex_buffer(0, state.ground_mesh.vertex_buffer.slice(..));
render_pass.set_index_buffer(
state.ground_mesh.index_buffer.slice(..),
wgpu::IndexFormat::Uint32,
);
{
let cam_offset = 5u32 * state.cam_aligned_size;
let mat_offset = 5u32 * state.mat_aligned_size;
render_pass.set_bind_group(
0,
&state.camera_light_bind_group,
&[cam_offset],
);
render_pass.set_bind_group(
2,
&state.material_bind_group,
&[mat_offset],
);
render_pass.draw_indexed(0..state.ground_mesh.num_indices, 0, 0..1);
}
}
state.gpu.queue.submit(std::iter::once(encoder.finish()));
output.present();
}
_ => {}
}
}
fn about_to_wait(&mut self, _event_loop: &ActiveEventLoop) {
if let Some(state) = &self.state {
state.window.request_redraw();
}
}
}
fn main() {
env_logger::init();
let event_loop = EventLoop::new().unwrap();
let mut app = MultiLightApp { state: None };
event_loop.run_app(&mut app).unwrap();
}
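The orbiting point lights above are a plain circle parametrization: four lights share one angular speed and are spaced by quarter-turn phase offsets, with x/z on the circle and a fixed height. A standalone sketch of that math (the function name and fixed y = 2.0 mirror the demo but are illustrative):

```rust
// Positions of four point lights orbiting the origin at a given radius,
// evenly spaced by FRAC_PI_2 phase offsets, at a fixed height of y = 2.0.
fn orbit_positions(time: f32, radius: f32) -> [[f32; 3]; 4] {
    let mut out = [[0.0f32; 3]; 4];
    for (j, slot) in out.iter_mut().enumerate() {
        let angle = time + j as f32 * std::f32::consts::FRAC_PI_2;
        *slot = [radius * angle.cos(), 2.0, radius * angle.sin()];
    }
    out
}

fn main() {
    let p = orbit_positions(0.0, 5.0);
    // At t = 0 the first light sits at (5, 2, 0) and the second at (0, 2, 5),
    // matching the demo's initial LightData::point positions.
    assert!((p[0][0] - 5.0).abs() < 1e-4 && p[0][2].abs() < 1e-4);
    assert!(p[1][0].abs() < 1e-4 && (p[1][2] - 5.0).abs() < 1e-4);
    // Lights two apart are mirrored through the origin.
    assert!((p[0][0] + p[2][0]).abs() < 1e-4);
}
```

Rebuilding the whole `LightsUniform` each frame, as the demo does, keeps the update path simple: recompute every light, then one `write_buffer` of the full struct.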


@@ -0,0 +1,15 @@
[package]
name = "pbr_demo"
version = "0.1.0"
edition = "2021"
[dependencies]
voltex_math.workspace = true
voltex_platform.workspace = true
voltex_renderer.workspace = true
wgpu.workspace = true
winit.workspace = true
bytemuck.workspace = true
pollster.workspace = true
env_logger.workspace = true
log.workspace = true


@@ -0,0 +1,525 @@
use winit::{
application::ApplicationHandler,
event::WindowEvent,
event_loop::{ActiveEventLoop, EventLoop},
keyboard::{KeyCode, PhysicalKey},
window::WindowId,
};
use voltex_math::{Vec3, Mat4};
use voltex_platform::{VoltexWindow, WindowConfig, InputState, GameTimer};
use voltex_renderer::{
GpuContext, Camera, FpsController, CameraUniform, LightsUniform, LightData,
Mesh, GpuTexture, MaterialUniform, generate_sphere, create_pbr_pipeline,
ShadowMap, ShadowUniform,
IblResources, pbr_texture_bind_group_layout, create_pbr_texture_bind_group,
};
use wgpu::util::DeviceExt;
const GRID_SIZE: usize = 7;
const NUM_SPHERES: usize = GRID_SIZE * GRID_SIZE;
const SPACING: f32 = 1.2;
struct PbrDemoApp {
state: Option<AppState>,
}
struct AppState {
window: VoltexWindow,
gpu: GpuContext,
pipeline: wgpu::RenderPipeline,
mesh: Mesh,
camera: Camera,
fps_controller: FpsController,
camera_buffer: wgpu::Buffer,
light_buffer: wgpu::Buffer,
material_buffer: wgpu::Buffer,
camera_light_bind_group: wgpu::BindGroup,
_albedo_tex: GpuTexture,
_normal_tex: (wgpu::Texture, wgpu::TextureView, wgpu::Sampler),
pbr_texture_bind_group: wgpu::BindGroup,
material_bind_group: wgpu::BindGroup,
shadow_bind_group: wgpu::BindGroup,
_shadow_map: ShadowMap,
_ibl: IblResources,
input: InputState,
timer: GameTimer,
cam_aligned_size: u32,
mat_aligned_size: u32,
}
fn camera_light_bind_group_layout(device: &wgpu::Device) -> wgpu::BindGroupLayout {
device.create_bind_group_layout(&wgpu::BindGroupLayoutDescriptor {
label: Some("Camera+Light Bind Group Layout"),
entries: &[
wgpu::BindGroupLayoutEntry {
binding: 0,
visibility: wgpu::ShaderStages::VERTEX | wgpu::ShaderStages::FRAGMENT,
ty: wgpu::BindingType::Buffer {
ty: wgpu::BufferBindingType::Uniform,
has_dynamic_offset: true,
min_binding_size: wgpu::BufferSize::new(
std::mem::size_of::<CameraUniform>() as u64,
),
},
count: None,
},
wgpu::BindGroupLayoutEntry {
binding: 1,
visibility: wgpu::ShaderStages::FRAGMENT,
ty: wgpu::BindingType::Buffer {
ty: wgpu::BufferBindingType::Uniform,
has_dynamic_offset: false,
min_binding_size: None,
},
count: None,
},
],
})
}
fn align_up(size: u32, alignment: u32) -> u32 {
((size + alignment - 1) / alignment) * alignment
}
impl ApplicationHandler for PbrDemoApp {
fn resumed(&mut self, event_loop: &ActiveEventLoop) {
let config = WindowConfig {
title: "Voltex - PBR Demo".to_string(),
width: 1280,
height: 720,
..Default::default()
};
let window = VoltexWindow::new(event_loop, &config);
let gpu = GpuContext::new(window.handle.clone());
// Dynamic uniform buffer alignment
let alignment = gpu.device.limits().min_uniform_buffer_offset_alignment;
let cam_aligned_size = align_up(std::mem::size_of::<CameraUniform>() as u32, alignment);
let mat_aligned_size = align_up(std::mem::size_of::<MaterialUniform>() as u32, alignment);
// Generate sphere mesh
let (vertices, indices) = generate_sphere(0.4, 32, 16);
let mesh = Mesh::new(&gpu.device, &vertices, &indices);
// Camera at (0, 0, 12) looking toward -Z (toward origin where the grid is)
let aspect = gpu.config.width as f32 / gpu.config.height as f32;
let camera = Camera::new(Vec3::new(0.0, 0.0, 12.0), aspect);
let fps_controller = FpsController::new();
// Light: direction [-1, -1, -1], color white, intensity 1.0
let mut lights_uniform = LightsUniform::new();
lights_uniform.add_light(LightData::directional([-1.0, -1.0, -1.0], [1.0, 1.0, 1.0], 1.0));
// Camera dynamic uniform buffer (one CameraUniform per sphere)
let camera_buffer = gpu.device.create_buffer(&wgpu::BufferDescriptor {
label: Some("Camera Dynamic Uniform Buffer"),
size: (cam_aligned_size as usize * NUM_SPHERES) as u64,
usage: wgpu::BufferUsages::UNIFORM | wgpu::BufferUsages::COPY_DST,
mapped_at_creation: false,
});
let light_buffer = gpu.device.create_buffer_init(&wgpu::util::BufferInitDescriptor {
label: Some("Light Uniform Buffer"),
contents: bytemuck::cast_slice(&[lights_uniform]),
usage: wgpu::BufferUsages::UNIFORM | wgpu::BufferUsages::COPY_DST,
});
// Material dynamic uniform buffer (one MaterialUniform per sphere)
let material_buffer = gpu.device.create_buffer(&wgpu::BufferDescriptor {
label: Some("Material Dynamic Uniform Buffer"),
size: (mat_aligned_size as usize * NUM_SPHERES) as u64,
usage: wgpu::BufferUsages::UNIFORM | wgpu::BufferUsages::COPY_DST,
mapped_at_creation: false,
});
// Bind group layouts
let cl_layout = camera_light_bind_group_layout(&gpu.device);
let pbr_tex_layout = pbr_texture_bind_group_layout(&gpu.device);
let mat_layout = MaterialUniform::bind_group_layout(&gpu.device);
// Camera+Light bind group
let camera_light_bind_group = gpu.device.create_bind_group(&wgpu::BindGroupDescriptor {
label: Some("Camera+Light Bind Group"),
layout: &cl_layout,
entries: &[
wgpu::BindGroupEntry {
binding: 0,
resource: wgpu::BindingResource::Buffer(wgpu::BufferBinding {
buffer: &camera_buffer,
offset: 0,
size: wgpu::BufferSize::new(
std::mem::size_of::<CameraUniform>() as u64,
),
}),
},
wgpu::BindGroupEntry {
binding: 1,
resource: light_buffer.as_entire_binding(),
},
],
});
// PBR texture bind group (albedo + normal)
let old_tex_layout = GpuTexture::bind_group_layout(&gpu.device);
let albedo_tex = GpuTexture::white_1x1(&gpu.device, &gpu.queue, &old_tex_layout);
let normal_tex = GpuTexture::flat_normal_1x1(&gpu.device, &gpu.queue);
let pbr_texture_bind_group = create_pbr_texture_bind_group(
&gpu.device,
&pbr_tex_layout,
&albedo_tex.view,
&albedo_tex.sampler,
&normal_tex.1,
&normal_tex.2,
);
// IBL resources
let ibl = IblResources::new(&gpu.device, &gpu.queue);
// Material bind group
let material_bind_group = gpu.device.create_bind_group(&wgpu::BindGroupDescriptor {
label: Some("Material Bind Group"),
layout: &mat_layout,
entries: &[wgpu::BindGroupEntry {
binding: 0,
resource: wgpu::BindingResource::Buffer(wgpu::BufferBinding {
buffer: &material_buffer,
offset: 0,
size: wgpu::BufferSize::new(
std::mem::size_of::<MaterialUniform>() as u64,
),
}),
}],
});
// Shadow resources (dummy — shadows disabled)
let shadow_map = ShadowMap::new(&gpu.device);
let shadow_layout = ShadowMap::bind_group_layout(&gpu.device);
let shadow_uniform = ShadowUniform {
light_view_proj: [[0.0; 4]; 4],
shadow_map_size: 0.0,
shadow_bias: 0.0,
_padding: [0.0; 2],
};
let shadow_uniform_buffer = gpu.device.create_buffer_init(&wgpu::util::BufferInitDescriptor {
label: Some("Shadow Uniform Buffer"),
contents: bytemuck::cast_slice(&[shadow_uniform]),
usage: wgpu::BufferUsages::UNIFORM,
});
let shadow_bind_group = shadow_map.create_bind_group(
&gpu.device,
&shadow_layout,
&shadow_uniform_buffer,
&ibl.brdf_lut_view,
&ibl.brdf_lut_sampler,
);
// PBR pipeline
let pipeline = create_pbr_pipeline(
&gpu.device,
gpu.surface_format,
&cl_layout,
&pbr_tex_layout,
&mat_layout,
&shadow_layout,
);
self.state = Some(AppState {
window,
gpu,
pipeline,
mesh,
camera,
fps_controller,
camera_buffer,
light_buffer,
material_buffer,
camera_light_bind_group,
_albedo_tex: albedo_tex,
_normal_tex: normal_tex,
pbr_texture_bind_group,
material_bind_group,
shadow_bind_group,
_shadow_map: shadow_map,
_ibl: ibl,
input: InputState::new(),
timer: GameTimer::new(60),
cam_aligned_size,
mat_aligned_size,
});
}
fn window_event(
&mut self,
event_loop: &ActiveEventLoop,
_window_id: WindowId,
event: WindowEvent,
) {
let state = match &mut self.state {
Some(s) => s,
None => return,
};
match event {
WindowEvent::CloseRequested => event_loop.exit(),
WindowEvent::KeyboardInput {
event:
winit::event::KeyEvent {
physical_key: PhysicalKey::Code(key_code),
state: key_state,
..
},
..
} => {
let pressed = key_state == winit::event::ElementState::Pressed;
state.input.process_key(key_code, pressed);
if key_code == KeyCode::Escape && pressed {
event_loop.exit();
}
}
WindowEvent::Resized(size) => {
state.gpu.resize(size.width, size.height);
if size.width > 0 && size.height > 0 {
state.camera.aspect = size.width as f32 / size.height as f32;
}
}
WindowEvent::CursorMoved { position, .. } => {
state.input.process_mouse_move(position.x, position.y);
}
WindowEvent::MouseInput {
state: btn_state,
button,
..
} => {
let pressed = btn_state == winit::event::ElementState::Pressed;
state.input.process_mouse_button(button, pressed);
}
WindowEvent::MouseWheel { delta, .. } => {
let y = match delta {
winit::event::MouseScrollDelta::LineDelta(_, y) => y,
winit::event::MouseScrollDelta::PixelDelta(pos) => pos.y as f32,
};
state.input.process_scroll(y);
}
WindowEvent::RedrawRequested => {
state.timer.tick();
let dt = state.timer.frame_dt();
// Camera input
if state
.input
.is_mouse_button_pressed(winit::event::MouseButton::Right)
{
let (dx, dy) = state.input.mouse_delta();
state
.fps_controller
.process_mouse(&mut state.camera, dx, dy);
}
let mut forward = 0.0f32;
let mut right = 0.0f32;
let mut up = 0.0f32;
if state.input.is_key_pressed(KeyCode::KeyW) {
forward += 1.0;
}
if state.input.is_key_pressed(KeyCode::KeyS) {
forward -= 1.0;
}
if state.input.is_key_pressed(KeyCode::KeyD) {
right += 1.0;
}
if state.input.is_key_pressed(KeyCode::KeyA) {
right -= 1.0;
}
if state.input.is_key_pressed(KeyCode::Space) {
up += 1.0;
}
if state.input.is_key_pressed(KeyCode::ShiftLeft) {
up -= 1.0;
}
state
.fps_controller
.process_movement(&mut state.camera, forward, right, up, dt);
state.input.begin_frame();
// Compute view-projection
let view_proj = state.camera.view_projection();
let cam_pos = [
state.camera.position.x,
state.camera.position.y,
state.camera.position.z,
];
let cam_aligned = state.cam_aligned_size as usize;
let mat_aligned = state.mat_aligned_size as usize;
// Build staging data for camera and material uniforms
let cam_total = NUM_SPHERES * cam_aligned;
let mat_total = NUM_SPHERES * mat_aligned;
let mut cam_staging = vec![0u8; cam_total];
let mut mat_staging = vec![0u8; mat_total];
let half_grid = (GRID_SIZE as f32 - 1.0) * SPACING * 0.5;
for row in 0..GRID_SIZE {
for col in 0..GRID_SIZE {
let i = row * GRID_SIZE + col;
let x = col as f32 * SPACING - half_grid;
let y = row as f32 * SPACING - half_grid;
// Camera uniform: view_proj + model (translation) + camera_pos
let model = Mat4::translation(x, y, 0.0);
let cam_uniform = CameraUniform {
view_proj: view_proj.cols,
model: model.cols,
camera_pos: cam_pos,
_padding: 0.0,
};
let bytes = bytemuck::bytes_of(&cam_uniform);
let offset = i * cam_aligned;
cam_staging[offset..offset + bytes.len()].copy_from_slice(bytes);
// Material uniform: metallic varies with col, roughness with row
let metallic = col as f32 / (GRID_SIZE as f32 - 1.0);
let roughness =
0.05 + row as f32 * (0.95 / (GRID_SIZE as f32 - 1.0));
let mat_uniform = MaterialUniform::with_params(
[0.8, 0.2, 0.2, 1.0],
metallic,
roughness,
);
let bytes = bytemuck::bytes_of(&mat_uniform);
let offset = i * mat_aligned;
mat_staging[offset..offset + bytes.len()].copy_from_slice(bytes);
}
}
state
.gpu
.queue
.write_buffer(&state.camera_buffer, 0, &cam_staging);
state
.gpu
.queue
.write_buffer(&state.material_buffer, 0, &mat_staging);
// Write light uniform
let mut lights_uniform = LightsUniform::new();
lights_uniform.add_light(LightData::directional([-1.0, -1.0, -1.0], [1.0, 1.0, 1.0], 1.0));
state.gpu.queue.write_buffer(
&state.light_buffer,
0,
bytemuck::cast_slice(&[lights_uniform]),
);
// Render
let output = match state.gpu.surface.get_current_texture() {
Ok(t) => t,
Err(wgpu::SurfaceError::Lost) => {
let (w, h) = state.window.inner_size();
state.gpu.resize(w, h);
return;
}
Err(wgpu::SurfaceError::OutOfMemory) => {
event_loop.exit();
return;
}
Err(_) => return,
};
let view = output.texture.create_view(&wgpu::TextureViewDescriptor::default());
let mut encoder = state.gpu.device.create_command_encoder(
&wgpu::CommandEncoderDescriptor {
label: Some("Render Encoder"),
},
);
{
let mut render_pass =
encoder.begin_render_pass(&wgpu::RenderPassDescriptor {
label: Some("PBR Render Pass"),
color_attachments: &[Some(wgpu::RenderPassColorAttachment {
view: &view,
resolve_target: None,
depth_slice: None,
ops: wgpu::Operations {
load: wgpu::LoadOp::Clear(wgpu::Color {
r: 0.1,
g: 0.1,
b: 0.15,
a: 1.0,
}),
store: wgpu::StoreOp::Store,
},
})],
depth_stencil_attachment: Some(
wgpu::RenderPassDepthStencilAttachment {
view: &state.gpu.depth_view,
depth_ops: Some(wgpu::Operations {
load: wgpu::LoadOp::Clear(1.0),
store: wgpu::StoreOp::Store,
}),
stencil_ops: None,
},
),
occlusion_query_set: None,
timestamp_writes: None,
multiview_mask: None,
});
render_pass.set_pipeline(&state.pipeline);
render_pass.set_bind_group(1, &state.pbr_texture_bind_group, &[]);
render_pass.set_bind_group(3, &state.shadow_bind_group, &[]);
render_pass.set_vertex_buffer(0, state.mesh.vertex_buffer.slice(..));
render_pass.set_index_buffer(
state.mesh.index_buffer.slice(..),
wgpu::IndexFormat::Uint32,
);
// Draw each sphere with dynamic offsets for camera and material
for i in 0..NUM_SPHERES {
let cam_offset = (i as u32) * state.cam_aligned_size;
let mat_offset = (i as u32) * state.mat_aligned_size;
render_pass.set_bind_group(
0,
&state.camera_light_bind_group,
&[cam_offset],
);
render_pass.set_bind_group(
2,
&state.material_bind_group,
&[mat_offset],
);
render_pass.draw_indexed(0..state.mesh.num_indices, 0, 0..1);
}
}
state.gpu.queue.submit(std::iter::once(encoder.finish()));
output.present();
}
_ => {}
}
}
fn about_to_wait(&mut self, _event_loop: &ActiveEventLoop) {
if let Some(state) = &self.state {
state.window.request_redraw();
}
}
}
fn main() {
env_logger::init();
let event_loop = EventLoop::new().unwrap();
let mut app = PbrDemoApp { state: None };
event_loop.run_app(&mut app).unwrap();
}
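The demo above relies on `align_up` to size each slot of the dynamic uniform buffers. A standalone sketch of that arithmetic (the 256-byte alignment here is an assumption for illustration; the demo reads the real value from `device.limits().min_uniform_buffer_offset_alignment` at runtime):

```rust
/// Round `size` up to the next multiple of `alignment` (same helper as the demo).
fn align_up(size: u32, alignment: u32) -> u32 {
    ((size + alignment - 1) / alignment) * alignment
}

fn main() {
    let alignment = 256; // assumed min_uniform_buffer_offset_alignment
    // A 144-byte CameraUniform rounds up to one 256-byte slot...
    assert_eq!(align_up(144, alignment), 256);
    // ...while a 272-byte struct would need two.
    assert_eq!(align_up(272, alignment), 512);
    // Per-sphere dynamic offsets are then simple multiples of the slot size.
    let slot = align_up(144, alignment);
    let offsets: Vec<u32> = (0..4).map(|i| i * slot).collect();
    assert_eq!(offsets, vec![0, 256, 512, 768]);
    println!("slot size: {slot}");
}
```

This is why the draw loop can pass `(i as u32) * state.cam_aligned_size` directly as the dynamic offset: every slot boundary is guaranteed to satisfy the device's alignment requirement.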

View File

@@ -0,0 +1,15 @@
[package]
name = "shadow_demo"
version = "0.1.0"
edition = "2021"
[dependencies]
voltex_math.workspace = true
voltex_platform.workspace = true
voltex_renderer.workspace = true
wgpu.workspace = true
winit.workspace = true
bytemuck.workspace = true
pollster.workspace = true
env_logger.workspace = true
log.workspace = true

View File

@@ -0,0 +1,623 @@
use winit::{
application::ApplicationHandler,
event::WindowEvent,
event_loop::{ActiveEventLoop, EventLoop},
keyboard::{KeyCode, PhysicalKey},
window::WindowId,
};
use voltex_math::{Vec3, Mat4};
use voltex_platform::{VoltexWindow, WindowConfig, InputState, GameTimer};
use voltex_renderer::{
GpuContext, Camera, FpsController, CameraUniform, LightsUniform, LightData,
Mesh, GpuTexture, MaterialUniform, generate_sphere, create_pbr_pipeline, obj,
ShadowMap, ShadowUniform, ShadowPassUniform, SHADOW_MAP_SIZE,
create_shadow_pipeline, shadow_pass_bind_group_layout,
IblResources, pbr_texture_bind_group_layout, create_pbr_texture_bind_group,
};
use wgpu::util::DeviceExt;
/// 6 objects: ground plane, 3 spheres, 2 cubes
const NUM_OBJECTS: usize = 6;
struct ShadowDemoApp {
state: Option<AppState>,
}
struct AppState {
window: VoltexWindow,
gpu: GpuContext,
pbr_pipeline: wgpu::RenderPipeline,
shadow_pipeline: wgpu::RenderPipeline,
sphere_mesh: Mesh,
cube_mesh: Mesh,
camera: Camera,
fps_controller: FpsController,
// Color pass resources
camera_buffer: wgpu::Buffer,
light_buffer: wgpu::Buffer,
material_buffer: wgpu::Buffer,
camera_light_bind_group: wgpu::BindGroup,
_albedo_tex: GpuTexture,
_normal_tex: (wgpu::Texture, wgpu::TextureView, wgpu::Sampler),
pbr_texture_bind_group: wgpu::BindGroup,
material_bind_group: wgpu::BindGroup,
// Shadow resources
shadow_map: ShadowMap,
shadow_uniform_buffer: wgpu::Buffer,
shadow_bind_group: wgpu::BindGroup,
shadow_pass_buffer: wgpu::Buffer,
shadow_pass_bind_group: wgpu::BindGroup,
_ibl: IblResources,
// Misc
input: InputState,
timer: GameTimer,
cam_aligned_size: u32,
mat_aligned_size: u32,
shadow_pass_aligned_size: u32,
}
fn camera_light_bind_group_layout(device: &wgpu::Device) -> wgpu::BindGroupLayout {
device.create_bind_group_layout(&wgpu::BindGroupLayoutDescriptor {
label: Some("Camera+Light Bind Group Layout"),
entries: &[
wgpu::BindGroupLayoutEntry {
binding: 0,
visibility: wgpu::ShaderStages::VERTEX | wgpu::ShaderStages::FRAGMENT,
ty: wgpu::BindingType::Buffer {
ty: wgpu::BufferBindingType::Uniform,
has_dynamic_offset: true,
min_binding_size: wgpu::BufferSize::new(
std::mem::size_of::<CameraUniform>() as u64,
),
},
count: None,
},
wgpu::BindGroupLayoutEntry {
binding: 1,
visibility: wgpu::ShaderStages::FRAGMENT,
ty: wgpu::BindingType::Buffer {
ty: wgpu::BufferBindingType::Uniform,
has_dynamic_offset: false,
min_binding_size: None,
},
count: None,
},
],
})
}
fn align_up(size: u32, alignment: u32) -> u32 {
((size + alignment - 1) / alignment) * alignment
}
/// Model matrices for the 6 scene objects.
fn object_models() -> [Mat4; NUM_OBJECTS] {
[
// 0: ground plane — cube scaled (15, 0.1, 15) at y=-0.5
Mat4::translation(0.0, -0.5, 0.0).mul_mat4(&Mat4::scale(15.0, 0.1, 15.0)),
// 1-3: spheres (unit sphere radius 0.5)
Mat4::translation(-3.0, 1.0, 0.0),
Mat4::translation(0.0, 1.5, 0.0),
Mat4::translation(3.0, 0.8, 0.0),
// 4-5: cubes
Mat4::translation(-1.5, 0.5, -2.0),
Mat4::translation(1.5, 0.5, 2.0),
]
}
/// Material parameters for each object (base_color, metallic, roughness).
fn object_materials() -> [([f32; 4], f32, f32); NUM_OBJECTS] {
[
([0.7, 0.7, 0.7, 1.0], 0.0, 0.8), // ground: light gray
([0.9, 0.2, 0.2, 1.0], 0.3, 0.4), // sphere: red
([0.2, 0.9, 0.2, 1.0], 0.5, 0.3), // sphere: green
([0.2, 0.2, 0.9, 1.0], 0.1, 0.6), // sphere: blue
([0.9, 0.8, 0.2, 1.0], 0.7, 0.2), // cube: yellow
([0.8, 0.3, 0.8, 1.0], 0.2, 0.5), // cube: purple
]
}
/// Returns true if the object at index `i` uses the cube mesh; false → sphere.
fn is_cube(i: usize) -> bool {
i == 0 || i == 4 || i == 5
}
impl ApplicationHandler for ShadowDemoApp {
fn resumed(&mut self, event_loop: &ActiveEventLoop) {
let config = WindowConfig {
title: "Voltex - Shadow Demo".to_string(),
width: 1280,
height: 720,
..Default::default()
};
let window = VoltexWindow::new(event_loop, &config);
let gpu = GpuContext::new(window.handle.clone());
// Dynamic uniform buffer alignment
let alignment = gpu.device.limits().min_uniform_buffer_offset_alignment;
let cam_aligned_size = align_up(std::mem::size_of::<CameraUniform>() as u32, alignment);
let mat_aligned_size = align_up(std::mem::size_of::<MaterialUniform>() as u32, alignment);
let shadow_pass_aligned_size = align_up(std::mem::size_of::<ShadowPassUniform>() as u32, alignment);
// Meshes
let (sphere_verts, sphere_idx) = generate_sphere(0.5, 32, 16);
let sphere_mesh = Mesh::new(&gpu.device, &sphere_verts, &sphere_idx);
let obj_src = include_str!("../../../assets/cube.obj");
let obj_data = obj::parse_obj(obj_src);
let cube_mesh = Mesh::new(&gpu.device, &obj_data.vertices, &obj_data.indices);
// Camera at (8, 8, 12) looking toward origin
let aspect = gpu.config.width as f32 / gpu.config.height as f32;
let mut camera = Camera::new(Vec3::new(8.0, 8.0, 12.0), aspect);
camera.pitch = -0.4;
// Compute yaw to look toward origin
let to_origin = Vec3::ZERO - camera.position;
camera.yaw = to_origin.x.atan2(-to_origin.z);
let fps_controller = FpsController::new();
// Light
let mut lights_uniform = LightsUniform::new();
lights_uniform.ambient_color = [0.05, 0.05, 0.05];
lights_uniform.add_light(LightData::directional(
[-1.0, -2.0, -1.0],
[1.0, 1.0, 1.0],
2.0,
));
// ---- Color pass buffers ----
let camera_buffer = gpu.device.create_buffer(&wgpu::BufferDescriptor {
label: Some("Camera Dynamic UBO"),
size: (cam_aligned_size as usize * NUM_OBJECTS) as u64,
usage: wgpu::BufferUsages::UNIFORM | wgpu::BufferUsages::COPY_DST,
mapped_at_creation: false,
});
let light_buffer = gpu.device.create_buffer_init(&wgpu::util::BufferInitDescriptor {
label: Some("Light UBO"),
contents: bytemuck::cast_slice(&[lights_uniform]),
usage: wgpu::BufferUsages::UNIFORM | wgpu::BufferUsages::COPY_DST,
});
let material_buffer = gpu.device.create_buffer(&wgpu::BufferDescriptor {
label: Some("Material Dynamic UBO"),
size: (mat_aligned_size as usize * NUM_OBJECTS) as u64,
usage: wgpu::BufferUsages::UNIFORM | wgpu::BufferUsages::COPY_DST,
mapped_at_creation: false,
});
// Bind group layouts
let cl_layout = camera_light_bind_group_layout(&gpu.device);
let pbr_tex_layout = pbr_texture_bind_group_layout(&gpu.device);
let mat_layout = MaterialUniform::bind_group_layout(&gpu.device);
// Camera+Light bind group
let camera_light_bind_group = gpu.device.create_bind_group(&wgpu::BindGroupDescriptor {
label: Some("Camera+Light BG"),
layout: &cl_layout,
entries: &[
wgpu::BindGroupEntry {
binding: 0,
resource: wgpu::BindingResource::Buffer(wgpu::BufferBinding {
buffer: &camera_buffer,
offset: 0,
size: wgpu::BufferSize::new(std::mem::size_of::<CameraUniform>() as u64),
}),
},
wgpu::BindGroupEntry {
binding: 1,
resource: light_buffer.as_entire_binding(),
},
],
});
// PBR texture bind group (albedo + normal)
let old_tex_layout = GpuTexture::bind_group_layout(&gpu.device);
let albedo_tex = GpuTexture::white_1x1(&gpu.device, &gpu.queue, &old_tex_layout);
let normal_tex = GpuTexture::flat_normal_1x1(&gpu.device, &gpu.queue);
let pbr_texture_bind_group = create_pbr_texture_bind_group(
&gpu.device,
&pbr_tex_layout,
&albedo_tex.view,
&albedo_tex.sampler,
&normal_tex.1,
&normal_tex.2,
);
// IBL resources
let ibl = IblResources::new(&gpu.device, &gpu.queue);
// Material bind group
let material_bind_group = gpu.device.create_bind_group(&wgpu::BindGroupDescriptor {
label: Some("Material BG"),
layout: &mat_layout,
entries: &[wgpu::BindGroupEntry {
binding: 0,
resource: wgpu::BindingResource::Buffer(wgpu::BufferBinding {
buffer: &material_buffer,
offset: 0,
size: wgpu::BufferSize::new(std::mem::size_of::<MaterialUniform>() as u64),
}),
}],
});
// ---- Shadow resources ----
let shadow_map = ShadowMap::new(&gpu.device);
let shadow_layout = ShadowMap::bind_group_layout(&gpu.device);
let shadow_uniform = ShadowUniform {
light_view_proj: Mat4::IDENTITY.cols,
shadow_map_size: SHADOW_MAP_SIZE as f32,
shadow_bias: 0.005,
_padding: [0.0; 2],
};
let shadow_uniform_buffer = gpu.device.create_buffer_init(&wgpu::util::BufferInitDescriptor {
label: Some("Shadow Uniform Buffer"),
contents: bytemuck::cast_slice(&[shadow_uniform]),
usage: wgpu::BufferUsages::UNIFORM | wgpu::BufferUsages::COPY_DST,
});
let shadow_bind_group = shadow_map.create_bind_group(
&gpu.device,
&shadow_layout,
&shadow_uniform_buffer,
&ibl.brdf_lut_view,
&ibl.brdf_lut_sampler,
);
// Shadow pass dynamic UBO (one ShadowPassUniform per object)
let sp_layout = shadow_pass_bind_group_layout(&gpu.device);
let shadow_pass_buffer = gpu.device.create_buffer(&wgpu::BufferDescriptor {
label: Some("Shadow Pass Dynamic UBO"),
size: (shadow_pass_aligned_size as usize * NUM_OBJECTS) as u64,
usage: wgpu::BufferUsages::UNIFORM | wgpu::BufferUsages::COPY_DST,
mapped_at_creation: false,
});
let shadow_pass_bind_group = gpu.device.create_bind_group(&wgpu::BindGroupDescriptor {
label: Some("Shadow Pass BG"),
layout: &sp_layout,
entries: &[wgpu::BindGroupEntry {
binding: 0,
resource: wgpu::BindingResource::Buffer(wgpu::BufferBinding {
buffer: &shadow_pass_buffer,
offset: 0,
size: wgpu::BufferSize::new(std::mem::size_of::<ShadowPassUniform>() as u64),
}),
}],
});
// ---- Pipelines ----
let shadow_pipeline = create_shadow_pipeline(&gpu.device, &sp_layout);
let pbr_pipeline = create_pbr_pipeline(
&gpu.device,
gpu.surface_format,
&cl_layout,
&pbr_tex_layout,
&mat_layout,
&shadow_layout,
);
self.state = Some(AppState {
window,
gpu,
pbr_pipeline,
shadow_pipeline,
sphere_mesh,
cube_mesh,
camera,
fps_controller,
camera_buffer,
light_buffer,
material_buffer,
camera_light_bind_group,
_albedo_tex: albedo_tex,
_normal_tex: normal_tex,
pbr_texture_bind_group,
material_bind_group,
shadow_map,
shadow_uniform_buffer,
shadow_bind_group,
shadow_pass_buffer,
shadow_pass_bind_group,
_ibl: ibl,
input: InputState::new(),
timer: GameTimer::new(60),
cam_aligned_size,
mat_aligned_size,
shadow_pass_aligned_size,
});
}
fn window_event(
&mut self,
event_loop: &ActiveEventLoop,
_window_id: WindowId,
event: WindowEvent,
) {
let state = match &mut self.state {
Some(s) => s,
None => return,
};
match event {
WindowEvent::CloseRequested => event_loop.exit(),
WindowEvent::KeyboardInput {
event:
winit::event::KeyEvent {
physical_key: PhysicalKey::Code(key_code),
state: key_state,
..
},
..
} => {
let pressed = key_state == winit::event::ElementState::Pressed;
state.input.process_key(key_code, pressed);
if key_code == KeyCode::Escape && pressed {
event_loop.exit();
}
}
WindowEvent::Resized(size) => {
state.gpu.resize(size.width, size.height);
if size.width > 0 && size.height > 0 {
state.camera.aspect = size.width as f32 / size.height as f32;
}
}
WindowEvent::CursorMoved { position, .. } => {
state.input.process_mouse_move(position.x, position.y);
}
WindowEvent::MouseInput {
state: btn_state,
button,
..
} => {
let pressed = btn_state == winit::event::ElementState::Pressed;
state.input.process_mouse_button(button, pressed);
}
WindowEvent::MouseWheel { delta, .. } => {
let y = match delta {
winit::event::MouseScrollDelta::LineDelta(_, y) => y,
winit::event::MouseScrollDelta::PixelDelta(pos) => pos.y as f32,
};
state.input.process_scroll(y);
}
WindowEvent::RedrawRequested => {
state.timer.tick();
let dt = state.timer.frame_dt();
// Camera input
if state.input.is_mouse_button_pressed(winit::event::MouseButton::Right) {
let (dx, dy) = state.input.mouse_delta();
state.fps_controller.process_mouse(&mut state.camera, dx, dy);
}
let mut forward = 0.0f32;
let mut right = 0.0f32;
let mut up = 0.0f32;
if state.input.is_key_pressed(KeyCode::KeyW) { forward += 1.0; }
if state.input.is_key_pressed(KeyCode::KeyS) { forward -= 1.0; }
if state.input.is_key_pressed(KeyCode::KeyD) { right += 1.0; }
if state.input.is_key_pressed(KeyCode::KeyA) { right -= 1.0; }
if state.input.is_key_pressed(KeyCode::Space) { up += 1.0; }
if state.input.is_key_pressed(KeyCode::ShiftLeft) { up -= 1.0; }
state.fps_controller.process_movement(&mut state.camera, forward, right, up, dt);
state.input.begin_frame();
// ----- Compute light VP -----
let light_dir = Vec3::new(-1.0, -2.0, -1.0).normalize();
let light_pos = Vec3::ZERO - light_dir * 20.0;
let light_view = Mat4::look_at(light_pos, Vec3::ZERO, Vec3::Y);
let light_proj = Mat4::orthographic(-15.0, 15.0, -15.0, 15.0, 0.1, 50.0);
let light_vp = light_proj * light_view;
let models = object_models();
let materials = object_materials();
let cam_aligned = state.cam_aligned_size as usize;
let mat_aligned = state.mat_aligned_size as usize;
let sp_aligned = state.shadow_pass_aligned_size as usize;
// ----- Build shadow pass staging data -----
let sp_total = sp_aligned * NUM_OBJECTS;
let mut sp_staging = vec![0u8; sp_total];
for i in 0..NUM_OBJECTS {
let sp_uniform = ShadowPassUniform {
light_vp_model: (light_vp * models[i]).cols,
};
let bytes = bytemuck::bytes_of(&sp_uniform);
let offset = i * sp_aligned;
sp_staging[offset..offset + bytes.len()].copy_from_slice(bytes);
}
state.gpu.queue.write_buffer(&state.shadow_pass_buffer, 0, &sp_staging);
// ----- Build color pass staging data -----
let view_proj = state.camera.view_projection();
let cam_pos = [
state.camera.position.x,
state.camera.position.y,
state.camera.position.z,
];
let cam_total = cam_aligned * NUM_OBJECTS;
let mat_total = mat_aligned * NUM_OBJECTS;
let mut cam_staging = vec![0u8; cam_total];
let mut mat_staging = vec![0u8; mat_total];
for i in 0..NUM_OBJECTS {
let cam_uniform = CameraUniform {
view_proj: view_proj.cols,
model: models[i].cols,
camera_pos: cam_pos,
_padding: 0.0,
};
let bytes = bytemuck::bytes_of(&cam_uniform);
let offset = i * cam_aligned;
cam_staging[offset..offset + bytes.len()].copy_from_slice(bytes);
let (color, metallic, roughness) = materials[i];
let mat_uniform = MaterialUniform::with_params(color, metallic, roughness);
let bytes = bytemuck::bytes_of(&mat_uniform);
let offset = i * mat_aligned;
mat_staging[offset..offset + bytes.len()].copy_from_slice(bytes);
}
state.gpu.queue.write_buffer(&state.camera_buffer, 0, &cam_staging);
state.gpu.queue.write_buffer(&state.material_buffer, 0, &mat_staging);
// Update shadow uniform with light VP
let shadow_uniform = ShadowUniform {
light_view_proj: light_vp.cols,
shadow_map_size: SHADOW_MAP_SIZE as f32,
shadow_bias: 0.005,
_padding: [0.0; 2],
};
state.gpu.queue.write_buffer(
&state.shadow_uniform_buffer,
0,
bytemuck::cast_slice(&[shadow_uniform]),
);
// Write light uniform
let mut lights_uniform = LightsUniform::new();
lights_uniform.ambient_color = [0.05, 0.05, 0.05];
lights_uniform.add_light(LightData::directional(
[-1.0, -2.0, -1.0],
[1.0, 1.0, 1.0],
2.0,
));
state.gpu.queue.write_buffer(
&state.light_buffer,
0,
bytemuck::cast_slice(&[lights_uniform]),
);
// ----- Render -----
let output = match state.gpu.surface.get_current_texture() {
Ok(t) => t,
Err(wgpu::SurfaceError::Lost) => {
let (w, h) = state.window.inner_size();
state.gpu.resize(w, h);
return;
}
Err(wgpu::SurfaceError::OutOfMemory) => {
event_loop.exit();
return;
}
Err(_) => return,
};
let color_view = output.texture.create_view(&wgpu::TextureViewDescriptor::default());
let mut encoder = state.gpu.device.create_command_encoder(
&wgpu::CommandEncoderDescriptor { label: Some("Shadow Demo Encoder") },
);
// ===== Pass 1: Shadow =====
{
let mut shadow_pass = encoder.begin_render_pass(&wgpu::RenderPassDescriptor {
label: Some("Shadow Pass"),
color_attachments: &[],
depth_stencil_attachment: Some(wgpu::RenderPassDepthStencilAttachment {
view: &state.shadow_map.view,
depth_ops: Some(wgpu::Operations {
load: wgpu::LoadOp::Clear(1.0),
store: wgpu::StoreOp::Store,
}),
stencil_ops: None,
}),
occlusion_query_set: None,
timestamp_writes: None,
multiview_mask: None,
});
shadow_pass.set_pipeline(&state.shadow_pipeline);
for i in 0..NUM_OBJECTS {
let offset = (i as u32) * state.shadow_pass_aligned_size;
shadow_pass.set_bind_group(0, &state.shadow_pass_bind_group, &[offset]);
if is_cube(i) {
shadow_pass.set_vertex_buffer(0, state.cube_mesh.vertex_buffer.slice(..));
shadow_pass.set_index_buffer(state.cube_mesh.index_buffer.slice(..), wgpu::IndexFormat::Uint32);
shadow_pass.draw_indexed(0..state.cube_mesh.num_indices, 0, 0..1);
} else {
shadow_pass.set_vertex_buffer(0, state.sphere_mesh.vertex_buffer.slice(..));
shadow_pass.set_index_buffer(state.sphere_mesh.index_buffer.slice(..), wgpu::IndexFormat::Uint32);
shadow_pass.draw_indexed(0..state.sphere_mesh.num_indices, 0, 0..1);
}
}
}
// ===== Pass 2: Color (PBR) =====
{
let mut render_pass = encoder.begin_render_pass(&wgpu::RenderPassDescriptor {
label: Some("Color Pass"),
color_attachments: &[Some(wgpu::RenderPassColorAttachment {
view: &color_view,
resolve_target: None,
depth_slice: None,
ops: wgpu::Operations {
load: wgpu::LoadOp::Clear(wgpu::Color {
r: 0.1, g: 0.1, b: 0.15, a: 1.0,
}),
store: wgpu::StoreOp::Store,
},
})],
depth_stencil_attachment: Some(wgpu::RenderPassDepthStencilAttachment {
view: &state.gpu.depth_view,
depth_ops: Some(wgpu::Operations {
load: wgpu::LoadOp::Clear(1.0),
store: wgpu::StoreOp::Store,
}),
stencil_ops: None,
}),
occlusion_query_set: None,
timestamp_writes: None,
multiview_mask: None,
});
render_pass.set_pipeline(&state.pbr_pipeline);
render_pass.set_bind_group(1, &state.pbr_texture_bind_group, &[]);
render_pass.set_bind_group(3, &state.shadow_bind_group, &[]);
for i in 0..NUM_OBJECTS {
let cam_offset = (i as u32) * state.cam_aligned_size;
let mat_offset = (i as u32) * state.mat_aligned_size;
render_pass.set_bind_group(0, &state.camera_light_bind_group, &[cam_offset]);
render_pass.set_bind_group(2, &state.material_bind_group, &[mat_offset]);
if is_cube(i) {
render_pass.set_vertex_buffer(0, state.cube_mesh.vertex_buffer.slice(..));
render_pass.set_index_buffer(state.cube_mesh.index_buffer.slice(..), wgpu::IndexFormat::Uint32);
render_pass.draw_indexed(0..state.cube_mesh.num_indices, 0, 0..1);
} else {
render_pass.set_vertex_buffer(0, state.sphere_mesh.vertex_buffer.slice(..));
render_pass.set_index_buffer(state.sphere_mesh.index_buffer.slice(..), wgpu::IndexFormat::Uint32);
render_pass.draw_indexed(0..state.sphere_mesh.num_indices, 0, 0..1);
}
}
}
state.gpu.queue.submit(std::iter::once(encoder.finish()));
output.present();
}
_ => {}
}
}
fn about_to_wait(&mut self, _event_loop: &ActiveEventLoop) {
if let Some(state) = &self.state {
state.window.request_redraw();
}
}
}
fn main() {
env_logger::init();
let event_loop = EventLoop::new().unwrap();
let mut app = ShadowDemoApp { state: None };
event_loop.run_app(&mut app).unwrap();
}
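The per-object upload pattern in `RedrawRequested` (write each uniform's bytes at `i * aligned_size` into one staging `Vec`, then issue a single `queue.write_buffer`) can be sketched in isolation. The 256-byte stride and 64-byte payload below are assumptions standing in for the device-reported alignment and the `bytemuck::bytes_of` output:

```rust
/// Pack `num_objects` copies of `payload` into one staging buffer,
/// each starting at a multiple of `aligned` (zero padding in between).
fn pack_aligned(payload: &[u8], aligned: usize, num_objects: usize) -> Vec<u8> {
    let mut staging = vec![0u8; aligned * num_objects];
    for i in 0..num_objects {
        let offset = i * aligned;
        staging[offset..offset + payload.len()].copy_from_slice(payload);
    }
    staging
}

fn main() {
    let payload = [0xABu8; 64]; // stands in for bytemuck::bytes_of(&uniform)
    let staging = pack_aligned(&payload, 256, 3);
    assert_eq!(staging.len(), 768);
    assert_eq!(staging[256], 0xAB);      // object 1's payload starts at its slot
    assert_eq!(staging[256 + 64], 0x00); // remainder of the slot is padding
    println!("staging bytes: {}", staging.len());
}
```

One contiguous upload per frame keeps the `write_buffer` count constant regardless of object count, at the cost of the zeroed padding between slots.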

View File

@@ -1,4 +1,184 @@
// examples/triangle/src/main.rs
use winit::{
application::ApplicationHandler,
event::WindowEvent,
event_loop::{ActiveEventLoop, EventLoop},
keyboard::{KeyCode, PhysicalKey},
window::WindowId,
};
use voltex_platform::{VoltexWindow, WindowConfig, InputState, GameTimer};
use voltex_renderer::{GpuContext, pipeline, vertex::Vertex};
use wgpu::util::DeviceExt;
const TRIANGLE_VERTICES: &[Vertex] = &[
Vertex { position: [0.0, 0.5, 0.0], color: [1.0, 0.0, 0.0] },
Vertex { position: [-0.5, -0.5, 0.0], color: [0.0, 1.0, 0.0] },
Vertex { position: [0.5, -0.5, 0.0], color: [0.0, 0.0, 1.0] },
];
struct TriangleApp {
state: Option<AppState>,
}
struct AppState {
window: VoltexWindow,
gpu: GpuContext,
pipeline: wgpu::RenderPipeline,
vertex_buffer: wgpu::Buffer,
input: InputState,
timer: GameTimer,
}
impl ApplicationHandler for TriangleApp {
fn resumed(&mut self, event_loop: &ActiveEventLoop) {
let config = WindowConfig {
title: "Voltex - Triangle".to_string(),
width: 1280,
height: 720,
..Default::default()
};
let window = VoltexWindow::new(event_loop, &config);
let gpu = GpuContext::new(window.handle.clone());
let pipeline = pipeline::create_render_pipeline(&gpu.device, gpu.surface_format);
let vertex_buffer = gpu.device.create_buffer_init(&wgpu::util::BufferInitDescriptor {
label: Some("Triangle Vertex Buffer"),
contents: bytemuck::cast_slice(TRIANGLE_VERTICES),
usage: wgpu::BufferUsages::VERTEX,
});
self.state = Some(AppState {
window,
gpu,
pipeline,
vertex_buffer,
input: InputState::new(),
timer: GameTimer::new(60),
});
}
fn window_event(
&mut self,
event_loop: &ActiveEventLoop,
_window_id: WindowId,
event: WindowEvent,
) {
let state = match &mut self.state {
Some(s) => s,
None => return,
};
match event {
WindowEvent::CloseRequested => event_loop.exit(),
WindowEvent::KeyboardInput {
event: winit::event::KeyEvent {
physical_key: PhysicalKey::Code(key_code),
state: key_state,
..
},
..
} => {
let pressed = key_state == winit::event::ElementState::Pressed;
state.input.process_key(key_code, pressed);
if key_code == KeyCode::Escape && pressed {
event_loop.exit();
}
}
WindowEvent::Resized(size) => {
state.gpu.resize(size.width, size.height);
}
WindowEvent::CursorMoved { position, .. } => {
state.input.process_mouse_move(position.x, position.y);
}
WindowEvent::MouseInput { state: btn_state, button, .. } => {
let pressed = btn_state == winit::event::ElementState::Pressed;
state.input.process_mouse_button(button, pressed);
}
WindowEvent::MouseWheel { delta, .. } => {
let y = match delta {
winit::event::MouseScrollDelta::LineDelta(_, y) => y,
winit::event::MouseScrollDelta::PixelDelta(pos) => pos.y as f32,
};
state.input.process_scroll(y);
}
WindowEvent::RedrawRequested => {
state.timer.tick();
state.input.begin_frame();
// Fixed update loop
while state.timer.should_fixed_update() {
let _fixed_dt = state.timer.fixed_dt();
}
let output = match state.gpu.surface.get_current_texture() {
Ok(t) => t,
Err(wgpu::SurfaceError::Lost) => {
let (w, h) = state.window.inner_size();
state.gpu.resize(w, h);
return;
}
Err(wgpu::SurfaceError::OutOfMemory) => {
event_loop.exit();
return;
}
Err(_) => return,
};
let view = output.texture.create_view(&wgpu::TextureViewDescriptor::default());
let mut encoder = state.gpu.device.create_command_encoder(
&wgpu::CommandEncoderDescriptor { label: Some("Render Encoder") },
);
{
let mut render_pass = encoder.begin_render_pass(&wgpu::RenderPassDescriptor {
label: Some("Render Pass"),
color_attachments: &[Some(wgpu::RenderPassColorAttachment {
view: &view,
resolve_target: None,
depth_slice: None,
ops: wgpu::Operations {
load: wgpu::LoadOp::Clear(wgpu::Color {
r: 0.1,
g: 0.1,
b: 0.15,
a: 1.0,
}),
store: wgpu::StoreOp::Store,
},
})],
depth_stencil_attachment: None,
occlusion_query_set: None,
timestamp_writes: None,
multiview_mask: None,
});
render_pass.set_pipeline(&state.pipeline);
render_pass.set_vertex_buffer(0, state.vertex_buffer.slice(..));
render_pass.draw(0..3, 0..1);
}
state.gpu.queue.submit(std::iter::once(encoder.finish()));
output.present();
}
_ => {}
}
}
fn about_to_wait(&mut self, _event_loop: &ActiveEventLoop) {
if let Some(state) = &self.state {
state.window.request_redraw();
}
}
}
fn main() {
env_logger::init();
let event_loop = EventLoop::new().unwrap();
let mut app = TriangleApp { state: None };
event_loop.run_app(&mut app).unwrap();
}
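The `while state.timer.should_fixed_update()` loop in the triangle demo is the classic fixed-timestep accumulator. A minimal self-contained sketch of that pattern (the struct below is an assumption for illustration, not the `voltex_platform::GameTimer` implementation):

```rust
// Hypothetical accumulator-based timer; GameTimer's internals may differ.
struct FixedTimer {
    accumulator: f64,
    fixed_dt: f64,
}

impl FixedTimer {
    fn new(hz: u32) -> Self {
        Self { accumulator: 0.0, fixed_dt: 1.0 / hz as f64 }
    }
    /// Called once per frame with the measured frame time.
    fn tick(&mut self, frame_dt: f64) {
        self.accumulator += frame_dt;
    }
    /// Consumes one fixed step if enough time has accumulated.
    fn should_fixed_update(&mut self) -> bool {
        if self.accumulator >= self.fixed_dt {
            self.accumulator -= self.fixed_dt;
            true
        } else {
            false
        }
    }
}

fn main() {
    let mut timer = FixedTimer::new(60); // 60 Hz simulation rate
    timer.tick(0.035); // a ~35 ms frame covers two 16.6 ms fixed steps
    let mut steps = 0;
    while timer.should_fixed_update() {
        steps += 1; // run simulation/physics here at a constant dt
    }
    assert_eq!(steps, 2);
    println!("fixed steps this frame: {steps}");
}
```

Decoupling simulation steps from render frames this way keeps physics deterministic whether the demo renders at 30 or 144 fps.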