Decoupling Full-Body Motion: Introducing a Stratified Approach to Solve Sparse Observation Challenge. Posted October 21, 2025 by Zaddy. Categories: 3d-avatar-generation, arvr-immersion, deep-learning, diffusion-models, full-body-reconstruction, generative-models, kinematic-tree, latent-diffusion-model
From Images to Programs: A Denoising Diffusion Method for Inverse Graphics. Posted September 24, 2025 by Photosynthesis Technology: It's not just for plants! Categories: boolean-operators, code-generation, denoising-diffusion-models, diffusion-models, neural-networks, program-synthesis, syntax-trees, tree-diffusion
What Is a Diffusion LLM and Why Does It Matter? Posted March 1, 2025 by Bruce Li. Categories: ai, auto-regressive-llms, coding, diffusion-large-language-model, diffusion-llm, diffusion-models, llm, what-is-a-diffusion-llm
Coin3D Outperforms Image-Based Methods in 3D Generation Accuracy. Posted February 4, 2025 by Rendering Technology Breakthroughs. Categories: 3d-asset-generation, coarse-geometry-proxy, coin3d, diffusion-models, interactive-3d-generation, interactive-3d-modeling, proxy-guided-conditioning, volumetric-shape-control
Coin3D Enables Real-Time 3D Editing and Interactive Previews. Posted February 4, 2025 by Rendering Technology Breakthroughs. Categories: 3d-asset-generation, coarse-geometry-proxy, coin3d, diffusion-models, interactive-3d-generation, interactive-3d-modeling, proxy-guided-conditioning, volumetric-shape-control
Coin3D Advances 3D Generation with Precise Control and Interactivity. Posted February 4, 2025 by Rendering Technology Breakthroughs. Categories: 3d-asset-generation, coarse-geometry-proxy, coin3d, diffusion-models, interactive-3d-generation, interactive-3d-modeling, proxy-guided-conditioning, volumetric-shape-control
Coin3D Integrates 3D-Aware Control for Interactive Object Generation. Posted February 4, 2025 by Rendering Technology Breakthroughs. Categories: 3d-asset-generation, coarse-geometry-proxy, coin3d, diffusion-models, interactive-3d-generation, interactive-3d-modeling, proxy-guided-conditioning, volumetric-shape-control
Coin3D Introduces a New Standard for Interactive 3D Asset Generation. Posted February 4, 2025 by Rendering Technology Breakthroughs. Categories: 3d-asset-generation, coarse-geometry-proxy, coin3d, diffusion-models, interactive-3d-generation, interactive-3d-modeling, proxy-guided-conditioning, volumetric-shape-control
Coin3D Achieves Superior Control and Efficiency in 3D Generation. Posted February 4, 2025 by Rendering Technology Breakthroughs. Categories: 3d-asset-generation, coarse-geometry-proxy, coin3d, diffusion-models, interactive-3d-generation, interactive-3d-modeling, proxy-guided-conditioning, volumetric-shape-control
Coin3D Enables Interactive and Controllable 3D Asset Generation. Posted February 4, 2025 by Rendering Technology Breakthroughs. Categories: 3d-asset-generation, coarse-geometry-proxy, coin3d, diffusion-models, interactive-3d-generation, interactive-3d-modeling, proxy-guided-conditioning, volumetric-shape-control
Coin3D Optimizes Training and Evaluation for High-Fidelity 3D Generation. Posted February 4, 2025 by Rendering Technology Breakthroughs. Categories: 3d-asset-generation, coarse-geometry-proxy, coin3d, diffusion-models, interactive-3d-generation, interactive-3d-modeling, proxy-guided-conditioning, volumetric-shape-control
Wonder3D: What Is Cross-Domain Diffusion? Posted January 1, 2025 by Ringi. Categories: 2d-stable-diffusion-models, cross-domain-attention, cross-domain-diffusion, cross-domain-diffusion-details, diffusion-models, domain-switcher, what-is-cross-domain-diffusion, wonder3d
Wonder3D: A Look At Our Method and Consistent Multi-view Generation. Posted January 1, 2025 by Ringi. Categories: 2d-diffusion-models, diffusion-models, multi-view-generation, mvdream, what-is-wonder3d, wonder3d, wonder3d-explained, xiaoxiao-long
Wonder3D: Learn More About Diffusion Models. Posted January 1, 2025 by Ringi. Categories: diffusion-models, diffusion-models-explained, image-diffusion-models, reverse-markov-chain, what-is-wonder3d, wonder3d, wonder3d-explained, xiaoxiao-long
Wonder3D: 3D Generative Models and Multi-View Diffusion Models. Posted January 1, 2025 by Ringi. Categories: 3d-generative-models, 3d-reconstruction, diffusion-models, multi-view-diffusion-models, mvdream, syncdreamer, viewset-diffusion, wonder3d
2D Diffusion Models for 3D Generation: How They’re Related to Wonder3D. Posted January 1, 2025 by Ringi. Categories: 2d-diffusion-models, 3d-generation, 3d-synthesis, diffusion-models, sparseneus, textto-3d, what-is-wonder3d, wonder3d
Image Generation: Using Diffusion Networks Explained. Posted October 17, 2024 by Ilia. Categories: ai-image-generation, diffusion-models, diffusion-networks, diffusion-process, image-generation, latent-diffusion-model, stable-diffusion, text-to-image-diffusion
FlowVid: Taming Imperfect Optical Flows: Webpage Demo and Quantitative Comparisons. Posted October 9, 2024 by Kinetograph: The Video Editing Technology Publication. Categories: diffusion-models, flowvid, image-to-image-synthesis, spatial-conditions, temporal-consistency, temporal-optical-flow, v2v-synthesis-framework, video-to-video-synthesis
FlowVid: Taming Imperfect Optical Flows: Conclusion, Acknowledgments and References. Posted October 9, 2024 by Kinetograph: The Video Editing Technology Publication. Categories: diffusion-models, flowvid, image-to-image-synthesis, spatial-conditions, temporal-consistency, temporal-optical-flow, v2v-synthesis-framework, video-to-video-synthesis
FlowVid: Taming Imperfect Optical Flows: Ablation Study and Limitations. Posted October 9, 2024 by Kinetograph: The Video Editing Technology Publication. Categories: diffusion-models, flowvid, image-to-image-synthesis, spatial-conditions, temporal-consistency, temporal-optical-flow, v2v-synthesis-framework, video-to-video-synthesis
FlowVid: Taming Imperfect Optical Flows for Consistent Video-to-Video Synthesis: Quantitative Results. Posted October 9, 2024 by Kinetograph: The Video Editing Technology Publication. Categories: diffusion-models, flowvid, image-to-image-synthesis, spatial-conditions, temporal-consistency, temporal-optical-flow, v2v-synthesis-framework, video-to-video-synthesis
FlowVid: Taming Imperfect Optical Flows for Consistent Video-to-Video Synthesis: Qualitative Results. Posted October 9, 2024 by Kinetograph: The Video Editing Technology Publication. Categories: diffusion-models, flowvid, image-to-image-synthesis, spatial-conditions, temporal-consistency, temporal-optical-flow, v2v-synthesis-framework, video-to-video-synthesis
FlowVid: Taming Imperfect Optical Flows for Consistent Video-to-Video Synthesis: Settings. Posted October 9, 2024 by Kinetograph: The Video Editing Technology Publication. Categories: diffusion-models, flowvid, image-to-image-synthesis, spatial-conditions, temporal-consistency, temporal-optical-flow, v2v-synthesis-framework, video-to-video-synthesis
FlowVid: Taming Imperfect Optical Flows: Generation: Edit the First Frame Then Propagate. Posted October 9, 2024 by Kinetograph: The Video Editing Technology Publication. Categories: diffusion-models, flowvid, image-to-image-synthesis, spatial-conditions, temporal-consistency, temporal-optical-flow, v2v-synthesis-framework, video-to-video-synthesis
FlowVid: Taming Imperfect Optical Flows: Training With Joint Spatial-temporal Conditions. Posted October 9, 2024 by EScholar: Electronic Academic Papers for Scholars. Categories: diffusion-models, flowvid, image-to-image-synthesis, spatial-conditions, temporal-consistency, temporal-optical-flow, v2v-synthesis-framework, video-to-video-synthesis
FlowVid: Taming Imperfect Optical Flows: Inflating Image U-Net to Accommodate Video. Posted October 9, 2024 by Kinetograph: The Video Editing Technology Publication. Categories: diffusion-models, flowvid, image-to-image-synthesis, spatial-conditions, temporal-consistency, temporal-optical-flow, v2v-synthesis-framework, video-to-video-synthesis
The Chosen One: Consistent Characters in Text-to-Image Diffusion Models: Related Work. Posted July 18, 2024 by Gamifications. Categories: computing-methodologies, diffusion-models, generative-models, machine-learning, neural-networks, story-visualization, text-to-image, text-to-image-diffusion-models
The Chosen One: Consistent Characters in Text-to-Image Diffusion Models: Abstract and Introduction. Posted July 18, 2024 by Gamifications. Categories: computing-methodologies, diffusion-models, generative-models, machine-learning, neural-networks, story-visualization, text-to-image, text-to-image-diffusion-models
The Chosen One: Consistent Characters in Text-to-Image Diffusion Models: Societal Impact. Posted July 18, 2024 by Gamifications. Categories: computing-methodologies, diffusion-models, generative-models, machine-learning, neural-networks, story-visualization, text-to-image, text-to-image-diffusion-models
The Chosen One: Consistent Characters in Text-to-Image Diffusion Models: Additional Experiments. Posted July 18, 2024 by Gamifications. Categories: computing-methodologies, diffusion-models, generative-models, machine-learning, neural-networks, story-visualization, text-to-image, text-to-image-diffusion-models
The Chosen One: Consistent Characters in Text-to-Image Diffusion Models: Method. Posted July 18, 2024 by Gamifications. Categories: computing-methodologies, diffusion-models, generative-models, machine-learning, neural-networks, story-visualization, text-to-image, text-to-image-diffusion-models
The Chosen One: Consistent Characters in Text-to-Image Diffusion Models: Limitations and Conclusions. Posted July 18, 2024 by Gamifications. Categories: computing-methodologies, diffusion-models, generative-models, machine-learning, neural-networks, story-visualization, text-to-image, text-to-image-diffusion-models
The Chosen One: Consistent Characters in Text-to-Image Diffusion Models: References. Posted July 18, 2024 by Gamifications. Categories: computing-methodologies, diffusion-models, generative-models, machine-learning, neural-networks, story-visualization, text-to-image, text-to-image-diffusion-models
The Chosen One: Consistent Characters in Text-to-Image Diffusion Models: Experiments. Posted July 18, 2024 by Gamifications. Categories: computing-methodologies, diffusion-models, generative-models, machine-learning, neural-networks, story-visualization, text-to-image, text-to-image-diffusion-models
AutoStory: Generating Diverse Storytelling Images with Minimal Human Effort: Experiments. Posted July 17, 2024 by UserStory. Categories: autostory, computing-methodologies, diffusion-models, generative-models, low-rank-adaptation, machine-learning, neural-networks, story-visualization
AutoStory: Generating Diverse Storytelling Images with Minimal Human Effort: Our Method. Posted July 17, 2024 by UserStory. Categories: autostory, computing-methodologies, diffusion-models, generative-models, low-rank-adaptation, machine-learning, neural-networks, story-visualization
AutoStory: Generating Diverse Storytelling Images with Minimal Human Effort: Related Work. Posted July 17, 2024 by UserStory. Categories: autostory, computing-methodologies, diffusion-models, generative-models, low-rank-adaptation, machine-learning, neural-networks, story-visualization
AutoStory: Generating Diverse Storytelling Images with Minimal Effort: Conclusion and References. Posted July 17, 2024 by UserStory. Categories: autostory, computing-methodologies, diffusion-models, generative-models, low-rank-adaptation, machine-learning, neural-networks, story-visualization