Roblox 3D and 4D Breakthroughs at SIGGRAPH 2024

Roblox relentlessly innovates to build a 3D immersive platform where millions of creators make avatars, accessories, and experiences that allow people all over the world to connect with each other.

At SIGGRAPH, the premier worldwide conference for computer graphics and interactive techniques, we will share new technological and algorithmic breakthroughs. The work we’re sharing includes new methods to enable 3D materials that stretch like rubber, a faster way to produce avatars with facial animation, and hair that moves in a more lifelike way. Together, these represent strong theoretical results and early-stage prototypes that point toward the future of immersive 3D. Join our sessions at SIGGRAPH in Denver for the full technical details.

Increasingly Detailed Avatars

Avatars are the heart of personal expression on Roblox, with full facial animation, configurable bodies, layered clothing, and consistent appearance across the platform. Our recent Digital Expressions Report found that 88 percent of Gen Z respondents say that expressing themselves in metaverse-like worlds such as Roblox has likely helped them more comfortably express themselves in their everyday lives. To support this self-expression, we continue to advance the state of the art in avatar technology.

Creating a new avatar from a 3D mesh traditionally requires several stages of highly technical work, including caging, rigging, and skinning, which can take up to a week per avatar, even for professional creators. This is one of the challenges of 4D generation: extending static 3D assets to be fully dynamic, interactive, and combinatorial so that they can come to life in a metaverse experience. Additional work is then required to ensure compatibility with our platform’s advanced clothing and facial expression features.

In their talk “End-to-end Automatic Body and Face Setup for Generative or User-Created 3D Avatars,” Roblox’s Avatar and CoreAI teams present a multistage pipeline that combines machine learning and geometry-processing techniques. This method makes the avatar creation process significantly faster and easier, and it empowers less-experienced creators to design and upload their own fully functional avatars.

You can experience the impact of this technology on the platform via Avatar Auto Setup, an automated system that converts an input geometry-only model into a Roblox-compatible, customizable, animation-ready, rigged, and skinned avatar. With this system, a process that once took up to a week can now be completed in minutes.

One popular way for people to express themselves on Roblox is by changing their avatars’ hair. In 2023 alone, Roblox users purchased more than 139 million hairstyles, and 7.3 million users bought five or more hairstyles. But achieving a realistic hairstyle, with each strand moving as it would in the physical world, is extremely challenging. On average, the human scalp has somewhere between 100K and 150K hair follicles. Simulating, storing, and transferring complex geometries at that scale is difficult in terms of both computational efficiency and robustness.

In their paper “Real-time Physically Guided Hair Interpolation,” Roblox’s Cem Yuksel and his colleagues from LightSpeed Studios and the University of Utah present a novel physically guided hair interpolation scheme that builds on existing simulated guide-hair data. This work substantially improves the visual quality of hair in experiences, with next to no overhead.
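The paper’s exact scheme is best read from the paper itself, but the guide-hair idea it builds on is simple to illustrate: simulate only a small set of guide strands, then blend them into the many strands that are actually rendered. The sketch below is a minimal NumPy illustration of that blending step; the function name and array layout are ours, not from the paper.

```python
import numpy as np

def interpolate_strands(guides, weights):
    """Blend a few simulated guide strands into dense render strands.

    guides:  (G, P, 3) array -- G guide strands with P points each
    weights: (R, G) array    -- per-render-strand blend weights (rows sum to 1)
    returns: (R, P, 3) array of interpolated render strands
    """
    return np.einsum("rg,gpd->rpd", weights, guides)

# Two straight guide strands (3 points each) and one render strand
# blended halfway between them.
guides = np.array([
    [[0.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 2.0, 0.0]],
    [[1.0, 0.0, 0.0], [1.0, 1.0, 0.0], [1.0, 2.0, 0.0]],
])
weights = np.array([[0.5, 0.5]])
rendered = interpolate_strands(guides, weights)
```

Here each render strand is a fixed convex combination of guide strands; the paper’s contribution is making that interpolation physically guided rather than purely geometric.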

Realistic 3D Simulation and Rendering

Creators on Roblox create not only the experiences people join to play or connect, but also the objects that populate those experiences. As Roblox becomes available on more platforms, ranging from lower-resolution Android phones to high-resolution gaming consoles and VR headsets, it’s important that these objects render at the best quality possible on each user’s device.

Light and shadows often pose a challenge to 3D rendering. Recent research has made great strides, but existing methods can produce blurring under camera effects such as depth of field and antialiasing. In collaboration with NVIDIA and the University of Utah, Roblox’s Cem Yuksel presents “Area ReSTIR: Resampling for Real-Time Defocus and Antialiasing.” This work introduces area sampling to ReSTIR, resolving these camera effects more efficiently. The end result is sharper definition between light and shadow and greater detail, with fewer samples required.
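Area ReSTIR’s specifics are in the paper, but the ReSTIR family of methods is built on weighted reservoir sampling: keeping one sample from a stream of candidates, chosen with probability proportional to its weight, in constant memory. Below is a minimal Python sketch of that general primitive, not Roblox’s renderer.

```python
import random

class Reservoir:
    """Streaming weighted reservoir (the core primitive behind ReSTIR).

    Keeps one sample from a stream, chosen with probability
    proportional to its weight, using O(1) memory.
    """
    def __init__(self, rng=random.random):
        self.sample = None
        self.w_sum = 0.0
        self.rng = rng

    def update(self, sample, weight):
        # Replace the kept sample with probability weight / total weight.
        self.w_sum += weight
        if self.w_sum > 0 and self.rng() < weight / self.w_sum:
            self.sample = sample

# Stream two candidate light samples; the heavier one is kept more often.
random.seed(1)
counts = {"a": 0, "b": 0}
for _ in range(10000):
    r = Reservoir()
    r.update("a", 1.0)
    r.update("b", 3.0)
    counts[r.sample] += 1
# "b" should be selected roughly three times as often as "a".
```

The same idea extends to merging reservoirs across pixels and frames, which is what makes ReSTIR-style spatiotemporal reuse efficient.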

A street scene shown with the previous version of ReSTIR.

The same street scene shown with our new Area ReSTIR, with improvements in lighting and shadows.

In their paper “A Unified Differentiable Boolean Operator with Fuzzy Logic,” Roblox’s Hsueh-Ti Derek Liu and colleagues present a method that enables generative AI for constructive solid geometry (CSG) 3D representations. Roblox’s physics simulator derives its robustness from the solid modeling used in the engineering industry via CSG, which also simplifies the creation of plausible virtual shapes. The entertainment industry’s thin-surface modeling is more common but doesn’t represent the volume inside an object. CSG was previously incompatible with generative AI because training requires differentiation, which in turn requires operations that vary smoothly as shapes evolve; hard Boolean operations do not. By inventing a new mathematical primitive for performing “fuzzy” versions of these operations, we unlocked differentiable CSG, and we then built a CSG generative AI from it.
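The paper defines its own unified differentiable operator; as a rough illustration of what makes “fuzzy” Booleans differentiable, the classic fuzzy-logic relaxations below replace hard set operations on occupancy values in [0, 1] with smooth arithmetic, and reduce to the crisp Booleans at 0 and 1. These formulas are standard fuzzy logic, used here for illustration, and are not necessarily the paper’s operator.

```python
def fuzzy_intersection(a, b):
    # Product t-norm: a smooth stand-in for logical AND.
    return a * b

def fuzzy_union(a, b):
    # Probabilistic sum: a smooth stand-in for logical OR.
    return a + b - a * b

def fuzzy_difference(a, b):
    # A minus B: AND with the complement of B.
    return a * (1.0 - b)

# Occupancy values: 1.0 means fully inside a shape, 0.0 fully outside.
inside_a, inside_b = 0.9, 0.2
union = fuzzy_union(inside_a, inside_b)          # near 1: inside either shape
overlap = fuzzy_intersection(inside_a, inside_b) # small: unlikely inside both
```

Because each operator is a polynomial in its inputs, gradients flow through an entire CSG tree built from them, which is the property that hard min/max-style Booleans lack.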


Accurately simulating elastic materials such as rubber is notoriously challenging in computer graphics. In their paper “Stabler Neo-Hookean Simulation: Absolute Eigenvalue Filtering for Projected Newton,” Roblox’s Liu and colleagues present a novel method to stabilize such simulations. The new method requires changing only a single line of code in an existing framework and achieves a significant improvement in both stability and convergence speed. The resulting models retain a more stable shape when stretched.
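The essence of the single-line change can be sketched as follows: classic projected Newton clamps the negative eigenvalues of each local Hessian to zero before solving, while the absolute-eigenvalue filter flips them positive instead, keeping the filtered matrix full rank. This is a simplified NumPy illustration on a toy 2×2 symmetric matrix, not the production simulator, which applies the filter to per-element elasticity Hessians.

```python
import numpy as np

def project_clamp(H, eps=0.0):
    """Classic projected Newton filter: clamp negative eigenvalues."""
    lam, Q = np.linalg.eigh(H)
    return Q @ np.diag(np.maximum(lam, eps)) @ Q.T

def project_abs(H):
    """Absolute-eigenvalue filter: flip negative eigenvalues positive.

    Versus clamping, this one-line change keeps the filtered Hessian
    full rank, so the Newton step stays well defined.
    """
    lam, Q = np.linalg.eigh(H)
    return Q @ np.diag(np.abs(lam)) @ Q.T

# An indefinite toy Hessian with eigenvalues +3 and -1: clamping
# zeroes out one mode, while the absolute filter preserves it.
H = np.array([[1.0, 2.0],
              [2.0, 1.0]])
```

Swapping `np.maximum(lam, eps)` for `np.abs(lam)` is exactly the kind of one-line edit the paper describes, though the paper’s analysis of why it converges faster is the real contribution.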



Previous general-purpose simulation methods forced a trade-off between the realism of the simulation and the computational resources it requires. In their paper “Vertex Block Descent,” Roblox’s Yuksel and colleagues from the University of Utah present a novel method that delivers fast and robust physics simulation, outperforming previous approaches to 3D dynamics in both speed and stability.
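Vertex Block Descent minimizes the implicit-Euler energy of a time step one vertex at a time, Gauss-Seidel style. The toy sketch below applies that block-coordinate-descent idea to a two-particle spring system; it simplifies the paper’s method by using a scalar stiffness estimate in place of the full per-vertex local Hessian, and the function and parameter names are ours.

```python
import numpy as np

def vbd_step(x, v, masses, edges, rest, k, h, iters=10):
    """One implicit-Euler time step via per-vertex block coordinate descent.

    Minimizes E(x) = sum_i m_i/(2 h^2) ||x_i - y_i||^2 + spring energy,
    where y = x + h*v is the inertial prediction, by sweeping over the
    vertices Gauss-Seidel style and taking a small local step for each.
    """
    y = x + h * v                  # inertial target positions
    x_new = y.copy()
    for _ in range(iters):
        for i in range(len(x)):
            # Gradient and a simplified (scalar) Hessian for vertex i.
            g = masses[i] / h**2 * (x_new[i] - y[i])
            Hd = masses[i] / h**2
            for (a, b), r in zip(edges, rest):
                if i not in (a, b):
                    continue
                j = b if i == a else a
                d = x_new[i] - x_new[j]
                length = np.linalg.norm(d)
                g += k * (length - r) * d / length   # spring gradient
                Hd += k                              # crude local stiffness
            x_new[i] = x_new[i] - g / Hd             # local vertex update
    v_new = (x_new - x) / h
    return x_new, v_new

# Two particles joined by a spring (rest length 1) stretched to 1.5:
# one step should pull them back toward the rest length.
x0 = np.array([[0.0, 0.0], [1.5, 0.0]])
v0 = np.zeros_like(x0)
x1, v1 = vbd_step(x0, v0, masses=np.array([1.0, 1.0]),
                  edges=[(0, 1)], rest=[1.0], k=100.0, h=0.05)
```

Because each vertex update touches only that vertex’s neighbors, sweeps of this kind parallelize well, which is part of what makes the approach fast in practice.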

For nearly two decades, our platform and the community thriving on it have been empowered by technical innovation, thanks to Roblox’s heavy investments in R&D. Great R&D requires risk and honest evaluation. Not all of our R&D investigations achieve results that are of the right form or at the right time to become product features, and the work described in this article is speculative and forward-looking. However, we’re pleased that some of the new techniques described here are already part of tools that are available to Roblox creators, unlocking more realistic avatars and 3D worlds. All investigations are steps toward advancing the field as a whole and the technology for 3D immersive platforms.

Across AI, avatars, physics, and graphics, we are excited to share a set of new advances with the world at SIGGRAPH 2024.