Roblox RDC 2025 spills its secrets: announcements that reshape creators, players, and safety, all in one electrifying keynote.
In the RDC 2025 livestream, Roblox CEO David Baszucki revealed how the platform has grown over the past year: daily active users climbed to 112 million, peak concurrent players topped 45 million, annual developer earnings crossed the $1 billion mark, and the Brazil servers are now online. With that, let's break down every major announcement from RDC 2025 under the five key pillars the livestream focused on.
New Standards for Safe Play
Roblox introduced new player safety standards focused on supervision, content moderation, and transparency. The platform will expand parental controls, giving guardians better oversight of chat and spending features. Developers are required to meet updated age-appropriate guidelines for their experiences, supported by automated tools that detect risky interactions or unsafe content. Roblox also announced clearer reporting systems and faster review processes to ensure safer play for all age groups.
Today we're announcing our plan to roll out platform-wide age estimation, designed to set a new standard for online safety. Learn more about the 100+ safety initiatives we shipped this year: https://t.co/KRIpYzwCVG
— Roblox (@Roblox) September 3, 2025
Roblox reaffirmed its commitment to a safe and civil platform, unveiling more than 100 safety innovations designed to help users connect in more verified and meaningful ways. Instead of simply typing in their age, users must now submit official identification and a photo to verify their identity.
Advanced moderation tools for voice chat will be introduced to keep communication clean, and the experience guidelines have been updated to match the community's needs. The most restrictive rating is also moving from 17+ to 18+. This push brings Roblox closer to industry safety standards, helping ensure every user can enjoy the platform responsibly.
100k Player Servers for Experiences
Roblox announced a major leap in server capacity, introducing support for experiences hosting up to 100,000 concurrent players. This upgrade gives developers the tools to build massive shared environments, from large-scale concerts to community hubs and open-world simulations. It marks a technical milestone for Roblox, pushing the boundaries of multiplayer design and offering creators new ways to connect players on an unprecedented scale.
Roblox is entering its most ambitious technical phase yet, with a focus on interoperability. David Baszucki says Roblox aims to let any avatar, regardless of its attire, interact with anything else in any experience. Beyond that, other goals include supporting 100,000 players at once and enabling large-scale experiences such as battle royale shooters.
More advanced avatar movements such as strafing, crouching, and sprinting are coming soon to Roblox.#Roblox #RDC25 $RBLX pic.twitter.com/CFH5dAr0me
— Bloxy News (@Bloxy_News) September 5, 2025
The graphics will be photorealistic with near-zero latency. Players will be able to join experiences instantly, without downloading anything, and developers will be able to push updates at any time. A low-spec device with as little as 2 GB of RAM was even shown running high-fidelity Roblox worlds.
Although still experimental, one of the most groundbreaking introductions concerned AI-powered creation and voice integration.
How will the new voice and text APIs work for NPCs?
Roblox’s new voice and text APIs, announced at RDC 2025, enable developers to create dynamic, conversational NPCs by chaining Speech-to-Text (STT), Text Generation, and Text-to-Speech (TTS) services. Players speak into their microphone (with permission granted), STT converts the audio to text in real-time, Text Generation crafts context-aware NPC responses, and TTS voices those replies for immersive back-and-forth interactions.
Implementation Steps
Developers enable these beta APIs in Roblox Studio and monitor usage via observability dashboards. For example, a player speaks a command like "attack the monster," STT transcribes it, Text Generation produces a dynamic NPC reply (e.g., casting a spell), and TTS narrates the response while the matching gameplay is triggered. This supports voice-controlled mechanics, subtitles for accessibility, and multilingual potential, though the APIs are currently English-focused, with echo cancellation and latency improvements planned.
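The chained pipeline described above can be sketched in platform-agnostic Python. Roblox's actual APIs are Luau-based and not modeled here; `speech_to_text`, `text_to_speech`, and the keyword-driven reply logic are hypothetical stand-ins that only illustrate the STT → Text Generation → TTS flow for one conversational turn.

```python
from dataclasses import dataclass, field

def speech_to_text(audio: bytes) -> str:
    # Stub: a real STT service would transcribe an audio buffer.
    # For this sketch we pretend the "audio" already contains the words.
    return audio.decode("utf-8")

@dataclass
class NPC:
    name: str
    history: list = field(default_factory=list)

    def generate_reply(self, player_text: str) -> str:
        # Stub text generation: keyword-driven, with per-NPC history
        # standing in for the context-aware model described above.
        self.history.append(player_text)
        if "attack" in player_text.lower():
            return f"{self.name} casts a fireball at the monster!"
        return f"{self.name} says: I heard you say '{player_text}'."

def text_to_speech(text: str) -> bytes:
    # Stub: a real TTS service would synthesize audio for playback.
    return text.encode("utf-8")

def handle_player_utterance(npc: NPC, mic_audio: bytes) -> bytes:
    """Chain STT -> Text Generation -> TTS for one conversational turn."""
    player_text = speech_to_text(mic_audio)       # 1. transcribe speech
    reply_text = npc.generate_reply(player_text)  # 2. generate the reply
    return text_to_speech(reply_text)             # 3. voice the reply

npc = NPC("Guard")
audio_out = handle_player_utterance(npc, b"attack the monster")
print(audio_out.decode())  # Guard casts a fireball at the monster!
```

In a real experience, the voiced reply would also trigger gameplay (here, the fireball cast) and could be mirrored as subtitles for accessibility.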
Key Limitations
Mic access is opt-in, so not all players can participate, and responses must align with Roblox Community Standards: there are no restricted keywords yet, but wake words like "Hey Roblox" may be reserved later. Free usage tiers exist, with paid Roblox Extended Services for high-volume experiences.
How to implement real-time voice chat for NPCs in Unity
Implementing real-time voice chat for NPCs in Unity involves integrating speech-to-text (STT), language models for responses, and text-to-speech (TTS) into a pipeline that processes player audio input and generates NPC audio output with low latency.
Core Pipeline
Player microphone input feeds into an STT service like OpenAI Whisper via Unity’s Sentis library for real-time transcription, which then prompts an LLM (e.g., GPT-4 or Inworld AI) to generate context-aware NPC dialogue. The response text converts to speech using TTS APIs such as ElevenLabs or Google Cloud TTS, playing back through Unity’s AudioSource while syncing lip animations via tools like NVIDIA Audio2Face. Code examples typically use async HttpClient calls for API requests and coroutines to handle streaming audio playback without blocking the main thread.
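The key design point in this pipeline is that the STT → LLM → TTS round trip must never block the game's frame loop. The following asyncio sketch (Python used purely for illustration; in Unity this role is played by async `HttpClient` calls and coroutines) shows the pattern: the pipeline runs as a background task while the simulated frame tick keeps advancing. The `fake_remote_call` delays are placeholders for real HTTP round trips.

```python
import asyncio

async def fake_remote_call(stage: str, payload: str, delay: float) -> str:
    # Stands in for an HTTP round trip to an STT/LLM/TTS provider.
    await asyncio.sleep(delay)
    return f"{stage}({payload})"

async def voice_pipeline(mic_text: str) -> str:
    # Sequential STT -> LLM -> TTS, but awaited so nothing blocks.
    text = await fake_remote_call("stt", mic_text, 0.02)
    reply = await fake_remote_call("llm", text, 0.02)
    audio = await fake_remote_call("tts", reply, 0.02)
    return audio

async def main() -> str:
    frames = 0
    task = asyncio.create_task(voice_pipeline("hello npc"))
    while not task.done():          # the "game loop" keeps ticking
        frames += 1
        await asyncio.sleep(0.005)  # one simulated frame (~5 ms)
    print(f"ticked {frames} frames while the pipeline ran")
    return task.result()

result = asyncio.run(main())
print(result)  # tts(llm(stt(hello npc)))
```

The same shape carries over to Unity: kick off the awaitable pipeline, let `Update()` keep rendering, and consume the synthesized clip when the task completes.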
Setup Steps
- Environment: Install Unity packages such as Animation Rigging and import a rigged NPC model from Blender.
- STT Integration: Use Sentis for local Whisper inference or cloud APIs; capture microphone input via Unity's Microphone class and process it in chunks for <500 ms latency.
- LLM Handling: Seed models with an NPC personality via plugins from Inworld or Convai, sending transcribed text as prompts.
- TTS and Lip-Sync: Stream TTS audio to an AudioSource; map phonemes to blendshapes with Audio2Face or open-source alternatives like MuseTalk.
- Optimization: Run heavy processing (e.g., Audio2Face) on a separate thread or server; add visual indicators for listening states.
Example Code Snippet
```csharp
using UnityEngine;
using System.Threading.Tasks;

public class NPCTalker : MonoBehaviour
{
    private AudioSource audioSource;
    private string npcContext = "You are a village guard."; // NPC personality seed

    private void Awake()
    {
        audioSource = GetComponent<AudioSource>();
    }

    // STTService, LLMService, TTSService, and LipSyncController are
    // project-specific wrappers around your chosen STT/LLM/TTS providers.
    private async Task HandleVoiceInput(AudioClip micClip)
    {
        // STT: transcribe the player's microphone clip to text
        string playerText = await STTService.Transcribe(micClip);

        // LLM: generate a context-aware NPC response
        string npcResponse = await LLMService.Generate(playerText, npcContext);

        // TTS: synthesize and play the reply, syncing lip animation
        AudioClip speechClip = await TTSService.Synthesize(npcResponse);
        audioSource.PlayOneShot(speechClip);
        LipSyncController.ApplyLipSync(speechClip);
    }
}
```
This setup supports multiplayer via WebRTC (e.g., Agora), but it requires user mic permissions and should fall back to text chat when they are denied. Test for end-to-end latency under 300 ms and ensure compliance with platform audio policies.
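A simple way to verify that latency target is to time each stage of a conversational turn against a fixed budget. The sketch below uses instant stub functions in place of the real STT/LLM/TTS calls; in an actual build you would wrap the real service calls with the same `timed` helper and log turns that exceed the budget.

```python
import time

BUDGET_MS = 300.0  # end-to-end target from the text above

def timed(fn, *args):
    # Run fn and return (result, elapsed milliseconds).
    start = time.perf_counter()
    out = fn(*args)
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    return out, elapsed_ms

# Placeholder stages; swap in the real service calls when profiling.
def stt(audio): return "attack the monster"
def llm(text): return "The guard attacks!"
def tts(text): return b"\x00" * 256  # fake synthesized audio bytes

def run_turn(audio):
    """Time one full STT -> LLM -> TTS turn and return (clip, total ms)."""
    total = 0.0
    text, ms = timed(stt, audio);  total += ms
    reply, ms = timed(llm, text);  total += ms
    clip, ms = timed(tts, reply);  total += ms
    return clip, total

clip, total_ms = run_turn(b"...")
print(f"round trip: {total_ms:.2f} ms, within budget: {total_ms < BUDGET_MS}")
```

Logging per-stage times (not just the total) makes it obvious which hop, transcription, generation, or synthesis, is eating the budget.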