New 14-inch NVIDIA Studio laptops, equipped with GeForce RTX 40 Series Laptop GPUs, give creators peak portability with a significant increase in performance over the last generation. AI-dedicated hardware called Tensor Cores powers time-saving tasks in popular apps like DaVinci Resolve. Ray Tracing Cores, together with our neural rendering technology, DLSS 3, boost performance in real-time 3D rendering applications like D5 Render and NVIDIA Omniverse.
NVIDIA also introduced a new method for accelerating video encoding. Simultaneous Scene Encoding sends independent groups of frames, or scenes, to each NVIDIA Encoder (NVENC). With multiple NVENCs fully utilized, video export times can be reduced significantly, without affecting image quality. The first software to integrate the technology is the popular video editing app CapCut.
The May Studio Driver is ready for download now. This month’s release includes support for updates to MAGIX VEGAS Pro, D5 Render and VLC Media Player — in addition to CapCut — plus AI model optimizations for popular apps.
COMPUTEX, Asia’s biggest annual tech trade show, kicks off a flurry of updates, bringing creators new tools and performance from the NVIDIA Studio platform — and plenty of AI power.
During his keynote address at COMPUTEX, NVIDIA founder and CEO Jensen Huang introduced a new generative AI to support game development, NVIDIA Avatar Cloud Engine (ACE) for Games. The platform adds intelligence to non-playable characters (NPCs) in gaming, with AI-powered natural language interactions.
The Kairos demo — a joint venture with Convai led by NVIDIA Creative Director Gabriele Leone — demonstrates how a single model can transform into a living, breathing, lifelike character, this week In the NVIDIA Studio.
Ultraportable, Ultimate Performance
NVIDIA Studio laptops, powered by the NVIDIA Ada Lovelace architecture, are the world’s fastest laptops for creating and gaming.
For the first time, GeForce RTX performance comes to 14-inch devices, transforming the ultraportable market with the ultimate combination of performance and portability.
These purpose-built creative powerhouses do it all, backed by the NVIDIA Studio platform, which supercharges over 110 creative apps, provides lasting stability with NVIDIA Studio Drivers and includes a powerful suite of AI-powered Studio software, such as NVIDIA Omniverse, Canvas and Broadcast.
Fifth-generation Max-Q technologies bring an advanced suite of AI-powered technologies that optimize laptop performance, power and acoustics for peak efficiency. Battery life improves by up to 70%. And DLSS is now optimized for laptops, giving creators incredible 3D rendering performance with DLSS 3 optical multi-frame generation and super resolution in Omniverse and D5 Render, and in hit games like Cyberpunk 2077.
As the ultraportable market heats up, PC laptop makers are giving creators more options than ever. Recently announced models, with more on the way, include the Acer Swift X 14, ASUS Zenbook Pro 14, GIGABYTE Aero 14, Lenovo’s Slim Pro 9i 14 and MSI Stealth 14.
Visit the Studio Shop for the latest GeForce RTX-powered NVIDIA Studio systems and explore the range of high-performance Studio products.
Simultaneous Scene Encoding
The recent release of Video Codec SDK 12.1 added multi-encoder support, which can cut export times in half. Our previously announced split encoding method — which splits a frame and sends each section to an encoder — now has an API that app developers can expose to their end users. Previously, split encoding was engaged automatically only for 4K or higher video and the faster export presets. With this update, developers can simply let users toggle on the option.
Video Codec SDK 12.1 also introduces a new encoding method: simultaneous scene encoding. Video apps can split groups of pictures or scenes as they’re sent into the rendering pipeline. Each group can then be rendered independently and ordered properly on the final output.
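The scheduling pattern described above can be sketched in a few lines. The following is a hypothetical illustration only: the real encoding happens in NVENC hardware through the Video Codec SDK, so here each "encoder" is simulated with a string transform, and `split_into_scenes`, `encode_scene` and `NUM_ENCODERS` are names invented for this sketch.

```python
from concurrent.futures import ThreadPoolExecutor

NUM_ENCODERS = 2  # e.g. a GPU with dual NVENCs

def split_into_scenes(frames, scene_breaks):
    """Split the frame list into independent groups of pictures (scenes)."""
    starts = [0] + scene_breaks
    ends = scene_breaks + [len(frames)]
    return [frames[s:e] for s, e in zip(starts, ends)]

def encode_scene(scene):
    """Stand-in for submitting one scene to a dedicated NVENC instance."""
    return [f"enc({frame})" for frame in scene]

def simultaneous_scene_encode(frames, scene_breaks):
    scenes = split_into_scenes(frames, scene_breaks)
    # Each scene is encoded independently and in parallel; pool.map returns
    # results in submission order, so the final bitstream stays ordered.
    with ThreadPoolExecutor(max_workers=NUM_ENCODERS) as pool:
        encoded = list(pool.map(encode_scene, scenes))
    return [packet for scene in encoded for packet in scene]
```

Because scenes never share frames, no stitching within a frame is needed, which is why this approach avoids the quality cost of split-frame encoding.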
The result is a significant increase in encoding speed — approximately 80% for dual encoders, and further increases when more than two NVENCs are present, like in the NVIDIA RTX 6000 Ada Generation professional GPU. Image quality is also improved compared to current split encoding methods, where individual frames are sent to each encoder and then stitched back together in the final output.
CapCut users will be the first to experience this benefit on RTX GPUs with two or more encoders, starting with the software’s current release, available today.
Massive May Studio Driver Drops
The May Studio Driver features significant upgrades and optimizations.
MAGIX partnered with NVIDIA to move its line of VEGAS Pro AI models to WinML, enabling video editors to apply AI effects much faster.
The driver also optimizes AI features for applications running on WinML, including Adobe Photoshop, Lightroom, MAGIX VEGAS Pro, ON1 and DxO, among many others.
The real-time ray tracing renderer D5 Render also added NVIDIA DLSS 3, delivering a smoother viewport experience for navigating scenes with super fluid motion — massively benefiting architects, designers, interior designers and all professional 3D artists.
D5 Render and DLSS 3 work brilliantly to create photorealistic imagery.
NVIDIA RTX Video Super Resolution — video upscaling technology that uses AI and RTX Tensor Cores to upscale video quality — is now fully integrated into VLC Media Player, no longer requiring a separate download. Learn more.
Gaming’s ACE in the Hole
During his keynote address at COMPUTEX, NVIDIA founder and CEO Jensen Huang introduced NVIDIA ACE for Games, a new foundry that adds intelligence to NPCs in gaming with AI-powered natural language interactions.
Game developers and studios can use ACE for Games to build and deploy customized speech, conversation and animation AI models in their software and games. The AI technology can transform entire worlds, breathing new life into individuals, groups or an entire town’s worth of characters — the sky’s the limit.
ACE for Games builds on technology inside NVIDIA Omniverse, an open development platform for building and operating metaverse applications, including optimized AI foundation models for speech, conversation and character animation.
This includes the NVIDIA NeMo for conversational AI fine-tuned for game characters, NVIDIA Riva for automatic speech recognition and text-to-speech, and Omniverse Audio2Face for instantly creating expressive facial animation of game characters to match any speech tracks. Audio2Face features Omniverse connectors for Unreal Engine 5, so developers can add facial animation directly to MetaHuman characters.
Seeing Is Believing: Kairos Demo
Huang debuted ACE for Games for COMPUTEX attendees — and provided a sneak peek at the future of gaming — in a demo dubbed Kairos.
Convai, an NVIDIA Inception startup, specializes in cutting-edge conversational AI for virtual game worlds. NVIDIA Lightspeed Studios, led by Creative Director and 3D artist Gabriele Leone, built the remarkably realistic scene and demo. Together, they’ve showcased the opportunity developers have to use NVIDIA ACE for Games to build NPCs.
In the demo, players interact with Jin, the proprietor of a ramen shop. The photorealistic shop was modeled after the virtual ramen shop built in NVIDIA Omniverse.
For this, an NVIDIA artist traveled to a real ramen restaurant in Tokyo and collected over 2,000 high-resolution reference images and videos, capturing the kitchen's distinct areas for cooking, cleaning, food preparation and storage. "We probably used 70% of the existing models, 30% new and 80% retextures," said Leone.
Kairos: beautifully rendered in Autodesk Maya, Blender, Unreal Engine 5 and NVIDIA Omniverse.
In the digital ramen shop, objects were modeled in Autodesk 3ds Max with RTX-accelerated AI denoising, and in Blender, which benefits from RTX-accelerated OptiX ray tracing for smooth, interactive movement in the viewport — all powered by the team's arsenal of GeForce RTX 40 Series GPUs.
The texture phase in Adobe Substance 3D Painter used NVIDIA Iray rendering technology with RTX-accelerated light and ambient occlusion, baking large assets in mere moments.
Next, Omniverse and the Audio2Face app, via the Unreal Engine 5 Connector, allowed the team to add facial animation and audio directly to the ramen shop NPC.
Although he is an NPC, Jin replies to natural language realistically and consistently with his narrative backstory — all with the help of generative AI.
Lighting and animation work was done in Unreal Engine 5, aided by NVIDIA DLSS, which uses AI to upscale frames rendered at lower resolution while retaining high-fidelity detail — again increasing interactivity in the viewport for the team.