
A Significantly More Generalized Approach Replacing Discrete Latent Variables with Natural Language Conditioning via a Text-Conditioned Variational Autoencoder (VAE)


paper · 2026-04-10

We propose a significantly more generalized approach, replacing discrete latent variables with natural language conditioning via a text-conditioned Variational Autoencoder (VAE). Our core innovation uses a Large Language Model (LLM) as a dynamic *semantic operator* at test time.
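The abstract does not specify the architecture, so the following is only a minimal sketch of what "text-conditioned VAE" could mean mechanically: the encoder and decoder both receive a text embedding alongside the input, and sampling uses the standard reparameterization trick. All names (`embed_text`, `encode`, `decode`) and the toy linear maps are hypothetical stand-ins for learned networks, and the hash-based embedding is a placeholder for an LLM.

```python
import math
import random

def embed_text(text, dim=2):
    """Toy deterministic stand-in for an LLM text embedding."""
    vec = [0.0] * dim
    for tok in text.split():
        h = sum(ord(c) for c in tok) % 997  # deterministic token hash
        for i in range(dim):
            vec[i] += math.sin(h * (i + 1))
    return vec

def encode(x, cond):
    # Encoder sees the input concatenated with the text condition.
    joint = x + cond
    mu = [0.5 * v for v in joint]          # toy linear map -> mean
    logvar = [-1.0] * len(joint)           # fixed toy log-variance
    return mu, logvar

def reparameterize(mu, logvar, rng):
    # z = mu + sigma * eps, the standard VAE reparameterization trick.
    return [m + math.exp(0.5 * lv) * rng.gauss(0, 1)
            for m, lv in zip(mu, logvar)]

def decode(z, cond):
    # Decoder is also conditioned; output matches the input dimensionality.
    return [z[i] + cond[i % len(cond)] for i in range(len(z) - len(cond))]

rng = random.Random(0)
x = [1.0, -0.5]                       # toy input
cond = embed_text("open the red door")
mu, logvar = encode(x, cond)
z = reparameterize(mu, logvar, rng)
x_hat = decode(z, cond)
```

Changing the conditioning text changes both the posterior and the reconstruction, which is the sense in which natural language replaces a discrete latent code here.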

LLMs Enhance Zero-Shot Transfer in Reinforcement Learning by Semantically Re-map