MIT researchers “speak objects into existence” using AI and robotics

The speech-to-reality system combines 3D generative AI and robotic assembly to create objects on demand.
MIT researchers at the School of Architecture and Planning developed a speech-to-reality system that combines generative AI, natural language processing, and robotic assembly to fabricate physical objects from spoken prompts.
When applied to fields like computer vision and robotics, next-token and full-sequence diffusion models involve capability trade-offs. Next-token models can generate sequences that vary in length.
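The length trade-off mentioned above can be illustrated with a toy sketch (this is not the researchers' model; the uniform sampler, vocabulary, and stop token are all illustrative assumptions): a next-token generator decides when to stop as it goes, so its outputs naturally vary in length, whereas a full-sequence generator must commit to an output length up front.

```python
import random

random.seed(0)

END = "<eos>"
VOCAB = ["a", "b", "c", END]

def next_token_generate(max_len=10):
    """Next-token (autoregressive) generation: sample one token at a
    time and stop when the end-of-sequence token appears, so output
    length varies. (Toy uniform sampler, not a trained model.)"""
    seq = []
    for _ in range(max_len):
        tok = random.choice(VOCAB)
        if tok == END:
            break
        seq.append(tok)
    return seq

def full_sequence_generate(length=5):
    """Full-sequence generation (as in many diffusion models): the
    output length is fixed before any tokens are produced."""
    return [random.choice(VOCAB[:-1]) for _ in range(length)]

# Next-token runs produce several different lengths; the
# full-sequence generator always returns exactly `length` tokens.
lengths = {len(next_token_generate()) for _ in range(50)}
print(sorted(lengths))
print(len(full_sequence_generate()))
```

The point of the sketch is only the control-flow difference: the `break` on the stop token is what gives next-token models their variable-length outputs.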
The Robotics and AI Institute, founded by Marc Raibert, presents new research that uses reinforcement learning to teach Boston Dynamics' Spot to run three times faster. The same technique is used ...
Julie A. Adams, the associate director of research at Oregon State University’s Collaborative Robotics and Intelligent Systems Institute, has been studying human interactions with robots and ...
Cartwheel Robotics, led by Scott LaValley, is redefining humanoids by focusing on emotional connection and companionship rather than industrial tasks. Can these friendly robots, like Yogi and ...