<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Robotics | MODE Collaboration</title><link>https://mode-demo.github.io/tags/robotics/</link><atom:link href="https://mode-demo.github.io/tags/robotics/index.xml" rel="self" type="application/rss+xml"/><description>Robotics</description><generator>Hugo Blox Builder (https://hugoblox.com)</generator><language>en-us</language><lastBuildDate>Tue, 03 Oct 2023 00:00:00 +0000</lastBuildDate><image><url>https://mode-demo.github.io/media/icon_hu_ebbff252c19052d0.png</url><title>Robotics</title><link>https://mode-demo.github.io/tags/robotics/</link></image><item><title>Learning-based Methods for Robotics &amp; Autonomous Driving</title><link>https://mode-demo.github.io/project/robotics/</link><pubDate>Tue, 03 Oct 2023 00:00:00 +0000</pubDate><guid>https://mode-demo.github.io/project/robotics/</guid><description>
&lt;div style="font-family: Helvetica, sans-serif; max-width: 960px; margin: 0 auto; padding: 20px; line-height: 1.6; color: #333;"&gt;
&lt;div style="
padding: 2px;
border-radius: 12px;
background: linear-gradient(135deg, #e0f2fe, #ecfdf5);
box-shadow: 0 4px 12px rgba(0,0,0,0.05);
"&gt;
&lt;div style="
background: white;
border-radius: 10px;
padding: 20px;
"&gt;
&lt;p style="
font-size: 18px;
line-height: 1.7;
color: #1e293b;
margin: 0;
"&gt;
We focus on developing robotic control and autonomous driving policy learning methods that can learn directly from real-world data, bypassing or alleviating the sim-to-real gap while achieving robust and generalizable performance.
&lt;/p&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h3 style="margin-top: 24px; color: #00bcd4; font-size: 24px;"&gt;Our current research focus includes:&lt;/h3&gt;
&lt;!-- Card-style layout --&gt;
&lt;div style="display: grid; grid-template-columns: repeat(auto-fill, minmax(280px, 1fr)); gap: 24px; margin-top: 24px;"&gt;
&lt;div style="background: white; border-radius: 12px; padding: 24px; box-shadow: 0 5px 15px rgba(0, 0, 0, 0.05); transition: transform 0.3s ease; border-left: 4px solid #00bcd4;"&gt;
&lt;h4 style="margin-top: 0; margin-bottom: 12px; color: #222; font-size: 18px;"&gt;Offline RL / IL / planning methods for autonomous driving and robotic control&lt;/h4&gt;
&lt;/div&gt;
&lt;div style="background: white; border-radius: 12px; padding: 24px; box-shadow: 0 5px 15px rgba(0, 0, 0, 0.05); transition: transform 0.3s ease; border-left: 4px solid #4caf50;"&gt;
&lt;h4 style="margin-top: 0; margin-bottom: 12px; color: #222; font-size: 18px;"&gt;Offline policy optimization for safety-critical scenarios&lt;/h4&gt;
&lt;/div&gt;
&lt;div style="background: white; border-radius: 12px; padding: 24px; box-shadow: 0 5px 15px rgba(0, 0, 0, 0.05); transition: transform 0.3s ease; border-left: 4px solid #ff9800;"&gt;
&lt;h4 style="margin-top: 0; margin-bottom: 12px; color: #222; font-size: 18px;"&gt;Foundation models for robotic control&lt;/h4&gt;
&lt;/div&gt;
&lt;div style="background: white; border-radius: 12px; padding: 24px; box-shadow: 0 5px 15px rgba(0, 0, 0, 0.05); transition: transform 0.3s ease; border-left: 4px solid #9c27b0;"&gt;
&lt;h4 style="margin-top: 0; margin-bottom: 12px; color: #222; font-size: 18px;"&gt;Sim-to-real adaptation&lt;/h4&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div align="center" style="font-family: Helvetica, sans-serif; margin-bottom: 1em; margin-top: 60px;"&gt;
&lt;h1 style="color: #00bcd4; text-transform: uppercase; font-size: 40px; margin: 0;"&gt;Latest Achievement&lt;/h1&gt;
&lt;div class="card"&gt;
&lt;h3 style="color: #121212; font-size: 24px; font-weight: bold; margin: 0.3em 0 1em;"&gt;
&lt;a href="../../publication/zheng-2025-xvla/" style="color:rgb(212, 191, 55);"&gt;X-VLA has won First Place in the AGIBOT World Challenge (Manipulation track) @ IROS 2025!&lt;/a&gt;&lt;/h3&gt;
&lt;/div&gt;
&lt;div class="card"&gt;
&lt;h3 style="color: #121212; font-size: 24px; font-weight: bold; margin: 0.3em 0 1em;"&gt;
&lt;a href="../../publication/zheng-2025-diffusion/" style="color:rgb(13, 181, 227);"&gt;Diffusion-Planner: Diffusion-Based Planning for Autonomous Driving with Flexible Guidance&lt;/a&gt;&lt;/h3&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;style&gt;
.card {
background: white;
border-radius: 12px;
padding: 5px;
box-shadow: 0 5px 15px rgba(0, 0, 0, 0.05);
transition: transform 0.3s ease;
border: none;
}
/* Hover effect */
.card:hover {
transform: scale(1.05); /* enlarge the card */
box-shadow: 0 10px 25px rgba(0, 0, 0, 0.15); /* deepen the shadow */
}
&lt;/style&gt;</description></item></channel></rss>