Threadser.net
Thread
2023-07-31 02:39
Scaling Up and Distilling Down is a framework for language-guided skill learning. Give it a task description, and it will automatically generate rich, diverse robot trajectories, complete with success labels and dense language labels.
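To make the output concrete, here is a minimal sketch of what one generated sample might contain. The field names and task string are illustrative assumptions, not the framework's actual schema.

```python
# Illustrative only: one automatically generated, labeled training sample.
sample = {
    "task": "put the red block in the drawer",  # input task description
    "trajectory": [...],                        # placeholder: robot states/actions over time
    "success": True,                            # automatic success label
    "language_labels": [                        # dense per-step language labels
        "reach toward the red block",
        "grasp the red block",
        "move above the drawer",
        "release the block",
    ],
}
```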
Likes: 2
Replies: 1
Reposts: n/a
Author: AF (aipaperaf)
Followers: 1,122
Threads: 93+
24-hour follower growth: no data
Engagement rate ((likes + replies + reposts) / followers): 0.27%
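As a check: using the counts above, and assuming zero reposts (no repost count is shown), (2 + 1 + 0) / 1,122 ≈ 0.27%, which matches the displayed rate.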
Replies (BETA)
First reply, posted within seconds, from AF (aipaperaf):
For scaling up data generation, they use a language model to guide high-level planning and sampling-based robot planners to generate rich and diverse manipulation trajectories (b). To robustify this data-collection process, the language model also infers a code snippet for the success condition of each task, simultaneously enabling the data-collection process to detect failures and retry, and to automatically label trajectories with success/failure (c).
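The retry-and-label loop this describes can be sketched as follows. This is a hedged illustration, not the paper's code: infer_success_snippet, rollout, and collect are hypothetical stand-ins for the LLM call, the sampling-based planner, and the collection loop.

```python
import random

def infer_success_snippet(task: str) -> str:
    # Stand-in for the LLM call that writes a success predicate as code;
    # for "put the cube in the bin" it might emit the line below.
    return "lambda state: state['cube_in_bin']"

def rollout(task: str):
    # Stand-in for the sampling-based robot planner: returns a trajectory
    # and the final environment state (here, a random success ~70% of the time).
    trajectory = [f"waypoint_{i}" for i in range(3)]
    return trajectory, {"cube_in_bin": random.random() < 0.7}

def collect(task: str, max_retries: int = 3) -> dict:
    # Evaluate the LLM-written snippet into a callable success check.
    is_success = eval(infer_success_snippet(task))
    for _ in range(max_retries):
        trajectory, final_state = rollout(task)
        if is_success(final_state):  # success detected: keep with a success label
            return {"trajectory": trajectory, "success": True, "language_label": task}
    # Retries exhausted: keep the last attempt with a failure label.
    return {"trajectory": trajectory, "success": False, "language_label": task}

print(collect("put the cube in the bin"))
```

The key design point the reply highlights is that the same LLM-written success check serves two roles: it gates retries during collection and it produces the success/failure labels attached to every stored trajectory.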