2024-10-23 17:25 · elvis (@omarsar0)
Here is an interesting new paper on improving Chain-of-Thought accuracy. The authors find that including both correct and incorrect reasoning paths in the demonstrations improves the accuracy of intermediate steps and of the final CoT answers. I am not surprised by this: we often see that when we give LLMs feedback (e.g., solution hints or pointing out mistakes), they tend to produce better results. Like humans, LLMs can also "learn" from failures. I have seen something similar for RAG and even agentic systems.
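To make the idea concrete, here is a minimal sketch of what a contrastive CoT prompt could look like. This is my own illustration, not code from the paper: the example question, both rationales, and the build_contrastive_prompt helper are all hypothetical.

```python
# A minimal sketch of contrastive CoT prompting, assuming the setup the
# post describes: each demonstration pairs a correct rationale with an
# incorrect one that is explicitly flagged as wrong. The problems and
# rationales below are illustrative, not taken from the paper.

CONTRASTIVE_DEMO = """\
Question: A shop sells pens at $2 each. How much do 4 pens cost?

Correct explanation: Each pen costs $2, so 4 pens cost 4 * 2 = $8.
The answer is 8.

Incorrect explanation: Each pen costs $2, so 4 pens cost 4 + 2 = $6.
This is wrong because the quantities should be multiplied, not added.
"""

def build_contrastive_prompt(question: str) -> str:
    """Prepend the contrastive demonstration to a new question so the
    model can imitate the correct reasoning and avoid the flagged mistake."""
    return (
        CONTRASTIVE_DEMO
        + "\nQuestion: " + question
        + "\n\nCorrect explanation:"
    )

if __name__ == "__main__":
    prompt = build_contrastive_prompt(
        "A train travels 60 miles per hour for 3 hours. How far does it go?"
    )
    print(prompt)  # send this string to any chat/completions endpoint
```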