There’s really nothing surprising about this. Models like o1 tend to respond better to direct instructions than to step-by-step guides or detailed chains of thought. You just have to structure the inputs clearly and use demonstrations or relevant examples to provide context instead of long explanations. I haven’t tried few-shot prompting with DeepSeek-R1 yet, but I suspect it might actually reduce performance, as it can with o1.

submitted by /u/ml_guy1
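
For concreteness, here’s a rough sketch of what “direct instruction plus a demonstration instead of a long explanation” could look like. The OpenAI Python client, the `"o1"` model id, and the sentiment task are placeholder assumptions for illustration, not anything from the original post.

```python
# Minimal sketch: prompting a reasoning model with a direct instruction,
# and optionally adding a single demonstration for format context rather
# than a spelled-out chain of thought. Assumes the OpenAI Python client
# and an API key in the environment; the model id "o1" is a placeholder.
from openai import OpenAI

client = OpenAI()

# Direct instruction: state the task and the expected output plainly,
# without walking the model through how to reason.
direct_prompt = (
    "Classify the sentiment of the review as positive, negative, or neutral. "
    "Answer with a single word.\n\n"
    "Review: The battery died after two days."
)

# Demonstration-as-context: one worked example shows the input/output
# format instead of a long explanation of what is wanted.
demonstration_prompt = (
    "Classify the sentiment of each review as positive, negative, or neutral.\n\n"
    "Review: Arrived early and works great.\n"
    "Sentiment: positive\n\n"
    "Review: The battery died after two days.\n"
    "Sentiment:"
)

response = client.chat.completions.create(
    model="o1",  # assumed model id; swap in whichever reasoning model you use
    messages=[{"role": "user", "content": direct_prompt}],
)
print(response.choices[0].message.content)
```

Swapping `direct_prompt` for `demonstration_prompt` in the call is an easy way to compare the two styles on your own tasks and see whether the extra example helps or hurts.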