I was upset about this when it was being called “prompt engineering”: I found no sign of engineering, but instead a collection of vibes about how to phrase a prompt for a particular version of a particular model, which sometimes produced output that was plausibly related to the input prompt and therefore plausibly close to what you might have intended. I’m upset now when people claim that agents are so useful but can’t tell me when, why, or how they’re useful beyond vibes about feeling more productive (vibes that have been contradicted by actual research contrasting objective measures of productivity with subjective reports), or beyond examples of having produced a lot of plausible output.