Perspective.
https://www.bespacific.com/the-science-of-how-ai-pays-attention/
The science of how AI pays attention
Growth Memo: “I analyzed 1.2 million search results to find out exactly how AI reads. The verdict? It’s a busy editor, not a patient student… There isn’t much known about which parts of a text LLMs cite. We analyzed 18,012 citations and found a “ski ramp” distribution.
44.2% of all citations come from the first 30% of text (the intro). The AI reads like a journalist. It grabs the “Who, What, Where” from the top. If your key insight is in the intro, the chances it gets cited are high.
31.1% of citations come from the 30–70% range of the text (the middle). If you bury your key product features in paragraph 12 of a 20-paragraph post, the AI is 2.5x less likely to cite it.
24.7% of citations come from the last third of an article (the conclusion). It proves the AI does wake up at the end (much like humans). It skips the actual footer (see the 90-100% drop-off), but it loves the “Summary” or “Conclusion” section right before the footer.
Possible explanations for the ski ramp pattern are training and efficiency:
LLMs are trained on journalism and academic papers, which follow the “BLUF” (Bottom Line Up Front) structure. The model learns that the most “weighted” information is always at the top.
While modern models can read up to 1 million tokens for a single interaction (~700-800K words), they aim to establish the frame as fast as possible, then interpret everything else through that frame…”
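The positional binning described above can be sketched in a few lines. This is an illustrative reconstruction, not the study's actual code: the function name, the (offset, document length) data format, and the sample values are all assumptions made for the example; the bin boundaries (0–30%, 30–70%, 70–100%) follow the percentages quoted in the excerpt.

```python
def bin_citations(citations):
    """Bin cited passages by relative position in their source document.

    citations: list of (char_offset, doc_length) pairs — a hypothetical
    data format, assumed for illustration.
    """
    bins = {"intro (0-30%)": 0, "middle (30-70%)": 0, "conclusion (70-100%)": 0}
    for offset, length in citations:
        pos = offset / length  # relative position, 0.0 = start, 1.0 = end
        if pos < 0.30:
            bins["intro (0-30%)"] += 1
        elif pos < 0.70:
            bins["middle (30-70%)"] += 1
        else:
            bins["conclusion (70-100%)"] += 1
    return bins

# Illustrative sample data, not from the actual 18,012-citation dataset:
sample = [(100, 1000), (250, 1000), (500, 1000), (950, 1000)]
print(bin_citations(sample))
# {'intro (0-30%)': 2, 'middle (30-70%)': 1, 'conclusion (70-100%)': 1}
```

Run over a real citation dataset, a "ski ramp" would show up as counts that are highest in the intro bin and taper toward the end, matching the 44.2% / 31.1% / 24.7% split reported above.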
Perspective.
https://sloanreview.mit.edu/audio/ai-is-not-improving-productivity-nobel-laureate-daron-acemoglu/
AI Is Not Improving Productivity: Nobel Laureate Daron Acemoglu
In this bonus episode of the Me, Myself, and AI podcast, Nobel Prize-winning economist Daron Acemoglu joins host Sam Ransbotham to challenge some of the most common assumptions about artificial intelligence’s future. Drawing on his book Power and Progress, Daron argues that technology doesn’t have a fixed destiny — and that today’s choices will determine whether AI boosts workers or simply accelerates automation and inequality. He makes a case for focusing on new tasks that complement human skills, rather than replacing them, and warns that current incentives push AI toward centralization and automation by default. The conversation tackles productivity myths, reliability risks, and why regulation should proactively steer AI toward social good.