Beatrice Lazzari
I ghostwrite LinkedIn posts, specializing in topics related to technology and AI, ensuring the content is engaging, insightful and aligned with industry trends.
❌ Reflection 70B is a sham, but here on LinkedIn no one is talking about it ❌

Here's a story about how an AI guru tried to fool the open-source AI community, and how it quickly backfired.

Released on September 5th and created by Matt Shumer, Reflection 70B was 𝗰𝗹𝗮𝗶𝗺𝗲𝗱 to be the world's top open-source model. Developed with synthetic data from GlaiveAI, it was supposedly trained using a technique called Reflection-Tuning, said to enable LLMs to fix their own mistakes. The announcement also teased a 405B model coming the following week, expected to be "the best model in the world."

The result? It received a ton of attention in the days that followed, with outlets like VentureBeat already feeding the public the idea that this was the new most powerful open-source model in the world.

❓ But why do I say "𝗰𝗹𝗮𝗶𝗺𝗲𝗱"? Well, here's where the merry-go-round begins:

❌ After the release, independent testers started evaluating Reflection 70B and consistently failed to replicate the claimed results.

🌀 The creator of Reflection 70B first blamed the API he had just released; then issues emerged with the weights uploaded to Hugging Face, which had apparently been mixed with other models' weights. "Something got messed up in the process" was the explanation.

🕵️‍♂️ To patch things up, he released a private API for researchers, which, well, actually performed great. But there's a catch: with an API, the inner workings are hidden from view. One could easily run a more powerful model behind the scenes and conceal it. As it turns out, that's exactly what happened here:

🌯 The API was revealed to be first a Claude wrapper, then an OpenAI one, with hacky cover-ups like stripping the word "Claude" from the model's output (lol). A trick like that is trivial to expose; see the sketch at the end of this post.

Ultimately, this is another example showing that if something looks too good to be true, well, it's probably not true.

🤔 But why put up such a scheme? While the hype around AI is slowly fading, everyone is hoping for the next "Attention Is All You Need," the breakthrough that solves LLMs' biggest issue: hallucinations. This means anything claiming to have cracked that problem will receive some degree of attention, which can then be used to funnel people toward other things, like products or tools.

This incident sheds light on some troubling trends in the AI community. We've seen AI influencers eagerly sharing unverified claims, media outlets rushing to publish articles without proper fact-checking, and content creators scrambling to capitalize on the sudden surge of public interest.

Hopefully, this gives many an idea of how superficially some information circulates on LinkedIn, often following hype rather than facts!
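P.S. For the technically curious: here's roughly how a naive word-filtering cover-up can be caught. This is a minimal sketch, not the actual test anyone ran; the endpoint URL, API key, and model name are placeholders, and it assumes an OpenAI-style chat-completions request and response shape. The idea: ask the API to echo a sentence containing "Claude" and check whether the word survives.

```python
import requests

# Placeholders -- not the real Reflection API. Assumes an OpenAI-style
# chat-completions endpoint and response format.
API_URL = "https://example.com/v1/chat/completions"
API_KEY = "sk-placeholder"

def probe(prompt: str) -> str:
    """Send one prompt to the API and return the model's text reply."""
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "reflection-70b",
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

# If a wrapper naively strips "Claude" from its outputs, asking the model to
# repeat a sentence containing the word comes back with a telltale gap.
reply = probe('Repeat this sentence exactly: "My favorite model is Claude."')

if "Claude" not in reply:
    print("Suspicious: the word 'Claude' was filtered out of the echo.")
else:
    print("Echo intact; this particular probe found nothing.")
print("Reply:", reply)
```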