That much data can’t be wrong
AI is awesome. I use Claude daily and find it extremely useful. But a recent experience reminded me that, like any tool, it still takes skill to use effectively.
I got a message from my coworker letting me know he would be removing a linting rule that requires a certain magic comment in our code (for my fellow Rubyists out there, I'm referring to frozen_string_literal, of course).
He explained that it was no longer needed in the version of Ruby we're using (3.3). This confused me because I was fairly certain that change was still a few versions of Ruby away. He said he had checked with ChatGPT, and it assured him that the comment hadn't been needed since Ruby 3.0.
I asked Claude and got the same answer as my coworker. But I had a hunch that the AIs were wrong, and that I knew why.

After some internet sleuthing, I found what I was looking for. It turns out there had been plans to remove the need for this comment in Ruby 3.0, but the creator of the language abruptly postponed the change to ensure it didn't break existing codebases. Before that decision was made, though, the internet was full of people and posts talking about the change happening in version 3.0. Both my and my coworker's LLMs understandably found all that chatter and assumed it was correct, overlooking the more obscure event that changed everything.
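A quick check against Ruby 3.3 itself tells the same story: without the magic comment, string literals are still mutable, so the linting rule still has a job to do. A minimal sketch:

```ruby
# Run under Ruby 3.3, in a file with no frozen_string_literal comment.
puts RUBY_VERSION   # => "3.3.x"

s = "still mutable"
puts s.frozen?      # => false (literals are not frozen by default)
s << "!"            # no FrozenError here
puts s              # => "still mutable!"
```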
LLMs in their current state are simply data aggregators. They're VERY advanced and complex aggregators, but aggregators nonetheless: they absorb petabytes of data and then produce summaries and similitudes of that data. It's important to understand this when using them. They're fantastic tools that get better every day, but there isn't any magic behind that chat interface. Just because more data points to an answer does not mean it's correct.
I look forward to more advances in AI and love to use it as a second brain, but I’m also very glad I still have the first one.