GitHub Copilot Got Multiple Personalities
Wed, Oct 30, 2024 1:32 PM
GitHub Copilot, Meet Claude Sonnet and Google Gemini

I have been keeping up with the latest news about AI code assistants, and I must say that GitHub just released something really cool. Remember how we've all been using the same flavor of Copilot? That's about to change in a big way.

The Game-Changer

When Copilot first came out, it was like having a coding friend who got things right some of the time, but not reliably. Now GitHub has finally given us something I had been secretly hoping for: the freedom to choose which AI model we want to use.

Meet the New Squad

Now let me tell you about the new kids on the block. The first one is Claude 3.5 Sonnet from Anthropic. The reason I am really excited about this one is that it is supposed to be great at understanding complicated codebases and handling maintenance tasks. That could save me a lot of time, because I spend way too much of it fixing bugs in old code.

Next up is Google Gemini 1.5 Pro. The cool thing about this one is that it can handle two million tokens at once. That's roughly like being able to see your entire codebase in one go! On top of that, it can handle both audio and images, which makes it great for multimedia projects.
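To get a feel for what two million tokens actually buys you, here is a minimal sketch that estimates how many tokens a source tree would take up. It assumes a rough heuristic of ~4 characters per token (real tokenizers vary by model and content), so treat the numbers as ballpark only:

```python
import os

# Assumption: ~4 characters per token is a common rough heuristic;
# actual tokenization depends on the model's tokenizer.
CHARS_PER_TOKEN = 4
CONTEXT_LIMIT = 2_000_000  # Gemini 1.5 Pro's advertised window

def estimate_tokens(root, exts=(".py", ".js", ".ts", ".md")):
    """Walk a source tree and roughly estimate its total token count."""
    total_chars = 0
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if os.path.splitext(name)[1] in exts:
                path = os.path.join(dirpath, name)
                try:
                    with open(path, encoding="utf-8", errors="ignore") as f:
                        total_chars += len(f.read())
                except OSError:
                    continue  # skip unreadable files
    return total_chars // CHARS_PER_TOKEN

if __name__ == "__main__":
    tokens = estimate_tokens(".")
    print(f"~{tokens:,} tokens; fits in a 2M-token window: {tokens <= CONTEXT_LIMIT}")
```

Run it from your repo root and you'll see that even fairly large projects can come in well under two million tokens, which is why a window that size feels like "seeing all your code at once."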

OpenAI is also stepping up with the o1-preview and o1-mini models. These should be even better at understanding what we are trying to do with our code.

Why I'm Hyped

You know what's really cool? We can switch between these models anytime we want, in VS Code or on GitHub.com. It's like having different specialist developers helping with different tasks. Working on a huge refactoring project? Claude might be your go-to. Doing a project with a lot of multimedia? Gemini could be the better fit.

Looking Ahead

This GitHub Spark thing they talked about also interests me. You could build whole apps just by describing what you want. That might sound like science fiction, but so did Copilot a few years ago!

My Take

GitHub did a great job with this. Instead of locking us into a single AI model, they are letting us pick the one that works best for us. It's their way of saying, "Hey, we believe you know which tools work best for your work."

This makes me so excited to start playing around with these different models. Yes, it might take some time to learn which one works best in different situations, but that is part of the fun, right?

What do you think about these changes? Are you excited to try out different models, or are you content with the ones you have now?

Catch you in the next commit!