The Model Context Protocol (MCP) is still in its infancy, or at least I hope it is, but it’s already opening up compelling new ways to wire up LLMs to real tools and services. I feel like I did when I got my first modem and could connect my computer at the blazing speed of 1200 bits per second to other computers (cue sounds of the dialing, ringing and then the happy tones and beeps of the handshake).
As a side note, this video is fun to watch with closed captioning turned on: https://youtu.be/xalTFH5ht-k
At Nimble Gravity, we’ve been automating processes with MCP servers that would have been very difficult to do without the power of MCP (and LLMs) and wanted to share some learnings.
There's quite a plethora of MCP SDK variants emerging. We like Anthropic's version because, well, it's the OG, plus it's well-documented and provides everything we need.
When implementing tools, it's obvious that you need to be clear about what each tool does, what the parameters (and enums!) mean, what the expected returns are, and so on. But it also helps to think of your schemas as both machine-parseable and human-explainable. The due date is in what format?
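For instance, here's a minimal sketch using FastMCP from the official Python MCP SDK. The server name, tool, fields, and date format are all illustrative assumptions, not a prescription:

```python
from enum import Enum
from mcp.server.fastmcp import FastMCP

# Hypothetical task-tracker server; the names and fields are illustrative.
mcp = FastMCP("task-tracker")

class Priority(str, Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"

@mcp.tool()
def create_task(title: str, due_date: str, priority: Priority = Priority.MEDIUM) -> str:
    """Create a task in the tracker.

    Args:
        title: Short, human-readable task title.
        due_date: Due date in ISO 8601 format (YYYY-MM-DD), e.g. "2025-07-31".
        priority: One of "low", "medium", or "high".

    Returns:
        A confirmation string containing the task details.
    """
    # ... call the real tracker API here ...
    return f"Created task '{title}' due {due_date} with priority {priority.value}"

if __name__ == "__main__":
    mcp.run()
```

The docstring and type hints are what the model actually sees, so spelling out the date format and enum values there does double duty: it documents the tool for humans and constrains the LLM's guesses.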
If there's something you don't explicitly handle (or didn't describe well), or that actually isn't possible, the LLM will sometimes still try. It may even invent parameter values. Sometimes that works out nicely (guessing a valid industry to pass into a CRM MCP, for example), but sometimes a little defensive coding goes a long way. Handling more use cases over time, as you see how people actually use the server, helps too.
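A sketch of what that defensive layer might look like. The industry list, tool name, and error wording here are all made up for illustration:

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("crm")  # hypothetical CRM-facing server

# Illustrative whitelist; real values would ideally come from the CRM itself.
VALID_INDUSTRIES = {"software", "finance", "healthcare", "retail"}

@mcp.tool()
def set_company_industry(company_id: str, industry: str) -> str:
    """Set the industry on a CRM company record.

    Args:
        company_id: The CRM's internal company ID.
        industry: Must be one of: software, finance, healthcare, retail.
    """
    normalized = industry.strip().lower()
    if normalized not in VALID_INDUSTRIES:
        # Return a correctable error instead of failing silently; the LLM can
        # read this and retry with a valid value.
        return (
            f"Error: '{industry}' is not a supported industry. "
            f"Valid values are: {', '.join(sorted(VALID_INDUSTRIES))}."
        )
    # ... call the real CRM API here ...
    return f"Updated company {company_id} industry to '{normalized}'."
```

Returning a readable error (rather than raising and dying) tends to work well, because the model can course-correct in the next turn.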
If there's an API that describes another API and its parameters, it's worth using, because you can then tell the LLM what's available instead of hardcoding it. For example, you can call a HubSpot API to get all the properties for an object type (e.g., Contacts) and avoid hardcoding them into your update-contact tool. It's a kind of future-proofing that keeps your tools minimal and composable.
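A rough sketch of that pattern in Python. The endpoint path, env var, and response handling are my assumptions; check HubSpot's current CRM v3 Properties API docs before relying on them:

```python
import os
import httpx

HUBSPOT_BASE = "https://api.hubapi.com"

def fetch_contact_properties() -> list[dict]:
    """Fetch property definitions for Contacts from HubSpot's CRM v3
    Properties API, so the update-contact tool's description can be built
    at runtime instead of hardcoded.

    Assumes a private-app token in the HUBSPOT_TOKEN env var.
    """
    resp = httpx.get(
        f"{HUBSPOT_BASE}/crm/v3/properties/contacts",
        headers={"Authorization": f"Bearer {os.environ['HUBSPOT_TOKEN']}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("results", [])

def describe_properties(properties: list[dict]) -> str:
    """Turn property metadata into a compact description the LLM can use."""
    lines = []
    for prop in properties:
        options = prop.get("options") or []
        allowed = f" (one of: {', '.join(o['value'] for o in options)})" if options else ""
        lines.append(f"- {prop['name']}: {prop.get('type', 'unknown')}{allowed}")
    return "\n".join(lines)
```

You can then fold that description into the tool's docstring at startup, or use it to validate property names before calling the update endpoint.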
Having a simple UI to test tools is super handy. It's kind of like an early Postman for LLM interfaces.
Also, for issues that live somewhere between the LLM and your MCP server, Claude (for example) has super handy logs for anything that goes right or wrong – one log per MCP server, so you know where to look. That said, this is a new space with very early tooling, and I anticipate a lot will improve.
If you're suddenly incorporating much more data, it's possible that you're going to start hitting context window and conversation limits. So, if you need to do 20 things, you might need to break them up into two or more sessions to avoid (or minimize) the "please start a new conversation to continue chatting" type messages.
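One server-side mitigation is to cap or paginate what each tool returns instead of dumping everything into the conversation. A minimal sketch (the cap is arbitrary; tune it to your own context budget):

```python
import json

MAX_ITEMS = 25  # arbitrary cap; adjust to your context budget

def trim_for_context(items: list[dict]) -> str:
    """Return a compact payload that tells the LLM how much was left out,
    so it can ask for another page instead of the whole dataset at once."""
    shown = items[:MAX_ITEMS]
    payload = {"items": shown, "shown": len(shown), "total": len(items)}
    if len(items) > len(shown):
        payload["note"] = "Ask for the next page if you need more."
    return json.dumps(payload, separators=(",", ":"))
```

It won't eliminate the limits, but compact, paginated tool output stretches a single conversation noticeably further.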
This space is early. Really early. But I get dial-up-modem nostalgia (in a good way). You can feel the future potential in these interfaces: secure, modular, semantically rich endpoints for AI. Today it’s like the beige box connected to your computer. Tomorrow? Core infrastructure.
We’re still figuring out best practices too. If you’re experimenting with MCP, we’d love to trade notes.
Shoot us a message at Nimble Gravity if you’re building something cool (or getting stuck).