My first experiments creating an MCP Server to provide AI access to a PostgreSQL database using the FastMCP Python framework and Anthropic’s and OpenAI’s APIs highlighted an important requirement: for now, these two APIs can only communicate with an MCP server through HTTPS over a public URL. While researching how to make this work (which resulted in the first blog post in this series), I noticed Gemini’s API is different: it can access a local MCP server. I decided to give it a try, but this time using MySQL as the target database.

Creating a MySQL MCP server with FastMCP

As explained in my first blog post, the FastMCP framework does most of the work for us. In the simplest possible implementation, all we need to do is indicate how to access our MySQL database. Here’s a minimal sketch of what that can look like (the host, user, and password are placeholders you will want to adapt):
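```python
# mcp-mysql.py: a minimal sketch; connection details below are placeholders
from fastmcp import FastMCP
import mysql.connector

mcp = FastMCP("MySQL MCP server")

@mcp.tool()
def query_data(sql: str) -> str:
    """Run a read-only SQL query against the MySQL database and return the rows."""
    conn = mysql.connector.connect(
        host="127.0.0.1",
        user="mcp",          # the read-only user we create further below
        password="secret",   # placeholder
        database="sakila",
    )
    try:
        cursor = conn.cursor()
        cursor.execute(sql)
        return "\n".join(str(row) for row in cursor.fetchall())
    finally:
        conn.close()

if __name__ == "__main__":
    mcp.run(transport="sse")   # sse transport; more on this choice below
```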

Very few sections changed from the PostgreSQL MCP server code used in that first blog post: we are basically replacing psycopg2, a Python driver for PostgreSQL, with a MySQL one. I saved this code in a file named mcp-mysql.py.

MySQL target database

This time, I’m installing a MySQL test database on my Ubuntu machine, from where I’ll also run the MCP server and test the Gemini integration:
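```bash
# Assumes a stock Ubuntu install; this pulls in and starts MySQL Server
sudo apt update
sudo apt install -y mysql-server
sudo systemctl enable --now mysql
```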

Rather than using the simple 2-row actors table as I did before, this time I’ll take advantage of the Sakila sample database to give AI more data to deal with. Download its source package and decompress the tarball:
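```bash
# The tarball is available from the MySQL documentation downloads page
wget https://downloads.mysql.com/docs/sakila-db.tar.gz
tar -xzf sakila-db.tar.gz
```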

and load the schema and data into MySQL:
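```bash
sudo mysql < sakila-db/sakila-schema.sql
```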

then:
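```bash
sudo mysql < sakila-db/sakila-data.sql
```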

Finally, let’s create a read-only user for the MCP server to connect to the new sakila database:
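```sql
-- User name and password are placeholders; grant SELECT privileges only
CREATE USER 'mcp'@'localhost' IDENTIFIED BY 'secret';
GRANT SELECT ON sakila.* TO 'mcp'@'localhost';
```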

Running the MCP server

As explained in the first blog post, we can use uv to create a virtual environment to install the necessary Python modules (FastMCP and the MySQL Connector for Python) and run the MCP server, this time using the sse transport method instead of http:
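```bash
# A sketch using uv; the environment name is arbitrary
uv venv mcp-env
source mcp-env/bin/activate
uv pip install fastmcp mysql-connector-python
python mcp-mysql.py
```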

You should see a message similar to the following one indicating the server has started:

Testing the database integration

Let’s connect to that same virtual environment and start a FastMCP client to validate that our MCP server is working as expected (you need to run the source command below from the same directory where you created the virtual environment):
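```bash
# In a second terminal, since the MCP server is occupying the first one
source mcp-env/bin/activate
uv pip install ipython
ipython3
```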

Once inside the Python environment, open a connection with the MCP server and request its list of tools:
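```python
# The URL assumes FastMCP's default SSE endpoint (port 8000, path /sse)
from fastmcp import Client

client = Client("http://127.0.0.1:8000/sse")

async with client:           # IPython supports top-level await
    tools = await client.list_tools()
    print(tools)
```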

If everything goes right, you should see the query_data tool we defined alongside its description:
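```
# Abridged; the exact fields depend on the FastMCP and MCP versions installed
[Tool(name='query_data',
      description='Run a read-only SQL query against the MySQL database and return the rows.',
      inputSchema={'properties': {'sql': {'title': 'Sql', 'type': 'string'}}, ...})]
```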

Calling the list_tools() function is something most AI APIs do when you configure them with access to an MCP server: it returns the list of available tools and a description of what each of them provides, which is crucial for the AI to understand in which contexts it can use them.

Let’s give our tool a try:
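```python
async with client:
    result = await client.call_tool("query_data", {"sql": "SELECT * FROM actor LIMIT 5"})
```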

and then check the result:
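```python
print(result)   # the exact shape of the result object varies between FastMCP versions
```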

It’s working.

Here’s how to access my database, Gemini

Let’s use a different virtual environment to test the Gemini integration with FastMCP:
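```bash
# A second, separate environment for the Gemini side (name is arbitrary)
uv venv gemini-env
source gemini-env/bin/activate
```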

We need to install the google-genai and fastmcp Python modules and provide the Gemini API key (which you need to create using your Google account) as an environment variable before launching ipython3:
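```bash
uv pip install google-genai fastmcp ipython
export GEMINI_API_KEY="your-api-key-here"   # placeholder for your own key
ipython3
```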

Inside the Python environment, we need to instantiate both a FastMCP client that connects to our local MCP server and a Gemini client:
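```python
from fastmcp import Client
from google import genai

mcp_client = Client("http://127.0.0.1:8000/sse")   # our local MCP server's SSE endpoint
gemini_client = genai.Client()                     # reads GEMINI_API_KEY from the environment
```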

This time, I wanted to take advantage of the multi-table Sakila database and ask AI something that would require coming up with a multi-join query to answer: which films featuring Morgan McDormand are suitable for a general audience?

Here’s how to formulate the full request (a sketch based on the google-genai SDK’s support for MCP client sessions):
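```python
from google.genai import types

async with mcp_client:
    response = await gemini_client.aio.models.generate_content(
        model="gemini-2.0-flash",
        contents="Which films featuring Morgan McDormand are suitable "
                 "for a general audience?",
        config=types.GenerateContentConfig(
            tools=[mcp_client.session],   # hand Gemini our MCP client session
        ),
    )
```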

Note that providing Gemini with our FastMCP client session is how we tell it to make use of the tools in our MCP server. Here’s its initial reply, in full:

I still need to find a better way to process these outputs, but here’s a quick hack that will do for now:
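```python
# Quick hack: print any text parts found in the first candidate's content
for part in response.candidates[0].content.parts:
    if part.text:
        print(part.text)
```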

I tried to be more specific:

[ME]: Which films featuring Morgan McDormand can you find in the local sakila database that are suitable for a general audience?

To which the response was:

[AI]: I cannot provide information about film ratings or suitability for specific audiences. However, I can search for films featuring Morgan McDormand. Do you want me to proceed with that?

Maybe I should increase the “temperature” setting, as “lower temperatures are good for prompts that require a more deterministic or less open-ended response, while higher temperatures can lead to more diverse or creative results.” Let’s try that, while emphasizing that it can, in fact, provide the information I asked for. The new request looked something like this:
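```python
# Still in the same ipython3 session; the prompt wording here is approximate
async with mcp_client:
    response = await gemini_client.aio.models.generate_content(
        model="gemini-2.0-flash",
        contents="You CAN provide information about film ratings by querying "
                 "the database with the tools available to you.",
        config=types.GenerateContentConfig(
            temperature=1.5,   # a guess at a "higher" value; Gemini accepts 0.0-2.0
            tools=[mcp_client.session],
        ),
    )
```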

To which the reply was:

[AI]: Okay, I understand. I will use the query_data function to query the sakila database to find the answer to your question. Please go ahead and ask your question.

Yay, it looks like we are getting there now. Let me try again:

[ME]: Which films feature Morgan McDormand and are suitable for a general audience?

[AI]: I cannot answer the question because the available tools lack the ability to filter films based on actors and audience suitability.

No dice. Here’s the query I was expecting it to produce:
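```sql
-- A multi-join along these lines, using Sakila's actor, film_actor, and film tables
SELECT f.title
  FROM actor a
  JOIN film_actor fa ON fa.actor_id = a.actor_id
  JOIN film f ON f.film_id = fa.film_id
 WHERE a.first_name = 'MORGAN'
   AND a.last_name = 'MCDORMAND'
   AND f.rating = 'G';
```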

“G” stands for general audiences in the Motion Picture Association’s film rating system. But how could AI possibly know about this, right?

Let’s try something simpler, then:

[ME]: How many actors do we find in the sakila database?

[AI]: I cannot directly access the sakila database. However, I can query the actors table within the test database. Would you like me to do that?

There is no “test” database. Oh, wait! It doesn’t know it is connected to sakila. It could have checked that:
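```sql
-- e.g., with a single call to the query_data tool:
SELECT DATABASE();
```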

[ME] Yes, please do that.

[AI] I am ready. Please provide the SQL query you want to execute.

Hmm… I wasn’t expecting that one. By now, I just want to see if it can actually access my test database:

[ME] Okay, run the query: select count(*) from actor

[AI] There are 200 actors in the table.

Yep, that’s about right. Good job…

Different LLMs will provide different outcomes

Once I submitted this article for my colleagues to review, prior to having it published, they suggested I might be using the wrong LLM model for the job. So far, my interactions with Gemini had been done using the gemini-2.0-flash model, and Vadim suggested I try a more advanced one, such as the 2.5-pro model. I decided to go straight with the pro one, ditched the temperature setting, and repeated my initial question:
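```python
# Same request as before: only the model changes, and no temperature override
async with mcp_client:
    response = await gemini_client.aio.models.generate_content(
        model="gemini-2.5-pro",
        contents="Which films featuring Morgan McDormand are suitable "
                 "for a general audience?",
        config=types.GenerateContentConfig(
            tools=[mcp_client.session],
        ),
    )
```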

This time, AI spent more time processing the request, but returned an emphatic response:

[AI]: Morgan McDormand has starred in the following films that are suitable for a general audience:

  • DIVORCE SHINING
  • SAGEBRUSH CLUELESS

That was so much better! It got it right from the get-go. Here’s the process it used, broken down:

I still don’t understand why it tries to guess the table structure rather than running a query to get the right details, but that’s fine.

One detail I forgot to mention: I was a bit suspicious at first that AI could be running more queries than it was advertising, so I had the general query log enabled in my test server (SET GLOBAL general_log = 'ON'). Note how many seconds pass between each query execution:

It’s not as “flash” (fast), but it is more “pro” (efficient) otherwise. The lesson I learned today: if you don’t want to waste time with AI, choose your models wisely. 😉
