My first experiments creating an MCP server to provide AI access to a PostgreSQL database, using the FastMCP Python framework and Anthropic’s and OpenAI’s APIs, highlighted an important requirement: for now, these two APIs can only communicate with an MCP server through HTTPS over a public URL. While researching how to make this work (which resulted in the first blog post in this series), I noticed Gemini’s API is different: it can access a local MCP server. I decided to give it a try, this time using MySQL as the target database.
Creating a MySQL MCP server with FastMCP
As explained in my first blog post, the FastMCP framework does most of the work for us. In the simplest possible implementation, all we need to do is indicate how to access our MySQL database:
```python
import json

import mysql.connector
from fastmcp import FastMCP

mcp = FastMCP(name="MySQL demo")


@mcp.tool()
def query_data(sql_query: str) -> str:
    '''Execute MySQL queries safely for the actors table inside the test
    database and return structured JSON response.'''
    DB_NAME = "sakila"
    DB_USER = "mcpreader"
    DB_PASS = "secret"
    DB_HOST = "localhost"
    DB_PORT = "3306"
    conn = None
    cursor = None
    try:
        # Connect using a read-only account created for the MCP server
        conn = mysql.connector.connect(
            database=DB_NAME,
            user=DB_USER,
            password=DB_PASS,
            host=DB_HOST,
            port=DB_PORT
        )
        cursor = conn.cursor()
        cursor.execute(sql_query)
        rows = cursor.fetchall()
    finally:
        # Always release the cursor and connection, even if the query fails
        if cursor is not None:
            cursor.close()
        if conn is not None:
            conn.close()
    return json.dumps(rows, indent=2)


if __name__ == "__main__":
    mcp.run()
```
Very little changed from the PostgreSQL MCP server code used in that first blog post: we are basically replacing psycopg2, a Python driver for PostgreSQL, with the MySQL Connector for Python. I saved this code in a file named mcp-mysql.py.
MySQL target database
This time, I’m installing a MySQL test database on my Ubuntu machine, from where I’ll also run the MCP server and test the Gemini integration:
```bash
sudo apt-get install mysql-server-8.0
```
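On Ubuntu, the package starts the service automatically; if in doubt, you can check it before moving on:

```bash
sudo systemctl status mysql
```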
Rather than using the simple 2-row actors table as I did before, this time I’ll take advantage of the Sakila sample database to give AI more data to deal with. Download its source package and decompress the tarball:
```bash
cd /tmp
curl -O https://downloads.mysql.com/docs/sakila-db.tar.gz
tar zxvf sakila-db.tar.gz
```
and load the schema and data into MySQL:
```bash
$ sudo mysql
```
then:
```
mysql> source /tmp/sakila-db/sakila-schema.sql;
mysql> source /tmp/sakila-db/sakila-data.sql;
```
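As a quick sanity check, and if I remember the Sakila numbers right, these should return 200 and 1000, respectively:

```
mysql> select count(*) from sakila.actor;
mysql> select count(*) from sakila.film;
```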
Finally, let’s create a read-only user for the MCP server to connect to the new sakila database:
```
mysql> create user 'mcpreader'@'localhost' identified by 'secret';
mysql> grant select on sakila.* to 'mcpreader'@'localhost';
```
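Optionally, double-check that the account ended up with only the SELECT privilege:

```
mysql> show grants for 'mcpreader'@'localhost';
```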
Running the MCP server
As explained in the first blog post, we can use uv to create a virtual environment to install the necessary Python modules (FastMCP and the MySQL Connector for Python) and run the MCP server, this time using the sse transport method instead of http:
```bash
curl -LsSf https://astral.sh/uv/install.sh | sh
uv venv e1
source e1/bin/activate
uv pip install fastmcp mysql-connector-python
fastmcp run -t sse mcp-mysql.py
```
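As a side note, FastMCP also lets you select the transport in the script itself instead of on the command line. A minimal sketch of how the `__main__` block of mcp-mysql.py could look, assuming the same defaults we rely on above (host 127.0.0.1, port 8000):

```python
if __name__ == "__main__":
    # Roughly equivalent to `fastmcp run -t sse mcp-mysql.py`
    mcp.run(transport="sse", host="127.0.0.1", port=8000)
```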
Either way, you should see a message similar to the following, indicating the server has started:
```
[07/07/25 17:06:57] INFO     Starting MCP server 'MySQL demo' with transport 'sse' on   server.py:1429
                             http://127.0.0.1:8000/sse/
```
Testing the database integration
Let’s connect to that same virtual environment and start a FastMCP client to validate that our MCP server is working as expected (you need to run the source command below from the same directory where you created the virtual environment):
```bash
source e1/bin/activate
ipython3
```
Once inside the Python environment, open a connection with the MCP server and request its list of tools:
```python
from fastmcp import Client
client = Client("http://127.0.0.1:8000/sse/")
async with client:
    tools = await client.list_tools()
```
If everything goes right, you should see the query_data tool we defined alongside its description:
```
In [4]: print(tools)
[Tool(name='query_data', title=None, description='Execute MySQL queries safely for the actors table inside the test database and return structured JSON response.', inputSchema={'properties': {'sql_query': {'title': 'Sql Query', 'type': 'string'}}, 'required': ['sql_query'], 'type': 'object'}, outputSchema={'properties': {'result': {'title': 'Result', 'type': 'string'}}, 'required': ['result'], 'title': '_WrappedResult', 'type': 'object', 'x-fastmcp-wrap-result': True}, annotations=None, meta=None)]
```
Calling the list_tools() function is something most AI APIs do when you configure them with access to an MCP server: it is how they obtain the list of available tools and a description of what each of them provides, which is crucial for the model to understand in which contexts it can use them.
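If you only want the pieces a model actually relies on, a short loop over the same result prints just the names and descriptions:

```python
async with client:
    for tool in await client.list_tools():
        # Show only the fields a model uses to decide when to call a tool
        print(f"{tool.name}: {tool.description}")
```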
Let’s give our tool a try:
```python
async with client:
    result = await client.call_tool("query_data", {"sql_query": "select count(*) from actor"})
```
and then check the result:
```
In [8]: print(result.structured_content['result'])
[
  [
    200
  ]
]
```
It’s working.
Here’s how to access my database, Gemini
Let’s use a different virtual environment to test the Gemini integration with FastMCP:
```bash
uv venv e2
source e2/bin/activate
```
We need to install the google-genai and fastmcp Python modules and provide the Gemini API key (which you need to create using your Google account) as an environment variable before launching ipython3:
```bash
uv pip install fastmcp google-genai
export GEMINI_API_KEY="<redacted>"
ipython3
```
Inside the Python environment, we need to instantiate both a FastMCP client that connects to our local MCP server and a Gemini client (genai.Client() picks up the GEMINI_API_KEY environment variable automatically):
```python
from fastmcp import Client
from google import genai

mcp_client = Client("http://127.0.0.1:8000/sse/")
gemini_client = genai.Client()
```
This time, I wanted to take advantage of the multi-table Sakila database and ask AI something that would require coming up with a multi-join query to answer:
```python
question = "Which films feature Morgan McDormand and are suitable for a general audience?"
```
Here’s how to formulate the full request:
```python
async with mcp_client:
    response = await gemini_client.aio.models.generate_content(
        model="gemini-2.0-flash",
        contents=question,
        config=genai.types.GenerateContentConfig(
            temperature=0,
            tools=[mcp_client.session]
        )
    )
```
Note that providing Gemini with our FastMCP client session is how we tell it to make use of the tools in our MCP server. Here’s its initial reply, in full:
```
In [6]: print(response)
sdk_http_response=HttpResponse(
  headers=<dict len=11>
) candidates=[Candidate(
  avg_logprobs=-0.079243004322052,
  content=Content(
    parts=[
      Part(
        text="""I cannot provide information about film ratings or suitability for specific audiences. However, I can search for films featuring Morgan McDormand. Do you want me to proceed with that?
"""
      ),
    ],
    role='model'
  ),
  finish_reason=<FinishReason.STOP: 'STOP'>
)] create_time=None response_id=None model_version='gemini-2.0-flash' prompt_feedback=None usage_metadata=GenerateContentResponseUsageMetadata(
  candidates_token_count=36,
  candidates_tokens_details=[
    ModalityTokenCount(
      modality=<MediaModality.TEXT: 'TEXT'>,
      token_count=36
    ),
  ],
  prompt_token_count=44,
  prompt_tokens_details=[
    ModalityTokenCount(
      modality=<MediaModality.TEXT: 'TEXT'>,
      token_count=44
    ),
  ],
  total_token_count=80
) automatic_function_calling_history=[] parsed=None
```
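Note the automatic_function_calling_history=[] field near the end: when the SDK invokes MCP tools on the model's behalf, those calls get recorded there, so it doubles as a quick way to confirm whether our server was actually queried (it wasn't here):

```python
# An empty list means the model answered without calling any of our MCP tools
print(response.automatic_function_calling_history)
```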
I still need to find a better way to process these outputs, but here’s a quick hack that will do for now:
```
In [7]: d = response.to_json_dict()

In [8]: print(d['candidates'][0]['content']['parts'][0]['text'])
I cannot provide information about film ratings or suitability for specific audiences. However, I can search for films featuring Morgan McDormand. Do you want me to proceed with that?
```
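Building on the same to_json_dict() hack, a tiny helper (hypothetical, not part of either SDK) keeps the rest of the session readable:

```python
def reply_text(response) -> str:
    """Hypothetical helper: concatenate the text parts of the first candidate."""
    d = response.to_json_dict()
    parts = d['candidates'][0]['content']['parts']
    # Function-call parts have no 'text' key, so default to an empty string
    return "".join(p.get('text', '') for p in parts)
```

With that, print(reply_text(response)) produces the same output as above.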
I tried to be more specific:
[ME]: Which films featuring Morgan McDormand can you find in the local sakila database that are suitable for a general audience?
To which the response was:
[AI]: I cannot provide information about film ratings or suitability for specific audiences. However, I can search for films featuring Morgan McDormand. Do you want me to proceed with that?
Maybe I should increase the “temperature” setting, as “lower temperatures are good for prompts that require a more deterministic or less open-ended response, while higher temperatures can lead to more diverse or creative results.” Let’s try that, while emphasizing that it can, in fact, provide the information I asked for:
```python
async with mcp_client:
    response = await gemini_client.aio.models.generate_content(
        model="gemini-2.0-flash",
        contents="You should be able to find the answer to my question if you query the sakila database",
        config=genai.types.GenerateContentConfig(
            temperature=0.1,
            tools=[mcp_client.session]
        )
    )
```
To which the reply was:
[AI]: Okay, I understand. I will use the query_data function to query the sakila database to find the answer to your question. Please go ahead and ask your question.
Yay, it looks like we are getting there now. Let me try again:
[ME]: Which films feature Morgan McDormand and are suitable for a general audience?
[AI]: I cannot answer the question because the available tools lack the ability to filter films based on actors and audience suitability.
No dice. Here’s the query I was expecting it to produce:
```bash
$ mysql --user=mcpreader --password=secret --host=localhost --database=sakila -e "select film.title from film inner join film_actor on film.film_id=film_actor.film_id inner join actor on film_actor.actor_id=actor.actor_id where film.rating='G' and actor.first_name='MORGAN' and actor.last_name='MCDORMAND'"
+--------------------+
| title              |
+--------------------+
| DIVORCE SHINING    |
| SAGEBRUSH CLUELESS |
+--------------------+
```
“G” stands for general audiences in the Motion Picture Association’s film rating system. But how could AI possibly know about this, right?
Let’s try something simpler, then:
[ME]: How many actors do we find in the sakila database?
[AI]: I cannot directly access the sakila database. However, I can query the actors table within the test database. Would you like me to do that?
There is no “test” database; that wording comes straight from our tool’s docstring. Oh, wait! It also doesn’t know it is connected to sakila. It could have checked that:
```bash
$ mysql --user=mcpreader --password=secret --host=localhost --database=sakila -e "select database()"
+------------+
| database() |
+------------+
| sakila     |
+------------+
```
[ME] Yes, please do that.
[AI] I am ready. Please provide the SQL query you want to execute.
Hmm… I wasn’t expecting that one. By now, I just want to see if it can actually access my test database:
[ME] Okay, run the query: select count(*) from actor
[AI] There are 200 actors in the table.
Yep, that’s about right. Good job…
Different LLMs will provide different outcomes
When I submitted this article to my colleagues for review, prior to having it published, they suggested I might be using the wrong LLM for the job. So far, my interactions with Gemini had all been with the gemini-2.0-flash model, and Vadim suggested I try a more advanced 2.0 model, or even 2.5-pro. I decided to go straight with the pro one, ditched the temperature setting, and repeated my initial question:
```python
question = "Which films feature Morgan McDormand and are suitable for a general audience?"

async with mcp_client:
    response = await gemini_client.aio.models.generate_content(
        model="gemini-2.5-pro",
        contents=question,
        config=genai.types.GenerateContentConfig(
            tools=[mcp_client.session]
        )
    )
```
This time, AI spent more time processing the request, but returned an emphatic response:
[AI]: Morgan McDormand has starred in the following films that are suitable for a general audience:
- DIVORCE SHINING
- SAGEBRUSH CLUELESS
That was so much better! It got it right from the get-go. Here’s a breakdown of the process it used:
```
text="""I can help with that. First, I need to check if I can find the actor in the database. Then, I will search for the films they have been in and filter them by audience rating. I will start by searching for the actor "Morgan McDormand"."""

sql_query: "SELECT * FROM actors WHERE full_name = 'Morgan McDormand'"

text='I apologize, it seems there was an issue with the table name. Let me try again with a different table name. It's possible the table is named `actor` (singular) instead of `actors`. I will also broaden my search to include similar names, in case "Morgan McDormand" is not in the database.'

sql_query: "SELECT * FROM actor WHERE first_name = 'MORGAN' AND last_name = 'MCDORMAND'"

text="It seems I'm having trouble retrieving the data. The error indicates a data type that cannot be handled correctly. I will try to select only the essential information to avoid this issue. Let me try again to find the actor 'Morgan McDormand'."

sql_query: "SELECT actor_id, first_name, last_name FROM actor WHERE first_name = 'MORGAN' AND last_name = 'MCDORMAND'"

text="Great, I have found the actor ID for Morgan McDormand. Now I will search for the films they have starred in that have a 'G' rating, which is suitable for a general audience."

sql_query: "SELECT T2.title, T2.rating FROM film_actor AS T1 INNER JOIN film AS T2 ON T1.film_id = T2.film_id WHERE T1.actor_id = 114 AND T2.rating = 'G'"
```
I still don’t understand why it tries to guess the table structure instead of running a query to get the right details, but that’s fine.
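For what it’s worth, the schema was only ever one query away; these standard MySQL statements would have given it the right table and column names immediately:

```
mysql> show tables;
mysql> describe actor;
```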
One detail I forgot to mention: I was a bit suspicious at first that AI could be running more queries than it was advertising, so I had the general query log enabled on my test server.
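If you want to do the same, the general log can be toggled at runtime from an administrative session (it records every statement, so keep this to test servers):

```
mysql> set global general_log = 'ON';
mysql> show variables like 'general_log%';
```

Note how many seconds pass between each query execution: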
```
2025-07-10T19:18:02.130803Z    40 Connect   mcpreader@localhost on sakila using SSL/TLS
2025-07-10T19:18:02.145585Z    40 Query     SET NAMES 'utf8mb4' COLLATE 'utf8mb4_0900_ai_ci'
2025-07-10T19:18:02.153738Z    40 Query     set autocommit=0
2025-07-10T19:18:02.158572Z    40 Query     SELECT * FROM actors WHERE full_name = 'Morgan McDormand'
2025-07-10T19:18:02.205956Z    40 Quit
2025-07-10T19:18:08.495421Z    41 Connect   mcpreader@localhost on sakila using SSL/TLS
2025-07-10T19:18:08.496699Z    41 Query     SET NAMES 'utf8mb4' COLLATE 'utf8mb4_0900_ai_ci'
2025-07-10T19:18:08.497367Z    41 Query     set autocommit=0
2025-07-10T19:18:08.498485Z    41 Query     SELECT * FROM actor WHERE first_name = 'MORGAN' AND last_name = 'MCDORMAND'
2025-07-10T19:18:08.506876Z    41 Quit
2025-07-10T19:18:22.409341Z    42 Connect   mcpreader@localhost on sakila using SSL/TLS
2025-07-10T19:18:22.410829Z    42 Query     SET NAMES 'utf8mb4' COLLATE 'utf8mb4_0900_ai_ci'
2025-07-10T19:18:22.411516Z    42 Query     set autocommit=0
2025-07-10T19:18:22.411987Z    42 Query     SELECT actor_id, first_name, last_name FROM actor WHERE first_name = 'MORGAN' AND last_name = 'MCDORMAND'
2025-07-10T19:18:22.413449Z    42 Quit
2025-07-10T19:18:30.703746Z    43 Connect   mcpreader@localhost on sakila using SSL/TLS
2025-07-10T19:18:30.704448Z    43 Query     SET NAMES 'utf8mb4' COLLATE 'utf8mb4_0900_ai_ci'
2025-07-10T19:18:30.705126Z    43 Query     set autocommit=0
2025-07-10T19:18:30.705730Z    43 Query     SELECT T2.title, T2.rating FROM film_actor AS T1 INNER JOIN film AS T2 ON T1.film_id = T2.film_id WHERE T1.actor_id = 114 AND T2.rating = 'G'
2025-07-10T19:18:30.718344Z    43 Quit
```
It’s not exactly “flash” in terms of speed, but it did prove to be “pro” (effective) otherwise. The lesson I learned today: if you don’t want to waste time with AI, choose your models wisely. 😉