MCP Server - "Already connected to a transport" error persists after full restart of FMS + OttoFMS

Hi,

I’m experiencing a persistent MCP connection error on two separate FileMaker Cloud servers. When connecting via mcp-remote (from Claude Desktop or directly from Terminal), I get:

Internal Server Error: Already connected to a transport. Call close() before connecting to a new transport, or use a separate Protocol instance per connection.

Environment:

  • FileMaker Cloud (fmcloud.fm hosted)
  • Two different servers affected: digimidi.fmcloud.fm and fidelami.fmcloud.fm
  • Client: mcp-remote@0.1.37 via npx
  • Claude Desktop as MCP client (also tested directly from Terminal with same result)
  • macOS, Node.js v20.20.1

What I’ve tried (none of these fixed the issue):

  1. Killed all local mcp-remote processes (pkill -9 -f mcp-remote)
  2. Cleared ~/.mcp-auth cache
  3. Restarted the MCP Server from the OttoFMS console
  4. Restarted OttoFMS entirely
  5. Restarted FileMaker Server
  6. Tested with --transport sse-only → returns 400 (SSE not supported)
  7. Tested with --transport http-first → same “Already connected” error
  8. Waited over an hour between attempts — error persists
  9. Sent manual JSON-RPC initialize and notifications/cancelled requests via curl — same error
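For reference, the manual initialize request from step 9 was shaped roughly like this (a sketch; the exact protocolVersion and clientInfo values here are illustrative, not what I actually sent):

```typescript
// A sketch of the JSON-RPC "initialize" request sent manually via curl in
// step 9. The protocolVersion and clientInfo values are illustrative.
const initializeRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "initialize",
  params: {
    protocolVersion: "2024-11-05",
    capabilities: {},
    clientInfo: { name: "curl-test", version: "0.0.1" },
  },
};

// This JSON body goes in the POST to the MCP endpoint, with the same
// Authorization header as the mcp-remote command below.
const body = JSON.stringify(initializeRequest);
console.log(body.includes('"method":"initialize"')); // true
```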

Steps to reproduce:

```bash
pkill -9 -f mcp-remote
rm -rf ~/.mcp-auth
# Wait 30 seconds
npx mcp-remote@latest https://digimidi.fmcloud.fm/otto/mcp/mcp-digimidi-prod \
  --header "Authorization: Bearer dk_xxxxx" 2>&1
```

Output:

```
Connecting to remote server: https://digimidi.fmcloud.fm/otto/mcp/mcp-digimidi-prod
Using transport strategy: http-first
Connection error: StreamableHTTPError: Streamable HTTP error: Error POSTing to endpoint:
{"jsonrpc":"2.0","error":{"code":-32000,"message":"Internal Server Error Already connected to a transport. Call close() before connecting to a new transport, or use a separate Protocol instance per connection."}}
```

Analysis: This appears to be a server-side issue where the MCP Server/Protocol instance is a singleton that doesn’t get reset when the previous transport disconnects (or crashes). The MCP SDK’s Protocol.connect() throws this error when called a second time without close() being called first. Even a full restart of FMS + OttoFMS doesn’t clear the stale transport state.
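To illustrate the suspected failure mode, here is a minimal self-contained sketch (not the actual MCP SDK code, just a model of the guard that produces this error message) showing why a shared instance breaks after one stale disconnect while per-connection instances do not:

```typescript
// Minimal model of the MCP SDK's guard: connect() refuses a second
// transport unless close() was called first.
class Protocol {
  private transport: object | null = null;

  connect(transport: object): void {
    if (this.transport) {
      throw new Error(
        "Already connected to a transport. Call close() before connecting " +
        "to a new transport, or use a separate Protocol instance per connection."
      );
    }
    this.transport = transport;
  }

  close(): void {
    this.transport = null;
  }
}

// Anti-pattern: one shared (singleton) instance for all incoming clients.
// If a client drops without the server calling close(), every later
// connect() attempt fails with the error above.
const shared = new Protocol();
shared.connect({ id: "client-1" }); // client-1 later crashes; close() never runs
let secondConnectFailed = false;
try {
  shared.connect({ id: "client-2" }); // throws
} catch {
  secondConnectFailed = true;
}

// Fix: a fresh instance per connection, so stale state on one
// connection can never block another.
const newProtocol = () => new Protocol();
newProtocol().connect({ id: "client-1" });
newProtocol().connect({ id: "client-2" }); // fine

console.log(secondConnectFailed); // true
```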

A third server (fmsvg2.fmcloud.fm) with a similar setup is working fine.

Questions:

  1. Could this be related to the OttoFMS version? Which version introduced the fix for per-connection Server instances?
  2. Is there a known workaround?
  3. Since even a complete restart of the FMS server doesn't clear it, is there anything else I can try on my end?

Thanks for any help!

Hey Antoine,

This issue was fixed in OttoFMS 4.16.2, a simple upgrade should solve it for you. Thanks!

-Kyle