Hi,
I’m experiencing a persistent MCP connection error on two separate FileMaker Cloud servers. When connecting via mcp-remote (from Claude Desktop or directly from Terminal), I get:
```
Internal Server Error: Already connected to a transport. Call close() before connecting to a new transport, or use a separate Protocol instance per connection.
```
Environment:
- FileMaker Cloud (fmcloud.fm hosted)
- Two different servers affected: `digimidi.fmcloud.fm` and `fidelami.fmcloud.fm`
- Client: `mcp-remote@0.1.37` via `npx`
- Claude Desktop as MCP client (also tested directly from Terminal with the same result)
- macOS, Node.js v20.20.1
What I’ve tried (none of these fixed the issue):
- Killed all local `mcp-remote` processes (`pkill -9 -f mcp-remote`)
- Cleared the `~/.mcp-auth` cache
- Restarted the MCP Server from the OttoFMS console
- Restarted OttoFMS entirely
- Restarted FileMaker Server
- Tested with `--transport sse-only` → returns 400 (SSE not supported)
- Tested with `--transport http-first` → same "Already connected" error
- Waited over an hour between attempts; the error persists
- Sent manual JSON-RPC `initialize` and `notifications/cancelled` requests via `curl` (see the example after this list); same error
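For completeness, this is roughly what the manual `curl` test looked like. The endpoint and bearer key are the same placeholders as in the repro below, and the `protocolVersion` / `clientInfo` values are only illustrative, not anything OttoFMS requires:

```bash
# Manual JSON-RPC initialize against the MCP endpoint (Streamable HTTP transport).
# The Accept header has to allow both JSON and SSE responses.
curl -sS -X POST "https://digimidi.fmcloud.fm/otto/mcp/mcp-digimidi-prod" \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -H "Authorization: Bearer dk_xxxxx" \
  -d '{
        "jsonrpc": "2.0",
        "id": 1,
        "method": "initialize",
        "params": {
          "protocolVersion": "2025-03-26",
          "capabilities": {},
          "clientInfo": { "name": "curl-test", "version": "1.0.0" }
        }
      }'
# Both affected servers answer this with the same -32000 "Already connected" error.
```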
Steps to reproduce:
```bash
pkill -9 -f mcp-remote
rm -rf ~/.mcp-auth
# Wait 30 seconds
npx mcp-remote@latest https://digimidi.fmcloud.fm/otto/mcp/mcp-digimidi-prod \
  --header "Authorization: Bearer dk_xxxxx" 2>&1
```
Output:
```
Connecting to remote server: https://digimidi.fmcloud.fm/otto/mcp/mcp-digimidi-prod
Using transport strategy: http-first
Connection error: StreamableHTTPError: Streamable HTTP error: Error POSTing to endpoint:
{"jsonrpc":"2.0","error":{"code":-32000,"message":"Internal Server Error Already connected to a transport. Call close() before connecting to a new transport, or use a separate Protocol instance per connection."}}
```
Analysis: This appears to be a server-side issue where the MCP `Server`/`Protocol` instance is a singleton that doesn't get reset when the previous transport disconnects (or crashes). The MCP SDK's `Protocol.connect()` throws exactly this error when it is called a second time without `close()` being called first. Even a full restart of FMS + OttoFMS doesn't clear the stale transport state.
A third server (`fmsvg2.fmcloud.fm`) with a similar setup is working fine.
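To show the comparison as a small sketch: the fidelami and fmsvg2 endpoint names and the keys below are placeholders (I'm not posting the real ones). The point is that an identical payload succeeds or fails purely depending on the host:

```bash
# Same JSON-RPC initialize body as in the earlier curl example.
INIT_PAYLOAD='{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2025-03-26","capabilities":{},"clientInfo":{"name":"curl-test","version":"1.0.0"}}}'

# Send one initialize request to a given MCP endpoint with a given API key.
mcp_init () {
  curl -sS -X POST "$1" \
    -H "Content-Type: application/json" \
    -H "Accept: application/json, text/event-stream" \
    -H "Authorization: Bearer $2" \
    -d "$INIT_PAYLOAD"
}

mcp_init "https://digimidi.fmcloud.fm/otto/mcp/mcp-digimidi-prod" "dk_xxxxx"  # -32000 "Already connected"
mcp_init "https://fidelami.fmcloud.fm/otto/mcp/<endpoint-name>"   "dk_xxxxx"  # -32000 "Already connected"
mcp_init "https://fmsvg2.fmcloud.fm/otto/mcp/<endpoint-name>"     "dk_xxxxx"  # initializes normally
```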
Questions:
- Could this be related to the OttoFMS version? Which version introduced the fix for per-connection `Server` instances?
- Is there a known workaround?
- Note that I have already restarted the complete FMS server and the error still occurs.
Thanks for any help!