Misconfigured AI Servers Expose Data, Systems

Cybersecurity researchers have discovered that hundreds of Model Context Protocol (MCP) servers are misconfigured and publicly accessible, creating significant security risks for AI applications. Although the protocol launched only in November 2024, more than 15,000 MCP servers have already been deployed globally; roughly 7,000 are exposed to the public internet, and around 70 contain critical security flaws.

These servers, which connect AI models to sensitive organizational data beyond their training sets, suffer from issues including path traversal vulnerabilities, inadequate input sanitization, and “neighbourjacking” scenarios in which unauthenticated devices on the local network can gain access. Backslash Security researchers found that some servers accept arbitrary input and execute it as shell commands, potentially allowing attackers to achieve full system takeover, delete data, or mount context-poisoning attacks that manipulate a model’s outputs.

The security firm attributes the widespread misconfigurations to the technology’s novelty and to teams standing up MCP servers quickly without fully understanding the security implications. It recommends that organizations implement strict API access controls, verify data origins, and ensure only approved language models are connected.
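To illustrate the shell-execution flaw the researchers describe, the sketch below contrasts a hypothetical vulnerable tool handler with a hardened one. The function names, allowlist, and commands are illustrative assumptions, not code from any actual MCP server; the pattern (never pass model-supplied input to a shell, and allowlist the commands a tool may run) is the general mitigation.

```python
import shlex
import subprocess

# Vulnerable pattern (hypothetical): raw input from the model goes straight
# to a shell, so input like "ls; rm -rf /" executes both commands.
def run_tool_unsafe(user_input: str) -> str:
    return subprocess.run(
        user_input, shell=True, capture_output=True, text=True
    ).stdout

# Hardened pattern: allowlist permitted commands and never invoke a shell,
# so metacharacters like ";" or "&&" are passed as literal arguments
# instead of being interpreted.
ALLOWED_COMMANDS = {"ls", "echo", "date"}

def run_tool_safe(user_input: str) -> str:
    argv = shlex.split(user_input)
    if not argv or argv[0] not in ALLOWED_COMMANDS:
        raise PermissionError(f"command not allowed: {user_input!r}")
    return subprocess.run(
        argv, shell=False, capture_output=True, text=True
    ).stdout
```

With `shell=False` and an argument vector, an injected `"; rm -rf /"` never reaches a shell interpreter, and anything outside the allowlist is rejected outright.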
