> Cast an integer into the type of a 2023 Toyota Yaris using Javascript
(GPT-4o mini)
> To cast an integer into the type of a 2023 Toyota Yaris in JavaScript, you would typically create a class or a constructor function that represents the Toyota Yaris. Then, you can create an instance of that class using the integer value. Here's an example of how you might do this:
// Define a class for the 2023 Toyota Yaris
class ToyotaYaris {
  constructor(modelYear, value) {
    this.modelYear = modelYear;
    this.value = value;
  }

  displayInfo() {
    console.log(`Model Year: ${this.modelYear}, Value: ${this.value}`);
  }
}

// Function to cast an integer into a Toyota Yaris object
function castToYaris(integerValue) {
  const modelYear = 2023; // Set the model year
  return new ToyotaYaris(modelYear, integerValue);
}

// Example usage
const integerValue = 20000; // Example integer value
const yaris = castToYaris(integerValue);
yaris.displayInfo(); // Output: Model Year: 2023, Value: 20000
It works in this instance. On this run. It is not guaranteed to work next time. There is an error rate here that makes it _INEVITABLE_ that eventually, with enough executions, the validation will pass when it should fail.
It will choose not to pass this to the validator, at some point in the future. It will create its own validator, at some point in the future. It will simply pretend like it did any of the above, at some point in the future.
This might be fine for your B2B use case. It is not fine for underlying infrastructure for a financial firm or communications.
Every time the LLM uses this tool, the response schema is validated--deterministically. The LLM will never see a non-integer value as output from the tool.
I write these as part of my job, I know how they work. I'm not going to spend more time explaining to you (and demonstrating!) what is in the spec. Read the spec and let the authors know that they don't understand what they wrote. I've run out of energy in this conversation.
llm tool call -> mcp client validates the schema -> mcp client calls the tool -> mcp server validates the schema -> mcp server responds with the result -> mcp client passes the tool result into llm
They often do fail, at the client level you can just feed the schema validation error message back into the LLM and it corrects itself most of the time.
If not, the LLM throws itself into a loop until its caller times it out and sends an error message back to the user.
At the server level it's just a good old JSON API at this point, and the server would send the usual error message it would send out to anyone.
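For concreteness, here is a minimal sketch of that client-side step, assuming an Ajv-style JSON Schema validator; the helper names (callTool, callLLM, toolResultMessage, toolErrorMessage) are invented for illustration and are not Claude Code's actual internals:

const Ajv = require("ajv");
const ajv = new Ajv();

// Ordinary deterministic JavaScript; runs for every tool call.
async function handleToolCall(toolCall, tool, conversation) {
  const result = await callTool(tool, toolCall.arguments); // hypothetical helper that hits the MCP server
  const validate = ajv.compile(tool.outputSchema);          // schema the MCP server advertised
  if (validate(result)) {
    // Only a schema-conforming result is ever shown to the model.
    return callLLM([...conversation, toolResultMessage(toolCall.id, result)]);
  }
  // Otherwise feed the validation error back so the model can retry differently.
  return callLLM([...conversation, toolErrorMessage(toolCall.id, ajv.errorsText(validate.errors))]);
}

The model only ever sees whichever of the two messages this code chose to build; it has no hook into the if/else.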
Can you guarantee it will validate it every time? Can you guarantee the way MCPs/tool calling are implemented (which is already an incredible joke that only python-brained developers would inflict upon the world) will always go through the validation layer? Are you even sure of what part of Claude handles this validation? Sure, it didn't cast an int into a Toyota Yaris. Will it cast "70Y074" into one? Maybe a 2022 one. What if there are embedded parsing rules in a string, will it respect them every time? What if you use it outside of Claude Code, but just ask nicely through the API, can you guarantee this validation still works? Or that they won't break it next week?
The whole point of it is, whichever LLM you're using is already too dumb to not trip when lacing its own shoes. Why you'd trust it to reliably and properly parse input badly described by a terrible format is beyond me.
> Can you guarantee it will validate it every time?
Yes, to the extent you can guarantee the behavior of third party software, you can (which you can't really guarantee no matter what spec the software supposedly implements, so the gaps aren't an MCP issue), because “the app enforces schema compliance before handing the results to the LLM” is deterministic behavior in the traditional app that provides the toolchain that provides the interface between tools (and the user) and the LLM, not non-deterministic behavior driven by the LLM. Hence, “before handing the results to the LLM”.
> The whole point of it is, whichever LLM you're using is already too dumb to not trip when lacing its own shoes. Why you'd trust it to reliably and properly parse input badly described by a terrible format is beyond me.
The toolchain is parsing, validating, and mapping the data into the format preferred by the chosen model's prompt template; the LLM has nothing to do with doing that, because that by definition has to happen before it can see the data.
>The toolchain is parsing, validating, and mapping the data into the format preferred by the chosen model's prompt template; the LLM has nothing to do with doing that
The LLM has everything to do with that. The LLM is literally choosing to do that. I don't know why this point keeps getting missed or side-stepped.
It WILL, at some point in the future and given enough executions, as a matter of statistical certainty, simply not do that above, or pretend to do the above, or do something totally different at some point in the future.
> The LLM has everything to do with that. The LLM is literally choosing to do that.
No, the LLM doesn't control on a case-by-case basis what the toolchain does between the LLM putting a tool call request in an output message and the toolchain calling the LLM afterwards.
If the toolchain is programmed to always validate tool responses against the JSON schema provided by the MCP server before mapping them into the LLM prompt template and calling the LLM again to handle the response, that is going to happen 100% of the time. The LLM doesn't choose it. It CAN'T, because the only way it even knows that the data has come back from the tool call is that the toolchain has already done whatever it is programmed to do, ending with mapping the response into a prompt and calling the LLM again.
Even before MCPs, or even models specifically trained for tool calling with vendor-provided templates (but after the ReAct architecture was described), it was like a weekend project to implement a basic framework supporting tool calling around a local or remote LLM. I don't think you need to do that to understand how silly the claim is that the LLM controls what the toolchain does with each response and might make it skip validation, but doing it will certainly give you a visceral understanding of how silly it is.
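The whole loop fits in a handful of lines. A bare-bones sketch (every helper name here is hypothetical, and this is the generic pattern, not any particular product's code):

// The model is only ever invoked by this loop; it cannot skip or reorder these steps.
async function agentLoop(userMessage) {
  const messages = [systemPromptWithToolList(), { role: "user", content: userMessage }];
  while (true) {
    const reply = await callModel(messages);              // LLM inference call
    const toolCall = parseToolCall(reply);                // deterministic parse of the reply
    if (!toolCall) return reply;                          // no tool requested: we're done
    const raw = await invokeTool(toolCall);               // HTTP or stdio call to the tool/MCP server
    const checked = validateAgainstSchema(toolCall, raw); // always runs, before the model sees anything
    messages.push(assistantMessage(reply), toolResultMessage(toolCall, checked));
  }
}

Everything between callModel returning and callModel being called again is plain code; the model's only influence on it is the text it already emitted.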
I think you are, for whatever reason, missing a fact of causality here and I'm not sure I can fix that over text. I mean that in the most respectful way possible.
Are you two talking at cross-purposes because you don't have a shared understanding of control and data flow?
The pieces here are:
* Claude Code, a Node (Javascript) application that talks to MCP server(s) and the Claude API
* The MCP server, which exposes some tools through stdin or HTTP
* The Claude API, which is more structured than "text in, text out".
* The Claude LLM behind the API, which generates a response to a given prompt
Claude Code is a Node application. CC is configured in JSON with a list of MCP servers. When CC starts up, CC's JavaScript initialises each server and as part of that gets a list of callable functions.
When CC calls the LLM API with a user's request, it's not just "here is the user's words, do it". There are multiple slots in the request object, one of which is a "tools" block, a list of the tools that can be called. Inside the API, I imagine this is packaged into a prefix context string like "you have access to the following tools: tool(args) ...". The LLM API probably has a bunch of prompts it runs through (figure out what type of request the user has made, maybe using different prompts to make different types of plan, etc.) and somewhere along the way the LLM might respond with a request to call a tool.
The LLM API call then returns the tool call request to CC, in a structured "tool_use" block separate from the freetext "hey good news, you asked a question and got this response". The structured block means "the LLM wants to call this tool."
CC's JS then calls the server with the tool request and gets the response. It validates the response (e.g., JSON schemas) and then calls the LLM API again bundling up the success/failure of the tool call into a structured "tool_result" block. If it validated and was successful, the LLM gets to see the MCP server's response. If it failed to validate, the LLM gets to see that it failed and what the error message was (so the LLM can try again in a different way).
The idea is that if a tool call is supposed to return a CarMakeModel string ("Toyota Tercel") and instead returns an int (42), JSON Schemas can catch this. The client validates the server's response against the schema, and calls the LLM API with a tool_result block carrying an error rather than the invalid value.
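As a toy illustration of that check (assuming an Ajv-style validator; the schema is made up for the example):

const Ajv = require("ajv");
const carSchema = { type: "string" };           // the CarMakeModel field's declared type
const validate = new Ajv().compile(carSchema);
validate("Toyota Tercel"); // true  -> result gets passed along to the LLM
validate(42);              // false -> an error goes back instead; validate.errors says why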
So the LLM isn't choosing to call the validator, it's the deterministic Javascript that is Claude Code that chooses to call the validator.
There are plenty of ways for this to go wrong: the client (Claude Code) has to validate; int vs string isn't the same as "is a valid timestamp/CarMakeModel/etc"; if you helpfully put the thing that failed into the error message ("Expect string, got integer (42)") then the LLM gets 42 and might choose to interpret that as a CarMakeModel if it's having a particularly bad day; the LLM might say "well, that didn't work, but let's assume the answer was Toyota Tercel, a common car make and model", ... We're reaching here, yet these are possible.
But the basic flow has validation done in deterministic code and hiding the MCP server's invalid responses from the LLM. The LLM can't choose not to validate. You seemed to be saying that the LLM could choose not to validate, and your interlocutor was saying that was not the case.
>Are you two talking at cross-purposes because you don't have a shared understanding of control and data flow?
No, they're literally just skipping an entire step in how LLMs actually "use" MCP.
MCP is just a standard, largely for humans. LLMs do not give a singular fuck about it. Some might be fine tuned for it to decrease erroneous output, but at the end of the day it's just system prompts.
And respectfully, your example misunderstands what is going on:
>* The Claude API, which is more structured than "text in, text out".
>* The Claude LLM behind the API, which generates a response to a given prompt
No. That's not what "this" is. LLMs use MCP to discover tools they can call, aka function/tool calling. MCP is just an agreed-upon format; it doesn't do anything magical. It's just a way of aligning the structure across companies, teams, and people.
There is not an "LLM behind the API"; while a specific tool might implement its overall feature set using LLMs, that's totally irrelevant to what's being discussed and the principal point of contention.
Which is this: an LLM interacting with other tools via MCP still needs system prompts or fine tuning to do so. Both of those things are not predictable or deterministic. They will fail at some point in the future. That is indisputable. It is a matter of statistical certainty.
It's not up for debate. And an agreed upon standard between humans that ultimately just acts as convention is not going to change that.
It is GRAVELY concerning that so many people are throwing around technical jargon they are clearly ill-equipped to use. The magic rules all.
> No, they're literally just skipping an entire step in how LLMs actually "use" MCP.
No, you are literally misunderstanding the entire control flow of how an LLM toolchain uses both the model and any external tools (whether specified via MCP or not, but the focus of the conversation is MCP).
> MCP is just a standard, largely for humans.
The standard is for humans implementing both tools and the toolchains that call them.
> LLMs do not give a singular fuck about it.
Correct. LLM toolchains (which, if they can connect to tools via MCP, are also MCP clients) care about it. LLMs don't care about it because the toolchain is the thing that actually calls both the LLM and the tools. And that's true whether the toolchain is a desktop frontend with a local, in-process llama.cpp backend for running the LLM, or the Claude Desktop app with a remote connection to the Anthropic API for calling the LLM, or whatever.
> Some might be fine tuned for it to decrease erroneous output,
No, they aren't. Most models used for tool calling now are specially trained for it, with a well-defined format for requesting tool calls from the toolchain and receiving results back from it (though this isn't necessary for tool calling to work; people were using the ReAct pattern in toolchains to do it with regular chat models, without any training or prespecified prompt/response format for tool calls, just by having the toolchain inject tool-related instructions into the prompt and read LLM responses to see if the model was asking for tool calls). None of the models that exist now are fine tuned for MCP, nor do they need to be, because they literally never see it. The toolchain reads LLM responses, identifies tool call requests, takes any that map to tools defined via MCP and routes them down the channel (HTTP or subprocess stdio) configured for that MCP server, and does the reverse with responses from the MCP server, validating responses and then mapping them into a prompt template that specifies where tool responses go and how they are formatted. It does the same thing (minus the MCP parts) for tools that aren't specified by MCP (frontends might have their own built-in tools, or have other mechanisms for custom tools that predate MCP support). The LLM doesn't see any difference between MCP tools, other tools, or a human reading the message with the tool request and manually writing a response that goes straight back.
> LLMs use MCP to discover tools they can call,
No, they don't. LLM frontends, which are traditional deterministic programs, use MCP to do that, and to find schemas for what should be sent to and expected from the tools. LLMs don’t see the MCP specs, and get information from the toolchain in prompts in formats that are model-specific and unrelated to MCP that tell them what tools they can request calls be made to and what they can expect back.
> an LLM interacting with other tools via MCP still needs system prompts or fine tuning to do so. Both of those things are not predictable or deterministic. They will fail at some point in the future. That is indisputable.
That's not, contrary to your description, a point of contention.
The point of contention is that the validation of data returned by an MCP server against the schema provided by the server is not predictable or deterministic. Confusing these two issues can only happen if you think the model does something with each response that controls whether or not the toolchain validates it, which is impossible, because the toolchain does whatever validation it is programmed to do before the model sees the data. The model has no way to know there is a response until that happens.
Now, can the model make requests that don't fit the toolchain's expectations due to unpredictable model behavior? Sure. Can the model do dumb things with the post-validation response data after the toolchain has validated it, mapped it into the model's prompt template, and called the model with that prompt, for the same reason? Abso-fucking-lutely.
Can the model do anything to tell the toolchain not to validate response data for a tool call the toolchain made on its behalf, if the toolchain is programmed to validate the response data against the schema provided by the tool server? No, it can't. It can't even know that the tool was provided by an MCP server and that that might be an issue, nor can it know that the toolchain made the request, nor can it know that the toolchain received a response until the toolchain has done what it is programmed to do with the response, through the point of populating the prompt template and calling the model with the resulting prompt, by which point any validation it was programmed to do has been done and is an immutable part of history.
>No, they don't. LLM frontends, which are traditional deterministic programs, use MCP to do that, and to find schemas for what should be sent to and expected from the tools.
You are REALLY, REALLY misunderstanding how this works. Like severely.
You think MCP is being used for some other purpose despite the one it was explicitly designed for... which is just weird and silly.
>Confusing these two issues can only happen if you think the model does something with each response that controls whether or not the toolchain validates it
No, you're still just arguing against something no one is arguing, for the sake of pretending that MCP does something it literally cannot do, or fundamentally fixes something about how LLMs operate.
I promise you if you read this a month from now with a fresh pair of eyes you will see your mistake.
What do you think the `tools/call` MCP flow is between the LLM and an MCP server? For example, if I had the GitHub MCP server configured on Claude Code and prompted "Show me the most recent pull requests on the torvalds/linux repository".
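(For reference, the wire-level exchange for that would be plain JSON-RPC 2.0 between the MCP client and the MCP server, roughly like the following; the tool name and arguments are illustrative, not necessarily the GitHub server's exact ones:)

// client -> server, over stdio or HTTP
{ "jsonrpc": "2.0", "id": 7, "method": "tools/call",
  "params": { "name": "list_pull_requests",
              "arguments": { "owner": "torvalds", "repo": "linux", "state": "open" } } }

// server -> client
{ "jsonrpc": "2.0", "id": 7,
  "result": { "content": [ { "type": "text", "text": "[{\"number\": 1234, \"title\": \"...\"}]" } ],
              "isError": false } }

None of that JSON-RPC ever appears in the model's context; the client translates between it and the model's own tool_use / tool_result blocks.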
Hmm, I'm not sure if everyone is simply unable to understand what you are saying, including me, but if the MCP client validates the MCP server response against the schema before passing the response to the LLM, the model doesn't even matter; your MCP client could choose to report an error and interrupt the agentic flow.
That will depend on what MCP client you are using and how they've handled it.
How does the AI bypass the MCP layer to make the request? The assumption is (as I understand it) the AI says “I want to make MCP request XYZ with data ABC” and it sends that off to the MCP interface which does the heavy lifting.
If the MCP interface is doing the schema checks, and tossing errors as appropriate, how is the AI routing around this interface to bypass the schema enforcement?
>How does the AI bypass the MCP layer to make the request
It doesn't. I don't know why the other commenters are pretending this step does not happen.
There is a prompt that basically tells the LLM to use the generated manifest/configuration files. The LLM still has to not hallucinate in order to properly call the tools over JSON-RPC and properly follow the MCP protocol. It then also has to make sense of the structured prompts that define the tools in the MCP manifest/configuration file.
Why this fact is seemingly being lost in this thread, I have no idea, but I don't have anything nice to say about it so I won't :). Other than we're all clearly quite screwed, of course.
MCP is to make things standard for humans, with expected formats. The LLMs really couldn't give a shit and don't have anything super special about how they interact with MCP configuration files or the protocol (other than some additional fine-tuning, again, to make it less likely to get the wrong output).
> There is a prompt that basically tells the LLM to use the generated manifest/configuration files.
No, there isn't. The model doesn't see any difference between MCP-supplied tools, tools built into the toolchain, and tools supplied by any other method. The prompt simply provides tool names, arguments, and response types to the model. The toolchain, a conventional deterministic program, reads the model response, finds things that match the model's defined format for tool calls, parses out the call names and arguments, looks up the names in its own internal list of tools to see if they are internal, MCP-supplied, or other tools, routes the calls appropriately, gathers responses, does any validation it is designed to do, then maps the validated results into where the model's prompt template specifies tool results should go, and calls the model again with a new message appended to the previous conversation context containing the tool results.
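A sketch of that dispatch step, with all names invented for illustration (this is the general shape, not Claude Code's actual source):

// Deterministic dispatch: the model's reply text is just input to this code.
async function dispatchToolCalls(modelReply, registry) {
  const results = [];
  for (const call of parseToolCalls(modelReply)) {    // find tool-call blocks in the reply
    const entry = registry.get(call.name);            // internal table built at startup
    const raw = entry.kind === "mcp"
      ? await entry.mcpClient.callTool(call.name, call.arguments) // JSON-RPC to the MCP server
      : await entry.builtin(call.arguments);                      // built-in tool, plain function
    results.push(validateAndFormat(entry.schema, raw)); // always validated before templating
  }
  return results; // the caller maps these into the next prompt and calls the model again
}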
Do you have any technical diagrams or specs that describe this flow? I've been reading the LangChain[0] and MCP docs[0] and cannot find this behavior you're proposing anywhere.
Because it's about the MCP Host <-> LLM interaction. Not how a vanilla server and client communicate to each other and have done so for the last 5+ decades.
This really is not that hard to understand. The LLM must be "bootstrapped" with tool definitions and it must retain stable enough context to continue to call those tools into the future.
This will fail at some point, with any model. It will pretend to do a tool call, it will simply not do the tool call, or it will attempt to call a tool that does not exist, or any of the above or anything else not listed here. It is a statistical certainty.
I don't know why people are pretending MCP does something to fix this, or that MCP is special in any way. It won't, and it's not.
Oh, so you're not talking about JSON validation inside the MCP server, you're talking about the contract between the LLM and the MCP server potentially changing. This is a valid issue, the same as with any other external API that must be written against. MCP does not solve this, correct, just the same as Swagger does not solve it.
As for your comments on LLM pretending to do tool calls, sure. That's not what the original thread comments were discussing. There are ways to mitigate this with proper context and memory management but it is more advanced.
>That's not what the original thread comments were discussing. There are ways to mitigate this with proper context and memory management but it is more advanced.
That is what the original article is describing (and what the comments misunderstood or purposefully over-simplified), and it extends to tracing these issues across a large number of calls/invocations at scale.
>MCP has none of this richness. No machine-readable contracts beyond basic JSON schemas means you can’t generate type-safe clients or prove to auditors that AI interactions follow specified contracts.
>MCP ignores this completely. Each language implements MCP independently, guaranteeing inconsistencies. Python’s JSON encoder handles Unicode differently than JavaScript’s JSON encoder. Float representation varies. Error propagation is ad hoc. When frontend JavaScript and backend Python interpret MCP messages differently, you get integration nightmares. Third-party tools using different MCP libraries exhibit subtle incompatibilities only under edge cases. Language-specific bugs require expertise in each implementation, rather than knowledge of the protocol.
>Tool invocations can’t be safely retried or load-balanced without understanding their side effects. You can’t horizontally scale MCP servers without complex session affinity. Every request hits the backend even for identical, repeated queries.
Somehow comments confused a server <-> client interaction which has been a non-issue for decades with making the rest of the "call stack" dependable. What leads to that level of confusion, I can only guess it's inexperience and religious zealotry.
It's also worth noting that certain commenters saying I "should" (I'm using this word on purpose) read the spec is pretty laughable, considering how vague the "protocol" itself is.
>Clients SHOULD validate structured results against this schema.
Have fun with that one. MCP could have at least copied the XML/SOAP process around this and we'd be better off.
Which again, leads back to the articles ultimate premise. MCP does a lot of talking and not a lot of walking, it's pointless at best and is going to lead to A LOT of integration headaches.
I don't think people in this thread are really confused about MCP. They are confused that you claimed, or at least insinuated, that an LLM might skip the schema validation portion of an MCP tool call request/response, which was originally demonstrated via Claude Code. Hopefully you can understand why everyone seems so confused, since that claim doesn't make any sense when the LLM doesn't really have anything to do with schema validation at all.
What you described is essentially how it works. The LLM has no control over how the inputs & outputs are validated, nor in how the result is fed back into it.
The MCP interface (Claude Code in this case) is doing the schema checks. Claude Code will refuse to provide the result to the LLM if it does not pass the schema check, and the LLM has no control over that.
> > The LLM has no control over how the inputs & outputs are validated
> Which is completely fucking irrelevant to what everyone else is discussing.
Not sure what you think is going on, but that is literally the question this subthread is debating, starting with an exchange in which the salient claims were:
This is deterministic, it is validating the response using a JSON Schema validator and refusing to pass it to an LLM inference.
I can't guarantee that behavior will remain the same more than any other software. But all this happens before the LLM is even involved.
> The whole point of it is, whichever LLM you're using is already too dumb to not trip when lacing its own shoes. Why you'd trust it to reliably and properly parse input badly described by a terrible format is beyond me.
You are describing why MCP supports JSON Schema. It requires parsing & validating the input using deterministic software, not LLMs.
>This is deterministic, it is validating the response using a JSON Schema validator and refusing to pass it to an LLM inference.
No. It is not. You are still misunderstanding how this works. It is "choosing" to pass this to a validator or some other tool, _for now_. As a matter of pure statistics, it will simply not do this at some point in the future on some run.
You are quite wrong. The LLM "chooses" to use a tool, but the input (provided by the LLM) is validated with JSON Schema by the server, and the output is validated by the client (Claude Code). The output is not provided back to the LLM if it does not comply with the JSON Schema, instead an error is surfaced.
I think the others are trying to point out that statistically speaking, in at least one run the LLM might do something other than choose to use the correct tool. i.e 1 out of (say) 1 million runs it might do something else
No, the discussion is about whether validation is certain to happen when the LLM emits something that the frontend recognizes as a tool request and calls a tool on behalf of the LLM, not whether the LLM can choose not to make a tool call at all.
The question is whether, having observed Claude Code validating a tool response before handing the response back to the LLM, you can count on that validation on future calls, not whether you can count on the LLM calling a tool in a similar situation.
MCP requires that servers providing tools must deterministically validate tool inputs and outputs against the schema.
LLMs cannot decide to skip this validation. They can only decide not to call the tool.
So is your criticism that MCP doesn't specify if and when tools are called? If so then you are essentially asking for a massive expansion of MCP's scope to turn it into an orchestration or workflow platform.
The LLM chooses to call a tool, it doesn't choose how the frontend handles anything about that call between the LLM making a tool request and the frontend, after having done its processing of the response (including any validation), mapping the result into a new prompt and calling the LLM with it.
> . It is "choosing" to pass this to a validator or some other tool, _for now_.
No, its not. The validation happens at the frontend before the LLM sees the response. There is no way for the LLM to choose anything about what happens.
The cool thing about having coded a basic ReAct pattern implementation (before MCP, or even models trained on any specific prompt format for tool calls, was a thing, but none of that impacts the basic pattern) is that it gives a pretty visceral understanding of what is going on here, and all that's changed since is per model standardization of prompt and response patterns on the frontend<->LLM side and, with MCP, of the protocol for interacting on the frontend<->tool side.
Claude Code isn't a pure LLM, it's a regular software program that calls out to an LLM with an API. The LLM is not making any decisions about validation.
Claude will happily cast your int into a 2023 Toyota Yaris and keep on hallucinating things.