Batching

Batching lets you send and execute multiple GraphQL operations in a single HTTP request. Hot Chocolate supports two forms of batching: variable batching and request batching. Both deliver results as a stream, so the client receives each result as soon as it is ready without waiting for the entire batch to complete.

Variable batching is based on an open proposal to the GraphQL over HTTP specification.

Enabling Batching

Batching is disabled by default as a security measure. You enable the types of batching you want to allow through the AllowedBatching flags enum using ModifyServerOptions:

C#
builder
    .AddGraphQL()
    .ModifyServerOptions(o => o.Batching = AllowedBatching.VariableBatching);

You can combine flags to enable multiple batching modes:

C#
builder
    .AddGraphQL()
    .ModifyServerOptions(o => o.Batching =
        AllowedBatching.VariableBatching | AllowedBatching.RequestBatching);

To enable all batching modes at once:

C#
builder
    .AddGraphQL()
    .ModifyServerOptions(o => o.Batching = AllowedBatching.All);

Note: If your GraphQL server is a Fusion subgraph, both variable batching and request batching are enabled by default. You do not need to configure this explicitly.

Batch Size Limits

The maximum number of operations in a single batch defaults to 1024. You can adjust this limit:

C#
builder
    .AddGraphQL()
    .ModifyServerOptions(o => o.MaxBatchSize = 2048);

A value of 0 means unlimited.

Variable Batching

Variable batching lets you execute a single operation multiple times with different sets of variables. Instead of sending variables as a single object, you send them as an array of objects:

JSON
{
  "query": "query getHero { hero { name } }",
  "operationName": "getHero",
  "id": "W5vrrAIypCbniaIYeroNnw==",
  "variables": [
    {
      "a": 1,
      "b": "abc"
    },
    {
      "a": 2,
      "b": "def"
    }
  ],
  "extensions": {
    "a": 1,
    "b": "abc"
  }
}

The operation executes once per variable set. Each result in the response stream includes a variableIndex (0-based) so the client can match results back to their corresponding variable set:

{ "data": { "hero": { "name": "R2-D2" } }, "variableIndex": 0 }
{ "data": { "hero": { "name": "Luke Skywalker" } }, "variableIndex": 1 }

Results are not delivered in a guaranteed order: whichever variable set finishes first is streamed to the client first. The variableIndex field is how the client correlates each result back to its input.
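The round trip above can be sketched in Python. This is a minimal client-side sketch, not part of the Hot Chocolate API; the query, the "episode" variable, and the sample result lines are illustrative assumptions. It builds a request body with variables as an array and correlates the out-of-order results via variableIndex:

```python
import json

def build_variable_batch(query: str, variable_sets: list) -> str:
    """Serialize a variable-batched request: one query, variables as an array."""
    return json.dumps({"query": query, "variables": variable_sets})

def correlate(results: list, variable_sets: list) -> list:
    """Pair each (possibly out-of-order) result with the variable set that produced it."""
    return [(variable_sets[r["variableIndex"]], r["data"]) for r in results]

# Illustrative inputs -- the "episode" variable is an assumption, not from the docs.
sets = [{"episode": "EMPIRE"}, {"episode": "JEDI"}]
body = build_variable_batch(
    "query getHero($episode: Episode) { hero(episode: $episode) { name } }", sets)

# Results as they might arrive, out of order; variableIndex ties them back.
stream = [
    {"data": {"hero": {"name": "R2-D2"}}, "variableIndex": 1},
    {"data": {"hero": {"name": "Luke Skywalker"}}, "variableIndex": 0},
]
pairs = correlate(stream, sets)
```

Note that correlate keeps arrival order; a client that needs input order can instead index into a preallocated list of length len(sets).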

Request Batching

Request batching lets you send a JSON array of independent GraphQL operations in a single HTTP request. Each entry in the array is a complete operation with its own query, variables, and operation name.

Individual entries in the array can also use variable batching by providing variables as an array:

JSON
[
  {
    "query": "query getHero { hero { name } }",
    "operationName": "getHero",
    "id": "W5vrrAIypCbniaIYeroNnw==",
    "variables": {
      "a": 1,
      "b": "abc"
    },
    "extensions": {
      "a": 1,
      "b": "abc"
    }
  },
  {
    "query": "query getHero { hero { name } }",
    "operationName": "getHero",
    "id": "W5vrrAIypCbniaIYeroNnw==",
    "variables": [
      {
        "a": 1,
        "b": "abc"
      },
      {
        "a": 2,
        "b": "def"
      }
    ],
    "extensions": {
      "a": 1,
      "b": "abc"
    }
  }
]

Each result includes a requestIndex (0-based) that identifies which entry in the request array it belongs to. When an entry uses variable batching, its results also include a variableIndex:

{ "data": { "hero": { "name": "R2-D2" } }, "requestIndex": 1, "variableIndex": 0 }
{ "data": { "hero": { "name": "Han Solo" } }, "requestIndex": 0 }
{ "data": { "hero": { "name": "Luke Skywalker" } }, "requestIndex": 1, "variableIndex": 1 }

As with variable batching, results arrive in completion order rather than request order. In the example above, the second request (index 1) returned its first variable result before the first request (index 0) completed. The requestIndex and variableIndex fields let the client reassemble the results correctly.
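One way to reassemble such a stream on the client is to group results by requestIndex and order each group by variableIndex. A minimal Python sketch, with the result lines mirroring the example above (the helper name and shapes are illustrative, not a library API):

```python
from collections import defaultdict

def reassemble(results: list) -> dict:
    """Group streamed batch results by request, ordered by variable set."""
    grouped = defaultdict(list)
    for r in results:
        grouped[r["requestIndex"]].append(r)
    for group in grouped.values():
        # Entries that did not use variable batching carry no variableIndex;
        # treat them as a single result at index 0.
        group.sort(key=lambda r: r.get("variableIndex", 0))
    return dict(grouped)

# Results in arrival order, mirroring the example above.
stream = [
    {"data": {"hero": {"name": "R2-D2"}}, "requestIndex": 1, "variableIndex": 0},
    {"data": {"hero": {"name": "Han Solo"}}, "requestIndex": 0},
    {"data": {"hero": {"name": "Luke Skywalker"}}, "requestIndex": 1, "variableIndex": 1},
]
by_request = reassemble(stream)
```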

Response Formats

Batch results are delivered as a result stream. Hot Chocolate streams result data back to your client as soon as each item in the batch has been executed.

The response transport is selected via the Accept header:

Accept header        Transport     Content-Type
-----------------    ----------    -----------------
multipart/mixed      Multipart     multipart/mixed
text/event-stream    SSE           text/event-stream
application/jsonl    JSON Lines    application/jsonl

If no streaming Accept header is provided, the default is multipart/mixed.

JSON Lines (application/jsonl) is well-suited for batch responses. Each result is written as a single line of JSON, making it straightforward for clients to parse results incrementally:

{"requestIndex":0,"data":{"hero":{"name":"R2-D2"}}}
{"requestIndex":1,"data":{"hero":{"name":"Luke Skywalker"}}}
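Consuming such a response amounts to reading one line at a time and decoding each as JSON. A minimal Python sketch; the hard-coded body stands in for an actual streamed HTTP response:

```python
import io
import json

def iter_jsonl(lines):
    """Yield each non-empty line of a JSON Lines stream as a decoded object."""
    for line in lines:
        line = line.strip()
        if line:
            yield json.loads(line)

# Stand-in for a streamed application/jsonl response body.
body = io.StringIO(
    '{"requestIndex":0,"data":{"hero":{"name":"R2-D2"}}}\n'
    '{"requestIndex":1,"data":{"hero":{"name":"Luke Skywalker"}}}\n'
)
names = [result["data"]["hero"]["name"] for result in iter_jsonl(body)]
```

Because each line is a complete JSON document, the client can act on a result the moment its line arrives instead of buffering the whole body.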

If you are using a JavaScript client, consider:

  • meros for handling multipart/mixed responses
  • graphql-sse for handling text/event-stream responses

For more details about streaming transports, see HTTP Transport.

Last updated on April 13, 2026 by Michael Staib