# Batch Requests
Send multiple operations in one encrypted round-trip. Batching reduces network overhead, keeps sequential workflows together, and — with atomic mode — guarantees all mutations either succeed or roll back together.
Batch requests use the same `batchBroadcast` helper introduced in *Making Requests*. The difference is the payload structure: instead of a single operation, you send an array and choose whether execution should be independent or transactional.
## Envelope structure
Every batch message is a plain object with two optional arrays:
```jsonc
{
  "batch": [],       // independent requests; failures are isolated
  "atomicBatch": []  // transactional requests; all-or-nothing
}
```

- Items in `batch` run independently. A failure in one does not affect the rest.
- Items in `atomicBatch` run sequentially as a transaction. Any failure rolls back the entire set.
- You can populate one or both arrays. Regular items execute before the atomic batch, so any data they create is available to atomic operations in the same envelope.
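The envelope can be modeled as a plain TypeScript type. This is a sketch: the field names `batch` and `atomicBatch` come from the envelope above, while `BatchRequestItem` and the `makeEnvelope` helper are illustrative, not part of the SDK.

```ts
// Sketch of the envelope shape. `action`, `resource`, and `details`
// mirror the request objects used in the examples on this page.
interface BatchRequestItem {
  action: string
  resource: string
  details: Record<string, unknown>
}

interface BatchEnvelope {
  batch?: BatchRequestItem[]       // independent; failures are isolated
  atomicBatch?: BatchRequestItem[] // transactional; all-or-nothing
}

// Hypothetical helper: build an envelope, omitting empty arrays
// since both fields are optional.
function makeEnvelope(
  regular: BatchRequestItem[],
  atomic: BatchRequestItem[],
): BatchEnvelope {
  const envelope: BatchEnvelope = {}
  if (regular.length > 0) envelope.batch = regular
  if (atomic.length > 0) envelope.atomicBatch = atomic
  return envelope
}
```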
## Choosing the right mode
| Mode | Execution | Rollback on failure | Use case |
|---|---|---|---|
| Regular batch | Parallel | No | Unrelated reads/writes, dashboards, best-effort workflows |
| Atomic batch | Sequential | Yes | Related mutations that must stay consistent, chained operations |
Start with regular batching for throughput. Switch to atomic mode only when you need transactional safety or want a single approval to cover multiple steps.
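To make the difference between the two modes concrete, here is a small local simulation. `runRegular` and `runAtomic` are not SDK code; they are stand-ins for the server-side behavior described in the table above.

```ts
// Local illustration only: these helpers mimic how the server treats
// the two modes so the execution/rollback difference is visible.
type Op = { run: () => void; undo: () => void }

// Regular batch: every op is attempted; one failure does not stop the rest.
function runRegular(ops: Op[]): Array<'success' | 'failed'> {
  return ops.map(op => {
    try {
      op.run()
      return 'success'
    } catch {
      return 'failed'
    }
  })
}

// Atomic batch: ops run in order; on the first failure, every completed
// op is undone in reverse order and the whole batch reports failure.
function runAtomic(ops: Op[]): 'success' | 'failed' {
  const completed: Op[] = []
  for (const op of ops) {
    try {
      op.run()
      completed.push(op)
    } catch {
      for (const done of completed.reverse()) done.undo()
      return 'failed'
    }
  }
  return 'success'
}
```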
## Regular batch
Use `batchBroadcast` without options. Responses are returned in the same order as the input array, each typed to its own resource schema.
```ts
import { resourceSchema } from '@blockstream/ecs-js-sdk'
import { batchBroadcast } from './broadcaster'

const [wallet, signerList] = await batchBroadcast<
  [typeof resourceSchema.shape.wallets.shape.get, typeof resourceSchema.shape.signers.shape.list]
>(
  [
    { action: 'get', resource: '/wallets/w1', details: {} },
    { action: 'list', resource: '/signers', details: {} },
  ],
  blockstream,
)
```

### Handling responses
Each response in a regular batch is the same discriminated union as a single-request response. Check `status` on each entry individually:
```ts
const responses = await batchBroadcast(messages, blockstream)

responses.forEach((res, index) => {
  if (res.status === 'success') {
    console.log(`#${index} succeeded`, res.details)
  } else if (res.status === 'pending') {
    console.log(`#${index} created proposal`, res.details.proposal_id)
  } else {
    console.warn(`#${index} failed`, res.message)
  }
})
```

## Atomic batch
Pass `{ isAtomic: true }` as the options argument. The server executes operations in order and wraps them in a single transaction. If any step fails, the whole batch is rolled back.
```ts
import { resourceSchema } from '@blockstream/ecs-js-sdk'
import { batchBroadcast } from './broadcaster'

const [inviteRes, roleRes] = await batchBroadcast<
  [typeof resourceSchema.shape.users.shape.invite, typeof resourceSchema.shape.roles.shape.create]
>(
  [
    {
      action: 'add',
      resource: '/users/invite',
      details: { uid: crypto.randomUUID(), email: 'ops@example.com', user_category: 'regular' },
    },
    {
      action: 'add',
      resource: '/roles',
      details: { rid: crypto.randomUUID(), name: 'ops-role', rules: [] },
    },
  ],
  blockstream,
  { isAtomic: true },
)
```

### Handling responses
The atomic batch returns a single discriminated union rather than an array. On success, `details` contains one entry per operation:
```ts
// success
const [groupRes, roleRes] = atomicResult.details
console.log('Both created', groupRes.details, roleRes.details)

// pending: the whole batch awaits approval
console.log('Batch proposal:', atomicResult.details.batch_proposal_id)
atomicResult.details.proposals.forEach(p => console.log('Individual proposal:', p.proposal_id))

// failed: nothing was persisted
throw new Error(`Atomic batch failed: ${atomicResult.message}`)
```

| Status | Meaning |
|---|---|
| `success` | All operations committed. `details` is an array with one entry per request. |
| `pending` | Batch triggered proposals. A single `batch_proposal_id` covers the whole set. |
| `failed` / `rejected` / `unauthorized` | Server rolled back the entire batch. |
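Putting the table together, one way to branch on an atomic result is a single switch on `status`. This is a sketch: the `AtomicResult` type below approximates the union described above, and `summarize` is an illustrative helper, not an SDK export.

```ts
// Approximation of the atomic-batch response union described above.
type AtomicResult =
  | { status: 'success'; details: unknown[] }
  | { status: 'pending'; details: { batch_proposal_id: string; proposals: { proposal_id: string }[] } }
  | { status: 'failed' | 'rejected' | 'unauthorized'; message: string }

// Reduce an atomic result to a one-line summary, e.g. for logs.
function summarize(result: AtomicResult): string {
  switch (result.status) {
    case 'success':
      return `committed ${result.details.length} operations`
    case 'pending':
      return `awaiting approval: ${result.details.batch_proposal_id}`
    default:
      // failed / rejected / unauthorized: the server rolled everything back
      return `rolled back (${result.status}): ${result.message}`
  }
}
```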
## Mixed batch
Send both arrays together. Regular items run first, so reads or preparatory writes can inform the atomic portion of the same envelope.
```ts
const [listRes] = await batchBroadcast(
  [{ action: 'list', resource: '/users', details: {} }],
  blockstream,
)
await batchBroadcast([createGroup, createRole], blockstream, { isAtomic: true })
```

To send both arrays in a single round-trip instead, build the raw envelope and call `blockstream.request` directly:
```ts
const envelope = {
  batch: [listUsersRequest],
  atomicBatch: [createGroup, createRole],
}
const encrypted = await blockstream.request(envelope)
```

## Tips
- Validate request objects against resource schemas before sending; a malformed item in an atomic batch aborts the whole set.
- Persist `proposal_id` or `batch_proposal_id` when you receive `pending` so you can reconcile after approvals.
- Log the array index alongside each response to correlate errors back to the originating request.
- Keep batches scoped to coherent workflows. Mixing unrelated operations in one batch makes observability and retries harder.
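As a minimal pre-flight check for the first tip, you can verify each item's structure before sending. This is a structural sketch only (`findMalformed` is not an SDK function); real validation should parse each item against the resource schemas exported by the SDK.

```ts
// Shape of a request item as used throughout this page.
interface RequestItem {
  action: string
  resource: string
  details: Record<string, unknown>
}

// Return the indices of structurally malformed items so the whole set
// can be rejected before sending, mirroring the server's all-or-nothing
// behavior for atomic batches.
function findMalformed(items: unknown[]): number[] {
  const bad: number[] = []
  items.forEach((item, index) => {
    const it = item as Partial<RequestItem> | null
    if (
      !it ||
      typeof it.action !== 'string' ||
      typeof it.resource !== 'string' ||
      !it.resource.startsWith('/') ||
      typeof it.details !== 'object' ||
      it.details === null
    ) {
      bad.push(index)
    }
  })
  return bad
}
```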