# Batch Runner

`BatchRunner` is a client-side orchestration helper for submitting many energy model runs and waiting for all of them to complete. It wraps `client.energy_models.create()` for submission and `client.tasks.get()` for lightweight status polling.
## When to Use

Use `BatchRunner` when you need to run multiple energy models (e.g. different sites, tracker vs fixed comparisons, sensitivity sweeps) and want to avoid writing your own submit/poll loop.

For a single run, use `client.energy_models.create()` and `client.tasks.get()` directly — see Usage Patterns.
## Quick Start

```python
from dalysdk import DalyClient, BatchRunner

with DalyClient(workspace_api_key="wk_...", user_api_key="uk_...") as client:
    batch = BatchRunner(client)
    batch.submit("Site A - Tracker", epm_input_a)
    batch.submit("Site B - Fixed", epm_input_b)

    batch.wait(timeout=3600, poll_interval=15)
    print(batch.summary())
    # {"total": 2, "completed": 2, "failed": 0, "pending": 0, ...}

    for result in batch.results():
        print(result["name"], result["data"]["summary"])
```
## Constructor

```python
BatchRunner(client, *, max_workers=4)
```

| Parameter | Type | Default | Description |
|---|---|---|---|
| `client` | `DalyClient` | (required) | An initialised SDK client instance. |
| `max_workers` | `int` | `4` | Maximum threads for parallel submission via `submit_many()`. |
## Methods

### submit

```python
batch.submit(name, data, *, timeout=None) -> dict
```

Submit a single async energy model run.

| Parameter | Type | Default | Description |
|---|---|---|---|
| `name` | `str` | (required) | Human-readable label for tracking this run. |
| `data` | `dict` | (required) | Full energy model input JSON. |
| `timeout` | `float \| None` | `None` | Per-request timeout override in seconds. |

Returns the API response dict containing `task_id`, `energy_model_id`, and `run_index`.
### submit_many

```python
batch.submit_many(configs, *, parallel=False, timeout=None) -> int
```

Submit multiple runs at once.

| Parameter | Type | Default | Description |
|---|---|---|---|
| `configs` | `list[tuple[str, dict]]` | (required) | List of `(name, epm_input)` tuples. |
| `parallel` | `bool` | `False` | Use a thread pool for concurrent submission. |
| `timeout` | `float \| None` | `None` | Per-request timeout override in seconds. |

Returns the number of successfully submitted runs. Failed submissions are logged and skipped.
```python
configs = [
    ("Site A - Tracker", epm_input_a),
    ("Site B - Fixed", epm_input_b),
    ("Site C - Bifacial", epm_input_c),
]

count = batch.submit_many(configs, parallel=True)
print(f"{count}/{len(configs)} submitted")
```
### poll

```python
batch.poll() -> tuple[int, int, int]
```

Poll `GET /tasks/{task_id}` for each non-terminal run and update statuses in place. Returns a tuple of `(completed, failed, pending)` counts.
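Since `wait()` is built on repeated polling, `poll()` can also drive a custom loop when you need full control of timing or progress reporting. The sketch below shows the pattern; `FakeBatch` is a hypothetical stand-in (not part of the SDK) so the snippet runs standalone — in real code you would use a `BatchRunner` from the Quick Start and a realistic interval such as 15 seconds:

```python
import time

# Custom wait loop built on poll(); a sketch of what wait() does internally.
# FakeBatch stands in for a BatchRunner with two runs in flight.
class FakeBatch:
    def __init__(self):
        self._cycles = 0

    def poll(self):
        # Pretend both runs finish after two poll cycles.
        self._cycles += 1
        pending = max(0, 2 - self._cycles)
        return (2 - pending, 0, pending)

batch = FakeBatch()
completed, failed, pending = batch.poll()
while pending:
    time.sleep(0.01)  # use a real poll_interval (e.g. 15 s) in practice
    completed, failed, pending = batch.poll()

print(completed, failed, pending)  # → 2 0 0
```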
### wait

```python
batch.wait(*, timeout=None, poll_interval=15.0, on_poll=None) -> bool
```

Block until all runs reach a terminal status or the timeout elapses.

| Parameter | Type | Default | Description |
|---|---|---|---|
| `timeout` | `float \| None` | `None` | Maximum seconds to wait. `None` waits forever. |
| `poll_interval` | `float` | `15.0` | Seconds between poll cycles. |
| `on_poll` | `Callable \| None` | `None` | Callback invoked after each poll with `(completed, failed, pending)`. |

Returns `True` if all runs finished, `False` if timed out.
```python
def progress(completed, failed, pending):
    total = completed + failed + pending
    print(f"{completed + failed}/{total} done ({failed} failed)")

all_done = batch.wait(timeout=3600, poll_interval=10, on_poll=progress)
```
### summary

```python
batch.summary() -> dict
```

Returns a summary dict with the current batch state:
```python
{
    "total": 4,
    "completed": 3,
    "failed": 1,
    "pending": 0,
    "results": [
        {"name": "Site A", "energy_model_id": 10, "task_id": 1},
        ...
    ],
    "failures": [
        {"name": "Site D", "task_id": 4, "error": "..."},
    ]
}
```
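A typical consumer of `summary()` checks the failure count before reporting. A minimal sketch — the dict literal below simply mirrors the shape shown above; in real code it would come from `batch.summary()`:

```python
# Sketch: reading a summary() dict. The literal mirrors the documented
# shape; in practice you would write `summary = batch.summary()`.
summary = {
    "total": 4,
    "completed": 3,
    "failed": 1,
    "pending": 0,
    "results": [{"name": "Site A", "energy_model_id": 10, "task_id": 1}],
    "failures": [{"name": "Site D", "task_id": 4, "error": "..."}],
}

if summary["failed"]:
    for failure in summary["failures"]:
        print(f"{failure['name']} failed (task {failure['task_id']})")
```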
### results

```python
batch.results() -> list[dict]
```

Fetches full energy model data via `GET /energymodels/{id}` for every completed run. Returns a list of dicts:
```python
[
    {
        "name": "Site A - Tracker",
        "energy_model_id": 10,
        "data": { ... }  # full energy model response
    },
    ...
]
```
`results()` makes one API call per completed run. Call it after `wait()` completes to avoid fetching partial results.
## Properties

| Property | Returns | Description |
|---|---|---|
| `batch.runs` | `list[dict]` | All tracked runs. |
| `batch.completed` | `list[dict]` | Runs with status `COMPLETED`. |
| `batch.failed` | `list[dict]` | Runs with status `FAILED`. |
| `batch.pending` | `list[dict]` | Runs not yet in a terminal status. |
## Logging

`BatchRunner` uses Python's `logging` module (logger name: `dalysdk.batch`). To see progress output:

```python
import logging

logging.basicConfig(level=logging.INFO)
```
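If raising the root logger to INFO is too noisy, a narrower setup (a sketch using only the standard `logging` API, with the logger name taken from above) enables INFO for the SDK's batch logger alone:

```python
import logging

# Keep the root logger quiet, but let the SDK's batch logger emit INFO.
logging.basicConfig(level=logging.WARNING)
logging.getLogger("dalysdk.batch").setLevel(logging.INFO)
```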