Quickstart
Using the Python SDK
Here’s the simplest way to run a job with Sutro: just pass a list of inputs and a system prompt.
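A minimal sketch of what this can look like is below. The `sutro` import alias and the `infer` entry point are assumed names for illustration; check the SDK reference for the exact client and method names.

```python
# Sketch only: the `sutro` module and `infer` function are assumed
# names for illustration, not confirmed SDK entry points.
import sutro as so

reviews = [
    "The food was delicious and the service was excellent.",
    "I waited 45 minutes and my order was cold.",
]

# Each input is processed independently against the same system prompt.
results = so.infer(
    reviews,
    system_prompt="Classify the sentiment of this review as positive or negative.",
)
print(results)
```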
Structuring outputs

In the above example, we’re performing a simple classification task. In such cases, we may want structured outputs. We can accomplish this by passing in a Pydantic model or JSON schema using the `output_schema` parameter. The model will strictly adhere to this schema in its output.
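For example, a Pydantic model passed via `output_schema` might look like the following. The `output_schema` parameter is documented above; the surrounding `infer` call remains the assumed sketch from the previous example.

```python
from pydantic import BaseModel

class SentimentResult(BaseModel):
    sentiment: str    # e.g. "positive" or "negative"
    confidence: float

# The output of each inference is constrained to match SentimentResult.
results = so.infer(
    reviews,
    system_prompt="Classify the sentiment of this review.",
    output_schema=SentimentResult,
)
```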
Working with DataFrames and Sampling Parameters
This example shows how to work with DataFrames, customize sampling parameters, and wait for job completion.
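A sketch under the same assumptions as before. The `sampling_params` and `wait` argument names here are illustrative, as is using `column` with a DataFrame (it is only documented for file inputs, below); treat all three as assumptions.

```python
import pandas as pd

df = pd.DataFrame({
    "review": [
        "Great value for the price.",
        "Would not recommend.",
    ]
})

# Assumed argument names: `column` selects the input column,
# `sampling_params` customizes decoding, and `wait` blocks until
# the job finishes.
results = so.infer(
    df,
    column="review",
    system_prompt="Classify the sentiment of this review.",
    sampling_params={"temperature": 0.2, "top_p": 0.9},
    wait=True,
)
```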
Multi-Model Comparison

Run the same inputs across multiple models to compare outputs and quality.
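One way to sketch this, reusing the assumed `infer` call; the model identifiers are placeholders, not a list of supported models.

```python
# Placeholder model identifiers: substitute models available on your account.
models = ["model-a", "model-b", "model-c"]

# Run the same inputs through each model and collect results side by side.
comparison = {
    model: so.infer(
        reviews,
        system_prompt="Classify the sentiment of this review.",
        model=model,
    )
    for model in models
}

for model, result in comparison.items():
    print(model, result)
```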
Cost Estimation

Before running a large job, you can estimate costs using the `dry_run` parameter.
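A sketch using the documented `dry_run` parameter; the surrounding call is still the assumed `infer` from above.

```python
# dry_run=True asks for a cost estimate instead of executing the job.
estimate = so.infer(
    reviews,
    system_prompt="Classify the sentiment of this review.",
    dry_run=True,
)
print(estimate)  # inspect the estimate before launching the real job
```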
Using Files
You can also use files to pass in data. We currently support CSV, Parquet, and TXT files. If you’re using a TXT file, each line should represent a single input. If you’re using a CSV or Parquet file, you must specify the column name that contains the inputs using the `column` parameter.
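A sketch of both file flavors, using the documented `column` parameter; passing a file path to the assumed `infer` call is itself an assumption about the SDK's interface.

```python
# TXT: one input per line, so no column name is needed.
results = so.infer(
    "reviews.txt",
    system_prompt="Classify the sentiment of this review.",
)

# CSV or Parquet: name the column that holds the inputs.
results = so.infer(
    "reviews.csv",
    column="review",
    system_prompt="Classify the sentiment of this review.",
)
```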
Moving to Production
So far we’ve shown what it looks like to use prototyping jobs (priority 0, the default). After working with a small amount of data using prototyping jobs, you’ll likely want to move to production jobs (priority 1), which are less expensive and have higher quotas. To do so, simply set the `job_priority` parameter to 1.
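The sketch below shows this on the same assumed `infer` call; `job_priority` and the 0/1 priority levels are the documented pieces.

```python
# Production job: job_priority=1 (prototyping jobs are priority 0, the default).
results = so.infer(
    reviews,
    system_prompt="Classify the sentiment of this review.",
    job_priority=1,
)
```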