Interface: Options

Properties

dont_load_model

Optional dont_load_model: boolean

(Default: false). Boolean. Do not load the model if it is not already available.

Defined in

inference/src/types.ts:13
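
A minimal sketch of passing these options, assuming an `HfInference` client whose task methods accept an `Options` object as their second argument (the client, method, and model names here are illustrative, not confirmed by this page):

```ts
import { HfInference } from "@huggingface/inference";

const hf = new HfInference("your-access-token");

// Fail fast instead of triggering a load if the model is not already available.
const result = await hf.translation(
  { model: "t5-small", inputs: "Hello, world!" },
  { dont_load_model: true }
);
console.log(result);
```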


fetch

Optional fetch: (input: RequestInfo | URL, init?: RequestInit) => Promise<Response>

Type declaration

▸ (input, init?): Promise<Response>

Custom fetch function to use instead of the default one, for example to use a proxy or edit headers.

Parameters
| Name | Type |
| :--- | :--- |
| `input` | `RequestInfo` \| `URL` |
| `init?` | `RequestInit` |
Returns

Promise<Response>

Defined in

inference/src/types.ts:26
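
A minimal sketch of a custom fetch that adds a header to every request, reusing the illustrative `HfInference` client from above (the header name is hypothetical):

```ts
import { HfInference } from "@huggingface/inference";

// Wrap the global fetch to attach an extra header to every outgoing request.
const customFetch = (
  input: RequestInfo | URL,
  init?: RequestInit
): Promise<Response> => {
  const headers = new Headers(init?.headers);
  headers.set("X-Request-Source", "my-app"); // hypothetical header
  return fetch(input, { ...init, headers });
};

const hf = new HfInference("your-access-token");

const result = await hf.textGeneration(
  { model: "gpt2", inputs: "Once upon a time" },
  { fetch: customFetch }
);
```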


retry_on_error

Optional retry_on_error: boolean

(Default: true). Boolean. If a request returns a 503 error and wait_for_model is set to false, the request is retried with the same parameters but with wait_for_model set to true.

Defined in

inference/src/types.ts:5
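
To handle 503 responses yourself instead of relying on this automatic retry, set `retry_on_error` to false; a minimal sketch with the same illustrative client as above:

```ts
// Opt out of the automatic retry and handle cold-model errors manually.
try {
  const result = await hf.textGeneration(
    { model: "gpt2", inputs: "Hello" },
    { retry_on_error: false }
  );
  console.log(result.generated_text);
} catch (error) {
  // The model was likely still loading; back off, wait, or surface the error.
  console.error("Inference failed:", error);
}
```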


use_cache

Optional use_cache: boolean

(Default: true). Boolean. There is a cache layer on the inference API to speed up requests that have already been seen. Most models can use those cached results as-is, since models are deterministic (the results will be the same anyway). However, if you use a non-deterministic model, you can set this parameter to false to bypass the caching mechanism and force a genuinely new query.

Defined in

inference/src/types.ts:9
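
For example, to get a fresh result from a sampling-based (non-deterministic) model on every call, again with the illustrative client:

```ts
// Bypass the cache so repeated identical requests produce new generations.
const result = await hf.textGeneration(
  { model: "gpt2", inputs: "Write a haiku about the sea" },
  { use_cache: false }
);
console.log(result.generated_text);
```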


use_gpu

Optional use_gpu: boolean

(Default: false). Boolean. Use GPU instead of CPU for inference (requires at least the Startup plan).

Defined in

inference/src/types.ts:17


wait_for_model

Optional wait_for_model: boolean

(Default: false). Boolean. If the model is not ready, wait for it instead of receiving a 503 error. This limits the number of requests required to get your inference done. It is advised to set this flag to true only after receiving a 503 error, as that confines hanging to known places in your application.

Defined in

inference/src/types.ts:22
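
A minimal sketch of that advice (fail fast first, then retry while waiting for the model), using the same illustrative client and an assumed `generated_text` return field:

```ts
// First attempt fails fast on a cold model; the retry waits for it to load.
async function generateWithWait(inputs: string): Promise<string> {
  try {
    const res = await hf.textGeneration(
      { model: "gpt2", inputs },
      { retry_on_error: false }
    );
    return res.generated_text;
  } catch {
    // Likely a 503 while the model loads; retry and wait for readiness.
    const res = await hf.textGeneration(
      { model: "gpt2", inputs },
      { wait_for_model: true }
    );
    return res.generated_text;
  }
}
```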