Run Model (Sync)
Tool to invoke a fal.ai model synchronously via fal.run, generating new content (images, audio, video, or other model outputs) that is hosted at fal.ai CDN URLs in the response.

Each call executes paid inference on fal.ai: it consumes the connected account's credit balance and is billed per request, so it is NOT a read-only operation.

Use this tool when you need a single-shot result and the model is fast enough to return inline; the request blocks until the model finishes, then returns the generated output directly. For long-running jobs, parallel invocations, or anything production-grade, prefer SUBMIT_ASYNC_JOB (queue.fal.run), which adds persistence, retries, and webhooks. Both tools produce the same kind of newly generated, persistently hosted output and are billed identically.
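As a rough illustration, a synchronous call boils down to one authenticated POST to the fal.run endpoint. The sketch below only assembles the request pieces rather than sending them, since sending executes paid inference; the model id, prompt field, and endpoint shape (`https://fal.run/<model-id>` with a `Key` Authorization header) are assumptions for illustration, and each model defines its own input schema.

```python
import json
import os

def build_fal_run_request(model_id: str, payload: dict) -> dict:
    """Assemble the pieces of a synchronous fal.run request.

    The real call blocks until inference finishes, so a generous
    client-side timeout is advisable for slower models.
    """
    return {
        "url": f"https://fal.run/{model_id}",
        "headers": {
            # API key read from the environment; never hard-code it.
            "Authorization": f"Key {os.environ.get('FAL_KEY', '')}",
            "Content-Type": "application/json",
        },
        "body": json.dumps(payload),
    }

req = build_fal_run_request(
    "fal-ai/flux/dev",               # hypothetical example model id
    {"prompt": "a watercolor fox"},  # input fields are model-specific
)
# Sending req with any HTTP client would execute PAID inference and
# return the generated output (e.g. CDN-hosted URLs) in the response.
```

A dedicated client library, if available for your language, typically wraps this same request and response handling for you.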