Initiate video generation using a specified AI model. Different models require different parameters — see the model descriptions for details.
Use your API key from the RawUGC dashboard. Include as: Authorization: Bearer YOUR_API_KEY
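As a minimal illustration of the header (Python; the key value is a placeholder, copy your real key from the RawUGC dashboard):

```python
API_KEY = "YOUR_API_KEY"  # placeholder: replace with your dashboard key

# Every request carries the key as a Bearer token in the Authorization header.
headers = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
}

print(headers["Authorization"])  # → Bearer YOUR_API_KEY
```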
API version to use for this request (date string, e.g. '2026-03-06'). If omitted, uses your API key's pinned version or the latest version.
Pattern: ^\d{4}-\d{2}-\d{2}$ (example: "2026-03-06")

AI model to use for video generation.
Options: sora-2-text-to-video, sora-2-image-to-video, kling-2.6/motion-control, veo3, veo3_fast

Text description of the video to generate. Required for text-to-video models (sora-2-text-to-video, veo3, veo3_fast).
Length: 1 - 5000 characters

Array of image URLs. Required for sora-2-image-to-video and kling-2.6/motion-control.
Max items: 10

Array of reference video URLs. Required for kling-2.6/motion-control.
Max items: 1

Video aspect ratio. Sora models use portrait or landscape; Veo models use 16:9, 9:16, or Auto.
Options: portrait, landscape, 16:9, 9:16, Auto

Video length in seconds (Sora models only).
Options: 10, 15

Character orientation mode for kling-2.6/motion-control: 'image' allows up to 10 seconds, 'video' up to 30 seconds.
Options: image, video

Output resolution for kling-2.6/motion-control.
Options: 720p, 1080p

Character username to use (e.g., 'rawugc.mia').
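Putting the parameters above together, here is a hedged sketch of a text-to-video request body. The field names ("model", "prompt", "aspect_ratio", "duration") are illustrative assumptions based on the parameter descriptions, not a confirmed schema, and the endpoint URL is deliberately omitted:

```python
import json

# Assumed request body for sora-2-text-to-video; field names are guesses
# shaped after the documented parameters - check the dashboard docs for
# the exact schema before sending.
payload = {
    "model": "sora-2-text-to-video",
    "prompt": "A golden retriever surfing a wave at sunset",  # 1-5000 chars
    "aspect_ratio": "portrait",  # Sora models: portrait or landscape
    "duration": 10,              # Sora models only: 10 or 15 seconds
}

# Basic client-side checks mirroring the documented constraints.
assert payload["model"] in {
    "sora-2-text-to-video", "sora-2-image-to-video",
    "kling-2.6/motion-control", "veo3", "veo3_fast",
}
assert 1 <= len(payload["prompt"]) <= 5000
assert payload["duration"] in (10, 15)

print(json.dumps(payload, indent=2))
```

Image-to-video and motion-control requests would instead carry the image/video URL arrays and omit the prompt, per the requirements listed above.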
Video generation initiated successfully
Unique video identifier (vid_xxx format)
Model used for generation
Current generation status.
Options: pending, processing, completed, failed

Number of credits deducted for this generation
Remaining credit balance after deduction
Estimated time until video is ready (human-readable)
Timestamp when generation was initiated (milliseconds since epoch)
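A sketch of handling the success response. The sample body below is fabricated to match the field descriptions (the field names and values are assumptions, and "vid_abc123" is a placeholder id):

```python
import json
from datetime import datetime, timezone

# Made-up response body shaped after the documented fields.
raw = """
{
  "video_id": "vid_abc123",
  "model": "sora-2-text-to-video",
  "status": "pending",
  "credits_used": 20,
  "credits_remaining": 180,
  "estimated_time": "about 2 minutes",
  "created_at": 1767052800000
}
"""

resp = json.loads(raw)

# Status is one of the four documented values.
assert resp["status"] in {"pending", "processing", "completed", "failed"}

# created_at is milliseconds since epoch; convert to an aware UTC datetime.
created = datetime.fromtimestamp(resp["created_at"] / 1000, tz=timezone.utc)
print(resp["video_id"], resp["status"], created.isoformat())
```

Since the initial status is typically pending or processing, a client would poll (or listen on a webhook, if offered) until the status becomes completed or failed.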