R1V4
Creates a chat completion from text and image input and returns a model-generated response.
Endpoint
POST /api/v1/chat/completions

Request Parameters
Request Body (JSON)
```json
{
  "model": "string (required)",
  "messages": [
    {
      "role": "user",
      "content": [
        {
          "type": "text",
          "text": "string"
        },
        {
          "type": "image_url",
          "image_url": {
            "url": "string (base64 data URI)"
          }
        }
      ]
    }
  ],
  "stream": true
}
```

Parameter Description
| Parameter | Type | Required | Description |
|---|---|---|---|
| model | string | Yes | Model name: skywork/r1v4-lite or skywork/r1v4-vl-planner-lite |
| messages | array | Yes | List of conversation messages, including user messages and assistant replies |
| stream | boolean | No | Whether to stream the response; defaults to false. Set to true for an SSE-format response |
messages Parameter Description
| Parameter | Type | Required | Description |
|---|---|---|---|
| role | string | Yes | Message role; supports user, assistant, and system |
| content | array | Yes | Message content; supports mixed text and image input |
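As a point of reference, a multi-turn messages array combining these roles might look like the following. The content values are illustrative, and whether assistant replies use the same content-array form as user messages is an assumption here:

```json
[
  {"role": "system", "content": [{"type": "text", "text": "You are a helpful assistant."}]},
  {"role": "user", "content": [{"type": "text", "text": "What is in this picture?"}]},
  {"role": "assistant", "content": [{"type": "text", "text": "The picture shows a cat."}]},
  {"role": "user", "content": [{"type": "text", "text": "What breed is it?"}]}
]
```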
content Parameter Description
Text Content
```json
{
  "type": "text",
  "text": "Please analyze this image"
}
```

Image Content
```json
{
  "type": "image_url",
  "image_url": {
    "url": "data:image/jpeg;base64,/9j/4AAQSkZJRg..."
  }
}
```

The image URL must be a base64-encoded data URI:
- Format: data:<mime_type>;base64,<base64_encoded_data>
- Supported image formats: JPEG, PNG, GIF, WebP, etc.
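The data-URI format above can be produced with a few lines of Python. This is a minimal sketch; the helper name to_data_uri is illustrative rather than part of the API:

```python
import base64
import mimetypes

def to_data_uri(data: bytes, filename: str) -> str:
    """Build a data:<mime_type>;base64,<data> URI from raw image bytes."""
    # Guess the MIME type from the file extension; fall back to JPEG.
    mime_type = mimetypes.guess_type(filename)[0] or "image/jpeg"
    encoded = base64.b64encode(data).decode("utf-8")
    return f"data:{mime_type};base64,{encoded}"
```

The resulting string can be placed directly in the url field of an image_url content item.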
Request Examples
Python Example
```python
import base64
import os
from mimetypes import guess_type

import requests

def image_to_base64(image_path):
    """Convert an image file to a base64 data URI."""
    with open(image_path, "rb") as f:
        image_data = f.read()
    image_base64 = base64.b64encode(image_data).decode("utf-8")
    mime_type, _ = guess_type(image_path)
    return f"data:{mime_type};base64,{image_base64}"

# Configuration
base_url = "https://api.skyworkmodel.ai"
api_key = "Your-API-Key"
model = "skywork/r1v4-lite"  # or skywork/r1v4-vl-planner-lite for the planner model

# Prepare message content
contents = []
image_path = "path/to/your/image.jpg"  # optional; contents may be text-only
if image_path and os.path.exists(image_path):
    image_base64 = image_to_base64(image_path)
    contents.append({"type": "image_url", "image_url": {"url": image_base64}})
contents.append({"type": "text", "text": "Please analyze this image"})

# Request data
data = {
    "messages": [{"role": "user", "content": contents}],
    "model": model,
    "stream": True,  # set to False for a non-streaming response
    "enable_search": True,  # enable deepresearch mode; set to False for normal mode
}

# Request headers
headers = {
    "Content-Type": "application/json",
    "Accept": "text/event-stream",
    "Authorization": f"Bearer {api_key}",
}

# Send request
url = f"{base_url}/api/v1/chat/completions"
response = requests.post(url, json=data, headers=headers, stream=True, timeout=600)

# Handle streaming response
if response.status_code == 200:
    for line in response.iter_lines(decode_unicode=True):
        if line:
            print(line)
else:
    print(f"Request failed: {response.status_code}")
    print(f"Error message: {response.text}")
```

Text-Only Request
```bash
curl -X POST "https://api.skyworkmodel.ai/api/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <your-api-key>" \
  -H "Accept: text/event-stream" \
  -d '{
    "model": "[MODEL_NAME]",
    "messages": [
      {
        "role": "user",
        "content": [
          {
            "type": "text",
            "text": "How can I make a billion dollars?"
          }
        ]
      }
    ],
    "stream": true
  }'
```

Text + Image Request
```bash
curl -X POST "https://api.skyworkmodel.ai/api/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <your-api-key>" \
  -H "Accept: text/event-stream" \
  -d '{
    "model": "[MODEL_NAME]",
    "messages": [
      {
        "role": "user",
        "content": [
          {
            "type": "image_url",
            "image_url": {
              "url": "data:image/jpeg;base64,[YOUR BASE64 PICTURE]"
            }
          },
          {
            "type": "text",
            "text": "Please analyze this image"
          }
        ]
      }
    ],
    "stream": true
  }'
```

Response Format
Streaming Response (stream: true)
When the stream parameter is set to true, the response uses Server-Sent Events (SSE) format:
```
data: {"id":"chatcmpl-123","object":"chat.completion.chunk","created":1694268190,"model":"[MODEL_NAME]","choices":[{"index":0,"delta":{"content":"Hello"},"finish_reason":null}]}
data: {"id":"chatcmpl-123","object":"chat.completion.chunk","created":1694268190,"model":"[MODEL_NAME]","choices":[{"index":0,"delta":{"content":","},"finish_reason":null}]}
data: {"id":"chatcmpl-123","object":"chat.completion.chunk","created":1694268190,"model":"[MODEL_NAME]","choices":[{"index":0,"delta":{"content":" I am"},"finish_reason":null}]}
data: [DONE]
```

Each data: line contains a JSON object with:
- id: Response ID
- object: Object type, usually chat.completion.chunk
- created: Creation timestamp
- model: Model name used
- choices: List of choices, each containing:
  - index: Choice index
  - delta: Incremental content, containing a content field
  - finish_reason: Completion reason; null means not finished, stop means normal completion
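To consume the stream programmatically rather than printing raw lines, each data: payload can be parsed as JSON and the incremental content extracted. A minimal sketch, based on the chunk shape shown above (the helper name extract_delta is illustrative):

```python
import json

def extract_delta(sse_line: str):
    """Return the incremental content carried by one SSE line, or None.

    None is returned for blank or non-data lines, the [DONE] sentinel,
    and chunks whose delta has no content field.
    """
    if not sse_line.startswith("data: "):
        return None
    payload = sse_line[len("data: "):]
    if payload == "[DONE]":
        return None
    chunk = json.loads(payload)
    return chunk["choices"][0]["delta"].get("content")
```

Feeding each line from response.iter_lines(decode_unicode=True) through this helper and concatenating the non-None results reconstructs the full reply.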
Non-Streaming Response (stream: false)
When the stream parameter is set to false or omitted, the API returns a complete JSON response:
```json
{
  "id": "chatcmpl-123",
  "object": "chat.completion",
  "created": 1694268190,
  "model": "[MODEL_NAME]",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello, I am an AI assistant, happy to serve you."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 10,
    "completion_tokens": 20,
    "total_tokens": 30
  }
}
```

Error Response
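Given a parsed non-streaming response of this shape, the assistant's text can be pulled out directly. A minimal sketch; assistant_reply is an illustrative helper name, not part of the API:

```python
def assistant_reply(response_json: dict) -> str:
    """Return the assistant's message text from a non-streaming response body."""
    # The first (and typically only) choice carries the full message.
    return response_json["choices"][0]["message"]["content"]
```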
When a request fails, an error message is returned:
```json
{
  "code": 400307,
  "code_msg": "Invalid API key"
}
```

Notes
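Assuming error bodies carry code and code_msg fields as shown, a client can check for them before reading choices. This is a sketch; the exception type and helper name are this example's choice, not part of the API:

```python
def raise_for_api_error(body: dict) -> None:
    """Raise RuntimeError if the response body is an error payload."""
    if "code" in body:
        raise RuntimeError(f"API error {body['code']}: {body.get('code_msg', '')}")
```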
- Image Size Limit: Keep each image under about 10MB; base64 encoding increases the payload size by roughly 33%
- Timeout Settings: Streaming responses may take a long time; set a generous timeout (e.g., 600 seconds)
- Streaming Response Handling: Handle the SSE format properly by parsing lines that start with data: one at a time
- Model Selection: Choose the model that fits your requirements; different models have different capabilities and limitations