SeaLink

Gemini 3.1 Pro

Google DeepMind (Gemini)

Chat & customer support · Code assistant · Long documents · Vision & OCR · Automation & agents
model: gemini-3-1-pro

Demo mode doesn't call the upstream model. After you sign up with your key, the same request hits the real model.

Use cases

  • Large multimodal analysis
  • Long technical research
  • Google ecosystem agent workloads
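For the multimodal use case above, a request would likely follow the OpenAI Chat Completions content-part convention, since the endpoint is OpenAI-compatible. The sketch below only builds the request body; the image URL is a placeholder, and whether SeaLink forwards this shape unchanged to gemini-3-1-pro is an assumption.

```python
# Sketch of a multimodal request body in OpenAI Chat Completions
# format. Mixing text and image_url content parts in one user turn
# is the standard convention for OpenAI-compatible endpoints; the
# URL below is a placeholder.
payload = {
    "model": "gemini-3-1-pro",
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this chart."},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/chart.png"},
                },
            ],
        }
    ],
}
```

Sent as the `-d` body of the cURL sample below, this would ask the model to describe the linked image.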

Code samples

cURL
curl https://api.sealink.asia/v1/chat/completions \
  -H "Authorization: Bearer $SEALINK_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gemini-3-1-pro",
    "messages": [{"role": "user", "content": "Hello."}]
  }'
Python (OpenAI SDK)
from openai import OpenAI

client = OpenAI(
    base_url="https://api.sealink.asia/v1",
    api_key="<your-sealink-key>",
)
resp = client.chat.completions.create(
    model="gemini-3-1-pro",
    messages=[{"role": "user", "content": "Hello."}],
)
print(resp.choices[0].message.content)
Node.js (OpenAI SDK)
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://api.sealink.asia/v1",
  apiKey: process.env.SEALINK_API_KEY,
});
const resp = await client.chat.completions.create({
  model: "gemini-3-1-pro",
  messages: [{ role: "user", content: "Hello." }],
});
console.log(resp.choices[0].message.content);

Once you sign in, your base_url and API key will be inlined automatically.
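The capabilities list for this model includes streaming. With an OpenAI-compatible endpoint, streaming would presumably use the SDK's `stream=True` flag; since that requires a live key, the sketch below keeps the network call as a comment and only implements the client-side delta accumulation, which is the same regardless of provider.

```python
def accumulate(deltas):
    """Join streamed text deltas into the full reply, skipping the
    None pieces that streams emit for role-only and stop chunks."""
    return "".join(d for d in deltas if d)

# With the OpenAI SDK against SeaLink (requires a key), the deltas
# would come from a streaming create call:
#   stream = client.chat.completions.create(
#       model="gemini-3-1-pro",
#       messages=[{"role": "user", "content": "Hello."}],
#       stream=True,
#   )
#   text = accumulate(chunk.choices[0].delta.content for chunk in stream)

print(accumulate(["He", "llo", None, "!"]))  # → Hello!
```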

Performance

Last 30 days from SeaLink's own probes; launch values use model-tier estimates until live probe history is available.

  • TTFT P50: 732 ms
  • TTFT P95: 1303 ms
  • Tokens/sec: 55
  • 30d uptime: 99.75%

Capabilities & limits

  • Context length: 1M (1,000K) tokens
  • Capabilities: multimodal · tools · streaming · cache
  • Status: operational
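The capabilities above also list tools. For an OpenAI-compatible endpoint, tool declarations would normally use OpenAI's function-calling schema; the tool name and parameters below are hypothetical, and whether SeaLink translates this schema to Gemini's native tool format is an assumption.

```python
# Sketch of a tool declaration in OpenAI function-calling format.
# "get_uptime" is a hypothetical example tool, not a SeaLink API.
tools = [{
    "type": "function",
    "function": {
        "name": "get_uptime",
        "description": "Return 30-day uptime for a model id.",
        "parameters": {
            "type": "object",
            "properties": {"model": {"type": "string"}},
            "required": ["model"],
        },
    },
}]
```

This list would be passed as the `tools` parameter of a chat.completions.create call; the model then decides whether to emit a tool call in its response.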