
Embeddings

Turn text into vectors for RAG, semantic search, clustering, and recommendations. The launch embedding model is text-embedding-3-large.

Supported models

Model                  | Dim  | Ctx | Price / 1M tokens | Best for
text-embedding-3-large | 3072 | 8K  | $0.13             | General, English-leaning

Example

Python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.sealink.asia/v1",
    api_key="<your-sealink-key>",
)

# Single string
res = client.embeddings.create(
    model="text-embedding-3-large",
    input="SeaLink helps SEA developers ship AI faster.",
)
vec = res.data[0].embedding  # 3072-dim vector

# Batch (recommended for performance)
texts = ["Doc 1 content", "Doc 2 content", "Doc 3 content"]
res = client.embeddings.create(model="text-embedding-3-large", input=texts)
vectors = [d.embedding for d in res.data]

Which one?

  • text-embedding-3-large: Industry standard from OpenAI and the most reliable choice for English text. Its 3072-dim vectors take more storage than smaller models.
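
If storage is a concern: OpenAI's embeddings API accepts a dimensions parameter for text-embedding-3 models that returns truncated, re-normalized vectors at a modest quality cost. Whether SeaLink forwards this parameter is an assumption; verify it against your gateway before relying on it. A minimal sketch:

Python
# Assumes SeaLink passes OpenAI's dimensions parameter through (verify first).
res = client.embeddings.create(
    model="text-embedding-3-large",
    input="SeaLink helps SEA developers ship AI faster.",
    dimensions=1024,  # truncated from 3072; the API re-normalizes the result
)
short_vec = res.data[0].embedding  # 1024 floats: ~4 KB as float32 vs ~12 KB at 3072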

Performance tips

  • Batch input (array form) is 5-10× faster than looping single calls.
  • L2-normalize vectors before storing them; cosine retrieval then reduces to a dot product, which is much faster (see the first sketch after this list).
  • Chunk size: 500-800 tokens with 50-100 tokens of overlap is a safe default (see the chunking sketch after this list).
  • Want full RAG end-to-end? See the text-embedding-3-large + Qwen recipe in the Cookbook.
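
A minimal sketch of the normalization tip, assuming numpy is installed and reusing client and vectors from the Example above:

Python
import numpy as np

def l2_normalize(vectors):
    # Scale each row to unit length so cosine similarity equals a dot product.
    arr = np.asarray(vectors, dtype=np.float32)
    norms = np.linalg.norm(arr, axis=1, keepdims=True)
    return arr / np.clip(norms, 1e-12, None)  # guard against zero vectors

index = l2_normalize(vectors)  # normalized document vectors, stored once
q = client.embeddings.create(
    model="text-embedding-3-large", input="query text"
).data[0].embedding
query = l2_normalize([q])
scores = (index @ query.T).ravel()  # cosine scores, one per document
top3 = scores.argsort()[::-1][:3]   # indices of the three best matches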
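
And a sketch of the chunking default. The whitespace split here is a crude stand-in for real tokenization (use a tokenizer such as tiktoken to count model tokens), and doc.txt is a hypothetical input file:

Python
def chunk_tokens(tokens, size=600, overlap=75):
    # Fixed-size windows with overlap, within the 500-800 / 50-100 defaults above.
    step = size - overlap
    for start in range(0, max(len(tokens) - overlap, 1), step):
        yield tokens[start:start + size]

words = open("doc.txt").read().split()  # illustration only; words != model tokens
chunks = [" ".join(w) for w in chunk_tokens(words)]
res = client.embeddings.create(model="text-embedding-3-large", input=chunks)
vectors = [d.embedding for d in res.data]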