Nebula Update v0.0.5: Advanced configurations, new endpoints, and simplified context filters
This release adds granular response configuration to control output randomness, length limits, and output formats. It also introduces two new OpenAI-compatible endpoints, `/chat/completions` and `/models`, and a simplified context filter for an improved developer experience.
New Features
- Enable advanced LLM configurations to control output randomness and formatting, including temperature, presence penalty, max tokens, new response formats (JSON object, JSON schema), and nucleus sampling (top-p).
```json
{
  "message": "Hello Nebula!",
  // ...
  // advanced LLM configuration
  "max_tokens": 100,          // integer
  "frequency_penalty": 0.0,   // [-2.0, 2.0]
  "presence_penalty": 0.0,    // [-2.0, 2.0]
  "temperature": 1.0,         // [0.0, 2.0]
  "top_p": 1.0,               // [0.0, 1.0]
  "response_format": {}       // { "type": "json_object" } | { "type": "json_schema", "json_schema": { ... } }
}
```
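The documented ranges above can be checked client-side before a request is sent; a minimal sketch in Python (the `validate_llm_config` helper and its error messages are illustrative, not part of the Nebula API):

```python
# Documented ranges for the advanced LLM configuration fields.
LLM_CONFIG_RANGES = {
    "frequency_penalty": (-2.0, 2.0),
    "presence_penalty": (-2.0, 2.0),
    "temperature": (0.0, 2.0),
    "top_p": (0.0, 1.0),
}

def validate_llm_config(config: dict) -> dict:
    """Raise ValueError if any advanced config field is out of its documented range."""
    for field, (lo, hi) in LLM_CONFIG_RANGES.items():
        if field in config and not lo <= config[field] <= hi:
            raise ValueError(f"{field}={config[field]} is outside [{lo}, {hi}]")
    # max_tokens has no upper bound in the docs, but must be a positive integer
    if "max_tokens" in config and (
        not isinstance(config["max_tokens"], int) or config["max_tokens"] < 1
    ):
        raise ValueError("max_tokens must be a positive integer")
    return config
```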
- New `/chat/completions` endpoint, compatible with the OpenAI client, to process chat history and generate context-aware responses.
```shell
curl -X POST "https://nebula-api.thirdweb.com/chat/completions" \
  -H "X-Secret-Key: $THIRDWEB_SECRET_KEY" \
  --data '{
    "messages": [{ "role": "user", "content": "Hello Nebula!" }],
    // (optional) context management
    "context": {
      "session_id": "...",            // optional
      "wallet_address": "0x...",      // optional
      "chain_ids": ["1", "11155111"]  // optional
    },
    // (optional) advanced LLM configuration
    "max_tokens": 100,
    "frequency_penalty": 0.0,
    "presence_penalty": 0.0,
    "temperature": 1.0,
    "top_p": 1.0,
    "response_format": { "type": "json_object" }
  }'
```
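Since every block in the request body above except `messages` is optional, it can help to assemble the payload programmatically and omit anything unset; a small sketch (the `build_chat_request` helper is hypothetical, not part of the API):

```python
def build_chat_request(messages, context=None, **llm_config):
    """Build a /chat/completions request body, dropping unset optional fields."""
    body = {"messages": messages}
    if context:
        # keep only context keys that were actually provided
        body["context"] = {k: v for k, v in context.items() if v is not None}
    # advanced LLM configuration fields are optional too
    body.update({k: v for k, v in llm_config.items() if v is not None})
    return body
```

For example, `build_chat_request([{"role": "user", "content": "Hello Nebula!"}], context={"wallet_address": "0x...", "session_id": None}, temperature=1.0)` produces a body containing only the fields that were set.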
```python
from openai import OpenAI

client = OpenAI(
    base_url="https://nebula-api.thirdweb.com/",
    api_key="{{THIRDWEB_SECRET_KEY}}",
)

chat_completion = client.chat.completions.create(
    model="t0",
    messages=[{"role": "user", "content": "Hello Nebula!"}],
    stream=False,
    extra_body={"context": {"wallet_address": "0x..."}},
)

print(chat_completion)
```
- New OpenAI-compatible `/models` endpoint to list the models used by Nebula.
```python
from openai import OpenAI

# lists models from https://nebula-api.thirdweb.com/models
client = OpenAI(
    base_url="https://nebula-api.thirdweb.com/",
    api_key="{{THIRDWEB_SECRET_KEY}}",
)

models = client.models.list()
print(models)
```
- Simplified `context_filter` and `execute_config` into a single `context` object for the `/chat`, `/execute`, and `/chat/completions` endpoints.
Before:
```json
{
  "message": "Hello Nebula!",
  "session_id": "...",
  "context_filter": {
    "wallet_addresses": [],
    "chain_ids": [],
    "contract_addresses": []
  },
  "execute_config": {
    "type": "client",
    "signer_wallet_address": "0x0000..."
  }
}
```
New:
```json
{
  "message": "Hello Nebula!",
  "context": {
    "session_id": "...",
    "wallet_address": "0x...",
    "chain_ids": ["1", "11155111"]
  }
}
```
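Existing payloads in the old shape can be migrated mechanically; a sketch of that mapping (the `migrate_to_context` function is illustrative, and collapsing the old plural `wallet_addresses` list to a single `wallet_address` is an assumption based on the examples above):

```python
def migrate_to_context(payload: dict) -> dict:
    """Convert an old context_filter/execute_config payload to the new 'context' shape."""
    new = {
        k: v
        for k, v in payload.items()
        if k not in ("session_id", "context_filter", "execute_config")
    }
    context = {}
    if "session_id" in payload:
        context["session_id"] = payload["session_id"]
    old_filter = payload.get("context_filter", {})
    # assumption: the old list of wallet addresses maps to the new single address
    if old_filter.get("wallet_addresses"):
        context["wallet_address"] = old_filter["wallet_addresses"][0]
    if old_filter.get("chain_ids"):
        context["chain_ids"] = old_filter["chain_ids"]
    if context:
        new["context"] = context
    return new
```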
Nebula is currently in Alpha for a select number of developers. For access to the product, please sign up on the Nebula waitlist.
For any feedback or support inquiries, please visit our support site.