# OpenAI

OpenAI provides state-of-the-art language models, including GPT-4o and the reasoning-focused o1 series.

**Status:** Stable
## Quick Facts
| Property | Value |
|---|---|
| Quality | State-of-the-art |
| Free Tier | No (pay-as-you-go) |
| Models | GPT-4o, GPT-4o-mini, o1 |
| Streaming | Supported |
| Functions | Supported |
| Vision | Supported |
| Embeddings | Supported |
## Setup

### Get API Key

1. Go to [platform.openai.com](https://platform.openai.com)
2. Create an account and add a payment method
3. Navigate to **API Keys**
4. Create a new secret key
### Configure
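A minimal setup sketch, assuming Sentimatrix picks up the standard `OPENAI_API_KEY` environment variable (the key value below is a placeholder, not a real key):

```shell
# Make the key available to your application
# (assumes Sentimatrix reads the standard OPENAI_API_KEY variable)
export OPENAI_API_KEY="sk-your-key-here"  # placeholder; use your real secret key
```

You can also pass the key explicitly via `api_key` in `LLMConfig` (see Configuration Options below).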
## Available Models

| Model | Context | Input Cost | Output Cost | Best For |
|---|---|---|---|---|
| gpt-4o | 128K | $2.50/1M | $10.00/1M | Best quality |
| gpt-4o-mini | 128K | $0.15/1M | $0.60/1M | Cost-effective |
| o1 | 200K | $15.00/1M | $60.00/1M | Complex reasoning |
| o1-mini | 128K | $3.00/1M | $12.00/1M | Fast reasoning |
| gpt-4-turbo | 128K | $10.00/1M | $30.00/1M | Legacy |
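Given the per-million-token prices above, a back-of-the-envelope cost estimate is simple arithmetic. The helper below is illustrative (not part of Sentimatrix), using the gpt-4o-mini rates:

```python
# gpt-4o-mini prices from the table above, in dollars per 1M tokens
INPUT_PER_M, OUTPUT_PER_M = 0.15, 0.60

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Rough dollar cost for a job at the configured per-1M-token rates."""
    return input_tokens / 1e6 * INPUT_PER_M + output_tokens / 1e6 * OUTPUT_PER_M

# e.g. 1,000 reviews at ~200 input tokens each, plus a 500-token summary:
print(round(estimate_cost(200_000, 500), 4))  # → 0.0303
```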
## Usage Examples

### Basic Usage

```python
import asyncio

from sentimatrix import Sentimatrix
from sentimatrix.config import SentimatrixConfig, LLMConfig

config = SentimatrixConfig(
    llm=LLMConfig(
        provider="openai",
        model="gpt-4o-mini"
    )
)

# Example review texts; in practice, load these from your data source
reviews = [
    "Great battery life, lasted all day.",
    "The screen is too dim outdoors.",
]

async def main():
    async with Sentimatrix(config) as sm:
        summary = await sm.summarize_reviews(reviews)
        print(summary)

asyncio.run(main())
```
### With Vision (Analyze Images)

```python
config = SentimatrixConfig(
    llm=LLMConfig(
        provider="openai",
        model="gpt-4o"  # Vision-capable model
    )
)

async with Sentimatrix(config) as sm:
    # Analyze product image
    result = await sm.analyze_image(
        image_path="product.jpg",
        prompt="Describe the sentiment conveyed by this product image"
    )
```
### Structured Output (JSON Mode)

```python
async with Sentimatrix(config) as sm:
    insights = await sm.generate_insights(
        reviews,
        output_format="json",
        schema={
            "pros": ["string"],
            "cons": ["string"],
            "rating": "number",
            "recommendation": "string"
        }
    )
```
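JSON mode guarantees syntactically valid JSON, not adherence to your schema, so it is worth validating the parsed result before using it. A minimal sketch; the `parse_insights` helper is illustrative, not part of Sentimatrix:

```python
import json

def parse_insights(raw: str) -> dict:
    """Parse model output and check the keys from the schema above are present."""
    data = json.loads(raw)
    for key in ("pros", "cons", "rating", "recommendation"):
        if key not in data:
            raise ValueError(f"missing key: {key}")
    return data

sample = '{"pros": ["fast"], "cons": ["pricey"], "rating": 4.2, "recommendation": "buy"}'
print(parse_insights(sample)["rating"])  # → 4.2
```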
### Function Calling

```python
async with Sentimatrix(config) as sm:
    # Automatic function calling for structured extraction
    analysis = await sm.extract_aspects(
        text="The battery life is great but the screen is too dim.",
        aspects=["battery", "screen", "price", "design"]
    )
    # Returns: {"battery": "positive", "screen": "negative", ...}
```
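The per-aspect labels can then be aggregated with plain Python. The `analysis` dict below is a hypothetical result in the shape shown in the comment above:

```python
from collections import Counter

# Hypothetical result in the shape returned by extract_aspects above
analysis = {"battery": "positive", "screen": "negative",
            "price": "neutral", "design": "positive"}

# Count how many aspects fall under each sentiment label
sentiment_counts = Counter(analysis.values())
print(sentiment_counts["positive"])  # → 2
```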
## Configuration Options

```python
LLMConfig(
    provider="openai",
    model="gpt-4o-mini",

    # API settings
    api_key="sk-...",
    organization="org-...",  # Optional
    base_url=None,  # For Azure/proxies

    # Generation settings
    temperature=0.7,
    max_tokens=4096,
    top_p=1.0,
    frequency_penalty=0.0,
    presence_penalty=0.0,

    # Reliability
    timeout=60,
    max_retries=3,

    # Features
    json_mode=False,  # Force JSON output
    seed=None,  # For reproducibility
)
```
## Rate Limits

OpenAI enforces tiered rate limits based on usage history:

| Tier | RPM (requests/min) | TPM (tokens/min) |
|---|---|---|
| Free | 3 | 40,000 |
| Tier 1 | 500 | 200,000 |
| Tier 2 | 5,000 | 2,000,000 |
| Tier 3+ | Higher | Higher |
### Handling Rate Limits

```python
config = SentimatrixConfig(
    llm=LLMConfig(
        provider="openai",
        model="gpt-4o-mini",
        rate_limit={
            "requests_per_minute": 450,
            "tokens_per_minute": 180000,
            "retry_on_rate_limit": True,
        }
    )
)
```
## Best Practices

1. **Choose the Right Model**
    - `gpt-4o-mini` for most tasks (best value)
    - `gpt-4o` for complex analysis
    - `o1` for multi-step reasoning

2. **Use JSON Mode for Structured Output**

3. **Enable Caching**
    - OpenAI supports prompt caching for repeated requests
    - Reduces costs by up to 50%

4. **Monitor Costs**
    - Use usage tracking
    - Set spending limits in the OpenAI dashboard
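For the JSON-mode practice above, a minimal configuration sketch reusing the `json_mode` flag listed under Configuration Options:

```python
# Sketch: force JSON output for structured-output workloads
# (json_mode is the flag listed under Configuration Options above)
config = SentimatrixConfig(
    llm=LLMConfig(
        provider="openai",
        model="gpt-4o-mini",
        json_mode=True,
    )
)
```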
## Troubleshooting

**Invalid API key**
Verify your API key:
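A quick local check, before making any API calls, that the key is present and well-formed. The helper below is illustrative, not part of Sentimatrix:

```python
import os

def check_api_key() -> str:
    """Return a short status string describing the OPENAI_API_KEY env var."""
    key = os.environ.get("OPENAI_API_KEY", "")
    if not key:
        return "missing: set OPENAI_API_KEY"
    if not key.startswith("sk-"):
        return "malformed: OpenAI keys start with 'sk-'"
    return "ok"

print(check_api_key())
```

If the key looks valid locally but requests still fail, regenerate it in the OpenAI dashboard.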
**Rate limit exceeded**
Implement backoff or reduce concurrency:
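A minimal exponential-backoff sketch; `with_backoff` is a generic helper, not a Sentimatrix API, and real code should catch the provider's specific rate-limit exception rather than bare `Exception`:

```python
import asyncio
import random

async def with_backoff(coro_factory, max_retries=5, base_delay=1.0):
    """Retry an async call with exponential backoff and jitter.

    coro_factory is a zero-argument callable returning a fresh coroutine,
    e.g. lambda: sm.summarize_reviews(batch).
    """
    for attempt in range(max_retries):
        try:
            return await coro_factory()
        except Exception:  # in practice, catch the rate-limit error specifically
            if attempt == max_retries - 1:
                raise
            # Double the delay each attempt, add jitter, cap at 30 seconds
            delay = min(base_delay * 2 ** attempt * (1 + random.random()), 30)
            await asyncio.sleep(delay)
```

Alternatively, lower `requests_per_minute` in the `rate_limit` settings shown above to reduce concurrency.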
**Context length exceeded**
Reduce input size or switch to a model with a larger context window (e.g. o1 at 200K):
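One simple way to reduce input size is to process reviews in batches under a rough character budget (roughly 4 characters per token for English text). A sketch, not a Sentimatrix API:

```python
def chunk_reviews(reviews, max_chars=12000):
    """Split reviews into batches that fit a rough character budget.

    ~12,000 chars is roughly 3,000 tokens, leaving headroom for the
    prompt and the model's response within a 128K context.
    """
    batches, current, size = [], [], 0
    for review in reviews:
        if current and size + len(review) > max_chars:
            batches.append(current)
            current, size = [], 0
        current.append(review)
        size += len(review)
    if current:
        batches.append(current)
    return batches
```

Each batch can then be summarized separately and the partial summaries combined in a final pass.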
## Related
- Provider Overview
- Azure OpenAI - Enterprise deployment
- Anthropic - Alternative premium provider