AI Chatbot & LLM SDK Documentation
The ChainGPT AI Chatbot & LLM feature allows you to integrate powerful conversational AI capabilities into your applications. You can send prompts, manage chat history, and customize the AI's responses using context injection.
Table of Contents
Installation
Quick Start
ChainGPTClient Parameters
LLM Service
LLMChatRequestModel Parameters
Basic LLM Usage
Buffered Chat (Complete Response)
Streaming Chat (Real-time Response)
Context Injection
ContextInjectionModel Parameters
TokenInformationModel Parameters
SocialMediaUrlModel Parameters
Context Injection Example
Enums and Constants
ChatHistoryMode
AITone Options
PresetTone Options
Supported Blockchain Networks
Error Handling
Exception Types
Error Handling Example
Complete Example
Best Practices
Installation
Install via pip:
pip install chaingpt
Or add to your requirements.txt:
chaingpt>=1.1.3
Quick Start
Basic Client Initialization
import os
from chaingpt.client import ChainGPTClient

API_KEY = os.getenv("CHAINGPT_API_KEY")

client = ChainGPTClient(api_key=API_KEY)
# Always remember to close the client
await client.close()

# Or use as a context manager (recommended)
async with ChainGPTClient(api_key=API_KEY) as client:
    # Your code here
    pass
ChainGPTClient Parameters
| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| api_key | str | Required | Your ChainGPT API key |
| base_url | str | "https://api.chaingpt.org" | The base URL for the ChainGPT API |
| timeout | HTTPTimeout | DEFAULT_TIMEOUT | Default timeout for regular requests (seconds) |
| stream_timeout | HTTPTimeout | DEFAULT_STREAM_TIMEOUT | Default timeout for streaming requests (seconds) |
LLM Service
The LLM service provides both buffered (complete response) and streaming chat capabilities.
LLMChatRequestModel Parameters
| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| model | str | "general_assistant" | Model ID to use (currently only "general_assistant" is supported) |
| question | str | Required | User's question or prompt (1-10,000 characters) |
| chat_history | ChatHistoryMode | ChatHistoryMode.OFF | Enable/disable chat history (ON or OFF) |
| sdk_unique_id | str | None | Unique session identifier (1-100 characters) |
| use_custom_context | bool | False | Whether to use custom context injection |
| context_injection | ContextInjectionModel | None | Custom context data (required if use_custom_context=True) |
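The sdk_unique_id must be 1-100 characters and stay the same across requests that belong to one conversation. One simple way to generate a compliant identifier, as a standalone sketch using only the standard library (the helper name and prefix are illustrative, not part of the SDK):

```python
import uuid

def new_session_id(prefix: str = "chat") -> str:
    """Generate a session identifier within the 1-100 character limit."""
    session_id = f"{prefix}-{uuid.uuid4()}"
    assert 1 <= len(session_id) <= 100
    return session_id

print(new_session_id())  # e.g. chat-4f9c2f0a-...
```

Reuse the same value for every request in a conversation when chat_history is ON.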
Basic LLM Usage
Buffered Chat (Complete Response)
from chaingpt.client import ChainGPTClient
from chaingpt.models import LLMChatRequestModel
from chaingpt.types import ChatHistoryMode

async def buffered_chat_example():
    client = ChainGPTClient(api_key=API_KEY)
    try:
        request = LLMChatRequestModel(
            question="What is blockchain technology?",
            chatHistory=ChatHistoryMode.OFF
        )
        response = await client.llm.chat(request)
        print(response.data.bot)
    finally:
        await client.close()
Streaming Chat (Real-time Response)
async def streaming_chat_example():
    client = ChainGPTClient(api_key=API_KEY)
    try:
        request = LLMChatRequestModel(
            question="Explain smart contracts in detail",
            chatHistory=ChatHistoryMode.ON,
            sdkUniqueId="unique-session-123"
        )
        print("AI Response:")
        async for chunk in client.llm.stream_chat(request):
            print(chunk.decode("utf-8"), end="", flush=True)
        print("\n--- Stream ended ---")
    finally:
        await client.close()
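stream_chat yields raw bytes chunks. If you also need the complete text after streaming finishes, you can accumulate the decoded chunks as they arrive. The sketch below is standalone: fake_stream is a stand-in async generator used in place of the real client.llm.stream_chat call.

```python
import asyncio

# Stand-in for client.llm.stream_chat(request); the real call
# yields bytes chunks in the same way.
async def fake_stream():
    for chunk in [b"Smart contracts ", b"are self-executing ", b"programs."]:
        yield chunk

async def collect_stream(stream) -> str:
    """Accumulate decoded chunks into the complete response text."""
    parts = []
    async for chunk in stream:
        parts.append(chunk.decode("utf-8"))
    return "".join(parts)

print(asyncio.run(collect_stream(fake_stream())))
# -> Smart contracts are self-executing programs.
```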
Context Injection
Context injection allows you to customize the AI's responses with specific company, project, or token information.
ContextInjectionModel Parameters
| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| company_name | str | None | Company or project name |
| company_description | str | None | Brief description of the company/project |
| company_website_url | HttpUrl | None | Company website URL |
| white_paper_url | HttpUrl | None | Whitepaper URL |
| purpose | str | None | Purpose or role of the AI chatbot |
| crypto_token | bool | None | Whether the project has a crypto token |
| token_information | TokenInformationModel | None | Token details (required if crypto_token=True) |
| social_media_urls | List[SocialMediaUrlModel] | None | Social media URLs |
| limitation | bool | None | Content limitation flag |
| ai_tone | AITone | None | AI tone setting |
| selected_tone | PresetTone | None | Selected preset tone (required if ai_tone=PRE_SET_TONE) |
| custom_tone | str | None | Custom tone description (required if ai_tone=CUSTOM_TONE) |
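The conditional requirements above (token details when crypto_token=True, a companion tone field depending on ai_tone) can be sketched as a small pre-flight check. This is an illustrative standalone helper operating on a plain dict, not part of the SDK, which performs its own validation:

```python
# Illustrative sketch of the conditional rules above; not part of the SDK.
def validate_context(context: dict) -> list[str]:
    """Return a list of problems with a context-injection payload."""
    problems = []
    if context.get("crypto_token") and not context.get("token_information"):
        problems.append("token_information is required when crypto_token=True")
    tone = context.get("ai_tone")
    if tone == "PRE_SET_TONE" and not context.get("selected_tone"):
        problems.append("selected_tone is required when ai_tone=PRE_SET_TONE")
    if tone == "CUSTOM_TONE" and not context.get("custom_tone"):
        problems.append("custom_tone is required when ai_tone=CUSTOM_TONE")
    return problems

print(validate_context({"crypto_token": True}))
# -> ['token_information is required when crypto_token=True']
```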
TokenInformationModel Parameters
| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| token_name | str | None | Name of the token |
| token_symbol | str | None | Token symbol/ticker |
| token_address | str | None | Token contract address |
| token_source_code | str | None | Token source code or repository URL |
| token_audit_url | HttpUrl | None | URL to token audit report |
| explorer_url | HttpUrl | None | Block explorer URL for the token |
| cmc_url | HttpUrl | None | CoinMarketCap URL |
| coingecko_url | HttpUrl | None | CoinGecko URL |
| blockchain | List[BlockchainNetwork] | None | List of supported blockchain networks |
SocialMediaUrlModel Parameters
| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| name | str | Required | Name of the social media platform |
| url | HttpUrl | Required | URL to the social media profile |
Context Injection Example
from chaingpt.client import ChainGPTClient
from chaingpt.models import (
    LLMChatRequestModel,
    ContextInjectionModel,
    TokenInformationModel,
    SocialMediaUrlModel
)
from chaingpt.types import AITone, PresetTone, BlockchainNetwork

async def context_injection_example():
    client = ChainGPTClient(api_key=API_KEY)
    try:
        # Define token information
        token_info = TokenInformationModel(
            tokenName="MyAwesomeToken",
            tokenSymbol="MAT",
            blockchain=[BlockchainNetwork.ETHEREUM, BlockchainNetwork.POLYGON]
        )
        # Define social media
        social_media = [
            SocialMediaUrlModel(
                name="twitter",
                url="https://twitter.com/myawesometoken"
            ),
            SocialMediaUrlModel(
                name="telegram",
                url="https://t.me/myawesometoken"
            )
        ]
        # Create context injection
        context = ContextInjectionModel(
            companyName="Awesome Crypto Inc.",
            companyDescription="Leading DeFi protocol for yield farming",
            cryptoToken=True,
            tokenInformation=token_info,
            socialMediaUrls=social_media,
            aiTone=AITone.PRE_SET_TONE,
            selectedTone=PresetTone.PROFESSIONAL
        )
        # Make request with context
        request = LLMChatRequestModel(
            question="Tell me about our token and its utilities",
            useCustomContext=True,
            contextInjection=context,
            sdkUniqueId="context-session-456"
        )
        response = await client.llm.chat(request)
        print(response.data.bot)
    finally:
        await client.close()
Enums and Constants
ChatHistoryMode
ChatHistoryMode.ON: Enable chat history
ChatHistoryMode.OFF: Disable chat history
AITone Options
AITone.DEFAULT_TONE: Use default AI tone
AITone.CUSTOM_TONE: Use custom tone (requires custom_tone parameter)
AITone.PRE_SET_TONE: Use preset tone (requires selected_tone parameter)
PresetTone Options
| Tone | Description |
| --- | --- |
| PROFESSIONAL | Professional business tone |
| FRIENDLY | Casual and friendly tone |
| INFORMATIVE | Educational and detailed tone |
| FORMAL | Formal and structured tone |
| CONVERSATIONAL | Natural conversation tone |
| AUTHORITATIVE | Expert and confident tone |
| PLAYFUL | Light and entertaining tone |
| INSPIRATIONAL | Motivational and uplifting tone |
| CONCISE | Brief and to-the-point tone |
| EMPATHETIC | Understanding and caring tone |
| ACADEMIC | Scholarly and research-focused tone |
| NEUTRAL | Balanced and objective tone |
| SARCASTIC_MEME_STYLE | Humorous and meme-like tone |
Supported Blockchain Networks
| Network | Enum Value |
| --- | --- |
| Ethereum | ETHEREUM |
| Binance Smart Chain | BSC |
| Arbitrum | ARBITRUM |
| Base | BASE |
| Blast | BLAST |
| Avalanche | AVALANCHE |
| Polygon | POLYGON |
| Scroll | SCROLL |
| Optimism | OPTIMISM |
| Linea | LINEA |
| zkSync | ZKSYNC |
| Polygon zkEVM | POLYGON_ZKEVM |
| Gnosis | GNOSIS |
| Fantom | FANTOM |
| Moonriver | MOONRIVER |
| Moonbeam | MOONBEAM |
| Boba | BOBA |
| Metis | METIS |
| Lisk | LISK |
| Aurora | AURORA |
| Sei | SEI |
| Immutable zkEVM | IMMUTABLE_ZK |
| Gravity | GRAVITY |
| Taiko | TAIKO |
| Cronos | CRONOS |
| Fraxtal | FRAXTAL |
| Abstract | ABSTRACT |
| World Chain | WORLD_CHAIN |
| Mantle | MANTLE |
| Mode | MODE |
| Celo | CELO |
| Berachain | BERACHAIN |
Error Handling
The SDK provides comprehensive error handling with specific exception types:
Exception Types
| Exception | Description | Common Cause |
| --- | --- | --- |
| ChainGPTError | Base exception for all SDK errors | General SDK errors |
| APIError | API returned an error response | API-specific errors |
| AuthenticationError | Authentication failed (401) | Invalid API key |
| ValidationError | Request validation failed (400) | Invalid request parameters |
| InsufficientCreditsError | Account has insufficient credits (402/403) | No credits remaining |
| RateLimitError | Rate limit exceeded (429) | Too many requests |
| NotFoundError | Endpoint not found (404) | Invalid endpoint |
| ServerError | Server error (5xx) | API server issues |
| TimeoutError | Request timed out | Network timeout |
| StreamingError | Streaming encountered an error | Streaming-specific issues |
| ConfigurationError | SDK configuration is invalid | Invalid configuration |
Error Handling Example
from chaingpt.client import ChainGPTClient
from chaingpt.models import LLMChatRequestModel
from chaingpt.types import ChatHistoryMode
from chaingpt.exceptions import (
    ChainGPTError,
    AuthenticationError,
    ValidationError,
    RateLimitError
)

async def error_handling_example():
    client = ChainGPTClient(api_key=API_KEY)
    try:
        request = LLMChatRequestModel(
            question="What is DeFi?",
            chatHistory=ChatHistoryMode.OFF
        )
        response = await client.llm.chat(request)
        print(response.data.bot)
    except AuthenticationError:
        print("Authentication failed. Please check your API key.")
    except ValidationError as e:
        print(f"Validation error: {e.message}")
        if e.field:
            print(f"Field: {e.field}")
    except RateLimitError as e:
        print(f"Rate limit exceeded: {e.message}")
        if e.retry_after:
            print(f"Retry after: {e.retry_after} seconds")
    except ChainGPTError as e:
        print(f"ChainGPT error: {e.message}")
        if e.details:
            print(f"Details: {e.details}")
    except Exception as e:
        print(f"Unexpected error: {e}")
    finally:
        await client.close()
Complete Example
Here's a comprehensive example demonstrating various features:
import asyncio
import os
from chaingpt.client import ChainGPTClient
from chaingpt.models import (
    LLMChatRequestModel,
    ContextInjectionModel,
    TokenInformationModel,
    SocialMediaUrlModel,
)
from chaingpt.types import AITone, PresetTone, ChatHistoryMode, BlockchainNetwork
from chaingpt.exceptions import ChainGPTError

async def comprehensive_example():
    API_KEY = os.getenv("CHAINGPT_API_KEY")
    if not API_KEY:
        print("Please set your CHAINGPT_API_KEY environment variable")
        return
    async with ChainGPTClient(api_key=API_KEY) as client:
        try:
            # Example 1: Simple chat
            print("=== Simple Chat ===")
            simple_request = LLMChatRequestModel(
                question="Explain blockchain in simple terms"
            )
            response = await client.llm.chat(simple_request)
            print(response.data.bot)

            # Example 2: Chat with context injection
            print("\n=== Chat with Context ===")
            token_info = TokenInformationModel(
                tokenName="DemoToken",
                tokenSymbol="DEMO",
                blockchain=[BlockchainNetwork.ETHEREUM]
            )
            context = ContextInjectionModel(
                companyName="Demo Company",
                companyDescription="Blockchain innovation company",
                cryptoToken=True,
                tokenInformation=token_info,
                aiTone=AITone.PRE_SET_TONE,
                selectedTone=PresetTone.PROFESSIONAL
            )
            context_request = LLMChatRequestModel(
                question="What can you tell me about our company?",
                useCustomContext=True,
                contextInjection=context,
                chatHistory=ChatHistoryMode.ON,
                sdkUniqueId="demo-session-123"
            )
            response = await client.llm.chat(context_request)
            print(response.data.bot)

            # Example 3: Streaming chat
            print("\n=== Streaming Chat ===")
            stream_request = LLMChatRequestModel(
                question="Tell me about DeFi protocols",
                chatHistory=ChatHistoryMode.ON,
                sdkUniqueId="demo-session-123"  # Continue same session
            )
            print("Streaming response:")
            async for chunk in client.llm.stream_chat(stream_request):
                print(chunk.decode("utf-8"), end="", flush=True)
            print("\n--- Stream complete ---")
        except ChainGPTError as e:
            print(f"Error: {e}")

if __name__ == "__main__":
    asyncio.run(comprehensive_example())
Best Practices
Always use context managers or remember to call close() to properly clean up resources
Handle exceptions appropriately: use specific exception types for better error handling
Use unique session IDs when maintaining chat history across multiple requests
Validate context injection: ensure required fields are provided when using custom context
Consider rate limits: implement retry logic with exponential backoff for production use
Use streaming for long responses: better user experience for lengthy AI responses
Store API keys securely: use environment variables, not hardcoded values
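The rate-limit recommendation above can be sketched as a generic async retry helper. This is a standalone illustration: the RuntimeError and the flaky operation below are stand-ins for the SDK's RateLimitError and a real client.llm.chat call.

```python
import asyncio
import random

# Generic exponential-backoff retry helper; a standalone sketch, not
# part of the chaingpt SDK. In production, catch RateLimitError and
# pass a coroutine function wrapping client.llm.chat(request).
async def retry_with_backoff(op, max_attempts=5, base_delay=0.01):
    for attempt in range(max_attempts):
        try:
            return await op()
        except RuntimeError:
            if attempt == max_attempts - 1:
                raise  # out of attempts; surface the error
            # Exponential backoff with jitter: base * 2^attempt + noise
            delay = base_delay * (2 ** attempt) + random.uniform(0, base_delay)
            await asyncio.sleep(delay)

async def main():
    calls = {"n": 0}

    async def flaky_op():
        calls["n"] += 1
        if calls["n"] < 3:  # fail twice, then succeed
            raise RuntimeError("rate limited")
        return "ok"

    result = await retry_with_backoff(flaky_op)
    print(result)  # -> ok

asyncio.run(main())
```

The jitter term spreads retries out so that many clients hitting the same limit do not all retry at the same instant.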
This concludes the documentation for the AI Chatbot & LLM feature!