Web3 AI Chatbot & LLM (API & SDK)
ChainGPT’s Web3 AI Chatbot & LLM combines a powerful Large Language Model (LLM) with deep blockchain expertise — enabling seamless integration of crypto-native AI into your platform. It’s trained on smart contracts, DeFi, NFTs, DAOs, and real-time market data, making it ideal for use cases like support, analytics, trading assistance, and community engagement.
Already trusted by leading platforms like Binance Square, BNB Chain, and TronDAO, ChainGPT’s LLM is purpose-built for the next generation of Web3 applications.
Web3 Domain Expertise
Understands the crypto market, smart contracts, DeFi, NFTs, and tokenomics. Fine-tuned on blockchain data, unlike general-purpose LLMs. Great for explaining or analyzing crypto concepts and code, and for market research.
Real-Time Insights
Pulls live on-chain and market data. Supports use cases like price checks, explorer queries, and trend tracking, keeping responses up to date.
Crypto-Specific Utilities
Built-in tools for smart contract auditing, smart contract generation, NFT creation, and technical analysis — replacing the need for separate developer tools.
Multi-Channel Deployment
Use the bot in websites, dApps, Discord, Telegram, and more — a consistent AI assistant across channels.
Security & Privacy First
Data is encrypted and isolated. ChainGPT doesn’t train its models on your data. Enterprise-safe, privacy-respecting, and production-ready.
Learn more about the unique capabilities of ChainGPT's Web3 LLM:
Integrating the ChainGPT Web3 AI Chatbot into your tech stack is straightforward. Your application (frontend and/or backend) communicates with ChainGPT’s cloud-hosted AI service via direct REST API calls or through ChainGPT’s SDK. At a high level, the flow looks like this:
User question: A user interacts with your interface (website, mobile app, dApp, chat widget) and asks a question or issues a command (e.g. “What’s the current ETH price?” or “Explain this smart contract code”).
Application request: Your app (client or backend) sends the user’s query to ChainGPT’s service, either via a direct HTTPS API call or using the ChainGPT SDK. The request includes your API key for authentication and can optionally include parameters like a specific AI tone or extra context (see Customization via AI Hub).
AI processing: ChainGPT’s cloud servers receive the request and the Web3 LLM processes it. The model uses its built-in crypto knowledge base (and can fetch live on-chain or market data if needed) along with any custom context you’ve provided. It then generates a response – answering the question or performing the requested action.
Response delivery: The answer is returned back through the API/SDK to your application. If you use the SDK, it will handle parsing the response (and even streaming tokens) for you; if you use the raw API, you’ll receive a JSON response. Your app then displays the answer to the user (e.g. showing the chatbot reply in the UI).
Multi-turn conversation: The chatbot supports follow-up questions. Your app can send subsequent queries along with a conversation ID or previous chat messages so ChainGPT knows the context. (The SDK makes this easy by managing conversation state for you.) This way, the user can have a natural back-and-forth dialogue with the AI.
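The multi-turn flow above can be sketched as follows. This is an illustrative sketch only – the JSON field names (`question`, `chatHistory`) and role labels are assumptions for the example, not the documented ChainGPT schema, and the SDK manages this state for you automatically.

```javascript
// Illustrative multi-turn state management (the SDK does this for you).
// The field names (question, chatHistory) and role labels are assumptions
// for this sketch, not the documented ChainGPT schema.
class ConversationClient {
  constructor(apiKey) {
    this.apiKey = apiKey; // sent as a Bearer token when you POST the request
    this.history = [];    // [{ role: "user" | "assistant", content: string }]
  }

  // Build a request body that carries prior turns so the model has context.
  buildRequest(question) {
    return { question, chatHistory: this.history.slice() };
  }

  // Record a completed exchange once the answer comes back.
  recordTurn(question, answer) {
    this.history.push({ role: "user", content: question });
    this.history.push({ role: "assistant", content: answer });
  }
}
```

Each subsequent `buildRequest(...)` call includes the accumulated history, which is what lets the model resolve follow-ups like “And its market cap?”.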
You can integrate ChainGPT via a RESTful web API or with the official JavaScript/TypeScript SDK – both provide the same Web3 AI capabilities, but with different developer experience.
The table below compares these options:
Platform
Works with any environment that can send HTTPS requests (cURL, Python, Java, Go, etc.). Great for server-side integration or non-JS languages.
Designed for JavaScript/TypeScript projects (Node.js, web browsers, React, etc). Distributed via NPM for easy inclusion in Node and front-end apps.
Setup
No additional library – simply make HTTP calls to ChainGPT’s API endpoints. You handle HTTP headers (e.g. `Authorization: Bearer <API_KEY>`) and parse JSON responses manually.
Install the SDK package (e.g. `npm install @chaingpt/generalchat`) and import it. Initialize an instance with your API key, then call the provided methods (e.g. `chat.ask("...")`) instead of crafting requests manually. The SDK manages authentication internally.
Features
Full control over request/response handling. Supports all features (including streaming responses over HTTP). You’ll manage details like constructing JSON payloads, maintaining conversation IDs, and handling errors via HTTP status codes.
Higher-level abstraction with utilities: built-in streaming support (events or async iterators), automatic conversation context management, and TypeScript type definitions for responses. Simplifies error handling and retries by exposing them as methods or callbacks.
When to Use
Use the REST API if you’re working in a non-JS environment or need maximum control. It’s ideal for backend services in Python/Java/etc., or any scenario where adding an SDK isn’t feasible. You get language-agnostic flexibility at the cost of a bit more coding.
Use the SDK if you are building in JavaScript/TypeScript and want to integrate quickly. It eliminates boilerplate and reduces mistakes by handling the API under the hood. This is perfect for Node.js backends or web apps that want a plug-and-play chatbot client.
One of ChainGPT’s most powerful features is the ability to customize the chatbot’s behavior and knowledge to fit your project. Through the ChainGPT AI Hub (a web portal for developers), you can configure your AI agent’s personality and inject custom context or data, without needing to train a new model from scratch. Key customization options include:
Tone & Persona: Define the AI’s tone and style to match your brand or use-case. You can choose from preset personalities (e.g. friendly, professional, humorous) or create a custom one. This setting influences the style of all responses (for example, a friendly tone might use casual language and emojis, whereas a formal tone would be more concise and polite).
Project Knowledge Base: Provide your project’s specific details such as name, description, website, documentation links, token info, FAQs, etc., via the AI Hub. The ChainGPT LLM will incorporate this information when responding. For instance, if a user asks “What is this project about?”, the chatbot will answer with the details you provided (mission, features, tokenomics) instead of a generic response.
Context Injection: Dynamically inject additional context on a per-request basis. Along with your user’s question, you can send contextual data (enabled by setting `useCustomContext: true` or by adding a `contextInjection` object in the API/SDK call). This could be a specific article, a recent announcement, or any relevant text. The injected context will influence the answer for that query, allowing fine-tuned responses (e.g. ask a question about a particular document by injecting its content).
Knowledge Extensions: Extend the AI’s knowledge beyond its base training. In AI Hub you can link to external URLs (like your platform’s docs or blockchain explorers) or upload files for the AI to refer to. This makes the chatbot capable of pulling in information from those sources when needed. For example, you could enable it to answer questions using data from your own documentation or even fetch on-chain stats about your token.
Using these features, you can tailor the AI’s output to be highly relevant and aligned with your project without writing long prompts each time. A support chatbot could adopt a polite, helpful tone and have full knowledge of your product’s FAQ, while a community bot for a DeFi project might use a meme-friendly voice and know the project’s latest roadmap update. This configuration-driven approach is safer and more manageable than manual prompt engineering – the context is stored securely in the AI Hub (tied to your API key) and can be updated on the fly for all subsequent calls.
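As a concrete illustration of per-request context injection, a request body might be assembled like this. The top-level field names follow the options described above (`useCustomContext`, `contextInjection`); the sub-field of `contextInjection` shown here is hypothetical – verify the exact schema against the API reference.

```javascript
// Sketch of a chat request body with per-request context injection.
// Top-level field names follow the options described above; the
// contextInjection sub-field is hypothetical – check the API reference.
function buildContextualRequest(question, contextText) {
  const body = { question };
  if (contextText) {
    body.useCustomContext = true;
    body.contextInjection = {
      // Hypothetical sub-field carrying the injected text:
      customContext: contextText,
    };
  }
  return body;
}
```

Requests without extra context simply omit the injection fields, so the chatbot falls back to its base knowledge plus whatever you configured in the AI Hub.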
Integrating ChainGPT’s Web3 AI Chatbot is designed to be developer-friendly. Here’s a quick step-by-step guide to start building with the API or SDK:
Sign Up & Access AI Hub: If you haven’t already, create an account on ChainGPT and log in to the AI Hub (the developer dashboard). This is where you can generate API keys and configure your chatbot.
Generate an API Key: In the AI Hub, navigate to the API/SDK section or developer settings and click “Create API Key” (you may be asked to label it for organization). Copy the key when it’s displayed – for security, you won’t be able to view it again later. (If you lose a key, you can always create a new one.)
Secure the Key: Treat your ChainGPT API key like a password. Do not expose it in client-side code or public repositories. If you’re making calls from a web frontend, route them through your backend server to keep the key hidden. In a Node.js or server environment, store the key in an environment variable or secure config file, not hard-coded in your source. This prevents others from using your key, which protects your account and usage quota.
Choose Integration Method: Decide whether you’ll use the JavaScript SDK or call the REST API directly in your project.
Using the SDK: Install the NPM package (e.g. `npm install @chaingpt/generalchat`) and import it into your project. Initialize the client with your API key, for example: `const chat = new GeneralChat({ apiKey: "YOUR_API_KEY" });`. Now you can send queries with a single method call – e.g. `const answer = await chat.ask("...");` – instead of manually handling HTTP. (See the SDK docs for details on streaming responses or advanced usage.)
Using the REST API: Make HTTPS requests to the ChainGPT API endpoint from your app. For example, a chat query can be sent with an HTTP POST request to the `/v1/chat` endpoint. Include your API key in the Authorization header and your question in the JSON body. For instance:
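A minimal sketch using Node’s built-in `fetch` (Node 18+). The base URL and the response field (`data`) are assumptions for this example – consult the API reference for the exact host and response schema. The `fetchImpl` parameter simply makes the helper easy to test.

```javascript
// Raw REST call with Node's built-in fetch (Node 18+). The base URL and
// the "data" response field are assumptions for this sketch; check the
// API reference for the exact schema.
async function askChainGPT(question, apiKey, fetchImpl = fetch) {
  const res = await fetchImpl("https://api.chaingpt.org/v1/chat", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({ question }),
  });
  if (!res.ok) {
    // Surface rate limits and auth failures instead of silently returning nothing.
    throw new Error(`ChainGPT API error: HTTP ${res.status}`);
  }
  const payload = await res.json();
  return payload.data; // assumed field holding the chatbot's answer
}
```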
Handle the Response: Parse and use the AI’s reply in your app. For a chat interface, you’ll append the chatbot’s answer to the conversation thread. If you requested a streaming response, handle the stream of partial data to update the answer in real-time. (The SDK provides convenience methods/events for streaming; with the raw API you’d read the response in chunks.) Make sure to account for any error messages or rate-limit responses in your code.
Test and Tweak: During development, experiment with your chatbot and adjust settings as needed. Use the AI Hub to tweak the bot’s tone or inject more context, then test again to see the changes. Monitor your usage in the AI Hub dashboard – you might start in a test mode or at small scale to ensure everything works without hitting quotas. This iterative process will help fine-tune the user experience.
Go Live: Integrate the chatbot into your production environment once you’re satisfied. This could mean deploying a chat widget on your site, adding a new support assistant in your app, or inviting the bot to your community channels. As users start interacting with it, you can continue to refine the AI’s behavior via AI Hub settings (which take effect immediately, no code deploy needed). Keep an eye on usage to ensure you have the appropriate plan or credits for your user base.
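The “Handle the Response” step above mentions assembling a streamed reply from partial chunks. A minimal, transport-agnostic helper might look like this (it works with any async iterable of chunks, such as a Node fetch response body):

```javascript
// Assemble a streamed reply chunk by chunk. Transport-agnostic: works
// with any async iterable of string/Buffer chunks (e.g. a Node fetch
// response body, or an iterator the SDK exposes).
async function collectStream(chunks, onToken) {
  let answer = "";
  for await (const chunk of chunks) {
    const token = chunk.toString();
    answer += token;              // running full answer
    if (onToken) onToken(token);  // e.g. append the token to the chat UI
  }
  return answer;
}
```

With the raw API you would pass the HTTP response body stream; with the SDK, prefer its built-in streaming events or async iterators.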
When deploying ChainGPT in a production setting, keep these considerations in mind:
API Key Management: You can generate multiple API keys to separate different environments or applications (e.g. one for development, one for production). This allows you to monitor usage per key and revoke keys individually if needed. Regularly review your keys in the dashboard and revoke any that are not in use to minimize security risks. Each key accumulates its own usage stats, which helps with cost tracking and debugging.
Security Best Practices: Never embed your secret API key in client-side code (JavaScript) or any public repositories. If building a web app, have your server relay requests so the key stays hidden. Use environment variables or secure storage for keys in your backend. All requests to ChainGPT’s API are encrypted via HTTPS and processed on secure infrastructure, but it’s up to you to enforce good security on your side. Also, ChainGPT does not use your query data to train its models, so your interactions remain private to your account.
API Reference: Dive into the full REST API documentation for ChainGPT’s Web3 LLM. It covers all available endpoints, request/response schemas, error codes, and examples in various languages. This is your go-to resource when you need detailed info on calling the API and handling responses.
SDK Documentation: Check out the JavaScript/TypeScript SDK docs, which include quickstart examples, class and method details, and best practices for using the SDK in your product. Learn how to stream responses, manage conversation history in code, and more.
Use Cases & Examples: Explore real-world examples and use cases to see ChainGPT in action. From DeFi platforms with on-site AI advisors to NFT marketplaces enhancing user support, these examples will inspire you and provide guidance on design patterns and integration approaches. (See our “AI Web3 Chatbot: Features and Use Cases” page for a feature-oriented overview.)
Case Studies: Read in-depth case studies of projects that have successfully integrated ChainGPT. Learn how companies improved user engagement and support efficiency using the chatbot, and understand the ROI and quantitative benefits. (For a detailed example, see how DexCheck – an analytics platform – leveraged ChainGPT to assist its users.)
Other AI Tools: Remember, ChainGPT’s ecosystem offers more than just the chatbot. Via the same API/SDK, you can access tools like the Smart Contract Auditor, Smart Contract Generator, AI NFT Generator, and more. These can complement your chatbot – for example, you might integrate the auditor to let users audit a contract via chat, or use the NFT generator for creative user commands. Check out the relevant docs sections for these tools to expand your application’s AI capabilities.
Ready to build? Get your API key from the AI Hub and start integrating a Web3-smart chatbot into your project today. With ChainGPT’s specialized LLM, you can offer users a cutting-edge, crypto-savvy assistant that elevates their experience. We’re excited to see what you create – and if you need any help, feel free to reach out on our developer Discord or support channels. Happy coding!
Pricing & Usage: ChainGPT uses a credit-based pricing model for API/SDK usage. (For example, a certain number of tokens or API calls might cost a fraction of a credit.) Refer to the pricing page for details on free tiers and paid plans. Generally, basic community chatbot usage on Telegram/Discord is free (within limits), whereas higher-volume or commercial API usage will require purchasing credits. The AI Hub dashboard shows your current credit balance and usage in real time, so you can monitor consumption and top up or upgrade as needed to avoid service interruption.
ChainGPT AI Hub: Use the AI Hub portal to manage and refine your chatbot. In the Hub you can adjust the AI’s persona and knowledge base over time, view analytics (usage metrics, chat logs, user feedback), and ensure your bot is performing well. It’s essentially the control center for your AI agent, so make sure to familiarize yourself with its features.