Documentation
Introduction to Accelerate AI
Welcome to Accelerate AI - the first platform to bring true AI processing to the Solana blockchain. Execute machine learning models, deploy intelligent smart contracts, and build autonomous agents with sub-second finality.
🚀 Why Accelerate AI?
- Blazing Fast: Leverage Solana's 65,000+ TPS for real-time AI inference
- Verifiable: Every prediction is cryptographically proven on-chain
- Decentralized: No centralized AI servers - everything runs on Solana
- Cost-Effective: Ultra-low gas fees make AI accessible to everyone
What You Can Build
🤖 AI Smart Contracts
Deploy intelligent contracts that make autonomous decisions using machine learning models.
📊 Predictive DeFi
Build DeFi protocols with AI-powered risk assessment and price prediction.
🎨 Dynamic NFTs
Create NFTs that evolve based on AI analysis of on-chain data and user behavior.
🔮 Autonomous Agents
Deploy self-governing agents that execute strategies using AI inference.
Quick Start
Get up and running with Accelerate AI in under 5 minutes.
Step 1: Install the SDK
npm install @accelerate-ai/sdk @solana/web3.js
Step 2: Initialize the Client
import { AccelerateAI } from '@accelerate-ai/sdk';
import { Connection, Keypair } from '@solana/web3.js';
// Connect to Solana
const connection = new Connection('https://api.mainnet-beta.solana.com');
const wallet = Keypair.generate(); // Demo only: a fresh keypair holds no SOL; load a funded wallet for real transactions
// Initialize Accelerate AI
const ai = new AccelerateAI({
connection,
wallet,
network: 'mainnet-beta'
});
Step 3: Run Your First Inference
// Use a pre-trained model
const result = await ai.inference({
model: 'sentiment-analysis-v1',
input: 'Solana is the fastest blockchain!',
});
console.log(result.prediction); // "positive"
console.log(result.confidence); // 0.96
console.log(result.txSignature); // On-chain proof
🎉 Congratulations! You've just executed your first AI inference on Solana!
Installation
Prerequisites
- Node.js 16+ or Python 3.8+
- Solana CLI tools (optional, for deployment)
- A Solana wallet with SOL for transactions
JavaScript/TypeScript
npm install @accelerate-ai/sdk
Python
pip install accelerate-ai-python
Rust (for on-chain programs)
cargo add accelerate-ai
⚠️ Note: For mainnet deployment, ensure you have sufficient SOL in your wallet to cover transaction fees.
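You can confirm your wallet balance before deploying. Here is a minimal sketch using the standard @solana/web3.js getBalance call; the 0.01 SOL threshold is an arbitrary example, not an official minimum:
import { Connection, Keypair, LAMPORTS_PER_SOL } from '@solana/web3.js';
const connection = new Connection('https://api.mainnet-beta.solana.com');
const wallet = Keypair.generate(); // Replace with your funded wallet
// getBalance returns the balance in lamports (1 SOL = 1,000,000,000 lamports)
const lamports = await connection.getBalance(wallet.publicKey);
console.log('Balance:', lamports / LAMPORTS_PER_SOL, 'SOL');
if (lamports < 0.01 * LAMPORTS_PER_SOL) {
  console.log('⚠️ Balance may be too low to cover transaction fees');
}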
Deploy Your First Model
Learn how to deploy a custom machine learning model on Solana.
Model Requirements
- Model must be serialized in ONNX, TensorFlow Lite, or PyTorch format
- Maximum model size: 10MB (optimized for on-chain storage; see the size check sketch after this list)
- Supported architectures: Neural Networks, Decision Trees, Linear Models
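Before deploying, you can check a serialized model file against the 10MB limit. This is a minimal sketch using Node's built-in fs module; the file path is a placeholder:
import { statSync } from 'fs';
const MAX_MODEL_BYTES = 10 * 1024 * 1024; // 10MB on-chain storage limit
// './my-model.onnx' is a placeholder path for your serialized model
const { size } = statSync('./my-model.onnx');
console.log(`Model size: ${(size / (1024 * 1024)).toFixed(2)} MB`);
if (size > MAX_MODEL_BYTES) {
  console.log('⚠️ Model exceeds the 10MB limit; see the Optimization Guide');
}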
Deployment Steps
import { AccelerateAI } from '@accelerate-ai/sdk';
import { Connection, Keypair } from '@solana/web3.js';
// Initialize the client (same setup as the Quick Start)
const connection = new Connection('https://api.mainnet-beta.solana.com');
const wallet = Keypair.generate(); // Replace with your funded wallet
const ai = new AccelerateAI({ connection, wallet, network: 'mainnet-beta' });
// Load your model
const model = await ai.loadModel('./my-model.onnx');
// Deploy to Solana
const deployment = await ai.deploy({
model: model,
name: 'price-predictor-v1',
description: 'Predicts SOL price movements',
inputSchema: {
type: 'array',
shape: [5], // 5 input features
},
pricing: {
inferenceFeeLamports: 1000, // 0.000001 SOL per inference
}
});
console.log('Model deployed at:', deployment.programId);
console.log('Model URL:', deployment.explorerUrl);
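Once deployed, the model can be called like any other. A minimal sketch, assuming the model is addressable by the name used above (or by deployment.programId, as documented in the SDK Reference):
// Call the newly deployed model by name (or by deployment.programId)
const result = await ai.inference({
  model: 'price-predictor-v1',
  input: [0.42, 0.13, 0.77, 0.05, 0.91], // 5 features, matching inputSchema.shape
});
console.log('Prediction:', result.prediction);
console.log('Confidence:', result.confidence);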
Architecture Overview
Understanding how Accelerate AI integrates with Solana.
The stack has four layers, from your application down to the Solana base chain:
- Application Layer: Your dApp, smart contracts, and web apps
- Accelerate AI SDK: TypeScript, Python, and Rust SDKs
- AI Runtime Layer: On-chain inference engine and model registry
- Solana Blockchain: Proof-of-History and the Sealevel runtime
Key Components
- Model Registry: On-chain registry of deployed AI models
- Inference Engine: Executes models within Solana's compute budget
- Verification Layer: Cryptographic proof of inference results (see the lookup sketch after this list)
- Training Coordinator: Manages decentralized training jobs
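As a sanity check on the Verification Layer's output, you can look up the transaction behind any txSignature returned by ai.inference using standard @solana/web3.js. This only confirms the record exists on-chain; it does not re-verify the cryptographic proof itself:
import { Connection } from '@solana/web3.js';
const connection = new Connection('https://api.mainnet-beta.solana.com');
// `result` is the return value of a previous ai.inference call
const tx = await connection.getTransaction(result.txSignature, {
  maxSupportedTransactionVersion: 0,
});
console.log('Inference recorded in slot:', tx?.slot);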
AI Smart Contracts
Build intelligent smart contracts that use AI for decision-making.
Example: AI-Powered Trading Bot
use accelerate_ai::prelude::*;
use anchor_lang::prelude::*;

// Placeholder program ID; replace with your program's actual ID
declare_id!("Fg6PaFpoGXkYsidMpWTK6W2BeZ7FEfcYkg476zPFsLnS");

#[program]
pub mod ai_trader {
    use super::*;

    pub fn execute_trade(ctx: Context<ExecuteTrade>) -> Result<()> {
        // Get market data (fetch_market_data is an illustrative helper)
        let market_data = fetch_market_data(&ctx.accounts.market)?;
        // Run AI prediction
        let prediction = accelerate_ai::inference(
            "price-predictor-v1",
            market_data,
        )?;
        // Execute trade based on the AI decision (execute_buy_order and
        // execute_sell_order are illustrative helpers)
        if prediction.confidence > 0.8 {
            if prediction.label == "bullish" {
                execute_buy_order(&ctx, prediction.amount)?;
            } else {
                execute_sell_order(&ctx, prediction.amount)?;
            }
        }
        Ok(())
    }
}

#[derive(Accounts)]
pub struct ExecuteTrade<'info> {
    /// CHECK: Market account read by the illustrative fetch_market_data helper
    pub market: AccountInfo<'info>,
}
SDK Reference
Core Methods
ai.inference(options)
Execute an inference on a deployed model.
Parameters:
- model (string): Model name or program ID
- input (any): Input data for the model
- options (object): Additional options
Returns:
{
prediction: any, // Model output
confidence: number, // Confidence score (0-1)
txSignature: string, // Transaction signature
computeUnits: number // Compute units used
}
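For example, the documented return fields can be inspected directly; the model name and input here are the ones from the Quick Start:
const result = await ai.inference({
  model: 'sentiment-analysis-v1',
  input: 'Solana is the fastest blockchain!',
});
console.log(result.prediction, result.confidence);
console.log('Compute units used:', result.computeUnits);
console.log('On-chain proof:', result.txSignature);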
ai.deploy(options)
Deploy a model to Solana.
Parameters:
- model (Buffer): Serialized model
- name (string): Model name
- pricing (object): Fee configuration
ai.train(options)
Start a decentralized training job.
Parameters:
- dataset (string): Dataset ID or IPFS hash
- architecture (string): Model architecture
- epochs (number): Training epochs
- validators (number): Number of validators
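A minimal sketch of a training call using only the parameters documented above; the dataset hash, architecture name, and counts are placeholders:
// Start a decentralized training job (all values below are placeholders)
const job = await ai.train({
  dataset: 'QmYourDatasetHashOnIPFS',
  architecture: 'mlp-small',
  epochs: 10,
  validators: 3,
});
console.log('Training job started:', job);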
Optimization Guide
Optimize your models for on-chain execution.
Model Size Optimization
- Quantization: Reduce model precision from FP32 to INT8
- Pruning: Remove unnecessary weights and connections
- Knowledge Distillation: Train smaller models that mimic larger ones
Compute Budget Tips
💡 Solana has a 1.4M compute unit limit per transaction. Optimize your models to stay within this budget.
// Check compute usage before deployment
const analysis = await ai.analyzeModel('./model.onnx');
console.log('Estimated compute units:', analysis.computeUnits);
if (analysis.computeUnits > 1_400_000) {
console.log('⚠️ Model too large, consider optimization');
}
Security Best Practices
⚠️ Important Security Considerations
- Always validate model inputs to prevent adversarial attacks
- Use multi-signature wallets for model deployment in production
- Implement rate limiting to prevent inference spam (see the sketch at the end of this section)
- Audit your models for bias and fairness before deployment
Input Validation
// Always validate inputs
function validateInput(input: number[]): boolean {
// Check array length
if (input.length !== 5) return false;
// Check for valid ranges
return input.every(x => x >= 0 && x <= 1 && !isNaN(x));
}
const userInput = [0.1, 0.5, 0.3, 0.8, 0.2];
if (!validateInput(userInput)) {
throw new Error('Invalid input data');
}
const result = await ai.inference({
model: 'my-model',
input: userInput
});
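Rate Limiting
The rate-limiting recommendation above can be prototyped off-chain, in front of the SDK. This is a minimal single-process sketch using a fixed window per caller; the limits are arbitrary examples:
// Simple fixed-window rate limiter: at most MAX_REQUESTS per caller per window
const MAX_REQUESTS = 10;
const WINDOW_MS = 60_000; // 1 minute
const counters = new Map<string, { count: number; windowStart: number }>();
function allowRequest(callerId: string): boolean {
  const now = Date.now();
  const entry = counters.get(callerId);
  if (!entry || now - entry.windowStart >= WINDOW_MS) {
    counters.set(callerId, { count: 1, windowStart: now });
    return true;
  }
  if (entry.count >= MAX_REQUESTS) return false;
  entry.count += 1;
  return true;
}
// Gate inference calls behind the limiter
if (!allowRequest(wallet.publicKey.toBase58())) {
  throw new Error('Rate limit exceeded, try again later');
}
const limited = await ai.inference({ model: 'my-model', input: userInput });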