Fully Automated Crypto Trading System
# Overview
This is a fully automated, multi-user cryptocurrency trading system I built to eliminate emotion from trading. What started as a targeted tool evolved into a complete trading infrastructure with four independent but interconnected modules: the Bot Engine (strategy execution), Multi-Nodes (trade execution for multiple users), Telegram Bot (user interface and management), and the Service Connector (communication backbone). The system currently runs live, managing real trades for myself and a growing group of users with automated optimization, self-healing capabilities, and comprehensive logging. It is designed for scalability, reliability, and speed—executing trades in under 1 second from signal detection to order placement on the exchange.
This is a production system that handles real capital, real users, and real market conditions 24/7. The system automatically scans markets, optimizes trading parameters, executes trades, manages risk, monitors performance, handles billing, and self-heals when necessary. It's proof that I can build complex distributed systems that actually work in the real world.
# Why I Built This
I built this to overcome the challenges of emotional trading and self-discipline. People often trade poorly when they let fear or excitement make decisions, so automation fixes that. Algorithmic trading tools exist, but they're either too expensive, too complicated, or simply don't work; they exist only to take your money. Retail traders want automated trading but don't have the technical skills or the capital to build their own systems. I wanted to create something that:
- Eliminates human error - Adhering to a strict strategy without emotional interference.
- Automates risk management - Built-in position sizing, stop-losses, and daily trade limits act as a constant discipline engine.
- Optimizes itself - The system automatically finds the best parameters for each market, removing the need for manual oversight.
- Scales to many users - A robust infrastructure capable of serving a large user base simultaneously.
- Is reliable and self-healing - The system is designed to automatically detect and resolve failures.
Recognizing the potential for a commercial platform, I expanded the system so others can benefit from these automated signals. The business model is a profit-sharing agreement—users only pay when they succeed. This aligns my success directly with theirs. This project demonstrates my ability to handle complex distributed systems, real-time data processing, financial risk management, and business operations—managing everything from low-level socket programming to high-level strategy logic.
# System Architecture - The 4-Pillar Design
The system is built around four core modules, each with a specific role in the trading ecosystem. Think of it as a living organism where each part has a vital function:
Module 1: Bot System
The Bot System is the intelligence center of the entire operation. It analyzes market data, executes trading strategies, performs backtesting, and optimizes parameters automatically. This module runs independently, continuously scanning multiple cryptocurrency pairs, identifying profitable opportunities, and generating high-quality trading signals. It's where all the trading logic lives - from technical analysis and pattern recognition to risk management and position sizing. The Bot System doesn't interact directly with exchanges; instead, it focuses purely on decision-making and signal generation.
Key Responsibilities:
- Market data analysis and pattern recognition
- Strategy execution and signal generation
- Backtesting and parameter optimization
- Risk management and position sizing
- Performance monitoring and self-healing
Module 2: Node System
The Node System is the execution engine that puts the Bot System's decisions into action. Each Node represents a user account and connects directly to exchange APIs to execute trades, manage orders, and handle real-time market data. This is the working part of the system - it's where the actual trading happens. Nodes are independent and can run multiple instances simultaneously, each managing its own set of trades while following the signals from the Bot System. Think of it as having multiple hands executing trades across different accounts with precision and speed.
Key Responsibilities:
- Connect to exchange APIs and authenticate users
- Execute buy/sell orders based on Bot System signals
- Monitor order status and manage open positions
- Handle real-time market data streams
- Maintain local trade databases and balance caching
- Implement individual user risk limits and trade limits
Module 3: Telegram Bot
The Telegram Bot serves as the user interface and communication hub. It's how users interact with the system, monitor performance, receive alerts, and manage their accounts. Through a clean and intuitive Telegram interface, users can view their portfolio, track live trades, adjust settings, manage subscriptions, and receive real-time notifications. The Bot also handles user onboarding, billing, and customer support - making it the face of the entire system. It's both the eyes (showing users what's happening) and the mouth (communicating with users).
Key Responsibilities:
- User authentication and account management
- Real-time trade notifications and alerts
- Portfolio visualization and performance tracking
- Billing system and subscription management
- User settings and preferences
- Customer support and help documentation
Module 4: Service Connector
The Service Connector is the communication backbone that keeps everything connected and synchronized. It acts as the central message bus, facilitating real-time communication between all modules. When the Bot System generates a signal, the Service Connector distributes it to all active Nodes. When a Node executes a trade, the Service Connector updates the Telegram Bot. It's the circulatory system of the architecture, ensuring information flows smoothly and reliably. Built on ZeroMQ for high-performance messaging, it handles thousands of messages per second with minimal latency.
Key Responsibilities:
- Real-time message routing between modules
- Signal distribution from Bot System to Nodes
- Status updates from Nodes to Telegram Bot
- Command execution and coordination
- System health monitoring and heartbeat management
- Message queuing and delivery guarantees
# The Communication Layer
Realtime Candle Data Stream
I spent more time thinking about this architecture than I did building the trading logic itself. Why? Because we are playing in someone else's sandbox. We need real-time data, and while Binance provides great APIs, they have limits—connection limits, rate limits, and concurrent stream limits.
To scale this to hundreds of bots without getting banned, I had to optimize this to the absolute limit.
The Solution: WebSocket Aggregation
WebSockets are the most efficient way to handle real-time data. To avoid hitting connection limits, I leveraged Binance's capability to subscribe to up to 1024 data streams on a single connection. I built a custom Service Connector module to act as a central hub:
- It opens a single pipe to Binance.
- It multiplexes up to 1024 coin streams into that one pipe.
- If that pipe fills up? It automatically spawns a new connection.
- If the connection drops for an unknown reason? It automatically reconnects.
This ensures the system can scale to thousands of bots without exceeding connection limits.
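The batching logic can be sketched as a few pure functions: pack stream names into as few connections as possible, respecting the per-connection cap. This is a minimal sketch with helper names of my own choosing, assuming Binance's combined-stream URL format (`/stream?streams=a@kline_5m/b@kline_5m`):

```python
# Sketch: multiplex coin streams into as few WebSocket connections as
# possible. The 1024-streams-per-connection cap comes from the text;
# the URL format is the documented Binance combined-stream endpoint.

MAX_STREAMS_PER_CONN = 1024

def build_stream_names(symbols, interval="5m"):
    """Kline stream names are lowercase: '<symbol>@kline_<interval>'."""
    return [f"{s.lower()}@kline_{interval}" for s in symbols]

def plan_connections(stream_names, cap=MAX_STREAMS_PER_CONN):
    """Split the full stream list into per-connection batches;
    a new batch means the connector spawns a new connection."""
    return [stream_names[i:i + cap] for i in range(0, len(stream_names), cap)]

def combined_stream_url(batch, base="wss://fstream.binance.com/stream"):
    """One multiplexed URL per batch: stream names are '/'-joined."""
    return f"{base}?streams={'/'.join(batch)}"
```

With 2,500 coins this plans three connections (1024 + 1024 + 452 streams), which is exactly the "if the pipe fills up, spawn a new one" behavior described above.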
Here's what the connection setup looks like in the logs:
2026-01-08 19:33:31,588 - INFO - Starting WebSocket and ZMQ Proxy Service...
2026-01-08 19:33:31,592 - INFO - ZMQ Proxy publishers is listening on tcp://*:5555
2026-01-08 19:33:31,593 - CONNECTOR - INFO - ZMQ Proxy subscribers is broadcasting on tcp://*:5556
2026-01-08 19:33:31,593 - INFO - Connection Manager started.
...
2026-01-08 19:33:32,109 - INFO - Listener started: ['zkusdt@kline_5m', 'jellyjellyusdt@kline_5m']
...
2026-01-08 19:33:33,373 - INFO - Data Received >>> Coin: JELLYJELLYUSDT | Interval: 5m | O : 0.0611000 | C : 0.0608300
Self-Healing Capabilities
Connections can drop, so I implemented a robust self-healing mechanism. Every 24-48 hours (configurable), the connector proactively refreshes by closing the current connection and instantly reconnecting.
2026-01-09 19:33:32,778 - CONNECTOR - INFO - ==========================================
2026-01-09 19:33:32,778 - CONNECTOR - INFO - Service uptime limit (86400s) reached. Performing scheduled restart.
2026-01-09 19:33:32,778 - CONNECTOR - INFO - ==========================================
2026-01-09 19:33:32,780 - CONNECTOR - INFO - Shutting down combined service for scheduled restart...
...
2026-01-09 19:33:33,381 - CONNECTOR - INFO - Shutdown complete.
...
2026-01-09 19:33:33,883 - CONNECTOR - INFO - Restarting Connector Service...
2026-01-09 19:33:38,918 - CONNECTOR - INFO - Starting WebSocket and ZMQ Proxy Service...
2026-01-09 19:33:38,957 - CONNECTOR - INFO - Connection Manager started.
As you can see, the system detects the scheduled uptime limit, restarts, and re-subscribes in seconds. It’s bulletproof.
ZMQ & Internal Signals (The Nervous System)
Once we have the data, we need to move it fast. This is where ZeroMQ (ZMQ) comes in. When a bot finds a setup, it doesn't just "place a trade." It broadcasts a signal to the entire internal network.
The Service Connector acts as the central hub (The Dispatcher). It takes the signal and shoots it out to every active Node.
2026-01-20 06:00:00,115 - INFO - NEW SIGNAL DETECTED by strategy 'EMA_TREND'.
2026-01-20 06:00:00,229 - INFO - Initiating new trade mt_aba2d0580616 | Side: BUY | Entry: 0.026749
2026-01-20 06:00:00,235 - INFO - IO > Processing NEW_SIGNAL for trade mt_aba2d0580616
2026-01-20 06:00:00,269 - INFO - Successfully broadcasted signal via ZMQ for trade mt_aba2d0580616
2026-01-20 06:00:02,121 - INFO - Trade mt_aba2d0580616 event: FILLED
2026-01-20 06:00:02,135 - INFO - Telegram signal sent, received message_id: 3419
2026-01-20 06:00:02,147 - INFO - IO > Processing TRADE_FILLED for trade mt_aba2d0580616
When a signal is detected, these steps execute asynchronously. The timestamps show the path from signal detection to ZMQ broadcast completing in about 150 ms, reflecting the high-performance nature of this network.
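ZMQ PUB/SUB filters messages by a leading topic prefix, so the internal signal can be modeled as a topic plus a JSON payload. The envelope below is an illustrative sketch; the real wire format and field names may differ:

```python
import json

def encode_signal(topic, payload):
    """Build a ZMQ-style frame: subscribers filter on the leading topic
    prefix, so the format here is b'<topic> <compact-json>'."""
    return f"{topic} {json.dumps(payload, separators=(',', ':'))}".encode()

def decode_signal(frame):
    """Split off the topic at the first space and parse the JSON body."""
    topic, _, body = frame.decode().partition(" ")
    return topic, json.loads(body)
```

A Node subscribed to the `NEW_SIGNAL` topic would decode the frame and hand the payload straight to its execution logic.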
Trade Execution: The Need for Speed
This is where the magic happens. The User Nodes are the "hands" that actually execute the trades. I needed this to be fast—sub-second fast.
Here is the exact workflow from signal to filled order, captured in actual production logs:
2026-01-20 10:45:04,314 - INFO - Received signal for SOPHUSDT. Processing immediately.
2026-01-20 10:45:04,722 - INFO - [UID:1] Successfully set leverage for SOPHUSDT to 10x. - 194.00 ms
2026-01-20 10:45:04,907 - INFO - [UID:1] Entry order placed (ID: 8124704792). Awaiting confirmation. - 172.42 ms
2026-01-20 10:45:04,916 - INFO - [UID:1] Entry order confirmed NEW.
2026-01-20 10:45:05,016 - INFO - [UID:1] REAL-TIME balance update: 21.35 USDT
2026-01-20 10:45:05,195 - INFO - [UID:1] Stop Loss order placed successfully. 203.30 ms
2026-01-20 10:45:05,381 - INFO - [UID:1] Take Profit order placed successfully. Trade is active. 187.10 ms
1. The Caching Engine (Speed)
We can't ask Binance for the user's balance every time we want to trade—that's an extra API call that wastes 200ms. Instead, I built a local balance cache. We track the user's balance internally.
- Result: Balance checks take < 1ms.
- Impact: We can calculate position sizes for 100 users instantly.
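A minimal sketch of such a balance cache, with class and method names of my own choosing: reads are lock-protected dict lookups instead of REST calls, fills update the cached value locally, and a periodic API sync (not shown) reconciles drift.

```python
import threading

class BalanceCache:
    """Local, thread-safe balance cache: checking a balance is a dict
    lookup (<1ms) instead of a ~200ms exchange API round-trip."""

    def __init__(self):
        self._balances = {}
        self._lock = threading.Lock()

    def set_from_api(self, uid, balance):
        """Seed or reconcile from a real exchange snapshot."""
        with self._lock:
            self._balances[uid] = balance

    def apply_fill(self, uid, delta):
        """Update locally when a fill changes the user's balance."""
        with self._lock:
            self._balances[uid] = self._balances.get(uid, 0.0) + delta

    def get(self, uid):
        with self._lock:
            return self._balances.get(uid)
```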
2. Execution Logic
Once the size is calculated, the Node sets the leverage (10x-20x) and fires a Limit Order. Using Limit Orders minimizes slippage, and because our execution is highly optimized, we typically get filled immediately while capturing maker fees.
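The sizing step can be sketched as classic risk-based position sizing. This is my reconstruction of the general technique, not the production formula: the quantity is chosen so that hitting the stop loses exactly the risked fraction of the account, and leverage only changes the margin required, not the amount risked.

```python
def position_size(balance, risk_percent, entry, stop):
    """Risk a fixed fraction of the account between entry and stop.
    E.g. 1.25% risk on a $1000 account with a 0.005 stop distance
    gives a 2500-unit position (illustrative numbers)."""
    risk_amount = balance * risk_percent
    stop_distance = abs(entry - stop)
    if stop_distance == 0:
        raise ValueError("entry and stop cannot be equal")
    return risk_amount / stop_distance
```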
3. The "3-Layer Recovery" System
I learned this the hard way: sometimes Binance fills an order, but the "Order Filled" event packet gets lost in the internet ether. If the bot doesn't know it's in a trade, it won't place a Take Profit or Stop Loss. That is catastrophic.
So I built a Triple-Redundancy System to confirm trades:
- WebSocket Stream: Listens for the standard "Order Filled" event for immediate confirmation.
- Balance Monitor: Watches the wallet balance; any significant shift triggers a trade verification check.
- API Fallback: If the first two stay silent, we manually poll the API to double-check.
This ensures that no trade is ever left "naked" without a Stop Loss.
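The triple-redundancy idea can be sketched as an idempotent confirmation gate: any of the three sources may report the fill, the first one wins, and the rest become no-ops, so a lost WebSocket packet can never leave a trade without its protective orders. Names here are mine, not from the codebase:

```python
class TradeConfirmer:
    """First confirmation from any source triggers the fill handler
    (e.g. placing SL/TP orders) exactly once."""

    SOURCES = ("websocket", "balance_monitor", "api_poll")

    def __init__(self, on_filled):
        self._confirmed = set()
        self._on_filled = on_filled  # callback: place Stop Loss / Take Profit

    def confirm(self, trade_id, source):
        if source not in self.SOURCES:
            raise ValueError(f"unknown source: {source}")
        if trade_id in self._confirmed:
            return False           # already handled; later sources are no-ops
        self._confirmed.add(trade_id)
        self._on_filled(trade_id)  # runs exactly once per trade
        return True
```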
# Technical Stack
Bot Module (Strategy Engine)
- Python 3.11 - Core language
- pandas & numpy - Data manipulation and calculations
- pandas-ta - Technical analysis indicators
- python-binance - Binance WebSocket and REST API
- websocket-client - WebSocket connections
- pyzmq - Message passing between components
- python-dotenv - Environment configuration
- threading - Concurrent actor pattern implementation
Node Module (Trade Executor)
- Python 3.11 - Core language
- asyncio - Async I/O for concurrent user streams
- python-binance - Async Binance client for trade execution
- zmq.asyncio - Async ZMQ for signal reception
- mysql-connector-python - Database connectivity
- pandas - Data analysis for trade reconciliation
- threading - Background tasks and monitoring
Telegram Bot Module (User Interface)
- python-telegram-bot - Telegram Bot API wrapper
- job-queue - Scheduled tasks (billing, notifications)
- mysql-connector-python - Database operations
- cryptography - Fernet encryption for API keys
- pyzmq - Communication with other modules
- pandas - Performance calculations and reporting
- matplotlib & seaborn - Trade analysis charts
Service Connector Module (Data & Message Bus)
- Python 3.11 - Core language
- pyzmq - ZeroMQ Proxy (XPUB/XSUB) for message brokering
- websocket-client - Aggregated Binance WebSocket streams (1024 streams/conn)
- threading - Thread-safe connection management
- requests - Initial snapshot fetching
Database & Infrastructure
- MySQL 8.0 - Primary database (users, trades, payments)
- connection pooling - Efficient database connections
- ZMQ - Message broker for inter-module communication
- WebSocket - Real-time data streams from Binance
- File-based configs - JSON with atomic updates for bot configurations
Testing & Quality
- pytest - Unit and integration testing
- pytest-asyncio - Async test support
- smoke tests - Import validation for all modules
- integration tests - Component communication testing
- unit tests - Strategy and configuration testing
# The Bot's Technical Core - Trading Strategies and Logics
I designed a "Plug-and-Play" Strategy Engine to streamline the testing of new ideas and eliminate redundant code.
1. Strategies as Simple Python Functions
In this system, a trading strategy is just a standard Python function. It takes market data (a pandas DataFrame) in and spits out buy/sell signals. The heavy lifting—calculating Stop Losses, Position Sizing, and Leverage—is abstracted away into shared core logic.
This allows me to deploy a new strategy in minutes by focusing purely on the entry conditions.
How it works:
- Standard Signature: Every strategy (e.g., EMA_TREND, SWING_FIBO) accepts the same inputs: (df, config, is_live).
- Shared Helpers: Functions like _calculate_common_indicators and _calculate_leverage_and_trade_limits handle the math automatically.
- Pandas-TA Power: I use pandas-ta to calculate complex indicators in one line of code.
If I want to create a new "RSI Reversal" strategy, I just define def RSI_REVERSAL(...), write the logic if rsi < 30: BUY, and I'm done. The system handles the rest.
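The plug-and-play pattern can be sketched as a registry plus the shared `(df, config, is_live)` signature. This is my reconstruction: the real system passes a pandas DataFrame and computes indicators with pandas-ta, while here `df` is simplified to a list of closes and the RSI is a naive hand-rolled version, just to keep the sketch dependency-free.

```python
# Registry sketch: strategies self-register under their function name,
# so deploying a new one is just defining a decorated function.

STRATEGIES = {}

def strategy(fn):
    STRATEGIES[fn.__name__] = fn
    return fn

@strategy
def RSI_REVERSAL(df, config, is_live):
    """df: list of closes (simplified). Naive RSI over `rsi_lookback` bars;
    returns 'BUY' below 30, else None."""
    lookback = config.get("rsi_lookback", 14)
    closes = df[-(lookback + 1):]
    gains = [max(b - a, 0) for a, b in zip(closes, closes[1:])]
    losses = [max(a - b, 0) for a, b in zip(closes, closes[1:])]
    avg_gain, avg_loss = sum(gains) / lookback, sum(losses) / lookback
    if avg_loss == 0:
        return None                     # no downside moves: not oversold
    rsi = 100 - 100 / (1 + avg_gain / avg_loss)
    return "BUY" if rsi < 30 else None
```

The engine can then dispatch by name from the config (`STRATEGIES[config["strategy"]](df, config, is_live)`), which is what makes a new strategy deployable in minutes.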
2. The Configuration Engine (configs.json)
Hardcoding parameters is for amateurs. I built a dynamic configuration engine that lets me control every aspect of every bot without touching the code.
The configs.json file is the "Mission Control". It tells the system:
- Which Strategy to run (e.g.,
EMA_TREND). - Which Coin to trade (e.g.,
BTCUSDT). - How much Risk to take (e.g., 1% per trade).
- Specific Strategy Parameters (e.g.,
atr_lookback,ema_length).
Live Updates: The best part? The system watches this file. If I change a parameter in the JSON, the bot updates instantly on the next candle. No restart required.
Here is a production config snippet for the Swing Fibonacci and EMA Pullback bots:
"SWING_FIBONACHI": {
"is_active": true,
"strategy": "SWING_FIBO",
"symbol": "AXLUSDT",
"interval": "15m",
"rr_ratio": 3.0,
"max_trades_day": 2,
"risk_percent": 0.0125,
"margin_multiplier": 20,
"pending_trade_timeout_hours": 1,
"resume_on_next_start": false,
"in_a_trade": false,
"Prems": {
"swing_lookback": 20,
"trend_ema": 50,
"fib_zone_start": 0.5,
"fib_zone_end": 0.618,
"atr_lookback": 14,
"atr_multi": 1.5
},
"Last_backtest_result": {
"win_rate": 41.67,
"max_drawdown": 0.39,
"profit_factor": 2.04,
"oos_total_trades": 12,
"oos_wins": 5,
"oos_losses": 7
},
"Health_State": {
"current_equity": 1061.93,
"peak_equity": 1102.79,
"total_trades_closed": 7,
"recent_trades": [
{"trade_id": "mt_191e26e1a85b", "account_pnl_percent": 3.7495},
{"trade_id": "mt_bd7bd03b6ac4", "account_pnl_percent": 3.7495},
{"trade_id": "mt_db946479fc02", "account_pnl_percent": -1.2505},
{"trade_id": "mt_8e2f32e13b7d", "account_pnl_percent": 3.7495},
{"trade_id": "mt_2c08a5026121", "account_pnl_percent": -1.2505},
{"trade_id": "mt_6fc5b766dab2", "account_pnl_percent": -1.2505},
{"trade_id": "mt_0a0e36bf5a99", "account_pnl_percent": -1.2505}
],
"active_thresholds": {
"drawdown_threshold_pct": 5.0,
"cumulative_pnl_threshold_pct": 6.2465
}
},
"update_time": "2026-01-12 19:41:31",
"daily_trade_count": 1,
"last_trade_date": "2026-01-20"
},
"EMA_PULLBACK": {
"is_active": true,
"strategy": "EMA_PULLBACK",
"symbol": "SUPERUSDT",
"interval": "15m",
"rr_ratio": 3.0,
"max_trades_day": 2,
"risk_percent": 0.0125,
"margin_multiplier": 20,
"pending_trade_timeout_hours": 1,
"resume_on_next_start": false,
"in_a_trade": true,
"Prems": {
"atr_lookback": 12,
"atr_multi": 1.4,
"ema20": 9,
"pullback_window": 9
},
"Last_backtest_result": {
"win_rate": 46.15,
"max_drawdown": 0.4,
"profit_factor": 2.88,
"oos_total_trades": 13,
"oos_wins": 6,
"oos_losses": 6
},
"Health_State": {
"current_equity": 1116.76,
"peak_equity": 1116.76,
"total_trades_closed": 3,
"recent_trades": [
{"trade_id": "mt_20bc0bf3ae5a", "account_pnl_percent": 3.7495},
{"trade_id": "mt_033ceda09c4a", "account_pnl_percent": 3.7495},
{"trade_id": "mt_15d7ff7097df", "account_pnl_percent": 3.7495}
],
"active_thresholds": {
"drawdown_threshold_pct": 5.0,
"cumulative_pnl_threshold_pct": 11.2485
}
},
"update_time": "2026-01-12 23:53:26",
"daily_trade_count": 0,
"last_trade_date": "2026-01-20",
"active_trade_snapshot": {
"trade_id": "mt_5390332b6bf3",
"message_id": 3410,
"side": "SELL",
"entry": 0.2016,
"stop": 0.2055498,
"take_profit": 0.1897506
}
}
This flexibility allows the system to run dozens of bots simultaneously, each fine-tuned for a specific asset. All configurations are managed from a single file with robust file-locking to prevent data corruption.
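A common building block for corruption-free config writes is an atomic replace: write to a temp file in the same directory, then rename it over the target, so readers see either the old file or the new one, never a half-written JSON. This is a sketch of that pattern (a lock such as `fcntl.flock` would additionally guard concurrent writers); it is my reconstruction, not the project's exact implementation:

```python
import json
import os
import tempfile

def atomic_write_json(path, data):
    """Write JSON to a temp file, fsync it, then os.replace() it over the
    target. The rename is atomic on POSIX filesystems."""
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=directory, suffix=".tmp")
    try:
        with os.fdopen(fd, "w") as f:
            json.dump(data, f, indent=2)
            f.flush()
            os.fsync(f.fileno())        # ensure bytes hit disk before rename
        os.replace(tmp, path)           # atomic swap: old or new, never half
    except BaseException:
        os.unlink(tmp)                  # clean up the temp file on failure
        raise
```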
# Automated Optimization and Self-Healing Bot Logics
Markets are constantly changing. A strategy that is effective today may need adjustment tomorrow. I built the system to monitor its own performance and self-correct automatically.
1. Self-Diagnosing Bots (The Health Check)
Every bot constantly checks its own "health." It doesn't just look at how much money it made; it looks at how it made it.
Here is the logic flow, which runs every minute. In configs.json, each bot tracks its stats in real time:
"Health_State": {
"current_equity": 1116.76,
"peak_equity": 1116.76,
"total_trades_closed": 3,
"recent_trades": [
{"trade_id": "mt_20bc0bf3ae5a", "account_pnl_percent": 3.7495},
{"trade_id": "mt_033ceda09c4a", "account_pnl_percent": 3.7495},
{"trade_id": "mt_15d7ff7097df","account_pnl_percent": 3.7495}
],
"active_thresholds": {
"drawdown_threshold_pct": 5.0,
"cumulative_pnl_threshold_pct": 11.2485
}
}
If a bot reaches a drawdown threshold, it pauses trading and automatically initiates the optimization process.
Telegram Alert:
⚠️ Health Alert: EMA_PULLBACK My drawdown is -2.49%. I'm pausing trading and starting the optimizer.
🚨 BOT HALTED | Health Check Failed 🚨
Bot Name: EMA_PULLBACK
Reason: Drawdown [-2.49%] exceeded threshold [1.04%] after 7 trades
Total Trades: 7
Final Equity: $1061.93
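The core of the check is drawdown from peak equity compared against the bot's active threshold, using the Health_State fields shown above. A sketch with a function name of my own choosing:

```python
def check_health(health_state):
    """Compute drawdown from peak equity and compare it to the bot's
    active threshold. Returns (ok, drawdown_pct); ok=False means the bot
    should pause trading and start the optimizer."""
    equity = health_state["current_equity"]
    peak = health_state["peak_equity"]
    threshold = health_state["active_thresholds"]["drawdown_threshold_pct"]
    drawdown_pct = (peak - equity) / peak * 100 if peak else 0.0
    return drawdown_pct <= threshold, drawdown_pct
```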
2. Walk-Forward Optimization
When the optimizer starts, it doesn't just pick random numbers. It tries to find settings that will work in the future, not just the past.
People try to optimize by finding what worked best over the last 2 years. I don't do that. I use Walk-Forward Optimization.
- I take a chunk of data (in-sample data) and find the best settings (profit factor above 1.3-1.4).
- Then I test those settings on data the optimizer hasn't seen yet (out-of-sample data), requiring a profit factor above 1.2-1.3.
- If it makes money on the Test Data, then (and only then) do I trust it.
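The loop above can be sketched generically: rank parameter sets on the in-sample slice, then accept the best one that also clears the out-of-sample bar. `evaluate` here is a stand-in for a full backtest returning a profit factor, and the thresholds mirror the bands mentioned above; everything else is my reconstruction.

```python
def walk_forward(candles, oos_fraction, evaluate, param_grid,
                 min_is_pf=1.3, min_oos_pf=1.2):
    """Walk-forward selection sketch. `evaluate(candles, params)` returns
    a profit factor (a backtest in the real system). Returns the best
    in-sample parameter set that also passes out-of-sample, or None."""
    split = int(len(candles) * (1 - oos_fraction))
    in_sample, out_sample = candles[:split], candles[split:]

    # keep only parameter sets that clear the in-sample bar
    survivors = [p for p in param_grid if evaluate(in_sample, p) >= min_is_pf]
    # best in-sample first...
    survivors.sort(key=lambda p: evaluate(in_sample, p), reverse=True)
    # ...but trust nothing until it also makes money on unseen data
    for params in survivors:
        if evaluate(out_sample, params) >= min_oos_pf:
            return params
    return None
```

Note how an in-sample champion that fails out-of-sample is skipped in favor of a weaker in-sample set that generalizes; that is the whole point of the method.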
Real Log Output: You can see this happening in my logs. The system throws away tons of bad settings until it finds the one that works.
2026-01-15 12:29:58 - INFO - --- Optimization for Bot: [SWING_FIBO] | Strategy: SWING_FIBO | Timeframe: 15m ---
2026-01-15 12:30:00 - INFO - --- 171/175 coins available for optimization scan ---
2026-01-15 12:30:00 - INFO - --- Processing Batch 1/17 (10 randomly selected coins) ---
2026-01-15 12:30:12 - INFO - - Batch 1 | 01/10 | PROMISED >>> PF: 1.35 | WR: 32.35% | DD: 1.46% | SOPHUSDT
...
2026-01-15 12:31:47 - INFO - - Batch 1 | 09/10 | PROMISED >>> PF: 1.12 | WR: 27.91% | DD: 1.87% | AUSDT
2026-01-15 12:31:54 - INFO - - Batch 1 | 10/10 | DISCARDED >>> PF: 0.69 | WR: 19.27% | DD: 4.67% | LSKUSDT
2026-01-15 12:31:54 - INFO - === Found 3 promising coins. Starting Walk-Forward Analysis ---
2026-01-15 12:31:58 - INFO - === In-Sample SOPHUSDT | 8662 rows (90 days) | Out-of-Sample: 1322 rows (14 days)
2026-01-15 12:32:58 - INFO - - 010/384 | SOPHUSDT | Best in 10: PF: 0.75 | WR: 20.87% | Trades: 95 | DD: 4.20%
...
2026-01-15 13:02:54 - INFO - - 380/384 | SOPHUSDT | Best in 10: PF: 1.27 | WR: 30.14% | Trades: 73 | DD: 1.21%
2026-01-15 13:03:07 - INFO - - 384/384 | SOPHUSDT | Best in 04: PF: 1.18 | WR: 29.63% | Trades: 81 | DD: 2.06%
2026-01-15 13:03:07 - INFO - === Result for SOPHUSDT: PASSED In-Sample. Found 30 profitable sets.
2026-01-15 13:03:08 - INFO - - Champion Parameters for SOPHUSDT: {
"swing_lookback": 20,
"trend_ema": 50,
"fib_zone_start": 0.382,
"fib_zone_end": 0.618,
"atr_lookback": 20,
"atr_multi": 1.5
}
2026-01-15 13:03:15 - INFO - --- Final Combination & OOS Check for SWING_FIBO : SOPHUSDT ---
2026-01-15 13:03:15 - INFO - - [PASS] | SOPHUSDT | PF: 1.34 | TT: 85 | WR: 31.76% | DD: 1.27%
2026-01-15 13:03:21 - INFO - - [PASS] | SOPHUSDT | PF: 2.23 | TT: 16 | WR: 43.75% | DD: 0.66%
2026-01-15 13:03:21 - INFO - --- Updated 'SWING_FIBO' in configs.json with champion ---
2026-01-15 13:03:55 - INFO - - Telegram notification sent successfully.
2026-01-15 13:03:55 - INFO - --- Optimization process completed for bot: SWING_FIBO ---
3. Coin Selection
My optimizer also picks which coin to trade. I have a few simple rules for what coins make the cut:
- Market Cap > $3M: It needs to be big enough so I can buy and sell easily.
- History > 180 Days: I need enough data to see how it moves.
- Price < $2: I like cheaper coins. They tend to be more volatile (move more), which is good for my strategy, and it's easier for people with small accounts to trade them too.
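The three listing rules collapse into a single predicate the optimizer can run over the coin universe. The `coin` field names below are hypothetical; only the thresholds come from the text.

```python
def passes_coin_filter(coin, min_market_cap=3_000_000,
                       min_history_days=180, max_price=2.0):
    """Coin selection rules: liquid enough, enough history, cheap enough.
    `coin` is a dict with illustrative keys."""
    return (coin["market_cap"] > min_market_cap
            and coin["history_days"] > min_history_days
            and coin["price"] < max_price)
```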
4. Back in Action
Once the optimizer finds a winner, it automatically updates the config file and restarts the bot. I don't have to touch anything.
I just get a nice message on Telegram telling me everything is back to normal:
🟢 Optimization Complete
Old Coin: SUPERUSDT (Removed)
New Coin: SOPHUSDT (Added)
New Results:
Win Rate: 43.75%
Profit Factor: 2.23
Status: Trading Resumed.
🚀 SWING_FIBO Walk-Forward PASS
🔹 Coin: SOPHUSDT
🔹 In-Sample Results (90 days):
✦ Win Rate: 31.76%
✦ Drawdown: 1.27%
✦ Profit Factor: 1.34
✦ Trades: 85
🔹 Out-of-Sample Results (14 days):
✦ Win Rate: 43.75%
✦ Drawdown: 0.66%
✦ Profit Factor: 2.23
✦ Trades: 16
Consensus Parameters:
{
"swing_lookback": 20,
"trend_ema": 50,
"fib_zone_start": 0.382,
"fib_zone_end": 0.618,
"atr_lookback": 20,
"atr_multi": 1.5
}
This autonomous system ensures that trading strategies stay aligned with current market conditions without manual intervention.
# Multi-User Management: From "Just Me" to "Everyone"
Originally, this system wasn't meant for the public.
- V1 was slow and clunky.
- V2 was fast but only worked for my account.
- V3 is what we have now: Multi-user, sub-second execution, and fully scalable.
It started when a friend asked, "Bro this is good, can I use this on my API too?" I didn't want to just send him signals to trade manually. I wanted to hook his Binance account directly into my brain. But I realized my V2 bot couldn't handle two accounts at once; the logic wasn't built for that.
So I rebuilt the core to be a Node-Based System.
1. The Business Model
I established a Performance-Driven Business Model based on a 20% profit share. I only succeed when the users succeed, ensuring my interests are perfectly aligned with theirs.
2. Node Architecture (The "100 User" Rule)
Why do we need "Nodes"? Why not just run one giant server? Because of Binance's limits: Binance restricts how many API calls you can make from a single IP address. If I put 1,000 users on one server, we'd get banned in 5 seconds.
So I created Nodes.
- A "Node" is a lightweight server that manages exactly 100 users.
- It has its own IP address.
- It has its own connection to Binance.
- When I need more users, I just spin up a new Node.
When the Brain says "Buy Bitcoin," it shouts it to the ZMQ Broker. The Message Broker instantly shoots it to every Node. Each Node executes trades for its 100 users in parallel. Result: 1,000 trades executed in under 1 second.
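The sharding rule is simple enough to sketch: split the user list into groups of at most 100, one group per Node. Function and variable names are mine.

```python
def assign_nodes(user_ids, users_per_node=100):
    """Shard users into Nodes of at most `users_per_node`. Each shard
    gets its own server, IP, and Binance connection; scaling up is just
    appending a new shard."""
    return [user_ids[i:i + users_per_node]
            for i in range(0, len(user_ids), users_per_node)]
```

When a signal arrives, each Node fans it out to its own users in parallel, which is how 1,000 trades can complete in under a second.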
3. Security: Advanced Protection
Managing user API keys is a significant responsibility. The system uses a multi-layered security approach:
- Encryption: Keys are encrypted with a Master Key before they touch the database.
- Decryption: They are only decrypted inside the Node's memory for the split second needed to sign a request.
- IP Whitelisting: This is the big one. Users must whitelist my Node's IP address, which is also what Binance requires to enable futures trading on an API key. Even if someone stole the keys, they couldn't trade from a different computer.
- No open ports: Because the system operates through Telegram as its interface, I don't need to expose ports like 80 or 443.
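The encrypt-at-rest, decrypt-in-memory flow maps directly onto the `cryptography` library's Fernet recipe, which the stack section lists. A minimal sketch; in production the master key would be loaded from the environment, never generated or stored in code:

```python
from cryptography.fernet import Fernet

# Illustrative only: a real deployment loads this from a secret store/env.
MASTER_KEY = Fernet.generate_key()

def encrypt_api_key(api_key: str) -> bytes:
    """Encrypt before the key ever touches the database."""
    return Fernet(MASTER_KEY).encrypt(api_key.encode())

def decrypt_api_key(token: bytes) -> str:
    """Decrypt only in the Node's memory, just long enough to sign a request."""
    return Fernet(MASTER_KEY).decrypt(token).decode()
```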
4. Registration & Validation Flow
When a new user joins via Telegram, we don't just "trust" them. We run a full background check on their keys.
If the user didn't set the IP correctly, the Node catches it immediately and tells them exactly what's wrong.
5. Reliability: The Heartbeat
How do I know if a Node crashes? Every Node has to "ping" the database every minute. It's a heartbeat.
If a Node misses two pings, I get a massive alert on my phone. I can fix it before users even notice they missed a trade.
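The two-missed-pings rule can be sketched as a timestamp table plus a cutoff check; class and method names are my own.

```python
import time

class HeartbeatMonitor:
    """Nodes ping once per interval; a Node is flagged dead after missing
    two consecutive pings (the rule described above)."""

    def __init__(self, interval_s=60, missed_limit=2):
        self.interval_s = interval_s
        self.missed_limit = missed_limit
        self._last_ping = {}

    def ping(self, node_id, now=None):
        self._last_ping[node_id] = time.time() if now is None else now

    def dead_nodes(self, now=None):
        """Nodes whose last ping is older than missed_limit intervals."""
        now = time.time() if now is None else now
        cutoff = self.interval_s * self.missed_limit
        return [n for n, t in self._last_ping.items() if now - t > cutoff]
```

A watchdog calling `dead_nodes()` each minute is what fires the phone alert before users notice a missed trade.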
This architecture lets me scale from 1 friend to 1,000 users without changing a single line of code. I just add more Nodes.
# The Evolution: What This Project Taught Me
This project has been a major learning experience, taking my skills from basic programming to advanced systems engineering. It marks my journey from writing simple scripts to building high-performance, production-ready systems.
1. The Python Deep Dive
I learned that Python is incredibly powerful, but also very easy to mess up if you're lazy. I went from single-file scripts to a multi-module architecture. The most important technical lesson? In trading, if you're iterating over a DataFrame with a for-loop, you've already lost. I learned to think in vectors with pandas and numpy. I discovered that there are always five ways to solve one problem, but only one of them won't crash your server when things get intense.
2. Scalability & System Architecture
Building for myself was easy. Building for 100+ strangers is an art. I had to learn:
- Database Design: How to write queries that don't bottleneck when bots update their balance at the same time.
- Connection Management: Why pooling matters and how to keep a database alive under constant stress.
- Scalability at the Core: Realizing that if a design can't handle 10x the current load, it's a bad design.
3. The Millisecond War: Async vs. Sync
I learned what a millisecond actually feels like. In high-frequency trading, 100ms is the difference between a "perfect entry" and a "slippage nightmare," and I'm not even doing HFT. I had to master asyncio for handling thousands of WebSocket packets while keeping order execution synchronous and reliable. Balancing these two worlds was one of the hardest things I've ever coded.
4. Security: Trust, but Encrypt
Handling other people's API keys is a massive responsibility. I learned that nothing is 100% secure, so your job is to make it as hard as possible for things to go wrong. I dove into Fernet encryption, IP whitelisting, and risk reduction protocols. I learned that you don't just build a wall; you build a system that knows what to do if the wall is breached.
5. Infrastructure: The Invisible Glue
I used to hate writing logs. Now, I love them. Logging is the only reason I can sleep at night. I learned:
- Environment Management: Keeping secrets out of the code.
- Config Management: Using atomic file updates to prevent configs.json from ever getting corrupted.
- Module Structure: How to organize a project so it doesn't become a "spaghetti" monster after three months of updates.
6. Money, Risk, and the Mental Game
This wasn't just a coding challenge; it was a finance and psychology lesson. I learned the brutal reality of Risk vs. Reward. I learned that discipline is the only thing that separates a trader from a gambler. The "emotional rollercoaster" of watching a system manage real money is real. I built the bot to remove my emotions, but the process of building it taught me how to manage them.
7. The Journey Isn't Over
I'm still learning. This project is far from finished. Every time I think it's perfect, the market changes or I find a better way to optimize. I went from school-level basics to building a distributed, self-healing trading infrastructure. This project didn't just teach me Python; it taught me how to build systems.
And I’m just getting started.
# The Horizon: Scale and Future Growth
This system is now a solid, working infrastructure. The core engineering is done, so I can focus on making it better and bigger.
Project's Future: Infrastructure & DevOps
This project shows I can build automated systems that work in real markets. The next step is all about making the infrastructure smarter and more scalable.
Strategy & Backtest Improvements
- Better Strategies: Keep improving what I have and build new ones based on how the market changes
- Faster Backtesting: Make the optimization engine faster so it finds good parameters in minutes instead of hours
- Performance Dashboards: Build real-time dashboards to see how strategies are doing
DevOps & Infrastructure Scaling
The current setup works great for dozens of users, but to handle hundreds or thousands, I need to automate the infrastructure. This is what I'm focusing on learning and building.
Containerization with Docker
- Put each module (Bot, Node, Telegram Bot, Service Connector) in containers so they deploy the same way every time
- Use Docker Compose for local development and testing
- Use container orchestration to manage multiple nodes
Infrastructure as Code with Terraform
- Automatically set up cloud resources (servers, databases, load balancers)
- Keep track of infrastructure versions and roll back if needed
- Auto-scale based on how many users are trading
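As a sketch of what that could look like in Terraform, the fragment below defines an auto-scaling group of Node instances. Provider, variable names, and instance sizes are placeholders, not real infrastructure:

```hcl
# main.tf -- illustrative sketch; all names and values are placeholders
resource "aws_launch_template" "trade_node" {
  name_prefix   = "trade-node-"
  image_id      = var.node_ami
  instance_type = "t3.small"
}

resource "aws_autoscaling_group" "trade_nodes" {
  min_size            = 1
  max_size            = 10
  desired_capacity    = 2
  vpc_zone_identifier = var.subnet_ids

  launch_template {
    id      = aws_launch_template.trade_node.id
    version = "$Latest"
  }

  # Scaling policies could key off a custom metric (active users per
  # Node) published by the Service Connector, rather than raw CPU.
}
```

Because the state file records what exists, rolling back is a matter of reverting the config and re-applying.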
Auto-Scaling Architecture
- Automatically start new Node instances when one gets full
- Spread signals and user requests across multiple instances
- Watch health and replace failed instances automatically
- Update without stopping trading (zero downtime)
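The health-watching and load-spreading logic above boils down to two small decisions: which nodes are dead, and where the next user goes. A minimal sketch, with hypothetical helper names and a heartbeat model I'm assuming for illustration:

```python
import time

HEARTBEAT_TIMEOUT = 30  # seconds of silence before a node counts as dead

def dead_nodes(heartbeats, now=None, timeout=HEARTBEAT_TIMEOUT):
    """Return node ids whose last heartbeat is older than `timeout`.

    `heartbeats` maps node id -> unix timestamp of the last heartbeat
    the Service Connector received from that node.
    """
    now = time.time() if now is None else now
    return [node for node, last in heartbeats.items() if now - last > timeout]

def pick_node(loads):
    """Least-loaded routing: send the next user to the node with the
    fewest active users. `loads` maps node id -> active user count."""
    return min(loads, key=loads.get)
```

A supervisor loop would call `dead_nodes` on a timer, replace anything it returns, and route new signals through `pick_node`; that is the whole self-healing story in miniature.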
CI/CD Pipeline Optimization
I already have a CI/CD pipeline, but I need to optimize it for the new containerized setup:
- Container Registry: Automatically build and push Docker images when code changes
- Multi-Stage Builds: Make images smaller so they deploy faster
- Environment-Specific Deployments: Separate pipelines for dev, staging, and production
- Rollback: Go back to the previous version quickly if something breaks
- Automated Testing: Test containerized components better
- Infrastructure Integration: CI/CD triggers Terraform when infrastructure needs to change
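The container-registry step of that pipeline could look roughly like this GitHub Actions fragment. Workflow name, registry, and image tags are placeholders for whatever the project actually uses:

```yaml
# .github/workflows/build.yml -- hypothetical pipeline sketch
name: build-and-push
on:
  push:
    branches: [main]
jobs:
  image:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: docker/login-action@v3
        with:
          username: ${{ secrets.REGISTRY_USER }}
          password: ${{ secrets.REGISTRY_TOKEN }}
      - uses: docker/build-push-action@v6
        with:
          push: true
          tags: myregistry/bot-engine:${{ github.sha }}
```

Tagging images with the commit SHA is what makes fast rollback possible: redeploying the previous version is just pointing the deployment at the previous tag.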
Learning Roadmap
This is a big learning curve, but I need to do it:
- Docker: Build optimized images, multi-stage builds, and orchestration
- Terraform: Write reusable modules, manage state, and build infrastructure patterns
- Cloud Architecture: Understand auto-scaling, load balancers, and distributed systems
- Monitoring: Set up Prometheus or Grafana to watch system health and performance
The Vision
Turn this from a manually deployed system into a self-healing, auto-scaling platform that can handle 1,000+ users with minimal manual work. When a Node gets full, a new one starts automatically. When something fails, it gets replaced without me touching it. When trading volume spikes, resources scale up to handle it.
The code is running. The discipline is automated. The infrastructure evolution begins.
Bot's Future: Strategy & Trading Philosophy
Because the foundation is solid, I can plug in new strategies without rewriting code. Right now I'm running 4 bots with 1.25% risk per trade and 2 trades per day. The architecture supports 10+ bots at lower risk, or high-frequency approaches. I'm only limited by strategy validation and math, not by code.
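"Plug in new strategies without rewriting code" usually means the engine only talks to strategies through a fixed interface. A minimal sketch of what such a contract could look like (the class names and candle format here are assumptions for illustration, not the system's actual API):

```python
from abc import ABC, abstractmethod

class Strategy(ABC):
    """Plug-in contract: the engine calls `signal()` on each new candle
    and sizes the position from `risk_per_trade`."""
    risk_per_trade = 0.0125  # 1.25% of equity per trade

    @abstractmethod
    def signal(self, candles):
        """Return 'long', 'short', or None for the latest candle."""

class BreakoutStrategy(Strategy):
    """Toy example: go long when the latest close breaks the prior high."""
    def signal(self, candles):
        closes = [c["close"] for c in candles]
        if len(closes) >= 2 and closes[-1] > max(closes[:-1]):
            return "long"
        return None
```

Under a contract like this, adding a fifth or tenth bot is just another subclass; risk and execution stay in the engine, which is why the limit becomes strategy validation rather than code.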
A Warning to the "Get Rich Quick" Crowd
If you're looking at this thinking it's a "money printer" that makes you a millionaire overnight, I'll tell you straight: Learn the market. Trading isn't for the weak, and it's also not a scam. I learned that the hard way by losing my own money.
If anyone comes to me looking for a "magic button," I'll tell them to their face to go find a fixed deposit. This system is built to be better than a bank, not to be a lottery ticket. It's about consistency, not miracles.
The Reality of the Game
The market is always changing. I just came off a 2-month losing streak after a 1-month winning run. That's the game. If I had traded those two months manually, I might have blown up my account. But I didn't. I removed my emotions and gave the keys to the bot. While others are yelling at their screens and revenge-trading, I'm out living my life because I know my risk is capped and my system is disciplined.
# Final Thoughts
Looking back, this project has been a massive personal achievement for me. I built every part of this system—from the core trading logic to the entire network infrastructure—entirely on my own as a solo developer. It wasn't just about making a bot trade; it was about creating a complete, high-performance ecosystem that handles real users and real market conditions 24/7. To get this running, I had to master complex challenges like real-time data management, secure encryption, and building a system that can heal itself without me being there. Even though there are much larger platforms out there, knowing that I designed and built this entire system from top to bottom gives me great confidence in my ability to build complex software. This project is proof that I can take a difficult idea and turn it into a professional, production-ready solution that actually works in the real world.