# AstrAI

A lightweight Transformer training & inference framework
## Features
- 🚀 High Performance: Optimized for both training and inference with efficient parallelization.
- 🔧 Flexible: Support for seq/sft/dpo/grpo training, customizable model architectures.
- 💡 Easy to Use: Simple API with comprehensive examples and demos.
- 📦 Lightweight: Minimal dependencies, easy to deploy.
- 🔬 Research‑Friendly: Modular design, easy to experiment with new ideas.
- 🤗 HuggingFace Integration: Compatible with HuggingFace models and datasets.
## Quick Start

### Installation
```shell
git clone https://github.com/ViperEkura/AstrAI.git
cd AstrAI
pip install -e .
```
For development dependencies:

```shell
pip install -e ".[dev]"
```
### Train a Model

```shell
python scripts/tools/train.py \
    --train_type=seq \
    --data_root_path=/path/to/dataset \
    --param_path=/path/to/param_path
```
### Generate Text

```shell
python scripts/tools/generate.py --param_path=/path/to/param_path
```
## Docker
Build and run with Docker (recommended for GPU environments):
```shell
# Build image
docker build -t astrai:latest .

# Run with GPU support
docker run --gpus all -it astrai:latest

# Run with specific GPUs
docker run --gpus '"device=0,1"' -it astrai:latest

# Run inference server
docker run --gpus all -p 8000:8000 astrai:latest \
    python -m scripts.tools.server --port 8000 --device cuda

# Run with volume mount for data
docker run --gpus all -v /path/to/data:/data -it astrai:latest
```
> **Note:** `--gpus all` is required for CUDA support. Without it, `torch.cuda.is_available()` will return `False`.
## Start HTTP Server
Start the inference server with OpenAI-compatible HTTP API:
```shell
python -m scripts.tools.server --port 8000 --device cuda
```
Make requests:
```shell
# Chat API (OpenAI compatible)
curl -X POST http://localhost:8000/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
        "messages": [{"role": "user", "content": "Hello"}],
        "max_tokens": 512
    }'

# Streaming response
curl -X POST http://localhost:8000/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
        "messages": [{"role": "user", "content": "Tell a story"}],
        "stream": true,
        "max_tokens": 500
    }'

# Health check
curl http://localhost:8000/health
```
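The endpoint can also be called from Python with only the standard library. The sketch below assumes the server follows the usual OpenAI response schema (`choices[0].message.content` for full replies, and `data:`-prefixed JSON chunks with `choices[0].delta.content` when streaming); request fields mirror the curl examples above, while response field names beyond those are assumptions.

```python
# Minimal Python client sketch for the OpenAI-compatible chat endpoint.
# Response/stream shapes assume the standard OpenAI schema.
import json
import urllib.request

API_URL = "http://localhost:8000/v1/chat/completions"

def build_payload(prompt, max_tokens=512, stream=False):
    """Build the JSON body for /v1/chat/completions."""
    return {
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        "stream": stream,
    }

def chat(prompt, max_tokens=512):
    """Send one non-streaming request and return the reply text."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(prompt, max_tokens)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return data["choices"][0]["message"]["content"]

def parse_stream_line(line):
    """Extract the text delta from one SSE line, or None for empty/[DONE]."""
    line = line.strip()
    if not line.startswith("data:"):
        return None
    chunk = line[len("data:"):].strip()
    if chunk == "[DONE]":
        return None
    delta = json.loads(chunk)["choices"][0].get("delta", {})
    return delta.get("content")
```

With the server running, `print(chat("Hello"))` sends a single request; for streaming, feed each response line through `parse_stream_line` and concatenate the non-`None` pieces.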
## Demo
Check out the demos in the scripts/demo/ folder:
```shell
# Download pre-processed data (required before running demos)
python scripts/demo/download.py

# Interactive streaming chat
python scripts/demo/stream_chat.py

# Batch generation
python scripts/demo/generate_batch.py

# Auto-regressive generation
python scripts/demo/generate_ar.py
```
Watch a video walkthrough on bilibili.
## Documentation
| Document | Description |
|---|---|
| Parameter Guide | Training & inference parameters |
| Design Document | Framework architecture & module design |
| Data Flow | Data processing pipeline details |
| Model Introduction | Model architecture & technical details |
## Contributing
We welcome contributions! Please see our Contributing Guidelines for details.
1. Fork the repository.
2. Create a feature branch.
3. Commit your changes.
4. Open a Pull Request.
For major changes, please open an issue first to discuss what you would like to change.
## Community
- GitHub Issues: Issue Tracker
- Discussions: GitHub Discussions
- HuggingFace: Model Hub
## License
This project is licensed under the GPL-3.0 License. See the LICENSE file for details.
A lightweight Transformer framework designed for both high performance and ease of use.