

A lightweight Transformer training & inference framework




Features

  • 🚀 High Performance: Optimized for both training and inference with efficient parallelization.
  • 🔧 Flexible: Supports seq/sft/dpo/grpo training and customizable model architectures.
  • 💡 Easy to Use: Simple API with comprehensive examples and demos.
  • 📦 Lightweight: Minimal dependencies, easy to deploy.
  • 🔬 Research-Friendly: Modular design, easy to experiment with new ideas.
  • 🤗 HuggingFace Integration: Compatible with HuggingFace models and datasets.

Quick Start

Installation

git clone https://github.com/ViperEkura/AstrAI.git
cd AstrAI
pip install -e .

For development dependencies:

pip install -e ".[dev]"

Train a Model

python scripts/tools/train.py \
  --train_type=seq \
  --data_root_path=/path/to/dataset \
  --param_path=/path/to/param_path

Generate Text

python scripts/tools/generate.py --param_path=/path/to/param_path

Docker

Build and run with Docker (recommended for GPU environments):

# Build image
docker build -t astrai:latest .

# Run with GPU support
docker run --gpus all -it astrai:latest

# Run with specific GPUs
docker run --gpus '"device=0,1"' -it astrai:latest

# Run inference server
docker run --gpus all -p 8000:8000 astrai:latest \
  python -m scripts.tools.server --port 8000 --device cuda

# Run with volume mount for data
docker run --gpus all -v /path/to/data:/data -it astrai:latest

Note: --gpus all is required for CUDA support. Without it, torch.cuda.is_available() will return False.

Start HTTP Server

Start the inference server with an OpenAI-compatible HTTP API:

python -m scripts.tools.server --port 8000 --device cuda
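The server may take a moment to load the model before it accepts requests. A minimal readiness poller, sketched with only the standard library, assuming the `/health` route returns HTTP 200 once the server is up:

```python
import time
import urllib.error
import urllib.request


def wait_for_server(url="http://localhost:8000/health", timeout=30.0):
    """Poll the health endpoint until it answers 200 or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass  # server not up yet; retry after a short pause
        time.sleep(0.5)
    return False
```

Call `wait_for_server()` before issuing requests; it returns False if the server never became ready within the timeout.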

Make requests:

# Chat API (OpenAI compatible)
curl -X POST http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [{"role": "user", "content": "Hello"}],
    "max_tokens": 512
  }'

# Streaming response
curl -X POST http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [{"role": "user", "content": "Tell a story"}],
    "stream": true,
    "max_tokens": 500
  }'

# Health check
curl http://localhost:8000/health
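The same requests can be made from Python with only the standard library. A minimal client sketch, assuming the server follows the usual OpenAI response schema (`choices[0].message.content` for regular replies, and `data:`-prefixed server-sent-event lines carrying `choices[0].delta` when `"stream": true`):

```python
import json
import urllib.request


def chat(messages, base_url="http://localhost:8000", max_tokens=512):
    """Send a non-streaming chat request and return the assistant's reply."""
    payload = {"messages": messages, "max_tokens": max_tokens}
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return extract_reply(json.load(resp))


def extract_reply(body):
    """Pull the assistant message out of an OpenAI-style response body."""
    return body["choices"][0]["message"]["content"]


def parse_sse_line(line):
    """Decode one server-sent-event line from a streaming response.

    Returns the text delta, or None for keep-alives and the [DONE] marker.
    """
    if not line.startswith("data: "):
        return None
    payload = line[len("data: "):].strip()
    if payload == "[DONE]":
        return None
    event = json.loads(payload)
    return event["choices"][0]["delta"].get("content")
```

For example, `print(chat([{"role": "user", "content": "Hello"}]))` mirrors the first curl request above; for streaming, feed each response line through `parse_sse_line` and concatenate the non-None deltas.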

Demo

Check out the demos in the scripts/demo/ folder:

# Download preprocessed data (required before running demos)
python scripts/demo/download.py

# Interactive streaming chat
python scripts/demo/stream_chat.py

# Batch generation
python scripts/demo/generate_batch.py

# Autoregressive generation
python scripts/demo/generate_ar.py

Watch a video walkthrough on bilibili.

Documentation

  • Parameter Guide: Training & inference parameters
  • Design Document: Framework architecture & module design
  • Data Flow: Data processing pipeline details
  • Model Introduction: Model architecture & technical details

Contributing

We welcome contributions! Please see our Contributing Guidelines for details.

  1. Fork the repository.
  2. Create a feature branch.
  3. Commit your changes.
  4. Open a Pull Request.

For major changes, please open an issue first to discuss what you would like to change.

Community

License

This project is licensed under the GPL-3.0 License.

