Usage Guide
To help developers and enterprise users make efficient use of AtomGit AI's model capabilities, this guide explains Token consumption, model types, Notebook/Space core-hour calculation, and related topics, so that you can clearly understand the platform's resource calculation rules and estimation methods.
I. Usage Methods Overview
AtomGit AI currently provides two types of resource usage methods:
- Token Usage: Applicable to model inference services such as text generation, image-to-text generation, text-to-image generation, sentence similarity, automatic speech recognition, etc.
- Core-Hour Usage: Applicable to scenarios requiring computing resources such as Notebook and Space.
Usage methods for different services are calculated independently and do not affect each other.
II. Token Usage Rules
Different model types consume different amounts of Tokens. The following are the Token estimation rules for common model types:
| Model Type | Chinese Token Estimation Rule | English Token Estimation Rule | Additional Notes |
|---|---|---|---|
| Text Generation | 1 Token ≈ 1.5–1.8 Chinese characters | 1 Token ≈ 4 English characters | Different models use different tokenization algorithms, so actual Token counts may vary slightly. |
| Image-to-Text | 1 Token ≈ 1.5–2 Chinese characters | 1 Token ≈ 4 English characters | Image Token is calculated based on resolution: e.g., 512×512≈334 Tokens. |
| Sentence Similarity | 1 Token ≈ 1.5–2 Chinese characters | 1 Token ≈ 4 English characters | Similarity result scores consume approximately 4 Tokens. |
| Automatic Speech Recognition | 50,000 Tokens / request | 50,000 Tokens / request | Each audio file smaller than 50 MB consumes 50,000 Tokens per successful transcription. |
| Text-to-Image | 50,000 Tokens / image | 50,000 Tokens / image | Different resolutions and quality settings may adjust Token consumption; subject to actual API call results. |
Note: The table data provides general estimation rules. Actual Token consumption is subject to the actual API response.
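As a rough illustration of the estimation rules above, the character-to-Token ratios can be wrapped in a small helper. This is only a sketch: the 1.65 characters-per-Token midpoint for Chinese and the 4 characters-per-Token ratio for English are assumptions drawn from the table, and each model's actual tokenizer will produce slightly different counts.

```python
def estimate_tokens(text: str) -> int:
    """Roughly estimate Token count for mixed Chinese/English text.

    Assumes ~1 Token per 1.65 Chinese characters and ~1 Token per
    4 other characters, per the estimation table; actual counts
    depend on each model's tokenization algorithm.
    """
    chinese = sum(1 for ch in text if '\u4e00' <= ch <= '\u9fff')
    other = len(text) - chinese
    return round(chinese / 1.65 + other / 4)

print(estimate_tokens("Hello, world"))  # → 3
```

Mixed Chinese-English input needs no special handling here: each character is simply counted under the ratio for its script, mirroring how the platform identifies content automatically.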
III. Core-Hour Calculation Rules (Notebook / Space)
Notebook and Space consume CPU resources and are metered by core-hours.
Core-Hour Calculation Formula:
Core-Hours = CPU Cores × Running Time (hours)
Notebook / Space Core-Hour Consumption Reference Table
| CPU Cores | 10 minutes | 30 minutes | 60 minutes | 120 minutes |
|---|---|---|---|---|
| 0.5 cores | 0.08 | 0.25 | 0.5 | 1 |
| 2 cores | 0.34 | 1 | 2 | 4 |
| 4 cores | 0.67 | 2 | 4 | 8 |
| 8 cores | 1.34 | 4 | 8 | 16 |
| 16 cores | 2.67 | 8 | 16 | 32 |
| 32 cores | 5.34 | 16 | 32 | 64 |
Notes:
- Notebook and Space follow the same usage logic.
- Actual deductions are calculated precisely from resource usage logs, which are authoritative.
- When multiple instances run concurrently, consumption is summed across all instances.
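The core-hour formula above can be sketched as a small helper. Note that this is an illustration, not the platform's billing code: the reference table rounds some fractional values slightly differently, and actual deductions follow the usage logs.

```python
def core_hours(cpu_cores: float, minutes: float) -> float:
    """Core-Hours = CPU Cores × Running Time (hours).

    Rounded to two decimals for display; the platform's exact
    rounding of fractional values may differ slightly from this.
    """
    return round(cpu_cores * minutes / 60, 2)

print(core_hours(4, 10))    # → 0.67
print(core_hours(8, 120))   # → 16.0
```

For concurrent instances, per the notes above, you would call this once per instance and sum the results.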
IV. Frequently Asked Questions
1. How is total Token consumption calculated?
Total Tokens = Input Tokens + Output Tokens
The longer the input and output, the more Tokens are consumed in total.
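In code, the total is a simple sum of the two counts returned in an API response (the field names here are hypothetical; check the actual API response for the real ones):

```python
def total_tokens(input_tokens: int, output_tokens: int) -> int:
    """Total Tokens = Input Tokens + Output Tokens."""
    return input_tokens + output_tokens

# e.g., a 120-Token prompt with a 380-Token completion:
print(total_tokens(120, 380))  # → 500
```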
2. How are mixed Chinese-English Tokens calculated?
The system automatically identifies based on the model’s tokenization strategy. Mixed Chinese-English content does not require manual distinction.
3. Why do image Tokens vary?
It depends on the image size and the resolution supported by the model. Generally, the higher the resolution, the more Tokens consumed.
This concludes the usage guide for AtomGit AI. Whether you are using inference APIs, model online experience, or running code in Notebook / Space, you can estimate resource consumption based on the table rules. We hope this document helps you better understand the calculation methods for Tokens and core-hours, making it easier for you to use the platform’s various models and services with confidence. If you have any questions, please feel free to contact us in the community at any time.