AI Model Performance
Sorana uses AI models to intelligently group files and folders based on semantic meaning and logical relationships. The built-in default model is lightweight and works offline, but it may not always classify files perfectly, especially less common file types. For the best results, we strongly recommend Llama 3.1 8b Instruct or a larger model. Processing time scales with the number of files in a folder: fewer files mean faster processing. The best overall performance is achieved with paid cloud services.
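Sorana performs this grouping internally, so the following is purely an illustration of the kind of request involved: a short Python sketch that builds a grouping prompt from a file listing. The function name, prompt wording, and example filenames are hypothetical and do not reflect Sorana's actual prompts or code.

```python
# Illustrative sketch only: Sorana builds its own prompts internally.
# Shows the general idea of asking an LLM to group files by meaning.

def build_grouping_prompt(filenames: list[str]) -> str:
    """Assemble a prompt asking the model to sort files into semantic groups."""
    listing = "\n".join(f"- {name}" for name in filenames)
    return (
        "Group the following files into folders based on their meaning.\n"
        "Reply with lines of the form 'filename -> folder'.\n\n"
        f"{listing}"
    )

if __name__ == "__main__":
    files = ["invoice_2024.pdf", "holiday_photo.jpg", "thesis_draft.docx"]
    print(build_grouping_prompt(files))
```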
Hardware Requirements
- Built-in Models: Range from 1B parameters (806 MB) to 20B parameters (12-16 GB)
- Recommended 8B Models (e.g., Llama 3.1 8b Instruct): Minimum 12 GB RAM or 8 GB VRAM for smooth operation
- Hardware Requirements Increase with Model Size: Models with more parameters need correspondingly higher specifications
- Cloud Models: No local hardware requirements (requires internet connection)
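To relate these numbers to a specific machine, the sketch below uses the third-party psutil package to compare total system RAM against the 12 GB guideline above. This is an illustrative check, not something Sorana performs, and it ignores VRAM, which would require vendor-specific tools to query.

```python
# Illustrative only: a rough check of whether local RAM meets the
# recommended threshold for an 8B model (12 GB, per the list above).
# Requires the third-party 'psutil' package (pip install psutil).
import psutil

RECOMMENDED_RAM_GB = 12  # guideline for 8B models such as Llama 3.1 8b Instruct

def can_run_8b_locally() -> bool:
    total_gb = psutil.virtual_memory().total / (1024 ** 3)
    return total_gb >= RECOMMENDED_RAM_GB

if __name__ == "__main__":
    if can_run_8b_locally():
        print("This machine meets the recommended RAM for an 8B model.")
    else:
        print("Consider the built-in portable model or a cloud backend instead.")
```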
Alternatives for Limited Hardware
If your system has limited hardware resources, you have two main options:
- Built-in Portable Model: The smallest built-in model (~806MB) is downloaded on first run and works fully offline. It is fast, but may classify complex files as "Miscellaneous"; for significantly better results, use Llama 3.1 8b Instruct or a larger model when your hardware allows it.
- Cloud-based LLMs: Connect to services like OpenAI or Mistral for high accuracy without local hardware costs
AI Integration
Sorana supports built-in models as well as connections to on-prem and cloud-based AI services:
- Built-in Models: Pre-configured models accessible through the model manager
- On-Prem Services: Lemonade, Llamacpp, Ollama, and other self-hosted LLM solutions
- Cloud Services: OpenAI, Mistral, and other cloud-based AI platforms (paid services provide the best performance)
- Configuration: Switch between different AI backends based on requirements (see the sketch after this list)
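Sorana manages these connections through its own settings, but the common pattern behind them is worth illustrating: OpenAI, Ollama, and llama.cpp's server all expose an OpenAI-compatible chat completions endpoint, so switching backends largely comes down to changing the base URL, model name, and API key. The sketch below is a hypothetical Python example with placeholder values; it is not Sorana's integration code.

```python
# Illustrative sketch, not Sorana's actual integration code.
# Many backends (OpenAI, Ollama, llama.cpp server) expose an
# OpenAI-compatible chat completions endpoint, so swapping backends
# mostly means changing the base URL, model name, and API key.
import requests

def classify_files(base_url: str, model: str, api_key: str, filenames: list[str]) -> str:
    prompt = "Group these files into folders by meaning:\n" + "\n".join(filenames)
    response = requests.post(
        f"{base_url}/v1/chat/completions",
        headers={"Authorization": f"Bearer {api_key}"},
        json={
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=120,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    # Local Ollama-style endpoint (placeholder values):
    print(classify_files("http://localhost:11434", "llama3.1:8b", "ollama", ["a.pdf", "b.jpg"]))
    # A cloud endpoint would only change the arguments, e.g.
    # classify_files("https://api.openai.com", "gpt-4o-mini", "YOUR_API_KEY", ["a.pdf", "b.jpg"])
```

With this pattern, pointing the same function at a local server (Ollama listens on port 11434 by default) or at a cloud API changes only the arguments, not the code.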
Note: Larger models generally provide better grouping but require more memory and processing power. Processing time is directly proportional to the number of files, so smaller folders finish faster. For the best performance, especially with large folders, we recommend paid cloud services.