Inspiration

Most tab managers are confined to tiny browser sidebars or dropdown menus, trapping your research in flat, ephemeral lists. We built TabNeuron because complex web research requires space and permanence. By pulling your browser tabs out onto a native desktop canvas, you gain an infinite 2D workspace to visually group information, analyze it with AI, and take true ownership of your browsing sessions.

What is TabNeuron?

TabNeuron is a standalone desktop application that connects to your browser via a lightweight extension. It maps your active browsing session onto an infinite visual canvas. You can organize tabs spatially, chat with the extracted page content using local or cloud AI, and build agent pipelines to automate web research.

Demonstration of TabNeuron workspace with browser tabs

Core Capabilities

  • Native Desktop Workspace: Break out of the browser UI. Manage tabs on an infinite 2D spatial canvas with drag, drop, and WYSIWYG grouping tools.
  • Data Ownership & Backups: Save your visual workspace states locally to disk. Close your browser, and your research session remains safely backed up and restorable.
  • Validated App ↔ Browser Sync: AI suggests semantic tab groupings, but you stay in control. Review your visual layout and click "Apply" to sync the structure back to your browser's tab groups.
  • Mindmap Visualization: Toggle a structured mindmap view to get a high-level overview of all nested groups and tabs.
  • Visual Previews & Metadata: Automatic scaled screenshots, titles, URLs, and OG-tags are extracted for every tab, giving you instant visual context without switching windows.
  • Multi-Tab Chat: Select multiple tabs on the canvas and ask the AI to synthesize, compare, or extract data across all of them simultaneously.
  • AI Agent Pipelines: Build no-code automated workflows where the output of one agent (e.g., extracting specs) becomes the input for the next (e.g., formatting a comparison table).
  • Built-in MCP Server: Native Model Context Protocol server giving your AI direct file system and web access:
    • File Operations: read_file, read_file_from_line, write_file, list_directory, create_directory, move_file, search_files, get_file_info, edit_file, get_filesystem_info
    • Web Content Tools: web_fetch_content, web_scrape_page, web_extract_links, web_search, web_get_metadata, web_save_snapshot
  • 4-Tier AI Memory System: Advanced memory architecture with short-term memory, conversation summaries, long-term memory (personas, facts, preferences), and document search. Each layer builds on the last for intelligent, context-aware responses.
  • Document OCR & Context: Extract text from PDFs, images, and web pages. Enhance agent knowledge with OCR-processed documents.
  • Advanced Model Management: Connect to multiple cloud, on-prem, and local LLM backends, including OpenAI, Mistral, Lemonade, Llamacpp, LM Studio, Ollama, and many other compatible services. The model manager lists all available models and lets you activate or deactivate models as needed for specific tasks.
    • Local and On-Prem Services: Lemonade, Llamacpp, LM Studio, Ollama and other self-hosted LLM solutions
    • Cloud Services: OpenAI, Mistral, Gemini and other cloud-based AI platforms
    • Flexible Configuration: Switch between different AI backends based on your needs, privacy requirements, and performance considerations
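Many of the backends listed above (Ollama, LM Studio, Llama.cpp server, and the cloud providers) expose OpenAI-compatible chat endpoints, so switching backends often comes down to swapping a base URL and key. A minimal sketch, assuming OpenAI-compatible APIs; the base URLs use the backends' well-known default ports, and the config structure itself is illustrative, not TabNeuron's actual implementation:

```python
# Illustrative backend registry; default ports for Ollama (11434) and
# LM Studio (1234) are the upstream defaults.
BACKENDS = {
    "ollama":    {"base_url": "http://localhost:11434/v1", "needs_key": False},
    "lm_studio": {"base_url": "http://localhost:1234/v1",  "needs_key": False},
    "openai":    {"base_url": "https://api.openai.com/v1", "needs_key": True},
}

def build_chat_request(backend, model, prompt, api_key=None):
    """Return (url, headers, payload) for an OpenAI-compatible chat call."""
    cfg = BACKENDS[backend]
    if cfg["needs_key"] and not api_key:
        raise ValueError(f"backend '{backend}' requires an API key")
    headers = {"Content-Type": "application/json"}
    if api_key:
        headers["Authorization"] = f"Bearer {api_key}"
    payload = {"model": model, "messages": [{"role": "user", "content": prompt}]}
    return f"{cfg['base_url']}/chat/completions", headers, payload
```

Only the URL and auth header change per backend; the request payload stays identical, which is what makes the model manager's activate/deactivate switching cheap.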
TabNeuron Visual Canvas Workspace

🔄 Visual AI Workspace ↔ Browser Sync

Your workspace and your browser stay aligned through a controlled bidirectional sync.

Workspace → Browser:

  • Organize tabs spatially on the desktop canvas or let the AI cluster them semantically.
  • Review the layout and click "Apply".
  • Your browser's native tab groups are instantly updated to match your visual workspace.

Browser → Workspace:

  • Create groups or open new tabs directly in your browser.
  • TabNeuron's lightweight polling API detects the changes.
  • The new tabs and groups are automatically pulled onto your desktop canvas.

Note: While our architecture is browser-agnostic, the current bridge extension only supports Chromium-based browsers (Chrome, Edge, Brave, etc.).

TabNeuron Mindmap - Visual representation of tab groups
Mindmap - Visual representation of tab groups and their relationships

AI Model Manager

TabNeuron is backend-agnostic. Choose the engine that fits your privacy needs and hardware capabilities:

  • Local-First & Offline (Privacy): Connect to Ollama, Llama.cpp, or use the built-in portable model (~806MB). Analyze your tabs entirely offline so your browsing data never leaves your machine. (Note: Running 8B+ models locally requires at least 16 GB RAM or 8 GB VRAM for smooth performance).
  • Cloud-Scale (Performance): Connect your API keys for OpenAI, Mistral, Gemini, or others. Recommended for complex, multi-tab reasoning tasks, highly accurate agent pipelines, and bypassing local hardware limits.
TabNeuron Model Manager Interface
Model Manager - Activate and manage AI models

Note: Cloud models provide the best performance for tab analysis and chat. Local or on-premise models work offline but may have reduced accuracy for complex tasks.

💬 Chat with Websites

Talk directly to your open tabs. TabNeuron extracts the DOM content and metadata from your selected tabs and feeds it into your chosen LLM context.

How it Works:

  1. Create an agent.
  2. Add any tabs to the agent's group.
  3. Open the context menu and select "Chat".
  4. Ask natural language questions to synthesize data across all selected sources.
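Conceptually, step 4 means the extracted text and metadata of every selected tab are concatenated into a single LLM prompt. A minimal sketch of that assembly, assuming hypothetical field names (`title`, `url`, `text`) for the extracted tab data:

```python
def build_multi_tab_prompt(tabs, question, max_chars_per_tab=4000):
    """Assemble one LLM prompt from several extracted tab contents."""
    sections = []
    for i, tab in enumerate(tabs, start=1):
        body = tab["text"][:max_chars_per_tab]  # truncate to control token use
        sections.append(f"[Tab {i}] {tab['title']} ({tab['url']})\n{body}")
    context = "\n\n".join(sections)
    return f"Use the following tab contents to answer.\n\n{context}\n\nQuestion: {question}"
```

Labeling each section with its title and URL lets the model attribute answers back to specific tabs when you ask it to compare sources.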

Use Cases:

  • "Compare the technical specifications across these 4 review tabs."
  • "Extract all Python code blocks from this tutorial and format them."
  • "Read these 5 news articles and summarize the overarching timeline."
  • "Summarize this article"

🔌 MCP Server Usage Guide

TabNeuron includes a built-in MCP (Model Context Protocol) server that enables advanced file operations and web content access through AI agents. To use these capabilities:

  1. Create an agent in the workspace
  2. Right-click on the agent title and select "Chat"
  3. Interact directly with all files, folders, and web content

File Operations

Common file operations you can perform in the agent chat:

  • "List the first 10 files in the current directory" (uses list_directory)
  • "Search all text files in the workspace" (uses search_files)
  • "Read the content of document.txt to summarize its contents" (uses read_file)
  • "Read lines 10-20 of large_log.txt to check for errors" (uses read_file_from_line)
  • "Move file report.docx to archive/report_backup.docx" (uses move_file)
  • "Get size and creation date of config.json" (uses get_file_info)
  • "Edit the third paragraph of essay.txt to improve clarity" (uses edit_file)
  • "Write the generated report to monthly_report.txt" (uses write_file)
  • "Create a new folder called 'Project_X' in the workspace" (uses create_directory)
  • "Show available storage space and supported operations" (uses get_filesystem_info)
  • "Summarize document.txt and create a markdown summary document" (uses read_file + write_file)

Web Content Tools

Built-in MCP web content tools give your AI direct web access. Combine file operations with web content for even more powerful agents:

  • web_fetch_content: Fetch and read content from any URL
  • web_scrape_page: Scrape full webpage content
  • web_extract_links: Extract all links from a webpage
  • web_search: Search the web for information
  • web_get_metadata: Get page metadata and SEO info
  • web_save_snapshot: Save webpage snapshots for later reference

Example queries:

  • "What's the weather in Berlin?"
  • "Find best wireless earbuds under 100€"
  • "Compare iPhone 16 vs Samsung S25"
  • "Check if website is down"
  • "Extract all product links from Amazon search"
  • "Fetch the latest news about AI and summarize"
  • "Compare prices from multiple online shops"

Combined workflows:

  • "Fetch product specs from the manufacturer website and save to a local file"
  • "Search for documentation, download PDFs, and organize into folders"
  • "Extract data from web sources, analyze with local files, and generate a report"

🧠 Advanced AI Memory & RAG System

TabNeuron introduces a 4-tier memory architecture combined with 4-tier RAG (Retrieval-Augmented Generation) for intelligent, context-aware responses that save tokens and deliver faster, targeted answers — with full user control.

4-Tier AI Memory System

Each memory layer builds on the last, creating a comprehensive memory system that makes your AI agents more intelligent over time:

  • Short-Term Memory: Active conversation context within the current chat session
  • Summaries: Automatic conversation compression to preserve key insights without token bloat
  • Long-Term Memory: Persistent storage for user personas, facts, preferences, and skills — automatically extracted from conversations
  • Document Search: RAG-based retrieval from uploaded documents and web pages for precise, context-rich answers
Long-Term Person Memory Management Interface
Long-Term Memory: Automatically extracted user facts, preferences, and persona data

Chat History & Conversation Management

All your past conversations are stored in a sidebar for easy access. Search, reload, and delete chats — nothing gets lost.

Chat History Sidebar Interface
Chat History Sidebar: Access all past conversations instantly
Chat Manager Interface
Chat Manager: View, search, and manage conversation history

RAG Retrieval

Startup is optimized for fast first retrieval, and the 4-tier RAG system ensures token-efficient, targeted answers:

  • Tier 1: Semantic
  • Tier 2: Lexical
  • Tier 3: Keyword
  • Tier 4: Full document context
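The four tiers can be read as a retrieval cascade: each tier is tried in order, and the system falls back to the full document context only when the cheaper tiers return nothing. A sketch under that assumption; the scoring functions are stand-ins (real tiers would use embeddings for semantic and BM25-style matching for lexical):

```python
def tiered_retrieve(query, docs, tiers, min_hits=1):
    """Run retrieval tiers in order; fall back until one returns enough hits."""
    for name, retrieve in tiers:
        hits = retrieve(query, docs)
        if len(hits) >= min_hits:
            return name, hits
    return "full", docs  # Tier 4: fall back to the full document context

def keyword_tier(query, docs):
    """Toy Tier-3 retriever: any shared word counts as a hit."""
    words = set(query.lower().split())
    return [d for d in docs if words & set(d.lower().split())]
```

The cascade is what makes answers token-efficient: most queries are satisfied by a narrow tier, and only the rare miss pays for full-document context.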
RAG Document Manager Interface
RAG Document Manager: Upload, view, and manage documents for AI context retrieval

Key Benefits

  • Token Efficiency: Smart memory layering reduces token usage by serving only relevant context
  • Faster Responses: 40x speedup in RAG retrieval means near-instant answers
  • Full User Control: All data stored locally — you control what's remembered
  • NPU Acceleration: Uses the 🍋 Lemonade backend with integrated FastFlowLM for native NPU acceleration on Windows Ryzen systems. Lemonade is AMD's high-performance, C++-based AI server that integrates multiple backends, including FastFlowLM. Falls back to text-based retrieval if no NPU is available.
  • Multi-Language: 15 languages supported with auto-detection

🤖 AI Agents Pipeline

Automate repetitive web research by chaining AI agents together. Build pipelines where one agent's output is directly piped into the next agent's prompt.

Example Pipeline:

  1. Scraper Agent: Reads 10 selected product tabs and uses web_extract_links to find technical specs.
  2. Analyst Agent: Takes the outputted specs, compares them, and formats a Markdown table.
  3. Output Agent: Writes a final executive summary and saves it as a new, persistent text node on your canvas.
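The chaining idea in the example pipeline can be sketched in a few lines: each stage is a prompt template plus an agent, and each stage's output is substituted into the next stage's prompt. The lambda agents below are toys standing in for LLM-backed ones; none of this reflects TabNeuron's actual pipeline format:

```python
def run_pipeline(stages, initial_input):
    """stages: list of (prompt_template, agent_fn); '{input}' receives prior output."""
    data = initial_input
    for template, agent in stages:
        data = agent(template.format(input=data))
    return data

# Toy agents standing in for LLM-backed Scraper / Analyst stages:
extract = lambda prompt: "specs: 8GB RAM, 256GB SSD"
tabulate = lambda prompt: f"| Spec | Value |\n(derived from: {prompt})"

pipeline = [("Extract specs from: {input}", extract),
            ("Format as a Markdown table: {input}", tabulate)]
```

Because every stage has the same prompt-in/text-out shape, pipelines can be rearranged or extended without changing any individual agent.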

📄 Document OCR and Context Enhancement

Document OCR and Context Enhancement lets you enrich agents with specific knowledge by dragging and dropping documents directly onto them. After organizing tabs spatially, make documents accessible to agents by processing them through OCR, which converts them into contextual information. This is done via the context menu options 'Document Overview' and 'Process Documents,' and is particularly valuable for PDFs and image-based documents.

It's important to distinguish this from MCP server functionality. While the MCP server allows file interaction during chats, it currently lacks OCR capabilities and can only access text-based file content.

Document OCR Capabilities

TabNeuron includes powerful Optical Character Recognition (OCR) capabilities for processing various document types with support for common character encodings:

  • Text PDFs: Extract text from PDF documents (supports embedded text and OCR for scanned content)
  • Plain Text Files: Process .txt files with support for:
    • UTF-8 (recommended for full Unicode support)
    • Latin-1 (ISO-8859-1) as fallback encoding
  • Code Files: OCR support for source code files including:
    • Python (.py), C++ (.cpp), JavaScript (.js), Java (.java)
    • C# (.cs), PHP (.php), Ruby (.rb), Go (.go)
    • TypeScript (.ts), Swift (.swift), Kotlin (.kt)
    • And other common programming language files in UTF-8 or Latin-1 encoding
  • PDFs with Images: Built-in method for OCR processing of PDFs containing images

Requirements for PDF Image OCR

To enable OCR for PDFs with images, you need to install the official Tesseract OCR engine with default settings and ensure it's available in your system PATH. Tesseract is an open-source OCR engine that provides high-quality text extraction from images.

Download Tesseract from:

Encoding Support Notes

The application primarily uses UTF-8 encoding for document processing and falls back to Latin-1 (ISO-8859-1) when UTF-8 decoding fails. For optimal results, we recommend using UTF-8 encoding for your documents. This ensures the best compatibility with international characters and special symbols.
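The decode-with-fallback behavior described above can be expressed in a few lines. A minimal sketch of the general pattern, not TabNeuron's actual code; note that Latin-1 decoding can never fail, since every byte maps to a code point, which is why it works as a last-resort fallback:

```python
def decode_with_fallback(raw):
    """Decode bytes as UTF-8 first; fall back to Latin-1 (ISO-8859-1)."""
    try:
        return raw.decode("utf-8"), "utf-8"
    except UnicodeDecodeError:
        # Latin-1 maps every byte 0x00-0xFF to a code point, so this
        # always succeeds, though accented characters may differ from
        # the author's intent if the file used some other encoding.
        return raw.decode("latin-1"), "latin-1"
```

This is also why UTF-8 is the recommended source encoding: the fallback guarantees *some* text comes out, but only UTF-8 round-trips international characters reliably.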

📦 Browser Extension Setup

Because TabNeuron is a native desktop app, it uses a lightweight extension merely as a bridge to communicate with your browser via a local polling API.

Configuration Bridge:

  • Host: localhost
  • Port: 5555
  • Sync: Polling API (default 2s interval)
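The bridge's polling behavior amounts to a simple loop: every interval, fetch the current tab state from the local endpoint and react to diffs. A sketch assuming that shape; the `/tabs` path and the snapshot format are hypothetical, while the host, port, and 2 s interval come from the config above:

```python
import time

BRIDGE_URL = "http://localhost:5555/tabs"  # host/port from the config above

def poll_tabs(fetch_state, on_change, interval=2.0, rounds=3):
    """fetch_state() -> current tab snapshot; on_change(old, new) fires on diffs.

    fetch_state is injected so the loop can be tested without a live bridge;
    in practice it would GET BRIDGE_URL and parse the response.
    """
    last = fetch_state()
    for _ in range(rounds):
        time.sleep(interval)
        current = fetch_state()
        if current != last:
            on_change(last, current)
            last = current
```

Polling a localhost endpoint (rather than pushing from the extension) keeps the extension itself minimal, at the cost of up to one interval of latency before changes appear on the canvas.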

Need help? Visit our Chrome Extension Support Page for detailed installation guides, screenshots, and troubleshooting.

System Requirements

  • OS: Windows 11 (64-bit)
  • 🌐 Browser: Google Chrome (required for extension)
  • 🧠 AI Support: Built-in models or on-prem/cloud AI services
  • 💾 RAM: ≥ 4 GB
  • 💽 Storage: 2 GB minimum (app + model)
  • ⚡ Rights: Standard user

🚀 Get TabNeuron

Download the application, then install the Chrome extension.

MD5 checksum (v1.0.6):

$ md5sum.exe TabNeuron.exe
d82aa8a3fe52129e1118d14efa19fc96
Get it from Microsoft

After installation: Configure the browser extension and click Start. Then launch TabNeuron and select Organize Tabs to begin synchronization; the app displays the synchronization progress. Done! The extension connects to localhost:5555 (default port).

Support & Donations

Join our Discord community

Leave a review or star the project

The Visual AI Web Workspace

Support via PayPal or Bitcoin Cash

Bitcoin Cash (BCH)

bitcoincash:qrvhk77ujevd9n7jse4jewm99eg95at7tvc6m9v2vv

Bitcoin Cash QR Code

Discover our other tools and games:

  • 🚀 Spaceship - Retro Arcade Mini Game
  • 🧩 Sorana - Visual AI Workspace & Personal AI Agent
  • ⚡ RyzenZPilot - AMD Ryzen Power Management
  • 🤖 Aicono - AI Desktop Icon Organizer

New Spaceship - Retro Mini Game

Retro arcade 2D side-scrolling bullet-hell shmup game

Website

Spaceship

Featured on itch.io

New Spaceship - Retro Mini Game

Featured on IndieDB

New Spaceship

RyzenZPilot - AMD Ryzen Power Management Tool

RyzenZPilot is a powerful tool for managing AMD Ryzen processor power settings on Windows. It allows users to adjust CPU performance, power limits, and thermal configurations for optimal performance and efficiency.

Website

RyzenZPilot

Aicono - AI Intelligent Desktop Icon Autopilot

Aicono automatically organizes your cluttered Windows desktop using AI. Group icons intelligently, arrange them neatly!

Website

Aicono

Featured on Microsoft Store

Get it from Microsoft

Featured on Softpedia

Get it from Softpedia

Featured on AlternativeTo

Aicono - AI Intelligent Desktop Icon Autopilot

🧩 Sorana - The Visual AI Workspace:

Sorana is an AI-powered visual workspace that transforms how you organize and interact with digital files. Using semantic AI analysis, it automatically groups related files and folders onto a spatial 2D canvas, replacing traditional hierarchies with intuitive visual layouts. Build drag-and-drop workspaces and no-code agent pipelines, connect to on-prem or cloud AI backends (OpenAI, Mistral, Llamacpp, Lemonade, Ollama), and keep your data under your control.

Homepage:

Featured on Softpedia:

Get it from Softpedia

Featured on Microsoft Store:

Get it from Microsoft

↑ Back to Top