@autodev/codebase
A platform-agnostic code analysis library with semantic search capabilities and MCP (Model Context Protocol) server support. This library provides intelligent code indexing, vector-based semantic search, and can be integrated into various development tools and IDEs.
```bash
# Install Ollama (macOS)
brew install ollama

# Start Ollama service
ollama serve

# In a new terminal, pull the embedding model
ollama pull dengcao/Qwen3-Embedding-0.6B:Q8_0
```
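To confirm the pull succeeded, `ollama list` shows the models available locally:

```bash
# Optional: confirm the embedding model was pulled
ollama list | grep Qwen3-Embedding
```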
`ripgrep` is required for fast codebase indexing. Install it with:
```bash
# Install ripgrep (macOS)
brew install ripgrep

# Or on Ubuntu/Debian
sudo apt-get install ripgrep

# Or on Arch Linux
sudo pacman -S ripgrep
```
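ripgrep ships as the `rg` binary, so a quick version check verifies the install:

```bash
# Verify ripgrep is on your PATH
rg --version
```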
Start Qdrant using Docker:
```bash
# Start Qdrant container
docker run -p 6333:6333 -p 6334:6334 qdrant/qdrant
```
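Note that with the plain `docker run` above, the index data disappears when the container is removed. If you want collections to survive restarts, mounting a volume at Qdrant's standard storage path (per the Qdrant documentation) should work:

```bash
# Persist Qdrant data across container restarts
docker run -p 6333:6333 -p 6334:6334 \
  -v "$(pwd)/qdrant_storage:/qdrant/storage" \
  qdrant/qdrant
```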
Or download and run Qdrant directly:
```bash
# Download and run Qdrant
wget https://github.com/qdrant/qdrant/releases/latest/download/qdrant-x86_64-unknown-linux-gnu.tar.gz
tar -xzf qdrant-x86_64-unknown-linux-gnu.tar.gz
./qdrant
```
```bash
# Check Ollama
curl http://localhost:11434/api/tags

# Check Qdrant
curl http://localhost:6333/collections
```
```bash
npm install -g @autodev/codebase
```
Alternatively, you can install it locally:
```bash
git clone https://github.com/anrgct/autodev-codebase
cd autodev-codebase
npm install
npm run build
npm link
```
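Either way, confirm the `codebase` command is available before continuing:

```bash
# Confirm the CLI is on your PATH
codebase --help
```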
The CLI provides two main modes: an interactive indexing mode and a long-running MCP server mode.
```bash
# Basic usage: index your current folder as the codebase.
# Be cautious when running this command if you have a large number of files.
codebase

# With custom options
codebase --demo    # Create a local demo directory and test the indexing service; recommended for first-time setup
codebase --path=/my/project
codebase --path=/my/project --log-level=info
```
```bash
# Start long-running MCP server
cd /my/project
codebase mcp-server

# With custom configuration
codebase mcp-server --port=3001 --host=localhost
codebase mcp-server --path=/workspace --port=3002
```
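Once the server is running, a quick way to confirm it is up is the health endpoint it exposes (see the endpoint list below):

```bash
# The server should answer with a JSON status document
curl http://localhost:3001/health
```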
The library uses a layered configuration system, allowing you to customize settings at different levels. The priority order (highest to lowest) is:
1. CLI parameters (`--model`, `--ollama-url`, `--qdrant-url`, `--config`, etc.)
2. Project config file (`./autodev-config.json`)
3. Global config file (`~/.autodev-cache/autodev-config.json`)

Settings specified at a higher level override those at lower levels, letting you tailor behavior for your environment or project. For example, if the global config selects an Ollama model but you pass `--model="custom-model"` on the command line, the CLI value wins.
Config file locations:

- Global: `~/.autodev-cache/autodev-config.json`
- Project: `./autodev-config.json`
Create a global configuration file at `~/.autodev-cache/autodev-config.json`:
1{ 2 "isEnabled": true, 3 "embedder": { 4 "provider": "ollama", 5 "model": "dengcao/Qwen3-Embedding-0.6B:Q8_0", 6 "dimension": 1024, 7 "baseUrl": "http://localhost:11434" 8 }, 9 "qdrantUrl": "http://localhost:6333", 10 "qdrantApiKey": "your-api-key-if-needed", 11 "searchMinScore": 0.4 12}
Create a project-specific configuration file at `./autodev-config.json`:
1{ 2 "embedder": { 3 "provider": "openai-compatible", 4 "apiKey": "sk-xxxxx", 5 "baseUrl": "http://localhost:2302/v1", 6 "model": "openai/text-embedding-3-smallnpm", 7 "dimension": 1536, 8 }, 9 "qdrantUrl": "http://localhost:6334" 10}
| Option | Type | Description | Default |
|---|---|---|---|
| `isEnabled` | boolean | Enable/disable the code indexing feature | `true` |
| `embedder.provider` | string | Embedding provider (`ollama`, `openai`, `openai-compatible`) | `ollama` |
| `embedder.model` | string | Embedding model name | `dengcao/Qwen3-Embedding-0.6B:Q8_0` |
| `embedder.dimension` | number | Vector dimension size | `1024` |
| `embedder.baseUrl` | string | Provider API base URL | `http://localhost:11434` |
| `embedder.apiKey` | string | API key (for OpenAI/compatible providers) | - |
| `qdrantUrl` | string | Qdrant vector database URL | `http://localhost:6333` |
| `qdrantApiKey` | string | Qdrant API key (if authentication is enabled) | - |
| `searchMinScore` | number | Minimum similarity score for search results | `0.4` |
Note: The `isConfigured` field is calculated automatically from the completeness of your configuration and should not be set manually. The system determines whether the configuration is valid based on the required fields for your chosen provider.
```bash
# Use global config defaults
codebase

# Override model via CLI (highest priority)
codebase --model="custom-model"

# Use project config with CLI overrides
codebase --config=./my-config.json --qdrant-url=http://remote:6333
```
- `--path=<path>` - Workspace path (default: current directory)
- `--demo` - Create demo files in the workspace
- `--force` - Ignore the cache and force re-indexing
- `--ollama-url=<url>` - Ollama API URL (default: http://localhost:11434)
- `--qdrant-url=<url>` - Qdrant vector DB URL (default: http://localhost:6333)
- `--model=<model>` - Embedding model (default: nomic-embed-text)
- `--config=<path>` - Config file path
- `--storage=<path>` - Storage directory path
- `--cache=<path>` - Cache directory path
- `--log-level=<level>` - Log level: error|warn|info|debug (default: error)
- `--help, -h` - Show help

MCP server options:

- `--port=<port>` - HTTP server port (default: 3001)
- `--host=<host>` - HTTP server host (default: localhost)

Configure your IDE to connect to the MCP server:
1{ 2 "mcpServers": { 3 "codebase": { 4 "url": "http://localhost:3001/sse" 5 } 6 } 7}
For clients that do not support MCP over SSE, use the stdio adapter configuration instead:
1{ 2 "mcpServers": { 3 "codebase": { 4 "command": "codebase", 5 "args": [ 6 "stdio-adapter", 7 "--server-url=http://localhost:3001/sse" 8 ] 9 } 10 } 11}
Available endpoints:

- `http://localhost:3001` - Server status and configuration
- `http://localhost:3001/health` - JSON status endpoint
- `http://localhost:3001/sse` - SSE/HTTP MCP protocol endpoint

Available MCP tool:

- `search_codebase` - Semantic search through your codebase
  - Parameters: `query` (string), `limit` (number), `filters` (object)
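For reference, a standard MCP invocation of `search_codebase` uses a JSON-RPC `tools/call` message like the one printed below. This is illustrative only: a real client performs the MCP initialize handshake first, and the exact `filters` shape is not documented here:

```bash
# Illustrative JSON-RPC payload for an MCP tools/call request
cat <<'EOF'
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "search_codebase",
    "arguments": { "query": "where is the vector store initialized", "limit": 5 }
  }
}
EOF
```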
Development scripts:

```bash
# Development mode with demo files
npm run dev

# Build for production
npm run build

# Type checking
npm run type-check

# Run TUI demo
npm run demo-tui

# Start MCP server demo
npm run mcp-server
```
Mainstream Embedding Models Performance
| Model | Dimension | Avg Precision@3 | Avg Precision@5 | Good Queries (≥66.7%) | Failed Queries (0%) |
|---|---|---|---|---|---|
| siliconflow/Qwen/Qwen3-Embedding-8B | 4096 | 76.7% | 66.0% | 5/10 | 0/10 |
| siliconflow/Qwen/Qwen3-Embedding-4B | 2560 | 73.3% | 54.0% | 5/10 | 1/10 |
| voyage/voyage-code-3 | 1024 | 73.3% | 52.0% | 6/10 | 1/10 |
| siliconflow/Qwen/Qwen3-Embedding-0.6B | 1024 | 63.3% | 42.0% | 4/10 | 1/10 |
| morph-embedding-v2 | 1536 | 56.7% | 44.0% | 3/10 | 1/10 |
| openai/text-embedding-ada-002 | 1536 | 53.3% | 38.0% | 2/10 | 1/10 |
| voyage/voyage-3-large | 1024 | 53.3% | 42.0% | 3/10 | 2/10 |
| openai/text-embedding-3-large | 3072 | 46.7% | 38.0% | 1/10 | 3/10 |
| voyage/voyage-3.5 | 1024 | 43.3% | 38.0% | 1/10 | 2/10 |
| voyage/voyage-3.5-lite | 1024 | 36.7% | 28.0% | 1/10 | 2/10 |
| openai/text-embedding-3-small | 1536 | 33.3% | 28.0% | 1/10 | 4/10 |
| siliconflow/BAAI/bge-large-en-v1.5 | 1024 | 30.0% | 28.0% | 0/10 | 3/10 |
| siliconflow/Pro/BAAI/bge-m3 | 1024 | 26.7% | 24.0% | 0/10 | 2/10 |
| ollama/nomic-embed-text | 768 | 16.7% | 18.0% | 0/10 | 6/10 |
| siliconflow/netease-youdao/bce-embedding-base_v1 | 1024 | 13.3% | 16.0% | 0/10 | 6/10 |
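Reading these tables: Precision@k appears to be the fraction of relevant results among the top k hits, averaged over the 10 benchmark queries. A "good query" at k=3 is one where at least 2 of the top 3 results are relevant (2/3 ≈ 66.7%), and a "failed query" is one that returns no relevant results at all.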
Ollama-based Embedding Models Performance
| Model | Dimension | Precision@3 | Precision@5 | Good Queries (≥66.7%) | Failed Queries (0%) |
|---|---|---|---|---|---|
| ollama/dengcao/Qwen3-Embedding-4B:Q4_K_M | 2560 | 66.7% | 48.0% | 4/10 | 1/10 |
| ollama/dengcao/Qwen3-Embedding-0.6B:f16 | 1024 | 63.3% | 44.0% | 3/10 | 0/10 |
| ollama/dengcao/Qwen3-Embedding-0.6B:Q8_0 | 1024 | 63.3% | 44.0% | 3/10 | 0/10 |
| ollama/dengcao/Qwen3-Embedding-4B:Q8_0 | 2560 | 60.0% | 48.0% | 3/10 | 1/10 |
| lmstudio/taylor-jones/bge-code-v1-Q8_0-GGUF | 1536 | 60.0% | 54.0% | 4/10 | 1/10 |
| ollama/dengcao/Qwen3-Embedding-8B:Q4_K_M | 4096 | 56.7% | 42.0% | 2/10 | 2/10 |
| ollama/hf.co/nomic-ai/nomic-embed-code-GGUF:Q4_K_M | 3584 | 53.3% | 44.0% | 2/10 | 0/10 |
| ollama/bge-m3:f16 | 1024 | 26.7% | 24.0% | 0/10 | 2/10 |
| ollama/hf.co/nomic-ai/nomic-embed-text-v2-moe-GGUF:f16 | 768 | 26.7% | 20.0% | 0/10 | 2/10 |
| ollama/granite-embedding:278m-fp16 | 768 | 23.3% | 18.0% | 0/10 | 4/10 |
| ollama/unclemusclez/jina-embeddings-v2-base-code:f16 | 768 | 23.3% | 16.0% | 0/10 | 5/10 |
| lmstudio/awhiteside/CodeRankEmbed-Q8_0-GGUF | 768 | 23.3% | 16.0% | 0/10 | 5/10 |
| lmstudio/wsxiaoys/jina-embeddings-v2-base-code-Q8_0-GGUF | 768 | 23.3% | 16.0% | 0/10 | 5/10 |
| ollama/dengcao/Dmeta-embedding-zh:F16 | 768 | 20.0% | 20.0% | 0/10 | 6/10 |
| ollama/znbang/bge:small-en-v1.5-q8_0 | 384 | 16.7% | 16.0% | 0/10 | 6/10 |
| lmstudio/nomic-ai/nomic-embed-text-v1.5-GGUF@Q4_K_M | 768 | 16.7% | 14.0% | 0/10 | 6/10 |
| ollama/nomic-embed-text:f16 | 768 | 16.7% | 18.0% | 0/10 | 6/10 |
| ollama/snowflake-arctic-embed2:568m:f16 | 1024 | 16.7% | 18.0% | 0/10 | 5/10 |
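Note that the CLI's default embedding model, nomic-embed-text, sits near the bottom of both tables; for meaningful search results, configure one of the stronger models above, such as the dengcao/Qwen3-Embedding-0.6B:Q8_0 model used throughout the examples.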