CI will fail if coverage drops compared to the base branch (enforced by Codecov).
Pull Requests will receive automated coverage comments from the Codecov bot when Codecov integration is enabled for your repository.
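A gate of this kind is typically declared in a `codecov.yml` at the repository root. The snippet below is an illustrative sketch of such a configuration, not necessarily this repository's actual file:

```yaml
coverage:
  status:
    project:
      default:
        target: auto    # compare against the base branch
        threshold: 0%   # fail the check on any coverage drop
```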
A framework for generating passive income by using a team of AI agents to build niche software and AI bots for customers.
- AI Agent Team: Utilize a team of AI agents to generate niche software and AI bots
- CrewAI Integration: Use CrewAI to create and manage AI agent teams
- CopilotKit Integration: Add AI copilot features to the React frontend
- Model Context Protocol (MCP) Support: Connect to various AI providers through a unified interface
- Note: As of May 2025, the unused `mcp-use` dependency has been removed; full MCP functionality is retained via the `modelcontextprotocol` package
- mem0 Memory Integration: Enhance agents with persistent memory capabilities
pAIssive Income is a modular, extensible platform for AI-powered content generation, market analysis, monetization, and automation. It combines advanced AI models, multi-agent orchestration, and robust APIs with a focus on developer experience and security.
- Getting Started: See docs/00_introduction/02_getting_started.md for installation and setup instructions.
All project documentation is now centralized in docs/:
- Project Overview
- Getting Started
- User Guide
- Developer Guide
- DevOps & CI/CD
- Security & Compliance
- SDKs & Integrations
- Tooling & Scripts
- Troubleshooting & FAQ
- Team & Collaboration
- Archive & Historical Notes
- Changelog
For a full directory map, see docs/00_introduction/03_project_structure.md.
A minimal demonstration script is provided to show how an agent can pick and use tools (such as a calculator or text analyzer) via a simple registry.
What the demo does:
- Instantiates an `ArtistAgent`
- Lists available tools
- Supports both example prompts and interactive mode
- Example mode: runs three prompts: one handled by the calculator, one by the text analyzer, and one unhandled
- Interactive mode: enter your own prompts in a loop
How to run:
```bash
# Example-based demo (default)
python scripts/artist_demo.py

# Interactive mode (enter prompts manually)
python scripts/artist_demo.py -i
```
Expected output:
- The list of available tools (calculator, text_analyzer, etc.)
- The prompt that is sent to the agent
- The agent's output (e.g., calculation result, analysis, or a message indicating no tool can handle the prompt)
Example (output will vary by implementation):
```text
=== ArtistAgent Tool Use Demo ===
Available tools:
- calculator
- text_analyzer
-----------------------------
Prompt: What is 12 * 8?
Agent output: 96
-----------------------------
Prompt: Analyze the sentiment of this phrase: 'This is a fantastic development!'
Agent output: Sentiment: positive | Words: 6 | Characters: 35 | Positive indicators: 1 | Negative indicators: 0
-----------------------------
Prompt: Translate hello to French
Agent output: No suitable tool found for this prompt.
```
Note:
The agent and tool registry are easily extensible: new tools can be added with minimal code changes, allowing the agent to handle more types of tasks.
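As a rough illustration of the registry-and-dispatch pattern the demo follows (the names below are hypothetical; the actual `ArtistAgent` and registry live in `scripts/artist_demo.py` and may differ):

```python
import re

def calculator(prompt: str) -> str:
    """Extremely naive arithmetic extraction, for demo purposes only."""
    match = re.search(r"(\d+)\s*([+\-*/])\s*(\d+)", prompt)
    if not match:
        return ""
    a, op, b = match.groups()
    ops = {"+": lambda x, y: x + y, "-": lambda x, y: x - y,
           "*": lambda x, y: x * y, "/": lambda x, y: x / y}
    return str(ops[op](int(a), int(b)))

# Registry maps a tool name to (predicate, tool). The predicate decides
# whether the tool can handle a given prompt.
TOOLS = {
    "calculator": (lambda p: any(c in p for c in "+-*/"), calculator),
}

def run_agent(prompt: str) -> str:
    for name, (can_handle, tool) in TOOLS.items():
        if can_handle(prompt):
            return tool(prompt)
    return "No suitable tool found for this prompt."

print(run_agent("What is 12 * 8?"))  # → 96
```

Adding a new capability then only requires registering one more (predicate, tool) pair, which is what keeps the design extensible.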
See Development Workflow for contribution guidelines and coding standards. Module-specific deep dives are in docs/02_developer_guide/06_module_deep_dives/.
- Security policy, reporting, and compliance: docs/04_security_and_compliance/01_security_overview.md
- Historical fixes and audit notes: docs/09_archive_and_notes/security_fixes_summaries.md
See LICENSE.
This project enforces at least 80% code coverage for JavaScript files using nyc (Istanbul) and Mocha.
Install dependencies (if not already done):

```bash
pnpm install
```

Run JavaScript tests and check coverage:

```bash
pnpm test
```
- If code coverage falls below 80%, the test run will fail.
- Coverage reports will be printed to the console, and an HTML report will be generated in the `coverage/` directory (if running locally).
To generate a detailed lcov report:

```bash
pnpm coverage
```
Coverage thresholds for statements, branches, functions, and lines are all set to 80%.
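One common way to express those thresholds is an `.nycrc` file at the project root. The snippet below is a sketch of that shape; this project may instead configure nyc through `package.json`:

```json
{
  "check-coverage": true,
  "statements": 80,
  "branches": 80,
  "functions": 80,
  "lines": 80,
  "reporter": ["text", "html", "lcov"]
}
```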
You can find example JS source and tests in the `src/` directory.
For more complex JavaScript code, such as React components, you can write tests to verify rendering, user interaction, and state updates.
Example: Testing a React Component with Mocha and Enzyme
First, install additional test utilities:

```bash
pnpm add --save-dev enzyme enzyme-adapter-react-16 @wojtekmaj/enzyme-adapter-react-17
```
Example component (`src/Hello.js`):
```jsx
import React from 'react';

export function Hello({ name }) {
  return <div>Hello, {name}!</div>;
}
```
Example test (`src/Hello.test.js`):
```jsx
const React = require('react');
const { shallow, configure } = require('enzyme');
const Adapter = require('@wojtekmaj/enzyme-adapter-react-17');
const { Hello } = require('./Hello');

configure({ adapter: new Adapter() });

describe('Hello component', () => {
  it('renders the correct greeting', () => {
    const wrapper = shallow(<Hello name="World" />);
    if (!wrapper.text().includes('Hello, World!')) {
      throw new Error('Greeting not rendered correctly');
    }
  });
});
```
Tip: For React projects, Jest with React Testing Library is very popular and may offer a smoother setup for component and hook testing.
Best Practices:
- Test both rendering and user events/interactions.
- Mock API calls and external dependencies.
- Use coverage reports to identify untested code paths.
- Place tests next to source files or in a dedicated test directory.
This project uses Dependabot to keep dependencies up-to-date across all major ecosystems:
- JavaScript/Node (pnpm): updates to packages in `package.json`
- Python: updates to packages in `requirements-ci.txt` and `requirements_filtered.txt`
- Docker: updates to base images in `Dockerfile`, `main_Dockerfile`, and `ui/react_frontend/Dockerfile.dev`
- GitHub Actions: updates to workflow actions in `.github/workflows/`
How it works:
- Dependabot automatically opens pull requests for version updates on a weekly schedule.
- PRs are labeled by ecosystem (e.g., `dependencies`, `javascript`, `python`, `docker`, `github-actions`).
- Some dependencies (e.g., `react`, `flask`, `pytest`) are not updated automatically for major releases.
Maintainer action:
- Review Dependabot PRs promptly.
- Ensure CI/tests pass before merging.
- For major upgrades, review changelogs for breaking changes.
For more details, see `.github/dependabot.yml`.
Tip for maintainers: Periodically review and adjust the `.github/dependabot.yml` configuration (update schedules, ignored dependencies, PR limits) to ensure it fits the project's evolving needs.
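A configuration covering the behaviors described above might look like the following. This is an illustrative sketch, not the project's actual file:

```yaml
version: 2
updates:
  - package-ecosystem: "npm"      # also covers pnpm projects
    directory: "/"
    schedule:
      interval: "weekly"
    labels: ["dependencies", "javascript"]
    ignore:
      - dependency-name: "react"
        update-types: ["version-update:semver-major"]
  - package-ecosystem: "pip"
    directory: "/"
    schedule:
      interval: "weekly"
    labels: ["dependencies", "python"]
```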
For any questions, see the FAQ or open an issue.
Tip: To enable advanced build graph features (Compose BuildKit Bake), set `COMPOSE_BAKE=true` in your `.env` file. This requires Docker Compose v2.10+ and uses the BuildKit bake engine for improved build performance and caching.
For more details on the Docker Compose integration and Compose Bake, see DOCKER_COMPOSE.md.
This project includes integration with CrewAI and CopilotKit to enable powerful multi-agent AI features in the React frontend.
- Agentic Chat: Chat with AI copilots and call frontend tools
- Human-in-the-Loop: Collaborate with the AI, plan tasks, and decide actions interactively
- Agentic/Generative UI: Assign long-running tasks to agents and see real-time progress
To use the CrewAI + CopilotKit integration:
1. Install the required dependencies:

   ```bash
   # Backend (CrewAI)
   pip install '.[agents]'

   # Frontend (CopilotKit)
   cd ui/react_frontend
   npm install
   ```

2. Start the application:

   ```bash
   # Using Docker Compose
   docker compose up --build

   # Or manually
   python app.py
   ```
For more details on the CrewAI + CopilotKit integration, see:
- docs/CrewAI_CopilotKit_Integration.md - Main integration guide
- ui/react_frontend/CopilotKit_CrewAI.md - Frontend implementation details
- docs/examples/CrewAI_CopilotKit_Advanced_Examples.md - Advanced usage examples
This project includes integration with mem0, a memory layer for AI agents that enables persistent memory capabilities across conversations and sessions.
- Persistent Memory: Agents remember user preferences, past interactions, and important information
- Memory Search: Retrieve relevant memories based on context and queries
- Conversation Storage: Store entire conversations for future reference
- Memory-Enhanced Agents: Both ADK and CrewAI agents are enhanced with memory capabilities
To use the mem0 integration:
1. Install the required dependencies:

   ```bash
   pip install -r requirements.txt
   ```

2. Set your OpenAI API key (required by mem0):

   ```bash
   # Linux/macOS
   export OPENAI_API_KEY='your-api-key'

   # Windows (PowerShell)
   $env:OPENAI_API_KEY='your-api-key'

   # Windows (Command Prompt)
   set OPENAI_API_KEY=your-api-key
   ```

3. Use memory-enhanced agents in your code:

   ```python
   # For ADK agents
   from adk_demo.mem0_enhanced_adk_agents import MemoryEnhancedDataGathererAgent
   agent = MemoryEnhancedDataGathererAgent(name="DataGatherer", user_id="user123")

   # For CrewAI agents
   from agent_team.mem0_enhanced_agents import MemoryEnhancedCrewAIAgentTeam
   team = MemoryEnhancedCrewAIAgentTeam(user_id="user123")
   ```
For more details on the mem0 integration and best practices for combining mem0 with Retrieval-Augmented Generation (RAG):
- README_mem0_integration.md: Main integration guide (now includes best practices for mem0 + RAG)
- docs/mem0_rag_best_practices.md: Detailed guide on when and how to use mem0 and RAG, with examples
- docs/README_mem0.md: Overview of the mem0 investigation
- docs/mem0_core_apis.md: Documentation of mem0's core APIs
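Conceptually, the persistent-memory layer is a per-user store that is written to after each interaction and searched before the next one. The stdlib-only sketch below illustrates that idea with hypothetical names; it is not the mem0 API:

```python
from collections import defaultdict

class SimpleMemoryStore:
    """Toy stand-in for a persistent memory layer, keyed by user ID."""

    def __init__(self):
        self._memories = defaultdict(list)

    def add(self, user_id: str, text: str) -> None:
        # A real memory layer would embed and persist this; we just append.
        self._memories[user_id].append(text)

    def search(self, user_id: str, query: str) -> list:
        # Naive keyword match; a real layer would use vector similarity.
        words = query.lower().split()
        return [m for m in self._memories[user_id]
                if any(w in m.lower() for w in words)]

store = SimpleMemoryStore()
store.add("user123", "Prefers concise answers")
store.add("user123", "Working on a pricing model")
print(store.search("user123", "pricing"))  # → ['Working on a pricing model']
```

A memory-enhanced agent would call something like `search` before answering to recall context, and `add` afterwards to remember the exchange.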
For installation, setup, and usage, see our Getting Started Guide.
- Project Overview
- Getting Started
- User Guide
- Developer Guide
- API Reference
- Security & Compliance
- Troubleshooting & FAQ
- Contributing
- Changelog
For a full breakdown of directory structure and module deep dives, see docs/00_introduction/03_project_structure.md and docs/02_developer_guide/06_module_deep_dives/README.md.
See Security Policy and Security Overview.
- `ai_models/`: Model management and utilities
- `agent_team/`: CrewAI agent orchestration
- `api/`: API server and endpoints
- `docs/`: All documentation (user, developer, security, CI/CD, SDKs, etc.)
- `tests/`: Unit and integration tests
All development uses uv (Python) and pnpm (Node.js). See the Developer Workflow for guidelines, linting, and the contribution checklist.
- For common issues, see the FAQ.
- For in-depth troubleshooting, see docs/07_troubleshooting_and_faq/troubleshooting.md.
This project includes advanced tests and benchmarking for agentic reasoning and tool use:
- Agentic Reasoning Unit Tests: See `tests/test_artist_agent.py` for tests validating the ability of agents to select and use tools (such as the calculator) and to handle multi-step reasoning prompts.
- Benchmarking: See `tests/performance/test_artist_agent_benchmark.py` for automated benchmarking of the ARTIST agent against other agent frameworks. Benchmark results are written to `tests/performance/artist_agent_benchmark.md` after running the benchmark script.
These additions help ensure robust, measurable progress in agentic reasoning and tool integration in this codebase.
See LICENSE for license details.
- Updated `.uv.toml` configuration with improved cache management, timeout settings, and parallel installation support
- Enhanced GitHub workflow configurations for better cross-platform compatibility
- Improved uv virtual environment handling and dependency management