Create cloud resources with natural language using AI-powered Terraform generation
- Natural language-based resource creation
- Support for AWS cloud resources (S3 buckets, EC2 instances, etc.)
- Local infrastructure development using LocalStack
- Component-based infrastructure management
- Interactive chat interface for cloud resources
- Support for multiple infrastructure components
- Self-healing infrastructure creation with automatic error fixing
- Python 3.10 or higher
- Required packages (to be installed via pip):
pip install infrabot
- Terraform installed:
brew install terraform
- AWS CLI configured:
aws configure
Note: make sure to configure the default region as well (see the example after this list).
- OpenAI API key:
export OPENAI_API_KEY='your_api_key_here'
- For local development:
docker pull localstack/localstack
docker run -d -p 4566:4566 localstack/localstack
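Two optional sanity checks for the prerequisites above: the AWS CLI can set the default region non-interactively, and LocalStack exposes a health endpoint you can query before initializing a local project (the region value is just an example):

```bash
# Set a default AWS region without the interactive prompt (us-east-1 is an example)
aws configure set region us-east-1

# Verify the LocalStack container is up before using --local
curl http://localhost:4566/_localstack/health
```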
Initialize a new project:
infrabot init [--verbose] [--local]
Create a new component:
infrabot component create --prompt "Your infrastructure description" --name component-name [--verbose] [--force] [--model MODEL_NAME] [--self-healing] [--max-attempts N] [--keep-on-failure]
Delete all components:
infrabot component delete [--force]
Destroy the infrastructure for all components:
infrabot component destroy [--force]
Edit a component:
infrabot component edit component-name
Chat about your infrastructure:
infrabot chat component-name
Check InfraBot version:
infrabot version
- Initialize a new project:
infrabot init
- Create a web server component with self-healing:
infrabot component create --prompt "Create an EC2 instance with nginx installed" --name web-server --self-healing
- Create a local S3 bucket for testing:
infrabot component create --prompt "Create an S3 bucket" --name test-bucket --local
- Create a database component with custom retry attempts:
infrabot component create --prompt "Set up an RDS instance for PostgreSQL" --name database --self-healing --max-attempts 5
- Chat about your infrastructure:
infrabot chat web-server
When you initialize a project, InfraBot creates a `.infrabot` directory with the following structure:
.infrabot/
└── default/
├── backend.tf
├── provider.tf
├── component1.tf
├── component2.tf
└── ...
Each component is stored as a separate Terraform file in the workspace directory.
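Since the components are ordinary Terraform files, you can inspect or validate the workspace with standard tooling. This is a sketch assuming the default workspace shown above is a regular Terraform working directory:

```bash
# List the generated component files in the default workspace
ls .infrabot/default/

# Validate the generated configuration with plain Terraform
terraform -chdir=.infrabot/default init -backend=false
terraform -chdir=.infrabot/default validate
```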
InfraBot includes a self-healing feature that automatically fixes Terraform errors during resource creation:
- Enable with the `--self-healing` flag
- Set maximum retry attempts with `--max-attempts N` (default: 3)
- Uses AI to analyze errors and fix configuration issues
- Maintains original infrastructure intent while resolving dependencies
- Shows detailed fix explanations for transparency
- Use `--keep-on-failure` to preserve generated Terraform files even when errors occur (useful for debugging)
Example with self-healing:
infrabot component create \
--prompt "Create a highly available EC2 setup with auto-scaling" \
--name ha-web \
--self-healing \
--max-attempts 5 \
--keep-on-failure
If Terraform encounters errors during plan or apply:
- InfraBot analyzes the error output
- AI suggests fixes while preserving the original intent
- Retries the operation with fixed configuration
- Continues until success or max attempts reached
- If `--keep-on-failure` is set, InfraBot preserves the generated Terraform files for inspection even if errors occur
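For example, a failed run with `--keep-on-failure` leaves the last generated configuration behind for inspection. The prompt is illustrative, and the `<component-name>.tf` path is an assumption based on the workspace layout above, not documented behavior:

```bash
infrabot component create \
    --prompt "Create a VPC with three private subnets" \
    --name private-net \
    --self-healing \
    --keep-on-failure

# If every attempt fails, inspect the preserved configuration
# (assumed naming: <component-name>.tf in the default workspace)
cat .infrabot/default/private-net.tf
```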
InfraBot supports observability and monitoring of AI interactions through Langfuse:
- Install Langfuse:
pip install langfuse
- Set up Langfuse credentials:
export LANGFUSE_PUBLIC_KEY='your_public_key'
export LANGFUSE_SECRET_KEY='your_secret_key'
- All AI interactions are automatically logged to your Langfuse dashboard
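If you use a self-hosted Langfuse instance or a non-default region, the Langfuse SDK also reads the host from an environment variable. This is standard SDK behavior rather than an InfraBot-specific setting, and the URL below is an example:

```bash
# Point the Langfuse SDK at a specific instance (defaults to Langfuse Cloud if unset)
export LANGFUSE_HOST='https://cloud.langfuse.com'
```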
InfraBot supports multiple AI models for infrastructure generation through LiteLLM integration. While OpenAI is the default provider, you can use other models by setting the appropriate API key and specifying the model:
Note: Even when using alternative models, the `OPENAI_API_KEY` environment variable is still required for certain auxiliary tasks within InfraBot.
export GROQ_API_KEY='your_api_key'
infrabot component create \
--name eks-cluster-1 \
--prompt "create an EKS cluster named MyKubernetesCluster" \
--self-healing \
--model "groq/deepseek-r1-distill-llama-70b"
export PERPLEXITY_API_KEY='your_api_key'
infrabot component create \
--name eks-cluster-1 \
--prompt "create an EKS cluster named MyKubernetesCluster" \
--self-healing \
--model "perplexity/sonar-pro"
Recommendation: use the `perplexity/sonar-pro` model for its enhanced factuality and accuracy in infrastructure generation.
The `--model` flag allows you to specify which model to use for infrastructure generation. Make sure to set the corresponding API key as an environment variable before running the command.
InfraBot supports all models available through LiteLLM (see LiteLLM Documentation), including but not limited to:
- OpenAI (default), for instance: `gpt-4o`, `o3-mini`
- Groq, for instance: `groq/deepseek-r1-distill-llama-70b`
- Perplexity, for instance: `perplexity/sonar-pro`
- Anthropic, for instance: `anthropic/claude-3-5-sonnet`
- Google VertexAI
- AWS Bedrock
- Azure OpenAI
- Hugging Face
- And many more
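For example, an Anthropic model from the list above can be used with the same pattern as the Groq and Perplexity examples (the prompt and component name here are illustrative):

```bash
export ANTHROPIC_API_KEY='your_api_key'
infrabot component create \
    --name web-server \
    --prompt "Create an EC2 instance with nginx installed" \
    --self-healing \
    --model "anthropic/claude-3-5-sonnet"
```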
Each provider requires its own API key to be set as an environment variable. Common examples:
- `OPENAI_API_KEY` for OpenAI models (required for all setups)
- `GROQ_API_KEY` for Groq models
- `PERPLEXITY_API_KEY` for Perplexity models
- `ANTHROPIC_API_KEY` for Anthropic models
- `AZURE_API_KEY` for Azure OpenAI models
Refer to the LiteLLM documentation for the complete list of supported models and their corresponding environment variables.