# Testing Best Practices for the Infactory SDK

Testing a client SDK like Infactory requires a multi-layered approach to ensure that both the Python library and the CLI work correctly. Here's a comprehensive testing strategy:

## 1. Unit Tests

### Python SDK Unit Tests

Unit tests should validate the individual components of your SDK without making real API calls:

- **Test models**: Ensure model serialization/deserialization works correctly
- **Test service classes**: Verify that API calls are constructed properly
- **Test client initialization**: Check that configuration loading works as expected
- **Test error handling**: Validate that API errors are caught and transformed properly

Use mock responses with a library like `unittest.mock` or `pytest-mock`; the examples below cover a typical service call and an error path:
| 17 | + |
| 18 | +```python |
| 19 | +def test_projects_list(mocker): |
| 20 | + # Mock the HTTP response |
| 21 | + mock_response = [{"id": "proj-123", "name": "Test Project", "team_id": "team-456"}] |
| 22 | + mock_get = mocker.patch("infactory_client.client.Client._get", return_value=mock_response) |
| 23 | + |
| 24 | + # Create client and call the method |
| 25 | + client = Client(api_key="test_key") |
| 26 | + projects = client.projects.list(team_id="team-456") |
| 27 | + |
| 28 | + # Assertions |
| 29 | + mock_get.assert_called_once_with("v1/projects", {"team_id": "team-456"}) |
| 30 | + assert len(projects) == 1 |
| 31 | + assert projects[0].id == "proj-123" |
| 32 | + assert projects[0].name == "Test Project" |
| 33 | +``` |
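
To exercise the error-handling path, have the mocked transport raise and assert that the SDK surfaces a useful exception. This is a minimal sketch; `infactory_client.errors.APIError` is a hypothetical name, so substitute whatever exception type the client actually raises:

```python
import pytest

from infactory_client.client import Client
from infactory_client.errors import APIError  # hypothetical module/exception name


def test_projects_list_error(mocker):
    # Simulate the low-level request failing with an API error
    mocker.patch(
        "infactory_client.client.Client._get",
        side_effect=APIError("404 Not Found"),
    )

    client = Client(api_key="test_key")

    # The SDK should propagate (or translate) the error rather than swallow it
    with pytest.raises(APIError):
        client.projects.list(team_id="team-456")
```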

### CLI Unit Tests

For the CLI, test that commands properly parse arguments and call the appropriate SDK methods:

```python
from unittest.mock import MagicMock

from infactory_cli import handle_projects_list  # assuming the handler lives in infactory_cli


def test_projects_list_command(mocker):
    # Build a fake client whose projects.list returns one mock project
    mock_project = MagicMock(id="proj-123")
    mock_project.name = "Test Project"  # name= in the constructor names the mock, so set the attribute explicitly
    mock_client = MagicMock()
    mock_client.projects.list.return_value = [mock_project]
    mocker.patch("infactory_cli.get_client", return_value=mock_client)

    # Call the CLI command handler directly with a parsed-arguments stand-in
    args = MagicMock(team_id="team-456")
    handle_projects_list(args)

    # Verify the handler delegated to the SDK with the right arguments
    mock_client.projects.list.assert_called_once_with(team_id="team-456")
```
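
If the handler prints a table of results, pytest's built-in `capsys` fixture lets you assert on that output as well. A small sketch, assuming the handler prints the project id (the exact column layout depends on how the CLI formats its output):

```python
from unittest.mock import MagicMock

from infactory_cli import handle_projects_list  # assuming the handler lives in infactory_cli


def test_projects_list_command_output(mocker, capsys):
    mock_client = MagicMock()
    mock_client.projects.list.return_value = [MagicMock(id="proj-123")]
    mocker.patch("infactory_cli.get_client", return_value=mock_client)

    handle_projects_list(MagicMock(team_id="team-456"))

    # Whatever the handler printed should at least mention the project id
    captured = capsys.readouterr()
    assert "proj-123" in captured.out
```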

## 2. Integration Tests

Integration tests validate that your SDK can properly interact with the API:

### Approach 1: Mock Server

Set up a mock server that mimics the Infactory API responses:

- Use tools like `responses`, `requests-mock`, or `pytest-httpx` to intercept HTTP requests
- Create fixtures with realistic API responses
- Test complete workflows through multiple API calls

```python
from infactory_client.client import Client


def test_create_and_publish_query_program(requests_mock):
    # The requests_mock fixture is provided by the requests-mock pytest plugin
    requests_mock.post(
        "https://api.infactory.ai/v1/queryprograms",
        json={"id": "qp-123", "name": "Test Query"},
    )
    requests_mock.patch(
        "https://api.infactory.ai/v1/queryprograms/qp-123/publish",
        json={"id": "qp-123", "published": True},
    )

    # Execute the workflow
    client = Client(api_key="test_key")
    query = client.query_programs.create(name="Test Query", dataline_id="dl-456", code="test code")
    published = client.query_programs.publish(query.id)

    # Assertions
    assert published.id == "qp-123"
    assert published.published is True
```
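
The same setup works for error paths: register a failing status code and assert that the client raises. A sketch that again assumes a hypothetical `APIError` exception type (substitute whatever the SDK actually raises):

```python
import pytest

from infactory_client.client import Client
from infactory_client.errors import APIError  # hypothetical module/exception name


def test_publish_missing_query_program(requests_mock):
    # Simulate the API returning 404 for an unknown query program
    requests_mock.patch(
        "https://api.infactory.ai/v1/queryprograms/qp-missing/publish",
        status_code=404,
        json={"detail": "Query program not found"},
    )

    client = Client(api_key="test_key")
    with pytest.raises(APIError):
        client.query_programs.publish("qp-missing")
```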

### Approach 2: VCR-style Tests

Record actual API responses and replay them in tests:

- Use `vcr.py` or `betamax` to record and replay HTTP interactions
- Run the tests against the actual API once, then replay the recordings on subsequent runs
- Get realistic responses without hitting the API repeatedly

```python
import vcr

from infactory_client.client import Client


@vcr.use_cassette("fixtures/vcr_cassettes/project_list.yaml")
def test_list_projects():
    client = Client(api_key="test_key")
    projects = client.projects.list(team_id="team-456")

    assert len(projects) > 0
    assert projects[0].id is not None
```
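
If you record cassettes against the real API, scrub credentials so they never end up in version control. `vcr.py` supports this through `filter_headers` (and `filter_query_parameters`); a minimal sketch, assuming the API key is sent in the `Authorization` header (adjust to whatever header the client actually uses):

```python
import vcr

from infactory_client.client import Client

# Shared recorder configuration: strip auth headers from recorded cassettes
my_vcr = vcr.VCR(
    cassette_library_dir="fixtures/vcr_cassettes",
    filter_headers=["Authorization"],
    record_mode="once",  # record on the first run, replay on later runs
)


@my_vcr.use_cassette("project_list.yaml")
def test_list_projects_scrubbed():
    client = Client(api_key="test_key")
    projects = client.projects.list(team_id="team-456")
    assert projects
```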

## 3. End-to-End (E2E) Tests

E2E tests validate complete user workflows against the actual API:

### Approach 1: Test Account

- Create a dedicated test account in the Infactory platform
- Run automated tests against this account with real API calls
- Test full workflows from start to finish

```python
import os

from infactory_client.client import Client


def test_e2e_datasource_workflow():
    # Use a test API key from an environment variable
    client = Client(api_key=os.environ.get("NF_TEST_API_KEY"))

    # Create a project
    project = client.projects.create(name="Test Project", team_id=os.environ.get("NF_TEST_TEAM_ID"))

    # Create a datasource
    datasource = client.datasources.create(name="Test DB", project_id=project.id, type="postgres")

    # List datasources
    datasources = client.datasources.list(project_id=project.id)

    # Assertions
    assert any(ds.id == datasource.id for ds in datasources)

    # Clean up
    client.datasources.delete(datasource.id)
    client.projects.delete(project.id)
```
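
A failing assertion above would skip the cleanup calls at the end of the test, so it is worth moving resource creation into a pytest fixture whose teardown always runs. A sketch built from the same calls:

```python
import os

import pytest

from infactory_client.client import Client


@pytest.fixture
def e2e_project():
    # Create a throwaway project and guarantee it gets deleted afterwards
    client = Client(api_key=os.environ.get("NF_TEST_API_KEY"))
    project = client.projects.create(name="Test Project", team_id=os.environ.get("NF_TEST_TEAM_ID"))
    yield client, project
    client.projects.delete(project.id)  # runs even if the test body raises


def test_datasource_listing(e2e_project):
    client, project = e2e_project
    datasource = client.datasources.create(name="Test DB", project_id=project.id, type="postgres")
    try:
        datasources = client.datasources.list(project_id=project.id)
        assert any(ds.id == datasource.id for ds in datasources)
    finally:
        client.datasources.delete(datasource.id)
```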

### Approach 2: CLI E2E Tests

Test the CLI commands against the actual API:

```python
import os
import subprocess


def test_cli_e2e():
    # Run CLI commands using subprocess
    result = subprocess.run(
        ["nf", "login", "--key", os.environ.get("NF_TEST_API_KEY")],
        capture_output=True, text=True,
    )
    assert result.returncode == 0
    assert "API key saved successfully" in result.stdout

    result = subprocess.run(
        ["nf", "projects", "list", "--team-id", os.environ.get("NF_TEST_TEAM_ID")],
        capture_output=True, text=True,
    )
    assert result.returncode == 0
    assert "ID" in result.stdout
```

## 4. Test Environment Setup

For comprehensive testing, set up:

1. **CI/CD Pipeline Integration**:
   - Run unit tests on every commit
   - Run integration tests on PRs
   - Run E2E tests on release branches

2. **Test Fixtures**:
   - Create reusable test data
   - Set up the environment for realistic workflows
   - Implement automatic cleanup after tests

3. **Testing Matrix** (see the sketch after this list):
   - Test across different Python versions (3.8, 3.9, 3.10, 3.11, 3.12)
   - Test on different operating systems (Windows, macOS, Linux)
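
One way to drive that matrix locally and in CI is `nox`, which runs the same test suite across several interpreters. A minimal sketch, assuming the project exposes a `test` extra for its test dependencies:

```python
# noxfile.py
import nox


@nox.session(python=["3.8", "3.9", "3.10", "3.11", "3.12"])
def tests(session):
    # Install the SDK plus its test dependencies (the ".[test]" extra is an assumption)
    session.install("-e", ".[test]")
    # Unit tests run everywhere; integration and E2E suites can live in separate sessions
    session.run("pytest", "tests/unit")
```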

## 5. Testing Recommendations

### When to Mock vs. Use Live Endpoints

- **Unit Tests**: Always use mocks
- **Integration Tests**: Use recorded responses or a mock server
- **E2E Tests**: Use a dedicated test account with live endpoints

### Best Practices

1. **Use a dedicated test account**: Don't use production credentials (see the skip-guard sketch after this list)
2. **Clean up test resources**: Delete any created resources after tests
3. **Use fixture data**: Prepare test data for reproducible results
4. **Make tests independent**: Each test should be able to run on its own
5. **Use realistic data**: Test with data that resembles real-world usage
6. **Test edge cases**: Error handling, rate limiting, authentication failures
7. **Test CLI workflows**: Validate common command patterns
8. **Focus on main workflows**: Prioritize testing the most common user flows
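
Since E2E tests depend on real test-account credentials, it helps to skip them automatically when those credentials are absent (for example, on contributors' machines or in unit-test-only CI jobs). A small sketch using pytest's `skipif` marker and the environment variables from the earlier examples:

```python
import os

import pytest

# Skip every test in this module unless a test API key is configured
pytestmark = pytest.mark.skipif(
    not os.environ.get("NF_TEST_API_KEY"),
    reason="NF_TEST_API_KEY is not set; skipping live E2E tests",
)
```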

By implementing this testing strategy, you'll build confidence in your Infactory SDK and ensure a quality experience for your users across both the Python library and CLI interfaces.