Commit f971d4d (parent 28b326b)

fix: copy all tables on blue-green

2 files changed: +102 additions, -47 deletions


scripts/migrations/README-DB-BLUE-GREEN.md

Lines changed: 28 additions & 15 deletions
```diff
@@ -7,7 +7,7 @@ This utility provides scripts for managing PostgreSQL databases in a blue-green
 The scripts offer two main operations:
 
 1. **Database Creation** - Create both blue and green databases if they don't exist
-2. **Cache Data Copy** - Copy external services cache data from one database to the other
+2. **Database Copy** - Create an exact copy of all tables and data from one database to the other
 
 ## Database Creation
```
````diff
@@ -25,35 +25,48 @@ pnpm db:create-databases
 3. If either database doesn't exist, it creates it
 4. This operation can be run multiple times without error, as it only creates databases when needed
 
-## Cache Data Copy
+## Database Copy
 
-Use this operation to copy cache data between blue and green databases. This is essential for maintaining cache consistency during blue-green deployments.
+Use this operation to create an exact copy of one database to the other. This is essential for maintaining consistency during blue-green deployments.
 
 ```bash
-# Copy cache data from blue to green
+# Copy all tables and data from blue to green
 pnpm db:copy-cache --copyFrom=blue
 
-# Copy cache data from green to blue
+# Copy all tables and data from green to blue
 pnpm db:copy-cache --copyFrom=green
 
 # Using the shorthand parameter
 pnpm db:copy-cache -f blue
 ```
 
-### How Cache Data Copy Works
+### How Database Copy Works
 
 1. The script reads the PostgreSQL connection details from the `DATABASE_URL` environment variable
-2. It handles the two specific cache tables: `price_cache` and `metadata_cache`
-3. For each table:
-   - It truncates the target table
-   - Copies all data from the source to the target table
+2. It determines the source and target databases based on the `copyFrom` parameter
+3. It retrieves a list of all tables in the source database
+4. For each table:
+   - It truncates the target table (removing all existing data)
+   - Copies all data from the source table to the target table
    - Processes data in batches to avoid memory issues with large tables
+5. After completion, the target database is an exact copy of the source database
 
-### Cache Tables
+## Blue-Green Deployment Process
 
-The script only copies the following tables, which contain cached data from external services:
+Here's a typical workflow for using this utility in a blue-green deployment:
 
-- `price_cache`: Stores token price information
-- `metadata_cache`: Stores token metadata
+```bash
+# Step 1: Ensure both databases exist
+pnpm db:create-databases
+
+# Step 2: Deploy new version to the inactive environment (e.g., green)
+# (Your deployment steps here)
+
+# Step 3: Copy data from the active environment to the inactive one
+pnpm db:copy-cache --copyFrom=blue
+
+# Step 4: Switch traffic to the newly updated environment
+# (Your traffic switching steps here)
+```
 
-All other tables are managed through the regular migration process and are not part of the blue-green deployment cache copying strategy.
+This process allows for zero-downtime deployments by maintaining two parallel database environments.
````
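The truncate-then-batch-copy behavior the README describes can be sketched as pure helpers. This is a minimal illustration only: the names `chunkRows` and `buildInsertSql` are not part of this repo, and the real script runs these steps against live `pg` connections.

```typescript
// Illustrative sketch of the batching step described in the README.
// chunkRows and buildInsertSql are hypothetical names, not repo APIs.

type Row = Record<string, unknown>;

// Split rows into fixed-size batches so a large table is never
// inserted in a single statement (the "avoid memory issues" step).
const chunkRows = (rows: Row[], batchSize: number): Row[][] => {
    const batches: Row[][] = [];
    for (let i = 0; i < rows.length; i += batchSize) {
        batches.push(rows.slice(i, i + batchSize));
    }
    return batches;
};

// Build a parameterized multi-row INSERT for one batch; values are
// passed separately as query parameters, never interpolated.
const buildInsertSql = (tableName: string, columns: string[], batch: Row[]): string => {
    const placeholders = batch
        .map((_, rowIdx) => {
            const params = columns.map((_, colIdx) => `$${rowIdx * columns.length + colIdx + 1}`);
            return `(${params.join(", ")})`;
        })
        .join(", ");
    const columnList = columns.map((c) => `"${c}"`).join(", ");
    return `INSERT INTO "${tableName}" (${columnList}) VALUES ${placeholders}`;
};
```

In a real run, each generated statement would be executed on the target pool after the `TRUNCATE`, with the batch's values flattened into the parameter array.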

scripts/migrations/src/copyCache.script.ts

Lines changed: 74 additions & 32 deletions
```diff
@@ -5,26 +5,20 @@ import { hideBin } from "yargs/helpers";
 
 import { Logger, stringify } from "@grants-stack-indexer/shared";
 
-import {
-    BLUE_DB,
-    CACHE_TABLES,
-    ConnectionDetails,
-    extractConnectionDetails,
-    GREEN_DB,
-} from "./constants.js";
+import { BLUE_DB, ConnectionDetails, extractConnectionDetails, GREEN_DB } from "./constants.js";
 import { getDatabaseConfigFromEnv } from "./schemas/index.js";
 
-const { Pool } = pg;
-
 configDotenv();
 
+const { Pool } = pg;
+
 /**
- * This script copies cache data between blue and green databases.
+ * This script copies all table data between blue and green databases.
  *
  * It performs the following steps:
  * 1. Loads environment variables from .env file
 * 2. Gets database configuration from environment
- * 3. Copies cache tables from source to target database
+ * 3. Copies all tables from source to target database, resetting destination tables first
 *
 * Environment variables required:
 * - DATABASE_URL: PostgreSQL connection string (used to extract host, port, user, password)
```
```diff
@@ -43,6 +37,10 @@ interface CopyCacheCommandArgs {
 }
 
 // Define interfaces for database query results
+interface TableNameRow {
+    table_name: string;
+}
+
 interface ColumnNameRow {
     column_name: string;
 }
```
```diff
@@ -64,12 +62,57 @@ const parseArguments = (): CopyCacheCommandArgs => {
         .parseSync() as CopyCacheCommandArgs;
 };
 
+/**
+ * Get all tables in the public schema
+ */
+export const getAllTables = async (
+    db: string,
+    connectionDetails: ConnectionDetails,
+): Promise<string[]> => {
+    const logger = Logger.getInstance();
+    const { host, port, user, password } = connectionDetails;
+
+    const pool = new Pool({
+        host,
+        port: parseInt(port, 10),
+        user,
+        password,
+        database: db,
+        ssl:
+            process.env.NODE_ENV === "production"
+                ? {
+                      rejectUnauthorized: false,
+                  }
+                : undefined,
+        connectionTimeoutMillis: 15000,
+        idleTimeoutMillis: 10000,
+        max: 5,
+    });
+
+    try {
+        logger.info(`Getting all tables from database '${db}'...`);
+
+        const result = await pool.query<TableNameRow>(`
+            SELECT table_name
+            FROM information_schema.tables
+            WHERE table_schema = 'public'
+            AND table_type = 'BASE TABLE'
+            ORDER BY table_name
+        `);
+
+        const tables = result.rows.map((row) => row.table_name);
+        logger.info(`Found ${tables.length} tables in database '${db}'`);
+        return tables;
+    } catch (error) {
+        logger.error(`Failed to get tables: ${stringify(error)}`);
+        throw error;
+    } finally {
+        await pool.end();
+    }
+};
+
 /**
  * Copy table data between databases
- * @param sourceDb - The source database name
- * @param targetDb - The target database name
- * @param tableName - The table name to copy
- * @param connectionDetails - The connection details for the source and target databases
  */
 export const copyTableData = async (
     sourceDb: string,
```
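The script interpolates table names into SQL as `"${tableName}"`. Those names come from `information_schema`, so they are trusted, but a small escaping helper makes the quoting robust to identifiers containing double quotes. This helper is an assumption for illustration, not part of this commit:

```typescript
// Hypothetical helper, not in this diff: quote a PostgreSQL identifier
// by doubling any embedded double quotes, then wrapping the whole name.
const quoteIdent = (name: string): string => `"${name.replace(/"/g, '""')}"`;
```

With it, statements like the script's `TRUNCATE TABLE "${tableName}"` would become `` `TRUNCATE TABLE ${quoteIdent(tableName)}` ``, which behaves identically for ordinary names.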
```diff
@@ -145,7 +188,7 @@ export const copyTableData = async (
     const dataResult = await sourcePool.query<DatabaseRow>(`SELECT * FROM "${tableName}"`);
 
     if (dataResult.rows.length === 0) {
-        logger.info(`No data in source table '${tableName}'. Skipping.`);
+        logger.info(`No data in source table '${tableName}'. Skipping insert.`);
         return;
     }
 
```
```diff
@@ -196,37 +239,36 @@ export const copyTableData = async (
 };
 
 /**
- * Copy cache data from one database to another
- * @param sourceDb - The source database name
- * @param targetDb - The target database name
- * @param connectionDetails - The connection details for the source and target databases
+ * Copy all tables data from one database to another
  */
-export const copyCacheData = async (
+export const copyAllTableData = async (
     sourceDb: string,
     targetDb: string,
     connectionDetails: ConnectionDetails,
 ): Promise<void> => {
     const logger = Logger.getInstance();
 
     try {
-        logger.info(`Copying cache data from '${sourceDb}' to '${targetDb}'...`);
+        logger.info(`Copying all table data from '${sourceDb}' to '${targetDb}'...`);
 
-        // Log which cache tables we'll copy
-        logger.info(`Using ${CACHE_TABLES.length} cache tables: ${CACHE_TABLES.join(", ")}`);
+        // Get all tables from source database
+        const tables = await getAllTables(sourceDb, connectionDetails);
 
-        if (CACHE_TABLES.length === 0) {
-            logger.warn("No cache tables defined. Nothing to copy.");
+        if (tables.length === 0) {
+            logger.warn("No tables found in source database. Nothing to copy.");
             return;
         }
 
+        logger.info(`Found ${tables.length} tables to copy from source database`);
+
         // Copy each table
-        for (const table of CACHE_TABLES) {
+        for (const table of tables) {
             await copyTableData(sourceDb, targetDb, table, connectionDetails);
         }
 
-        logger.info(`✅ Successfully copied cache data from '${sourceDb}' to '${targetDb}'`);
+        logger.info(`✅ Successfully copied all table data from '${sourceDb}' to '${targetDb}'`);
     } catch (error) {
-        logger.error(`Failed to copy cache data: ${stringify(error)}`);
+        logger.error(`Failed to copy table data: ${stringify(error)}`);
         throw error;
     }
 };
```
```diff
@@ -244,10 +286,10 @@ export const main = async (): Promise<void> => {
     const sourceDb = sourceColor === "blue" ? BLUE_DB : GREEN_DB;
     const targetDb = sourceColor === "blue" ? GREEN_DB : BLUE_DB;
 
-    logger.info(`Copying cache data from ${sourceColor} to ${targetColor}...`);
-    await copyCacheData(sourceDb, targetDb, connectionDetails);
+    logger.info(`Copying all table data from ${sourceColor} to ${targetColor}...`);
+    await copyAllTableData(sourceDb, targetDb, connectionDetails);
 
-    logger.info(`✅ Cache data copied from ${sourceColor} to ${targetColor} successfully`);
+    logger.info(`✅ Database ${targetColor} is now an exact copy of ${sourceColor}`);
 
     process.exit(0);
 };
```
