Change AM_APPLICATION_REGISTRATION table #13154
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
base: master
Conversation
📝 Walkthrough: The changes update the storage and retrieval mechanism for the INPUTS column of the AM_APPLICATION_REGISTRATION table, moving it from character (VARCHAR) to binary storage across all supported database dialects.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant ApiMgtDAO
    participant Database
    Client->>ApiMgtDAO: Create Application Registration (with INPUTS JSON)
    ApiMgtDAO->>ApiMgtDAO: Convert INPUTS JSON to UTF-8 byte stream
    ApiMgtDAO->>Database: Store INPUTS as binary (BLOB/VARBINARY/BYTEA)
    Client->>ApiMgtDAO: Retrieve Application Registration
    ApiMgtDAO->>Database: Fetch INPUTS as binary stream
    ApiMgtDAO->>ApiMgtDAO: Convert binary stream to String
    ApiMgtDAO->>Client: Return Application Registration (with INPUTS as String)
```
6bff3eb → 72abf15 (Compare)
72abf15 → 7e3e843 (Compare)
Pull Request Overview
This PR updates the AM_APPLICATION_REGISTRATION table by changing the data type of the INPUTS column from VARCHAR to a binary type across multiple database dialects to support binary data storage.
- Updated SQL scripts for H2, PostgreSQL, Oracle, MySQL, MSSQL, DB2, and multi-DC configurations
- Modified related Java DAO methods for writing (using setBinaryStream) and reading (using getBinaryStream) the INPUTS column data
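The write path described above comes down to encoding the INPUTS JSON string as a UTF-8 byte stream before binding it. A minimal sketch of that conversion — the class and method names here are illustrative, not the actual `ApiMgtDAO` code, and the `setBinaryStream` call is shown only as a comment:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;

public class InputsColumnWriter {

    // Convert the INPUTS JSON string into the UTF-8 byte stream that the
    // DAO would bind with PreparedStatement#setBinaryStream. Using an
    // explicit charset keeps the write and read sides symmetric.
    public static ByteArrayInputStream toBinaryStream(String inputsJson) {
        byte[] bytes = inputsJson.getBytes(StandardCharsets.UTF_8);
        // In the DAO this stream would then be bound, e.g.:
        //   ps.setBinaryStream(paramIndex, stream, bytes.length);
        return new ByteArrayInputStream(bytes);
    }

    public static void main(String[] args) {
        String json = "{\"tokenType\":\"PRODUCTION\"}";
        ByteArrayInputStream stream = toBinaryStream(json);
        byte[] readBack = new byte[stream.available()];
        stream.read(readBack, 0, readBack.length);
        System.out.println(new String(readBack, StandardCharsets.UTF_8).equals(json)); // true
    }
}
```

Pinning the charset on the write side matters because the BLOB/VARBINARY/BYTEA column no longer carries any character-set metadata of its own.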
Reviewed Changes
Copilot reviewed 14 out of 14 changed files in this pull request and generated no comments.
Summary per file:
| File | Description |
|---|---|
| features/apimgt/org.wso2.carbon.apimgt.keymanager.feature/src/main/resources/sql/h2.sql | Changed INPUTS column type from VARCHAR to BLOB for H2 |
| features/apimgt/org.wso2.carbon.apimgt.core.feature/src/main/resources/sql/postgresql.sql | Changed INPUTS column type from VARCHAR to BYTEA for PostgreSQL |
| features/apimgt/org.wso2.carbon.apimgt.core.feature/src/main/resources/sql/oracle_rac.sql | Changed INPUTS column type from VARCHAR2 to BLOB for Oracle RAC |
| features/apimgt/org.wso2.carbon.apimgt.core.feature/src/main/resources/sql/oracle_23c.sql | Changed INPUTS column type from VARCHAR2 to BLOB for Oracle 23c |
| features/apimgt/org.wso2.carbon.apimgt.core.feature/src/main/resources/sql/oracle.sql | Changed INPUTS column type from VARCHAR2 to BLOB for Oracle |
| features/apimgt/org.wso2.carbon.apimgt.core.feature/src/main/resources/sql/mysql_cluster.sql | Changed INPUTS column type from VARCHAR to LONGBLOB for MySQL Cluster |
| features/apimgt/org.wso2.carbon.apimgt.core.feature/src/main/resources/sql/mysql.sql | Changed INPUTS column type from VARCHAR to LONGBLOB for MySQL |
| features/apimgt/org.wso2.carbon.apimgt.core.feature/src/main/resources/sql/mssql.sql | Changed INPUTS column type from VARCHAR to VARBINARY(MAX) for MSSQL (two occurrences) |
| features/apimgt/org.wso2.carbon.apimgt.core.feature/src/main/resources/sql/h2.sql | Changed INPUTS column type from VARCHAR to BLOB for H2 |
| features/apimgt/org.wso2.carbon.apimgt.core.feature/src/main/resources/sql/db2.sql | Changed INPUTS column type from VARCHAR to BLOB for DB2 |
| features/apimgt/org.wso2.carbon.apimgt.core.feature/src/main/resources/multi-dc/SQLServer/mssql/apimgt/tables.sql | Changed INPUTS column type from VARCHAR to VARBINARY(MAX) for multi-dc MSSQL |
| features/apimgt/org.wso2.carbon.apimgt.core.feature/src/main/resources/multi-dc/Postgresql/apimgt/tables.sql | Changed INPUTS column type from VARCHAR to BYTEA for multi-dc PostgreSQL |
| components/apimgt/org.wso2.carbon.apimgt.keymgt/src/test/resources/dbscripts/h2.sql | Changed INPUTS column type from VARCHAR to BLOB for test H2 scripts |
| components/apimgt/org.wso2.carbon.apimgt.impl/src/main/java/org/wso2/carbon/apimgt/impl/dao/ApiMgtDAO.java | Updated DAO code for handling binary INPUTS data during INSERT and SELECT |
Actionable comments posted: 0
🔭 Outside diff range comments (1)
features/apimgt/org.wso2.carbon.apimgt.core.feature/src/main/resources/sql/mssql.sql (1)
`1996-2003`: **Duplicate AM_APPLICATION_REGISTRATION definition – remove the second copy**

The file now contains two `CREATE TABLE AM_APPLICATION_REGISTRATION` blocks. Even though the second one is gated with the same `IF NOT EXISTS`, it is redundant noise and a maintenance hazard (future edits may touch only one copy).

```diff
-IF NOT EXISTS (SELECT * FROM SYS.OBJECTS WHERE OBJECT_ID = OBJECT_ID(N'[DBO].[AM_APPLICATION_REGISTRATION]') AND TYPE IN (N'U'))
-
-CREATE TABLE AM_APPLICATION_REGISTRATION (
-    REG_ID INTEGER IDENTITY(1,1),
-    SUBSCRIBER_ID INTEGER,
-    WF_REF VARCHAR(255) NOT NULL,
-    APP_ID INTEGER,
-    TOKEN_TYPE VARCHAR(30),
-    TOKEN_SCOPE VARCHAR(1500) DEFAULT 'default',
-    INPUTS VARBINARY(MAX),
-    ALLOWED_DOMAINS VARCHAR(256),
-    VALIDITY_PERIOD BIGINT,
-    KEY_MANAGER VARCHAR(255) NOT NULL,
-    UNIQUE (SUBSCRIBER_ID,APP_ID,TOKEN_TYPE,KEY_MANAGER),
-    FOREIGN KEY(SUBSCRIBER_ID) REFERENCES AM_SUBSCRIBER(SUBSCRIBER_ID) ON DELETE NO ACTION,
-    FOREIGN KEY(APP_ID) REFERENCES AM_APPLICATION(APPLICATION_ID) ON UPDATE CASCADE ON DELETE NO ACTION,
-    PRIMARY KEY (REG_ID)
-);
```

Removing this duplicate keeps the DDL unambiguous and avoids accidental divergence.
🧹 Nitpick comments (4)
features/apimgt/org.wso2.carbon.apimgt.core.feature/src/main/resources/sql/mysql.sql (1)
`1725-1733`: **Upgrade path & column size need follow-up**

Changing `INPUTS` to `LONGBLOB` works for new deployments only; existing DBs won't pick this up because this file is the bootstrap schema.

- Add an explicit MySQL upgrade script that runs:

  ```sql
  ALTER TABLE AM_APPLICATION_REGISTRATION MODIFY INPUTS LONGBLOB;
  ```

  (same for the clustered schema).

- `LONGBLOB` (4 GiB) is generous. If the payload is expected to stay below 16 MiB, `MEDIUMBLOB` is more RAM/IO-friendly and still well beyond the old 1 KiB limit:

  ```diff
  - INPUTS LONGBLOB,
  + INPUTS MEDIUMBLOB,
  ```

Please verify the real-world payload size and add a migration step accordingly.
features/apimgt/org.wso2.carbon.apimgt.core.feature/src/main/resources/sql/mysql_cluster.sql (1)
`1904-1906`: **LONGBLOB may be over-sized and has NDB-specific overhead – double-check the requirement**

You have switched `INPUTS` to `LONGBLOB` (≈ 4 GB).

- If typical payloads are well below 16 MB, `MEDIUMBLOB` (16 MB) or even `BLOB` (64 KB) would reduce memory/redo-log pressure in MySQL NDB and improve replication throughput.
- NDB stores BLOB/TEXT in a side table; each access incurs an extra round-trip. The larger the column, the bigger the penalty.
- Existing DAO code must now stream binary data; writing a `NULL` versus an empty byte array behaves differently from the old `VARCHAR`. Confirm all callers handle this.

If 4 GB really is required, ignore this; otherwise consider:

```diff
- INPUTS LONGBLOB,
+ -- INPUTS rarely exceeds <x> KB; MEDIUMBLOB is sufficient and lighter on NDB
+ INPUTS MEDIUMBLOB,
```

features/apimgt/org.wso2.carbon.apimgt.core.feature/src/main/resources/sql/h2.sql (1)
`1795-1795`: **Verify `BLOB` vs `CLOB` for JSON payloads**

`INPUTS` stores JSON (text). Switching from `VARCHAR` to an opaque `BLOB` sacrifices readability, LIKE-based queries, and built-in UTF-8 validation. Most databases provide a `CLOB`/`TEXT` type for large character data, while keeping byte-for-byte compatibility with your DAO's UTF-8 streams.

Please double-check that:

- The DAO layer always encodes/decodes with a deterministic charset (e.g. `StandardCharsets.UTF_8`).
- Down-stream analytics or ad-hoc SQL that previously inspected `INPUTS` won't break.
- You really need a binary LOB instead of a character LOB.

If not, consider the lighter change below (consistent with other engines' `TEXT`/`CLOB`):

```diff
- INPUTS BLOB,
+ INPUTS CLOB,
```

No action required if binary storage was intentional, but a quick confirmation (or a migration note) will prevent surprises in existing tool-chains.
features/apimgt/org.wso2.carbon.apimgt.core.feature/src/main/resources/multi-dc/SQLServer/mssql/apimgt/tables.sql (1)
`1867-1884`: **Two separate `CREATE TABLE AM_APPLICATION_REGISTRATION` blocks – consider deduplicating**

The script still contains two guarded `CREATE TABLE AM_APPLICATION_REGISTRATION` statements (the second begins at line 1982). Although the `IF NOT EXISTS` guard prevents execution errors, having duplicate DDL leads to divergence risk (e.g. different FK options between the two definitions) and hampers maintainability.

```diff
-IF NOT EXISTS (SELECT * FROM SYS.OBJECTS WHERE OBJECT_ID = OBJECT_ID(N'[DBO].[AM_APPLICATION_REGISTRATION]') AND TYPE IN (N'U'))
-/* second definition here */
-...
```

Keep a single, authoritative definition and drop the redundant block (or move the duplicate to an explicit upgrade script if that was the intent).

Also applies to: 1982-1999
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (14)
- components/apimgt/org.wso2.carbon.apimgt.impl/src/main/java/org/wso2/carbon/apimgt/impl/dao/ApiMgtDAO.java (2 hunks)
- components/apimgt/org.wso2.carbon.apimgt.keymgt/src/test/resources/dbscripts/h2.sql (1 hunk)
- features/apimgt/org.wso2.carbon.apimgt.core.feature/src/main/resources/multi-dc/Postgresql/apimgt/tables.sql (1 hunk)
- features/apimgt/org.wso2.carbon.apimgt.core.feature/src/main/resources/multi-dc/SQLServer/mssql/apimgt/tables.sql (2 hunks)
- features/apimgt/org.wso2.carbon.apimgt.core.feature/src/main/resources/sql/db2.sql (1 hunk)
- features/apimgt/org.wso2.carbon.apimgt.core.feature/src/main/resources/sql/h2.sql (1 hunk)
- features/apimgt/org.wso2.carbon.apimgt.core.feature/src/main/resources/sql/mssql.sql (2 hunks)
- features/apimgt/org.wso2.carbon.apimgt.core.feature/src/main/resources/sql/mysql.sql (1 hunk)
- features/apimgt/org.wso2.carbon.apimgt.core.feature/src/main/resources/sql/mysql_cluster.sql (1 hunk)
- features/apimgt/org.wso2.carbon.apimgt.core.feature/src/main/resources/sql/oracle.sql (1 hunk)
- features/apimgt/org.wso2.carbon.apimgt.core.feature/src/main/resources/sql/oracle_23c.sql (1 hunk)
- features/apimgt/org.wso2.carbon.apimgt.core.feature/src/main/resources/sql/oracle_rac.sql (1 hunk)
- features/apimgt/org.wso2.carbon.apimgt.core.feature/src/main/resources/sql/postgresql.sql (1 hunk)
- features/apimgt/org.wso2.carbon.apimgt.keymanager.feature/src/main/resources/sql/h2.sql (1 hunk)
🔇 Additional comments (12)
features/apimgt/org.wso2.carbon.apimgt.core.feature/src/main/resources/sql/db2.sql (1)

`2538-2546`: Verification script:

```shell
#!/bin/bash
# Search for any ALTER statements on AM_APPLICATION_REGISTRATION in DB2 scripts
rg -n "ALTER TABLE.*AM_APPLICATION_REGISTRATION" .

# Search for any temporary columns or migration steps related to INPUTS
rg -n "ADD COLUMN INPUTS_TMP" .
rg -n "INPUTS_TMP" .
```

features/apimgt/org.wso2.carbon.apimgt.core.feature/src/main/resources/sql/oracle_rac.sql (1)

`2517-2519`: **Is `BLOB` the right LOB type here?**

If the `INPUTS` column holds textual/JSON payload (as opposed to opaque binary), `CLOB` is normally preferred in Oracle. Using `BLOB` means callers must handle character-set conversion manually and you lose native SQL operators (`LIKE`, `REGEXP_LIKE`, etc.).

Please double-check that:

1. All callers now treat the field strictly as a byte stream.
2. No in-DB string operations are required on this column.
3. Migration scripts convert existing `VARCHAR2` data to `BLOB` safely (`TO_BLOB(UTL_RAW.CAST_TO_RAW(...))`).

If any of the above is false, consider switching to `CLOB` instead:

```diff
- INPUTS BLOB,
+ INPUTS CLOB,
```

Failing to align the LOB type with the actual data semantics can introduce subtle bugs and performance penalties.
features/apimgt/org.wso2.carbon.apimgt.core.feature/src/main/resources/sql/oracle_23c.sql (1)

`2686-2689`: Verification script:

```shell
#!/bin/bash
# 1. Verify INPUTS column definition across all "23c" SQL scripts
echo "=== Verifying INPUTS column in 23c SQL scripts ==="
find features/apimgt -type f -path "*/sql/*_23c.sql" | while read -r f; do
  echo "File: $f"
  grep -Hn "INPUTS" -A2 -B1 "$f" || echo "  (no match)"
done

# 2. Check that the DAO layer uses binary streams for the INPUTS column
echo
echo "=== Checking Java code for binary-stream usage ==="
rg -n "getBinaryStream" .
rg -n "setBinaryStream" .
```

features/apimgt/org.wso2.carbon.apimgt.core.feature/src/main/resources/multi-dc/Postgresql/apimgt/tables.sql (1)

`1987-1996`: Verification script:

```shell
#!/bin/bash
set -e

# 1. List DB vendor directories under multi-dc
echo "DB vendor directories under multi-dc:"
find features/apimgt/org.wso2.carbon.apimgt.core.feature/src/main/resources/multi-dc -maxdepth 1 -mindepth 1 -type d

# 2. Search for INPUTS occurrences in all SQL scripts under multi-dc
echo -e "\nINPUTS occurrences in multi-dc SQL scripts:"
rg -n "INPUTS" -g "features/apimgt/org.wso2.carbon.apimgt.core.feature/src/main/resources/multi-dc/**/*.sql"

# 3. Check for any existing migration scripts referencing INPUTS_NEW
echo -e "\nSearching for INPUTS_NEW in all SQL files:"
rg -n "INPUTS_NEW" -g "*.sql" || echo "No INPUTS_NEW references found."

# 4. Scan Java code for direct references to the INPUTS column
echo -e "\nJava code references to 'INPUTS':"
rg -n "INPUTS" -g "*.java" || echo "No raw 'INPUTS' references found."

# 5. Verify use of binary vs string setters in DAO code
echo -e "\nOccurrences of setBinaryStream in Java code:"
rg -n "setBinaryStream" -g "*.java" || echo "No setBinaryStream calls found."
echo -e "\nOccurrences of setString in Java code:"
rg -n "setString" -g "*.java" || echo "No setString calls found."
```
features/apimgt/org.wso2.carbon.apimgt.core.feature/src/main/resources/sql/postgresql.sql (1)
`1994-1996`: **Column switched to BYTEA – confirm migration and driver handling**

Changing `INPUTS` from `VARCHAR` to `BYTEA` is fine for fresh installs, but existing deployments will need a data-migration script, and the DAO layer must now use `setBytes`/`getBytes` (or `setBinaryStream`) instead of `setString`/`getString`.

Please verify:

- A DB-migration patch (`ALTER TABLE … ALTER COLUMN INPUTS TYPE BYTEA USING INPUTS::BYTEA`) exists for upgrades.
- All JDBC calls touching `INPUTS` have been updated (see `ApiMgtDAO`) and are covered by tests.
- No downstream code still assumes JSON/text semantics (e.g., SQL `LIKE` filters).

If any of these items is missing, the change will break runtime or upgrade paths.
features/apimgt/org.wso2.carbon.apimgt.core.feature/src/main/resources/sql/oracle.sql (1)
`2670-2677`: **Switching `INPUTS` to `BLOB` has functional implications – verify encoding, migration & LOB-handling**

The column is now a binary LOB, which is fine given that the DAO persists a UTF-8 byte stream. However, confirm the following before merge:

- **Upgrade path** – existing installations will require an `ALTER TABLE … MODIFY` plus a data copy using `to_blob(to_clob(INPUTS))` (or similar). Provide a DB migration script in the DB-upgrade folder; otherwise `INPUTS` data will be silently lost on upgrade.
- **Character vs binary** – if the payload is always JSON/text, a `CLOB` (or `NCLOB` for UTF-16) avoids manual encoding/decoding and is friendlier to DB tooling. Double-check other vendors: H2/MySQL/Postgres use text-based LOBs (`CLOB`, `TEXT`, `JSONB`), not binary. Keep the type choice consistent or document why Oracle uses `BLOB`.
- **Nullability** – the original column was `VARCHAR2(1024)` (implicitly nullable). If non-null data is expected, add `NOT NULL` or supply a sane default (`EMPTY_BLOB()`).
- **Performance / storage** – Oracle LOBs are stored out-of-line; consider `ENABLE STORAGE IN ROW` if the average payload is under 4 KB to avoid extra LOB segments.
- **Indexing restrictions** – future code must not attempt to add functional indexes on `INPUTS`; Oracle prohibits indexing a plain `BLOB` without a custom domain index.

Validate the first two points across all DB scripts and ensure a migration script exists.
features/apimgt/org.wso2.carbon.apimgt.keymanager.feature/src/main/resources/sql/h2.sql (1)
`1478-1486`: **Column switched to BLOB – confirm upgrade path & data handling**

Changing `INPUTS` from `VARCHAR` to `BLOB` is fine for fresh installs, but production environments will already hold string data.

- Provide an explicit DB-migration script (or document the manual step) that converts existing rows to BLOB so upgrades don't silently truncate / fail.
- A BLOB is binary; the DAO must always UTF-8 encode / decode. Verify the DAO update covers all read/write paths and handles `null` safely.
- If the payload is only UTF-8 JSON, `CLOB` (character large object) gives the same size benefit without manual encoding, and is better aligned with text search / backups in most RDBMSs – worth a quick cross-dialect check.
- Keep the nullability definition consistent across dialect scripts (`NOT NULL` vs nullable). Today the column is nullable here; ensure the same elsewhere.

Please double-check these points before merging.
components/apimgt/org.wso2.carbon.apimgt.keymgt/src/test/resources/dbscripts/h2.sql (1)
`1485-1485`: **Migration/compatibility risk: `VARCHAR` → `BLOB` requires an upgrade path**

Changing `INPUTS` from `VARCHAR(1000)` to `BLOB` is fine for new installations, but an in-place upgrade of an existing deployment will silently drop the previous data because H2 will recreate the table, not alter it.

Please confirm that:

- A dedicated migration/ALTER script is shipped for existing DBs (e.g. `ALTER TABLE … ALTER COLUMN INPUTS BLOB;` for H2 and the equivalent for other RDBMSs).
- Runtime code gracefully handles both old and new schemas until the migration is executed (e.g. fallback to `getString` when `getBinaryStream` returns `null`).

If these are already covered, feel free to dismiss. Otherwise, consider adding an upgrade script or at least documenting the manual step.
components/apimgt/org.wso2.carbon.apimgt.impl/src/main/java/org/wso2/carbon/apimgt/impl/dao/ApiMgtDAO.java (2)
`276-277`: **LGTM! Proper binary stream handling for BLOB storage.**

The conversion from string to binary stream is correctly implemented using UTF-8 encoding, which aligns with the schema change from VARCHAR to BLOB.

`8875-8875`: Verification script:

```shell
#!/bin/bash
# Locate the APIMgtDBUtil class and display its getStringFromInputStream method for inspection

# Find the Java file containing APIMgtDBUtil
APIMGT_UTIL_FILE=$(rg -l "class APIMgtDBUtil" -g "*.java")
echo "APIMgtDBUtil located at: $APIMGT_UTIL_FILE"

# Show the getStringFromInputStream method with surrounding context
rg -n "getStringFromInputStream" -A 10 -B 5 "$APIMGT_UTIL_FILE"
```

features/apimgt/org.wso2.carbon.apimgt.core.feature/src/main/resources/multi-dc/SQLServer/mssql/apimgt/tables.sql (1)

`1870-1877`: To pinpoint any JDBC calls still treating the INPUTS column as text, search for `getString("INPUTS")` in the DAO layer:

```shell
#!/bin/bash
# Check for any getString usage when retrieving the INPUTS column by name
rg --ignore-case --line-number 'getString\(\s*"?INPUTS"?\s*\)' components/apimgt/org.wso2.carbon.apimgt.impl/src/main/java
```
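For context on the read path discussed in these comments, here is a minimal sketch of the binary-to-string conversion a `getStringFromInputStream`-style helper performs. The class and method names are illustrative, not the actual `ApiMgtDAO`/`APIMgtDBUtil` code; the key points are the fixed UTF-8 charset and the null handling for a `NULL` BLOB column:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;

public class InputsColumnReader {

    // Null-safe conversion of a binary INPUTS stream back into the original
    // JSON string - the shape of work done after rs.getBinaryStream("INPUTS").
    public static String streamToString(InputStream in) {
        if (in == null) {
            // A NULL BLOB column yields a null stream; preserve the null
            // instead of returning "" so callers can tell the cases apart.
            return null;
        }
        try {
            ByteArrayOutputStream buffer = new ByteArrayOutputStream();
            byte[] chunk = new byte[4096];
            int n;
            while ((n = in.read(chunk)) != -1) {
                buffer.write(chunk, 0, n);
            }
            // Decode with the same charset used on the write side.
            return new String(buffer.toByteArray(), StandardCharsets.UTF_8);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        String json = "{\"callbackUrl\":\"https://example.com/cb\"}";
        InputStream stored = new ByteArrayInputStream(json.getBytes(StandardCharsets.UTF_8));
        System.out.println(json.equals(streamToString(stored))); // true
        System.out.println(streamToString(null) == null);        // true
    }
}
```

As the review comments note, whether `null` and an empty payload should be distinguished is exactly the kind of behavior change (vs the old `VARCHAR` column) that every caller needs to agree on.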
features/apimgt/org.wso2.carbon.apimgt.core.feature/src/main/resources/sql/mssql.sql (1)
`1888-1888`: **Binary type switch LGTM – verify downstream impacts**

Changing `INPUTS` to `VARBINARY(MAX)` is the right choice for storing opaque payloads on SQL Server and keeps the dialect consistent with the other DB scripts in the PR.

Please double-check:

- All DAO / JDBC code now uses `setBinaryStream`/`getBinaryStream` (the PR shows this for `ApiMgtDAO`, but scan for any direct `SELECT INPUTS …` still casting to `VARCHAR`).
- Migrations: existing installations will require an `ALTER TABLE` plus manual conversion of the old `VARCHAR` data to `VARBINARY`. Ship an explicit migration script or document the step.

If both items are covered, nothing more to do.
```diff
@@ -1728,7 +1728,7 @@ CREATE TABLE IF NOT EXISTS AM_APPLICATION_REGISTRATION (
     APP_ID INT,
     TOKEN_TYPE VARCHAR(30),
     TOKEN_SCOPE VARCHAR(1500) DEFAULT 'default',
-    INPUTS VARCHAR(1000),
+    INPUTS LONGBLOB,
```
Is there any specific reason that we have set the type to LONGBLOB instead of BLOB?
Since we are converting a column of VARCHAR(1000), BLOB should be well sufficient, as it holds up to 64 KB. Memory and other overheads are higher for LONGBLOB, so it would be ideal to use BLOB if possible.
Changed the Inputs type from VARCHAR to BLOB
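The reviewer's sizing argument can be sanity-checked numerically: UTF-8 encodes at most 4 bytes per code point, so a VARCHAR(1000) payload encodes to at most roughly 4 000 bytes, comfortably inside BLOB's 64 KB. A small illustrative check (the sample payload and constants here are made up for demonstration):

```java
import java.nio.charset.StandardCharsets;

public class InputsSizeCheck {

    static final int OLD_VARCHAR_CHARS = 1000;     // previous column limit
    static final int MYSQL_BLOB_BYTES = 64 * 1024; // BLOB capacity (~64 KB)

    // Worst-case UTF-8 size of a payload: 4 bytes per code point.
    public static int worstCaseUtf8Bytes(int chars) {
        return chars * 4;
    }

    public static void main(String[] args) {
        // A multibyte sample shows the byte length can exceed the char count...
        String sample = "κλειδί-🔑"; // Greek text plus an emoji (illustrative)
        int byteLen = sample.getBytes(StandardCharsets.UTF_8).length;
        System.out.println(byteLen > sample.length()); // true

        // ...but even the worst case of the old column fits easily in BLOB.
        System.out.println(worstCaseUtf8Bytes(OLD_VARCHAR_CHARS) <= MYSQL_BLOB_BYTES); // true
    }
}
```

This supports the suggestion: unless payloads are expected to grow far beyond the old 1 000-character limit, BLOB (or MEDIUMBLOB) avoids LONGBLOB's overhead.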