Change AM_APPLICATION_REGISTRATION table #13154

Open: wants to merge 3 commits into master.

Conversation


@DDH13 DDH13 commented Jun 23, 2025

Changed the INPUTS column type from VARCHAR to BLOB.


coderabbitai bot commented Jun 23, 2025

Walkthrough

The changes update the storage and retrieval mechanism for the INPUTS field in the AM_APPLICATION_REGISTRATION table across multiple database platforms. The field's type is changed from text-based (e.g., VARCHAR) to binary (e.g., BLOB, BYTEA, VARBINARY). Corresponding Java DAO logic is updated to use binary streams for storing and reading the INPUTS data.
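The round trip described above can be sketched as a small codec. This is an illustrative stand-in, not the actual ApiMgtDAO code from the PR (the class and method names here are hypothetical): the JSON string is encoded to a UTF-8 byte stream before being handed to `setBinaryStream`, and the stream returned by `getBinaryStream` is decoded back to a String with the same charset.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;

// Hypothetical helper mirroring the DAO-side conversion; the PR does this inline.
public class InputsBlobCodec {

    // Encode the INPUTS JSON string for PreparedStatement.setBinaryStream(...).
    static InputStream toStream(String inputs) {
        return new ByteArrayInputStream(inputs.getBytes(StandardCharsets.UTF_8));
    }

    // Decode the stream returned by ResultSet.getBinaryStream("INPUTS").
    static String fromStream(InputStream in) {
        try (ByteArrayOutputStream out = new ByteArrayOutputStream()) {
            byte[] buf = new byte[4096];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
            return new String(out.toByteArray(), StandardCharsets.UTF_8);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

Pinning the charset on both sides keeps the round trip lossless; relying on the platform default charset would make the stored bytes environment-dependent.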

Changes

Changed files and summaries:

  • components/apimgt/org.wso2.carbon.apimgt.impl/.../ApiMgtDAO.java: switched storage and retrieval of INPUTS from text to binary streams in DAO logic.
  • features/apimgt/org.wso2.carbon.apimgt.core.feature/.../sql/h2.sql, components/apimgt/org.wso2.carbon.apimgt.keymgt/src/test/resources/dbscripts/h2.sql, features/apimgt/org.wso2.carbon.apimgt.keymanager.feature/src/main/resources/sql/h2.sql: changed INPUTS column type from VARCHAR(1000) to BLOB in H2 schemas.
  • features/apimgt/org.wso2.carbon.apimgt.core.feature/.../sql/mysql.sql, features/apimgt/org.wso2.carbon.apimgt.core.feature/.../sql/mysql_cluster.sql: changed INPUTS column type from VARCHAR(1000) to LONGBLOB in MySQL schemas.
  • features/apimgt/org.wso2.carbon.apimgt.core.feature/.../sql/oracle.sql, features/apimgt/org.wso2.carbon.apimgt.core.feature/.../sql/oracle_23c.sql, features/apimgt/org.wso2.carbon.apimgt.core.feature/.../sql/oracle_rac.sql: changed INPUTS column type from VARCHAR2(1024) to BLOB in Oracle schemas.
  • features/apimgt/org.wso2.carbon.apimgt.core.feature/.../sql/postgresql.sql, features/apimgt/org.wso2.carbon.apimgt.core.feature/.../multi-dc/Postgresql/apimgt/tables.sql: changed INPUTS column type from VARCHAR(1000) to BYTEA in PostgreSQL schemas.
  • features/apimgt/org.wso2.carbon.apimgt.core.feature/.../sql/mssql.sql, features/apimgt/org.wso2.carbon.apimgt.core.feature/.../multi-dc/SQLServer/mssql/apimgt/tables.sql: changed INPUTS column type from VARCHAR(1000) to VARBINARY(MAX) in SQL Server schemas.
  • features/apimgt/org.wso2.carbon.apimgt.core.feature/.../sql/db2.sql: changed INPUTS column type from VARCHAR(1000) to BLOB in the DB2 schema.

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant ApiMgtDAO
    participant Database

    Client->>ApiMgtDAO: Create Application Registration (with INPUTS JSON)
    ApiMgtDAO->>ApiMgtDAO: Convert INPUTS JSON to UTF-8 byte stream
    ApiMgtDAO->>Database: Store INPUTS as binary (BLOB/VARBINARY/BYTEA)

    Client->>ApiMgtDAO: Retrieve Application Registration
    ApiMgtDAO->>Database: Fetch INPUTS as binary stream
    ApiMgtDAO->>ApiMgtDAO: Convert binary stream to String
    ApiMgtDAO->>Client: Return Application Registration (with INPUTS as String)
```

@DDH13 DDH13 force-pushed the column-change-inputs branch 2 times, most recently from 6bff3eb to 72abf15 Compare June 23, 2025 11:45
@DDH13 DDH13 force-pushed the column-change-inputs branch from 72abf15 to 7e3e843 Compare June 24, 2025 03:37
@DDH13 DDH13 marked this pull request as ready for review June 25, 2025 04:17
@Copilot Copilot AI review requested due to automatic review settings June 25, 2025 04:17

@Copilot Copilot AI left a comment


Pull Request Overview

This PR updates the AM_APPLICATION_REGISTRATION table by changing the data type of the INPUTS column from VARCHAR to a binary type across multiple database dialects to support binary data storage.

  • Updated SQL scripts for H2, PostgreSQL, Oracle, MySQL, MSSQL, DB2, and multi-DC configurations
  • Modified related Java DAO methods for writing (using setBinaryStream) and reading (using getBinaryStream) the INPUTS column data

Reviewed Changes

Copilot reviewed 14 out of 14 changed files in this pull request and generated no comments.

Summary per file:

  • features/apimgt/org.wso2.carbon.apimgt.keymanager.feature/src/main/resources/sql/h2.sql: changed INPUTS column type from VARCHAR to BLOB for H2
  • features/apimgt/org.wso2.carbon.apimgt.core.feature/src/main/resources/sql/postgresql.sql: changed INPUTS column type from VARCHAR to BYTEA for PostgreSQL
  • features/apimgt/org.wso2.carbon.apimgt.core.feature/src/main/resources/sql/oracle_rac.sql: changed INPUTS column type from VARCHAR2 to BLOB for Oracle RAC
  • features/apimgt/org.wso2.carbon.apimgt.core.feature/src/main/resources/sql/oracle_23c.sql: changed INPUTS column type from VARCHAR2 to BLOB for Oracle 23c
  • features/apimgt/org.wso2.carbon.apimgt.core.feature/src/main/resources/sql/oracle.sql: changed INPUTS column type from VARCHAR2 to BLOB for Oracle
  • features/apimgt/org.wso2.carbon.apimgt.core.feature/src/main/resources/sql/mysql_cluster.sql: changed INPUTS column type from VARCHAR to LONGBLOB for MySQL Cluster
  • features/apimgt/org.wso2.carbon.apimgt.core.feature/src/main/resources/sql/mysql.sql: changed INPUTS column type from VARCHAR to LONGBLOB for MySQL
  • features/apimgt/org.wso2.carbon.apimgt.core.feature/src/main/resources/sql/mssql.sql: changed INPUTS column type from VARCHAR to VARBINARY(MAX) for MSSQL (two occurrences)
  • features/apimgt/org.wso2.carbon.apimgt.core.feature/src/main/resources/sql/h2.sql: changed INPUTS column type from VARCHAR to BLOB for H2
  • features/apimgt/org.wso2.carbon.apimgt.core.feature/src/main/resources/sql/db2.sql: changed INPUTS column type from VARCHAR to BLOB for DB2
  • features/apimgt/org.wso2.carbon.apimgt.core.feature/src/main/resources/multi-dc/SQLServer/mssql/apimgt/tables.sql: changed INPUTS column type from VARCHAR to VARBINARY(MAX) for multi-dc MSSQL
  • features/apimgt/org.wso2.carbon.apimgt.core.feature/src/main/resources/multi-dc/Postgresql/apimgt/tables.sql: changed INPUTS column type from VARCHAR to BYTEA for multi-dc PostgreSQL
  • components/apimgt/org.wso2.carbon.apimgt.keymgt/src/test/resources/dbscripts/h2.sql: changed INPUTS column type from VARCHAR to BLOB for test H2 scripts
  • components/apimgt/org.wso2.carbon.apimgt.impl/src/main/java/org/wso2/carbon/apimgt/impl/dao/ApiMgtDAO.java: updated DAO code for handling binary INPUTS data during INSERT and SELECT


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🔭 Outside diff range comments (1)
features/apimgt/org.wso2.carbon.apimgt.core.feature/src/main/resources/sql/mssql.sql (1)

1996-2003: Duplicate AM_APPLICATION_REGISTRATION definition – remove the second copy

The file now contains two CREATE TABLE AM_APPLICATION_REGISTRATION blocks.
Even though the second one is gated with the same IF NOT EXISTS, it is redundant noise and a maintenance hazard (future edits may touch only one copy).

```diff
-IF NOT  EXISTS (SELECT * FROM SYS.OBJECTS WHERE OBJECT_ID = OBJECT_ID(N'[DBO].[AM_APPLICATION_REGISTRATION]') AND TYPE IN (N'U'))
-
-CREATE TABLE AM_APPLICATION_REGISTRATION (
-    REG_ID INTEGER IDENTITY(1,1),
-    SUBSCRIBER_ID INTEGER,
-    WF_REF VARCHAR(255) NOT NULL,
-    APP_ID INTEGER,
-    TOKEN_TYPE VARCHAR(30),
-    TOKEN_SCOPE VARCHAR(1500) DEFAULT 'default',
-    INPUTS VARBINARY(MAX),
-    ALLOWED_DOMAINS VARCHAR(256),
-    VALIDITY_PERIOD BIGINT,
-    KEY_MANAGER VARCHAR(255) NOT NULL,
-    UNIQUE (SUBSCRIBER_ID,APP_ID,TOKEN_TYPE,KEY_MANAGER),
-    FOREIGN KEY(SUBSCRIBER_ID) REFERENCES AM_SUBSCRIBER(SUBSCRIBER_ID) ON DELETE NO ACTION,
-    FOREIGN KEY(APP_ID) REFERENCES AM_APPLICATION(APPLICATION_ID) ON UPDATE CASCADE ON DELETE NO ACTION,
-    PRIMARY KEY (REG_ID)
-);
```

Removing this duplicate keeps the DDL unambiguous and avoids accidental divergence.

🧹 Nitpick comments (4)
features/apimgt/org.wso2.carbon.apimgt.core.feature/src/main/resources/sql/mysql.sql (1)

1725-1733: Upgrade path & column size need follow-up

Changing INPUTS to LONGBLOB works for new deployments only; existing DBs won’t pick this up because this file is the bootstrap schema.

  1. Add an explicit MySQL upgrade script that runs
     ALTER TABLE AM_APPLICATION_REGISTRATION MODIFY INPUTS LONGBLOB;
     (same for the clustered schema).
  2. LONGBLOB (4 GiB) is generous. If the payload is expected to stay below 16 MiB, MEDIUMBLOB is more RAM/IO-friendly and still well beyond the old 1 KiB limit:

```diff
-    INPUTS LONGBLOB,
+    INPUTS MEDIUMBLOB,
```

Please verify the real-world payload size and add a migration step accordingly.

features/apimgt/org.wso2.carbon.apimgt.core.feature/src/main/resources/sql/mysql_cluster.sql (1)

1904-1906: LONGBLOB may be over-sized and has NDB-specific overhead – double-check the requirement

You have switched INPUTS to LONGBLOB (≈ 4 GB).
• If typical payloads are well below 16 MB, MEDIUMBLOB (16 MB) or even BLOB (64 KB) would reduce memory/redo-log pressure in MySQL NDB and improve replication throughput.
• NDB stores BLOB/TEXT in a side table; each access incurs an extra round-trip. The larger the column, the bigger the penalty.
• Existing DAO code must now stream binary data; writing a NULL versus empty byte array behaves differently from the old VARCHAR. Confirm all callers handle this.

If 4 GB really is required, ignore this; otherwise consider:

```diff
-    INPUTS LONGBLOB,
+    -- INPUTS rarely exceeds <x> KB; MEDIUMBLOB is sufficient and lighter on NDB
+    INPUTS MEDIUMBLOB,
```

features/apimgt/org.wso2.carbon.apimgt.core.feature/src/main/resources/sql/h2.sql (1)

1795-1795: Verify BLOB vs CLOB for JSON payloads

INPUTS stores JSON (text). Switching from VARCHAR to an opaque BLOB sacrifices readability, LIKE-based queries, and built-in UTF-8 validation. Most databases provide a CLOB/TEXT type for large character data, while keeping byte-for-byte compatibility with your DAO’s UTF-8 streams.

Please double-check that:

  1. The DAO layer always encodes/decodes with a deterministic charset (e.g. StandardCharsets.UTF_8).
  2. Down-stream analytics or ad-hoc SQL that previously inspected INPUTS won’t break.
  3. You really need a binary LOB instead of a character LOB.
    If not, consider the lighter change below (consistent with other engines’ TEXT/CLOB):
```diff
-    INPUTS BLOB,
+    INPUTS CLOB,
```

No action required if binary storage was intentional, but a quick confirmation (or a migration note) will prevent surprises in existing tool-chains.

features/apimgt/org.wso2.carbon.apimgt.core.feature/src/main/resources/multi-dc/SQLServer/mssql/apimgt/tables.sql (1)

1867-1884: Two separate CREATE TABLE AM_APPLICATION_REGISTRATION blocks – consider deduplicating

The script still contains two guarded CREATE TABLE AM_APPLICATION_REGISTRATION statements (the second begins at line 1982).
Although the IF NOT EXISTS guard prevents execution errors, having duplicate DDL leads to divergence risk (e.g. different fk-options between the two definitions) and hampers maintainability.

```diff
-IF NOT  EXISTS (SELECT * FROM SYS.OBJECTS WHERE OBJECT_ID = OBJECT_ID(N'[DBO].[AM_APPLICATION_REGISTRATION]') AND TYPE IN (N'U'))
-/* second definition here */
-...
```

Keep a single, authoritative definition and drop the redundant block (or move the duplicate to an explicit upgrade script if that was the intent).

Also applies to: 1982-1999

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 8444935 and 7e3e843.

📒 Files selected for processing (14)
  • components/apimgt/org.wso2.carbon.apimgt.impl/src/main/java/org/wso2/carbon/apimgt/impl/dao/ApiMgtDAO.java (2 hunks)
  • components/apimgt/org.wso2.carbon.apimgt.keymgt/src/test/resources/dbscripts/h2.sql (1 hunks)
  • features/apimgt/org.wso2.carbon.apimgt.core.feature/src/main/resources/multi-dc/Postgresql/apimgt/tables.sql (1 hunks)
  • features/apimgt/org.wso2.carbon.apimgt.core.feature/src/main/resources/multi-dc/SQLServer/mssql/apimgt/tables.sql (2 hunks)
  • features/apimgt/org.wso2.carbon.apimgt.core.feature/src/main/resources/sql/db2.sql (1 hunks)
  • features/apimgt/org.wso2.carbon.apimgt.core.feature/src/main/resources/sql/h2.sql (1 hunks)
  • features/apimgt/org.wso2.carbon.apimgt.core.feature/src/main/resources/sql/mssql.sql (2 hunks)
  • features/apimgt/org.wso2.carbon.apimgt.core.feature/src/main/resources/sql/mysql.sql (1 hunks)
  • features/apimgt/org.wso2.carbon.apimgt.core.feature/src/main/resources/sql/mysql_cluster.sql (1 hunks)
  • features/apimgt/org.wso2.carbon.apimgt.core.feature/src/main/resources/sql/oracle.sql (1 hunks)
  • features/apimgt/org.wso2.carbon.apimgt.core.feature/src/main/resources/sql/oracle_23c.sql (1 hunks)
  • features/apimgt/org.wso2.carbon.apimgt.core.feature/src/main/resources/sql/oracle_rac.sql (1 hunks)
  • features/apimgt/org.wso2.carbon.apimgt.core.feature/src/main/resources/sql/postgresql.sql (1 hunks)
  • features/apimgt/org.wso2.carbon.apimgt.keymanager.feature/src/main/resources/sql/h2.sql (1 hunks)
🔇 Additional comments (12)
features/apimgt/org.wso2.carbon.apimgt.core.feature/src/main/resources/sql/db2.sql (1)

2538-2546: Verification script:

```shell
#!/bin/bash
# Search for any ALTER statements on AM_APPLICATION_REGISTRATION in DB2 scripts
rg -n "ALTER TABLE.*AM_APPLICATION_REGISTRATION" .

# Search for any temporary columns or migration steps related to INPUTS
rg -n "ADD COLUMN INPUTS_TMP" .
rg -n "INPUTS_TMP" .
```

features/apimgt/org.wso2.carbon.apimgt.core.feature/src/main/resources/sql/oracle_rac.sql (1)

2517-2519: Is BLOB the right LOB type here?

If the INPUTS column holds textual/JSON payload (as opposed to opaque binary), CLOB is normally preferred in Oracle.
Using BLOB means callers must handle character-set conversion manually and you lose native SQL operators (LIKE, REGEXP_LIKE, etc.). Please double-check that:

  1. All callers now treat the field strictly as a byte stream.
  2. No in-DB string operations are required on this column.
  3. Migration scripts convert existing VARCHAR2 data to BLOB safely (TO_BLOB(UTL_RAW.CAST_TO_RAW(...))).

If any of the above is false, consider switching to CLOB instead:

```diff
-    INPUTS BLOB,
+    INPUTS CLOB,
```

Failing to align the LOB type with the actual data semantics can introduce subtle bugs and performance penalties.

features/apimgt/org.wso2.carbon.apimgt.core.feature/src/main/resources/sql/oracle_23c.sql (1)

2686-2689: Verification script:

```shell
#!/bin/bash
# 1. Verify INPUTS column definition across all "23c" SQL scripts
echo "=== Verifying INPUTS column in 23c SQL scripts ==="
find features/apimgt -type f -path "*/sql/*_23c.sql" | while read -r f; do
  echo "File: $f"
  grep -Hn "INPUTS" -A2 -B1 "$f" || echo "  (no match)"
done

# 2. Check that the DAO layer uses binary streams for the INPUTS column
echo
echo "=== Checking Java code for binary-stream usage ==="
rg -n "getBinaryStream" .
rg -n "setBinaryStream" .
```

features/apimgt/org.wso2.carbon.apimgt.core.feature/src/main/resources/multi-dc/Postgresql/apimgt/tables.sql (1)

1987-1996: Verification script:

```shell
#!/bin/bash
set -e

# 1. List DB vendor directories under multi-dc
echo "DB vendor directories under multi-dc:"
find features/apimgt/org.wso2.carbon.apimgt.core.feature/src/main/resources/multi-dc -maxdepth 1 -mindepth 1 -type d

# 2. Search for INPUTS occurrences in all SQL scripts under multi-dc
echo -e "\nINPUTS occurrences in multi-dc SQL scripts:"
rg -n "INPUTS" -g "features/apimgt/org.wso2.carbon.apimgt.core.feature/src/main/resources/multi-dc/**/*.sql"

# 3. Check for any existing migration scripts referencing INPUTS_NEW
echo -e "\nSearching for INPUTS_NEW in all SQL files:"
rg -n "INPUTS_NEW" -g "*.sql" || echo "No INPUTS_NEW references found."

# 4. Scan Java code for direct references to the INPUTS column
echo -e "\nJava code references to 'INPUTS':"
rg -n "INPUTS" -g "*.java" || echo "No raw 'INPUTS' references found."

# 5. Verify use of binary vs string setters in DAO code
echo -e "\nOccurrences of setBinaryStream in Java code:"
rg -n "setBinaryStream" -g "*.java" || echo "No setBinaryStream calls found."

echo -e "\nOccurrences of setString in Java code:"
rg -n "setString" -g "*.java" || echo "No setString calls found."
```
features/apimgt/org.wso2.carbon.apimgt.core.feature/src/main/resources/sql/postgresql.sql (1)

1994-1996: Column switched to BYTEA – confirm migration and driver handling

Changing INPUTS from VARCHAR to BYTEA is fine for fresh installs, but existing deployments will need a data-migration script and the DAO layer must now use setBytes/getBytes (or setBinaryStream) instead of setString/getString.
Please verify:

  1. A DB‐migration patch (ALTER TABLE … ALTER COLUMN INPUTS TYPE BYTEA USING INPUTS::BYTEA) exists for upgrades.
  2. All JDBC calls touching INPUTS have been updated (see ApiMgtDAO) and are covered by tests.
  3. No downstream code still assumes JSON/text semantics (e.g., SQL LIKE filters).

If any of these items is missing, the change will break runtime or upgrade paths.

features/apimgt/org.wso2.carbon.apimgt.core.feature/src/main/resources/sql/oracle.sql (1)

2670-2677: Switching INPUTS to BLOB has functional implications – verify encoding, migration & LOB-handling.

The column is now a binary LOB, which is fine given that the DAO persists a UTF-8 byte stream. However, confirm the following before merge:

  1. Upgrade path – existing installations will require an ALTER TABLE … MODIFY + data copy using to_blob(to_clob(INPUTS)) (or similar). Provide a DB migration script in the DB‐upgrade folder; otherwise INPUTS data will be silently lost on upgrade.

  2. Character vs binary – if the payload is always JSON/text, a CLOB (or NCLOB for UTF-16) avoids manual encoding/decoding and is friendlier to DB tooling. Double-check other vendors: H2/MySQL/Postgres use text-based LOBs (CLOB, TEXT, JSONB) not binary? Keep the type choice consistent or document why Oracle uses BLOB.

  3. Nullability – the original column was VARCHAR2(1024) (implicitly nullable). If non-null data is expected, add NOT NULL or supply a sane default (EMPTY_BLOB()).

  4. Performance / storage – Oracle LOBs are stored out-of-line; consider ENABLE STORAGE IN ROW if average payload < 4 KB to avoid extra LOB segments.

  5. Indexing restrictions – future code must not attempt to add functional indexes on INPUTS; Oracle prohibits indexing a plain BLOB without a custom domain index.

Validate 1 & 2 across all DB scripts and ensure a migration script exists.

features/apimgt/org.wso2.carbon.apimgt.keymanager.feature/src/main/resources/sql/h2.sql (1)

1478-1486: Column switched to BLOB – confirm upgrade path & data handling

Changing INPUTS from VARCHAR to BLOB is fine for fresh installs, but production environments will already hold string data.

  1. Provide an explicit DB-migration script (or document the manual step) that converts existing rows to BLOB so upgrades don’t silently truncate / fail.
  2. A BLOB is binary; the DAO must always UTF-8 encode / decode. Verify the DAO update covers all read/write paths and handles null safely.
  3. If the payload is only UTF-8 JSON, CLOB (character large object) gives the same size benefit without manual encoding, and is better aligned with text search / backups in most RDBMSs – worth a quick cross-dialect check.
  4. Keep the nullability definition consistent across dialect scripts (NOT NULL vs nullable). Today the column is nullable here; ensure the same elsewhere.

Please double-check these points before merging.

components/apimgt/org.wso2.carbon.apimgt.keymgt/src/test/resources/dbscripts/h2.sql (1)

1485-1485: Migration/compatibility risk: VARCHAR → BLOB requires an upgrade path

Changing INPUTS from VARCHAR(1000) to BLOB is fine for new installations, but an in-place upgrade of an existing deployment will silently drop the previous data because H2 will recreate the table, not alter it.

Please confirm that:

  1. A dedicated migration/ALTER script is shipped for existing DBs (e.g. ALTER TABLE … ALTER COLUMN INPUTS BLOB; for H2 and equivalent for other RDBMSs).
  2. Runtime code gracefully handles both old and new schemas until the migration is executed (e.g. fallback to getString when getBinaryStream returns null).

If these are already covered, feel free to dismiss. Otherwise, consider adding an upgrade script or at least documenting the manual step.
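The "graceful handling of both schemas" suggested in point 2 could look like the following sketch. This is a hypothetical helper, not code from the PR (names are illustrative): prefer the binary stream from the new BLOB column, and fall back to a legacy text value when the stream is absent.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;

// Hypothetical fallback reader for mixed old/new schemas.
public class InputsReader {

    // binaryColumn: result of getBinaryStream("INPUTS") on the new schema (may be null);
    // legacyTextColumn: result of getString("INPUTS") on the old schema (may be null).
    static String readInputs(InputStream binaryColumn, String legacyTextColumn) {
        if (binaryColumn == null) {
            // Old VARCHAR schema, or a NULL row: use the text value as-is.
            return legacyTextColumn;
        }
        try (ByteArrayOutputStream out = new ByteArrayOutputStream()) {
            byte[] buf = new byte[4096];
            int n;
            while ((n = binaryColumn.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
            return new String(out.toByteArray(), StandardCharsets.UTF_8);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

A helper like this would only be an interim measure until the migration script has run everywhere; afterwards the fallback branch becomes dead code.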

components/apimgt/org.wso2.carbon.apimgt.impl/src/main/java/org/wso2/carbon/apimgt/impl/dao/ApiMgtDAO.java (2)

276-277: LGTM! Proper binary stream handling for BLOB storage.

The conversion from string to binary stream is correctly implemented using UTF-8 encoding, which aligns with the schema change from VARCHAR to BLOB.


8875-8875: Verification script:

```shell
#!/bin/bash
# Locate the APIMgtDBUtil class and display its getStringFromInputStream method for inspection

# Find the Java file containing APIMgtDBUtil
APIMGT_UTIL_FILE=$(rg -l "class APIMgtDBUtil" -g "*.java")
echo "APIMgtDBUtil located at: $APIMGT_UTIL_FILE"

# Show the getStringFromInputStream method with surrounding context
rg -n "getStringFromInputStream" -A 10 -B 5 "$APIMGT_UTIL_FILE"
```

features/apimgt/org.wso2.carbon.apimgt.core.feature/src/main/resources/multi-dc/SQLServer/mssql/apimgt/tables.sql (1)

1870-1877: To pinpoint any JDBC calls still treating the INPUTS column as text, search for getString("INPUTS") in the DAO layer:

```shell
#!/bin/bash
# Check for any getString usage when retrieving the INPUTS column by name
rg --ignore-case --line-number 'getString\(\s*"?INPUTS"?\s*\)' components/apimgt/org.wso2.carbon.apimgt.impl/src/main/java
```
features/apimgt/org.wso2.carbon.apimgt.core.feature/src/main/resources/sql/mssql.sql (1)

1888-1888: Binary type switch LGTM – verify downstream impacts

Changing INPUTS to VARBINARY(MAX) is the right choice for storing opaque payloads on SQL Server and keeps the dialect consistent with the other DB scripts in the PR.
Please double-check:

  1. All DAO / JDBC code now uses setBinaryStream / getBinaryStream (the PR shows this for ApiMgtDAO, but scan for any direct SELECT INPUTS … still casting to VARCHAR).
  2. Migrations: existing installations will require an ALTER TABLE plus manual conversion of the old VARCHAR data → VARBINARY. Ship an explicit migration script or document the step.

If both items are covered, nothing more to do.

```diff
@@ -1728,7 +1728,7 @@ CREATE TABLE IF NOT EXISTS AM_APPLICATION_REGISTRATION (
     APP_ID INT,
     TOKEN_TYPE VARCHAR(30),
     TOKEN_SCOPE VARCHAR(1500) DEFAULT 'default',
-    INPUTS VARCHAR(1000),
+    INPUTS LONGBLOB,
```
A contributor commented on this diff:

Is there any specific reason for setting the type to LONGBLOB instead of BLOB?

Since we are converting a column of VARCHAR(1000), BLOB should be well sufficient, as it holds up to 64 KB. Memory and other overheads are higher for LONGBLOB, so it would be ideal to use BLOB if possible.
