IntelliJ Data Exporter for over 32 KB Oracle BLOBs
If you've already read how to prepare a quick export of CLOB and BLOB columns up to 32 KB, you may be interested in a similar output for BLOBs above 32 KB. While this is not an optimal solution, it can be handy, especially when you want to prepare a one-off import or when you don't want to use additional tools for the import itself.
In the previous example, our limitation was the built-in function UTL_RAW.CONCAT, which allows the concatenation of BLOBs only up to a maximum size of 32 KB.
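The limit is easy to demonstrate; the following sketch (using only the standard UTL_RAW package) tries to concatenate two 20,000-byte RAW values and fails because the result would exceed the 32,767-byte RAW maximum:
-- Raises ORA-06502 (VALUE_ERROR): the concatenated result would be 40,000 bytes,
-- which is more than a RAW can hold.
DECLARE
    v_part RAW(32767) := UTL_RAW.COPIES(HEXTORAW('FF'), 20000);
    v_all  RAW(32767);
BEGIN
    v_all := UTL_RAW.CONCAT(v_part, v_part);
END;
/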
Let's try to implement an equivalent of this function that is not burdened with this limit. At the same time, we will try not to touch the BLOB extractor (IntelliJ Data Extractor) described in the previous article.
The concat_blobs function
An example import generated by the previous extractor looks like this:
CREATE TABLE foo
(
    id  NUMBER GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY,
    bar BLOB
);
--MAX_STRING_SIZE = 6 --ORA-01704/SP2-0027 test
insert into DEMO.FOO (ID, BAR) values (7,
UTL_RAW.CONCAT(HEXTORAW('746573') ,
UTL_RAW.CONCAT(HEXTORAW('742C31') ,
UTL_RAW.CONCAT(HEXTORAW('2C32')
, HEXTORAW('')))));
To ensure compatibility, it is enough that the new function returns a BLOB and accepts two arguments of the same type:
concat_blobs (
    r1 IN BLOB DEFAULT NULL,
    r2 IN BLOB DEFAULT NULL)
RETURN BLOB;
This time we will use the built-in DBMS_LOB package. For concatenation, the DBMS_LOB.APPEND procedure will be perfect. It appends the source data to the first BLOB, which also acts as an IN OUT parameter:
DBMS_LOB.APPEND (
dest_lob IN OUT NOCOPY BLOB,
src_lob IN BLOB);
Since DBMS_LOB.APPEND expects a valid, initialized BLOB locator as its destination, we will create a new temporary BLOB (cached in memory, as requested by the second parameter) using the DBMS_LOB.CREATETEMPORARY procedure:
CREATE OR REPLACE FUNCTION concat_blobs(p_blob1 BLOB DEFAULT NULL, p_blob2 BLOB DEFAULT NULL) RETURN BLOB IS
    v_out_blob BLOB;
BEGIN
    IF p_blob1 IS NULL AND p_blob2 IS NULL THEN
        RETURN NULL;
    END IF;
    DBMS_LOB.CREATETEMPORARY(v_out_blob, TRUE);
    IF p_blob1 IS NOT NULL THEN
        DBMS_LOB.APPEND(v_out_blob, p_blob1);
    END IF;
    IF p_blob2 IS NOT NULL THEN
        DBMS_LOB.APPEND(v_out_blob, p_blob2);
    END IF;
    RETURN v_out_blob;
END;
/
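A quick sanity check of the new function (like the inserts generated later, it relies on the implicit RAW-to-BLOB conversion of the HEXTORAW arguments):
SELECT DBMS_LOB.GETLENGTH(concat_blobs(HEXTORAW('746573'), HEXTORAW('742C312C32'))) AS len
FROM dual;
-- LEN = 8 (3 bytes + 5 bytes)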
After creating such a function in the database, all you have to do is replace UTL_RAW.CONCAT with concat_blobs in the BLOB_PREFIX variable of the extractor from the previous article. Then, after selecting that extractor, copy the data from the console result table or export the table through the IntelliJ Database tab. You can test this export on binary data larger than 32 KB, such as PNG images or ZIP archives.
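If you need such a row to test with, one option is DBMS_LOB.LOADFROMFILE; in the sketch below the directory object TEST_DIR and the file name sample.png are hypothetical, so adjust them to your environment:
DECLARE
    v_bfile BFILE := BFILENAME('TEST_DIR', 'sample.png'); -- hypothetical directory object and file
    v_blob  BLOB;
BEGIN
    INSERT INTO FOO (BAR) VALUES (EMPTY_BLOB()) RETURNING BAR INTO v_blob;
    DBMS_LOB.OPEN(v_bfile, DBMS_LOB.LOB_READONLY);
    DBMS_LOB.LOADFROMFILE(v_blob, v_bfile, DBMS_LOB.GETLENGTH(v_bfile));
    DBMS_LOB.CLOSE(v_bfile);
    COMMIT;
END;
/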
Over 32 KB BLOB Data Extractor
For a full out-of-the-box import, we need to get rid of the requirement to create concat_blobs up front. We can hide the function by wrapping it, together with the inserts, in an anonymous PL/SQL block:
DECLARE
    FUNCTION concat_blobs(p_blob1 BLOB DEFAULT NULL, p_blob2 BLOB DEFAULT NULL) RETURN BLOB IS
    --...
    END;
BEGIN
    insert into FOO (ID, BAR) values (1, concat_blobs());
    insert into FOO (ID, BAR) values (2, concat_blobs());
    --...
END;
/
Go to the BLOB export implementation from the previous article and wrap your PL/SQL code around the ROWS.each { row -> record(COLUMNS, row) } call. To print it, use the implicit OUT variable and its append function. In the end, our extractor can look like this:
/*
* Available context bindings:
* COLUMNS List<DataColumn>
* ROWS Iterable<DataRow>
* OUT { append() }
* FORMATTER { format(row, col); formatValue(Object, col); getTypeName(Object, col); isStringLiteral(Object, col); }
* TRANSPOSED Boolean
* plus ALL_COLUMNS, TABLE, DIALECT
*
* where:
* DataRow { rowNumber(); first(); last(); data(): List<Object>; value(column): Object }
* DataColumn { columnNumber(), name() }
*/
SEP = ", "
QUOTE = "\'"
STRING_PREFIX = DIALECT.getDbms().isMicrosoft() ? "N" : ""
NEWLINE = System.getProperty("line.separator")
KEYWORDS_LOWERCASE = com.intellij.database.util.DbSqlUtil.areKeywordsLowerCase(PROJECT)
KW_INSERT_INTO = KEYWORDS_LOWERCASE ? "insert into " : "INSERT INTO "
KW_VALUES = KEYWORDS_LOWERCASE ? ") values (" : ") VALUES ("
KW_NULL = KEYWORDS_LOWERCASE ? "null" : "NULL"
MAX_STRING_SIZE = 2000
CONCAT_SEPARATOR = ', '
BLOB_PREFIX = '\n concat_blobs(HEXTORAW('
def record(columns, dataRow) {
    OUT.append(KW_INSERT_INTO)
    if (TABLE == null) OUT.append("MY_TABLE")
    else OUT.append(TABLE.getParent().getName()).append(".").append(TABLE.getName())
    OUT.append(" (")
    columns.eachWithIndex { column, idx ->
        OUT.append(column.name()).append(idx != columns.size() - 1 ? SEP : "")
    }
    OUT.append(KW_VALUES)
    columns.eachWithIndex { column, idx ->
        def value = dataRow.value(column)
        def stringValue = value == null ? KW_NULL : FORMATTER.formatValue(value, column)
        def isStringLiteral = value != null && FORMATTER.isStringLiteral(value, column)
        if (isStringLiteral && DIALECT.getDbms().isMysql()) stringValue = stringValue.replace("\\", "\\\\")
        def isOracleBlob = value != null && FORMATTER.getTypeName(value, column) == "BLOB" && DIALECT.getDbms().isOracle()
        if (isOracleBlob) {
            def separatorsCount = Math.ceil(stringValue.length() / MAX_STRING_SIZE)
            stringValue = stringValue
                    .replace(QUOTE, "")
                    .replaceAll("(.{" + MAX_STRING_SIZE + "})", "\$1" + QUOTE + ') ' + CONCAT_SEPARATOR + BLOB_PREFIX + QUOTE)
            OUT.append(STRING_PREFIX + BLOB_PREFIX + QUOTE)
                    .append(stringValue)
                    .append(QUOTE + ")\n")
                    .append(", HEXTORAW('')")
                    .append(")" * separatorsCount)
                    .append(idx != columns.size() - 1 ? SEP : "")
            return
        }
        OUT.append(isStringLiteral ? (STRING_PREFIX + QUOTE) : "")
                .append(isStringLiteral ? stringValue.replace(QUOTE, QUOTE + QUOTE) : stringValue)
                .append(isStringLiteral ? QUOTE : "")
                .append(idx != columns.size() - 1 ? SEP : "")
    }
    OUT.append(");").append(NEWLINE)
}
OUT.append("""DECLARE
FUNCTION concat_blobs(p_blob1 BLOB DEFAULT NULL, p_blob2 BLOB DEFAULT NULL) RETURN BLOB IS
v_out_blob BLOB;
BEGIN
IF p_blob1 IS NULL AND p_blob2 IS NULL THEN
RETURN NULL;
END IF;
DBMS_LOB.CREATETEMPORARY(v_out_blob, TRUE);
IF p_blob1 IS NOT NULL THEN
DBMS_LOB.APPEND(v_out_blob, p_blob1);
END IF;
IF p_blob2 IS NOT NULL THEN
DBMS_LOB.APPEND(v_out_blob, p_blob2);
END IF;
RETURN v_out_blob;
END;
BEGIN
""")
ROWS.each { row -> record(COLUMNS, row) }
OUT.append("""
END;
/
""")
The presented example is not an optimal solution for preparing imports. It only works for relatively small, one-off BLOB imports. In other cases, the nested creation of temporary BLOBs will cause significant slowdowns and memory or I/O consumption (depending on how the DBMS_LOB.CREATETEMPORARY procedure is parameterized). In such cases, it is worth considering solutions based on JDBC, SQL*Loader, or Oracle Data Pump.
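For reference, the caching and duration behavior mentioned above is controlled by the second and third arguments of DBMS_LOB.CREATETEMPORARY; a small sketch of a less cache-hungry variant (not used by the extractor above) could look like this:
DECLARE
    v_tmp BLOB;
BEGIN
    -- cache => FALSE keeps the temporary LOB out of the buffer cache, and
    -- DBMS_LOB.CALL frees it at the end of the call instead of at the end
    -- of the session.
    DBMS_LOB.CREATETEMPORARY(v_tmp, FALSE, DBMS_LOB.CALL);
    -- ... work with v_tmp ...
    DBMS_LOB.FREETEMPORARY(v_tmp);
END;
/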