OrientDB natively handles binary data (BLOBs). However, there are some considerations to take into account depending on the type of binary data, its size, how it is used, and so on.
Sometimes it's better to store binary records in a different path than the default database directory, to benefit from a faster disk (such as an SSD) or simply to parallelize I/O if the OS and hardware configuration allow it.
In this case create a new cluster in a different path:
db.addCluster("physical", "binary", "/mnt/ssd", "binary" );
All the records in the cluster binary will reside in files created under the directory /mnt/ssd.
This is the simplest way to handle binary data: store it on the file system and just keep the path to retrieve it.
Example:
ODocument doc = new ODocument();
doc.field("binary", "/usr/local/orientdb/binary/test.pdf");
doc.save();
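Reading the content back is then a plain file-system operation outside the database: resolve the stored path string and read the bytes. A minimal sketch of the round-trip (the temporary file here stands in for a real path like the PDF above):

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class PathReference {
    public static void main(String[] args) throws Exception {
        // A throwaway file standing in for a real stored binary (e.g. a PDF)
        Path path = Files.createTempFile("binary-demo", ".bin");
        Files.write(path, "PDF content".getBytes());

        // The document stores only the path string...
        String storedPath = path.toString();

        // ...and the application resolves it back to the content at read time
        byte[] content = Files.readAllBytes(Paths.get(storedPath));
        System.out.println(new String(content));

        Files.delete(path);
    }
}
```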
Pros:
Cons:
The ODocument class is able to manage binary data in the form of byte[] (byte array). Example:
ODocument doc = new ODocument();
doc.field("binary", "Binary data".getBytes());
doc.save();
This is the easiest way to keep the binary data inside the database, but it's not really efficient with large BLOBs because the binary content is serialized in Base64. This means a waste of space (33% more) plus a run-time cost in marshalling/unmarshalling.
Also be aware that once the binary data reaches a certain size (10 MB in some recent testing), the database's performance can decrease significantly. If this occurs, the solution is to use the ORecordBytes approach described below.
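The 33% figure follows from Base64 mapping every 3 bytes of input to 4 output characters, which you can verify with the JDK's own encoder:

```java
import java.util.Base64;

public class Base64Overhead {
    public static void main(String[] args) {
        byte[] raw = new byte[3000]; // 3000 raw bytes, a multiple of 3 so no padding
        String encoded = Base64.getEncoder().encodeToString(raw);
        // Base64 encodes every 3 bytes as 4 characters: 3000 -> 4000 (+33%)
        System.out.println(raw.length + " -> " + encoded.length());
    }
}
```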
Pros:
Cons:
The ORecordBytes class is a record implementation able to store binary content without conversions (see above). This is the fastest way to handle binary data with OrientDB, but it needs a separate record to hold it. This technique also offers the highest performance when storing and retrieving large binary records.
Example:
ORecordBytes record = new ORecordBytes("Binary data".getBytes());
record.save();
Since this is a separate record, the best way to reference it is to link it to a Document record. Example:
ORecordBytes record = new ORecordBytes("Binary data".getBytes());
ODocument doc = new ODocument();
doc.field("id", 12345);
doc.field("binary", record);
doc.save();
In this way you can access the binary data by traversing the binary field of the parent document record:
ORecordBytes record = doc.field("binary");
byte[] content = record.toStream();
You can manipulate the buffer directly and save it back again by calling setDirty() on the object:
byte[] content = record.toStream();
content[0] = 0;
record.setDirty();
record.save();
Or you can work with another byte[]:
byte[] content = record.toStream();
byte[] newContent = new byte[content.length * 2];
System.arraycopy(content, 0, newContent, 0, content.length);
record.fromStream(newContent);
record.setDirty();
record.save();
The ORecordBytes class can work with Java streams:
ORecordBytes record = new ORecordBytes().fromInputStream(in);
record.toOutputStream(out);
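What fromInputStream() and toOutputStream() do can be pictured with plain Java streams: buffer the whole input into a byte array, then write that array back out. This sketch mirrors (but does not call) the OrientDB API:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class StreamRoundTrip {
    public static void main(String[] args) throws IOException {
        byte[] original = "Binary data".getBytes();

        // Read the whole input stream into memory, as fromInputStream(in) does
        InputStream in = new ByteArrayInputStream(original);
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        byte[] tmp = new byte[8192];
        int n;
        while ((n = in.read(tmp)) > 0)
            buffer.write(tmp, 0, n);
        byte[] content = buffer.toByteArray();

        // Write the buffered content to an output stream, as toOutputStream(out) does
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        out.write(content, 0, content.length);
        System.out.println(new String(out.toByteArray()));
    }
}
```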
Pros:
Cons:
OrientDB can store up to 2 GB as record content, but there are other limits on network buffers and file sizes you should tune to reach that 2 GB barrier.
However, managing big chunks of binary data means having big byte[] structures in RAM, and this could cause an Out Of Memory error in the JVM. Many users have reported that splitting the binary data into chunks is the best solution.
Continuing from the last example, we could keep not a single reference to one ORecordBytes record but multiple references: a One-To-Many relationship. For this purpose the LINKLIST type fits perfectly, because it maintains the insertion order.
To avoid OrientDB caching large records in memory, use the massive insert intent and keep the RIDs in the collection, not the entire records.
Example to store in OrientDB the file content:
database.declareIntent( new OIntentMassiveInsert() );
List<ORID> chunks = new ArrayList<ORID>();
InputStream in = new BufferedInputStream( new FileInputStream( file ) );
while ( in.available() > 0 ) {
final ORecordBytes chunk = new ORecordBytes();
// READ REMAINING DATA, BUT NOT MORE THAN 8K
chunk.fromInputStream( in, 8192 );
// SAVE THE CHUNK TO GET THE REFERENCE (IDENTITY) AND FREE FROM THE MEMORY
database.save( chunk );
// SAVE ITS REFERENCE INTO THE COLLECTION
chunks.add( chunk.getIdentity() );
}
// SAVE THE COLLECTION OF REFERENCES IN A NEW DOCUMENT
ODocument record = new ODocument();
record.field( "data", chunks );
database.save( record );
database.declareIntent( null );
Example to read back the file content:
record.setLazyLoad(false);
for (OIdentifiable id : (List<OIdentifiable>) record.field("data")) {
ORecordBytes chunk = (ORecordBytes) id.getRecord();
chunk.toOutputStream(out);
chunk.unload();
}
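The split/reassemble logic in the two examples above is independent of OrientDB itself; this pure-Java sketch mimics it with in-memory streams (chunk size and data size are illustrative) and checks that the round-trip is lossless:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Random;

public class ChunkRoundTrip {
    static final int CHUNK_SIZE = 8192;

    public static void main(String[] args) throws IOException {
        byte[] data = new byte[20000]; // pretend file content
        new Random(42).nextBytes(data);

        // Split: read at most 8K at a time, as chunk.fromInputStream(in, 8192) does
        List<byte[]> chunks = new ArrayList<>();
        InputStream in = new ByteArrayInputStream(data);
        byte[] buf = new byte[CHUNK_SIZE];
        int n;
        while ((n = in.read(buf)) > 0)
            chunks.add(Arrays.copyOf(buf, n));

        // Reassemble: stream each chunk to the output, as chunk.toOutputStream(out) does
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        for (byte[] chunk : chunks)
            out.write(chunk, 0, chunk.length);

        System.out.println(chunks.size() + " chunks, roundtrip ok: "
                + Arrays.equals(data, out.toByteArray()));
    }
}
```

With 20000 bytes and 8 KB chunks the data splits into three records (8192 + 8192 + 3616), matching what the insert loop above would produce.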
Pros:
Cons:
What to use?
- ORecordBytes
- ORecordBytes, but split the content across multiple records