1. What Is Nitrite?
NOsql Object (NO2, a.k.a. Nitrite) is an open-source NoSQL embedded document store written in Java with a MongoDB-like API. It supports both in-memory and single-file persistent stores, powered by the MVStore engine of the H2 database.
Nitrite is a serverless embedded database ideal for desktop, mobile, or small web applications.
It features:
- Embedded key-value/document and object store
- In-memory (on/off)-heap store
- Single file store
- Very fast and lightweight MongoDB-like API
- Indexing
- Full-text search capability
- Full Android compatibility (API Level 19)
- Observable store
- Both-way replication via Nitrite DataGate server
2. What It Is Not?
Nitrite is not an RDBMS. It is also not a distributed NoSQL database like MongoDB or Cassandra. It does not have a server for external applications to connect to, and it does not support sharding or ACID transactions.
3. Getting Started
3.1. How To Install
To use Nitrite in any Java application, just add the dependency below:
Maven
<dependency>
<groupId>org.dizitart</groupId>
<artifactId>nitrite</artifactId>
<version>3.4.2</version>
</dependency>
Gradle
compile 'org.dizitart:nitrite:3.4.2'
3.2. Quick Examples
// java initialization
Nitrite db = Nitrite.builder()
.compressed()
.filePath("/tmp/test.db")
.openOrCreate("user", "password");
// android initialization
Nitrite db = Nitrite.builder()
.compressed()
.filePath(getFilesDir().getPath() + "/test.db")
.openOrCreate("user", "password");
For more options on opening a database, see the Create/Open Database section.
// Create a Nitrite Collection
NitriteCollection collection = db.getCollection("test");
// Create an Object Repository
ObjectRepository<Employee> repository = db.getRepository(Employee.class);
// Create an Object Repository with a key
ObjectRepository<Employee> repository = db.getRepository("key", Employee.class);
// create a document to populate data
Document doc = createDocument("firstName", "John")
.put("lastName", "Doe")
.put("birthDay", new Date())
.put("data", new byte[] {1, 2, 3})
.put("fruits", new ArrayList<String>() {{ add("apple"); add("orange"); add("banana"); }})
.put("note", "a quick brown fox jump over the lazy dog");
// insert the document
collection.insert(doc);
// update the document
collection.update(eq("firstName", "John"), createDocument("lastName", "Wick"));
// remove the document
collection.remove(doc);
// insert an object
Employee emp = new Employee();
emp.setEmpId(124589);
emp.setFirstName("John");
emp.setLastName("Doe");
repository.insert(emp);
// create document index
collection.createIndex("firstName", indexOptions(IndexType.NonUnique));
collection.createIndex("note", indexOptions(IndexType.Fulltext));
// create object index
repository.createIndex("firstName", indexOptions(IndexType.NonUnique));
Cursor cursor = collection.find(
// and clause
and(
// firstName == John
eq("firstName", "John"),
// elements of data array is less than 4
elemMatch("data", lt("$", 4)),
// elements of fruits list has one element matching orange
elemMatch("fruits", regex("$", "orange")),
// note field contains string 'quick' using full-text index
text("note", "quick")
)
);
for (Document document : cursor) {
// process the document
}
// create document by id
Document document = collection.getById(nitriteId);
// query an object repository and create the first result
Employee emp = repository.find(eq("firstName", "John"))
.firstOrDefault();
There are several find filters available for feature-rich search operations; see the Filter section for details.
// connect to a DataGate server localhost 9090 port
DataGateClient dataGateClient = new DataGateClient("http://localhost:9090")
.withAuth("userId", "password");
DataGateSyncTemplate syncTemplate
= new DataGateSyncTemplate(dataGateClient, "remote-collection@userId");
// create sync handle
SyncHandle syncHandle = Replicator.of(db)
.forLocal(collection)
// a DataGate sync template implementation
.withSyncTemplate(syncTemplate)
// replication attempt delay of 1 sec
.delay(timeSpan(1, TimeUnit.SECONDS))
// both-way replication
.ofType(ReplicationType.BOTH_WAY)
// sync event listener
.withListener(new SyncEventListener() {
@Override
public void onSyncEvent(SyncEventData eventInfo) {
}
})
.configure();
// start sync in the background using handle
syncHandle.startSync();
// Export data to a file
Exporter exporter = Exporter.of(db);
exporter.exportTo(schemaFile);
//Import data from the file
Importer importer = Importer.of(db);
importer.importFrom(schemaFile);
For detailed information, or when in doubt, please consult the javadoc. It is not plain vanilla javadoc; it is heavily annotated with examples and gotchas.
4. Nitrite Database
4.1. Create/Open Database
NitriteBuilder builder = Nitrite.builder();
Nitrite db = Nitrite.builder()
.filePath("/tmp/test.db")
.openOrCreate();
If a file path is provided and the file does not exist, the builder will create a new file-based database. If the file exists, the builder will try to open the existing database.
If the existing database file is corrupted, nitrite will try to recover it while opening by restoring the last known good version.
Nitrite db = Nitrite.builder()
.openOrCreate();
Nitrite db = Nitrite.builder()
.enableOffHeapStorage()
.openOrCreate();
If no file path is provided, the builder will create an in-memory database.
Nitrite db = Nitrite.builder()
.filePath("/tmp/test.db")
.disableAutoCommit()
.openOrCreate();
By default, auto-commit is enabled when creating a nitrite database, but it can also be disabled.
Nitrite db = Nitrite.builder()
.filePath("/tmp/test.db")
.autoCommitBufferSize(2048) // size is 2048 KB now
.openOrCreate();
If auto-commit is not disabled, nitrite commits changes whenever the size of unsaved changes exceeds the write buffer size. By default the buffer size is 1024 KB, but it can be customized from the builder.
Nitrite db = Nitrite.builder()
.filePath("/tmp/test.db")
.disableAutoCompact()
.openOrCreate();
By default, nitrite compacts the database file before close; if compaction is enabled, chunks are moved next to each other. Disabling compaction speeds up database close.
Nitrite db = Nitrite.builder()
.filePath("/tmp/test.db")
.readOnly()
.openOrCreate();
The builder can also open a database in read-only mode. When opened in read-only mode, nitrite will not persist any changes, and options like autoCommitBufferSize(size), compressed(), or disableAutoCommit() have no effect.
Nitrite db = Nitrite.builder()
.filePath("/tmp/test.db")
.compressed()
.openOrCreate();
A nitrite database can be compressed while saving the changes to the disk. The compression algorithm nitrite uses is LZF. This will save about 50% of the disk space, but it will slow down read and write operations slightly.
Nitrite db = Nitrite.builder()
.filePath("/tmp/test.db")
.textIndexingService(new MyTextIndexingEngine())
.openOrCreate();
Nitrite also provides some options for full-text indexing. Nitrite has its own full-text indexing engine, but a third-party full-text engine implementation such as Lucene can be supplied instead.
Nitrite db = Nitrite.builder()
.filePath("/tmp/test.db")
.textTokenizer(new MyBengaliTextTokenizer())
.openOrCreate();
Nitrite’s own full-text index engine supports the English language only. To use the same engine for other languages, a custom TextTokenizer implementation for that language should be configured in the builder.
Once a database is opened, it acquires an exclusive lock on the data file. So if a database is open in one process, any further attempt to open it from another process will fail. Properly closing the database releases the file lock.
While opening the database, nitrite registers a JVM shutdown hook which, before exit, closes the database without persisting any unsaved changes to disk. This shutdown hook protects the data file from corruption when the JVM shuts down before the database is properly closed.
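Because the shutdown hook drops unsaved changes, it is safer to commit and close the database explicitly. A minimal sketch using only the lifecycle calls shown in this guide (commit(), isClosed(), close()):

```java
import org.dizitart.no2.Nitrite;

public class SafeShutdown {
    public static void main(String[] args) {
        Nitrite db = Nitrite.builder()
                .filePath("/tmp/test.db")
                .openOrCreate();
        try {
            // ... perform collection/repository operations here ...
            db.commit();        // flush unsaved changes to disk explicitly
        } finally {
            if (!db.isClosed()) {
                db.close();     // persists remaining changes and releases the file lock
            }
        }
    }
}
```

The finally block guarantees the file lock is released even if an operation throws.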
4.1.1. Security
A nitrite database can be secured with a username/password pair. The username and password can be set only once, while creating the database. Nitrite does not store the raw password, so retrieving or changing the password is not possible. Adding a new username/password pair to an existing database is also not possible.
Nitrite db = Nitrite.builder()
.filePath("/tmp/test.db")
.openOrCreate("username", "password");
Nitrite does not support access control.
4.2. Close Database
A nitrite database should be closed before the program exits. When a database is opened, it acquires an exclusive lock on the data file; closing the database releases the lock. Before closing, nitrite persists all unsaved changes to disk and compacts the data file by moving all chunks next to each other.
To close a database call
db.close();
Once a database is closed, no further operation is possible on the instance without properly opening it again.
To check if a database is already closed, use the method below.
db.isClosed();
4.3. Create/Open Collections
To create or open a NitriteCollection call
NitriteCollection collection = db.getCollection("collectionName");
If no collection with that name exists in the database, a new collection with the given name will be created; if a collection with the same name already exists, it will be opened.
Similarly, to create or open an ObjectRepository call
// creates an object repository of type Employee
ObjectRepository<Employee> repository = db.getRepository(Employee.class);
// creates an object repository of type Employee with a specified key
ObjectRepository<Employee> repository = db.getRepository("key", Employee.class);
If no object repository of the given type exists in the database, a new one will be created; otherwise the existing one will be opened.
4.4. Commit
To persist unsaved data to disk, call
db.commit();
There is no need to call this method after every change if auto-commit is enabled while opening the db; however, it may still be called to flush all changes to disk.
To check if a nitrite database has any unsaved changes that have not been committed yet, use the method below.
boolean unsaved = db.hasUnsavedChanges();
4.5. Compaction
The nitrite data file can be compacted using the method below. Compaction is done by moving all chunks next to each other.
db.compact();
By default, auto-compaction is enabled; the database is compacted before close.
5. Document
Nitrite stores data as Documents, which are JSON-like field/value pairs. A Document is a schema-less data structure that can store arbitrary java objects.
Document doc = createDocument("firstName", "John")
.put("lastName", "Doe")
.put("birthDay", new Date())
.put("data", new byte[] {1, 2, 3})
.put("fruits", new ArrayList<String>() {{ add("apple"); add("orange"); add("banana"); }})
.put("note", "a quick brown fox jump over the lazy dog");
A document field:
- is case sensitive
- does not allow duplicates
- cannot be null
A document value:
- can be any java type
- can be another document
- can be null, except the '_id' field’s value
- if indexed, its java type should be primitive or implement java.lang.Comparable
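Since a value can itself be another document (and can be null), nested structures are built by putting one document inside another. A small sketch using the same createDocument/put API as above (the field names are illustrative):

```java
import org.dizitart.no2.Document;
import static org.dizitart.no2.Document.createDocument;

public class NestedDocumentExample {
    public static void main(String[] args) {
        // an embedded document used as a field value
        Document address = createDocument("street", "12 Some Street")
                .put("city", "Kolkata");

        Document person = createDocument("firstName", "John")
                .put("lastName", "Doe")
                .put("address", address)  // value is another document
                .put("nickName", null);   // null values are allowed (except for '_id')

        // nested fields can later be queried with the '.' separator, e.g. "address.city"
        System.out.println(person.get("address"));
    }
}
```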
A document can also be constructed directly from a JSON string.
6. NitriteId
A NitriteId is a unique identifier across a nitrite database. Each document in a NitriteCollection is associated with a unique NitriteId.
During insertion of a document, nitrite will generate a new NitriteId and put its value in the '_id' field of the document.
// create a document
Document docu = createDocument("name", "John Doe");
// insert the document in the collection
WriteResult writeResult = collection.insert(docu);
NitriteId nitriteId = Iterables.firstOrDefault(writeResult);
// assert that document now has _id field populated
assertEquals(nitriteId, docu.getId());
assertEquals(docu.get("_id"), nitriteId.getIdValue());
7. Collections
Nitrite supports two types of collections:
- NitriteCollection - for storing Documents
- ObjectRepository - for storing java objects
7.1. NitriteCollection
Nitrite stores documents in a NitriteCollection. A NitriteCollection is analogous to a table in an RDBMS. It is constructed using a NitriteMap, which internally maintains a counted B+ tree to store documents.
Indexes can be created on a NitriteCollection for faster retrieval.
Each NitriteCollection has a unique name across the database and can be created or opened by that name only. The db.getCollection(String name) call opens a NitriteCollection from the database; if it does not exist, it is created and returned.
A NitriteCollection is observable. Any modification to it can be listened to via an implementation of the ChangeListener interface. Each operation raises events like INSERT, UPDATE, REMOVE, etc.
NitriteCollection is thread-safe for concurrent use.
// create/open a collection named - test
NitriteCollection collection = db.getCollection("test");
// observe any change to the collection
collection.register(new ChangeListener() {
@Override
public void onChange(ChangeInfo changeInfo) {
// your logic based on action
}
});
Document doc = createDocument("firstName", "John")
.put("lastName", "Doe")
.put("birthDay", new Date())
.put("data", new byte[] {1, 2, 3})
.put("fruits", new ArrayList<String>() {{ add("apple"); add("orange"); add("banana"); }})
.put("note", "a quick brown fox jump over the lazy dog");
// insert a document into the collection
collection.insert(doc);
7.2. ObjectRepository
Along with NitriteCollection, nitrite also supports ObjectRepository: a persistent, generic collection of POJO classes. Internally it is backed by a NitriteCollection, where each object is converted into a Document and then stored.
An ObjectRepository also supports the same set of operations that NitriteCollection supports, and is likewise observable and thread-safe for concurrent use.
ObjectRepository does not allow null or an empty string as an id value.
// create/open a database
Nitrite db = Nitrite.builder()
.compressed()
.openOrCreate("user", "password");
// create an object repository
ObjectRepository<Employee> employeeStore = db.getRepository(Employee.class);
// observe any change to the repository
employeeStore.register(new ChangeListener() {
@Override
public void onChange(ChangeInfo changeInfo) {
// your logic based on action
}
});
// initialize an employee object
Employee emp = new Employee();
emp.setEmpId(20365);
emp.setName("John Doe");
emp.setJoinDate(new Date());
// insert the employee object
employeeStore.insert(emp);
// Employee class
@Indices({
@Index(value = "joinDate", type = IndexType.NonUnique),
@Index(value = "name", type = IndexType.Unique)
})
public class Employee implements Serializable {
@Id
private long empId;
private Date joinDate;
private String name;
// ... public getters and setters
}
ObjectRepository is thread-safe for concurrent use.
7.2.1. Annotations
Nitrite provides a set of annotations for entity objects used in an ObjectRepository. The annotations let Nitrite know various details about the ObjectRepository while constructing it, and also help reduce boilerplate code.
// Employee class
@Indices({
@Index(value = "joinDate", type = IndexType.NonUnique),
@Index(value = "name", type = IndexType.Unique)
})
public class Employee implements Serializable {
@Id
private long empId;
private Date joinDate;
private String name;
private String address;
// ... public getters and setters
}
The Index annotation tells Nitrite which field to index. The Id annotation marks a field as the id field, which is used to uniquely identify an object inside an ObjectRepository. More on these annotations will be discussed later.
7.2.2. NitriteMapper
Nitrite converts java objects to Documents before storing them in an ObjectRepository, and converts the Documents back to POJOs on retrieval. The conversion is seamless for end users and is managed by a NitriteMapper implementation. By default, NitriteMapper uses Jackson to convert a POJO to a field/value map, but a custom implementation can be set via NitriteBuilder.
Nitrite db = Nitrite.builder()
.nitriteMapper(new GSONMapper()) // custom NitriteMapper
.filePath("/tmp/test.db")
.openOrCreate("user", "password");
The default NitriteMapper:
- does not allow circular references (it will throw ObjectMappingException)
- needs POJO classes to have a public parameterless constructor
- honors fields declared as transient
As of 3.1.0, a jackson module can easily be registered with the default jackson mapper using the builder.
Nitrite db = Nitrite.builder()
.registerModule(new Jdk8Module()) // register jdk8 module
.registerModule(new JavaTimeModule()) // register java.time module
.filePath("/tmp/test.db")
.openOrCreate("user", "password");
7.2.3. Mappable
NitriteMapper relies on third-party serialization libraries for Document serialization. Those libraries depend heavily on reflection, and reflection takes its toll: in environments like Android, the use of reflection degrades performance drastically. To bypass this overhead, Nitrite provides the Mappable interface.
If an object is Mappable, Nitrite will use that implementation to convert the object to a Document and vice versa, bypassing the reflection overhead.
public class Employee implements Mappable {
private String empId;
private String name;
private Date joiningDate;
private Employee boss;
@Override
public Document write(NitriteMapper mapper) {
Document document = new Document();
document.put("empId", getEmpId());
document.put("name", getName());
document.put("joiningDate", getJoiningDate());
if (getBoss() != null) {
Document bossDoc = getBoss().write(mapper);
document.put("boss", bossDoc);
}
return document;
}
@Override
public void read(NitriteMapper mapper, Document document) {
if (document != null) {
setEmpId((String) document.get("empId"));
setName((String) document.get("name"));
setJoiningDate((Date) document.get("joiningDate"));
Document bossDoc = (Document) document.get("boss");
if (bossDoc != null) {
Employee bossEmp = new Employee();
bossEmp.read(mapper, bossDoc);
setBoss(bossEmp);
}
}
}
}
7.3. Operations
Collections support the usual CRUD operations and indexing operations, which are discussed in detail in the coming sections. Apart from these, they support other operations as well.
Drop
collection.drop();
It drops the collection and all indices associated with it. Any further access to a dropped collection results in an error.
boolean isDropped = collection.isDropped();
The above code checks whether a collection has already been dropped.
The drop() operation raises a DROP event.
Close
collection.close();
It closes the collection for further access. Once a NitriteCollection is closed, it can only be reopened from the nitrite instance. Any access to a closed collection results in an error.
boolean isClosed = collection.isClosed();
The above code checks whether the collection is already closed.
The close() operation raises a CLOSE event.
7.4. CRUD Operations
CRUD operations create, read, update, and delete documents/objects in collections.
7.4.1. WriteResult
Each modify operation returns a WriteResult. It represents the result of a modification operation on a collection. It is also an iterable construct which iterates over all affected NitriteIds.
WriteResult result = collection.insert(doc1, doc2, doc3);
System.out.println("Affected counts - " + result.getAffectedCount());
for (NitriteId id : result) {
System.out.println("Id - " + id);
}
7.4.2. Insert
Create or insert operations add new documents/objects to a collection.
_id Field
In nitrite, each document stored in a collection requires a unique '_id' field that acts as a primary key and identifies a document within the collection. During insertion, nitrite generates a new, unique NitriteId for every document and saves its value in the '_id' field of the document.
@Id Annotation
Each object in an ObjectRepository can be uniquely identified by a field marked with the @Id annotation. Nitrite maintains a unique index on that field to identify the objects. The id field of an object does not have any direct relation with the NitriteId, but the corresponding NitriteId for an object can still be retrieved.
WriteResult insert(Document document, Document... documents)
WriteResult insert(Document[] documents)
WriteResult insert(T object, T... others)
WriteResult insert(T[] objects)
// insert one document
collection.insert(doc1);
// insert multiple documents
collection.insert(doc1, doc2, doc3);
// another way to insert multiple documents
Document[] documents = new Document[] {doc1, doc2, doc3};
collection.insert(documents);
// create employee object
Employee emp1 = new Employee();
emp1.setEmpNumber(12548);
emp1.setEmpName("John Doe");
// insert employee object
repository.insert(emp1);
// insert multiple employee objects
repository.insert(emp1, emp2, emp3);
// another way to insert multiple objects
Employee[] employees = new Employee[] {emp1, emp2, emp3};
repository.insert(employees);
Error Scenario
An insert operation will result in an error if:
- the document/object is null
- a field of the document is indexed and it violates a unique constraint in the collection (if any)
An insert operation raises an INSERT event.
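A sketch of handling the unique-constraint failure at the call site rather than letting it propagate. It assumes the exception type UniqueConstraintException from the org.dizitart.no2.exceptions package; check the javadoc for the exact type thrown in your version:

```java
import org.dizitart.no2.Document;
import org.dizitart.no2.NitriteCollection;
import org.dizitart.no2.exceptions.UniqueConstraintException;

public class InsertGuard {
    // inserts a document, returning false instead of failing
    // when a unique index rejects it
    static boolean tryInsert(NitriteCollection collection, Document doc) {
        try {
            collection.insert(doc);
            return true;
        } catch (UniqueConstraintException e) {
            // a document with the same indexed value already exists
            return false;
        }
    }
}
```

This pattern is useful when duplicate inserts are an expected, recoverable condition rather than a programming error.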
7.4.3. Update
Update operations modify documents/objects in a collection.
WriteResult update(Document update)
WriteResult update(Document update, boolean upsert)
WriteResult update(Filter filter, Document update)
WriteResult update(Filter filter, Document update, UpdateOptions updateOptions)
WriteResult update(T element)
WriteResult update(T element, boolean upsert)
WriteResult update(ObjectFilter filter, T update)
WriteResult update(ObjectFilter filter, T update, boolean upsert)
WriteResult update(ObjectFilter filter, Document update)
WriteResult update(ObjectFilter filter, Document update, boolean justOnce)
If the filter is null, it will update all elements in the collection.
Update one element
WriteResult update(Document document)
WriteResult update(T object)
It updates a single element in the collection. The object must have a field marked with the @Id annotation.
Employee emp = new Employee();
emp.setEmpId(12564);
emp.setAddress("12 Some Street");
employeeRepository.insert(emp);
// update object
emp.setAddress("25 New Street");
employeeRepository.update(emp);
Document doc = createDocument("name", "John Doe")
.put("age", 30);
NitriteId nitriteId = doc.getId();
collection.insert(doc);
// update the document
Document document = collection.getById(nitriteId);
document.put("age", 31);
collection.update(document);
Update with Upsert
WriteResult update(Document update, boolean upsert)
WriteResult update(T object, boolean upsert)
The specified element must have an id. If the element is not found in the collection, it will be inserted only if the upsert option is set to true.
emp.setAddress("25 New Street");
// if emp object is not there in repository, it will be inserted
employeeRepository.update(emp, true);
Document document = createDocument("firstName", "John")
.put("lastName", "Doe");
// generates NitriteId of the document
document.getId();
// if filter does not find any document, it will insert 'document'
WriteResult updateResult = collection.update(document, true);
Update Multiple Objects
WriteResult update(Filter filter, Document update)
WriteResult update(ObjectFilter filter, T update)
WriteResult update(ObjectFilter filter, Document update)
Updates multiple elements in the collection. If the filter is null, it will update all objects in the collection.
Employee emp = new Employee();
emp.setEmpId(12564);
emp.setAddress("12 Some Street");
emp.setCity("Kolkata");
employeeRepository.insert(emp);
// update all employees' join date whose city = Kolkata
Employee empUpdate = new Employee();
// id field should not be set here
empUpdate.setJoinDate(new Date());
// if emp object is not there in repository, it will not insert
employeeRepository.update(eq("city", "Kolkata"), empUpdate);
Update Multiple Objects with Options
WriteResult update(Filter filter, Document update, UpdateOptions updateOptions)
WriteResult update(ObjectFilter filter, T update, boolean upsert)
WriteResult update(ObjectFilter filter, Document update, boolean justOnce)
Updates multiple elements in the repository. The update operation can be customized with updateOptions. If the filter is null, it will update all objects in the collection unless justOnce is set to true in updateOptions.
UpdateOptions
The update operation can be customized with updateOptions. It provides two options:
- Upsert - indicates whether the update operation will insert a new document if the filter does not find any existing document to update (default is false).
- JustOnce - indicates whether only one document will be updated, or all of them, if the filter finds multiple documents (default is false).
// simple update example
// update the documents whose firstName = fn1 with lastName = newLastName1
WriteResult updateResult = collection.update(eq("firstName", "fn1"),
createDocument("lastName", "newLastName1"));
// update with update options
// create an update options
UpdateOptions updateOptions = new UpdateOptions();
updateOptions.setJustOnce(true); // only first document will be updated
updateOptions.setUpsert(false); // no upsert
// update the document whose firstName != fn1 with lastName = newLastName1 but no upsert
// and it will update only 1 document
Document document = createDocument("lastName", "newLastName1");
WriteResult updateResult = collection.update(not(eq("firstName", "fn1")),
document, updateOptions);
Error Scenario
An update operation will result in an error if:
- the update parameter is null
- the updateOptions is null
- the update object does not have any id field for update(T, boolean) and update(T) operations
- the update object has a null value in its id field for update(T, boolean) and update(T) operations
An update operation raises an UPDATE or INSERT event.
7.4.4. Remove
Removes documents/objects from a collection.
WriteResult remove(Document element)
WriteResult remove(Filter filter)
WriteResult remove(Filter filter, RemoveOptions removeOptions)
WriteResult remove(T object)
WriteResult remove(ObjectFilter filter)
WriteResult remove(ObjectFilter filter, RemoveOptions removeOptions)
If the filter is null, it will remove all elements from the collection.
// removes all documents where firstName = John
collection.remove(eq("firstName", "John"));
// removes all documents
collection.remove(Filters.ALL);
// removes a single document
collection.remove(doc);
// removes all objects where firstName = John
repository.remove(eq("firstName", "John"));
// remove all objects
repository.remove(ObjectFilters.ALL);
RemoveOptions
The remove operation can be customized with removeOptions. It provides the option below:
- JustOnce - indicates whether only one document will be removed, or all of them, if the filter finds multiple documents in the collection (default is false).
RemoveOptions options = new RemoveOptions();
options.setJustOne(true);
// removes first document where firstName = John
collection.remove(eq("firstName", "John"), options);
A remove operation raises a REMOVE event.
7.4.5. Find
Finds documents/objects in a collection.
Cursor find()
Cursor find(FindOptions findOptions)
Cursor find(Filter filter)
Cursor find(Filter filter, FindOptions findOptions)
Document getById(NitriteId nitriteId);
Cursor<T> find()
Cursor<T> find(FindOptions findOptions)
Cursor<T> find(ObjectFilter filter)
Cursor<T> find(ObjectFilter filter, FindOptions findOptions)
T getById(NitriteId nitriteId);
A find operation will take advantage of an index if one exists for the field being queried. Further details about find filters are discussed in the Filter section.
// extracts all records from the collections
Cursor results = collection.find();
// extracts paginated records from the collections
Cursor results = collection.find(FindOptions.limit(0, 1));
// extracts all records and sorts them based on the value of 'age' field
Cursor results = collection.find(FindOptions.sort("age", SortOrder.Ascending));
// extracts all records where value of 'age' field is greater than 30
Cursor results = collection.find(Filters.gt("age", 30));
// finds all records where 'age' field value is greater than 30
// then sorts those records in ascending order and takes first 10 records
Cursor results = collection.find(Filters.gt("age", 30), FindOptions
.sort("age", SortOrder.Ascending)
.thenLimit(0, 10));
// gets a document from the collection corresponding to a NitriteId
Document document = collection.getById(id);
// extracts all objects from the repository
org.dizitart.no2.objects.Cursor<Employee> cursor = repository.find();
// extracts paginated employee records from the repository
Cursor<Employee> cursor = repository.find(limit(0, 1));
// extracts all employee records and sorts them based on the value of 'age' field
Cursor<Employee> cursor = repository.find(sort("age", SortOrder.Ascending));
// extracts all employee records where value of 'age' field is greater than 30
Cursor<Employee> cursor = repository.find(ObjectFilters.gt("age", 30));
// finds all employee records where 'age' field value is greater than 30
// then sorts those records in ascending order and takes first 10 records
Cursor<Employee> cursor = repository.find(ObjectFilters.gt("age", 30),
sort("age", SortOrder.Ascending)
.thenLimit(0, 10));
// gets an employee from the repository corresponding to a NitriteId
Employee employee = repository.getById(id);
Cursor
A Cursor is a lazy record iterator. It iterates over database search results and fetches Documents from the database on demand.
Cursor cursor = collection.find();
for (Document document : cursor) {
//...
}
Cursor<Employee> cursor = repository.find();
for (Employee employee : cursor) {
//...
}
A Cursor can also be used to project records into a different format. More on this is discussed in the Projection section.
FindOptions
A FindOptions is used to specify search options. It provides pagination as well as a sorting mechanism for the Cursor.
// sorts all records by age in ascending order then take first 10 records and return as a Cursor
Cursor results = collection.find(sort("age", SortOrder.Ascending).thenLimit(0, 10));
// sorts the records by age in ascending order
Cursor results = collection.find(sort("age", SortOrder.Ascending));
// sorts the records by name in ascending order with custom collator
Cursor results = collection.find(sort("name", SortOrder.Ascending, Collator.getInstance(Locale.FRANCE)));
// fetch 10 records starting from offset = 2
Cursor results = collection.find(limit(2, 10));
Filter Embedded Document/Object
A find operation can also query embedded documents/objects. Nitrite uses the field separator '.' for querying embedded objects.
// find a document which contains an "info" embedded document with age 30
Document doc = createDocument("firstName", "John")
.put("lastName", "Doe")
.put("info", createDocument("age", 30));
collection.insert(doc);
Cursor results = collection.find(Filters.eq("info.age", 30));
// find an employee whose note contains email addresses
@Data
@Indices({
@Index(value = "employeeNote.text", type = IndexType.Fulltext)
})
public class Employee implements Serializable {
@Id
private Long empId;
private Note employeeNote;
}
@Data
public class Note {
@Id
private Long noteId;
private String text;
}
Cursor<Employee> cursor = employeeRepository.find(regex("employeeNote.text", "^[a-z0-9._%+-]+@[a-z0-9.-]+\\.[a-z]{2,6}$"));
7.4.6. Projection
Projection converts a record within a cursor from one format to another. A projection operation can convert the document/object within a cursor into a subset document with only selected fields, or into another object with similar fields.
Cursor cursor = collection.find(lte("birthDay", new Date()),
sort("firstName", SortOrder.Ascending).thenLimit(0, 3));
// a document with only selected field - 'firstName' and 'lastName'
Document projection = createDocument("firstName", null)
.put("lastName", null);
// it will return documents containing only 'firstName' and 'lastName'
RecordIterable<Document> documents = cursor.project(projection);
// it will return Employees containing every field that has been inserted
Cursor<Employee> projection = repository.find();
// it will return list of SubEmployee objects containing only some fields
// of Employee object
List<SubEmployee> subEmployeeList
= repository.find().project(SubEmployee.class).toList();
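Conceptually, projecting a document onto selected fields amounts to copying only those keys from the underlying key-value structure. A plain-Java sketch, not the Nitrite API:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

// Conceptual sketch of projection: keep only the selected fields of a
// document, modeled here as a simple key-value map.
public class ProjectionSketch {
    static Map<String, Object> project(Map<String, Object> doc, Set<String> fields) {
        Map<String, Object> result = new LinkedHashMap<>();
        for (String field : fields) {
            if (doc.containsKey(field)) {
                result.put(field, doc.get(field));
            }
        }
        return result;
    }
}
```

Projecting to another type such as SubEmployee works the same way: only fields with matching names are carried over.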
7.4.7. Join
Performs a left outer join to a collection in the same database to filter in records from the “joined” collection. It does an equality match between a field from the input cursor with a field from the cursor of the “joined” collection.
Document doc1 = createDocument("firstName", "fn1")
.put("lastName", "ln1")
.put("birthDay", simpleDateFormat.parse("2012-07-01T16:02:48.440Z"))
.put("data", new byte[] {1, 2, 3})
.put("list", new ArrayList<String>() {{ add("one"); add("two"); add("three"); }})
.put("body", "a quick brown fox jump over the lazy dog");
Document doc2 = createDocument("firstName", "fn2")
.put("lastName", "ln2")
.put("birthDay", simpleDateFormat.parse("2010-06-12T16:02:48.440Z"))
.put("data", new byte[] {3, 4, 3})
.put("list", new ArrayList<String>() {{ add("three"); add("four"); add("three"); }})
.put("body", "quick hello world from nitrite");
Document doc3 = createDocument("firstName", "fn3")
.put("lastName", "ln2")
.put("birthDay", simpleDateFormat.parse("2014-04-17T16:02:48.440Z"))
.put("data", new byte[] {9, 4, 8})
.put("body", "Lorem ipsum dolor sit amet, consectetur adipiscing elit. " +
"Sed nunc mi, mattis ullamcorper dignissim vitae, condimentum non lorem.");
collection.insert(doc1, doc2, doc3);
// another collection
Document fdoc1 = createDocument("fName", "fn1")
.put("address", "ABCD Street")
.put("telephone", "123456789");
Document fdoc2 = createDocument("fName", "fn2")
.put("address", "XYZ Street")
.put("telephone", "000000000");
Document fdoc3 = createDocument("fName", "fn2")
.put("address", "Some other Street")
.put("telephone", "7893141321");
foreignCollection.insert(fdoc1, fdoc2, fdoc3);
// join operation
Lookup lookup = new Lookup();
lookup.setLocalField("firstName");
lookup.setForeignField("fName");
lookup.setTargetField("personalDetails");
RecordIterable<Document> result = collection.find().join(foreignCollection.find(), lookup);
The result will look like
{
firstName=fn1,
lastName=ln1,
birthDay=Sun Jul 01 16:02:48 IST 2012,
data= [1, 2, 3],
list= [
one,
two,
three
],
body="a quick brown fox jump over the lazy dog",
_id=9078368118890,
_revision=1,
_modified=1510638278124,
personalDetails= [
{
fName=fn1,
address=ABCD Street,
telephone=123456789,
_id=9078368118887,
_revision=1,
_modified=1510638278119
}
]
}
{
firstName=fn2,
lastName=ln2,
birthDay=Sat Jun 12 16:02:48 IST 2010,
data= [3, 4, 3],
list= [
three,
four,
three
],
body="quick hello world from nitrite",
_id=9078368118891,
_revision=1,
_modified=1510638278130,
personalDetails= [
{
fName=fn2,
address=XYZ Street,
telephone=000000000,
_id=9078368118888,
_revision=1,
_modified=1510638278123
},
{
fName=fn2,
address=Some other Street,
telephone=7893141321,
_id=9078368118889,
_revision=1,
_modified=1510638278123
}
]
}
{
firstName=fn3,
lastName=ln2,
birthDay=Thu Apr 17 16:02:48 IST 2014,
data= [ 9, 4, 8],
body="Lorem ipsum dolor sit amet, consectetur adipiscing elit. Sed nunc mi, mattis ullamcorper dignissim vitae, condimentum non lorem.",
_id=9078368118892,
_revision=1,
_modified=1510638278130
}
The join operation is also supported in ObjectRepository:
import lombok.Data;
@Data
public static class Person {
private String id;
private String name;
}
@Data
public static class Address {
private String personId;
private String street;
}
@Data
public static class PersonDetails {
private String id;
private String name;
private List<Address> addresses;
}
Lookup lookup = new Lookup();
lookup.setLocalField("id");
lookup.setForeignField("personId");
lookup.setTargetField("addresses");
RecordIterable<PersonDetails> result
= personRepository.find().join(addressRepository.find(), lookup,
PersonDetails.class);
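The left outer join performed by Lookup can be sketched in plain Java. This is a conceptual illustration of the equality match between localField and foreignField, not Nitrite's internal implementation:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Objects;

// Conceptual sketch of Lookup's left outer join: every local record is kept,
// and all foreign records whose foreignField value equals the local record's
// localField value are attached under targetField.
public class JoinSketch {
    static List<Map<String, Object>> join(List<Map<String, Object>> local,
                                          List<Map<String, Object>> foreign,
                                          String localField, String foreignField,
                                          String targetField) {
        List<Map<String, Object>> result = new ArrayList<>();
        for (Map<String, Object> rec : local) {
            Map<String, Object> joined = new LinkedHashMap<>(rec);
            List<Map<String, Object>> matches = new ArrayList<>();
            for (Map<String, Object> f : foreign) {
                if (Objects.equals(rec.get(localField), f.get(foreignField))) {
                    matches.add(f);
                }
            }
            if (!matches.isEmpty()) {
                joined.put(targetField, matches);
            }
            result.add(joined); // left outer: the record is kept even without matches
        }
        return result;
    }
}
```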
7.5. Events
Collections are observable by nature. Every modification raises an event, which a registered event listener can observe and act upon.
Available events are:
-
Insert - triggered when a new element is inserted
-
Update - triggered when an element is updated
-
Remove - triggered when an element is removed
-
Drop - triggered when a collection is dropped
-
Close - triggered when a collection is closed
EventListener & ChangeInfo
Every collection change can be observed via an event listener implementation. An event listener implementation must first be registered with a collection; the context information of each collection change event is then supplied to it via a ChangeInfo object.
// observe any change to a NitriteCollection
collection.register(new ChangeListener() {
@Override
public void onChange(ChangeInfo changeInfo) {
System.out.println("Action - " + changeInfo.getChangeType());
System.out.println("List of affected items:");
for (ChangedItem item : changeInfo.getChangedItems()) {
System.out.println("Change type - " + item.getChangeType());
System.out.println("Change timestamp - " + item.getChangeTimestamp());
System.out.println("Document - " + item.getDocument());
}
}
});
// observe any change to an ObjectRepository
repository.register(new ChangeListener() {
@Override
public void onChange(ChangeInfo changeInfo) {
System.out.println("Action - " + changeInfo.getChangeType());
System.out.println("List of affected items:");
for (ChangedItem item : changeInfo.getChangedItems()) {
System.out.println("Change type - " + item.getChangeType());
System.out.println("Change timestamp - " + item.getChangeTimestamp());
System.out.println("Document - " + item.getDocument());
}
}
});
Event listener code always executes in a background thread in a non-blocking fashion.
7.6. Indexing
Indexes help efficient execution of queries in Nitrite. Without indexes, Nitrite must scan every document in a collection to find matching documents. If an index exists for a field, Nitrite uses the index to limit the number of documents it must scan.
An index can be created at any time on an empty or a non-empty collection.
Indexes are stored in a NitriteMap, which internally maintains a counted B+ tree for persistent storage.
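The benefit of an index can be illustrated conceptually: an equality query on an indexed field consults a map from field value to document ids instead of scanning every document. A simplified plain-Java sketch, not the actual B+-tree-backed implementation:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.LinkedHashSet;
import java.util.Map;
import java.util.Set;

// Conceptual sketch of an index: a map from a field's value to the ids of
// documents holding that value, so an equality query avoids a full scan.
public class IndexSketch {
    private final Map<Object, Set<Long>> index = new HashMap<>();

    void add(Object fieldValue, long docId) {
        index.computeIfAbsent(fieldValue, v -> new LinkedHashSet<>()).add(docId);
    }

    Set<Long> find(Object fieldValue) {
        return index.getOrDefault(fieldValue, Collections.emptySet());
    }
}
```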
Nitrite supports indexes on any field or sub-field of the documents in a collection, provided
-
the field is not of array or collection type
-
the field contains values of a Comparable type
-
another index does not already exist on the same field
Compound indexes are not supported in Nitrite.
Type of Index
Nitrite supports three kinds of indexes:
-
Unique Index
-
Non-unique Index
-
Full-text Index
Create an Index
To create an index on a collection, use the function below:
void createIndex(String field, IndexOptions indexOptions)
Nitrite also supports indexing on embedded fields.
All indexing operations are synchronous and blocking in nature, unless IndexOptions.async is set to true.
Rebuild Index
To rebuild a corrupted index, call
void rebuildIndex(String field, boolean async)
Drop Index
To drop the index on a specific field, call
void dropIndex(String field)
And to drop all indices of a collection call
void dropAllIndices()
Other Index Utilities
To check if a field is already indexed call
boolean hasIndex(String field)
To check if indexing is currently running on a specific field, call
boolean isIndexing(String field)
To get the list of all index information of a collection, call
Collection<Index> listIndices()
if (!collection.hasIndex("firstName")) {
// create a non-unique index on the field 'firstName'
collection.createIndex("firstName",
indexOptions(IndexType.NonUnique, true));
}
// drop index on field 'lastName'
collection.dropIndex("lastName");
// rebuild index on age asynchronously
collection.rebuildIndex("age", true);
// print all index details of a collection
for(Index idx : collection.listIndices()) {
System.out.println("Field = " + idx.getField());
System.out.println("Index Type = " + idx.getIndexType());
}
Error Scenario
An indexing operation results in an error if:
-
an index is created on a field where an index already exists
-
an index is rebuilt on a field which is not indexed
-
an index is dropped on a field where indexing is currently running
-
an index is dropped on a field which is not indexed
-
a full-text index is created on a field which does not contain string values
-
any field value violates the unique constraint of a unique index
7.6.1. Text Index
Nitrite supports text indexing on collections. It scans documents and creates index entries by decomposing the text of an indexed field. Text indexing is supported only on fields of string data type.
Nitrite has its own text indexing engine, but a third-party text indexing engine like Lucene can also be configured.
Nitrite’s own text indexing engine is case insensitive by nature.
Create Text Index
collection.createIndex("notes", indexOptions(IndexType.Fulltext, true));
Tokenization & Stemming
Nitrite’s text indexing engine supports the delimiters below:
space \t \n \r \f + * % & / ( ) ? ' ! , . ; - _ # @ | ^ { } [ ] < > ` " = : ~ \
By default, Nitrite drops English stop words (e.g. the, an, a, and, etc.) before creating text index entries.
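The tokenization described above can be sketched in plain Java: split the text on the delimiter set, lowercase each token (the engine is case insensitive), and drop stop words. A conceptual illustration with a tiny sample stop-word list, not Nitrite's actual tokenizer:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.LinkedHashSet;
import java.util.Set;
import java.util.StringTokenizer;

// Conceptual sketch of Nitrite-style tokenization: split on delimiters,
// lowercase, and drop English stop words before creating index entries.
public class TokenizerSketch {
    private static final String DELIMITERS = " \t\n\r\f+*%&/()?'!,.;-_#@|^{}[]<>`\"=:~\\";
    // tiny sample; the real engine uses a full English stop-word list
    private static final Set<String> STOP_WORDS =
            new HashSet<>(Arrays.asList("the", "an", "a", "and"));

    static Set<String> tokenize(String text) {
        Set<String> tokens = new LinkedHashSet<>();
        StringTokenizer st = new StringTokenizer(text, DELIMITERS);
        while (st.hasMoreTokens()) {
            String token = st.nextToken().toLowerCase();
            if (!STOP_WORDS.contains(token)) {
                tokens.add(token);
            }
        }
        return tokens;
    }
}
```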
Universal Tokenizer
Filtering stop words for other languages can also be enabled using UniversalTextTokenizer, available from version 2.1.0 onwards.
UniversalTextTokenizer tokenizer = new UniversalTextTokenizer();
// enable tokenizer for bengali, english and chinese text only
tokenizer.loadLanguage(Languages.Bengali, Languages.English, Languages.Chinese);
// or, enable tokenization for all supported languages (resource heavy, as it loads all stop words in memory)
tokenizer.loadAllLanguages();
// initialize db with the universal tokenizer
Nitrite db = Nitrite.builder()
.textTokenizer(tokenizer)
.filePath("/tmp/test.db")
.openOrCreate();
Supported Languages
-
Afrikaans
-
Arabic
-
Armenian
-
Basque
-
Bengali
-
Breton
-
Bulgarian
-
Catalan
-
Chinese
-
Croatian
-
Czech
-
Danish
-
Dutch
-
English
-
Esperanto
-
Estonian
-
Finnish
-
French
-
Galician
-
German
-
Greek
-
Hausa
-
Hebrew
-
Hindi
-
Hungarian
-
Indonesian
-
Irish
-
Italian
-
Japanese
-
Korean
-
Kurdish
-
Latin
-
Latvian
-
Lithuanian
-
Malay
-
Marathi
-
Norwegian
-
Persian
-
Polish
-
Portuguese
-
Romanian
-
Russian
-
Sesotho
-
Slovak
-
Slovenian
-
Somali
-
Spanish
-
Swahili
-
Swedish
-
Tagalog
-
Thai
-
Turkish
-
Ukrainian
-
Urdu
-
Vietnamese
-
Yoruba
-
Zulu
Third-party Text Indexing Engine
Nitrite’s built-in text indexing only supports the English language. For other languages, a third-party text indexing engine like Lucene can be configured by implementing the TextIndexingService interface, as shown below:
/*
*
* Copyright 2017-2018 Nitrite author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*
*/
package org.dizitart.no2.services;
import com.fasterxml.jackson.annotation.JsonAutoDetect;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.StringField;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.*;
import org.apache.lucene.queryparser.classic.ParseException;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.search.*;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.RAMDirectory;
import org.dizitart.no2.NitriteId;
import org.dizitart.no2.exceptions.IndexingException;
import org.dizitart.no2.fulltext.TextIndexingService;
import java.io.IOException;
import java.util.LinkedHashSet;
import java.util.Set;
import static org.dizitart.no2.exceptions.ErrorMessage.errorMessage;
import static org.dizitart.no2.util.StringUtils.isNullOrEmpty;
public class LuceneService implements TextIndexingService {
private static final String CONTENT_ID = "content_id";
private static final int MAX_SEARCH = Byte.MAX_VALUE;
private IndexWriter indexWriter;
private ObjectMapper keySerializer;
private Analyzer analyzer;
private Directory indexDirectory;
public LuceneService() {
try {
this.keySerializer = new ObjectMapper();
keySerializer.setVisibility(keySerializer
.getSerializationConfig()
.getDefaultVisibilityChecker()
.withFieldVisibility(JsonAutoDetect.Visibility.ANY)
.withGetterVisibility(JsonAutoDetect.Visibility.NONE)
.withIsGetterVisibility(JsonAutoDetect.Visibility.NONE));
indexDirectory = new RAMDirectory();
analyzer = new StandardAnalyzer();
IndexWriterConfig iwc = new IndexWriterConfig(analyzer);
iwc.setOpenMode(IndexWriterConfig.OpenMode.CREATE_OR_APPEND);
indexWriter = new IndexWriter(indexDirectory, iwc);
} catch (IOException e) {
throw new IndexingException(errorMessage("could not create full-text index", 0), e);
} catch (VirtualMachineError vme) {
handleVirtualMachineError(vme);
}
}
@Override
public void createIndex(NitriteId id, String field, String text) {
try {
Document document = new Document();
String jsonId = keySerializer.writeValueAsString(id);
Field contentField = new TextField(field, text, Field.Store.NO);
Field idField = new StringField(CONTENT_ID, jsonId, Field.Store.YES);
document.add(idField);
document.add(contentField);
synchronized (this) {
indexWriter.addDocument(document);
}
} catch (IOException ioe) {
throw new IndexingException(errorMessage("could not write full-text index data for " + text, 0), ioe);
} catch (VirtualMachineError vme) {
handleVirtualMachineError(vme);
}
}
@Override
public void updateIndex(NitriteId id, String field, String text) {
try {
String jsonId = keySerializer.writeValueAsString(id);
Document document = getDocument(jsonId);
if (document == null) {
document = new Document();
Field idField = new StringField(CONTENT_ID, jsonId, Field.Store.YES);
document.add(idField);
}
Field contentField = new TextField(field, text, Field.Store.YES);
document.add(contentField);
synchronized (this) {
indexWriter.updateDocument(new Term(CONTENT_ID, jsonId), document);
}
} catch (IOException ioe) {
throw new IndexingException(errorMessage("could not update full-text index for " + text, 0), ioe);
} catch (VirtualMachineError vme) {
handleVirtualMachineError(vme);
}
}
@Override
public void deleteIndex(NitriteId id, String field, String text) {
try {
String jsonId = keySerializer.writeValueAsString(id);
Term idTerm = new Term(CONTENT_ID, jsonId);
synchronized (this) {
indexWriter.deleteDocuments(idTerm);
}
} catch (IOException ioe) {
throw new IndexingException(errorMessage("could not remove full-text index for " + id, 0));
} catch (VirtualMachineError vme) {
handleVirtualMachineError(vme);
}
}
@Override
public void deleteIndexesByField(String field) {
if (!isNullOrEmpty(field)) {
try {
Query query;
QueryParser parser = new QueryParser(field, analyzer);
parser.setAllowLeadingWildcard(true);
try {
query = parser.parse("*");
} catch (ParseException e) {
throw new IndexingException(errorMessage("could not remove full-text index for value " + field, 0));
}
synchronized (this) {
indexWriter.deleteDocuments(query);
}
} catch (IOException ioe) {
throw new IndexingException(errorMessage("could not remove full-text index for value " + field, 0));
} catch (VirtualMachineError vme) {
handleVirtualMachineError(vme);
}
}
}
@Override
public Set<NitriteId> searchByIndex(String field, String searchString) {
IndexReader indexReader = null;
try {
QueryParser parser = new QueryParser(field, analyzer);
parser.setAllowLeadingWildcard(true);
Query query = parser.parse("*" + searchString + "*");
indexReader = DirectoryReader.open(indexDirectory);
IndexSearcher indexSearcher = new IndexSearcher(indexReader);
TopScoreDocCollector collector = TopScoreDocCollector.create(MAX_SEARCH);
indexSearcher.search(query, collector);
TopDocs hits = collector.topDocs(0, MAX_SEARCH);
Set<NitriteId> keySet = new LinkedHashSet<>();
if (hits != null) {
ScoreDoc[] scoreDocs = hits.scoreDocs;
if (scoreDocs != null) {
for (ScoreDoc scoreDoc : scoreDocs) {
Document document = indexSearcher.doc(scoreDoc.doc);
String jsonId = document.get(CONTENT_ID);
NitriteId nitriteId = keySerializer.readValue(jsonId, NitriteId.class);
keySet.add(nitriteId);
}
}
}
return keySet;
} catch (IOException | ParseException e) {
throw new IndexingException(errorMessage("could not search on full-text index", 0), e);
} finally {
try {
if (indexReader != null)
indexReader.close();
} catch (IOException ignored) {
// ignored
}
}
}
private Document getDocument(String jsonId) {
IndexReader indexReader = null;
try {
Term idTerm = new Term(CONTENT_ID, jsonId);
TermQuery query = new TermQuery(idTerm);
indexReader = DirectoryReader.open(indexDirectory);
IndexSearcher indexSearcher = new IndexSearcher(indexReader);
TopScoreDocCollector collector = TopScoreDocCollector.create(MAX_SEARCH);
indexSearcher.search(query, collector);
TopDocs hits = collector.topDocs(0, MAX_SEARCH);
Document document = null;
if (hits != null) {
ScoreDoc[] scoreDocs = hits.scoreDocs;
if (scoreDocs != null) {
for (ScoreDoc scoreDoc : scoreDocs) {
document = indexSearcher.doc(scoreDoc.doc);
}
}
}
return document;
} catch (IOException e) {
throw new IndexingException(errorMessage("could not search on full-text index", 0), e);
} finally {
try {
if (indexReader != null)
indexReader.close();
} catch (IOException ignored) {
// ignored
}
}
}
@Override
public void drop() {
try {
indexDirectory = new RAMDirectory();
analyzer = new StandardAnalyzer();
IndexWriterConfig iwc = new IndexWriterConfig(analyzer);
iwc.setOpenMode(IndexWriterConfig.OpenMode.CREATE_OR_APPEND);
indexWriter = new IndexWriter(indexDirectory, iwc);
} catch (IOException e) {
throw new IndexingException(errorMessage("could not drop full-text index", 0), e);
}
}
@Override
public void clear() {
try {
indexWriter.deleteAll();
} catch (IOException e) {
throw new IndexingException(errorMessage("could not clear full-text index", 0), e);
}
}
private void handleVirtualMachineError(VirtualMachineError vme) {
if (indexWriter != null) {
try {
indexWriter.close();
} catch (IOException ioe) {
// ignore it
}
}
throw vme;
}
@Override
public void commit() {
try {
indexWriter.commit();
} catch (IOException e) {
throw new IndexingException(errorMessage("could not commit unsaved changes", 0), e);
}
}
@Override
public void close() {
if (indexWriter != null) {
try {
commit();
indexWriter.close();
} catch (IOException ioe) {
// ignore it
}
}
}
}
// Configure third-party indexing service while opening the database
Nitrite db = Nitrite.builder()
.textIndexingService(new LuceneService())
.filePath("/tmp/test.db")
.openOrCreate();
7.6.2. Index Annotations
For ObjectRepositories there is an alternative way of configuring indices: the object type can be annotated with the indexing information. While creating a new ObjectRepository, Nitrite scans the object type for indexing information and creates the indices if any are found.
import org.dizitart.no2.IndexType;
import org.dizitart.no2.objects.Id;
import org.dizitart.no2.objects.Index;
import org.dizitart.no2.objects.Indices;
@Indices({
@Index(value = "joinDate", type = IndexType.NonUnique),
@Index(value = "address", type = IndexType.Fulltext),
@Index(value = "employeeNote.text", type = IndexType.Fulltext)
})
public class Employee implements Serializable {
@Id
private long empId;
private Date joinDate;
private String address;
private Note employeeNote;
}
public class Note {
@Id
private long noteId;
private String text;
}
Index in Superclass
Generally, Nitrite only scans the immediate type for index annotations and skips the superclass. To enable scanning of the superclass for index annotations, a class should be marked with @InheritIndices.
@Indices(
@Index(value = "date", type = IndexType.Unique)
)
public class ParentClass {
@Id
public Long id;
public Date date;
}
@InheritIndices
public class ChildClass extends ParentClass {
public String name;
}
8. Filter
Filters are the selectors in the collection’s find operation. They match documents in the collection depending on the criteria provided and return a set of documents, a.k.a. a Cursor.
Each filtering criterion is based on a field of a document. If the field is indexed, the find operation takes advantage of it and only scans the index map for that field. If the field is not indexed, it scans the whole collection.
| Filter | Method | Description |
|---|---|---|
| Equals | eq(String, Object) | Matches values that are equal to a specified value. |
| Greater | gt(String, Object) | Matches values that are greater than a specified value. |
| GreaterEquals | gte(String, Object) | Matches values that are greater than or equal to a specified value. |
| Lesser | lt(String, Object) | Matches values that are less than a specified value. |
| LesserEquals | lte(String, Object) | Matches values that are less than or equal to a specified value. |
| In | in(String, Object[]) | Matches any of the values specified in an array. |
| NotIn | notIn(String, Object[]) | Matches none of the values specified in an array. |

| Filter | Method | Description |
|---|---|---|
| Not | not(Filter) | Inverts the effect of a filter and returns results that do not match the filter. |
| Or | or(Filter[]) | Joins filters with a logical OR; returns all ids of the documents that match the conditions of either filter. |
| And | and(Filter[]) | Joins filters with a logical AND; returns all ids of the documents that match the conditions of both filters. |

| Filter | Method | Description |
|---|---|---|
| Element Match | elemMatch(String, Filter) | Matches documents that contain an array field with at least one element that matches the specified filter. |

| Filter | Method | Description |
|---|---|---|
| Text | text(String, String) | Performs full-text search. |
| Regex | regex(String, String) | Selects documents where values match a specified regular expression. |
All filters for NitriteCollection find() operations are listed in the tables above.
8.1. Examples
// matches all documents where 'age' field has value as 30 and
// 'name' field has value as John Doe
collection.find(and(eq("age", 30), eq("name", "John Doe")));
// matches all documents where 'age' field has value as 30 or
// 'name' field has value as John Doe
collection.find(or(eq("age", 30), eq("name", "John Doe")));
// matches all documents where 'age' field has value not equals to 30
collection.find(not(eq("age", 30)));
// matches all documents where 'age' field has value as 30
collection.find(eq("age", 30));
// matches all documents where 'age' field has value greater than 30
collection.find(gt("age", 30));
// matches all documents where 'age' field has value greater than or equal to 30
collection.find(gte("age", 30));
// matches all documents where 'age' field has value less than 30
collection.find(lt("age", 30));
// matches all documents where 'age' field has value lesser than or equal to 30
collection.find(lte("age", 30));
// matches all documents where 'age' field has value in [20, 30, 40]
collection.find(in("age", 20, 30, 40));
// matches all documents where 'age' field does not have value in [20, 30, 40]
collection.find(notIn("age", 20, 30, 40));
// matches all documents which has an array field - 'color' and the array
// contains a value - 'red'.
collection.find(elemMatch("color", eq("$", "red")));
// matches all documents where 'address' field has a word 'roads'.
collection.find(text("address", "roads"));
// matches all documents where 'address' field has word that starts with '11A'.
collection.find(text("address", "11a*"));
// matches all documents where 'address' field has a word that ends with 'Road'.
collection.find(text("address", "*road"));
// matches all documents where 'address' field has a word that contains a text 'oa'.
collection.find(text("address", "*oa*"));
// matches all documents where 'address' field has words like '11a' and 'road'.
collection.find(text("address", "11a road"));
// matches all documents where 'address' field has word 'road' and another word that start with '11a'.
collection.find(text("address", "11a* road"));
// matches all documents where 'name' value starts with 'jim' or 'joe'.
collection.find(regex("name", "^(jim|joe).*"));
9. Replication
Replication synchronizes one Nitrite instance with another using a Nitrite DataGate server. Nitrite supports both-way replication.
Configuring replication is very easy in Nitrite, provided a DataGate server is already set up.
// open database
Nitrite db = Nitrite.builder()
.filePath("/tmp/test.db")
.openOrCreate("user", "password");
// create a collection
NitriteCollection collection
= db.getCollection("test");
// connect to a DataGate server
DataGateClient dataGateClient = new DataGateClient("http://localhost:9898")
.withAuth("userId", "password");
DataGateSyncTemplate syncTemplate
= new DataGateSyncTemplate(dataGateClient, "remote-collection@userId");
// create sync handle
SyncHandle syncHandle = Replicator.of(db)
.forLocal(collection)
// a DataGate sync template implementation
.withSyncTemplate(syncTemplate)
// replication attempt delay of 1 sec
.delay(timeSpan(1, TimeUnit.SECONDS))
// both-way replication
.ofType(ReplicationType.BOTH_WAY)
// sync event listener
.withListener(new SyncEventListener() {
@Override
public void onSyncEvent(SyncEventData eventInfo) {
}
})
.configure();
// start sync in the background using handle
syncHandle.startSync();
Replication is fully automatic after it is started and runs in a background thread.
The application code doesn’t have to pay attention to the details: it just knows that when it makes changes to the local Nitrite instance, they will eventually be replicated to all other remote Nitrite instances.
9.1. Security
Replication is a secure operation. Two sets of credentials are needed to successfully perform replication:
-
The client credential
-
The user credential
The client credential is required to create user credentials in the DataGate server. The user credential is required to perform several operations during the replication life cycle. Once a user credential is created, it can be used for replication.
A client credential can be created using the DataGate portal. Once it is created, an app can use that client credential to create further users. Those users will take part in the replication.
The user credential has USER authority and a client credential has CLIENT authority.
9.2. SyncHandle
SyncHandle is the handler for a replication job. It can be used by application code to control replication at various stages of the application life-cycle.
Start Replication
syncHandle.startSync();
Pause Replication
syncHandle.pauseSync();
If any replicator thread is currently running, it will not be paused; instead, the next iteration will be held back until replication is resumed by a resumeSync() call.
Resume Replication
syncHandle.resumeSync();
Reset Local
syncHandle.resetLocalWithRemote(0, 100);
This operation clears local collection and downloads server data. This operation supports pagination for downloading remote data.
Reset Remote
syncHandle.resetRemoteWithLocal(0, 100);
This operation clears server data and uploads local collection data. This operation supports pagination for uploading local data.
Cancel Sync
syncHandle.cancelSync();
This operation cancels the background replicator thread.
9.3. Algorithm
Nitrite DataGate server facilitates the replication between multiple Nitrite instances. The replication logic is triggered from the client end. It runs in a background thread at a configured interval. If the previous run has not completed within the configured interval, the next run is skipped.
Steps
- If the replication type is configured as PULL, the replicator downloads all changes from the server and updates the local collection accordingly:
  - It first checks if the server is online.
  - If online, it tries to acquire the sync lock for the server copy.
  - Once the lock is acquired, it reads the last sync time stored in the metadata of the local collection.
  - It gets the change feed from the server copy from the last sync time.
  - It updates the local collection with the remote change feed.
  - It updates the last sync time received from the server and stores it in the local metadata.
  - It releases the remote sync lock.
- If the replication type is configured as PUSH, the replicator uploads all changes from the local collection to the server:
  - It first checks if the server is online.
  - If online, it tries to acquire the sync lock for the server copy.
  - Once the lock is acquired, it reads the last sync time stored in the metadata of the local collection.
  - It gets the change feed of the local collection from the last sync time to the current time.
  - It updates the server with the local change feed.
  - It updates the last sync time metadata in the local collection.
  - It releases the remote sync lock.
- If the replication type is configured as BOTH_WAY, the replicator merges changes:
  - It first checks if the server is online.
  - If online, it tries to acquire the sync lock for the server copy.
  - Once the lock is acquired, it reads the last sync time stored in the metadata of the local collection.
  - It gets the change feeds of both the server copy and the local collection from the last sync time.
  - It updates the server copy with the local change feed.
  - It updates the local collection with the remote change feed.
  - It updates the last sync time received from the server and stores it in the local metadata.
  - It releases the remote sync lock.
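The BOTH_WAY steps above can be sketched as one replication iteration in plain Java. SyncPeer is a hypothetical interface used only for illustration; it is not part of the Nitrite or DataGate API, and the online check and sync-lock handling are omitted for brevity:

```java
import java.util.List;

// Plain-Java sketch of one BOTH_WAY replication iteration: exchange change
// feeds since the last sync time, then record the server-supplied sync time.
public class BothWaySketch {
    // hypothetical abstraction over the local collection / server copy
    interface SyncPeer {
        List<String> changeFeedSince(long since); // changes since a timestamp
        void apply(List<String> feed);            // merge a change feed
    }

    // returns the new last-sync time to store in the local metadata
    static long replicateOnce(SyncPeer local, SyncPeer remote,
                              long lastSyncTime, long serverTime) {
        List<String> localFeed = local.changeFeedSince(lastSyncTime);
        List<String> remoteFeed = remote.changeFeedSince(lastSyncTime);
        remote.apply(localFeed); // push local changes to the server copy
        local.apply(remoteFeed); // pull remote changes into the local collection
        return serverTime;       // server-supplied time becomes the new last sync time
    }
}
```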
Sync Lock
Sync lock is a mechanism to let other replicating clients know that the remote server is currently replicating with a client, so that others will not attempt to change the server data until the replication is completed.
A lock is acquired by updating the syncLock metadata attribute with the current epoch time on the server. Every sync lock write is also associated with an expiry time, stored by updating the expiryWait metadata attribute.
In case a replicating client fails to release the sync lock, another client waits until the server’s current epoch time is greater than the value of the expiryWait metadata. Once it expires, the new client acquires the lock and starts replication.
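The takeover rule reduces to a one-line check. A conceptual sketch, where expiryWait models the absolute expiry timestamp stored in the server metadata:

```java
// Conceptual sketch of the sync-lock takeover rule: a waiting client may
// acquire the lock only once the server's current epoch time has passed
// the expiryWait timestamp written by the previous lock holder.
public class SyncLockSketch {
    static boolean canTakeOver(long serverEpochNow, long expiryWait) {
        return serverEpochNow > expiryWait;
    }
}
```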
ChangeFeed
ChangeFeed is the list of all cumulative changes in a collection (both remote and local) within a certain time period. The feed is obtained using document metadata and the remove log of a collection.
Once replication is enabled for a collection, it maintains a remove log which keeps track of documents removed from the local collection. The replicator collects the details of deleted documents from the remove log for a certain time interval to create the removed feed.
The replicator uses the _created and _modified attributes of a document to generate the updated feed and created feed of a ChangeFeed.
Sync Events
There are six different events for the various life cycle stages of a replication:
-
STARTED
-
IN_PROGRESS
-
COMPLETED
-
CANCELED
-
STOPPED
-
REPLICATION_ERROR
10. Under The Hood
10.1. NitriteStore
NitriteStore is the storage abstraction layer of the Nitrite database. Currently it uses the h2 database’s MVStore as the underlying storage implementation.
A NitriteStore houses NitriteMaps, which are the building blocks of database collections.
10.2. NitriteMap
NitriteMap is a named, persistent key-value map which is the main building block of NitriteCollection and ObjectRepository. A NitriteMap is implemented using the h2 database’s MVMap.
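Conceptually, a NitriteMap behaves like a named, sorted key-value map. A plain-Java analogy using TreeMap; the real implementation is backed by h2's MVMap and persisted through MVStore:

```java
import java.util.NavigableMap;
import java.util.TreeMap;

// Plain-Java analogy of a NitriteMap: a named, sorted key-value map.
public class NamedMapSketch<K extends Comparable<K>, V> {
    private final String name;
    private final NavigableMap<K, V> backing = new TreeMap<>();

    public NamedMapSketch(String name) { this.name = name; }
    public String getName() { return name; }
    public void put(K key, V value) { backing.put(key, value); }
    public V get(K key) { return backing.get(key); }
    public K firstKey() { return backing.firstKey(); } // keys stay sorted
    public int size() { return backing.size(); }
}
```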
11. Tools
Nitrite comes with several tools for various database operations.
11.1. Data Exchange
Nitrite has a built-in data exchange tool. Data can be imported or exported as JSON.
// Export data to a file
Exporter exporter = Exporter.of(db);
exporter.exportTo(schemaFile);
// Import data from the file
Importer importer = Importer.of(db);
importer.importFrom(schemaFile);
ExportOptions
While exporting data, a user can choose what to export by means of the ExportOptions class.
ExportOptions exportOpt = new ExportOptions();
exportOpt.setExportIndices(false);
Exporter.of(db)
.withOptions(exportOpt)
.exportTo(schemaFile);
11.1.1. Data Format
Exchange of database data follows a specific format as described below.
{
"$schema": "http://json-schema.org/draft-04/schema#",
"type": "object",
"title": "Nitrite Data Exchange Format",
"description": "The data format for importing and exporting data out of Nitrite database.",
"properties": {
"collections": {
"type": "array",
"title": "List of all Nitrite Collections",
"items": {
"type": "object",
"properties": {
"name": {
"type": "string",
"title": "Name of the Collection"
},
"indices": {
"type": "array",
"title": "Indices",
"items": {
"type": "object",
"properties": {
"index": {
"type": "object",
"title": "Index",
"properties": {
"indexType": {
"type": "string",
"title": "Type of the Index"
},
"field": {
"type": "string",
"title": "Indexed field"
},
"collectionName": {
"type": "string",
"title": "Name of the Collection"
}
},
"required": [
"indexType",
"field",
"collectionName"
]
}
},
"required": [
"index"
]
}
},
"data": {
"type": "array",
"title": "Collection data format",
"items": {
"type": "object",
"title": "NitriteId and Document pairs",
"properties": {
"key": {
"type": "object",
"title": "NitriteId",
"properties": {
"idType": {
"type": "string",
"title": "Type of ObjectId"
},
"objectId": {
"type": "object",
"title": "ObjectId"
}
},
"required": [
"idType",
"objectId"
]
},
"value": {
"type": "object",
"title": "Document"
}
},
"required": [
"key",
"value"
]
}
}
},
"required": [
"name",
"indices",
"data"
]
}
},
"repositories": {
"type": "array",
"title": "List of all Object Repositories",
"items": {
"type": "object",
"properties": {
"type": {
"type": "string",
"title": "Type of the Object"
},
"indices": {
"type": "array",
"title": "Indices",
"items": {
"type": "object",
"properties": {
"index": {
"type": "object",
"title": "Index",
"properties": {
"indexType": {
"type": "string",
"title": "Type of the Index"
},
"field": {
"type": "string",
"title": "Indexed field"
},
"collectionName": {
"type": "string",
"title": "Internal name of the Object Repository"
}
},
"required": [
"indexType",
"field",
"collectionName"
]
}
},
"required": [
"index"
]
}
},
"data": {
"type": "array",
"title": "Repository data format",
"items": {
"type": "object",
"title": "NitriteId and Document pairs",
"properties": {
"key": {
"type": "object",
"title": "NitriteId",
"properties": {
"idType": {
"type": "string",
"title": "Type of ObjectId"
},
"objectId": {
"type": "object",
"title": "ObjectId"
}
},
"required": [
"idType",
"objectId"
]
},
"value": {
"type": "object",
"title": "Document"
}
},
"required": [
"key",
"value"
]
}
}
},
"required": [
"type",
"indices",
"data"
]
}
}
},
"required": [
"collections",
"repositories"
]
}
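For illustration, a minimal export file conforming to the schema above might look like the following. The structure follows the schema; the concrete values (collection name, index type, id type) are only examples:

```json
{
  "collections": [
    {
      "name": "test",
      "indices": [
        {
          "index": {
            "indexType": "Unique",
            "field": "firstName",
            "collectionName": "test"
          }
        }
      ],
      "data": [
        {
          "key": { "idType": "java.lang.Long", "objectId": {} },
          "value": { "firstName": "John" }
        }
      ]
    }
  ],
  "repositories": []
}
```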
11.2. Recovery
Nitrite also ships with a built-in data recovery tool to recover from a corrupted data file. While opening a database, if Nitrite finds that the file is corrupted, it uses this tool to recover the database from the last known good version.
This tool can also be used on demand from application code.
Recovery.recover(dataFilePath);
11.3. DataGate
Nitrite DataGate is the replication server for the Nitrite database. It comes as a separate product, available as a binary distribution or as a Docker image.
To get the latest binary distribution, please visit the Releases page on GitHub. The Docker image is available on Docker Hub:
docker pull dizitart/nitrite-datagate
Configuration
The DataGate server needs a MongoDB instance to run. To configure the Mongo details, edit the file datagate.properties inside the conf directory of the binary distribution and set the below properties:
# Mongo Config
datagate.mongo.host=
datagate.mongo.port=
datagate.mongo.user=
datagate.mongo.password=
datagate.mongo.database=
To run the server, execute the below command from the bin folder:
./datagate.sh
To configure and run the Docker image, some details such as the MongoDB connection need to be provided. Create a Dockerfile like the one below and build it for the desired result.
FROM dizitart/nitrite-datagate
COPY keystore.jks /
## Connection details (Replace with your own values)
ENV DATAGATE_HOST "0.0.0.0"
ENV DATAGATE_HTTP_PORT "8080"
ENV DATAGATE_HTTPS_PORT "8443"
ENV DATAGATE_MONITOR_PORT "9090"
ENV DATAGATE_KEY_STORE "keystore.jks"
ENV DATAGATE_KEY_PASSWORD "s3cret"
## Mongo connection details (Replace with your own values)
ENV DATAGATE_MONGO_HOST "192.168.0.100"
ENV DATAGATE_MONGO_PORT "2706"
ENV DATAGATE_MONGO_USER "demo"
ENV DATAGATE_MONGO_PASSWORD "demoPass"
ENV DATAGATE_MONGO_DATABASE "demo"
## Starts the server
RUN ["chmod", "+x", "./datagate.sh"]
ENTRYPOINT [ "./datagate.sh" ]
Once the server is up and running, access the admin portal using the URL:
http(s)://<ip>:<port>/datagate
12. Potassium Nitrite
Potassium Nitrite (KNO2) is a Kotlin extension of the Nitrite database. It aims to streamline the use of Nitrite with Kotlin by leveraging language features like extension functions, builders, infix functions, etc.
To use potassium-nitrite in any Kotlin application, just add the below dependency:
Maven
<dependency>
<groupId>org.dizitart</groupId>
<artifactId>potassium-nitrite</artifactId>
<version>3.4.2</version>
</dependency>
Gradle
compile 'org.dizitart:potassium-nitrite:3.4.2'
12.1. Initialization
The database can be initialized using the builder method nitrite:
// without credentials
val db = nitrite {
file = File(fileName) // or, path = fileName
autoCommitBufferSize = 2048
compress = true
autoCompact = false
}
// with credentials
val db = nitrite("userId", "password") {
file = File(fileName) // or, path = fileName
autoCommitBufferSize = 2048
compress = true
autoCompact = false
}
NitriteCollection and ObjectRepository can be initialized as follows:
// add import statement
import org.dizitart.kno2.*
// Initialize a Nitrite Collection
val collection = db.getCollection("test") {
insert(documentOf("a" to 1),
documentOf("a" to 2),
documentOf("a" to 3),
documentOf("a" to 4),
documentOf("a" to 5))
val cursor = find(limit(0, 2))
}
// Initialize an Object Repository
val repository = db.getRepository<Employee> {
insert(Employee(1, "red"), Employee(2, "yellow"))
}
The library has some builder methods to create documents:
// add import statement
import org.dizitart.kno2.*
// create empty document
val doc = emptyDocument()
val doc = documentOf()
// create a document with one pair
val doc = documentOf("a" to 1)
// create a document with more pairs
val doc = documentOf("a" to 1, "b" to 2, "c" to 3)
12.2. Filters
Potassium Nitrite has some convenient infix functions for creating search filters.
Document Filter
// add import statement
import org.dizitart.kno2.filters.*
// equivalent to eq("a", 1)
val cursor = find("a" eq 1)
// equivalent to gt("a", 1)
val cursor = find("a" gt 1)
// equivalent to gte("a", 1)
val cursor = find("a" gte 1)
// equivalent to lt("a", 1)
val cursor = find("a" lt 1)
// equivalent to lte("a", 1)
val cursor = find("a" lte 1)
// equivalent to `in`("a", arrayOf(1, 2, 5))
val cursor = find("a" within arrayOf(1, 2, 5))
// equivalent to `in`("a", 1..5)
val cursor = find("a" within 1..5)
// equivalent to `in`("a", listOf(1, 2, 3))
val cursor = find("a" within listOf(1, 2, 3))
// equivalent to elemMatch("a", `in`("$", 3..5))
val cursor = find("a" elemMatch ("$" within 3..5))
// equivalent to text("a", "*ipsum")
val cursor = find("a" text "*ipsum")
// equivalent to regex("a", "[a-z]+")
val cursor = find("a" regex "[a-z]+")
// equivalent to and(eq("a", 1), gt("b", 2))
val cursor = find(("a" eq 1) and ("b" gt 2))
// equivalent to or(eq("a", 1), gt("b", 2))
val cursor = find(("a" eq 1) or ("b" gt 2))
// equivalent to not("a" within 1..5))
val cursor = find(!("a" within 1..5))
Object Filters
Infix functions for object filters apply only to simple properties of Kotlin classes.
// add import statement
import org.dizitart.kno2.filters.*
@Indices(Index(value = "text", type = IndexType.Fulltext))
data class TestData(@Id val id: Int, val text: String, val list: List<ListData> = listOf())
class ListData(val name: String, val score: Int)
// equivalent to eq("id", 1)
val cursor = find(TestData::id eq 1)
// equivalent to gt("id", 1)
val cursor = find(TestData::id gt 1)
// equivalent to gte("id", 1)
val cursor = find(TestData::id gte 1)
// equivalent to lt("id", 1)
val cursor = find(TestData::id lt 1)
// equivalent to lte("id", 1)
val cursor = find(TestData::id lte 1)
// equivalent to `in`("id", 1..2)
val cursor = find(TestData::id within 1..2)
// equivalent to elemMatch("list", eq("score", 4))
val cursor = find(TestData::list elemMatch (ListData::score eq 4))
// equivalent to text("text", "*u*")
val cursor = find(TestData::text text "*u*")
// equivalent to regex("text", "[0-9]+")
val cursor = find(TestData::text regex "[0-9]+")
// equivalent to and(eq("id", 1), text("text", "12345"))
val cursor = find((TestData::id eq 1) and (TestData::text text "12345"))
// equivalent to or(eq("id", 1), text("text", "12345"))
val cursor = find((TestData::id eq 1) or (TestData::text text "12345"))
// equivalent to not(lt("id", 1))
val cursor = find(!(TestData::id lt 1))
12.3. Others
Here are some utility functions for various other capabilities:
// export data to a file
db exportTo file
// import data from file
db importFrom file
12.4. Kotlin Data Class
The library has built-in support for Kotlin data classes via the jackson-kotlin module. This module is already registered with the default Jackson mapper, so the user does not have to deal with it explicitly.
12.5. Tips
On Android, Nitrite might fail if the database is opened by two processes at the same time. The solution is the classic singleton design pattern: make sure only one process accesses the DB file, and let subsequent requests use the already built Nitrite object instead of trying to open the DB file again when it has already been opened in the same context.
class DBHandler {
companion object {
@Volatile private var INSTANCE: Nitrite? = null
fun getDBInstance(context: Context): Nitrite {
return INSTANCE ?: synchronized(this) {
INSTANCE ?: buildNitriteDB(context).also { INSTANCE = it }
}
}
private fun buildNitriteDB(context: Context): Nitrite {
return Nitrite.builder()
.compressed()
.filePath(context.filesDir.path + "/app.db")
.openOrCreate()
}
}
}