Here is a quick cheat sheet for building a mobile app that uses the camera built into the device to capture one or more vehicle number plates in a frame, then sends that image to an API that analyzes it and relays back the information it just scanned. The app can be extended to meet requirements such as checking whether a vehicle's registration is up to date or its insurance renewal is overdue, provided APIs are already available that can deliver this information.
So what is the tech involved in building this app?
- To build a mobile app that can be deployed on iOS or Android, I used the Visual Builder service from the Oracle Cloud stack. This service provides the capability to build web as well as mobile applications through a declarative approach, with the ability to introduce code for any complex requirements.
- To store the captured image and use it for downstream application purposes, I used the Oracle Content & Experience service, which comes with a rich set of APIs for content ingestion, public document link generation, etc. From an enterprise architecture viewpoint, it makes sense to store the images with metadata in a content store, so I decided to archive the image using this service as part of the mobile app build process.
- The most significant bit is a library/API that can run OCR on the image and send back the information we are interested in. For this purpose, I used the open-source OpenALPR library. There are hosted APIs already available if you want to fast-track your app.
- This one is optional. If you want to validate the information captured, you can set up a few APIs backed by the Oracle Autonomous Database with some data to complete the validation flow in the app.
This is what the architecture would look like:

Here is what the outcome looks like when I capture a picture of a few vehicles in a single frame.


Let's see how we can build this app:
Step 1: Create a mobile application in Visual Builder with the foundation template and create a service connection with the API endpoints
Quick mobile app build process using Visual Builder
Step 2: Define the API endpoints
a) Create the Document Service upload API when you are creating the API connection in the Services tab, as in Step 1.
API endpoint: https://contentserviceurl/documents/api/1.2/files/data


So we need to make sure we provide a sample response JSON object in the “Response” tab. Here is what the sample response would look like when an image is uploaded to the content service.
{
  "createdBy": {
    "displayName": "Service Account - The Fort",
    "id": "U8F8D928598D6E47D74E87444ECDF285B9A9",
    "loginName": "svc_the_fort",
    "type": "user"
  },
  "createdTime": "2019-02-08T04:37:21Z",
  "errorCode": "0",
  "errorKey": "!csServiceStatusMessage_checkin,SOMETHING00006755310000000014",
  "errorMessage": "Successfully checked in content item 'SOMETHING00006755310000000014'.",
  "id": "D29D60778C6832155A81F450662CF1C8848548F2BAE5",
  "mimeType": "image/png",
  "modifiedBy": {
    "displayName": "Service Account - The Fort",
    "id": "U8F8D928598D6E47D74E87444ECDF285B9A9",
    "loginName": "svc_the_fort",
    "type": "user"
  },
  "modifiedTime": "2019-02-08T04:37:21Z",
  "name": "44.png",
  "ownedBy": {
    "displayName": "Vijay Kumar Yenne",
    "id": "U116511163DB94D14727824C38C87F1C7872",
    "loginName": "vijaykumar.yenne@oracle.com",
    "type": "user"
  },
  "parentID": "F51FC2867E31B155EC90CF3C0FF2A9973DA95D8877B1",
  "size": "2446908",
  "type": "file",
  "version": "1"
}
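Outside Visual Builder, here is a minimal sketch of what this upload call looks like, assuming a Node.js 18+ environment (global fetch, FormData and Blob) and Basic authentication; the credentials and the file name are placeholders, the "jsonInputParameters" and "primaryFile" parts are the same ones we map in the payload later in this article:

// Minimal sketch of the Content & Experience file upload call (placeholder credentials and file).
const fs = require("fs");

async function uploadImage(filePath, parentFolderId) {
  const form = new FormData();
  // Metadata part: the target folder for the uploaded file.
  form.append("jsonInputParameters", JSON.stringify({ parentID: parentFolderId }));
  // File part: the captured image as a Blob with a file name.
  form.append("primaryFile", new Blob([fs.readFileSync(filePath)], { type: "image/jpeg" }), "44.jpg");

  const response = await fetch("https://contentserviceurl/documents/api/1.2/files/data", {
    method: "POST",
    headers: { Authorization: "Basic " + Buffer.from("user:password").toString("base64") },
    body: form
  });
  return response.json(); // Same shape as the sample response above (id, parentID, name, ...).
}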
b) Create a public document link API: https://contentserviceurl/documents/api/1.2/publiclinks/file/{fileId}
Note that there is a path parameter, fileId, which needs to be passed at runtime for the API to return the required results.


The JSON request sample:
{
  "assignedUsers": "@everybody",
  "linkName": "MyFileLinkOne",
  "role": "downloader"
}
The response body sample:
{
  "assignedUsers": "@everybody",
  "createdTime": "2019-02-14T14:46:25Z",
  "errorCode": "0",
  "id": "D50578746CE232FB46B76698F5BF6F1CF778CFB9D0A2",
  "lastModifiedTime": "2019-02-14T14:46:25Z",
  "linkID": "LD4D67DBC6E8DAF1FB911A72202B90F7D301E87CE579",
  "linkName": "MyFileLinkDuplicate",
  "ownedBy": {
    "displayName": "Vijay Kumar Yenne",
    "id": "U116511163DB94D14727824C38C87F1C7872",
    "loginName": "vijaykumar.yenne@oracle.com",
    "type": "user"
  },
  "role": "downloader",
  "type": "publiclink"
}
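For reference, a similar sketch of the public link call, under the same assumptions as the upload sketch above (placeholder host and credentials; the fileId comes from the upload response):

// Minimal sketch of the public link creation call (placeholder credentials).
async function createPublicLink(fileId) {
  const response = await fetch(
    "https://contentserviceurl/documents/api/1.2/publiclinks/file/" + fileId,
    {
      method: "POST",
      headers: {
        Authorization: "Basic " + Buffer.from("user:password").toString("base64"),
        "Content-Type": "application/json"
      },
      // Request body as in the sample above.
      body: JSON.stringify({
        assignedUsers: "@everybody",
        linkName: "MyFileLinkOne",
        role: "downloader"
      })
    }
  );
  return response.json(); // Contains linkID, used later to build the public image URL.
}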
c) Create a service connection for OCR / Image recognition.
You can set up an account at https://cloud.openalpr.com, get hold of the API key, and use the API endpoint below to pass the image captured through the mobile device:
https://api.openalpr.com/v2/recognize_url?image_url={{image_url}}&secret_key={{secret_key}}&recognize_vehicle=0&country={{country}}&state&return_image=0&topn=10&prewarp


API endpoint to be used: https://api.openalpr.com/v2
API Method: recognize_url
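As a quick sanity check outside the app, the same recognition call can be made directly. Below is a minimal sketch; the secret key, image URL and country code are placeholders, and the exact response shape may vary, but it includes a results array with the recognized plate and a confidence score:

// Minimal sketch of the OpenALPR cloud recognition call (placeholder secret key, image URL, country).
async function recognizePlates(imageUrl, secretKey, country) {
  const url = "https://api.openalpr.com/v2/recognize_url" +
    "?image_url=" + encodeURIComponent(imageUrl) +
    "&secret_key=" + secretKey +
    "&recognize_vehicle=0&country=" + country +
    "&return_image=0&topn=10";

  const response = await fetch(url);
  const data = await response.json();
  // Each entry in results describes one detected plate (field names per the OpenALPR docs).
  for (const result of data.results || []) {
    console.log(result.plate, result.confidence);
  }
  return data;
}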


Step 3: Add UI functionality that opens the camera, captures the image, invokes the API and displays the information.
a) Navigate to the Mobile App and open the home-start page.

b) Drop a button on the content placeholder from the component catalogue and change the name of the button.

c) Create an event for when the "Scan Vehicle" button is invoked. Go to the Events tab and select the 'ojAction' event to create the binding and generate the associated action chain.


d) Drop the "Take Photo" action from the actions catalogue after the start event. This allows the app to open the camera on the device and capture the picture.

e) The image captured by the camera action cannot be mapped directly to the Content API, as the API expects the image to be in a binary format. Also, the Content API for uploading assets accepts the multipart/form-data content type only.
So let's see how we can manage this content integration with Visual Builder.
Select "Call Module Function" from the actions catalogue and drop it after the photo action in the canvas. This function provides the capability to invoke custom JavaScript code.

Hit "Select Module Function" and create a page function.


Assign the Input Parameters by mapping the Image captured from the camera action to the function’s argument.

Navigate to the home-start page and open the JS tab to edit the function we just created and add the required JavaScript code.

Replace the entire page JS with the JS code below:
All we are doing here is taking the path of the file captured through the camera action, converting it into a base64-encoded string, converting that into a binary blob, assigning a random name to the binary data, and returning it when the function is invoked.
define([], function() {
  'use strict';

  var PageModule = function PageModule() {};

  /**
   * Converts the image captured by the camera action into a binary Blob
   * that can be posted to the Content upload API.
   *
   * @param {String} arg1 the local file path returned by the Take Photo action
   * @return {Promise} resolves with a Blob carrying a random file name
   */
  PageModule.prototype.imageCheckin = function(arg1) {
    return new Promise(function(resolve, reject) {
      console.log("imagePath=" + arg1);
      convertFilePathToBase64(arg1, function(imageBlob) {
        console.log("after encoding: " + imageBlob.name);
        resolve(imageBlob);
      }, function(error) {
        console.log(error);
        reject("The error is: " + error);
      });
    });
  };

  // Reads the captured file (via the Cordova file plugin), base64-decodes it,
  // rebuilds it as a Blob with a random name and hands it to the callback.
  function convertFilePathToBase64(filePath, callback, errorCallback) {
    window.resolveLocalFileSystemURL(filePath, gotFile, fail);

    function fail(e) {
      errorCallback('Cannot find requested file: ' + JSON.stringify(e));
    }

    function gotFile(fileEntry) {
      fileEntry.file(function(file) {
        var reader = new FileReader();
        reader.onloadend = function() {
          // reader.result is a data URL; keep only the base64 payload after the comma.
          var base64String = reader.result.split(',').pop();
          var contentType = "image/jpeg";
          var sliceSize = 512;
          var byteCharacters = atob(base64String);
          var byteArrays = [];
          // Decode the base64 string in slices to avoid building one huge array.
          for (var offset = 0; offset < byteCharacters.length; offset += sliceSize) {
            var slice = byteCharacters.slice(offset, offset + sliceSize);
            var byteNumbers = new Array(slice.length);
            for (var i = 0; i < slice.length; i++) {
              byteNumbers[i] = slice.charCodeAt(i);
            }
            byteArrays.push(new Uint8Array(byteNumbers));
          }
          // Assign a random name so repeated uploads do not clash.
          var uniqueFilename = Math.floor(Math.random() * 1000);
          var blob = new Blob(byteArrays, { type: contentType });
          blob.name = uniqueFilename + ".jpg";
          callback(blob);
        };
        reader.readAsDataURL(file);
      });
    }
  }

  return PageModule;
});
Next, we drop a "Call REST Endpoint" action after the module call in the action chain and select the endpoint by browsing the connection we created at the beginning. For image check-in, we select the POST /files/data endpoint.


After you select the endpoint, we need to map the payload parameters required for the API to be invoked. Click on Assign and map the body to the static content below:


{
  "jsonInputParameters": "{{'{ \"parentID\": \"F51FC2867E31B155EC90CF3C0FF2A9973DA95D8877B1\" }'}}",
  "primaryFile": "{{$chain.results.callModuleFunction1}}"
}
The "parentID" is the folder id where you want the images to be stored. We could invoke another API to create the folder at runtime and grab that; however, for this app, we decided to save the images in one folder that is already created for us. You can log in to the Content UI and browse to the folder to grab the folder id required here from the browser URL.
The primaryFile is the return value from the module function we invoked before the REST call.
Once the image is ingested, we need to create a public link to the file so that we can pass this link to the recognition API.
Drop another REST call action on the success path of the above, select the "publiclink" generation endpoint, and map the required payload.






{
  "assignedUsers": "@everybody",
  "linkName": "{{$chain.results.callRestEndpoint2.body.id}}",
  "role": "downloader"
}
We need to pass an image URL that has to be constructed at runtime, as the above API call returns only the linkID and not the URL. For this purpose, we can create another module function that builds up the URL for us.
Just like earlier, we will create a function called getAssetURL and pass the linkId, docId, and docName from the API results into this function.



PageModule.prototype.getAssetURL = function(linkId, docId, docName) {
  var docUrl = "https://fort-cec-apacanzset01.cec.ocp.oraclecloud.com/documents/link/" + linkId + "/file/" + docId + "/_" + docName;
  return docUrl;
};
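For illustration only, calling this function directly with the ids from the earlier sample responses (assuming pageModule is an instance of the page module above) would produce a URL like this:

// Hypothetical direct call, using the linkID from the public link response
// and the id/name from the file upload response.
var imageUrl = pageModule.getAssetURL(
  "LD4D67DBC6E8DAF1FB911A72202B90F7D301E87CE579", // linkID
  "D29D60778C6832155A81F450662CF1C8848548F2BAE5", // file id
  "44.png"                                        // file name
);
// -> https://fort-cec-apacanzset01.cec.ocp.oraclecloud.com/documents/link/LD4D.../file/D29D.../_44.png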

Map the REST call results from the public link and image check-in calls to the input parameters of the function.


Create a flow variable at the home page level and assign the output of the callModuleFunction that constructs the image URL to this variable by dropping an assignVariable action in the action flow.








Once the URL is assigned, we can drop a Navigate action onto the flow so that we can display the details extracted from the image on a new details page.




Point the navigation target to the page we just created.



Open the details page and drop a list view onto the placeholder, which allows us to build up the page using the wizard.


Click on Add Data to populate the details.





That is all that is required for the app. We can now configure the app to be deployed on a device (iOS or Android).



Check out these two articles for the iOS and Android build profiles:
Once the profile is configured, hit the Play button and build your app; this generates the APK or IPA that can be installed on your device.

Thank you for taking the time to read this article. If you have any questions or feedback, drop me a note here.