Media Capture using Oracle Visual Builder for Facial Recognition App

Recently I built a facial recognition mobile app using Oracle Visual Builder, with the facial recognition APIs set up using TensorFlow, taking some inspiration from FaceNet. The app does the following: it records a video of your face and sends it to an API that generates various images and classifies them based on a label provided at runtime. It then invokes another API that trains the machine learning model, updating the dataset with the new images and label. Together, these two APIs build a facial recognition database. Once I have this, I can capture a face and compare it with the dataset captured earlier in my facial recognition database, to report whether the face exists in the system.

Here is a quick demo of the app:

One of the neat capabilities in Oracle Visual Builder is the Take Photo action, which allows capturing an image using the device’s camera. However, the limitation of this functionality is that it doesn’t turn on video recording on the device when the camera action is invoked, as it is meant only for image capture. In this article, I would like to take you through the steps involved in enabling video recording on the camera function in VBCS.

In the latest release of Visual Builder, there is a feature that allows adding custom plugins that the default mobile template shipped with the product may not support out of the box.

The first step is to download the Cordova template used in Visual Builder.

Navigate to the Custom Plugins option in your mobile app project’s configuration settings:

As per the instructions, download the Cordova project source and unzip the contents into a directory.

As per the readme, there are certain prerequisites that you need to take care of before you add the plugin to the package. I added the plugin for the Android template. Here are the steps involved:

  1. Ensure Cordova is installed. If you have npm installed, issue the following command to get started with Cordova:

$npm install -g cordova

2. Install Java Development Kit (JDK) 8.

3. Install Android Studio for the Android SDK that is required to build the plugin package.

4. Install Gradle:

$brew install gradle

(if you are using a Mac and Homebrew is installed)

5. Make sure Gradle and Java are in your classpath.

6. Since we are building the plugin for the Android platform, execute the below commands in the directory where you extracted the contents. This will add all the available plugins to the config.xml in the project.

$cd cordova-package

$cordova platform add android --nosave

7. The most important part of the activity: add the plugin that you want packaged for your mobile app project.

In my case, I was interested in the media capture plugin that allows the device camera function to capture video recordings.

$cordova plugin add cordova-plugin-media-capture

8. Use the below command to build the plugin package (I didn’t have much success with the build.json file approach, and the below hack is what worked for me).

$read -p "Please enter store password:" password && cordova build android --debug -- --storePassword=$password

Replace --debug with --release if you want to build a release profile for the mobile app (typically used for production versions). Note that if the build is successful, the package is ready to be added to the mobile project.

Once the build template is ready, go back to the Visual Builder settings configuration screen and upload the package.

Make sure to select the right build type when you upload the package.

Now that we have the media plugin available, let’s have a look at how we can add the video capture functionality to our mobile app.

Create a Page on your Mobile App and add a button with a select event.

Open the Action chain corresponding to the Event. The idea is when the user hits the Scan button, we would like to pop up the Camera with a Video recording capability turned on.

As part of the Action Chain Sequence, drop a custom module function from the action palette and name it launchVideo.

Edit the custom module function and replace the script with the code below.

define([], function () {
  'use strict';
  var PageModule = function PageModule() { };

  /**
   * Launches the device camera in video capture mode.
   * @return {Promise} resolves with the path of the recorded video file
   */
  PageModule.prototype.launchVideo = function () {
    return new Promise(function (resolve, reject) {
      scanVideo(function (path) {
        console.log("after encoding: " + path);
        resolve(path);
      }, function (error) {
        console.log(error);
        reject("The error is: " + error);
      });
    });
  };

  function scanVideo(callback, errorCallback) {
    console.log("Launch Video");
    // Only subscribe to events after deviceready fires
    document.addEventListener('deviceready', onDeviceReady);

    function onDeviceReady() {
      console.log("Launch Device Ready");

      // capture success callback
      var captureSuccess = function (mediaFiles) {
        console.log("Captured Video");
        var i, path, len;
        for (i = 0, len = mediaFiles.length; i < len; i += 1) {
          path = mediaFiles[i].fullPath;
          // do something interesting with the file
          console.log("file Path=" + path);
          callback(path);
        }
      };

      // capture error callback
      var captureError = function (error) {
        navigator.notification.alert('Error code: ' + error.code, null, 'Capture Error');
        errorCallback(error);
      };

      // limit to a single recording of at most 5 seconds
      var recordingOptions = { limit: 1, duration: 5 };

      // start video capture
      navigator.device.capture.captureVideo(captureSuccess, captureError, recordingOptions);
    }
  }

  return PageModule;
});

Essentially, we are wrapping the video invocation inside a promise by calling the functions within the plugin. Notice that I am limiting the number of videos to 1, with a duration of 5 seconds.

The output of this is a path to the video file captured through the device camera.
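For instance, before uploading, the returned path can be split to get just the file name. A minimal sketch (the helper name is my own illustration, not part of the plugin):

```javascript
// Hypothetical helper: extract the file name from a capture path,
// such as the fullPath values returned by cordova-plugin-media-capture.
function fileNameFromPath(path) {
  // A fullPath typically looks like "/storage/emulated/0/DCIM/Camera/VID_0001.mp4"
  return path.substring(path.lastIndexOf('/') + 1);
}

console.log(fileNameFromPath('/storage/emulated/0/DCIM/Camera/VID_0001.mp4'));
// → VID_0001.mp4
```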

I have set up an API that can accept the video and generate the various images required for my TensorFlow model to train on the face, as shown in the demonstration above.

However, please note that if you have an API that accepts a video, you need to transform the footage into a base64 string before passing it to the API. You can refer to my earlier article, which has the code snippets for the video-to-base64 transformation.
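As a rough sketch of that transformation (assuming cordova-plugin-file is available on the device; `window.resolveLocalFileSystemURL` and `FileReader` are standard Cordova/browser APIs, while the two function names below are my own, not from the earlier article):

```javascript
// Strip the "data:<mime>;base64," prefix that FileReader.readAsDataURL prepends,
// leaving only the raw base64 payload a typical REST API expects.
function dataUrlToBase64(dataUrl) {
  var comma = dataUrl.indexOf(',');
  return comma === -1 ? dataUrl : dataUrl.substring(comma + 1);
}

// Hypothetical wiring: resolve the captured path and read the file as a data URL.
// Requires cordova-plugin-file on the device; this will not run in a plain browser.
function videoToBase64(path, onSuccess, onError) {
  window.resolveLocalFileSystemURL(path, function (fileEntry) {
    fileEntry.file(function (file) {
      var reader = new FileReader();
      reader.onloadend = function () {
        onSuccess(dataUrlToBase64(reader.result));
      };
      reader.onerror = onError;
      reader.readAsDataURL(file);
    }, onError);
  }, onError);
}
```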

This is a good example of using Oracle Visual Builder to quickly build mobile apps with rich functionality and very little code, thanks to the extensible nature of the default mobile templates shipped with the product. These apps can also be configured to integrate with back-end APIs in minutes, in order to deploy a feature-rich, enterprise-grade app.

Thank you for your time and hope you got something out of this article.
