How to implement Google Vision in Ionic 4
This blog focuses on implementing Google Vision in an Android / iOS application using the Ionic 4 framework. The app supports various features including, but not limited to, object detection, image labeling, text recognition, and face detection using the Google Vision API.
Introduction
Ionic Framework
Ionic Framework is an open-source UI toolkit for building performant, high-quality mobile and desktop apps using web technologies (HTML, CSS, and JavaScript). Ionic Framework concentrates mainly on the frontend user experience, or UI interaction, of an app (controls, interactions, gestures, animations).
Ionic Framework has been around for about five years, and in that time it has become very popular among developers for its ease of use compared to Swift / Java. Also, with Ionic 4 you get to keep a single source code base for both Android and iOS apps. What more can a developer ask for!
Ionic 4 is the latest version (at the time of writing this post) of Ionic and is much more reliable and robust than previous versions.
Google Vision API
Google Vision API is also known as Cloud Vision API.
Cloud Vision API allows developers to easily integrate vision detection features into applications, including image labeling, face and landmark detection, optical character recognition (OCR), and tagging of explicit content.
Cloud Vision API offers you the following features to be applied on images:
Label detection
Text detection
Safe search (explicit content) detection
Facial detection
Landmark detection
Logo detection
Image properties
Crop hints
Web detection
Document text detection
Object localizer
For more information about Google Vision API, click HERE.
Brief History of Machine Learning
Arthur Samuel, a pioneer in the field of artificial intelligence and computer gaming, coined the term “Machine Learning”. He defined machine learning as the “field of study that gives computers the capability to learn without being explicitly programmed”.
In layman's terms, Machine Learning (ML) can be described as automating and improving the learning process of computers based on their experiences, without them being explicitly programmed, i.e. without human assistance. The process starts with feeding in good-quality data and then training our machines (computers) by building machine learning models using the data and different algorithms.
ML is one of the most impressive technologies one will ever come across.
ML vs Traditional Programming
- Traditional Programming: We feed in DATA (input) + PROGRAM (logic), run it on a machine, and get the output.
- Machine Learning: We feed in DATA (input) + OUTPUT, run it on an ML model during training, and the ML model creates its own program (logic), which can then be evaluated during testing.
Implementing Google Vision in Ionic 4
Google Cloud’s Vision API offers robust pre-trained machine learning models through REST and RPC APIs. It assigns labels to images and quickly classifies them into millions of predefined classes or categories. It can detect objects and faces, read printed and handwritten text, and build valuable metadata into your image catalog.
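Before wrapping it in Ionic, it helps to see the raw REST call. Here is a minimal sketch of an annotate request using curl; YOUR_API_KEY and the base64 image content are placeholders you must fill in:
$ curl -X POST \
  -H "Content-Type: application/json" \
  "https://vision.googleapis.com/v1/images:annotate?key=YOUR_API_KEY" \
  -d '{
        "requests": [{
          "features": [{ "type": "LABEL_DETECTION", "maxResults": 10 }],
          "image": { "content": "BASE64_ENCODED_IMAGE" }
        }]
      }'
This is the same request shape our Ionic service will build later in this post.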
For implementing Google Vision in your Ionic 4 app, you can visit the given link or follow these steps:
- Install the Ionic CLI following the instructions here.
- Sign in to your Google Account.
- Select or Create a Google Cloud Platform Project.
Please make sure that billing is enabled for your Google Cloud Platform project. To enable billing, follow the procedure provided in the given link.
N.B.: You will not be able to get any response from the API until you enable billing.
- Enable the Cloud Vision API:
a. First, select a project.
b. After selecting the project, click on the “Go to APIs overview” button.
c. Click on “Enable APIs and Services”.
d. Search for Cloud Vision API.
e. Click on the enable button.
f. To use this API, click on Create Credentials.
g. Choose the Cloud Vision API from the list.
h. Click on “Yes I’m using one or both” and proceed further.
i. Click on the Done button.
j. Click on “Create Credentials”.
k. Click “API key” from options.
l. Copy the API key that pops up and save it in a safe place. It should not be made public.
m. Click the close button.
Congrats! Your API key has been generated.
- Create an Ionic project using the following command at the command prompt:
$ ionic start IonVision blank
This will create a blank new Ionic 4 project named IonVision.
When completed, simply hop into the newly created folder and install some of the required dependencies.
$ cd ./IonVision
Install the required Camera plugin and its Ionic Native wrapper using the following commands:
$ ionic cordova plugin add cordova-plugin-camera
$ npm install --save @ionic-native/camera
The above commands add the Cordova camera plugin and the Ionic Native library to our project.
Fire up your code editor and edit src/app/app.module.ts. Import the Camera and add it to the array of providers. Since our service will use Http from @angular/http, also add HttpModule to the module's imports array (importing it at the top alone is not enough):
import { Camera } from '@ionic-native/camera/ngx';
import { HttpModule } from '@angular/http';
...
imports: [BrowserModule, HttpModule, IonicModule.forRoot(), AppRoutingModule],
providers: [
  StatusBar,
  SplashScreen,
  Camera,
  { provide: RouteReuseStrategy, useClass: IonicRouteStrategy }
]
...
Now let's create a file to store our configuration details.
$ touch ./src/environment.ts
and add this construct to the newly created file:
export const environment = {};
Now create a service GoogleCloudVisionService using Ionic CLI:
$ ionic g service GoogleCloudVisionService
Now, open the newly created file (/src/app/google-cloud-vision-service.service.ts) and add the following:
import { Injectable } from '@angular/core';
import { Http } from '@angular/http';
import { environment } from '../environment';

@Injectable({
  providedIn: 'root'
})
export class GoogleCloudVisionServiceService {
  constructor(public http: Http) { }

  getLabels(base64Image, feature) {
    // Build the Vision API request: one image, one feature type, up to 10 results
    const body = {
      "requests": [
        {
          "features": [
            {
              "type": feature.value,
              "maxResults": 10
            }
          ],
          "image": {
            "content": base64Image
          }
        }
      ]
    };
    return this.http.post('https://vision.googleapis.com/v1/images:annotate?key=' + environment.googleCloudVisionAPIKey, body);
  }
}
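For reference, an abbreviated LABEL_DETECTION response from this endpoint looks roughly like the sketch below; the field names follow the Vision API, while the specific labels and scores here are made-up illustrations:
{
  "responses": [
    {
      "labelAnnotations": [
        { "mid": "/m/01yrx", "description": "Cat", "score": 0.98, "topicality": 0.98 },
        { "mid": "/m/068hy", "description": "Pet", "score": 0.95, "topicality": 0.95 }
      ]
    }
  ]
}
The showclass page we build later picks the relevant annotation array out of responses[0] depending on the selected feature.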
In this project, we will create an app that lets the user select any of these features. You can check out all the available features in the list above.
Your code editor may be complaining about our googleCloudVisionAPIKey reference. Let's add it to our environment.ts file, filling in the API key you generated earlier:
export const environment = {
  googleCloudVisionAPIKey: ""
};
Now that we have a fully configured app, we just have to take a new photo with the camera, analyze it, and show the result. On the home page, we will prompt the user to select a feature to apply and take the photo.
[photo homepage]
We will create another page to show the image and responses on the app.
$ ionic generate page showclass
Here, showclass is the name of the page.
Now, in /src/app/home/home.page.ts we will add the following code to take the picture, send it to Google Vision, and pass the response on to the showclass page.
We can get the picture from two sources, i.e. from the Camera and also from the Gallery.
So first we create a method for the Camera.
import { Camera, CameraOptions } from '@ionic-native/camera/ngx';
import { GoogleCloudVisionServiceService } from '../google-cloud-vision-service.service';
import { Router, NavigationExtras } from '@angular/router';
import { LoadingController } from '@ionic/angular';
...
constructor(
  ...
  private camera: Camera,
  private vision: GoogleCloudVisionServiceService,
  private route: Router,
  public loadingController: LoadingController,
) {}
async takePhoto() {
const options: CameraOptions = {
quality: 100,
targetHeight: 500,
targetWidth: 500,
destinationType: this.camera.DestinationType.DATA_URL,
encodingType: this.camera.EncodingType.JPEG,
mediaType: this.camera.MediaType.PICTURE,
// correctOrientation: true
}
this.camera.getPicture(options).then(async (imageData) => {
const loading = await this.loadingController.create({
message: 'Getting Results...',
translucent: true
});
await loading.present();
this.vision.getLabels(imageData,this.selectedfeature).subscribe(async (result) => {
console.log(result.json())
let navigationExtras: NavigationExtras = {
queryParams: {
special: JSON.stringify(imageData),
result : JSON.stringify(result.json()),
feature : JSON.stringify(this.selectedfeature)
}};
this.route.navigate(["showclass"],navigationExtras)
await loading.dismiss()
}, err => {
console.log(err);
});
}, err => {
console.log(err);
});
}
Here we have created a loading controller to show a loader while the image is sent to Google Vision and we wait for the response.
Now let’s create a method for the Gallery.
async selectPhoto(){
const options: CameraOptions = {
quality: 100,
destinationType: this.camera.DestinationType.DATA_URL,
encodingType: this.camera.EncodingType.JPEG,
mediaType: this.camera.MediaType.PICTURE,
sourceType: this.camera.PictureSourceType.SAVEDPHOTOALBUM
}
this.camera.getPicture(options).then(async (imageData) => {
// imageData is either a base64 encoded string or a file URI
// If it's base64:
const loading = await this.loadingController.create({
message: 'Getting Results...',
translucent: true
});
await loading.present();
this.vision.getLabels(imageData,this.selectedfeature).subscribe(async (result) => {
let navigationExtras: NavigationExtras = {
queryParams: {
special: JSON.stringify(imageData),
result : JSON.stringify(result.json()),
feature : JSON.stringify(this.selectedfeature)
}};
this.route.navigate(["showclass"],navigationExtras)
await loading.dismiss()
}, err => {
console.log(err);
});
}, (err) => {
console.log(err)
})
}
Now prompt the user to select a photo from the Gallery or take a new photo with the Camera. For that, we will create an alert controller for choosing between the Camera and Gallery options.
import { AlertController } from '@ionic/angular';
...
constructor(
...
public alertController: AlertController){}
...
async presentAlertConfirm() {
const alert = await this.alertController.create({
header: 'Select one option',
message: 'Take Photo or Select from Gallery!!!',
buttons: [
{
text: 'Camera',
role: 'camera',
handler: () => {
this.takePhoto();
}
}, {
text: 'Gallery',
role: 'gallery',
handler: () => {
this.selectPhoto();
}
}
]
});
await alert.present();
}
We have now created an alert for selecting Camera or Gallery, with a separate function for each source. After receiving a response, we navigate to the showclass page with that response.
Now let's create a function that will set the value of the feature from the radio button selection. For storing the value of the feature, we will also declare a class variable.
export class HomePage {
  // ion-radio-group's ionChange emits event.detail as { value: ... },
  // so selectedfeature.value always holds the feature type string
  selectedfeature: any = { value: 'LABEL_DETECTION' };
  ...
  radioGroupChange(event) {
    this.selectedfeature = event.detail;
  }
  ...
}
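The radio group also needs a list of features to render. The exact list is up to you; here is one possible features array for home.page.ts, a sketch in which the value strings are the official Vision API feature types and the name labels are our own:
// List of Vision API features offered to the user.
// value must match a Vision API feature type; name is just a display label.
features = [
  { name: 'Label Detection', value: 'LABEL_DETECTION' },
  { name: 'Text Detection', value: 'TEXT_DETECTION' },
  { name: 'Face Detection', value: 'FACE_DETECTION' },
  { name: 'Landmark Detection', value: 'LANDMARK_DETECTION' },
  { name: 'Logo Detection', value: 'LOGO_DETECTION' },
  { name: 'Safe Search Detection', value: 'SAFE_SEARCH_DETECTION' },
  { name: 'Image Properties', value: 'IMAGE_PROPERTIES' },
  { name: 'Crop Hints', value: 'CROP_HINTS' },
  { name: 'Web Detection', value: 'WEB_DETECTION' },
  { name: 'Document Text Detection', value: 'DOCUMENT_TEXT_DETECTION' },
  { name: 'Object Localization', value: 'OBJECT_LOCALIZATION' }
];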
Now we need to switch over to the HTML and edit /src/app/home/home.page.html.
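The exact markup is up to you; a minimal sketch of home.page.html, assuming the features array, radioGroupChange, and presentAlertConfirm from the snippets above, could look like this:
<ion-header>
  <ion-toolbar>
    <ion-title>IonVision</ion-title>
  </ion-toolbar>
</ion-header>

<ion-content>
  <!-- Radio group: picks the Vision API feature to apply -->
  <ion-list>
    <ion-radio-group (ionChange)="radioGroupChange($event)">
      <ion-item *ngFor="let feature of features">
        <ion-label>{{ feature.name }}</ion-label>
        <ion-radio [value]="feature.value"></ion-radio>
      </ion-item>
    </ion-radio-group>
  </ion-list>
  <!-- Opens the Camera / Gallery alert -->
  <ion-button expand="block" (click)="presentAlertConfirm()">Take / Select Photo</ion-button>
</ion-content>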
With that, we have completed the home page.
Now let’s accept the parameters in the showclass page. For that, we will add the following lines of code to /src/app/showclass/showclass.page.ts
import { ActivatedRoute, Router } from '@angular/router';
...
constructor(private route: ActivatedRoute, private router: Router) { }
Now create three class variables to store the values. We unpack the parameters we got from the home page and store the result according to the selected feature.
image:any
result:any
feature:any
...
ngOnInit() {
this.route.queryParams.subscribe(params => {
if (params && params.special && params.result && params.feature ) {
this.image = JSON.parse(params.special);
this.result = JSON.parse(params.result);
this.feature = JSON.parse(params.feature);
}
switch(this.feature.value){
case "TEXT_DETECTION":{
this.result = this.result.responses[0].textAnnotations
break;
}
case "FACE_DETECTION":{
this.result = this.result.responses
break;
}
case "OBJECT_LOCALIZATION":{
this.result = this.result.responses[0].localizedObjectAnnotations
break;
}
case "LANDMARK_DETECTION":{
this.result = this.result.responses[0].landmarkAnnotations
break;
}
case "LOGO_DETECTION":{
this.result = this.result.responses[0].logoAnnotations
break;
}
case "WEB_DETECTION":{
this.result = this.result.responses[0].webDetection.webEntities
break;
}
case "SAFE_SEARCH_DETECTION":{
this.result = this.result.responses
break;
}
case "IMAGE_PROPERTIES":{
this.result = this.result.responses[0].imagePropertiesAnnotation.dominantColors.colors
break;
}
case "CROP_HINTS":{
this.result = this.result.responses[0].cropHintsAnnotation.cropHints
break;
}
case "DOCUMENT_TEXT_DETECTION":{
this.result = this.result.responses[0].textAnnotations
break;
}
default:{
this.result = this.result.responses[0].labelAnnotations
}
}
console.log(this.result)
});
}
Now we have saved our image, feature name, and result. Let’s display the response using HTML and CSS on the showclass page.
Now, edit /src/app/showclass/showclass.page.html
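Again, the markup is only a minimal sketch, assuming the image, result, and feature members populated in ngOnInit above. The description and score bindings match label-style annotations; features such as FACE_DETECTION or IMAGE_PROPERTIES return differently shaped objects, so adjust the bindings for those cases:
<ion-header>
  <ion-toolbar>
    <ion-title>{{ feature?.value }}</ion-title>
  </ion-toolbar>
</ion-header>

<ion-content>
  <!-- imageData was captured as DATA_URL, i.e. a base64 string -->
  <img [src]="'data:image/jpeg;base64,' + image" />
  <ion-list>
    <ion-item class="itemSection" *ngFor="let item of result">
      <ion-label>{{ item.description }}</ion-label>
      <ion-note slot="end">{{ item.score | percent }}</ion-note>
    </ion-item>
  </ion-list>
</ion-content>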
I’m sure you must be missing some UI polish. For that, we will edit /src/app/showclass/showclass.page.scss:
img {
  height: 70vh !important;
  width: auto !important;
  margin-left: auto !important;
  margin-right: auto !important;
  margin-bottom: auto !important;
  margin-top: 10px;
  display: block !important;
  border: 1px solid #000;
  padding: 5px;
  border-radius: 4px;
}

.itemSection {
  display: flex;
  justify-content: center;
  align-items: center;
  width: 100%;
}

ion-content {
  background: #eee;
}
If you are a developer who wants to add Google Vision to a project, you can simply buy the Google Vision API Starter from our Store at a minimal cost. By buying this Starter, you can save hundreds of precious development hours on your Ionic 4 project. This Starter can be used in apps like a Food Detector app, a Place Finder app, etc.
The Enappd Developer Team provides free starters for beginners and experts to jump-start their development: store.enappd.com
YaY..!! 👻 You’re done. You have successfully implemented Google Vision in Ionic 4.