Offline First Progressive Web Applications

Offline First

What is the fundamental difference between an app and a website? What is that aspect that makes you consider an app poorly implemented, when we would tolerate the same thing on the Web?

Obviously there are many differences. Mobile user experience, user interfaces adapted to gestures, access to device capabilities such as camera or GPS.

However, there is one key factor that makes us hate an app: not working offline.

Can you imagine trying to access your previous emails and you can’t because you don’t have an Internet connection? It is frustrating on desktop Web, yes. But it is absolutely unbearable on mobile Web. And totally intolerable in an App / PWA.


Service Workers

A service worker is a script that the browser runs in the background, separate from the web page, providing functionality that does not depend on user interaction.


In other words, a service worker becomes something like a backend-in-the-browser: it runs code in parallel with the user interface logic and can intercept and manage network requests, as well as retrieve data from a custom-managed cache.

It is, in short, a technology that enables offline user experiences: if there is no Internet connection, the application's requests are intercepted and resolved by a cache that returns (for example) the last available data.
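This interception pattern can be sketched as a small "network first, fall back to cache" function. The sketch below is illustrative only: the fetch function and cache are injected so the logic can run outside a browser, whereas in a real service worker they would be the global fetch and the Cache API inside a fetch event handler.

```javascript
// Sketch of the "network first, fall back to cache" strategy a service
// worker's fetch handler can apply. fetchFn and cache are injected here so
// the logic can run anywhere; in a service worker they would be the global
// fetch and the Cache API.
async function networkFirst(request, fetchFn, cache) {
  try {
    const response = await fetchFn(request);   // try the network first
    cache.set(request, response);              // remember the latest data
    return response;
  } catch (offlineError) {
    const cached = cache.get(request);         // offline: serve last known data
    if (cached === undefined) throw offlineError;
    return cached;
  }
}
```

In a real service worker this logic lives inside a `fetch` event listener, wrapped with `event.respondWith(...)`.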

OK, this could solve intermittent Internet connections. But what about a prolonged, total loss of Internet access?

Offline Work and Sync Data

Creating an offline user experience can be complex, since there are many things to worry about, such as how to synchronize correctly when the Internet connection comes back, or how to manage conflicts when multiple users modify the same information at the same time.

Service workers are a great technology that partially solves the problem through an infinite cache while there is no connection, but they neither resolve conflicts nor merge information that has been modified locally while other clients have modified the same data and already published it to the Internet.

For this we need an offline operation and synchronization mechanism with conflict management.

PouchDB and 4-Way Data Binding

PouchDB is an in-browser database that allows applications to save data locally, so that users can enjoy all the features of an app even when they’re offline. Plus, the data is synchronized between clients, so users can stay up-to-date wherever they go.


One of the best goals you can achieve using PouchDB is to implement 4-way data binding: keeping the Model, View, Server & Offline Data all in sync while providing the user with a mature offline experience.

All CRUD operations performed with PouchDB are versioned in a history similar to the history of changes in a Git repository. That is, if you want to create a new document (CREATE), a new command (PUT) is versioned and added to the transaction log. If you subsequently want to delete that same document (REMOVE), a new command (REMOVE) is versioned and added to the transaction log as well.

That is, all CRUD requests add commands to the PouchDB history.

This allows PouchDB to compute which pending changes must be uploaded to the server during a synchronization operation, as well as to correctly determine which changes on the server must be applied to the local PouchDB database.
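The idea of the transaction log can be illustrated with a toy structure (this is not PouchDB's actual implementation, just the concept): each CRUD operation appends a versioned command, and the pending changes for a sync are simply the commands recorded after the last revision the other side has seen.

```javascript
// Toy illustration of the concept (not PouchDB itself): every CRUD
// operation appends a versioned command to a log, like commits in a Git
// history. Sync only needs to replay the commands the other side lacks.
class ChangeLog {
  constructor() {
    this.log = [];
    this.rev = 0;
  }
  put(id, doc) {
    this.log.push({ rev: ++this.rev, op: 'PUT', id, doc });
  }
  remove(id) {
    this.log.push({ rev: ++this.rev, op: 'REMOVE', id });
  }
  since(rev) {
    // Commands not yet applied on a replica that is at revision `rev`
    return this.log.filter(cmd => cmd.rev > rev);
  }
}
```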


Example – To-Do PWA

We are going to create an example PWA to try all of this: a simple To-Do PWA that is automatically synced and also works in offline mode.


The application consists of a simple list of pending tasks that can be:

  • Created
  • Retrieved
  • Updated
  • Deleted

Since we are going to need a backend that synchronizes all our data when connected, we will use an instance of CouchDB run with Docker Compose:

version: '3'

services:
  couchdb:
    image: couchdb
    ports:
      - '5984:5984'

To start it, simply launch the following command:

$ docker-compose up

Once it is running, it is necessary to enable CORS so that requests made by PouchDB from the PWA running in the browser can reach the CouchDB server:
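A minimal sketch of the relevant CouchDB settings (these keys belong in CouchDB's local.ini, or can be set through its configuration API; the wildcard origin is for local development only and should be restricted in production):

```ini
[httpd]
enable_cors = true

[cors]
origins = *
credentials = true
methods = GET, PUT, POST, HEAD, DELETE
headers = accept, authorization, content-type, origin, referer
```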



Next we will create a PWA with Ionic Framework 4 and Angular 8:

$ ionic start todoapp blank

The item data model will have the following structure:

export interface Item {
  _id?: string;
  _rev?: string;
  title: string;
  description: string;
}

It will be necessary to create a new Angular Service that performs the CRUD operations of the PWA user interface:

import { Injectable } from '@angular/core';
import { Item } from '../models/item';
import PouchDB from 'pouchdb';
import { Observable } from 'rxjs';

@Injectable({
  providedIn: 'root'
})
export class ItemsService {

  private readonly db = new PouchDB('items');

  constructor() {
    // Live, retrying two-way replication with the CouchDB backend
    this.db.sync('http://localhost:5984/items', { live: true, retry: true });
  }

  async findAll() {
    const docs = await this.db.allDocs({ include_docs: true });
    return => row.doc);
  }

  add(item: Item) {
    // post() lets PouchDB generate the _id of the new document
    return this.db.post(item);
  }

  remove(item: Item) {
    return this.db.remove(item._id, item._rev);
  }

  update(item: Item) {
    return this.db.put(item);
  }

  changes() {
    // Emits whenever a local or replicated change occurs
    return new Observable(subscriber => {
        .changes({ live: true, since: 'now' })
        .on('change', () => {; });
    });
  }
}

We will modify the page to display the PWA interface:

And the controller of that page will use the previously created service:

import { Component, OnInit, ChangeDetectorRef } from '@angular/core';
import { Item } from '../../models/item';
import { ItemsService } from '../../services/items.service';
import { AlertController } from '@ionic/angular';

@Component({
  selector: 'app-home',
  templateUrl: '',
  styleUrls: ['']
})
export class HomePage implements OnInit {

  items: Item[] = [];

  constructor(
    private alertCtrl: AlertController,
    private itemsService: ItemsService,
    private changeDetectorRef: ChangeDetectorRef
  ) {}

  ngOnInit() {
    this.refresh(); // initial load
    this.itemsService.changes().subscribe(() => { this.refresh(); });
  }

  private refresh() {
    this.itemsService.findAll().then(docs => {
      this.items = docs;
      // PouchDB change events arrive outside Angular's zone
      this.changeDetectorRef.detectChanges();
    });
  }

  async add() {
    const alert = await this.alertCtrl.create({
      header: 'New item',
      inputs: [
        { name: 'title', placeholder: 'Title' },
        { name: 'description', placeholder: 'Description' }
      ],
      buttons: [
        { text: 'Cancel', role: 'cancel' },
        { text: 'Add', handler: (item: Item) => { this.itemsService.add(item); } }
      ]
    });
    await alert.present();
  }

  async edit(item: Item) {
    const alert = await this.alertCtrl.create({
      header: 'Edit item',
      inputs: [
        { name: 'title', placeholder: 'Title', value: item.title },
        { name: 'description', placeholder: 'Description', value: item.description }
      ],
      buttons: [
        { text: 'Cancel', role: 'cancel' },
        {
          text: 'Save',
          handler: (newItem: Item) => {
            newItem._id = item._id;
            newItem._rev = item._rev;
            this.itemsService.update(newItem);
          }
        }
      ]
    });
    await alert.present();
  }

  remove(item: Item) {
    this.itemsService.remove(item);
  }
}


The end result is a synchronized PWA that allows you to continue working even when there is no Internet connection!


GitHub source

Demystifying Elasticsearch with Elastic App Search


Most of our projects are apps that we develop for our customers, usually native or hybrid, based on Ionic and Angular.


It is common that we need a full-text search engine for content, where the user can type free text in addition to filtering by other data such as dates, geolocation or fields from selectable option lists.

Most of the time we recommend using Elasticsearch because it has many features that, sooner or later, end up being necessary in our applications:

  • Clustering and high availability
  • Horizontal scalability
  • Snapshot and restore
  • Amazingly powerful search API
  • Support for aggregations
  • Ingest and management API

However, great power carries great… configuration. It is not always easy to implement search on Elasticsearch, or to have a simple administration tool that allows you, from day 1, to have an environment on which to build an MVP that can eventually become a killer app.

Until now…

Elastic App Search: Advanced Search Made Easy

It is amazing how easy it can be to implement an Elasticsearch-based environment that has everything you need to start creating a complete search application.

With Elastic App Search, the entire process is streamlined, allowing you to concentrate efforts and resources on what matters most: creating great applications that implement content search in a scalable manner, with control over result relevance, well-maintained clients and robust analytics.

Starting a Simple Search Application

What if we create a simple application to search for food recipes? Next we will see the necessary steps to create our first application with Elastic App Search.

If you need it, you can download all the contents from GitHub:

Creating Sample Data

To do this we will start from a sample data set that we can get from different sources (e.g. from TheMealDB):

# Note: the API URL was missing from the original snippet; TheMealDB's
# public search endpoint is assumed here.
for first in {a..z}
do
    curl -s "${first}" | jq -s '[.[][][]]' >> data.json
done

This will create a data.json file with a data structure similar to the following:

[
  {
    "id": "52768",
    "meal": "Apple Frangipan Tart",
    "category": "Dessert",
    "area": "British",
    "instructions": "...",
    "thumb": "http://..."
  },
  {
    "id": "52872",
    "meal": "Spanish Tortilla",
    "category": "Vegetarian",
    "area": "Spanish",
    "instructions": "...",
    "thumb": "http://..."
  }
]

Running Elastic App Search With Docker Compose

Create the following docker-compose.yml file:

version: '3.7'

services:
  elasticsearch:
    # The image name was lost in the original snippet; the official
    # Elasticsearch image (version tag is an assumption) is used here.
    environment:
      - "discovery.type=single-node"
      - "bootstrap.memory_lock=true"
      - "ES_JAVA_OPTS=-Xms512m -Xmx2048m"
    ulimits:
      memlock:
        soft: -1
        hard: -1

  appsearch:
    # The image name was lost in the original snippet; the official
    # App Search image (version tag is an assumption) is used here.
    environment:
      - "elasticsearch.host=http://elasticsearch:9200"
      - "allow_es_settings_modification=true"
      - "JAVA_OPTS=-Xmx2048m"
    ports:
      - 3002:3002

Notice that it is important to increase the maximum heap memory values for the elasticsearch and appsearch services from 512m to 2048m.

Once created, it is necessary to start the services in order, so that elasticsearch has enough time to start before appsearch does:

$ docker-compose up -d elasticsearch
$ docker-compose up -d appsearch

If you need to monitor the status of the processes, you can do so with the following command:

$ docker-compose logs -f

This will show the startup logs of both services.


Data Ingestion Using Elastic App Search Dashboard

Open http://localhost:3002 (the port published in docker-compose.yml) and you will see the Elastic App Search Dashboard:



Some of the concepts we know in Elasticsearch have a different name in Elastic App Search. Specifically, an index is renamed to engine in App Search.

Proceed to create a new engine called meals and load the previously created data.json file:


Simple Searching

From here it is possible to carry out simple searching using the Elastic App Search Dashboard UI:


Internally, searches are executed by Elasticsearch, so it is always possible to visualize the actual query that Elasticsearch runs behind the scenes:


Configuring Synonyms

For some searches, the term tortilla (omelette in Spanish) is a problem: it would be desirable for a search for omelette to also return Spanish Tortilla.

For this it is possible to configure the synonyms in a simple and intuitive way:
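Conceptually, a synonym set expands the query at search time. A toy sketch of the idea (this is illustrative only, not App Search's implementation):

```javascript
// Toy sketch of query-time synonym expansion: any term belonging to a
// synonym set pulls the rest of the set into the query, so searching for
// "omelette" also matches documents containing "tortilla".
function expandQuery(query, synonymSets) {
  const expanded = new Set(query.toLowerCase().split(/\s+/));
  for (const set of synonymSets) {
    if (set.some(term => expanded.has(term))) {
      set.forEach(term => expanded.add(term));
    }
  }
  return [...expanded];
}
```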


This allows subsequent searches such as the following to obtain the expected results:

Configuring Curations

In case you want to promote a particular search result in a special way, you can do it from the Curations option.

A curation helps people discover what you would most like them to discover. Or, what you would not like them to discover.

There are two curation parameters:

  • Promoted: Promote specific documents to have them appear in a prominent way. You can promote multiple documents per query, the array order representing the order in which they will appear.
  • Hidden: Hide specific documents so that they do not appear. You can hide multiple documents per query, the array order being unimportant because … they are hidden.
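The effect of these two parameters can be sketched with a toy function (illustrative only; App Search applies curations server-side):

```javascript
// Toy sketch of how a curation is applied to a result list: promoted ids
// come first in the order given, hidden ids are dropped, and everything
// else keeps its original ranking.
function applyCuration(results, promoted = [], hidden = []) {
  const hide = new Set(hidden);
  const top = new Set(promoted);
  const rest = results.filter(id => !hide.has(id) && !top.has(id));
  return [...promoted.filter(id => results.includes(id)), ...rest];
}
```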

Configuring Relevance Tuning

One of the most powerful features of Elasticsearch is the ability to refine the relevance of some results over others. It is possible to make a match in a specific field count more than matches in other fields, by configuring the final score that Elasticsearch gives to each search result.

With Elastic App Search, tuning the relevance of results is extremely simple.
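The idea can be sketched as a toy scoring function with per-field weights (illustrative only; Elasticsearch's actual scoring is far richer):

```javascript
// Toy sketch of field weighting: a query match in a heavily weighted field
// contributes more to the final score. Relevance tuning adjusts exactly
// these per-field weights.
function score(doc, query, weights) {
  const q = query.toLowerCase();
  let total = 0;
  for (const [field, weight] of Object.entries(weights)) {
    if ((doc[field] || '').toLowerCase().includes(q)) {
      total += weight;
    }
  }
  return total;
}
```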



Analytics

Standard Elastic App Search includes a simple but powerful analytics dashboard where it is possible to visualize, filtered by date:

  • Total number of queries
  • Total number of queries without results
  • Tags
  • Detailed analytics by searches performed by end users
  • Recent Searches


Continuous Integration and Deployment

As Okode grows, one of our top priorities has been to make our developers happy. Their mission is to create great applications focusing on what they do best: coding. All tasks related to building, static analysis, packaging, signing or publishing to app markets must be automated, so that their time is optimized and they remain focused on their work without distractions.

Adjusting the parameters and configuration of the continuous integration server becomes a crucial activity, where our engineers must balance costs, average build times and the development of CI support utilities, always seeking optimization and continuous improvement.

Today we run more than 100 builds per day, reaching production directly from CircleCI.

When done well, the results bring a clear benefit to the organization and to the productivity of our software engineers, which ultimately makes them happier coders.

Continuous Integration at Okode

Historically at Okode we have used continuous integration and deployment tools based on Cloud services. Services such as Jenkins or Travis CI have been crucial to initially define what our task workflows should look like for each type of application we develop. Creating a Cordova-based hybrid app that is published in the App Store is not the same as an Angular-based SPA website that is published to AWS S3 / AWS CloudFront. And yet both workflows share similar activities, such as TypeScript transpilation, static analysis, unit and e2e testing, and webpack packaging.

As more applications are developed at Okode, the need to align and orchestrate the builds of each one in a similar way becomes more evident, which has recently led us to develop our own CLI!


After evaluating different alternatives, we have now migrated all of our build workflows to CircleCI because it:

  • Allows us to use Linux nodes (VMs or containers), custom Docker images, and macOS nodes for iOS builds.
  • Supports complex testing and build workflows, with multiple parameterized conditional stages.
  • Allows SSH access to a build node for debugging.
  • Makes it possible to install additional tools, SDKs and toolchains on build nodes when necessary.
  • Is very fast (incredibly fast!) at finding a free node to start a build.
  • Allows sharing workflow configuration in small snippets of .yml code called Orbs that can be reused between projects.

Agile Development Requires Fast Builds

Any commit, however small, needs to be validated by the continuous integration environment. Our developers publish hundreds of commits a day, integrating changes that evolve our customers' applications and incorporating new functionality into the main development branch (usually develop, since we use GitFlow).

Instead of running a full build of the application, which in many cases requires building for several platforms (iOS / Android), packaging, signing and sending to beta distribution channels such as Fabric, TestFlight or Google Play Alpha, we prefer to run a fast build performing the following tasks, in parallel where possible:

  • Optimized build for production
  • Static analysis
  • Unit testing
  • E2E testing


What we are looking for with this strategy is to fail as fast as possible when some code-quality metric does not pass. That is, we complete a fast build as quickly as possible so that the developer can fix the detected errors and push new changes to SCM which, again, triggers a new fast build.
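Such a fast build can be sketched as a CircleCI workflow whose jobs run in parallel (the job names below are illustrative, not our actual configuration):

```yaml
version: 2.1
workflows:
  fast-build:
    jobs:
      - build       # optimized build for production
      - lint        # static analysis
      - unit-test
      - e2e-test
```

Since none of the jobs declares a `requires:` clause, CircleCI starts all four at once, and the workflow fails as soon as any of them fails.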

Only when it is necessary to deliver a full new version of the application to a user-testing environment (preproduction) or to publish it for end users (production) do we run the complete multichannel build workflow (iOS, Android, desktop Web), with packaging, signing and delivery of the bundles to the distribution channels.

Reusing Configuration Using CircleCI Orbs

When creating multiple mobile apps, many of them based on the same technology, it became clear at Okode that we needed a mechanism to reuse build configuration, since most of them are built in a similar way.

For example, most of our npm-based projects initially need to download their dependencies using npm install. However, since CircleCI supports caching between builds, it is preferable to restore the node_modules folder from a previous compatible build (as long as the dependencies have not changed, which is checked by hashing package-lock.json). If the dependencies still need to be installed, the operation is done with npm ci, which avoids recalculating transitive dependencies and optimizes the download.

All this process can be stored in a YAML snippet like the following:

cache-key-npm: &cache-key-npm
  key: cache-npm-{{ arch }}-{{ .Environment.CIRCLE_JOB }}-{{ .Environment.CIRCLE_BRANCH }}-{{ checksum "package-lock.json" }}

# The executor and command names below were missing from the original
# snippet and are assumed here.
executors:
  node:
    docker:
      - image: circleci/node:12

commands:
  npm-install:
    description: Run npm install
    steps:
      - restore_cache:
          << : *cache-key-npm
      - run:
          name: Installing NPM dependencies
          command: if [ ! -d "node_modules" ]; then npm ci; fi
      - run:
          name: Restoring package-lock.json
          command: git checkout package-lock.json
      - save_cache:
          << : *cache-key-npm
          paths:
            - node_modules

CircleCI allows you to register configuration files in the Orbs registry so that they can be shared between different workflows or even between different organizations:
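Once an Orb is published, importing it into a project is a one-liner in .circleci/config.yml. A minimal sketch (the orb reference, executor and command names below are hypothetical):

```yaml
version: 2.1
orbs:
  common: okode/common@1.0.0     # hypothetical orb reference
workflows:
  build:
    jobs:
      - build
jobs:
  build:
    executor: common/node        # executor defined inside the orb
    steps:
      - checkout
      - common/npm-install       # reusable command from the orb
```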


There are more sophisticated examples: for some of our customers it is necessary to initiate a VPN connection to an AWS VPC so that the generated artifacts are stored in private binary repositories. These are examples of common commands that typically need to be reused across different application build workflows at Okode, and that can be shared as an Orb in the CircleCI registry and imported into the .circleci/config.yml of each application that needs them.


Continuous Delivery: From GitHub to App Store / Google Play

How many builds of an application can be made and published to production? It is true that the number is limited by the maximum frequency the distribution channel can support (the App Store, for example, requires an average Apple validation time of one to two days before publication). However, this does not affect Web SPA applications, serverless Cloud services or microservices in a Kubernetes cluster. In these cases it is perfectly possible to reach production seconds after a developer's last git push.

At Okode, all our applications are automated to deliver to Google Play / App Store directly from CircleCI. Which is the same as saying directly from GitHub, right after the developer creates a tag.


App Development for Business and Brands


At Okode we are convinced that the development of mobile applications must be carried out in such a way that it is not necessary to maintain native code for each platform (iOS / Android).

Hybrid apps make developers' work easier: it is possible to create a single application that runs efficiently on different platforms with practically no additional effort, taking advantage of all the knowledge that Web designers and programmers already have of Web-based hybrid frameworks. In addition, when necessary it is simple to use the platform's native functionality through Cordova or Capacitor, allowing the technological innovations introduced by the big manufacturers to be used from hybrid apps as if they were true native apps. Developing in this way greatly optimizes development times, allowing companies and brands to reach the app markets in record time.

One of the most important decisions in this whole process is the choice of framework: to create high-quality hybrid applications you need an advanced mobile framework, so it must be chosen with prudence. Some of the best hybrid mobile frameworks for their characteristics and user experience are the following:


Ionic

Open-source, the preferred app development platform for web developers.

React Native

Open-source, reusable components. Developed by Facebook and used by Facebook, Instagram, Uber Eats, etc. Makes it easy to integrate native features into a hybrid app.


Flutter

Open-source; create hybrid-native apps for iOS / Android using the Dart VM. Flutter code is declarative, favours composition, and uses a single-threading model. Its hot restart and sub-second hot reload are truly game-changing.

Cloud Native Application Development

Innovate for a Better Future.

We are an enterprise leader in Mobile Business Application Development. As a technology-based company, we specialize in innovative global solutions for international customers in different sectors such as insurance, banking, marketing, communications and logistics.

We have a highly specialized team of full-stack software engineers, which allows us to start agile application development immediately, regardless of the technology you need. We carry out technology migrations, create minimum viable products and help expand the experience and ability of our customers' development teams so that they are able to maintain their solutions in an environment of permanent, continuous change.

Digital Transformation is not an option: it is the strategic opportunity to incorporate new technologies so that your business is more efficient and open to new opportunities.

Effective Digital Transformation Strategy

Digital Transformation for Insurance Companies.
