Get started with Polymer 1.0 and Sublime Text 3.0

In a previous post I explained how to get started with Polymer 1.0 and WebStorm 11. Unfortunately WebStorm is a paid product and not all of my followers can purchase a license (neither can I; I am trying to see if JetBrains will grant me a free teacher license).

So, I decided to create a new tutorial which explains how to get started with Polymer 1.0 using Sublime Text, a powerful and free text editor also used by Google’s Polymer team. In this tutorial I will use more tools than in the previous one because Sublime Text is just a text editor (a powerful one, but still a text editor), so in order to run your app you also need a local web server and build tools such as Gulp.

Download the necessary tools

First of all, in order to set up the environment correctly, especially on Windows, you MUST download all of these tools before you start to download Sublime Text or Polymer, otherwise you will start the nightmare of “path not found” errors and similar.

  • GIT
    GIT is a well-known command-line tool for managing GIT repositories. Even if you come from an SVN or TFS (Team Foundation Server) environment, I suggest you get started with GIT: even companies like Microsoft are moving their source code into GitHub or BitBucket repositories, and GIT is used by BOWER to download and synchronize Polymer components.
  • NODE.js
    Node is a JavaScript runtime built on Chrome’s V8 JavaScript engine. If you work with Polymer, your primary use of Node is the NPM command, which is used alongside BOWER.
    If you come from Java you may be aware of “Maven Central”, while if you come from .NET you may be aware of “NuGet”. Well, BOWER is the same concept applied to web development: it allows you to download “packages” of CSS, JavaScript and HTML files packed as “components”. Without Node.js you can’t use BOWER because it requires the NPM (Node Package Manager) command-line tool.
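Once the installers have run, a quick sanity check saves a lot of “path not found” pain later. This sketch simply probes whether each tool is reachable on the PATH; it assumes nothing beyond the three tool names mentioned above:

```shell
# Check that git, node and npm are reachable on the PATH
for tool in git node npm; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: NOT FOUND - add its bin folder to your PATH"
  fi
done
```

On Windows the equivalent check from a command prompt is `where git`, `where node` and `where npm`.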

So at this point you have your core tools registered and working correctly. Now it’s time to download Sublime Text 3.0 and configure it properly. The download link is available here:

Configure Sublime Text 3.0

After Sublime Text is installed, you need to configure it so that it understands Polymer 1.0 and is able to run Polymer using GULP.

Step #01 – Sublime Package Manager

Sublime provides an integrated console where you can run Sublime commands. You can open the console using two different techniques:

  • CTRL + `
  • Menu > View > Show Console

When the console is open, paste the script that enables Package Control (Sublime’s package manager), which is available here.

Step #02 – Install Sublime Plugins

Sublime comes with a solid set of features out of the box, but in order to create a proper development environment for Polymer 1.0 we need some plugins:

Tip: CTRL + SHIFT + P opens the Command Palette, from which you can run “Package Control: Install Package”


Below is a list of plugins that I personally believe you should install in order to work with Polymer:

  • Install Package > Autoprefixer

    If you want a quick way to add vendor prefixes to your CSS, you can do so with this handy plugin.
  • Install Package > Emmet
    Add some useful keyboard shortcuts and snippets to your text editor.
  • Install Package > HTML-CSS-JS Prettify
    This extension gives you a command to format your HTML, CSS and JS. It can even prettify your files whenever you save.
  • Install Package > Git Gutter
    Add a marker in the gutter wherever there is a change made to a file.
  • Install Package > Gutter Color
    Gutter Color shows you a small color sample next to your CSS.

Step #03 – Create a new Project

Finally, we need to create a Sublime Text project in order to keep all our files in a good structure. First of all you need a folder; in my case I work in “C:\DEV”, so I am going to have a project folder called “C:\DEV\Polymer_First” where I will save my project structure.

Open Sublime Text and point to the menu > Project > Save Project As:


This will create the new project with a .sublime-project extension. Then go into the View menu again and choose Side Bar > Show Side Bar, or simply press CTRL + K, CTRL + B.

Initialize Polymer

Now we can finally initialize our Polymer project.

Click on Project > Add Folder to Project and choose your root folder so that your workspace and project structure are pointing to your root project.
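For reference, the .sublime-project file is plain JSON. After adding the folder it will contain something like this (the path shown is my example folder; adjust it to yours):

```json
{
    "folders":
    [
        {
            "path": "C:/DEV/Polymer_First"
        }
    ]
}
```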

Open your SHELL, Command Prompt or Terminal, point it to your Sublime Text root path (in my case “C:\DEV\Polymer_First”) and type bower init:


Then download the basic setup for polymer using:

  • bower install --save Polymer/polymer#^1.2.0
  • bower install --save PolymerElements/iron-elements
  • bower install --save PolymerElements/paper-elements

At the end you should have this structure, which includes the first .html file (index.html):


The final step, which is the one I love most, is to install SublimeServer, which is nothing more than a very simple Python local web server.

CTRL + P > Install Package > Sublime Server

And voilà, now you can right-click an .html file inside your Sublime Text editor and choose “View in Browser”, which by default serves on http://localhost:8080.
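Since SublimeServer is just a simple Python local web server, you can reproduce the same behavior from a terminal with Python’s built-in http.server module. This sketch starts a server on the plugin’s default port, checks that it answers, and stops it; in real use you would simply leave the server running and browse to it:

```shell
# Start Python's built-in web server on port 8080, the same idea as SublimeServer
python3 -m http.server 8080 >/dev/null 2>&1 &
SERVER_PID=$!
sleep 1
# Ask the server for its root page; prints 200 when the server is up
python3 -c "import urllib.request as u; print(u.urlopen('http://127.0.0.1:8080/').status)"
kill $SERVER_PID
```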

Final Note
This is just an overview of how to set up Sublime Text, but if you come from a complex IDE like Visual Studio or IntelliJ I kindly suggest you spend some time on Sublime and download the plugins that will make your life much easier. There are tons of useful plugins for web development, and several specific to Polymer.

Polymer 1.0–Customize Style of existing components

Polymer 1.0 comes with plenty of ready-made components, especially if you are going to implement Material Design. The only problem is that they come with a default style, which usually doesn’t fit mine.

In this post I want to show you multiple options for customizing a Polymer component, with the help of the Polymer 1.0 documentation available online.

Override the Style using <style>

Let’s start by preparing a Polymer project, as explained in my previous tutorial, and adding a simple Toolbar with two icons:


And this is my markup to import and create a Toolbar with two paper-icon-buttons:

<!-- Toolbar element -->
<link href="bower_components/paper-icon-button/paper-icon-button.html" rel="import">
<link href="bower_components/iron-icons/iron-icons.html" rel="import">
<link href="bower_components/paper-toolbar/paper-toolbar.html" rel="import">

<!-- Toolbar -->
<paper-toolbar>
    <paper-icon-button icon="menu"></paper-icon-button>
    <div class="title">My Toolbar</div>
    <paper-icon-button icon="search"></paper-icon-button>
</paper-toolbar>

Now, if I want to modify the background color of the Toolbar, first I need to head to the paper-toolbar documentation here and find out how to do that.
The CSS custom property we want to override in our case is called --paper-toolbar-background.

Inside the HEAD tag of your page, you can create a new <style> tag and override the style globally by using the selector :root, like this example:


<!-- Override globally -->
<style is="custom-style">
  :root {
    --paper-toolbar-background: #FF6D00;
  }
</style>

It is important to use the attribute is="custom-style", which informs Polymer that your style is going to override the default Polymer CSS. Also, the style tag must be inside the <head> tag, otherwise it won’t work because it would be loaded too late by Polymer.

Now, let’s say that this is too invasive for you and you want to override the toolbar background only in a specific scenario, for example only on a certain webpage. We know that on that webpage the toolbar must be pink, so we can give the toolbar the id “pink-toolbar”:

<style is="custom-style">
  :root {
    --paper-toolbar-background: #FF6D00;
  }
  #pink-toolbar {
    --paper-toolbar-background: #E91E63;
  }
</style>

<!-- Toolbar -->
<paper-toolbar id="pink-toolbar">
</paper-toolbar>

With this statement all toolbars in your project will have a background color equal to #FF6D00, except the one with id pink-toolbar, which will have a background color equal to #E91E63.


Create a custom style component

Now, the best feature of Polymer is its capability of loading components; it is really helpful. Think about this: you have deployed an application which should be able to load custom themes based on the logged-in user, so what about having a custom CSS component which will override all your UX settings? Well, with Polymer 1.0 this is possible and quite straightforward.

First of all, we need to create a custom component, which we will call “raf-theme”, as follows:


Then I declare my DOM inside the page “raf-theme.html” as follows:

<dom-module id="raf-theme">
  <template>
    <style>
      :root {
        --paper-toolbar-background: #FF6600;
      }
    </style>
  </template>
  <script>
    Polymer({
      is: "raf-theme"
    });
  </script>
</dom-module>

And now I can just use my custom theme as a normal Polymer component in the following way:

<!-- import the style -->
<link href="bower_components/raf-theme/raf-theme.html" rel="import">

<!-- use it -->
<style is="custom-style" include="raf-theme"></style>

In my own opinion, the best approach is to have multiple dom-modules inside your “theme” component, one for each part of the UX. For example, inside raf-theme I have the following elements:

  • raf-toolbar-theme
  • raf-drawer-theme
  • raf-form-theme

and so on, and I include each theme only when I need it. Plus, everything is modularized, so it’s easier for me to find a specific CSS property of a specific component and override it.
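As a sketch, such a split theme file could look like this. The module names come from the list above; the property values, and the paper-input property name in the second module, are placeholders I made up, so check each element’s documentation for the real custom properties:

```html
<!-- raf-theme.html: one style module per UX area (sketch) -->
<dom-module id="raf-toolbar-theme">
  <template>
    <style>
      :root {
        --paper-toolbar-background: #FF6600;
      }
    </style>
  </template>
</dom-module>

<dom-module id="raf-form-theme">
  <template>
    <style>
      :root {
        --paper-input-container-color: #757575; /* placeholder value */
      }
    </style>
  </template>
</dom-module>
```

Each module is then pulled in only where needed, e.g. `<style is="custom-style" include="raf-toolbar-theme"></style>`.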

Get started with Polymer 1.0 and WebStorm 11

So, this year one of my goals is to learn Polymer 1.0. I have followed this project for about a year now and I believe that the framework is mature enough to be adopted even in a production environment. Unfortunately Google didn’t do a great job around Polymer tooling (in my own opinion): there is no great web development IDE available for it, like there is with Android Studio for Android development.

In the past I worked a lot with Visual Studio, but also with JetBrains tools, and I found WebStorm a very reliable tool for web development, so I decided to write this article, which explains how to set up the correct development environment for Polymer 1.0 using WebStorm.

I have both Linux and Windows 10 PCs (I don’t use a Mac at all), and this article is based on my Windows configuration. If you are going to use WebStorm and Polymer on Linux the tutorial is still valid, but if you are on a Mac, well, you are on your own.

Download the necessary tools

First of all, in order to set up the environment correctly, especially on Windows, you MUST download all of these tools before you start to download WebStorm or Polymer, otherwise you will start the nightmare of “path not found” errors and similar.

  • GIT
    GIT is a well-known command-line tool for managing GIT repositories. Even if you come from an SVN or TFS (Team Foundation Server) environment, I suggest you get started with GIT: even companies like Microsoft are moving their source code into GitHub or BitBucket repositories, and GIT is used by BOWER to download and synchronize Polymer components.
  • NODE.js
    Node is a JavaScript runtime built on Chrome’s V8 JavaScript engine. If you work with Polymer, your primary use of Node is the NPM command, which is used alongside BOWER.
    If you come from Java you may be aware of “Maven Central”, while if you come from .NET you may be aware of “NuGet”. Well, BOWER is the same concept applied to web development: it allows you to download “packages” of CSS, JavaScript and HTML files packed as “components”. Without Node.js you can’t use BOWER because it requires the NPM (Node Package Manager) command-line tool.

Now, after you install GIT, open your command prompt and type:

git --version
# and the output will be
git version

If you don’t get this output, it means that you don’t have the GIT bin folder registered in your Windows PATH environment variable. The second step is to verify that Node is installed, which works the same way as for GIT; just type:

npm version
# and the output will be
{ npm: '3.6.0',
  ares: '1.10.1-DEV',
  http_parser: '2.6.1',
  icu: '56.1',
  modules: '47',
  node: '5.6.0',
  openssl: '1.0.2f',
  uv: '1.8.0',
  v8: '',
  zlib: '1.2.8' }

Great. The last step is to install the BOWER package via NPM:

npm install -g bower

And then verify that bower is installed by typing:

bower --version

So now you are 100% sure that you can move forward and prepare your dev environment.

Install and Configure WebStorm

At the time I am writing this article, WebStorm is available in version 11. I assume that nothing will change in this part of the setup in the future, especially because all JetBrains IDEs are based on the same IntelliJ core. WebStorm can be downloaded from the JetBrains website, and you can use it for 30 days; after that you need to buy a license or apply for a free one.

After WebStorm is installed, open the IDE and choose “new empty project”, naming it polymer_test like I did:


Now you should have an empty project and WebStorm ready to rock. If you look at the bottom of the IDE you will see a “Terminal” window. This is an instance of your DOS command prompt (on Windows) or your Terminal (on Linux). Just double-check that everything is fine by typing something like “bower --version” or “git --version”:


Step 01 – Init Bower

This is the basic step needed to get started with Polymer: you run the command “bower init”, which prepares a JSON file with the project configuration information. If you don’t know what to answer here, just keep pressing ENTER until the JSON file is created.


Now you should have in your root a file named bower.json which contains your project’s configuration information.

Step 02 – Download the core of Polymer

The second step is to download the Polymer core, which is required by any Polymer project. Type “bower install --save Polymer/polymer#^1.2.0” to download the latest stable version of Polymer, which is 1.2.0.
If you don’t specify the version, bower will look at all available versions and may ask you which one you want to use.
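After the command completes, the dependency is recorded in bower.json; the file should contain an entry similar to the following (the name field depends on what you answered during bower init):

```json
{
  "name": "polymer_test",
  "dependencies": {
    "polymer": "Polymer/polymer#^1.2.0"
  }
}
```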


At this point your project will have a new folder called “bower_components” and the Polymer core component’s folders:


Finally, create a new page, name it “index.html” and give it the following HTML code:

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>My First Polymer Application</title>
    <script type="text/javascript" src="bower_components/webcomponentsjs/webcomponents.js"></script>
    <link href="bower_components/polymer/polymer.html" rel="import">
</head>
<body>
</body>
</html>


Now your page is ready to be used as a Polymer page; you only need to import the components you need, or create your own. If you want to be sure that everything works as expected, just right-click your .html file, choose “browse with” and test it in your preferred browser (I assume Chrome):


Step 03 – Download the Material Design components

If you already know that you are going to work with Material Design components, you can easily download the whole set of iron-elements and paper-elements into your project with two simple bower commands:

bower install --save PolymerElements/iron-elements
bower install --save PolymerElements/paper-elements

Then you can create your first page by importing, for example, the Material Design toolbar component as follows:

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>My First Polymer Application</title>
    <script type="text/javascript" src="bower_components/webcomponentsjs/webcomponents.js"></script>
    <link href="bower_components/polymer/polymer.html" rel="import">

    <!-- Toolbar element -->
    <link href="bower_components/paper-toolbar/paper-toolbar.html" rel="import">
</head>
<body>
    <paper-toolbar>
        <div class="title">My Toolbar</div>
    </paper-toolbar>
</body>
</html>

And you should have a nice page with a basic Material Design Toolbar like mine:



Enable Authentication in ElasticSearch

After you have configured your ElasticSearch endpoint, the next important step is to make it secure. If you have a standard installation of ElasticSearch, you probably have your ES endpoint listening on http://localhost:9200. In my case I have this configuration, and this is how I can query a test index:

GET http://localhost:9200/test/persons/1 HTTP/1.1
User-Agent: Fiddler
Host: localhost:9200
Content-Length: 0

And ElasticSearch will return an HTTP status 200 OK with my result set:

HTTP/1.1 200 OK
Content-Type: application/json; charset=UTF-8
Content-Length: 196


The major problem with the default installation of ES is that it is totally insecure.

  • First, it runs over the HTTP protocol, which means that all information is sent in clear text, including your authentication credentials
  • Second, there is no authentication mechanism in place, which means that anybody can get the data from my endpoint

So let’s see how we can enable Authentication and how we can enable SSL to make our ES endpoint secure.

Enable Authentication with Shield

One of the features that I really love about ES is the plugin architecture. You can easily install on an existing ES endpoint any of the available plugins with a couple of lines of code and configure them at runtime.

One of the most important plugins available for ES is Shield. Shield is a plugin that provides plenty of options to secure your ES endpoint, just to mention a few:

  • Enable Basic Authentication with Username and Password
  • Provide three different types of basic user [admin | power_user | guest]
  • Create custom roles with custom permissions
  • Enable Message Authentication
  • Auditing
  • Custom authentication providers such as LDAP, OAuth, Active Directory and more

If you have multiple nodes in your cluster, in order to have Shield running correctly you must stop all nodes and install Shield on every one of them. This is the only option available, so you must take your ES cluster offline while installing Shield.

In order to install Shield you should open an ElasticSearch shell, which on Windows means “open a DOS console and point it to the /bin folder where the elasticsearch program lives”:

# browse the elasticsearch dir
$ cd "C:\Program Files\elasticsearch-2.1.1\bin"

Then you have to install two plugins, License and Shield. This is required because Shield is only partially free, which means that:

When your license expires, Shield operates in a degraded mode where access to the Elasticsearch cluster health, cluster stats, and index stats APIs is blocked. Shield keeps on protecting your cluster, but you won’t be able to monitor its operation until you update your license. For more information, see License Expiration.

$ plugin install license
$ plugin install shield

Authentication will not be enabled until you restart your ES endpoint. After the restart, if you try to issue the previous request you will get an HTTP 401 UNAUTHORIZED response:

HTTP/1.1 401 Unauthorized
WWW-Authenticate: Basic realm="shield"
Content-Type: application/json; charset=UTF-8
Content-Length: 357

{"error":{"root_cause":[{"type":"security_exception","reason":"missing authentication token for REST request [/test/persons/1]","header":{"WWW-Authenticate":"Basic realm=\"shield\""}}],"type":"security_exception","reason":"missing authentication token for REST request [/test/persons/1]","header":{"WWW-Authenticate":"Basic realm=\"shield\""}},"status":401}

So the next step is to create an Admin user that we can use to start querying our ES endpoint:

#browse /bin/shield directory
$ esusers useradd es_admin -r admin

With the esusers useradd command we created a new user named es_admin, and with the -r parameter we assigned the admin role (available roles are admin, power_user, guest).

The second step is to issue an authenticated request to ES using our account, in the following way:

GET http://localhost:9200/test/persons/1 HTTP/1.1
User-Agent: Fiddler
Host: localhost:9200
Content-Length: 0
Authorization: Basic ZXNfYWRtaW46c2VjcmV0

The “strange” string is just the username:password combination, encoded by Fiddler in Base64.
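You can reproduce that header value yourself: Basic authentication is nothing more than Base64 over “username:password”, here using the es_admin user created above with a password of secret:

```shell
# Base64-encode "username:password" for the Authorization header
printf 'es_admin:secret' | base64
# → ZXNfYWRtaW46c2VjcmV0
```

Note that Base64 is an encoding, not encryption, which is exactly why the next step is to enable SSL.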

All right, so now you have your ES secured and basic user management in place, but you have another problem: you are now sending a username and password over the wire with every request, so you must enable SSL on your ES endpoint to encrypt the communication.

If you want to create custom roles with custom access, you need to modify the file [elasticsearch]/config/shield/roles.yml. In this file you can look at the current roles [admin, power_user, user] and modify them or create new ones. For example, you may have a role that grants permissions only to a specific index (see multi-tenant architecture).
Remember: every time you modify a configuration file in ES you MUST restart the service, otherwise ES will keep using the cached .yml configuration files.
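As a hypothetical example of such a custom role, a roles.yml entry restricted to a single tenant’s indices might look like this (the role name, index pattern and privilege are made up for illustration; check the Shield documentation for the exact syntax of your version):

```yaml
# config/shield/roles.yml (sketch)
tenant_a_reader:
  indices:
    'tenant_a_*':
      privileges: read
```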

Enable SSL on ElasticSearch

First of all you need an X.509 certificate for your domain. There is no shortcut here: if you have a domain where you host your ES endpoint, you must purchase an SSL certificate that covers your domain from a certificate authority (CA). This procedure is a bit painful and verbose, but once done it is done for your entire ES installation. You can also create a cert file on your own Windows/Linux machine, for example using OpenSSL or similar, but remember that your certificate will not be signed by a known CA and you may encounter problems in production.

In my case I host ES on a public DNS and I have a domain certificate from a CA in the form of a .pfx file. Using OpenSSL, I create a new .pem file from my .pfx with this command:

openssl pkcs12 -in certificate.pfx -out certificate.pem -nodes

In any case, you must have access to a .pem certificate file in order to continue.

keytool -importcert -keystore node01.jks -file cacert.pem -alias my_ca

This keytool command imports the certificate into the Java keystore, which contains a list of trusted certificates.

Next you need to generate a private key and a certificate for your node, as follows:

keytool -genkey -alias node01 -keystore node01.jks -keyalg RSA -keysize 2048 -validity 712 -ext san=dns:<your-domain>,ip:<your-ip>

With this command we create a key and a public certificate valid for 712 days (the classic example shown in the ElasticSearch documentation). The command will also prompt you with some questions whose answers are registered together with the certificate. The san attribute allows you to specify the domain where you are hosting your ES endpoint.

So, right now you have a node certificate, but it needs to be signed by a CA, specifically the one that issued your domain certificate.

keytool -certreq -alias node01 -keystore node01.jks -file node01.csr -keyalg rsa -ext san=dns:<your-domain>,ip:<your-ip>

Now we have a Certificate Signing Request (.csr) that we can send to our CA. The CA will sign it and return a .crt file. You can also use OpenSSL and generate the .crt yourself if you wish. At this point you have to import the .crt into your .jks keystore with the following command:

keytool -importcert -keystore node01.jks -file node01-signed.crt -alias node01

Now we can configure Shield to use SSL and the HTTPS transport protocol. Just let ES know where your keystore is located and what the passwords for your keystore and certificate are:

# ---------------------------------- SSL -----------------------------------
shield.ssl.keystore.path: C:\Program Files\elasticsearch-2.2.0\elasticsearch-2.2.0\config\cert\keystore.jks
shield.ssl.keystore.password: [keystore password]
shield.ssl.keystore.key_password: [certificate password]
shield.transport.ssl: true
shield.http.ssl: true

At this point your ElasticSearch will still be hosted on port 9200 (if you didn’t override this setting) but will be available only over HTTPS.

Alternative way of creating a keystore

In my specific scenario I didn’t want to modify the certificate, because my CA is quite strict: I didn’t have the option to import a .pem, modify it, create a .csr and so on, so I used an alternative approach.

First of all I created my keystore:

keytool -genkey -alias mydomain -keyalg RSA -keystore keystore.jks -keysize 2048

Then I imported my .pfx file as is, without any modification:

keytool -importkeystore -srckeystore mypfxfile.pfx -srcstoretype pkcs12 
-destkeystore keystore.jks -deststoretype JKS

Then I created a .csr request for my DNS, but I will not send this request to my CA:

keytool -certreq -alias mydomain -keystore keystore.jks -file mydomain.csr

And finally I moved the two files (.jks and .csr) into the config directory of ElasticSearch and configured it with the keystore password and the original certificate password. It worked like a charm, and I didn’t need to send a .csr and import a .crt.

Publish Android library to Maven using Android Studio 1.5

If you are working with Android Studio, and more generally with the Android platform, sooner or later you will need to download a library from a Maven or JCenter repository.
If you have no idea what I am talking about, just open an Android project in Android Studio and look at the file called build.gradle (the project-level one, not the one specific to a module).

Gradle dependencies overview

You should see a layout similar to mine:

buildscript {
    repositories {
        jcenter()
    }
    dependencies {
        classpath ''
    }
}
In this file we simply ask Gradle to download the project dependencies from JCenter. This means that when you build the project, Android Studio will query the JCenter central repository, try to resolve every dependency, and download it.

Now, if you move through the structure of your Android project you will find another build.gradle file.

Actually you will find one per module. You can think of a module like a component of your android application.

In this case my module has a reference to an external library, and I declare the dependency this way (at the bottom of my gradle file):

dependencies {
    compile fileTree(dir: 'libs', include: ['*.jar'])
    compile 'com.squareup.okhttp:okhttp:2.0.+'
}

So, in this example I am working with a library called okhttp, available from the package com.squareup.okhttp, and more precisely I am asking for version 2.0. The + sign at the end means that any sub-release of the 2.0 major version is fine for me, so 2.0.1 and 2.0.999 are both OK.
Now, inside my code I can import this package and start to use its classes and interfaces, because I know that gradle will synchronize the references in Android Studio at compile time.

Another scenario occurs when you need to work with a public library that is not available on Maven Central but on a custom repo. In my case, I created an upgraded version of a famous library for Android Wear and I do not want to publish it on Maven Central; I would rather keep it in my own repo. In this case, in order to use the dependency, in the module’s build.gradle file you must first declare where the Maven repository is located and then add the dependency, like I did here:

repositories {
    maven {
        url ""
    }
}

dependencies {
    compile 'com.mariux.teleport.lib:teleportlib:0.1.6'
}

If this part is not clear, I personally found the gradle documentation area very helpful; it is available here.

Please note that Android Studio 1.5 works with gradle 1.5, while the latest gradle is now 2.1, so some features may refer to gradle 2.1 and not be compatible with Android Studio.

Create a Maven account

Assuming that everything is clear so far, now it’s time to dive into Maven and create your own account and repository. Without this setup you cannot create and publish your own library.

Head to Bintray and create a new account. You can create the new account using a username and password, or you can link one of your existing social accounts: Google+, GitHub or Twitter.


When your account is up and running, you should have an account home page available at this URL:[your_username].
On this page you can set up your user profile, change your profile picture and add social accounts.

Note: if, like me, you host your open source projects on GitHub, I suggest you link your GitHub account, because it will make it a lot easier to display release notes and documentation directly from GitHub.

Now look at the right pane of your user account and click on the Maven link. From there you will be redirected to your Maven package manager.

Click Add New Package to start creating your public Maven library:


On this page you have to set up your Maven package information. How you name your package is very important, because this is the naming convention we will carry forward in this tutorial, and it will also be used by your users.

From the owned repositories choose Maven and a “new Maven package” page will open:


Information regarding your package

In the create new package window, BinTray asks you for some basic information about your package:


In my case I am using GitHub, so I can easily port my source code repository, my readme files, the issue tracker and the wiki into BinTray.

Now our package in BinTray is called LicenseChecker, but we still do not have any code or library in it, so it’s time to move into Android Studio 1.5 and create our package.

Android Studio and Maven

At this point it’s time to make our Android library. In my specific case I have a library composed of three modules: a demo app for smartphones, a demo app for wearable devices, and my Android library itself:


01 – Preparation

Now, in order to be able to publish the library into BinTray we need to configure Android Studio.

  • Open the build.gradle file related to the project (the first one in my previous picture)
  • Add a reference to the following plugins:
      classpath 'com.jfrog.bintray.gradle:gradle-bintray-plugin:1.2'
      classpath "com.github.dcendents:android-maven-gradle-plugin:1.3"
  • Re-compile and verify that gradle finds your plugins

Now your project build.gradle file should look like this one:

buildscript {
    repositories {
        jcenter()
    }
    dependencies {
        classpath ''
        classpath 'com.jfrog.bintray.gradle:gradle-bintray-plugin:1.2'
        classpath "com.github.dcendents:android-maven-gradle-plugin:1.3"
    }
}

Second step: we need to apply the plug-ins to the libraries that will be published to BinTray. In my case the library project is licensecheckerlib, so I am going to edit the build.gradle of this specific module, apply the plug-ins, and rebuild:

apply plugin: ''
apply plugin: 'com.jfrog.bintray'
apply plugin: ''

android {
    compileSdkVersion 23
    buildToolsVersion "23.0.2"

Now, in order to upload your library, Maven needs the information contained in the POM file. If you don’t know what a POM file is, I suggest you have a look here.

Because we are using the Maven plugin for Android Studio, just add these two lines after your plugin declarations (still inside the library’s build.gradle file):

apply plugin: ''
apply plugin: 'com.jfrog.bintray'
apply plugin: ''

group = 'com.raffaeu.licensecheckerlib' // Change this to match your package name
version = '1.0.0' // Change this to match your version number

Here we are telling BinTray: “hey, look, I am going to upload a package called com.raffaeu.licensecheckerlib and its version is 1.0.0”.
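For reference, the group and version values above are what end up in the generated POM. Conceptually, the plugin produces metadata equivalent to this sketch (the artifactId here is inferred from the module name; the exact generated file may differ):

```xml
<!-- Sketch of the generated POM coordinates -->
<project>
  <groupId>com.raffaeu.licensecheckerlib</groupId>
  <artifactId>licensecheckerlib</artifactId>
  <version>1.0.0</version>
  <packaging>aar</packaging>
</project>
```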

The next step is optional in general, but mandatory if you are considering making your library visible on JCenter, Maven Central and similar: you need to create a sources .jar file. This is because the Maven plugin alone builds only .aar packages, which are not enough for JCenter. Still inside your library’s build.gradle file, create this task:

task generateSourcesJar(type: Jar) {
    from android.sourceSets.main.java.srcDirs
    classifier 'sources'
}

The second step to conform with JCenter and Maven Central is to also generate a JavaDoc. The JavaDoc is very helpful for your users, especially because you are releasing a custom library with custom APIs: the method void doSomething() is probably unknown to people outside your organization, and this is why JCenter suggests publishing a JavaDoc together with your library.

The JavaDoc should also be packed into a jar. To do so, we create an additional task called generateJavadocsJar and declare a dependency so that it will not start until the generateJavadocs task has completed.

task generateJavadocs(type: Javadoc) {
    source = android.sourceSets.main.java.srcDirs
    classpath += project.files(android.getBootClasspath().join(File.pathSeparator))
}

task generateJavadocsJar(type: Jar) {
    from generateJavadocs.destinationDir
    classifier 'javadoc'
}

generateJavadocsJar.dependsOn generateJavadocs

The last step of our preparation is to use Gradle's artifacts keyword. The artifacts block informs Gradle that the library build produces additional outputs, in our case the sources .jar and the documentation:

artifacts {
    archives generateJavadocsJar
    archives generateSourcesJar
}

At this point we need to build everything and make sure that the tasks run correctly and that our library also includes the documentation and sources .jar.

Go to the Gradle projects panel > “refresh” > your library > other > install, and double-click the install task to start it. It will rebuild your library and also include the artifacts required by JCenter:


You can double check that everything is done by browsing your library project’s folder and verifying that the following items exist:

Your project folder:

  • Build > outputs > aar
    • library-debug.aar
    • library-release.aar
  • Build > libs
    • library-1.0.0-javadoc.jar
    • library-1.0.0-sources.jar

02 – Publish the library

All right, now we know that our library builds correctly and can be published. This is very important because you can use the install task to rebuild everything and ensure that you are ready to go live. Technically speaking, every time you make a change you should rebuild using install and run your tests. If you get a green light, then you are ready to publish into Maven.

In order to publish the artifact into Maven, we need to tell the BinTray plug-in who we are and which project we are going to upload.
The entire documentation for the plugin settings is available here. 

bintray {
    user = '[your BinTray username]'
    key = '[Your bintray key]'
    configurations = ['archives']
    pkg {
        repo = 'maven'
        name = 'LicenseChecker' // the name of the package in BinTray
        licenses = ['Apache-2.0']
        vcsUrl = '' // your GitHub repo
        websiteUrl = '' // your website or whatever has documentation

        version {
            name = 'licensecheckerlib' // the name of your library project
            desc = 'This is the first version'
            released  = new Date()
            vcsTag = '1.0.0' // the version
        }
    }
}

Search for the bintrayUpload task and run it:


At this point you can head to BinTray and release your package to the public.


Note: Remember that every time you make a new release, BinTray will not publish the package until you confirm it. This is a sort of safeguard put in place by BinTray to avoid unwanted publishing.

As a last check, before asking BinTray to release your package over Maven and JCenter, you can verify that everything has been published correctly; in my case, here you go:


OXY, the Open Source SmartWatch

OXY, a new SmartWatch is coming November 15th

Net Architectures ltd., a startup company based in Bristol (United Kingdom), is going to launch an innovative SmartWatch called OXY™ on IndieGogo on November 15th.
The SmartWatch will be available in two shapes: Round and Square and in two colors: Black and Silver.
OXY is equipped with ELF OS and IWOP (Ingenic Wearable Open Platform), a custom Android ROM based on Android Lollipop 5.1.1, exclusively designed for wearable devices by Ingenic Semiconductors and available for download.
The platform is 100% open source and promises to speed up the development process for wearable devices.

The hardware has been designed and produced by Ingenic Semiconductors, a Chinese fabless semiconductor company founded in 2005 and based in Beijing.

Ingenic purchased licenses for the MIPS architecture instruction sets in 2009 and designs CPU microarchitectures based on them.
They have created a Dual Core micro CPU called M200, powered by a 1.2 GHz Dual Core processor and capable of running a full Android operating system.

We have obtained some preview pictures of how the watches will look, and we have to say that they have done a great job so far.


OXY is targeting a wide range of consumers: the SmartWatches look clean and their minimal design is meant to resemble the shape of classic watches. The case and wristband are made of CNC-finished 316L stainless steel, and the display is protected by a Corning© Gorilla© Glass layer to make it resistant to scratches and shocks.

The watch is rooted and the code is open source, which means that anybody can download the original ROM and create new customizations and apps for the watch.
A free marketplace will be available later this year, and Net Architectures ltd. promises to make available more than 40 different professional watchfaces and plenty of apps.
They have also opened their doors to the XDA community and want to make ELF OS the new open source Android ROM for wearable devices.


A powerful hardware on your wrist

The hardware specifications published on the OXY Google Plus page are really interesting. The core is powered by a tiny, low-energy MIPS Dual Core, and it has a dedicated GPU chipset capable of running video and animations without any hiccup.
The hardware is equipped with a wide range of sensors: Gyroscope, Accelerometer, Heart Rate, WiFi and Bluetooth, Speakers and Microphone, plus a mechanism to detect gestures.
The square model has a 320 mAh LiPo battery, while the round version has a 350 mAh one. The AMOLED version can run for more than 3 days on one charge, and in standby mode for over a week.
Below is a comparative table of the hardware provided for each model:


More than just making two shapes (a solution previously implemented only by LG), they also took a step further: OXY will be available with two different displays. The most expensive model is equipped with an AMOLED display produced by AUO, while the cheapest version will be equipped with a TFT transflective display. Both displays are protected by a layer of Corning© Gorilla© Glass and have a touch sensor, so there is no need to push any button with OXY.

OXY aims to settle in the middle tier of the SmartWatch market by providing a high-quality watch at a competitive price: the most expensive model, with black stainless steel and an AMOLED display, is going to be priced around 250 USD, while the cheapest square model, with a TFT transflective display and silver stainless steel, is going to be priced at 170 USD. The price includes the watch, a beautiful case, a magnetic charger and 1 year of manufacturer warranty.
OXY is not only building SmartWatches but a complete technology brand: they have also created a beautiful and ergonomic charging station, some power banks and some clothes related to the SmartWatch campaign.

Is OXY going to be the new Pebble? We will see when OXY opens its doors on IndieGogo on November 15th.

Android Plugin Application

For one of the projects I am working on at the moment, I need to implement a plug-in architecture where the main application is just a “view holder” and all the behaviors of the application are provided by a set of plug-ins.

The advantage of this approach is that in the future I will not need to re-distribute my entire framework over the marketplace; I can simply release new plug-ins that the customer can add, or update the existing ones.

Note: the code presented in this article is not optimized, and its only purpose is to explain one possible solution for implementing a custom plug-in architecture in Android. The methods exposed do not use an asynchronous pattern, so I suggest refactoring them before adopting this code in your real applications.

What is a plug-in architecture?

Before diving into the code I want to take a moment and explain what I personally mean by plug-in architecture, using the following diagram:

The previous picture is a description of JPF, the Java Plugin Framework, which is similar to the solution we are going to implement. The architecture is primarily composed of two components.

One component is the main application, the agnostic framework capable of loading plug-ins. The second component is the plug-in registry, a repository that informs the system about the available plug-ins installed in the system.

In Android we have tons of different ways to implement a plug-in architecture. For example, we could have a plug-in composed of a .jar package that contains activities and code related to the plug-in. But I found out that in Android the best way to package things is to use the .apk system. With an .apk I can include Activities, Fragments, resources and layouts like a standalone application, with the advantage of being able to use some sort of contract to force the code to be shaped in a certain way.
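The contract idea can be sketched in plain Java (illustrative only, no Android APIs involved): the host keeps a registry of entries that all honour the same interface, and only the entries declaring the host's category are exposed, which is exactly what we will later do with a custom intent category:

```java
import java.util.ArrayList;
import java.util.List;

public class PluginRegistry {

    // the contract every plug-in must honour
    interface Plugin {
        String name();
        String category(); // mirrors the custom Android intent category
    }

    private final List<Plugin> plugins = new ArrayList<>();

    void register(Plugin p) {
        plugins.add(p);
    }

    // only plug-ins declaring the host's category are visible,
    // just like filtering activities by a custom intent category
    List<Plugin> available(String category) {
        List<Plugin> result = new ArrayList<>();
        for (Plugin p : plugins) {
            if (p.category().equals(category)) {
                result.add(p);
            }
        }
        return result;
    }
}
```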

Retrieve available packages

The basic project is a simple Android application with a main activity. The main activity contains a ListView that will display all the available .apk files that we can consider plug-ins for our application.

But first of all, let’s see how we can retrieve a list of installed .apk files using some basic Android APIs. I need to create a custom ListView item that can be used to display the information related to a package. This is the custom class:

package ltd.netarchitectures.na_plugins;

import android.graphics.drawable.Drawable;

public class ApplicationDetail {

    private CharSequence label;
    private CharSequence name;
    private Drawable icon;

    public ApplicationDetail(CharSequence label, CharSequence name, Drawable icon) {
        this.label = label;
        this.name = name;
        this.icon = icon;
    }

    public CharSequence getLabel() {
        return label;
    }

    public CharSequence getName() {
        return name;
    }

    public Drawable getIcon() {
        return icon;
    }
}
With this custom class we can represent an available package. For now the information exposed is enough, but we could retrieve more, like the company who made the package, the size of the package, the version and so on. We could also inspect the package to see how many Fragments or Activities are available. Again, the limit here is your imagination.

Now we need to fetch all available packages. Because we do not have any plug-in available yet, let’s see how we can fetch all the installed applications, just to start having a look at the Android API.

    private void loadApplication(){
        // package manager is used to retrieve the system's packages
        packageManager = getPackageManager();
        applications = new ArrayList<ApplicationDetail>();
        // we need an intent that will be used to load the packages
        Intent intent = new Intent(Intent.ACTION_MAIN, null);
        // in this case we want to load all packages available in the launcher
        intent.addCategory(Intent.CATEGORY_LAUNCHER);
        List<ResolveInfo> availableActivities = packageManager.queryIntentActivities(intent, 0);
        // for each one we create a custom list view item
        for(ResolveInfo resolveInfo : availableActivities){
            ApplicationDetail applicationDetail = new ApplicationDetail(
                    resolveInfo.loadLabel(packageManager),
                    resolveInfo.activityInfo.packageName,
                    resolveInfo.loadIcon(packageManager));
            applications.add(applicationDetail);
        }
    }

At the end we will have a list view populated with all available applications (I skip the code that creates the custom list view item because it should be quite easy to implement; you can find it in the source code of this article). In my case, in alphabetical order:


Create a plug-in

In order not to lose any of the advantages of the Android application package, we want to distribute our plug-ins as standalone packages, but we don’t want the user to be able to execute them as standalone applications.

First of all, a little explanation of how Android works and how each package is treated by the underlying Linux system:

  • The Android operating system is a multi-user Linux system in which each app is a different user.
  • By default, the system assigns each app a unique Linux user ID (the ID is used only by the system and is unknown to the app). The system sets permissions for all the files in an app so that only the user ID assigned to that app can access them.
  • Each process has its own virtual machine (VM), so an app’s code runs in isolation from other apps.
  • By default, every app runs in its own Linux process. Android starts the process when any of the app’s components need to be executed, then shuts down the process when it’s no longer needed or when the system must recover memory for other apps.

All right, so first of all we need to create a new Android project and set the project to run in the background. The project will have its own icon and activities, but it cannot be shown in the home launcher.

<activity
    android:label="@string/app_name" >
    <intent-filter>
        <action android:name="android.intent.action.MAIN" />
        <category android:name="android.intent.category.LAUNCHER" />
    </intent-filter>
</activity>

Once we replace the LAUNCHER category with a custom one, the application is still installed in our system but is not visible in the launcher, as in the following screenshot:


The next step is to share a custom intent category between my plug-in application and my main application and assign that category to the plug-in. In this way my list view will be populated only with the list of available plug-ins.

If you pay attention to the Android manifest, the category is nothing more than a custom string, so in my plug-in manifest I will change the intent filter to the following one:

<activity
    android:label="@string/app_name" >
    <intent-filter>
        <action android:name="android.intent.action.MAIN" />
        <category android:name="ltd.netarchitectures.PLUGIN" />
    </intent-filter>
</activity>

And inside my ListView adapter I will load only the activities that declare my custom category, like this:

packageManager = getPackageManager();
applications = new ArrayList<>();
Intent intent = new Intent(Intent.ACTION_MAIN, null);
intent.addCategory("ltd.netarchitectures.PLUGIN");
List<ResolveInfo> availableActivities = packageManager.queryIntentActivities(intent, 0);

And now when I start my main application only my plugins will be loaded:


Easy, isn’t it? In the next article I will explain how we can execute this plug-in within the same process of the main activity and how we can detect when a new plug-in is installed in the system.

Android and the transparent status bar

With the introduction of Google Material Design we also got a new status bar design, and we can choose between three different layouts:

  • Leave the status bar as is (usually black background and white foreground)
  • Change the color of the status bar using a different tone
  • Make the status bar transparent

The picture below shows the three different solutions provided with the Material Design guidelines:

A) Classic Status Bar

B) Colored Status Bar

C) Transparent Status Bar

In this post I want to give a working solution that allows you to achieve all of these variations of the status bar, except for the first one, which is the default layout of Android; if you don’t want to comply with the Material Design guidelines, just leave the status bar black.

Change the Color of the StatusBar

The first solution we want to try here is to change the color of the status bar. I have a main layout with a Toolbar component in it, and the Toolbar has a background color like the following:


So according to Material Design, my status bar should be colored using the following 700 tone variation:


If you are working with Material Design only and Android Lollipop, this is quite easy to accomplish: just set the proper attributes inside the Material theme style (v21) XML file, as follows:

<!-- This is the color of the Toolbar -->
<item name="colorPrimary">@color/primary</item>
<!-- This is the color of the Status bar -->
<item name="colorPrimaryDark">@color/primary_dark</item>
<!-- The Color of the Status bar -->
<item name="statusBarColor">@color/primary_dark</item>

Unfortunately this solution does not make your status bar transparent, so if you have a Navigation Drawer the final result will look a bit odd compared to the real Material Design guidelines, like the following:


As you can see, the status bar simply covers the Navigation Drawer, giving an odd final layout. With this simple solution you can change your status bar color, but only on Lollipop.

In Android KitKat you cannot change the color of the status bar except by using the solution shown in the next section, because the statusBarColor attribute was only introduced in Lollipop.

Make the StatusBar transparent

A second solution is to make the Status Bar transparent. This is easy to achieve by using the following XML attributes in your Styles.xml and Styles(v21).xml:

    <!-- Make the status bar translucent -->
    <style name="AppTheme" parent="AppTheme.Base">
        <item name="android:windowTranslucentStatus">true</item>
    </style>

But with only this solution in place we get another odd result, where the Toolbar moves behind the status bar and gets cut, like the following screenshot:


So first of all we need to tell the Activity to add some padding to our Toolbar, and the padding should be the size of the status bar, which is different from one device to another. Achieving this is quite simple: first we get the status bar height with this function:

// A method to find height of the status bar
public int getStatusBarHeight() {
    int result = 0;
    int resourceId = getResources().getIdentifier("status_bar_height", "dimen", "android");
    if (resourceId > 0) {
        result = getResources().getDimensionPixelSize(resourceId);
    }
    return result;
}

Then in our OnCreate method we specify the padding of the Toolbar with the following code:

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_main); // assuming a layout named activity_main

    // Retrieve the AppCompat Toolbar (assuming its id is "toolbar")
    Toolbar toolbar = (Toolbar) findViewById(R.id.toolbar);

    // Set the padding to match the Status Bar height
    toolbar.setPadding(0, getStatusBarHeight(), 0, 0);
}

And finally we can see that the status bar is transparent and that our Toolbar has the right padding. Unfortunately the behavior of Lollipop and KitKat is totally different: in Lollipop the system draws a 20% translucency, while KitKat does not draw anything, so the final result on the two systems is completely different:


So, in order to get the final result looking the same on both systems, we need to use a nice library called Android System Bar Tint, available here on GitHub. This library is capable of re-tinting the status bar with the color we need, and we can also specify a level of transparency. Because the default Material Design status bar should be 20% darker than the Toolbar color, we can tint the status bar with a dark translucent color such as #20000000 (you can also provide a darker color and play with the transparency; this is really up to you).
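A quick arithmetic note on that ARGB literal (plain Java, illustrative only): the leading byte of an #AARRGGBB color is the alpha channel. Note that 0x20 is 32/255, roughly 12.5% opacity, while an exact 20% black would be #33000000, so pick whichever shade looks best to you:

```java
public class AlphaHex {
    // Convert a darkness percentage (0.0 to 1.0) into the two-digit
    // alpha byte that leads an #AARRGGBB color literal.
    static String alphaByte(double percent) {
        int alpha = (int) Math.round(percent * 255);
        return String.format("%02X", alpha);
    }

    public static void main(String[] args) {
        System.out.println(alphaByte(0.20));  // "33" -> #33000000 is exactly 20% black
        System.out.println(0x20 / 255.0);     // ~0.125 -> #20000000 is ~12.5% black
    }
}
```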

So, going back to our onCreate method, after we set up the top padding for the Toolbar we can change the color of the status bar using the following code:

// create our manager instance after the content view is set
SystemBarTintManager tintManager = new SystemBarTintManager(this);
// enable status bar tint
tintManager.setStatusBarTintEnabled(true);
// enable navigation bar tint
tintManager.setNavigationBarTintEnabled(true);
// set the transparent color of the status bar, 20% darker
tintManager.setTintColor(Color.parseColor("#20000000"));

At this point, if we test our application again, the final result is pretty nice, and the overlap of the Navigation Drawer is exactly how it is supposed to be in the Material Design guidelines:


The next video shows the final results running on KitKat and Lollipop emulators using Genymotion.

The Final result on Lollipop and KitKat

Understand Density Independent Pixels (DPI)

If you are working on a mobile application (using mobile CSS, the native Android SDK or the native iOS SDK), the first problem you are going to face is the difference between the various screen sizes. For example, if you work with Android you will notice that different devices have different screen resolutions. Android categorizes these devices into 4 different buckets called, respectively, MDPI, HDPI, XHDPI and XXHDPI.

As I usually say, a picture is worth a thousand words:

Figure 1. As you can see, in this case we have 4 devices with 4 different pixel resolutions but also 4 different DPI classifications.

What is DPI and why should we care?

DPI stands for Dots Per Inch, which can be translated as how many pixels can be drawn on a screen for a given inch of screen space.

This measure is totally unbound from the screen size and the pixel resolution, so screens of different sizes and different resolutions may be classified within the same DPI category, and screens with the same size but different resolutions may be classified into different DPI categories.
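The DPI of a panel is in fact easy to compute yourself: divide the diagonal length in pixels by the diagonal size in inches. Here is a small sketch (plain Java; the 4.95-inch 1920x1080 panel is just an illustrative example, not one of the devices in Figure 1):

```java
public class ScreenDpi {
    // DPI = diagonal pixel count / diagonal size in inches
    static long dpi(int widthPx, int heightPx, double diagonalInches) {
        double diagonalPx = Math.sqrt((double) widthPx * widthPx
                                    + (double) heightPx * heightPx);
        return Math.round(diagonalPx / diagonalInches);
    }

    public static void main(String[] args) {
        // e.g. a 4.95" 1920x1080 panel computes to ~445 dpi, an XXHDPI-class screen
        System.out.println(dpi(1920, 1080, 4.95));
    }
}
```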

Assuming we load on our phone a raster picture XX px wide, this is the result we will see at different DPI if we keep the image at the same physical size:


The blurring effect is caused by the fact that on a 165 dpi screen the number of pixels drawn per inch (165) is far lower than on a 450 dpi screen, so the first thing we lose is the sharpness of the image.

How does Android work with DPI?

In Android you can classify your devices’ screens into 4 or more different DPI buckets, which categorize a device’s screen depending on its DPI and not on its pixel resolution or screen size. The picture below shows the available DPI classifications with a sample device for each category. You can find all the available DPI classifications on this lovely website, DPI Love.


So, for Android specifically, a device of 160 DPI has a 1:1 ratio between dp and pixels on the screen, while a device with 480 DPI or more has a 1:3 ratio compared to the same design on a 160 DPI screen.

Based on this classification we can easily derive the following formula, which can be used to calculate the density-independent resolution of a device based on its DPI classification and pixel resolution:


The formula can be translated as dp = px * 160 / dpi. So let’s make a couple of examples.

We want to draw on the screen a rectangle that should be 200px * 50px on the MDPI screen we are using for mocking the UI (this is what I call the default viewport).

Note: in the Android SDK you will refer to dp to define a density-independent measure, not DPI; this is why the previous formula has px (pixels) on one side and dp (density-independent pixels) on the other.

Considering the previous list of devices (Figure 1), this is the result I came up with in order to have the same aspect ratio for my rectangle over multiple devices, starting from an MDPI viewport:

GALAXY ACE    MDPI     1:1      200 * 50 px    200 * 50 dp
HTC DESIRE    HDPI     1:1.5    200 * 50 px    133 * 33 dp
NEXUS 7       XHDPI    1:2      200 * 50 px    100 * 25 dp
NEXUS 6       XXHDPI   1:3      200 * 50 px     67 * 16 dp
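The table above can be reproduced by applying the formula directly. Here is a small sketch (plain Java, rounding to the nearest whole dp) assuming the standard bucket densities of 240 (HDPI), 320 (XHDPI) and 480 (XXHDPI) dpi:

```java
public class DpConverter {
    // dp = px * 160 / dpi, rounded to the nearest whole dp
    static long pxToDp(int px, int dpi) {
        return Math.round(px * 160.0 / dpi);
    }

    public static void main(String[] args) {
        System.out.println(pxToDp(200, 240)); // HDPI   -> 133
        System.out.println(pxToDp(50, 240));  // HDPI   -> 33
        System.out.println(pxToDp(200, 320)); // XHDPI  -> 100
        System.out.println(pxToDp(200, 480)); // XXHDPI -> 67
    }
}
```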

Regarding iOS the ratio is exactly the same except for XHDPI (retina) where the ratio is 1:2.25 and not 1:3 like in Android. iOS does not offer a classification for XXHDPI devices.

Entity Framework 6 and Collections With DDD

If you start to work with Entity Framework 6 and a real domain modeled following the SOLID principles and the most commonly known rules of DDD (Domain Driven Design), you will start to clash with some limits imposed by this ORM.

Let’s start with a classic example of a normal entity that we define as a UserRootAggregate. For this root aggregate we have defined the following business rules:

  1. A User Entity is a root aggregate
  2. A User Entity can hold zero or more UserSetting objects
  3. A UserSetting can be created only within the context of a User root aggregate
  4. A UserSetting can be modified or deleted only within the context of a User root aggregate
  5. A UserSetting holds a reference to its parent User

Based on these DDD principles, I will create the following objects:

A User Entity is a root aggregate

/// My Root Aggregate
public class User : IRootAggregate
{
   public Guid Id { get; set; }

   /// A root aggregate can be created
   public User() { }
}

A User Entity can hold 0 or infinite amount of UserSettings

public class User : IRootAggregate
{
   public Guid Id { get; set; }
   public virtual ICollection<UserSetting> Settings { get; set; }

   public User()
   {
      this.Settings = new HashSet<UserSetting>();
   }
}

A UserSetting can be created or modified or deleted only within the context of a User root aggregate

public class UserSetting
{
   public Guid Id { get; set; }
   public string Value { get; set; }
   public User User { get; set; }

   internal UserSetting(User user, string value)
   {
      this.Value = value;
      this.User = user;
   }
}

/// inside the User class
public void CreateSetting(string value)
{
   var setting = new UserSetting(this, value);
   this.Settings.Add(setting);
}

public void ModifySetting(Guid id, string value)
{
   var setting = this.Settings.First(x => x.Id == id);
   setting.Value = value;
}

public void DeleteSetting(Guid id)
{
   var setting = this.Settings.First(x => x.Id == id);
   this.Settings.Remove(setting);
}

So far so good. Now, considering that we have a foreign key between the UserSetting table and the User table, we can easily map the relationship with this class:

public class UserSettingMap : EntityTypeConfiguration<UserSetting>
{
   public UserSettingMap()
   {
       HasRequired(x => x.User)
           .WithMany(x => x.Settings)
           .Map(cfg => cfg.MapKey("UserID"));
   }
}

Now below I want to show you the strange behavior of Entity Framework 6.

If you add a child object and save the context, Entity Framework will properly generate the INSERT statement:

using (DbContext context = new DbContext())
{
   var user = context.Set<User>().First();
   user.CreateSetting("my value");
   context.SaveChanges();
}


If you try to UPDATE a child object, again EF is smart enough to issue the UPDATE statement you would expect:

using (DbContext context = new DbContext())
{
   var user = context.Set<User>()
                     .Include(x => x.Settings).First();
   var setting = user.Settings.First();
   setting.Value = "new value";
   context.SaveChanges();
}


The problem occurs with the DELETE. You would issue the following C# statement and think that Entity Framework, like any other ORM, will be smart enough to issue the DELETE statement …

using (DbContext context = new DbContext())
{
   var user = context.Set<User>()
                     .Include(x => x.Settings).First();
   var setting = user.Settings.First();
   user.DeleteSetting(setting.Id);
   context.SaveChanges();
}


But you will get a nice exception like the one below:


An error occurred while saving entities that do not expose foreign key properties for their relationships.

The EntityEntries property will return null because a single entity cannot be identified as the source of the exception.

Handling of exceptions while saving can be made easier by exposing foreign key properties in your entity types.

See the InnerException for details. —>

System.Data.Entity.Core.UpdateException: A relationship from the ‘UserSetting_User’ AssociationSet is in the ‘Deleted’ state.

Given multiplicity constraints, a corresponding ‘UserSetting_User_Source’ must also in the ‘Deleted’ state.

This means that EF does not understand that we want to delete the child object, so inside the scope of our database context we have to do this:

using (DbContext context = new DbContext())
{
   var user = context.Set<User>()
                     .Include(x => x.Settings).First();
   var setting = user.Settings.First();
   user.DeleteSetting(setting.Id);

   // inform EF that the child must be deleted, not just orphaned
   context.Entry(setting).State = EntityState.Deleted;
   context.SaveChanges();
}


I have searched a lot about this problem, and you can actually read from the Entity Framework team that this feature is still not available in the product: