11 posts tagged “visualisation”


Updates in Redash v8

Reading time: 5 minutes

Two weeks ago the final release of Redash v8 came out. The complete list of improvements can be found on the v8 beta release page.

In this overview of the new features, I will focus only on those that most affect the day-to-day experience of using Redash, i.e. the most illustrative ones for you and me:

  • Support for multi-select in dropdown (and query dropdown) parameters.
  • Support for dynamic values in date and date-range parameters.
  • Search dropdown parameter values.
  • Pivot Table: support hiding totals.
  • New Visualization: Details View.

Support for multi-select in dropdown (and query dropdown) parameters.

Redash 8 finally brings the long-awaited support for selecting several values in a “dropdown” or “query-based dropdown” parameter:

When we select several values, they are turned into a comma-separated list. Optionally, each value in the list can be wrapped in single or double quotes.

For us this means we can now use IN conditions with a parameter in our queries:
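For illustration, a minimal sketch (the orders table and the cities parameter here are hypothetical; with quoting enabled for the parameter, Redash substitutes {{ cities }} with a quoted, comma-separated list such as 'Moscow', 'Berlin'):

-- "cities" is a hypothetical multi-select dropdown parameter;
-- Redash expands it into 'value1', 'value2', ... before the query runs.
SELECT city,
       count(DISTINCT user_id) AS users
FROM orders
WHERE city IN ({{ cities }})
GROUP BY city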

When such parameter values are applied, the browser address bar will contain a URL like the following:

Support for dynamic values in date and date-range parameters.

An extremely handy and useful feature for selecting dates, long available in Tableau but missing from Redash until recently: dynamic date values:

By clicking the lightning-bolt button you can pick the desired period, and Redash will substitute the required values into the query by itself. This way you can, for instance, quickly view statistics for the last week or the last month.
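A minimal sketch of using such a parameter in a query (“period” is a hypothetical date-range parameter set to a dynamic value such as “Last month”; Redash date-range parameters expose start and end values):

-- Hypothetical table "sessions" with a date column "session_date";
-- Redash fills in {{ period.start }} and {{ period.end }} itself.
SELECT count(DISTINCT user_id) AS active_users
FROM sessions
WHERE session_date BETWEEN '{{ period.start }}' AND '{{ period.end }}'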

Search dropdown parameter values.

Suppose we use a “query-based dropdown” parameter, and the selected query returns more than 300 rows among which we need to find the value we’re after. In version 8 this problem is solved by autocomplete and search over parameter values. In the example below I type the string “Orga” and get all the values that contain it. Very convenient.

Pivot Table: support hiding totals.

A long-awaited feature that allows hiding the row / column totals in a pivot table.

New Visualization: Details View

A new way to present query results: a details view for each row. I assume it can be handy on a dashboard when filtering by a specific user / partner.
It displays the column names and the values of a specific row in tabular form.


BCG (Boston Consulting Group) Matrix Chart

Reading time: 5 minutes

Let me dilute the blog with an interesting report that was developed for Yota in November 2011. It was inspired by the BCG Matrix.

We had: one copy of Excel, 75 VBA macros, an ODBC connection to Oracle, and SQL queries to databases of all sorts and colours. We will walk through building the report with this stack, but first let’s talk about the idea behind the report.

The BCG Matrix is a 2x2 matrix on which client segments are displayed as circles, centred at the intersection of the coordinates formed by the corresponding rates of the two selected indicators.

To keep it simple, we had to divide all the company’s clients into 4 segments: ARPU above / below average, and traffic consumption (the main service) above / below average. This yields 4 quadrants, and a bubble has to be placed into each of them, where the size of a bubble represents the total number of users in the segment. On top of that, a second, smaller bubble was added to each quadrant to show the churn in that segment (the author’s improvement).

What did we want to get as the output?
A chart like this:

The BCG matrix built on Yota’s data

The task statement is more or less clear; let’s move on to the implementation.
Assuming we’ve already collected all the required data (that is, we’ve learned to compute the average ARPU and the average traffic consumption; we won’t examine the SQL query in this post), the key task is to understand how to place the bubbles in the required spots using Excel’s tools.

A bubble chart comes to the rescue here:

Insert – Chart – Bubble

Open the Select Data Source menu and note what is needed to build a chart of the type we want: X coordinates, Y coordinates, and bubble size values.

Great. If we assume that our chart spans the X axis from -1 to 1 and the Y axis from -1 to 1, then the centre of the upper-right bubble is the point (0.5; 0.5) on the chart. We place all the other bubbles likewise.

The Churn bubbles (displaying the churn) deserve separate treatment: they sit to the right of the main bubble and may overlap it, so we place the upper-right churn bubble at the empirically chosen coordinates (0.65; 0.35).

So, for the four main and four additional bubbles, we can organize the data as follows:
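As a rough illustration (a sketch; the exact sheet layout in the original screenshot may differ), the centres of the four main bubbles follow directly from the reasoning above, and each churn bubble gets its own empirically shifted coordinates such as (0.65; 0.35):

Bubble       | X    | Y    | Size
Upper-right  |  0.5 |  0.5 | users in segment
Upper-left   | -0.5 |  0.5 | users in segment
Lower-left   | -0.5 | -0.5 | users in segment
Lower-right  |  0.5 | -0.5 | users in segment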

Let’s take a closer look at how we’ll use this data:

For the X axis we take the horizontal coordinates of our bubble centres, stored in cells A9:A12; for the Y axis, the vertical coordinates stored in cells B9:B12; the bubble sizes are stored in cells E9:E12.
Then we add another data series for the Churn bubbles, specifying all the required parameters once again.

We’ll get the following chart:

Then we make it pretty: change the colours, remove the axes, and get a beautiful result.

After adding the required data labels, we get what the task originally called for.

Share your experience in the comments: have you built charts like this, and how did you approach the task?


How to calculate Retention?

Reading time: 6 minutes

In this post we will look at how to properly build a Retention report using Redash and SQL.
To start, let’s briefly explain what the Retention rate metric is and why it matters.

Retention rate

The Retention rate metric is widespread and particularly popular in the mobile industry, since it shows how well a product draws users into everyday use. Let’s recall (or learn) how Retention is calculated:

Day X Retention is the percentage of users who return to the product on day X. In other words, if 100 new users arrived on a given day (day 0) and 15 of them came back on day 1, then day 1 Retention equals 15/100 = 15%.
Retention of days 1, 3, 7 and 30 is most often singled out as the most descriptive set of product metrics, but it is useful to look at the whole Retention curve and draw conclusions from it.

Retention curve

Ultimately we want to build a curve that shows user retention from day 0 to day 30.

Retention rate curve from day 0 to day 30

Rolling Retention (RR)

Besides the classic Retention rate, there is Rolling Retention (hereinafter RR). When calculating RR, all days after day X are considered as well. Thus, day 1 RR is the number of users who returned on day 1 or any later day.

Let’s compare day 10 Retention and Rolling Retention:
Retention10 — the number of users who returned on day 10 / the number of users who installed the app 10 days ago * 100%.
Rolling Retention10 — the number of users who returned on day 10 or later / the number of users who installed the app 10 days ago * 100%.

Granularity (retention of time periods)

In some industries and tasks it is useful to know Retention for a specific day (most often in the mobile industry); in other cases it is useful to look at user retention over longer intervals, e.g. weekly or monthly periods (this is often handy in e-commerce and retail).

An example of monthly cohorts and the corresponding monthly Retention
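As a sketch of this idea in Impala (using the user table described in the next section), the registration date can simply be truncated to a month to form monthly cohorts instead of daily ones:

-- Monthly cohort variant: group registrations by month, not by day.
SELECT from_unixtime(user.installed_at, "yyyy-MM") AS reg_month,
       ndv(user.id) AS users
FROM USER
GROUP BY 1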

How to build a Retention report in SQL?

We sorted out above how to calculate Retention with formulas; now let’s implement it in SQL.
Assume we have two tables: user — storing user identifiers and meta-information, and client_session — information about users’ visits to the mobile app.
Only these two tables appear in the query, so you can easily adapt it to your own schema.
Note: in this code I use Impala as the DBMS.

Collecting the size of cohorts

SELECT from_unixtime(user.installed_at, "yyyy-MM-dd") AS reg_date,
          ndv(user.id) AS users
   FROM USER
   WHERE from_unixtime(user.installed_at)>=date_add(now(), -60)
     AND from_unixtime(user.installed_at)<=date_add(now(), -31)
   GROUP BY 1

Let’s unpack this fairly simple query: for each day we count the number of unique users over the period [60 days ago; 31 days ago].
So that you don’t have to dig through the documentation: the ndv() function in Impala is an analogue of count(distinct).

Calculating the number of returned users for each cohort

SELECT from_unixtime(user.installed_at, "yyyy-MM-dd") AS reg_date,
          datediff(cast(cs.created_at AS TIMESTAMP), cast(installed_at AS TIMESTAMP)) AS date_diff,
          ndv(user.id) AS ret_base
   FROM USER
   LEFT JOIN client_session cs ON cs.user_id=user.id
   WHERE 1=1
     AND datediff(cast(cs.created_at AS TIMESTAMP), cast(installed_at AS TIMESTAMP)) between 0 and 30
     AND from_unixtime(user.installed_at)>=date_add(now(), -60)
     AND from_unixtime(user.installed_at)<=date_add(now(), -31)
   GROUP BY 1, 2

The key part of this query is the datediff function: for each cohort and each datediff value we count the number of unique users with the same ndv() function (in effect, the number of users who returned on days 0 through 30).

Great, now we have the size of cohorts and the number of returned users.
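As an aside, if you need Rolling Retention (described above) rather than classic Retention, here is a minimal sketch of the change to this step, under the same assumptions (ndv(), like count(distinct), ignores NULLs):

-- Day 10 Rolling Retention base: count users whose return happened
-- on day 10 OR LATER, instead of exactly on day 10.
SELECT from_unixtime(user.installed_at, "yyyy-MM-dd") AS reg_date,
       ndv(CASE WHEN datediff(cast(cs.created_at AS TIMESTAMP), cast(installed_at AS TIMESTAMP)) >= 10
                THEN user.id END) AS rr10_base
FROM USER
LEFT JOIN client_session cs ON cs.user_id=user.id
WHERE from_unixtime(user.installed_at)>=date_add(now(), -60)
  AND from_unixtime(user.installed_at)<=date_add(now(), -31)
GROUP BY 1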

Putting it all together

SELECT reg.reg_date AS date_registration,
       reg.users AS cohort_size,
       cohort.date_diff AS day_difference,
       cohort.ret_base AS retention_base,
       cohort.ret_base/reg.users AS retention_rate
FROM
  (SELECT from_unixtime(user.installed_at, "yyyy-MM-dd") AS reg_date,
          ndv(user.id) AS users
   FROM USER
   WHERE from_unixtime(user.installed_at)>=date_add(now(), -60)
     AND from_unixtime(user.installed_at)<=date_add(now(), -31)
   GROUP BY 1) reg
LEFT JOIN
  (SELECT from_unixtime(user.installed_at, "yyyy-MM-dd") AS reg_date,
          datediff(cast(cs.created_at AS TIMESTAMP), cast(installed_at AS TIMESTAMP)) AS date_diff,
          ndv(user.id) AS ret_base
   FROM USER
   LEFT JOIN client_session cs ON cs.user_id=user.id
   WHERE 1=1
     AND datediff(cast(cs.created_at AS TIMESTAMP), cast(installed_at AS TIMESTAMP)) between 0 and 30
     AND from_unixtime(user.installed_at)>=date_add(now(), -60)
     AND from_unixtime(user.installed_at)<=date_add(now(), -31)
   GROUP BY 1, 2) cohort ON reg.reg_date=cohort.reg_date
    ORDER BY 1,3

We now have a query that calculates Retention for each cohort, and the result can be displayed as follows:

Retention rate, calculated for each cohort of users

Building a single Retention curve

Let’s modify the query a bit to get the data for a single Retention curve:

SELECT 
       cohort.date_diff AS day_difference,
       avg(reg.users) AS cohort_size,
       avg(cohort.ret_base) AS retention_base,
       avg(cohort.ret_base)/avg(reg.users)*100 AS retention_rate
FROM
  (SELECT from_unixtime(user.installed_at, "yyyy-MM-dd") AS reg_date,
          ndv(user.id) AS users
   FROM USER
   WHERE from_unixtime(user.installed_at)>=date_add(now(), -60)
     AND from_unixtime(user.installed_at)<=date_add(now(), -31)
   GROUP BY 1) reg
LEFT JOIN
  (SELECT from_unixtime(user.installed_at, "yyyy-MM-dd") AS reg_date,
          datediff(cast(cs.created_at AS TIMESTAMP), cast(installed_at AS TIMESTAMP)) AS date_diff,
          ndv(user.id) AS ret_base
   FROM USER
   LEFT JOIN client_session cs ON cs.user_id=user.id
   WHERE 1=1
     AND datediff(cast(cs.created_at AS TIMESTAMP), cast(installed_at AS TIMESTAMP)) between 0 and 30
     AND from_unixtime(user.installed_at)>=date_add(now(), -60)
     AND from_unixtime(user.installed_at)<=date_add(now(), -31)
   GROUP BY 1,2) cohort ON reg.reg_date=cohort.reg_date
    GROUP BY 1        
    ORDER BY 1

Now we have the Retention rate averaged over all cohorts, calculated for each day.



Overview of Yandex DataLens

Reading time: 5 minutes

Let’s take a break from the receipt data collection project for a while; we will talk about its next steps a bit later.

Today we’ll discuss a new service from Yandex – DataLens (access to the demo was kindly provided by my good friend Vasiliy Ozerov and the Fevlake / Rebrain team). The service is currently in Preview mode and is, in essence, a cloud BI tool. Its main shtick is that it works easily and conveniently with ClickHouse (Yandex ClickHouse).

Connecting data sources

Let’s review the essentials: connecting a data source and configuring a dataset.
The choice of DBMSs is not vast, but the main ones are there. For our testing, let’s take MySQL.

Data source selection in DataLens

Based on the connection we created, we are prompted to create a dataset:

The dataset settings interface: defining dimensions and measures

At this stage you define which table attributes become dimensions and which become measures, and you can choose the aggregation type for each measure.
Unfortunately, I couldn’t figure out how to specify several joined tables (for example, attach a lookup table for dimensions) instead of a single table. Perhaps at this stage the developers expect us to solve this by creating an appropriate view.

Data visualization

As for the interface itself, everything is quite easy and convenient; it feels like a cloud version of Tableau. Compared to Redash, which is most often used with ClickHouse, the visualization capabilities are simply staggering.
The pivot tables alone, where Measure Names can be used as column names, are worth a lot:

Setting up pivot tables in DataLens

Naturally, basic charts can be built in DataLens as well:

Building a line chart in DataLens

There are also area charts:

Building an area chart in DataLens

However, I couldn’t find out how dates are grouped by month / quarter / week. Judging by the sample data available in the demo version, the developers currently solve this by creating additional attributes (DayMonth, DayWeek, etc.).

Dashboards

For now, the interface for creating dashboard blocks looks clunky, and the dialog windows are not always self-explanatory. Here, for instance, is the window for specifying a parameter:

A not very intuitive settings window for dashboard parameters

However, in the gallery of examples we can see highly functional and convenient dashboards with selectors, tabs and parameters:

An example of a working dashboard with parameters and tabs in DataLens

I look forward to the interface shortcomings being fixed and DataLens improving, and I’m getting ready to use it together with ClickHouse!


Building a funnel report in Redash

Reading time: 3 minutes

As promised, let’s review the Funnel visualization for a report in Redash.
First of all, let’s build a query to the data source we created earlier – Google Analytics.

Place the following text in the query editor:

{
    "ids": "ga:128886640",
    "start_date": "30daysAgo",
    "end_date": "yesterday",
    "metrics": "ga:users,ga:goal1Completions,ga:goal2Completions,ga:goal3Completions"
}

In this query we ask the Google Analytics API for the last 30 days of data on account ga:128886640. We want to see the number of users and the number of completions of goals 1, 2 and 3.

As a result, our table will look as follows:

ga:users | ga:goal1Completions | ga:goal2Completions | ga:goal3Completions
3,926    | 105                 | 41                  | 32

Great, that’s exactly what we need to build a funnel.
Now let me tell you about one very useful Redash feature: query results. To query the results of other queries as tables, go to Data Sources, search for query-results (beta) and connect it as a new data source.
Now we can refer to the results of Redash queries; for instance, we can use the results of our query to the Google Analytics API.

How do we do that?
Choose the query-results data source on the left:

The drop-down data source menu (on the left in the query editor)

Now let’s build the funnel visualization. For that, we write the following SQL query:

select 'Add an item to the shopping cart' as step_name, ga_goal1Completions as goalCompletion from query_8
union all
select 'View the shopping cart' as step_name, ga_goal2Completions from query_8
union all
select 'Order processing' as step_name, ga_goal3Completions from query_8

Here query_8 is the table with the results of our query to the Google Analytics data source.

Let’s set up the visualization:

Carefully selecting the settings to achieve the desired result

As a result, we receive the funnel of conversions from one goal to another:

You can place this funnel on a dashboard and add filters / parameters to it.