Linear Trend Analysis using Least Squares Regression and R

Trend analysis helps you estimate whether the metric of interest (daily users, sessions, sales, and so on) is growing or declining. Sometimes the trend is obvious:


But what about this example of real-life data:


Can we still be sure whether the metric grows on average over time? A simple way to check is to fit the least squares regression line:


From the plot above we can see that our metric slowly grows over time.

# R script to plot the data and the regression line

# data is the vector of data points (one per day)
# days is the vector of observation days
days <- 1:110

plot(days, data, type="l", col="darkblue", lwd=1)
abline(lm(data ~ days), col="darkgreen")

But can we see how fast the metric grows on average? Let’s investigate the regression slope:

> lm(data ~ days)

(Intercept)         days  
    40601.4         27.6  

From this output we can see that our metric grows by 27.6 per day on average. What more can we do? So far we have fitted the regression over the entire observation period (110 days in our case). Now let's look at the regression coefficients for the last 30 days:

> lm(data[80:110]~days[80:110])

 (Intercept)  days[80:110]  
     23244.7         210.8  

You can see that if we consider only the last 30 days, our metric grows by 210.8 per day, so the trend rate itself is increasing over time! This is very useful to know and estimate.
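To make the full-period versus recent-window comparison concrete, here is a minimal Python sketch (the article itself uses R). The synthetic series and the helper function are illustrative, not taken from the article's data:

```python
# Least squares slope over the full period vs. a recent window.
# The synthetic series has slow linear growth at first, then a
# much steeper growth rate over the final 30 days.

def ls_slope(xs, ys):
    """Ordinary least squares slope of ys regressed on xs."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

days = list(range(1, 111))                       # 110 days of observations
metric = [40000 + 25 * d for d in days[:80]]     # slow growth at first
metric += [metric[-1] + 200 * i for i in range(1, 31)]  # faster growth later

full_slope = ls_slope(days, metric)
recent_slope = ls_slope(days[-30:], metric[-30:])
print(full_slope, recent_slope)   # the recent slope is much steeper
```

Fitting over a shorter, recent window is exactly what the `data[80:110]` indexing does in the R example above.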

The trend analysis we just did is quite simple, but unfortunately it cannot always be applied. You may need to consider other methods such as Theil-Sen estimation if the data contains many outliers, or autoregressive models if there is autocorrelation in the data.
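To illustrate the outlier-robust alternative: a Theil-Sen estimate takes the median of all pairwise slopes instead of minimizing squared error, so a single wild point barely moves it. Here is a minimal pure-Python sketch on made-up data (SciPy users can reach for scipy.stats.theilslopes instead):

```python
from statistics import median

def theil_sen_slope(xs, ys):
    """Median of the slopes over all pairs of points (robust to outliers)."""
    slopes = [
        (ys[j] - ys[i]) / (xs[j] - xs[i])
        for i in range(len(xs))
        for j in range(i + 1, len(xs))
        if xs[j] != xs[i]
    ]
    return median(slopes)

# A clean linear trend with slope 2, plus one extreme outlier.
xs = list(range(20))
ys = [2 * x + 5 for x in xs]
ys[10] = 500   # the outlier

print(theil_sen_slope(xs, ys))   # stays close to the true slope of 2
```

An ordinary least squares fit on the same data would be pulled noticeably toward the outlier.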

Using Python UDF to Aggregate Data in Apache Pig

Apache Pig allows you to use the GROUP statement to combine data by a key, and, unlike SQL, you do not need to apply an aggregation function like SUM, MAX, or MIN to return just a single row for each group. Pig simply groups the values for each key into separate bags that you can iterate over and transform as needed.

Let’s consider the following meteo data set, where each line contains a state, city, annual high, and annual low temperature:

CA,San Diego,70,58
CA,San Jose,73,51

Now we will group data by state and see the results:

-- Load input data from a file
d = load 's3://epic/dmtolpeko/meteo.txt' 
  using PigStorage(',') as (state:chararray, city:chararray, 
                            high:chararray, low:chararray);

-- Group data by state
g = group d by state; 

-- Show the results
dump g;
(CA,{(CA,San Jose,73,51),(CA,Berkeley,68,48),(CA,San Diego,70,58),(CA,Irvine,73,54)})

You can see that the data are grouped by key, and we do not need to apply an aggregate function as would be required by GROUP BY in SQL.
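Pig's GROUP can be mimicked in plain Python to see what the bags look like: each key maps to a list of complete rows, just like the dump above. A small sketch (the rows are the sample cities from the article's output):

```python
from collections import defaultdict

# Rows as (state, city, high, low), matching the sample data above.
rows = [
    ("CA", "San Jose", "73", "51"),
    ("CA", "Berkeley", "68", "48"),
    ("CA", "San Diego", "70", "58"),
    ("CA", "Irvine", "73", "54"),
]

# group d by state: every row lands whole in its key's bag,
# with no aggregation applied.
groups = defaultdict(list)
for row in rows:
    groups[row[0]].append(row)

print(dict(groups))
```

This is the structure a Python UDF receives for each group: the key plus a bag of full tuples.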

Now let’s write a Python UDF that iterates over the items in each group and returns just the first two rows, with the city and low temperature values only.

Note that you can get this functionality in pure Pig syntax; this example is just intended to show how you can handle bag items inside a Python UDF, which can be useful for more complex transformations and aggregations.

# Pig UDF that returns a bag of 2-element tuples
# The outputSchema decorator tells Pig the shape of the returned bag
@outputSchema("cities:bag{t:tuple(city:chararray, low:chararray)}")
def getCitiesLow(data):
    result = []
    # Take at most the first 2 items in the group
    for i in range(min(2, len(data))):
        city = data[i][1]
        low = data[i][3]
        result.append((city, low))
    return result

Put this Python code into a file and run the following Pig script:

-- Register UDF
register './' USING jython as udf;

-- Transforming data using UDF
s = foreach g generate group, udf.getCitiesLow(d);

-- Show the results
dump s;
(CA,{(San Jose,51),(Berkeley,48)})

From this example you can see how naturally Python handles a Pig bag of tuples: it simply becomes a list of tuples whose individual items you can iterate over and extract. You can also see how the input group can be transformed: here we selected only 2 rows from each group and returned a different number of columns. This can be useful in some advanced transformations.
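Because the bag arrives as an ordinary list of tuples, the UDF logic can be exercised outside Pig with plain Python data, which makes it easy to unit test. A quick sketch (the bag below mirrors the CA group from the dump above; the function is a standalone rewrite of the UDF body):

```python
def get_cities_low(data):
    """Return (city, low) for at most the first two rows of a bag."""
    return [(row[1], row[3]) for row in data[:2]]

bag = [
    ("CA", "San Jose", "73", "51"),
    ("CA", "Berkeley", "68", "48"),
    ("CA", "San Diego", "70", "58"),
    ("CA", "Irvine", "73", "54"),
]

print(get_cities_low(bag))   # [('San Jose', '51'), ('Berkeley', '48')]
```

The result matches the `(CA,{(San Jose,51),(Berkeley,48)})` row that Pig's dump produced.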