Addressing Low Dog Licence Compliance in Toronto [R Analysis]

Table of contents

  1. Background on the issue
  2. Preparing for analysis
  3. Cleaning dog licences data
  4. Cleaning census data
  5. Cleaning neighbourhood data
  6. Exploring the combined data
  7. Summary of analysis results
  8. Background of recent licensing promotion efforts
  9. Recommendations
  10. Conclusion

Background on the issue

Annual pet licensing is a legal requirement for Torontonians who own a dog or a cat, but the city has a low pet licence compliance rate. I will be focusing on dog licences in particular as part of my analysis series about dogs in Toronto. A study of cat licences should be undertaken separately, as there are potentially different variables affecting dog and cat ownership and licensing in the city.

A spayed or neutered dog in Toronto can be licensed for $25, while the cost is $60 for a dog that’s not spayed or neutered. Senior pet owners get 50% off the licence fee, and low-income residents qualify for subsidized or waived fees.

The penalty for not licensing a dog in Toronto is $240, but enforcement is low. It was reported that in 2014, only eight tickets were handed out for unlicensed dogs in the city [1].

A 2007 survey estimated that there were 215,000 dogs living in Toronto [2], and yet 48,700 dogs were registered for licences in 2021 (according to available data). That’s an estimated compliance rate of only 23%, and it could be even lower since the population of Toronto has grown since 2007.
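
As a quick back-of-the-envelope check on that estimate (using only the two figures above):

# Licensed dogs in 2021 divided by the 2007 estimate of Toronto's dog population.
round(48700 / 215000 * 100, 1)
# about 22.7, i.e. roughly 23%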

The city provides annual data on pet licences by area, so we can investigate the following questions: Where do Toronto’s licensed dogs live? Can we identify any similarities between areas with lower proportions of dog licences? This could help us form a strategy for increasing compliance rates.

Preparing for analysis

Our first step is to load the packages we’ll need in R.

library(tidyverse)  # data wrangling and plotting
library(readxl)     # bringing in data sets
library(tidyr)      # pivoting data
library(sqldf)      # using SQL queries
library(ggcorrplot) # correlation plotting
library(viridis)    # colour scales for the scatter plots below
library(showtext)   # adding preferred font from Google Fonts
font_add_google("Ubuntu", "ubuntu")
showtext_auto()

Cleaning dog licences data

Now we’ll import the publicly available licences data from 2016. We are looking at the year 2016 because we need to use the corresponding census, and 2016 is the most recent year with full census data available.

licences <- read_excel('by-forward-sortation-area-fsa-2016.xls')

Checking out the data set.

head(licences)

## # A tibble: 6 x 4
##   `Number of Licenced Cats and Dogs By Forward Sorting Area (FSA). \nSales between January… ...2  ...3  ...4 
##   <chr>                                                                                     <chr> <chr> <chr>
## 1 <NA>                                                                                      <NA>  <NA>  <NA> 
## 2 <NA>                                                                                      CAT   DOG   Total
## 3 M1B                                                                                       307   612   919  
## 4 M1C                                                                                       338   852   1190 
## 5 M1E                                                                                       481   966   1447 
## 6 M1G                                                                                       216   393   609

The data shows pet licence registrations from 2016 broken down by Forward Sortation Area (FSA, the first three characters of a postal code).
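
As a quick illustration (the full postal code here is made up), the FSA is just the first three characters:

# Extracting the FSA from a full postal code.
substr("M4P 1A6", 1, 3) # "M4P"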

We need to isolate just the columns about dog licences and remove title rows. Then we’ll rename the columns.

licences <- licences[c(3:100), c(1, 3)]

colnames(licences) <- c("FSA", "dogs")

We can’t compare the areas’ licences yet because we don’t have data on the population of the FSAs.

That’s where the census data comes in!

Cleaning census data

Statistics Canada provides census data for each FSA.

census <- read.csv('98-401-X2016046_English_CSV_data.csv')

Taking a look at the data.

glimpse(census)

## Rows: 3,689,574
## Columns: 14
## $ CENSUS_YEAR                                          <int> 2016, 2016, 2016, 2016, 2016, 2016, 2016, 2016…
## $ GEO_CODE..POR.                                       <chr> "01", "01", "01", "01", "01", "01", "01", "01"…
## $ GEO_LEVEL                                            <int> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0…
## $ GEO_NAME                                             <chr> "Canada", "Canada", "Canada", "Canada", "Canad…
## $ GNR                                                  <dbl> 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4…
## $ GNR_LF                                               <dbl> 5.1, 5.1, 5.1, 5.1, 5.1, 5.1, 5.1, 5.1, 5.1, 5…
## $ DATA_QUALITY_FLAG                                    <int> 20000, 20000, 20000, 20000, 20000, 20000, 2000…
## $ ALT_GEO_CODE                                         <chr> "01", "01", "01", "01", "01", "01", "01", "01"…
## $ DIM..Profile.of.Forward.Sortation.Areas..2247.       <chr> "Population, 2016", "Population, 2011", "Popul…
## $ Member.ID..Profile.of.Forward.Sortation.Areas..2247. <int> 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14,…
## $ Notes..Profile.of.Forward.Sortation.Areas..2247.     <int> 1, 2, NA, 3, 4, NA, NA, 5, NA, NA, NA, NA, NA,…
## $ Dim..Sex..3...Member.ID...1...Total...Sex            <chr> "35151728", "33476688", "5.0", "15412443", "14…
## $ Dim..Sex..3...Member.ID...2...Male                   <chr> "...", "...", "...", "...", "...", "...", "...…
## $ Dim..Sex..3...Member.ID...3...Female                 <chr> "...", "...", "...", "...", "...", "...", "...…

There are 14 columns, and we only need the ones for the FSA, the category, and the total value.

census <- census[, c(2, 9, 12)]
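
If the column positions in the Statistics Canada file ever change, an equivalent selection by column name (names taken from the glimpse() output above) would be more robust. A sketch of that alternative:

# Same three columns, selected by name rather than position.
census <- census[, c("GEO_CODE..POR.",
                     "DIM..Profile.of.Forward.Sortation.Areas..2247.",
                     "Dim..Sex..3...Member.ID...1...Total...Sex")]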

The columns need to be renamed.

colnames(census) <- c("FSA", "category", "value")

Right now, every value for each FSA is recorded in its own individual row.

To make the data easier to work with, we’ll pivot it so the categories become the columns.

census <- census %>% 
  pivot_wider(names_from = category, 
              values_from = value)

The census data offers stats on a variety of different variables, and we need to narrow down the 1,136 columns to those that are relevant to this analysis.

I plan to analyse the proportion of dog licences in each FSA alongside the following data for the areas:

  • percentage of private dwellings that are apartments
  • percentage of households that are renters
  • median income
  • median age
  • percentage of seniors
  • percentage that cannot speak either of the two official languages
  • percentage of residents that are immigrants
  • unemployment rate

To do so, we’ll isolate the columns for these categories and then rename them.

# Selecting the categories we want to analyse, 
# as well as the population in 2016 and the number of private dwellings.

census <- census[, c(1, 2, 6, 38:40, 42:46, 37, 55, 374, 90, 94, 25, 528, 530, 896, 898, 1047)]


# Renaming each variable.

colnames(census) <- c("FSA", 
                       "population_2016", # Population in 2016
                       "priv_dwellings", # Number of private dwellings occupied by usual residents
                       "total_occupied", # Total occupied dwellings for structural data (slightly different than previous variable in some FSAs)
                      "single_detached", # Single-detached house
                      "apartment_5storeys", # Apartment in a building that has five or more storeys 
                      "semidetached", # Semi-detached house
                      "rowhouse", # Row house
                      "duplexapartment", # Apartment or flat in a duplex
                      "apartment_below5", # Apartment in a building that has fewer than five storeys    
                      "othersingleattached", # Other single-attached house  
                       "median_age", # Median Age
                       "avg_household_size", # Average household size
                       "median_income", # Median income
                       "total_language_pop", # Total population data for knowledge of official languages data 
                       "official_languages", # Population that can speak neither English nor French
                       "pct_seniors", # Population that is 65+
                       "img_total_status", # Total population data for immigrant status
                       "immigrants", # Number of immigrants
                      "priv_households", # Total private household data by tenure status
                      "renters", # Number of households that are renters
                       "unemployment_rate" # Unemployment rate
)

Let’s check out the data frame now.

glimpse(census)

## Rows: 1,642
## Columns: 22
## $ FSA                 <chr> "01", "A0A", "A0B", "A0C", "A0E", "A0G", "A0H", "A0J", "A0K", "A0L", "A0M", "A0…
## $ population_2016     <list> "35151728", "46587", "19792", "12587", "22294", "35266", "17804", "7880", "260…
## $ priv_dwellings      <list> "14072079", "19426", "8792", "5606", "9603", "15200", "7651", "3426", "11090",…
## $ total_occupied      <list> "14072080", "19425", "8790", "5605", "9605", "15200", "7650", "3425", "11090",…
## $ single_detached     <list> "7541495", "17935", "8340", "5235", "8965", "14230", "6755", "3020", "10310", …
## $ apartment_5storeys  <list> "1391040", "5", "0", "0", "5", "0", "0", "0", "0", "0", "15", "0", "0", "0", "…
## $ semidetached        <list> "698795", "435", "55", "75", "110", "245", "270", "60", "120", "25", "60", "10…
## $ rowhouse            <list> "891305", "255", "80", "80", "260", "180", "295", "160", "115", "60", "25", "6…
## $ duplexapartment     <list> "784300", "385", "50", "60", "55", "150", "145", "120", "110", "90", "50", "55…
## $ apartment_below5    <list> "2539390", "275", "135", "70", "100", "265", "125", "55", "250", "25", "175", …
## $ othersingleattached <list> "36000", "30", "20", "10", "10", "25", "20", "5", "30", "5", "5", "20", "15", …
## $ median_age          <list> "41.2", "48.5", "54.2", "53.1", "51.0", "52.8", "51.6", "52.9", "51.8", "48.3"…
## $ avg_household_size  <list> "2.4", "2.4", "2.2", "2.2", "2.3", "2.3", "2.3", "2.2", "2.3", "2.4", "2.2", "…
## $ median_income       <list> "34204", "29545", "27561", "25030", "26707", "24770", "24761", "23604", "27044…
## $ total_language_pop  <list> "34767250", "46285", "19640", "12490", "22105", "35090", "17715", "7800", "259…
## $ official_languages  <list> <"648975", "636515">, <"10", "0">, <"10", "5">, <"10", "10">, <"5", "0">, <"10…
## $ pct_seniors         <list> <"5935630", "16.9", "5436830", "790820", "275225">, <"9925", "21.3", "9475", "…
## $ img_total_status    <list> "34460065", "45800", "19600", "12265", "22060", "34625", "17875", "7655", "257…
## $ immigrants          <list> "7540830", "485", "210", "125", "180", "245", "120", "40", "275", "115", "20",…
## $ priv_households     <list> "14072080", "19325", "8825", "5545", "9605", "15195", "7720", "3410", "11090",…
## $ renters             <list> "4474530", "2680", "950", "710", "1260", "1785", "1340", "600", "1175", "400",…
## $ unemployment_rate   <list> "7.7", "16.4", "18.5", "26.8", "23.5", "27.5", "25.2", "28.0", "36.3", "21.1",…

Right now, official_languages and pct_seniors are lists of elements (due to duplicate categories with the same title in the original data set). We’ll grab the elements we need.

census <- census %>% 
                   rowwise() %>%
                   mutate(official_languages = official_languages[[1]],
                                pct_seniors = pct_seniors[[2]])

Now the two data frames can be joined together.

combined <- merge(x = licences, y = census, by = "FSA")

This inner join has also done the job of narrowing down all the FSAs to just those in Toronto.
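
A quick sanity check (not part of the original pipeline): every Toronto FSA begins with the letter M, so this should return TRUE.

# All remaining FSAs should be Toronto ("M") codes after the inner join.
all(startsWith(combined$FSA, "M"))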

There are two very small downtown FSAs (M5W and M5X) that have a population of less than 15, as well as NAs for the categories we’re interested in. We’ll remove these two FSAs.

combined <- combined[-c(68:69),]
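
Removing rows by position works, but it depends on the row order never changing. An equivalent, more robust version (a sketch using the two FSA codes named above):

# Drop M5W and M5X by FSA code rather than by row index.
combined <- combined %>%
  filter(!FSA %in% c("M5W", "M5X"))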

Let’s check the data types of each column.

glimpse(combined)

## Rows: 96
## Columns: 23
## $ FSA                 <chr> "M1B", "M1C", "M1E", "M1G", "M1H", "M1J", "M1K", "M1L", "M1M", "M1N", "M1P", "M…
## $ dogs                <chr> "612", "852", "966", "393", "318", "393", "636", "492", "603", "870", "513", "5…
## $ population_2016     <list> "66108", "35626", "46943", "29690", "24383", "36699", "48434", "35081", "22913…
## $ priv_dwellings      <list> "20230", "11274", "17161", "9767", "8985", "12274", "17930", "12428", "8623", …
## $ total_occupied      <list> "20230", "11270", "17160", "9765", "8985", "12275", "17930", "12425", "8620", …
## $ single_detached     <list> "6240", "8895", "6355", "4100", "2625", "2905", "5130", "3370", "4325", "4855"…
## $ apartment_5storeys  <list> "4240", "160", "5915", "4035", "5150", "6930", "8135", "4310", "2685", "945", …
## $ semidetached        <list> "1795", "365", "560", "20", "225", "215", "930", "950", "85", "380", "630", "1…
## $ rowhouse            <list> "4630", "885", "2385", "220", "185", "640", "425", "1240", "555", "90", "1830"…
## $ duplexapartment     <list> "2020", "855", "995", "1160", "605", "1170", "1700", "905", "535", "870", "890…
## $ apartment_below5    <list> "1210", "100", "920", "220", "190", "400", "1415", "1590", "375", "1920", "715…
## $ othersingleattached <list> "95", "5", "25", "5", "5", "10", "190", "50", "60", "35", "25", "5", "80", "35…
## $ median_age          <list> "38.2", "44.0", "42.2", "37.2", "38.1", "37.2", "40.1", "38.0", "44.8", "45.2"…
## $ avg_household_size  <list> "3.3", "3.1", "2.7", "3.0", "2.7", "2.9", "2.7", "2.8", "2.6", "2.4", "2.7", "…
## $ median_income       <list> "24832", "37454", "26902", "21835", "24803", "23932", "24065", "24539", "29387…
## $ total_language_pop  <list> "65845", "35330", "46085", "29555", "24360", "36480", "48320", "34265", "22685…
## $ official_languages  <chr> "2705", "565", "975", "1330", "1250", "1140", "1810", "1345", "475", "310", "24…
## $ pct_seniors         <chr> "14.3", "18.0", "17.7", "15.1", "15.4", "13.7", "14.5", "12.3", "17.9", "17.5",…
## $ img_total_status    <list> "65915", "35290", "45990", "29590", "24100", "36200", "48265", "34145", "22590…
## $ immigrants          <list> "39490", "15840", "21980", "16575", "13970", "20580", "26745", "17820", "9125"…
## $ priv_households     <list> "20235", "11270", "17155", "9825", "8930", "12235", "17935", "12410", "8680", …
## $ renters             <list> "5745", "1025", "6350", "5010", "3540", "6710", "8795", "5995", "2985", "3035"…
## $ unemployment_rate   <list> "10.0", "7.4", "10.7", "12.5", "8.9", "11.7", "8.9", "10.8", "9.4", "8.8", "9.…

They are all characters or lists (although each list element holds just a single value). We’ll leave the FSA column as a character and convert the rest to numeric.

combined <- combined %>%
    mutate(across(c(2:23), as.numeric))

Now we need to do some calculations and create new variables to make this data useful.

The most important new variable will be “dog licences per 10,000 private dwellings (occupied by usual residents)”. This will let us compare licence counts across areas, adjusted for the number of households in each.

Using private dwellings rather than the population of individual residents seems appropriate here because dogs are usually owned by a household rather than an individual.

# Creating variable: Dogs per 10,000 private dwellings.

combined$dogs_per_pd <- round((combined$dogs / combined$priv_dwellings), 4) * 10000

# Percentage of residents who cannot speak either official language.

combined <- combined %>%
        mutate(no_off_language = (official_languages / total_language_pop) * 100)

# Percentage of private dwellings that are apartments (excluding apartments in duplexes). 

combined$apartment_pct <- round((combined$apartment_5storeys + combined$apartment_below5) / combined$total_occupied, 4) * 100

# Percentage of households that are renters.

combined$renters_pct <- round(combined$renters / combined$priv_households, 4) * 100

# Percentage of residents that are immigrants. 

combined <- combined %>% mutate(immigrant_pct = (immigrants / img_total_status) * 100)

Now that we’ve transformed the data, we can remove unnecessary columns and do some reordering.

combined <- combined[c(1, 24, 2:4, 13:15, 18, 23, 25:28)]
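
As with the census columns earlier, the same selection can be written with column names via dplyr, which is easier to read and less fragile. An equivalent sketch (not the version run here):

# Same columns, selected and ordered by name.
combined <- combined %>%
  select(FSA, dogs_per_pd, dogs, population_2016, priv_dwellings,
         median_age, avg_household_size, median_income, pct_seniors,
         unemployment_rate, no_off_language, apartment_pct,
         renters_pct, immigrant_pct)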

Cleaning neighbourhood data

Since FSAs aren’t a well-known way to refer to areas of the city, we should add some more information on what areas these FSA codes represent. I’ve scraped region and neighbourhood names for each of the FSAs from Wikipedia, which we can import now.

neighbourhoods <- read.csv('FSA neighbourhood names.csv')

head(neighbourhoods)

##                                                                                                      FSA_neighbourhood
## 1                                                                                                    M9Z\nNot assigned
## 2                                                      M6M\nYork\n(Del Ray / Mount Dennis / Keelsdale and Silverthorn)
## 3                                                                             M4P\nCentral Toronto\n(Davisville North)
## 4                                                                                   M2K\nNorth York\n(Bayview Village)
## 5 M8Z\nEtobicoke\n(Mimico NW / The Queensway West / South of Bloor / Kingsway Park South West / Royal York South West)
## 6                                                                                        M1S\nScarborough\n(Agincourt)

The data set has the FSA, region, and neighbourhood name(s) all in one column.

We’ll split this into FSA and then the area information.

neighbourhoods <- neighbourhoods %>% 
  mutate(FSA = substr(FSA_neighbourhood, 1, 3), 
         area = str_sub(FSA_neighbourhood, 5)) # from the 5th character to the end of each string

Now we’ll divide the area names into region and neighbourhood.

# The areas are described as "[region] ([neighbourhood])" so we'll get the neighbourhood names by extracting the string from inside the brackets.

neighbourhoods <- neighbourhoods %>% 
  mutate(neighbourhood = sapply(str_extract_all(area, "(?<=\\()[^)(]+(?=\\))"), paste0))

# To get the region, we can create a new column that has the area info with everything inside brackets removed. 

neighbourhoods$region <- gsub("\\([^)(]+\\)", "", neighbourhoods$area) 

# Removing the \n at the end of the region strings. 

neighbourhoods$region <- gsub("\n", "", neighbourhoods$region) 

# Cleaning up the Etobicoke and North York rows that have been formatted as "Etobicoke West" etc.  
neighbourhoods$region[startsWith(neighbourhoods$region, "Etobicoke")] <- "Etobicoke"
neighbourhoods$region[startsWith(neighbourhoods$region, "North York")] <- "North York"

# Removing the FSA codes that are not assigned to an area and transforming the neighbourhood variable from a list to a character type.  

neighbourhoods <- neighbourhoods %>% 
  filter(!neighbourhood == 'character(0)') %>%
  rowwise() %>%
  mutate(neighbourhood = neighbourhood[[1]]) 

# Isolating only the three columns we need.

neighbourhoods <- neighbourhoods %>%
  select(FSA, region, neighbourhood)
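
To see what the bracket-extraction pattern and the gsub() call are doing, here’s a quick check against one of the raw strings shown in head(neighbourhoods) above (illustration only):

x <- "M6M\nYork\n(Del Ray / Mount Dennis / Keelsdale and Silverthorn)"

# Text inside the brackets -> the neighbourhood names.
str_extract_all(x, "(?<=\\()[^)(]+(?=\\))")
# "Del Ray / Mount Dennis / Keelsdale and Silverthorn"

# Text with the bracketed part removed -> the FSA and region.
gsub("\\([^)(]+\\)", "", x)
# "M6M\nYork\n"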

Let’s add this to our combined data set with the dog licence and census info.

combined <- merge(x = combined, y = neighbourhoods, by = "FSA")

Exploring the combined data

We can check out some stats on the dog licences per 10,000 private dwellings in Toronto’s FSAs.

summary(combined$dogs_per_pd)

##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##    89.0   330.2   474.0   520.1   687.0  1142.0

And visualize the distribution in a density chart.

combined %>%
  ggplot() +
  geom_density(aes(x = dogs_per_pd), fill="#9664a6", color="#9664a6") +
    geom_vline(aes(xintercept = mean(dogs_per_pd)), linetype = "dashed") +
  labs(title = "Dogs per 10,000 Private Dwellings in Toronto FSAs", x = "Dogs per 10,000 Private Dwellings", y = "Density") +
  theme_minimal() +
    theme(
    text=element_text(family="ubuntu"),
    axis.title = element_text(size = 8),
    plot.title = element_text( size = 14, color = "#36454F"),
    axis.text=element_text(size=9, color = "#36454F"),
    panel.grid.major = element_blank(),
    panel.grid.minor = element_blank(),
    axis.text.x = element_text(angle = 90)) +
    scale_x_continuous(breaks = c(100, 200, 300, 400, 500, 600, 700, 800, 900, 1000))

From the summary and density chart, we can see a right-skewed distribution with a mean of 520 and a median of 474. The highest dog licence proportion is 1142 per 10,000 dwellings, and the lowest is 89. 75% of FSAs have dog licence proportions under 687. I’m curious to see what factors might contribute to some areas having much higher or lower numbers of dog licences than the median.

Now we’ll make a correlation matrix for the variables of interest and visualize it.

# Isolating the relevant columns

combined_corr <- combined %>%
  select(2, 6:14)

#  Creating the matrix

corr <- round(cor(combined_corr, use = "pairwise.complete.obs"), 2)

# Visualizing the matrix

ggcorrplot(corr, p.mat = cor_pmat(combined_corr),
           hc.order = FALSE, 
           type = "lower",
           color = c("#6c91e3", "white", "#cc2f2f"),
           outline.col = "white", 
           lab = TRUE)

The bottom row in the plot shows us the correlation between all the variables and dogs per 10,000 private dwellings.

We can see some strong positive correlations here, including with median income and median age.

There are also strong negative correlations with the percentage of immigrants, the unemployment rate, the percentage of apartments, the percentage of people who cannot speak either official language, and the percentage of renters.

The negative correlation between dog licences and the percentage of immigrants is notably strong (-0.71), but the negative correlation between median income and the percentage of immigrants is even stronger (-0.77). Since licences are a yearly purchase, the correlation with median income seems more relevant here.

Let’s also look at scatter plots depicting the relationships between these nine variables and dogs per 10,000 private dwellings.

# Pivoting the variables for the visualization. 

combined_pivoted <- combined %>%
  pivot_longer(
    cols = c(6:14),
    names_to = "category",
    values_to = "value"
  )

# Transforming correlation matrix into data frame.

corr <- rownames_to_column(as.data.frame(corr))

# Selecting just the dogs_per_pd correlation column and renaming. 

corr <- corr[1:2]
colnames(corr) <- c("variable", "correlation")

ggplot(combined_pivoted, aes(x=value, y=dogs_per_pd)) +
  geom_point(color = "#4DB7C2") + 
  geom_smooth(method="lm", se=FALSE, color = "#154B7A") +
facet_wrap(~category, scales = "free_x") +
  labs(title = "Correlation with Dogs per 10,000 Private Dwellings", y = "Dogs per 10,000 private dwellings", x = NULL) + 
  theme(
    panel.background = element_rect(fill = "white"),
    text=element_text(family="ubuntu"),
    axis.title.x = element_text(size = 8),
    plot.title = element_text( size = 14, color = "#36454F"),
    axis.text=element_text(size=9, color = "#36454F"),
     axis.text.x = element_text(angle = 90)
    ) 

One thing that stands out is a cluster of five outlier FSAs with high median incomes but some of the lowest dog licence rates. Let’s pull up the neighbourhood names for those.

high_income_low_licences <- combined %>%
  filter(median_income > 50000) %>%
  arrange(dogs_per_pd) %>%
  select(neighbourhood) %>%
  slice(1:5)

head(high_income_low_licences)

##                                                                                                      neighbourhood
## 1                                                              Harbourfront East / Union Station / Toronto Islands
## 2                                                                                       Richmond / Adelaide / King
## 3                                                                                                      Berczy Park
## 4 CN Tower / King and Spadina / Railway Lands / Harbourfront West / Bathurst Quay / South Niagara / Island airport
## 5                                                                                                   Church and Adelaide

Now it becomes clearer: these areas are all in the downtown core, where there are many high-rise apartments.

ggplot(combined, aes(x = median_income ,y = dogs_per_pd, color = apartment_pct)) +
  geom_point() + 
  geom_smooth(method="lm", se=FALSE, color = "black") +
    scale_color_viridis(option = "D") +
  labs(title = "Areas with mostly apartments have lower numbers \n of dog licences across all income groups", y = "Dogs per 10,000 Private Dwellings", x = "Median Income", color = "Apartment %") +
     theme(
    panel.background = element_rect(fill = "white"),
    text=element_text(family="ubuntu"),
    axis.title = element_text(size = 8),
    plot.title = element_text(hjust = 0.5, size = 13, color = "#36454F"),
    axis.text=element_text(size=9, color = "#36454F")    
    ) 

In each of the five outlier high income areas, 97% or more of private dwellings are apartments. The chart shows that the percentage of apartments has a strong negative correlation with the number of dog licences across all median income brackets.
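
We can verify this directly by pulling the apartment percentage (and the median age discussed below) for those same five FSAs. A quick check reusing the earlier filter:

# Apartment share and median age for the five high-income, low-licence outliers.
combined %>%
  filter(median_income > 50000) %>%
  arrange(dogs_per_pd) %>%
  slice(1:5) %>%
  select(neighbourhood, apartment_pct, median_age, dogs_per_pd)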

Note: We can’t say for certain, based on this data alone, whether people who live in apartments are less likely to own a dog or less likely to license a dog. More research is required to find this out.

The median age in these outlier areas also skews younger than in other high-income areas.

ggplot(combined, aes(x = median_income, y = dogs_per_pd, color = median_age)) +
  geom_point() + 
  geom_smooth(method="lm", se=FALSE, color = "black") +
    scale_color_viridis(option = "D", direction = -1) +
  labs(title = "Areas with younger median age have lower numbers \n of dog licences across all income groups", y = "Dogs per 10,000 Private Dwellings", x = "Median Income", color = "Median Age") +
     theme(
    panel.background = element_rect(fill = "white"),
    text=element_text(family="ubuntu"),
    axis.title = element_text(size = 8),
    plot.title = element_text(hjust = 0.5, size = 13, color = "#36454F"),
    axis.text=element_text(size=9, color = "#36454F")    ) 

Summary of analysis results

We’ve seen that the areas with the highest proportions of dog licences have a significantly higher median income, higher median age, and lower percentage of dwellings that are apartments.

The strong correlation between these variables and numbers of dog licences does not prove that people in certain areas are less likely to license their dogs, because it’s still possible that residents of these regions simply own fewer dogs.

However, in the case of the correlation with income, it’s notable that research on the economics of pet ownership in the United States has shown that pets are a “normal good” to Americans, meaning they’re “more appropriately viewed as necessities in the household than as luxuries.” [3]

If pet ownership in Toronto is similarly insensitive to household income, then the lower licence counts in the city’s lower-income areas point to a lower compliance rate among dog owners there. More research (such as surveys of residents) is needed to confirm this. Such research could also investigate whether other variables with strong area-level correlations (like younger age and apartment living) make dog owners less likely to buy a licence.

Despite the lack of household-level information about dog ownership, given the available data and the fact that dog licences are a yearly purchase of $25 to $60, the strong positive correlation between an area’s dog licences and its median income does appear to be meaningful.

While residents with an income of less than $50,000 qualify for subsidized or waived fees, it’s possible that many residents are not aware of this discount or that the cost is still a barrier.

Background of recent licensing promotion efforts

BluePaw rewards program

In 2014, Toronto Animal Services (TAS) launched BluePaw, a program that provides discounts on pet-related products and services to all city residents who license their dog or cat.

  • Members of the BluePaw program use a key chain tag or promo code to receive their offers and discounts.
  • A list of the participating businesses is available on the City of Toronto website.

Chip Truck

Since 2012, TAS has operated a mobile clinic (the “Chip Truck”) that offers a bundle of a pet licence, microchip, and rabies vaccine for $35 (the service was paused in 2020 due to the pandemic) [4].

Public education campaign

A public education campaign titled “Give Your Head a Shake” was launched in 2014 to encourage Torontonians to learn about how their pet licence fees are used. The ads addressed the popular belief that licences are a “cash grab,” and some informed residents that their payments go toward helping animals in need.

Recommendations

Considering the strong correlation between income and dog licences, the major focus of a strategy should be to highlight to all dog owners that they can recoup their licence fee through discounts on products and services.

Selling the idea of licences to residents as a service to return lost dogs has not been effective, possibly because many pets are already microchipped or wear a rabies tag [5]. Public education about licence fees being used to help animals in need has not led to a significant increase in dog licences.

Dog owners need to clearly see the value of the licence fee, which could encourage voluntary licence compliance.

My recommendations are as follows:

1. Update BluePaw by building an app

A BluePaw app should be built with these features:

  • Easily accessible information on the participating businesses and links to their websites.
  • Proof of licence purchase that can be shown in-person at businesses.
  • Easy retrieval of the code to use for online purchases.
  • Notification when a new business has been added to the program.
  • Yearly notification to remind owners to renew licences.
  • Ability to complete licence sign-up or renewal easily within the app.
  • Ability to order a replacement pet tag in the app.
  • A purchase history showing how much the owner has saved through the BluePaw program over time (if it’s possible to have businesses scan the app when a purchase is made).

2. Expand BluePaw program

  • The BluePaw program should be expanded to include more businesses in the city, with a particular focus on connecting with businesses in the regions of Toronto with very low proportions of dog licences.
  • Toronto residents should be surveyed to find out what businesses they would most like to receive a discount or special offer from.
  • These businesses could include restaurants, hotels, stores, etc. that are not pet-related.
  • Businesses will benefit from the marketing to pet owners.

3. Educate residents about BluePaw app and licensing

  • Use social media advertisements to educate Torontonians about the new app.
  • Provide posters/pamphlets to retailers, vets’ offices, etc.

4. Make connections with local businesses

Arrangements with private businesses, veterinarians, etc. should be made and strengthened so that pet owners can learn about the licensing program and license their pet immediately through the app.

It would be helpful if a pet owner could license their dog quickly and conveniently through the app after learning from a provider about a discount or offer.

5. Provide public education about dog licensing in other languages

This analysis has shown that areas with higher percentages of residents who do not speak an official language tend to have lower dog licence proportions. Public education about dog licensing should therefore be tailored to these populations by providing information (posters, pamphlets) in languages commonly used in the areas.
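
If Toronto Animal Services wanted to target this outreach, the areas could be pulled straight from the combined data built earlier. A sketch (the cut-off of ten areas is arbitrary):

# FSAs with the highest share of residents who speak neither official language.
combined %>%
  arrange(desc(no_off_language)) %>%
  select(FSA, region, neighbourhood, no_off_language, dogs_per_pd) %>%
  head(10)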

6. Offer free first year of licence if dog is adopted

New pet owners should receive a free year of a dog licence if they adopt their dog from a Toronto Animal Services shelter.

After experiencing the benefits for the first year, owners may be more likely to renew in subsequent years. The dog will also be in the system, so it can be traced back to its owner if it gets lost, reducing the strain on animal shelters.

7. Free emergency sticker with every first dog licence

When a dog is licensed, the owner could receive an emergency sticker to be placed on their window.
The sticker would have the BluePaw logo on it and a QR code or link for the app, which would increase brand recognition and encourage other dog owners to check out the program.

Conclusion

The main focus of a compliance-boosting strategy should be to encourage pet owners to view licences as something of personal value and a worthwhile purchase.

This could be achieved through a BluePaw app and an expansion of the BluePaw rewards program. Promotion of the licensing program could also include partnerships with local businesses, a free first year of licensing for adopted dogs, and emergency stickers.

Overall, the City of Toronto should continue to counter the popular belief that licence fees are a “cash grab” through public education, while also allowing residents to recoup their fee with discounts and offers from local businesses.

Footnotes

  1. Carter, Alan. “Less than one third of dogs in Toronto are licensed.” Global News, 3 October 2014, https://globalnews.ca/news/1597574/less-than-one-third-of-dogs-in-toronto-are-licensed/. Accessed 1 March 2022.

  2. Auditor General’s Office. “Toronto Animal Services. Licence Compliance Targets Need to Be More Aggressive.” 5 October 2011, https://www.toronto.ca/legdocs/mmis/2011/au/bgrd/backgroundfile-42372.pdf. Accessed 1 March 2022.

  3. Schwarz, Peter M., Jennifer L. Troyer, and Jennifer Beck Walker. “Animal House: Economics of Pets and the Household.” The B.E. Journal of Economic Analysis & Policy, vol. 7, no. 1, 2007. http://www.bepress.com/bejeap/vol7/iss1/art35. Accessed 1 March 2022.

  4. Woods, Michael. “‘Chip truck’ goes on the road to deliver pet ID.” Toronto Star, 10 September 2012, https://www.thestar.com/news/gta/2012/09/10/chip_truck_goes_on_the_road_to_deliver_pet_id.html. Accessed 1 March 2022.

  5. Putter, Kelly. “What are pet licenses for, and does your dog or cat really need one?” Yahoo! News, 28 August 2015, https://ca.news.yahoo.com/blogs/dailybrew/what-are-pet-licenses-for–and-does-your-dog-or-cat-really-need-one-160919933.html. Accessed 1 March 2022.
