Introducing Dask-GeoPandas for scalable spatial analysis in Python

Using Python for data science is usually a great experience, but if you’ve ever worked with pandas or GeoPandas, you may have noticed that they use only a single core of your processor. Especially on larger machines, that is a bit of a sad situation.

Developers came up with many solutions to scale pandas, but the one that seems to take the lead is Dask. Dask (specifically dask.dataframe, as Dask can do much more) creates a partitioned data frame, where each partition is a single pandas.DataFrame. Each partition can then be processed in parallel and combined when necessary. On top of that, the whole pipeline can be scaled to a cluster of machines and can deal with out-of-core computation, i.e. with datasets that do not fit in memory.
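
To illustrate the idea behind dask.dataframe (this is plain Dask, not Dask-GeoPandas yet), here is a minimal sketch of partitioning a pandas DataFrame and running a computation on all partitions in parallel:

import pandas as pd
import dask.dataframe as dd

# an ordinary pandas DataFrame
df = pd.DataFrame({"value": range(1_000_000)})

# split it into 4 partitions, each one a pandas.DataFrame
ddf = dd.from_pandas(df, npartitions=4)

# operations are lazy; compute() runs them on all partitions in parallel
total = ddf["value"].sum().compute()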

Today, we announce the release of Dask-GeoPandas 0.1.0, a new Python package that extends dask.dataframe in the same way GeoPandas extends pandas, bringing support for geospatial data to Dask. That means geometry columns and spatial operations, but also spatial partitioning, which ensures that geometries that are close in space end up in the same partition – a prerequisite for efficient spatial indexing.
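
For illustration, this is roughly how you would create a partitioned GeoDataFrame and compute the spatial extent of each partition. Treat it as a sketch – calculate_spatial_partitions is, to my knowledge, the relevant method in 0.1.0, but check the documentation for the definitive API:

import geopandas
import dask_geopandas

gdf = geopandas.read_file(
    geopandas.datasets.get_path("naturalearth_lowres")
)
dgdf = dask_geopandas.from_geopandas(gdf, npartitions=4)

# compute the extent (convex hull) of each partition, used for spatial indexing
dgdf.calculate_spatial_partitions()
print(dgdf.spatial_partitions)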

The project has been in development for quite some time. The original exploration of bridging Dask and GeoPandas started almost 5 years ago by Matt Rocklin, the author of Dask. Later, in 2020, Julia Signell revised the idea and created the foundations of the current project. Since then, GeoPandas maintainers have taken over and led the recent development.

What is awesome about Dask-GeoPandas? First, you can do your spatial analysis in parallel, making sure all available resources are used (no more sad idle cores!) and turning your workflows into faster and more efficient ones. You can also use Dask-GeoPandas to process data that does not fit your machine’s memory, as Dask comes with support for out-of-core computation. Finally, you can distribute the work across many machines in a cluster. And all that with almost the same familiar GeoPandas API.

The latest evolution of the underlying libraries powering GeoPandas ensures that Dask-GeoPandas is efficient in its use of resources and performant within each partition. For example, unlike GeoPandas, where the use of pygeos (a new vectorised interface to GEOS) is optional, Dask-GeoPandas requires it. Similarly, it depends on pyogrio, a vectorised interface to GDAL, to read geospatial file formats.
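
For example, reading a file goes through pyogrio straight into a partitioned GeoDataFrame. A sketch – the file name is hypothetical and the exact signature may differ slightly, so check the documentation:

import dask_geopandas

# read a file via pyogrio into a GeoDataFrame with 4 partitions
ddf = dask_geopandas.read_file("buildings.gpkg", npartitions=4)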

At this moment, Dask-GeoPandas can do a lot of what GeoPandas can, with some limitations. When your code involves individual geometries without assessing relationships between them (like computing a centroid or area), you should be able to use it directly. When you need to work out some relationships, you can try the (still a bit limited) sjoin or make use of spatial partitions and spatial indexing.
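
Element-wise operations mirror the GeoPandas API and are evaluated lazily, partition by partition. A small sketch, assuming dask_df is a dask_geopandas.GeoDataFrame like the one created in the gist below:

centroids = dask_df.geometry.centroid    # lazy, nothing computed yet
buffered = dask_df.geometry.buffer(0.1)  # also lazy
centroids.compute()                      # triggers the parallel computation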

But not everything is ready. For example, the overlapping computation needed for use cases like accessibility or K-nearest neighbour analyses is not yet implemented, PostGIS IO is not done, and spatial join and overlay operations are implemented only partially (sjoin) or not at all (overlay, sjoin_nearest). But the 0.1.0 release is just a start.

You can try it yourself, installing via conda (or mamba) or from PyPI (but see the installation instructions – GeoPandas can be tricky to install using pip).

mamba install dask-geopandas
pip install dask-geopandas 

The best starting point to learn how Dask-GeoPandas works is the documentation, but this is the gist:

import geopandas
import dask_geopandas

df = geopandas.read_file(
    geopandas.datasets.get_path("naturalearth_lowres")
)
dask_df = dask_geopandas.from_geopandas(df, npartitions=8)

dask_df.geometry.area.compute()

The code creates a dask_geopandas.GeoDataFrame with 8 partitions (because I have 8 cores) and computes each polygon’s area in parallel, giving an almost 8x speedup compared to vanilla GeoPandas.

You can also check my latest post comparing Dask-GeoPandas performance on a large spatial join with PostGIS and cuSpatial (GPU) implementations.

If you want to help, have questions or ideas, you are always welcome. Just head over to GitHub or Gitter and say hi!

Dask-GeoPandas vs PostGIS vs GPU: Performance and Spatial Joins

Paul Ramsey saw a spatial join done using a GPU and tried to do the same with PostGIS, checking how fast that is compared to the GPU-based RAPIDS.AI solution. Since Paul used parallelisation in PostGIS, I got curious how fast Dask-GeoPandas is on the same task.

So, I gave it a go.

import download
import geopandas
import dask_geopandas
import dask.dataframe
from dask.distributed import Client, LocalCluster

Let’s download the data using Paul’s query, to ensure we work with the same CSV.

curl "https://phl.carto.com/api/v2/sql?filename=parking_violations&format=csv&skipfields=cartodb_id,the_geom,the_geom_webmercator&q=SELECT%20*%20FROM%20parking_violations%20WHERE%20issue_datetime%20%3E=%20%272012-01-01%27%20AND%20issue_datetime%20%3C%20%272017-12-31%27" > phl_parking.csv

And then download and unzip the neighbourhoods shapefile.

download.download(
    "https://github.com/azavea/geo-data/raw/master/Neighborhoods_Philadelphia/Neighborhoods_Philadelphia.zip",
    "Neighborhoods_Philadelphia", 
    kind="zip"
)

Paul used a machine with 8 cores. Since I use a machine with 16 cores, I’ll create a local cluster limited to 8 workers. That should be as close to Paul’s machine as I can get without using a virtual one. Keep in mind that this distorts the benchmark, as we use different processors with different performance. But the point here is to get a sense of how fast a Dask-based solution can be compared to PostGIS and the original GPU code.

client = Client(
    LocalCluster(
        n_workers=8, 
        threads_per_worker=1
    )
)

With Dask, we build the whole pipeline as a task graph first and only then run it, so we won’t have timings for the individual steps, just the total one.

Read parking data CSV into a partitioned data frame (25MB per partition).

ddf = dask.dataframe.read_csv(
    "phl_parking.csv", 
    blocksize=25e6, 
    assume_missing=True
)

Create point geometry and assign it to the data frame, creating dask_geopandas.GeoDataFrame.

ddf = ddf.set_geometry(
    dask_geopandas.points_from_xy(
        ddf, 
        x="lon", 
        y="lat", 
        crs=4326
    )
)

Read neighbourhood polygons and reproject to EPSG:4326 (same as parking data).

neigh = geopandas.read_file(
    "Neighborhoods_Philadelphia"
).to_crs(4326)

Create the spatial join.

joined = dask_geopandas.sjoin(ddf, neigh, predicate="within")

Finally, let’s compute the result.

%%time
r = joined.compute()

Time on a local cluster with 8 workers and 1 thread per worker to pretend it is an 8-core CPU:

CPU times: user 9.34 s, sys: 2.09 s, total: 11.4 s
Wall time: 21.3 s

The complete pipeline took 21.3 seconds, and that includes sending all the data to a single process at the end to create a single-partition joined GeoDataFrame. That last step is usually unnecessary, as you would keep working with the data directly in Dask, and judging from the Dask Dashboard, it alone takes a few seconds.
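
For example, if all we wanted was the number of violations per neighbourhood, we could aggregate within Dask and bring back only the small result instead of the full joined frame. A sketch – the column name is an assumption about the neighbourhoods shapefile, so check neigh.columns first:

# aggregate in parallel and collect only the per-neighbourhood counts
counts = joined.groupby("NAME").size().compute()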

Let’s compare it to the PostGIS solution:

  • Reading in the 9M records from CSV takes about 29 seconds
  • Making a second copy while creating a geometry column takes about 24 seconds
  • The final query running with 4 workers takes 24 seconds

That gives us a total of 77 seconds, compared to 21 seconds using Dask-GeoPandas. It is still slower than the 13 seconds of the RAPIDS.AI solution, although that number covers only the join itself, not reading the data and creating geometries, so my sense is that the two would end up almost equal. One aspect that makes the difference between Dask and PostGIS is that our pipeline is parallelised at every step – reading the CSV, creating points, generating the spatial index (done under the hood in sjoin) and the actual join.

While Paul was using an 8-core machine, PostGIS actually utilised only 4 cores (I am not sure why). Let’s try to run our code limited to 4 workers as well.
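
Switching to 4 workers is just a matter of recreating the local cluster, for example:

client.close()
client = Client(
    LocalCluster(
        n_workers=4,
        threads_per_worker=1
    )
)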

CPU times: user 9.53 s, sys: 2 s, total: 11.5 s
Wall time: 28.4 s

28 seconds is a bit slower than before, but still quite fast!

When comparing PostGIS and GPU solutions, Paul says

Basically, it is very hard to beat a bespoke performance solution with a general-purpose tool. Yet, PostgreSQL/PostGIS comes within “good enough” range of a high end GPU solution, so that counts as a “win” to me.

At the moment, Dask-GeoPandas is somewhere between PostGIS and bespoke solutions. It does not offer as many functions as PostGIS, but it is designed as a general-purpose tool. So I would say that we are all winners here.

The notebook is available here.

EDIT (Mar 24, 2022): See also Dewey Dunnington’s follow-up expanding the comparison to R.

Methodological Foundation of a Numerical Taxonomy of Urban Form

The final paper based on my PhD thesis is (finally!) out in Environment and Planning B: Urban Analytics and City Science. We looked into ways of identifying patterns of urban form and came up with the Methodological foundation of a numerical taxonomy of urban form. You can read it on the journal website (open access).

We use urban morphometrics (i.e. data-driven methods) to derive a classification of urban form in Prague and Amsterdam, and you can check the results in online interactive maps – http://martinfleis.github.io/numerical-taxonomy-maps/ or below. (Check the layers toggle!)

The paper explores a method that can eventually support the creation of a taxonomy of urban types, similar to the taxonomies you know from biology. We even borrowed the foundations of the method from biology.

We measure many variables based on building footprints and street networks (using the momepy Python package) and use Gaussian Mixture Model clustering to get urban tissue types independently in both cities. Then we apply Ward’s hierarchical clustering to build a taxonomy of these types.
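
For a flavour of the overall logic, here is a heavily simplified sketch of the two clustering steps using scikit-learn and SciPy. The actual study measures a much richer set of morphometric characters and involves additional processing; X below is just a placeholder for the standardised character matrix:

import numpy as np
from scipy.cluster.hierarchy import dendrogram, linkage
from sklearn.mixture import GaussianMixture

# X: observations (spatial units) x morphometric characters, standardised
X = np.random.random((1000, 10))  # placeholder for the real character matrix

# step 1: derive urban tissue types with a Gaussian Mixture Model
gmm = GaussianMixture(n_components=10, random_state=0).fit(X)
labels = gmm.predict(X)

# step 2: build a taxonomy of the types with Ward's hierarchical clustering
# applied to the mean character values of each type
centroids = np.array([X[labels == c].mean(axis=0) for c in range(10)])
dendrogram(linkage(centroids, method="ward"))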

The code is available, and the repo even includes a clean Jupyter notebook with the complete method, so you can apply it to your data if you wish. https://github.com/martinfleis/numerical-taxonomy-paper. If you instead want to play with our data, it is available as well https://doi.org/10.6084/m9.figshare.16897102.

Abstract

Cities are complex products of human culture, characterised by a startling diversity of visible traits. Their form is constantly evolving, reflecting changing human needs and local contingencies, manifested in space by many urban patterns. Urban morphology laid the foundation for understanding many such patterns, largely relying on qualitative research methods to extract distinct spatial identities of urban areas. However, the manual, labour-intensive and subjective nature of such approaches represents an impediment to the development of a scalable, replicable and data-driven urban form characterisation. Recently, advances in geographic data science and the availability of digital mapping products open the opportunity to overcome such limitations. And yet, our current capacity to systematically capture the heterogeneity of spatial patterns remains limited in terms of spatial parameters included in the analysis and hardly scalable due to the highly labour-intensive nature of the task. In this paper, we present a method for numerical taxonomy of urban form derived from biological systematics, which allows the rigorous detection and classification of urban types. Initially, we produce a rich numerical characterisation of urban space from minimal data input, minimising limitations due to inconsistent data quality and availability. These are street network, building footprint and morphological tessellation, a spatial unit derivative of Voronoi tessellation, obtained from building footprints. Hence, we derive homogeneous urban tissue types and, by determining overall morphological similarity between them, generate a hierarchical classification of urban form. After framing and presenting the method, we test it on two cities – Prague and Amsterdam – and discuss potential applications and further developments. The proposed classification method represents a step towards the development of an extensive, scalable numerical taxonomy of urban form and opens the way to more rigorous comparative morphological studies and explorations into the relationship between urban space and phenomena as diverse as environmental performance, health and place attractiveness.

Capturing the Structure of Cities with Data Science

During the Spatial Data Science Conference 2021, I had a chance to deliver a workshop illustrating the application of PySAL and momepy in understanding the structure of cities. The recording is now available for everyone. The materials are available on my GitHub and you can even run the whole notebook in your browser using the MyBinder service.

xyzservices: a unified source of XYZ tile providers in Python

The Python ecosystem offers numerous tools for the visualisation of data on a map. A lot of them depend on XYZ tiles providing a base map layer, either from OpenStreetMap, satellite imagery or other sources. The issue is that each package offering XYZ support manages its own list of supported providers.

We have built the xyzservices package to support any Python library making use of XYZ tiles. I’ll try to explain the rationale behind it without going into the details of the package. If you want those details, check its documentation.

The situation

Let me quickly look at a few popular packages and their approach to tile management – contextily, folium, ipyleaflet and holoviews.

contextily

contextily brings contextual base maps to static geopandas plots. It comes with a dedicated contextily.providers module, which contains a hard-coded list of providers scraped from the list used by leaflet (as of version 1.1.0).

folium

folium is a lightweight interface to the Leaflet JavaScript library. It provides built-in support for six types of tiles and allows passing any XYZ URL and its attribution to a map, which means it mostly relies on external sources of tile providers.

ipyleaflet

ipyleaflet brings Leaflet support to Jupyter notebooks and comes with a few more options than folium. Its approach is very similar to that of contextily – it has a hard-coded list of about 37 providers in its basemaps module.

holoviews

holoviews provides a Python interface to the Bokeh library, and its list of supported base maps is also hard-coded.

A similar situation is in other packages like geemap or leafmap.

Each package has to maintain its own list of base maps, ensure that they all work, respond to users requesting more, update links… That is a lot of duplicated maintenance burden, and we think it is avoidable.

The vision

All XYZ tile providers have a single lightweight home and a clean API supporting the rest of the ecosystem. All the other packages use the same resource, one which is tested and expanded by a single group of maintainers.

We have designed xyzservices to be exactly that. It is a Python package that has no dependencies and only a single purpose – to collect and process metadata of tile providers.

We envisage a few potential use cases.

The first – packages like contextily and geopandas will directly support the xyzservices.TileProvider object when specifying tiles. Nothing else is needed; contextily will fetch everything it requires (the final tile URL, attribution, zoom and extent limits) from the object. In code form:

import xyzservices.providers as xyz
from contextily import add_basemap

add_basemap(ax, source=xyz.CartoDB.Positron)

The second option is wrapping xyzservices.providers into a custom API providing, for example, an interactive selection of tiles.
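
Since xyzservices.providers behaves like a nested dictionary of TileProvider objects, such a wrapper can stay very thin. A hypothetical sketch:

import xyzservices.providers as xyz

def get_provider(name):
    """Return a TileProvider from a dotted name like 'CartoDB.Positron'."""
    provider = xyz
    for part in name.split("."):
        provider = provider[part]
    return provider

positron = get_provider("CartoDB.Positron")
print(positron.attribution)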

The third one is using the individual parts of a TileProvider when passing the information to a library. This option can already be used, for example, with folium:

import folium
import xyzservices.providers as xyz

tiles = xyz.CartoDB.Positron

folium.Map(
    location=[53.4108, -2.9358],
    tiles=tiles.build_url(),
    attr=tiles.attribution,
)

The last one is the most versatile. xyzservices comes with a JSON file used as storage of all the metadata. The JSON is automatically installed to share/xyzservices/providers.json, where it is available to any other package without depending on xyzservices directly.
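
A sketch of that last option – loading the installed JSON directly, without importing xyzservices. The exact location is resolved relative to the environment prefix here, which is an assumption that may not hold in every setup:

import json
import sys
from pathlib import Path

# providers.json is installed into the environment share folder
providers_path = Path(sys.prefix) / "share" / "xyzservices" / "providers.json"

with open(providers_path) as f:
    providers = json.load(f)

print(providers["CartoDB"]["Positron"]["url"])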

We hope to cooperate with the maintainers of other existing packages and move most of the reusable functionality around XYZ tiles to xyzservices. We think that it will:

  1. Remove the burden from individual developers. Any package will just implement an interface to the Python or JSON API of xyzservices.
  2. Expand the list of easy-to-use tiles for users. xyzservices currently has over 200 providers, all of which should be available to users across the ecosystem, without the need to hard-code them individually in every package.

While this discussion started in May 2020 (thanks @darribas!), the initial version of the package is out now and installable from PyPI and conda-forge. We hope to have as many developers as possible on board to allow for the consolidation of the ecosystem in the future.

Evolution of Urban Patterns: Urban Morphology as an Open Reproducible Data Science

We have a new paper published in the Geographical Analysis on the opportunities current developments in geographic data science within the Python ecosystem offer to urban morphology. To sum up – there’s a lot to play with and if you’re interested in the quantification of urban form, there’s no better choice for you at the moment.

Urban morphology (the study of urban form) is historically a qualitative discipline that has only recently expanded into more data-science-ish waters. We believe that there is a lot of potential in this direction and illustrate it with a case study looking into the evolution of urban patterns, i.e. how different aspects of urban form have changed over time.

The paper is open access (yay!) (links to the online version and a PDF), and the research is fully reproducible (even in your browser thanks to the amazing MyBinder!), with all code on GitHub.

Short summary

We have tried to map all the specialised open-source tools urban morphologists can use these days, which resulted in this nice table. The main conclusion? Most of them are plug-ins for QGIS or ArcGIS and hence depend on pointing and clicking – a tricky thing to reproduce properly.

Table 1 from the paper

We prefer code-based science, so we took the Python ecosystem and put it to the test. We designed a fully reproducible workflow based on GeoPandas, OSMnx, PySAL and momepy to sample and study 42 places around the world, developed at very different times.
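
To give a flavour of what such a workflow looks like, here is a heavily simplified sketch combining OSMnx and momepy. The place name is purely illustrative and the real study measures many more characters:

import momepy
import osmnx as ox

# download building footprints for a sample area (place name is illustrative)
buildings = ox.geometries_from_place(
    "Kingston upon Hull, UK", tags={"building": True}
)
buildings = buildings[buildings.geom_type == "Polygon"].reset_index(drop=True)
buildings = buildings.to_crs(buildings.estimate_utm_crs())

# measure a couple of morphometric characters with momepy
buildings["area"] = buildings.area
buildings["compactness"] = momepy.CircularCompactness(buildings).series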

We measured a bunch of morphometric characters (indicators of individual aspects of form) and looked at their change over time. And there is a lot to look at. There are significant differences not only in scale (see the figure below) but in other aspects as well. One interesting observation – it seems that we have forgotten how to make a properly connected, dense grid.

As we said in the paper, “Switching to a code-based analysis may be associated with a steep learning curve. However, not everyone needs to reach the developer level as the data science ecosystem aims to provide a middle ground user level. That is a bit like Lego—the researcher learns how to put pieces together and then finds the pieces they need to build a house.”

We think that moving from QGIS to Python (or R), as daunting as it may seem to some, is worth it. It helps us overcome the reproducibility crisis science is going through – a crisis caused, among other things, by relying too much on pointing and clicking.

The open research paradigm, based on open platforms and transparent community-led governance, has the potential to democratize science and remove unnecessary friction caused by the lack of cooperation between research groups while bringing additional transparency to research methods and outputs.

page 18

Abstract

The recent growth of geographic data science (GDS) fuelled by increasingly available open data and open source tools has influenced urban sciences across a multitude of fields. Yet there is limited application in urban morphology—a science of urban form. Although quantitative approaches to morphological research are finding momentum, existing tools for such analyses have limited scope and are predominantly implemented as plug-ins for standalone geographic information system software. This inherently restricts transparency and reproducibility of research. Simultaneously, the Python ecosystem for GDS is maturing to the point of fully supporting highly specialized morphological analysis. In this paper, we use the open source Python ecosystem in a workflow to illustrate its capabilities in a case study assessing the evolution of urban patterns over six historical periods on a sample of 42 locations. Results show a trajectory of change in the scale and structure of urban form from pre-industrial development to contemporary neighborhoods, with a peak of highest deviation during the post-World War II era of modernism, confirming previous findings. The wholly reproducible method is encapsulated in computational notebooks, illustrating how modern GDS can be applied to urban morphology research to promote open, collaborative, and transparent science, independent of proprietary or otherwise limited software.

Fleischmann, M., Feliciotti, A. and Kerr, W. (2021), Evolution of Urban Patterns: Urban Morphology as an Open Reproducible Data Science. Geogr Anal. https://doi.org/10.1111/gean.12302

Talk at ISUF 2021: Classifying urban form at a national scale

I had a chance to present our ongoing work on the classification of the (built) environment in Great Britain during the International Seminar on Urban Form 2021, which was held virtually in Glasgow. I was presenting the classification of urban form, one component of Spatial Signatures we’re developing as part of the Urban Grammar AI project together with Dani Arribas-Bel. The video of the presentation is attached below, as well as the abstract.

Classifying urban form at a national scale: The case of Great Britain

There is a pressing need to monitor urban form and function in ways that can feed into better planning and management of cities. Both academic and policymaking communities have identified the need for more spatially and temporally detailed, consistent, and scalable evidence on the nature and evolution of urban form. Despite impressive progress, the literature can achieve only two of those characteristics simultaneously. Detailed and consistent studies do not scale well because they tend to rely on small-scale, ad-hoc datasets that offer limited coverage. Until recently, consistent and scalable research has only been possible by using simplified measures that inevitably miss much of the nuance and richness behind the concept of urban form.

This paper outlines the notion of “spatial signatures”, a characterisation of space based on form and function, and will specifically focus on its form component. Whilst spatial signature sits between the purely morphological and purely functional description of the built environment, its form-based component reflects the morphometric definition of urban tissue, the distinct structurally homogenous area of a settlement. The proposed method employs concepts of “enclosures” and “enclosed tessellation” to derive indivisible hierarchical geographies based on physical boundaries (streets, railway, rivers, coastline) and building footprints to delineate such tissues in the built fabric. Each unit is then characterised by a comprehensive set of data-driven morphometric characters feeding into an explicitly spatial contextual layer, which is used as an input of cluster analysis.

The classification based on spatial signatures is applied to the entirety of Great Britain on a fine grain scale of individual tessellation cells and released as a fully reproducible open data product. The results provide a unique input for local authorities to drive planning and decision-making and for the wider research community as data input.

Video

The Urban Atlas: Methodological Foundation of a Morphometric Taxonomy of Urban Form

The Urban Atlas: Methodological Foundation of a Morphometric Taxonomy of Urban Form is the title of my PhD Thesis defended in January 2021 at the University of Strathclyde.

Thanks to Ombretta and Sergio for guiding me along the way!

Abstract

No two cities in the world are alike. Each urban environment is characterised by a unique variety and heterogeneity as a result of its evolution and transformation, reflecting the differences in needs human populations have had over time manifested, in space, by a plethora of urban patterns.

Traditionally, the study of these patterns over time and across space is the domain of urban morphology, a field of research stretching from geography to architecture. Whilst urban morphology has considerably advanced the current understanding of processes of formation, transformation and differentiation of many such patterns, predominantly through qualitative approaches, it has yet to fully take advantage of quantitative approaches and data-driven methods recently made possible by advances in geographic data science and expansion of available mapping products. Although relatively new, these methods hold immense potential in expanding our capacity to identify, characterise and compare urban patterns: these can be rich in terms of information, scalable (applicable to the large scale of extent, regional and national) and replicable, drastically improving the potential of comparative analysis and classification.

Different disciplines with more profound quantitative methods can help in the development of data-driven urban morphology, as now, for the first time, we are in the position where we can rely on a large amount of data on the built environment, unthinkable just a decade ago. This thesis, therefore, aims to link urban morphology and methodologically strong area of quantitative biological systematics, adapting its concepts and methods to the context of built-up fabric. That creates an infrastructure for numerical description of urban form, known as urban morphometrics, and a subsequent classification of urban types.

Conceptually building on the theory of numerical taxonomy, this research progresses the development of urban morphometrics to automate processes of urban form characterisation and classification. Whilst many available methods are characterised by significant limitation in applicability due to difficulties in obtaining necessary data, the proposed method employs only minimal data input – street network and building footprints – and overcomes limitations in the delineation of plots by identifying an alternative spatial unit of analysis, the morphological tessellation, a derivative of Voronoi tessellation partitioning the space based on a composition of building footprints. As tessellation covers the entirety of urban space, its inherent contiguity then constitutes a basis of a relational framework aimed at the comprehensive characterisation of individual elements of urban form and their relationships. Resulting abundant numerical description of all features is further utilised in cluster analysis delineating urban tissue types in an unrestricted urban fabric, shaping an input for hierarchical classification of urban form – a taxonomy.

The proposed method is applied to the historical heterogeneous city of Prague, Czechia and validated using supplementary non-morphological data reflecting the variation of built-up patterns. Furthermore, its cross-cultural and morphological validity and expandability are tested by assessment of Amsterdam, Netherlands and a combination of both cases into a unified taxonomy of their urban patterns. The research is accompanied by a bespoke open-source software momepy for quantitative assessment of urban form, providing infrastructure for replicability and further community-led development.

The work builds a basis for morphometric research of urban environment, providing operational tools and frameworks for its application and further development, eventually leading to a coherent taxonomy of urban form.

Keywords: urban morphometrics, taxonomy, classification, measuring, urban form, quantitative analysis, urban morphology, software


A PDF of the thesis is available from my Dropbox until it becomes visible in the University repository. The code repository will be shared once I manage to clean it :).

Clustergam: visualisation of cluster analysis

In this post, I introduce a new Python package to generate clustergrams from clustering solutions. The library has been developed as part of the Urban Grammar research project, and it is compatible with scikit-learn and GPU-enabled libraries such as cuML or cuDF within RAPIDS.AI.


When we want to do cluster analysis to identify groups in our data, we often use algorithms like K-Means, which require the specification of the number of clusters. But the issue is that we usually don’t know how many clusters there are.

There are many methods to determine the correct number, like the silhouette score or the elbow plot, to name a few. But they usually don’t give much insight into what is happening between different options, so the resulting numbers are a bit abstract.

Matthias Schonlau proposed another approach – the clustergram. A clustergram is a two-dimensional plot capturing the flows of observations between classes as you add more clusters. It tells you how your data reshuffle and how good your splits are. Tal Galili later implemented the clustergram for K-Means in R, and I have ported Tal’s implementation to Python and created clustergram – a Python package to make clustergrams.

clustergram currently supports K-Means (including the Mini-Batch implementation) using scikit-learn and RAPIDS.AI cuML (if you have a CUDA-enabled GPU), the Gaussian Mixture Model (scikit-learn only) and hierarchical clustering based on scipy.cluster.hierarchy. Alternatively, we can create a clustergram from labels and data derived from a custom clustering algorithm. It provides a sklearn-like API and plots the clustergram using matplotlib, which gives it a wide range of styling options to match your publication style.

Install

You can install clustergram from conda or pip:

conda install clustergram -c conda-forge

or

pip install clustergram

In any case, you still need to install your selected backend (scikit-learn and scipy or cuML).

from clustergram import Clustergram
import urbangrammar_graphics as ugg
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.preprocessing import scale

sns.set(style='whitegrid')

Let us look at some examples to understand how clustergram looks and what to do with it.

Iris flower data set

The first example we try to analyse using a clustergram is the famous Iris flower data set. It contains data on three species of Iris flowers, measuring sepal width and length and petal width and length. We can start with some exploration:

iris = sns.load_dataset("iris")
g = sns.pairplot(iris, hue="species", palette=ugg.COLORS[1:4])
g.fig.suptitle("Iris flowers", y=1.01)

It seems that setosa is a relatively well-defined group, while the difference between versicolor and virginica is smaller as they partially overlap (or entirely in the case of sepal width).

Okay, so we know how the data look. Now we can check how the clustergram looks. Remember that we know there are three clusters, and we should ideally be able to recognise this from the clustergram. I am saying ideally because even though there are known labels, it does not mean that our data or clustering method are able to distinguish between those classes.

Let’s start with K-Means clustering. To get a stable result, we can run a clustergram with 1000 initialisations.

data = scale(iris.drop(columns=['species']))

cgram = Clustergram(range(1, 10), n_init=1000)
cgram.fit(data)

ax = cgram.plot(
    figsize=(10, 8),
    line_style=dict(color=ugg.COLORS[1]),
    cluster_style={"color": ugg.COLORS[2]},
)
ax.yaxis.grid(False)
sns.despine(offset=10)
ax.set_title('K-Means (scikit-learn)')

On the x axis, we can see the number of clusters. Points represent a centre of each cluster (by default) weighted by the first principal component (that helps with the diagram’s readability). The lines connecting points and their thickness represent observations moving between clusters. Therefore, we can read when new clusters are formed as a split of a single existing class and when they are formed based on observations from two clusters.

We’re looking for the separation, i.e., did an additional cluster bring any meaningful split? The step from one cluster to two is a big one – nice and clear separation. From two to three, we also have quite a nice split in the top branch. But from 3 to 4, there is no visible difference because the new fourth cluster is almost the same as the existing bottom branch. Although it is now separated into two, this split does not give us much information. Therefore, we could conclude that the ideal number of clusters for Iris data is three.

We can also check some additional information, like the silhouette score or the Calinski-Harabasz score.

fig, axs = plt.subplots(2, figsize=(10, 10), sharex=True)
cgram.silhouette_score().plot(
    xlabel="Number of clusters (k)",
    ylabel="Silhouette score",
    color=ugg.COLORS[1],
    ax=axs[0],
)
cgram.calinski_harabasz_score().plot(
    xlabel="Number of clusters (k)",
    ylabel="Calinski-Harabasz score",
    color=ugg.COLORS[1],
    ax=axs[1],
)
sns.despine(offset=10)

These plots would suggest 3-4 clusters, similarly to clustergram, but they are not very conclusive.

Palmer penguins data set

Now let’s try different data, where the clusters are a bit more complicated to assess. The Palmer penguins data set contains measurements similar to the Iris example, but for several attributes of three species of penguins.

penguins = sns.load_dataset("penguins")

g = sns.pairplot(penguins, hue="species", palette=ugg.COLORS[3:])
g.fig.suptitle("Palmer penguins", y=1.01)

Looking at the situation, we see that the overlap between species is much higher than before. It will likely be much more complicated to identify them. Again, we know that there are three clusters, but that does not mean that data has the power to distinguish between them. In this case, it may be especially tricky to differentiate between Adelie and Chinstrap penguins.

data = scale(penguins.drop(columns=['species', 'island', 'sex']).dropna())

cgram = Clustergram(range(1, 10), n_init=1000)
cgram.fit(data)

ax = cgram.plot(
    figsize=(10, 8),
    line_style=dict(color=ugg.COLORS[1]),
    cluster_style={"color": ugg.COLORS[2]},
)
ax.yaxis.grid(False)
sns.despine(offset=10)
ax.set_title("K-Means (scikit-learn)")

We’re looking for separations, and this clustergram shows plenty. It is actually quite complicated to determine the optimal number of clusters. However, since we know what happens between the different options, we can play with that. If we have a reason to be conservative, we can go with 4 clusters (I know, that is already more than the initial number of species). But further splits are also reasonable, which indicates that even higher granularity may provide useful insight and that there might be meaningful groups.

Can we say it is three? Since we know it should be three… Well, not really. The difference between the split from 2 to 3 and the one from 3 to 4 is slight. However, the culprit here is K-Means, not the clustergram. It simply cannot cluster these data correctly due to the overlaps and the overall structure.

Let’s have a look at how the Gaussian Mixture Model does.

cgram = Clustergram(range(1, 10), n_init=100, method="gmm")
cgram.fit(data)

ax = cgram.plot(
    figsize=(10, 8),
    line_style=dict(color=ugg.COLORS[1]),
    cluster_style={"color": ugg.COLORS[2]},
)
ax.yaxis.grid(False)
sns.despine(offset=10)
ax.set_title("Gaussian Mixture Model (scikit-learn)")

The result is very similar, though the difference between the third and fourth split is more pronounced. Even here, I would probably go with a four-cluster solution.

A situation like this happens very often. The ideal case does not exist; we ultimately need to make a decision on the optimal number of clusters. Clustergram gives us additional insight into what happens between the different options and how the data split. We can tell that the four-cluster option in the Iris data is not helpful. We can tell that Palmer penguins may be tricky to cluster using K-Means and that there is no single right solution. Clustergram does not give an easy answer, but it gives us additional insight, and it is up to us how we interpret it.

You can install clustergram using conda install clustergram -c conda-forge or pip install clustergram. In any case, you will still need to install a clustering backend, either scikit-learn or cuML. The documentation is available at clustergram.readthedocs.io, and the source code is on github.com/martinfleis/clustergram, released under MIT license.

If you want to play with the examples used in this article, the Jupyter notebook is on GitHub. You can also run it in an interactive binder environment in your browser.

For more information, check Tal Galili’s blog post and original papers by Matthias Schonlau.

Give it a go!

Spatial Analytics + Data Talk

On March 30, 2021, I had a chance to deliver a talk as part of the Spatial Analytics + Data Seminar Series organised by the University of Newcastle (Rachel Franklin), the University of Bristol (Levi Wolf) and the Alan Turing Institute. The recording of the event is now available on YouTube.

Spatial Signatures: Dynamic classification of the built environment

This talk introduces the notion of “spatial signatures”, a characterisation of space based on form and function. We know little about how the way cities are organised over space influences social, economic and environmental outcomes, in part because it is hard to measure. The talk presents the first stage of the Urban Grammar AI research project, which develops a conceptual framework to characterise urban structure through the notions of spatial signatures and urban grammar and will deploy it to generate open data products and insight about the evolution of cities.

The slides are available online at https://urbangrammarai.github.io/talks/202103_sad/.