Identifying potentially related records - How does the GBIF data-clustering feature work?

Many data users may suspect they’ve discovered duplicated records in the GBIF index. You download data from GBIF, analyze them, and realize that some records share the same date, scientific name, catalogue number and location but come from two different publishers or have slightly different attributes. There are many valid reasons why these duplicates appear on GBIF. Sometimes an observation was recorded in two different systems, sometimes several records correspond to herbarium duplicates (see the work of Nicky Nicolson on the topic), sometimes a specimen was digitized twice, sometimes a record has been enriched with genetic information and republished via a different platform…

What are the flags "Collection match fuzzy", "Collection match none", "Institution match fuzzy", "Institution match none" and how to remove them?

You publish occurrence data through GBIF, care about your data quality, and wonder what to do about the issue flags that show up on your occurrences. You might have noticed a new type of flag this year relating to collection and institution codes and identifiers. These flags are the result of our attempt at linking specimen records to our Registry of Scientific Collections (GRSciColl). We want to be able to group specimens and combine metrics at the collection and institution levels (which can be independent from the way they were published on GBIF).

GBIF API beginners guide

This is a GBIF API beginner's guide.

The GBIF API technical documentation might be a bit confusing if you have never used an API before. The goal of this guide is to introduce the GBIF API to a semi-technical user who may have never used an API before.

The purpose of the GBIF API is to give users access to GBIF databases in a safe way. The GBIF API is also what allows GBIF.org and packages like rgbif to function.
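To make "using an API" concrete, here is a minimal Python sketch that composes a request URL for the occurrence search endpoint (`/v1/occurrence/search`); the parameter values are arbitrary examples, and pasting the resulting URL into a browser returns a page of JSON results.

```python
from urllib.parse import urlencode

BASE = "https://api.gbif.org/v1"

def occurrence_search_url(**params):
    # Compose a GBIF occurrence search URL; any search filter
    # (scientificName, country, year, ...) becomes a query parameter.
    return BASE + "/occurrence/search?" + urlencode(sorted(params.items()))

print(occurrence_search_url(scientificName="Lutra lutra", limit=5))
```

The same URL-plus-parameters pattern underlies every GBIF API call, which is why tools like rgbif can be thin wrappers around it.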

Did you know that...? - some of the lesser-known functionalities around GBIF.org

During the first-ever virtual GBIF 2021 Global Nodes Meeting, GBIFS hosted a “game show”: a one-hour “battle of Nodes vs. helpdesk”. The not-so-hidden goal was to demonstrate some of the lesser-known functionalities of GBIF.org through a fun, interactive session.

The following is a summary of the questions and answers from this session, plus some extras that did not make it into the time frame of the event. The summary follows the layout and sequence of the interactive hour:

GBIF and Apache-Spark on AWS tutorial

GBIF now has a snapshot of 1.3 billion occurrence records on Amazon Web Services (AWS). This guide will take you through running Spark notebooks on AWS. The GBIF snapshot is documented here.

You can read previous discussions about GBIF and cloud computing here. The main reason you would want to use cloud computing is to run big data queries that are slow or impractical on a local machine.

Derived datasets

You’ve finished an analysis using GBIF-mediated data, you’re writing up your manuscript and checking all the references, but you’re unsure of how to cite GBIF. If you Google it, you’ll probably end up reading our citation guidelines and quickly realize that GBIF is all about DOIs. Datasets have their own DOIs, and downloads of aggregated data also have their own DOIs.

But maybe you didn’t download data through the portal. Maybe you relied on an R package like rgbif or dismo that retrieved data synchronously from the GBIF API? Maybe a grad student downloaded it for you? Maybe you accessed and analyzed the data using a cloud computing service, like Microsoft Azure or Amazon Web Services? In any case, which DOI do you cite if you don’t have one?

GBIF and Apache-Spark on Microsoft Azure tutorial

GBIF now has a snapshot of 1.3 billion occurrence records on Microsoft Azure.

It is hosted by the Microsoft AI for Earth program, which hosts geospatial datasets that are important to environmental sustainability and Earth science. Hosting is convenient because you can now use occurrences in combination with other environmental layers without needing to upload any of it to Azure. You can read previous discussions about GBIF and cloud computing here. The main reason you would want to use cloud computing is to run big data queries that are slow or impractical on a local machine.

The GBIF Registry of Scientific Collections (GRSciColl) in 2021

The GBIF Registry of Scientific Collections, also known as GRSciColl, has been available on GBIF.org since 2019, but it recently got some more attention when we connected it to GBIF occurrences. Now is the perfect time to share a bit of GRSciColl history and what we plan for its future.

A brief history of GRSciColl

First of all, here are a few facts about GRSciColl today, at the start of 2021:

Common things to look out for when post-processing GBIF downloads

Here I present a checklist for filtering GBIF downloads. In this guide, I will assume you are familiar with R. This guide is also somewhat general, so your solution might differ. It is intended to give you a checklist of common things to look out for when post-processing GBIF downloads. Here is an example of a filtering checklist script that would work for most users. Individual users might want to add or remove some steps.
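To make the idea concrete, here is a minimal sketch of such a filter in Python (the post's own examples are in R; this is an illustrative translation using Darwin Core-style keys and arbitrary thresholds):

```python
def basic_filter(records):
    """Drop records commonly removed when post-processing GBIF downloads.

    Each record is assumed to be a dict of Darwin Core-style fields;
    the specific checks and thresholds below are illustrative only.
    """
    keep = []
    for r in records:
        lat, lon = r.get("decimalLatitude"), r.get("decimalLongitude")
        if lat is None or lon is None:                  # missing coordinates
            continue
        if lat == 0 and lon == 0:                       # suspicious (0, 0) point
            continue
        if r.get("basisOfRecord") == "FOSSIL_SPECIMEN":  # unwanted record type
            continue
        if (r.get("coordinateUncertaintyInMeters") or 0) > 100_000:  # too vague
            continue
        keep.append(r)
    return keep
```

A real checklist would add more steps (country/coordinate mismatches, default values, duplicates), but each one follows this same filter-and-keep shape.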

(Almost) everything you want to know about the GBIF Species API

Today, we are talking about the GBIF Species API. Although you might not use it directly, you have probably encountered it while using the GBIF web portal:

This API is what allows us to navigate the names available on GBIF. I will try to avoid repeating what you can already find in its documentation. Instead, I will attempt to give an overview and answer some questions that we received in the past.
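As a small hedged example, here is how a request to the name-matching endpoint (`/v1/species/match`, which resolves a verbatim name against the GBIF Backbone) could be composed in Python; the species name is an arbitrary example:

```python
from urllib.parse import urlencode

def species_match_url(name):
    # /v1/species/match fuzzy-matches a verbatim name against the
    # GBIF Backbone taxonomy; the JSON response includes a usageKey
    # that other occurrence and species calls can then use.
    return "https://api.gbif.org/v1/species/match?" + urlencode({"name": name})

print(species_match_url("Puma concolor"))
```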

GBIF Issues & Flags

Publishers share datasets, but also manage data quality. GBIF provides access to biodiversity data, but also flags suspicious or missing content. Users use data, but also clean and remove records. Each plays an important role in managing and improving data quality.

What are GBIF issues and flags?

The GBIF network publishes datasets, integrating them into a common access system. Here users can retrieve data through common search and download services.

Outlier Detection Using DBSCAN

Geographic outliers at GBIF are a known problem. Outliers can be errors, coordinates with high uncertainty, or simply occurrences from an under-sampled region. In data-cleaning pipelines, outliers are often removed (even if they are legitimate points) because the researcher does not have time to verify each record one by one. In almost all cases, outlier points are occurrences that need attention. Currently, there is no outlier detection implemented at GBIF, and it is up to the user to remove outliers themselves (e.
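Since the post's title names DBSCAN, here is a minimal self-contained sketch of the algorithm on planar points (a real pipeline would use geographic distances and tuned parameters); points labeled -1 end up as noise, i.e. candidate outliers:

```python
def dbscan(points, eps, min_pts):
    """Toy DBSCAN: label each 2D point with a cluster id, or -1 for noise."""
    labels = [None] * len(points)  # None = unvisited

    def neighbors(i):
        # Indices of all points within eps (Euclidean), including i itself.
        return [j for j in range(len(points))
                if ((points[i][0] - points[j][0]) ** 2 +
                    (points[i][1] - points[j][1]) ** 2) ** 0.5 <= eps]

    cluster = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:       # not dense enough: mark as noise
            labels[i] = -1
            continue
        labels[i] = cluster           # start a new cluster from this core point
        seeds = [j for j in nbrs if j != i]
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:       # noise reachable from a core point
                labels[j] = cluster   # becomes a border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            jn = neighbors(j)
            if len(jn) >= min_pts:    # j is also a core point: keep expanding
                seeds.extend(jn)
        cluster += 1
    return labels
```

An isolated record far from any dense cluster of conspecific occurrences would come out labeled -1, which is exactly the kind of point a cleaning pipeline would flag for review.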

GBIF Regional Statistics - 2020

I was asked to prepare some statistics for the GBIF regional meetings being held virtually this year. This blog post is a companion for those meetings. You can watch a video presentation of the preparation of these meetings here. The presentation of this blog post starts here.

The North American virtual nodes meeting 2020 was on 5-6 May 2020
The Europe and Central Asia virtual nodes meeting 2020 was on 11-12 May 2020
The Latin America and Caribbean virtual nodes meeting 2020 will be on 18-20 May 2020
The Africa virtual nodes meeting 2020 will be on 10-12 June 2020

GBIF introduced a regional framework across the GBIF Network a little more than a decade ago, with groups based on clusters of national participants.

Which tools can I use to share my data on GBIF?

As you probably already know, GBIF.org doesn’t host any data. The system relies on each data provider making their data available online in a GBIF-supported format. It also relies on organizations letting GBIF know where to find these data (in other words, registering the data). But how do you do just that? The good news is that there are several GBIF-compatible systems. They will export or make the data available for you in the correct format, and several provide means to register them as datasets on GBIF.

GBIF occurrence license processing

GBIF is now processing occurrence licenses record-by-record.

iNaturalist research-grade observations

Previously all occurrence licenses defaulted to their dataset license (provided by the publisher).

Does Biodiversity Informatics 💘 Wikidata?

Open online APIs are fantastic! You can use someone else’s infrastructure to create workflows, do research and create products without giving anything in return, except acknowledgement. But wait a minute! Why is everyone not using them? Why do we create our own data sources and suck up the costs in time and money? Not to mention the duplication of effort.

Frictionless Data and Darwin Core

Frictionless Data is about removing the friction in working with data through the creation of tools, standards, and best practices for publishing data using the Data Package standard, a containerization format for any kind of data. It offers specifications and software around data publication, transport and consumption. As with Darwin Core, data resources are presented as CSV files while the data model is described in a JSON structure.

How to choose a dataset class on GBIF?

If you are a (first-time) publisher on GBIF and you are trying to decide which type of dataset would best fit your data, this blog post is for you. All the records shared on GBIF are organized into datasets. Each dataset is associated with some metadata describing its content (the classic “what, where, when, why, how”). The dataset’s content depends strongly on the dataset’s class. GBIF currently supports four types of dataset:

Understanding basis of record - a living specimen becomes a preserved specimen

Recently a user noticed that there were Asian Red Pandas (Ailuridae) occurring in North America, and wondered if someone had made a mistake. When an occurrence observation comes from a zoo or botanical garden, it is usually considered a living specimen, but when it comes from a museum it is usually called a preserved specimen. This label helps users remove records they might not want, such as those from zoos.

Search, download, analyze and cite (repeat if necessary)

Finding and accessing data

There is a lot of GBIF-mediated data available. More than 1.3 billion occurrence records covering hundreds of thousands of species in all parts of the world. All free, open and available at the touch of a button. Users can download data through the portal, via the GBIF API, or with one of the third-party tools available for programmatic access, e.g. rgbif. If there is one area in which GBIF has been immensely successful, it’s making the data available to users.

Six questions answered about the GBIF Backbone Taxonomy

This past week our informatics team has been updating the Backbone taxonomy on GBIF.org. This is a fairly disruptive process which sometimes involves massive taxonomic changes, but DON’T PANIC. This update is a good thing. It means that some of the taxonomic issues reported have been addressed (see for example this issue concerning the Xylophagidae family) and that new species are now visible on GBIF. Plus, it gives me an excellent opportunity to talk about the GBIF backbone taxonomy and answer some of the questions you might have.

Downloading occurrences from a long list of species in R and Python

It is now possible to download occurrences for up to 100,000 species names on GBIF! Until recently it was not possible to download occurrences for more than a few hundred species at the same time, but GBIF now supports download requests of up to 100,000(!) taxonKeys. That should cover most use cases :) For such large requests, however, you will need to POST your query to the Occurrence Download API service: https://t.
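A hedged sketch of building such a POST body in Python (the predicate shape follows the occurrence download API's "in" predicate over TAXON_KEY; the keys and username below are placeholders):

```python
import json

def taxa_download_request(taxon_keys, user, fmt="SIMPLE_CSV"):
    # One "in" predicate covers the whole species list in a single request;
    # the resulting JSON is POSTed to the occurrence download request endpoint.
    return json.dumps({
        "creator": user,
        "format": fmt,
        "predicate": {
            "type": "in",
            "key": "TAXON_KEY",
            "values": [str(k) for k in taxon_keys],
        },
    })

# Placeholder keys and username, for illustration only.
body = taxa_download_request([1111, 2222, 3333], "my_gbif_username")
```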

Citizen Science on GBIF - 2019

Citizen Science datasets on GBIF plotted with all other (gray) GBIF datasets (>100K occurrences). There are many citizen science datasets with millions of occurrences (eBird, (Swedish) Artportalen), and the top 3 datasets on GBIF are all citizen science datasets. But in terms of number of unique species, only iNaturalist competes with large museum datasets like Smithsonian NMNH. Because of very large datasets like eBird and Artportalen, Citizen Science makes up a large percentage of the total occurrence records on GBIF.

Exploring es50 for GBIF

It has been suggested that GBIF could make es50 maps similar to what organizations like OBIS are already doing. I decided to make one for land animals (graph above). es50 (Hurlbert index) is the statistically expected number of unique species in a random sample of 50 occurrence records, and is an indicator of biodiversity richness. The score can be computed without random sampling; the mean of infinitely repeated random sampling would give the same result.
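es50 can be computed exactly from per-species record counts. A sketch in Python, assuming Hurlbert's formula for the expected number of species in a sample of n records drawn without replacement:

```python
from math import comb

def es50(counts, n=50):
    """Hurlbert's expected species count in a random sample of n records.

    counts: number of occurrence records per species; N is their total.
    Each species contributes the probability that at least one of its
    records appears in a sample of n drawn without replacement:
    1 - C(N - N_i, n) / C(N, n).
    """
    N = sum(counts)
    if N < n:
        raise ValueError("need at least n records in total")
    return sum(1 - comb(N - c, n) / comb(N, n) for c in counts)
```

For example, with 100 species of one record each, the expected number of species in a sample of 50 records is exactly 50; with a single species holding all records, it is 1, no matter how many records there are.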

Not a bird download

Recently we were asked on GitHub whether there was a way to get all animal occurrences that are not birds. This seems like an easy enough request, but unfortunately there is currently no way to exclude groups from a download search and get everything but a certain group. A user can get all birds, but they can’t get no birds! I thought this was an interesting question and probably useful for other people wanting smaller downloads, since there are currently around half a billion occurrence records for birds.
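One workaround is to enumerate the sibling groups of birds and request them together in one download. A sketch in Python, where the group-to-taxonKey mapping is entirely hypothetical placeholder data (real keys would be looked up via the species API):

```python
import json

# Hypothetical placeholder keys for non-bird groups; the real taxonKeys
# must be resolved against the GBIF Backbone via the species API.
NON_BIRD_GROUPS = {"Mammalia": 1111, "Reptilia": 2222, "Amphibia": 3333}

def everything_but_birds():
    # One "in" predicate over the sibling groups approximates
    # "all of the parent group except Aves" without a NOT filter.
    return json.dumps({
        "format": "SIMPLE_CSV",
        "predicate": {
            "type": "in",
            "key": "TAXON_KEY",
            "values": [str(k) for k in NON_BIRD_GROUPS.values()],
        },
    })
```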

Big National Checklists

Here I plot the total names in checklists published on GBIF linked to a single country (big: 15-300K total names; medium: 5-15K; small: 0-5K). A checklist dataset is a term for any dataset that contains primarily a list of taxonomic names. National species checklists are lists of species recorded from a country, usually through some organized effort. GBIF has published a guide on best practices for making national checklist datasets, which advises making national checklists as big as possible.

GBIF checklist datasets and data gaps

A checklist dataset is a catch-all term describing any dataset that contains primarily a list of taxonomic names. The lines between a checklist dataset and an occurrence dataset can be blurry. GBIF classifies at least 6 types of datasets as checklists:

National (or regional) lists of species
Taxonomic lists of species
Species descriptions
Checklists made up of other checklists (the GBIF backbone taxonomy & Catalogue of Life)
Checklists with occurrences
Checklists made from occurrences

The top two are probably what most people imagine when they think of a checklist dataset.

Sequence-based data on GBIF - What you need to know before analyzing data

As I mentioned in my previous post, a lot more sequence-based data has been made available on GBIF this past year. MGnify alone published 295 datasets for a total of 13,285,109 occurrences. Even though most of these occurrences are Bacteria or Chromista, more than a million of them are animals and more than 300,000 are plants. So chances are that even if you are not interested in bacteria, you might encounter sequence-based data on GBIF.

Sequence-based data on GBIF - Sharing your data

[Edit 2021-09-16] Important: To find guidance on how to publish Sequence-based data on GBIF, please consult the following guide: Andersson AF, Bissett A, Finstad AG, Fossøy F, Grosjean M, Hope M, Jeppesen TS, Kõljalg U, Lundin D, Nilsson RN, Prager M, Svenningsen C & Schigel D (2020) Publishing DNA-derived data through biodiversity data platforms. v1.0 Copenhagen: GBIF Secretariat. [End edit 2021-09-16] GBIF is trying to make it easier to share sequence-based data.

Gridded Datasets Update

Gridded datasets are now flagged on the GBIF registry. This update builds on work from a previous blog post. Gridded datasets are broadly datasets that have low coordinate precision due to rasterized sampling or rounding. This can be a data quality issue because a user might assume an occurrence record has more precision than it actually does.

Current statistics

572 datasets are currently flagged as gridded or rasterized on the registry.

Country Centroids

Country centroids are a known data quality issue within the GBIF network. Sometimes data publishers do not know the exact lat-long location of a record and enter the lat-long center of the country instead. This is a data issue because users might be unaware that an observation is pinned to a country center and assume it is a precise location. Below I plot the top country centroids found on GBIF, matched to within at least 1 km.
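A sketch of how such a check could work in Python, flagging records that fall within 1 km of a known centroid (the centroid coordinate used in the test is a made-up placeholder, not a real country center):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/long points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))  # mean Earth radius ~6371 km

def near_centroid(lat, lon, centroid, threshold_km=1.0):
    # Flag a record whose coordinates sit within threshold_km of a
    # country's centroid coordinate.
    return haversine_km(lat, lon, *centroid) <= threshold_km
```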

Hunger mapping

Where are we missing biodiversity data? A hunger map is a map of missing biodiversity data (a biodiversity data gap). The main challenge with hunger mapping is proving that a species does not exist but should exist in a region. Hunger maps are important because they could be used to prioritize funding and digitization efforts. Currently, GBIF has no way of telling what species are missing from where. In this blog post I review some potential ways GBIF could make global biodiversity hunger maps.

Will citizen science take over?

Citizen science is scientific research conducted, in whole or in part, by amateur (or non-professional) scientists. Biodiversity observations by citizen scientists have become significant in the last 10 years thanks to projects like:

eBird
iNaturalist
Artportalen (Sweden)
Artsdatabanken (Norway)
Southern African Bird Atlas
BirdLife Australia
Dansk Ornitologisk Forening
Great Backyard Bird Count

Using shapefiles on GBIF data with R

Not all filters are born equal

It sometimes happens that users need GBIF data that fall within specific boundaries. The GBIF Portal provides a location filter where it is possible to draw a rectangle or a polygon on the map and get the occurrence records within this shape. However, these tools have limited precision, and occasionally the job calls for more complex shapes than the GBIF Portal currently supports.
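The operation underneath a shapefile filter is a point-in-polygon test per occurrence record. A self-contained Python sketch using ray casting on a planar approximation (the post's own workflow uses R and shapefiles; this only illustrates the idea):

```python
def point_in_polygon(lat, lon, polygon):
    """Ray-casting test: is (lat, lon) inside a polygon of (lat, lon) vertices?

    Counts how many polygon edges a horizontal ray from the point crosses;
    an odd count means the point is inside.
    """
    inside = False
    n = len(polygon)
    for i in range(n):
        y1, x1 = polygon[i]
        y2, x2 = polygon[(i + 1) % n]
        if (y1 > lat) != (y2 > lat):  # edge straddles the point's latitude
            x_cross = x1 + (lat - y1) * (x2 - x1) / (y2 - y1)
            if lon < x_cross:
                inside = not inside
    return inside
```

Applying this test to each downloaded record is what "clip the occurrences to my study area" boils down to, however complex the boundary.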

Sharing images, sounds and videos on GBIF

This blog post covers the publication of multimedia on GBIF. However, it is not intended to be documentation. For more information, please check the references below. NB: GBIF does not host original multimedia files and there is no way to upload pictures to the platform. For more information, please read the how-to-publish paragraphs.

Media displayed on the GBIF portal

Let’s say that you are looking for pictures of otters, or perhaps the call of a sea eagle.

Finding citizen science datasets on GBIF

Can we automatically label citizen science datasets? The short answer is yes, partially.

Why label GBIF datasets as “citizen science”?

What is citizen science?

Citizen science is scientific research conducted, in whole or in part, by amateur (or non-professional) scientists. Citizen science is sometimes described as “public participation in scientific research,” participatory monitoring, and participatory action research (Wikipedia definition).

Citizen science on GBIF

A 2016 study showed that nearly half of all occurrence records shared through the GBIF network come from datasets with significant volunteer contributions (for more information, see our “citizen science” page on gbif.

Plot almost anything using the GBIF maps api

The GBIF maps API is an under-used but powerful web service provided by GBIF. The maps API is used by the main GBIF portal to create its maps, including the big map used on GBIF.org. We can make a simple call to the API by pasting the link below into a web browser. You should end up with an image like this. This API call is composed essentially of two elements.
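A hedged sketch of composing such a tile URL in Python: the z/x/y path segments identify the map tile, and query parameters (e.g. a taxonKey) select which occurrences are drawn.

```python
from urllib.parse import urlencode

def map_tile_url(z, x, y, **params):
    # v2 maps API density tile, rendered as a PNG; query parameters
    # such as taxonKey filter which occurrences appear on the tile.
    base = f"https://api.gbif.org/v2/map/occurrence/density/{z}/{x}/{y}@1x.png"
    return base + ("?" + urlencode(sorted(params.items())) if params else "")

# Zoom 0, tile (0, 0): the whole world on one tile.
print(map_tile_url(0, 0, 0, taxonKey=212))
```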

Finding gridded datasets

EBCC Atlas of European Breeding Birds (gridded); Naturalis Biodiversity Center (NL) - Aves (not gridded)

Gridded data in GBIF

Gridded datasets are a known problem at GBIF. Many datasets have equally-spaced points in a regular pattern. These datasets are usually systematic national surveys or data taken from some atlas (“so-called rasterized collection designs”). In this blog post I will describe how I found gridded datasets in GBIF.
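One simple signal for this (a rough sketch, not necessarily the exact method used in the post) is the share of gaps between a dataset's unique coordinate values that equal the single most common gap; rasterized datasets have one dominant spacing.

```python
from collections import Counter

def gridded_fraction(lats):
    """Fraction of gaps between unique latitude values equal to the
    most common gap; values near 1.0 suggest a regular grid."""
    vals = sorted(set(round(v, 6) for v in lats))
    gaps = [round(b - a, 6) for a, b in zip(vals, vals[1:])]
    if not gaps:
        return 0.0
    most_common_count = Counter(gaps).most_common(1)[0][1]
    return most_common_count / len(gaps)
```

A survey sampled on a 0.5-degree raster scores 1.0, while irregular point data scores much lower; a real detector would look at both latitude and longitude and set a threshold.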

GBIF download trends

Link To App

Explanation of tool

This tool plots the downloads through time for species or other taxonomic groups with more than 25 downloads at GBIF. Downloads at GBIF most often occur through the web interface. In a previous post, we saw that most users are downloading data from GBIF via filtering by scientific name (aka taxonKey). Since the GBIF index currently sits at over 1 billion records (a 400+ GB csv), most users will simply filter by their taxonomic group of interest and then generate a download.