A collaborative project combining millions of images and artificial intelligence
to develop a tool for the automatic recognition of French fauna in camera-trap images

Camera traps have revolutionized the way ecologists monitor biodiversity and population abundances. Their full potential is only realized, however, when the hundreds of thousands of images collected can be quickly classified, ideally with minimal human intervention. Machine learning approaches, and in particular deep learning models, have recently enabled extraordinary progress towards this end. Yet no model is currently available that can automatically identify French fauna in camera-trap images.

The DeepFaune initiative aims to fill this gap.

DeepFaune is a large-scale collaboration between dozens of partners involved in the research, conservation and management of wildlife in France. Its aims are to (1) build the first and largest dataset of annotated camera-trap images collected in France, and (2) train accurate species classification models using deep learning approaches.

We also provide the community with free, user-friendly and cross-platform software for using the models we develop, so that anyone can run our best models on a personal computer to classify their own camera-trap images or videos.

Data and Model

A large training dataset

Our successful collaboration with partners has allowed us to gather over a million annotated pictures, likely representing the largest database of this kind in France.

A two-step approach

Our approach is based on two models: 1) a detection model identifies whether an animal, human or vehicle is present in the image, as ‘empty’ images are often collected due to false triggers; 2) a classification model predicts which class (i.e. species or higher taxonomic group) the detected animal belongs to.
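For readers curious about how such a pipeline fits together, here is a minimal Python sketch of the two-step logic. The `detector` and `classifier` callables, their return formats and the 0.5 threshold are assumptions made purely for illustration; they do not describe the actual DeepFaune models or software internals.

```python
# Minimal sketch of a two-step "detect then classify" pipeline.
# `detector` and `classifier` are placeholders for models loaded with your
# own framework; their return formats are assumptions for this example.

from PIL import Image

DETECTION_THRESHOLD = 0.5  # hypothetical cut-off for "something is present"


def predict_image(image_path, detector, classifier):
    """Return a predicted class ('empty' or an animal/human/vehicle class)."""
    image = Image.open(image_path).convert("RGB")

    # Step 1: detection -- discard false triggers that produce 'empty' images.
    detections = detector(image)  # assumed: list of ((left, top, right, bottom), score)
    detections = [d for d in detections if d[1] >= DETECTION_THRESHOLD]
    if not detections:
        return "empty"

    # Step 2: classification -- crop the most confident detection and
    # predict its class (species or higher taxonomic group).
    box, _ = max(detections, key=lambda d: d[1])
    crop = image.crop(box)
    label, confidence = classifier(crop)  # assumed: (class_name, score)
    return label
```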

Species classified and model performance

Currently, the model uses 28 classes. Each submitted image or video, taken by day or by night, is predicted to show one of the following:

  • nothing: class ‘empty’
  • a human presence: classes ‘human’, ‘vehicle’
  • an ungulate: classes ‘chamois’, ‘ibex’, ‘red deer’, ‘roe deer’, ‘wild boar’
  • a different large mammal: classes ‘bear’, ‘lynx’, ‘wolf’
  • a smaller mammal: classes ‘badger’, ‘lagomorph’, ‘genet’, ‘hedgehog’, ‘marmot’, ‘mustelid’, ‘nutria’, ‘red fox’, ‘squirrel’
  • a domestic animal: classes ‘cat’, ‘cow’, ‘dog’, ‘equid’, ‘goat’, ‘sheep’
  • something else: classes ‘bird’, ‘micromammal’

Images for which the confidence score is below a user-defined threshold are considered ‘undefined’, and should be inspected visually to be classified.
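As a simple illustration of that thresholding rule (and not of the software's actual code), it can be written as below; the 0.8 value is an arbitrary example of a user-defined threshold.

```python
CONFIDENCE_THRESHOLD = 0.8  # user-defined; 0.8 is an arbitrary example value


def final_label(predicted_class: str, confidence: float) -> str:
    """Keep a prediction only if its confidence reaches the threshold."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return predicted_class
    return "undefined"  # left for the user to classify visually


# A 'red fox' prediction at 0.65 confidence falls back to 'undefined'.
print(final_label("red fox", 0.65))   # -> undefined
print(final_label("roe deer", 0.93))  # -> roe deer
```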

On validation images, 96% of predictions are correct! This excellent performance, estimated on a subset of partners’ images that were not used to train the model, might be lower on images taken in different contexts. In any case, we trust that the model’s performance will remain high enough for anyone collecting thousands of pictures to save a huge amount of time.

Try the software on your own images - and let us know how it performs!
The quality of the predictions improves regularly as we work on the model and add images to the training database. If you have classified camera-trap pictures or videos, you can help us improve the model. Contact us to contribute!

Want to learn more about our approach? A preprint is available:

Rigoudy et al. 2022. The DeepFaune initiative: a collaborative effort towards the automatic identification of the French fauna in camera-trap images. bioRxiv https://doi.org/10.1101/2022.03.15.484324

Software

We provide user-friendly software for running the DeepFaune model on camera-trap pictures or videos. The software returns a spreadsheet with the classifications, and can also copy or move the pictures or videos into separate folders according to their classification.
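As an example of what can be done with such an export, the short Python sketch below copies each file into a sub-folder named after its predicted class. The file name 'deepfaune_results.csv' and the column names 'filename' and 'prediction' are assumptions made for the illustration and may not match the software's actual output format.

```python
import csv
import shutil
from pathlib import Path


def sort_by_prediction(csv_path: str, output_dir: str) -> None:
    """Copy each picture/video into a sub-folder named after its predicted class."""
    out = Path(output_dir)
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            target = out / row["prediction"]       # e.g. sorted/red fox/
            target.mkdir(parents=True, exist_ok=True)
            shutil.copy2(row["filename"], target)  # use shutil.move to move instead


# Hypothetical usage, assuming a CSV export named 'deepfaune_results.csv':
# sort_by_prediction("deepfaune_results.csv", "sorted")
```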

This software has been designed to run on a standard computer and does not require sending pictures or videos to a remote server.

This software is free to use for non-commercial purposes and is distributed under the CeCILL licence.

– Download and use –

  • Windows:
  1. Download the compressed folder here and uncompress it wherever you want. This folder can be deleted once the installation has been completed (step 2).
  2. Install by double-clicking on the deepfaune_installer.exe file.
  3. Launch the DeepFaune software like any other Windows software.
  • Linux/Mac:
  1. Download the compressed folder here.
  2. Uncompress the folder in an easily accessible place, as you will need to access it each time you launch the software.
  3. Follow the instructions provided in the README.md file.
  • All systems: You will find the user’s manual here (French; English version coming soon…). For help or suggestions, contact us.

Partners

The DeepFaune initiative brings together more than 40 partners, all involved in research, conservation or management of biodiversity: regional or national parks, hunting federations, naturalist associations, research groups and even individuals.
These partners actively take part in the project by contributing camera-trap images or videos to our growing database and by helping define the project’s aims and user needs.

Map of the geographical locations of our partners

The list of partners who have joined DeepFaune is available here.
Want to join the DeepFaune project? Contact us!

Our team

The project is led by Simon Chamaillé-Jammes and Vincent Miele, both employed by the Institut National d'Ecologie et d'Environnement (INEE) of the CNRS and respectively affiliated with the Centre d'Ecologie Fonctionnelle et Evolutive (CEFE, Montpellier) and the Laboratoire de Biométrie et Biologie Evolutive (LBBE, Lyon).
The DeepFaune team is also made up of Gaspard Dussert (LBBE), Noa Rigoudy (CEFE) and Bruno Spataro (LBBE).

Simon Chamaillé-Jammes is a research director at the CNRS and specializes in population dynamics and behavioral ecology.
Vincent Miele is a research engineer at the CNRS and specializes in developing deep-learning-based computer vision techniques in ecology.
Gaspard Dussert is a PhD student in machine learning who further develops the methodological approaches underlying the DeepFaune models.
Noa Rigoudy was the first student to work on DeepFaune. She’s now working on a PhD on the effects of anthropisation on animal behavior but remains involved in the initiative.
Bruno Spataro is a research engineer at the CNRS and manages data storage and computation services for the PRABI-LBBE data center.

The team would like to thank Elias Chetouane, Julien Rabault, Antoine Régnier and Pierre Cornette for their contributions to model and software development.

This project is funded by the CNRS.