Vident-real: an intra-oral video dataset for multi-task learning

Description

We introduce Vident-real, a large dataset of 100 video sequences of intra-oral scenes from real conservative dental treatments performed at the Medical University of Gdańsk, Poland. The dataset can be used for multi-task learning methods covering the following tasks (an illustrative joint-training sketch follows the list):

  • video enhancement
  • video segmentation
  • motion estimation
  • video stabilization
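
One way to exploit these paired tasks is to train a single model with a weighted joint objective. The sketch below is an illustrative assumption in PyTorch, not part of the Vident-real release: the loss weights, tensor shapes, and the treatment of stabilization as a by-product of the estimated inter-frame motion are placeholders.

import torch.nn.functional as F

def multitask_loss(pred_enhanced, gt_enhanced,        # video enhancement target
                   pred_mask_logits, gt_mask,         # teeth segmentation pseudo-label (float mask)
                   pred_motion, gt_motion,            # inter-frame motion, e.g. 4-DoF parameters
                   w_enh=1.0, w_seg=1.0, w_mot=1.0):
    # Stabilization can reuse the motion head: the predicted transform is inverted
    # and applied to the frame, so no separate loss term is shown here.
    loss_enh = F.l1_loss(pred_enhanced, gt_enhanced)
    loss_seg = F.binary_cross_entropy_with_logits(pred_mask_logits, gt_mask)
    loss_mot = F.mse_loss(pred_motion, gt_motion)
    return w_enh * loss_enh + w_seg * loss_seg + w_mot * loss_mot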

The dataset allows for training and validating models on multiple vision-based tasks under challenging real conditions characterized by compromised visibility. The recordings were acquired with a miniature camera firmly attached to dental handpieces fitted with various dental burs and tools. The dental scenes were crowded with dental tools and artifacts and featured occlusions, appearance variations, tool-teeth interactions, bleeding, motion blur, light reflections, splashing water and other fluids, and camera fouling.

Since the sequences record real dental treatment procedures, collecting target labels from additional reference sensors in such confined spaces is impractical. Throughout the dataset, each input video frame, which is corrupted by sensor miniaturization and other common adversarial factors, is therefore paired with the following pseudo-labels (an illustrative usage sketch follows the list):

  • enhanced frame
  • segmented teeth
  • teeth-based homography (4-DoF similarity transform) between consecutive frames
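
The 4-DoF similarity pseudo-label can be applied with standard tools. The snippet below is a hedged illustration, not the official loader: the parameter names, file names, and exact parameterization of the transform are assumptions.

import cv2
import numpy as np

def similarity_matrix(scale, theta, tx, ty):
    # 4-DoF similarity: isotropic scale, in-plane rotation, 2-D translation.
    c, s = scale * np.cos(theta), scale * np.sin(theta)
    return np.array([[c, -s, tx],
                     [s,  c, ty],
                     [0., 0., 1.]], dtype=np.float64)

prev_frame = cv2.imread("frame_0000.jpg")              # hypothetical file name
H = similarity_matrix(scale=1.0, theta=0.01, tx=2.5, ty=-1.0)
h, w = prev_frame.shape[:2]
aligned = cv2.warpPerspective(prev_frame, H, (w, h))   # map the previous frame onto the next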

Vident-real contains 100 real intra-oral videos comprising 70K frames recorded during conservative treatment procedures. All sequences were captured in 10-bit RAW format through a wide-angle lens at a sensor resolution of 800x800 pixels and a frame rate of 55 to 60 Hz. The RAW images were debayered and stored in JPEG format. The sensor's gain and integration time were manually adjusted to each patient's intra-oral cavity to account for on-site low-light conditions, thereby improving visibility and color rendition in the dynamically changing environment.
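
For reference, a single 10-bit RAW frame could be debayered along these lines. This is only a sketch under stated assumptions: the Bayer pattern (RGGB here), the raw container layout (one uint16 value per pixel), and the file names are not specified by the dataset description and are illustrative only.

import cv2
import numpy as np

WIDTH = HEIGHT = 800
raw = np.fromfile("frame_0000.raw", dtype=np.uint16).reshape(HEIGHT, WIDTH)  # hypothetical file
raw8 = (raw >> 2).astype(np.uint8)                 # scale 10-bit values down to 8-bit
rgb = cv2.cvtColor(raw8, cv2.COLOR_BayerRG2BGR)    # demosaic; the Bayer pattern is an assumption
cv2.imwrite("frame_0000.jpg", rgb)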

A miniaturized camera affixed to a dental handpiece could allow dentists to continuously monitor the progress of conservative dental interventions. Camera-augmented dental interventions hold the potential to facilitate dental training and education, optimize workflow ergonomics, and improve patient outcomes. However, the miniaturization of sensors and optics required for safe and effective navigation in the mouth introduces artifacts into the video streams, and the inevitable camera shake causes eye fatigue. The unique challenges posed by intra-oral conditions, such as noise, blur, texture paucity, light variations, shadows, reflections, and fluid dynamics, make continuous macro-visualization of complex dental scenes on customized displays difficult. Enhancing videos acquired in these challenging conditions is a natural step towards advancing the field of Video-Assisted Dentistry (VAD), enabling a clearer view of the teeth, fractures, gums, blood, cavities, fillings, dentine, pulp, and dental tools.

Dataset file

Vident-real.zip
32.1 GB, S3 ETag: b498b75750282520b99364673f62b3dc-65
The file hash (S3 ETag) is calculated with the formula
hexmd5(md5(part1)+md5(part2)+...)-{parts_count}, where each part of the file is 512 MB in size.

Example script for calculation:
https://github.com/antespi/s3md5
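
The ETag can also be reproduced locally with a few lines of Python. This is a minimal sketch of the formula above, assuming a part size of 512 MiB (512 × 1024 × 1024 bytes): hash each part, concatenate the binary digests, hash the concatenation, and append the part count.

import hashlib

def s3_multipart_etag(path, part_size=512 * 1024 * 1024):
    # Collect the binary MD5 digest of each 512 MiB part.
    digests = []
    with open(path, "rb") as f:
        while True:
            chunk = f.read(part_size)
            if not chunk:
                break
            digests.append(hashlib.md5(chunk).digest())
    # Hash the concatenated digests and append the number of parts.
    return hashlib.md5(b"".join(digests)).hexdigest() + "-" + str(len(digests))

# s3_multipart_etag("Vident-real.zip") should reproduce the ETag listed above.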

File details

License:
Creative Commons Attribution-NonCommercial 4.0 (CC BY-NC 4.0), non-commercial use only
File embargo:
2024-09-30

Details

Year of publication:
2024
Verification date:
2024-07-01
Dataset language:
English
Fields of science:
  • information and communication technology (Engineering and Technology)
  • automation, electronics, electrical engineering and space technologies (Engineering and Technology)
  • medical sciences (Medical and Health Sciences)
  • biomedical engineering (Engineering and Technology)
DOI:
10.34808/vjnh-9c35
Ethics approval:
Approval no. KB-14/22 by Bioethics Committee at the Regional Medical Chamber in Gdańsk
Verified by:
Gdańsk University of Technology
