
Image Segmentation

with Meta AI's Segment Anything Model (SAM)

The image segmentation feature integrates Meta AI's Segment Anything Model (SAM) to partition image pixels into distinct, meaningful segments. This state-of-the-art, Vision Transformer-based promptable segmentation model generalizes zero-shot to new images, allowing precise isolation of image components.

GSense supports the ViT-H and ViT-B SAM backbones and uses the Hugging Face Transformers library. Manually downloading model checkpoints for the default ViT-H and ViT-B weights is not required.
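As an illustration of how a SAM backbone is typically loaded and prompted through the Hugging Face Transformers API, here is a minimal sketch. It is not GSense's internal code: the model ID `facebook/sam-vit-base`, the image path, and the point prompt are assumptions for the example.

```python
import torch
from PIL import Image
from transformers import SamModel, SamProcessor

device = "cuda" if torch.cuda.is_available() else "cpu"

# ViT-B SAM backbone; weights are fetched from the Hugging Face Hub on first use.
model = SamModel.from_pretrained("facebook/sam-vit-base").to(device)
processor = SamProcessor.from_pretrained("facebook/sam-vit-base")

# Hypothetical input image and a single (x, y) point prompt on the object of interest.
image = Image.open("example_image.png").convert("RGB")
input_points = [[[450, 600]]]  # one image -> one list of prompt points

inputs = processor(image, input_points=input_points, return_tensors="pt").to(device)
with torch.no_grad():
    outputs = model(**inputs)

# Resize the predicted masks back to the original image resolution.
masks = processor.image_processor.post_process_masks(
    outputs.pred_masks.cpu(),
    inputs["original_sizes"].cpu(),
    inputs["reshaped_input_sizes"].cpu(),
)
scores = outputs.iou_scores  # one IoU estimate per predicted mask
```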

Segmentation in GSense requires initializing a segmentation model. You can choose between a default checkpoint or a custom checkpoint.
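For readers scripting with the same underlying library outside the GSense interface, the two options roughly correspond to where `from_pretrained` resolves its weights. This is a hedged sketch, and the local directory path below is hypothetical:

```python
from transformers import SamModel, SamProcessor

# Default checkpoint: a published SAM backbone, fetched from the Hugging Face Hub.
model = SamModel.from_pretrained("facebook/sam-vit-huge")
processor = SamProcessor.from_pretrained("facebook/sam-vit-huge")

# Custom checkpoint: a local directory of fine-tuned SAM weights previously
# saved with save_pretrained() (hypothetical path, not a GSense default).
custom_model = SamModel.from_pretrained("/path/to/finetuned-sam")
custom_processor = SamProcessor.from_pretrained("/path/to/finetuned-sam")
```

The following pages describe how default and custom models are selected within GSense itself.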