{"name":"napari-sam3-assistant","display_name":"SAM3 Assistant","visibility":"public","icon":"","categories":["Utilities"],"schema_version":"0.2.1","on_activate":null,"on_deactivate":null,"contributions":{"commands":[{"id":"napari-sam3-assistant.make_widget","title":"SAM3 Assistant","python_name":"napari_sam3_assistant.widgets.main_widget:MainWidget","short_title":null,"category":null,"icon":null,"enablement":null},{"id":"napari-sam3-assistant.make_mask_operations_widget","title":"SAM3 Mask Operations","python_name":"napari_sam3_assistant.widgets.mask_operations_widget:MaskOperationsWidget","short_title":null,"category":null,"icon":null,"enablement":null}],"readers":null,"writers":null,"widgets":[{"command":"napari-sam3-assistant.make_widget","display_name":"SAM3 Assistant","autogenerate":false},{"command":"napari-sam3-assistant.make_mask_operations_widget","display_name":"SAM3 Mask Operations","autogenerate":false}],"sample_data":null,"themes":null,"menus":{},"submenus":null,"keybindings":null,"configuration":[]},"package_metadata":{"metadata_version":"2.4","name":"napari-sam3-assistant","version":"4.2.1","dynamic":["license-file"],"platform":null,"supported_platform":null,"summary":"A napari plugin for Segment Anything Model 3 (SAM3) image segmentation with Simple and Advanced workflows, text, points, boxes, exemplars, large-image ROI inference, mask operations, and 3D/video-like propagation","description":"\n# napari-sam3-assistant\n![napari-sam3-assistant UI](docs/ui.png)\n\n\n`napari-sam3-assistant` is a napari plugin for Segment Anything Model 3 (SAM3) image segmentation. 
Version 4 adds a two-mode interface: `Simple` for guided image segmentation and `Advanced` for the original full Step 1 to Step 6 workflow.\n\nThe plugin focuses on task-based segmentation workflows:\n\n- 2D segmentation with text, box, point, and mask-style prompts\n- 3D stack / video-like propagation from prompts on a selected slice or frame\n- exemplar segmentation from Shapes ROI boxes\n- text-based concept segmentation\n- large OME-Zarr and TIFF segmentation through local ROI inference\n- Live Points with positive and negative prompts\n- downstream mask cleanup, merge, and export operations\n\n## What's New in 4.2.1: Optional completion chime for long runs\n\nSAM3 Assistant can play a short, soft completion chime when a long-running task finishes.\nThis is useful when preview or 3D/video propagation takes more than one minute and the user is working away from the screen.\nThe chime is optional and can be turned on or off from the plugin UI.\n\n## What's New in 4.2.0\n\nVersion 4.2.0 adds experimental CPU-only support for SAM3.0 2D image workflows when using a CPU-safe `sam3` backend such as `sam3-cpu`.\n\n- CPU-only setup is documented in [docs/cpu_only.md](docs/cpu_only.md).\n- Device mode is now environment-driven and shown as an indicator; normal users no longer need to choose CPU or GPU manually.\n- Advanced manual device override is available only for backend testing with `NAPARI_SAM3_ENABLE_DEVICE_OVERRIDE=1`.\n- SAM3.0 2D CPU workflows are enabled for points, boxes, text, exemplar, and Live Points when the installed `sam3` backend supports CPU model construction.\n- SAM3.1, 3D/video propagation, and SAM3 video-predictor workflows remain CUDA/GPU-only in this plugin.\n- Model-folder setup now passes the detected BPE tokenizer path to SAM3 and can create `bpe_simple_vocab_16e6.txt.gz` automatically from `merges.txt`.\n- The plugin reports a clear upstream CPU-support limitation if a non-CPU-safe SAM3 backend still allocates CUDA tensors during CPU 
image model construction.\n\n\n\n## What's New in 4.0.0\nVersion 4.0.0 was a workflow release focused on the new Simple mode and a cleaner Advanced mode.\n\n- New `Simple` mode for common imaging tasks with a compact one-column layout.\n- `Advanced` mode keeps the full manual UI for model setup, batch work, large-image ROI settings, result tables, mask operations, and detailed logs.\n- The mode selector stays visible, so users can move between Simple and Advanced without restarting napari.\n- Simple mode uses the same SAM3 execution path and writes the same napari preview layers as Advanced mode.\n- Simple mode keeps common tasks short: choose the image/task, add or enter the prompt, then run preview.\n- Simple mode includes `Mask Ops` in the Run area to open the standalone mask cleanup widget for preview labels.\n- Simple mode uses SAM3.0 for 2D image tasks so Advanced SAM3.1 video-model settings do not break Simple image segmentation.\n- Device selection is explicit. `GPU / CUDA` is recommended for full SAM3 functionality; `CPU` is experimental for SAM3.0 2D image workflows and requires a CPU-safe SAM3 backend.\n- Live Points are still available with `T` for next point mode and `Shift+T` to flip selected or latest points.\n\nSAM 3 is not bundled with this plugin. Install the SAM 3 backend and download the SAM 3 model files separately from Meta's Hugging Face repository.\n\n## Status\n\nThis project is under active development. 
The current widget supports local SAM 3 model loading, napari prompt collection, Simple and Advanced UI modes, large-image ROI execution, downstream mask operations, background execution, and writing results back to napari layers.\n\n## Changelog\n\nRelease notes and bug-fix history are maintained in [CHANGELOG.md](CHANGELOG.md).\n\n## Requirements\n\n- Python `>=3.11`\n- napari `>=0.5`\n- SAM 3 Python package importable as `sam3`\n- CUDA-enabled PyTorch and torchvision installed for your platform for normal use\n- A local SAM 3 checkpoint directory containing:\n  - `config.json`\n  - `processor_config.json`\n  - one weight file such as `sam3.pt`, `model.safetensors`, or `sam3.1_multiplex.pt`\n\nCPU-only use is possible for SAM3.0 2D image workflows with a CPU-safe SAM3 backend. See [CPU-only SAM3.0 setup](docs/cpu_only.md).\n\n## Setup\n\n### Windows\n\n1. Download and install **Miniforge**:  \n   https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-Windows-x86_64.exe\n\n2. After Miniforge is installed, open either:\n   - **Miniforge Prompt**\n   - **PowerShell**\n\n3. Create and activate the environment\n```Bash\nconda create -n napari-sam3 python=3.11 -y\nconda activate napari-sam3\n```\n4. Install base Python tools and napari\n```Bash\npython -m pip install --upgrade pip wheel\npython -m pip install \"setuptools<82\"\npython -m pip install \"napari[all]\"\n```\n5. Install CUDA-enabled PyTorch\n\n```Bash\npython -m pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu128\n```\n6. Install SAM3\n\nChoose one:\n\nOption A. Install from a local clone\n```bash\ngit clone https://github.com/facebookresearch/sam3.git\ncd sam3\npython -m pip install --no-cache-dir -e .\n```\nOption B. Install directly from GitHub\n```Bash\npython -m pip install --no-cache-dir git+https://github.com/facebookresearch/sam3\n```\n7. 
Install extra dependencies\n```Bash\npython -m pip install einops triton-windows pycocotools\n```\nIf `SAM3.1` multiplex propagation later fails on Windows with errors such as\n`No available kernel. Aborting execution!`, see the replacement-file workaround in\n[`windows_sam31_workaround/README.md`](windows_sam31_workaround/README.md).\n\n8. Install napari-sam3-assistant\n```Bash\npython -m pip install napari-sam3-assistant\n```\nIf you are installing from a local repository checkout instead:\n```Bash\npython -m pip install -e .\n```\n9. Launch napari\n```\nnapari\n```\n\n### Linux ARM64 (AArch64)\n1. Download Miniforge\nhttps://conda-forge.org/download/\n\nInstall Miniforge first.\n```Bash\nchmod +x Miniforge3-Linux-aarch64.sh\n./Miniforge3-Linux-aarch64.sh -b -p \"$HOME/miniforge3\"\nsource \"$HOME/miniforge3/bin/activate\"\n```\n2. Create and activate the environment\n```Bash\nconda create -n napari-sam3 python=3.11 -y\nconda activate napari-sam3\n```\n3. Install base Python tools and napari\n```Bash\npython -m pip install --upgrade pip wheel\npython -m pip install \"setuptools<82\" \"numpy>=1.26,<2\"\npython -m pip install \"napari[bermuda, pyqt6, optional-numba, optional-base]\"\n```\n\n4. Install CUDA-enabled PyTorch\n\n```Bash\npython -m pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu130\n```\n5. Install SAM3\n\n```Bash\ngit clone https://github.com/facebookresearch/sam3.git\ncd sam3\npython -m pip install --no-cache-dir -e .\n```\n6. Install extra dependencies\n```Bash\npython -m pip install einops triton pycocotools\n```\n7. Install napari-sam3-assistant\n```Bash\npython -m pip install napari-sam3-assistant\n```\nIf you are installing from a local repository checkout instead:\n```Bash\npython -m pip install -e .\n```\n8. 
Launch napari\n```Bash\nnapari\n```\n\n### Verify the installation\n\nInside the activated environment, confirm that SAM3 imports correctly:\n```Bash\npython -c \"import sam3; print('sam3 import OK')\"\n```\nYou can also verify napari:\n```Bash\npython -c \"import napari; print(napari.__version__)\"\n```\n\n## Download SAM 3 model files\n\nDownload the model files from:\n\n- `https://huggingface.co/facebook/sam3`\n- `https://huggingface.co/facebook/sam3.1`\n\nThese repositories are gated, so you must request or accept access first.\n\nAfter approval, download the files manually from the **Files and versions** tab.\n\nA reference screenshot of the file list is shown below:\n\n![SAM 3 model files screenshot](docs/sam3_model_files.png)\n\n## Model folder\n\nThe model files can be stored in any folder you want.\n\nExamples:\n\n```text\nD:\\models\\sam3\nD:\\models\\sam3_1\n```\n\nThe plugin only needs the correct folder path.\n\nExample SAM3.0 folder:\n\n```text\nD:\\models\\sam3\\\n  config.json\n  processor_config.json\n  sam3.pt\n```\n\n`model.safetensors` is also supported as a SAM3.0 weight file.\n\nExample SAM3.1 folder:\n\n```text\nD:\\models\\sam3_1\\\n  config.json\n  processor_config.json\n  sam3.1_multiplex.pt\n```\n\nCurrent model support:\n\n- SAM3.0 weights: 2D image tasks and 3D/video propagation\n- SAM3.1 `sam3.1_multiplex.pt`: 3D/video propagation through the SAM3.1 multiplex video predictor\n- SAM3.1 is not currently routed through the plugin's 2D image model loader\n\n## Device rule\n\n- Use a CUDA-enabled PyTorch environment for normal use.\n- Select **GPU / CUDA** for SAM3.0, SAM3.1, and 3D/video workflows.\n- **CPU** is experimental for SAM3.0 2D image workflows with a CPU-safe SAM3 backend. Tested CPU workflows include points, boxes, exemplar, text, and Live Points. 
See [CPU-only SAM3.0 setup](docs/cpu_only.md).\n\n## For developers\n\nIf you want editable installs and `git pull`, use:\n\n```powershell\ngit clone https://github.com/facebookresearch/sam3.git\ncd sam3\npython -m pip install --no-cache-dir -e .\n\ngit clone https://github.com/wulinteousa2-hash/napari-sam3-assistant.git\ncd napari-sam3-assistant\npython -m pip install -e .\n```\n\n## Basic Workflow\n\n### Simple Mode\n\nUse `Simple` when you want a guided image-segmentation workflow with fewer controls on screen.\n\nThe main flow is:\n\n1. Select an image and choose the task.\n2. Add the prompt.\n3. Click `Run Preview`.\n\nSimple mode is intended for common imaging tasks:\n\n- `2D`: points, boxes, labels-mask, or text prompts on the selected image plane. CPU mode requires a CPU-safe SAM3 backend.\n- `Text`: enter a short imaging concept such as `cell`, `nucleus`, or `myelin`\n- `Refine`: use Live Points to add positive or negative point corrections\n- `Exemplar`: draw one or more boxes around example objects\n- `3D/Video`: start propagation from a prompt on the selected frame or slice when the current data and model support it\n\nSimple mode keeps model setup small:\n\n- model folder\n- `GPU / CUDA` or experimental `CPU`\n- SAM3.0 for Simple image tasks\n\nUse `Advanced` when you need SAM3.1 model selection, batch processing, large-image ROI controls, detailed result tables, or CSV export.\n\nAfter a Simple preview creates labels, click `Mask Ops` in the Run area to open `SAM3 Mask Operations` on the cleanup tab.\n\n### Advanced Mode\n\n`Advanced` is the original full workflow. It keeps the Step 1 to Step 6 layout for users who want manual control.\n\n1. Open an image in napari.\n2. Open `Plugins > SAM3 Assistant`.\n3. Choose `Advanced`.\n4. Select the image in `Target image`.\n5. Select a task.\n6. Create a prompt layer if the task needs one.\n7. Click `Run Preview`.\n8. Inspect `SAM3 preview labels`, `SAM3 preview masks`, or `SAM3 preview boxes`.\n9. 
Use `Save & Clean` in `Step 4. Run and Save` for quick mask acquisition, or open `SAM3 Mask Operations` if you want advanced cleanup, merge, or export controls.\n\nUse `Clear Preview` to remove generated preview layers without deleting prompts or saved labels.\n\n### Run and Save\n\n`Step 4. Run and Save` includes a quick save path for users who want to acquire a mask and immediately continue segmentation.\n\nAfter a preview is created, choose:\n\n- output folder\n- output format\n- filename\n\nThen click `Save & Clean`.\n\n`Save & Clean`:\n\n- saves the current preview mask as a new napari Labels layer\n- writes the mask file to the selected output folder\n- removes temporary preview layers\n- releases Python / CUDA temporary memory where available\n- unloads the SAM3 model so memory returns closer to baseline\n- leaves an `Open Folder` shortcut for the saved mask location\n\nIf `Load model when running` is checked, the next run reloads the model automatically.\n\nSupported quick-save formats:\n\n| Preview type | TIFF | NumPy `.npy` | PNG |\n| --- | --- | --- | --- |\n| 2D mask | Yes | Yes | Yes |\n| 3D/video propagated mask | Yes | Yes | No |\n\nPNG is only offered for 2D masks. For 3D/video outputs, use TIFF stack or NumPy `.npy`.\n\n### Large-Image Local Inference\n\nLarge-image mode is optional and off by default. When it is off, the plugin keeps the existing full-image inference path.\n\nUse this mode for OME-Zarr, large TIFF, and similar large images where sending the full selected plane to SAM3 is too expensive.\n\nWorkflow:\n\n1. Set up the normal task and prompt type.\n2. Enable `Enable large-image local inference` in Advanced task setup.\n3. Choose a local ROI size:\n\n```text\n512 x 512\n1024 x 1024\n2048 x 2048\n4096 x 4096\n8192 x 8192\n```\n\n4. Add a point or box prompt.\n5. 
Click `Run Preview`, or add points in Live Points mode.\n\nROI behavior:\n\n- Point prompts use the latest point as the ROI anchor.\n- Box prompts use the box center and keep the box inside the local inference window when possible.\n- Live Points use the latest point as the ROI anchor.\n- Text-only prompts keep the full-image path in this first pass unless a point or box anchor is also available.\n- If a new point or box stays inside the current ROI, the same ROI is reused.\n- If a new point or box falls outside the current ROI, the ROI is rebuilt around the new prompt.\n\nThe active ROI is shown as:\n\n```text\nSAM3 active ROI\n```\n\nSAM3 receives only the local ROI image data. Returned labels and boxes are written back into global image coordinates in the normal preview layers.\n\nStatus messages report:\n\n```text\nLarge-image mode OFF: full-image inference.\nLarge-image mode ON: local ROI inference (WIDTH x HEIGHT).\nActive ROI bounds: y=Y0:Y1, x=X0:X1.\n```\n\nFor large ROI work, `Save & Clean` is the recommended handoff after a successful preview. It saves the mask, clears temporary preview memory, unloads the model, and lets users continue with another ROI without manually visiting Mask Operations.\n\n### Batch Multiple 2D Images\n\nUse `Batch all image layers` when several open 2D images should receive the same prompt setup.\n\nWorkflow:\n\n1. Open multiple images in napari.\n2. Configure a 2D task such as text, box, exemplar, or labels-mask segmentation.\n3. Add the prompt once.\n4. Enable `Batch all image layers` in `Advanced`.\n5. Click `Run Preview`.\n\nEach source image gets its own output layers:\n\n```text\nSAM3 preview labels [image name]\nSAM3 preview masks [image name]\nSAM3 preview boxes [image name]\n```\n\nBatch preview layers can be reviewed directly or used in Mask Operations.\n\nBatch mode is intended for 2D image tasks. 
It is disabled for Live Points and 3D/video propagation because those workflows depend on one active image/session.\n\n### Multi-Text Batch Mode\n\nUse `Batch text prompts` when you want each text concept to run independently instead of writing one combined phrase such as `cat and dog`.\n\nWorkflow:\n\n1. Set `Task` to `Text segmentation`.\n2. Enter one concept per line in `Batch text prompts`:\n\n```text\ncat\ndog\nperson\n```\n\n3. Leave `Batch all image layers` off to run all prompts on the selected image only.\n4. Enable `Batch all image layers` to run every prompt on every open image.\n5. Click `Run Preview`.\n\nOutputs include both image and prompt:\n\n```text\nSAM3 preview labels [Image 1 - cat]\nSAM3 preview labels [Image 1 - dog]\nSAM3 preview labels [Image 2 - cat]\nSAM3 preview labels [Image 2 - dog]\n```\n\nThe Results table includes a `Prompt` column. Object IDs are scoped to each image-prompt result, so `Object ID 1` for `Image 1 - cat` is separate from `Object ID 1` for `Image 1 - dog`.\n\n## Tasks\n\n### Text Segmentation\n\nUse text to segment all matching instances of a concept.\n\nWorkflow:\n\n1. Set `Task` to `Text segmentation`.\n2. Leave `Prompt type` as `Text only`.\n3. Enter a short phrase, for example:\n\n```text\ncell\nnucleus\nmyelin\nmyelin sheath\n```\n\n4. Keep `Detection threshold` near the default `0.35`, or lower it if the result is empty.\n5. Press `Enter` in the text prompt field or click `Run Preview`.\n\nNo prompt layer is needed for text segmentation. `Create Prompt Layer` is not required.\n\nText prompts usually work better as short noun phrases than instructions. Prefer `myelin sheath` over `segment all the myelin rings`. The plugin strips common instruction prefixes before sending the prompt to SAM3, but microscopy-specific language can still be difficult for the model.\n\nIf the result says `objects=0`, SAM3 ran but did not return masks above threshold. 
Try a shorter noun phrase, lower `Detection threshold`, or use a box/exemplar prompt for structures that are visually clear but not well recognized by text.\n\n### 2D Segmentation With Boxes\n\nUse boxes to segment the target region inside each drawn box.\n\nWorkflow:\n\n1. Set `Task` to `2D segmentation`.\n2. Set `Prompt type` to `Box`.\n3. Click `Create Prompt Layer`.\n4. Draw one or more rectangles in the `SAM3 boxes` Shapes layer.\n5. Click `Run Preview`.\n\nEach 2D box preview writes segmentation only inside the corresponding drawn box. This is different from `Exemplar segmentation`, which uses boxed examples to find and segment similar objects outside the original boxes.\n\n### Exemplar Segmentation\n\nUse example ROIs to segment similar objects.\n\nWorkflow:\n\n1. Set `Task` to `Exemplar segmentation`.\n2. Set `Prompt type` to `Box`.\n3. Click `Create Prompt Layer`.\n4. Draw boxes around one or more example objects.\n5. Click `Run Preview`.\n\nThe local SAM 3 image API exposes visual exemplars through geometric box prompts. The plugin stores ROI metadata, but inference currently passes exemplar ROIs as SAM 3 visual box prompts.\n\n`Exemplar segmentation` is a 2D/image task in this plugin. For exemplar-like 3D propagation, use `3D/video propagation` with a box prompt on the selected frame or slice.\n\n### Live Points With Positive and Negative Points\n\nUse points to correct a result.\n\nWorkflow:\n\n1. Set `Task` to `Live Points`.\n2. Set `Prompt type` to `Points (positive/negative)`.\n3. Click `Create Prompt Layer`.\n4. Choose `Positive` and add points on regions to include.\n5. Choose `Negative` and add points on regions to exclude.\n6. Click `Run Preview`.\n\nThis is useful after a text, box, or exemplar preview is close but not correct.\n\n### Labels Mask Prompt\n\nUse a napari Labels layer as a mask-style prompt.\n\nWorkflow:\n\n1. Set a task that supports mask prompts.\n2. Set `Prompt type` to `Labels mask`.\n3. Click `Create Prompt Layer`.\n4. 
Paint non-zero pixels in `SAM3 mask prompt`.\n5. Click `Run Preview`.\n\nLabels-mask prompts are currently supported for 2D/image workflows, not `3D/video propagation`. For 3D/video propagation, use text, box, or point prompts on the selected frame.\n\n### 3D Stack / Video Propagation\n\nTreat a stack as video-like data and propagate a prompt through frames or slices.\n\nWorkflow:\n\n1. Open a stack in napari.\n2. Set `Task` to `3D/video propagation`.\n3. Select the target frame or slice in napari.\n4. Create a prompt layer and add prompts on that frame.\n5. Choose propagation direction:\n   - `both`\n   - `forward`\n   - `backward`\n6. Click `Start 3D Propagation`.\n\nIn `3D/video propagation` mode, the primary run button changes from `Run Preview` to `Start 3D Propagation`. This starts a new SAM3 video session, adds the current frame prompt, and propagates through the stack. `Propagate Existing Session` is an advanced action that reuses the current SAM3 video session without adding a new prompt; it is enabled only after a successful 3D propagation run.\n\n3D/video prompt limits:\n\n- SAM3.0 video propagation supports one initial visual box on the prompted frame.\n- SAM3.1 video multiplex can accept multiple box prompts.\n- Point prompts target one object per request and cannot be mixed with text or box prompts in the same 3D/video request.\n- Point prompts are limited to 16 points per request. SAM3's tracker prompt encoder uses the first 8 and last 8 points when more are supplied, so the plugin fails early instead of letting middle points be ignored.\n- Labels-mask prompts are not supported by the SAM3 video predictor API used by this plugin.\n- `Exemplar segmentation` itself is routed through the 2D image model. 
Use box prompts in `3D/video propagation` when the goal is exemplar-like propagation through a stack.\n\nPreview output is written to:\n\n```text\nSAM3 propagated preview labels\n```\n\nSaved output is written to:\n\n```text\nSAM3 saved propagated labels\n```\n\nThe current SAM 3 video predictor backend is CUDA-only. CPU mode is experimental for SAM3.0 2D image workflows with a CPU-safe SAM3 backend; see [CPU-only SAM3.0 setup](docs/cpu_only.md).\n\n## Channel Axis\n\n`Channel axis` tells the plugin which data axis is color/channel.\n\nDefault:\n\n```text\n-1\n```\n\nUse `-1` for grayscale images and normal RGB/RGBA images. The plugin auto-detects trailing RGB/RGBA axes of size `3` or `4`.\n\nExamples:\n\n```text\n(H, W)         -> -1\n(H, W, 3)      -> -1\n(H, W, 4)      -> -1\n(Z, H, W)      -> -1\n(C, H, W)      -> 0\n(Z, C, H, W)   -> 1\n(Z, H, W, C)   -> 3\n```\n\nLeave it at `-1` unless your image has an explicit multi-channel microscopy dimension.\n\n## Output Layers\n\nPreview layers:\n\n```text\nSAM3 preview labels\nSAM3 preview masks\nSAM3 preview boxes\nSAM3 propagated preview labels\n```\n\nSaved layers:\n\n```text\nSAM3 saved labels\nSAM3 saved propagated labels\n```\n\nButtons:\n\n- `Validate`: check the selected SAM 3 model directory.\n- `Load 2D Model`: load the 2D/image model.\n- `Load 3D/Video Model`: load the video propagation model.\n- `Run Preview`: run the selected task.\n- `Clear Preview`: remove generated preview layers only.\n- `Cancel`: stop a running worker.\n- `Unload`: unload the SAM3 model from memory.\n- `Save Accepted Object`: save a preview label object in Mask Operations.\n\nResults table:\n\n```text\nLayer | Prompt | Frame | Object ID | Score | Area\n```\n\n- `Layer`: source image layer.\n- `Prompt`: text prompt used for text and multi-text results. For non-text workflows this is `-`.\n- `Frame`: propagated frame or slice index. 
For 2D results this is `-`.\n- `Object ID`: SAM3 object ID when available, otherwise a generated label ID.\n- `Score`: SAM3 confidence/probability when returned by the backend.\n- `Area`: number of mask pixels for that object in the displayed 2D plane or frame.\n\nResults actions:\n\n- `Clear Results`: clear the table only.\n- `Copy Clipboard`: copy tab-separated results, including headers, for pasting into Excel or statistics software.\n- `Export CSV`: save the results table to a CSV file.\n\n## Tested Image Coverage\n\nThe following image sizes and channel layouts have been exercised with the current workflow:\n\n| Image type | Size / layout | Workflow | Status |\n| --- | --- | --- | --- |\n| Single-channel 2D | about `2048 x 2048` | 2D preview and quick save | Passed |\n| RGB 2D | about `2048 x 2048 x 3` | 2D preview and quick save | Passed |\n| Single-channel large image | about `60000 x 60000` | large-image local ROI inference with quick save and clean | Passed |\n| RGB large image | about `60000 x 60000 x 3` | large-image local ROI inference | Not yet tested |\n\nLarge-image coverage means the plugin was used with local ROI inference rather than sending the full image plane to SAM3 at once. Actual memory use depends on ROI size, model type, device, source image backend, and whether napari already holds large arrays in memory.\n\n## Mask Operations\n\n`SAM3 Mask Operations` is a separate napari widget for turning SAM3 previews into curated masks for analysis or training data. Open it from the plugin menu or from the `Mask Ops` button in Simple mode.\n\nLabel-value merge:\n\nUse this when multiple SAM3 objects should become the same class value in a Labels layer. For example, if labels `3`, `4`, `5`, and `6` are all the same biological class, set:\n\n```text\nValues to replace: 3,4,5,6\nNew value: 3\n```\n\nThen click `Merge Label Values`. The selected Labels layer is updated in place. 
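The label-value merge described above is a single vectorized replacement on the Labels array. A minimal NumPy sketch of the same `3,4,5,6 -> 3` example (the `labels` array here is illustrative, not the plugin's internal API):

```python
import numpy as np

# Illustrative labels array: objects 3, 4, 5, and 6 are the same biological class.
labels = np.array([
    [0, 3, 3, 0],
    [4, 4, 0, 5],
    [0, 6, 6, 0],
])

# Replace every listed value with the new class value, in place.
values_to_replace = [3, 4, 5, 6]
new_value = 3
labels[np.isin(labels, values_to_replace)] = new_value

print(np.unique(labels).tolist())  # -> [0, 3]
```

In the widget this corresponds to entering `3,4,5,6` under `Values to replace`, `3` under `New value`, and clicking `Merge Label Values`.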
When opened from `Mask Ops`, it starts as a floating tool window.\n\nTabs:\n\n- `Accepted Objects`: save a preview Labels layer as a named accepted object with class metadata, append it to an existing accepted layer, or replace an existing accepted layer.\n- `Class Merge`: merge selected accepted-object layers into a class working mask.\n- `Mask Cleanup`: analyze connected components, delete selected components, remove small objects, fill holes, smooth masks, keep the largest component, and relabel values.\n- `Final Merge / Export`: merge cleaned class masks into semantic, instance, or binary final masks, choose overlap handling, and export TIFF, PNG, or NumPy `.npy` files.\n\nThe mask operations panel works on napari Labels layers, including SAM3 preview and saved label layers.\n\nMouse-assisted cleanup:\n\n- In `Mask Cleanup`, enable `Right-click Delete`.\n- Right-click a label object in the selected target Labels layer to remove that label value.\n- Click `Undo Last Edit` to restore the previous mask state for the selected Labels layer.\n- Double-click a component table row to jump the viewer to that mask component.\n- This is useful for supervised cleanup after SAM3 creates a preview mask.\n\nOverlap inspection:\n\n- In `Final Merge / Export`, select two or more class mask layers.\n- Click `Show Overlap Map`.\n- The plugin creates `SAM3 overlap map`, where non-zero pixels mark locations covered by more than one selected class mask.\n\n## ARM64, CUDA, and DGX Spark\n\nFor ARM64 systems such as NVIDIA DGX Spark / GB10:\n\n- Use Python 3.11 or newer.\n- Keep the NVIDIA driver and CUDA stack current.\n- Install a PyTorch/torchvision build that supports your GPU architecture.\n- Use a PyTorch/torchvision/SAM3 build with CUDA kernels compatible with the device.\n- CPU-only PyTorch requires a CPU-safe SAM3 backend for image workflows; see [CPU-only SAM3.0 setup](docs/cpu_only.md).\n\nCheck PyTorch GPU support:\n\n```bash\npython - <<'PY'\nimport 
torch\nprint(\"torch:\", torch.__version__)\nprint(\"torch cuda runtime:\", torch.version.cuda)\nprint(\"cuda available:\", torch.cuda.is_available())\nif torch.cuda.is_available():\n    print(\"device:\", torch.cuda.get_device_name(0))\n    print(\"capability:\", torch.cuda.get_device_capability(0))\n    print(\"arch list:\", torch.cuda.get_arch_list())\nPY\n```\n\nGB10 reports compute capability `12.1` (`sm_121`). If your PyTorch build does not include compatible kernels, you may see:\n\n```text\nCUDA error: no kernel image is available for execution on the device\nnvrtc: error: invalid value for --gpu-architecture\n```\n\nThe plugin does not compile PyTorch, torchvision, or SAM 3 CUDA extensions.\n\n## Troubleshooting\n\n### No mask appears and status says `objects=0`\n\nSAM 3 returned no detections above threshold. Try:\n\n- a shorter text prompt\n- a more common concept phrase\n- a lower `Detection threshold`\n- box or exemplar prompts\n- a CUDA/PyTorch/SAM3 build compatible with your GPU\n\n### CUDA kernel image error\n\nError:\n\n```text\nCUDA error: no kernel image is available for execution on the device\n```\n\nThe GPU is visible, but at least one required CUDA kernel was not built for the device architecture. Install compatible PyTorch/torchvision/SAM 3 builds for the GPU. For CPU-only 2D use, see [CPU-only SAM3.0 setup](docs/cpu_only.md).\n\n### Invalid GPU architecture\n\nError:\n\n```text\nnvrtc: error: invalid value for --gpu-architecture\n```\n\nThe installed PyTorch CUDA runtime cannot compile for the detected GPU. Install a PyTorch/torchvision/SAM 3 build that supports the GPU. For CPU-only 2D use, see [CPU-only SAM3.0 setup](docs/cpu_only.md).\n\n### BFloat16 conversion errors\n\nThe plugin converts SAM3 `bfloat16` outputs to `float32` before writing NumPy-backed napari layers. 
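This conversion step can be reproduced outside the plugin to check whether a dtype error comes from the `bfloat16`-to-`float32` handoff or from somewhere else. A minimal sketch assuming a working `torch` install; the tensor is illustrative, not real SAM3 output:

```python
import torch

# Illustrative SAM3-style output tensor. NumPy has no bfloat16 dtype, so a
# direct mask.numpy() call on this tensor would raise a dtype error.
mask = torch.zeros((4, 4), dtype=torch.bfloat16)

# Convert to float32 on CPU first, as the plugin does before writing
# NumPy-backed napari layers.
arr = mask.detach().to(torch.float32).cpu().numpy()

print(arr.dtype)  # float32
```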
If you still see dtype errors, restart napari after changing device mode and run again.\n\n### SAM3.1 `start_session` fails with `unexpected keyword argument 'offload_state_to_cpu'`\n\nThis is a plugin/backend API mismatch, not a prompt or data problem.\n\n- The failure happens during `start_session` / `init_state`, before propagation actually begins.\n- The installed `sam3` backend does not accept the keyword that newer plugin code may pass.\n- The plugin includes compatibility handling for this case so older installed `sam3` backends can still start a 3D/video session.\n\nIf you still see this exact error, first verify that napari is importing the intended local `sam3` install and not an older duplicate environment copy.\n\n### SAM3.1 propagation fails later with `No available kernel. Aborting execution!` on Windows\n\nThis is a different issue from the `offload_state_to_cpu` startup mismatch.\n\n- `start_session` succeeds.\n- prompts are accepted.\n- the failure happens only when `SAM3.1` multiplex propagation actually starts.\n\nOn some Windows systems using `triton-windows`, this appears to be an upstream `sam3` runtime/kernel path issue rather than a napari prompt-collection bug.\n\nUse the documented Windows workaround here:\n\n- [`windows_sam31_workaround/README.md`](windows_sam31_workaround/README.md)\n\n### Text prompt creates no layer\n\nThat is expected. Text segmentation does not need a prompt layer. Enter text and click `Run Preview`.\n\n## Development\n\nInstall in editable mode:\n\n```bash\npip install -e .\n```\n\nRun tests:\n\n```bash\nPYTHONPATH=src pytest -q\n```\n\nThe test suite covers coordinate mapping, prompt collection, adapter utility behavior, and static widget UI checks. 
It does not download SAM 3 weights.\n\n## References\n\n- SAM 3 repository: https://github.com/facebookresearch/sam3\n- SAM 3 model files: https://huggingface.co/facebook/sam3\n- PyTorch installation selector: https://pytorch.org/get-started/locally/\n\n## Acknowledgement\nThe demo image was provided by the Electron Microscopy Core Facility at Houston Methodist Research Institute\n\n## License\n\nMIT. See the project license file.\n","description_content_type":"text/markdown","keywords":"napari,SAM3,segmentation,tracking,microscopy,image analysis","home_page":null,"download_url":null,"author":"Wulin Teo","author_email":null,"maintainer":null,"maintainer_email":null,"license":"MIT","classifier":["Framework :: napari","Intended Audience :: Science/Research","License :: OSI Approved :: MIT License","Programming Language :: Python :: 3","Programming Language :: Python :: 3.11","Topic :: Scientific/Engineering :: Image Processing","Topic :: Scientific/Engineering :: Artificial Intelligence"],"requires_dist":["napari>=0.5.0","qtpy","numpy","pillow"],"requires_python":">=3.11","requires_external":null,"project_url":["Homepage, https://github.com/wulinteousa2-hash/napari-sam3-assistant","Repository, https://github.com/wulinteousa2-hash/napari-sam3-assistant","Issues, https://github.com/wulinteousa2-hash/napari-sam3-assistant/issues"],"provides_extra":null,"provides_dist":null,"obsoletes_dist":null},"npe1_shim":false}