{"name":"gfap-segmenter","display_name":"Gfap Segmenter","visibility":"public","icon":"","categories":[],"schema_version":"0.2.1","on_activate":null,"on_deactivate":null,"contributions":{"commands":[{"id":"gfap-segmenter.get_reader","title":"Open data with Gfap Segmenter","python_name":"gfap_segmenter._reader:napari_get_reader","short_title":null,"category":null,"icon":null,"enablement":null},{"id":"gfap-segmenter.export_model_bundle","title":"Export GFAP model bundle","python_name":"gfap_segmenter._writer:export_model_bundle","short_title":null,"category":null,"icon":null,"enablement":null},{"id":"gfap-segmenter.import_model_bundle","title":"Import GFAP model bundle","python_name":"gfap_segmenter._writer:import_model_bundle","short_title":null,"category":null,"icon":null,"enablement":null},{"id":"gfap-segmenter.write_multiple","title":"Save multi-layer data with Gfap Segmenter","python_name":"gfap_segmenter._writer:write_multiple","short_title":null,"category":null,"icon":null,"enablement":null},{"id":"gfap-segmenter.write_single_image","title":"Save image data with Gfap Segmenter","python_name":"gfap_segmenter._writer:write_single_image","short_title":null,"category":null,"icon":null,"enablement":null},{"id":"gfap-segmenter.make_segmenter_widget","title":"GFAP Segmenter","python_name":"gfap_segmenter:SegmenterWidget","short_title":null,"category":null,"icon":null,"enablement":null}],"readers":[{"command":"gfap-segmenter.get_reader","filename_patterns":["*.tif","*.tiff","*.czi"],"accepts_directories":false}],"writers":[{"command":"gfap-segmenter.write_multiple","layer_types":["image*","labels*"],"filename_extensions":[".tif",".tiff",".npy"],"display_name":""},{"command":"gfap-segmenter.write_single_image","layer_types":["image"],"filename_extensions":[".tif",".tiff",".npy"],"display_name":""}],"widgets":[{"command":"gfap-segmenter.make_segmenter_widget","display_name":"GFAP 
Segmenter","autogenerate":false}],"sample_data":null,"themes":null,"menus":{},"submenus":null,"keybindings":null,"configuration":[]},"package_metadata":{"metadata_version":"2.4","name":"gfap-segmenter","version":"1.0.0","dynamic":["license-file"],"platform":null,"supported_platform":null,"summary":"A Bachelor Thesis project to allow Segmentation of Astrocytes on GFAP Staining using Machine Learning in napari","description":"# gfap-segmenter\n[![License BSD-3](https://img.shields.io/pypi/l/gfap-segmenter.svg?color=green)](https://github.com/MMV-Lab/gfap-segmenter/raw/main/LICENSE)\n[![PyPI](https://img.shields.io/pypi/v/gfap-segmenter.svg?color=green)](https://pypi.org/project/gfap-segmenter)\n[![Python Version](https://img.shields.io/pypi/pyversions/gfap-segmenter.svg?color=green)](https://python.org)\n[![tests](https://github.com/MMV-Lab/gfap-segmenter/workflows/tests/badge.svg)](https://github.com/MMV-Lab/gfap-segmenter/actions)\n[![codecov](https://codecov.io/gh/MMV-Lab/gfap-segmenter/branch/main/graph/badge.svg)](https://codecov.io/gh/MMV-Lab/gfap-segmenter)\n[![napari hub](https://img.shields.io/endpoint?url=https://api.napari-hub.org/shields/gfap-segmenter)](https://napari-hub.org/plugins/gfap-segmenter)\n[![npe2](https://img.shields.io/badge/plugin-npe2-blue?link=https://napari.org/stable/plugins/index.html)](https://napari.org/stable/plugins/index.html)\n[![Copier](https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/copier-org/copier/master/img/badge/badge-grayscale-inverted-border-purple.json)](https://github.com/copier-org/copier)\n\nA Bachelor Thesis project to allow segmentation of astrocytes on GFAP staining using machine learning in napari.\n\n----------------------------------\n\n## Features\n- Load `.tif/.tiff/.czi` microscopy data via **bioio** and display them in napari.\n- Run tiled U-Net inference that produces probability maps and binary masks.\n- Curate training patches with ≥300×300 validation and immediate colour 
feedback.\n- Synthesize 1000 augmented patches per curated region and train in the background with progress updates.\n- Import/export checkpoints (including ONNX) and optionally bundle curated training data.\n\n## Installation\nInstall `gfap-segmenter` via [pip]:\n\n```\npip install gfap-segmenter\n```\n\nIf napari (Qt backend included) is not yet installed, use:\n\n```\npip install \"gfap-segmenter[all]\"\n```\n\n### GPU support\n\nThe plugin automatically uses GPU acceleration when available. GPU support significantly speeds up both model training and inference (typically 10-50x faster than CPU).\n\n#### Installation\n\nThe plugin relies on PyTorch. By default, `pip install gfap-segmenter` installs the CPU-only version of PyTorch. To enable GPU acceleration, you need a CUDA-enabled PyTorch build.\n\n**Important:** PyTorch bundles its own CUDA runtime, so you don't need to install the full CUDA toolkit separately. You only need NVIDIA GPU drivers (usually pre-installed on systems with NVIDIA GPUs).\n\n**For NVIDIA GPUs (CUDA):**\n\n**Option 1: Fresh installation (recommended)**\n\n1. Install PyTorch with CUDA support from [PyTorch's official installation guide](https://pytorch.org/get-started/locally/):\n\n   **CUDA 11.8:**\n   ```bash\n   pip install torch --index-url https://download.pytorch.org/whl/cu118\n   ```\n\n   **CUDA 12.4:**\n   ```bash\n   pip install torch --index-url https://download.pytorch.org/whl/cu124\n   ```\n\n   **CUDA 12.8:**\n   ```bash\n   pip install torch --index-url https://download.pytorch.org/whl/cu128\n   ```\n\n   **CUDA 13.0:**\n   ```bash\n   pip install torch --index-url https://download.pytorch.org/whl/cu130\n   ```\n\n2. Then install `gfap-segmenter`:\n   ```bash\n   pip install gfap-segmenter\n   ```\n\n**Option 2: Upgrade existing CPU-only PyTorch installation**\n\nIf you already have `gfap-segmenter` installed with CPU-only PyTorch, you can upgrade to CUDA-enabled PyTorch:\n\n1. 
Uninstall the CPU-only PyTorch:\n   ```bash\n   pip uninstall torch\n   ```\n\n2. Install CUDA-enabled PyTorch (choose a version your driver supports):\n   ```bash\n   pip install torch --index-url https://download.pytorch.org/whl/cu124\n   ```\n\n3. The plugin will automatically use the CUDA-enabled PyTorch; there is no need to reinstall `gfap-segmenter`.\n\n**Note:** You don't need to match the exact CUDA version. PyTorch includes its own CUDA runtime. As long as your NVIDIA driver supports the chosen CUDA release (the \"CUDA Version\" field in `nvidia-smi` shows the maximum your driver supports), you can use any recent PyTorch CUDA build. If unsure, use CUDA 12.4.\n\n**For Apple Silicon (macOS):**\n\nMPS (Metal Performance Shaders) support is automatically available with PyTorch 1.12+. No additional steps are required; the plugin will automatically use your Apple Silicon GPU.\n\n#### Verification\n\nTo verify that GPU support is enabled:\n\n1. **Check the plugin UI**: The status bar at the bottom of the plugin widget shows the current device (e.g., \"Device: CUDA (NVIDIA GeForce RTX 3090)\" or \"Device: CPU\").\n\n2. **Check via Python**:\n   ```python\n   import torch\n   print(f\"CUDA available: {torch.cuda.is_available()}\")\n   if torch.cuda.is_available():\n       print(f\"CUDA device: {torch.cuda.get_device_name(0)}\")\n       print(f\"CUDA version: {torch.version.cuda}\")\n   ```\n\n3. 
**Check PyTorch CUDA support** (command line):\n   ```bash\n   python -c \"import torch; print('CUDA available:', torch.cuda.is_available()); print('CUDA version:', torch.version.cuda if torch.cuda.is_available() else 'N/A')\"\n   ```\n\n**Checking your NVIDIA driver version** (optional, for reference):\n\n- **Using nvidia-smi** (if NVIDIA drivers are installed):\n  ```bash\n  nvidia-smi\n  ```\n  Look for the \"CUDA Version\" line (it shows the maximum CUDA version your driver supports).\n\n- **Using nvcc** (if the CUDA toolkit is installed):\n  ```bash\n  nvcc --version\n  ```\n\n- **Using PyTorch** (after PyTorch is installed):\n  ```bash\n  python -c \"import torch; print(torch.version.cuda)\"\n  ```\n\n#### Troubleshooting\n\n**Problem: Plugin shows \"Device: CPU\" even though you have a GPU**\n\n- **PyTorch CPU-only build**: You may have installed the CPU-only version of PyTorch. Uninstall and reinstall with CUDA support:\n  ```bash\n  pip uninstall torch\n  pip install torch --index-url https://download.pytorch.org/whl/cu124  # Adjust to the CUDA version you chose\n  ```\n\n- **CUDA driver mismatch**: Your driver may be too old for the PyTorch CUDA version. 
Check compatibility:\n  ```bash\n  nvidia-smi  # Shows driver version and the maximum supported CUDA version\n  python -c \"import torch; print(torch.version.cuda)\"  # Shows PyTorch CUDA version\n  ```\n  The \"CUDA Version\" reported by `nvidia-smi` should be >= the PyTorch CUDA version.\n\n- **CUDA not detected**: Ensure your NVIDIA drivers are up to date and that you installed a CUDA-enabled PyTorch build.\n\n**Problem: \"CUDA out of memory\" errors**\n\n- Reduce the batch size in training or prediction settings\n- Use a smaller patch size\n- Close other GPU-intensive applications\n\n**Problem: Training/inference is still slow**\n\n- Verify the GPU is actually being used (check the status bar or console output)\n- Ensure you installed a CUDA build your driver supports\n- Check GPU utilization: `nvidia-smi` should show high GPU usage during training\n\n#### Performance Notes\n\n- The plugin automatically uses the GPU for both training and inference when available; no configuration is needed. The current device is displayed at the bottom of the plugin.\n- If no GPU is detected, the plugin automatically falls back to CPU execution (slower but functional).\n\n### Development version\n```\npip install git+https://github.com/MMV-Lab/gfap-segmenter.git\n```\n\n## Usage overview\n1. **Prediction tab** – load the default checkpoint or import a custom one, choose an image layer, and run inference to create probability and mask layers.\n2. **Curate Patches tab** – draw rectangles in the dedicated shapes layer. Patches ≥300×300 are tinted blue; smaller ones are red. Add valid regions to the training set.\n3. **Train tab** – launch background training; progress and logs stream into the widget. Trained models are stored under `~/.gfap_segmenter/models`.\n4. **Model IO tab** – export/import checkpoint bundles. Bundles can optionally include the curated training data for reproducibility.\n\n## Contributing\nContributions are very welcome. 
Tests can be run with [tox]; please ensure coverage at least stays the same before submitting a pull request.\n\n## License\nDistributed under the terms of the [BSD-3] license, \"gfap-segmenter\" is free and open source software.\n\n## Issues\nIf you encounter any problems, please [file an issue] along with a detailed description.\n\n[napari]: https://github.com/napari/napari\n[copier]: https://copier.readthedocs.io/en/stable/\n[@napari]: https://github.com/napari\n[MIT]: http://opensource.org/licenses/MIT\n[BSD-3]: http://opensource.org/licenses/BSD-3-Clause\n[GNU GPL v3.0]: http://www.gnu.org/licenses/gpl-3.0.txt\n[GNU LGPL v3.0]: http://www.gnu.org/licenses/lgpl-3.0.txt\n[Apache Software License 2.0]: http://www.apache.org/licenses/LICENSE-2.0\n[Mozilla Public License 2.0]: https://www.mozilla.org/media/MPL/2.0/index.txt\n[napari-plugin-template]: https://github.com/napari/napari-plugin-template\n[file an issue]: https://github.com/MMV-Lab/gfap-segmenter/issues\n[tox]: https://tox.readthedocs.io/en/latest/\n[pip]: https://pypi.org/project/pip/\n[PyPI]: https://pypi.org/\n","description_content_type":"text/markdown","keywords":null,"home_page":null,"download_url":null,"author":"Lennart Kowitz","author_email":"lennart.kowitz@isas.de","maintainer":null,"maintainer_email":null,"license":"Copyright (c) 2025, Lennart Kowitz\nAll rights reserved.\n\nRedistribution and use in source and binary forms, with or without\nmodification, are permitted provided that the following conditions are met:\n\n* Redistributions of source code must retain the above copyright notice, this\n  list of conditions and the following disclaimer.\n\n* Redistributions in binary form must reproduce the above copyright notice,\n  this list of conditions and the following disclaimer in the documentation\n  and/or other materials provided with the distribution.\n\n* Neither the name of copyright holder nor the names of its\n  contributors may be used to endorse or promote products derived from\n  
this software without specific prior written permission.\n\nTHIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\nAND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\nIMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\nDISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE\nFOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\nDAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\nSERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\nCAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\nOR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\nOF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n","classifier":["Development Status :: 2 - Pre-Alpha","Framework :: napari","Intended Audience :: Developers","License :: OSI Approved :: BSD License","Operating System :: OS Independent","Programming Language :: Python","Programming Language :: Python :: 3","Programming Language :: Python :: 3 :: Only","Programming Language :: Python :: 3.10","Programming Language :: Python :: 3.11","Programming Language :: Python :: 3.12","Programming Language :: Python :: 3.13","Topic :: Scientific/Engineering :: Image Processing"],"requires_dist":["numpy","qtpy","scikit-image","tifffile","bioio","bioio-czi","torch>=2.0.0","onnx","onnxruntime","tqdm","napari[all]; extra == \"all\""],"requires_python":">=3.10","requires_external":null,"project_url":["Bug Tracker, https://github.com/MMV-Lab/gfap-segmenter/issues","Documentation, https://github.com/MMV-Lab/gfap-segmenter#README.md","Source Code, https://github.com/MMV-Lab/gfap-segmenter","User Support, https://github.com/MMV-Lab/gfap-segmenter/issues"],"provides_extra":["all"],"provides_dist":null,"obsoletes_dist":null},"npe1_shim":false}