Replace all references to PlanktoNScope by PlanktoScope

This commit is contained in:
Romain Bazile 2021-07-01 15:55:56 +02:00
parent 94b4a7132f
commit f316796e99
18 changed files with 50 additions and 657 deletions


@@ -19,7 +19,7 @@ jobs:
         contains(github.event.issue.labels.*.name, 'hardware') ||
         contains(github.event.pull_request.labels.*.name, 'hardware')
       with:
-        project: 'https://github.com/PlanktonPlanet/PlanktonScope/projects/2'
+        project: 'https://github.com/PlanktonPlanet/PlanktoScope/projects/2'
         column_name: 'Bugs'
     - name: Assign issues and pull requests with `software` label to project `Software`
@@ -28,5 +28,5 @@ jobs:
         contains(github.event.issue.labels.*.name, 'software') ||
         contains(github.event.pull_request.labels.*.name, 'software')
       with:
-        project: 'https://github.com/PlanktonPlanet/PlanktonScope/projects/1'
+        project: 'https://github.com/PlanktonPlanet/PlanktoScope/projects/1'
         column_name: 'To Do'
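For readers unfamiliar with GitHub Actions expressions, the `contains(...labels.*.name, '...')` conditions above simply ask whether the issue or the pull request carries a given label. A rough Python sketch of that predicate (the event dictionary here is a simplified stand-in, not the full webhook payload):

```python
def matches_label(event: dict, label: str) -> bool:
    """Mimic the workflow's `contains(...labels.*.name, label)` check:
    true if either the issue or the pull request carries the label."""
    issue_labels = [l["name"] for l in event.get("issue", {}).get("labels", [])]
    pr_labels = [l["name"] for l in event.get("pull_request", {}).get("labels", [])]
    return label in issue_labels or label in pr_labels

event = {"issue": {"labels": [{"name": "hardware"}]}}
print(matches_label(event, "hardware"))  # True -> routed to the Hardware project
print(matches_label(event, "software"))  # False
```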


@@ -33,7 +33,7 @@ There are several ways to join the development effort, share your progress with
 We are using slack as a communication platform between interested parties. You can [request to join by filling this form](https://docs.google.com/forms/d/e/1FAIpQLSfcod-avpzWVmWj42_hW1v2mMSHm0DAGXHxVECFig2dnKHxGQ/viewform).
-This repository is also a good way to get involved. Please fill in an issue if you witnessed a bug in the software or hardware. If you are able, you can also join the development effort. Look through the [issues opened](https://github.com/PlanktonPlanet/PlanktonScope/labels/good%20first%20issue) and choose one that piques your interest. Let us know you want to work on it in the comments, we may even be able to guide your beginnings around the code.
+This repository is also a good way to get involved. Please fill in an issue if you witnessed a bug in the software or hardware. If you are able, you can also join the development effort. Look through the [issues opened](https://github.com/PlanktonPlanet/PlanktoScope/labels/good%20first%20issue) and choose one that piques your interest. Let us know you want to work on it in the comments, we may even be able to guide your beginnings around the code.
 # License: Our work is fully open source


@@ -6,7 +6,7 @@
   "sample_sampling_gear": "net",
   "sample_gear_net_opening": 40,
   "acq_id": 1,
-  "acq_instrument": "PlanktonScope v2.2",
+  "acq_instrument": "PlanktoScope v2.2",
   "acq_celltype": 200,
   "acq_minimum_mesh": 10,
   "acq_maximum_mesh": 200,


@@ -1,4 +1,4 @@
-# PlanktonScope Simple Setup Guide
+# PlanktoScope Simple Setup Guide
 ## Download the image
@@ -19,7 +19,7 @@ Review your selections and click 'Flash!' to begin writing data to the SD card.
 ## Inserting the SD card
 Once flashing is over, you can unmount the SD card from the computer (usually done by right clicking on the card icon in the taskbar).
-Insert now the card in the Raspberry installed in your PlanktonScope.
+Insert now the card in the Raspberry installed in your PlanktoScope.
 ## Install a mDNS client
@@ -33,4 +33,4 @@ To install the client, download the installer [here](https://download.info.apple
 ## Start playing!
-Start up your PlanktonScope and connect to its WiFi network. You can now access the webpage at http://planktonscope.local:1880/ui to start using your machine!
+Start up your PlanktoScope and connect to its WiFi network. You can now access the webpage at http://planktoscope.local:1880/ui to start using your machine!


@@ -2,7 +2,7 @@
 We are using the [Github Flow approach](https://docs.github.com/en/free-pro-team@latest/github/collaborating-with-issues-and-pull-requests) for our development efforts.
-If you want to join us, have a look at the [currently opened issues](https://github.com/PlanktonPlanet/PlanktonScope/issues) and pick one where you feel like you can have an impact. Let us know you want to work it in the comments and get started.
+If you want to join us, have a look at the [currently opened issues](https://github.com/PlanktonPlanet/PlanktoScope/issues) and pick one where you feel like you can have an impact. Let us know you want to work it in the comments and get started.
 For working on Node-Red, we recommend to install it directly on your development machine to allow for faster cycles of testing (and ease of use). But feel free to setup a Pi Zero as a portable and compact development environment! (One of us is using one configured as usb gadget to do so!)


@@ -2,7 +2,7 @@
 This documentation is hosted by [ReadTheDocs.org](https://readthedocs.org/) at https://planktonscope.readthedocs.io/.
-The source files are in the main [github repository](https://www.github.com/PlanktonPlanet/PlanktonScope), in the `docs` folder.
+The source files are in the main [github repository](https://www.github.com/PlanktonPlanet/PlanktoScope), in the `docs` folder.
 They are simple [Markdown files](https://www.markdownguide.org/), that you can edit in any text editor of your choice.
@@ -12,7 +12,7 @@ After installing mkdocs, you can use `mkdocs serve` in the main folder of this r
 If you want to include pictures and diagrams in the documentation, please set the pictures in a dedicated folder to the name of the page you are creating (for example, if your page is named `expert_setup.md`, please put all the related pictures in the `docs/expert_setup/` folder). Each picture should be named with a simple yet descriptive name, using jpg or png format if possible. Try to limit the size of the file by limiting the resolution to what is necessary for the picture to be clear on screen.
-Contributions should be made by creating pull requests on [Github directly](https://github.com/PlanktonPlanet/PlanktonScope/pulls).
+Contributions should be made by creating pull requests on [Github directly](https://github.com/PlanktonPlanet/PlanktoScope/pulls).
 ## Extensions available

File diff suppressed because one or more lines are too long


@@ -1,221 +0,0 @@
#########################################################
Installation of MorphoCut development version:
#########################################################
$ pip install -U git+https://github.com/morphocut/morphocut.git
Looking in indexes: https://pypi.org/simple, https://www.piwheels.org/simple
Collecting git+https://github.com/morphocut/morphocut.git
Cloning https://github.com/morphocut/morphocut.git to /tmp/pip-req-build-NnGYk4
Installing build dependencies ... error
Complete output from command /usr/bin/python -m pip install --ignore-installed --no-user --prefix /tmp/pip-build-env-e4DHrk --no-warn-script-location --no-binary :none: --only-binary :none: -i https://pypi.org/simple --extra-index-url https://www.piwheels.org/simple -- setuptools wheel:
Looking in indexes: https://pypi.org/simple, https://www.piwheels.org/simple, https://www.piwheels.org/simple
Collecting setuptools
Downloading https://www.piwheels.org/simple/setuptools/setuptools-45.0.0-py2.py3-none-any.whl (583kB)
setuptools requires Python '>=3.5' but the running Python is 2.7.16
----------------------------------------
Command "/usr/bin/python -m pip install --ignore-installed --no-user --prefix /tmp/pip-build-env-e4DHrk --no-warn-script-location --no-binary :none: --only-binary :none: -i https://pypi.org/simple --extra-index-url https://www.piwheels.org/simple -- setuptools wheel" failed with error code 1 in None
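This first failure is a Python 2 problem, not a MorphoCut one: plain `pip` runs under `/usr/bin/python` (Python 2.7.16), while the setuptools 45 wheel that gets selected requires Python >= 3.5. A minimal illustration of the version gate that rejects the interpreter:

```python
import sys

def can_install_setuptools_45(version_info=sys.version_info):
    """setuptools 45.0.0 dropped Python 2 support: it declares
    `Requires-Python: >=3.5`, so the interpreter version decides
    whether the wheel is even installable."""
    return tuple(version_info[:3]) >= (3, 5)

# Python 2.7.16 (what plain `pip` ran under above) is rejected:
print(can_install_setuptools_45((2, 7, 16)))  # False
# Python 3.7 (what `pip3` runs under) is accepted:
print(can_install_setuptools_45((3, 7, 3)))   # True
```

This is why the later attempts switch to `pip3`.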
#########################################################
Installation of MorphoCut packaged on PyPI via pip
#########################################################
$ pip install morphocut
Looking in indexes: https://pypi.org/simple, https://www.piwheels.org/simple
Collecting morphocut
Could not find a version that satisfies the requirement morphocut (from versions: )
No matching distribution found for morphocut
#########################################################
Installation of MorphoCut packaged on PyPI via pip3
#########################################################
$ pip3 install morphocut
Looking in indexes: https://pypi.org/simple, https://www.piwheels.org/simple
Collecting morphocut
Downloading https://files.pythonhosted.org/packages/7e/c7/1addaf867234dd30db6d1f4bf8d3b685d93b743023a863814451abd5cef8/morphocut-0.1.1-py3-none-any.whl
Collecting scikit-image>=0.16.0 (from morphocut)
Downloading https://www.piwheels.org/simple/scikit-image/scikit_image-0.16.2-cp37-cp37m-linux_armv7l.whl (39.7MB)
100% |████████████████████████████████| 39.7MB 10kB/s
Collecting tqdm (from morphocut)
Downloading https://files.pythonhosted.org/packages/72/c9/7fc20feac72e79032a7c8138fd0d395dc6d8812b5b9edf53c3afd0b31017/tqdm-4.41.1-py2.py3-none-any.whl (56kB)
100% |████████████████████████████████| 61kB 1.6MB/s
Collecting scipy (from morphocut)
Downloading https://files.pythonhosted.org/packages/04/ab/e2eb3e3f90b9363040a3d885ccc5c79fe20c5b8a3caa8fe3bf47ff653260/scipy-1.4.1.tar.gz (24.6MB)
100% |████████████████████████████████| 24.6MB 17kB/s
Installing build dependencies ... done
Requirement already satisfied: numpy in /usr/lib/python3/dist-packages (from morphocut) (1.16.2)
Collecting pandas (from morphocut)
Downloading https://www.piwheels.org/simple/pandas/pandas-0.25.3-cp37-cp37m-linux_armv7l.whl (33.1MB)
100% |████████████████████████████████| 33.1MB 12kB/s
Collecting PyWavelets>=0.4.0 (from scikit-image>=0.16.0->morphocut)
Downloading https://www.piwheels.org/simple/pywavelets/PyWavelets-1.1.1-cp37-cp37m-linux_armv7l.whl (6.1MB)
100% |████████████████████████████████| 6.1MB 67kB/s
Collecting networkx>=2.0 (from scikit-image>=0.16.0->morphocut)
Downloading https://files.pythonhosted.org/packages/41/8f/dd6a8e85946def36e4f2c69c84219af0fa5e832b018c970e92f2ad337e45/networkx-2.4-py3-none-any.whl (1.6MB)
100% |████████████████████████████████| 1.6MB 255kB/s
Requirement already satisfied: pillow>=4.3.0 in /usr/lib/python3/dist-packages (from scikit-image>=0.16.0->morphocut) (5.4.1)
Collecting imageio>=2.3.0 (from scikit-image>=0.16.0->morphocut)
Downloading https://files.pythonhosted.org/packages/1a/de/f7f985018f462ceeffada7f6e609919fbcc934acd9301929cba14bc2c24a/imageio-2.6.1-py3-none-any.whl (3.3MB)
100% |████████████████████████████████| 3.3MB 123kB/s
Collecting python-dateutil>=2.6.1 (from pandas->morphocut)
Downloading https://files.pythonhosted.org/packages/d4/70/d60450c3dd48ef87586924207ae8907090de0b306af2bce5d134d78615cb/python_dateutil-2.8.1-py2.py3-none-any.whl (227kB)
100% |████████████████████████████████| 235kB 822kB/s
Collecting pytz>=2017.2 (from pandas->morphocut)
Downloading https://files.pythonhosted.org/packages/e7/f9/f0b53f88060247251bf481fa6ea62cd0d25bf1b11a87888e53ce5b7c8ad2/pytz-2019.3-py2.py3-none-any.whl (509kB)
100% |████████████████████████████████| 512kB 660kB/s
Collecting decorator>=4.3.0 (from networkx>=2.0->scikit-image>=0.16.0->morphocut)
Downloading https://files.pythonhosted.org/packages/8f/b7/f329cfdc75f3d28d12c65980e4469e2fa373f1953f5df6e370e84ea2e875/decorator-4.4.1-py2.py3-none-any.whl
Requirement already satisfied: six>=1.5 in /usr/lib/python3/dist-packages (from python-dateutil>=2.6.1->pandas->morphocut) (1.12.0)
Building wheels for collected packages: scipy
Running setup.py bdist_wheel for scipy ... error
Complete output from command /usr/bin/python3 -u -c "import setuptools, tokenize;__file__='/tmp/pip-install-vrivsnzz/scipy/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" bdist_wheel -d /tmp/pip-wheel-2w9xwni0 --python-tag cp37:
Traceback (most recent call last):
File "/tmp/pip-build-env-3v411zsn/lib/python3.7/site-packages/numpy/core/__init__.py", line 16, in <module>
from . import multiarray
ImportError: libf77blas.so.3: cannot open shared object file: No such file or directory
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/tmp/pip-install-vrivsnzz/scipy/setup.py", line 540, in <module>
setup_package()
File "/tmp/pip-install-vrivsnzz/scipy/setup.py", line 516, in setup_package
from numpy.distutils.core import setup
File "/tmp/pip-build-env-3v411zsn/lib/python3.7/site-packages/numpy/__init__.py", line 142, in <module>
from . import add_newdocs
File "/tmp/pip-build-env-3v411zsn/lib/python3.7/site-packages/numpy/add_newdocs.py", line 13, in <module>
from numpy.lib import add_newdoc
File "/tmp/pip-build-env-3v411zsn/lib/python3.7/site-packages/numpy/lib/__init__.py", line 8, in <module>
from .type_check import *
File "/tmp/pip-build-env-3v411zsn/lib/python3.7/site-packages/numpy/lib/type_check.py", line 11, in <module>
import numpy.core.numeric as _nx
File "/tmp/pip-build-env-3v411zsn/lib/python3.7/site-packages/numpy/core/__init__.py", line 26, in <module>
raise ImportError(msg)
ImportError:
Importing the multiarray numpy extension module failed. Most
likely you are trying to import a failed build of numpy.
If you're working with a numpy git repo, try `git clean -xdf` (removes all
files not under version control). Otherwise reinstall numpy.
Original error was: libf77blas.so.3: cannot open shared object file: No such file or directory
----------------------------------------
Failed building wheel for scipy
Running setup.py clean for scipy
Complete output from command /usr/bin/python3 -u -c "import setuptools, tokenize;__file__='/tmp/pip-install-vrivsnzz/scipy/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" clean --all:
`setup.py clean` is not supported, use one of the following instead:
- `git clean -xdf` (cleans all files)
- `git clean -Xdf` (cleans all versioned files, doesn't touch
files that aren't checked into the git repo)
Add `--force` to your command to use it anyway if you must (unsupported).
----------------------------------------
Failed cleaning build dir for scipy
Failed to build scipy
Installing collected packages: PyWavelets, decorator, networkx, imageio, scikit-image, tqdm, scipy, python-dateutil, pytz, pandas, morphocut
Running setup.py install for scipy ... error
Complete output from command /usr/bin/python3 -u -c "import setuptools, tokenize;__file__='/tmp/pip-install-vrivsnzz/scipy/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-record-s4uxcq_l/install-record.txt --single-version-externally-managed --compile --user --prefix=:
Note: if you need reliable uninstall behavior, then install
with pip instead of using `setup.py install`:
- `pip install .` (from a git repo or downloaded source
release)
- `pip install scipy` (last SciPy release on PyPI)
Traceback (most recent call last):
File "/tmp/pip-build-env-3v411zsn/lib/python3.7/site-packages/numpy/core/__init__.py", line 16, in <module>
from . import multiarray
ImportError: libf77blas.so.3: cannot open shared object file: No such file or directory
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/tmp/pip-install-vrivsnzz/scipy/setup.py", line 540, in <module>
setup_package()
File "/tmp/pip-install-vrivsnzz/scipy/setup.py", line 516, in setup_package
from numpy.distutils.core import setup
File "/tmp/pip-build-env-3v411zsn/lib/python3.7/site-packages/numpy/__init__.py", line 142, in <module>
from . import add_newdocs
File "/tmp/pip-build-env-3v411zsn/lib/python3.7/site-packages/numpy/add_newdocs.py", line 13, in <module>
from numpy.lib import add_newdoc
File "/tmp/pip-build-env-3v411zsn/lib/python3.7/site-packages/numpy/lib/__init__.py", line 8, in <module>
from .type_check import *
File "/tmp/pip-build-env-3v411zsn/lib/python3.7/site-packages/numpy/lib/type_check.py", line 11, in <module>
import numpy.core.numeric as _nx
File "/tmp/pip-build-env-3v411zsn/lib/python3.7/site-packages/numpy/core/__init__.py", line 26, in <module>
raise ImportError(msg)
ImportError:
Importing the multiarray numpy extension module failed. Most
likely you are trying to import a failed build of numpy.
If you're working with a numpy git repo, try `git clean -xdf` (removes all
files not under version control). Otherwise reinstall numpy.
Original error was: libf77blas.so.3: cannot open shared object file: No such file or directory
----------------------------------------
Command "/usr/bin/python3 -u -c "import setuptools, tokenize;__file__='/tmp/pip-install-vrivsnzz/scipy/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-record-s4uxcq_l/install-record.txt --single-version-externally-managed --compile --user --prefix=" failed with error code 1 in /tmp/pip-install-vrivsnzz/scipy/
#########################################################
Installation of Scipy using sudo apt-get
#########################################################
$ sudo apt-get install python3-scipy
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
  python3-decorator
Suggested packages:
  python-scipy-doc
The following NEW packages will be installed:
  python3-decorator python3-scipy
0 upgraded, 2 newly installed, 0 to remove and 41 not upgraded.
Need to get 8,925 kB of archives.
After this operation, 38.0 MB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 http://ftp.igh.cnrs.fr/pub/os/linux/raspbian/raspbian buster/main armhf python3-decorator all 4.3.0-1.1 [14.5 kB]
Get:2 http://ftp.igh.cnrs.fr/pub/os/linux/raspbian/raspbian buster/main armhf python3-scipy armhf 1.1.0-7 [8,910 kB]
Fetched 8,925 kB in 3s (2,561 kB/s)
Selecting previously unselected package python3-decorator.
(Reading database ... 99960 files and directories currently installed.)
Preparing to unpack .../python3-decorator_4.3.0-1.1_all.deb ...
Unpacking python3-decorator (4.3.0-1.1) ...
Selecting previously unselected package python3-scipy.
Preparing to unpack .../python3-scipy_1.1.0-7_armhf.deb ...
Unpacking python3-scipy (1.1.0-7) ...
Setting up python3-decorator (4.3.0-1.1) ...
Setting up python3-scipy (1.1.0-7) ...
#########################################################
Installation of MorphoCut packaged on PyPI via pip3 using sudo
#########################################################
$ sudo pip3 install morphocut
Looking in indexes: https://pypi.org/simple, https://www.piwheels.org/simple
Collecting morphocut
Using cached https://files.pythonhosted.org/packages/7e/c7/1addaf867234dd30db6d1f4bf8d3b685d93b743023a863814451abd5cef8/morphocut-0.1.1-py3-none-any.whl
Requirement already satisfied: scikit-image>=0.16.0 in /usr/local/lib/python3.7/dist-packages (from morphocut) (0.16.2)
Requirement already satisfied: numpy in /usr/lib/python3/dist-packages (from morphocut) (1.16.2)
Requirement already satisfied: tqdm in /usr/local/lib/python3.7/dist-packages (from morphocut) (4.41.1)
Requirement already satisfied: pandas in /usr/local/lib/python3.7/dist-packages (from morphocut) (0.25.3)
Requirement already satisfied: scipy in /usr/lib/python3/dist-packages (from morphocut) (1.1.0)
Requirement already satisfied: imageio>=2.3.0 in /usr/local/lib/python3.7/dist-packages (from scikit-image>=0.16.0->morphocut) (2.6.1)
Requirement already satisfied: PyWavelets>=0.4.0 in /usr/local/lib/python3.7/dist-packages (from scikit-image>=0.16.0->morphocut) (1.1.1)
Requirement already satisfied: networkx>=2.0 in /usr/local/lib/python3.7/dist-packages (from scikit-image>=0.16.0->morphocut) (2.4)
Requirement already satisfied: pillow>=4.3.0 in /usr/lib/python3/dist-packages (from scikit-image>=0.16.0->morphocut) (5.4.1)
Requirement already satisfied: python-dateutil>=2.6.1 in /usr/local/lib/python3.7/dist-packages (from pandas->morphocut) (2.8.1)
Requirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.7/dist-packages (from pandas->morphocut) (2019.3)
Requirement already satisfied: decorator>=4.3.0 in /usr/local/lib/python3.7/dist-packages (from networkx>=2.0->scikit-image>=0.16.0->morphocut) (4.4.1)
Requirement already satisfied: six>=1.5 in /usr/lib/python3/dist-packages (from python-dateutil>=2.6.1->pandas->morphocut) (1.12.0)
Installing collected packages: morphocut
Successfully installed morphocut-0.1.1
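The final `sudo pip3 install morphocut` succeeds because every requirement (e.g. `scikit-image>=0.16.0`, `scipy`) is now satisfied by an already-installed version, so no wheel has to be built on the Pi. A toy sketch of how such a `>=` bound is checked (real resolvers use `packaging.version`; this naive tuple comparison is for illustration only):

```python
def satisfies_min(installed: str, minimum: str) -> bool:
    """Naive `>=` version check: split on dots and compare numerically.
    Good enough for simple x.y.z versions like the ones in this log."""
    to_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return to_tuple(installed) >= to_tuple(minimum)

# scikit-image 0.16.2 from the piwheels wheel satisfies `>=0.16.0`:
print(satisfies_min("0.16.2", "0.16.0"))  # True
# an older 0.15.0 would not, and would have forced a source build:
print(satisfies_min("0.15.0", "0.16.0"))  # False
```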


@@ -1,169 +0,0 @@
"""Experiment on processing KOSMOS data using MorphoCut."""
import datetime
import os

from skimage.util import img_as_ubyte
from morphocut import Call
from morphocut.contrib.ecotaxa import EcotaxaWriter
from morphocut.contrib.zooprocess import CalculateZooProcessFeatures
from morphocut.core import Pipeline
from morphocut.file import Find
from morphocut.image import (
    ExtractROI,
    FindRegions,
    ImageReader,
    ImageWriter,
    RescaleIntensity,
    RGB2Gray,
)
from morphocut.stat import RunningMedian
from morphocut.str import Format
from morphocut.stream import TQDM, Enumerate
from skimage.feature import canny
from skimage.color import rgb2gray, label2rgb
from skimage.morphology import disk
from skimage.morphology import erosion, dilation, closing
from skimage.measure import label, regionprops

import_path = "/media/tpollina/rootfs/home/pi/Desktop/PlanktonScope_acquisition/01_17_2020/RAW"
export_path = "/media/tpollina/rootfs/home/pi/Desktop/PlanktonScope_acquisition/01_17_2020/"

CLEAN = os.path.join(export_path, "CLEAN")
os.makedirs(CLEAN, exist_ok=True)

OBJECTS = os.path.join(export_path, "OBJECTS")
os.makedirs(OBJECTS, exist_ok=True)

archive_fn = os.path.join(export_path, "ecotaxa_export.zip")

# Metadata that is added to every object
global_metadata = {
    "acq_instrument": "Planktoscope",  # note: shadowed by the duplicate "acq_instrument" key below
    "process_datetime": datetime.datetime.now(),
    "sample_project": "PlanktonScope Villefranche",
    "sample_ship": "Kayak de Fabien",
    "sample_operator": "Thibaut Pollina",
    "sample_id": "Flowcam_PlanktonScope_comparison",
    "sample_sampling_gear": "net",
    "sample_time": 150000,
    "sample_date": 16112020,
    "object_lat": 43.696146,
    "object_lon": 7.308359,
    "acq_fnumber_objective": 16,
    "acq_celltype": 200,
    "process_pixel": 1.19,
    "acq_camera": "Pi Camera V2.1",
    "acq_instrument": "PlanktonScope V2.1",
    "acq_software": "Node-RED Dashboard and raw python",
    "acq_instrument_ID": "copepode",
    "acq_volume": 24,
    "acq_flowrate": "Unknown",
    "acq_camera.resolution": "(3280, 2464)",
    "acq_camera.iso": 60,
    "acq_camera.shutter_speed": 100,
    "acq_camera.exposure_mode": 'off',
    "acq_camera.awb_mode": 'off',
    "acq_nb_frames": 1000,
}

# Define processing pipeline
with Pipeline() as p:
    # Recursively find .jpg files in import_path.
    # Sort to get consecutive frames.
    abs_path = Find(import_path, [".jpg"], sort=True, verbose=True)

    # Extract name from abs_path
    name = Call(lambda p: os.path.splitext(os.path.basename(p))[0], abs_path)

    # Read image
    img = ImageReader(abs_path)

    # Apply running median to approximate the background image
    flat_field = RunningMedian(img, 10)

    # Correct image
    img = img / flat_field

    # Rescale intensities and convert to uint8 to speed up calculations
    img = RescaleIntensity(img, in_range=(0, 1.1), dtype="uint8")

    # Convert image to uint8 gray
    img_gray = RGB2Gray(img)
    img_gray = Call(img_as_ubyte, img_gray)

    # Edge detection and morphological cleanup to build the object mask
    img_canny = Call(canny, img_gray, sigma=0.3)
    img_dilate = Call(dilation, img_canny)
    img_closing = Call(closing, img_dilate)
    mask = Call(erosion, img_closing)

    # Show progress bar for frames
    TQDM(Format("Frame {name}", name=name))

    # Apply threshold to find objects
    # threshold = 204  # Call(skimage.filters.threshold_otsu, img_gray)
    # mask = img_gray < threshold

    # Write corrected frames
    frame_fn = Format(os.path.join(CLEAN, "{name}.jpg"), name=name)
    ImageWriter(frame_fn, img)

    # Find objects
    regionprops = FindRegions(
        mask, img_gray, min_area=1000, padding=10, warn_empty=name
    )

    # For an object, extract a vignette/ROI from the image
    roi_orig = ExtractROI(img, regionprops, bg_color=255)

    # Generate an object identifier
    i = Enumerate()
    # Call(print, i)
    object_id = Format("{name}_{i:d}", name=name, i=i)
    # Call(print, object_id)

    object_fn = Format(os.path.join(OBJECTS, "{name}.jpg"), name=object_id)
    ImageWriter(object_fn, roi_orig)

    # Calculate features. The calculated features are added to the global_metadata.
    # Returns a Variable representing a dict for every object in the stream.
    meta = CalculateZooProcessFeatures(
        regionprops, prefix="object_", meta=global_metadata
    )
    # If CalculateZooProcessFeatures is not used, we need to copy global_metadata into the stream:
    # meta = Call(lambda: global_metadata.copy())
    # https://github.com/morphocut/morphocut/issues/51

    # Add object_id to the metadata dictionary
    meta["object_id"] = object_id

    # Generate object filenames
    orig_fn = Format("{object_id}.jpg", object_id=object_id)

    # Write objects to an EcoTaxa archive:
    # roi image in original color, plus the metadata associated with each object
    EcotaxaWriter(archive_fn, (orig_fn, roi_orig), meta)

    # Progress bar for objects
    TQDM(Format("Object {object_id}", object_id=object_id))

BEGIN = datetime.datetime.now()

# Execute pipeline
p.run()

END = datetime.datetime.now()
print("MORPHOCUT :" + str(END - BEGIN))
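The background-correction idea in the script above (`RunningMedian` approximates the background, and each frame is divided by it) can be sketched standalone on a toy two-pixel "image", without MorphoCut or skimage (function name and data are illustrative):

```python
from statistics import median

def flat_field_correct(frames, window=3):
    """Approximate each frame's background as the per-pixel median of the
    last `window` frames, then divide the frame by that background
    (mirroring `RunningMedian` followed by `img / flat_field` above)."""
    history, corrected = [], []
    for frame in frames:
        history.append(frame)
        history = history[-window:]  # keep only the running window
        background = [median(pixel) for pixel in zip(*history)]
        corrected.append([p / b for p, b in zip(frame, background)])
    return corrected

# Three toy frames of two pixels each; values hover around a static background.
frames = [[100.0, 200.0], [110.0, 190.0], [90.0, 210.0]]
out = flat_field_correct(frames)
# A pixel sitting exactly at the running median maps to 1.0 (pure background);
# darker objects drop below 1.0, brighter ones rise above it.
```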


@@ -1,181 +0,0 @@
"""Experiment on processing KOSMOS data using MorphoCut."""
import datetime
import os

from skimage.util import img_as_ubyte
from morphocut import Call
from morphocut.contrib.ecotaxa import EcotaxaWriter
from morphocut.contrib.zooprocess import CalculateZooProcessFeatures
from morphocut.core import Pipeline
from morphocut.file import Find
from morphocut.image import (
    ExtractROI,
    FindRegions,
    ImageReader,
    ImageWriter,
    RescaleIntensity,
    RGB2Gray,
)
from morphocut.stat import RunningMedian
from morphocut.str import Format
from morphocut.stream import TQDM, Enumerate, FilterVariables
from skimage.feature import canny
from skimage.color import rgb2gray, label2rgb
from skimage.morphology import disk
from skimage.morphology import erosion, dilation, closing
from skimage.measure import label, regionprops
import cv2

import_path = "/home/tpollina/Desktop/JUPYTER/IMAGES/RAW"
export_path = "/home/tpollina/Desktop/JUPYTER/IMAGES/"

CLEAN = os.path.join(export_path, "CLEAN")
os.makedirs(CLEAN, exist_ok=True)

ANNOTATED = os.path.join(export_path, "ANNOTATED")
os.makedirs(ANNOTATED, exist_ok=True)

OBJECTS = os.path.join(export_path, "OBJECTS")
os.makedirs(OBJECTS, exist_ok=True)

archive_fn = os.path.join(export_path, "ecotaxa_export.zip")

# Metadata that is added to every object
global_metadata = {
    "acq_instrument": "Planktoscope",  # note: shadowed by the duplicate "acq_instrument" key below
    "process_datetime": datetime.datetime.now(),
    "sample_project": "PlanktonScope Villefranche",
    "sample_ship": "Kayak de Fabien",
    "sample_operator": "Thibaut Pollina",
    "sample_id": "Flowcam_PlanktonScope_comparison",
    "sample_sampling_gear": "net",
    "sample_time": 150000,
    "sample_date": 16112020,
    "object_lat": 43.696146,
    "object_lon": 7.308359,
    "acq_fnumber_objective": 16,
    "acq_celltype": 200,
    "process_pixel": 1.19,
    "acq_camera": "Pi Camera V2.1",
    "acq_instrument": "PlanktonScope V2.1",
    "acq_software": "Node-RED Dashboard and raw python",
    "acq_instrument_ID": "copepode",
    "acq_volume": 24,
    "acq_flowrate": "Unknown",
    "acq_camera.resolution": "(3280, 2464)",
    "acq_camera.iso": 60,
    "acq_camera.shutter_speed": 100,
    "acq_camera.exposure_mode": 'off',
    "acq_camera.awb_mode": 'off',
    "acq_nb_frames": 1000,
}

# Define processing pipeline
with Pipeline() as p:
    # Recursively find .jpg files in import_path.
    # Sort to get consecutive frames.
    abs_path = Find(import_path, [".jpg"], sort=True, verbose=True)

    # Extract name from abs_path
    name = Call(lambda p: os.path.splitext(os.path.basename(p))[0], abs_path)

    # Read image
    img = ImageReader(abs_path)

    # Show progress bar for frames
    # TQDM(Format("Frame {name}", name=name))

    # Apply running median to approximate the background image
    flat_field = RunningMedian(img, 5)

    # Correct image
    img = img / flat_field

    # Rescale intensities and convert to uint8 to speed up calculations
    img = RescaleIntensity(img, in_range=(0, 1.1), dtype="uint8")

    FilterVariables(name, img)

    # Write corrected frames
    frame_fn = Format(os.path.join(CLEAN, "{name}.jpg"), name=name)
    ImageWriter(frame_fn, img)

    # Convert image to uint8 gray
    img_gray = RGB2Gray(img)
    img_gray = Call(img_as_ubyte, img_gray)

    # Canny edge detection
    img_canny = Call(cv2.Canny, img_gray, 50, 100)

    # Dilate
    kernel = Call(cv2.getStructuringElement, cv2.MORPH_ELLIPSE, (15, 15))
    img_dilate = Call(cv2.dilate, img_canny, kernel, iterations=2)

    # Close
    kernel = Call(cv2.getStructuringElement, cv2.MORPH_ELLIPSE, (5, 5))
    img_close = Call(cv2.morphologyEx, img_dilate, cv2.MORPH_CLOSE, kernel, iterations=1)

    # Erode
    kernel = Call(cv2.getStructuringElement, cv2.MORPH_ELLIPSE, (15, 15))
    mask = Call(cv2.erode, img_close, kernel, iterations=2)

    # Write annotated masks
    frame_fn = Format(os.path.join(ANNOTATED, "{name}.jpg"), name=name)
    ImageWriter(frame_fn, mask)

    # Find objects
    regionprops = FindRegions(
        mask, img_gray, min_area=1000, padding=10, warn_empty=name
    )

    # For an object, extract a vignette/ROI from the image
    roi_orig = ExtractROI(img, regionprops, bg_color=255)
    # For an object, extract a vignette/ROI from the mask
    roi_mask = ExtractROI(mask, regionprops, bg_color=255)

    # Generate an object identifier
    i = Enumerate()
    # Call(print, i)
    object_id = Format("{name}_{i:d}", name=name, i=i)
    # Call(print, object_id)

    object_fn = Format(os.path.join(OBJECTS, "{name}.jpg"), name=object_id)
    ImageWriter(object_fn, roi_orig)

    # Calculate features. The calculated features are added to the global_metadata.
    # Returns a Variable representing a dict for every object in the stream.
    meta = CalculateZooProcessFeatures(
        regionprops, prefix="object_", meta=global_metadata
    )

    # Add object_id to the metadata dictionary
    meta["object_id"] = object_id

    # Generate object filenames
    orig_fn = Format("{object_id}.jpg", object_id=object_id)

    # Write objects to an EcoTaxa archive:
    # roi image in original color, plus the metadata associated with each object
    EcotaxaWriter(archive_fn, (orig_fn, roi_orig), meta)

    # Progress bar for objects
    TQDM(Format("Object {object_id}", object_id=object_id))

BEGIN = datetime.datetime.now()

# Execute pipeline
p.run()

END = datetime.datetime.now()
print("MORPHOCUT :" + str(END - BEGIN))


@@ -15,12 +15,12 @@ server {
 }
 location @autoindex {
-    xslt_stylesheet /home/pi/PlanktonScope/scripts/gallery/nginx_template.xslt path='$uri';
+    xslt_stylesheet /home/pi/PlanktoScope/scripts/gallery/nginx_template.xslt path='$uri';
 }
 # assets, media
 location ~* \.(?:css(\.map)?|js(\.map)?|svg|ttf|eot|woff|woff2)$ {
-    root /home/pi/PlanktonScope/scripts/gallery/;
+    root /home/pi/PlanktoScope/scripts/gallery/;
     expires 30d;
     access_log off;
 }


@@ -41,22 +41,22 @@ logger.info("Starting the PlanktoScope python script!")
 # Library for exchaning messages with Node-RED
 import planktoscope.mqtt

-# Import the planktonscope stepper module
+# Import the planktoscope stepper module
 import planktoscope.stepper

-# Import the planktonscope imager module
+# Import the planktoscope imager module
 import planktoscope.imager

-# Import the planktonscope segmenter module
+# Import the planktoscope segmenter module
 import planktoscope.segmenter

-# Import the planktonscope LED module
+# Import the planktoscope LED module
 import planktoscope.light

-# Import the planktonscope uuidName module
+# Import the planktoscope uuidName module
 import planktoscope.uuidName

-# Import the planktonscope display module for the OLED screen
+# Import the planktoscope display module for the OLED screen
 import planktoscope.display

 # global variable that keeps the wheels spinning
@@ -93,13 +93,13 @@ if __name__ == "__main__":
         sys.exit(1)

     # Let's make sure the used base path exists
-    img_path = "/home/pi/PlanktonScope/img"
+    img_path = "/home/pi/PlanktoScope/img"
     # check if this path exists
     if not os.path.exists(img_path):
         # create the path!
         os.makedirs(img_path)

-    export_path = "/home/pi/PlanktonScope/export"
+    export_path = "/home/pi/PlanktoScope/export"
     # check if this path exists
     if not os.path.exists(export_path):
         # create the path!


@@ -66,9 +66,9 @@ class ImagerProcess(multiprocessing.Process):
         logger.info("planktoscope.imager is initialising")
-        if os.path.exists("/home/pi/PlanktonScope/hardware.json"):
+        if os.path.exists("/home/pi/PlanktoScope/hardware.json"):
             # load hardware.json
-            with open("/home/pi/PlanktonScope/hardware.json", "r") as config_file:
+            with open("/home/pi/PlanktoScope/hardware.json", "r") as config_file:
                 configuration = json.load(config_file)
                 logger.debug(f"Hardware configuration loaded is {configuration}")
         else:
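The guarded load of `hardware.json` in this hunk is a pattern that recurs across the modules: check for the file, parse it if present, otherwise fall back to defaults. A minimal standalone sketch, where the function name and default values are hypothetical rather than taken from the PlanktoScope code:

```python
import json
import os

def load_hardware_config(path="/home/pi/PlanktoScope/hardware.json", default=None):
    """Return the parsed hardware configuration, or `default` when the file is absent."""
    if os.path.exists(path):
        with open(path, "r") as config_file:
            return json.load(config_file)
    return {} if default is None else default

# A missing file falls back to the supplied default instead of raising
config = load_hardware_config(path="does_not_exist.json", default={"stepper_type": "adafruit"})
print(config)  # {'stepper_type': 'adafruit'}
```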


@@ -21,8 +21,8 @@ import subprocess  # nosec
 ################################################################################
 class raspimjpeg(object):
     def __init__(self, *args, **kwargs):
-        self.__configfile = "/home/pi/PlanktonScope/scripts/raspimjpeg/raspimjpeg.conf"
-        self.__binary = "/home/pi/PlanktonScope/scripts/raspimjpeg/bin/raspimjpeg"
+        self.__configfile = "/home/pi/PlanktoScope/scripts/raspimjpeg/raspimjpeg.conf"
+        self.__binary = "/home/pi/PlanktoScope/scripts/raspimjpeg/bin/raspimjpeg"
         self.__statusfile = "/dev/shm/mjpeg/status_mjpeg.txt"  # nosec
         self.__pipe = "/dev/shm/mjpeg/FIFO"  # nosec
         self.__sensor_name = ""


@@ -975,9 +975,4 @@ class SegmenterProcess(multiprocessing.Process):
 # This is called if this script is launched directly
 if __name__ == "__main__":
     # TODO This should be a test suite for this library
-    segmenter_thread = SegmenterProcess(
-        None, "/home/rbazile/Documents/pro/PlanktonPlanet/Planktonscope/Segmenter/data/"
-    )
-    segmenter_thread.segment_path(
-        "/home/rbazile/Documents/pro/PlanktonPlanet/Planktonscope/Segmenter/data/test"
-    )
+    pass


@@ -18,7 +18,7 @@ Example of metadata file received
     "sample_sampling_gear": "net_hsn",
     "sample_concentrated_sample_volume": 100,
     "acq_id": "Tara atlantique sud 2021_hsn_2021_01_22_1",
-    "acq_instrument": "PlanktonScope v2.2",
+    "acq_instrument": "PlanktoScope v2.2",
     "acq_instrument_id": "Babane Batoukoa",
     "acq_celltype": 300,
     "acq_minimum_mesh": 20,


@@ -165,10 +165,10 @@ class stepper:
 class StepperProcess(multiprocessing.Process):
     focus_steps_per_mm = 40
-    # 507 steps per ml for Planktonscope standard
+    # 507 steps per ml for PlanktoScope standard
     # 5200 for custom NEMA14 pump with 0.8mm ID Tube
     pump_steps_per_ml = 507
-    # focus max speed is in mm/sec and is limited by the maximum number of pulses per second the Planktonscope can send
+    # focus max speed is in mm/sec and is limited by the maximum number of pulses per second the PlanktoScope can send
     focus_max_speed = 0.5
     # pump max speed is in ml/min
     pump_max_speed = 30
@@ -182,9 +182,9 @@ class StepperProcess(multiprocessing.Process):
         self.stop_event = event
-        if os.path.exists("/home/pi/PlanktonScope/hardware.json"):
+        if os.path.exists("/home/pi/PlanktoScope/hardware.json"):
             # load hardware.json
-            with open("/home/pi/PlanktonScope/hardware.json", "r") as config_file:
+            with open("/home/pi/PlanktoScope/hardware.json", "r") as config_file:
                 configuration = json.load(config_file)
                 logger.debug(f"Hardware configuration loaded is {configuration}")
         else:
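As a quick sanity check on the constants in this hunk (illustrative arithmetic, not part of the driver): at full pump speed the controller must emit 507 × 30 / 60 = 253.5 pulses per second, and at full focus speed 40 × 0.5 = 20, which is what the pulse-rate limit mentioned in the comments constrains.

```python
# Constants quoted from StepperProcess (PlanktoScope standard hardware)
pump_steps_per_ml = 507    # steps per ml
pump_max_speed = 30        # ml/min
focus_steps_per_mm = 40    # steps per mm
focus_max_speed = 0.5      # mm/sec

# Peak step (pulse) rates implied at maximum speed
pump_pulses_per_sec = pump_steps_per_ml * pump_max_speed / 60
focus_pulses_per_sec = focus_steps_per_mm * focus_max_speed

print(pump_pulses_per_sec)   # 253.5
print(focus_pulses_per_sec)  # 20.0
```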


@@ -118,7 +118,7 @@ motion_file 0
 # macros_path can be used to store macros executed by sy command
 # boxing_path if set is where h264 files will be temporarily stored when boxing used
 # image, video and lapse may be configured relative to media_path if first / left out
-base_path /home/pi/PlanktonScope/scripts/raspimjpeg/
+base_path /home/pi/PlanktoScope/scripts/raspimjpeg/
 preview_path /dev/shm/mjpeg/cam.jpg
 image_path /dev/shm/mjpeg/image.jpg
 lapse_path /dev/shm/mjpeg/tl_%i_%t_%Y%M%D_%h%m%s.jpg
@@ -126,7 +126,7 @@ video_path /dev/shm/mjpeg/vi_%v_%Y%M%D_%h%m%s.mp4
 status_file /dev/shm/mjpeg/status_mjpeg.txt
 control_file /dev/shm/mjpeg/FIFO
 media_path /home/pi/data/
-macros_path /home/pi/PlanktonScope/scripts/raspimjpeg/macros
+macros_path /home/pi/PlanktoScope/scripts/raspimjpeg/macros
 user_annotate /dev/shm/mjpeg/user_annotate.txt
 boxing_path
 subdir_char @
@@ -168,9 +168,9 @@ callback_timeout 30
 user_config
 #logfile for raspimjpeg, default to merge with scheduler log
-log_file /home/pi/PlanktonScope/scripts/raspimjpeg/raspimjpeg.log
+log_file /home/pi/PlanktoScope/scripts/raspimjpeg/raspimjpeg.log
 log_size 10000
-motion_logfile /home/pi/PlanktonScope/scripts/raspimjpeg/motion.log
+motion_logfile /home/pi/PlanktoScope/scripts/raspimjpeg/motion.log
 #enforce_lf set to 1 to only process FIFO commands when terminated with LF
 enforce_lf 1