Merge pull request #2 from PlanktonPlanet/master

Get new doc from OceanTrotter + GPS setting
tpollina 2020-08-04 12:37:27 +02:00 committed by GitHub
commit 4485e89035
3 changed files with 525 additions and 1145 deletions

README.md

@@ -1,247 +1,595 @@
# PlanktoScop Main Repository
The PlanktoScop is an open and affordable modular imaging platform for citizen oceanography.
This GitHub is part of a community that you can find on [its website](https://www.planktonscope.org/).
# Fast Setup
Before going further, note that you can download a disk image that is already set up, so you do not have to deal with all these command lines.
Download it here: http://planktonscope.su.domains/Images_raspberry/Raspbian_Buster_Morphocut_WiFi.img
# Expert Setup
After getting your kit and finding the necessary components, but before assembling your kit, you should take the time to do a mockup build and set up your Raspberry Pi.
## Install and setup Raspbian on your Raspberry Pi
### Computer setup
In order to make it easy to connect to the PlanktoScop, you may want to install [avahi](https://en.wikipedia.org/wiki/Avahi_%28software%29) or the [Bonjour](https://en.wikipedia.org/wiki/Bonjour_%28software%29) service on any computer you will use to access the PlanktoScop interface. This will allow you to connect to the PlanktoScop using an address such as http://planktoscope.local instead of an IP address.
### Download the image
The latest Raspbian version can always be downloaded from [the Raspberry Pi Downloads page](https://www.raspberrypi.org/downloads/raspbian/).
For a first build, it's recommended to download an image of Raspbian Buster with desktop.
#### Writing an image to the SD card
Download the latest version of [balenaEtcher](https://www.balena.io/etcher/) and install it.
Connect an SD card reader with the micro SD card inside.
Open balenaEtcher and select from your hard drive the image zip file you just downloaded.
Select the SD card you want to write your image to.
Review your selections and click `Flash!` to begin writing data to the SD card.
#### Prepare your Raspberry Pi
[Getting Started with your Raspberry Pi](https://projects.raspberrypi.org/en/projects/raspberry-pi-getting-started/)
Plug the SD Card in your Raspberry Pi and connect your Pi to a screen, mouse and a keyboard. Check the connection twice before plugging the power.
The first boot to the desktop may take up to 120 seconds. This is normal and is caused by the image expanding the filesystem to the whole SD card. DO NOT REBOOT before you reach the desktop.
#### Finish the setup
Make sure you have access to internet and update/upgrade your fresh Raspbian install.
Update your Pi first. Open up a terminal, and do the following:
```sh
sudo apt update -y
sudo apt upgrade -y
sudo apt install git
```
You can now reboot your Pi safely.
```sh
sudo reboot now
```
## Raspberry Pi configuration
### Clone this repository!
First of all, and to ensure you have the latest documentation available locally, you should clone this repository using git.
Simply run the following in a terminal:
```sh
git clone https://github.com/PlanktonPlanet/PlanktonScope/
```
### Enable Camera/SSH/I2C in raspi-config
You can now launch the configuration tool:
```sh
sudo raspi-config
```
While you're here, a wise thing to do would be to change the default password for the `pi` user. This is strongly recommended if your PlanktoScop is connected to a shared network you do not control. Just select the first option `1 Change User Password`.
You may also want to change the default hostname of your Raspberry. To do so, choose option `2 Network Options` then `N1 Hostname`. Choose a new hostname. We recommend using `planktoscope`.
We need to activate a few things for the PlanktoScop to work properly.
First, we need to activate the camera interface. Choose `5 Interfacing Options`, then `P1 Camera` and `Yes`.
Now, you can go to `5 Interfacing Options`, then `P2 SSH`. Choose `Yes` to activate the SSH access.
Again, select `5 Interfacing Options`, then `P4 SPI`. Choose `Yes` to enable the SPI interface.
One more, select `5 Interfacing Options`, then `P5 I2C`. Choose `Yes` to enable the ARM I2C interface of the Raspberry.
Finally, select `5 Interfacing Options`, then `P6 Serial`.
This time, choose `No` to deactivate the login shell on the serial connection, but then choose `Yes` to keep the Serial port hardware enabled.
These steps can also be done from the Raspberry Pi Configuration GUI tool that you can find in `Main Menu > Preferences`. Go to the `Interfaces` tab. Note that here the Serial Port must be enabled, while the Serial Port Console must be disabled.
Reboot your Pi safely.
```sh
sudo reboot now
```
## Install the needed libraries for the PlanktoScop
Most of the following happens in a command line environment. If you are using the desktop, please start a terminal emulator.
You can also connect to your PlanktoScop over SSH with `ssh pi@planktoscope.local`.
You can then run the following to make sure your Raspberry has the necessary components to install and build everything it needs and to create the necessary folders:
```sh
sudo apt install build-essential python3 python3-pip
mkdir test libraries
```
### Install CircuitPython
Start by following [Adafruit's guide](https://learn.adafruit.com/circuitpython-on-raspberrypi-linux/installing-circuitpython-on-raspberry-pi). You can start at the chapter `Install Python Libraries`.
For the record, the commands are as follows. However, Adafruit's page might have been updated since, so please make sure this is still what is needed:
```sh
sudo pip3 install RPI.GPIO
sudo pip3 install adafruit-blinka
sudo pip3 install adafruit-circuitpython-motorkit
```
It is recommended to test this setup by creating this small script under the name `test/blinkatest.py` and running it (you can use the editor nano if you are using the terminal).
```python
#!/usr/bin/python3
import board
import digitalio
import busio
print("Hello blinka!")
# Try to create a Digital input
pin = digitalio.DigitalInOut(board.D4)
print("Digital IO ok!")
# Try to create an I2C device
i2c = busio.I2C(board.SCL, board.SDA)
print("I2C ok!")
# Try to create an SPI device
spi = busio.SPI(board.SCLK, board.MOSI, board.MISO)
print("SPI ok!")
print("done!")
```
To run the script, just run the following:
```sh
chmod +x test/blinkatest.py
./test/blinkatest.py
```
The output should be similar to this:
```
pi@planktoscope:~ $ ./test/blinkatest.py
Hello blinka!
Digital IO ok!
I2C ok!
SPI ok!
done!
```
Also, to make sure the wiring is good, we are going to use `sudo i2cdetect -y 1` to see if our devices are detected:
```
pi@planktoscope:~ $ sudo i2cdetect -y 1
     0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f
00:          -- -- -- -- -- -- -- -- -- -- 0d -- --
10: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
20: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
30: -- -- -- -- -- -- -- -- -- -- -- -- 3c -- -- --
40: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
50: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
60: 60 -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
70: 70 -- -- -- -- -- -- --
```
The device appearing at addresses 60 and 70 is our motor controller. Address `0d` is the fan controller and `3c` is the OLED screen (we'll set up both a bit further down). Your version of the RGB Cooling Hat may not have the screen; that's fine, as the screen is not necessary for proper operation of the PlanktoScope.
In case the motor controller does not appear, shut down your PlanktoScope and check the wiring. If your board is using a connector instead of a soldered pin connection (as happens with the Adafruit Bonnet Motor Controller), sometimes the pins on the male side need to be bent a little to make good contact. In any case, do not hesitate to ask for help in Slack.
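For scripted checks, the address grid printed by `i2cdetect` can be parsed into a plain list of detected addresses. This is only an illustrative sketch; the `parse_i2cdetect` helper is not part of the PlanktoScop software:

```python
def parse_i2cdetect(grid):
    """Turn the text grid printed by `i2cdetect -y 1` into a sorted
    list of detected I2C addresses."""
    found = []
    for line in grid.splitlines():
        row, sep, cells = line.partition(":")
        if not sep or len(row.strip()) != 2:
            continue  # skip the column header or a shell prompt line
        try:
            base = int(row, 16)
        except ValueError:
            continue
        first = max(base, 0x03)  # the default scan starts at address 0x03
        for i, cell in enumerate(cells.split()):
            if cell != "--":
                found.append(first + i)
    return sorted(found)
```

With the output shown above, this returns `[0x0d, 0x3c, 0x60, 0x70]`: the fan controller, the screen and the two motor controller addresses.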
### Install RPi Cam Web Interface
You can find more information about the RPi Cam Web Interface on [eLinux' website](https://elinux.org/RPi-Cam-Web-Interface).
To set it up, clone the code from GitHub, then enable and run the install script with the following commands:
```sh
cd ~/libraries
git clone https://github.com/silvanmelchior/RPi_Cam_Web_Interface.git
cd RPi_Cam_Web_Interface
./install.sh
```
Press Enter to accept the default installation settings. Once everything is installed, press Enter to start the RPi Cam Web Interface.
To test the interface locally, try accessing this url from the browser in the Raspberry: http://localhost/html
You can also try to access this page from another computer connected to the same network.
If your computer has `avahi` or the `Bonjour` service installed and running, you can directly use this url: http://raspberrypi.local/html/.
If this is not the case, you first need to find the IP address of your Raspberry Pi by running the following:
```sh
sudo ip addr show | grep 'inet 1'
```
The web page can then be accessed at `http://[IP_ADDRESS]/html/`.
If the interface is loading and a picture is displayed, you can stop this interface for now by simply running `./stop.sh`.
### Install Ultimate GPS HAT
You can start by testing that the GPS module is working. Either install your PlanktoScop with a view of the sky, or connect the external antenna.
Now you need to run the following:
```sh
sudo apt install gpsd gpsd-clients
stty -F /dev/serial0 raw 9600 cs8 clocal -cstopb
cat /dev/serial0
```
If the GPS works, you should now see NMEA sentences scrolling:
```
$GPGGA,000908.799,,,,,0,00,,,M,,M,,*7E
$GPGSA,A,1,,,,,,,,,,,,,,,*1E
$GPGSV,1,1,00*79
$GPRMC,000908.799,V,,,,,0.00,0.00,060180,,,N*44
$GPVTG,0.00,T,,M,0.00,N,0.00,K,N*32
$GPGGA,000909.799,,,,,0,00,,,M,,M,,*7F
$GPGSA,A,1,,,,,,,,,,,,,,,*1E
$GPRMC,000909.799,V,,,,,0.00,0.00,060180,,,N*45
$GPVTG,0.00,T,,M,0.00,N,0.00,K,N*32
$GPGGA,000910.799,,,,,0,00,,,M,,M,,*77
$GPGSA,A,1,,,,,,,,,,,,,,,*1E
$GPRMC,000910.799,V,,,,,0.00,0.00,060180,,,N*4D
$GPVTG,0.00,T,,M,0.00,N,0.00,K,N*32
```
Until you get a GPS fix, most of the sentences are empty (see the lines starting with GPGSA that contain lots of commas).
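Each NMEA sentence ends with a checksum: the two hex digits after the `*` are the XOR of every character between `$` and `*`. If you want to check sentences programmatically, a minimal validator could look like this (a sketch, not part of the PlanktoScop code):

```python
def nmea_checksum_ok(sentence):
    """Return True if the NMEA sentence checksum is valid: the XOR of all
    characters between '$' and '*' must equal the hex value after '*'."""
    body, _, checksum = sentence.strip().lstrip("$").partition("*")
    acc = 0
    for char in body:
        acc ^= ord(char)  # XOR every character of the sentence body
    return acc == int(checksum, 16)
```

For example, `nmea_checksum_ok("$GPGSV,1,1,00*79")` returns `True`.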
We are going to use gpsd to parse the GPS data. We need to set it up by editing `/etc/default/gpsd`. This file is sourced just before gpsd starts and lets you configure how it runs.
```sh
sudo nano /etc/default/gpsd
```
Change the `USBAUTO` line to read `false`:
```sh
USBAUTO="false"
```
Also change the `DEVICES` line to add the device we are going to use, `/dev/serial0`:
```sh
DEVICES="/dev/serial0"
```
Finally, we want to add the parameter `-n` to `GPSD_OPTIONS`:
```sh
GPSD_OPTIONS="-n"
```
Save your work, and restart gpsd by running the following:
```sh
sudo systemctl restart gpsd.service
```
If you wait a bit, you can run `gpsmon` to check that your configuration is correct. You should get an output similar to this:
```
pi@planktoscope:~ $ gpsmon
/dev/serial0 NMEA0183>
┌──────────────────────────────────────────────────────────────────────────────┐
│Time: 2020-07-21T11:09:26.000Z Lat: 45 33' 28.08539" N Lon: 1 03' 44.02019" W│
└───────────────────────────────── Cooked TPV ─────────────────────────────────┘
┌──────────────────────────────────────────────────────────────────────────────┐
│ GPGGA GPGSA GPRMC GPZDA GPGSV │
└───────────────────────────────── Sentences ──────────────────────────────────┘
┌──────────────────┐┌────────────────────────────┐┌────────────────────────────┐
│Ch PRN Az El S/N ││Time: 110926.000 ││Time: 110927.000 │
│ 0 27 351 78 49 ││Latitude: 4533.4809 N ││Latitude: 4533.4809 │
│ 1 21 51 69 47 ││Longitude: 00103.7367 W ││Longitude: 00103.7367 │
│ 2 16 184 61 43 ││Speed: 0.00 ││Altitude: -0.1 │
│ 3 10 116 51 50 ││Course: 201.75 ││Quality: 2 Sats: 11 │
│ 4 8 299 47 49 ││Status: A FAA: D ││HDOP: 0.87 │
│ 5 20 66 42 46 ││MagVar: ││Geoid: 49.3 │
│ 6 123 138 28 43 │└─────────── RMC ────────────┘└─────────── GGA ────────────┘
│ 7 26 165 25 30 │┌────────────────────────────┐┌────────────────────────────┐
│ 8 11 264 23 48 ││Mode: A3 ...s: 27 21 16 10 ││UTC: RMS: │
│ 9 7 303 15 38 ││DOP: H=0.87 V=1.13 P=1.42 ││MAJ: MIN: │
│10 18 56 14 44 ││TOFF: 0.530187817 ││ORI: LAT: │
│11 30 330 5 35 ││PPS: ││LON: ALT: │
└────── GSV ───────┘└──────── GSA + PPS ─────────┘└─────────── GST ────────────┘
(42) $GPGSV,4,4,14,15,03,035,36,01,02,238,*72
(72) $GPRMC,110922.000,A,4533.4809,N,00103.7366,W,0.01,322.19,210720,,,D*7E
(35) $GPZDA,110922.000,21,07,2020,,*5B
(81) $GPGGA,110923.000,4533.4809,N,00103.7367,W,2,11,0.87,-0.1,M,49.3,M,0000,0000*5B
(64) $GPGSA,A,3,16,27,30,10,18,21,20,08,11,07,26,,1.43,0.87,1.13*0B
(72) $GPRMC,110923.000,A,4533.4809,N,00103.7367,W,0.01,188.90,210720,,,D*7D
(35) $GPZDA,110923.000,21,07,2020,,*5A
(81) $GPGGA,110924.000,4533.4809,N,00103.7367,W,2,11,0.87,-0.1,M,49.3,M,0000,0000*5C
(64) $GPGSA,A,3,16,27,30,10,18,21,20,08,11,07,26,,1.43,0.87,1.13*0B
(72) $GPRMC,110924.000,A,4533.4809,N,00103.7367,W,0.01,156.23,210720,,,D*71
```
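Note that NMEA reports positions as `ddmm.mmmm` (degrees and decimal minutes), so `4533.4809,N` above means 45° 33.4809' North. A small, hypothetical conversion helper to decimal degrees:

```python
def nmea_to_decimal(value, hemisphere):
    """Convert an NMEA latitude/longitude field (ddmm.mmmm or dddmm.mmmm)
    to signed decimal degrees."""
    raw = float(value)
    degrees = int(raw // 100)          # the leading digits are whole degrees
    minutes = raw - degrees * 100      # the rest is decimal minutes
    decimal = degrees + minutes / 60
    if hemisphere in ("S", "W"):       # south and west are negative
        decimal = -decimal
    return decimal
```

For instance, `nmea_to_decimal("4533.4809", "N")` gives roughly `45.558`, matching the gpsmon display above.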
#### Bonus Configuration: Automatic time update from GPSD
The Adafruit GPS HAT allows your PlanktoScop to automatically set its clock from the received GPS time. Moreover, since the PPS (Pulse Per Second) output is activated, you can even set up your PlanktoScope to act as a stratum 1 timeserver.
We are first going to make sure that your PlanktoScope receives proper PPS signal. We need to add the following line at the end of `/boot/config.txt`:
```
sudo nano /boot/config.txt
# Add the following line at the end of the file
dtoverlay=pps-gpio,gpiopin=4
```
We also need to activate the pps module of the kernel, by editing `/etc/modules`:
```
sudo nano /etc/modules
# Add the following line at the end of the file
pps-gpio
```
Now install `pps-tools` so we can check that this is properly running.
```sh
sudo apt install pps-tools
```
Finally, in the `/etc/default/gpsd` file, we need to add our pps device to the line `DEVICES`. Append `/dev/pps0`:
```sh
DEVICES="/dev/serial0 /dev/pps0"
```
Reboot your PlanktoScope now and check that the PPS signal is properly parsed by the PlanktoScope. You can do this by running `sudo ppstest /dev/pps0`:
```
pi@planktoscope:~ $ sudo ppstest /dev/pps0
trying PPS source "/dev/pps0"
found PPS source "/dev/pps0"
ok, found 1 source(s), now start fetching data...
source 0 - assert 1595329939.946478786, sequence: 4125 - clear 0.000000000, sequence: 0
source 0 - assert 1595329940.946459463, sequence: 4126 - clear 0.000000000, sequence: 0
```
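Each `assert` field above carries the timestamp of one pulse, so consecutive lines should be almost exactly one second apart. As an illustration (assuming the output format shown above), the intervals can be extracted like this:

```python
def pps_intervals(ppstest_output):
    """Extract the 'assert' timestamps from ppstest output and return the
    intervals between consecutive pulses (each should be close to 1 s)."""
    times = []
    for line in ppstest_output.splitlines():
        if "assert" in line:
            # the timestamp sits between 'assert' and the first comma
            stamp = line.split("assert", 1)[1].split(",")[0].strip()
            times.append(float(stamp))
    return [later - earlier for earlier, later in zip(times, times[1:])]
```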
`gpsmon` should also show a PPS signal in the `GSA + PPS` box.
We now need to install the software that will act as a timeserver, both locally and globally. Its name is Chrony, a more modern replacement for `ntp` that uses the same underlying protocol. Let's go ahead and install it:
```sh
sudo apt install chrony
```
We need to edit the configuration of chrony, to activate both the GPS time synchronization and to allow clients to request time updates directly from our microscope.
Edit the file `/etc/chrony/chrony.conf` and replace its content with the following:
```
server 0.pool.ntp.org maxpoll 5
server 1.pool.ntp.org maxpoll 5
server 2.pool.ntp.org maxpoll 5
server 3.pool.ntp.org maxpoll 5
driftfile /var/lib/chrony/drift
allow
makestep 1 5
refclock SHM 2 pps refid NMEA
#refclock PPS /dev/pps0 precision 1e-7 noselect refid GPPS
```
Before restarting `chrony`, we need to make sure the timesync service from systemd is deactivated:
```sh
sudo systemctl stop systemd-timesyncd.service
sudo systemctl disable systemd-timesyncd.service
```
Final step, let's start `chrony` with its new configuration and restart `gpsd`:
```sh
sudo systemctl restart chrony
sudo systemctl restart gpsd
```
To check that everything is working as intended, wait a few minutes, and then type `chronyc sources -v`. This command will show the time sources `chrony` is using, and right at the top there should be our NMEA source. Make sure its line starts with `#*`, which means this source is selected:
```
pi@planktoscope:~ $ chronyc sources -v
210 Number of sources = 5
.-- Source mode '^' = server, '=' = peer, '#' = local clock.
/ .- Source state '*' = current synced, '+' = combined , '-' = not combined,
| / '?' = unreachable, 'x' = time may be in error, '~' = time too variable.
|| .- xxxx [ yyyy ] +/- zzzz
|| Reachability register (octal) -. | xxxx = adjusted offset,
|| Log2(Polling interval) --. | | yyyy = measured offset,
|| \ | | zzzz = estimated error.
|| | | \
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
#* NMEA 0 4 377 13 -434ns[ -582ns] +/- 444ns
^- mail.raveland.org 3 7 377 215 -18ms[ -18ms] +/- 53ms
^- nio.nucli.net 2 6 377 19 -7340us[-7340us] +/- 63ms
^- ntp4.kashra-server.com 2 8 377 146 -11ms[ -11ms] +/- 50ms
^- pob01.aplu.fr 2 8 377 83 -15ms[ -15ms] +/- 66ms
```
The other servers are here just as fallback measures, in case the GPS is not working for an unknown reason.
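If you later want to verify this from a monitoring script rather than by eye, the `chronyc sources` output can be checked for the `#*` marker on the NMEA line. This is just an illustrative sketch:

```python
def nmea_selected(chronyc_output):
    """Return True if the NMEA reference clock line in the `chronyc sources`
    listing starts with '#*', i.e. chrony selected the GPS as its source."""
    for line in chronyc_output.splitlines():
        if "NMEA" in line:
            return line.startswith("#*")
    return False  # no NMEA reference clock found at all
```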
This part is now complete! Every time you start your PlanktoScope, it will set its own time after a few minutes (once a GPS signal is acquired).
The final step has to be done on the other equipment on the network where you want to use this time source. You will need to add the line `server planktoscope.local` to your ntp configuration file, either at `/etc/ntp.conf` or at `/etc/chrony/chrony.conf`, and then restart your ntp service.
You can find more information about this hardware module in Adafruit's documentation at [Installing Adafruit GPS HAT](https://learn.adafruit.com/adafruit-ultimate-gps-hat-for-raspberry-pi/overview) or on this page about [using a Python thread with the GPS HAT](http://www.danmandle.com/blog/getting-gpsd-to-work-with-python/).
### Install RGB Cooling HAT
To setup the RGB Cooling HAT, you just need to clone and build the WiringPi library:
```sh
cd ~/libraries
git clone https://github.com/WiringPi/WiringPi.git
cd WiringPi
sudo ./build
gpio -v
```
The last command should output something similar to the following:
```
gpio version: 2.60
Copyright (c) 2012-2018 Gordon Henderson
This is free software with ABSOLUTELY NO WARRANTY.
For details type: gpio -warranty
Raspberry Pi Details:
Type: Pi 4B, Revision: 01, Memory: 4096MB, Maker: Sony
* Device tree is enabled.
*--> Raspberry Pi 4 Model B Rev 1.1
* This Raspberry Pi supports user-level GPIO access.
```
You will also need to install some python modules:
```sh
sudo apt install python3-smbus i2c-tools
sudo pip3 install Adafruit-SSD1306
```
More information can be found on Yahboom website, on the page [Installing RGB Cooling HAT](https://www.yahboom.net/study/RGB_Cooling_HAT).
### Install Node-RED
#### Download and installation
To install Node.js, npm and Node-RED onto a Raspberry Pi, you just need to run the following command. You can review the content of this script [here](https://raw.githubusercontent.com/node-red/linux-installers/master/deb/update-nodejs-and-nodered).
```sh
bash <(curl -sL https://raw.githubusercontent.com/node-red/linux-installers/master/deb/update-nodejs-and-nodered)
```
Type `y` at both prompts to accept the installation and its settings.
#### Enable start on boot and launch Node-RED
To run Node-RED when the Pi is turned on or restarted, you need to enable the systemd service by running this command:
```sh
sudo systemctl enable nodered.service
```
You can now start Node-RED by running the following:
```sh
sudo systemctl start nodered.service
```
#### Check the installation
Make sure Node-RED is correctly installed by opening http://localhost:1880 in the browser of your Pi, or http://planktoscope.local:1880 from another computer on the same network.
#### Install the necessary nodes
These nodes will be used by the PlanktoScop software and need to be installed:
```sh
cd ~/.node-red/
npm install node-red-dashboard node-red-contrib-python3-function node-red-contrib-camerapi node-red-contrib-gpsd node-red-contrib-web-worldmap node-red-contrib-interval
sudo systemctl restart nodered.service
```
#### Import the last GUI
From the Node-RED GUI in your browser, choose the hamburger menu at the top right, then Import. You can paste the code directly from the latest version of the GUI available [here](https://raw.githubusercontent.com/PlanktonPlanet/PlanktonScope/master/flows/main.json).
You can also download it directly:
```sh
wget -N -O ~/.node-red/flows_planktoscope.json https://raw.githubusercontent.com/PlanktonPlanet/PlanktonScope/master/flows/main.json
sudo systemctl restart nodered.service
```
#### More information
[Installing Node-RED on Raspberry Pi](https://nodered.org/docs/getting-started/raspberrypi)
### Install Mosquitto MQTT
In order to send and receive data from Node-RED, you need to install the Mosquitto MQTT broker. Run the following:
```sh
sudo apt install mosquitto mosquitto-clients
```
### Install mqtt-paho
In order to send and receive data from Python, you need this library. Run the following:
```sh
sudo pip3 install paho-mqtt
```
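As an example of what travels over MQTT here, the Python control script in this repository listens on `actuator/pump` for payloads of the form `DIRECTION delay nb_steps` (for instance `FORWARD 0.01 100`). A small parser sketch for that format:

```python
def parse_pump_command(payload):
    """Parse a pump actuator payload of the form 'FORWARD 0.01 100':
    direction, delay between steps in seconds, and number of steps."""
    direction, delay, nb_step = payload.split(" ")
    if direction not in ("FORWARD", "BACKWARD"):
        raise ValueError("unknown direction: " + direction)
    return direction, float(delay), int(nb_step)
```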
### Install OpenCV
We need to install a recent version of OpenCV which, unfortunately, is not available in the repositories. We are going to install it directly using pip.
First, we need to install the required dependencies, then we will install opencv itself:
```sh
sudo apt install libgtk-3-0 libavformat58 libtiff5 libcairo2 libqt4-test libpango-1.0-0 libopenexr23 libavcodec58 libilmbase23 libatk1.0-0 libpangocairo-1.0-0 libwebp6 libqtgui4 libavutil56 libjasper1 libqtcore4 libcairo-gobject2 libswscale5 libgdk-pixbuf2.0-0 libhdf5-dev libilmbase-dev libopenexr-dev libgstreamer1.0-dev libavcodec-dev libavformat-dev libswscale-dev libwebp-dev libatlas-base-dev
sudo pip3 install "picamera[array]"
sudo pip3 install opencv-contrib-python==4.1.0.25
```
You can now check that opencv is properly installed by running a python interpreter and importing the cv2 module.
```sh
pi@planktoscope:~ $ python3
Python 3.7.3 (default, Dec 20 2019, 18:57:59)
[GCC 8.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import cv2
>>> cv2.__version__
'4.1.0'
>>> quit()
```
If all goes well, the displayed version number should be `4.1.0`.
More detailed information can be found on this [website](https://www.pyimagesearch.com/2019/09/16/install-opencv-4-on-raspberry-pi-4-and-raspbian-buster/).
### Install MorphoCut
MorphoCut is packaged on PyPI, but here we install the latest version directly from GitHub with pip:
```sh
sudo apt install python3-scipy
sudo pip3 install -U git+https://github.com/morphocut/morphocut.git
```
To test the installation, once again open up a python interpreter and import the morphocut module:
```sh
pi@planktoscope:~ $ python3
Python 3.7.3 (default, Dec 20 2019, 18:57:59)
[GCC 8.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import morphocut
>>> morphocut.__version__
'0.1.1+42.g01a051e'
>>> quit()
```
The MorphoCut documentation can be found [on this page](https://morphocut.readthedocs.io/en/stable/index.html).
## Finishing the install
Make sure to update your Pi:
```sh
sudo apt update -y
sudo apt full-upgrade -y
```
Reboot your Pi safely.
```sh
sudo reboot now
```
## Useful later maybe
### Update the cloned repository
Updates are published on GitHub regularly. Make sure to update once in a while by running these commands:
```sh
cd ~/PlanktonScope
git pull
```
This will pull and merge all the changes made since your last update.
### Update node-RED interface
To update the interface and make sure you run the latest version, you need to copy the json config file from the cloned repository to the Node-RED library:
```sh
cp ~/PlanktonScope/flows/main.json ~/.node-red/flows_planktoscope.json
```
### Share WiFi via Ethernet
Follow the guide at this link: https://www.instructables.com/id/Share-WiFi-With-Ethernet-Port-on-a-Raspberry-Pi/


@@ -1,439 +0,0 @@
import paho.mqtt.client as mqtt
from picamera import PiCamera
from datetime import datetime, timedelta
from adafruit_motor import stepper
from adafruit_motorkit import MotorKit
from time import sleep
import json
import os
import subprocess
from skimage.util import img_as_ubyte
from morphocut import Call
from morphocut.contrib.ecotaxa import EcotaxaWriter
from morphocut.contrib.zooprocess import CalculateZooProcessFeatures
from morphocut.core import Pipeline
from morphocut.file import Find
from morphocut.image import (
ExtractROI,
FindRegions,
ImageReader,
ImageWriter,
RescaleIntensity,
RGB2Gray,
)
from morphocut.stat import RunningMedian
from morphocut.str import Format
from morphocut.stream import TQDM, Enumerate, FilterVariables
from skimage.feature import canny
from skimage.color import rgb2gray, label2rgb
from skimage.morphology import disk
from skimage.morphology import erosion, dilation, closing
from skimage.measure import label, regionprops
import cv2, shutil
import smbus
#fan
bus = smbus.SMBus(1)
################################################################################
kit = MotorKit()
pump_stepper = kit.stepper1
pump_stepper.release()
focus_stepper = kit.stepper2
focus_stepper.release()
################################################################################
camera = PiCamera()
camera.resolution = (3280, 2464)
camera.iso = 60
camera.shutter_speed = 500
camera.exposure_mode = 'fixedfps'
################################################################################
message = ''
topic = ''
count=''
################################################################################
def on_connect(client, userdata, flags, rc):
print("Connected! - " + str(rc))
client.subscribe("actuator/#")
rgb(0,255,0)
def on_subscribe(client, obj, mid, granted_qos):
print("Subscribed! - "+str(mid)+" "+str(granted_qos))
def on_message(client, userdata, msg):
print(msg.topic+" "+str(msg.qos)+" "+str(msg.payload))
global message
global topic
global count
message=str(msg.payload.decode())
topic=msg.topic.split("/")[1]
count=0
def on_log(client, obj, level, string):
print(string)
def rgb(R,G,B):
bus.write_byte_data(0x0d, 0x00, 0)
bus.write_byte_data(0x0d, 0x01, R)
bus.write_byte_data(0x0d, 0x02, G)
bus.write_byte_data(0x0d, 0x03, B)
bus.write_byte_data(0x0d, 0x00, 1)
bus.write_byte_data(0x0d, 0x01, R)
bus.write_byte_data(0x0d, 0x02, G)
bus.write_byte_data(0x0d, 0x03, B)
bus.write_byte_data(0x0d, 0x00, 2)
bus.write_byte_data(0x0d, 0x01, R)
bus.write_byte_data(0x0d, 0x02, G)
bus.write_byte_data(0x0d, 0x03, B)
cmd="i2cdetect -y 1"
subprocess.Popen(cmd.split(),stdout=subprocess.PIPE)
################################################################################
client = mqtt.Client()
client.connect("127.0.0.1",1883,60)
client.on_connect = on_connect
client.on_subscribe = on_subscribe
client.on_message = on_message
client.on_log = on_log
client.loop_start()
################################################################################
while True:
################################################################################
if (topic=="pump"):
rgb(0,0,255)
direction=message.split(" ")[0]
delay=float(message.split(" ")[1])
nb_step=int(message.split(" ")[2])
client.publish("receiver/pump", "Start");
while True:
if direction == "BACKWARD":
direction=stepper.BACKWARD
if direction == "FORWARD":
direction=stepper.FORWARD
count+=1
# print(count,nb_step)
pump_stepper.onestep(direction=direction, style=stepper.DOUBLE)
sleep(delay)
if topic!="pump":
pump_stepper.release()
                print("The pump has been interrupted.")
                client.publish("receiver/pump", "Interrupted");
rgb(0,255,0)
break
if count>nb_step:
pump_stepper.release()
print("The pumping is done.")
topic="wait"
client.publish("receiver/pump", "Done");
rgb(0,255,0)
break
################################################################################
elif (topic=="focus"):
rgb(255,255,0)
direction=message.split(" ")[0]
nb_step=int(message.split(" ")[1])
client.publish("receiver/focus", "Start");
while True:
if direction == "FORWARD":
direction=stepper.FORWARD
if direction == "BACKWARD":
direction=stepper.BACKWARD
count+=1
# print(count,nb_step)
focus_stepper.onestep(direction=direction, style=stepper.MICROSTEP)
if topic!="focus":
focus_stepper.release()
                print("The stage has been interrupted.")
                client.publish("receiver/focus", "Interrupted");
rgb(0,255,0)
break
if count>nb_step:
focus_stepper.release()
print("The focusing is done.")
topic="wait"
client.publish("receiver/focus", "Done");
rgb(0,255,0)
break
################################################################################
elif (topic=="image"):
camera.start_preview(fullscreen=False, window = (160, 0, 640, 480))
sleep_before=int(message.split(" ")[0])
nb_step=int(message.split(" ")[1])
path=str(message.split(" ")[2])
nb_frame=int(message.split(" ")[3])
sleep_during=int(message.split(" ")[4])
        #sleep for a duration before starting
        sleep(sleep_before)
        client.publish("receiver/image", "Start");
        #flush the fluidic path before beginning
rgb(0,0,255)
for i in range(nb_step):
pump_stepper.onestep(direction=stepper.FORWARD, style=stepper.DOUBLE)
sleep(0.01)
rgb(0,255,0)
directory = os.path.join(path, "PlanktonScope")
os.makedirs(directory, exist_ok=True)
export = os.path.join(directory, "export")
os.makedirs(export, exist_ok=True)
date=datetime.now().strftime("%m_%d_%Y")
time=datetime.now().strftime("%H_%M")
path_date = os.path.join(directory, date)
os.makedirs(path_date, exist_ok=True)
path_time = os.path.join(path_date,time)
os.makedirs(path_time, exist_ok=True)
while True:
count+=1
# print(count,nb_frame)
filename = os.path.join(path_time,datetime.now().strftime("%M_%S_%f")+".jpg")
rgb(0,255,255)
camera.capture(filename)
rgb(0,255,0)
client.publish("receiver/image", datetime.now().strftime("%M_%S_%f")+".jpg has been imaged.");
rgb(0,0,255)
for i in range(10):
pump_stepper.onestep(direction=stepper.FORWARD, style=stepper.DOUBLE)
sleep(0.01)
sleep(0.5)
rgb(0,255,0)
if(count>nb_frame):
camera.stop_preview()
client.publish("receiver/image", "Completed");
                # Metadata added to every object
local_metadata = {
"process_datetime": datetime.now(),
"acq_camera_resolution" : camera.resolution,
"acq_camera_iso" : camera.iso,
"acq_camera_shutter_speed" : camera.shutter_speed
}
global_metadata = None
config_txt = None
RAW = None
CLEAN = None
ANNOTATED = None
OBJECTS = None
archive_fn = None
config_txt = open('/home/pi/PlanktonScope/config.txt','r')
node_red_metadata = json.loads(config_txt.read())
global_metadata = {**local_metadata, **node_red_metadata}
RAW = os.path.join(path_time, "RAW")
os.makedirs(RAW, exist_ok=True)
os.system("mv "+str(path_time)+"/*.jpg "+str(RAW))
CLEAN = os.path.join(path_time, "CLEAN")
os.makedirs(CLEAN, exist_ok=True)
ANNOTATED = os.path.join(path_time, "ANNOTATED")
os.makedirs(ANNOTATED, exist_ok=True)
OBJECTS = os.path.join(path_time, "OBJECTS")
os.makedirs(OBJECTS, exist_ok=True)
archive_fn = os.path.join(directory,"export", str(date)+"_"+str(time)+"_ecotaxa_export.zip")
client.publish("receiver/segmentation", "Start");
                # Define processing pipeline
with Pipeline() as p:
# Recursively find .jpg files in import_path.
                    # Sort to get consecutive frames.
abs_path = Find(RAW, [".jpg"], sort=True, verbose=True)
FilterVariables(abs_path)
# Extract name from abs_path
name = Call(lambda p: os.path.splitext(os.path.basename(p))[0], abs_path)
Call(rgb, 0,255,0)
# Read image
img = ImageReader(abs_path)
# Show progress bar for frames
#TQDM(Format("Frame {name}", name=name))
# Apply running median to approximate the background image
flat_field = RunningMedian(img, 5)
# Correct image
img = img / flat_field
FilterVariables(name,img)
# Rescale intensities and convert to uint8 to speed up calculations
img = RescaleIntensity(img, in_range=(0, 1.1), dtype="uint8")
frame_fn = Format(os.path.join(CLEAN, "{name}.jpg"), name=name)
ImageWriter(frame_fn, img)
FilterVariables(name,img)
# Convert image to uint8 gray
img_gray = RGB2Gray(img)
                    # Ensure the gray image is uint8
img_gray = Call(img_as_ubyte, img_gray)
#Canny edge detection
img_canny = Call(cv2.Canny, img_gray, 50,100)
#Dilate
kernel = Call(cv2.getStructuringElement, cv2.MORPH_ELLIPSE, (15, 15))
img_dilate = Call(cv2.dilate, img_canny, kernel, iterations=2)
#Close
kernel = Call(cv2.getStructuringElement, cv2.MORPH_ELLIPSE, (5, 5))
img_close = Call(cv2.morphologyEx, img_dilate, cv2.MORPH_CLOSE, kernel, iterations=1)
#Erode
kernel = Call(cv2.getStructuringElement, cv2.MORPH_ELLIPSE, (15, 15))
mask = Call(cv2.erode, img_close, kernel, iterations=2)
FilterVariables(name,img,img_gray,mask)
# Find objects
regionprops = FindRegions(
mask, img_gray, min_area=1000, padding=10, warn_empty=name
)
Call(rgb, 255,0,255)
# For an object, extract a vignette/ROI from the image
roi_orig = ExtractROI(img, regionprops, bg_color=255)
# Generate an object identifier
i = Enumerate()
#Call(print,i)
object_id = Format("{name}_{i:d}", name=name, i=i)
#Call(print,object_id)
object_fn = Format(os.path.join(OBJECTS, "{name}.jpg"), name=object_id)
ImageWriter(object_fn, roi_orig)
# Calculate features. The calculated features are added to the global_metadata.
# Returns a Variable representing a dict for every object in the stream.
meta = CalculateZooProcessFeatures(
regionprops, prefix="object_", meta=global_metadata
)
json_meta = Call(json.dumps,meta, sort_keys=True, default=str)
Call(client.publish, "receiver/segmentation/metric", json_meta)
# Add object_id to the metadata dictionary
meta["object_id"] = object_id
# Generate object filenames
orig_fn = Format("{object_id}.jpg", object_id=object_id)
FilterVariables(orig_fn,roi_orig,meta,object_id)
# Write objects to an EcoTaxa archive:
# roi image in original color, roi image in grayscale, metadata associated with each object
EcotaxaWriter(archive_fn, (orig_fn, roi_orig), meta)
# Progress bar for objects
TQDM(Format("Object {object_id}", object_id=object_id))
Call(client.publish, "receiver/segmentation/object_id", object_id)
meta=None
FilterVariables(meta)
p.run()
#remove directory
#shutil.rmtree(import_path)
client.publish("receiver/segmentation", "Completed");
rgb(255,255,255)
sleep(sleep_during)
rgb(0,255,0)
date=datetime.now().strftime("%m_%d_%Y")
time=datetime.now().strftime("%H_%M")
path_date = os.path.join(directory, date)
os.makedirs(path_date, exist_ok=True)
path_time = os.path.join(path_date,time)
os.makedirs(path_time, exist_ok=True)
rgb(0,0,255)
for i in range(nb_step):
pump_stepper.onestep(direction=stepper.FORWARD, style=stepper.DOUBLE)
sleep(0.01)
rgb(0,255,0)
os.makedirs(path_time, exist_ok=True)
count=0
if topic!="image":
pump_stepper.release()
                print("The imaging has been interrupted.")
                client.publish("receiver/image", "Interrupted");
rgb(0,255,0)
count=0
break
else:
# print("Waiting")
sleep(1)
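The pump and focus handlers above parse space-separated payloads sent by Node-RED, e.g. `FORWARD 0.01 100` on `actuator/pump`. A minimal sketch of helpers that build and parse that payload, mirroring the `message.split(" ")` logic in the main loop; the helper names are ours, not part of the script:

```python
def make_pump_command(direction, delay, nb_step):
    """Build the space-separated payload expected on actuator/pump."""
    assert direction in ("FORWARD", "BACKWARD")
    return "{} {} {}".format(direction, delay, nb_step)

def parse_pump_command(payload):
    """Parse a pump payload the same way the main loop does."""
    fields = payload.split(" ")
    return fields[0], float(fields[1]), int(fields[2])

cmd = make_pump_command("FORWARD", 0.01, 100)
assert cmd == "FORWARD 0.01 100"
assert parse_pump_command(cmd) == ("FORWARD", 0.01, 100)
```

Publishing such a payload to `actuator/pump` (for instance with `mosquitto_pub -t actuator/pump -m "FORWARD 0.01 100"`) drives the pump loop above.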
@@ -1,529 +0,0 @@
################################################################################
#Actuator Libraries
################################################################################
#Library for exchanging messages with Node-RED
import paho.mqtt.client as mqtt
#Library to control the PiCamera
from picamera import PiCamera
#Libraries to control the steppers for focusing and pumping
from adafruit_motor import stepper
from adafruit_motorkit import MotorKit
#Library to send command over I2C for the light module on the fan
import smbus
################################################################################
#Practical Libraries
################################################################################
#Library to get date and time for folder name and filename
from datetime import datetime, timedelta
#Library to be able to sleep for a duration
from time import sleep
#Libraries to manipulate the json format and execute bash commands
import json, shutil, os, subprocess
################################################################################
#Morphocut Libraries
################################################################################
from skimage.util import img_as_ubyte
from morphocut import Call
from morphocut.contrib.ecotaxa import EcotaxaWriter
from morphocut.contrib.zooprocess import CalculateZooProcessFeatures
from morphocut.core import Pipeline
from morphocut.file import Find
from morphocut.image import (ExtractROI,
FindRegions,
ImageReader,
ImageWriter,
RescaleIntensity,
RGB2Gray
)
from morphocut.stat import RunningMedian
from morphocut.str import Format
from morphocut.stream import TQDM, Enumerate, FilterVariables
################################################################################
#Other image processing Libraries
################################################################################
from skimage.feature import canny
from skimage.color import rgb2gray, label2rgb
from skimage.morphology import disk
from skimage.morphology import erosion, dilation, closing
from skimage.measure import label, regionprops
#pip3 install opencv-python
import cv2
################################################################################
#STREAMING
################################################################################
import io
import picamera
import logging
import socketserver
from threading import Condition
from http import server
import threading
PAGE="""\
<html>
<head>
<title>picamera MJPEG streaming demo</title>
</head>
<body>
<img src="stream.mjpg" width="640" height="480" />
</body>
</html>
"""
class StreamingOutput(object):
def __init__(self):
self.frame = None
self.buffer = io.BytesIO()
self.condition = Condition()
def write(self, buf):
if buf.startswith(b'\xff\xd8'):
# New frame, copy the existing buffer's content and notify all
# clients it's available
self.buffer.truncate()
with self.condition:
self.frame = self.buffer.getvalue()
self.condition.notify_all()
self.buffer.seek(0)
return self.buffer.write(buf)
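`StreamingOutput.write()` splits the incoming MJPEG byte stream into frames on the JPEG start-of-image marker `b'\xff\xd8'`: when a chunk begins with the marker, the previous buffer contents are snapshotted as a complete frame. The same logic as a standalone, testable sketch (the `split_mjpeg` helper is ours):

```python
import io

SOI = b'\xff\xd8'  # JPEG start-of-image marker

def split_mjpeg(chunks):
    """Yield complete frames from a stream of byte chunks, the way
    StreamingOutput.write() snapshots its buffer on each new SOI."""
    buf = io.BytesIO()
    for chunk in chunks:
        if chunk.startswith(SOI) and buf.tell():
            # New frame begins: emit the buffered frame and reset
            yield buf.getvalue()
            buf.seek(0)
            buf.truncate()
        buf.write(chunk)
    if buf.tell():
        yield buf.getvalue()

frames = list(split_mjpeg([SOI + b'aa', SOI + b'bb']))
assert frames == [SOI + b'aa', SOI + b'bb']
```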
class StreamingHandler(server.BaseHTTPRequestHandler):
def do_GET(self):
if self.path == '/':
self.send_response(301)
self.send_header('Location', '/index.html')
self.end_headers()
elif self.path == '/index.html':
content = PAGE.encode('utf-8')
self.send_response(200)
self.send_header('Content-Type', 'text/html')
self.send_header('Content-Length', len(content))
self.end_headers()
self.wfile.write(content)
elif self.path == '/stream.mjpg':
self.send_response(200)
self.send_header('Age', 0)
self.send_header('Cache-Control', 'no-cache, private')
self.send_header('Pragma', 'no-cache')
self.send_header('Content-Type', 'multipart/x-mixed-replace; boundary=FRAME')
self.end_headers()
try:
while True:
with output.condition:
output.condition.wait()
frame = output.frame
self.wfile.write(b'--FRAME\r\n')
self.send_header('Content-Type', 'image/jpeg')
self.send_header('Content-Length', len(frame))
self.end_headers()
self.wfile.write(frame)
self.wfile.write(b'\r\n')
except Exception as e:
logging.warning(
'Removed streaming client %s: %s',
self.client_address, str(e))
else:
self.send_error(404)
self.end_headers()
class StreamingServer(socketserver.ThreadingMixIn, server.HTTPServer):
allow_reuse_address = True
daemon_threads = True
################################################################################
#MQTT core functions
################################################################################
#Run this function when the client connects to the broker (Node-RED)
def on_connect(client, userdata, flags, rc):
#Print when connected
print("Connected! - " + str(rc))
#When connected, run subscribe()
client.subscribe("actuator/#")
#Turn green the light module
rgb(0,255,0)
#Run this function in order to subscribe to all the topics beginning with actuator
def on_subscribe(client, obj, mid, granted_qos):
#Print when subscribed
print("Subscribed! - "+str(mid)+" "+str(granted_qos))
#Run this function when Node-RED sends a message on a subscribed topic
def on_message(client, userdata, msg):
#Print the topic and the message
print(msg.topic+" "+str(msg.qos)+" "+str(msg.payload))
#Update the global variables command, args and counter
global command
global args
global counter
#Parse the topic to find the command. ex : actuator/pump -> pump
command=msg.topic.split("/")[1]
#Decode the message to find the arguments
args=str(msg.payload.decode())
#Reset the counter to 0
counter=0
################################################################################
#Actuators core functions
################################################################################
def rgb(R,G,B):
#Update LED n°1
bus.write_byte_data(0x0d, 0x00, 0)
bus.write_byte_data(0x0d, 0x01, R)
bus.write_byte_data(0x0d, 0x02, G)
bus.write_byte_data(0x0d, 0x03, B)
#Update LED n°2
bus.write_byte_data(0x0d, 0x00, 1)
bus.write_byte_data(0x0d, 0x01, R)
bus.write_byte_data(0x0d, 0x02, G)
bus.write_byte_data(0x0d, 0x03, B)
#Update LED n°3
bus.write_byte_data(0x0d, 0x00, 2)
bus.write_byte_data(0x0d, 0x01, R)
bus.write_byte_data(0x0d, 0x02, G)
bus.write_byte_data(0x0d, 0x03, B)
#Poke the I2C bus so the LEDs actually apply their new values
cmd="i2cdetect -y 1"
subprocess.Popen(cmd.split(),stdout=subprocess.PIPE)
################################################################################
#Init function - executed only once
################################################################################
#define the bus used to actuate the light module on the fan
bus = smbus.SMBus(1)
#define the names for the 2 existing steppers
kit = MotorKit()
pump_stepper = kit.stepper1
pump_stepper.release()
focus_stepper = kit.stepper2
focus_stepper.release()
#Specify the PiCamera settings
camera = PiCamera()
camera.resolution = (3280, 2464)
camera.iso = 60
camera.shutter_speed = 500
camera.exposure_mode = 'fixedfps'
#Declare the global variables command, args and counter
command = ''
args = ''
counter=''
client = mqtt.Client()
client.connect("127.0.0.1",1883,60)
client.on_connect = on_connect
client.on_subscribe = on_subscribe
client.on_message = on_message
client.loop_start()
################################################################################
local_metadata = {
"process_datetime": datetime.now(),
"acq_camera_resolution" : camera.resolution,
"acq_camera_iso" : camera.iso,
"acq_camera_shutter_speed" : camera.shutter_speed
}
config_txt = open('/home/pi/PlanktonScope/config.txt','r')
node_red_metadata = json.loads(config_txt.read())
global_metadata = {**local_metadata, **node_red_metadata}
archive_fn = os.path.join("/home/pi/PlanktonScope/","export", "ecotaxa_export.zip")
# Define processing pipeline
with Pipeline() as p:
# Recursively find .jpg files in import_path.
    # Sort to get consecutive frames.
abs_path = Find("/home/pi/PlanktonScope/tmp", [".jpg"], sort=True, verbose=True)
# Extract name from abs_path
name = Call(lambda p: os.path.splitext(os.path.basename(p))[0], abs_path)
Call(rgb, 0,255,0)
# Read image
img = ImageReader(abs_path)
# Show progress bar for frames
TQDM(Format("Frame {name}", name=name))
# Apply running median to approximate the background image
flat_field = RunningMedian(img, 5)
# Correct image
img = img / flat_field
# Rescale intensities and convert to uint8 to speed up calculations
img = RescaleIntensity(img, in_range=(0, 1.1), dtype="uint8")
FilterVariables(name,img)
# frame_fn = Format(os.path.join("/home/pi/PlanktonScope/tmp","CLEAN", "{name}.jpg"), name=name)
# ImageWriter(frame_fn, img)
# Convert image to uint8 gray
img_gray = RGB2Gray(img)
    # Ensure the gray image is uint8
img_gray = Call(img_as_ubyte, img_gray)
#Canny edge detection
img_canny = Call(cv2.Canny, img_gray, 50,100)
#Dilate
kernel = Call(cv2.getStructuringElement, cv2.MORPH_ELLIPSE, (15, 15))
img_dilate = Call(cv2.dilate, img_canny, kernel, iterations=2)
#Close
kernel = Call(cv2.getStructuringElement, cv2.MORPH_ELLIPSE, (5, 5))
img_close = Call(cv2.morphologyEx, img_dilate, cv2.MORPH_CLOSE, kernel, iterations=1)
#Erode
kernel = Call(cv2.getStructuringElement, cv2.MORPH_ELLIPSE, (15, 15))
mask = Call(cv2.erode, img_close, kernel, iterations=2)
# Find objects
regionprops = FindRegions(
mask, img_gray, min_area=1000, padding=10, warn_empty=name
)
Call(rgb, 255,0,255)
# For an object, extract a vignette/ROI from the image
roi_orig = ExtractROI(img, regionprops, bg_color=255)
# Generate an object identifier
i = Enumerate()
#Call(print,i)
object_id = Format("{name}_{i:d}", name=name, i=i)
#Call(print,object_id)
object_fn = Format(os.path.join("/home/pi/PlanktonScope/","OBJECTS", "{name}.jpg"), name=object_id)
ImageWriter(object_fn, roi_orig)
# Calculate features. The calculated features are added to the global_metadata.
# Returns a Variable representing a dict for every object in the stream.
meta = CalculateZooProcessFeatures(
regionprops, prefix="object_", meta=global_metadata
)
json_meta = Call(json.dumps,meta, sort_keys=True, default=str)
Call(client.publish, "receiver/segmentation/metric", json_meta)
# Add object_id to the metadata dictionary
meta["object_id"] = object_id
# Generate object filenames
orig_fn = Format("{object_id}.jpg", object_id=object_id)
# Write objects to an EcoTaxa archive:
# roi image in original color, roi image in grayscale, metadata associated with each object
EcotaxaWriter(archive_fn, (orig_fn, roi_orig), meta)
# Progress bar for objects
TQDM(Format("Object {object_id}", object_id=object_id))
Call(client.publish, "receiver/segmentation/object_id", object_id)
output = StreamingOutput()
address = ('', 8000)
server = StreamingServer(address, StreamingHandler)
threading.Thread(target=server.serve_forever).start()
################################################################################
camera.start_recording(output, format='mjpeg', resize=(640, 480))
while True:
if (command=="pump"):
rgb(0,0,255)
direction=args.split(" ")[0]
delay=float(args.split(" ")[1])
nb_step=int(args.split(" ")[2])
client.publish("receiver/pump", "Start");
while True:
if direction == "BACKWARD":
direction=stepper.BACKWARD
if direction == "FORWARD":
direction=stepper.FORWARD
pump_stepper.onestep(direction=direction, style=stepper.DOUBLE)
counter+=1
sleep(delay)
if command!="pump":
pump_stepper.release()
                print("The pump has been interrupted.")
                client.publish("receiver/pump", "Interrupted");
rgb(0,255,0)
break
if counter>nb_step:
pump_stepper.release()
print("The pumping is done.")
command="wait"
client.publish("receiver/pump", "Done");
rgb(0,255,0)
break
################################################################################
elif (command=="focus"):
rgb(255,255,0)
direction=args.split(" ")[0]
nb_step=int(args.split(" ")[1])
client.publish("receiver/focus", "Start");
while True:
if direction == "FORWARD":
direction=stepper.FORWARD
if direction == "BACKWARD":
direction=stepper.BACKWARD
counter+=1
focus_stepper.onestep(direction=direction, style=stepper.MICROSTEP)
if command!="focus":
focus_stepper.release()
                print("The stage has been interrupted.")
                client.publish("receiver/focus", "Interrupted");
rgb(0,255,0)
break
if counter>nb_step:
focus_stepper.release()
print("The focusing is done.")
command="wait"
client.publish("receiver/focus", "Done");
rgb(0,255,0)
break
################################################################################
elif (command=="image"):
sleep_before=int(args.split(" ")[0])
nb_step=int(args.split(" ")[1])
path=str(args.split(" ")[2])
nb_frame=int(args.split(" ")[3])
sleep_during=int(args.split(" ")[4])
        #sleep for a duration before starting
        sleep(sleep_before)
        client.publish("receiver/image", "Start");
        #flush the fluidic path before beginning
rgb(0,0,255)
for i in range(nb_step):
if (command=="image"):
pump_stepper.onestep(direction=stepper.FORWARD, style=stepper.DOUBLE)
sleep(0.01)
else:
break
rgb(0,255,0)
while True:
counter+=1
print(datetime.now().strftime("%H_%M_%S_%f"))
filename = os.path.join("/home/pi/PlanktonScope/tmp",datetime.now().strftime("%M_%S_%f")+".jpg")
rgb(0,255,255)
camera.capture(filename)
rgb(0,255,0)
client.publish("receiver/image", datetime.now().strftime("%M_%S_%f")+".jpg has been imaged.");
rgb(0,0,255)
for i in range(10):
pump_stepper.onestep(direction=stepper.FORWARD, style=stepper.DOUBLE)
sleep(0.01)
sleep(0.5)
rgb(0,255,0)
if(counter>nb_frame):
# camera.stop_preview()
client.publish("receiver/image", "Completed");
                # Metadata added to every object
client.publish("receiver/segmentation", "Start");
# Define processing pipeline
p.run()
#remove directory
#shutil.rmtree(import_path)
client.publish("receiver/segmentation", "Completed");
cmd = os.popen("rm -rf /home/pi/PlanktonScope/tmp/*.jpg")
rgb(255,255,255)
sleep(sleep_during)
rgb(0,255,0)
rgb(0,0,255)
for i in range(nb_step):
pump_stepper.onestep(direction=stepper.FORWARD, style=stepper.DOUBLE)
sleep(0.01)
rgb(0,255,0)
counter=0
if command!="image":
pump_stepper.release()
                print("The imaging has been interrupted.")
                client.publish("receiver/image", "Interrupted");
rgb(0,255,0)
counter=0
break
else:
# print("Waiting")
sleep(1)
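For reference, `rgb()` drives the fan's LED controller at I2C address `0x0d` by selecting each LED through register `0x00` and then writing R, G, B to registers `0x01` to `0x03`. A pure sketch of that write sequence, decoupled from `smbus` so it can be checked off-device (the `led_writes` helper is ours):

```python
LED_I2C_ADDR = 0x0d  # address used by bus.write_byte_data() in rgb()

def led_writes(r, g, b, n_leds=3):
    """Return the (register, value) pairs rgb() issues, in order:
    select LED n via reg 0x00, then write R, G, B to regs 0x01-0x03."""
    writes = []
    for led in range(n_leds):
        writes += [(0x00, led), (0x01, r), (0x02, g), (0x03, b)]
    return writes

assert len(led_writes(0, 255, 0)) == 12  # 4 register writes per LED, 3 LEDs
```

On the device, each pair maps to `bus.write_byte_data(LED_I2C_ADDR, reg, value)`, as in the `rgb()` function above.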