Commit graph

12 commits

Author SHA1 Message Date
Romain Bazile 7ef2cd7eab Various comment updates and small sourcery suggestions 2021-12-01 13:33:40 +01:00
Romain Bazile 0bdb95b607 light: fix typo
imager: format metadata.json
2020-12-17 17:33:42 +01:00
Romain Bazile d9550c777b light: the return of i2cdetect to sync messages 2020-12-16 13:58:10 +01:00
Romain Bazile ede52d59db double all i2c calls and change nginx config in update script 2020-12-05 11:39:49 +01:00
Romain Bazile 51ff3b7375 update the light library
also replaces smbus by smbus2 to allow for cleaner setups
2020-11-30 00:17:30 +01:00
Romain Bazile 156286a8da add sample_gear_net_opening and sample_concentrated_sample_volume
improve #44
2020-11-29 23:48:54 +01:00
Romain Bazile a62891eb6a light: catch exceptions 2020-11-27 11:30:29 +01:00
Romain Bazile 1cf3d1df12 Fix bugs all around 2020-11-25 17:01:14 +01:00
Romain Bazile 05dc317095 imager: add support for white balance 2020-11-24 17:31:11 +01:00
Romain Bazile b66776a5fb sourcery.ai cleanup 2020-11-14 09:45:14 +01:00
Romain Bazile e89cbdc2aa python: reminder to add tests 2020-10-07 12:22:52 +02:00
Romain Bazile 06ef401a3b
Extraction and refactor of the python code from node-red flow
The rationale for this rewrite is to improve the readability, modularity, reliability, and future-proofing of the main Python script.

All in all, 124 commits are being squashed and merged together here, spanning more than two weeks of development and testing.

Please test this release thoroughly and try to break things. An upgrade guide will be published in the coming days, along with a new image for people who would rather not upgrade on their own.

Read on if you want to know about all the goodies!

To start with, the Python script was extracted from the main flow and now lives in its own files at `PlanktonScope/scripts/*`.

We set up automatic code formatting with [Black](https://github.com/psf/black). This makes the code clearer and uniform. We use the default settings, so if you install Black and set your editor to format on save with it, you should be good to go.

The code is split into four main processes, each with a specific set of responsibilities (a rough supervision sketch follows the list):
- The main process controls all the others, starts everything up, and cleans up on shutdown.
- The stepper process manages the stepper movements. Both motors can now move simultaneously (this closes #38).
- The imager process controls the camera and the streaming server via a state machine.
- The segmenter process manages the segmentation and its outputs. The segmentation runs recursively in all folders under `/home/pi/PlanktonScope/img/`. Each folder gets its own output archive, and bug #26 is now closed.
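
Here is the rough supervision sketch mentioned above: a minimal, illustrative example of one worker process plus the shutdown handling, not the actual classes from `PlanktonScope/scripts/*` (names and details are made up for the example).

```python
# Minimal sketch of the process supervision described above
# (illustrative names only; the real classes live in PlanktonScope/scripts/*).
import multiprocessing
import signal
import time

class StepperProcess(multiprocessing.Process):
    """Stand-in for one of the worker processes (stepper, imager, segmenter)."""

    def __init__(self, stop_event):
        super().__init__(name="stepper")
        self.stop_event = stop_event

    def run(self):
        while not self.stop_event.is_set():
            # poll MQTT commands and drive the motors here
            time.sleep(0.1)

if __name__ == "__main__":
    stop_event = multiprocessing.Event()
    # the main process catches the shutdown signal and asks every worker to stop
    signal.signal(signal.SIGTERM, lambda *_: stop_event.set())
    workers = [StepperProcess(stop_event)]  # plus the imager and segmenter
    for worker in workers:
        worker.start()
    for worker in workers:
        worker.join()
```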


These processes communicate with each other using MQTT and JSON messages. Each message is addressed to one topic; the high-level topic determines which process receives the message. The details of each topic are at the end of this commit message.
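
To make the routing concrete, here is a minimal subscriber sketch using paho-mqtt. It only illustrates the pattern, it is not the project's `MQTT_Client` class, and it assumes a broker listening on localhost.

```python
# Rough sketch of the MQTT/JSON routing described above, using paho-mqtt 1.x
# (not the project's MQTT_Client class).
import json
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    payload = json.loads(msg.payload.decode("utf-8"))
    process = msg.topic.split("/")[0]  # the high-level topic picks the process
    print(f"routing {payload} to the {process} process")

client = mqtt.Client(client_id="routing-sketch")  # paho-mqtt 2.x also requires a callback API version argument
client.on_message = on_message
client.connect("127.0.0.1", 1883)
client.subscribe([("actuator/#", 0), ("imager/#", 0), ("segmenter/#", 0)])
client.loop_forever()
```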

Every imaging session now has its own folder under the `img` root. Metadata is saved individually for every session, in a JSON file in the same directory as the pictures.
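
For illustration, a session folder and its metadata file could be created like this (the path layout and metadata keys are examples, not the imager's exact scheme):

```python
# Illustrative layout only: one folder per imaging session under the img root,
# with its own metadata.json next to the pictures.
import datetime
import json
import pathlib

img_root = pathlib.Path("/home/pi/PlanktonScope/img")
session = img_root / datetime.datetime.now().strftime("%Y-%m-%d_%H-%M")
session.mkdir(parents=True, exist_ok=True)

metadata = {"sample_gear_net_opening": 40, "nb_frame": 200}  # example keys
with open(session / "metadata.json", "w") as fd:
    json.dump(metadata, fd, indent=2)
```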

The configuration is no longer parsed from `config.json`; instead, it is passed directly through MQTT messages to the process concerned.

A new configuration file has been created: `hardware.json`. It contains information related to your specific hardware configuration. For example, you can reverse the motor connections, and you can also define specific speed limits and step counts for your pump and focus stage. This will make it easier for people who want to experiment with different kinds of hardware. The file is optional: if it doesn't exist, the default configuration is applied.
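
A minimal sketch of that fallback behaviour, with example keys and default values rather than the authoritative `hardware.json` schema:

```python
# Sketch of loading hardware.json with a fallback to defaults
# (example keys and values, not the real schema).
import json
import os

DEFAULTS = {"stepper_reverse": False, "pump_max_speed": 50}

def load_hardware_config(path="/home/pi/PlanktonScope/hardware.json"):
    if not os.path.exists(path):
        return dict(DEFAULTS)  # no file: use the default configuration
    with open(path) as config_file:
        return {**DEFAULTS, **json.load(config_file)}
```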

The code is organized into 6 modules and about 10 classes. I encourage you to have a look at the files; they're pretty straightforward to understand.

There is a lot of work left around the Node-RED code refactoring, dashboard UI improvements, better and clearer LED messages, OLED screen integration, and finer control of the segmentation process, but this is quite good for now.


Here is the topic list for MQTT and the corresponding messages (a usage example follows the list).
- actuator :  This topic addresses the stepper control thread
              No publication under this topic should happen from the Python process
  - actuator/pump :   Control of the pump
                      The message is a json object
                      {"action":"move", "direction":"FORWARD", "volume":10, "flowrate":1}
                      to move 10mL forward at 1mL/min
                      action can be "move" or "stop"
                      Receive only
  - actuator/focus :  Control of the focus stage
                      The message is a json object, speed is optional
                      {"action":"move", "direction":"UP", "distance":0.26, "speed":1}
                      to move up 0.26 mm
                      action can be "move" or "stop"
                      Receive only
- imager/image :      This topic addresses the imaging thread
                      Is a json object with
                      {"action":"image","sleep":5,"volume":1,"nb_frame":200}
                      sleep in seconds, volume in mL
                      Can also receive a config update message:
                      {"action":"config","config":[...]}
                      Can also receive a camera settings message:
                      {"action":"settings","iso":100,"shutter_speed":40}
                      Receive only
- segmenter/segment : This topic addresses the segmenter process
                      Is a json object with
                      {"action":"segment"}
                      Receive only
- status :    This topic sends feedback to Node-RED
              Nothing is published or received directly at this level
  - status/pump :     State of the pump
                      Is a json object with
                      {"status":"Start", "time_left":25}
                      Status is one of Started, Ready, Done, Interrupted
                      Publish only
  - status/focus :    State of the focus stage
                      Is a json object with
                      {"status":"Start", "time_left":25}
                      Status is one of Started, Ready, Done, Interrupted
                      Publish only
  - status/imager :   State of the imager
                      Is a json object with
                      {"status":"Start", "time_left":25}
                      Status is one of Started, Ready, Completed, or a per-image message such as "12_11_15_0.1.jpg has been imaged".
                      Publish only
  - status/segmenter :   Status of the segmentation
      - status/segmenter/name
      - status/segmenter/object_id
      - status/segmenter/metric
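
As a usage example, a one-shot pump command could be published like this (assuming an MQTT broker on the Pi and paho-mqtt installed; this snippet is not part of the project code):

```python
# Hypothetical one-shot command: pump 10 mL forward at 1 mL/min,
# then watch status/pump for Done or Interrupted.
import json
import paho.mqtt.publish as publish

publish.single(
    "actuator/pump",
    json.dumps({"action": "move", "direction": "FORWARD", "volume": 10, "flowrate": 1}),
    hostname="127.0.0.1",
)
```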




Here is the original commit history:
* Extract python main.py from flow

* Fix bug in server addresses

These addresses should be the loopback device instead of the network address of the device. Using the loopback address means the script does not need to be updated when the network address changes.

* clean up picamera import

* changes to main python and flow:
update MQTT requests address to localhost (bugfix)
update streaming output address to nothing
update main flow to remove python script references and location

* Automatically initialise imaging led on startup to off state.

* Add the ability to invert outputs of the motor

We added a key to config.json, "hardware_config", with a subkey "stepper_reverse". If this key is present in the config file and set to 1, the outputs of the motors are inverted (stepper2 becomes the pump motor and stepper1 the focus motor).

* move all non main script to a subfolder

* add __init__.py to package

* light module rewrite

* json cleanup and absolute path for config file

* light.py forgot to import subprocess

#oups

* Add command to turn the leds off

* Auto formatting of main.py

I've used Black with default settings, see https://github.com/psf/black

* First commit of stepper.py
Pump parameters still needs to be checked and tuned.

* addition of hardware details in config.json

* Introduce hardware.json to replace the `hardware_config` of config.json

* stepper.py: calibration, typos

* creates the MQTT_Client class

* pump_max_speed is now in ml/min to help readability

* forgot to add self to the class def

* addition of threading capabilities to stepper.py (UNTESTED)

* mqtt: fix topic bug

* remove counter

* mqtt add doc about topics

* stepper.py creates an "actuator/*/state" topic

* stepper.py: rename mqtt_client to pump_client

* mqtt.py: add details about topics

* stepper.py: rename pump_client to actuator_client

* topic was not split properly and a part was lost

* switch to f-strings for mqtt.py

* cosmetic update

* stepper.py: folder name will be planktoscope change calls

* hardware.json became more straightforward

* stepper.py syntax bugs

* stepper.py addition of a received stop command

* stepper.py: update to max travel distance

* stepper.py: several typos here

* rename folder

* main.py: reword to reflect folder rename

* main.py: remove logic that has been moved to stepper.py and mqtt.py

* main.py: update to add mqtt imaging client

* mqtt.py: make command and args local to class and output more verbose

* make stepper.py a class

* main.py: instantiate stepper class and call it

* main.py: name mqtt client

* update to main.json to reflect main.py changes

* fix bugs around pump control

* update flows to latest version from Thibault

* distance can be a small value, and definitely should be a float.

* unify mqtt topics

* unify mqtt output in the main flow

* first logger implementation, uses loguru

* mqtt: add reason to on_connect

* mqtt: add on_disconnect handler

* stepper: add more logger calls for debug mainly

* main: add levels for logger

* imager.py: first move of the imager logic

* imager: time import cleanup

* imager: morphocut import cleanup

* imager: skimage import cleanup

* imager: finishing import cleanup

* imager: Class creation - WIP

Also provides a fix for #26 (see line 190).

* imager: threading is needed for Condition()

* streamer: get the streamer server its own file

* imager: creates start_camera and get the server creation out

* imager: subclass multiprocessing.Process

* imager: get Pipeline creation its own function

* imager: cleanup of self calls

* main: code removal and corresponding calls to newly created classes

* imager: various formatting changes

* main: management of signal shutdown

* add requirements.txt

* mqtt: messages are now json objects

Also, addition of a flag on receiving a new message

* mqtt: make message private and add logic to synchronise

* stepper: creates the stepper class

* stepper: use the new class

* stepper: uses the new logic

* stepper: add the shutdown event

* stepper: add shutdown method

* main: add shutdown event

* imager: graceful shutdown

* stepper: nicer way of checking the Event

* self is a required first argument for a method in a class

Especially if you use said class private members!

* python: various typos and small errors in import

* stepper: create mqtt client during init

* stepper: instantiate the mqtt client inside run

Otherwise it's not accessible from inside the loop. It's a PITA,
more information at  https://stackoverflow.com/questions/17172878/using-pythons-multiprocessing-process-class

* stepper: little bugs and typos all around

* mqtt: add shutdown method

* mqtt: add connect in init

* stepper: fix bugs, sanitize inputs

* stepper: work on delay prediction improvements

* stepper: json is mean, double quotes are mandatory inside

* mqtt: add details about message exchanged

* imager: first implementation of json messages

* main.json: add new tab for RPi management + json for payloads

* imager: add state_machine class

* stepper: publish last will

* imager: major refactor

* main: make streaming server process a daemon

* mqtt: insert debug statement on close

* main: reorder imports

* imager: make it work!

Reinsert the streaming server logic in there, because there is a problem with the synchronisation part otherwise.
Also, eventually, StreamingOuput() will have to be made not global

Final very critical learning: it's super duper important to make sure the memory split gives at least 256 MB to the GPU. Chaos ensues otherwise.

* main: changes to accommodate the streamer/imager fusion

* imager_state_machine: insert states transition description

* stepper: cleanup of code

* segmenter: creation of the class

* python: include segmenter changes

* remove unused files

* stepper: check existence of hardware.json

* main.json: changes to reflect the python script evolution

* remove unnecessary TODOs and add some others

* main: add check for config and directories

* imager: update_config is implemented and we have better management of directories now

* segmenter: now work recursively in all folders

* flow: the configuration is now sent via mqtt

* segmenter: better manage pipeline error

* segmenter: declaration of archive_fn in init

* imager: small bugs and typos

* main: add uniqueID output

* imager: add the camera settings message

We can now update the ISO, shutter speed and resolution from Node-Red

* package.json: update dependencies
2020-09-28 11:05:27 +02:00