In the editor, make changes to [[config.json]]. The options that most commonly need changing are listed below, but the full set is described in the [documentation](https://plantcv.readthedocs.io/en/stable/parallel_config/) and is worth reviewing.
- "input_dir": "./imgs" *\[path/name of the input directory containing the images you want analyzed\]*
- "json": "data_output.json" *\[path/name of the data output file (within your workflow, this is accessed through the results entry of the args container)\]*
- "filename_metadata": \["camera", "id", "timestamp"\] *\[list of metadata terms to collect. Supported metadata terms include: camera, imgtype, zoom, exposure, gain, frame, lifter, timestamp, id, plantbarcode, treatment, cartag, measurementlabel, and other\]*
- "workflow": "multi-plant-workflow.py" *\[path/name of user-defined (your) PlantCV workflow Python script\]*
- "img_outdir": "./output_images" *\[path/name of output directory where measured images will be stored. Default is "./output_images"\]*
- "imgformat": "jpg" *\[image file format/extension. Default is "png"\]*
- "timestampformat": "%Y-%m-%d-%H-%M" *\[date format as it appears in your naming scheme. For an explanation of what each symbol means, see the Python [time format documentation](https://docs.python.org/3.7/library/datetime.html#strftime-and-strptime-behavior)\]*
- "append": false *\[(bool, default = false): if true, results will be appended to an existing JSON file; if false, previous results stored in the specified JSON file will be deleted\]*
- "cluster": "LocalCluster" *\[cluster type. The default, "LocalCluster", runs in parallel on the machine you run the workflow command from. The complete list of options is "LocalCluster", "HTCondorCluster", "LSFCluster", "MoabCluster", "OARCluster", "PBSCluster", "SGECluster", and "SLURMCluster"; these can be read about in the [dask docs](https://jobqueue.dask.org/)\]*
- "cluster_config":
    - "n_workers": controls the number of workers to run in parallel. In the example below this is still 1, but you should increase it based on how many cores you have available/want to use. The "cores" argument is how many cores each worker needs, which will almost always stay 1.
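Putting the options above together, a [[config.json]] might look something like the following sketch. The values are illustrative placeholders for your own paths and naming scheme, and the "cluster_config" keys shown are just the ones discussed above:

```json
{
    "input_dir": "./imgs",
    "json": "data_output.json",
    "filename_metadata": ["camera", "id", "timestamp"],
    "workflow": "multi-plant-workflow.py",
    "img_outdir": "./output_images",
    "imgformat": "jpg",
    "timestampformat": "%Y-%m-%d-%H-%M",
    "append": false,
    "cluster": "LocalCluster",
    "cluster_config": {
        "n_workers": 1,
        "cores": 1
    }
}
```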
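Before launching a full run, it can help to check that your "timestampformat" string actually matches the timestamps embedded in your filenames. A quick sketch using Python's standard datetime module (the filename and the `camera_id_timestamp` naming scheme here are made-up examples, not PlantCV's parser):

```python
from datetime import datetime

# Hypothetical filename following a "camera_id_timestamp" naming scheme
filename = "cam1_A17_2024-05-01-13-30.jpg"

# Pull out the timestamp portion by hand for this check
timestamp = filename.rsplit("_", 1)[1].removesuffix(".jpg")

# Parse with the same format string used for "timestampformat" in config.json;
# a ValueError here means the format string does not match your filenames
parsed = datetime.strptime(timestamp, "%Y-%m-%d-%H-%M")
print(parsed)  # 2024-05-01 13:30:00
```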