## What's what

`1_json_to_csv.py`:
- converts cm_tags.json to a CSV format I'm more used to and that is easier to use with my existing tooling

`2_retrieve_images_by_cc.py`:
- retrieves the validation images using cheesechaser, downloads them to "original/"

`3_common_tags.py`:
- cleans up both models' tag sets to only consider the common tags; notes down the indexes to use to fetch the correct tag probs from the dumps generated by the inference scripts

`4_cm_onnx_inference.py`:
- runs inference using the ONNX model of camie-tagger, dumps the activated (post-sigmoid) outputs

`5_wd_v3_onnx_inference.py`:
- runs inference using the ONNX model of the wd_v3 taggers (hardcoded to use swinv2_v3, could be made to use any of the v3 series), dumps the activated outputs

`6_get_labels.py`:
- gets one-hot encoded labels from val_dataset.csv, dumps them to "labels.npy"

`7_analyze_metrics_macro.py`:
- final analysis tool (rough sketches of the inference and analysis steps are at the end of this section)

## Results

Category 0: general tags - full

```
[user]$ python 7_analyze_metrics_macro.py -tc sw_tags_common.csv -c 0 -d swinv2_probs.npy -l labels.npy -a
Final # of tags: 7853
swinv2_probs.npy: {'thres': 0.2613, 'F1': 0.5386, 'F2': 0.5475, 'MCC': 0.548, 'A': 0.9972, 'R': 0.5619, 'P': 0.5619}

[user]$ python 7_analyze_metrics_macro.py -tc cm_tags_common.csv -c 0 -d cm_probs.npy -l labels.npy -a
Final # of tags: 7853
cm_probs.npy: {'thres': 0.2694, 'F1': 0.273, 'F2': 0.2835, 'MCC': 0.2859, 'A': 0.9955, 'R': 0.3075, 'P': 0.3075}
```

Category 0: general tags - ignoring the top 5000

```
[user]$ python 7_analyze_metrics_macro.py -tc sw_tags_common.csv -c 0 -d swinv2_probs.npy -l labels.npy -a -s 5000
Final # of tags: 2859
swinv2_probs.npy: {'thres': 0.2272, 'F1': 0.4908, 'F2': 0.5026, 'MCC': 0.5069, 'A': 0.9998, 'R': 0.5256, 'P': 0.5255}

[user]$ python 7_analyze_metrics_macro.py -tc cm_tags_common.csv -c 0 -d cm_probs.npy -l labels.npy -a -s 5000
Final # of tags: 2859
cm_probs.npy: {'thres': 0.2484, 'F1': 0.1961, 'F2': 0.2032, 'MCC': 0.2104, 'A': 0.9997, 'R': 0.2295, 'P': 0.2294}
```

Category 4: character tags

```
[user]$ python 7_analyze_metrics_macro.py -tc sw_tags_common.csv -c 4 -d swinv2_probs.npy -l labels.npy -a
Final # of tags: 2585
swinv2_probs.npy: {'thres': 0.3411, 'F1': 0.9464, 'F2': 0.9482, 'MCC': 0.9491, 'A': 1.0, 'R': 0.9519, 'P': 0.952}

[user]$ python 7_analyze_metrics_macro.py -tc cm_tags_common.csv -c 4 -d cm_probs.npy -l labels.npy -a
Final # of tags: 2585
cm_probs.npy: {'thres': 0.2493, 'F1': 0.7148, 'F2': 0.7226, 'MCC': 0.7266, 'A': 0.9998, 'R': 0.74, 'P': 0.7397}
```

## Caveats

The swinv2_v3 tagger has an unfair home advantage: out of the 20116 validation samples, only 2908 are guaranteed not to have been part of its training set. Gathering a fairer validation set is left as an exercise for the reader. The exact dataset splits used for the wd_v3 models are available here: https://huggingface.co/datasets/SmilingWolf/wdtagger-v3-seed

The analysis script removes tags that have no positive samples in the val set. It would probably be a good idea to only select tags that have 5+ samples. It's a fairly trivial change to make to the analysis script, if one feels so inclined (see the last sketch below).

I have only used the tags that have the exact same name and category to build the common tag set. I haven't adjusted for aliases and implications.
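
For reference, the inference dumps produced by scripts 4 and 5 boil down to something like the sketch below. The file name, the preprocessing, and whether the export already applies the sigmoid are assumptions here; the actual scripts handle the model-specific details.

```python
import numpy as np
import onnxruntime as ort

# Hypothetical model path; the real scripts load the camie-tagger / swinv2_v3
# exports and do the model-specific preprocessing before calling predict_probs.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name

def predict_probs(batch: np.ndarray) -> np.ndarray:
    """Run one preprocessed batch and return per-tag probabilities."""
    outputs = session.run(None, {input_name: batch})[0]
    # Assumption: the export returns raw logits. Skip the sigmoid if the
    # model already bakes it in (either way, the dump is post-sigmoid).
    return 1.0 / (1.0 + np.exp(-outputs))

# Accumulate per-image probabilities and dump them for the analysis step, e.g.:
# probs = np.concatenate([predict_probs(b) for b in batches], axis=0)
# np.save("swinv2_probs.npy", probs)
```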
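
The analysis itself is, at its core, a threshold sweep over macro-averaged metrics. The sketch below only shows the general idea, assuming the dumps are already restricted to the common tag set (column-aligned with the labels) and that the threshold is chosen by maximizing macro F1; 7_analyze_metrics_macro.py may differ in the details (it also reports F2, MCC, and accuracy, as seen above).

```python
import numpy as np

def macro_scores(probs: np.ndarray, labels: np.ndarray, thres: float):
    """Macro-averaged precision/recall/F1 over tags at one global threshold."""
    preds = probs >= thres
    pos = labels.astype(bool)
    tp = (preds & pos).sum(axis=0).astype(np.float64)
    fp = (preds & ~pos).sum(axis=0).astype(np.float64)
    fn = (~preds & pos).sum(axis=0).astype(np.float64)
    precision = tp / np.maximum(tp + fp, 1)
    recall = tp / np.maximum(tp + fn, 1)
    f1 = 2 * tp / np.maximum(2 * tp + fp + fn, 1)
    return precision.mean(), recall.mean(), f1.mean()

probs = np.load("swinv2_probs.npy")   # assumed already sliced to the common tags
labels = np.load("labels.npy")
best_thres, best_f1 = max(
    ((t, macro_scores(probs, labels, t)[2]) for t in np.arange(0.01, 1.0, 0.01)),
    key=lambda x: x[1],
)
```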
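
Finally, the 5+ positive samples filter mentioned in the caveats amounts to roughly this (file names assumed, and the two arrays assumed to be column-aligned):

```python
import numpy as np

labels = np.load("labels.npy")
probs = np.load("swinv2_probs.npy")

# Instead of only dropping tags with zero positives, require at least 5
# positive samples in the validation set before scoring a tag.
keep = labels.sum(axis=0) >= 5
labels, probs = labels[:, keep], probs[:, keep]
```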