I have seen some Kaggle competitions apply image transformations and add the transformed data back into the training set to increase the robustness of the classifier. For instance, rotating images, slightly skewing them, etc.
I would propose that for this leopard problem, instead of just skewing the images, you also perform transformations on the COLOR and put those images back into the training set.
Maybe apply certain filters, such as dimming the saturation or contrast of the images, so that the contrast of the leopard spots is less visible (i.e. "a leopard in low lighting"). Maybe this would force the neural net to learn more than just its print.
Knowing the right set of color filters to apply to all images could be tricky though.
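To make the idea concrete, here is a minimal sketch of what that kind of color augmentation might look like using PIL's ImageEnhance module. The factor ranges and file names below are just placeholder assumptions, not tuned values:

```python
from PIL import Image, ImageEnhance
import random

def low_light_augment(img: Image.Image) -> Image.Image:
    """Simulate a 'leopard in low lighting' by randomly reducing
    saturation, contrast, and brightness (factor 1.0 = unchanged)."""
    saturation = random.uniform(0.3, 0.8)  # washed-out color
    contrast = random.uniform(0.4, 0.9)    # spots blend into the coat
    brightness = random.uniform(0.5, 0.9)  # dimmer scene overall

    img = ImageEnhance.Color(img).enhance(saturation)
    img = ImageEnhance.Contrast(img).enhance(contrast)
    img = ImageEnhance.Brightness(img).enhance(brightness)
    return img

# Example: generate a few dimmed copies of one training image
# (the file names here are hypothetical).
original = Image.open("leopard_001.jpg")
for i in range(3):
    low_light_augment(original).save(f"leopard_001_lowlight_{i}.jpg")
```

In practice you would probably apply this on the fly as part of the training data pipeline (e.g. a random transform with some probability) rather than writing augmented copies to disk, so the net sees a different dimming each epoch.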