Strange bug when using transfer learning (Xception model; imagenet) . 0% Accuracy! #3576
Unanswered
jonathanhillwebsite
asked this question in Q&A
Hi all,
I've adapted my training script following the tutorials, and it seems all the GPUs are communicating. However, when testing it out by loading an Xception model (with ImageNet weights), some odd things are happening (see below).
Has anyone seen this behaviour before and found a solution? (Or am I missing something that's not in the basic tutorials?) My problem is highly imbalanced multi-class image classification, so I'd expect pretty low accuracy, but I was expecting something higher than 0%! Especially since the Xception base model hits 20% when frozen.
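As an aside on the imbalance: one common way to compensate in tf.keras is to pass per-class weights to `model.fit(..., class_weight=...)`. Here is a minimal sketch of inverse-frequency weighting; `balanced_class_weights` is a hypothetical helper name, not something from my script:

```python
from collections import Counter

def balanced_class_weights(labels):
    """Inverse-frequency weights: a perfectly balanced dataset gives
    every class weight 1.0; rarer classes get proportionally more."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {cls: n / (k * cnt) for cls, cnt in counts.items()}

# The resulting dict can be passed to model.fit(..., class_weight=weights).
```

This matches the "balanced" heuristic used elsewhere (e.g. scikit-learn's `compute_class_weight`), though it wouldn't explain a hard 0% accuracy on its own.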
A couple more details:
TensorFlow versions 2.1 and 2.5,
Using tf.keras,
Running on AWS SageMaker (4 GPUs, 1 instance).
When using the frozen Xception model + a dense and dropout layer:
(screenshot failed to load)
When allowing the Xception model layers to be trainable + a dense and dropout layer:
(screenshot failed to load)
I'm going to keep looking through the script to check I haven't done anything silly, but I'm out of ideas :/
Thanks,
Jonathan