
keras-vggface

Oxford VGGFace implementation using the Keras Functional API (Keras v2+)

  • The model is converted from the original Caffe network (580 MB).
  • It supports both the TensorFlow and Theano backends.
  • You can also load only the feature-extraction layers by initializing with VGGFace(include_top=False) (59 MB); see the short sketch after the install command below.
  • The first time you use the model, the weights are downloaded and stored in the ~/.keras folder.
pip install keras_vggface
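
For example, the two variants mentioned above can be loaded as follows (a minimal sketch; the variable names are illustrative, and on first use each call downloads its weight file to the ~/.keras folder):

from keras_vggface.vggface import VGGFace

# full model including the classifier head (~580 MB of weights)
full_model = VGGFace()

# feature-extraction layers only (~59 MB of weights)
feature_model = VGGFace(include_top=False)
feature_model.summary()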

News

  • The project is now up to date with the new Keras version (2.0).

  • The old implementation is still available on the 'keras1' branch.

Library Versions

  • Keras v2.0+
  • TensorFlow 1.0+

Example Usage

  • Feature Extraction
from keras.engine import Model
from keras.layers import Input
from keras_vggface.vggface import VGGFace

image_input = Input(shape=(224, 224, 3))
# for the Theano backend (channels-first) use instead:
# image_input = Input(shape=(3, 224, 224))

# Convolution features
vgg_model_conv = VGGFace(include_top=False, pooling='avg') # pooling: None, 'avg' or 'max'

# FC7 features
vgg_model = VGGFace(input_tensor=image_input)
out = vgg_model.get_layer('fc7').output
vgg_model_fc7 = Model(image_input, out)

# After this point you can use both models as usual.
# ...
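
As a quick check, both models can then be called on a preprocessed face image to obtain feature vectors (a minimal sketch; 'face.jpg' is a hypothetical path, and the preprocessing constants are the VGGFace mean-pixel values also used in the Prediction example below):

import numpy as np
from keras.preprocessing import image

img = image.load_img('face.jpg', target_size=(224, 224))  # hypothetical image path
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = x[:, :, :, ::-1]      # RGB -> BGR (channels-last)
x[..., 0] -= 93.5940      # zero-center by the VGGFace mean pixel (B)
x[..., 1] -= 104.7624     # (G)
x[..., 2] -= 129.1863     # (R)

conv_features = vgg_model_conv.predict(x)  # shape (1, 512) with pooling='avg'
fc7_features = vgg_model_fc7.predict(x)    # shape (1, 4096)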
  • Finetuning
from keras.engine import Model
from keras.layers import Flatten, Dense, Input
from keras_vggface.vggface import VGGFace

# custom parameters
nb_class = 2
hidden_dim = 512

image_input = Input(shape=(224, 224, 3))
# for the Theano backend (channels-first) use instead:
# image_input = Input(shape=(3, 224, 224))
vgg_model = VGGFace(input_tensor=image_input, include_top=False)
last_layer = vgg_model.get_layer('pool5').output
x = Flatten(name='flatten')(last_layer)
x = Dense(hidden_dim, activation='relu', name='fc6')(x)
x = Dense(hidden_dim, activation='relu', name='fc7')(x)
out = Dense(nb_class, activation='softmax', name='fc8')(x)
custom_vgg_model = Model(image_input, out)

# Train your model as usual.
# ...
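
Training might then look like the following (a minimal sketch, assuming hypothetical NumPy arrays train_x of shape (N, 224, 224, 3) and one-hot train_y of shape (N, nb_class); freezing the pretrained convolutional layers is optional but common when finetuning on a small dataset):

# optionally freeze the pretrained convolutional layers
for layer in vgg_model.layers:
    layer.trainable = False

custom_vgg_model.compile(optimizer='adam',
                         loss='categorical_crossentropy',
                         metrics=['accuracy'])

# train_x, train_y are assumed to be your own face dataset
custom_vgg_model.fit(train_x, train_y, batch_size=32, epochs=5, validation_split=0.1)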
  • Prediction
import numpy as np
from keras.preprocessing import image
from keras_vggface.vggface import VGGFace

# TensorFlow backend (channels-last)
model = VGGFace()

# Change the image path to your own.
img = image.load_img('../image/ak.jpg', target_size=(224, 224))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
# TF order aka 'channels-last': RGB -> BGR
x = x[:, :, :, ::-1]
# TH order aka 'channels-first':
# x = x[:, ::-1, :, :]
# Zero-center by the VGGFace mean pixel (BGR order, channels-last indexing)
x[..., 0] -= 93.5940
x[..., 1] -= 104.7624
x[..., 2] -= 129.1863

preds = model.predict(x)
print('Predicted:', np.argmax(preds[0]))
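
The classifier head has 2622 outputs, one per identity in the original VGGFace training set, so the predicted index refers to one of those identities. Looking at the top few scores rather than a single argmax can be informative (a minimal sketch using only NumPy):

# top-5 predicted class indices and their probabilities
top5 = np.argsort(preds[0])[::-1][:5]
for idx in top5:
    print(idx, preds[0][idx])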

References

  • Parkhi, O. M., Vedaldi, A., Zisserman, A.: "Deep Face Recognition", British Machine Vision Conference, 2015.

Licence

The original models can be used for non-commercial research purposes under the Creative Commons Attribution License.

The code provided in this project is licensed under the MIT License.

If you find this project useful, please include a reference link in your work.
