This video is the second part of the TensorFlow Serving example.
In this video, we will create two deep learning models using the TensorFlow high-level API, Keras. Then we will export these models to the SavedModel format and deploy them using the TensorFlow Serving Docker container. We will also see how to use the models.config file and how to access different versions of a model through the REST API.
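As a minimal sketch of the first two steps (the actual architecture and model name in the video may differ; `demo` and the feature shape here are placeholders), a Keras model can be built and exported to a numbered SavedModel version directory, which is the layout TensorFlow Serving expects:

```python
import os
import tensorflow as tf

def build_model():
    # Tiny dense classifier; stands in for the models built in the video.
    return tf.keras.Sequential([
        tf.keras.Input(shape=(4,)),
        tf.keras.layers.Dense(16, activation="relu"),
        tf.keras.layers.Dense(3, activation="softmax"),
    ])

# TensorFlow Serving discovers versions from numbered subdirectories,
# e.g. models/demo/1, models/demo/2, ...
export_dir = os.path.join("models", "demo", "1")
model = build_model()
tf.saved_model.save(model, export_dir)
```

After this call, `models/demo/1` contains `saved_model.pb` plus the `variables` directory, and the parent `models/demo` folder can be mounted into the Serving container.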
00:08 Video Description
00:46 Create models with Keras
02:22 Save/Load Keras models
02:43 Export a Keras model to / load it from the SavedModel format
04:00 Prepare the PowerShell environment for the TensorFlow Serving Docker container
04:20 The TensorFlow Serving models.config file
04:41 Starting TensorFlow Serving from the Docker container
05:08 Checking which models are available
05:33 Accessing different versions of the model using the REST API
06:54 Final words
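For the models.config step, here is a minimal sketch of the file (the model name `demo` and the base path are placeholders, not necessarily those used in the video). By default Serving only loads the latest version, so a `model_version_policy` is needed to keep several versions available at once:

```
model_config_list {
  config {
    name: "demo"
    base_path: "/models/demo"
    model_platform: "tensorflow"
    model_version_policy {
      specific {
        versions: 1
        versions: 2
      }
    }
  }
}
```

The config is passed to the container via `--model_config_file`, with the model directory and the file mounted in, along the lines of `docker run -p 8501:8501 -v "$PWD/models:/models" -v "$PWD/models.config:/models/models.config" tensorflow/serving --model_config_file=/models/models.config`.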
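For the REST API step, the endpoints below are a sketch assuming the default REST port 8501 and a placeholder model name `demo`; the snippet only constructs the URLs and request body, so it can be adapted to any HTTP client (or `curl`):

```python
import json

base = "http://localhost:8501/v1/models/demo"

# GET on the base URL returns the serving status of all loaded versions;
# appending /versions/<n> pins the request to one specific version.
status_url = base
version_2_predict = base + "/versions/2:predict"

# Body for the :predict endpoint: a batch of feature rows.
payload = json.dumps({"instances": [[5.1, 3.5, 1.4, 0.2]]})
```

With a container running, the same request can be issued as `curl -X POST -d '{"instances": [[5.1, 3.5, 1.4, 0.2]]}' http://localhost:8501/v1/models/demo/versions/2:predict`.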
Source Code from the Video:
[ Link ]
First part of this example: [ Link ]
What is TensorFlow Serving: [ Link ]
TensorFlow Serving guide: [ Link ]
TensorFlow 2.0 RC: [ Link ]