python - How to deploy and serve predictions using TensorFlow from an API?


From the Google tutorial I know how to train a model in TensorFlow. What is the best way to save the trained model and then serve predictions using a basic, minimal Python API on a production server?

My question is about TensorFlow best practices for saving the model and serving predictions on a live server without compromising speed or running into memory issues, since the API server will be running in the background indefinitely.

A small snippet of Python code would be appreciated.
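For reference, here is a minimal sketch of the kind of setup the question describes: a trained model exported once as a SavedModel and served from a small Flask endpoint. The export directory, the "instances" JSON key, and the use of the default serving signature are assumptions for illustration, not a recommended production setup.

```python
# Minimal sketch (not production-grade): load a SavedModel once at startup
# and serve predictions over HTTP with Flask. Paths and JSON keys are
# hypothetical; the model is assumed to have been exported earlier with
# tf.saved_model.save(model, EXPORT_DIR).
import numpy as np
import tensorflow as tf
from flask import Flask, jsonify, request

EXPORT_DIR = "/models/my_model/1"  # hypothetical SavedModel directory

app = Flask(__name__)

# Load the model a single time when the process starts, so each request
# only pays for a forward pass, not for model loading.
model = tf.saved_model.load(EXPORT_DIR)
infer = model.signatures["serving_default"]  # default serving signature

@app.route("/predict", methods=["POST"])
def predict():
    # Expect a JSON body like {"instances": [[...], [...]]}
    payload = request.get_json(force=True)
    batch = np.asarray(payload["instances"], dtype=np.float32)
    outputs = infer(tf.constant(batch))
    # Convert each output tensor to a plain Python list for the JSON response.
    return jsonify({name: t.numpy().tolist() for name, t in outputs.items()})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```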

TensorFlow Serving is a high-performance, open-source serving system for machine learning models, designed for production environments and optimized for TensorFlow. The initial release contains a C++ server and Python client examples based on gRPC. The basic architecture is shown in the diagram below.

[Diagram: TensorFlow Serving basic architecture]

To get started quickly, check out the tutorial.
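As a rough illustration of the Python client side mentioned above, the following is a hedged sketch of a gRPC Predict call against a running TensorFlow Serving instance. It assumes the grpcio and tensorflow-serving-api packages are installed, a server is listening on localhost:8500, and a model named "my_model" with input key "inputs" is loaded; those names and shapes are placeholders, not values from the tutorial.

```python
# Minimal sketch of a gRPC client for TensorFlow Serving. Model name,
# signature name, input key, and input shape are assumptions.
import grpc
import numpy as np
import tensorflow as tf
from tensorflow_serving.apis import predict_pb2, prediction_service_pb2_grpc

channel = grpc.insecure_channel("localhost:8500")
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

request = predict_pb2.PredictRequest()
request.model_spec.name = "my_model"
request.model_spec.signature_name = "serving_default"

# Pack a batch of inputs into a TensorProto under the signature's input key.
batch = np.random.rand(1, 784).astype(np.float32)
request.inputs["inputs"].CopyFrom(tf.make_tensor_proto(batch, shape=batch.shape))

# Blocking call with a timeout; the response maps output names to TensorProtos.
response = stub.Predict(request, timeout=10.0)
for name, tensor_proto in response.outputs.items():
    print(name, tf.make_ndarray(tensor_proto).shape)
```

The gRPC client keeps the serving process separate from the Python API process, which is part of how TensorFlow Serving addresses the speed and memory concerns raised in the question.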

