Serverless Transformer NLP Inference

Choosing an NLP Model

ONNX and ONNX-Runtime
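
Before a transformer model can be served with ONNX Runtime, it has to be exported to the ONNX format. One common way to do this for Hugging Face checkpoints is the Optimum CLI; a minimal sketch, assuming a DistilBERT sentiment checkpoint (the model name and output directory here are illustrative, not necessarily the ones used in this article):

```shell
# Install the exporter tooling and the runtime.
pip install "optimum[exporters]" onnxruntime

# Export the checkpoint to onnx_model/model.onnx.
optimum-cli export onnx --model distilbert-base-uncased-finetuned-sst-2-english onnx_model/
```

The exported graph can then be loaded with `onnxruntime.InferenceSession` and run on CPU without PyTorch in the deployment image, which keeps the Lambda package small.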

AWS Lambda

Transformer NLP Inference Lambda Function
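
The handler itself can stay small: cache the ONNX Runtime session and tokenizer in module-level globals so warm invocations skip the expensive load, and only pay that cost once per container. A minimal sketch, assuming a `model.onnx` file bundled with the function and a DistilBERT tokenizer (both names are illustrative assumptions, not the article's exact setup):

```python
import json

import numpy as np

# Cached across warm invocations of the same Lambda container.
_session = None
_tokenizer = None


def _get_session():
    """Lazily create and cache the ONNX Runtime session."""
    global _session
    if _session is None:
        import onnxruntime as ort
        opts = ort.SessionOptions()
        opts.intra_op_num_threads = 1  # Lambda vCPU share scales with memory size
        _session = ort.InferenceSession("model.onnx", sess_options=opts)
    return _session


def _get_tokenizer():
    """Lazily create and cache the tokenizer (illustrative checkpoint name)."""
    global _tokenizer
    if _tokenizer is None:
        from transformers import AutoTokenizer
        _tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
    return _tokenizer


def softmax(logits: np.ndarray) -> np.ndarray:
    """Numerically stable softmax over the last axis."""
    shifted = logits - logits.max(axis=-1, keepdims=True)
    exp = np.exp(shifted)
    return exp / exp.sum(axis=-1, keepdims=True)


def handler(event, context):
    """Tokenize the request text, run the ONNX model, return class probabilities."""
    text = json.loads(event["body"])["text"]
    enc = _get_tokenizer()(text, return_tensors="np")
    logits = _get_session().run(
        None,
        {"input_ids": enc["input_ids"], "attention_mask": enc["attention_mask"]},
    )[0]
    return {"statusCode": 200,
            "body": json.dumps({"probs": softmax(logits).tolist()})}
```

Loading lazily inside the handler (rather than at import time) is a judgment call: it keeps cold-start imports cheap, at the price of a slower first request.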

Figure: BERT throughput (sequences/s) on Intel CPU cores with the ONNX Runtime default execution provider, by batch size and sequence length (image from https://cloudblogs.microsoft.com/opensource/2021/03/01/optimizing-bert-model-for-intel-cpu-cores-using-onnx-runtime-default-execution-provider/). For a given sequence length, increasing the batch size does not improve throughput.


Deployment
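
Since inference dependencies easily exceed the zip package limit, a container-image Lambda is a natural fit. The rough flow is: build the image, push it to ECR, and create the function from that image. A sketch, assuming placeholder account ID, region, repository, and IAM role names:

```shell
# Create the ECR repository and log Docker in to it (placeholders throughout).
aws ecr create-repository --repository-name nlp-inference
aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com

# Build, tag, and push the inference image.
docker build -t nlp-inference .
docker tag nlp-inference:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/nlp-inference:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/nlp-inference:latest

# Create the Lambda function from the container image.
aws lambda create-function \
  --function-name nlp-inference \
  --package-type Image \
  --code ImageUri=123456789012.dkr.ecr.us-east-1.amazonaws.com/nlp-inference:latest \
  --role arn:aws:iam::123456789012:role/lambda-exec-role \
  --memory-size 2048 --timeout 60
```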

Tuning Lambda Function
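
Lambda allocates CPU in proportion to the configured memory size, so raising memory can lower latency; the billed cost per call is (memory in GB) x (duration in seconds) x (price per GB-second), which makes the tuning problem a simple minimization over measured (memory, latency) pairs. A sketch with hypothetical measurements and the us-east-1 x86 GB-second price at the time of writing (both are assumptions, not figures from this article):

```python
# Hypothetical (memory_mb, avg_duration_ms) measurements for one model.
MEASUREMENTS = [(1024, 1200.0), (2048, 620.0), (4096, 330.0), (8192, 310.0)]

PRICE_PER_GB_S = 0.0000166667  # assumed us-east-1 x86 price per GB-second


def invocation_cost(memory_mb: float, duration_ms: float) -> float:
    """Compute cost of a single invocation (per-request fee excluded)."""
    return (memory_mb / 1024.0) * (duration_ms / 1000.0) * PRICE_PER_GB_S


def cheapest_config(measurements):
    """Return the (memory_mb, duration_ms) pair with the lowest per-call cost."""
    return min(measurements, key=lambda m: invocation_cost(*m))


best = cheapest_config(MEASUREMENTS)
```

With these made-up numbers the 1024 MB setting is marginally cheapest per call, even though 4096 MB is far faster; whether to pay for the latency win depends on the workload.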

Figure: Inference latency versus input sequence length (subword IDs)
Table: Price for 1B inferences using a stateless Lambda function for NLP inference
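
The total bill for a large inference volume is straightforward to estimate from Lambda's two charges: a compute charge per GB-second and a flat per-request fee. A back-of-envelope sketch, using the us-east-1 x86 prices at the time of writing (these rates and the example memory/latency figures are assumptions, not this article's measurements):

```python
PRICE_PER_GB_S = 0.0000166667       # assumed compute price per GB-second
PRICE_PER_REQUEST = 0.20 / 1_000_000  # assumed $0.20 per 1M requests


def total_cost(n_calls: int, memory_mb: float, duration_ms: float) -> float:
    """Estimate the Lambda bill for n_calls invocations."""
    gb_seconds_per_call = (memory_mb / 1024.0) * (duration_ms / 1000.0)
    return n_calls * (gb_seconds_per_call * PRICE_PER_GB_S + PRICE_PER_REQUEST)


# e.g. 1B inferences at 2048 MB and 100 ms each: roughly $3,533
cost = total_cost(1_000_000_000, 2048, 100)
```

Note that the per-request fee ($200 per billion calls in this sketch) is small next to the compute charge, so tuning memory and latency dominates the price.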

Conclusion
