
Pytorch example #477


Merged
merged 9 commits on Sep 20, 2019
173 changes: 173 additions & 0 deletions examples/image-classifier/alexnet.ipynb
@@ -0,0 +1,173 @@
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"name": "alexnet.ipynb",
"provenance": [],
"collapsed_sections": []
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
}
},
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "_KePiywrVHG2",
"colab_type": "text"
},
"source": [
"# Export Alexnet from Torchvision Models\n",
"In this notebook we convert Alexnet to ONNX and upload it to S3 where it can be deployed by Cortex\n",
"\n",
"Based on: [PytorchOnnxExport](https://github.com/onnx/tutorials/blob/master/tutorials/PytorchOnnxExport.ipynb)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "dzphLNy5VswD",
"colab_type": "text"
},
"source": [
"## Install dependencies"
]
},
{
"cell_type": "code",
"metadata": {
"id": "N69aGD72Is4t",
"colab_type": "code",
"colab": {}
},
"source": [
"!pip install torch==1.2.* torchvision==0.4.*"
],
"execution_count": 0,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "2raEvUmojKhK",
"colab_type": "text"
},
"source": [
"## Download and Export Model\n",
"Download the pretrained Alexnet Model and export to ONNX model format:"
]
},
{
"cell_type": "code",
"metadata": {
"id": "zKuFyRTlJUkd",
"colab_type": "code",
"colab": {}
},
"source": [
"import torch\n",
"import torch.onnx\n",
"import torchvision\n",
"\n",
"# Standard ImageNet input - 3 channels, 224x224,\n",
"# values don't matter since we only care about network structure.\n",
"dummy_input = torch.randn(1, 3, 224, 224)\n",
"\n",
"# We are going to use a Pretrained alexnet model\n",
"model = torchvision.models.alexnet(pretrained=True)\n",
"\n",
"# Export to ONNX\n",
"torch.onnx.export(model, dummy_input, \"alexnet.onnx\")"
],
"execution_count": 0,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "4YvEPLmljaMT",
"colab_type": "text"
},
"source": [
"## Upload the model to AWS\n",
"Cortex loads models from AWS, so we need to upload the exported model.\n",
"\n",
"Set these variables to configure your AWS credentials and model upload path:"
]
},
{
"cell_type": "code",
"metadata": {
"id": "y-SAhUH-Jlo_",
"colab_type": "code",
"cellView": "form",
"colab": {}
},
"source": [
"AWS_ACCESS_KEY_ID = \"\" #@param {type:\"string\"}\n",
"AWS_SECRET_ACCESS_KEY = \"\" #@param {type:\"string\"}\n",
"S3_UPLOAD_PATH = \"s3://my-bucket/image-classifier/alexnet.onnx\" #@param {type:\"string\"}\n",
"\n",
"import sys\n",
"import re\n",
"\n",
"if AWS_ACCESS_KEY_ID == \"\":\n",
" print(\"\\033[91m{}\\033[00m\".format(\"ERROR: Please set AWS_ACCESS_KEY_ID\"), file=sys.stderr)\n",
"\n",
"elif AWS_SECRET_ACCESS_KEY == \"\":\n",
" print(\"\\033[91m{}\\033[00m\".format(\"ERROR: Please set AWS_SECRET_ACCESS_KEY\"), file=sys.stderr)\n",
"\n",
"else:\n",
" try:\n",
" bucket = re.search(\"s3://(.+?)/\", S3_UPLOAD_PATH).group(1)\n",
" key = re.search(\"s3://.+?/(.+)\", S3_UPLOAD_PATH).group(1)\n",
" except:\n",
" print(\"\\033[91m{}\\033[00m\".format(\"ERROR: Invalid s3 path (should be of the form s3://my-bucket/path/to/file)\"), file=sys.stderr)"
],
"execution_count": 0,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "HmvoV7v96jip",
"colab_type": "text"
},
"source": [
"Upload the model to S3:"
]
},
{
"cell_type": "code",
"metadata": {
"id": "--va3L2KNBHX",
"colab_type": "code",
"colab": {}
},
"source": [
"import boto3\n",
"\n",
"s3 = boto3.client(\"s3\", aws_access_key_id=AWS_ACCESS_KEY_ID, aws_secret_access_key=AWS_SECRET_ACCESS_KEY)\n",
"print(\"Uploading {} ...\".format(S3_UPLOAD_PATH), end = '')\n",
"s3.upload_file(\"alexnet.onnx\", bucket, key)\n",
"print(\" ✓\")"
],
"execution_count": 0,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "acHZMDxqjnNQ",
"colab_type": "text"
},
"source": [
"<!-- CORTEX_VERSION_MINOR -->\n",
"That's it! See the [example on GitHub](https://github.com/cortexlabs/cortex/tree/master/examples/alexnet) for how to deploy the model as an API."
]
}
]
}
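The notebook exports and uploads the model but never runs it. As a quick local sanity check (not part of this PR), the exported graph can be loaded and run on a dummy input. A minimal sketch, assuming `onnxruntime` is installed and `alexnet.onnx` sits in the working directory:

```python
import numpy as np
import onnxruntime as ort

# Load the exported graph and run one forward pass on random input
session = ort.InferenceSession("alexnet.onnx")
input_name = session.get_inputs()[0].name
dummy = np.random.randn(1, 3, 224, 224).astype(np.float32)

outputs = session.run(None, {input_name: dummy})
print(outputs[0].shape)  # expected: (1, 1000), one logit per ImageNet class
```

An output shape of (1, 1000) confirms the ONNX graph kept AlexNet's 1000-class head, which is what `alexnet_handler.py` below relies on.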
33 changes: 33 additions & 0 deletions examples/image-classifier/alexnet_handler.py
@@ -0,0 +1,33 @@
import requests
import numpy as np
import base64
from PIL import Image
from io import BytesIO
from torchvision import transforms

# ImageNet class labels; the first entry ("background") is dropped because
# torchvision's AlexNet predicts the 1000 classes that follow it
labels = requests.get(
    "https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt"
).text.split("\n")[1:]


# https://github.com/pytorch/examples/blob/447974f6337543d4de6b888e244a964d3c9b71f6/imagenet/main.py#L198-L199
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
preprocess = transforms.Compose(
    [transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(), normalize]
)


def pre_inference(sample, metadata):
    # Fetch the image bytes, either from a URL or from an inline base64 payload
    if "url" in sample:
        image = requests.get(sample["url"]).content
    elif "base64" in sample:
        image = base64.b64decode(sample["base64"])

    # Resize, center-crop, normalize, and add a batch dimension
    img_pil = Image.open(BytesIO(image))
    img_tensor = preprocess(img_pil)
    img_tensor.unsqueeze_(0)
    return img_tensor.numpy()


def post_inference(prediction, metadata):
    # Return the label of the highest-scoring class
    return labels[np.argmax(np.array(prediction).squeeze())]
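Cortex calls `pre_inference` on each request payload and `post_inference` on the model's output, so the handler can be smoke-tested locally before deploying. A minimal sketch, not part of the PR, assuming `alexnet_handler.py` and the exported `alexnet.onnx` are in the working directory, `onnxruntime` plus the dependencies from `requirements.txt` are installed, and the image URL is a placeholder to swap out:

```python
import onnxruntime as ort

import alexnet_handler as handler

# Placeholder: any publicly reachable JPEG or PNG works here
sample = {"url": "https://example.com/cat.jpg"}

# pre_inference downloads the image and returns a (1, 3, 224, 224) float32 array
input_array = handler.pre_inference(sample, metadata=None)

session = ort.InferenceSession("alexnet.onnx")
input_name = session.get_inputs()[0].name
prediction = session.run(None, {input_name: input_array})

# post_inference maps the highest-scoring logit back to its ImageNet label
print(handler.post_inference(prediction, metadata=None))
```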
13 changes: 10 additions & 3 deletions examples/image-classifier/cortex.yaml
@@ -2,8 +2,15 @@
   name: image
 
 - kind: api
-  name: classifier
-  model: s3://cortex-examples/inception
-  request_handler: handler.py
+  name: classifier-inception
+  model: s3://cortex-examples/image-classifier/inception
+  request_handler: inception_handler.py
   tracker:
     model_type: classification
+
+- kind: api
+  name: classifier-alexnet
+  model: s3://cortex-examples/image-classifier/alexnet.onnx
+  request_handler: alexnet_handler.py
+  tracker:
+    model_type: classification
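With this configuration, `cortex deploy` creates two APIs under the `image` deployment. Once `classifier-alexnet` is live it can be queried over HTTP; the sketch below is a guess at a client call, assuming this Cortex version wraps inputs in a `samples` list (as its example payloads did) and treating both the endpoint and the image URL as placeholders:

```python
import requests

# Placeholders: the real endpoint is printed by the Cortex CLI after deployment,
# and the image URL can be any publicly reachable photo
endpoint = "https://<api-endpoint>/image/classifier-alexnet"
payload = {"samples": [{"url": "https://example.com/cat.jpg"}]}

response = requests.post(endpoint, json=payload)
print(response.text)  # expected: an ImageNet label such as "tabby"
```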
2 changes: 1 addition & 1 deletion examples/image-classifier/inception.ipynb
@@ -136,7 +136,7 @@
"source": [
"AWS_ACCESS_KEY_ID = \"\" #@param {type:\"string\"}\n",
"AWS_SECRET_ACCESS_KEY = \"\" #@param {type:\"string\"}\n",
"S3_UPLOAD_PATH = \"s3://my-bucket/inception\" #@param {type:\"string\"}\n",
"S3_UPLOAD_PATH = \"s3://my-bucket/image-classifier/inception\" #@param {type:\"string\"}\n",
"\n",
"import sys\n",
"import re\n",
examples/image-classifier/inception_handler.py
@@ -4,7 +4,6 @@
from PIL import Image
from io import BytesIO


labels = requests.get(
"https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt"
).text.split("\n")
1 change: 1 addition & 0 deletions examples/image-classifier/requirements.txt
@@ -1 +1,2 @@
 pillow
+torchvision