
Commit 65476d2

Make vision folder
1 parent 6a7ce61 commit 65476d2

14 files changed, +34379 -0 lines

Computer Vision/01_Custom.ipynb (+1,064 lines; large diff not rendered)
Computer Vision/01_Pets.ipynb (+1,651 lines; large diff not rendered)
Computer Vision/01_Slides.pdf (144 KB; binary file not shown)
Computer Vision/01_Slides.pptx (52 KB; binary file not shown)
Computer Vision/02_Deployment.ipynb (+356 lines)
@@ -0,0 +1,356 @@
{
  "nbformat": 4,
  "nbformat_minor": 0,
  "metadata": {
    "colab": {
      "name": "02_Deployment.ipynb",
      "provenance": [],
      "collapsed_sections": []
    },
    "kernelspec": {
      "name": "python3",
      "display_name": "Python 3"
    },
    "accelerator": "GPU"
  },
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "RJQ5_qEdyLvz",
        "colab_type": "text"
      },
      "source": [
        "# 02 - Deployment\n",
        "\n",
        "* [Starlette](https://www.starlette.io/)\n",
        "* Follow the `Render` tutorial [here](https://github.com/render-examples/fastai-v3)\n",
        "* [Link](https://github.com/muellerzr/fastai2-Starlette) to a fastai2 template\n",
        "\n",
        "* **Note**: You do not **need** to deploy on Render to get the code working; we can test locally on our own machine (which we will do today)!"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "J-FaHcuTzdq8",
        "colab_type": "text"
      },
      "source": [
        "## What will we focus on?\n",
        "\n",
        "* How to format inputs and outputs for each model type and feed them in\n",
        "* Images, Tabular, NLP\n",
        "\n",
        "## What code do we change?\n",
        "* `server.py`"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "wo-u77GqzwHb",
        "colab_type": "text"
      },
      "source": [
        "# Images:\n",
        "\n",
        "* Different input types:\n",
        "  * URL\n",
        "  * File upload"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "6_Y8FVlL4TxD",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "import aiohttp\n",
        "\n",
        "async def get_bytes(url):\n",
        "    async with aiohttp.ClientSession() as session:\n",
        "        async with session.get(url) as response:\n",
        "            return await response.read()"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "EiCTLwlQ4AfC",
        "colab_type": "text"
      },
      "source": [
        "An image upload"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "oanc7RJbx642",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "from io import BytesIO\n",
        "\n",
        "@app.route('/analyze', methods=['POST'])\n",
        "async def analyze(request):\n",
        "    img_data = await request.form()\n",
        "    img_bytes = await (img_data['file'].read())\n",
        "    pred = learn.predict(BytesIO(img_bytes))[0]\n",
        "    return JSONResponse({\n",
        "        'results': str(pred)\n",
        "    })"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "iNr3NlPB4CIQ",
        "colab_type": "text"
      },
      "source": [
        "A URL"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "noh6yk1a4C2A",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "from io import BytesIO\n",
        "\n",
        "@app.route('/analyze', methods=['POST'])\n",
        "async def analyze(request):\n",
        "    img_bytes = await get_bytes(request.query_params[\"url\"])\n",
        "    pred = learn.predict(BytesIO(img_bytes))[0]\n",
        "    return JSONResponse({\n",
        "        'results': str(pred)\n",
        "    })"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Pe591V9T6FCy",
        "colab_type": "text"
      },
      "source": [
        "A zip file (see below for how to upload a `zip` or other file)"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "1rp6U5jK6Ghf",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "import csv\n",
        "import os\n",
        "import shutil\n",
        "import zipfile\n",
        "\n",
        "@app.route('/analyze', methods=['POST'])\n",
        "async def analyze(request):\n",
        "    data = await request.form()\n",
        "    content = data['content']\n",
        "    zip_ref = zipfile.ZipFile(content, 'r')\n",
        "    os.makedirs('Downloaded_Images', exist_ok=True)\n",
        "    zip_ref.extractall('Downloaded_Images')\n",
        "    zip_ref.close()\n",
        "    path = Path('Downloaded_Images')\n",
        "    imgs = get_image_files(path)\n",
        "    learn = load_learner(path/export_file_name)\n",
        "    dl = test_dl(learn.dbunch, imgs)\n",
        "    _, __, preds = learn.get_preds(dl=dl, with_decoded=True)\n",
        "    shutil.rmtree('Downloaded_Images')\n",
        "    with open('results.csv', 'w', newline='') as resultsFile:\n",
        "        wr = csv.writer(resultsFile)\n",
        "        wr.writerows([preds])\n",
        "    return FileResponse('results.csv')"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "LHVBE3G47Y1y",
        "colab_type": "text"
      },
      "source": [
        "Parsing a CSV with image URLs"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "RzxsDi387amI",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "import csv\n",
        "import os\n",
        "import shutil\n",
        "from io import StringIO\n",
        "\n",
        "@app.route('/analyze', methods=['POST'])\n",
        "async def analyze(request):\n",
        "    data = await request.form()\n",
        "    content = await (data['file'].read())\n",
        "    s = str(content, 'utf-8')\n",
        "    data = StringIO(s)\n",
        "    os.makedirs('Downloaded_Images', exist_ok=True)\n",
        "    download_images('Downloaded_Images', urls=data)\n",
        "    path = Path('Downloaded_Images')\n",
        "    learn = load_learner(path/export_file_name)\n",
        "    imgs = get_image_files(path)\n",
        "    dl = test_dl(learn.dbunch, imgs)\n",
        "    _, __, preds = learn.get_preds(dl=dl, with_decoded=True)\n",
        "    shutil.rmtree('Downloaded_Images')\n",
        "    with open('results.csv', 'w', newline='') as resultsFile:\n",
        "        wr = csv.writer(resultsFile)\n",
        "        wr.writerows([preds])\n",
        "    return FileResponse('results.csv')"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "4ma0IvVA5FYH",
        "colab_type": "text"
      },
      "source": [
        "# Tabular"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ggo5cmwm8hNe",
        "colab_type": "text"
      },
      "source": [
        "Tabular is different. Most work will be done by sending large chunks of data for analysis. Let's recreate what we did, but load it into Pandas"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "hLJsXSMY5Gul",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "import csv\n",
        "from io import StringIO\n",
        "\n",
        "import pandas as pd\n",
        "\n",
        "@app.route('/analyze', methods=['POST'])\n",
        "async def analyze(request):\n",
        "    data = await request.form()\n",
        "    content = await (data['file'].read())\n",
        "    s = str(content, 'utf-8')\n",
        "    data = StringIO(s)\n",
        "    df = pd.read_csv(data)\n",
        "    learn = load_learner(path/export_file_name)\n",
        "    # if we want to do GPU:\n",
        "    # learn.model = learn.model.cuda()\n",
        "    dl = learn.dbunch.train_dl.new(df)\n",
        "    _, __, y = learn.get_preds(dl=dl, with_decoded=True)\n",
        "    df['Predictions'] = y\n",
        "    # if we want to store the results\n",
        "    path_res = Path('app/static/')\n",
        "    df.to_csv(path_res/'results.csv')\n",
        "\n",
        "    return FileResponse(path_res/'results.csv', media_type='text/csv')"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "nH4qfhb--HlF",
        "colab_type": "text"
      },
      "source": [
        "We need to adjust the JavaScript to accept a form:\n",
        "\n",
        "`client.js`:"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "e3KGo_MZ-MOJ",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "function analyze() {\n",
        "  var uploadFiles = el('file-input').files;\n",
        "  if (uploadFiles.length < 1) alert('Please select 1 file to analyze!');\n",
        "\n",
        "  el('analyze-button').innerHTML = 'Analyzing...';\n",
        "  var xhr = new XMLHttpRequest();\n",
        "  var loc = window.location;\n",
        "  xhr.open('POST', `${loc.protocol}//${loc.hostname}:${loc.port}/analyze`, true);\n",
        "  xhr.onerror = function() { alert(xhr.responseText); };\n",
        "  xhr.onload = function(e) {\n",
        "    if (this.readyState === 4) {\n",
        "      el(\"result-label\").innerHTML = `Result = Good`;\n",
        "      download('results.csv', 'results.csv');\n",
        "    }\n",
        "    el(\"analyze-button\").innerHTML = \"Analyze\";\n",
        "  };\n",
        "\n",
        "  var fileData = new FormData();\n",
        "  fileData.append(\"file\", uploadFiles[0]);\n",
        "  xhr.send(fileData);\n",
        "}"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "pKZsDRx1_a5m",
        "colab_type": "text"
      },
      "source": [
        "# Text\n",
        "\n",
        "A simple function for text-based models:"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "61OsZ_1q_gmV",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "@app.route('/analyze', methods=['POST'])\n",
        "async def analyze(request):\n",
        "    data = await request.form()\n",
        "    content = data['content']\n",
        "    pred = learn.predict(content)[0]\n",
        "    return JSONResponse({'result': str(pred)})"
      ],
      "execution_count": 0,
      "outputs": []
    }
  ]
}
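
The decoding pattern that the CSV and tabular routes rely on (raw upload bytes → `str` → `StringIO` → parser) can be checked in isolation, without running Starlette at all. A minimal sketch of that step; the sample byte string stands in for whatever `await data['file'].read()` would return, and its contents are made up for illustration:

```python
import csv
from io import StringIO

# Simulated upload bytes, standing in for `await data['file'].read()`
content = b"url\nhttp://example.com/cat.jpg\nhttp://example.com/dog.jpg\n"

# Same decoding steps used in the analyze() routes above
s = str(content, 'utf-8')
data = StringIO(s)

# Any text parser (csv, pandas.read_csv, download_images) can now
# consume `data` as if it were an open text file
rows = list(csv.reader(data))
print(rows)
```

Running this kind of snippet locally is a quick way to confirm the file-handling half of `server.py` before wiring in the model.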
