FunSnap
A DESKTOP BASED PHOTO EDITING TOOL
Final Report of the Project
Supervised by
Dr. Zerina Begum
Professor
Institute of Information Technology
University of Dhaka
Submitted by
Saiba Alam Pritul (BSSE 0918)
Exam roll: 0929
BSSE Session: 2016-2017
Institute of Information Technology
University of Dhaka
Submission date: 04.04.2021
LETTER OF TRANSMITTAL
4th April, 2021
The Coordinator
Software Project Lab 3
Institute of Information Technology
University of Dhaka
Subject: Submission of the final report of Software Project Lab 3.
Dear Sir,
With due respect, I am pleased to submit the final report on FunSnap, a desktop-based
photo editing tool. Although this report may have shortcomings, I have tried my level
best to produce an acceptable report. I would be highly obliged if you overlooked the
mistakes and accepted the effort that has been put into this report.
Sincerely yours,
Saiba Alam Pritul
Roll: BSSE 0918
Exam Roll: 0929
BSSE 9th batch
Institute of Information Technology
University of Dhaka
Acknowledgment
First, I would like to thank the Almighty for helping me prepare the final report of this
project.
I would like to express my deepest gratitude to all those who provided me with the support
and encouragement to start this project. Thanks to my supervisor Dr. Zerina Begum,
Professor, Institute of Information Technology, University of Dhaka, whose continuous
suggestions and guidance have been invaluable to me.
I am grateful to the Institute of Information Technology for giving me the opportunity to
start such a project.
Lastly, I would like to thank my classmates. They have always been helpful and provided
valuable insights from time to time.
Abstract
This document contains the software requirements and specifications, use case diagram,
data-based modeling, class-based modeling, archetype definition, mapping of
requirements to software architecture, preliminary test plan, high-level description of
testing goals, user interface, user manual, and summary of items and features to be tested
for "FunSnap: A Desktop Based Photo Editing Tool". The tool can be used to crop and
resize pictures, apply filter effects, control contrast and sharpness, give pictures a
cartoonish look, and, if a human expression is present, turn it into an emoji.
TABLE OF CONTENTS
Chapter 1: Introduction
1.1 Purpose
1.2 Scope
1.3 Assumptions
1.4 Definitions
Chapter 2: Overall Description
2.1 Quality Function Deployment
2.2 Usage Scenario
Chapter 3: Scenario Based Modeling
3.1 Use Case Diagram
Chapter 4: Class Based Model
4.1 Analysis Class
4.2 Class Card
4.3 Class Diagram
Chapter 5: Architectural Design
5.1 Architectural Overview
5.2 Architectural Context Diagram
Chapter 6: Test Plan
6.1 High-level description of testing goals
6.2 Summary of items and features to be tested
6.3 Validation
Chapter 7: Methodology
Chapter 8: User Interface Design
Chapter 9: Implementation Overview
9.1 Technology Used in implementation
9.2 Source code description
Chapter 10: User Manual Design
Chapter 11: Conclusion
CHAPTER 1: INTRODUCTION
Photo editing is the changing of images. Many photos of models are edited to remove
blemishes or make the model look "better"; this is usually called retouching, airbrushing,
or photoshopping, even if Photoshop or airbrushes are not used. We can make any event
look and feel more vibrant and fun with photo editing. We can also bring our old
black-and-white photographs to life with color, and damaged photographs can be
repaired. Photo editing can fill any picture with more color and joy, which is why I found
it interesting.
1.1 Purpose
The purposes of this document are to:
Identify the requirements that have to be carried out as part of the project.
Form the baseline for the construction of the proposed system.
Help reduce the development effort, and reveal misunderstandings and
inconsistencies early in the development cycle, when these problems are easier to
correct.
1.2 Scope
The scope of the project is given below:
The system will be developed and tested on the Windows 10 operating system.
This application is for desktop only.
1.3 Assumptions
The assumptions of the project are:
The user will provide a PNG or JPG file as input.
1.4 Definitions
Cartoonify: The process of making something look cartoonish.
Emojify: Detecting the facial expression and creating an emoji from that expression.
Sepia Filter: Sepia filters are monochromatic; some even consider sepia photos to be
black-and-white photos. They have distinct warm, brown-yellow tones.
Contrast: Contrast is the scale of difference between black and white in a photo.
Sharpness: Sharpness describes the clarity of detail in a photo and can be a valuable
creative tool for emphasizing texture.
CHAPTER 2: OVERALL DESCRIPTION
This chapter presents the quality function deployment and the usage scenario of the photo
editing tool.
2.1 Quality Function Deployment
Quality function deployment translates the needs of the customer into technical
requirements for software. I have identified the following requirements for the system:
Normal Features:
Rotate (rotate the image)
Flip (flip the image)
Resize (resize the image)
Adjustment (adjust the brightness, contrast, sharpness, and saturation of the
image)
Greyscale (convert an image from RGB to B&W)
Cartoonify (there are four available options: Pencil Sketch, Detail
Enhancement, Pencil Edges, Bilateral Filter)
Emojify (detect the facial expression and make an emoji using the
expression)
Expected Features:
The user can take a picture as input from anywhere by browsing files.
The tool can be used with common image formats (PNG, JPG).
While editing, a reset option is available.
The output can be saved in any format.
Exciting Features:
Multiple filters are available.
Multiple edits can be applied to one picture.
There will be a preset option featuring multiple presets, such as film grain, cool
tones, etc.
2.2 Usage Scenario
Filter
It offers four different kinds of filters: normal, sepia, negative, and grayscale.
Adjusting
It adjusts the contrast, brightness, and sharpness.
Modification
It resizes the image.
Rotation
It has two options: rotation and flip.
Cartoonify
It gives a cartoon effect.
Emojify
It detects the emotion and turns it into an emoji.
CHAPTER 3: SCENARIO BASED MODELING
This chapter describes the scenario-based modeling of the system.
3.1 Use Case Diagram
Use case diagrams of the photo editing tool are given below:
Level 0: Photo Editing Tool
Figure 1: Level 0 use case
Level 0
Actor: User
Goal in context: Edit pictures
Level 1: Modules of Photo Editing Tool
Figure 2: Level 1 use case
Level 1
Actor: User
Goal in context: The diagram shown in Figure 2 shows all the modules of the
application. This photo editing tool consists of six modules. They are:
Level 1.1: Filters
Level 1.2: Adjustment
Level 1.3: Modification
Level 1.4: Rotation
Level 1.5: Cartoonify
Level 1.6: Emojify
Level 1.1: Filter
Figure 3: Level 1.1 use case
Level 1.1
Actor: User
Actions and Replies:
A1: User selects the None filter
R1: System leaves the photo unchanged
A2: User selects the sepia filter
R2: System adds a sepia filter to the photo
A3: User selects the negative filter
R3: System adds a negative filter to the photo
A4: User selects black and white
R4: System adds a grayscale filter to the photo
Level 1.2: Adjustment
Figure 4: Level 1.2 use case
Level 1.2
Actor: User
Actions and Replies:
A1: User reduces/increases contrast
R1: System reduces/increases the contrast of the photo accordingly
A2: User reduces/increases brightness
R2: System reduces/increases the brightness of the photo
A3: User reduces/increases sharpness
R3: System reduces/increases the sharpness of the photo
Level 1.3: Modification
Figure 5: Level 1.3 use case
Level 1.3
Actor: User
Actions and Replies:
A1: User resizes the photo
R1: System applies the new height and width
Level 1.4: Rotation
Figure 6: Level 1.4 use case
Level 1.4
Actor: User
Actions and Replies:
A1: User selects the rotate option
R1: System rotates the picture clockwise or counter-clockwise
A2: User selects the flip option
R2: System flips the picture horizontally or vertically
Level 1.5: Cartoonify
Figure 7: Level 1.5 use case
Level 1.5
Actor: User
Actions and Replies:
A1: User selects the cartoonify option
R1: System applies the cartoon effect
Level 1.6: Emojify
Figure 8: Level 1.6 use case
Level 1.6
Actor: User
Actions and Replies:
A1: User selects the emojify option
R1: System applies the emojify effect
CHAPTER 4: CLASS BASED MODEL
This chapter describes the class-based modeling of FunSnap: A Photo Editing Tool.
4.1 Analysis Class
After identifying nouns from the scenario, I filtered the nouns belonging to the solution
domain using the General Classification (external entities, things, events, roles,
organizational units, places, and structures). Nouns selected as potential classes were then
filtered using the Selection Criteria (retained information, needed services, multiple
attributes, common attributes, common operations, and essential requirements). After
performing this analysis on the potential classes, I found the following analysis classes:
Table: Analysis Classes
1. CartoonTab
Attributes: image, line_size, blur_value
Methods: color_quantization(), edge_mask(), on_cartoonify()
2. EmojiTab
Attributes: image
Methods: show_vid(), show_vid2(), on_emojify()
3. RotationTab
Attributes: image
Methods: on_rotate_left(), on_rotate_right(), on_flip_left(), on_flip_top()
4. ModificationTab
Attributes: image
Methods: set_boxes(), on_width_change(), on_height_change(), on_ratio_change(), on_apply()
5. AdjustingTab
Attributes: image
Methods: reset_slider(), on_brightness_slider_released(), on_sharpness_slider_released(), on_contrast_slider_released()
6. FiltersTab
Attributes: name, title, filter_name
Methods: add_filter_thumb(), on_filter_select(), toggle_thumb()
7. MainLayout
Attributes: -
Methods: place_preview_image(), on_save(), on_upload(), update_img_size_lbl(), on_reset()
8. FunSnap
Attributes: event
Methods: center(), closeEvent()
4.2 Class Card
Table: CartoonTab Class
Responsibility:
1. Cartoonify the photo
Collaborators: image, line_size, blur_value
Table: EmojiTab Class
Responsibility:
1. Classifies the human facial expression
2. Maps the corresponding emoji or avatar
Collaborators: image
Table: RotationTab Class
Responsibility:
1. Rotate the photo left and right
2. Flip the photo horizontally and vertically
Collaborators: image
Table: ModificationTab Class
Responsibility:
1. Change the width, height, and ratio
2. Apply the changes
Collaborators: image
Table: AdjustingTab Class
Responsibility:
1. Increase/decrease brightness
2. Increase/decrease sharpness
3. Increase/decrease contrast
Collaborators: image
Table: FiltersTab Class
Responsibility:
1. Add filters to the pictures
2. Preview the filters
Collaborators: name, title, filter_name
4.3 Class Diagram
Figure 9: Class diagram of FunSnap: A photo editing tool
CHAPTER 5: ARCHITECTURAL DESIGN
This chapter describes the architectural overview and the architectural context diagram of
the photo editor.
5.1 Architectural Overview
This photo editor follows a 2-tier architecture: it is divided into a presentation layer and a
logic layer.
Figure 10: 2-tier Architecture
Presentation Layer: The presentation layer is responsible for accepting user input and
displaying the image to the user. It requests the logic layer to edit the photo.
Logic Layer: The logic layer analyses the input photo and applies the requested edit
effect; for emojify, it relies on a trained model.
5.2 Architectural Context Diagram
Figure 11: Architectural Context diagram of FunSnap: A photo editing tool
The user is the only actor; the user both uploads and edits photos through the system. The
trained model is used by the system for emojifying and is shown as a subordinate system.
There is no superordinate or peer-level system.
CHAPTER 6: TEST PLAN
I have used the black-box testing technique to test FunSnap: A Photo Editing Tool. All the
tests were conducted on Windows 10 (64-bit).
6.1 High-level description of testing goals
The photo editing app undergoes high-level testing, popularly known as black-box
testing. Black-box testing is a software testing method in which the functionality of an
application is tested without knowledge of its internal code structure, implementation
details, or internal paths. Black-box testing focuses mainly on the inputs and outputs of
the application and is based entirely on the software requirements and specifications.
Serving as a bridge between the users and the development team of a product, the ultimate
goal of software testing is to troubleshoot all the issues and bugs and to control the quality
of the resulting product. The goals of high-level software testing are given below:
To ensure that the system works properly, i.e., that the system can edit pictures
correctly
To ensure that the system satisfies the user requirements and works as desired
To find any existing bugs
To improve the system
6.2 Summary of items and features to be tested
T1
Test case: Test if the system can upload a photo
Input data: image
Steps to be executed: 1. Click the upload button; 2. Search for an image
Expected result: An image will be uploaded
Actual result: An image is uploaded
Pass/Fail: Pass
T2
Test case: Test if the system can reset a photo
Input data: image
Steps to be executed: 1. Click the reset button
Expected result: The image will be reset
Actual result: The image is reset
Pass/Fail: Pass
T3
Test case: Test if the system can add a filter to a photo
Input data: image
Steps to be executed: 1. Click the filter thumbnails
Expected result: A different filter will be added for each option
Actual result: A different filter is added for each option
Pass/Fail: Pass
T4
Test case: Test if the system can adjust the color of a photo
Input data: image
Steps to be executed: 1. Click the adjust button; 2. Increase/decrease contrast, sharpness, and brightness
Expected result: The adjustment will be performed on the photo
Actual result: The adjustment is performed on the photo
Pass/Fail: Pass
T5
Test case: Test if the system can rotate a photo
Input data: image
Steps to be executed: 1. Click the rotate button; 2. Rotate right or left
Expected result: The image will be rotated
Actual result: The image is rotated
Pass/Fail: Pass
T6
Test case: Test if the system can flip a photo
Input data: image
Steps to be executed: 1. Click the rotate button; 2. Flip right or top
Expected result: The image will be flipped
Actual result: The image is flipped
Pass/Fail: Pass
T7
Test case: Test if the system can cartoonify a photo
Input data: image
Steps to be executed: 1. Click the cartoonify button
Expected result: The image will be cartoonified
Actual result: The image is cartoonified
Pass/Fail: Pass
T8
Test case: Test if the system can perform emojify
Input data: short video
Steps to be executed: 1. Click the emojify button
Expected result: The facial expression will be detected and an emoji will be created
Actual result: The facial expression is detected correctly and an emoji is created
Pass/Fail: Pass
T9
Test case: Test if the system can perform emojify
Input data: short video
Steps to be executed: 1. Click the emojify button
Expected result: The facial expression will be detected and an emoji will be created
Actual result: The facial expression is detected incorrectly
Pass/Fail: Fail
T10
Test case: Test if the system can save a photo
Input data: image
Steps to be executed: 1. Click the save button
Expected result: The image will be saved
Actual result: The image is saved
Pass/Fail: Pass
6.3 Validation
The system performs fully accurately on filter, adjustment, modification, rotation, and
cartoonify.
On emojify, however, it works with about 60% accuracy. The trained model achieved 64%
accuracy on the validation set and 86% on the training set, so the system sometimes fails
to detect the emotion from the facial expression. Moreover, human expressions vary
greatly from one person to another, which makes it hard for the system to always detect
the right expression.
CHAPTER 7: METHODOLOGY
I have six options in total to apply to a picture: four basic editing options and two
specialized ones.
For the four basic operations of the editor I used libraries such as PIL and PyQt5. I
provided the four filter options using the color filter function of the ImageEnhance
module, and adjusted the brightness, sharpness, and contrast with the same module. I also
rotated and flipped the image using the rotate function of the Image module and modified
the size using the resize function of the Image module.
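A minimal sketch of these basic operations with Pillow follows; the file names, enhancement factors, and the use of ImageOps for the black-and-white filter are illustrative assumptions rather than the project's exact code.

from PIL import Image, ImageEnhance, ImageOps

img = Image.open("photo.jpg")  # assumed input file

# Adjustments via ImageEnhance (a factor of 1.0 leaves the image unchanged)
img = ImageEnhance.Brightness(img).enhance(1.2)
img = ImageEnhance.Contrast(img).enhance(1.1)
img = ImageEnhance.Sharpness(img).enhance(1.5)

# Rotation and flip via the Image module
rotated = img.rotate(90, expand=True)           # 90 degrees counter-clockwise
flipped = img.transpose(Image.FLIP_LEFT_RIGHT)  # horizontal flip

# Modification: resize to a new width and height
resized = img.resize((800, 600))

# A grayscale "black and white" filter
bw = ImageOps.grayscale(img)

resized.save("edited.jpg")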
For the other two options, I cartoonified the image using K-Means clustering and
emojified the image using a convolutional neural network (CNN).
Cartoonify:
To create a cartoon effect, I paid attention to two things: edges and the color palette.
Those are what make the difference between a photo and a cartoon. To adjust these
two main components, there are four main steps that I went through: load the image,
create an edge mask, reduce the color palette, and combine the edge mask with the
colored image.
1. Load Image
The first main step is loading the image. I defined a read_file function, which uses
cv2_imshow to load and display the selected image.
2. Create Edge Mask
Commonly, a cartoon effect emphasizes the thickness of the edges in an image. I
detected the edges in the image by using the [Link]() function.
In the edge_mask function, I transformed the image into grayscale. Then I
reduced the noise of the blurred grayscale image by using [Link]; a
larger blur value means fewer black noise specks appear in the image. After that, I
applied the adaptiveThreshold function and defined the line size of the edge: a larger
line size means thicker edges will be emphasized in the image.
Figure 12: Input image (left), edge-masked image (right)
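A sketch of such an edge_mask function, following the OpenCV steps described above; the default line_size and blur_value values are illustrative.

import cv2

def edge_mask(img, line_size=7, blur_value=7):
    # Convert to grayscale and median-blur to remove small noise specks
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    gray_blur = cv2.medianBlur(gray, blur_value)
    # Adaptive threshold marks edge pixels; line_size (odd) sets edge thickness
    edges = cv2.adaptiveThreshold(gray_blur, 255,
                                  cv2.ADAPTIVE_THRESH_MEAN_C,
                                  cv2.THRESH_BINARY,
                                  line_size, blur_value)
    return edges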
3. Reduce the Color Palette
The main difference between a photo and a drawing — in terms of color — is the
number of distinct colors in each of them. A drawing has fewer colors than a
photo. Therefore, we use color quantization to reduce the number of colors in the
photo.
Color Quantization
To do color quantization, I applied the K-Means clustering algorithm. K-Means is an
unsupervised machine learning algorithm that aims to cluster input data points; it is one
of the most straightforward and intuitive clustering algorithms, among many others.
In this setting, one way of comparing two colors is to calculate the Euclidean distance
between them, using the following formula:

√((R₁ - R₂)² + (G₁ - G₂)² + (B₁ - B₂)²)
Here, every pixel is defined by its red, green, and blue values. For two data points, we
simply take these differences. The closer the points are in this 3D color space (a cube),
the lower the Euclidean distance will be. When the colors are very distant, the Euclidean
distance is relatively large; for instance, white and black are very far apart, while two
similar shades are relatively close. We can understand this better from the following
example:
Figure 13: Color Quantization
The k value can be adjusted to determine the number of colors to apply to the image.
Here I use 9 colors.
Figure 14: Input image (left), color-quantized image (right)
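A sketch of the quantization step using OpenCV's built-in K-Means (k = 9 as above); the termination criteria and attempt count are illustrative assumptions.

import cv2
import numpy as np

def color_quantization(img, k=9):
    # Treat every pixel as a 3-D point (B, G, R) and cluster into k colors
    data = np.float32(img).reshape((-1, 3))
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 0.001)
    _, labels, centers = cv2.kmeans(data, k, None, criteria, 10,
                                    cv2.KMEANS_RANDOM_CENTERS)
    # Replace each pixel with the center of its cluster
    centers = np.uint8(centers)
    return centers[labels.flatten()].reshape(img.shape)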
Bilateral Filter
After doing the color quantization, I reduced the noise in the image by using a bilateral
filter. A bilateral filter is a non-linear, edge-preserving, noise-reducing smoothing filter
for images. It replaces the intensity of each pixel with a weighted average of intensity
values from nearby pixels; this weight can be based on a Gaussian distribution.
Crucially, the weights depend not only on the Euclidean distance between pixels but also
on the radiometric differences (e.g., range differences such as color intensity or depth
distance). This preserves sharp edges. It gives the image a slightly blurred, softened look.
There are three parameters that can be adjusted based on preference:
d: Diameter of each pixel neighborhood.
sigmaColor: A larger value of the parameter means larger areas of semi-equal color.
sigmaSpace: A larger value of the parameter means that pixels farther apart will
influence each other, as long as their colors are close enough.
Figure 15: Input image (left), smoothed image (right)
4. Combine Edge Mask with the Colored Image
The final step is combining the edge mask created earlier with the color-processed
image. To do so, I used the cv2.bitwise_and function.
Figure 16: Input image (left), cartoonified image (right)
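Putting the steps together, the whole pipeline might look as follows, assuming the edge_mask and color_quantization helpers sketched above; the bilateral filter parameters are illustrative.

import cv2

def cartoonify(img, line_size=7, blur_value=7, k=9):
    edges = edge_mask(img, line_size, blur_value)
    quantized = color_quantization(img, k)
    # Bilateral filter (d, sigmaColor, sigmaSpace): smooths flat regions
    # while preserving sharp edges
    smooth = cv2.bilateralFilter(quantized, 7, 200, 200)
    # Keep color only where the edge mask is white
    return cv2.bitwise_and(smooth, smooth, mask=edges)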
Emojify:
1. Collecting dataset:
The facial expression recognition dataset consists of 48x48-pixel grayscale face
images. The images are centered and occupy an equal amount of space. The dataset
covers facial emotions in seven categories: angry, disgust, fear, happy, sad, surprised,
and neutral.
2. Train the model:
Then I built a convolutional neural network architecture and trained the model on the
given dataset for emotion recognition from images. I made a file [Link] and
followed these steps:
First, I initialized the training and validation generators.
Then I designed the CNN model for emotion detection with different layers. I
started with the initialization of the model, followed by a batch normalization layer,
and then different convolutional layers with ReLU as the activation function, max
pooling layers, and dropouts to make learning efficient. (A sketch of such a model
appears after the layer definitions below.)
Convolutional Neural Network (CNN)
A convolutional neural network works a bit differently than the neural networks
in our brains. For example, it sees images as RGB pixels (numbers), and while
the layers are usually organized hierarchically, the convolution operator works
in three dimensions: width, height, and depth.
All convolutional layers contain filters (or kernels) that slide over the input image,
detect features, create a feature map, and pass the results to the next layer; then
the operation repeats. Filters in the first few layers handle the simplest features
(like edges or curves), while later layers combine these results and use them to
detect more detailed ones (like textures or entire body parts).
When all the feature maps are ready, they are merged to get the final output, which
represents predictions about the object the machine sees. The more images it
processes this way, the better it gets.
VGG-16
In this training process I used VGG-16. VGG-16 is a CNN with 16 weight layers:
stacks of convolutional layers with a couple of max pooling layers in between, and
some dense fully-connected layers at the end.
The Convolution Step
The primary purpose of Convolution is to extract features from the input
image. Convolution preserves the spatial relationship between pixels by learning
image features using small squares of input data.
As I discussed above, every image can be considered as a matrix of pixel values.
Consider a 5 x 5 image whose pixel values are only 0 and 1 (note that for a
grayscale image, pixel values normally range from 0 to 255; an image with only
0s and 1s is a special case), and consider a 3 x 3 filter matrix. The convolution of
the 5 x 5 image and the 3 x 3 matrix is then computed as follows:
I slide the 3 x 3 matrix over the original image by 1 pixel (a step also called the
'stride'), and for every position I compute the element-wise multiplication between
the two matrices and add the products to get a single integer, which forms one
element of the output matrix. Note that the 3 x 3 matrix "sees" only a part of the
input image in each stride.
In CNN terminology, the 3 x 3 matrix is called a 'filter', 'kernel', or 'feature
detector'.
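As a concrete illustration, the following NumPy snippet performs this convolution by hand; the image and kernel values are example numbers, not data from the project.

import numpy as np

image = np.array([[1, 1, 1, 0, 0],
                  [0, 1, 1, 1, 0],
                  [0, 0, 1, 1, 1],
                  [0, 0, 1, 1, 0],
                  [0, 1, 1, 0, 0]])
kernel = np.array([[1, 0, 1],
                   [0, 1, 0],
                   [1, 0, 1]])

# Slide the 3x3 kernel over the 5x5 image with stride 1; each output
# element is the sum of the element-wise products in the current window
feature_map = np.zeros((3, 3), dtype=int)
for i in range(3):
    for j in range(3):
        feature_map[i, j] = np.sum(image[i:i + 3, j:j + 3] * kernel)

print(feature_map)
# [[4 3 4]
#  [2 4 3]
#  [2 3 4]]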
Introducing Non-Linearity (ReLU)
An additional operation called ReLU is used after every convolution operation.
ReLU stands for Rectified Linear Unit and is a non-linear operation. ReLU is an
element-wise operation (applied per pixel) that replaces all negative pixel values
in the feature map with zero. The purpose of ReLU is to introduce non-linearity
into our VGG-16.
The ReLU operation is illustrated in the figure below.
Figure: ReLU
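For example, applied to a tiny feature map (the values are chosen purely for illustration):

import numpy as np

# ReLU replaces every negative value in the feature map with zero
feature_map = np.array([[ 2, -1],
                        [-3,  4]])
print(np.maximum(feature_map, 0))
# [[2 0]
#  [0 4]]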
The Pooling Step
Spatial pooling reduces the dimensionality of each feature map while retaining the
most important information. Spatial pooling can be of different types: max,
average, sum, etc.
In the case of max pooling, we define a spatial neighborhood (for example, a 2 x 2
window) and take the largest element from the rectified feature map within that
window.
Figure: Max Pooling
The function of pooling is to progressively reduce the spatial size of the input
representation. In particular, pooling makes the input representations (the feature
dimension) smaller and more manageable, and it reduces the number of parameters
and computations in the network, thereby controlling overfitting. A small worked
example follows.
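Here is 2 x 2 max pooling with stride 2 in plain NumPy; the feature-map values are example numbers.

import numpy as np

# 2x2 max pooling with stride 2: keep the largest value in each window
fmap = np.array([[1, 3, 2, 0],
                 [5, 2, 1, 4],
                 [0, 1, 6, 2],
                 [3, 2, 1, 0]])
h, w = fmap.shape
pooled = fmap.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))
print(pooled)
# [[5 4]
#  [3 6]]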
The Fully Connected Layer
The objective of a fully connected layer is to take the results of the
convolution/pooling process and use them to classify the image into a label (in
a simple classification example). The results then pass forward to the output layer,
in which every neuron represents a classification label.
In VGG-16, three fully-connected (FC) layers follow the stack of convolutional
layers (which has a different depth in different architectures): the first two have
4096 channels each, and the final layer is the soft-max layer.
Figure: Sample of the steps
Zero-Padding
Zero-padding refers to the process of symmetrically adding zeroes to the input
matrix. It is a commonly used modification that allows the size of the input to be
adjusted to our requirements. It is mostly used in designing CNN layers when
the dimensions of the input volume need to be preserved in the output volume.
Figure: Zero padding
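For example, padding a 2 x 2 matrix with one ring of zeros in NumPy:

import numpy as np

# Zero-padding adds a symmetric border of zeros around the input matrix
x = np.array([[1, 2],
              [3, 4]])
print(np.pad(x, pad_width=1, mode='constant'))
# [[0 0 0 0]
#  [0 1 2 0]
#  [0 3 4 0]
#  [0 0 0 0]]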
Sequential: A sequential model is a linear stack of layers; we put layers on top of
each other as we progress from the input layer to the output layer.
Dropout: Dropout is a technique where randomly selected neurons are ignored
during training; they are "dropped out" at random. This reduces overfitting.
Flatten: This flattens the input from N-D to 1-D and does not affect the batch
size.
Dense: This final layer takes the features learned by the preceding layers and maps
them to the labels. During testing, this layer is responsible for producing the
final label for the image being processed.
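The sketch below shows how such a stack of layers can be assembled with Keras. The number of layers and the filter counts are illustrative assumptions, not the project's exact architecture; only the 48x48 grayscale input and the seven output classes come from the description above.

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (BatchNormalization, Conv2D, Dense,
                                     Dropout, Flatten, MaxPooling2D)

# Illustrative emotion-recognition CNN: convolution, batch normalization,
# max pooling, and dropout blocks, then dense layers mapping to the labels
model = Sequential([
    Conv2D(64, (3, 3), activation='relu', input_shape=(48, 48, 1)),
    BatchNormalization(),
    Conv2D(128, (3, 3), activation='relu'),
    MaxPooling2D(pool_size=(2, 2)),
    Dropout(0.25),
    Conv2D(256, (3, 3), activation='relu'),
    MaxPooling2D(pool_size=(2, 2)),
    Dropout(0.25),
    Flatten(),
    Dense(1024, activation='relu'),
    Dropout(0.5),
    Dense(7, activation='softmax'),  # seven emotion classes
])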
3. Compiling the model
After this, I compiled the model using Adam as the optimizer, categorical
cross-entropy as the loss, and accuracy as the metric. After compiling the model, I
fit the data for training and validation. I took the batch size to be 64 with 30
epochs. Then I saved the model weights in an h5 file so that I can make use of this
file to make predictions rather than training the network again.
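A sketch of this step, assuming the model from the previous sketch and data generators named train_generator and validation_generator (hypothetical names) whose batch size is set to 64:

model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
# The generators supply batches of 64 images; training runs for 30 epochs
model.fit(train_generator, validation_data=validation_generator, epochs=30)
model.save_weights('model.h5')  # assumed file name for the saved weights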
4. Detect the emotion and emojify it
I loaded the model weights that I saved earlier after training. After importing the
model weights, I imported a Haar cascade file, provided by OpenCV, that is
designed to detect frontal faces. I then detected faces and classified the desired
emotions, assigning labels for the different emotions such as angry, happy, sad,
surprised, and neutral. When the code runs, it detects the person's face, draws a
bounding box over the detected face, converts the RGB image into grayscale,
classifies it, and makes an emoji.
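A sketch of this final step; the file names, the label order, and the emoji lookup are illustrative placeholders rather than the project's exact code.

import cv2
import numpy as np

model.load_weights('model.h5')  # weights saved after training (assumed name)
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
emotions = ['angry', 'disgust', 'fear', 'happy', 'sad', 'surprised', 'neutral']

frame = cv2.imread('photo.jpg')  # assumed input image
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
    # Draw a bounding box over the detected face
    cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
    # Crop the face, resize to the model's 48x48 input, and classify
    face = cv2.resize(gray[y:y + h, x:x + w], (48, 48))
    face = face.reshape(1, 48, 48, 1) / 255.0
    label = emotions[int(np.argmax(model.predict(face)))]
    # 'label' then selects the matching emoji image to display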
CHAPTER 8: USER INTERFACE DESIGN
The graphical interface design is presented in the following part.
This is the home page of FunSnap. The user uploads a picture by selecting the upload
option.
After uploading a picture, the first option is the filter; four filters (normal, sepia, negative,
black and white) are available in this option.
The second option adjusts the contrast, brightness, and sharpness of the picture.
The third option modifies the height and width of the photo.
The fourth and fifth options are rotation and cartoonify.
The final option detects the emotion in the photo and makes an emoji of it.
CHAPTER 9. IMPLEMENTATION OVERVIEW
This chapter aims to describe the implementation process of “FunSnap”. Here the
technologies that have been used to develop this system will be described in brief.
9.1 Technology Used in implementation
Development technologies are evolving rapidly as requirements grow. The technologies
that have been used to develop this system are recent and well suited to the task.
Python: Python is an interpreted, object-oriented, high-level programming
language with dynamic semantics. Python's simple, easy-to-learn syntax
emphasizes readability and therefore reduces the cost of program
maintenance. Python supports modules and packages, which encourages program
modularity and code reuse. Python 3.8.5 has been used for this project.
PIL: Python Imaging Library is a free and open-source additional library for the
Python programming language that adds support for opening, manipulating, and
saving many different image file formats. It is available for Windows, Mac OS X
and Linux.
OpenCV: OpenCV (Open Source Computer Vision Library) is an open source
computer vision and machine learning software library. OpenCV was built to
provide a common infrastructure for computer vision applications and to
accelerate the use of machine perception in commercial products. Being a
BSD-licensed product, OpenCV makes it easy for businesses to utilize and
modify the code.
Tensorflow: TensorFlow is an end-to-end open source platform for machine
learning. It has a comprehensive, flexible ecosystem of tools, libraries, and
community resources that lets researchers push the state of the art in ML and
developers easily build and deploy ML-powered applications. Version 2.3.0 has
been used here.
Keras: Keras is an open-source software library that provides a Python interface
for artificial neural networks. Keras acts as an interface for the TensorFlow
library. Up until version 2.3 Keras supported multiple backends, including
TensorFlow, Microsoft Cognitive Toolkit, Theano, and PlaidML. I used Keras
2.4.3 for the project.
Numpy: NumPy is a library for the Python programming language, adding
support for large, multi-dimensional arrays and matrices, along with a large
collection of high-level mathematical functions to operate on these arrays.
PyQt5: PyQt is a Python binding of the cross-platform GUI toolkit Qt,
implemented as a Python plug-in. PyQt is free software developed by the British
firm Riverbank Computing. PyQt implements around 440 classes and over 6,000
functions and methods, including a substantial set of GUI widgets.
9.2 Source code description
There are seven classes, following the SRS:
1. CartoonTab
Attributes: image, line_size, blur_value
color_quantization(): Quantizes the colors of the picture
edge_mask(): Detects the edges
on_cartoonify(): Cartoonifies the image
2. EmojiTab
show_vid(), show_vid2(): Detect the emotion
on_emojify(): Emojifies the picture by detecting the emotion
3. RotationTab
on_rotate_left(): Rotates the picture left
on_rotate_right(): Rotates the picture right
on_flip_left(): Flips the picture horizontally
on_flip_top(): Flips the picture vertically
4. ModificationTab
on_width_change(): Changes the width of the picture
on_height_change(): Changes the height of the picture
on_ratio_change(): Changes the ratio of the picture
on_apply(): Applies the changes to the picture
5. AdjustingTab
reset_slider(): Resets the sliders
on_brightness_slider_released(): Controls the brightness
on_sharpness_slider_released(): Controls the sharpness
on_contrast_slider_released(): Controls the contrast
6. FiltersTab
add_filter_thumb(): Adds filters to the picture and shows them
on_filter_select(): Selects the filter
toggle_thumb(): Switches from one filter to another
7. MainLayout
place_preview_image(): Previews the image
on_save(): Saves the image
on_upload(): Uploads the image
update_img_size_lbl(): Updates the image-size label
on_reset(): Resets the image to its previous state
CHAPTER 10: USER MANUAL DESIGN
What is FunSnap?
FunSnap is a desktop-based photo editing tool. We can use it to enhance our pictures.
What are the features?
Filter
Adjustment
Rotation
Modification
Cartoonify
Emojify
How to use?
1. Upload a picture using the upload button.
2. There will be six options available (Filter, Adjust, Modification, Rotation,
Cartoonify, and Emojify).
3. In the filter option, there are four choices: normal, sepia, negative, and black and
white.
4. In the adjusting option, we can adjust the brightness, contrast, and sharpness.
5. In the modification option, we can resize the image.
6. In the rotation option, we can rotate and flip the image.
7. In the cartoonify option, we can cartoonify the image.
8. In the emojify option, we can detect the emotion and make an emoji.
The seven available emojis correspond to the emotion categories: angry, disgust, fear,
happy, sad, surprised, and neutral.
Code Available at : [Link]
CHAPTER 11: CONCLUSION
This document has covered every point necessary for the development of the project.
For better understanding, different figures and tables have been included. All the steps
for developing the software, in both high-level and low-level terms, have been outlined
briefly in the usage scenarios. Finally, with this document, I have tried to minimize the
ambiguity around the development. I hope this report can be used effectively to support
the software development cycle.