Privacy-Preserving Two-Cloud Architecture
I. PROJECT SUMMARY
Enhancing security while remaining practical is difficult: schemes that handle logical and mathematical queries often cannot give adequate security assurance, and several practical difficulties arise. Most researchers have therefore worked on identifying the best possible ways to achieve privacy preservation in cloud computing. In most cases, security analysis shows that in a two-cloud architecture the confidentiality of numeric information is strongly protected against the cloud providers.
Most industries and individuals outsource their databases only to services that are reliable, convenient, and low in cost. To provide such security and reliability in a cloud database while supporting sufficient functionality for SQL queries, many secure database schemes have been proposed and verified. However, those schemes remain vulnerable to privacy leakage to the cloud server. The main reason security is needed in a cloud database is that data owners host their important information there, and its processing happens beyond their control. When handling numerical range queries (">", "<", etc.), such schemes cannot provide sufficient privacy protection against practical challenges such as leakage of statistical properties and of the access pattern.
Meanwhile, processing a large number of numerical queries will unavoidably reveal information to the cloud server. To avoid this, a two-cloud architecture is used to make the database more secure, together with a number of intersection protocols that preserve privacy for a range of numeric-related queries. Security analysis of the two-cloud architecture shows that numerical information is strongly isolated from the cloud providers and protected against information leakage.
In the current circumstances, most businesses and individuals look for a database that gives trustworthy, low-cost maintenance and administration for their applications. But this is riskier on a cloud server, because the data processing and queries run outside the owner's control. Secure database schemes are needed to work with these SQL queries, yet existing schemes still leak information that undermines privacy preservation on the cloud server.
Although these problems exist in the current cloud computing scenario, the number of users keeps increasing because of the cloud's huge storage capacity. It also provides a flexible platform that lets users access their data easily from anywhere at any time.
Many enterprises have started using cloud storage because of its huge advantages, but data security, data privacy, data protection, and other data-related issues are still problematic. Security and privacy preservation are major tasks in cloud computing when it comes to data storage: the most important and necessary requirement in database management is to secure the database, and holding confidential information without any encryption invites information theft, the most vulnerable act.
As can be seen in present conditions, the cloud has taken control of the IT business with its immeasurable advantages. It enables all possible IT business segments and raises high expectations of everything delivered as a service.
A cloud platform can be open/public, and in addition it can be private. Private clouds are linked to the internal data centers of a business or an association and are not made accessible to the general public. In this way, cloud computing can be summarized as a merger of SaaS and utility computing that improves the performance of small and medium-sized data centres. Here too, security is the main concern of cloud computing. Cloud clients meet security dangers head-on, coming from both outside and inside the cloud. Defending the data from the server itself is one of the main related issues: the server will by definition control the "bottom layer" of the software stack, which effectively bypasses most known security methods. For this reason the cloud server is said to be semi-trusted, that is, "honest-but-curious".
II. INTRODUCTION
Cloud computing is a wide range of computing services, including servers, storage, databases, networking, software, analytics, and intelligence, delivered over the Internet ("the cloud") to provide faster innovation, flexible resources, and economies of scale. In general, a cloud user pays only for the cloud services actually used, a model called "pay as you use". In this way, cloud users can enjoy low-cost services for all operations. The cloud can also provide an efficient and scalable platform on which users run their infrastructure to meet their business needs.
In other words, cloud computing can be stated as the delivery of on-demand computing services, "from applications to storage and processing power", over the internet and on a pay-as-you-go basis.
Nowadays cloud computing services cover a huge range of options for their users, from basic networking, processing power, and storage through to natural language processing, artificial intelligence, and standard office applications. Any service that does not require the user to be physically close to the computer hardware being used can now be delivered via the cloud.
Fig 1 Cloud architecture
Cloud computing covers a large number of services. These include consumer services such as Gmail, cloud backup of the photos on a smartphone, and backup of WhatsApp data, along with services that let large enterprises host all their data and run all of their applications in the cloud.
Cloud computing also powers video and audio streaming and other business systems on the internet, which users can easily access from anywhere at any time without any extra devices; the best examples are Netflix, Amazon Prime, and the like.
In this era, cloud computing has become the default option for many applications; software vendors now contribute their applications as services over the internet rather than as standalone products, switching instead to a subscription model. While the cloud offers many services, it also has a potential downside: using it brings its own costs and security risks.
A fundamental concept behind cloud computing is the location of the service: many of the details, such as the hardware, the software, and the operating system on which a service is running, are largely irrelevant to the user. In this sense the cloud notation is derived from old telecoms network schematics, where the public telephone network, and later the internet, was often drawn as a simplified cloud figure. This simplification is also a reminder that the services, the data, and the query processing are the key issues.
Cloud computing is widely divided into three computing models:
Infrastructure-as-a-Service (IaaS)
Platform-as-a-Service (PaaS)
Software-as-a-Service (SaaS)
2.3.1 Infrastructure-as-a-Service
2.3.2 Platform-as-a-Service
The next layer is Platform-as-a-Service (PaaS). This service primarily uses the storage, networking, and virtual servers of the layer below, and adds the tools and software that developers need to build applications on top of it, which can include middleware, database management, operating systems, and development tools.
2.3.3 Software-as-a-Service
Considering the benefits of cloud computing, the primary one is that by using cloud services companies do not have to own or maintain their own computing infrastructure.
On a cloud platform there is no work such as buying servers, updating applications or operating systems, or decommissioning and disposing of hardware or software when it reaches end of life, because all of that is taken care of by the supplier.
Commodity applications, such as email or backing up files from smartphones, can be switched to a cloud provider rather than relying on in-house skills. The companies that run and secure these services have good skills and more experienced staff, and so can provide a more secure and efficient service to the end user.
As with other cloud services, companies can easily move their projects along and test out concepts without long procedures; they also do not need to pay much up front, instead paying as they use the resources.
This concept of business agility is the key benefit cited by cloud advocates. Growing companies can access new services without the time and effort of their traditional procedures in order to get going with new applications. And of course, if a new application turns out to be wildly popular, the elastic nature of the cloud makes it easier to scale it up fast.
If a company has an application with very high peak usage, meaning it is heavily used only at a particular time of day, week, or year, it may be more economical to host it in the cloud rather than dedicating specific hardware and software to it.
Moving to a cloud-hosted application for services like email or CRM can remove a workload from internal IT staff, and if such applications don't generate much competitive advantage, there will be little other impact. Moving to a services model also shifts spending from capex to opex, which is useful for some companies. The ultimate benefit, moreover, is that the cloud provides a continuity opportunity for the business.
i) Database skills
Programming skills are widely needed for developing any application. In earlier days, languages such as COBOL, PASCAL, and FORTRAN were developed and programmers had working experience with them; later, many languages with more advanced options followed. Among these several languages, Python is a good fit for working with cloud database operations. With its easy learning curve, developers can create, manage, and deploy applications at low cost and in a short time. Even with these easy programming skills, the database developer should learn the following:
A cloud database is a collection of database-related services that users access through a cloud platform. Services such as creating, updating, and manipulating a database are all the same as in a traditional database, with the addition of easy access. A user can also install software to implement the database on cloud infrastructure.
Key features:
All database services can be built and accessed easily on a cloud platform
In a cloud database, users can host or make use of the available services without buying any special hardware
It can be managed by the user, and the provider can satisfy users by delivering exactly what they want
Cloud platforms offer wide support for existing database services: relational databases such as MySQL and PostgreSQL, and also NoSQL databases such as MongoDB and Apache CouchDB
As said earlier, a cloud database is easy to access, meaning it can be reached through a web interface or a vendor-provided API instead of buying or renting any special component (see the connection sketch below)
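As an illustration of vendor-API access, the following minimal sketch connects to a hosted PostgreSQL instance with the psycopg2 driver; the hostname, database name, and credentials are placeholders, not real endpoints.

import psycopg2  # PostgreSQL driver; install with 'pip install psycopg2-binary'

# Placeholder connection details for a managed PostgreSQL service.
conn = psycopg2.connect(
    host="db.example-cloud-provider.com",  # hypothetical endpoint
    dbname="appdb",
    user="app_user",
    password="secret",
)
cur = conn.cursor()
cur.execute("SELECT version();")  # simple round trip to verify the connection
print(cur.fetchone())
cur.close()
conn.close()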
Why cloud databases
Ease of access
Using the vendor's API or a web interface, cloud databases can be accessed through a virtual connection from anywhere at any time.
Scalability
Storage space in a cloud database can be expanded easily at run time, and changes made to it are easily absorbed. It also gives the user a "pay-as-you-use" facility, meaning the user pays only for what they use.
Disaster recovery
A cloud database provides strong support for backup services. Backups are taken automatically, with the user's permission, to avoid data loss; in the event of a natural disaster or an equipment failure, they help users recover their original data.
Control options
When users are looking for a virtual machine, the cloud is a good option; it also serves the database as a service (DBaaS).
Database technology
Scaling a cloud database is in some cases more difficult than scaling a NoSQL database, but this does not matter for many applications.
Security
In a cloud database, all users look closely at the security measures applied to their data, even though most organizations already provide much better security measures to their users.
Maintenance
A cloud database runs on a virtual machine (VM); when working with VMs, staff should understand the underlying processes required for maintenance.
Cloud security provides procedures, techniques, and tools to secure cloud computing environments against network threats, covering both internal and external attacks. Most companies, and at times government projects too, look for an innovative, collaborative environment with easy access and better performance, and in many conditions this is satisfied by the capabilities cloud computing provides.
In today's environment there are many threats and insecure activities directed against data and applications. To prevent these activities and avoid data loss, it is mandatory to keep security measures in place in a cloud database.
Depending on how cloud computing is used, the security system may differ; cloud computing is mainly categorized into the main types listed below,
In a public cloud, the provider runs the storage and handles all requests from clients. It comprises the three distinct cloud services: software-as-a-service (SaaS), infrastructure-as-a-service (IaaS), and platform-as-a-service (PaaS).
A private cloud is mainly provided in two ways: on one side it is a cloud computing platform dedicated to a specific customer, but it may be operated by some other third party.
In the hybrid type, private and public cloud computing services are configured together to provide services such as workload hosting and the optimization of data, cost, and security in the cloud database.
The main problem on the internet is insecurity, so any system connected to the internet needs secure protocols; like the rest of the internet, a cloud system also needs communication protocols that avoid vulnerabilities. A strong security posture makes this easier for internet-connected systems. Moreover, to provide secure connections over the internet, companies use firewalls and intrusion detection algorithms to protect their systems. Meanwhile, cloud security provides much better protection than ordinary internet storage, and a distributed system or distributed computing environment can provide security comparable to the cloud.
To make things comfortable for the user, cloud storage is issued as a service by a public cloud provider: data and applications are hosted and handled by a third party, on a network outside the user's own control. This is the major security difference between traditional IT and cloud computing.
To provide quality services on a cloud platform, all service providers concentrate on security. They keenly watch the security issues raised in cloud services and in the services they provide, but they cannot predict how customers will use those services, what they expect, what changes they will make to the database, or who holds access control. Each customer must therefore have access policies that protect their sensitive data and its configuration within the cloud architecture. In every cloud service, the cloud provider and the cloud customer hold different levels of security responsibility; by service type this is classified as,
Software-as-a-service (SaaS): here the customers alone are responsible for securing their data, and they also decide user access.
From the above we can conclude that in a public cloud database, customers are responsible for securing their data and must decide who may access and control it. Data security is the major task in cloud computing if computing services are to succeed, be adopted easily, and deliver the benefits of the cloud to all customers.
Companies using the most popular cloud service model, SaaS, gain flexibility in accessing utilities in the cloud environment; for example, Microsoft Office 365 can be utilized by anyone with little overhead, while a Salesforce deployment needs a plan for fulfilling its functionality to protect shared data in the cloud database.
With IaaS, the service user needs a more advanced and inclusive plan covering the data, cloud application security, traffic in the virtual network, the operating system, and any other component that could create data security issues.
A cloud computing solution is composed of several elements: clients, the data center, and distributed servers. Each element has a particular part to play in providing a functional cloud-based application.
Clients
Clients are normally the devices used by the end user to communicate information to the cloud. Computers, laptops, tablet computers, mobile phones, and personal digital assistants can all serve as clients. Portability is the main reason for using the devices listed above as clients; a client can be mobile, thin, or thick.
Data center
A data center is the group of servers housing the applications we need to access. Virtualization is one of the methods for getting services from the cloud, and it comes in two types:
a) Full Virtualization
This is the method where an entire installation of one machine is executed on another machine. In this system, all software executing on the server is located in a virtual machine, and the software executing on the server can be viewed from the clients.
b) Para Virtualization
This type allows different operating systems to run more efficiently at the same time on a single device, sharing that device's processors and memory. It is more efficient than full virtualization, since every element in this type is used more efficiently than in full virtualization.
Distributed Servers:
Distributed servers are generally geographically scattered, yet to a cloud subscriber they act as if they were humming away right next to each other. This feature gives the service provider more options and more security.
3.4 SQL
Structured Query Language (SQL) is a special domain-specific language designed mainly to manage data held in a relational database management system (RDBMS), and also to support query processing and stream processing in a relational data stream management system (RDSMS). It is mainly used to handle structured data, and it supports relations among entities and variables in a relational database.
Earlier, read and write operations were done with application programming interfaces (APIs) such as ISAM and VSAM. Compared with these, Structured Query Language provides two main advantages:
With SQL, many records can be accessed by the user at one time, using a single command (see the sketch below)
Several records can be accessed without specifying a key value such as an index; there is no need to spell out how to reach a record.
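A minimal sketch of both advantages, using Python's built-in sqlite3 module; the accounts table and its values are made up for illustration.

import sqlite3

conn = sqlite3.connect(":memory:")  # throwaway in-memory database
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance NUMERIC)")
conn.executemany("INSERT INTO accounts (balance) VALUES (?)",
                 [(50,), (250,), (900,)])

# One declarative statement touches every matching record; no key values
# or access paths are specified -- the engine decides how to reach rows.
conn.execute("UPDATE accounts SET balance = balance * 1.05 WHERE balance > 100")
print(conn.execute("SELECT id, balance FROM accounts").fetchall())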
Based on columns and rows, using relational calculus and relational algebra, SQL statements of many types can be created; these are grouped into sub-languages such as
Data Query Language (DQL)
Data Definition Language (DDL)
Data Manipulation Language (DML)
Data Control Language (DCL)
Data Query Language (DQL) statements are used to query the database; they operate on schema objects to retrieve the desired schema relations.
Data Definition Language (DDL) is used to create and modify objects such as tables, indexes, and users. Also called Data Description Language, it has a syntax similar to a programming language for describing or defining data structures. Example DDL commands include CREATE, ALTER, and DROP.
Data Manipulation Language (DML) is used for manipulation operations on a table, such as inserting, updating, and deleting rows. With DML we specifically express the read and write operations that help the user select data from a distributed table. It is a sub-language of SQL and reads much like a computer programming language.
As the name Data Control suggests, Data Control Language (DCL) is the syntax used to grant data-access authorization on the database. Two main commands are used, GRANT and REVOKE:
GRANT decides who may access the database
Predicates - to change the flow of a program, predicates evaluate either to three-valued logic (3VL), that is true/false/unknown, or to a Boolean true/false statement
Queries - the most important element in SQL; a query defines what data is to be retrieved.
Statements - all actions such as program flow, connections, sessions, transaction control, and the database schema are controlled through statements.
Data types are defined up front in the syntax, and the user fits them to their needs; they describe character, numeric, and binary values in commands.
Character Types
It is used to define three kinds of character data: fixed character (CHAR), varying character (VARCHAR), and character large object (CLOB).
Binary Types
As with the character types, three different binary types are defined: binary (BIN), varying binary (VARBIN), and binary large object (BLOB).
Numeric Types
All kinds of numeric values can be defined here, such as exact numeric values, approximate values, date and time, and decimal values.
Constructed types
With constructed data types the user can define multiple values in a single command, as an ARRAY. A sequence of statements is then used to create objects and elements in a table, and users can build their own command lines mixing binary, Boolean, and numeric values.
Numeric data types in SQL can be declared for both numeric and decimal columns as NUMERIC(P,S) and DECIMAL(P,S), where P is the precision and S is the scale.
Precision is an integer value indicating the total number of digits in a column; the digits are counted in the column's radix or number base, which may be binary or decimal.
Scale is also an integer value; it gives the number of digits after the decimal point. A positive scale counts digits to the right of the decimal point, while a negative scale rounds to the left of it. For example:
CREATE TABLE accounts (
    balance NUMERIC(8,2)
);
Here the precision P gives the total number of digits of the balance column in the example above, that is 8, and the scale S gives the number of digits after the decimal point, i.e. 2.
Although the numeric and decimal types are otherwise the same, there is a small difference in their treatment of precision and scale. For the numeric data type, the exact scale and precision declared are enforced; for the decimal data type, the scale is kept exactly as declared, but the implementation may sometimes use a precision equal to or greater than what the coder defines.
Later, a family of systems was created to replace or complement SQL, known as NoSQL, "Not Only SQL". The new buzzword signals that the database is no longer only a relational database.
Since it is not a relational database, NoSQL can be used widely because of its light weight and its open-source character. Relational databases, by contrast, support the ACID properties; a few examples are IBM DB2, MySQL, Microsoft SQL Server, PostgreSQL, Oracle RDBMS, and Informix.
Features of NOSQL
Although it is a lightweight database, it is very easy to use in a conventional load-balanced cloud environment.
Data remains consistent
Data scales out according to the available memory
Migration between schemas is easy and allowed
NoSQL has its own query systems instead of using existing SQL queries
It maintains reliability across all the clusters
With the features above, a NoSQL database is designed to offer high storage and processing availability at the lowest price. It also has the ability to face the challenges and bugs found among modern applications.
In a document database there is a complex structure under each key, known as a document, which may contain many different key-value pairs, key-array pairs, or nested documents.
Graph
All the information about a network is stored in nodes in what is called a graph database; this family also includes key-value and hyper-value databases.
3.6.1 Benefits of NOSQL
Considering the portability and scalability of data, NoSQL gives better results than a relational database, and much better performance as well.
It supports great volumes of data of every kind, structured, semi-structured, and unstructured, with an efficient schema.
NoSQL pairs well with object-oriented programming methods, so it is easy to use in an iterative model.
Relational SQL databases want their schema defined before we can work with the data, but a NoSQL database can be updated while running, under working conditions.
Insertion of data is allowed in NoSQL without any predefined schema, which makes changes in real-time applications easy, without interrupting any service around them. Because of this property, developing an application takes less time and integrating code is more reliable.
Auto-sharding:
NoSQL follows a procedure called sharding that distributes data across multiple machines. In recent technology stacks, many databases use this concept to spread a single data set out for scalable deployment of large data sets; MongoDB, for example, uses sharding to achieve high throughput over a vast database.
In a NoSQL database system, the capacity of the database is defined by the application's throughput. There are two methods for addressing system growth: vertical and horizontal scaling.
Vertical Scaling
Vertical scaling improves the capacity of a server by adding a more powerful CPU, increasing storage space, or increasing the size of the RAM. Depending on the workload, it can make a single server sufficient for the distributed schema. It also improves the users' access to the data by raising the speed and flexibility of the server.
Horizontal Scaling
Horizontal scaling uses parallel working concepts to improve the infrastructure: the server network is divided across multiple servers to balance the overall workload, and new servers are added to increase capacity. With this division, the capacity and speed of a single machine need not be high; efficiency improves because each machine handles a subset of the overall workload.
Horizontal scaling increases the complexity of the infrastructure, but it costs less than high-end hardware for a single machine, with low maintenance and easy deployment.
shard: each shard can be deployed as a subset, because every shard holds a subset of the sharded data.
mongos: to create an interface between client applications and the sharded cluster, a router is needed; mongos acts as the query router performing that connection. Since version 4.4, MongoDB supports hedged reads through these query routers to minimize read latency.
config servers: the config server component stores the configuration details of the sharded database and its metadata.
Reads / Writes
In MongoDB, the read and write workload can be distributed across the shards of a sharded cluster, allowing each shard to process its subset of the cluster's read/write operations. The read/write capacity can thus be scaled horizontally across the cluster, and more shards can be added whenever they are needed.
In a distributed cluster, a shard key spreads the workload over the subset held by each shard; including this key, or a compound key with it as a prefix, in a query makes targeted routing possible, which is more effective and efficient than broadcasting each operation to every shard individually.
Storage Capacity
Storage capacity grows because sharding subdivides the workload among several shards; the distribution gives each individual shard its own share of the space, and each shard needs to hold only a fraction of the total cluster data.
High Availability
In a sharded deployment, read and write operations are carried out by subdividing the workload across the cluster, and multiple shards may hold replicas of the same data. If any shard is unavailable, the data may still be accessible from another shard or subset, and with the appended key that available data can be reached easily. The shards whose data truly is unavailable cannot process read/write query requests; such requests are redirected to shards where the data is available. Even when several shards are unavailable at once, the sharded cluster can still carry out read/write operations partially.
Generally, a database holds a mixture of sharded and unsharded collections. Sharded collections are partitioned and distributed across the shards of the cluster, while unsharded collections are stored on a primary shard; each database has its own primary shard for this purpose.
To interact with a sharded cluster, we connect through a mongos router, which can reach any collection in the cluster; this interaction covers both sharded and unsharded collections. Clients never connect to a single shard to perform read or write operations.
Sharding Strategy
Two sharding strategies are available in MongoDB for distributing data across sharded clusters:
Hashed Sharding
Ranged Sharding
Hashed Sharding
In hashed sharding, a hash of the shard key field's value is computed, and each chunk is assigned a range based on the hashed shard key values.
Even when a range of shard keys is close together, their hashed values are unlikely to fall in the same chunk, so distribution based on hashed values spreads the data out effortlessly. It also suits shard keys that change monotonically in the data set. A minimal setup sketch follows.
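The sketch below enables hashed sharding from Python with pymongo, assuming a running sharded cluster reachable through a mongos router; the database name appdb, collection users, and field user_id are placeholders.

from pymongo import MongoClient

# Connect to the mongos query router (placeholder address).
client = MongoClient("mongodb://mongos.example.com:27017")

# Enable sharding for the database, then shard the collection
# on a hashed key so chunks are assigned by hashed-value ranges.
client.admin.command("enableSharding", "appdb")
client.admin.command(
    "shardCollection",
    "appdb.users",
    key={"user_id": "hashed"},
)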
Ranged Sharding
Instead of maintaining hashed key values, the data is divided into ranges based on the shard key values, and each chunk is assigned one of those ranges.
In ranged sharding, shard keys whose values are close are likely to reside in the same chunk. This allows targeted operations, routed only to the shards that contain the required data.
Ranged sharding depends on the choice of shard key. A poorly chosen shard key leads to an uneven distribution of data, which can negate the benefits of sharding and raise performance bottleneck problems.
During the extract, transform, and load stage of query processing, an external storage area called the data staging area is used. It sits between the source data and the target data, which might be a data mart, a data warehouse, or any other data repository.
Within a group of users, a common message must be communicated among them with security guarantees on the storage. A number of systems have been developed for keeping data even in an untrusted storage repository, but a priority problem exists: when a large number of query requests arrive, the system's behavior may become unpredictable. To provide security and avoid the above issues in query processing, two mechanisms are used,
First, a simple scheme produces an erasure code, i.e., an erasable temporary code that encodes the files redundantly in cloud storage. In this scheme, before encoding the file, the encoder appends special blocks to the data file; later, the keys added by the verifier can be derived from these blocks.
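As a toy illustration of the redundancy idea only (not the scheme described above), the sketch below uses simple XOR parity, the smallest erasure code: any single lost block can be rebuilt from the surviving blocks plus the parity block.

from functools import reduce

def xor_blocks(blocks):
    # Byte-wise XOR of equal-length blocks.
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

data_blocks = [b"blockAAA", b"blockBBB", b"blockCCC"]  # equal-length blocks
parity = xor_blocks(data_blocks)                       # redundant block to store

# Simulate losing one block and recovering it from the survivors + parity.
lost = data_blocks[1]
recovered = xor_blocks([data_blocks[0], data_blocks[2], parity])
assert recovered == lost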
The second mechanism is a more mature one, introduced to retrieve sufficient blocks for reconstructing the file. During this process, the verifier uses special blocks, which must not have been used before, to check correctness.
A new model was created to address data possession: a Message Authentication Code (MAC) is used to protect each block of the redundantly encoded data. To provide additional security, a third-party audit method is used. Data staging security is made stronger with these many components, but the cost is very high.
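A minimal sketch of tagging stored blocks with a MAC, using Python's standard hmac and hashlib modules; the key and blocks are placeholders, and a real audit protocol would manage keys and challenges far more carefully.

import hmac
import hashlib

key = b"owner-secret-key"          # placeholder key held by the data owner
blocks = [b"encoded block 0", b"encoded block 1"]

# Store a MAC tag next to every redundantly encoded block.
tags = [hmac.new(key, blk, hashlib.sha256).digest() for blk in blocks]

# Later, an auditor holding the key can verify a block was not altered.
def verify(block, tag):
    return hmac.compare_digest(hmac.new(key, block, hashlib.sha256).digest(), tag)

assert verify(blocks[0], tags[0])
assert not verify(b"tampered", tags[0])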
3.8 ENERGY EFFICIENT AND OPTIMAL CLOUD STORAGE
A number of methods have been proposed to support secure cloud data storage through data structures and algorithms. Among them, a new scheduling algorithm is used to provide security, and weighted routing is introduced to optimize the energy level of the network infrastructure, so that some degree of energy efficiency is achieved.
Nowadays all work happens online, so every application needs to optimize its energy level for better performance; it also needs vast storage with fast, secure retrieval.
Role-based access controls are used in most cloud networks as an efficient way of applying the rules for using the available cloud network resources.
To achieve effective routing, power-aware protocols are used: nodes with higher energy are chosen and the shortest paths among them are identified. This type of protocol fits exactly where temporal databases need efficient storage and retrieval of temporal data. When considering a temporal database, we should be aware of the mining algorithms that already exist over cloud databases. Here a temporal association rule mining algorithm is used, extended from the Apriori algorithm with new time stamps added; the stamps are used to create a temporal tree structure over the temporal database, and the tree structure is used to perform rule mining operations effectively.
A fuzzy temporal rule mining algorithm is also applied to the temporal database; it uses a new membership function to take temporal decisions by performing rule-based association. To achieve fast retrieval with minimal energy and higher security, two more algorithms are used: a temporal B-tree data structure and temporal query processing algorithms. The temporal B-tree can be manipulated with the existing data manipulation structures.
Resource allocation is the process of assigning the available resources to cloud services while taking cost into consideration. To achieve this, a Service Level Agreement (SLA) is established between the nodes offering cloud services; the SLA carries the complete information about the user's access to a particular resource. In other words, resource allocation puts all the available resources together to meet the needs of the user or request provider within the cloud's limits, in an elastic and transparent manner. It helps both the user and the service provider in cloud computing achieve their goals. Because cloud computing is service-oriented by nature, users worry about the quality and reliability of the resources and services, as they wish to complete their tasks on or before their allocated time.
Over-provisioning may happen in the cloud when multiple users wish to access the same resource at the same time; conversely, when providers try to maximize profit by setting aside only a small amount of resources, problems arise in allocating resources fairly to all requesters, which is called under-provisioning.
When a user makes a request, the provider requires a Service Level Agreement before offering resources; the status of the SLA is maintained by periodically updating each process.
The SLAs from the service requester and the provider are kept updated in a single form so that various user requirements are satisfied. A resource allocation strategy must avoid the following scenarios,
Under provisioning of resources
Over provisioning of resources
Resource contention
Resource starvation
Resource fragmentation
1. Under-provisioning of resources: an application is assigned too few resources to meet the Service Level Agreement
2. Over-provisioning of resources: more resources are assigned than the required QoS needs.
3. Resource contention: the situation where multiple users access the same resource at the same time
4. Resource starvation: also called scarcity of resources; it may happen when many users request the same resource repeatedly.
5. Resource fragmentation: resources are available but split up in such a way that no single request can use them.
Resources Allocation Strategies (RAS)
In a service-oriented architecture, multiple services are hosted on the same server, and at any time access to specific services is limited by resources such as CPU time, memory, network resources, and bandwidth.
It is important to note that the user may see those limited resources as unlimited; the tool that makes this possible is the Resource Allocation Strategy. A variety of resource allocation strategies are examined below.
1) Execution Time-Based RAS
While allocating resources, a problem of resource contention arises in parallel processing. Here the actual execution time is considered for effectively distributing resources in cloud computing, adjusting the resource allocation as the actual times are updated.
To avoid data sagging, a method was proposed in which timing constraints called deadlines are included with each request, and the requested resources are provided to the user in the form of virtual machines. Along with this, an Earliest Deadline (EDL) first scheme and a Time-Sensitive Resource (TSR) allocation scheme are used. In EDL, as with First Come First Serve (FCFS), requests are queued, but the user requests with the earliest deadlines are processed first; the main purpose is to avoid data sagging in the network. A small sketch of the idea follows.
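A minimal sketch of earliest-deadline-first ordering using Python's heapq; the requests and deadlines are invented for illustration, and a real scheduler would also track capacities and preemption.

import heapq

# Each request is (deadline, name); a min-heap pops the earliest deadline first.
requests = [(30, "backup job"), (5, "interactive query"), (12, "report")]
heapq.heapify(requests)

while requests:
    deadline, name = heapq.heappop(requests)
    print(f"allocating VM to {name!r} (deadline t={deadline})")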
TSR is a unified schema structure that integrates concepts from the other methods; the schemes were evaluated by simulating real trace data. A logging scheme is introduced along with the real-trace method to maintain effective cloud storage: a log is kept at each node to avoid communication overhead with the log monitor, and for each logged request query, the corresponding logging cloud responds.
Privacy protection for data in a cloud database is a tedious process; it can be implemented by the following methods,
Anonymity-based Method
Protection of data in cloud computing is achieved through the anonymity-based method by combining various techniques. In this method the algorithm takes care of the data as a whole: before processing, it encrypts the data to keep it from being mistreated, while the service provider can still use the hidden knowledge and search the data to obtain the required information.
The anonymity-based method differs completely from the conventional cryptographic model used to protect the privacy of providers' data. Here a key management tool keeps the security process simple and elegant, and attributes can be encrypted in the cloud environment to achieve security by deterring illegal access.
A Privacy-Preserving Architecture
A physical schema is used to protect the confidential data of the service provider in the cloud effectively; with it, all the outsourced data is protected from unauthorized access. This preserving method effectively protects the outsourced data from intrusion.
An architecture for database storage in the cloud that effectively protects the confidential data of the users is thus proposed and described in this paper. The method essentially protects the outsourced data from abuses that may be either internal or external in nature. Three fundamental elements are used in this architectural scheme: the user interface, the rule interface, and the cloud database.
All requests are first forwarded to the user interface, reaching the database engine as XML/RPC requests; they then pass to the rule interface and finally to the cloud database. Every received request is encrypted, and the resources allocated to it are monitored continuously; likewise, on the response side, the protection of the data is ensured by the system at each level to address the security concern.
The Server Re-encryption Mechanism (SRM) is a method that provides access control and monitors both requests and responses for security on the cloud network. When multiple users access the same resource at the same time, preserving access rights becomes complex for the whole system; to handle this, the encoding of requests is enabled on the network in two phases, a base phase and a surface phase.
The work is partitioned into two categories to address the security issues: the initial phase encodes all the local attributes, while the server side is handled by the surface phase. A user first creates a request to the server based on their needs. Decoding then takes place, preserving the data from the server; the SRM mechanism makes this possible. At the main server's side the decoding occurs without affecting the user's requirement in any way, and a hidden-approach method at the main server prevents misuse of the service. Complete authorization is issued to the user at or before the time of requesting a resource, and depending on the request, resources are made available to finish the work.
At the Policy Enforcement Points (PEPs), identified threats are stopped before further resource utilization; PEPs act as a second level of verification against vulnerabilities.
In addition, an obligation service is created and run on the user side to ensure occupancy of the accessing zone, so at this stage unauthorized access to the resources can be identified easily. The main service handles all the authorization-related issues mentioned above; it does not bother with resource lag, but monitors the presence of any illegal access in the network. It treats the whole network as a single infrastructure by combining all the individual services together.
Data outsourcing is accompanied by a method that preserves the data and addresses its confidentiality. The method uses a graphical representation to address the concerns present in the network and issue the protection mechanism. Nodes in the graph carry two parameters, and the graph is connected; attributes in the graph are made confidential, so that from an outsider's perspective the data in the network is preserved.
With this graph construction mechanism, nodes and attributes work together to keep the data confidential; although it is a complex system, outsiders still have a chance of eroding the security. To improve the security mechanism, fragments are created and graph coloring is suggested; in the graph coloring algorithm, the fragments alone are responsible for keeping the data preserved. Workload maintenance determines how precisely fragments are kept at the minimum level on each node of the graph, and at every node the confidentiality of the network is maintained in order to keep the graph active. To keep this fragmentation working periodically, elements are used as the task manager, where the elements are simply the metrics of the active tasks. The work can be carried out effectively by integrating the levels and the metrics at each level, providing the security needed to protect the data from outside attack.
The developed cloud system is designed to encrypt the user's message before sending it to the cloud server, and the same system decrypts the data received from the server. An individual user can use this system to encrypt a text message of their choice before transmitting it to the cloud server. The developed cloud system provides easy data storage along with encryption of the stored data.
To obtain data privacy along with the outsourcing method, PccP (Preserving Cloud Computing Privacy) is proposed. Because of the demand for cloud services, an action must be performed: the request message transmitted from a node is received, and access control is applied to the request within the cloud services. All of this work is carried out on behalf of the service provider in the consumer layer of the network's basement model.
IP address modification is another mechanism used to ensure credential and data security: it first reads the request from the service requester and hides the information completely to support data confidentiality. It also protects against IP address spoofing by outside users. It not only secures the user's data but also helps provide exactly the requested service with quality assurance.
Adeela Waqar is an analyst who introduced a method that can detect the exploitation of metadata in the cloud. If an attacker acquires sufficient knowledge about the metadata, the user's security can be thoroughly compromised in the cloud environment. Data security is still possible here by developing a sound framework that fits into the preservation mechanism. The procedure is as follows: segregation of the metadata is done first, followed by the grouping of the separated elements.
Grouping can be done in any one of the three forms given below: the first group is called the exclusive private group, the second is called the partial private group, and the third is called the non-private group. To maintain the privacy of the data, segregation is done according to the regulation. The tabulation process comes next, after the grouping is complete; in this process, segregation is carried out by distributing the tables both vertically and horizontally. The next phase is the ephemeral referential consonance phase, which helps in regrouping the metadata according to the cloud service's requirements. The ephemeral phase is incorporated to avoid data leakages completely, both before and after the segregation process.
This method shows that data injected as input can be recovered by using a key technique. The whole administered data set cannot be accessed; only a section or portion of the data is taken for further consideration. The POR protocol is the protocol used for the encryption process. Sentinel values is the name given to the values that support the encryption process; the safety level between the client and the server can be assured by the positions of these values. If the entire file, or a portion of it, is modified or deleted, there will be a considerable drop in the number of these sentinel values, which play the major role in assuring the safety level between client and server. The scheme described above ensures the following:
This scheme is efficient as well as secure, because it does not use or depend on any complicated method of the kind normally used for encrypting data; it just uses a reliable encryption technique that consumes only a very small amount of resources. As discussed earlier, because the method avoids complicated encryption techniques, it uses only a minimal amount of resources for encryption, and the overhead of sentinel blocks and data expansion is largely removed. The entire sentinel block zone in use should undergo integrity verification to ensure data protection, because the sentinel blocks are always grouped with the system's capability to protect the data. Integrity verification compares a newly presented value with the previously entered values; if the value matches any of the previous data, the method confirms the uniqueness of the combination and proceeds further. The following are the features of this method:
This method identifies the distinctiveness of the data stored in the cloud by making use of a third-party auditor. It ensures the safety of the data stored in the cloud, and any action on the data can be performed without disturbing its distinctiveness. The Merkle Hash Tree (MHT) is the structure that ensures the safety and efficiency of the system; it prevents damage to the data or any alteration of it. This method ensures the following for data safety;
In this method, a new protocol, a remote integrity verification protocol, depends fully on tags that are verified through homomorphic operations. Two types of keys are normally generated: the first type is the public key and the second is the private key. The public key is accessible to the general public, while private keys are generated to retain top-level secrets such as account credentials. A commutation process has to be done over the files; once it is complete, the public data is sent to the general public and the private data is sent to the target user holding the top-level secret data. The features of this scheme are as follows:
Enhanced security against the server, together with the feature of public verification
Increased security against threats imposed by outsiders.
5.1 MOTIVATION
The cloud is a platform where data is delivered as a service, and security is a main concern even though the network and storage are vast. Data protection and privacy are major tasks in the cloud nowadays and are becoming more complicated for service providers. Whenever users decide to use a cloud platform for storage or processing, they worry about data persistence and security. To provide security, the data providers, users, or organizations encrypt the data before it is transmitted to the cloud network, and decryption takes place every time a user wishes to access the data or change any credentials on it. Protection of the data is thus achieved, but it adds time complexity: the process takes more running time.
To rectify these problems, two-level encryption is adopted: the user performs first-level encryption on the data before transmission, and on the cloud side second-level encryption is performed for decentralization. Since entry and exit are defined clearly by the user during encryption, the need for a group manager is avoided. Along with the encryption, the access policies are also defined in detail, which avoids data duplication and resource unavailability during resource requests.
A new group key management scheme is defined to use the available resources more efficiently. It provides policies for defining access control; within these policies, privacy preservation and confidentiality are also defined for security purposes.
When considering cloud storage, security is the most needed and important factor, and many researchers are working to achieve better performance and improve the quality of service. Techniques such as the Service Level Agreement (SLA), Quality of Service (QoS), and privacy preservation are used to provide better performance. In this research work we implement the Blowfish encryption algorithm as a new technique to achieve all of the above.
The main objective of this research is to obtain security and privacy preservation for the data in cloud storage. A Blowfish encryption scheme is used to secure the data across two clouds, in a public and commercial cloud computing environment.
In cloud computing, users outsource their data and access it from anywhere at any time, and the cloud also provides backup for individuals. All of this happens over the internet, so users concentrate mainly on the security and confidentiality of their own data: issues of privacy, integrity, confidentiality, and maintainability. All of these issues are considered in constructing the method, the Blowfish encryption algorithm, which is applied to both the users' private data and public commercial data.
4.2.1 METHODOLOGY
Because of the drawbacks of the existing system, we propose a system that focuses on the data security aspects of cloud computing. As data and information will be shared with a third party, cloud computing users want to avoid an untrusted cloud provider; protecting private and important information, such as employee transaction details or employee medical records, from attackers or malicious insiders is of critical importance. In addition, the potential for migration from a single cloud to a multi-cloud environment is examined, and research related to security issues in single and multi-cloud computing is surveyed. Our proposed secure database system includes a database administrator and two non-colluding clouds. In this model, the database administrator is implemented on the client's side from the perspective of the cloud service, while the two clouds (referred to as Cloud A and Cloud B), as the server's side, provide the storage and the computation service. This briefly depicts the architecture of our outsourced secure database system. The two clouds work together to respond to each query request from the client/authorized users (availability). For privacy, these two clouds are assumed not to collude with each other, and they follow the intersection protocols to preserve the privacy of data and queries.
Bruce Schneier designed a symmetric block cipher called the Blowfish encryption scheme in 1993. The scheme appears in a large number of encryption products and was designed as an alternative to the Data Encryption Standard (DES). It takes a key of variable length, between 32 bits and 448 bits, with 128 bits as the default. The scheme is available to users free of charge because Blowfish is patent-free and license-free. It best fits applications where the key does not change often, for example an automatic file encryptor or a communication link. Blowfish is faster than DES when implemented on a 32-bit microprocessor with large caches, such as the Pentium or the PowerPC.
The Blowfish algorithm has two parts: the first is called key expansion and the second is the data encryption part. In the key expansion part, a key is converted into many sub-keys; data encryption then happens through a 16-round Feistel network, where each round applies a key-dependent permutation and a key- and data-dependent substitution [2,7]. The main contributions of this research are arranged below. We put forward a Blowfish encryption scheme using a symmetric block cipher to secure data in a cloud computing environment; this Blowfish encryption algorithm is applied to provide security to third-party data in a cloud database.
Before defining Blowfish, we need some knowledge of symmetric cipher algorithms: in a general cipher algorithm a secret key is generated and used during encryption and decryption, whereas in a symmetric cipher algorithm the same key is used for both encryption and decryption. Blowfish follows the same procedure; additionally, messages are divided into small blocks of fixed length. Blowfish's block length is 64 bits, the generated secret key has a variable length ranging from 32 bits to 448 bits, and messages are padded to a multiple of the 8-byte block size.
The symmetric key defined here suits communication processes that do not change the key often; typically, Blowfish is used on a large communication link or where encryption happens automatically. Compared with other encryption algorithms, Blowfish gives better performance: where other algorithms take a 32-bit block length, the Blowfish used here takes 64-bit blocks through a Feistel network with 16 iterations of a simple encryption function.
All data is converted into non-comprehensible data by the encryption algorithm. Blowfish uses a symmetric encryption scheme, meaning the same key is used for both encryption and decryption. In cloud storage, individual users or companies wishing to transmit data have to convert it before outsourcing it on the network, and the symmetric key is used for this conversion. A usage sketch follows.
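A minimal usage sketch with the PyCryptodome package (not the scheme's own implementation); the key and message are placeholders, ECB mode is chosen only for brevity, and the message is padded to the 8-byte block size as described above.

from Crypto.Cipher import Blowfish          # pip install pycryptodome
from Crypto.Util.Padding import pad, unpad

key = b"a-variable-length-secret"           # 4 to 56 bytes (32 to 448 bits)
plaintext = b"account balance: 900.25"

cipher = Blowfish.new(key, Blowfish.MODE_ECB)
ciphertext = cipher.encrypt(pad(plaintext, Blowfish.block_size))  # 8-byte blocks

# The same key decrypts, since Blowfish is a symmetric cipher.
decipher = Blowfish.new(key, Blowfish.MODE_ECB)
assert unpad(decipher.decrypt(ciphertext), Blowfish.block_size) == plaintext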
For example, in a cloud environment, a data owner or user who wants to send or retrieve data should follow these steps:
The 64-bit plaintext to be encrypted is denoted by X; X is divided into two equal parts, the first 32 bits being the leftmost plaintext (xL) and the second 32 bits the rightmost plaintext (xR).
The number of iterations is denoted by i.
P denotes an array of 18 32-bit sub-keys; an XOR operation is performed between a sub-key and the 32-bit leftmost plaintext, and the result is passed to the Blowfish function F.
It is an iterative process, since the output becomes the rightmost 32 bits for the next round, after the output of function F is XORed with the original rightmost 32 bits of the plaintext.
That result becomes the leftmost 32 bits, and the iteration keeps going as shown in the figure (a code sketch of this round structure follows).
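The round structure above can be written directly in Python. The sketch below is illustrative only: the toy P-array and round function F stand in for Blowfish's real key schedule and S-box-based F, so it shows the data flow of the 16 Feistel rounds, not real Blowfish output.

MASK32 = 0xFFFFFFFF

# Toy stand-ins: real Blowfish derives P[0..17] and F from the key and S-boxes.
P = [(0x9E3779B9 * (i + 1)) & MASK32 for i in range(18)]

def F(x):
    # Placeholder round function; real Blowfish combines four S-box lookups.
    return ((x * 2654435761) ^ (x >> 16)) & MASK32

def encrypt_block(xL, xR):
    for i in range(16):
        xL = (xL ^ P[i]) & MASK32   # XOR left half with the round sub-key
        xR = F(xL) ^ xR             # pass through F, XOR into the right half
        xL, xR = xR, xL             # swap halves for the next round
    xL, xR = xR, xL                 # undo the final swap
    xR ^= P[16]
    xL ^= P[17]
    return xL, xR

print(encrypt_block(0x01234567, 0x89ABCDEF))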
Example
Consider the following example: a sample database is designed with four fields, ID, unique ID, username, and ciphertext, on the cloud platform. The information in the table is stored so that it can be stored and retrieved easily whenever it is needed.
A. Throughput
Throughput is the number of transactions completed within a time period; in other words, it measures how many transactions, or what ratio of application requests, are handled per cycle. In a network, throughput depends on data transmission: the amount of data transmitted successfully from one place to another in a given time period, or the number of cycles taken to complete the transaction. It is measured in bits per second (bps), megabits per second (Mbps), or gigabits per second (Gbps).
B. Encryption/decryption time
Encryption time - the total time taken to encrypt the data at the client side
Decryption time - the total time taken to decrypt the data at the receiver side
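A small sketch of measuring these two times around the Blowfish calls from the earlier example, using Python's time.perf_counter; the measured values naturally depend on the machine and the message size.

import time
from Crypto.Cipher import Blowfish
from Crypto.Util.Padding import pad, unpad

key, message = b"a-variable-length-secret", b"x" * 1_000_000  # 1 MB test payload

t0 = time.perf_counter()
ct = Blowfish.new(key, Blowfish.MODE_ECB).encrypt(pad(message, Blowfish.block_size))
enc_time = time.perf_counter() - t0          # encryption time (client side)

t0 = time.perf_counter()
pt = unpad(Blowfish.new(key, Blowfish.MODE_ECB).decrypt(ct), Blowfish.block_size)
dec_time = time.perf_counter() - t0          # decryption time (receiver side)

print(f"encrypt: {enc_time:.4f}s  decrypt: {dec_time:.4f}s")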
C. Channel utilization
The efficiency of the channel and the packet drop ratio define the utilization of the channel. In general it is measured as the bandwidth utilization of the network, and it also depends on the throughput of the network. It is measured as a percentage of the net bit rate (bit/s).
Our proposed system includes a database administrator and two non-colluding clouds. On the client side, the database administrator establishes and processes every data request; communication between clients or nodes is established only with the admin's permission. Two clouds are maintained on the server side to provide compatible storage.
In this step, the client requests entry to the cloud system. Under the Blowfish scheme, the two clouds work together to respond to the request given by the user, who must be authorized. The two clouds on the server side are non-colluding, meaning they work individually for the sake of security, and intersection protocols provide privacy preservation for the data queries.
The system interface builds knowledge of the given data, partitions it into two parts, and stores them in the clouds; it also provides the interface between those two parts. Without either part, the private data cannot be recovered. To maintain database security, two non-colluding clouds are kept: the encrypted, outsourced data is stored in one cloud and the private keys are stored in the other. When the user makes a request, both clouds work independently, together producing the desired output.
In this method, the Blowfish symmetric block cipher algorithm encrypts the data to secure the database; the algorithm encrypts 64 bits at a time. The security mechanism addresses two concerns,
1. Key-expansion
2. Data Encryption
Key-expansion
The Blowfish mechanism uses symmetric key encryption, meaning a single key is used for both encryption and decryption, and 64-bit blocks are used when generating the encryption keys.
Data Encryption
Two clouds are maintained under this Blowfish scheme: in one cloud the requests are encrypted, and the encrypted data spread over the cloud network cannot be decrypted without the encryption key stored in the other cloud. The encryption takes place purely to provide security; a split-storage sketch follows.
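An illustrative sketch of the split, not the paper's implementation: ciphertext goes to one store (standing in for Cloud A) and the key to another (Cloud B), so neither store alone reveals the plaintext. The cloud_a/cloud_b dictionaries are placeholders for real storage services.

import secrets
from Crypto.Cipher import Blowfish
from Crypto.Util.Padding import pad, unpad

cloud_a, cloud_b = {}, {}                 # stand-ins for the two storage clouds

def store(record_id, plaintext):
    key = secrets.token_bytes(16)         # fresh 128-bit Blowfish key
    ct = Blowfish.new(key, Blowfish.MODE_ECB).encrypt(
        pad(plaintext, Blowfish.block_size))
    cloud_a[record_id] = ct               # Cloud A holds only ciphertext
    cloud_b[record_id] = key              # Cloud B holds only the key

def retrieve(record_id):
    ct, key = cloud_a[record_id], cloud_b[record_id]   # both clouds cooperate
    return unpad(Blowfish.new(key, Blowfish.MODE_ECB).decrypt(ct),
                 Blowfish.block_size)

store("user42", b"confidential balance 123.45")
assert retrieve("user42") == b"confidential balance 123.45"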
Sample output
5. CONCLUSION
Data, even a kilobyte at a time, can be encrypted and stored in the cloud database by this system; the Microsoft Azure cloud is used for the execution. The Blowfish encryption algorithm generates a unique key; using that key ID the user data is encrypted, and data retrieved from the cloud database is decrypted using the same key.
In this encryption algorithm, the generated unique ID serves as the authentication key for restoring data. The system also checks that the same unique key is not used by more than one user. The unique ID is maintained by the user along with the encryption key and plays a major role in privacy preservation in the cloud database: only with this ID can the user store and retrieve data from the cloud database.
The cloud environment provides on-demand, pay-per-use storage services that enable users to access data stored in the cloud from any part of the globe over the internet. The cloud environment also carries some demerits, including loss of governance, lock-in, isolation failure, compliance issues, and management interface failure, among others. The developed application lets a user incorporate an extra level of security by encrypting the data before sending it to the cloud environment. The application makes the encryption algorithm easy to use and guarantees the efficiency of the encryption.
Assurance of cloud data security, data availability, and integrity is achieved in this mechanism, which also ensures quality of service for user requests. It provides secure data communication with dependable cloud storage services for users; the method is an effective, flexible, distributed secure environment. Code correction can also be performed during file distribution with the authorized unique ID. Data redundancy is reduced, and hence data dependability is also guaranteed.
We presented a two-cloud architecture, with two clouds in different roles, and various interaction protocols for an outsourced database service. The two-cloud architecture presented in this paper ensures privacy preservation of the data contents, the statistical pattern, and the query pattern. Over a wide range of queries, it addresses privacy leakage of statistical properties and preserves the confidentiality of static data. Per our security analysis, the architecture meets the privacy preservation requirements, and per our performance evaluation results, the proposed architecture is efficient.