
Cluster Computing
https://doi.org/10.1007/s10586-018-2074-6

E-commerce big data computing platform system based on distributed computing logistics information
Junmin Hu1

Received: 20 January 2018 / Revised: 3 February 2018 / Accepted: 7 February 2018


© Springer Science+Business Media, LLC, part of Springer Nature 2018

Abstract
With the continuous increase of the business volume of e-commerce companies, e-commerce websites generate a large amount of user behavior data. Enterprises hope to gain a deep understanding of each customer through these data and expect to form a learning relationship with customers. Based on this, this paper first elaborates the background and significance of the system's development and the status quo of related technologies at home and abroad, then carries out the overall design and gives the system realization plan, and finally designs and realizes a big data platform based on a distributed real-time computing framework. Enterprises, especially Internet companies, can provide internal data support by building a hierarchical data warehouse infrastructure and real-time data computing technology, and can then cluster, classify and predict the relevant data through mathematical statistics and mining algorithms. Each user is tagged with descriptions that capture his or her attributes, hobbies and so on, providing strong support for the external needs of enterprises and for fast data-driven business.

Keywords Distributed computing · E-commerce · Big data · Computing platform

✉ Junmin Hu
WilhelminaaDbV@yahoo.com

1 Party School of Guangxi District Organs of C. P. C, Nanning, Guangxi, China

1 Introduction

With the continuous increase of the business volume of e-commerce companies, e-commerce websites generate a large amount of user behavior data. Enterprises hope to gain a deep understanding of each customer through these data and expect to form a learning relationship with customers [1]. Businesses can translate customer data into customer knowledge: mining big data enables computers to learn about customer relationships, improving the overall operational efficiency of e-commerce companies and improving GMV and other key indicators, by taking advantage of the vast amount of information businesses hold and the data generated each time they interact with their customers [2]. Early data warehouses and recommendation engines used off-line data calculation; internal reporting was inefficient and site recommendation results lagged, affecting user experience and order placement — problems that had yet to be addressed by real-time data calculation [3]. Users generate more and more data in their daily operations due to the continual escalation of business systems and the constantly increasing number of users, and earlier data platforms, with their simple architectures, are no longer enough to support such massive amounts of data [4]. At the same time, a well-constructed data system is itself a huge systematic project: the construction period is very long and involves a long chain of ecological resources, including data collection and access, cleaning, storage and computing, data mining, visualization, etc., and each link can be built as a complex system in its own right. When all the data are put together, decoupling becomes very risky for future system integration [5]. On this basis, this paper studies an e-commerce big data computing platform system based on distributed computing.

2 State of the art

Many scholars have done corresponding research on massive network data mining. Some scholars proposed that massive data and mining tasks be


decomposed across multiple servers and processed in parallel to improve the efficiency of network data mining. They also proposed using a database to simulate a linked-list structure in order to handle large-scale tree-structure and graph models in distributed computing and to realize efficient data mining on Hadoop. Other scholars proposed applying the MapReduce idea to the Naive Bayes classification algorithm, the K-modes clustering algorithm and the ECLAT frequent item set mining algorithm, and the effectiveness of this approach for data mining was verified by experiments; they also applied the MapReduce programming model to the ant colony algorithm and realized rapid mining of web logs. These researches provide the theoretical basis for big data mining on e-commerce platforms [6]. Many scholars have also conducted research on distributed computing, in which two or more pieces of software share information with each other; these pieces of software can run on the same computer or on several different computers connected by a network [7]. The vast majority of distributed computing is based on client/server models, with two main kinds of software: client software that makes requests for information or services, and server software that provides such information or services.

3 Methodology

3.1 Design and implementation of the data warehouse system

Data warehouse technology: in the daily operations of an enterprise there are often a large number of data tables with around 50 attribute fields and more than 100 million records, which a traditional relational database cannot support. There are many good products for big data storage and computing in the field of engineering big data technology, among which the most influential is Hadoop, which provides a complete solution for big data storage and computing and solves these issues well. Our early data warehouse used a single MySQL instance, which quickly hit its bottleneck as the business developed rapidly. Migrating from MySQL to Oracle and upgrading to minicomputers with high-end storage could still satisfy most needs, but the data volume kept growing, and with additional requirements for real-time processing, accuracy, search engines and so on, the long-open-sourced Hive was selected after research. Apache Hive is a data warehouse built on top of the Hadoop architecture: when HQL is written in Hive, Hive dynamically compiles it into a MapReduce program and batch-computes the data on Hadoop. It is the basic technology selection for building our data warehouse, which can provide refining and query analysis of data.

In the realization of the data warehouse, the process of data cleaning can be simplified through hierarchical management of data. Dividing the original work into multiple steps is equivalent to dividing a complicated job into simple tasks, turning the original black box into a white box: each layer's processing logic is relatively simple and easy to understand, so we can easily ensure the accuracy of each step, and when a data error occurs we often only need to adjust one local step. The benefit is that each layer of data is decoupled and the data flows in a fixed order, making the entire data warehouse system more robust. The figure below shows the hierarchy of the data warehouse (Fig. 1).

Fig. 1 Hierarchical architecture of the data warehouse: L0 BDM (Buffering Data Model), L1 FDM (Fundamental Data Model), L2 GDM (General Data Model) together with TMP and DIM, L3 ADM (Aggregative Data Model)
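The layering in Fig. 1 maps naturally onto HQL jobs that promote data from one layer to the next, each step carrying one small, verifiable piece of the cleaning logic. Below is a minimal sketch of such a promotion step issued through the standard HiveServer2 JDBC driver; the host and the table names (bdm_orders, fdm_orders) are illustrative assumptions, not names taken from the paper.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

/**
 * Hedged sketch: promoting data from the buffering layer (L0 BDM) to the
 * fundamental layer (L1 FDM) with a single HQL statement over HiveServer2.
 * Host, port, credentials and table names are illustrative assumptions.
 */
public class LayerPromotionJob {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        try (Connection conn = DriverManager.getConnection(
                "jdbc:hive2://warehouse-host:10000/default", "etl", "");
             Statement stmt = conn.createStatement()) {
            // Cleaning is confined to this one step; downstream layers can
            // trust fdm_orders, keeping each layer's logic simple to verify.
            stmt.execute(
                "INSERT OVERWRITE TABLE fdm_orders PARTITION (dt='2018-01-20') "
                + "SELECT order_id, user_id, amount FROM bdm_orders "
                + "WHERE dt='2018-01-20' AND order_id IS NOT NULL AND amount > 0");
        }
    }
}
```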


3.2 Data knowledge and metadata management

Data knowledge and metadata management is the data of data, that is, the intermediary of the data: similar to the administrator of a library, it can record management information, business rules and algorithms as well as the data features of the warehouse and the marts, and it provides multi-dimensional metadata browsing and query services. It can also show the execution of running tasks and the completed or failed tasks in the current warehouse, and it can inform data warehouse development engineers through push notifications or alerts. Three functions are mainly included: rapid retrieval of metadata information, the data lineage (kinship) map, and rich metadata information.

Rapid retrieval of metadata information refers to locating the required metadata quickly and accurately through the search engine and multi-dimensional labels. This metadata can help data warehouse developers to accurately locate the desired theme data; newly added or modified tables carry their history, and fields and tables going on-line or off-line are clearly recorded, which aids development management and prevents data problems and duplicated work caused by negligence or personnel changes. Through the data lineage map we can trace the source of the data and understand the dynamics of related tasks. Rich metadata information covers all the metadata of the big data platform's warehouse and the data marts: in addition to the basic information, it includes all-around, multi-dimensional metadata such as partition information and loading information, as well as personalized services such as collections, visit records and comments. Items that need attention can be noted in the comment function, which is not only convenient for newcomers but also enhances the collaborative development efficiency of the data warehouse team. The figure below shows the details of the table.

3.3 Realization of the user portrait mart

The user portrait is in fact a data mart built on the upper layer of the data warehouse. Setting up a user portrait platform connects the data platform, which holds a large amount of user data, with the visual data tool platform, and lets personnel in R&D and production, user research, marketing and so on independently analyze the user characteristics of different products at any time as needed, rapidly gaining insight into users' needs by applying the data mining platform to the interaction scenarios of different users. A complete user portrait platform needs to consider a comprehensive model system. Generally speaking, the data required to build a user portrait platform is divided among three types of entities, namely users, products and channels. Each tag provides a perspective from which we observe, recognize and describe the user. The user portrait is a whole, which means that no dimension is isolated and there are links between the labels. For each type of data entity, the user portrait further decomposes the data dimensions that can be materialized into a field set. The figure below shows the data processing flow of the user portrait (Fig. 2).

First, we generate user tags to identify a user's attributes by labeling the user, reflecting the user's true characteristics through his or her website browsing behavior. Second, to define the user's life-cycle labels, we divided the user's life-cycle stage into several types according to the order situation, based on rules summarized from the business.

In the application of the user portrait, the established user portrait mart can be standardized for external output and synchronized to a unified data mart layer to facilitate downstream development calls. At the same time, a data cube with theme-based multi-dimensional analysis can directly serve analysts and engineers, connecting the associated upstream and downstream data and products with marketing users; product managers and front-line sales personnel can filter out a predetermined population in the data cube and directly call the marketing platform to issue coupons, EDM and other operations, cutting out many intermediate links and achieving efficient operations and precision marketing, thus greatly enhancing efficiency. Multidimensional analysis is one of the excellent productized applications of the user portrait: combining its many dimensions with orders, goods, traffic and other indicators can quickly realize intelligent analysis and provide professional, effective advice grounded in data comparison and analysis. We can also establish a membership growth system based on the user portrait, which is in fact the user's level label. It can meet the individual needs of members and provide better services through the membership level, giving the level label practical application value: the higher a member's loyalty and consumption frequency, the higher the membership level, and the higher the exclusive rights and services.

4 Result analysis and discussion

4.1 Engine component with real-time computing: Infobright

Data analysts have stringent requirements on the timeliness of data queries in ad hoc (AD-HOC) scenarios, hoping to answer a few urgent requests in a short time and make timely market decisions and marketing feedback. After several rounds of technical selection we chose Infobright, which solves this problem very well: it achieves excellent performance on complex queries through its knowledge grid and comprehensive optimization technology. Once imported into Infobright, the data is highly compressed and stored as "chunks", and at the same time the knowledge grid automatically creates a very compact layer of metadata containing the statistics and the relational information between the chunks. When a query is received, Infobright's query optimizer can use this metadata to decide intelligently which blocks of data are related to the query request and decompress only those. Thanks to the knowledge grid technology, Infobright needs neither special partitioning of the data nor index creation, saving query processing time and improving response speed. Therefore, Infobright was finally selected for ad hoc queries on the identified data marts, meeting the analysts' demand for fast answers to complicated questions.


Fig. 2 The overall architecture of user portrait generation. Application scene: market, customer service, operations, public relations, etc. User portrait: basic attributes, shopping features, interest preference, user rating. Behavior modeling: natural language processing, text mining, machine learning, prediction algorithm, clustering algorithm. Reference dimension: order, amount of money, consumption frequency, shopping category, gross interest rate, etc. Data collection: network log data, user behavior data, web site transaction data
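To make the life-cycle labels of Sect. 3.3 concrete, the sketch below derives a label from a user's order history. The paper only states that the life cycle is divided into several types according to the order situation; the label names and thresholds here are illustrative assumptions.

```java
import java.time.LocalDate;
import java.time.temporal.ChronoUnit;
import java.util.List;

/**
 * Hedged sketch of rule-based life-cycle labeling from order history.
 * Label names and thresholds are assumptions for illustration only.
 */
public class LifeCycleLabeler {
    public static String label(List<LocalDate> orderDates, LocalDate today) {
        if (orderDates.isEmpty()) {
            return "potential";       // registered but never ordered
        }
        LocalDate last = orderDates.stream().max(LocalDate::compareTo).get();
        long daysSinceLast = ChronoUnit.DAYS.between(last, today);
        if (daysSinceLast > 180) {
            return "churned";         // long silent, candidate for win-back EDM
        }
        return orderDates.size() >= 5 ? "loyal" : "active";
    }

    public static void main(String[] args) {
        List<LocalDate> orders = List.of(
                LocalDate.of(2017, 11, 2), LocalDate.of(2018, 1, 15));
        System.out.println(label(orders, LocalDate.of(2018, 2, 1))); // active
    }
}
```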

InfoBright's knowledge grid data architecture is shown below (Fig. 3). The front end of the data product was a visual interface for interacting with users. We hoped that, besides viewing the built-in reports, users could select different report forms and set up their own reports according to their own needs in terms of measures and dimensions; they could also create a derived measure with a simple operation based on the provided measurements when no suitable measure existed, while business people could share the reports they created within their department for departmental use, subject to the approval of departmental leaders. To meet this need, we had to splice SQL statements dynamically in the front end: by dragging and dropping, users dynamically generated SQL statements that were sent to the back end for data calculation (Fig. 4). A route calculation then needed to be performed to determine whether the request could be answered quickly by an ad hoc query or should be sent to the underlying data warehouse for solidified processing when the required resources demanded it. The routing rules were likewise summarized from experiments in a test environment with a single node, two physical CPUs with 8 cores each, a 2.6 GHz main CPU frequency and 128 GB of memory.

Fig. 3 InfoBright architecture based on the knowledge grid. The optimizer minimizes the data to decompress, iterating against the knowledge grid, and the Granular Computing Engine unzips data using the knowledge grid. The knowledge grid stores metadata rather than data: knowledge nodes hold column information and table relations, while data pack nodes hold pre-query distribution statistics for the data blocks. Each data block stores the real compressed data, 65K values per block, using a patent-pending data compression algorithm
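Because Infobright exposes a MySQL-compatible interface, the ad hoc queries described above can be issued from Java with the ordinary MySQL JDBC driver. A minimal sketch follows; the host, port, credentials and mart table name are assumptions.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

/**
 * Hedged sketch of an ad hoc aggregate against Infobright over JDBC.
 * Host, port (Infobright's default differs from MySQL's 3306), credentials
 * and the mart table name are assumptions. No index or manual partitioning
 * is created: the knowledge grid decides which compressed chunks to open.
 */
public class AdHocQuery {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://infobright-host:5029/mart", "analyst", "secret");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                 "SELECT category, SUM(amount) AS gmv FROM order_fact "
                 + "WHERE dt >= '2018-01-01' GROUP BY category")) {
            while (rs.next()) {
                System.out.println(rs.getString("category") + " " + rs.getLong("gmv"));
            }
        }
    }
}
```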


Fig. 4 SQL dynamic splicing and data routing rules architecture: the analyst front end (Analyst-web, Analyst-server) passes the business logic through the routing rules, which dispatch the generated SQL to MySQL (for reports), InfoBright, or Presto & Hive over the data warehouse (MySQL metadata DB, Oracle ETL DB)

Because this test environment was relatively close to the production environment, we imported 140 million records for the following tests (Figs. 5, 6). From the experiments we derived the routing rules: if the number of joined tables was more than 2 or 3, Presto & Hive could be considered; if the query contained DISTINCT, whether to use Hive could be decided according to the joined tables and the data volume of each table; if there was an ORDER BY, Hive could be considered; if the amount of data was too large, for example more than 300 million records, Hive could be considered; if there was a CASE WHEN, Presto & Hive could be considered; and if the WHERE clause carried conditions on several columns, whether to take Presto & Hive could be decided from the data volume of the table. Using the routing rules summarized above, we gave different weights to the different situations, realizing dynamic routing of data queries through rules and weights; a sketch of this splicing-and-routing logic follows below. Of course, we also needed to adjust the parameters constantly in production to keep optimizing query efficiency.

In practice, in order to respond quickly to user requests for computation, we built a data mart covering more than one theme on a columnar database, on top of the themed data warehouse. In the interactive interface the user dynamically adds dimensions and indicators through drag-and-drop and submits the request to the system, which dynamically generates a SQL statement from the submitted request; the routing rules then determine which calculation engine the generated SQL statement is sent to.
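The following sketch illustrates that splicing-and-routing idea: drag-and-drop selections are assembled into a SQL string, and rule weights decide which engine receives it. The thresholds and weights are illustrative assumptions distilled from the rules above, not the tuned production values.

```java
import java.util.List;

/**
 * Hedged sketch of front-end SQL splicing plus rule/weight routing.
 * Thresholds and weights are illustrative; the paper tuned them in production.
 */
public class QueryRouter {
    enum Engine { INFOBRIGHT, PRESTO_HIVE }

    static String splice(List<String> dims, List<String> measures, String table) {
        // Drag-and-drop selections become GROUP BY dimensions and aggregates.
        return "SELECT " + String.join(", ", dims) + ", "
                + String.join(", ", measures)
                + " FROM " + table
                + " GROUP BY " + String.join(", ", dims);
    }

    static Engine route(int joinedTables, boolean hasDistinct, boolean hasOrderBy,
                        boolean hasCaseWhen, long rowCount) {
        int weight = 0;                      // higher weight -> heavier engine
        if (joinedTables > 2) weight += 3;
        if (hasDistinct)      weight += 1;
        if (hasOrderBy)       weight += 2;
        if (hasCaseWhen)      weight += 1;
        if (rowCount > 300_000_000L) weight += 3;
        return weight >= 3 ? Engine.PRESTO_HIVE : Engine.INFOBRIGHT;
    }

    public static void main(String[] args) {
        String sql = splice(List.of("category"), List.of("SUM(amount)"), "order_fact");
        System.out.println(sql + " -> " + route(1, false, true, false, 140_000_000L));
    }
}
```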

Fig. 5 Single-table query with 3 indicators takes 8.9 s


Fig. 6 Two-table join query with 6 indicators takes 22 s

4.2 Engine component with real-time computing: Presto

Presto is a distributed query and execution engine that runs fully in memory, so the hardware chosen for a Presto cluster must provide large memory, 10 GbE networking and high computational power. A Presto cluster has a master–slave topology, because Presto is divided into two kinds of node services, the Coordinator and the Worker. In addition, Presto has two clients: the CLI client, a command-line client deployed on the server, and the application client, a JDBC driver through which developers can use Presto to query and compute over big data from Java code. The hardware architecture of the entire Presto cluster therefore includes four kinds of servers: Coordinator, Worker, CLI client and application client. The realization of the hardware system is shown in the figure (Fig. 7).

Fig. 7 Hardware system architecture diagram

Presto's support for multiple data sources, its data source decoupling and its ease of scaling are all attributable to its Connector design: each data source corresponds to a Connector in Presto, and you can develop your own Connector for any data source you need. Since Presto executes all queries and calculations in memory and is built on a pipeline design, each query is broken down into multiple Tasks distributed across the Workers; each Task has dependencies on its upstream and downstream Tasks in the data stream, and each Task is further subdivided into multiple operators, each representing one operation on the data stream. Once a query starts, Tasks keep running on each Worker, and every time a buffer's worth of data has been processed the results are transferred to the Task in the downstream stage, so that real-time, dynamic transmission of the data is basically guaranteed. The entry point of a query is the client side: the Presto client submits a query statement to the Coordinator, which decomposes the query into tasks executed by every Worker; the Coordinator obtains the final query result from the Worker of the single output stage (SingleStage) and returns it to the client. The realization of Presto's software system is shown in the figure (Fig. 8).

As can be seen from the figure, performing a query is divided into seven steps in Presto. Because Presto easily supports multiple data sources and mixed computation across them, we could import data from one data source into another with a simple SQL statement: data in MySQL or PostgreSQL databases can be imported into Hive via a create table ... as select ... statement, and through secondary development we then realized importing data from Hive back into MySQL. In practical applications, combined with the azkaban scheduling system, Presto execution scripts can be run at scheduled times or fixed frequencies to accomplish the related ETL jobs. This use of Presto provides a new ETL solution that is more convenient and faster than the traditional one. In real-time data computing, quasi-real-time data flow analysis mainly refers to using SQL, through the presto-kafka connector, to analyze and compute the data flow in Kafka. There were two scenarios in actual use, namely retaining historical data and querying data. The specific system architecture is shown below (Fig. 9).
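The JDBC application client mentioned above reduces this ETL pattern to a few lines of Java. A hedged sketch is shown below; the coordinator address and the catalog, schema and table names are assumptions, and the driver class is the com.facebook.presto.jdbc.PrestoDriver that shipped with Presto in this period.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

/**
 * Hedged sketch: using the Presto JDBC application client for a CTAS-style
 * ETL step that copies rows from a MySQL catalog into Hive. The coordinator
 * address and catalog/schema/table names are illustrative assumptions.
 */
public class PrestoEtl {
    public static void main(String[] args) throws Exception {
        Class.forName("com.facebook.presto.jdbc.PrestoDriver");
        try (Connection conn = DriverManager.getConnection(
                "jdbc:presto://coordinator-host:8080/hive/warehouse", "etl", null);
             Statement stmt = conn.createStatement()) {
            // Mixed-source computation: Presto reads from the mysql catalog
            // and materializes the result as a Hive table, all in memory.
            stmt.execute(
                "CREATE TABLE hive.warehouse.user_snapshot AS "
                + "SELECT id, name, created_at FROM mysql.shop.users");
        }
    }
}
```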


Fig. 8 Software system architecture diagram: the client sends the query to the Coordinator over HTTP; the Coordinator's Parser/Analyzer, Planner and Scheduler form the query plan, consulting the Data Location API; the Coordinator sends Tasks to the Workers according to the data locations; the Workers read data from the data sources through Connectors via the DataStream API; the Workers run the tasks in memory; the Coordinator collects the results from the Workers; and the client gets the final results from the Coordinator

Fig. 9 Quasi-real-time calculation of the data flow

As can be seen from the figure above, presto-hive and presto-kafka needed to cooperate to complete the analysis and calculation of Kafka's historical data.

4.3 Engine component with real-time computing: Storm

In the real-time computing scenarios above, the processed data was prepared in the data warehouse in advance. Another demand in actual application scenarios was for real-time responses to data arriving dynamically from the traditional online transaction processing system, which


was unable to support real-time processing of massive data flows. Storm, open-sourced by Twitter, was chosen after investigation and survey. We embedded tracking points in the user-facing front pages and read the users' operation logs as the data source feeding a message queue; the Storm computation engine pulled the data from the message queue, performed the computation, and displayed the results on the front page. Here we designed the architecture of the entire real-time data flow following the idea of a hierarchical data architecture (Fig. 10).

In the system realization of real-time data acquisition and analysis, we achieved real-time capture and parsing of the data logs of the mainstream data sources — the operation logs of SQL Server, MySQL, Oracle and HBase — by developing a data collection system based on the log system. After the data was collected it was passed to the message queue, which here mainly played the role of a data buffer. Considering the real-time push of data toward Storm, we used Kafka, a distributed message queue, as the data transfer pipeline in front of the Storm computing engine. Kafka is a distributed message queue system that forms the basis of the data transfer processing pipeline, including producer and consumer terminals, and it is now used by many different kinds of companies for multiple types of data pipelines and messaging systems (Fig. 11).

As shown in the figure, the overall flow is that the data sources post messages to the message queue servers in push mode, while the data consumer terminals subscribe to and consume messages from the message queue servers in pull mode. The Kafka cluster of our system contains several Producers, such as the browsing records generated by the tracking points in the web front end, or server logs; a number of Broker servers, the core of Kafka, which supports horizontal expansion (the more Brokers, the higher the throughput of the cluster); and ConsumerGroups, the data consumers, which pull data according to business demand at a certain frequency or as scheduled tasks. In between, a Zookeeper cluster performs configuration management for the overall message queue cluster: Kafka manages the cluster configuration through Zookeeper, elects leaders, and rebalances data as ConsumerGroups change. In this way we realized the message queue module and laid the foundation for the real-time data calculation above it.

In Storm's stream computation engine, the flow of the calculated data is as shown in the figure below (Fig. 12). The Spout component reads data from the external data source, Kafka, at a certain frequency and emits messages into the Topology once the message queue delivers them. The frequency is pre-set: for example, if the front page requires a refresh every 30 s, we can set the data transmission interval to 30 s. A Spout can be configured as reliable or unreliable: if a message is not successfully processed by Storm, a reliable Spout can re-transmit it, while an unreliable Spout never resends a message once it has been sent. Spout has two important methods, ack and fail; the ack method is called after Storm verifies that a message has been processed successfully, otherwise the fail method is called, and only a reliable Spout calls these two methods. The Bolt component then receives the messages sent by the data source's Spout component and processes them logically: all business operations on messages are encapsulated in Bolts, where we can filter, aggregate, store results and perform other operations. Bolts receive data from the Spout and process it in parallel, and complex stream processing can be completed by multiple Bolt components. In the figure, we defined CountBolt to compute the transaction amount of multiple orders in parallel; multiple CountBolts compute in parallel and are then summarized in MergeBolt. We pre-set the rules, for example real-time statistics showing the day's total menswear sales: the parallel CountBolts first filter out and pre-aggregate the tuples matching menswear, which are then assigned through the StreamGrouping of the data stream to MergeBolt for the final total and stored into the MySQL database.
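A minimal Storm topology sketch of the CountBolt/MergeBolt pattern just described is given below. The spout here is a stand-in that emits synthetic (category, amount) tuples; in the paper's system this role is played by a Kafka-backed spout, and the names, thresholds and parallelism are illustrative assumptions.

```java
import java.util.Map;
import java.util.Random;

import org.apache.storm.Config;
import org.apache.storm.LocalCluster;
import org.apache.storm.spout.SpoutOutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.BasicOutputCollector;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.TopologyBuilder;
import org.apache.storm.topology.base.BaseBasicBolt;
import org.apache.storm.topology.base.BaseRichSpout;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.tuple.Values;

/**
 * Hedged sketch of the CountBolt/MergeBolt pattern described above.
 * OrderSpout is a stand-in for the Kafka-backed spout used in the paper.
 */
public class SalesTopology {

    /** Stand-in spout emitting synthetic (category, amount) order tuples. */
    public static class OrderSpout extends BaseRichSpout {
        private SpoutOutputCollector collector;
        private final Random rand = new Random();

        public void open(Map conf, TopologyContext ctx, SpoutOutputCollector c) {
            collector = c;
        }

        public void nextTuple() {
            collector.emit(new Values(rand.nextBoolean() ? "menswear" : "shoes",
                    rand.nextDouble() * 100));
        }

        public void declareOutputFields(OutputFieldsDeclarer d) {
            d.declare(new Fields("category", "amount"));
        }
    }

    /** Runs with parallelism 4: keeps only menswear order amounts. */
    public static class CountBolt extends BaseBasicBolt {
        public void execute(Tuple t, BasicOutputCollector out) {
            if ("menswear".equals(t.getStringByField("category"))) {
                out.emit(new Values(t.getDoubleByField("amount")));
            }
        }

        public void declareOutputFields(OutputFieldsDeclarer d) {
            d.declare(new Fields("amount"));
        }
    }

    /** Single instance: merges the parallel streams into a running total. */
    public static class MergeBolt extends BaseBasicBolt {
        private double total = 0;

        public void execute(Tuple t, BasicOutputCollector out) {
            total += t.getDoubleByField("amount");
            // A 30 s timer flushing `total` to MySQL for the front-page
            // Kanban display is omitted from this sketch.
        }

        public void declareOutputFields(OutputFieldsDeclarer d) {
        }
    }

    public static void main(String[] args) throws Exception {
        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("orders", new OrderSpout());
        builder.setBolt("count", new CountBolt(), 4).shuffleGrouping("orders");
        builder.setBolt("merge", new MergeBolt(), 1).globalGrouping("count");
        new LocalCluster().submitTopology("sales", new Config(), builder.createTopology());
    }
}
```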

Fig. 10 Real-time data flow chart based on Storm: source data (shopping mall DB binlog, finance, intelligence, files) is extracted into a distributed message queue and consumed in real time by the computing engine, which writes results out to visualization, recommendation, advertising, news reports, O2O and other applications


Fig. 11 Real-time data flow chart based on Storm: front-end Producers push messages to the Kafka Brokers, which are coordinated through ZooKeeper; Consumers such as the Hadoop cluster, real-time monitoring, other services and the data warehouse pull the messages
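To make the push/pull flow of Fig. 11 concrete, the sketch below shows a front-end producer pushing a browsing record to the Brokers and a consumer group pulling it, using the standard Kafka Java client. The broker addresses, topic and group names are illustrative assumptions.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

/**
 * Hedged sketch of the push/pull flow in Fig. 11: a front-end producer
 * pushes click logs to the brokers, and a consumer group pulls them.
 * Broker addresses, topic and group names are illustrative assumptions.
 */
public class ClickLogPipeline {
    public static void main(String[] args) {
        Properties p = new Properties();
        p.put("bootstrap.servers", "broker1:9092,broker2:9092");
        p.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        p.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(p)) {
            // A front-end tracking point pushes a browsing record to the brokers.
            producer.send(new ProducerRecord<>("click-log", "user-42", "viewed:sku-1001"));
        }

        Properties c = new Properties();
        c.put("bootstrap.servers", "broker1:9092,broker2:9092");
        c.put("group.id", "storm-ingest");
        c.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        c.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(c)) {
            consumer.subscribe(List.of("click-log"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            for (ConsumerRecord<String, String> r : records) {
                System.out.println(r.key() + " -> " + r.value()); // pulled by the group
            }
        }
    }
}
```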

Fig. 12 The calculated data flow chart of the Spout and Bolt components: the Spout accepts order information from Kafka, several CountBolts calculate the amounts in parallel, a summarizing bolt writes the total to the MySQL database every 30 s, and the front interface reads the database for display

The front page read data from the MySQL database in real time and presented it to the customer. The picture below shows the real-time data Kanban feature realized through the Storm engine.

5 Conclusion

This paper mainly describes how an e-commerce big data platform system based on distributed computing is built, from the underlying data warehouse to the real-time computing engine and on to real-time recommendations, for the value of today's data-driven business. The metadata management system makes the platform convenient for operation and maintenance staff and warehouse engineers to maintain, and the reporting system and user portraits help marketers predict sales. Based on the user's historical degree of response to different marketing methods, marketing costs, marketing products and the relationship between them and marketing effectiveness, the target population can be locked more accurately and marketing can be better targeted, improving marketing efficiency and reducing marketing costs. On top of the recommendation system, a Hoeffding-tree-based random forest model is trained on the online behavior in the user portrait, product portrait and knowledge map, and the continuously adjusted model is fed back to the forecasting engine in time; the closed loop formed by these data makes real-time prediction more accurate. For the system architecture, it is recommended to adopt a Cassandra + Lucene solution with the storage and indexing services deployed on the same machine, improving the stability and robustness of the recall service while reducing the number of remote calls and the time delay.

References

1. Huang, T.E., Guo, Q., Sun, H.A.: Distributed computing platform supporting power system security knowledge discovery based on online simulation. IEEE Trans. Smart Grid 8(99), 1–1 (2016)
2. Wu, P.L., He, F., Zhong, Y.: An envelope analysis algorithm of experiment data based on distributed computing platform. Cyberspace Secur. 1(2), 12–14 (2017)
3. Guo, W., Liu, F.: Research on parallel algorithm based on Hadoop distributed computing platform. Int. J. Grid Distrib. Comput. 8(1), 77–78 (2015)
4. Zhang, X., Jiang, J., Zhang, X., et al.: A data transmission algorithm for distributed computing system based on maximum flow. Cluster Comput. 18(3), 1157–1169 (2015)

5. Rho, S., Park, S., Hwang, S.: Privacy enhanced data security mechanism in a large-scale distributed computing system for HTC and MTC. Int. J. Contents 12(2), 6–11 (2016)
6. Kumar, R.S., Chandrasekharan, E.: A parallel distributed computing framework for Newton-Raphson load flow analysis of large interconnected power systems. Int. J. Electr. Power Energy Syst. 73, 1–6 (2015)
7. Hofmann, M., Rünger, G.: Sustainability through flexibility: building complex simulation programs for distributed computing systems. Simul. Model. Pract. Theory 58, 65–78 (2015)

Junmin Hu (1966.4), male, a native of Zhangjiajie in Hunan, of Tujia nationality, is Vice Principal of the Party School of Guangxi District Organs of C. P. C, a Professor, and a Ph.D. in economics; his major research areas are energy economics, regional economics and international business theory.
