
Commit

typo
jmuth authored Apr 24, 2019
1 parent 9841843 commit ddb492f
Showing 1 changed file with 1 addition and 1 deletion.
2 changes: 1 addition & 1 deletion EnglishVersion/1-Overview.md
@@ -107,7 +107,7 @@ Since this is a simple application, let's estimate the runtime data size in each

## Logical Plan

- The actual execution procedure is more complicated than what we described above. Generally speaking, Spark firstly creates a logical plan (namely data dependency graph) for each application, then it transforms the logical plan into a physical plan (a DAG graph of map/reduce stages and map/reduce tasks). After that, concrete map/reduce tasks will be lanuched to process the input data. Let's detail the logical plan of this application:
+ The actual execution procedure is more complicated than what we described above. Generally speaking, Spark firstly creates a logical plan (namely data dependency graph) for each application, then it transforms the logical plan into a physical plan (a DAG graph of map/reduce stages and map/reduce tasks). After that, concrete map/reduce tasks will be launched to process the input data. Let's detail the logical plan of this application:

The function call of `RDD.toDebugString` can return the logical plan:

