Ritu Munshi's Editing Tool Project Report

Project Report
{Editing Tool}
18 January to 15 July 2016
Ritu Munshi
Branch: Computer Engineering
Roll No: CE-1117-2K12
Bachelor of Technology
ACKNOWLEDGEMENT
First and foremost, I would like to express my hearty thanks and indebtedness to my guide,
Mr. Prassana Jagdeshwaran, my Manager, for his enormous help and encouragement
throughout the course of this project. The directions given by him shall take me a long way in the
journey of life on which I am about to embark.
I express my heartfelt gratitude to Mr. Prakash Mahendran, my Mentor, for providing me with
the opportunity to work on this project. Without his constant involvement during my tenure here,
this project could not have seen the light of day.
I gratefully acknowledge the help and suggestions of the DCG team and my friends, who in spite of
their busy schedules and heavy workloads were always eager to help me with their warm attitude
and technical knowledge.
Finally, I would like to thank each and every person who has contributed in any manner to my
learning during this internship.
(RITU MUNSHI)
Table of Contents
I. Technologies Used
II. Project
III. Goals Provided
IV. Training Period
XII. References
Team
I was in the Publisher-to-Readers team, which comes under the Kindle team. As its name
suggests, this team provides tools to content creators so that authors can edit a book the way
they want it to appear and give the best possible view to readers, enhancing their reading
experience.
It works on:
Overall communication response time.
Tools to enhance the reading experience for users.
Technologies Used
The team I worked in uses various technologies, including:
C++11
C++ with Qt
Java
JavaScript (for UI support)
SQLite
Project
At Amazon, every intern is given a new project. The project I was assigned to work on was the
Editing Tool.
Goals Provided
In the starting days of the internship, we were provided with the goals that needed to be met
and were expected to be reached by the end of the internship, including:
Investigate whether a periodic sync from the local DB to the remote DB would be better.
Training Period
I attended the SDE Boot Camp in the first week (18th Jan to 23rd Jan).
SDE Boot Camp is an in-person technical induction training that reduces the time it takes
new engineers to ramp up on Amazon software development tools and best practices. The
training comprises Amazon developer culture, Open Source and legal policies, an introduction
to Information Security, and hands-on training on the developer toolset.
Amazon has many internal developer tools, so learning these tools is the first step to
developing and deploying software at Amazon. The Boot Camp includes training for all the
internal tools.
The following databases were evaluated for the project:
1. SQLite
2. MySQL
3. Oracle
4. PostgreSQL
5. MongoDB
6. CouchDB
Pros, cons, and licensing of the SQL databases considered:

MySQL
  Pros: Easy to use; cross platform (Android/iOS); supports network access; secure.
  Cons: Not very stable; lacks full-text search; stagnated development.
  Licensing: GPL

SQLite
  Pros: Easy; fast; file based.
  Cons: Cannot make network connections; not suitable for this project.
  Licensing: Public Domain

PostgreSQL
  Pros: Strong on ACID properties; highly secure; can perform complex queries.
  Cons: Less performant than MySQL, as it may be overkill for read-heavy operations.
  Licensing: LGPL

Oracle
  Pros: Portability; replication.
  Cons: Slow network / high network traffic.
  Licensing: Proprietary
Pros, cons, and licensing of the NoSQL databases considered:

MongoDB
  Pros: Dynamic schema; better for real-time applications; scales easily; many platforms supported.
  Cons: Official MongoDB support costs are high.

Amazon DynamoDB
  Pros: No hardware requirement; cross platform; easily scalable; automatic growth.
  Licensing: Proprietary (AWS DynamoDB).
NoSQL vs SQL:
Partition key and sort key: DynamoDB uses the partition key value as input to an internal hash
function; the output from the hash function determines the partition where the item is stored.
All items with the same partition key are stored together, in sorted order by sort key value. It is
possible for two items to have the same partition key value, but those two items must have
different sort key values.
Note
The partition key of an item is also known as its hash attribute. The term "hash attribute"
derives from DynamoDB's usage of an internal hash function to evenly distribute data
items across partitions, based on their partition key values.
The sort key of an item is also known as its range attribute. The term "range attribute"
derives from the way DynamoDB stores items with the same partition key physically
close together, in sorted order by the sort key value.
Secondary Indexes
In DynamoDB, you can read data in a table by providing primary key attribute values. If
you want to read the data using non-key attributes, you can use a secondary index to
do this. After you create a secondary index on a table, you can read data from the index
in much the same way as you do from the table. By using secondary indexes, your
applications can use many different query patterns, in addition to accessing the data by
primary key values.
Basic DynamoDB Operations
DynamoDB API:
To use the shell, you enter JavaScript code on the left side, and then click the play
button arrow to run the code. The right side shows you the results.
In this tutorial you create a table called Music and perform various operations on it,
including adding items, modifying items, and reading items. This exercise provides JavaScript
code snippets that you copy and paste into the shell.
Prerequisites
Before you begin this tutorial, you need to download and run DynamoDB so that you
can access the built-in JavaScript shell.
Download and Run DynamoDB
DynamoDB is available as an executable .jar file. It runs on Windows, Linux, Mac OS
X, and other
platforms that support Java. Follow these steps to download and run DynamoDB on
your computer.
1. Download DynamoDB for free using one of these links:
.tar.gz format: http://dynamodb-local.s3-website-us-west-2.amazonaws.com/dynamodb_local_latest.tar.gz
Step 1: Create a Table
In this step, you create a table named Music. You use the CreateTable API operation to do this.
The primary key for the table consists of two attributes that are both of string type: Artist
(partition key) and SongTitle (sort key).
1. Copy the following code and paste it into the left side of the DynamoDB JavaScript
shell window.
var params = {
    TableName: "Music",
    KeySchema: [
        { AttributeName: "Artist", KeyType: "HASH" },      // Partition key
        { AttributeName: "SongTitle", KeyType: "RANGE" }   // Sort key
    ],
    AttributeDefinitions: [
        { AttributeName: "Artist", AttributeType: "S" },
        { AttributeName: "SongTitle", AttributeType: "S" }
    ],
    ProvisionedThroughput: {
        ReadCapacityUnits: 1,
        WriteCapacityUnits: 1
    }
};
dynamodb.createTable(params, function(err, data) {
    if (err)
        console.log(JSON.stringify(err, null, 2));
    else
        console.log(JSON.stringify(data, null, 2));
});
In the code, you specify the table name, its primary key attributes and their data types.
The
ProvisionedThroughput parameter is required; however, the downloadable version of DynamoDB
ignores it.
2. Click the play button arrow to run the code. The response from DynamoDB is shown in the
right side of the window.
In the response, take note of the TableStatus. Its value should be ACTIVE. This indicates that
the Music table is ready for use.
In the code snippet, note the following:
The params object holds the parameters for the corresponding DynamoDB API operation.
The dynamodb.<operation> line invokes the operation with the correct parameters. In the
example above, the operation is createTable.
Step 2: Get Information About Tables
DynamoDB stores detailed metadata about your tables, such as table name, its primary
key attributes,
table status, and provisioned throughput settings. In this section, you retrieve information
about the Music table using the DynamoDB DescribeTable operation, and also obtain a list of
tables using the ListTables operation.
Step 2.1: Retrieve a Table Description
Use the DynamoDB DescribeTable operation to view details about a table.
1. Replace everything in the left side of the DynamoDB JavaScript shell window with the
following code:
var params = {
    TableName: "Music"
};
dynamodb.describeTable(params, function(err, data) {
    if (err)
        console.log(JSON.stringify(err, null, 2));
    else
        console.log(JSON.stringify(data, null, 2));
});
2. Click the play button arrow to run the code. The response from DynamoDB contains
a complete
description of the table.
Step 2.2: Retrieve a List of Your Tables
Use the ListTables API operation to list the names of all of your tables. This
operation does not require any parameters.
1. Replace everything in the left side of the DynamoDB JavaScript shell window with the
following code:
var params = {};
dynamodb.listTables(params, function(err, data) {
    if (err)
        console.log(JSON.stringify(err, null, 2));
    else
        console.log(JSON.stringify(data, null, 2));
});
2. Click the play button arrow to run the code. The response from DynamoDB contains
just one table called Music.
Step 3: Write Items to the Table
When you write an item to a DynamoDB table, only the primary key attribute(s) are
required. Other than the primary key, the table does not require a schema. In this
section, you write an item to a table (PutItem operation), write an item conditionally,
and also write multiple items in a single operation (the BatchWriteItem operation).
Step 3.1: Write a Single Item
Use the PutItem API operation to write an item.
1. Replace everything in the left side of the DynamoDB JavaScript shell window with the
following code:
var params = {
    TableName: "Music",
    Item: {
        "Artist": "NoOneYouKnow",
        "SongTitle": "CallMeToday",
        "AlbumTitle": "SomewhatFamous",
        "Year": 2015,
        "Price": 2.14,
        "Genre": "Country",
        "Tags": {
            "Composers": [
                "Smith",
                "Jones",
                "Davis"
            ],
            "LengthInSeconds": 214
        }
    }
};
docClient.put(params, function(err, data) {
    if (err)
        console.log(JSON.stringify(err, null, 2));
    else
        console.log(JSON.stringify(data, null, 2));
});
2. Click the play button arrow to run the code. If the write is successful, the response is
an empty map:
{}
Note the following about the item you just added:
Artist and SongTitle are primary key attributes (partition key and sort key, respectively). Both
are of string type. Every item that you add to the table must have values for these attributes.
Other attributes are AlbumTitle (string), Year (number), Price (number), Genre (string), and
Tags (map).
DynamoDB allows you to nest attributes within other attributes. The Tags map contains two
nested attributes: Composers (list) and LengthInSeconds (number).
Artist, SongTitle, AlbumTitle, Year, Price, Genre, and Tags are top-level attributes because
they are not nested within any other attributes.
"Genre":"Rock",
"PromotionInfo":{
"RadioStationsPlaying":[
"KHCR","KBQX","WTNR","WJJH"
],
"TourDates":{
"Seattle":"20150625",
"Cleveland":"20150630"
},
"Rotation":"Heavy"
}
}
}
},
{
PutRequest:{
Item:{
"Artist":"TheAcmeBand",
"SongTitle":"LookOut,World",
"AlbumTitle":"TheBuckStartsHere",
"Price":0.99,
"Genre":"Rock"
}
}
}
]
}
};
docClient.batchWrite(params,function(err,data){
if(err)
console.log(JSON.stringify(err,null,2));
else
console.log(JSON.stringify(data,null,2));
});
2. Click the play button arrow to run the code. If the batch write is successful, the
response contains
the following: "UnprocessedItems":{}. This indicates that all of the items in the
batch have been
written.
Step 4: Read an Item Using Its Primary Key
DynamoDB provides the GetItem operation for retrieving one item at a time. You can
retrieve an entire item, or a subset of its attributes. DynamoDB supports the Map and
List attribute types. These attribute types allow you to nest other attributes within them,
so that you can store complex documents in an item. You can use GetItem to
retrieve an entire document, or just some of the nested attributes within that document.
"Artist":"NoOneYouKnow",
"SongTitle":"CallMeToday"
},
ProjectionExpression:"AlbumTitle,#y",
ExpressionAttributeNames:{"#y":"Year"}
};
In the ProjectionExpression, the word Year is replaced by the token #y. The # (pound sign) is
required, and indicates that this is a placeholder. The ExpressionAttributeNames parameter
indicates that #y is to be replaced by Year at runtime.
4. Click the play button arrow to run the code. The AlbumTitle and Year attributes appear in
the response.
Step 4.3: Retrieve Nested Attributes Using Document Path Notation
DynamoDB supports a map type attribute to store documents. In the Music table, we use a map
type attribute called Tags to store information such as the list of music composers, song
duration, and so on. These are nested attributes. You can retrieve an entire document, or a
subset of these nested attributes, by specifying document path notation. A document path tells
DynamoDB where to find the attribute, even if it is deeply nested within multiple lists and maps.
In a document path, use the following operators to access nested attributes:
For a list, use square brackets: [n], where n is the element number. List elements are
zero-based, so [0] represents the first element in the list, [1] represents the second, and so on.
For a map, use a dot (.). The dot acts as a separator between elements in the map.
1. Modify the params object so that it looks like this:
var params = {
    TableName: "Music",
    Key: {
        "Artist": "NoOneYouKnow",
        "SongTitle": "CallMeToday"
    },
    ProjectionExpression: "AlbumTitle, #y, Tags.Composers[0], Tags.LengthInSeconds",
    ExpressionAttributeNames: { "#y": "Year" }
};
2. Click the play button arrow to run the code. The response contains only the top-level
and nested
attributes that were specified in ProjectionExpression.
Step 4.4: Read Multiple Items Using BatchGetItem
The GetItem operation retrieves a single item by its primary key. DynamoDB also supports the
BatchGetItem operation for reading multiple items in a single request. You specify a list of
primary keys for this operation.
The following example retrieves a group of music items. The example also specifies the
optional ProjectionExpression to retrieve only a subset of the attributes.
1. Replace everything in the left side of the DynamoDB JavaScript shell window with the
following code:
var params = {
    RequestItems: {
        "Music": {
            Keys: [
                {
                    "Artist": "NoOneYouKnow",
                    "SongTitle": "MyDogSpot"
                },
                {
                    "Artist": "NoOneYouKnow",
                    "SongTitle": "SomewhereDownTheRoad"
                },
                {
                    "Artist": "TheAcmeBand",
                    "SongTitle": "StillInLove"
                },
                {
                    "Artist": "TheAcmeBand",
                    "SongTitle": "LookOut,World"
                }
            ],
            ProjectionExpression: "PromotionInfo, CriticRating, Price"
        }
    }
};
docClient.batchGet(params, function(err, data) {
    if (err)
        console.log(JSON.stringify(err, null, 2));
    else
        console.log(JSON.stringify(data, null, 2));
});
2. Click the play button arrow to run the code. The response contains all of the attributes
specified in ProjectionExpression. If one of the items does not have an attribute, it
appears in the response as an empty map: {}
3. (Optional) Remove the ProjectionExpression entirely, and retrieve all of the attributes from
the items.
4. (Optional) Add a new ProjectionExpression that retrieves at least one nested attribute. Use
document path notation to do this.
2. Click the play button arrow to run the code. The response contains the only song by
The Acme
Band that is in heavy rotation on at least three radio stations.
Step 5.3: Scan the Table
You can use the Scan operation to retrieve all of the items in a table. In the following example
you scan the Music table.
1. Replace everything in the left side of the DynamoDB JavaScript shell window with the
following code:
var params = {
    TableName: "Music"
};
docClient.scan(params, function(err, data) {
    if (err)
        console.log(JSON.stringify(err, null, 2));
    else
        console.log(JSON.stringify(data, null, 2));
});
2. Click the play button arrow to run the code. All of the table items appear in the
response.
Step 6: Work with a Secondary Index
Without an index, you can query for items based only on the primary key. You can add indexes
to your table depending on your query patterns. DynamoDB supports two different kinds of
indexes:
Global secondary index: an index with a partition key and sort key that can be different from
those on the table. You can create or delete a global secondary index on a table at any time.
Local secondary index: an index that has the same partition key as the primary key of the
table, but a different sort key. You can only create a local secondary index when you create a
table; when you delete the table, the local secondary index is also deleted.
In this step, you add a secondary index to the Music table. Then, you query and scan the
index in the same way as you would query or scan a table.
Step 6.1: Create a Global Secondary Index
The Music table has a primary key made of Artist (partition key) and SongTitle (sort key).
Now suppose you want to query this table by Genre and find all of the Country songs.
Searching on the primary key does not help in this case. To do this, we build a secondary
index with Genre as the partition key.
To make this interesting, we use the Price attribute as the sort key. So you can now run a
query to find all Country songs with a Price of less than 0.99.
You can add an index at the time that you create a table, or later using the UpdateTable
operation.
1. Replace everything in the left side of the DynamoDB JavaScript shell window with the
following code:
var params = {
    TableName: "Music",
    AttributeDefinitions: [
        { AttributeName: "Genre", AttributeType: "S" },
        { AttributeName: "Price", AttributeType: "N" }
    ],
    GlobalSecondaryIndexUpdates: [
        {
            Create: {
                IndexName: "GenreAndPriceIndex",
                KeySchema: [
                    { AttributeName: "Genre", KeyType: "HASH" },   // Partition key
                    { AttributeName: "Price", KeyType: "RANGE" }   // Sort key
                ],
                Projection: {
                    "ProjectionType": "ALL"
                },
                ProvisionedThroughput: {
                    "ReadCapacityUnits": 1, "WriteCapacityUnits": 1
                }
            }
        }
    ]
};
dynamodb.updateTable(params, function(err, data) {
    if (err)
        console.log(JSON.stringify(err, null, 2));
    else
        console.log(JSON.stringify(data, null, 2));
});
In the code:
AttributeDefinitions lists the data types of attributes that are later defined as the partition key
and sort key of the index.
GlobalSecondaryIndexUpdates specifies the index operations. You can create an index,
update an index, or delete an index.
The ProvisionedThroughput parameter is required, but the downloadable version of DynamoDB
ignores it.
2. Click the play button arrow to run the code.
In the response, take note of the IndexStatus. Its value should be CREATING, which indicates
that the index is being built. The new index should be available for use within a few seconds.
Step 6.2: Query the Index
Now we use the index to query for all Country songs. The index has all of the data you need,
so you query the index and not the table.
1. Replace everything in the left side of the DynamoDB JavaScript shell window with the
following code:
var params = {
    TableName: "Music",
    IndexName: "GenreAndPriceIndex",
    KeyConditionExpression: "Genre = :genre",
    ExpressionAttributeValues: {
        ":genre": "Country"
    },
    ProjectionExpression: "SongTitle, Price"
};
docClient.query(params, function(err, data) {
    if (err)
        console.log(JSON.stringify(err, null, 2));
    else
        console.log(JSON.stringify(data, null, 2));
});
2. Click the play button arrow to run the code. Only the Country songs are returned.
3. Now let us query for Country songs that cost more than two dollars. Here you specify both
the partition key and sort key values for the index. Modify the params object so that it looks
like this:
var params = {
    TableName: "Music",
    IndexName: "GenreAndPriceIndex",
    KeyConditionExpression: "Genre = :genre and Price > :price",
    ExpressionAttributeValues: {
        ":genre": "Country",
        ":price": 2.00
    },
    ProjectionExpression: "SongTitle, Price"
};
4. Click the play button arrow to run the code. This query uses both of the index key
attributes (Genre
and Price), returning only the Country songs that cost more than 2.00.
Step 6.3: Scan the Index
You can scan an index (using the Scan operation) in the same way that you scan a table.
When scanning an index, you provide both the table name and the index name.
In this example, we scan the entire global secondary index you created, but retrieve only
specific attributes.
1. Replace everything in the left side of the DynamoDB JavaScript shell window with the
following code:
var params = {
    TableName: "Music",
    IndexName: "GenreAndPriceIndex",
    ProjectionExpression: "Genre, Price, SongTitle, Artist, AlbumTitle"
};
docClient.scan(params, function(err, data) {
    if (err)
        console.log(JSON.stringify(err, null, 2));
    else
        console.log(JSON.stringify(data, null, 2));
});
2. Click the play button arrow to run the code. All of the items in the index are returned.
3. (Optional) Note that there are only four items in the index (Count), but there are five items
in the table. The reason is that one of the items does not have a Price attribute, so it was not
included in GenreAndPriceIndex. Can you determine which of the songs in the Music table does
not have a Price attribute?
Step 7: Modify Items in the Table
You can modify an item in a table using the UpdateItem operation, or delete an item using the
DeleteItem operation. You can update an item by changing the values of existing attributes,
adding new attributes, or removing existing attributes. You can use keywords in the UpdateItem
operation, such as SET and REMOVE, to request specific updates.
Step 7.1: Update an Item
The UpdateItem API operation lets you do the following:
Add more attributes to an item.
Modify the values of one or more attributes in the item.
Remove attributes from the item.
To specify which operations to perform, you use an update expression. An update
expression is a string containing attribute names, operation keywords (such as SET
and REMOVE), and new attribute values.
By default, the UpdateItem operation does not return any data (empty response). You can
optionally specify the ReturnValues parameter to request attribute values as they appeared
before or after the update:
ALL_OLD returns all attribute values as they appeared before the update.
UPDATED_OLD returns only the updated attributes as they appeared before the update.
ALL_NEW returns all attribute values as they appear after the update.
UPDATED_NEW returns only the updated attributes as they appear after the update.
In this example, you perform a couple of updates to an item in the Music table.
1. The following example updates a Music table item by adding a new RecordLabel attribute
using the UpdateExpression parameter.
Replace everything in the left side of the DynamoDB JavaScript shell window with the
following code:
var params = {
    TableName: "Music",
    Key: {
        "Artist": "NoOneYouKnow",
        "SongTitle": "CallMeToday"
    },
    UpdateExpression: "SET RecordLabel = :label",
    ExpressionAttributeValues: {
        ":label": "GlobalRecords"
    },
    ReturnValues: "ALL_NEW"
};
docClient.update(params, function(err, data) {
    if (err)
        console.log(JSON.stringify(err, null, 2));
    else
        console.log(JSON.stringify(data, null, 2));
});
2. Click the play button arrow to run the code. Verify that the item in the response has a
RecordLabel attribute.
3. We now apply multiple changes to the item using the UpdateExpression parameter: change
the price, and remove one of the composers.
Modify the params object so that it looks like this:
var params = {
    TableName: "Music",
    Key: {
        "Artist": "NoOneYouKnow",
        "SongTitle": "CallMeToday"
    },
    UpdateExpression: "SET Price = :price REMOVE Tags.Composers[2]",
    ExpressionAttributeValues: {
        ":price": 0.89
    },
    ReturnValues: "ALL_NEW"
};
4. Click the play button arrow to run the code. Verify that the UpdateExpression
worked.
Specify a Conditional Write
By default, updates are performed unconditionally. You can specify a condition in the
UpdateItem operation to perform a conditional update. For example, you may want to check
whether an attribute exists before changing its value, or check the existing value and apply an
update only if it meets certain criteria.
The UpdateItem operation provides the ConditionExpression parameter for you to specify one
or more conditions.
In this example, you add an attribute only if it doesn't already exist.
1. Modify the params object from Step 7.1 so that it looks like this:
var params = {
    TableName: "Music",
    Key: {
        "Artist": "NoOneYouKnow",
        "SongTitle": "CallMeToday"
    },
    UpdateExpression: "SET RecordLabel = :label",
    ExpressionAttributeValues: {
        ":label": "NewWaveRecordings,Inc."
    },
    ConditionExpression: "attribute_not_exists(RecordLabel)",
    ReturnValues: "ALL_NEW"
};
2. Click the play button arrow to run the code. This should fail with the response The
conditional request failed, because the item already has the RecordLabel attribute.
Specify an Atomic Counter
DynamoDB supports atomic counters, where you use the UpdateItem operation to increment or
decrement the value of an existing attribute without interfering with other write requests. (All
write requests are applied in the order in which they were received.) For example, a music
player application might want to maintain a counter that is updated each time a song is played.
In this case, the application would need to increment the counter regardless of its current value.
In this example, we first use the UpdateItem operation to add an attribute (Plays) to keep track
of the number of times the song is played. Then, using another UpdateItem operation, we
increment its value by 1.
1. Modify the params object from Step 7.1 so that it looks like this:
var params = {
    TableName: "Music",
    Key: {
        "Artist": "NoOneYouKnow",
        "SongTitle": "CallMeToday"
    },
    UpdateExpression: "SET Plays = :val",
    ExpressionAttributeValues: {
        ":val": 0
    },
    ReturnValues: "UPDATED_NEW"
};
2. Click the play button arrow to run the code. The Plays attribute is added, and its value
(zero) is shown in the response.
3. Now modify the params object so that it looks like this:
var params = {
    TableName: "Music",
    Key: {
        "Artist": "NoOneYouKnow",
        "SongTitle": "CallMeToday"
    },
    UpdateExpression: "SET Plays = Plays + :incr",
    ExpressionAttributeValues: {
        ":incr": 1
    },
    ReturnValues: "UPDATED_NEW"
};
4. Click the play button arrow to run the code. The Plays attribute is incremented by one.
5. Run the code a few more times. Each time you do this, Plays is incremented.
Step 7.2: Delete an Item
You now use the DeleteItem API operation to delete an item from the table. Note that this
operation is permanent; there is no way to restore a deleted item.
1. Replace everything in the left side of the DynamoDB JavaScript shell window with the
following code:
var params = {
    TableName: "Music",
    Key: {
        Artist: "TheAcmeBand",
        SongTitle: "LookOut,World"
    }
};
docClient.delete(params, function(err, data) {
    if (err)
        console.log(JSON.stringify(err, null, 2));
    else
        console.log(JSON.stringify(data, null, 2));
});
2. Click the play button arrow to run the code. The item is deleted.
Specify a Conditional Delete
By default, a delete operation is unconditional. However, you can use the ConditionExpression
parameter to perform conditional deletes. In this example, you delete an item only if its price
is 0.
1. Modify the params object so that it looks like this:
var params = {
    TableName: "Music",
    Key: {
        Artist: "NoOneYouKnow",
        SongTitle: "MyDogSpot"
    },
    ConditionExpression: "Price = :price",
    ExpressionAttributeValues: {
        ":price": 0.00
    }
};
2. Click the play button arrow to run the code. The conditional delete fails because the
song is not
free (Price is not 0.00).
Step 8: Clean Up
In this step, you use the DeleteTable API operation to remove the table. When you do this, the
Music table, GenreAndPriceIndex, and all of the data are permanently deleted. This operation
cannot be undone.
To delete the table
1. Replace everything in the left side of the DynamoDB JavaScript shell window with the
following code:
var params = {
    TableName: "Music"
};
dynamodb.deleteTable(params, function(err, data) {
    if (err)
        console.log(JSON.stringify(err, null, 2));
    else
        console.log(JSON.stringify(data, null, 2));
});
2. Click the play button arrow to run the code to delete the table.
The table is deleted immediately.
SYNOPSIS
git *
DESCRIPTION
This tutorial explains how to import a new project into Git, make changes to it, and share
changes with other developers.
If you are instead primarily interested in using Git to fetch a project, for example, to test the
latest version, you may prefer to start with the first two chapters of The Git User's Manual.
First, note that you can get documentation for a command such as git log --graph with:
$ man git-log
or:
$ git help log
With the latter, you can use the manual viewer of your choice; see git-help[1] for more
information.
It is a good idea to introduce yourself to Git with your name and public email address before
doing any operation. The easiest way to do so is:
$ git config --global user.name "Your Name Comes Here"
$ git config --global user.email [email protected]
To place a new project under Git revision control, change into its top-level directory and run:
$ git init
You've now initialized the working directory; you may notice a new directory created, named
".git".
Next, tell Git to take a snapshot of the contents of all files under the current directory (note the .),
with git add:
$ git add .
This snapshot is now stored in a temporary staging area which Git calls the "index". You can
permanently store the contents of the index in the repository with git commit:
$ git commit
This will prompt you for a commit message. You've now stored the first version of your project
in Git.
Making changes
Modify some files, then add their updated contents to the index:
$ git add file1 file2 file3
You are now ready to commit. You can see what is about to be committed using git diff with the
--cached option:
$ git diff --cached
(Without --cached, git diff will show you any changes that you've made but not yet added to the
index.) You can also get a brief summary of the situation with git status:
$ git status
On branch master
Your branch is up-to-date with 'origin/master'.
Changes to be committed:
  (use "git reset HEAD <file>..." to unstage)

        modified:   file1
        modified:   file2
        modified:   file3
If you need to make any further adjustments, do so now, and then add any newly modified
content to the index. Finally, commit your changes with:
$ git commit
This will again prompt you for a message describing the change, and then record a new version
of the project.
Alternatively, instead of running git add beforehand, you can use
$ git commit -a
which will automatically notice any modified (but not new) files, add them to the index, and
commit, all in one step.
A note on commit messages: though not required, it's a good idea to begin the commit message
with a single short (less than 50 character) line summarizing the change, followed by a blank line
and then a more thorough description. The text up to the first blank line in a commit message is
treated as the commit title, and that title is used throughout Git. For example, git-format-patch[1]
turns a commit into email, and it uses the title on the Subject line and the rest of the commit in
the body.
Often an overview of the change is useful to get a feel for each step; for that, use:
$ git log --stat --summary
Managing branches
A single Git repository can maintain multiple branches of development. To create a new branch
named "experimental", use
$ git branch experimental
Running "git branch" lists all existing branches; the "experimental" branch is the one you just
created, and the "master" branch is a default branch that was created for you automatically. The
asterisk in that listing marks the branch you are currently on; type
$ git checkout experimental
to switch to the experimental branch. Now edit a file, commit the change, and switch back to the
master branch:
(edit file)
$ git commit -a
$ git checkout master
Check that the change you made is no longer visible, since it was made on the experimental
branch and you're back on the master branch.
You can make a different change on the master branch:
(edit file)
$ git commit -a
at this point the two branches have diverged, with different changes made in each. To merge the
changes made in experimental into master, run
$ git merge experimental
If the changes don't conflict, you're done. If there are conflicts, markers will be left in the
problematic files showing the conflict;
$ git diff
will show this. Once you've edited the files to resolve the conflicts, commit the result of the
merge with:
$ git commit -a
You can then delete the experimental branch with
$ git branch -d experimental
This command ensures that the changes in the experimental branch are already in the current
branch before deleting it.
If you develop on a branch crazy-idea, then regret it, you can always delete the branch with
$ git branch -D crazy-idea
Branches are cheap and easy, so this is a good way to try something out.
Using Git for collaboration: suppose Alice has started a new project with a Git repository in
/home/alice/project, and Bob, who has a home directory on the same machine, wants to
contribute. Bob begins with:
bob$ git clone /home/alice/project myrepo
This creates a new directory "myrepo" containing a clone of Alice's repository. The clone is on
an equal footing with the original project, possessing its own copy of the original project's
history.
When he's ready, he tells Alice to pull changes from the repository at /home/bob/myrepo. She
does this with:
alice$ cd /home/alice/project
alice$ git pull /home/bob/myrepo master
This merges the changes from Bob's "master" branch into Alice's current branch. If Alice has
made her own changes in the meantime, then she may need to manually fix any conflicts.
The "pull" command thus performs two operations: it fetches changes from a remote branch,
then merges them into the current branch.
Note that in general, Alice would want her local changes committed before initiating this "pull".
If Bob's work conflicts with what Alice did since their histories forked, Alice will use her
working tree and the index to resolve conflicts, and existing local changes will interfere with the
conflict resolution process (Git will still perform the fetch but will refuse to merge --- Alice will
have to get rid of her local changes in some way and pull again when this happens).
Alice can peek at what Bob did without merging first, using the "fetch" command; this allows
Alice to inspect what Bob did, using a special symbol "FETCH_HEAD", in order to determine if
he has anything worth pulling, like this:
alice$ git fetch /home/bob/myrepo master
alice$ git log -p HEAD..FETCH_HEAD
This operation is safe even if Alice has uncommitted local changes. The range notation
"HEAD..FETCH_HEAD" means "show everything that is reachable from the FETCH_HEAD
but exclude anything that is reachable from HEAD". Alice already knows everything that leads
to her current state (HEAD), and reviews what Bob has in his state (FETCH_HEAD) that she has
not seen with this command.
If Alice wants to visualize what Bob did since their histories forked she can issue the following
command:
$ gitk HEAD..FETCH_HEAD
This uses the same two-dot range notation we saw earlier with git log.
Alice may want to view what both of them did since they forked. She can use three-dot form
instead of the two-dot form:
$ gitk HEAD...FETCH_HEAD
This means "show everything that is reachable from either one, but exclude anything that is
reachable from both of them".
Please note that these range notation can be used with both gitk and "git log".
After inspecting what Bob did, if there is nothing urgent, Alice may decide to continue working
without pulling from Bob. If Bob's history does have something Alice would immediately need,
Alice may choose to stash her work-in-progress first, do a "pull", and then finally unstash her
work-in-progress on top of the resulting history.
When you are working in a small closely knit group, it is not unusual to interact with the same
repository over and over again. By defining remote repository shorthand, you can make it easier:
alice$ git remote add bob /home/bob/myrepo
With this, Alice can perform the first part of the "pull" operation alone using the git fetch
command without merging them with her own branch, using:
alice$ git fetch bob
Unlike the longhand form, when Alice fetches from Bob using a remote repository shorthand set
up with git remote, what was fetched is stored in a remote-tracking branch, in this case
bob/master. So after this:
alice$ git log -p master..bob/master
shows a list of all the changes that Bob made since he branched from Alice's master branch.
After examining those changes, Alice could merge the changes into her master branch:
alice$ git merge bob/master
This merge can also be done by pulling from the remote-tracking branch directly.
Note that git pull always merges into the current branch, regardless of what else is given on the
command line.
Later, Bob can update his repo with Alice's latest changes using
bob$ git pull
Note that he doesn't need to give the path to Alice's repository; when Bob cloned Alice's
repository, Git stored the location of her repository in the repository configuration, and that
location is used for pulls:
bob$ git config --get remote.origin.url
/home/alice/project
(The complete configuration created by git clone is visible using git config -l, and the git-config[1] man page explains the meaning of each option.)
Git also keeps a pristine copy of Alice's master branch under the name "origin/master":
bob$ git branch -r
origin/master
If Bob later decides to work from a different host, he can still perform clones and pulls using the
ssh protocol:
bob$ git clone alice.org:/home/alice/project myrepo
Alternatively, Git has a native protocol, or can use http; see git-pull[1] for details.
Git can also be used in a CVS-like mode, with a central repository that various users push
changes to; see git-push[1] and gitcvs-migration[7].
Exploring history
Git history is represented as a series of interrelated commits. We have already seen that the git
log command can list those commits. Note that first line of each git log entry also gives a name
for the commit:
$ git log
commit c82a22c39cbc32576f64f5c6b3f24b99ea8149c7
Author: Junio C Hamano <[email protected]>
Date:
Tue May 16 17:18:22 2006 -0700
merge-base: Clarify the comments on post processing.
We can give this name to git show to see the details about this commit.
$ git show c82a22c39cbc32576f64f5c6b3f24b99ea8149c7
But there are other ways to refer to commits. You can use any initial part of the name that is long
enough to uniquely identify the commit:
$ git show c82a22c39c
Every commit usually has one "parent" commit which points to the previous state of the project:
$ git show HEAD^ # to see the parent of HEAD
$ git show HEAD^^ # to see the grandparent of HEAD
$ git show HEAD~4 # to see the great-great grandparent of HEAD
Note that merge commits may have more than one parent:
$ git show HEAD^1 # show the first parent of HEAD (same as HEAD^)
$ git show HEAD^2 # show the second parent of HEAD
You can also give commits names of your own; after running
$ git tag v2.5 1b2e1d63ff
you can refer to 1b2e1d63ff by the name "v2.5". If you intend to share this name with other
people (for example, to identify a release version), you should create a "tag" object, and perhaps
sign it; see git-tag[1] for details.
Any Git command that needs to know a commit can take any of these names. For example:
$ git diff v2.5 HEAD
# compare the current HEAD to v2.5
$ git branch stable v2.5 # start a new branch named "stable" based
# at v2.5
$ git reset --hard HEAD^ # reset your current branch and working
# directory to its state at HEAD^
Be careful with that last command: in addition to losing any changes in the working directory, it
will also remove all later commits from this branch. If this branch is the only branch containing
those commits, they will be lost. Also, don't use git reset on a publicly-visible branch that other
developers pull from, as it will force needless merges on other developers to clean up the history.
If you need to undo changes that you have pushed, use git revert instead.
The git grep command can search for strings in any version of your project, so
$ git grep "hello" v2.5
is a quick way to search just the files that are tracked by Git.
Many Git commands also take sets of commits, which can be specified in a number of ways.
Here are some examples with git log:
$ git log v2.5..v2.6              # commits between v2.5 and v2.6
$ git log v2.5..                  # commits since v2.5
$ git log --since="2 weeks ago"   # commits from the last 2 weeks
$ git log v2.5.. Makefile         # commits since v2.5 which modify
                                  # Makefile
You can also give git log a "range" of commits where the first is not necessarily an ancestor of
the second; for example, if the tips of the branches "stable" and "master" diverged from a
common commit some time ago, then
$ git log stable..master
will list commits made in the master branch but not in the stable branch, while
$ git log master..stable
will show the list of commits made on the stable branch but not the master branch.
The git log command has a weakness: it must present commits in a list. When the history has
lines of development that diverged and then merged back together, the order in which git log
presents those commits is meaningless.
Most projects with multiple contributors (such as the Linux kernel, or Git itself) have frequent
merges, and gitk does a better job of visualizing their history. For example,
$ gitk --since="2 weeks ago" drivers/
allows you to browse any commits from the last 2 weeks of commits that modified files under
the "drivers" directory. (Note: you can adjust gitks fonts by holding down the control key while
pressing "-" or "+".)
Finally, most commands that take filenames will optionally allow you to precede any filename
by a commit, to specify a particular version of the file:
$ git diff v2.5:Makefile HEAD:Makefile.in
You can also use git show to see any such file:
$ git show v2.5:Makefile
SYNOPSIS
git *
DESCRIPTION
You should work through gittutorial[7] before reading this tutorial.
The goal of this tutorial is to introduce two fundamental pieces of Git's architecture, the object
database and the index file, and to provide the reader with everything necessary to understand
the rest of the Git documentation.
What are the 7 digits of hex that Git responded to the commit with?
We saw in part one of the tutorial that commits have names like this. It turns out that every object
in the Git history is stored under a 40-digit hex name. That name is the SHA-1 hash of the
object's contents; among other things, this ensures that Git will never store the same data twice
(since identical data is given an identical SHA-1 name), and that the contents of a Git object will
never change (since that would change the object's name as well). The 7-character hex strings here
are simply abbreviations of such 40-character-long strings. Abbreviations can be used everywhere
that the 40-character strings can be used, so long as they are unambiguous.
It is expected that the content of the commit object you created while following the example
above generates a different SHA-1 hash than the one shown above because the commit object
records the time when it was created and the name of the person performing the commit.
We can ask Git about this particular object with the cat-file command. Don't copy the 40 hex
digits from this example but use those from your own version. Note that you can shorten it to
only a few characters to save yourself typing all 40 hex digits:
$ git cat-file -t 54196cc2
commit
$ git cat-file commit 54196cc2
tree 92b8b694ffb1675e5975148e1121810081dbdffe
author J. Bruce Fields <[email protected]> 1143414668 -0500
committer J. Bruce Fields <[email protected]> 1143414668 -0500
initial commit
A tree can refer to one or more "blob" objects, each corresponding to a file. In addition, a tree
can also refer to other tree objects, thus creating a directory hierarchy. You can examine the
contents of any tree using ls-tree (remember that a long enough initial portion of the SHA-1 will
also work):
$ git ls-tree 92b8b694
100644 blob 3b18e512dba79e4c8300dd08aeb37f8e728b8dad    file.txt
Thus we see that this tree has one file in it. The SHA-1 hash is a reference to that file's data:
$ git cat-file -t 3b18e512
blob
A "blob" is just file data, which we can also examine with cat-file:
$ git cat-file blob 3b18e512
hello world
Note that this is the old file data; so the object that Git named in its response to the initial tree
was a tree with a snapshot of the directory state that was recorded by the first commit.
All of these objects are stored under their SHA-1 names inside the Git directory:
$ find .git/objects/
.git/objects/
.git/objects/pack
.git/objects/info
.git/objects/3b
.git/objects/3b/18e512dba79e4c8300dd08aeb37f8e728b8dad
.git/objects/92
.git/objects/92/b8b694ffb1675e5975148e1121810081dbdffe
.git/objects/54
.git/objects/54/196cc2703dc165cbd373a65a4dcf22d50ae7f7
.git/objects/a0
.git/objects/a0/423896973644771497bdc03eb99d5281615b51
.git/objects/d0
.git/objects/d0/492b368b66bdabf2ac1fd8c92b39d3db916e59
.git/objects/c4
.git/objects/c4/d59f390b9cfd4318117afde11d601c1085f241
and the contents of these files is just the compressed data plus a header identifying their length
and their type. The type is either a blob, a tree, a commit, or a tag.
The simplest commit to find is the HEAD commit, which we can find from .git/HEAD:
$ cat .git/HEAD
ref: refs/heads/master
As you can see, this tells us which branch we're currently on, and it tells us this by naming a file
under the .git directory, which itself contains a SHA-1 name referring to a commit object, which
we can examine with cat-file:
$ cat .git/refs/heads/master
c4d59f390b9cfd4318117afde11d601c1085f241
$ git cat-file -t c4d59f39
commit
$ git cat-file commit c4d59f39
tree d0492b368b66bdabf2ac1fd8c92b39d3db916e59
parent 54196cc2703dc165cbd373a65a4dcf22d50ae7f7
author J. Bruce Fields <[email protected]> 1143418702 -0500
committer J. Bruce Fields <[email protected]> 1143418702 -0500
add emphasis
The "tree" object here refers to the new state of the tree:
$ git ls-tree d0492b36
100644 blob a0423896973644771497bdc03eb99d5281615b51    file.txt
$ git cat-file blob a0423896
hello world!
The parent here refers to the previous commit, 54196cc2, whose tree is the tree we examined
first; that initial commit is unusual in that it lacks any parent.
Most commits have only one parent, but it is also common for a commit to have multiple
parents. In that case the commit represents a merge, with the parent references pointing to the
heads of the merged branches.
Besides blobs, trees, and commits, the only remaining type of object is a "tag", which we won't
discuss here; refer to git-tag[1] for details.
So now we know how Git uses the object database to represent a projects history:
"commit" objects refer to "tree" objects representing the snapshot of a directory tree at a
particular point in the history, and refer to "parent" commits to show how they're
connected into the project history.
"tree" objects represent the state of a single directory, associating directory names to
"blob" objects containing file data and "tree" objects containing subdirectory information.
References to commit objects at the head of each branch are stored in files under
.git/refs/heads/.
Note, by the way, that lots of commands take a tree as an argument. But as we can see above, a
tree can be referred to in many different ways: by the SHA-1 name for that tree, by the name of
a commit that refers to the tree, by the name of a branch whose head refers to that tree, etc.,
and most such commands can accept any of these names.
In command synopses, the word "tree-ish" is sometimes used to designate such an argument.
Let's modify the file again:
$ echo "hello world, again" >>file.txt
but this time, instead of immediately making the commit, let's take an intermediate step and ask
for diffs along the way to keep track of what's happening:
$ git diff
--- a/file.txt
+++ b/file.txt
@@ -1 +1,2 @@
hello world!
+hello world, again
$ git add file.txt
$ git diff
The last diff is empty, but no new commits have been made, and the head still doesn't contain
the new line:
$ git diff HEAD
diff --git a/file.txt b/file.txt
index a042389..513feba 100644
--- a/file.txt
+++ b/file.txt
@@ -1 +1,2 @@
hello world!
+hello world, again
So git diff is comparing against something other than the head. The thing that it's comparing
against is actually the index file, which is stored in .git/index in a binary format, but whose
contents we can examine with ls-files:
$ git ls-files --stage
100644 513feba2e53ebbd2532419ded848ba19de88ba00 0       file.txt
$ git cat-file -t 513feba2
blob
$ git cat-file blob 513feba2
hello world!
hello world, again
So what our git add did was store a new blob and then put a reference to it in the index file. If we
modify the file again, we'll see that the new modifications are reflected in the git diff output:
$ echo 'again?' >>file.txt
$ git diff
index 513feba..ba3da7b 100644
--- a/file.txt
+++ b/file.txt
@@ -1,2 +1,3 @@
hello world!
hello world, again
+again?
With the right arguments, git diff can also show us the difference between the working directory
and the last commit, or between the index and the last commit:
$ git diff HEAD
diff --git a/file.txt b/file.txt
index a042389..ba3da7b 100644
--- a/file.txt
+++ b/file.txt
@@ -1 +1,3 @@
hello world!
+hello world, again
+again?
$ git diff --cached
diff --git a/file.txt b/file.txt
index a042389..513feba 100644
--- a/file.txt
+++ b/file.txt
@@ -1 +1,2 @@
hello world!
+hello world, again
At any time, we can create a new commit using git commit (without the "-a" option), and verify
that the state committed only includes the changes stored in the index file, not the additional
change that is still only in our working tree:
$ git commit -m "repeat"
$ git diff HEAD
diff --git a/file.txt b/file.txt
index 513feba..ba3da7b 100644
--- a/file.txt
+++ b/file.txt
@@ -1,2 +1,3 @@
hello world!
hello world, again
+again?
So by default git commit uses the index to create the commit, not the working tree; the "-a"
option to commit tells it to first update the index with all changes in the working tree.
Finally, it's worth looking at the effect of git add on the index file:
$ echo "goodbye, world" >closing.txt
$ git add closing.txt
The effect of the git add was to add one entry to the index file:
$ git ls-files --stage
100644 8b9743b20d4b15be3955fc8d5cd2b09cd2336138 0       closing.txt
100644 513feba2e53ebbd2532419ded848ba19de88ba00 0       file.txt
And, as you can see with cat-file, this new entry refers to the current contents of the file:
$ git cat-file blob 8b9743b2
goodbye, world
The "status" command is a useful way to get a quick summary of the situation:
$ git status
On branch master
Changes to be committed:
  (use "git reset HEAD <file>..." to unstage)

        new file:   closing.txt

Changed but not updated:

        modified:   file.txt
Since the current state of closing.txt is cached in the index file, it is listed as "Changes to be
committed". Since file.txt has changes in the working directory that aren't reflected in the index,
it is marked "changed but not updated". At this point, running "git commit" would create a
commit that added closing.txt (with its new contents), but that didn't modify file.txt.
Also, note that a bare git diff shows the changes to file.txt, but not the addition of closing.txt,
because the version of closing.txt in the index file is identical to the one in the working directory.
In addition to being the staging area for new commits, the index file is also populated from the
object database when checking out a branch, and is used to hold the trees involved in a merge
operation.
When you type cc at the command line a lot of stuff happens. There
are four entities involved in the compilation process: preprocessor,
compiler, assembler, linker (see Figure 1).
Notice that you could invoke each of the above steps by hand. Since it is an
annoyance to call each part separately as well as pass the correct flags and
files, cc does this for you. For example, you could run the entire process by hand by
invoking /lib/cpp and then cc -S and then /bin/as and finally ld. If you think this is
easy, try compiling a simple program in this way.
Running a Program
When you type a.out at the command line, a whole bunch of things
must happen before your program is actually run. The loader
magically does these things for you. On UNIX systems, the loader
creates a process. This involves reading the file and creating an
address space for the process. Page table entries for the
instructions, data and program stack are created and the register
set is initialized. Then the loader executes a jump instruction to the
first instruction in the program. This generally causes a page fault
and the first page of your instructions is brought into memory. On
some systems the loader is a little more interesting. For example,
on systems like Windows NT that provide support for dynamically
loaded libraries (DLLs), the loader must resolve references to such
libraries similar to the way a linker does.
Memory
New C++11 and C++14 Language Features
Binary Literals Binary literals are now supported. Such literals are prefixed with 0B
or 0b and consist of only the digits 0 and 1. (C++14)
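A minimal sketch of binary literals (variable names and values are illustrative):

#include <iostream>

int main() {
    int mask  = 0b1010;   // binary literal (C++14), value 10
    int flags = 0B0110;   // uppercase prefix is also accepted, value 6
    std::cout << (mask | flags) << '\n';   // prints 14
    return 0;
}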
Return Type Deduction The return type of normal functions can now be deduced,
including functions with multiple return statements and recursive functions. Such
function definitions are preceded by the auto keyword as in function definitions with a
trailing return type, but the trailing return type is omitted. (C++14)
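For example, a small sketch of a deduced return type (the factorial function is illustrative):

#include <iostream>

// Return type deduced as int; the recursive call is allowed because an
// earlier return statement has already fixed the deduced type (C++14).
auto factorial(int n) {
    if (n <= 1) return 1;
    return n * factorial(n - 1);
}

int main() {
    std::cout << factorial(5) << '\n';   // prints 120
    return 0;
}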
decltype(auto) Type deduction using the auto keyword for initializing expressions
strips ref-qualifiers and top-level cv-qualifiers from the expression. decltype(auto)
preserves ref- and cv-qualifiers and can now be used anywhere that auto can be
used, except to introduce a function with an inferred or trailing return type. (C++14)
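A short sketch contrasting auto and decltype(auto) (names are illustrative):

#include <iostream>

int global = 42;
int& ref() { return global; }

int main() {
    auto a = ref();            // a is int: auto strips the reference
    decltype(auto) b = ref();  // b is int&: the ref-qualifier is preserved
    b = 7;                     // writes through the reference to global
    std::cout << global << ' ' << a << '\n';   // prints "7 42"
    return 0;
}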
Inheriting Constructors A derived class can now specify that it will inherit the
constructors of its base class, Base, by including the statement using Base::Base; in
its definition. A deriving class can only inherit all the constructors of its base class,
there is no way to inherit only specific base constructors. A deriving class cannot
inherit from multiple base classes if they have constructors that have an identical
signature, nor can the deriving class define a constructor that has an identical
signature to any of its inherited constructors. (C++11)
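A minimal sketch of inheriting constructors (Base and Derived are illustrative types):

#include <iostream>
#include <string>

struct Base {
    explicit Base(int v) : value(v) {}
    Base(int v, std::string tag) : value(v), label(std::move(tag)) {}
    int value;
    std::string label;
};

struct Derived : Base {
    using Base::Base;   // inherit every Base constructor (C++11)
};

int main() {
    Derived d1(10);              // uses Base(int)
    Derived d2(20, "answer");    // uses Base(int, std::string)
    std::cout << d1.value << ' ' << d2.label << '\n';
    return 0;
}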
Alignment Query and Control The alignment of a variable can be queried by using
the alignof() operator and controlled by using the alignas() specifier. alignof() returns
the byte boundary on which instances of the type must be allocated; for references it
returns the alignment of the referenced type, and for arrays it returns the alignment
of the element type. alignas() controls the alignment of a variable; it takes a constant
or a type, where a type is shorthand for alignas(alignof(type)). (C++11)
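A small sketch of alignof() and alignas() (the Vec4 type is illustrative):

#include <cstdint>
#include <iostream>

struct alignas(16) Vec4 {   // instances are allocated on 16-byte boundaries
    float x, y, z, w;
};

int main() {
    std::cout << alignof(Vec4) << '\n';     // prints 16
    std::cout << alignof(double) << '\n';   // typically 8
    Vec4 v{};
    // The address of a Vec4 is a multiple of 16.
    std::cout << (reinterpret_cast<std::uintptr_t>(&v) % 16) << '\n';   // prints 0
    return 0;
}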
Extended sizeof The size of a class or struct member variable can now be
determined without an instance of the class or struct by using sizeof(). (C++11)
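As a quick illustration (the Record type is hypothetical):

#include <iostream>
#include <string>

struct Record {
    int id;
    std::string name;
};

int main() {
    // sizeof on a member without an instance of Record (C++11).
    std::cout << sizeof(Record::id) << '\n';     // typically 4
    std::cout << sizeof(Record::name) << '\n';   // implementation-defined
    return 0;
}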
Attributes provide a way to extend syntax on functions, variables, types and other
program elements without defining new keywords. (C++11)
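For instance, a sketch using two standard attributes (function names are illustrative):

#include <cstdlib>

// [[noreturn]] tells the compiler this function never returns (C++11).
[[noreturn]] void fail_fast() {
    std::abort();
}

// [[deprecated]] marks an API as discouraged; callers get a warning (C++14).
[[deprecated("use new_api() instead")]] void old_api() {}

int main() {
    old_api();   // compiles, but typically emits a deprecation warning
    return 0;
}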
Thread-Safe "Magic" Statics Static local variables are now initialized in a threadsafe way, eliminating the need for manual synchronization. Only initialization is
thread-safe, use of static local variables by multiple threads must still be manually
synchronized. The thread-safe statics feature can be disabled by using the
/Zc:threadSafeInit- flag to avoid taking a dependency on the CRT. (C++11)
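A minimal sketch of a thread-safe static local (the Config type is illustrative):

#include <iostream>
#include <thread>
#include <vector>

struct Config {
    Config() { std::cout << "constructed once\n"; }
};

Config& instance() {
    // Initialization of this static local is thread-safe in C++11:
    // exactly one thread runs the constructor; the others wait for it.
    static Config cfg;
    return cfg;
}

int main() {
    std::vector<std::thread> workers;
    for (int i = 0; i < 4; ++i)
        workers.emplace_back([] { instance(); });
    for (auto& t : workers)
        t.join();
    return 0;
}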
noexcept The noexcept operator can now be used to check whether an expression
might throw an exception. The noexcept specifier can now be used to specify that a
function does not throw exceptions. (C++11)
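A short sketch of the noexcept operator and specifier (function names are illustrative):

#include <iostream>

void may_throw() { throw 1; }
void no_throw() noexcept {}   // promises not to throw

int main() {
    // The noexcept operator checks at compile time whether an
    // expression is declared as non-throwing.
    std::cout << std::boolalpha
              << noexcept(may_throw()) << '\n'   // false
              << noexcept(no_throw())  << '\n'   // true
              << noexcept(1 + 1)       << '\n';  // true
    return 0;
}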
Unrestricted Unions A union type can now contain members with non-trivial constructors.
Constructors for such unions must be defined. (C++11)
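A minimal sketch of an unrestricted union (the Value type is illustrative):

#include <iostream>
#include <new>
#include <string>

// A member with a non-trivial constructor (std::string) is allowed in a
// union since C++11, but the member's lifetime must be managed manually.
union Value {
    int number;
    std::string text;
    Value() : number(0) {}   // a user-provided constructor is required
    ~Value() {}              // destruction of the active member is up to the user
};

int main() {
    Value v;
    new (&v.text) std::string("hello");   // placement new activates the string member
    std::cout << v.text << '\n';
    v.text.~basic_string();               // destroy it explicitly before reuse
    return 0;
}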
New Character Types and Unicode Literals Character and string literals in UTF-8,
UTF-16, and UTF-32 are now supported and new character types char16_t and
char32_t have been introduced. Character literals can be prefixed with u8 (UTF-8), u
(UTF-16), or U (UTF-32) as in U'a', while string literals can additionally be prefixed
with raw-string equivalents u8R (UTF-8 raw-string), uR (UTF-16 raw-string), or UR
(UTF-32 raw-string). Universal character names can be freely used in unicode literals
as in u'\u00EF', u8"\u00EF is i", and u"\U000000ef is I". (C++11)
Digit Separators Single quotes can be inserted at regular intervals to make long
numerical literals easier to read: int x = 1'000'000; (C++14)
Null Forward Iterators The standard library now allows the creation of forward
iterators that do not refer to a container instance. Such iterators are value-initialized
and compare equal for a particular container type. Comparing a value-initialized
iterator to one that is not value-initialized is undefined. (C++14)
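For example, a small sketch:

    #include <cassert>
    #include <vector>

    void null_iterator_check()
    {
        std::vector<int>::iterator a{};   // value-initialized, refers to no container
        std::vector<int>::iterator b{};
        assert(a == b);                   // equal, because both are value-initialized (C++14)
    }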
quoted() The standard library now supports the quoted() function to make working
with quoted string values and I/O easier. With quoted(), an entire quoted string is
treated as a single entity (as strings of non-whitespace characters are in I/O streams);
in addition, escape sequences are preserved through I/O operations. (C++14)
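A short sketch of round-tripping a quoted value (the string is made up):

    #include <iomanip>
    #include <sstream>
    #include <string>

    void quoted_round_trip()
    {
        std::stringstream ss;
        ss << std::quoted("chapter \"one\"");   // writes: "chapter \"one\""

        std::string title;
        ss >> std::quoted(title);               // reads the whole quoted string back
        // title now holds: chapter "one"
    }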
exchange() The standard library now supports the std::exchange() utility function,
which assigns a new value to an object and returns its old value. For complex types,
exchange() avoids copying the old value when a move constructor is available,
avoids copying the new value if it is a temporary or is moved, and accepts any type
as the new value, taking advantage of any converting assignment operator. (C++14)
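For instance, a minimal sketch with made-up values:

    #include <string>
    #include <utility>

    void exchange_demo()
    {
        std::string state = "draft";
        // Replace the value and keep the old one in a single step;
        // the old value is moved out rather than copied.
        std::string previous = std::exchange(state, "published");
        // state == "published", previous == "draft"
    }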
get<T>() The standard library now supports the get<T>() template function to allow
tuple elements to be addressed by their type. If a tuple contains two or more
elements of the same type, the tuple can't be addressed by that type with get<T>(),
but other uniquely-typed elements can still be addressed. (C++14)
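A small sketch of addressing tuple elements by type (the values are illustrative):

    #include <string>
    #include <tuple>

    void get_by_type()
    {
        std::tuple<int, std::string, double> row{1, "page", 2.5};

        auto& name  = std::get<std::string>(row);   // unique type: allowed
        auto& score = std::get<double>(row);        // unique type: allowed
        // If the tuple held two ints, std::get<int>(row) would not compile,
        // but the std::string and double elements could still be fetched by type.
        (void)name; (void)score;
    }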
tuple_element_t The standard library now supports the tuple_element_t<I, T> type
alias which is an alias for typename tuple_element<I, T>::type. This provides some
convenience for template programmers, similar to the other metafunction type
aliases in <type_traits>. (C++14)
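For example:

    #include <tuple>
    #include <type_traits>

    using Row = std::tuple<int, double, char>;

    // Before C++14:  typename std::tuple_element<1, Row>::type
    // With the alias: std::tuple_element_t<1, Row>
    static_assert(std::is_same<std::tuple_element_t<1, Row>, double>::value,
                  "the second element of Row is a double");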
File System "V3" Technical Specification The included implementation of the File
System Technical Specification has been updated to version 3 of the specification.
[N3940]
Minimal Allocators The standard library now supports the minimal allocator
interface throughout; notable fixes include std::function, shared_ptr,
allocate_shared(), and basic_string. (C++11)
std::uncaught_exceptions() The standard library now implements
std::uncaught_exceptions(), as described in proposal N4259.
C Runtime Library
CRT Library Refactoring The CRT has been refactored into two parts. The Universal
CRT contains the code that implements the standard C runtime library. The vcruntime140.dll
(or .lib) contains version-specific code for process start-up and exception handling. The
Universal CRT has a stable API, so it can be used without changing the version number in
each release of Visual Studio. It's now a Windows operating system component that is
serviced by Windows Update. It's already installed in Windows 10. By using the Visual C++
Redistributable Package (vcredist), you can distribute it together with your applications for
earlier versions of Windows.
C99 Conformance Visual Studio 2015 fully implements the C99 Standard Library, with the
exception of any library features that depend on compiler features not yet supported by the
Visual C++ compiler (for example, <tgmath.h> is not implemented).
Performance Much of the library has been refactored to streamline and simplify header file
macro usage. This speeds up compilation and IntelliSense, and improves readability. In
addition, many stdio functions have been rewritten for both standards compliance and
improved performance.
Breaking Changes
This improved support for ISO C/C++ standards may require changes to existing code so
that it conforms to C++11 and C99, and compiles correctly in Visual Studio 2015. For more
information, see Breaking Changes in Visual C++ 2015.
The concurrency::task class and related types in ppltasks.h are no longer based on the
ConcRT runtime. They now use the Windows Threadpool as their scheduler. This only impacts
code that uses ConcRT synchronization primitives inside concurrency::task operations. Such
code should use the Windows synchronization primitives instead.
The synchronization primitives in the STL are also no longer based on ConcRT. To avoid
deadlocks, do not use STL synchronization primitives inside functions
such as concurrency::parallel_for or with the PPL asynchronous agent types.
Incremental Linking for Static Libraries Changes to static libraries that are
referenced by other code modules now link incrementally.
Algorithmic improvements have been made to the linker to decrease link times.
Improvements have been made that allow template-heavy code to build faster.
Object file size reduction Compiler and C++ standard library enhancements result
in significantly smaller object files and static libraries. These enhancements do not
affect the size of dynamically-linked libraries (DLLs) or executables (EXEs) because
the redundant code has historically been removed by the linker.
Refactoring
We have added refactoring support for C++ with the following features:
Function Extraction Move selected code into its own function. This refactoring is
available as an extension to Visual Studio on the Visual Studio Gallery.
Implement Pure Virtuals Generates function definitions for pure virtual functions
inherited by a class or structure. Multiple and recursive inheritance are supported.
Activate this refactoring from the inheriting class definition to implement all inherited
pure virtual functions, or from a base class specifier to implement pure virtual
functions from that base class only.
Move Function Definition Moves the body of a function between the source code
and header files. Activate this refactoring from the function's signature.
Convert to Raw String Literal Converts a string containing escape sequences into
a raw-string literal. Supported escape sequences are \\ (backslash), \n (new line), \t
(tab), \' (single quote), \" (double quote), and \? (question mark). Activate this feature
by right-clicking anywhere inside a string.
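As a rough before/after sketch of that refactoring (the path is made up):

    // Before: escape sequences for backslashes and quotes
    const char* title1 = "C:\\books\\chapter \"one\".html";

    // After "Convert to Raw String Literal":
    const char* title2 = R"(C:\books\chapter "one".html)";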
Find in Files has been improved by enabling subsequent results to be appended to previous
results; accumulated results can be deleted.
Solution Scanning speed has been improved, especially for large solutions.
Operations like Go To Definition are no longer blocked during solution scan except
during the initial solution scan when a new solution is opened for the first time.
Diagnostics
1. Debugger Visualizations Add Natvis debugger visualizations to your Visual Studio
project for easy management and source control integration. Natvis files can be
edited and saved during a debugging session and the debugger will automatically
pick up the changes. For more information, see this blog post.
2. Native Memory Diagnostics
a. Memory diagnostic sessions (Ctrl+Alt+F2) enable you to monitor the live
memory use of your native application during a debugging session.
b. Memory snapshots capture a momentary image of your application's heap
contents. Differences in heap state can be examined by comparing two
memory snapshots. View object types, instance values, and allocation call
stacks for each instance after stopping the application. View the call tree by
stack frame for each snapshot.
3. Improved deadlock detection and recovery when calling C++ functions from the
Watch and Immediate windows.
Targeting Windows 10
Visual Studio now supports targeting Windows 10 in C++. New project templates for
Universal Windows App development support targeting Windows 10 devices such as desktop
computers, mobile phones, tablets, HoloLens, and other devices.
Graphics diagnostics
Graphics Diagnostics has been improved with the following features:
Enhanced Graphics Event List A new Draw Calls view is added which displays
captured events and their state in a hierarchy organized by Draw Calls. You can
expand draw calls to display the device state that was current at the time of the draw
call and you can further expand each kind of state to display the events that set their
values.
Support for Windows Phone 8.1 Graphics Diagnostics now fully supports
debugging Windows Phone 8.1 apps in the Phone emulator or on a tethered phone.
Dedicated UI for Graphics Analysis The new Visual Studio Graphics Analyzer
window is a dedicated workspace for analyzing graphics frames.
Shader Edit and Apply View the impact of shader code changes in a captured log
without re-running the app.
References
1. https://msdn.microsoft.com/en-us/library/3bstk3k5.aspx
2. https://courses.cs.washington.edu/courses/cse378/97au/help/compilation.html
3. https://msdn.microsoft.com/en-us/library/dd831853.aspx4
4. https://msdn.microsoft.com/en-us/library/y4skk93w.aspx
5. http://docs.aws.amazon.com
6. https://aws.amazon.com
7. https://stackoverflow.com/questions/17931073/dynamodb-vsmongodb