Introduction To Algorithms and Data Structures in Swift 4 Get Ready For Programming Job Interviews. Write Better, Faster Swift Code. (Swift Clinic Book 1) - Karoly Nyisztor
Introduction
Thank you for buying my book “Introduction to
Algorithms and Data Structures in Swift 4”. This book is
going to teach you fundamental knowledge about
algorithms and data structures.
SECTION 1
Prerequisites
This book is beginner-friendly. Prior programming
experience may be helpful, but you need not have
actually worked with Swift itself.
To implement the exercises in this book, you’ll need a
Mac with macOS 10.12.6 (Sierra) or newer. Sierra is
required because Xcode 9 won’t install on prior versions
of macOS.
You’ll also need Xcode 9 or newer. You can download
Xcode for free from the Mac App Store.
We’re going to use modern Swift 4 to implement the
source code in this course.
Swift 3 brought fundamental changes and language
refinements. Swift 4 added some useful enhancements
and new features. All the samples are compatible with the
latest Swift version. I am going to update the source code
as changes in the language make it necessary.
Our benchmark shows that the size of the input array does
not affect the run time. There are only negligible
differences in the order of microseconds.
// Function header reconstructed; the demo's actual name may differ
func generateDict(size: Int) -> [String: Int] {
    var result = [String: Int]()
    for i in 0..<size {
        let key = String(i)
        result[key] = i
    }
    return result
}
Quadratic Time
Complexity
Quadratic Time represents an algorithm whose
performance is directly proportional to the square of the
size of the input dataset.
As you can see in this graph, the runtime increases
sharply, faster than the input sizes.
The runtime grows even more sharply with the input size
in the case of cubic or quartic time complexity.
In the following demo, we are going to build a function
that creates multiplication tables. The function will use
two nested loops; because of the nested iterations, this
algorithm has a quadratic time complexity.
Now let’s switch to Xcode.
If you want to follow along with me, download
the repository from GitHub. Open the Big-O
playground from the big-o-src folder. You can
find the source code for this demo in the
“Quadratic Time” playground page.
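The multTable(size:) listing itself isn’t reproduced above, so here is a sketch of what it might look like; the exact implementation may differ, but the two nested loops are the point:

```swift
// A sketch of the multiplication-table builder assumed by the benchmark below
func multTable(size: Int) -> [Int] {
    var table = [Int]()
    guard size > 0 else { return table }
    // Two nested loops: size × size iterations, hence quadratic time
    for i in 1...size {
        for j in 1...size {
            table.append(i * j)
        }
    }
    return table
}
```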
for i in 0..<sizes.count {
    let size = sizes[i]
    let execTime = BenchTimer.measureBlock {
        _ = multTable(size: size)
    }
    print("Average multTable() execution time for \(size) elements: \(execTime.formattedTime)")
}
Logarithmic Time
Logarithmic Time represents an extremely efficient
algorithm, used by advanced algorithms like the binary
search technique.
Logarithmic time means that time goes up linearly while
the input data size goes up exponentially.
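As an illustration, binary search discards half of the remaining range at every step, which is why it runs in logarithmic time. A minimal sketch:

```swift
// Binary search on a sorted array: each iteration halves the search range,
// so the runtime grows logarithmically with the input size
func binarySearch(_ input: [Int], for value: Int) -> Int? {
    var low = 0
    var high = input.count - 1
    while low <= high {
        let mid = (low + high) / 2
        if input[mid] == value {
            return mid
        } else if input[mid] < value {
            low = mid + 1
        } else {
            high = mid - 1
        }
    }
    return nil
}
```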
Summary
We dedicated this entire chapter to the Big-O notation.
Understanding time complexity paves the road to
working with algorithms.
We’ve talked about constant time complexity - where the
execution time is constant and does not depend on the
input size.
Checking the first element of an array or retrieving an
item from a dictionary are good examples for the constant
time complexity.
Linear time complexity describes an algorithm whose
runtime grows in direct proportion to the size of the input.
For example, enumerating through the elements of an
array works in linear time.
The execution times of quadratic time algorithms go up
as a square of the input dataset size. Quadratic time
complexity is produced by a loop nested into another
loop, as we’ve seen in our multiplication table example.
Try to avoid polynomial time complexity - like quadratic,
quartic or cubic - as it can become a huge performance
bottleneck.
Logarithmic time describes highly efficient algorithms like
the binary search; the logarithmic factor also appears in
advanced algorithms like the quicksort, which shows its
benefits when working with larger data sets.
CHAPTER 3
Recursion
In programming, repetition can be described using loops,
such as the for-loop or the while loop. Another way is to
use recursion.
We encounter recursion frequently while studying
algorithms and data structures.
Thus, it is important to understand what recursion is.
I’m going to show you how recursion works through live
coding examples.
Recursion is a useful technique, yet it doesn’t come
without pitfalls. We’ll finish this chapter by
demonstrating how to avoid common issues when using
recursion in Swift projects.
SECTION 1
What’s Recursion?
By definition, recursion is a way to solve a recurring
problem by repeatedly solving similar subproblems.
In programming, we can have recursive functions. A
function is recursive if it calls itself. The call can happen
directly like in this case:
func r() {
    //...
    r()
    //...
}
Or the call can happen indirectly, through another function. For example, if r() called g() in turn, the two functions would form an indirect recursion:
func g() {
    //...
    r()
    //...
}
// The Node class; properties reconstructed from the surrounding text
class Node {
    var value: String
    var next: Node?

    init(value: String) {
        self.value = value
    }
}
Each Node can link to the next node through the next
property.
node1.next = node2
node2.next = node3
node3.next = nil
parseNodes(from: node1)
Finally, we call the parseNodes(from:) function with the first
node as input parameter.
If we run the demo, it prints the expected values. Since
the data structure is recursive, we can use recursion to
iterate through it.
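The parseNodes(from:) listing isn’t shown above; here is a sketch of what such a recursive parser might look like. The Node type is repeated so the snippet is self-contained:

```swift
// Minimal linked-list node, as used in this chapter
class Node {
    var value: String
    var next: Node?
    init(value: String) { self.value = value }
}

// Recursive traversal sketch; the book's listing may differ
func parseNodes(from node: Node?) {
    // Base case: a nil node marks the end of the list
    guard let current = node else { return }
    print(current.value)
    // Recursive case: process the rest of the list
    parseNodes(from: current.next)
}
```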
Recursion won’t necessarily produce faster or
more efficient code. But it usually provides an
elegant alternative to iterative approaches and
requires fewer lines of code.
SECTION 2
Recursion Pitfalls
Recursion is great, but it doesn’t come without pitfalls.
The biggest problem is infinite recursion. I’m going to
illustrate it using a function which calculates the sum of
the first n positive integers.
To understand the root cause, let’s quickly recap how
recursion works.
Each time a nested call occurs, a record of the current
context is made and added as a new stack frame to the top
of the stack.
Nope! Since I only check for zero, the function will cause
a runtime crash for negative input.
We must ensure that the function actually progresses
towards the base case.
For that, we need to modify the base case so that it covers
not only the value zero, but also negative values.
Here is the flawed implementation; the base case only checks for zero, so negative input never reaches it:
func badSum(n: Int) -> Int {
    if n == 0 {
        return 0
    }
    return n + badSum(n: n - 1) // recursive step (reconstructed)
}
We can fix it by replacing the if n == 0 check with a guard that also covers negative values:
func sum(n: Int) -> Int {
    guard n > 0 else {
        return 0
    }
    return n + sum(n: n - 1)
}
The Power Of
Algorithms
In this chapter, we’re going to take a closer look at the
importance and the benefits of algorithms and algorithmic
thinking.
We’ve already talked about the Big-O notation. We saw
that our choice of implementing a problem can make a
huge difference when it comes to the performance and the
scalability of our solution.
Algorithmic thinking is the ability to approach a problem
and find the most efficient technique to solve it.
To demonstrate the power of algorithms, we are going to
solve the following problems:
We’ll calculate the sum of the first N natural numbers.
Then, we are going to implement a function that, given an
array and a value, returns the indices of any two distinct
elements whose sum equals the target value.
Finally, we are going to calculate the equilibrium index of
an array.
The equilibrium or balance index represents the index
which splits the array such that the sum of elements at
lower indices is equal to the sum of items at higher
indices.
We’re going to use a brute-force approach first. Then
we’ll implement a solution with efficiency in mind.
You’ll see how some basic math and the knowledge of
Swift language features and data structures can help us in
implementing a cleaner and more performant solution.
SECTION 1
Calculate Sum(N)
Our first task is to implement a function which calculates
the sum of the first N natural numbers.
We’ll start with a naive implementation. Then, we are
going to implement a more efficient way to solve this
problem using a formula that is more than 2000 years old.
All right, so let’s switch to our Xcode playground project.
If you want to follow along with me, download
the repository from GitHub. Open the Sum(N)
playground from the algorithm-power-src
folder.
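The naive version might look like this: a single loop, so the runtime grows linearly with n (a sketch; the book’s listing may differ):

```swift
// Naive implementation: accumulate 1 + 2 + ... + n in a loop, O(n) time
func sum(n: Int) -> Int {
    guard n > 0 else { return 0 }
    var result = 0
    for i in 1...n {
        result += i
    }
    return result
}
```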
sumOptimized() does not rely on loops. Instead, it uses the
triangle numbers formula.
The new function is not only cleaner, but it also operates
in constant time; that is, its execution time doesn’t
depend on the input.
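A sketch of the formula-based version, using Gauss’ triangle-numbers formula 1 + 2 + … + n = n × (n + 1) / 2:

```swift
// Constant-time implementation: no loop, just the closed-form formula
func sumOptimized(n: Int) -> Int {
    guard n > 0 else { return 0 }
    return n * (n + 1) / 2
}
```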
You can check this by running the same performance
tests as we did for the sum() function. The results will
prove that the execution times do not vary regardless of
the input size. There will be only some minor, negligible
differences in the range of µs.
The sumOptimized() function is more efficient even for smaller
values, and the difference just grows with the input. This
chart visualizes the running times of the two functions:
The optimized sumOptimized() function’s runtime doesn’t depend
on the input size, unlike the sum() function, which runs in
linear time.
By applying this clever formula, we managed to
implement a solution with an optimal performance.
SECTION 2
// Function header reconstructed; the brute-force variant uses two nested loops
func findTwoSum(_ array: [Int], target: Int) -> (Int, Int)? {
    for i in 0..<array.count {
        let left = array[i]
        for j in (i + 1)..<array.count {
            let right = array[j]
            if left + right == target {
                return (i, j)
            }
        }
    }
    return nil
}
// Reconstructed from the description below: a diffs dictionary
// replaces the inner loop
func findTwoSumOptimized(_ array: [Int], target: Int) -> (Int, Int)? {
    var diffs = [Int: Int]()
    for i in 0..<array.count {
        let left = array[i]
        // Did an earlier element's difference match the current value?
        if let foundIndex = diffs[left] {
            return (foundIndex, i)
        }
        // Store the current index, the difference being the key
        diffs[target - left] = i
    }
    return nil
}
findTwoSumOptimized(_:target:) uses a single loop to iterate through the array.
For each number, we check whether the difference
between the target value and the given number can be
found in the dictionary called diffs.
If the difference is found, we've got our two numbers, and
we return the tuple with the indices. Else, we store the
current index (the difference being the key) and we iterate
further.
Note that both dictionary insertion and search
happen in constant time. Therefore, these
operations won't affect the time complexity of
our function.
// Function header and result array reconstructed from the description
func equilibrium(_ numbers: [Int]) -> [Int] {
    var indices = [Int]()
    let count = numbers.count
    var left = 0
    var right = 0
    for i in 0..<count {
        left = 0
        right = 0
        for j in 0..<i {
            left = left + numbers[j]
        }
        for j in i+1..<count {
            right = right + numbers[j]
        }
        if left == right {
            indices.append(i)
        }
    }
    return indices
}
// Optimized variant: one pass after computing the total sum
func equilibriumOptimized(_ numbers: [Int]) -> [Int] {
    var indices = [Int]()
    var leftSum = 0
    var sum = numbers.reduce(0, +)
    let count = numbers.count
    for i in 0..<count {
        sum = sum - numbers[i]
        if leftSum == sum {
            indices.append(i)
        }
        leftSum = leftSum + numbers[i] // reconstructed: advance the left-side sum
    }
    return indices
}
Summary
In this section, we’ve seen some practical examples of
solving problems using two different approaches.
Although the naive implementations produced the right
results, they start to show their weaknesses as the input
size gets bigger.
By using more efficient techniques, we reduced the time
complexity and - as a consequence - the execution time of
our solutions considerably.
Coming up with the optimal algorithm requires research
and deeper understanding of the problem we are trying to
solve. Math skills and the ability to apply the features of
the given programming language will help you in creating
more efficient algorithms.
The time complexity of an algorithm is crucial when it
comes to performance.
Do your best to avoid polynomial and worse
time complexities!
Generics
Generics stand at the core of the Swift standard library.
They are so deeply rooted in the language that you can't
avoid them. In most cases, you won’t even notice that
you’re using generics.
SECTION 1
Why Generics?
To illustrate the usefulness of generics, we’ll try to solve
a simple problem.
If you want to follow along with me, download
the repository from GitHub. Open the
Generics playground from the generics-src
folder. You can find the source code for this
demo in the “Pair without Generics”
playground page.
Generic Types
Wouldn’t it be cool to have only one type which can
work with any value?
Generic types come to the rescue!
If you want to follow along with me, download
the repository from GitHub. Open the
Generics playground from the generics-src
folder. You can find the source code for this
demo in the “Generic Types” playground
page.
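A generic Pair might be sketched like this; the type and property names are assumptions, not necessarily the book’s exact listing:

```swift
// One generic type works with any combination of value types
struct Pair<T1, T2> {
    let first: T1
    let second: T2
}

// No separate IntPair, StringPair, etc. needed:
let stringAndInt = Pair(first: "one", second: 2)
let twoDoubles = Pair(first: 1.5, second: 2.5)
```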
Generic Functions
Generic functions are another powerful feature of the
Swift language.
A generic function or method can work with any type.
Thus, we can avoid duplications and write cleaner code.
Let’s start with a programming challenge: we need to
implement a method which tells whether two values are
equal.
Done!
By now, you probably see where this goes.
This is not the way to go!
Implementing a new function for every new type leads to
a lot of redundant code.
Such a code-base is hard to maintain and use.
We should always avoid code repetition. And generics
help us solve this problem, too.
Let’s create the generic isEqual function.
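It might look like this (a sketch; the book’s listing may differ slightly):

```swift
// Constraining T to Equatable makes the == operator available inside the body
func isEqual<T: Equatable>(left: T, right: T) -> Bool {
    return left == right
}

// One function covers every Equatable type; no per-type overloads needed
print(isEqual(left: 42, right: 42))   // true
print(isEqual(left: "a", right: "b")) // false
```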
The Array
Arrays store values of the same type in a specific order.
The values must not be unique: each value can appear
multiple times.
In other words, we could define the Array as an ordered
sequence of non-unique elements.
If you want to follow along with me, download
the repository from GitHub. Open the
SwiftCollectionTypes playground from the
collections-src folder. You can find the source
code for this demo in the playground page
“The Array”.
Note that first and last return optional values. If the array is
empty, their value will be nil.
We don’t have this safety net when accessing the array by
index. A runtime error occurs if we try to retrieve a value
using an invalid index.
If you want to follow along with me, download
the repository from GitHub. Open the
SwiftCollectionTypes playground from the
collections-src folder. You can find the source
code for this demo in the playground page
“The Array”.
All the elements after the given index are shifted one
position to the right.
If you pass in the last index, the new element is appended
to the array.
When using insert(_:at:), make sure that the index is valid.
Otherwise, you’ll end up with a runtime error.
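For example (the array contents here are hypothetical, assumed for illustration):

```swift
// insert(_:at:) shifts all later elements one position to the right
var mutableNumbers = [1, 2, 5, 3, 1, 2]
mutableNumbers.insert(42, at: 3)
print(mutableNumbers)
// Output: [1, 2, 5, 42, 3, 1, 2]
```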
You can use the remove(at:) instance method to remove an
element from an array.
mutableNumbers.remove(at: 1)
print(mutableNumbers)
// Output: [1, 5, 3, 42, 1, 2, 11]
After the element is removed, the gap is closed.
Rule of thumb:
Always check whether the index is out of bounds before
accessing it!
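A minimal sketch of such a check (array and index are made up for illustration):

```swift
// Validate the index before subscripting to avoid a runtime crash
let values = [1, 5, 3, 42]
let index = 10
if index >= 0 && index < values.count {
    print(values[index])
} else {
    print("Index \(index) is out of bounds")
}
// Output: Index 10 is out of bounds
```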
The Array has further methods.
removeFirst() removes and returns the first element, whereas
removeLast() removes and returns the last element.
mutableNumbers = [1, 2, 5, 3, 1, 2]
let wasFirst = mutableNumbers.removeFirst()
print(mutableNumbers)
// Output: [2, 5, 3, 1, 2]
Summary
Arrays store values of the same type in an ordered
sequence. Choose the array if the order of the elements is
important and if the same value may appear multiple
times.
If the order is not important, or the values must be
unique, use a Set instead.
SECTION 4
The Set
We’ve seen that the array stores elements in a given
order. We can even have duplicates in an array.
What if we need a collection that guarantees the
uniqueness of its elements?
The Set is the answer.
Sets store unique values with no ordering, and a given
value can only appear once.
Besides, the Set exposes useful mathematical set
operations like union and subtract.
If you want to follow along with me, download
the repository from GitHub. Open the
SwiftCollectionTypes playground from the
collections-src folder. You can find the source
code for this demo in the playground page
“The Set”.
let doubles: Set = [1.5, 2.2, 5] // same as -> let doubles: Set<Double> = [1.5, 2.2, 5]
A Set declared with repeated literals will only keep one
value; the redundant values are skipped.
let onesSet: Set = [1, 1, 1, 1]
print(onesSet)
// Output: [1]
sorted() returns the elements of the set as an array, sorted
using the "<" operator.
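For example:

```swift
// A set has no defined order, but sorted() produces an ordered array
let numbers: Set = [5, 2, 3, 1, 4]
print(numbers.sorted())
// Output: [1, 2, 3, 4, 5]
```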
We can also use the forEach(_:) collection method with sets.
This method executes its closure on each element in the
Set:
numbers.forEach { value in
print(value)
}
// Output: undefined order, e.g. 5, 2, 3, 1, 4
SECTION 5
Unlike the array, the set doesn’t have indices. We can use
the contains() instance method to check whether a value
exists in the set:
var mutableStringSet: Set = ["One", "Two", "Three"]
let item = "Two"
// set.contains()
if mutableStringSet.contains(item) {
print("\(item) found in the set")
} else {
print("\(item) not found in the set")
}
// Output: Two found in the set
contains() returns a Boolean value, which lets us use it in
conditional logic, like in this example. If the element cannot
be found or if the set is empty, contains() returns false.
Regarding empty sets: we can check whether a set has
elements through the isEmpty property:
let strings = Set<String>()
if strings.isEmpty {
print("Set is empty")
}
// Output: Set is empty
mutableStringSet.remove("Three")
The call does nothing if the element is not in the set. The
remove() method returns the element that was removed
from the set. We can use this feature to check whether the
value was indeed deleted.
mutableStringSet = ["One", "Two", "Three"]
if let removedElement = mutableStringSet.remove("Ten") {
print("\(removedElement) was removed from the Set")
} else {
print("\"Ten\" not found in the Set")
}
// Output: "Ten" not found in the Set
mutableStringSet.removeAll()
Set Operations
The Set exposes useful methods that let us perform
fundamental operations.
If you want to follow along with me, download
the repository from GitHub. Open the
SwiftCollectionTypes playground from the
collections-src folder. You can find the source
code for this demo in the playground page
“The Set”.
Union
union() creates a new set with all the elements of the two
sets. If the two sets have elements in common, only one
instance will appear in the resulting set.
let primes: Set = [3, 5, 7, 11]
let odds: Set = [1, 3, 5, 7]
// set.union(otherSet)
let union = primes.union(odds)
print(union.sorted())
// Output: [1, 3, 5, 7, 11]
Intersection
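intersection() creates a new set containing only the elements that appear in both sets. A sketch, reusing the sets from the union example:

```swift
let primes: Set = [3, 5, 7, 11]
let odds: Set = [1, 3, 5, 7]
// set.intersection(otherSet)
let common = primes.intersection(odds)
print(common.sorted())
// Output: [3, 5, 7]
```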
Subtract
We can also subtract one set from another.
The result will contain those values which are only in the
source set and not in the subtracted set.
let primes: Set = [3, 5, 7, 11]
let odds: Set = [1, 3, 5, 7]
// set.subtracting(otherSet); call reconstructed to match the description
let difference = primes.subtracting(odds)
print(difference)
// Output: [11]
Symmetric Difference
The symmetricDifference() method returns a Set with the
elements that are only in either set, but not both.
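A sketch, again reusing the same two sets:

```swift
let primes: Set = [3, 5, 7, 11]
let odds: Set = [1, 3, 5, 7]
// set.symmetricDifference(otherSet): elements in exactly one of the two sets
let symmetricDiff = primes.symmetricDifference(odds)
print(symmetricDiff.sorted())
// Output: [1, 11]
```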
The Set exposes many other useful methods, like the ones
which let us test for equality and membership. I suggest
you download the sample projects and start
experimenting with sets.
SECTION 7
Hashable inherits from the Equatable protocol. If a protocol
inherits from another one, all conforming types must also
implement the requirements defined in that protocol.
Conforming to the Equatable protocol is straightforward,
too. We have to implement the “==” operator. The
equality operator is a static method that tells whether two
instances of the given type are equal or not. We consider
two SimpleStruct instances to be equal if their identifiers are
equal.
struct SimpleStruct: Hashable {
    var identifier: String

    // Hashable conformance (Swift 4 requires hashValue)
    var hashValue: Int {
        return identifier.hashValue
    }

    // Equatable conformance: instances are equal if their identifiers are equal
    static func ==(lhs: SimpleStruct, rhs: SimpleStruct) -> Bool {
        return lhs.identifier == rhs.identifier
    }
}
The Dictionary
The Dictionary, also known as a hash map, stores key-
value pairs.
Use this collection type if you need to look up values
based on their identifiers.
Each value must be associated with a key that is unique.
The order of the keys and values is undefined.
Just like the other Swift collection types, the Dictionary is
also implemented as a generic type.
SECTION 9
Creating Dictionaries
If you want to follow along with me, download
the repository from GitHub. Open the
SwiftCollectionTypes playground from the
collections-src folder. You can find the source
code for this demo in the playground page
“The Dictionary”.
// Specify the key and the value type to create an empty dictionary
var dayOfWeek = Dictionary<Int, String>()
Heterogeneous
Dictionaries
When creating a dictionary, the types of the keys and
values are supposed to be consistent; for example, all keys
are of type Int and all values are of type String.
Type inference won't work if the types of the dictionary
literals are mixed.
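If we really need mixed values, we can state the type explicitly; a sketch using Any as the value type (keys and values made up for illustration):

```swift
// Explicit [String: Any] type annotation, since inference fails for mixed literals
let mixedMap: [String: Any] = ["name": "Swift", "version": 4]
```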
dayOfWeek.updateValue("Tue", forKey: 2)
dayOfWeek[1] = nil
You can achieve the same result (with more typing) by
calling the removeValue(forKey:) method:
dayOfWeek.removeValue(forKey: 2)
print(dayOfWeek)
// Output: [:]
Basic Sorting
Understanding the inner workings and knowing how to
implement the basic sorting algorithms gives you a strong
foundation to building other, more sophisticated
algorithms.
We’re going to analyze how each algorithm works and
we’ll implement them from scratch using Swift.
First of all, what is sorting?
Sorting is a technique for arranging data in a
logical sequence according to some well-
defined rules.
Selection Sort
Selection Sort is one of the simplest sorting algorithms.
It starts by finding the smallest item and exchanging it
with the first one. Then, it finds the next smallest item
and exchanges it with the second item. The process goes
on until the entire sequence is sorted.
Implementation
If you want to follow along with me, download
the repository from GitHub and open the
SelectionSort playground from the basic-
sorting-src folder.
// The complete selectionSort(); the outer and inner loops are
// reconstructed around the swap step shown in the fragment
func selectionSort(_ input: [Int]) -> [Int] {
    var result = input
    let count = result.count
    guard count > 1 else { return result }
    for index in 0..<(count - 1) {
        // Find the smallest item in the remaining unsorted range
        var indexLowest = index
        for forwardIndex in (index + 1)..<count {
            if result[forwardIndex] < result[indexLowest] {
                indexLowest = forwardIndex
            }
        }
        if index != indexLowest {
            result.swapAt(index, indexLowest)
        }
    }
    return result
}
Insertion Sort
Insertion sort is a basic sorting algorithm, which works
by analyzing each element and inserting it into its proper
place, while larger elements move one position to the
right.
Insertion sort has quadratic time complexity. However,
the performance of the insertion sort is largely affected by
the initial order of the elements in the sequence.
Implementation
In the following demo, we are going to implement the
insertion sort algorithm in Swift.
We’ll visualize how insertion sort works. Then, we are
going to analyze the time complexity of this algorithm.
We will conduct an interesting experiment: we’ll
compare the efficiency of the insertion sort with the
selection sort algorithm that was presented in the previous
episode. There will be three distinct use-cases: first, we’ll
use a shuffled array as input, then a partially sorted one,
and finally an already sorted array.
If you want to follow along with me, download
the repository from GitHub and open the
InsertionSort playground from the basic-
sorting-src folder.
We clone the input array first. This copy will hold the
sorted result.
The insertionSort() function uses two loops.
The outer loop progresses as we process the array.
let count = result.count
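Putting it together, the full function might look like this (a sketch; the book’s listing may differ in details):

```swift
func insertionSort(_ input: [Int]) -> [Int] {
    var result = input          // clone the input array
    let count = result.count
    guard count > 1 else { return result }
    // Outer loop: grows the sorted subsection one element at a time
    for index in 1..<count {
        var current = index
        // Inner loop: shift larger elements one position to the right
        while current > 0 && result[current] < result[current - 1] {
            result.swapAt(current - 1, current)
            current -= 1
        }
    }
    return result
}
```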
Insertion sort trace on the array [4, 3, 2, 1, 0]; the bar separates the sorted subsection from the unsorted rest:
4 | 3, 2, 1, 0
3, 4 | 2, 1, 0
2, 3, 4 | 1, 0
1, 2, 3, 4 | 0
0, 1, 2, 3, 4 |
The number of swaps will be equal to the number of
items in the sorted subsection. To calculate the time
complexity, we need to sum up the number of compares
and the number of swaps:
(n - 1) × n/2 + (n - 1) × n/2 = n² - n
So, the worst-case complexity of the insertion sort is n² - n.
When using Big-O notation, we discard the low-order
term, which gives O(n²): quadratic running time.
In the average case, when each element is halfway in order,
the number of swaps and compares is halved compared to
the worst case. This gives us (n² - n) / 2, which is also a
quadratic time complexity.
To summarize: the insertion sort performs in linear time
for already or almost sorted arrays. When the input is
shuffled or in reverse order, the insertion sort will run in
quadratic time.
execTime = BenchTimer.measureBlock {
    _ = selectionSort(random1000)
}
print("Average selectionSort() execution time for \(inputSize) elements: \(execTime.formattedTime)")
execTime = BenchTimer.measureBlock {
    _ = selectionSort(progressiveArray1000)
}
print("Average selectionSort() execution time for \(inputSize) elements: \(execTime.formattedTime)")
Bubble Sort
The Bubble Sort algorithm works by repeatedly
evaluating adjacent items and swapping their position if
they are in the wrong order.
In the following demo, we are going to implement the
bubble sort algorithm.
As with the other algorithms, we are going to analyze the
time complexity of the Bubble sort, and visualize how it
works. Then, we are going to compare the Bubble sort,
the insertion sort, and the selection sort algorithm in
terms of efficiency.
Implementation
The bubbleSort() function takes an array of integers as input
and returns the sorted copy of the input array.
If you want to follow along with me, download
the repository from GitHub and open the
BubbleSort playground from the basic-
sorting-src folder.
// Function header and setup reconstructed from the description above
func bubbleSort(_ input: [Int]) -> [Int] {
    var result = input
    let count = result.count
    guard count > 1 else { return result }
    var isSwapped = false
    repeat {
        isSwapped = false
        for index in 1..<count {
            if result[index] < result[index - 1] {
                result.swapAt((index - 1), index)
                isSwapped = true
            }
        }
    } while isSwapped
    return result
}
This means that when the sequence is already sorted, the
bubble sort finishes after a single pass. In other words, the
best-case time complexity of the bubble sort is linear.
The worst case is when the array is reverse-sorted. If
there are n items in the sequence, the algorithm will run n
passes in total. During each pass, our function executes
n - 1 comparisons. This means n × (n - 1) compares.
For a reverse-ordered sequence, the number of swaps will
be n - 1 in the first pass, n - 2 in the second pass, and so
on, until the last exchange is made in the penultimate
pass.
execTime = BenchTimer.measureBlock {
    _ = selectionSort(random100)
}
print("Average selectionSort() execution time for \(inputSize) elements: \(execTime.formattedTime)")
execTime = BenchTimer.measureBlock {
    _ = bubbleSort(random100)
}
print("Average bubbleSort() execution time for \(inputSize) elements: \(execTime.formattedTime)")
Advanced Sorting
In this chapter, we’re going to take a look at two
advanced sorting algorithms.
The merge sort and quick sort are performant and can be
used in production code. These sorting algorithms are
actually included in various libraries and frameworks.
The merge sort splits the sequence to be sorted into two
halves. Then, it sorts the halves. The sorted parts get
combined. During this merge step, additional sorting is
done. Finally, we get the sorted result.
The other sorting algorithm we’ll be studying is the
quicksort. This algorithm uses an approach similar to the
merge sort, also known as the divide-and-conquer technique.
The difference is that with the quicksort, the resulting parts
are already sorted relative to each other before the merge.
So, there’s no need for further sorting when combining the
parts during the last step.
All right, now let’s delve into these algorithms.
SECTION 1
First, we split the array into two parts.
So, now comes the sorting and merging phase. After two
steps, the elements of the left half are ordered.
We then follow the same steps for the right half:
One split, then two more splits on the 2-element sublists,
and we get the one-element arrays.
Next, the single-element sublists are sorted and combined
until the right half is also sorted.
There is only one step left: during this last step, the two
sorted halves are merged and sorted.
Finally, the result is the ordered array.
Now that you know how it works, let’s implement this
amazing algorithm.
Implementation
If you want to follow along with me, download
the repository from GitHub and open the
MergeSort playground from the advanced-
sorting-src folder.
var leftIndex = 0
var rightIndex = 0
return sorted
}
The final step makes sure that the last element from either
the left or the right subarray is added to the sorted list.
if leftIndex < leftPart.count {
sorted.append(contentsOf: leftPart[leftIndex..<leftPart.count])
} else if rightIndex < rightPart.count {
sorted.append(contentsOf: rightPart[rightIndex..<rightPart.count])
}
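Assembling the fragments, the complete algorithm might look like this sketch (names follow the fragments above; details may differ from the book’s listing):

```swift
func mergeSort(_ input: [Int]) -> [Int] {
    // Base case: arrays of 0 or 1 elements are already sorted
    guard input.count > 1 else { return input }
    // Split the array into two halves and sort them recursively
    let mid = input.count / 2
    let leftPart = mergeSort(Array(input[0..<mid]))
    let rightPart = mergeSort(Array(input[mid..<input.count]))
    return merge(leftPart, rightPart)
}

func merge(_ leftPart: [Int], _ rightPart: [Int]) -> [Int] {
    var sorted = [Int]()
    var leftIndex = 0
    var rightIndex = 0
    // Pick the smaller head element until one subarray runs out
    while leftIndex < leftPart.count && rightIndex < rightPart.count {
        if leftPart[leftIndex] < rightPart[rightIndex] {
            sorted.append(leftPart[leftIndex])
            leftIndex += 1
        } else {
            sorted.append(rightPart[rightIndex])
            rightIndex += 1
        }
    }
    // Append whatever remains in either subarray
    if leftIndex < leftPart.count {
        sorted.append(contentsOf: leftPart[leftIndex..<leftPart.count])
    } else if rightIndex < rightPart.count {
        sorted.append(contentsOf: rightPart[rightIndex..<rightPart.count])
    }
    return sorted
}
```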
Quicksort
Quicksort is probably the most widely used sorting
algorithm.
In most cases, the quicksort is likely to run faster than any
other compare-based sorting algorithm. The algorithm was
invented in 1960, and it’s been consistently studied and
refined over time.
Hoare, the inventor of the algorithm, as well as Dijkstra,
Lomuto, and others, have worked on improving the
efficiency of the quicksort even further.
The popularity of the quicksort algorithm is related to its
performance. Besides, it’s not too difficult to implement,
and it works well with many different input types.
Quicksort uses a divide-and-conquer technique like the
merge sort. However, the approach is different. Unlike
for the merge sort, the final sorting of elements happens
before the merge phase.
Let’s visualize how this algorithm works.
Here’s our unsorted array:
Implementation
This quicksort variant is the simplest possible
implementation.
If you want to follow along with me, download
the repository from GitHub and open the
Qsort playground from the advanced-sorting-
src folder.
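The listing isn’t reproduced here in full; a common minimal variant, which partitions with filter, looks like this sketch:

```swift
func qsort(_ input: [Int]) -> [Int] {
    // Base case: nothing left to partition
    guard input.count > 1 else { return input }
    // Pick a pivot and partition the array around it
    let pivot = input[input.count / 2]
    let less = input.filter { $0 < pivot }
    let equal = input.filter { $0 == pivot }
    let greater = input.filter { $0 > pivot }
    // The parts are already ordered relative to each other,
    // so combining them needs no extra sorting
    return qsort(less) + equal + qsort(greater)
}
```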
Where Do You Go
From Here?
Congrats, you’ve reached the end of this book!
You've learned a lot about algorithms and you understand
their benefits. Whenever in doubt, feel free to revisit the
lectures in the section called “The Power of Algorithms.”
The chapter about Big-O notation has clarified some of
the most common time complexities through Swift code
examples. Concepts like linear or quadratic time
complexity won’t make you raise your eyebrows
anymore.
We delved into the details of three popular basic sorting
algorithms and two advanced ones, including the
extremely widespread quicksort. By now, you are
probably able to explain and implement a sorting
algorithm from scratch.
You should keep working on improving your algorithmic
problem-solving skills.
You’ll have to practice a lot to make algorithmic thinking
a habit. Instead of jumping to implementing a naive, slow
solution, you’ll eventually find yourself analyzing the
problem and considering various aspects like worst-case
or average time complexity and memory usage.
You’ll not only solve the problems, but you’ll be able to
provide elegant, efficient and reliable, long-term
solutions.
SECTION 1
Resources To Sharpen
Your Skills
Now, you may want to deepen your knowledge further.
So, what’s next?
Here are some useful online resources that will help you
sharpen your coding and problem-solving skills:
Codility is a great resource for both developers and
recruiters.
It has many coding exercises and challenges to test your
knowledge. The site provides an online editor and
compiler, and supports a number of different
programming languages, including Swift. You can
provide custom test data and run several test rounds
before submitting your solution.
The solution is evaluated for correctness, edge-case
scenarios, and time complexity as well. You may not
achieve the highest score, even if your solution produces
the expected results, if its performance is poor or it fails
some extreme edge case. An algorithmic approach is
definitely required to solve most of the exercises on this
site.
Hackerrank has a lot of tutorials and challenges.
Project Euler is a collection of challenging math and
computer programming problems.
SECTION 2
Goodbye!
I’d love to hear from you! Feel free to email me at
carlos@leakka.com.
And if you found this book useful, please leave a nice
review or rating at
https://www.amazon.com/dp/B077D8MQ31 .
Thank you!
SECTION 3
Copyright
Copyright © 2018 by Károly Nyisztor.
All rights reserved. This book or any portion thereof may
not be reproduced or used in any manner whatsoever
without the express written permission of the publisher
except for the use of brief quotations in a book review.
First Edition, 2018
Version 1.0
www.leakka.com
Table of Contents
Introduction
Prerequisites
Why should you learn algorithms?
What’s covered in this book?
The Big-O Notation
Constant Time Complexity
Linear Time Complexity
Quadratic Time Complexity
Hints for Polynomial Time Complexity
Logarithmic Time
Summary
Recursion
What’s Recursion?
How Does Recursion Work?
Recursion Pitfalls
How to Avoid Infinite Recursion?
The Power of Algorithms
Calculate Sum(n)
Pair Matching Challenge
Find the Equilibrium Index
Summary
Generics
Why Generics?
Generic Types
Generic Functions
The Built-In Swift Collection Types
The Array
Accessing the Array
Modifying the Array
The Set
Accessing and Modifying the Set
Set Operations
Union
Intersection
Subtract
Symmetric Difference
The Hashable Protocol
The Dictionary
Creating Dictionaries
Heterogeneous Dictionaries
Accessing & Modifying the Contents of a Dictionary
Basic Sorting
Selection Sort
Implementation
Selection Sort Time Complexity
Insertion Sort
Implementation
Insertion Sort Time Complexity
Insertion Sort vs. Selection Sort
Bubble Sort
Implementation
Bubble Sort Time Complexity
Bubble vs. Insertion vs. Selection Sort
Advanced Sorting
The Merge Sort
Implementation
Quicksort
Implementation
Where Do You Go From Here?
Resources to Sharpen Your Skills
Goodbye!
Copyright