README.md (+12 -1)
@@ -10,7 +10,7 @@ the sentences that are closest to the cluster's centroids. This library also use
 https://github.com/huggingface/neuralcoref library to resolve words in summaries that need more context. The greedyness of
 the neuralcoref library can be tweaked in the CoreferenceHandler class.
 
-As of version 0.4.2, by default, CUDA is used if a gpu is available.
+As of the most recent version of bert-extractive-summarizer, by default, CUDA is used if a gpu is available.
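The hunk above notes that the greedyness of neuralcoref can be tweaked via the `CoreferenceHandler` class. Below is a minimal sketch of what that usage typically looks like; the import path, the `greedyness` keyword, and the `sentence_handler` parameter are assumptions that may differ between versions of bert-extractive-summarizer.

```python
from summarizer import Summarizer
# Import path is an assumption; it may differ between releases of the library.
from summarizer.coreference_handler import CoreferenceHandler

# Lower greedyness makes neuralcoref more conservative when linking mentions.
handler = CoreferenceHandler(greedyness=0.4)

body = 'Text body that you want to summarize with BERT'
# Passing the handler as the sentence handler is an assumption based on typical usage.
model = Summarizer(sentence_handler=handler)
result = model(body, num_sentences=3)
```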
@@ -61,6 +61,17 @@ result = model(body, ratio=0.2) # Specified with ratio
 result = model(body, num_sentences=3) # Will return 3 sentences
 ```
 
+#### Using multiple hidden layers as the embedding output
+
+You can also concatenate the summarizer embeddings for clustering. A simple example is below.
+
+```python
+from summarizer import Summarizer
+body = 'Text body that you want to summarize with BERT'
+model = Summarizer('distilbert-base-uncased', hidden=[-1, -2], hidden_concat=True)
+result = model(body, num_sentences=3)
+```
+
 ### Use SBert
 One can use Sentence Bert with bert-extractive-summarizer with the newest version. It is based off the paper here:
 https://arxiv.org/abs/1908.10084, and the library here: https://www.sbert.net/. To get started,
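A minimal sketch of SBert usage with this library follows; the `SBertSummarizer` class, its import path, and the model name are assumptions and may differ from the version documented here.

```python
# Import path and class name are assumptions; check the installed version of the library.
from summarizer.sbert import SBertSummarizer

body = 'Text body that you want to summarize with SBert'
# Any sentence-transformers model name can be passed here; this one is an example choice.
model = SBertSummarizer('paraphrase-MiniLM-L6-v2')
result = model(body, num_sentences=3)
```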