
Releases: JohnSnowLabs/spark-nlp

Spark NLP 4.4.3: Patch release

26 May 12:13
d5abbc0

📢 Overview

Spark NLP 4.4.3 🚀 comes with a new parameter for switching from multi-class to multi-label in all of our classifiers (including the ZeroShot ones), extended support for downloading models directly from an S3 path in ResourceDownloader, bug fixes, and improvements!

We want to thank our community for their valuable feedback, feature requests, and contributions. Our Models Hub now contains more than 18,000 free and truly open-source models & pipelines. 🎉

Spark NLP has a new home! https://sparknlp.org is where you can find all the documentation, models, and demos for Spark NLP. It aims to provide valuable resources to anyone interested in 100% open-source NLP solutions built with Spark NLP 🚀


⭐ New Features & Enhancements

  • New multilabel parameter to switch from multi-class to multi-label on all Classifiers in Spark NLP: AlbertForSequenceClassification, BertForSequenceClassification, DeBertaForSequenceClassification, DistilBertForSequenceClassification, LongformerForSequenceClassification, RoBertaForSequenceClassification, XlmRoBertaForSequenceClassification, XlnetForSequenceClassification, BertForZeroShotClassification, DistilBertForZeroShotClassification, and RoBertaForZeroShotClassification
  • Refactor protected Params and Features to avoid unwanted exceptions during runtime #13797
  • Add proper documentation and instructions for ZeroShot classifiers: BertForZeroShotClassification, DistilBertForZeroShotClassification, and RoBertaForZeroShotClassification #13798
  • Extend support for downloading models/pipelines directly by name or S3 path in ResourceDownloader #13796
from sparknlp.pretrained import ResourceDownloader

# partial S3 path (relative to the public bucket)
ResourceDownloader.downloadModelDirectly("public/models/albert_base_sequence_classifier_ag_news_en_3.4.0_3.0_1639648298937.zip", remote_loc="public/models")

# full S3 path
ResourceDownloader.downloadModelDirectly("s3://auxdata.johnsnowlabs.com/public/models/albert_base_sequence_classifier_ag_news_en_3.4.0_3.0_1639648298937.zip", remote_loc="public/models", unzip=False)
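Conceptually, the new multilabel switch listed above amounts to how raw class scores are decoded: softmax plus arg-max for multi-class, independent sigmoids plus a threshold for multi-label. A minimal pure-Python sketch of that distinction (an illustration only, not Spark NLP internals; the function name is hypothetical):

```python
import math

def scores_to_labels(logits, labels, multilabel=False, threshold=0.5):
    """Illustrate multi-class vs multi-label decoding over raw logits."""
    if multilabel:
        # Multi-label: independent sigmoid per class, keep all above threshold.
        probs = [1.0 / (1.0 + math.exp(-z)) for z in logits]
        return [l for l, p in zip(labels, probs) if p >= threshold]
    # Multi-class: softmax over all classes, keep only the single best one.
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    return [labels[probs.index(max(probs))]]

labels = ["urgent", "mobile", "travel"]
logits = [2.0, 1.5, -1.0]
print(scores_to_labels(logits, labels))                   # one winning class
print(scores_to_labels(logits, labels, multilabel=True))  # possibly several classes
```

In multi-label mode a document can legitimately receive zero or many labels, which is why it is a separate mode rather than a post-processing step.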

πŸ› Bug Fixes

  • Fix pretrained pipelines that stopped working after the 4.4.2 release on PySpark 3.0 and 3.1 (123 new pipelines added) #13805
  • Fix pretrained pipelines that stopped working after the 4.4.2 release on PySpark 3.4 (120 new pipelines added) #13828
  • Fix Java compatibility issue caused by SystemUtils dependency #13806

Known issue:
Current pre-trained pipelines don't work on PySpark 3.2 and 3.3. They will all be fixed in the next few days.


📖 Documentation


❤️ Community support

  • Slack For live discussion with the Spark NLP community and the team
  • GitHub Bug reports, feature requests, and contributions
  • Discussions Engage with other community members, share ideas, and show off how you use Spark NLP!
  • Medium Spark NLP articles
  • YouTube Spark NLP video tutorials

Installation

Python

#PyPI

pip install spark-nlp==4.4.3

Spark Packages

spark-nlp on Apache Spark 3.0.x, 3.1.x, 3.2.x, 3.3.x, and 3.4.x (Scala 2.12):

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp_2.12:4.4.3

pyspark --packages com.johnsnowlabs.nlp:spark-nlp_2.12:4.4.3

GPU

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:4.4.3

pyspark --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:4.4.3

Apple Silicon (M1 & M2)

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:4.4.3

pyspark --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:4.4.3

AArch64

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:4.4.3

pyspark --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:4.4.3

Maven

spark-nlp on Apache Spark 3.0.x, 3.1.x, 3.2.x, 3.3.x, and 3.4.x:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp_2.12</artifactId>
    <version>4.4.3</version>
</dependency>

spark-nlp-gpu:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp-gpu_2.12</artifactId>
    <version>4.4.3</version>
</dependency>

spark-nlp-silicon:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp-silicon_2.12</artifactId>
    <version>4.4.3</version>
</dependency>

spark-nlp-aarch64:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp-aarch64_2.12</artifactId>
    <version>4.4.3</version>
</dependency>

FAT JARs

What's Changed

New Contributors

Full Changelog: 4.4.2...4.4.3

Spark NLP 4.4.2: Patch release

10 May 20:30
f8354b3

📢 Overview

Spark NLP 4.4.2 🚀 comes with a new RoBertaForZeroShotClassification annotator for zero-shot text classification (both multi-class and multi-label), full support for Apache Spark 3.4, faster and more memory-efficient BART models, a new cache feature for BartTransformer, new Databricks runtimes, and more!

We want to thank our community for their valuable feedback, feature requests, and contributions. Our Models Hub now contains more than 17,000 free and truly open-source models & pipelines. 🎉

Spark NLP has a new home! https://sparknlp.org is where you can find all the documentation, models, and demos for Spark NLP. It aims to provide valuable resources to anyone interested in 100% open-source NLP solutions built with Spark NLP 🚀


⭐ New Features & Enhancements

  • NEW: Introducing the RoBertaForZeroShotClassification annotator for Zero-Shot Text Classification in Spark NLP 🚀. You can use RoBertaForZeroShotClassification for text classification with your own labels! 💯

Zero-Shot Learning (ZSL): Traditionally, ZSL most often referred to a fairly specific type of task: learning a classifier on one set of labels and then evaluating on a different set of labels that the classifier has never seen before. Recently, especially in NLP, it's been used much more broadly to get a model to do something it wasn't explicitly trained to do. A well-known example of this is in the GPT-2 paper where the authors evaluate a language model on downstream tasks like machine translation without fine-tuning on these tasks directly.

Let's see how easy it is to just use any set of labels our trained model has never seen via the setCandidateLabels() param:

zero_shot_classifier = RoBertaForZeroShotClassification \
    .pretrained() \
    .setInputCols(["document", "token"]) \
    .setOutputCol("class") \
    .setCandidateLabels(["urgent", "mobile", "travel", "movie", "music", "sport", "weather", "technology"])

For Zero-Shot Multi-class Text Classification:

+----------------------------------------------------------------------------------------------------------------+--------+
|result                                                                                                          |result  |
+----------------------------------------------------------------------------------------------------------------+--------+
|[I have a problem with my iPhone that needs to be resolved asap!!]                                              |[mobile]|
|[Last week I upgraded my iOS version and ever since then my phone has been overheating whenever I use your app.]|[mobile]|
|[I have a phone and I love it!]                                                                                 |[mobile]|
|[I want to visit Germany and I am planning to go there next year.]                                              |[travel]|
|[Let's watch some movies tonight! I am in the mood for a horror movie.]                                         |[movie] |
|[Have you watched the match yesterday? It was a great game!]                                                    |[sport] |
|[We need to hurry up and get to the airport. We are going to miss our flight!]                                  |[urgent]|
+----------------------------------------------------------------------------------------------------------------+--------+

For Zero-Shot Multi-label Text Classification:

+----------------------------------------------------------------------------------------------------------------+-----------------------------------+
|result                                                                                                          |result                             |
+----------------------------------------------------------------------------------------------------------------+-----------------------------------+
|[I have a problem with my iPhone that needs to be resolved asap!!]                                              |[urgent, mobile, movie, technology]|
|[Last week I upgraded my iOS version and ever since then my phone has been overheating whenever I use your app.]|[urgent, technology]               |
|[I have a phone and I love it!]                                                                                 |[mobile]                           |
|[I want to visit Germany and I am planning to go there next year.]                                              |[travel]                           |
|[Let's watch some movies tonight! I am in the mood for a horror movie.]                                         |[movie]                            |
|[Have you watched the match yesterday? It was a great game!]                                                    |[sport]                            |
|[We need to hurry up and get to the airport. We are going to miss our flight!]                                  |[urgent, travel]                   |
+----------------------------------------------------------------------------------------------------------------+-----------------------------------+
  • Offer full support for Apache Spark 3.4 #13773
  • New BART models with better memory efficiency and higher speed (it is now possible to use BART models in Colab) #13787
  • Introducing the cache feature in BartTransformer #13787
  • Welcoming 3 new Databricks runtimes to our Spark NLP family:
    • Databricks 13.0 LTS
    • Databricks 13.0 LTS ML
    • Databricks 13.0 LTS ML GPU
  • Improve error handling for max sequence length for transformers in Python #13774
  • Improve the MultiDateMatcher annotator to return multiple dates #13783
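The MultiDateMatcher improvement means a single input can now yield several date chunks rather than just the first match. A rough stand-alone sketch of multi-date extraction (a plain regex over one fixed format, not the annotator's actual date grammar; the function name is hypothetical):

```python
import re
from datetime import datetime

def match_dates(text):
    """Find every yyyy/mm/dd-style date in the text, with chunk positions."""
    found = []
    for m in re.finditer(r"\b(\d{4})/(\d{2})/(\d{2})\b", text):
        found.append({"text": m.group(0),
                      "begin": m.start(), "end": m.end() - 1,
                      "date": datetime(*map(int, m.groups())).strftime("%Y-%m-%d")})
    return found

hits = match_dates("We met on 2021/03/01 and again on 2021/03/15.")
print([h["date"] for h in hits])  # both dates, not just the first
```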

πŸ› Bug Fixes

  • Fix a bug in Tapas due to exceeding the maximum rank value #13772
  • Fix loading Transformer models via loadSavedModel() method from DBFS on Databricks #13784

📖 Documentation


❤️ Community support

  • Slack For live discussion with the Spark NLP community and the team
  • GitHub Bug reports, feature requests, and contributions
  • Discussions Engage with other community members, share ideas, and show off how you use Spark NLP!
  • Medium Spark NLP articles
  • YouTube Spark NLP video tutorials

Installation

Python

#PyPI

pip install spark-nlp==4.4.2

Spark Packages

spark-nlp on Apache Spark 3.0.x, 3.1.x, 3.2.x, 3.3.x, and 3.4.x (Scala 2.12):

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp_2.12:4.4.2

pyspark --packages com.johnsnowlabs.nlp:spark-nlp_2.12:4.4.2

GPU

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:4.4.2

pyspark --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:4.4.2

Apple Silicon (M1 & M2)

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:4.4.2

pyspark --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:4.4.2

AArch64

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:4.4.2

pyspark --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:4.4.2

Maven

spark-nlp on Apache Spark 3.0.x, 3.1.x, 3.2.x, 3.3.x, and 3.4.x:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp_2.12</artifactId>
    <version>4.4.2</version>
</dependency>

spark-nlp-gpu:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp-gpu_2.12</artifactId>
    <version>4.4.2</version>
</dependency>

spark-nlp-silicon:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp-silicon_2.12</artifactId>
    <version>4.4.2</version>
</dependency>

spark-nlp-aarch64:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp-aarch64_2.12</artifactId>
    <version>4.4.2</version>
</dependency>

FAT JARs


Spark NLP 4.4.1: Patch release

25 Apr 18:22
d7f91a4

📢 Overview

Spark NLP 4.4.1 🚀 comes with a new DistilBertForZeroShotClassification annotator for zero-shot text classification (both multi-class and multi-label), a new threshold parameter in all XXXForSequenceClassification annotators to filter out classes based on their scores, and new notebooks for importing Image Classification models with the Swin and ConvNext architectures.

We want to thank our community for their valuable feedback, feature requests, and contributions. Our Models Hub now contains more than 17,000 free and truly open-source models & pipelines. 🎉

Spark NLP has a new home! https://sparknlp.org is where you can find all the documentation, models, and demos for Spark NLP. It aims to provide valuable resources to anyone interested in 100% open-source NLP solutions built with Spark NLP 🚀.


⭐ New Features & Enhancements

  • NEW: Introducing the DistilBertForZeroShotClassification annotator for Zero-Shot Text Classification in Spark NLP 🚀. You can use DistilBertForZeroShotClassification for text classification with your own labels! 💯

Zero-Shot Learning (ZSL): Traditionally, ZSL most often referred to a fairly specific type of task: learning a classifier on one set of labels and then evaluating on a different set of labels that the classifier has never seen before. Recently, especially in NLP, it's been used much more broadly to get a model to do something it wasn't explicitly trained to do. A well-known example of this is in the GPT-2 paper where the authors evaluate a language model on downstream tasks like machine translation without fine-tuning on these tasks directly.

Let's see how easy it is to just use any set of labels our trained model has never seen via the setCandidateLabels() param:

zero_shot_classifier = DistilBertForZeroShotClassification \
    .pretrained() \
    .setInputCols(["document", "token"]) \
    .setOutputCol("class") \
    .setCandidateLabels(["urgent", "mobile", "travel", "movie", "music", "sport", "weather", "technology"])

For Zero-Shot Multi-class Text Classification:

+----------------------------------------------------------------------------------------------------------------+--------+
|result                                                                                                          |result  |
+----------------------------------------------------------------------------------------------------------------+--------+
|[I have a problem with my iPhone that needs to be resolved asap!!]                                              |[mobile]|
|[Last week I upgraded my iOS version and ever since then my phone has been overheating whenever I use your app.]|[mobile]|
|[I have a phone and I love it!]                                                                                 |[mobile]|
|[I want to visit Germany and I am planning to go there next year.]                                              |[travel]|
|[Let's watch some movies tonight! I am in the mood for a horror movie.]                                         |[movie] |
|[Have you watched the match yesterday? It was a great game!]                                                    |[sport] |
|[We need to hurry up and get to the airport. We are going to miss our flight!]                                  |[urgent]|
+----------------------------------------------------------------------------------------------------------------+--------+

For Zero-Shot Multi-label Text Classification:

+----------------------------------------------------------------------------------------------------------------+-----------------------------------+
|result                                                                                                          |result                             |
+----------------------------------------------------------------------------------------------------------------+-----------------------------------+
|[I have a problem with my iPhone that needs to be resolved asap!!]                                              |[urgent, mobile, movie, technology]|
|[Last week I upgraded my iOS version and ever since then my phone has been overheating whenever I use your app.]|[urgent, technology]               |
|[I have a phone and I love it!]                                                                                 |[mobile]                           |
|[I want to visit Germany and I am planning to go there next year.]                                              |[travel]                           |
|[Let's watch some movies tonight! I am in the mood for a horror movie.]                                         |[movie]                            |
|[Have you watched the match yesterday? It was a great game!]                                                    |[sport]                            |
|[We need to hurry up and get to the airport. We are going to miss our flight!]                                  |[urgent, travel]                   |
+----------------------------------------------------------------------------------------------------------------+-----------------------------------+
  • Add a threshold param to the AlbertForSequenceClassification, BertForSequenceClassification, BertForZeroShotClassification, DistilBertForSequenceClassification, CamemBertForSequenceClassification, DeBertaForSequenceClassification, LongformerForSequenceClassification, RoBertaForSequenceClassification, XlmRoBertaForSequenceClassification, and XlnetForSequenceClassification annotators
  • Add new notebooks for importing SwinForImageClassification and ConvNextForImageClassification models for Image Classification
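The threshold parameter simply filters the per-class scores before they are emitted, so low-confidence classes never reach the output. Conceptually (a plain-Python illustration with a hypothetical function name, not the annotators' actual implementation):

```python
def filter_by_threshold(scores, threshold=0.5):
    """Drop classes whose confidence falls below the threshold."""
    return {label: s for label, s in scores.items() if s >= threshold}

# Only confidently predicted classes survive the cut.
print(filter_by_threshold({"sport": 0.91, "weather": 0.40, "music": 0.07}))
```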

📓 New Notebooks

  • Zero-Shot Text Classification (Colab notebook)
  • ConvNextForImageClassification (Colab notebook)
  • SwinForImageClassification (Colab notebook)

📖 Documentation


❤️ Community support

  • Slack For live discussion with the Spark NLP community and the team
  • GitHub Bug reports, feature requests, and contributions
  • Discussions Engage with other community members, share ideas, and show off how you use Spark NLP!
  • Medium Spark NLP articles
  • YouTube Spark NLP video tutorials

Installation

Python

#PyPI

pip install spark-nlp==4.4.1

Spark Packages

spark-nlp on Apache Spark 3.0.x, 3.1.x, 3.2.x, and 3.3.x (Scala 2.12):

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp_2.12:4.4.1

pyspark --packages com.johnsnowlabs.nlp:spark-nlp_2.12:4.4.1

GPU

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:4.4.1

pyspark --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:4.4.1

Apple Silicon (M1 & M2)

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:4.4.1

pyspark --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:4.4.1

AArch64

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:4.4.1

pyspark --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:4.4.1

Maven

spark-nlp on Apache Spark 3.0.x, 3.1.x, 3.2.x, and 3.3.x:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp_2.12</artifactId>
    <version>4.4.1</version>
</dependency>

spark-nlp-gpu:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp-gpu_2.12</artifactId>
    <version>4.4.1</version>
</dependency>

spark-nlp-silicon:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <a...

Spark NLP 4.4.0: New BART for Text Translation & Summarization, new ConvNeXT Transformer for Image Classification, new Zero-Shot Text Classification by BERT, more than 4000+ state-of-the-art models, and many more!

11 Apr 07:15

📢 Overview

We are thrilled to announce the release of Spark NLP 🚀 4.4.0! This release includes new features such as a new BART for NLG, translation, and comprehension; a new ConvNeXT Transformer for Image Classification; a new Zero-Shot Text Classification by BERT; 4,000+ new state-of-the-art models; and more enhancements and bug fixes.

We want to thank our community for their valuable feedback, feature requests, and contributions. Our Models Hub now contains more than 17,000 free and truly open-source models & pipelines. 🎉

Spark NLP has a new home! https://sparknlp.org is where you can find all the documentation, models, and demos for Spark NLP. It aims to provide valuable resources to anyone interested in 100% open-source NLP solutions built with Spark NLP 🚀.


🔥 New Features

ConvNeXT Image Classification (By Facebook)

NEW: Introducing the ConvNextForImageClassification annotator in Spark NLP 🚀. ConvNextForImageClassification can load ConvNeXT models that compete favorably with Transformers in terms of accuracy and scalability, achieving 87.8% ImageNet top-1 accuracy and outperforming Swin Transformers on COCO detection and ADE20K segmentation, while maintaining the simplicity and efficiency of standard ConvNets.

This annotator is compatible with all the models trained/fine-tuned by using ConvNextForImageClassification for PyTorch or TFConvNextForImageClassification for TensorFlow models in HuggingFace 🤗

Figure (from "A ConvNet for the 2020s" by Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, and Saining Xie): ImageNet-1K classification results for ConvNets and vision Transformers, where each bubble's area is proportional to the FLOPs of a variant in a model family.

BART for NLG, Translation, and Comprehension (By Facebook)

NEW: Introducing the BartTransformer annotator in Spark NLP 🚀. BartTransformer can load BART models fine-tuned for tasks like summarization.

This annotator is compatible with all the models trained/fine-tuned by using BartForConditionalGeneration for PyTorch or TFBartForConditionalGeneration for TensorFlow models in HuggingFace 🤗

The abstract explains that BART uses a standard seq2seq/machine-translation architecture, similar to BERT's bidirectional encoder and GPT's left-to-right decoder. The pretraining task involves randomly shuffling the original sentences and replacing text spans with a single mask token. BART is effective for text generation and comprehension tasks, matching RoBERTa's performance with similar training resources on GLUE and SQuAD. It also achieves new state-of-the-art results on various summarization, dialogue, and question-answering tasks with gains of up to 6 ROUGE.

The BART model was proposed in BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer.
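The pretraining noising described above (shuffle sentence order, then collapse each corrupted span into a single mask token) can be sketched in a few lines. This is a toy illustration under stated assumptions, not the actual BART preprocessing code; all names here are hypothetical:

```python
import random

def bart_noise(sentences, spans, mask="<mask>", seed=0):
    """Toy BART-style noising: shuffle sentence order, then replace each
    given (start, length) token span with a single mask token."""
    rng = random.Random(seed)
    order = list(range(len(sentences)))
    rng.shuffle(order)  # sentence permutation
    tokens = " ".join(sentences[i] for i in order).split()
    # Replace spans from right to left so earlier indices stay valid.
    for start, length in sorted(spans, reverse=True):
        tokens[start:start + length] = [mask]
    return " ".join(tokens)

noisy = bart_noise(["the cat sat .", "dogs bark ."], spans=[(1, 2)])
print(noisy)  # shuffled text with one span collapsed into <mask>
```

The decoder is then trained to reconstruct the original, uncorrupted text from this noised input.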

Zero-Shot for Text Classification by BERT

NEW: Introducing the BertForZeroShotClassification annotator for Zero-Shot Text Classification in Spark NLP 🚀. You can use BertForZeroShotClassification for text classification with your own labels! 💯

Zero-Shot Learning (ZSL): Traditionally, ZSL most often referred to a fairly specific type of task: learning a classifier on one set of labels and then evaluating on a different set of labels that the classifier has never seen before. Recently, especially in NLP, it's been used much more broadly to get a model to do something it wasn't explicitly trained to do. A well-known example of this is in the GPT-2 paper where the authors evaluate a language model on downstream tasks like machine translation without fine-tuning on these tasks directly.

Let's see how easy it is to just use any set of labels our trained model has never seen via the setCandidateLabels() param:

zero_shot_classifier = BertForZeroShotClassification \
    .pretrained() \
    .setInputCols(["document", "token"]) \
    .setOutputCol("class") \
    .setCandidateLabels(["urgent", "mobile", "travel", "movie", "music", "sport", "weather", "technology"])

For Zero-Shot Multi-class Text Classification:

+----------------------------------------------------------------------------------------------------------------+--------+
|result                                                                                                          |result  |
+----------------------------------------------------------------------------------------------------------------+--------+
|[I have a problem with my iPhone that needs to be resolved asap!!]                                              |[mobile]|
|[Last week I upgraded my iOS version and ever since then my phone has been overheating whenever I use your app.]|[mobile]|
|[I have a phone and I love it!]                                                                                 |[mobile]|
|[I want to visit Germany and I am planning to go there next year.]                                              |[travel]|
|[Let's watch some movies tonight! I am in the mood for a horror movie.]                                         |[movie] |
|[Have you watched the match yesterday? It was a great game!]                                                    |[sport] |
|[We need to hurry up and get to the airport. We are going to miss our flight!]                                  |[urgent]|
+----------------------------------------------------------------------------------------------------------------+--------+

For Zero-Shot Multi-label Text Classification:

+----------------------------------------------------------------------------------------------------------------+-----------------------------------+
|result                                                                                                          |result                             |
+----------------------------------------------------------------------------------------------------------------+-----------------------------------+
|[I have a problem with my iPhone that needs to be resolved asap!!]                                              |[urgent, mobile, movie, technology]|
|[Last week I upgraded my iOS version and ever since then my phone has been overheating whenever I use your app.]|[urgent, technology]               |
|[I have a phone and I love it!]                                                                                 |[mobile]                           |
|[I want to visit Germany and I am planning to go there next year.]                                              |[travel]                           |
|[Let's watch some movies tonight! I am in the mood for a horror movie.]                                         |[movie]                            |
|[Have you watched the match yesterday? It was a great game!]                                                    |[sport]                            |
|[We need to hurry up and get to the airport. We are going to miss our flight!]                                  |[urgent, travel]                   |
+----------------------------------------------------------------------------------------------------------------+-----------------------------------+
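Under the hood, NLI-based zero-shot classifiers like this score each candidate label as an entailment hypothesis (e.g. "This example is about {label}.") against the input text. The loop below sketches that idea with a stand-in keyword scorer where the real annotator uses a fine-tuned NLI transformer; all names and the scoring heuristic are illustrative assumptions:

```python
def classify_zero_shot(text, candidate_labels, entail_score, multilabel=False, threshold=0.5):
    """Score each candidate label as an entailment hypothesis against the text."""
    scores = {label: entail_score(text, f"This example is about {label}.")
              for label in candidate_labels}
    if multilabel:
        return [l for l, s in scores.items() if s >= threshold]
    return [max(scores, key=scores.get)]

# Stand-in scorer: naive keyword overlap instead of a real NLI model.
def toy_entail(text, hypothesis):
    label = hypothesis.rsplit(" ", 1)[-1].rstrip(".")
    hints = {"mobile": ["iphone", "phone", "ios"], "travel": ["flight", "visit"],
             "urgent": ["asap", "hurry"]}
    t = text.lower()
    return 1.0 if any(h in t for h in hints.get(label, [])) else 0.0

print(classify_zero_shot("I have a problem with my iPhone that needs to be resolved asap!!",
                         ["urgent", "mobile", "travel"], toy_entail, multilabel=True))
```

Because each label becomes its own hypothesis, any label set works at inference time with no retraining, which is exactly what setCandidateLabels() exposes.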

β­πŸ› Improvements & Bug Fixes

  • Add a new nerHasNoSchema param to NerConverter for labels from NerDLModel and NerCrfModel that don't follow any schema
  • Set custom entity name in Data2Chunk via setEntityName param
  • Fix loading WordEmbeddingsModel bug when loading a model from S3 via the cache_folder config
  • Fix the WordEmbeddingsModel bug failing when it's used with setEnableInMemoryStorage set to True and LightPipeline
  • Remove deprecated parameter enablePatternRegex from EntityRulerApproach & EntityRulerModel
  • Welcoming 3 new Databricks runtimes to our Spark NLP family:
    • Databricks 12.2 LTS
    • Databricks 12.2 LTS ML
    • Databricks 12.2 LTS ML GPU
  • Deprecate Python 3.6 in Spark NLP 4.4.0
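NerConverter's job, schema or not, is to fold token-level tags into entity chunks. A simplified sketch of standard IOB folding, which the new nerHasNoSchema param bypasses for tags that carry no B-/I- prefix (the function name is hypothetical; this is not the annotator's source):

```python
def tags_to_chunks(tokens, tags):
    """Fold IOB tags (B-X / I-X / O) into (chunk_text, label) pairs."""
    chunks, current, label = [], [], None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-") or (tag.startswith("I-") and label != tag[2:]):
            if current:  # close any open chunk before starting a new one
                chunks.append((" ".join(current), label))
            current, label = [tok], tag[2:]
        elif tag.startswith("I-"):
            current.append(tok)  # continue the open chunk
        else:  # "O" ends any open chunk
            if current:
                chunks.append((" ".join(current), label))
            current, label = [], None
    if current:
        chunks.append((" ".join(current), label))
    return chunks

print(tags_to_chunks(["John", "Smith", "visited", "Paris"],
                     ["B-PER", "I-PER", "O", "B-LOC"]))
```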

💾 Models

Spark NLP 4.4.0 comes with more than 4,300 new state-of-the-art pre-trained transformer models in multiple languages.

Featured Models

Model                          | Name                                      | Lang
BertForZeroShotClassification  | bert_base_cased_zero_shot_classifier_xnli | en
ConvNextForImageClassification | image_classifier_convnext_tiny_224_local  | en
BartTransformer                | distilbart_xsum_12_6                      | en
BartTransformer                | bart_large_cnn                            | en
BertForQuestionAnswering       | bert_qa_case_base                         | en
HubertForCTC                   | asr_swin_exp_w2v2t_nl_hubert_s319         | nl
BertForTokenClassification     | bert_token_classifier_base_chinese_ner    | zh

The complete list of...


Spark NLP 4.3.2: Patch release

14 Mar 19:54

📢 Overview

Spark NLP 4.3.2 🚀 comes with new support for S3 in training classes to read and load the CoNLL and CoNLL-U formats, support for NER tags without any schema in NerConverter, improved dedicated and self-hosted examples with more guides, and other enhancements and bug fixes!

As always, we would like to thank our community for their feedback, questions, and feature requests. 🎉


⭐ New Features & Enhancements

  • Add S3 support for CoNLL(), POS(), CoNLLU() training classes #13596
  • Add support for non-schema NER (I- or B-) tags in NerConverter annotator #13642
  • Improve self-hosted examples with better documentation, Docker examples, no broken links, and more #13575
  • Improve error handling for validation evaluation in ClassifierDL and MultiClassifierDL trainable annotators #13615
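The CoNLL() training class above now reads from S3 paths as well as local ones. For reference, each CoNLL-2003-style line carries token, POS, syntactic-chunk, and NER columns, with blank lines separating sentences. A minimal parser sketch of that format (an illustration, not Spark NLP's reader):

```python
def parse_conll(lines):
    """Parse CoNLL-2003-style lines into sentences of (token, pos, chunk, ner) tuples."""
    sentences, current = [], []
    for line in lines:
        line = line.strip()
        # Blank lines and document markers end the current sentence.
        if not line or line.startswith("-DOCSTART-"):
            if current:
                sentences.append(current)
                current = []
            continue
        token, pos, chunk, ner = line.split()
        current.append((token, pos, chunk, ner))
    if current:
        sentences.append(current)
    return sentences

sample = ["-DOCSTART- -X- -X- O", "", "EU NNP B-NP B-ORG", "rejects VBZ B-VP O", ""]
print(parse_conll(sample))
```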

πŸ› Bug Fixes

  • Fix Date2Chunk and Chunk2Doc annotators compatibility with PipelineModel #13609
  • Fix DependencyParserModel predicting all Chunks as <no-type> #13620
  • Remove the calculationsCol parameter from MultiDocumentAssembler in Python, which didn't actually exist #13594

📖 Documentation

Community support

  • Slack For live discussion with the Spark NLP community and the team
  • GitHub Bug reports, feature requests, and contributions
  • Discussions Engage with other community members, share ideas, and show off how you use Spark NLP!
  • Medium Spark NLP articles
  • YouTube Spark NLP video tutorials

Installation

Python

#PyPI

pip install spark-nlp==4.3.2

Spark Packages

spark-nlp on Apache Spark 3.0.x, 3.1.x, 3.2.x, and 3.3.x (Scala 2.12):

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp_2.12:4.3.2

pyspark --packages com.johnsnowlabs.nlp:spark-nlp_2.12:4.3.2

GPU

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:4.3.2

pyspark --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:4.3.2

Apple Silicon (M1 & M2)

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:4.3.2

pyspark --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:4.3.2

AArch64

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:4.3.2

pyspark --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:4.3.2

Maven

spark-nlp on Apache Spark 3.0.x, 3.1.x, 3.2.x, and 3.3.x:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp_2.12</artifactId>
    <version>4.3.2</version>
</dependency>

spark-nlp-gpu:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp-gpu_2.12</artifactId>
    <version>4.3.2</version>
</dependency>

spark-nlp-silicon:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp-silicon_2.12</artifactId>
    <version>4.3.2</version>
</dependency>

spark-nlp-aarch64:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp-aarch64_2.12</artifactId>
    <version>4.3.2</version>
</dependency>

FAT JARs

What's Changed

New Contributors

Full Changelog: 4.3.1...4.3.2

Spark NLP 4.3.1: Patch release

24 Feb 18:36

📢 Overview

Spark NLP 4.3.1 🚀 comes with a new SpacyToAnnotation feature to import documents, sentences, and tokens from spaCy and similar libraries into Spark NLP pipelines. We have also made other improvements in this patch release.

As always, we would like to thank our community for their feedback, questions, and feature requests. πŸŽ‰


⭐ New Features & Enhancements

  • Easily import sentences and tokens from external libraries such as spaCy into a Spark NLP pipeline
# this is what your file from spaCy would look like
! cat ./multi_doc_tokens.json

[
  {
    "tokens": ["John", "went", "to", "the", "store", "last", "night", ".", "He", "bought", "some", "bread", "."],
    "token_spaces": [true, true, true, true, true, true, false, true, true, true, true, false, false],
    "sentence_ends": [7, 12]
  },
  {
    "tokens": ["Hello", "world", "!", "How", "are", "you", "today", "?", "I", "'m", "fine", "thanks", "."],
    "token_spaces": [true, false, true, true, true, true, false, true, false, true, true, false, false],
    "sentence_ends": [2, 7, 12]
  }
]

# we are now going to prepare these documents, sentences, and tokens for Spark NLP
from sparknlp.training import SpacyToAnnotation

nlp_reader = SpacyToAnnotation()
result = nlp_reader.readJsonFile(spark, "./multi_doc_tokens.json")

result.printSchema()
# now you have all the annotations for documents, sentences, and tokens needed in Spark NLP
root
 |-- document: array (nullable = true)
 |    |-- element: struct (containsNull = true)
 |    |    |-- annotatorType: string (nullable = true)
 |    |    |-- begin: integer (nullable = false)
 |    |    |-- end: integer (nullable = false)
 |    |    |-- result: string (nullable = true)
 |    |    |-- metadata: map (nullable = true)
 |    |    |    |-- key: string
 |    |    |    |-- value: string (valueContainsNull = true)
 |    |    |-- embeddings: array (nullable = true)
 |    |    |    |-- element: float (containsNull = false)
 |-- sentence: array (nullable = true)
 |    |-- element: struct (containsNull = true)
 |    |    |-- annotatorType: string (nullable = true)
 |    |    |-- begin: integer (nullable = false)
 |    |    |-- end: integer (nullable = false)
 |    |    |-- result: string (nullable = true)
 |    |    |-- metadata: map (nullable = true)
 |    |    |    |-- key: string
 |    |    |    |-- value: string (valueContainsNull = true)
 |    |    |-- embeddings: array (nullable = true)
 |    |    |    |-- element: float (containsNull = false)
 |-- token: array (nullable = true)
 |    |-- element: struct (containsNull = true)
 |    |    |-- annotatorType: string (nullable = true)
 |    |    |-- begin: integer (nullable = false)
 |    |    |-- end: integer (nullable = false)
 |    |    |-- result: string (nullable = true)
 |    |    |-- metadata: map (nullable = true)
 |    |    |    |-- key: string
 |    |    |    |-- value: string (valueContainsNull = true)
 |    |    |-- embeddings: array (nullable = true)
 |    |    |    |-- element: float (containsNull = false)
  • Implement a params parameter that can supply custom configurations to the SparkSession in Scala (to be in sync with Python)
val hadoopAwsVersion: String = "3.3.1"
val awsJavaSdkVersion: String = "1.11.901"

val extraParams: Map[String, String] = Map(
  "spark.jars.packages" -> ("org.apache.hadoop:hadoop-aws:" + hadoopAwsVersion + ",com.amazonaws:aws-java-sdk:" + awsJavaSdkVersion),
  "spark.hadoop.fs.s3a.path.style.access" -> "true")

val spark = SparkNLP.start(params = extraParams)
  • Add entity field to the metadata in Date2Chunk
  • Fix ViT models & pipelines examples in Models Hub
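The spaCy-style JSON consumed by SpacyToAnnotation.readJsonFile above can be produced with plain Python. A minimal sketch (the spacy_style_doc helper is hypothetical, not part of Spark NLP): token_spaces flags whether a token is followed by a space, and sentence_ends lists the index of each sentence-final token.

```python
import json

def spacy_style_doc(sentences):
    # sentences: list of sentences, each a list of (token, has_trailing_space) pairs
    tokens, spaces, ends = [], [], []
    for sent in sentences:
        for tok, has_space in sent:
            tokens.append(tok)
            spaces.append(has_space)
        ends.append(len(tokens) - 1)  # index of the sentence-final token
    return {"tokens": tokens, "token_spaces": spaces, "sentence_ends": ends}

docs = [spacy_style_doc([
    [("Hello", True), ("world", False), ("!", False)],
    [("Bye", False), (".", False)],
])]
payload = json.dumps(docs, indent=2)  # ready to write to multi_doc_tokens.json
```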

πŸ““ New Notebooks

Spark NLP
Import Tokens from spaCy or a JSON file

πŸ“– Documentation

Community support

  • Slack For live discussion with the Spark NLP community and the team
  • GitHub Bug reports, feature requests, and contributions
  • Discussions Engage with other community members, share ideas,
    and show off how you use Spark NLP!
  • Medium Spark NLP articles
  • YouTube Spark NLP video tutorials

Installation

Python

#PyPI

pip install spark-nlp==4.3.1

Spark Packages

spark-nlp on Apache Spark 3.0.x, 3.1.x, 3.2.x, and 3.3.x (Scala 2.12):

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp_2.12:4.3.1

pyspark --packages com.johnsnowlabs.nlp:spark-nlp_2.12:4.3.1

GPU

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:4.3.1

pyspark --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:4.3.1

Apple Silicon (M1 & M2)

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:4.3.1

pyspark --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:4.3.1

AArch64

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:4.3.1

pyspark --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:4.3.1

Maven

spark-nlp on Apache Spark 3.0.x, 3.1.x, 3.2.x, and 3.3.x:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp_2.12</artifactId>
    <version>4.3.1</version>
</dependency>

spark-nlp-gpu:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp-gpu_2.12</artifactId>
    <version>4.3.1</version>
</dependency>

spark-nlp-silicon:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp-silicon_2.12</artifactId>
    <version>4.3.1</version>
</dependency>

spark-nlp-aarch64:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp-aarch64_2.12</artifactId>
    <version>4.3.1</version>
</dependency>

FAT JARs

What's Changed


Spark NLP 4.3.0: New HuBERT for speech recognition, new Swin Transformer for Image Classification, new Zero-shot annotator for Entity Recognition, CamemBERT for question answering, new Databricks and EMR with support for Spark 3.3, 1000+ state-of-the-art models and many more!

09 Feb 19:45

πŸ“’ Overview

We are very excited to release Spark NLP πŸš€ 4.3.0! This has been one of the biggest releases we have ever done and we are so proud to share this with our community! πŸŽ‰

This release extends image classification support by introducing the Swin Transformer, extends speech recognition by introducing the HuBERT annotator, adds a brand-new extractive transformer-based question answering (QA) annotator for tasks like SQuAD based on the CamemBERT architecture, and brings new Databricks & EMR support for Spark 3.3, 1000+ state-of-the-art models, and many more enhancements and bug fixes!

We are also celebrating crossing 12600+ free and open-source models & pipelines in our Models Hub. πŸŽ‰ As always, we would like to thank our community for their feedback, questions, and feature requests.


πŸ”₯ New Features

HuBERT

NEW: Introducing HubertForCTC annotator in Spark NLP πŸš€. HubertForCTC can load HuBERT models that match or surpass the SOTA approaches for speech representation learning for speech recognition, generation, and compression. The Hidden-Unit BERT (HuBERT) approach was proposed for self-supervised speech representation learning, which utilizes an offline clustering step to provide aligned target labels for a BERT-like prediction loss. This annotator is compatible with all the models trained/fine-tuned by using HubertForCTC for PyTorch or TFHubertForCTC for TensorFlow models in HuggingFace πŸ€—


HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units by Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed.

Swin Transformer

NEW: Introducing SwinForImageClassification annotator in Spark NLP πŸš€. SwinForImageClassification can load transformer-based deep learning models with state-of-the-art performance in vision tasks. Swin Transformer improves on the Vision Transformer (ViT) (Dosovitskiy et al., 2020) in both accuracy and efficiency. This annotator is compatible with all the models trained/fine-tuned by using SwinForImageClassification for PyTorch or TFSwinForImageClassification for TensorFlow models in HuggingFace πŸ€—


Swin Transformer: Hierarchical Vision Transformer using Shifted Windows by Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, Baining Guo.

Zero-Shot for Named Entity Recognition

Zero-Shot Learning refers to the process by which a model learns how to recognize objects (image, text, any features) without any labeled training data to help in the classification.

NEW: Introducing ZeroShotNerModel annotator in Spark NLP πŸš€. You can use the ZeroShotNerModel annotator to construct simple questions/answers mapped to NER labels like PERSON, NORP, etc. Under the hood we use the RoBERTa for Question Answering architecture, and this allows you to use any of the 460 models available on Models Hub to build your zero-shot entity recognition with zero training data!

zero_shot_ner = ZeroShotNerModel.pretrained("roberta_base_qa_squad2", "en") \
    .setEntityDefinitions(
        {
            "NAME": ["What is his name?", "What is my name?", "What is her name?"],
            "CITY": ["Which city?", "Which is the city?"]
        }) \
    .setInputCols(["sentence", "token"]) \
    .setOutputCol("zero_shot_ner")

This powerful annotator with such simple rules can detect those entities from the following input: "My name is Clara, I live in New York and Hellen lives in Paris."

+-----------------------------------------------------------------+------+------+----------+------------------+
|result                                                           |result|word  |confidence|question          |
+-----------------------------------------------------------------+------+------+----------+------------------+
|[My name is Clara, I live in New York and Hellen lives in Paris.]|B-CITY|Paris |0.5328949 |Which is the city?|
|[My name is Clara, I live in New York and Hellen lives in Paris.]|B-NAME|Clara |0.9360068 |What is my name?  |
|[My name is Clara, I live in New York and Hellen lives in Paris.]|B-CITY|New   |0.83294415|Which city?       |
|[My name is Clara, I live in New York and Hellen lives in Paris.]|I-CITY|York  |0.83294415|Which city?       |
|[My name is Clara, I live in New York and Hellen lives in Paris.]|B-NAME|Hellen|0.45366877|What is her name? |
+-----------------------------------------------------------------+------+------+----------+------------------+
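Conceptually, ZeroShotNerModel turns each answer span returned by the QA model into token-level BIO tags for the label its question maps to. A stdlib-only sketch of that last step (the bio_tags helper is illustrative, not part of Spark NLP's API):

```python
def bio_tags(tokens, answer_tokens, label):
    # Tag the first occurrence of answer_tokens inside tokens as B-/I-label.
    tags = ["O"] * len(tokens)
    n = len(answer_tokens)
    for i in range(len(tokens) - n + 1):
        if tokens[i:i + n] == answer_tokens:
            tags[i] = "B-" + label
            for j in range(i + 1, i + n):
                tags[j] = "I-" + label
            break
    return tags

sent = "My name is Clara , I live in New York".split()
tags = bio_tags(sent, ["New", "York"], "CITY")
# tags[8:10] == ["B-CITY", "I-CITY"]
```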

CamemBERT for Question Answering

NEW: Introducing CamemBertForQuestionAnswering annotator in Spark NLP πŸš€. CamemBertForQuestionAnswering can load CamemBERT Models with a span classification head on top for extractive question-answering tasks like SQuAD (linear layers on top of the hidden-states output to compute span start logits and span end logits). This annotator is compatible with all the models trained/fine-tuned by using CamembertForQuestionAnswering for PyTorch or TFCamembertForQuestionAnswering for TensorFlow in HuggingFace πŸ€—

Models Hub

Introduces a new filter by annotator, which should make it easier to navigate and find models:



β­πŸ› Improvements & Bug Fixes

  • New Date2Chunk annotator to convert DATE outputs coming from DateMatcher and MultiDateMatcher annotators to CHUNK that is acceptable by a wider range of annotators
  • Spark NLP 4.3.0 supports Apple Silicon M1 and M2 (still under experimental status until GitHub supports Apple Silicon officially). We have refactored the name m1 to silicon and apple_silicon in our code for better clarity
  • Add new templates for issues, docs, and feature requests on GitHub
  • Add a new log4j2 properties for Spark 3.3.x coming with Log4j 2.x to control the logs on Apache Spark
  • Cross compatibility for all saved pipelines for all major releases of Apache Spark and PySpark
  • Relocating Spark NLP examples to the examples directory in our main repository. We will update them with each release, keep a history of the changes for each version, and add more languages, especially more use cases with Java and Scala
  • Add PyDoc documentation for ResourceDownloader in Python (clearCache(), showPublicModels(), showPublicPipelines(), and showAvailableAnnotators() )
  • Fix calculating delimiter id in CamemBERT annotators. The delimiter id is actually correct and doesn't need any offset
  • Fix AnalysisException exception that requires a different caught message for Spark 3.3
  • Fix copying existing models & pipelines on S3 before unzipping when cache_pretrained is defined as S3 bucket
  • Fix copying existing models & pipelines on GCP before unzipping when cache_pretrained is defined as GCP bucket
  • Fix loadSavedModel() trying to load external models for private buckets on S3 with better error handling and warnings
  • Enable the params argument in the Spark NLP start function. You can create a params = {} with all Spark NLP and Apache Spark configs and pass it when starting the Spark NLP session
  • Add support for doc id in CoNLL() class when trying to read CoNLL files with id inside each document's header
  • Welcoming 6 new Databricks runtimes to our Spark NLP family:
    • Databricks 12.0
    • Databricks 12.0 ML
    • Databricks 12.0 ML GPU
    • Databricks 12.1
    • Databricks 12.1 ML
    • Databricks 12.1 ML GPU
  • Welcoming 2 new EMR 6.x series to our Spark NLP family:
    • EMR 6.8.0 (Apache Spark 3.3.0 / Hadoop 3.2.1)
    • EMR 6.9.0 (Apache Spark 3.3.0 / Hadoop 3.3.3)
  • New article on semantic similarity with Spark NLP using Play/API/Swagger: https://medium.com/spark-nlp/semantic-similarity-with-sparknlp-da148fafa3d8
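The params argument mentioned above takes a plain dictionary of Spark configs in Python, mirroring the Scala Map. A hedged sketch (the config values shown are illustrative assumptions, not recommendations):

```python
# Illustrative Spark configs; the values are assumptions, tune them for your cluster.
params = {
    "spark.driver.memory": "16g",
    "spark.kryoserializer.buffer.max": "2000M",
}

# With Spark NLP and PySpark installed, the session would be started as:
# import sparknlp
# spark = sparknlp.start(params=params)
```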

Dependencies & Code Changes

  • Update to Apache Spark 3.3.1 (not shipped with Spark NLP)
  • Update GCP to 2.16.0
  • Update Scala test to 3.2.14
  • Start publishing spark-nlp-m1 Maven package as spark-nlp-silicon
  • Rename all read-model traits to generic names; a new ai module paves the path to another DL engine
  • Rename TF backends to more generic DL names
  • Refactor more duplicate codes in transformer embeddings

πŸ’Ύ Models

Spark NLP 4.3.0 comes with 1000+ state-of-the-art pre-trained transformer models in many languages.

Featured Models

Model                          | Name                               | Lang
-------------------------------|------------------------------------|-----
DistilBertForQuestionAnswering | distilbert_qa_en_de_vi_zh_es_model | xx
DistilBertForQuestionAnswering | distilbert_qa_extractive           | en
DistilBertForQuestionAnswering | distilbert_qa_base_cased_squadv2   | xx
RoBertaForQuestionAnswering    | [roberta_qa_roberta](https://nlp.johnsnowlabs.com/2023/01...

Spark NLP 4.2.8: Patch release

24 Jan 18:22

πŸ“’ Overview

Spark NLP 4.2.8 πŸš€ comes with some important bug fixes and improvements. As a result, we highly recommend updating to this latest version if you are using Spark NLP 4.2.x.

As always, we would like to thank our community for their feedback, questions, and feature requests. πŸŽ‰


⭐ πŸ› Bug Fixes & Improvements

  • Fix the issue with optional keys (labels) in metadata when using XXXForSequenceClassification annotators. This changes Some(neg) -> 0.13602075 to neg -> 0.13602075, in harmony with all the other classifiers. #13396

before 4.2.8:

+-----------------------------------------------------------------------------------------------+
|label                                                                                          |
+-----------------------------------------------------------------------------------------------+
|[{category, 0, 87, pos, {sentence -> 0, Some(neg) -> 0.13602075, Some(pos) -> 0.8639792}, []}] |
|[{category, 0, 47, neg, {sentence -> 0, Some(neg) -> 0.7505674, Some(pos) -> 0.24943262}, []}] |
|[{category, 0, 17, pos, {sentence -> 0, Some(neg) -> 0.31065974, Some(pos) -> 0.6893403}, []}] |
|[{category, 0, 71, neg, {sentence -> 0, Some(neg) -> 0.5079189, Some(pos) -> 0.4920811}, []}]  |
+-----------------------------------------------------------------------------------------------+

after 4.2.8:

+-----------------------------------------------------------------------------------+
|label                                                                              |
+-----------------------------------------------------------------------------------+
|[{category, 0, 87, pos, {sentence -> 0, neg -> 0.13602075, pos -> 0.8639792}, []}] |
|[{category, 0, 47, neg, {sentence -> 0, neg -> 0.7505674, pos -> 0.24943262}, []}] |
|[{category, 0, 17, pos, {sentence -> 0, neg -> 0.31065974, pos -> 0.6893403}, []}] |
|[{category, 0, 71, neg, {sentence -> 0, neg -> 0.5079189, pos -> 0.4920811}, []}]  |
+-----------------------------------------------------------------------------------+
  • Introducing a config to skip LightPipeline validation for inputCols on the Python side for projects depending on Spark NLP. This toggle should only be used for specific annotators that do not follow the convention of predefined inputAnnotatorTypes and outputAnnotatorType #13402
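For projects still on a pre-4.2.8 version, the wrapped keys can be normalized on the Python side after collecting results. A hedged workaround sketch (normalize_keys is not part of Spark NLP):

```python
import re

def normalize_keys(metadata):
    # Strip the Scala Option wrapper from metadata keys, e.g. "Some(neg)" -> "neg".
    return {re.sub(r"^Some\((.*)\)$", r"\1", k): v for k, v in metadata.items()}

fixed = normalize_keys({"sentence": "0", "Some(neg)": "0.13602075", "Some(pos)": "0.8639792"})
# fixed == {"sentence": "0", "neg": "0.13602075", "pos": "0.8639792"}
```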

πŸ“– Documentation


Installation

Python

#PyPI

pip install spark-nlp==4.2.8

Spark Packages

spark-nlp on Apache Spark 3.0.x, 3.1.x, 3.2.x, and 3.3.x (Scala 2.12):

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp_2.12:4.2.8

pyspark --packages com.johnsnowlabs.nlp:spark-nlp_2.12:4.2.8

GPU

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:4.2.8

pyspark --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:4.2.8

M1

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-m1_2.12:4.2.8

pyspark --packages com.johnsnowlabs.nlp:spark-nlp-m1_2.12:4.2.8

AArch64

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:4.2.8

pyspark --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:4.2.8

Maven

spark-nlp on Apache Spark 3.0.x, 3.1.x, 3.2.x, and 3.3.x:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp_2.12</artifactId>
    <version>4.2.8</version>
</dependency>

spark-nlp-gpu:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp-gpu_2.12</artifactId>
    <version>4.2.8</version>
</dependency>

spark-nlp-m1:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp-m1_2.12</artifactId>
    <version>4.2.8</version>
</dependency>

spark-nlp-aarch64:

<!-- https://mvnrepository.com/artifact/com.johnsnowlabs.nlp/spark-nlp-aarch64 -->
<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp-aarch64_2.12</artifactId>
    <version>4.2.8</version>
</dependency>

FAT JARs

What's Changed

Full Changelog: 4.2.7...4.2.8

Spark NLP 4.2.7: Patch release

12 Jan 16:46

πŸ“’ Overview

Spark NLP 4.2.7 πŸš€ comes with some important bug fixes and improvements. As a result, we highly recommend updating to this latest version if you are using Spark NLP 4.2.x.

As always, we would like to thank our community for their feedback, questions, and feature requests. πŸŽ‰


πŸ› ⭐ Bug Fixes & Enhancements

  • Fix outputAnnotatorType issue in pipelines with the Finisher annotator. This change adds outputAnnotatorType to AnnotatorTransformer to avoid loading the outputAnnotatorType attribute when a stage in the pipeline does not use it.
  • Fix the wrong sentence index calculation in metadata by annotators in the pipeline when the setExplodeSentences param was set to true in the SentenceDetector annotator
  • Fix the issue in Tokenizer when a custom pattern with lookaheads/lookbehinds has zero-width matches, which led to indexes not being calculated correctly
  • Fix missing embeddings in the output of the .fullAnnotate() method when the parseEmbeddings param was set to True/true
  • Fix broken links to the Python API pages, as the generation of the PyDocs was slightly changed in a previous release. This makes the Python APIs accessible from the Annotators and Transformers pages like before
  • Change the default values of the explodeEntities and mergeEntities parameters to true in the GraphExtraction annotator
  • Better error handling when there are empty paths/relations in the GraphExtraction annotator. The new message better guides the user on how to configure GraphExtraction to output meaningful relationships
  • Remove the duplicated definition of the setWeightedDistPath method from ContextSpellCheckerApproach

πŸ“– Documentation


Installation

Python

#PyPI

pip install spark-nlp==4.2.7

Spark Packages

spark-nlp on Apache Spark 3.0.x, 3.1.x, 3.2.x, and 3.3.x (Scala 2.12):

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp_2.12:4.2.7

pyspark --packages com.johnsnowlabs.nlp:spark-nlp_2.12:4.2.7

GPU

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:4.2.7

pyspark --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:4.2.7

M1

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-m1_2.12:4.2.7

pyspark --packages com.johnsnowlabs.nlp:spark-nlp-m1_2.12:4.2.7

AArch64

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:4.2.7

pyspark --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:4.2.7

Maven

spark-nlp on Apache Spark 3.0.x, 3.1.x, 3.2.x, and 3.3.x:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp_2.12</artifactId>
    <version>4.2.7</version>
</dependency>

spark-nlp-gpu:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp-gpu_2.12</artifactId>
    <version>4.2.7</version>
</dependency>

spark-nlp-m1:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp-m1_2.12</artifactId>
    <version>4.2.7</version>
</dependency>

spark-nlp-aarch64:

<!-- https://mvnrepository.com/artifact/com.johnsnowlabs.nlp/spark-nlp-aarch64 -->
<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp-aarch64_2.12</artifactId>
    <version>4.2.7</version>
</dependency>

FAT JARs

What's Changed

@dcecchini @Cabir40 @agsfer @gadde5300 @bunyamin-polat @rpranab @jdobes-cz @josejuanmartinez @diatrambitas @maziyarpanahi

Full Changelog: 4.2.6...4.2.7

Spark NLP 4.2.6: Patch release

21 Dec 09:54

⭐ Improvements

  • Updating Spark & PySpark dependencies from 3.2.1 to 3.2.3 in provided scripts and in all the documentation

πŸ› Bug Fixes

  • Fix the broken TypedDependencyParserApproach and TypedDependencyParserModel annotators used in Python (this bug was introduced in 4.2.5 release)
  • Fix the broken Python API documentation

πŸ“– Documentation


Installation

Python

#PyPI

pip install spark-nlp==4.2.6

Spark Packages

spark-nlp on Apache Spark 3.0.x, 3.1.x, 3.2.x, and 3.3.x (Scala 2.12):

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp_2.12:4.2.6

pyspark --packages com.johnsnowlabs.nlp:spark-nlp_2.12:4.2.6

GPU

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:4.2.6

pyspark --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:4.2.6

M1

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-m1_2.12:4.2.6

pyspark --packages com.johnsnowlabs.nlp:spark-nlp-m1_2.12:4.2.6

AArch64

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:4.2.6

pyspark --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:4.2.6

Maven

spark-nlp on Apache Spark 3.0.x, 3.1.x, 3.2.x, and 3.3.x:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp_2.12</artifactId>
    <version>4.2.6</version>
</dependency>

spark-nlp-gpu:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp-gpu_2.12</artifactId>
    <version>4.2.6</version>
</dependency>

spark-nlp-m1:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp-m1_2.12</artifactId>
    <version>4.2.6</version>
</dependency>

spark-nlp-aarch64:

<!-- https://mvnrepository.com/artifact/com.johnsnowlabs.nlp/spark-nlp-aarch64 -->
<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp-aarch64_2.12</artifactId>
    <version>4.2.6</version>
</dependency>

FAT JARs

What's Changed

Contributors

@gadde5300 @diatrambitas @Cabir40 @josejuanmartinez @danilojsl @jsl-builder @DevinTDHa @maziyarpanahi @dcecchini @agsfer

Full Changelog: 4.2.5...4.2.6