Given an example input text, the workflow shows the outputs of different pre-trained BERT/GPT models: a named entity recognition model, followed by question answering and BioGPT.
The NER model, based on distilbert-base-uncased, recognizes biomedical entities in a given text. Open the view to see a NER visualization created with the spaCy Python library.
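The spaCy visualizer can render entities that were predicted by another library via its "manual" mode. A minimal sketch of that hand-off, assuming Hugging Face-style entity dicts (the sample text, entities, and labels below are made up; the actual model name used in the workflow is not stated here):

```python
# Convert Hugging Face token-classification output into the "manual" input
# format expected by spaCy's displacy entity visualizer.
# In the workflow, entities would come from something like
#   pipeline("token-classification", model=..., aggregation_strategy="simple")

def to_displacy(text, entities):
    """Map HF-style entity dicts (start/end/entity_group) to displacy format."""
    return {
        "text": text,
        "ents": [
            {"start": e["start"], "end": e["end"], "label": e["entity_group"]}
            for e in sorted(entities, key=lambda e: e["start"])
        ],
        "title": None,
    }

text = "The patient was given aspirin for chest pain."
entities = [  # shape of pipeline output with aggregation_strategy="simple"
    {"entity_group": "Medication", "word": "aspirin", "start": 22, "end": 29},
    {"entity_group": "Sign_symptom", "word": "chest pain", "start": 34, "end": 44},
]

doc = to_displacy(text, entities)
# Rendering the view (requires spaCy):
# html = spacy.displacy.render(doc, style="ent", manual=True)
```

Because `manual=True` takes plain dicts, no spaCy language model needs to be loaded just to draw the highlighted entities.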
Question answering tasks are based on a context that includes the answer to a given question.
The model used is BioM-ELECTRA-Large, which was pre-trained on PubMed abstracts and fine-tuned on the SQuAD2.0 dataset.
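SQuAD-style models like this one do extractive QA: they score every context token as a possible answer start and end, and return the best valid span. A minimal sketch of that span-selection step, with made-up tokens and logits standing in for the real model's outputs:

```python
# Toy version of the answer-span selection used in SQuAD-style extractive QA:
# pick the (start, end) pair with the highest combined score, subject to
# start <= end and a maximum answer length.

def best_span(start_logits, end_logits, max_answer_len=5):
    best = (0, 0, float("-inf"))
    for i, s in enumerate(start_logits):
        for j in range(i, min(i + max_answer_len, len(end_logits))):
            score = s + end_logits[j]
            if score > best[2]:
                best = (i, j, score)
    return best[:2]

tokens = ["BioGPT", "was", "pre-trained", "on", "biomedical", "literature"]
start_logits = [0.1, 0.0, 0.2, 0.1, 2.5, 0.3]  # illustrative values only
end_logits   = [0.2, 0.1, 0.0, 0.1, 0.4, 2.8]

i, j = best_span(start_logits, end_logits)
answer = " ".join(tokens[i : j + 1])  # highest-scoring span
```

With transformers, all of this is wrapped inside `pipeline("question-answering", ...)`; the sketch only shows the decoding logic.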
BioGPT is a language model pre-trained on biomedical literature that can generate text from a given input.
In this example it auto-completes a sentence. The maximum length of the generated sentence and the number of sentences can be adjusted.
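The two adjustable options map onto how autoregressive generation works: tokens are appended one at a time until a length limit (or end token) is reached, and the whole loop runs once per requested output. A toy sketch with a dummy next-token function in place of the real model (with transformers this would be `model.generate(..., max_length=..., num_return_sequences=...)`):

```python
# Toy greedy/sampling generation loop illustrating what "maximum length" and
# "number of sentences" control. dummy_next_token stands in for BioGPT.
import random

def generate(prompt, next_token, max_length=10, num_sequences=2, seed=0):
    rng = random.Random(seed)
    outputs = []
    for _ in range(num_sequences):        # one loop per requested sequence
        tokens = prompt.split()
        while len(tokens) < max_length:   # stop at the maximum length...
            tok = next_token(tokens, rng)
            if tok is None:               # ...or at an end-of-sequence token
                break
            tokens.append(tok)
        outputs.append(" ".join(tokens))
    return outputs

vocab = ["is", "a", "biomedical", "language", "model", "."]
def dummy_next_token(tokens, rng):
    return rng.choice(vocab)

completions = generate("BioGPT", dummy_next_token, max_length=6, num_sequences=2)
# two completions, each at most 6 tokens long, each starting with the prompt
```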
BioGPT:
Authors: Renqian Luo, Liai Sun, Yingce Xia, Tao Qin, Sheng Zhang, Hoifung Poon, Tie-Yan Liu
Title: BioGPT: generative pre-trained transformer for biomedical text generation and mining
Question Answering:
Authors: Sultan Alrowili, Vijay Shanker
Title: BioM-Transformers: Building Large Biomedical Language Models with BERT, ALBERT and ELECTRA
Named Entity Recognition:
Authors: Deepak John Reji, Shaina Raza
Created with KNIME Analytics Platform version 5.0.1