@kg-nlp We use an extreme-and-selective masking strategy during pre-training, which asks the model to reconstruct large parts of the text based only on a sketch. For more details, please check our paper. Thanks.
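To make the idea concrete, below is a minimal, purely illustrative helper (make_sketch is a hypothetical name, not the repository's actual pre-processing code): it keeps only a few salient spans of a sentence and collapses everything between them into [MASK], so the model has to reconstruct most of the text from very little evidence.

# Illustrative sketch construction (a sketch of the idea, not the repo's implementation):
# keep only a handful of salient spans and mask everything in between.
def make_sketch(text, salient_spans, mask_token="[MASK]"):
    # salient_spans are assumed to be non-overlapping substrings of `text`, in order
    parts = []
    cursor = 0
    for span in salient_spans:
        start = text.find(span, cursor)
        if start == -1:
            continue
        if start > cursor:          # skipped text becomes a single mask
            parts.append(mask_token)
        parts.append(span)          # keep the salient span verbatim
        cursor = start + len(span)
    if cursor < len(text):          # trailing remainder is masked too
        parts.append(mask_token)
    return "".join(parts)

# e.g. make_sketch("学生通过写作文来表达自己的感受", ["学生", "作文", "感受"])
# -> "学生[MASK]作文[MASK]感受", the same shape as the sketch in the example below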
Using genius-base-chinese:
from transformers import pipeline

# load the Chinese GENIUS checkpoint on GPU 0
genius = pipeline("text2text-generation", model=r'genius-base-chinese', device=0)
# sketch: keep only the key spans, mask everything in between
sketch = "学生[MASK]作文[MASK]感受"
generated_text = genius(sketch, num_beams=1, do_sample=True, max_length=200)[0]['generated_text']
# strip the spaces the tokenizer inserts between Chinese characters
generated_text = generated_text.replace(' ', '')
print(generated_text)
Generated results:
学生在作文中,有感受要先写出自己的理解、感受、观点,再看自己所作用的作品。
学生在阅读和完成作文之间产生了很多的联系和感受
学生在作文中,不仅是感受美国,我觉得也有大量东西可写,感受
学生对自己的作文有丰富的感受,这样就能够给予孩子更深刻的认识和感受
学生通过作文表现出自己对于生活及教育的认识及感受
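Each call with do_sample=True produces a different output, so to collect several candidates in one call you can pass num_return_sequences (a standard transformers generation argument that the pipeline forwards to model.generate()). A minimal variation, assuming the same genius pipeline and sketch as above:

# sample several candidates in a single call instead of re-running the script;
# num_return_sequences is forwarded to model.generate()
outputs = genius(sketch, num_beams=1, do_sample=True, max_length=200, num_return_sequences=5)
for out in outputs:
    print(out['generated_text'].replace(' ', ''))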
If I want to improve the generation quality, which aspects should I focus on?
Corpus size?
Model size?
Could you give some concrete details?