IEEE/ICACT20230209 Question.2
Questioner: hbw017@mail.sdu.edu.cn    2023-02-22 6:45:41 PM

Hello, Author. In your paper, BERT is used for language training. Does this model have any advantages over traditional RNNs and LSTMs? As far as I know, the BERT model requires more model parameters and training steps. May I ask whether you have improved the BERT model? Thank you!

IEEE/ICACT20230209 Answer.2
Answer by Author saby9996@terenz.ai   2023-02-22 6:45:41 PM

Thanks a lot for your question. Actually, you gave the answer yourself: because the BERT model has more parameters, its depth of understanding is normally higher than that of traditional RNNs and LSTMs. We did not use pretrained BERT weights; instead, we trained the language model from scratch on our own data.
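The point about not using pretrained weights can be illustrated with a short sketch. The snippet below is hypothetical, not the authors' code: it shows how a BERT-style masked language model can be built from a configuration alone in the Hugging Face transformers library, so every weight starts randomly initialized and is learned from one's own corpus. The model dimensions, tokenizer choice, and toy batch are illustrative assumptions.

# Hypothetical sketch: training a BERT-style masked language model from
# randomly initialized weights (no pretrained checkpoint), as the answer
# describes. Model size, tokenizer, and data are illustrative assumptions.
import torch
from transformers import BertConfig, BertForMaskedLM, BertTokenizerFast

# Building the model from a config (rather than from_pretrained) leaves
# every weight randomly initialized, to be learned from one's own data.
config = BertConfig(
    vocab_size=30522,       # assumed vocabulary size
    hidden_size=256,        # smaller than BERT-base to reduce parameters
    num_hidden_layers=4,
    num_attention_heads=4,
)
model = BertForMaskedLM(config)

# Only the tokenizer is reused here; it carries no model weights.
tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")

# One masked-language-modeling step on a toy sentence.
batch = tokenizer(["an example sentence from the training corpus"],
                  return_tensors="pt")
mask_pos = 2
labels = torch.full_like(batch["input_ids"], -100)   # -100 = ignored by loss
labels[0, mask_pos] = batch["input_ids"][0, mask_pos]
batch["input_ids"][0, mask_pos] = tokenizer.mask_token_id

loss = model(**batch, labels=labels).loss
loss.backward()
print(f"MLM loss on the masked token: {loss.item():.4f}")

In practice, training from scratch like this needs substantially more data and training steps than fine-tuning a pretrained checkpoint, which is consistent with the questioner's observation about parameter count and training cost.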