IEEE/ICACT20230209 Question.2
Questioner: hbw017@mail.sdu.edu.cn    2023-02-22 6:45:41 PM
Hello, author. In your paper, BERT is used for language training. Does this model have any advantages over traditional RNNs and LSTMs? As far as I know, the BERT model needs more model parameters and training steps. I would like to ask whether you have improved the BERT model. Thank you!

IEEE/ICACT20230209 Answer.2
Answer by Author saby9996@terenz.ai   2023-02-22 6:45:41 PM
Thanks a lot for your question. Actually, you gave the answer yourself: because BERT uses more model parameters, its depth of understanding is normally higher than that of traditional RNNs and LSTMs. We did not use pretrained BERT weights; instead, we trained the language model from scratch on our own data.
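The parameter gap the questioner mentions can be made concrete with some back-of-the-envelope arithmetic. The sketch below counts parameters for a standard BERT-base configuration versus a hypothetical 2-layer LSTM language model of the same hidden size; the exact hyperparameters (vocabulary size 30522, hidden size 768, etc.) are the usual BERT-base defaults, not values taken from the paper under discussion, and the LSTM gate formula uses a single bias vector per gate for simplicity.

```python
def bert_base_params(vocab=30522, hidden=768, layers=12,
                     ffn=3072, max_pos=512, type_vocab=2):
    """Rough parameter count for a BERT-base-style encoder (no pooler/LM head)."""
    # Token + position + segment embeddings, plus the embedding LayerNorm.
    emb = (vocab + max_pos + type_vocab) * hidden + 2 * hidden
    # Self-attention: Q, K, V and output projections, each with a bias.
    attn = 4 * (hidden * hidden + hidden)
    # Feed-forward: hidden -> ffn -> hidden, each linear with a bias.
    ffn_p = (hidden * ffn + ffn) + (ffn * hidden + hidden)
    # Two LayerNorms per layer (after attention and after the FFN).
    ln = 2 * (2 * hidden)
    return emb + layers * (attn + ffn_p + ln)

def lstm_lm_params(vocab=30522, hidden=768, layers=2):
    """Rough parameter count for an LSTM language model (no output head)."""
    emb = vocab * hidden
    # Four gates; each sees input (= hidden here) and previous hidden state.
    per_layer = 4 * (hidden * (hidden + hidden) + hidden)
    return emb + layers * per_layer

print(f"BERT-base ~ {bert_base_params():,} parameters")
print(f"2-layer LSTM ~ {lstm_lm_params():,} parameters")
```

Running the sketch shows the encoder alone carries roughly three times the parameters of the LSTM model, most of the difference coming from the 12 Transformer layers; this is the extra capacity the answer credits for BERT's deeper language understanding, at the cost of more training steps.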
