ICACT20230209 Question.1 Questioner: franklinlu888@outlook.com 2023-02-22 6:43:11 PM
ICACT20230209 Answer.1 Answer by Author saby9996@terenz.ai 2023-02-22 6:43:11 PM
A nice presentation! However, can you explain the equation on Slide 15 in more detail?
There is no equation on Slide 15.
ICACT20230209 Question.11 Questioner: yangkoon@gmail.com 2023-02-22 6:46:15 PM
ICACT20230209 Answer.11 Answer by Author saby9996@terenz.ai 2023-02-22 6:46:15 PM
Is there a reason for using BERT, an encoder model, rather than a decoder model like GPT or an encoder-decoder model like Seq2seq?
A comparative analysis was done, and BERT came out the best.
ICACT20230209 Question.2 Questioner: hbw017@mail.sdu.edu.cn 2023-02-22 6:45:41 PM
ICACT20230209 Answer.2 Answer by Author saby9996@terenz.ai 2023-02-22 6:45:41 PM
Hello, author. In your paper, BERT is used for language training. Does this model have any advantages over traditional RNN and LSTM models? As far as I know, the BERT model needs more model parameters and training steps. I would like to ask whether you have improved the BERT model. Thank you!
Thanks a lot for your question. Actually, you gave the answer yourself: because the BERT model has more parameters, its depth of understanding is normally higher than that of traditional RNN and LSTM models. We did not use pretrained BERT weights; we trained the language model from scratch on our own data.
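To illustrate the answer, here is a minimal sketch of training BERT without pretrained weights, assuming the HuggingFace transformers library; the tokenizer, data, and hyperparameters below are placeholders, not the authors' actual setup.

import torch
from transformers import BertConfig, BertForMaskedLM, BertTokenizerFast

# A real project would train a tokenizer on its own corpus; the stock
# vocabulary is reused here purely for illustration.
tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")

# BertForMaskedLM(config) creates fresh, randomly initialized weights,
# unlike from_pretrained(), which would load pretrained ones.
model = BertForMaskedLM(BertConfig(vocab_size=tokenizer.vocab_size))

texts = ["example sentence from the training corpus"]  # placeholder data
batch = tokenizer(texts, return_tensors="pt", padding=True)

# For brevity the loss is taken over all tokens; real masked-language-model
# training masks ~15% of tokens (e.g., via DataCollatorForLanguageModeling).
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss = model(**batch, labels=batch["input_ids"]).loss
loss.backward()
optimizer.step()  # one illustrative training step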
ICACT20230209 Question.3 Questioner: a9991204@gmail.com 2023-02-22 6:46:38 PM
ICACT20230209 Answer.3 Answer by Author saby9996@terenz.ai 2023-02-22 6:46:38 PM
Dear Author, have you considered other methods besides Bayesian SMBO?
I am sorry, but we specifically used SMBO.
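For reference, Bayesian SMBO (sequential model-based optimization) alternates between fitting a surrogate model to the trials observed so far and picking the next hyperparameters with an acquisition function. A small sketch using scikit-optimize's Gaussian-process optimizer follows; the objective, search space, and budget are hypothetical, not the paper's configuration.

from skopt import gp_minimize
from skopt.space import Integer, Real

# Hypothetical search space over two common hyperparameters.
space = [Real(1e-5, 1e-3, prior="log-uniform", name="learning_rate"),
         Integer(8, 64, name="batch_size")]

def objective(params):
    learning_rate, batch_size = params
    # Placeholder: in practice, train briefly with these settings and
    # return the validation loss to be minimized.
    return (learning_rate - 1e-4) ** 2 + 0.001 * batch_size

# Each iteration fits a Gaussian-process surrogate to the past
# (hyperparameters, loss) pairs and chooses the next trial by
# optimizing an acquisition function over the surrogate.
result = gp_minimize(objective, space, n_calls=20, random_state=0)
print(result.x, result.fun)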
ICACT20230209 Question.4 Questioner: ida@niad.ac.jp 2023-02-22 6:47:18 PM
ICACT20230209 Answer.4 Answer by Author saby9996@terenz.ai 2023-02-22 6:47:18 PM
Could you introduce the languages that cause difficulties in natural language understanding?
Korean and Chinese are difficult; the reason is their sentence structure.
ICACT20230209 Question.5 Questioner: quandh13@fe.edu.vn 2023-02-23 7:57:59 PM
ICACT20230209 Answer.5 Answer by Author saby9996@terenz.ai 2023-02-23 7:57:59 PM
Hello, I'd like to know whether in your work the BERT model was just fine-tuned or re-trained. It would also be great if you could mention the training time for your experimental dataset. Thanks.
Thanks a lot for your question. The BERT model was re-trained. Training took around 9 hours per epoch on Tesla GPUs.