ICACT20240417 Question.1
Questioner: jllovido@bicol-u.edu.ph    2024-02-05 9:40:44 PM
How could the findings and strategies proposed in this study impact the broader field of microservices architecture and load balancing, specifically in the context of 5G applications?

ICACT20240417 Answer.1
Answer by author tiennn18@viettel.com.vn   2024-02-05 9:40:44 PM
Many 5G applications treat latency as a critical requirement, especially services involving human health and life or high-speed, high-accuracy facility control: remote surgery, autonomous vehicles, and real-time interactive AR/VR experiences, for example. An incident during operation usually reduces a service's capacity, degrading QoS; end users may have a bad experience or be unable to use the service at all, and failing to meet the SLA requirements can incur a penalty (revenue loss) for the 5G network service provider (in our case, the telco). CNF applications implemented with our solution can self-adapt their strategies and parameters to maintain ultra-reliable low latency (URLLC) during a surge. More broadly, self-adaptive strategies help operators maximize resource usage and increase service density, reducing CapEx and OpEx in the computing infrastructure.
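One way such self-adaptation can work is an AIMD-style (additive-increase, multiplicative-decrease) concurrency limiter that sheds load the moment observed latency exceeds the SLA target. The sketch below is illustrative only; the class, thresholds, and metric names are assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of a self-adaptive back-pressure control loop:
# tighten the in-flight request limit quickly when latency breaches the
# SLO, and reclaim capacity slowly once latency recovers (AIMD).
class AdaptiveLimiter:
    def __init__(self, limit=100, latency_slo_ms=10.0):
        self.limit = limit                  # current max in-flight requests
        self.latency_slo_ms = latency_slo_ms

    def observe(self, p99_latency_ms):
        """Adjust the concurrency limit from one latency sample."""
        if p99_latency_ms > self.latency_slo_ms:
            # Back-pressure: multiplicative decrease sheds load fast.
            self.limit = max(1, self.limit // 2)
        else:
            # Headroom: additive increase reclaims capacity gently.
            self.limit += 1
        return self.limit

limiter = AdaptiveLimiter(limit=100, latency_slo_ms=10.0)
limiter.observe(25.0)   # surge: limit halves to 50
limiter.observe(25.0)   # still over SLO: limit halves to 25
limiter.observe(4.0)    # recovered: limit creeps back to 26
```

The asymmetry (halve on breach, plus one on recovery) is what keeps admitted requests within the latency budget during a surge while still converging back to full capacity afterward.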
ICACT20240417 Question.2
Questioner: jllovido@bicol-u.edu.ph    2024-02-05 9:38:16 PM

ICACT20240417 Answer.2
Answer by author tiennn18@viettel.com.vn   2024-02-05 9:38:16 PM
What future research works or directions do you envision to further optimize the performance and adaptability of microservices load balancing in 5G applications? Because peers can negotiate among themselves on a peer-to-peer basis, the back-pressure cascading-conditions theory and its congestion-control metric can be generalized into an enhanced version of a base protocol (DCTCP is one example). Within a 5G application, we would then no longer need to hand-implement bare connection management in each application to meet URLLC requirements; this also benefits applications that do not require URLLC. Regarding functionality, we are tuning our strategies with more self-negotiable metrics to deliver finer-grained routing accuracy, and we will then bring AI into the strategies to handle incidents better.
ICACT20240417 Question.3
Questioner: madbrogada@bicol-u.edu.ph    2024-02-16 12:05:54 PM

ICACT20240417 Answer.3
Answer by author tiennn18@viettel.com.vn   2024-02-16 12:05:54 PM
In extreme conditions, how do the back-pressure strategies compare to front-pressure algorithms in terms of maintaining Quality of Service (QoS)? The difference lies in the ability to control the speed of each flow separately. The back-pressure strategy uses statistical connection metrics and then acts on the throughput and concurrency-level dimensions of each flow, following our two proven cascading conditions. Implemented as an FSM (or any other realization of a back-pressure strategy), it reacts to the metrics' states and keeps data flows utilized under pressure conditions (1), which keeps the system at its highest throughput with only minor flow drops (2). Effect (1) lets admitted requests be answered with normal QoS, as on a lightly utilized system, and (2) means fewer customers receive a denial of service. Overall system QoS increases in both ways.
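The FSM reaction to metric states described above can be sketched as a small state machine over two observed metrics, with a per-flow throttling action attached to each state. The state names, metrics, thresholds, and rate factors below are illustrative assumptions, not the paper's actual design.

```python
# Illustrative sketch (assumed names and thresholds): a finite-state
# machine that classifies pressure from connection metrics and throttles
# each flow's rate accordingly.
NORMAL, PRESSURED, OVERLOADED = "NORMAL", "PRESSURED", "OVERLOADED"

def next_state(state, queue_util, drop_rate):
    """Pick the next pressure state from queue utilization and drop rate."""
    if drop_rate > 0.05 or queue_util > 0.95:
        return OVERLOADED
    if queue_util > 0.7:
        return PRESSURED
    return NORMAL

def throttle(state, current_rate):
    """Per-flow rate action for the given state."""
    if state == OVERLOADED:
        return current_rate * 0.5   # aggressive back-pressure, shed load
    if state == PRESSURED:
        return current_rate * 0.9   # gentle slowdown, keep flows alive
    return current_rate             # full speed

# e.g. a pressured flow at 100 req/s is slowed to 90 req/s;
# an overloaded one drops to 50 req/s.
```

Because each flow is throttled from its own state, admitted flows keep near-normal QoS (effect 1) while the aggregate stays at high throughput with few outright drops (effect 2).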