Ruzaliev R: Federated Learning for Clinical Event Classification Using Vital Signs Data
VOLUME XX, 2023
overshooting the optimal solution, while a low learning rate can result in slow convergence. The number of communication rounds determines how many times model parameters are updated and aggregated between the participants and the central server; more rounds can improve convergence but also increase the communication overhead. The local batch size determines the number of examples each participant uses to compute the gradients for its local model. Next is regularization, a technique that prevents overfitting by adding a penalty term to the loss function; it can improve the generalization performance of the model, especially when dealing with small amounts of data. The distribution of data across the participants can also affect the performance and convergence of the model: a skewed distribution, in which one participant holds significantly more data than the others, can lead to suboptimal convergence. The last factor in federated learning is the heterogeneity of the data across participants, which can impact both the convergence and the generalization performance of the model; this includes differences in the distribution, quality, and label balance of the data.
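The interplay of these hyperparameters can be illustrated with a minimal FedAvg-style loop on synthetic data. This is only a sketch: the client data sizes (deliberately skewed), learning rate, number of communication rounds, and local batch size below are illustrative assumptions, not values from our experiments.

```python
import numpy as np

# Minimal FedAvg sketch on a synthetic linear-regression task.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

# Skewed partitions: one participant holds far more data than the others.
client_sizes = [400, 50, 50]
clients = []
for n in client_sizes:
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    clients.append((X, y))

w = np.zeros(2)                          # global model parameters
lr, rounds, batch = 0.05, 20, 16         # illustrative hyperparameters

for r in range(rounds):                  # communication rounds
    updates, weights = [], []
    for X, y in clients:
        w_local = w.copy()               # start from the global model
        idx = rng.permutation(len(y))
        for start in range(0, len(y), batch):   # local mini-batch SGD
            b = idx[start:start + batch]
            grad = 2 * X[b].T @ (X[b] @ w_local - y[b]) / len(b)
            w_local -= lr * grad
        updates.append(w_local)
        weights.append(len(y))
    # The server aggregates local models, weighted by client data size.
    w = np.average(updates, axis=0, weights=weights)

print(np.round(w, 2))
```

Weighting the aggregation by client data size is what FedAvg does to cope with unequal partitions; with fewer communication rounds or a larger learning rate, the loop converges more slowly or oscillates, matching the trade-offs described above.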
IV. Experimental Results
We used our Gachon University laboratory as the environment for the machine learning and federated learning experiments on the clinical event classification task, with the following setup: an NVIDIA RTX 3090 GPU, 64 GB RAM, a Core i9 CPU at 4.5 GHz, Python, and CUDA. The choice of model parameters can also affect the performance of a machine learning model; for example, the number of trees in a random forest or the regularization parameter in a logistic regression model can change the results. The choice of evaluation metrics is likewise an important part of the environment, since different metrics may be more appropriate for different types of problems and data. There are several ways to compare
machine learning models. One of the most common is to evaluate their performance using relevant metrics such as accuracy, precision, recall, and F1-score, which provide a quantitative assessment of a model's ability to solve a specific problem. Accuracy is the proportion of correct predictions made by the model. Precision (1) is the proportion of true positive predictions among all positive predictions. Recall (2), also called sensitivity, is the proportion of true positive predictions among all actual positive cases. Finally, the F1-score (3) is the harmonic mean of precision and recall. Overall, it is important to consider a combination of these factors when comparing machine learning models to determine which model is best suited for a specific problem.
Precision = TP / (TP + FP)      (1)
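As a sanity check, these metrics can be computed by hand from the confusion-matrix counts, following the definitions above. The label vectors below are made up for illustration only; they are not results from our experiments.

```python
# Hand-computed accuracy, precision, recall, and F1-score from
# confusion-matrix counts on an illustrative pair of label vectors.
y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]

tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))  # true positives
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))  # false positives
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))  # false negatives
tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))  # true negatives

accuracy = (tp + tn) / len(y_true)                   # correct / total
precision = tp / (tp + fp)                           # Eq. (1)
recall = tp / (tp + fn)                              # Eq. (2), sensitivity
f1 = 2 * precision * recall / (precision + recall)   # Eq. (3), harmonic mean

print(accuracy, precision, recall, f1)
```

Because precision ignores false negatives and recall ignores false positives, the F1-score is the single number that penalizes an imbalance between the two, which is why we report all four metrics together.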