We argue that, when establishing and benchmarking machine learning (ML) models, the research community should favour evaluation metrics that better capture the value delivered by their models in practical applications.
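As a minimal illustration of what such a value-oriented metric could look like, the sketch below scores a binary classifier by average value per decision under a hypothetical cost/benefit matrix instead of plain accuracy; the figures in `values` are invented for illustration, not taken from the source.

```python
import numpy as np

def expected_value(y_true, y_pred, value_matrix):
    """Average value per decision, given value_matrix[true_label][pred_label]."""
    return float(np.mean([value_matrix[t][p] for t, p in zip(y_true, y_pred)]))

# Hypothetical business values: a true positive earns 50, a false positive
# costs 10, a false negative costs 100, a true negative is neutral.
values = {0: {0: 0.0, 1: -10.0},
          1: {0: -100.0, 1: 50.0}}

y_true = [0, 1, 1, 0, 1]
y_pred = [0, 1, 0, 1, 1]
print("accuracy:", np.mean(np.array(y_true) == np.array(y_pred)))     # 0.6
print("value per decision:", expected_value(y_true, y_pred, values))  # -2.0
```

Under this matrix, a model with respectable accuracy can still have strongly negative average value, which is exactly the gap the argument points at.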
The number of information systems (IS) studies dealing with explainable
artificial intelligence (XAI) is currently exploding as the field demands more
transparency about the internal decision logic of
Enabling robust intelligence in the wild entails learning systems that offer uninterrupted inference while affording sustained training, with varying amounts of data and supervision. Such a pragmatic ML
In this paper, we argue that the way we have been training and evaluating ML models has largely forgotten the fact that they are applied in an organizational or societal context, as they provide value to
Algorithmic decisions are now made on a daily basis and are based on Machine Learning (ML) processes that may be complex and biased. This raises several concerns given the critical impact that bias
The lack of specifications is a key difference between traditional software
engineering and machine learning. We discuss how it drastically impacts how we
think about divide-and-conquer approaches to
Model Inversion (MI), in which an adversary abuses access to a trained Machine Learning (ML) model in an attempt to infer sensitive information about its original training data, has attracted increasing
Different machine learning (ML) models are proposed in the present work to predict barrier heights (BHs) from semiempirical quantum-mechanical calculations.
The obtained mean absolute errors (MAEs) are similar to, or slightly better than, those of previous models considering the same number of data points.
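A minimal sketch of the kind of pipeline such work implies: fit a regressor on (stand-in) semiempirical descriptors and score it by MAE. The descriptors, targets, and choice of gradient boosting are placeholder assumptions, not the paper's actual models or data.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))                                  # stand-in semiempirical descriptors
y = X @ rng.normal(size=8) + rng.normal(scale=0.1, size=500)   # stand-in barrier heights

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
print("test MAE:", mean_absolute_error(y_te, model.predict(X_te)))
```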
Artificial intelligence (AI) and machine learning (ML) have become
increasingly vital in the development of novel defense and intelligence
capabilities across all domains of warfare. An adversarial AI
This thesis presents my research that expands our collective knowledge in the areas of accountability and transparency of machine learning (ML) models developed for complex reasoning tasks over text.
Accountability methods, such as adversarial attacks and diagnostic datasets, expose vulnerabilities of ML models that could lead to malicious manipulations or systematic faults in their predictions.
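As a toy illustration of one such accountability method, the sketch below applies an FGSM-style adversarial perturbation to a hand-set linear classifier; the weights, input, and step size are invented, and the thesis's actual attacks on text models are not reproduced here.

```python
import numpy as np

w, b = np.array([2.0, -1.0]), 0.5     # hand-set logistic-regression weights
x = np.array([1.0, 1.0])              # input currently scored as positive

def logit(x):
    return float(w @ x + b)

# FGSM-style step: the gradient of the logit w.r.t. x is w, so stepping
# against sign(w) pushes the input toward the decision boundary.
eps = 0.5
x_adv = x - eps * np.sign(w)
print(logit(x), "->", logit(x_adv))   # 1.5 -> 0.0
```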
We introduce a gradient-free quantum learning framework that optimizes quantum machine learning (QML) models using quantum optimization.
The method does not rely on gradient computation and therefore avoids barren plateaus (i.e., vanishing gradients) and frequent classical-quantum interactions.
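A minimal sketch of the gradient-free optimization loop such a framework implies, with a classical cost function standing in for a measured circuit expectation value and SciPy's derivative-free COBYLA standing in for the paper's quantum-optimization routine (which is not reproduced here).

```python
import numpy as np
from scipy.optimize import minimize

def cost(theta):
    # Placeholder for an expectation value measured from a parameterized circuit.
    return 1.0 - np.prod(np.cos(theta))

theta0 = np.full(4, 0.8)
result = minimize(cost, theta0, method="COBYLA")  # derivative-free: no gradients evaluated
print("optimal parameters:", result.x)
print("final cost:", cost(result.x))
```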
This paper proposes a novel visualisation approach to compare substantial numbers of generated machine learning (ML) models trained with different numbers of features of a given data set, while revealing implicit dependence relations such as feature importance for ML explanations.
The dependence of ML models on a varying number of features is encoded into the structure of the visualisation, where the relations between the ML models and the features they depend on are revealed directly through line connections.
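A toy rendering of the line-connection idea in matplotlib: each model is linked to the features it was trained on, with line width encoding a hypothetical importance score. The models, features, and scores below are invented for illustration.

```python
import matplotlib.pyplot as plt

# Hypothetical models, each mapping its input features to an importance score.
models = {"model_a": {"f1": 0.7, "f2": 0.3},
          "model_b": {"f1": 0.2, "f2": 0.5, "f3": 0.3},
          "model_c": {"f3": 0.9}}
features = sorted({f for deps in models.values() for f in deps})

fig, ax = plt.subplots()
for i, (name, deps) in enumerate(models.items()):
    ax.text(-0.05, i, name, ha="right", va="center")
    for feat, importance in deps.items():
        # One line per model-feature dependence; width encodes importance.
        ax.plot([0, 1], [i, features.index(feat)],
                linewidth=4 * importance, color="steelblue")
for j, feat in enumerate(features):
    ax.text(1.05, j, feat, ha="left", va="center")
ax.set_xticks([]); ax.set_yticks([])
plt.show()
```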
Development of new machine learning models is typically done on manually curated data sets, making them unsuitable for evaluating the models' performance during operations, where the evaluation needs to be performed automatically on incoming streams of new data.
With this in mind, we developed a web-based visualization system that allows the users to quickly gather headline performance numbers while maintaining confidence that the underlying data pipeline is functioning properly.
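A minimal sketch of the automatic, streaming evaluation such a system sits on top of: a rolling accuracy over the most recent labelled examples arriving from the stream. The window size and metric choice are illustrative assumptions, not the system's actual design.

```python
from collections import deque

class RollingAccuracy:
    """Headline accuracy over the most recent `window` labelled examples."""
    def __init__(self, window=1000):
        self.hits = deque(maxlen=window)

    def update(self, y_true, y_pred):
        self.hits.append(y_true == y_pred)
        return sum(self.hits) / len(self.hits)

monitor = RollingAccuracy(window=3)
for truth, pred in [(1, 1), (0, 1), (1, 1), (0, 0)]:
    print(round(monitor.update(truth, pred), 2))  # 1.0, 0.5, 0.67, 0.67
```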
Translating machine learning (ML) models effectively to clinical practice
requires establishing clinicians' trust. Explainability, or the ability of an
ML model to justify its outcomes and assist clin
This paper formally models, by means of game theory, the strategic repeated interactions between a machine learning (ML) model with an associated explanation method and an end-user who is seeking a prediction/label and its explanation for a query/input.
In this game, a malicious end-user must strategically decide when to stop querying and attempt to compromise the system, while the system must strategically decide how much information (in the form of noisy explanations) it should share with the end-user and when to stop sharing, all without knowing the type (honest/malicious) of the end-user.
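A toy simulation of this repeated game under invented assumptions: the system degrades its explanations with a linear noise schedule, and the malicious user attacks once the accumulated information crosses a threshold. Neither the payoffs nor the stopping rule comes from the paper's equilibrium analysis.

```python
import random

random.seed(0)
true_signal = 1.0           # the sensitive quantity the explanations leak
info, threshold = 0.0, 3.0  # the attacker strikes once enough info accumulates

for round_no in range(1, 20):
    noise_scale = 0.1 * round_no                       # system degrades explanations over time
    explanation = true_signal + random.gauss(0, noise_scale)
    info += max(0.0, 1.0 - noise_scale)                # noisier answers leak less
    print(f"round {round_no}: explanation={explanation:.2f}, info={info:.2f}")
    if info >= threshold:
        print(f"attacker stops querying and attacks at round {round_no}")
        break
```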
This paper conducts an in-depth literature review of a large volume of research papers focused on the quality assurance of machine learning (ML) models in software systems.
We developed a taxonomy of quality assurance issues of machine learning software applications (MLSAs) by mapping the various ML adoption challenges across the different phases of the software development life cycle (SDLC).
Machine learning (ML) models may be deemed confidential due to their
sensitive training data, commercial value, or use in security applications.
Increasingly often, confidential ML models are being de
Unintended biases in machine learning (ML) models are among the major
concerns that must be addressed to maintain public trust in ML. In this paper,
we address process fairness of ML models that consi
Magnetism prediction is of great significance for metallic glasses, which have shown considerable commercial value.
In this work, machine learning (ML) models were trained on a large body of experimental data using extreme gradient boosting (XGBoost), artificial neural networks (ANNs), and random forests to predict the magnetic properties of metallic glasses.
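A minimal sketch of one of the three model families named (a random forest), with invented composition-style descriptors and magnetization targets; the paper's actual features and experimental data are not reproduced.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
X = rng.random((300, 10))                                # stand-in composition descriptors
y = 1.5 * X[:, 0] - X[:, 3] + rng.normal(0, 0.05, 300)   # stand-in magnetization values

model = RandomForestRegressor(n_estimators=300, random_state=0)
scores = cross_val_score(model, X, y, scoring="neg_mean_absolute_error", cv=5)
print("cross-validated MAE:", -scores.mean())
```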