Publications

Artificial intelligence (AI), for the purposes of this review, is an umbrella term for technologies emulating a nephropathologist’s ability to extract information on diagnosis, prognosis, and therapy responsiveness from native or transplant kidney biopsies. Although AI can be used to analyze a wide variety of biopsy-related data, this review focuses on the whole slide images traditionally used in nephropathology. AI applications in nephropathology have recently become feasible through several advancing technologies, including (i) the widespread introduction of glass slide scanners, (ii) data servers in pathology departments worldwide, and (iii) greatly improved computer hardware that enables AI training. In this review, we explain how AI can enhance the reproducibility of nephropathology results for certain parameters in the context of precision medicine, using advanced architectures such as convolutional neural networks that currently represent the state of the art in machine learning software for this task. Because AI applications in nephropathology are still in their infancy, we illustrate the power and potential of AI largely with examples from oncopathology. Moreover, we discuss the technological obstacles, as well as current stakeholder and regulatory concerns, around developing AI applications in nephropathology from the perspective of nephropathologists and the wider nephrology community. We expect the gradual introduction of these technologies into routine diagnostics and research for selected tasks, and we argue that they will enhance the performance of nephropathologists rather than making them redundant.
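
For readers unfamiliar with the architecture, the following is a minimal sketch of a convolutional neural network patch classifier in Python (PyTorch); the patch size, channel counts, and two-class setup (e.g., glomerulus vs. background) are hypothetical illustrations, not a pipeline from the review.

```python
# Minimal CNN sketch: classify small tissue patches cut from a whole
# slide image (e.g., glomerulus vs. background). Patch size and class
# count are hypothetical; this is an illustration, not a validated tool.
import torch
import torch.nn as nn

class PatchCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_classes)

    def forward(self, x):                      # x: (batch, 3, 64, 64) RGB patches
        h = self.features(x)
        return self.classifier(h.flatten(1))

model = PatchCNN()
logits = model(torch.randn(8, 3, 64, 64))      # 8 random 64x64 patches
print(logits.shape)                            # torch.Size([8, 2])
```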

Biosensors are emerging as efficient (sensitive and selective) and affordable analytical diagnostic tools for early-stage disease detection, as required for personalized health and wellness management. Low-level detection of a targeted disease biomarker (at the pM level) has proven extremely useful for evaluating the progression of disease under therapy. The resulting bioinformatics, and its analysis from multiple angles, is needed to explore the effectiveness of a prescribed treatment, optimize therapy, and correlate biomarker levels with disease pathogenesis. Nanotechnology-enabled advancements in sensing-unit fabrication, device integration, interfacing, packaging, and sensing performance at the point of care (POC) have made diagnostics responsive to the requirements of disease management and the patient's disease profile, i.e., personalized. Efforts are continuously being made to promote state-of-the-art biosensing technology as a next-generation, non-invasive disease diagnostics methodology. With this in view, this progressive opinion article describes analytical tools for personalized health care management that can provide access to better health for everyone, with the overarching aim of managing a healthy tomorrow in a timely manner. Considering both accomplishments and predictions, such affordable intelligent diagnostic tools are urgently required to manage the COVID-19 pandemic, a life-threatening respiratory infectious disease, where rapid, selective, and sensitive detection of the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) protein is the key factor.

This Methods/Protocols article is intended for materials scientists interested in performing machine learning-centered research. We cover broad guidelines and best practices regarding obtaining and treating data, feature engineering, model training, validation, evaluation, and comparison, popular repositories for materials data and benchmarking data sets, model and architecture sharing, and finally publication. In addition, we include interactive Jupyter notebooks with example Python code to demonstrate some of the concepts, workflows, and best practices discussed. Overall, the data-driven methods, machine learning workflows, and related considerations are presented in a simple way, allowing interested readers to more intelligently guide their machine learning research using the suggested references, best practices, and their own materials domain expertise.
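
As a hedged illustration of the kind of workflow such notebooks walk through, the sketch below shows a train/test split with cross-validation in scikit-learn; the data file, column names, target property, and model choice are hypothetical placeholders, not taken from the article's notebooks.

```python
# Minimal sketch of a train/validate/evaluate workflow for a materials
# property regression, in the spirit of the article's best practices.
# The CSV path and column names are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.metrics import mean_absolute_error

df = pd.read_csv("materials_data.csv")           # hypothetical data set
X = df.drop(columns=["formula", "band_gap"])     # engineered features
y = df["band_gap"]                               # hypothetical target (eV)

# Hold out a test set that is never touched during model development.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0)

# Cross-validate on the training set to compare candidate models fairly.
cv_mae = -cross_val_score(
    model, X_train, y_train, cv=5,
    scoring="neg_mean_absolute_error").mean()
print(f"5-fold CV MAE: {cv_mae:.3f} eV")

# Final, single evaluation on the held-out test set.
model.fit(X_train, y_train)
print(f"Test MAE: {mean_absolute_error(y_test, model.predict(X_test)):.3f} eV")
```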

In a review in this issue of Annals, Kueper et al have uncovered one such zone at the interface between computer science and primary care by describing a collection of research that has been hiding in plain sight since 1986. By connecting 2 disciplines, these 405 articles constitute an area of focus—primary care artificial intelligence—that may be new to primary care researchers but has already generated an impressive compilation.

Despite this body of work, primary care artificial intelligence has failed to transform primary care due to a lack of engagement from the primary care community. Similar to health information technology, primary care artificial intelligence should aim to improve care delivery and health outcomes; using this benchmark, it has yet to make an impact. Even though its history spans 4 decades, primary care artificial intelligence remains in the “early stages of maturity” because few tools have been implemented. Changing primary care is difficult when only 1 out of every 7 of these papers includes a primary care author.2 Without input from primary care, these teams may fail to grasp the context of primary care data collection, its role within the health system, and the forces shaping its evolution.

In this work, we aim to update the understanding of how impurity or promoter metals segregate on metal surfaces, particularly for the application of single-atom alloys (SAAs) in catalysis. Using density functional theory, we calculated the stability of the idealized SAA relative to subsurface, dimer, and adatom configurations to determine the tendency of the promoter atom to diffuse into the bulk, form surface clusters, or avoid alloying with the host, respectively. We selected 26 d-block metals augmented with Al and Pb to create a 28 × 28 database, which indicates a total of 250 combinations for which the SAA configuration is most stable and an additional 358 systems for which the SAA geometry is within 0.5 eV of the most stable configuration. We classified the data using decision tree, support vector machine, and neural network machine learning algorithms with tabulated atomic properties as the input vector. These black box approaches are unable to extrapolate to other possible geometries, a limitation we circumvented by recasting the stability problem as a regression. We propose a physical bond-counting model to formulate intuitive criteria for the formation of stable SAAs. Its accuracy is then improved by combining the bonding configuration and tabulated atomic properties in a kernel ridge regression (KRR) algorithm. The hybrid KRR model correctly identifies 190 SAAs, with 85 false positives. Importantly, its physical basis allows the hybrid model to extend to similar geometries not included in the training data, thereby expanding the domain in which the model is useful.
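
To illustrate the regression step, the following is a minimal kernel ridge regression sketch in scikit-learn; the descriptors and target energies are random placeholders standing in for the paper's bond counts, tabulated atomic properties, and DFT-computed relative stabilities.

```python
# Minimal kernel ridge regression sketch for predicting a stability
# energy from tabulated descriptors. Features and targets are random
# placeholders, not the paper's actual descriptors or DFT energies.
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(784, 6))   # e.g., 28 x 28 host/dopant pairs, 6 descriptors
y = X @ rng.normal(size=6) + 0.1 * rng.normal(size=784)  # stand-in energies (eV)

# Standardize descriptors, then fit KRR with an RBF kernel;
# grid-search the regularization strength and kernel width.
krr = make_pipeline(
    StandardScaler(),
    GridSearchCV(
        KernelRidge(kernel="rbf"),
        param_grid={"alpha": [1e-3, 1e-2, 1e-1],
                    "gamma": [0.01, 0.1, 1.0]},
        cv=5,
    ),
)
krr.fit(X, y)

# Predicted relative stability; an SAA could then be classified as
# stable if its predicted energy falls below a chosen threshold.
print(krr.predict(X[:5]))
```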

According to futurists, the artificial intelligence (AI) revolution in health care is here. While trending now, the concept is not new: it was first introduced 70 years ago, when Alan Turing described “thinking machines.” John McCarthy later coined the term “AI” to denote the idea of getting a computer to do things which, when done by people, are said to involve intelligence. What is new is the digitization of everything from electronic health records (EHRs) to genes and microbiomes, which provides the data that AI needs to learn. This conversion of images, handwritten notes, and pathology slides into 1s and 0s allows machines to perform a wide range of tasks, such as detecting retinopathy, skin cancer, and lung nodules. Even though this surge of available data exceeds what individuals and teams can realistically manage, computers have learned how to process these data to predict outcomes important to our patients, including opioid misuse, emergency department visits, and deaths. Advances like these led Andy Conrad, the CEO of Google’s life sciences subsidiary, to declare that in medicine, “the most important tool is the computer.”10

Previous studies of brain-computer interfaces (BCIs) based on scalp electroencephalography (EEG) have demonstrated the feasibility of decoding lower limb kinematics during walking. In this computational study, we investigated offline decoding with different models and conditions to assess how they influence the performance and stability of the decoder. Specifically, we conducted three computational decoding experiments that investigated decoding accuracy: (1) based on delta band time-domain features, (2) when downsampling data, and (3) using features from different frequency bands. In each experiment, eight decoder algorithms were compared, including the current state of the art. Different tap sizes (sample window sizes) were also evaluated to assess real-time applicability. A feature importance analysis was conducted to ascertain which features were most relevant for decoding; moreover, stability under perturbations was assessed to quantify the robustness of the methods. Results indicated that the Gated Recurrent Unit (GRU) and Quasi-Recurrent Neural Network (QRNN) generally outperformed other methods in terms of decoding accuracy and stability. The previous state-of-the-art Unscented Kalman Filter (UKF) still outperformed other decoders when using smaller tap sizes, converging quickly in performance, but at the cost of vulnerability to noise. Downsampling and the inclusion of additional frequency band features yielded overall improvements in performance. The results suggest that neural network-based decoders with downsampling or a wide range of frequency band features could improve not only decoder performance but also robustness, with applications for the stable use of BCIs.
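
As a hedged sketch of a GRU-style kinematics decoder of the kind evaluated here, the snippet below maps a sliding window (“tap”) of EEG features to joint kinematics in PyTorch; all dimensions and hyperparameters are illustrative assumptions, not the study's configuration.

```python
# Minimal GRU decoder sketch: maps a sliding window ("tap") of EEG
# features to lower-limb joint kinematics. All dimensions below are
# illustrative, not the study's actual configuration.
import torch
import torch.nn as nn

class GRUDecoder(nn.Module):
    def __init__(self, n_features=60, hidden=128, n_outputs=6):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_outputs)

    def forward(self, x):             # x: (batch, tap_size, n_features)
        out, _ = self.gru(x)
        return self.head(out[:, -1])  # kinematics at the window's end

decoder = GRUDecoder()
window = torch.randn(32, 10, 60)                   # batch of 10-sample taps
pred = decoder(window)                             # (32, 6) predicted kinematics
loss = nn.MSELoss()(pred, torch.zeros_like(pred))  # placeholder target
loss.backward()
```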

Users connect to web applications for a variety of content and services. Beyond serving a global user/customer base over the internet, web applications also serve most business needs in today's enterprises. Application performance monitoring, testing, and maintenance require considerable effort when serving customers and business needs. Ensuring a high level of user experience, isolating network issues from application service performance, and providing a seamless experience independent of the end user's proximity to the application server are some of the challenges faced in performance assurance efforts. In this respect, a reliable testing and monitoring mechanism for application performance is essential to business continuity. Furthermore, virtualization technologies, along with extensive mobility, add a tremendous challenge to collecting reliable and isolated data when conducting performance monitoring for any application. In this paper, we present the results of a comprehensive network traffic analysis conducted in a geographically distributed manner for over 15 representative web applications. We report that web applications utilize a wide range of bandwidth capacity, depending on the location of dynamic content and the time of day at which the data are retrieved. These variations create inconsistent performance levels in their network usage metrics. Our reported metrics, such as the duration of a complete web data transfer and bandwidth utilization, can help enterprises, network engineers, and service providers fine-tune their services. Our approach does not capture sensitive user information, allows for extensive configuration and customization of the framework, and finally, the data capture processes have a built-in sharing mechanism enabling repeatable network experiments through a data collection description language.
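
As a minimal sketch of how the two headline metrics, complete transfer duration and effective bandwidth, could be probed for a single URL (using Python's requests library; the URL is a placeholder, and this stands in for one probe rather than reproducing the framework or its data collection description language):

```python
# Minimal sketch: measure complete transfer duration and effective
# bandwidth for one web resource. The URL is a placeholder; a real
# deployment would run such probes from geographically distributed hosts.
import time
import requests

url = "https://example.com/"
start = time.perf_counter()
resp = requests.get(url, timeout=30)
duration = time.perf_counter() - start         # seconds, full transfer
size_bits = len(resp.content) * 8
bandwidth_mbps = size_bits / duration / 1e6    # effective Mbit/s

print(f"{url}: {duration:.3f} s, {bandwidth_mbps:.2f} Mbit/s")
```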