Publications

This letter focuses on the 3D path-following of a spiral-type helical magnetic swimmer in a water-filled workspace. The swimmer has a diameter of 2.5 mm and a length of 6 mm, and is controlled by an external time-varying magnetic field. A method to compensate for undesired magnetic gradient forces is proposed and tested. Five swimmer designs with different thread pitch values were experimentally analyzed, all controlled by the same model reference adaptive controller (MRAC). Compared with a conventional hand-tuned PI controller, MRAC significantly improves their 3D path-following performance. At an average speed of 50 mm/s, the mean path-following error under MRAC is 3.8 ± 1.8 mm, less than one body length of the swimmer. The versatility of this new controller is demonstrated by analyzing path-following through obstacles, along a helical trajectory, and in forward and backward motion.
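For readers unfamiliar with MRAC, the sketch below shows the core idea on a deliberately simple problem: an adaptive gain is adjusted online so that a plant with an unknown parameter tracks a reference model. The first-order plant, reference model, and adaptation rate are hypothetical teaching values, not the swimmer dynamics or the controller from the letter.

```python
# Minimal MIT-rule MRAC sketch on the classic unknown-gain problem; all
# constants here are hypothetical, not taken from the paper.
dt, T = 1e-3, 20.0
k_true = 2.0        # unknown plant gain in x' = -x + k_true * u
x, xm = 0.0, 0.0    # plant state and reference-model state
theta = 0.0         # adaptive feedforward gain, ideally -> 1 / k_true
gamma = 0.5         # adaptation rate

for step in range(int(T / dt)):
    t = step * dt
    r = 1.0 if (t % 4.0) < 2.0 else -1.0   # square-wave reference input
    u = theta * r                          # control law: u = theta * r
    e = x - xm                             # model-tracking error
    theta += -gamma * e * xm * dt          # MIT rule: dtheta/dt = -gamma*e*ym
    x += (-x + k_true * u) * dt            # plant, forward-Euler step
    xm += (-xm + r) * dt                   # reference model x_m' = -x_m + r

print(f"adapted gain theta = {theta:.3f} (ideal {1 / k_true:.3f})")
```

Running this, theta converges toward 0.5 so that the unknown-gain plant matches the reference model's response; the paper applies the same model-tracking principle to the far richer magnetic swimmer dynamics.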

Artificial intelligence (AI), for the purpose of this review, is an umbrella term for technologies emulating a nephropathologist’s ability to extract information on diagnosis, prognosis, and therapy responsiveness from native or transplant kidney biopsies. Although AI can be used to analyze a wide variety of biopsy-related data, this review focuses on whole slide images traditionally used in nephropathology. AI applications in nephropathology have recently become available through several advancing technologies, including (i) the widespread introduction of glass slide scanners, (ii) data servers in pathology departments worldwide, and (iii) greatly improved computer hardware that enables AI training. In this review, we explain how AI can enhance the reproducibility of nephropathology results for certain parameters in the context of precision medicine, using advanced architectures such as convolutional neural networks that are currently the state of the art in machine learning software for this task. Because AI applications in nephropathology are still in their infancy, we illustrate the power and potential of AI applications mostly with examples from oncopathology. Moreover, we discuss the technological obstacles as well as the current stakeholder and regulatory concerns about developing AI applications in nephropathology from the perspective of nephropathologists and the wider nephrology community. We expect the gradual introduction of these technologies into routine diagnostics and research for selective tasks, suggesting that this technology will enhance the performance of nephropathologists rather than making them redundant.
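As a concrete illustration of the kind of convolutional architecture mentioned above, the sketch below defines a small CNN that classifies patches cropped from a whole slide image. The layer sizes, 224×224 patch size, and two-class output are hypothetical placeholders, not a published nephropathology model.

```python
import torch
import torch.nn as nn

# Toy CNN patch classifier for whole-slide-image tiles; the architecture
# and dimensions are illustrative assumptions only.
class PatchClassifier(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # global average pooling
        )
        self.head = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

model = PatchClassifier()
patches = torch.randn(8, 3, 224, 224)  # a batch of 224x224 RGB WSI patches
logits = model(patches)                # (8, num_classes) class scores
print(logits.shape)
```

In practice a slide is tiled into thousands of such patches and the patch-level predictions are aggregated into a slide-level result, which is where reproducibility gains over manual scoring arise.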

Human emotion represents a complex neural process within the brain. The ability to automatically recognize emotions from physiological signals has the potential to impact humanity in multiple ways through applications in human-machine interaction, remote health monitoring, smart living environments, and entertainment. We present a marked point process-based Bayesian filtering approach to track sympathetic arousal from skin conductance features. The rate at which individual skin conductance responses (SCRs) occur and their respective amplitudes encode important information regarding an individual's psychological arousal level. We develop a state-space model relating a latent neuropsychological arousal state to the rate at which neural impulses underlying SCRs occur and to the impulse amplitudes. We simultaneously estimate both arousal and the state-space model parameters within an expectation-maximization framework. We evaluate our method on both simulated and experimental data. Results on simulated data indicate the method's ability to accurately estimate an unobserved state from marked point process observations. The experimental data include studies involving mental stress artificially induced in laboratory environments, real-world driver stress, and Pavlovian fear conditioning. On experimental data, our method outperforms previous Bayesian filtering approaches, with lower sensitivity to small impulses and less overfitting. The filter is thus able to estimate emotional arousal from skin conductance features and is a promising approach for everyday emotion recognition applications.
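To make the state-space structure concrete, here is a heavily simplified sketch of this family of filters: a latent random-walk arousal state modulates both the probability that an SCR occurs in each time bin and the SCR amplitude (the mark), and an approximate Gaussian posterior is updated bin by bin. All parameter values and the simulation are illustrative placeholders, not the model fitted in the paper.

```python
import numpy as np

# Simplified marked-point-process filter: latent arousal x_k drives SCR
# occurrence probability and SCR amplitude. Parameters are hypothetical.
rng = np.random.default_rng(0)
K = 2000
sw2 = 1e-3                    # process-noise variance of the random walk
b0, b1 = -4.0, 1.0            # event model: p_k = sigmoid(b0 + b1 * x_k)
g0, g1, sm2 = 1.0, 0.8, 0.25  # mark model: m_k ~ N(g0 + g1 * x_k, sm2)

# Simulate latent arousal, binary SCR events, and amplitudes.
x = np.cumsum(rng.normal(0, np.sqrt(sw2), K))
p = 1 / (1 + np.exp(-(b0 + b1 * x)))
n = rng.random(K) < p
m = g0 + g1 * x + rng.normal(0, np.sqrt(sm2), K)  # used only when n[k]

# Approximate posterior-mode filter: one Gaussian update per bin.
xf, vf = 0.0, 1.0
est = np.empty(K)
for k in range(K):
    xp, vp = xf, vf + sw2                       # predict step
    pk = 1 / (1 + np.exp(-(b0 + b1 * xp)))
    info = 1 / vp + b1**2 * pk * (1 - pk)       # information from event obs.
    score = b1 * (n[k] - pk)
    if n[k]:                                    # mark observed only on events
        info += g1**2 / sm2
        score += g1 * (m[k] - (g0 + g1 * xp)) / sm2
    vf = 1 / info
    xf = xp + vf * score                        # move toward posterior mode
    est[k] = xf

print(f"corr(true, filtered) = {np.corrcoef(x, est)[0, 1]:.3f}")
```

The paper additionally re-estimates the model parameters (here fixed) in the M-step of an expectation-maximization loop.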

According to futurists, the artificial intelligence (AI) revolution in health care is here. While trending now, the concept is not new: it was first introduced 70 years ago, when Alan Turing described “thinking machines.” John McCarthy later coined the term “AI” to denote the idea of getting a computer to do things which, when done by people, are said to involve intelligence. What is new is the digitization of everything from electronic health records (EHRs) to genes and microbiomes, which provides the data that AI needs to learn. This conversion of images, handwritten notes, and pathology slides into 1’s and 0’s allows machines to perform a wide range of tasks, such as detecting retinopathy, skin cancer, and lung nodules. Even though this surge of available data exceeds what individuals and teams can realistically manage, computers have learned how to process these data to predict outcomes important to our patients, including opioid misuse, emergency department visits, and deaths. Advances like these led Andy Conrad, the CEO of Google’s life sciences subsidiary, to declare that in medicine, “the most important tool is the computer.”

Previous studies of Brain Computer Interfaces (BCIs) based on scalp electroencephalography (EEG) have demonstrated the feasibility of decoding kinematics for lower limb movements during walking. In this computational study, we investigated offline decoding analysis with different models and conditions to assess how they influence the performance and stability of the decoder. Specifically, we conducted three computational decoding experiments that investigated decoding accuracy (1) using delta-band time-domain features, (2) when downsampling the data, and (3) using features from different frequency bands. In each experiment, eight decoder algorithms were compared, including the current state of the art. Different tap sizes (sample window sizes) were also evaluated to assess real-time applicability. A feature-importance analysis was conducted to ascertain which features were most relevant for decoding; moreover, stability under perturbations was assessed to quantify the robustness of the methods. Results indicated that the Gated Recurrent Unit (GRU) and Quasi Recurrent Neural Network (QRNN) generally outperformed the other methods in terms of decoding accuracy and stability. The previous state-of-the-art Unscented Kalman Filter (UKF) still outperformed the other decoders at smaller tap sizes, with fast convergence in performance, but at the cost of greater vulnerability to noise. Downsampling and the inclusion of additional frequency band features yielded overall improvements in performance. The results suggest that neural network-based decoders with downsampling or a wide range of frequency band features can improve not only decoder performance but also robustness, with applications for the stable use of BCIs.
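The sketch below shows the shape of a GRU decoder of the kind compared in the study: a window of EEG band-power features (taps) is fed through a recurrent layer, and a linear readout predicts the kinematic variables. The feature count, tap size, hidden width, and output dimension are hypothetical placeholders, not the study's configuration.

```python
import torch
import torch.nn as nn

# Toy GRU decoder: EEG feature windows -> lower-limb kinematics.
# All dimensions are illustrative assumptions.
class GRUDecoder(nn.Module):
    def __init__(self, n_features: int = 60, hidden: int = 64,
                 n_outputs: int = 6):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.readout = nn.Linear(hidden, n_outputs)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, taps, n_features); decode from the last time step.
        h, _ = self.gru(x)
        return self.readout(h[:, -1, :])

decoder = GRUDecoder()
window = torch.randn(32, 10, 60)   # 32 windows, 10 taps, 60 features each
kinematics = decoder(window)       # (32, 6) predicted joint kinematics
print(kinematics.shape)
```

The tap size corresponds to the second dimension of the input window, which is the knob the study varies when assessing real-time applicability.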

Users connect to web applications for various content and services. In addition to serving the global user/customer base over the internet, web applications also serve most business needs in today's enterprises. Monitoring, testing, and maintaining application performance requires considerable effort when serving customers and business needs. Ensuring a high level of user experience, isolating network issues from application service performance, and providing a seamless experience independent of the end user's proximity to the application server are some of the challenges faced in performance assurance efforts. In this respect, a reliable testing and monitoring mechanism for application performance is essential to business continuity. Furthermore, virtualization technologies, along with extensive mobility, add tremendous challenges to collecting reliable and isolated data for performance monitoring of any application. In this paper, we present the results of a comprehensive network traffic analysis conducted in a geographically distributed manner for over 15 representative web applications. We report that web applications utilize a wide range of bandwidth capacity depending on the location of the dynamic content and the time of day at which the data are retrieved. These variations create inconsistent performance levels in network usage metrics. Our reported metrics, such as the duration of a complete web data transfer and bandwidth utilization, can help enterprises, network engineers, and service providers fine-tune their services. Our approach does not capture sensitive user information, allows for extensive configuration and customization of the framework, and its data capture processes have a built-in sharing mechanism for repeatable network experiments through a data collection description language.
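As a minimal illustration of the two reported metrics, the sketch below times one complete web data transfer and derives effective throughput from it. The URL is a placeholder, and this is only the measurement kernel; the paper's framework adds distributed vantage points, configuration, and data sharing on top of measurements like this.

```python
import time
import urllib.request

# Time a complete HTTP transfer and compute effective throughput.
# The target URL is a placeholder, not one of the paper's 15 applications.
url = "https://example.com/"

start = time.perf_counter()
with urllib.request.urlopen(url, timeout=10) as resp:
    body = resp.read()                       # complete web data transfer
elapsed = time.perf_counter() - start        # transfer duration metric

throughput_kbps = len(body) * 8 / elapsed / 1e3  # bandwidth utilization metric
print(f"{url}: {len(body)} bytes in {elapsed:.3f} s "
      f"({throughput_kbps:.1f} kbit/s)")
```

Repeating this measurement from different locations and at different times of day is what exposes the bandwidth variations the paper reports.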

Nuclear infrastructure systems play an important role in national security. The functions and missions of nuclear infrastructure systems are vital to government, businesses, society, and citizens' lives. It is crucial to design nuclear infrastructure for scalability, reliability, and robustness. To do this, we can use machine learning, a state-of-the-art technology applied in fields ranging from voice recognition to Internet of Things (IoT) device management and autonomous vehicles. In this paper, we propose to design and develop a machine learning algorithm to perform predictive maintenance of nuclear infrastructure. Support vector machine and logistic regression algorithms are used to perform the prediction. These machine learning techniques have been used to explore and compare rare events that could occur in nuclear infrastructure. According to our literature review, support vector machines provide better performance metrics. In this paper, we have performed parameter optimization for both algorithms. Existing research has been conducted under conditions with large volumes of data, but this paper presents a novel approach to correlating nuclear infrastructure data samples where the probability density is very low. This paper also identifies the respective motivations for, and distinguishes between the benefits and drawbacks of, the selected machine learning algorithms.
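The sketch below shows the general shape of such a comparison: both classifiers are tuned via cross-validated grid search on an imbalanced (rare-event) dataset and then scored on a held-out set. The synthetic data, parameter grids, and F1 scoring are illustrative assumptions, not the paper's nuclear infrastructure data or its tuning protocol.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

# Synthetic rare-event data (~3% positives) standing in for real telemetry.
X, y = make_classification(n_samples=2000, n_features=20, weights=[0.97],
                           random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y, random_state=0)

# Parameter optimization for both algorithms via cross-validated grid search.
models = {
    "svm": GridSearchCV(SVC(class_weight="balanced"),
                        {"C": [0.1, 1, 10], "gamma": ["scale", 0.01]},
                        scoring="f1"),
    "logreg": GridSearchCV(LogisticRegression(class_weight="balanced",
                                              max_iter=1000),
                           {"C": [0.1, 1, 10]}, scoring="f1"),
}
for name, model in models.items():
    model.fit(Xtr, ytr)
    print(name, model.best_params_,
          f"held-out F1: {model.score(Xte, yte):.3f}")
```

Class weighting and an imbalance-aware metric such as F1 matter here because, with very low-probability failure events, plain accuracy would reward a model that never predicts a failure.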

This book reviews the state of the art in algorithmic approaches addressing the practical challenges that arise in hyperspectral image analysis tasks, with a focus on emerging trends in machine learning and image processing/understanding. It presents advances in deep learning, multiple instance learning, sparse representation-based learning, low-dimensional manifold models, anomalous change detection, target recognition, sensor fusion, and super-resolution for robust multispectral and hyperspectral image understanding, drawing on research from leading international experts who have made foundational contributions in these areas. The book covers a diverse array of applications of multispectral/hyperspectral imagery in the context of these algorithms, including remote sensing, face recognition, and biomedicine. It will be particularly beneficial to graduate students and researchers taking advanced courses in, or working in, the areas of image analysis, machine learning, and remote sensing with multi-channel optical imagery. Researchers and professionals in academia and industry who work with multi-channel optical data in areas such as electrical engineering, civil and environmental engineering, geosciences, and biomedical image processing will also find this book useful.

This book discusses recent trends and developments in the fields of information processing and information visualization. In view of the increasing amount of data, there is a need to develop visualization techniques that make such data easily understandable. Presenting such approaches from various disciplines, this book serves as a useful resource for graduate students.

Automatic and accurate classification of apoptosis, or programmed cell death, will facilitate cell biology research. State-of-the-art approaches to apoptosis classification use deep convolutional neural networks (CNNs). However, these networks are not efficient at encoding part-whole relationships and thus require a large number of training samples to achieve robust generalization. This paper proposes an efficient variant of capsule networks (CapsNets) as an alternative to CNNs. Extensive experimental results demonstrate that the proposed CapsNets achieve competitive performance in target cell apoptosis classification while significantly outperforming CNNs when the number of training samples is small. To utilize temporal information within microscopy videos, we propose a recurrent capsule network constructed by stacking a CapsNet and a bi-directional long short-term memory (LSTM) structure. Our experiments show that, when temporal constraints are considered, the recurrent capsule network achieves 93.8% accuracy and makes significantly more consistent predictions than CNNs.
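To illustrate what distinguishes capsules from ordinary CNN units, the sketch below implements the two generic CapsNet ingredients: the squash nonlinearity and dynamic routing by agreement between capsule layers. The capsule counts and dimensions are hypothetical, and the paper's full recurrent model additionally wraps a convolutional front-end and the bi-directional recurrent structure around layers like this.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def squash(s: torch.Tensor, dim: int = -1) -> torch.Tensor:
    # Shrinks a capsule vector's length into (0, 1), preserving orientation.
    norm2 = (s ** 2).sum(dim=dim, keepdim=True)
    return (norm2 / (1 + norm2)) * s / (norm2.sqrt() + 1e-8)

class CapsuleLayer(nn.Module):
    """Generic capsule layer with dynamic routing; sizes are illustrative."""
    def __init__(self, in_caps=32, in_dim=8, out_caps=2, out_dim=16, iters=3):
        super().__init__()
        self.iters = iters
        # One linear transform per (input capsule, output capsule) pair.
        self.W = nn.Parameter(0.05 * torch.randn(in_caps, out_caps,
                                                 out_dim, in_dim))

    def forward(self, u: torch.Tensor) -> torch.Tensor:
        # u: (batch, in_caps, in_dim) -> predictions (batch, in, out, out_dim)
        u_hat = torch.einsum("iodk,bik->biod", self.W, u)
        b = torch.zeros(u.size(0), u_hat.size(1), u_hat.size(2),
                        device=u.device)
        for _ in range(self.iters):                 # routing by agreement
            c = F.softmax(b, dim=2).unsqueeze(-1)   # coupling coefficients
            v = squash((c * u_hat).sum(dim=1))      # (batch, out_caps, dim)
            b = b + (u_hat * v.unsqueeze(1)).sum(-1)
        return v

caps = CapsuleLayer()
primary = torch.randn(4, 32, 8)   # 4 samples, 32 primary capsules of dim 8
out = caps(primary)               # (4, 2, 16): one output capsule per class
print(out.norm(dim=-1))           # capsule lengths act as class confidences
```

Because routing assigns each lower-level capsule's output to the higher-level capsule it agrees with, part-whole relationships are encoded explicitly, which is the property the abstract credits for CapsNets' better performance with few training samples.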