Publications

Detailed plans for integrating technology into the academic environment, using examples from a variety of institutions. Includes worksheets, handouts, and further reading.

Genetically engineered T cells that express chimeric antigen receptors (CAR+ T cells) are heterogeneous, and thus understanding their immunotherapeutic efficacy remains a challenge in adoptive cell therapy. We developed a high-throughput single-cell methodology, Timelapse Imaging Microscopy In Nanowell Grids (TIMING), to monitor interactions between immune cells and tumor cells in vitro. Using TIMING, we demonstrated that CD4+ CAR+ T cells participate in multi-killing and benefit from improved resistance to activation-induced cell death in comparison to CD8+ CAR+ T cells. For both subsets of cells, effector cell fate at the single-cell level was dependent on functional activation through multiple tumor cells.

Industrial Control System (ICS) networks face novel challenges in risk management, feature agility, and deployment flexibility. Essential hardware control systems may have a lifetime of decades, while the need for business features and the network security landscape evolve on a daily basis. Even the mix of common protocols for network connectivity is likely to undergo significant market disruption over the 50+ year lifetime of a large industrial complex. Given this reality, the University of Houston Networking Lab [4] has embarked upon an effort, facilitated by the Department of Energy CREDC [2] program, to decouple the long development cycles of hardened industrial equipment from the ever-changing realities of both the local and wide-area networks they must use to transport essential sensor data and control messages.

Research on political representation has traditionally focused on the design of electoral systems. Yet there is evidence that voting costs result in lower turnout and undermine voters’ confidence in the electoral system. Election administrators can selectively manipulate participation costs for different individuals and groups, leading to biased electoral outcomes. Quantifying the costs of voting and designing fair, transparent and efficient rules for voter assignment to polling stations are important for theoretical and practical reasons. Using analytical models, we quantify the differential costs of participation faced by voters, which we measure in terms of distance to polling stations and wait times to cast a vote. To estimate the model parameters, we use real-world data on the 2013 midterm elections in Argentina. The assignment produced by our model cut average voting time by more than 27%, underscoring the inefficiencies of the current method of alphabetical assignment. Our strategy generates better estimates of the role of geographical and temporal conditions on electoral outcomes.
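
The assignment idea behind the model can be sketched as a minimum-cost matching: replicate each polling station once per capacity slot, then solve a one-to-one assignment that minimizes total voter-to-station distance. Everything below (coordinates, capacities, the round-robin stand-in for alphabetical assignment) is invented for illustration; the paper's model also accounts for wait times.

```python
# Toy capacity-constrained voter assignment by minimum-cost matching.
# All coordinates and sizes are illustrative, not from the paper.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
n_voters, n_stations, capacity = 12, 3, 4      # capacity * n_stations >= n_voters

voters = rng.uniform(0, 10, size=(n_voters, 2))      # voter coordinates (km)
stations = rng.uniform(0, 10, size=(n_stations, 2))  # station coordinates (km)

# Pairwise Euclidean distances; replicate each station once per capacity slot
# so the one-to-one Hungarian solver respects station capacities.
dist = np.linalg.norm(voters[:, None, :] - stations[None, :, :], axis=2)
cost = np.repeat(dist, capacity, axis=1)             # (n_voters, n_stations*capacity)

rows, cols = linear_sum_assignment(cost)
station_of = cols // capacity                        # map slot index back to station

# Compare against a naive fixed-order assignment (a stand-in for the
# alphabetical rule whose inefficiency the paper quantifies).
naive = np.arange(n_voters) % n_stations
print(f"naive total distance:     {dist[np.arange(n_voters), naive].sum():.1f} km")
print(f"optimized total distance: {dist[rows, station_of].sum():.1f} km")
```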

Accurate and cost-effective seizure severity tracking is an important step towards limiting the negative effects of seizures in epileptic patients. Electroencephalography (EEG) is employed as a means to track seizures due to its high temporal resolution. In this research, seizure state detection was performed using a mixed-filter approach to reduce the number of channels. We first found two optimized EEG features (one binary, one continuous) using wrapper feature selection. This feature selection process reduces the number of required EEG channels to two, making the process more practical and cost-effective. These continuous and binary observations were used in a state-space framework which allows us to model the continuous hidden seizure severity state. Expectation maximization was employed offline on the training and validation datasets to estimate unknown parameters. The estimated model parameters were used for real-time seizure state tracking. A classifier was then used to binarize the continuous seizure state. Our results on the experimental data (CHB-MIT EEG database) validate the accuracy of our proposed method and illustrate that the average accuracy, sensitivity, and false positive rate are 85.8%, 91.5%, and 14.3%, respectively. This type of seizure state modeling could be used in further implementation of adaptive closed-loop vagus nerve stimulation applications.
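
The state-space idea can be sketched with a one-dimensional linear-Gaussian (Kalman) filter over a single simulated continuous feature, followed by a threshold classifier on the filtered state. The paper's mixed filter additionally fuses the binary observation and learns its parameters via expectation maximization; every number below is made up for illustration.

```python
# Minimal 1-D Kalman filter tracking a hidden "seizure severity" state x_t
# from one continuous EEG feature y_t. Parameters are illustrative; the
# paper estimates them with EM and also fuses a binary feature.
import numpy as np

rng = np.random.default_rng(1)

# Model: x_t = a*x_{t-1} + w_t, w_t ~ N(0, q);  y_t = x_t + v_t, v_t ~ N(0, r)
a, q, r, T = 0.98, 0.05, 0.5, 200

# Simulate a severity trace that rises mid-recording (a crude "seizure").
x_true = np.zeros(T)
for t in range(1, T):
    drive = 0.08 if 80 <= t < 130 else 0.0       # seizure onset/offset drive
    x_true[t] = a * x_true[t - 1] + drive + rng.normal(0, np.sqrt(q))
y = x_true + rng.normal(0, np.sqrt(r), size=T)   # noisy continuous feature

# Kalman predict/update recursion over the recording.
x_hat, p = 0.0, 1.0
x_filt = np.zeros(T)
for t in range(T):
    x_pred, p_pred = a * x_hat, a * a * p + q    # predict
    k_gain = p_pred / (p_pred + r)               # Kalman gain
    x_hat = x_pred + k_gain * (y[t] - x_pred)    # update with observation
    p = (1 - k_gain) * p_pred
    x_filt[t] = x_hat

# Binarize the continuous severity state with a fixed-threshold classifier.
seizure_flag = x_filt > 0.5
print(f"flagged {seizure_flag.sum()} of {T} samples as seizure state")
```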

We present a detailed survey of the ongoing destabilization process of the Mosul dam. The dam is located on the Tigris river and is the largest hydraulic structure in Iraq. From a geological point of view, the dam foundation is poor due to a site geology formed by alternating strata of highly soluble materials including gypsum, anhydrite, marl, and limestone. Here we present the first multi-sensor cumulative deformation map for the dam, generated from space-based interferometric synthetic aperture radar measurements from the Italian constellation COSMO-SkyMed and the European sensor Sentinel-1a over the period 2014–2016, which we compare to an older dataset spanning 2004–2010 acquired with the European Envisat satellite. We found that deformation was rapid during 2004–2010, slowed in 2012–2014, and has increased since August 2014, when grouting operations stopped due to the temporary capture of the dam by the self-proclaimed Islamic State. We model the inferred deformation using a Markov chain Monte Carlo approach to solve for the change in volume of simple tensile dislocations. Results from recent and historical geodetic datasets suggest that the volume dissolution rate remains constant when the equivalent volume of total concrete injected during re-grouting operations is included in the calculations.
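
The inversion step can be illustrated with a bare-bones Metropolis-Hastings random walk over a single volume-change parameter. The linear forward model, gain, and noise level below are stand-ins chosen for the sketch; the study itself inverts tensile-dislocation models of the InSAR-derived deformation.

```python
# Generic Metropolis-Hastings sampler for one "volume change" parameter dV.
# The forward model is a stand-in linear map from dV to line-of-sight
# displacement; the study uses tensile-dislocation forward models instead.
import numpy as np

rng = np.random.default_rng(2)

dv_true, gain, sigma = -2.0e5, 1.0e-7, 0.002          # m^3, m per m^3, m
obs = gain * dv_true + rng.normal(0, sigma, size=50)  # synthetic observations

def log_likelihood(dv):
    resid = obs - gain * dv
    return -0.5 * np.sum((resid / sigma) ** 2)

samples, dv, ll = [], 0.0, log_likelihood(0.0)
for _ in range(20_000):
    dv_prop = dv + rng.normal(0, 5e3)              # random-walk proposal
    ll_prop = log_likelihood(dv_prop)
    if np.log(rng.uniform()) < ll_prop - ll:       # Metropolis accept/reject
        dv, ll = dv_prop, ll_prop
    samples.append(dv)

post = np.array(samples[5_000:])                   # discard burn-in
print(f"posterior volume change: {post.mean():.3e} +/- {post.std():.1e} m^3")
```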

Criminological research has long been interested in understanding whether offenders specialize in particular types of crimes (e.g., violent crimes) or are generalists (i.e., commit a broad range of crime types across their lifespan). Several different methodological approaches have been proposed to study this issue. Yet no consensus exists on the best way to measure specialization in crime. We applied aggregate-level and individual-level methodologies to study offender specialization in at-risk young men, using self-report and official-records measures of their criminal behavior. The predominant pattern for the sample was the commission of a broad range of crime types across a 23-year span. Implications for theory and public policy are discussed.

Motivated by the increasing need to understand the algorithmic foundations of distributed large-scale graph computations, we study a number of fundamental graph problems in a message-passing model for distributed computing where k ≥ 2 machines jointly perform computations on graphs with n nodes (typically, n ≫ k). The input graph is assumed to be initially randomly partitioned among the k machines, a common implementation in many real-world systems. Communication is point-to-point, and the goal is to minimize the number of communication rounds of the computation. Our main result is an (almost) optimal distributed randomized algorithm for graph connectivity. Our algorithm runs in Õ(n/k²) rounds (the Õ notation hides a polylog(n) factor and an additive polylog(n) term). This improves over the best previously known bound of Õ(n/k) [Klauck et al., SODA 2015], and is optimal (up to a polylogarithmic factor) in view of an existing lower bound of Ω̃(n/k²). Our improved algorithm uses several techniques, including linear graph sketching, that prove useful in the design of efficient distributed graph algorithms. We then present fast randomized algorithms for computing minimum spanning trees, (approximate) min-cuts, and solutions to many graph verification problems. All these algorithms take Õ(n/k²) rounds and are optimal up to polylogarithmic factors. We also show an almost matching lower bound of Ω̃(n/k²) for many graph verification problems using lower bounds in random-partition communication complexity.
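
The cancellation property that makes linear graph sketching useful can be seen in a toy form: if every vertex stores the XOR of its incident edge identifiers, then XOR-ing those summaries over a vertex set cancels all internal edges (each counted twice) and, when exactly one cut edge exists, recovers it directly. The snippet below shows only this simplification, not the ℓ0-sampling sketches the actual algorithm relies on.

```python
# Toy illustration of the XOR-cancellation trick behind linear graph
# sketching: per-vertex summaries are linear (over GF(2)), so the summary
# of a vertex set S exposes its cut edges. Real sketches (e.g., AGM) add
# l0-sampling so they also work when S has many cut edges.

def edge_id(u, v, n):
    """Encode undirected edge {u, v} as a single integer."""
    u, v = min(u, v), max(u, v)
    return u * n + v

n = 6
edges = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4)]  # (2, 3): the only cut edge of S

# Per-vertex summary: XOR of incident edge IDs (a one-word linear sketch).
summary = [0] * n
for u, v in edges:
    summary[u] ^= edge_id(u, v, n)
    summary[v] ^= edge_id(u, v, n)

S = {0, 1, 2}                    # a candidate component
acc = 0
for u in S:
    acc ^= summary[u]            # edges inside S cancel pairwise

u, v = divmod(acc, n)            # decode the surviving edge ID
print(f"recovered cut edge: ({u}, {v})")   # -> (2, 3)
```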

Examining sea, land, and air transportation systems and linkages, Logistics and Transportation Security: A Strategic, Tactical, and Operational Guide to Resilience provides thorough coverage of transportation security. Its topics include hazardous material handling, securing transportation networks, logistics essentials, supply chain security, risk assessment, the regulatory framework, strategic planning, and innovation through technology.

Motivated by the increasing need for fast distributed processing of large-scale graphs such as the Web graph and various social networks, we study a message-passing distributed computing model for graph processing and present lower bounds and algorithms for several graph problems. This work is inspired by recent large-scale graph processing systems (e.g., Pregel and Giraph), which are designed based on the message-passing model of distributed computing.
Our model consists of a point-to-point communication network of k machines interconnected by bandwidth-restricted links. Communicating data between the machines is the costly operation (as opposed to local computation). The network is used to process an arbitrary n-node input graph (typically n ≫ k > 1) that is randomly partitioned among the k machines (a common implementation in many real-world systems). Our goal is to study fundamental complexity bounds for solving graph problems in this model.

We present techniques for obtaining lower bounds on the distributed time complexity. Our lower bounds develop and use new bounds in random-partition communication complexity. We first show a lower bound of Ω(n/k) rounds for computing a spanning tree (ST) of the input graph. This result also implies the same bound for other fundamental problems such as computing a minimum spanning tree (MST). We also show an Ω(n/k²) lower bound for connectivity, ST verification, and other related problems.
We give algorithms for various fundamental graph problems in our model. We show that problems such as PageRank, MST, connectivity, and graph covering can be solved in Õ(n/k) time, whereas for shortest paths, we present algorithms that run in Õ(n/√k) time (for a (1+ε)-factor approximation) and in Õ(n/k) time (for an O(log n)-factor approximation), respectively.
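
To make the model's input setup concrete, the short simulation below (with made-up sizes) assigns vertices to the k machines uniformly at random. Each machine ends up holding roughly n/k vertices, while about a (1 − 1/k) fraction of edges cross machine boundaries, which is why communication rounds, rather than local computation, dominate the cost.

```python
# Simulate the random vertex partition of the k-machine model: each of the
# n vertices goes to one of k machines uniformly at random, and an edge must
# be communicated whenever its endpoints land on different machines.
import random

random.seed(3)
n, k = 10_000, 10

# A sparse random edge sample standing in for the input graph.
edges = [(random.randrange(n), random.randrange(n)) for _ in range(5 * n)]

machine = [random.randrange(k) for _ in range(n)]   # random vertex partition

per_machine = [machine.count(m) for m in range(k)]
cut = sum(machine[u] != machine[v] for u, v in edges)

print(f"vertices per machine: min {min(per_machine)}, max {max(per_machine)} (n/k = {n // k})")
print(f"cross-machine edges: {cut}/{len(edges)} ({100 * cut / len(edges):.0f}%)")
```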