Electrical and Computer Engineering Conference Papers, Posters and Presentations
Algorithms for Asynchronous Coded Caching
<p>The original formulation of the coded caching problem assumes that the file requests from the users are synchronized, i.e., they arrive at the server at the same time. Several subsequent contributions work under the same assumption. Furthermore, the majority of prior work does not consider a scenario where users have deadlines. In our previous work, we formulated the asynchronous coded caching problem, where user requests arrive at different times and users have specified deadlines, and proposed a linear program (LP) for obtaining its optimal solution. However, the size of the LP (number of constraints and variables) grows rather quickly with the number of users and cache sizes. In this work, we explore a dual-decomposition-based approach for solving the LP under consideration. We demonstrate that the dual function can be evaluated by equivalently solving a number of minimum cost network flow problems. Minimum cost network flow problems have been the subject of much investigation, and current solvers routinely handle instances with millions of nodes in minutes. Our proposed approach leverages these fast solvers and allows us to solve several large-scale instances of the asynchronous coded caching problem with manageable time and memory complexity.</p>

https://lib.dr.iastate.edu/ece_conf/60
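
The decomposition idea above can be illustrated on a toy problem. The sketch below is NOT the paper's LP: it relaxes the single coupling constraint of a small box-constrained LP with a multiplier, so that the dual function decouples into per-user subproblems solved independently (standing in for the paper's independent min-cost network flow subproblems), and maximizes the dual by projected subgradient ascent.

```python
def dual_value(lam, c, lo, hi, D):
    """Evaluate g(lam) = min_x sum_i (c_i - lam)*x_i + lam*D over box constraints
    lo_i <= x_i <= hi_i, obtained by relaxing the coupling constraint
    sum_i x_i >= D of:  min sum_i c_i*x_i  s.t.  sum_i x_i >= D, x in box."""
    val = lam * D
    x = []
    for ci, li, ui in zip(c, lo, hi):
        xi = ui if ci - lam < 0 else li   # each subproblem solved in isolation
        val += (ci - lam) * xi
        x.append(xi)
    return val, x

def solve_dual(c, lo, hi, D, steps=200, step=0.05):
    """Projected subgradient ascent on the concave dual function g(lam)."""
    lam = 0.0
    for _ in range(steps):
        _, x = dual_value(lam, c, lo, hi, D)
        lam = max(0.0, lam + step * (D - sum(x)))  # D - sum(x) is a subgradient
    return lam, dual_value(lam, c, lo, hi, D)
```

For c = [1, 2], boxes [0, 5], and D = 6, the primal optimum is 7 (x = (5, 1)); by LP strong duality the dual ascent settles near the shadow price lam = 2 with dual value close to 7.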
Can an influence graph driven by outage data determine transmission line upgrades that mitigate cascading blackouts?
<p>We transform historically observed line outages in a power transmission network into an influence graph that statistically describes how cascades propagate in the power grid. The influence graph can predict the critical lines that are historically most involved in cascading propagation. Simulating the influence graph after upgrading these critical lines suggests that the upgrades could mitigate large blackouts by reducing the probability of large cascades.</p>

https://lib.dr.iastate.edu/ece_conf/51
Towards Incorporating Protection and Uncertainty into Cascading Failure Simulation and Analysis
<p>We advance the state-of-the-art in cascading failure simulation through an integrated modeling of system dynamics and protection coupled with data processing that analyzes the simulation results. An enhanced version of the TS3ph-CAPE simulator is used to produce the cascading data. The cascading data is processed to produce metrics describing the cascading size and risk, and to identify critical components contributing most to the cascading risk.</p>

https://lib.dr.iastate.edu/ece_conf/50
Scalable and Dynamic Regeneration of Big Data Volumes
<p>A core requirement of database engine testing is the ability to create synthetic versions of the customer’s data warehouse at the vendor site. A rich body of work exists on synthetic database regeneration, but suffers critical limitations with regard to: (a) maintaining statistical fidelity to the client’s query processing, and/or (b) scaling to large data volumes. In this paper, we present HYDRA, a workload-dependent database regenerator that leverages a declarative approach to data regeneration to assure volumetric similarity, a crucial aspect of statistical fidelity, and materially improves on the prior art by adding scale, dynamism and functionality. First, Hydra uses an optimized linear programming (LP) formulation based on a novel region-partitioning approach; this spatial strategy drastically reduces the LP complexity, enabling it to handle query workloads on which contemporary techniques fail. Second, Hydra incorporates deterministic post-LP processing algorithms that provide high efficiency and improved accuracy. Third, Hydra introduces the concept of dynamic regeneration by constructing a minuscule database summary that can on-the-fly regenerate databases of arbitrary size during query execution, while obeying volumetric specifications derived from the query workload. A detailed experimental evaluation on standard OLAP benchmarks demonstrates that Hydra can efficiently and dynamically regenerate large warehouses that accurately mimic the desired statistical characteristics.</p>

https://lib.dr.iastate.edu/ece_conf/52
Estimating quantiles from the union of historical and streaming data
<p>Modern enterprises generate huge amounts of streaming data, for example, micro-blog feeds, financial data, network monitoring and industrial application monitoring. While Data Stream Management Systems have proven successful in providing support for real-time alerting, many applications, such as network monitoring for intrusion detection and real-time bidding, require complex analytics over both historical and real-time streaming data. We present a new method to process one of the most fundamental analytical primitives, quantile queries, on the union of historical and streaming data. Our method combines an index on historical data with a memory-efficient sketch on streaming data to answer quantile queries with accuracy-resource tradeoffs that are significantly better than current solutions that are based solely on disk-resident indexes or solely on streaming algorithms.</p>

https://lib.dr.iastate.edu/ece_conf/44
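
The historical-plus-streaming split can be sketched in a few lines. This is a simplified stand-in, not the paper's method: the sorted array plays the role of the disk-resident index and the stream buffer is exact rather than a memory-bounded sketch, so the answers here are exact.

```python
import bisect
import math

class UnionQuantiles:
    """Answer quantile queries over the union of historical and streaming data
    by combining rank counts from a static sorted part and a live buffer."""

    def __init__(self, historical):
        self.hist = sorted(historical)   # built once, then read-only
        self.stream = []                 # kept sorted as items arrive

    def add(self, item):
        bisect.insort(self.stream, item)

    def _rank(self, v):
        # number of items <= v, counted separately in each part
        return (bisect.bisect_right(self.hist, v)
                + bisect.bisect_right(self.stream, v))

    def quantile(self, phi):
        """Smallest value in the union whose rank is at least ceil(phi * n)."""
        n = len(self.hist) + len(self.stream)
        target = max(1, math.ceil(phi * n))
        return min(v for v in self.hist + self.stream
                   if self._rank(v) >= target)
```

With historical data {1, 3, 5, 7, 9} and stream arrivals {2, 4, 6, 8, 10}, the median query quantile(0.5) over the union returns 5.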
Coded Caching for Networks with the Resolvability Property
<p>Coded caching is a recently proposed technique for dealing with large scale content distribution over the Internet. As in conventional caching, it leverages the presence of local caches at the end users. However, it considers coding in the caches and/or coded transmission from the central server and demonstrates that huge savings in transmission rate are possible when the server and the end users are connected via a single shared link. In this work, we consider a more general topology where there is a layer of relay nodes between the server and the users, e.g., combination networks studied in network coding are an instance of these networks. We propose novel schemes for a class of such networks that satisfy a so-called resolvability property and demonstrate that the performance of our scheme is strictly better than previously proposed schemes.</p>

https://lib.dr.iastate.edu/ece_conf/15
Blind polychromatic X-ray CT reconstruction from Poisson measurements
<p>We develop a sparse image reconstruction method for Poisson distributed polychromatic X-ray computed tomography (CT) measurements under the blind scenario where the material of the inspected object and the incident energy spectrum are unknown. We employ our mass-attenuation spectrum parameterization of the noiseless measurements for single-material objects and express the mass-attenuation spectrum as a linear combination of B-spline basis functions of order one. A block coordinate descent algorithm is developed for constrained minimization of a penalized Poisson negative log-likelihood (NLL) cost function, where constraints and penalty terms ensure nonnegativity of the spline coefficients and nonnegativity and sparsity of the density-map image; the image sparsity is imposed using a convex total-variation (TV) norm penalty term. This algorithm alternates between a Nesterov proximal-gradient (NPG) step for estimating the density-map image and a limited-memory Broyden-Fletcher-Goldfarb-Shanno with box constraints (L-BFGS-B) step for estimating the incident-spectrum parameters. We establish conditions for biconvexity of the penalized NLL objective function, which, if satisfied, ensures monotonicity of the NPG-BFGS iteration. We also show that the penalized NLL objective satisfies the Kurdyka-Łojasiewicz property, which is important for establishing local convergence of block-coordinate descent schemes in biconvex optimization problems. Simulation examples demonstrate the performance of the proposed scheme.</p>

https://lib.dr.iastate.edu/ece_conf/10
On computation rates for arithmetic sum
<p>For zero-error function computation over directed acyclic networks, existing upper and lower bounds on the computation capacity are known to be loose. In this work we consider the problem of computing the arithmetic sum over a specific directed acyclic network that is not a tree. We assume the sources to be i.i.d. Bernoulli with parameter 1/2. Even in this simple setting, we demonstrate that upper bounding the computation rate is quite nontrivial. In particular, it requires us to consider variable length network codes and relate the upper bound to equivalently lower bounding the entropy of descriptions observed by the terminal conditioned on the function value. This lower bound is obtained by further lower bounding the entropy of a so-called clumpy distribution. We also demonstrate an achievable scheme that uses variable length network codes and in-network compression.</p>

https://lib.dr.iastate.edu/ece_conf/11
Mean-Field-Analysis of Coding versus Replication in Cloud Storage Systems
<p>We study cloud-storage systems with a very large number of files stored in a very large number of servers. In such systems, files are either replicated or coded to ensure reliability, i.e., file recovery from server failures. This redundancy in storage can further be exploited to improve system performance (mean file access delay) through appropriate load-balancing (routing) schemes. However, it is unclear whether coding or replication is better from a system performance perspective since the corresponding queueing analysis of such systems is, in general, quite difficult except for the trivial case when the system load asymptotically tends to zero. Here, we study the more difficult case where the system load is not asymptotically zero. Using the fact that the system size is large, we obtain a mean-field limit for the steady-state distribution of the number of file access requests waiting at each server. We then use the mean-field limit to show that, for a given storage capacity per file, coding strictly outperforms replication at all traffic loads while improving reliability. Further, the factor by which the performance improves in the heavy-traffic regime is at least as large as in the light-traffic regime. Finally, we validate these results through extensive simulations.</p>

https://lib.dr.iastate.edu/ece_conf/16
Coded Caching with Low Subpacketization Levels
<p>Caching is a popular technique in content delivery networks that allows for reductions in transmission rates from the content-hosting server to the end users. Coded caching is a generalization of conventional caching that considers the possibility of coding in the caches and transmitting coded signals from the server. Prior results in this area demonstrate that huge reductions in transmission rates are possible and this makes coded caching an attractive option for the next generation of content-delivery networks. However, these results require that each file hosted in the server be partitioned into a large number (i.e., the subpacketization level) of non-overlapping subfiles. From a practical perspective, this is problematic as it means that prior schemes are only applicable when the size of the files is extremely large. In this work, we propose a novel coded caching scheme that enjoys a significantly lower subpacketization level than prior schemes, while only suffering a marginal increase in the transmission rate. In particular, for a fixed cache size, the scaling with the number of users is such that the increase in transmission rate is negligible, but the decrease in subpacketization level is exponential.</p>

https://lib.dr.iastate.edu/ece_conf/30
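
The subpacketization problem the abstract refers to is easy to quantify: in the classic Maddah-Ali and Niesen coded caching scheme, with K users each caching a fraction M/N of the library, every file must be split into C(K, t) subfiles with t = K*M/N. The short computation below shows how quickly this blows up at a fixed cache fraction.

```python
from math import comb

def man_subpacketization(K, cache_fraction):
    """Subpacketization of the Maddah-Ali--Niesen scheme: each file is split
    into C(K, t) non-overlapping subfiles, where t = K * (M/N)."""
    t = round(K * cache_fraction)
    return comb(K, t)

# Growth with the number of users K at a fixed cache fraction M/N = 1/2:
for K in (8, 16, 32):
    print(K, man_subpacketization(K, 0.5))
```

Doubling the users from 8 to 16 to 32 takes the subpacketization from 70 to 12,870 to over 600 million, which is why low-subpacketization schemes matter in practice.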
Comparing a transmission planning study of cascading with historical line outage data
<p>The paper presents an initial comparison of a transmission planning study of cascading outages with a statistical analysis of historical outages. The planning study identifies the most vulnerable places in the Idaho system and outages that lead to cascading and interruption of load. This analysis is based on a number of case scenarios (short-term and long-term) that cover different seasonal and operating conditions. The historical analysis processes Idaho outage data and estimates statistics, using the number of transmission line outages as a measure of the extent of cascading. An initial number of lines outaged can lead to a cascading propagation of further outages, and the extent to which line outages propagate is estimated from Idaho Power outage data. Finally, the paper discusses some similarities in the results and highlights the different assumptions of the two approaches to cascading failure analysis.</p>

https://lib.dr.iastate.edu/ece_conf/48
The Impact of Local Power Balance and Link Reliability on Blackout Risk in Heterogeneous Power Transmission Grids
<p>Many critical infrastructures such as the power transmission grid are heterogeneous both in their basic structure and in some of their underlying characteristics. This heterogeneity can be good for system robustness if it reduces the spread of failures, or bad if it adds risk or vulnerability to the system. In this paper, we investigate the effect of heterogeneity in the strength of the links between parts of the system network structures, as well as the balance of local generation and demand, on the robustness of the power transmission grid using the OPA complex system model of the power transmission system. It is found that increasing or decreasing the reliability of the links between parts of the grid changes the likelihood of different size failures, with neither being optimal for all sizes. Furthermore, imbalances between load and generation in the local regions further degrade the system reliability.</p>

https://lib.dr.iastate.edu/ece_conf/47
Polychromatic Sparse Image Reconstruction and Mass Attenuation Spectrum Estimation via B-spline Basis Function Expansion
<p>We develop a sparse image reconstruction method for polychromatic computed tomography (CT) measurements under the blind scenario where the material of the inspected object and the incident energy spectrum are unknown. To obtain a parsimonious measurement model parameterization, we first rewrite the measurement equation using our mass-attenuation parameterization, which has the Laplace integral form. The unknown mass-attenuation spectrum is expanded into basis functions using a B-spline basis of order one. We develop a block coordinate-descent algorithm for constrained minimization of a penalized negative log-likelihood function, where constraints and penalty terms ensure nonnegativity of the spline coefficients and sparsity of the density map image in the wavelet domain. This algorithm alternates between a Nesterov’s proximal-gradient step for estimating the density map image and an active-set step for estimating the incident spectrum parameters. Numerical simulations demonstrate the performance of the proposed scheme.</p>

https://lib.dr.iastate.edu/ece_conf/6
Projected Nesterov’s Proximal-Gradient Signal Recovery from Compressive Poisson Measurements
<p>We develop a projected Nesterov’s proximal-gradient (PNPG) scheme for reconstructing sparse signals from compressive Poisson-distributed measurements whose mean signal intensity follows an affine model with known intercept. The objective function to be minimized is a sum of convex data fidelity (negative log-likelihood (NLL)) and regularization terms. We apply sparse signal regularization where the signal belongs to a nonempty closed convex set within the domain of the NLL and signal sparsity is imposed using a total-variation (TV) penalty. We present analytical upper bounds on the regularization tuning constant. The proposed PNPG method employs a projected Nesterov’s acceleration step, function restart, and an adaptive step-size selection scheme that accounts for the varying local Lipschitz constant of the NLL. We establish O(1/k²) convergence of the PNPG method with step-size backtracking only and no restart. Numerical examples compare PNPG with the state-of-the-art sparse Poisson-intensity reconstruction algorithm (SPIRAL).</p>

https://lib.dr.iastate.edu/ece_conf/9
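
The core accelerated proximal-gradient loop can be sketched compactly. This is a bare-bones variant, not the paper's PNPG: it uses a quadratic data-fidelity term instead of the Poisson NLL, an ℓ1 penalty instead of TV, a constant step size, and no domain projection, restart, or backtracking. The proximal map of the ℓ1 penalty plus a nonnegativity constraint is a one-sided soft threshold.

```python
def prox(v, thresh):
    """Proximal map of thresh*||x||_1 plus the constraint x >= 0:
    soft-threshold, then project onto the nonnegative orthant."""
    return [max(vi - thresh, 0.0) for vi in v]

def accel_prox_gradient(grad, x0, step, lam, iters=100):
    """Nesterov-accelerated proximal gradient with a constant step size."""
    x = list(x0)
    z = list(x0)       # extrapolated (momentum) point
    t = 1.0
    for _ in range(iters):
        g = grad(z)
        x_new = prox([zi - step * gi for zi, gi in zip(z, g)], step * lam)
        t_new = 0.5 * (1.0 + (1.0 + 4.0 * t * t) ** 0.5)
        z = [xn + ((t - 1.0) / t_new) * (xn - xo) for xn, xo in zip(x_new, x)]
        x, t = x_new, t_new
    return x

# Denoising example: minimize 0.5*||x - y||^2 + lam*||x||_1 subject to x >= 0.
y = [3.0, -2.0, 0.2]
lam = 0.5
x_hat = accel_prox_gradient(lambda z: [zi - yi for zi, yi in zip(z, y)],
                            [0.0] * 3, step=1.0, lam=lam)
```

For this separable denoising objective the minimizer is the one-sided soft threshold of y itself, i.e. x_hat = [2.5, 0, 0]: small and negative entries are zeroed out, illustrating the sparsity and nonnegativity the regularization enforces.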
Capacity of sum-networks for different message alphabets
<p>A sum-network is a directed acyclic network in which all terminal nodes demand the 'sum' of the independent information observed at the source nodes. Many characteristics of the well-studied multiple-unicast network communication problem also hold for sum-networks due to a known reduction between instances of these two problems. Our main result is that unlike a multiple unicast network, the coding capacity of a sum-network is dependent on the message alphabet. We demonstrate this using a construction procedure and show that the choice of a message alphabet can reduce the coding capacity of a sum-network from 1 to close to 0.</p>

https://lib.dr.iastate.edu/ece_conf/12
Improved Lower Bounds for Coded Caching
<p>Content delivery networks often employ caching to reduce transmission rates from the central server to the end users. Recently, the technique of coded caching was introduced whereby coding in the caches and coded transmission signals from the central server are considered. Prior results in this area demonstrate that carefully designing the placement of content in the caches and designing appropriate coded delivery signals from the server allow for a system where the delivery rates can be significantly smaller than conventional schemes. However, matching upper and lower bounds on the transmission rate have not yet been obtained. In this work, we derive tighter lower bounds on the coded caching rate than were known previously. We demonstrate that this problem can equivalently be posed as a combinatorial problem of optimally labeling the leaves of a directed tree. Our proposed labeling algorithm allows for significantly improved lower bounds on the coded caching rate. Furthermore, we study certain structural properties of our algorithm that allow us to analytically quantify improvements on the rate lower bound for general values of the problem parameters. This allows us to obtain a multiplicative gap of at most four between the achievable rate and our lower bound.</p>

https://lib.dr.iastate.edu/ece_conf/14
Monitoring voltage collapse margin with synchrophasors across transmission corridors with multiple lines and multiple contingencies
<p>We use synchrophasor measurements of the complex voltage and current at both ends of multiple transmission lines that connect areas of a power system to monitor the voltage collapse margin online. A new reduction combines the multiple transmission lines into a single-line equivalent and determines how to combine the synchrophasor measurements. Generator reactive power limits can be accommodated. The results show that this methodology can capture the effect of multiple contingencies inside the transmission corridors, giving operators awareness of the severity of contingencies with respect to voltage stability.</p>

https://lib.dr.iastate.edu/ece_conf/46
Area angle can monitor cascading outages with synchrophasors
<p>We monitor the severity of multiple line outages inside an area of the power system according to the limitations they impose on a bulk power transfer through the area. The monitoring combines synchrophasor measurements around the border of the area to form an angle across the area that can be tracked in real time. This approach is based on physical principles and extracts actionable information by suitably combining synchrophasor measurements. We demonstrate the capabilities of the method on a model of the WECC system, using an area with approximately 500 lines.</p>

https://lib.dr.iastate.edu/ece_conf/45
Mining maximal cliques from an uncertain graph
<p>We consider mining dense substructures (maximal cliques) from an uncertain graph, which is a probability distribution on a set of deterministic graphs. For a parameter α ∈ (0, 1], we consider the notion of an α-maximal clique in an uncertain graph. We present matching upper and lower bounds on the number of α-maximal cliques possible within an uncertain graph, an algorithm to enumerate α-maximal cliques whose worst-case runtime is near-optimal, and an experimental evaluation showing the practical utility of the algorithm.</p>

https://lib.dr.iastate.edu/ece_conf/43
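
The α-maximal clique notion can be made concrete on a tiny example. With independent edges, the probability that a vertex set induces a clique is the product of its edge probabilities; an α-maximal clique is an α-clique with no proper superset that is also an α-clique. The brute-force enumeration below is exponential, unlike the paper's near-optimal algorithm, but shows the definitions.

```python
from itertools import combinations
from math import prod

def clique_prob(S, p):
    """Probability that vertex set S induces a clique in an uncertain graph
    with independent edges: the product of its edge probabilities
    (an absent edge contributes probability 0)."""
    return prod(p.get(frozenset(e), 0.0) for e in combinations(sorted(S), 2))

def alpha_maximal_cliques(vertices, p, alpha):
    """Enumerate alpha-maximal cliques by brute force over all vertex subsets."""
    cliques = [frozenset(S)
               for r in range(2, len(vertices) + 1)
               for S in combinations(vertices, r)
               if clique_prob(S, p) >= alpha]
    # keep only cliques with no strict superset that is also an alpha-clique
    return [S for S in cliques if not any(S < T for T in cliques)]
```

On a triangle with edge probabilities 0.9, 0.9, 0.5, the whole triangle has clique probability 0.405, so it is the unique α-maximal clique at α = 0.4, while at α = 0.5 the three edges are each α-maximal.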
Joint optimization of power allocation and training duration for uplink multiuser MIMO communications
<p>In this paper, we consider a multiuser multiple-input multiple-output (MU-MIMO) communication system between a base station equipped with multiple antennas and multiple mobile users each equipped with a single antenna. The uplink scenario is considered. The uplink channels are acquired by the base station through a training phase. Two linear processing schemes are considered, namely maximum-ratio combining (MRC) and zero-forcing (ZF). We jointly optimize the training duration and training energy under average and peak power constraints so that an achievable sum rate is maximized.</p>

https://lib.dr.iastate.edu/ece_conf/49
A Fast Proximal Gradient Algorithm for Reconstructing Nonnegative Signals with Sparse Transform Coefficients
<p>We develop a fast proximal gradient scheme for reconstructing nonnegative signals that are sparse in a transform domain from underdetermined measurements. This signal model is motivated by tomographic applications where the signal of interest is known to be nonnegative because it represents a tissue or material density. We adopt the unconstrained regularization framework where the objective function to be minimized is a sum of a convex data fidelity (negative log-likelihood (NLL)) term and a regularization term that imposes signal nonnegativity and sparsity via an ℓ1-norm constraint on the signal’s transform coefficients. This objective function is minimized via Nesterov’s proximal-gradient method with function restart, where the proximal mapping is computed via alternating direction method of multipliers (ADMM). To accelerate the convergence, we develop an adaptive continuation scheme and a step-size selection scheme that accounts for varying local Lipschitz constant of the NLL. In the numerical examples, we consider Gaussian linear and Poisson generalized linear measurement models. We compare the proposed penalized NLL minimization approach and existing signal reconstruction methods via compressed sensing and tomographic reconstruction experiments and demonstrate that, by exploiting both the nonnegativity of the underlying signal and sparsity of its wavelet coefficients, we can achieve significantly better reconstruction performance than the existing methods.</p>

https://lib.dr.iastate.edu/ece_conf/4
Sum-networks from undirected graphs: Construction and capacity analysis
<p>We consider a directed acyclic network with multiple sources and multiple terminals where each terminal is interested in decoding the sum of independent sources generated at the source nodes. We describe a procedure whereby a simple undirected graph can be used to construct such a sum-network and demonstrate an upper bound on its computation rate. Furthermore, we show sufficient conditions for the construction of a linear network code that achieves this upper bound. Our procedure allows us to construct sum-networks that have any arbitrary computation rate p/q (where p, q are non-negative integers). Our work significantly generalizes a previous approach for constructing sum-networks with arbitrary capacities. Specifically, we answer an open question in prior work by demonstrating sum-networks with significantly fewer sources and terminals.</p>

https://lib.dr.iastate.edu/ece_conf/13
Enumerating Maximal Bicliques from a Large Graph using MapReduce
<p>We consider the enumeration of maximal bipartite cliques (bicliques) from a large graph, a task central to many practical data mining problems in social network analysis and bioinformatics. We present novel parallel algorithms for the MapReduce platform, and an experimental evaluation using Hadoop MapReduce. Our algorithm is based on clustering the input graph into smaller sized subgraphs, followed by processing different subgraphs in parallel. Our algorithm uses two ideas that enable it to scale to large graphs: (1) the redundancy in work between different subgraph explorations is minimized through a careful pruning of the search space, and (2) the load on different reducers is balanced through the use of an appropriate total order among the vertices. Our evaluation shows that the algorithm scales to large graphs with millions of edges and tens of millions of maximal bicliques. To our knowledge, this is the first work on maximal biclique enumeration for graphs of this scale.</p>

https://lib.dr.iastate.edu/ece_conf/53
Parallel Streaming Frequency-Based Aggregates
<p>We present efficient parallel streaming algorithms for fundamental frequency-based aggregates in both the sliding window and the infinite window settings. In the sliding window setting, we give a parallel algorithm for maintaining a space-bounded block counter (SBBC). Using SBBC, we derive algorithms for basic counting, frequency estimation, and heavy hitters that perform no more work than their best sequential counterparts. In the infinite window setting, we present algorithms for frequency estimation, heavy hitters, and count-min sketch. For both the infinite window and sliding window settings, our parallel algorithms process a "minibatch" of items using linear work and polylog parallel depth. We also prove a lower bound showing that the work of the parallel algorithm is optimal in the case of heavy hitters and frequency estimation. To our knowledge, these are the first parallel algorithms for these problems that are provably work efficient and have low depth.</p>

https://lib.dr.iastate.edu/ece_conf/59
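
The "best sequential counterpart" the abstract benchmarks against for heavy hitters and frequency estimation is the classic Misra-Gries summary, sketched below (sequential only; the paper's contribution is matching its work in parallel with polylog depth). With k counters over a stream of n items, every item occurring more than n/k times survives, and each reported count undercounts by at most n/k.

```python
def misra_gries(stream, k):
    """Misra-Gries frequency summary with at most k counters."""
    counters = {}
    for item in stream:
        if item in counters:
            counters[item] += 1
        elif len(counters) < k:
            counters[item] = 1
        else:
            # decrement every counter; drop the ones that reach zero
            for key in list(counters):
                counters[key] -= 1
                if counters[key] == 0:
                    del counters[key]
    return counters
```

On the stream 'aaaaaabcd' with k = 2 counters, 'a' (true count 6 out of n = 9) survives with estimated count 5, within the guaranteed n/k = 4.5 undercount.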
Sparse X-ray CT image reconstruction and blind beam hardening correction via mass attenuation discretization
<p>We develop a nonlinear sparse X-ray computed tomography (CT) image reconstruction method that accounts for beam hardening effects due to polychromatic X-ray sources. We adopt the blind scenario where the material of the inspected object and the incident polychromatic source spectrum are unknown and apply mass attenuation discretization of the underlying integral expressions that model the noiseless measurements. Our reconstruction algorithm employs constrained minimization of a penalized least-squares cost function, where nonnegativity and maximum-energy constraints are imposed on incident spectrum parameters and negative-energy and smooth ℓ1-norm penalty terms are introduced to ensure the nonnegativity and sparsity of the density map image. This minimization scheme alternates between a nonlinear conjugate-gradient step for estimating the density map image and an active set step for estimating incident spectrum parameters. We compare the proposed method with the existing approaches, which ignore the polychromatic nature of the measurements or sparsity of the density map image.</p>

https://lib.dr.iastate.edu/ece_conf/1
Counting and Sampling Triangles from a Graph Stream
<p>This paper presents a new space-efficient algorithm for counting and sampling triangles (and, more generally, constant-sized cliques) in a massive graph whose edges arrive as a stream. Compared to prior work, our algorithm yields significant improvements in the space and time complexity for these fundamental problems. Our algorithm is simple to implement and has very good practical performance on large graphs.</p>

https://lib.dr.iastate.edu/ece_conf/58
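
For contrast with the space-efficient algorithm above, here is the exact one-pass baseline it improves on: process each arriving edge (u, v) and count one triangle per common neighbor of u and v seen so far. This baseline stores the entire graph; the paper's algorithm instead samples edges to estimate the count in sublinear space.

```python
from collections import defaultdict

def count_triangles(edge_stream):
    """Exact one-pass triangle count over a stream of undirected edges."""
    adj = defaultdict(set)
    triangles = 0
    for u, v in edge_stream:
        # each common neighbor of u and v closes one new triangle
        triangles += len(adj[u] & adj[v])
        adj[u].add(v)
        adj[v].add(u)
    return triangles
```

The complete graph on four vertices contains exactly four triangles, however its six edges are ordered in the stream.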
Beam hardening correction via mass attenuation discretization
<p>We develop a beam-hardening correction method for polychromatic X-ray computed tomography (CT) reconstruction based on mass attenuation coefficient discretization. We assume that the inspected object consists of an unknown single material and that the incident X-ray spectrum is unknown. In this case, the standard photon-energy discretization of the Beer's-law measurement equation leads to an excessive number of unknown parameters and scaling ambiguity. To obtain a parsimonious measurement model parametrization, we first rewrite the measurement equation in terms of integral expressions of the mass attenuation rather than photon energy. The resulting integrals can be discretized easily thanks to the fact that the range of mass attenuations is bounded and, in practice, fairly narrow. We then develop a constrained least-squares optimization approach for reconstructing the underlying object from log-scale measurements, where we impose the nonnegativity constraint on both the signal and the X-ray spectrum density estimates. We demonstrate the performance of the proposed method via a numerical example where we compare it with standard filtered backprojection (FBP), which ignores the polychromatic nature of the measurements.</p>

https://lib.dr.iastate.edu/ece_conf/2
A Numerical Dosimetry Study for Pediatric Transcranial Magnetic Stimulation
<p>Transcranial magnetic stimulation (TMS) has been used to study brain function and is being investigated as a possible treatment for a variety of neurological disorders and neuropsychiatric diseases. Most studies implementing TMS utilize adult subjects, but the method may also be a beneficial tool in the treatment of pediatric patients. It is known that the structure of pediatric brains differs from that of adults, and the use of scaled adult head models for studying exposure to electromagnetic radiation may provide erroneous results. In this study, numerical calculations have been performed incorporating high-resolution, anatomically realistic human models of varying age to accurately determine how the field induced by TMS depends upon the age of the patient. Results show that different protocols should be considered when conducting TMS experiments with pediatric patients to ensure desired neurological effects are achieved in a safe manner.</p>

https://lib.dr.iastate.edu/ece_conf/8
Focused and Deep Brain Magnetic Stimulation Using New Coil Design in Mice
<p>Deep brain transcranial magnetic stimulation (TMS) offers promising treatment for neurological disorders that originate from deeper regions of the brain, such as Parkinson's disease. Coils designed for the human head need significant redesigning to stimulate selective regions of the mouse brain for advanced TMS therapy analysis. We report a focused, deep-brain TMS coil for mice based on a two-coil configuration similar to the 'Halo coil'. A heterogeneous, MRI-derived head model of a mouse was used to obtain an electric field of about 150 V/m in selective deeper regions of the brain. Focality of stimulation was quantified using the ratio of the half-value volume to the half-value depth of the electric field. A prototype of the final coil design was fabricated and characterized to compare simulated and physical magnetic field profiles.</p>

https://lib.dr.iastate.edu/ece_conf/7
]]>
#sn_blue-dot_copy3
-117.16108380000003,32.715738,4
PREMIER — PRobabilistic error-correction using Markov inference in errored reads
<p>In this work we present a flexible, probabilistic and reference-free method of error correction for high throughput DNA sequencing data. The key is to exploit the high coverage of sequencing data and model short sequence outputs as independent realizations of a Hidden Markov Model (HMM). We pose the problem of error correction of reads as one of maximum likelihood sequence detection over this HMM. While time and memory considerations rule out an implementation of the optimal Baum-Welch algorithm (for parameter estimation) and the optimal Viterbi algorithm (for error correction), we propose low-complexity approximate versions of both. Specifically, we propose an approximate Viterbi algorithm and a sequential decoding based algorithm for the error correction. Our results show that when compared with Reptile, a state-of-the-art error correction method, our methods consistently achieve superior performance on both simulated and real data sets.</p>

https://lib.dr.iastate.edu/ece_conf/28
]]>
#sn_blue-dot_copy3
28.97835889999999,41.0082376,4
Replication based storage systems with local repair
<p>We consider the design of regenerating codes for distributed storage systems that enjoy the property of local, exact and uncoded repair, i.e., (a) upon failure, a node can be regenerated by simply downloading packets from the surviving nodes and (b) the number of surviving nodes contacted is strictly smaller than the number of nodes that need to be contacted for reconstructing the stored file. Our codes consist of an outer MDS code and an inner fractional repetition code that specifies the placement of the encoded symbols on the storage nodes. For our class of codes, we identify the tradeoff between the local repair property and the minimum distance. We present codes based on graphs of high girth, affine resolvable designs and projective planes that meet the minimum distance bound for specific choices of file sizes.</p>

https://lib.dr.iastate.edu/ece_conf/26
]]>
#sn_blue-dot_copy3
-114.0708459,51.0486151,4
Parallel Triangle Counting in Massive Streaming Graphs
<p>The number of triangles in a graph is a fundamental metric widely used in social network analysis, link classification and recommendation, and more. In these applications, modern graphs of interest tend to be both large and dynamic. This paper presents the design and implementation of a fast parallel algorithm for estimating the number of triangles in a massive undirected graph whose edges arrive as a stream. Our algorithm is designed for shared-memory multicore machines and can make efficient use of parallelism and the memory hierarchy. We provide theoretical guarantees on performance and accuracy, and our experiments on real-world datasets show accurate results and substantial speedups compared to an optimized sequential implementation.</p>

https://lib.dr.iastate.edu/ece_conf/57
]]>
#sn_blue-dot_copy3
-122.41941550000001,37.7749295,4
Selfish Distributed Compression over Networks
<p>We consider the min-cost multicast problem (under network coding) with multiple correlated sources where each terminal wants to losslessly reconstruct all the sources. This can be considered as the network generalization of the classical distributed source coding (Slepian-Wolf) problem. We study the inefficiency brought forth by the selfish behavior of the terminals in this scenario by modeling it as a noncooperative game among the terminals. The solution concept that we adopt for this game is the popular local Nash equilibrium (Waldrop equilibrium) adapted for the scenario with multiple sources. The degradation in performance due to the lack of regulation is measured by the price of anarchy (POA), which is defined as the ratio between the cost of the worst possible Waldrop equilibrium and the socially optimum cost. Our main result is that in contrast with the case of independent sources, the presence of source correlations can significantly increase the price of anarchy. Towards establishing this result we make several contributions. We characterize the socially optimal flow and rate allocation in terms of four intuitive conditions. This result is a key technical contribution of this paper and is of independent interest as well. Next, we show that the Waldrop equilibrium is a socially optimal solution for a different set of (related) cost functions. Using this, we construct explicit examples that demonstrate that the POA > 1 and determine near-tight upper bounds on the POA as well. The main techniques in our analysis are Lagrangian duality theory and the use of the supermodularity of conditional entropy.
Finally, all the techniques and results in this paper naturally extend to a large class of network information flow problems where the Slepian-Wolf polytope is replaced by any contra-polymatroid (or, more generally, a polymatroid-like set), leading to a class of succinct multi-player games and allowing the investigation of other practical and meaningful scenarios beyond network coding as well.</p>

https://lib.dr.iastate.edu/ece_conf/33
]]>
#sn_blue-dot_copy3
-43.17289649999998,-22.9068467,4
Repairable Replication-based Storage Systems Using Resolvable Designs
<p>We consider the design of regenerating codes for distributed storage systems at the minimum bandwidth regeneration (MBR) point. The codes allow for a repair process that is exact and uncoded, but table-based. These codes were introduced in prior work and consist of an outer MDS code followed by an inner fractional repetition (FR) code where copies of the coded symbols are placed on the storage nodes. The main challenge in this domain is the design of the inner FR code. In our work, we consider generalizations of FR codes, by establishing their connection with a family of combinatorial structures known as resolvable designs. Our constructions based on affine geometries, Hadamard designs and mutually orthogonal Latin squares allow the design of systems where a new node can be exactly regenerated by downloading β ≥ 1 packets from a subset of the surviving nodes (prior work only considered the case of β = 1). Our techniques allow the design of systems over a large range of parameters. Specifically, the repetition degree of a symbol, which dictates the resilience of the system, can be varied over a large range in a simple manner. Moreover, the actual table needed for the repair can also be implemented in a rather straightforward way. Furthermore, we answer an open question posed in prior work by demonstrating the existence of codes with parameters that are not covered by Steiner systems.</p>

https://lib.dr.iastate.edu/ece_conf/27
]]>
#sn_blue-dot_copy3
-87.9364215,39.9117014,4
On the multiple unicast capacity of 3-source, 3-terminal directed acyclic networks
<p>We consider a directed acyclic network with unit-capacity edges and n source-terminal (si - ti) pairs that wish to communicate at unit rate via network coding. The connectivity between the si-ti pairs is quantified by means of a connectivity level vector, [k1 k2 ... kn] such that there exist ki edge-disjoint paths between si and ti. In this work we attempt to classify networks based on the connectivity level. Specifically, for the case of n = 3 and a given connectivity level vector [k1 k2 k3], we aim to either present a constructive network coding scheme or an instance of a network that cannot support the desired rate. Certain generalizations for the case of higher n have also been found.</p>

https://lib.dr.iastate.edu/ece_conf/25
]]>
#sn_blue-dot_copy3
-117.16108380000003,32.715738,4
A Lower Bound On Proximity Preservation by Space Filling Curves
<p>A space filling curve (SFC) is a proximity preserving mapping from a high dimensional space to a single dimensional space. SFCs have been used extensively in dealing with multi-dimensional data in parallel computing, scientific computing, and databases. The general goal of an SFC is that points that are close to each other in high-dimensional space are also close to each other in the single dimensional space. While SFCs have been used widely, the extent to which proximity can be preserved by an SFC is not precisely understood yet. We consider natural metrics, including the "nearest-neighbor stretch" of an SFC, which measure the extent to which an SFC preserves proximity. We first show a powerful negative result: there is an inherent lower bound on the stretch of any SFC. We then show that the stretch of the commonly used Z curve is within a factor of 1.5 from the optimal, irrespective of the number of dimensions. Further, we show that a very simple SFC also achieves the same stretch as the Z curve. Our results apply to SFCs in any constant dimension d.</p>

https://lib.dr.iastate.edu/ece_conf/56
]]>
#sn_blue-dot_copy3
121.47370209999997,31.2303904,4
An Achievable Region for the Double Unicast Problem Based on a Minimum Cut Analysis
<p>We consider the multiple unicast problem under network coding over directed acyclic networks when there are two source-terminal pairs, s1 - t1 and s2 - t2. Current characterizations of the multiple unicast capacity region in this setting have a large number of inequalities, which makes them hard to explicitly evaluate. In this work we consider a slightly different problem. We assume that we only know certain minimum cut values for the network, e.g., mincut(Si, Tj), where Si ⊆ {s1, s2} and Tj ⊆ {t1, t2} for different subsets Si and Tj. Based on these values, we propose an achievable rate region for this problem based on linear codes. Towards this end, we begin by defining a base region where both sources are multicast to both the terminals. Following this we enlarge the region by appropriately encoding the information at the source nodes, such that terminal ti is only guaranteed to decode information from the intended source si, while decoding a linear function of the other source. The rate region takes different forms depending upon the relationship of the different cut values in the network.</p>

https://lib.dr.iastate.edu/ece_conf/24
]]>
#sn_blue-dot_copy3
-44.72047570000001,-23.2200542,4
Algebraic Codes for Slepian-Wolf code design
<p>Practical constructions of lossless distributed source codes (for the Slepian-Wolf problem) have been the subject of much investigation in the past decade. In particular, near-capacity achieving code designs based on LDPC codes have been presented for the case of two binary sources, with a binary-symmetric correlation. However, constructing practical codes for the case of non-binary sources with arbitrary correlation remains by and large open. From a practical perspective it is also interesting to consider coding schemes whose performance remains robust to uncertainties in the joint distribution of the sources. In this work we propose the usage of Reed-Solomon (RS) codes for the asymmetric version of this problem. We show that algebraic soft-decision decoding of RS codes can be used effectively under certain correlation structures. In addition, RS codes offer natural rate adaptivity and performance that remains constant across a family of correlation structures with the same conditional entropy. The performance of RS codes is compared with dedicated and rate-adaptive multistage LDPC codes (Varodayan et al. '06), where each LDPC code is used to compress the individual bit planes. Our simulations show that in the classical Slepian-Wolf scenario, RS codes outperform both dedicated and rate-adaptive LDPC codes under q-ary symmetric correlation, and are better than rate-adaptive LDPC codes in the case of sparse correlation models, where the conditional distribution of the sources has only a few dominant entries. In a feedback scenario, the performance of RS codes is comparable with both designs of LDPC codes. Our simulations also demonstrate that the performance of RS codes in the presence of inaccuracies in the joint distribution of the sources is much better than that of multistage LDPC codes.</p>

https://lib.dr.iastate.edu/ece_conf/23
]]>
#sn_blue-dot_copy3
30.335098600000038,59.9342802,4
Degrees of Freedom Region for an Interference Network With General Message Demands
<p>We consider a single hop interference network with K transmitters, each with an independent message, and J receivers, all having the same number (M) of antennas. Each receiver requests an arbitrary subset of the messages. This generalizes the well-known K user M antenna interference channel, where each message is requested by a unique receiver. For this setup, we derive the exact degrees of freedom (DoF) region. Our achievability scheme generalizes the interference alignment scheme proposed by Cadambe and Jafar '08. In particular, we achieve general points in the DoF region by using multiple base vectors and aligning the interference at each receiver to its largest (in the DoF sense) interferer. As a byproduct of our analysis, we recover the DoF region for the original interference channel.</p>

https://lib.dr.iastate.edu/ece_conf/22
]]>
#sn_blue-dot_copy3
30.335098600000038,59.9342802,4
A Note on the Multiple Unicast Capacity of Directed Acyclic Networks
<p>We consider the multiple unicast problem under network coding over directed acyclic networks with unit capacity edges. There is a set of n source-terminal (si-ti) pairs that wish to communicate at unit rate over this network. The connectivity between the si-ti pairs is quantified by means of a connectivity level vector, [k1 k2 ... kn] such that there exist ki edge-disjoint paths between si and ti. Our main aim is to characterize the feasibility of achieving this for different values of n and [k1 ... kn]. For 3 unicast connections (n = 3), we characterize several achievable and unachievable values of the connectivity 3-tuple. In addition, in this work, we have found certain network topologies and capacity characterizations that are useful in understanding the case of general n.</p>

https://lib.dr.iastate.edu/ece_conf/21
]]>
#sn_blue-dot_copy3
135.76802939999993,35.0116363,4
Performance Evaluation for ML Sequence Detection in ISI Channels with Gauss Markov Noise
<p>Inter-symbol interference (ISI) channels with data dependent Gauss Markov noise have been used to model read channels in magnetic recording and other data storage systems. The Viterbi algorithm can be adapted for performing maximum likelihood sequence detection in such channels. However, the problem of finding an analytical upper bound on the bit error rate of the Viterbi detector in this case has not been fully investigated. Current techniques rely on an exhaustive enumeration of short error events and determine the BER using a union bound. In this work, we consider a subset of the class of ISI channels with data dependent Gauss-Markov noise. We derive an upper bound on the pairwise error probability (PEP) between the transmitted bit sequence and the decoded bit sequence that can be expressed as a product of functions depending on current and previous states in the (incorrect) decoded sequence and the (correct) transmitted sequence. In general, the PEP is asymmetric. The average BER over all possible bit sequences is then determined using a pairwise state diagram. Simulation results corroborate the analysis and demonstrate that the analytic bound on the BER is tight in the high-SNR regime, where our proposed upper bound obviates the need for computationally expensive simulation.</p>

https://lib.dr.iastate.edu/ece_conf/19
]]>
#sn_blue-dot_copy3
-80.19179020000001,25.7616798,4
Optimal Random Sampling from Distributed Streams Revisited
<p>We give an improved algorithm for drawing a random sample from a large data stream when the input elements are distributed across multiple sites which communicate via a central coordinator. At any point in time the set of elements held by the coordinator represent a uniform random sample from the set of all the elements observed so far. When compared with prior work, our algorithms asymptotically improve the total number of messages sent in the system as well as the computation required of the coordinator. We also present a matching lower bound, showing that our protocol sends the optimal number of messages up to a constant factor with large probability. As a byproduct, we obtain an improved algorithm for finding the heavy hitters across multiple distributed sites.</p>

https://lib.dr.iastate.edu/ece_conf/55
]]>
#sn_blue-dot_copy3
12.496365500000024,41.9027835,4
Minimum cost content distribution using network coding: Replication vs. coding at the source nodes
<p>Consider a large file that needs to be multicast over a network to a given set of terminals. Storing the file at a single server may result in server overload. Accordingly, there are distributed storage solutions that operate by dividing the file into pieces and placing copies of the pieces (replication) or coded versions of the pieces (coding) at multiple source nodes. Suppose that the cost of a given network coding based solution to this problem is defined as the sum of the storage cost and the cost of the flows required to support the multicast. In this work, we consider a network with a set of source nodes that can either contain subsets or coded versions of the pieces of the file and are interested in finding the storage capacities and flows at minimum cost. We provide succinct formulations of the corresponding optimization problems by using information measures. In particular, we show that when there are two source nodes, there is no loss in considering subset sources. For three source nodes, we derive a tight upper bound on the cost gap between the two cases. Algorithms for determining the content of the source nodes are also provided.</p>

https://lib.dr.iastate.edu/ece_conf/40
]]>
#sn_blue-dot_copy3
31.23571160000006,30.0444196,4
Maximum-Likelihood Sequence Detector for Dynamic Mode High Density Probe Storage
<p>There is an increasing need for high density data storage devices driven by the increased demand of consumer electronics. In this work, we consider a data storage system that operates by encoding information as topographic profiles on a polymer medium. A cantilever probe with a sharp tip (a few nm in radius) is used to create and sense the presence of topographic profiles, resulting in a density of a few Tb per in.². The prevalent mode of using the cantilever probe is the static mode, which is harsh on the probe and the media. In this article, the high quality factor dynamic mode operation, which is less harsh on the media and the probe, is analyzed. The read operation is modeled as a communication channel which incorporates system memory due to inter-symbol interference and the cantilever state. We demonstrate an appropriate level of abstraction of this complex nanoscale system that obviates the need for an involved physical model. Next, a solution to the maximum likelihood sequence detection problem based on the Viterbi algorithm is devised. Experimental and simulation results demonstrate that the performance of this detector is several orders of magnitude better than the performance of other existing schemes.</p>

https://lib.dr.iastate.edu/ece_conf/32
]]>
#sn_blue-dot_copy3
-157.85833330000003,21.3069444,4
Communicating the sum of sources in a 3-sources/3-terminals network; revisited
<p>We consider the problem of multicasting sums over directed acyclic networks with unit capacity edges. A set of source nodes si observe independent unit-entropy source processes Xi and want to communicate Σ Xi to a set of terminals tj. Previous work on this problem has established necessary and sufficient conditions on the si -tj connectivity in the case when there are two sources or two terminals (Ramamoorthy '08), and in the case of three sources and three terminals (Langberg-Ramamoorthy '09). In particular the latter result establishes that each terminal can recover the sum if there are two edge disjoint paths between each si-tj pair. In this work, we provide a new and significantly simpler proof of this result, and introduce techniques that may be of independent interest in other network coding problems.</p>

https://lib.dr.iastate.edu/ece_conf/18
]]>
#sn_blue-dot_copy3
-97.74306079999997,30.267153,4
Delay, Cost and Infrastructure Tradeoff of Epidemic Routing in Mobile Sensor Networks
<p>This paper studies the delay, cost and infrastructure tradeoff of epidemic routing in mobile sensor networks. We consider a mobile sensor network with <em>M</em> mobiles and <em>B</em> static base stations. The mobile sensors collect information when moving around and need to report the information to the base stations. Three different epidemic routing schemes --- target epidemic routing, uncontrolled epidemic routing and controlled epidemic routing --- are analyzed in this paper. For each of the three schemes, we characterize the scaling behaviors of the delay, which is defined to be the average number of time slots required to deliver a message, and the cost, which is defined to be the average number of transmissions required to deliver a message, in terms of the number of mobiles (<em>M</em>) and the number of base stations (<em>B</em>). These scaling results reveal the fundamental tradeoff among delay, cost and infrastructure in mobile sensor networks.</p>

https://lib.dr.iastate.edu/ece_conf/54
]]>
#sn_blue-dot_copy3
-0.37067899999999554,49.182863,4
Design and Analysis of E²RC Codes Using EXIT Chart
<p>We present the design and analysis of a family of codes based on the efficiently-encodable rate-compatible irregular LDPC codes (E²RC codes) introduced by Kim, Ramamoorthy and McLaughlin '06. The basic idea is to utilize the structured parity part (of the parity check matrix) of E²RC codes and design optimal degree distributions for the systematic part based on EXIT chart analysis. In this work, we propose a new technique for computing EXIT functions of the structured parity part without resorting to Monte Carlo simulations. Our method provides smoother EXIT functions in substantially less time. Furthermore, we pose and solve the problem of designing good codes using linear programming. Next, we consider the performance of rate-compatible codes under puncturing using EXIT charts. Finally, we propose joint optimization of our codes at any specified code rate(s), which has not been explored in previous designs of rate-compatible punctured codes.</p>

https://lib.dr.iastate.edu/ece_conf/31
]]>
#sn_blue-dot_copy3
13.737262099999953,51.0504088,4
Communicating the sum of sources in a 3-sources/3-terminals network
<p>We consider the network communication scenario in which a number of sources si each holding independent information Xi wish to communicate the sum ∑Xi to a set of terminals tj. In this work we consider directed acyclic graphs with unit capacity edges and independent sources of unit-entropy. The case in which there are only two sources or only two terminals was considered by the work of Ramamoorthy [ISIT 2008], where it was shown that communication is possible if and only if each source-terminal pair si/tj is connected by at least a single path. In this work we study the communication problem in general, and show that even for the case of three sources and three terminals, a single path connecting source/terminal pairs does not suffice to communicate ∑Xi. We then present an efficient encoding scheme which enables the communication of ∑Xi for the three sources, three terminals case, given that each source-terminal pair is connected by two edge disjoint paths. Our encoding scheme includes a structural decomposition of the network at hand which may be found useful for other network coding problems as well.</p>

https://lib.dr.iastate.edu/ece_conf/17
]]>
#sn_blue-dot_copy3
126.97796919999996,37.566535,4
Protection against link errors and failures using network coding in overlay networks
<p>We propose a network-coding based scheme to protect multiple bidirectional unicast connections against adversarial errors and failures in a network. The end nodes of the bidirectional connections are connected by a set of shared protection paths that provide the redundancy required for protection. Suppose that ne paths are corrupted by an omniscient, computationally unbounded adversary. Under our proposed protocol, the errors can be corrected at all the end nodes with 4ne protection paths. More generally, if there are ne adversarial errors and nƒ failures, 4ne + 2nƒ protection paths are sufficient. The number of protection paths only depends on the number of errors and failures being protected against and is independent of the number of unicast connections.</p>

https://lib.dr.iastate.edu/ece_conf/41
]]>
#sn_blue-dot_copy3
126.97796919999996,37.566535,4
Protograph E²RC Codes
<p>We propose a construction of rate-compatible punctured codes based on protographs that have a special E²RC structure in their parity part (E²RC codes were introduced in Kim, Ramamoorthy and McLaughlin '06). The protograph representation of these codes facilitates their asymptotic performance analysis and allows the implementation of high speed decoders. The construction process starts with a good high rate protograph. The protographs of lower rate codes are derived from the higher rate protographs via the process of check-splitting. The check-splitting is done in a specific manner so that the parity nodes in the protograph have the special E²RC structure. We also present additional design rules that ensure that the gap to capacity remains low across the range of rates. Using our approach we exhibit protographs that have a gap of at most 0.27 dB to capacity across the range of rates 1/2 to 8/9. These conclusions are supported by our simulation results. Our work therefore presents a systematic method for the design of E²RC-like codes.</p>

https://lib.dr.iastate.edu/ece_conf/37
]]>
#sn_blue-dot_copy3
-90.0715323,29.95106579999999,4
Rate and power allocation under the pairwise distributed source coding constraint
<p>We explore the problem of rate and power allocation for a sensor network where pairwise distributed source coding is employed (introduced by Roumy and Gesbert '07). For noiseless node-terminal channels, we show that the minimum sum rate assignment with this property can be found by finding a minimum weight arborescence in an appropriately defined directed graph. For orthogonal noisy node-terminal channels, the minimum sum power allocation can be found by finding a minimum weight matching forest in a mixed graph. Numerical results are presented for the noiseless case showing that our solution outperforms previous solutions when source correlations are high.</p>

https://lib.dr.iastate.edu/ece_conf/39
]]>
#sn_blue-dot_copy3
-79.38318429999998,43.653226,4
Channel modeling and detector design for dynamic mode high density probe storage
<p>Probe-based data storage is a promising solution for satisfying the demand for ultra-high capacity storage devices. One of the main problems with probe storage devices is the wear of tip and media over the lifetime of the device. In this paper we present the dynamic mode operation of the cantilever probe that partially addresses the problems of media/tip wear. A communication system model which incorporates modeling of the cantilever interaction with media is proposed for the system. We demonstrate that by using a controllable canonical state space representation, the entire system can be visualized as a channel with a single input, which is the tip-media interaction force. A hypothesis testing formulation for bit-by-bit detection is developed. We present three different classes of detectors for this hypothesis test. In particular, we consider the two different cases where statistics on the tip-medium interaction are and are not available. Simulation results are presented for all these detectors and their relative merits are explored.</p>

https://lib.dr.iastate.edu/ece_conf/38
]]>
#sn_blue-dot_copy3
-74.6672226,40.3572976,4
Communicating the sum of sources over a network
<p>We consider a network (that is capable of network coding) with a set of sources and terminals, where each terminal is interested in recovering the sum of the sources. Considering directed acyclic graphs with unit capacity edges and independent, unit-entropy sources, we show the rate region when (a) there are two sources and n terminals, and (b) n sources and two terminals. In these cases as long as there exists at least one path from each source to each terminal we demonstrate that there exists a valid assignment of coding vectors to the edges such that the terminals can recover the sum of the sources.</p>

https://lib.dr.iastate.edu/ece_conf/34
]]>
#sn_blue-dot_copy3
-79.38318429999998,43.653226,4
Minimum cost distributed source coding over a network
<p>This work considers the problem of transmitting multiple compressible sources over a network with minimum cost. The problem is complicated by the fact that the description of the feasible rate region of distributed source coding problems typically has a number of constraints that is exponential in the number of sources, which renders general-purpose solvers inefficient. We present a framework in which these problems can be solved efficiently by exploiting the structure of the feasible rate regions coupled with dual decomposition and subgradient methods.</p>

https://lib.dr.iastate.edu/ece_conf/36
]]>
#sn_blue-dot_copy3
7.261953199999994,43.7101728,4
Error Floors of LDPC Coded BICM
<p>In recent years performance prediction for communication systems utilizing iteratively decodable codes has been of considerable interest. There have been significant breakthroughs as far as the analysis of LDPC code ensembles is concerned but the more practical problem of predicting the FER/BER of a particular code has proved to be much more difficult. In this work we present a technique (based on the work of Richardson '03) for finding lower and upper bounds on the performance of LDPC coded BICM systems for a given code. The insight gained from the prediction technique is used to design interleavers that improve the error floors of these systems.</p>

https://lib.dr.iastate.edu/ece_conf/35
]]>
#sn_blue-dot_copy3
-4.251805999999988,55.864237,4
A Genealogical Approach to Analyzing Post-Mortem Denial of Service Attacks
<p>Availability requires that computer systems remain functioning as expected without loss of resources to legitimate users. The impact of a lack of availability to services and data is often little more than a nuisance; however, the results could be devastating if critical computational and communication resources are targeted. One of the most problematic challenges to availability is the denial of service (DoS) attack. Over time, DoS attacks have become increasingly sophisticated, often employing techniques like address spoofing, coordinated distributed sources of attack, and subverting “inside” computers to assist in carrying out the attack. DoS attacks are very easy to launch, are effective, and are difficult to prevent or mitigate.</p>
<p>The purpose of this work is to study post-mortem DoS attacks over time with the goals of uncovering how the attacks relate to each other, identifying the underlying vulnerability that led to success, and gaining insight on future attack trends. By studying how attacks have changed over time and adapted to overcome new security practices, it is possible to construct attack trees to represent the genealogy and history of DoS attack tools. Through code inspections and close analysis of the attack trees, we were able to identify core techniques copied from one attack to another, the synthesis of more effective techniques based on combinations of existing methods, and the genesis of novel attack strategies. The generation of attack trees allows for an important examination of how attacks relate to one another as well as insight on the core vulnerabilities that still remain in modern software solutions. More importantly, by closely analyzing the genealogy of attack trees and post-mortem DoS exploitation, we not only gain information on the methodologies currently used by attackers but also discover valuable insight on predicting future attack patterns as well as developing possible countermeasures.</p>

https://lib.dr.iastate.edu/ece_conf/3
]]>
#sn_blue-dot_copy3
-117.0001651,46.73238749999999,4
Finite-length MIMO adaptive equalization using canonical correlations
<p>We propose finite-length multi-input multi-output adaptive equalization methods for "smart" antenna arrays using the statistical theory of canonical correlations. We show that the proposed methods are related to maximum likelihood reduced-rank channel and noise estimation algorithms in unknown spatially correlated noise, and to several recently proposed adaptive equalization schemes.</p>

https://lib.dr.iastate.edu/ece_conf/42
]]>
#sn_blue-dot_copy3
-111.89104739999999,40.7607793,4