Mathematical Foundations of Modern Computing

  • Mathematical Foundations of Modern Computing
    • Noisy computing; approximate computing; in-memory computing

Overview: Modern computing systems must process large quantities of data quickly and efficiently. Device scaling has reached the point where simple reliability techniques such as guard banding and replication are too resource-intensive. Moreover, some applications (e.g., image processing) do not require perfectly accurate computation to produce outputs of acceptable quality. In this research, we establish the fundamental performance limits of computational systems built out of noisy components, and we offer novel mathematical techniques that effectively mitigate the adverse effects of hardware noise while maintaining target output quality.

In this work, we explore and formalize the untapped potential of a variety of mathematical techniques, as well as the intrinsic robustness of iterative algorithms, to offer low-cost solutions for computing systems built out of unreliable components.
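The intrinsic robustness mentioned above can be illustrated with a minimal sketch: an iterative algorithm (here, plain gradient descent on a toy quadratic) still converges close to the optimum even when every gradient evaluation is corrupted by additive noise. The function names, step sizes, and noise model below are illustrative assumptions, not the specific algorithms studied in this work.

```python
import random

def noisy_gradient_descent(grad, x0, lr=0.1, steps=200, noise=0.05, seed=0):
    """Run gradient descent where every gradient evaluation is corrupted
    by additive zero-mean hardware noise (a toy unreliable-component model)."""
    rng = random.Random(seed)
    x = x0
    for _ in range(steps):
        g = grad(x) + rng.gauss(0.0, noise)  # each computation is noisy
        x -= lr * g
    return x

# Minimize f(x) = (x - 3)^2; the exact gradient is 2 * (x - 3).
x_star = noisy_gradient_descent(lambda x: 2 * (x - 3), x0=10.0)
```

Because the iteration is contractive, the effect of each individual noisy evaluation decays geometrically, so the iterate settles in a small neighborhood of the true minimizer at x = 3 rather than diverging.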

Recent results: We have developed a comprehensive framework for the analysis and design of iterative algorithms in the presence of computational noise, using a variety of tools from probability theory, combinatorics, and information theory. Applications include several LDPC iterative decoders, signal processing, image denoising, and machine learning algorithms. We have proposed algorithm-assisted techniques for recovery from hardware noise. Using tools from source and channel coding, we have also developed novel message-shaping methods that minimize the effects of hardware noise in an algorithm-aware way, with applications to approximate computing.
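A Gallager-style bit-flipping decoder operating on faulty hardware, of the kind analyzed in this framework, can be sketched as follows. The small parity-check matrix and the fault model (each parity-check computation independently fails with some probability) are purely illustrative assumptions, not the exact decoders from the publications.

```python
import random

# A small illustrative parity-check matrix (rows = checks, cols = bits).
H = [
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 0, 0, 1, 1],
    [0, 0, 1, 1, 0, 1],
]

def noisy_bit_flipping(H, word, gate_error=0.01, max_iters=20, seed=1):
    """Bit-flipping decoding in which every parity-check evaluation is
    computed by an unreliable circuit that fails with prob. `gate_error`."""
    rng = random.Random(seed)
    word = list(word)
    n_checks, n_bits = len(H), len(H[0])
    for _ in range(max_iters):
        # Evaluate each parity check with a faulty XOR circuit.
        syndrome = []
        for row in H:
            s = sum(h * b for h, b in zip(row, word)) % 2
            if rng.random() < gate_error:
                s ^= 1  # hardware fault flips the check result
            syndrome.append(s)
        if not any(syndrome):
            break  # all checks satisfied
        # Flip each bit involved in more unsatisfied than satisfied checks.
        for j in range(n_bits):
            unsat = sum(syndrome[i] for i in range(n_checks) if H[i][j])
            deg = sum(H[i][j] for i in range(n_checks))
            if 2 * unsat > deg:
                word[j] ^= 1
    return word
```

For example, starting from the all-zero codeword with a single channel error in the first bit, the decoder recovers the codeword despite the occasionally faulty check computations, illustrating how the algorithm's redundancy absorbs both channel and hardware noise.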

Current research includes applying noisy computing to deep-space systems, where it offsets the high cost of device shielding, and to fast in-memory computing.

From hardware modeling to robust applications using new mathematical tools

Representative recent publications: