Computer Science

Recent Submissions

  • Item
    Exploring Functional Graph Representations for Max Flow: The Abstraction of Functional Data Structure Design Techniques
    (2023) Weinstock, Harrison; Wonnacott, David G.
    This paper investigates the techniques involved in the design of functional data structures and how they are realized in a functional implementation of the Ford-Fulkerson max-flow method. Specifically, we focus on functional graph representations and how some of the initial issues can be handled safely without affecting user-facing code. We examine the difficulty the functional paradigm presents in working with cyclic structures and how this shows up in graph traversals. We review Erwig's inductive graphs and Mokhov's algebraic graphs as two potential solutions to the problem of handling cyclic structures in a way that allows easy traversal. Finally, we analyze functional implementations of the max-flow algorithm to illustrate how the tools for abstraction used in the data structure design carry over to an important and common algorithm.
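    For orientation, the following is a minimal sketch of the Ford-Fulkerson method itself (the Edmonds-Karp variant, which finds augmenting paths by breadth-first search). It is an imperative Python sketch of the underlying algorithm, not the thesis's functional implementation, and the dense capacity matrix is an assumed representation.

        from collections import deque

        def max_flow(capacity, s, t):
            # capacity: dense n x n matrix; capacity[u][v] is the edge capacity.
            n = len(capacity)
            flow = [[0] * n for _ in range(n)]
            total = 0
            while True:
                # BFS for a shortest augmenting path in the residual graph.
                parent = [None] * n
                parent[s] = s
                queue = deque([s])
                while queue and parent[t] is None:
                    u = queue.popleft()
                    for v in range(n):
                        if parent[v] is None and capacity[u][v] - flow[u][v] > 0:
                            parent[v] = u
                            queue.append(v)
                if parent[t] is None:      # no augmenting path remains,
                    return total           # so the current flow is maximum
                # Bottleneck residual capacity along the path found.
                bottleneck, v = float("inf"), t
                while v != s:
                    u = parent[v]
                    bottleneck = min(bottleneck, capacity[u][v] - flow[u][v])
                    v = u
                # Augment: push bottleneck units along the path.
                v = t
                while v != s:
                    u = parent[v]
                    flow[u][v] += bottleneck
                    flow[v][u] -= bottleneck
                    v = u
                total += bottleneck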
  • Item
    Evaluating the Effect of Training Data on Bias in Generative Adversarial Networks
    (2023) Trotter, Ryan; Grissom, Alvin
    With the rising popularity of generative adversarial networks (GANs) for facial image generation, it is becoming increasingly important to ensure that these models are not biased. Given the many possible uses of networks that produce remarkably realistic face images, the potential for bias in these models to cause harm is substantial. While StyleGAN is very effective when trained on FFHQ, the unbalanced nature of this dataset raises concerns. In this paper, we explore how GANs work, survey past research on bias in GAN image generation, and consider possible alternatives for reducing bias in these models. We also examine new results on bias in the GAN discriminator, which suggest new research directions for mitigating bias in GANs.
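    One common way bias in a face generator is quantified is to sample many images and compare the distribution of a perceived attribute against a reference. A minimal sketch of that protocol follows; generator and attribute_classifier are hypothetical placeholder callables, not any specific model's API.

        import numpy as np

        def attribute_balance(generator, attribute_classifier,
                              n_samples=10_000, batch_size=100, latent_dim=512):
            # Draw latent codes, generate faces, and tally the attribute label
            # predicted for each generated image.
            counts = {}
            for _ in range(n_samples // batch_size):
                z = np.random.randn(batch_size, latent_dim)   # latent codes
                images = generator(z)                          # batch of faces
                for label in attribute_classifier(images):     # label per image
                    counts[label] = counts.get(label, 0) + 1
            total = sum(counts.values())
            # A perfectly balanced generator would return near-equal shares.
            return {label: c / total for label, c in counts.items()}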
  • Item
    Pre-training and Fine-tuning BERT: Energy and Carbon Considerations
    (2023) Wang, Xiaorong; Friedler, Sorelle
    Artificial intelligence is becoming more powerful and making more decisions for people than anyone would have expected. Its impact has been significant, and it is important to evaluate machine learning systems on more than just classification accuracy. For example, some algorithms discriminate against certain populations; in response, it is worth the effort to build fairness-aware systems. As models grow more complicated, humans sometimes cannot understand why a model behaves the way it does; because these systems make important decisions for people, it is important to understand how they reach their conclusions, which leads to the measure of interpretability. Furthermore, more complicated models usually require longer running times and therefore more computation, and the energy required for training and inference has a significant environmental impact. Consequently, it is crucial to account for environmental costs as well. In this thesis, we delve deeper into evaluating the energy usage of Natural Language Processing (NLP) tasks. Despite the popularity of model fine-tuning in the NLP community, existing work on quantifying energy costs and associated carbon emissions has mostly focused on pre-training of language models (e.g., Strubell et al. 2019; Patterson, Gonzalez, Le, et al. 2021; Luccioni et al. 2022). For this reason, we investigate this prevalent yet understudied workload by comparing the energy costs of pre-training and fine-tuning, and by comparing energy costs across different fine-tuning tasks, datasets, and hardware infrastructure settings.
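    A minimal sketch of how a fine-tuning run's energy and carbon cost can be instrumented, using the codecarbon package as one representative tool; fine_tune here is a placeholder for the actual training loop, and this is not necessarily the thesis's measurement setup.

        from codecarbon import EmissionsTracker

        def fine_tune():
            """Placeholder for the actual BERT fine-tuning loop."""
            ...

        tracker = EmissionsTracker(project_name="bert-finetuning")
        tracker.start()
        try:
            fine_tune()
        finally:
            emissions_kg = tracker.stop()   # estimated kg CO2-eq for the run
        print(f"estimated emissions: {emissions_kg:.4f} kg CO2-eq")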
  • Item
    Effective Resistance Metric for Networks: Graph Formalism, Algorithms, and Granular Intuition
    (2023) Tseytlin, Ivan; Lindell, Steven; Brzinski, Theodore
    This thesis builds on the observation that local features of networks are insufficient for analyzing complex networks. It motivates the need for multi-scale metrics by drawing intuition from the physical problem of granular packings. It introduces the effective resistance metric on graphs and surveys analytical solutions and efficient algorithms for computing it. The work explores properties of the metric, bringing them back into the motivating context of granular packings. The thesis describes an algorithm that computes the effective resistance between every pair of points in the network in cubic time, then an algorithm that finds the effective resistance between two given points in quadratic time. It concludes with a discussion of approximate algorithms for these two tasks.
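    The cubic-time all-pairs computation follows from the Moore-Penrose pseudoinverse of the graph Laplacian, via the standard identity R(u, v) = L+[u,u] + L+[v,v] - 2 L+[u,v]. A minimal NumPy sketch, assuming a connected undirected graph given as a dense adjacency matrix (not necessarily the thesis's exact algorithm):

        import numpy as np

        def effective_resistance(A):
            # A: dense symmetric adjacency matrix of a connected graph.
            L = np.diag(A.sum(axis=1)) - A   # graph Laplacian
            Lp = np.linalg.pinv(L)           # pseudoinverse: the O(n^3) step
            d = np.diag(Lp)
            # R(u, v) = Lp[u,u] + Lp[v,v] - 2 Lp[u,v], for all pairs at once.
            return d[:, None] + d[None, :] - 2 * Lp

    For a single pair, one linear solve of L x = e_u - e_v suffices, with R(u, v) = x[u] - x[v]; on sparse networks, iterative solvers bring this well below cubic cost.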
  • Item
    Resilience of Surface Codes and Quantum Error Correction
    (2023) Tan, Shi Jie Samuel; Lindell, Steven; Grin, Daniel
    The standard circuit-level noise model in quantum error correction is typically used to model the noise experienced by qubits during quantum computation. However, error bursts, in which noise strikes with high probability within a single timestep, can arise when qubits are exposed to cosmic rays or during the transduction required to transmit qubits over a quantum network. In this work, we use Stim and PyMatching's Minimum Weight Perfect Matching (MWPM) decoder to simulate and analyze the performance of the surface code against error bursts. We describe a model of the logical error rate that accounts for error bursts and demonstrate the effect of an error burst on the logical error rate. We also estimate the accuracy threshold for error bursts and produce a phase diagram of the accuracy threshold with respect to the error burst rate and physical error rate.
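    A minimal sketch of the kind of Stim + PyMatching memory experiment the thesis builds on, under uniform circuit-level noise only; the distance, rounds, and error rates are illustrative, and the error-burst injection studied in the thesis is not modeled in this baseline.

        import numpy as np
        import pymatching
        import stim

        # Rotated surface-code memory experiment under circuit-level noise.
        circuit = stim.Circuit.generated(
            "surface_code:rotated_memory_z",
            distance=5,
            rounds=5,
            after_clifford_depolarization=0.005,
            before_measure_flip_probability=0.005,
            after_reset_flip_probability=0.005,
        )
        # Build an MWPM decoder from the circuit's detector error model.
        dem = circuit.detector_error_model(decompose_errors=True)
        matcher = pymatching.Matching.from_detector_error_model(dem)

        # Sample detection events, decode, and compare predictions against
        # the true logical observable flips to estimate the logical error rate.
        shots = 100_000
        detectors, observables = circuit.compile_detector_sampler().sample(
            shots, separate_observables=True)
        predictions = matcher.decode_batch(detectors)
        logical_error_rate = np.mean(np.any(predictions != observables, axis=1))
        print(f"logical error rate: {logical_error_rate:.5f}")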