Data Lab Research Topics

The Data Lab pursues training and skill development of graduate students in the following subfields of Machine Learning:

Deep Learning

  • Generative AI for Vision: Diffusion Models and GANs. Generative AI refers to models that learn the characteristics of existing data and use them to create new, realistic artifacts, including images, video, music, speech, text, software code, and product designs. Generative adversarial networks (GANs) pit two neural networks against each other, a generator that produces synthetic samples and a discriminator that tries to tell them apart from real data; they are widely used for image, video, and voice generation. Diffusion models learn by progressively adding noise to training images and then learning to reverse that corruption, which lets them generate new, diverse, high-resolution images that resemble the original data. A minimal sketch of the diffusion training objective appears after this list.

  • Large Language Models. A large language model (LLM) is a type of artificial intelligence (AI) model that uses deep learning techniques and massive datasets to understand, summarize, generate, and predict new content. LLMs are built from artificial neural networks and are typically trained with self-supervised or semi-supervised learning. They can recognize, summarize, translate, predict, and generate text and other forms of content based on knowledge gained from massive training corpora, and they are used for a variety of natural language processing (NLP) tasks, such as generating and classifying text, answering questions conversationally, and translating between languages. They are especially valuable in fields that produce large volumes of text, such as healthcare and business. Examples of large language models include GPT-3 and GPT-4 from OpenAI, LLaMA from Meta, and PaLM 2 from Google. A short usage sketch with a small pretrained model appears after this list.

  • NeRF: Neural Radiance Fields. Neural Radiance Fields (NeRF) is a technique that uses machine learning to build a 3D representation of an object or scene from a set of 2D images. The input can be provided as a Blender model or a static set of images, and NeRF renders the complete scene, synthesizing novel viewpoints by interpolating between the input views. It can generate highly realistic images with a level of detail that is difficult to achieve with traditional methods, and it can work from a variety of input sources, including photographs and sketches. NeRF has recently become a significant development in the field of computer vision. A sketch of the underlying radiance-field network appears after this list.

  • Object Detection and Segmentation. Object detection is a computer vision technique that identifies and localizes objects in an image or video, predicting a bounding box and a class label for each object; in autonomous driving, for example, a detector picks out pedestrians and vehicles. Segmentation goes a step further and labels individual pixels, so objects are delineated by exact outlines rather than boxes. A minimal detection sketch using a pretrained model appears after this list.

  • Physics Informed Neural Networks - PINN. Physics-informed neural networks (PINNs) are a deep learning framework that incorporates knowledge of physical laws into neural network models, typically by adding the residuals of the governing equations to the training loss. These networks are particularly useful for solving problems where physical principles play a crucial role. A toy example on a simple ordinary differential equation appears after this list.

  • Vision Transformers - ViT. A vision transformer (ViT) is a deep learning model that applies the transformer architecture to images for tasks such as image recognition, classification, and object detection. The model splits an image into fixed-size patches, embeds each patch as a token, processes the token sequence with a transformer encoder, and aggregates the result for classification or detection. A patch-embedding sketch appears after this list.
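
Diffusion models (from the Generative AI item above): a minimal sketch of the DDPM-style training objective, assuming a toy denoiser over flattened 8x8 "images"; real image models use a U-Net backbone and a learned timestep embedding, and all sizes below are illustrative.

```python
# Minimal sketch of the diffusion training objective: noise clean data forward,
# then train a network to predict the added noise. Toy sizes, not a real image model.
import torch
import torch.nn as nn

T = 1000                                    # number of diffusion steps
betas = torch.linspace(1e-4, 0.02, T)       # noise schedule
alpha_bars = torch.cumprod(1.0 - betas, 0)  # cumulative signal-retention factors

# Toy denoiser: predicts the noise added to a flattened 8x8 "image" (64 values).
eps_model = nn.Sequential(nn.Linear(64 + 1, 128), nn.ReLU(), nn.Linear(128, 64))

def diffusion_loss(x0):
    """Forward-noise clean data x0, then score the model's noise prediction."""
    t = torch.randint(0, T, (x0.shape[0],))                # random timestep per sample
    a_bar = alpha_bars[t].unsqueeze(1)                     # (batch, 1)
    eps = torch.randn_like(x0)                             # Gaussian noise
    x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * eps     # noised sample
    t_feat = (t.float() / T).unsqueeze(1)                  # crude timestep feature
    eps_pred = eps_model(torch.cat([x_t, t_feat], dim=1))
    return nn.functional.mse_loss(eps_pred, eps)

loss = diffusion_loss(torch.randn(16, 64))  # fake batch of 16 flattened images
loss.backward()
print(loss.item())
```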
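
Large language models: a minimal usage sketch, assuming the Hugging Face transformers library is installed; GPT-2 stands in here for the much larger models named above (GPT-3/4, LLaMA, PaLM 2).

```python
# Generate text with a small pretrained language model (downloads GPT-2 on first use).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
prompt = "Machine learning research in a data lab focuses on"
result = generator(prompt, max_new_tokens=40, num_return_sequences=1)
print(result[0]["generated_text"])
```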
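
Neural Radiance Fields: a minimal sketch of the core network, an MLP that maps an encoded 3D point plus a viewing direction to color and density; a full NeRF additionally samples points along camera rays and composites them with volume rendering, and the layer sizes and encoding below are illustrative.

```python
# Sketch of the NeRF field: encoded (x, y, z) + view direction -> (RGB, density).
import math
import torch
import torch.nn as nn

def positional_encoding(x, n_freqs=6):
    """Map coordinates to sines/cosines at increasing frequencies (as in NeRF)."""
    feats = [x]
    for k in range(n_freqs):
        feats += [torch.sin(2**k * math.pi * x), torch.cos(2**k * math.pi * x)]
    return torch.cat(feats, dim=-1)

in_dim = 3 + 3 * 2 * 6           # encoded point: raw coords + 6 frequency bands
dir_dim = 3                      # raw viewing direction, kept simple here
field = nn.Sequential(
    nn.Linear(in_dim + dir_dim, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 4),           # outputs: RGB (3) + volume density (1)
)

points = torch.rand(1024, 3)     # stand-in for points sampled along camera rays
view_dirs = torch.randn(1024, 3)
view_dirs = view_dirs / view_dirs.norm(dim=-1, keepdim=True)
out = field(torch.cat([positional_encoding(points), view_dirs], dim=-1))
rgb, sigma = torch.sigmoid(out[:, :3]), nn.functional.relu(out[:, 3])
print(rgb.shape, sigma.shape)    # (1024, 3) and (1024,)
```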
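
Object detection: a minimal sketch using a pretrained Faster R-CNN from torchvision, assuming torchvision is installed; the model returns bounding boxes, class labels, and confidence scores for each detected object.

```python
# Run a COCO-pretrained detector on an image tensor and inspect its predictions.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()  # downloads weights on first use
image = torch.rand(3, 480, 640)            # stand-in for a real RGB image in [0, 1]
with torch.no_grad():
    prediction = model([image])[0]         # list of images in, list of dicts out
print(prediction["boxes"].shape, prediction["labels"][:5], prediction["scores"][:5])
```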
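
Physics-informed neural networks: a toy sketch for the ODE du/dt = -u with u(0) = 1, where the equation residual (computed with autograd) and the initial condition both enter the loss; the same recipe extends to PDEs with more collocation points and boundary terms.

```python
# Train a small network so that it satisfies du/dt + u = 0 and u(0) = 1.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 32), nn.Tanh(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    t = torch.rand(128, 1, requires_grad=True)           # collocation points in [0, 1]
    u = net(t)
    du_dt = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
    physics_residual = du_dt + u                          # zero for the exact solution
    ic_residual = net(torch.zeros(1, 1)) - 1.0            # enforce u(0) = 1
    loss = (physics_residual**2).mean() + (ic_residual**2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Compare the learned solution at t = 1 with the exact value exp(-1).
print(net(torch.tensor([[1.0]])).item(), "vs exact", torch.exp(torch.tensor(-1.0)).item())
```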
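
Vision transformers: a minimal sketch of the ViT pipeline, splitting an image into patches, embedding them as tokens, adding a class token and position embeddings, and classifying from the encoder output; the dimensions are illustrative rather than a published ViT configuration.

```python
# Patchify an image, run a transformer encoder over the tokens, classify from the class token.
import torch
import torch.nn as nn

patch, dim = 16, 192
to_patches = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)   # patchify + linear embed
cls_token = nn.Parameter(torch.zeros(1, 1, dim))
pos_embed = nn.Parameter(torch.zeros(1, (224 // patch) ** 2 + 1, dim))
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True), num_layers=4
)
head = nn.Linear(dim, 1000)                                        # classification head

x = torch.rand(2, 3, 224, 224)                                     # batch of 2 images
tokens = to_patches(x).flatten(2).transpose(1, 2)                  # (2, 196, dim)
tokens = torch.cat([cls_token.expand(2, -1, -1), tokens], dim=1) + pos_embed
logits = head(encoder(tokens)[:, 0])                               # read out the class token
print(logits.shape)                                                # torch.Size([2, 1000])
```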

Other topics in Machine Learning

  • Federated Learning. Federated learning, also known as collaborative learning, is a machine learning technique in which a model is trained across multiple independent sessions or devices, each using its own local dataset; only model updates, not the raw data, are shared with the central server. A minimal federated-averaging sketch follows.
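
A minimal sketch of federated averaging (FedAvg), in which simulated clients train local copies of a model on their own data and the server averages only the resulting weights; the model, data, and client count here are illustrative.

```python
# Simulate FedAvg: local training on private data, then server-side weight averaging.
import copy
import torch
import torch.nn as nn

global_model = nn.Linear(10, 1)
clients = [(torch.randn(50, 10), torch.randn(50, 1)) for _ in range(3)]  # private datasets

for rnd in range(5):
    client_states = []
    for x, y in clients:
        local = copy.deepcopy(global_model)            # client receives the current weights
        opt = torch.optim.SGD(local.parameters(), lr=0.1)
        for _ in range(10):                            # local training on local data only
            opt.zero_grad()
            nn.functional.mse_loss(local(x), y).backward()
            opt.step()
        client_states.append(local.state_dict())
    # Server aggregates: average each parameter across clients (no raw data is shared).
    avg_state = {
        k: torch.stack([s[k] for s in client_states]).mean(dim=0)
        for k in client_states[0]
    }
    global_model.load_state_dict(avg_state)

print({k: v.shape for k, v in global_model.state_dict().items()})
```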

Updated: 09/29/2023. Carlos Lizárraga.

University of Arizona. Data Science Institute, 2023.