👋 Welcome to Thang’s Blog

  • 👏 Hi there, I’m Thang Nguyen-Duc. I am interested in Deep Learning and its applications, such as Natural Language Processing, Generative Models, and Error Detection. My background spans math, programming, AI, algorithms, and teaching.
  • 🌱 Outside my work, I teach algorithms and programming languages such as C++ and Python, and I find immense joy in witnessing my students’ growth.
  • 👻 This blog documents my learning process and is also where I share knowledge with everyone.

[LLM 02] Data Cleaning and Tokenization

This article summarizes knowledge from Large Language Models: A Survey [1]. It continues the series begun in LLM 01 - Large Language Model Families.

1. Data Cleaning

Data quality is pivotal to the performance of language models. Effective data cleaning techniques, such as filtering and deduplication, can significantly enhance model performance.

1.1 Data Filtering

Data filtering aims to enhance the quality of training data and improve the effectiveness of the trained language models....
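As a taste of how deduplication works in practice, below is a minimal sketch of exact-match deduplication (my own illustration, not code from the survey; real pipelines typically add fuzzy matching such as MinHash):

```python
# Minimal exact-match deduplication sketch: hash a normalized form of each
# document and keep only the first occurrence. Illustrative only.
import hashlib

def deduplicate(documents):
    """Return documents with exact duplicates removed (first occurrence kept)."""
    seen = set()
    unique = []
    for doc in documents:
        digest = hashlib.sha256(doc.strip().lower().encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(doc)
    return unique

corpus = ["The cat sat.", "the cat sat.", "A different sentence."]
print(deduplicate(corpus))  # ['The cat sat.', 'A different sentence.']
```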

June 10, 2024 · 10 min · Thang Nguyen-Duc

My research aims to improve image and text quality in Stable Diffusion

Thank you all for visiting my blog. Today, I am excited to introduce some showcases demonstrating how my team and I have enhanced the image quality of Stable Diffusion. Additionally, I’m pleased to share that we have significantly improved Stable Diffusion’s text generation capabilities within images, achieving both beauty and stability. Our current research is being applied to develop an image-generating product that automatically creates ads from product URLs. Below are some of our showcases....

June 1, 2024 · 1 min · Thang Nguyen-Duc

[LLM 01] Large Language Model Overview

This article summarizes knowledge from Large Language Models: A Survey [1]. In this section, I summarize the Large Language Model families.

1. Basic Architecture

The invention of the Transformer architecture marks another milestone in the development of LLMs. Self-attention computes, in parallel, an “attention score” for every word in a sentence or document, modeling the influence each word has on the others. Transformers therefore allow for much more parallelization than RNNs....
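To make the mechanism concrete, here is a minimal numpy sketch of scaled dot-product self-attention (my own illustration with made-up shapes, not code from the survey). The full score matrix is computed for all positions at once, which is what enables the parallelization mentioned above:

```python
# Scaled dot-product self-attention in numpy. Every position attends to
# every other position in one matrix multiply, so the whole sequence is
# processed in parallel (unlike an RNN's sequential recurrence).
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """X: (seq_len, d_model); Wq, Wk, Wv: (d_model, d_k) projections."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])          # (seq_len, seq_len) attention scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                               # weighted mix of all positions

rng = np.random.default_rng(0)
seq_len, d_model, d_k = 5, 8, 4
X = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_k)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)  # (5, 4)
```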

May 1, 2024 · 8 min · Thang Nguyen-Duc

Gradient-based methods in error detection

I strongly encourage you to explore my earlier article, Understanding the Influence Function, as it serves as a valuable foundation for comprehending the content presented in this piece.

What is the error detection problem?

The rapid growth of the internet has caused data to grow exponentially, posing numerous issues. Deep learning algorithms become less effective when big data is mislabeled or contains many errors. Current studies focus solely on improving the model rather than detecting data issues....
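To give a flavor of the gradient-based idea, here is a simplified sketch under my own assumptions (not necessarily the exact method of the article): near a trained model, mislabeled examples tend to produce unusually large per-example loss gradients, so ranking examples by gradient norm can surface suspicious labels. For logistic regression the per-example gradient has a closed form:

```python
# Score each training example by the norm of its loss gradient: for
# logistic regression, grad_i = (sigmoid(w . x_i) - y_i) * x_i, so a
# confidently wrong label yields a large residual and a large norm.
import numpy as np

def gradient_norm_scores(X, y, w):
    """Per-example gradient norms of the logistic loss at weights w."""
    p = 1.0 / (1.0 + np.exp(-X @ w))        # predicted probabilities
    grads = (p - y)[:, None] * X            # per-example gradients, (n, d)
    return np.linalg.norm(grads, axis=1)    # large norm => suspicious label

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
w = np.array([2.0, -1.0, 0.5])              # stand-in for trained weights
y = (X @ w > 0).astype(float)
y[:5] = 1 - y[:5]                           # inject five label errors
scores = gradient_norm_scores(X, y, w)
print(np.argsort(-scores)[:5])              # flipped indices (0-4) should dominate
```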

November 25, 2023 · 9 min · Thang Nguyen-Duc

What is Influence Function?

In this article, I review the influence function - a classic technique from robust statistics - and several of its variants, which trace a model’s prediction through the learning algorithm back to its training data, thereby identifying the training points most responsible for a given prediction.

Basics of the influence function

Consider a prediction problem from some input space \(\mathcal{X}\) (e.g., images, text, \(\ldots\)) to an output space \(\mathcal{Y}\) (e.g., labels)....
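For concreteness, the standard result in this line of work (the formulation popularized by Koh and Liang, 2017) measures the effect of up-weighting a training point \(z\) on the loss at a test point \(z_{\text{test}}\):

\[
\mathcal{I}_{\text{up,loss}}(z, z_{\text{test}}) = -\nabla_\theta L(z_{\text{test}}, \hat{\theta})^\top H_{\hat{\theta}}^{-1} \nabla_\theta L(z, \hat{\theta}),
\qquad
H_{\hat{\theta}} = \frac{1}{n} \sum_{i=1}^{n} \nabla_\theta^2 L(z_i, \hat{\theta}),
\]

where \(\hat{\theta}\) is the trained parameter vector and \(H_{\hat{\theta}}\) is the Hessian of the average training loss; in practice the inverse-Hessian-vector product is approximated (e.g., with conjugate gradients or stochastic estimation) rather than computed explicitly.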

November 24, 2023 · 7 min · Thang Nguyen-Duc